Follow Up Boss's API Pagination Silently Drops Records (And How to Fix It)

Published on March 19, 2026

I was building a sync pipeline that pulls contact data from Follow Up Boss into a database.

But there was a problem. A specific contact I knew existed in FUB wasn't showing up in the synced data.

A quick search against the API confirmed the record was there. So why wasn't my sync picking it up?

What followed was a two-day investigation that uncovered an undocumented limitation in FUB's API, one that silently drops the vast majority of records during pagination.

Here's what I found, and the one-line fix hiding in a different page of the docs.

The Setup

I was paginating through FUB's /people endpoint using cursor-based pagination. The approach their own docs recommend. My code followed every nextLink until the API stopped returning one. Pretty standard stuff.

For smaller accounts, this works perfectly. But for accounts with hundreds of thousands of contacts, it doesn't.

The Symptom

The FUB account I was syncing had hundreds of thousands of contacts. The API confirmed this. Every response included a total of hundreds of thousands in its metadata. But my sync was only capturing a small fraction of that. No errors. No timeouts. The API just... stopped giving me nextLink values, as if I'd reached the end.

I hadn't.

The Investigation

My first thought was sort order. FUB defaults to sort=created (descending), which means the cursor traverses records by creation date. I switched to sort=id, which gave me a stable, sequential ordering.

That helped. The number of records I could retrieve jumped significantly. Better, but still a fraction of the actual total.

Next hypothesis: payload size. I was requesting fields=allFields, which returns every field on a person record, including nested arrays of emails, phones, addresses, and all custom fields. Maybe FUB's cursor was hitting a response size ceiling.

I trimmed down to only the specific fields my sync actually uses: 16 fields instead of all of them. Same result. The cursor stopped at exactly the same point.

Then I tried fields=id, just the ID, nothing else. Every single record came back.
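That experiment boils down to a small probe that counts pages until the cursor runs out. A minimal sketch (fetchBatch is a stand-in for whatever HTTP wrapper you use; the _metadata.nextLink shape matches the responses shown later in this post):

```javascript
// Count how many pages a given starting URL yields before the API
// stops returning a nextLink. Running this with different `fields`
// values isolates which request shapes hit the cursor limit.
async function countPages(fetchBatch, firstUrl) {
  let url = firstUrl;
  let pages = 0;
  while (url) {
    const batch = await fetchBatch(url);
    pages += 1;
    url = batch._metadata.nextLink;
  }
  return pages;
}
```

Comparing the page count for fields=id against the count for any other field set makes the cliff obvious.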

The Findings

The cursor stopped at roughly 950 pages whenever any real fields were requested, regardless of how many there were. At limit=100, that caps retrieval at around 95,000 records, no matter how large the account. So it's not a payload size issue: FUB has an undocumented server-side cursor depth limit, and only fields=id bypasses it.

This is a silent data loss bug. The API tells you the real total, provides a mechanism to page through the records, and then quietly stops after returning only a fraction of them. No error, no warning.

What I Almost Built

At this point, I was planning a two-pass sync: first paginate with fields=id to collect every contact ID (since that's the one mode that reaches the end), then re-fetch the full records in batches of IDs.

It would have worked. But it would have meant twice the API calls, more complexity, and a slower sync.
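For the record, here's roughly what that two-pass version would have looked like. This is a hypothetical sketch, not what shipped; fetchIdPage and fetchPeopleByIds stand in for the actual HTTP calls:

```javascript
// Pass 1: walk the cursor with fields=id (the only mode that reaches
// the end of large accounts), collecting every person ID.
// Pass 2: re-fetch the full records in fixed-size ID batches.
async function* twoPassSync(fetchIdPage, fetchPeopleByIds, firstUrl, batchSize = 100) {
  const ids = [];
  let url = firstUrl;
  while (url) {
    const page = await fetchIdPage(url);
    ids.push(...page.people.map((p) => p.id));
    url = page._metadata.nextLink;
  }
  for (let i = 0; i < ids.length; i += batchSize) {
    yield await fetchPeopleByIds(ids.slice(i, i + batchSize));
  }
}
```

Every record gets fetched twice, once as a bare ID and once in full, which is exactly the overhead the eventual fix avoids.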

The Actual Fix

While researching whether the batch ID approach was viable, I stumbled on FUB's Common Filters and Parameters documentation page. It describes idGreaterThan and idLessThan: simple query filters, not cursor-based pagination.

GET /v1/people?fields=allFields&sort=id&limit=100&idGreaterThan=0

This doesn't use FUB's cursor mechanism at all. Each request is stateless. You just track the last ID you saw and pass it as idGreaterThan on the next request. No server-side cursor to exhaust.

It returns every record.

The fix was replacing this:

// Follow the cursor (breaks after ~950 pages with real fields)
let nextBatchUrl = `${API_URL}/people?limit=100&fields=allFields&sort=id`;

while (nextBatchUrl !== null) {
  const batch = await fetchBatch(nextBatchUrl);
  yield batch.people;
  nextBatchUrl = batch._metadata.nextLink;
}

With this:

let lastSeenId = 0;

while (true) {
  const params = new URLSearchParams({
    limit: "100",
    fields: "allFields",
    sort: "id",
    idGreaterThan: lastSeenId.toString(),
  });

  const batch = await fetchBatch(`${API_URL}/people?${params}`);

  if (batch.people.length === 0) break;

  lastSeenId = batch.people.at(-1).id;
  yield batch.people;
}

Same number of API calls. Same data. No silent truncation.

Why This Matters

If you're integrating with Follow Up Boss and paginating through any endpoint with more than a few thousand records, you may be missing data. The API won't tell you. Your code will run cleanly. You'll just have an incomplete dataset and no indication that anything went wrong.

This affects any integration that does full syncs: CRM migration tools, analytics platforms, data warehouses, backup services. The total field in FUB's response metadata will confidently tell you the real count, while the cursor silently stops returning records far short of it.
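One cheap defense for any full-sync integration: after the sync finishes, compare the number of records you actually captured against the total the API reports, and fail loudly on a mismatch. A sketch (the tolerance is a judgment call I'm assuming here, since contacts can be added or deleted mid-sync):

```javascript
// Fail the sync if we captured meaningfully fewer records than the API
// claims exist. A small tolerance absorbs churn during the sync; silent
// cursor truncation (capturing, say, a third of them) still trips it.
function checkSyncComplete(fetchedCount, reportedTotal, tolerance = 0.01) {
  const missing = reportedTotal - fetchedCount;
  if (missing > reportedTotal * tolerance) {
    throw new Error(
      `Sync incomplete: got ${fetchedCount} of ${reportedTotal} records`
    );
  }
}
```

Had a check like this been in place, the truncation would have surfaced as a failed run instead of a missing contact noticed weeks later.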

The Debugging Lesson

I spent two days on this because I kept optimizing within the same mechanism, tweaking sort order, reducing fields, testing payload sizes. Every experiment gave me new data, so it felt like progress. But I was converging on a diagnosis, not a solution.

The solution came from a different page of the docs entirely. Not the pagination page I'd read ten times, but the common filters page I'd never opened. Twenty minutes of reading the full API surface would have revealed idGreaterThan on day one.

The general principle: when a mechanism silently fails, bypass it. Don't tune it. And before you start debugging, map out all the tools available to you, not just the one you're already using.


