# New Jobs

Fantastic.jobs checks for new jobs every hour. Two endpoints are available:

| Endpoint | Sources | Refresh cadence |
| --- | --- | --- |
| [`active-ats`](/api/new-jobs#ats-jobs) | 54 ATS platforms (company career pages) | Hourly |
| [`active-jb`](/api/new-jobs#job-board-jobs) | LinkedIn, Wellfound, Y Combinator | LinkedIn hourly (select countries); others every few hours |

## `active-ats`

All ATS platforms are polled hourly to discover new jobs. The ATS platforms we currently index are:

`adp`, `applicantpro`, `ashby`, `bamboohr`, `breezy`, `careerplug`, `comeet`, `csod`, `dayforce`, `dover`, `eightfold`, `firststage`, `freshteam`, `gem`, `gohire`, `greenhouse`, `hibob`, `hirebridge`, `hirehive`, `hireology`, `hiringthing`, `icims`, `isolved`, `jazzhr`, `jobvite`, `join.com`, `kula`, `lever.co`, `manatal`, `oraclecloud`, `pageup`, `paradox`, `paycom`, `paycor`, `paylocity`, `personio`, `phenompeople`, `pinpoint`, `polymer`, `recooty`, `recruitee`, `rippling`, `rival`, `smartrecruiters`, `successfactors`, `taleo`, `teamtailor`, `trakstar`, `trinet`, `ultipro`, `werecruit`, `workable`, `workday`, `zoho`

New ATS platforms are added regularly. Keep an eye on our [changelog](/changelog) to stay up to date.

## `active-jb`

For job boards, LinkedIn jobs are indexed hourly in the following English-speaking countries: United States, United Kingdom, Canada, New Zealand, Australia, Singapore, and Ireland.

Other major countries are checked every few hours, and remaining countries are updated less frequently.

Job board sources available via the `source` parameter: `linkedin`, `wellfound`, `ycombinator`.

> **Note on expiry tracking:** only `linkedin` listings are re-checked for expiration and surfaced via [`/expired-jb`](/api/expired-jobs#expired-job-board-jobs). `wellfound` and `ycombinator` listings are never flagged as expired - if you ingest them you'll need your own freshness logic (e.g. age-out anything older than N days).
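
A minimal age-out sketch for those two sources, assuming each stored job carries a `source` field and an ISO-8601 `date_posted` field (the field names are assumptions; adapt them to your own schema):

```python
from datetime import datetime, timedelta, timezone

def drop_stale(jobs, max_age_days=30):
    """Age out listings from sources without expiry tracking.

    Keeps `linkedin` rows untouched (their expiry is tracked via
    /expired-jb) and drops `wellfound` / `ycombinator` rows older
    than `max_age_days`.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    fresh = []
    for job in jobs:
        if job["source"] == "linkedin":
            fresh.append(job)
            continue
        # Normalise a trailing "Z" so fromisoformat accepts it on older Pythons
        posted = datetime.fromisoformat(job["date_posted"].replace("Z", "+00:00"))
        if posted >= cutoff:
            fresh.append(job)
    return fresh
```

Run this on each ingest pass; tune `max_age_days` to how long listings on those boards typically stay live.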

## `time_frame`

| Value | Intended use | Window | Ingestion delay |
| --- | --- | --- | --- |
| `1h` | Hourly polling | Rolling 1-hour window | 1 hour |
| `24h` | Daily polling | Rolling 24-hour window | 1 hour |
| `7d` | Weekly batch / short backfill | Rolling 7-day window, refreshed every 1–3 minutes | ~45 minutes |
| `6m` | Full backfill | Rolling 6-month window, refreshed every 1–3 minutes | ~45 minutes |

To give your users an edge, we recommend using `1h` to keep your feed as fresh as possible. If hourly polling isn't practical, `24h` works well - just make sure to call it **during the same one-hour window every day** to avoid pulling duplicate jobs.

Both `1h` and `24h` serve jobs with a one-hour enrichment delay (UTC):

- `1h` — if you call the endpoint at 09:15 you receive all jobs indexed between **07:00 and 08:00**.
- `24h` — if you call on 2026-01-02 at 09:15 you receive all jobs indexed between **2026-01-01 08:00 and 2026-01-02 08:00**.
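
The examples above can be sketched as a small helper. This assumes the served window is exactly one or twenty-four hours long, ending at the last completed hour minus the one-hour enrichment delay — a reading of the examples, not an official formula:

```python
from datetime import datetime, timedelta

def window_for(call_time, time_frame):
    """Approximate the (start, end) window served for `1h` / `24h`.

    Assumption: the window ends at the top of the hour one hour
    before `call_time` (UTC) and extends back 1 or 24 hours.
    """
    end = call_time.replace(minute=0, second=0, microsecond=0) - timedelta(hours=1)
    hours = 1 if time_frame == "1h" else 24
    return end - timedelta(hours=hours), end
```

Useful mainly for logging which window a given poll covered, or for sanity-checking that consecutive daily `24h` polls line up without gaps.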

Enrichments applied before a job enters these windows include LLM-assisted field extraction, location normalisation, company data, and company reviews. See [Enrichments](/documentation/enrichments) for the full list of enriched fields.

We recommend using `7d` and `6m` **only for backfilling your database**. Both windows are refreshed every 1–3 minutes with a ~45-minute ingestion delay. See [Recommended Strategy](/documentation/recommended-strategy#backfill) for a full backfill guide including cursor pagination.

## Key parameters

| Parameter | Applies to | Description |
| --- | --- | --- |
| `time_frame` | Both | **Required.** `1h`, `24h`, `7d`, or `6m`. See table above. |
| `description_format` | Both | `text` or `html`. **Omit to exclude descriptions entirely** — descriptions are not returned by default because they add significant payload size. |
| `include_basic_organization_details` | `active-ats` only | Set to `true` to include inline LinkedIn company fields (name, industry, headcount, HQ, etc.). Not needed if you already call [`/organizations-advanced`](/api/organizations#advanced-organization-details) — all those fields are included there. `org_linkedin_slug` is always returned regardless. |
| `title` | Both | Natural-language title search. `"software engineer"` for exact phrase, `Software OR Engineer` for either word. Use `title_advanced` for Boolean operators. |
| `location` | Both | Natural-language location search. Use full names: `"United States"` not `US`. Multi-location: `"United States" OR Canada`. See [Nuances of Location Search](/documentation/nuances-of-location-search). |
| `source` | Both | Comma-separated list of ATS/job-board sources to include. Useful for filtering to a specific ATS. |

For the full parameter reference, see the [API reference](/api/new-jobs).
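
As a sketch, a request combining these parameters might be assembled like this. The host name below is a placeholder — take the real base URL and any auth headers from the API reference:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.example.com/active-ats"  # placeholder host, not the real one

def build_url(time_frame, **filters):
    """Build a query string from the parameters in the table above."""
    params = {"time_frame": time_frame, **filters}
    return f"{BASE_URL}?{urlencode(params)}"

url = build_url(
    "1h",
    description_format="text",
    title='"software engineer"',
    location='"United States" OR Canada',
)
```

`urlencode` percent-escapes the quotes and spaces in the phrase searches, so you can pass filter values exactly as written in the table.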

## Pagination

For `1h`, `24h`, and `7d`, use **offset pagination**. Set a `limit` between 100 and 1,000 and keep increasing `offset` by `limit` until the response returns fewer rows than `limit`:

```
request 1: limit=1000&offset=0
request 2: limit=1000&offset=1000
request 3: limit=1000&offset=2000
...
```
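
The loop above as a generic sketch, where `fetch(limit, offset)` stands in for your HTTP call and returns one page of rows:

```python
def paginate_offset(fetch, limit=1000):
    """Yield every row via offset pagination.

    Stops once a page comes back with fewer rows than `limit`,
    matching the stop condition described above.
    """
    offset = 0
    while True:
        rows = fetch(limit, offset)
        yield from rows
        if len(rows) < limit:
            break
        offset += limit
```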

For `6m`, use **cursor pagination** instead, since offset pagination becomes inefficient at deep pages on very large feeds:

```
request 1: limit=200&cursor=1
request 2: limit=200&cursor=<last id from request 1>
request 3: limit=200&cursor=<last id from request 2>
...
```

Pass the last `id` returned as the `cursor` for the next request. Note that cursor pagination orders results by `id` ascending rather than `date_posted` descending — pick one strategy and stick with it. If both `cursor` and `offset` are passed, `cursor` wins.
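
A sketch of that loop, with `fetch(limit, cursor)` standing in for your HTTP call. Whether the row whose `id` equals the cursor is re-returned on the next page is not specified here, so dedupe by `id` on your side if you see repeats:

```python
def paginate_cursor(fetch, limit=200):
    """Yield every row via cursor pagination (ordered by `id` ascending).

    First request passes cursor=1; each later request passes the
    last `id` of the previous page, as described above.
    """
    cursor = 1
    while True:
        rows = fetch(limit, cursor)
        if not rows:
            break
        yield from rows
        if len(rows) < limit:
            break
        cursor = rows[-1]["id"]
```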

See [Recommended Strategy](/documentation/recommended-strategy#pagination) for more detail.

## Credit consumption

Both `active-ats` and `active-jb` consume **Jobs credits** — one credit per job returned. All other endpoints (modified, expired, organizations) are complimentary and do not count against your Jobs quota.

To avoid consuming duplicate credits, always match your polling cadence to your `time_frame`:

- `1h` → poll every hour
- `24h` → poll once per day in the same hour

See [Plans, Limits & Upgrades](/documentation/credit-usage) for quota details, overage pricing, and how to track usage via response headers.
