Sync strategies

When you integrate an external system like an ERP or CRM with Propeller, you need a strategy for keeping data in sync on an ongoing basis. This page covers the most common patterns.

Full sync vs delta sync

A full sync sends all records from the external system to Propeller in every run. This is the simplest approach and works well when the dataset is small enough to process within a reasonable time window. Because the bulk endpoints use upsert operations, they determine on their own whether each record should be created or updated, so sending records that already exist in Propeller updates them rather than creating duplicates.
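The upsert behavior can be illustrated with a minimal in-memory model. The `bulk_upsert` function and the `sourceId` key below are illustrative stand-ins, not the actual endpoint or field names:

```python
def bulk_upsert(store, records, key="sourceId"):
    """Model of upsert semantics: records are matched on an external
    identifier, so re-sending an existing record updates it in place
    instead of creating a duplicate."""
    for record in records:
        store[record[key]] = record  # create or update, decided by the key
    return store

store = {}
bulk_upsert(store, [{"sourceId": "A1", "name": "Acme"}])
# A later full sync re-sends the same record with changed data:
bulk_upsert(store, [{"sourceId": "A1", "name": "Acme Corp"}])
# The store still holds one record, now with the updated name.
```

This is why a full sync can safely re-send the entire dataset on every run.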

A delta sync only sends records that have been created or modified since the last sync. This is useful when the dataset is large and a full sync would take too long or put unnecessary load on the source system. Delta syncs require the external system to support filtering by a date modified or date created field.
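The core of a delta sync is filtering the source data against a watermark from the previous run. A minimal sketch, assuming each source record exposes a `modified_at` timestamp (the field name varies per system):

```python
from datetime import datetime, timezone

def select_delta(records, last_sync):
    """Return only records created or modified after the last sync watermark."""
    return [r for r in records if r["modified_at"] > last_sync]

records = [
    {"id": 1, "modified_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified_at": datetime(2024, 3, 1, tzinfo=timezone.utc)},
]
last_sync = datetime(2024, 2, 1, tzinfo=timezone.utc)
delta = select_delta(records, last_sync)  # only the record changed after the watermark
```

In practice the filtering should happen in the source system's query (so unchanged records are never fetched), and the watermark should be persisted only after a successful run so a failed sync is retried from the same point.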

Choosing a strategy

For most integrations the recommended approach is:

  • Daily full sync when the dataset is manageable (for example up to a few thousand companies or contacts). This is the simplest option and ensures all data stays consistent.
  • Daily delta sync with periodic full sync when the dataset is large. Run delta syncs daily to keep up with changes, and schedule a full sync weekly or monthly as a safety net to catch anything the delta syncs might have missed.
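The combined strategy can be expressed as a simple schedule decision in the integration itself. The weekday choice below is illustrative:

```python
from datetime import date

def sync_mode(today: date, full_sync_weekday: int = 6) -> str:
    """Daily delta sync, with a weekly full sync as a safety net.

    full_sync_weekday uses Python's convention (Monday=0); 6 means the
    full sync runs on Sundays. Pick whichever low-traffic day suits you.
    """
    return "full" if today.weekday() == full_sync_weekday else "delta"

sync_mode(date(2024, 1, 7))  # a Sunday -> "full"
sync_mode(date(2024, 1, 8))  # a Monday -> "delta"
```

Because the bulk endpoints upsert, the full run needs no special handling: it simply re-sends everything the delta runs would otherwise have covered.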

There is no difference in implementation between an initial import and an ongoing sync. The bulk endpoints handle both scenarios through their upsert behavior. You can use the same integration for your first import and for all subsequent syncs.

Batch sizing

All bulk endpoints accept multiple records per request. Splitting your data into batches is recommended for two reasons: it keeps individual request payloads manageable and it makes debugging easier when something goes wrong.

Recommended batch sizes:

  • Companies, contacts and customers: 100 to 200 per batch
  • Products with attributes: roughly 100 per batch, depending on the amount of attribute data per product
  • Inventory: although 1000 per batch is possible, 300 is a safer choice
  • Categories: 100 to 200 per batch

Even when the endpoint can handle larger payloads, smaller batches make it easier to identify which records caused errors. If one record in the payload fails, the whole batch fails.
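Splitting a dataset into batches is a small amount of code. A generic sketch (the batch size of 100 matches the recommendation for companies and contacts above):

```python
def batches(records, size):
    """Yield fixed-size batches of records for the bulk endpoints."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

companies = [{"id": n} for n in range(250)]
chunks = list(batches(companies, 100))  # 3 batches: 100, 100 and 50 records
```

Because a single failing record fails its whole batch, smaller batches also bound the blast radius: with batches of 100, an error leaves you 100 records to inspect and retry rather than the entire dataset.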

See also