Chapter 3. Using RHMAP Data Sync Framework

3.1. Data Sync Framework

The RHMAP mobile data synchronization framework includes the following features:

  • Allows mobile apps to use and update data offline (local cache)
  • Provides a mechanism to manage bi-directional data synchronization from multiple Client Apps using the Cloud App and into back-end data stores
  • Allows data updates (that is, deltas) to be distributed from the Cloud App to connected clients
  • Enables data collision management from multiple updates in the cloud
  • Allows RHMAP Apps to seamlessly continue working when the network connection is lost, and allows them to recover when the network connection is restored.

3.1.1. High Level Architecture



Please refer to the Data Sync Framework Terminology defined in Sync terminology.

The Sync Framework comprises a set of Client App and Node.js Cloud App APIs.

The Client App does not access the back-end data directly. Instead, it uses the Sync Client API to Read and List the data stored on the device and send changes (Creates, Updates, and Deletes) to the Cloud App. Changes made to the data locally are stored in the local Sync Data Cache before being sent to the Cloud App. The Client App receives notifications of changes to the remote dataset from the Sync Client API.

The Client App then uses the Client Sync Service to receive the changes (deltas) made to the remote dataset from the Cloud App and stores them in the Sync Data Cache. The Client App also sends Updates made locally to the Cloud App using the Client Sync Service. When the Client App is off-line, cached updates are flushed to local storage on the device, allowing the changes to persist in case the Client App is closed before network connection is re-established. The changes are pushed to the Cloud App the next time the Client App goes online.

The Cloud App does not access the Back End data directly, but only through the Sync Cloud API. The Cloud App uses the Sync Cloud API to receive updates from the Client App using the Client Sync Service. These updates are stored in the Cloud App Sync Data Cache. The Cloud App uses the Sync Cloud API to manage the Back End data in hosted storage using standard CRUDL (create, read, update, delete, and list) and collisionHandler functions.

In addition to the standard data handler functions, the Cloud App can also employ user-defined data access functions.

3.1.2. API

The Client and Node.js API calls for Sync are documented in the following guides:

3.1.3. Getting Started


To use sync framework with Hosted RHMAP, you must upgrade your database. To upgrade the database:

  1. In the Data Browser section of the Cloud App page in Studio, click the Upgrade Database button in the top right corner, and confirm by clicking Upgrade Now. Wait until the upgrade process finishes.
  2. Redeploy the Cloud App by clicking Deploy Cloud App in the Deploy section.

    After the database upgrade is complete, new collections with the prefix fhsync_ are created to enable sync functionality. Red Hat recommends that you keep these collections, even if you do not intend to use sync functionality.

If you do not upgrade your database, you will encounter an Internal Server Error (500).

To implement the Sync framework in your App:

  1. Init $fh.sync on the client side

      //See [JavaScript SDK API](../api/app_api.html#app_api-_fh_sync) for the details of the APIs used here
      var datasetId = "myShoppingList";

      //provide sync init options
      $fh.sync.init({
        "sync_frequency": 10,
        "do_console_log": true,
        "storage_strategy": "dom"
      });

      //provide listeners for notifications.
      $fh.sync.notify(function(notification){
        var code = notification.code;
        if('sync_complete' === code){
          //a sync loop completed successfully, list the updated data
          $fh.sync.doList(datasetId,
            function (res) {
              console.log('Successful result from list:', JSON.stringify(res));
            },
            function (err) {
              console.log('Error result from list:', JSON.stringify(err));
            });
        } else {
          //choose other notifications the app is interested in and provide callbacks
        }
      });

      //manage the data set, repeat this if the app needs to manage multiple datasets
      var query_params = {}; //or something like this: {"eq": {"field1": "value"}}
      var meta_data = {};
      $fh.sync.manage(datasetId, {}, query_params, meta_data, function(){
        //the dataset is now being managed and synchronized
      });

    About Notifications

    The Sync framework emits different types of notifications during the sync life cycle. Depending on your app’s requirements, you can choose which types of notifications your app listens to and add callbacks. However, listeners are not mandatory: the Sync framework performs synchronization even if no notification listeners are added.

    Adding appropriate notification listeners helps improve the user experience of your app:

    • Show critical error messages to the user in situations where Sync framework errors occur. For example, client_storage_failed.
    • Log errors and failures to the console to help debugging. For example, remote_update_failed, sync_failed.
    • Update the UI related to the sync data when a delta is received, that is, when there are changes to the data. For example, use delta_received and record_delta_received.
    • Monitor for collisions.

      Make sure to use $fh.sync APIs to perform CRUDL operations on the client.

  2. Init $fh.sync on the cloud side

    This step is optional, and only required if you are overriding dataset options on the server, for example, modifying the sync loop frequency with the Dataset back end. See the Considerations section below if changing the default sync frequency.

    var fhapi = require("fh-mbaas-api");
    var datasetId = "myShoppingList";
    var options = {
      "syncFrequency": 10
    };
    fhapi.sync.init(datasetId, options, function(err) {
      if (err) {
        console.error('sync init failed:', err);
      } else {
        console.log('sync inited');
      }
    });

    You can now use the Sync framework in your app, or use the sample app to explore the basic usage: Client App and Cloud App.

    If the default data access implementations do not meet your requirements, you can provide override functions.

Avoiding Unnecessary Sync Loops

Because the client and server sync frequencies are set independently, two sync loops may be invoked per sync frequency if the server-side sync frequency differs from the client-side frequency. Setting a long frequency on a client does not change the sync frequency on the server. To avoid two sync loops, set the syncFrequency value of the dataset on the server to the sync_frequency value of the corresponding dataset on the client.

For example:

  • syncFrequency on the server-side dataset is set to 120 seconds.
  • sync_frequency on the client-side dataset is also set to 120 seconds.

However, if you require different frequencies on the client and server, you can set different values.
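
To make the relationship concrete, here is a minimal sketch of the two option objects side by side, using the option names from the examples in this section (the values and the guard are illustrative, not part of the framework):

```javascript
// Keep the client and server frequencies aligned to avoid two sync loops
// per period. Values are in seconds and purely illustrative.
var clientOptions = {
  "sync_frequency": 120  // client-side option name
};
var serverOptions = {
  "syncFrequency": 120   // server-side option name
};

// A simple guard you could run in a shared config module:
if (clientOptions.sync_frequency !== serverOptions.syncFrequency) {
  console.warn("Client and server sync frequencies differ; expect extra sync loops.");
}
```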

3.1.4. Using Advanced Features of the Sync Framework

Define the Source Data for a Dataset

The Sync Framework provides hooks to allow the App Developer to define the source data for a dataset. Typically, the source data is an external database (MySQL, Oracle, MongoDB, etc.), but this is not a requirement. The source data for a dataset can be anything, for example, CSV files, FTP metadata, or even data pulled from multiple database tables. The only requirements that the Sync Framework imposes are that each record in the source data has a unique ID and that the data is provided to the Sync Framework as a JSON Object.

To synchronize with the back-end data source, the App Developer can implement custom synchronization code.

For example, when listing data from the back end, instead of loading data from a database, you might want to return hard-coded data:

  1. Init $fh.sync on the client side

    This is the same as Step 1 in Getting Started.

  2. Init $fh.sync on the cloud side and provide overrides.

    var fhapi = require("fh-mbaas-api");
    var datasetId = "myShoppingList";
    var options = {
      "syncFrequency": 10
    };
    //provide hard coded data list
    var datalistHandler = function(dataset_id, query_params, cb, meta_data){
      var data = {
        '00001': {
          'item': 'item1'
        },
        '00002': {
          'item': 'item2'
        },
        '00003': {
          'item': 'item3'
        }
      };
      return cb(null, data);
    };
    fhapi.sync.init(datasetId, options, function(err) {
      if (err) {
        console.error('sync init failed:', err);
      } else {
        fhapi.sync.handleList(datasetId, datalistHandler);
      }
    });

    Check the Node.js API Sync section for information about how to create more overrides.

3.2. Sync Terminology

3.2.1. Sync Protocol

The protocol for communication between the Sync Client and the Sync Server.

3.2.2. Sync Server

The Sync Server is the server part of the Sync Protocol, and is included in the fh-mbaas-api module. It:

3.2.3. Sync Client

The Sync Client is the client part of the Sync Protocol. There are 3 Sync Client implementations available:

3.2.4. Sync Server Loop

The Sync Server Loop is a function that runs continuously on the Sync Server with a 500ms wait between each run.
During each run, it iterates over all DataSet Clients to see if a DataSet should be synced from the DataSet Backend.

3.2.5. Sync Client Loop

The Sync Client Loop is a function that runs continuously on the Sync Client with a 500ms wait between each run.
During each run, it iterates over all DataSet Clients to see if a DataSet should be synced with the Sync Server.

3.2.6. Sync Frequency

On the Sync Client, this is the interval between checks for updates from the Sync Server for a particular DataSet.
On the Sync Server, this is the interval between checks for updates from the DataSet Backend for a particular DataSet.

For more information, see Configuring Sync Frequency.

3.2.7. DataSet

A DataSet is a collection of records synchronized between 1 or more Sync Clients, the Sync Server and the DataSet Backend.

Red Hat recommends that you use an indexing strategy when working with data sets to improve performance. For more information about indexing strategies for MongoDB, please see the MongoDB Manual.

3.2.8. DataSet Backend

The system of record for data synchronized between the Sync Client and the Sync Server.
It can be any system that provides an API and can be integrated with from the Sync Server, for example, a MySQL database or a SOAP service.
The Sync Server exposes the Sync Server API for integration with a DataSet Backend using DataSet Handlers.

3.2.9. DataSet Handler

A DataSet Handler is a function for integrating the Sync Server into a DataSet Backend.
There are many handlers for doing CRUDL actions on a DataSet and managing collisions between DataSet Records.
The default implementation of these handlers uses fh.db (MongoDB backed in an RHMAP MBaaS).
You can override each of these handlers. See the Sync Server API for details.

IMPORTANT: If you are overriding handlers, Red Hat recommends overriding all handlers to avoid unusual behavior with some handlers using the default implementation and others using an overridden implementation.

3.2.10. DataSet Client

A DataSet Client is a configuration stored in the Sync Client and Sync Server for each DataSet that is actively syncing between the client and server.
It contains data such as:

3.2.11. DataSet Record

A DataSet Record is an individual record in a DataSet.
It contains:

  • the raw data that this record represents, for example, the row values from a MySQL table
  • a Hash of the raw data

3.2.12. Hash

There are 2 types of Hash used in the Sync Protocol:

  • hash of an individual DataSet Record, which is used to compare individual records to see if they are different.
  • hash of all DataSet Records for a particular DataSet Client, which is used to compare a client’s set of records with the server’s set without iterating over all records.

3.3. Sync Server Architecture

For a general overview of the Sync Framework, see Sync Overview and Sync Terminology.

3.3.1. Architecture

The Sync Server architecture includes:

  • HTTP handlers
  • Queues and processors
  • The sync scheduler

Each of these components persists data in MongoDB.

Sync Server Architecture

HTTP Handlers

These handlers are responsible for handling the Sync requests from Sync Clients.

Sync HTTP Handler

Creates or updates the Dataset Client and pushes pending records and acknowledgements on to the appropriate queues for processing.

Sync Records HTTP Handler

Compares up-to-date data with a client’s state. After getting the delta, it checks for updates that are processed, but not yet synced. This handler iterates through all the records in the delta. If any records are in the pending queue or have been applied, this handler removes them from the delta and returns the updated delta to the client.

Queues

The following queues are used in the Sync Framework:

  • fhsync_queue - jobs for datasets that require synchronization.
  • fhsync_ack_queue - jobs for pending changes that require acknowledgement.
  • fhsync_pending_queue - jobs for pending changes that require processing.

Messages are placed on these queues and are consumed by processors.

Processors

Each queue has a corresponding processor:

  • Sync Processor - takes jobs from fhsync_queue and processes those jobs.
  • Ack Processor - takes acknowledgements from fhsync_ack_queue and removes those acknowledgements from MongoDB.
  • Pending Processor - takes pending items from fhsync_pending_queue and applies the changes to the Dataset Backend.

Each worker in a Sync Server has one instance of each of these processors, allowing the tasks to be distributed.

Sync Scheduler

When horizontally scaled, each Sync Worker attempts to become the Sync Scheduler at fixed intervals. Each worker tries to obtain a lock which is located in MongoDB. The worker that has the Sync Scheduler lock determines which Datasets need to be synchronized by looking at the timestamp of the last synchronization and the sync frequency for the Dataset. If a Dataset needs to be synchronized, a job is added to fhsync_queue.
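
The per-dataset scheduling decision described above can be sketched as follows (a simplification; the real Sync Scheduler also manages the MongoDB-backed lock and adds jobs to fhsync_queue):

```javascript
// A Dataset is due for synchronization when at least its sync frequency has
// elapsed since the last completed synchronization.
function isSyncDue(lastSyncTimeMs, syncFrequencySeconds, nowMs) {
  return (nowMs - lastSyncTimeMs) >= syncFrequencySeconds * 1000;
}

// e.g. with a 10s frequency, a dataset last synced at t=0 becomes due at t=10s.
```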

3.4. Data Sync Configuration Guide

Data Sync configuration can be applied to the client-side and also in the cloud (server-side).

The sync frequency on the client-side is set using the sync_frequency variable. To see an example of sync_frequency being set, see the code in this section of the documentation.

The sync frequency in the cloud is set using the syncFrequency variable. To see an example of syncFrequency being set, see the code in this section of the documentation.

3.4.1. Configuring Sync Frequency

The sync frequency is the time period the system waits between 2 sync processes.

IMPORTANT: It is possible to configure the frequency differently on the client and server. However, Red Hat recommends using the same setting to avoid the following scenarios:

  • The client calls more frequently than the server checks for updates from the DataSet Backend, causing unnecessary traffic from the client.
  • The client calls less frequently than the server checks for updates from the DataSet Backend, causing the server to drop its DataSet from the cache because of inactivity.

The sync frequency value of a server determines how often the sync processor runs. Every time the sync processor executes, it performs a list operation on the Dataset Backend to synchronize the data with a local copy. To determine the best value of the sync frequency for your application, review the following sections.

  • How quickly do you want your clients to see changes from others?

    When a client submits changes, those changes are applied to the Dataset Backend directly. To ensure high performance, other clients get data from the local copy. This means other clients can only get the new changes after the next sync processor run. If it is required that other clients get the changes as soon as possible, then consider setting a low value for the sync frequency.

  • How long does the sync processor take to run?

    The sync frequency value determines how long the system waits between sync processor executions, that is, the sync frequency is the time from the completion of one execution to the start of the next execution. This means there is never a situation where 2 sync processor executions run at the same time. Therefore:

    actual sync period = sync processor execution time + the sync frequency

    This helps you calculate the number of requests the system makes to the Dataset Backend.

    To determine how long each sync processor execution takes, you can query the sync stats endpoint to see the average Job Process Time it takes for the sync_worker to complete.

  • How much load can the Dataset Backend service handle?

    Every time the sync processor runs, it performs a list operation on the Dataset Backend. When you configure the sync frequency, you need to estimate how many requests it generates on the backend, and make sure the backend can handle the load.

    For example, if you set the sync frequency of a dataset to 100ms, and each sync processor execution is taking 100ms to run, that means the server generates about 5 req/sec to the backend. However, if you have another dataset with a sync frequency of 100ms that uses the same backend, there will be about 10 req/sec to the backend. You can perform load tests against the backend to determine if the backend can handle that load.

    However, this value does not grow when you scale the app. For example, if you have multiple workers in your server, the sync processor executions are distributed among the workers rather than duplicated among them. This design protects the backend when the app is under heavy load.

  • How much extra load does it cause to the server?

    When the data is returned from the backend, the server must save the data to the local storage (MongoDB). The system only performs updates if there are changes. But it needs to perform a read operation first to get the current data in the local storage. When there are a lot of sync processor executions, it could cause extra load on the server itself. Sometimes, you need to take this into consideration, especially if the dataset is large.

    To understand the performance of the server, you can use the sync stats endpoint to check CPU usage, and the MongoDB operation time.

You can use the sync frequency value to control the number of requests the server generates to the backend. It is acceptable to set it to 0ms, as long as the backend can handle the load and the server itself is not overloaded.
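
The load estimate described in this section can be captured in a small helper:

```javascript
// Requests per second one dataset generates against the Dataset Backend:
// one list call per cycle, where each cycle lasts
// (sync processor execution time + sync frequency).
function backendRequestsPerSecond(execTimeMs, syncFrequencyMs) {
  return 1000 / (execTimeMs + syncFrequencyMs);
}
```

With the 100ms frequency and 100ms execution time from the example above, this gives 5 requests per second per dataset; two such datasets sharing a backend give about 10 requests per second.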

3.4.2. Configuring the Workers

There are different queues used to store the sync data, as described in the Sync Architecture. To process the data, a corresponding worker is created for each queue. Its sole task is to take a job off the queue, one at a time, and process it. However, there is an interval value that controls how long the worker waits between finishing one job and getting the next available job. To maximize worker performance, you can configure this value.

Purpose of the Intervals

The request to get a job off the queue is a non-blocking operation. When there are no jobs left on the queue, the request returns and the worker attempts to get a job again.

In this case, or if jobs are very fast to complete, a worker could overload the main event loop and slow down any other code execution. To prevent this scenario, there is an interval value configuration item for each worker:

  • pendingWorkerInterval
  • ackWorkerInterval
  • syncWorkerInterval

The default interval value is very low (1ms), but configurable. This default value assumes that a job takes some time to execute and performs some non-blocking I/O operations (remote HTTP calls, DB calls, and so on), which allows other operations to be completed on the main event loop. This low default interval allows jobs to be processed as quickly as possible, making more efficient use of the CPU. When there are no jobs, a backoff mechanism is invoked to ensure the workers do not overload resources unnecessarily.
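
As a sketch, overriding the intervals could look like the following. Note that applying the object via sync.setConfig() is an assumption based on the fh-sync configuration API; verify the registration call against the Sync API documentation for your version.

```javascript
// Worker interval overrides, in milliseconds. Anything not set here keeps
// the 1ms default described above.
var syncConfig = {
  pendingWorkerInterval: 100, // pause between pending-change jobs
  ackWorkerInterval: 100,     // pause between acknowledgement jobs
  syncWorkerInterval: 500     // pause between dataset sync jobs
};

// ASSUMPTION: registration call, to be checked against the Sync API doc:
// require('fh-mbaas-api').sync.setConfig(syncConfig);
```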

If the default value is causing too many requests to the Dataset Backend, or you need to change the default interval value, you can override the configuration options for one or more workers.

Worker Backoff

When there are no jobs left on a queue, each worker has a backoff strategy. This prevents workers from consuming unnecessary CPU cycles and making unnecessary calls to the queue. When new jobs are put on the queue, the worker resets to its configured interval the next time it checks the queue.

You can override the behavior of each worker with the following configuration options:

  • pendingWorkerBackoff
  • ackWorkerBackoff
  • syncWorkerBackoff

By default, all workers use an exponential strategy, with a max delay value. For example, if the min interval is set to 1ms, the worker waits 1ms after processing a job before taking another job off the queue. This pattern continues as long as there are items on the queue. If the queue empties, the interval increases exponentially (2ms, 4ms, 8ms, 16ms, …​ ~16s, ~32s) until it hits the max interval (for example, 60 seconds). The worker then only checks the queue every 60 seconds for a job. If it does find a job on the queue in the future, the worker returns to checking the queue every 1ms.
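
The exponential strategy described above can be sketched as:

```javascript
// Delay before the next queue check after `emptyChecks` consecutive empty
// results: doubles from the minimum interval, capped at the maximum delay.
function backoffDelay(minIntervalMs, maxDelayMs, emptyChecks) {
  return Math.min(minIntervalMs * Math.pow(2, emptyChecks), maxDelayMs);
}

// With a 1ms minimum and a 60s cap: 1, 2, 4, 8, ... up to the 60000ms ceiling.
```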

For more information, please refer to the Sync API Doc.

3.4.3. Managing Collisions

A collision occurs when a client attempts to send an update to a record, but the client’s version of the record is out of date. Typically, this happens when a client is offline and performs an update to a local version of a record.

Use the following handlers to deal with collisions:

  • handleCollision() - Called by the Sync Framework when a collision occurs. The default implementation saves the data records to a collection named "<dataset_id>_collision".
  • listCollision() - Returns a list of data collisions. The default implementation lists all the collision records from the collection named "<dataset_id>_collision".
  • removeCollision() - Removes a collision record from the list of collisions. The default implementation deletes the collision records based on hash values from the collection named "<dataset_id>_collision".

You can provide the handler function overrides for dealing with data collisions. Options include:

  • Store the collision record for manual resolution by a data administrator at a later date.
  • Discard the update which caused the collision. To achieve this, the handleCollision() function would simply not do anything with the collision record passed to it.


    NOTE: This may result in data loss as the update which caused the collision would be discarded by the Cloud App.

  • Apply the update which caused the collision. To achieve this, the handleCollision() function would need to call the handleCreate() function defined for the dataset.


    NOTE: This may result in data loss as the update which caused the collision would be based on a stale version of the data and so may cause some fields to revert to old values.
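
For example, the discard strategy could be expressed as a no-op override. The parameter list below (hash, timestamp, uid, and the pre/post record values) is an assumption about the collision record contents; verify the exact handler signature against the Node.js API Sync documentation before using it.

```javascript
// HYPOTHETICAL signature: check the Sync API doc before relying on it.
function discardingCollisionHandler(dataset_id, hash, timestamp, uid, pre, post, meta_data, cb) {
  // Intentionally do nothing with the collision record: the conflicting
  // update is dropped instead of being stored in "<dataset_id>_collision".
  return cb();
}

// Registration would look something like:
// require('fh-mbaas-api').sync.handleCollision(datasetId, discardingCollisionHandler);
```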

The native sync clients use similar interfaces. You can check the API and example code in our iOS GitHub repo and Android GitHub repo.

3.5. Sync Server Upgrade Notes

3.5.1. Overview

This section targets developers who:

  • use Sync Server in their application
  • are upgrading the version of fh-mbaas-api from <7.0.0 to >=7.0.0

If you are already using fh-mbaas-api@>=7.0.0, do not follow any of the procedures in this section.


There are no changes to the Sync Client in this upgrade.

3.5.2. Prerequisites

Prior to 7.0.0, the Sync Server used the fh.db API to store sync operational data in MongoDB. fh.db is a wrapper around MongoDB that may go through an intermediate HTTP API (fh-ditch). This resulted in a restricted set of actions that could be performed on the sync operational data. It also limited the potential use of modules that connect directly to MongoDB. As of fh-mbaas-api@7.0.0, the Sync Server requires a direct connection to MongoDB.

This means:

  • For a hosted MBaaS, you must 'Upgrade' your App Database.
  • For a self-managed MBaaS, no action is required, as all Apps get their own database in MongoDB by default.

3.5.3. Data Handler Function Signature Changes

The method signatures for the sync data handlers are different in the new Sync Framework. If you implemented any data handler, you must change the parameter ordering. These changes conform to the parameter ordering convention in JavaScript, that is, a callback is the last parameter.

IMPORTANT: Make sure that the callback function, passed to each handler as a parameter, runs for each call. This ensures that the worker can continue after the handler has completed.

The data handlers and their signature prior to and as of 7.0.0 are:

// <7.0.0
sync.handleList(dataset_id, function(dataset_id, params, callback, meta_data) {});
sync.globalHandleList(function(dataset_id, params, callback, meta_data) {});
// >=7.0.0
sync.handleList(dataset_id, function(dataset_id, params, meta_data, callback) {});
sync.globalHandleList(function(dataset_id, params, meta_data, callback) {});

// <7.0.0
sync.handleCreate(dataset_id, function(dataset_id, data, callback, meta_data) {});
sync.globalHandleCreate(function(dataset_id, data, callback, meta_data) {});
// >=7.0.0
sync.handleCreate(dataset_id, function(dataset_id, data, meta_data, callback) {});
sync.globalHandleCreate(function(dataset_id, data, meta_data, callback) {});

// <7.0.0
sync.handleRead(dataset_id, function(dataset_id, uid, callback, meta_data) {});
sync.globalHandleRead(function(dataset_id, uid, callback, meta_data) {});
// >=7.0.0
sync.handleRead(dataset_id, function(dataset_id, uid, meta_data, callback) {});
sync.globalHandleRead(function(dataset_id, uid, meta_data, callback) {});

// <7.0.0
sync.handleUpdate(dataset_id, function(dataset_id, uid, data, callback, meta_data) {});
sync.globalHandleUpdate(function(dataset_id, uid, data, callback, meta_data) {});
// >=7.0.0
sync.handleUpdate(dataset_id, function(dataset_id, uid, data, meta_data, callback) {});
sync.globalHandleUpdate(function(dataset_id, uid, data, meta_data, callback) {});

// <7.0.0
sync.handleDelete(dataset_id, function(dataset_id, uid, callback, meta_data) {});
sync.globalHandleDelete(function(dataset_id, uid, callback, meta_data) {});
// >=7.0.0
sync.handleDelete(dataset_id, function(dataset_id, uid, meta_data, callback) {});
sync.globalHandleDelete(function(dataset_id, uid, meta_data, callback) {});

// <7.0.0
sync.listCollisions(dataset_id, function(dataset_id, callback, meta_data) {});
sync.globalListCollisions(function(dataset_id, callback, meta_data) {});
// >=7.0.0
sync.listCollisions(dataset_id, function(dataset_id, meta_data, callback) {});
sync.globalListCollisions(function(dataset_id, meta_data, callback) {});

// <7.0.0
sync.removeCollision(dataset_id, function(dataset_id, collision_hash, callback, meta_data) {});
sync.globalRemoveCollision(function(dataset_id, collision_hash, callback, meta_data) {});
// >=7.0.0
sync.removeCollision(dataset_id, function(dataset_id, collision_hash, meta_data, callback) {});
sync.globalRemoveCollision(function(dataset_id, collision_hash, meta_data, callback) {});
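
As a concrete illustration of the change, here is a list handler migrated from the old parameter order to the new one (the handler body is illustrative):

```javascript
// <7.0.0 style: the callback came before meta_data.
function oldListHandler(dataset_id, params, callback, meta_data) {
  callback(null, { '00001': { item: 'item1' } });
}

// >=7.0.0 style: the callback is the last parameter.
function newListHandler(dataset_id, params, meta_data, callback) {
  callback(null, { '00001': { item: 'item1' } });
}

// Re-register the migrated handler as before:
// sync.handleList(dataset_id, newListHandler);
```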

3.5.4. Behavior Changes

As the sync server now connects directly to MongoDB, there is some setup time required on startup. If you currently use sync.init(), wrap these calls in a sync:ready event handler. For example, if you use the following code:

fh.sync.init('mydataset', options, callback);

Modify it to put it in an event handler:

fh.events.on('sync:ready', function syncReady() {
  sync.init('mydataset', options, callback);
});
Alternatively, you can use the event emitter from the sync API:

fh.sync.getEventEmitter().on('sync:ready', function syncReady() {
  sync.init('mydataset', options, callback);
});

3.5.5. Logger Changes

The logLevel option passed into fh.sync.init() is no longer available. By default, the new Sync Server does not log anything. All logging uses the debug module. If you want log output from the Sync Server, you can set the DEBUG environment variable. For example:


To see all logs from the entire SDK, you can use


All other environment variables and behavior features of the debug module are available.

3.6. Sync Server Performance and Scaling

3.6.1. Overview

The sync server is designed to be scalable.

This section helps you understand the performance of the sync server, and the options for scaling.


If you are using a single CPU core with the sync server included in fh-mbaas-api version 7.0.0 or later, performance can decrease compared to previous versions when using the default configuration. To improve performance of the sync server on a single core, consider adjusting the configuration as described in Section 3.4, “Data Sync Configuration Guide”.

3.6.2. Inspecting Performance

There are 2 options to inspect the performance of the sync server:

  1. Query the /mbaas/sync/stats endpoint.

    By default, the Sync framework saves metrics data into Redis while it is running. You can then send an HTTP GET request to the /mbaas/sync/stats endpoint to view a summary of that metrics data.

    The following information is available from this endpoint:

    • CPU and Memory Usage of all the workers
    • The time taken to process various jobs
    • The number of the remaining jobs in various job queues
    • The time taken for various API calls
    • The time taken for various MongoDB operations

      For each of those metrics, you are able to see the total number of samples, the current, maximum, minimum and average values.

      By default, it collects the last 1000 samples for each metric, but you can control that using the statsRecordsToKeep configuration option.

      This endpoint is easy to use and provides enough information for you to understand the current performance of the sync server.

  2. Visualize the metrics data with InfluxDB and Grafana

    If you want to visualize the current and historical metrics data, you can instruct the sync server to send the metrics data to InfluxDB, and view the graphs in Grafana.

    There are plenty of tutorials online to help setup InfluxDB and Grafana. For example:

    • How to setup InfluxDB and Grafana on OpenShift

      Once you have InfluxDB running, you just need to update the following configurations to instruct the sync server to send metrics data:

    • metricsInfluxdbHost
    • metricsInfluxdbPort


      Make sure metricsInfluxdbPort is a UDP port

      To see the metrics data graph in Grafana, you need to create a new dashboard with graphs. The quickest way is to import this Grafana dashboard file. Once the app is running, you can view metrics data in the Grafana dashboard.

      For more details about how to configure the Grafana graphs, please refer to the Grafana Documentation.

3.6.3. Understanding Performance

To understand the performance of the sync server, here are some of the key metrics you need to look at:

CPU Usage

This is the most important metric. On one hand, if the CPU is overloaded, the sync server cannot respond to client requests. On the other hand, you want to utilize the CPU as much as possible.

To balance that, establish a threshold to determine when to scale the sync server. The recommended value is 80%.

If CPU utilization is below that threshold, it is not necessary to scale the sync server, and you can probably reduce some worker interval configurations to increase CPU usage. However, if CPU usage is above that threshold, consider the adjustments described in the following sections to improve performance.

Remaining Jobs in Queues

The sync server saves various jobs in queues to process them later.

If the number of jobs in queues keeps growing, and the CPU utilization is relatively low, reduce worker interval configurations to process the jobs quicker.

If the sync server is already under heavy load, consider scaling the sync server to allow new workers to be created to process the jobs.

API Response Time

If you observe increases in the response time for various sync APIs and CPU usage is also going up, the sync server is under load; consider scaling the sync server.

However, if the CPU usage does not change much, something else is typically causing the slowdown, and you need to investigate the problem.

MongoDB operation time

In a production environment, the time for various MongoDB operations should be relatively low and consistent. If you start observing increases in the time taken by those operations, the sync server may be generating too many operations on MongoDB and approaching MongoDB's limits.

In this case, scaling the sync server does not help, because the bottleneck is in MongoDB. There are a few options you can consider:

  • Turn on caching by setting the useCache flag to true. This reduces the number of database requests to read dataset records.
  • Increase the various worker intervals and sync frequencies.
  • If possible, scale MongoDB.
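The first two options above can be sketched as a configuration object passed to $fh.sync.setConfig. The option names follow the sync configuration documentation, and the interval values are illustrative examples, not recommendations:

```javascript
// Hedged sketch: config changes that reduce load on MongoDB.
// Option names are assumed from the $fh.sync.setConfig documentation;
// interval values (in milliseconds) are examples, not recommendations.
const mongoReliefConfig = {
  useCache: true,               // serve dataset record reads from the cache
  pendingWorkerInterval: 2000,  // process pending-queue jobs less often
  ackWorkerInterval: 2000,      // process ack-queue jobs less often
  syncWorkerInterval: 2000      // sync with the Dataset Backend less often
};
```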

3.6.4. Scaling the Sync Server

If you decide to scale the sync server, here are some of the options you can consider:

Scaling on a Hosted MBaaS

There are 2 options to scale the sync server on the RHMAP SaaS platform:

Use the Node.js Cluster Module

To scale inside a single app, you can use Node.js clustering to create more workers.

Deploy More Apps

Another option is to deploy more apps but point them to the same MongoDB as the existing app. This allows you to scale the sync server even further.

To deploy more apps:

  • Deploy a few more apps with the same code as the existing app.
  • Find out the MongoDB connection string of the existing app.

    It is listed on the Environment Variables screen in the App Studio; look for a System Environment Variable called FH_MONGODB_CONN_URL.

  • Copy the value, create a new environment variable called SYNC_MONGODB_URL in the newly created apps, and paste the MongoDB URL as the value.
  • Redeploy the apps.
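The environment variable step above can be sanity-checked at startup. The following is a hypothetical check, assuming it runs inside the newly deployed app:

```javascript
// Hypothetical startup check for the newly deployed apps: ensure they point
// at the same MongoDB as the existing app via SYNC_MONGODB_URL, falling back
// to the platform-provided FH_MONGODB_CONN_URL.
function resolveSyncMongoUrl(env) {
  const url = env.SYNC_MONGODB_URL || env.FH_MONGODB_CONN_URL;
  if (!url) {
    throw new Error('No MongoDB connection string configured for sync');
  }
  return url;
}

// Typical usage: resolveSyncMongoUrl(process.env)
```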

With this approach, you can separate the HTTP request handling and sync data processing completely.

For example, suppose there are 2 apps set up like this, App 1 and App 2, where App 1 is the cloud app that accepts HTTP requests. You can then:

  • Set the worker concurrencies to 0 to disable all the sync workers in App 1. It is then dedicated to handle HTTP requests.
  • Increase the concurrencies of sync workers in App 2, and reduce the sync interval values.

Please check $fh.sync.setConfig for more information about how to configure the worker concurrencies.
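The two-app split above can be sketched as follows. The worker option names are taken from the $fh.sync.setConfig concurrency and interval settings; the exact values are examples to tune against your own metrics:

```javascript
// Hedged sketch of the two-app split described above.
// App 1 handles HTTP only: all sync worker concurrencies set to 0.
const app1SyncConfig = {
  pendingWorkerConcurrency: 0,
  ackWorkerConcurrency: 0,
  syncWorkerConcurrency: 0
};

// App 2 does the sync processing: higher concurrency, shorter intervals.
// Values are illustrative; tune them against your own metrics.
const app2SyncConfig = {
  pendingWorkerConcurrency: 4,
  ackWorkerConcurrency: 4,
  syncWorkerConcurrency: 4,
  pendingWorkerInterval: 200,
  ackWorkerInterval: 200,
  syncWorkerInterval: 100
};
```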

3.7. MongoDB Collections Created by the Sync Server

3.7.1. Overview

The sync server maintains various collections in MongoDB while it is running.

This document explains which collections the sync server creates and their purpose.


You should not modify these collections, as it may cause data loss.

3.7.2. Sync Server Collections

All the collections created by the sync server have the prefix fhsync.

fhsync_pending_queue

This collection is used to save the changes submitted from all the clients for all the Datasets.

Some of the useful fields for debugging are:

  • tries: If the value is greater than 0, it means the change has been processed already by the sync server.
  • payload.hash: The unique identifier of the pending change.
  • payload.cuid: The unique id of the client.
  • payload.action: The type of the change, like create or update.
  • payload.pre: The data before the change was made.
  • payload.post: The data after the change was made.
  • payload.timestamp: When the change was made on the client.

fhsync_<datasetId>_updates

When a pending change from the fhsync_pending_queue collection is processed, the result is saved in this collection. The client gets the result the next time it syncs, triggering any relevant client notifications.

Some of the useful fields for debugging are:

  • hash: The unique identifier of the pending change from the above collection.
  • type: Whether the change was applied successfully. Possible values are applied, failed or collision.

fhsync_ack_queue

After a client gets the results of its submitted changes (as saved in the fhsync_<datasetId>_updates collection), it confirms the acknowledgements with the server so that the server can remove them. This collection is used to save the acknowledgements submitted by the clients.

Some of the useful fields for debugging are:

  • payload.hash: The unique identifier of a pending change from the fhsync_pending_queue collection.

fhsync_datasetClients

This collection is used to persist all the Dataset clients that are managed by the sync server.

Some of the useful fields for debugging are:

  • globalHash: The current hash value of the Dataset Client.
  • queryParam: The query parameters associated with the Dataset Client.
  • metaData: The meta data associated with the Dataset Client.
  • recordUids: The unique ids of all the records that belong to the Dataset Client.
  • syncLoopEnd: When the last sync loop finished for the Dataset Client.

fhsync_<datasetId>_records

The data in this collection is a local copy of the data from the Dataset Backend. It helps speed up sync requests from the clients and reduces the number of requests to the Dataset Backend.

Some of the useful fields for debugging are:

  • data: The actual data of the record returned from the Dataset Backend.
  • uid: The unique id of the record.
  • refs: The ids of all the Dataset Clients that contain this record.

fhsync_queue

This collection is used to save the requests to sync fhsync_<datasetId>_records with the Dataset Backend.

Some of the useful fields for debugging are:

  • tries: If it is greater than 0, the request has already been processed by the sync server.

fhsync_locks

Only 1 worker is allowed to sync with the Dataset Backend at any given time. To ensure that, a lock is used, and this collection is used to persist the lock. You are unlikely to ever need to look at this collection, unless you are debugging an issue with the locking mechanism.
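When debugging, the fields described above can be combined into simple queries. The following are plain JavaScript sketches of query objects you might use against these collections in a mongo shell; the placeholder hash is hypothetical:

```javascript
// Hedged sketches of mongo queries over the collections described above.
// '<change-hash>' is a placeholder for a real payload.hash value.

// Has a given pending change been processed yet? (tries > 0 means yes)
const processedPendingQuery = {
  'payload.hash': '<change-hash>',
  tries: { $gt: 0 }
};

// What was the outcome of that change once processed?
const updateResultQuery = {
  hash: '<change-hash>',
  type: { $in: ['applied', 'failed', 'collision'] }
};

// For example, in a mongo shell:
//   db.fhsync_pending_queue.find(processedPendingQuery)
//   db.getCollection('fhsync_<datasetId>_updates').find(updateResultQuery)
```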

3.7.3. Pruning the Queue Collections

For each of the queue collections, a document is not removed immediately after being processed. Instead, it is marked as deleted. This allows developers to use the collections as an audit log, and also helps with debugging.

To prevent these queues from using too much space, you can set a TTL (time to live) value for those messages. Once the TTL value is reached, these messages will be deleted from the database.

For more information, see the "queueMessagesTTL" option in $fh.sync.setConfig.
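For example, the TTL could be set as below. This is a sketch: the option name comes from $fh.sync.setConfig, and the value shown assumes the TTL is expressed in milliseconds, so check the documentation for the exact unit:

```javascript
// Hedged sketch: pruning processed queue messages after one day.
// Assumption: queueMessagesTTL is expressed in milliseconds.
const ONE_DAY_MS = 24 * 60 * 60 * 1000;
const pruningConfig = {
  queueMessagesTTL: ONE_DAY_MS
};
```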

3.8. Sync Server Debugging Guide

3.8.1. Client Changes Are Not Applied

To help debug this issue, answer the following questions:

Has the client sent the change to the Server?

Determine whether the change is sent by looking at the request body in sync requests after the client made the change. The change should be in the pending array. For example:

      "pending": [
        {
          "action": "update",
          "hash": "…",
          "uid": "…",
          "pre": {
            "name": "Original Name"
          },
          "post": {
            "name": "Modified Name"
          }
        }
      ]

If the change is not in the pending array, verify your device is online. Check for any errors in the Client App and verify you are calling the relevant sync action, for example, sync.doUpdate() for an update. It may help to debug or add logging in the Client App around the code where you make the change.

Is there a record for this change in the fhsync_pending_queue collection?

If 'No', the change was not received by the server, or there was an error receiving it.

  • Verify the App successfully sent the change. If it did not, debug the App to understand the issue. There may be an error in the App, or an error in the response from the Server.
  • Check the server logs for errors when receiving the change from the App. If there are no errors, see Enabling Debug Logs.

It is possible the record existed in the fhsync_pending_queue collection, but the Time To Live (TTL) period for queues has passed and the record was removed. If this is the case, increasing the TTL can make further debugging easier.

Does the record have a timestamp for the deleted field?

If 'No', the item has not been processed yet.

  • Typically, the pending worker is busy processing items ahead of it in the queue. Wait until the item gets to the top of the queue for processing.
  • If an item is not processed after a significant time, or the queue is empty except for this item, check the server logs for any errors. If there are no errors, see Enabling Debug Logs.

Is there a record for the update in the fhsync_<datasetid>_updates collection?

If 'No', the update may have encountered an error while processing.

Is the type field in the record set to failed or collision?

If 'Yes', the update could not be applied to the Dataset back end.

  • A 'collision' should have resulted in the collision handler being called. The collision needs resolution.
  • A 'failed' update should have resulted in a notification on the client, with a reason for the failure. The reason is documented in the msg field of the record.

If 'No', and the type is applied, you need to debug the create or update handler to see why the handler assumed the change was applied to the Dataset back end.

The type field should never be anything other than collision, failed or applied.

3.8.2. Changes applied to the Dataset back end do not propagate to other Clients

Has sufficient time passed for the change to sync from the Dataset back end to clients?

After a change has been applied to the Dataset back end, there are 2 things that need to happen before other clients get that change.

  • a sync loop on the server must complete to update the records cache, that is, the list handler is called for that dataset
  • a sync loop on the client(s) must complete for each client's local record cache to be updated from the server's record cache
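As a rough illustration, the worst-case propagation delay is about one server sync loop plus one client sync loop. The frequencies below are assumed example values, not defaults:

```javascript
// Illustrative arithmetic, not an API: worst-case time for a back-end change
// to reach another client. Frequencies are example values.
const serverSyncFrequencySecs = 10; // how often the server syncs the dataset
const clientSyncFrequencySecs = 10; // how often a client runs its sync loop
const worstCaseDelaySecs = serverSyncFrequencySecs + clientSyncFrequencySecs;
// Wait at least this long before concluding a change failed to propagate.
```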

If sufficient time has passed, check the server logs for any errors during the sync with the Dataset back end.

Is there a recent record in the fhsync_queue collection for the Dataset?

If 'No', it is possible the TTL value for the record has passed and it was deleted. In this case, the TTL value can be increased to enable further debugging.

Another possibility is that the sync scheduler has not scheduled a sync for that Dataset. The most likely reason is a combination of no currently active clients for that Dataset and the clientSyncTimeout having elapsed since a client was last active for that Dataset.

Does the record have a timestamp for the deleted field?

If 'No', this means the sync is not processed yet.

  • Typically, the sync worker is busy processing items ahead of it in the queue. Wait until the item gets to the top of the queue.
  • If an item is not processed after a significant time, or the queue is empty except for this item, check the server logs for any errors. If there are no errors, see Enabling Debug Logs.

Is the record in the fhsync_<datasetid>_records cache up to date?

The list handler should have been called, and the result added to the records cache. To verify the records cache is updated, check the fhsync_<datasetid>_records collection for the record that was updated. The data in this record should match the data in the Dataset back end. If it does not, check the server logs for errors and the behavior of the list handler. It may help to add logging in your list handler.

Is the client sync call successful?

Check that there is a valid response from the server when the client makes its sync call. If the call is successful, verify the client is getting the updated record(s). If the updated records are not received by the client, even though the server cache has them, verify the query parameters and any meta data sent to the server are correct. Enabling the Debug logs may help determine how the incorrect data is sent back to the client.

3.8.3. Enabling Debug Logs

To enable sync logs for debugging purposes, set the following environment variable in your server.


This process generates a lot of logs. Each log entry is tagged with the specific Dataset ID being actioned in the context of that log message, where possible. These logs can be difficult to interpret, but they allow you to track updates from clients through the various stages of processing. It may help to compare logs for a successful scenario with those for an unsuccessful scenario, and identify the stage at which a failure occurs.

The most likely causes of issues are in custom handler implementations, particularly related to edge cases. It can be useful to add additional logs in your custom handlers.

Dataset back end connectivity issues, particularly intermittent issues, can be difficult to debug and identify. It can help to have external monitoring or checks on the Dataset back end.