Concepts
At a high level, RestApp builds and syncs your model from source to destination. This is called a data pipeline.
Terms to know
Sources: all the sources from which you retrieve your data.
Destinations: any destination to which you send your data.
Pipelines: end-to-end models built in No Code to transform your data from source to destination.
Connectors: all the integrations available to retrieve data as a source (e.g., Postgres, MySQL, MongoDB) and send data as a destination (e.g., Hubspot, GoogleSheets, SFTP).
Sync modes: the type of sync RestApp will perform, either Add data (Insert), Add & Update data (Upsert), or Erase & Replace data (Drop).
Fields: the total number of fields in the downstream SaaS destination (e.g., Hubspot) that RestApp syncs data into for a workspace. Within a pipeline, they appear as "matching fields".
Automations: every automation (job) for a pipeline can be triggered either manually or at regular intervals through the RestApp scheduler.
Workspaces: a scope in which admins can share connectors and pipelines with users, with rights and permissions, to ensure visibility and monitoring of all the data-as-a-product.
More on Pipelines
Pipelines are essentially end-to-end models built with No Code functions (SQL, NoSQL & Python) attached to sources that return a set of records to a destination. RestApp processes the pipeline end to end, so it runs from any source to any of your destinations. RestApp can perform cross-table queries as expected, as long as the credentials stored in RestApp are scoped with permissions to access all of the queried tables.
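As an illustration, here is the kind of cross-table query a pipeline's SQL function might run, sketched with SQLite. The table names, columns, and data are hypothetical and not part of RestApp:

```python
import sqlite3

# An in-memory database standing in for a connected source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ada"), (2, "Bea")])
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)", [(1, 1, 9.5), (2, 1, 20.0), (3, 2, 5.0)]
)

# A cross-table query: total order amount per customer. This only works
# if the credentials can read both tables, as noted above.
rows = conn.execute(
    "SELECT c.name, SUM(o.amount) FROM customers c "
    "JOIN orders o ON o.customer_id = c.id "
    "GROUP BY c.id ORDER BY c.id"
).fetchall()
print(rows)  # [('Ada', 29.5), ('Bea', 5.0)]
```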
In practice, a pipeline lets you simply add sources, model your data with built-in SQL & Python functions, and then send the result to a specific destination by choosing a sync mode.
You should take some time to develop pipelines that extract and shape your data into a format your destination can ingest smoothly. RestApp will not, for instance, automatically extract fields from a JSON object in a cell in your source to send as a scalar to your destination. But you can do that in our No/Low Code editor!
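For example, flattening a JSON cell into scalar fields, the kind of transformation you would build in the No/Low Code editor, might look like this in plain Python (the column and key names are made up for illustration):

```python
import json

# A source row where one column holds a JSON object as a string.
row = {"id": 42, "payload": '{"email": "ada@example.com", "plan": "pro"}'}

# Extract scalar fields from the JSON column so the destination
# receives flat values instead of a JSON blob.
payload = json.loads(row["payload"])
flat = {"id": row["id"], "email": payload["email"], "plan": payload["plan"]}
print(flat)  # {'id': 42, 'email': 'ada@example.com', 'plan': 'pro'}
```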
More on Automation
After you have built a pipeline, you create an automation that specifies when it runs: triggered manually or on a regular schedule. When we run a sync, we fetch the data from your source as specified in your pipeline and compare each record by the unique identifier (the primary key) you specified when creating the model. From there, we find the changes, which may be new records, changed records, or deleted records, and we update the destination as specified in the pipeline.
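The comparison step can be sketched as follows. This is an illustrative Python sketch of diffing by primary key, not RestApp's actual implementation:

```python
def diff_records(source, destination, key="id"):
    """Split source vs. destination records into new, changed, deleted."""
    src = {r[key]: r for r in source}
    dst = {r[key]: r for r in destination}
    new = [src[k] for k in src.keys() - dst.keys()]       # only in source
    deleted = [dst[k] for k in dst.keys() - src.keys()]   # only in destination
    changed = [src[k] for k in src.keys() & dst.keys() if src[k] != dst[k]]
    return new, changed, deleted

source = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bea"}]
dest = [{"id": 2, "name": "Bee"}, {"id": 3, "name": "Cal"}]
new, changed, deleted = diff_records(source, dest)
print(new)      # [{'id': 1, 'name': 'Ada'}]
print(changed)  # [{'id': 2, 'name': 'Bea'}]
print(deleted)  # [{'id': 3, 'name': 'Cal'}]
```

How each kind of change is applied then depends on the sync mode, described below.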
More on Destination
Depending on your destination, there may be one or more supported sync modes:
- The "Add data" mode will only add new records to your destination.
- The "Add & Update data" mode will add new records and update existing records.
- The "Erase & Replace data" mode will erase existing records and replace them with new ones.
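To make the three modes concrete, here is a minimal Python sketch of their semantics against a destination keyed by primary key. This is an illustration of the behavior described above, not RestApp code, and the mode names are shorthand:

```python
def sync(destination, incoming, mode):
    """Apply incoming records to a destination dict keyed by primary key."""
    if mode == "add":  # Add data (Insert): only new records are written
        for rec in incoming:
            destination.setdefault(rec["id"], rec)
    elif mode == "add_update":  # Add & Update data (Upsert): new + updated
        for rec in incoming:
            destination[rec["id"]] = rec
    elif mode == "erase_replace":  # Erase & Replace data (Drop): start fresh
        destination.clear()
        for rec in incoming:
            destination[rec["id"]] = rec
    return destination

dest = {1: {"id": 1, "v": "old"}}
incoming = [{"id": 1, "v": "new"}, {"id": 2, "v": "x"}]
print(sync(dict(dest), incoming, "add"))         # id 1 keeps "old"; id 2 added
print(sync(dict(dest), incoming, "add_update"))  # id 1 updated; id 2 added
```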