Data Flows

Data flows describe the path of data through the Nexla platform, from source to destination. The primary resources in any flow are its data sets, which are chained together in acyclic tree structures and are associated with resources describing the source, sharing, and destinations of the data.

Flow resources are nested JSON objects. The root object contains a flows array with one or more complete data flows, each of which normally begins at a data set associated with a data source and terminates in a data set or data sink.

Each data set object in a data flow contains a resource object, which may be null, and a children array, which may be empty. It also contains attributes describing the data set itself, such as its name and description.

The resource object describes an associated data source, sharers, or destinations, if any exist at that point in the flow.

The children array contains all downstream data sets connected to the current one, unless the data set is upstream of the data set for which the data flow request was made; in that case only the branch leading to the requested data set is included.

If a flow terminates in a data set associated with one or more destinations to which the outgoing data is written, those data destination objects are contained in a data_sinks array within the resource.

The following example shows the basic tree structure of a flow, with node level details omitted:

{
  "flows": [
    {
      "id": 1,
      "parent_data_set_id": null,
      "data_source": {
        "id": 10
      },
      "data_sinks": [],
      "sharers": {
        "sharers": [],
        "external_sharers": []
      },
      "children": [
        {
          "id": 2,
          "parent_data_set_id": 1,
          "data_sinks": [ ... ],
          "sharers": {
            "sharers": [],
            "external_sharers": []
          },
          "children": [
            {
              "id": 3,
              "parent_data_set_id": 2,
              "data_sinks": [ ... ],
              "sharers": {
                "sharers": [],
                "external_sharers": []
              },
              "children": []
            }
          ]
        }
      ]
    }
  ],
  "data_sources": [ ... ],
  "data_sets": [ ... ],
  "data_sinks": [ ... ],
  "data_credentials": [ ... ],
  "orgs": [ ... ],
  "users": [ ... ]
}

The response object also contains arrays of expanded resource objects for each resource included in the returned flows.
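
Because every node in a flow has the same shape (an id, a data_sinks array, and a children array), a flow tree can be traversed with a short recursion. The following is a minimal sketch in Python, assuming the response body has already been parsed into a dictionary; as in the examples in this section, the values in each node's data_sinks array are numeric sink ids.

def collect_flow_resources(node, data_set_ids=None, sink_ids=None):
    """Walk one flow tree and gather the data set ids and data sink ids it contains."""
    if data_set_ids is None:
        data_set_ids = []
    if sink_ids is None:
        sink_ids = []
    data_set_ids.append(node["id"])              # each flow node is a data set
    sink_ids.extend(node.get("data_sinks", []))  # sink ids attached at this node, if any
    for child in node.get("children", []):       # recurse into downstream data sets
        collect_flow_resources(child, data_set_ids, sink_ids)
    return data_set_ids, sink_ids

# Usage: `body` is the parsed JSON response of any /data_flows request
# for flow in body["flows"]:
#     data_set_ids, sink_ids = collect_flow_resources(flow)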

List All Flows

Use the endpoint below to view all of the user's data flow resources.
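
A minimal sketch of calling this endpoint with Python's requests library is shown here; the base URL and the bearer-token Authorization header are assumptions, so substitute the values appropriate for your environment.

import requests

API_BASE = "https://api.example.com"   # assumption: replace with your Nexla API base URL
ACCESS_TOKEN = "<your-access-token>"   # assumption: replace with a valid API access token

response = requests.get(
    f"{API_BASE}/data_flows",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
flows = response.json()["flows"]
print(f"Found {len(flows)} flow(s)")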

List All Flows: Request
GET /data_flows
List All Flows: Response
{
"flows": [
{
"id": 5059,
"parent_data_set_id": null,
"data_source": {
"id": 5023
},
"data_sinks":[],
"sharers":{
"sharers":[],
"external_sharers":[]
},
"children": [
{
"id": 5061,
"parent_data_set_id": 5059,
"data_sinks":[],
"sharers":{
"sharers":[],
"external_sharers":[]
},
"children": [
{
"id": 5062,
"parent_data_set_id": 5061,
"data_sinks": [
5029,
5030
],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": []
}
]
}
]
},
{
"id": 5060,
"parent_data_set_id": null,
"data_source": {
"id": 5023
},
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": []
},
{
"id": 5063,
"parent_data_set_id": null,
"data_source": {
"id": 5024
},
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5065,
"parent_data_set_id": 5063,
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5066,
"parent_data_set_id": 5065,
"data_sinks": [
5031,
5032
],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": []
}
]
}
]
}
],
"data_sources": [
{
"id": 5023,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Source 1",
"status": "PAUSED",
"description": "Simple reference data source. Uses default settings and does not require ingestion.",
"source_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [
5028
]
},
...
],
"data_sets": [
{
"id": 5059,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": null,
"data_source_id": 5023,
"name": "Reference Data Set 1",
"description": "Pre-canned data set for reference data source.",
"status": "PAUSED",
"data_sinks": [],
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
...
],
"data_sinks": [
{
"id": 5029,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 1",
"status": "PAUSED",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [
5028
]
},
...
],
"data_credentials": [
{
"id": 5028,
"owner_id": 2,
"org_id": 1,
"name": "Reference Flow Credentials 1",
...
},
...
]
}

Show Flows for a Data Source

Use the endpoint below to retrieve only the flows connected to a particular data source.

Show Flows For A Source: Request
GET /data_flows/data_source/{data_source_id}
Show Flows For A Source: Response
{
"flows": [
{
"id": 5059,
"parent_data_set_id": null,
"data_source": {
"id": 5023
},
"data_sinks":[],
"sharers":{
"sharers":[],
"external_sharers":[]
},
"children": [
{
"id": 5061,
"parent_data_set_id": 5059,
"data_sinks":[],
"sharers":{
"sharers":[],
"external_sharers":[]
},
"children": [
{
"id": 5062,
"parent_data_set_id": 5061,
"data_sinks": [
5029,
5030
],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": []
}
]
}
]
}
],
"data_sources": [
{
"id": 5023,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Source 1",
"status": "PAUSED",
"description": "Simple reference data source. Uses default settings and does not require ingestion.",
"source_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [
5028
]
}
],
"data_sets": [
{
"id": 5059,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": null,
"data_source_id": 5023,
"name": "Reference Data Set 1",
"description": "Pre-canned data set for reference data source.",
"status": "PAUSED",
"data_sinks": [],
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
...
],
"data_sinks": [
{
"id": 5029,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 1",
"status": "PAUSED",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [
5028
]
},
...

],
"data_credentials": [
{
"id": 5028,
"owner_id": 2,
"org_id": 1,
"name": "Reference Flow Credentials 1",
"description": null,
"credentials_type": "s3",
"verified_status": "200 Ok",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
}
]
}

Show Flows for a Dataset

Use the method below with any data set id to get the full description of the flow to which the data set belongs. Note that the response can be the same for two different data set ids if the data sets are both part of the same flow.
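
As a sketch of how this endpoint might be used (under the same base URL and access token assumptions as the earlier example), the root node of the returned flow carries the data_source reference, so the originating source of any data set can be looked up like this:

import requests

def source_for_data_set(data_set_id, api_base, access_token):
    """Return the id of the data source feeding the flow that contains data_set_id."""
    response = requests.get(
        f"{api_base}/data_flows/{data_set_id}",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    response.raise_for_status()
    root = response.json()["flows"][0]   # root node of the containing flow
    return root["data_source"]["id"]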

Flow For A Dataset: Request
GET /data_flows/{data_set_id}
Flow For A Dataset: Response
{
"flows": [
{
"id": 5059,
"parent_data_set_id": null,
"data_source": {
"id": 5023
},
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5061,
"parent_data_set_id": 5059,
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5062,
"parent_data_set_id": 5061,
"data_sinks": [5029, 5030],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": []
}
]
}
]
}
],
"data_sources": [
{
"id": 5023,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Source 1",
"status": "PAUSED",
"description": "Simple reference data source. Uses default settings and does not require ingestion.",
"source_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
}
],
"data_sets": [
{
"id": 5059,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": null,
"data_source_id": 5023,
"name": "Reference Data Set 1",
"description": "Pre-canned data set for reference data source.",
"status": "PAUSED",
"data_sinks": [],
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
{
"id": 5061,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": 5059,
"data_sinks": [],
"name": null,
"description": null,
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
{
"id": 5062,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": 5061,
"data_sinks": [5029, 5030],
"status": "PAUSED",
"name": null,
"description": null,
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
}
],
"data_sinks": [
{
"id": 5029,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 1",
"status": "PAUSED",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
},
{
"id": 5030,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 2",
"status": "PAUSED",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
}
],
"data_credentials": [
{
"id": 5028,
"owner_id": 2,
"org_id": 1,
"name": "Reference Flow Credentials 1",
"description": null,
"credentials_type": "s3",
"verified_status": "200 Ok",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
}
]
}

Show Flows to a Destination

Use the endpoint below to retrieve only the flows connected to a particular data destination. Note that the response for a flow connected to a data sink includes only the data sets on the branch from the data source that leads directly to the destination.

Flows To A Destination: Request
GET /data_flows/data_sink/{data_sink_id}
Flows To A Destination: Response
{
"flows": [
{
"id": 5059,
"parent_data_set_id": null,
"data_source": {
"id": 5023
},
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5061,
"parent_data_set_id": 5059,
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5062,
"parent_data_set_id": 5061,
"data_sinks": [5029, 5030],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": []
}
]
}
]
}
],
"data_sources": [
{
"id": 5023,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Source 1",
"status": "PAUSED",
"description": "Simple reference data source. Uses default settings and does not require ingestion.",
"source_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
}
],
"data_sets": [
{
"id": 5059,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": null,
"data_source_id": 5023,
"name": "Reference Data Set 1",
"description": "Pre-canned data set for reference data source.",
"status": "PAUSED",
"data_sinks": [],
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
{
"id": 5061,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": 5059,
"data_sinks": [],
"name": null,
"description": null,
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
{
"id": 5062,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": 5061,
"data_sinks": [5029, 5030],
"status": "PAUSED",
"name": null,
"description": null,
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
}
],
"data_sinks": [
{
"id": 5029,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 1",
"status": "PAUSED",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
},
{
"id": 5030,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 2",
"status": "PAUSED",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
}
],
"data_credentials": [
{
"id": 5028,
"owner_id": 2,
"org_id": 1,
"name": "Reference Flow Credentials 1",
"description": null,
"credentials_type": "s3",
"verified_status": "200 Ok",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
}
]
}

Update Flow

Most updates to data flow configurations must be done directly with PUT requests on the component resources, such as data_sources, data_sets, and data_sinks. However, /data_flows does support a few composite updates: activate, pause, and delete. These are cascaded across all components of the flow when applicable.

Additionally, the Nexla CLI supports commands to export and import full flow specifications.

Control Data Flow

Activate Full Flow

Use the endpoint below to activate all the component resources of a flow. If the root data source is not already active, it will be activated as well.

Activate Full Flow: Request
PUT /data_flows/data_source/{data_source_id}/activate
Activate Full Flow: Response
{
"flows": [
{
"id": 5059,
"parent_data_set_id": null,
"data_source": {
"id": 5023
},
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5061,
"parent_data_set_id": 5059,
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5062,
"parent_data_set_id": 5061,
"data_sinks": [5029, 5030],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": []
}
]
}
]
}
],
"data_sources": [
{
"id": 5023,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Source 1",
"status": "ACTIVE",
"description": "Simple reference data source. Uses default settings and does not require ingestion.",
"source_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
}
],
"data_sets": [
{
"id": 5059,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": null,
"data_source_id": 5023,
"name": "Reference Data Set 1",
"description": "Pre-canned data set for reference data source.",
"status": "ACTIVE",
"data_sinks": [],
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
{
"id": 5061,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": 5059,
"data_sinks": [],
"name": null,
"description": null,
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
{
"id": 5062,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": 5061,
"data_sinks": [5029, 5030],
"status": "ACTIVE",
"name": null,
"description": null,
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
}
],
"data_sinks": [
{
"id": 5029,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 1",
"status": "ACTIVE",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
},
{
"id": 5030,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 2",
"status": "ACTIVE",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
}
],
"data_credentials": [
{
"id": 5028,
"owner_id": 2,
"org_id": 1,
"name": "Reference Flow Credentials 1",
"description": null,
"credentials_type": "s3",
"verified_status": "200 Ok",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
}
]
}

Pause Full Flow

Use the endpoint below to pause all the component resources of a flow. /data_flows/{data_set_id}/pause and /data_flows/data_sink/{data_sink_id}/pause are also supported, but they only pause the flow from the requested resource downwards. To pause the entire flow from a downstream resource, include the ?all=1 query parameter.
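
For instance, here is a sketch (under the same base URL and access token assumptions as the earlier examples) of pausing a flow from a data set downwards, or pausing the entire flow by adding all=1:

import requests

def pause_from_data_set(data_set_id, api_base, access_token, entire_flow=False):
    """Pause a flow from data_set_id downwards; with entire_flow=True, pause the whole flow."""
    response = requests.put(
        f"{api_base}/data_flows/{data_set_id}/pause",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"all": 1} if entire_flow else None,   # ?all=1 pauses the entire flow
    )
    response.raise_for_status()
    return response.json()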

Pause Full Flow: Request
PUT /data_flows/data_source/{data_source_id}/pause
Pause Full Flow: Response
{
"flows": [
{
"id": 5059,
"parent_data_set_id": null,
"data_source": {
"id": 5023
},
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5061,
"parent_data_set_id": 5059,
"data_sinks": [],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": [
{
"id": 5062,
"parent_data_set_id": 5061,
"data_sinks": [5029, 5030],
"sharers": {
"sharers": [],
"external_sharers": []
},
"children": []
}
]
}
]
}
],
"data_sources": [
{
"id": 5023,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Source 1",
"status": "PAUSED",
"description": "Simple reference data source. Uses default settings and does not require ingestion.",
"source_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
}
],
"data_sets": [
{
"id": 5059,
"owner_id": 2,
"org_id": 1,
"parent_data_set_id": null,
"data_source_id": 5023,
"name": "Reference Data Set 1",
"description": "Pre-canned data set for reference data source.",
"status": "PAUSED",
"data_sinks": [],
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
},
...
],
"data_sinks": [
{
"id": 5029,
"owner_id": 2,
"org_id": 1,
"name": "Reference Data Sink 1",
"status": "PAUSED",
"description": null,
"sink_type": "s3",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z",
"data_credentials": [5028]
},
...
],
"data_credentials": [
{
"id": 5028,
"owner_id": 2,
"org_id": 1,
"name": "Reference Flow Credentials 1",
"description": null,
"credentials_type": "s3",
"verified_status": "200 Ok",
"tags": [],
"created_at": "2018-05-16T17:56:17.000Z",
"updated_at": "2018-05-16T17:56:17.000Z"
}
]
}

Delete Flows

Issue a DELETE request to any of the /data_flows endpoints with a specific data source, data set, or destination id to delete the resource at that id and its downstream resources. Include the all=1 query parameter to delete the entire flow, including upstream resources.

The presence of any ACTIVE resources in the data flow to be deleted will cause the request to fail with a Method-Not-Allowed (405) error, and the JSON response will list the resources that must be paused.

A successful request to delete a data flow returns OK (200) with no response body.

Delete Flow: Request
DELETE /data_flows/data_source/{data_source_id}
...
Alternate:
/data_flows/{data_set_id}
/data_flows/data_sink/{data_sink_id}
Delete Flow: Response
{
"data_sources": [5023],
"data_sets": [5059, 5061, 5062],
"message": "Active flow resources must be paused before flow deletion!"
}
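
A sketch of a defensive delete, under the same base URL and access token assumptions as the earlier examples: if the DELETE is rejected with 405 because resources are still active, pause the full flow first and retry.

import requests

def delete_flow(data_source_id, api_base, access_token):
    """Delete an entire flow, pausing it first if active resources block the deletion."""
    headers = {"Authorization": f"Bearer {access_token}"}
    url = f"{api_base}/data_flows/data_source/{data_source_id}"

    response = requests.delete(url, headers=headers)
    if response.status_code == 405:
        # The 405 body lists the resources that must be paused before deletion
        requests.put(f"{url}/pause", headers=headers).raise_for_status()
        response = requests.delete(url, headers=headers)
    response.raise_for_status()
    return response.status_code   # 200 with no body on success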

Import & Export Flows

The Nexla CLI supports commands to export a flow specification to a JSON file and subsequently import that specification as a new flow in the same or a different user account.

Export a Flow

Use this command to export one or more flows originating from a data source.

Each Nexla source can have multiple branched flows/pipelines connected to it. Use the -a option to automatically export all branches. Alternately, call this command without the -a option, and the CLI will list the pipelines originating from that source and allow you to select which branches should be exported to a local JSON file.

Flow Export: Request
nexla flows export

usage: nexla flows export [--source SOURCE] [--output_file OUTPUT_FILE] [options]

description: Export flow specification

arguments:
--source SOURCE, -s SOURCE
id of source to be exported
--output_file OUTPUT_FILE, -o OUTPUT_FILE
name of output file to be exported

options:
-a, --all Export all the flows of the source (by default, without entering the pipeline ids)

For example, the following calls export the flows for source 9311 to a local file, ~/Desktop/export_file.json:

Flow Export: Example
nexla flows export -s 9311 -o ~/Desktop/export_file.json
nexla flows export -s 9311 -o ~/Desktop/export_file.json -a
Flow Export: Response
Example 1: With the -a option to export all flow branches of a source
➜ nexla flows export -s 9505 -o ~/Desktop/export_9505.json -a
[2022-06-17 11:10:57 UTC] Getting all pipeline ids...
[2022-06-17 11:10:57 UTC] Found 2 pipelines, exporting them
[2022-06-17 11:10:58 UTC] Creating template for dataset, sink and datamap
[2022-06-17 11:10:58 UTC] Scanning pipeline 1
[2022-06-17 11:11:01 UTC] Scanning pipeline 2
[2022-06-17 11:11:05 UTC] Fetching source details
[2022-06-17 11:11:06 UTC] exporting json..

Example 2: Without the -a option, the CLI waits for user input to select the flow branches that should be exported
➜ nexla flows export -s 9505 -o ~/Desktop/export_9505.json
+-------------+--------+-------------------------------+-------------------------------+-------------+
| pipeline_id | source | detected_dataset              | dataset_1                     | destination |
+-------------+--------+-------------------------------+-------------------------------+-------------+
| 1           | 9505   | 14325 (1 - nexla_test,PAUSED) |                               | 8102 (1234) |
+-------------+--------+-------------------------------+-------------------------------+-------------+
| 2           | 9505   | 14325 (1 - nexla_test,PAUSED) | 14450 (1 - nexla_test,PAUSED) |             |
+-------------+--------+-------------------------------+-------------------------------+-------------+
Enter pipeline ids : 1
[2022-06-16 08:14:04 UTC] Creating template for dataset, sink and datamap
[2022-06-16 08:14:04 UTC] Scanning pipeline 1
[2022-06-16 08:14:08 UTC] Fetching source details
[2022-06-16 08:14:10 UTC] exporting json..

Import a Flow

Use this command to import a flow from a previously exported JSON file. This is a quick way to spin up replicas of a data flow, with modifications as needed.

Import Flows: Request
➜ nexla flows import
usage: nexla [-h] [--input_file INPUT_FILE] [--properties PROPERTIES]

Import Flows

optional arguments:
-h, --help show this help message and exit
--input_file INPUT_FILE, -i INPUT_FILE
path of json file to be imported
--properties PROPERTIES, -p PROPERTIES
path of properties json file

Some flow import scenarios, such as migrating flows across environments or accounts, often require additional input, like assigning or creating credentials for the relevant sources and destinations. You can provide this input either in an optional properties file or interactively during the import.

Example of flow import with a properties file

In the properties file, you can specify the credentials to be used by the imported flow.

Import Flow: Request
  nexla flows import -i ~/Desktop/export_9505.json -p ~/Desktop/export_9505_properties.json
Import Flow: Response
[2022-06-17 12:33:50 UTC] Using credential 6952 from properties file
[2022-06-17 12:33:50 UTC] Creating source : nexla_test
[2022-06-17 12:33:52 UTC] Data Source created with ID: 11204
[2022-06-17 12:33:53 UTC] Creating Dataset : 1 - nexla_test
[2022-06-17 12:33:56 UTC] ID: 17284, Name: 1 - nexla_test
[2022-06-17 12:33:59 UTC] Created Dataset with ID, 17284 from dataset 17283
[2022-06-17 12:33:59 UTC] Created Dataset id ====> 17284
[2022-06-17 12:33:59 UTC] Parent dataset id for sink is ===> 17283
[2022-06-17 12:33:59 UTC] Creating Sink : 1234
[2022-06-17 12:34:02 UTC] Sink created with ID: 9535, and associated with dataset 17283

Example of flow import without a properties file

When importing a flow without a properties file, the CLI will prompt you to choose the credentials that are relevant to the flow.

Import Flow: Request
  nexla flows import -i ~/Desktop/export_9505.json
Import Flow: Response
[2022-06-18 04:24:59 UTC] Credential Name given on Exported Pipeline :  sk21
[2022-06-18 04:24:59 UTC] Available gdrive credentials
[2022-06-18 04:24:59 UTC] credential_id credential_name
[2022-06-18 04:24:59 UTC] 7041 sk21 (Copy) (Copy)
[2022-06-18 04:24:59 UTC] 7039 sk21 (Copy)
[2022-06-18 04:24:59 UTC] 6952 sk21
Enter credential_id : 6952
[2022-06-18 04:25:09 UTC] Creating source : nexla_test
[2022-06-18 04:25:11 UTC] Data Source created with ID: 11213
[2022-06-18 04:25:13 UTC] Creating Dataset : 1 - nexla_test
[2022-06-18 04:25:15 UTC] ID: 17290, Name: 1 - nexla_test
[2022-06-18 04:25:18 UTC] Created Dataset with ID, 17290 from dataset 17289
[2022-06-18 04:25:18 UTC] Created Dataset id ====> 17290
[2022-06-18 04:25:18 UTC] Parent dataset id for sink is ===> 17289
[2022-06-18 04:25:20 UTC] Credential Name given on Exported Pipeline : Abs_test
[2022-06-18 04:25:20 UTC] credential_id credential_name
[2022-06-18 04:25:20 UTC] 7040 Abs_test (Copy) (Copy)
[2022-06-18 04:25:20 UTC] 7038 Abs_test (Copy)
[2022-06-18 04:25:20 UTC] 6954 Abs_test
[2022-06-18 04:25:20 UTC] 6953 Azure Blob Storage_test
Enter credential_id : 6954
[2022-06-18 04:25:30 UTC] Creating Sink : 1234
[2022-06-18 04:25:32 UTC] Sink created with ID: 9546, and associated with dataset 17289