

# Neptune Loader Command
<a name="load-api-reference-load"></a>

Loads data from an Amazon S3 bucket into a Neptune DB instance.

To load data, you must send an HTTP `POST` request to the `https://your-neptune-endpoint:port/loader` endpoint. The parameters for the `loader` request can be sent in the `POST` body or as URL-encoded parameters.

**Important**  
The MIME type must be `application/json`.

The Amazon S3 bucket must be in the same AWS Region as the cluster.

**Note**  
You can load encrypted data from Amazon S3 if it was encrypted using the Amazon S3 `SSE-S3` mode. In that case, Neptune can use your credentials to issue `s3:GetObject` calls on your behalf.  
You can also load encrypted data from Amazon S3 that was encrypted using the `SSE-KMS` mode, as long as your IAM role includes the necessary permissions to access AWS KMS. Without proper AWS KMS permissions, the bulk load operation fails and returns a `LOAD_FAILED` response.  
Neptune does not currently support loading Amazon S3 data encrypted using the `SSE-C` mode.

You don't have to wait for one load job to finish before you start another one. Neptune can queue up as many as 64 job requests at a time, provided that their `queueRequest` parameters are all set to `"TRUE"`. Queued jobs run in first-in-first-out (FIFO) order. If you don't want a load job to be queued, set its `queueRequest` parameter to `"FALSE"` (the default), so that the load job fails if another one is already in progress.

You can use the `dependencies` parameter to queue up a job that must only be run after specified previous jobs in the queue have completed successfully. If you do that and any of those specified jobs fails, your job will not be run and its status will be set to `LOAD_FAILED_BECAUSE_DEPENDENCY_NOT_SATISFIED`.

## Neptune Loader Request Syntax
<a name="load-api-reference-load-syntax"></a>

```
{
  "source" : "string",
  "format" : "string",
  "iamRoleArn" : "string",
  "mode": "NEW|RESUME|AUTO",
  "region" : "us-east-1",
  "failOnError" : "string",
  "parallelism" : "string",
  "parserConfiguration" : {
    "baseUri" : "http://base-uri-string",
    "namedGraphUri" : "http://named-graph-string"
  },
  "updateSingleCardinalityProperties" : "string",
  "queueRequest" : "TRUE",
  "dependencies" : ["load_A_id", "load_B_id"]
}
```
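As a minimal illustration of sending this request body, here is a sketch using only the Python standard library. It assumes a cluster that does not have IAM authentication enabled; IAM-enabled clusters require SigV4-signed requests, as in the `awscurl` examples later in this section.

```python
import json
import urllib.request

def start_load(endpoint, body, opener=urllib.request.urlopen):
    """POST a loader request body to https://<endpoint>/loader and
    return the parsed JSON response."""
    req = urllib.request.Request(
        f"https://{endpoint}/loader",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},  # required MIME type
        method="POST",
    )
    with opener(req) as resp:  # opener is injectable, e.g. for testing
        return json.load(resp)
```

The `opener` parameter is just a seam for testing; in practice the default `urllib.request.urlopen` sends the request to your cluster.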

**edgeOnlyLoad Syntax**  
 For an `edgeOnlyLoad`, the syntax is: 

```
{
  "source" : "string",
  "format" : "string",
  "iamRoleArn" : "string",
  "mode": "NEW|RESUME|AUTO",
  "region" : "us-east-1",
  "failOnError" : "string",
  "parallelism" : "string",
  "edgeOnlyLoad" : "string",
  "parserConfiguration" : {
    "baseUri" : "http://base-uri-string",
    "namedGraphUri" : "http://named-graph-string"
  },
  "updateSingleCardinalityProperties" : "string",
  "queueRequest" : "TRUE",
  "dependencies" : ["load_A_id", "load_B_id"]
}
```

## Neptune Loader Request Parameters
<a name="load-api-reference-load-parameters"></a>
+ **`source`**   –   An Amazon S3 URI.

  The `source` parameter accepts an Amazon S3 URI that identifies a single file, multiple files, a folder, or multiple folders. Neptune loads every data file in any folder that is specified.

  The URI can be in any of the following formats.
  + `s3://bucket_name/object-key-name`
  + `https://s3.amazonaws.com/bucket_name/object-key-name`
  + `https://s3.us-east-1.amazonaws.com/bucket_name/object-key-name`

  The `object-key-name` element of the URI is equivalent to the [prefix](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html#API_ListObjects_RequestParameters) parameter in an Amazon S3 [ListObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html) API call. It identifies all the objects in the specified Amazon S3 bucket whose names begin with that prefix. That can be a single file or folder, or multiple files and/or folders.

  The specified folder or folders can contain multiple vertex files and multiple edge files.

   For example, if you had the following folder structure and files in an Amazon S3 bucket named `bucket-name`: 

  ```
  s3://bucket-name/a/bc
  s3://bucket-name/ab/c
  s3://bucket-name/ade
  s3://bucket-name/bcd
  ```

   If the `source` parameter is specified as `s3://bucket-name/a`, the first three files are loaded: 

  ```
  s3://bucket-name/a/bc
  s3://bucket-name/ab/c
  s3://bucket-name/ade
  ```
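The prefix matching above can be previewed locally. This sketch (the helper name is made up) reproduces the loader's selection rule, which is plain string-prefix matching on the object key:

```python
from urllib.parse import urlparse

def keys_matched_by_source(object_keys, source_uri):
    """Return the object keys that a loader `source` URI would select.

    Everything after the bucket name is treated as a ListObjects prefix,
    so selection is simple string-prefix matching on the key.
    """
    prefix = urlparse(source_uri).path.lstrip("/")  # 's3://bucket-name/a' -> 'a'
    return [key for key in object_keys if key.startswith(prefix)]

keys = ["a/bc", "ab/c", "ade", "bcd"]
print(keys_matched_by_source(keys, "s3://bucket-name/a"))  # ['a/bc', 'ab/c', 'ade']
```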
+ **`format`**   –   The format of the data. For more information about data formats for the Neptune `Loader` command, see [Using the Amazon Neptune bulk loader to ingest data](bulk-load.md).

  **Allowed values**
  + **`csv`** for the [Gremlin CSV data format](bulk-load-tutorial-format-gremlin.md).
  + **`opencypher`** for the [openCypher CSV data format](bulk-load-tutorial-format-opencypher.md).
  + **`ntriples`** for the [N-Triples RDF data format](https://www.w3.org/TR/n-triples/).
  + **`nquads`** for the [N-Quads RDF data format](https://www.w3.org/TR/n-quads/).
  + **`rdfxml`** for the [RDF/XML RDF data format](https://www.w3.org/TR/rdf-syntax-grammar/).
  + **`turtle`** for the [Turtle RDF data format](https://www.w3.org/TR/turtle/).
+ **`iamRoleArn`**   –   The Amazon Resource Name (ARN) for an IAM role to be assumed by the Neptune DB instance for access to the S3 bucket. For information about creating a role that has access to Amazon S3 and then associating it with a Neptune cluster, see [Prerequisites: IAM Role and Amazon S3 Access](bulk-load-tutorial-IAM.md).

  Starting with [engine release 1.2.1.0.R3](engine-releases-1.2.1.0.R3.md), you can also chain multiple IAM roles if the Neptune DB instance and the Amazon S3 bucket are located in different AWS Accounts. In this case, `iamRoleArn` contains a comma-separated list of role ARNs, as described in [Chaining IAM roles in Amazon Neptune](bulk-load-tutorial-chain-roles.md). For example:

------
#### [ AWS CLI ]

  ```
  aws neptunedata start-loader-job \
    --endpoint-url https://your-neptune-endpoint:port \
    --source "s3://(the target bucket name)/(the target data file name)" \
    --format "csv" \
    --iam-role-arn "arn:aws:iam::(Account A ID):role/(RoleA),arn:aws:iam::(Account B ID):role/(RoleB),arn:aws:iam::(Account C ID):role/(RoleC)" \
    --s3-bucket-region "us-east-1"
  ```

  For more information, see [start-loader-job](https://docs.aws.amazon.com/cli/latest/reference/neptunedata/start-loader-job.html) in the AWS CLI Command Reference.

------
#### [ SDK ]

  ```
  import boto3
  from botocore.config import Config
  
  client = boto3.client(
      'neptunedata',
      endpoint_url='https://your-neptune-endpoint:port',
      config=Config(read_timeout=None, retries={'total_max_attempts': 1})
  )
  
  response = client.start_loader_job(
      source='s3://(the target bucket name)/(the target data file name)',
      format='csv',
      iamRoleArn='arn:aws:iam::(Account A ID):role/(RoleA),arn:aws:iam::(Account B ID):role/(RoleB),arn:aws:iam::(Account C ID):role/(RoleC)',
      s3BucketRegion='us-east-1'
  )
  
  print(response)
  ```

------
#### [ awscurl ]

  ```
  awscurl https://your-neptune-endpoint:port/loader \
    --region us-east-1 \
    --service neptune-db \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{
          "source" : "s3://(the target bucket name)/(the target data file name)",
          "iamRoleArn" : "arn:aws:iam::(Account A ID):role/(RoleA),arn:aws:iam::(Account B ID):role/(RoleB),arn:aws:iam::(Account C ID):role/(RoleC)",
          "format" : "csv",
          "region" : "us-east-1"
        }'
  ```

**Note**  
This example assumes that your AWS credentials are configured in your environment. Replace *us-east-1* with the Region of your Neptune cluster.

------
#### [ curl ]

  ```
  curl -X POST https://your-neptune-endpoint:port/loader \
    -H 'Content-Type: application/json' \
    -d '{
          "source" : "s3://(the target bucket name)/(the target data file name)",
          "iamRoleArn" : "arn:aws:iam::(Account A ID):role/(RoleA),arn:aws:iam::(Account B ID):role/(RoleB),arn:aws:iam::(Account C ID):role/(RoleC)",
          "format" : "csv",
          "region" : "us-east-1"
        }'
  ```

------
+ **`region`**   –   The `region` parameter must match the AWS Region of the cluster and the S3 bucket.

  Amazon Neptune is available in the following Regions:
  + US East (N. Virginia):   `us-east-1`
  + US East (Ohio):   `us-east-2`
  + US West (N. California):   `us-west-1`
  + US West (Oregon):   `us-west-2`
  + Canada (Central):   `ca-central-1`
  + Canada West (Calgary):   `ca-west-1`
  + South America (São Paulo):   `sa-east-1`
  + Europe (Stockholm):   `eu-north-1`
  + Europe (Spain):   `eu-south-2`
  + Europe (Ireland):   `eu-west-1`
  + Europe (London):   `eu-west-2`
  + Europe (Paris):   `eu-west-3`
  + Europe (Frankfurt):   `eu-central-1`
  + Middle East (Bahrain):   `me-south-1`
  + Middle East (UAE):   `me-central-1`
  + Israel (Tel Aviv):   `il-central-1`
  + Africa (Cape Town):   `af-south-1`
  + Asia Pacific (Hong Kong):   `ap-east-1`
  + Asia Pacific (Tokyo):   `ap-northeast-1`
  + Asia Pacific (Seoul):   `ap-northeast-2`
  + Asia Pacific (Osaka):   `ap-northeast-3`
  + Asia Pacific (Singapore):   `ap-southeast-1`
  + Asia Pacific (Sydney):   `ap-southeast-2`
  + Asia Pacific (Jakarta):   `ap-southeast-3`
  + Asia Pacific (Melbourne):   `ap-southeast-4`
  + Asia Pacific (Malaysia):   `ap-southeast-5`
  + Asia Pacific (Mumbai):   `ap-south-1`
  + Asia Pacific (Hyderabad):   `ap-south-2`
  + China (Beijing):   `cn-north-1`
  + China (Ningxia):   `cn-northwest-1`
  + AWS GovCloud (US-West):   `us-gov-west-1`
  + AWS GovCloud (US-East):   `us-gov-east-1`
+ **`mode`**   –   The load job mode.

  *Allowed values*: `RESUME`, `NEW`, `AUTO`.

  *Default value*: `AUTO`

  + `RESUME`   –   In RESUME mode, the loader looks for a previous load from this source, and if it finds one, resumes that load job. If no previous load job is found, the loader stops.

    The loader avoids reloading files that were successfully loaded in a previous job. It only tries to process failed files. If you dropped previously loaded data from your Neptune cluster, that data is not reloaded in this mode. If a previous load job loaded all files from the same source successfully, nothing is reloaded, and the loader returns success.
  + `NEW`   –   In NEW mode, the loader creates a new load request regardless of any previous loads. You can use this mode to reload all the data from a source after dropping previously loaded data from your Neptune cluster, or to load new data available at the same source.
  + `AUTO`   –   In AUTO mode, the loader looks for a previous load job from the same source, and if it finds one, resumes that job, just as in `RESUME` mode.

    If the loader doesn't find a previous load job from the same source, it loads all data from the source, just as in `NEW` mode.
+  **`edgeOnlyLoad`**   –   A flag that controls file processing order during bulk loading. 

  *Allowed values*: `"TRUE"`, `"FALSE"`.

  *Default value*: `"FALSE"`.

  When this parameter is set to `"FALSE"`, the loader automatically loads vertex files first, then edge files afterward. It does this by first scanning all files to determine their contents (vertices or edges). When this parameter is set to `"TRUE"`, the loader skips the initial scanning phase and immediately loads all files in the order they appear. For more information, see [bulk load optimize](https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-optimize.html).
+ **`failOnError`**   –   A flag to toggle a complete stop on an error.

  *Allowed values*: `"TRUE"`, `"FALSE"`.

  *Default value*: `"TRUE"`.

  When this parameter is set to `"FALSE"`, the loader tries to load all the data in the location specified, skipping any entries with errors.

  When this parameter is set to `"TRUE"`, the loader stops as soon as it encounters an error. Data loaded up to that point persists.
+ **`parallelism`**   –   This is an optional parameter that can be set to reduce the number of threads used by the bulk load process.

  *Allowed values*:
  + `LOW` –   The number of threads used is the number of available vCPUs divided by 8.
  + `MEDIUM` –   The number of threads used is the number of available vCPUs divided by 2.
  + `HIGH` –   The number of threads used is the same as the number of available vCPUs.
  + `OVERSUBSCRIBE` –   The number of threads used is the number of available vCPUs multiplied by 2. If this value is used, the bulk loader takes up all available resources.

    This does not mean, however, that the `OVERSUBSCRIBE` setting results in 100% CPU utilization. Because the load operation is I/O bound, the highest CPU utilization to expect is in the 60% to 70% range.

  *Default value*: `HIGH`

  The `parallelism` setting can sometimes result in a deadlock between threads when loading openCypher data. When this happens, Neptune returns the `LOAD_DATA_DEADLOCK` error. You can generally fix the issue by setting `parallelism` to a lower setting and retrying the load command.
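The retry guidance above can be sketched as follows. `run_job` is a stand-in you would supply (for example, a wrapper around `start_loader_job` plus status polling) that returns the job's final status string:

```python
def run_load_with_deadlock_retry(run_job, params, levels=("HIGH", "MEDIUM", "LOW")):
    """Resubmit a load at successively lower `parallelism` settings while
    the loader keeps reporting LOAD_DATA_DEADLOCK."""
    for level in levels:
        status = run_job({**params, "parallelism": level})
        if status != "LOAD_DATA_DEADLOCK":
            return level, status
    return levels[-1], status  # still deadlocked at the lowest setting
```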
+ **`parserConfiguration`**   –   An optional object with additional parser configuration values. Each of the child parameters is also optional:
  + **`namedGraphUri`**   –   The default graph for all RDF formats when no graph is specified (for non-quads formats and NQUADS entries with no graph).
  + **`baseUri`**   –   The base URI for RDF/XML and Turtle formats.
  + **`allowEmptyStrings`**   –   Gremlin users need to be able to pass empty string values (`""`) as node and edge properties when loading CSV data. If `allowEmptyStrings` is set to `false` (the default), such empty strings are treated as nulls and are not loaded. If `allowEmptyStrings` is set to `true`, they are loaded as empty-string property values.

  For more information, see [SPARQL Default Graph and Named Graphs](feature-sparql-compliance.md#sparql-default-graph).
+ **`updateSingleCardinalityProperties`**   –   This is an optional parameter that controls how the bulk loader treats a new value for single-cardinality vertex or edge properties. This is not supported for loading openCypher data (see [Loading openCypher data](#load-api-reference-load-parameters-opencypher)).

  *Allowed values*: `"TRUE"`, `"FALSE"`.

  *Default value*: `"FALSE"`.

  By default, or when `updateSingleCardinalityProperties` is explicitly set to `"FALSE"`, the loader treats a new value as an error, because it violates single cardinality.

  When `updateSingleCardinalityProperties` is set to `"TRUE"`, on the other hand, the bulk loader replaces the existing value with the new one. If multiple edge or single-cardinality vertex property values are provided in the source file(s) being loaded, the final value at the end of the bulk load could be any one of those new values. The loader only guarantees that the existing value has been replaced by one of the new ones.
+ **`queueRequest`**   –   This is an optional flag parameter that indicates whether the load request can be queued up or not. 

  You don't have to wait for one load job to complete before issuing the next one, because Neptune can queue up as many as 64 jobs at a time, provided that their `queueRequest` parameters are all set to `"TRUE"`. The queue order of the jobs will be first-in-first-out (FIFO). 

  If the `queueRequest` parameter is omitted or set to `"FALSE"`, the load request will fail if another load job is already running.

  *Allowed values*: `"TRUE"`, `"FALSE"`.

  *Default value*: `"FALSE"`.
+ **`dependencies`**   –   This is an optional parameter that can make a queued load request contingent on the successful completion of one or more previous jobs in the queue.

  Neptune can queue up as many as 64 load requests at a time, if their `queueRequest` parameters are set to `"TRUE"`. The `dependencies` parameter lets you make execution of such a queued request dependent on the successful completion of one or more specified previous requests in the queue.

  For example, if load jobs `Job-A` and `Job-B` are independent of each other, but load `Job-C` needs both of them to finish before it begins, proceed as follows:

  1. Submit `load-job-A` and `load-job-B` one after another in any order, and save their load-ids.

  1. Submit `load-job-C` with the load-ids of the two jobs in its `dependencies` field:

  ```
    "dependencies" : ["job_A_load_id", "job_B_load_id"]
  ```

  Because of the `dependencies` parameter, the bulk loader will not start `Job-C` until `Job-A` and `Job-B` have completed successfully. If either of them fails, `Job-C` is not executed, and its status is set to `LOAD_FAILED_BECAUSE_DEPENDENCY_NOT_SATISFIED`.

  You can set up multiple levels of dependency in this way, so that the failure of one job will cause all requests that are directly or indirectly dependent on it to be cancelled.
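With the boto3 `neptunedata` client, the `Job-A`/`Job-B`/`Job-C` walkthrough above could be sketched like this (endpoint configuration omitted; the helper name is made up):

```python
def submit_dependent_loads(client, source_a, source_b, source_c, role_arn, region):
    """Queue Job-A and Job-B, then queue Job-C to run only after both succeed.

    `client` is assumed to be a boto3 'neptunedata' client, or any object
    with a compatible start_loader_job method.
    """
    common = dict(format="csv", iamRoleArn=role_arn,
                  s3BucketRegion=region, queueRequest=True)
    id_a = client.start_loader_job(source=source_a, **common)["payload"]["loadId"]
    id_b = client.start_loader_job(source=source_b, **common)["payload"]["loadId"]
    # Job-C stays queued until A and B finish; if either fails, its status
    # becomes LOAD_FAILED_BECAUSE_DEPENDENCY_NOT_SATISFIED.
    id_c = client.start_loader_job(source=source_c, dependencies=[id_a, id_b],
                                   **common)["payload"]["loadId"]
    return id_a, id_b, id_c
```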
+ **`userProvidedEdgeIds`**   –   This parameter is required only when loading openCypher data that contains relationship IDs. It must be included and set to `True` when openCypher relationship IDs are explicitly provided in the load data (recommended).

  When `userProvidedEdgeIds` is absent or set to `True`, an `:ID` column must be present in every relationship file in the load.

  When `userProvidedEdgeIds` is present and set to `False`, relationship files in the load **must not** contain an `:ID` column. Instead, the Neptune loader automatically generates an ID for each relationship.

  It's useful to provide relationship IDs explicitly so that the loader can resume loading after errors in the CSV data have been fixed, without having to reload relationships that were already loaded. If relationship IDs have not been explicitly assigned, the loader cannot resume a failed load if any relationship file has had to be corrected, and must instead reload all the relationships.
+ `accessKey`   –   **[deprecated]** An access key ID of an IAM role with access to the S3 bucket and data files.

  The `iamRoleArn` parameter is recommended instead. For information about creating a role that has access to Amazon S3 and then associating it with a Neptune cluster, see [Prerequisites: IAM Role and Amazon S3 Access](bulk-load-tutorial-IAM.md).

  For more information, see [Access keys (access key ID and secret access key)](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).
+ `secretKey`   –   **[deprecated]** The `iamRoleArn` parameter is recommended instead. For information about creating a role that has access to Amazon S3 and then associating it with a Neptune cluster, see [Prerequisites: IAM Role and Amazon S3 Access](bulk-load-tutorial-IAM.md).

  For more information, see [Access keys (access key ID and secret access key)](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).

### Special considerations for loading openCypher data
<a name="load-api-reference-load-parameters-opencypher"></a>
+ When loading openCypher data in CSV format, the format parameter must be set to `opencypher`.
+ The `updateSingleCardinalityProperties` parameter is not supported for openCypher loads because all openCypher properties have single cardinality. The openCypher load format does not support arrays, and if an ID value appears more than once, it is treated as a duplicate or an insertion error (see below).
+ The Neptune loader handles duplicates that it encounters in openCypher data as follows:
  + If the loader encounters multiple rows with the same node ID, they are merged using the following rule:
    + All the labels in the rows are added to the node.
    + For each property, only one of the property values is loaded. The selection of the one to load is non-deterministic.
  + If the loader encounters multiple rows with the same relationship ID, only one of them is loaded. The selection of the one to load is non-deterministic.
  + The loader never updates property values of an existing node or relationship in the database if it encounters load data having the ID of the existing node or relationship. However, it does load node labels and properties that are not present in the existing node or relationship. 
+ Although you don't have to assign IDs to relationships, it is usually a good idea (see the `userProvidedEdgeIds` parameter above). Without explicit relationship IDs, the loader must reload all relationships in case of an error in a relationship file, rather than resuming the load from where it failed.

  Also, if the load data doesn't contain explicit relationship IDs, the loader has no way of detecting duplicate relationships.

Here is an example of an openCypher load command:

------
#### [ AWS CLI ]

```
aws neptunedata start-loader-job \
  --endpoint-url https://your-neptune-endpoint:port \
  --source "s3://bucket-name/object-key-name" \
  --format "opencypher" \
  --user-provided-edge-ids \
  --iam-role-arn "arn:aws:iam::account-id:role/role-name" \
  --s3-bucket-region "region" \
  --no-fail-on-error \
  --parallelism "MEDIUM"
```

For more information, see [start-loader-job](https://docs.aws.amazon.com/cli/latest/reference/neptunedata/start-loader-job.html) in the AWS CLI Command Reference.

------
#### [ SDK ]

```
import boto3
from botocore.config import Config

client = boto3.client(
    'neptunedata',
    endpoint_url='https://your-neptune-endpoint:port',
    config=Config(read_timeout=None, retries={'total_max_attempts': 1})
)

response = client.start_loader_job(
    source='s3://bucket-name/object-key-name',
    format='opencypher',
    userProvidedEdgeIds=True,
    iamRoleArn='arn:aws:iam::account-id:role/role-name',
    s3BucketRegion='region',
    failOnError=False,
    parallelism='MEDIUM'
)

print(response)
```

------
#### [ awscurl ]

```
awscurl https://your-neptune-endpoint:port/loader \
  --region us-east-1 \
  --service neptune-db \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{
        "source" : "s3://bucket-name/object-key-name",
        "format" : "opencypher",
        "userProvidedEdgeIds": "TRUE",
        "iamRoleArn" : "arn:aws:iam::account-id:role/role-name",
        "region" : "region",
        "failOnError" : "FALSE",
        "parallelism" : "MEDIUM"
      }'
```

**Note**  
This example assumes that your AWS credentials are configured in your environment. Replace *us-east-1* with the Region of your Neptune cluster.

------
#### [ curl ]

```
curl -X POST https://your-neptune-endpoint:port/loader \
  -H 'Content-Type: application/json' \
  -d '{
        "source" : "s3://bucket-name/object-key-name",
        "format" : "opencypher",
        "userProvidedEdgeIds": "TRUE",
        "iamRoleArn" : "arn:aws:iam::account-id:role/role-name",
        "region" : "region",
        "failOnError" : "FALSE",
        "parallelism" : "MEDIUM"
      }'
```

------

The loader response is the same as for any other load. For example:

```
{
  "status" : "200 OK",
  "payload" : {
    "loadId" : "guid_as_string"
  }
}
```

## Neptune Loader Response Syntax
<a name="load-api-reference-load-return"></a>

```
{
    "status" : "200 OK",
    "payload" : {
        "loadId" : "guid_as_string"
    }
}
```

**200 OK**  
A successfully started load job returns a `200 OK` code.
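The returned `loadId` can then be used to poll the loader status endpoint until the job finishes. Here is a sketch using the boto3 `neptunedata` client (the status strings are the ones the loader reports):

```python
import time

def wait_for_load(client, load_id, poll_seconds=10, sleep=time.sleep):
    """Poll get_loader_job_status until the job leaves the not-started /
    in-progress states, then return the final status string
    (for example LOAD_COMPLETED or LOAD_FAILED)."""
    while True:
        payload = client.get_loader_job_status(loadId=load_id)["payload"]
        status = payload["overallStatus"]["status"]
        if status not in ("LOAD_NOT_STARTED", "LOAD_IN_PROGRESS"):
            return status
        sleep(poll_seconds)  # sleep is injectable for testing
```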

## Neptune Loader Errors
<a name="load-api-reference-load-errors"></a>

When an error occurs, a JSON object is returned in the `BODY` of the response. The `message` object contains a description of the error.

**Error Categories**
+ `Error 400`   –   Syntax errors return an HTTP `400` bad request error. The message describes the error.
+ `Error 500`   –   A valid request that cannot be processed returns an HTTP `500` internal server error. The message describes the error.

The following are possible error messages from the loader with a description of the error.

**Loader Error Messages**
+ `Couldn't find the AWS credential for iam_role_arn`  (HTTP 400)

  The credentials were not found. Verify the supplied credentials against the IAM console or AWS CLI output. Make sure that you have added the IAM role specified in `iamRoleArn` to the cluster.
+ `S3 bucket not found for source`  (HTTP 400)

  The S3 bucket does not exist. Check the name of the bucket.
+ `The source source-uri does not exist/not reachable`  (HTTP 400)

  No matching files were found in the S3 bucket.
+ `Unable to connect to S3 endpoint. Provided source = source-uri and region = aws-region`  (HTTP 500)

  Unable to connect to Amazon S3. Region must match the cluster Region. Ensure that you have a VPC endpoint. For information about creating a VPC endpoint, see [Creating an Amazon S3 VPC Endpoint](bulk-load-data.md#bulk-load-prereqs-s3).
+ `Bucket is not in provided Region (aws-region)`  (HTTP 400)

  The bucket must be in the same AWS Region as your Neptune DB instance.
+ `Unable to perform S3 list operation`  (HTTP 400)

  The IAM user or role provided does not have `List` permissions on the bucket or the folder. Check the policy or the access control list (ACL) on the bucket.
+ `Start new load operation not permitted on a read replica instance`  (HTTP 405)

  Loading is a write operation. Retry load on the read/write cluster endpoint.
+ `Failed to start load because of unknown error from S3`  (HTTP 500)

  Amazon S3 returned an unknown error. Contact [AWS Support](https://aws.amazon.com/premiumsupport/).
+ `Invalid S3 access key`  (HTTP 400)

  Access key is invalid. Check the provided credentials.
+ `Invalid S3 secret key`  (HTTP 400)

  Secret key is invalid. Check the provided credentials.
+ `Max concurrent load limit breached`  (HTTP 400)

  If a load request is submitted without `"queueRequest" : "TRUE"`, and a load job is currently running, the request will fail with this error.
+ `Failed to start new load for the source "source name". Max load task queue size limit breached. Limit is 64`  (HTTP 400)

  Neptune supports queuing up as many as 64 loader jobs at a time. If an additional load request is submitted to the queue when it already contains 64 jobs, the request fails with this message.

## Neptune Loader Examples
<a name="load-api-reference-load-examples"></a>

 This example demonstrates how to use the Neptune loader to load data into a Neptune graph database using the Gremlin CSV format. The request is sent as an HTTP `POST` to the Neptune loader endpoint, with a body containing the parameters that specify the data source, format, IAM role, and other configuration options. The response includes a load ID that you can use to track the progress of the load. 

**Example Request**  
The following requests load a file in the Gremlin CSV format, using the AWS CLI, the boto3 SDK, `awscurl`, and `curl` in turn. For more information, see [Gremlin load data format](bulk-load-tutorial-format-gremlin.md).  

```
aws neptunedata start-loader-job \
  --endpoint-url https://your-neptune-endpoint:port \
  --source "s3://bucket-name/object-key-name" \
  --format "csv" \
  --iam-role-arn "ARN for the IAM role you are using" \
  --s3-bucket-region "region" \
  --no-fail-on-error \
  --parallelism "MEDIUM" \
  --no-update-single-cardinality-properties \
  --no-queue-request
```
For more information, see [start-loader-job](https://docs.aws.amazon.com/cli/latest/reference/neptunedata/start-loader-job.html) in the AWS CLI Command Reference.

```
import boto3
from botocore.config import Config

client = boto3.client(
    'neptunedata',
    endpoint_url='https://your-neptune-endpoint:port',
    config=Config(read_timeout=None, retries={'total_max_attempts': 1})
)

response = client.start_loader_job(
    source='s3://bucket-name/object-key-name',
    format='csv',
    iamRoleArn='ARN for the IAM role you are using',
    s3BucketRegion='region',
    failOnError=False,
    parallelism='MEDIUM',
    updateSingleCardinalityProperties=False,
    queueRequest=False
)

print(response)
```

```
awscurl https://your-neptune-endpoint:port/loader \
  --region us-east-1 \
  --service neptune-db \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{
        "source" : "s3://bucket-name/object-key-name",
        "format" : "csv",
        "iamRoleArn" : "ARN for the IAM role you are using",
        "region" : "region",
        "failOnError" : "FALSE",
        "parallelism" : "MEDIUM",
        "updateSingleCardinalityProperties" : "FALSE",
        "queueRequest" : "FALSE"
      }'
```
**Note**  
This example assumes that your AWS credentials are configured in your environment. Replace *us-east-1* with the Region of your Neptune cluster.

```
curl -X POST https://your-neptune-endpoint:port/loader \
  -H 'Content-Type: application/json' \
  -d '{
        "source" : "s3://bucket-name/object-key-name",
        "format" : "csv",
        "iamRoleArn" : "ARN for the IAM role you are using",
        "region" : "region",
        "failOnError" : "FALSE",
        "parallelism" : "MEDIUM",
        "updateSingleCardinalityProperties" : "FALSE",
        "queueRequest" : "FALSE"
      }'
```

**Example Response**  

```
{
    "status" : "200 OK",
    "payload" : {
        "loadId" : "ef478d76-d9da-4d94-8ff1-08d9d4863aa5"
    }
}
```