CreateLocationHdfsCommand

Creates a transfer location for a Hadoop Distributed File System (HDFS). DataSync can use this location as a source or destination for transferring data.

Before you begin, make sure that you understand how DataSync accesses HDFS clusters.

Example Syntax

Use a bare-bones client and the command you need to make an API call.

import { DataSyncClient, CreateLocationHdfsCommand } from "@aws-sdk/client-datasync"; // ES Modules import
// const { DataSyncClient, CreateLocationHdfsCommand } = require("@aws-sdk/client-datasync"); // CommonJS import
const client = new DataSyncClient(config);
const input = { // CreateLocationHdfsRequest
  Subdirectory: "STRING_VALUE",
  NameNodes: [ // HdfsNameNodeList // required
    { // HdfsNameNode
      Hostname: "STRING_VALUE", // required
      Port: Number("int"), // required
    },
  ],
  BlockSize: Number("int"),
  ReplicationFactor: Number("int"),
  KmsKeyProviderUri: "STRING_VALUE",
  QopConfiguration: { // QopConfiguration
    RpcProtection: "DISABLED" || "AUTHENTICATION" || "INTEGRITY" || "PRIVACY",
    DataTransferProtection: "DISABLED" || "AUTHENTICATION" || "INTEGRITY" || "PRIVACY",
  },
  AuthenticationType: "SIMPLE" || "KERBEROS", // required
  SimpleUser: "STRING_VALUE",
  KerberosPrincipal: "STRING_VALUE",
  KerberosKeytab: new Uint8Array(), // e.g. Buffer.from("") or new TextEncoder().encode("")
  KerberosKrb5Conf: new Uint8Array(), // e.g. Buffer.from("") or new TextEncoder().encode("")
  AgentArns: [ // AgentArnList // required
    "STRING_VALUE",
  ],
  Tags: [ // InputTagList
    { // TagListEntry
      Key: "STRING_VALUE", // required
      Value: "STRING_VALUE",
    },
  ],
};
const command = new CreateLocationHdfsCommand(input);
const response = await client.send(command);
// { // CreateLocationHdfsResponse
//   LocationArn: "STRING_VALUE",
// };
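The skeleton above lists every field. As a narrower illustration, a minimal input using SIMPLE authentication only needs the four required fields; the hostname, port, agent ARN, user name, and subdirectory below are hypothetical placeholders, not values from this reference.

```typescript
// Minimal CreateLocationHdfs input with SIMPLE authentication.
// All concrete values here are hypothetical placeholders.
const simpleAuthInput = {
  NameNodes: [
    { Hostname: "namenode.example.com", Port: 8020 }, // required
  ],
  AuthenticationType: "SIMPLE" as const, // required
  SimpleUser: "hdfs-user",               // required when AuthenticationType is SIMPLE
  AgentArns: [
    "arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0", // required
  ],
  Subdirectory: "/data/exports",         // optional; defaults to "/"
};
```

You would then pass this object to `new CreateLocationHdfsCommand(simpleAuthInput)` and send it with the client, exactly as in the example above.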

CreateLocationHdfsCommand Input

See CreateLocationHdfsCommandInput for more details

Parameter
Type
Description
AgentArns
Required
string[] | undefined

The Amazon Resource Names (ARNs) of the DataSync agents that can connect to your HDFS cluster.

AuthenticationType
Required
HdfsAuthenticationType | undefined

The type of authentication used to determine the identity of the user.

NameNodes
Required
HdfsNameNode[] | undefined

The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.

BlockSize
number | undefined

The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
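The two constraints above (a multiple of 512 bytes, defaulting to 128 MiB) are easy to check client-side before sending the request. The helper below is a hypothetical pre-flight check, not part of the SDK:

```typescript
// 128 MiB, the documented default block size, expressed in bytes.
const DEFAULT_HDFS_BLOCK_SIZE = 128 * 1024 * 1024; // 134217728

// Hypothetical validity check mirroring the documented constraint:
// BlockSize must be a positive multiple of 512 bytes.
function isValidHdfsBlockSize(bytes: number): boolean {
  return Number.isInteger(bytes) && bytes > 0 && bytes % 512 === 0;
}
```

For example, `isValidHdfsBlockSize(DEFAULT_HDFS_BLOCK_SIZE)` passes, while a value like `1000` fails because it is not a multiple of 512.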

KerberosKeytab
Uint8Array | undefined

The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's path. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.

KerberosKrb5Conf
Uint8Array | undefined

The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's path. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.
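In the SDK (as opposed to the CLI), both Kerberos parameters take raw bytes as a `Uint8Array`. A common approach is to read the files with Node's `fs` module, since `readFileSync` returns a `Buffer`, which is a `Uint8Array` subclass. The sketch below writes placeholder files to the temp directory purely so it is self-contained; in practice you would point at your real keytab and krb5.conf paths, and the principal shown is hypothetical.

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Placeholder files so this sketch is runnable; substitute real paths.
const keytabPath = join(tmpdir(), "example.keytab");
const krb5Path = join(tmpdir(), "example-krb5.conf");
writeFileSync(keytabPath, Buffer.from("placeholder keytab bytes"));
writeFileSync(krb5Path, "[libdefaults]\n  default_realm = EXAMPLE.COM\n");

// readFileSync returns a Buffer (a Uint8Array subclass), which can be
// assigned directly to KerberosKeytab and KerberosKrb5Conf.
const kerberosFragment = {
  AuthenticationType: "KERBEROS" as const,
  KerberosPrincipal: "hdfs-user@EXAMPLE.COM", // hypothetical principal
  KerberosKeytab: readFileSync(keytabPath),
  KerberosKrb5Conf: readFileSync(krb5Path),
};
```

These fields would be merged into the full request object shown in the syntax example, alongside `NameNodes` and `AgentArns`.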

KerberosPrincipal
string | undefined

The Kerberos principal with access to the files and folders on the HDFS cluster.

If KERBEROS is specified for AuthenticationType, this parameter is required.

KmsKeyProviderUri
string | undefined

The URI of the HDFS cluster's Key Management Server (KMS).

QopConfiguration
QopConfiguration | undefined

The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
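The defaulting rules described above can be sketched as a small function. This is a hypothetical client-side mirror of behavior the service applies itself; it exists only to make the effective values visible, not to replicate anything in the SDK:

```typescript
type QopLevel = "DISABLED" | "AUTHENTICATION" | "INTEGRITY" | "PRIVACY";

interface QopConfiguration {
  RpcProtection?: QopLevel;
  DataTransferProtection?: QopLevel;
}

// Hypothetical illustration of the documented defaulting rules:
// - no QopConfiguration at all => both settings default to PRIVACY
// - only one setting provided  => the other assumes the same value
function effectiveQop(qop?: QopConfiguration): Required<QopConfiguration> {
  if (!qop || (!qop.RpcProtection && !qop.DataTransferProtection)) {
    return { RpcProtection: "PRIVACY", DataTransferProtection: "PRIVACY" };
  }
  return {
    RpcProtection: qop.RpcProtection ?? qop.DataTransferProtection!,
    DataTransferProtection: qop.DataTransferProtection ?? qop.RpcProtection!,
  };
}
```

For example, passing only `{ RpcProtection: "INTEGRITY" }` yields `INTEGRITY` for both settings.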

ReplicationFactor
number | undefined

The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.

SimpleUser
string | undefined

The user name used to identify the client on the host operating system.

If SIMPLE is specified for AuthenticationType, this parameter is required.

Subdirectory
string | undefined

A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

Tags
TagListEntry[] | undefined

The key-value pairs that represent the tags that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.

CreateLocationHdfsCommand Output

Parameter
Type
Description
$metadata
Required
ResponseMetadata
Metadata pertaining to this request.
LocationArn
string | undefined

The ARN of the source HDFS cluster location that you create.

Throws

Name
Fault
Details
InternalException
server

This exception is thrown when an error occurs in the DataSync service.

InvalidRequestException
client

This exception is thrown when the client submits a malformed request.

DataSyncServiceException
Base exception class for all service exceptions from DataSync service.
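At runtime, the faults listed above can be distinguished by the caught error's `name`. With the SDK installed you would typically use `instanceof` checks against the exported exception classes instead; the dependency-free sketch below just routes by name, with the mapping taken from the fault table above:

```typescript
// Route a caught error by the fault types documented for this command.
// With @aws-sdk/client-datasync available, instanceof checks against
// InvalidRequestException / InternalException are the more typical pattern.
function classifyDataSyncFault(err: { name?: string }): "client" | "server" | "unknown" {
  switch (err.name) {
    case "InvalidRequestException": // malformed request: fix the input and resend
      return "client";
    case "InternalException":       // error inside the DataSync service
      return "server";
    default:
      return "unknown";
  }
}
```

A typical use is wrapping `client.send(command)` in a try/catch and passing the caught error to a helper like this to decide whether to surface the problem to the caller or retry.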