
Prerequisites for using AWS B2B Data Interchange

This topic describes how to sign up for an AWS account, create an administrative user, and configure an Amazon S3 bucket to use with B2B Data Interchange.

Sign up for an AWS account

If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account
  1. Open https://portal.aws.amazon.com/billing/signup.

  2. Follow the online instructions.

    Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.

    When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.

Create a user with administrative access

After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.

Secure your AWS account root user
  1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password.

    For help signing in by using the root user, see Signing in as the root user in the AWS Sign-In User Guide.

  2. Turn on multi-factor authentication (MFA) for your root user.

    For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide.

Create a user with administrative access
  1. Enable IAM Identity Center.

    For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide.

  2. In IAM Identity Center, grant administrative access to a user.

    For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide.

Sign in as the user with administrative access
  • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.

    For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide.

Assign access to additional users
  1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.

    For instructions, see Create a permission set in the AWS IAM Identity Center User Guide.

  2. Assign users to a group, and then assign single sign-on access to the group.

    For instructions, see Add groups in the AWS IAM Identity Center User Guide.

Configure an Amazon S3 bucket

You need to have an Amazon S3 bucket set up and ready to use. B2B Data Interchange requires buckets for storing input, output, and instruction documents. For details, see Getting started with Amazon S3.

  • The Amazon S3 bucket must be in the same AWS account as the B2B Data Interchange user.

  • The Amazon S3 bucket must be in the same AWS Region as the B2B Data Interchange user.
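The two constraints above can be expressed as a quick pre-flight check. The following sketch is illustrative only; `validate_bucket` is a hypothetical helper, not part of any AWS SDK, and the account IDs and Region names are placeholders.

```python
# Hypothetical pre-flight check for the two B2B Data Interchange bucket
# constraints: the bucket must be in the same AWS account and the same
# AWS Region as the B2B Data Interchange user.
def validate_bucket(bucket_account, bucket_region, service_account, service_region):
    """Return a list of constraint violations (empty if the bucket qualifies)."""
    problems = []
    if bucket_account != service_account:
        problems.append("bucket is in a different AWS account")
    if bucket_region != service_region:
        problems.append("bucket is in a different AWS Region")
    return problems

# Example: a bucket in the right account but the wrong Region.
print(validate_bucket("111122223333", "us-west-2", "111122223333", "us-east-1"))
# → ['bucket is in a different AWS Region']
```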

Amazon S3 bucket policies and permissions

Before you can begin transforming and generating Electronic Data Interchange (EDI) documents, you need to set up the Amazon S3 bucket policies that you need for working with B2B Data Interchange resources. This topic also provides example policies to help you get started.

Configure your Amazon S3 bucket policies

You can copy the example policies from Example bucket policies, later in this topic. If one or both of your buckets use SSE-KMS encryption, you also need to update your AWS KMS key policy, as described in that section.

Note

For details on temporary files and directories, see Temporary files and Amazon S3 permissions.

Perform this procedure for both your input and output directories.

Configure your bucket policy
  1. Sign in to the AWS Management Console, open the Amazon S3 console at https://console.aws.amazon.com/s3/, and navigate to your bucket.

  2. After you open the detail page for your bucket, choose the Permissions tab.

  3. In the Bucket policy panel, choose Edit.

  4. Paste in the appropriate bucket policy, depending on whether this is your input or output bucket.

  5. Choose Save to save the policy.

Configure your Amazon S3 bucket EventBridge setting

You need to turn on Amazon EventBridge for your input and output Amazon S3 buckets.

Turn on EventBridge notifications
  1. Sign in to the AWS Management Console, open the Amazon S3 console at https://console.aws.amazon.com/s3/, and navigate to your bucket.

  2. After you open the detail page for your bucket, choose the Properties tab.

  3. Scroll down to the Amazon EventBridge panel. If notifications are off, proceed to the next step. If they are on, you can skip the remainder of this procedure.

  4. To turn on EventBridge notifications, choose Edit.

  5. Select On, and choose Save changes.
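Once EventBridge is turned on, Amazon S3 delivers object-level notifications with source `aws.s3` and detail types such as `Object Created`. B2B Data Interchange creates and manages its own EventBridge rules for you; the sketch below only illustrates the shape of the event pattern that such notifications match, with a placeholder bucket name.

```python
import json

# Illustrative EventBridge event pattern matching S3 "Object Created"
# notifications for one bucket. The bucket name is a placeholder; B2B Data
# Interchange manages its own rules, so you don't create this yourself.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["amzn-s3-demo-bucket"]}},
}
print(json.dumps(pattern, indent=2))
```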

Temporary files and Amazon S3 permissions

For your output bucket policies, you need to have the s3:GetObject and s3:DeleteObject permissions. These permissions are required so that B2B Data Interchange can read and then remove temporary files that the service uses to transform your EDI documents.

The service uses s3:DeleteObject to delete temporary files, which can be ten times as large as the X12 input file. If your bucket policy doesn't include s3:DeleteObject, the service continues to work as expected. However, B2B Data Interchange can't delete these temporary files, so they remain in Amazon S3 (and incur storage charges).

The service adds a new prefix to your output directory, customerOutputDirectory/parsed, for its use, and customerOutputDirectory/tradingPartnerId/parsed if you have a partnership. These locations are used exclusively for holding temporary files. If your bucket policy includes the s3:DeleteObject permission, you should never see these folders. If it doesn't, the temporary files continue to be written to these folders and remain there.
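One way to catch the missing-permission case before deploying is to scan the output bucket policy locally for s3:DeleteObject. The following is a minimal standard-library sketch; the abbreviated policy statement it checks is an assumption for illustration (the account condition is omitted for brevity).

```python
import json

# Minimal local check: does a bucket policy grant s3:DeleteObject to the
# B2B Data Interchange service principal? Without it, temporary files under
# the parsed/ prefixes are never cleaned up and keep accruing storage charges.
def grants_delete_to_b2bi(policy_json):
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        if stmt.get("Principal", {}).get("Service") != "b2bi.amazonaws.com":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "s3:DeleteObject" in actions:
            return True
    return False

# Abbreviated example statement (account condition omitted for brevity).
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "b2bi.amazonaws.com"},
        "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/output-folder/*",
    }],
})
print(grants_delete_to_b2bi(policy))  # → True
```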

Example bucket policies

You need to update your Amazon S3 bucket policies to include the appropriate permissions so that the B2B Data Interchange service can access your input documents and store the generated outputs.

The following policies are copied from the Create trading capability page. You can choose View to open your bucket. Then, from your bucket page, choose Permissions > Bucket policy > Edit, and paste the policy into the Policy field.

Note

In these examples, replace each user input placeholder with your own information.

Example Amazon S3 input bucket policy

Example Amazon S3 input bucket policy copied from the Trading capabilities page.

{
  "Version": "2012-10-17",
  "Id": "B2BIEdiCapabilityInputPolicy",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "b2bi.amazonaws.com"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAttributes"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/input-folder*",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "account-id"
        }
      }
    }
  ]
}

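Filling in the placeholders can also be scripted. The sketch below substitutes an assumed bucket name, input prefix, and account ID into the input bucket policy and prints JSON you can paste into the console; all three values are placeholders you would replace with your own.

```python
import json

# Fill the input bucket policy placeholders locally, then paste the result
# into the Amazon S3 console. The bucket name, prefix, and account ID used
# here are assumptions for illustration.
def render_input_policy(bucket, input_prefix, account_id):
    return {
        "Version": "2012-10-17",
        "Id": "B2BIEdiCapabilityInputPolicy",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "b2bi.amazonaws.com"},
            "Action": ["s3:GetObject", "s3:GetObjectAttributes"],
            "Resource": f"arn:aws:s3:::{bucket}/{input_prefix}*",
            "Condition": {"StringEquals": {"aws:SourceAccount": account_id}},
        }],
    }

rendered = render_input_policy("amzn-s3-demo-bucket", "input-folder", "111122223333")
print(json.dumps(rendered, indent=2))
```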
Example Amazon S3 output bucket policy

Example Amazon S3 output bucket policy copied from the Trading capabilities page.

{
  "Version": "2012-10-17",
  "Id": "B2BIEdiCapabilityOutputPolicy",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "b2bi.amazonaws.com"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/output-folder/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "account-id"
        }
      }
    }
  ]
}

If you have SSE-KMS encryption enabled on your input or output bucket, you need to update the key policy in AWS KMS. You need to add the B2B Data Interchange service principal and the appropriate permissions to the policy.

Example Amazon S3 input AWS KMS key policy

The following example policy is for use with an encrypted input/source bucket. It includes the permission needed to decrypt an encrypted file.

{
  "Version": "2012-10-17",
  "Id": "B2BIEdiCapabilityInputKeyPolicy",
  "Statement": [
    {
      "Sid": "Allow administration of the key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account-id:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow B2Bi access",
      "Effect": "Allow",
      "Principal": {
        "Service": "b2bi.amazonaws.com"
      },
      "Action": "kms:Decrypt",
      "Resource": "*"
    }
  ]
}

Example Amazon S3 output AWS KMS key policy

The following example policy is for use with an encrypted output bucket. It includes the permission needed to encrypt files before storing them in the bucket.

{
  "Version": "2012-10-17",
  "Id": "B2BIEdiCapabilityOutputKeyPolicy",
  "Statement": [
    {
      "Sid": "Allow administration of the key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account-id:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow B2Bi access",
      "Effect": "Allow",
      "Principal": {
        "Service": "b2bi.amazonaws.com"
      },
      "Action": "kms:GenerateDataKey",
      "Resource": "*"
    }
  ]
}

If you are using the same bucket for input and output, you can start from either example key policy and add the other permission. In that case, the policy is as follows.

{
  "Version": "2012-10-17",
  "Id": "B2BIEdiCapabilityOutputKeyPolicy",
  "Statement": [
    {
      "Sid": "Allow administration of the key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account-id:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow B2Bi access",
      "Effect": "Allow",
      "Principal": {
        "Service": "b2bi.amazonaws.com"
      },
      "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
      ],
      "Resource": "*"
    }
  ]
}
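The "add the other permission" step amounts to merging the service statement's action lists from the two key policies while preserving order and dropping duplicates. The sketch below shows that merge with the two actions from this topic; the helper function is illustrative, not an AWS API.

```python
# Merge the B2Bi action lists from the input and output key policies into the
# single action list used when one bucket (and one KMS key) serves both roles.
# Illustrative helper only; key policies accept either a string or a list
# for "Action", so both forms are handled.
def merge_b2bi_actions(*actions_lists):
    merged = []
    for actions in actions_lists:
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            if action not in merged:
                merged.append(action)
    return merged

output_key_action = "kms:GenerateDataKey"  # from the output key policy
input_key_action = "kms:Decrypt"           # from the input key policy
print(merge_b2bi_actions(output_key_action, input_key_action))
# → ['kms:GenerateDataKey', 'kms:Decrypt']
```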