Generative AI for the AWS SRA - AWS Prescriptive Guidance


This section provides current recommendations for using generative AI securely to improve the productivity and efficiency of users and organizations. It focuses on the use of Amazon Bedrock, based on the AWS SRA's holistic set of guidelines for deploying the full complement of AWS security services in a multi-account environment. This guidance builds on the SRA to enable generative AI capabilities within an enterprise-grade, secure framework. It covers key security controls, such as IAM permissions, data protection, input/output validation, network isolation, logging, and monitoring, that are specific to Amazon Bedrock generative AI capabilities.

The target audience for this guidance includes security professionals, architects, and developers who are responsible for securely integrating generative AI capabilities into their organizations and applications.

The SRA explores the security considerations and best practices for these Amazon Bedrock generative AI capabilities: 

The guidance also covers how to integrate Amazon Bedrock generative AI functionality into traditional AWS workloads based on your use case. 

The following sections of this guidance expand on each of these four capabilities, discuss the rationale for the capability and its usage, cover security considerations that pertain to the capability, and explain how you can use AWS services and features to address those considerations (remediation). The rationale, security considerations, and remediations for using foundation models (capability 1) apply to all other capabilities, because they all use model inference. For example, if your business application uses a customized Amazon Bedrock model with retrieval augmented generation (RAG) capability, you have to consider the rationale, security considerations, and remediations of capabilities 1, 2, and 4.

The architecture illustrated in the following diagram is an extension of the AWS SRA Workloads OU previously depicted in this guide.

A specific OU is dedicated to applications that use generative AI. The OU consists of an Application account where you host your traditional AWS application that provides specific business functionality. This AWS application uses the generative AI capabilities that Amazon Bedrock provides. These capabilities are served out of the Generative AI account, which hosts relevant Amazon Bedrock and associated AWS services. Grouping AWS services based on application type helps enforce security controls through OU-specific and AWS account-specific service control policies. This also makes it easier to implement strong access control and least privilege. In addition to these specific OUs and accounts, the reference architecture depicts additional OUs and accounts that provide foundational security capabilities that apply to all application types. The Org Management, Security Tooling, Log Archive, Network, and Shared Services accounts are discussed in earlier sections of this guide.

Design consideration

If your application architecture requires generative AI services provided by Amazon Bedrock and other AWS services to be consolidated within the same account where your business application is hosted, you can merge the Application and Generative AI accounts into a single account. This will also be the case if your generative AI usage is spread across your entire AWS organization.

AWS SRA architecture to support generative AI
Design considerations

You can further break out your Generative AI account based on the software development lifecycle (SDLC) environment (for example, development, test, or production), or by model or user community.

  • Account separation based on the SDLC environment: As a best practice, separate the SDLC environments into separate OUs. This separation ensures proper isolation and control over each environment. It provides:

    • Controlled access. Different teams or individuals can be granted access to specific environments based on their roles and responsibilities. 

    • Resource isolation. Each environment can have its own dedicated resources (such as models or knowledge bases) without interfering with other environments. 

    • Cost tracking. Costs associated with each environment can be tracked and monitored separately. 

    • Risk mitigation. Issues or experiments in one environment (for example, development) don't impact the stability of other environments (for example, production). 

  • Account separation based on the model or user community: In the current architecture, one account provides access to multiple FMs for inference through Amazon Bedrock. You can use IAM roles to provide access control to pre-trained FMs based on user roles and responsibilities. (For an example, see the Amazon Bedrock documentation.) Conversely, you can choose to separate your Generative AI accounts based on risk level, model, or user community. This can be beneficial in certain scenarios: 

    • User community risk levels: If different user communities have varying levels of risk or access requirements, separate accounts could help enforce appropriate access controls and filters. 

    • Customized models: For models that are customized with customer data, if comprehensive information about the training data is available, separate accounts could provide better isolation and control. 

Based on these considerations, you can evaluate the specific requirements, security needs, and operational complexities associated with your use case. If the primary focus is on Amazon Bedrock and pre-trained FMs, a single account with IAM roles could be a viable approach. However, if you have specific requirements for model or user community separation, or if you plan to work with custom models, separate accounts might be necessary. Ultimately, the decision should be driven by your application-specific needs and factors such as security, operational complexity, and cost considerations.
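As a rough sketch of the single-account-with-IAM-roles approach, the following hypothetical policy scopes a team's role down to inference on one specific pre-trained FM. The Region, model ID, and statement ID are placeholder assumptions, not values from this guide.

```python
import json

# Hypothetical least-privilege policy: allow one team's role to invoke a
# single pre-trained foundation model. Model ID and Region are placeholders.
def make_invoke_policy(region: str, model_id: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowSingleModelInference",
                "Effect": "Allow",
                "Action": [
                    "bedrock:InvokeModel",
                    "bedrock:InvokeModelWithResponseStream",
                ],
                # Pre-trained FM ARNs are Region-scoped and carry no account ID.
                "Resource": f"arn:aws:bedrock:{region}::foundation-model/{model_id}",
            }
        ],
    }

policy = make_invoke_policy("us-east-1", "anthropic.claude-3-sonnet-20240229-v1:0")
print(json.dumps(policy, indent=2))
```

You would attach a policy like this to the role assumed by each user community, varying the `Resource` list per community instead of creating separate accounts.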

Note: To simplify the following discussions and examples, this guide assumes a single Generative AI account strategy with IAM roles.

Amazon Bedrock

Amazon Bedrock makes it easy to build and scale generative AI applications with foundation models (FMs). As a fully managed service, it offers a choice of high-performing FMs from leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. It also offers a broad set of capabilities needed to build generative AI applications, and it simplifies development while maintaining privacy and security. FMs serve as building blocks for developing generative AI applications and solutions. With access to Amazon Bedrock, users can interact directly with these FMs through a user-friendly interface or through the Amazon Bedrock API. Amazon Bedrock's objective is to provide model choice through a single API for rapid experimentation, customization, and deployment to production, while supporting fast pivoting between models. It's all about model choice.

You can experiment with pre-trained models, customize the models for your specific use cases, and integrate them into your applications and workflows. This direct interaction with the FMs enables organizations to rapidly prototype and iterate on generative AI solutions, and to take advantage of the latest advancements in machine learning without the need for extensive resources or expertise in training complex models from scratch. The Amazon Bedrock console simplifies the process of accessing and using these powerful generative AI capabilities.
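To make the API interaction concrete, here is an illustrative request body for a single Amazon Bedrock InvokeModel call to an Anthropic model in the Messages format. The model ID, version string, and prompt are assumptions for illustration; with boto3 you would send the payload roughly as shown in the comment.

```python
import json

# Illustrative InvokeModel request body (Anthropic Messages format).
# With boto3 (not imported here), the call would look approximately like:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(modelId=MODEL_ID, body=payload)
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # placeholder model ID

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "Summarize the benefits of RAG."}],
        }
    ],
}
payload = json.dumps(body)
```

The same payload shape works for experimentation in a development account and for production invocation behind your application; only the IAM role making the call changes.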

Amazon Bedrock provides an array of security capabilities to help with the privacy and security of your data: 

  • All user content that's processed by Amazon Bedrock is isolated by user, encrypted at rest, and stored in the AWS Region where you are using Amazon Bedrock. Your content is also encrypted in transit by using TLS 1.2 at a minimum. To learn more about data protection in Amazon Bedrock, see the Amazon Bedrock documentation.

  • Amazon Bedrock doesn't store or log your prompts and completions. Amazon Bedrock doesn't use your prompts and completions to train any AWS models and doesn't distribute them to third parties.

  • When you tune an FM, your changes use a private copy of that model. This means that your data isn't shared with model providers or used to improve the base models. 

  • Amazon Bedrock implements automated abuse detection mechanisms to identify potential violations of the AWS Responsible AI Policy. To learn more about abuse detection in Amazon Bedrock, see the Amazon Bedrock documentation.

  • Amazon Bedrock is in scope for common compliance standards, including International Organization for Standardization (ISO), System and Organization Controls (SOC), Federal Risk and Authorization Management Program (FedRAMP) Moderate, and Cloud Security Alliance (CSA) Security Trust Assurance and Risk (STAR) Level 2. Amazon Bedrock is Health Insurance Portability and Accountability Act (HIPAA) eligible, and you can use this service in compliance with the General Data Protection Regulation (GDPR). To learn whether an AWS service is within the scope of specific compliance programs, see AWS services in Scope by Compliance Program and choose the compliance program that you're interested in. 

To learn more, see the AWS secure approach to generative AI.

Guardrails for Amazon Bedrock

Guardrails for Amazon Bedrock enables you to implement safeguards for your generative AI applications based on your use cases and responsible AI policies. A guardrail in Amazon Bedrock consists of filters that you can configure, topics that you can define to block, and messages to send to users when content is blocked or filtered.

Content filtering depends on the confidence classification of user inputs (input validation) and FM responses (output validation) across six harmful categories. All input and output statements are classified into one of four confidence levels (none, low, medium, high) for each harmful category. For each category, you can configure the strength of the filters. The following table shows the degree of content that each filter strength blocks and allows.

Filter strength    Blocked content confidence    Allowed content confidence
None               No filtering                  None, low, medium, high
Low                High                          None, low, medium
Medium             High, medium                  None, low
High               High, medium, low             None

When you're ready to deploy your guardrail to production, you create a version of it and invoke the version of the guardrail in your application. Follow the steps in the API tab in the Test a guardrail section of the Amazon Bedrock documentation. 
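The filter strengths from the table above map onto the content-filter portion of a CreateGuardrail request, and the published version is then referenced at inference time. The following sketch shows both pieces as plain dictionaries; the field names follow the Amazon Bedrock API as we understand it, and the guardrail ID and chosen strengths are placeholder assumptions.

```python
# Sketch of the content-filter portion of a CreateGuardrail request.
# Field and enum names are assumptions based on the Amazon Bedrock API;
# verify them against the current API reference.
content_policy_config = {
    "filtersConfig": [
        {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "INSULTS", "inputStrength": "LOW", "outputStrength": "LOW"},
    ]
}
# With boto3 (not imported here):
#   bedrock.create_guardrail(name="my-guardrail",
#                            contentPolicyConfig=content_policy_config, ...)

# After you create a version, reference it in inference calls, e.g.:
#   client.converse(..., guardrailConfig=guardrail_config)
guardrail_config = {
    "guardrailIdentifier": "gr-example123",  # placeholder guardrail ID
    "guardrailVersion": "1",                 # the published version, not the working draft
}
```

Pinning `guardrailVersion` to a published version (rather than the draft) keeps production behavior stable while you iterate on the draft in lower environments.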

Security

By default, guardrails are encrypted with an AWS managed key in AWS Key Management Service (AWS KMS). To prevent unauthorized users from gaining access to the guardrails, which could result in undesired changes, we recommend that you use a customer managed key to encrypt your guardrails and restrict access to them by using least privilege IAM permissions.
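One way to express that least-privilege split is a policy that lets application roles only apply a specific guardrail while explicitly denying modification. The ARNs, account ID, guardrail ID, and KMS key below are placeholders, and the action names are assumptions to verify against the Amazon Bedrock API reference.

```python
# Hypothetical policy for an application role: it may apply one guardrail
# but can never modify or delete guardrails. All identifiers are placeholders.
app_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ApplyGuardrailOnly",
            "Effect": "Allow",
            "Action": ["bedrock:ApplyGuardrail"],
            "Resource": "arn:aws:bedrock:us-east-1:111122223333:guardrail/gr-example123",
        },
        {
            "Sid": "DenyGuardrailChanges",
            "Effect": "Deny",
            "Action": [
                "bedrock:UpdateGuardrail",
                "bedrock:DeleteGuardrail",
            ],
            "Resource": "*",
        },
    ],
}
# To use a customer managed key instead of the AWS managed key, pass it at
# creation time, e.g. (boto3, not imported here):
#   bedrock.create_guardrail(..., kmsKeyId="arn:aws:kms:us-east-1:111122223333:key/<key-id>")
```

Guardrail administration (create, update, version, delete) would then be granted only to a separate administrator role.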

Amazon Bedrock model evaluation

Amazon Bedrock supports model evaluation jobs. You can use the results of a model evaluation job to compare model outputs, and then choose the model that best suits your downstream generative AI applications.

You can use an automatic model evaluation job to evaluate a model's performance by using either a custom prompt dataset or a built-in dataset. For more information, see Create a model evaluation job and Use prompt datasets for model evaluation in the Amazon Bedrock documentation.
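An automatic evaluation job with a built-in dataset might be sketched as the following request structure. The overall shape, role ARN, dataset and metric names, model ID, and S3 URI are all illustrative assumptions; check the CreateEvaluationJob API reference before relying on any of them.

```python
# Rough sketch of an automatic model evaluation job request for Amazon
# Bedrock. Every identifier below is a placeholder; with boto3 (not imported
# here) you would pass these fields to:
#   bedrock.create_evaluation_job(**evaluation_job_request)
evaluation_job_request = {
    "jobName": "summarization-eval-dev",                              # placeholder
    "roleArn": "arn:aws:iam::111122223333:role/BedrockEvalRole",      # placeholder
    "evaluationConfig": {
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "Summarization",
                    "dataset": {"name": "Builtin.Gigaword"},          # assumed built-in dataset
                    "metricNames": ["Builtin.Accuracy", "Builtin.Toxicity"],
                }
            ]
        }
    },
    "inferenceConfig": {
        "models": [
            {"bedrockModel": {"modelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0"}}
        ]
    },
    "outputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/eval-results/"},  # placeholder
}
```

Running this in a development account, per the recommendation below, keeps evaluation traffic and results out of production.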

Model evaluation jobs that use human workers bring human input from employees or subject matter experts to the evaluation process. 

Security

Model evaluation should occur in a development environment. For recommendations for organizing your non-production environments, see the Organizing Your AWS Environment Using Multiple Accounts whitepaper.

All model evaluation jobs require IAM permissions and IAM service roles. For more information, see the Amazon Bedrock documentation for permissions that are required to create a model evaluation job by using the Amazon Bedrock console, the service role requirements, and the required cross-origin resource sharing (CORS) permissions. Automatic evaluation jobs and model evaluation jobs that use human workers require different service roles. For more information about the policies that are needed for a role to perform model evaluation jobs, see Service role requirements for automatic model evaluation jobs and Service role requirements for model evaluation jobs that use human evaluators in the Amazon Bedrock documentation.

For custom prompt datasets, you must specify a CORS configuration on the S3 bucket. For the minimal required configuration, see the Amazon Bedrock documentation. In model evaluation jobs that use human workers, you need a work team. You can create or manage work teams while you set up a model evaluation job, and you can add workers to a private workforce that's managed by Amazon SageMaker Ground Truth. To manage work teams that are created in Amazon Bedrock outside of job setup, you must use the Amazon Cognito or Amazon SageMaker Ground Truth consoles. Amazon Bedrock supports a maximum of 50 workers per work team.
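A commonly used CORS configuration for the dataset bucket is sketched below. Treat the exact header, method, and origin values as assumptions and confirm the current minimal requirement in the Amazon Bedrock documentation before applying it.

```python
# Assumed CORS configuration for the S3 bucket that holds a custom prompt
# dataset. With boto3 (not imported here), apply it roughly as:
#   s3.put_bucket_cors(Bucket="amzn-s3-demo-bucket",
#                      CORSConfiguration=cors_configuration)
cors_configuration = {
    "CORSRules": [
        {
            "AllowedHeaders": ["*"],
            "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
            "AllowedOrigins": ["*"],
            "ExposeHeaders": ["Access-Control-Allow-Origin"],
        }
    ]
}
```

In practice you can tighten `AllowedOrigins` to the console origins you actually use rather than `"*"`.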

During a model evaluation job, Amazon Bedrock makes a temporary copy of your data and then deletes the data after the job finishes. The temporary copy is encrypted with an AWS KMS key. By default, the data is encrypted with an AWS managed key, but we recommend that you use a customer managed key instead. For more information, see Data encryption for model evaluation jobs in the Amazon Bedrock documentation.