

Code examples for Provisioned Throughput in Amazon Bedrock

The following code examples demonstrate how to create, use, and manage Provisioned Throughput with the AWS CLI and the Python SDK.

AWS CLI

Create a no-commitment Provisioned Throughput called MyPT, based on a custom model named MyCustomModel that was customized from the Anthropic Claude v2.1 model, by running the following command in a terminal.

aws bedrock create-provisioned-model-throughput \
    --model-units 1 \
    --provisioned-model-name MyPT \
    --model-id arn:aws:bedrock:us-east-1::custom-model/anthropic.claude-v2:1:200k/MyCustomModel

The response returns a provisioned-model-arn. Allow some time for creation to complete. To check its status, provide the name or ARN of the provisioned model as the provisioned-model-id in the following command.

aws bedrock get-provisioned-model-throughput \
    --provisioned-model-id MyPT

Change the name of the Provisioned Throughput and associate it with a different model customized from Anthropic Claude v2.1.

aws bedrock update-provisioned-model-throughput \
    --provisioned-model-id MyPT \
    --desired-provisioned-model-name MyPT2 \
    --desired-model-id arn:aws:bedrock:us-east-1::custom-model/anthropic.claude-v2:1:200k/MyCustomModel2

Run inference with the updated provisioned model by using the following command. You must provide the ARN of the provisioned model, as returned in the UpdateProvisionedModelThroughput response, as the model-id. The output is written to a file named output.txt in the current folder.

aws bedrock-runtime invoke-model \
    --model-id ${provisioned-model-arn} \
    --body '{"inputText": "What is AWS?", "textGenerationConfig": {"temperature": 0.5}}' \
    --cli-binary-format raw-in-base64-out \
    output.txt

Delete the Provisioned Throughput with the following command. You will no longer be charged for the Provisioned Throughput.

aws bedrock delete-provisioned-model-throughput --provisioned-model-id MyPT2

Python (Boto)

Create a no-commitment Provisioned Throughput called MyPT, based on a custom model named MyCustomModel that was customized from the Anthropic Claude v2.1 model, by running the following code snippet.

import boto3

bedrock = boto3.client(service_name='bedrock')

bedrock.create_provisioned_model_throughput(
    modelUnits=1,
    provisionedModelName='MyPT',
    modelId='arn:aws:bedrock:us-east-1::custom-model/anthropic.claude-v2:1:200k/MyCustomModel'
)

The response returns a provisionedModelArn. Allow some time for creation to complete. You can check its status with the following code snippet. You can provide either the name of the Provisioned Throughput or the ARN returned in the CreateProvisionedModelThroughput response as the provisionedModelId.

bedrock.get_provisioned_model_throughput(provisionedModelId='MyPT')
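
The provisioned model can't serve inference requests until its status reaches InService. As a minimal sketch, not part of the original example, you could poll the status field of that response before moving on (the sleep interval and the assumption that the status ends in InService or Failed are illustrative choices).

import time

# Minimal polling sketch: wait until the Provisioned Throughput finishes creating.
# Assumes the response's 'status' field ends up as 'InService' on success or 'Failed' on error.
status = 'Creating'
while status not in ('InService', 'Failed'):
    time.sleep(60)
    status = bedrock.get_provisioned_model_throughput(provisionedModelId='MyPT')['status']
if status == 'Failed':
    raise RuntimeError('Provisioned Throughput creation failed')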

Change the name of the Provisioned Throughput and associate it with a different model customized from Anthropic Claude v2.1. Then send a GetProvisionedModelThroughput request and save the ARN of the provisioned model to a variable to use for inference.

bedrock.update_provisioned_model_throughput(
    provisionedModelId='MyPT',
    desiredProvisionedModelName='MyPT2',
    desiredModelId='arn:aws:bedrock:us-east-1::custom-model/anthropic.claude-v2:1:200k/MyCustomModel2'
)

arn_MyPT2 = bedrock.get_provisioned_model_throughput(provisionedModelId='MyPT2').get('provisionedModelArn')

Run inference with the updated provisioned model by using the following code. You must provide the ARN of the provisioned model as the modelId.

import json
import logging
import boto3
from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by the model"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using your provisioned custom model.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (json): The response from the model.
    """

    logger.info(
        "Generating text with your provisioned custom model %s", model_id)

    brt = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = brt.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Text generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated text with provisioned custom model %s", model_id)

    return response_body


def main():
    """
    Entrypoint for example.
    """

    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = arn_MyPT2

        body = json.dumps({
            "inputText": "what is AWS?"
        })

        response_body = generate_text(model_id, body)

        print(f"Input token count: {response_body['inputTextTokenCount']}")

        for result in response_body['results']:
            print(f"Token count: {result['tokenCount']}")
            print(f"Output text: {result['outputText']}")
            print(f"Completion reason: {result['completionReason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating text with your provisioned custom model {model_id}.")


if __name__ == "__main__":
    main()

Delete the Provisioned Throughput with the following code snippet. You will no longer be charged for the Provisioned Throughput.

bedrock.delete_provisioned_model_throughput(provisionedModelId='MyPT2')