

Amazon Titan Multimodal Embeddings G1

This section provides the request and response body formats and code examples for using Amazon Titan Multimodal Embeddings G1.

Request and response

The request body is passed in the body field of an InvokeModel request.
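As a quick orientation, the following is a minimal sketch (not part of the official examples, which appear later in this section with logging and error handling) of what passing a request body to InvokeModel looks like with the AWS SDK for Python (Boto3). The input text and embedding length are placeholder values.

import json

import boto3

# Minimal sketch: pass a JSON request body to InvokeModel via the Bedrock runtime client.
# The input text and output length below are placeholder values.
bedrock = boto3.client(service_name="bedrock-runtime")

body = json.dumps({
    "inputText": "Example text to embed",
    "embeddingConfig": {"outputEmbeddingLength": 1024}
})

response = bedrock.invoke_model(
    body=body,
    modelId="amazon.titan-embed-image-v1",
    accept="application/json",
    contentType="application/json",
)

response_body = json.loads(response["body"].read())
print(len(response_body["embedding"]))  # 1024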

Request

The request body for Amazon Titan Multimodal Embeddings G1 contains the following fields.

{ "inputText": string, "inputImage": base64-encoded string, "embeddingConfig": { "outputEmbeddingLength": 256 | 384 | 1024 } }

At least one of the following fields is required. Include both to generate an embedding vector that averages the resulting text embedding and image embedding vectors.

  • inputText – Enter the text that you want to convert into embeddings.

  • inputImage – Encode the image that you want to convert into embeddings as a base64 string and enter that string in this field. For examples of how to encode an image as base64 and how to decode a base64-encoded string back into an image, see the code examples later in this section; a short sketch also follows this list.

The following field is optional.

  • embeddingConfig – Contains an outputEmbeddingLength field, in which you specify one of the following lengths for the output embedding vector.

    • 256

    • 384

    • 1024 (default)
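To illustrate how these fields combine, here is a short sketch (an illustration, not taken from the official examples below) that builds the three possible request bodies: text only, image only, and combined text and image. The image path is a placeholder.

import base64
import json

# Placeholder path; replace with your own image file.
with open("/path/to/image.png", "rb") as image_file:
    input_image = base64.b64encode(image_file.read()).decode("utf8")

# Text-only request body.
text_body = json.dumps({
    "inputText": "A family eating dinner",
    "embeddingConfig": {"outputEmbeddingLength": 384}
})

# Image-only request body.
image_body = json.dumps({
    "inputImage": input_image,
    "embeddingConfig": {"outputEmbeddingLength": 384}
})

# Combined request body; the resulting vector averages the text and image embeddings.
combined_body = json.dumps({
    "inputText": "A family eating dinner",
    "inputImage": input_image,
    "embeddingConfig": {"outputEmbeddingLength": 384}
})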

Response

The body of the response contains the following fields.

{ "embedding": [float, float, ...], "inputTextTokenCount": int, "message": string }

The fields are described below, followed by a short sketch that shows how to read them in Python.

  • embedding – An array that represents the embedding vector of the input that you provided.

  • inputTextTokenCount – The number of tokens in the text input.

  • message – Specifies any errors that occur during generation.
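The following sketch (an illustration, not from the official examples) shows how you might read these fields from an InvokeModel response. It assumes response is the return value of a bedrock.invoke_model call like the one sketched earlier.

import json

# Assumes `response` is the return value of bedrock.invoke_model(...).
response_body = json.loads(response["body"].read())

# Surface any generation error before using the embedding.
if response_body.get("message") is not None:
    raise RuntimeError(f"Embeddings generation error: {response_body['message']}")

embedding = response_body["embedding"]                   # list of floats
token_count = response_body.get("inputTextTokenCount")   # may be 0 or absent for image-only input

print(f"Embedding length: {len(embedding)}")
print(f"Input text token count: {token_count}")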

Example code

The following examples show how to invoke the Amazon Titan Multimodal Embeddings G1 model with on-demand throughput in Python. An example is provided for each use case.

Text embeddings

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate text embeddings.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from text with the Amazon Titan
Multimodal Embeddings G1 model (on demand).
"""
import json
import logging
import boto3

from botocore.exceptions import ClientError


class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for a text input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-image-v1"
    input_text = "What are the different services that you offer?"
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })

    try:
        response = generate_embeddings(model_id, body)

        print(f"Generated text embeddings of length {output_embedding_length}: {response['embedding']}")
        print(f"Input text token count: {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating text embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
Image embeddings

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate image embeddings.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image with the Amazon Titan
Multimodal Embeddings G1 model (on demand).
"""
import base64
import json
import logging
import boto3

from botocore.exceptions import ClientError


class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for an image input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')

    model_id = 'amazon.titan-embed-image-v1'
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })

    try:
        response = generate_embeddings(model_id, body)

        print(f"Generated image embeddings of length {output_embedding_length}: {response['embedding']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating image embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
Text and image embeddings

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate embeddings from combined text and image input. The resulting vector is the average of the generated text embedding vector and the image embedding vector.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image and accompanying text with the
Amazon Titan Multimodal Embeddings G1 model (on demand).
"""
import base64
import json
import logging
import boto3

from botocore.exceptions import ClientError


class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for a combined text and image input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-image-v1"
    input_text = "A family eating dinner"

    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')

    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })

    try:
        response = generate_embeddings(model_id, body)

        print(f"Generated embeddings of length {output_embedding_length}: {response['embedding']}")
        print(f"Input text token count: {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
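Because text and image inputs are embedded into the same vector space, a text embedding can be compared directly with an image embedding, for example for text-to-image search. The following sketch is not part of the official examples; it assumes you have two embedding vectors of the same outputEmbeddingLength, generated as shown above, and compares them with cosine similarity.

import math

def cosine_similarity(a, b):
    """Return the cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical usage: text_response and image_response are results returned by
# generate_embeddings in the examples above, using the same outputEmbeddingLength.
# score = cosine_similarity(text_response["embedding"], image_response["embedding"])
# print(f"Cosine similarity between text and image embeddings: {score:.4f}")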