Meta Llama models
This section describes the request parameters and response fields for Meta Llama models. Use this information to make inference calls to Meta Llama models with the InvokeModel and InvokeModelWithResponseStream (streaming) operations. This section also includes Python code examples that show how to call Meta Llama models. To use a model in an inference operation, you need the model ID for the model. To get the model ID, see Amazon Bedrock model IDs. Some models also work with the Converse API. To check whether the Converse API supports a specific Meta Llama model, see Supported models and model features. For more code examples, see Code examples for Amazon Bedrock using AWS SDKs.
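If the Converse API supports the model you want to use, you can call it with the Converse operation instead of building a model-specific request body. The following is a minimal sketch, assuming a recent version of boto3 and a Llama 3 model ID that is available in your Region; the model ID and prompt text are only illustrative.

# A minimal sketch of calling a Meta Llama model through the Converse API.
# The model ID below is an assumption; substitute one available in your Region.
import boto3

client = boto3.client(service_name="bedrock-runtime")

response = client.converse(
    modelId="meta.llama3-8b-instruct-v1:0",  # assumed example model ID
    messages=[
        {"role": "user", "content": [{"text": "Describe a 'hello world' program."}]}
    ],
    inferenceConfig={"maxTokens": 128, "temperature": 0.1, "topP": 0.9},
)

# The Converse API returns generated text in a common format for all models.
print(response["output"]["message"]["content"][0]["text"])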
Foundation models in Amazon Bedrock support input and output modalities that vary from model to model. To check the modalities that Meta Llama models support, see Supported foundation models in Amazon Bedrock. To check which Amazon Bedrock features the Meta Llama models support, see Model support by feature. To check which AWS Regions Meta Llama models are available in, see Model support by AWS Region.
When you make an inference call with a Meta Llama model, you include a prompt for the model. For general information about creating prompts for the models that Amazon Bedrock supports, see Prompt engineering concepts. For Meta Llama specific prompt information, see the Meta Llama prompt engineering guide.
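As a concrete reference, the Llama 2 Chat model used in the example later in this section wraps an optional system prompt and the user message in special tokens. A minimal sketch of that template follows; the placeholder names are illustrative.

<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>

{user_message} [/INST]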
Note
The Llama 3.2 Instruct models use geofencing. This means that these models can't be used outside the AWS Regions listed for these models in the Regions table.
This section provides information about using the following models from Meta.
Llama 2
Llama 2 Chat
Llama 3 Instruct
Llama 3.1 Instruct
Llama 3.2 Instruct
Request and response
The request body is passed in the body field of a request to InvokeModel or InvokeModelWithResponseStream.
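The sketch below shows the shape of a request body and a response for a Meta Llama text-generation call. The field names are taken from the Llama 2 Chat example later in this section; the values themselves are only illustrative.

# Shape of a request body for a Meta Llama text-generation call.
# Field names match the Llama 2 Chat example later in this section;
# the values are illustrative.
import json

request_body = json.dumps({
    "prompt": "<s>[INST] Why is the sky blue? [/INST]",  # the prompt to send
    "max_gen_len": 128,   # maximum number of tokens to generate
    "temperature": 0.1,   # sampling temperature; lower is more deterministic
    "top_p": 0.9          # nucleus-sampling probability mass
})

# The response body contains the generated text, token counts, and a stop reason:
# {
#     "generation": "...generated text...",
#     "prompt_token_count": 12,
#     "generation_token_count": 101,
#     "stop_reason": "stop"
# }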
Example code
This example shows how to call the Meta Llama 2 Chat 13B model.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate text with Meta Llama 2 Chat (on demand).
"""
import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using Meta Llama 2 Chat on demand.
    Args:
        model_id (str): The model ID to use.
        body (str): The request body to use.
    Returns:
        response (JSON): The text that the model generated, token information,
        and the reason the model stopped generating text.
    """

    logger.info("Generating text with Meta Llama 2 Chat model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    response = bedrock.invoke_model(
        body=body, modelId=model_id)

    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Meta Llama 2 Chat example.
    """
    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "meta.llama2-13b-chat-v1"

    prompt = """<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>

There's a llama in my garden What should I do? [/INST]"""

    max_gen_len = 128
    temperature = 0.1
    top_p = 0.9

    # Create request body.
    body = json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
        "top_p": top_p
    })

    try:
        response = generate_text(model_id, body)
        print(f"Generated Text: {response['generation']}")
        print(f"Prompt Token count: {response['prompt_token_count']}")
        print(f"Generation Token count: {response['generation_token_count']}")
        print(f"Stop reason: {response['stop_reason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    else:
        print(
            f"Finished generating text with Meta Llama 2 Chat model {model_id}.")


if __name__ == "__main__":
    main()
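The example above uses InvokeModel. The sketch below shows the streaming variant with InvokeModelWithResponseStream, assuming the same model ID and request body format; the name of the field that carries partial text is inferred from the non-streaming response fields shown above.

# A minimal sketch of streaming generation with InvokeModelWithResponseStream.
# Assumes the same model ID and request body format as the example above.
import json

import boto3

bedrock = boto3.client(service_name="bedrock-runtime")

body = json.dumps({
    "prompt": "<s>[INST] There's a llama in my garden What should I do? [/INST]",
    "max_gen_len": 128,
    "temperature": 0.1,
    "top_p": 0.9
})

response = bedrock.invoke_model_with_response_stream(
    body=body, modelId="meta.llama2-13b-chat-v1")

# Each event in the stream carries a JSON chunk; the partial text is assumed
# to arrive in the "generation" field, mirroring the non-streaming response.
for event in response.get("body"):
    chunk = json.loads(event["chunk"]["bytes"])
    print(chunk.get("generation", ""), end="")
print()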