Meta Llama models
This section provides inference parameters and code examples for using the following Meta models.
Llama 2
Llama 2 Chat
Llama 3 Instruct
Llama 3.1 Instruct
You make inference requests to Meta Llama models with InvokeModel or InvokeModelWithResponseStream (streaming). You need the model ID for the model that you want to use. To get the model ID, see Amazon Bedrock model IDs.
Request and response
The request body is passed in the body field of a request to InvokeModel or InvokeModelWithResponseStream.
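For reference, the following is a minimal sketch of the request body and the response fields. The parameter and field names are taken from the example code later in this section; treat the exact set of supported parameters as something to verify against the Amazon Bedrock documentation.

import json

# Request body for a Meta Llama model. These field names match the
# example code below; check the Amazon Bedrock documentation for the
# complete, authoritative parameter list.
request_body = json.dumps({
    "prompt": "<s>[INST] What should I name my llama? [/INST]",  # required
    "temperature": 0.5,   # sampling temperature (optional)
    "top_p": 0.9,         # nucleus sampling cutoff (optional)
    "max_gen_len": 512    # maximum number of tokens to generate (optional)
})

# The parsed response body from InvokeModel contains fields such as:
# {
#     "generation": "...the generated text...",
#     "prompt_token_count": 12,
#     "generation_token_count": 98,
#     "stop_reason": "stop"
# }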
Code example
This example shows how to call the Meta Llama 2 Chat 13B model.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate text with Meta Llama 2 Chat (on demand).
"""
import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


def generate_text(model_id, body):
    """
    Generate text using Meta Llama 2 Chat on demand.
    Args:
        model_id (str): The model ID to use.
        body (str): The request body to use.
    Returns:
        response (JSON): The text that the model generated, token information,
        and the reason the model stopped generating text.
    """
    logger.info("Generating text with Meta Llama 2 Chat model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    response = bedrock.invoke_model(
        body=body, modelId=model_id)

    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Meta Llama 2 Chat example.
    """
    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "meta.llama2-13b-chat-v1"

    prompt = """<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
There's a llama in my garden What should I do? [/INST]"""

    max_gen_len = 128
    temperature = 0.1
    top_p = 0.9

    # Create request body.
    body = json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
        "top_p": top_p
    })

    try:
        response = generate_text(model_id, body)
        print(f"Generated Text: {response['generation']}")
        print(f"Prompt Token count: {response['prompt_token_count']}")
        print(f"Generation Token count: {response['generation_token_count']}")
        print(f"Stop reason: {response['stop_reason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    else:
        print(
            f"Finished generating text with Meta Llama 2 Chat model {model_id}.")


if __name__ == "__main__":
    main()
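The example above uses InvokeModel. For streaming, the following is a minimal sketch of the InvokeModelWithResponseStream variant. It reuses the same request body and model ID, and assumes each streamed event carries a JSON chunk whose generation field holds a partial piece of the output, consistent with the non-streaming response format shown earlier.

import json

import boto3


def generate_text_streaming(model_id, body):
    """Stream generated text from a Meta Llama model chunk by chunk."""
    bedrock = boto3.client(service_name='bedrock-runtime')

    response = bedrock.invoke_model_with_response_stream(
        body=body, modelId=model_id)

    # The response body is an event stream; each event contains a
    # JSON-encoded chunk. We assume the chunk carries a partial
    # 'generation' string, mirroring the non-streaming response.
    for event in response.get('body'):
        chunk = json.loads(event['chunk']['bytes'])
        print(chunk.get('generation', ''), end='', flush=True)

You can call this with the same body built in the example above, for example generate_text_streaming("meta.llama2-13b-chat-v1", body).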