Amazon Titan Text models - Amazon Bedrock


Amazon Titan Text models

The Amazon Titan Text models support the following inference parameters.

For more information about Titan Text prompt engineering guidelines, see Titan Text Prompt Engineering Guidelines.

For more information about Titan models, see Overview of Amazon Titan models.

Request and response

The request body is passed in the body field of an InvokeModel or InvokeModelWithResponseStream request.

Request
{
    "inputText": string,
    "textGenerationConfig": {
        "temperature": float,
        "topP": float,
        "maxTokenCount": int,
        "stopSequences": [string]
    }
}

The following parameter is required:

  • inputText — The prompt to provide the model to generate a response. To generate a response in a conversational style, wrap the prompt by using the following format:

    "inputText": "User: <prompt>\nBot:"

textGenerationConfig is optional. You can use it to configure the following inference parameters:

  • temperature — Use a lower value to decrease randomness in the response.

    Default    Minimum    Maximum
    0.7        0.0        1.0
  • topP — Use a lower value to ignore less probable options and decrease the diversity of responses.

    Default    Minimum    Maximum
    0.9        0.0        1.0
  • maxTokenCount — Specify the maximum number of tokens to generate in the response. Maximum token limits are strictly enforced.

    Model                 Default    Minimum    Maximum
    Titan Text Lite       512        0          4,096
    Titan Text Express    512        0          8,192
    Titan Text Premier    512        0          3,072
  • stopSequences — Specify a character sequence to indicate where the model should stop.
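Putting the parameters above together, the request body is a JSON string built from the fields described in this section. The sketch below is illustrative only: the prompt text and the parameter values are placeholders (the values shown are the documented defaults), not recommendations.

```python
import json

# A minimal sketch of assembling a Titan Text request body.
# The prompt and parameter values below are illustrative only.
prompt = "User: Explain what a token is in one sentence.\nBot:"

body = json.dumps({
    "inputText": prompt,
    "textGenerationConfig": {
        "temperature": 0.7,         # default; lower values reduce randomness
        "topP": 0.9,                # default; lower values cut unlikely tokens
        "maxTokenCount": 512,       # default for all three Titan Text models
        "stopSequences": ["User:"]  # stop before the model writes a user turn
    }
})

print(body)
```

The resulting string is what you pass as the body field of an InvokeModel or InvokeModelWithResponseStream request.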

InvokeModel Response
{
    "inputTextTokenCount": int,
    "results": [{
        "tokenCount": int,
        "outputText": "\n<response>\n",
        "completionReason": "string"
    }]
}

The response body contains the following fields:

  • inputTextTokenCount — The number of tokens in the prompt.

  • results — An array of one item, an object that contains the following fields:

    • tokenCount — The number of tokens in the response.

    • outputText — The text in the response.

    • completionReason — The reason the response finished being generated. The following reasons are possible:

      • FINISHED — The response was fully generated.

      • LENGTH — The response was truncated because of the response length you set.

      • STOP_CRITERIA_MET — The response was truncated because the stop criteria was reached.

      • RAG_QUERY_WHEN_RAG_DISABLED — The feature is disabled and cannot complete the query.

      • CONTENT_FILTERED — The contents were filtered or removed by the applied content filter.
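The fields above can be read straight off the parsed response body. As a sketch, the JSON below is a hypothetical response invented for illustration; in a real call it would come from the InvokeModel response:

```python
import json

# Hypothetical InvokeModel response body, for illustration only.
raw = '''{
    "inputTextTokenCount": 6,
    "results": [{
        "tokenCount": 4,
        "outputText": "\\nHello there.\\n",
        "completionReason": "FINISHED"
    }]
}'''

response_body = json.loads(raw)
result = response_body["results"][0]  # results is an array of one item

# A truncated response can be detected from completionReason.
if result["completionReason"] == "LENGTH":
    print("Response truncated; consider raising maxTokenCount.")

print(result["outputText"].strip())
```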

InvokeModelWithResponseStream Response

Each chunk of text in the response stream body is in the following format. You must decode the bytes field (for an example, see Submit a single prompt with the InvokeModel API operations).

{
    "chunk": {
        "bytes": b'{
            "index": int,
            "inputTextTokenCount": int,
            "totalOutputTextTokenCount": int,
            "outputText": "<response-chunk>",
            "completionReason": "string"
        }'
    }
}
  • index — The index of the chunk in the streaming response.

  • inputTextTokenCount — The number of tokens in the prompt.

  • totalOutputTextTokenCount — The number of tokens in the response.

  • outputText — The text in the response.

  • completionReason — The reason the response finished being generated. The following reasons are possible:

    • FINISHED — The response was fully generated.

    • LENGTH — The response was truncated because of the response length you set.

    • STOP_CRITERIA_MET — The response was truncated because the stop criteria was reached.

    • RAG_QUERY_WHEN_RAG_DISABLED — The feature is disabled and cannot complete the query.

    • CONTENT_FILTERED — The contents were filtered or removed by the applied content filter.
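Each streamed event's bytes field is itself a JSON document with the fields listed above. The sketch below decodes one such event; the payload here is invented for illustration, and in a real call you would iterate over the events in the stream returned by invoke_model_with_response_stream.

```python
import json

# A hypothetical streamed event; the bytes payload is invented for
# illustration and mirrors the chunk format documented above.
event = {
    "chunk": {
        "bytes": b'{"index": 0, "inputTextTokenCount": 5,'
                 b' "totalOutputTextTokenCount": 3,'
                 b' "outputText": "Hello.", "completionReason": "FINISHED"}'
    }
}

# Decode the bytes field, then parse the JSON document it contains.
payload = json.loads(event["chunk"]["bytes"].decode("utf-8"))

print(payload["outputText"], payload["completionReason"])
```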

Code examples

The following example shows how to run inference with the Amazon Titan Text Premier model using the Python SDK.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to create a list of action items from a meeting transcript
with the Amazon Titan Text model (on demand).
"""
import json
import logging

import boto3
from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Titan Text models"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using Amazon Titan Text models on demand.
    Args:
        model_id (str): The model ID to use.
        body (str): The request body to use.
    Returns:
        response (json): The response from the model.
    """
    logger.info("Generating text with Amazon Titan Text model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")
    if finish_reason is not None:
        raise ImageError(f"Text generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated text with Amazon Titan Text model %s",
        model_id)

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Text model example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        # You can replace model_id with any other Titan Text model:
        # amazon.titan-text-premier-v1:0, amazon.titan-text-express-v1,
        # amazon.titan-text-lite-v1
        model_id = 'amazon.titan-text-premier-v1:0'

        prompt = """Meeting transcript:
Miguel: Hi Brant, I want to discuss the workstream for our new product launch
Brant: Sure Miguel, is there anything in particular you want to discuss?
Miguel: Yes, I want to talk about how users enter into the product.
Brant: Ok, in that case let me add in Namita.
Namita: Hey everyone
Brant: Hi Namita, Miguel wants to discuss how users enter into the product.
Miguel: its too complicated and we should remove friction. for example, why do I need to fill out additional forms? I also find it difficult to find where to access the product when I first land on the landing page.
Brant: I would also add that I think there are too many steps.
Namita: Ok, I can work on the landing page to make the product more discoverable but brant can you work on the additonal forms?
Brant: Yes but I would need to work with James from another team as he needs to unblock the sign up workflow. Miguel can you document any other concerns so that I can discuss with James only once?
Miguel: Sure.
From the meeting transcript above, Create a list of action items for each person. """

        body = json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 3072,
                "stopSequences": [],
                "temperature": 0.7,
                "topP": 0.9
            }
        })

        response_body = generate_text(model_id, body)

        print(f"Input token count: {response_body['inputTextTokenCount']}")

        for result in response_body['results']:
            print(f"Token count: {result['tokenCount']}")
            print(f"Output text: {result['outputText']}")
            print(f"Completion reason: {result['completionReason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating text with the Amazon Titan Text Premier "
              f"model {model_id}.")


if __name__ == "__main__":
    main()

The following example shows how to run inference with the Amazon Titan Text G1 - Express model using the Python SDK.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to create a list of action items from a meeting transcript
with the Amazon Titan Text Express model (on demand).
"""
import json
import logging

import boto3
from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by the Amazon Titan Text Express model"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using the Amazon Titan Text Express model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str): The request body to use.
    Returns:
        response (json): The response from the model.
    """
    logger.info(
        "Generating text with Amazon Titan Text Express model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")
    if finish_reason is not None:
        raise ImageError(f"Text generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated text with Amazon Titan Text Express model %s",
        model_id)

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Text Express example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.titan-text-express-v1'

        prompt = """Meeting transcript:
Miguel: Hi Brant, I want to discuss the workstream for our new product launch
Brant: Sure Miguel, is there anything in particular you want to discuss?
Miguel: Yes, I want to talk about how users enter into the product.
Brant: Ok, in that case let me add in Namita.
Namita: Hey everyone
Brant: Hi Namita, Miguel wants to discuss how users enter into the product.
Miguel: its too complicated and we should remove friction. for example, why do I need to fill out additional forms? I also find it difficult to find where to access the product when I first land on the landing page.
Brant: I would also add that I think there are too many steps.
Namita: Ok, I can work on the landing page to make the product more discoverable but brant can you work on the additonal forms?
Brant: Yes but I would need to work with James from another team as he needs to unblock the sign up workflow. Miguel can you document any other concerns so that I can discuss with James only once?
Miguel: Sure.
From the meeting transcript above, Create a list of action items for each person. """

        body = json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 4096,
                "stopSequences": [],
                "temperature": 0,
                "topP": 1
            }
        })

        response_body = generate_text(model_id, body)

        print(f"Input token count: {response_body['inputTextTokenCount']}")

        for result in response_body['results']:
            print(f"Token count: {result['tokenCount']}")
            print(f"Output text: {result['outputText']}")
            print(f"Completion reason: {result['completionReason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating text with the Amazon Titan Text Express "
              f"model {model_id}.")


if __name__ == "__main__":
    main()