Model distillation in Amazon Bedrock

Model distillation is the process of transferring knowledge from a larger, more intelligent model (known as the teacher) to a smaller, faster, more cost-efficient model (known as the student). Through this process, the student model becomes as performant as the teacher for a specific use case. Amazon Bedrock Model Distillation uses the latest data synthesis techniques to generate diverse, high-quality responses (known as synthetic data) from the teacher model, and then fine-tunes the student model with them.

To use Amazon Bedrock Model Distillation, you select a teacher model whose accuracy you want to achieve for your use case, and a student model to fine-tune. Then, you provide use case-specific prompts as input data. Amazon Bedrock generates responses from the teacher model for the given prompts, and then uses the responses to fine-tune the student model. You can optionally provide labeled input data as prompt-response pairs. Amazon Bedrock may use these pairs as golden examples while generating responses from the teacher model. Or, if you already have responses that the teacher model generated and you've stored them in the invocation logs, then you can use those existing teacher responses to fine-tune the student model. For this, you must provide Amazon Bedrock access to your invocation logs. An invocation log in Amazon Bedrock is a detailed record of model invocations. For more information, see Monitor model invocation using CloudWatch Logs.
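
For example, each prompt you provide is a single line of JSON in your input JSONL file. The following minimal sketch writes one prompt-only record with Python; the schemaVersion value and field layout shown here are assumptions based on the Bedrock conversational dataset format, so confirm them against the dataset format reference for model distillation before you prepare your data.

import json

# One prompt-only record for the distillation input file (JSON Lines format).
# NOTE: the "bedrock-conversation-2024" schema name and field layout are assumptions;
# check the current Amazon Bedrock dataset format documentation.
record = {
    "schemaVersion": "bedrock-conversation-2024",
    "system": [{"text": "You are a concise customer-support assistant."}],
    "messages": [
        {"role": "user", "content": [{"text": "How do I reset my router?"}]}
    ],
}

with open("prompts.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")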

Only you can access the final distilled model. Amazon Bedrock doesn't use your data to train any other teacher or student model for public use.

How Amazon Bedrock Model Distillation works

Amazon Bedrock Model Distillation is a single workflow that automates the process of creating a distilled model. In this workflow, Amazon Bedrock generates responses from a teacher model, applies data synthesis techniques to improve response generation, and fine-tunes the student model with the generated responses. The augmented dataset is split into separate training and validation datasets. Amazon Bedrock uses only the data in the training dataset to fine-tune the student model.

After you've identified your teacher and student models, you can choose how you want Amazon Bedrock to create a distilled model for your use case. Amazon Bedrock can either generate teacher responses by using the prompts that you provide, or you can use responses from your production data via invocation logs. Amazon Bedrock Model Distillation uses these responses to fine-tune the student model.
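
Model distillation runs as a model customization job. The following sketch shows roughly how such a job could be started with the AWS SDK for Python (Boto3); the customizationConfig structure, model identifiers, S3 URIs, and IAM role are illustrative assumptions, so check the CreateModelCustomizationJob API reference for the exact request fields.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Start a distillation job. The field names inside customizationConfig are an
# assumption about the distillation-specific configuration; verify them in the
# CreateModelCustomizationJob API reference.
response = bedrock.create_model_customization_job(
    jobName="my-distillation-job",                        # hypothetical name
    customModelName="my-distilled-model",                 # hypothetical name
    roleArn="arn:aws:iam::111122223333:role/MyBedrockDistillationRole",
    customizationType="DISTILLATION",
    baseModelIdentifier="amazon.nova-lite-v1:0",          # student model (example)
    customizationConfig={
        "distillationConfig": {
            "teacherModelConfig": {
                "teacherModelIdentifier": "amazon.nova-pro-v1:0",  # teacher model (example)
                "maxResponseLengthForInference": 1000,
            }
        }
    },
    trainingDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/distillation/prompts.jsonl"},
    outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/distillation/output/"},
)

print("Started distillation job:", response["jobArn"])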

Creating a distilled model using prompts that you provide

Amazon Bedrock uses the input prompts that you provide to generate responses from the teacher model. Amazon Bedrock then uses the responses to fine-tune the student model that you've identified. Depending on your use case, Amazon Bedrock might apply proprietary data synthesis techniques to generate diverse, higher-quality responses. For example, Amazon Bedrock might create prompts similar to yours to elicit more diverse responses from the teacher model. Or, if you optionally provide a handful of labeled prompt-response pairs, Amazon Bedrock might use these pairs as golden examples to guide the teacher model toward similar high-quality responses.
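
As a rough illustration, a labeled record adds an assistant turn to the prompt-only record shown earlier, and Amazon Bedrock can treat that pair as a golden example. As before, the schema name and field layout are assumptions to verify against the dataset format reference.

import json

# A labeled prompt-response pair (golden example). The schema fields are the same
# assumption as in the prompt-only record above.
golden_record = {
    "schemaVersion": "bedrock-conversation-2024",
    "system": [{"text": "You are a concise customer-support assistant."}],
    "messages": [
        {"role": "user", "content": [{"text": "How do I reset my router?"}]},
        {
            "role": "assistant",
            "content": [{"text": "Unplug the router, wait 30 seconds, plug it back in, "
                                 "and wait for the status light to turn solid."}],
        },
    ],
}

with open("prompts.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(golden_record) + "\n")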

Note

If Amazon Bedrock Model Distillation uses its proprietary data synthesis techniques to generate higher-quality teacher responses, then your AWS account will incur additional charges for inference calls to the teacher model. These charges will be billed at the on-demand inference rates of the teacher model. Data synthesis techniques might increase the size of the fine-tuning dataset to a maximum of 15k prompt-response pairs. For more information about Amazon Bedrock charges, see Amazon Bedrock Pricing.

Creating a distilled model using production data

If you already have responses that the teacher model generated and have stored them in your invocation logs, you can use those existing teacher responses to fine-tune the student model. To do this, you must provide Amazon Bedrock access to your invocation logs. An invocation log in Amazon Bedrock is a detailed record of model invocations. For more information, see Monitor model invocation using CloudWatch Logs.
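
Invocation logging is turned on at the account level. The following sketch enables it with the PutModelInvocationLoggingConfiguration operation through Boto3; the bucket name, log group, and role are placeholders, and you should confirm the loggingConfig fields in the API reference.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Enable model invocation logging so that prompts and responses are recorded.
# The bucket, log group, and role ARN below are placeholders.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "amzn-s3-demo-logging-bucket",
            "keyPrefix": "invocation-logs",
        },
        "cloudWatchConfig": {
            "logGroupName": "/aws/bedrock/invocations",
            "roleArn": "arn:aws:iam::111122223333:role/MyBedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,   # capture prompt and response text
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)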

If you choose this option, you can continue to use Amazon Bedrock's inference API operations, such as InvokeModel or Converse, and collect the invocation logs, model input data (prompts), and model output data (responses) for all invocations in Amazon Bedrock. When you generate responses with the InvokeModel or Converse API operations, you can optionally add requestMetadata to those invocations. This helps you filter your invocation logs for specific use cases, and then use the filtered responses to fine-tune your student model. When you choose to use invocation logs to fine-tune your student model, you can have Amazon Bedrock use only the prompts, or use the prompt-response pairs.
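
For example, the following sketch calls the Converse API with requestMetadata so that the resulting invocation log entry carries tags you can later filter on; the tag keys and values are hypothetical.

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke the (future) teacher model and tag the invocation with request metadata.
# The metadata keys and values here are hypothetical examples.
response = bedrock_runtime.converse(
    modelId="amazon.nova-pro-v1:0",   # example teacher model
    messages=[
        {"role": "user", "content": [{"text": "Summarize our return policy in two sentences."}]}
    ],
    requestMetadata={
        "project": "support-bot",
        "use-case": "summarization",
    },
)

print(response["output"]["message"]["content"][0]["text"])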

Choosing prompts with invocation logs

If you choose to have Amazon Bedrock use only the prompts from the invocation logs, then Amazon Bedrock uses the prompts to generate responses from the teacher model. In this case, Amazon Bedrock uses the responses to fine-tune the student model that you've identified. Depending on your use case, Amazon Bedrock Model Distillation might add proprietary data synthesis techniques to generate diverse and higher-quality responses.

Note

If Amazon Bedrock Model Distillation uses its proprietary data synthesis techniques to generate higher-quality teacher responses, then your AWS account will incur additional charges for inference calls to the teacher model. These charges will be billed at the on-demand inference rates of the teacher model. Data synthesis techniques may increase the size of the fine-tuning dataset to a maximum of 15k prompt-response pairs. For more information about Amazon Bedrock charges, see Amazon Bedrock Pricing.

Choosing prompt-response pairs with invocation logs

If you choose to have Amazon Bedrock use prompt-response pairs from the invocation logs, then Amazon Bedrock doesn't regenerate responses from the teacher model; instead, it uses the responses recorded in the invocation logs to fine-tune the student model. For Amazon Bedrock to read the responses from the invocation logs, the teacher model specified in your model distillation job must match the model used in the invocation logs. If you added request metadata to the invocations in the logs, you can specify request metadata filters so that Amazon Bedrock reads only the logs that are relevant to your use case when fine-tuning the student model.
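
As a rough sketch of what that looks like in a job request, the trainingDataConfig below points at the invocation logs and applies a request metadata filter; the invocationLogsConfig field names and filter syntax are assumptions, so verify them against the CreateModelCustomizationJob API reference.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Distillation job that reads prompt-response pairs from invocation logs instead of
# regenerating teacher responses. The invocationLogsConfig structure and the
# requestMetadataFilters syntax are assumptions; confirm them in the API reference.
response = bedrock.create_model_customization_job(
    jobName="my-distillation-from-logs",
    customModelName="my-distilled-model-from-logs",
    roleArn="arn:aws:iam::111122223333:role/MyBedrockDistillationRole",
    customizationType="DISTILLATION",
    baseModelIdentifier="amazon.nova-lite-v1:0",          # student model (example)
    customizationConfig={
        "distillationConfig": {
            "teacherModelConfig": {
                # Must match the model that produced the logged responses.
                "teacherModelIdentifier": "amazon.nova-pro-v1:0",
            }
        }
    },
    trainingDataConfig={
        "invocationLogsConfig": {
            "invocationLogSource": {"s3Uri": "s3://amzn-s3-demo-logging-bucket/invocation-logs/"},
            "usePromptResponse": True,   # reuse logged responses instead of calling the teacher
            "requestMetadataFilters": {"equals": {"project": "support-bot"}},
        }
    },
    outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/distillation/output/"},
)

print("Started distillation job:", response["jobArn"])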