

# Generating images with Amazon Nova Canvas
<a name="image-generation"></a>

With the Amazon Nova Canvas model, you can generate realistic, studio-quality images by using text prompts. You can use Amazon Nova Canvas for text-to-image and image editing applications.

Amazon Nova Canvas supports the following features:
+ Text-to-image (T2I) generation – Input a text prompt and generate a new image as output. The generated image captures the concepts described by the text prompt.
+ Image conditioning – Uses an input reference image to guide image generation. The model generates an output image that aligns with the layout and composition of the reference image while still following the text prompt.
+ Color guided content – Provide a list of 1 to 10 hex color codes along with a prompt. The returned image incorporates the color palette that you provide.
+ Image variation – Uses 1 to 5 images and an optional prompt as input. It generates a new image that borrows characteristics from the reference images including style, color palette, and subject.
+ Inpainting – Uses an image and a segmentation mask as input (either from the user or estimated by the model) and reconstructs the region defined by the mask. Use inpainting to replace masked pixels with new generated content.
+ Outpainting – Uses an image and a segmentation mask as input (either from the user or estimated by the model) and generates new content that seamlessly extends the masked region, effectively replacing the image background.
+ Background removal – Automatically identifies multiple objects in the input image and removes the background. The output image has a transparent background.
+ Subject consistency – Subject consistency is achieved by fine-tuning the model with reference images to preserve the chosen subject (for example, pet, shoe, or handbag) in generated images.
+ Content provenance – Use publicly available tools such as [Content Credentials Verify](https://contentcredentials.org/verify) to check if an image was generated by Amazon Nova Canvas. This should indicate the image was generated unless the metadata has been removed.
+ Watermarking – Adds an invisible watermark to all generated images to reduce the spread of misinformation, assist with copyright protection, and track content usage. Watermark detection, which checks for the existence of this watermark, is available to help you confirm whether an image was generated by an Amazon Nova model.


|  | Amazon Nova Canvas | 
| --- |--- |
| Model ID | amazon.nova-canvas-v1:0 | 
| Input Modalities | Text, Image | 
| Output Modalities | Image | 
| Max Prompt Length | 1024 characters | 
| Max Output Resolution (generation tasks) | 4.19 million pixels (for example, 2048x2048 or 2816x1536) | 
| Max Output Resolution (editing tasks) | Must meet all of the following: a maximum of 4096 pixels on the longest side; an aspect ratio between 1:4 and 4:1; a total pixel count of 4.19 million or smaller | 
| Supporting Input Image Types | PNG, JPEG | 
| Supported Languages | English | 
| Regions | US East (N. Virginia), Europe (Ireland), and Asia Pacific (Tokyo) | 
| Invoke Model API | Yes | 
| Fine-tuning | Yes | 
| Provisioned throughput | No | 

**Topics**
+ [Image generation and editing](image-gen-access.md)
+ [Virtual try-on](image-gen-vto.md)
+ [Visual Styles](image-gen-styles.md)
+ [Request and response structure for image generation](image-gen-req-resp-structure.md)
+ [Error handling](image-gen-errors.md)
+ [Code examples](image-gen-code-examples.md)

# Image generation and editing
<a name="image-gen-access"></a>

Amazon Nova Canvas is available through the Bedrock [InvokeModel API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and supports the following inference parameters and model responses.

**Topics**
+ [Image generation request and response format](#image-gen-req-resp-format)
+ [Input images for image generation](#image-gen-input-images)
+ [Masking images](#image-gen-masking)
+ [Supported image resolutions](#image-gen-resolutions)

## Image generation request and response format
<a name="image-gen-req-resp-format"></a>

When you make an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) call using the Amazon Nova Canvas model, replace the `body` field of the request with the format that matches your use case. All tasks share an `imageGenerationConfig` object, but each task has a parameters object specific to that task. The following use cases are supported:


| Task Type Value | Task Parameter Field | Task Category | Description | 
| --- | --- | --- | --- | 
| TEXT\_IMAGE with text only | textToImageParams | Generation | Generate an image using a text prompt. | 
| TEXT\_IMAGE with image conditioning | textToImageParams | Generation | Provide an input conditioning image along with a text prompt to generate an image that follows the layout and composition of the conditioning image. | 
| COLOR\_GUIDED\_GENERATION | colorGuidedGenerationParams | Generation | Provide a list of color values in hexadecimal format (for example, #FF9800) along with a text prompt and optional reference image to generate an image that follows the specified color palette. | 
| IMAGE\_VARIATION | imageVariationParams | Generation | Provide one or more input images—with or without a text prompt—to influence the generated image. Can be used to influence the visual style of the generated image (when used with a text prompt), to generate variations of a single image (when used without a text prompt), and for other creative effects and control. | 
| INPAINTING | inPaintingParams | Editing | Modify an image by changing the area inside of a masked region. Can be used to add, remove, or replace elements of an image. | 
| OUTPAINTING | outPaintingParams | Editing | Modify an image by changing the area outside of a masked region. Can be used to replace the background behind a subject. | 
| BACKGROUND\_REMOVAL | backgroundRemovalParams | Editing | Automatically remove the background of any image, replacing the background with transparent pixels. Can be useful when you want to later composite the image with other elements in an image editing app, presentation, or website. The background can easily be changed to a solid color through custom code as well. | 
| VIRTUAL\_TRY\_ON | virtualTryOnParams | Editing | Provide a source image and a reference image, superimposing an object in the reference image onto the source image. Can be used to visualize clothing and accessories on different models or in different poses, alter the style and appearance of an object or article of clothing, or transfer styles and designs from one object to another. | 

## Input images for image generation
<a name="image-gen-input-images"></a>

Many task types require one or more input images to be included in the request. Any image used in the request must be encoded as a Base64 string. Generally, images can be in PNG or JPEG format and must be 8 bits per color channel (RGB). PNG images may contain an additional alpha channel, but that channel must not contain any transparent or translucent pixels. For specific details on supported input image dimensions, see [Supported image resolutions](#image-gen-resolutions).

A *mask image* is an image that indicates the area to be inpainted or outpainted. This image can contain only pure black and pure white pixels.

For inpainting requests, the area that is colored black is called *the mask* and will be changed. The rest of the mask image must contain only pure white pixels, which indicate the area outside the mask.

For outpainting requests, the area that is colored white will be changed by the model.

Mask images must not contain any pixels that are not pure black or pure white. If you are using a JPEG image as a mask, it must be compressed at 100% quality to avoid introducing non-white or non-black pixels during compression.
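
A quick programmatic check can catch an impure mask before a request is rejected. This is a minimal sketch that assumes you have already decoded the mask into 8-bit grayscale pixel values (the `is_valid_mask` name is illustrative):

```python
def is_valid_mask(pixels) -> bool:
    """Return True if every grayscale pixel is pure black (0) or pure white (255).

    `pixels` is any iterable of integer values in the range 0-255, such as the
    bytes of an 8-bit grayscale decode of the mask image.
    """
    return all(p in (0, 255) for p in pixels)
```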

For examples of how to encode or decode an image to or from a Base64 string, see [the code examples](https://docs.aws.amazon.com/nova/latest/userguide/image-gen-code-examples.html).
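
As a minimal illustration of the encoding step, the Python standard library's `base64` module is all that is needed (the helper names are illustrative):

```python
import base64


def image_bytes_to_base64(data: bytes) -> str:
    """Encode raw image file bytes (PNG or JPEG) as the Base64 string the API expects."""
    return base64.b64encode(data).decode("utf-8")


def base64_to_image_bytes(encoded: str) -> bytes:
    """Decode a Base64 image string from an API response back into raw bytes."""
    return base64.b64decode(encoded)
```

Read the file with `open(path, "rb").read()` before encoding, and write the decoded bytes back with `"wb"` mode to save a returned image.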

## Masking images
<a name="image-gen-masking"></a>

When you're editing an image, a mask is a way of defining the regions to edit. You can define a mask in one of three ways:
+ `maskPrompt` – Write a natural language text prompt describing the part(s) of the image to be masked.
+ `maskImage` – A black and white image where pure black pixels indicate the area inside the mask and pure white pixels indicate the area outside the mask.

  For inpainting requests, the black pixels will be changed by the model. For outpainting requests, the white pixels will be altered.
+ `garmentBasedMask` – An image-based mask that defines a region to be replaced along with some limited styling options.

You can use a photo editing tool to draw masks or create them with your own custom code. Otherwise, use the `maskPrompt` field to let the model infer the mask.

## Supported image resolutions
<a name="image-gen-resolutions"></a>

You may specify any output resolution for a generation task as long as it adheres to the following requirements:
+ Each side must be between 320 and 4096 pixels, inclusive.
+ Each side must be evenly divisible by 16.
+ The aspect ratio must be between 1:4 and 4:1. That is, one side can't be more than 4 times longer than the other side.
+ The total pixel count must be 4,194,304 or less.

Most of these constraints apply to input images as well. However, the sides of input images do not need to be evenly divisible by 16.
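
The resolution rules above can be captured in a small validity check. This sketch applies the generation-task rules, treating the 4.19 million figure as an inclusive limit of 4,194,304 pixels:

```python
def is_supported_resolution(width: int, height: int) -> bool:
    """Check a generation-task output resolution against the documented rules."""
    long_side, short_side = max(width, height), min(width, height)
    return (
        320 <= short_side and long_side <= 4096   # each side within 320-4096
        and width % 16 == 0 and height % 16 == 0  # evenly divisible by 16
        and long_side <= 4 * short_side           # aspect ratio between 1:4 and 4:1
        and width * height <= 4_194_304           # total pixel budget
    )
```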

# Virtual try-on
<a name="image-gen-vto"></a>

*Virtual try-on* is an image-guided use case of inpainting in which the contents of a reference image are superimposed into a source image based on the guidance of a mask image. Amazon Nova Canvas has been tuned for garments, accessories, furniture, and related objects. The model also generalizes well to other cases, such as adding a logo or text into an image. 

You can generate up to five images with the virtual try-on API. By default, only one image is generated.

To perform a virtual try-on, you must provide three images:
+ *Source image* - The original image that you want to modify. For example, this might be an image of a person or a room scene.
+ *Reference image* - The image containing the item, object, or article that you want to superimpose into the source image. For example, this might contain a jacket, bowl, or couch. For garments, the reference image can contain garments on or off a body and can contain multiple products that represent distinct outfit components (such as shirts, pants, and shoes in a single image).
+ *Mask image* - A black and white image that defines which part of the source image to modify. Black pixels indicate the area of the source image to modify, while white pixels indicate the areas to preserve. You can either provide your own mask image or let the model create one for you based on other input parameters that you provide.

  The mask image can be returned as part of the output if specified.

Here are some examples of how the model works.

------
#### [ Upper body clothing ]

The following images show an example of how Amazon Nova superimposes an upper body article of clothing onto a model.


| Source image | Reference image | Output | 
| --- |--- |--- |
|  ![\[A man wearing sunglasses, looking to left, wearing a blue shirt.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/vto1_source.jpg)  |  ![\[A pink-red button down shirt.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/vto1_ref.jpg)  |  ![\[A man wearing sunglasses, looking to the left, wearing a pink-red button down shirt.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/vto1_output.png)  | 

------
#### [ Couch in a room ]

The following images show an example of how Amazon Nova superimposes a couch into a room of furniture.


| Source image | Reference image | Output | 
| --- |--- |--- |
|  ![\[A midcentury, modern grey couch in a room surrounded by other decorations.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/vto2_source.jpg)  |  ![\[An orange couch against a white background.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/vto2_ref.jpg)  |  ![\[An orange couch in a room surrounded by other decorations.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/vto2_output.png)  | 

------

Unlike other Amazon Nova Canvas task types, virtual try-on does not support a text prompt or negative text prompt.

## Defining the mask image
<a name="image-gen-vto-mask"></a>

You can either directly provide a mask image by specifying `maskType: "IMAGE"` or allow the model to compute it automatically using auxiliary inputs such as `maskType: "GARMENT"` or `maskType: "PROMPT"`.

When a mask type of `"GARMENT"` is specified, Amazon Nova Canvas creates a garment-aware mask based on a `garmentClass` input parameter value that you specify. In most cases, you can use one of the following high-level garment classes:
+ `"UPPER_BODY"` - Creates a mask that includes full arm length.
+ `"LOWER_BODY"` - Creates a mask that includes full leg length with no gap between the legs.
+ `"FOOTWEAR"` - Creates a mask that fits the shoe profile demonstrated in the source image.
+ `"FULL_BODY"` - Creates a mask equivalent to the combination of `"UPPER_BODY"` and `"LOWER_BODY"`.

You can use the `"PROMPT"` mask type to use natural language to describe the item in the source image that you want to replace. This is useful for non-garment scenarios. This feature utilizes the same auto-masking functionality that exists in the `"INPAINTING"` task type via the `maskPrompt` parameter.

**Warning**  
Masks created with the `"PROMPT"` mask type will adhere tightly to the shape of the item you describe. This can be problematic in many scenarios because the product you are adding might not share the same silhouette or size of the item you are replacing. For this reason, the virtual try-on API also provides an optional `maskShape` parameter that can be set to `"BOUNDING_BOX"`. We recommend using this setting (which is the default) in most cases when using the `"PROMPT"` mask type.

## Generating new poses, hands, or faces
<a name="image-gen-vto-exclusions"></a>

You can instruct the model to either keep or regenerate the pose, hands, or face of the person in the source image. When you choose to keep these elements, they are automatically removed from the mask image, regardless of which `maskType` you have chosen.

You might want to preserve pose, hands, or face in the following situations:
+ You are developing an application that allows end-users to draw their own masks. Preserving these features prevents the end-users from accidentally including the hands or face in the mask.
+ You are using `maskShape: BOUNDING_BOX` but don't want to generate new hands or face. With `preserveFace: ON` or `preserveHands: ON`, these features are automatically removed from the mask.
+ You are using `maskType: GARMENT` and `maskShape: BOUNDING_BOX` with a model that is not in an upright posture. In this case, the bounding box mask can overlap the face, and we recommend using `preserveFace: ON`.

Conversely, you might want to regenerate the pose, hands, or face in the following situations:
+ For garments that cover the neck, `preserveFace: ON` can exclude enough of the neck to have a detrimental impact on the output.
+ When the model is wearing high-heeled shoes and the reference image is of flat-heeled shoes, or vice-versa. In this case, preserving the body pose creates unnatural looking results.
+ Similar to the previous point, when trying on handbags or other accessories, generating new poses or hands can generate more natural-looking results.

## Styling cues
<a name="image-gen-vto-styling"></a>

The `garmentStyling` parameter allows you to preserve or alter specific garment styling cues that you might find in a photo shoot. For example, Amazon Nova Canvas can modify the styling of a shirt so that its sleeves are either rolled up or down or it can modify the shirt so that it is tucked in or not. The following options are available:
+ `"longSleeveStyle"` - Controls whether the sleeves of a long-sleeve shirt are rolled up or down.
  + `"SLEEVE_DOWN"` - Can be applied when the subject in the source image is wearing a long-sleeve shirt (sleeves up or down), a short-sleeve shirt, or a sleeveless shirt.
  + `"SLEEVE_UP"` - Can be applied when the subject in the source image is wearing a long-sleeve shirt with the sleeves up, a short-sleeve shirt, or a sleeveless shirt.
+ `"tuckingStyle"` - Controls whether an upper body garment appears tucked in or loose.
  + `"UNTUCKED"` - Can be applied regardless of whether the source image has the shirt tucked or untucked.
  + `"TUCKED"` - Can be applied when the source image has the shirt tucked in.
+ `"outerLayerStyle"` - Controls whether an upper body garment is styled open or closed. This defaults to `"CLOSED"` which is appropriate for most garments (such as shirts and sweaters). For outer garments, like jackets, setting this value to `"OPEN"` guarantees that the original upper body garment from the source image will be retained with the new outer garment being layered over it. Using a value of `"CLOSED"` with an outer garment might not always render the garment as closed. This is because a value of `"CLOSED"` only guarantees that every upper body garment in the source image will be replaced and can sometimes result in an open outer layer with a new under layer visible beneath.
  + `"CLOSED"`
  + `"OPEN"`

For more information, see the `garmentStyling` parameters in [Request and response structure for image generation](image-gen-req-resp-structure.md).

## Image stitching
<a name="image-gen-vto-stitching"></a>

Virtual try-on allows you to determine how images are stitched together to create the final image. You can choose from `"BALANCED"`, `"SEAMLESS"`, and `"DETAILED"`. Each merge style takes a different approach to how it stitches the elements together to create the final image, each with its own benefits and tradeoffs.
+ `"BALANCED"` - Protects any non-masked pixels in the original image, ensuring they remain 100% accurate to the original. In some cases, there will be a slight perceptible color or texture mismatch in the output image that presents as a kind of “ghost” image of the mask shape. This is most likely to occur when the image features a person standing against a solid color or uniformly textured background. To avoid this, you can use the `"SEAMLESS"` merge style instead.
+ `"SEAMLESS"` - Ensures that there will never be a noticeable seam between the masked and non-masked images areas in the final image. The tradeoff is that all pixels in the image change slightly and sometimes fine-grained details are diminished in the non-masked areas of the image.
+ `"DETAILED"` - Can greatly improve fine-grained details like logos and text, especially when the masked area is relatively small compared to the overall image. The model achieves this by performing inpainting on a tightly cropped, higher resolution version of the original image that only includes the masked area. It then merges the result back into the original image. As with using `"BALANCED"` mode, this mode can sometimes result in a visible seam.
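
Putting the documented parameters together, a virtual try-on request body might be assembled as sketched below. The exact schema is defined in [Request and response structure for image generation](image-gen-req-resp-structure.md); the `sourceImage`, `referenceImage`, and `mergeStyle` field names and the nesting shown here are assumptions for illustration only:

```python
def build_vto_body(source_b64: str, reference_b64: str) -> dict:
    """Sketch of a VIRTUAL_TRY_ON request body for a garment try-on.

    Several field names here are hypothetical; verify them against the
    request and response structure reference before use.
    """
    return {
        "taskType": "VIRTUAL_TRY_ON",
        "virtualTryOnParams": {
            "sourceImage": source_b64,        # hypothetical field name
            "referenceImage": reference_b64,  # hypothetical field name
            "maskType": "GARMENT",            # let the model compute the mask
            "garmentBasedMask": {             # hypothetical nesting
                "garmentClass": "UPPER_BODY",
            },
            "mergeStyle": "SEAMLESS",         # hypothetical field name
        },
    }
```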

# Visual Styles
<a name="image-gen-styles"></a>

Amazon Nova Canvas allows you to generate images in a variety of predefined styles. With the `"TEXT_TO_IMAGE"` task type, use the `style` parameter to pick a predefined visual style. Choose from these available styles:
+ `"3D_ANIMATED_FAMILY_FILM"` - A style that alludes to 3D animated films, featuring realistic rendering and characters with cartoonish or exaggerated physical features. This style is capable of producing character-focused images, object- or prop-focused images, and environment- or setting-focused images of both interiors and exteriors.
+ `"DESIGN_SKETCH"` - A style featuring hand-drawn line-art without a lot of wash or fill that is not too refined. This style is used to convey concepts and ideas. It is useful for fashion and product design sketches as well as architectural sketches.
+ `"FLAT_VECTOR_ILLUSTRATION"` - A flat-color illustration style that is popular in business communications. It is also useful for icon and clip art images.
+ `"GRAPHIC_NOVEL_ILLUSTRATION"` - A vivid ink illustration style. Characters do not have exaggerated features, as with some other more cartoon-ish styles.
+ `"MAXIMALISM"` - Bright, elaborate, bold, and complex with strong shapes, and rich details. This style can be applied to a variety of subjects, such as illustrations, photography, interior design, graphic design, or packaging design.
+ `"MIDCENTURY_RETRO"` - Alludes to graphic design trends from the 1940s through 1960s.
+ `"PHOTOREALISM"` - A realistic photography style spanning repertoires such as stock, editorial, and journalistic photography. This style shows realistic lighting, depth of field, and composition fitting the repertoire. The most common subjects are humans, but animals, landscapes, and other natural features work as well.
+ `"SOFT_DIGITAL_PAINTING"` - This style has more finish and refinement than a sketch. It includes shading, three dimensionality, and texture that might be lacking in other styles.

**Note**  
Amazon Nova Canvas is not limited to the styles in this list. You can achieve many other visual styles by omitting the `style` parameter and describing your desired style within your prompt. Optionally, you can use the `negativeText` parameter to steer the result away from undesired style characteristics.

The following images display the same image generated in each of the previously described styles.

## 3D animated family film
<a name="styles-collapsable1"></a>

![\[The image depicts an elephant in the 3d animated family film style.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/3D_ANIMATED_FAMILY_FILM.png)


## Design sketch
<a name="styles-collapsable2"></a>

![\[The image depicts an elephant in the design sketch style.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/DESIGN_SKETCH.png)


## Flat vector illustration
<a name="styles-collapsable3"></a>

![\[The image depicts an elephant in the flat vector illustration style.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/FLAT_VECTOR_ILLUSTRATION.png)


## Graphic novel illustration
<a name="styles-collapsable4"></a>

![\[The image depicts an elephant in the graphic novel illustration style.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/GRAPHIC_NOVEL_ILLUSTRATION.png)


## Maximalism
<a name="styles-collapsable5"></a>

![\[The image depicts an elephant in the maximalism style.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/MAXIMALISM.png)


## Midcentury retro
<a name="styles-collapsable6"></a>

![\[The image depicts an elephant in the midcentury retro style.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/MIDCENTURY_RETRO.png)


## Photorealism
<a name="styles-collapsable7"></a>

![\[The image depicts an elephant in the photorealism style.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/PHOTOREALISM.png)


## Soft digital painting
<a name="styles-collapsable8"></a>

![\[The image depicts an elephant in the soft digital painting style.\]](http://docs.aws.amazon.com/nova/latest/userguide/images/SOFT_DIGITAL_PAINTING.png)


# Request and response structure for image generation
<a name="image-gen-req-resp-structure"></a>

**Image generation**  
The following examples present different image generation use cases. Each example provides an explanation of the fields that are used for the image generation.

------
#### [ Text-to-image request ]

```
{
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": string,
        "negativeText": string,
        "style": "3D_ANIMATED_FAMILY_FILM" |
        "DESIGN_SKETCH" | "FLAT_VECTOR_ILLUSTRATION" |
        "GRAPHIC_NOVEL_ILLUSTRATION" | "MAXIMALISM" |
        "MIDCENTURY_RETRO" | "PHOTOREALISM" |
        "SOFT_DIGITAL_PAINTING"
    },
    "imageGenerationConfig": {
        "width": int,
        "height": int,
        "quality": "standard" | "premium",
        "cfgScale": float,
        "seed": int,
        "numberOfImages": int
    }
}
```

The following `textToImageParams` fields are used in this request:
+ `text` (Required) – A text prompt to generate the image. The prompt must be 1-1024 characters in length.
+ `negativeText` (Optional) – A text prompt to define what not to include in the image. This value must be 1-1024 characters in length.
+ `style` (Optional) – Specifies the style that is used to generate this image. For more information, see [Visual Styles](image-gen-styles.md).

**Note**  
Avoid using negating words (“no”, “not”, “without”, etc.) in your `text` and `negativeText` values. For example, if you do not want mirrors in an image, instead of including "no mirrors" or "without mirrors" in the `text` field, use the word "mirrors" in the `negativeText` field.

------
#### [ Text-to-image request with image conditioning ]

```
{
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "conditionImage": string (Base64 encoded image),
        "controlMode": "CANNY_EDGE" | "SEGMENTATION", 
        "controlStrength": float,
        "text": string,
        "negativeText": string,
        "style": "3D_ANIMATED_FAMILY_FILM" |
        "DESIGN_SKETCH" | "FLAT_VECTOR_ILLUSTRATION" |
        "GRAPHIC_NOVEL_ILLUSTRATION" | "MAXIMALISM" |
        "MIDCENTURY_RETRO" | "PHOTOREALISM" |
        "SOFT_DIGITAL_PAINTING"
    },
    "imageGenerationConfig": {
        "width": int,
        "height": int,
        "quality": "standard" | "premium",
        "cfgScale": float,
        "seed": int,
        "numberOfImages": int
    }
}
```

The following `textToImageParams` fields are used in this request:
+ `conditionImage` (Required) – A JPEG or PNG image that guides the layout and composition of the generated image. The image must be formatted as a Base64 string. See [Input images for image generation](image-gen-access.md#image-gen-input-images) for additional requirements.
+ `controlMode` (Optional) – Specifies which conditioning mode is used. The default value is "CANNY_EDGE".
  + `CANNY_EDGE` – Elements of the generated image will follow the prominent contours, or "edges", of the condition image closely.
  + `SEGMENTATION` – The condition image will be automatically analyzed to identify prominent content shapes. This analysis results in a segmentation mask which guides the generation, resulting in a generated image that closely follows the layout of the condition image but allows the model more freedom within the bounds of each content area.
+ `controlStrength` (Optional) – Specifies how similar the layout and composition of the generated image should be to the `conditionImage`. The range is 0 to 1.0, and lower values introduce more randomness. The default value is 0.7.
+ `text` (Required) – A text prompt to generate the image. The prompt must be 1-1024 characters in length.
+ `negativeText` (Optional) – A text prompt to define what not to include in the image. This value must be 1-1024 characters in length.
+ `style` (Optional) – Specifies the style that is used to generate this image. For more information, see [Visual Styles](image-gen-styles.md).

**Note**  
Avoid using negating words (“no”, “not”, “without”, etc.) in your `text` and `negativeText` values. For example, if you do not want mirrors in an image, instead of including "no mirrors" or "without mirrors" in the `text` field, use the word "mirrors" in the `negativeText` field.

------
#### [ Color guided image generation request ]

```
{
    "taskType": "COLOR_GUIDED_GENERATION",
    "colorGuidedGenerationParams": {
        "colors": string[] (list of hexadecimal color values),
        "referenceImage": string (Base64 encoded image),
        "text": string,
        "negativeText": string
    },
    "imageGenerationConfig": {
        "width": int,
        "height": int,
        "quality": "standard" | "premium",
        "cfgScale": float,
        "seed": int,
        "numberOfImages": int
    }
}
```

The following `colorGuidedGenerationParams` fields are used in this request:
+ `colors` (Required) – A list of up to 10 color codes that define the desired color palette for your image. Expressed as hexadecimal values in the form "#RRGGBB". For example, "#00FF00" is pure green and "#FCF2AB" is a warm yellow. The `colors` list has the strongest effect when a `referenceImage` is not provided. Otherwise, the colors in the list and the colors from the reference image will both be used in the final output.
+ `referenceImage` (Optional) – A JPEG or PNG image to use as a subject and style reference. The colors of the image will also be incorporated into your final output, along with the colors from the `colors` list. See [Input images for image generation](image-gen-access.md#image-gen-input-images) for additional requirements.
+ `text` (Required) – A text prompt to generate the image. The prompt must be 1-1024 characters in length.
+ `negativeText` (Optional) – A text prompt to define what not to include in the image. This value must be 1-1024 characters in length.

**Note**  
Avoid using negating words (“no”, “not”, “without”, etc.) in your `text` and `negativeText` values. For example, if you do not want mirrors in an image, instead of including "no mirrors" or "without mirrors" in the `text` field, use the word "mirrors" in the `negativeText` field.
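
A sketch of assembling this request body, with a simple up-front check that the palette follows the 1-10 color, "#RRGGBB" rules described above (the helper name and configuration values are illustrative):

```python
import json
import re

HEX_COLOR = re.compile(r"^#[0-9A-Fa-f]{6}$")


def build_color_guided_body(prompt: str, colors: list[str]) -> str:
    """Serialize a COLOR_GUIDED_GENERATION request body, validating the palette first."""
    if not 1 <= len(colors) <= 10:
        raise ValueError("provide 1 to 10 colors")
    for color in colors:
        if not HEX_COLOR.match(color):
            raise ValueError(f"not a #RRGGBB color: {color}")
    return json.dumps({
        "taskType": "COLOR_GUIDED_GENERATION",
        "colorGuidedGenerationParams": {
            "colors": colors,
            "text": prompt,
        },
        "imageGenerationConfig": {
            "width": 1024,
            "height": 1024,
            "quality": "standard",
            "cfgScale": 6.5,   # illustrative value
            "seed": 0,
            "numberOfImages": 1,
        },
    })
```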

------
#### [ Image variation request ]

```
{
    "taskType": "IMAGE_VARIATION",
    "imageVariationParams": {
        "images": string[] (list of Base64 encoded images),
        "similarityStrength": float,
        "text": string,
        "negativeText": string
    },
    "imageGenerationConfig": {
        "height": int,
        "width": int,
        "cfgScale": float,
        "seed": int,
        "numberOfImages": int
    }
}
```

The following `imageVariationParams` fields are used in this request:
+ `images` (Required) - A list of 1–5 images to use as references. Each must be in JPEG or PNG format and encoded as Base64 strings. See [Input images for image generation](image-gen-access.md#image-gen-input-images) for additional requirements.
+ `similarityStrength` (Optional) – Specifies how similar the generated image should be to the input images. Valid values are between 0.2 and 1.0, with lower values introducing more randomness.
+ `text` (Optional) – A text prompt to guide the generated image. The prompt must be 1-1024 characters in length. If omitted, the generated image is derived from the reference images alone.
+ `negativeText` (Optional) – A text prompt to define what not to include in the image. This value must be 1-1024 characters in length.

**Note**  
Avoid using negating words (“no”, “not”, “without”, etc.) in your `text` and `negativeText` values. For example, if you do not want mirrors in an image, instead of including "no mirrors" or "without mirrors" in the `text` field, use the word "mirrors" in the `negativeText` field.

------

**Image editing**  
The following examples present different image editing use cases. Each example provides an explanation of the fields that are used to edit the image.

------
#### [ Inpainting request ]

```
{
    "taskType": "INPAINTING",
    "inPaintingParams": {
        "image": string (Base64 encoded image),
        "maskPrompt": string,
        "maskImage": string (Base64 encoded image),
        "text": string,
        "negativeText": string
    },
    "imageGenerationConfig": {
        "numberOfImages": int,
        "quality": "standard" | "premium",
        "cfgScale": float,
        "seed": int
    }
}
```

The following `inPaintingParams` fields are used in this request:
+ `image` (Required) - The JPEG or PNG that you want to modify, formatted as a Base64 string. See [Input images for image generation](image-gen-access.md#image-gen-input-images) for additional requirements.
+ `maskPrompt` or `maskImage` (Required) – You must specify either the `maskPrompt` or the `maskImage` parameter, but not both.

  The `maskPrompt` is a natural language text prompt that describes the regions of the image to edit. 

  The `maskImage` is an image that defines the areas of the image to edit. The mask image must be the same size as the input image. Areas to be edited are shaded pure black and areas to ignore are shaded pure white. No other colors are allowed in the mask image.

  Note that inpainting and outpainting requests are opposites in regard to the color requirements of the mask images.
+ `text` (Optional) – A text prompt that describes what to generate within the masked region. If provided, the prompt must be 1-1024 characters in length. If you omit this field, the model removes the elements inside the masked area and replaces them with a seamless extension of the image background.
+ `negativeText` (Optional) – A text prompt to define what not to include in the image. This value must be 1-1024 characters in length.

**Note**  
Avoid using negating words (“no”, “not”, “without”, etc.) in your `text` and `negativeText` values. For example, if you do not want mirrors in an image, instead of including "no mirrors" or "without mirrors" in the `text` field, use the word "mirrors" in the `negativeText` field.
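If you build mask images programmatically, you can verify the pure black-and-white requirement before submitting a request. The following sketch operates on plain (R, G, B) tuples (for example, the result of `Image.getdata()` on an RGB-converted mask in Pillow); the helper name `is_valid_mask_pixels` is illustrative, not part of the API:

```
def is_valid_mask_pixels(pixels):
    """Return True if every RGB pixel is pure black or pure white.

    `pixels` is any iterable of (R, G, B) tuples, for example the
    result of Image.getdata() on an RGB-converted mask image.
    """
    allowed = {(0, 0, 0), (255, 255, 255)}
    return all(tuple(p) in allowed for p in pixels)


# A 2x2 mask: top row marked for editing (black), bottom row ignored (white).
mask = [(0, 0, 0), (0, 0, 0), (255, 255, 255), (255, 255, 255)]
print(is_valid_mask_pixels(mask))               # True
print(is_valid_mask_pixels([(128, 128, 128)]))  # False - gray is not allowed
```

Running this check client-side avoids a round trip that would end in a `ValidationException`.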

------
#### [ Outpainting request ]

```
{
    "taskType": "OUTPAINTING",
    "outPaintingParams": {
        "image": string (Base64 encoded image),
        "maskPrompt": string,
        "maskImage": string (Base64 encoded image),
        "outPaintingMode": "DEFAULT" | "PRECISE",
        "text": string,
        "negativeText": string
    },
    "imageGenerationConfig": {
        "numberOfImages": int,
        "quality": "standard" | "premium",
        "cfgScale": float,
        "seed": int
    }
}
```

The following `outPaintingParams` fields are used in this request:
+ `image` (Required) - The JPEG or PNG that you want to modify, formatted as a Base64 string. See [Input images for image generation](image-gen-access.md#image-gen-input-images) for additional requirements.
+ `maskPrompt` or `maskImage` (Required) – You must specify either the `maskPrompt` or the `maskImage` parameter, but not both.

  The `maskPrompt` is a natural language text prompt that describes the regions of the image to edit. 

  The `maskImage` is an image that defines the areas of the image to preserve. The mask image must be the same size as the input image. Areas to be preserved are shaded pure black and areas to be regenerated are shaded pure white. No other colors are allowed in the mask image.

  Note that inpainting and outpainting requests are opposites in regard to the color requirements of the mask images.
+ `outPaintingMode` - Determines how the mask that you provide is interpreted.

  Use `DEFAULT` to transition smoothly between the masked area and the non-masked area. Some of the original pixels are used as the starting point for the new background. This mode is generally better when you want the new background to use similar colors as the original background. However, you can get a halo effect if your prompt calls for a new background that is significantly different than the original background.

  Use `PRECISE` to strictly adhere to the mask boundaries. This mode is generally better when you are making significant changes to the background.
+ `text` (Optional) – A text prompt that describes what to generate within the masked region. If provided, the prompt must be 1-1024 characters in length. If you omit this field, the model removes the elements inside the masked area and replaces them with a seamless extension of the image background.
+ `negativeText` (Optional) – A text prompt to define what not to include in the image. This value must be 1-1024 characters in length.

**Note**  
Avoid using negating words (“no”, “not”, “without”, etc.) in your `text` and `negativeText` values. For example, if you do not want mirrors in an image, instead of including "no mirrors" or "without mirrors" in the `text` field, use the word "mirrors" in the `negativeText` field.

------
#### [ Background removal request ]

```
{
    "taskType": "BACKGROUND_REMOVAL",
    "backgroundRemovalParams": {
        "image": string (Base64 encoded image)
    }
}
```

The following `backgroundRemovalParams` field is used in this request:
+ `image` (Required) – The JPEG or PNG that you want to modify, formatted as a Base64 string. See [Input images for image generation](image-gen-access.md#image-gen-input-images) for additional requirements.

The `BACKGROUND_REMOVAL` task will return a PNG image with full 8-bit transparency. This format gives you smooth, clean isolation of the foreground objects and makes it easy to composite the image with other elements in an image editing app, presentation, or website. The background can easily be changed to a solid color using simple custom code.
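For example, one way to change the background to a solid color is standard alpha compositing over the returned RGBA pixels. The helper below is an illustrative sketch that works on plain (R, G, B, A) tuples rather than any particular imaging library; the function name is not part of the API:

```
def composite_on_color(rgba_pixels, background=(255, 255, 255)):
    """Alpha-composite RGBA pixels over a solid background color.

    `rgba_pixels` is an iterable of (R, G, B, A) tuples, such as
    Image.getdata() from the PNG that BACKGROUND_REMOVAL returns.
    Returns a list of opaque (R, G, B) tuples.
    """
    out = []
    for r, g, b, a in rgba_pixels:
        alpha = a / 255.0
        out.append(tuple(
            round(fg * alpha + bg * (1.0 - alpha))
            for fg, bg in zip((r, g, b), background)
        ))
    return out


# A fully transparent pixel becomes the background; an opaque pixel is unchanged.
print(composite_on_color([(10, 20, 30, 0), (10, 20, 30, 255)]))
# [(255, 255, 255), (10, 20, 30)]
```

Because the output uses full 8-bit transparency, partially transparent edge pixels blend smoothly into the new background with this approach.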

------
#### [ Virtual try-on ]

```
{
    "taskType": "VIRTUAL_TRY_ON",
    "virtualTryOnParams": {
        "sourceImage": string (Base64 encoded image),
        "referenceImage": string (Base64 encoded image),
        "maskType": "IMAGE" | "GARMENT" | "PROMPT",
        "imageBasedMask": {
            "maskImage": string (Base64 encoded image)
        },
        "garmentBasedMask": {
            "maskShape": "CONTOUR" | "BOUNDING_BOX" | "DEFAULT",
            "garmentClass": "UPPER_BODY" | "LOWER_BODY" |
            "FULL_BODY" | "FOOTWEAR" | "LONG_SLEEVE_SHIRT" |
            "SHORT_SLEEVE_SHIRT" | "NO_SLEEVE_SHIRT" |
            "OTHER_UPPER_BODY" | "LONG_PANTS" | "SHORT_PANTS" |
            "OTHER_LOWER_BODY" | "LONG_DRESS" | "SHORT_DRESS" |
            "FULL_BODY_OUTFIT" | "OTHER_FULL_BODY" | "SHOES" |
            "BOOTS" | "OTHER_FOOTWEAR",
            "garmentStyling": {
                "longSleeveStyle": "SLEEVE_DOWN" | "SLEEVE_UP",
                "tuckingStyle": "UNTUCKED" | "TUCKED",
                "outerLayerStyle": "CLOSED" | "OPEN"
            }
        },
        "promptBasedMask": {
            "maskShape": "BOUNDING_BOX" | "CONTOUR" | "DEFAULT",
            "maskPrompt": string
        },
        "maskExclusions": {
            "preserveBodyPose": "ON" | "OFF" | "DEFAULT",
            "preserveHands": "ON" | "OFF" | "DEFAULT",
            "preserveFace": "ON" | "OFF" | "DEFAULT"
        },
        "mergeStyle": "BALANCED" | "SEAMLESS" | "DETAILED",
        "returnMask": boolean
    },
    "imageGenerationConfig": {
        "numberOfImages": int,
        "quality": "standard" | "premium",
        "cfgScale": float,
        "seed": int
    }
}
```

The following `virtualTryOnParams` fields are used in this request:
+ `sourceImage` (Required) – The JPEG or PNG that you want to modify, formatted as a Base64 string. See [Input images for image generation](image-gen-access.md#image-gen-input-images) for additional requirements.
+ `referenceImage` (Required) – The JPEG or PNG that contains the object that you want to superimpose onto the source image, formatted as a Base64 string. See [Input images for image generation](image-gen-access.md#image-gen-input-images) for additional requirements.
+ `maskType` (Required) – Specifies whether the mask is provided as an image, prompt, or garment mask.
+ `imageBasedMask` – Required when `maskType` is `"IMAGE"`.

  The `maskImage` is an image that defines the areas of the image to edit. The mask image must be the same size as the input image. Areas to be edited are shaded pure black and areas to ignore are shaded pure white. No other colors are allowed in the mask image.
+ `garmentBasedMask` – Required when `maskType` is `"GARMENT"`.
  + `maskShape` (Optional) – Defines the shape of the mask bounding box. The shape and size of the bounding box can have an effect on how the reference image is transferred to the source image.
  + `garmentClass` (Required) – Defines the article of clothing that is being transferred. This parameter allows the model to focus on the specific parts of the reference image that you want to transfer.
  + `garmentStyling` (Optional) – Provides styling cues to the model for certain articles of clothing. The `longSleeveStyle` and `tuckingStyle` parameters apply only to upper body garments. The `outerLayerStyle` parameter applies only to outer layer, upper body garments.
+ `promptBasedMask` – Required when `maskType` is `"PROMPT"`.
  + `maskShape` (Optional) – Defines the shape of the mask bounding box. The shape and size of the bounding box can have an effect on how the reference image is transferred to the source image.
  + `maskPrompt` (Required) – A natural language text prompt that describes the regions of the image to edit.
+ `maskExclusions` (Optional) – When a person is detected in the source image, these parameters determine whether their body pose, hands, and face should be kept in the output image or regenerated.
+ `mergeStyle` (Optional) – Determines how the source and reference images are stitched together. Each merge style takes a different approach to stitching the elements together, with its own benefits and tradeoffs.
  + `"BALANCED"` - Protects any non-masked pixels in the original image, ensuring they remain 100% accurate to the original. In some cases, there will be a slight perceptible color or texture mismatch in the output image that presents as a kind of “ghost” image of the mask shape. This is most likely to occur when the image features a person standing against a solid color or uniformly textured background. To avoid this, you can use the `"SEAMLESS"` merge style instead.
  + `"SEAMLESS"` - Ensures that there will never be a noticeable seam between the masked and non-masked image areas in the final image. The tradeoff is that this mode changes all pixels in the image slightly and can sometimes diminish fine-grained details in the non-masked areas of the image.
  + `"DETAILED"` - Can greatly improve fine-grained details like logos and text, especially when the masked area is relatively small compared to the overall image. The model achieves this by performing inpainting on a tightly cropped, higher resolution version of the original image that only includes the masked area. It then merges the result back into the original image. As with using `"BALANCED"` mode, this mode can sometimes result in a visible seam.
+ `returnMask` (Optional) – Specifies whether the mask image is returned with the output image.
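As a reference, a garment-masked request body might be assembled as follows. This is a sketch only; the parameter values shown (garment class, merge style, and so on) are placeholders to adapt to your own images:

```
import json


def build_try_on_body(source_b64, reference_b64):
    """Assemble a garment-masked VIRTUAL_TRY_ON request body.

    `source_b64` and `reference_b64` are Base64-encoded JPEG or PNG
    images. The parameter values below are illustrative defaults.
    """
    return json.dumps({
        "taskType": "VIRTUAL_TRY_ON",
        "virtualTryOnParams": {
            "sourceImage": source_b64,
            "referenceImage": reference_b64,
            "maskType": "GARMENT",
            "garmentBasedMask": {
                "maskShape": "DEFAULT",
                "garmentClass": "UPPER_BODY"
            },
            "mergeStyle": "BALANCED",
            "returnMask": False
        },
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "quality": "standard",
            "cfgScale": 6.5,
            "seed": 0
        }
    })


# Placeholder strings stand in for real Base64-encoded images.
body = build_try_on_body("<source image Base64>", "<reference image Base64>")
print(json.loads(body)["taskType"])  # VIRTUAL_TRY_ON
```

The resulting string can be passed as the `body` argument of an `invoke_model` call, as shown in the code examples later in this topic.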

------

**Response body**  
The response body will contain one or more of the following fields:

```
{
    "images": string[] (list of Base64 encoded images),
    "maskImage": string (Base64 encoded image),
    "error": string
}
```
+ `images` – When successful, a list of Base64-encoded strings that represent each image that was generated is returned. This list does not always contain the same number of images that you requested. Individual images might be blocked after generation if they do not align with the AWS Responsible AI (RAI) content moderation policy. Only images that align with the RAI policy are returned.
+ `maskImage` - The mask image, returned when the request specifies that the mask should be returned with the output (for example, by setting `returnMask` to true).
+ `error` – If any image does not align with the RAI policy, this field is returned. Otherwise, this field is omitted from the response.
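Because individual images can be blocked after generation, it is safest to read the `error` field before assuming a full set of images was returned. A minimal parsing sketch (the helper name `decode_response` is illustrative):

```
import base64


def decode_response(response_body):
    """Decode an image generation response body (already parsed from
    JSON) into a list of image byte strings.

    Returns (images_bytes, error), where `error` is None only when no
    images were blocked by content moderation.
    """
    error = response_body.get("error")
    images = [
        base64.b64decode(b64) for b64 in response_body.get("images", [])
    ]
    return images, error


# Partial success: one image returned, plus an error describing the block.
images, error = decode_response({
    "images": ["aGVsbG8="],  # Base64 for b"hello", standing in for image bytes
    "error": "Some of the generated images have been blocked by our content filters."
})
print(images[0])          # b'hello'
print(error is None)      # False
```

Checking `error` first lets your application distinguish a full success from a partial one before it writes image files.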

The `imageGenerationConfig` field is common to all task types except `BACKGROUND_REMOVAL`. It is optional and contains the following fields. If you omit this object, the default configurations are used.
+ `width` and `height` (Optional) – Define the size and aspect ratio of the generated image. Both default to 1024.

  The `width` and `height` values should not be provided for the `"INPAINTING"`, `"OUTPAINTING"`, or `"VIRTUAL_TRY_ON"` task types.

  For the full list of supported resolutions, see [Supported image resolutions](image-gen-access.md#image-gen-resolutions).
+ `quality` (Optional) - Specifies the quality to use when generating the image - "standard" (default) or "premium".
+ `cfgScale` (Optional) – Specifies how strictly the model should adhere to the prompt. Values range from 1.1-10, inclusive, and the default value is 6.5.
  + Low values (1.1-3) - More creative freedom for the AI, potentially more aesthetic, but low contrast and less prompt-adherent results
  + Medium values (4-7) - Balanced approach, typically recommended for most generations
  + High values (8-10) - Strict prompt adherence, which can produce more precise results but sometimes at the cost of natural aesthetics and increased color saturation
+ `numberOfImages` (Optional) – The number of images to generate.
+ `seed` (Optional) – Determines the initial noise setting for the generation process. Changing the seed value while leaving all other parameters the same will produce a totally new image that still adheres to your prompt, dimensions, and other settings. It is common to experiment with a variety of seed values to find the perfect image.

**Important**  
Resolution (`width` and `height`), `numberOfImages`, and `quality` all have an impact on the time it takes for generation to complete. The AWS SDK has a default `read_timeout` of 60 seconds which can easily be exceeded when using higher values for these parameters. Therefore, it is recommended that you increase the `read_timeout` of your invocation calls to at least 5 minutes (300 seconds). The code examples demonstrate how to do this.

# Error handling
<a name="image-gen-errors"></a>

There are three primary types of errors that you want to handle in your application code. These are input validation errors, AWS Responsible AI (RAI) input deflection errors, and RAI output deflection errors. These errors are unique to Amazon Nova Canvas.

Input validation errors occur when you use an unsupported value for an input parameter. For example, a width value that doesn’t match one of the supported resolutions, an input image that exceeds the maximum allowed size, or a `maskImage` that contains colors other than pure black and white. All input validation errors are expressed as a `ValidationException` which contains a message string describing the cause of the problem.

RAI input deflection errors occur when any of the input text values or images are determined to violate the AWS Responsible AI policy. These errors are expressed as a `ValidationException` with one of the following messages:
+ Input text validation message - “This request has been blocked by our content filters. Please adjust your text prompt to submit a new request.”
+ Input image validation message - “This request has been blocked by our content filters. Please adjust your input image to submit a new request.”

RAI output deflection errors occur when an image is generated but it is misaligned with the AWS Responsible AI policy. When this occurs, an exception is not used. Instead, a successful response is returned, and its structure contains an error field which is a string with one of the following values:
+ If all requested images violate RAI policy - “All of the generated images have been blocked by our content filters.”
+ If some, but not all, requested images violate RAI policy - “Some of the generated images have been blocked by our content filters.”
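A minimal sketch of how application code might tell these cases apart, matching on the message strings listed above (the matching logic and helper name are illustrative, not an official API):

```
def classify_canvas_error(validation_message=None, response_error=None):
    """Classify an Amazon Nova Canvas failure.

    Pass `validation_message` from a ValidationException, or
    `response_error` from the `error` field of a successful response.
    """
    if validation_message is not None:
        # RAI input deflections use the "content filters" wording;
        # anything else is an ordinary input validation error.
        if "blocked by our content filters" in validation_message:
            return "rai_input_deflection"
        return "input_validation"
    if response_error is not None:
        if response_error.startswith("All of the generated images"):
            return "rai_output_deflection_all"
        return "rai_output_deflection_partial"
    return "success"


print(classify_canvas_error(
    validation_message="This request has been blocked by our content filters. "
                       "Please adjust your text prompt to submit a new request."))
# rai_input_deflection
```

A partial output deflection still returns usable images, so your application may want to treat it as a warning rather than a failure.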

# Code examples
<a name="image-gen-code-examples"></a>

The following examples provide sample code for various image generation tasks.

------
#### [ Text to image generation ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image from a text prompt with the Amazon Nova Canvas model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image
from botocore.config import Config

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Nova Canvas"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Nova Canvas model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Nova Canvas model %s", model_id)

    bedrock = boto3.client(
        service_name='bedrock-runtime',
        config=Config(read_timeout=300)
    )

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for a content-moderation error before decoding the images.
    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Nova Canvas model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Nova Canvas example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = 'amazon.nova-canvas-v1:0'

    prompt = """A photograph of a cup of coffee from the side."""

    body = json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {
            "text": prompt
        },
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 1024,
            "width": 1024,
            "cfgScale": 8.0,
            "seed": 0
        }
    })

    try:
        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Nova Canvas model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Inpainting ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use inpainting to generate an image from a source image with 
the Amazon Nova Canvas model (on demand).
The example uses a mask prompt to specify the area to inpaint.
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image
from botocore.config import Config

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Nova Canvas"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Nova Canvas model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Nova Canvas model %s", model_id)

    bedrock = boto3.client(
        service_name='bedrock-runtime',
        config=Config(read_timeout=300)
    )

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for a content-moderation error before decoding the images.
    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Nova Canvas model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Nova Canvas example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.nova-canvas-v1:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "INPAINTING",
            "inPaintingParams": {
                "text": "Modernize the windows of the house",
                "negativeText": "bad quality, low res",
                "image": input_image,
                "maskPrompt": "windows"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Nova Canvas model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Outpainting ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use outpainting to generate an image from a source image with 
the Amazon Nova Canvas model (on demand).
The example uses a mask image to outpaint the original image.
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image
from botocore.config import Config

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Nova Canvas"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Nova Canvas model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Nova Canvas model %s", model_id)

    bedrock = boto3.client(
        service_name='bedrock-runtime',
        config=Config(read_timeout=300)
    )

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for a content-moderation error before decoding the images.
    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Nova Canvas model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Nova Canvas example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.nova-canvas-v1:0'

        # Read image and mask image from file and encode as base64 strings.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')
        with open("/path/to/mask_image", "rb") as mask_image_file:
            input_mask_image = base64.b64encode(
                mask_image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "OUTPAINTING",
            "outPaintingParams": {
                "text": "Draw a chocolate chip cookie",
                "negativeText": "bad quality, low res",
                "image": input_image,
                "maskImage": input_mask_image,
                "outPaintingMode": "DEFAULT"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        }
        )

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Nova Canvas model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image variation ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image variation from a source image with the
Amazon Nova Canvas model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image
from botocore.config import Config

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Nova Canvas"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Nova Canvas model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Nova Canvas model %s", model_id)

    bedrock = boto3.client(
        service_name='bedrock-runtime',
        config=Config(read_timeout=300)
    )

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for a content-moderation error before decoding the images.
    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Nova Canvas model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Nova Canvas example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.nova-canvas-v1:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "IMAGE_VARIATION",
            "imageVariationParams": {
                "text": "Modernize the house, photo-realistic, 8k, hdr",
                "negativeText": "bad quality, low resolution, cartoon",
                "images": [input_image],
                "similarityStrength": 0.7,  # Range: 0.2 to 1.0
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Nova Canvas model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image conditioning ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use image conditioning to generate an image from a reference
image with the Amazon Nova Canvas model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image
from botocore.config import Config

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Nova Canvas"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Nova Canvas model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Nova Canvas model %s", model_id)

    bedrock = boto3.client(
        service_name='bedrock-runtime',
        config=Config(read_timeout=300)
    )

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    # Check for a content-moderation error before decoding the images.
    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Nova Canvas model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Nova Canvas example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.nova-canvas-v1:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "TEXT_IMAGE",
            "textToImageParams": {
                "text": "A robot playing soccer, anime cartoon style",
                "negativeText": "bad quality, low res",
                "conditionImage": input_image,
                "controlMode": "CANNY_EDGE"
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Nova Canvas model {model_id}.")


if __name__ == "__main__":
    main()
```
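The examples above display the result with `image.show()`, which requires a desktop image viewer. On a headless server it is often more convenient to write the decoded bytes straight to disk. A minimal sketch; the helper name and output path are illustrative, not part of the Amazon Nova Canvas API:

```python
def save_image_bytes(image_bytes: bytes, path: str) -> None:
    """Persist the raw bytes returned by generate_image() to a file."""
    # The decoded base64 payload is already a complete image file,
    # so the bytes can be written as-is without re-encoding.
    with open(path, "wb") as f:
        f.write(image_bytes)
```

For example, calling `save_image_bytes(image_bytes, "output.png")` after `generate_image` replaces the `image.show()` step.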

------
#### [ Color guided content ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image from a source image color palette with the
Amazon Nova Canvas model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image
from botocore.config import Config
from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Nova Canvas"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Nova Canvas model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Nova Canvas model %s", model_id)

    bedrock = boto3.client(
        service_name='bedrock-runtime',
        config=Config(read_timeout=300)
    )

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")
    if finish_reason is not None:
        # Fail fast: an error response may not contain any images to decode.
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Nova Canvas model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Nova Canvas example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.nova-canvas-v1:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "COLOR_GUIDED_GENERATION",
            "colorGuidedGenerationParams": {
                "text": "digital painting of a girl, dreamy and ethereal, pink eyes, peaceful expression, ornate frilly dress, fantasy, intricate, elegant, rainbow bubbles, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration",
                "negativeText": "bad quality, low res",
                "referenceImage": input_image,
                "colors": ["#ff8080", "#ffb280", "#ffe680", "#ffe680"]
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "height": 512,
                "width": 512,
                "cfgScale": 8.0
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Nova Canvas model {model_id}.")


if __name__ == "__main__":
    main()
```
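The `colors` field in the request body above takes hex strings in `#RRGGBB` form. If your palette starts out as RGB tuples (for example, sampled from a brand image), a small helper can convert them; the helper below is an illustrative sketch, not part of the Amazon Nova Canvas API:

```python
def rgb_to_hex(rgb):
    """Convert an (R, G, B) tuple of 0-255 ints to a '#rrggbb' string."""
    r, g, b = rgb
    if not all(0 <= c <= 255 for c in (r, g, b)):
        raise ValueError(f"RGB components must be in 0-255: {rgb}")
    return f"#{r:02x}{g:02x}{b:02x}"

# Build a palette like the one used in the request body above.
palette = [rgb_to_hex(c) for c in [(255, 128, 128), (255, 178, 128), (255, 230, 128)]]
```

The resulting list (`["#ff8080", "#ffb280", "#ffe680"]`) can be passed directly as the `colors` value.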

------
#### [ Background removal ]

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an image with background removal with the
Amazon Nova Canvas model (on demand).
"""
import base64
import io
import json
import logging
import boto3
from PIL import Image
from botocore.config import Config
from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by Amazon Nova Canvas"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_image(model_id, body):
    """
    Generate an image using Amazon Nova Canvas model on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        image_bytes (bytes): The image generated by the model.
    """

    logger.info(
        "Generating image with Amazon Nova Canvas model %s", model_id)

    bedrock = boto3.client(
        service_name='bedrock-runtime',
        config=Config(read_timeout=300)
    )

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")
    if finish_reason is not None:
        # Fail fast: an error response may not contain any images to decode.
        raise ImageError(f"Image generation error. Error is {finish_reason}")

    base64_image = response_body.get("images")[0]
    base64_bytes = base64_image.encode('ascii')
    image_bytes = base64.b64decode(base64_bytes)

    logger.info(
        "Successfully generated image with Amazon Nova Canvas model %s", model_id)

    return image_bytes


def main():
    """
    Entrypoint for Amazon Nova Canvas example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = 'amazon.nova-canvas-v1:0'

        # Read image from file and encode it as base64 string.
        with open("/path/to/image", "rb") as image_file:
            input_image = base64.b64encode(image_file.read()).decode('utf8')

        body = json.dumps({
            "taskType": "BACKGROUND_REMOVAL",
            "backgroundRemovalParams": {
                "image": input_image,
            }
        })

        image_bytes = generate_image(model_id=model_id,
                                     body=body)
        image = Image.open(io.BytesIO(image_bytes))
        image.show()

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating image with Amazon Nova Canvas model {model_id}.")


if __name__ == "__main__":
    main()
```
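Background removal produces an image with a transparent background, so the alpha channel must survive any re-save. JPEG has no alpha channel; saving the Pillow image as PNG preserves transparency. A minimal sketch, assuming the output path; the helper name is illustrative:

```python
import io

from PIL import Image


def save_with_transparency(image_bytes: bytes, path: str) -> str:
    """Save model output as PNG so the transparent background is preserved."""
    image = Image.open(io.BytesIO(image_bytes)).convert("RGBA")
    # PNG keeps the alpha channel; saving as JPEG would flatten it.
    image.save(path, format="PNG")
    return image.mode
```

Use this in place of `image.show()` when you need to composite the result over another background later.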

------