

# SQL Server modernization
<a name="sql-server-modernization"></a>

AWS Transform for SQL Server Modernization is an AI-powered service that automates the full-stack modernization of Microsoft SQL Server databases and their associated .NET applications to Amazon Aurora PostgreSQL. The service orchestrates the entire migration journey, from schema conversion and data migration to updating application code for the new PostgreSQL target, making your teams more productive by automating complex, labor-intensive tasks.

## Supported regions
<a name="supported-regions"></a>

AWS Transform for SQL Server is available in the US East (N. Virginia) Region (us-east-1).

**Cross-Region Usage:** For databases in unsupported regions, you can clone the database to a supported region for transformation, then deploy the results back to your target region.

## Capabilities and key features
<a name="capabilities-key-features"></a>

### Database transformation
<a name="database-transformation"></a>
+ **Schema conversion:** Automatically converts SQL Server schemas to Aurora PostgreSQL, including tables, views, indexes, constraints, and relationships
+ **Stored procedure transformation:** Converts T-SQL stored procedures to PL/pgSQL with AI-enhanced accuracy
+ **Data migration:** Migrates data with integrity validation using AWS Database Migration Service (DMS)
+ **Database objects:** Supports triggers, functions, views, computed columns, and identity columns
+ **Validation:** Automated data integrity verification and referential integrity checks
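To illustrate the kind of T-SQL-to-PostgreSQL rewriting involved, a few well-known function substitutions can be sketched as simple pattern rules. This toy rewriter is in no way the service's actual conversion engine, which parses full procedural T-SQL rather than pattern-matching:

```python
import re

# A few common T-SQL -> PostgreSQL function substitutions (illustrative only).
REWRITES = [
    (re.compile(r"\bGETDATE\(\)", re.IGNORECASE), "now()"),
    (re.compile(r"\bISNULL\(", re.IGNORECASE), "COALESCE("),
    (re.compile(r"\bLEN\(", re.IGNORECASE), "LENGTH("),
]

def rewrite_tsql(sql: str) -> str:
    """Apply each substitution in turn and return the rewritten statement."""
    for pattern, replacement in REWRITES:
        sql = pattern.sub(replacement, sql)
    return sql
```

For example, `SELECT ISNULL(name, ''), GETDATE()` becomes `SELECT COALESCE(name, ''), now()`.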

### Application transformation
<a name="application-transformation"></a>
+ **Entity Framework transformation:** Updates Entity Framework 6.3-6.5 and EF Core 1.0-8.0 configurations for PostgreSQL
+ **ADO.NET transformation:** Converts ADO.NET data access code from SQL Server to PostgreSQL providers
+ **Connection string updates:** Automatically updates all database connection strings to the new target PostgreSQL database
+ **Database provider changes:** Replaces SQL Server providers with Npgsql (PostgreSQL provider)
+ **ORM configuration updates:** Modifies data type mappings, identity columns, and database-specific configurations
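The connection string update can be pictured as a key-by-key translation from ADO.NET SQL Server keywords to their Npgsql counterparts. A minimal Python sketch, assuming a simple semicolon-delimited string and dropping SQL Server-only options; the service's actual rewriting is more thorough:

```python
# Map common SQL Server connection-string keys to Npgsql equivalents.
KEY_MAP = {
    "data source": "Host",
    "server": "Host",
    "initial catalog": "Database",
    "database": "Database",
    "user id": "Username",
    "password": "Password",
}

def to_npgsql(conn_str: str) -> str:
    """Translate a SQL Server connection string into Npgsql form,
    dropping keys with no mapping (e.g. TrustServerCertificate)."""
    parts = []
    for pair in conn_str.split(";"):
        key, _, value = pair.partition("=")
        new_key = KEY_MAP.get(key.strip().lower())
        if new_key:
            parts.append(f"{new_key}={value.strip()}")
    parts.append("Port=5432")  # default PostgreSQL port; adjust for your target
    return ";".join(parts)
```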

### Orchestration and validation
<a name="orchestration-validation"></a>
+ **Wave-based modernization:** Organizes large estates into logical migration phases
+ **Dependency mapping:** Identifies relationships between applications and databases
+ **Human-in-the-loop (HITL) checkpoints:** Provides review and approval gates at critical stages
+ **Automated validation:** Tests schema compatibility, data integrity, and application functionality
+ **CI/CD integration:** Integrates with existing development pipelines

### Deployment
<a name="deployment"></a>
+ **Amazon ECS and Amazon EC2 deployment:** Automated containerized deployment with auto-scaling support
+ **Infrastructure-as-code generation:** Creates CloudFormation or AWS CDK templates
+ **Automated deployment validation:** Verifies successful deployment with health checks
+ **Rollback capabilities:** Supports rollback procedures if issues arise

## Supported versions and project types
<a name="supported-versions-project-types"></a>

### SQL Server versions
<a name="sql-server-versions"></a>

AWS Transform supports the following SQL Server versions:


| SQL Server Version | Support Status | 
| --- | --- | 
| SQL Server 2022 | Supported | 
| SQL Server 2019 | Supported | 
| SQL Server 2017 | Supported | 
| SQL Server 2016 | Supported | 
| SQL Server 2014 | Supported | 
| SQL Server 2012 | Supported | 
| SQL Server 2008 R2 | Supported | 

**Note**  
All SQL Server editions are supported (Express, Standard, Enterprise). SQL Server can be hosted on AWS (Amazon RDS for SQL Server or SQL Server on Amazon EC2) or hosted outside of AWS.

### .NET versions
<a name="dotnet-versions"></a>


| .NET Version | Support Status | 
| --- | --- | 
| .NET 10 | Supported | 
| .NET 8 | Supported | 
| .NET 7 | Supported | 
| .NET 6 (Core) | Supported | 
| .NET Framework 4.x and earlier | Not Supported | 

**Important**  
Legacy .NET Framework 4.x and earlier versions are not supported. If your application uses .NET Framework, you must first upgrade to .NET 6 or later using AWS Transform for .NET modernization before using SQL Server transformation capabilities.

### Entity Framework versions
<a name="entity-framework-versions"></a>


| Framework | Supported Versions | 
| --- | --- | 
| Entity Framework 6 | 6.3, 6.4, 6.5 | 
| Entity Framework Core | 1.0 through 8.0 | 
| ADO.NET | All versions (GA) | 

### Source code repositories
<a name="source-code-repositories"></a>

AWS Transform supports the following repository platforms via AWS CodeConnections:
+ GitHub and GitHub Enterprise
+ GitLab.com
+ Bitbucket Cloud
+ Azure Repositories

### Target database
<a name="target-database"></a>

AWS Transform targets Amazon Aurora PostgreSQL (PostgreSQL 15 or later compatible) with support for the latest Aurora features and optimizations.

### Technical requirements
<a name="technical-requirements"></a>

#### Database requirements
<a name="database-requirements"></a>
+ Microsoft SQL Server version 2008 R2 through 2022
+ SQL Server hosted on AWS (Amazon RDS for SQL Server or SQL Server on Amazon EC2) or hosted outside of AWS
+ For AWS-hosted databases, the database and AWS Transform must be in the same AWS Region
+ For databases hosted outside of AWS, network connectivity to the AWS Transform service is required
+ Database user with VIEW DEFINITION and VIEW DATABASE STATE permissions
+ Database passwords using printable ASCII characters only (excluding '/', '@', '"', and spaces)
+ VPC containing the source SQL Server must have subnets in at least 2 different Availability Zones (required for DMS replication subnet groups)
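The password rule and the two-Availability-Zone rule above are easy to verify up front. A minimal Python sketch (the subnet IDs and AZ names below are hypothetical):

```python
def is_valid_transform_password(password: str) -> bool:
    """Printable ASCII only, excluding '/', '@', '"', and spaces,
    per the database requirements above."""
    # Printable ASCII minus the space character is the range '!' (0x21) to '~' (0x7E).
    return all(0x21 <= ord(ch) <= 0x7E and ch not in '/@"' for ch in password)

def spans_two_azs(subnet_to_az: dict) -> bool:
    """The VPC must have subnets in at least two Availability Zones
    for DMS replication subnet groups."""
    return len(set(subnet_to_az.values())) >= 2
```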

#### Application requirements
<a name="application-requirements"></a>
+ .NET Core 6, 7, or 8 applications
+ Entity Framework 6.3-6.5 or Entity Framework Core 1.0-8.0, or ADO.NET
+ Database connections discoverable in source code
+ Applications successfully build and run
+ Source code in supported repository platforms

#### AWS account requirements
<a name="aws-account-requirements"></a>
+ AWS account with administrator access
+ IAM Identity Center enabled
+ Required service roles created (see setup instructions below)
+ VPC with appropriate network configuration

### Data processing and storage
<a name="data-processing-storage"></a>

#### Processing location
<a name="processing-location"></a>
+ Schema processing occurs in a DMS instance within your VPC
+ Data migration is optional and can be excluded if needed
+ Transformation artifacts are stored in the AWS Transform service region

#### Stored artifacts
<a name="stored-artifacts"></a>

The following items are stored in the service region:
+ Agent logs
+ Assessment results
+ SQL schema files
+ DMS output artifacts

**Important**  
**Data residency:** Even when data migration is opted out, metadata and processing artifacts are stored in the service region. This matters for organizations with strict data residency requirements.

#### Artifact management
<a name="artifact-management"></a>
+ Customer option for encryption using your own KMS keys
+ Defined TTL (time-to-live) period for all artifacts
+ Artifacts can be downloaded for offline storage

# SQL Server requirements
<a name="sql-requirements"></a>

## Database requirements
<a name="sql-database-requirements"></a>

### Externally hosted databases
<a name="externally-hosted-databases"></a>

Databases hosted outside of AWS are supported. To use an externally hosted database, you must ensure network connectivity between the database and the AWS Transform service.

### SSIS workloads
<a name="ssis-workloads"></a>
+ **Limitation:** SQL Server Integration Services (SSIS) transformation is not supported.
+ **Workaround:** Use AWS Glue or other ETL services for SSIS migration.

### Linked servers
<a name="linked-servers"></a>
+ **Limitation:** External database connections via linked servers require human configuration.
+ **Workaround:** Reconfigure linked servers after migration or use alternative approaches such as database links or application-level integration.

### Complex T-SQL patterns
<a name="complex-tsql-patterns"></a>
+ **Limitation:** Some advanced T-SQL patterns may need human intervention.
+ **Workaround:** Use Amazon Kiro CLI for complex SQL conversions. AWS Transform flags these for review during the transformation process.

## Application requirements
<a name="source-application-requirements"></a>

### Legacy .NET Framework
<a name="legacy-dotnet-framework"></a>
+ **Limitation:** .NET Framework 4.x and earlier versions are not supported.
+ **Workaround:** Use AWS Transform for .NET to upgrade to .NET 6 or later first, then use SQL Server transformation.

### Entity Framework versions
<a name="entity-framework-version-requirements"></a>
+ **Limitation:** Only Entity Framework 6.3-6.5 and EF Core 1.0-8.0 are supported.
+ **Workaround:** Upgrade to a supported Entity Framework version before transformation.

### VB.NET applications
<a name="vbnet-applications"></a>
+ **Limitation:** VB.NET is not supported.
+ **Workaround:** Convert to C# or use AWS Transform custom transformations to convert from VB.NET to C#.

### Cross-database dependencies
<a name="cross-database-dependencies"></a>
+ **Limitation:** Challenges when database schemas interact across multiple databases.
+ **Workaround:** Review and refactor cross-database queries before migration. Consider consolidating databases or using PostgreSQL schemas.
+ **Impact:** May require human intervention for complex cross-database scenarios.

### Repository-database coupling
<a name="repository-database-coupling"></a>
+ **Limitation:** Challenges when a single repository serves multiple databases.
+ **Workaround:** Consider repository restructuring or phased migration approach.
+ **Impact:** May require additional planning for wave-based migrations.

## Infrastructure requirements
<a name="infrastructure-requirements"></a>

### Single account/region per job
<a name="single-account-region-per-job"></a>
+ **Limitation:** Each transformation job targets one AWS account and region.
+ **Workaround:** Create multiple transformation jobs for multi-account or multi-region deployments.

### Deployment targets
<a name="deployment-targets"></a>
+ **Limitation:** Only Amazon ECS and Amazon EC2 deployment targets are supported.

## Repository requirements
<a name="repository-requirements"></a>

### Private NuGet packages
<a name="private-nuget-packages"></a>
+ **Limitation:** Private NuGet packages require additional configuration.
+ **Workaround:** Configure private NuGet feeds in transformation settings before starting the job.

# SQL Server setup
<a name="sql-server-setup"></a>

Perform these steps in your SQL Server environment to enable AWS Transform modernization.

## Database setup and configuration
<a name="database-setup-configuration"></a>

### Step 1: Create database user with required permissions
<a name="create-database-user"></a>

Create a dedicated database user for AWS Transform with the necessary permissions. If you already have a DMS Schema Conversion user, you can reuse it.

Connect to your SQL Server instance and run the following commands:

```
-- Create the login in master database
USE master;
CREATE LOGIN [atx_user] WITH PASSWORD = 'YourStrongPassword123!';

-- Switch to your application database
USE [YourDatabaseName];
CREATE USER [atx_user] FOR LOGIN [atx_user];

-- Grant required permissions
GRANT VIEW DEFINITION TO [atx_user];
GRANT VIEW DATABASE STATE TO [atx_user];
ALTER ROLE [db_datareader] ADD MEMBER [atx_user];

-- Grant master database permissions
USE master;
GRANT VIEW SERVER STATE TO [atx_user];
GRANT VIEW ANY DEFINITION TO [atx_user];
```

**Note**  
Repeat the database-specific commands (USE, CREATE USER, GRANT) for each database you want to modernize.

The db_datareader role is only necessary for data migration, not for schema conversion alone.
+ The db_datareader role grants read access to all tables in the database
+ This role is required only when performing data migration
+ For schema conversion only (without data migration), the db_datareader role is not required
+ The other permissions (VIEW DEFINITION, VIEW DATABASE STATE, and so on) are sufficient for schema conversion

### Step 2: Store credentials in AWS Secrets Manager
<a name="store-credentials-secrets-manager"></a>

Store your database credentials securely in AWS Secrets Manager. Skip this step if you already have a secret created for DMS.

1. Navigate to AWS Secrets Manager in the console

1. Choose **Store a new secret**

1. Configure the secret:
   + **Secret type:** Credentials for other database
   + **Database**: Microsoft SQL Server
   + **Username**: atx_user (or your chosen username)
   + **Password**: The password you created
   + **Server name**: Your SQL Server endpoint
   + **Database name**: Your database name
   + **Port**: 1433 (or your custom port)

1. Choose **Next**

1. Enter secret name: atx-db-modernization-sqlserver

1. Add required tags (these tags are mandatory):
   + Key: Project, Value: atx-db-modernization
   + Key: Owner, Value: database-connector

1. Choose **Next** through remaining screens

1. Choose **Store**

1. Note the Secret ARN for use in the next step

**Important**  
Database passwords must use printable ASCII characters only, excluding '/', '@', '"', and spaces. Secrets scheduled for deletion can cause transformation failures.
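If you script the secret instead of using the console, the secret value is a JSON document. A sketch, assuming the common RDS-style key names (the console fields above map onto them; confirm the exact keys required for your setup):

```python
import json

def build_db_secret(host: str, database: str, username: str,
                    password: str, port: int = 1433) -> str:
    """Assemble the secret value for a SQL Server source. Key names follow
    the usual RDS secret convention and are an assumption, not a contract."""
    return json.dumps({
        "engine": "sqlserver",
        "host": host,
        "port": port,
        "username": username,
        "password": password,
        "dbname": database,
    })
```

Remember to attach the mandatory `Project` and `Owner` tags listed above when storing the secret.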

### Step 3: Create required DMS roles
<a name="create-required-dms-roles"></a>

AWS Transform requires specific IAM roles for DMS operations. Deploy these roles using the CloudFormation template below.

**Note**  
If your AWS account already has existing DMS-related roles, modify this template to reuse those resources rather than creating duplicates.

Create a file named dms-roles.yaml with the following content:

```
AWSTemplateFormatVersion: '2010-09-09'
Description: 'DMS Service Roles for AWS Transform SQL Server Modernization'

Resources:
  DMSCloudWatchLogsRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: dms-cloudwatch-logs-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - dms.amazonaws.com
                - schema-conversion.dms.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole

  DMSS3AccessRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: dms-s3-access-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - dms.amazonaws.com
                - schema-conversion.dms.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: S3TaggedAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetBucketLocation
                  - s3:GetBucketVersioning
                  - s3:PutObject
                  - s3:PutBucketVersioning
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:ListBucket
                  - s3:DeleteObject
                Resource: arn:aws:s3:::atx-db-modernization-*
                Condition:
                  StringEquals:
                    aws:ResourceAccount: !Ref AWS::AccountId

  DMSSecretsManagerRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: dms-secrets-manager-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - dms.amazonaws.com
                - schema-conversion.dms.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: SecretsManagerTaggedAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - secretsmanager:GetSecretValue
                  - secretsmanager:DescribeSecret
                Resource: '*'
                Condition:
                  StringEquals:
                    secretsmanager:ResourceTag/Project: atx-db-modernization
                    secretsmanager:ResourceTag/Owner: database-connector

  DMSVPCRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: dms-vpc-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - dms.amazonaws.com
                - schema-conversion.dms.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole

  DMSServerlessRole:
    Type: AWS::IAM::ServiceLinkedRole
    Properties:
      AWSServiceName: dms.amazonaws.com
      Description: 'Service Linked Role for AWS DMS Serverless'

Outputs:
  DMSCloudWatchLogsRoleArn:
    Description: ARN of the DMS CloudWatch Logs Role
    Value: !GetAtt DMSCloudWatchLogsRole.Arn
  DMSS3AccessRoleArn:
    Description: ARN of the DMS S3 Access Role
    Value: !GetAtt DMSS3AccessRole.Arn
  DMSSecretsManagerRoleArn:
    Description: ARN of the DMS Secrets Manager Role
    Value: !GetAtt DMSSecretsManagerRole.Arn
  DMSVPCRoleArn:
    Description: ARN of the DMS VPC Role
    Value: !GetAtt DMSVPCRole.Arn
    Export:
      Name: !Sub ${AWS::StackName}-VPCRole

  DMSServerlessRoleArn:
    Description: ARN of the DMS Serverless Role
    Value: !Sub 'arn:aws:iam::${AWS::AccountId}:role/aws-service-role/dms.amazonaws.com/AWSServiceRoleForDMSServerless'
    Export:
      Name: !Sub ${AWS::StackName}-ServerlessRole
```

Deploy the CloudFormation stack using AWS CLI:

```
aws cloudformation create-stack \
  --stack-name dms-roles \
  --template-body file://dms-roles.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --region us-east-1
```

Or deploy using the AWS Console:

1. Navigate to CloudFormation in the AWS Console

1. Choose **Create stack**

1. Select **Upload a template file**

1. Upload the dms-roles.yaml file

1. Enter stack name: dms-roles

1. Acknowledge IAM capabilities

1. Choose **Create stack**

### Step 4: Configure network security
<a name="configure-network-security"></a>

Ensure proper network connectivity between AWS Transform, your SQL Server database, and other AWS services.

#### Security group configuration (recommended approach)
<a name="security-group-configuration"></a>

**Recommended Approach:** Use security group-based access control rather than IP-based rules. This provides better security, easier management, and works seamlessly with AWS Transform's architecture.

**Why Security Group-Based Access Control?**
+ DMS Schema Conversion creates Elastic Network Interfaces (ENIs) within your VPC
+ Your databases do not need to be publicly accessible
+ AWS Transform does not expose private IP addresses, making IP-based rules complex
+ Security group references provide dynamic, automatic updates as resources scale

##### Configure your SQL Server security group
<a name="configure-sql-server-security-group"></a>

When you configure the DMS Schema Conversion Instance Profile in AWS Transform, you specify a Security Group for the DMS SC instance. Your database security group should allow inbound traffic from this DMS SC security group.

Step-by-step configuration:

1. Identify the DMS Schema Conversion Security Group:
   + This is specified when creating the Instance Profile in AWS Transform
   + Note the Security Group ID (e.g., sg-0123456789abcdef0)

1. Update your SQL Server Security Group inbound rules:
   + **Type**: Custom TCP
   + **Port**: 1433 (or your custom SQL Server port)
   + **Source**: The DMS Schema Conversion Security Group ID
   + **Description**: "Allow DMS Schema Conversion access"

1. For Aurora PostgreSQL target (after creation):
   + **Type**: PostgreSQL
   + **Port**: 5432 (or your custom PostgreSQL port)
   + **Source**: The DMS Schema Conversion Security Group ID
   + **Description**: "Allow DMS Schema Conversion access"

**Important**  
**Important for Least Privileged Security Models:** If your organization uses a "least privileged" security model that blocks all traffic by default, you must explicitly allow inbound traffic from the DMS Schema Conversion Security Group to your database port. Do not open port 1433 to all sources or IP ranges.

#### Required AWS service connectivity
<a name="required-aws-service-connectivity"></a>

Ensure your VPC can communicate with:
+ AWS Transform service endpoints
+ AWS DMS endpoints
+ Aurora PostgreSQL endpoints
+ S3 endpoints for artifact storage
+ AWS Secrets Manager endpoints
+ AWS CodeConnections endpoints

**VPC Endpoints:** For private networks, configure VPC endpoints for required AWS services to avoid internet gateway dependencies.

## Requirements for externally hosted databases
<a name="external-database-requirements"></a>

If your SQL Server database is hosted outside of AWS, ensure the following prerequisites are met and then complete the setup steps before you begin the modernization.

**Prerequisites**
+ An AWS account with a VPC
+ Network connectivity between the VPC and the external database. For information about configuring network connectivity, see [Configuring network connectivity](https://docs.aws.amazon.com/dms/latest/userguide/instance-profiles-network.html) in the AWS DMS User Guide.

**Setup steps**

1. Create a secret in AWS Secrets Manager with the connection details for the external database. For more information, see [Step 2: Store credentials in AWS Secrets Manager](#store-credentials-secrets-manager).

1. When prompted, provide the VPC ID and security group ID for connecting to the external database. AWS Transform prompts you for this information because the database hostname in the secret cannot be resolved within the AWS account.

# AWS Transform setup
<a name="sql-server-transform-setup"></a>

Begin your modernization journey by accessing the AWS Transform console and initiating a new modernization project. This step establishes your workspace and sets up the foundational configuration for your full-stack transformation.

**Actions to Complete:**
+ Navigate to AWS Transform service in the AWS Management Console
+ Select **Microsoft Modernization** from the service options
+ Choose **Create new modernization project**
+ Provide project name and description
+ Select your target AWS region (must match your SQL Server location)
+ Review and confirm project settings

**Note**  
**Best Practice:** Use descriptive project names that include the application name and target environment (e.g., 'CustomerPortal-Production-Modernization') to easily identify projects in multi-wave scenarios.

# Create SQL Server modernization job
<a name="sql-server-create-job"></a>



## Connect database and source code repo
<a name="sql-server-connect-database-repo"></a>

### Configure database connector
<a name="sql-server-connect-database"></a>

Establish secure connectivity to your SQL Server databases by configuring database connectors. The connector performs environment analysis, dependency discovery, and enables AWS Transform to access your database schemas and metadata for assessment and conversion.

**Actions to Complete:**
+ Select **Configure Database Connector** from the setup wizard
+ Choose connection method: **New Connection** or **Existing Connection**
+ Provide SQL Server connection details (endpoint, port, database names)
+ Configure authentication and test connectivity
+ Review discovered databases and confirm selection

**Note**  
**Network Connectivity:** Ensure your SQL Server security groups and NACLs allow inbound connections from AWS Transform service endpoints.

### Connect source code repository
<a name="sql-server-connect-source-code"></a>

Enable AWS Transform to access your .NET application source code through AWS CodeConnections. This integration allows the service to analyze your application code, identify database dependencies, and perform automated code transformations for PostgreSQL compatibility.

**Actions to Complete:**
+ Navigate to **Connect Source Code Repository** section
+ Select your source code platform (GitHub, GitLab, Bitbucket, Azure Repos)
+ Configure AWS CodeConnections and authorize access
+ Select repositories containing .NET applications
+ Specify branch for analysis (typically main/master or development)
+ Validate repository access and code structure

**Note**  
**Repository Discovery:** AWS Transform automatically scans your repositories to identify .NET projects, Entity Framework configurations, database connection strings, and SQL dependencies.

**Note**  
**Security Note:** AWS Transform only requires read access to your repositories and creates new feature branches for transformed code. Your main branch remains untouched.

## Human in the loop
<a name="human-in-the-loop"></a>

AWS Transform uses human-in-the-loop (HITL) mechanisms to ensure quality and allow you to review and approve critical transformation decisions. The following checkpoints require your attention:

### Wave plan review and approval
<a name="wave-plan-review-approval"></a>

**What you review:** The proposed migration waves, including which databases and applications are grouped together and the sequence of waves.

You can:
+ Approve the wave plan
+ Customize waves by moving databases between waves
+ Modify wave sequence
+ Split or merge waves

After approval, AWS Transform proceeds with schema conversion.

### Schema conversion review
<a name="schema-conversion-review"></a>

**What you review:** Converted database objects including tables, stored procedures, functions, and triggers. Action items highlight objects that require attention.

You can:
+ Accept converted code
+ Modify converted code
+ Flag for human review after transformation
+ View side-by-side comparison of original and converted code

**What happens after approval:** AWS Transform applies the schema to Aurora PostgreSQL and proceeds with data migration (if configured).

### Application code review
<a name="application-code-review"></a>

**What you review:** All application code changes including Entity Framework configurations, connection strings, data access code, and stored procedure calls.

You can:
+ Accept changes for each file
+ Modify transformed code
+ Reject changes (not recommended)
+ Add comments for your team
+ Download transformed code for local review

**What happens after approval:** AWS Transform commits the changes to a new branch in your repository and proceeds with validation.

### Validation results review
<a name="validation-results-review"></a>

**What you review:** Automated validation results including schema compatibility, data integrity checks, and application build status.

You can:
+ Review passed tests
+ Investigate failed tests
+ Address warnings
+ Proceed to deployment or return to fix issues

**What happens after approval:** AWS Transform prepares for deployment to your target environment.

### Deployment approval
<a name="deployment-approval"></a>

**What you review:** Deployment configuration including infrastructure-as-code templates, ECS service configuration, and deployment settings.

You can:
+ Review and customize deployment settings
+ Approve deployment to proceed
+ Delay deployment for additional testing
+ Download infrastructure code for review

After approval, AWS Transform deploys your modernized application and database to the target environment.

## Assessment and wave planning
<a name="sql-server-assessment-wave-planning"></a>

### Set up landing zone
<a name="sql-server-setup-landing-zone"></a>

Configure the target infrastructure environment where your modernized applications will be deployed. The landing zone setup includes Aurora PostgreSQL provisioning, networking configuration, and reference deployment configurations.

**Actions to Complete:**
+ Select **Set up Landing Zone** from the configuration menu
+ Configure Aurora PostgreSQL target (version, instance class, Multi-AZ)
+ Configure network and security settings (VPC, subnets, security groups)

**Note**  
**Cost Consideration:** Aurora PostgreSQL instances incur ongoing costs. Consider using Aurora Serverless v2 for development/testing environments.

### Assessment
<a name="sql-server-assessment"></a>

Conduct comprehensive analysis of your database schemas, application code, and dependencies. The assessment phase provides transformation complexity ratings, identifies potential challenges, and generates detailed reports.

**Actions to Complete:**
+ Select Repository Branch for code analysis
+ Select Repositories and Databases for transformation
+ Review Assessment Results and complexity ratings
+ Generate Assessment Report with effort projections
+ Export report for stakeholder review and approval

**Note**  
**Assessment Insights:** The assessment provides detailed analysis of schema objects, stored procedures, application dependencies, and estimates the level of human intervention required.

### Wave planning
<a name="sql-server-wave-planning"></a>

Organize your modernization into manageable waves based on transformation complexity, business priorities, and application dependencies. AWS Transform generates intelligent wave recommendations that you can customize.

**Actions to Complete:**
+ Review AI-Generated Wave Plan and recommendations
+ Customize Wave Configuration based on business priorities
+ Validate Dependencies between applications and databases
+ Finalize Wave Plan using conversational interface
+ Set wave execution priorities and assign responsibilities

**Note**  
**AI-Powered Recommendations:** You can interact with the AI through natural language to adjust recommendations: 'Move CustomerDB to Wave 1' or 'Combine these two applications into the same wave.'

**Note**  
**Best Practice:** Start with a pilot wave containing 1-2 low-complexity databases to establish processes and build team confidence.
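Conceptually, the dependency side of wave planning is a topological layering: a database or application can be scheduled only after everything it depends on has a wave. A simplified sketch (real wave plans also weigh complexity and business priority, and the names below are made up):

```python
def plan_waves(dependencies: dict) -> list:
    """Group items into ordered waves; each item lands after its dependencies.
    `dependencies` maps an item name to the set of item names it depends on."""
    remaining = {item: set(deps) for item, deps in dependencies.items()}
    waves = []
    while remaining:
        ready = {item for item, deps in remaining.items() if not deps}
        if not ready:
            raise ValueError("circular dependency detected")
        waves.append(ready)
        for item in ready:
            del remaining[item]
        for deps in remaining.values():
            deps -= ready  # dependencies satisfied by this wave
    return waves
```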

## Data migration for each wave
<a name="sql-server-data-migration"></a>

### Schema conversion for each wave
<a name="sql-server-schema-conversion"></a>

Transform SQL Server database schemas to PostgreSQL-compatible equivalents using AI-enhanced conversion capabilities. This process converts tables, stored procedures, functions, triggers, and other database objects.

**Actions to Complete:**
+ Provide Schema Conversion Targets and configure preferences
+ Execute Automated Conversion using DMS Schema Conversion service
+ Review Conversion Results and identify human intervention items
+ Handle Manual Interventions through HITL prompts
+ Validate human corrections and re-run conversion if needed

**Note**  
**Common Manual Interventions:** Linked servers, user-defined types, advanced T-SQL patterns, and vendor-specific SQL features typically require human review and correction.

### Data migration for each wave
<a name="sql-server-data-transfer"></a>

Transfer data from SQL Server to Aurora PostgreSQL using AWS Database Migration Service (AWS DMS) integration. Choose between production data migration or synthetic data generation for testing purposes.

**Actions to Complete:**
+ Choose **Production Data Migration** or **Synthetic Data Generation**
+ Configure DMS replication instance for data transfer
+ Monitor migration progress and handle data inconsistencies
+ Validate data integrity and referential constraints
+ Confirm successful completion before proceeding

**Note**  
**Automated DMS Integration:** AWS Transform abstracts the complexity of DMS configuration, automatically handling endpoint creation, task configuration, and data type mappings.

**Note**  
**Data Validation:** The service performs empirical testing by running identical queries against both source and target databases to ensure data consistency.
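The comparison described in the note can be pictured with a small sketch. The following Python snippet is illustrative only (not part of AWS Transform): it shows the idea of running an identical query against both engines and comparing normalized result sets, with row data stubbed in place of real `cursor.fetchall()` output.

```
def normalize(rows):
    # Sort rows and coerce values to strings so minor driver-level
    # type differences (e.g. Decimal vs float) do not mask a match.
    return sorted(tuple(str(v) for v in row) for row in rows)

def results_match(source_rows, target_rows):
    return normalize(source_rows) == normalize(target_rows)

# Stand-ins for fetchall() output from SQL Server and Aurora PostgreSQL.
sqlserver_rows = [(1, "Widget", 9.99), (2, "Gadget", 19.99)]
postgres_rows = [(2, "Gadget", 19.99), (1, "Widget", 9.99)]

print(results_match(sqlserver_rows, postgres_rows))  # True (order-insensitive)
```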

## Code migration for each wave
<a name="sql-server-code-migration"></a>

Transform .NET application code to work with Aurora PostgreSQL, including Entity Framework configuration updates, connection string modifications, SQL query adaptations, and framework-specific adjustments.

**Actions to Complete:**
+ Execute Automated Code Analysis for SQL Server dependencies
+ Perform Framework Transformation (Entity Framework provider updates)
+ Adapt SQL Query syntax to PostgreSQL-compatible SQL
+ Configure Target Branch Management for transformed code
+ Handle Manual Interventions for complex patterns

# SQL Server modernization workflow
<a name="sql-server-modernization-workflow"></a>

This section provides a step-by-step walkthrough of the complete SQL Server modernization process using AWS Transform.

## Step 1: Create SQL Server modernization job
<a name="step-1-create-job"></a>

Begin your modernization journey by creating a new transformation job in the AWS Transform console.

1. Sign in to the AWS Transform console

1. Choose **Create modernization job**

1. Select **Windows modernization** job and then select **SQL Server modernization**

1. Enter job details:
   + **Job name**: Descriptive name for your project
   + **Description**: Optional description
   + **Target region**: AWS region for deployment

1. Choose **Create job**
**Important**  
Do not include personally identifiable information (PII) in your job name.

## Step 2: Connect to SQL Server database
<a name="step-2-connect-sql-server"></a>

Connect AWS Transform to your SQL Server database to enable schema analysis and conversion.

### Create a database connector
<a name="create-database-connector"></a>

1. In your SQL Server modernization job, navigate to **Connect to resources**

1. Choose **Connect to SQL Server database**

1. Choose **Create new connector**

1. Enter the connector information:
   + **Connector name**: Descriptive name
   + **AWS account ID**: Account where SQL Server is hosted

1. After you confirm, AWS Transform generates an approval link. Share this link with the AWS administrator for the account so they can approve the connector request.

1. After your administrator has approved the connector request, choose **Submit** to proceed to the source code connection setup.


## Step 3: Connect source code repository
<a name="step-3-connect-source-code"></a>

AWS Transform needs access to your .NET application source code to analyze and transform the code that interacts with your SQL Server database.

### Set up AWS CodeConnections
<a name="setup-codeconnections"></a>

1. In your SQL Server modernization job, navigate to **Connect to resources**

1. Choose **Connect source code repository**

1. If you don't have an existing connection, choose **Create connection**

1. Select your repository provider:
   + GitHub / GitHub Enterprise
   + GitLab.com
   + Bitbucket Cloud
   + Azure Repositories

1. Follow the authorization flow for your provider

1. After authorization, choose **Connect**

### Select your repository and branch
<a name="select-repository-branch"></a>

1. Select your repository from the list

1. Choose the branch you want to transform (typically main, master, or develop)

1. (Optional) Specify a subdirectory if your .NET application is not in the repository root

1. Choose **Continue**

**Note**  
AWS Transform will create a new branch for the transformed code. You can review and merge the changes through your normal code review process.

### Repository access approval
<a name="repository-access-approval"></a>

For GitHub and some other platforms, the repository administrator must approve the connection request:

1. AWS Transform displays a verification link

1. Share this link with your repository administrator

1. The administrator reviews and approves the request in their repository settings

1. Once approved, the connection status changes to *Approved*

**Important**  
The approval process can take time depending on your organization's policies. Plan accordingly.

## Step 4: Create deployment connector (optional)
<a name="step-4-create-deployment-connector"></a>

If you want to deploy the transformed applications into your AWS account, you have the option to select a deployment connector.

### Set up deployment connector
<a name="setup-deployment-connector"></a>

1. Select **Yes** if you want to deploy your applications. Selecting **No** will skip this step.

1. Add your AWS account where you want to deploy the transformed applications.

1. Add a name that helps you remember the connector easily

1. Submit the connector request for approval.

### Deployment connector approval
<a name="deployment-connector-approval"></a>

Your AWS account administrator must approve the connection request for deployment connector.

1. AWS Transform displays a verification link

1. Share this link with your AWS account administrator

1. The administrator reviews and approves the request for the AWS account

1. Once approved, the connection status changes to *Approved*

**Important**  
The approval process can take time depending on your organization's policies. Plan accordingly.

## Step 5: Confirm your resources
<a name="step-5-confirm-resources"></a>

After connecting to your database and repository, AWS Transform verifies that all required resources are accessible and ready for transformation.

### What AWS Transform verifies
<a name="what-trn-verifies"></a>
+ **Database connectivity:** Connection is active, user has required permissions, databases are accessible, version is supported
+ **Repository access:** Repository is accessible, branch exists, .NET project files detected, database connections discoverable
+ **Environment readiness:** VPC configuration supports DMS, required AWS service roles exist, network connectivity established, region compatibility confirmed

### Review the pre-flight checklist
<a name="review-preflight-checklist"></a>

1. Navigate to **Confirm your resources** in the job plan

1. Review the checklist items:
   + ✅ Database connection verified
   + ✅ Repository access confirmed
   + ✅ .NET version supported
   + ✅ Entity Framework or ADO.NET detected
   + ✅ Network configuration valid
   + ✅ Required permissions granted

1. If all items show as complete, choose **Continue**

1. If any items show warnings or errors, address them before proceeding

## Step 6: Discovery and assessment
<a name="step-6-discovery-assessment"></a>

AWS Transform analyzes your SQL Server database and .NET application to understand the scope and complexity of the modernization.

### What gets discovered
<a name="what-gets-discovered"></a>
+ **Database objects:** Tables, views, indexes, stored procedures, functions, triggers, constraints, data types, computed columns, identity columns, foreign key relationships
+ **Application code:** .NET project structure, Entity Framework models and configurations, ADO.NET data access code, database connection strings, stored procedure calls, SQL queries in code
+ **Dependencies:** Which applications use which databases, cross-database dependencies, shared stored procedures, common data access patterns

### Discovery process
<a name="discovery-process"></a>
+ AWS Transform begins discovery automatically after resource confirmation
+ Discovery typically takes 5-15 minutes depending on database size and application complexity
+ Monitor progress in the worklog
+ AWS Transform displays real-time updates as objects are discovered

### Review discovery results
<a name="review-discovery-results"></a>

After discovery completes, navigate to **Discovery and assessment** to review:

**Database Analysis:**
+ **Object count**: Number of tables, views, stored procedures, functions, triggers
+ **Complexity score**: Assessment of transformation complexity (Low, Medium, High)
+ **Action items**: Objects that may require human attention
+ **Supported features**: Database features that will convert automatically
+ **Unsupported features**: Features that require workarounds

**Application Analysis:**
+ **Project type**: ASP.NET Core, Console App, Class Library, etc.
+ **.NET version**: Detected .NET Core version
+ **Data access framework**: Entity Framework version or ADO.NET
+ **Database connections**: Number of connection strings found
+ **Code complexity**: Assessment of transformation complexity

**Dependency Map:**
+ Visual representation of application-to-database relationships
+ Cross-database dependencies
+ Shared components

### Understanding complexity assessment
<a name="understanding-complexity-assessment"></a>

AWS Transform classifies your modernization into three categories:


| Complexity | Characteristics | Expected Outcome | 
| --- | --- | --- | 
| Low (Class A) | Standard SQL patterns (ANSI SQL), simple stored procedures, basic data types, Entity Framework with standard configurations | Minimal human intervention expected, high automation success rate | 
| Medium (Class B) | Advanced T-SQL patterns, complex stored procedures with business logic, user-defined functions, computed columns | Some human intervention required, expert review recommended | 
| High (Class C) | CLR assemblies, linked servers, Service Broker, complex full-text search | Significant human refactoring required, consider phased approach | 

### Assessment report
<a name="assessment-report"></a>

AWS Transform generates a detailed assessment report that includes:
+ Executive summary with high-level overview
+ Complete database inventory
+ Application inventory
+ Transformation readiness percentage
+ Effort estimation
+ Risk assessment and mitigation strategies
+ Recommended approach

You can download the assessment report for offline review and sharing with stakeholders.

## Step 7: Generate and review wave plan
<a name="step-7-wave-plan"></a>

For large estates with multiple databases and applications, AWS Transform generates a wave plan that sequences the modernization in logical groups.

### What is a wave plan?
<a name="what-is-wave-plan"></a>

A wave plan organizes your modernization into phases (waves) based on:
+ Dependencies between databases and applications
+ Business priorities
+ Risk tolerance
+ Resource availability
+ Technical complexity

Each wave contains a group of databases and applications that can be modernized together without breaking dependencies.

### Review the wave plan
<a name="review-wave-plan"></a>

1. Navigate to **Wave planning** in the job plan

1. Review the proposed waves

1. For each wave, review:
   + Databases included
   + Applications included
   + Dependencies on other waves
   + Estimated transformation time
   + Complexity level
   + Deployable applications

### Customize the wave plan
<a name="customize-wave-plan"></a>

You can customize the wave plan to match your business needs in two ways:

**Using JSON:**

1. Choose **Download all waves** to get a JSON file with all the waves

1. Modify waves in the JSON by:
   + Moving databases between waves
   + Splitting waves into smaller groups
   + Merging waves together
   + Changing wave sequence
   + Adding or removing databases from scope

1. Upload the JSON file back to the console by choosing **Upload wave plan**

1. AWS Transform validates your changes and warns if dependencies are violated

1. Choose **Confirm waves** to update the wave plan
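The downloaded wave-plan JSON can also be edited programmatically before re-uploading. The sketch below assumes a simplified, hypothetical wave-plan shape (`waves`, `databases`, `applications` keys); the schema of the actual exported file may differ, so inspect the download before adapting this.

```
import json

# Hypothetical wave-plan shape -- the real exported JSON schema may differ.
wave_plan = {
    "waves": [
        {"name": "Wave 1", "databases": ["InventoryDB"], "applications": ["inventory-api"]},
        {"name": "Wave 2", "databases": ["CustomerDB"], "applications": ["customer-portal"]},
    ]
}

def move_database(plan, db, target_wave):
    # Remove the database from whichever wave holds it, then add it to target_wave.
    for wave in plan["waves"]:
        if db in wave["databases"]:
            wave["databases"].remove(db)
    for wave in plan["waves"]:
        if wave["name"] == target_wave:
            wave["databases"].append(db)

move_database(wave_plan, "CustomerDB", "Wave 1")
print(json.dumps(wave_plan["waves"][0]["databases"]))  # ["InventoryDB", "CustomerDB"]
```

Remember that AWS Transform still validates dependencies when you upload the modified file, so scripted edits that violate dependencies are flagged.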

**Using Chat:**

You can modify the wave plans by chatting with the agent and asking it to move the repositories and databases to specific waves. This approach works well if you need to make minor edits to the waves.

**Important**  
Ensure that dependencies are respected when customizing waves. Transforming a dependent application before its database can cause issues.

### Single database modernization
<a name="single-database-modernization"></a>

If you're modernizing a single database and application, AWS Transform creates a simple plan with one wave. You can proceed directly to transformation without wave planning.

### Approve the wave plan
<a name="approve-wave-plan"></a>

1. After reviewing and customizing (if needed), choose **Approve wave plan**

1. AWS Transform locks the wave plan and proceeds to transformation

1. You can still modify the plan later by choosing **Edit wave plan**

## Step 8: Schema conversion
<a name="step-8-schema-conversion"></a>

AWS Transform converts your SQL Server database schema to Aurora PostgreSQL, including tables, views, stored procedures, functions, and triggers.

### How schema conversion works
<a name="how-schema-conversion-works"></a>

AWS Transform uses AWS DMS Schema Conversion enhanced with generative AI to:
+ Analyze SQL Server schemas and relationships
+ Map data types from SQL Server to PostgreSQL equivalents
+ Transform T-SQL to PL/pgSQL
+ Handle identity columns, computed columns, and constraints
+ Validate conversion and referential integrity
+ Generate action items for objects requiring human review
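For orientation, the data type mapping step involves conversions like those in the following sketch. This is an illustrative subset only; AWS Transform derives the full mapping automatically, and a real converter also carries lengths and precision through rather than dropping them as this toy function does.

```
# Illustrative subset of common SQL Server -> PostgreSQL type mappings.
TYPE_MAP = {
    "NVARCHAR": "VARCHAR",
    "DATETIME": "TIMESTAMP",
    "DATETIME2": "TIMESTAMP",
    "BIT": "BOOLEAN",
    "UNIQUEIDENTIFIER": "UUID",
    "MONEY": "NUMERIC(19,4)",
    "TINYINT": "SMALLINT",
    "IMAGE": "BYTEA",
}

def map_type(sqlserver_type):
    # Strip any length/precision suffix and look up the base type;
    # types with no entry (e.g. INT) pass through unchanged.
    base = sqlserver_type.split("(")[0].upper().strip()
    return TYPE_MAP.get(base, base)

print(map_type("datetime2"))      # TIMESTAMP
print(map_type("NVARCHAR(255)"))  # VARCHAR (length dropped in this sketch)
```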

### Supported conversions
<a name="supported-conversions"></a>

**Automatically converted:**
+ Tables, views, and indexes
+ Primary keys and foreign keys
+ Check constraints and default values
+ Most common data types
+ Simple stored procedures
+ Basic functions and triggers
+ Identity columns (converted to SERIAL or GENERATED)
+ Most computed columns

**May require human review:**
+ Complex stored procedures with advanced T-SQL
+ SQL Server-specific functions (GETUTCDATE, SUSER_SNAME, etc.)
+ Computed columns with complex expressions
+ Full-text search indexes
+ XML data type operations
+ HIERARCHYID data type (requires ltree extension)

**Not automatically converted:**
+ CLR assemblies
+ Linked servers
+ Service Broker
+ SQL Server Agent jobs

### Start schema conversion
<a name="start-schema-conversion"></a>

1. Navigate to **Schema conversion** in the job plan

1. Review the conversion settings:
   + Target PostgreSQL version
   + Extension options (ltree, PostGIS, etc.)
   + Naming conventions

1. Choose **Start conversion**

1. Monitor progress in the worklog

1. Conversion typically takes 10-30 minutes depending on the number of database objects

### Review conversion results
<a name="review-conversion-results"></a>

After conversion completes, navigate to **Review schema conversion**:

**Conversion Summary:**
+ **Objects converted**: Count of successfully converted objects
+ **Action items**: Objects requiring human attention
+ **Warnings**: Potential issues to review
+ **Errors**: Objects that could not be converted

**Review by Object Type:**
+ **Tables**: Data type mappings, constraints, indexes
+ **Stored procedures**: T-SQL to PL/pgSQL conversion
+ **Functions**: Function signature and logic changes
+ **Triggers**: Trigger syntax and timing changes

### Review action items
<a name="review-action-items"></a>

1. Choose **View action items**

1. For each action item, review:
   + **Object name**: The database object
   + **Issue type**: What requires attention
   + **Severity**: Critical, Warning, or Info
   + **Recommendation**: Suggested resolution
   + **Original code**: SQL Server version
   + **Converted code**: PostgreSQL version

1. For each action item, you can:
   + **Accept**: Use the converted code
   + **Modify**: Edit the converted code
   + **Flag for later**: Mark for human review after transformation

### Example: Stored procedure conversion
<a name="example-stored-procedure-conversion"></a>

SQL Server T-SQL:

```
CREATE PROCEDURE GetProductsByCategory
    @CategoryId INT,
    @PageSize INT = 10
AS
BEGIN
    SET NOCOUNT ON;
    
    SELECT TOP (@PageSize)
        ProductId,
        Name,
        Price,
        DATEDIFF(DAY, CreatedDate, GETUTCDATE()) AS DaysOld
    FROM Products
    WHERE CategoryId = @CategoryId
    ORDER BY Name
END
```

Converted PostgreSQL PL/pgSQL:

```
CREATE OR REPLACE FUNCTION get_products_by_category(
    p_category_id INTEGER,
    p_page_size INTEGER DEFAULT 10
)
RETURNS TABLE (
    product_id INTEGER,
    name VARCHAR(255),
    price NUMERIC(18,2),
    days_old INTEGER
) AS $$
BEGIN
    RETURN QUERY
    SELECT 
        p.product_id,
        p.name,
        p.price,
        EXTRACT(DAY FROM (NOW() - p.created_date))::INTEGER AS days_old
    FROM products p
    WHERE p.category_id = p_category_id
    ORDER BY p.name
    LIMIT p_page_size;
END;
$$ LANGUAGE plpgsql;
```

Changes made:
+ Procedure converted to function returning TABLE
+ Parameter names prefixed with p_
+ TOP converted to LIMIT
+ DATEDIFF converted to EXTRACT
+ GETUTCDATE() converted to NOW()
+ Column names converted to lowercase (PostgreSQL convention)

### Approve schema conversion
<a name="approve-schema-conversion"></a>

1. Review all action items and make any necessary modifications

1. Choose **Approve schema conversion**

1. AWS Transform prepares the converted schema for deployment to Aurora PostgreSQL

**Note**  
You can download the converted schema as SQL scripts for offline review or version control.

## Step 9: Data migration (optional)
<a name="step-9-data-migration"></a>

AWS Transform provides options for migrating data from SQL Server to Aurora PostgreSQL. Data migration is optional and can be skipped if you only need schema and code transformation.

### Data migration options
<a name="data-migration-options"></a>

**Option 1: Production Data Migration**

Migrate your actual production data using AWS DMS:
+ Full initial load of all data
+ Continuous replication during testing (CDC)
+ Minimal downtime cutover
+ Data validation and integrity checks

**Option 2: Skip Data Migration**

Transform schema and code only:
+ Useful for development/testing environments
+ When data will be migrated separately
+ For proof-of-concept projects

### Configure data migration
<a name="configure-data-migration"></a>

1. Navigate to **Data migration** in the job plan

1. Choose your migration option:
   + **Migrate production data**
   + **Skip data migration**

1. If migrating production data, configure:
   + **Migration type**: Full load, or Full load + CDC
   + **Validation**: Enable data validation
   + **Performance**: DMS instance size
   + Choose **Start migration**

### Production data migration process
<a name="production-data-migration-process"></a>

If you choose to migrate production data:

1. **Initial sync**: AWS DMS performs full load of all tables

1. **Continuous replication**: (If CDC enabled) Keeps data synchronized

1. **Validation**: Verifies row counts and data integrity

1. **Cutover preparation**: Prepares for final synchronization

Migration Timeline:
+ Small databases (< 10 GB): 30 minutes - 2 hours
+ Medium databases (10-100 GB): 2-8 hours
+ Large databases (> 100 GB): 8+ hours

### Data validation
<a name="data-validation"></a>

AWS Transform validates migrated data with the following checks:
+ Row count comparison (source vs target)
+ Primary key integrity
+ Foreign key relationships
+ Data type compatibility
+ Computed column results
+ Null value handling
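The row count comparison can be pictured as follows. This sketch stubs in per-table `SELECT COUNT(*)` results from each database; it is illustrative only and is not how AWS Transform implements validation internally.

```
# Stand-ins for COUNT(*) results collected from source and target.
source_counts = {"products": 1200, "orders": 50432, "customers": 871}
target_counts = {"products": 1200, "orders": 50431, "customers": 871}

def count_mismatches(source, target):
    # Report every table whose source and target row counts differ,
    # including tables missing from the target (reported as None).
    return {
        table: (source[table], target.get(table))
        for table in source
        if source[table] != target.get(table)
    }

print(count_mismatches(source_counts, target_counts))  # {'orders': (50432, 50431)}
```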

## Step 10: Application code transformation
<a name="step-10-application-code-transformation"></a>

AWS Transform transforms your .NET application code to work with Aurora PostgreSQL instead of SQL Server. It asks for a target branch name in each repository where it will commit the transformed source code. After you enter the branch name, AWS Transform creates the new branch and initiates the transformation to match the PostgreSQL database.

### What gets transformed
<a name="what-gets-transformed"></a>

**Entity Framework Changes:**
+ **Database provider**: UseSqlServer() → UseNpgsql()
+ **Connection strings**: SQL Server format → PostgreSQL format
+ **Data type mappings**: SQL Server types → PostgreSQL types
+ **DbContext configurations**: SQL Server-specific → PostgreSQL-specific
+ **Migration files**: Updated for PostgreSQL compatibility

**ADO.NET Changes:**
+ **Connection classes**: SqlConnection → NpgsqlConnection
+ **Command classes**: SqlCommand → NpgsqlCommand
+ **Data reader**: SqlDataReader → NpgsqlDataReader
+ **Parameters**: SqlParameter → NpgsqlParameter
+ **SQL syntax**: T-SQL → PostgreSQL SQL

**Configuration Changes:**
+ Connection strings in appsettings.json
+ Database provider NuGet packages
+ Dependency injection configurations
+ Startup/Program.cs configurations
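The connection-string rewrite listed above resembles the following sketch. The keyword mapping reflects common Npgsql connection string keys (`Host`, `Database`, `Username`, `Password`); real connection strings often carry additional options (encryption, pooling) that need case-by-case handling, which this toy function simply drops.

```
# SQL Server keyword -> Npgsql keyword (illustrative subset).
KEY_MAP = {
    "server": "Host",
    "data source": "Host",
    "initial catalog": "Database",
    "database": "Database",
    "user id": "Username",
    "uid": "Username",
    "password": "Password",
}

def to_npgsql(conn_str):
    parts = []
    for pair in conn_str.split(";"):
        if not pair.strip():
            continue
        key, _, value = pair.partition("=")
        mapped = KEY_MAP.get(key.strip().lower())
        if mapped:  # keys with no PostgreSQL equivalent are dropped here
            parts.append(f"{mapped}={value.strip()}")
    return ";".join(parts)

src = "Server=sql01;Database=Northwind;User Id=app;Password=secret;TrustServerCertificate=True"
print(to_npgsql(src))  # Host=sql01;Database=Northwind;Username=app;Password=secret
```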

### Start code transformation
<a name="start-code-transformation"></a>

1. Navigate to **Application transformation** in the job plan

1. Review the transformation settings:
   + **Target .NET version** (if upgrading)
   + **PostgreSQL provider version**
   + **Code style preferences**

1. Choose **Start transformation**

1. Monitor progress in the worklog

1. Transformation typically takes 15-45 minutes depending on codebase size

## Step 11: Review transformation results
<a name="step-11-review-transformation-results"></a>

Before proceeding to deployment, review the complete transformation results to ensure everything is ready for testing.

You can download the transformed code from the repository branch for:
+ Local testing and validation
+ Code review in your IDE
+ Integration with your CI/CD pipeline
+ Version control commit

You can also download the transformation summary to review a natural-language description of the changes AWS Transform made as part of the transformation.

### Transformation summary
<a name="transformation-summary"></a>

1. Navigate to **Transformation summary** in the job plan

1. Review the overall results:
   + **Schema conversion**: Objects converted, action items, warnings
   + **Data migration**: Tables migrated, rows transferred, validation status
   + **Code transformation**: Files changed, lines modified, issues resolved
   + **Readiness score**: Overall readiness for deployment

### Generate transformation report
<a name="generate-transformation-report"></a>

AWS Transform generates a comprehensive transformation report:

1. Choose **Generate report**

1. Select report type:
   + **Executive summary**: High-level overview for stakeholders
   + **Technical details**: Complete transformation documentation
   + **Action items**: List of human tasks required

1. Choose **Download report**

The report includes:
+ Transformation scope and objectives
+ Objects and code transformed
+ Issues encountered and resolutions
+ Validation results
+ Deployment readiness assessment
+ Recommendations for testing

## Step 12: Validation and testing
<a name="step-12-validation-testing"></a>

Before deploying to production, validate that the transformed application works correctly with Aurora PostgreSQL.

### Validation types
<a name="validation-types"></a>

**Automated Validation:** AWS Transform performs automated checks:
+ Schema validation against source database
+ Data integrity verification
+ Query equivalence testing
+ Connection string validation
+ Configuration validation

**Human Validation:** You should perform additional testing:
+ Functional testing of application features
+ Integration testing with other systems
+ Performance testing and benchmarking
+ User acceptance testing
+ Security testing

### Run automated validation
<a name="run-automated-validation"></a>

1. Navigate to **Validation** in the job plan

1. Choose **Run validation**

1. AWS Transform executes validation tests:
   + Database connectivity
   + Schema compatibility
   + Data integrity
   + Application build
   + Basic functionality

1. Review validation results:
   + Passed: Tests that succeeded
   + Failed: Tests that need attention
   + Warnings: Potential issues to review

### Testing checklist
<a name="testing-checklist"></a>

**Database Functionality:**
+ All tables accessible
+ Stored procedures execute correctly
+ Functions return expected results
+ Triggers fire appropriately
+ Constraints enforced properly
+ Indexes improve query performance

**Application Functionality:**
+ Application starts successfully
+ Database connections established
+ CRUD operations work correctly
+ Stored procedure calls succeed
+ Transactions commit/rollback properly
+ Error handling works as expected

**Data Integrity:**
+ Row counts match source
+ Primary keys unique
+ Foreign keys valid
+ Computed columns correct
+ Null handling appropriate
+ Data types compatible

**Performance:**
+ Query response times acceptable
+ Connection pooling configured
+ Indexes optimized
+ No N+1 query issues
+ Batch operations efficient
+ Resource utilization reasonable
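The N+1 item in the checklist refers to issuing one query per parent row instead of a single batched query. A toy Python illustration with a hypothetical query counter:

```
class CountingDb:
    # Toy stand-in for a data access layer that counts round trips.
    def __init__(self):
        self.queries = 0

    def fetch_items(self, order_id):
        self.queries += 1  # one round trip per order
        return [f"item-for-{order_id}"]

    def fetch_items_batch(self, order_ids):
        self.queries += 1  # single query, e.g. WHERE order_id = ANY(...)
        return {oid: [f"item-for-{oid}"] for oid in order_ids}

db = CountingDb()
for oid in range(100):  # N+1 style: 100 round trips
    db.fetch_items(oid)
print(db.queries)       # 100

db = CountingDb()
db.fetch_items_batch(list(range(100)))  # batched: 1 round trip
print(db.queries)       # 1
```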

## Step 13: Deployment
<a name="step-13-deployment"></a>

After successful validation, deploy your modernized application and database to production.

### Deployment options
<a name="deployment-options"></a>
+ Amazon ECS and Amazon EC2 Linux

### Pre-deployment checklist
<a name="pre-deployment-checklist"></a>

Before deploying to production:
+ All validation tests passed
+ Performance testing completed
+ Security review completed
+ Backup and rollback plan documented
+ Monitoring and alerting configured
+ Team trained on new environment
+ Stakeholders informed of deployment
+ Maintenance window scheduled

### Deploy to Amazon ECS
<a name="deploy-to-amazon-ecs"></a>

1. Navigate to **Deployment** in the job plan

1. Choose **Deploy to ECS**

1. Configure deployment settings:
   + **Cluster**: Select or create ECS cluster
   + **Service**: Configure ECS service
   + **Task definition**: Review generated task definition
   + **Load balancer**: Configure ALB/NLB
   + **Auto-scaling**: Set scaling policies

1. Review infrastructure-as-code (CloudFormation template or AWS CDK code)

1. Choose **Deploy**

### Monitor deployment
<a name="monitor-deployment"></a>

AWS Transform deploys your application:

1. Creates Aurora PostgreSQL cluster

1. Applies database schema

1. Loads data (if applicable)

1. Deploys application containers

1. Configures load balancer

1. Sets up auto-scaling

Monitor deployment progress and verify:
+ Infrastructure provisioning
+ Database initialization
+ Application deployment
+ Health checks passing
+ Application accessible
+ Database connections working
+ Logs showing normal operation

### Post-deployment validation
<a name="post-deployment-validation"></a>

After deployment:

**Smoke Testing:**
+ Verify critical functionality
+ Test key user workflows
+ Check integration points
+ Monitor error rates

**Performance Monitoring:**
+ Track response times
+ Monitor database queries
+ Check resource utilization
+ Review application logs

**User Validation:**
+ Conduct user acceptance testing
+ Gather feedback
+ Address any issues
+ Document lessons learned

### Rollback procedures
<a name="rollback-procedures"></a>

If issues arise after deployment:

**Immediate Rollback:**
+ Revert to previous application version
+ Switch back to SQL Server (if still available)
+ Restore from backup if needed

**Partial Rollback:**
+ Roll back specific components
+ Keep database changes
+ Revert application code only

**Forward Fix:**
+ Apply hotfix to Aurora PostgreSQL version
+ Deploy updated application code
+ Monitor for resolution

**Important**  
Keep your SQL Server database available for a period after cutover to enable rollback if needed.

### Post-deployment optimization
<a name="post-deployment-optimization"></a>

After successful deployment:

**Performance Tuning:**
+ Optimize slow queries
+ Adjust connection pool settings
+ Fine-tune Aurora PostgreSQL parameters
+ Review and optimize indexes

**Cost Optimization:**
+ Right-size Aurora instance
+ Configure auto-scaling appropriately
+ Review storage settings
+ Optimize backup retention

**Monitoring Setup:**
+ Configure CloudWatch dashboards
+ Set up alerting
+ Enable Enhanced Monitoring
+ Configure Performance Insights

**Documentation:**
+ Update runbooks
+ Document architecture changes
+ Train operations team
+ Create troubleshooting guides

# Test and deploy after transformation
<a name="sql-server-test-deploy"></a>

Deploy your transformed applications to the landing zone, validate connectivity between your applications and the transformed database, and optionally set up CI/CD pipelines. Sample CI/CD pipeline configurations are generated as a reference to help you implement your deployment automation. This final phase validates the complete modernization.

## Configure resources
<a name="sql-server-configure-resources"></a>

You need to configure the resources used in deployment, such as IAM roles, instance profiles, and an S3 bucket. AWS Transform does not have permissions to create S3 buckets or to make changes to your IAM configuration, so a user who has permissions to create new roles, typically an account administrator, must perform these steps.

To simplify the process, we provide a setup script that automatically:
+ Executes the necessary CloudFormation templates
+ Creates the ECS Service-Linked Role
+ Sets up all required IAM resources

**Important**  
Run the provided setup script as is. Do not modify any resource names in the templates or the script, because doing so may impact the deployment process.

For Linux:

```
./setup.sh --region <YOUR-Region>
```

For Windows:

```
./setup.ps1 -Region <YOUR-Region>
```

## Check security groups in your VPC
<a name="sql-server-check-security-groups"></a>

AWS Transform deploys the applications to the same subnet and security group as the Aurora PostgreSQL instance.

For the applications to connect to the Aurora instance, make sure that this security group has an outbound rule allowing connections to the same security group and an inbound rule allowing connections from the same security group.

## Select and configure applications
<a name="sql-server-select-configure-applications"></a>

The AWS Transform UI shows a list of applications that can be deployed. Select an application and choose **Configure and Commit** to open a configuration window where you can specify:
+ Target infrastructure for the applications: select ECS or one of the EC2 instances
+ Application or instance parameters

After you complete and close the configuration window, AWS Transform:
+ Generates deployment artifacts based on your configuration
+ Commits these artifacts to your repository, including:
  + Deployment scripts
  + CloudFormation templates
  + Sample CI/CD pipeline configurations (which you can use as reference for setting up your deployment automation)

Once the commit is complete, the application status changes to *Configured and committed*. You can then select the applications and use the **Deploy** button in the top right to initiate deployment.

## ECS deployment
<a name="sql-server-ecs-deployment"></a>

All applications with target ECS are deployed into a new ECS cluster that is created in your account. Each application is deployed in its own dedicated container; multiple applications per container are not supported. You configure container parameters, such as required CPU and memory, for each application individually.

## EC2 deployment
<a name="sql-server-ec2-deployment"></a>

You can select one of the proposed EC2 instances and configure its parameters, such as instance type. Multiple applications can be deployed to the same EC2 instance:
+ To deploy multiple applications to a single instance, select the same target EC2 instance for each of these applications during configuration.
+ All applications that have the same instance selected as a target are deployed to that instance and assigned different random ports.
+ Note that if you edit instance parameters for one application, and then edit another application targeting the same instance, the changes affect all applications deployed to that instance.

## Deployment success
<a name="sql-server-deployment-success"></a>

Upon successful deployment, your applications are running on Aurora PostgreSQL with reduced licensing costs, improved performance, and cloud-native scalability. The job stays open unless explicitly stopped, which allows you to redeploy later with updated deployment parameters.

# Best practices and troubleshooting
<a name="sql-server-best-practices"></a>

This section describes best practices, common issues, and their resolutions during the modernization process.

## ECS application logs
<a name="sql-server-ecs-application-logs"></a>

### CloudWatch logs
<a name="sql-server-cloudwatch-logs"></a>
+ All ECS container logs are automatically sent to CloudWatch Logs
+ Access logs in the CloudWatch console under Log Groups
+ Log group naming format: `/aws/ecs/<application-name>`
+ Each container instance creates a new log stream within the group

### Viewing logs
<a name="sql-server-viewing-logs"></a>

**Through AWS Console:**
+ Navigate to CloudWatch > Log Groups
+ Select your application's log group
+ Choose the relevant log stream to view container logs

**Using AWS CLI:**

```
aws logs get-log-events --log-group-name /aws/ecs/your-app-name --log-stream-name your-stream-name
```
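If you don't know the log stream name, you can list the streams in the group first. This is a sketch assuming the AWS CLI is configured with access to the account; the log group name is an example, as in the command above:

```shell
# List the five most recently active log streams in the application's
# log group (substitute your application's actual group name).
aws logs describe-log-streams \
  --log-group-name /aws/ecs/your-app-name \
  --order-by LastEventTime \
  --descending \
  --max-items 5
```

The `logStreamName` values in the output can then be passed to `aws logs get-log-events` as shown above.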

### Common log locations
<a name="sql-server-common-log-locations"></a>
+ Application logs: CloudWatch Logs
+ ECS Service Events: ECS Console > Cluster > Service > Events tab
+ Container health/status: ECS Console > Cluster > Service > Tasks tab

## Database connection management
<a name="sql-server-database-connection-management"></a>

Applications use environment variables for database connection settings.

If you experience connectivity issues:
+ Verify the current connection settings in your environment variables
+ Update environment variables to modify database connection strings as needed
+ Connection string changes can be made through environment variable updates without application redeployment
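As an illustration, a PostgreSQL connection string supplied through an environment variable might look like the following. The variable name `ConnectionStrings__Default` and all connection values are hypothetical; use the variable name your application actually reads and obtain the password from Secrets Manager rather than hard-coding it:

```shell
# Hypothetical example: overriding the database connection string via an
# environment variable in Npgsql key=value format. All values are
# placeholders; substitute your own endpoint, database, and credentials.
export ConnectionStrings__Default="Host=my-aurora-cluster.cluster-abc.us-east-1.rds.amazonaws.com;Port=5432;Database=appdb;Username=app_user;Password=example-only"
echo "$ConnectionStrings__Default"
```

For ECS deployments, the equivalent change is made in the container definition's environment settings; for EC2, in the instance's application environment.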

## Database connection issues
<a name="database-connection-issues"></a>

**Problem:** Cannot connect AWS Transform to SQL Server

Solutions:
+ Verify network connectivity between AWS Transform and SQL Server
+ Check security group rules for proper port access (1433)
+ Confirm database credentials in Secrets Manager
+ Test database permissions with the created user
+ Ensure SQL Server is configured for mixed mode authentication
+ Verify secret has required tags (Project: atx-db-modernization, Owner: database-connector)
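One quick way to verify the last two points is to inspect the secret's metadata with the AWS CLI. This sketch assumes a secret named `my-sqlserver-secret`; substitute your own secret name or ARN:

```shell
# Show the secret's tags so you can confirm the required
# Project and Owner tags are present (secret name is a placeholder).
aws secretsmanager describe-secret \
  --secret-id my-sqlserver-secret \
  --query 'Tags' --output json
```

The output should include `{"Key": "Project", "Value": "atx-db-modernization"}` and `{"Key": "Owner", "Value": "database-connector"}`.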

## Firewall and security group issues
<a name="firewall-security-group-issues"></a>

**Problem:** Connection timeout or "cannot reach database" errors

**Root Cause:** Security groups or network ACLs blocking traffic

Solutions:

1. Verify Security Group Configuration:
   + Confirm your SQL Server security group has an inbound rule allowing port 1433 from the DMS Schema Conversion security group
   + Check that the source is the security group ID (e.g., sg-0123456789abcdef0), not an IP address
   + Verify the DMS Schema Conversion security group is correctly specified in the Instance Profile
   + Ensure there are no conflicting deny rules

1. Check Network ACLs:
   + Verify subnet-level Network ACLs allow inbound traffic on port 1433
   + Ensure Network ACLs allow outbound ephemeral ports for return traffic
   + Check both the database subnet and DMS subnet Network ACLs

1. Verify VPC Configuration:
   + Confirm the DMS Schema Conversion instance and SQL Server are in the same VPC or have proper VPC peering
   + Check route tables allow traffic between subnets
   + Verify no firewall appliances are blocking traffic

1. Test Connectivity:
   + Launch a test EC2 instance in the same subnet as DMS Schema Conversion
   + Attach the same security group as DMS Schema Conversion
   + Test connection to SQL Server using telnet or SQL Server Management Studio
   + If test succeeds, the issue is with AWS Transform configuration; if it fails, the issue is network/firewall

**Common Mistake:** Opening port 1433 to 0.0.0.0/0 (all sources) is a security risk. Always use security group-based access control to limit access to only the DMS Schema Conversion security group.
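As a sketch, the recommended security-group-based rule can be created with the AWS CLI. Both group IDs below are placeholders for your SQL Server security group and the DMS Schema Conversion security group:

```shell
# Allow SQL Server traffic (TCP 1433) only from the DMS Schema Conversion
# security group, instead of opening the port to all sources.
# Both security group IDs are placeholders; substitute your own.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 1433 \
  --source-group sg-0fedcba9876543210
```

Using `--source-group` rather than a CIDR range keeps the rule scoped to traffic originating from the DMS Schema Conversion security group.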

## Schema conversion issues
<a name="schema-conversion-issues"></a>

**Problem:** Schema conversion shows many action items

Solutions:
+ Review action items in conversion report
+ Prioritize based on impact
+ Use Amazon Q Developer for complex SQL conversions
+ Consult AWS Support for guidance
+ Consider phased approach for complex databases

## Application transformation issues
<a name="application-transformation-issues"></a>

**Problem:** Application transformation fails to build

Solutions:
+ Review build errors in transformation report
+ Configure private NuGet feeds if needed
+ Update package references if required
+ Check for Windows-specific dependencies
+ Review transformation logs for detailed errors

## Data migration issues
<a name="data-migration-issues"></a>

**Problem:** Data migration validation fails

Solutions:
+ Review validation report for specific failures
+ Check data type mappings
+ Verify identity column configuration (GENERATED BY DEFAULT vs GENERATED ALWAYS)
+ Review computed column expressions
+ Contact AWS Support for complex data issues
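The identity column distinction matters because a `GENERATED ALWAYS` column rejects the explicit key values that a data migration needs to insert, while `GENERATED BY DEFAULT` accepts them. A sketch of inspecting and adjusting this on the target database via `psql` (the endpoint, database, user, table, and column names are all hypothetical):

```shell
# Inspect identity columns on the target, then relax one to BY DEFAULT
# so migrated rows can keep their original key values.
# All connection parameters and object names are placeholders.
psql -h my-aurora-cluster.cluster-abc.us-east-1.rds.amazonaws.com \
     -U app_user -d appdb <<'SQL'
SELECT column_name, identity_generation
FROM information_schema.columns
WHERE table_name = 'orders' AND identity_generation IS NOT NULL;

-- Allow explicit values during migration.
ALTER TABLE orders ALTER COLUMN order_id SET GENERATED BY DEFAULT;
SQL
```

After migration completes, remember to reset the identity sequence past the highest migrated value so new inserts don't collide.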

## Resource cleanup issues
<a name="resource-cleanup-issues"></a>

**Problem:** Transformation job fails with resource errors

Solutions:
+ Check for existing DMS resources (migration projects, data providers, instance profiles)
+ Clean up failed or incomplete resources from previous attempts
+ Verify secrets are not scheduled for deletion
+ Check service quotas for DMS and Aurora PostgreSQL
+ Contact AWS Support if cleanup doesn't resolve the issue

## Deployment issues
<a name="deployment-issues"></a>

**Problem:** Transformed application cannot connect to Aurora PostgreSQL

Solutions:
+ Verify connection string format for PostgreSQL
+ Check security group rules
+ Verify database credentials in Secrets Manager
+ Ensure SSL/TLS is properly configured
+ Test connection using psql or pgAdmin
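A minimal connectivity check can be run from any host with network access to the cluster, assuming `psql` is installed. The endpoint, database, and user below are placeholders:

```shell
# Test connectivity and TLS to the Aurora PostgreSQL endpoint.
# All identifiers are placeholders; substitute your own values.
# You will be prompted for the password stored in Secrets Manager.
PGSSLMODE=require psql \
  -h my-aurora-cluster.cluster-abc.us-east-1.rds.amazonaws.com \
  -p 5432 -U app_user -d appdb \
  -c "SELECT version();"
```

If this succeeds but the application still cannot connect, the problem is likely in the application's connection string or environment variables rather than the network path.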

## Getting additional help
<a name="getting-additional-help"></a>

When contacting AWS Support, please provide:
+ Transformation job ID
+ AWS account ID
+ Region
+ Error messages and screenshots
+ Transformation logs (available in AWS Transform console)