Save time converting Terraform templates to AWS using Amazon Bedrock

Author: Troy Ameigh
Published on: 2024-03-15 21:20:40
Source: Integration & Automation


If you’ve spent time migrating non-AWS Terraform templates to the Amazon Web Services (AWS) Cloud, you know it can be a time-consuming, manual process with ample room for error. Most organizations would rather allocate engineering resources toward higher-value, more impactful initiatives such as cloud modernization.

With all the recent excitement around generative artificial intelligence (AI), perhaps you've considered an AI solution for automating your template conversion projects. If so, keep reading.

In this article, I show you two examples of how to convert a sample Microsoft Azure-based Terraform template to AWS using Amazon Bedrock, a fully managed service for building generative AI applications on AWS. Amazon Bedrock offers a wide range of foundation models from leading AI companies via a single API. And, because it’s serverless, you can quickly customize models using your own data and integrate them into your environment without infrastructure management.
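
Before you dive in, you can confirm which foundation models are available to your account directly from Boto3. Here's a minimal sketch, assuming your AWS credentials are configured and that you're working in the US West (Oregon) Region used later in this post:

```
import boto3

# The "bedrock" control-plane client lists models; the "bedrock-runtime"
# client (used later in this post) performs the actual inference calls.
bedrock = boto3.client("bedrock", region_name="us-west-2")

# Print the ID of every foundation model available in this Region
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])
```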

Solution overview

My solution provides two walkthrough exercises. In the first one, I show you how to use the Amazon Bedrock console to convert a sample non-AWS Terraform template into one designed for the AWS Cloud. It uses the Anthropic Claude version 2.1 foundation model, but you can try it using other supported models when you’re ready to explore further. For a complete list, see Supported foundation models in Amazon Bedrock.

While the first walkthrough provides step-by-step instructions for converting a template using the console, the second one is designed for organizations with a large catalog of templates. To address this more realistic scenario, you perform the conversion via a Python script, setting you up for a pipeline conversion workflow down the road (see the batch-processing sketch at the end of the second walkthrough). Once the template is converted, the only remaining task is to review the code for organization-specific parameters. The key is automation, and the goal is massive time savings.

Prerequisites

Before getting started with either walkthrough, ensure that you have the following:

- An AWS account with permissions to use Amazon Bedrock.
- Access to the Anthropic Claude foundation model in Amazon Bedrock. For details, see Model access.

In addition, to complete the second walkthrough, ensure that you have the following on your local machine:

- Python 3 with the Boto3 package installed.
- AWS credentials configured for a user that has Amazon Bedrock permissions.

Walkthrough 1: Converting a Terraform template using the Amazon Bedrock console

  1. Log in to your AWS account as a user with Amazon Bedrock permissions.
  2. Open the AWS Management Console in the US West (Oregon) Region.
  3. In the search bar, enter Amazon Bedrock, and then choose Amazon Bedrock to open the console.
  4. In the left navigation pane, under Playgrounds, choose Text.
  5. Choose Select model. (Note: If you don’t see any foundation models, you may need to request access. For details, see Model access.)
  6. Choose Anthropic, and then choose Claude v2.1. For the Throughput option, leave the default setting, On-demand.
  7. Choose Apply. Now you can interact with Amazon Bedrock using the Anthropic Claude v2.1 foundation model.
  8. In the text window, enter the following prompt:
    convert to work on AWS and output as a template

    Note: Be sure to use this exact prompt in the text window so that Amazon Bedrock outputs a complete template. For example, if you shortened the prompt to convert to work on AWS, Amazon Bedrock would output a list of proposed changes, but the template would be incomplete.

  9. In the text window, enter the following content. (Note: If you complete the second walkthrough, you will use this same template.)
    # Create a resource group if it doesn't exist
    resource "azurerm_resource_group" "blogterraformgroup" {
        name     = "blogResourceGroup"
        location = "westus"
    
        tags = {
            environment = "Amazon Bedrock Terraform Conversion Blog"
        }
    }
    
    # Create virtual network
    resource "azurerm_virtual_network" "blogterraformnetwork" {
        name                = "blogVnet"
        address_space       = ["10.0.0.0/16"]
        location            = "westus"
        resource_group_name = azurerm_resource_group.blogterraformgroup.name
    
        tags = {
            environment = "Amazon Bedrock Terraform Conversion Blog"
        }
    }
    
    # Create subnet
    resource "azurerm_subnet" "blogterraformsubnet" {
        name                 = "blogSubnet"
        resource_group_name  = azurerm_resource_group.blogterraformgroup.name
        virtual_network_name = azurerm_virtual_network.blogterraformnetwork.name
        address_prefixes       = ["10.0.1.0/24"]
    }
    
    # Create public IPs
    resource "azurerm_public_ip" "blogterraformpublicip" {
        name                         = "blogPublicIP"
        location                     = "westus"
        resource_group_name          = azurerm_resource_group.blogterraformgroup.name
        allocation_method            = "Dynamic"
    
        tags = {
            environment = "Amazon Bedrock Terraform Conversion Blog"
        }
    }
    
    # Create Network Security Group and rule
    resource "azurerm_network_security_group" "blogterraformnsg" {
        name                = "blogNetworkSecurityGroup"
        location            = "westus"
        resource_group_name = azurerm_resource_group.blogterraformgroup.name
    
        security_rule {
            name                       = "SSH"
            priority                   = 1001
            direction                  = "Inbound"
            access                     = "Allow"
            protocol                   = "Tcp"
            source_port_range          = "*"
            destination_port_range     = "22"
            source_address_prefix      = "*"
            destination_address_prefix = "*"
        }
    
        tags = {
            environment = "Amazon Bedrock Terraform Conversion Blog"
        }
    }
    
    # Create network interface
    resource "azurerm_network_interface" "blogterraformnic" {
        name                      = "blogNIC"
        location                  = "westus"
        resource_group_name       = azurerm_resource_group.blogterraformgroup.name
    
        ip_configuration {
            name                          = "blogNicConfiguration"
            subnet_id                     = azurerm_subnet.blogterraformsubnet.id
            private_ip_address_allocation = "Dynamic"
            public_ip_address_id          = azurerm_public_ip.blogterraformpublicip.id
        }
    
        tags = {
            environment = "Amazon Bedrock Terraform Conversion Blog"
        }
    }
    
    # Connect the security group to the network interface
    resource "azurerm_network_interface_security_group_association" "example" {
        network_interface_id      = azurerm_network_interface.blogterraformnic.id
        network_security_group_id = azurerm_network_security_group.blogterraformnsg.id
    }
    
    # Generate random text for a unique storage account name
    resource "random_id" "randomId" {
        keepers = {
            # Generate a new ID only when a new resource group is defined
            resource_group = azurerm_resource_group.blogterraformgroup.name
        }
    
        byte_length = 8
    }
    
    # Create storage account for boot diagnostics
    resource "azurerm_storage_account" "blogstorageaccount" {
        name                        = "diag${random_id.randomId.hex}"
        resource_group_name         = azurerm_resource_group.blogterraformgroup.name
        location                    = "westus"
        account_tier                = "Standard"
        account_replication_type    = "LRS"
    
        tags = {
            environment = "Amazon Bedrock Terraform Conversion Blog"
        }
    }
    
    # Create (and display) an SSH key
    resource "tls_private_key" "example_ssh" {
      algorithm = "RSA"
      rsa_bits = 4096
    }
    output "tls_private_key" { 
        value = tls_private_key.example_ssh.private_key_pem 
        sensitive = true
    }
    
    # Create virtual machine
    resource "azurerm_linux_virtual_machine" "blogterraformvm" {
        name                  = "blogVM"
        location              = "westus"
        resource_group_name   = azurerm_resource_group.blogterraformgroup.name
        network_interface_ids = [azurerm_network_interface.blogterraformnic.id]
        size                  = "Standard_DS1_v2"
    
        os_disk {
            name              = "blogOsDisk"
            caching           = "ReadWrite"
            storage_account_type = "Premium_LRS"
        }
    
        source_image_reference {
            publisher = "Canonical"
            offer     = "UbuntuServer"
            sku       = "22.04-LTS"
            version   = "latest"
        }
    
        computer_name  = "blogvm"
        admin_username = "azureuser"
        disable_password_authentication = true
    
        admin_ssh_key {
            username       = "azureuser"
            public_key     = file("~/.ssh/id_rsa.pub")
        }
    
        boot_diagnostics {
            storage_account_uri = azurerm_storage_account.blogstorageaccount.primary_blob_endpoint
        }
    
        tags = {
            environment = "Amazon Bedrock Terraform Conversion Blog"
        }
            
        connection {
            host = self.public_ip_address
            user = "azureuser"
            type = "ssh"
            private_key = "${file("~/.ssh/id_rsa")}"
            timeout = "4m"
            agent = false
        }
    }
    
  10. In the Configurations section, adjust the slider to set the value of Maximum Length to 2048.
  11. Choose Run.
  12. Review the converted template, make a note of the list of changes, and update additional items in the output as needed.

Sample output:

 Assistant:
 Here is the converted Terraform template to deploy similar infrastructure on AWS:

```
# Create VPC
resource "aws_vpc" "blog_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "Blog VPC" 
  }
}

# Create Internet Gateway
resource "aws_internet_gateway" "blog_igw" {
  vpc_id = aws_vpc.blog_vpc.id

  tags = {
    Name = "Blog IGW"
  }
}

# Create Public Subnet
resource "aws_subnet" "blog_public_subnet" {
  vpc_id = aws_vpc.blog_vpc.id
  cidr_block = "10.0.1.0/24"
  availability_zone = "us-west-1a"

  tags = {
    Name = "Blog Public Subnet"
  }
} 

# Create Route Table and add public route
resource "aws_route_table" "blog_public_rt" {
  vpc_id = aws_vpc.blog_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.blog_igw.id
  }

  tags = {
    Name = "Blog Public Route Table"
  }
}

# Associate public subnet with public route table
resource "aws_route_table_association" "blog_public_assoc" {
  subnet_id = aws_subnet.blog_public_subnet.id
  route_table_id = aws_route_table.blog_public_rt.id
}

# Create Security Group for SSH access
resource "aws_security_group" "blog_sg" {
  name = "blog_sg"
  vpc_id = aws_vpc.blog_vpc.id

  ingress {
    from_port = 22
    protocol = "tcp"
    to_port = 22
    cidr_blocks = ["0.0.0.0/0"] 
  }
  
  egress {
    from_port = 0 
    protocol = "-1"
    to_port = 0 
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Generate key pair for SSH access
resource "aws_key_pair" "blog_auth" {
  key_name = "blog_key"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create EC2 Instance
resource "aws_instance" "blog_server" {
  ami = "ami-0b5eea76982371e91" # Amazon Linux 2 AMI 
  instance_type = "t2.micro"

  subnet_id = aws_subnet.blog_public_subnet.id
  vpc_security_group_ids = [aws_security_group.blog_sg.id]
  key_name = aws_key_pair.blog_auth.id

  tags = {
    Name = "Blog Server"
  }
}
```

The main changes:

- Use AWS VPC, subnets, route tables, IGWs instead of VNETs and subnets
- Security groups instead of NSGs 
- EC2 instances instead of Azure VMs
- Key pairs instead of SSH keys

Let me know if you have any other questions!  

Amazon Bedrock has now converted the Terraform template and provided a list of changed items. If your organization requires changes that weren't handled by Amazon Bedrock, you can make them manually.
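
Before deploying the converted template, it's also worth a quick mechanical check with Terraform itself. The following is a minimal sketch, assuming Terraform is installed locally and the converted output is saved in the current directory:

```
import subprocess

# Initialize without configuring a backend, then check the converted
# template's syntax and internal consistency.
subprocess.run(["terraform", "init", "-backend=false"], check=True)
subprocess.run(["terraform", "validate"], check=True)
```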

Walkthrough 2: Converting Terraform templates using Python

  1. On your local machine, create a file named bedrock-blog.py, and add the following content. If the Terraform template file is in the same directory as the Python script, only include the file name. Otherwise, include the path.
    import json
    import logging

    import boto3
    from botocore.exceptions import ClientError

    logger = logging.getLogger(__name__)


    def template_conversion(template_file_path):

        """
        This function converts a Terraform template to work on AWS.
        The different foundation models have individual request and response formats.
        For the formatting for Anthropic Claude, refer to:
        https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html
        """

        try:
            with open(template_file_path) as file:
                template = file.read()

            bedrock_runtime = boto3.client(region_name="us-east-1",
                                           service_name="bedrock-runtime")

            prompt = f"Convert to work on AWS and output as a template\n{template}"

            # Claude requires you to enclose the prompt as follows:
            enclosed_prompt = "\n\nHuman: " + prompt + "\n\nAssistant:"

            body = json.dumps({
                "prompt": enclosed_prompt,
                "max_tokens_to_sample": 4096,
                "temperature": 0.5,
            }).encode()

            # Claude 2 model ID; use "anthropic.claude-v2:1" to invoke Claude 2.1 instead.
            response = bedrock_runtime.invoke_model(body=body, modelId="anthropic.claude-v2")
            response_body = json.loads(response.get('body').read())
            print(response_body.get('completion'))
            return response_body.get('completion')

        except ClientError:
            logger.error("Couldn't invoke Anthropic Claude")
            raise


    result = template_conversion('nonaws.tf')

    # Write the raw model completion to aws.tf.
    with open('aws.tf', 'w') as file:
        file.write(result)

    # Drop the conversational lines that precede the template in the response.
    with open('aws.tf', 'r+') as file:
        lines = file.readlines()
        file.seek(0)
        file.truncate()
        file.writelines(lines[2:])

    # Trim everything after the template's final closing brace.
    with open("aws.tf") as f:
        lines = f.readlines()

    index = -1
    for i, line in enumerate(reversed(lines)):
        if "}" in line:
            index = len(lines) - i
            break

    if index != -1:
        del lines[index:]

    with open("aws.tf", "w") as f:
        f.writelines(lines)


    This script does the following actions:

    • Imports the JSON, logging, and Boto3 packages, along with the Boto3 ClientError exception.
    • Defines a template_conversion function that takes a template_file_path parameter.
    • Sets up a Python try-except statement within the function. The try statement reads the Terraform template, formats the prompt, sets the parameters passed to the foundation model, and executes the call to Amazon Bedrock. The except clause logs an error and re-raises the exception if the foundation model can't be invoked.
    • Initiates the function and uses the results to create an AWS version of the Terraform template named aws.tf.
    • Parses the template and removes unneeded responses from Amazon Bedrock to create a clean file.
  2. Copy the contents of step 9 in the first walkthrough into a new file named nonaws.tf, and save it into the same directory as the Python script.
  3. Run the following command to convert the nonaws.tf file to an AWS-compliant template named aws.tf: python bedrock-blog.py.
  4. Run the following command to view the new template: cat aws.tf.
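
With a single-file conversion working, the pipeline workflow mentioned earlier is mostly a matter of looping over your catalog. Here's a minimal sketch, assuming you've moved the template_conversion function into an importable module (the module name bedrock_blog and the templates/ directory are hypothetical):

```
from pathlib import Path

from bedrock_blog import template_conversion  # hypothetical module name

# Convert every template in the catalog, writing each result alongside
# its source as <name>-aws.tf for later review.
for template in Path("templates").glob("*.tf"):
    converted = template_conversion(str(template))
    template.with_name(f"{template.stem}-aws.tf").write_text(converted)
```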

Conclusion

As generative AI proliferates across industries, organizations grapple with understanding its true value and its application to daily workflows. Although generative AI is well known for document creation, image editing, and coding assistance, many developers don't fully understand its role in performing complex tasks like code conversion. I hope this blog post helped clear up some of that ambiguity by demonstrating how you can use a powerful AI tool like Amazon Bedrock to automate rote migration tasks and give you back valuable time to create real business value.

Next steps

Take your skills to the next level and explore the powerful capabilities of Amazon Bedrock via the AWS Management Console and SDKs. In addition to the Anthropic Claude v2.1 foundation model that we used in this blog post, take some time to experiment with other supported models, including Amazon Titan, AI21 Labs, Cohere, and more. To understand variations in terminology between models, see Inference parameters for foundation models. I also invite you to learn how to develop custom models and see for yourself how they can improve your conversion accuracy from the typical 85-90% to over 95%.
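
As a concrete illustration of those terminology differences, Amazon Titan expects an inputText field and a textGenerationConfig block rather than Claude's prompt and max_tokens_to_sample keys. Here's a minimal sketch of the same conversion call against Titan Text (verify model access and the exact model ID available in your Region first):

```
import json
import boto3

bedrock_runtime = boto3.client(service_name="bedrock-runtime",
                               region_name="us-east-1")

with open("nonaws.tf") as file:
    template = file.read()

# Titan's request shape differs from Claude's: "inputText" plus a
# "textGenerationConfig" block instead of "prompt"/"max_tokens_to_sample".
body = json.dumps({
    "inputText": f"Convert to work on AWS and output as a template\n{template}",
    "textGenerationConfig": {"maxTokenCount": 4096, "temperature": 0.5},
})

response = bedrock_runtime.invoke_model(body=body,
                                        modelId="amazon.titan-text-express-v1")
print(json.loads(response["body"].read())["results"][0]["outputText"])
```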

About the author

Troy Ameigh

Troy is a Specialist Solutions Architect at AWS, focusing on AWS Partner integration and automation. With almost 30 years of experience in the IT industry, Troy has a great passion for promoting infrastructure as code (IaC) and leveraging his years of experience in Linux, AWS, networking, storage, containers, virtualization, and architecting.

