How to Structure a Scalable Multi-Cloud Terraform Project
Bruna Gomes | Aug 20, 2025
In the age of automation and scalability, Infrastructure as Code (IaC) has transformed the way IT teams manage and provision resources. What used to be done with time-consuming manual processes or improvised scripts can now be described in clear, versionable, and reusable configuration files.
Among IaC tools, Terraform stands out as one of the most powerful and popular, allowing engineers to define complex infrastructures in code, promoting consistency, collaboration, and efficiency.
Here at Cheesecake Labs, we use Terraform to automate processes and create scalable solutions, especially in multi-cloud and multi-region projects.
But how do you structure a Terraform project so it’s organized, easy to maintain, and ready to grow?
In this blog post, we’ll share our approach to creating a clear and scalable Terraform multi-cloud project structure, including best practices, a suggested folder structure, and explanations of the files that make up each part of the project.
If you’ve ever wondered, “How do I organize my Terraform directories?”, this guide is for you!
When working with complex infrastructures that span multiple cloud providers and regions, you need tools that offer flexibility, consistency, and strong ecosystem support.
Terraform meets those requirements for several reasons: it supports every major cloud provider through a single workflow, its declarative configuration is versionable and reviewable, and its module system encourages reuse.
Terraform gives us the building blocks we need to structure scalable, modular, and maintainable infrastructure across clouds and regions, without locking ourselves into any single provider.
Implementing a Terraform project that spans multiple clouds and regions comes with plenty of challenges. Here are a few that you’ll likely encounter:
One of the main challenges is managing the Terraform state, which stores the mapping between the resources defined in the configuration files and the actual resources provisioned.
In a multi-cloud and multi-region environment, this state can become fragmented and difficult to manage, especially when different teams or processes need to access different parts of the state simultaneously. Plus, the risk of state conflicts or corruption is real if the project is not well structured.
Ensuring modularity and code reuse is crucial to avoiding redundancies and maintaining development efficiency.
Creating modules that can be easily reused in different clouds and regions is fundamental to keeping the project scalable and manageable in the long term.
Adopting reusable modules also helps propagate good practices across environments. However, this requires careful organization of files and folders and a clear understanding of each cloud provider's particularities.
Finally, configuring cloud providers in Terraform can be complex. Each provider has its own APIs and configuration requirements, which means you need to create specific configurations for each one, while ensuring that the code is consistent and easy to maintain.
In addition, each region may have its own peculiarities, like service availability or compliance requirements, which add another layer of complexity to the project.
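One common way to handle several regions of the same provider in a single configuration is provider aliases. The sketch below is illustrative (region names, the bucket name, and the `secondary` alias are our own examples, not part of the project structure described here):

```hcl
# Each cloud gets its own provider block; additional regions of the
# same provider are declared with an alias.
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "secondary"
  region = "us-west-2"
}

provider "azurerm" {
  features {} # required block for the azurerm provider
}

# A resource then opts into the aliased region explicitly:
resource "aws_s3_bucket" "replica" {
  provider = aws.secondary
  bucket   = "project-replica-bucket" # example name
}
```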
To efficiently manage a project that spans several clouds and regions, it is essential to have a well-organized folder structure. Here’s a structure that can be adapted to your project’s needs:
/terraform/
├── docs/
├── live/
│   ├── development/
│   │   ├── aws/
│   │   │   └── us-east-1/
│   │   │       └── _setup/
│   │   └── azure/
│   │       └── eastus/
│   │           └── _setup/
│   ├── staging/
│   │   ├── aws/
│   │   └── ...
│   └── production/
│       ├── aws/
│       └── ...
└── modules/
    ├── aws/
    │   ├── network/
    │   │   ├── 1.0.0/
    │   │   ├── 1.3.1/
    │   │   └── ...
    │   └── database/
    │       └── ...
    └── azure/
        └── network/
            └── ...
Why does this structure work so well? Each environment, cloud, and region gets its own folder (and therefore its own isolated state), while versioned modules keep shared building blocks reusable across all of them.
Now that we have the folder structure, let’s detail the files that make up the live (the actual environments where resources are provisioned) and modules (where we define reusable blocks) folders.
Here’s our file structure suggestion. In each live region folder (e.g., live/development/aws/us-east-1/), we keep a standard set of files:
main.tf — declares the modules this region uses:

module "network" {
  # relative path from live/<env>/aws/<region>/ to the versioned module
  source      = "../../../../modules/aws/network/1.3.1"
  name_prefix = var.name_prefix
  cidr_prefix = var.cidr_prefix
}
variables.tf — declares the input variables for this region:

variable "region" {
  description = "AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}
outputs.tf — exposes values for consumers. Note that the live folder references the module’s output, not the resource directly:

output "vpc_id" {
  description = "VPC ID"
  value       = module.network.vpc_id
}
locals.tf — local values shared across this region’s files:

locals {
  name_prefix = var.name_prefix
}
terraform.tf — configures the remote backend and the provider:

terraform {
  backend "s3" {
    bucket         = "project-terraform-state"
    dynamodb_table = "project-terraform-locks"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
  }
}

provider "aws" {
  region              = var.region
  allowed_account_ids = var.allowed_account_ids

  default_tags {
    tags = var.default_tags
  }
}
terraform.tfvars — sets the region-specific values:

region = "us-east-2"
In the modules folder (e.g., modules/aws/network/1.3.1/), main.tf defines the actual resources:

resource "aws_vpc" "vpc" {
  cidr_block           = "${var.cidr_prefix}.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = merge(local.tags, {
    Name = "${local.name_prefix}-vpc"
  })
}
variables.tf — the module’s inputs:

variable "name_prefix" {
  description = "Name prefix"
  type        = string
}
outputs.tf — the module’s outputs:

output "vpc_id" {
  description = "VPC ID"
  value       = aws_vpc.vpc.id
}
locals.tf — the module’s local values:

data "aws_region" "current" {} # needed for the azs expression below

locals {
  name_prefix = var.name_prefix
  azs         = [for az in var.azs : "${data.aws_region.current.name}${az}"]
}
versions.tf — pins the Terraform and provider versions:

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}
This separation keeps the modules generic and the live folders specific, balancing reuse and customization.
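The same pattern carries across providers: an Azure region’s main.tf consumes its own versioned module in exactly the same way. The sketch below is illustrative (the `address_space` input and the values are hypothetical, not defined elsewhere in this post):

```hcl
# Hypothetical live/development/azure/eastus/main.tf
module "network" {
  source        = "../../../../modules/azure/network/1.0.0"
  name_prefix   = "dev"
  address_space = ["10.1.0.0/16"] # hypothetical input of the Azure module
}
```

Pinning each live folder to a versioned module folder makes upgrades an explicit, reviewable change: bumping a region to a new module version is a one-line diff.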
A critical aspect of Terraform projects is state management (terraform.tfstate). In multi-cloud and multi-region environments, storing state centrally and securely is essential. To do this, we introduce the _setup folder inside each region folder (e.g., /live/development/aws/us-east-1/_setup/).
The _setup folder follows the same file structure as the live folder:
The purpose of _setup is to provision the resources needed to manage the state of Terraform remotely and robustly, configuring the Terraform backend that the live folders will use. On AWS, for example, this means an S3 bucket to store the state files and a DynamoDB table for state locking (other clouds have equivalents, such as an Azure Storage container).
By running terraform apply in each region’s _setup folder, these resources are created, and each region’s backend can then be configured to use them, keeping every region’s state stored remotely and independently.
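As a sketch, a _setup folder for an AWS region might provision these resources. The resource names below match the backend configuration shown earlier but are still illustrative:

```hcl
# Hypothetical _setup/main.tf: provisions the remote-state resources
# that the live folders' backend blocks point to.
resource "aws_s3_bucket" "terraform_state" {
  bucket = "project-terraform-state" # illustrative name

  lifecycle {
    prevent_destroy = true # losing this bucket means losing state history
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled" # keeps old state versions recoverable
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "project-terraform-locks" # illustrative name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # attribute name required by the S3 backend

  attribute {
    name = "LockID"
    type = "S"
  }
}
```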
The backend provides centralization, versioning, security, and concurrency control, making state management more reliable in complex projects.
The terraform.tfstate file generated by the _setup folder is versioned directly inside it in the Git repository.
Why? Because it only contains information about the state management resources (e.g., the S3 bucket), which are not sensitive. The state of the live folders (which describes the actual infrastructure) is stored remotely in the provisioned bucket, with encryption and access control.
This approach simplifies the start-up of new projects or regions, ensuring that the remote backend is ready before provisioning the main resources.
Structuring a multi-cloud and multi-region Terraform project can be challenging, but with an organized and modular approach, you can create a scalable and easy-to-manage solution.
The key is to adopt best practices such as segregating environments and regions, reusing modules, and parameterizing variables.
With this approach, you and your team will be prepared to face the challenges of a multi-cloud environment with confidence and efficiency.
Do you need help provisioning and managing highly available and scalable infrastructures? At Cheesecake Labs, our team of DevOps and Cloud Computing professionals prioritizes creating environments tailored to each project and client’s needs, in an agile and documented manner, always focusing on the cost-benefit ratio.