Terraform Multi-Cloud Project Structure: Best Practices for Multi-Region Setups

In the age of automation and scalability, Infrastructure as Code (IaC) has transformed the way IT teams manage and provision resources. What used to be done with time-consuming manual processes or improvised scripts can now be described in clear, versionable, and reusable configuration files.

Among IaC tools, Terraform stands out as one of the most powerful and popular, allowing engineers to define complex infrastructures in code, promoting consistency, collaboration, and efficiency.

Here at Cheesecake Labs, we use Terraform to automate processes and create scalable solutions, especially in projects that span multiple clouds and regions.

But how do you structure a Terraform project so it’s organized, easy to maintain, and ready to grow?

In this blog post, we’ll share our approach to creating a clear and scalable Terraform multi-cloud project structure, including best practices, a suggested folder structure, and explanations of the files that make up each part of the project. 

If you’ve ever wondered, “How do I organize my Terraform directories?”, this guide is for you!

Why Terraform?

When working with complex infrastructures that span multiple cloud providers and regions, you need tools that offer flexibility, consistency, and strong ecosystem support.

Terraform meets those requirements for several reasons, including:

  • Cloud-agnostic by design: Terraform supports all major cloud providers — like AWS, Azure, and Google Cloud — through a unified configuration language (HCL). This makes it ideal for managing multi-cloud environments from a single codebase.
  • Declarative and predictable: Terraform lets you describe the desired state of your infrastructure. It then takes care of provisioning, updating, or deleting resources to match that state. This helps reduce human error and makes changes more predictable.
  • Strong support for modularity: Using modules, you can break your infrastructure into reusable components. This is especially useful when you want to replicate infrastructure across regions or clouds while maintaining consistency.
  • Remote state and change tracking: Terraform maintains a state file that tracks what resources it manages. This enables features like plan previews (terraform plan), targeted applies, and rollback strategies when using remote state backends.
  • Tooling and ecosystem: From integrations with CI/CD tools to community-backed modules in the Terraform Registry, there’s a mature ecosystem around Terraform that supports automation, testing, and scaling best practices.

Terraform gives us the building blocks we need to structure scalable, modular, and maintainable infrastructure across clouds and regions, without locking ourselves into any single provider.

Challenges in structuring a multi-cloud and multi-region Terraform project

Implementing a Terraform project that spans multiple clouds and regions comes with plenty of challenges. Here are a few that you’ll likely encounter:

Managing the Terraform state

One of the main challenges is managing the Terraform state, which stores the mapping between the resources defined in the configuration files and the actual resources provisioned.

In a multi-cloud and multi-region environment, this state can become fragmented and difficult to manage, especially when different teams or processes need to access different parts of the state simultaneously. Plus, the risk of state conflicts or corruption is real if the project is not well structured.

Ensuring modularity and code reuse

Ensuring modularity and code reuse is crucial to avoiding redundancies and maintaining development efficiency.

Creating modules that can be easily reused in different clouds and regions is fundamental to keeping the project scalable and manageable in the long term.

It also helps to propagate good practices between environments when adopted. However, this requires careful organization of files and folders and a clear understanding of the particularities of each cloud provider.

Configuring cloud providers

Finally, configuring cloud providers in Terraform can be complex. Each provider has its own APIs and configuration requirements, which means you need to create specific configurations for each one, while ensuring that the code is consistent and easy to maintain.

In addition, each region may have its own peculiarities, like service availability or compliance requirements, which add another layer of complexity to the project.
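
For example, a single configuration can declare multiple providers side by side, and provider aliases can cover extra regions. This is a hedged sketch; the version constraints and region names are illustrative:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0"
    }
  }
}

# Default AWS provider, plus an aliased one for a second region
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

provider "azurerm" {
  features {}
}
```

Resources then opt into the aliased provider with `provider = aws.west`, which is one way a single root module can touch two regions. The folder structure below avoids even that, by giving each region its own root module.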

Folder structure for your Terraform multi-cloud/multi-region project

To efficiently manage a project that spans several clouds and regions, it is essential to have a well-organized folder structure. Here’s a structure that can be adapted to your project’s needs:

/terraform/
├── docs/
├── live/
│   ├── development/
│   │   ├── aws/
│   │   │   └── us-east-1/
│   │   │       └── _setup/
│   │   └── azure/
│   │       └── eastus/
│   │           └── _setup/
│   ├── staging/
│   │   ├── aws/
│   │   └── ...
│   └── production/
│       ├── aws/
│       └── ...
└── modules/
    ├── aws/
    │   ├── network/
    │   │   ├── 1.0.0/
    │   │   ├── 1.3.1/
    │   │   └── ...
    │   ├── database/
    │   └── ...
    └── azure/
        ├── network/
        └── ...

Why this structure?

Here’s what each part of the structure accomplishes:

  1. Separation by environment: Directories such as development, staging, and production isolate environments, making it easier to manage differences such as instance sizes, security policies, or specific variables. This prevents changes in one environment from accidentally impacting another.
  2. Organization by provider: Folders such as aws and azure within each environment reflect the cloud providers, allowing provider-specific configurations without mixing their logic.
  3. Separation by region: Folders such as us-east-1 (AWS) or eastus (Azure) separate regions, reflecting particularities such as latency, service availability, or compliance requirements. This makes it easier to manage regional resources and apply specific configurations, such as availability zones or local firewall rules.
  4. Terraform state segregation: By dividing the project into environments, providers, and regions, the Terraform state (terraform.tfstate) is naturally segregated. In large structures, this significantly decreases the chance of state corruption, because each live folder (and its regional subfolders) maintains its own independent state file. Instead of a single monolithic state, which can become a point of failure in complex projects, this approach distributes the risk, making the infrastructure more resilient and easier to debug.
  5. Reusable modules: The modules folder contains provider-specific code blocks (e.g., aws/network, azure/network), promoting reuse and consistency across all environments and regions.
  6. Module versioning: Versioned modules allow the code to evolve in a controlled manner. Changes can be tested in development, promoted to staging, and finally applied in production. We chose this versioning approach in folders (1.0.0, 1.3.1, etc.) in a monorepo for its simplicity, but you can also use separate repositories for each module, depending on your needs.
  7. Documentation: This topic is often overlooked, but having infrastructure documentation can be very helpful. For example, it can help facilitate understanding and onboard new people, or help reduce complexity by explaining the entire project in simpler terms.
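
As an illustration of point 4, each region’s root module points its backend at its own state location, so no two folders ever share a state file. A hedged sketch (the bucket and key names are hypothetical):

```hcl
# /live/development/aws/us-east-1/provider.tf
terraform {
  backend "s3" {
    bucket  = "project-terraform-state-development"
    key     = "aws/us-east-1/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}
```

A sibling region such as us-west-2 would use the same pattern with its own key (e.g., `aws/us-west-2/terraform.tfstate`), keeping every state file independent.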

File Structure

Now that we have the folder structure, let’s detail the files that make up the live (the actual environments where resources are provisioned) and modules (where we define reusable blocks) folders. 

Here’s our file structure suggestion:

Live folders (e.g., /live/development/aws/)

  • main.tf: Terraform’s entry point. It defines the resources to be provisioned by calling modules and configuring dependencies. We keep it simple, delegating the heavy logic to the modules.
module "network" {
  # Relative path assumes this file lives in /live/<env>/aws/<region>/
  source = "../../../../modules/aws/network/1.0.0"

  name_prefix = var.name_prefix
  cidr_prefix = var.cidr_prefix
}
  • variables.tf: Declares the project’s variables, such as region, instance type, or environment name. This allows customization without changing the main code.
variable "region" {
  description = "AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}
  • outputs.tf: Defines Terraform outputs, such as instance IPs or service URLs. This information is helpful for CI/CD pipelines or automation scripts.
output "vpc_id" {
  description = "VPC ID"
  value       = aws_vpc.vpc.id
}
  • locals.tf: Stores local variables, such as combinations of values or environment-specific constants. It helps maintain DRY (Don’t Repeat Yourself) code.
locals {
  name_prefix = var.name_prefix
}
  • provider.tf: Configures cloud providers (e.g., AWS, Azure) with credentials and parameters such as region. We parameterize with variables for flexibility.
terraform {
  backend "s3" {
    bucket         = "project-terraform-state"
    dynamodb_table = "project-terraform-locks"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
  }
}

provider "aws" {
  region              = var.region
  allowed_account_ids = var.allowed_account_ids
  default_tags {
    tags = var.default_tags
  }
}
  • terraform.auto.tfvars: Contains default values for the variables, loaded automatically by Terraform. Ideal for environment-specific configurations.
region = "us-east-2"

Modules folders (e.g., /modules/aws/network/)

  • main.tf: Defines the module’s resources, such as a VPC or subnet. It’s the heart of the module, but focused on a single responsibility.
resource "aws_vpc" "vpc" {
  cidr_block           = "${var.cidr_prefix}.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = merge(local.tags, {
    Name = "${local.name_prefix}-vpc"
  })
}
  • variables.tf: Lists the variables that the module accepts, such as CIDR blocks or tags. This allows the module to be configurable when called.
variable "name_prefix" {
  description = "Name prefix"
  type        = string
}
  • outputs.tf: Returns values generated by the module, such as resource IDs, for use in other modules or in live.
output "vpc_id" {
  description = "VPC ID"
  value       = aws_vpc.vpc.id
}
  • locals.tf: Stores calculations or derived values within the module, keeping the logic encapsulated.
# Requires a data source declared elsewhere in the module:
# data "aws_region" "current" {}
locals {
  name_prefix = var.name_prefix
  azs         = [for az in var.azs : "${data.aws_region.current.name}${az}"]
}
  • version.tf: Specifies the minimum versions of Terraform and the provider (e.g., aws = ">= 5.0"). Ensures compatibility and consistency.
terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

This separation keeps the modules generic and the live folders specific, balancing reuse and customization.

The _setup folder: Managing Terraform state

A critical aspect of Terraform projects is state management (terraform.tfstate). In multi-cloud and multi-region environments, storing state centrally and securely is essential. To do this, we introduce the _setup folder inside each region folder (e.g., /live/development/aws/_setup/).

_setup Folder Structure

The _setup folder follows the same file structure as the live folder:

  • main.tf, variables.tf, outputs.tf, locals.tf, provider.tf, terraform.auto.tfvars.

Objective

The purpose of _setup is to provision the resources needed to manage the state of Terraform remotely and robustly, configuring the Terraform backend that the live folders will use. This includes, for example:

  • An S3 bucket (on AWS) or Storage Account (on Azure) to store the remote state file.
  • A DynamoDB table (on AWS) for lock control, avoiding conflicts in simultaneous executions.

By running terraform apply on each region’s _setup folder, these resources are created, and each region’s backend can be configured to use them, so that each region’s state can be stored remotely separately.
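
A minimal sketch of what the _setup folder’s main.tf might provision on AWS (the resource names are illustrative, and recent AWS provider versions manage bucket versioning as a separate resource):

```hcl
# S3 bucket that will hold the remote state of the live folders
resource "aws_s3_bucket" "terraform_state" {
  bucket = "project-terraform-state"
}

# Keep old state versions so accidental corruption can be rolled back
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# DynamoDB table used by the S3 backend for state locking
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "project-terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

The bucket and table names here match the backend block shown earlier for the live folders; in practice they should be parameterized per environment and region.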

The backend provides centralization, versioning, security, and concurrency control, making state management more reliable in complex projects.

Particularity of the state

The terraform.tfstate file generated by the _setup folder is versioned directly in the Git repository, alongside its configuration.

Why? Because it only contains information about the state management resources (e.g., the S3 bucket), which are not sensitive. The state of the live folders (which describes the actual infrastructure) is stored remotely in the provisioned bucket, with encryption and access control.

This approach simplifies the start-up of new projects or regions, ensuring that the remote backend is ready before provisioning the main resources.

Conclusion

Structuring a multi-cloud and multi-region Terraform project can be challenging, but with an organized and modular approach, you can create a scalable and easy-to-manage solution.

The key is to adopt best practices such as segregating environments and regions, reusing modules, and parameterizing variables.

With this approach, you and your team will be prepared to face the challenges of a multi-cloud environment with confidence and efficiency.

Do you need help provisioning and managing highly available and scalable infrastructures? At Cheesecake Labs, our team of DevOps and Cloud Computing professionals prioritizes creating environments tailored to each project and client’s needs, in an agile and documented manner, always focusing on the cost-benefit ratio.

About the author.

Álan Monteiro

The security, automation, camping, motorcycling, travel, and collaboration guy! Let’s talk?