AWS Network Setup

This document outlines a network solution for a medium-sized enterprise with the following requirements:

  1. Separation and parity between development, testing, and production environments.
  2. Segmentation of trusted (not internet-accessible) and untrusted (internet-accessible) resources.
  3. Hub-and-spoke architecture for efficient connectivity.
  4. Minimal costs to maintain affordability.

To address costs (#4), EC2 instances offer the lowest entry cost. Instead of using the AWS NAT Gateway (which costs about $40/month), we run a minimal Linux EC2 instance as our NAT, reducing the cost to about $4/month.
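For reference, a NAT instance needs its source/destination check disabled and IP forwarding with masquerading enabled. The commands below are a minimal sketch, not this project's exact configuration; the instance ID, interface name, and CIDR are placeholders:

# From a machine with AWS credentials: allow the instance to forward traffic it did not originate.
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check

# On the NAT instance itself: enable IP forwarding and masquerade traffic from the private range.
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/16 -j MASQUERADE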

For the hub-and-spoke model, AWS Transit Gateway is the default advertised solution. However, it incurs a $35/month charge for the hub, plus $15/month per spoke or isolated network segment. For our design, this totals $120/month.

Instead, we use EC2 instances and SSH tunnels to create routable connections between segments. Each segment is in a separate VPC, connected through a VPC Endpoint Service (AWS PrivateLink) shared in the hub. This approach costs $15/month for the VPC Endpoint Service and $4/month per segment, totaling approximately $45/month.
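To illustrate the idea, OpenSSH can carry a layer-3 tunnel over such a connection. The sketch below is illustrative only; the endpoint DNS name, addresses, and CIDR are placeholders, and PermitTunnel must be enabled on the remote sshd:

# From a spoke instance: open a point-to-point tun tunnel toward the hub via the endpoint's DNS name.
sudo ssh -o Tunnel=point-to-point -w 0:0 -f -N ec2-user@vpce-example.amazonaws.com

# Address the local tun interface and route the remote segment's CIDR through the tunnel.
sudo ip addr add 192.168.255.1/30 dev tun0
sudo ip link set tun0 up
sudo ip route add 10.1.0.0/16 via 192.168.255.2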

The tradeoff is a higher technical complexity, which we aim to solve in this guide.

1. Network Architecture

Our network is divided into the following segments:

For Production and possibly QA, we further divide into:

Each of these segments is further split into:

[Diagram: Network Architecture]

2. AWS Setup

2.1. Multi-Account Setup

First, we set up AWS accounts using a multi-account strategy. This approach provides:

Refer to AWS best practices for multi-account management:

AWS enforces limits on accounts and sub-accounts, so consider common separation strategies:

For this project, we create an account per environment, resulting in:

2.2. Root Account Setup

Enable Multi-Factor Authentication (MFA) for all users:

After securing the root account with MFA, proceed with creating sub-accounts:
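If you prefer to script this step, sub-account creation can be done with the AWS Organizations CLI. A sketch, with a placeholder email address and account name:

aws organizations create-account --email aws-development@example.com --account-name "development"

# Creation is asynchronous; poll the request status.
aws organizations list-create-account-status --states IN_PROGRESS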

Once sub-accounts are created, configure IAM Identity Center for user access management:

Refer to "One place to assign permissions to multiple AWS accounts" for centralized permission management.

2.3. Account Structure

We'll create multiple AWS accounts and VPCs:

[Diagram: Account Structure]

Ensure that your primary user account has access to all sub-accounts.

We should have the following accounts:

The test accounts will be used to test changes to the network before deploying into the production network.

2.3.1. IAM Identity Center

Set up IAM Identity Center in the same region where your primary production VPCs will be. Create your user and set up MFA. Create a group named "administrators" and add your user to it.

In "IAM Identity Center", on the left under "Multi-account permissions", select "Permission sets". Click "create permission set" in the top right. Use a predefined permissions set "AdministratorAccess". Name the permission set "administrator_permissions" and create the permission set.

In "IAM Identity Center" on the left is "Multi-account permissions" - select "AWS accounts" under that. Select the accounts and then click "Assign users or groups" button in the top right. Select the "administrators" group and click next. Select the "administrator_permissions" permission set and click next. Submit the request.

Once done, you can log into the AWS portal and you should see the accounts listed there.

2.3.2. Automation Accounts

Log into each account and create automation accounts for use in CI/CD pipelines.

Log into the AWS account. Go to "IAM" and then "Users" on the left. Create a user named "automation_user". On the "Set permissions" step, attach the "AdministratorAccess" policy directly. Create the user. Go back to "IAM", then "Users" on the left, and select "automation_user" to edit it. Go to the "Security credentials" tab and, under "Access keys", click the "Create access key" button. The use case and description matter less than the key itself: the key grants full administrator access, so protect it.
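The same steps can be scripted with the AWS CLI while authenticated to the account in question:

aws iam create-user --user-name automation_user
aws iam attach-user-policy --user-name automation_user --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Prints the access key ID and secret access key; they are shown only once, so store them securely.
aws iam create-access-key --user-name automation_user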

We'll need to configure two "aws.credentials" files using the following template:

[network_automation_user]
aws_access_key_id=********************
aws_secret_access_key=****************************************

[development_automation_user]
aws_access_key_id=********************
aws_secret_access_key=****************************************

[qa_automation_user]
aws_access_key_id=********************
aws_secret_access_key=****************************************

[production_automation_user]
aws_access_key_id=********************
aws_secret_access_key=****************************************

[hub_automation_user]
aws_access_key_id=********************
aws_secret_access_key=****************************************
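For reference, the AWS CLI and SDKs can be pointed at one of these files and a specific profile through environment variables (the file path below is a placeholder):

export AWS_SHARED_CREDENTIALS_FILE="$HOME/.aws/aws.credentials"
export AWS_PROFILE=network_automation_user

# Should report the network account's ID if the profile is set up correctly.
aws sts get-caller-identity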

You'll want to update the environment scripts to point to those files. Do not include those files in the repository. The environment files are:

3. Configurations

Configuring the AWS Network project will require the modification of:

3.1. Root DNS Setup

The network account is used to manage the root level DNS for your domain name. You'll need to configure the "root zone" in the "network" account and set the "dns_root" variable to it:

Make sure to set both the "dns_root" and "dns_root_zone_id".

Also make sure to set the IP address of the NAT. This has to be done after the initial deployment, once the NAT's IP address is known. Update the vars file with the NAT's address and apply the Terraform a second time.
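As a hedged example (the domain name is a placeholder, and the vars file name depends on this repository's layout), the zone ID can be looked up with the CLI before re-applying:

# Find the hosted zone ID to use for "dns_root_zone_id".
aws route53 list-hosted-zones-by-name --dns-name example.com --query "HostedZones[0].Id" --output text

# After updating the vars file with the NAT's IP address, apply a second time.
terraform apply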

3.2. Gitea Setup

You'll need to set up the Gitea environment variables:

The file should look like:

export USER="**********************"
export PASS="**********"

Make sure these are the Gitea administrator's username and password. You can now run .github/scripts/gitea-setup.sh, which will set up the secrets needed in Gitea.
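For example, assuming the exports above live in a local, untracked file (the file name here is illustrative):

source ./gitea_env.sh
bash .github/scripts/gitea-setup.sh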

At this point, you should be able to run the pipeline in Gitea. For the initial deployment, before Gitea is running, we'll use the command line.

4. Deployment

Change to the root directory of the repository (there should be a ".git" folder there) and run the following commands:

source .github/scripts/env.sh
source .github/scripts/env_test.sh
bash $GITHUB_WORKSPACE/.github/workflows/build.sh
bash $GITHUB_WORKSPACE/.github/workflows/deploy_secrets.sh
bash $GITHUB_WORKSPACE/.github/workflows/deploy_tf.sh

This will boot up a 'test' network using the test accounts. Once this is done, you should be able to remote into the individual machines and 'ping' around.

4.1. Testing

Edit the terraform/test-vms.tf file and uncomment all the machines, then redeploy so that Terraform creates the VMs. After the VMs are created, use the following commands to configure SSH:

cd .github/scripts
bash ssh_config.sh
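Once the SSH configuration is in place, a quick connectivity check might look like this (the host alias and target IP are placeholders, not names defined by this repository):

ssh test-dev-vm
ping -c 3 10.2.0.10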