Integrated Testing to Scalable Cloud Deployment: A Complete Infrastructure Template

From Local Integration Tests with Docker Compose to Scalable Cloud Apps with AWS Fargate and Terraform

Ravi Yasakeerthi
4 min read · Jun 21, 2024

Think about testing multiple microservices with their dependencies right in your local environment, or running complex test cases that involve several interconnected services in your CI/CD pipeline. And then deploying those same services to the cloud with minimal infrastructure management and operational overhead, especially when you don’t want to manage a Kubernetes cluster. Recently, I received several inquiries on this very topic and decided it was time to put together a comprehensive demo project that addresses these challenges.

In this blog post, we’ll explore how to set up a local environment for testing integrated services and seamlessly deploy them to the cloud. The goal is to provide a quick and efficient way to handle development, testing, and deployment with minimal hassle. We’ll use Docker Compose for local testing and Terraform for cloud provisioning, ensuring a smooth transition from local development to production on AWS Fargate.

Let’s dive in and see how you can streamline your development workflow, making it easier to manage and deploy your microservices both locally and in the cloud.

Project Overview

In this project, we have a high-level architecture that consists of the following components:

  • Frontend Service: Built with Vue.js, the frontend service is responsible for the user interface. It makes API calls to the backend service to fetch data.
  • Backend Service: Developed with Python Flask, the backend service handles API requests from the frontend. It processes these requests and interacts with the database to retrieve and return data.
  • Database: A MySQL database that stores the list of books. The backend service queries this database to fetch the required data and serves it to the frontend.
  • Tests: We have implemented tests to ensure the reliability and correctness of our application (minimal sketches of both follow this list):
    Backend Tests: Using Python’s requests library, we verify the functionality of the backend API endpoints, ensuring they process requests and interact with the database correctly.
    Frontend Tests: Using Playwright, we simulate user actions to validate that the frontend correctly displays the data fetched from the backend service.
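To give a flavor of these tests, here are two minimal sketches. The /books endpoint, the service hostnames, and the assertions are assumptions for illustration; the actual suites live in the tests directory of the repo.

import requests

# Hypothetical service URL: "backend" resolves on the Compose network
BACKEND_URL = "http://backend:5001"

def test_books_endpoint_returns_data():
    # The endpoint should answer with a non-empty JSON list of books
    response = requests.get(f"{BACKEND_URL}/books")
    assert response.status_code == 200
    books = response.json()
    assert isinstance(books, list)
    assert len(books) > 0

And the frontend check, using Playwright’s sync API:

from playwright.sync_api import sync_playwright

def test_frontend_displays_books():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # "frontend" resolves on the Compose network; the selector is illustrative
        page.goto("http://frontend:8080")
        page.wait_for_selector("li")  # assumes books render as list items
        browser.close()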

The frontend service communicates with the backend service via HTTP requests. When the frontend needs to display a list of books, it sends a request to the backend API. The backend then queries the MySQL database, retrieves the list of books, and sends this data back to the frontend, which displays it to the user.
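To make that flow concrete, here is a minimal sketch of what such a backend endpoint could look like. The /books route, the books table, and pymysql as the client are assumptions for illustration, not necessarily what the repository uses; the connection settings mirror the environment variables from the Compose file below.

import os

import pymysql
from flask import Flask, jsonify

app = Flask(__name__)

def get_connection():
    # Connection settings mirror the environment variables in the compose file
    return pymysql.connect(
        host=os.environ.get("MYSQL_DATABASE_HOST", "mysql"),
        port=int(os.environ.get("MYSQL_DATABASE_PORT", "3306")),
        user=os.environ["MYSQL_DATABASE_USER"],
        password=os.environ["MYSQL_DATABASE_PASSWORD"],
        database=os.environ.get("MYSQL_DATABASE_DB", "bookdb"),
    )

@app.route("/books")
def list_books():
    # Query the books table and return the rows as JSON
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, title, author FROM books")
            rows = cur.fetchall()
    finally:
        conn.close()
    return jsonify([{"id": r[0], "title": r[1], "author": r[2]} for r in rows])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)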

Cloud Deployment Architecture

In the cloud, the same containers run as Fargate tasks behind load balancers inside a VPC, while the local MySQL container is replaced by an RDS instance in a private subnet.

Here you can find the source code for the project.

Local Setup

Setting up the local environment was the first step. Using Docker, we containerized our frontend, backend, and MySQL database to ensure consistency across development and production environments.

Why Use Docker Compose?

Docker Compose is an excellent tool for defining and running multi-container applications. It allows us to quickly spin up a local integrated environment where all the services, including frontend, backend, and database, can run together seamlessly. This setup closely mirrors the production environment, making it easier to catch integration issues early in the development process. By using Docker Compose, we can ensure that our local environment is consistent and easily replicable, simplifying testing and debugging.

Docker Compose File:

version: '3'
services:
  frontend:
    build: ./frontend
    ports:
      - "8080:8080"
    environment:
      - VUE_APP_BACKEND_URL=http://backend:5001
    depends_on:
      - backend
      - mysql

  backend:
    build: ./backend
    ports:
      - "5001:5001"
    environment:
      MYSQL_DATABASE_HOST: "mysql"
      MYSQL_DATABASE_PORT: "3306"
      MYSQL_DATABASE_USER: "tf_rds_user1"
      MYSQL_DATABASE_PASSWORD: "password" # Set your password here
      MYSQL_DATABASE_DB: "bookdb"
      USE_LOCAL_DB: "true"
      AWS_REGION: "us-east-1"
      DB_MASTER_USER: "root"
      DB_MASTER_PASSWORD: "rootpassword"
    depends_on:
      - mysql

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: "rootpassword"
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

  test:
    build:
      context: ./tests
      dockerfile: Dockerfile.test
    depends_on:
      - frontend
      - backend
      - mysql

volumes:
  mysql-data:

Running Locally

docker-compose up --build

Running docker-compose up with the --build flag rebuilds your service images and brings up the local environment. You can then access the frontend service at http://localhost:8080.
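If you want the test container to drive the whole run, for example in a CI pipeline, Docker Compose can propagate the test service’s exit code. This assumes the test service exits once the suite finishes:

docker-compose up --build --exit-code-from test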

Database Initialization and Data Seeding

Local Database Initialization

For the local environment, we use a SQL script to initialize the database and seed it with some initial data. This script runs automatically when the MySQL container starts, because the Compose file mounts it into the container’s /docker-entrypoint-initdb.d directory. The init.sql file contains the necessary SQL commands to create the database schema and insert some sample data.
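For reference, a minimal init.sql along these lines would do the job; the books schema and sample rows here are illustrative, not the exact contents of the repo’s script:

-- Create the database and schema (illustrative, not the repo's exact script)
CREATE DATABASE IF NOT EXISTS bookdb;
USE bookdb;

CREATE TABLE IF NOT EXISTS books (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    author VARCHAR(255) NOT NULL
);

-- Seed a couple of sample rows for local testing
INSERT INTO books (title, author) VALUES
    ('The Pragmatic Programmer', 'Andrew Hunt'),
    ('Clean Code', 'Robert C. Martin');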

Cloud Database Initialization

For the cloud environment, we use an RDS database that resides in a private subnet without public access. Therefore, we cannot directly run SQL scripts on the RDS instance from our local machine. Instead, we use an AWS Lambda function to initialize the database and seed the data.
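A minimal sketch of such a handler, assuming pymysql is bundled with the function’s deployment package and the connection details arrive through environment variables (names reused from the Compose file for illustration):

import os

import pymysql

def lambda_handler(event, context):
    # Connect as the master user; the RDS endpoint comes from Terraform via env vars
    conn = pymysql.connect(
        host=os.environ["MYSQL_DATABASE_HOST"],
        user=os.environ["DB_MASTER_USER"],
        password=os.environ["DB_MASTER_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            # Create the schema and seed a sample row (illustrative data)
            cur.execute("CREATE DATABASE IF NOT EXISTS bookdb")
            cur.execute("USE bookdb")
            cur.execute(
                """CREATE TABLE IF NOT EXISTS books (
                       id INT AUTO_INCREMENT PRIMARY KEY,
                       title VARCHAR(255) NOT NULL,
                       author VARCHAR(255) NOT NULL
                   )"""
            )
            cur.execute(
                "INSERT INTO books (title, author) VALUES (%s, %s)",
                ("The Pragmatic Programmer", "Andrew Hunt"),
            )
        conn.commit()
    finally:
        conn.close()
    return {"status": "bookdb initialized"}

Because the RDS instance sits in a private subnet, the Lambda itself must be attached to the VPC (with subnets and a security group in its vpc_config) so it can reach the database.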

Build Your Cloud Environment

The Terraform script included in the repository is configured to set up all the necessary cloud resources. This includes the VPC networking components, security groups, load balancers, and the Fargate task definitions and services. Simply run terraform apply; it will provision all the required components and output the frontend load balancer endpoint, which you can open in your browser.
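As a taste of what the script wires together, here is a sketch of a Fargate service definition for the backend; the referenced resource names (aws_ecs_cluster.main, aws_lb_target_group.backend, and so on) are placeholders for whatever the repo actually defines:

# Runs the backend task on Fargate behind the load balancer
resource "aws_ecs_service" "backend" {
  name            = "backend"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.backend.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = aws_subnet.private[*].id
    security_groups = [aws_security_group.backend.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.backend.arn
    container_name   = "backend"
    container_port   = 5001
  }
}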
