🌐 Hosting Full-Stack Application on AWS 🚀

⚛️ React Frontend on S3, 🖥️ Backend on EC2, and 🗄️ Database on RDS PostgreSQL


Part 1: AWS Infrastructure Setup for a Full-Stack Application

Introduction

In this guide, we'll cover how to set up your AWS infrastructure step by step, starting with creating an AWS account, configuring IAM users, and setting up the foundation to deploy your application securely.


Step 1: Create an AWS Account

If you don’t already have an AWS account, here’s how to create one:

  1. Visit AWS Sign Up.

  2. Fill in your details:

    • Email: Use a valid, regularly checked email.

    • Password: Choose a strong password.

    • Account name: Enter a name like MyApp AWS Account.

  3. Add billing information:

    • AWS requires a credit or debit card for billing.

    • Initially, many services are free under the Free Tier.

  4. Verify your identity using your phone number.

  5. Choose a support plan:

    • For now, stick with the free Basic Support Plan.

Step 2: Secure Your Root Account

After account creation, your root account has full control of AWS. Protect it immediately:

  1. Enable Multi-Factor Authentication (MFA):

    • Go to My Security Credentials → Activate MFA.

    • Use an app like Google Authenticator to scan the QR code.

  2. Avoid using the root account for day-to-day tasks. Instead, create an IAM admin user.


Step 3: Set Up IAM Users and Groups

AWS IAM (Identity and Access Management) allows you to manage user permissions securely.

Step 3.1: Create an IAM Admin User

  1. Go to IAM Dashboard → Users → Add User.

  2. User details:

    • Name: admin-user.

    • Access type: Select Programmatic Access (for CLI/SDK/API) and AWS Management Console Access.

  3. Attach policies:

    • Select AdministratorAccess (full control).
  4. Complete setup:

    • Download the access key and secret key.

Step 3.2: Create IAM Groups

Groups simplify managing permissions for multiple users.

  1. Go to IAM Dashboard → Groups → Create Group.

    • Group name: developers-group.
  2. Attach policies based on roles:

    • Developers: AmazonEC2FullAccess, AmazonS3FullAccess.

    • Database Admins: AmazonRDSFullAccess.


Step 3.3: Assign Users to Groups

  1. Add the admin-user to the developers-group for now.

  2. This ensures you can securely start development.


Step 4: Configure the AWS CLI

AWS CLI (Command Line Interface) allows you to interact with AWS services.

Step 4.1: Install AWS CLI

  1. Download and install the AWS CLI for your OS.

  2. Verify installation:

     aws --version
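
     If the command isn't found, a minimal install on 64-bit Linux looks like this (this uses AWS's published installer; use the macOS or Windows installers from the AWS docs if that's your OS):

      curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
      unzip awscliv2.zip
      sudo ./aws/install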
    

Step 4.2: Configure AWS CLI

  1. Run the following command:

     aws configure
    
  2. Provide:

    • Access key: From your IAM user.

    • Secret key: Downloaded earlier.

    • Region: E.g., ap-south-1 (Mumbai).

    • Output format: json.

  3. Test the setup:

     aws s3 ls
    

    If successful, this lists your S3 buckets (if any exist).


Step 5: Set Up Security Groups

Security Groups (SGs) act as virtual firewalls to control inbound and outbound traffic for AWS resources.

Step 5.1: Default Security Group

Every VPC comes with a default SG that allows all outbound traffic but only accepts inbound traffic from resources in the same group. Create custom SGs so each resource gets exactly the access it needs.

Step 5.2: Create a Custom Security Group

  1. Go to EC2 Dashboard → Security Groups → Create Security Group.

  2. Configure:

    • Name: MyApp-SG.

    • Rules:

      • SSH (22): Allow from your IP address only.

      • Custom TCP (8000): Allow for backend communication (change later as needed).

      • PostgreSQL (5432): Allow only from specific IPs (e.g., your EC2 instance).

      • HTTP (80): Allow all traffic for frontend access.
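
If you prefer the CLI, the same group and rules can be sketched like this (this assumes the default VPC; the CIDR values are placeholders, so replace <your-ip> with your actual address):

aws ec2 create-security-group --group-name MyApp-SG --description "SG for MyApp frontend and backend"
aws ec2 authorize-security-group-ingress --group-name MyApp-SG --protocol tcp --port 22 --cidr <your-ip>/32
aws ec2 authorize-security-group-ingress --group-name MyApp-SG --protocol tcp --port 8000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name MyApp-SG --protocol tcp --port 80 --cidr 0.0.0.0/0
# allow PostgreSQL only from members of the same group (e.g., your EC2 instance)
aws ec2 authorize-security-group-ingress --group-name MyApp-SG --protocol tcp --port 5432 --source-group MyApp-SG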


Step 6: Set Up RDS (Relational Database Service)

RDS allows you to use managed databases like PostgreSQL without worrying about infrastructure.

Step 6.1: Create a PostgreSQL Database

  1. Go to RDS Dashboard → Create Database.

  2. Configuration:

    • Engine: PostgreSQL.

    • Instance size: db.t3.micro (free tier eligible).

    • Public access: Enable (we’ll restrict later with SGs).

    • Credentials:

      • Username: postgres.

      • Password: yourpassword.

  3. Wait for the database to be ready and note the endpoint.
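
If you prefer the CLI, a roughly equivalent instance can be created with a command like the following (the identifier myapp-db is a placeholder, and the password should be replaced with your own):

aws rds create-db-instance \
  --db-instance-identifier myapp-db \
  --engine postgres \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username postgres \
  --master-user-password yourpassword \
  --publicly-accessible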


Step 7: Set Up S3 Bucket for Static Files

S3 will be used later for hosting the frontend. For now, let’s create a bucket.

Step 7.1: Create an S3 Bucket

  1. Go to S3 Dashboard → Create Bucket.

  2. Configuration:

    • Name: myapp-static-files.

    • Region: Match with your resources (e.g., ap-south-1).

    • Block public access: Leave it enabled for now (we’ll configure it later).
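
If you prefer the CLI, the bucket itself can be created like this (bucket names are globally unique, so myapp-static-files may already be taken; newly created buckets block public access by default):

aws s3 mb s3://myapp-static-files --region ap-south-1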



Part 2: Deploying the Backend on AWS

Introduction

In this part, we’ll focus on deploying your backend application to an AWS EC2 instance, configuring it to connect to the RDS PostgreSQL database, and ensuring everything runs smoothly.


Step 1: Launch an EC2 Instance

Step 1.1: Select Instance

  1. Go to the EC2 Dashboard → Launch Instance.

  2. Configure:

    • Name: backend-server.

    • AMI: Choose Amazon Linux 2 or Ubuntu Server 22.04 LTS (the commands later in this guide assume Ubuntu).

    • Instance type: t2.micro (free tier eligible).

Step 1.2: Key Pair

  1. Select or create a key pair for secure SSH access.

  2. Download the private key (.pem file).

Step 1.3: Configure Network

  1. Attach the Security Group you created earlier (MyApp-SG).

  2. Ensure it allows:

    • SSH (22): Your IP only.

    • Backend port (e.g., 8000): Allow from 0.0.0.0/0 temporarily for testing.

Step 1.4: Storage

Keep the default 8GB unless your backend requires more.

Step 1.5: Launch

  1. Launch the instance and wait for it to start.

  2. Note the Public IP Address.
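
If you tagged the instance as above, its public IP can also be fetched from the CLI (this assumes the Name tag is exactly backend-server):

aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=backend-server" \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text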


Step 2: SSH Into the Instance

  1. Open your terminal, navigate to the folder containing the key pair file, and restrict its permissions so SSH will accept it: chmod 400 your-key.pem.

  2. Connect:

     ssh -i "your-key.pem" ec2-user@<EC2-Public-IP>
    

    If using Ubuntu:

     ssh -i "your-key.pem" ubuntu@<EC2-Public-IP>
    

Step 3: Install Dependencies

Your backend likely requires Node.js, Python, or another runtime. This guide assumes a Node.js backend running on an Ubuntu instance, which is why the commands below use apt.

Step 3.1: Update Packages

Run:

sudo apt update && sudo apt upgrade -y

Step 3.2: Install Node.js

  1. Install Node.js:

     curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
     sudo apt install -y nodejs
    
  2. Verify:

     node -v
     npm -v
    

Step 3.3: Install Git

sudo apt install -y git

Step 4: Clone the Backend Repository

  1. Navigate to a directory (e.g., /home/ubuntu):

     cd ~
    
  2. Clone the repository:

     git clone https://github.com/<your-repo-url>.git
    
  3. Navigate into the project:

     cd <your-project-folder>
    

Step 5: Set Up Environment Variables

  1. Create a .env file for environment variables:

     nano .env
    
  2. Add variables, e.g.:

     DB_HOST=<RDS-endpoint>
     DB_USER=postgres
     DB_PASSWORD=yourpassword
     DB_PORT=5432
     DB_NAME=vea
     APP_PORT=8000
    
  3. Save and exit (Ctrl + O, Enter, Ctrl + X).


Step 6: Install Dependencies and Start the Server

  1. Install dependencies:

     npm install
    
  2. Start the backend:

     npm run dev
    
  3. If successful, your backend should now be running on http://<EC2-Public-IP>:8000.


Step 7: Test Backend with RDS

  1. Access the backend endpoint in your browser:

     http://<EC2-Public-IP>:8000
    
  2. Check the logs to confirm database connectivity and debug any errors. For a quick reachability check from your own machine, see the curl example below.
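
A minimal check, substituting your instance's public IP (any route your backend exposes will do):

curl -i http://<EC2-Public-IP>:8000/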


Step 8: Use PM2 for Production

To keep the backend running even after the terminal is closed, use PM2:

  1. Install PM2 globally:

     sudo npm install -g pm2
    
  2. Start the backend using PM2:

     pm2 start npm --name "backend" -- start
    
  3. Configure PM2 to restart the app automatically after a reboot. Run pm2 startup, execute the command it prints, then save the current process list:

     pm2 startup
     pm2 save
    

Step 9: Secure Your Backend

  1. Restrict port 8000 in the Security Group:

    • Allow access only from a load balancer or reverse proxy placed in front of the backend (with a static S3/CloudFront frontend, API calls arrive from users' browsers, not from a fixed frontend IP).
  2. Update .env for stricter database access:

    • Add DB_SSL=true (or whichever flag your backend reads) and configure the connection to use SSL against RDS; a sketch for downloading the RDS CA bundle follows this list.
  3. Enable firewall rules (optional):

     sudo ufw allow OpenSSH   # keep SSH reachable before enabling the firewall
     sudo ufw allow 8000
     sudo ufw enable
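
For the SSL connection mentioned in step 2, AWS publishes a combined RDS certificate bundle that most PostgreSQL clients can verify against; how your backend consumes the file depends on its database library:

curl -o rds-global-bundle.pem https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem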
    

Step 10: Monitor Logs

  1. Use PM2 to check logs:

     pm2 logs backend
    
  2. Or view logs directly:

     tail -f logs/backend.log
    


Part 3: Deploying the Frontend on AWS

Introduction

We will:

  1. Build the React frontend.

  2. Host it on an S3 bucket.

  3. Use CloudFront for faster delivery and HTTPS.

  4. Connect it to the backend running on EC2.


Step 1: Build the React Application

  1. Navigate to the React app folder on your local machine:

     cd <your-react-project-folder>
    
  2. Install dependencies:

     npm install
    
  3. Update the backend API endpoint in your frontend code:

    • Locate the API URL in your .env or configuration files.

    • Replace it with the EC2 backend URL:

        REACT_APP_API_URL=http://<EC2-Public-IP>:8000
      
  4. Build the application for production:

     npm run build
    
    • This will generate a build/ folder with optimized static files.

Step 2: Create an S3 Bucket for Frontend Hosting

  1. Go to the S3 Dashboard → Create Bucket:

    • Bucket name: my-frontend-app.

    • Region: Same as other resources (e.g., ap-south-1).

    • Object Ownership/ACLs: keep the defaults; public read access will be granted with a bucket policy in a later step.

  2. Configure Bucket Settings:

    • Uncheck Block all public access.

    • Acknowledge warnings about public access.

  3. Enable Static Website Hosting:

    • Go to Properties → Static website hosting → Enable.

    • Index document: index.html.

    • Error document: index.html (for React single-page apps).

  4. Upload the build files:

    • Go to the bucket → Upload.

    • Drag and drop all files from the build/ folder (or sync them from the CLI, as shown after this list).

  5. Make files publicly accessible:

    • Go to Permissions → Bucket policy.

    • Add the following policy:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "PublicReadGetObject",
              "Effect": "Allow",
              "Principal": "*",
              "Action": "s3:GetObject",
              "Resource": "arn:aws:s3:::my-frontend-app/*"
            }
          ]
        }
      
  6. Test the hosted app by opening the S3 website endpoint (shown under Properties → Static website hosting) in your browser.
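
As an alternative to the drag-and-drop upload in step 4, the build output can be synced from the CLI (this assumes the bucket name used above):

aws s3 sync build/ s3://my-frontend-app --delete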


Step 3: Set Up CloudFront for CDN and HTTPS

  1. Go to the CloudFront Dashboard → Create Distribution:

    • Origin Domain Name: Enter the S3 bucket website endpoint.

    • Viewer Protocol Policy: Redirect HTTP to HTTPS.

    • Cache Policy: Use the default policy for now.

    • Alternate Domain Names (CNAMEs): Add your custom domain if you’re using one (optional).

  2. Configure Behavior:

    • Set Default Root Object: index.html.
  3. Deploy the distribution:

    • Note the CloudFront Distribution URL (e.g., https://<unique-id>.cloudfront.net).
  4. Test the application:

    • Open the CloudFront URL in your browser.

    • Ensure the app loads correctly with HTTPS.


Step 4: Connect Frontend with Backend

  1. Update the React app API endpoint to use the backend’s public domain or IP. Keep in mind that a frontend served over HTTPS (via CloudFront) cannot call a plain-HTTP API, since browsers block it as mixed content, so the backend must also be reachable over HTTPS (for example behind a load balancer or reverse proxy that terminates TLS):

     REACT_APP_API_URL=https://<your-backend-domain-or-ip>:8000
    
  2. Rebuild the React app:

     npm run build
    
  3. Re-upload the build files to S3 and invalidate the CloudFront cache so users receive the new build (see the sketch after this list).

  4. Test end-to-end functionality:

    • Open the frontend URL (CloudFront or S3).

    • Verify that API requests successfully connect to the backend.
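
A rough sketch of that redeploy loop from the CLI, assuming the bucket name used earlier and your distribution ID from the CloudFront console:

npm run build
aws s3 sync build/ s3://my-frontend-app --delete
aws cloudfront create-invalidation --distribution-id <your-distribution-id> --paths "/*"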


Step 5: Optional - Add a Custom Domain with Route 53

  1. Register a domain in Route 53 (e.g., myapp.com).

  2. Create an alias A record pointing to the CloudFront distribution (the domain must also be listed as an alternate domain name on the distribution, with a matching ACM certificate).

  3. Test the app using your custom domain:

     https://myapp.com
    

Step 6: Enable Monitoring and Logs

  1. Enable S3 Access Logs for bucket requests.

  2. Enable CloudFront Logging for detailed CDN usage.



Setting Up RDS for the Backend

We’ll configure AWS RDS to host the PostgreSQL database, link it with the backend, and ensure secure access.


Step 1: Create an RDS Instance

  1. Go to RDS Dashboard → Create Database.

  2. Choose Database Creation Method:

    • Select Standard Create.
  3. Choose Engine:

    • Select PostgreSQL.

    • Version: Latest stable version (e.g., 16.x).

  4. Configure Database Settings:

    • DB Instance Identifier: vea-database.

    • Master Username: postgres.

    • Master Password: Create a strong password and save it securely.

  5. Instance Size:

    • For development/testing, select:

      • DB Instance Class: db.t3.micro (free tier eligible).

      • Storage: General Purpose SSD (20 GB).

  6. Network Settings:

    • VPC: Choose the default VPC.

    • Subnet Group: Select the default subnet group.

    • Public Access: Enable if you want to access it publicly (for now).

    • Security Group: Select or create a security group with the following:

      • Inbound rule: Allow PostgreSQL (port 5432) from your IP (or backend server IP).
    • Availability Zone: No preference.

  7. Database Authentication:

    • Use the password authentication method.
  8. Create Database:

    • Click Create Database and wait for it to be available.

Step 2: Configure RDS for Backend Connection

  1. Go to RDS Instances → Select your instance → Connectivity & Security.

  2. Copy the Endpoint (e.g., vea-database.c1o6wqcicobo.ap-south-1.rds.amazonaws.com).


Step 3: Set Up the Database Schema

  1. Access RDS using psql:

    • Install the psql client on your machine if not already installed.

    • Connect to the database:

        psql -h vea-database.c1o6wqcicobo.ap-south-1.rds.amazonaws.com -U postgres -d postgres
      
  2. Create a New Database:

     CREATE DATABASE vea;
    
  3. Switch to the vea Database:

     \c vea
    
  4. Create Tables: Define the schema used by your backend. Example:

     CREATE TABLE users (
         id SERIAL PRIMARY KEY,
         name VARCHAR(100),
         email VARCHAR(100) UNIQUE NOT NULL,
         password VARCHAR(100) NOT NULL,
         created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
     );
    

Step 4: Update Backend Configuration

  1. Update the backend .env file:

     DB_HOST=vea-database.c1o6wqcicobo.ap-south-1.rds.amazonaws.com
     DB_PORT=5432
     DB_USER=postgres
     DB_PASSWORD=<your-postgres-password>
     DB_NAME=vea
    
  2. Test the backend connection:

    • Start the backend server:

        npm run dev
      
    • Check logs to confirm the connection:

        Connected to PostgreSQL on port 5432
      

Step 5: Secure RDS

  1. Limit Security Group Access:

    • Restrict the inbound rule to allow only the backend server’s IP or your local IP.
  2. Enable Backups:

    • Go to RDS Dashboard → Select your instance → Modify.

    • Enable automated backups and set the retention period (this can also be scripted from the CLI, as sketched after this list).

  3. Enable Monitoring:

    • Enable CloudWatch for performance metrics like CPU, memory, and connections.
  4. Encrypt Data:

    • Ensure your RDS instance uses encryption for data at rest.
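
If you'd rather script the backup setting, something like the following should work (the identifier matches the one used above; adjust the retention period to your needs):

aws rds modify-db-instance \
  --db-instance-identifier vea-database \
  --backup-retention-period 7 \
  --apply-immediately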

Step 6: Test End-to-End Functionality

  1. Use Postman, curl, or the frontend to make API requests (a sample request follows this list).

  2. Verify that data is being stored and retrieved from the RDS database.
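
For reference, a request like the one below exercises the full path from client to EC2 to RDS; the /api/users route and payload are only illustrative, so substitute an endpoint your backend actually defines:

curl -X POST http://<EC2-Public-IP>:8000/api/users \
  -H "Content-Type: application/json" \
  -d '{"name":"Test User","email":"test@example.com","password":"secret"}'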



Troubleshooting Errors Encountered During the Project

1. IAM Permissions Issue

When it Happened: During the setup of the backend to access RDS or S3 buckets.
Error Message:

AccessDenied: User is not authorized to perform the action

Cause: The IAM role did not have sufficient permissions for the RDS or S3 service.
Resolution:

  • Modified the IAM role to include the following policies:

    • AmazonRDSFullAccess

    • AmazonS3FullAccess

  • Attached the IAM role to the EC2 instance running the backend.

How I Resolved It:
In the IAM Setup step, I ensured that the role assigned to the EC2 instance had the required permissions to interact with both RDS and S3 services. I modified the role's policy to include the necessary access levels and attached it to the EC2 instance.

Why This Is Important:
IAM roles control access to AWS services. Without the correct permissions, your backend may not be able to interact with essential services like RDS or S3. When facing similar issues, always check if the right permissions are granted to the IAM role associated with the service or resource.


2. Database Connection Error

When it Happened: While trying to connect the backend to RDS.
Error Message:

error: permission denied for table users

Cause: The database user (ubuntu) did not have sufficient privileges to access the users table.
Resolution:

  • Connected to the database as the postgres user.

  • Granted privileges to the ubuntu user:

      GRANT ALL PRIVILEGES ON TABLE users TO ubuntu;
    

How I Resolved It:
During the RDS Integration step, I encountered this issue due to missing database privileges. I logged into the database as the postgres superuser and granted the necessary permissions to the ubuntu user to allow access to the users table.

Why This Is Important:
Database users need explicit permissions to interact with tables. Always ensure the database user has the correct privileges for the tables or operations they need to perform.


3. Backend Environment Variables Missing

When it Happened: During backend deployment.
Error Message:

DB_HOST is not defined

Cause: The .env file was either missing or improperly configured.
Resolution:

  • Created a .env file in the backend directory with the following content:

      DB_HOST=vea-database.c1o6wqcicobo.ap-south-1.rds.amazonaws.com
      DB_PORT=5432
      DB_USER=ubuntu
      DB_PASSWORD=your_password
      DB_NAME=vea
    
  • Restarted the backend server to apply the changes.

How I Resolved It:
In the Backend Configuration step, I identified that the .env file was not set up properly. After creating and configuring the file with the correct database connection details, I restarted the backend to apply the changes.

Tip for Environment Variables:
To ensure security, always store sensitive environment variables like database credentials in .env files or use AWS Secrets Manager to keep them safe. Avoid hardcoding these values directly in your codebase.


4. RDS Public Accessibility

When it Happened: During initial attempts to access the database.
Error Message:

Connection timed out

Cause: The RDS instance was not publicly accessible, and no security group allowed incoming traffic on port 5432.
Resolution:

  • Enabled public accessibility for the RDS instance.

  • Updated the security group to allow inbound traffic from my IP address on port 5432.

How I Resolved It:
In the RDS Setup section, I realized that the RDS instance wasn't publicly accessible. To resolve this, I enabled public access for the RDS instance and updated the security group rules to allow traffic on port 5432 from my IP address.

Security Note:
While enabling public accessibility can be helpful during development, it's essential to restrict access to specific IPs in production environments to enhance security.


5. Frontend Not Loading

When it Happened: While hosting the React frontend on S3 and accessing it via CloudFront.
Error Message:

403 Forbidden

Cause: The S3 bucket’s policy didn’t allow public read access for the files.
Resolution:

  • Updated the S3 bucket policy to allow public read access:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": "*",
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::your-bucket-name/*"
              }
          ]
      }
    
  • Enabled static website hosting on the bucket.

How I Resolved It:
In the Frontend Hosting step, I faced the 403 Forbidden error due to incorrect S3 bucket permissions. I updated the bucket policy to allow public read access to the files and enabled static website hosting for the bucket.

Additional Insight:
Always ensure your S3 bucket policy allows the necessary actions (e.g., s3:GetObject for read access). If you're hosting a frontend on S3, enabling static website hosting is also crucial for accessing the site via the URL.


Common Errors and Resolutions

  • IAM Permissions Issue: Resolved by attaching the correct policies to the IAM role.

  • Database Connection Error: Fixed by granting proper permissions to the database user.

  • RDS Public Accessibility: Addressed by enabling public access and updating the security group.

  • Frontend 403 Forbidden Error: Resolved by updating the S3 bucket policy to allow public read access.

These are the common errors I encountered while setting up and deploying my project. By systematically troubleshooting each issue and applying the correct resolutions, I was able to successfully complete the setup.