Automating S3 Backups for EC2-Hosted Databases Using Cron Jobs on Ubuntu


Dive into automating backups for your database on AWS EC2 with Ubuntu, ensuring data safety and accessibility. This guide covers setting up EC2, configuring database backups, and leveraging cron jobs to store data securely on Amazon S3. Learn to streamline your backup process, maintaining optimal performance and peace of mind with best practices for cloud storage management.


Before you begin, you will need:

  • An EC2 instance hosting your databases 💻.
  • An S3 bucket for storage 💾.
  • A selection of snacks 🍬.

Begin by setting up the necessary dependencies. You will need to install:

  • Cron
  • AWS CLI

Installing Cron

Execute the following commands to install Cron:

sudo apt update
sudo apt install cron

Use the following command to enable Cron:

sudo systemctl enable cron

Installing AWS CLI

To install the AWS CLI, which enables connection to AWS services (in this case, S3), start by downloading the official AWS CLI v2 bundle for Linux x86_64:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

After the download completes, extract the contents of the zip file with the following command:

unzip awscliv2.zip
Wait for the extraction process to complete, then install the CLI with the following command:

sudo ./aws/install

After installation, run the following command to configure AWS:

sudo aws configure

This command will prompt you to enter your AWS Access Key, Secret Key, default region, and output format. Fill in the details to complete the configuration process.
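For reference, aws configure stores the values you enter in two plain-text files under the home directory of the user who ran it (here root's, since the command was run with sudo). They look roughly like this, with placeholder values in place of real keys:

```ini
# /root/.aws/credentials (placeholder values)
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# /root/.aws/config
[default]
region = us-east-1
output = json
```

Keep the credentials file readable only by its owner, since it contains your secret key in plain text.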

Now that all the dependencies are installed, we can proceed to write the shell script that will take a backup of the database and upload it to S3.

Create a file on the server:

sudo nano /home/ubuntu/

Add the following script to the file, ensuring that you modify the SQL command according to the database being utilized.


#!/bin/bash

echo "* * * * * Backup started * * * * *"

# Current date in yyyy-mm-dd format
DATE=$(date +%F)

# Database credentials (replace the placeholder values with your own)
DB_USER="your_db_user"
DB_PASS="your_db_password"
DB_NAME="your_db_name"

# S3 bucket (replace with your bucket name)
S3_PATH="s3://your-bucket-name"

# Backup filename
BACKUP_FILENAME="${DB_NAME}_${DATE}.sql"

# Dump database (make sure to change this command based on the database you use)
mysqldump -u ${DB_USER} -p${DB_PASS} ${DB_NAME} > ${BACKUP_FILENAME}

echo "* * * * * Backup ended * * * * *"

echo "* * * * * Export started * * * * *"

# Upload to S3
sudo aws s3 cp ${BACKUP_FILENAME} ${S3_PATH}/${DB_NAME}_${DATE}.sql

echo "* * * * * Export ended * * * * *"

# Remove the local backup file
rm ${BACKUP_FILENAME}

# Cleanup: Keep only the latest 7 backups
sudo aws s3 ls ${S3_PATH}/ | sort | grep ${DB_NAME} | head -n -7 | awk '{print $4}' | while read -r FILENAME; do
    sudo aws s3 rm "${S3_PATH}/${FILENAME}"
done

The script is tailored to back up the database and upload the backup file to S3. After the upload is completed, it ensures efficient space management by deleting older backups stored in S3, retaining only the latest 7 records. If you wish to alter the number of backups kept, you can adjust this setting by changing the -7 to your desired number in the script.
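The retention pipeline is easy to try out locally by substituting sample filenames for the real `aws s3 ls` output. This sketch (with hypothetical backup names) shows that sort | head -n -7 selects everything except the newest 7 entries, which are exactly the files the loop would delete:

```shell
# Nine hypothetical backup names, deliberately out of order.
FILES="mydb_2024-01-03.sql
mydb_2024-01-01.sql
mydb_2024-01-09.sql
mydb_2024-01-04.sql
mydb_2024-01-05.sql
mydb_2024-01-02.sql
mydb_2024-01-06.sql
mydb_2024-01-08.sql
mydb_2024-01-07.sql"

# After sorting, head -n -7 drops the LAST 7 lines,
# leaving only the entries older than the newest 7.
TO_DELETE=$(printf '%s\n' "$FILES" | sort | head -n -7)
echo "$TO_DELETE"
```

With nine backups present, only the two oldest (mydb_2024-01-01.sql and mydb_2024-01-02.sql) are selected for deletion. Note that head -n -7 (a negative count) is a GNU coreutils feature, which Ubuntu provides.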

Grant execution permissions to the file we created.

sudo chmod +x /home/ubuntu/

Once the file has been granted execution permissions, run the file and verify whether a backup has been uploaded to S3.

You can run the file with the following command:

sudo /home/ubuntu/
To set up Cron, use the following command.

sudo crontab -e

This command will ask you to select a text editor. Choose nano as the editor by selecting option 1.

Insert the provided code into the opened file, save it by pressing Ctrl + O (Ctrl + S also works in recent versions of nano), and exit by pressing Ctrl + X.

0 0 * * * /home/ubuntu/ >> /home/ubuntu/cron.log 2>&1

0 0 * * * specifies that it will execute daily at 12 AM
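If midnight does not suit your workload, only the schedule field needs to change. A few common variants (the script path shown matches the crontab entry above):

```
# Every 6 hours
0 */6 * * * /home/ubuntu/ >> /home/ubuntu/cron.log 2>&1

# Weekly, Sunday at midnight
0 0 * * 0 /home/ubuntu/ >> /home/ubuntu/cron.log 2>&1
```

The five fields are minute, hour, day of month, month, and day of week, in that order.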

The path /home/ubuntu/ indicates the location of the script to be run.

The segment >> /home/ubuntu/cron.log 2>&1 directs the output to a file named cron.log, with 2>&1 ensuring that both standard output and error logs are captured.
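The effect of the redirection is easy to verify in isolation. This sketch uses a temporary file in place of cron.log and a pair of echo commands in place of the backup script, and shows both streams landing in the same log:

```shell
LOG=$(mktemp)

# Simulate a script that writes to both stdout and stderr,
# redirected the same way as in the crontab entry:
# >> appends stdout to the log, and 2>&1 sends stderr to the same place.
{ echo "backup ok"; echo "upload failed" >&2; } >> "$LOG" 2>&1

cat "$LOG"
```

Without the trailing 2>&1, the "upload failed" line would go to stderr and never reach the log, so failures would be invisible.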

We can check whether the cron job has executed by examining the cron.log file, which stores the logs.