
Migrating your x86 EC2 web servers to Graviton2 instances

Read Time 14 mins | Written by: Kenneth Hough

What is Graviton2?

Graviton2 is the next iteration of AWS’s ARM-based processors. Announced in December 2019, it is a 64-bit processor with 64 cores (quadruple the core count of the original Graviton) built on ARM’s Neoverse N1 cores. Amazon positions Graviton2 as delivering significantly better price performance across a variety of workloads; for example, Amazon states that the M6g instances provide up to 40% better price performance than their x86 counterparts.

So what are we looking at with prices? As of July 6th, 2021, the on-demand price in the us-east-2 region for a t4g.small is $0.0168 per hour compared to $0.0208 per hour for a t3.small. That may be a small difference of only $0.0040 per hour, but it adds up in the long run. For example, running a t4g.small on-demand for a 31-day month (744 hours) costs $12.4992 in compute hours alone, while the t3.small costs $15.4752. That’s roughly $3 in savings per instance per month, and I recommend combining this with other cost-saving techniques such as Savings Plans and Reserved Instances (a topic for another article).
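
If you want to sanity check those numbers yourself, here is a quick back-of-the-envelope calculation (assuming a 31-day, 744-hour month and the us-east-2 on-demand rates above):

awk 'BEGIN {
  hours = 744                 # 31-day month
  t4g = 0.0168 * hours        # t4g.small on-demand rate
  t3  = 0.0208 * hours        # t3.small on-demand rate
  printf "t4g.small: $%.4f\nt3.small:  $%.4f\nsavings:   $%.4f\n", t4g, t3, t3 - t4g
}'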


Migration Steps

In this example, I’m assuming a simple web server running Apache on an x86 instance, such as a t3.small. I will try to generalize the steps as much as possible so that you can apply them to your specific environment.

1. Snapshots, snapshots, snapshots, snapshots.

First and foremost, take a snapshot! We need it for the migration, and it is ALWAYS a good idea to take routine, automated snapshots of your instances before making changes or deleting anything. I highly recommend that organizations of all sizes develop policies for backups and retention, and have a solid disaster mitigation plan. Contact us if you need help with yours!

Ok, so let’s make a snapshot of our existing x86 instance. Navigate to the EC2 console and click on your instance in the list, as seen in the example below.

List of EC2 instances

Here, we are using an example WordPress website running on a t3.small. After clicking on your instance, a detail view with several tabs will appear below. Click on the Storage tab and then on the volume ID, which will take you to the Volumes page of the EC2 console.

Instance Storage detail tab
List of EBS volumes

From the list of EBS volumes, right-click on the one that is attached to your instance. If you clicked the volume ID from the EC2 instance detail page, that specific volume should be the only one visible (as seen above).

Creating a snapshot of an EBS volume

After clicking “Create Snapshot,” follow the steps to create your snapshot. I recommend being as detailed as possible and using resource tags, as seen in the example below.

Creating a snapshot
List of snapshots
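
If you prefer the command line, the same lookup and snapshot can be done with the AWS CLI. This is just a sketch; the instance ID, volume ID, and tag values below are placeholders you would replace with your own.

# Find the volume attached to your instance
aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
  --query 'Volumes[].VolumeId'

# Snapshot it, with a description and a Name tag
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Pre-migration backup of the WordPress web server" \
  --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Name,Value=wordpress-pre-migration}]'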


2. Create a Graviton2 instance

Once your snapshot has been created (not necessarily completed; it’s ok to continue on to this step even if the snapshot status is still pending), we’re going to create a new Graviton2 instance. For this example, since the original x86 instance was a t3.small, we will create the comparable ARM instance, a t4g.small.

Start by launching a new instance, picking the 64-bit (Arm) option for the Amazon Linux 2 AMI, and selecting t4g.small as the instance type.

Launching a new EC2 instance
Selecting instance type t4g.small

Follow the rest of the launch wizard, making sure that the configuration mirrors the original instance, such as Security Groups and volume sizes, and adjust anything that needs to change for your new deployment.
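
If you’d rather script this step, a rough AWS CLI equivalent is sketched below. The AMI ID, key pair, security group, and subnet are placeholders; look up the current 64-bit Arm Amazon Linux 2 AMI for your region before running it.

# Placeholders below: substitute the Arm Amazon Linux 2 AMI and the networking values for your own account
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t4g.small \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=wordpress-graviton2}]'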

Once the instance is launched and ready, connect to it and install the necessary software packages. For a WordPress deployment, like in this example, feel free to follow my modified example or the original AWS tutorial, which can be found at: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hosting-wordpress.html

Ok, let’s connect to your instance and run the following set of commands to set up swap and a web server with PHP 7.3 (NB: for a production environment, I recommend using a newer version of PHP because of security issues in older releases).

Set Up Swap
# Create a 2 GB swap file and restrict its permissions
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
# Format it as swap and enable it
sudo mkswap /swapfile
sudo swapon /swapfile
# Verify that the swap space is active
sudo swapon -s
# Persist the swap file across reboots
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab

Install Updates and Software Packages

sudo yum update -y
sudo amazon-linux-extras install -y php7.3
sudo yum install httpd mariadb-server php73 php-gd php-pdo php-mbstring php-mysqli php-opcache mysql git
sudo yum install -y mod_ssl

Enable httpd and set ownership and group membership

sudo systemctl enable httpd
sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /var/www

* Don’t forget to apply server configurations at this point, such as modifying httpd.conf

Reboot instance

sudo reboot

You can confirm that the Apache web server was installed correctly by navigating to the instance’s public address. Hopefully you will see the default Apache test page. Once confirmed, you are ready to migrate your existing web files over.
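
You can also do a quick check from your own machine with curl (the hostname below is a placeholder for your instance’s public DNS name or IP address):

curl -I http://<your-instance-public-dns>
# Expect an HTTP response header; the default Amazon Linux 2 test page typically returns a 403 until you add your own content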

3. Create a new EBS volume from the snapshot

Once your instance has been correctly configured, you are ready to migrate your web files over to your new instance. We will first create a new EBS volume from the snapshot you created in Step 1. Navigate to your snapshot, right-click it, and select “Create Volume,” as seen in the screenshot below.

Create EBS volume from snapshot

Follow the on-screen instructions to create your volume. An important setting to note is that the availability zone of the volume must be the same as that of the new instance you created, or you will not be able to attach the volume to your instance.

Creating a new volume from a snapshot

Once you’ve created the new volume, you should see it in the list of available volumes. Right-click the newly created volume and select “Attach Volume.” You will be asked to type the instance ID or a name (this is why resource tags are great: they make it much easier to identify resources such as instances).

Attaching a volume to an instance
Select an instance to attach a volume
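
As a CLI alternative, the same two steps look roughly like this; the snapshot ID, availability zone, volume ID, and instance ID are placeholders, and remember that the volume’s availability zone must match the instance’s:

# Create a volume from the snapshot in the SAME availability zone as the new instance
aws ec2 create-volume \
  --snapshot-id snap-0123456789abcdef0 \
  --availability-zone us-east-2a \
  --volume-type gp2

# Attach it to the Graviton2 instance as a secondary device
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf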


4. Mount the volume and copy the data over

Now that the new volume is attached to our instance, let’s mount it. Connect to the Graviton2 instance you created in Step 2 and run the following command to confirm that the volume attached successfully and to obtain its device name.

sudo lsblk

You should see a result similar to the following screenshot. In this example, nvme1n1p1 is the partition on our newly attached volume that we want to copy data from.

Listing available block devices

To mount our volume, we first need to create a target folder where the volume will be made available; we’ll call this /data.

sudo mkdir /data

Next, let’s mount the drive.

sudo mount /dev/nvme1n1p1 /data

Once the volume is mounted, it’s time to copy the web files over. Execute the following command to copy the entire web root over to the new server, and run it again as needed for any additional files and folders.

sudo cp -a /data/var/www/html/. /var/www/html/

Voilà! Now it’s time to clean up. Let’s unmount the drive and delete the temporary target folder /data. Just an FYI: if you have changed your current working directory to somewhere inside /data, you will need to navigate back out, or you will receive a “device is busy” error and will not be able to unmount the drive.

sudo umount /data

sudo rm -rf /data

There you go! Now log out of the instance and detach the temporary volume. I recommend deleting the temporary volume, but hang on to the snapshot for a while, until you are absolutely certain that the new instance is working.

Detach EBS volume from instance
Delete EBS volume
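
If you prefer to do this cleanup from the CLI as well, here is a minimal sketch (the volume ID is a placeholder):

aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 delete-volume --volume-id vol-0123456789abcdef0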

And that concludes this little tutorial on migrating your x86 instances over to Amazon’s Graviton2 instances. I hope this has been helpful, and if you have any questions or need assistance with managing your cloud environment, contact us for a free consult!


Kenneth Hough

Background

I founded KeyQ in March of 2020 with the vision of helping businesses achieve the next level of success through delivering innovative and meaningful cloud solutions. Since its inception, I have worked with several businesses, non-profit organizations, and universities to design and build cloud applications that have helped streamline their business processes and reduce costs.

Prior to KeyQ, I was a medical researcher at the University of Alabama at Birmingham (UAB) in the Division of Pulmonary, Allergy, and Critical Care Medicine. UAB is also where I worked on my doctoral thesis under the mentorship of Dr. Jessy Deshane and Dr. Victor Thannickal. During my doctoral work at UAB I was exposed to the “omics” and big data, which has influenced my career choice to develop data-driven analytics platforms in the cloud.

I also have to give a big shoutout to my undergraduate education at Worcester Polytechnic Institute (WPI), where I majored in biochemistry. WPI’s motto is “Lehr und Kunst,” which roughly translates to “Theory and Practice” or “Learning and Skilled Art.” WPI truly cherishes and upholds this pedagogy, which can be seen in its teaching styles and class sizes. The learning experience I had at WPI was unique and has shaped who I am: someone able to learn, practice, and apply.

Personal Interests

I love to learn innovative technologies and try new things. I have a broad area of interests that include serverless architectures, machine learning, artificial intelligence, bioinformatics, medical informatics, and financial technology. I am also working towards my CFA level 1 exam for 2021. Other interests and hobbies include traveling, rock climbing, rappelling, caving, camping and gardening!