Get AWS EFS working with WordPress. This article covers must-know tips on configuring NFS file caching, getting PHP to work properly with an NFS volume, actively managing AWS Burst Credits, site management considerations, and performance monitoring.


AWS’ Elastic File System (EFS) service can seem like the white whale of the arcane world of horizontal scaling for WordPress. It’s so promising and so easy to create and mount, but initially, at least, it delivers repugnant performance. If you’ve already tried migrating a WordPress site from a local file system to an AWS EFS volume then you likely saw the performance of your site dramatically degrade to the point of being unusable. But why? There are two reasons.

First, NFS brings unavoidable increases in file access latency. The latency is amplified whenever your web app (or rather, its underlying PHP engine) accesses many small files over a short period of time. Unfortunately, that’s basically what PHP is doing all the time, making local file caching critically important. Fortunately this is easy to remedy by installing cachefilesd and memcached.

Second, your IO operations on EFS volumes are subject to limits based on a size tier system called Burst Credits, and it is often the case that your EFS volume initially resides within the pathetically slow bottom tier of this system. This is also easily remedied, as we’ll see below.

Some site management processes, namely virus scanning and site backups, will require re-thinking so as to avoid exhausting Burst Credits. We’ll look at the key factors below.

EFS Is Slow

Let’s get that elephant out of the room right away. EFS is up to three orders of magnitude slower than the EBS counterpart from which you want to migrate, and the tips in this article are simply ways to work around this reality rather than change it. PHP applications like WordPress are a worst-case scenario for this problem in that migrating to EFS will always result in some performance degradation in various parts of the application. Let’s look at why.

The charts below from Amazon EBS Volume Types indicate that EBS volumes of 100 GiB or less allow throughput of 1,750 MiB/s whereas an NFS volume of the same size allows throughput of 500 KiB/s (see table below for EFS Volume Sizing). That’s up to a 3,500x decrease in throughput! But this oversimplifies matters. You also need to look at what happens with individual disk fetches, or IOPS in AWS parlance. Generally speaking, any storage system will retrieve a large stream of data (say, a video file) more efficiently than it can a random collection of small files (say, a bunch of PHP files). That is to say that an EFS volume proportionately works harder to serve up a PHP application, exacerbating an already-large reduction in data throughput. It behooves you to keep this in mind as we move on to my performance tuning tips.
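The “up to 3,500x” figure above is just the ratio of the two throughput numbers quoted in this article, rounded down. A quick sanity check of that arithmetic (the inputs are the article’s figures, not fresh benchmarks):

```shell
# Ratio of the EBS and EFS throughput figures quoted above:
# 1,750 MiB/s (EBS) versus 500 KiB/s (EFS baseline at the smallest tier).
ebs_kib_per_s=$((1750 * 1024))   # convert MiB/s to KiB/s
efs_kib_per_s=500
ratio=$((ebs_kib_per_s / efs_kib_per_s))
echo "${ratio}x slower"          # prints "3584x slower"
```

The exact ratio is 3,584x; the article rounds this to 3,500x.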

EBS Throughput by Size
EBS IOPS by Size

1. Add File Caching To Your EC2 Instance

Red Hat published an excellent file caching solution for NFS back around 2006, which is installable on Amazon Linux as a yum package. It’s idiot proof and a highly effective way to minimize file reads from your NFS volume. Given the wretched performance all customers initially see from EFS, I don’t know why AWS does not include this in their mounting instructions.

sudo yum install -y nfs-utils     # ensure the NFS client tools are installed on your Amazon Linux instance
sudo yum install -y cachefilesd   # the local file caching daemon
sudo service cachefilesd start
sudo chkconfig cachefilesd on     # autostart cachefilesd on boot
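cachefilesd works out of the box, reading its settings from /etc/cachefilesd.conf. The values below are the stock defaults, shown here only so you know what knobs exist; there is usually no need to change them unless the cache disk is tight on space:

```
# /etc/cachefilesd.conf -- stock defaults, shown for reference
dir /var/cache/fscache   # where cached file contents are stored (local disk)
tag mycache
brun 10%                 # resume caching when at least 10% of disk space is free
bcull 7%                 # begin culling old cache entries below 7% free space
bstop 3%                 # stop caching entirely below 3% free space
frun 10%                 # same three thresholds, but measured in free inodes
fcull 7%
fstop 3%
```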

AWS provides precise EFS volume mounting instructions via a hyperlink inside the settings of your EFS volume within the AWS Console. However, you’ll need to make a small modification to these instructions so that the mount uses FS-Cache. AWS’ mount instructions include a list of options which you’ll see immediately after the -o parameter.

Change this line: “nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2”
To the following: “nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,fsc”

The additional option “fsc” instructs NFS that we’ll be using File System Caching on the mounted volume. The complete set of instructions will look similar to the following:

sudo mkdir /efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,fsc [ADD YOUR FILE SYSTEM END POINT]:/ /efs
sudo service cachefilesd restart  #to ensure that our cachefiles service becomes aware of our newly mounted efs volume

To mount the volume automatically on startup, add an entry to /etc/fstab:

sudo vim /etc/fstab
[ADD YOUR FILE SYSTEM END POINT]:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,fsc,_netdev 0 0

2. Size Your Volume For AWS Burst Credits

AWS governs IO operations on EFS volumes based on their size. If you’re migrating a single WordPress site to EFS then this will almost certainly cause problems for you. The workaround is to create a dummy file to force the size of your EFS volume into the performance tier that your site requires. Personally, I’d never given any thought to my site’s IO performance requirements and so I had no idea of my needs. If that’s your situation as well you should take a close look at AWS’ pricing and performance tier structure, also presented below. Keep in mind that artificially increasing the size of your EFS volume also makes it cost more.

File System   Baseline Throughput   Burst Throughput   Maximum Burst        % of Time File System
Size (GiB)    (MiB/s)               (MiB/s)            Duration (Min/Day)   Can Burst (Per Day)
10            0.5                   100                7.2                  0.5%
256           12.5                  100                180                  12.5%
512           25.0                  100                360                  25.0%
1024          50.0                  100                720                  50.0%
1536          75.0                  150                720                  50.0%
2048          100.0                 200                720                  50.0%
3072          150.0                 300                720                  50.0%
4096          200.0                 400                720                  50.0%
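
Notice that the baseline column scales linearly with volume size: 50 MiB/s of baseline throughput per TiB stored. The snippet below reproduces the table’s baseline column from that relationship, which is handy when sizing a dummy file for a tier the table doesn’t list:

```shell
# Baseline EFS throughput scales linearly: 50 MiB/s per TiB (1024 GiB) stored.
# Reproduce the table's baseline column for a few sizes (GiB -> MiB/s).
for size_gib in 256 512 1024 2048; do
    baseline=$(awk -v s="$size_gib" 'BEGIN { printf "%.1f", s * 50 / 1024 }')
    echo "${size_gib} GiB -> ${baseline} MiB/s baseline"
done
```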

Here is an example code snippet that creates a 256 GiB dummy file, just enough to push your EFS volume up to the second performance tier. Change the parameters to tailor the file size to your needs. Take note, however, that creating a 256 GiB dummy file takes around 90 minutes. This snippet therefore wraps the dd command in “nohup” and runs it in the background. You can monitor progress using “ls -lh”.

cd /efs
sudo nohup dd if=/dev/urandom of=dummy-256gib.img bs=1024k count=262144 status=progress &   # 262144 x 1 MiB blocks = exactly 256 GiB
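To target a different tier, the dd parameters are simple arithmetic: with bs=1024k (1 MiB blocks), count is the target size in GiB multiplied by 1024. A small sketch that prints the command for a chosen size (TARGET_GIB and the output filename are placeholders for you to adjust):

```shell
# Compute the dd 'count' for a target dummy-file size, assuming bs=1024k (1 MiB blocks).
TARGET_GIB=256                  # hypothetical target: the 256 GiB performance tier
count=$((TARGET_GIB * 1024))    # 1024 one-MiB blocks per GiB
echo "sudo nohup dd if=/dev/urandom of=dummy-${TARGET_GIB}gib.img bs=1024k count=${count} status=progress &"
```

For 256 GiB this yields count=262144; remember that the extra storage is billed like any other EFS data.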