Bulk Purge CDN (CloudFlare) objects

There are situations when you need to purge the same item from the CloudFlare cache under several different URLs, e.g. our Magento multi-store has 14 stores, and some objects are not part of the main CDN domain (cdn.domain.com) but are called directly from the origin server under each store's URL.


It's a pain to manually purge a single item a dozen times with different suffixes, so I wrote a script which generates all the URL variants; you just need to copy and paste them into the CloudFlare purge input box.

Download the script attached here, create a file containing your item paths (relative, one per line) and execute the script.
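To give you an idea of what the script does, here is a minimal sketch of the approach. The domain, the store suffixes and the per-store URL layout are placeholders (your multi-store URL pattern may differ); the real script is attached above.

#!/bin/bash
# Sketch: expand each relative path into one URL per store.
# domain and suffixes are placeholders; adjust to your own layout.
domain="https://www.domain.com"
suffixes="store1 store2 store3"

# paths.txt holds one relative path per line, e.g. skin/frontend/logo.png
while read -r path; do
    for s in $suffixes; do
        echo "${domain}/${s}/${path}"
    done
done < paths.txt

Paste the output straight into the CloudFlare purge box.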


Purging made easy 🙂


AWS IAM Policy exclusion

A few days back I was in a situation where I had to provide full (sort of) access to our AWS account to one of our partners so they could set up an Elastic Beanstalk cluster. Obviously I was not in favor of providing complete access to the resources which are critical for us, so I came up with the IAM policy below, which not only protects our critical resources but also provides full access to deploy an Elastic Beanstalk cluster.

Download: IAM Policy

The logic in the above policy is to add an additional tag "critical : true" (key and value respectively) to all of your existing critical resources and set up an explicit deny on resources matching that tag (note that not all AWS resources support tagging). To further extend the protection, we also set up direct explicit deny rules on each resource to which we don't want to give the IAM user access.
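As a rough illustration of the tag-based deny part (a sketch only, not the downloadable policy; the user name and policy name are made up, and EC2 is used as the example because it supports tag-based conditions):

aws iam put-user-policy --user-name partner-user --policy-name deny-critical \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      { "Effect": "Allow", "Action": "*", "Resource": "*" },
      { "Effect": "Deny", "Action": "ec2:*", "Resource": "*",
        "Condition": { "StringEquals": { "ec2:ResourceTag/critical": "true" } } }
    ]
  }'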


Backup AWS Route53 Zones in BIND format

You are using AWS Route53 to host your DNS and you need to back up all hosted zones along with their complete records, in a format that is also compatible with BIND. Don't scratch your head: the script below will take care of it. All you need is to install a handy tool for managing Route53 called "cli53".

apt-get install python-pip
pip install cli53

Once cli53 is installed, download the script from here; before executing it you only need to specify your access keys. The script will fetch the names of all domains currently created in your account and back them up in BIND format to individual files.
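For illustration, a minimal sketch of what such a script can look like. The keys are placeholders, and the parsing of the 'cli53 list' output is an assumption you may need to adapt to your cli53 version:

#!/bin/bash
# Sketch: export every hosted zone in the account to a BIND zone file.
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"

mkdir -p route53-backup
# List the hosted zones and strip the output down to bare zone names.
for zone in $(cli53 list | awk '{print $2}' | sed 's/\.$//'); do
    # 'cli53 export' prints the zone in BIND format to stdout.
    cli53 export "$zone" > "route53-backup/${zone}.txt"
done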



Automatic RootKit scanning using “chkrootkit”

Scanning servers for the presence of hidden rootkits has always been an important part of system administration, but the recent discovery of Linux ransomware (and, luckily, a fix for it) has highlighted its importance and motivated me to write something about it. So what have I come up with?

chkrootkit is among the most famous and widely used tools for detecting rootkits, so I decided to write a script which takes care of the installation, the scanning and the report. The installation is quick and easy, so why did I write the script?

Usually, when smart hackers manage to inject a rootkit into a server, they also scan the system for the presence of an anti-rootkit tool and, if they find one, replace it with a hacked version. They are not done yet: they also replace the system binaries (ls, ps, egrep, awk etc.) that anti-rootkit tools use, so by relying on the modified binaries the anti-rootkit tool fails to detect the rootkit and you think you are safe.

So what does the script do?

The script downloads the latest version of chkrootkit, compiles it, downloads safe versions of my system binaries (Ubuntu 14.04.3 LTS; you need to replace them with your own, as you should not be using mine on your servers), scans the system and sends the report to your email. As we don't want hackers to know that we are using an anti-rootkit tool, once everything is done the script also removes all traces of the chkrootkit installation and anything related. Lastly it removes itself (the bash script) from the filesystem and clears the history. You can find the script here
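To give you an idea, here is a stripped-down sketch of that flow. The download URL, the mail command and the safe-binaries path are assumptions; the full script linked above does more:

#!/bin/bash
# Sketch of the flow only; see the full script linked above.
EMAIL="you@example.com"

# Fetch and build the latest chkrootkit release.
wget -q ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.tar.gz
tar xzf chkrootkit.tar.gz
cd chkrootkit-* && make sense

# Run the scan using a directory of known-good binaries (-p) and mail the report.
./chkrootkit -p /root/safe-bin | mail -s "chkrootkit report $(hostname)" "$EMAIL"

# Remove every trace of the installation, then the script itself.
cd .. && rm -rf chkrootkit-* chkrootkit.tar.gz
rm -- "$0"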

Once the script is downloaded, execute it as below:

./chkrootkit.sh && history -c && history -w

Note: Before using the script you should also review your hosts file (/etc/hosts) for any malformed entries, as that is a way a hacker can redirect you to their own website and serve a modified version of chkrootkit.

Don't install it as a cron job, as that would help a hacker detect the presence of the anti-rootkit tool.

Update: If you are receiving a "chkrootkit: can't find `ssh'" error, it's a bug in the current release; you can find additional details here.


HOT Backups to S3 Directly

There are situations when you want to back up your data directly to a remote location, e.g. you are out of free space, you want to hand a backup of the data to your client or developers, or the server is going to be terminated shortly (abuse, payment default etc.) and you don't have the free space to back up and download your data in time.

For situations like those I have written a script which backs up your webroot and MySQL database directly to an S3 bucket (without creating any local copy before uploading to S3) and creates instant public or private download links.

Download the script from here. The script requires s3cmd to be installed and configured, but don't worry, the script will take care of that as well; all you need to do is adjust the following variables before executing it.


public: Yes or No (decides whether the download link should be public or private).
tempfiles: Path where the script keeps its temporary files.
access_key: AWS Access Key.
secret_key: AWS Secret Key.
bucket: S3 bucket name.
webroot: Path to your webroot.
db: Name of your database.

That's all, just update the above variables and you are ready to take your backup. Backups have never been this easy, have they?
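The core trick is streaming: tar and mysqldump write to stdout and s3cmd reads from stdin, so nothing big lands on disk. A minimal sketch of the idea, assuming s3cmd 1.5+ (which supports stdin uploads); all values below are placeholders, not the script's actual variables:

#!/bin/bash
# Sketch: stream webroot and database straight to S3, then create a link.
bucket="my-backup-bucket"
webroot="/var/www/html"
db="mydatabase"
stamp=$(date +%F)

# Stream the webroot archive to S3 without writing a local copy.
tar czf - "$webroot" | s3cmd put - "s3://${bucket}/webroot-${stamp}.tar.gz"

# Same idea for the database dump.
mysqldump "$db" | gzip | s3cmd put - "s3://${bucket}/${db}-${stamp}.sql.gz"

# Private download link valid for 24 hours (use --acl-public on put for public).
s3cmd signurl "s3://${bucket}/${db}-${stamp}.sql.gz" +86400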



How to enable Online Certificate Status Protocol (OCSP) Stapling in Nginx

Generally, when you access a secure (HTTPS) site, the browser has to make an additional request to a certificate revocation server known as an OCSP responder to verify whether the certificate has been revoked; because of this overhead some browsers (Chrome etc.) do not implement OCSP checks.

By implementing OCSP stapling you take that round trip away from the client: your web server periodically queries the OCSP responder itself and then serves the client both the certificate and the proof that the certificate has not been revoked.

To enable OCSP stapling in Nginx, add the following lines under your SSL listener.

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/sub.class1.server.ca.pem;

ssl_trusted_certificate should contain your intermediate certificates followed by your Root CA.
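Depending on your setup you may also need a 'resolver' directive in the same context (e.g. 'resolver 8.8.8.8;'), so Nginx can resolve the OCSP responder's hostname when fetching the response.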

Once OCSP stapling is properly configured, you can verify it online with SSL Labs or locally with OpenSSL.

echo QUIT | openssl s_client -connect azfarhashmi.com:443 -status 2> /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update'

If you see any result, then OCSP stapling is working fine. Don't forget to replace 'azfarhashmi.com' with your own domain.


Enable HTTP/2 in Nginx

As of now the HTTP/2 protocol is approved and published, and it is faster than the older protocol, so it's a good time to implement HTTP/2 on your website. The good thing is that all major servers and browsers are already compatible with it, so we are good to go. If you are curious, check the current compatibility list here.

I am using Ubuntu 14.04.3 LTS and Nginx mainline v1.9.6. The installation is simple; all you need is:

Nginx 1.9.5 or higher compiled with the '--with-http_v2_module' option
A TLS-enabled website

To install, create /etc/apt/sources.list.d/nginx.list and add the Nginx official repositories to it.

deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx

Add the key and install the latest Nginx mainline version.

wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key
apt-get update
apt-get install nginx

Once Nginx is installed, you just need to update the HTTPS listener in your vhost as below, and your website is ready to take advantage of HTTP/2.

listen 443 ssl http2;

Note: If you were using SPDY, you need to remove 'spdy' from your listener and simply replace it with 'http2'; if you are passing any SPDY-related headers, remove them as well.
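If you want to confirm from the shell that HTTP/2 is actually negotiated, a quick check with OpenSSL (assuming OpenSSL 1.0.2+ with ALPN support; replace the domain with your own):

echo QUIT | openssl s_client -connect azfarhashmi.com:443 -alpn h2 2> /dev/null | grep 'ALPN protocol'

If the output shows 'ALPN protocol: h2', HTTP/2 is enabled.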


Enable gzip compression with Nginx+CloudFront (CDN)

Some time ago I decided to serve static content from CloudFront, and everything was smooth and quick as expected, but I noticed that I had lost gzip compression on all compressible objects. Strangely, when I fetched an object directly from my web server (Nginx) the compression was there, but when I loaded the same object from CloudFront there was no compression.

After scratching my head for a while I realized that by default Nginx does not apply compression to proxied requests, which it detects by the presence of the 'Via' header in the request; this header is present on CloudFront requests and, unlike other CDNs, CloudFront does not compress objects on its own, so as a result I lost the compression.

The good thing is that you can fix it easily by instructing Nginx to enable compression on proxied requests too, by setting 'gzip_proxied' to 'any'. In some cases you may also need to set 'gzip_http_version' to '1.0'.

gzip_proxied any;
gzip_http_version 1.0;

Now CloudFront will store two versions of each compressible object, i.e. compressed and uncompressed; when the client browser's request contains the 'Accept-Encoding' header CloudFront will serve the compressed version, and when the header is missing it will serve the uncompressed version of the object.
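You can observe the two versions from the shell with curl (the object path here is hypothetical; -I sends a HEAD request):

# With Accept-Encoding, CloudFront should return the compressed copy...
curl -s -I -H 'Accept-Encoding: gzip' https://d3shv1t4v6he9p.cloudfront.net/css/style.css | grep -i content-encoding

# ...and without it, no Content-Encoding header, i.e. the uncompressed copy.
curl -s -I https://d3shv1t4v6he9p.cloudfront.net/css/style.css | grep -i content-encoding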

Note: If you are still not seeing compression with CloudFront, then you probably missed invalidating those objects from your distribution.


Use Custom SSL certificates with CloudFront (Free)

Although CloudFront provides its own wildcard certificate (*.cloudfront.net) for free with each distribution you create, in that case you cannot use your own domain CNAME to access the content from the CDN over HTTPS, e.g. you can't use media.azfarhashmi.com; you can only use the CloudFront-provided domain name, e.g. 'd3shv1t4v6he9p.cloudfront.net'.

To use a custom certificate you first have to upload your certificates into IAM, which you can do via the AWS CLI tools.

aws iam upload-server-certificate --server-certificate-name azfarhashmi2015 \
--certificate-body file://azfarhashmi.com.crt --private-key \
file://azfarhashmi.com.key --certificate-chain file://azfarhashmi0interm.pem \
--path /cloudfront/

Here --server-certificate-name is the display name of your SSL certificate that will appear in the CloudFront settings, --certificate-body is the path to your certificate, --private-key is the path to your certificate's private key, --certificate-chain is the path to your certificate's complete chain file, and --path will remain /cloudfront/ in our case.
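You can confirm the upload succeeded (and see the certificate's ARN and its /cloudfront/ path) with:

aws iam list-server-certificates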

If you are getting any errors while uploading the certificates, make sure your certificates are in PEM format, that your --certificate-body file does not contain any intermediate / root certificates (as it may in the case of Nginx etc.), and that you are providing the complete chain in the correct order, i.e. your root certificate should come last.

Once the certificates are uploaded, go to the distribution settings, choose the 'Custom SSL Certificate (stored in AWS IAM)' option and select the recently uploaded certificate. Make sure that 'Only Clients that Support Server Name Indication (SNI)' is selected, otherwise AWS will charge you an additional $600/month for assigning dedicated IP addresses at each edge location.

As of now the SNI option should be enough if your content is accessed only by browsers, as all recent browsers have implemented SNI; however, if other applications / libraries access it and you are not sure about their support, you can consult the Wikipedia article here.


Invalidate CDN (CloudFront) objects via “bash” script

I have seen many people looking for a way to invalidate / purge objects from the CDN (CloudFront). I was in need of something similar when I decided to give a little more power to my developers so they could invalidate objects themselves. So here is my little effort, which will invalidate objects from AWS CloudFront and report back when the invalidation has actually completed.

Prerequisite: AWS CLI tools

Once you have set up the AWS CLI tools and downloaded the CDN invalidation script from here, you just have to adjust a few variables within the script and you are ready to punch your CDN via the shell.

Variables to update:

email: Your email address.
distributionid: Your CDN distribution ID.
json: Path to the resultant JSON file (contains the links in JSON format).
file: Here you have to put all the links that you want to invalidate, one per line.

You can couple the script with a real-time or scheduled file-system change monitoring tool to achieve instant, automatic invalidation of modified objects.
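For reference, a minimal sketch of how the same flow can be built on the AWS CLI alone (assuming a CLI version with CloudFront support; the values are placeholders and the real script linked above differs):

#!/bin/bash
# Sketch: create an invalidation, wait for completion, send a report.
email="you@example.com"
distributionid="E1EXAMPLE"
file="/path/to/links.txt"    # one object path per line, e.g. /css/style.css

# Create the invalidation and capture its ID (paths must not contain spaces).
id=$(aws cloudfront create-invalidation --distribution-id "$distributionid" \
    --paths $(cat "$file") --query 'Invalidation.Id' --output text)

# Block until CloudFront reports the invalidation as completed.
aws cloudfront wait invalidation-completed --distribution-id "$distributionid" --id "$id"

echo "Invalidation $id completed" | mail -s "CDN invalidation completed" "$email"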