
Bulk Purge CDN (CloudFlare) objects

There are situations where you need to purge the same item from the CloudFlare cache when it is served from several different URLs. For example, our Magento Multi-Store has 14 stores, and some objects are not part of the main CDN domain (cdn.domain.com) but are called directly from the origin server, for example:


http://store1.domain.com/1.jpg
http://store2.domain.com/1.jpg

It is a pain to manually purge a single item a dozen times, once for each store domain, so I wrote a script which generates all the resulting URLs; you just need to copy and paste them into the CloudFlare purge input box.

Download the script attached here, create a file containing your item paths (relative), as below, and execute the script.


skin/frontend/rwd/default/js/product.js
skin/frontend/rwd/mobile/js/product.js
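
For reference, a minimal sketch of the idea behind the script looks like this; the store names, base domain and paths file name are placeholders, not the exact ones from the attached script:

#!/bin/bash
# Expand each relative path into one URL per store subdomain,
# then paste the output into the CloudFlare purge input box.
stores="store1 store2 store3"        # list all of your store subdomains here
pathsfile="paths.txt"                # file with one relative path per line

while read -r path; do
    for store in $stores; do
        echo "http://$store.domain.com/$path"
    done
done < "$pathsfile"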

Purging made easy 🙂


AWS IAM Policy exclusion

A few days back I was in a situation where I had to provide full access (sort of) to our AWS account to one of our partners so they could set up an Elastic Beanstalk cluster. Obviously I was not in favor of providing complete access to the resources which are critical for us, so I came up with the IAM policy below, which not only protects our critical resources but also provides full access to deploy an Elastic Beanstalk cluster.

Download: IAM Policy

The logic in the above policy is to add an additional tag “critical : true” (key and value respectively) to all of your existing resources and set up an explicit deny on resources which match the tag (note that not all AWS resources support tagging). To further extend the protection we also set up direct explicit deny rules on each resource to which we don’t want to give the IAM user access.
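
As an illustration only (this is not the attached policy), the tag-based deny could look roughly like the statements below; this uses the EC2-style resource-tag condition key, and the exact condition keys available depend on which services support tagging:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEverythingByDefault",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    },
    {
      "Sid": "DenyResourcesTaggedCritical",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/critical": "true" }
      }
    }
  ]
}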


Backup AWS Route53 Zones in BIND format

You are using AWS Route53 to host your DNS and you need to back up all hosted zones, along with their complete records, in a format that is also compatible with BIND. Don’t scratch your head: the script below will take care of it. All you need is to install a handy tool for managing Route53 called “cli53”.


apt-get install python-pip
pip install cli53

Once cli53 is installed, download the script from here; before executing it you only need to specify your access keys. The script will fetch the names of all domains currently created in your account and back them up in BIND format to individual files.
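A rough sketch of such a script (not necessarily the attached one) could look like this; the way the zone names are extracted from “cli53 list” may need adjusting for your cli53 version:

#!/bin/bash
# Back up every Route53 hosted zone to its own BIND-format file.
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"        # your access keys go here
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"

backupdir="/backup/route53"
mkdir -p "$backupdir"

# 'cli53 list' prints the hosted zones; grab the zone names and strip the trailing dot.
for zone in $(cli53 list | awk '/Name/ {print $NF}' | sed 's/\.$//'); do
    cli53 export "$zone" > "$backupdir/$zone.bind"
done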

 


HOT Backups to S3 Directly

There are situations when you want to back up your data directly to a remote location, e.g. you are out of free space, you want to hand a copy of the data to your client or developers, or the server is going to be terminated shortly (abuse, payment default etc.) and you don’t have the free space to back up and download your data in time.

For situations like those I wrote a script which will back up your webroot and MySQL database straight to an S3 bucket (without creating any local copy before uploading to S3) and create instant public or private download links.

Download the script from here. The script requires s3cmd to be installed and configured, but don’t worry, the script will take care of that as well; all you need is to adjust the following variables before executing it.

Variables:

public: Yes or No (decide whether download link should be public or private)
tempfiles: Path where the script stores temporary files
access_key: AWS Access Key
secret_key: AWS Secret Key
bucket: S3 bucket name.
webroot: Path to your webroot.
db: Name of your database

That’s all; just update the above variables and you are ready to take your backup. Backups were never this easy, were they?
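
To illustrate the approach (the attached script is more complete), the core idea of streaming straight to S3 without a local copy looks roughly like this, assuming s3cmd is new enough (1.5+) to accept uploads from stdin; the bucket, webroot and database names are placeholders:

#!/bin/bash
bucket="my-backup-bucket"
webroot="/var/www/html"
db="mydatabase"
stamp=$(date +%F)

# Stream a compressed tarball of the webroot directly to S3 (no local copy).
tar -czf - "$webroot" | s3cmd put - "s3://$bucket/webroot-$stamp.tar.gz"

# Stream a compressed database dump directly to S3.
mysqldump "$db" | gzip | s3cmd put - "s3://$bucket/$db-$stamp.sql.gz"

# Generate a signed (private) download link that expires in 24 hours.
# (For a public link, upload with 's3cmd put -P' and use the plain object URL instead.)
s3cmd signurl "s3://$bucket/$db-$stamp.sql.gz" $(( $(date +%s) + 86400 ))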

 


Enable gzip compression with Nginx+CloudFront (CDN)

Some time ago I decided to use CloudFront to serve my static content, and everything was smooth and quick as expected, but I noticed that I had lost gzip compression on all compressible objects. Strangely, when I fetched an object directly from my web server (Nginx) the compression was there, but if I loaded the same object through CloudFront there was no compression.

After scratching my head for a while I realized that by default Nginx does not apply compression to proxied requests; Nginx detects these by the presence of the ‘Via’ header in the request, which is present in the case of CloudFront. Unlike some other CDNs, CloudFront does not compress objects on its own, so as a result I lost the compression.

The good thing is that you can fix it easily by instructing Nginx to enable compression for proxied requests too, by setting ‘gzip_proxied’ to ‘any’. In some cases you may also need to set ‘gzip_http_version’ to ‘1.0’.


gzip_proxied any;
gzip_http_version 1.0;

In that case CloudFront will store two versions of each compressible object, i.e. compressed and uncompressed; when the client browser’s request contains an ‘Accept-Encoding’ header CloudFront will serve the compressed version, and when the header is missing it will serve the uncompressed version of the object.
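
You can quickly verify the behaviour with curl; the hostname and path below are just examples:

# With the header, CloudFront should return 'Content-Encoding: gzip' once the fix is live:
curl -s -I -H "Accept-Encoding: gzip" http://d3shv1t4v6he9p.cloudfront.net/js/product.js | grep -i content-encoding

# Without the header, the same object should come back uncompressed (no Content-Encoding line):
curl -s -I http://d3shv1t4v6he9p.cloudfront.net/js/product.js | grep -i content-encoding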

Note: If you are still not seeing compression with CloudFront then you must have missed invalidating those objects from your distribution.


Use Custom SSL certificates with CloudFront (Free)

CloudFront provides its own wildcard certificate (*.cloudfront.net) for free with each Distribution you create, but in that case you cannot use your own domain CNAME to access the content from the CDN over HTTPS; e.g. you can’t use media.azfarhashmi.com, you can only use the CloudFront-provided domain name such as ‘d3shv1t4v6he9p.cloudfront.net’.

To use a custom certificate you first have to upload your certificates into IAM, which you can do via the AWS CLI tools.


aws iam upload-server-certificate --server-certificate-name azfarhashmi2015 \
--certificate-body file://azfarhashmi.com.crt --private-key \
file://azfarhashmi.com.key --certificate-chain file://azfarhashmi0interm.pem \
--path /cloudfront/

Here --server-certificate-name is the display name of your SSL certificate that will appear in the CloudFront settings, --certificate-body is the path to your certificate, --private-key is the path to your certificate’s private key, --certificate-chain is the path to your certificate’s complete chain file, and --path will remain /cloudfront/ in our case.

If you get any error uploading the certificates then make sure your certificates are in PEM format, your --certificate-body file does not contain any intermediate / root certificate (as it often does in the case of nginx bundles etc.) and you are providing the complete chain in the correct order, i.e. your root certificate should come last.
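
A couple of standard openssl commands can help with those checks before you upload (file names as in the example above):

# The two md5 sums must match, i.e. the private key belongs to the certificate.
openssl x509 -noout -modulus -in azfarhashmi.com.crt | openssl md5
openssl rsa  -noout -modulus -in azfarhashmi.com.key | openssl md5

# Print subject/issuer of every certificate in the chain file to confirm the
# intermediates come first and the root certificate is last.
openssl crl2pkcs7 -nocrl -certfile azfarhashmi0interm.pem | openssl pkcs7 -print_certs -noout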

Once the certificates are uploaded you can go to the distribution settings, choose the ‘Custom SSL Certificate (stored in AWS IAM)’ option and select the recently uploaded certificate. Make sure that ‘Only Clients that Support Server Name Indication (SNI)’ is selected, otherwise AWS will charge you an additional $600/month for assigning you dedicated IP addresses at each edge location.

As of now the SNI option should be enough if your content is accessed only by browsers and no other application / library is accessing it, as all recent browsers have implemented SNI; however, if you are not sure then you can consult the Wikipedia article here.


Invalidate CDN (CloudFront) objects via “bash” script

I have seen many people looking for how to invalidate / purge objects from CDN (CloudFront). I needed something similar when I decided to give a little more power to my developers so they could invalidate objects themselves. So here is my little effort, which will invalidate objects from AWS CloudFront and report back when the invalidation has actually completed.

Prerequisite: AWS CLI tools
http://docs.aws.amazon.com/cli/latest/userguide/installing.html#install-bundle-other-os

Once you have set up the AWS CLI tools and downloaded the CDN invalidation script from here, you just have to adjust a few variables within the script and you are ready to punch your CDN via the shell.

Variables to update:

email: Your email address.
distributionid: Your CDN distribution ID.
json: Path to the resultant JSON file (contains the links in JSON format).
file: Path to a file containing all the links that you want to invalidate, one per line.

You can couple the script with a real-time or scheduled file-system change monitoring tool to achieve instant, automatic invalidation of modified objects.
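
If you prefer not to use the attached script, a stripped-down version of the same flow with the AWS CLI could look roughly like this (variable names here are illustrative; older CLI versions need “aws configure set preview.cloudfront true” first):

#!/bin/bash
distributionid="EXXXXXXXXXXXXX"      # your CDN distribution ID
file="/tmp/invalidate.txt"           # links to invalidate, one per line (e.g. /js/product.js)

# Create the invalidation and capture its ID.
inval_id=$(aws cloudfront create-invalidation \
    --distribution-id "$distributionid" \
    --paths $(tr '\n' ' ' < "$file") \
    --query 'Invalidation.Id' --output text)

# Poll until CloudFront reports it as completed, then notify.
until [ "$(aws cloudfront get-invalidation --distribution-id "$distributionid" \
        --id "$inval_id" --query 'Invalidation.Status' --output text)" = "Completed" ]; do
    sleep 30
done
echo "Invalidation $inval_id completed"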


How To Get The Most Out of AWS Free Tier | Free VPS Server

Today I will explain how to make efficient use of the AWS EC2 Free Tier. I was using a DigitalOcean 512M server for my blog, paying them $5 each month, and it was working absolutely fine for my site. I then realized: why not try the AWS Free Tier?

The AWS Free Tier provides 12 months of free service with some limitations, mainly 750 hours of a “t2.micro” EC2 instance (1GB memory), which covers a whole month, 30GB of General Purpose SSD or Magnetic storage (EBS), 15GB of bandwidth and many other free services like RDS, S3 etc., which we discuss later. To see what is included in the Free Tier you should read the page below before starting to deploy anything.

http://aws.amazon.com/free

To get started you just need to sign up for a new AWS account, for which you just need a valid credit card, phone number and email address. Once you have finished setting up and verifying the account, make sure to confirm whether you are eligible for the Free Tier by visiting “Billing & Cost Management”, where you should find “You are eligible for the AWS Free Usage Tier. See the Getting Started Guide AWS Free Usage Tier to learn how to get started with the free usage tier.” under “Alerts & Notifications”.

Now the tricky part is how to distribute the limited resources we have and design a good architecture which provides decent performance, capacity and basic disaster recovery. Below is the configuration I used to start with.

  1. “Ubuntu Server 14.04 LTS (HVM), SSD Volume Type” based “t2.micro” EC2 instance. (HVM performs slightly better than PV)
  2. 20GB General Purpose SSD for the root volume, which also holds your application data. (/var/www)
  3. 10GB General Purpose SSD as a BACKUP disk where we store on-site backups (see the sketch after this list).
  4. “db.t2.micro” instance for RDS with 20GB of database storage.
  5. 20GB of backup storage for RDS database backups / snapshots.
  6. Mount a 5GB S3 bucket (in a different region than the server) for off-site backups.
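
For the BACKUP disk in step 3, attaching and mounting the extra 10GB volume is just the usual EBS routine; the device name varies (e.g. /dev/xvdf on this instance type), so adjust accordingly:

# Format the extra volume, mount it at /backup and make the mount persistent.
mkfs.ext4 /dev/xvdf
mkdir -p /backup
echo "/dev/xvdf /backup ext4 defaults,nofail 0 2" >> /etc/fstab
mount /backup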

This type of configuration is a good start for small sites or personal blogs, providing decent performance, capacity and reliable disaster recovery while ensuring you still fall within the Free Tier limits.

Most of you will also be looking for a static IP for your server, so the good news is that you can allocate an EIP and attach it to your instance, and Amazon won’t charge you for it as long as it is associated with a running instance. Once you are done setting up the server it is time for one more step, which involves requesting Amazon to white-list your EIP and create the relative rDNS record so you can send emails from your application and server.

https://aws.amazon.com/forms/ec2-email-limit-rdns-request?catalog=true&isauthcode=true

Lastly, an important point is to keep an eye on your AWS account billing and make sure nothing is being added to your bill. You can do that by going to “Billing & Cost Management”.

Tools I used for backups are below.

On-Site Backup: mysqldump & rsnapshot

Off-Site Backup: S3backer, duplicity, s3cmd
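
As a tiny example of the off-site part (the bucket and database names are placeholders; the on-site rsnapshot configuration is the usual one from its documentation):

#!/bin/bash
# Nightly: dump the blog database and push it to the off-site S3 bucket with s3cmd.
# Run it from cron, e.g.:  30 2 * * * root /usr/local/bin/offsite-backup.sh
db="wordpress"
bucket="my-offsite-backup-bucket"
dump="/backup/$db-$(date +%F).sql.gz"

mysqldump "$db" | gzip > "$dump"
s3cmd put "$dump" "s3://$bucket/"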

Note: do not take EBS Snapshots, as Amazon provides only 1GB of snapshot storage for free. EBS Snapshots are crucial for your DR, but unless you are ready to pay, don’t use them.

UPDATE:

You can also create a CloudWatch alarm which will send you an email notification when your monthly cost for any service goes above $0. Again, you need to visit “Billing & Cost Management”; under “Alerts & Notifications” you will find the message “Your account is enabled for monitoring estimated charges. Set your first billing alarm to receive an e-mail when charges reach a threshold you define.” So just click on “Set your first billing alarm”, set up an alarm with a ‘0’ value and your email address, and voila.
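
If you prefer the command line, roughly the same alarm can be created with the AWS CLI; this assumes billing metrics are enabled and an SNS topic subscribed to your email already exists (the topic ARN below is a placeholder):

# Billing metrics live in us-east-1; alert as soon as estimated charges exceed $0.
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name "billing-above-zero" \
    --namespace "AWS/Billing" \
    --metric-name "EstimatedCharges" \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 0 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions "arn:aws:sns:us-east-1:123456789012:billing-alerts"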