A few days back I was in a situation where I had to provide full (sort of) access to our AWS account to one of our partners so they could set up an Elastic Beanstalk cluster. Obviously I was not in favor of providing complete access to the resources which are critical for us, so I came up with the IAM policy below, which not only protects our critical resources but also provides full access to deploy an Elastic Beanstalk cluster.
Download: IAM Policy
The logic in the above policy is to add an additional tag “critical : true” (key and value respectively) to all of your existing critical resources and set up an explicit deny on resources which match that tag (note that not all AWS resources support tagging). To extend the protection further, we also set up direct explicit deny rules on each resource to which we don’t want to give the IAM user access.
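As a rough illustration only, a policy following that logic could look something like the sketch below. The service list, the Sid names, and the use of the `aws:ResourceTag/critical` condition key are my assumptions for the example; the actual downloadable policy may differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBeanstalkDeploymentExample",
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:*",
        "ec2:*",
        "s3:*",
        "autoscaling:*",
        "cloudformation:*",
        "elasticloadbalancing:*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "DenyResourcesTaggedCriticalExample",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/critical": "true" }
      }
    }
  ]
}
```

An explicit Deny always wins over an Allow in IAM evaluation, which is why the tagged resources stay protected even with the broad Allow statement above.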
You are using AWS Route53 to host your DNS and you need to back up all hosted zones, along with their complete records, in a format that is also compatible with BIND. Don’t scratch your head: the script below will take care of it. All you need is to install a handy tool for managing Route53 called “cli53“.
apt-get install python-pip
pip install cli53
Once cli53 is installed, download the script from here; before executing it you only need to specify your access keys. The script will fetch the names of all domains currently created in your account and back them up in BIND format to individual files.
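The core of such a script can be sketched roughly as follows. This is a hypothetical sketch, not the downloadable script itself: it assumes cli53 picks up credentials from the standard AWS environment variables, and the exact parsing of the `cli53 list` output may need adjusting for your cli53 version:

```shell
#!/bin/bash
# Hypothetical sketch: export every Route53 hosted zone to a BIND zone file.
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"       # assumption: cli53 reads keys from the environment
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"

mkdir -p route53-backup

# 'cli53 list' prints the hosted zones; pull out the zone names
# (the awk/sed parsing here is an assumption about the output format)
for zone in $(cli53 list | awk '/Name/ {print $2}' | sed 's/\.$//'); do
    # 'cli53 export' writes the zone in BIND format to stdout
    cli53 export "$zone" > "route53-backup/${zone}.bind"
done
```

Each zone ends up in its own `.bind` file, which can be re-imported later with `cli53 import` or loaded into a BIND server.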
There are situations when you want to back up your data directly to a remote location, e.g. you are out of free space, you want to provide a backup of the data to your client or developers, or the server is going to be terminated shortly (abuse, payment default, etc.) and you don’t have enough free space to back up and download your data in time.
For situations like those I have written a script which backs up your webroot and MySQL database directly to an S3 bucket (without creating any local copy before uploading to S3) and creates instant public or private download links.
Download the script from here. The script requires s3cmd to be installed and configured, but don’t worry, the script will take care of that as well; all you need is to adjust the following variables before executing it.
public: Yes or No (decide whether the download link should be public or private)
tempfiles: Path where the script stores temporary files
access_key: AWS Access Key
secret_key: AWS Secret Key
bucket: S3 bucket name
webroot: Path to your webroot
db: Name of your database
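The core trick, streaming straight to S3 without a local copy, can be sketched roughly like this. The bucket, paths, and database name below are placeholders, and the sketch assumes a recent s3cmd that supports reading from stdin with `put -`:

```shell
#!/bin/bash
# Hypothetical sketch: stream webroot and database to S3 with no local copy.
bucket="my-backup-bucket"      # placeholder values, adjust to your setup
webroot="/var/www/html"
db="mydatabase"
stamp=$(date +%F)

# Pipe a tarball of the webroot directly into the bucket
tar czf - "$webroot" | s3cmd put - "s3://${bucket}/webroot-${stamp}.tar.gz"

# Pipe the database dump directly into the bucket
mysqldump "$db" | gzip | s3cmd put - "s3://${bucket}/${db}-${stamp}.sql.gz"

# A signed, expiring URL serves as a "private" download link (here valid for 24 hours)
s3cmd signurl "s3://${bucket}/${db}-${stamp}.sql.gz" +86400
```

Because both `tar` and `mysqldump` write to stdout and s3cmd reads from stdin, nothing ever touches the local disk, which is exactly what you need on a server that is short on space.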
That’s all; just update the above variables and you are ready to take your backup. Backups were never this easy before, were they?
Some time ago I decided to use CloudFront to serve static content, and everything was smooth and quick as expected, but I noticed that I lost gzip compression on all compressible objects. Amazingly, when I fetched an object directly from my web server (Nginx) the compression was there, but if I loaded the same object through CloudFront there was no compression.
After scratching my head for a while I realized that, by default, Nginx does not apply compression to proxied requests, and Nginx detects a proxied request by the presence of the ‘Via’ header, which CloudFront adds. Unlike some other CDNs, CloudFront does not compress objects on its own, so as a result I lost the compression.
The good thing is that you can fix it easily by instructing Nginx to enable compression on proxied requests too, by setting ‘gzip_proxied‘ to ‘any‘. In some cases you may also need to set ‘gzip_http_version‘ to ‘1.0‘.
In that case CloudFront will store two versions of each compressible object, i.e. compressed and uncompressed; when the client browser’s request contains an ‘Accept-Encoding’ header CloudFront will serve the compressed version, and when the header is missing it will serve the uncompressed version of the object.
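In Nginx configuration terms, the fix looks something like the fragment below (the `gzip_types` list is just an example, adjust it to your content):

```nginx
gzip on;
gzip_proxied any;        # compress responses to proxied requests too (those carrying a 'Via' header)
gzip_http_version 1.0;   # some setups need this, since the proxy may fetch over HTTP/1.0
gzip_types text/css application/javascript application/json image/svg+xml;
```

After reloading Nginx, requests arriving from CloudFront with the ‘Via’ header will get compressed responses just like direct client requests.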
Note: If you are still not seeing compression with CloudFront, then you probably missed invalidating those objects from your distribution.
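Invalidation can be done from the console or with the AWS CLI; for example (the distribution ID and paths below are placeholders):

```shell
# Invalidate specific objects so CloudFront re-fetches them from the origin
# (use --paths "/*" to invalidate everything)
aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE \
    --paths "/css/style.css" "/js/app.js"
```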
Although CloudFront provides its own wildcard certificate (*.cloudfront.net) for free with each distribution you create, in that case you cannot use your own domain CNAME to access the content from the CDN over HTTPS; e.g. you can’t use media.azfarhashmi.com, you can only use the CloudFront-provided domain name, e.g. ‘d3shv1t4v6he9p.cloudfront.net‘.
To use a custom certificate you first have to upload your certificate into IAM, which you can do via the AWS CLI tools.
aws iam upload-server-certificate --server-certificate-name azfarhashmi2015 \
    --certificate-body file://azfarhashmi.com.crt \
    --private-key file://azfarhashmi.com.key \
    --certificate-chain file://azfarhashmi0interm.pem \
    --path /cloudfront/
Here --server-certificate-name is the display name of your SSL certificate that will appear in the CloudFront settings, --certificate-body is the path to your certificate, --private-key is the path to your certificate’s private key, --certificate-chain is the path to your certificate’s complete chain file, and --path will remain /cloudfront/ in our case.
If you get any error uploading certificates, make sure your certificates are in PEM format, that your --certificate-body file does not contain any intermediate or root certificates (as it may when prepared for Nginx, etc.), and that you are providing the complete chain in the correct order, i.e. your root certificate should be last.
Once the certificate is uploaded you can go to the distribution settings, choose the ‘Custom SSL Certificate (stored in AWS IAM)’ option and select the recently uploaded certificate. Make sure that ‘Only Clients that Support Server Name Indication (SNI)’ is selected, otherwise AWS will charge you an additional $600/month for assigning dedicated IP addresses at each edge location.
As of now the SNI option should be enough for you if your content is accessed only by browsers and no other application or library, as all modern browsers have implemented SNI; however, if you are not sure, you can consult the Wikipedia article here.