
Bulk Purge CDN (CloudFlare) objects

There are situations where you need to purge the same item from the CloudFlare cache under several different URLs. For example, our Magento multi-store setup has 14 stores, and some objects are not part of the main CDN domain (cdn.domain.com) but are requested directly from the origin server, for example:


http://store1.domain.com/1.jpg
http://store2.domain.com/1.jpg

It is a pain to manually purge a single item a dozen times, once per store URL, so I wrote a script that generates all of the resulting URLs; you just need to copy and paste them into the CloudFlare purge input box.

Download the script attached here, create a file containing your item paths (relative), as below, and execute the script.


skin/frontend/rwd/default/js/product.js
skin/frontend/rwd/mobile/js/product.js
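
For reference, here is a minimal sketch of what such a generator might look like. It assumes the store domains follow the pattern store1.domain.com through store14.domain.com and that the relative paths live in a file called paths.txt; both are placeholders you should adjust.

#!/bin/bash
# Minimal sketch: expand each relative path into one URL per store subdomain.
# Assumed layout: stores store1.domain.com .. store14.domain.com and a
# paths.txt file holding one relative path per line; adjust both to match
# your own setup.

PATHS_FILE="${1:-paths.txt}"

while read -r path; do
    [ -z "$path" ] && continue          # skip blank lines
    for i in $(seq 1 14); do
        echo "http://store${i}.domain.com/${path}"
    done
done < "$PATHS_FILE"

Redirect the output to a file and paste its contents into the CloudFlare purge box.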

Purging made easy 🙂


Backup AWS Route53 Zones in BIND format

You are using AWS Route53 to host your DNS and you need to back up all hosted zones, with their complete records, in a format that is also compatible with BIND. Don’t scratch your head; the script below will take care of it. All you need is to install a handy tool for managing Route53 called “cli53“.


apt-get install python-pip
pip install cli53

Once cli53 is installed, download the script from here; before executing it you only need to specify your access keys. The script will fetch the names of all domains currently created in your account and back each of them up to an individual file in BIND format.
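
For reference, here is a minimal sketch of the same idea, assuming cli53 reads the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables and that the zone names can be parsed out of the cli53 list output (adjust the parsing to your cli53 version):

#!/bin/bash
# Minimal sketch: export every Route53 hosted zone to its own BIND zone file.
# Replace the keys and the backup directory with your own values.

export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"

BACKUP_DIR="/root/route53-backups/$(date +%F)"
mkdir -p "$BACKUP_DIR"

# Pull the zone names out of `cli53 list`, then export each zone in BIND
# format to an individual file.
for zone in $(cli53 list | awk '/Name:/ {print $2}' | sed 's/\.$//'); do
    cli53 export "$zone" > "$BACKUP_DIR/${zone}.zone"
done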

 


Automatic RootKit scanning using “chkrootkit”

Scanning servers for the presence of hidden rootkits has always been an important part of system administration, but the recent discovery of Linux ransomware (and, luckily, a fix for it) has highlighted its importance and motivated me to write something about it. So what have I come up with?

chkrootkit is among the most famous and widely used tools for detecting rootkits, so I decided to write a script that takes care of the installation, the scan and the report. The installation is pretty easy and quick, so why did I write the script?

Usually, when smart hackers manage to inject a rootkit into a server, they also scan the system for the presence of an anti-rootkit tool and, if they find one, replace it with a compromised version. They are not done yet: they also replace the system binaries that anti-rootkit tools rely on (ls, ps, egrep, awk, etc.), so that, running against the modified binaries, the anti-rootkit tool fails to detect the rootkit and you think you are safe.

So what does the script do?

The script downloads the latest version of chkrootkit, compiles it, downloads known-good copies of my system binaries (Ubuntu 14.04.3 LTS; you should replace them with your own, as you should not use mine on your servers), scans the system and sends the report to your email. Since we don’t want hackers to know that we are using an anti-rootkit tool, once everything is done the script also removes all traces of the chkrootkit installation and anything related to it. Finally, it removes itself (the bash script) from the filesystem and clears the shell history. You can find the script here

Once the script is downloaded you need to execute it as below:

./chkrootkit.sh && history -c && history -w
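
For reference, here is a minimal sketch of the core steps such a script performs. The e-mail address and working directory are placeholders, the download URL may need adjusting, it assumes the mail command is available, and it omits the replacement of the system binaries, which you should source yourself.

#!/bin/bash
# Minimal sketch: download, build and run chkrootkit, mail the report,
# then remove the traces. EMAIL and WORKDIR are placeholders.

EMAIL="you@example.com"
WORKDIR="/tmp/.crk-$$"

mkdir -p "$WORKDIR" && cd "$WORKDIR" || exit 1

# Fetch and build the latest chkrootkit release
wget -q ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.tar.gz
tar -xzf chkrootkit.tar.gz
cd chkrootkit-* || exit 1
make sense

# Run the scan and mail the report
./chkrootkit > /tmp/chkrootkit-report.txt 2>&1
mail -s "chkrootkit report for $(hostname)" "$EMAIL" < /tmp/chkrootkit-report.txt

# Remove the traces of the installation and the report, then the script itself
# (the shell history is cleared by the caller, as shown above)
cd / && rm -rf "$WORKDIR" /tmp/chkrootkit-report.txt
rm -f -- "$0"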

Note: Before using the script you should also review your hosts file (/etc/hosts) for any malformed entries, as that is one way a hacker can redirect you to their own site and make you download a modified version of chkrootkit.

Don’t set it up as a cron job, as that would help a hacker detect the presence of the anti-rootkit tool.

Update: If you are receiving a “chkrootkit: can’t find `ssh’” error, it is a bug in the current release; you can find additional details here.


HOT Backups to S3 Directly

There are situations when you want to back up your data directly to a remote location, e.g. you are out of free space, you want to hand a copy of the data to your client or developers, or the server is going to be terminated shortly (abuse, payment default, etc.) and you don’t have enough free space to create a backup and download it in time.

For situations like these I wrote a script that backs up your webroot and MySQL database straight to an S3 bucket (without creating a local copy before uploading to S3) and creates instant public or private download links.

Download the script from here. The script requires s3cmd to be installed and configured, but don’t worry, the script takes care of that as well; all you need to do is adjust the following variables before executing it.

Variables:

public: Yes or No (decides whether the download link is public or private)
tempfiles: Path where the script downloads temporary files
access_key: AWS Access Key
secret_key: AWS Secret Key
bucket: S3 bucket name
webroot: Path to your webroot
db: Name of your database

That’s all; just update the above variables and you are ready to take your backup. Backups were never this easy, were they?
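
For reference, here is a minimal sketch of the streaming idea the script is built around, i.e. piping tar and mysqldump straight into s3cmd so no local archive is written. It assumes an s3cmd version that supports uploading from stdin and MySQL credentials available via ~/.my.cnf; the bucket, paths and database name are placeholders.

#!/bin/bash
# Minimal sketch: stream the webroot and the database directly to S3
# without writing a local archive. All values below are placeholders.

bucket="my-backup-bucket"
webroot="/var/www/html"
db="mydatabase"
stamp=$(date +%F-%H%M)

# Stream a compressed tarball of the webroot straight to S3
tar -czf - "$webroot" | s3cmd put - "s3://${bucket}/webroot-${stamp}.tar.gz"

# Stream a compressed database dump straight to S3
mysqldump "$db" | gzip | s3cmd put - "s3://${bucket}/${db}-${stamp}.sql.gz"

# Generate a signed (private) download link valid for 24 hours
s3cmd signurl "s3://${bucket}/webroot-${stamp}.tar.gz" +86400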

 


How to enable Online Certificate Status Protocol (OCSP) Stapling in Nginx

Generally, when you access a secure (HTTPS) site, the browser has to make an additional request to a certificate revocation server, known as the OCSP Responder, to verify whether the certificate has been revoked or not. Because of this overhead, some browsers (Chrome, for example) do not implement OCSP checks at all.

By implementing OCSP Stapling you take this round trip away from the client: your web server periodically queries the OCSP Responder itself and then serves the client both the certificate and the proof that the certificate has not been revoked.

To enable OCSP Stapling in Nginx, add the following lines under your SSL listener.


ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/sub.class1.server.ca.pem;

ssl_trusted_certificate should contain your intermediate certificates followed by your Root CA.
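
Depending on your environment, Nginx may also need a resolver directive so it can look up the OCSP Responder’s hostname at runtime; the values below are just an assumed example, adjust them to your own setup.

resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;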

Once OCSP Stapling is properly configured you can verify it online with SSL Labs or from the command line with OpenSSL.


echo QUIT | openssl s_client -connect azfarhashmi.com:443 -status 2> /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update'

If you see any result then OCSP Stapling is working fine. Don’t forget to replace ‘azfarhashmi.com‘ with your own domain.


Enable HTTP/2 in Nginx

As of now the HTTP/2 protocol has been approved and published, and it is faster than the older protocol, so it’s a good time to implement HTTP/2 on your website. The good thing is that all major servers and browsers are already compatible with it, so we are good to go. If you are curious, check the current compatibility list here.

I am using Ubuntu 14.04.3 LTS and Nginx mainline v1.9.6. The installation is simple: you just need to install Nginx version 1.9.5 or higher compiled with the ‘--with-http_v2_module‘ option.

Prerequisites:
Nginx 1.9.5 or higher compiled with ‘--with-http_v2_module‘
A TLS-enabled website

To install, create /etc/apt/sources.list.d/nginx.list and add the official Nginx repositories to it.


deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx

Add the signing key and install the latest Nginx mainline version.

wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key
apt-get update
apt-get install nginx

Once Nginx is installed you just need to update the HTTPS listener in your vhost as below, and your website is ready to take advantage of HTTP/2.

listen 443 ssl http2;

Note: If you were using SPDY, you need to remove ‘spdy’ from your listener and simply replace it with ‘http2’; if you are passing any SPDY-related headers, remove them as well.
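
To quickly check that HTTP/2 is actually being negotiated you can use curl (assuming a reasonably recent curl built with HTTP/2 support, and replacing the domain with your own):

curl -sI --http2 https://yourdomain.com | head -n 1

If the first response line starts with “HTTP/2” then the protocol is in use.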


Enable gzip compression with Nginx+CloudFront (CDN)

Some time ago I decided to serve static content from CloudFront, and everything was as smooth and quick as expected, but I noticed that I had lost gzip compression on all compressible objects. Strangely, when I fetched an object directly from my web server (Nginx) the compression was there, but if I loaded the same object through CloudFront there was no compression.

After scratching my head for a while I realized that, by default, Nginx does not apply compression to proxied requests; it detects them by the presence of the ‘Via’ header in the request, which CloudFront sets. And unlike other CDNs, CloudFront does not compress objects on its own, so as a result I lost the compression.

The good thing is that you can fix this easily by instructing Nginx to enable compression for proxied requests as well, by setting ‘gzip_proxied‘ to ‘any‘. In some cases you may also need to set ‘gzip_http_version‘ to ‘1.0‘.


gzip_proxied any;
gzip_http_version 1.0;

With this in place CloudFront will store two versions of each compressible object, compressed and uncompressed: when the client browser’s request contains an ‘Accept-Encoding’ header CloudFront serves the compressed version, and when the header is missing it serves the uncompressed version of the object.
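
You can verify the behaviour with curl; the CloudFront hostname and object path below are placeholders, so replace them with your own:

curl -sI -H 'Accept-Encoding: gzip' https://d111111abcdef8.cloudfront.net/js/product.js | grep -i content-encoding

If the request with the ‘Accept-Encoding: gzip’ header returns ‘Content-Encoding: gzip’ while the same request without the header does not, everything is working as described.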

Note: If you are still not seeing compression through CloudFront, you have probably missed invalidating those objects in your distribution.


Invalidate CDN (CloudFront) objects via “bash” script

I have seen many people looking for a way to invalidate / purge objects from a CDN (CloudFront). I needed something similar when I decided to give a little more power to my developers so they could invalidate objects themselves. So here is my little effort, which invalidates objects in AWS CloudFront and reports back when the invalidation has actually completed.

Prerequisite: AWS CLI tools
http://docs.aws.amazon.com/cli/latest/userguide/installing.html#install-bundle-other-os

Once you have set up the AWS CLI tools and downloaded the CDN invalidation script from here, you just have to adjust a few variables within the script and you are ready to punch your CDN via the shell.

Variables to update:

email: Your email address.
distributionid: Your CDN distribution ID.
json: Path to the resulting JSON file (contains the links in JSON format).
file: Path to a file containing all the links you want to invalidate, one per line.

You can couple the script with a real-time or scheduled file-system change monitoring tool to achieve instant, automatic invalidation of modified objects.
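
As a minimal sketch of the same idea implemented directly with the AWS CLI (the distribution ID, e-mail address and paths file below are placeholders, and it assumes the mail command is available):

#!/bin/bash
# Minimal sketch: create a CloudFront invalidation for the paths listed in a
# file (one per line, each starting with /) and send a mail once it completes.

distributionid="E1XXXXXXXXXXXX"
email="you@example.com"
file="invalidate.txt"

set -f                                   # keep wildcard paths like /images/* literal
paths=$(tr '\n' ' ' < "$file")           # space-separated list for --paths

# Create the invalidation and capture its ID
inv_id=$(aws cloudfront create-invalidation \
    --distribution-id "$distributionid" \
    --paths $paths \
    --query 'Invalidation.Id' --output text)

# Wait until CloudFront reports the invalidation as completed, then notify
aws cloudfront wait invalidation-completed \
    --distribution-id "$distributionid" --id "$inv_id"

echo "Invalidation $inv_id completed" | mail -s "CDN invalidation done" "$email"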


Securing Solr installation

You can protect your Solr installation in just a few minutes.

    1. Never install Solr inside your web server’s working directories, i.e. under your webroot.
    2. Make Solr listen only on localhost:

       vi bin/solr.in.sh
       SOLR_OPTS="$SOLR_OPTS -Djetty.host=127.0.0.1"

    3. Use localhost and port 8983 as the Solr server address in your application configuration; don’t use an external / public address.
    4. If you want to run SELECT queries from the client’s browser (AJAX calls, etc.), put a reverse proxy in front of your instance and protect the remaining areas of the Solr console (admin, update, etc.). Below is an example Nginx host.


location ~* /solr/\w+/select {
    proxy_pass http://127.0.0.1:8983;
}

location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
    proxy_pass http://127.0.0.1:8983;
}

With the above configuration Nginx will only allow SELECT queries and will ask for authentication on everything else.
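
A quick way to sanity-check the setup from the shell (the domain and core name are placeholders):

# Should return results without credentials (the select handler is open)
curl -s 'http://yourdomain.com/solr/mycore/select?q=*:*&wt=json'

# Should return 401 without credentials (the admin area is protected)
curl -s -o /dev/null -w '%{http_code}\n' 'http://yourdomain.com/solr/admin/cores'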


HTTP Strict Transport Security (Enable with CAUTION)

As Google is encouraging the use of HTTPS across the whole website, it is a good time to force your whole site onto HTTPS. Typically you do this with a simple redirect from your web server, but you can now do it even more efficiently at the client level by sending an additional HSTS header.

Nginx:

add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

Apache:

Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"

This tells the browser that the site should be accessed only via HTTPS, so if you type http://yourdomain the browser itself redirects it to https://yourdomain (once the header has been received and the browser has stored the information locally for future use).
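
You can confirm that the header is actually being sent with curl (replace the domain with your own):

curl -sI https://yourdomain.com | grep -i strict-transport-security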

You can also submit your domain for inclusion in the HSTS preload list. This service submits your domain for inclusion in the Chrome list, which is also used by other major browsers. HSTS is supported by all mainstream browsers and you can find the full compatibility matrix here. The list is hard-coded into browsers, so the browser will automatically use HTTPS for any domain included in it.

So far so good, so where is the CAUTION?

If you want to use HSTS you must be sure that you will keep supporting HTTPS on the whole site for a long period, especially when you submit it for HSTS preloading. The preload lists are hard-coded into browsers and there is no quick method to de-list your domain; even if you manage to submit a de-listing request, it may take months to take effect, as it involves approval of the request, updating the list in newer browser versions, releasing those versions, and so on, and even that is not enough until your visitors / users actually have the newer browser version installed.

So whenever you are going to use HSTS, pay special attention to “max-age”, “includeSubdomains”, “preload” and the HSTS preload submission; otherwise you will be in big trouble if you later want to stop using HSTS on your site or want to keep serving some sub-domain over plain HTTP. Also, if your certificate is bad (expired, bad_cert_domain, etc.) the browser won’t allow you to bypass the warning and proceed for HSTS-enabled sites.