
Securing a Solr installation

You can protect your Solr installation in just a few minutes.

    1. Never install Solr in your web server's working directories, i.e. under your webroot.
    2. Make Solr listen only on localhost

      vi bin/solr.in.sh
      SOLR_OPTS="$SOLR_OPTS -Djetty.host=127.0.0.1"
    3. Use localhost:8983 as the Solr server address in your application configuration; don't use an external/public address.
    4. If you want to run SELECT queries from the client's browser (AJAX calls etc.) then put a reverse proxy in front of your instance and protect the remaining areas of Solr (admin, update etc.). Below is an example Nginx host configuration.


location ~* /solr/\w+/select {
    proxy_pass http://127.0.0.1:8983;
}
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
    proxy_pass http://127.0.0.1:8983;
}

With the above configuration, Nginx will allow only SELECT queries and will ask for authentication on everything else.
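
You can verify the behaviour with curl from a workstation; the core name "mycore" below is just a placeholder for one of your own cores:

# SELECT handler should pass through without authentication (expect HTTP 200)
curl -s -o /dev/null -w "%{http_code}\n" "http://yourdomain/solr/mycore/select?q=*:*"

# Admin area and update handler should demand credentials (expect HTTP 401)
curl -s -o /dev/null -w "%{http_code}\n" "http://yourdomain/solr/admin/cores"
curl -s -o /dev/null -w "%{http_code}\n" "http://yourdomain/solr/mycore/update"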


HTTP Strict Transport Security (Enable with CAUTION)

As Google is encouraging the use of HTTPS across whole websites, it is a good time to force your entire site onto HTTPS. Typically you do this with a simple redirect from your web server, but you can do it even more efficiently at the client level by sending an additional HSTS header.

Nginx:

add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

Apache:

Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"

This tells the browser that the site should be accessed only via HTTPS, so if you type http://yourdomain the browser itself redirects to https://yourdomain (once the header has been received, the browser stores the information locally for future use).
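
You can quickly confirm the header is actually being sent with curl (replace yourdomain with your own host):

curl -sI https://yourdomain | grep -i strict-transport-security

If everything is in place you should see the Strict-Transport-Security line from the configuration above echoed back in the response headers.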

You can also submit your domain for inclusion in the HSTS Preload list. This service submits your domain for inclusion in the Chrome list, which is also followed by other major browsers. HSTS is supported by all mainstream browsers and you can find the full compatibility matrix here. This list is hard-coded into browsers, so the browser will automatically use HTTPS for domains included in it.

So far, all good. So where is the CAUTION?

If you want to use HSTS you must be sure that you will support HTTPS on the whole site for a long period, especially when you submit it for HSTS Preloading. Those lists are hard-coded into browsers and there is no easy way to de-list your domain; even if you manage to submit a de-list request, it may take months to take effect, as this involves approval of the request, updating the list in a newer browser version, releasing that version, and so on. Even that is not enough until your visitors/users actually have the newer browser version installed.

So whenever you are going to use HSTS, pay special attention to “max-age”, “includeSubdomains”, “Preloading” and “HSTS Preload Submission”; otherwise you will be in big trouble if you later want to stop using HSTS on your site or have a sub-domain that cannot support HTTPS. Also, if your certificate is bad (i.e. expired, bad_cert_domain etc.) the browser won't allow you to bypass the warning or proceed on HSTS-enabled sites.


Magento Performance Tips [Rare]

By now you must have gone through tons of optimization tips related to Magento, but here I am going to share my personal experience of fighting with Magento. These tweaks really helped me improve overall Magento performance, and the good thing is that most are also applicable to other PHP+MySQL based applications.

I was stuck with performance issues and had tried almost all the common suggestions spread over different sites and forums, but I was still not able to fix some of the issues. So what did I do?

  1. Get a faster clock speed per core and a latest-generation CPU. Believe me, the number of cores and the amount of memory sound good on paper, but they only help when you have to handle more visitors at a given time. If you want to decrease PHP page generation time, only a faster core will help, because a normal visitor's request is handled by a single core, so only a faster core can finish the job in less time.
  2. No one can beat the power of a physical server, so always go for a dedicated server instead of cloud, VPS or any other type of virtualized server, provided you can design a reliable disaster recovery plan.
  3. If you have enough computing power on your web server then always go for a local MySQL instance, as it will give you better performance compared to a remote MySQL server.
  4. Move the Magento ‘cache’, ‘full_page_cache’ and ‘session’ folders onto a ‘tmpfs’ based RAM disk. I was having random high-IOPS issues and moving them to a RAM disk sorted it (see the sketch after this list).
  5. If you are seeing a high percentage of MySQL temporary tables created on disk despite repeatedly increasing the ‘tmp_table_size’ and ‘max_heap_table_size’ values, then most probably your application uses many TEXT and BLOB columns, so allocating more memory won't help; in that case you can try moving the MySQL ‘tmpdir’ to a RAM disk (with caution).
  6. Play with the XFS and EXT4 filesystems and see which works best in your environment.
  7. Upgrade PHP to 5.5 and use OPcache instead of APC or APCu.
  8. Upgrade MySQL to 5.6.
  9. In case of Apache, move all .htaccess rules into the virtual host; you can then try ‘AllowOverride None’ altogether.
  10. Disable the MySQL slow query log and binlog.
  11. Disable PHP slow_log.
  12. Disable Magento System and Exception logs as well as Profiler.
  13. You can also try mounting the MySQL ‘data’ directory and ‘/var/www/’ with the ‘noatime’ option.
  14. Prefer using a Unix socket for PHP-FPM and MySQL over TCP/IP. The same applies to Redis and Memcached.
  15. Create all Unix sockets on a RAM disk (tmpfs) instead of on disk.
  16. If you are seeing random 502 and 499 errors in Nginx (while CPU, memory, I/O, database etc. look stable) with a PHP-FPM Unix socket, then switch to TCP/IP or increase the listen.backlog, net.core.somaxconn and ulimit values.
  17. Try the newer PHP MySQL driver ‘mysqlnd’ over ‘libmysql’.
  18. Try the PHP ‘memcached’ driver over ‘memcache’.
  19. Set up “Parallel Downloads” for different types of content.
  20. Set up HTTP/2 or SPDY to improve HTTPS performance.
  21. Use external Analytics like GA.
  22. Properly set up robots.txt, block bad bots and private areas of the site, and define a crawl delay.
  23. Replace the regular Search and Layered Navigation with Solr or Sphinx; I personally use Algolia.
  24. Set up scheduled “Magento Log Cleaning”.
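
For tip 4, here is a minimal sketch of how the tmpfs mounts could look; the paths assume a Magento install under /var/www/magento and the sizes are only examples, so adjust both to your setup (and remember that anything on tmpfs is lost on reboot, which is fine for cache and session data you can afford to drop):

# /etc/fstab entries for RAM-backed Magento cache and session folders
tmpfs /var/www/magento/var/cache           tmpfs size=512m,noatime 0 0
tmpfs /var/www/magento/var/full_page_cache tmpfs size=512m,noatime 0 0
tmpfs /var/www/magento/var/session         tmpfs size=256m,noatime 0 0

# Mount them (or reboot) and hand ownership back to the web server user
mount -a
chown -R www-data:www-data /var/www/magento/var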

Obviously, along with the suggestions above, you still need to apply all those tweaks and tips you most likely already know.


How To Get The Most Out of AWS Free Tier | Free VPS Server

Today I will explain how to make efficient use of the AWS EC2 Free Tier. I was using a DigitalOcean 512MB server for my blog, paying them $5 each month, and it was working absolutely fine for my site. Then I realized: why not try the AWS Free Tier?

The AWS Free Tier provides 12 months of free service with some limitations: mainly 750 hours of a “t2.micro” EC2 instance (1GB memory), which covers a whole month, 30GB of General Purpose SSD or Magnetic storage (EBS), 15GB of bandwidth, and many other free services like RDS, S3 etc., which we discuss later. To see what is included in the Free Tier you should read the page below before deploying anything.

http://aws.amazon.com/free

To get started you just need to sign up for a new AWS account, and for that you just need a valid credit card, phone number and email address. Once you have finished setting up and verifying the account, confirm whether you are eligible for the Free Tier by visiting “Billing & Cost Management”, where you should find “You are eligible for the AWS Free Usage Tier. See the Getting Started Guide AWS Free Usage Tier to learn how to get started with the free usage tier.” under “Alerts & Notifications”.

Now the tricky part is how to distribute the limited resources we have and design a good architecture that provides decent performance, capacity and basic disaster recovery. Below is the configuration I started with.

  1. “Ubuntu Server 14.04 LTS (HVM), SSD Volume Type” based “t2.micro” EC2 instance. (HVM performs slightly better than PV.)
  2. 20GB General Purpose SSD for the root volume, which also holds your application data (/var/www).
  3. 10GB General Purpose SSD as a BACKUP disk where we store on-site backups.
  4. “db.t2.micro” instance for RDS with 20GB database storage.
  5. 20GB of backup storage for RDS database backups / snapshots.
  6. Mount a 5GB S3 bucket (in a different region than the server) for off-site backups.

This type of configuration is a good start for small sites or personal blogs; it provides decent performance, capacity and reliable disaster recovery while ensuring you still fall within the Free Tier limits.

Most of you are also looking for a static IP for your server, and the good news is that you can allocate an EIP and attach it to your instance; Amazon won't charge you for it as long as it is associated with a running instance. Once you are done setting up the server there is one more step, which involves requesting Amazon to white-list your EIP and create the corresponding rDNS record so you can send emails from your application and server.

https://aws.amazon.com/forms/ec2-email-limit-rdns-request?catalog=true&isauthcode=true
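
If you prefer the command line over the console, allocating and attaching the EIP mentioned above can also be done with the AWS CLI; the instance ID and allocation ID below are placeholders:

# Allocate a new Elastic IP in the VPC and note the returned AllocationId
aws ec2 allocate-address --domain vpc

# Attach it to your running instance
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-12345678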

Lastly, an important point is to keep your eye on your AWS account billing and make sure nothing is being added to your bill. You can do this by going to “Billing & Cost Management”.

The tools I used for backups are below.

On-Site Backup: mysqldump & rsnapshot

Off-Site Backup: S3backer, duplicity, s3cmd
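
As a rough sketch of how the on-site/off-site split fits together, a nightly cron script along these lines is enough for a small blog; the RDS endpoint, credentials, paths and bucket name are placeholders, and rsnapshot handles the file-level on-site copies separately:

#!/bin/bash
# Nightly database dump kept on the local BACKUP disk, then pushed off-site to S3
DATE=$(date +%F)
mysqldump -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u bloguser -pSECRET blogdb \
  | gzip > /backup/mysql/blogdb-$DATE.sql.gz

# Off-site copy to the bucket in the other region
s3cmd put /backup/mysql/blogdb-$DATE.sql.gz s3://my-offsite-backups/mysql/

# Keep only the last 7 local dumps
find /backup/mysql -name '*.sql.gz' -mtime +7 -delete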

Note: do not take EBS Snapshots, as Amazon provides only 1GB of snapshot storage for free. EBS Snapshots are crucial for your DR, but unless you are ready to pay, don't use them.

UPDATE:

You can also create a CloudWatch alarm which will send you an email notification when your monthly cost for any service goes higher than $0. Again, visit “Billing & Cost Management” and under “Alerts & Notifications” you will find the message “Your account is enabled for monitoring estimated charges. Set your first billing alarm to receive an e-mail when charges reach a threshold you define.” So just click on Set your first billing alarm and set up an alarm with a ‘0’ value and your email address. Voila.


How to setup Nginx+HHVM on Debian Wheezy (HipHop Virtual Machine)

Nowadays HHVM is a hot topic for PHP developers and sysadmins. HHVM is an open-source virtual machine designed for executing programs written in Hack and PHP. HHVM uses a just-in-time (JIT) compilation approach to achieve superior performance while maintaining the development flexibility that PHP provides.

So let's look at how to install and configure it with Nginx.

First you need to add the HHVM repository.


echo deb http://dl.hhvm.com/debian wheezy main | tee /etc/apt/sources.list.d/hhvm.list
wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | apt-key add -
apt-get update

Install Nginx and HHVM packages

apt-get install nginx-extras hhvm

Make the HHVM service start on boot.

update-rc.d hhvm defaults

If you execute PHP scripts via the PHP CLI then you also need to replace the PHP CLI binary.

/usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60

Finally, create a new virtual host in Nginx with a very basic configuration.

server {
    root /var/www/hhvm;
    index index.php;

    server_name hhvmtest.com;

    location ~ \.(hh|php)$ {
        fastcgi_keep_conn on;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

To test whether Nginx+HHVM is able to execute PHP, create a file index.php in your webroot (/var/www/hhvm) and put the code below in it.

<?php echo phpinfo(); ?>

Restart the Nginx and HHVM services.

/etc/init.d/hhvm restart
/etc/init.d/nginx restart
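
You can then check from the command line with curl (assuming hhvmtest.com resolves to this server, e.g. via an /etc/hosts entry):

curl http://hhvmtest.com/index.php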

This should return only “HipHop” instead of the standard phpinfo() output, so don't panic. You can also verify whether HHVM is in action in place of the default PHP CLI binary by executing the command below.

/usr/bin/php -v

This should return HHVM details instead of the standard php -v output.

Now your basic Nginx+HHVM web stack is ready; you can test a real app on it (WordPress is my favorite) and customize it as per your requirements.


Bootstrap Debian on Google Compute Engine (GCE) using bootstrap-vz

Google provides clean Debian images on its cloud, but in some cases you need to further customize them to your needs. Here I will explain how to customize and bootstrap the official Debian image for re-usability.

Prerequisite: a GCE account with billing enabled and a project.

The first thing you need to set up is the gcloud command-line tools.


curl https://sdk.cloud.google.com | bash

Follow the on-screen instructions.

Reload your user profile by re-connecting your SSH session, then cd into the google-cloud-sdk directory and request a token.


cd google-cloud-sdk
gcloud auth login

Copy the link, open it in your browser and grant access when asked. This will return a code which you need to paste into your terminal.

Now your machine is authorized and you can start using the gcloud tools from this system. The first thing you need to specify is the default project ID.


gcloud config set project xxxxxxxxx-xxxx-xxx

Now it's time to test the gcloud tools. You can run any command, so let's see the list of regions.


gcutil listregions

It will show you the list of all regions, as below.


+--------------+--------+----------+----------------+------------------+
| name         | status | cpus     | disks-total-gb | static-addresses |
+--------------+--------+----------+----------------+------------------+
| asia-east1   | UP     | 2.0/24.0 | 20.0/10240.0   | 2.0/7.0          |
+--------------+--------+----------+----------------+------------------+
| europe-west1 | UP     | 0.0/24.0 | 0.0/10240.0    | 0.0/7.0          |
+--------------+--------+----------+----------------+------------------+
| us-central1  | UP     | 2.0/24.0 | 0.0/10240.0    | 2.0/7.0          |
+--------------+--------+----------+----------------+------------------+

Now you need to launch a new instance in any region with Debian 7.

gcloud compute instances create my-instance --image debian-7 --zone us-central1-f --machine-type f1-micro

Here debian-7 is an alias for the latest Debian 7 image provided by Google and Debian. Note the external IP of your instance.

In order to SSH into the server you need to generate a new key pair and store its public key on your instance.

ssh-keygen
echo root:$(cat /home/azfar/.ssh/id_rsa.pub) > /tmp/a
gcloud compute instances add-metadata my-instance --metadata-from-file sshKeys=/tmp/a --zone us-central1-f

SSH into the instance using root and private key.


ssh -i /home/azfar/.ssh/id_rsa root@<instance-external-ip>

The first things you need to do on the new instance are:

apt-get update
apt-get upgrade
apt-get install git parted kpartx qemu-utils git debootstrap python-pip
pip install termcolor jsonschema fysom docopt pyyaml

Clone the bootstrap-vz repository and check out the development branch. It is important to use the development version as it has many fixes and new features, and its configuration is based on YAML instead of JSON.


git clone https://github.com/andsens/bootstrap-vz.git
cd bootstrap-vz
git checkout development

Now edit manifests/gce.manifest.yml and customize it to your needs. You can start with my sample gce.manifest.yml, which removes and installs a few packages and also executes a shell command. You can add further functionality too, and the resources below are good to review for that.

http://andsens.github.io/bootstrap-vz/manifest.html
http://andsens.github.io/bootstrap-vz/plugins.html

For the development version documentation you can check out the gh-pages-dev branch.

git checkout gh-pages-dev

Now you are ready to start the bootstrapping process.

./bootstrap-vz manifests/gce.manifest.yml

Make sure everything went well; this will create an image in /mnt/target which you need to upload to Google Cloud Storage. For this you need to create a bucket where you will upload the image, and then create a GCE image from it.

gsutil mb gs://yourbucket
gsutil cp /mnt/target/debian-7-7-wheezy-v20150101.tar.gz gs://yourbucket
gcloud compute images create debianwheezy --source-uri gs://yourbucket/debian-7-7-wheezy-v20150101.tar.gz

Now it's time to launch a new instance from our image and test whether our modifications are in place.

gcloud compute instances create my-instance2 --image debianwheezy --zone us-central1-f --machine-type f1-micro

You need to add your previously created key to the new instance, as we did earlier.

gcloud compute instances add-metadata my-instance2 --metadata-from-file sshKeys=/tmp/a --zone us-central1-f

Once the key is added, log in to the new instance and verify that your changes exist.
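
For example, if your manifest installs a package such as nginx (just a hypothetical; substitute whatever your manifest actually adds or removes), a quick check after logging in would be:

dpkg -l | grep nginx        # should be listed if the manifest installed it
cat /etc/debian_version     # confirm the Debian release of the bootstrapped image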


Enable SPDY on Debian

A few months back Google announced that moving your site to HTTPS will give you a boost in ranking. Though the boost is very minor at the moment, Google has said that its weight will increase, so it's a good time to enable HTTPS on your server. There are other advantages of HTTPS as well, but they all come with a performance tradeoff due to the number of extra “handshake” packets in the initial communication, the extra CPU cycles required to encrypt/decrypt data, no caching of HTTPS content, etc.

So what to do?

Don't worry, you can still do a few things to improve your site performance, and one is the use of SPDY with HTTPS, which will give you a little boost by compressing request and response headers, multiplexing requests over a single connection, etc. The process is very simple and requires just a couple of minutes (provided your site is already configured with standard HTTPS/SSL).

Apache:

First download the required package from https://developers.google.com/speed/spdy/mod_spdy/. As I have 64-bit Debian, I will go for the 64-bit package.


wget https://dl-ssl.google.com/dl/linux/direct/mod-spdy-beta_current_amd64.deb

Install the .deb that you downloaded

dpkg -i mod-spdy-*.deb
apt-get -f install

Enable the Apache module.

a2enmod spdy
/etc/init.d/apache2 restart

There is one more change required to make SPDY work: activate the module in the Apache mod_spdy config. Edit spdy.conf and make sure you have the line below.


SpdyEnabled on

Restart Apache so the new changes take effect.


/etc/init.d/apache2 restart

Now you are ready to test the SPDY functionality, and you can test it in various ways. The easiest is to visit spdycheck.org and test your site. Another way is to install the SPDY indicator Chrome extension, which will show a green lightning icon along with the SPDY protocol version in the browser address bar if the site is SPDY enabled. Yet another way is to visit your site in Chrome and then open a new tab as below, which will show you the SPDY status.

chrome://net-internals/#spdy
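
If you prefer the command line, OpenSSL can also show which protocols the server advertises via NPN (this needs OpenSSL 1.0.1 or newer; yourdomain is a placeholder):

openssl s_client -connect yourdomain:443 -nextprotoneg ''

Look for a line like "Protocols advertised by server: spdy/3.1, http/1.1" in the output.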

Once all is good, add the SPDY repository to your sources list so you get the latest package automatically. Create a new file /etc/apt/sources.list.d/mod-spdy.list and add the repo.


# /etc/apt/sources.list.d/mod-spdy.list
deb http://dl.google.com/linux/mod-spdy/deb/ stable main

Test whether the newly added repository is working.


apt-get update
apt-get upgrade

Nginx:

Nginx above v1.5 supports the SPDY 3 protocol, so make sure you have the latest version installed. You can check your Nginx version with the command below.

nginx -V

Make sure you see --with-http_spdy_module in the list of compiled modules. To enable it you just need to add the spdy option to your SSL listener, so your new config will look like below.


server {
    listen 443 ssl spdy;
    ...
    ...
}

After saving the new config, just restart the Nginx service.

/etc/init.d/nginx restart

If all is good, your site is ready to ROCK!!


Squid 3 with SSL Bumping and Dynamic Certificates generation

This document guides you through configuring Squid with SSL bumping and dynamic certificate generation on Debian 7.

First download the Squid 3.4 source code from the official site and extract it.

wget http://www.squid-cache.org/Versions/v3/3.4/squid-3.4.10.tar.gz
tar -zxvf squid-3.4.10.tar.gz

Install required packages.

apt-get install build-essential libssl-dev

cd into the squid-3.4.10 folder and configure it.

./configure --prefix=/usr/local/squid --enable-icap-client --enable-ssl --enable-ssl-crtd --with-default-user=squid

Now compile and install it.

make all
make install

Once installed, create a new user and give it ownership of Squid's log directory.

useradd squid
chown -R squid:squid /usr/local/squid/var/logs/

Before starting Squid, create the swap directories.

/usr/local/squid/sbin/squid -z

Now start the squid process

/usr/local/squid/sbin/squid

If there is any issue, debug it:

/usr/local/squid/sbin/squid -k parse
/usr/local/squid/sbin/squid -NCd1

Now you should have Squid running on port 3128. For SSL bumping and dynamic certificate generation you have to create your own CA (Certificate Authority).

mkdir /usr/local/squid/ssl_cert
cd /usr/local/squid/ssl_cert
openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 -keyout myCA.pem -out myCA.pem

Now we need to modify squid.conf; open it and make the changes below.

http_port 3128 transparent
always_direct allow all
ssl_bump server-first all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
https_port 3127 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/ssl_cert/myCA.pem
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 4MB
sslcrtd_children 5

Now we need to perform a few more steps for the above.

mkdir /usr/local/squid/var/lib
/usr/local/squid/libexec/ssl_crtd -c -s /usr/local/squid/var/lib/ssl_db -M 4MB
chown -R squid:squid /usr/local/squid/var/lib/ssl_db/

Restart Squid with ‘/usr/local/squid/sbin/squid -NCd1’. You should see something like the below at the end.

Accepting NAT intercepted HTTP Socket connections at local=[::]:3128 remote=[::] FD 18 flags=41
2014/12/13 13:41:54| Accepting NAT intercepted SSL bumped HTTPS Socket connections at local=[::]:3127 remote=[::] FD 19 flags=41

If all is good so far, your Squid configuration is complete, but you need a few more steps in order to use it transparently.

Enable IP Forwarding

echo "1" > /proc/sys/net/ipv4/ip_forward

Configure iptables to accept and forward connections to squid.

iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3127
iptables -I INPUT -p tcp -m tcp --dport 3127 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 3128 -j ACCEPT

Now you need to point your clients' gateway to the Squid box and install the CA certificate in your browsers' certificate stores to avoid certificate warnings.


Squeeze-LTS WARNING: The following packages cannot be authenticated!

I had been using the squeeze-lts repos for a while, but suddenly I started getting the following error when performing apt-get upgrade (after apt-get update):


WARNING: The following packages cannot be authenticated!

So what did I do to sort it?

I removed the squeeze-lts repos and pinning, then ran:


apt-get clean
apt-get update
apt-get upgrade
apt-get install debian-keyring debian-archive-keyring

 
Then I re-added the squeeze-lts repos and pinning and re-ran apt.


apt-get update
apt-get upgrade
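
For reference, a typical squeeze-lts entry in /etc/apt/sources.list looks something like the following (your mirror may differ):

deb http://http.debian.net/debian squeeze-lts main contrib non-free
deb-src http://http.debian.net/debian squeeze-lts main contrib non-free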

 
Voila, the warning was gone and everything was back to normal.

 


POODLE Bite: Exploiting The SSL 3.0 CVE-2014-3566

Google has recently discovered a vulnerability in the SSL 3.0 protocol which potentially compromises secure connections. System administrators are advised to disable SSL 3.0 on their servers and use TLS 1.1 or 1.2.

This vulnerability does not affect your SSL Certificates so there is no need to renew, reissue, or reinstall any SSL Certificates.

How to disable SSL V3.

Apache:
Edit your SSL virtual host and make sure it contains the parameter below.


SSLProtocol all -SSLv2 -SSLv3

Nginx:
Edit your SSL virtual host and make sure it contains the parameter below.


ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

IIS:


Download DisableSSL3.zip, extract it, install DisableSSL3.reg, then reboot the server.

Finally make sure you have restarted the web server service so the changes can take effect.

Amazon has also released instructions on how to cope with this vulnerability.

http://aws.amazon.com/jp/security/security-bulletins/CVE-2014-3566-advisory/

UPDATE:
Once you have disabled SSL v3 you can test your site/server with the following tool.

http://poodlebleed.com/

Alternatively, you can also verify it via the command line.


openssl s_client -connect google.com:443 -ssl3

If there is a handshake failure then SSL v3 is disabled on the server.

UPDATE: 10/16/2014

A mitigation for the vulnerability has been released in OpenSSL 1.0.1j, so let's wait for the patches from Debian, RedHat and other Linux distributors.