Aug 27 2014
 

I have several Xen and KVM VPS servers and they all suffer from the same problem of “nf_conntrack: table full, dropping packet”, but it’s an easy fix.

You can check what nf_conntrack_max is currently set to:
cat /proc/sys/net/nf_conntrack_max
The default is 65535, but all of mine were set to 15000.

Now increase nf_conntrack_max:
echo 100000 > /proc/sys/net/nf_conntrack_max
If you check again, it should show the new value.

To make the change permanent, we add the following to the bottom of /etc/sysctl.conf:
net.nf_conntrack_max = 100000

Please note that the path to “nf_conntrack_max” differs between Linux distributions; the above works for CentOS.
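
To apply the sysctl.conf change without a reboot, you can reload it and confirm the running value. Note that on some distributions the sysctl key is net.netfilter.nf_conntrack_max rather than net.nf_conntrack_max, so adjust the name to match your system.
sysctl -p
sysctl net.nf_conntrack_max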

Aug 26 2014
 

I’ve been playing around with and getting to know CentOS 7, and have decided I prefer iptables (which I have been using for the last few years) over firewalld, so here’s how to swap firewalld for iptables.

Disable the firewalld service.
systemctl disable firewalld
Stop the firewalld service.
systemctl stop firewalld
Now we install the iptables services package.
yum -y install iptables-services
Enable iptables at boot.
systemctl enable iptables
Enable ip6tables at boot. (skip if you don’t use IPv6)
systemctl enable ip6tables
Finally, we start iptables.
systemctl start iptables
And start ip6tables. (skip if you don’t use IPv6)
systemctl start ip6tables

Now our firewall uses iptables and we can add our rules like we always have.
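
As a quick sketch (the port here is just an example), adding a rule and making it persist across reboots with the iptables-services package looks like this:
iptables -I INPUT -p tcp --dport 2222 -j ACCEPT
service iptables save
The save command writes the running rules to /etc/sysconfig/iptables so they are loaded again at boot.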

Jul 07 2014
 

CentOS 7 has now been released to the public.

The release announcement can be found here: http://lists.centos.org/pipermail/centos-announce/2014-July/020393.html
The release notes can be found here: http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7
The fastest way to download is via torrent. I’m currently seeding all the images available, pushing approximately 300Mbps (megabits per second) across my servers, and have moved 200GB+ of bandwidth in under 3 hours.

I will be updating most of my servers over the next few weeks/months and will start updating my tutorials again. Happy testing.

May 24 2014
 

I often hear “I don’t need a backup, I’m running a RAID array”, which is completely wrong. RAID comes in several types, and most allow one or more drives to fail without losing data, but this is not a backup; it just keeps your storage online/active.

Backup

A backup is a separate copy of your data. As a rule, if you don’t have 3 copies of your data you don’t have proper backups: 1 = live storage, 2 = on-site backup, 3 = off-site backup.

Business user

The setup I use for my commercial clients is RAID 1 or 5 in the storage server (this is my live storage). I then use a NAS box, again with RAID 1, for on-site backups, which I normally locate in a different part of the building where possible. Finally, I have the off-site backup; depending on the size of the data to be stored and its average churn, if it’s small I use an online server or service, but for larger amounts I use a simple external USB 3 hard drive which is then taken off-site. The other factor for online backup services is the speed of your internet connection.

Here’s a real world example:
Server – RAID 1 with 4TB usable storage
NAS – RAID 1 with 4TB usable storage
USB 3 HDD – 4TB

This client has a really slow internet connection (0.3 meg upload) and a high churn rate on the data, in excess of 250GB a week, so it would take forever to upload; instead we use an external USB 3 HDD.
The server syncs its files with the NAS over a gigabit network nightly. This gives us a full backup, but we can only go back 1 day; some clients require several days’ worth of backups, in which case you need more storage on the NAS and an incremental backup system or similar.
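
As a rough sketch of that nightly sync (the hostname and paths here are made up for illustration), a cron entry running rsync to the NAS could look something like this:
# /etc/cron.d/nas-backup - mirror the data share to the NAS at 1am every night
0 1 * * * root rsync -a --delete /srv/data/ nas.local:/backup/data/
Because this is a straight mirror it only ever holds the latest copy, which is why it only lets you go back 1 day.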

Home user

Now for the average home user your backup requirements will be different and can be met very cheaply. I use 2 backup solutions myself: a system like the one above for my critical business files, and a simple backup solution for non-critical personal files (pictures/videos/documents).
I store all of these in an online sync service like Dropbox or Google Drive. This gives me a local copy plus an off-site copy that can be accessed anywhere in the world online. I have used Dropbox Pro for years but am moving over to Google Drive for one reason: price. I can get 1TB of storage with Google Drive for the same price as 100GB with Dropbox.

In my next post I will discuss how I back up web servers.

May 05 2014
 

Storage Tape

Sony has developed a new storage tape that holds up to 185 terabytes (TB) of data.
Created with help from IBM, Sony’s technology allows for tapes that can store the equivalent of 3,700 Blu-ray discs.
The tape holds 148 gigabits (Gb) per square inch – more than five times the previous record.

Storage tapes are typically used by businesses to hold huge amounts of data for a long time. Using tape is a cheaper and more energy-efficient method of storing data compared to power-hungry racks full of hard drives.
However, retrieving data from tape is a much slower process: tapes only offer sequential access, which means data has to be accessed in the order it was written. The tape literally has to be moved to the right position for the data to be accessed.

Source: Sony

Archival Disc

Sony and Panasonic have teamed up to develop a next-generation storage disc called the “Archival Disc”, which will eventually hold 1TB.

The first generation will hold 300GB per disc and is scheduled for release in summer 2015. Over time they will roll out 500GB and 1TB versions of the disc. They are being aimed at big companies that need to store vast amounts of data.
The disc will be double-sided and have 3 layers per side, which works out to 50GB per layer for the 300GB version.

In an age of cloud storage, some have questioned the need for such discs, but such systems are actually crucial. Facebook has begun installing 10,000 Blu-ray discs in a prototype storage cabinet as back-ups for users’ photos and videos. Such a system will reduce its costs by 50% and use 80% less energy compared to traditional storage.

Source: Sony