Upgrading Debian from wheezy to jessie

It is simple, but I am issuing no guarantees. This will probably work for you, but there is a chance that something could go wrong; you have been warned.

Losing network connectivity: One thing that sometimes happens is that the Ethernet interface name changes, from eth0 to eth1 for example (this can happen even on a system that has not been exposed to other cards, in which case it can turn from eth0 into eth6). What you can do about this is either be physically present to fix it (in /etc/network/interfaces), add another interface stanza to that file in advance so it works out of the box (a sketch follows after the LXC lines below), or have KVM over IP or something of the sort. Also, if you have LXC installed, the containers will not be able to fire up unless they have the following two extra lines in their config (/var/lib/lxc/CONTAINER_NAME/config):

lxc.autodev = 1
lxc.kmsg = 0
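
For the interface rename itself, here is a minimal /etc/network/interfaces sketch that declares both likely names, so whichever one comes up gets configured (this assumes DHCP; adapt it to your static addressing):

# cover both possible names after the upgrade (illustrative sketch)
allow-hotplug eth0
iface eth0 inet dhcp

allow-hotplug eth1
iface eth1 inet dhcp

With that out of the way, the upgrade itself: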

apt-get update
apt-get dist-upgrade

edit your apt sources (vi /etc/apt/sources.list)

replace the word wheezy with jessie (Wherever you find it)
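
If you prefer a one-liner, this sed command does the replacement in place and keeps a backup of the original file:

sed -i.bak 's/wheezy/jessie/g' /etc/apt/sources.list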

apt-get update
apt-get dist-upgrade

You should be done
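
To confirm you are actually on jessie:

cat /etc/debian_version
# should report 8.x on jessie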

Installing EnhanceIO on Debian Jessie (not wheezy or squeeze; minimum kernel 3.7 onwards)

Using EnhanceIO (the Flashcache fork)

First, on a Debian system, you need to compile EnhanceIO yourself, because Debian has not yet packaged it (as of 2015-05-13).

apt-get install git make gcc pkg-config uuid openssl util-linux uuid-dev libblkid-dev python
apt-get install build-essential
apt-get install linux-headers-$(uname -r)

Now let us download enhanceio

git clone https://github.com/stec-inc/EnhanceIO.git
cd EnhanceIO/Driver/enhanceio/
make && make install

On make install you will see
make[1]: Leaving directory '/usr/src/linux-headers-3.16.0-4-amd64'
install -o root -g root -m 0755 -d /lib/modules/3.16.0-4-amd64/extra/enhanceio/
install -o root -g root -m 0755 enhanceio.ko /lib/modules/3.16.0-4-amd64/extra/enhanceio/
install -o root -g root -m 0755 enhanceio_rand.ko /lib/modules/3.16.0-4-amd64/extra/enhanceio/
install -o root -g root -m 0755 enhanceio_fifo.ko /lib/modules/3.16.0-4-amd64/extra/enhanceio/
install -o root -g root -m 0755 enhanceio_lru.ko /lib/modules/3.16.0-4-amd64/extra/enhanceio/

cd /lib/modules/3.16.0-4-amd64/extra/enhanceio/
insmod enhanceio.ko
insmod enhanceio_fifo.ko
insmod enhanceio_lru.ko

#Now check that it is loaded as a kernel module (hint: will it still be there after a reboot?)
lsmod | grep enhanceio
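
If you want the modules to come back after a reboot (they will not by default), one way to do it, and this is my own assumption rather than part of the EnhanceIO instructions, is to refresh the module index and list them in /etc/modules:

depmod -a
echo enhanceio >> /etc/modules
echo enhanceio_fifo >> /etc/modules
echo enhanceio_lru >> /etc/modules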

cd ../../CLI/
cp eio_cli /sbin/
cp eio_cli.8 /usr/share/man/man8

Now that we have compiled and installed EnhanceIO, we will simply use it; we are all done. Unlike bcache and dm-cache, there is minimal setup for this kind of SSD/flash thumb stick caching.

Now in my setup, I want sdb to act as an SSD cache for sdc1:

eio_cli create -d /dev/sdc1 -s /dev/sdb1 -p lru -m ro -c main_disk_cache

The output of that command was as follows. The same information can also be obtained with the command "eio_cli info", or in much more detail with "cat /proc/enhanceio/main_disk_cache/stats":

Cache Name       : main_disk_cache
Source Device    : /dev/sdc1
SSD Device       : /dev/sdb1
Policy           : lru
Mode             : Read Only
Block Size       : 4096
Associativity    : 256
ENV{ID_SERIAL}=="WDC_WD1001FALS-00J7B0_WD-WMATV0098355", ATTR{partition}=="1"
ENV{ID_SERIAL}=="INTEL_SSDSA2CW120G3_CVPR1481061P120LGN", ATTR{partition}=="1"
Cache created successfully
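
If you ever need to undo this, the CLI also has a delete sub-command that takes the cache name (check man eio_cli on your build to confirm the exact syntax):

eio_cli delete -c main_disk_cache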

Now to see the block size
blockdev --getbsz /dev/sdb1 (512 was the result)
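
Related queries, if you also want the sector sizes for comparison (these are standard blockdev flags):

blockdev --getss /dev/sdb1 (logical sector size)
blockdev --getpbsz /dev/sdb1 (physical sector size)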

The ADATA S596 64GB SSD (500 series)

The controller on this SSD is the JMF612, and the RAM chip (cache) is a Hynix H5PS1G63EFR (SDRAM), 64M x 16 bit = 1Gbit, which is 128MB.

This chip does not suffer from the latency problem of the JMF601A/602A chips (the JMF612 is from 2009, while the JMF601A is from 2006).

The flash that comes with the drive is made by Intel (29F32G08AAMDB), 4GB per chip. With 16 such chips, we have 64GB of flash on the board. The photos will follow once I get them off my camera and onto this blog.

The reason I am keeping a record here is that this drive is probably going to be used as a bcache cache device, and I am concerned about reliability. Granted, it is a 90% read scenario, but some websites claim that data corruption is a downside without providing any elaboration: http://alatest.co.uk/reviews/solid-state-drives/corsair-reactor-r60-jmicron-jmf612-ssd/po3-116271270,383/#overview (the Corsair Reactor R60 is identical to this drive).

The drive is claimed to have read/write speeds of 240/170 MB/s. Compression is not a factor here, unlike other drives that use compression to gain speed, something that only helps for certain types of data.

There is also new firmware on the ADATA website that you should make sure is installed: ADATA S596 Firmware Update Tool V1.0.

Advanced Linux SSD caching for hard drives on Debian Jessie – bcache and enhanceIO

Please note, I am keeping this thread for myself as a reminder, so if at times it feels like it is getting too general, it is probably more of a reminder for me.

Also note that I am organizing this post as I go, as I spend some time creating my new MySQL machine every day, so this is an ACTIVE work in progress.

—————————————————
First note: the benefit of RAID 0 on the SSDs is more about wear than it is about speed.
Here is why and how

1- All modern SSDs use wear leveling to prolong the life of the disk; bcache does nothing in this regard, so using thumb drives is not very smart.
2- Your SSD has to be larger than the active data set, otherwise bcache (or any other caching layer) will start evicting data and writing in its place too often, wearing out your disk. On the other hand, if the data that is always accessed on your hard drive is about 500GB of your 3TB disk and your cache drive is larger than 500GB, odds are the disk will send this data over once and then spin down, because everything we need is already on the SSD, reducing the wear to almost nothing. If you do not have a large enough SSD, odds are you will extend the SSDs' lives with a RAID setup, either software or hardware; it makes no difference.
3- Larger disks have longer lives in general, with or without RAID, for all purposes. The reason is arithmetically simple: the larger the disk, the longer it takes for a flash chip to get its turn for a write again, because the others need to get their turn first; the larger the disk, the more the banks, hence the longer the useful life (a rough worked example follows after this list).
4- Erase commands are expensive, and even with a workaround, TRIM support means little on a busy server.
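
As a back-of-envelope example for point 3 (the endurance figure is an assumption for illustration, not a datasheet value): if a cell is good for about 3000 program/erase cycles and writes are spread evenly, a 64GB drive can absorb roughly 64GB x 3000 = 192TB of writes before wearing out, while a 128GB drive of the same cell type can absorb roughly double that, about 384TB, simply because each cell gets its turn half as often.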
—————————————————
My own setup

1- My computer has 6 sata ports (ASUS P9X79).
2- I prefer never to group spinning hard drives in one metal case that does not have vibration isolation, so the mainboard SATA ports will host the 5 SSDs and the main hard drive (2TB black).
3- I have 2 PCIe 2-port eSATA cards (Silicon Image, supporting port multipliers).
4- 1 SATA port multiplier (Silicon Image) that, even though it has a RAID function, is probably more reliable in JBOD mode; this is where the 5 spinning disks other than the main disk will live.

Most data on the 6 spinning disks is only stored and seldom accessed. Furthermore, most of this data does not get cached because it is accessed sequentially.

The boot partition of the (spinning) hard drive is not cached; the second partition is. This is safer, as the system can boot regardless, and then the RAID array and bcache are initialized with no unforeseen effects. Boot time is not so important, as the PC is always on; we reboot it once a year.

—————————————————
Note for self:
1- What are the possibilities of 2 cache layers, one with write-back and the other as a read-only cache?
Possible advantages:
You can use less reliable flash thumb drives with no wear-leveling mechanism as an extra cache layer, where a failure will not cause data corruption, while leaving more space on the SSDs.
Over-caching considerations?
The SSD will initially hold the same cached data as the thumb drive, but when the hit rate on the thumb drive increases, the SSD will evict that data, so we only keep 1 copy.
The SSD with write-back can make writing a hell of a lot faster for database data.
bcache only allows one SSD per spinning partition; you cannot combine them, while the other methods can! And since the thumb drives are read-only, you could even RAID them (a sketch follows below).
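
If one did RAID the thumb drives for that read-only layer, a minimal mdadm sketch would look like this (the device names are hypothetical, and this is just an illustration of the idea):

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdd1 /dev/sde1
# /dev/md0 can then be handed to the caching layer as the cache device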
Caching software available
Device mapping
bcache (part of the new kernels)
dm-cache (part of the new kernels)
Flashcache (Facebook’s)
EnhanceIO (Fork of flashcache)

EnhanceIO can be attached to any block device on the fly, even when the device is already mounted.
dm-cache is faster, but bcache is safer

If so…
we will probably be using bcache in FIFO mode, and EnhanceIO mostly in LRU (least recently used) mode.

Also take a look at https://github.com/Feh/nocache
—————————————————
You probably noticed that there are not many tutorials and instructions on bcache for Debian online. The reason is that the user-space tools are not packaged for Debian, not even for Jessie. But that is a simple thing; the hard part has already been done at the kernel level, so you do not need to recompile the kernel, but you do need instructions on how to set bcache up!

So here is a step-by-step, no theory and no explanations, just the procedure.

bcache, the simple and safer way

So here is how I set up bcache on my Linux server for MySQL usage.

1- Create 2 partitions on my 2TB disk, one of 200GB and the other of 1800GB (with fdisk or parted, whichever you choose; a parted sketch follows after this list).

2- Delete the first 200GB partition, leaving only the other.

3- Run the Debian installer and instruct it to use the contiguous free space, which is now the small first partition.
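
The partitioning in steps 1 and 2 could look something like this with parted (the device name and the msdos label are assumptions; adjust to your disk):

parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary 1MiB 200GiB
parted -s /dev/sda mkpart primary 200GiB 100%
parted -s /dev/sda rm 1
# partition 2 (the 1800GB one) remains; the installer then uses the free space at the front

Now for the bcache user-space tools: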

apt-get install git make gcc pkg-config uuid openssl util-linux uuid-dev libblkid-dev

git clone https://github.com/g2p/bcache-tools.git

cd bcache-tools

make

make install

-------------------------------------
install -m0755 make-bcache bcache-super-show /usr/sbin/
install -m0755 probe-bcache bcache-register /lib/udev/
install -m0644 69-bcache.rules /lib/udev/rules.d/
install -m0644 -- *.8 /usr/share/man/man8/
install -D -m0755 initramfs/hook /usr/share/initramfs-tools/hooks/bcache
install -D -m0755 initcpio/install /usr/lib/initcpio/install/bcache
install -D -m0755 dracut/module-setup.sh /lib/dracut/modules.d/90bcache/module-setup.sh

-----------------------------------------------------

Now, use wipefs to delete the filesystem on the big partition (/dev/sda2) so that we can format it as bcache rather than ext4

wipefs -a /dev/sda2

Take note of the UUIDs that come out of the following 2 commands.

Why? Well, caching devices are the SSDs and backing devices are the normal SATA disks. What happens when you run the following 2 commands is that you tell the kernel that one is a backing device and the other is a caching device. You can have as many backing and caching devices as you want, but pay attention to this: you cannot use 2 SSDs to cache one disk, although you can use 1 SSD to cache multiple disks. So for example, you can have 2 SSDs caching 5 disks, where 1 SSD caches 3 disks and the other caches 2 disks.

Also, this is Debian, so this step is done for you automatically; you will have /sys/fs/bcache/somediskuuid right off the bat. That does not apply to all Linux systems (some systems might require you to register the devices yourself, along the lines of echo /dev/yourdevice > /sys/fs/bcache/register).

First step, initialize my 1.8TB partition into a bcache block device; -B means this is a backing device.

make-bcache -B /dev/sda5

mkfs.ext4 /dev/bcache0

Then initialize the SSD (120GB Intel SSD) into a bcache block device; -C means caching device.

make-bcache -C /dev/sdb1

--------------------

UUID: 4e9aed54-bf48-4d7a-b5b2-b041f2a811f8
Set UUID: 13544b4e-99de-42a0-905a-c6efbe669151
version: 0
nbuckets: 228944
block_size: 1
bucket_size: 1024
nr_in_set: 1
nr_this_dev: 0
first_bucket: 1

----------------------------

Now we have a backing and a caching device; let's connect them together. This step is called "attaching".

first,

cd /sys/fs/bcache/

ls

the ls command will show you the caching device's UUID, which appears as a folder.
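
The attach itself is done by echoing that cache set UUID into the backing device's attach file. A sketch, using the Set UUID printed above and assuming the backing device came up as bcache0:

echo 13544b4e-99de-42a0-905a-c6efbe669151 > /sys/block/bcache0/bcache/attach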

Now, mount the bcache partition.

mount /dev/bcache0 /hds/cacheddisk (check that you have bcache0 and not bcache1, for example)
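
To confirm the cache really is attached to the backing device (a hedged check; the sysfs layout can differ slightly between kernel versions):

cat /sys/block/bcache0/bcache/state
# "clean" or "dirty" means a cache is attached; "no cache" means the attach did not happen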