Backing up a disk with dd while saving space

The problem with dd is that it copies the whole disk. In reality, the disk could hold only 10GB of data, but the dump file has to be the size of the disk, let's say 100GB.

So, how do we get a dump file that is only around 10GB in size?

The answer is simple: compressing a zero fill file is very efficient (it compresses down to almost nothing).
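
To see why this works, here is a small illustrative sketch (the file name, path, and sizes are just examples, not part of the procedure): 100MB of zeros gzips down to roughly 100KB.

dd if=/dev/zero of=/tmp/zeros.fill bs=1M count=100
gzip -c /tmp/zeros.fill > /tmp/zeros.fill.gz
ls -lh /tmp/zeros.fill /tmp/zeros.fill.gz   # compare the two sizes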

So, first we create a zero fill file with the following command. I recommend you stop the fill while there is still a bit of free space on the disk, especially if the disk has a running database that could need to insert, so stop the running fill with Ctrl+C before you actually fill the whole disk.

cat /dev/zero > zero3.fill;sync;sleep 1;sync;

At this point, you can either delete the zero fill file or keep it; it will not make a difference in the dump size. Deleting it is recommended, but it won't make much of a difference.

Notes
sync flushes any remaining buffers in memory out to the hard drive.
If the fill process stops for any reason, keep the file already written and make a second one, and a third, and whatever it takes; do not delete the existing ones, just make sure almost all of your disk's free space ends up occupied by zero fill files.
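
For example, a small sketch of what that could look like (the fill file names are arbitrary):

cat /dev/zero > zero1.fill;sync;
cat /dev/zero > zero2.fill;sync;
df -h .   # confirm that almost no free space remains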

Now, to dd and compress on the fly (so that you won't need much space on the target drive)

If you want to monitor the dump, you can use pv

dd if=/dev/sdb | pv -s SIZEOFDRIVEINBYTES | pigz --fast > /targetdrive/diskimage.img.gz
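
If you do not already know the drive size in bytes to feed to pv, one way to get it (assuming the source drive is /dev/sdb as above) is:

blockdev --getsize64 /dev/sdb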

Without the monitoring

dd if=/dev/sdb | pigz --fast > /targetdrive/diskimage.img.gz

Now, to dump this image back to a hard drive

Note that using pigz for the decompression in this situation is not recommended. Something along the lines of this

DO NOT USE this one, use the one with gunzip below
pigz -d /hds/www/vzhost.img.gz | pv -s SIZEOFIMAGEINBYTES | dd of=/dev/sdd

will not actually do what you want: without -c, pigz decompresses the file in place (replacing the .gz with the raw image) and sends nothing through the pipe, so dd writes nothing. The recommended way to do it on the fly is with gunzip -c (or pigz -dc); there is also no real benefit to parallel gzip while decompressing, since gzip decompression is essentially single-threaded.

gunzip -c /hds/www/vzhost.img.gz | pv -s SIZEOFIMAGEINBYTES | dd of=/dev/sdb

Or

pigz -dc /hds/www/vzhost.img.gz | dd of=/dev/sdd

My records
The following is irrelevant to you; it is strictly for my records.

mount -t ext4 /dev/sdb1 /hds

dd if=/dev/sdc | pv -s 1610612736000 | pigz --fast > /hds/www/vzhost.img.gz

One that covers doing this for only part of a disk

Assume I want to copy the first 120GB of a large drive where my Windows partition lives; I want it compressed, and I want the free space cleared.

First, in Windows, use SDelete to zero the empty space

sdelete -z c:

Now, attach the disk to a Linux machine (with the destination drive, here /hds/usb1, already mounted) and run one of

dd if=/dev/sdb bs=512 count=235000000 | pigz --fast > /hds/usb1/diskimage.img.gz
dd if=/dev/sdb bs=512 count=235000000 | pbzip2 > /hds/usb1/diskimage.img.bz2

If it is an advanced format (4K sector) drive, you would probably do
dd if=/dev/sdb of=/hds/usb1/firstpartofdisk.img bs=4096 count=29000000

or something like that
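
If you would rather derive the count from the partition table than estimate it, something along these lines works (a sketch, assuming the Windows partition is on /dev/sdb):

fdisk -l /dev/sdb   # note the End sector of the last partition you want to keep
# count = that end sector + 1 (in 512-byte sectors); e.g. 235000000 sectors * 512 bytes is roughly 120GB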

Shrinking Linux disks in VMware Workstation

Here is the theory behind what we are doing

1- Fill all the empty space with zeros. You can do that by writing a gigantic file full of zeros to fill up all the empty space; the command will simply fail once no space is left for the file

cat /dev/zero > zero.fill;sync;sleep 1;sync;

2- Delete the file we just made; the zeros are left behind

rm -f zero.fill

3- Shut down the VM, and go to the Windows host running VMware Workstation

4- Navigate to the directory where the .vmdk files are located.
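
From there, a sketch of the shrink step itself (assuming the virtual disk is called mylinuxvm.vmdk and that vmware-vdiskmanager.exe from the Workstation install directory is on your PATH):

vmware-vdiskmanager.exe -k mylinuxvm.vmdk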

WNDR3700v3: reverting to stock or OpenWrt

In my case, I was switching from DD-WRT to OpenWrt.

I got my Netgear WNDR3700 v3 (which is Broadcom, not Atheros) used from eBay.

First of all, there is a bug in DD-WRT build 21061 that makes it impossible to use SSH, so I logged in with telnet.

Now,

wget http://theplacewhereyouputthefile/filename.bin (the original firmware is .chk not .bin)

Then

mtd -e linux -r write /tmp/x.bin linux

And the router showed things like

Unlocking Linux …

Erasing Linux

Writing from x.bin to Linux … [e]
Writing from x.bin to Linux … [w]

Then, the connection to the router was lost

Then the router was bricked

I did get ping replies from the router, but that did not mean it was working.

So, the next thing to do was this: the router had booted itself into recovery mode, so I pushed the original firmware to it over TFTP.

tftp -i 192.168.1.1 put x.chk

Transfer successful: 7258170 bytes in 29 second(s), 250281 bytes/s

Where x.chk is simply the factory firmware .chk file. Now leave the router alone for more than 5 minutes while it digests the update, then use the web interface to flash the OpenWrt .chk file.

Changing the root password in an LXC container

If you forget your LXC container's root password, you can reset it from the LXC host

1- chroot into the container's filesystem
chroot /var/lib/lxc/vm51/rootfs

2- issue the passwd command and enter the new password for the container

3- type exit to get back to the LXC host prompt
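
If you prefer to do it in one shot, the same thing can be done as a single command (a sketch, assuming the same container path as above):

chroot /var/lib/lxc/vm51/rootfs passwd root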

It is very important to understand that if you don't have something such as fail2ban on your server, it could be that someone has brute-forced their way into your container and changed the root password. In that case, I would absolutely recommend deleting the whole container and re-creating it from scratch.

The reason is that we don't know what the attacker (if any) installed inside the system.

Tar error and how to overcome it

For some reason, while I was extracting a half-terabyte tar.gz file with the following command

tar -xvf thisfile.tar.gz

I got the following errors

tar: Skipping to next header
tar: Error exit delayed from previous errors

So, it turns out that tar archives terminate with a big block of zeros. To tell tar not to treat such a block of zeros as a terminator, you would use the -i switch (placed before the f, not after it, since f must be immediately followed by the archive file name).

So the command would look like

tar -xvif thisfile.tar.gz

It worked for me; it may or may not work for you, but this is one of the reasons you could get this error, because tar does not tell you what the exact problem is.
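
If you prefer the long, self-documenting form of the same switch (GNU tar), this is equivalent:

tar --ignore-zeros -xvf thisfile.tar.gz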

Using axel, a quick example


axel -a -s 10240000 -n 5 URL

-a is the nicer single-line progress view
-s is the maximum speed in bytes per second; here it is about 10MB/s (roughly 80Mbit/s)
-n is the maximum number of connections

———————————–

To download a list of files
1- Put them in a text file (make sure the line endings are Linux-style (\n))
2- Run a while loop from the terminal

while read url; do axel -a -n 3 "$url"; done < /root/download124.txt

Tar and compress a directory on the fly with multithreading

There is not much to it: the tar command is piped into any compression program of your choice.

For speed, rather than using gzip, you can use pigz (to employ more processors / processor cores), or pbzip2, which is slower but compresses more.

cd to the directory your folder is in

then

tar -c mysql | pbzip2 -vc > /hds/dbdirzip.tar.bz2

for more compression
tar -c mysql | pbzip2 -vc -9 > /hds/dbdirzip.tar.bz2

for more compression and to limit CPUs to 6 instead of 8, or 3 instead of 4, or whatever you want to use, since the default is to use all cores
tar -c mysql | pbzip2 -vc -9 -p6 > /hds/dbdirzip.tar.bz2

tar cvf - mysql | pigz -9 > /hds/dbdirzip.tar.gz

Or to limit the number of processors to 6 for example
tar cvf - mysql | pigz -9 -p6 > /hds/dbdirzip.tar.gz

Now, if you want to compress a single file to a different directory

pbzip2 -cz somefile > /another/directory/compressed.bz2
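
And to extract such an archive later, the same tools work in reverse (a sketch, using the archive path from above):

pbzip2 -dc /hds/dbdirzip.tar.bz2 | tar -xvf -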