Linux backups & software lists & upgrading

@iNgeon

Thought I might as well make a thread.

Scenario: formatting & reloading each time a new distro comes out.

  1. need to save packages installed.
    dpkg --get-selections > installed.packages.lst
    Save installed.packages.lst somewhere safe, e.g. a thumb drive or USB disk.
  2. need to install packages on the new installation
    sudo dpkg --set-selections < installed.packages.lst && sudo apt-get dselect-upgrade
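A safe dry-run sketch of the list round-trip (the package names are placeholders, and echo stands in for the real apt-get so nothing actually installs):

```shell
# Fake a saved package list for illustration (on a real system this
# comes from your saved installed.packages.lst).
printf 'pkg-a\npkg-b\npkg-c\n' > installed.packages.lst

# Feed every name to the installer in one go. xargs copes with long
# lists better than $(cat ...). Drop the 'echo' (and add sudo) on the
# new machine to actually install.
xargs -a installed.packages.lst echo apt-get install -y
```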

As suggested via Skype, rather do dist upgrades, but before you do, clone your Linux install so you can always revert back. I like to use dd.

How to create an image of your linux partition: (without compression)

dd if=/dev/sda4 of=28-05-2014.img bs=64K

Explanation:

if= is the input: look at your partitions and check where your / (root) mount point is.
of= is the output; here we simply write the image into the current directory.
bs= is just the block size.
Adjust these as required.

*note you can change the path of the output to send the img to a usb drive.

Restoring the image after a failed dist-upgrade:

Boot from a linux live CD/DVD then open the terminal/konsole

dd if=28-05-2014.img of=/dev/sda4 bs=64K

*note you can change the path of the input to look at the img on the usb drive.

If you wish to make an image but compress it you can do the following:

dd if=/dev/sda4 bs=64K | gzip -c > 28-05-2014.img.gz

to restore the image (boot from live cd/dvd again):

gunzip -c 28-05-2014.img.gz | dd of=/dev/sda4 bs=64K

Note: I’m using dd here only to clone partitions. If you wish to clone the entire hard drive, use the whole device instead, e.g. /dev/sda or /dev/sdb (older IDE disks show up as /dev/hda), depending on your install.
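You can try the whole image/restore cycle safely on a scratch file before touching a real partition (disk.img here stands in for /dev/sda4):

```shell
# Create a 1 MiB scratch "partition" full of random data.
dd if=/dev/urandom of=disk.img bs=64K count=16 2>/dev/null

# Image it with compression, as in the gzip example above.
dd if=disk.img bs=64K 2>/dev/null | gzip -c > backup.img.gz

# "Restore" to a new file and verify both copies are identical.
gunzip -c backup.img.gz | dd of=restored.img bs=64K 2>/dev/null
cmp disk.img restored.img && echo "restore verified"
```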

Upgrading your distro, use the three commands in order:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

@Arby can correct me if I’m wrong here, he’s the nix guru.


That’s pretty spot on.
I normally like to update anything Debian-based by first doing:
sudo apt-get update
sudo apt-get upgrade
sudo do-release-upgrade -d

The dd is always a good idea; if you want to dd an entire disk, you need another disk of equal or greater size to write it to.
Partitions are normally good enough though, and since you would most likely be installing the newer version and then restoring your partition, you don’t need to worry about bootloaders.
My brain is pretty broken at the moment, surviving on minimal sleep, so I will probably be more useful when I have slept again!


If you want to clone files only, you can use rsync

If you have a list of apps that you always use, create a bash script and save it. (same thing as a windows batch file .bat or .cmd)

EG:

#!/bin/bash
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install b43-fwcutter firmware-b43-installer
sudo apt-get install steam
sudo apt-get remove firefox

^ save that into a text file (I called mine new-installation), then to run it: sh new-installation (or chmod +x new-installation && ./new-installation)

just a question , will this also work if you want to backup your linux server to a removable say once a week ?

will it run while a system is live ?

@EyeBall It’s not recommended to run dd on a live system (in my experience the image comes out corrupt). If you want a full system backup of a live system, use rsync. There are a few GUIs for rsync, but save this command below and you’re golden.

That’s a complete waste of space.

Backup /etc, /home and dump your package list: dpkg -l > list.txt

Backup any websites: /var/www (or whatever custom location you may use)
MySQL: use mysqldump. Don’t bother backing up the actual folders; it won’t work, especially if the InnoDB engine is in use.

Not sure for postgres.

^ that is for webservers only, TG?

I use rsync for file & non-web app servers (piped to gzip).

Here is a simplified version of my backup script. It’s easy to modify to add extra folders etc.

#!/bin/bash
LUID=$(id -u)
if [ $LUID -ne 0 ]; then
  echo "$0 must be run via sudo"
  exit 1
fi

MP=<mysql root password goes here>

TDATE=`date +%Y-%m-%d`
BACKUPDIR=/home/user/backup
BTMPDIR=${BACKUPDIR}/${TDATE}
mkdir -p ${BTMPDIR}/db ${BTMPDIR}/sites ${BTMPDIR}/other
cd /
echo -n "Backup of /etc in progress..."
tar zcf ${BTMPDIR}/other/etc-${TDATE}.tar.gz etc
echo " done."
echo -n "Backup of docker configs, etc..."
tar zcf ${BTMPDIR}/other/docker-${TDATE}.tar.gz /home/user/docker
echo " done."
echo -n "Backup of docker discourse..."
tar zcf ${BTMPDIR}/other/docker-discourse-${TDATE}.tar.gz /var/docker
echo " done."
echo "Backup of websites in progress..."
cd /var/virtual
for domain in $(find . -maxdepth 1 -mindepth 1 -type d -exec basename '{}' \;); do
  cd ${domain}
  for host in $(find . -maxdepth 1 -mindepth 1 -type d -exec basename '{}' \;); do
    echo -n "Processing ${host}.${domain}..."
    if [ "${host}.${domain}" = "host.domain.org" ]
    then
      tar zcf ${BTMPDIR}/sites/${domain}.${host}-${TDATE}.tar.gz ${host} --exclude=this/particular/path/*
    else
      tar zcf ${BTMPDIR}/sites/${domain}.${host}-${TDATE}.tar.gz ${host}
    fi
    echo " done."
  done
  cd ..
done
echo "Website backup complete."

echo "Backup of databases in progress..."
cd /var/lib/mysql
for database in $(find . -maxdepth 1 -mindepth 1 -type d -exec basename '{}' \;); do
  echo -n "Processing database ${database}..."
  mysqldump -u root -p${MP} ${database} | gzip -9 -c > ${BTMPDIR}/db/${database}-${TDATE}.gz
  echo " done."
done
echo "Database backup complete."
echo -n "Dumping package list..."
dpkg -l > ${BTMPDIR}/other/packagelist.txt
echo " done."
echo -n "Archiving backups..."
cd ${BACKUPDIR}
tar zcf backup-$TDATE.tar.gz $TDATE && chown $SUDO_USER:$SUDO_USER backup-$TDATE.tar.gz && rm -rf $TDATE
echo " done!"

You’ll be left with a single tar.gz file that you can move elsewhere.


How do you guys protect against compressed tarballs like that possibly corrupting? Or is the strategy simply to back up frequently and hope one of the weekly backups uncompresses successfully?

I don’t trust any hardware much. My thinking is:

  1. make the backup
  2. test the backup: is the compressed file intact?
  3. calculate the checksum (md5 or something else)
  4. copy to offsite location/s
  5. compare checksums

About right?
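The steps above can be sketched end-to-end on a scratch archive (sha256 used as the “something else”; the offsite/ directory is a local stand-in for a real offsite copy):

```shell
# 1. make the backup
mkdir -p data && echo "important" > data/file.txt
tar zcf backup.tar.gz data

# 2. test the backup: gzip stream and tar listing both readable?
gzip -t backup.tar.gz && tar -tzf backup.tar.gz > /dev/null && echo "archive intact"

# 3. calculate the checksum
sha256sum backup.tar.gz > backup.tar.gz.sha256

# 4. copy to the offsite location (local dir standing in here)
mkdir -p offsite && cp backup.tar.gz backup.tar.gz.sha256 offsite/

# 5. compare checksums at the destination
(cd offsite && sha256sum -c backup.tar.gz.sha256)
```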

You could do that, yes.
Although I believe these days it’s better to use SHA-1 than MD5.


They very rarely corrupt, but normally I save them uncompressed.


btw thanks for sharing

Till I make enough money to pay someone to look after my servers/IP, it’s all pretty much up to me. I find this kind of sharing very valuable.


Hey @InsanityFlea

so once I’ve done the rsync and I want to “restore”, how do I run it?

also with rsync?

Thanks again for this info , helps me a lot !

Yip, rsync again. Reverse the script.


Mr @InsanityFlea, I need some advice.

I have a Linux mail server. The HDD is almost full and about 8 years old; still working well but taking strain under the client’s growing needs.

They have agreed to replace the server, but with as little downtime as possible.

Where the problem comes in: all the USB ports don’t work anymore, so no external HDD or memstick to rsync off it.

I was thinking: set up the new Linux server, drop it on the network and rsync straight off the old server onto the new server, overwriting files. Once rsync is done, switch off the old server and drop the new server in its place.

Will this work?

pretty much, yes.

this is not like windows where it has a flappy if you copy a file from one machine to another.*

*hyperbole


Yebo, it will work. Just be careful you don’t overwrite drivers.


Awesome , I like your answers :smiley:

looking forward to this week now , was stressed about it quite badly :slight_smile:

thanks guys , you are awesome

just a stupid question: how do you link the two linux boxes :S — ftp? ssh?

nevermind … found it :blush:
