Bare Metal Restore for RHEL 5.5 and TSM 6.x
Sorry if this isn't in the right place, newbie to the RHN.
I have deployed some RHEL 5.5 x86_64 systems, and I have identical hardware set aside for recovery purposes.
What are my options for bare metal recovery? Searching around on the internet doesn't turn up much beyond folks asking for help and people giving misleading information. (Welcome to the internet.)
Any help is greatly appreciated.
Responses
Hi Aaron,
In general, Red Hat typically recommends using kickstart for all of your deployment and disaster recovery needs. Kickstart is a method for automating an installation by specifying the settings and configuration for anaconda to use, such as the network settings, partitioning layout, package selection, and so on. You can also include scripts in a kickstart that run before, during, and after the installation process to set up just about any aspect of a deployment. A properly written kickstart will deploy your desired configuration on any system, resulting in an identical setup in the areas that matter, while differing in the areas where it needs to (such as network and hardware configuration).
Although you mention that you have identical hardware available for your recovery system now, somewhere down the line you may decide that you want to deploy this same configuration on a newer hardware platform, in which case a bit-for-bit backup of your current server's hard drive would not be ideal. However, you could just run the installation again on the new server using your kickstart, migrate any data over to that new system, and you are done.
The best way to get started with kickstart is to look at the one that was generated for you by your installation, stored in /root/anaconda-ks.cfg. You can also consult the Installation Guide for more details, such as the available options and examples:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html-single/Installation_Guide/index.html
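For illustration, a bare-bones RHEL 5 kickstart along those lines might look like the following; every value here (the install URL, partitioning, password hash, package selection) is just a placeholder to show the structure, not a recommendation for your environment:

  # Minimal RHEL 5 kickstart sketch -- all values are placeholders
  install
  url --url=http://installserver.example.com/rhel5/x86_64
  lang en_US.UTF-8
  keyboard us
  network --device eth0 --bootproto dhcp
  rootpw --iscrypted $1$replacewithyourhash
  authconfig --enableshadow --enablemd5
  firewall --enabled --port=22:tcp
  selinux --enforcing
  timezone --utc America/New_York
  bootloader --location=mbr
  clearpart --all --initlabel
  part /boot --fstype ext3 --size=200
  part pv.01 --size=1 --grow
  volgroup VolGroup00 pv.01
  logvol swap --vgname=VolGroup00 --name=LogVol01 --size=4096
  logvol / --fstype ext3 --vgname=VolGroup00 --name=LogVol00 --size=1 --grow

  %packages
  @base

  %post
  # anything scripted here runs in the freshly installed system,
  # e.g. registering with RHN or restoring application data

The anaconda-ks.cfg from your existing install will already have most of these sections filled in with your actual values, so it is usually the best starting point.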
To take that one step further, you might want to take a look at Red Hat Network Satellite. With a Satellite, you can, among other things, create installation profiles, system groups, and kickstarts; create and clone software channels so that you have customized repositories for each of your deployment types; manage and deploy configuration files, directories, and symlinks; provision, manage, and update systems; and even automate all of this through API calls and scripting. You can also use Satellite to grab all of the package versions from one system and install those same versions on another.
For further information on Red Hat Network Satellite, visit:
http://docs.redhat.com/docs/en-US/Red_Hat_Network_Satellite/index.html
If you are more interested in just taking a copy of one system and restoring it to another, there are a few tools that you can take advantage of:
* rsync: With rsync, you can copy the files and directories you need, compress them, and transfer them to the destination system over the network with a single command. You can either schedule an rsync backup with cron or run it manually on demand. You can also take advantage of rsync's ability to run incremental backups, where it only has to copy and transfer what has changed since the last run.
* dd: This tool will create a bit-for-bit copy of any file, partition, or disk that you specify and write it out to a file or disk of your choosing. You can then transfer this to your recovery system and again use dd to write that out to disk. For disaster recovery purposes, dd backups are difficult to manage due to the size of the files, as well as the fact that any hardware differences between the two systems will need to be accounted for and corrected in the backup image.
* LVM snapshots: If the data you are backing up is on an LVM2 logical volume and you have free space available in that volume group, then you can take a snapshot of the LV for backup purposes. This is useful because, in order to get a consistent backup, you need to ensure that nothing is changing the source files while you are copying them. A snapshot lets you back up a point-in-time version of your LV while not actually preventing anything from continuing to write to it. Without a method for snapshotting or quiescing the system while taking the backup, best practice would be to completely shut down the system and create the backup from rescue mode. Sample invocations for all three of these tools are sketched below.
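To make those concrete, here are sample invocations for each; the hostnames, devices, and volume names are placeholders, so adjust them to match your setup:

  # rsync: push / to a recovery host over SSH, skipping pseudo-filesystems;
  # re-running the same command later only transfers what has changed
  rsync -avz --delete \
      --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
      / backupuser@recovery.example.com:/backups/server1/

  # dd: bit-for-bit image of the whole disk, compressed on the fly...
  dd if=/dev/sda bs=1M | gzip > /backups/server1-sda.img.gz
  # ...and written back out on the recovery system
  gunzip -c /backups/server1-sda.img.gz | dd of=/dev/sda bs=1M

  # LVM snapshot: freeze a point-in-time view of the root LV, back it up, drop it
  lvcreate --size 5G --snapshot --name rootsnap /dev/VolGroup00/LogVol00
  mkdir -p /mnt/snap
  mount -o ro /dev/VolGroup00/rootsnap /mnt/snap
  tar czf /backups/root-snapshot.tar.gz -C /mnt/snap .
  umount /mnt/snap
  lvremove -f /dev/VolGroup00/rootsnap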
There are numerous tools out there to help you manage backups and disaster recovery, and these are just a select few, so I'm sure others will have some ideas to offer as well. I'd be happy to go into more detail about any of this, or answer any questions you have, so don't hesitate to ask.
Regards,
John Ruemker, RHCA
Red Hat Technical Account Manager
John
I presume the tool the person who started this thread is looking for is something like flarcreate (Solaris) or mksysb (AIX). On those two platforms, the same old utilities such as pax, cpio, dd, etc. are all available. However, the functionality and ease of use of flarcreate and mksysb are far ahead of the common utilities. They do not even compare.
With those two tools on Solaris and AIX, one is able to take a complete backup of a running OS (sample invocations are sketched after the list below), and the benefits are these:
1) The tools come with the OS
2) The tools offer a single solution to the backup requirement. i.e. there is little need to create wrapper scripts
3) etc.
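To give a feel for how simple they are (the archive name, path, and tape device below are made up for the example):

  # Solaris: create a compressed flash archive of the running system
  flarcreate -n websrv01 -c /backup/websrv01.flar

  # AIX: create a bootable mksysb image of rootvg on tape
  mksysb -i /dev/rmt0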
Hi Aaron
We had the same problem in early 2006 and found a company in Germany that implemented a DR tool for us.
It's now also publicly available as a product:
http://www.atix.de/en/produkte/com.oonics/modul-d
http://com.oonics.org/
Previously, on Tru64, we had something similar:
http://www.unix-wissen.de/Tru64/cloning/
With the Enterprise Copy you are able to clone your installation. We use it only for the OS. We add another GRUB entry, and you can boot into the clone directly from the boot menu. Also, if you have a SAN installation, you can map your disks to a new server and boot it there as well.
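Just to illustrate the extra boot entry (this is plain GRUB legacy syntax with made-up device and volume group names, not the com.oonics tooling itself), grub.conf ends up with a second stanza along these lines:

  # /boot/grub/grub.conf -- original entry plus one for the clone
  title RHEL 5 (production)
          root (hd0,0)
          kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00
          initrd /initrd-2.6.18-194.el5.img

  title RHEL 5 (clone)
          root (hd1,0)
          kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroupClone/LogVol00
          initrd /initrd-2.6.18-194.el5.img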
This tool is also integrated with DR / live DVD tools, so you can create your own DR DVD.
Mike
I've used Clonezilla extensively, saving images to an SSH server. We had a small pen drive with Clonezilla to boot the server, connect to the image server, and deploy the images. Pretty easy to use.
I've also used Mondo Rescue [0]. It's a bit tricky if the destination hard disk is different from the source, but it works quite well.
I'd take a look at the FOG project too [1], as it looks promising.
[0] http://www.mondorescue.org/
I have two systems that are identical in hardware build spec. They are running RHEL 6.7 as a minimal install. As the ReaR tool doesn't seem to like this, I wanted to explore copying from one system to another. This is a rehearsal for something I may need to do. The documents for DR in the Red Hat space give clues but leave large gaps, and in one case there is a link that's either stale or that I don't have permission to access for some reason.
I built a test system using 3 x 300 GB disks in a RAID1(ADM) array. The only PV (pv0) looks to be pretty well the full size of the array at 278.876 GB. Underneath this, I see logical volumes listed as LogVol05, 04, 01, 03 and 02, in that order. I don't see anything that says what they were for; when I built these, they were for the likes of /usr, /var, /opt and /home. In the config file I see an ID for the volume group, the PV and each of the logical volumes.
In the docs for restore, having written the partition table back, this is where I get stuck. There is a mention of possible issues requiring the LVM2 metadata to be recovered. It looks like there is an assumption that this is being done to the same disks.
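I'm assuming the standard LVM and filesystem tools would map those LogVolXX names back to mount points on the running source box, something like this (the volume group name here is a guess):

  # list PVs, VGs and LVs with sizes
  pvs
  vgs
  lvs -o lv_name,vg_name,lv_size

  # map each LV to a filesystem label / former mount point
  blkid /dev/VolGroup00/LogVol0*
  grep LogVol /etc/fstab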
How do I do this for a new server? This could easily be the case if a spike on the mains fries something. Are there any step-by-step guides on this? I am from a Solaris background, so although I'm generally OK with much of this, it's an area I've not gone into before. Like the original poster, I find nothing of much use unless you already know the answer.
The other bit is that, in the discussion on recovering the system, it talks about restoring to /. As my system isn't running and I'm running off a boot disk, I assume I'd need to swap that for /mnt/something, having mounted /dev/sda1 to it?
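In other words, the sequence I'm picturing from the rescue environment is roughly this (device, volume, and archive names are placeholders for my setup):

  # mount the restored filesystems under a temporary root
  mkdir -p /mnt/sysimage
  mount /dev/VolGroup00/LogVol00 /mnt/sysimage       # root LV
  mount /dev/sda1 /mnt/sysimage/boot                 # /boot partition

  # unpack the tar backups relative to /mnt/sysimage instead of /
  tar xzf /path/to/root.tar.gz -C /mnt/sysimage
  tar xzf /path/to/boot.tar.gz -C /mnt/sysimage/boot

  # bind-mount the pseudo-filesystems and reinstall the bootloader from a chroot
  mount --bind /dev  /mnt/sysimage/dev
  mount --bind /proc /mnt/sysimage/proc
  mount --bind /sys  /mnt/sysimage/sys
  chroot /mnt/sysimage /sbin/grub-install /dev/sda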
Any help greatly appreciated. My newer servers are RHEL 7.2 and I did use ReaR here with success.
Thanks Tom.
On the running system, I backed up the partition data with sfdisk and also took a copy of the LVM data. Finally, I used tar to back up everything under / and did a similar thing with /boot.
The disks for /dev/sda are actually RAIDed with an HP DL380 G9 RAID controller as RAID1(ADG). This is where I get stuck. The documents show tar being used to copy back to /, but at this point I'm running off the RHEL boot disk, so / isn't the real /. Also, there is this mention of recovering the volume group data, but when I do this, I get a message saying the UUID isn't empty.
Is the UUID error because the disks in the new system were used in a previous test and are not empty? The restore of the LVM data seems to baulk at an existing ID.
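For reference, the rough sequence I've pieced together from the docs is below; the volume group name, device, and file paths are guesses for my box, and the restore file is the copy of the LVM data I took earlier. My guess is the UUID complaint is the old LVM label from the previous test still being on the disk, hence wiping it first:

  # write the saved partition table onto the new disk
  sfdisk /dev/sda < /path/to/sda.sfdisk

  # clear any stale LVM label left over from the earlier test,
  # recreate the PV with the UUID recorded in the saved metadata,
  # then restore the volume group definition and activate it
  pvremove -ff /dev/sda2
  pvcreate --uuid "<PV-UUID-from-backup-file>" \
           --restorefile /path/to/lvm-backup/VolGroup00 /dev/sda2
  vgcfgrestore -f /path/to/lvm-backup/VolGroup00 VolGroup00
  vgchange -ay VolGroup00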
