Fully Automated Install of RHEV-H?

Latest response

We are trying to fully automate RHEV-H installation.  We have found two issues so far that are driving us nuts...

 

1) No matter which option we use (install, reinstall, or firstboot), the process stops and waits for the operator to press Enter to confirm the install.  This happens every single time we boot via PXE.  This makes no sense; how can we stop this behavior?

 

2) After automation takes place for RHEV-H deployment, the above issue notwithstanding, we have to hit Enter to acknowledge the reboot.  A kickstart CFG at least has an option for automatic reboot, but RHEV-H automation does not.  Nowhere in the Hypervisor Deployment Guide or the Installation Guide is this issue noted.  Has anyone figured out how to solve this one?

 

We have found a few other quirks, but these two, frankly, break the ability to deploy RHEV-H effectively.  Full automation?  Not possible, yet.

Responses

Hi Schorschi, 

 

What you are experiencing should not be happening at all, we deploy RHEV-H automatically from PXE all the time. 

 

Can you post your ovirt.log and the PXE command line from a host that has gone through such a procedure? 

Sure...  This works of course...

 

LABEL RHEV-3.0-Installer
        MENU LABEL 3.0 Installer
        MENU INDENT 4

        KERNEL /images/RedHat/RHEV/3.0/vmlinuz
        APPEND rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck install

 

This does not...

 

LABEL RHEV-3.0-Node-02
        MENU LABEL 3.0 Node 02
        MENU INDENT 4

        KERNEL /images/RedHat/RHEV/3.0/vmlinuz
        APPEND rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck install storage_init=/dev/sba BOOTIF=eth0 management_server=192.168.1.31 iscsi_name=iqn.2012-01.org.me.rhevnode02
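(Aside: those automation options are just key=value tokens on the kernel command line; a rough sketch of how tokens in that style get parsed, with our own variable names, not the node's actual boot-script code. On a live node the string would come from /proc/cmdline.)

```shell
#!/bin/sh
# Illustrative only: parse the automation key=value tokens the same way
# the node's boot scripts consume them. The sample string mirrors the
# APPEND line above; variable names are ours, not RHEV-H's.
CMDLINE="install storage_init=/dev/sba BOOTIF=eth0 management_server=192.168.1.31 iscsi_name=iqn.2012-01.org.me.rhevnode02"

for arg in $CMDLINE; do
    case "$arg" in
        storage_init=*)      storage_init=${arg#storage_init=} ;;
        management_server=*) management_server=${arg#management_server=} ;;
        iscsi_name=*)        iscsi_name=${arg#iscsi_name=} ;;
        install|reinstall)   mode=$arg ;;
    esac
done

echo "mode=$mode storage=$storage_init mgmt=$management_server"
```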

 

The install prompting issue we solved: somehow rd_NO_LVM ended up in the list of boot parameters when we started testing, which forced the configuration interface to appear since no disk device could be found to install on by default; even /dev/sda was getting masked out.  With that problem solved, the automation sequence started firing as expected.  However, we then discovered that the automation sequence crashes for some reason.  We get the following when the automation halts...

 

"Invalid installation, please reboot from media and choose reinstall"

 

This message appears after disk setup, network setup, etc., and, if we are seeing it right, after the iscsi_name setting is applied.  Something shows up on screen, but the configuration GUI blocks it.  We can find no reference to this message anywhere in the documentation.  We have tried this on several of our servers, and even on a VM running under VMware on the same hardware that generates the error (I know, a nested hypervisor, but we are just trying to get some automation working at this point, and VMware is effectively another hardware variant); the issue is consistent so far.  We are trying some different hardware in a different location, thinking we have an HCL issue or something odd.  But the hardware we are using runs Hyper-V and just about any variant of VMware you can think of, and runs RHEV-H 3.0 fine if we install without using the automation process.

 

When the issue occurs, as long as we don't use the 'rdshell' option we can press F2 into the support shell, but nothing in the ovirt.log file explains the reason for the message.
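One thing that sometimes helps from the F2 support shell is a crude sweep of the log for failure keywords.  This is just our own helper, not a RHEV-H tool, and the keyword list is a guess; the demo runs on a stand-in file, but on the node you would point it at /var/log/ovirt.log:

```shell
#!/bin/sh
# Crude failure-marker sweep (our own helper, keyword list is a guess).
scan_log() {
    grep -inE 'error|fail|invalid|warn' "$1" \
        || echo "no obvious failure markers in $1"
}

# Demo on a stand-in file; on the node: scan_log /var/log/ovirt.log
sample=$(mktemp)
printf 'Starting kdump:[FAILED]\nFile persisted\n' > "$sample"
scan_log "$sample"        # → 1:Starting kdump:[FAILED]
rm -f "$sample"
```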

 

Here is the ovirt.log...

 

/sbin/restorecon reset /boot-kdump context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
/sbin/restorecon reset /boot-kdump/initrd-2.6.32-220.2.1.el6.x86_64kdump.img context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
/sbin/restorecon reset /boot-kdump/vmlinuz-2.6.32-220.2.1.el6.x86_64 context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
/sbin/restorecon reset /files context system_u:object_r:file_t:s0->system_u:object_r:etc_runtime_t:s0
Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 umount: /var/log: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13 Feb 03 10:29:13   Can't remove open logical volume "Swap"
Really WIPE LABELS from physical volume "/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4" of volume group "HostVG" [y/n]?   WARNING: Wiping physical volume label from /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4 of volume group "HostVG"
  Can't open /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4 exclusively - not removing. Mounted filesystem?
Feb 03 10:29:14   Volume group "AppVG" not found
Really WIPE LABELS from physical volume "/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4" of volume group "HostVG" [y/n]?   WARNING: Wiping physical volume label from /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4 of volume group "HostVG"
  Can't open /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4 exclusively - not removing. Mounted filesystem?
Feb 03 10:29:14 Stopping auditd: [  OK  ]
Shutting down vdsm-reg:
[  OK  ]
vdsm-reg stop[  OK  ]
Starting auditd: [  OK  ]
vdsm-reg: starting
Starting up vdsm-reg daemon:
vdsm-reg start[  OK  ]
vdsm-reg: ended.
++ echo 'scale=0; 4844 / 1024;'
++ bc -l
+ MEM_SIZE_MB=4
++ echo 'scale=0; 50 * (1024 * 1024) / (1000 * 1000)'
++ bc -l
+ boot_size_si=52
+ '[' '' = y ']'
+ '[' y = y ']'
+ log 'Partitioning root drive: /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:18 '
Feb 03 10:29:18 + '[' 0 = 1 ']'
+ echo 'Partitioning root drive: /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001'
+ wipe_partitions /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ local drive=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ wipe_mbr /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ local drive=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ log 'Wiping old boot sector'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:18 '
Feb 03 10:29:18 + '[' 0 = 1 ']'
+ echo 'Wiping old boot sector'
+ dd if=/dev/zero of=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 bs=1024K count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.000980411 s, 1.1 GB/s
+ log 'Wiping secondary gpt header'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:18 '
Feb 03 10:29:18 + '[' 0 = 1 ']'
+ echo 'Wiping secondary gpt header'
++ sfdisk -s /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ local disk_kb_count=8388608
+ dd if=/dev/zero of=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 bs=1024 seek=8388607 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 1.9263e-05 s, 53.2 MB/s
+ kpartx -d /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ '[' -L /dev/mapper/HostVG-Swap ']'
+ swapoff -a
++ find /dev/mapper/HostVG-Config /dev/mapper/HostVG-Data /dev/mapper/HostVG-Logging /dev/mapper/HostVG-Swap
+ for lv in '$(find /dev/mapper/HostVG*)'
+ dmsetup remove /dev/mapper/HostVG-Config
+ for lv in '$(find /dev/mapper/HostVG*)'
+ dmsetup remove /dev/mapper/HostVG-Data
+ for lv in '$(find /dev/mapper/HostVG*)'
+ dmsetup remove /dev/mapper/HostVG-Logging
+ for lv in '$(find /dev/mapper/HostVG*)'
+ dmsetup remove /dev/mapper/HostVG-Swap
+ reread_partitions /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ local drive=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ udevadm settle
+ [[ /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 =~ /dev/mapper ]]
+ echo 3
+ set +e
+ partprobe /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ set -e
+ log 'Creating Root and RootBackup Partitions'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:18 '
Feb 03 10:29:18 + '[' 0 = 1 ']'
+ echo 'Creating Root and RootBackup Partitions'
+ let 'RootBackup_end=256*2+256'
+ let Root_end=256+256
+ true
+ set +e
+ parted -s /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 'mklabel msdos mkpart primary fat32 2048s 256M mkpart primary ext2 256 512M mkpart primary ext2 512M 768M'
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
device-mapper: remove ioctl failed: Device or resource busy
Warning: parted was unable to re-read the partition table on /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 (Device or resource busy).  This means Linux won't know anything about the modifications you made.
+ set -e
+ reread_partitions /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ local drive=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ udevadm settle
+ [[ /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 =~ /dev/mapper ]]
+ echo 3
+ set +e
+ partprobe /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ set -e
+ '[' -e /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_010000000000000000013 -o -e /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p3 ']'
+ break
+ partefi=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_010000000000000000011
+ partroot=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_010000000000000000012
+ partrootbackup=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_010000000000000000013
+ '[' '!' -e /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_010000000000000000012 ']'
+ partefi=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p1
+ partroot=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p2
+ partrootbackup=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p3
+ ln -snf /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p1 /dev/disk/by-label/EFI
+ mkfs.vfat /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p1 -n EFI
mkfs.vfat 3.0.9 (31 Jan 2010)
unable to get drive geometry, using default 255/63
+ ln -snf /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p2 /dev/disk/by-label/Root
+ mke2fs -t ext2 /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p2 -L Root
mke2fs 1.41.12 (17-May-2010)
Filesystem label=Root
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
62496 inodes, 249856 blocks
12492 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
31 block groups
8192 blocks per group, 8192 fragments per group
2016 inodes per group
Superblock backups stored on blocks:
 8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables:  0/31 1/31 2/31 3/31 4/31 5/31 6/31 7/31 8/31 9/3110/3111/3112/3113/3114/3115/3116/3117/3118/3119/3120/3121/3122/3123/3124/3125/3126/3127/3128/3129/3130/31done                           
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
+ tune2fs -c 0 -i 0 /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p2
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
+ ln -snf /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p3 /dev/disk/by-label/RootBackup
+ mke2fs -t ext2 /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p3 -L RootBackup
mke2fs 1.41.12 (17-May-2010)
Filesystem label=RootBackup
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
62496 inodes, 249856 blocks
12492 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
31 block groups
8192 blocks per group, 8192 fragments per group
2016 inodes per group
Superblock backups stored on blocks:
 8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables:  0/31 1/31 2/31 3/31 4/31 5/31 6/31 7/31 8/31 9/3110/3111/3112/3113/3114/3115/3116/3117/3118/3119/3120/3121/3122/3123/3124/3125/3126/3127/3128/3129/3130/31done                           
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
+ tune2fs -c 0 -i 0 /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p3
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
+ oldIFS='  
'
+ IFS=,
+ for drv in '$HOSTVGDRIVE'
+ '[' /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 '!=' /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 ']'
+ IFS='  
'
+ create_hostvg
+ log 'Creating LVM partition(s) for HostVG'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:20 '
Feb 03 10:29:20 + '[' 0 = 1 ']'
+ echo 'Creating LVM partition(s) for HostVG'
+ local drv
+ local parted_cmd
+ oldIFS='  
'
+ IFS=,
+ local physical_vols=
+ for drv in '$HOSTVGDRIVE'
+ '[' /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 = /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 ']'
+ parted_cmd='mkpart primary ext2 768M -1'
+ hostvgpart=4
+ true
+ set +e
+ parted -s /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 'mkpart primary ext2 768M -1'
+ parted -s /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 'set 4 lvm on'
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
device-mapper: remove ioctl failed: Device or resource busy
Warning: parted was unable to re-read the partition table on /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 (Device or resource busy).  This means Linux won't know anything about the modifications you made.
+ set -e
+ reread_partitions /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ local drive=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ udevadm settle
+ [[ /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001 =~ /dev/mapper ]]
+ echo 3
+ set +e
+ partprobe /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001
+ set -e
+ '[' -e /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_010000000000000000014 -o /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4 ']'
+ break
+ '[' '' = y ']'
+ partpv=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_010000000000000000014
+ '[' '!' -e /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_010000000000000000014 ']'
+ partpv=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4
+ log 'Creating physical volume'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:21 '
Feb 03 10:29:21 + '[' 0 = 1 ']'
+ echo 'Creating physical volume'
++ seq 1 10
+ for count in '$(seq 1 10)'
+ '[' '!' -e /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4 ']'
+ dd if=/dev/zero of=/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4 bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00155601 s, 674 MB/s
+ pvcreate -ff -y /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4
+ /sbin/pvcreate -ff -y /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4
  Writing physical volume data to disk "/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4"
  Physical volume "/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4" successfully created
+ physical_vols=,/dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4
+ log 'Creating volume group HostVG'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:21 '
Feb 03 10:29:21 + '[' 0 = 1 ']'
+ echo 'Creating volume group HostVG'
+ local is_first=1
+ for drv in '$physical_vols'
+ '[' -z '' ']'
+ continue
+ for drv in '$physical_vols'
+ '[' -z /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4 ']'
+ '[' 1 ']'
+ vgcreate HostVG /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4
+ /sbin/vgcreate HostVG /dev/mapper/1ATA_VMware_Virtual_IDE_Hard_Drive_01000000000000000001p4
  Volume group "HostVG" successfully created
+ is_first=
+ IFS='  
'
+ '[' 4470 -gt 0 ']'
+ log 'Creating swap partition'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:22 '
Feb 03 10:29:22 + '[' 0 = 1 ']'
+ echo 'Creating swap partition'
+ lvcreate --name Swap --size 4470M /dev/HostVG
+ /sbin/lvcreate --name Swap --size 4470M /dev/HostVG
  Rounding up size to full physical extent 4.37 GiB
  Logical volume "Swap" created
+ '[' -n '' ']'
+ mkswap -L SWAP /dev/HostVG/Swap
mkswap: /dev/HostVG/Swap: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 4579324 KiB
LABEL=SWAP, UUID=12a620b5-8238-4049-94b5-ea06354bf21f
+ echo '/dev/HostVG/Swap swap swap defaults 0 0'
+ '[' 5 -gt 0 ']'
+ log 'Creating config partition'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:22 '
Feb 03 10:29:22 + '[' 0 = 1 ']'
+ echo 'Creating config partition'
+ lvcreate --name Config --size 5M /dev/HostVG
+ /sbin/lvcreate --name Config --size 5M /dev/HostVG
  Rounding up size to full physical extent 8.00 MiB
  Logical volume "Config" created
+ mke2fs -j -t ext4 /dev/HostVG/Config -L CONFIG
mke2fs 1.41.12 (17-May-2010)
Filesystem label=CONFIG
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
2048 inodes, 8192 blocks
409 blocks (4.99%) reserved for the super user
First data block=1
Maximum filesystem blocks=8388608
1 block group
8192 blocks per group, 8192 fragments per group
2048 inodes per group

Writing inode tables: 0/1done                           
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
+ tune2fs -c 0 -i 0 /dev/HostVG/Config
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
+ '[' 2048 -gt 0 ']'
+ log 'Creating log partition'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:22 '
Feb 03 10:29:22 + '[' 0 = 1 ']'
+ echo 'Creating log partition'
+ lvcreate --name Logging --size 2048M /dev/HostVG
+ /sbin/lvcreate --name Logging --size 2048M /dev/HostVG
  Logical volume "Logging" created
+ mke2fs -j -t ext4 /dev/HostVG/Logging -L LOGGING
mke2fs 1.41.12 (17-May-2010)
Filesystem label=LOGGING
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912

Writing inode tables:  0/16 1/16 2/16 3/16 4/16 5/16 6/16 7/16 8/16 9/1610/1611/1612/1613/1614/1615/16done                           
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
+ tune2fs -c 0 -i 0 /dev/HostVG/Logging
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
+ echo '/dev/HostVG/Logging /var/log ext4 defaults,noatime 0 0'
+ local use_data=1
+ '[' -1 -eq -1 ']'
+ log 'Creating data partition with remaining free space'
++ date '+%b %d %H:%M:%S'
+ printf 'Feb 03 10:29:23 '
Feb 03 10:29:23 + '[' 0 = 1 ']'
+ echo 'Creating data partition with remaining free space'
+ lvcreate --name Data -l 100%FREE /dev/HostVG
+ /sbin/lvcreate --name Data -l 100%FREE /dev/HostVG
  Logical volume "Data" created
+ use_data=0
+ '[' 0 = 0 ']'
+ mke2fs -j -t ext4 /dev/HostVG/Data -L DATA
mke2fs 1.41.12 (17-May-2010)
Filesystem label=DATA
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
59392 inodes, 237568 blocks
11878 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=243269632
8 block groups
32768 blocks per group, 32768 fragments per group
7424 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376

Writing inode tables: 0/81/82/83/84/85/86/87/8done                           
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
+ tune2fs -c 0 -i 0 /dev/HostVG/Data
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
+ echo '/dev/HostVG/Data /data ext4 defaults,noatime 0 0'
+ echo '/data/images /var/lib/libvirt/images bind bind 0 0'
+ echo '/data/core /var/log/core bind bind 0 0'
+ set +x
Feb 03 10:29:24  File persisted

Successfully persisted /etc/fstab

Feb 03 10:29:24 # processname: vdsm-reg-setup
Stopping auditd: [  OK  ]
Shutting down vdsm-reg:
[  OK  ]
vdsm-reg stop[  OK  ]
`/var/log/audit' -> `/tmp/tmp.aJUd1Ov19q/audit'
`/var/log/audit/audit.log' -> `/tmp/tmp.aJUd1Ov19q/audit/audit.log'
`/var/log/core' -> `/tmp/tmp.aJUd1Ov19q/core'
`/var/log/libvirt' -> `/tmp/tmp.aJUd1Ov19q/libvirt'
`/var/log/libvirt/uml' -> `/tmp/tmp.aJUd1Ov19q/libvirt/uml'
`/var/log/libvirt/qemu' -> `/tmp/tmp.aJUd1Ov19q/libvirt/qemu'
`/var/log/libvirt/lxc' -> `/tmp/tmp.aJUd1Ov19q/libvirt/lxc'
`/var/log/ntpstats' -> `/tmp/tmp.aJUd1Ov19q/ntpstats'
`/var/log/rhsm' -> `/tmp/tmp.aJUd1Ov19q/rhsm'
`/var/log/sa' -> `/tmp/tmp.aJUd1Ov19q/sa'
`/var/log/vdsm' -> `/tmp/tmp.aJUd1Ov19q/vdsm'
`/var/log/vdsm/backup' -> `/tmp/tmp.aJUd1Ov19q/vdsm/backup'
`/var/log/vdsm-reg' -> `/tmp/tmp.aJUd1Ov19q/vdsm-reg'
`/var/log/vdsm-reg/vdsm-reg.log' -> `/tmp/tmp.aJUd1Ov19q/vdsm-reg/vdsm-reg.log'
restorecon reset /var/log context system_u:object_r:file_t:s0->system_u:object_r:var_log_t:s0
restorecon reset /var/log/lost+found context system_u:object_r:file_t:s0->system_u:object_r:var_log_t:s0
restorecon reset /var/log/ovirt.log-tmp context system_u:object_r:file_t:s0->system_u:object_r:var_log_t:s0
Starting auditd: [  OK  ]
vdsm-reg: starting
Starting up vdsm-reg daemon:
vdsm-reg start[  OK  ]
vdsm-reg: ended.
Feb 03 10:29:28 restorecon reset /var/lib/libvirt/images context system_u:object_r:file_t:s0->system_u:object_r:virt_image_t:s0
restorecon reset /var/lib/libvirt/images/rhev context system_u:object_r:file_t:s0->system_u:object_r:virt_image_t:s0
restorecon reset /var/log/core context system_u:object_r:file_t:s0->system_u:object_r:var_log_t:s0
 File persisted

Successfully persisted /etc/kdump.conf

Stopping kdump:[  OK  ]
Detected change(s) the following file(s):
 
  /etc/kdump.conf
Rebuilding /boot-kdump/initrd-2.6.32-220.2.1.el6.x86_64kdump.img
Warning: There is not enough space to save a vmcore.
         The size of /dev/HostVG/Data should be much greater than 4960408 kilo bytes.
Starting kdump:[FAILED]
Feb 03 10:29:37 Feb 03 10:29:38 shred: /config/etc/sysconfig/network-scripts/ifcfg-breth0: failed to open for writing: No such file or directory
shred: /config/etc/sysconfig/network-scripts/ifcfg-eth0: failed to open for writing: No such file or directory
shred: /config/etc/sysconfig/network-scripts/ifcfg-eth1: failed to open for writing: No such file or directory
Feb 03 10:29:38  File persisted

Successfully persisted /etc/sysconfig/network-scripts/ifcfg-eth0

 File persisted

Successfully persisted /etc/sysconfig/network-scripts/ifcfg-breth0

 File persisted

Successfully persisted /etc/sysconfig/network-scripts/ifcfg-eth1

 File persisted

Successfully persisted /etc/ntp.conf

 

 

This one is a mystery right now!

In the RHEV-3.0-Node-02 entry, try using

 

APPEND rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck reinstall storage_init=/dev/sba BOOTIF=eth0 management_server=192.168.1.31 iscsi_name=iqn.2012-01.org.me.rhevnode02

 

BTW, storage_init=/dev/sba - shouldn't this be sda?

Yup, I was doing some more testing and fat-fingered the 'sda' reference.  We got automation working up to a point, but now we have this error where the automation halts.  See above for details.  And the ovirt.log is less than informative. 

 

Previously, the big blocker was that we had rd_NO_LVM in the boot parameters, which forced FirstBoot to pop up the configurator because we were leaving the automated process no valid disk to install on.  NTP and DNS entries are sometimes appended and sometimes replaced by defaults depending on the options used; for example, setting DNS=<ip> explicitly while using ip=dhcp overwrites the /etc/resolv.conf file, but on a reinstall it appends.  That is just default RHEL/DHCP behavior, though, and once we saw it, it started making sense.  Easy enough to deal with.
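A toy illustration of that overwrite-versus-append difference, on a scratch file standing in for /etc/resolv.conf (the addresses are made up):

```shell
#!/bin/sh
# Scratch-file illustration of the behavior described above: a fresh
# install effectively overwrites the file, a reinstall appends to it.
f=$(mktemp)
echo "nameserver 192.168.1.1" >  "$f"   # overwrite (fresh-install case)
echo "nameserver 192.168.1.2" >> "$f"   # append (reinstall case)
resolv=$(cat "$f")
rm -f "$f"
echo "$resolv"                          # both entries end up present
```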

 

But this automation-halt issue has us stumped at the moment.  It effectively crashes the automated build.  So far, I have not found any log that explains what we might be doing to trip up the automation.

In my previous comment, I changed the line a bit more than just pointing to /dev/sba. There's a reinstall option in there. 

I suspect you have some unclean disks in that host, and the install line you were using wasn't enough to cope with that.

 

Can you please review my previous post and try that line exactly as it is (with the /dev/sba to sda fix taken into account, of course)?

Understood, tested the APPEND line as you specified. 

 

APPEND rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck reinstall storage_init=/dev/sda BOOTIF=eth0 management_server=192.168.1.31 iscsi_name=iqn.2012-01.org.me.rhevnode02

 

Disk wiped and LVM created, network configuration done.  Then, once again, we got the same "Invalid installation, please reboot from media and choose reinstall" message.

 

I don't believe this is a hardware issue, since the same APPEND line fails on the same AMD 1075T six-core system as well as on a VMware VM running on that same hardware.  Booting the ISO media, or PXE booting without the automation parameters and doing an interactive install of RHEV-H, works without issue on both the hardware and the VM, for example...

 

 APPEND rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck reinstall

 

It almost feels like the install bootloader configuration sequence never fires, or blows up, under the automated sequence.  But without the ability to see behind the curtain, there is no way to tell.

 

It is only the automated process that fails.  Again, the ovirt.log file makes no mention of any error or warning condition; it just reports a number of files with persisted changes.  Is there any other log I can look at?  I figure not, given the automated process design.  Is there some additional debug mode that can be enabled?

 

At this point we are going to test different hardware, not based on the AMD 1075T processor or the same mainboard vendor, to see if the behavior is specific to the processor/mainboard configuration.

Does it work if you omit the iscsi_name param?

 

EDIT: Actually no, scratch that, leave iscsi_name in there, but also add local_boot

OK, will try local_boot, but per the documentation it is just an alias for reinstall, right?

We have identified a potential bug that can be worked around by adding local_boot.  Just want to verify you're hitting the same issue.

local_boot is actually an alias for upgrade.  This is a bug that will be fixed in the next z-stream release of RHEV-H.

 

https://bugzilla.redhat.com/show_bug.cgi?id=788179

Yes, I might be experiencing that bug, I guess.  The following worked once we added the local_boot parameter...

 

APPEND rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck reinstall local_boot storage_init=/dev/sda BOOTIF=eth0 management_server=<ip> iscsi_name=iqn.2012-01.<domain name>.<host name>

 

Finally saw the installing-image status as expected.  The attempt below, without the local_boot parameter, fails...

 

APPEND rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck reinstall storage_init=/dev/sda BOOTIF=eth0 management_server=<ip> iscsi_name=iqn.2012-01.<domain name>.<host name>
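As a sanity check that local_boot really is the only token differing between the working and failing lines, here is a throwaway comparison (with the concrete values from earlier in the thread standing in for the <ip> and <domain name> placeholders):

```shell
#!/bin/sh
# Throwaway check: list tokens present in the working APPEND line but
# not in the failing one. Concrete values stand in for the placeholders.
working="rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck reinstall local_boot storage_init=/dev/sda BOOTIF=eth0 management_server=192.168.1.31 iscsi_name=iqn.2012-01.org.me.rhevnode02"
failing="rootflags=loop initrd=/images/RedHat/RHEV/3.0/initrd.img root=live:/rhevh-6.2-20120119.1.el6_2.iso rootfstype=auto ro liveimg nomodeset rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 rd_NO_LUKS rd_NO_MD rd_NO_DM nocheck reinstall storage_init=/dev/sda BOOTIF=eth0 management_server=192.168.1.31 iscsi_name=iqn.2012-01.org.me.rhevnode02"

extra=""
for tok in $working; do
    case " $failing " in
        *" $tok "*) ;;                 # token appears in both lines
        *) extra="$extra $tok" ;;
    esac
done
echo "only in the working line:$extra"  # → only in the working line: local_boot
```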

 

Now I can start getting a bit more creative with some of the other options.