Appendix F. Troubleshooting

The troubleshooting information in the following sections might be helpful when diagnosing issues after the installation process. These sections apply to all supported architectures. However, if an issue applies only to a particular architecture, that is stated at the start of the section.

F.1. Resuming an interrupted download attempt

You can resume an interrupted download using the curl command.

Prerequisites

  • You have navigated to the Product Downloads section of the Red Hat Customer Portal at https://access.redhat.com/downloads, and selected the required variant, version, and architecture.
  • You have right-clicked on the required ISO file, and selected Copy Link Location to copy the URL of the ISO image file to your clipboard.

Procedure

  1. Download the ISO image from the new link. Add the --continue-at - option to automatically resume the download:

    $ curl --output directory-path/filename.iso 'new_copied_link_location' --continue-at -
  2. Use a checksum utility such as sha256sum to verify the integrity of the image file after the download finishes:

    $ sha256sum rhel-x.x-x86_64-dvd.iso
    85a...46c  rhel-x.x-x86_64-dvd.iso

    Compare the output with reference checksums provided on the Red Hat Enterprise Linux Product Download web page.

Example F.1. Resuming an interrupted download attempt

The following is an example of a curl command for a partially downloaded ISO image:

$ curl --output rhel-x.x-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-x.x-x86_64-dvd.iso?_auth=141...963' --continue-at -

F.2. Disks are not detected

If the installation program cannot find a writable storage device to install to, it returns the following error message in the Installation Destination window: No disks detected. Please shut down the computer, connect at least one disk, and restart to complete installation.

Check the following items:

  • Your system has at least one storage device attached.
  • If your system uses a hardware RAID controller, verify that the controller is properly configured and working as expected. See your controller’s documentation for instructions.
  • If you are installing into one or more iSCSI devices and there is no local storage present on the system, verify that all required LUNs are presented to the appropriate Host Bus Adapter (HBA).
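
To confirm whether the installation environment detects a storage device at all, you can switch to a shell and list the block devices that the kernel reports. The following is a minimal sketch, assuming a shell is available on virtual console 2 (Ctrl+Alt+F2) of the installation environment:

# lsblk
# dmesg | grep -i -e sd -e nvme -e dasd

If the device does not appear in either output, the problem lies below the installer, for example in cabling, controller configuration, or LUN masking.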

If the error message is still displayed after rebooting the system and starting the installation process, the installation program failed to detect the storage. In many cases the error message is a result of attempting to install on an iSCSI device that is not recognized by the installation program.

In this scenario, you must perform a driver update before starting the installation. Check your hardware vendor’s website to determine if a driver update is available. For more general information about driver updates, see the Updating drivers during installation section of the Performing an advanced RHEL 9 installation document.

You can also consult the Red Hat Hardware Compatibility List, available at https://access.redhat.com/ecosystem/search/#/category/Server.

F.3. Cannot boot with a RAID card

If you cannot boot your system after the installation, you might need to reinstall and repartition your system’s storage. Some BIOS types do not support booting from RAID cards. After you finish the installation and reboot the system for the first time, a text-based screen with the boot loader prompt (for example, grub>) and a flashing cursor might be displayed. If this is the case, you must repartition your system and move the /boot partition and the boot loader outside the RAID array. The /boot partition and the boot loader must be on the same drive. After you make these changes, you should be able to finish the installation and boot the system properly.

F.4. Graphical boot sequence is not responding

When rebooting your system for the first time after installation, the system might be unresponsive during the graphical boot sequence. If this occurs, a reset is required. In this scenario, the boot loader menu is displayed successfully, but selecting any entry and attempting to boot the system results in a halt. This usually indicates that there is a problem with the graphical boot sequence. To resolve the issue, you must disable the graphical boot by temporarily altering the setting at boot time before changing it permanently.

Procedure: Disabling the graphical boot temporarily

  1. Start your system and wait until the boot loader menu is displayed. If you set your boot timeout period to 0, press the Esc key to access it.
  2. From the boot loader menu, use your cursor keys to highlight the entry you want to boot. Press the Tab key on BIOS-based systems or the e key on UEFI-based systems to edit the selected entry options.
  3. In the list of options, find the kernel line - that is, the line beginning with the keyword linux. On this line, locate and delete rhgb.
  4. Press F10 or Ctrl+X to boot your system with the edited options.

If the system started successfully, you can log in normally. However, if you do not disable graphical boot permanently, you must perform this procedure every time the system boots.

Procedure: Disabling the graphical boot permanently

  1. Log in to the root account on your system.
  2. Use the grubby tool to find the default GRUB2 kernel:

    # grubby --default-kernel
    /boot/vmlinuz-4.18.0-94.el8.x86_64
  3. Use the grubby tool to remove the rhgb boot option from the default kernel in your GRUB2 configuration. For example:

    # grubby --remove-args="rhgb" --update-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64
  4. Reboot the system. The graphical boot sequence is no longer used. If you want to enable the graphical boot sequence, follow the same procedure, replacing the --remove-args="rhgb" parameter with the --args="rhgb" parameter. This restores the rhgb boot option to the default kernel in your GRUB2 configuration.
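
To verify that the rhgb option is no longer present, you can inspect the arguments of the default kernel. This is an optional check and not part of the documented procedure:

# grubby --info=$(grubby --default-kernel)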

F.5. X server fails after log in

An X server is a program in the X Window System that runs on local machines, that is, the computers used directly by users. The X server handles all access to the graphics cards, display screens, and input devices, typically a keyboard and mouse, on those computers. The X Window System, often referred to as X, is a complete, cross-platform, free client-server system for managing GUIs on single computers and on networks of computers. The client-server model is an architecture that divides the work between two separate but linked applications, referred to as clients and servers.*

If X server crashes after login, one or more of the file systems might be full. To troubleshoot the issue, execute the following command:

$ df -h

The output helps you determine which partition is full; in most cases, the problem is on the /home partition. The following is a sample output of the df command:

Filesystem                                  Size  Used Avail Use% Mounted on
devtmpfs                                    396M     0  396M   0%  /dev
tmpfs                                       411M     0  411M   0%  /dev/shm
tmpfs                                       411M  6.7M  405M   2%  /run
tmpfs                                       411M     0  411M   0%  /sys/fs/cgroup
/dev/mapper/rhel-root                       17G    4.1G  13G   25% /
/dev/sda1                                   1014M  173M 842M  17% /boot
tmpfs                                       83M    20K   83M   1%  /run/user/42
tmpfs                                       83M    84K  83M    1%  /run/user/1000
/dev/dm-4                                   90G    90G    0  100% /home

In the example, you can see that the /home partition is full, which causes the failure. Remove any unwanted files. After you free up some disk space, start X using the startx command. For additional information about df and an explanation of the options available, such as the -h option used in this example, see the df(1) man page.
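
If the /home partition is the one that is full, one way to find what is consuming the space is to summarize directory sizes on that file system. This is a minimal sketch and not part of the documented procedure; adjust the path as needed:

# du -xh /home | sort -h | tail -n 10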

*Source: http://www.linfo.org/x_server.html

F.6. RAM is not recognized

In some scenarios, the kernel does not recognize all memory (RAM), which causes the system to use less memory than is installed. If the total amount of memory that your system reports does not match your expectations, it is likely that at least one of your memory modules is faulty. On BIOS-based systems, you can use the Memtest86+ utility to test your system’s memory.

Some hardware configurations have part of the system’s RAM reserved, and as a result, it is unavailable to the system. Some laptop computers with integrated graphics cards reserve a portion of memory for the GPU. For example, a laptop with 4 GiB of RAM and an integrated Intel graphics card shows roughly 3.7 GiB of available memory. Additionally, the kdump crash kernel dumping mechanism, which is enabled by default on most Red Hat Enterprise Linux systems, reserves some memory for the secondary kernel used in case of a primary kernel failure. This reserved memory is not displayed as available.

Use this procedure to manually set the amount of memory.

Procedure

  1. Check the amount of memory that your system currently reports in MiB:

    $ free -m
  2. Reboot your system and wait until the boot loader menu is displayed.

    If your boot timeout period is set to 0, press the Esc key to access the menu.

  3. From the boot loader menu, use your cursor keys to highlight the entry you want to boot, and press the Tab key on BIOS-based systems or the e key on UEFI-based systems to edit the selected entry options.
  4. In the list of options, find the kernel line: that is, the line beginning with the keyword linux. Append the following option to the end of this line:

    mem=xxM
  5. Replace xx with the amount of RAM you have in MiB.
  6. Press F10 or Ctrl+X to boot your system with the edited options.
  7. Wait for the system to boot, log in, and open a command line.
  8. Check the amount of memory that your system reports in MiB:

    $ free -m
  9. If the total amount of RAM displayed by the command now matches your expectations, make the change permanent:

    # grubby --update-kernel=ALL --args="mem=xxM"
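
For example, assuming the system has 4096 MiB of RAM installed, the permanent change would be:

# grubby --update-kernel=ALL --args="mem=4096M"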

F.7. System is displaying signal 11 errors

A signal 11 error, commonly known as a segmentation fault, means that a program accessed a memory location that was not assigned to it. A signal 11 error can occur due to a bug in one of the installed software programs or due to faulty hardware. If you receive a signal 11 error during the installation process, verify that you are using the most recent installation images, and prompt the installation program to verify them to ensure they are not corrupt.

For more information, see Verifying Boot media.

Faulty installation media (such as an improperly burned or scratched optical disk) are a common cause of signal 11 errors. Verifying the integrity of the installation media is recommended before every installation. For information about obtaining the most recent installation media, see Downloading the installation ISO image.

To perform a media check before the installation starts, append the rd.live.check boot option at the boot menu. If you performed a media check without any errors and you still have issues with segmentation faults, it usually indicates that your system encountered a hardware error. In this scenario, the problem is most likely in the system’s memory (RAM). This can be a problem even if you previously used a different operating system on the same computer without any errors.
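
For example, after editing the boot entry as described in the earlier procedures for editing boot options, the end of the linux line might look similar to the following. Everything except the appended rd.live.check option is illustrative and depends on your boot entry:

linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-x-BaseOS-x86_64 quiet rd.live.check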

Note

For AMD and Intel 64-bit and 64-bit ARM architectures: On BIOS-based systems, you can use the Memtest86+ memory testing module included on the installation media to perform a thorough test of your system’s memory.

For more information, see Detecting memory faults using the Memtest86 application.

Other possible causes are beyond this document’s scope. Consult your hardware manufacturer’s documentation and also see the Red Hat Hardware Compatibility List, available online at https://access.redhat.com/ecosystem/search/#/category/Server.

F.8. Unable to IPL from network storage space

Note
  • This issue applies to IBM Power Systems.

If you experience difficulties when trying to IPL from Network Storage Space (*NWSSTG), it is most likely due to a missing PReP partition. In this scenario, you must reinstall the system and create this partition during the partitioning phase or in the Kickstart file.

F.9. Using XDMCP

There are scenarios where you have installed the X Window System and want to log in to your Red Hat Enterprise Linux system using a graphical login manager. Use this procedure to enable the X Display Manager Control Protocol (XDMCP) and remotely log in to a desktop environment from any X-compatible client, such as a network-connected workstation or X11 terminal.

Note

XDMCP is not supported by the Wayland protocol.

Procedure

  1. Open the /etc/gdm/custom.conf configuration file in a plain text editor such as vi or nano.
  2. In the custom.conf file, locate the section starting with [xdmcp]. In this section, add the following line:

    Enable=true
  3. If you are using XDMCP, ensure that WaylandEnable=false is present in the /etc/gdm/custom.conf file. An example excerpt of the resulting file follows this procedure.
  4. Save the file and exit the text editor.
  5. Restart the X Window System. To do this, either reboot the system, or restart the GNOME Display Manager using the following command as root:

    # systemctl restart gdm.service
    Warning

    Restarting the gdm service terminates all currently running GNOME sessions of all desktop users who are logged in. This might result in users losing unsaved data.

  6. Wait for the login prompt and log in using your user name and password. The X Window System is now configured for XDMCP. You can connect to it from another workstation (client) by starting a remote X session using the X command on the client workstation. For example:

    $ X :1 -query address
  7. Replace address with the host name of the remote X11 server. The command connects to the remote X11 server using XDMCP and displays the remote graphical login screen on display :1 of the X11 server system (usually accessible by pressing Ctrl+Alt+F8). You can also access remote desktop sessions using a nested X11 server, which opens the remote desktop as a window in your current X11 session. You can use Xnest to open a remote desktop nested in a local X11 session. For example, run Xnest using the following command, replacing address with the host name of the remote X11 server:

    $ Xnest :1 -query address
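
After the changes in steps 2 and 3, the relevant sections of the /etc/gdm/custom.conf file might look similar to the following excerpt; other sections of the file are omitted:

[daemon]
WaylandEnable=false

[xdmcp]
Enable=true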


F.10. Using rescue mode

The installation program’s rescue mode is a minimal Linux environment that can be booted from the Red Hat Enterprise Linux DVD or other boot media. It contains command-line utilities for repairing a wide variety of issues. Rescue mode can be accessed from the Troubleshooting menu of the boot menu. In this mode, you can mount file systems as read-only, blocklist or add a driver provided on a driver disc, install or upgrade system packages, or manage partitions.

Note

The installation program’s rescue mode is different from rescue mode (an equivalent to single-user mode) and emergency mode, which are provided as parts of the systemd system and service manager.

To boot into rescue mode, you must be able to boot the system using one of the Red Hat Enterprise Linux boot media, such as a minimal boot disc or USB drive, or a full installation DVD.

Important

Advanced storage, such as iSCSI or zFCP devices, must be configured either by using dracut boot options, such as rd.zfcp= or root=iscsi:, or in the CMS configuration file on 64-bit IBM Z. It is not possible to configure these storage devices interactively after booting into rescue mode. For information about dracut boot options, see the dracut.cmdline(7) man page.

F.10.1. Booting into rescue mode

This procedure describes how to boot into rescue mode.

Procedure

  1. Boot the system from either minimal boot media, or a full installation DVD or USB drive, and wait for the boot menu to be displayed.
  2. From the boot menu, either select Troubleshooting > Rescue a Red Hat Enterprise Linux system option, or append the inst.rescue option to the boot command line. To enter the boot command line, press the Tab key on BIOS-based systems or the e key on UEFI-based systems.
  3. Optional: If your system requires a third-party driver provided on a driver disc to boot, append the inst.dd=driver_name option to the boot command line:

    inst.rescue inst.dd=driver_name
  4. Optional: If a driver that is part of the Red Hat Enterprise Linux distribution prevents the system from booting, append the modprobe.blacklist= option to the boot command line:

    inst.rescue modprobe.blacklist=driver_name
  5. Press Enter (BIOS-based systems) or Ctrl+X (UEFI-based systems) to boot with the modified options. Wait until the following message is displayed:

    The rescue environment will now attempt to find your Linux installation and mount it under the directory: /mnt/sysroot/. You can then make any changes required to your system. Choose 1 to proceed with this step. You can choose to mount your file systems read-only instead of read-write by choosing 2. If for some reason this process does not work choose 3 to skip directly to a shell.
    
    1) Continue
    2) Read-only mount
    3) Skip to shell
    4) Quit (Reboot)

    If you select 1, the installation program attempts to mount your file system under the directory /mnt/sysroot/. You are notified if it fails to mount a partition. If you select 2, it attempts to mount your file system under the directory /mnt/sysroot/, but in read-only mode. If you select 3, your file system is not mounted.

    For the system root, the installer supports two mount points: /mnt/sysimage and /mnt/sysroot. The /mnt/sysroot path is used to mount / of the target system. Usually, the physical root and the system root are the same, so /mnt/sysroot is attached to the same file system as /mnt/sysimage. The only exceptions are rpm-ostree systems, where the system root changes based on the deployment. In that case, /mnt/sysroot is attached to a subdirectory of /mnt/sysimage. Use /mnt/sysroot for the chroot.

  6. Select 1 to continue. Once your system is in rescue mode, a prompt appears on VC (virtual console) 1 and VC 2. Use the Ctrl+Alt+F1 key combination to access VC 1 and Ctrl+Alt+F2 to access VC 2:

    sh-4.2#
  7. Even if your file system is mounted, the default root partition while in rescue mode is a temporary root partition, not the root partition of the file system used during normal user mode (multi-user.target or graphical.target). If you selected to mount your file system and it mounted successfully, you can change the root partition of the rescue mode environment to the root partition of your file system by executing the following command:

    sh-4.2# chroot /mnt/sysroot

    This is useful if you need to run commands, such as rpm, that require your root partition to be mounted as /. To exit the chroot environment, type exit to return to the prompt.

  8. If you selected 3, you can still try to mount a partition or LVM2 logical volume manually inside rescue mode by creating a directory, such as /directory/, and typing the following command:

    sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory

    In the above command, /directory/ is the directory that you created and /dev/mapper/VolGroup00-LogVol02 is the LVM2 logical volume you want to mount. If the partition is a different type than XFS, replace the xfs string with the correct type (such as ext4).

  9. If you do not know the names of all physical partitions, use the following command to list them:

    sh-4.2# fdisk -l

    If you do not know the names of all LVM2 physical volumes, volume groups, or logical volumes, use the pvdisplay, vgdisplay, or lvdisplay commands, respectively.
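
For a more compact overview, the pvs, vgs, and lvs commands print summary listings of physical volumes, volume groups, and logical volumes, for example:

sh-4.2# lvs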

F.10.2. Using an SOS report in rescue mode

The sosreport command-line utility collects configuration and diagnostic information, such as the running kernel version, loaded modules, and system and service configuration files from the system. The utility output is stored in a tar archive in the /var/tmp/ directory. The sosreport utility is useful for analyzing system errors and troubleshooting. Use this procedure to capture an sosreport output in rescue mode.

Prerequisites

  • You have booted into rescue mode.
  • You have mounted the installed system / (root) partition in read-write mode.
  • You have contacted Red Hat Support about your case and received a case number.

Procedure

  1. Change the root directory to the /mnt/sysroot/ directory:

    sh-4.2# chroot /mnt/sysroot/
  2. Execute sosreport to generate an archive with system configuration and diagnostic information:

    sh-4.2# sosreport
    Important

    sosreport prompts you to enter your name and the case number you received from Red Hat Support. Use only letters and numbers because adding any of the following characters or spaces could render the report unusable:

    # % & { } \ < > * ? / $ ~ ' " : @ + ` | =

  3. Optional: If you want to transfer the generated archive to a new location over the network, you must have a network interface configured. If you use dynamic IP addressing, no further steps are required. However, when using static addressing, enter the following command to assign an IP address (for example, 10.13.153.64/23) to a network interface, for example, eth0. If the destination is outside the local subnet, you might also need a default route; see the example after this procedure:

    bash-4.2# ip addr add 10.13.153.64/23 dev eth0
  4. Exit the chroot environment:

    sh-4.2# exit
  5. Store the generated archive in a new location from which it can be easily accessed:

    sh-4.2# cp /mnt/sysroot/var/tmp/sosreport new_location
  6. To transfer the archive over the network, use the scp utility:

    sh-4.2# scp /mnt/sysroot/var/tmp/sosreport username@hostname:sosreport
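
If you use static addressing and the destination host is outside the local subnet, you might also need a default route before the transfer. This is a sketch under that assumption; the gateway address is illustrative:

bash-4.2# ip route add default via 10.13.152.1 dev eth0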

F.10.3. Reinstalling the GRUB2 boot loader

In some scenarios, the GRUB2 boot loader is mistakenly deleted, corrupted, or replaced by other operating systems. Use this procedure to reinstall GRUB2 on the master boot record (MBR) on AMD64 and Intel 64 systems with BIOS.

Prerequisites

  • You have booted into rescue mode.
  • You have mounted the installed system / (root) partition in read-write mode.
  • You have mounted the /boot mount point in read-write mode.

Procedure

  1. Change the root partition:

    sh-4.2# chroot /mnt/sysroot/
  2. Reinstall the GRUB2 boot loader, where install_device is the block device on which the boot loader should be installed (see the example after this procedure):

    sh-4.2# /sbin/grub2-install install_device
    Important

    Running the grub2-install command could lead to the machine being unbootable if all the following conditions apply:

    • The system is an AMD64 or Intel 64 with Extensible Firmware Interface (EFI).
    • Secure Boot is enabled.

    After you run the grub2-install command, you cannot boot AMD64 or Intel 64 systems that have Extensible Firmware Interface (EFI) and Secure Boot enabled. This issue occurs because the grub2-install command installs an unsigned GRUB2 image that boots directly instead of using the shim application. When the system boots, the shim application validates the image signature and, if the signature is not found, the system fails to boot.

  3. Reboot the system.
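
For example, if the boot loader was installed on the first disk and that disk is /dev/sda (the device name is illustrative), the command is:

sh-4.2# /sbin/grub2-install /dev/sda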

F.10.4. Using RPM to add or remove a driver

Missing or malfunctioning drivers cause problems when booting the system. Rescue mode provides an environment in which you can add or remove a driver even when the system fails to boot. Wherever possible, it is recommended that you use the RPM package manager to remove malfunctioning drivers or to add updated or missing drivers. Use the following procedures to add or remove a driver.

Important

When you install a driver from a driver disc, the driver disc updates all initramfs images on the system to use this driver. If a problem with a driver prevents a system from booting, you cannot rely on booting the system from another initramfs image.

F.10.4.1. Adding a driver using RPM

Use this procedure to add a driver.

Prerequisites

  • You have booted into rescue mode.
  • You have mounted the installed system in read-write mode.

Procedure

  1. Make the RPM package that contains the driver available. For example, mount a CD or USB flash drive and copy the RPM package to a location of your choice under /mnt/sysroot/, for example: /mnt/sysroot/root/drivers/.
  2. Change the root directory to /mnt/sysroot/:

    sh-4.2# chroot /mnt/sysroot/
  3. Use the rpm -ivh command to install the driver package. For example, run the following command to install the xorg-x11-drv-wacom driver package from /root/drivers/:

    sh-4.2# rpm -ivh /root/drivers/xorg-x11-drv-wacom-0.23.0-6.el7.x86_64.rpm
    Note

    The /root/drivers/ directory in this chroot environment is the /mnt/sysroot/root/drivers/ directory in the original rescue environment.

  4. Exit the chroot environment:

    sh-4.2# exit

F.10.4.2. Removing a driver using RPM

Use this procedure to remove a driver.

Prerequisites

  • You have booted into rescue mode.
  • You have mounted the installed system in read-write mode.

Procedure

  1. Change the root directory to the /mnt/sysroot/ directory:

    sh-4.2# chroot /mnt/sysroot/
  2. Use the rpm -e command to remove the driver package. For example, to remove the xorg-x11-drv-wacom driver package, run:

    sh-4.2# rpm -e xorg-x11-drv-wacom
  3. Exit the chroot environment:

    sh-4.2# exit

    If you cannot remove a malfunctioning driver for some reason, you can instead blocklist the driver so that it does not load at boot time.

  4. When you have finished adding and removing drivers, reboot the system.

F.11. ip= boot option returns an error

Using the ip= boot option in the format ip=[ip address], for example, ip=192.168.1.1, returns the error message Fatal for argument 'ip=[insert ip here]'\n sorry, unknown value [ip address] refusing to continue.

In previous releases of Red Hat Enterprise Linux, the boot option format was:

ip=192.168.1.15 netmask=255.255.255.0 gateway=192.168.1.254 nameserver=192.168.1.250 hostname=myhost1

However, in Red Hat Enterprise Linux 9, the boot option format is:

ip=192.168.1.15::192.168.1.254:255.255.255.0:myhost1::none: nameserver=192.168.1.250

To resolve the issue, use the format: ip=ip::gateway:netmask:hostname:interface:none where:

  • ip specifies the client IP address. You can specify IPv6 addresses in square brackets, for example, [2001:DB8::1].
  • gateway is the default gateway. IPv6 addresses are also accepted.
  • netmask is the netmask to be used. This can be either a full netmask, for example, 255.255.255.0, or a prefix, for example, 64.
  • hostname is the host name of the client system. This parameter is optional.
  • interface is the name of the network interface to configure, for example, enp1s0.
  • none disables automatic configuration and applies the specified addresses statically.
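
For example, the following line uses this format with the enp1s0 interface specified explicitly; the interface name is illustrative, and the addresses match the earlier example:

ip=192.168.1.15::192.168.1.254:255.255.255.0:myhost1:enp1s0:none nameserver=192.168.1.250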


F.12. Cannot boot into the graphical installation on iLO or iDRAC devices

The graphical installer for a remote ISO installation on iLO or iDRAC devices may not be available due to a slow internet connection. To proceed with the installation in this case, you can choose one of the following methods:

  1. Avoid the timeout. To do so:

    1. Press the Tab key on BIOS-based systems, or the e key on UEFI-based systems, when booting from the installation media. This allows you to modify the kernel command-line arguments.
    2. To proceed with the installation, append the rd.live.ram=1 option, and press Enter on BIOS-based systems or Ctrl+X on UEFI-based systems.

      Loading the installation program might take longer with this option.

  2. Another option is to extend the timeout for loading the graphical installer by setting the inst.xtimeout kernel argument, in seconds (see the example after this list):

    inst.xtimeout=N
  3. You can install the system in text mode. For more details, see Installing RHEL 9 in text mode.
  4. In the remote management console, such as iLO or iDRAC, use the direct URL to the installation ISO file from the Download center on the Red Hat Customer Portal instead of a local media source. You must be logged in to access this section.
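
For example, to extend the timeout to five minutes, you would append the following to the kernel command line; the value is illustrative:

inst.xtimeout=300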

F.13. Rootfs image is not initramfs

If you see the following message on the console while booting the installer, the transfer of the installer initrd.img might have had errors:

[ ...] rootfs image is not initramfs

To resolve this issue, download the initrd.img again, or run sha256sum on initrd.img and compare the result with the checksum stored in the .treeinfo file on the installation medium. For example:

$ sha256sum dvd/images/pxeboot/initrd.img
fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 dvd/images/pxeboot/initrd.img

To view the checksum in .treeinfo:

$ grep sha256 dvd/.treeinfo
images/efiboot.img = sha256:d357d5063b96226d643c41c9025529554a422acb43a4394e4ebcaa779cc7a917
images/install.img = sha256:8c0323572f7fc04e34dd81c97d008a2ddfc2cfc525aef8c31459e21bf3397514
images/pxeboot/initrd.img = sha256:fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8
images/pxeboot/vmlinuz = sha256:b9510ea4212220e85351cbb7f2ebc2b1b0804a6d40ccb93307c165e16d1095db

If the initrd.img is correct but you still get the following kernel messages while booting the installer, a boot parameter is often missing or misspelled, and the installer could not load stage2, typically referred to by the inst.repo= parameter, which provides the full installer initial ramdisk for its in-memory root file system:

[ ...] No filesystem could mount root, tried:
[ ...] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
[ ...] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-55.el9.s390x #1
[ ...] ...
[ ...] Call Trace:
[ ...] ([<...>] show_trace+0x.../0x...)
[ ...]  [<...>] show_stack+0x.../0x...
[ ...]  [<...>] panic+0x.../0x...
[ ...]  [<...>] mount_block_root+0x.../0x...
[ ...]  [<...>] prepare_namespace+0x.../0x...
[ ...]  [<...>] kernel_init_freeable+0x.../0x...
[ ...]  [<...>] kernel_init+0x.../0x...
[ ...]  [<...>] kernel_thread_starter+0x.../0x...
[ ...]  [<...>] kernel_thread_starter+0x.../0x…

To resolve this issue, check the following:

  • The installation source specified on the kernel command line (inst.repo=) or in the Kickstart file is correct.
  • The network configuration is specified on the kernel command line, if the installation source is specified as a network location.
  • The network installation source is accessible from another system, as shown in the example after this list.
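
To check that a network installation source is reachable from another system, you can request the .treeinfo file at the root of the installation tree, for example with curl. The URL is illustrative:

$ curl -I http://server.example.com/rhel-9/x86_64/os/.treeinfo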