Removal of SAS-2 controller drivers in RHEL 8

I've recently been made aware that RHEL 8 will remove support for certain devices that were previously supported by the mpt2sas driver, as listed here:

I can certainly understand the need to purge old drivers that are no longer relevant, especially storage controllers from the SAS-1 or SATA I/II era. However, among the removed devices are the LSI SAS2008/2108/2116 and friends. I point these out specifically for two reasons:

1) They are SAS-2/SATA-III storage controllers, which are still very much relevant for storage servers that use traditional spinning HDDs. As much as SSDs and NVMe have grown in the storage space, I think HDDs still have very important roles to play (bulk storage, media storage, backup storage, etc.). Using faster SAS-3 controllers for HDDs would be a real waste, so I don't fully understand the decision to remove these drivers.

2) LSI SAS2008/2108/2116 storage controllers are still widely deployed. They are also often re-used from the second-hand market to build storage servers for small and medium-sized businesses, start-up companies, and "home lab" hobbyists, and they are frequently re-branded by Dell, HP, IBM/Lenovo, etc., so they are in very wide circulation. With this change, all the systems that still use these controllers have no easy path to upgrade to RHEL 8.

So, my questions are:

a) What was the reasoning for removing these specific drivers?
b) Will Red Hat reconsider this decision, or is it etched in stone at this point?
c) If the decision cannot be changed, can RHEL provide a "legacy drivers" kernel sub-package so that servers with these controllers can still upgrade to RHEL 8 in the future?



I assume you are using physically different USB sticks for 8.0 and 8.1. Just to confirm it is not caused by a hardware issue, could you use the drive you used for 8.0 to install 8.1?

I am using 2 different USB sticks:

  • big USB stick 1 for the RHEL install media
    • I have used the same stick for both 8.0 and 8.1
    • I just copied 8.0 to it when 8.1 was failing
    • Since I was not the original creator of the 8.1 copy, I will retry tomorrow with a fresh copy of 8.1
  • small USB stick 2 for dd-mptsas-3.04.20-1.xx.elrepo.iso

I am not sure how the original creator of "big USB stick 1" with RHEL 8.1 created it. However, when I went to re-copy the RHEL 8.1 ISO to it, I discovered the stick is too small (nominal 8GB). I will check with the original creator when he gets back on the 2nd.

I acquired a new "bigger USB stick 3" (32GB capacity) and copied the RHEL 8.1 ISO to it. I am happy to report that this new stick, in conjunction with dd-mptsas-3.04.20-1.el8_1.elrepo.iso on USB stick 2, installs successfully!

So my problem turns out to be the same issue Joshua had: corrupt media.

Thanks for the help Akemi.
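For anyone else chasing suspected corrupt media: one quick check is to hash the source ISO and the same number of bytes read back from the stick, and compare. A rough sketch follows; the `verify_media` helper is made up for illustration, and the demo runs against throwaway files because on real hardware the second argument would be the block device the ISO was dd'd to (e.g. /dev/sdX, which you should double-check with lsblk first).

```shell
# verify_media ISO DEVICE
# Compares the ISO's sha256 with the same number of bytes read back
# from the target (a block device on real hardware, e.g. /dev/sdX).
verify_media() {
    iso=$1; dev=$2
    iso_sum=$(sha256sum "$iso" | awk '{print $1}')
    dev_sum=$(head -c "$(stat -c %s "$iso")" "$dev" | sha256sum | awk '{print $1}')
    if [ "$iso_sum" = "$dev_sum" ]; then echo "media OK"; else echo "media CORRUPT"; fi
}

# Demo on throwaway files standing in for the ISO and the stick.
tmp=$(mktemp -d)
head -c 1048576 /dev/urandom > "$tmp/fake.iso"
cp "$tmp/fake.iso" "$tmp/fake-stick"
verify_media "$tmp/fake.iso" "$tmp/fake-stick"     # media OK
printf 'CORRUPT' | dd of="$tmp/fake-stick" conv=notrunc 2>/dev/null
verify_media "$tmp/fake.iso" "$tmp/fake-stick"     # media CORRUPT
rm -rf "$tmp"
```

Reading back only the ISO's size matters: a stick is usually larger than the image, so hashing the whole device would never match.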

Continuing on with my testing of 2 different scenarios:

  • Installing 8.0 and updating to 8.1 with yum update
    • Install 8.0 using dd-mptsas-3.04.20-1.el8_0.elrepo.iso (success)
    • yum update dracut (to get latest dracut package for 8.1)
    • reboot
    • yum update (to update to 8.1)
    • reboot (failure - no drives found)
    • had to reboot and use earlier kernel
  • Installing 8.0 and limiting to 8.0 with subscription-manager
    • Install 8.0 using dd-mptsas-3.04.20-1.el8_0.elrepo.iso (success)
    • subscription-manager release --set=8.0
    • yum update dracut (to get latest dracut 8.0 package)
    • reboot
    • yum update (to get latest versions of 8.0 packages)
    • reboot (success)

So my first scenario of installing 8.0 and then using yum update to update to 8.1 fails.

  • Is this expected?
  • Should the mptsas driver package survive an update to 8.1?
  • Is this because the dracut package for 8.1 does not contain the fixes that the dracut package for 8.0 does?

Unfortunately, kmod-mptsas built for 8.0 does not survive the update to 8.1. You'd need to install the .el8_1 version when updating the OS to 8.1.

By the way, kmod packages for 8.1 are in the elrepo-testing repository at the moment. We will be moving them to the main repo shortly.
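One way to catch this before rebooting is to check whether the new kernel's module tree actually contains the driver at all. A small sketch, assuming the usual /lib/modules layout; it is demonstrated against a mock directory tree, and the helper name and kernel version strings are only illustrative.

```shell
# kmod_present KVER MODNAME [MODROOT]
# Succeeds if MODNAME has a .ko (or .ko.xz) anywhere under MODROOT/KVER.
kmod_present() {
    kver=$1; mod=$2; root=${3:-/lib/modules}
    find "$root/$kver" -name "$mod.ko*" 2>/dev/null | grep -q .
}

# Demo against a mock tree; version strings are only illustrative.
root=$(mktemp -d)
mkdir -p "$root/4.18.0-80.el8.x86_64/extra/mptsas" "$root/4.18.0-147.el8.x86_64"
touch "$root/4.18.0-80.el8.x86_64/extra/mptsas/mptsas.ko"

kmod_present 4.18.0-80.el8.x86_64 mptsas "$root" \
    && echo "8.0 kernel: mptsas present"
kmod_present 4.18.0-147.el8.x86_64 mptsas "$root" \
    || echo "8.1 kernel: mptsas missing -- install the .el8_1 kmod before rebooting"
rm -rf "$root"
```

On a real system you would drop the third argument and pass the new kernel version from `rpm -q kernel` before rebooting into it.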

I see. Thanks for verifying that!

Just a note to say that the el8_1 packages that were in the elrepo-testing repo have been moved to the main repo.

There is a ridiculous number of servers out there that still use the storage cards affected by Red Hat's decision. "Oh, just use ELRepo to install" seems to be the response. This is a horrible answer in that it now becomes INCREDIBLY difficult to automate, for example: you need this specific driver disk, but no, you need the OTHER one now because you're installing 8.1 instead of 8.0, and so on.

I saw a question earlier in this discussion that basically asked, "What was Red Hat's reasoning for removing these?" I don't believe that question was ever addressed.

I am disappointed, and angry at how much extra work this decision has added as it relates to keeping my infrastructure alive.

And it's causing a MASSIVE delay in attempting to adopt RedHat version 8.

@Kent Brodie : I'm the OP, so I feel your frustration. As you might have noticed, I started this discussion back in June 2019! It's been half a year now and RHT hasn't really responded with why, or said whether they are reconsidering this decision. It would be nice if they at least acknowledged that this decision has become a problem and that they are looking into it.

That said, if you are a RHT customer, I will reiterate what Jamie Bainbridge (Red Hat) said:

[quote] Could we trouble you to please open a Severity 4 support case for this query?

It will be directed to our storage engineers who can answer your questions.

It's also important for us to capture customer demand for these removed drivers, and support cases linked to knowledgebase articles and to bugs are the way we gauge that demand. [/quote]

Unfortunately, I'm not a direct customer, only a consultant, and several of my customers are affected by this. And as you said, they are not adopting RHEL 8 yet because of it. The only thing I can do is ask my customers to complain to RHT. So, if you are a RHT customer, I ask that you open a formal support case as suggested by JamieB above.

[quote] Could we trouble you to please open a Severity 4 support case for this query? [/quote]

I opened such a case and eventually gave up after many moons of vague responses and strange circular logic. One such argument was that Dell was no longer supporting this hardware, even though I produced emails showing Dell still trying to sell a warranty for it.

Another strange argument was that these Dell systems were not even supported by Dell for RHEL 7.

And then finally, Dell does not support RHEL8. Oye.

@Daryl Herzmann Yikes! That must have been frustrating! (been there before... in other situations)

Which model Dell servers were these?

This problem is much broader than Dell servers, though... I know the Dell PERC H310, H200, and H700 all use one of the removed drivers, but SAS2008/SAS2108/SAS2116 controllers are in many other servers (IBM/Lenovo, Supermicro, Fujitsu, Gigabyte, ASRock, etc.)...

We only have Red Hat academic "self support", so it does not appear I am in a position to open a formal case. (Based on the other response to my post, it does not appear that would actually do anything useful anyway.)

It would be great if someone from Red Hat's storage team would respond directly here, though.

Yes, in my case they're Dell 11th/12th generation servers, which aren't the newest toys available, but I have been chatting with my peers around the globe, and well, there are a LOT of servers like this still in service. A LOT.

The most recent server I have affected by this was purchased NEW FROM DELL only 4.5 years ago. Sorry, that's just not that old.

I agree with you, there are plenty of Westmere/SandyBridge/IvyBridge Xeon servers that are still in service and perfectly adequate for the purposes they serve.

The failure of previous attempts to get Red Hat's attention isn't a sign not to try again. The point is to create enough noise that someone gets sick of answering the same questions over and over and escalates the issue to someone who can effect change. This was the point of me starting this thread (and a few other places online), but it looks like it needs more boost... LOL.

In my case they're Dell Rx10 (R310, R510, R710) servers [older], and also Dell Rx20 (R320, R420, R720) [NOT so old].

And of course, ALL with LSI/MegaRAID and "PERC" controllers, e.g. H700 etc.

This thread has been super helpful and informative. I ran into this issue head-on as I've begun testing RHEL 8/CentOS 8 for upcoming machine deployments... the deployment machines will be new(ish); the test machines aren't. The card that I'm having issues with shows up in lspci -nn as: 03:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2008 [Falcon] [1000:0073] (rev 03). I've installed kmod-mpt3sas-, kmod-mptsas-3.04.20-2.el8_1.elrepo.x86_64, and kmod-megaraid_sas-07.707.51.00-1.el8_1.elrepo.x86_64 and rebooted, but the drives behind these cards still aren't showing up. Am I using the wrong driver or missing a step in the process? Any help would be appreciated.

According to the device ID pairing [1000:0073], your controller is supported by the megaraid_sas driver. What is the version of your running kernel? Is the module loaded (lsmod)?

Please show us the output from:

find /lib/modules -name megaraid_sas.ko
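A related check that sometimes helps: ask the module alias database which driver, if any, claims the device ID reported by lspci -nn. The sketch below greps a modules.alias file; to keep it self-contained it uses a tiny sample file modeled on aliases quoted in this thread, whereas on a real system you would point it at /lib/modules/$(uname -r)/modules.alias. The `who_claims` helper is made up for illustration.

```shell
# who_claims VENDOR DEVICE ALIASFILE
# Prints the module name(s) whose pci alias matches vendor:device.
who_claims() {
    grep -i "pci:v0000$1d0000$2" "$3" | awk '{print $3}' | sort -u
}

# Tiny sample file; the real one is /lib/modules/$(uname -r)/modules.alias
alias_file=$(mktemp)
cat > "$alias_file" <<'EOF'
alias pci:v00001000d00000073sv*sd*bc*sc*i* megaraid_sas
alias pci:v00001000d00000058sv*sd*bc*sc*i* mptsas
EOF

who_claims 1000 0073 "$alias_file"   # prints: megaraid_sas
who_claims 1000 005b "$alias_file"   # prints nothing: no driver claims it
rm -f "$alias_file"
```

If nothing claims your ID under the running kernel, no amount of module loading will make the drives appear.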

Hi there,

I can confirm it's using the megaraid_sas driver. The kmod-megaraid_sas.x86_64 07.707.51.00-1.el8_1.elrepo package provides a module version that includes the alias for my device, but unfortunately, after that version, nothing works.

I'm fighting with my server to try to use that older version instead of the new one.

If you have any insight on how to customize the module aliases in the ko file to allow dracut to boot, I'm really interested :)

When I change the /lib/modules/4.18.0-187*/modules.alias file to add the hardware device ID to the megaraid_sas driver, it seems to be ignored by the dracut command.

Hi Thibaut,

Could you elaborate a bit more? When you say "old version" and "new version", do you mean the device ID? If so, what is the ID you'd like to see added to the driver?

Hi there, sorry, I was so happy to find that I'm not alone that my message came out a bit unclear :)

So, at the moment I have an H310, which is basically:

# lspci -nnv -s 01:00.0
01:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2008 [Falcon] [1000:0073] (rev 03)

So it's 1000:0073. At the moment it is handled by megaraid_sas, which is provided by:

# modinfo megaraid_sas
filename:       /lib/modules/4.18.0-147.el8.x86_64/extra/megaraid_sas/megaraid_sas.ko
description:    Broadcom MegaRAID SAS Driver
version:        07.707.51.00-rc1
license:        GPL
rhelversion:    8.1
srcversion:     1CDFF574D9202A94AABD3E9
alias:          pci:v00001000d00000073sv*sd*bc*sc*i*

So as you can see, the alias is present, and that's why the HW is taken care of by megaraid_sas.

The file is provided by the following package:

# yum whatprovides /lib/modules/4.18.0-147.el8.x86_64/extra/megaraid_sas/megaraid_sas.ko
Last metadata expiration check: 0:31:53 ago on Fri 27 Mar 2020 05:16:23 PM CET.
kmod-megaraid_sas-07.707.51.00-1.el8_1.elrepo.x86_64 : megaraid_sas kernel module(s)

Now, if I update my kernel to 4.18.0-187.el8, it will use a different module, and the problem is that the module doesn't have the same alias lines:

# modinfo /lib/modules/4.18.0-187.el8.x86_64/kernel/drivers/scsi/megaraid/megaraid_sas.ko.xz
filename:       /lib/modules/4.18.0-187.el8.x86_64/kernel/drivers/scsi/megaraid/megaraid_sas.ko.xz
description:    Broadcom MegaRAID SAS Driver
version:        07.710.50.00-rc1
license:        GPL
rhelversion:    8.2
srcversion:     13A32FE28510E4471D94601
alias:          pci:v00001000d000010E7sv*sd*bc*sc*i*
alias:          pci:v00001000d000010E4sv*sd*bc*sc*i*
alias:          pci:v00001000d000010E3sv*sd*bc*sc*i*
alias:          pci:v00001000d000010E0sv*sd*bc*sc*i*
alias:          pci:v00001000d000010E6sv*sd*bc*sc*i*
alias:          pci:v00001000d000010E5sv*sd*bc*sc*i*
alias:          pci:v00001000d000010E2sv*sd*bc*sc*i*
alias:          pci:v00001000d000010E1sv*sd*bc*sc*i*
alias:          pci:v00001000d0000001Csv*sd*bc*sc*i*
alias:          pci:v00001000d0000001Bsv*sd*bc*sc*i*
alias:          pci:v00001000d00000017sv*sd*bc*sc*i*
alias:          pci:v00001000d00000016sv*sd*bc*sc*i*
alias:          pci:v00001000d00000015sv*sd*bc*sc*i*
alias:          pci:v00001000d00000014sv*sd*bc*sc*i*
alias:          pci:v00001000d00000053sv*sd*bc*sc*i*
alias:          pci:v00001000d00000052sv*sd*bc*sc*i*
alias:          pci:v00001000d000000CFsv*sd*bc*sc*i*
alias:          pci:v00001000d000000CEsv*sd*bc*sc*i*
alias:          pci:v00001000d0000005Fsv*sd*bc*sc*i*
alias:          pci:v00001000d0000005Dsv*sd*bc*sc*i*
alias:          pci:v00001000d0000002Fsv*sd*bc*sc*i*
alias:          pci:v00001000d0000005Bsv*sd*bc*sc*i*

So if I reboot, it tells me that my drive doesn't exist, and I can't boot my OS.

Dell seems to have a compiled version of the 07.710 megaraid_sas driver for the H310 here, but I can't find the source of that driver or an RHEL 8 build of it.

So I'm trying to tweak the initramfs of the -187 kernel so that megaraid_sas 07.710 will take the 0073 device ID into consideration, knowing that it might not work even so.

Is that clearer?

It is clear now. :)

You are trying to use kernel 4.18.0-187.el8. That is a CentOS Stream kernel, and it is not supported by ELRepo because the kABI is effectively broken there.

Perhaps the easiest path for you will be to rebuild ELRepo's package against your running kernel. It may not survive the next kernel update, though.

There is yet another thing you can look at: try installing the kernel-ml package from ELRepo. It has a megaraid_sas kernel module that includes [1000:0073].
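On the earlier modules.alias experiment: that file is generated by depmod from the alias strings embedded in each .ko, so hand edits get regenerated away and never stick; only a module that actually carries the alias (and the supporting code) will claim the device, which is why the rebuilt kmod works where editing the text file doesn't. For reference, the mapping from an lspci -nn ID to the alias pattern is mechanical; a sketch of the conversion (the `pci_alias` helper is made up for illustration):

```shell
# pci_alias VENDOR DEVICE
# Converts an lspci -nn style ID (e.g. 1000:0073) into the pattern
# that appears in modinfo output and modules.alias.
pci_alias() {
    printf 'pci:v%08Xd%08Xsv*sd*bc*sc*i*\n' "0x$1" "0x$2"
}

pci_alias 1000 0073   # prints: pci:v00001000d00000073sv*sd*bc*sc*i*
```

So a quick way to see whether a given module build still claims your card is to grep its modinfo output for that fixed string (with `grep -F`, since the pattern contains `*`).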

Hi Thibaut,

OK, just for testing, I have built the kmod-megaraid_sas package for CentOS Stream kernel-4.18.0-187.el8. You can find it here:

It is not signed and is meant for testing purposes.

Thanks, I'll give it a try very soon!

Any chance you could explain how this is done? I have no idea how to apply these kinds of changes myself if I want to make it future-proof.

Also, if I may, you might want to remind people in the readme file to rebuild the image using

depmod -a 4.18.0-187.el8.x86_64

dracut -f --kver 4.18.0-187.el8.x86_64

Otherwise they won't understand why it's still not booting :) Also, as your package is marked as an update of the previous one, installing it removes the 4.18.0-147 version of the package, rendering all kernels non-bootable except the rescue one.

I don't know if you can make your package install-only, or if that should be part of the readme as well :) Thanks

Kernel Version is 4.18.0-147.3.1.el8_1.x86_64 (from uname -a)

Output from that find:

[root@edrei ~]# find /lib/modules -name megaraid_sas.ko

It looks like the kernel module is loaded:

[root@edrei ~]# lsmod | grep -i mega
megaraid_sas          155648  0

For context, this is in a 4U 36-bay Supermicro server; there's one card for the front 24 drives and one for the back 12. I've double-checked the cards and they are in JBOD mode for all the drives, but all I get are the two OS disks plugged directly into the motherboard (so I'm not having boot issues, fortunately).

For anyone who needs a little help using the ELRepo driver with RHEL 8/CentOS 8, I made a video showing the process:

That is quite impressive! I'm sure the video will help a lot of people.

I'd like to make one suggestion: before booting a new kernel, it is a good idea to look in /lib/modules and make sure that there is a symbolic link to the newly installed kernel. Kmod packages usually survive kernel updates within a minor release, but this is not guaranteed.
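That symlink check can be scripted. If the kmod survived the update, the new kernel's weak-updates directory normally contains a link for the driver that resolves to a real .ko; if the link is missing or dangling, don't reboot into that kernel yet. A sketch against a mock tree; on a real system the root would be /lib/modules, the kernel names would be actual versions, and the `weak_update_ok` helper is made up for illustration.

```shell
# weak_update_ok KVER MODNAME [MODROOT]
# Succeeds only if a weak-updates link for MODNAME exists under the
# given kernel AND resolves to a real file.
weak_update_ok() {
    kver=$1; mod=$2; root=${3:-/lib/modules}
    link=$(find "$root/$kver/weak-updates" -name "$mod.ko*" 2>/dev/null | head -n1)
    [ -n "$link" ] && [ -e "$link" ]
}

# Mock tree: a good link for one kernel, a dangling one for another.
root=$(mktemp -d)
mkdir -p "$root/old/extra/mptsas" "$root/new-ok/weak-updates/mptsas" "$root/new-bad/weak-updates/mptsas"
touch "$root/old/extra/mptsas/mptsas.ko"
ln -s "$root/old/extra/mptsas/mptsas.ko" "$root/new-ok/weak-updates/mptsas/mptsas.ko"
ln -s "$root/gone/mptsas.ko" "$root/new-bad/weak-updates/mptsas/mptsas.ko"

weak_update_ok new-ok  mptsas "$root" && echo "safe to reboot"
weak_update_ok new-bad mptsas "$root" || echo "dangling link -- reinstall the kmod first"
rm -rf "$root"
```

The `-e` test follows the symlink, so a link whose target was removed by a package update fails the check even though the link itself still exists.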

Kia ora SSA from NZ here;

I created several Bugzillas for this during the RHEL 8 beta. They are floating about, but my Google fu is weak at the moment.

Whilst I feel your pain, the problem is that a large swathe of these SAS and RAID cards are built around a Power-based system-on-chip, and the tranche of IP movements when the Qlogic/Broadcom etc. deals went through meant a loss of a maintainable code base and of microcontroller support. The SoCs that the tranche of removed PCI IDs in the RHEL 8 kernel belong to are all particular revisions that the upstream vendor no longer supports, AND where there are known data-corruption, stability, and support issues in the publicly available codebase.

Whilst we have tried in the past to red-flag known hardware vendor issues (in big kernel dmesg output, etc.), it doesn't deter people from using the hardware; and then it becomes a finger-pointing game later on when the inevitable issues occur.

So yes, it's frustrating (I have a bunch of affected servers in the room next door), but when you consider that the alternative is predictable, deterministic loss of mission-critical data and systems for our clients, it makes sense.

I'm sorry, maybe I'm the only idiot who doesn't quite understand...

what is this "Power based SoC" stuff you're talking about and how does it relate to this topic?

When you're talking about IP movements, are you referring to the LSI -> Broadcom/Avago Tech acquisition? Are you saying that during that acquisition, support for certain LSI controllers became non-existent? Please clarify. What data-corruption/stability issues? Maybe links to the Bugzilla reports would help here... Without them, I'm not even sure what you're talking about or what specifically to search for.

In short, I'm really at a loss as to what you're trying to say. I'm sorry if I'm too ignorant here, but if you or someone who gets this can spare a moment to explain, that would be appreciated. Links to information work too.

Follow-up: OK, sorry, after re-reading your message, I think I'm understanding a little more... by "Power based SoC", you're talking about the LSI controller chips like SAS2008/SAS2108, right? They are PowerPC cores?

But then, I don't get why the SAS2208/SAS2308 PCI IDs were not dropped as well. Those too are PowerPC based, just running at higher clock speeds with PCIe 3.0 support (in some revisions), and they seem to share a lot of IP with the SAS2008/SAS2108... so I still don't get the claim that support was lost due to IP movement. Does it have anything to do with IP movement? Or are you just saying Avago Technologies (Broadcom) dropped support for the SAS2008/SAS2108, and RHT doesn't want to be caught holding the hot potato when an issue arises and they can't work with the hardware vendor?

I don't know the specifics of each case (and there are a lot of them); as mentioned, it's a combination of: a) chip not supported/maintained by the vendor, b) vendor not fixing known problems upstream, c) no mainline kernel maintainers, or maintainers unable to maintain due to lack of documentation, d) ancient and ageing hardware that compromises the performance and stability of other components, e) data-corruption issues, f) customer expectations of support from RH.

There will be different individual paths to which of those are relevant, but hardware deprecations are common between major releases (and even more so in mainline, if you discard anything during kernel compilation that is earmarked as dubious).

You can always run RHEL 7 if you want a system supported under subscription, and/or run unsupported third-party kernel modules compiled against the RH kernel.

All of the variously badged cards are powered by a PowerPC SoC design that was integrated by several hardware OEMs and re-badged extensively.

The deprecated and removed PCI IDs are listed in the RHEL 8 release notes here:

There are public discussions on the kernel mailing list around some of these issues relating to specific drivers and cards; searching for your specific hardware in mainline commits and on LKML is likely a good start to unravelling the particular story of why something is no longer supported.

The point I was attempting to make about the commonly integrated PowerPC-based RAID card issue is one I have been across a little (having some hardware with the Falcon PERC3i HBA/RAID cards affected), and I'm just an SSA, so I am socializing what I was advised when I flagged it.

Red Hat did that legwork for you and made a call not to include these drivers "in the box"; adding an out-of-support kernel module is very much possible, but also very much an indication that you are no longer in a standard support envelope.

What is "SSA" by the way? Not familiar with SSA other than Social Security Administration... which I don't think is what you meant.

I wonder if Serial Storage Architecture [1] is the correct expansion of that abbreviation?


Specialist Solution Architect

Thank you.

Yup - pretty much.

Hi, I am pretty late in joining this discussion, but as this is a current topic for me right now, I would like to ask about driver support for 03:00.0 SCSI storage controller: Broadcom / LSI SAS1068E PCI-Express Fusion-MPT SAS (rev 08) on an R710 box.

I already tested ELRepo's dd-mptsas-3.04.20-2.el8_1.elrepo.iso & dd-mpt3sas- without the expected success.

RHEL 7 shows:

# lsmod | grep mpt
mptsas                 62316  4
scsi_transport_sas     41224  1 mptsas
mptscsih               40150  1 mptsas
mptbase               106036  2 mptsas,mptscsih

so mptsas should be fine. But unfortunately it is not.

What makes me wonder is the fact that the Broadcom / LSI SAS1068E PCI-Express Fusion-MPT SAS is not on the removal list, and the controller obviously works with mptsas on RHEL 7.

Can someone enlighten me here and explain what the actual problem is?

Many thanks !

To say anything for sure, I need to know your device IDs [xxxx:yyyy] as shown by lspci -nn.

There you are:

03:00.0 SCSI storage controller [0100]: Broadcom / LSI SAS1068E PCI-Express Fusion-MPT SAS [1000:0058] (rev 08)

The device with [1000:0058] is indeed unsupported in the current RHEL 8. The removal list also shows that the device is handled by the mptsas driver. Therefore, ELRepo's packages are supposed to work.

Did the installation process find the driver?

Unfortunately we can't add screenshots, but my assumption was that it did. I used inst.dd=nfs:/share/driver.iso, and the installation process would have quit if the file were not in place :).

In the tutorial video kindly provided by Valued Customer, go to 18:50 or so and you will see the actual driver name. During the installation, did you actually see a driver disk selection like that?

I can confirm that the Oracle "Unbreakable Enterprise Kernel" (UEK) brings full support back for MegaRAID without the need to run a tainted kernel. This is kernel version 5.4.17, a long-term support release. (As a bonus, it also supports my 3Com Vortex card, and aacraid is back as well.)

This brings a large amount of hardware back to the table, and it does appear to support ancient MegaRAID cards. Loading this kernel will also restore Btrfs support, among other features.

This is the yum repo:

$ cat /etc/yum.repos.d/uek-ol8.repo 
name=Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux $releasever ($basearch)

name=Oracle Linux 8 UEK6 RDMA ($basearch)

Issue a "yum install kernel-uek" after placing this file.

Hi, hopefully this topic is still 'live'. I successfully followed the steps and used the ELRepo dd drivers for some MegaRAID systems, but now I'm encountering a problem with a system using an Intel/Adaptec RAID card. The system is currently running CentOS 6; here is the relevant lspci -nn output:

00:1f.2 RAID bus controller [0104]: Intel Corporation 82801GR/GDH (ICH7R/ICH7DH) SATA Controller [RAID mode] [8086:27c3] (rev 01)

I've tried the 8.1 installer by itself and no drives are visible. I've also tried dd-aacraid-1.2.1-2.el8_1.elrepo.iso via inst.dd=/dev/xxx (since during BIOS boot the RAID controller shows up as Adaptec, despite the info in lspci), to no avail. Should I be using a different driver? Is there a driver available for this hardware (it is pretty old)?

Hi, I am trying to upgrade RHEL 7.8 to 8.2 on a Dell T110 server. The SAS controller is:

Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

When I reboot to continue the upgrade, the dracut emergency shell comes up, and no SAS drive is present.

The leapp upgrade created initramfs-upgrade.x86_64.img. As far as I understand, I need to put the right kernel module into this image for it to work with the LSI controller.

How do I do this?

You can manually add the desired module ("foo") to your image like so:

dracut --add-drivers foo -f /boot/initramfs-upgrade.x86_64.img upgrade

Or alternatively you can add the driver(s) you want to a dracut.conf file. For example:

cat /etc/dracut.conf.d/foo.conf

add_drivers+=" foo "

which will ensure the "foo" module is always added to any new initramfs images that are created.

See the dracut and dracut.conf man pages for more information.

I have the following RAID controller ==>

>>07:08.0 SCSI storage controller: Broadcom / LSI SAS1068 PCI-X Fusion-MPT SAS (rev 01)<<

During kickstart I used the ELRepo ISO, dd-mptsas-3.04.20-3.el8_2.elrepo.iso. Otherwise, I wouldn't have detected the hard disk. Thanks, ELRepo, for doing a great job supporting the SAS1068 with RHEL 8.2!

Further, I noticed my RAID BIOS is maintaining my disk management, i.e. the RAID-1 mirror (as I expected). I am used to using the MegaRAID GUI. I installed MegaRAID, but it doesn't go through discovery to detect the driver, and essentially the controller. It seems I am out of luck maintaining the RAID using MegaRAID.

The kernel has loaded mptsas, e.g. from the lsmod output:

>>mpt3sas               294912  1
>>raid_class             16384  1 mpt3sas
>>mptsas                 69632  8
>>scsi_transport_sas     45056  2 mptsas,mpt3sas
>>mptscsih               45056  1 mptsas
>>mptbase                98304  2 mptsas,mptscsih

additional info:

# modprobe maptsas
modprobe: FATAL: Module maptsas not found in directory /lib/modules/4.18.0-193.1.2.el8_2.x86_64

mptsas.ko -> /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko

So now my dilemma is how to maintain the RAID using at least the mptsas utilities. I'd appreciate any insight into the matter.

So, I was in early on this thread. The responses helped me install RHEL 8 on some older Dell boxes that still had some life left in them. Everything was great until the RHEL 8.2 kernel came out. Now I can't boot into the new kernel. I could use some help with updating the new kernel/mpt3sas/dracut combination. I can still boot into the 8.1 kernel, so these boxes are operational.

I appreciate the help. How-Tos only seem to address the issue during an install, not on a kernel updated operational OS box.