Error messages when booting with locked self-encrypted drives.
Issue
- The system contains 16 self-encrypting Intel SSD DC3500 drives; the boot drive itself is not encrypted. When the system boots with the drives locked, the following error messages appear in dmesg as the drives are probed:
sd 2:0:1:0: [sdj] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 2:0:1:0: [sdj] Sense Key : Illegal Request [current]
sd 2:0:1:0: [sdj] Add. Sense: Security conflict in translated device <----
sd 2:0:1:0: [sdj] CDB: Read(10): 28 00 00 00 00 08 00 00 08 00
__ratelimit: 319 callbacks suppressed
Buffer I/O error on device sdj, logical block 1
sd 2:0:2:0: [sdk] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 2:0:2:0: [sdk] Sense Key : Illegal Request [current]
sd 2:0:2:0: [sdk] Add. Sense: Security conflict in translated device <----
sd 2:0:2:0: [sdk] CDB: Read(10): 28 00 00 00 00 08 00 00 08 00
sd 2:0:0:0: [sdi] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 2:0:0:0: [sdi] Sense Key : Illegal Request [current]
sd 2:0:0:0: [sdi] Add. Sense: Security conflict in translated device
sd 2:0:0:0: [sdi] CDB: Read(10): 28 00 00 00 00 08 00 00 08 00
sd 2:0:3:0: [sdl] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
- A udev rule calls a script that unlocks each drive using hdparm, but the drives are being accessed before udev can unlock them. Is there a way to unlock the drives sooner, or to prevent the drives from being accessed until udev unlocks them? Appending noprobe to the kernel line in /etc/grub.conf did not help.
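For reference, a minimal sketch of the kind of udev rule and unlock script described above. The rule file name, script path, password file location, and ID_MODEL match are assumptions for illustration, not details taken from the affected system; verify the model string with udevadm info on the actual drives.

```shell
# /etc/udev/rules.d/99-sed-unlock.rules  (hypothetical file name)
# Run the unlock script once for each matching disk as it is added.
# The ID_MODEL match is an assumption; confirm it with:
#   udevadm info --query=property --name=/dev/sdX
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", \
    ENV{ID_MODEL}=="INTEL_SSD*", RUN+="/usr/local/sbin/sed-unlock.sh /dev/%k"

# /usr/local/sbin/sed-unlock.sh  (hypothetical path)
#!/bin/bash
dev="$1"
# Password storage is an assumption; keep the file root-readable only.
pass="$(cat /etc/sed-password)"
# --security-unlock issues the ATA SECURITY UNLOCK command with the
# user password ("--user-master u" selects the user password slot).
hdparm --user-master u --security-unlock "$pass" "$dev"
```

Note that this only restates the mechanism already in place; as the report describes, a RUN+= rule still fires after the kernel has probed the device, so the first reads can race the unlock and produce the "Security conflict in translated device" errors above.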
Environment
- Red Hat Enterprise Linux 6
- Self-encrypting Intel SSD DC3500 drives