pvcreate gives a read error

I have several volumes. I was able to run pvcreate for mpathe and it worked without any issue; however, when I run it for mpathd it gives me the following error:

~~~
pvcreate /dev/mapper/mpathd
  Error reading device /dev/mapper/mpathd at 0 length 512.
  Error reading device /dev/mapper/mpathd at 0 length 4.
  Error reading device /dev/mapper/mpathd at 4096 length 4.
  Error reading device /dev/mapper/mpathd at 0 length 512.
  Error reading device /dev/mapper/mpathd at 0 length 4.
  Error reading device /dev/mapper/mpathd at 4096 length 4.
~~~

What should I do to solve this? How can I find out what is wrong?

Some info that could be useful:

~~~
ls -l /dev/mapper/mpathb*
lrwxrwxrwx. 1 root root 7 Jun  3 06:03 /dev/mapper/mpathb -> ../dm-3
~~~

I also added this filter (found here: https://access.redhat.com/discussions/2423461) to the /etc/lvm/lvm.conf file:

~~~
filter = [ "a|/dev/sda|", "a|/dev/mapper/*|", "r/block/", "r/disk/", "r/sd.*/", "a/.*/" ]
~~~
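As a side note on that filter: LVM evaluates filter patterns first-match-wins, so with the ordering above the early accept entries already win for sda and the multipath devices, and the trailing "a/.*/" accepts everything the reject entries were meant to block. The filter is also unlikely to be the cause here, since the kernel is reporting I/O errors on the underlying paths. A minimal multipath-only filter might look like the sketch below (illustrative only, assuming all PVs sit on multipath devices — adjust to your layout):

~~~
# /etc/lvm/lvm.conf (sketch, not a drop-in fix):
# accept multipath devices, reject everything else.
# The first matching pattern wins.
filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
~~~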

It did not work; I got the same error.

I also tried to run fdisk on it, but I got this error:

~~~
fdisk /dev/mapper/mpathd
fdisk: cannot open /dev/mapper/mpathd: Input/output error
~~~

I don't know what I should do to fix this issue.

Responses

Hi Mohammad,

Look for multipath errors in /var/log/messages.
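For example, grep narrows the log down to the relevant lines. A small sketch, using stand-in sample lines (on the real system you would run the grep shown in the comment directly against /var/log/messages):

~~~shell
# Sample lines standing in for /var/log/messages content.
log='kernel: blk_update_request: I/O error, dev sdi, sector 0
multipathd: mpathd: sdi - directio checker reports path is down
systemd: Started Session 42 of user root.'

# Keep only the multipath/SCSI error lines. On a live system:
#   grep -E "multipathd|blk_update_request|Logical unit not ready" /var/log/messages
printf '%s\n' "$log" | grep -E 'multipathd|blk_update_request'
~~~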

Contact your storage team to help find the cause.

Open a support case to get help from Red Hat.

Regards,

Jan Gerrit

Hi, thanks for your message. Basically, the storage guys tell me that this is not their job. The servers are Lenovo and they are not helping either, while Red Hat support tells me that I am on level 3 support, so they do not help either. My only option is this open request. Anyway, I got the tail of the log and it looks like this:

~~~
kernel: sd 18:0:0:3: [sdi] Add. Sense: Logical unit not ready, cause not reportable
kernel: sd 18:0:0:3: [sdi] CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
kernel: blk_update_request: I/O error, dev sdi, sector 0
kernel: sd 17:0:0:3: [sdh] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
kernel: sd 17:0:0:3: [sdh] Sense Key : Not Ready [current]
kernel: sd 17:0:0:3: [sdh] Add. Sense: Logical unit not ready, cause not reportable
kernel: sd 17:0:0:3: [sdh] CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
kernel: blk_update_request: I/O error, dev sdh, sector 0
multipathd: mpathd: sdi - directio checker reports path is down
multipathd: mpathd: sdh - directio checker reports path is down
~~~

Mohammad,

Level 3 support means the reseller of the subscriptions needs to help you if this community cannot help you.

Hint: start a block of output with three tilde signs (~~~), indent each line three spaces, and end the block with three tildes.

This makes the log lines more readable.

Regards,

Jan Gerrit

Thanks, I amended the post :-)

Adding to Jan's suggestion, you may run the pvcreate command in verbose mode (pvcreate -v /dev/mapper/mpathd) and check the messages again. That may also give you a hint.

Hi, thanks for your response. I ran it with -v and checked the messages:

~~~
tail /var/log/messages
kernel: sd 18:0:0:3: [sdi] Add. Sense: Logical unit not ready, cause not reportable
kernel: sd 18:0:0:3: [sdi] CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
kernel: blk_update_request: I/O error, dev sdi, sector 0
multipathd: mpathd: sdi - directio checker reports path is down
kernel: sd 17:0:0:3: [sdh] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
kernel: sd 17:0:0:3: [sdh] Sense Key : Not Ready [current]
kernel: sd 17:0:0:3: [sdh] Add. Sense: Logical unit not ready, cause not reportable
kernel: sd 17:0:0:3: [sdh] CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
kernel: blk_update_request: I/O error, dev sdh, sector 0
multipathd: mpathd: sdh - directio checker reports path is down
~~~

Right, that certainly looks to be an issue on the storage end.

Hi Mohammad,

You said: "Basically, the storage guys tell me that this is not their job." It IS their job! Tell them to DO their job. Just my 2 cents... :)

Regards,
Christian

Also, check whether the "multipath -ll" output for that device shows the paths as failed. That might be the situation there, since you already mentioned "path down".

Hi,

I think it is running; look below:

~~~
mpathd (3600d0231000984a06849afa56df1c5d6) dm-8 PAC Data,PAC GS2024R
size=3.0T features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 17:0:0:3 sdh 8:112 failed faulty running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 18:0:0:3 sdi 8:128 failed faulty running
~~~
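For what it's worth, the failed paths can also be picked out of "multipath -ll" output with a one-liner. A sketch below parses a saved copy of the path lines shown above (assuming the usual layout, where the sdX name follows the H:C:T:L address on each path line):

~~~shell
# Saved sample of the 'multipath -ll' path lines shown above.
mpout='| `- 17:0:0:3 sdh 8:112 failed faulty running
  `- 18:0:0:3 sdi 8:128 failed faulty running'

# Print the sdX device name on every line marked "failed".
# On a live system, pipe multipath -ll into the same awk command.
printf '%s\n' "$mpout" | awk '/failed/ { for (i = 1; i <= NF; i++) if ($i ~ /^sd[a-z]+$/) print $i }'
~~~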

This certainly looks to be a problem at the storage end... https://access.redhat.com/solutions/418203

Hi Mohammad,

Also have a look at "Why do I see I/O errors on a RHEL system using devices from an active/passive storage array?" to see if it contains useful information.

I found this based on the message "Logical unit not ready, cause not reportable", so it might not be a 100% correct solution.

Regards,

Jan Gerrit