I don't know why iSCSI is so slow


Hi there.

I have a question: my RHEV storage domain server is very slow.

I created a VM on my hypervisor-1 server.

The OS install shows 7 hours remaining... that indicates a problem, right?

I don't know what it is.

Responses

Hi,

 

Please provide some details:

- Storage configuration and properties (type, spindle count and types, path count, network speed and type, etc.)

- VM type, guest OS, configuration

- Installation source details (ISO domain/PXE server/how configured/how it is accessed)

 

Storage configuration:

- type: iSCSI, 700 GB (SATA II, 7200 rpm)

- network speed: 1000 Mbps

VM type: RHEL 6.2 x86_64, preallocated disk

Installation source:

- using NFS from the hypervisor-1 server

 

And the network:

hypervisor-1

nic1 192.168.x.x 1000 Mbps (RHEV-M)

nic2 10.10.x.x   1000 Mbps (service network)

nic3 20.20.x.x   1000 Mbps (storage)

hypervisor-2 has the same network environment.

 

iSCSI server network

20.20.x.x 1000Mbps

OK, so you have a single 1 Gbps path to the iSCSI server, which is backed by slow, non-enterprise disks.

Where do you export the installation image from? I mean, where is the NFS server, and how is it configured (what's in /etc/exports)?

 

Also, do you use virtio or IDE for the VM disk image?

 

And one last question: have you measured the actual bare-metal speeds of reading from the NFS share and writing to the iSCSI LUN?
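One rough way to check the NFS read side from the hypervisor is a sequential dd read of a large file on the mount. This is just a sketch; the FILE path is a placeholder for wherever the ISO domain is actually mounted:

```shell
# Sequential read of a large file from the NFS mount; dd prints the MB/s figure.
# FILE is a placeholder -- substitute a real large file on the ISO domain mount.
FILE=/path/to/large/file/on/nfs/mount
dd if="$FILE" of=/dev/null bs=1M
```

Reading to /dev/null keeps local disk writes out of the measurement, so the number reflects the NFS/network path only.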

The NFS share is exported from the RHEV Manager server.

Here is the /etc/exports entry: /exports/iso   192.168.11.200/24(rw,wdelay,root_squash,no_subtree_check)

The VM disk image type is virtio, and it writes to the iSCSI LUN.

Here are the vmstat / iostat results:

[root@storageSV ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0    148  12160 1557084  40992    0    0     2    44   43   45  0  0 100  0  0
 

 

[root@storageSV ~]# iostat
Linux 2.6.18-300.el5 (storageSV)  05/07/2012

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.06    0.00    0.15    0.05    0.00   99.74

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               2.40         9.05       173.56    2655382   50938530
sda1              0.00         0.01         0.00       1822          4
sda2              0.41         2.86         5.58     839898    1638560
sda3              0.00         0.01         0.00       1736        328
sda4              1.99         6.17       167.97    1811502   49299638

I guess storageSV is your iSCSI server? What I meant was to measure the iSCSI access speeds from the RHEV hypervisor, not locally.

Hi, Dan. Could you tell me how to run the speed test from the hypervisor to the iSCSI server? I have no idea how, sorry.

Please walk me through it.

Thank you so much.

Sure: go into the root shell of the hypervisor, make sure your iSCSI target is connected, create a spare LV on it, and run dd if=/dev/zero of=/dev/mapper/test-LV bs=512 count=20000

 

Should be good enough.

Don't forget to get rid of the test LV after you're done.
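Put together, the procedure Dan describes looks roughly like this. A sketch only: the volume group name iscsi_vg is an assumption (substitute the VG that sits on the iSCSI LUN; `vgs` lists them), and oflag=direct bypasses the page cache so dd times the actual device rather than RAM:

```shell
# Create a small throwaway LV on the iSCSI-backed VG, time a raw write, clean up.
lvcreate -L 128M -n speedtest iscsi_vg            # VG name is a placeholder
dd if=/dev/zero of=/dev/iscsi_vg/speedtest bs=512 count=20000 oflag=direct
lvremove -f /dev/iscsi_vg/speedtest               # don't leave the test LV behind
```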

I can't do that. The iSCSI server is still in production and doesn't have enough free capacity.

Is there any other way to check?

iostat, then.

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.32    0.00    0.28    3.45    0.00   95.96

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
[ram0-ram15 and loop0-loop7 rows omitted: all zeros]
sda               1.33         2.92         9.71     861328    2861856
dm-0              1.09         2.48         9.71     731206    2861856
dm-3              0.05         1.59         0.00     468592         32
dm-4              1.44         0.38         9.71     112950    2861824
dm-5              0.14         1.14         0.00     337186          0
dm-6              0.00         0.01         0.00       2824       1208
dm-7              0.00         0.00         0.00       1058        672
dm-8              1.22         0.11         9.71      33106    2859896
dm-9              0.00         0.01         0.00       2714         48
dm-1              0.02         0.16         0.00      45776          0
dm-2              0.02         0.16         0.00      45760          0
dm-10             1.74        66.96       129.09   19725460   38029708
dm-11             0.09         0.74         0.00     218957         11
dm-12             0.14         0.07         0.07      22043      21161
dm-13             0.00         0.00         0.00        880          0
dm-14             0.05        49.90         0.00   14700880         16
dm-15             0.00         0.00         0.10        896      30064
dm-16             0.00         0.01         0.01       1850       2280
sdb               0.24        50.26         0.61   14807977     180408

Here is iostat from hypervisor-2.

 

 

 

Great. Now please check multipath -ll to see which dm-X is the iSCSI LUN we are using for RHEV.

[root@hypervisor-2 ~]# multipath -ll
1IET_00010001 dm-10 IET,VIRTUAL-DISK
size=823G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 9:0:0:1 sdb 8:16 active ready running
1ATA_ST3500413AS_S2A19FQE dm-0 ATA,ST3500413AS
size=466G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 4:0:0:0 sda 8:0  active ready running

 

Does that look fine?

Yes, it shows dm-10 is the target. It's writing at about 60 MB/s and reading at about 30 MB/s. The write speed seems acceptable for a single 1 Gbps path.

 

So I think you might have a bottleneck on the NFS side of things. Can you try remounting the partition that holds your NFS share with the noatime and nodiratime options?
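For reference, this can be done without unmounting. The /exports mount point and the device/filesystem in the fstab line are assumptions; use whatever filesystem actually holds the export:

```shell
# Remount in place with atime updates disabled:
mount -o remount,noatime,nodiratime /exports      # mount point is a placeholder

# Persistent version -- the corresponding /etc/fstab entry would look like:
# /dev/sda4  /exports  ext3  defaults,noatime,nodiratime  0 0
```

The one-shot remount takes effect immediately; the fstab change makes it survive a reboot.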

I applied those options, but it is still slow. The VM OS is also slow. There is a problem somewhere on the server, but I cannot find it.

[root@RHvm1 ~]# dd if=/dev/vda of=tmp bs=32 count=300000000

85 MB copied, 221.922 seconds, 384 kB/s

That is the dd result from inside the VM (RHEL 5.8) on host hypervisor-1.

Is there a problem, or is that a normal result?

No, this doesn't look normal at all.

But you have to test with bs=512 first.
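For context on why the block size matters: with bs=32, dd issues millions of 32-byte reads, so the 384 kB/s figure mostly measures per-request overhead rather than disk throughput. A sketch of the in-guest retest at a saner block size (device and rough count taken from the thread; reading to /dev/null keeps the write path out of the timing):

```shell
# Inside the guest: sequential read from the virtio disk at 512-byte blocks.
dd if=/dev/vda of=/dev/null bs=512 count=200000
```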

 

How is the iSCSI server configured?

Thanks for your answers, Dan Yasny.

 

Here is the output of tgt-admin -s:

 

Target 1: iqn.2012-05.disk.com:storage
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 14
            Initiator: iqn.2012-05.disk.com:storage
            Connection: 0
                IP Address: 20.20.20.4
        I_T nexus: 15
            Initiator: iqn.2012-05.disk.com:storage
            Connection: 0
                IP Address: 20.20.20.3

 

[root@datastore ~]# ethtool eth0
Settings for eth0:
 Supported ports: [ TP ]
 Supported link modes:   10baseT/Half 10baseT/Full
                         100baseT/Half 100baseT/Full
                         1000baseT/Half 1000baseT/Full
 Supports auto-negotiation: Yes
 Advertised link modes:  10baseT/Half 10baseT/Full
                         100baseT/Half 100baseT/Full
                         1000baseT/Half 1000baseT/Full
 Advertised auto-negotiation: Yes
 Speed: 1000Mb/s
 Duplex: Full
 Port: Twisted Pair
 PHYAD: 1
 Transceiver: internal
 Auto-negotiation: on
 Supports Wake-on: g
 Wake-on: g
 Current message level: 0x000000ff (255)
 Link detected: yes

 

Thanks again, Yasny.

Seems to be a performance issue inside the guest. Have you already opened a support case?

Not yet.

Could I open a support case?