RHEV3: changing default storage type?


Hi,

During the installation of the RHEV-M console, I didn't pay attention to the default storage type.

I expected to be able to provision FC storage, but it looks like RHEV-M also needs access to the storage (which was not expected!).

In the meantime (while awaiting the FC storage reconfiguration), I have provisioned an NFS volume and would like to configure it.

 

Under System > Default I have selected the default Data Center and edited the storage type from FC to NFS. The status is Uninitialized.

 

When I select the Storage tab, the New Domain button opens a window which still expects "Data / Fibre Channel" and won't let me select anything else.

 

Could someone help me? (Should I right-click the Default data center and select 'Re-Initialize Data Center'?)

 

Regards

Responses

Only the hosts need access to the storage. This is why, when you create a new storage domain, you are asked to select a host.

 

As for your other issue, changing the Data Center storage type while it's uninitialized is allowed and should allow you to create a new data domain of that type.
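If you'd rather script this than use the GUI, here is a minimal sketch of creating an NFS data domain through the RHEV-M 3.x REST API; the address, credentials, host name, and NFS export below are all placeholders:

```python
# Minimal sketch, assuming the RHEV-M 3.x REST API is reachable at the
# placeholder address below; credentials, host name and export path are
# also placeholders.
import requests

API = "https://rhevm.example.com/api"
AUTH = ("admin@internal", "password")
HEADERS = {"Content-Type": "application/xml"}

domain_xml = """
<storage_domain>
  <name>nfs-data</name>
  <type>data</type>
  <storage>
    <type>nfs</type>
    <address>nfs.example.com</address>
    <path>/exports/rhev-data</path>
  </storage>
  <host><name>hypervisor1</name></host>
</storage_domain>
"""

# The <host> element names the host that will mount the export - this is
# the same host selection the GUI asks for.
resp = requests.post(API + "/storagedomains", data=domain_xml,
                     headers=HEADERS, auth=AUTH,
                     verify=False)  # RHEV-M ships a self-signed cert by default
resp.raise_for_status()
print(resp.text)
```

After creation, the domain still has to be attached to the data center (POST to the data center's storagedomains sub-collection) before it becomes active.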

 

Could you please say what the storage type column shows?

 

Thanks,

Simon

Hi Simon,

 

Apparently, this was fixed after logging out and back in (or alternatively after waiting a few seconds).

 

So now I have an NFS share up.

 

Regarding FC, could you confirm that both hypervisors AND RHEV-M should have access to the same FC LUNs?

 

Also, let's assume I now want to add FC storage in addition to the configured NFS share; how should I proceed?

 

Regards

Only the hosts need access to the FC. RHEV-M does not need direct access to the storage.

You cannot mix NFS and FC data domains in the same data center yet.

You can, however, use an NFS ISO or export domain with an FC data domain.

> Regarding FC, could you confirm that both hypervisors AND RHEV-M should have access to the same FC LUNs?

 

Both hypervisors need access to the FC storage to start VMs on them. If one of the hypervisors cannot access the FC LUNs, it will be marked as Non-Operational by RHEV-M and you will not be able to run VMs on that hypervisor; however, you will still be able to start VMs on the other hosts which have access to the FC LUNs. If a hypervisor loses access to the storage while VMs are running on it, the VMs will be paused automatically to prevent data corruption, and they will need to be manually unpaused once storage access is restored.
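If you want to watch for that situation, a minimal sketch of polling host status through the same REST API (address and credentials are placeholders) could look like this:

```python
# Minimal sketch: polling host status through the RHEV-M 3.x REST API to
# catch hosts that went Non-Operational after losing access to the FC LUNs.
# Address and credentials are placeholders.
import requests
import xml.etree.ElementTree as ET

API = "https://rhevm.example.com/api"
AUTH = ("admin@internal", "password")

resp = requests.get(API + "/hosts", auth=AUTH, verify=False)
resp.raise_for_status()

for host in ET.fromstring(resp.content).findall("host"):
    name = host.findtext("name")
    state = host.findtext("status/state")  # e.g. "up" or "non_operational"
    print(name, state)
```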

Hi,

 

I now have 2 LUNs available (visible in the RHEV-M GUI for both hypervisor servers).

I haven't tested this... would it be possible to select all LUNs as part of the FCP share (chapter 6.3)? How would they be used? (Any concatenation?)

 

Would it be better to create 2 FC domains?

 

Regards

It depends on the use case of course.

 

RHEV formats a LUN as an LVM PV, and the Storage Domain is essentially a VG that contains those PVs. So, if you simply add both LUNs to a single Storage Domain (SD), that SD will be the size of both LUNs combined.
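You can see this directly on a hypervisor attached to the domain. A minimal sketch (assuming the standard LVM tools are on the PATH; in RHEV the VG name is the storage domain's UUID):

```python
# Minimal sketch: inspecting the LVM layout of a storage domain on an
# attached hypervisor. With two LUNs in one domain you should see two PVs
# in the same VG, and the VG size equal to both LUNs combined.
import subprocess

print(subprocess.check_output(
    ["pvs", "-o", "pv_name,vg_name,pv_size", "--units", "g"], text=True))
print(subprocess.check_output(
    ["vgs", "-o", "vg_name,vg_size,vg_free", "--units", "g"], text=True))
```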

 

The pros are quite straightforward - you get the large size, and don't need to worry about running out of space for a longer period of time.

 

The cons are actually coming from two worlds - the world of LVM, and the world of RHEV.

 

From LVM's POV, if you have two LUNs exported from two different SANs, where one LUN is a fast, SSD-based chunk of storage and the second is old, slow, and SATA-based, you never want to combine them in one VG - this will cause countless issues.

 

From RHEV's POV, you need to consider the fact that in every Storage Pool (SP), one of the SDs is the Master, meaning it holds the metadata for the entire SP. If you have more than one SD in an SP, the Master metadata will have a place to fail over to in case you lose the current Master SD. That is not something that happens frequently (in fact, I've never seen it need doing), but it is always a good idea to plan for the worst.

This clarifies things a bit.

Thanks

Bernard

I will try to reinstall everything so as to use FC LUNs and an NFS share for ISOs.

Regards

You don't have to reinstall - you can simply create another DC and cluster and move the hosts to the new cluster.
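As a hedged sketch of that flow against the RHEV-M 3.x REST API (all names, the CPU type, the compatibility version, and the host ID are placeholders; a host must be in maintenance before its cluster can be changed):

```python
# Hedged sketch, assuming the RHEV-M 3.x REST API; every name, the CPU
# type, the version, and the host ID below are placeholders.
import requests

API = "https://rhevm.example.com/api"
AUTH = ("admin@internal", "password")
HEADERS = {"Content-Type": "application/xml"}

def post(path, xml):
    r = requests.post(API + path, data=xml, headers=HEADERS, auth=AUTH, verify=False)
    r.raise_for_status()
    return r

# 1. A new data center with FCP as its storage type.
post("/datacenters", """
<data_center>
  <name>fc-dc</name>
  <storage_type>fcp</storage_type>
  <version major="3" minor="1"/>
</data_center>
""")

# 2. A cluster in the new data center (CPU type must match your hosts).
post("/clusters", """
<cluster>
  <name>fc-cluster</name>
  <cpu id="Intel Nehalem Family"/>
  <data_center><name>fc-dc</name></data_center>
</cluster>
""")

# 3. Put the host into maintenance, then move it to the new cluster.
host = "/hosts/<host-id>"  # placeholder: look the ID up via GET /api/hosts
post(host + "/deactivate", "<action/>")
requests.put(API + host,
             data="<host><cluster><name>fc-cluster</name></cluster></host>",
             headers=HEADERS, auth=AUTH, verify=False)
```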