Bond plus VLANs vs. two separate NICs and VLANs
Hello,
Can you recommend which network configuration I should use for optimal performance for:
- Ceph
Dell R720xd with 2x 10Gb and 2x 1Gb NICs.
Currently I have 3 MON servers (only one 10Gb NIC is used, for the Ceph public network) and 5 OSD servers (one 10Gb NIC for the Ceph public network and the second 10Gb NIC for the cluster network).
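For reference, the relevant part of the OSD nodes' ceph.conf looks roughly like this (the subnets are placeholders, not my real ones):

    [global]
    # client and MON traffic over the first 10Gb NIC
    public_network = 10.0.10.0/24
    # OSD replication/heartbeat traffic over the second 10Gb NIC
    cluster_network = 10.0.20.0/24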
- RHEV
Dell R710 with 2x 10Gb and 2x 1Gb NICs.
Currently the hypervisors have one 10Gb NIC used for VLANs (switch trunk) and the other for management and iSCSI storage (switch access); the 2x 1Gb NICs are used for networking (switch access).
- OpenStack
Dell R710 with 2x 10Gb and 2x 1Gb NICs.
Currently the hypervisors have one 10Gb NIC used for VLANs (switch trunk) and the other for management and storage (switch access).
The managers use one 10Gb NIC for the OpenStack API/Dashboard and the other 10Gb NIC for management.
What would give better performance for Ceph: separate NICs for each network, or a bond?
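To make the question concrete, the bonded alternative I have in mind is an LACP bond carrying both Ceph networks as tagged VLANs, sketched below with RHEL-style ifcfg files (interface names, VLAN IDs, and addresses are just examples):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    # 802.3ad needs a matching LACP port-channel on the switch
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-em1 (same for the second 10Gb NIC)
    DEVICE=em1
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-bond0.10 (Ceph public VLAN)
    DEVICE=bond0.10
    VLAN=yes
    IPADDR=10.0.10.5
    PREFIX=24
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-bond0.20 (Ceph cluster VLAN)
    DEVICE=bond0.20
    VLAN=yes
    IPADDR=10.0.20.5
    PREFIX=24
    BOOTPROTO=none
    ONBOOT=yes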
Is it sensible to use the 1Gb interfaces, or will they become a bottleneck?
Is it good to have a separate management VLAN and storage network in RHEV and OpenStack? My colleagues told me there were some problems with the RHEV management network when it ran over VLANs on a bond (NICs not synced).
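If it helps diagnose that, my understanding is the bond state, including whether the switch-side LACP partner actually negotiated, can be checked like this (a plain status check, nothing RHEV-specific):

    # show negotiated mode, per-slave state, and the 802.3ad partner info
    cat /proc/net/bonding/bond0
    # if LACP cannot be configured on the switch side, mode=active-backup
    # needs no switch support and avoids this class of sync problem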
Thanks
