Kickstarting with adapter teaming (10Gb NICs)


I have a Dell PowerEdge R740xd with em1 and em2 as 10Gb DACs, using the bnxt_en driver, and em3 and em4 as copper 1Gb NICs using tg3. The target OS is RHEL 7.5.

I'm having real trouble getting Kickstart to set up an adapter team of em1 and em2 (using the "lacp" runner, as opposed to "roundrobin" or "activebackup", etc.). For the longest time I couldn't even install, as dracut would stall and then time out, and I wasn't getting any information as to why. I believe this was down to a bad "network" line in the Kickstart configuration file for the server. After some editing I have a Kickstart file that lets the server install, but it seems to install using em1 only (all of the IP details for the team get assigned to it 'correctly'), and with no trace of teaming in /etc/sysconfig/network-scripts/. I boot from a USB stick (sdc; sda and sdb are internal RAID volumes) for historical reasons.
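For reference, a successfully configured LACP team on RHEL 7 normally leaves ifcfg files like the following in /etc/sysconfig/network-scripts/ (a sketch based on the RHEL 7 Networking Guide; the address values are placeholders, not your site's details):

```
# /etc/sysconfig/network-scripts/ifcfg-team0  (team master)
DEVICE=team0
DEVICETYPE=Team
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
TEAM_CONFIG='{"runner": {"name": "lacp", "active": true, "fast_rate": false, "tx_hash": ["ip"]}, "link_watch": {"name": "ethtool"}}'

# /etc/sysconfig/network-scripts/ifcfg-em1  (one port; em2 is identical apart from DEVICE)
DEVICE=em1
DEVICETYPE=TeamPort
ONBOOT=yes
TEAM_MASTER=team0
```

If files like these are absent after install, the installer never created the team at all.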

The Kickstart syslinux.cfg:
label SERVER1
kernel vmlinuz
append initrd=initrd.img ks=hd:sdc1:/configs/SERVER1.cfg console=ttyS1,115200

The Kickstart file SERVER1.cfg:
network team=team0:em1,em2 --bootproto=static --ip= --netmask= --gateway= --nameserver= --activate --teamconfig='{\"runner\": {\"name\": \"lacp\", \"active\": true, \"fast_rate\": false, \"tx_hash\": [\"ip\"]}, \"link_watch\": {\"name\": \"ethtool\"}, \"ports\": {\"em1\": {}, \"em2\": {}}, \"tx_balancer\": { \"name\": \"basic\"}}'
network --hostname=SERVER1

What am I missing, please? I tried having "--device=link" in the Kickstart file, but that wasn't accepted as valid. As it is I have to set em1's port on one of the two FEX switches to accept non-LACP traffic to allow it to talk to the install media server, and then configure the teaming after installation. I was told you can set up teaming at install time, I just don't see how.
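For what it's worth, the RHEL 7 Installation Guide documents team creation in Kickstart with --device and --teamslaves rather than a team=master:ports prefix, along these lines (IP values left blank as in the original):

```
network --device=team0 --activate --bootproto=static --ip= --netmask= --gateway= --nameserver= --teamslaves="em1,em2" --teamconfig='{"runner": {"name": "lacp", "active": true, "fast_rate": false, "tx_hash": ["ip"]}, "link_watch": {"name": "ethtool"}}'
```

This governs the installed system's configuration written by anaconda; it does not by itself make dracut bring the team up early enough to fetch the Kickstart file.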

Please tell me where I'm going wrong (-:

Thank you,



This is what I have noted as working for our RHEL 7.4 installs using Satellite 5.8.0:


vmlinuz initrd=initrd.img linux ip= nameserver= bond=bond0:em1,em2:mode=802.3ad,lacp_rate=fast,miimon=100,xmit_hash_policy=layer2+3 vlan=bond0.123:bond0 inst.sshd inst.ks=


network --bootproto=static --device=bond0 --ip= --netmask= --gateway= --vlanid=123 --hostname=hostname --bondopts=mode=802.3ad,lacp_rate=fast,miimon=100,xmit_hash_policy=layer2+3 --bondslaves=em1,em2
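If you go the bonding route, the negotiated 802.3ad state can be checked after install (assuming the bond came up as bond0; output depends on the switch-side LACP configuration):

```
# Shows "Bonding Mode: IEEE 802.3ad Dynamic link aggregation",
# the aggregator ID, and per-slave LACP partner details
cat /proc/net/bonding/bond0
```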

Thank you. That's not quite what I need, and won't totally fit into our way of doing things here, but it's a good suggestion. Hopefully someone else will pipe up (Red Hat employees?) and tell me what I'm doing wrong.


I know this is coming late and I'm not sure if you're even still interested. It appears the issue is that the teaming implementation within dracut is incomplete: it lacks the ability to configure the runner. Without being able to configure the runner, we will unfortunately never be able to configure LACP with teaming in dracut, and this feature does not appear to exist upstream yet. Using bonding on the dracut cmdline to bring LACP up for pulling the Kickstart file seems to be the correct workaround until this feature is added.

How to configure teaming in dracut.cmdline?

Hi Michael,

Very interested! But also disappointed. We don't want to use bonding (-: At least I know it's not me that's been at fault all these years. I guess we'll have to continue installing via one NIC, and then set up LACP teaming using Ansible (or nmtui). Here's hoping dracut gets improved in the future!
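For the post-install step, one way to script the LACP team with nmcli (a sketch; the connection names and addresses are placeholders, and it assumes NetworkManager is managing em1 and em2):

```
# Create the team master with an LACP runner (placeholder IP details)
nmcli con add type team con-name team0 ifname team0 \
      config '{"runner": {"name": "lacp", "active": true, "fast_rate": false, "tx_hash": ["ip"]}, "link_watch": {"name": "ethtool"}}' \
      ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1

# Enslave both 10Gb ports
nmcli con add type team-slave con-name team0-em1 ifname em1 master team0
nmcli con add type team-slave con-name team0-em2 ifname em2 master team0

# Bring the team up and inspect the runner state
nmcli con up team0
teamdctl team0 state
```

The same JSON blob works as the body of an Ansible task (e.g. via the nmcli module), so the Kickstart file and the configuration tooling can share one runner definition.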

There is an RFE for this: Bug 1881463 - [RFE] Requesting ability to configure the runner for team interfaces in dracut.cmdline/installations. As a workaround, you might also consider a customized image with a modification of /usr/lib/dracut/hooks/cmdline/