F5 TMM pods not starting on worker node
Issue
- We have an application that runs on nodes belonging to a dedicated MachineConfigPool (MCP), with a performance profile that sets specific kernel arguments:
spec:
  additionalKernelArgs:
    - nosmt
  cpu:
    isolated: 2-13
    reserved: 0-1
  hugepages:
    defaultHugepagesSize: 2M
    pages:
      - count: 58000
        node: 0
        size: 2M
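For context, the profile above sets aside a substantial amount of memory at boot. A minimal sanity check (the arithmetic is derived from the profile; the `oc` commands and node names are the standard ones from this case) is to confirm how much memory the request consumes and that both workers actually expose it as allocatable:

```shell
# The profile requests 58000 x 2MiB hugepages on NUMA node 0.
# Total memory set aside at boot per node:
total_mib=$((58000 * 2))
echo "${total_mib} MiB reserved for hugepages"   # 116000 MiB, roughly 113.3 GiB

# Against a live cluster, the allocatable pages can then be compared
# on each worker:
#   oc describe node worker-0 | grep -i hugepages-2Mi
#   oc describe node worker-1 | grep -i hugepages-2Mi
```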
- What we are seeing is that the TMM pod starts fine on one node, but not on both.
- If we reboot the node that works, the pod then seems to start on the other node. This happens every time we run TMM with two replicas.
- Currently worker-1 is working and worker-0 is not.
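Since the symptom differs between two nodes in the same MCP, a first step is to confirm the profile's kernel arguments were actually applied on both workers. A minimal sketch, using a hypothetical sample cmdline in place of live cluster output (the real line will differ):

```shell
# Against a live cluster (node names from this case):
#   oc debug node/worker-0 -- chroot /host cat /proc/cmdline
# Sample cmdline standing in for that output:
cmdline='nosmt hugepagesz=2M hugepages=58000'

# Check each argument the profile is expected to add:
for arg in nosmt hugepagesz=2M hugepages=58000; do
  case "$cmdline" in
    *"$arg"*) echo "$arg: present" ;;
    *)        echo "$arg: MISSING" ;;
  esac
done
```

If an argument is missing on only one worker, that node has not picked up the rendered MachineConfig, which would explain the asymmetric behavior.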
Environment
- OpenShift Container Platform 4.12
- IPI install
- Dual stack with F5 SPK 1.7.2