Every system administrator knows the feeling of waking up in the middle of the night because a server crashed or lost connectivity. This is where Red Hat Insights comes in. Thanks to our expansive knowledge base, the Insights team has been able to identify several critical stability issues that could cause a system outage. Don’t let these issues catch you by surprise. Check out our latest stability rules here!
| Rule | Description | Related knowledgebase article |
|------|-------------|-------------------------------|
| The “rpmdbNextIterator” error exists in the rpm command output due to a corrupt RPM database | The “rpmdbNextIterator” error appears in `rpm` command output when the RPM database is corrupt. | Why does rpm command show error "rpmdbNextIterator"? |
| RHEL 6.6 kernel panics when disconnecting storage due to known bugs | In the Red Hat Enterprise Linux 6.6 kernel, a bug was introduced whereby the removal of a Fibre Channel SCSI host might cause a kernel panic. | RHEL 6.6 kernel panics when disconnecting storage |
| Network stops responding under high connection volume due to the ARP table getting too full | The Red Hat Enterprise Linux 6 or 7 kernel logs "Neighbour table overflow" messages after connecting to or discovering a large number of network hosts, which can leave the network unresponsive. The "neighbour table" refers to the ARP cache; the overflow occurs when the ARP table fills and at least one new entry is denied because the size limit has been reached. | Kernel throws "Neighbour table overflow" messages after connecting to or discovering a large number of network hosts |
| Unintentional bridge kernel module loading while sosreport is run | When `sosreport` is run, it can load the bridge kernel module unintentionally. | sosreport loads the bridge kernel module unintentionally |
| Network is down when the IP address of the loopback interface is assigned to the same subnet as that of the primary interface | If the IP address of the loopback interface is assigned to the same subnet as that of the primary interface, the network will be unreachable. | IPv4 Connection Unstable |
| HA VMs restart twice in RHEV 3.5 because of a known bug in JSON-RPC | In Red Hat Enterprise Virtualization (RHEV) 3.5 compatibility version, hosts newly added to a cluster use json-rpc by default. A bug in json-rpc related to the `_recovery` flag means that when there is a network issue or a hypervisor stops responding, RHEV will try to fence the problematic hypervisor and restart the vdsm process on it. If this is successful, an HA-configured VM running on that hypervisor can be started twice. | Possible duplicate HA VMs during vdsm reinitialize |
| The number of 802.1q VLANs in an OpenStack host may exceed the supported limit | Red Hat Enterprise Linux supports a maximum of 4,094 802.1q VLANs per host. When hosts in an OpenStack environment come within 90% of this threshold, a warning is shown. | Red Hat Enterprise Linux OpenStack Platform 7 Architecture Guide |
| Unexpected NUMA node mapping when a CPU has more than 256 logical processors | When the BIOS populates the MADT with both x2APIC and local APIC entries (as the ACPI spec allows), the kernel builds its processor table in the order BSP, x2APIC, local APIC. As a result, processors on the same core are not numbered contiguously by core. | Bad NUMA cpu numbering on server with more than 256 cpus |
| Unsupported Journal Mode Detected | This rule has been enhanced to also check /etc/fstab for journal-mode issues that may not be visible on the running system. | Why are logs similar to "JBD: Spotted dirty metadata buffer" logged in /var/log/messages? |
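For the "Neighbour table overflow" rule above, the ARP cache limits are controlled by the kernel's `gc_thresh` sysctls. The commands below are a minimal sketch of how to inspect and raise them; the specific threshold values are illustrative assumptions and should be tuned to how many peers your hosts actually talk to.

```shell
# Inspect the current ARP/neighbour cache thresholds.
sysctl net.ipv4.neigh.default.gc_thresh1 \
       net.ipv4.neigh.default.gc_thresh2 \
       net.ipv4.neigh.default.gc_thresh3

# Sketch: raise the limits for hosts with many network peers.
# gc_thresh3 is the hard cap; once reached, new entries are denied
# and "Neighbour table overflow" is logged. Example values only.
cat <<'EOF' >> /etc/sysctl.conf
net.ipv4.neigh.default.gc_thresh1 = 2048
net.ipv4.neigh.default.gc_thresh2 = 4096
net.ipv4.neigh.default.gc_thresh3 = 8192
EOF
sysctl -p
```

Keep the three thresholds in ascending order; entries above `gc_thresh1` become eligible for garbage collection, and aggressive pruning starts above `gc_thresh2`.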
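For the corrupt RPM database rule, the commonly documented recovery is to remove the stale Berkeley DB environment files and rebuild the database indexes. This is a sketch assuming the default RHEL database location under /var/lib/rpm; take a backup before touching it.

```shell
# Back up the Berkeley DB environment files before removing them.
mkdir -p /root/rpmdb-backup
cp -a /var/lib/rpm/__db* /root/rpmdb-backup/ 2>/dev/null

# Remove the stale environment files and rebuild the database.
rm -f /var/lib/rpm/__db*
rpm --rebuilddb

# Verify the database can be read again.
rpm -qa > /dev/null && echo "rpm database OK"
```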
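For the "Unsupported Journal Mode Detected" rule, a quick way to review what /etc/fstab requests independently of the running system is to grep for explicit `data=` journal-mode mount options. The pattern below is an assumption about how the option is written; adjust it to match your mount options.

```shell
# Flag fstab entries that explicitly request an ext3/ext4 journal mode,
# so they can be reviewed against what is supported.
grep -nE 'data=(journal|ordered|writeback)' /etc/fstab
```

A match is not necessarily a problem; the point is to surface entries whose journal mode differs from what the currently mounted filesystems show.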
To see if you have systems affected by these new rules, check out the Stability category here.
As a Red Hat Insights customer, you can register machines now and remember what a good night’s sleep feels like.