Chapter 2. Important Changes to External Kernel Parameters

This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 6.8. These changes include added or updated proc entries, sysctl and sysfs default values, boot parameters, kernel configuration options, and other noticeable behavior changes.
force_hrtimer_reprogram [KNL]
Force the reprogramming of expired timers in the hrtimer_reprogram() function.
softirq_2ms_loop [KNL]
Limit softirq handling to a maximum of 2 ms. By default, the existing Red Hat Enterprise Linux 6 behaviour is retained.
tpm_suspend_pcr= [HW,TPM]
Specify that, at suspend time, the tpm driver should extend the specified Platform Configuration Register (PCR) with zeros, as a workaround for some chips that fail to flush the last written PCR on a TPM_SaveState operation. This guarantees that all the other PCRs are saved.
Format: integer pcr id
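For example, booting with the following command-line option (the PCR index 8 is purely illustrative) would have the driver extend PCR 8 with zeros at suspend time:

tpm_suspend_pcr=8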
/proc/fs/fscache/stats

Table 2.1. Changes to class Ops

Change    Field   Description
new       ini=N   Number of async ops initialised
changed   rel=N   Will be equal to ini=N when idle

Table 2.2. New class CacheEv

Field   Description
nsp=N   Number of object lookups or creations rejected due to a lack of space
stl=N   Number of stale objects deleted
rtr=N   Number of objects retired when relinquished
cul=N   Number of objects culled
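As a rough illustration, the following Python sketch prints the CacheEv counters from the statistics file. It assumes FS-Cache is enabled so that the file exists, and that the line format is a class name followed by the counters shown in the table above:

# Illustrative sketch: report the CacheEv counters (nsp, stl, rtr, cul).
# Assumes FS-Cache is enabled so that /proc/fs/fscache/stats exists.
with open("/proc/fs/fscache/stats") as f:
    for line in f:
        if line.startswith("CacheEv"):
            print(line.strip())   # e.g. "CacheEv: nsp=0 stl=0 rtr=0 cul=0"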
/proc/sys/net/core/default_qdisc
The default queuing discipline to use for network devices. This allows overriding the default queue discipline of pfifo_fast with an alternative. Since the default queuing discipline is created with no additional parameters, it is best suited to queuing disciplines that work well without configuration, for example, a stochastic fair queue (sfq). Do not use queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin, which require setting up classes and bandwidths.
Default: pfifo_fast
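For illustration, a minimal Python sketch along these lines switches the default queuing discipline and reads it back. It must run as root, and sfq is just one example of a discipline that needs no configuration:

# Illustrative sketch: set the default queuing discipline to sfq.
# Writing this file requires root privileges.
path = "/proc/sys/net/core/default_qdisc"

with open(path, "w") as f:
    f.write("sfq\n")

with open(path) as f:
    print("default_qdisc is now:", f.read().strip())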
/sys/kernel/mm/ksm/max_page_sharing
Maximum sharing allowed for each KSM page. This enforces a deduplication limit to keep the virtual memory rmap lists from growing too large. The minimum value is 2, because a newly created KSM page has at least two sharers.

The rmap walk has O(N) complexity, where N is the number of rmap_items (that is, virtual mappings sharing the page), which is in turn capped by max_page_sharing. This effectively spreads the linear O(N) computational complexity of the rmap walk over different KSM pages. The ksmd walk over the stable_node chains is also O(N), but there N is the number of stable_node dups rather than the number of rmap_items, so it does not have a significant impact on ksmd performance. In practice, the best stable_node dup candidate is kept and found at the head of the dups list.

The higher this value, the faster KSM merges memory (because fewer stable_node dups are queued into the stable_node chain->hlist to check for pruning) and the higher the deduplication factor is, but the slower the worst-case rmap walk can be for any given KSM page. A slower rmap walk means higher latency for certain virtual memory operations during swapping, compaction, NUMA balancing, and page migration, which in turn decreases responsiveness for the callers of those operations. The scheduler latency of other tasks not involved in the VM operations doing the rmap walk is not affected by this parameter, because the rmap walks themselves are always schedule-friendly.
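For illustration, a minimal Python sketch could inspect and raise this limit. It must run as root, and the value 512 is arbitrary, not a recommendation:

# Illustrative sketch: read, then raise, the per-page KSM sharing limit.
path = "/sys/kernel/mm/ksm/max_page_sharing"

with open(path) as f:
    print("current limit:", f.read().strip())

with open(path, "w") as f:
    f.write("512\n")   # arbitrary example value; requires root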
/sys/kernel/mm/ksm/stable_node_chains_prune_millisecs
How frequently to walk the whole list of stable_node "dups" linked in the stable_node chains in order to prune stale stable_nodes. Smaller millisecond values free up the KSM metadata with lower latency, but make ksmd use more CPU during the scan. This only applies to the stable_node chains, so it is a no-op unless at least one KSM page has hit the max_page_sharing limit; until then, no stable_node chains exist.
/sys/kernel/mm/ksm/stable_node_chains
Number of stable node chains allocated. This is effectively the number of KSM pages that have hit the max_page_sharing limit.
/sys/kernel/mm/ksm/stable_node_dups
Number of stable node dups queued into the stable_node chains.
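To tie these counters together, a minimal Python sketch could report the stable_node statistics and adjust the prune interval described above. The write requires root, and the 2000 ms value is arbitrary:

import os

# Illustrative sketch: print the KSM stable_node counters, then adjust
# how often stale stable_node dups are pruned.
KSM = "/sys/kernel/mm/ksm"

for name in ("stable_node_chains", "stable_node_dups"):
    with open(os.path.join(KSM, name)) as f:
        print(name + ":", f.read().strip())

with open(os.path.join(KSM, "stable_node_chains_prune_millisecs"), "w") as f:
    f.write("2000\n")   # arbitrary example interval; requires root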