
Latest Posts

  • The Right Performance Tool for the Task

    Authored by: William Cohen

    As an engineer who works on performance tools at Red Hat, I often get seemingly simple questions along the lines of, "How do I get performance tool X to collect Y data?" Unfortunately, many times the answer is that "tool X does not measure Y." This leads to a discussion about the performance problem being investigated. With additional background information, it becomes much easier to suggest more promising tools and techniques to get the desired measurements.

    Given the number of performance tools and the complexity of Linux, it is easy for developers and system administrators to end up pressing the tools they already know into service for tasks those tools are not really suited for. I have been guilty of this, too. Part of this tool-centric bias might be an effect of documentation being focused on individual tools rather than on the tasks people want to accomplish.

    Here at Red Hat, we are working to make it easier for people to measure and understand machine performance with a Performance Measurement Cookbook. Rather than being focused on the tools themselves, the cookbook tasks (or recipes) are focused on how to get the data that you need to understand what is happening on your system.

    An example of the task-oriented nature of the Performance Measurement Cookbook is the Determine whether an application has poor cache performance recipe. It shows you precisely how to use the perf utility to compute the cache miss rate for an application. If the cache miss rate is high, you can use the recipe to locate the places in the code where the cache misses occur. As you gain experience, you can modify this recipe to suit your own needs so you don't have to start from scratch.

    Take a look at the Performance Measurement Cookbook to see what other recipes Red Hat has to offer.

    Posted: 2014-03-31T18:04:57+00:00
  • Determining Whether an Application Has Poor Cache Performance

    Authored by: William Cohen

    Modern computer systems include cache memory to hide the higher latency and lower bandwidth of RAM from the processor. The cache has access latencies ranging from a few processor cycles to 10 or 20 cycles, rather than the hundreds of cycles needed to access RAM. If the processor must frequently obtain data from RAM rather than the cache, performance suffers. With Red Hat Enterprise Linux 6 and later distributions, the system's use of cache can be measured with the perf utility, available from the perf RPM.

    The perf utility uses the Performance Monitoring Unit (PMU) hardware in modern processors to collect data on hardware events, such as cache accesses and cache misses, without undue overhead on the system. The PMU hardware is implementation specific, and the underlying events might differ between processors. For example, one processor implementation might measure events on the first-level cache closest to the processor, while another measures events on a lower-level cache farther from the processor and closer to main memory. Cache configuration can also differ between processor models: one processor in a family might have 2MB of last-level cache while another member of the same family has 8MB. These differences make direct comparison of event counts between processors difficult.
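
    Before measuring, you can check which hardware and cache events your processor and perf version expose; the list varies from machine to machine:

    $ perf list hw
    $ perf list cache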

    NOTE: Depending on the architecture, software version, and system configuration, the perf utility's use of the PMU hardware might not be supported in KVM guest machines. In Red Hat Enterprise Linux 6, guest virtual machines do not support the PMU hardware.

    To get a general understanding of how well or poorly an application is using the system cache, perf can collect statistics on the application and its children using the following command:

    $ perf stat -e task-clock,cycles,instructions,cache-references,cache-misses  ./stream_c.exe
    

    When the application completes, perf will generate a summary of the measurement like the following:

    Performance counter stats for './stream_c.exe':
    
            229.872935 task-clock                #    0.996 CPUs utilized          
           626,676,991 cycles                    #    2.726 GHz                    
           525,543,766 instructions              #    0.84  insns per cycle        
            18,587,219 cache-references          #   80.859 M/sec                  
             6,605,955 cache-misses              #   35.540 % of all cache refs    
    
           0.230761764 seconds time elapsed
    

    The output above is for a simple example program that tests memory bandwidth. The test is relatively short: only 525,543,766 instructions executed, requiring 626,676,991 processor cycles. The output also estimates the instructions per processor clock cycle (IPC). The higher the IPC, the more efficiently the processor is executing instructions. In this case, it is 0.84 instructions per clock cycle. The superscalar processor this example runs on could potentially execute four instructions per clock cycle, giving an upper bound of 4 IPC. The IPC will be lowered by delays due to cache misses.

    The ratio of cache misses to instructions indicates how well the cache is working; the lower the ratio, the better. In this example, the ratio is 1.26% (6,605,955 cache-misses / 525,543,766 instructions). Because of the relatively large difference in cost between RAM and cache access (hundreds of cycles vs. fewer than 20 cycles), even small improvements to the cache-miss rate can significantly improve performance. If the cache-miss rate per instruction is over 5%, further investigation is required.
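
    If you want to script this computation rather than do it by hand, perf stat's -x option produces machine-readable output. A minimal sketch, assuming the event count is the first comma-separated field (the exact column layout differs between perf versions):

    $ perf stat -x, -o stats.csv -e cache-misses,instructions ./stream_c.exe
    $ awk -F, '/cache-misses/ {miss=$1}   # event count is field 1 in CSV mode
               /instructions/ {inst=$1}
               END {printf "cache misses per instruction: %.2f%%\n", 100*miss/inst}' stats.csv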

    It is possible for perf to provide more information about where the cache-miss events occur in the code. Ideally, the application should be compiled with debuginfo (the GCC -g option) and debuginfo RPMs should be installed for the related system-supplied executables and libraries, so that perf can map the samples back to the source code and give you a better idea of where the cache misses occur.
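
    As an illustrative setup (the source file name and package name are placeholders; debuginfo-install comes from yum-utils on Red Hat Enterprise Linux):

    $ gcc -O2 -g stream.c -o stream_c.exe   # -g records the source-level debug information perf needs
    $ debuginfo-install glibc               # installs debuginfo for system-supplied libraries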

    The perf stat command only provides an overall count of the events. The perf record command performs sampling, recording where those hardware events occur. In this case, we would like to know where cache misses occur during the execution of the code. We can use the following command to collect that information:

    $ perf record -e cache-misses ./stream_c.exe
    

    When the command completes execution, output like the following will be printed, indicating the amount of data collected (~2093 samples) and stored in the file perf.data.

    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.048 MB perf.data (~2093 samples) ]
    

    The perf.data file is analyzed with perf report. By default, perf report brings up a simple Text User Interface (TUI) to navigate the collected data; the --stdio option provides simple text output instead. Below is the perf report output for the earlier perf record run. The output starts with a header listing when and where the data was collected, details about the hardware and software environment, and the command used to collect the data.

    A sorted list follows the header information and shows the functions containing the instructions associated with those events. In this case, the simple stream benchmark has over 85% of the samples in main. This is not surprising, given that this is a simple benchmark stressing memory performance. However, the second function on the list is clear_page_c_e, where the [k] indicates this function is in the kernel.

    $ perf report --stdio
    # ========
    # captured on: Wed Oct  2 14:38:59 2013
    # hostname : dhcp129-131.rdu.redhat.com
    # os release : 3.10.0-31.el7.x86_64
    # perf version : 3.10.0-31.el7.x86_64.debug
    # arch : x86_64
    # nrcpus online : 8
    # nrcpus avail : 8
    # cpudesc : Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz
    # cpuid : GenuineIntel,6,58,9
    # total memory : 7859524 kB
    # cmdline : /usr/bin/perf record -e cache-misses ./stream_c.exe 
    # event : name = cache-misses, type = 0, config = 0x3, config1 = 0x0, config2 = 
    # HEADER_CPU_TOPOLOGY info available, use -I to display
    # HEADER_NUMA_TOPOLOGY info available, use -I to display
    # pmu mappings: cpu = 4, software = 1, tracepoint = 2, uncore_cbox_0 = 6, uncore
    # ========
    #
    # Samples: 888  of event 'cache-misses'
    # Event count (approx.): 6576993
    #
    # Overhead       Command      Shared Object                        Symbol
    # ........  ............  .................  ............................
    #
        85.19%  stream_c.exe  stream_c.exe       [.] main                    
        11.20%  stream_c.exe  [kernel.kallsyms]  [k] clear_page_c_e          
         2.23%  stream_c.exe  stream_c.exe       [.] checkSTREAMresults      
         0.62%  stream_c.exe  [kernel.kallsyms]  [k] _cond_resched           
         0.20%  stream_c.exe  [kernel.kallsyms]  [k] trigger_load_balance    
         0.13%  stream_c.exe  [kernel.kallsyms]  [k] task_tick_fair          
         0.13%  stream_c.exe  [kernel.kallsyms]  [k] free_pages_prepare      
         0.11%  stream_c.exe  [kernel.kallsyms]  [k] perf_pmu_disable        
         0.09%  stream_c.exe  [kernel.kallsyms]  [k] tick_do_update_jiffies64
         0.09%  stream_c.exe  [kernel.kallsyms]  [k] __acct_update_integrals 
         0.00%  stream_c.exe  [kernel.kallsyms]  [k] flush_signal_handlers   
         0.00%  stream_c.exe  [kernel.kallsyms]  [k] perf_event_comm_output  
    
    #
    # (For a higher level overview, try: perf report --sort comm,dso)
    #
    

    You might want more details on which lines of code in main those cache misses are associated with. The perf annotate command can provide exactly that information. The left column of the perf annotate output shows the percentage of samples associated with each instruction. The right column shows intermixed source code and assembly language. You can use the cursor keys to scroll through the output and press H to jump to the hottest instruction, the one with the most samples. In this case, the addition of elements from two arrays is the hottest area of code.

    $ perf annotate
    
    main
           │       movsd  %xmm0,(%rsp)                                             ▒
           │       xchg   %ax,%ax                                                  ▒
           │     #ifdef TUNED                                                      ▒
           │             tuned_STREAM_Add();                                       ▒
           │     #else                                                             ▒
           │     #pragma omp parallel for                                          ▒
           │             for (j=0; j<N; j++)                                          ▒
           │                 c[j] = a[j]+b[j];                                        ▒
      1.96 │2a0:   movsd  0x24868e0(%rax),%xmm0                                       ▒
     11.17 │       add    $0x8,%rax                                                   ▒
      2.21 │       addsd  0x15444d8(%rax),%xmm0                                       ▒
     13.74 │       movsd  %xmm0,0x6020d8(%rax)                                        ▒
           │             times[2][k] = mysecond();                                    ◆
           │     #ifdef TUNED                                                         ▒
           │             tuned_STREAM_Add();                                          ▒
           │     #else                                                                ▒
           │     #pragma omp parallel for                                             ▒
           │             for (j=0; j<N; j++)                                          ▒
      3.56 │       cmp    $0xf42400,%rax                                              ▒
           │     ↑ jne    2a0                                                      ▒
           │                 c[j] = a[j]+b[j];                                     ▒
           │     #endif                                                            ▒
    Press 'h' for help on key bindings
    

    Techniques to reduce cache misses are very dependent on the specifics of the cache organization and the code's behavior. For more information, refer to the software optimization manuals from the processor manufacturers, such as AMD and Intel.

    Originally posted at http://developerblog.redhat.com/2014/03/10/determining-whether-an-application-has-poor-cache-performance-2/

    Posted: 2014-03-26T20:39:35+00:00
  • Examining Huge Pages or Transparent Huge Pages Performance

    Authored by: William Cohen

    All modern processors use page-based mechanisms to translate user-space processes' virtual addresses into physical addresses for RAM. The pages are commonly 4KB in size, and the processor can hold a limited number of virtual-to-physical address mappings in the Translation Lookaside Buffer (TLB). The number of TLB entries ranges from tens to hundreds of mappings, which limits a processor to a few megabytes of memory it can address without changing TLB entries; for example, 512 TLB entries covering 4KB pages can address only 2MB of memory. When a virtual-to-physical address mapping is not in the TLB, the processor must perform an expensive page-table walk to generate a new mapping.

    To increase the amount of memory the processor can address without performing expensive TLB updates, many processors allow larger page sizes. On x86_64 processors, huge pages are 2MB, which is 512 times larger than the regular 4KB pages. In ideal situations, huge pages can decrease the overhead of TLB updates (misses). However, huge-page use can increase memory pressure, add latency for minor page faults, and add overhead when splitting huge pages or coalescing normal-sized pages into huge pages.

    There are two mechanisms available for huge pages in Linux: HugePages and Transparent Huge Pages (THP). The original HugePages mechanism requires explicit configuration. The newer THP mechanism automatically uses huge pages for dynamically allocated memory in Red Hat Enterprise Linux 6.
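
    You can check whether THP is enabled system-wide through sysfs; the bracketed value is the active setting. (The path below is the upstream one; Red Hat Enterprise Linux 6 uses /sys/kernel/mm/redhat_transparent_hugepage/enabled instead.)

    $ cat /sys/kernel/mm/transparent_hugepage/enabled
    [always] madvise never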

    To determine whether the newer THP or the older HugePages mechanism is being used, look at the output of /proc/meminfo as below:

    $ cat /proc/meminfo|grep Huge
    AnonHugePages:   3049472 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    

    The AnonHugePages entry lists the amount of memory the newer THP mechanism currently has in use. For this machine, there are 3049472 kB: 1489 huge pages, each 2048 kB in size.

    In this case, there are zero pages in the pool of the older HugePages mechanism, as shown by the HugePages_Total of 0. HugePages_Free shows how many pages are still available for allocation, which will be less than or equal to HugePages_Total. The number of HugePages in use can be computed as HugePages_Total - HugePages_Free. For more information about configuring HugePages, see Tuning and Optimizing Red Hat Enterprise Linux for Oracle 9i and 10g Databases.
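
    The same arithmetic can be done with a quick one-liner over /proc/meminfo (a convenience sketch, assuming the 2048 kB huge page size shown above):

    $ awk '/AnonHugePages/   {printf "THP pages in use: %d\n", $2/2048}  # kB / 2048 kB per page
           /HugePages_Total/ {t=$2}
           /HugePages_Free/  {f=$2}
           END {printf "HugePages in use: %d\n", t-f}' /proc/meminfo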

    Determining Whether Page-Fault Latency is Due to Use of Huge Pages

    Huge-page use can reduce the number of TLB updates required to access large regions of memory and reduce the overall cost of TLB updates, but it increases costs and latency for other operations. When a user-space application is given a range of addresses for a memory allocation, the assignment of a physical page is deferred until the first time the page is accessed. To prevent information leakage from the previous user of the page, the kernel writes zeros in the entire page. For a 4096-byte page, this is a relatively short operation, taking only a couple of microseconds. The x86 huge pages are 2MB in size, 512 times larger than a normal page, so the operation might take hundreds of microseconds and impact latency-sensitive code. Below is a simple SystemTap command-line script to show which applications have huge pages zeroed out and how long those operations take. It will run until Ctrl-C is pressed.

    stap  -e 'global huge_clear probe kernel.function("clear_huge_page").return {
      huge_clear [execname(), pid()] <<< (gettimeofday_us() - @entry(gettimeofday_us()))}'
    

    The script outputs a list sorted from the executable name and process with the most huge-page clears to the least. (SystemTap automatically prints the contents of the unread global aggregate when the session ends.) The @count is the number of times that process encountered a huge-page clear operation. Following that are time statistics, displayed in microseconds of wall-clock time. The @min and @max are the minimum and maximum times, respectively, to clear a page, and the @sum is the total wall-clock time.
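
    To profile a single run instead of waiting for Ctrl-C, stap's -c option starts the probes, runs the given command, and ends the session when the command exits (the target program here is just an example):

    $ stap -c ./stream_c.exe -e 'global huge_clear probe kernel.function("clear_huge_page").return {
      huge_clear [execname(), pid()] <<< (gettimeofday_us() - @entry(gettimeofday_us()))}'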

    Originally posted at http://developerblog.redhat.com/2014/03/10/examining-huge-pages-or-transparent-huge-pages-performance/

    Posted: 2014-03-26T20:35:16+00:00