Figuring out CPUs and Sockets


I had a recent email from one of my customers.  His organization was about ready to go through some licensing true-ups, and he was in a bit of a pickle.  He had a few 3rd-party products he needed to do some accounting on, and each product was licensed using a different model.  Sadly, they did not have any type of CMDB in place to help (Configuration Management Database - something very handy to have when it comes to looking at your server inventory).  I thought back to my years of running a large Enterprise *NIX team and shuddered; easily once a month or so, someone came by asking me these exact same questions.

So we worked on a few simple commands that can be used to produce this data.  First we tried this:

   $ lscpu | grep 'socket'
   Core(s) per socket:    2
   CPU socket(s):         1

At this command's "core" [ha ha, pun intended] we got exactly what my pal Tom wanted, and then some.  Not only could we see how many sockets he was using (which is what he was reporting on), but we also found out how many cores there were in each socket.
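
If you just want a single number for the report, you can multiply those two fields together.  A minimal sketch (the label varies by util-linux version - "CPU socket(s)" on older releases versus "Socket(s)" on newer ones - so the pattern below matches both):

   $ lscpu | awk -F: '
       tolower($1) ~ /socket\(s\)/   { sockets = $2 }   # socket count line
       $1 ~ /^Core\(s\) per socket/  { cores = $2 }     # cores-per-socket line
       END { print sockets * cores, "physical cores total" }'
   2 physical cores total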

Next we tried something that, while much less pretty, zeroed in on the exact requirement:

  $ cat /proc/cpuinfo | grep "physical id" | sort -u | wc -l
  1
 

This told us exactly how many sockets we had.  And then, for fun (Tom is nothing if not fun), we wondered how to account for whether hyperthreading was enabled, so he whipped out this:

   $ egrep -e "core id" -e ^physical /proc/cpuinfo | xargs -l2 echo | sort -u
   physical id : 0 core id : 0
   physical id : 0 core id : 1
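
The logic here: each unique "physical id / core id" pair is one real core, so if /proc/cpuinfo lists more processors than unique pairs, the extras are hyperthread siblings.  A little sketch that turns that into a yes/no answer (nothing fancier than comparing the two counts):

   $ CORES=$(egrep -e "core id" -e ^physical /proc/cpuinfo | xargs -l2 echo | sort -u | wc -l)
   $ PROCS=$(grep -c ^processor /proc/cpuinfo)
   $ [ "$PROCS" -gt "$CORES" ] && echo "hyperthreading: on" || echo "hyperthreading: off"
   hyperthreading: off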

So Tom went back to work, happy and ready to give his bosses EXACTLY what they needed (he was so happy he had a new scripting project to tinker with; there's a rough starting point for it after the links below).  These commands worked from RHEL 6 back to RHEL 4, so most everyone should be able to use them.  If you're interested in giving these a whirl, there are also a few official knowledge solutions produced by our esteemed Ryan Sawhill that you may want to review:

   How to determine number of CPU sockets on a system

     https://access.redhat.com/knowledge/solutions/61791

and

   Difference between physical cpus, cpu cores, and logical cpus

     https://access.redhat.com/knowledge/solutions/224883
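
And since Tom now has a scripting project: if you wanted to seed your own CMDB, even appending one CSV line per host gets you started.  A rough sketch reusing the commands above (the file name and example hostname are made up; you'd run this from cron or your config-management tool of choice):

   $ SOCKETS=$(grep "physical id" /proc/cpuinfo | sort -u | wc -l)
   $ CORES=$(egrep -e "core id" -e ^physical /proc/cpuinfo | xargs -l2 echo | sort -u | wc -l)
   $ PROCS=$(grep -c ^processor /proc/cpuinfo)
   $ echo "$(hostname),$SOCKETS,$CORES,$PROCS" >> cpu-inventory.csv
   $ cat cpu-inventory.csv
   webserver01,1,2,2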

 

So what do you think?  Is this useful stuff?  Will this save you any time or even help you start off your own CMDB?  We'd love to hear from you!

 

Cheers,

CRob

Technical Account Manager

Red Hat Inc.

Responses

Yes, this is useful. I've kept these around for a few years to help me. Some other options I've used are:

Check if the server is a VM:

dmidecode | grep -i product
        Product Name: VMware Virtual Platform
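
If you want that as a script-friendly test rather than eyeball output, something along these lines works; this is just a sketch, and the match strings are the common ones (KVM guests report "KVM" or "RHEV Hypervisor" in the same field, depending on version):

# Needs root for dmidecode
if dmidecode -s system-product-name | grep -qiE 'vmware|kvm|virtual'; then
    echo "guest"
else
    echo "bare metal"
fi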

 

Get the number of CPU sockets:

grep -i "physical id" /proc/cpuinfo | sort -u | wc -l

dmidecode | grep -i cpu
        Socket Designation: CPU1
        Socket Designation: CPU2
        Socket Designation: CPU3
        Socket Designation: CPU4
                CPU.Socket.1
                CPU.Socket.2
                CPU.Socket.3
                CPU.Socket.4

 

Check if hyperthreading is enabled (if siblings is greater than cores it is on; if they are equal it is off) - see the sketch below:

  • cat /proc/cpuinfo | egrep 'sibling|cores'
  • grep -c "^processor" /proc/cpuinfo

(The second command counts logical CPUs; anchoring on ^processor avoids also matching "Processor" in the model name line.)
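
A sketch that compares those two values directly (assuming a homogeneous box, so reading the first processor entry is enough):

SIBLINGS=$(awk -F: '/^siblings/ {print $2+0; exit}' /proc/cpuinfo)
CORES=$(awk -F: '/^cpu cores/ {print $2+0; exit}' /proc/cpuinfo)
if [ "$SIBLINGS" -gt "$CORES" ]; then echo "hyperthreading on"; else echo "hyperthreading off"; fi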
     

 

Thank you for sharing, Todd!  Yes, these are excellent choices as well.  A question back to you: once you collect this data, are you generating it at the time you need it and then discarding it (for a license true-up or an audit, for example), or are you storing it and periodically updating it in some kind of CMDB?

 

-Chris

Christopher,

 

Hyperthreading can be found with lscpu too, I guess:

 

lscpu | grep -i thread

Thread(s) per core:    2

 

I have not tried to turn hyperthreading off on my system yet.
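
For what it's worth, short of a BIOS change you can take individual logical CPUs offline at runtime through sysfs.  Which CPU numbers are the hyperthread siblings varies by machine (check the core id pairs first), so cpu3 below is only an example:

# As root: take logical CPU 3 offline (it disappears from /proc/cpuinfo)
echo 0 > /sys/devices/system/cpu/cpu3/online
# ...and bring it back
echo 1 > /sys/devices/system/cpu/cpu3/online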

 

Kind regards,

 

Jan Gerrit

I usually recommend something in line with the KB article using dmidecode, as it is available in more versions of RHEL and works with VMs.

For example, on a RHEL 5.9 guest of a RHEL 6.3 KVM host, cpuinfo doesn't contain a physical id:

# cat /proc/cpuinfo | grep "physical id" | sort -u | wc -l
0
But dmidecode still shows sockets:

# dmidecode -t4 | egrep 'Designation|Status'
        Socket Designation: CPU 1
        Status: Populated, Enabled
        Socket Designation: CPU 2
        Status: Populated, Enabled
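
Putting the two together, a sketch of a socket count that works on both bare metal and guests (it falls back to counting the sockets dmidecode reports as populated, which needs root):

SOCKETS=$(grep "physical id" /proc/cpuinfo | sort -u | wc -l)
if [ "$SOCKETS" -eq 0 ]; then
    SOCKETS=$(dmidecode -t4 | grep -c "Status: Populated")
fi
echo "$SOCKETS socket(s)"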

Another excellent tip!  Thanks for sharing Matthew!

Christopher already pointed out both of the knowledgebase articles I wrote (one of which has an attached little script)... but I'll also add that xsos (covered in a previous groups post) can query info about CPUs too.

(In the screenshot it's working against an extracted sosreport; if you don't give it a directory it works against the local system.)
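
If you want to try it, invocation is along these lines (check xsos --help for the exact flags on your copy):

# Against the local system
xsos --cpu
# Against an extracted sosreport
xsos --cpu /path/to/sosreport-dir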

Thanks for adding this in here, Ryan!

Thanks Ryan!  Let me say again how awesome xsos is [it is pretty awesome].  There are so many little hooks, like this one, that keep me coming back to it as one of my go-to tools when working an issue.

Hello all,

What about using the "hwloc" tool?

The latest documentation on hwloc:

http://www.open-mpi.org/projects/hwloc/doc/v1.6.2/

The tool is now included in RHEL 6.x, and for older versions of RHEL, you can download your own copy of the open source tool at :

http://www.open-mpi.org/projects/hwloc/

Since it is now included in-box with RHEL 6.x, I suggest that hwloc is easier to use than processing dmidecode and lspci output directly, which is very motherboard-vendor specific.  Using the hwloc tool gives you a greater degree of vendor and motherboard independence.
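
For anyone who hasn't tried it, the package's lstopo command prints the whole hierarchy in one shot.  Run in a plain terminal it produces a text rendering; the output below is illustrative (a small 1-socket, 2-core box), and the exact labels vary with the hwloc version:

lstopo
Machine (4096MB)
  Socket L#0
    Core L#0 + PU L#0 (P#0)
    Core L#1 + PU L#1 (P#1)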

Unfortunately, there have been some birthing pains with hwloc, and there have been several updates to the package.

https://rhn.redhat.com/errata/RHBA-2013-0331.html

Thoughts?

Dave B

I absolutely agree Dave! I use lstopo and love it.

I wish hwloc were a default install package in RHEL 6. That's one minus. The other is that I (and many others) still deal with TONNNNNNNS of non-RHEL6 systems (i.e., RHEL 5 and below), so hwloc is usually not available. That said, the goal behind the hwloc package matches up exactly with what this thread is talking about. :)

I'm always glad to hear when people are using it CRob! :)

Unfortunately, some sites "harden" their RHEL builds. I know that for us, even though RHEL has included tools for doing SCSI rescans, we're still stuck doing the old `echo "- - -" ..` thing because the friendlier tools haven't been authorized for use. I'd guess it would be similar for things like hwloc (and try explaining why you need such a tool to a security person who assumes it makes too much information available, even when you have other methods of getting that info).

Thanks Ryan,

Don't forget that hwloc is downloadable from the open-mpi.org site, and many other sites have built RPM packages for RHEL 5 and older.  Use one of the RHEL 5-compatible RPMs, add it to your RPM repository, and it is not much more difficult than using the in-box packages.  Yes, it is not part of the default install, but there are typically many packages an organization uses that are not part of the default install.  Once you have a procedure to collect and install non-default packages, adding hwloc's package to that list is trivial.

One challenge all of us deal with when supporting RHEL 5 and older systems is the small syntax and functionality differences in the versions of the tools that work with the older RHEL releases.  But that is nothing new.  More and more tools are making it easier to query the version number so you can include the appropriate conditional code.

As an aside, many of the recent generation of 10GbE, Fibre Channel, and InfiniBand and faster interfaces are "multi-queue" aware.  There are several good past Red Hat Summit presentations about tuning these interfaces for higher throughput, better efficiency, and lower latencies.  I assume there will be a few presentations at this year's upcoming Summit.

At the center of many of these tuning techniques is distributing the IO stack's processing load across multiple cores while keeping that processing "near" the physical IO channel.  The hwloc tool allows you to easily identify the current system's topology and optimize appropriately.
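
As a concrete illustration of "keep the work near the IO channel": hwloc also ships a binding helper.  The socket number and command below are illustrative; you'd pick the socket closest to your HBA or NIC, which lstopo will show you:

# Run a worker pinned to the cores of socket 0
hwloc-bind socket:0 -- ./io_worker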

> Don't forget that hwloc is downloadable from the the open-mpi.org site,
> and many other sites have built the RPM packages for both RHEL 5 and
> older.

Oh, I'm well aware, Dave. :)  ... It's an act of Congress to get RHEL packages installed on most of the systems I've been dealing with in my last 6 years of Linux, much less external non-RH-provided packages (see Tom's post above). But yes, I agree that hwloc is an amazing tool.

Hi,

I use numactl --hardware for this info. It also gives information about the topology of the CPU.

# numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 32 33 34 35 36 37 38 39
node 0 size: 16307 MB
node 0 free: 15456 MB
node 1 cpus: 8 9 10 11 12 13 14 15 40 41 42 43 44 45 46 47
node 1 size: 16384 MB
node 1 free: 15980 MB
node 2 cpus: 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55
node 2 size: 16384 MB
node 2 free: 15963 MB
node 3 cpus: 24 25 26 27 28 29 30 31 56 57 58 59 60 61 62 63
node 3 size: 16384 MB
node 3 free: 15945 MB
node distances:
node   0   1   2   3
  0:  10  21  31  21
  1:  21  10  21  31
  2:  31  21  10  21
  3:  21  31  21  10
 

A node is normally a socket.

This also shows which CPUs are on a specific node.
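
And once you know the layout, the same package lets you pin work to a node.  A quick sketch (the node number and command are illustrative):

# Run a process on node 0's CPUs, with its memory allocated from node 0
numactl --cpunodebind=0 --membind=0 ./my_app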

 

Hi Rich,

numactl usage - beware.  Yes, numactl does provide CPU topology information, but how the CPUs are "numbered" can vary widely with or without hyperthreading, and across different "platforms" (1-socket, 2-socket, 4-socket; Intel platform generation, AMD platform generation, Power or Itanium platform generation).  Scripts work on the original topology they were developed on, and unexpected differences in the numactl numbering often appear when you run on a different topology.  OK... you figure it out, put in the needed conditional code, and continue.  Then the same thing happens with another different topology, or on some system with a different hyperthreading setting in the BIOS that someone forgot to enable or disable.

These kinds of hardware-specific inconsistencies are what the hwloc package and the lstopo tool are designed to handle.  Not only do they display the topology, but they attempt to map the topology into a common "namespace".  Numactl does not try to map the topology description into a multi-vendor, multi-platform common namespace.

As you run numactl on more and more configurations, with different BIOS settings (what happens when you disable hyperthreads or CPUs in the BIOS or boot settings?), you will effectively end up re-inventing mapping or translation conditional code for each configuration you need to support.  What is often more important, as has already been discussed in this thread, is that numactl is more likely to be installed and available than the hwloc package and the lstopo tool.

Also beware... any of these tools running on a VM guest may be displaying the virtual topology, the real topology, or some hybrid combination, depending on the low-level API the tool is using.  And such VM guest topology may change over time: you save a VM on a 2-core system and re-hydrate it on a 16-core system.  What does the VM guest see?  It depends....
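
One cheap sanity check before trusting any topology output: newer lscpu versions will say outright when you are in a guest.  The two fields below only appear under a hypervisor, and older util-linux releases may not print them at all:

lscpu | grep -iE 'hypervisor|virtualization'
Virtualization type:   full
Hypervisor vendor:     KVM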


Well said! Kudos.