Calculating Server Memory Bandwidth and Measuring Its Usage (a long read)

Tags: Linux, tool introduction, DDR3, intel-performance-tuning-utility, mbw | Posted: 2011-09-12 20:54 | Author: Yu Feng
Source: http://blog.yufeng.info

Original article. When reposting, please credit: reposted from Erlang非业余研究.

Permalink: Calculating Server Memory Bandwidth and Measuring Its Usage (a long read)

A while back we hit a bottleneck while tuning MySQL and suspected that excessive memory copying was saturating the memory bandwidth. On Linux, top shows memory usage and iostat shows IO device usage, but there is no tool for measuring memory bandwidth usage. We could see plenty of memcpy and string copies (these can be measured with SystemTap), but simple data-movement operations cannot be accounted for that way. What we really want is a hardware-level way to find out how many bytes the CPUs have read from and written to main memory over a period of time.

So our memory measurement goal boils down to two questions: 1. What is the real memory bandwidth of a server like this? 2. How much of that bandwidth does our application actually use?

First, let's look at our server's configuration:

$ sudo ~/aspersa/summary
# Aspersa System Summary Report ##############################
        Date | 2011-09-12 11:23:11 UTC (local TZ: CST +0800)
    Hostname | my031121.sqa.cm4
      Uptime | 13 days,  3:52,  2 users,  load average: 0.02, 0.01, 0.00
      System | Dell Inc.; PowerEdge R710; vNot Specified (<OUT OF SPEC>)
 Service Tag | DHY6S2X
     Release | Red Hat Enterprise Linux Server release 5.4 (Tikanga)
      Kernel | 2.6.18-164.el5
Architecture | CPU = 64-bit, OS = 64-bit
   Threading | NPTL 2.5
    Compiler | GNU CC version 4.1.2 20080704 (Red Hat 4.1.2-44).
     SELinux | Disabled
# Processor ##################################################
  Processors | physical = 2, cores = 12, virtual = 24, hyperthreading = yes
      Speeds | 24x2926.089
      Models | 24xIntel(R) Xeon(R) CPU X5670 @ 2.93GHz
      Caches | 24x12288 KB
# Memory #####################################################
       Total | 94.40G
        Free | 4.39G
        Used | physical = 90.01G, swap = 928.00k, virtual = 90.01G
     Buffers | 1.75G
      Caches | 7.85G
        Used | 78.74G
  Swappiness | vm.swappiness = 0
 DirtyPolicy | vm.dirty_ratio = 40, vm.dirty_background_ratio = 10
  Locator   Size     Speed             Form Factor   Type          Type Detail
  ========= ======== ================= ============= ============= ===========
  DIMM_A1   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_A2   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_A3   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_A4   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_A5   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_A6   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_B1   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_B2   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_B3   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_B4   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_B5   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_B6   8192 MB  1333 MHz (0.8 ns) DIMM          {OUT OF SPEC} Synchronous
  DIMM_A7   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Synchronous
  DIMM_A8   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Synchronous
  DIMM_A9   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Synchronous
  DIMM_B7   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Synchronous
  DIMM_B8   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Synchronous
  DIMM_B9   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Synchronous
...

This Dell R710 has 2 X5670 CPUs, each with 6 cores plus hyperthreading, so there are 24 logical CPUs in total. It is populated with 12 DIMMs of 8192 MB (1333 MHz) memory.

The machine architecture has moved from the old FSB (front-side bus) design to a NUMA design; thanks to @fcicq for the information. See the figure below (source):

We can clearly see that each CPU has its own memory controller connected directly to its memory over 3 channels, and the CPUs are linked to each other by QPI. Data flows over both the memory controllers and the QPI links.

Our server uses DDR3 memory, so let's work out the theoretical memory bandwidth under this architecture.

For how DDR3 memory bandwidth is calculated, see here.
From the configuration above, the server's DIMMs are: DIMM_A1 8192 MB 1333 MHz (0.8 ns); there are 12 of them, 6 attached to each CPU.
Following that article, the calculation goes: per channel, (1333/8) * 64 * 8 / 8 ≈ 10.6 GB/s; our CPU has 3 memory channels, so one CPU's total memory bandwidth is 10.6 * 3 ≈ 31.8 GB/s; the machine has 2 CPUs, so the total bandwidth is about 63.6 GB/s. That is the theory at least (please correct me if this is wrong); we will measure it in a moment.
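
As a quick sanity check of the arithmetic (a sketch: 1333 is the DDR3-1333 transfer rate in MT/s, and each 64-bit channel moves 8 bytes per transfer):

$ echo "1333 * 8" | bc          # per channel: MT/s x 8 bytes per transfer
10664                           # ~10.6 GB/s per channel
$ echo "1333 * 8 * 3" | bc      # 3 channels per socket
31992                           # ~31.8 GB/s per socket
$ echo "1333 * 8 * 3 * 2" | bc  # 2 sockets
63984                           # ~63.6 GB/s for the whole machine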

Given this calculation, memory bandwidth is clearly not the bottleneck. So where does the problem lie? Let's keep looking!

We need a tool that shovels memory around. Tools of this kind report the maximum memory bandwidth a single thread can drive. Let's try a simple one, mbw (mbw – Memory BandWidth benchmark):

$ sudo apt-get install mbw
$ mbw -q -n 1 256
0	Method: MEMCPY	Elapsed: 0.19652	MiB: 256.00000	Copy: 1302.647 MiB/s
AVG	Method: MEMCPY	Elapsed: 0.19652	MiB: 256.00000	Copy: 1302.647 MiB/s
0	Method: DUMB	Elapsed: 0.12668	MiB: 256.00000	Copy: 2020.840 MiB/s
AVG	Method: DUMB	Elapsed: 0.12668	MiB: 256.00000	Copy: 2020.840 MiB/s
0	Method: MCBLOCK	Elapsed: 0.02945	MiB: 256.00000	Copy: 8693.585 MiB/s
AVG	Method: MCBLOCK	Elapsed: 0.02945	MiB: 256.00000	Copy: 8693.585 MiB/s

For a tool that measures CPU memory bandwidth at the hardware level, @王王争 helped out a great deal and gave me a thorough introduction to PTU (intel-performance-tuning-utility), which can be downloaded here.
After unpacking the binary package, bin/vtbwrun is the hardware-level memory bandwidth measurement tool we need; here is its help output:

$ sudo ptu40_005_lin_intel64/bin/vtbwrun  --help

***********************************
                PTU
        Performance Tuning Utility
                For
              Nehalem
              Ver 0.4
***********************************

Usage: ./ptu [-c] [-i <iterations>] [-A] [-r] [-p] [-w]
-c disable CPU check.
-i <iterations> specify how many iterations PTU should run.
-A Automated mode, no user Input.
-r Monitor QHL read/write requests from the IOH
*************************** EXCLUSIVE ******************************
-p Monitor partial writes on Memory Channel 0,1,2
-w Monitor WriteBack, Conflict event
*************************** EXCLUSIVE ******************************

$ sudo ptu40_005_lin_intel64/bin/vtbwrun -c -A

A screenshot taken at runtime:

The figure clearly shows System Memory Throughput (MB/s): 13019.45, and the QPI links are also carrying quite a lot of traffic.

In addition, we need to know the CPU topology, i.e. which physical CPU, which core, and which hyperthread each logical CPU seen by the OS maps to. Only with this information can we use taskset to pin the memory benchmark to specific CPUs and observe memory usage precisely. For CPU topology, see here.
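
Before turning to Intel's utility below, here is a quick way to read the same mapping straight from sysfs (a sketch; the topology files under /sys/devices/system/cpu are present on this 2.6.18-era kernel). Logical CPUs that report the same package id and core id are hyperthread siblings:

$ for c in /sys/devices/system/cpu/cpu[0-9]*; do echo "${c##*/} package=$(cat $c/topology/physical_package_id) core=$(cat $c/topology/core_id)"; done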

$ sudo ./cpu_topology64.out
        Advisory to Users on system topology enumeration

This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.

User should also be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.

        Software visible enumeration in the system:
Number of logical processors visible to the OS: 24
Number of logical processors visible to this process: 24
Number of processor cores visible to this process: 12
Number of physical packages visible to this process: 2 

        Hierarchical counts by levels of processor topology:
 # of cores in package  0 visible to this process: 6 .
         # of logical processors in Core 0 visible to this process: 2 .
         # of logical processors in Core  1 visible to this process: 2 .
         # of logical processors in Core  2 visible to this process: 2 .
         # of logical processors in Core  3 visible to this process: 2 .
         # of logical processors in Core  4 visible to this process: 2 .
         # of logical processors in Core  5 visible to this process: 2 .
 # of cores in package  1 visible to this process: 6 .
         # of logical processors in Core 0 visible to this process: 2 .
         # of logical processors in Core  1 visible to this process: 2 .
         # of logical processors in Core  2 visible to this process: 2 .
         # of logical processors in Core  3 visible to this process: 2 .
         # of logical processors in Core  4 visible to this process: 2 .
         # of logical processors in Core  5 visible to this process: 2 .

        Affinity masks per SMT thread, per core, per package:
Individual:
        P:0, C:0, T:0 --> 1
        P:0, C:0, T:1 --> 1z3

Core-aggregated:
        P:0, C:0 --> 1001
Individual:
        P:0, C:1, T:0 --> 4
        P:0, C:1, T:1 --> 4z3

Core-aggregated:
        P:0, C:1 --> 4004
Individual:
        P:0, C:2, T:0 --> 10
        P:0, C:2, T:1 --> 1z4

Core-aggregated:
        P:0, C:2 --> 10010
Individual:
        P:0, C:3, T:0 --> 40
        P:0, C:3, T:1 --> 4z4

Core-aggregated:
        P:0, C:3 --> 40040
Individual:
        P:0, C:4, T:0 --> 100
        P:0, C:4, T:1 --> 1z5

Core-aggregated:
        P:0, C:4 --> 100100
Individual:
        P:0, C:5, T:0 --> 400
        P:0, C:5, T:1 --> 4z5

Core-aggregated:
        P:0, C:5 --> 400400

Pkg-aggregated:
        P:0 --> 555555
Individual:
        P:1, C:0, T:0 --> 2
        P:1, C:0, T:1 --> 2z3

Core-aggregated:
        P:1, C:0 --> 2002
Individual:
        P:1, C:1, T:0 --> 8
        P:1, C:1, T:1 --> 8z3

Core-aggregated:
        P:1, C:1 --> 8008
Individual:
        P:1, C:2, T:0 --> 20
        P:1, C:2, T:1 --> 2z4

Core-aggregated:
        P:1, C:2 --> 20020
Individual:
        P:1, C:3, T:0 --> 80
        P:1, C:3, T:1 --> 8z4

Core-aggregated:
        P:1, C:3 --> 80080
Individual:
        P:1, C:4, T:0 --> 200
        P:1, C:4, T:1 --> 2z5

Core-aggregated:
        P:1, C:4 --> 200200
Individual:
        P:1, C:5, T:0 --> 800
        P:1, C:5, T:1 --> 8z5

Core-aggregated:
        P:1, C:5 --> 800800

Pkg-aggregated:
        P:1 --> aaaaaa

        APIC ID listings from affinity masks
OS cpu   0, Affinity mask 00000001 - apic id 20
OS cpu   1, Affinity mask 00000002 - apic id 0
OS cpu   2, Affinity mask 00000004 - apic id 22
OS cpu   3, Affinity mask 00000008 - apic id 2
OS cpu   4, Affinity mask 00000010 - apic id 24
OS cpu   5, Affinity mask 00000020 - apic id 4
OS cpu   6, Affinity mask 00000040 - apic id 30
OS cpu   7, Affinity mask 00000080 - apic id 10
OS cpu   8, Affinity mask 00000100 - apic id 32
OS cpu   9, Affinity mask 00000200 - apic id 12
OS cpu  10, Affinity mask 00000400 - apic id 34
OS cpu  11, Affinity mask 00000800 - apic id 14
OS cpu  12, Affinity mask 00001000 - apic id 21
OS cpu  13, Affinity mask 00002000 - apic id 1
OS cpu  14, Affinity mask 00004000 - apic id 23
OS cpu  15, Affinity mask 00008000 - apic id 3
OS cpu  16, Affinity mask 00010000 - apic id 25
OS cpu  17, Affinity mask 00020000 - apic id 5
OS cpu  18, Affinity mask 00040000 - apic id 31
OS cpu  19, Affinity mask 00080000 - apic id 11
OS cpu  20, Affinity mask 00100000 - apic id 33
OS cpu  21, Affinity mask 00200000 - apic id 13
OS cpu  22, Affinity mask 00400000 - apic id 35
OS cpu  23, Affinity mask 00800000 - apic id 15

Package 0 Cache and Thread details

Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 6
L1I is Level 1 Instruction cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 6
L2 is Level 2 Unified cache, size(KBytes)= 256,  Cores/cache= 2, Caches/package= 6
L3 is Level 3 Unified cache, size(KBytes)= 12288,  Cores/cache= 12, Caches/package= 1
      +-------------+-------------+-------------+-------------+-------------+-------------+
Cache |   L1D       |   L1D       |   L1D       |   L1D       |   L1D       |   L1D       |
Size  |   32K       |   32K       |   32K       |   32K       |   32K       |   32K       |
OScpu#|     0     12|     2     14|     4     16|     6     18|     8     20|    10     22|
Core  | c0_t0  c0_t1| c1_t0  c1_t1| c2_t0  c2_t1| c3_t0  c3_t1| c4_t0  c4_t1| c5_t0  c5_t1|
AffMsk|     1    1z3|     4    4z3|    10    1z4|    40    4z4|   100    1z5|   400    4z5|
CmbMsk|  1001       |  4004       | 10010       | 40040       |100100       |400400       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |   L1I       |   L1I       |   L1I       |   L1I       |   L1I       |   L1I       |
Size  |   32K       |   32K       |   32K       |   32K       |   32K       |   32K       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |    L2       |    L2       |    L2       |    L2       |    L2       |    L2       |
Size  |  256K       |  256K       |  256K       |  256K       |  256K       |  256K       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |    L3                                                                             |
Size  |   12M                                                                             |
CmbMsk|555555                                                                             |
      +-----------------------------------------------------------------------------------+

Combined socket AffinityMask= 0x555555

Package 1 Cache and Thread details

Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
      +-------------+-------------+-------------+-------------+-------------+-------------+
Cache |   L1D       |   L1D       |   L1D       |   L1D       |   L1D       |   L1D       |
Size  |   32K       |   32K       |   32K       |   32K       |   32K       |   32K       |
OScpu#|     1     13|     3     15|     5     17|     7     19|     9     21|    11     23|
Core  | c0_t0  c0_t1| c1_t0  c1_t1| c2_t0  c2_t1| c3_t0  c3_t1| c4_t0  c4_t1| c5_t0  c5_t1|
AffMsk|     2    2z3|     8    8z3|    20    2z4|    80    8z4|   200    2z5|   800    8z5|
CmbMsk|  2002       |  8008       | 20020       | 80080       |200200       |800800       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |   L1I       |   L1I       |   L1I       |   L1I       |   L1I       |   L1I       |
Size  |   32K       |   32K       |   32K       |   32K       |   32K       |   32K       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |    L2       |    L2       |    L2       |    L2       |    L2       |    L2       |
Size  |  256K       |  256K       |  256K       |  256K       |  256K       |  256K       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |    L3                                                                             |
Size  |   12M                                                                             |
CmbMsk|aaaaaa                                                                             |
      +-----------------------------------------------------------------------------------+

# Or, the simplest way: let Erlang tell us
$ erl
Erlang R14B04 (erts-5.8.5) 1 [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.8.5  (abort with ^G)
1> erlang:system_info(cpu_topology).
[{processor,[{core,[{thread,{logical,1}},
                    {thread,{logical,13}}]},
             {core,[{thread,{logical,3}},{thread,{logical,15}}]},
             {core,[{thread,{logical,5}},{thread,{logical,17}}]},
             {core,[{thread,{logical,7}},{thread,{logical,19}}]},
             {core,[{thread,{logical,9}},{thread,{logical,21}}]},
             {core,[{thread,{logical,11}},{thread,{logical,23}}]}]},
 {processor,[{core,[{thread,{logical,0}},
                    {thread,{logical,12}}]},
             {core,[{thread,{logical,2}},{thread,{logical,14}}]},
             {core,[{thread,{logical,4}},{thread,{logical,16}}]},
             {core,[{thread,{logical,6}},{thread,{logical,18}}]},
             {core,[{thread,{logical,8}},{thread,{logical,20}}]},
             {core,[{thread,{logical,10}},{thread,{logical,22}}]}]}]

# A quick shell script that launches one mbw instance pinned to each of the specified CPUs
$ cat run_mbw.sh
#!/bin/bash
# Usage: ./run_mbw.sh <first_cpu> <last_cpu> <step>
# Note the argument order to seq: FIRST STEP LAST.
for i in $(seq $1 $3 $2)
do
    echo $i
    # Pin one mbw to logical CPU $i: 9999 iterations copying a 256 MiB array, output discarded.
    taskset -c $i mbw -q -n 9999 256 >/dev/null &
done

$ chmod +x run_mbw.sh

From the topology dumps above, we know that physical CPU 0 owns the even-numbered logical CPUs (0, 2, ..., 22) and physical CPU 1 owns the odd-numbered ones (1, 3, ..., 23). So running ./run_mbw.sh <from> <to> <step> puts load exactly on the CPUs we want.
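
To double-check where each copy actually landed, and to clean up afterwards, something like the following works (taskset -pc prints the affinity list of a running pid):

$ for p in $(pgrep mbw); do taskset -pc $p; done   # show the affinity of every running mbw
$ pkill mbw                                        # stop them all once the experiment is over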

With these tools in hand, we can start experimenting. Take a sip of water and carry on:

$ sudo ./run_mbw.sh  0 22 2
0
2
...

We can see that the CPU binding matches our expectation; see the figure below.
And what about the memory bandwidth? See the figure:

Something is off: the memory traffic is consumed by both CPUs, and there is QPI traffic as well.

$ sudo ./run_mbw.sh  1 23 2
1
3
...

See the figure below.

Now all the CPUs are fully loaded. What about the memory bandwidth? See the figure:

We can see that this machine's peak memory bandwidth is about 32 GB/s.

This is puzzling: why does it behave this way? Each CPU is not, as expected, consuming only its own locally attached memory, and the QPI traffic is heavy.
The root cause is that we disabled NUMA when booting the OS, to keep mysqld from triggering swap on a large-memory box.
Let's confirm:

# cat /proc/cmdline
ro root=LABEL=/ numa=off  console=tty0 console=ttyS1,115200
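
Besides the kernel command line, numactl (if it is installed) also shows how many NUMA nodes the OS sees; with numa=off it should report something like a single node:

$ numactl --hardware
available: 1 nodes (0)
...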

Having found the cause, the rest is easy. No matter; let's find another machine with NUMA left enabled and run the experiment there:

$ cat /proc/cmdline
ro root=LABEL=/ console=tty0 console=ttyS0,9600 # NUMA is indeed not disabled; this machine has 2 nodes

$ sudo ~/aspersa/summary
# Aspersa System Summary Report ##############################
        Date | 2011-09-12 12:26:18 UTC (local TZ: CST +0800)
    Hostname | my031089
      Uptime | 18 days, 10:33,  2 users,  load average: 0.04, 0.01, 0.00
      System | Huawei Technologies Co., Ltd.; Tecal RH2285; vV100R001 (Main Server Chassis)
 Service Tag | 210231771610B4001897
     Release | Red Hat Enterprise Linux Server release 5.4 (Tikanga)
      Kernel | 2.6.18-164.el5
Architecture | CPU = 64-bit, OS = 64-bit
   Threading | NPTL 2.5
    Compiler | GNU CC version 4.1.2 20080704 (Red Hat 4.1.2-44).
     SELinux | Disabled
# Processor ##################################################
  Processors | physical = 2, cores = 8, virtual = 16, hyperthreading = yes
      Speeds | 16x1600.000
      Models | 16xIntel(R) Xeon(R) CPU E5620 @ 2.40GHz
      Caches | 16x12288 KB
# Memory #####################################################
       Total | 23.53G
        Free | 8.40G
        Used | physical = 15.13G, swap = 20.12M, virtual = 15.15G
     Buffers | 863.79M
      Caches | 4.87G
        Used | 8.70G
  Swappiness | vm.swappiness = 60
 DirtyPolicy | vm.dirty_ratio = 40, vm.dirty_background_ratio = 10
  Locator   Size     Speed             Form Factor   Type          Type Detail
  ========= ======== ================= ============= ============= ===========
  DIMM_A0   4096 MB  1066 MHz (0.9 ns) DIMM          {OUT OF SPEC} Other
  DIMM_B0   4096 MB  1066 MHz (0.9 ns) DIMM          {OUT OF SPEC} Other
  DIMM_C0   4096 MB  1066 MHz (0.9 ns) DIMM          {OUT OF SPEC} Other
  DIMM_D0   4096 MB  1066 MHz (0.9 ns) DIMM          {OUT OF SPEC} Other
  DIMM_E0   4096 MB  1066 MHz (0.9 ns) DIMM          {OUT OF SPEC} Other
  DIMM_F0   4096 MB  1066 MHz (0.9 ns) DIMM          {OUT OF SPEC} Other
  DIMM_A1   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Other
  DIMM_B1   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Other
  DIMM_C1   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Other
  DIMM_D1   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Other
  DIMM_E1   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Other
  DIMM_F1   {EMPTY}  Unknown           DIMM          {OUT OF SPEC} Other
...

$ erl
Erlang R14B03 (erts-5.8.4) 1 [64-bit] [smp:16:16] [rq:16] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.8.4  (abort with ^G)
1>  erlang:system_info(cpu_topology).
erlang:system_info(cpu_topology).
[{node,[{processor,[{core,[{thread,{logical,4}},
                           {thread,{logical,12}}]},
                    {core,[{thread,{logical,5}},{thread,{logical,13}}]},
                    {core,[{thread,{logical,6}},{thread,{logical,14}}]},
                    {core,[{thread,{logical,7}},{thread,{logical,15}}]}]}]},
 {node,[{processor,[{core,[{thread,{logical,0}},
                           {thread,{logical,8}}]},
                    {core,[{thread,{logical,1}},{thread,{logical,9}}]},
                    {core,[{thread,{logical,2}},{thread,{logical,10}}]},
                    {core,[{thread,{logical,3}},{thread,{logical,11}}]}]}]}]
2>

This is a Huawei-made machine with 2 E5620 CPUs, 16 logical CPUs in total; logical CPUs 4-7 and 12-15 map to physical CPU 0, and logical CPUs 0-3 and 8-11 map to physical CPU 1.

$ ./run_mbw.sh 4 5 1
4
5

The memory bandwidth picture at this point:
The figure shows that the 2 mbw instances bound to CPU 0 consume about 10.4 GB/s of bandwidth, CPU 1 consumes nothing, and QPI carries nothing, exactly as expected.

Keep increasing the load:

$ ./run_mbw.sh 6 7 1
6
7


Memory bandwidth consumption has reached about 16 GB/s; keep adding load.

$ ./run_mbw.sh 12 15 1
12
13
14
15


We can see that as the number of threads increases, the memory bandwidth stays flat, which means we have hit the ceiling: this CPU tops out at about 16 GB/s.

Let's repeat the experiment on the other CPU, CPU 1:

$ ./run_mbw.sh 0 3 1
0
1
2
3
$ ./run_mbw.sh 8 11 1
8
9
10
11

Apply all the load at once and look at the screenshot:

The figure shows that CPU 0 and CPU 1 each consume close to 15 GB/s of bandwidth, about 30 GB/s in total.

By now it should be clear how to calculate our server's memory bandwidth and how to measure how much of it is currently in use. Interesting, isn't it?
Many thanks to the Internet for letting us solve problems this quickly.

BTW: the measured bandwidth is about half of the theoretical value. Did I make a mistake in the calculation somewhere? Corrections are very welcome, thanks!

Have fun!
