10-Docker System Resource Limits and Verification
Published: 2020-09-26 16:00:50  Editor: 雪饮
Today we'll look at stress testing a container's memory and CPU limits.
lscpu
Before diving in, let's first look at lscpu. The lscpu command shows information about the host's CPUs. In the output below you can see this server has 4 logical CPUs, numbered 0-3.
[root@localhost ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Stepping: 10
CPU MHz: 2207.999
CPU max MHz: 0.0000
CPU min MHz: 0.0000
BogoMIPS: 4415.99
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 9216K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm 3dnowprefetch epb fsgsbase smep xsaveopt xsavec xgetbv1 dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
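As a quick sanity check, the topology fields above multiply out to the 4 logical CPUs that lscpu reports: sockets × cores per socket × threads per core. A minimal sketch of that arithmetic, with the values hard-coded from this particular output:

```shell
# Values taken from the lscpu output above
# (VMware guest: 4 sockets, 1 core per socket, 1 thread per core).
sockets=4
cores_per_socket=1
threads_per_core=1

logical_cpus=$((sockets * cores_per_socket * threads_per_core))
echo "logical CPUs: $logical_cpus"   # matches "CPU(s): 4" and the 0-3 on-line list
```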
lorel/docker-stress-ng
Now that we know the CPU layout of the host our target container will run on, we need a stress-testing tool. ab is not a good fit here (it could work, but it's awkward), so we'll use the fairly well-known docker-stress-ng image from the Docker registry. First, pull the image:
[root@localhost ~]# docker pull lorel/docker-stress-ng
Using default tag: latest
Trying to pull repository docker.io/lorel/docker-stress-ng ...
latest: Pulling from docker.io/lorel/docker-stress-ng
c52e3ed763ff: Pull complete
a3ed95caeb02: Pull complete
7f831269c70e: Pull complete
Digest: sha256:c8776b750869e274b340f8e8eb9a7d8fb2472edd5b25ff5b7d55728bca681322
Status: Downloaded newer image for docker.io/lorel/docker-stress-ng:latest
The image essentially contains just the stress-ng binary; let's take a quick look at its help output:
[root@localhost ~]# docker run --name stress -it --rm lorel/docker-stress-ng:latest stress --help
stress-ng, version 0.03.11
Usage: stress-ng [OPTION [ARG]]
--h, --help show help
--affinity N start N workers that rapidly change CPU affinity
--affinity-ops N stop when N affinity bogo operations completed
--affinity-rand change affinity randomly rather than sequentially
--aio N start N workers that issue async I/O requests
--aio-ops N stop when N bogo async I/O requests completed
--aio-requests N number of async I/O requests per worker
-a N, --all N start N workers of each stress test
-b N, --backoff N wait of N microseconds before work starts
-B N, --bigheap N start N workers that grow the heap using calloc()
--bigheap-ops N stop when N bogo bigheap operations completed
--bigheap-growth N grow heap by N bytes per iteration
--brk N start N workers performing rapid brk calls
--brk-ops N stop when N brk bogo operations completed
--brk-notouch don't touch (page in) new data segment page
--bsearch start N workers that exercise a binary search
--bsearch-ops stop when N binary search bogo operations completed
--bsearch-size number of 32 bit integers to bsearch
-C N, --cache N start N CPU cache thrashing workers
--cache-ops N stop when N cache bogo operations completed (x86 only)
--cache-flush flush cache after every memory write (x86 only)
--cache-fence serialize stores
--class name specify a class of stressors, use with --sequential
--chmod N start N workers thrashing chmod file mode bits
--chmod-ops N stop chmod workers after N bogo operations
-c N, --cpu N start N workers spinning on sqrt(rand())
--cpu-ops N stop when N cpu bogo operations completed
-l P, --cpu-load P load CPU by P %%, 0=sleep, 100=full load (see -c)
--cpu-method m specify stress cpu method m, default is all
-D N, --dentry N start N dentry thrashing processes
--dentry-ops N stop when N dentry bogo operations completed
--dentry-order O specify dentry unlink order (reverse, forward, stride)
--dentries N create N dentries per iteration
--dir N start N directory thrashing processes
--dir-ops N stop when N directory bogo operations completed
-n, --dry-run do not run
--dup N start N workers exercising dup/close
--dup-ops N stop when N dup/close bogo operations completed
--epoll N start N workers doing epoll handled socket activity
--epoll-ops N stop when N epoll bogo operations completed
--epoll-port P use socket ports P upwards
--epoll-domain D specify socket domain, default is unix
--eventfd N start N workers stressing eventfd read/writes
--eventfd-ops N stop eventfd workers after N bogo operations
--fault N start N workers producing page faults
--fault-ops N stop when N page fault bogo operations completed
--fifo N start N workers exercising fifo I/O
--fifo-ops N stop when N fifo bogo operations completed
--fifo-readers N number of fifo reader processes to start
--flock N start N workers locking a single file
--flock-ops N stop when N flock bogo operations completed
-f N, --fork N start N workers spinning on fork() and exit()
--fork-ops N stop when N fork bogo operations completed
--fork-max P create P processes per iteration, default is 1
--fstat N start N workers exercising fstat on files
--fstat-ops N stop when N fstat bogo operations completed
--fstat-dir path fstat files in the specified directory
--futex N start N workers exercising a fast mutex
--futex-ops N stop when N fast mutex bogo operations completed
--get N start N workers exercising the get*() system calls
--get-ops N stop when N get bogo operations completed
-d N, --hdd N start N workers spinning on write()/unlink()
--hdd-ops N stop when N hdd bogo operations completed
--hdd-bytes N write N bytes per hdd worker (default is 1GB)
--hdd-direct minimize cache effects of the I/O
--hdd-dsync equivalent to a write followed by fdatasync
--hdd-noatime do not update the file last access time
--hdd-sync equivalent to a write followed by fsync
--hdd-write-size N set the default write size to N bytes
--hsearch start N workers that exercise a hash table search
--hsearch-ops stop when N hash search bogo operations completed
--hsearch-size number of integers to insert into hash table
--inotify N start N workers exercising inotify events
--inotify-ops N stop inotify workers after N bogo operations
-i N, --io N start N workers spinning on sync()
--io-ops N stop when N io bogo operations completed
--ionice-class C specify ionice class (idle, besteffort, realtime)
--ionice-level L specify ionice level (0 max, 7 min)
-k, --keep-name keep stress process names to be 'stress-ng'
--kill N start N workers killing with SIGUSR1
--kill-ops N stop when N kill bogo operations completed
--lease N start N workers holding and breaking a lease
--lease-ops N stop when N lease bogo operations completed
--lease-breakers N number of lease breaking processes to start
--link N start N workers creating hard links
--link-ops N stop when N link bogo operations completed
--lsearch start N workers that exercise a linear search
--lsearch-ops stop when N linear search bogo operations completed
--lsearch-size number of 32 bit integers to lsearch
-M, --metrics print pseudo metrics of activity
--metrics-brief enable metrics and only show non-zero results
--memcpy N start N workers performing memory copies
--memcpy-ops N stop when N memcpy bogo operations completed
--mmap N start N workers stressing mmap and munmap
--mmap-ops N stop when N mmap bogo operations completed
--mmap-async using asynchronous msyncs for file based mmap
--mmap-bytes N mmap and munmap N bytes for each stress iteration
--mmap-file mmap onto a file using synchronous msyncs
--mmap-mprotect enable mmap mprotect stressing
--msg N start N workers passing messages using System V messages
--msg-ops N stop msg workers after N bogo messages completed
--mq N start N workers passing messages using POSIX messages
--mq-ops N stop mq workers after N bogo messages completed
--mq-size N specify the size of the POSIX message queue
--nice N start N workers that randomly re-adjust nice levels
--nice-ops N stop when N nice bogo operations completed
--no-madvise don't use random madvise options for each mmap
--null N start N workers writing to /dev/null
--null-ops N stop when N /dev/null bogo write operations completed
-o, --open N start N workers exercising open/close
--open-ops N stop when N open/close bogo operations completed
-p N, --pipe N start N workers exercising pipe I/O
--pipe-ops N stop when N pipe I/O bogo operations completed
-P N, --poll N start N workers exercising zero timeout polling
--poll-ops N stop when N poll bogo operations completed
--procfs N start N workers reading portions of /proc
--procfs-ops N stop procfs workers after N bogo read operations
--pthread N start N workers that create multiple threads
--pthread-ops N stop pthread workers after N bogo threads created
--pthread-max P create P threads at a time by each worker
-Q, --qsort N start N workers exercising qsort on 32 bit random integers
--qsort-ops N stop when N qsort bogo operations completed
--qsort-size N number of 32 bit integers to sort
-q, --quiet quiet output
-r, --random N start N random workers
--rdrand N start N workers exercising rdrand instruction (x86 only)
--rdrand-ops N stop when N rdrand bogo operations completed
-R, --rename N start N workers exercising file renames
--rename-ops N stop when N rename bogo operations completed
--sched type set scheduler type
--sched-prio N set scheduler priority level N
--seek N start N workers performing random seek r/w IO
--seek-ops N stop when N seek bogo operations completed
--seek-size N length of file to do random I/O upon
--sem N start N workers doing semaphore operations
--sem-ops N stop when N semaphore bogo operations completed
--sem-procs N number of processes to start per worker
--sendfile N start N workers exercising sendfile
--sendfile-ops N stop after N bogo sendfile operations
--sendfile-size N size of data to be sent with sendfile
--sequential N run all stressors one by one, invoking N of them
--sigfd N start N workers reading signals via signalfd reads
--sigfd-ops N stop when N bogo signalfd reads completed
--sigfpe N start N workers generating floating point math faults
--sigfpe-ops N stop when N bogo floating point math faults completed
--sigsegv N start N workers generating segmentation faults
--sigsegv-ops N stop when N bogo segmentation faults completed
-S N, --sock N start N workers doing socket activity
--sock-ops N stop when N socket bogo operations completed
--sock-port P use socket ports P to P + number of workers - 1
--sock-domain D specify socket domain, default is ipv4
--stack N start N workers generating stack overflows
--stack-ops N stop when N bogo stack overflows completed
-s N, --switch N start N workers doing rapid context switches
--switch-ops N stop when N context switch bogo operations completed
--symlink N start N workers creating symbolic links
--symlink-ops N stop when N symbolic link bogo operations completed
--sysinfo N start N workers reading system information
--sysinfo-ops N stop when sysinfo bogo operations completed
-t N, --timeout N timeout after N seconds
-T N, --timer N start N workers producing timer events
--timer-ops N stop when N timer bogo events completed
--timer-freq F run timer(s) at F Hz, range 1000 to 1000000000
--tsearch start N workers that exercise a tree search
--tsearch-ops stop when N tree search bogo operations completed
--tsearch-size number of 32 bit integers to tsearch
--times show run time summary at end of the run
-u N, --urandom N start N workers reading /dev/urandom
--urandom-ops N stop when N urandom bogo read operations completed
--utime N start N workers updating file timestamps
--utime-ops N stop after N utime bogo operations completed
--utime-fsync force utime meta data sync to the file system
-v, --verbose verbose output
--verify verify results (not available on all tests)
-V, --version show version
-m N, --vm N start N workers spinning on anonymous mmap
--vm-bytes N allocate N bytes per vm worker (default 256MB)
--vm-hang N sleep N seconds before freeing memory
--vm-keep redirty memory instead of reallocating
--vm-ops N stop when N vm bogo operations completed
--vm-locked lock the pages of the mapped region into memory
--vm-method m specify stress vm method m, default is all
--vm-populate populate (prefault) page tables for a mapping
--wait N start N workers waiting on child being stop/resumed
--wait-ops N stop when N bogo wait operations completed
--zero N start N workers reading /dev/zero
--zero-ops N stop when N /dev/zero bogo read operations completed
Example: stress-ng --cpu 8 --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 10s
Note: Sizes can be suffixed with B,K,M,G and times with s,m,h,d,y
Stress testing a container's memory limit
Based on what we learned above, we use -m to cap the target container's memory at 256m, and start two vm workers to stress it. Each worker's memory consumption could also be set explicitly; we leave it at the default, which is also 256m per worker, so this command actually tries to consume 512m, exceeding the container's memory limit.
[root@localhost ~]# docker run --name stress -it --rm -m 256m lorel/docker-stress-ng:latest stress --vm 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm
In another session we can see the container's resource usage, starting with its process list:
[root@localhost ~]# docker top stress
UID PID PPID C STIME TTY TIME CMD
root 1765 1746 0 03:19 pts/1 00:00:00 /usr/bin/stress-ng stress --vm 2
root 1794 1765 0 03:19 pts/1 00:00:00 /usr/bin/stress-ng stress --vm 2
root 1795 1765 0 03:19 pts/1 00:00:00 /usr/bin/stress-ng stress --vm 2
root 1858 1794 24 03:21 pts/1 00:00:00 /usr/bin/stress-ng stress --vm 2
root 1859 1795 24 03:21 pts/1 00:00:00 /usr/bin/stress-ng stress --vm 2
The most important view, though, is the stats output:
[root@localhost ~]# docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
75e6e25b363b 0.00% 255.8 MiB / 256 MiB 99.91% 656 B / 656 B 786 MB / 32.5 GB 5
75e6e25b363b 68.23% 256 MiB / 256 MiB 99.98% 656 B / 656 B 786 MB / 32.7 GB 5
75e6e25b363b 9.95% 251.4 MiB / 256 MiB 98.22% 656 B / 656 B 786 MB / 32.8 GB 4
75e6e25b363b 28.81% 255.9 MiB / 256 MiB 99.95% 656 B / 656 B 796 MB / 33 GB 5
75e6e25b363b 9.76% 255.9 MiB / 256 MiB 99.95% 656 B / 656 B 796 MB / 33.1 GB 5
75e6e25b363b 37.49% 256 MiB / 256 MiB 99.98% 656 B / 656 B 798 MB / 33.3 GB 5
75e6e25b363b 3.59% 255.5 MiB / 256 MiB 99.79% 656 B / 656 B 798 MB / 33.3 GB 5
75e6e25b363b 0.00% 255.9 MiB / 256 MiB 99.97% 656 B / 656 B 803 MB / 33.7 GB 5
75e6e25b363b 19.42% 256 MiB / 256 MiB 100.00% 656 B / 656 B 806 MB / 33.8 GB 5
75e6e25b363b 7.83% 253.4 MiB / 256 MiB 99.00% 656 B / 656 B 806 MB / 33.9 GB 5
After running for a while, memory usage never exceeds 256m, no matter what, even though we know the workload actually wants 512m.
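The arithmetic behind this test can be sketched as follows (the 256m-per-worker default comes from stress-ng's --vm-bytes documentation shown earlier):

```shell
# Each --vm worker dirties 256 MB by default (stress-ng --vm-bytes),
# so two workers together want 512 MB, double the 256 MB cgroup limit.
workers=2
bytes_per_worker_mb=256
limit_mb=256

demanded_mb=$((workers * bytes_per_worker_mb))
echo "demanded: ${demanded_mb} MB, limit: ${limit_mb} MB"
if [ "$demanded_mb" -gt "$limit_mb" ]; then
    echo "workload exceeds the limit; the kernel caps the container at ${limit_mb} MB"
fi
```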
Stress testing a container's CPU limit
--cpus
Here we limit the target container to 2 CPUs, then hit it with 8 CPU stress workers.
[root@localhost ~]# docker run --name stress -it --rm --cpus 2 lorel/docker-stress-ng:latest stress --cpu 8
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 8 cpu
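As an aside, --cpus is not a core count in the scheduler's eyes; Docker translates it into a CFS bandwidth quota. A sketch of the mapping, assuming Docker's default 100 ms CFS period (exact cgroup file names differ between cgroup v1 and v2):

```shell
# --cpus N is roughly: cpu.cfs_quota_us = N * cpu.cfs_period_us (cgroup v1 naming).
cpus=2
period_us=100000                      # Docker's default CFS period (100 ms)

quota_us=$((cpus * period_us))
echo "cfs_period_us=${period_us} cfs_quota_us=${quota_us}"
# The container may consume at most 200 ms of CPU time per 100 ms
# wall-clock window, i.e. the ~200% ceiling seen in docker stats below.
```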
In another session, let's check the process list and the stats again:
[root@localhost ~]# docker top stress
UID PID PPID C STIME TTY TIME CMD
root 5835 5817 0 03:34 pts/1 00:00:00 /usr/bin/stress-ng stress --cpu 8
root 5867 5835 24 03:34 pts/1 00:00:30 /usr/bin/stress-ng stress --cpu 8
root 5868 5835 25 03:34 pts/1 00:00:32 /usr/bin/stress-ng stress --cpu 8
root 5869 5835 24 03:34 pts/1 00:00:31 /usr/bin/stress-ng stress --cpu 8
root 5870 5835 25 03:34 pts/1 00:00:32 /usr/bin/stress-ng stress --cpu 8
root 5871 5835 24 03:34 pts/1 00:00:30 /usr/bin/stress-ng stress --cpu 8
root 5872 5835 24 03:34 pts/1 00:00:30 /usr/bin/stress-ng stress --cpu 8
root 5873 5835 24 03:34 pts/1 00:00:30 /usr/bin/stress-ng stress --cpu 8
root 5874 5835 24 03:34 pts/1 00:00:30 /usr/bin/stress-ng stress --cpu 8
[root@localhost ~]# docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
48338e4876a1 20.56% 30.06 MiB / 981.9 MiB 3.06% 656 B / 656 B 0 B / 0 B 9
48338e4876a1 207.96% 30.06 MiB / 981.9 MiB 3.06% 656 B / 656 B 0 B / 0 B 9
48338e4876a1 195.51% 30.06 MiB / 981.9 MiB 3.06% 656 B / 656 B 0 B / 0 B 9
48338e4876a1 200.41% 30.06 MiB / 981.9 MiB 3.06% 656 B / 656 B 0 B / 0 B 9
48338e4876a1 201.33% 30.06 MiB / 981.9 MiB 3.06% 656 B / 656 B 0 B / 0 B 9
Even though we launched 8 workers, CPU usage climbs only to about 200% (two cores' worth) and then just fluctuates around that ceiling.
Now let's remove the two-core limit and try again:
[root@localhost ~]# docker run --name stress -it --rm lorel/docker-stress-ng:latest stress --cpu 8
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 8 cpu
With the limit removed, CPU usage immediately jumps to around 400% (as we saw at the start, the host has only 4 cores in total):
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
6bd99e0a94f8 381.03% 30.16 MiB / 981.9 MiB 3.07% 516 B / 516 B 0 B / 0 B 9
6bd99e0a94f8 386.72% 30.16 MiB / 981.9 MiB 3.07% 586 B / 586 B 0 B / 0 B 9
6bd99e0a94f8 398.63% 30.16 MiB / 981.9 MiB 3.07% 586 B / 586 B 0 B / 0 B 9
6bd99e0a94f8 403.09% 30.16 MiB / 981.9 MiB 3.07% 586 B / 586 B 0 B / 0 B 9
6bd99e0a94f8 405.60% 30.16 MiB / 981.9 MiB 3.07% 586 B / 586 B 0 B / 0 B 9
6bd99e0a94f8 397.81% 30.16 MiB / 981.9 MiB 3.07% 656 B / 656 B 0 B / 0 B 9
6bd99e0a94f8 402.13% 30.16 MiB / 981.9 MiB 3.07% 656 B / 656 B 0 B / 0 B 9
6bd99e0a94f8 383.53% 30.16 MiB / 981.9 MiB 3.07% 656 B / 656 B 0 B / 0 B 9
--cpuset-cpus
Next, let's restrict which CPUs the container is allowed to run on.
Here the container is pinned to CPUs 0 and 2; as shown at the beginning, the host has CPUs 0-3.
[root@localhost ~]# docker run --name stress -it --cpuset-cpus 0,2 --rm lorel/docker-stress-ng:latest stress --cpu 8
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 8 cpu
Since pinning to CPUs 0 and 2 again gives the container only two cores, CPU usage in the stats output tops out at around 200% once more:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
590339520a0a 200.22% 29.09 MiB / 981.9 MiB 2.96% 656 B / 656 B 0 B / 0 B 9
590339520a0a 200.90% 29.09 MiB / 981.9 MiB 2.96% 656 B / 656 B 0 B / 0 B 9
Next, let's try stress testing weighted CPU shares across multiple containers.
We'll use two containers here, which means we need at least three sessions. In the first session, start a container with a CPU share weight of 1024:
[root@localhost ~]# docker run --name stress -it --cpu-shares 1024 --rm lorel/docker-stress-ng:latest stress --cpu 8
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 8 cpu
In a second session, start another container with a CPU share weight of 512:
[root@localhost ~]# docker run --name stress2 -it --cpu-shares 512 --rm lorel/docker-stress-ng:latest stress --cpu 8
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 8 cpu
And in the third session, watch the resource usage:
[root@localhost ~]# docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
3a4755ef3834 1.54% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 22.52% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 169.66% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 294.72% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 104.93% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 231.57% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 133.59% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 257.49% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 130.79% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 262.83% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 130.60% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 276.81% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 122.25% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 272.46% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 129.12% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 303.50% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 158.52% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 280.25% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 120.37% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 219.89% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 132.13% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 265.20% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 120.32% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 257.75% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 127.75% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 255.31% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
3a4755ef3834 123.77% 15.81 MiB / 981.9 MiB 1.61% 656 B / 656 B 0 B / 0 B 9
c88b6c5c9fdb 270.39% 15.81 MiB / 981.9 MiB 1.61% 1.31 kB / 656 B 0 B / 0 B 9
Once both containers have pushed CPU usage to the ceiling, their usage settles into a roughly 1:2 ratio, matching the 512:1024 share weights, with little further fluctuation.
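The observed split follows from how CPU shares work: under contention, each container gets weight / sum(weights) of the machine. With 4 host cores (400%) and weights 1024 and 512, the expected steady-state figures are roughly 266% and 133%, close to what docker stats shows above. A sketch of the arithmetic:

```shell
# Expected CPU% per container under full contention:
#   weight / sum(weights) * total machine CPU%.
total_pct=400          # 4 host cores = 400% in docker stats terms
w1=1024                # first container's --cpu-shares
w2=512                 # second container's --cpu-shares

pct1=$((w1 * total_pct / (w1 + w2)))
pct2=$((w2 * total_pct / (w1 + w2)))
echo "weight ${w1}: ~${pct1}%"    # ~266%
echo "weight ${w2}: ~${pct2}%"    # ~133%
```

Note that shares are a relative weight, not a hard cap: if one container goes idle, the other is free to use all four cores.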