Stress Testing - netty-boot

Test Code
https://github.com/litongjava/netty-boot-web-hello
```java
package com.litongjava.study.netty.boot.handler;

import com.litongjava.model.body.RespBodyVo;
import com.litongjava.netty.boot.utils.HttpRequestUtils;
import com.litongjava.netty.boot.utils.HttpResponseUtils;

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;

public class OkHandler {

  // Returns a fixed plain-text body.
  public FullHttpResponse txt(ChannelHandlerContext ctx, FullHttpRequest httpRequest) throws Exception {
    String responseContent = "Hello, this is the HTTP response!";
    return HttpResponseUtils.txt(responseContent);
  }

  // Returns a small JSON body built from RespBodyVo.
  public FullHttpResponse json(ChannelHandlerContext ctx, FullHttpRequest httpRequest) throws Exception {
    RespBodyVo ok = RespBodyVo.ok();
    return HttpResponseUtils.json(ok);
  }

  // Echoes the full HTTP request back as plain text.
  public FullHttpResponse echo(ChannelHandlerContext ctx, FullHttpRequest httpRequest) {
    String fullHttpRequestString = HttpRequestUtils.getFullHttpRequestAsString(httpRequest);
    return HttpResponseUtils.txt(fullHttpRequestString);
  }
}
```
```java
package com.litongjava.study.netty.boot.config;

import java.util.function.Supplier;

import com.litongjava.annotation.AConfiguration;
import com.litongjava.annotation.Initialization;
import com.litongjava.jfinal.aop.Aop;
import com.litongjava.netty.boot.http.HttpRequestRouter;
import com.litongjava.netty.boot.server.NettyBootServer;
import com.litongjava.netty.boot.websocket.WebsocketRouter;
import com.litongjava.study.netty.boot.handler.OkHandler;
import com.litongjava.study.netty.boot.handler.Ws2Handler;
import com.litongjava.study.netty.boot.handler.WsHandler;

import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.websocketx.WebSocketFrame;

@AConfiguration
public class WebConfig {

  @Initialization
  public void config() {
    // Register HTTP routes
    HttpRequestRouter httpRouter = NettyBootServer.me().getHttpRequestRouter();
    OkHandler okHandler = Aop.get(OkHandler.class);
    httpRouter.add("/txt", okHandler::txt);
    httpRouter.add("/json", okHandler::json);
    httpRouter.add("/echo", okHandler::echo);

    // Register WebSocket routes
    WebsocketRouter websocketRouter = NettyBootServer.me().getWebsocketRouter();
    Supplier<SimpleChannelInboundHandler<WebSocketFrame>> wsHandlerSupplier = WsHandler::new;
    websocketRouter.add("/ws", wsHandlerSupplier);
    websocketRouter.add("/ws2", Ws2Handler::new);
  }
}
```
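Before launching a long benchmark, it is worth smoke-testing the routes by hand. A minimal check with curl might look like the following; note that the port (80) and the /im prefix are assumptions taken from the ab command later in this article, and your deployment may instead expose the routes at the root (e.g. /txt):

```shell
# Plain-text endpoint; expects "Hello, this is the HTTP response!"
curl -i http://localhost/im/txt

# JSON endpoint; expects the serialized RespBodyVo.ok() body
curl -i http://localhost/im/json

# Echo endpoint; returns the raw HTTP request as plain text
curl -i http://localhost/im/echo
```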
Server Configuration

Before running a performance test, in addition to having your Netty application up, you should make sure the operating system and network environment are prepared and tuned accordingly. Below are some common steps and configuration items to check:

1. System Resource Check

- CPU and memory
  Confirm the server's CPU core count and memory size to ensure there are enough resources for a high-concurrency test. Useful commands: `lscpu`, `free -m`, `cat /proc/meminfo`.
```
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz
BIOS Model name: Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz To Be Filled By O.E.M CPU @ 1.7GHz
BIOS CPU family: 205
CPU family: 6
Model: 69
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 46%
CPU max MHz: 2700.0000
CPU min MHz: 800.0000
BogoMIPS: 4789.06
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 64 KiB (2 instances)
L1i: 64 KiB (2 instances)
L2: 512 KiB (2 instances)
L3: 3 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Meltdown: Mitigation; PTI
Mmio stale data: Unknown: No mitigations
Reg file data sampling: Not affected
Retbleed: Not affected
Spec rstack overflow: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Srbds: Mitigation; Microcode
Tsx async abort: Not affected
```

```
free -m
total used free shared buff/cache available
Mem: 15908 930 14715 16 547 14978
Swap: 975 0 975
```

```
cat /proc/meminfo
MemTotal: 16290700 kB
MemFree: 15070768 kB
MemAvailable: 15339752 kB
Buffers: 32960 kB
Cached: 481704 kB
SwapCached: 0 kB
Active: 159868 kB
Inactive: 837100 kB
Active(anon): 6580 kB
Inactive(anon): 492296 kB
Active(file): 153288 kB
Inactive(file): 344804 kB
Unevictable: 136 kB
Mlocked: 0 kB
SwapTotal: 999420 kB
SwapFree: 999420 kB
Zswap: 0 kB
Zswapped: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 478452 kB
Mapped: 233044 kB
Shmem: 16572 kB
KReclaimable: 45744 kB
Slab: 104708 kB
SReclaimable: 45744 kB
SUnreclaim: 58964 kB
KernelStack: 3552 kB
PageTables: 4788 kB
SecPageTables: 0 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 9144768 kB
Committed_AS: 5023368 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 35612 kB
VmallocChunk: 0 kB
Percpu: 1904 kB
HardwareCorrupted: 0 kB
AnonHugePages: 233472 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 180512 kB
DirectMap2M: 6025216 kB
DirectMap1G: 10485760 kB
```
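During the benchmark itself it also helps to watch resource usage, so you can tell whether the server is CPU-bound, memory-bound, or waiting on I/O. Two commonly available commands (the `<pid>` placeholder stands for the Java process ID):

```shell
# One-second samples of run queue, memory, swap, and CPU usage
vmstat 1

# Per-thread CPU view of the server process; replace <pid> with the Java PID
top -H -p <pid>
```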
2. File Descriptor and Process Limits

- File descriptor limit
  Handling many concurrent connections requires a sufficient number of file descriptors. Check the current limit with `ulimit -n`, raise it temporarily with `ulimit -n 65535`, or edit `/etc/security/limits.conf` for a permanent configuration.
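For the permanent setting, `/etc/security/limits.conf` takes one entry per domain/type/item. The lines below are an illustrative example (the `*` domain and the 65535 value are placeholders to adapt to your environment):

```
# /etc/security/limits.conf -- example entries
*  soft  nofile  65535
*  hard  nofile  65535
```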
3. Network Parameter Tuning

- TCP parameters
  Review and, where necessary, adjust TCP-related parameters such as the maximum number of connections, TIME_WAIT handling, and socket buffer sizes. To list the current TCP parameters:

```shell
sysctl -a | grep net.ipv4.tcp
```

- Socket queue
  Check and adjust `net.core.somaxconn` to allow more pending connections; this value also caps the listen backlog a Netty server requests via `ChannelOption.SO_BACKLOG`:

```shell
sysctl net.core.somaxconn
sudo sysctl -w net.core.somaxconn=4096
```

- Other tuning
  Parameters such as `net.ipv4.tcp_tw_reuse` and `net.ipv4.tcp_fin_timeout` can be tuned for your specific workload (on recent kernels, a `tcp_tw_reuse` value of 2 enables reuse for loopback connections only, which matters for a localhost benchmark):

```shell
sysctl -w net.ipv4.tcp_tw_reuse=2
sysctl -w net.ipv4.tcp_fin_timeout=60
```
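Values set with `sysctl -w` are lost on reboot. To persist them, one common approach is appending the settings to `/etc/sysctl.conf` (or a drop-in file under `/etc/sysctl.d/`) and reloading; the values below simply repeat the examples above:

```shell
# Persist the example values from this section
echo 'net.core.somaxconn = 4096'     | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_tw_reuse = 2'     | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_fin_timeout = 60' | sudo tee -a /etc/sysctl.conf

# Reload the settings from /etc/sysctl.conf
sudo sysctl -p
```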
Test Report

```
ab -c1000 -n10000000 http://localhost/im/json
This is ApacheBench, Version 2.3 <$Revision: 1913912 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000000 requests
Completed 2000000 requests
Completed 3000000 requests
Completed 4000000 requests
Completed 5000000 requests
Completed 6000000 requests
Completed 7000000 requests
Completed 8000000 requests
Completed 9000000 requests
Completed 10000000 requests
Finished 10000000 requests
Server Software:
Server Hostname: localhost
Server Port: 80
Document Path: /im/json
Document Length: 56 bytes
Concurrency Level: 1000
Time taken for tests: 794.690 seconds
Complete requests: 10000000
Failed requests: 0
Total transferred: 1410000000 bytes
HTML transferred: 560000000 bytes
Requests per second: 12583.52 [#/sec] (mean)
Time per request: 79.469 [ms] (mean)
Time per request: 0.079 [ms] (mean, across all concurrent requests)
Transfer rate: 1732.69 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 37 5.1 37 114
Processing: 12 42 9.5 41 574
Waiting: 5 30 8.8 30 574
Total: 44 79 9.0 78 589
Percentage of the requests served within a certain time (ms)
50% 78
66% 80
75% 82
80% 84
90% 87
95% 90
98% 94
99% 98
100% 589 (longest request)
```
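Note that ab opens a new TCP connection for every request by default, so TCP connection setup is part of every measured latency. To measure the handler itself, ab's `-k` flag enables HTTP keep-alive so connections are reused (provided the server honors keep-alive), which typically lowers the Connect component and raises the requests-per-second figure:

```shell
# The same test with HTTP keep-alive enabled
ab -k -c1000 -n10000000 http://localhost/im/json
```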
Report Analysis

Test Summary

- Concurrency Level: 1000. The test simulated 1000 concurrent requests to measure performance under high-concurrency load.
- Total requests: 10,000,000. Ten million requests were sent in total.
- Time taken for tests: 794.690 seconds. The total time needed to complete all requests.
- Failed requests: 0. No request failed, indicating good stability under concurrent load.
- Total transferred: 1,410,000,000 bytes. Includes HTTP headers as well as response bodies.
- HTML transferred: 560,000,000 bytes. The document payload alone.

Request Processing Efficiency

- Requests per second: 12,583.52 [#/sec] (mean). The number of requests the service handles per second (RPS), reflecting its throughput.
- Time per request: 79.469 ms (mean), i.e. the average time from sending a request to receiving its response.
- Time per request (across all concurrent requests): 0.079 ms (mean), the average time attributed to a single request once spread across all concurrent connections.
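These figures are mutually consistent: 10,000,000 requests / 794.690 s ≈ 12,583.5 requests per second, and 79.469 ms / 1000 concurrent connections ≈ 0.079 ms per request.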
Data Transfer Efficiency

- Transfer rate: 1732.69 KB/sec received. How fast data moved over the network during the test.
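This rate also follows from the totals: 1,410,000,000 bytes / 794.690 s ≈ 1,774,277 bytes/s, which is 1,732.69 KB/s when divided by 1024.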
Connection Times

Time statistics for each phase:

Connect:
- min: 0 ms
- mean: 37 ms (std dev ±5.1 ms)
- median: 37 ms
- max: 114 ms

Processing:
- min: 12 ms
- mean: 42 ms (std dev ±9.5 ms)
- median: 41 ms
- max: 574 ms

Waiting:
- min: 5 ms
- mean: 30 ms (std dev ±8.8 ms)
- median: 30 ms
- max: 574 ms

Total:
- min: 44 ms
- mean: 79 ms (std dev ±9.0 ms)
- median: 78 ms
- max: 589 ms
Percentage Distribution

The percentile breakdown shows the response time (in ms) within which a given fraction of requests completed:

- 50% of requests completed within 78 ms.
- 66% within 80 ms.
- 75% within 82 ms.
- 80% within 84 ms.
- 90% within 87 ms.
- 95% within 90 ms.
- 98% within 94 ms.
- 99% within 98 ms.
- 100% within 589 ms (the slowest request).

These numbers show that in the vast majority of cases the system responds within roughly 80 ms, while a very small fraction of requests stretch to nearly 600 ms.