Common Linux kernel parameter changes
1. Kernel parameter changes
Parameter descriptions
# Maximum TCP receive window (bytes)
# Current value:
net.core.rmem_max=16777216
# Maximum TCP send window (bytes)
# Current value:
net.core.wmem_max=16777216
# Memory used for socket receive auto-tuning. The first value is the minimum number of bytes allocated for the socket receive buffer; the second is the default (it overrides net.core.rmem_default), which the buffer can grow to when the system is not under heavy load; the third is the maximum size of the receive buffer (it does not override net.core.rmem_max).
# Current value:
net.ipv4.tcp_rmem="4096 87380 16777216"
# Memory used for socket send auto-tuning. The first value is the minimum number of bytes allocated for the socket send buffer; the second is the default (it overrides net.core.wmem_default), which the buffer can grow to when the system is not under heavy load; the third is the maximum size of the send buffer (it does not override net.core.wmem_max).
# Current value:
net.ipv4.tcp_wmem="4096 16384 16777216"
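The three-value tuples above can be easy to misread. A minimal Python sketch that splits them into named min/default/max fields (parse_tcp_mem is a hypothetical helper, not part of any standard library):

```python
# Sketch: parse the "min default max" tuples used by tcp_rmem/tcp_wmem.
# parse_tcp_mem is a hypothetical helper for illustration only.

def parse_tcp_mem(value: str) -> dict:
    """Split a 'min default max' sysctl string into named fields (bytes)."""
    min_b, default_b, max_b = (int(x) for x in value.split())
    return {"min": min_b, "default": default_b, "max": max_b}

# The receive-buffer values recommended in this document:
rmem = parse_tcp_mem("4096 87380 16777216")
print(rmem)  # {'min': 4096, 'default': 87380, 'max': 16777216}

# On a live Linux host the current setting can be read from procfs:
# with open("/proc/sys/net/ipv4/tcp_rmem") as f:
#     print(parse_tcp_mem(f.read()))
```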
Tuning queue sizes
# net.core.somaxconn controls the size of the connection listening queue
# Current value: 128
net.core.somaxconn=4096
# Maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them
# Current value:
net.core.netdev_max_backlog=16384
# Already modified
# net.ipv4.tcp_max_syn_backlog=8192
# Already modified
# net.ipv4.tcp_syncookies=1
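One detail worth knowing about somaxconn: the backlog an application passes to listen() is silently capped by net.core.somaxconn, so raising only the application-side backlog has no effect. A small sketch, assuming a Linux host (the procfs read falls back to the historical default of 128 elsewhere):

```python
import socket

# Sketch: the kernel clamps listen() backlogs to net.core.somaxconn,
# so the effective queue depth is min(requested, somaxconn).

def effective_backlog(requested: int) -> int:
    try:
        with open("/proc/sys/net/core/somaxconn") as f:
            somaxconn = int(f.read())
    except OSError:
        somaxconn = 128  # historical Linux default
    return min(requested, somaxconn)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(4096)  # the kernel clamps this to somaxconn
print("effective backlog:", effective_backlog(4096))
srv.close()
```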
How to change:
1. As root, edit /etc/sysctl.conf (e.g. vi /etc/sysctl.conf) and add the following (note that values in sysctl.conf are written without quotes):
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 16384 16777216
net.core.somaxconn=4096
net.core.netdev_max_backlog=16384
2. Run: sysctl -p
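After running sysctl -p, the applied values can be double-checked by reading procfs directly: a sysctl key maps to a /proc/sys path by replacing dots with slashes. A sketch (key_to_path is a hypothetical helper name):

```python
# Sketch: verify applied sysctl values via procfs on a Linux host.

def key_to_path(key: str) -> str:
    """Map a sysctl key like net.core.rmem_max to its /proc/sys path."""
    return "/proc/sys/" + key.replace(".", "/")

expected = {
    "net.core.rmem_max": "16777216",
    "net.core.wmem_max": "16777216",
    "net.core.somaxconn": "4096",
}

for key, want in expected.items():
    try:
        with open(key_to_path(key)) as f:
            got = f.read().strip()
        print(key, "=", got, "(expected", want + ")")
    except OSError:
        print(key, ": not readable on this host")
```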
Rollback:
1. First remove the lines added to /etc/sysctl.conf.
2. Restore each parameter to its previous value with sysctl -w:
sysctl -w net.core.rmem_max=<previous value>
sysctl -w net.core.wmem_max=<previous value>
sysctl -w net.ipv4.tcp_rmem="<previous value>"
sysctl -w net.ipv4.tcp_wmem="<previous value>"
sysctl -w net.core.somaxconn=128
sysctl -w net.core.netdev_max_backlog=1000
Note: the previous values can be read from another server that has not been tuned, e.g. with sysctl -a | grep rmem_max
Changing the open files limit
Description
Linux defaults the open files limit to 1024. Applications sometimes fail with a "Too many open files" error because this limit is too low, in which case ulimit and file-max need to be raised. This matters especially for web servers that serve many static files and for cache servers such as squid.
Current values:
soft nofile 8192
hard nofile 8192
New values:
soft nofile 20480
hard nofile 20480
How to change:
As root, edit /etc/security/limits.conf (e.g. vi /etc/security/limits.conf) and set (the first column is the user or group the limit applies to; * means all users):
* soft nofile 20480
* hard nofile 20480
The new limits take effect at the next login.
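The "Too many open files" failure mode described above is easy to reproduce without root: a process may lower its own soft nofile limit, after which opening descriptors past the limit raises EMFILE. A minimal sketch, assuming a Unix host:

```python
import os
import resource

# Sketch: lowering the soft RLIMIT_NOFILE (allowed without root) reproduces
# the "Too many open files" error that raising the limit is meant to avoid.

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

fds = []
try:
    for _ in range(200):
        fds.append(os.open(os.devnull, os.O_RDONLY))
except OSError as e:
    # errno EMFILE: "Too many open files"
    print("hit the limit after", len(fds), "descriptors:", e.strerror)
finally:
    for fd in fds:
        os.close(fd)
    # restore the original soft limit
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```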
Common parameter settings for Jetty high availability
Operating System Tuning
Both the server machine and any load generating machines need to be tuned to support many TCP/IP connections and high throughput.
Linux does a reasonable job of self-configuring TCP/IP, but there are a few limits and defaults that you should increase. You can configure most of these in /etc/security/limits.conf or via sysctl.
You should increase TCP buffer sizes to at least 16MB for 10G paths and tune the auto-tuning (keep in mind that you need to consider buffer bloat).
$ sysctl -w net.core.rmem_max=16777216
$ sysctl -w net.core.wmem_max=16777216
$ sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
$ sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
net.core.somaxconn controls the size of the connection listening queue. The default value is 128. If you are running a high-volume server and connections are getting refused at a TCP level, you need to increase this value. This setting can take a bit of finesse to get correct: if you set it too high, resource problems occur as it tries to notify a server of a large number of connections, and many remain pending, but if you set it too low, refused connections occur.
$ sysctl -w net.core.somaxconn=4096
The net.core.netdev_max_backlog controls the size of the incoming packet queue for upper-layer (Java) processing. The default (2048) may be increased and other related parameters adjusted with:
$ sysctl -w net.core.netdev_max_backlog=16384
$ sysctl -w net.ipv4.tcp_max_syn_backlog=8192
$ sysctl -w net.ipv4.tcp_syncookies=1
If many outgoing connections are made (for example, on load generators), the operating system might run low on ports. Thus it is best to increase the port range, and allow reuse of sockets in TIME_WAIT:
$ sysctl -w net.ipv4.ip_local_port_range="1024 65535"
$ sysctl -w net.ipv4.tcp_tw_recycle=1
(Note: net.ipv4.tcp_tw_recycle breaks clients behind NAT and was removed in Linux 4.12; on modern kernels use net.ipv4.tcp_tw_reuse=1 instead.)
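The port range set above is where the kernel draws ephemeral ports from. A small sketch, assuming a Linux host for the procfs read (elsewhere it falls back to the common default range):

```python
import socket

# Sketch: binding to port 0 asks the kernel for an ephemeral port,
# drawn from net.ipv4.ip_local_port_range on Linux.

def local_port_range():
    try:
        with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
            lo, hi = (int(x) for x in f.read().split())
    except OSError:
        lo, hi = 32768, 60999  # common Linux default
    return lo, hi

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
lo, hi = local_port_range()
print(f"got ephemeral port {port}, local range {lo}-{hi}")
s.close()
```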
Busy servers and load generators may run out of file descriptors as the system defaults are normally low. These can be increased for a specific user in /etc/security/limits.conf:
theusername hard nofile 40000
theusername soft nofile 40000
Linux supports pluggable congestion control algorithms. To get a list of congestion control algorithms that are available in your kernel run:
$ sysctl net.ipv4.tcp_available_congestion_control
If cubic and/or htcp are not listed, you need to research the control algorithms for your kernel. You can try setting the control to cubic with:
$ sysctl -w net.ipv4.tcp_congestion_control=cubic
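The same list that the sysctl command above prints can also be read from procfs. A sketch, assuming a Linux host (available_cc is a hypothetical helper that just splits the whitespace-separated algorithm names):

```python
# Sketch: list the congestion control algorithms available to the kernel.

def available_cc(text: str) -> list:
    """Split the whitespace-separated algorithm list into names."""
    return text.split()

try:
    with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
        algos = available_cc(f.read())
    print("available:", algos)
    print("cubic supported:", "cubic" in algos)
except OSError:
    print("not a Linux host (or procfs unavailable)")
```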
Reference: https://www.eclipse.org/jetty/documentation/current/high-load.html