Redis configuration file example

Note that in order to read the configuration file, Redis must be
started with the file path as first argument:

./redis-server /path/to/redis.conf

When memory size is needed for a configuration directive, it must be specified with a unit, in the usual form of 1k 5gb 4m and so forth:
1k => 1000 bytes
1kb => 1024 bytes
1m => 1000000 bytes
1mb => 1024*1024 bytes
1g => 1000000000 bytes
1gb => 1024*1024*1024 bytes
Units are case insensitive, so 1K 5GB 4M are all valid.
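As a rough illustration, the conversion rules above can be sketched in Python (the `parse_memory` helper is a hypothetical name for illustration, not part of Redis):

```python
# Sketch of the memory-size grammar above; parse_memory is a
# hypothetical helper, not part of Redis itself.
UNITS = {
    "": 1, "b": 1,
    "k": 1000, "kb": 1024,
    "m": 1000 ** 2, "mb": 1024 ** 2,
    "g": 1000 ** 3, "gb": 1024 ** 3,
}

def parse_memory(value: str) -> int:
    """Parse strings like '1k', '4m' or '5GB' (case-insensitive) into bytes."""
    value = value.strip().lower()
    number = value.rstrip("kmgb")      # numeric part
    unit = value[len(number):]         # remaining unit suffix
    return int(number) * UNITS[unit]

print(parse_memory("1kb"))  # 1024
print(parse_memory("5gb"))  # 5368709120
```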

INCLUDES

Include one or more other config files here.
This is useful if you have a standard template shared by all Redis servers but each server also needs a few custom settings; the include feature lets you pull in other config files, so use it wisely.
Note that the "include" option cannot be rewritten by the CONFIG REWRITE command. Since Redis always uses the last processed line as the value of a configuration directive, it is best to put includes at the beginning of this file to avoid overwriting config changes made at runtime. If instead you want includes to override the settings in this file, put them at the end.
include /path/to/local.conf
include /path/to/other.conf

MODULES

Load modules at startup. If the server is not able to load modules
it will abort. It is possible to use multiple loadmodule directives.

loadmodule /path/to/my_module.so
loadmodule /path/to/other_module.so

NETWORK

Bind Redis to the specified host interface(s). If no bind is configured, Redis listens for connections on all available network interfaces.
Example: bind 192.168.1.100 10.0.0.1
For safety Redis is usually bound to 127.0.0.1 only, but sometimes hosts on the same subnet need to connect; in that case you can bind 0.0.0.0 and control access with iptables, or set a Redis password to keep the data safe.
Leaving bind unset serves all requests, so setting it is recommended in production. A common misconception is that bind restricts which external IPs may access Redis; it does not. Restricting client source IPs is done with iptables, e.g.:
-A INPUT -s 10.10.1.0/24 -p tcp -m state --state NEW -m tcp --dport 9966 -j ACCEPT
What bind actually takes are addresses of the network interfaces on the Redis host itself (127.0.0.1 included).
Binding an address that does not belong to a local interface fails with: Creating Server TCP listening socket xxx.xxx.xxx.xxx:9966: bind: Cannot assign requested address.
Example: bind 127.0.0.1 10.10.1.3
With that binding, netstat -anp|grep 9966 shows both addresses bound, where 10.10.1.3 is the address of the server's NIC.
bind 0.0.0.0

Disable protected mode so that external hosts can connect (review the bind directive above when doing this).
protected-mode no

Accept connections on the specified port; the default is 6379. If port 0 is specified, Redis will not listen on a TCP socket.
port 6379

TCP listen() backlog.
This parameter sets the length of the completed-connection queue (connections that finished the TCP three-way handshake). In high-concurrency environments with slow clients you need a high backlog to avoid slow-client connection problems.
Note that the Linux kernel silently truncates this value to /proc/sys/net/core/somaxconn; the Redis default is 511, while the Linux default for somaxconn is only 128.
So tune both values together in order to get the intended effect.
tcp-backlog 511

Unix socket.
Specify the path of the Unix socket used to listen for incoming connections. There is no default: Redis will not listen on a Unix socket unless this is set.
unixsocket /tmp/redis.sock

When listening on a Unix socket, you can also set its permissions, e.g. 700.
unixsocketperm 700

Client connection timeout: close the connection after a client is idle for N seconds.
The default is 0, meaning the timeout is disabled and connections are never closed.
timeout 0

TCP keepalive.
If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence of communication. This is useful for two reasons:

  1. Detect dead peers.
  2. Take the connection alive from the point of view of network equipment in the middle.

A reasonable value for this option is 60 seconds.
tcp-keepalive 300

GENERAL

By default Redis does not run as a daemon; set this to "yes" to daemonize it.
Note that when daemonized, Redis writes its process ID to /var/run/redis.pid.
daemonize no

If you run Redis from upstart or systemd, Redis can interact with your
supervision tree. Options:
supervised no - no supervision interaction
supervised upstart - signal upstart by putting Redis into SIGSTOP mode
supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
supervised auto - detect upstart or systemd method based on
UPSTART_JOB or NOTIFY_SOCKET environment variables
Note: these supervision methods only signal "process is ready."
They do not enable continuous liveness pings back to your supervisor.
supervised no

When running daemonized, Redis writes its process ID to /var/run/redis.pid by default; you can change the pid file path here.
pidfile /var/run/redis_6379.pid

Specify the server verbosity level:
debug (a lot of information, useful for development/testing)
verbose (many rarely useful messages, but not a mess like the debug level)
notice (moderately verbose, probably what you want in production)
warning (only very important / critical messages are logged)
loglevel notice

Specify the log file name. "stdout" can also be used to force Redis to log to the standard output.
Note that if Redis runs as a daemon and you log to standard output, logs are sent to /dev/null and effectively discarded.
logfile ""

Logging to the system logger.
To use the system logger, just set "syslog-enabled" to "yes",
then optionally tune the other syslog parameters to suit your needs.
syslog-enabled no

Specify the syslog identity.
syslog-ident redis

Specify the syslog facility. Must be USER or one of LOCAL0-LOCAL7.
syslog-facility local0

Set the number of databases. The default database is DB 0; you can select a different one on a per-connection basis with SELECT <dbid>, where dbid is between 0 and 'databases'-1.
databases 16

By default Redis shows an ASCII art logo only when started to log to the
standard output and if the standard output is a TTY. Basically this means
that normally a logo is displayed only in interactive sessions.

However it is possible to force the pre-4.0 behavior and always show a
ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes

SNAPSHOTTING

Save the DB to disk: save <seconds> <changes> writes the DB to disk after the given number of seconds if at least the given number of write operations occurred.
With the example below, saving happens:
after 900 sec (15 min) if at least 1 key changed
after 300 sec (5 min) if at least 10 keys changed
after 60 sec if at least 10000 keys changed
To disable saving to disk entirely, just comment out all the "save" lines.
save 900 1
save 300 10
save 60 10000
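The save-point rule above can be sketched as follows (illustrative Python, not Redis source; the function name is an assumption):

```python
# Illustrative sketch (not Redis source) of the save-point rule:
# a snapshot fires when BOTH the elapsed time and change count are met.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]   # (seconds, changes)

def should_snapshot(elapsed_seconds, changes):
    """Return True if any configured save point is satisfied."""
    return any(elapsed_seconds >= secs and changes >= min_changes
               for secs, min_changes in SAVE_POINTS)

print(should_snapshot(400, 12))    # True: 300s elapsed with >= 10 changes
print(should_snapshot(30, 50000))  # False: not even 60 seconds elapsed yet
```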

By default Redis stops accepting writes if the latest background save failed; this is a hard way to make users aware that data is not persisting to disk properly, as otherwise the disaster would likely go unnoticed.
If the background saving process starts working again, Redis automatically allows writes again.
However, if you have proper monitoring in place, you may not want Redis to behave this way; in that case change it to "no".
stop-writes-on-bgsave-error yes

Compress string objects using LZF when dumping the .rdb database?
The default is "yes", as it is almost always a win.
Set it to "no" if you want to save some CPU in the saving child, but the dump file will be bigger if you have compressible keys or values.
rdbcompression yes

Whether to use a CRC64 checksum when reading and writing RDB files; enabled by default.
rdbchecksum yes

The filename of the dump / backup database file.
dbfilename dump.rdb

The working directory where the database dump is placed.
The path is configured separately from the filename because during a backup Redis first writes the current dataset to a temporary file; only when the dump is complete is the temporary file renamed to the file configured above. Both the temporary file and the final dump file live in this directory.
The DB is written inside this directory, with the filename given by the "dbfilename" directive above.
The append-only file is also placed here. Note that you must specify a directory, not a file name.
dir ./

REPLICATION

Master-replica replication. Use replicaof to make a Redis instance a copy of another Redis server.
Note that the configuration is local to the replica: the replica can have a different database file, bind to a different IP, and listen on a different port than the master.

+------------------+      +---------------+
|      Master      | ---> |    Replica    |
| (receive writes) |      |  (exact copy) |
+------------------+      +---------------+

  1. Redis replication is asynchronous, but you can configure a master to
    stop accepting writes if it appears to be not connected with at least
    a given number of replicas.
  2. Redis replicas are able to perform a partial resynchronization with the
    master if the replication link is lost for a relatively small amount of
    time. You may want to configure the replication backlog size (see the next
    sections of this file) with a sensible value depending on your needs.
  3. Replication is automatic and does not need user intervention. After a
    network partition replicas automatically try to reconnect to masters
    and resynchronize with them.

replicaof <masterip> <masterport>

If the master is password protected (via the "requirepass" option below), the replica must authenticate before starting the synchronization, otherwise its sync request will be refused.
masterauth <master-password>

When a replica loses its connection with the master, or while synchronization is still in progress, the replica can behave in two ways:

  1. If replica-serve-stale-data is set to "yes" (the default), the replica keeps replying to client requests, possibly with out-of-date data, or with empty values if this is the first synchronization.
  2. If replica-serve-stale-data is set to "no", the replica replies with the error "SYNC with master in progress" to all commands except INFO and SLAVEOF.

replica-serve-stale-data yes

You can configure a replica instance to accept writes or not. Writing against
a replica instance may be useful to store some ephemeral data (because data
written on a replica will be easily deleted after resync with the master) but
may also cause problems if clients are writing to it because of a
misconfiguration.

Since Redis 2.6 by default replicas are read-only.

Note: read only replicas are not designed to be exposed to untrusted clients
on the internet. It is just a protection layer against misuse of the instance.
Still a read only replica exports by default all the administrative commands
such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
security of read only replicas using 'rename-command' to shadow all the
administrative / dangerous commands.
replica-read-only yes

Replication SYNC strategy: disk or socket.


WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY

New replicas and reconnecting replicas that are not able to continue the replication
process just receiving differences, need to do what is called a "full
synchronization". An RDB file is transmitted from the master to the replicas.
The transmission can happen in two different ways:

  1. Disk-backed: The Redis master creates a new process that writes the RDB
    file on disk. Later the file is transferred by the parent
    process to the replicas incrementally.
  2. Diskless: The Redis master creates a new process that directly writes the
    RDB file to replica sockets, without touching the disk at all.

With disk-backed replication, while the RDB file is generated, more replicas
can be queued and served with the RDB file as soon as the current child producing
the RDB file finishes its work. With diskless replication instead once
the transfer starts, new replicas arriving will be queued and a new transfer
will start when the current one terminates.

When diskless replication is used, the master waits a configurable amount of
time (in seconds) before starting the transfer in the hope that multiple replicas
will arrive and the transfer can be parallelized.

With slow disks and fast (large bandwidth) networks, diskless replication
works better.
repl-diskless-sync no

When diskless replication is enabled, it is possible to configure the delay
the server waits in order to spawn the child that transfers the RDB via socket
to the replicas.

This is important since once the transfer starts, it is not possible to serve
new replicas arriving, that will be queued for the next RDB transfer, so the server
waits a delay in order to let more replicas arrive.

The delay is specified in seconds, and by default is 5 seconds. To disable
it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

The replica sends PINGs to the master at the interval configured with repl-ping-replica-period. The default is 10 seconds.
repl-ping-replica-period 10

Replication timeout.
The following option sets the timeout for bulk data I/O, data/ping requests to the master, and ping responses. The default is 60 seconds.
It is important that this value is greater than repl-ping-replica-period, otherwise a timeout between master and replica will be detected sooner than intended whenever traffic is low.
repl-timeout 60

Whether to disable TCP_NODELAY on the socket used to sync data to replicas.
With "yes", TCP_NODELAY is disabled and the TCP stack coalesces small packets before sending. This reduces the number of packets between master and replicas and saves bandwidth, but increases the time for data to reach the replica.
With "no", TCP_NODELAY stays enabled and the TCP stack does not delay small packets, so replication latency is lower at the cost of more bandwidth.
Normally "no" is the right choice to reduce sync latency, but "yes" can help when the network between master and replicas is already heavily loaded.
repl-disable-tcp-nodelay no

Replication backlog size.
The backlog is a buffer that accumulates replica data while replicas are disconnected, so that when a replica reconnects, a full resync is often unnecessary: a partial resync is enough, transferring only the portion of data the replica missed while disconnected.
The bigger the backlog, the longer a replica can stay disconnected and still be able to perform a partial resynchronization.
repl-backlog-size 1mb

After a master has had no connected replicas for some period of time, the backlog is freed. Setting this to 0 means the backlog is never released.
repl-backlog-ttl 3600

Replica priority.
When more than one replica exists and the master goes down, Redis Sentinel promotes the replica with the lowest priority value to master.
The lower the value, the more the replica is preferred for promotion. Note that a priority of 0 marks the replica as never eligible for automatic promotion to master.
replica-priority 100

It is possible for a master to stop accepting writes if there are less than
N replicas connected, having a lag less or equal than M seconds.

The N replicas need to be in "online" state.

The lag in seconds, that must be <= the specified value, is calculated from
the last ping received from the replica, that is usually sent every second.

This option does not GUARANTEE that N replicas will accept the write, but
will limit the window of exposure for lost writes in case not enough replicas
are available, to the specified number of seconds.

For example to require at least 3 replicas with a lag <= 10 seconds use:

min-replicas-to-write 3
min-replicas-max-lag 10

Setting one or the other to 0 disables the feature.

By default min-replicas-to-write is set to 0 (feature disabled) and
min-replicas-max-lag is set to 10.
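The write gate described above can be sketched as follows (illustrative Python, not Redis source; names are assumptions):

```python
# Illustrative sketch (not Redis source) of the min-replicas write gate.
def accept_writes(replica_lags, min_replicas=3, max_lag=10):
    """replica_lags: seconds since the last ping from each online replica."""
    if min_replicas == 0 or max_lag == 0:
        return True                    # setting either to 0 disables the gate
    good = sum(1 for lag in replica_lags if lag <= max_lag)
    return good >= min_replicas        # enough sufficiently-fresh replicas?

print(accept_writes([1, 2, 3]))    # True: 3 replicas within the 10s lag
print(accept_writes([1, 25, 3]))   # False: only 2 good replicas
```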

A Redis master is able to list the address and port of the attached
replicas in different ways. For example the "INFO replication" section
offers this information, which is used, among other tools, by
Redis Sentinel in order to discover replica instances.
Another place where this info is available is in the output of the
"ROLE" command of a master.

The listed IP and address normally reported by a replica is obtained
in the following way:

IP: The address is auto detected by checking the peer address
of the socket used by the replica to connect with the master.

Port: The port is communicated by the replica during the replication
handshake, and is normally the port that the replica is using to
listen for connections.

However when port forwarding or Network Address Translation (NAT) is
used, the replica may be actually reachable via different IP and port
pairs. The following two options can be used by a replica in order to
report to its master a specific set of IP and port, so that both INFO
and ROLE will report those values.

There is no need to use both the options if you need to override just
the port or the IP address.

replica-announce-ip 5.5.5.5
replica-announce-port 1234

SECURITY

Require clients to authenticate with a password before processing any other command.
This can be useful when you do not trust others with access to the host running redis-server. For backward compatibility this should stay commented out, as most people do not need auth (e.g. they run their own servers).
Warning: since Redis is very fast, an outside attacker can try up to 150k passwords per second against a good box.
This means you should use a very strong password, otherwise it will be very easy to break.
requirepass redis520...

Command renaming.
Rename dangerous commands, or disable them outright.
In a shared environment it is possible to change the names of dangerous commands. For instance you can rename CONFIG to something hard to guess, so it remains available for internal use while others cannot abuse it.
Example: rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
It is also possible to completely disable a command by renaming it to an empty string: rename-command CONFIG ""
rename-command CONFIG ""

CLIENTS

Set the max number of clients connected at the same time. The default is 10000.
Since Redis does not distinguish between client connections and internally opened files or replica connections, maxclients should not be set below 32. The limit is also bound by the maximum number of file descriptors the Redis process is allowed to open.
Once the limit is reached, Redis closes all new connections, sending the error 'max number of clients reached'.
maxclients 10000

MEMORY MANAGEMENT

Set a maximum memory usage limit for Redis.
When the memory limit is reached, Redis removes keys according to the eviction policy selected (see maxmemory-policy).
If Redis cannot remove keys according to the policy, or if the policy is set to "noeviction", Redis replies with an out-of-memory error to commands that would use more memory, such as SET, LPUSH and so on, while continuing to serve read-only commands like GET.
This option is usually useful when using Redis as an LRU cache, or when setting a hard memory limit for an instance (using the "noeviction" policy).
Warning: when replicas are connected to an instance with maxmemory on, the memory needed for the replica output buffers is not counted as used memory. That way a request for an evicted key will not trigger a network-problem / resync loop in which replicas receive a stream of DEL commands until the database is emptied.
In short: if you have replicas attached, set maxmemory somewhat lower so that there is enough system memory left for the replica output buffers and the host does not run out of memory (this does not matter if the policy is "noeviction").
maxmemory <bytes>

How Redis selects what to remove when maxmemory is reached.

You can choose among the following policies:
volatile-lru -> Evict using approximated LRU, only among keys with an expire set.
allkeys-lru -> Evict any key using approximated LRU.
volatile-lfu -> Evict using approximated LFU, only among keys with an expire set.
allkeys-lfu -> Evict any key using approximated LFU.
volatile-random -> Remove a random key among the ones with an expire set.
allkeys-random -> Remove a random key, any key.
volatile-ttl -> Remove the key with the nearest expire time (minor TTL).
noeviction -> Don't evict anything; just return an error on write operations.

LRU means Least Recently Used.
LFU means Least Frequently Used.
Both LRU, LFU and volatile-ttl are implemented using approximated
randomized algorithms.

Note: with any of the above policies, when there are no suitable keys for eviction, Redis returns an error on write operations and keeps serving only read commands like GET.
The write commands include: set setnx setex append incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby getset mset msetnx exec sort
The default is:
maxmemory-policy noeviction

Eviction sample size.
LRU, LFU and minimal-TTL eviction are not exact algorithms but approximated ones (to save memory): Redis checks a small random sample of keys and evicts the best candidate among them, e.g. the key idle for the longest time in the LRU case.
By default Redis checks five keys; you can change the sample size with the following configuration directive. The bigger the sample, the more accurate (but slower) the eviction.
maxmemory-samples 5
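The sampling idea can be sketched as follows (illustrative Python, not Redis source; names are assumptions):

```python
import random

# Illustrative sketch (not Redis source) of approximated LRU eviction:
# sample a few keys at random and evict the one idle the longest.
def pick_eviction_candidate(idle_times, samples=5):
    """idle_times maps key -> seconds since last access."""
    keys = list(idle_times)
    sampled = random.sample(keys, min(samples, len(keys)))
    return max(sampled, key=idle_times.__getitem__)  # longest-idle key

pool = {"a": 5, "b": 120, "c": 30, "d": 999, "e": 1}
print(pick_eviction_candidate(pool))  # with all 5 keys sampled: "d"
```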

Starting from Redis 5, by default a replica will ignore its maxmemory setting
(unless it is promoted to master after a failover or manually). It means
that the eviction of keys will be just handled by the master, sending the
DEL commands to the replica as keys evict in the master side.

This behavior ensures that masters and replicas stay consistent, and is usually
what you want, however if your replica is writable, or you want the replica to have
a different memory setting, and you are sure all the writes performed to the
replica are idempotent, then you may change this default (but be sure to understand
what you are doing).

Note that since the replica by default does not evict, it may end using more
memory than the one set via maxmemory (there are certain buffers that may
be larger on the replica, or data structures may sometimes take more memory and so
forth). So make sure you monitor your replicas and make sure they have enough
memory to never hit a real out-of-memory condition before the master hits
the configured maxmemory setting.

replica-ignore-maxmemory yes

LAZY FREEING

Redis has two primitives to delete keys. One is called DEL and is a blocking
deletion of the object. It means that the server stops processing new commands
in order to reclaim all the memory associated with an object in a synchronous
way. If the key deleted is associated with a small object, the time needed
in order to execute the DEL command is very small and comparable to most other
O(1) or O(log_N) commands in Redis. However if the key is associated with an
aggregated value containing millions of elements, the server can block for
a long time (even seconds) in order to complete the operation.

For the above reasons Redis also offers non blocking deletion primitives
such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
FLUSHDB commands, in order to reclaim memory in background. Those commands
are executed in constant time. Another thread will incrementally free the
object in the background as fast as possible.

DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
It's up to the design of the application to understand when it is a good
idea to use one or the other. However the Redis server sometimes has to
delete keys or flush the whole database as a side effect of other operations.
Specifically Redis deletes objects independently of a user call in the
following scenarios:

  1. On eviction, because of the maxmemory and maxmemory policy configurations,
    in order to make room for new data, without going over the specified
    memory limit.
  2. Because of expire: when a key with an associated time to live (see the
    EXPIRE command) must be deleted from memory.
  3. Because of a side effect of a command that stores data on a key that may
    already exist. For example the RENAME command may delete the old key
    content when it is replaced with another one. Similarly SUNIONSTORE
    or SORT with STORE option may delete existing keys. The SET command
    itself removes any old content of the specified key in order to replace
    it with the specified string.
  4. During replication, when a replica performs a full resynchronization with
    its master, the content of the whole database is removed in order to
    load the RDB file just transferred.

In all the above cases the default is to delete objects in a blocking way,
like if DEL was called. However you can configure each case specifically
in order to instead release memory in a non-blocking way like if UNLINK
was called, using the following configuration directives:

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no

APPEND ONLY MODE

By default Redis persists via RDB snapshots, which is good enough in many applications, but a crash of the Redis host may result in a few minutes of writes lost (depending on the configured save points).
The Append Only File is an alternative persistence mode that provides much better durability: with AOF enabled, Redis appends every write it receives to the appendonly.aof file, and on every startup rebuilds the dataset by replaying that file.
AOF and RDB persistence can be enabled at the same time (to rely on AOF only, comment out all the "save" lines above to disable snapshotting). If AOF is enabled, Redis loads the AOF at startup and ignores the dump.rdb file.
Important: check BGREWRITEAOF to learn how to rewrite the append-only file in the background when it gets too big.
appendonly no

The name of the append-only file (default: "appendonly.aof").
appendfilename "appendonly.aof"

AOF fsync policy.

fsync() tells the operating system to actually write data on disk instead of waiting for more data in the output buffer. Some OSes really flush the data to disk right away; some others just try to do it as soon as possible.
Redis supports three different modes:
no: don't fsync; just let the OS flush the data when it wants. Fastest; the OS is responsible for getting the data to disk.
always: fsync after every write to the append-only log. Slowest, but safest; every write is synced to disk.
everysec: fsync once per second. A compromise; up to one second of writes may be lost.
The default "everysec" is usually the right compromise between speed and data safety.
If you understand the implications, "no" gives better performance (at the risk of ending up with an older snapshot of the data after a crash); on the contrary, "always" trades speed for safety and completeness. If unsure, use "everysec".
appendfsync everysec

When the AOF fsync policy is set to "always" or "everysec", and a background saving process (a background save or AOF log background rewriting) is performing a lot of disk I/O, in some Linux configurations Redis may block too long on the fsync() call.
Note that there is no fix for this currently, as even performing fsync in a different thread will block our synchronous write(2) call.
To mitigate this problem, the following option prevents fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress: new writes are kept in memory and synced once the child finishes.
This means that while another child is saving, the durability of Redis is the same as "appendfsync no". In practical terms, up to 30 seconds of log can be lost in the worst case (with the default Linux fsync settings).
If you have latency problems set this to "yes"; otherwise leave it as "no", which is the safest choice from the point of view of durability.
no-appendfsync-on-rewrite no

Automatic rewrite of the append-only file.

Redis can automatically rewrite the log file, implicitly calling BGREWRITEAOF, when the AOF log grows by the specified percentage; e.g. with a setting of 100, a rewrite triggers when the current AOF file reaches twice the size it had after the last rewrite.
How it works: Redis remembers the size of the AOF file after the latest rewrite (or, if no rewrite has happened since restart, the size of the AOF at startup). This base size is compared to the current size; if the current size exceeds the base by more than the configured percentage, the rewrite is triggered.
You also need to specify a minimal size for the AOF file to be rewritten, which avoids rewriting the file when the percentage increase is reached but the file is still small.
Specify a percentage of zero to disable the automatic AOF rewrite feature.
auto-aof-rewrite-percentage 100
The minimum AOF file size for a rewrite to be triggered; avoids rewriting when the percentage growth is reached but the file is still small.
auto-aof-rewrite-min-size 64mb
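The trigger rule can be sketched as follows (hypothetical Python, not Redis source; the defaults mirror the two directives above):

```python
# Hypothetical sketch of the AOF auto-rewrite trigger rule; the defaults
# mirror auto-aof-rewrite-percentage 100 and auto-aof-rewrite-min-size 64mb.
def should_rewrite(current, base, percentage=100, min_size=64 * 1024 * 1024):
    """Trigger a rewrite when growth over the base size reaches `percentage`
    and the file is at least `min_size` bytes (percentage 0 disables)."""
    if percentage == 0 or current < min_size or base == 0:
        return False
    growth = (current - base) * 100 // base
    return growth >= percentage

MB = 1024 * 1024
print(should_rewrite(current=128 * MB, base=64 * MB))  # True: doubled in size
print(should_rewrite(current=80 * MB, base=64 * MB))   # False: only +25%
```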

An AOF file may be found truncated at the end when Redis starts and the AOF data is loaded back into memory.
This may happen when the host where Redis runs crashes, notably when an ext4 filesystem is mounted without the data=ordered option (it does not happen when Redis itself crashes or aborts but the operating system still works correctly). When it happens, Redis can either exit with an error or load as much data as possible.
If this option is set to yes, a truncated AOF file is loaded and Redis emits a log to inform the user of the event.
If it is set to no, the user must manually fix the AOF file with redis-check-aof before the server will start.
aof-load-truncated yes

When rewriting the AOF file, Redis is able to use an RDB preamble in the
AOF file for faster rewrites and recoveries. When this option is turned
on the rewritten AOF file is composed of two different stanzas:

[RDB file][AOF tail]

When loading Redis recognizes that the AOF file starts with the "REDIS"
string and loads the prefixed RDB file, and continues loading the AOF
tail.
aof-use-rdb-preamble yes

LUA SCRIPTING

Max execution time of a Lua script in milliseconds; zero or a negative value means unlimited execution time. The default is 5000.
When the maximum execution time is reached, Redis logs the event and starts replying to queries with an error. While a script is over the time limit, only SCRIPT KILL and SHUTDOWN NOSAVE are available: the first can stop a script that has not yet called any write command; if the script has already performed writes, SHUTDOWN NOSAVE is the only way to stop it.
lua-time-limit 5000

REDIS CLUSTER

Enable or disable cluster mode; cluster mode is off by default.
cluster-enabled yes

Cluster configuration file name.
Every cluster node has a cluster config file where the cluster state is persisted. This file is not meant to be edited by hand; it is created and updated by Redis. Every Redis Cluster node requires a different cluster config file, so make sure instances running on the same system do not have overlapping config file names.
cluster-config-file nodes-6379.conf

Cluster node timeout: the number of milliseconds a node must be unreachable before it is considered to be failing.
cluster-node-timeout 15000

During a failover, all replicas request promotion to master, but a replica that has been disconnected from its master for too long may hold data that is too stale and should not be promoted. This parameter is used to decide whether a replica's disconnection time from the master is too long. The check is:
compare the replica's disconnection time with (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
With a node timeout of 30 seconds, a validity factor of 10, and the default repl-ping-replica-period of 10 seconds, a replica disconnected for more than 310 seconds will not attempt a failover.
This can leave the cluster unable to operate when a master fails and no replica is eligible for promotion; in that case the cluster becomes available again only when the original master rejoins.
If set to 0, replicas always try to fail over, regardless of how long they were disconnected from the master.
cluster-replica-validity-factor 10

Replica migration barrier: a replica migrates to an orphaned master only if its old master would still be covered by at least this many working replicas. For example, with this parameter set to 2, a replica attempts to migrate only if its master retains 2 working replicas.
cluster-migration-barrier 1

By default the cluster is in the "ok" state, and thus able to serve requests, only when every hash slot is assigned to a reachable node. If this option is set to "no", the cluster keeps serving reads for the subset of keys whose slots are still covered, even while some slots are unassigned. Setting it to "no" is generally not recommended: during a partition, the masters in a minority partition would keep accepting writes, which can leave data inconsistent for a long time.
cluster-require-full-coverage yes

This option, when set to yes, prevents replicas from trying to failover its
master during master failures. However the master can still perform a
manual failover, if forced to do so.

This is useful in different scenarios, especially in the case of multiple
data center operations, where we want one side to never be promoted if not
in the case of a total DC failure.

cluster-replica-no-failover no

In order to setup your cluster make sure to read the documentation
available at http://redis.io web site.

CLUSTER DOCKER/NAT support

In certain deployments, Redis Cluster nodes address discovery fails, because
addresses are NAT-ted or because ports are forwarded (the typical case is
Docker and other containers).

In order to make Redis Cluster working in such environments, a static
configuration where each node knows its public address is needed. The
following two options are used for this scope, and are:

  • cluster-announce-ip
  • cluster-announce-port
  • cluster-announce-bus-port

Each instruct the node about its address, client port, and cluster message
bus port. The information is then published in the header of the bus packets
so that other nodes will be able to correctly map the address of the node
publishing the information.

If the above options are not used, the normal Redis Cluster auto-detection
will be used instead.

Note that when remapped, the bus port may not be at the fixed offset of
clients port + 10000, so you can specify any port and bus-port depending
on how they get remapped. If the bus-port is not set, a fixed offset of
10000 will be used as usually.

Example:

cluster-announce-ip 10.1.1.5
cluster-announce-port 6379
cluster-announce-bus-port 6380

SLOW LOG

The slow log records commands whose execution time exceeded a configured threshold. The execution time does not include I/O operations like talking with the client or sending the reply, but just the time actually needed to execute the command (this is the only stage of command execution where the thread is blocked and cannot serve other requests in the meantime). The slow log is kept in memory, so logging it involves no disk I/O.
You can configure the slow log with two parameters: one is the threshold, in microseconds, that a command's execution time must exceed to get logged; the other is the slow log length. When a new command is logged, the oldest entry is removed from the queue of logged commands.
The threshold below is in microseconds, so 1000000 is equivalent to one second. Note that a negative value disables the slow log, while a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

Slow log length. When a new command is logged, the oldest entry is removed.
There is no hard limit to this length; it just consumes memory. You can reclaim the memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

LATENCY MONITOR

The latency monitor samples Redis operations that are slow at runtime and exposes the data via the LATENCY command, including charts of where the instance spends its time. Only operations taking at least the configured number of milliseconds are recorded; 0 turns monitoring off. The latency monitor is disabled by default, since it is mostly not needed; it can be enabled at runtime with CONFIG SET latency-monitor-threshold <milliseconds>.
latency-monitor-threshold 0

EVENT NOTIFICATION

Keyspace notifications allow clients to subscribe to Pub/Sub channels or patterns in order to receive events about changes to the Redis data set. Since enabling keyspace events has some CPU cost, the feature is disabled by default.
notify-keyspace-events takes as its argument any combination of the following characters, selecting which classes of notifications the server sends:
K Keyspace events, published with a __keyspace@<db>__ prefix
E Keyevent events, published with a __keyevent@<db>__ prefix
g Generic (non type-specific) commands like DEL, EXPIRE, RENAME, ...
$ String commands
l List commands
s Set commands
h Hash commands
z Sorted set commands
x Expired events (generated every time an expired key is deleted)
e Evicted events (generated every time a key is evicted due to the maxmemory policy)
A Alias for "g$lshzxe"
The string should contain at least K or E; otherwise, no matter what the rest of the string is, no events will be delivered. See http://redis.io/topics/notifications for details.
notify-keyspace-events ""

ADVANCED CONFIG

Hashes are encoded using a memory-efficient data structure when they have a small number of entries and the biggest entry does not exceed a given size threshold; this special encoding greatly reduces memory usage.
These two thresholds can be configured using the following directives:
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

Lists are also encoded in a special way to save a lot of space.
The number of entries allowed per internal list node can be specified
as a fixed maximum size or a maximum number of elements.
For a fixed maximum size, use -5 through -1, meaning:
-5: max size: 64 Kb <-- not recommended for normal workloads
-4: max size: 32 Kb <-- not recommended
-3: max size: 16 Kb <-- probably not recommended
-2: max size: 8 Kb <-- good
-1: max size: 4 Kb <-- good
Positive numbers mean store up to exactly that number of elements
per list node.
The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

Lists may also be compressed.
Compress depth is the number of quicklist ziplist nodes from each side of
the list to exclude from compression. The head and tail of the list
are always uncompressed for fast push/pop operations. Settings are:
0: disable all list compression
1: depth 1 means "don't start compressing until after 1 node into the list,
going from either the head or tail"
So: [head]->node->node->...->node->[tail]
[head], [tail] will always be uncompressed; inner nodes will compress.
2: [head]->[next]->node->node->...->node->[prev]->[tail]
2 here means: don't compress head or head->next or tail->prev or tail,
but compress all nodes between them.
3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
etc.
list-compress-depth 0

Sets have a special memory-saving encoding in just one case: when a set is composed only of strings that are base-10 integers in the range of 64-bit signed integers.
The following configuration setting sets the size limit of the set in order to use this special encoding.
set-max-intset-entries 512

Similarly to hashes and lists, sorted sets are also specially encoded in order to save a lot of space. This encoding is only used when the length and elements of a sorted set are below the following limits:
zset-max-ziplist-entries limits the number of entries; zset-max-ziplist-value limits the element size in bytes.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

HyperLogLog sparse representation bytes limit. The limit includes the
16 bytes header. When an HyperLogLog using the sparse representation crosses
this limit, it is converted into the dense representation.

A value greater than 16000 is totally useless, since at that point the
dense representation is more memory efficient.

The suggested value is ~ 3000 in order to have the benefits of
the space efficient encoding without slowing down too much PFADD,
which is O(N) with the sparse encoding. The value can be raised to
~ 10000 when CPU is not a concern, but space is, and the data set is
composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

Streams macro node max size / items. The stream data structure is a radix
tree of big nodes that encode multiple items inside. Using this configuration
it is possible to configure how big a single node can be in bytes, and the
maximum number of items it may contain before switching to a new node when
appending new stream entries. If any of the following settings are set to
zero, the limit is ignored, so for instance it is possible to set just a
max entries limit by setting max-bytes to 0 and max-entries to the desired
value.
stream-node-max-bytes 4096
stream-node-max-entries 100

Active rehashing uses 1 millisecond every 100 milliseconds of CPU time to help rehashing the main Redis hash table (the one mapping top-level keys to values).
The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run against a hash table that is rehashing, the more rehashing "steps" are performed; if the server is idle, the rehashing never completes and some extra memory is used by the hash table.
The default is to use this millisecond 10 times every second in order to actively rehash the main dictionaries and free memory as soon as possible.
If unsure:
use "activerehashing no" if you have hard latency requirements and occasional 2-millisecond delays in replies are not acceptable in your environment;
use "activerehashing yes" if you don't have such hard requirements but want to free memory as soon as possible.
activerehashing yes

The client output buffer limits can be used to force the disconnection of clients that are not reading data from the server fast enough. There are three classes of clients:
normal -> normal clients, including MONITOR clients
replica -> replica clients
pubsub -> clients subscribed to at least one pubsub channel or pattern
The syntax of every client-output-buffer-limit directive is the following (time in seconds):
client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
A client is disconnected immediately once the hard limit is reached, or if the soft limit is reached and remains reached for the specified number of seconds (continuously).
By default normal clients are not limited, because they don't receive data without asking for it (in a request-reply fashion); only asynchronous clients can create a scenario where data is requested faster than it can be read.
There is instead a default limit for pubsub and replica clients, since subscribers and replicas receive data in a push fashion.
Both the hard and the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
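The hard/soft limit rule can be sketched as follows (illustrative Python, not Redis source; function and parameter names are assumptions):

```python
# Illustrative sketch (not Redis source) of the hard/soft output-buffer
# limit rule; times are in seconds, sizes in bytes.
def should_disconnect(buf_bytes, soft_since, now, hard, soft, soft_seconds):
    """soft_since is the time the soft limit was first exceeded, or None."""
    if hard and buf_bytes >= hard:
        return True                              # hard limit: drop at once
    if soft and buf_bytes >= soft and soft_since is not None:
        return now - soft_since >= soft_seconds  # soft limit held too long
    return False

MB = 1024 * 1024
# pubsub class defaults: 32mb hard, 8mb soft for 60 seconds
print(should_disconnect(33 * MB, None, 0, 32 * MB, 8 * MB, 60))  # True
print(should_disconnect(10 * MB, 0, 30, 32 * MB, 8 * MB, 60))    # False
```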

Client query buffers accumulate new commands. They are limited to a fixed
amount by default in order to avoid that a protocol desynchronization (for
instance due to a bug in the client) will lead to unbound memory usage in
the query buffer. However you can configure it here if you have very special
needs, such as huge multi/exec requests or alike.

client-query-buffer-limit 1gb

In the Redis protocol, bulk requests, that are, elements representing single
strings, are normally limited to 512 mb. However you can change this limit
here.

proto-max-bulk-len 512mb

Set the frequency of Redis background tasks, such as evicting expired keys.
The range is 1 to 500; the default is 10. A higher value uses more CPU but reduces latency; values above 100 are not advised.
hz 10

Normally it is useful to have an HZ value which is proportional to the
number of clients connected. This is useful in order, for instance, to
avoid too many clients are processed for each background task invocation
in order to avoid latency spikes.

Since the default HZ value by default is conservatively set to 10, Redis
offers, and enables by default, the ability to use an adaptive HZ value
which will temporary raise when there are many connected clients.

When dynamic HZ is enabled, the actual configured HZ will be used as
as a baseline, but multiples of the configured HZ value will be actually
used as needed once more clients are connected. In this way an idle
instance will use very little CPU time while a busy instance will be
more responsive.
dynamic-hz yes

When a child process rewrites the AOF file, if the following option is enabled, the file is fsync-ed every 32 MB of data generated. This helps commit the file to disk more incrementally and avoids big latency spikes.
aof-rewrite-incremental-fsync yes

When redis saves RDB file, if the following option is enabled
the file will be fsync-ed every 32 MB of data generated. This is useful
in order to commit the file to the disk more incrementally and avoid
big latency spikes.
rdb-save-incremental-fsync yes

Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
idea to start with the default settings and only change them after investigating
how to improve the performances and how the keys LFU change over time, which
is possible to inspect via the OBJECT FREQ command.

There are two tunable parameters in the Redis LFU implementation: the
counter logarithm factor and the counter decay time. It is important to
understand what the two parameters mean before changing them.

The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
uses a probabilistic increment with logarithmic behavior. Given the value
of the old counter, when a key is accessed, the counter is incremented in
this way:

  1. A random number R between 0 and 1 is extracted.
  2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
  3. The counter is incremented only if R < P.
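The three steps above can be sketched as follows (a simplified illustration, not the actual Redis implementation; the function name is an assumption):

```python
import random

# Sketch of the probabilistic counter increment in steps 1-3 above
# (simplified illustration, not the actual Redis implementation).
def lfu_incr(counter, lfu_log_factor=10):
    if counter >= 255:
        return 255                                # 8-bit counter saturates
    r = random.random()                           # 1. random R in [0, 1)
    p = 1.0 / (counter * lfu_log_factor + 1)      # 2. P = 1/(old*factor+1)
    return counter + 1 if r < p else counter      # 3. increment only if R < P

random.seed(42)
c = 5                       # new keys start with a counter of 5
for _ in range(100_000):
    c = lfu_incr(c)
print(c)                    # stays small: growth is logarithmic in hits
```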

The default lfu-log-factor is 10. This is a table of how the frequency
counter changes with a different number of accesses with different
logarithmic factors:

+--------+------------+------------+------------+------------+------------+
| factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
+--------+------------+------------+------------+------------+------------+
| 0 | 104 | 255 | 255 | 255 | 255 |
+--------+------------+------------+------------+------------+------------+
| 1 | 18 | 49 | 255 | 255 | 255 |
+--------+------------+------------+------------+------------+------------+
| 10 | 10 | 18 | 142 | 255 | 255 |
+--------+------------+------------+------------+------------+------------+
| 100 | 8 | 11 | 49 | 143 | 255 |
+--------+------------+------------+------------+------------+------------+

NOTE: The above table was obtained by running the following commands:

redis-benchmark -n 1000000 incr foo
redis-cli object freq foo

NOTE 2: The counter initial value is 5 in order to give new objects a chance
to accumulate hits.

The counter decay time is the time, in minutes, that must elapse in order
for the key counter to be divided by two (or decremented if it has a value
less <= 10).

The default value for the lfu-decay-time is 1. A special value of 0 means to
decay the counter every time it happens to be scanned.

lfu-log-factor 10
lfu-decay-time 1

ACTIVE DEFRAGMENTATION

WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
even in production and manually tested by multiple engineers for some
time.

What is active defragmentation?

Active (online) defragmentation allows a Redis server to compact the
spaces left between small allocations and deallocations of data in memory,
thus allowing to reclaim back memory.

Fragmentation is a natural process that happens with every allocator (but
less so with Jemalloc, fortunately) and certain workloads. Normally a server
restart is needed in order to lower the fragmentation, or at least to flush
away all the data and create it again. However thanks to this feature
implemented by Oran Agra for Redis 4.0 this process can happen at runtime
in an "hot" way, while the server is running.

Basically when the fragmentation is over a certain level (see the
configuration options below) Redis will start to create new copies of the
values in contiguous memory regions by exploiting certain specific Jemalloc
features (in order to understand if an allocation is causing fragmentation
and to allocate it in a better place), and at the same time, will release the
old copies of the data. This process, repeated incrementally for all the keys
will cause the fragmentation to drop back to normal values.

Important things to understand:

  1. This feature is disabled by default, and only works if you compiled Redis
    to use the copy of Jemalloc we ship with the source code of Redis.
    This is the default with Linux builds.

  2. You never need to enable this feature if you don't have fragmentation
    issues.

  3. Once you experience fragmentation, you can enable this feature when
    needed with the command "CONFIG SET activedefrag yes".

The configuration parameters are able to fine tune the behavior of the
defragmentation process. If you are not sure about what they mean it is
a good idea to leave the defaults untouched.

Enable active defragmentation
activedefrag yes

Minimum amount of fragmentation waste to start active defrag
active-defrag-ignore-bytes 100mb

Minimum percentage of fragmentation to start active defrag
active-defrag-threshold-lower 10

Maximum percentage of fragmentation at which we use maximum effort
active-defrag-threshold-upper 100

Minimal effort for defrag in CPU percentage
active-defrag-cycle-min 5

Maximal effort for defrag in CPU percentage
active-defrag-cycle-max 75

Maximum number of set/hash/zset/list fields that will be processed from
the main dictionary scan
active-defrag-max-scan-fields 1000