KEYS pattern — return all keys matching the given pattern.
Example:
127.0.0.1:6379> keys *
1) "zset1"
2) "list:1"
3) "s1"
4) "s2"
5) "newlist"
6) "s3"
7) "set2"
8) "set1"
EXISTS key — check whether a key exists.
As the example shows, the key set8 does not exist in the database, while set1 does.
Example:
127.0.0.1:6379> exists set8
(integer) 0
127.0.0.1:6379> exists set1
(integer) 1
DEL key — delete a key.
Example:
127.0.0.1:6379> del s3
(integer) 1
127.0.0.1:6379> exists s3
(integer) 0
127.0.0.1:6379>
RENAME key newkey — rename a key.
As the example shows, s1 was successfully renamed to s1_new.
Example:
127.0.0.1:6379> keys *
1) "zset1"
2) "list:1"
3) "s1"
4) "s2"
5) "newlist"
6) "set2"
7) "set1"
127.0.0.1:6379> rename s1 s1_new
OK
127.0.0.1:6379> keys *
1) "zset1"
2) "list:1"
3) "s2"
4) "newlist"
5) "set2"
6) "s1_new"
7) "set1"
TYPE key — return the type of the value stored at a key.
As the example shows, this command makes it easy to tell what type a value is.
Example:
127.0.0.1:6379> keys *
1) "zset1"
2) "list:1"
3) "s2"
4) "newlist"
5) "set2"
6) "s1_new"
7) "set1"
127.0.0.1:6379> type s2
string
127.0.0.1:6379> type set1
set
127.0.0.1:6379> type zset1
zset
127.0.0.1:6379> type list:1
list
EXPIRE key seconds — set a key's time to live in seconds; the key is deleted automatically after that many seconds.
TTL key — check the remaining time to live of a key.
PERSIST key — remove the time to live from a key.
PEXPIRE key milliseconds — set the time to live in milliseconds.
Example:
127.0.0.1:6379> keys *
1) "zset1"
2) "list:1"
3) "s2"
4) "newlist"
5) "set2"
6) "s1_new"
7) "set1"
127.0.0.1:6379> expire s2 10        (set the TTL of s2 to 10 seconds)
(integer) 1
127.0.0.1:6379> ttl s2              (check the remaining TTL of s2: 4 seconds until deletion)
(integer) 4
127.0.0.1:6379> ttl s2
(integer) -2
127.0.0.1:6379> get s2              (get the value of s2; the key has already been deleted)
(nil)
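The EXPIRE/TTL semantics in the transcript above can be sketched with a plain in-memory map and a manually advanced clock. This is an illustration of the semantics only, not Redis itself; the class and method names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory model of SET/EXPIRE/TTL/GET semantics (illustration, not Redis):
// TTL returns the remaining seconds, -1 when no expiry is set, -2 when the key is gone.
public class TtlStore {
    private final Map<String, String> data = new HashMap<>();
    private final Map<String, Long> deadline = new HashMap<>(); // key -> expiry time (seconds)
    private long now = 0; // fake clock, advanced manually so the example is deterministic

    public void tick(long seconds) { now += seconds; }

    public void set(String key, String value) { data.put(key, value); deadline.remove(key); }

    public void expire(String key, long seconds) {
        if (data.containsKey(key)) deadline.put(key, now + seconds);
    }

    private void purgeIfExpired(String key) {
        Long dl = deadline.get(key);
        if (dl != null && now >= dl) { data.remove(key); deadline.remove(key); }
    }

    public long ttl(String key) {
        purgeIfExpired(key);
        if (!data.containsKey(key)) return -2;   // key no longer exists
        Long dl = deadline.get(key);
        return dl == null ? -1 : dl - now;       // -1: key exists but has no expiry
    }

    public String get(String key) { purgeIfExpired(key); return data.get(key); } // null ~ (nil)

    public static void main(String[] args) {
        TtlStore s = new TtlStore();
        s.set("s2", "hello");
        s.expire("s2", 10);              // like: expire s2 10
        s.tick(6);
        System.out.println(s.ttl("s2")); // 4 seconds left, as in the transcript
        s.tick(5);
        System.out.println(s.ttl("s2")); // -2: the key expired and was removed
        System.out.println(s.get("s2")); // null, where redis-cli would print (nil)
    }
}
```

The -2 return value mirrors what the transcript shows once the 10 seconds have elapsed.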
RDB persistence works through snapshotting: when certain conditions are met, Redis automatically snapshots the data in memory and persists it to disk. RDB is the persistence method Redis uses by default.
The snapshot conditions can be changed in redis.conf as follows.
Details:
Each line starting with save defines one snapshot condition; multiple conditions can be configured (one per line) and they are combined with OR semantics.
save 900 1 — snapshot if at least 1 key has changed within 15 minutes (900 seconds).
save 300 10 — snapshot if at least 10 keys have changed within 5 minutes (300 seconds).
save 60 10000 — snapshot if at least 10000 keys have changed within 1 minute (60 seconds).
The location of the persistence file can also be specified in redis.conf as follows.
Details:
dbfilename dump.rdb — the snapshot (persistence) file is named dump.rdb.
dir ./ — the snapshot (persistence) file is stored in the current directory.
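The screenshots referenced above are not reproduced here; the corresponding redis.conf fragment (with the default values described in the text) looks like this:

```
save 900 1
save 300 10
save 60 10000

dbfilename dump.rdb
dir ./
```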
When Redis starts, it reads the RDB snapshot file and loads the data from disk into memory. How long this takes depends on the amount and structure of the data and on server performance.
Loading a 1 GB snapshot file holding ten million string keys into memory typically takes 20 to 30 seconds.
With RDB persistence, if Redis exits abnormally (an unclean shutdown), all data changed since the last snapshot is lost. Developers therefore need to combine automatic snapshot conditions suited to the application so that potential data loss stays within an acceptable range. In short: after an unclean shutdown, Redis loses whatever was written after the last snapshot. If such loss is tolerable, RDB is enough; if the data is so important that no loss can be accepted, use AOF persistence instead.
Details:
By default, Redis does not enable AOF (append only file) persistence.
It can be enabled via the appendonly parameter in redis.conf:
appendonly yes
Once AOF persistence is enabled, every command that changes data in Redis is also written to the AOF file on disk.
The AOF file is saved in the same location as the RDB file, set by the dir parameter.
dir ./
The default file name is appendonly.aof and can be changed with the appendfilename parameter:
appendfilename appendonly.aof
When both AOF and RDB are enabled and Redis restarts, the data is loaded from the AOF file.
A damaged disk can cause data loss; Redis's master-slave replication mechanism avoids this single point of failure. The configuration shown below means that this slave Redis server's master has IP 192.168.5.128 and port 6379.
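The screenshot is not reproduced here; on the Redis version used in this tutorial (pre-5.0), the slave's redis.conf carries a line like the following (IP and port taken from the text above):

```
slaveof 192.168.5.128 6379
```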
Details:
Architecture notes:
(1) All Redis nodes are interconnected (PING-PONG mechanism) and use a binary protocol internally to optimize transfer speed and bandwidth.
(2) A node is considered failed only when more than half of the nodes in the cluster detect the failure; for this reason the number of master nodes in a cluster should be odd.
(3) Clients connect directly to Redis nodes, with no proxy layer in between; a client does not need to connect to every node in the cluster, any reachable node will do.
(4) redis-cluster maps all physical nodes onto the [0-16383] slots; the cluster maintains the node <-> slot <-> value mapping.
A Redis cluster has 16384 built-in hash slots. To place a key-value pair, Redis first runs the CRC16 algorithm on the key and takes the result modulo 16384, so every key maps to a hash slot numbered 0-16383; Redis then distributes the hash slots roughly evenly across the nodes.
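The key-to-slot mapping just described can be sketched in a few lines. This is a minimal illustration that ignores hash tags (keys containing "{...}"); the CRC16 variant Redis uses is CCITT/XModem (polynomial 0x1021, initial value 0), and "123456789" is the reference vector given in the cluster specification:

```java
// slot = CRC16(key) mod 16384, with CRC16 = CRC16-CCITT (XModem).
// Hash tags are ignored in this sketch; the real implementation uses a lookup table.
public class Crc16Slot {

    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF; // keep it a 16-bit value
            }
        }
        return crc;
    }

    static int slot(String key) {
        return crc16(key.getBytes()) % 16384;
    }

    public static void main(String[] args) {
        // CRC16("123456789") = 0x31C3 = 12739, so the key lands in slot 12739
        System.out.println(slot("123456789")); // prints 12739
    }
}
```

Because 12739 falls in the 10923-16383 range, this key would be served by the third master in the cluster built below.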
Details:
(1) All masters in the cluster take part in the vote: if more than half of the master nodes cannot communicate with a given master for longer than cluster-node-timeout, that master is considered down.
(2) When does the whole cluster become unavailable (cluster_state:fail)?
1. If any master goes down and it has no slave, the cluster enters the fail state. You can also think of this as: the cluster fails whenever the [0-16383] slot mapping becomes incomplete.
2. If more than half of the masters go down, the cluster enters the fail state whether or not they have slaves.
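The two fail conditions can be condensed into a single predicate. This is a hedged sketch of the rules as stated above, not Redis source code; the function name and parameters are invented for the example:

```java
// The cluster is down if any hash slot is uncovered (a master died with no
// slave to take over), or if more than half of the master nodes are down.
public class ClusterState {

    static boolean clusterDown(int totalMasters, int mastersDown, boolean allSlotsCovered) {
        return !allSlotsCovered || mastersDown * 2 > totalMasters;
    }

    public static void main(String[] args) {
        // 3 masters: losing one whose slots a slave took over -> cluster still ok
        System.out.println(clusterDown(3, 1, true));  // false
        // losing a master with no slave leaves slots uncovered -> fail
        System.out.println(clusterDown(3, 1, false)); // true
        // losing a majority of masters -> fail regardless of slaves
        System.out.println(clusterDown(3, 2, true));  // true
    }
}
```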
redis-trib.rb depends on a Ruby environment, so install Ruby first. Then copy redis-trib.rb into the /usr/local/redis/redis-cluster directory (note: create the redis-cluster directory under /usr/local/redis/ first):
[root@itheima ~]# cd /root/redis-3.0.0/src/
[root@itheima src]# ll *rb
-rwxr-xr-x. 1 root root 48141 11月  3 19:30 redis-trib.rb
[root@itheima src]# cp redis-trib.rb /usr/local/redis/redis-cluster
Addendum 1: write a script file shutdown-all.sh to shut down all the nodes in one go:
[root@itheima redis-cluster]# vim shutdown-all.sh
Addendum 2: write a script file delete-aof-rdb-nodes.sh to delete the persistence files and node configuration files in one go:
[root@itheima redis-cluster]# vim delete-aof-rdb-nodes.sh
Note: the custom script files must be granted execute permission (e.g. chmod u+x) before they can be run!
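The contents of the two scripts were shown in screenshots that are not reproduced here. The following is a hedged sketch of what they might look like; the ports 7001-7006, the per-port directories, and the nodes.conf file name are assumptions based on this tutorial's layout, and the dry-run argument `echo` prints each command instead of executing it:

```shell
shutdown_all() {                 # body of shutdown-all.sh
  local run="$1"
  for port in 7001 7002 7003 7004 7005 7006; do
    "$run" ./redis-cli -p "$port" shutdown nosave
  done
}

delete_aof_rdb_nodes() {         # body of delete-aof-rdb-nodes.sh
  local run="$1"
  for port in 7001 7002 7003 7004 7005 7006; do
    "$run" rm -f "$port/dump.rdb" "$port/appendonly.aof" "$port/nodes.conf"
  done
}

shutdown_all echo                # dry run: print the commands instead of running them
delete_aof_rdb_nodes echo        # dry run
```

Once the paths are verified against your own setup, drop the `echo` argument (or inline the loop bodies into the real script files) to execute the commands.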
[root@itheima redis-cluster]# ./redis-trib.rb create --replicas 1 192.168.5.128:7001 192.168.5.128:7002 192.168.5.128:7003 192.168.5.128:7004 192.168.5.128:7005 192.168.5.128:7006
>>> Creating cluster
Connecting to node 192.168.5.128:7001: OK
Connecting to node 192.168.5.128:7002: OK
Connecting to node 192.168.5.128:7003: OK
Connecting to node 192.168.5.128:7004: OK
Connecting to node 192.168.5.128:7005: OK
Connecting to node 192.168.5.128:7006: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.5.128:7001
192.168.5.128:7002
192.168.5.128:7003
Adding replica 192.168.5.128:7004 to 192.168.5.128:7001
Adding replica 192.168.5.128:7005 to 192.168.5.128:7002
Adding replica 192.168.5.128:7006 to 192.168.5.128:7003
M: 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7 192.168.5.128:7001
slots:0-5460 (5461 slots) master
M: 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d 192.168.5.128:7002
slots:5461-10922 (5462 slots) master
M: d7d28caadfdc4161261305f2d2baf55d2d8f4221 192.168.5.128:7003
slots:10923-16383 (5461 slots) master
S: f124b72c0421c7514f44668d30761d075e42643d 192.168.5.128:7004
replicates 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7
S: 8cf60c085a58b60557a887a5e8451ce38e6b54fa 192.168.5.128:7005
replicates 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d
S: 05ad0eb9b5f839771b09dc18192909d5fa1f893e 192.168.5.128:7006
replicates d7d28caadfdc4161261305f2d2baf55d2d8f4221
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 192.168.5.128:7001)
M: 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7 192.168.5.128:7001
slots:0-5460 (5461 slots) master
M: 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d 192.168.5.128:7002
slots:5461-10922 (5462 slots) master
M: d7d28caadfdc4161261305f2d2baf55d2d8f4221 192.168.5.128:7003
slots:10923-16383 (5461 slots) master
M: f124b72c0421c7514f44668d30761d075e42643d 192.168.5.128:7004
slots: (0 slots) master
replicates 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7
M: 8cf60c085a58b60557a887a5e8451ce38e6b54fa 192.168.5.128:7005
slots: (0 slots) master
replicates 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d
M: 05ad0eb9b5f839771b09dc18192909d5fa1f893e 192.168.5.128:7006
slots: (0 slots) master
replicates d7d28caadfdc4161261305f2d2baf55d2d8f4221
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@itheima redis-cluster]#
[root@itheima 7001]# ./redis-cli -h 192.168.5.128 -p 7001 -c        (-c: connect in cluster mode)
192.168.5.128:7002> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:2
cluster_stats_messages_sent:1837
cluster_stats_messages_received:1837
192.168.5.128:7002>
192.168.5.128:7002> cluster nodes
d7d28caadfdc4161261305f2d2baf55d2d8f4221 192.168.5.128:7003 master - 0 1541263366376 3 connected 10923-16383
f124b72c0421c7514f44668d30761d075e42643d 192.168.5.128:7004 slave 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7 0 1541263372425 4 connected
05ad0eb9b5f839771b09dc18192909d5fa1f893e 192.168.5.128:7006 slave d7d28caadfdc4161261305f2d2baf55d2d8f4221 0 1541263371417 6 connected
3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7 192.168.5.128:7001 master - 0 1541263370402 1 connected 0-5460
8cf60c085a58b60557a887a5e8451ce38e6b54fa 192.168.5.128:7005 slave 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d 0 1541263369396 5 connected
27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d 192.168.5.128:7002 myself,master - 0 0 2 connected 5461-10922
192.168.5.128:7002>
[root@itheima redis-cluster]# ./redis-trib.rb add-node 192.168.5.128:7007 192.168.5.128:7001
>>> Adding node 192.168.5.128:7007 to cluster 192.168.5.128:7001
Connecting to node 192.168.5.128:7001: OK
Connecting to node 192.168.5.128:7006: OK
Connecting to node 192.168.5.128:7004: OK
Connecting to node 192.168.5.128:7002: OK
Connecting to node 192.168.5.128:7005: OK
Connecting to node 192.168.5.128:7003: OK
>>> Performing Cluster Check (using node 192.168.5.128:7001)
M: 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7 192.168.5.128:7001
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 05ad0eb9b5f839771b09dc18192909d5fa1f893e 192.168.5.128:7006
slots: (0 slots) slave
replicates d7d28caadfdc4161261305f2d2baf55d2d8f4221
S: f124b72c0421c7514f44668d30761d075e42643d 192.168.5.128:7004
slots: (0 slots) slave
replicates 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7
M: 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d 192.168.5.128:7002
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 8cf60c085a58b60557a887a5e8451ce38e6b54fa 192.168.5.128:7005
slots: (0 slots) slave
replicates 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d
M: d7d28caadfdc4161261305f2d2baf55d2d8f4221 192.168.5.128:7003
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to node 192.168.5.128:7007: OK
>>> Send CLUSTER MEET to node 192.168.5.128:7007 to make it join the cluster.
[OK] New node added correctly.
[root@itheima redis-cluster]#
Step 5: type yes to start moving the slots to the target node id (the resharding is started with ./redis-trib.rb reshard followed by a host:port of the cluster; its interactive output is not reproduced here).
Step 6: check the slots assigned to master node 7007 after the reshard:
[root@itheima redis-cluster]# ./redis-trib.rb add-node --slave --master-id 26630461d8a63a7398e3f43b7366014c72a6a7ef 192.168.5.128:7008 192.168.5.128:7007
>>> Adding node 192.168.5.128:7008 to cluster 192.168.5.128:7007
Connecting to node 192.168.5.128:7007: OK
Connecting to node 192.168.5.128:7001: OK
Connecting to node 192.168.5.128:7003: OK
Connecting to node 192.168.5.128:7005: OK
Connecting to node 192.168.5.128:7004: OK
Connecting to node 192.168.5.128:7006: OK
Connecting to node 192.168.5.128:7002: OK
>>> Performing Cluster Check (using node 192.168.5.128:7007)
M: 26630461d8a63a7398e3f43b7366014c72a6a7ef 192.168.5.128:7007
slots:0-165,5461-5627,10923-11088 (499 slots) master
0 additional replica(s)
M: 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7 192.168.5.128:7001
slots:166-5460 (5295 slots) master
1 additional replica(s)
M: d7d28caadfdc4161261305f2d2baf55d2d8f4221 192.168.5.128:7003
slots:11089-16383 (5295 slots) master
1 additional replica(s)
S: 8cf60c085a58b60557a887a5e8451ce38e6b54fa 192.168.5.128:7005
slots: (0 slots) slave
replicates 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d
S: f124b72c0421c7514f44668d30761d075e42643d 192.168.5.128:7004
slots: (0 slots) slave
replicates 3cbed89c47ca14b3d1eb11dd2f7525fa6cb4fcd7
S: 05ad0eb9b5f839771b09dc18192909d5fa1f893e 192.168.5.128:7006
slots: (0 slots) slave
replicates d7d28caadfdc4161261305f2d2baf55d2d8f4221
M: 27d069e1b4d459a92b6cc9a1c92ad46ea00cb61d 192.168.5.128:7002
slots:5628-10922 (5295 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to node 192.168.5.128:7008: OK
>>> Send CLUSTER MEET to node 192.168.5.128:7008 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.5.128:7007.
[OK] New node added correctly.
Note: if the target node is not empty (it already knows other nodes, or holds keys in database 0), add-node fails with an error like:
[ERR] Node XXXXXX is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0
Example — deleting a slave node:
[root@itheima redis-cluster]# ./redis-trib.rb del-node 192.168.5.128:7005 8cf60c085a58b60557a887a5e8451ce38e6b54fa
>>> Removing node 8cf60c085a58b60557a887a5e8451ce38e6b54fa from cluster 192.168.5.128:7005
Connecting to node 192.168.5.128:7005: OK
Connecting to node 192.168.5.128:7008: OK
Connecting to node 192.168.5.128:7004: OK
Connecting to node 192.168.5.128:7007: OK
Connecting to node 192.168.5.128:7006: OK
Connecting to node 192.168.5.128:7003: OK
Connecting to node 192.168.5.128:7001: OK
Connecting to node 192.168.5.128:7002: OK
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@itheima redis-cluster]#
Deleting a master node (a master that still holds slots cannot be removed; reshard its slots away first, as the error below shows):
[root@itheima redis-cluster]# ./redis-trib.rb del-node 192.168.5.128:7003 d7d28caadfdc4161261305f2d2baf55d2d8f4221
>>> Removing node d7d28caadfdc4161261305f2d2baf55d2d8f4221 from cluster 192.168.5.128:7003
Connecting to node 192.168.5.128:7003: OK
Connecting to node 192.168.5.128:7001: OK
Connecting to node 192.168.5.128:7008: OK
Connecting to node 192.168.5.128:7007: OK
Connecting to node 192.168.5.128:7004: OK
Connecting to node 192.168.5.128:7002: OK
Connecting to node 192.168.5.128:7006: OK
[ERR] Node 192.168.5.128:7003 is not empty! Reshard data away and try again.
[root@itheima redis-cluster]#
/**
 * Jedis connecting to a Redis cluster.
 */
@Test
public void testJedisCluster1() {
    Set<HostAndPort> nodes = new HashSet<>();
    nodes.add(new HostAndPort("192.168.5.128", 7001));
    nodes.add(new HostAndPort("192.168.5.128", 7002));
    nodes.add(new HostAndPort("192.168.5.128", 7003));
    nodes.add(new HostAndPort("192.168.5.128", 7004));
    nodes.add(new HostAndPort("192.168.5.128", 7005));
    nodes.add(new HostAndPort("192.168.5.128", 7006));
    // Create the JedisCluster object; it should exist as a singleton in the application
    JedisCluster jedisCluster = new JedisCluster(nodes);
    // JedisCluster methods correspond one-to-one to Redis commands
    jedisCluster.set("cluster-test", "Jedis connects Redis cluster test");
    String result = jedisCluster.get("cluster-test");
    System.out.println(result);
    // Close the JedisCluster object when the program ends
    jedisCluster.close();
}
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:mvc="http://www.springframework.org/schema/mvc"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
http://www.springframework.org/schema/mvc
http://www.springframework.org/schema/mvc/spring-mvc-3.2.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.2.xsd
http://www.springframework.org/schema/aop
http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
http://www.springframework.org/schema/tx
http://www.springframework.org/schema/tx/spring-tx-3.2.xsd ">
<!-- Connection pool configuration -->
<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
<!-- Maximum number of connections -->
<property name="maxTotal" value="30" />
<!-- Maximum number of idle connections -->
<property name="maxIdle" value="10" />
<!-- Maximum number of connections checked per eviction run -->
<property name="numTestsPerEvictionRun" value="1024" />
<!-- Interval between eviction scans (milliseconds) -->
<property name="timeBetweenEvictionRunsMillis" value="30000" />
<!-- Minimum idle time before a connection may be evicted -->
<property name="minEvictableIdleTimeMillis" value="1800000" />
<!-- Evict a connection idle longer than this while more than minIdle connections remain idle -->
<property name="softMinEvictableIdleTimeMillis" value="10000" />
<!-- Maximum wait in milliseconds when borrowing a connection; negative blocks indefinitely (default -1) -->
<property name="maxWaitMillis" value="1500" />
<!-- Validate connections on borrow (default false) -->
<property name="testOnBorrow" value="true" />
<!-- Validate connections while idle (default false) -->
<property name="testWhileIdle" value="true" />
<!-- Whether to block when the pool is exhausted: false throws an exception, true blocks until timeout (default true) -->
<property name="blockWhenExhausted" value="false" />
</bean>
<!-- Redis cluster -->
<bean id="jedisCluster" class="redis.clients.jedis.JedisCluster">
<constructor-arg index="0">
<set>
<bean class="redis.clients.jedis.HostAndPort">
<constructor-arg index="0" value="192.168.5.128"></constructor-arg>
<constructor-arg index="1" value="7001"></constructor-arg>
</bean>
<bean class="redis.clients.jedis.HostAndPort">
<constructor-arg index="0" value="192.168.5.128"></constructor-arg>
<constructor-arg index="1" value="7002"></constructor-arg>
</bean>
<bean class="redis.clients.jedis.HostAndPort">
<constructor-arg index="0" value="192.168.5.128"></constructor-arg>
<constructor-arg index="1" value="7003"></constructor-arg>
</bean>
<bean class="redis.clients.jedis.HostAndPort">
<constructor-arg index="0" value="192.168.5.128"></constructor-arg>
<constructor-arg index="1" value="7004"></constructor-arg>
</bean>
<bean class="redis.clients.jedis.HostAndPort">
<constructor-arg index="0" value="192.168.5.128"></constructor-arg>
<constructor-arg index="1" value="7005"></constructor-arg>
</bean>
<bean class="redis.clients.jedis.HostAndPort">
<constructor-arg index="0" value="192.168.5.128"></constructor-arg>
<constructor-arg index="1" value="7006"></constructor-arg>
</bean>
</set>
</constructor-arg>
<constructor-arg index="1" ref="jedisPoolConfig"></constructor-arg>
</bean>
</beans>
private ApplicationContext applicationContext;

@Before
public void init() {
    applicationContext = new ClassPathXmlApplicationContext("classpath:applicationContext.xml");
}

/**
 * Jedis connecting to a Redis cluster via Spring.
 */
@Test
public void testJedisCluster2() {
    JedisCluster jedisCluster = (JedisCluster) applicationContext.getBean("jedisCluster");
    jedisCluster.set("name", "xiaoyi");
    String value = jedisCluster.get("name");
    System.out.println(value);
}