webman session management - config files - building a Redis cluster (multiple ports on a single host)
Published 2022-01-29 11:37:20 · Editor: 雪饮
Theory:
A Redis cluster uses a P2P model and is fully decentralized: there is no central node and no proxy node.
A Redis cluster has no single entry point. A client can connect to any node in the cluster; the nodes communicate with each other internally (a PING-PONG gossip mechanism), and every node is a full Redis instance.
To keep the cluster highly available, redis-cluster needs to decide whether a node is healthy (usable), and it does so with a voting-based fault-tolerance mechanism: if more than half of the nodes in the cluster vote that a given node is down, that node is marked as failed (fail). This is how node failure is determined.
Then how is cluster failure determined? If any node goes down and it has no replica (backup node), the whole cluster is considered down.
Why does losing a single node (one without a replica) bring down the whole cluster? Because the cluster defines 16384 hash slots, numbered [0-16383], and maps all physical nodes onto them; in other words, the slots are divided up among the nodes. When a key-value pair is stored in the cluster, Redis first runs the CRC16 algorithm over the key, then takes the result modulo 16384. The remainder falls into one of the slots [0-16383], which in turn determines which node the key-value pair is stored on. So once a node goes down, the slots it owns become unusable, and the cluster can no longer work normally.
In summary, a Redis cluster can in theory hold at most 16384 (master) nodes.
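The key-to-slot mapping described above can be sketched in a few lines of bash. This is a simplified sketch: real Redis also honors {hash tags} inside keys, which this version ignores.

```shell
#!/bin/bash
# CRC-16/XMODEM (poly 0x1021, init 0x0000) - the CRC16 variant Redis Cluster
# uses - followed by modulo 16384 to get the hash slot for a key.
crc16() {
  local s=$1 crc=0 i c b
  for ((i = 0; i < ${#s}; i++)); do
    printf -v c '%d' "'${s:i:1}"           # ASCII code of the current character
    crc=$(( (crc ^ (c << 8)) & 0xFFFF ))
    for ((b = 0; b < 8; b++)); do
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo $(( crc % 16384 ))
}
crc16 "123456789"   # CRC-16/XMODEM check value 0x31C3 = 12739 -> slot 12739
crc16 "foo"         # slot 12182, same as CLUSTER KEYSLOT foo
```

Slot 12182 falls in the 10923-16383 range, so in the cluster built below the key foo lands on the 6381 master.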
Write a convenience script that starts multiple Redis instances:
start0_5.sh:
#!/bin/bash
/www/server/redis/src/redis-server --port 6379 --dbfilename dump6379.rdb --cluster-enabled yes --cluster-config-file node_6379 --protected-mode no --bind 0.0.0.0 &
/www/server/redis/src/redis-server --port 6380 --dbfilename dump6380.rdb --cluster-enabled yes --cluster-config-file node_6380 --protected-mode no --bind 0.0.0.0 &
/www/server/redis/src/redis-server --port 6381 --dbfilename dump6381.rdb --cluster-enabled yes --cluster-config-file node_6381 --protected-mode no --bind 0.0.0.0 &
/www/server/redis/src/redis-server --port 6382 --dbfilename dump6382.rdb --cluster-enabled yes --cluster-config-file node_6382 --protected-mode no --bind 0.0.0.0 &
/www/server/redis/src/redis-server --port 6383 --dbfilename dump6383.rdb --cluster-enabled yes --cluster-config-file node_6383 --protected-mode no --bind 0.0.0.0 &
/www/server/redis/src/redis-server --port 6384 --dbfilename dump6384.rdb --cluster-enabled yes --cluster-config-file node_6384 --protected-mode no --bind 0.0.0.0 &
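The six near-identical lines above can also be generated from a loop. In this sketch, start_cmd only prints each command (so it can be inspected first); drop the echo and append & to actually launch the instances. The /www/server/redis path is taken from the script above.

```shell
#!/bin/bash
# Print the startup command for each of the six cluster ports (6379-6384),
# with the same flags as start0_5.sh.
start_cmd() {
  local port=$1
  echo /www/server/redis/src/redis-server --port "$port" \
    --dbfilename "dump${port}.rdb" \
    --cluster-enabled yes \
    --cluster-config-file "node_${port}" \
    --protected-mode no --bind 0.0.0.0
}
for port in 6379 6380 6381 6382 6383 6384; do
  start_cmd "$port"
done
```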
Run the multi-instance startup script:
[root@localhost webman]# ./start0_5.sh
-bash: ./start0_5.sh: /bin/bash^M: bad interpreter: No such file or directory
The ^M indicates the script was saved with Windows-style (CRLF) line endings. If this error appears, open the script in vi and, from command mode, run these two commands:
:set ff=unix
:x
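The carriage returns can also be stripped from the command line with sed (or dos2unix, if installed). The /tmp/demo.sh file below is just a stand-in for the damaged script:

```shell
#!/bin/bash
# Simulate a script saved with Windows (CRLF) line endings.
printf '#!/bin/bash\r\necho ok\r\n' > /tmp/demo.sh
# Strip the trailing CR from every line, turning CRLF into plain LF.
# "dos2unix /tmp/demo.sh" would have the same effect.
sed -i 's/\r$//' /tmp/demo.sh
```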
Install a Ruby environment, since the Redis cluster tooling apparently depends on Ruby:
[root@localhost webman]# yum install ruby
This step likewise installs an apparently required dependency:
gem install redis -v 3.3.
Create the cluster:
[root@localhost webman]# redis-cli --cluster create 192.168.1.10:6379 192.168.1.10:6380 192.168.1.10:6381 192.168.1.10:6382 192.168.1.10:6383 192.168.1.10:6384 --cluster-replicas 1
[ERR] Node 192.168.1.10:6379 is not configured as a cluster node.
If this happens, try killall -9 redis-server and re-run the start0_5.sh script above.
Then try again:
[root@localhost webman]# redis-cli --cluster create 192.168.1.10:6379 192.168.1.10:6380 192.168.1.10:6381 192.168.1.10:6382 192.168.1.10:6383 192.168.1.10:6384 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.10:6383 to 192.168.1.10:6379
Adding replica 192.168.1.10:6384 to 192.168.1.10:6380
Adding replica 192.168.1.10:6382 to 192.168.1.10:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 1a8a78ce01d8cb00828f824693753078683499f8 192.168.1.10:6379
slots:[0-5460] (5461 slots) master
M: 9eda37a8ffe8176460c137cac9fd861998ccccd4 192.168.1.10:6380
slots:[5461-10922] (5462 slots) master
M: 52ae6420725e924d72fbf8884d060725f37b953b 192.168.1.10:6381
slots:[10923-16383] (5461 slots) master
S: 61df0b77e73d9e8693bd5d95f2e11ca976170d1f 192.168.1.10:6382
replicates 9eda37a8ffe8176460c137cac9fd861998ccccd4
S: c3f903a44966153e1a891610bb68e209d935461a 192.168.1.10:6383
replicates 52ae6420725e924d72fbf8884d060725f37b953b
S: 441fe63ed37d167cea82ba97ca732d506ab1efe2 192.168.1.10:6384
replicates 1a8a78ce01d8cb00828f824693753078683499f8
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
16314:M 29 Jan 2022 11:25:31.179 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
16315:M 29 Jan 2022 11:25:31.179 # configEpoch set to 2 via CLUSTER SET-CONFIG-EPOCH
16316:M 29 Jan 2022 11:25:31.179 # configEpoch set to 3 via CLUSTER SET-CONFIG-EPOCH
16317:M 29 Jan 2022 11:25:31.180 # configEpoch set to 4 via CLUSTER SET-CONFIG-EPOCH
16318:M 29 Jan 2022 11:25:31.180 # configEpoch set to 5 via CLUSTER SET-CONFIG-EPOCH
16319:M 29 Jan 2022 11:25:31.180 # configEpoch set to 6 via CLUSTER SET-CONFIG-EPOCH
>>> Sending CLUSTER MEET messages to join the cluster
16314:M 29 Jan 2022 11:25:31.194 # IP address for this node updated to 192.168.1.10
16317:M 29 Jan 2022 11:25:31.296 # IP address for this node updated to 192.168.1.10
16318:M 29 Jan 2022 11:25:31.296 # IP address for this node updated to 192.168.1.10
16319:M 29 Jan 2022 11:25:31.296 # IP address for this node updated to 192.168.1.10
16315:M 29 Jan 2022 11:25:31.296 # IP address for this node updated to 192.168.1.10
16316:M 29 Jan 2022 11:25:31.297 # IP address for this node updated to 192.168.1.10
Waiting for the cluster to join
16317:S 29 Jan 2022 11:25:32.183 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
16317:S 29 Jan 2022 11:25:32.183 * Connecting to MASTER 192.168.1.10:6380
16317:S 29 Jan 2022 11:25:32.183 * MASTER <-> REPLICA sync started
16317:S 29 Jan 2022 11:25:32.183 # Cluster state changed: ok
16317:S 29 Jan 2022 11:25:32.183 * Non blocking connect for SYNC fired the event.
16318:S 29 Jan 2022 11:25:32.183 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
16318:S 29 Jan 2022 11:25:32.183 * Connecting to MASTER 192.168.1.10:6381
16318:S 29 Jan 2022 11:25:32.183 * MASTER <-> REPLICA sync started
16318:S 29 Jan 2022 11:25:32.183 # Cluster state changed: ok
16318:S 29 Jan 2022 11:25:32.183 * Non blocking connect for SYNC fired the event.
16319:S 29 Jan 2022 11:25:32.183 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
16319:S 29 Jan 2022 11:25:32.183 * Connecting to MASTER 192.168.1.10:6379
16319:S 29 Jan 2022 11:25:32.183 * MASTER <-> REPLICA sync started
16319:S 29 Jan 2022 11:25:32.183 # Cluster state changed: ok
16319:S 29 Jan 2022 11:25:32.183 * Non blocking connect for SYNC fired the event.
16317:S 29 Jan 2022 11:25:32.183 * Master replied to PING, replication can continue...
16319:S 29 Jan 2022 11:25:32.184 * Master replied to PING, replication can continue...
16318:S 29 Jan 2022 11:25:32.184 * Master replied to PING, replication can continue...
16319:S 29 Jan 2022 11:25:32.184 * Trying a partial resynchronization (request e6c79518434e719ca531ff8b6e64e1887585418c:1).
16314:M 29 Jan 2022 11:25:32.184 * Replica 192.168.1.10:6384 asks for synchronization
16314:M 29 Jan 2022 11:25:32.184 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for 'e6c79518434e719ca531ff8b6e64e1887585418c', my replication IDs are '4f2136ac1d4cc4d2f1766878525c2c48d6236ce0' and '0000000000000000000000000000000000000000')
16314:M 29 Jan 2022 11:25:32.184 * Replication backlog created, my new replication IDs are '0ab0b3802eaa4e9a858bc4108c308922bbe2142f' and '0000000000000000000000000000000000000000'
16314:M 29 Jan 2022 11:25:32.184 * Starting BGSAVE for SYNC with target: disk
16317:S 29 Jan 2022 11:25:32.184 * Trying a partial resynchronization (request 05c3918c7a1475f6735f4c5218eddf807f8566b9:1).
16315:M 29 Jan 2022 11:25:32.184 * Replica 192.168.1.10:6382 asks for synchronization
16315:M 29 Jan 2022 11:25:32.184 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '05c3918c7a1475f6735f4c5218eddf807f8566b9', my replication IDs are 'e4e6e0bd39b511f56c0c4b37ae57831ac1e59d5f' and '0000000000000000000000000000000000000000')
16315:M 29 Jan 2022 11:25:32.184 * Replication backlog created, my new replication IDs are '4f6fb32cb6a69554f021a4f66ecacee24c36705f' and '0000000000000000000000000000000000000000'
16315:M 29 Jan 2022 11:25:32.184 * Starting BGSAVE for SYNC with target: disk
16314:M 29 Jan 2022 11:25:32.186 * Background saving started by pid 16792
16315:M 29 Jan 2022 11:25:32.186 * Background saving started by pid 16793
16318:S 29 Jan 2022 11:25:32.186 * Trying a partial resynchronization (request 3c448939503dee434d30ae08471e75515c0b5ac8:1).
16317:S 29 Jan 2022 11:25:32.186 * Full resync from master: 4f6fb32cb6a69554f021a4f66ecacee24c36705f:0
16317:S 29 Jan 2022 11:25:32.186 * Discarding previously cached master state.
16316:M 29 Jan 2022 11:25:32.186 * Replica 192.168.1.10:6383 asks for synchronization
16316:M 29 Jan 2022 11:25:32.186 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '3c448939503dee434d30ae08471e75515c0b5ac8', my replication IDs are '4170a1f0dfc5d11524dbfec1c47658365e1c0705' and '0000000000000000000000000000000000000000')
16316:M 29 Jan 2022 11:25:32.186 * Replication backlog created, my new replication IDs are 'ae03e5c488a874af4e1fcfb8fa4a9fe3ff934f66' and '0000000000000000000000000000000000000000'
16316:M 29 Jan 2022 11:25:32.186 * Starting BGSAVE for SYNC with target: disk
16319:S 29 Jan 2022 11:25:32.186 * Full resync from master: 0ab0b3802eaa4e9a858bc4108c308922bbe2142f:0
16319:S 29 Jan 2022 11:25:32.186 * Discarding previously cached master state.
16316:M 29 Jan 2022 11:25:32.188 * Background saving started by pid 16794
16318:S 29 Jan 2022 11:25:32.188 * Full resync from master: ae03e5c488a874af4e1fcfb8fa4a9fe3ff934f66:0
16318:S 29 Jan 2022 11:25:32.188 * Discarding previously cached master state.
>>> Performing Cluster Check (using node 192.168.1.10:6379)
M: 1a8a78ce01d8cb00828f824693753078683499f8 192.168.1.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 52ae6420725e924d72fbf8884d060725f37b953b 192.168.1.10:6381
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 441fe63ed37d167cea82ba97ca732d506ab1efe2 192.168.1.10:6384
slots: (0 slots) slave
replicates 1a8a78ce01d8cb00828f824693753078683499f8
S: c3f903a44966153e1a891610bb68e209d935461a 192.168.1.10:6383
slots: (0 slots) slave
replicates 52ae6420725e924d72fbf8884d060725f37b953b
M: 9eda37a8ffe8176460c137cac9fd861998ccccd4 192.168.1.10:6380
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 61df0b77e73d9e8693bd5d95f2e11ca976170d1f 192.168.1.10:6382
slots: (0 slots) slave
replicates 9eda37a8ffe8176460c137cac9fd861998ccccd4
16794:C 29 Jan 2022 11:25:32.189 * DB saved on disk
16794:C 29 Jan 2022 11:25:32.189 * RDB: 4 MB of memory used by copy-on-write
16793:C 29 Jan 2022 11:25:32.190 * DB saved on disk
16792:C 29 Jan 2022 11:25:32.190 * DB saved on disk
16793:C 29 Jan 2022 11:25:32.190 * RDB: 6 MB of memory used by copy-on-write
16792:C 29 Jan 2022 11:25:32.190 * RDB: 4 MB of memory used by copy-on-write
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@localhost webman]# 16315:M 29 Jan 2022 11:25:32.201 * Background saving terminated with success
16316:M 29 Jan 2022 11:25:32.201 * Background saving terminated with success
16315:M 29 Jan 2022 11:25:32.201 * Synchronization with replica 192.168.1.10:6382 succeeded
16316:M 29 Jan 2022 11:25:32.201 * Synchronization with replica 192.168.1.10:6383 succeeded
16318:S 29 Jan 2022 11:25:32.201 * MASTER <-> REPLICA sync: receiving 175 bytes from master to disk
16318:S 29 Jan 2022 11:25:32.201 * MASTER <-> REPLICA sync: Flushing old data
16318:S 29 Jan 2022 11:25:32.201 * MASTER <-> REPLICA sync: Loading DB in memory
16317:S 29 Jan 2022 11:25:32.201 * MASTER <-> REPLICA sync: receiving 175 bytes from master to disk
16317:S 29 Jan 2022 11:25:32.201 * MASTER <-> REPLICA sync: Flushing old data
16317:S 29 Jan 2022 11:25:32.201 * MASTER <-> REPLICA sync: Loading DB in memory
16318:S 29 Jan 2022 11:25:32.202 * Loading RDB produced by version 6.2.6
16318:S 29 Jan 2022 11:25:32.202 * RDB age 0 seconds
16318:S 29 Jan 2022 11:25:32.202 * RDB memory usage when created 2.59 Mb
16318:S 29 Jan 2022 11:25:32.202 # Done loading RDB, keys loaded: 0, keys expired: 0.
16317:S 29 Jan 2022 11:25:32.202 * Loading RDB produced by version 6.2.6
16317:S 29 Jan 2022 11:25:32.202 * RDB age 0 seconds
16317:S 29 Jan 2022 11:25:32.202 * RDB memory usage when created 2.54 Mb
16317:S 29 Jan 2022 11:25:32.202 # Done loading RDB, keys loaded: 0, keys expired: 0.
16317:S 29 Jan 2022 11:25:32.202 * MASTER <-> REPLICA sync: Finished with success
16318:S 29 Jan 2022 11:25:32.202 * MASTER <-> REPLICA sync: Finished with success
16314:M 29 Jan 2022 11:25:32.216 * Background saving terminated with success
16314:M 29 Jan 2022 11:25:32.216 * Synchronization with replica 192.168.1.10:6384 succeeded
16319:S 29 Jan 2022 11:25:32.216 * MASTER <-> REPLICA sync: receiving 175 bytes from master to disk
16319:S 29 Jan 2022 11:25:32.216 * MASTER <-> REPLICA sync: Flushing old data
16319:S 29 Jan 2022 11:25:32.216 * MASTER <-> REPLICA sync: Loading DB in memory
16319:S 29 Jan 2022 11:25:32.218 * Loading RDB produced by version 6.2.6
16319:S 29 Jan 2022 11:25:32.218 * RDB age 0 seconds
16319:S 29 Jan 2022 11:25:32.218 * RDB memory usage when created 2.59 Mb
16319:S 29 Jan 2022 11:25:32.218 # Done loading RDB, keys loaded: 0, keys expired: 0.
16319:S 29 Jan 2022 11:25:32.218 * MASTER <-> REPLICA sync: Finished with success
16314:M 29 Jan 2022 11:25:36.096 # Cluster state changed: ok
16315:M 29 Jan 2022 11:25:36.183 # Cluster state changed: ok
16316:M 29 Jan 2022 11:25:36.187 # Cluster state changed: ok
Midway through there is an interactive prompt; just type yes to accept.
A warning also appears: [WARNING] Some slaves are in the same host as their master.
Since everything here runs inside a single virtual machine, this warning is expected.
Connect to the Redis cluster
[root@localhost ~]# redis-cli -p 6379 -c
You can connect to any node here, but the -c flag is required; without it the client cannot follow redirects between nodes automatically.
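With -c, the client follows MOVED redirects on its own. An illustrative session (assuming the slot layout created above, where key foo hashes to slot 12182 in the 10923-16383 range owned by the 6381 master; exact output may vary):

```
[root@localhost ~]# redis-cli -p 6379 -c
127.0.0.1:6379> set foo bar
-> Redirected to slot [12182] located at 192.168.1.10:6381
OK
192.168.1.10:6381> get foo
"bar"
```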
View Redis cluster information:
[root@localhost webman]# redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:540
cluster_stats_messages_pong_sent:555
cluster_stats_messages_sent:1095
cluster_stats_messages_ping_received:550
cluster_stats_messages_pong_received:540
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1095
View Redis cluster node information:
[root@localhost webman]# redis-cli cluster nodes
52ae6420725e924d72fbf8884d060725f37b953b 192.168.1.10:6381@16381 master - 0 1643427335057 3 connected 10923-16383
441fe63ed37d167cea82ba97ca732d506ab1efe2 192.168.1.10:6384@16384 slave 1a8a78ce01d8cb00828f824693753078683499f8 0 1643427336000 1 connected
c3f903a44966153e1a891610bb68e209d935461a 192.168.1.10:6383@16383 slave 52ae6420725e924d72fbf8884d060725f37b953b 0 1643427337000 3 connected
1a8a78ce01d8cb00828f824693753078683499f8 192.168.1.10:6379@16379 myself,master - 0 1643427331000 1 connected 0-5460
9eda37a8ffe8176460c137cac9fd861998ccccd4 192.168.1.10:6380@16380 master - 0 1643427335000 2 connected 5461-10922
61df0b77e73d9e8693bd5d95f2e11ca976170d1f 192.168.1.10:6382@16382 slave 9eda37a8ffe8176460c137cac9fd861998ccccd4 0 1643427337071 2 connected
Finally, you can also use RDM to inspect the cluster in a more visual way.