elasticSearch: configuring a cluster with master/data separation (separate master and data nodes)
Published: 2021-09-05 15:56:33    Editor: Xue Yin
Configuring an elasticSearch cluster, i.e. scaling out horizontally, generally means setting up a master/data split; if every node acts as a master and the cluster ends up with more than one node believing it is the master, that is the so-called split-brain condition.
For this cluster, a Windows 10 machine (win10) serves as the master node and a Windows 7 machine (win7) as the data node.
The configuration, as usual, goes into config/elasticsearch.yml.
On win10 the configuration looks like this:
path.repo: ["D:/esbak"]
path.data: "D:/software/elasticsearch-7.14.0-windows-x86_64/elasticsearch-7.14.0/myData"
http.cors.enabled: true
http.cors.allow-origin: "*"
cluster.name: "logging-prod"
network.host: 0.0.0.0
node.master: true
node.data: false
node.name: "node-win10"
cluster.initial_master_nodes: ["192.168.43.71"]
discovery.zen.minimum_master_nodes: 2
The most important settings here are the following:
path.data: this sets the directory where the node stores its data. This win10 machine already had data on it, so I deleted the data first and then configured this directory. If the old data is not deleted, the instance throws a pile of errors right at startup, mostly validation related; the logs show that when win7 tries to join win10, some validation step fails. The advice found online is to clear out the directory that path.data points to first; in my case path.data had never been configured, so the node had been using the default data directory.
So I first deleted, through elasticSearch-head, every index except the built-in system indices, and then configured this path.data directory.
cluster.initial_master_nodes: ["192.168.43.71"]: this setting lists the nodes that may become master at cluster bootstrap, given either as node names or as IP addresses; some articles online use node names while others use IP addresses. For now I am using the IP address, and 192.168.43.71 is this win10 machine.
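Incidentally, once the win10 node is up you can confirm which node was elected master through the _cat/master API. A minimal check, assuming the default HTTP port 9200:
curl "http://192.168.43.71:9200/_cat/master?v"
# prints the id, host, ip and node name of the elected master; here it should report node-win10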
With win10 configured, the next step is win7:
path.data: "C:/software/elasticsearch-7.14.0/myData"
http.cors.enabled: true
http.cors.allow-origin: "*"
cluster.name: "logging-prod"
network.host: 0.0.0.0
node.master: false
node.data: true
node.name: "node-win7"
discovery.seed_hosts: ["192.168.43.71", "192.168.43.76"]
cluster.initial_master_nodes: ["192.168.43.71"]
discovery.zen.minimum_master_nodes: 2
The main difference in the win7 configuration is the extra discovery.seed_hosts setting, which holds a list of addresses of the cluster's nodes; these should be IP addresses or resolvable hostnames rather than Elasticsearch node names.
In that list, 192.168.43.76 is precisely the IP address of this win7 machine.
As long as the two instances are on the same LAN, nothing else should be needed: start each of them, wait a short moment, and the cluster forms. A quick way to check this is shown right below.
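To confirm that both nodes have actually joined, the _cluster/health API can be queried from either machine (again assuming the default HTTP port 9200):
curl "http://192.168.43.71:9200/_cluster/health?pretty"
# "number_of_nodes" should report 2 once node-win7 has joined node-win10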
There was one small hiccup on the win7 side, though:
[2021-09-05T15:09:22,436][ERROR][o.e.i.g.GeoIpDownloader ] [node-win7] exception during geoip databases update
java.net.SocketTimeoutException: Read timed out
at sun.nio.ch.NioSocketImpl.timedRead(NioSocketImpl.java:283) ~[?:?]
at sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:309) ~[?:?]
at sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:350) ~[?:?]
at sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:803) ~[?:?]
at java.net.Socket$SocketInputStream.read(Socket.java:976) ~[?:?]
at sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:478) ~[?:?]
at sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:472) ~[?:?]
at sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:160) ~[?:?]
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:110) ~[?:?]
at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1498) ~[?:?]
at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1404) ~[?:?]
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:441) ~[?:?]
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:412) ~[?:?]
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:574) ~[?:?]
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183) ~[?:?]
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1653) ~[?:?]
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1577) ~[?:?]
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527) ~[?:?]
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:308) ~[?:?]
at org.elasticsearch.ingest.geoip.HttpClient.lambda$get$0(HttpClient.java:55) ~[ingest-geoip-7.14.0.jar:7.14.0]
at java.security.AccessController.doPrivileged(AccessController.java:554) ~[?:?]
at org.elasticsearch.ingest.geoip.HttpClient.doPrivileged(HttpClient.java:97) ~[ingest-geoip-7.14.0.jar:7.14.0]
at org.elasticsearch.ingest.geoip.HttpClient.get(HttpClient.java:49) ~[ingest-geoip-7.14.0.jar:7.14.0]
at org.elasticsearch.ingest.geoip.HttpClient.getBytes(HttpClient.java:40) ~[ingest-geoip-7.14.0.jar:7.14.0]
at org.elasticsearch.ingest.geoip.GeoIpDownloader.fetchDatabasesOverview(GeoIpDownloader.java:117) ~[ingest-geoip-7.14.0.jar:7.14.0]
at org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:105) ~[ingest-geoip-7.14.0.jar:7.14.0]
at org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:237) [ingest-geoip-7.14.0.jar:7.14.0]
at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:89) [ingest-geoip-7.14.0.jar:7.14.0]
at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:38) [ingest-geoip-7.14.0.jar:7.14.0]
at org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:40) [elasticsearch-7.14.0.jar:7.14.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) [elasticsearch-7.14.0.jar:7.14.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.14.0.jar:7.14.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:831) [?:?]
This error was thrown during startup.
Even so, the cluster still formed successfully in the end; a short note on this error follows below.
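The error comes from the GeoIP downloader (o.e.i.g.GeoIpDownloader in the log) trying to fetch its GeoIP databases over the internet and timing out, which is expected on a machine without reliable outbound access. If GeoIP enrichment is not needed, the downloader can be switched off in elasticsearch.yml; this is optional and the cluster works without it:
ingest.geoip.downloader.enabled: false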
To verify that the cluster formed, you can use the cluster _nodes API, for example:
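A request along these lines (assuming the default HTTP port 9200) returns the full node list:
curl "http://192.168.43.71:9200/_nodes?pretty"
# the "_nodes" summary at the top of the response should show "total" : 2 and "successful" : 2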
That shows two nodes, matching the number of nodes this cluster was configured to have.
Node discovery usually takes a short while, so not all nodes necessarily show up the moment startup finishes.
Besides the cluster _nodes API, you can also check in elasticSearch-head.
There, too, two nodes are visible, so the cluster configuration can be considered complete.
Keywords: elasticSearch, cluster configuration