ElasticSearch High-Availability Cluster Deployment

1. Introduction

Basic cluster concepts

  • Cluster: a named group of cooperating nodes.
  • Shards: an index is split into shards that are distributed across nodes, which is what makes search distributed. The number of primary shards is fixed when the index is created and cannot be changed afterwards (see the example after this list).
  • Replicas: replica shards improve fault tolerance and query throughput; ES automatically load-balances requests across them.
  • Recovery: data redistribution. When a node joins or leaves, ES reallocates index shards according to node load; when a failed node restarts, its data is recovered as well.
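
To make the shard/replica split concrete, here is a minimal sketch against the cluster's HTTP API; the index name logs-demo and the counts are illustrative, not part of the original setup.

# Create an index with 3 primary shards and 1 replica per shard
curl -X PUT "http://192.168.2.8:9200/logs-demo" -H 'Content-Type: application/json' -d'
{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'
# The replica count can be changed at any time; the primary shard count cannot
curl -X PUT "http://192.168.2.8:9200/logs-demo/_settings" -H 'Content-Type: application/json' -d'
{ "index": { "number_of_replicas": 2 } }'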

ES and high concurrency

ES is a distributed full-text search framework that hides these complexities from the caller; internally it relies on sharding, cluster discovery, shard load balancing, and request routing.

  • Shards provide distributed search.
  • Replicas provide load balancing and fault recovery.

Official reference: https://www.elastic.co/guide/en/elasticsearch/reference/index.html

2. Preparation

Cluster architecture

Server deployment plan

HAProxy load-balances client traffic across the nodes (a minimal configuration sketch follows the server list below).

cluster.name: my-cluster

  1. ELK8: 192.168.2.8 es-node8
  2. ELK9: 192.168.2.9 es-node9
  3. ELK10: 192.168.2.10 es-node10
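
The post does not include the HAProxy side, so the following is only a sketch under assumed conditions: HAProxy runs on a separate host, listens on port 9200, and round-robins HTTP traffic to the three nodes; the frontend/backend and server names are illustrative.

# Append an Elasticsearch frontend/backend pair to the HAProxy configuration
cat <<'EOF' | sudo tee -a /etc/haproxy/haproxy.cfg
frontend es_http
    bind *:9200
    mode http
    default_backend es_nodes

backend es_nodes
    mode http
    balance roundrobin
    option httpchk GET /_cluster/health
    server es-node8  192.168.2.8:9200  check
    server es-node9  192.168.2.9:9200  check
    server es-node10 192.168.2.10:9200 check
EOF
sudo systemctl reload haproxy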

3. Installing ES

Install from the tarball

Download

Download elasticsearch-7.14.0-linux-x86_64.tar.gz and kibana-7.14.0-linux-x86_64.tar.gz from the official Elastic download page.
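
If the packages are fetched from the command line, the standard artifacts.elastic.co URLs for 7.14.0 look like this (verify against the current download page):

# Download the Elasticsearch and Kibana tarballs
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.14.0-linux-x86_64.tar.gz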

Extract

tar -zxvf elasticsearch-7.14.0-linux-x86_64.tar.gz
mv elasticsearch-7.14.0 /opt/elasticsearch-7.14.0
tar -zxvf kibana-7.14.0-linux-x86_64.tar.gz
mv kibana-7.14.0-linux-x86_64 /opt/kibana-7.14.0

Add a user

# Create the account
useradd es
# Set its password
passwd es
# Add the user to the root group
usermod -aG root es
# Grant sudo rights
sudo vim /etc/sudoers
# Add the line: es ALL=(ALL) ALL
su es
# Give the es user ownership of the current directory (run this from the install directory)
sudo chown es ./

CentOS system configuration

Max open files

Edit

sudo vim /etc/security/limits.conf

Add

* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536

Apply

limits.conf is not a shell script, so it cannot be sourced; the new limits are applied by PAM the next time the es user logs in.
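
After logging in again as es, the values can be verified; both commands should report 65536:

# Open-file limit
ulimit -n
# Process limit
ulimit -u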

Virtual memory

sudo vim /etc/sysctl.conf
# Add the line: vm.max_map_count=655360
# Reload the kernel parameters
sudo sysctl -p
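
The live value can be read back at any time (for example after a reboot) to confirm the setting stuck:

# Expected output: vm.max_map_count = 655360
sysctl vm.max_map_count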

Core configuration

ElasticSearch

vim /opt/elasticsearch-7.14.0/config/elasticsearch.yml

Change it to

# Cluster name; must be identical on every node of the same cluster (default: elasticsearch)
cluster.name: my-cluster
# Node name; must be unique within the cluster
node.name: es-node8
# Data path; default is elasticsearch/data
#path.data: /path/to/data
# Log path; default is elasticsearch/logs
#path.logs: /path/to/logs
# Bind address; 0.0.0.0 accepts connections from any host
network.host: 0.0.0.0
# HTTP port for clients (inter-node transport uses port 9300/TCP)
http.port: 9200
# Master-eligible nodes used to bootstrap the first election; on a single node, list only that node's name
cluster.initial_master_nodes: ["es-node8","es-node9","es-node10"]

Kibana

vim /opt/kibana-7.14.0/config/kibana.yml

Edit

# Port
server.port: 5601
# IP of this host
server.host: "192.168.2.2"
# Maximum request payload size
#server.maxPayload: 1048576
# Server name (used for display purposes)
server.name: "kibana-host2"
# Elasticsearch instances.
elasticsearch.hosts: ["http://192.168.2.2:9200"]
# If Elasticsearch is protected with basic authentication, these settings provide the
# username and password the Kibana server uses to maintain the Kibana index at startup.
# Kibana users still authenticate against Elasticsearch, which is proxied through the Kibana server.
elasticsearch.username: "maxzhao"
elasticsearch.password: "maxzhao"
xpack.security.sessionTimeout: 600000
# A random string at least 32 characters long; see https://www.elastic.co/guide/en/kibana/current/reporting-settings-kb.html
xpack.reporting.encryptionKey: "11112222333344445555666677778888"
# https://www.elastic.co/guide/en/kibana/6.x/using-kibana-with-security.html
xpack.security.encryptionKey: "11112222333344445555666677778888"

Start

# Let the es user's group create directories under /var
sudo chmod g+xwr /var
mkdir -p /var/data/
mkdir -p /var/log/
cd /opt
# Run each service in its own terminal as the es user (Elasticsearch refuses to run as root)
./elasticsearch-7.14.0/bin/elasticsearch
./kibana-7.14.0/bin/kibana

Access

  1. Elasticsearch: http://127.0.0.1:9200/
  2. Kibana: http://localhost:5601
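
Both services can also be checked from the shell; /api/status is Kibana's built-in status endpoint:

# Elasticsearch returns a JSON banner with the cluster name and version
curl http://127.0.0.1:9200/
# Kibana reports its overall state once it has connected to Elasticsearch
curl http://localhost:5601/api/status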

Run in the background

/opt/elasticsearch-7.14.0/bin/elasticsearch -d
# Kibana has no daemon flag; run it with nohup (redirect output to a file of your choice)
nohup /opt/kibana-7.14.0/bin/kibana > /opt/kibana-7.14.0/kibana.out 2>&1 &
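
For unattended servers, a service manager is usually preferable to -d/nohup. The unit below is only a sketch for this tarball layout; the unit name, User=es, and the paths are assumptions, not part of the original post.

# Create a minimal systemd unit for the tarball install
sudo tee /etc/systemd/system/elasticsearch-custom.service <<'EOF'
[Unit]
Description=Elasticsearch 7.14.0 (tarball install)
After=network.target

[Service]
Type=simple
User=es
ExecStart=/opt/elasticsearch-7.14.0/bin/elasticsearch
LimitNOFILE=65536
LimitNPROC=65536
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch-custom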

Stop the services

ps -ef|grep elasticsearch
ps -ef|grep kibana
# Send SIGTERM first so the node can shut down cleanly; fall back to -9 only if it hangs
kill xxx

Startup options

Record the process ID

Write the process ID to a file so the service is easy to stop later.

# -p writes the PID to the file "pid"; pkill -F reads that file and signals the process
/opt/elasticsearch-7.14.0/bin/elasticsearch -d -p pid
pkill -F pid

Override settings on the command line

# -E<setting>=<value> overrides the corresponding entry in elasticsearch.yml
/opt/elasticsearch-7.14.0/bin/elasticsearch -d -Ecluster.name=my_cluster -Enode.name=node_1

Configuration via environment variables

export HOSTNAME="host1,host2"
vim /opt/elasticsearch-7.14.0/config/elasticsearch.yml

Add

node.name:    ${HOSTNAME}
network.host: ${ES_NETWORK_HOST}
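
Both variables must be present in the environment of the process that launches Elasticsearch; ES_NETWORK_HOST is just the variable name chosen above, not a setting Elasticsearch knows by itself. A minimal sketch for es-node8:

# Export the variables, then start the node so ${...} placeholders resolve
export HOSTNAME="es-node8"
export ES_NETWORK_HOST="192.168.2.8"
/opt/elasticsearch-7.14.0/bin/elasticsearch -d -p pid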

Cluster

Configuration

Each node needs its own elasticsearch.yml; the full file for host8 and the per-host differences for host9 and host10 are listed in the "elasticsearch.yml" section below.

Inspect

Check cluster health:

http://192.168.2.8:9200/_cat/health

List the cluster nodes:

http://192.168.2.8:9200/_cat/nodes?pretty

In the node list, an asterisk (*) marks the elected master node.
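
The same endpoints are handy from the shell; the ?v parameter adds column headers so the master column is easy to spot:

# Cluster health and node list, queried through any node
curl "http://192.168.2.8:9200/_cat/health?v"
curl "http://192.168.2.8:9200/_cat/nodes?v"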

elasticsearch.yml

host8

# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
# Cluster name; must be identical on every node of the same cluster (default: elasticsearch)
cluster.name: my-cluster
# Node name; must be unique within the cluster
node.name: es-node8
# Node roles; the default is:
# node.roles: ["master","data","data_content","data_hot","data_warm","data_cold","data_frozen","ingest","ml","remote_cluster_client","transform"]
# Add custom attributes to the node:
#node.attr.rack: r1
# Data paths; default is elasticsearch/data
#path.data: /path/to/data
path:
  data:
    - /var/data/elasticsearch1
    - /var/data/elasticsearch2
    - /var/data/elasticsearch3
  # Log path; default is elasticsearch/logs (path.logs accepts a single directory)
  logs: /var/log/elasticsearch
# Address this node binds to and publishes to the rest of the cluster
network.host: 192.168.2.8
# HTTP port for clients (inter-node transport uses port 9300/TCP)
http.port: 9200
# Cluster discovery
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.seed_hosts: ["192.168.2.9", "192.168.2.10:9300"]
# Master-eligible nodes used to bootstrap the very first election;
# on a single-node setup, list only that node's name
cluster.initial_master_nodes: ["es-node8","es-node9","es-node10"]
#bootstrap.memory_lock: true
# Require explicit index names when deleting indices:
#action.destructive_requires_name: true
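
The data and log directories referenced above must exist and be writable by the es user before the node starts; a sketch for host8 (the same layout is assumed on the other hosts):

# Create the data and log paths from the config and hand them to the es user
sudo mkdir -p /var/data/elasticsearch1 /var/data/elasticsearch2 /var/data/elasticsearch3
sudo mkdir -p /var/log/elasticsearch
sudo chown -R es:es /var/data/elasticsearch1 /var/data/elasticsearch2 /var/data/elasticsearch3 /var/log/elasticsearch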

host9

Identical to host8 except for these three settings:

node.name: es-node9
network.host: 192.168.2.9
discovery.seed_hosts: ["192.168.2.8", "192.168.2.10:9300"]

host10

Identical to host8 except for these three settings:

node.name: es-node10
network.host: 192.168.2.10
discovery.seed_hosts: ["192.168.2.9", "192.168.2.8:9300"]

kibana.yml

# Port
server.port: 5601
# IP of this host
server.host: "192.168.2.2"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use server.rewriteBasePath to tell Kibana whether it should remove the basePath from
# requests it receives. This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests prefixed with server.basePath. Defaults to true.
#server.rewriteBasePath: false
# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""
# Maximum request payload size
#server.maxPayload: 1048576
# The Kibana server's name, used for display purposes
#server.name: "your-hostname"
# Elasticsearch instances.
elasticsearch.hosts: ["http://192.168.2.2:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the
# Kibana index at startup. Your Kibana users still need to authenticate with
# Elasticsearch, which is proxied through the Kibana server.
elasticsearch.username: "maxzhao"
elasticsearch.password: "maxzhao"
xpack.security.sessionTimeout: 600000
# A random string at least 32 characters long; see https://www.elastic.co/guide/en/kibana/current/reporting-settings-kb.html
xpack.reporting.encryptionKey: "11112222333344445555666677778888"
# https://www.elastic.co/guide/en/kibana/6.x/using-kibana-with-security.html
xpack.security.encryptionKey: "11112222333344445555666677778888"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

Original post: https://github.com/maxzhao-it/blog/post/48916/