Elasticsearch Installation, Configuration, and Optimization on Linux
- Download the tar package (this guide uses 6.8.6 throughout)
elasticsearch-6.8.6-linux-x86_64.tar.gz
- Extract the tar package
mkdir -p /opt/deploy/data
tar -C /opt/deploy/data -zxvf elasticsearch-6.8.6.tar.gz
mv /opt/deploy/data/elasticsearch-6.8.6 /opt/deploy/data/elasticsearch
- Configure environment variables (/etc/profile)
#ES setting
ELK_VERSION=6.8.6
ES_HOME=/opt/deploy/data/elasticsearch
KIBANA_HOME=/opt/deploy/data/kibana
LS_HOME=/opt/deploy/data/logstash
PATH=$PATH:$ES_HOME/bin:$KIBANA_HOME/bin:$LS_HOME/bin
export ES_HOME KIBANA_HOME LS_HOME PATH
source /etc/profile
- Add a dedicated user
groupadd elasticsearch
useradd -g elasticsearch elasticsearch
echo 'elasticsearch:elasticsearch' | chpasswd
chown -R elasticsearch:elasticsearch /opt/deploy/data/elasticsearch
- Heap sizing (50% of system RAM, at most 32 GB)
-Xmx8g -Xms8g
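In 6.x these flags are set in `$ES_HOME/config/jvm.options` rather than on the command line; a sketch for an 8 GB heap (the size is an example for a 16 GB host):

```
# $ES_HOME/config/jvm.options
-Xms8g
-Xmx8g
```

Keep -Xms and -Xmx equal so the heap is never resized at runtime.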
- Memory locking and file-handle limits (vim /etc/security/limits.conf)
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft nproc 8096
elasticsearch hard nproc 8096
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
- Minimize swapping and raise the max map count (vim /etc/sysctl.conf)
vm.swappiness = 1
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 262144
vm.max_map_count = 655360
fs.file-max = 6553560
vm.overcommit_memory = 1
sysctl -p
vim /etc/systemd/system.conf
DefaultLimitNOFILE=65536
DefaultLimitNPROC=32000
DefaultLimitMEMLOCK=infinity
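After a fresh login (or reboot), the effective limits can be checked from the elasticsearch user's shell; a quick sketch:

```shell
# Print the limits that matter to Elasticsearch for the current shell.
# After a fresh login they should match /etc/security/limits.conf above.
ulimit -n   # max open file descriptors
ulimit -u   # max user processes
ulimit -l   # max locked-in-memory size
```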
- Set the index refresh interval
PUT /my_logs
{
"settings": {
"refresh_interval": "30s"
}
}
PUT /my_index/_settings
{
"index.translog.durability": "async",
"index.translog.sync_interval": "5s"
}
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -H 'Content-Type: application/json' -d '{
"index.mapping.total_fields.limit" : "100000",
"index.mapping.depth.limit": 100,
"index.mapping.nested_fields.limit": 100
}'
The settings above can be applied together, e.g. PUT /my_index/_settings (or /_all/_settings):
{
"refresh_interval": "30s",
"index.mapping.total_fields.limit" : 100000,
"index.mapping.depth.limit": 100,
"index.mapping.nested_fields.limit": 100,
"index.translog.durability": "async",
"index.translog.sync_interval": "5s"
}
An alternative payload that also raises index.max_result_window:
{
"refresh_interval": "30s",
"index.mapping.total_fields.limit" : 100000,
"index.mapping.depth.limit": 100,
"index.max_result_window":100000,
"index.mapping.nested_fields.limit": 100000,
"index.translog.durability": "async"
}
Note: index.translog.sync_interval is a per-index setting and cannot be set cluster-wide.
- Cluster unicast discovery tuning (elasticsearch.yml)
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: ics-elasticsearch
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ics-elasticsearch-102
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /opt/deploy/data/elasticsearch/data
#
# Path to log files:
#
path.logs: /opt/deploy/data/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
http.compression: true
transport.tcp.port: 9300
transport.tcp.compress: true
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["ics-server100", "ics-server101","ics-server102"]
discovery.zen.ping_timeout: 10s
discovery.zen.fd.ping_retries: 3
discovery.zen.fd.ping_interval: 3s
discovery.zen.fd.ping_timeout: 30s
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#---------------------------------- Indices -----------------------------------
indices.fielddata.cache.size: 30%
indices.query.bool.max_clause_count: 100000
A rolling upgrade lets you upgrade the cluster one node at a time. Because Elasticsearch replicates shards across nodes, taking a single node out of the cluster does not lose data: its shards can be restored from copies on the remaining nodes. Running multiple Elasticsearch versions in one cluster for an extended period is not recommended, because when the removed node rejoins, mixing data written by different versions on one node may leave it unable to work.
1. Disable real-time shard allocation so the cluster can come back quickly after a shutdown. This option is enabled by default: when an instance starts, it tries to copy shard replicas from other nodes to the local disk, which wastes considerable time and I/O. With allocation disabled, a newly started instance joining the cluster will not copy shard replicas from other instances. Once the node is fully up again, re-enable the option to restore long-term resilience.
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
"transient" : {
"cluster.routing.allocation.enable" : "none"
}
}'
2. Stop the node instance being upgraded and remove it from the cluster. (The old `_cluster/nodes/_local/_shutdown` API was removed in Elasticsearch 2.x; stop the process instead, e.g. `systemctl stop elasticsearch` or kill its PID.)
3. After the node is removed, wait for the remaining nodes to finish any data relocation, until all shards are correctly allocated.
4. Upgrade the node's Elasticsearch version. The simplest and safest approach is to download a fresh Elasticsearch release and copy the old node's configuration files into it; ideally, keep a symlink pointing at the current version's directory to make future upgrades easier.
5. Start the upgraded node's ES instance and verify that it joins the cluster correctly.
6. Re-enable the shard reallocation (real-time allocation) option:
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
"transient" : {
"cluster.routing.allocation.enable" : "all"
}
}'
Cluster Nodes : 192.168.0.100,192.168.0.101,192.168.0.102,192.168.0.103
- Disable allocation (run against 192.168.0.100)
PUT _cluster/settings
{
"transient": {
"cluster.routing.allocation.enable": "none"
}
}
- Synced flush (192.168.0.100) so shards recover quickly after the restart
POST _flush/synced
- Stop the old-version node (kill the process)
- Upgrade the version
- A. Copy the stopped node's old configuration file (elasticsearch.yml) over the new version's config
- B. Install plugins
- C. Start the new-version node
- Check the cluster nodes (has the new node joined?)
GET _cat/nodes?v
- Re-enable allocation (192.168.0.100)
PUT _cluster/settings
{
"transient": {
"cluster.routing.allocation.enable": "all"
}
}
- Synced flush (192.168.0.100) to let data rebalance
POST _flush/synced
- Wait for node recovery and data sync to finish (yellow --> green) (192.168.0.100)
GET _cat/health
- Check data-recovery progress
GET _cat/recovery
- Repeat the steps above for the remaining nodes (192.168.0.101, 192.168.0.102, 192.168.0.103)
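The per-node sequence above can be sketched as a shell script. This is a sketch, not a drop-in tool: `ES_URL` and the dry-run default are assumptions, and stopping/upgrading the node itself is site-specific.

```shell
# Rolling-upgrade helper: wraps each cluster call so the whole sequence
# can be previewed with DRY_RUN=1 (the default here) before running it.
ES_URL="${ES_URL:-http://localhost:9200}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# 1. Disable shard allocation so shards are not shuffled while the node is down.
run curl -s -XPUT "$ES_URL/_cluster/settings" \
    -H 'Content-Type: application/json' \
    -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'

# 2. Synced flush so shard recovery after the restart is fast.
run curl -s -XPOST "$ES_URL/_flush/synced"

# 3. Stop the old node, upgrade it, start the new version (site-specific).

# 4. Once the node has rejoined, re-enable allocation and wait for green.
run curl -s -XPUT "$ES_URL/_cluster/settings" \
    -H 'Content-Type: application/json' \
    -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'
run curl -s "$ES_URL/_cat/health"
```

With DRY_RUN=1 the script only prints the calls it would make, which is useful for reviewing the sequence before touching a live cluster.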
- Versions (Docker images)
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.16
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.8.6
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.5.1
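For quick local testing, one of the images above can be run as a throwaway single-node container. The helper below is hypothetical (name, ports, and container name are examples); it only builds the command string, so it can be reviewed before running.

```shell
# Hypothetical helper: build the docker run command for a throwaway
# single-node dev container from one of the images pulled above.
es_dev_cmd() {
  # $1: image tag, e.g. 6.8.6; discovery.type=single-node skips
  # the production bootstrap checks for local experimentation.
  echo "docker run -d --name es-dev -p 9200:9200 -p 9300:9300 -e discovery.type=single-node docker.elastic.co/elasticsearch/elasticsearch:$1"
}

es_dev_cmd 6.8.6
```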