# Full cluster restart upgrade

A full cluster restart upgrade requires that you shut all nodes in the cluster down, upgrade them, and restart the cluster. A full cluster restart was required when upgrading to major versions prior to 6.x. Elasticsearch 6.x supports [rolling upgrades](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rolling-upgrades.html "Rolling upgrades") from Elasticsearch 5.6. See [this table](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/setup-upgrade.html#upgrade-paths) to verify which upgrade paths require a full cluster restart.

To perform a full cluster restart upgrade:

1. **Disable shard allocation**

   When you shut down a node, the allocation process waits one minute before starting to replicate the shards on that node to other nodes in the cluster, which causes a lot of wasted I/O. You can avoid this by disabling shard allocation before shutting down a node:

   ```
   PUT _cluster/settings
   {
     "transient": {
       "cluster.routing.allocation.enable": "none"
     }
   }
   ```

2. **Perform a synced flush**

   You can happily continue indexing during the upgrade. However, shard recovery is much faster if you temporarily stop non-essential indexing and issue a [synced flush](../../Indices_APIs/Flush/Synced_Flush.md) request:

   ```
   POST _flush/synced
   ```

   A synced flush is a "best effort" operation. It can fail if there are ongoing indexing operations, but it is safe to reissue the request as many times as necessary.

3. **Stop all nodes**

   * If you are running Elasticsearch with `systemd`:

     ```
     sudo systemctl stop elasticsearch.service
     ```

   * If you are running Elasticsearch with SysV `init`:

     ```
     sudo -i service elasticsearch stop
     ```

   * If you are running Elasticsearch as a daemon:

     ```
     kill $(cat pid)
     ```

4. **Upgrade all nodes**

   To upgrade using a [Debian](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/deb.html "Install Elasticsearch with Debian Package") or [RPM](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rpm.html "Install Elasticsearch with RPM") package:

   * Use `rpm` or `dpkg` to install the new package. All files are installed in the appropriate location for the operating system and Elasticsearch config files are not overwritten.

   To upgrade using a zip or compressed tarball:

   1. Extract the zip or tarball to a *new* directory. This is critical if you are not using external `config` and `data` directories.
   2. Set the `ES_PATH_CONF` environment variable to specify the location of your external `config` directory and `jvm.options` file. If you are not using an external `config` directory, copy your old configuration over to the new installation.
   3. Set `path.data` in `config/elasticsearch.yml` to point to your external data directory. If you are not using an external `data` directory, copy your old data directory over to the new installation.
   4. Set `path.logs` in `config/elasticsearch.yml` to point to the location where you want to store your logs. If you do not specify this setting, logs are stored in the directory you extracted the archive to. (A minimal sketch of these settings follows the tip below.)

   > Tip
   >
   > When you extract the zip or tarball packages, the `elasticsearch-n.n.n` directory contains the Elasticsearch `config`, `data`, `logs` and `plugins` directories.
   >
   > We recommend moving these directories out of the Elasticsearch directory so that there is no chance of deleting them when you upgrade Elasticsearch. To specify the new locations, use the `ES_PATH_CONF` environment variable and the `path.data` and `path.logs` settings. For more information, see [Important Elasticsearch configuration](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/important-settings.html "Important Elasticsearch configuration").
   >
   > The [Debian](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/deb.html "Install Elasticsearch with Debian Package") and [RPM](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rpm.html "Install Elasticsearch with RPM") packages place these directories in the appropriate place for each operating system. In production, we recommend installing using the deb or rpm package.
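   For example, here is a minimal sketch of those settings. The directories used (`/etc/elasticsearch`, `/var/lib/elasticsearch`, `/var/log/elasticsearch`) are illustrative assumptions, not part of the official instructions; adjust them to your own layout:

   ```
   # shell: point Elasticsearch at an external config directory (hypothetical path)
   export ES_PATH_CONF=/etc/elasticsearch

   # elasticsearch.yml (in the directory referenced by ES_PATH_CONF):
   # hypothetical external data and log directories
   path.data: /var/lib/elasticsearch
   path.logs: /var/log/elasticsearch
   ```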
5. **Upgrade any plugins**

   Use the `elasticsearch-plugin` script to install the correct version of any plugins that you need. Plugins must be upgraded when you upgrade an Elasticsearch node.

6. **Start the cluster**

   If you have dedicated master nodes (nodes with `node.master` set to `true` and `node.data` set to `false`), it is a good idea to start them first. Wait for them to form a cluster and elect a master before proceeding with the data nodes. You can check progress by looking at the logs.

   As soon as the [minimum number of master-eligible nodes](../../Modules/Discovery/Zen_Discovery.md#master-election) have discovered each other, they form a cluster and elect a master. From that point on, you can use the [\_cat/health](../../cat_APIs/cat_health.md) and [\_cat/nodes](../../cat_APIs/cat_nodes.md) APIs to monitor nodes joining the cluster:

   ```
   GET _cat/health

   GET _cat/nodes
   ```

   Use these APIs to check that all nodes have successfully joined the cluster.

7. **Wait for all nodes to join the cluster and report a status of yellow.**

   As soon as a node joins the cluster, it begins to recover any shards that are stored locally. At first, the [\_cat/health](../../cat_APIs/cat_health.md) request will report a `status` of `red`, meaning that not all primary shards have been allocated.

   Once each node has recovered its locally stored shards, the `status` becomes `yellow`, meaning all primary shards have been recovered but not all replica shards are allocated yet. This is to be expected because shard allocation is still disabled.

8. **Reenable shard allocation**

   Delay this step until all nodes have joined the cluster and recovered their locally stored shard data. At that point, with all nodes in the cluster, it is safe to reenable shard allocation:

   ```
   PUT _cluster/settings
   {
     "transient": {
       "cluster.routing.allocation.enable": "all"
     }
   }
   ```

   The cluster will then begin allocating replica shards to the data nodes. At this point it is safe to resume indexing and searching, but recovery will complete more quickly if you can delay indexing and searching until all shards have recovered. Once the `status` column in the `_cat/health` output turns `green`, all primary and replica shards have been successfully allocated.

   You can monitor progress with the [\_cat/health](../../cat_APIs/cat_health.md) and [\_cat/recovery](../../cat_APIs/cat_recovery.md) APIs (an optional sketch using the cluster health API follows this list):

   ```
   GET _cat/health

   GET _cat/recovery
   ```
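As an optional convenience, the cluster health API can block until the cluster reaches a desired status instead of repeatedly polling `_cat/health`. A minimal sketch, where the `wait_for_status` value and the `timeout` of `120s` are illustrative choices rather than part of the steps above:

```
GET _cluster/health?wait_for_status=green&timeout=120s
```

If the timeout expires before the requested status is reached, the request does not fail; the response simply comes back with `timed_out: true`, so it is safe to issue it again.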