# Reindex to upgrade
Elasticsearch can only use index data created in the previous major version. Older indices must be reindexed or deleted. Elasticsearch 6.x can use indices created in Elasticsearch 5.x, but not those created in Elasticsearch 2.x or before. Elasticsearch 5.x can use indices created in Elasticsearch 2.x, but not those created in 1.x or before.
Elasticsearch nodes will fail to start if incompatible indices are present.
To upgrade an Elasticsearch cluster running 2.x, you have two options:
* Perform a [full cluster restart upgrade](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/restart-upgrade.html "Full cluster restart upgrade") to 5.6, [reindex](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/reindex-upgrade.html#reindex-upgrade-inplace "Reindex in place") the 2.x indices, then perform a [rolling upgrade](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rolling-upgrades.html "Rolling upgrades") to 6.x. If your Elasticsearch 2.x cluster contains indices that were created before 2.x, you must either delete or reindex them before upgrading to 5.6. For more information about upgrading from 2.x to 5.6, see [Upgrading Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/setup-upgrade.html) in the Elasticsearch 5.6 Reference.
* Create a new 6.x cluster and [reindex from remote](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/reindex-upgrade.html#reindex-upgrade-remote "Reindex from a remote cluster") to import indices directly from the 2.x cluster.
To upgrade an Elasticsearch 1.x cluster, you have two options:
* Perform a [full cluster restart upgrade](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/restart-upgrade.html "Full cluster restart upgrade") to Elasticsearch 2.4.x and [reindex](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/reindex-upgrade.html#reindex-upgrade-inplace "Reindex in place") or delete the 1.x indices. Then, perform a full cluster restart upgrade to 5.6 and reindex or delete the 2.x indices. Finally, perform a [rolling upgrade](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rolling-upgrades.html "Rolling upgrades") to 6.x. For more information about upgrading from 1.x to 2.4, see [Upgrading Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/setup-upgrade.html) in the Elasticsearch 2.4 Reference. For more information about upgrading from 2.4 to 5.6, see [Upgrading Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/setup-upgrade.html) in the Elasticsearch 5.6 Reference.
* Create a new 6.x cluster and [reindex from remote](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/reindex-upgrade.html#reindex-upgrade-remote "Reindex from a remote cluster") to import indices directly from the 1.x cluster.
## Upgrading time-based indices
If you use time-based indices, you likely won’t need to carry pre-5.x indices forward to 6.x. Data in time-based indices generally becomes less useful as time passes, and the indices are deleted as they age past your retention period.
Unless you have an unusually long retention period, you can simply wait to upgrade to 6.x until all of your pre-5.x indices have been deleted.
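If you are not sure which indices were created before 5.x, the index settings expose the creation version. The request below is a minimal sketch of such a check; `index.version.created` is the internal numeric version id, and a value beginning with `2` indicates an index created in 2.x that must be reindexed or deleted before the move to 6.x.
```
# List the creation version of every index
GET /_all/_settings/index.version.created
```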
## Reindex in place
To manually reindex your old indices with the [`reindex` API](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-reindex.html "Reindex API") (a console sketch of the full sequence follows this list):
1. Create a new index and copy the mappings and settings from the old index.
2. Set the `refresh_interval` to `-1` and the `number_of_replicas` to `0` for efficient reindexing.
3. Reindex all documents from the old index into the new index using the [reindex API](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-reindex.html "Reindex API").
4. Reset the `refresh_interval` and `number_of_replicas` to the values used in the old index.
5. Wait for the index status to change to `green`.
6. In a single [update aliases](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-aliases.html "Index Aliases") request:
a. Delete the old index.
b. Add an alias with the old index name to the new index.
c. Add any aliases that existed on the old index to the new index.
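The sequence below is a minimal console sketch of these steps for a hypothetical `old_index`/`new_index` pair with a single `message` field; in practice, copy the real mappings and settings from your old index, and restore the `refresh_interval` and `number_of_replicas` values your old index actually used (the values shown are placeholders).
```
# 1–2. Create the new index with the old index's mappings and settings,
#      disabling refresh and replicas for the duration of the reindex
PUT /new_index
{
  "settings": {
    "index.refresh_interval": "-1",
    "index.number_of_replicas": 0
  },
  "mappings": {
    "doc": {
      "properties": {
        "message": { "type": "text" }
      }
    }
  }
}

# 3. Copy all documents from the old index into the new one
POST _reindex
{
  "source": { "index": "old_index" },
  "dest":   { "index": "new_index" }
}

# 4. Restore the refresh interval and replica count used on the old index
PUT /new_index/_settings
{
  "index.refresh_interval": "1s",
  "index.number_of_replicas": 1
}

# 5. Wait for the new index to reach green
GET _cluster/health/new_index?wait_for_status=green

# 6. In one atomic request, drop the old index and point its name
#    (and any other aliases it carried) at the new index
POST _aliases
{
  "actions": [
    { "remove_index": { "index": "old_index" } },
    { "add": { "index": "new_index", "alias": "old_index" } }
  ]
}
```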
### Migration assistance and upgrade tools
X-Pack 5.6 provides migration assistance and upgrade tools that simplify reindexing and upgrading to 6.x. These tools are free with the X-Pack trial and Basic licenses, and you can use them to upgrade whether or not X-Pack is a regular part of your Elastic Stack. For more information, see the Elastic Stack upgrade guide.
## Reindex from a remote cluster
You can use [reindex from remote](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-reindex.html#reindex-from-remote "Reindex from Remote") to migrate indices from your old cluster to a new 6.x cluster. This enables you to move to 6.x from a pre-5.6 cluster without interrupting service.
> Warning
>
> Elasticsearch provides backwards compatibility support that enables indices from the previous major version to be upgraded to the current major version. Skipping a major version means that you must resolve any backward compatibility issues yourself.
To migrate your indices:
1. Set up a new 6.x cluster alongside your old cluster. Enable it to access your old cluster by adding your old cluster to the `reindex.remote.whitelist` in `elasticsearch.yml`:
```
reindex.remote.whitelist: oldhost:9200
```
> Note
>
> The new cluster doesn’t have to start fully-scaled out. As you migrate indices and shift the load to the new cluster, you can add nodes to the new cluster and remove nodes from the old one.
2. For each index that you need to migrate to the 6.x cluster:
a. Create a new index in 6.x with the appropriate mappings and settings. Set the `refresh_interval` to `-1` and set `number_of_replicas` to `0` for faster reindexing.
b. [Reindex from remote](https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-reindex.html#reindex-from-remote "Reindex from Remote") to pull documents from the old index into the new 6.x index:
```
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}
```
c. When the reindex job completes, set the `refresh_interval` and `number_of_replicas` to the desired values (the default settings are `30s` and `1`).
d. Once replication is complete and the status of the new index is `green`, you can delete the old index (see the sketch after this list).
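Steps c and d might look like the following sketch, reusing the `source`/`dest` index names from the example above; the settings values are placeholders for whatever you need, and the final `DELETE` is issued against the old cluster, where the `source` index lives.
```
# c. Restore refresh and replication on the new 6.x index
PUT /dest/_settings
{
  "index.refresh_interval": "30s",
  "index.number_of_replicas": 1
}

# d. Wait for the new index to reach green on the 6.x cluster...
GET _cluster/health/dest?wait_for_status=green

# ...then delete the old index (run this against the old cluster)
DELETE /source
```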