If you run a self-built ES cluster on Tencent Cloud, or an ES cluster bought from another cloud vendor, and want to migrate it to Tencent Cloud ES (this covers most ordinary index migrations), you can pick a migration approach that fits your business. If the business can stop serving traffic, or at least pause writes, the following kinds of approach can be used for data migration; the notes below walk through snapshot and restore via COS.
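As a quick orientation, here is a minimal sketch of the snapshot-and-restore flow through COS; the repository name my_cos_backup and the snapshot name snapshot_1 are placeholders, and the full repository settings are shown later in these notes:
# 1. Register the same COS repository on the source and on the target cluster
#    (see the PUT _snapshot examples further down for the settings)
# 2. On the source cluster, take a snapshot of the indices to migrate
PUT _snapshot/my_cos_backup/snapshot_1?wait_for_completion=false
# 3. On the target cluster, restore from that snapshot
POST _snapshot/my_cos_backup/snapshot_1/_restore
{
  "indices": "*,-.monitoring*,-.security*,-.kibana*",
  "ignore_unavailable": true
}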
A typical failure when registering or verifying the COS repository looks like this; here the configured bucket could not be found:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[my_cos_backup] path is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[my_cos_backup] path is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Exception when write blob master.dat",
      "caused_by" : {
        "type" : "cos_service_exception",
        "reason" : "cos_service_exception: The specified bucket does not exist. (Status Code: 404; Error Code: NoSuchBucket; Request ID: NjUzYzkwYmRfMzAxNzUyMWVfMjJmYmNfYTJkOGY1Ng==); Trace ID: OGVmYzZiMmQzYjA2OWNhODk0NTRkMTBiOWVmMDAxODc1NGE1MWY0MzY2NTg1MzM1OTY3MDliYzY2YTQ0ZThhMDFhOWZlZTQxMzRkMTQ2NGM4MmFlZDk1MTQzM2UyMTll"
      }
    }
  },
  "status" : 500
}
In the repository settings:
bucket: the name of the COS bucket, without the appId suffix
app_id: the APPID of the Tencent Cloud account
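After correcting the bucket and app_id values, the repository can be re-verified on its own; a minimal check, assuming the repository name my_cos_backup from the error above:
POST _snapshot/my_cos_backup/_verify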
First delete the indices that were already restored, then re-run the restore with the following extra parameters, so that the index.routing.allocation.require.temperature setting carried over from the source cluster is ignored:
POST _snapshot/cos_backup/snapshot_<name>/_restore
{
  "indices": "*,-.monitoring*,-.security*,-.kibana*",
  "ignore_unavailable": true,
  "ignore_index_settings": [
    "index.routing.allocation.require.temperature"
  ]
}
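The progress of the re-run restore can be followed through the recovery API; a minimal check:
# shard recoveries that are still running (snapshot restores show up here)
GET _cat/recovery?v&active_only=true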
Creating the COS snapshot repository with curl:
curl -u 'elastic:xxxx' -X PUT 'http://xxxxx:9200/_snapshot/my_cos_repository' -H "Content-Type: application/json" -d '
{
  "type": "cos",
  "settings": {
    "bucket": "xxx",
    "region": "ap-shanghai",
    "access_key_id": "XXX",
    "access_key_secret": "XXX",
    "base_path": "/",
    "app_id": "xxxx"
  }
}
'
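To confirm the repository was registered, its definition can be read back; a minimal check, assuming the name my_cos_repository used above:
GET _snapshot/my_cos_repository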
On version 8.8.1 the repository has to be created like this, with the client settings nested under cos.client:
PUT _snapshot/my_cos_backup
{
  "type": "cos",
  "settings": {
    "compress": true,
    "chunk_size": "500mb",
    "cos": {
      "client": {
        "app_id": "xxxx",
        "access_key_id": "xxxx",
        "access_key_secret": "xxxx",
        "bucket": "xxxx",
        "region": "ap-guangzhou",
        "base_path": "/"
      }
    }
  }
}
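Whichever settings format your version needs, the repository contents can be inspected before restoring; a minimal sketch, assuming the repository name my_cos_backup and a hypothetical snapshot called snapshot_1:
# snapshots stored in the repository
GET _snapshot/my_cos_backup/_all
# detailed progress of a single snapshot
GET _snapshot/my_cos_backup/snapshot_1/_status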
After restoring, Kibana can fail with an alias conflict error like the following:
{
  "statusCode": 400,
  "error": "Bad Request",
  "message": "Alias [.kibana] has more than one indices associated with it [[.kibana_2_backup, .kibana_1]], can't execute a single index op: [illegal_argument_exception] Alias [.kibana] has more than one indices associated with it [[.kibana_2_backup, .kibana_1]], can't execute a single index op"
}
When the ES backup was restored, the source cluster's .kibana_1 and .kibana_2 indices were copied over as well. .kibana_2 also carries the alias .kibana, so the alias ends up pointing at more than one index, which causes the conflict.
First remove the .kibana alias from .kibana_2:
POST _aliases
{
  "actions": [
    {
      "remove": {
        "index": ".kibana_2",
        "alias": ".kibana"
      }
    }
  ]
}
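To confirm that the alias now points at a single index, it can be looked up directly; a minimal check:
GET _alias/.kibana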
Disable automatic index creation, so that indices are not recreated by clients while the restore runs:
PUT _cluster/settings
{
  "persistent": {
    "action.auto_create_index": false
  }
}
Once these steps are done, run the restore again.
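After the restore has finished, automatic index creation can be turned back on; a minimal sketch that simply resets the setting to its default:
PUT _cluster/settings
{
  "persistent": {
    "action.auto_create_index": null
  }
}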
{"unassigned_info":{"reason":"EXISTING_INDEX_RESTORED","details":"restore_source[my_cos_backup/snapshot_2]"},"node_allocation_decisions":[{"deciders":[{"explanation":"there are too many copies of the shard allocated to nodes with attribute [set], there are [3] total configured shard copies for this shard id and [3] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"}]},{"deciders":[{"explanation":"there are too many copies of the shard allocated to nodes with attribute [set], there are [3] total configured shard copies for this shard id and [3] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"}]},{"deciders":[{"explanation":"there are too many copies of the shard allocated to nodes with attribute [set], there are [3] total configured shard copies for this shard id and [3] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"}]},{"deciders":[{"explanation":"there are too many copies of the shard allocated to nodes with attribute [set], there are [3] total configured shard copies for this shard id and [3] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"}]},{"deciders":[{"explanation":"the shard cannot be allocated to the same node on which a copy of the shard already exists [[sel_pitem_his][2], node[Tg4tV6mcT22SO_0ZCfWHWA], [R], s[STARTED], a[id=X7H-v4fRQTaWnOPsu3j7KA]]"},{"explanation":"there are too many copies of the shard allocated to nodes with attribute [set], there are [3] total configured shard copies for this shard id and [3] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"}]},{"deciders":[{"explanation":"the shard cannot be allocated to the same node on which a copy of the shard already exists [[sel_pitem_his][2], node[b2RSlGbNR82IU_1j1HShJw], [P], s[STARTED], a[id=UXIVucaDQkyNSQ5syzrkwA]]"},{"explanation":"there are too many copies of the shard allocated to nodes with attribute [set], there are [3] total configured shard copies for this shard id and [3] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"}]}]}
Solution 1
Work out, from the cluster topology, which replica counts are safe.
Replica counts that do not need adjustment must satisfy the following formula (the extra +1 accounts for the additional [set] attribute value that exists while old and new nodes coexist, so the per-attribute shard limit does not change during the migration):
ceil((replicas + 1) / (number_of_AZs + 1)) = ceil((replicas + 1) / number_of_AZs)
replicas <= total number of nodes - 1
Take 2 availability zones with 3 nodes per zone as an example:
the usable values of replicas are 0, 1 and 3.
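A quick check of the formula for this example (2 availability zones, 6 nodes in total, so replicas <= 5):
replicas = 0: ceil(1/3) = 1 = ceil(1/2)        -> no adjustment needed
replicas = 1: ceil(2/3) = 1 = ceil(2/2)        -> no adjustment needed
replicas = 2: ceil(3/3) = 1, ceil(3/2) = 2     -> must be adjusted
replicas = 3: ceil(4/3) = 2 = ceil(4/2)        -> no adjustment needed
replicas = 4: ceil(5/3) = 2, ceil(5/2) = 3     -> must be adjusted
replicas = 5: ceil(6/3) = 2, ceil(6/2) = 3     -> must be adjusted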
Then fetch the shard and replica counts of every index in the cluster (see the check below); if any index does not match the values above, the workflow is suspended until the replicas have been adjusted.
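The per-index primary and replica counts can be pulled with the cat API; a minimal check:
# index name, primary shard count, replica count for every index
GET _cat/indices?v&h=index,pri,rep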
Solution 2
A different approach: the shard migration exception is caused by the [set] attribute in cluster.routing.allocation.awareness.attributes. Once the workflow has reached the checkScaleInCvmCluster step, run the following command:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "ip"
  },
  "transient": {
    "cluster.routing.allocation.awareness.attributes": "ip"
  }
}
After this command has been executed, the stuck shards migrate automatically and the nodes in the old availability zone are taken offline automatically; "cluster.routing.allocation.awareness.attributes": "ip" is then reverted automatically to "cluster.routing.allocation.awareness.attributes": "set,ip".
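To watch the shards drain and to confirm the awareness setting, the following checks can be used (a sketch, not tied to any particular index):
# shard states; RELOCATING entries should disappear as the migration finishes
GET _cat/shards?v&h=index,shard,prirep,state,node
# current value of cluster.routing.allocation.awareness.attributes
GET _cluster/settings?flat_settings=true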