## Deploying a Highly Available etcd Cluster
Kubernetes uses `etcd` to store all of its data. Here we deploy a 3-node etcd cluster; the three nodes use the IPs 192.168.10.65, 192.168.10.64, and 192.168.10.63 and are named `kube-node-65`, `kube-node-64`, and `kube-node-63` respectively:
* kube-node-65 / 192.168.10.65
* kube-node-64 / 192.168.10.64
* kube-node-63 / 192.168.10.63
#### Define environment variables
```shell
$ export NODE_NAME=kube-node-65 # name of the machine being deployed (any value, as long as it distinguishes the machines)
$ export NODE_IP=192.168.10.65 # IP of the machine being deployed
$ export NODE_IPS="192.168.10.65 192.168.10.64 192.168.10.63" # IPs of all machines in the etcd cluster
# IPs and ports used for communication between etcd cluster members (names must match --name on each node)
$ export ETCD_NODES=kube-node-65=https://192.168.10.65:2380,kube-node-64=https://192.168.10.64:2380,kube-node-63=https://192.168.10.63:2380
$ # import the other global variables used below: ETCD_ENDPOINTS, FLANNEL_ETCD_PREFIX, CLUSTER_CIDR
$ source /usr/k8s/bin/env.sh
```
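The contents of `/usr/k8s/bin/env.sh` come from the environment-preparation step. As a reference, a minimal sketch covering only the variables used in this section might look like the following; the `FLANNEL_ETCD_PREFIX` and `CLUSTER_CIDR` values here are example assumptions, adjust them to your own cluster:
```shell
# /usr/k8s/bin/env.sh -- minimal sketch, values are examples only
# client endpoints of the etcd cluster, consumed later by kube-apiserver and flanneld
export ETCD_ENDPOINTS="https://192.168.10.65:2379,https://192.168.10.64:2379,https://192.168.10.63:2379"
# etcd prefix under which flannel stores its network configuration (assumed value)
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# pod network CIDR used by flannel (assumed value)
export CLUSTER_CIDR="172.30.0.0/16"
```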
#### Download the etcd binaries
```shell
$ wget https://github.com/coreos/etcd/releases/download/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz
$ tar -xvf etcd-v3.2.9-linux-amd64.tar.gz
$ sudo mv etcd-v3.2.9-linux-amd64/etcd* /usr/k8s/bin/
```
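Optionally, confirm that the binaries are in place and report the expected version:
```shell
$ /usr/k8s/bin/etcd --version
$ ETCDCTL_API=3 /usr/k8s/bin/etcdctl version
```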
#### Create the TLS key and certificates
> To secure communication, traffic between clients and the etcd cluster, and between etcd members themselves, must be encrypted with TLS.
**Create the etcd certificate signing request:**
```shell
$ cat > etcd-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"${NODE_IP}"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
```
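Note that the heredoc above is unquoted, so `${NODE_IP}` is expanded by the shell when the file is written. A quick sanity check that the substitution happened:
```shell
$ cat etcd-csr.json   # the "hosts" list should contain this node's IP, not a literal ${NODE_IP}
```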
**Generate the etcd certificate and key:**
```shell
$ cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
$ ls etcd*
etcd.csr etcd-csr.json etcd-key.pem etcd.pem
$ sudo mkdir -p /etc/etcd/ssl
$ sudo mv etcd*.pem /etc/etcd/ssl/
```
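If you want to double-check the generated certificate, for example that the node IP ended up in the Subject Alternative Names, you can inspect it with openssl:
```shell
$ openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
```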
#### Create the etcd systemd unit file
```shell
$ sudo mkdir -p /var/lib/etcd # the working directory must be created first
$ cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/k8s/bin/etcd \\
--name=${NODE_NAME} \\
--cert-file=/etc/etcd/ssl/etcd.pem \\
--key-file=/etc/etcd/ssl/etcd-key.pem \\
--peer-cert-file=/etc/etcd/ssl/etcd.pem \\
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
--initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
--listen-peer-urls=https://${NODE_IP}:2380 \\
--listen-client-urls=https://${NODE_IP}:2379,http://127.0.0.1:2379 \\
--advertise-client-urls=https://${NODE_IP}:2379 \\
--initial-cluster-token=etcd-cluster-0 \\
--initial-cluster=${ETCD_NODES} \\
--initial-cluster-state=new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
```
**For example, the file generated on the kube-node-65 machine:**
```shell
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/k8s/bin/etcd \
--name=kube-node-65 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://192.168.10.65:2380 \
--listen-peer-urls=https://192.168.10.65:2380 \
--listen-client-urls=https://192.168.10.65:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://192.168.10.65:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=kube-node-65=https://192.168.10.65:2380,kube-node-64=https://192.168.10.64:2380,kube-node-63=https://192.168.10.63:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
```
#### Start the etcd service
```shell
$ sudo mv etcd.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable etcd
$ sudo systemctl start etcd
$ sudo systemctl status etcd
```
> The first etcd process to start will hang for a while, waiting for the other members to start and join the cluster. Repeat the steps above on every etcd node until the etcd service is running on all machines.
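If a node's etcd service fails to start, or stays stuck much longer than expected, check its logs, for example:
```shell
$ sudo journalctl -u etcd -f
```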
#### Verify the etcd service
After the etcd cluster has been deployed, run the following command on any etcd node:
```shell
for ip in ${NODE_IPS}; do
ETCDCTL_API=3 /usr/k8s/bin/etcdctl \
--endpoints=https://${ip}:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
endpoint health; done
```
The output should look like this:
```
https://192.168.10.65:2379 is healthy: successfully committed proposal: took = 963.711µs
https://192.168.10.64:2379 is healthy: successfully committed proposal: took = 2.420937ms
https://192.168.10.63:2379 is healthy: successfully committed proposal: took = 2.555773ms
```
> The output above shows that etcd on all three nodes is healthy, which means the cluster is working correctly.
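Optionally, you can also list the cluster members, for example from any node (the member IDs will differ in your environment):
```shell
$ ETCDCTL_API=3 /usr/k8s/bin/etcdctl \
  --endpoints=https://${NODE_IP}:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  member list
```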