### kube-apiserver High Availability
Following the steps above, install kube-apiserver, kube-controller-manager, and kube-scheduler on both master01 and master02. At this point, though, we still have to specify ports 6443 and 8080 by hand when talking to the apiserver, because the domain k8s-api.virtual.local, which points to the master01 node, cannot yet be reached over plain http and https. We use haproxy here to proxy those requests.
> In other words, requests arriving on the default http port 80 need to be forwarded to the apiserver's port 8080, and requests on the default https port 443 to the apiserver's port 6443; that forwarding is what we use haproxy for.
#### Install haproxy
```shell
$ yum install -y haproxy
```
#### Configure haproxy
Edit `/etc/haproxy/haproxy.cfg` and append the following frontend and backend sections:
```shell
frontend k8s-api
    bind 192.168.10.55:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-api-1 192.168.10.55:6443 check
    server k8s-api-2 192.168.10.56:6443 check

frontend k8s-http-api
    bind 192.168.10.55:80
    mode tcp
    option tcplog
    default_backend k8s-http-api

backend k8s-http-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-http-api-1 192.168.10.55:8080 check
    server k8s-http-api-2 192.168.10.56:8080 check
```
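Before starting haproxy it is worth validating the configuration syntax:
```shell
$ haproxy -c -f /etc/haproxy/haproxy.cfg
```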
#### Start haproxy
```shell
$ sudo systemctl start haproxy
$ sudo systemctl enable haproxy
$ sudo systemctl status haproxy
```
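If everything started correctly, haproxy should now be listening on ports 80 and 443 of 192.168.10.55:
```shell
$ sudo ss -lntp | grep haproxy
```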
> 然后我们可以通过上面9000端口监控我们的haproxy的运行状态(192.168.10.65:9000/stats):
![](https://box.kancloud.cn/2449dbe277fd60b5a452523fd07a5813_1862x961.png)
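The `listen` section that exposes this stats page is not included in the configuration above; a minimal sketch, assuming `/stats` as the URI and no authentication (both assumptions, adjust to taste), would be:
```shell
# Stats page on port 9000 (sketch; uri and lack of auth are assumptions)
listen stats
    bind *:9000
    mode http
    stats enable
    stats uri /stats
    stats refresh 30s
```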
#### Install keepalived
> Keepalived is a high-availability solution based on a VIP (virtual IP) and heartbeat checks. The idea is to have a pair of servers, one assigned the Master role and the other the Backup role. By default the Master binds the VIP to its own NIC and serves traffic. Master and Backup send heartbeat packets to each other at a fixed interval, typically 2 seconds, to check each other's state; if the Backup detects that the Master is down, it sends ARP packets to the gateway and binds the VIP to its own NIC, taking over the service and completing the failover automatically. When the Master recovers, it takes the service back. This is very similar to the Virtual Router Redundancy Protocol (VRRP) used by routers.
**Enable IP forwarding; the virtual IP we use here is 192.168.10.69**
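The forwarding step itself is not shown in the original; a minimal sketch using sysctl (the drop-in file name `/etc/sysctl.d/k8s-keepalived.conf` is an arbitrary choice) might be:
```shell
# Enable IP forwarding persistently and apply it immediately
$ echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/k8s-keepalived.conf
$ sudo sysctl -p /etc/sysctl.d/k8s-keepalived.conf
```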
```shell
$ vi /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
    }
    router_id kube_api
}

vrrp_script check_k8s {
    script "/etc/keepalived/chk_k8s_master.sh"
    interval 3
    weight 5
}

vrrp_instance APISERVER {
    unicast_src_ip 192.168.10.55
    unicast_peer {
        192.168.10.56
    }
    state BACKUP
    interface enp6s0
    virtual_router_id 41
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.10.69 dev enp6s0 label enp6s0:vip
    }
    track_script {
        check_k8s
    }
}
```
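The health-check script `/etc/keepalived/chk_k8s_master.sh` referenced by `vrrp_script` above is not shown in the original. A minimal sketch, assuming we only want to confirm that the local apiserver still answers on its insecure port 8080 (the port used in the haproxy backend above), could look like this; remember to make it executable with `chmod +x`:
```shell
#!/bin/bash
# /etc/keepalived/chk_k8s_master.sh (sketch, not from the original)
# Exit 0 if the local apiserver answers on port 8080, non-zero otherwise;
# keepalived's track_script uses the exit code to adjust this node's priority.
if curl -s --connect-timeout 3 http://127.0.0.1:8080/healthz >/dev/null; then
    exit 0
else
    exit 1
fi
```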
**Start keepalived**
```shell
$ systemctl start keepalived
$ systemctl enable keepalived
# view the logs
$ journalctl -f -u keepalived
```
**Verify that the virtual IP is configured correctly**
```shell
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:60:6e:46:7a:c0 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.55/24 brd 192.168.10.255 scope global enp6s0
valid_lft forever preferred_lft forever
inet6 fe80::58c6:152e:edc0:4c4c/64 scope link
valid_lft forever preferred_lft forever
3: enp7s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 08:60:6e:46:7a:c1 brd ff:ff:ff:ff:ff:ff
```
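To confirm that failover actually works, a simple check (a sketch, assuming master01 currently holds the VIP) is to stop keepalived on master01 and watch the VIP move to master02:
```shell
# On master01: simulate a failure
$ sudo systemctl stop keepalived
# On master02: the VIP should show up on enp6s0 within a few seconds
$ ip addr show enp6s0 | grep 192.168.10.69
# Restore master01 afterwards
$ sudo systemctl start keepalived
```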