Scale the original single replica of `ingress-nginx` up to multiple replicas, then provide a VIP for access.
Any of the following three approaches can achieve high availability:
1. LoadBalancer
2. NodePort + VIP
3. hostPort + VIP
- `LoadBalancer` is normally used on public clouds, but a self-managed cluster can get the same behavior by installing `MetalLB` (see the sketch below).
- The `MetalLB` installation docs are at https://metallb.universe.tf/installation/
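For reference, a minimal MetalLB layer-2 setup might look like the following. The manifest version (`v0.13.12`) and the address pool range are assumptions; check the installation page above for current instructions.
```shell
# Install MetalLB (native manifests; pin a version that fits your cluster)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

# Give MetalLB a pool of addresses to hand out, advertised via layer 2
cat <<-EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.31.190-192.168.31.199
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - ingress-pool
EOF
```
With this in place, switching the ingress-nginx Service to type `LoadBalancer` gets it an address from the pool.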
This guide demonstrates the `hostPort + keepalived + nginx` combination, which provides both high availability and high concurrency.
## Install nginx
**Create the directories**
```shell
mkdir -p /etc/nginx/{conf.d,stream}
```
**Main nginx configuration**
```shell
cat <<-"EOF" | sudo tee /etc/nginx/nginx.conf > /dev/null
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
stream {
log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
'"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"';
include /etc/nginx/stream/*.conf;
}
EOF
```
**Layer-4 proxy for the ingress service**
```shell
cat <<-"EOF" | sudo tee /etc/nginx/stream/ingress.conf > /dev/null
upstream http {
server 192.168.31.103:80 max_fails=3 fail_timeout=5s;
server 192.168.31.79:80 max_fails=3 fail_timeout=5s;
}
server {
listen 80;
# proxy_protocol on;
proxy_pass http;
access_log /var/log/nginx/ingress_http_tcp_access.log proxy;
error_log /var/log/nginx/ingress_http_tcp_error.log;
}
upstream https {
server 192.168.31.103:443 max_fails=3 fail_timeout=5s;
server 192.168.31.79:443 max_fails=3 fail_timeout=5s;
}
server {
listen 443;
# proxy_protocol on;
proxy_pass https;
access_log /var/log/nginx/ingress_https_tcp_access.log proxy;
error_log /var/log/nginx/ingress_https_error.log;
}
EOF
```
> Note: replace the `server` entries with the actual IP addresses of the master nodes running the ingress controller.
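To confirm which nodes the controller pods landed on, and therefore which IPs belong in the upstreams, something like this helps; the namespace and label assume the upstream ingress-nginx manifests:
```shell
# Show the controller pods together with the nodes/IPs they run on
kubectl -n ingress-nginx get pods -o wide -l app.kubernetes.io/name=ingress-nginx

# With hostPort, each node should answer directly; a 404 from the
# default backend means the controller is reachable
curl -sI http://192.168.31.103/ | head -n 1
```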
**docker-compose configuration**
```shell
cat <<-EOF | sudo tee /etc/nginx/docker-compose.yaml > /dev/null
version: "3"
services:
  nginx:
    container_name: nginx
    image: nginx:1.21-alpine
    volumes:
      - "./stream:/etc/nginx/stream:ro"
      - "./conf.d:/etc/nginx/conf.d:ro"
      - "./nginx.conf:/etc/nginx/nginx.conf:ro"
      - "./logs:/var/log/nginx"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro"
    restart: always
    ports:
      - "6443:6443"
      - "80:80"
      - "443:443"
EOF
```
**Start nginx**
```shell
docker-compose -f /etc/nginx/docker-compose.yaml up -d
```
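Before moving on, check that the container came up and the stream listeners are bound (assuming `ss` from iproute2 on the host):
```shell
# Confirm the container is running and ports 80/443 are listening
docker ps --filter name=nginx
ss -lnt | grep -E ':(80|443)\s'

# After editing the stream configs, reload without restarting the container
docker exec nginx nginx -t && docker exec nginx nginx -s reload
```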
## Install keepalived
**keepalived configuration**
```shell
$ sudo mkdir -p /etc/keepalived
$ cat <<-EOF | sudo tee -a /etc/keepalived/keepalived.conf > /dev/null
include /etc/keepalived/keepalived_ingress.conf
EOF
$ cat <<-EOF | sudo tee /etc/keepalived/keepalived_ingress.conf > /dev/null
vrrp_script ingress {
    # path of the health-check script
    script "/etc/keepalived/chk_ingress.sh"
    # user that runs the script
    user root
    # seconds between script invocations
    interval 1
    # consecutive failures required to mark the check as down
    fall 5
    # consecutive successes required to mark the check as up
    rise 3
    # adjust the priority by this weight
    weight -50
}
vrrp_instance ingress {
    # initial role of this node: MASTER or BACKUP
    state MASTER
    # interface on the inside network that VRRP binds to
    interface eth0
    # virtual router id; nodes sharing the same id form one master/backup group
    virtual_router_id 200
    # base priority; the effective priority is computed as
    #   (1) priority + weight on check success, when weight is positive
    #   (2) priority + weight on check failure, when weight is negative (priority drops)
    priority 200
    # authentication for joining the VRRP group
    authentication {
        auth_type PASS
        auth_pass pwd200
    }
    # run keepalived in unicast mode
    ## unicast source address
    unicast_src_ip 192.168.31.103
    ## unicast peer addresses
    unicast_peer {
        192.168.31.79
    }
    # the VIP
    virtual_ipaddress {
        192.168.31.188
    }
    # health-check script to track
    track_script {
        ingress
    }
}
EOF
```
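The file above is written for the MASTER node (`192.168.31.103`). The second node needs the mirror image of this configuration; only the fields below change (the backup priority of `150` is an assumption, any value lower than `200` works):
```shell
state BACKUP
priority 150
unicast_src_ip 192.168.31.79
unicast_peer {
    192.168.31.103
}
```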
**keepalived health-check script**
```shell
$ cat <<-EOF | sudo tee /etc/keepalived/chk_ingress.sh > /dev/null
#!/bin/sh
# count listeners on ports 80 and 443 (the trailing space avoids
# matching longer ports such as :8080)
count=\$(netstat -lntup | egrep ":443 |:80 " | wc -l)
if [ "\$count" -ge 2 ];then
    # exit status 0: the check succeeded
    exit 0
else
    # exit status 1: the check failed
    exit 1
fi
EOF
$ chmod +x /etc/keepalived/chk_ingress.sh
```
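The script can be exercised by hand before wiring it into keepalived:
```shell
# Exit status 0 means both ports are up; 1 means this node should lose the VIP
sh /etc/keepalived/chk_ingress.sh; echo $?
```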
**docker-compose for keepalived**
```shell
$ cat <<-EOF | sudo tee /etc/keepalived/docker-compose.yaml > /dev/null
version: "3"
services:
  keepalived:
    container_name: keepalived
    image: jiaxzeng/keepalived:2.2.7-alpine3.12
    volumes:
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/localtime"
      - ".:/etc/keepalived"
    cap_add:
      - NET_ADMIN
    network_mode: "host"
    restart: always
EOF
```
**Start keepalived**
```shell
docker-compose -f /etc/keepalived/docker-compose.yaml up -d
```
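Once both nodes are running, the VIP should be bound on the MASTER, and stopping nginx there should move it to the peer within a few seconds (`fall 5` × `interval 1`s, plus VRRP timers):
```shell
# On the MASTER: the VIP should show up on eth0
ip addr show eth0 | grep 192.168.31.188

# Simulate a failure: the check script starts failing, priority drops by 50,
# and the BACKUP node takes over the VIP
docker stop nginx
# ...verify on the peer node, then recover:
docker start nginx
```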
## Modify ingress-nginx
```yaml
# add or modify replicas in the Deployment
replicas: 2
# add affinity under deploy.spec.template.spec, so the replicas
# prefer to land on different nodes
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: ingress-nginx
        topologyKey: kubernetes.io/hostname
```
> The `ingress-nginx-controller` pods must be restarted for the changes to take effect.
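The same change can also be applied imperatively; the deployment name below assumes the upstream ingress-nginx manifests:
```shell
# Scale the controller to two replicas
kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=2

# Restart the pods so they are rescheduled with the new affinity in place
kubectl -n ingress-nginx rollout restart deployment ingress-nginx-controller
kubectl -n ingress-nginx get pods -o wide
```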
## Additional iptables rules
```shell
iptables -I INPUT -p tcp -m multiport --dports 80,443,8443 -m comment --comment "nginx ingress controller external ports" -j ACCEPT
```
> Ports `80`, `443`, and `8443` are the ports exposed by the `ingress-nginx-controller`.
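Rules inserted with `iptables -I` do not survive a reboot. How to persist them is distribution-specific; on Debian/Ubuntu with `iptables-persistent` it might look like this (the path is an assumption for other distributions):
```shell
# Save the current ruleset so it is restored at boot
iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null
```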