## kube-proxy deployment (connects the Pod network with the cluster network)
### 1、Issue the certificate
```
vi /opt/certs/kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
```
#### Generate the certificate
```
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
# -profile=client is the same client profile used earlier for the kubelet client certificate,
# but that certificate cannot be reused here: the CN must now be "system:kube-proxy"
```
#### Distribute the certificate
```
scp kube-proxy-client-key.pem kube-proxy-client.pem hdss7-21:/opt/kubernetes/server/bin/cert/
```
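Before building the kubeconfig it is worth confirming the distributed certificate really carries `CN=system:kube-proxy`. The sketch below is self-contained so it is runnable anywhere: it inspects a throwaway self-signed certificate with the same subject, since the real file only exists on the node. On hdss7-21 you would point `openssl x509` at `/opt/kubernetes/server/bin/cert/kube-proxy-client.pem` instead.

```shell
# Self-contained sketch: generate a throwaway cert with the same subject,
# then inspect it the way you would inspect the real kube-proxy-client.pem.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/demo.pem" \
  -subj "/C=CN/ST=beijing/L=beijing/O=od/OU=ops/CN=system:kube-proxy" 2>/dev/null
# On a real node, replace demo.pem with /opt/kubernetes/server/bin/cert/kube-proxy-client.pem
subj=$(openssl x509 -noout -subject -in "$tmpdir/demo.pem")
echo "$subj"
rm -rf "$tmpdir"
```

The subject printed should contain `CN = system:kube-proxy`; if it does not, the kubeconfig built from it will authenticate as the wrong identity.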
### 2、Create the kube-proxy kubeconfig
Generate it on one node, then distribute it for use on all worker nodes.
```
cd /opt/kubernetes/server/bin/conf/    # note: run this from the conf directory
kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
```
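For reference, the four `kubectl config` commands above assemble a file with the standard kubeconfig v1 `Config` layout. The block below writes an abbreviated, hand-written illustration of that shape to a temp file (this is not captured output; the real file embeds base64 blobs because of `--embed-certs=true`).

```shell
# Illustration of the kubeconfig v1 shape produced by the commands above;
# the <base64 ...> placeholders stand in for the embedded certificate data.
kc=$(mktemp)
cat > "$kc" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: myk8s
  cluster:
    server: https://10.4.7.10:7443
    certificate-authority-data: <base64 ca.pem>
users:
- name: kube-proxy
  user:
    client-certificate-data: <base64 kube-proxy-client.pem>
    client-key-data: <base64 kube-proxy-client-key.pem>
contexts:
- name: myk8s-context
  context:
    cluster: myk8s
    user: kube-proxy
current-context: myk8s-context
EOF
grep "current-context" "$kc"
```

On a real node you can print the same structure (with the blobs redacted) via `kubectl config view --kubeconfig=kube-proxy.kubeconfig`.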
#### Once created, distribute it to the other two nodes
```
scp kube-proxy.kubeconfig hdss7-22:/opt/kubernetes/server/bin/conf/
scp kube-proxy.kubeconfig hdss7-23:/opt/kubernetes/server/bin/conf/
```
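The two scp commands above can be driven by a loop as the node list grows. The sketch below defaults to a dry run so it is runnable anywhere (it only prints the commands); unset `DRY_RUN` on the real operations host. Node names are taken from the original.

```shell
# Dry-run distribution loop: prints the scp commands instead of running them.
DRY_RUN=1
for node in hdss7-22 hdss7-23; do
  cmd="scp /opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig ${node}:/opt/kubernetes/server/bin/conf/"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```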
### 3、Load the ipvs kernel modules so kube-proxy can use ipvs scheduling (do this now if it was skipped during preparation)
kube-proxy has three traffic scheduling modes: userspace (lots of switching between user space and kernel space, very resource-hungry), iptables (the long-time standard, but with only crude load balancing and no real scheduling algorithms), and ipvs, which performs best. The script below loads the ipvs kernel modules.
```
[root@hdss7-21 ~]# lsmod | grep ip_vs    # check ipvs modules; empty output means none are loaded
[root@hdss7-21 ~]# vi ipvs.sh
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
/sbin/modinfo -F filename $i &>/dev/null
if [ $? -eq 0 ];then
/sbin/modprobe $i
fi
done
[root@hdss7-21 ~]# chmod a+x ipvs.sh
[root@hdss7-21 ~]# ./ipvs.sh
[root@hdss7-21 ~]# lsmod | grep ip_vs    # check again (one module per scheduling algorithm)
ip_vs_ftp 13079 0
nf_nat 26583 3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
ip_vs_sed 12519 0
ip_vs_nq 12516 0
ip_vs_sh 12688 0
ip_vs_dh 12688 0
ip_vs_lblcr 12922 0
ip_vs_lblc 12819 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs_wlc 12519 0
ip_vs_lc 12516 0
ip_vs 145458 22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack 139264 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
Notes on the scheduling algorithms (one kernel module each):
Static (fixed) algorithms, which ignore current server load:
rr (ip_vs_rr): Round-Robin Scheduling
wrr (ip_vs_wrr): Weighted Round-Robin Scheduling
dh (ip_vs_dh): Destination Hashing Scheduling
sh (ip_vs_sh): Source Hashing Scheduling
Dynamic algorithms, which take current server load into account:
lc (ip_vs_lc): Least-Connection Scheduling
wlc (ip_vs_wlc): Weighted Least-Connection Scheduling
lblc (ip_vs_lblc): Locality-Based Least Connections Scheduling
lblcr (ip_vs_lblcr): Locality-Based Least Connections with Replication Scheduling
sed (ip_vs_sed): Shortest Expected Delay Scheduling
nq (ip_vs_nq): Never Queue Scheduling
lblc, lblcr, dh, and sh are rarely used (mainly for CDNs serving purely static content); sed and nq are common.
```
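The modules loaded by ipvs.sh do not survive a reboot. One common fix (an assumption about your setup, not part of the original notes) is a systemd modules-load file. The sketch below writes the list to a temp file so it is runnable anywhere; on a real node the target would be `/etc/modules-load.d/ipvs.conf`, and systemd-modules-load will modprobe each entry at boot.

```shell
# Persistence sketch: one module name per line, the format expected by
# systemd-modules-load. Writing to a temp file here; on a real node use
# /etc/modules-load.d/ipvs.conf instead.
conf=$(mktemp)
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_nq nf_conntrack; do
  echo "$mod"
done > "$conf"
cat "$conf"
```

The module list here is a minimal illustrative subset; in practice you would include every `ip_vs_*` module your chosen scheduler needs.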
### 4、Create the startup script
```
vi /opt/kubernetes/server/bin/kube-proxy.sh
#!/bin/sh
./kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override hdss7-21.host.com \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig
```
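Because of `--hostname-override`, kube-proxy.sh differs on every node. A small sed pass can derive the other nodes' copies from the hdss7-21 version. The sketch below works on a temp copy so it is runnable anywhere; on the real nodes the output would be written to `/opt/kubernetes/server/bin/kube-proxy.sh`.

```shell
# Per-node sketch: rewrite the node name in the startup script for hdss7-22.
src=$(mktemp)
cat > "$src" <<'EOF'
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override hdss7-21.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
EOF
out=$(sed 's/hdss7-21/hdss7-22/' "$src")
echo "$out"
rm -f "$src"
```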
#### Make the script executable and create the log directory
```
chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
mkdir -p /data/logs/kubernetes/kube-proxy
```
#### Create the supervisord config file
```
vi /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy]
command=/opt/kubernetes/server/bin/kube-proxy.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
```
#### Reload the supervisord config and check startup
```
supervisorctl update
supervisorctl status
```
### 5、Create a resource manifest to pull an nginx image and start a pod controller
```
vim nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
```
#### Create and check
```
kubectl create -f nginx-ds.yaml
kubectl get pods -o wide
```
![](https://img.kancloud.cn/f9/dc/f9dcbe0124a2f10eea83aaceeb7d7305_838x94.png)