# Instructor: Zhang Changzhi
Blockchain and big-data project instructor; Java developer with 10+ years of software development and corporate training experience, having delivered in-house training for many large enterprises.
Areas of expertise:
- Python
- Java: SSM, Spring Boot, Spring Cloud, and related parts of the Java stack
- Big data: Hadoop, HDFS, MapReduce, HBase, Kafka, Spark, CDH 5.3.x clusters

10+ years of software development and corporate training experience, with rich enterprise application development experience and a solid foundation in software architecture theory and practice. Has provided corporate training for Sinopec, China Unicom, China Mobile, and other well-known enterprises.
Project history: a recommendation system based on big-data technology, e-commerce big-data analysis and statistical inference, an H5 cross-platform app, a telecom system, and a Storm/ZooKeeper-like framework implemented in Go.
# 1. A quick visual overview of k8s
![1564803707075](assets/1564803707075.png)
# 2. System architecture
![1564804598545](assets/1564804598545.png)
# 3. Components and their functions
## 3.1 Master (control-plane node)
- kube-apiserver
The Kubernetes API server: the unified entry point to the cluster and the coordination point for the other components. It exposes its services over an HTTP API; all create/update/delete/query and watch operations on resource objects are handled by the API server and then persisted to etcd.
- kube-controller-manager
Runs the cluster's routine background tasks. Each resource has a corresponding controller, and the controller manager is responsible for managing these controllers.
- kube-scheduler
Selects a node for each newly created pod according to the scheduling algorithm.
## 3.2 Worker node
+ kubelet
The kubelet is the master's agent on each worker node; it manages the lifecycle of the containers running on that machine: creating containers, mounting pod volumes, downloading secrets, and reporting container and node status. The kubelet turns each pod into a set of containers.
+ kube-proxy
Implements the pod network proxy on the worker node, maintaining network rules and layer-4 load balancing.
+ docker engine
Runs the containers.
## 3.3 Third-party services
+ etcd
A distributed key/value store used to persist cluster state, such as pod and service information.
# Uninstalling k8s
The following commands wipe all cluster state and Docker data from a machine:
```
kubectl delete node --all
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
rm -rf /etc/systemd/system/docker.service.d
rm -rf /var/lib/docker
rm -rf /var/run/docker
```
# Deploying a k8s cluster
## 1. Cluster environment
OS: CentOS 7u4
Three machines are used to run the k8s environment: one master and two nodes, as shown in the table below.
| Role | Hostname | IP |
| ---------------------- | ---------- | -------------- |
| master, etcd, registry | k8s-master | 192.168.28.201 |
| node1 | k8s-node-1 | 192.168.28.202 |
| node2 | k8s-node-2 | 192.168.28.203 |
## 2. Overview
Kubernetes works in a server-client model: the Kubernetes master provides centralized management of the minions (nodes).
Kubernetes cluster components:
- etcd: a highly available key/value store and service-discovery system
- flannel: cross-host container networking
- kube-apiserver: serves the Kubernetes cluster API
- kube-controller-manager: keeps cluster services in their desired state
- kube-scheduler: schedules containers, assigning them to nodes
- kubelet: starts containers on a node according to the rules defined in its configuration
- kube-proxy: provides the network proxy service
## 3. Set the hostnames of the three machines
First set up passwordless SSH from the master to the nodes (run on the master; node IPs from the table above):
```
ssh-keygen -t rsa
ssh-copy-id root@192.168.28.202
ssh-copy-id root@192.168.28.203
```
On the master:
```
hostnamectl --static set-hostname k8s-master
```
On the slaves:
```
hostnamectl --static set-hostname k8s-node-1   # on node 1
hostnamectl --static set-hostname k8s-node-2   # on node 2
```
## 4. Edit the hosts files on master and slaves
Add the following to `/etc/hosts` on both the master and the slaves:
```
192.168.28.201 etcd
192.168.28.201 registry
192.168.28.201 k8s-master
192.168.28.202 k8s-node-1
192.168.28.203 k8s-node-2
```
## 5. Disable the firewall and SELinux
```
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
```
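Note that `setenforce 0` only disables SELinux until the next reboot; making it permanent requires `SELINUX=disabled` in `/etc/selinux/config`. A minimal sketch of that edit, written as a stream filter so it can be reviewed before touching the real file (the `disable_selinux` helper name is ours):

```shell
# Hypothetical helper: rewrite SELINUX=enforcing to SELINUX=disabled.
# Apply for real with:
#   disable_selinux < /etc/selinux/config > /tmp/config.new   # review, then install
disable_selinux() {
  sed 's/^SELINUX=enforcing/SELINUX=disabled/'
}

# Dry run on a sample line:
echo 'SELINUX=enforcing' | disable_selinux
# → SELINUX=disabled
```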
## 6. Install the epel-release repository
```
yum -y install epel-release
```
# Deploying the master
## 1. Install etcd with yum
The etcd service is the primary datastore of the Kubernetes cluster, so install and start it before the other Kubernetes services.
```
yum -y install etcd
```
## 2. Edit `/etc/etcd/etcd.conf`
```
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#ETCD_ENABLE_V2="true"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[auth]
#ETCD_AUTH_TOKEN="simple"
```
- Key changes:
```
ETCD_NAME=master
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
```
## 3. Start the etcd service
```
systemctl start etcd
```
## 4. Verify
```
etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0
```
Then check the cluster's health:
```
etcdctl -C http://etcd:2379 cluster-health
etcdctl -C http://etcd:4001 cluster-health
```
## 5. **Install Docker**
```
yum install docker -y
```
### 5.1 Allow pulling images from the registry
```
vi /etc/sysconfig/docker
OPTIONS='--insecure-registry registry:5000'
```
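On newer Docker releases the same setting can instead live in `/etc/docker/daemon.json` (create the file if it does not exist). This is a hedged equivalent, not part of the original setup; restart Docker after either change:

```
{
  "insecure-registries": ["registry:5000"]
}
```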
### 5.2 Enable at boot and start the service
```
systemctl enable docker
systemctl start docker
```
## 6. Install Kubernetes
```
yum install kubernetes -y
```
The Kubernetes master needs to run the following components:
```
kubernetes API Server
kubernetes Controller Manager
kubernetes scheduler
```
Update the following configuration accordingly:
```
vi /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
```
Key changes:
```
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
```
vi /etc/kubernetes/config
```
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
```
## 6.1 Start Kubernetes
Start the services:
```
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
```
- Enable the k8s components at boot:
```
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service
```
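The six `systemctl` invocations above can be driven from one loop; this sketch only prints the commands so they can be reviewed first (drop the `echo` to actually execute them):

```shell
# Print (not run) the enable/start commands for the three master services.
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  echo systemctl enable "${svc}.service"
  echo systemctl start "${svc}.service"
done
```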
# Deploying the slave nodes
The slave nodes need the following components installed:
- docker
- kubernetes
- flannel
## 1. Install Docker
```
yum install -y docker
```
```
vi /etc/sysconfig/docker
OPTIONS='--insecure-registry registry:5000'
```
Enable at boot and start:
```
systemctl enable docker
systemctl start docker
```
## 2. Install, configure, and start Kubernetes (on every slave node)
```
yum install -y kubernetes
```
The Kubernetes slaves need to run the following components:
```
kubelet
kube-proxy
```
Update the following configuration accordingly:
vi /etc/kubernetes/config
```
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
```
Key changes:
```
KUBE_MASTER="--master=http://k8s-master:8080"
```
- Configure `/etc/kubernetes/kubelet`:
```
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
```
Key changes:
```
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"   # set to this node's own hostname
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
```
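Every node ships the same `/etc/kubernetes/kubelet` except for the `--hostname-override` value, so the per-node value can be patched with `sed`. A sketch (the `patch_hostname` helper is ours; the dry run uses the value shipped above):

```shell
# Hypothetical helper: rewrite the --hostname-override value in a kubelet
# config stream. Apply for real with:
#   patch_hostname "$(hostname)" < /etc/kubernetes/kubelet
patch_hostname() {
  sed "s/--hostname-override=[^\"]*/--hostname-override=$1/"
}

# Dry run:
echo 'KUBELET_HOSTNAME="--hostname-override=k8s-node-1"' | patch_hostname k8s-node-2
# → KUBELET_HOSTNAME="--hostname-override=k8s-node-2"
```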
## 3. Start the services
```
systemctl start kubelet.service
systemctl start kube-proxy.service
```
## 4. Enable the services at boot
```
systemctl enable kubelet.service
systemctl enable kube-proxy.service
```
## 5. Check status
On the master, list the cluster's nodes and their status:
```
kubectl -s http://k8s-master:8080 get nodes
```
![1564733951276](../../%E8%B5%84%E6%96%99/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/assets/1564733951276.png)
```
kubectl get nodes
```
![1564734049493](../../%E8%B5%84%E6%96%99/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/assets/1564734049493.png)
# Creating the overlay network: flannel
**Install flannel**
- Run the following on both the master and the nodes to install it:
```
yum install flannel -y
```
- Configure flannel: `/etc/sysconfig/flanneld`
```
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
```
- Configure the flannel key in etcd (run on the master, which hosts etcd):
```
etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
# etcdctl rm <key>      deletes a key
# etcdctl update <key>  updates a key (refresh it if the network misbehaves during testing)
```
- Start flannel and enable it at boot
```
systemctl start flanneld.service
systemctl enable flanneld.service
```
- On each minion node, when flannel starts it fetches the network config from etcd, allocates a subnet for the local node (which is also written back to etcd), and generates the file /run/flannel/subnet.env:
```
FLANNEL_NETWORK=10.0.0.0/16  # the global flannel network
FLANNEL_SUBNET=10.0.15.1/24  # this node's flannel subnet
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
```
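The variables in `subnet.env` are plain shell assignments; flannel's `mk-docker-opts.sh` helper turns them into Docker's `--bip`/`--mtu` flags, which can be sketched by simply sourcing the file (sample values from above):

```shell
# Recreate a sample subnet.env and derive the docker flags from it.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_SUBNET=10.0.15.1/24
FLANNEL_MTU=1472
EOF

. /tmp/subnet.env
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
# → --bip=10.0.15.1/24 --mtu=1472
```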
## Start order
After flannel is up, docker and the Kubernetes services need to be restarted in order.
On the master:
```
systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kube-apiserver
systemctl restart kube-scheduler
systemctl restart kube-controller-manager
```
On the nodes:
```
systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy
```
# Verification
- Inspect endpoint information: `kubectl get endpoints`
```
kubectl get endpoints
```
![1564736441892](../../%E8%B5%84%E6%96%99/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/assets/1564736441892.png)
- Inspect cluster information: `kubectl cluster-info`
![1564736481670](../../%E8%B5%84%E6%96%99/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/assets/1564736481670.png)
- Get the status of the nodes in the cluster: `kubectl get nodes`
![1564736524908](../../%E8%B5%84%E6%96%99/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/%E5%85%AC%E5%8F%B8%E7%8E%AF%E5%A2%83/assets/1564736524908.png)
* Check component status:
```
kubectl get componentstatuses   # short form: kubectl get cs
```
![1564818852263](assets/1564818852263.png)
# Example: deploying nginx
Optionally start the Docker daemon with a registry mirror (legacy daemon syntax): `docker daemon --registry-mirror=https://registry.docker-cn.com`
```
kubectl run nginx --image=nginx --replicas=3
kubectl get pod
kubectl get pod -o wide   # shows which node each pod runs on
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
kubectl get svc nginx
```
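When the service is exposed with `--type=NodePort`, the PORT(S) column prints as `<port>:<nodePort>/TCP`, and the node port can be pulled out with `awk` for scripting. A sketch against a sample line (the `30691` value is made up; feed it real output with `kubectl get svc nginx | tail -1`):

```shell
# Extract the NodePort from a 'kubectl get svc' output line.
line='nginx   10.254.90.252   <nodes>   88:30691/TCP   1m'
echo "$line" | awk '{ split($4, p, /[:\/]/); print p[2] }'
# → 30691
```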
kubectl get pod -o wide   # shows which node each pod runs on
![1564824445309](assets/1564824445309.png)
kubectl get svc
![1564826276556](assets/1564826276556.png)
If the service IP is unreachable, the following article suggests resetting the firewall and forwarding rules:
<https://blog.csdn.net/weixin_34346099/article/details/87525499>
```
setenforce 0
iptables --flush
iptables -t nat --flush
service docker restart
iptables -P FORWARD ACCEPT
```
Test from one of the worker nodes with `curl 10.254.90.252:88`; the response should be the nginx welcome page:
```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Note:
kubectl get pod
![1564824183396](assets/1564824183396.png)
`kubectl describe pod` reported the following error:
```
/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory
```
To fix the error, see the following article:
<https://www.cnblogs.com/lexiaofei/p/k8s.html>
```
yum install '*rhsm*' -y   # quote the glob so the shell does not expand it
```
```
yum install -y wget
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
```
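After the `rpm2cpio` step, it is worth confirming the certificate actually landed where the pause-image pull expects it; a quick check (prints MISSING when the file is absent or empty):

```shell
# Verify the Red Hat CA file exists and is non-empty.
if [ -s /etc/rhsm/ca/redhat-uep.pem ]; then
  echo "redhat-uep.pem OK"
else
  echo "redhat-uep.pem MISSING"
fi
```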