***If you have already finished setting up your Kubernetes cluster but are not sure what to do with it, let's use the k8s cluster to build a simple microservice application together:***
* [ ] The microservice application is a simple PHP guestbook:
* [ ] The web tier runs the PHP program on Apache
* [ ] Behind it sits a Redis master/slave setup, with one master and two slaves
* * * * *
This example uses three Docker images, which can be downloaded from https://hub.docker.com/u/kubeguide
1. redis-master: the Redis service the front-end web tier writes guestbook entries to; it already contains one entry with the content "hello world".
2. redis-slave: the Redis instances the front-end web tier reads entries from; they replicate from redis-master.
3. frontend: the PHP web service; it displays the entries on a page and also provides a text box so visitors can add new ones.
* * * * *
![](https://box.kancloud.cn/f7271153c5ee0cfe78da6f72b105f6a5_2166x442.png)
* * * * *
Let's walk through it together:
First we create an RC definition file for redis-master (an RC, ReplicationController, mainly defines how many replicas to run):
~~~
[root@localhost gustboo]# cat redis-master-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: kubeguide/redis-master
        ports:
        - containerPort: 6379
~~~
* kind: ReplicationController declares this object as a replication controller
* metadata.name: the name of the RC
* metadata.labels: the labels of the RC
* spec.replicas: 1 means we run only one master instance here
* Whenever the number of running replicas falls below the desired count, the RC recreates replicas from the defined template.
* containerPort: the port Redis listens on inside the container
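The self-healing behavior described above can be sketched as a toy control loop in plain bash (illustrative only; the real controller watches the API server and creates pods, it is not shell code):

```shell
# Toy sketch of an RC's reconciliation idea: compare the desired replica
# count against the current one and "create" replicas from the template
# until they match.
desired=1
current=0
while [ "$current" -lt "$desired" ]; do
  current=$((current + 1))
  echo "creating replica $current from template"
done
```

If a replica later dies, `current` drops below `desired` and the same loop brings it back, which is exactly why deleting a pod managed by an RC causes a replacement to appear.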
* * * * *
Once the RC file is defined, we create the RC in the k8s cluster:
~~~
kubectl create -f redis-master-controller.yaml
[root@localhost gustboo]# kubectl get rc
NAME DESIRED CURRENT READY AGE
redis-master 1 1 1 2h
[root@localhost gustboo]# kubectl get pod
NAME READY STATUS RESTARTS AGE
redis-master-r04fz 1/1 Running 0 2h
~~~
When READY=1 and STATUS=Running, the RC has been created successfully.
* * * * *
Next we create the svc for the master (svc is short for Service, which defines how a set of pods is accessed):
~~~
[root@localhost gustboo]# cat redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master
~~~
Note: port: 6379 and targetPort: 6379 map the Redis port inside the container to port 6379 on the Service's virtual IP.
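The two happen to be equal here, which can obscure their roles; if the container listened elsewhere (hypothetical numbers, not used in this example), targetPort is what bridges the gap:

```yaml
spec:
  ports:
  - port: 6379        # port clients connect to on the Service's cluster IP
    targetPort: 6380  # port the container would actually be listening on
```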
Now create the redis-master svc and check it:
~~~
kubectl create -f redis-master-service.yaml
[root@localhost gustboo]# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master 10.254.211.158 <none> 6379/TCP 2h
~~~
With that, redis-master is up.
Next we set up redis-slave.
As before, we prepare the RC and svc files for redis-slave:
~~~
[root@localhost gustboo]# cat redis-slave-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: slave
        image: kubeguide/guestbook-redis-slave
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 6379
~~~
* * * * *
~~~
[root@localhost gustboo]# cat redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
  - port: 6379
  selector:
    name: redis-slave
~~~
The redis-slave RC file differs from the redis-master one:
Notice the extra env section, which configures environment variables. In a Redis master/slave setup, redis-slave has to replicate from the master, but we cannot know in advance which IP k8s will assign to redis-master. k8s offers two ways to discover the redis-master address:
The first is through environment variables, the second is through DNS. We use the first here: once redis-master is up, the system automatically generates environment variables such as:
~~~
#env
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT=tcp://10.254.211.158:6379
~~~
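The names of these variables follow a fixed rule: the Service name is upper-cased, dashes become underscores, and suffixes like _SERVICE_HOST or _PORT are appended. A quick sketch of that rule in bash, using the service name from this example:

```shell
# Derive the env-var prefix Kubernetes generates for a Service:
# upper-case the name and replace '-' with '_'.
svc_name="redis-master"
prefix=$(echo "$svc_name" | tr 'a-z-' 'A-Z_')
host_var="${prefix}_SERVICE_HOST"
echo "$host_var"   # -> REDIS_MASTER_SERVICE_HOST
```

This is why the script below can rely on a variable named REDIS_MASTER_SERVICE_HOST existing, provided the redis-master Service was created before the slave pods started.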
As we can see, the master's environment variables have been generated. Master/slave replication is implemented inside the redis-slave image: the following script is baked into the image and runs on startup.
~~~
root@redis-slave-w3zr1:/# cat run.sh
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [[ ${GET_HOSTS_FROM:-dns} == "env" ]]; then
  redis-server --slaveof ${REDIS_MASTER_SERVICE_HOST} 6379
else
  redis-server --slaveof redis-master 6379
fi
~~~
This script simply checks whether the container's GET_HOSTS_FROM environment variable is set to env: if so, it uses the generated environment variables directly; otherwise it falls back to DNS-based service discovery. (This requires a working cluster DNS, which we do not cover here.)
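The branch can be exercised outside the container. Here is the same decision wrapped in a function, with redis-server stubbed out by echo so it runs anywhere; the IP is the redis-master cluster IP from this example:

```shell
# Same GET_HOSTS_FROM branch as run.sh, with redis-server replaced by
# echo so the logic can be tested without Redis installed.
resolve_master() {
  if [[ "${GET_HOSTS_FROM:-dns}" == "env" ]]; then
    echo "${REDIS_MASTER_SERVICE_HOST}"   # address from generated env vars
  else
    echo "redis-master"                   # name resolved by cluster DNS
  fi
}

GET_HOSTS_FROM=env
REDIS_MASTER_SERVICE_HOST=10.254.211.158
via_env=$(resolve_master)   # -> 10.254.211.158

unset GET_HOSTS_FROM        # default is dns
via_dns=$(resolve_master)   # -> redis-master
```

Note the `:-dns` default: if the RC did not set GET_HOSTS_FROM at all, the script would silently assume a cluster DNS exists.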
Now let's create redis-slave:
~~~
kubectl create -f redis-slave-controller.yaml
kubectl create -f redis-slave-service.yaml
[root@localhost gustboo]# kubectl get svc,pod,rc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-slave 10.254.160.65 <none> 6379/TCP 2h
redis-slave-w3zr1 1/1 Running 0 2h
redis-slave-xq3pf 1/1 Running 0 2h
redis-slave 2 2 2 2h
~~~
With that, redis-slave is up and the Redis master/slave setup is complete.
Finally, we create the front-end web tier.
Prepare its svc and rc files:
~~~
[root@localhost gustboo]# cat frontend-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: kubeguide/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80
~~~
~~~
[root@localhost gustboo]# cat frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: frontend
~~~
Here type: NodePort with nodePort: 30001 exposes the frontend on port 30001 of every node, so it can be reached from outside the cluster. After creating both with kubectl create -f as before, let's look at everything now running in the cluster:
~~~
[root@localhost gustboo]# kubectl get pod,svc,rc,endpoints
NAME READY STATUS RESTARTS AGE
po/frontend-7mfsh 1/1 Running 0 2h
po/frontend-m8fkw 1/1 Running 0 2h
po/frontend-x4tzj 1/1 Running 0 2h
po/my-nginx-543887649-4gh28 1/1 Running 14 19d
po/my-nginx-543887649-lxxcw 1/1 Running 14 19d
po/my-nginx-pod-2945913857-88lml 1/1 Running 14 19d
po/my-nginx-pod-2945913857-nwrlv 1/1 Running 14 19d
po/redis-master-r04fz 1/1 Running 0 3h
po/redis-slave-w3zr1 1/1 Running 0 2h
po/redis-slave-xq3pf 1/1 Running 0 2h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/frontend 10.254.198.99 <nodes> 80:30001/TCP 1h
svc/kubernetes 10.254.0.1 <none> 443/TCP 31d
svc/my-nginx 10.254.78.8 <none> 80/TCP 19d
svc/redis-master 10.254.211.158 <none> 6379/TCP 3h
svc/redis-slave 10.254.160.65 <none> 6379/TCP 2h
NAME DESIRED CURRENT READY AGE
rc/frontend 3 3 3 2h
rc/redis-master 1 1 1 3h
rc/redis-slave 2 2 2 2h
NAME ENDPOINTS AGE
ep/frontend 172.30.1.21:80,172.30.1.22:80,172.30.1.23:80 1h
ep/kubernetes 172.16.168.129:6443 31d
ep/my-nginx 172.30.1.16:80,172.30.1.9:80 19d
ep/redis-master 172.30.1.18:6379 3h
ep/redis-slave 172.30.1.19:6379,172.30.1.20:6379 2h
~~~
And with that, the Redis master/slave setup and the PHP web front end are complete.
![](https://box.kancloud.cn/82999f70c328f03b25deae1a3c8c4617_2304x998.png)