[TOC]
## Version Notes
| CSI Version | Kubernetes Versions |
| :------: | :------------: |
| v3.11.0 | v1.26, v1.27, v1.28, v1.29 |
| v3.10.2 | v1.26, v1.27, v1.28 |
| v3.9.0 | v1.25, v1.26, v1.27 |
| v3.8.1 | v1.24, v1.25, v1.26 |
| v3.7.2 | v1.22, v1.23, v1.24 |
| v3.6.2 | v1.21, v1.22, v1.23 |
| v3.5.1 | v1.21, v1.22, v1.23 |
| v3.4.0 | v1.20, v1.21, v1.22 |
| v3.3.0 | v1.20, v1.21, v1.22 |
> For the detailed version mapping, see the reference articles below.
## Steps on the Ceph Side
1. Create the `kubernetes` storage pool on Ceph
```shell
$ ceph osd pool create kubernetes 128 128
pool 'kubernetes' created
```
2. Initialize the pool
```shell
$ rbd pool init kubernetes
```
3. Create a new user for Kubernetes and ceph-csi
```shell
sudo ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes' -o /etc/ceph/ceph.client.kubernetes.keyring
```
4. Gather the Ceph cluster information
```shell
$ ceph mon dump
epoch 2
fsid b87d2535-406b-442d-8de2-49d86f7dc599
last_changed 2022-06-15T17:35:37.096336+0800
created 2022-06-15T17:35:05.828763+0800
min_mon_release 15 (octopus)
0: [v2:192.168.31.69:3300/0,v1:192.168.31.69:6789/0] mon.ceph01
1: [v2:192.168.31.102:3300/0,v1:192.168.31.102:6789/0] mon.ceph02
2: [v2:192.168.31.165:3300/0,v1:192.168.31.165:6789/0] mon.ceph03
dumped monmap epoch 2
```
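Before moving to the Kubernetes side, it can help to re-check the two values the manifests below will need: the cluster fsid (used as `clusterID`) and the new user's key. A minimal verification sketch, assuming the commands above succeeded:
```shell
# Print only the fsid: the same value shown by `ceph mon dump` above.
$ ceph fsid
b87d2535-406b-442d-8de2-49d86f7dc599
# Confirm the caps granted to client.kubernetes.
$ sudo ceph auth get client.kubernetes
# Print the plain-text key; it is used as userKey in the Secret later.
$ sudo ceph auth get-key client.kubernetes
```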
## Deploying ceph-csi on Kubernetes
> In this article, ceph-csi is deployed in the `csi-system` namespace.
1. Create the namespace
```shell
$ sudo mkdir -p /etc/kubernetes/addons/cephrbd && \
sudo chown -R ${USER}. /etc/kubernetes/addons/cephrbd && \
cd /etc/kubernetes/addons/cephrbd
$ cat << EOF | tee 0.namespace.yml >> /dev/null
apiVersion: v1
kind: Namespace
metadata:
name: csi-system
EOF
$ kubectl apply -f 0.namespace.yml
namespace/csi-system created
```
2. Generate a csi-config-map manifest like the example below, setting "clusterID" to the fsid and "monitors" to the monitor addresses:
```shell
$ cat << EOF | tee 1.csi-config-map.yml >> /dev/null
apiVersion: v1
kind: ConfigMap
data:
config.json: |-
[
{
"clusterID": "b87d2535-406b-442d-8de2-49d86f7dc599",
"monitors": [
"192.168.31.69:6789",
"192.168.31.102:6789",
"192.168.31.165:6789"
]
}
]
metadata:
name: ceph-csi-config
namespace: csi-system
EOF
$ kubectl apply -f 1.csi-config-map.yml
configmap/ceph-csi-config created
```
> Fill in these values from the output of the Ceph-side commands above.
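To confirm what the driver will actually read, you can dump the rendered JSON back out of the ConfigMap (a hypothetical check, output elided):
```shell
$ kubectl -n csi-system get configmap ceph-csi-config -o jsonpath='{.data.config\.json}'
```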
3. Create the CSI KMS config file
```shell
$ cat << EOF | tee 2.csi-kms-config-map.yml >> /dev/null
apiVersion: v1
kind: ConfigMap
data:
config.json: |-
{}
metadata:
name: ceph-csi-encryption-kms-config
namespace: csi-system
EOF
$ kubectl apply -f 2.csi-kms-config-map.yml
configmap/ceph-csi-encryption-kms-config created
```
4. Create the RBD access Secret
```shell
$ cat << EOF | tee 3.csi-rbd-secret.yml >> /dev/null
apiVersion: v1
kind: Secret
metadata:
name: csi-rbd-secret
namespace: csi-system
stringData:
userID: kubernetes
# Obtain the key with `ceph auth get-key client.kubernetes`; no base64 encoding is needed.
userKey: AQCfkKpidBhVHBAAJTzhkRKlSMuWDDibrlbPDA==
EOF
$ kubectl apply -f 3.csi-rbd-secret.yml
secret/csi-rbd-secret created
```
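An equivalent sketch that builds the same Secret without pasting the key by hand, assuming the `ceph` CLI is reachable from this host:
```shell
$ kubectl -n csi-system create secret generic csi-rbd-secret \
    --from-literal=userID=kubernetes \
    --from-literal=userKey="$(sudo ceph auth get-key client.kubernetes)"
```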
5. Create a ConfigMap with the Ceph config file and keyring
```shell
$ kubectl -n csi-system create configmap ceph-config --from-file=/etc/ceph/ceph.conf --from-file=keyring=/etc/ceph/ceph.client.kubernetes.keyring
configmap/ceph-config created
```
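The Deployment and DaemonSet below mount this ConfigMap at `/etc/ceph/`, so the data keys must be named `ceph.conf` and `keyring`; that is why the second `--from-file` renames the keyring file. A quick sanity check:
```shell
$ kubectl -n csi-system get configmap ceph-config -o yaml
```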
6. Create the RBAC permissions
```shell
$ cat << EOF | tee 4.csi-provisioner-rbac.yaml >> /dev/null
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: rbd-csi-provisioner
# replace with non-default namespace name
namespace: csi-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-external-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots/status"]
verbs: ["get", "list", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments/status"]
verbs: ["patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents/status"]
verbs: ["update", "patch"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get"]
- apiGroups: [""]
resources: ["serviceaccounts/token"]
verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-csi-provisioner-role
subjects:
- kind: ServiceAccount
name: rbd-csi-provisioner
# replace with non-default namespace name
namespace: csi-system
roleRef:
kind: ClusterRole
name: rbd-external-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# replace with non-default namespace name
namespace: csi-system
name: rbd-external-provisioner-cfg
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-csi-provisioner-role-cfg
# replace with non-default namespace name
namespace: csi-system
subjects:
- kind: ServiceAccount
name: rbd-csi-provisioner
# replace with non-default namespace name
namespace: csi-system
roleRef:
kind: Role
name: rbd-external-provisioner-cfg
apiGroup: rbac.authorization.k8s.io
EOF
$ cat << EOF | tee 5.csi-nodeplugin-rbac.yaml >> /dev/null
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: rbd-csi-nodeplugin
# replace with non-default namespace name
namespace: csi-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-csi-nodeplugin
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
# allow to read Vault Token and connection options from the Tenants namespace
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["list", "get"]
- apiGroups: [""]
resources: ["serviceaccounts/token"]
verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-csi-nodeplugin
subjects:
- kind: ServiceAccount
name: rbd-csi-nodeplugin
# replace with non-default namespace name
namespace: csi-system
roleRef:
kind: ClusterRole
name: rbd-csi-nodeplugin
apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f 4.csi-provisioner-rbac.yaml
serviceaccount/rbd-csi-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
$ kubectl apply -f 5.csi-nodeplugin-rbac.yaml
serviceaccount/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
```
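A quick check that both service accounts landed in the right namespace:
```shell
$ kubectl -n csi-system get serviceaccount rbd-csi-provisioner rbd-csi-nodeplugin
```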
7. Deploy the ceph-csi provisioner
```shell
$ cat << 'EOF' | tee 6.csi-rbdplugin-provisioner.yml >> /dev/null
---
kind: Service
apiVersion: v1
metadata:
name: csi-rbdplugin-provisioner
# replace with non-default namespace name
namespace: csi-system
labels:
app: csi-metrics
spec:
selector:
app: csi-rbdplugin-provisioner
ports:
- name: http-metrics
port: 8080
protocol: TCP
targetPort: 8680
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: csi-rbdplugin-provisioner
# replace with non-default namespace name
namespace: csi-system
spec:
replicas: 3
selector:
matchLabels:
app: csi-rbdplugin-provisioner
template:
metadata:
labels:
app: csi-rbdplugin-provisioner
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- csi-rbdplugin-provisioner
topologyKey: "kubernetes.io/hostname"
serviceAccountName: rbd-csi-provisioner
priorityClassName: system-cluster-critical
containers:
- name: csi-provisioner
image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
args:
- "--csi-address=$(ADDRESS)"
- "--v=1"
- "--timeout=150s"
- "--retry-interval-start=500ms"
- "--leader-election=true"
# set it to true to use topology based provisioning
- "--feature-gates=Topology=false"
- "--feature-gates=HonorPVReclaimPolicy=true"
- "--prevent-volume-mode-conversion=true"
# if fstype is not specified in storageclass, ext4 is default
- "--default-fstype=ext4"
- "--extra-create-metadata=true"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-snapshotter
image: registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2
args:
- "--csi-address=$(ADDRESS)"
- "--v=1"
- "--timeout=150s"
- "--leader-election=true"
- "--extra-create-metadata=true"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-attacher
image: registry.k8s.io/sig-storage/csi-attacher:v4.3.0
args:
- "--v=1"
- "--csi-address=$(ADDRESS)"
- "--leader-election=true"
- "--retry-interval-start=500ms"
- "--default-fstype=ext4"
env:
- name: ADDRESS
value: /csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-resizer
image: registry.k8s.io/sig-storage/csi-resizer:v1.8.0
args:
- "--csi-address=$(ADDRESS)"
- "--v=1"
- "--timeout=150s"
- "--leader-election"
- "--retry-interval-start=500ms"
- "--handle-volume-inuse-error=false"
- "--feature-gates=RecoverVolumeExpansionFailure=true"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-rbdplugin
image: quay.io/cephcsi/cephcsi:v3.9.0
args:
- "--nodeid=$(NODE_ID)"
- "--type=rbd"
- "--controllerserver=true"
- "--endpoint=$(CSI_ENDPOINT)"
- "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
- "--v=5"
- "--drivername=rbd.csi.ceph.com"
- "--pidlimit=-1"
- "--rbdhardmaxclonedepth=8"
- "--rbdsoftmaxclonedepth=4"
- "--enableprofiling=false"
- "--setmetadata=true"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# - name: KMS_CONFIGMAP_NAME
# value: encryptionConfig
- name: CSI_ENDPOINT
value: unix:///csi/csi-provisioner.sock
- name: CSI_ADDONS_ENDPOINT
value: unix:///csi/csi-addons.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- mountPath: /dev
name: host-dev
- mountPath: /sys
name: host-sys
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: ceph-csi-encryption-kms-config
mountPath: /etc/ceph-csi-encryption-kms-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
- name: ceph-config
mountPath: /etc/ceph/
- name: oidc-token
mountPath: /run/secrets/tokens
readOnly: true
- name: csi-rbdplugin-controller
image: quay.io/cephcsi/cephcsi:v3.9.0
args:
- "--type=controller"
- "--v=5"
- "--drivername=rbd.csi.ceph.com"
- "--drivernamespace=$(DRIVER_NAMESPACE)"
- "--setmetadata=true"
env:
- name: DRIVER_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
- name: ceph-config
mountPath: /etc/ceph/
- name: liveness-prometheus
image: quay.io/cephcsi/cephcsi:v3.9.0
args:
- "--type=liveness"
- "--endpoint=$(CSI_ENDPOINT)"
- "--metricsport=8680"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi-provisioner.sock
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
imagePullPolicy: "IfNotPresent"
volumes:
- name: host-dev
hostPath:
path: /dev
- name: host-sys
hostPath:
path: /sys
- name: lib-modules
hostPath:
path: /lib/modules
- name: socket-dir
emptyDir: {
medium: "Memory"
}
- name: ceph-config
configMap:
name: ceph-config
- name: ceph-csi-config
configMap:
name: ceph-csi-config
- name: ceph-csi-encryption-kms-config
configMap:
name: ceph-csi-encryption-kms-config
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
- name: oidc-token
projected:
sources:
- serviceAccountToken:
path: oidc-token
expirationSeconds: 3600
audience: ceph-csi-kms
EOF
$ kubectl apply -f 6.csi-rbdplugin-provisioner.yml
service/csi-rbdplugin-provisioner created
deployment.apps/csi-rbdplugin-provisioner created
$ kubectl -n csi-system get pod -l app=csi-rbdplugin-provisioner
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-provisioner-6bd5bd5fd9-psp58 7/7 Running 0 19m
csi-rbdplugin-provisioner-6bd5bd5fd9-sl4kq 7/7 Running 0 19m
csi-rbdplugin-provisioner-6bd5bd5fd9-wwzzp 7/7 Running 0 19m
```
> If you mirror the images to a private registry, rewrite the image references first, e.g.:
> `sed -ri 's@registry.k8s.io/sig-storage@172.139.20.170:5000/csi@g' 6.csi-rbdplugin-provisioner.yml`
> `sed -ri 's@quay.io/cephcsi@172.139.20.170:5000/csi@g' 6.csi-rbdplugin-provisioner.yml`
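Rather than polling `get pod`, you can wait for the rollout to settle (a minimal sketch):
```shell
$ kubectl -n csi-system rollout status deployment/csi-rbdplugin-provisioner
```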
8. Deploy the ceph-csi node plugin
```shell
$ cat << 'EOF' | tee 7.csi-rbdplugin.yml >> /dev/null
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-rbdplugin
# replace with non-default namespace name
namespace: csi-system
spec:
selector:
matchLabels:
app: csi-rbdplugin
template:
metadata:
labels:
app: csi-rbdplugin
spec:
serviceAccountName: rbd-csi-nodeplugin
hostNetwork: true
hostPID: true
priorityClassName: system-node-critical
# to use e.g. Rook orchestrated cluster, and mons' FQDN is
# resolved through k8s service, set dns policy to cluster first
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: driver-registrar
# This is necessary only for systems with SELinux, where
# non-privileged sidecar containers cannot access unix domain socket
# created by privileged CSI driver container.
securityContext:
privileged: true
allowPrivilegeEscalation: true
image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0
args:
- "--v=1"
- "--csi-address=/csi/csi.sock"
- "--kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
- name: csi-rbdplugin
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: quay.io/cephcsi/cephcsi:v3.9.0
args:
- "--nodeid=$(NODE_ID)"
- "--pluginpath=/var/lib/kubelet/plugins"
- "--stagingpath=/var/lib/kubelet/plugins/kubernetes.io/csi/"
- "--type=rbd"
- "--nodeserver=true"
- "--endpoint=$(CSI_ENDPOINT)"
- "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
- "--v=5"
- "--drivername=rbd.csi.ceph.com"
- "--enableprofiling=false"
# If topology based provisioning is desired, configure required
# node labels representing the nodes topology domain
# and pass the label names below, for CSI to consume and advertise
# its equivalent topology domain
# - "--domainlabels=failure-domain/region,failure-domain/zone"
#
# Options to enable read affinity.
# If enabled Ceph CSI will fetch labels from kubernetes node and
# pass `read_from_replica=localize,crush_location=type:value` during
# rbd map command. refer:
# https://docs.ceph.com/en/latest/man/8/rbd/#kernel-rbd-krbd-options
# for more details.
# - "--enable-read-affinity=true"
# - "--crush-location-labels=topology.io/zone,topology.io/rack"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# - name: KMS_CONFIGMAP_NAME
# value: encryptionConfig
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: CSI_ADDONS_ENDPOINT
value: unix:///csi/csi-addons.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- mountPath: /dev
name: host-dev
- mountPath: /sys
name: host-sys
- mountPath: /run/mount
name: host-mount
- mountPath: /etc/selinux
name: etc-selinux
readOnly: true
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: ceph-csi-encryption-kms-config
mountPath: /etc/ceph-csi-encryption-kms-config/
- name: plugin-dir
mountPath: /var/lib/kubelet/plugins
mountPropagation: "Bidirectional"
- name: mountpoint-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: "Bidirectional"
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
- name: ceph-logdir
mountPath: /var/log/ceph
- name: ceph-config
mountPath: /etc/ceph/
- name: oidc-token
mountPath: /run/secrets/tokens
readOnly: true
- name: liveness-prometheus
securityContext:
privileged: true
allowPrivilegeEscalation: true
image: quay.io/cephcsi/cephcsi:v3.9.0
args:
- "--type=liveness"
- "--endpoint=$(CSI_ENDPOINT)"
- "--metricsport=8680"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
imagePullPolicy: "IfNotPresent"
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
type: DirectoryOrCreate
- name: plugin-dir
hostPath:
path: /var/lib/kubelet/plugins
type: Directory
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet/pods
type: DirectoryOrCreate
- name: ceph-logdir
hostPath:
path: /var/log/ceph
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry/
type: Directory
- name: host-dev
hostPath:
path: /dev
- name: host-sys
hostPath:
path: /sys
- name: etc-selinux
hostPath:
path: /etc/selinux
- name: host-mount
hostPath:
path: /run/mount
- name: lib-modules
hostPath:
path: /lib/modules
- name: ceph-config
configMap:
name: ceph-config
- name: ceph-csi-config
configMap:
name: ceph-csi-config
- name: ceph-csi-encryption-kms-config
configMap:
name: ceph-csi-encryption-kms-config
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
- name: oidc-token
projected:
sources:
- serviceAccountToken:
path: oidc-token
expirationSeconds: 3600
audience: ceph-csi-kms
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
name: csi-metrics-rbdplugin
# replace with non-default namespace name
namespace: csi-system
labels:
app: csi-metrics
spec:
ports:
- name: http-metrics
port: 8080
protocol: TCP
targetPort: 8680
selector:
app: csi-rbdplugin
EOF
$ kubectl apply -f 7.csi-rbdplugin.yml
daemonset.apps/csi-rbdplugin created
service/csi-metrics-rbdplugin created
$ kubectl -n csi-system get pod -l app=csi-rbdplugin
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-747x8 3/3 Running 0 7m38s
csi-rbdplugin-8l5pj 3/3 Running 0 7m38s
csi-rbdplugin-d9pnv 3/3 Running 0 7m38s
csi-rbdplugin-rslnz 3/3 Running 0 7m38s
csi-rbdplugin-tcrs4 3/3 Running 0 7m38s
```
If the kubelet data directory has been changed, update the related paths in the manifest.
For example, if the kubelet data directory is `/data/k8s/data/kubelet`, run `sed -ri 's#/var/lib/kubelet#/data/k8s/data/kubelet#g' 7.csi-rbdplugin.yml` to update the manifest.
> If you mirror the images to a private registry, rewrite the image references first, e.g.:
> `sed -ri 's@registry.k8s.io/sig-storage@172.139.20.170:5000/csi@g' 7.csi-rbdplugin.yml`
> `sed -ri 's@quay.io/cephcsi@172.139.20.170:5000/csi@g' 7.csi-rbdplugin.yml`
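Likewise, to wait until the DaemonSet is ready on every node:
```shell
$ kubectl -n csi-system rollout status daemonset/csi-rbdplugin
```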
9. Create the StorageClass for dynamic provisioning
```shell
$ cat << EOF | tee 8.csi-rbd-sc.yaml >> /dev/null
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
clusterID: b87d2535-406b-442d-8de2-49d86f7dc599
pool: kubernetes
imageFeatures: layering
csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
csi.storage.k8s.io/provisioner-secret-namespace: csi-system
csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
csi.storage.k8s.io/controller-expand-secret-namespace: csi-system
csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
csi.storage.k8s.io/node-stage-secret-namespace: csi-system
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- discard
EOF
$ kubectl apply -f 8.csi-rbd-sc.yaml
storageclass.storage.k8s.io/csi-rbd-sc created
```
> Remember to change the `clusterID` field to match your cluster's fsid.
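A quick check of the result; the output columns should reflect the `reclaimPolicy` and `allowVolumeExpansion` settings above:
```shell
$ kubectl get storageclass csi-rbd-sc
```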
## Verification
Create a 1 GiB raw-block PVC:
```shell
$ cat << EOF | tee 9.raw-block-pvc.yaml >> /dev/null
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: raw-block-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: 1Gi
storageClassName: csi-rbd-sc
EOF
$ kubectl apply -f 9.raw-block-pvc.yaml
persistentvolumeclaim/raw-block-pvc created
$ kubectl get pvc raw-block-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
raw-block-pvc Bound pvc-b89dd991-4b74-432c-bebf-97098f9b8740 1Gi RWO csi-rbd-sc 25s
```
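To actually consume the raw block volume, attach the PVC to a pod via `volumeDevices` instead of `volumeMounts`. The sketch below follows the upstream ceph-csi raw-block example; the image and device path are illustrative:
```shell
$ cat << EOF | tee 10.raw-block-pod.yaml >> /dev/null
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      # volumeDevices (not volumeMounts) exposes the volume as a block device.
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
EOF
$ kubectl apply -f 10.raw-block-pod.yaml
pod/pod-with-raw-block-volume created
```
On the Ceph side, the backing image should now be visible with `rbd ls kubernetes`.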
## Attachments
Link: https://pan.baidu.com/s/1wXSMC8-yoJfCKjcTLR-AbQ
Extraction code: gcuj
## References
Ceph official documentation: https://docs.ceph.com/en/octopus/rbd/rbd-kubernetes/
Kubernetes CSI developer documentation: https://kubernetes-csi.github.io/docs/drivers.html
ceph-csi documentation: https://github.com/ceph/ceph-csi/tree/v3.5.1