
K8S Persistent Storage with ceph-csi: RBD


Create a Ceph pool

For setting up the Ceph cluster itself, see: https://cloud.tencent.com/developer/article/1927693

ceph osd pool create rbd 128
ceph osd pool set-quota rbd max_bytes $((50 * 1024 * 1024 * 1024)) # 50 GiB pool quota
rbd pool init rbd
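
As a quick sanity check (a suggestion beyond the original steps), you can confirm the quota and the rbd application tag that rbd pool init enables:

ceph osd pool get-quota rbd
ceph osd pool application get rbd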
Check the cluster status:
[root@node3 ~]# ceph -s
  cluster:
    id:     3a2a06c7-124f-4703-b798-88eb2950361e
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node5,node4,node3
    mgr: node3(active)
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   1 pools, 128 pgs
    objects: 23  objects, 22 MiB
    usage:   7.4 GiB used, 593 GiB / 600 GiB avail
    pgs:     128 active+clean
View the user key:
[root@node3 ~]# ceph auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQCJMslhQW0JEhAAXEgcsW3IZozDi7FF51+sbw==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

Alternatively, create a dedicated pool, user, and user key yourself:

[root@node3 ~]# ceph osd pool create kubernetes
[root@node3 ~]# rbd pool init kubernetes
[root@node3 ~]# ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
[client.kubernetes]
    	key = AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg==

Note: the key shown above is only an example; in a real deployment, use the value actually produced by the command.

This user key is what the later configuration will use.

If the cluster is running the Ceph Luminous release, the command should instead be: ceph auth get-or-create client.kubernetes mon 'allow r' osd 'allow rwx pool=kubernetes' -o ceph.client.kubernetes.keyring
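
Whichever user you choose, you can verify its key and capabilities before moving on:

ceph auth get client.kubernetes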

Install the Ceph client on all Kubernetes nodes

cat > /etc/yum.repos.d/ceph.repo<<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
yum -y install ceph
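
To verify the client installed correctly (and that the kernel rbd module, which the node plugin relies on for mapping images, is loadable), a quick check might be:

ceph --version
modprobe rbd && lsmod | grep rbd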
Generate the ceph-csi ConfigMap for Kubernetes
[root@node3 ~]# ceph mon dump
dumped monmap epoch 1
epoch 1
fsid 3a2a06c7-124f-4703-b798-88eb2950361e
last_changed 2021-12-27 11:27:02.815248
created 2021-12-27 11:27:02.815248
0: 172.18.112.18:6789/0 mon.node5
1: 172.18.112.19:6789/0 mon.node4
2: 172.18.112.20:6789/0 mon.node3
Use the information above to generate the ConfigMap:
cat csi-config-map.yaml
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "3a2a06c7-124f-4703-b798-88eb2950361e",
        "monitors": [
          "172.18.112.20:6789",
          "172.18.112.19:6789",
          "172.18.112.18:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config

Apply this ConfigMap to the Kubernetes cluster:

kubectl apply -f csi-config-map.yaml
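
You can verify the stored configuration afterwards with:

kubectl get configmap ceph-csi-config -o jsonpath='{.data.config\.json}'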

Generate the cephx Secret for ceph-csi

cat <<EOF > csi-rbd-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: admin
  userKey: AQAs89depA23NRAA8yEg0GfHNC/uhKU9jsgp6Q==
EOF

Apply the Secret to Kubernetes:

kubectl apply -f csi-rbd-secret.yaml
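
To double-check that the Secret holds the expected credentials:

kubectl get secret csi-rbd-secret -o jsonpath='{.data.userKey}' | base64 -d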

Configure the ceph-csi plugin (RBAC on Kubernetes plus the containers that provide the storage functionality)

RBAC

If the cluster can reach GitHub, deploy directly:

kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
For offline environments, use the following manifest:
[root@master-1 ~]# cat csi-provisioner-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner
  # replace with non-default namespace name
  namespace: default

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: default
  name: rbd-external-provisioner-cfg
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role-cfg
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: rbd-external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f csi-provisioner-rbac.yaml

If the cluster can reach GitHub, deploy directly:

kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
For offline environments, use the following manifest:
[root@master-1 ~]# cat csi-nodeplugin-rbac.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-nodeplugin
  # replace with non-default namespace name
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  # allow to read Vault Token and connection options from the Tenants namespace
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io

Deploy it:

kubectl apply -f csi-nodeplugin-rbac.yaml

Provisioner

The manifests reference the following image versions; to use other versions, modify the YAML files yourself:

k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0
k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0
k8s.gcr.io/sig-storage/csi-attacher:v3.3.0
quay.io/cephcsi/cephcsi:canary
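
For offline environments, one way to mirror these images into a local registry is a pull/tag/push loop. This is only a sketch, assuming a Docker client that can reach both the upstream registries and your own registry (here the dockerhub.kubekey.local registry referenced later in the manifests):

REGISTRY=dockerhub.kubekey.local
for img in \
  k8s.gcr.io/sig-storage/csi-resizer:v1.3.0 \
  k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0 \
  k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0 \
  k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0 \
  k8s.gcr.io/sig-storage/csi-attacher:v3.3.0 \
  quay.io/cephcsi/cephcsi:canary; do
  docker pull "$img"                  # fetch from the upstream registry
  docker tag "$img" "$REGISTRY/$img"  # retag under the local registry prefix
  docker push "$REGISTRY/$img"        # push to the local registry
done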

Official manifests:

wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml

The images referenced in the YAML files below have already been pushed to a local image registry; adjust them to suit your own network environment.

[root@master-1 ~]# cat csi-rbdplugin-provisioner.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: csi-rbdplugin-provisioner
  # replace with non-default namespace name
  namespace: default
  labels:
    app: csi-metrics
spec:
  selector:
    app: csi-rbdplugin-provisioner
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin-provisioner
  # replace with non-default namespace name
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: csi-rbdplugin-provisioner
  template:
    metadata:
      labels:
        app: csi-rbdplugin-provisioner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - csi-rbdplugin-provisioner
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: rbd-csi-provisioner
      priorityClassName: system-cluster-critical
      containers:
        - name: csi-provisioner
          image: dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--retry-interval-start=500ms"
            - "--leader-election=true"
            #  set it to true to use topology based provisioning
            - "--feature-gates=Topology=false"
            # if fstype is not specified in storageclass, ext4 is default
            - "--default-fstype=ext4"
            - "--extra-create-metadata=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-snapshotter
          image: dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--leader-election=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher
          image: dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-attacher:v3.3.0
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election=true"
            - "--retry-interval-start=500ms"
          env:
            - name: ADDRESS
              value: /csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--leader-election"
            - "--retry-interval-start=500ms"
            - "--handle-volume-inuse-error=false"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-rbdplugin
          # for stable functionality replace canary with latest release version
          image: dockerhub.kubekey.local/quay.io/cephcsi/cephcsi:canary
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=rbd"
            - "--controllerserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--pidlimit=-1"
            - "--rbdhardmaxclonedepth=8"
            - "--rbdsoftmaxclonedepth=4"
            - "--enableprofiling=false"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: CSI_ADDONS_ENDPOINT
              value: unix:///csi/csi-addons.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
           # - name: ceph-csi-encryption-kms-config
           #   mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
           # - name: ceph-config
           #   mountPath: /etc/ceph/
        - name: csi-rbdplugin-controller
          # for stable functionality replace canary with latest release version
          image: dockerhub.kubekey.local/quay.io/cephcsi/cephcsi:canary
          args:
            - "--type=controller"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--drivernamespace=$(DRIVER_NAMESPACE)"
          env:
            - name: DRIVER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
           # - name: ceph-config
           #   mountPath: /etc/ceph/
        - name: liveness-prometheus
          image: dockerhub.kubekey.local/quay.io/cephcsi/cephcsi:canary
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: socket-dir
          emptyDir: {
            medium: "Memory"
          }
        #- name: ceph-config
        #  configMap:
        #    name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        #- name: ceph-csi-encryption-kms-config
        #  configMap:
        #    name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: {
            medium: "Memory"
          }
[root@master-1 ~]# cat csi-rbdplugin.yaml
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin
  # replace with non-default namespace name
  namespace: default
spec:
  selector:
    matchLabels:
      app: csi-rbdplugin
  template:
    metadata:
      labels:
        app: csi-rbdplugin
    spec:
      serviceAccountName: rbd-csi-nodeplugin
      hostNetwork: true
      hostPID: true
      priorityClassName: system-node-critical
      # to use e.g. Rook orchestrated cluster, and mons' FQDN is
      # resolved through k8s service, set dns policy to cluster first
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: driver-registrar
          # This is necessary only for systems with SELinux, where
          # non-privileged sidecar containers cannot access unix domain socket
          # created by privileged CSI driver container.
          securityContext:
            privileged: true
          image: dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: csi-rbdplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          # for stable functionality replace canary with latest release version
          image: dockerhub.kubekey.local/quay.io/cephcsi/cephcsi:canary
          args:
            - "--nodeid=$(NODE_ID)"
            - "--pluginpath=/var/lib/kubelet/plugins"
            - "--stagingpath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/"
            - "--type=rbd"
            - "--nodeserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--enableprofiling=false"
            # If topology based provisioning is desired, configure required
            # node labels representing the nodes topology domain
            # and pass the label names below, for CSI to consume and advertise
            # its equivalent topology domain
            # - "--domainlabels=failure-domain/region,failure-domain/zone"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: CSI_ADDONS_ENDPOINT
              value: unix:///csi/csi-addons.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /run/mount
              name: host-mount
            - mountPath: /etc/selinux
              name: etc-selinux
              readOnly: true
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            #- name: ceph-csi-encryption-kms-config
            #  mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: plugin-dir
              mountPath: /var/lib/kubelet/plugins
              mountPropagation: "Bidirectional"
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-logdir
              mountPath: /var/log/ceph
            #- name: ceph-config
            #  mountPath: /etc/ceph/
        - name: liveness-prometheus
          securityContext:
            privileged: true
          image: dockerhub.kubekey.local/quay.io/cephcsi/cephcsi:canary
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
        - name: ceph-logdir
          hostPath:
            path: /var/log/ceph
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: etc-selinux
          hostPath:
            path: /etc/selinux
        - name: host-mount
          hostPath:
            path: /run/mount
        - name: lib-modules
          hostPath:
            path: /lib/modules
        #- name: ceph-config
        #  configMap:
        #    name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        #- name: ceph-csi-encryption-kms-config
        #  configMap:
        #    name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: {
            medium: "Memory"
          }
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
  name: csi-metrics-rbdplugin
  # replace with non-default namespace name
  namespace: default
  labels:
    app: csi-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680
  selector:
    app: csi-rbdplugin

Modify csi-rbdplugin-provisioner.yaml and csi-rbdplugin.yaml, commenting out the ceph-csi-encryption-kms-config and ceph-config sections:

[root@master-1 ~]# grep  "#" csi-rbdplugin-provisioner.yaml
  # replace with non-default namespace name
  # replace with non-default namespace name
            #  set it to true to use topology based provisioning
            # if fstype is not specified in storageclass, ext4 is default
          # for stable functionality replace canary with latest release version
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
           # - name: ceph-csi-encryption-kms-config
           #   mountPath: /etc/ceph-csi-encryption-kms-config/
           # - name: ceph-config
           #   mountPath: /etc/ceph/
          # for stable functionality replace canary with latest release version
           # - name: ceph-config
           #   mountPath: /etc/ceph/
        #- name: ceph-config
        #  configMap:
        #    name: ceph-config
        #- name: ceph-csi-encryption-kms-config
        #  configMap:
        #    name: ceph-csi-encryption-kms-config

Note: the images in use have already been changed to local-registry copies; adjust them for your own network environment.

dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0
dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0
dockerhub.kubekey.local/k8s.gcr.io/sig-storage/csi-attacher:v3.3.0
dockerhub.kubekey.local/quay.io/cephcsi/cephcsi:canary

Deploy:

kubectl apply -f csi-rbdplugin-provisioner.yaml
kubectl apply -f csi-rbdplugin.yaml

Check the running status:

[root@master-1 ~]# kubectl get pods 
NAME                                         READY   STATUS    RESTARTS   AGE

csi-rbdplugin-5jb79                          3/3     Running   0          22h
csi-rbdplugin-7dqd7                          3/3     Running   0          22h
csi-rbdplugin-8dpnb                          3/3     Running   0          22h
csi-rbdplugin-provisioner-66557fcc8f-4clkc   7/7     Running   0          22h
csi-rbdplugin-provisioner-66557fcc8f-lbjld   7/7     Running   0          22h
csi-rbdplugin-provisioner-66557fcc8f-vpvb2   7/7     Running   0          22h
csi-rbdplugin-txjcg                          3/3     Running   0          22h
csi-rbdplugin-x57d6                          3/3     Running   0          22h

Using Ceph block devices

Create a StorageClass
[root@master-1 ~]# cat csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: 3a2a06c7-124f-4703-b798-88eb2950361e
   pool: rbd
   imageFeatures: layering
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: default
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: default
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: default
   csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
   - discard
  • clusterID corresponds to the fsid obtained in the earlier steps
  • imageFeatures determines the features of the RBD images that get created
  • allowVolumeExpansion: true enables online volume expansion (see the example below)
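
With allowVolumeExpansion enabled, an existing PVC can be grown in place simply by patching its requested size. For example, once a PVC exists (using the raw-block-pvc created later in this article and a hypothetical new size of 2Gi):

kubectl patch pvc raw-block-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'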
View the StorageClass:
[root@master-1 ~]#  kubectl get storageclass
NAME              PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE

csi-rbd-sc        rbd.csi.ceph.com   Delete          Immediate              true                   22h
local (default)   openebs.io/local   Delete          WaitForFirstConsumer   false                  5d23h
Create a PVC
[root@master-1 ~]# cat raw-block-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc

Apply it:

kubectl apply -f raw-block-pvc.yaml 
View the PVC:
[root@master-1 ~]#  kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
raw-block-pvc   Bound    pvc-23bb1905-2e26-4ce1-8616-2754dd36317f   1Gi        RWO            csi-rbd-sc     22h
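
The PVC above requests a raw block device (volumeMode: Block). A filesystem-backed volume works the same way against the same StorageClass; a minimal sketch (the name rbd-fs-pvc is illustrative, not part of the original setup) would be:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-fs-pvc   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF

A filesystem-mode claim is consumed through volumeMounts in the Pod spec rather than the volumeDevices shown below.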
Create a Pod that uses the PVC
[root@master-1 ~]# cat raw-block-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc

Verify:

[root@master-1 ~]# kubectl get pods 
NAME                                         READY   STATUS    RESTARTS   AGE

csi-rbdplugin-5jb79                          3/3     Running   0          22h
csi-rbdplugin-7dqd7                          3/3     Running   0          22h
csi-rbdplugin-8dpnb                          3/3     Running   0          22h
csi-rbdplugin-provisioner-66557fcc8f-4clkc   7/7     Running   0          22h
csi-rbdplugin-provisioner-66557fcc8f-lbjld   7/7     Running   0          22h
csi-rbdplugin-provisioner-66557fcc8f-vpvb2   7/7     Running   0          22h
csi-rbdplugin-txjcg                          3/3     Running   0          22h
csi-rbdplugin-x57d6                          3/3     Running   0          22h

pod-with-raw-block-volume                    1/1     Running   0          22h
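
To confirm the raw block device is actually visible inside the container, you can exec into the Pod (the device path matches the devicePath set above):

kubectl exec pod-with-raw-block-volume -- ls -l /dev/xvda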