It is assumed that kubeadm has already been installed on the nodes.
The first stage of initialising the cluster is to launch the master node. The master is responsible for running the control plane components, etcd, and the API server. Clients communicate with the API server to schedule workloads and manage the state of the cluster:
kubeadm init --token=102952.1a7dd4cc8d1f4cc5 --kubernetes-version $(kubeadm version -o short)
In production it is recommended to omit the explicit plaintext token; kubeadm will then generate one on your behalf.
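As a minimal sketch of that workflow, a token in the expected format can be pre-generated with kubeadm token generate (which only prints a value and does not register it anywhere), or the --token flag can be dropped entirely so that kubeadm creates one during init and prints it in the join instructions at the end of the output:
kubeadm token generate
kubeadm init --kubernetes-version $(kubeadm version -o short)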
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [172.17.0.44 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [172.17.0.44 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.0.44]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.026618 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node controlplane as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 102952.1a7dd4cc8d1f4cc5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Managing the Kubernetes cluster requires a client configuration and certificates. This configuration is created when kubeadm initialises the cluster. The commands below copy the configuration to the user's home directory and set the environment variable so the CLI can use it:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
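To confirm that the CLI can reach the API server with this configuration, a quick check along the following lines can be run (a sketch; output omitted here):
kubectl cluster-info
kubectl config view --minify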
The Container Network Interface (CNI) defines how the different nodes and their workloads should communicate. Multiple networking providers are available; here we use Weave Net from Weaveworks:
controlplane $ cat /opt/weave-kube.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"original-request": {
"url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
"date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
namespace: kube-system
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"original-request": {
"url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
"date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
rules:
- apiGroups:
- ''
resources:
- pods
- namespaces
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- nodes/status
verbs:
- patch
- update
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"original-request": {
"url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
"date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
roleRef:
kind: ClusterRole
name: weave-net
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: weave-net
namespace: kube-system
- apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"original-request": {
"url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
"date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
namespace: kube-system
rules:
- apiGroups:
- ''
resourceNames:
- weave-net
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"original-request": {
"url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
"date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
namespace: kube-system
roleRef:
kind: Role
name: weave-net
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: weave-net
namespace: kube-system
- apiVersion: apps/v1
kind: DaemonSet
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"original-request": {
"url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
"date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
namespace: kube-system
spec:
minReadySeconds: 5
selector:
matchLabels:
name: weave-net
template:
metadata:
labels:
name: weave-net
spec:
containers:
- name: weave
command:
- /home/weave/launch.sh
env:
- name: IPALLOC_RANGE
value: 10.32.0.0/24
- name: HOSTNAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: 'docker.io/weaveworks/weave-kube:2.6.0'
readinessProbe:
httpGet:
host: 127.0.0.1
path: /status
port: 6784
resources:
requests:
cpu: 10m
securityContext:
privileged: true
volumeMounts:
- name: weavedb
mountPath: /weavedb
- name: cni-bin
mountPath: /host/opt
- name: cni-bin2
mountPath: /host/home
- name: cni-conf
mountPath: /host/etc
- name: dbus
mountPath: /host/var/lib/dbus
- name: lib-modules
mountPath: /lib/modules
- name: xtables-lock
mountPath: /run/xtables.lock
- name: weave-npc
env:
- name: HOSTNAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: 'docker.io/weaveworks/weave-npc:2.6.0'
resources:
requests:
cpu: 10m
securityContext:
privileged: true
volumeMounts:
- name: xtables-lock
mountPath: /run/xtables.lock
hostNetwork: true
hostPID: true
restartPolicy: Always
securityContext:
seLinuxOptions: {}
serviceAccountName: weave-net
tolerations:
- effect: NoSchedule
operator: Exists
volumes:
- name: weavedb
hostPath:
path: /var/lib/weave
- name: cni-bin
hostPath:
path: /opt
- name: cni-bin2
hostPath:
path: /home
- name: cni-conf
hostPath:
path: /etc
- name: dbus
hostPath:
path: /var/lib/dbus
- name: lib-modules
hostPath:
path: /lib/modules
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
updateStrategy:
type: RollingUpdate
controlplane $ kubectl apply -f /opt/weave-kube.yaml
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
The manifest is deployed with kubectl apply -f /opt/weave-kube.yaml, as shown above. Weave Net now runs in the cluster as a series of Pods; their status can be viewed with the command kubectl get pod -n kube-system:
controlplane $ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-l9rx6 1/1 Running 0 11m
coredns-fb8b8dccf-xgt2z 1/1 Running 0 11m
etcd-controlplane 1/1 Running 0 10m
kube-apiserver-controlplane 1/1 Running 0 10m
kube-controller-manager-controlplane 1/1 Running 0 10m
kube-proxy-p8864 1/1 Running 0 11m
kube-scheduler-controlplane 1/1 Running 1 10m
weave-net-lvsh9 2/2 Running 0 2m51s
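As an alternative sketch, the rollout of the Weave Net DaemonSet can be waited on explicitly; the DaemonSet name weave-net comes from the manifest applied above:
kubectl -n kube-system rollout status daemonset weave-net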
With the master and the CNI now initialised, other nodes can join the cluster as long as they have the correct token. Tokens can be managed with kubeadm token, for example:
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
102952.1a7dd4cc8d1f4cc5 23h 2021-02-04T12:10:23Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
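If this token has expired or been lost, a new one can be created; as a sketch, the --print-join-command flag additionally prints a ready-to-use kubeadm join command, including the CA certificate hash:
kubeadm token create --print-join-command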
On the second node, run the following command to join the cluster, providing the IP address of the master node:
node01 $ kubeadm join --discovery-token-unsafe-skip-ca-verification --token=102952.1a7dd4cc8d1f4cc5 172.17.0.44:6443
The --discovery-token-unsafe-skip-ca-verification flag bypasses verification of the discovery token against the cluster CA. Because the token is generated dynamically, it cannot be hard-coded into these steps. In a production environment, use the token and CA certificate hash provided by kubeadm init.
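As a sketch of that production-style join, the CA certificate hash can be computed on the master and passed to kubeadm join instead of skipping verification; <hash> below is a placeholder for the value printed by the first command:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
kubeadm join 172.17.0.44:6443 --token 102952.1a7dd4cc8d1f4cc5 --discovery-token-ca-cert-hash sha256:<hash>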
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The cluster has now been initialised. The master node manages the cluster, while our worker node runs the container workloads.
The Kubernetes CLI, kubectl, can now access the cluster using this configuration. For example, the command kubectl get nodes returns the two nodes in our cluster:
controlplane $ kubectl get nodes
NAME STATUS ROLES AGE VERSION
controlplane Ready master 31m v1.14.0
node01 Ready <none> 13m v1.14.0
Both nodes in the cluster should now report a status of Ready, which means our deployments can be scheduled and launched. Pods are deployed using kubectl: commands are issued to the master node, and each node is responsible only for running the workloads. The following command creates a deployment based on the Docker image katacoda/docker-http-server.
controlplane $ kubectl create deployment http --image=katacoda/docker-http-server:latest
deployment.apps/http created
The status of the Pod creation can be viewed as follows:
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
http-7f8cbdf584-4x6z2 1/1 Running 0 35s
Once it is running, the Docker container can be seen running on the node:
node01 $ docker ps | grep docker-http-server
0083eec91ba0 katacoda/docker-http-server "/app" About a minute ago Up About a minute k8s_docker-http-server_http-7f8cbdf584-4x6z2_default_66c9a2ba-661d-11eb-9c73-0242ac11002c_0
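The node that the Pod was scheduled to can also be confirmed from the master by using the wide output format (a sketch; output omitted here):
kubectl get pods -o wide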
Kubernetes has a web-based Dashboard UI that can be used to visualise the Kubernetes cluster.
Deploy the dashboard YAML with the command kubectl apply -f dashboard.yaml:
controlplane $ kubectl apply -f dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
The dashboard is deployed into the kube-system namespace. The command kubectl get pods -n kube-system shows the status of the deployment:
controlplane $ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-l9rx6 1/1 Running 0 45m
coredns-fb8b8dccf-xgt2z 1/1 Running 0 45m
etcd-controlplane 1/1 Running 0 44m
kube-apiserver-controlplane 1/1 Running 0 44m
kube-controller-manager-controlplane 1/1 Running 0 44m
kube-proxy-ld6d4 1/1 Running 0 27m
kube-proxy-p8864 1/1 Running 0 45m
kube-scheduler-controlplane 1/1 Running 1 43m
kubernetes-dashboard-5f57845f9d-9fttf 1/1 Running 0 73s
weave-net-7c5dh 2/2 Running 1 27m
weave-net-lvsh9 2/2 Running 0 36m
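To see how the dashboard Service is exposed, including any externalIPs (discussed below), the Service itself can be inspected (a sketch; output omitted here):
kubectl -n kube-system get service kubernetes-dashboard -o wide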
Logging in requires a ServiceAccount. A ClusterRoleBinding is used to assign the cluster-admin role in the cluster to the new ServiceAccount (admin-user):
controlplane $ cat <<EOF | kubectl create -f -
> apiVersion: v1
> kind: ServiceAccount
> metadata:
> name: admin-user
> namespace: kube-system
> ---
> apiVersion: rbac.authorization.k8s.io/v1beta1
> kind: ClusterRoleBinding
> metadata:
> name: admin-user
> roleRef:
> apiGroup: rbac.authorization.k8s.io
> kind: ClusterRole
> name: cluster-admin
> subjects:
> - kind: ServiceAccount
> name: admin-user
> namespace: kube-system
> EOF
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
This means the account can control every aspect of Kubernetes. Using ClusterRoleBindings and RBAC, different levels of permission can be defined to match your security requirements.
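For example, here is a minimal sketch of a lower-privilege binding: a hypothetical read-only ServiceAccount named dashboard-viewer bound to the built-in view ClusterRole instead of cluster-admin (the account name is illustrative and not part of this scenario):
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: dashboard-viewer
  namespace: kube-system
EOF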
Once the ServiceAccount has been created, the token needed to log in can be found with kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}'):
controlplane $ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-59z49
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 4241eb05-661f-11eb-9c73-0242ac11002c
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTU5ejQ5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0MjQxZWIwNS02NjFmLTExZWItOWM3My0wMjQyYWMxMTAwMmMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.ZZepDRh26A2f-xyJwBJCWsi-O7SBalfKN-NqT3Kbnu2SHHjDKl9vvdWvIZC5dR2hTqq2WpJgDSjPiXrUp5zguAFqicJW9dI_pjyXUC_e0QG8P1q0DwYzcDcUw0qKpe7KArflBK7fVFw2fLgVDvr-ElrhXRPBRwGWKO6MzzK2GfmoZTMdbIXaAZpQLK1mmR8FDXC3WfgSPIz4OHWwDfJhasub9QDwtHYG1llA73D5NNsqZSStTPu_wP5_ZA1us1DGZ_TcNjTLUVrMWA0spHxN-IQdEVBk0JRUAKbHt9Ed5T9ElQwJCyZmcILkACgrm6baxzBxOY1jDkFIVJBoIAbwvw
Once deployed, the dashboard uses externalIPs to bind the Service to port 8443. This makes the dashboard available from outside the cluster.
It can be viewed at https://2886795312-8443-xdp.com/.
Use the admin-user token to access the dashboard. For production environments, it is recommended to use kubectl proxy to access the dashboard rather than externalIPs.
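A minimal sketch of the kubectl proxy approach: the proxy serves the Kubernetes API on localhost, and the dashboard deployed in kube-system is then reachable through the API server's service proxy path (the exact path depends on the dashboard version and Service name):
kubectl proxy
The dashboard is then available in a browser on the same machine at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.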
For more details, see: