In Docker we can put resource controls on containers, and Kubernetes naturally offers the same kind of control over pods.
We can set the constraints in the YAML, as follows.
Each container in a Pod can specify one or more of the following:
'//resources is the field for resource constraints'
'//requests is the baseline resource amount'
'//limits is the resource ceiling, i.e. the most this pod can ever use'
spec.containers[].resources.limits.cpu '//CPU ceiling'
spec.containers[].resources.limits.memory '//memory ceiling'
spec.containers[].resources.requests.cpu '//baseline CPU allocated at creation'
spec.containers[].resources.requests.memory '//baseline memory allocated at creation'
Although requests and limits can only be specified on individual containers, it is still convenient to talk about Pod-level resource requests and limits: for a given resource type, the Pod's request/limit is the sum of that type's requests/limits across every container in the Pod. For the two-container pod below, that works out to requests of 500m CPU / 128Mi memory and limits of 1 CPU / 256Mi memory. (A container that exceeds its memory limit may be OOM-killed, while exceeding its CPU limit only gets it throttled.)
[root@master test]# vim pod2-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db                      '//container 1'
    image: mysql
    env:                          '//mysql requires a login password; it must be set or the service will not start'
    - name: MYSQL_ROOT_PASSWORD   '//required; the name of the environment variable'
      value: "password"           '//the value of the environment variable'
    resources:                    '//defines the resource requests and limits'
      requests:                   '//the baseline resources the pod requests'
        memory: "64Mi"            '//baseline memory of 64Mi'
        cpu: "250m"               '//baseline CPU of 250m, i.e. 25% of one core'
      limits:
        memory: "128Mi"           '//this container may use at most 128Mi of memory'
        cpu: "500m"               '//this container may use at most 500m, i.e. 50% of one core'
  - name: wp                      '//container 2'
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Create the pod resource from the YAML file
[root@master test]# kubectl create -f pod2-test.yaml
pod/frontend created
[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
frontend 0/2 ContainerCreating 0 47s
[root@master test]# kubectl describe pod frontend '//view the pod details'
...output omitted
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 116s default-scheduler Successfully assigned default/frontend to 192.168.233.133
Normal Pulled 61s kubelet, 192.168.233.133 Successfully pulled image "mysql"
Normal Created 60s kubelet, 192.168.233.133 Created container
Normal Started 60s kubelet, 192.168.233.133 Started container
Normal Pulling 60s kubelet, 192.168.233.133 pulling image "wordpress"
Normal Created 14s kubelet, 192.168.233.133 Created container
Normal Pulled 14s kubelet, 192.168.233.133 Successfully pulled image "wordpress"
Normal Pulling 13s (x2 over 115s) kubelet, 192.168.233.133 pulling image "mysql"
Normal Started 13s kubelet, 192.168.233.133 Started container
'//the containers were created on node 192.168.233.133 (node02)'
Check the node's resource allocation
[root@master test]# kubectl describe node 192.168.233.133
...output omitted
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default frontend 500m (12%) 1 (25%) 128Mi (7%) 256Mi (14%)
default my-nginx-69b8899fd6-glh6w 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default nginx-test-d55b94fd-9zmdj 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default nginx-test-d55b94fd-w4c5k 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 500m (12%) 1 (25%)
memory 128Mi (7%) 256Mi (14%)
Check the pod's restart policy and restart count
[root@master test]# kubectl edit pod frontend
restartPolicy: Always '//kubectl edit opens the pod's YAML in an editor; the restart policy here is Always, the default'
[root@master test]# kubectl get pod '//the pod has already restarted five times'
NAME READY STATUS RESTARTS AGE
frontend 1/2 CrashLoopBackOff 5 10m
A pod's restart policy (restartPolicy) is the action taken after a pod's container fails. It takes one of three values: Always (restart on any exit, the default), OnFailure (restart only on a non-zero exit), and Never (never restart).
Note: Kubernetes does not support restarting a pod resource in place; the "restart" discussed here means deleting and recreating it.
Method 1: view it with the kubectl edit command
[root@master test]# kubectl edit pod frontend
restartPolicy: Always '//the restart policy is Always; when the yaml file does not specify one, it defaults to Always'
Method 2: export the pod's YAML
kubectl get pod <pod-name> --export -o yaml > <file-name>
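For example, a sketch of method 2 against the frontend pod from above (note that --export was deprecated and later removed from kubectl, so on recent versions plain -o yaml is the form that works):
[root@master test]# kubectl get pod frontend -o yaml | grep restartPolicy
  restartPolicy: Always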
First delete all existing pod resources (just for convenience here; adapt to your own situation)
[root@master test]# kubectl delete -f .
Write a YAML file
[root@master test]# vim pod3-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceshichongqicelue
spec:
  containers:
  - name: busybox
    image: busybox        '//a minimal Linux image (used for testing)'
    args:
    - /bin/sh
    - -c
    - sleep 30;exit 3     '//sleep 30 seconds, then exit abnormally with code 3'
Create the pod and watch its restart status
[root@master test]# kubectl create -f pod3-test.yaml
pod/ceshichongqicelue created
[root@master test]# kubectl get pod -w '//-w: watch continuously'
NAME READY STATUS RESTARTS AGE
ceshichongqicelue 0/1 ContainerCreating 0 10s
ceshichongqicelue 1/1 Running 0 13s
ceshichongqicelue 0/1 Error 0 44s
ceshichongqicelue 1/1 Running 1 53s
^C[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
ceshichongqicelue 1/1 Running 1 59s '//restarted once'
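To double-check that the container really exited with code 3, you can read it straight out of the pod status (a sketch; the jsonpath below assumes the last terminated state has already been recorded):
[root@master test]# kubectl get pod ceshichongqicelue -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
3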
Modify the restart policy in pod3-test.yaml
[root@master test]# vim pod3-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceshichongqicelue
spec:
  restartPolicy: Never    '//add a restart policy of Never'
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30;exit 3
Recreate the pod resource and check the restart behavior
[root@master test]# kubectl delete -f pod3-test.yaml
pod "ceshichongqicelue" deleted
[root@master test]# kubectl apply -f pod3-test.yaml
pod/ceshichongqicelue created
[root@master test]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
ceshichongqicelue 0/1 ContainerCreating 0 6s
ceshichongqicelue 1/1 Running 0 15s
ceshichongqicelue 0/1 Error 0 45s '//because the container returns exit code 3 the status shows Error; if that failing exit code were removed, it would show Completed'
^C[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
ceshichongqicelue 0/1 Error 0 67s
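For completeness: the third policy, OnFailure, is not demonstrated in this walkthrough. With the same sleep 30;exit 3 container it would keep restarting just like Always, but after a clean exit 0 it would leave the container alone. A sketch of the only line that changes in pod3-test.yaml:
spec:
  restartPolicy: OnFailure    '//restart only when the container exits non-zero'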
A pod's health check is also called a probe; probes inspect the pod's containers, and both kinds of probe rules can be defined at the same time.
livenessProbe: determines whether the container is alive (running). If it is unhealthy, the kubelet kills the container and then acts according to the Pod's restartPolicy. If a container does not define this probe, the kubelet treats the liveness probe's result as always Success.
readinessProbe: determines whether the container's service is ready (Ready). If it is unhealthy, Kubernetes removes the Pod from the service's endpoints, and adds it back to the backend Endpoint list once it returns to the Ready state. This guarantees that a client accessing the service is never forwarded to an unavailable pod instance. (The endpoint list is the service's load-balancing backend list, holding the pod addresses.)
Both liveness and readiness probes can be configured with any of these three check methods (a readiness sketch follows the list):
1、exec (most common): run a shell command inside the container; an exit status of 0 means success.
2、httpGet: send an HTTP request; a status code in the 200-400 range means success.
3、tcpSocket: attempt a TCP socket connection; success if it is established.
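The examples that follow all use livenessProbe. For comparison, here is a minimal readinessProbe sketch; this pod is an illustration of mine, not part of the original walkthrough (nginx image, httpGet check, hypothetical name):
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo        '//hypothetical name'
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:           '//the pod stays out of the service endpoints until this check passes'
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5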
Edit the YAML file
[root@master test]# vim pod4-test.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 300
    livenessProbe:
      exec:                     '//the exec-style liveness probe'
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5    '//start probing 5 seconds after the container starts'
      periodSeconds: 5          '//probe every 5 seconds; on failure the container is killed and restarted'
'//The kubelet runs the command cat /tmp/healthy inside the container to probe it. If the command returns 0, the kubelet considers the container healthy and alive; otherwise it kills the container and restarts it.'
Create the pod from the YAML file
[root@master test]# kubectl create -f pod4-test.yaml
pod/liveness-exec created
[root@master test]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
liveness-exec 0/1 ContainerCreating 0 18s
liveness-exec 1/1 Running 0 35s
liveness-exec 1/1 Running 1 114s '//the container has restarted once'
^C[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 2m1s
View the pod's detailed event information
[root@master test]# kubectl describe pod liveness-exec
...output omitted
Events: '//the events below were recorded around the restart'
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m31s default-scheduler Successfully assigned default/liveness-exec to 192.168.233.133
Normal Pulling 42s (x2 over 2m31s) kubelet, 192.168.233.133 pulling image "busybox"
Normal Killing 42s kubelet, 192.168.233.133 Killing container with id docker://liveness:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 38s (x2 over 116s) kubelet, 192.168.233.133 Successfully pulled image "busybox"
Normal Created 38s (x2 over 116s) kubelet, 192.168.233.133 Created container
Normal Started 38s (x2 over 116s) kubelet, 192.168.233.133 Started container
Warning Unhealthy 2s (x5 over 82s) kubelet, 192.168.233.133 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Write the YAML file
[root@k8s-master01 k8s-test]# cat livenessProbe-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: kone.com/library/nginx
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10
[root@k8s-master01 k8s-test]#
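The httpGet check above is roughly equivalent to the kubelet running the following against the pod's IP (a sketch; 10.244.1.23 is a hypothetical pod IP, and the named port http resolves to containerPort 80):
curl -sf http://10.244.1.23:80/index.html >/dev/null && echo healthy || echo unhealthy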
Create it with kubectl create -f livenessProbe-httpget.yaml and check that the restart count is still 0
[root@k8s-master01 k8s-test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-httpget-pod 1/1 Running 0 4m31s
Go into the container and delete /usr/share/nginx/html/index.html
kubectl exec -it liveness-httpget-pod -- rm -f /usr/share/nginx/html/index.html
Then watch the container's restart count reach 1 (once index.html is gone the probe fails, the kubelet restarts the container, and the fresh image restores the file, so the pod returns to Running)
[root@k8s-master01 k8s-test]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
liveness-httpget-pod 1/1 Running 1 7m25s
The YAML file sets the probe port to 8090, but nginx listens on port 80, so 5 seconds after the container starts the probe begins: it tries to connect to port 8090 and the check fails after the 1-second timeout
[root@k8s-master01 k8s-test]# cat livenessProbe-tcp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp
  namespace: default
spec:
  containers:
  - name: liveness-tcp-container
    image: kone.com/library/nginx
    imagePullPolicy: IfNotPresent
    livenessProbe:
      initialDelaySeconds: 5
      timeoutSeconds: 1
      tcpSocket:
        port: 8090
      periodSeconds: 3
[root@k8s-master01 k8s-test]#
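The tcpSocket check amounts to a plain TCP connect, roughly this one-liner run from the node (a sketch; 10.244.1.24 is a hypothetical pod IP, and -w 1 mirrors timeoutSeconds: 1):
nc -z -w 1 10.244.1.24 8090 || echo 'probe failed'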
Watch the liveness-tcp container restarting over and over
[root@k8s-master01 k8s-test]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
liveness-tcp 1/1 Running 0 11s
liveness-tcp 1/1 Running 1 15s
liveness-tcp 1/1 Running 2 28s
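Left alone, the restart back-off keeps growing and the pod ends up in CrashLoopBackOff. The fix is simply pointing the probe at the port nginx actually listens on; a minimal sketch of the corrected probe block:
    livenessProbe:
      tcpSocket:
        port: 80              '//match the port nginx actually listens on'
      initialDelaySeconds: 5
      timeoutSeconds: 1
      periodSeconds: 3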