
Kubernetes HA with kubeadm


1. Environment preparation

The master nodes (VMs or physical servers) need at least 2 CPU cores. The number of physical CPUs can be checked with:
cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
(nproc shows the logical CPU count, which is what kubeadm actually checks.)

(1) Set up passwordless SSH between all k8s nodes. All steps are run as root on every node; enable passwordless login between all of them, without further distinction.

(2) Time synchronization:
yum install -y ntpdate
ntpdate -u ntp.api.bz

(3) Disable the firewall, SELinux enforcement and swap on all nodes:
systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl status firewalld.service
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
setenforce 0
sed -i 's/\(.*swap.*\)/# \1/g' /etc/fstab
swapoff -a
If the swap entry in /etc/fstab is modified, reboot for the change to fully take effect (swapoff -a disables swap for the current boot).
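Since step (1) already set up passwordless root SSH, the same preparation can be pushed to every node in one loop. A minimal sketch, assuming the hostnames from the node plan in section 2 and the exact commands listed above:

# Run the prep commands on every node over SSH (hostnames from /etc/hosts, section 2)
for host in master1 master2 master3 node1 node2 node3; do
  ssh root@$host "systemctl stop firewalld.service; systemctl disable firewalld.service; \
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config; setenforce 0; \
    sed -i 's/\(.*swap.*\)/# \1/g' /etc/fstab; swapoff -a"
done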

2. Node plan:

IP            Hostname           Role
10.10.1.200   master1            etcd, Master, Node, keepalived
10.10.1.199   master2            etcd, Master, Node, keepalived
10.10.1.198   master3            etcd, Master, Node, keepalived
10.10.1.201   node1              Node
10.10.1.202   node2              Node
10.10.1.203   node3              Node
10.10.1.210   cluster.kube.com   VIP (keepalived)

Add the hostnames and IPs of all nodes to /etc/hosts on every node:
cat /etc/hosts
10.10.1.200 master1
10.10.1.201 node1
10.10.1.202 node2
10.10.1.203 node3
10.10.1.198 master3
10.10.1.199 master2
10.10.1.210 cluster.kube.com

3. Image list:

REPOSITORY                           TAG              IMAGE ID       SIZE
k8s.gcr.io/kube-proxy                v1.13.0          8fa56d18961f   80.2MB
k8s.gcr.io/kube-scheduler            v1.13.0          9508b7d8008d   79.6MB
k8s.gcr.io/kube-controller-manager   v1.13.0          d82530ead066   146MB
k8s.gcr.io/kube-apiserver            v1.13.0          f1ff9b7e3d6e   181MB
quay.io/calico/node                  v3.3.2           4e9be81e3a59   75.3MB
quay.io/calico/cni                   v3.3.2           490d921fa49c   75.4MB
k8s.gcr.io/coredns                   1.2.6            f59dcacceff4   40MB
k8s.gcr.io/etcd                      3.2.24           3cab8e1b9802   220MB
quay.io/coreos/flannel               v0.10.0-s390x    463654e4ed2d   47MB
quay.io/coreos/flannel               v0.10.0-ppc64l   e2f67d69dd84   53.5MB
quay.io/coreos/flannel               v0.10.0-arm      c663d02f7966   39.9MB
quay.io/coreos/flannel               v0.10.0-amd64    f0fad859c909   44.6MB
k8s.gcr.io/pause                     3.1              da86e6ba6ca1   742kB

4. Deploy keepalived

The role of keepalived here is to provide the VIP (10.10.1.210) for haproxy and to arbitrate master/backup among the three haproxy instances, so that losing one haproxy has as little impact on the service as possible.

(1) System configuration:
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
EOF
sysctl -p

(2) Install keepalived:
yum install -y keepalived

(3) Configure keepalived. [Note: check that the VIP address is correct and that each node uses a different priority; master1 is MASTER, the other two nodes are BACKUP. "killall -0" checks whether a process with the given name is alive.]

--------------master1:
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 35f18af7190d51c9f7f78f37300a0cbd
    }
    virtual_ipaddress {
        10.10.1.210
    }
    track_script {
        check_haproxy
    }
}
EOF
--------------master2:
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51
    priority 249
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 35f18af7190d51c9f7f78f37300a0cbd
    }
    virtual_ipaddress {
        10.10.1.210
    }
    track_script {
        check_haproxy
    }
}
EOF

--------------master3:
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51
    priority 248
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 35f18af7190d51c9f7f78f37300a0cbd
    }
    virtual_ipaddress {
        10.10.1.210
    }
    track_script {
        check_haproxy
    }
}
EOF

(4) Enable and start the service, then check it:
systemctl enable keepalived.service
systemctl start keepalived.service
systemctl status keepalived.service
ip address show ens32
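A quick, optional failover check (a hedged sketch, not part of the original steps): stop keepalived on master1 and watch the VIP move to master2, then start it again.

# On master1: leave the VRRP group and confirm the VIP is released
systemctl stop keepalived.service
ip address show ens32 | grep 10.10.1.210 || echo "VIP released"
# On master2 (next highest priority): the VIP should now be bound here
ip address show ens32 | grep 10.10.1.210
# On master1: rejoin; with priority 250 it preempts and takes the VIP back
systemctl start keepalived.service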

5. Deploy haproxy [all masters]

Here haproxy provides a reverse proxy in front of the apiservers and forwards every request round-robin to the master nodes. Compared with a pure keepalived active/standby setup, where a single master carries all the traffic, this is better balanced and more robust.

(1) System configuration:
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
sysctl -p

(2) Install haproxy:
yum install -y haproxy
(3) Configure haproxy [identical on all three master nodes]:

cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     40000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master1 10.10.1.200:6443 check
    server  master2 10.10.1.199:6443 check
    server  master3 10.10.1.198:6443 check

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF

(4) Enable and start the service, then check it:
systemctl enable haproxy.service
systemctl start haproxy.service
systemctl status haproxy.service
ss -lnt | grep -E "16443|1080"
LISTEN 0 128 *:1080 *:*
LISTEN 0 128 *:16443 *:*
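Optionally, the stats listener configured above can be used to confirm that haproxy sees all three apiserver backends (they will show as DOWN until kube-apiserver is actually running). A hedged check using the credentials and stats URI from the config:

# Query the stats page from the 'listen stats' section (port 1080, uri /admin?stats) in CSV form
curl -s -u admin:awesomePassword "http://127.0.0.1:1080/admin?stats;csv" | grep kubernetes-apiserver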

6. Install kubeadm, kubectl, kubelet and docker [for convenience, run the same steps on all nodes]

(1) System configuration:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

(2) Install docker:
### Add the docker-ce yum repository ###
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y repolist
yum -y install docker-ce-18.06.1.ce-3.el7 --disableexcludes=docker-ce

# Edit the systemd unit file for docker so the FORWARD chain is set to ACCEPT after startup
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
cat /usr/lib/systemd/system/docker.service | grep ExecStart
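Editing the vendor unit file with sed works, but the change is lost when the docker-ce package is upgraded. A hedged alternative is a systemd drop-in (the drop-in file name here is arbitrary):

# Create a drop-in that adds the same ExecStartPost without touching the vendor unit
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/10-iptables-forward.conf << EOF
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload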

(3) Install the Kubernetes packages:
### Add the kubernetes yum repository ###
cat <<EOF>> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y repolist
yum -y install kubelet-1.13.0 kubeadm-1.13.0 kubectl-1.13.0 --disableexcludes=kubernetes

(4) Start docker and kubelet and enable them at boot:
systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service
systemctl status docker.service
systemctl enable kubelet.service

(5) Pull the images in advance and retag them [all nodes] (the same pull/tag/rmi pattern can also be written as a loop; see the sketch after this list):
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.0
docker pull mirrorgooglecontainers/kube-proxy:v1.13.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker tag mirrorgooglecontainers/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.0
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.0
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.0
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.0
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6

docker pull xiyangxixia/k8s-flannel:v0.10.0-s390x
docker tag xiyangxixia/k8s-flannel:v0.10.0-s390x quay.io/coreos/flannel:v0.10.0-s390x
docker rmi xiyangxixia/k8s-flannel:v0.10.0-s390x

docker pull xiyangxixia/k8s-flannel:v0.10.0-ppc64le
docker tag xiyangxixia/k8s-flannel:v0.10.0-ppc64le quay.io/coreos/flannel:v0.10.0-ppc64l
docker rmi xiyangxixia/k8s-flannel:v0.10.0-ppc64le

docker pull xiyangxixia/k8s-flannel:v0.10.0-arm
docker tag xiyangxixia/k8s-flannel:v0.10.0-arm quay.io/coreos/flannel:v0.10.0-arm
docker rmi xiyangxixia/k8s-flannel:v0.10.0-arm

docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi xiyangxixia/k8s-flannel:v0.10.0-amd64
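The pull/tag/rmi pattern above for the core k8s.gcr.io images can be condensed into a loop (a hedged sketch; the flannel architecture images can be handled the same way):

# Pull from the mirror, retag to the name kubeadm expects, then drop the mirror tag
for img in kube-apiserver:v1.13.0 kube-controller-manager:v1.13.0 kube-scheduler:v1.13.0 \
           kube-proxy:v1.13.0 pause:3.1 etcd:3.2.24; do
  docker pull mirrorgooglecontainers/$img
  docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img
  docker rmi  mirrorgooglecontainers/$img
done
docker pull coredns/coredns:1.2.6
docker tag  coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker rmi  coredns/coredns:1.2.6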

7. Deploy master1

(1) Write the kubeadm configuration file:
cd ~
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
  certSANs:
  - "cluster.kube.com"
controlPlaneEndpoint: "cluster.kube.com:16443"
networking:
  podSubnet: "10.244.0.0/16"
EOF
Note: when using the flannel network plugin, podSubnet should be set to 10.244.0.0/16.

(2) Initialize the first master node:
kubeadm init --config kubeadm-config.yaml
…………
Record the join command (token and CA hash) printed at the end, e.g.:
kubeadm join cluster.kube.com:16443 --token h0q766.ng8jo85gpbdffqks --discovery-token-ca-cert-hash sha256:8d493104d82j59b3c777d4bc74822ecbe21ac618ea876acafb5876ebf4c45e80
At this point kubectl get nodes will not show the node as Ready, and the coredns pods will complain (taints, pending, etc.) because no network plugin has been installed yet; this can be ignored for now.

On the first master, as root, run the following (a non-root user instead runs the three commands that kubeadm init prints, shown below):
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
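For reference, the three commands kubeadm init prints for a non-root user are the standard kubeconfig copy:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config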

(3) Install the network plugin

Set the kernel parameter [all nodes]:
sysctl net.bridge.bridge-nf-call-iptables=1
Create the network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Check the status:
kubectl get nodes
kubectl get pods --all-namespaces

(4) Copy the required certificates and kubeconfig to the other two master nodes (a loop form is sketched after the master3 block below):
ssh root@master2 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@master2:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master2:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@master2:/etc/kubernetes/pki/etcd

ssh root@master3 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@master3:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master3:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@master3:/etc/kubernetes/pki/etcd
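The same copy can be expressed as a loop over the two backup masters (a sketch that relies on the passwordless SSH from step 1):

for host in master2 master3; do
  ssh root@$host "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/admin.conf root@$host:/etc/kubernetes
  scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@$host:/etc/kubernetes/pki
  scp /etc/kubernetes/pki/etcd/ca.* root@$host:/etc/kubernetes/pki/etcd
done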

8. Deploy the other masters

Run the join command recorded earlier, adding the --experimental-control-plane flag:
kubeadm join cluster.kube.com:16443 --token h0q766.ngajo85gpbdffqks --discovery-token-ca-cert-hash sha256:8d493104d87059b3c777d4bc74822ecbe21ac618ea876acafb5876ebf4c45e80 --experimental-control-plane
When it finishes, run:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG

9. Check the state of the Kubernetes cluster [on any master]

(1) kubectl get nodes -o wide
NAME     STATUS  ROLES   AGE    VERSION  INTERNAL-IP  EXTERNAL-IP  OS-IMAGE               KERNEL-VERSION         CONTAINER-RUNTIME
master1  Ready   master  11m    v1.13.0  10.10.1.200  <none>       CentOS Linux 7 (Core)  3.10.0-862.el7.x86_64  docker://18.9.0
master2  Ready   master  2m16s  v1.13.0  10.10.1.199  <none>       CentOS Linux 7 (Core)  3.10.0-862.el7.x86_64  docker://18.9.0
master3  Ready   master  45s    v1.13.0  10.10.1.198  <none>       CentOS Linux 7 (Core)  3.10.0-862.el7.x86_64  docker://18.9.0

(2)kubectl get pods --all-namespaces -o wide

(3) kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

(4) Check the etcd cluster status.
Enter the etcd container:
kubectl exec -ti -n kube-system etcd-master1 sh
Set the environment variable:
export ETCDCTL_API=3
Run etcdctl with the client certificates:
etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list
The output should look similar to:
4e0333cc4a713ecd, started, master3, https://10.10.1.198:2380, https://10.10.1.198:2379
865b63300bc38ab7, started, master2, https://10.10.1.199:2380, https://10.10.1.199:2379
d5c79fd433825701, started, master1, https://10.10.1.200:2380, https://10.10.1.200:2379
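Still inside the etcd container (with ETCDCTL_API=3 exported), endpoint health can also be checked; a hedged extra check, not part of the original steps:

# The endpoint should be reported as healthy
etcdctl --endpoints=https://[127.0.0.1]:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key endpoint health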

10. Configure the worker nodes [all workers]

Make sure the steps marked "all nodes" above have been completed: /etc/hosts entries, SSH setup, docker and the Kubernetes packages installed, and the images pulled in advance.

10.1 Parameter settings (load the IPVS kernel modules):
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe ip_vs   [if this one reports that the module is missing, it can be ignored]

10.2 Run the join command:
kubeadm join cluster.kube.com:16443 --token h0q766.ngajo85gpbdffqks --discovery-token-ca-cert-hash sha256:8d493104d87059b3c777d4bc74822ecbe21ac618ea876acafb5876ebf4c45e80

If the token has been lost or has expired:
On a master, create a new token:
kubeadm token create
On a master, compute the CA certificate hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Then the node can join with:
kubeadm join cluster.kube.com:16443 --token <new token> --discovery-token-ca-cert-hash sha256:<new hash>
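If the installed kubeadm supports it, the token and the full join command can also be regenerated in a single step (hedged; check that the flag exists in your kubeadm build):

kubeadm token create --print-join-command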

10.3 Check the status on master1:
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide

11. Deploy the dashboard on the cluster

11.1 On all nodes, pull the dashboard image in advance and retag it [use v1.8.3]; details can be looked up at https://hub.docker.com/r/library/:
docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
docker tag k8scn/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker rmi k8scn/kubernetes-dashboard-amd64:v1.8.3
docker images | grep dashboard

11.2 On master1, apply the dashboard deployment file.
(1) Download and edit the file:
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the file: change the image pull policy and the image version (the default is v1.10.0), and expose the service via NodePort.
--------- image section ---------
      containers:
      - name: kubernetes-dashboard
        imagePullPolicy: IfNotPresent
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
--------- Service section: change the type to NodePort and set nodePort to 30080 (any other free port also works) ---------
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30080
  selector:
    k8s-app: kubernetes-dashboard
--------------------------------------------------
Then run:
kubectl create -f kubernetes-dashboard.yaml

11.3 Check the result:
kubectl get pods -n kube-system
If something went wrong, kubectl delete -f kubernetes-dashboard.yaml removes everything so it can be recreated. Once the pod is running, continue.

11.4 Create a serviceaccount for logging in to the dashboard [named dashboard-admin]:
kubectl create serviceaccount dashboard-admin -n kube-system

11.5 Create a clusterrolebinding:
kubectl create clusterrolebinding cluster-dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl get svc --all-namespaces shows the service IP and port mapping, e.g.:
kube-system   kubernetes-dashboard   NodePort   10.97.145.51   <none>   443:30080/TCP

11.6 Find the token of the secret that was just generated.
First find the secret:
kubectl get secret -n kube-system | grep dashboard-admin
The secret is named dashboard-admin-token-xxxx; in this run it is dashboard-admin-token-2kw7n.
Then read its token:
kubectl describe secret dashboard-admin-token-2kw7n -n kube-system
Copy the token value, e.g.:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.…(a long JWT string)
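The two lookups above can be combined into one line (a sketch that simply chains the same kubectl commands):

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}') | grep '^token:'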

11.7 Log in to the dashboard. Use Firefox [adding a certificate exception in Chrome is more cumbersome]:
https://10.10.1.200:30080 [any other node IP in the cluster, or the VIP, gives the same result]
The browser warns that the connection is not secure; choose Advanced and add an exception for 10.10.1.200:30080.
On the login page there are two options, kubeconfig and token; choose token, paste the secret token copied above, and log in to browse the cluster.

12. Configure monitoring
Reference: http://blog.51cto.com/kaliarch/2160569

12.1 Prepare the master/node environment.
Install git on master1 and clone the repository with the yaml files:
git clone https://github.com/redhatxl/k8s-prometheus-grafana.git

12.2 On the worker nodes, pull the monitoring images:
docker pull prom/node-exporter
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:4.2.0

12.3 Deploy the node-exporter component as a DaemonSet:
cd k8s-prometheus-grafana
kubectl create -f node-exporter.yaml

12.4 Deploy the prometheus component:
RBAC file:
kubectl create -f prometheus/rbac-setup.yaml
The prometheus configuration, managed as a ConfigMap:
kubectl create -f prometheus/configmap.yaml
Prometheus deployment:
kubectl create -f prometheus/prometheus.deploy.yml
Prometheus service:
kubectl create -f prometheus/prometheus.svc.yml

12.5 Deploy the grafana component:
grafana deployment:
kubectl create -f grafana/grafana-deploy.yaml
grafana service:
kubectl create -f grafana/grafana-svc.yaml
grafana ingress:
kubectl create -f grafana/grafana-ing.yaml

12.6 Check the monitoring stack.
node-exporter: http://10.10.1.210:31672/metrics
prometheus is exposed on NodePort 30003; open http://10.10.1.210:30003/target (it may take a while to come up). prometheus should show that it has connected to the Kubernetes apiserver and is collecting a range of metrics.
kubectl get svc --all-namespaces shows the NodePort grafana is exposed on (32092 in this run).
Open grafana at http://10.10.1.210:32092; the default username and password are both admin.
After logging in, add a data source:
  Name: Prometheus, Type: prometheus
  HTTP settings: Url: http://prometheus:9090, Access: proxy
Leave the rest empty, click Add, then Save & Test; a success message should appear.
Save the dashboard with the save button at the top (or Ctrl+S).
To import a dashboard, click Import, enter template ID 315 to import it online, choose prometheus as the data source and click Import.
Alternatively, download the corresponding JSON template and import it locally; the template is at https://grafana.com/dashboards/315.
The graphs should now be visible.

13. Cluster function tests

(1) First verify that kube-apiserver, kube-controller-manager, kube-scheduler and the pod network work.
On a master, create an Nginx Deployment with two pods (reference: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/):
kubectl create deployment nginx --image=nginx:alpine
Wait a moment for it to be created, then scale it:
kubectl scale deployment nginx --replicas=2
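Rather than just waiting, the rollout can be watched explicitly (a hedged extra check, not part of the original steps):

kubectl rollout status deployment/nginx
kubectl get deployment nginx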

Verify that the Nginx pods are running and have been assigned cluster IPs in the 10.244.0.0/16 range:
kubectl get pods -l app=nginx -o wide
Output like:
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE
nginx-65d5c4f7cc-7pzgp   1/1     Running   0          88s   10.244.1.2   ubuntu2   <none>
nginx-65d5c4f7cc-l2h26   1/1     Running   0          82s   10.244.1.3   ubuntu2   <none>
If a pod is not healthy, use kubectl describe pod nginx-xxxxxx to inspect the details.

(2) Next, verify that kube-proxy works (on a master):
# Expose the deployment as a NodePort service
# https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the port reachable from outside the cluster
kubectl get services nginx
# Output like:
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.98.44.131   <none>        80:32137/TCP   10s
# The service can be reached from outside the cluster via any NodeIP:Port (a browser works too):
curl http://10.10.1.201:32137
curl http://10.10.1.202:32137
curl http://10.10.1.203:32137

(3) Verify DNS and the pod network.
Look up the pod IPs:
NAME                   READY   STATUS    RESTARTS   AGE   IP           NODE
nginx-f9f67b99-tbdhf   1/1     Running   0          4m    10.244.3.2   node3
nginx-f9f67b99-w4zrp   1/1     Running   0          3m    10.244.1.2   node1

(4) Run Busybox in interactive mode:
kubectl run -it curl --image=radial/busyboxplus:curl
# Run `nslookup nginx` to check that the in-cluster service IP resolves correctly, which verifies DNS
nslookup nginx
# Output like:
[ root@curl-87b54756-bt8bs:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.100.213.93 nginx.default.svc.cluster.local
[ root@curl-87b54756-bt8bs:/ ]$
# Access the service by name to verify kube-proxy
[ root@curl-5cc7b478b6-tlf46:/ ]$ curl http://nginx/
# Output:
# <!DOCTYPE html> ... (omitted; it should match the nginx page returned by the earlier curl commands)
# Finally, curl the two pod IPs directly (taken from the listing above) to verify cross-node pod networking
[ root@curl-5cc7b478b6-tlf46:/ ]$ curl http://10.244.3.2/
[ root@curl-5cc7b478b6-tlf46:/ ]$ curl http://10.244.1.2/
