
Deploying a Complete K8S Cluster (Part 2)

Deploying the Dashboard UI


[root@k8s-master1 YAML]# kubectl apply -f dashboard.yaml

namespace/kubernetes-dashboard created

serviceaccount/kubernetes-dashboard created

service/kubernetes-dashboard created

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

secret/kubernetes-dashboard-key-holder created

configmap/kubernetes-dashboard-settings created

role.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

deployment.apps/kubernetes-dashboard created

service/dashboard-metrics-scraper created

deployment.apps/dashboard-metrics-scraper created

[root@k8s-master1 YAML]# kubectl get pods -n kubernetes-dashboard

NAME                                         READY   STATUS    RESTARTS   AGE

dashboard-metrics-scraper-566cddb686-v5s8t   1/1     Running   0          22m

kubernetes-dashboard-7b5bf5d559-sqpd7        1/1     Running   0          22m

[root@k8s-master1 YAML]# kubectl get svc -n kubernetes-dashboard    

NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE

dashboard-metrics-scraper   ClusterIP   10.0.0.180   <none>        8000/TCP        23m

kubernetes-dashboard        NodePort    10.0.0.163   <none>        443:30001/TCP   23m

[root@k8s-master1 YAML]#  kubectl apply -f dashboard-adminuser.yaml

serviceaccount/admin-user created

clusterrolebinding.rbac.authorization.k8s.io/admin-user created
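The dashboard-adminuser.yaml file itself is not shown in the post. Judging from the objects it creates (serviceaccount/admin-user and clusterrolebinding/admin-user), it is most likely the upstream recommended manifest, roughly:

```yaml
# Hypothetical reconstruction of dashboard-adminuser.yaml, based only on
# the objects created above; it binds the admin-user ServiceAccount to
# the built-in cluster-admin ClusterRole.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

Note that cluster-admin grants full cluster access, so the resulting token should be guarded carefully.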

Create a token that can access the dashboard:

[root@k8s-master1 src]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Name:         admin-user-token-2k5k9

Namespace:    kubernetes-dashboard

Labels:       <none>

Annotations:  kubernetes.io/service-account.name: admin-user

              kubernetes.io/service-account.uid: 14110df7-4a24-4a06-a99e-18c3a60c5b13

Type:  kubernetes.io/service-account-token

Data

====

ca.crt:     1359 bytes

namespace:  20 bytes

token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkV5VUtIek9UeUs1WnRnbzJzVzgyaEJKblM3UDFiMXdHTEdPeFhkZmxwaDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJrNWs5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxNDExMGRmNy00YTI0LTRhMDYtYTk5ZS0xOGMzYTYwYzViMTMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.eURKAOmq-DOPyf7B_ZH2nIg4QxcMhmy6VL4miZuoXx7g70V69rhQjEdR156TujxHkXIFz4X6biifycm_gLxShn2sAwoiBohzKOogJZLo1hXWl6pAGHbAGLuEZsvN5GrSmyUhC955ztheNve0xx5QTwFLtXFSOuTwnzzKEHYMyfivYTVmf8iovx0S2SS1IQxqFOZxMNH5DKUCK7tleEZxnXcHzUG2zTSn6D7nL8EtAzOAD_kVx6dKsQt4fbMqiOcyG_qFfFopU9ZJwsILTDma4k3iecRAb4KmNlRaasFdXLptF6SDs0IceHqE9hm3yoOB7pZXWsptNafmcrFCSOEjaQ
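The long command above chains grep and awk to find the token secret's name before describing it. That extraction can be exercised offline on canned `kubectl get secret` output (the sample rows below are illustrative, not from the cluster):

```shell
# Simulated `kubectl -n kubernetes-dashboard get secret` output (header + rows)
secrets='NAME                               TYPE                                  DATA   AGE
admin-user-token-2k5k9             kubernetes.io/service-account-token   3      22m
kubernetes-dashboard-certs         Opaque                                0      23m'
# Same filter the describe command uses: keep the admin-user row, print column 1
name=$(printf '%s\n' "$secrets" | grep admin-user | awk '{print $1}')
echo "$name"   # admin-user-token-2k5k9
```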

[Screenshot: Kubernetes dashboard login page]

Open the dashboard at the NodePort shown above (https over port 30001). There are two sign-in methods: a kubeconfig file or a token. Here we use the second method, token authentication, and paste in the token printed above.

[Screenshot: dashboard overview after a successful login]

The screen above shows that the dashboard login succeeded.

Deploying CoreDNS:

[root@k8s-master1 YAML]# kubectl apply -f coredns.yaml 

serviceaccount/coredns created

clusterrole.rbac.authorization.k8s.io/system:coredns created

clusterrolebinding.rbac.authorization.k8s.io/system:coredns created

configmap/coredns created

deployment.apps/coredns created

service/kube-dns created

Use the bs.yaml file to test whether DNS resolution works:

[root@k8s-master1 src]# kubectl apply -f bs.yaml

pod/busybox created
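bs.yaml is not reproduced in the post; a typical DNS-test pod looks like the sketch below (the image tag is an assumption — busybox:1.28 is often chosen because the DNS tools in some newer busybox images are unreliable):

```yaml
# Hypothetical bs.yaml: a long-running busybox pod for DNS tests
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
  restartPolicy: Always
```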

[root@k8s-master1 YAML]# kubectl get pods

NAME                  READY   STATUS    RESTARTS   AGE

busybox               1/1     Running   0          6m47s

web-d86c95cc9-8tmkl   1/1     Running   0          65m

Exec into busybox and ping the corresponding services to check whether the names resolve:

[root@k8s-master1 YAML]# kubectl exec -it busybox sh

/ # ping web

PING web (10.0.0.203): 56 data bytes

64 bytes from 10.0.0.203: seq=0 ttl=64 time=0.394 ms

64 bytes from 10.0.0.203: seq=1 ttl=64 time=0.323 ms

^C

--- web ping statistics ---

2 packets transmitted, 2 packets received, 0% packet loss

round-trip min/avg/max = 0.323/0.358/0.394 ms

/ # ping kubernetes

PING kubernetes (10.0.0.1): 56 data bytes

64 bytes from 10.0.0.1: seq=0 ttl=64 time=0.344 ms

64 bytes from 10.0.0.1: seq=1 ttl=64 time=0.239 ms

^C

--- kubernetes ping statistics ---

2 packets transmitted, 2 packets received, 0% packet loss

round-trip min/avg/max = 0.239/0.291/0.344 ms

/ # 

As shown above, the names resolve, which means CoreDNS is installed and working.
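Short names like `web` and `kubernetes` resolve because the kubelet writes a `search` line into each pod's /etc/resolv.conf; the resolver appends each suffix in turn before querying CoreDNS. A minimal sketch of that expansion (search domains assume the default `cluster.local` zone and the `default` namespace):

```shell
# Illustrative: expand a short name through the resolv.conf search path,
# the way the pod's resolver does before querying CoreDNS.
name="web"
search="default.svc.cluster.local svc.cluster.local cluster.local"
candidates=""
for d in $search; do
  candidates="$candidates $name.$d"
done
echo $candidates
# web.default.svc.cluster.local web.svc.cluster.local web.cluster.local
```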

Deploying keepalived + nginx (install on both machines):

[root@lvs1 ~]# rpm -ivh http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm

Retrieving http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm

warning: /var/tmp/rpm-tmp.oiFMgm: Header V4 RSA/SHA1 Signature, key ID 7bd9bf62: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...

   1:nginx-1:1.16.0-1.el7.ngx         ################################# [100%]

----------------------------------------------------------------------

Thanks for using nginx!

Please find the official documentation for nginx here:

* http://nginx.org/en/docs/

Please subscribe to nginx-announce mailing list to get

the most important news about nginx:

* http://nginx.org/en/support.html

Commercial subscriptions for nginx are available on:

* http://nginx.com/products/

----------------------------------------------------------------------

[root@lvs1 ~]# systemctl enable nginx

Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

[root@lvs1 ~]# systemctl status nginx

● nginx.service - nginx - high performance web server

   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)

   Active: inactive (dead)

     Docs: http://nginx.org/en/docs/

[root@lvs1 ~]# systemctl start nginx 

[root@lvs1 ~]# systemctl status nginx

● nginx.service - nginx - high performance web server

   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)

   Active: active (running) since Sat 2020-02-01 14:41:09 CST; 11s ago

     Docs: http://nginx.org/en/docs/

  Process: 1681 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)

 Main PID: 1682 (nginx)

   CGroup: /system.slice/nginx.service

           ├─1682 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf

           └─1683 nginx: worker process

Feb 01 14:41:09 lvs1 systemd[1]: Starting nginx - high performance web server...

Feb 01 14:41:09 lvs1 systemd[1]: Started nginx - high performance web server.

[root@lvs1 ~]# yum install keepalived -y

Loaded plugins: fastestmirror

Determining fastest mirrors

 * base: mirrors.aliyun.com

 * extras: mirrors.cn99.com

 * updates: mirrors.aliyun.com

base                                                                                                                                              | 3.6 kB  00:00:00     

extras                                                                                                                                            | 2.9 kB  00:00:00     

updates                                                                                                                                           | 2.9 kB  00:00:00     

(1/2): extras/7/x86_64/primary_db                                                                                                                 | 159 kB  00:00:00     

(2/2): updates/7/x86_64/primary_db                                                                                                                | 5.9 MB  00:00:01     

Resolving Dependencies

--> Running transaction check

---> Package keepalived.x86_64 0:1.3.5-16.el7 will be installed

--> Processing Dependency: libnetsnmpmibs.so.31()(64bit) for package: keepalived-1.3.5-16.el7.x86_64

--> Processing Dependency: libnetsnmpagent.so.31()(64bit) for package: keepalived-1.3.5-16.el7.x86_64

--> Processing Dependency: libnetsnmp.so.31()(64bit) for package: keepalived-1.3.5-16.el7.x86_64

--> Running transaction check

---> Package net-snmp-agent-libs.x86_64 1:5.7.2-43.el7 will be installed

--> Processing Dependency: libsensors.so.4()(64bit) for package: 1:net-snmp-agent-libs-5.7.2-43.el7.x86_64

---> Package net-snmp-libs.x86_64 1:5.7.2-43.el7 will be installed

--> Running transaction check

---> Package lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================================================================================

 Package                                     Arch                           Version                                                   Repository                    Size

=========================================================================================================================================================================

Installing:

 keepalived                                  x86_64                         1.3.5-16.el7                                              base                         331 k

Installing for dependencies:

 lm_sensors-libs                             x86_64                         3.4.0-8.20160601gitf9185e5.el7                            base                          42 k

 net-snmp-agent-libs                         x86_64                         1:5.7.2-43.el7                                            base                         706 k

 net-snmp-libs                               x86_64                         1:5.7.2-43.el7                                            base                         750 k

Transaction Summary

=========================================================================================================================================================================

Install  1 Package (+3 Dependent packages)

Total download size: 1.8 M

Installed size: 6.0 M

Downloading packages:

(1/4): lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm                                                                                  |  42 kB  00:00:00     

(2/4): net-snmp-agent-libs-5.7.2-43.el7.x86_64.rpm                                                                                                | 706 kB  00:00:00     

(3/4): net-snmp-libs-5.7.2-43.el7.x86_64.rpm                                                                                                      | 750 kB  00:00:00     

(4/4): keepalived-1.3.5-16.el7.x86_64.rpm                                                                                                         | 331 kB  00:00:01     

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Total                                                                                                                                    1.0 MB/s | 1.8 MB  00:00:01     

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Warning: RPMDB altered outside of yum.

  Installing : 1:net-snmp-libs-5.7.2-43.el7.x86_64                                                                                                                   1/4 

  Installing : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64                                                                                                 2/4 

  Installing : 1:net-snmp-agent-libs-5.7.2-43.el7.x86_64                                                                                                             3/4 

  Installing : keepalived-1.3.5-16.el7.x86_64                                                                                                                        4/4 

  Verifying  : keepalived-1.3.5-16.el7.x86_64                                                                                                                        1/4 

  Verifying  : 1:net-snmp-agent-libs-5.7.2-43.el7.x86_64                                                                                                             2/4 

  Verifying  : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64                                                                                                 3/4 

  Verifying  : 1:net-snmp-libs-5.7.2-43.el7.x86_64                                                                                                                   4/4 

Installed:

  keepalived.x86_64 0:1.3.5-16.el7                                                                                                                                       

Dependency Installed:

  lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7            net-snmp-agent-libs.x86_64 1:5.7.2-43.el7            net-snmp-libs.x86_64 1:5.7.2-43.el7           

Complete!

Master keepalived configuration:

[root@lvs1 nginx]# cat /etc/keepalived/keepalived.conf 

     

global_defs { 

   notification_email { 

     acassen@firewall.loc 

     failover@firewall.loc 

     sysadmin@firewall.loc 

   } 

   notification_email_from Alexandre.Cassen@firewall.loc  

   smtp_server 127.0.0.1 

   smtp_connect_timeout 30 

   router_id NGINX_MASTER
}

vrrp_script check_nginx {

    script "/etc/keepalived/check_nginx.sh"

}

vrrp_instance VI_1 { 

    state MASTER 

    interface eth0

    virtual_router_id 51 # VRRP router ID; must be unique per instance

    priority 100    # priority; set to 90 on the backup server

    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second

    authentication { 

        auth_type PASS      

        auth_pass 1111 

    }  

    virtual_ipaddress { 

        192.168.1.120

    } 

    track_script {

        check_nginx

    } 

}
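As written, the vrrp_script block declares only the script path, so keepalived falls back to its defaults (run the check every second; on failure the instance enters FAULT state and releases the VIP). keepalived also accepts explicit tuning options such as interval, fall, and rise; the values below are illustrative, not from the original post:

```
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2    # run the check every 2 seconds
    fall 2        # require 2 consecutive failures before marking down
    rise 1        # one success marks it healthy again
}
```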

Master nginx configuration:

[root@lvs1 nginx]# cat /etc/nginx/nginx.conf

user  nginx;

worker_processes  4;

error_log  /var/log/nginx/error.log warn;

pid        /var/run/nginx.pid;

events {

    worker_connections  1024;

}

stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {

                server 192.168.1.124:6443;      

                server 192.168.1.125:6443;

                server 192.168.1.126:6443;

            }

    

    server {

       listen 6443;

       proxy_pass k8s-apiserver;

    }

}

http {

    include       /etc/nginx/mime.types;

    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '

                      '$status $body_bytes_sent "$http_referer" '

                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;

}

Backup keepalived configuration:

[root@lvs2 keepalived]# cat /etc/keepalived/keepalived.conf 

     

global_defs { 

   notification_email { 

     acassen@firewall.loc 

     failover@firewall.loc 

     sysadmin@firewall.loc 

   } 

   notification_email_from Alexandre.Cassen@firewall.loc  

   smtp_server 127.0.0.1 

   smtp_connect_timeout 30 

   router_id NGINX_BACKUP
}

vrrp_script check_nginx {

    script "/etc/keepalived/check_nginx.sh"

}

vrrp_instance VI_1 { 

    state BACKUP 

    interface eth0

    virtual_router_id 51 # VRRP router ID; must be unique per instance

    priority 90    # priority; the backup server uses 90

    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second

    authentication { 

        auth_type PASS      

        auth_pass 1111 

    }  

    virtual_ipaddress { 

        192.168.1.120

    } 

    track_script {

        check_nginx

    } 

}

Backup nginx configuration:

[root@lvs2 keepalived]# cat /etc/nginx/nginx.conf

user  nginx;

worker_processes  4;

error_log  /var/log/nginx/error.log warn;

pid        /var/run/nginx.pid;

events {

    worker_connections  1024;

}

stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {

                server 192.168.1.124:6443;

                server 192.168.1.125:6443;

                server 192.168.1.126:6443;

            }

    

    server {

       listen 6443;

       proxy_pass k8s-apiserver;

    }

}

http {

    include       /etc/nginx/mime.types;

    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '

                      '$status $body_bytes_sent "$http_referer" '

                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;

}

nginx health-check script:

The script must be made executable on both machines:

[root@lvs1 nginx]# chmod +x /etc/keepalived/check_nginx.sh

[root@lvs2 nginx]# chmod +x /etc/keepalived/check_nginx.sh

[root@lvs2 keepalived]# cat check_nginx.sh 

#!/bin/bash

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then

    exit 1

else

    exit 0

fi
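The core of check_nginx.sh is the process count: ps output is filtered for nginx, the grep process itself is excluded (the real script also excludes its own PID via `$$`), and `-c` counts what remains. The counting logic can be exercised offline on canned ps output (the sample lines are illustrative; the `$$` exclusion is dropped here because the sample PIDs are fixed):

```shell
# Simulated `ps -ef` lines: an nginx master, a worker, and the grep itself
ps_out='root   1682     1  0 14:41 ?      00:00:00 nginx: master process /usr/sbin/nginx
nginx  1683  1682  0 14:41 ?      00:00:00 nginx: worker process
root   2001  1900  0 14:45 pts/0  00:00:00 grep nginx'
# Keep nginx lines, drop the grep line, count the survivors
count=$(printf '%s\n' "$ps_out" | grep nginx | grep -Ecv "grep")
echo "$count"   # 2  -> nonzero, so the script would exit 0 (nginx alive)
```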

[root@lvs1 nginx]# systemctl restart keepalived && systemctl restart nginx

[root@lvs2 nginx]# systemctl restart keepalived && systemctl restart nginx

Change the apiserver address in the node1, node2, and node3 node configs to the load balancer's IP, then restart kubelet and kube-proxy:

[root@k8s-node1 cfg]# grep "192.168" *

bootstrap.kubeconfig:    server: https://192.168.1.124:6443

kubelet.kubeconfig:    server: https://192.168.1.124:6443

kube-proxy.kubeconfig:    server: https://192.168.1.124:6443

[root@k8s-node1 cfg]# sed -i "s#192.168.1.124#192.168.1.120#g" *

[root@k8s-node1 cfg]# grep "192.168" *

bootstrap.kubeconfig:    server: https://192.168.1.120:6443

kubelet.kubeconfig:    server: https://192.168.1.120:6443

kube-proxy.kubeconfig:    server: https://192.168.1.120:6443

[root@k8s-node1 cfg]# systemctl restart kubelet && systemctl restart kube-proxy
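The sed one-liner rewrites every occurrence of the old apiserver address in the cfg directory's kubeconfig files; using `#` as the delimiter avoids having to escape the slashes in the URL. The substitution itself can be checked offline:

```shell
# A sample kubeconfig server line, rewritten the same way as on the nodes
line='server: https://192.168.1.124:6443'
new=$(printf '%s\n' "$line" | sed 's#192.168.1.124#192.168.1.120#g')
echo "$new"   # server: https://192.168.1.120:6443
```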

[root@k8s-node2 cfg]# sed -i "s#192.168.1.124#192.168.1.120#g" *

[root@k8s-node2 cfg]# grep "192.168" *

bootstrap.kubeconfig:    server: https://192.168.1.120:6443

kubelet.kubeconfig:    server: https://192.168.1.120:6443

kube-proxy.kubeconfig:    server: https://192.168.1.120:6443

[root@k8s-node2 cfg]# systemctl restart kubelet && systemctl restart kube-proxy

[root@k8s-node3 cfg]# sed -i "s#192.168.1.124#192.168.1.120#g" *

[root@k8s-node3 cfg]# grep "192.168" *

bootstrap.kubeconfig:    server: https://192.168.1.120:6443

kubelet.kubeconfig:    server: https://192.168.1.120:6443

kube-proxy.kubeconfig:    server: https://192.168.1.120:6443

[root@k8s-node3 cfg]# systemctl restart kubelet && systemctl restart kube-proxy

Check the cluster state: the nodes are still Ready, so the cluster is healthy. You can also tail the nginx logs on both load balancers to confirm traffic is flowing normally:

[root@k8s-master1 k8s]# kubectl get nodes

NAME        STATUS   ROLES    AGE     VERSION

k8s-node1   Ready    <none>   4h28m   v1.16.0

k8s-node2   Ready    <none>   4h28m   v1.16.0

k8s-node3   Ready    <none>   4h28m   v1.16.0

[root@lvs1 nginx]# tailf /var/log/nginx/k8s-access.log 

192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:34:19 +0800] 200 1160

192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:34:19 +0800] 200 1159

192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:34:19 +0800] 200 1159

192.168.1.129 192.168.1.126:6443 - [01/Feb/2020:15:34:19 +0800] 200 1160

192.168.1.129 192.168.1.126:6443 - [01/Feb/2020:15:34:19 +0800] 200 1159

192.168.1.129 192.168.1.126:6443 - [01/Feb/2020:15:34:19 +0800] 200 1160

192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:34:19 +0800] 200 1160

192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:34:39 +0800] 200 1611

192.168.1.128 192.168.1.126:6443 - [01/Feb/2020:15:34:39 +0800] 200 1611

192.168.1.127 192.168.1.126:6443 - [01/Feb/2020:15:34:39 +0800] 200 1611

[root@lvs2 keepalived]# tailf /var/log/nginx/k8s-access.log 

192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:33:44 +0800] 200 1161

192.168.1.127 192.168.1.125:6443 - [01/Feb/2020:15:33:44 +0800] 200 1159

192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:33:44 +0800] 200 1160

192.168.1.129 192.168.1.124:6443 - [01/Feb/2020:15:33:44 +0800] 200 1159

192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:33:44 +0800] 200 1161

192.168.1.129 192.168.1.126:6443 - [01/Feb/2020:15:33:44 +0800] 200 1161

192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:33:44 +0800] 200 1159

192.168.1.128 192.168.1.126:6443 - [01/Feb/2020:15:33:44 +0800] 200 1161

192.168.1.128 192.168.1.125:6443 - [01/Feb/2020:15:49:06 +0800] 200 2269

192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:51:11 +0800] 200 2270

192.168.1.127 192.168.1.125:6443 - [01/Feb/2020:15:51:47 +0800] 200 2270

192.168.1.128 192.168.1.124:6443 - [01/Feb/2020:15:51:56 +0800] 200 4352

192.168.1.127 192.168.1.124:6443 - [01/Feb/2020:15:52:04 +0800] 200 5390

192.168.1.129 192.168.1.125:6443 - [01/Feb/2020:15:52:07 +0800] 200 4409

This shows that failover works correctly; the K8S cluster setup is complete.


Article URL: http://kswjz.com/article/gdhjhp.html