This article explains how to install Kubernetes. The steps are simple, quick, and practical, so let's walk through them.
The installation roughly breaks down into: downloading the software packages, downloading the images, configuring the hosts, starting the Master node, configuring the network, and starting the Node machines.
If you have a way around the firewall, you can follow these steps as written; if not, you can use the 1.6.2 bundle I have already packaged.
First, configure the Kubernetes yum repository on the host that has that access:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Once the repo is in place, download the packages without installing them (I add --downloaddir=. so the RPMs land in the current directory):
yum install -y --downloadonly --downloaddir=. kubelet kubeadm kubectl kubernetes-cni
Bundle up all the downloaded RPMs and copy them back to your local machine. That completes the download of the Kubernetes packages.
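Bundling and shipping the RPMs back can be as simple as the following sketch (the destination host is a placeholder; use whatever transfer method you like):

tar czf k8s-1.6.2-rpms.tar.gz *.rpm          # the RPMs are in the current directory thanks to --downloaddir=.
scp k8s-1.6.2-rpms.tar.gz user@<your-local-machine>:~/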
The RPMs I downloaded are here:
https://pan.baidu.com/s/1clIpjC cp6h
For the images, you can use the script below directly, provided you can reach gcr.io (you know what I mean). I have already pulled a copy myself and mirrored it for everyone to use.
#!/usr/bin/env bash
images=(
  kube-proxy-amd64:v1.6.2
  kube-controller-manager-amd64:v1.6.2
  kube-apiserver-amd64:v1.6.2
  kube-scheduler-amd64:v1.6.2
  kubernetes-dashboard-amd64:v1.6.0
  k8s-dns-sidecar-amd64:1.14.1
  k8s-dns-kube-dns-amd64:1.14.1
  k8s-dns-dnsmasq-nanny-amd64:1.14.1
  etcd-amd64:3.0.17
  pause-amd64:3.0
)

# Pull each image from gcr.io, retag it, and push it to the mirror registry
for imageName in ${images[@]} ; do
  docker pull gcr.io/google_containers/$imageName
  docker tag gcr.io/google_containers/$imageName registry.cn-beijing.aliyuncs.com/bbt_k8s/$imageName
  docker push registry.cn-beijing.aliyuncs.com/bbt_k8s/$imageName
done

# flannel comes from quay.io rather than gcr.io
docker pull quay.io/coreos/flannel:v0.7.0-amd64
docker tag quay.io/coreos/flannel:v0.7.0-amd64 registry.cn-beijing.aliyuncs.com/bbt_k8s/flannel:v0.7.0-amd64
docker push registry.cn-beijing.aliyuncs.com/bbt_k8s/flannel:v0.7.0-amd64
A few notes on this script. It pulls the commonly needed images and pushes them to a registry inside China. You can change registry.cn-beijing.aliyuncs.com/bbt_k8s to your own address, but you must run docker login first, otherwise the push will fail with an authentication error. Aliyun or NetEase registry services are recommended. If you use your own address, adjust the configuration in the rest of this article accordingly; I won't repeat that for every step.
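If you point the script at your own registry, the login step looks roughly like this (the registry address is the one used in the script; substitute your own):

docker login registry.cn-beijing.aliyuncs.com
# Enter the username and password of your container-registry account when prompted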
Image version notes:

Image | Version | Notes | Remarks |
---|---|---|---|
kube-proxy-amd64, kube-controller-manager-amd64, kube-apiserver-amd64, kube-scheduler-amd64 | v1.6.2 | These images track the Kubernetes version itself. I am installing Kubernetes 1.6.2, so the tag is v1.6.2. | |
kubernetes-dashboard-amd64 | v1.6.0 | The Kubernetes dashboard (not great, but at least beginner-friendly). It tracks the Kubernetes minor version: for Kubernetes 1.6.2 the minor version is 1.6, so the tag is v1.6.0. | |
k8s-dns-sidecar-amd64, k8s-dns-kube-dns-amd64, k8s-dns-dnsmasq-nanny-amd64 | 1.14.1 | The DNS service; it does not follow Kubernetes upgrades. See https://kubernetes.io/docs/getting-started-guides/kubeadm/ for the matching version. | |
etcd-amd64 | 3.0.17 | The etcd service; it does not follow Kubernetes upgrades. See https://kubernetes.io/docs/getting-started-guides/kubeadm/ for the matching version. | |
pause-amd64 | 3.0 | Does not follow Kubernetes upgrades; see https://kubernetes.io/docs/getting-started-guides/kubeadm/ for the matching version. | The version has been 3.0 for a long time. |
flannel | v0.7.0-amd64 | The network add-on. I use flannel here, but others work too; check the add-on's own documentation for version information, e.g. https://github.com/coreos/flannel/tree/master/Documentation for flannel. | |
OK, once these images are downloaded and mirrored, this part is done. If you have no way around the firewall, just skip this step and use the mirrored copies.
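If you rely on the mirrored copies, a quick way to confirm they are reachable without a proxy is to pull one of them directly (this assumes the mirror namespace above is publicly readable):

docker pull registry.cn-beijing.aliyuncs.com/bbt_k8s/pause-amd64:3.0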
With the packages and images downloaded, we can start the actual installation.
Start by updating the system; there's nothing special about this step.
yum update -y
Kubernetes 1.6.x has only been tested against Docker 1.12. The latest Docker releases will also run, but installing them isn't recommended; stick with 1.12 to avoid surprises. Install 1.12.6 with:
curl -sSL http://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/docker-engine/internet | sh /dev/stdin 1.12.6
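A quick sanity check that the expected version landed:

docker --version   # should print something like "Docker version 1.12.6, ..."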
Once Docker is installed, prevent it from being upgraded by adding the following line to /etc/yum.conf:
exclude=docker-engine*
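One way to append that line without opening an editor (a sketch):

# Pin Docker so a later `yum update` won't replace 1.12.6
echo 'exclude=docker-engine*' >> /etc/yum.conf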
Next, configure a registry mirror (accelerator) so that pulling images isn't painfully slow.
Edit /etc/docker/daemon.json and add the following, substituting your own mirror address:
{
  "registry-mirrors": ["https://<your-mirror-address>"]
}
Then reload systemd and enable and start the Docker service:
systemctl daemon-reload
systemctl enable docker
systemctl start docker
Next, enable the bridge netfilter support that flannel needs; whether you need this depends on which network add-on you choose.
Edit /usr/lib/sysctl.d/00-system.conf and change net.bridge.bridge-nf-call-iptables to 1, then apply the same value to the running kernel:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
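The file edit itself can also be scripted; this is a sketch and assumes the key is already present in 00-system.conf with the value 0:

# Persist the setting across reboots
sed -i 's/net.bridge.bridge-nf-call-iptables = 0/net.bridge.bridge-nf-call-iptables = 1/' /usr/lib/sysctl.d/00-system.conf
# The running value should already be 1 after the echo above
sysctl net.bridge.bridge-nf-call-iptables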
Upload the RPM packages to each of your servers and install them:
yum install -y *.rpm
Then enable kubelet so it starts on boot:
systemctl enable kubelet
Next, configure the kubelet by replacing /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with the following:
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_ALIYUN_ARGS=--pod-infra-container-image=registry-vpc.cn-beijing.aliyuncs.com/bbt_k8s/pause-amd64:3.0"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS $KUBELET_ALIYUN_ARGS
Two things are fixed here: the pod infrastructure (pause) container now comes from our own registry, and the resource-management (cgroup driver) setting is removed, because the way recent Kubernetes versions manage resources conflicts with Docker's default. See https://github.com/kubernetes/release/issues/306 for details.
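To see which cgroup driver your Docker is using (the source of the conflict mentioned above), a quick check:

# Prints e.g. "Cgroup Driver: cgroupfs"; the kubelet must agree with this if you keep the flag
docker info 2>/dev/null | grep -i 'cgroup driver'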
Then reload systemd:
systemctl daemon-reload
That completes the host preparation. If you're using VMs, just clone the machine three times; with physical machines, repeat these steps on all three. Then give each host a hostname that matches its role, because Kubernetes uses the hostname as the node identity.
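Setting the hostname is a one-liner; the names below are only examples (the Master in my output later is called node0):

hostnamectl set-hostname node0   # on the Master
hostnamectl set-hostname node1   # on the first worker
hostnamectl set-hostname node2   # on the second worker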
With the hosts configured, we can bring up the Master node. In production two or three Masters are usually recommended, but for local testing we'll keep it simple: a single Master is enough.
export KUBE_REPO_PREFIX="registry-vpc.cn-beijing.aliyuncs.com/bbt_k8s"
export KUBE_ETCD_IMAGE="registry-vpc.cn-beijing.aliyuncs.com/bbt_k8s/etcd-amd64:3.0.17"
kubeadm init --kubernetes-version=v1.6.2 --pod-network-cidr=10.96.0.0/12
The first two environment variables tell kubeadm to pull its images from our mirror during initialization.
The final kubeadm init command initializes the Master node. Its parameters are explained below.
Parameter | Meaning | Remarks |
---|---|---|
--kubernetes-version | The Kubernetes version; pick it to match the images and RPMs you downloaded. | I'm using 1.6.2, so the value is v1.6.2. |
--pod-network-cidr | The pod network; anything that doesn't clash with the host network works. I use 10.96.0.0/12. | This ties in with the KUBELET_DNS_ARGS declared in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf above; change the two together. |
Run it, wait a little while, and initialization completes. The output looks like this:
kubeadm init --kubernetes-version=v1.6.2 --pod-network-cidr=10.96.0.0/12
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.2
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [node0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.41]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 14.583864 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 6.008990 seconds
[token] Using token: e7986d.e440de5882342711
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 1111.1111111111111 *.*.*.*:6443
One line in that output is very important: find and copy the statement similar to the one below; it is used later to join the worker nodes.
kubeadm join --token 11111.11111111111111 *.*.*.*:6443
Next, let's look at the state of the cluster. I'm doing this from a Mac; Linux and Windows users can adapt the steps for their platforms.
Install kubectl:
brew install kubectl
Then copy /etc/kubernetes/admin.conf from the Master node to ~/.kube/config on your local machine,
and run kubectl get nodes. I had already finished the whole installation when I ran this, so all the nodes showed up; as long as you can see the nodes, it worked.
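A minimal sketch of those two steps (replace <master-ip> with your Master's address):

# Fetch the admin kubeconfig from the Master
mkdir -p ~/.kube
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config

# Nodes show STATUS "Ready" once the network add-on from the next step is running
kubectl get nodes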
Next, install the network add-on; I'm using flannel here. Create the following two files.
kube-flannel-rbac.yml
# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f kube-flannel-rbac.yml
# Create the pod using the same namespace used by the flannel serviceaccount:
# $ kubectl create --namespace kube-system -f kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
kube-flannel-ds.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.96.0.0/12",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: registry.cn-beijing.aliyuncs.com/bbt_k8s/flannel:v0.7.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: registry.cn-beijing.aliyuncs.com/bbt_k8s/flannel:v0.7.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Then apply them with:
kubectl create -f kube-flannel-rbac.yml
kubectl create -f kube-flannel-ds.yaml
On each of the two Node machines, run the following.
export KUBE_REPO_PREFIX="registry-vpc.cn-beijing.aliyuncs.com/bbt_k8s"
export KUBE_ETCD_IMAGE="registry-vpc.cn-beijing.aliyuncs.com/bbt_k8s/etcd-amd64:3.0.17"
kubeadm join --token 1111.111111111111 *.*.*.*:6443
For the kubeadm join line, use the token and address printed when you initialized the Master node.
In theory the cluster is usable at this point. What follows is the installation of the Kubernetes Dashboard; it is optional and provided for reference only.
Create the file kubernetes-dashboard.yaml:
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.6 (RBAC enabled).
#
# Example usage: kubectl create -f kubernetes-dashboard.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-beijing.aliyuncs.com/bbt_k8s/kubernetes-dashboard-amd64:v1.6.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
Create the file dashboard-rbac.yaml:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
Then apply both files:
kubectl create -f dashboard-rbac.yaml
kubectl create -f kubernetes-dashboard.yaml
Finally, use the command below to find the exposed port; the value you need is the NodePort:
kubectl describe --namespace kube-system service kubernetes-dashboard
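If you only want the port number itself, jsonpath works too (a sketch); then open http://<any-node-ip>:<node-port> in a browser:

# Print just the NodePort assigned to the dashboard service
kubectl --namespace kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'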
That brings this installation guide to an end. Now go and try it out yourself.