Installing a Kubernetes 1.10 Cluster with kubeadm (2019-08-29)
All of the required packages have already been downloaded and bundled together; based on these packages, the whole deployment and installation can be completed.
The bundle has been uploaded to Baidu Cloud:
File download | File name: kubeadm installation bundle | File size: roughly 800-900 MB
Download link: https://pan.baidu.com/s/1YPYO9PKpkdxRzsJDGsu5jw
This setup uses two servers for the cluster: one as the master and one as the node.
Run these initialization steps on both servers.
yum -y install wget ntpdate lrzsz curl && ntpdate -u cn.pool.ntp.org
Disable swap.
swapoff -a
sed -i '11s/^/#/g' /etc/fstab    # permanently disables swap; adjust the line number to wherever the swap entry sits in your own fstab
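To confirm that swap is actually off, a quick optional check (not part of the original steps):
free -m    # the Swap line should show 0 total after swapoff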
Configure /etc/hosts on both machines.
192.168.106.4 master
192.168.106.5 node
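So that the machine names match these entries, you may also want to set the hostnames explicitly; this is an optional step I am adding here, not part of the original:
hostnamectl set-hostname master    # run on 192.168.106.4
hostnamectl set-hostname node      # run on 192.168.106.5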
Tune the kernel parameters.
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Then load them.
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
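As an optional sanity check (not in the original), you can verify that the module is loaded and the settings took effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables    # should print "... = 1"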
Install Docker and the Kubernetes packages from the bundled RPMs:
tar xf k8s.rpm.tar.gz
cd k8s.rpm
yum -y install ./*.rpm
This works because the required packages were all cached ahead of time, so they can be installed directly. If you were to go the conventional route instead, you would install them as follows.
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo && mv docker-ce.repo /etc/yum.repos.d
yum list docker-ce.x86_64 --showduplicates | sort -r
yum -y install docker-ce-[VERSION]
Here we install the 17.12.1 release:
yum -y install docker-ce-17.12.1.ce-1.el7.centos
First, configure the Aliyun yum repository.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Then install:
yum makecache fast && yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0
Start Docker and enable it at boot.
systemctl start docker && systemctl enable docker && systemctl status docker
On the master:
docker load -i image-master.tar
Retag the images.
docker tag cnych/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag cnych/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag cnych/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag cnych/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
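A quick way to confirm the retagging worked is to list the images under their new names (optional check, not in the original):
docker images | grep -E 'k8s.gcr.io|quay.io/coreos/flannel'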
On the node:
docker load -i image-node.tar
Retag the images:
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag cnych/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag cnych/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag cnych/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
After installation, we still need to configure the kubelet. The kubelet installed from the yum repository generates a config that sets the --cgroup-driver parameter to systemd, while Docker's cgroup driver is cgroupfs; the two must match. We can check Docker's driver with docker info:
[root@localhost ~]$ docker info | grep Cgroup
Cgroup Driver: cgroupfs
Edit the kubelet drop-in file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the KUBELET_CGROUP_ARGS parameter to cgroupfs:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
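(As a side note, and not something this walkthrough does: the mismatch could equally be resolved from the Docker side by keeping the kubelet on systemd and switching Docker's driver. A minimal sketch would be to drop an /etc/docker/daemon.json like the one below and restart Docker; either way, the two drivers just have to match.)
cat > /etc/docker/daemon.json << EOF
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF
systemctl restart docker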
After making the change, reload systemd.
systemctl daemon-reload
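The kubeadm preflight checks below will also warn that the kubelet service is not enabled, so it is worth enabling it now:
systemctl enable kubelet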
With that, the preparation is done. Next we can initialize the cluster on the master node with kubeadm:
[root@localhost ~]$ kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.106.4
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.106.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master] and IPs [192.168.106.4]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 19.502623 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: y6829y.a3qvzgz8vm0horcr
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.106.4:6443 --token y6829y.a3qvzgz8vm0horcr --discovery-token-ca-cert-hash sha256:1c63e983c5319dc9b59a61133ad6e7941886125a603aa888f28547afc50a41ba
Run the commands suggested by the output:
mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
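(If you are operating as root, simply pointing KUBECONFIG at the admin kubeconfig also works; this is an alternative I am adding, not part of the quoted output.)
export KUBECONFIG=/etc/kubernetes/admin.conf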
Pay special attention to the very last line of the output; that kubeadm join command is what node machines will use to join the cluster later.
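If you lose that command or the token expires (bootstrap tokens are valid for 24 hours by default), you can list or recreate tokens on the master; the --print-join-command flag, where your kubeadm build supports it, prints a fresh join command:
kubeadm token list
kubeadm token create --print-join-command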
At this point you can check the node status on the master:
[root@localhost ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    2m        v1.10.0
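The master shows NotReady because no pod network has been deployed yet; flannel fixes that next. The kube-flannel.yml manifest used below is presumably included in the downloaded bundle; if you need to fetch it yourself, something like the following should work (the exact URL is an assumption about the flannel repository layout for v0.10.0):
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml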
Apply the flannel manifest:
[root@localhost ~]$ kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds-amd64" created
daemonset.extensions "kube-flannel-ds-arm64" created
daemonset.extensions "kube-flannel-ds-arm" created
daemonset.extensions "kube-flannel-ds-ppc64le" created
daemonset.extensions "kube-flannel-ds-s390x" created
Check again after a little while.
[root@localhost ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    2m        v1.10.0
On the node, run the command recorded earlier:
kubeadm join 192.168.106.4:6443 --token y6829y.a3qvzgz8vm0horcr --discovery-token-ca-cert-hash sha256:1c63e983c5319dc9b59a61133ad6e7941886125a603aa888f28547afc50a41ba
Wait a moment and check the status on the master:
[root@localhost ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    3m        v1.10.0
node      Ready     <none>    1m        v1.10.0
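As an optional check (not in the original), you can also confirm that the control plane, flannel, and kube-dns pods are all running:
kubectl get pods -n kube-system -o wide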
To deploy the dashboard, add the following two files.
dashboard.yaml:
[root@localhost dashboard]$ cat dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
dashboard-svc.yaml:
[root@localhost dashboard]$ cat dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30033
Then create them with kubectl:
[root@localhost dashboard]$ kubectl create -f dashboard.yaml
serviceaccount "kubernetes-dashboard" created
[root@localhost dashboard]$ kubectl create -f dashboard-svc.yaml
service "kubernetes-dashboard" created
Then visit any node's IP on port 30033 to see the dashboard UI.
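If the page does not come up, a quick way to confirm the NodePort service is in place (using the master IP from this setup as the example address):
kubectl get svc -n kube-system | grep kubernetes-dashboard
curl http://192.168.106.4:30033/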
This article was compiled with reference to:
https://www.jianshu.com/p/9c2e6733ef9b
https://www.qikqiak.com/post/manual-install-high-available-kubernetes-cluster/#10-%E9%83%A8%E7%BD%B2dashboard-%E6%8F%92%E4%BB%B6-a-id-dashboard-a