# Successful run
[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address=172.51.216.81 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.20.6 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.6
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.51.216.81]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.51.216.81 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.51.216.81 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 73.002096 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: kx8q8v.fg193i70v4hgzeh0
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.51.216.81:6443 --token kx8q8v.fg193i70v4hgzeh0 \
    --discovery-token-ca-cert-hash sha256:ea6e05a86b0b8d0d5c3cd9d7a0f920dd4b382de23c28a1ac7325f2105186f88c
# Explanation: the successful output again, annotated step by step
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
# Step 1: set up kubeconfig for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
# Step 2: join the worker nodes (run on each node as root)
kubeadm join 172.51.216.81:6443 --token kx8q8v.fg193i70v4hgzeh0 \
    --discovery-token-ca-cert-hash sha256:ea6e05a86b0b8d0d5c3cd9d7a0f920dd4b382de23c28a1ac7325f2105186f88c
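Two follow-ups usually come next. The --pod-network-cidr=10.244.0.0/16 chosen above matches flannel's default subnet, so flannel is a natural pod network here; the manifest URL below is an assumption and worth checking against the flannel repo. The bootstrap token also expires after 24 hours by default, so a fresh join command may be needed later. A minimal sketch:

# Deploy a pod network on the master; flannel's default (10.244.0.0/16)
# matches the --pod-network-cidr above. Manifest URL is an assumption.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# The bootstrap token expires after 24h by default; regenerate the join command if needed:
kubeadm token create --print-join-command

# After each worker runs kubeadm join, confirm it registers and goes Ready:
kubectl get nodes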
[root@localhost ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
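The recommended.yaml manifest only creates ClusterIP services, and the dashboard login screen expects a bearer token, so a login identity still has to be created. A minimal sketch, where the ServiceAccount name dashboard-admin is our own choice (cluster-admin is far too broad for production, but fine for a lab):

# Create a login identity; the name dashboard-admin is arbitrary.
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kubernetes-dashboard:dashboard-admin

# On v1.20 a token Secret is generated automatically; print it for the login screen.
kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}')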
[root@k8s-master ~]# kubectl get all -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-79c5968bdc-9c9hf   1/1     Running   0          66s
pod/kubernetes-dashboard-658485d5c7-jlbsl        1/1     Running   0          66s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.96.233.196   <none>        8000/TCP   66s
service/kubernetes-dashboard        ClusterIP   10.97.0.33      <none>        443/TCP    66s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           66s
deployment.apps/kubernetes-dashboard        1/1     1            1           66s

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-79c5968bdc   1         1         1       66s
replicaset.apps/kubernetes-dashboard-658485d5c7        1         1         1       66s
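Both services above are ClusterIP, so the dashboard is unreachable from outside the cluster as-is. Two common ways in, sketched here; the NodePort 30443 is an arbitrary pick from the default 30000-32767 range:

# Option 1: kubectl proxy from a machine with a working kubeconfig, then open
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy

# Option 2: switch the service to NodePort (30443 is arbitrary), then open https://<node-ip>:30443/
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
    -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30443}]}}'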
# kubeadm alpha certs check-expiration
[root@k8s-master ~]# kubeadm alpha certs check-expiration
Command "check-expiration" is deprecated, please use the same command under "kubeadm certs"[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 24, 2022 01:05 UTC   364d                                    no
apiserver                  Dec 24, 2022 01:05 UTC   364d            ca                      no
apiserver-etcd-client      Dec 24, 2022 01:05 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 24, 2022 01:05 UTC   364d            ca                      no
controller-manager.conf    Dec 24, 2022 01:05 UTC   364d                                    no
etcd-healthcheck-client    Dec 24, 2022 01:05 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 24, 2022 01:05 UTC   364d            etcd-ca                 no
etcd-server                Dec 24, 2022 01:05 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 24, 2022 01:05 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 24, 2022 01:05 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 22, 2031 01:05 UTC   9y              no
etcd-ca                 Dec 22, 2031 01:05 UTC   9y              no
front-proxy-ca          Dec 22, 2031 01:05 UTC   9y              no
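As the deprecation warning above says, the same commands live under kubeadm certs in v1.20, so the alpha prefix can be dropped going forward:

# Non-deprecated equivalents on kubeadm v1.20+:
kubeadm certs check-expiration
kubeadm certs renew all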
# Renew the certificates
kubeadm config view > /root/kubeadm-config.yaml
kubeadm alpha certs renew all --config=/root/kubeadm-config.yaml
[root@localhost kubernetes-1.20.6]# kubeadm config view > /root/kubeadm-config.yaml
Command "view" is deprecated, This command is deprecated and will be removed in a future release, please use 'kubectl get cm -o yaml -n kube-system kubeadm-config' to get the kubeadm config directly.
[root@localhost kubernetes-1.20.6]# kubeadm alpha certs renew all --config=/root/kubeadm-config.yaml
Command "all" is deprecated, please use the same command under "kubeadm certs"
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
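As the last line warns, renewal only rewrites the files on disk; the running components keep their old certificates until restarted. One common way to bounce the static pods on a kubeadm master is to move the manifests aside briefly (the 20-second pause is an arbitrary grace period for the kubelet), and since admin.conf was renewed, the local kubeconfig copy needs refreshing too. A sketch:

# The kubelet stops static pods whose manifests disappear and recreates them on return.
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
sleep 20    # arbitrary grace period for the kubelet to notice
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests

# admin.conf was renewed, so refresh the local copy used by kubectl.
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config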
# kubeadm alpha certs check-expiration (after renewal)
[root@k8s-master kubernetes-1.20.6]# kubeadm alpha certs check-expiration
Command "check-expiration" is deprecated, please use the same command under "kubeadm certs"[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 22, 2031 01:48 UTC   9y                                      no
apiserver                  Dec 22, 2031 01:48 UTC   9y              ca                      no
apiserver-etcd-client      Dec 22, 2031 01:48 UTC   9y              etcd-ca                 no
apiserver-kubelet-client   Dec 22, 2031 01:48 UTC   9y              ca                      no
controller-manager.conf    Dec 22, 2031 01:48 UTC   9y                                      no
etcd-healthcheck-client    Dec 22, 2031 01:48 UTC   9y              etcd-ca                 no
etcd-peer                  Dec 22, 2031 01:48 UTC   9y              etcd-ca                 no
etcd-server                Dec 22, 2031 01:48 UTC   9y              etcd-ca                 no
front-proxy-client         Dec 22, 2031 01:48 UTC   9y              front-proxy-ca          no
scheduler.conf             Dec 22, 2031 01:48 UTC   9y                                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 22, 2031 01:05 UTC   9y              no
etcd-ca                 Dec 22, 2031 01:05 UTC   9y              no
front-proxy-ca          Dec 22, 2031 01:05 UTC   9y              no