A fresh install of Kubernetes v1.35 on Ubuntu 24.04.3

Administrator · 2025-12-19

Here it is: a fresh install of Kubernetes v1.35 on Ubuntu 24.04.3, for users in mainland China.

This post walks through installing and deploying the current latest release (as of December 17, 2025), Kubernetes v1.35: Timbernetes (the "World Tree" release).

Background

2025 began with the glow of Octarine (the color of magic, v1.33) and passed through the trials of wind and will (v1.34). As the year closes, our hands rest on the World Tree, inspired by Yggdrasil, the tree of life that connects many realms. Like any great tree, Kubernetes grows ring by ring and release by release, shaped by the care of a global community. At the center of the tree is the Kubernetes wheel encircling the globe; its foundation is the maintainers, contributors, and users who have always held firm. Between day jobs, life changes, and steady open-source stewardship, they prune old APIs and graft on new features, keeping the largest open-source project in the world healthy. Three squirrels guard the tree: a mage holding an LGTM scroll represents the reviewers, a warrior wielding a battle axe and a Kubernetes shield represents the release team, and a rogue carrying a lantern represents the issue triagers, bringing light to the dark issue queue. Together they stand for a vast party of adventurers. Kubernetes v1.35 adds a new ring to the World Tree, shaped by many hands along many paths, pushing the branches higher and the roots deeper.
Timbernetes

Environment

OS: Ubuntu 24.04.3 LTS
Memory: 4 GB
CPU: 2 cores
Disk: 40 GB

Preparation

Add a hosts entry

# vim /etc/hosts
# change the IP to match your environment
192.168.19.21 master
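
# optional: quick check that the name resolves
getent hosts master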

Install required packages

root@hao-ubuntu24043:~#apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
curl is already the newest version (8.5.0-2ubuntu10.6).
gnupg2 is already the newest version (2.4.4-2ubuntu17.3).
software-properties-common is already the newest version (0.99.49.3).
apt-transport-https is already the newest version (2.8.3).
ca-certificates is already the newest version (20240203).
0 upgraded, 0 newly installed, 0 to remove and 90 not upgraded.
root@hao-ubuntu24043:~# containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
root@hao-ubuntu24043:~# sed -i  -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
root@hao-ubuntu24043:~# sed -i   's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml 
root@hao-ubuntu24043:~# systemctl restart containerd
root@hao-ubuntu24043:~# vim /etc/containerd/config.toml 
root@hao-ubuntu24043:~# mkdir -p /etc/containerd/certs.d/docker.io
root@hao-ubuntu24043:~# vim /etc/containerd/certs.d/docker.io/hosts.toml


root@hao-ubuntu24043:~# mkdir  /etc/containerd/certs.d/registry.k8s.io/
root@hao-ubuntu24043:~# vim /etc/containerd/certs.d/registry.k8s.io/hosts.toml
root@hao-ubuntu24043:~# systemctl enable --now containerd
root@hao-ubuntu24043:~# systemctl restart containerd
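
The contents of the hosts.toml files edited above are not shown. A minimal sketch of a docker.io mirror entry, assuming that config_path in /etc/containerd/config.toml has been pointed at /etc/containerd/certs.d and using mirror.example.com as a placeholder for the real mirror/proxy host:

# sketch only -- replace mirror.example.com with your actual mirror/proxy
cat <<'EOF' > /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
EOF
# /etc/containerd/certs.d/registry.k8s.io/hosts.toml follows the same pattern with its own mirror host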


root@hao-ubuntu24043:~# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

root@hao-ubuntu24043:~# mkdir -p -m 755 /etc/apt/keyrings
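# Note: the upstream docs create /etc/apt/keyrings before fetching the key; on Ubuntu 24.04 the
# directory already exists by default, so the order above still worked.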

Issue 1: kubelet : Depends: kubernetes-cni (>= 1.2.0) but it is not installable

This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list.

root@hao-ubuntu24043:~# 
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /
root@hao-ubuntu24043:~# sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Hit:1 https://mirrors.huaweicloud.com/docker-ce/linux/ubuntu noble InRelease
Hit:2 http://security.ubuntu.com/ubuntu noble-security InRelease                             
Get:3 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.35/deb  InRelease [1,224 B]
Get:4 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.35/deb  Packages [2,214 B]
Hit:5 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble InRelease                                                                                                                                                                                                                                
Hit:6 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble-updates InRelease                                                                                                                                                                                                                        
Hit:7 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble-backports InRelease
Fetched 3,438 B in 17s (197 B/s)
Reading package lists... Done
W: https://mirrors.huaweicloud.com/docker-ce/linux/ubuntu/dists/noble/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 kubelet : Depends: kubernetes-cni (>= 1.2.0) but it is not installable
E: Unable to correct problems, you have held broken packages.
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

Looking for a solution: adding the older v1.34 repository fixed it.

The solution came from:

https://github.com/kubernetes/release/issues/3575
https://github.com/kubernetes/release/issues/4100
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/k8s-1-34.gpg
echo 'deb [signed-by=/etc/apt/keyrings/k8s-1-34.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/k8s-1-34.list
deb [signed-by=/etc/apt/keyrings/k8s-1-34.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | sudo tee /etc/apt/sources.list.d/k8s-1-35.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /

Try installing kubernetes-cni

root@hao-ubuntu24043:/home/hao# apt-get install -y kubernetes-cni
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  kubernetes-cni
0 upgraded, 1 newly installed, 0 to remove and 23 not upgraded.
Need to get 39.9 MB of archives.
After this operation, 98.5 MB of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.34/deb  kubernetes-cni 1.7.1-1.1 [39.9 MB]
Fetched 39.9 MB in 42s (954 kB/s)                                                                                                                                                                                                                                                               
Selecting previously unselected package kubernetes-cni.
(Reading database ... 125608 files and directories currently installed.)
Preparing to unpack .../kubernetes-cni_1.7.1-1.1_amd64.deb ...
Unpacking kubernetes-cni (1.7.1-1.1) ...
Setting up kubernetes-cni (1.7.1-1.1) ...
Scanning processes...                                                                                                                                                                                                                                                            
Scanning linux images...

root@hao-ubuntu24043:/home/hao# apt-get install -y kubeadm-1.35.0 kubectl-1.35.0 kubelet-1.35.0
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package kubeadm-1.35.0
E: Couldn't find any package by glob 'kubeadm-1.35.0'
E: Couldn't find any package by regex 'kubeadm-1.35.0'
E: Unable to locate package kubectl-1.35.0
E: Couldn't find any package by glob 'kubectl-1.35.0'
E: Couldn't find any package by regex 'kubectl-1.35.0'
E: Unable to locate package kubelet-1.35.0
E: Couldn't find any package by glob 'kubelet-1.35.0'
E: Couldn't find any package by regex 'kubelet-1.35.0'
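# Note: apt pins a version as 'package=version', not 'package-version'; the intended command would be
# something like (version string as listed by the repository further down):
# apt-get install -y kubelet=1.35.0-1.1 kubeadm=1.35.0-1.1 kubectl=1.35.0-1.1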
root@hao-ubuntu24043:/home/hao# apt-mark unhold kubelet kubeadm kubectl
Canceled hold on kubelet.
Canceled hold on kubeadm.
Canceled hold on kubectl.
root@hao-ubuntu24043:/home/hao# sudo apt install -y kubelet kubeadm kubectl 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following NEW packages will be installed:
  cri-tools kubeadm kubectl kubelet
0 upgraded, 4 newly installed, 0 to remove and 23 not upgraded.
Need to get 53.0 MB of archives.
After this operation, 229 MB of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.35/deb  cri-tools 1.35.0-1.1 [16.2 MB]
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.35/deb  kubeadm 1.35.0-1.1 [12.4 MB]                                                                                                                                                                      
Get:3 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.35/deb  kubectl 1.35.0-1.1 [11.5 MB]                                                                                                                                                                      
Get:4 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.35/deb  kubelet 1.35.0-1.1 [12.9 MB]                                                                                                                                                                      
Fetched 53.0 MB in 8s (6,493 kB/s)                                                                                                                                                                                                                                                              
Selecting previously unselected package cri-tools.
(Reading database ... 125637 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.35.0-1.1_amd64.deb ...
Unpacking cri-tools (1.35.0-1.1) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../kubeadm_1.35.0-1.1_amd64.deb ...
Unpacking kubeadm (1.35.0-1.1) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../kubectl_1.35.0-1.1_amd64.deb ...
Unpacking kubectl (1.35.0-1.1) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../kubelet_1.35.0-1.1_amd64.deb ...
Unpacking kubelet (1.35.0-1.1) ...
Setting up kubectl (1.35.0-1.1) ...
Setting up cri-tools (1.35.0-1.1) ...
Setting up kubelet (1.35.0-1.1) ...
Setting up kubeadm (1.35.0-1.1) ...
Scanning processes...                                                                                                                                                                                                                                                                            
Scanning linux images... 

Enable start at boot

root@hao-ubuntu24043:/home/hao# systemctl enable --now kubelet
root@hao-ubuntu24043:/home/hao# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Thu 2025-12-18 06:38:44 UTC; 7s ago
       Docs: https://kubernetes.io/docs/
    Process: 5602 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 5602 (code=exited, status=1/FAILURE)
        CPU: 35ms

Pre-pull the images

Here we use the Alibaba Cloud mirror registry.aliyuncs.com/google_containers.

root@hao-ubuntu24043:/home/hao# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
W1218 06:43:16.731183    5838 version.go:108] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1218 06:43:16.731243    5838 version.go:109] falling back to the local client version: v1.35.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.35.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.35.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.35.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.35.0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.13.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.10.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.6.6-0

Initialize directly as described in the official docs, with default parameters (kept here only as a record of the process).

Note: this approach is problematic and not recommended; it later leaves flannel unable to obtain a pod CIDR.

Issue 2: kubelet fails to start because swap was not disabled

Swap configuration: by default, the kubelet fails to start if swap memory is detected on the node. The kubelet has supported swap since v1.22; since v1.28, swap is supported only with cgroup v2, and the kubelet's NodeSwap feature gate is in Beta but disabled by default.

Although Kubernetes has supported swap since v1.22, it is disabled by default. Swap had previously only been turned off temporarily with swapoff -a, so it was re-mounted after a reboot; this caused the kubelet startup failure and, in turn, the kubeadm init failure.

Temporarily disable swap

root@hao-ubuntu24043:/home/hao# swapoff -a

Permanently disable swap

root@hao-ubuntu24043:/home/hao# vim /etc/fstab
# comment out the swap line
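# or non-interactively (assumes the default /swap.img swapfile entry):
#   sed -i '/\/swap.img/ s/^/#/' /etc/fstab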

Try initializing the cluster again

root@hao-ubuntu24043:/home/hao# kubeadm init --control-plane-endpoint=master  --image-repository registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hao-ubuntu24043 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.19.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hao-ubuntu24043 localhost] and IPs [192.168.19.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hao-ubuntu24043 localhost] and IPs [192.168.19.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001266979s

Unfortunately, an error has occurred, likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded

To see the stack trace of this error execute with --v=5 or higher

Check the kubelet errors; this confirms the failure is caused by swap not being disabled.

root@hao-ubuntu24043:/home/hao# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Thu 2025-12-18 07:55:00 UTC; 3s ago
       Docs: https://kubernetes.io/docs/
    Process: 1621 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 1621 (code=exited, status=1/FAILURE)
        CPU: 51ms
root@hao-ubuntu24043:/home/hao# journalctl -xeu kubelet
░░ 
░░ A start job for unit kubelet.service has finished successfully.
░░ 
░░ The job identifier is 1367.
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.952607    1621 server.go:525] "Kubelet version" kubeletVersion="v1.35.0"
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.952644    1621 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.952658    1621 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.952663    1621 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.952995    1621 server.go:951] "Client rotation is on, will bootstrap in background"
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.954455    1621 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.955817    1621 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.958408    1621 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.961231    1621 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: I1218 07:55:00.961363    1621 swap_util.go:119] "Swap is on" /proc/swaps contents=<
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]:         Filename                                Type                Size                Used                Priority
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]:         /swap.img                               file                3960828                0                -2
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]:  >
Dec 18 07:55:00 hao-ubuntu24043 kubelet[1621]: E1218 07:55:00.961390    1621 run.go:72] "command failed" err="failed to run Kubelet: running with swap on is not supported, please disable swap or set --fail-swap-on flag to false"
Dec 18 07:55:00 hao-ubuntu24043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ An ExecStart= process belonging to unit kubelet.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 18 07:55:00 hao-ubuntu24043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit kubelet.service has entered the 'failed' state with result 'exit-code'.
Dec 18 07:55:11 hao-ubuntu24043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ Automatic restarting of the unit kubelet.service has been scheduled, as the result for
░░ the configured Restart= setting for the unit.
Dec 18 07:55:11 hao-ubuntu24043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
░░ Subject: A start job for unit kubelet.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit kubelet.service has finished successfully.

Reset the cluster

kubeadm reset all -f

root@hao-ubuntu24043:/home/hao# kubeadm reset all -f
[reset] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[reset] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
W1218 06:59:08.856756 7201 reset.go:141] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://192.168.19.21:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.19.21:6443: connect: connection refused
[preflight] Running pre-flight checks
W1218 06:59:08.856824 7201 removeetcdmember.go:105] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not perform cleanup of CNI plugin configuration,
network filtering rules and kubeconfig files.

For information on how to perform this cleanup manually, please see:
https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
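
As the output notes, CNI configuration, network filtering rules, and kubeconfig files are not cleaned up. A common manual cleanup (not run in this session; adjust to your environment) looks like:

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf $HOME/.kube/config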

Download the flannel manifest

Change the image fields to a proxy/mirror address or an internal registry address, e.g. ghcr.m.ixdev.cn.

root@hao-ubuntu24043:/home/hao# wget https://g.1ab.asia/https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

root@hao-ubuntu24043:/home/hao# vim kube-flannel.yml 
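# after editing, the flannel image lines reference the proxy, e.g. (tag as later reported by kubectl):
# image: ghcr.m.ixdev.cn/flannel-io/flannel:v0.27.4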
root@hao-ubuntu24043:/home/hao# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

flannel keeps restarting on initialization

The swap issue is resolved, but flannel keeps reporting errors.

root@hao-ubuntu24043:/home/hao# kubectl get pod -A
NAMESPACE      NAME                                      READY   STATUS              RESTARTS      AGE
kube-flannel   kube-flannel-ds-gs9vr                     0/1     PodInitializing     0             2s
kube-system    coredns-bbdc5fdf6-99kbd                   0/1     ContainerCreating   0             18h
kube-system    coredns-bbdc5fdf6-wn2sh                   0/1     ContainerCreating   0             18h
kube-system    etcd-hao-ubuntu24043                      1/1     Running             2 (17h ago)   18h
kube-system    kube-apiserver-hao-ubuntu24043            1/1     Running             2 (17h ago)   18h
kube-system    kube-controller-manager-hao-ubuntu24043   1/1     Running             2 (17h ago)   18h
kube-system    kube-proxy-8r89w                          1/1     Running             1 (17h ago)   18h
kube-system    kube-scheduler-hao-ubuntu24043            1/1     Running             2 (17h ago)   18h
root@hao-ubuntu24043:/home/hao# kubectl get pod -A
NAMESPACE      NAME                                      READY   STATUS              RESTARTS      AGE
kube-flannel   kube-flannel-ds-gs9vr                     1/1     Running             1 (3s ago)    6s
kube-system    coredns-bbdc5fdf6-99kbd                   0/1     ContainerCreating   0             18h
kube-system    coredns-bbdc5fdf6-wn2sh                   0/1     ContainerCreating   0             18h
kube-system    etcd-hao-ubuntu24043                      1/1     Running             2 (17h ago)   18h
kube-system    kube-apiserver-hao-ubuntu24043            1/1     Running             2 (17h ago)   18h
kube-system    kube-controller-manager-hao-ubuntu24043   1/1     Running             2 (17h ago)   18h
kube-system    kube-proxy-8r89w                          1/1     Running             1 (17h ago)   18h
kube-system    kube-scheduler-hao-ubuntu24043            1/1     Running             2 (17h ago)   18h
root@hao-ubuntu24043:/home/hao# kubectl get pod -A
NAMESPACE      NAME                                      READY   STATUS              RESTARTS      AGE
kube-flannel   kube-flannel-ds-gs9vr                     0/1     Error               1 (5s ago)    8s
kube-system    coredns-bbdc5fdf6-99kbd                   0/1     ContainerCreating   0             18h
kube-system    coredns-bbdc5fdf6-wn2sh                   0/1     ContainerCreating   0             18h
kube-system    etcd-hao-ubuntu24043                      1/1     Running             2 (17h ago)   18h
kube-system    kube-apiserver-hao-ubuntu24043            1/1     Running             2 (17h ago)   18h
kube-system    kube-controller-manager-hao-ubuntu24043   1/1     Running             2 (17h ago)   18h
kube-system    kube-proxy-8r89w                          1/1     Running             1 (17h ago)   18h
kube-system    kube-scheduler-hao-ubuntu24043            1/1     Running             2 (17h ago)   18h

View the detailed flannel error

Failed to check br_netfilter: stat /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory

root@hao-ubuntu24043:~# kubectl logs -n kube-flannel  kube-flannel-ds-gs9vr -f
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I1219 01:19:16.252168       1 main.go:215] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ipMasqRandomFullyDisable:false ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true blackholeRoute:false netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W1219 01:19:16.252423       1 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1219 01:19:16.260141       1 kube.go:139] Waiting 10m0s for node controller to sync
I1219 01:19:16.260280       1 kube.go:537] Starting kube subnet manager
I1219 01:19:17.260402       1 kube.go:163] Node controller sync successful
I1219 01:19:17.260433       1 main.go:241] Created subnet manager: Kubernetes Subnet Manager - hao-ubuntu24043
I1219 01:19:17.260437       1 main.go:244] Installing signal handlers
I1219 01:19:17.260625       1 main.go:523] Found network config - Backend type: vxlan
E1219 01:19:17.260653       1 main.go:278] Failed to check br_netfilter: stat /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory

Load the br_netfilter module and enable bridge forwarding

🌟 What is br_netfilter?
br_netfilter is a bridge-netfilter module in the Linux kernel. Simply put, it is the piece that lets iptables "see" packets travelling across a bridge.

🔍 Why does it matter so much?
Picture it: in Docker or Kubernetes, containers talk to each other over virtual bridges (such as docker0 or cbr0), but by default packets on those bridges bypass the iptables firewall rules, as if slipping out through a back door. br_netfilter closes that back door so iptables can process bridged traffic normally.

🛠️ What role does it play in Kubernetes/Docker?
The foundation of Kubernetes network policy: if you configure NetworkPolicy in k8s, br_netfilter is what makes those policies take effect. Without it, a NetworkPolicy is a paper firewall that stops nothing.
The bridge for container-to-container traffic: Docker uses the docker0 bridge network by default, and br_netfilter lets container traffic be filtered and forwarded correctly through iptables.
A safeguard for cross-node communication: although it mainly affects traffic within a single node, it is essential for enforcing network policy across the whole cluster.
⚠️ What happens if br_netfilter is not loaded?
Pod-to-Pod traffic through a Service on the same node fails
NetworkPolicy does not take effect, so security policies exist only on paper
You may see errors along the lines of "iptables rules exist but have no effect"

root@hao-ubuntu24043:~# sudo modprobe br_netfilter
root@hao-ubuntu24043:~# echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee /etc/sysctl.d/99-bridge-nf-call-iptables.conf
echo "net.bridge.bridge-nf-call-ip6tables=1" | sudo tee -a /etc/sysctl.d/99-bridge-nf-call-iptables.conf
sudo sysctl -p /etc/sysctl.d/99-bridge-nf-call-iptables.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# check that the file exists
ls /proc/sys/net/bridge/bridge-nf-call-iptables

# should return: /proc/sys/net/bridge/bridge-nf-call-iptables
# check that the value is 1
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# should return: 1
cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
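
# (not run above) modprobe alone does not persist across reboots; to load br_netfilter at boot:
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf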

# install the bridge-utils tool
root@hao-ubuntu24043:~# apt install -y bridge-utils
# check the bridge status
root@hao-ubuntu24043:~# brctl show
bridge name	bridge id		STP enabled	interfaces
cni0		8000.7acb0ea779ef	no		veth5ade2671
							veth8613c69b
docker0		8000.26786a491a10	no		

The error here is because no pod CIDR was assigned (it was not specified at init time).

root@hao-ubuntu24043:~# kubectl get pod -A
NAMESPACE      NAME                                      READY   STATUS              RESTARTS      AGE
kube-flannel   kube-flannel-ds-nhzjc                     0/1     Error               1 (10s ago)   13s
kube-system    coredns-bbdc5fdf6-99kbd                   0/1     ContainerCreating   0             18h
kube-system    coredns-bbdc5fdf6-wn2sh                   0/1     ContainerCreating   0             18h
kube-system    etcd-hao-ubuntu24043                      1/1     Running             2 (17h ago)   18h
kube-system    kube-apiserver-hao-ubuntu24043            1/1     Running             2 (17h ago)   18h
kube-system    kube-controller-manager-hao-ubuntu24043   1/1     Running             2 (17h ago)   18h
kube-system    kube-proxy-8r89w                          1/1     Running             1 (17h ago)   18h
kube-system    kube-scheduler-hao-ubuntu24043            1/1     Running             2 (17h ago)   18h
root@hao-ubuntu24043:~# kubectl logs -n kube-flannel  kube-flannel-ds-nhzjc
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I1219 01:25:06.248816       1 main.go:215] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ipMasqRandomFullyDisable:false ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true blackholeRoute:false netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W1219 01:25:06.249220       1 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1219 01:25:06.257895       1 kube.go:139] Waiting 10m0s for node controller to sync
I1219 01:25:06.257941       1 kube.go:537] Starting kube subnet manager
I1219 01:25:07.258086       1 kube.go:163] Node controller sync successful
I1219 01:25:07.258106       1 main.go:241] Created subnet manager: Kubernetes Subnet Manager - hao-ubuntu24043
I1219 01:25:07.258110       1 main.go:244] Installing signal handlers
I1219 01:25:07.258458       1 main.go:523] Found network config - Backend type: vxlan
I1219 01:25:07.260523       1 kube.go:737] List of node(hao-ubuntu24043) annotations: map[string]string{"node.alpha.kubernetes.io/ttl":"0", "volumes.kubernetes.io/controller-managed-attach-detach":"true"}
I1219 01:25:07.260567       1 match.go:211] Determining IP address of default interface
I1219 01:25:07.260932       1 match.go:269] Using interface with name ens33 and address 192.168.19.21
I1219 01:25:07.260963       1 match.go:291] Defaulting external address to interface address (192.168.19.21)
I1219 01:25:07.261002       1 vxlan.go:128] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I1219 01:25:07.262735       1 kube.go:704] List of node(hao-ubuntu24043) annotations: map[string]string{"node.alpha.kubernetes.io/ttl":"0", "volumes.kubernetes.io/controller-managed-attach-detach":"true"}
E1219 01:25:07.263100       1 main.go:370] Error registering network: failed to acquire lease: node "hao-ubuntu24043" pod cidr not assigned
I1219 01:25:07.263146       1 main.go:503] Stopping shutdownHandler...

Re-run the installation with parameters specified explicitly

kubeadm init \
--kubernetes-version=v1.35.0 \
--image-repository=registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket unix:///run/containerd/containerd.sock \
--v=5
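# Note: --pod-network-cidr=10.244.0.0/16 matches the default Network value in kube-flannel.yml's
# net-conf.json, so flannel can be applied afterwards without editing its ConfigMap.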

I1219 01:32:35.440692  283134 interface.go:432] Looking for default routes with IPv4 addresses
I1219 01:32:35.440776  283134 interface.go:437] Default route transits interface "ens33"
I1219 01:32:35.440990  283134 interface.go:209] Interface ens33 is up
I1219 01:32:35.441465  283134 interface.go:257] Interface "ens33" has 2 addresses :[192.168.19.21/24 fe80::20c:29ff:fe63:c41e/64].
I1219 01:32:35.441568  283134 interface.go:224] Checking addr  192.168.19.21/24.
I1219 01:32:35.441594  283134 interface.go:231] IP found 192.168.19.21
I1219 01:32:35.441636  283134 interface.go:263] Found valid IPv4 address 192.168.19.21 for interface "ens33".
I1219 01:32:35.441667  283134 interface.go:443] Found active IP 192.168.19.21 
I1219 01:32:35.441707  283134 kubelet.go:195] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
I1219 01:32:35.443284  283134 checks.go:609] validating Kubernetes and kubeadm version
I1219 01:32:35.443687  283134 checks.go:203] validating if the firewall is enabled and active
I1219 01:32:35.454005  283134 checks.go:238] validating availability of port 6443
I1219 01:32:35.454237  283134 checks.go:238] validating availability of port 10259
I1219 01:32:35.454269  283134 checks.go:238] validating availability of port 10257
I1219 01:32:35.454298  283134 checks.go:315] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1219 01:32:35.454310  283134 checks.go:315] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1219 01:32:35.454315  283134 checks.go:315] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1219 01:32:35.454320  283134 checks.go:315] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1219 01:32:35.454335  283134 checks.go:465] validating if the connectivity type is via proxy or direct
I1219 01:32:35.454382  283134 checks.go:504] validating http connectivity to first IP address in the CIDR
I1219 01:32:35.454400  283134 checks.go:504] validating http connectivity to first IP address in the CIDR
I1219 01:32:35.454411  283134 checks.go:89] validating the container runtime
I1219 01:32:35.458619  283134 checks.go:120] validating the container runtime version compatibility
I1219 01:32:35.461049  283134 checks.go:685] validating whether swap is enabled or not
I1219 01:32:35.461211  283134 checks.go:405] validating the presence of executable losetup
I1219 01:32:35.461269  283134 checks.go:405] validating the presence of executable mount
I1219 01:32:35.461287  283134 checks.go:405] validating the presence of executable cp
I1219 01:32:35.461303  283134 checks.go:551] running system verification checks
I1219 01:32:35.503539  283134 checks.go:436] checking whether the given node name is valid and reachable using net.LookupHost
I1219 01:32:35.504004  283134 checks.go:651] validating kubelet version
I1219 01:32:35.540425  283134 checks.go:165] validating if the "kubelet" service is enabled and active
I1219 01:32:35.555686  283134 checks.go:238] validating availability of port 10250
I1219 01:32:35.556912  283134 checks.go:364] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1219 01:32:35.557307  283134 checks.go:238] validating availability of port 2379
I1219 01:32:35.557717  283134 checks.go:238] validating availability of port 2380
I1219 01:32:35.557770  283134 checks.go:278] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1219 01:32:35.560809  283134 checks.go:892] using image pull policy: IfNotPresent
I1219 01:32:35.561423  283134 checks.go:904] failed to detect the sandbox image for local container runtime, no 'sandboxImage' field in CRI info config
I1219 01:32:35.561964  283134 checks.go:923] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.35.0
I1219 01:32:35.562233  283134 checks.go:923] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.35.0
I1219 01:32:35.562579  283134 checks.go:923] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.35.0
I1219 01:32:35.562884  283134 checks.go:923] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.35.0
I1219 01:32:35.563115  283134 checks.go:923] image exists: registry.aliyuncs.com/google_containers/coredns:v1.13.1
I1219 01:32:35.563406  283134 checks.go:923] image exists: registry.aliyuncs.com/google_containers/pause:3.10.1
I1219 01:32:35.563722  283134 checks.go:923] image exists: registry.aliyuncs.com/google_containers/etcd:3.6.6-0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1219 01:32:35.564194  283134 certs.go:111] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1219 01:32:35.661624  283134 certs.go:472] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hao-ubuntu24043 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.19.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1219 01:32:35.887950  283134 certs.go:111] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1219 01:32:35.916031  283134 certs.go:472] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1219 01:32:35.940156  283134 certs.go:111] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1219 01:32:35.969486  283134 certs.go:472] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hao-ubuntu24043 localhost] and IPs [192.168.19.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hao-ubuntu24043 localhost] and IPs [192.168.19.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1219 01:32:36.246346  283134 certs.go:77] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1219 01:32:36.291496  283134 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1219 01:32:36.309676  283134 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I1219 01:32:36.347928  283134 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1219 01:32:36.489211  283134 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1219 01:32:36.532175  283134 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1219 01:32:36.633055  283134 local.go:67] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1219 01:32:36.633724  283134 manifests.go:125] [control-plane] getting StaticPodSpecs
I1219 01:32:36.633867  283134 certs.go:472] validating certificate period for CA certificate
I1219 01:32:36.633899  283134 manifests.go:151] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1219 01:32:36.633903  283134 manifests.go:151] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I1219 01:32:36.633905  283134 manifests.go:151] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1219 01:32:36.633908  283134 manifests.go:151] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I1219 01:32:36.633910  283134 manifests.go:151] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I1219 01:32:36.634685  283134 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1219 01:32:36.634732  283134 manifests.go:125] [control-plane] getting StaticPodSpecs
I1219 01:32:36.634947  283134 manifests.go:151] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1219 01:32:36.634954  283134 manifests.go:151] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I1219 01:32:36.634958  283134 manifests.go:151] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1219 01:32:36.634960  283134 manifests.go:151] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1219 01:32:36.634963  283134 manifests.go:151] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1219 01:32:36.634966  283134 manifests.go:151] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I1219 01:32:36.634970  283134 manifests.go:151] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I1219 01:32:36.636683  283134 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1219 01:32:36.637385  283134 manifests.go:125] [control-plane] getting StaticPodSpecs
I1219 01:32:36.637864  283134 manifests.go:151] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1219 01:32:36.638534  283134 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I1219 01:32:36.638860  283134 kubelet.go:69] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I1219 01:32:36.875279  283134 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 01:32:36.875655  283134 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 01:32:36.875698  283134 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 01:32:36.875732  283134 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
I1219 01:32:36.875766  283134 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
I1219 01:32:36.875798  283134 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.76708ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.19.21:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1219 01:32:37.380118  283134 wait.go:290] kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused
I1219 01:32:37.380235  283134 wait.go:290] kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
I1219 01:32:37.380246  283134 wait.go:283] kube-apiserver check failed at https://192.168.19.21:6443/livez: Get "https://192.168.19.21:6443/livez?timeout=10s": dial tcp 192.168.19.21:6443: connect: connection refused
I1219 01:32:37.880873  283134 wait.go:290] kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
I1219 01:32:37.880948  283134 wait.go:290] kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused
[control-plane-check] kube-controller-manager is healthy after 1.002559643s
I1219 01:32:39.279453  283134 wait.go:283] kube-apiserver check failed at https://192.168.19.21:6443/livez: forbidden: User "kubernetes-admin" cannot get path "/livez"
I1219 01:32:39.284258  283134 wait.go:283] kube-apiserver check failed at https://192.168.19.21:6443/livez: forbidden: User "kubernetes-admin" cannot get path "/livez"
[control-plane-check] kube-scheduler is healthy after 1.910826652s
I1219 01:32:39.381730  283134 wait.go:283] kube-apiserver check failed at https://192.168.19.21:6443/livez: forbidden: User "kubernetes-admin" cannot get path "/livez"
I1219 01:32:39.881549  283134 wait.go:283] kube-apiserver check failed at https://192.168.19.21:6443/livez: forbidden: User "kubernetes-admin" cannot get path "/livez"
I1219 01:32:40.381385  283134 wait.go:283] kube-apiserver check failed at https://192.168.19.21:6443/livez: forbidden: User "kubernetes-admin" cannot get path "/livez"
[control-plane-check] kube-apiserver is healthy after 3.502830813s
I1219 01:32:40.883937  283134 kubeconfig.go:657] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I1219 01:32:40.884851  283134 kubeconfig.go:730] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
I1219 01:32:40.890836  283134 uploadconfig.go:111] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1219 01:32:40.899730  283134 uploadconfig.go:125] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node hao-ubuntu24043 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node hao-ubuntu24043 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 2vqpgm.z55xqs00az4y39u6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1219 01:32:40.933729  283134 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I1219 01:32:40.933864  283134 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I1219 01:32:40.936953  283134 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1219 01:32:41.085075  283134 request.go:683] "Waited before sending request" delay="147.287736ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://192.168.19.21:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s"
I1219 01:32:41.284968  283134 request.go:683] "Waited before sending request" delay="196.354932ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://192.168.19.21:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s"
I1219 01:32:41.287781  283134 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1219 01:32:41.288471  283134 kubeletfinalize.go:144] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
I1219 01:32:41.684491  283134 request.go:683] "Waited before sending request" delay="139.259569ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://192.168.19.21:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s"
[addons] Applied essential addon: CoreDNS
I1219 01:32:41.884079  283134 request.go:683] "Waited before sending request" delay="107.046045ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://192.168.19.21:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s"
I1219 01:32:42.084686  283134 request.go:683] "Waited before sending request" delay="182.347554ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://192.168.19.21:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s"
I1219 01:32:42.284843  283134 request.go:683] "Waited before sending request" delay="192.319852ms" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://192.168.19.21:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s"
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.19.21:6443 --token 2vqpgm.z55xqs00az4y39u6 \
	--discovery-token-ca-cert-hash sha256:3e495357496da261d84bfdb701f764914245dc06cb8961cd9f1c0ccb1fe0a17b 
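
The bootstrap token above is valid for 24 hours by default; if it expires before a worker joins, a new join command can be printed with:

kubeadm token create --print-join-command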

Add the kubectl client configuration

root@hao-ubuntu24043:~# mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y

Check the cluster

root@hao-ubuntu24043:~# kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE
kube-system   coredns-bbdc5fdf6-8krhf                   0/1     ContainerCreating   0          10s
kube-system   coredns-bbdc5fdf6-n729l                   0/1     ContainerCreating   0          10s
kube-system   etcd-hao-ubuntu24043                      0/1     Running             3          16s
kube-system   kube-apiserver-hao-ubuntu24043            1/1     Running             3          16s
kube-system   kube-controller-manager-hao-ubuntu24043   1/1     Running             0          16s
kube-system   kube-proxy-4h92k                          1/1     Running             0          10s
kube-system   kube-scheduler-hao-ubuntu24043            0/1     Running             3          16s

Add the network add-on; flannel is chosen here.

root@hao-ubuntu24043:~# kubectl apply -f /home/hao/kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Cluster overview

Everything is now working normally.

root@hao-ubuntu24043:~# kubectl get all -Aowide
NAMESPACE      NAME                                          READY   STATUS    RESTARTS   AGE   IP              NODE              NOMINATED NODE   READINESS GATES
kube-flannel   pod/kube-flannel-ds-6l7mr                     1/1     Running   0          45s   192.168.19.21   hao-ubuntu24043   <none>           <none>
kube-system    pod/coredns-bbdc5fdf6-8krhf                   1/1     Running   0          78s   10.244.0.2      hao-ubuntu24043   <none>           <none>
kube-system    pod/coredns-bbdc5fdf6-n729l                   1/1     Running   0          78s   10.244.0.3      hao-ubuntu24043   <none>           <none>
kube-system    pod/etcd-hao-ubuntu24043                      1/1     Running   3          84s   192.168.19.21   hao-ubuntu24043   <none>           <none>
kube-system    pod/kube-apiserver-hao-ubuntu24043            1/1     Running   3          84s   192.168.19.21   hao-ubuntu24043   <none>           <none>
kube-system    pod/kube-controller-manager-hao-ubuntu24043   1/1     Running   0          84s   192.168.19.21   hao-ubuntu24043   <none>           <none>
kube-system    pod/kube-proxy-4h92k                          1/1     Running   0          78s   192.168.19.21   hao-ubuntu24043   <none>           <none>
kube-system    pod/kube-scheduler-hao-ubuntu24043            1/1     Running   3          84s   192.168.19.21   hao-ubuntu24043   <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  85s   <none>
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   84s   k8s-app=kube-dns

NAMESPACE      NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS     IMAGES                                                       SELECTOR
kube-flannel   daemonset.apps/kube-flannel-ds   1         1         1       1            1           <none>                   45s   kube-flannel   ghcr.m.ixdev.cn/flannel-io/flannel:v0.27.4                   app=flannel
kube-system    daemonset.apps/kube-proxy        1         1         1       1            1           kubernetes.io/os=linux   84s   kube-proxy     registry.aliyuncs.com/google_containers/kube-proxy:v1.35.0   k8s-app=kube-proxy

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                    SELECTOR
kube-system   deployment.apps/coredns   2/2     2            2           84s   coredns      registry.aliyuncs.com/google_containers/coredns:v1.13.1   k8s-app=kube-dns

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                    SELECTOR
kube-system   replicaset.apps/coredns-bbdc5fdf6   2         2         2       78s   coredns      registry.aliyuncs.com/google_containers/coredns:v1.13.1   k8s-app=kube-dns,pod-template-hash=bbdc5fdf6
root@hao-ubuntu24043:~# kubectl get node
NAME              STATUS   ROLES           AGE   VERSION
hao-ubuntu24043   Ready    control-plane   98s   v1.35.0
root@hao-ubuntu24043:~# kubectl get node -owide
NAME              STATUS   ROLES           AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
hao-ubuntu24043   Ready    control-plane   103s   v1.35.0   192.168.19.21   <none>        Ubuntu 24.04.3 LTS   6.8.0-90-generic   containerd://2.2.0
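
Since this is a single control-plane node, it still carries the default NoSchedule taint, so ordinary Pods will not be scheduled onto it. If you plan to run workloads on this node rather than joining dedicated workers, the taint can be removed; a minimal sketch:

# Remove the control-plane taint so regular Pods can be scheduled on this node
# (only for single-node setups; skip this if dedicated worker nodes will be added)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-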
root@hao-ubuntu24043:~# crictl ps
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/usr/bin/crictl.yaml" 
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
WARN[0000] Image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                       NAMESPACE
0ca70bd427f93       aa5e3ebc0dfed       2 minutes ago       Running             coredns                   0                   9f0425370cc67       coredns-bbdc5fdf6-n729l                   kube-system
5de4dfa1d16be       aa5e3ebc0dfed       2 minutes ago       Running             coredns                   0                   fb8302ad20a21       coredns-bbdc5fdf6-8krhf                   kube-system
dce1e91488344       e83704a177312       2 minutes ago       Running             kube-flannel              0                   8a7ec3bdcb8ec       kube-flannel-ds-6l7mr                     kube-flannel
f60a6b4139682       32652ff1bbe6b       3 minutes ago       Running             kube-proxy                0                   198fc78f0acb3       kube-proxy-4h92k                          kube-system
13a26fb300b95       2c9a4b058bd7e       3 minutes ago       Running             kube-controller-manager   0                   c22236fee25ad       kube-controller-manager-hao-ubuntu24043   kube-system
087ccad00730e       5c6acd67e9cd1       3 minutes ago       Running             kube-apiserver            3                   eff35820d66c4       kube-apiserver-hao-ubuntu24043            kube-system
70e8231fb5a97       550794e3b12ac       3 minutes ago       Running             kube-scheduler            3                   3ec93263d7faa       kube-scheduler-hao-ubuntu24043            kube-system
a5e5dc3e21535       0a108f7189562       3 minutes ago       Running             etcd                      3                   54e5e4364a155       etcd-hao-ubuntu24043                      kube-system
root@hao-ubuntu24043:~# crictl images
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/usr/bin/crictl.yaml" 
WARN[0000] Image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
IMAGE                                                             TAG                 IMAGE ID            SIZE
ghcr.io/flannel-io/flannel-cni-plugin                             v1.8.0-flannel1     bb28ded63816e       4.92MB
ghcr.m.ixdev.cn/flannel-io/flannel-cni-plugin                     v1.8.0-flannel1     bb28ded63816e       4.92MB
ghcr.m.ixdev.cn/flannel-io/flannel                                v0.27.4             e83704a177312       34.1MB
registry.aliyuncs.com/google_containers/coredns                   v1.13.1             aa5e3ebc0dfed       23.6MB
registry.aliyuncs.com/google_containers/etcd                      3.6.6-0             0a108f7189562       23.6MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.35.0             5c6acd67e9cd1       27.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.35.0             2c9a4b058bd7e       23.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.35.0             32652ff1bbe6b       25.8MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.35.0             550794e3b12ac       17.2MB
registry.aliyuncs.com/google_containers/pause                     3.10                873ed75102791       320kB
registry.aliyuncs.com/google_containers/pause                     3.10.1              cd073f4c5f6a8       320kB
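
The WARN lines above only mean that crictl has no configuration file and is probing the default runtime endpoints. They are harmless, but can be silenced by pointing crictl explicitly at the containerd socket; a minimal sketch:

# Create /etc/crictl.yaml so crictl talks to containerd directly
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl ps    # the WARN messages should no longer appear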

At this point, the master node installation is complete. If you need to add separate worker nodes, just prepare each node the same way and then run the join command from the kubeadm init output on it as root:

#Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.19.21:6443 --token 2vqpgm.z55xqs00az4y39u6 \
	--discovery-token-ca-cert-hash sha256:3e495357496da261d84bfdb701f764914245dc06cb8961cd9f1c0ccb1fe0a17b 
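
Note that the bootstrap token in this join command expires after 24 hours by default. If it has expired by the time a new node is added, a fresh join command can be generated on the control-plane node:

# Print a new join command with a freshly created bootstrap token
kubeadm token create --print-join-command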

References:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://github.com/kubernetes/release/issues/3575
https://github.com/kubernetes/release/issues/4100

Follow us for more DevOps and security updates!
Author: Ops technical team: 辣个男人Devin
Published: December 19, 2025
Applies to: Linux
