[Kubernetes] Deploying K8s on Ubuntu 24.04 from Binaries

A binary deployment of Kubernetes means manually downloading, configuring, and starting every component yourself. Compared with automated tools such as kubeadm, it gives a much deeper understanding of how the cluster components work and interact.

1. Environment Preparation

1.1. Node Planning

Role   | IP Address      | Hostname | Components
-------|-----------------|----------|--------------------------------------------------------------------------------
master | 192.168.140.133 | master   | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kubectl
node1  | 192.168.140.134 | node1    | kubelet, kube-proxy, container runtime
node2  | 192.168.140.135 | node2    | kubelet, kube-proxy, container runtime

1.2. Base Configuration on All Nodes

# Set the hostname (run the matching command on each node)
# master node
hostnamectl set-hostname master
# node1 node
hostnamectl set-hostname node1
# node2 node
hostnamectl set-hostname node2

# Configure hosts resolution (all nodes)
cat >> /etc/hosts << EOF
192.168.140.133 master
192.168.140.134 node1
192.168.140.135 node2
EOF

# Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Disable the firewall (or open the required ports instead)
ufw disable

# Enable kernel modules and sysctl settings
cat >> /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

cat >> /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

# Install required tools
apt-get update
apt-get install -y wget curl tar gzip openssl socat conntrack ipset
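The `sed` expression above comments out every fstab line that contains a " swap " field while leaving other mounts alone. A quick way to preview the pattern on a throwaway copy, so /etc/fstab itself is untouched (the /tmp path and sample entries are illustrative):

```shell
# Sample fstab with one regular mount and one swap entry (illustrative content)
cat > /tmp/fstab-test << 'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Same pattern as in the setup script: comment out every line with a " swap " field
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab-test

cat /tmp/fstab-test
```

Only the swap line gains a leading `#`; the root filesystem entry is left untouched.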

1.3. Installing the Container Runtime (containerd)

  • The container runtime must be installed on every node
# Install prerequisite system tools
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

# Trust Docker's GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the Docker apt repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install containerd.io=1.6.33-1 (pin a different version if needed)
apt-get update
apt-get install -y containerd.io=1.6.33-1

# Generate the default containerd configuration
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Switch to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Use the Aliyun mirror for the pause image
sed -i "s#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml

# Restart and enable containerd
systemctl restart containerd
systemctl enable containerd
systemctl status containerd

2. Certificate Generation

  • Generate all certificates on the master node

2.1. Creating the Certificate Directory and Config Files

mkdir -p /etc/kubernetes/pki /etc/kubernetes/pki/etcd
# Work inside /etc/kubernetes/pki so the generated files land where the
# later configs expect them (e.g. /etc/kubernetes/pki/ca.pem)
cd /etc/kubernetes/pki

# Create the CA config file
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# Create the CA certificate signing request
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF

2.2. Generating the Root CA Certificate

# Download the cfssl tools
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64 -O /usr/local/bin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64 -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl*

# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
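Once ca.pem exists you can sanity-check its subject, expiry, and CA flag with plain openssl. The sketch below generates a throwaway self-signed CA as a stand-in for the cfssl output (the demo-ca* filenames are illustrative) and runs the same inspection commands you would run against the real ca.pem:

```shell
# Throwaway self-signed CA, standing in for the cfssl-generated ca.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=Kubernetes/OU=System/CN=kubernetes"

# Check subject and expiry (run the same against the real ca.pem)
openssl x509 -in /tmp/demo-ca.pem -noout -subject -enddate

# Confirm the certificate is marked as a CA
openssl x509 -in /tmp/demo-ca.pem -noout -text | grep -A1 "Basic Constraints"
```

The same two `openssl x509` invocations work unchanged on any of the certificates generated in this section.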

2.3. Generating the admin Certificate

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

2.4. Generating the kube-proxy Certificate

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:node-proxier",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

2.5. Generating the kube-controller-manager Certificate

cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

2.6. Generating the kube-scheduler Certificate

cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

2.7. Generating the API Server Certificate

cat > kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF

# Generate the API Server certificate (covering every address it is reached by)
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  -hostname=10.96.0.1,192.168.140.133,192.168.140.134,192.168.140.135,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local \
  kubernetes-csr.json | cfssljson -bare kubernetes
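The `-hostname` list becomes the certificate's subjectAltName; any address missing from it produces TLS verification errors for clients connecting that way. A hedged way to see what to look for, using a throwaway demo certificate (the demo-apiserver* filenames are illustrative; `-addext` needs OpenSSL 1.1.1+ — against the real cert, run only the final `openssl x509` line on kubernetes.pem):

```shell
# Throwaway cert carrying SANs like the apiserver certificate above
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/demo-apiserver-key.pem -out /tmp/demo-apiserver.pem \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default.svc.cluster.local,IP:10.96.0.1,IP:192.168.140.133"

# Every address clients use to reach the apiserver must appear in this list
openssl x509 -in /tmp/demo-apiserver.pem -noout -ext subjectAltName
```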

2.8. Generating the Service Account Keypair

# Generate the SA private and public keys (used to sign service account tokens)
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
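kube-apiserver signs ServiceAccount tokens with sa.key and verifies them with sa.pub, so the two files must belong to the same keypair. Comparing RSA moduli confirms that; the sketch below uses a throwaway keypair (demo-sa* names are illustrative), and the same two `openssl rsa` lines apply to the real sa.key/sa.pub:

```shell
# Throwaway keypair, generated the same way as sa.key / sa.pub above
openssl genrsa -out /tmp/demo-sa.key 2048
openssl rsa -in /tmp/demo-sa.key -pubout -out /tmp/demo-sa.pub

# A private key and its exported public key share the same RSA modulus
priv_mod=$(openssl rsa -in /tmp/demo-sa.key -noout -modulus)
pub_mod=$(openssl rsa -pubin -in /tmp/demo-sa.pub -noout -modulus)
[ "$priv_mod" = "$pub_mod" ] && echo "keypair matches"
```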

2.9. Generating the etcd Certificates

cd /etc/kubernetes/pki/etcd

# etcd CA certificate
cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca

# etcd server certificate
cat > etcd-server-csr.json << EOF
{
  "CN": "etcd-server",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert \
  -ca=etcd-ca.pem \
  -ca-key=etcd-ca-key.pem \
  -config=../ca-config.json \
  -hostname=192.168.140.133,127.0.0.1 \
  -profile=kubernetes \
  etcd-server-csr.json | cfssljson -bare etcd-server

# etcd client certificate
cat > etcd-client-csr.json << EOF
{
  "CN": "etcd-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert \
  -ca=etcd-ca.pem \
  -ca-key=etcd-ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes \
  etcd-client-csr.json | cfssljson -bare etcd-client

3. Downloading the Kubernetes Binaries

# Download the Kubernetes v1.35.0 server binaries
cd /usr/local/src
wget https://dl.k8s.io/v1.35.0/kubernetes-server-linux-amd64.tar.gz
tar -xzf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/

# Copy the binaries to /usr/local/bin
cp kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl /usr/local/bin/

# The worker nodes need kubelet and kube-proxy as well
scp kubelet kube-proxy node1:/usr/local/bin/
scp kubelet kube-proxy node2:/usr/local/bin/

4. Deploying etcd (on the master node)

# Download etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.5.11/etcd-v3.5.11-linux-amd64.tar.gz
tar -xzf etcd-v3.5.11-linux-amd64.tar.gz
cp etcd-v3.5.11-linux-amd64/etcd* /usr/local/bin/

# Create the etcd data and config directories
mkdir -p /var/lib/etcd /etc/etcd

# Create the etcd config file (single-node cluster, TLS with the
# certificates generated in section 2.9)
cat > /etc/etcd/etcd.conf.yml << EOF
name: master
data-dir: /var/lib/etcd
listen-client-urls: https://192.168.140.133:2379,https://127.0.0.1:2379
advertise-client-urls: https://192.168.140.133:2379
listen-peer-urls: https://192.168.140.133:2380
initial-advertise-peer-urls: https://192.168.140.133:2380
initial-cluster: master=https://192.168.140.133:2380
initial-cluster-state: new
client-transport-security:
  cert-file: /etc/kubernetes/pki/etcd/etcd-server.pem
  key-file: /etc/kubernetes/pki/etcd/etcd-server-key.pem
  client-cert-auth: true
  trusted-ca-file: /etc/kubernetes/pki/etcd/etcd-ca.pem
peer-transport-security:
  cert-file: /etc/kubernetes/pki/etcd/etcd-server.pem
  key-file: /etc/kubernetes/pki/etcd/etcd-server-key.pem
  trusted-ca-file: /etc/kubernetes/pki/etcd/etcd-ca.pem
EOF

# Create the etcd systemd service
cat > /etc/systemd/system/etcd.service << EOF
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.conf.yml
Restart=always
RestartSec=10s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

# Verify etcd health
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.140.133:2379 \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd-client.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-client-key.pem \
  endpoint health

5. Deploying kube-apiserver (on the master node)
# Create the audit log directory
mkdir -p /var/log/kubernetes

# Create the kube-apiserver config file
cat > /etc/kubernetes/kube-apiserver.conf << EOF
KUBE_API_ARGS="--secure-port=6443 \\
  --bind-address=0.0.0.0 \\
  --advertise-address=192.168.140.133 \\
  --allow-privileged=true \\
  --service-cluster-ip-range=10.96.0.0/12 \\
  --service-node-port-range=30000-32767 \\
  --etcd-servers=https://192.168.140.133:2379 \\
  --etcd-cafile=/etc/kubernetes/pki/etcd/etcd-ca.pem \\
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd-client.pem \\
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-client-key.pem \\
  --client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --tls-cert-file=/etc/kubernetes/pki/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem \\
  --kubelet-client-certificate=/etc/kubernetes/pki/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/pki/kubernetes-key.pem \\
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --api-audiences=kubernetes.default.svc \\
  --enable-admission-plugins=NodeRestriction \\
  --authorization-mode=RBAC,Node \\
  --enable-bootstrap-token-auth=true \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --proxy-client-cert-file=/etc/kubernetes/pki/kubernetes.pem \\
  --proxy-client-key-file=/etc/kubernetes/pki/kubernetes-key.pem \\
  --requestheader-allowed-names=aggregator \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --audit-log-path=/var/log/kubernetes/kube-apiserver-audit.log \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --v=2"
EOF

# Create the kube-apiserver systemd service
cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_API_ARGS
Restart=always
RestartSec=10s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
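The 10.96.0.1 entry in the apiserver certificate SANs is not arbitrary: the built-in `kubernetes` Service is always assigned the first usable address of `--service-cluster-ip-range`. A small, purely illustrative shell calculation of that first address (the helper variables are hypothetical, not part of the deployment):

```shell
# First usable IP of a CIDR = network address + 1 (illustrative helper)
cidr="10.96.0.0/12"
ip=${cidr%/*}
prefix=${cidr#*/}

# Split the dotted quad and pack it into a 32-bit integer
oldIFS=$IFS; IFS=.
set -- $ip
IFS=$oldIFS
n=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))

# Mask down to the network address, then add one
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
first=$(( (n & mask) + 1 ))

first_ip=$(printf '%d.%d.%d.%d' $((first>>24&255)) $((first>>16&255)) $((first>>8&255)) $((first&255)))
echo "$first_ip"
# → 10.96.0.1
```

The same logic explains the `clusterDNS: 10.96.0.10` value used later for kubelet: it is simply another address reserved inside the service range.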

6. Deploying kube-controller-manager (on the master node)

# Create the kube-controller-manager config file
cat > /etc/kubernetes/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_ARGS="--bind-address=0.0.0.0 \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --service-cluster-ip-range=10.96.0.0/12 \\
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --cluster-signing-duration=87600h0m0s \\
  --root-ca-file=/etc/kubernetes/pki/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
  --node-monitor-grace-period=40s \\
  --node-monitor-period=5s \\
  --pod-eviction-timeout=5m0s \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --use-service-account-credentials=true \\
  --v=2"
EOF

# Create the controller-manager kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

# Create the systemd service
cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

7. Deploying kube-scheduler (on the master node)

# Create the kube-scheduler config file
cat > /etc/kubernetes/kube-scheduler.conf << EOF
KUBE_SCHEDULER_ARGS="--bind-address=0.0.0.0 \\
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --v=2"
EOF

# Create the scheduler kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/kube-scheduler.pem \
  --client-key=/etc/kubernetes/pki/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

# Create the systemd service
cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_ARGS
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler

8. Configuring kubectl (on the master node)

# Create the admin kubeconfig
mkdir -p /root/.kube
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=/root/.kube/config
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/root/.kube/config
kubectl config set-context admin@kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=/root/.kube/config
kubectl config use-context admin@kubernetes \
  --kubeconfig=/root/.kube/config

# Verify cluster status
kubectl get cs
kubectl get nodes

9. Deploying kubelet (all nodes)

9.1. Creating a Bootstrap Token (for automatic kubelet certificate requests)

# Run on the master node
# Create the bootstrap token Secret
cat > bootstrap-token.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef
  token-secret: abcdef0123456789
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:worker
EOF
kubectl apply -f bootstrap-token.yaml

# Create the bootstrap ClusterRoleBindings
cat > bootstrap-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
EOF
kubectl apply -f bootstrap-rbac.yaml

9.2. Generating the Bootstrap kubeconfig (to distribute to the nodes)

# Generate the bootstrap kubeconfig on the master node
BOOTSTRAP_TOKEN=$(kubectl get secret -n kube-system bootstrap-token-abcdef -o jsonpath='{.data.token-secret}' | base64 -d)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.140.133:6443 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-credentials system:bootstrap:abcdef \
  --token=abcdef.${BOOTSTRAP_TOKEN} \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:bootstrap:abcdef \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config use-context default --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

# Make sure the target directories exist on the nodes
ssh node1 "mkdir -p /etc/kubernetes/pki"
ssh node2 "mkdir -p /etc/kubernetes/pki"

# Copy the bootstrap kubeconfig to the nodes
scp /etc/kubernetes/bootstrap-kubelet.conf node1:/etc/kubernetes/
scp /etc/kubernetes/bootstrap-kubelet.conf node2:/etc/kubernetes/

# Copy the CA certificate to the nodes
scp /etc/kubernetes/pki/ca.pem node1:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.pem node2:/etc/kubernetes/pki/
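The bootstrap token kubelet presents has the fixed form `<token-id>.<token-secret>` (6 + 16 lowercase alphanumerics), which is why the script joins the literal id `abcdef` with the decoded secret. A standalone sketch of that assembly, using the illustrative values from the manifest in 9.1 (the base64 round trip stands in for reading the Secret from the cluster):

```shell
token_id="abcdef"
# In the cluster the secret comes from:
#   kubectl get secret -n kube-system bootstrap-token-abcdef -o jsonpath='{.data.token-secret}' | base64 -d
# Simulated here with a base64 round trip of the same illustrative value
token_secret=$(printf '%s' "abcdef0123456789" | base64 | base64 -d)

token="${token_id}.${token_secret}"
echo "$token"
# → abcdef.abcdef0123456789

# Validate the required shape: 6 chars, a dot, 16 chars, all [a-z0-9]
printf '%s' "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format OK"
```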

9.3. Configuring kubelet (all nodes)

# Run on all nodes
# Create the kubelet directories
mkdir -p /var/lib/kubelet /var/log/kubernetes

# Create the kubelet config file
cat > /var/lib/kubelet/config.yaml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
rotateCertificates: true
serverTLSBootstrap: true
EOF

# Create the kubelet systemd service
cat > /etc/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --config=/var/lib/kubelet/config.yaml \\
  --cert-dir=/var/lib/kubelet/pki \\
  --rotate-certificates=true \\
  --rotate-server-certificates=true \\
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

# Create the kubelet drop-in directory
mkdir -p /etc/systemd/system/kubelet.service.d

# Start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet

10. Deploying kube-proxy (all nodes)

# Create the kube-proxy kubeconfig on the master node
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.140.133:6443 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

# Copy kube-proxy.kubeconfig to the nodes
scp /etc/kubernetes/kube-proxy.kubeconfig node1:/etc/kubernetes/
scp /etc/kubernetes/kube-proxy.kubeconfig node2:/etc/kubernetes/

# Create the kube-proxy config file on all nodes
mkdir -p /var/lib/kube-proxy
cat > /var/lib/kube-proxy/config.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
# clusterCIDR is the Pod network CIDR (must match the Calico pool), not the Service CIDR
clusterCIDR: 192.168.0.0/16
mode: iptables
EOF

# Create the kube-proxy systemd service
cat > /etc/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/config.yaml \\
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy

11. Deploying the Calico Network Plugin

# Deploy Calico on the master node
# Download the Calico v3.27 manifest
curl https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml -o calico.yaml

# Edit calico.yaml and set CALICO_IPV4POOL_CIDR to your Pod network (default 192.168.0.0/16)
# If left unchanged, Calico uses the default 192.168.0.0/16 as the Pod network
# Make sure this range does not overlap with your existing networks
sed -i 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/g' calico.yaml
sed -i 's|# value: "192.168.0.0/16"| value: "192.168.0.0/16"|g' calico.yaml

# Apply Calico
kubectl apply -f calico.yaml

# Check the Calico pods
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get pods -n kube-system -l k8s-app=calico-kube-controllers

# Verify node status
kubectl get nodes -o wide
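The two `sed` substitutions only uncomment the CALICO_IPV4POOL_CIDR entry; nothing else in the manifest changes. Their effect can be previewed on a minimal excerpt (the snippet below is a simplified stand-in for the real manifest, with indentation reduced so the exact patterns used above match; the /tmp path is illustrative):

```shell
# Simplified stand-in for the commented block in calico.yaml
cat > /tmp/calico-snippet.yaml << 'EOF'
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
EOF

# The same substitutions as above: strip the leading "# " to activate the setting
sed -i 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/g' /tmp/calico-snippet.yaml
sed -i 's|# value: "192.168.0.0/16"| value: "192.168.0.0/16"|g' /tmp/calico-snippet.yaml

cat /tmp/calico-snippet.yaml
```

In the real manifest, check the result with `grep -A1 CALICO_IPV4POOL_CIDR calico.yaml` before applying, since the upstream file's indentation can shift between releases.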

12. Verifying the Cluster

# Check component status
kubectl get cs

# Check node status
kubectl get nodes

# List pods in all namespaces
kubectl get pods --all-namespaces

# Deploy a test nginx
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the service
kubectl get svc

# Test access
curl <node-ip>:<node-port>
