Installing the Calico Pod Network Plugin for Kubernetes (etcd + TLS)

Preface

Calico

Calico is a widely adopted, battle-tested open-source networking and network security solution for Kubernetes, virtual machines, and bare-metal workloads. Calico provides two major services for cloud-native applications:

  • Network connectivity between workloads.
  • Network security policy enforcement between workloads.

Core components

  • Felix: the Calico agent, running on every node that hosts workloads; it programs routes and ACLs to keep endpoints reachable;
  • etcd: a distributed key-value store that keeps the network metadata consistent and ensures the accuracy of the Calico network state;
  • BGP Client: distributes the routes that Felix writes into the kernel to the rest of the Calico network, ensuring that traffic between workloads is routable;
  • BGP Route Reflector: used in large-scale deployments; instead of a full mesh between all nodes, one or more BGP Route Reflectors perform centralized route distribution;

Networking basics

Switching

Broadcast domain

  • Broadcasting is a form of transmission in which one device sends data to every other device on the network at the same time; the area that such a broadcast can reach is a broadcast domain.

    Put simply, a broadcast domain is the set of all devices on a network that receive the same broadcast message.

  • For example: when a switch has to flood a frame while forwarding, the area that flood can reach is one broadcast domain.

  • The network behind a single router interface is one broadcast domain, so routers separate broadcast domains.

  • Collision domain: every node in the same collision domain receives every frame that is sent.

  • Broadcast domain: the set of all devices on a network that receive a broadcast frame sent by any one of them.

Address Resolution Protocol (ARP)

ARP: Address Resolution Protocol

  • ARP is the TCP/IP protocol that resolves an IP address to a physical (MAC) address.
  • Before sending, a host broadcasts an ARP request containing the target IP address to all hosts on the LAN and uses the reply to determine the target's physical address (see the example below).
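On a Linux host you can watch this mechanism directly; a minimal sketch (the target IP is just one of the example hosts used later in this post):

# Trigger resolution for a neighbor, then inspect the ARP/neighbor cache it populates
ping -c1 192.168.2.159
ip neigh show 192.168.2.159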

Layer-3 switches

  • A layer-3 switch is a switch with partial router functionality, working at layer 3 of the OSI model (the network layer). Its main purpose is to speed up data exchange inside large LANs, and its routing capability serves that goal: route once, then switch many times.
  • Regular, repetitive work such as packet forwarding is implemented in hardware at high speed, while functions such as routing updates, routing-table maintenance, route computation, and route selection are implemented in software.
  • A layer-2 switch works at the data link layer.

Virtual LAN (VLAN)

VLAN: Virtual Local Area Network

  • A VLAN is a logical group of devices and users that is not restricted by physical location; members can be grouped by function, department, application, and so on, and they communicate as if they were on the same segment, hence the name virtual LAN. A switch port has two VLAN attributes, the VLAN ID and the VLAN tag, which control, respectively, which VLAN the port assigns to traffic and which tagged (VLAN TAG) packets it allows through. Ports with different VLAN IDs can still form a VLAN by mutually permitting each other's VLAN tags.
  • VLANs are a relatively recent technology that works at layers 2 and 3 of the OSI model. A VLAN is not necessarily a single broadcast domain, and communication between VLANs does not always require a routing gateway: it can be achieved by mutually permitting VLAN tags, which builds VLANs with different access-control properties, or it can go through a layer-3 router. By controlling VLAN IDs and permitted VLAN tags, VLANs can provide a logical topology and access control for almost any information system on a LAN while sharing the physical network links with other systems without interference.
  • A VLAN can provide a virtual network topology that matches the structure of a business service and its sub-services, together with access control between them.
  • Compared with traditional LAN technology, VLANs are more flexible and have the following advantages:
    • less administrative overhead when moving, adding, or changing network devices;
    • control over broadcast activity; improved network security.

Routing

Routers

  • A router is a hardware device that connects two or more networks and acts as a gateway between them; it is a dedicated, intelligent network device that reads the address in every packet and decides how to forward it.
  • It understands different protocols, for example the Ethernet protocol used on a LAN and the TCP/IP protocols used on the Internet.
  • It can therefore analyze the destination addresses of packets arriving from different types of networks, translate non-TCP/IP addresses into TCP/IP addresses or vice versa, and forward each packet along the best route to its destination according to the selected routing algorithm.
  • This is how routers can connect non-TCP/IP networks to the Internet.

K8s pod networking

CNI (Container Network Interface)

  • CNI only cares about a container's network connectivity and about releasing the allocated resources when the container is deleted.
  • Default CNI config directory: /etc/cni/net.d
  • Default CNI binary directory: /opt/cni/bin/
  • GitHub CNI
  • calico docs
  • BGP (Border Gateway Protocol) is the exterior gateway routing protocol of the Internet.
  • BGP can also be used to exchange routes between an office network and the cluster network.

Both paths could be set via kubelet startup flags (dockershim-era flags; with containerd the CNI directories are configured in the container runtime instead):

--network-plugin=cni
--cni-conf-dir=/etc/cni/net.d
--cni-bin-dir=/opt/cni/bin
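To see what a node actually has in place, listing both directories is usually enough (a minimal check; the contents vary per cluster and CNI version):

ls -l /etc/cni/net.d/
ls -l /opt/cni/bin/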

Prepare the servers

  • 192.168.2.158 master-158
  • 192.168.2.159 master-159
  • 192.168.2.160 master-160
  • 192.168.2.161 node-161

Deployment

Calico datastore options

  1. etcd datastore: https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml
  2. Kubernetes API datastore: https://projectcalico.docs.tigera.io/manifests/calico.yaml

Fetch the images

Run on master-158

cd ~/
# Download the full release bundle (-L follows the GitHub release redirect)
curl -L -O https://github.com/projectcalico/calico/releases/download/v3.23.1/release-v3.23.1.tgz
tar -xf release-v3.23.1.tgz
# Copy it to the other servers
scp release-v3.23.1.tgz root@192.168.2.159:/root/
scp release-v3.23.1.tgz root@192.168.2.160:/root/
scp release-v3.23.1.tgz root@192.168.2.161:/root/

Run on master-158, master-159, master-160, and node-161

# Import the images into the kube-system containerd namespace (note: the containerd CRI plugin normally reads images from the k8s.io namespace)
tar -xf release-v3.23.1.tgz
cd release-v3.23.1/images/
for i in $(ls) ; do ctr -n=kube-system image import ${i}; done
ctr -n=kube-system images list
cd ~/
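To confirm that every node really has the Calico images, a small loop over the hosts works; this is a sketch that assumes passwordless SSH as root to the example IPs above:

for host in 192.168.2.158 192.168.2.159 192.168.2.160 192.168.2.161; do
  echo "== ${host} =="
  ssh root@${host} "ctr -n=kube-system images list | grep calico"
done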

Fetch the calico-etcd.yaml manifest

# Download the manifest
curl https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml -o /root/calico-etcd.yaml-back

etcd CA and certificate configuration

Run on master-158

# This script is idempotent and can be re-run
cd ~/
\cp /root/calico-etcd.yaml-back /root/calico-etcd.yaml
cat /etc/certs/etcd/ca.pem | base64 -w 0 > CA_BASE64
cat /etc/certs/etcd/etcd-158.pem | base64 -w 0 > ETCD_BASE64
cat /etc/certs/etcd/etcd-158-key.pem | base64 -w 0 > ETCD_KEY_BASE64
echo "https://192.168.2.158:2379,https://192.168.2.159:2379,https://192.168.2.160:2379" > END_POINTS
sed -i "s?http://<ETCD_IP>:<ETCD_PORT>?$(cat END_POINTS)?g" /root/calico-etcd.yaml
sed -i "s?# etcd-key: null?etcd-key: $(cat ETCD_KEY_BASE64)?g" /root/calico-etcd.yaml
sed -i "s?# etcd-ca: null?etcd-ca: $(cat CA_BASE64)?g" /root/calico-etcd.yaml
sed -i "s?# etcd-cert: null?etcd-cert: $(cat ETCD_BASE64)?g" /root/calico-etcd.yaml
sed -i 's?etcd_ca: ""?etcd_ca: "/calico-secrets/etcd-ca"?g' /root/calico-etcd.yaml
sed -i 's?etcd_cert: ""?etcd_cert: "/calico-secrets/etcd-cert"?g' /root/calico-etcd.yaml
sed -i 's?etcd_key: ""?etcd_key: "/calico-secrets/etcd-key"?g' /root/calico-etcd.yaml
# KubeProxyConfiguration.clusterCIDR
sed -i 's?# - name: CALICO_IPV4POOL_CIDR?- name: CALICO_IPV4POOL_CIDR?g' /root/calico-etcd.yaml
sed -i 's?#   value: "192.168.0.0/16"?  value: "10.244.0.0/16"?g' /root/calico-etcd.yaml
cat /root/calico-etcd.yaml |grep etcd
cat /root/calico-etcd.yaml |grep CALICO_IPV4POOL_CIDR
# Create the namespace
kubectl create namespace tigera-operator
# Configure the API server address: KUBERNETES_SERVICE_HOST, KUBERNETES_SERVICE_PORT
cat > calico-config.yaml << EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "192.168.2.158"
  KUBERNETES_SERVICE_PORT: "6443"
EOF
kubectl apply -f calico-config.yaml
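Before applying the manifest, it is worth confirming that the certificates referenced above actually work against the etcd cluster. A minimal sketch, assuming etcdctl is installed on master-158 and using the same file paths as the script:

ETCDCTL_API=3 etcdctl \
  --endpoints="https://192.168.2.158:2379,https://192.168.2.159:2379,https://192.168.2.160:2379" \
  --cacert=/etc/certs/etcd/ca.pem \
  --cert=/etc/certs/etcd/etcd-158.pem \
  --key=/etc/certs/etcd/etcd-158-key.pem \
  endpoint health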

Install the etcd cluster
Generate the CA certificates (K8s)

Install

The etcd cluster used here is secured with TLS certificates.

Run on master-158

cd ~/
# First apply
kubectl apply -f calico-etcd.yaml
# Force-recreate from the manifest
kubectl replace --force -f calico-etcd.yaml
# Watch the rollout
watch kubectl get pods -n kube-system
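Instead of watching, you can also block until the DaemonSet is actually ready (plain kubectl, no extra tooling):

kubectl -n kube-system rollout status ds/calico-node
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s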


Verify

# List all pods
kubectl get pods -A -o wide
# Tail the logs of a calico-node pod
kubectl logs -n kube-system -f calico-node-hd9z6
kubectl get ds -n kube-system calico-node
# Dump a specific pod's full YAML
kubectl get pod -n kube-system calico-node-hd9z6 -o=yaml
# Describe a pod, including its recorded events:
kubectl describe pod -n kube-system calico-node
kubectl describe pod -n kube-system calico-node-hd9z6
# Show which node each calico-node pod runs on
kubectl get pod -n kube-system -l k8s-app=calico-node -o wide
# Delete a pod / tear everything down:
# kubectl delete pod -n kube-system {pod-name}
# kubectl delete -f calico-etcd.yaml
# rm -rf /etc/cni/net.d/
# modprobe -r ipip

# Inspect the details of an API object with kubectl describe:
kubectl describe node master-158
# Force-recreate from the manifest
kubectl replace --force -f calico-etcd.yaml
# Restart by scaling a deployment down (scale it back up afterwards)
kubectl scale deployment -n kube-system {deployment} --replicas=0
# Inspect the service account and related configuration
kubectl get serviceaccount -n kube-system -l k8s-app=calico-node -o yaml
kubectl describe configmap -n kube-system calico-config
# Scale down the Calico controllers
kubectl scale deploy -n kube-system calico-kube-controllers --replicas=0
# Renew the kubeadm-managed certificates (on kubeadm >= 1.20: kubeadm certs renew all)
kubeadm alpha certs renew all
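If calicoctl is installed (it is a separate binary, not part of the manifest applied above), it gives a quick view of BGP peering from the node it runs on:

# Run as root on a node where calico-node is running
calicoctl node status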

Checking failure states

kubectl get pod -n kube-system -l k8s-app=calico-node -o wide


  • Init:ImagePullBackOff: the image pull failed
  • CrashLoopBackOff: the container starts, crashes, and is restarted repeatedly
# Tail the logs
kubectl logs -n kube-system -f calico-node-hd9z6
# Describe the pod, including its recorded events: master-158
kubectl describe pod -n kube-system calico-node-hd9z6
# master-159
kubectl describe pod -n kube-system calico-node-jsn8n
# master-160
kubectl describe pod -n kube-system calico-node-4l6b6
# node-161
kubectl describe pod -n kube-system calico-node-bqfkf
# Export a single pod's manifest
kubectl get pod -n kube-system calico-node-hd9z6 -o=yaml > calico-node-hd9z6.yaml
# Recreate it from the exported manifest
kubectl replace --force -f calico-node-hd9z6.yaml
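Cluster events often point at the root cause faster than per-pod describes; sorting by timestamp keeps the most recent failures at the bottom:

kubectl get events -n kube-system --sort-by=.lastTimestamp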

kubelet (all nodes)

# Adjust the kubelet startup parameters (quote the heredoc delimiter so the $KUBELET_* variables are written literally)
cat > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf << 'EOF'
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
# Optionally add:
#Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
# Optionally add:
#Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort.
# Preferably, the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet -l

Checking for errors

journalctl -xeu kubelet
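To narrow the kubelet journal down to CNI-related messages, a plain grep over a recent time window is usually enough:

journalctl -u kubelet --since "30 min ago" --no-pager | grep -iE "cni|calico"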

The modified file

cat /root/calico-etcd.yaml

Configuration contents

---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdTYzMm9DWEJDem9BSm1Bc3JKb01PbHdLTktDemltNEFaUXdpWnZPRmRsOFlocUplClhJYlZiSXJxbTZZSGg1K0twcy9jc2RzWEZrOXhtVHN6M3NCcVFHYVgvRlNjc003eElmUHF0R3dSTHoyc2xJSXIKZ2FsRW1sUEhZb25WMUU3RUR3YUQ0S0ZrQVNhSm4zZUxOUHVtRGg4VHNzS3drQW44bmlSMkhIRlA5bzczdWlsVwp3MTkwWnNFNUNSbXhxaUlwT3JDNUtpY1gyTkM0eHhpT05LdEZ3UlJwem14Z0M5SnZvRTllWmYzZ2dpOG1DSGZGCklTb3VpbFVBVlhpQUVPNGpTVitIWG9pQ01IQUN3WnJYQ01ycE4wN3c3WmhDQ1g5ZHZBMmZPMW0wVFJpM2NxTWMKaVNmb2xBTGR6OFNZbnl1ZU1sWUZmYVl4elU0aVduQXppRTV4MlFJREFRQUJBb0lCQUFvZmpSRUFXRlJSc1paZwpVNmlQdXA4ZlBkR3U1V0JQSktoT3FrQmhYRTZSUEpKdWlhWjJBMmNTYXlzd0huSGJVakJEUUFVNzZ4ZmgreCtuCnlObDRDWU1seFlidnpXL2dDYk9xSTN2TjVITm00VHMxZGtGTkx3MGYvYjQ3N3hPL2wrV3psVU4xa1I1YXhNdWMKT0I4SWYrRjlIYVBqeW9CS2VaelNIS2pXRjlrVnIwOHE4bm9DSEltRy8weWhLSTFsMFdYZzA2amVLdWVEbUhwOAppNlpnQlZiNUhDQUtMK1Z1K1N3NkJSSlJLdG5vdXMySFNOVTFtdndXRENXa00yZFVwVU1KYnA1KzlqZWpVSUV1CklZRW5kdjZwYUpaRHFOb0k2Y3RCdEl1NGMrSXM4dndmSUVoMGk0bjBZS0w2Nmt3aUl4czNyNjNGemZzOW9SOXUKc2x3SG9ZMENnWUVBeEhDK2ZaQjVkZjI5MnhwVWltQ1lxMFUrYnVyMUZWN1BVdmh0QThNUGJWVXY1WHdWZ25KYQpnUzM2ZHY1ektTZjNudGhLeWJsSUlYQjdxcXBLVFBKYU16b1VtOXc3UklUSVp6RVRoYjNXQnRBcVRPVUJSMHZ2CmREWGpvcFpwNDdFdC9EMWZPbFdFelA1cmFPelpjQ3FBMit6VG1teEZWZjBNb1NWRU5OKytndDhDZ1lFQTlKVTQKblpuNjRoN2o0SnBlaGVYdlFVcy9EUk0vN2pqdjhnRC8wNFVjL1JEVWJzMXRycUNUTUhwYkxzanRlNnlEemtqZApPUkliaVp4cTRSamVtN25YQnVVRnRtNXF2RzZqd2xoYnJBTkRxa1R1cTd2TzROWXpsejZCZTZMRHF3ejVmam4yCjM0MXk0Smx2bFdCaFUwQnI5RndMOHVSZkVPU2lPcy94a1dZQ21rY0NnWUVBb29sdTlGSWdUY0tmM3JTUWt0YU8KTzloVmFrMDZjRzQ4T1RpWWF1NXd5MVFiQjFSK0w2c1N1NlFoZzJmU1BaRjJUNVpEZTFtMUZ3WU5MUTh0M3pELwo5VGJ2YW03MUV5S1M5dDhpZWh5ekJId2xJKzZ2K2lBWWh4MDN0b1dpSStXc1dQTW00Z3Qwa0hGS3lreC9OVkhWCndTTFppd0plOUdFbW5BZEx3andIVkIwQ2dZQmlpZ3c1VXVSRlhmU3BkUWhJSWc5MjJ2NFlJbjFMV1IweS96d1MKMkRxSnF0SXJvaEJpbnNjdWJOMTN4L1FHTThjV3dUeC8xYy9LYlg2U0doYjEzclhIVFZZejNlQ2E4bWgvMEdGKwp1QUgzQTdhMDhnR3pqQmxWQWhYZzNmNi9WNGJkV0RVaWREYW9UcWtxSVo2VWtBdnVjM2RNOEwxc2JQRC9pTy9tCmlKYmIyUUtCZ1FDTmRWNHMrRHYzV3lWQldRSWt5UWdJd3VzQ00rMmg2SUFoU1hqd2NRUVlKYkwxcmErcXRVRkIKeHVxWnF1bmRkZzM0aG5XMHA4ZjhCVWdodm5lemlVdlhGNzNDbjRPNFBjTzJLcE85a000bkJzSnhoSUMrMmlWWgo2Tkc4NXUzczZSNzV2aTZVQkRZdUp1QlVvU0pvK0VCVS9reWg2Z2UrU2ZqWDBMenVTdFIvbWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVuakNDQTRhZ0F3SUJBZ0lVRE9FOU1CWGhQVUFoejl1aFh6SEdKWFZtVys0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd2NURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VwcFlXNW5JRk4xTVJBd0RnWURWUVFIRXdkTwpZVzVLYVc1bk1STXdFUVlEVlFRS0V3cHRZWGg2YUdGdkxXTmhNUll3RkFZRFZRUUxFdzFsZEdOa0lGTmxZM1Z5CmFYUjVNUkF3RGdZRFZRUURFd2R0WVhoNmFHRnZNQjRYRFRJeU1EWXdNakEyTVRnd01Gb1hEVFF5TURVeU9EQTIKTVRnd01Gb3djakVMTUFrR0ExVUVCaE1DUTA0eEVUQVBCZ05WQkFnVENFcHBZVzVuSUZOMU1SQXdEZ1lEVlFRSApFd2RPWVc1S2FXNW5NUk13RVFZRFZRUUtFd3B0WVhoNmFHRnZMV05oTVJZd0ZBWURWUVFMRXcxbGRHTmtJRk5sClkzVnlhWFI1TVJFd0R3WURWUVFERXdobGRHTmtMVEUxT0RDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVAKQURDQ0FRb0NnZ0VCQUx1dDlxQWx3UXM2QUNaZ0xLeWFERHBjQ2pTZ3M0cHVBR1VNSW1iemhYWmZHSWFpWGx5RwoxV3lLNnB1bUI0ZWZpcWJQM0xIYkZ4WlBjWms3TTk3QWFrQm1sL3hVbkxETzhTSHo2clJzRVM4OXJKU0NLNEdwClJKcFR4MktKMWRST3hBOEdnK0NoWkFFbWlaOTNpelQ3cGc0ZkU3TENzSkFKL0o0a2RoeHhUL2FPOTdvcFZzTmYKZEdiQk9Ra1pzYW9pS1Rxd3VTb25GOWpRdU1jWWpqU3JSY0VVYWM1c1lBdlNiNkJQWG1YOTRJSXZKZ2gzeFNFcQpMb3BWQUZWNGdCRHVJMGxmaDE2SWdqQndBc0dhMXdqSzZUZE84TzJZUWdsL1hid05uenRadEUwWXQzS2pISWtuCjZKUUMzYy9FbUo4cm5qSldCWDJtTWMxT0lscHdNNGhPY2RrQ0F3RUFBYU9DQVNzd2dnRW5NQTRHQTFVZER3RUIKL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0RBWURWUjBUQVFILwpCQUl3QURBZEJnTlZIUTRFRmdRVWUrSkRPOTZ6R05La1lXeDBpV2dOYUtyYTIxRXdId1lEVlIwakJCZ3dGb0FVCjlsbGNvUm15Ui9aSjF3STY3T21CZ0hwRUtiTXdnYWNHQTFVZEVRU0JuekNCbklJS2EzVmlaWEp1WlhSbGM0SVMKYTNWaVpYSnVaWFJsY3k1a1pXWmhkV3gwZ2hacmRXSmxjbTVsZEdWekxtUmxabUYxYkhRdWMzWmpnaDVyZFdKbApjbTVsZEdWekxtUmxabUYxYkhRdWMzWmpMbU5zZFhOMFpYS0NKR3QxWW1WeWJtVjBaWE11WkdWbVlYVnNkQzV6CmRtTXVZMngxYzNSbGNpNXNiMk5oYkljRWZ3QUFBWWNFd0tnQ25vY0V3S2dDbjRjRXdLZ0NvSWNFd0tnQ29UQU4KQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBVTJvNU10b1c1cHZFbWVpWnRuNnM1U2EyNGRILzQ2RmkvTHNWNjYySwpvZTlBdENqbjBrczlnUTFLN29oSFI1MHVHUEJyL21rYXlWYXVnVmhpb2tNQWVjK2VoNWtWbXh4NnJtcHNQV3JsCmUwd2ZJR3lwUDkrVHNtUGN6ekNoUzNUVHFpMGljdFhVMEs5ZHFRZmYvTmtBUTBZZU9RcVNwSWoxcXZpeklNT3oKc1hTVzdwZ2xwZVFzeXFMYTFQNE0yemc0WkhRTEVla0hoNExnTHV6MlNleXRrL25vazluMnBCWTZYdEVlc1llagpZZjVDMEdlVm1mSHpWTlBNNTcwTXhBdFlmMndoOUVkUHNlYlp0cUoyb1Y3ZHp2TVRkNE9temdpcDljWGErUG50CkpqbTlZdXQwVGRsaU1XYi9FVmVCbFdiSlNod0x3U05HUFhoVWJwdVg4QXFZa1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURzakNDQXBxZ0F3SUJBZ0lVTjB5TG1uVnF4aEMrenJqTllxb2pvOXlxaGpjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2NURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VwcFlXNW5JRk4xTVJBd0RnWURWUVFIRXdkTwpZVzVLYVc1bk1STXdFUVlEVlFRS0V3cHRZWGg2YUdGdkxXTmhNUll3RkFZRFZRUUxFdzFsZEdOa0lGTmxZM1Z5CmFYUjVNUkF3RGdZRFZRUURFd2R0WVhoNmFHRnZNQjRYRFRJeU1EWXdNakExTXprd01Gb1hEVEkzTURZd01UQTEKTXprd01Gb3djVEVMTUFrR0ExVUVCaE1DUTA0eEVUQVBCZ05WQkFnVENFcHBZVzVuSUZOMU1SQXdEZ1lEVlFRSApFd2RPWVc1S2FXNW5NUk13RVFZRFZRUUtFd3B0WVhoNmFHRnZMV05oTVJZd0ZBWURWUVFMRXcxbGRHTmtJRk5sClkzVnlhWFI1TVJBd0RnWURWUVFERXdkdFlYaDZhR0Z2TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEEKTUlJQkNnS0NBUUVBeGZWQmo0YnBPUTlFZ1h3M2tESm5jbzhrRmxuMDFkdE9XbllQNDM0b092K0hrSGlkK0R2NQo3OERkQTZodVJnL1JYbTkxbTN2TVg2ZDFQQkl1ckNoU2IyZTE1b3ZVaEZrK3B6U29qNFp1ckFNYVdPbXJmbWgrCnJsR3RwRkdqeFU5VG5kQlVlSkVVbEEvTkRhNXZtRkNpSGpmYUJ5VjhVa3lybUFZbDJGUlN2V0FIdkdjbHZOZUEKOHc5NHNSVEV2OVFzOUJrV1V1RmFPQ2FUVkVRWWE1dnBwNVZJWE5CNUFYSDNLWjZIOGU4N0NnVHlTQUNUTTFrOApHUWVsVE0wamxYU1JNdE13MlNkR3VsaTJ1dTRrUjczNWhDNXAzeEJRMTBhUEJDZVB2WTFPTUkzOFVjT01qZTFGCjNiK0FxUk9mTGl1Rm5abGJDMHZPbklnZ0tvMEhwelFtRXdJREFRQUJvMEl3UURBT0JnTlZIUThCQWY4RUJBTUMKQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTlsbGNvUm15Ui9aSjF3STY3T21CZ0hwRQpLYk13RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUh2QTEwbVQ4bGkzeWlQSEs1bnN2UGs4UTVRSWJDTkM5L1h5CnRNeUdaZThHYXlHODdqa3JQazB4MEdUOE5DcU42TWU2Y0NQWjZmcXRRT0pVY3dHWC9BT2JCTmFaZ1BzdHhvMHAKYVZNaStYcmZTNEF3bTNsdzJGa09nK1V6eHhoalJGYThiVWNXazJxQ21qYzNhQ0x6R1Y2MWptU2llQjFUbzdndwpIZTN6NUxHcHNFR24rR1p4bVIxd1BrMFZLRTRKQWg5R2lpa0hhMFBOWitZRGw5RXJtYVdtV0RWNGU3d1BTTFh1CkdsR0dhdmh5RmZBNDNDWTNMVlRQNlhVT0VkZHNPdEtaR0ttYkc0QjV2OElyc3laWkxid0dMWWNzWU83bGhQWjAKUGY3bEdidHVDWXNQTjZWdGREY0U1dDJvaXN1T0t6OTV0bVUvTzN3UmhDVUtsY2NNRzc0PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://192.168.2.158:2379,https://192.168.2.159:2379,https://192.168.2.160:2379"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca" # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key" # "/calico-secrets/etcd-key"
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"

  # Configure the MTU to use for workload interfaces and tunnels.
  # By default, MTU is auto-detected, and explicitly setting this field should not be required.
  # You can override auto-detection by providing a non-zero value.
  veth_mtu: "0"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "log_file_path": "/var/log/calico/cni/cni.log",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

---
# Source: calico/templates/calico-kube-controllers-rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Pods are monitored for changing labels.
  # The node controller monitors Kubernetes nodes.
  # Namespace and serviceaccount labels are used for policy.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
      - get
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---

---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  # EndpointSlices are used for Service-based network policy rule
  # enforcement.
  - apiGroups: ["discovery.k8s.io"]
    resources:
      - endpointslices
    verbs:
      - watch
      - list
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: docker.io/calico/cni:v3.23.1
          command: ["/opt/cni/bin/install"]
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.23.1
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Enable or Disable VXLAN on the default IPv6 IP pool.
            - name: CALICO_IPV6POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the VXLAN tunnel device.
            - name: FELIX_VXLANMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the Wireguard tunnel device.
            - name: FELIX_WIREGUARDMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # KubeProxyConfiguration.clusterCIDR
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/calico-node
                - -shutdown
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
            timeoutSeconds: 10
          volumeMounts:
            # For maintaining CNI plugin API credentials.
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
              readOnly: false
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
            - name: policysync
              mountPath: /var/run/nodeagent
            # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
            # parent directory.
            - name: sysfs
              mountPath: /sys/fs/
              # Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
              # If the host is known to mount that filesystem already then Bidirectional can be omitted.
              mountPropagation: Bidirectional
            - name: cni-log-dir
              mountPath: /var/log/calico/cni
              readOnly: true
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: sysfs
          hostPath:
            path: /sys/fs/
            type: DirectoryOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Used to access CNI logs.
        - name: cni-log-dir
          hostPath:
            path: /var/log/calico/cni
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-kube-controllers
          image: docker.io/calico/kube-controllers:v3.23.1
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          livenessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -l
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
            periodSeconds: 10
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0440

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---

# This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml

---
# Source: calico/templates/kdd-crds.yaml

Apply the configuration

kubectl apply -f calico-etcd.yaml

Errors

failed to "CreatePodSandbox" for "coredns

no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"

This happens because Calico did not start successfully.

calico-node CrashLoopBackOff: x509: certificate is valid for

kubectl logs -n kube-system -f calico-node-96jzl

Result

2022-06-05 04:02:04.693 [ERROR][9] startup/startup.go 158: failed to query kubeadm's config map error=Get "https://10.244.0.1:443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=2s": x509: certificate is valid for 10.96.0.1, 192.168.2.158, 127.0.0.1, 192.168.2.159, 192.168.2.160, 192.168.2.161, not 10.244.0.1

Option 1: reset and add the extra IP to the certificate SANs

kubeadm init --apiserver-cert-extra-sans=10.244.0.1

Others suggest:

kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.244.0.1

Option 2: regenerate the certificates

# Run as root
rm -f /etc/kubernetes/pki/apiserver.crt
rm -f /etc/kubernetes/pki/apiserver.key
rm -f /etc/kubernetes/pki/apiserver-kubelet-client.crt
rm -f /etc/kubernetes/pki/apiserver-kubelet-client.key
rm -f /etc/kubernetes/pki/apiserver.*
# If both files already exist, kubeadm skips generation and keeps the existing files.
kubeadm init phase certs apiserver --config=kubeadm-config-init.yaml
kubeadm init phase certs apiserver-kubelet-client --config=kubeadm-config-init.yaml
kubectl get pods -A -o wide
kubectl delete pod -n kube-system kube-apiserver-master-158
kubectl delete pod -n kube-system kube-apiserver-master-159
kubectl delete pod -n kube-system kube-apiserver-master-160
# Distribute the new certificates to the other nodes
scp -r /etc/kubernetes/pki/apiserver* root@192.168.2.159:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/apiserver* root@192.168.2.160:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/apiserver* root@192.168.2.161:/etc/kubernetes/pki/
systemctl restart kubelet
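After the API server pods come back, you can confirm that the new certificate really contains the service IP from the error message (plain openssl on a control-plane node):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"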

References:

kubeadm init phase

[Invalid x509 certificate for kubernetes master](https://stackoverflow.com/questions/46360361/invalid-x509-certificate-for-kubernetes-master)

Client.Timeout exceeded while awaiting headers

failed to query kubeadm's config map error=Get "https://10.244.0.1:443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=2s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

The node cannot find the API server address; configure the API server IP and port (KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT):

# KUBERNETES_SERVICE_HOST 
# KUBERNETES_SERVICE_PORT
cat > calico-config.yaml << EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "192.168.2.158"
  KUBERNETES_SERVICE_PORT: "6443"
EOF
kubectl apply -f calico-config.yaml

Wait 60 seconds for the kubelet to pick up the ConfigMap (see Kubernetes issue #30189), then restart the operator so it picks up the change:

kubectl delete pod -n tigera-operator -l k8s-app=tigera-operator
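As a quick sanity check, kubectl exec against the DaemonSet (it picks one of its pods) shows whether the two variables are actually visible to calico-node; note that a configMapRef is namespace-local, so they only appear if the ConfigMap lives in the same namespace as the calico-node pods:

kubectl -n kube-system exec ds/calico-node -c calico-node -- env | grep KUBERNETES_SERVICE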

configure-calico-to-connect-directly-to-the-api-server
Calico Kubernetes Hosted Install

issue #30189

Original post: https://github.com/maxzhao-it/blog/post/ebede6df/