
Spring 中的 AspectJ 注解 AOP

常用的有

  • After
  • AfterReturning
  • AfterThrowing
  • Around
  • Aspect
  • Before

注解的执行

没有异常的执行顺序

around begin -> before -> 目标方法 -> afterReturning -> after -> around end

抛出异常的执行顺序

around begin -> before -> 目标方法(抛出异常) -> afterThrowing -> after(异常继续向外抛出,around end 不会执行)
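下面给出一个简化的切面示例(示意代码,切点表达式与包名均为假设),把上述几种通知放在同一个切面中,便于对照观察执行顺序:

import lombok.extern.slf4j.Slf4j;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.*;
import org.springframework.stereotype.Component;

@Slf4j
@Aspect
@Component
public class OrderDemoAspect {

    // 示例切点:匹配 com.example.service 包及其子包下的所有方法(包名为假设)
    @Pointcut("execution(* com.example.service..*.*(..))")
    public void pointcut() {
    }

    @Around("pointcut()")
    public Object around(ProceedingJoinPoint pjp) throws Throwable {
        log.info("around begin");
        Object result = pjp.proceed();
        // 目标方法抛出异常时不会执行到这里,对应上面"抛出异常的执行顺序"
        log.info("around end");
        return result;
    }

    @Before("pointcut()")
    public void before() {
        log.info("before");
    }

    @AfterReturning("pointcut()")
    public void afterReturning() {
        log.info("afterReturning");
    }

    @AfterThrowing("pointcut()")
    public void afterThrowing() {
        log.info("afterThrowing");
    }

    @After("pointcut()")
    public void after() {
        log.info("after");
    }
}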

表达式Pointcut

主要语法

指示器(designators)

  • 匹配方法

    • execution
  • 匹配参数

    • args
  • 匹配对象

    • this
    • bean
    • target(类)
  • 匹配包/类型

    • within (类)
  • 匹配注解

    • @args
    • @target(类)
    • @within(类)
    • @annotation

通配符(wildcards)

  • * 匹配任意数量的字符
  • .. 在类型模式中匹配任意数量的包,在参数模式中匹配任意数量的参数
  • + 匹配指定类型及其子类型(组合示例见下)
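把三种通配符组合起来的一个示意例子(包名、类名均为假设):

execution(* com.example..*Service+.find*(String, ..))

其中 com.example.. 匹配 com.example 及其任意层级的子包,*Service+ 匹配以 Service 结尾的类型及其子类型,find* 匹配以 find 开头的方法,(String, ..) 匹配第一个参数为 String、后面跟任意个参数的方法。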

运算符(operators)

  • &&
  • ||
  • !

表达式语法

execution

execution(modifiers-pattern? ret-type-pattern declaring-type-pattern? name-pattern(param-pattern) throws-pattern?)

其中必须的三个部分是:

  • returning type pattern
  • name pattern
  • parameters pattern

示例

  • execution(* com.*.*(..)) 匹配 com 包里类的任意方法
  • execution(* com..*.*(..)) 匹配 com 包及其所有子包里类的任意方法
  • execution(* com..Demo*.*(..)) 匹配 com 包及其所有子包里以Demo开头类的任意方法
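带上可选部分(修饰符、声明类型、throws)的一个完整示意(包名、异常类型均为假设):

execution(public java.util.List com.example..*Service.query*(String, ..) throws java.io.IOException)

匹配 com.example 包及其子包下以 Service 结尾的类中:public、返回 java.util.List、方法名以 query 开头、第一个参数为 String、且声明抛出 java.io.IOException 的方法。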

within

  • within(@org.springframework.web.bind.annotation.RestController *) 匹配注解下任意类的任意方法
  • within(com.*) 匹配 com 包里的任意类
  • within(com..*) 匹配 com 包及其所有子包里的任意类

this

  • this(com.IService) 匹配代理对象实现了 IService 接口的 Bean(即该接口的实现类)的任意方法

@within

  • @within(org.springframework.web.bind.annotation.RestController) 匹配声明类型(静态类型)上标注了该注解的类中的任意方法

@target

  • @target(org.springframework.web.bind.annotation.RestController) 匹配运行时对象所属类上标注了该注解的类中的任意方法

@annotation

  • @annotation(org.springframework.transaction.annotation.Transactional) 匹配标注了该注解的方法

常用示例

扫描注解标注的方法

@Aspect
@Component
public class AspectjAop {
    @Pointcut("@annotation(io.swagger.v3.oas.annotations.Operation)")
    public void operationPointCut() {
    }
}

扫描注解标注的类

@Aspect
@Component
public class AspectjAop {
    @Pointcut("within(@org.springframework.web.bind.annotation.RestController *)")
    public void operationPointCut() {
    }
}

多个条件

@Aspect
@Component
public class AspectjAop {
    @Pointcut("within(@org.springframework.web.bind.annotation.RestController *) || within(@org.springframework.stereotype.Controller *)")
    public void operationPointCut() {
    }
}

全部表达式

new InternalUseOnlyPointcutParser(classLoader)
[call, @target, target, execution, reference pointcut, adviceexecution, get, initialization, preinitialization, set, @within, withincode, @withincode, staticinitialization, @this, handler, this, within, @annotation, args, @args]

自定义 AOP 表达式

@Slf4j
public class Demo {
    public void test() throws NoSuchMethodException {
        String expression = "execution(* com..*Controller.*(..))";
        AspectJExpressionPointcut pointcutParameterNames = new AspectJExpressionPointcut(Demo.class,
                new String[0], new Class[0]);
        pointcutParameterNames.setExpression(expression);
        boolean matches = pointcutParameterNames.matches(ApiController.class);
        log.debug("匹配 ApiController.class=>{}", matches);
        assert matches;
        matches = pointcutParameterNames.matches(ApiController.class.getMethod("RequestMapping", String.class), ApiController.class);
        log.debug("匹配 ApiController.RequestMapping=>{}", matches);
    }
}

本文地址: https://github.com/maxzhao-it/blog/post/44534123/

前言

Calico

Calico 是一种广泛采用、久经考验的开源网络和网络安全解决方案,适用于 Kubernetes、虚拟机和裸机工作负载。Calico 为 Cloud Native 应用程序提供了两大服务:

  • 工作负载之间的网络连接。
  • 工作负载之间的网络安全策略实施。

核心组件

  • Felix: Calico agent,跑在每台需要运行 workload 的节点上,主要负责配置路由及 ACLs 等信息来确保 endpoint 的连通状态;
  • etcd: 分布式键值存储,主要负责网络元数据一致性,确保 Calico 网络状态的准确性;
  • BGP Client: 主要负责把 Felix 写入 kernel 的路由信息分发到当前 Calico 网络,确保 workload 间通信的有效性;
  • BGP Route Reflector: 大规模部署时使用,摒弃所有节点互联的 mesh 模式,通过一个或者多个 BGP Route Reflector 来完成集中式的路由分发。
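部署完成后,可以借助 calicoctl(需要单独安装)粗略确认 Felix 与 BGP 的运行情况,示意命令如下:

# 在集群节点上以 root 执行,查看本节点 BGP 邻居的建立状态
calicoctl node status
# 查看 Calico 记录的节点资源
calicoctl get nodes -o wide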

网络基础了解

交换技术

广播域(Broadcast Domain)

  • 广播是一种信息的传播方式,指网络中的某一设备同时向网络中所有的其它设备发送数据,这个数据所能广播到的范围即为广播域。

    简单点说,广播域就是指网络中所有能接收到同样广播消息的设备的集合。

  • 比如:交换机在转发数据时会先进行广播,这个广播可以发送的区域就是一个广播域

  • 路由器的一个接口下的网络是一个广播域,所以路由器可以隔离广播域

  • 冲突域:在同一个冲突域中的每一个节点都能收到所有被发送的帧

  • 广播域:网络中能接收任一设备发出的广播帧的所有设备的集合

地址解析协议(ARP)

ARP(Address Resolution Protocol)

  • ARP是根据IP地址获取物理地址的一个TCP/IP协议
  • 主机发送信息时将包含目标IP地址的ARP请求广播到局域网络上的所有主机,并接收返回消息,以此确定目标的物理地址;

三层交换机

  • 三层交换机就是具有部分路由器功能的交换机,工作在OSI网络标准模型的第三层:网络层。三层交换机的最重要目的是加快大型局域网内部的数据交换,所具有的路由功能也是为这目的服务的,能够做到一次路由,多次转发。
  • 对于数据包转发等规律性的过程由硬件高速实现,而像路由信息更新、路由表维护、路由计算、路由确定等功能,由软件实现。
  • 二层交换机工作在数据链路层

虚拟局域网(VLAN)

VLAN(Virtual Local Area Network)

  • VLAN 是一组逻辑上的设备和用户,这些设备和用户并不受物理位置的限制,可以根据功能、部门及应用等因素将它们组织起来,相互之间的通信就好像它们在同一个网段中一样,由此得名虚拟局域网。交换机端口有两种 VLAN 属性,其一是 VLAN ID,其二是 VLAN TAG,分别对应 VLAN 对数据包设置 VLAN 标签和允许通过的 VLAN TAG(标签)数据包;不同 VLAN ID 的端口,可以通过相互允许 VLAN TAG 来构建 VLAN。
  • VLAN 是一种比较新的技术,工作在 OSI 参考模型的第 2 层和第 3 层。一个 VLAN 不一定是一个广播域,VLAN 之间的通信并不一定需要路由网关,其本身可以通过对 VLAN TAG 的相互允许,组成不同访问控制属性的 VLAN,当然也可以通过第 3 层的路由器来完成。通过 VLAN ID 和 VLAN TAG 的允许,VLAN 可以为几乎局域网内任何信息集成系统架构逻辑拓扑和访问控制,并且与其它共享物理网络链路的信息系统实现相互间无扰共享。
  • VLAN可以为信息业务和子业务、以及信息业务间提供一个相符合业务结构的虚拟网络拓扑架构并实现访问控制功能。
  • 与传统的局域网技术相比较,VLAN技术更加灵活,它具有以下优点:
    • 网络设备的移动、添加和修改的管理开销减少;
    • 可以控制广播活动;可提高网络的安全性。

路由技术

路由器(Router)

  • 连接两个或多个网络的硬件设备,在网络间起网关的作用,是读取每一个数据包中的地址然后决定如何传送的专用智能性的网络设备。
  • 它能够理解不同的协议,例如某个局域网使用的以太网协议,因特网使用的TCP/IP协议
  • 这样,路由器可以分析各种不同类型网络传来的数据包的目的地址,把非TCP/IP网络的地址转换成TCP/IP地址,或者反之;再根据选定的路由算法把各数据包按最佳路线传送到指定位置。
  • 所以路由器可以把非TCP/IP网络连接到因特网上。

K8S pod 网络

CNI(容器网络接口)

  • CNI 只关心容器的网络连接以及在容器被删除时移除分配的资源。
  • CNI配置文件默认路径:/etc/cni/net.d
  • CNI二进制程序默认路径:/opt/cni/bin/
  • GitHub CNI
  • calico docs
  • 边界网关协议(BGP,Border Gateway Protocol)是 Internet 上使用的外部网关路由协议
  • BGP 可以作为办公网络访问集群网络的路由交换协议

这两个路径可在kubelet启动参数中定义:

--network-plugin=cni
--cni-conf-dir=/etc/cni/net.d
--cni-bin-dir=/opt/cni/bin
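可以在任一节点上快速确认这两个路径是否就绪(示意命令):

# CNI 配置文件目录(Calico 部署后会在这里生成 10-calico.conflist)
ls /etc/cni/net.d/
# CNI 二进制插件目录
ls /opt/cni/bin/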

准备服务器资源

  • 192.168.2.158 master-158
  • 192.168.2.159 master-159
  • 192.168.2.160 master-160
  • 192.168.2.161 node-161

部署

Calico 数据存储方式

  1. etcd 存储:https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml
  2. Kubernetes API Datastore 服务: https://projectcalico.docs.tigera.io/manifests/calico.yaml

获取镜像

master-158执行

cd ~/
# 全家桶下载
curl https://github.com/projectcalico/calico/releases/download/v3.23.1/release-v3.23.1.tgz -O
tar -xf release-v3.23.1.tgz
# 发送到其它服务器
scp release-v3.23.1.tgz root@192.168.2.159:/root/
scp release-v3.23.1.tgz root@192.168.2.160:/root/
scp release-v3.23.1.tgz root@192.168.2.161:/root/

master-158/master-159/master-160/node-161都执行

# 导入 master 到空间 kube-system
tar -xf release-v3.23.1.tgz
cd release-v3.23.1/images/
for i in $(ls) ; do ctr -n=kube-system image import ${i}; done
ctr -n=kube-system images list
cd ~/

获取 k8s calico-etcd.yaml文件

# 下载配置文件
curl https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml -o /root/calico-etcd.yaml-back

etcd CA配置文件

master-158执行

# 当前脚本可以重复执行 
cd ~/
\cp /root/calico-etcd.yaml-back /root/calico-etcd.yaml
cat /etc/certs/etcd/ca.pem | base64 -w 0 > CA_BASE64
cat /etc/certs/etcd/etcd-158.pem | base64 -w 0 > ETCD_BASE64
cat /etc/certs/etcd/etcd-158-key.pem | base64 -w 0 > ETCD_KEY_BASE64
echo "https://192.168.2.158:2379,https://192.168.2.159:2379,https://192.168.2.160:2379" > END_POINTS
sed -i "s?http://<ETCD_IP>:<ETCD_PORT>?$(cat END_POINTS)?g" /root/calico-etcd.yaml
sed -i "s?# etcd-key: null?etcd-key: $(cat ETCD_KEY_BASE64)?g" /root/calico-etcd.yaml
sed -i "s?# etcd-ca: null?etcd-ca: $(cat CA_BASE64)?g" /root/calico-etcd.yaml
sed -i "s?# etcd-cert: null?etcd-cert: $(cat ETCD_BASE64)?g" /root/calico-etcd.yaml
sed -i 's?etcd_ca: ""?etcd_ca: "/calico-secrets/etcd-ca"?g' /root/calico-etcd.yaml
sed -i 's?etcd_cert: ""?etcd_cert: "/calico-secrets/etcd-cert"?g' /root/calico-etcd.yaml
sed -i 's?etcd_key: ""?etcd_key: "/calico-secrets/etcd-key"?g' /root/calico-etcd.yaml
# KubeProxyConfiguration.clusterCIDR
sed -i 's?# - name: CALICO_IPV4POOL_CIDR?- name: CALICO_IPV4POOL_CIDR?g' /root/calico-etcd.yaml
sed -i 's?# value: "192.168.0.0/16"? value: "10.244.0.0/16"?g' /root/calico-etcd.yaml
cat /root/calico-etcd.yaml |grep etcd
cat /root/calico-etcd.yaml |grep CALICO_IPV4POOL_CIDR
# 创建命名空间
kubectl create namespace tigera-operator
# 配置api-server 地址,KUBERNETES_SERVICE_HOST,KUBERNETES_SERVICE_PORT
cat > calico-config.yaml << EOF
kind: ConfigMap
apiVersion: v1
metadata:
name: kubernetes-services-endpoint
namespace: tigera-operator
data:
KUBERNETES_SERVICE_HOST: "192.168.2.158"
KUBERNETES_SERVICE_PORT: "6443"
EOF
kubectl apply -f calico-config.yaml
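脚本执行完成后,可以做一次简单的确认(示意命令):

# 确认命名空间与 api-server 地址的 ConfigMap 已创建
kubectl get ns tigera-operator
kubectl get cm -n tigera-operator kubernetes-services-endpoint -o yaml
# 确认证书内容已经替换进 calico-etcd.yaml(只看每行前 40 个字符即可)
grep -E "etcd-(ca|cert|key):" /root/calico-etcd.yaml | cut -c1-40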

安装etcd集群
生成CA证书(K8S)

安装

我们这里的 etcd 是使用证书的

master-158执行

cd ~/
# 第一次载入
kubectl apply -f calico-etcd.yaml
# 重启-yaml
kubectl replace --force -f calico-etcd.yaml
# 监控
watch kubectl get pods -n kube-system


校验

# 查看
kubectl get pods -A -o wide
# 查看日志
kubectl logs -n kube-system -f calico-node-hd9z6
kubectl get ds -n kube-system calico-node
# 查看具体pod的yaml配置信息
kubectl get pod -n kube-system calico-node-hd9z6 -o=yaml
# 查看Pod的详细信息,包括记录的事件:
kubectl describe pod -n kube-system calico-node
kubectl describe pod -n kube-system calico-node-hd9z6
# 查看节点信息
kubectl get pod -n kube-system -l k8s-app=calico-node -o wide
# 删除节点信息:
# kubectl delete pod -n kube-system 节点名称
# kubectl delete -f calico-etcd.yaml
# rm -rf /etc/cni/net.d/
# modprobe -r ipip


# 查看 API 对象细节
# 使用 kubectl describe 命令,查看一个 API 对象的细节:
kubectl describe node master-158
# 重启-yaml
kubectl replace --force -f calico-etcd.yaml
# scale 重启
kubectl scale deployment -n kube-system {pod} --replicas=0
#查看k8s 的token
kubectl get serviceaccount -n kube-system -l k8s-app=calico-node -o yaml
kubectl describe secrets -n kube-system calico-config
# 清除 calico
kubectl scale deploy -n kube-system calico-kube-controllers --replicas=0
# 续期全部证书(较新版本的 kubeadm 中 alpha 子命令已移除)
kubeadm certs renew all

查询失败的状态

kubectl get pod -n kube-system -l k8s-app=calico-node -o wide


  • Init:ImagePullBackOff 镜像拉取失败
  • CrashLoopBackOff 容器启动后反复崩溃重启
# 查看日志
kubectl logs -n kube-system -f calico-node-hd9z6
# 查看Pod的详细信息,包括记录的事件: master-158
kubectl describe pod -n kube-system calico-node-hd9z6
# master-159
kubectl describe pod -n kube-system calico-node-jsn8n
# master-160
kubectl describe pod -n kube-system calico-node-4l6b6
# node-161
kubectl describe pod -n kube-system calico-node-bqfkf
# 导出单个
kubectl get pod -n kube-system calico-node-hd9z6 -o=yaml > calico-node-hd9z6.yaml
# 重启-yaml
kubectl replace --force -f calico-node-hd9z6.yaml

kubelet 全部节点

#修改启动参数
cat > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf << EOF
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
# 添加
#Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
# 添加
#Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# 这是 "kubeadm init" 和 "kubeadm join" 运行时生成的文件,动态地填充 KUBELET_KUBEADM_ARGS 变量
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# 这是一个文件,用户在不得已下可以将其用作替代 kubelet args。
# 用户最好使用 .NodeRegistration.KubeletExtraArgs 对象在配置文件中替代。
# KUBELET_EXTRA_ARGS 应该从此文件中获取。
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet -l

查询错误

journalctl -xeu kubelet

修改文件

cat /root/calico-etcd.yaml

配置内容

---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: calico-etcd-secrets
namespace: kube-system
data:
etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdTYzMm9DWEJDem9BSm1Bc3JKb01PbHdLTktDemltNEFaUXdpWnZPRmRsOFlocUplClhJYlZiSXJxbTZZSGg1K0twcy9jc2RzWEZrOXhtVHN6M3NCcVFHYVgvRlNjc003eElmUHF0R3dSTHoyc2xJSXIKZ2FsRW1sUEhZb25WMUU3RUR3YUQ0S0ZrQVNhSm4zZUxOUHVtRGg4VHNzS3drQW44bmlSMkhIRlA5bzczdWlsVwp3MTkwWnNFNUNSbXhxaUlwT3JDNUtpY1gyTkM0eHhpT05LdEZ3UlJwem14Z0M5SnZvRTllWmYzZ2dpOG1DSGZGCklTb3VpbFVBVlhpQUVPNGpTVitIWG9pQ01IQUN3WnJYQ01ycE4wN3c3WmhDQ1g5ZHZBMmZPMW0wVFJpM2NxTWMKaVNmb2xBTGR6OFNZbnl1ZU1sWUZmYVl4elU0aVduQXppRTV4MlFJREFRQUJBb0lCQUFvZmpSRUFXRlJSc1paZwpVNmlQdXA4ZlBkR3U1V0JQSktoT3FrQmhYRTZSUEpKdWlhWjJBMmNTYXlzd0huSGJVakJEUUFVNzZ4ZmgreCtuCnlObDRDWU1seFlidnpXL2dDYk9xSTN2TjVITm00VHMxZGtGTkx3MGYvYjQ3N3hPL2wrV3psVU4xa1I1YXhNdWMKT0I4SWYrRjlIYVBqeW9CS2VaelNIS2pXRjlrVnIwOHE4bm9DSEltRy8weWhLSTFsMFdYZzA2amVLdWVEbUhwOAppNlpnQlZiNUhDQUtMK1Z1K1N3NkJSSlJLdG5vdXMySFNOVTFtdndXRENXa00yZFVwVU1KYnA1KzlqZWpVSUV1CklZRW5kdjZwYUpaRHFOb0k2Y3RCdEl1NGMrSXM4dndmSUVoMGk0bjBZS0w2Nmt3aUl4czNyNjNGemZzOW9SOXUKc2x3SG9ZMENnWUVBeEhDK2ZaQjVkZjI5MnhwVWltQ1lxMFUrYnVyMUZWN1BVdmh0QThNUGJWVXY1WHdWZ25KYQpnUzM2ZHY1ektTZjNudGhLeWJsSUlYQjdxcXBLVFBKYU16b1VtOXc3UklUSVp6RVRoYjNXQnRBcVRPVUJSMHZ2CmREWGpvcFpwNDdFdC9EMWZPbFdFelA1cmFPelpjQ3FBMit6VG1teEZWZjBNb1NWRU5OKytndDhDZ1lFQTlKVTQKblpuNjRoN2o0SnBlaGVYdlFVcy9EUk0vN2pqdjhnRC8wNFVjL1JEVWJzMXRycUNUTUhwYkxzanRlNnlEemtqZApPUkliaVp4cTRSamVtN25YQnVVRnRtNXF2RzZqd2xoYnJBTkRxa1R1cTd2TzROWXpsejZCZTZMRHF3ejVmam4yCjM0MXk0Smx2bFdCaFUwQnI5RndMOHVSZkVPU2lPcy94a1dZQ21rY0NnWUVBb29sdTlGSWdUY0tmM3JTUWt0YU8KTzloVmFrMDZjRzQ4T1RpWWF1NXd5MVFiQjFSK0w2c1N1NlFoZzJmU1BaRjJUNVpEZTFtMUZ3WU5MUTh0M3pELwo5VGJ2YW03MUV5S1M5dDhpZWh5ekJId2xJKzZ2K2lBWWh4MDN0b1dpSStXc1dQTW00Z3Qwa0hGS3lreC9OVkhWCndTTFppd0plOUdFbW5BZEx3andIVkIwQ2dZQmlpZ3c1VXVSRlhmU3BkUWhJSWc5MjJ2NFlJbjFMV1IweS96d1MKMkRxSnF0SXJvaEJpbnNjdWJOMTN4L1FHTThjV3dUeC8xYy9LYlg2U0doYjEzclhIVFZZejNlQ2E4bWgvMEdGKwp1QUgzQTdhMDhnR3pqQmxWQWhYZzNmNi9WNGJkV0RVaWREYW9UcWtxSVo2VWtBdnVjM2RNOEwxc2JQRC9pTy9tCmlKYmIyUUtCZ1FDTmRWNHMrRHYzV3lWQldRSWt5UWdJd3VzQ00rMmg2SUFoU1hqd2NRUVlKYkwxcmErcXRVRkIKeHVxWnF1bmRkZzM0aG5XMHA4ZjhCVWdodm5lemlVdlhGNzNDbjRPNFBjTzJLcE85a000bkJzSnhoSUMrMmlWWgo2Tkc4NXUzczZSNzV2aTZVQkRZdUp1QlVvU0pvK0VCVS9reWg2Z2UrU2ZqWDBMenVTdFIvbWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVuakNDQTRhZ0F3SUJBZ0lVRE9FOU1CWGhQVUFoejl1aFh6SEdKWFZtVys0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd2NURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VwcFlXNW5JRk4xTVJBd0RnWURWUVFIRXdkTwpZVzVLYVc1bk1STXdFUVlEVlFRS0V3cHRZWGg2YUdGdkxXTmhNUll3RkFZRFZRUUxFdzFsZEdOa0lGTmxZM1Z5CmFYUjVNUkF3RGdZRFZRUURFd2R0WVhoNmFHRnZNQjRYRFRJeU1EWXdNakEyTVRnd01Gb1hEVFF5TURVeU9EQTIKTVRnd01Gb3djakVMTUFrR0ExVUVCaE1DUTA0eEVUQVBCZ05WQkFnVENFcHBZVzVuSUZOMU1SQXdEZ1lEVlFRSApFd2RPWVc1S2FXNW5NUk13RVFZRFZRUUtFd3B0WVhoNmFHRnZMV05oTVJZd0ZBWURWUVFMRXcxbGRHTmtJRk5sClkzVnlhWFI1TVJFd0R3WURWUVFERXdobGRHTmtMVEUxT0RDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVAKQURDQ0FRb0NnZ0VCQUx1dDlxQWx3UXM2QUNaZ0xLeWFERHBjQ2pTZ3M0cHVBR1VNSW1iemhYWmZHSWFpWGx5RwoxV3lLNnB1bUI0ZWZpcWJQM0xIYkZ4WlBjWms3TTk3QWFrQm1sL3hVbkxETzhTSHo2clJzRVM4OXJKU0NLNEdwClJKcFR4MktKMWRST3hBOEdnK0NoWkFFbWlaOTNpelQ3cGc0ZkU3TENzSkFKL0o0a2RoeHhUL2FPOTdvcFZzTmYKZEdiQk9Ra1pzYW9pS1Rxd3VTb25GOWpRdU1jWWpqU3JSY0VVYWM1c1lBdlNiNkJQWG1YOTRJSXZKZ2gzeFNFcQpMb3BWQUZWNGdCRHVJMGxmaDE2SWdqQndBc0dhMXdqSzZUZE84TzJZUWdsL1hid05uenRadEUwWXQzS2pISWtuCjZKUUMzYy9FbUo4cm5qSldCWDJtTWMxT0lscHdNNGhPY2RrQ0F3RUFBYU9DQVNzd2dnRW5NQTRHQTFVZER3RUIKL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0RBWURWUjBUQVFILwpCQUl3QURBZEJnTlZIUTRFRmdRVWUrSkRPOTZ6R05La1lXeDBpV2dOYUtyYTIxRXdId1lEVlIwakJCZ3dGb0FVCjlsbGNvUm15Ui9aSjF3STY3T21CZ0hwRUtiTXdnYWNHQTFVZEVRU0JuekNCbklJS2EzVmlaWEp1WlhSbGM0SVMKYTNWaVpYSnVaWFJsY3k1a1pXWmhkV3gwZ2hacmRXSmxjbTVsZEdWekxtUmxabUYxYkhRdWMzWmpnaDVyZFdKbApjbTVsZEdWekxtUmxabUYxYkhRdWMzWmpMbU5zZFhOMFpYS0NKR3QxWW1WeWJtVjBaWE11WkdWbVlYVnNkQzV6CmRtTXVZMngxYzNSbGNpNXNiMk5oYkljRWZ3QUFBWWNFd0tnQ25vY0V3S2dDbjRjRXdLZ0NvSWNFd0tnQ29UQU4KQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBVTJvNU10b1c1cHZFbWVpWnRuNnM1U2EyNGRILzQ2RmkvTHNWNjYySwpvZTlBdENqbjBrczlnUTFLN29oSFI1MHVHUEJyL21rYXlWYXVnVmhpb2tNQWVjK2VoNWtWbXh4NnJtcHNQV3JsCmUwd2ZJR3lwUDkrVHNtUGN6ekNoUzNUVHFpMGljdFhVMEs5ZHFRZmYvTmtBUTBZZU9RcVNwSWoxcXZpeklNT3oKc1hTVzdwZ2xwZVFzeXFMYTFQNE0yemc0WkhRTEVla0hoNExnTHV6MlNleXRrL25vazluMnBCWTZYdEVlc1llagpZZjVDMEdlVm1mSHpWTlBNNTcwTXhBdFlmMndoOUVkUHNlYlp0cUoyb1Y3ZHp2TVRkNE9temdpcDljWGErUG50CkpqbTlZdXQwVGRsaU1XYi9FVmVCbFdiSlNod0x3U05HUFhoVWJwdVg4QXFZa1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURzakNDQXBxZ0F3SUJBZ0lVTjB5TG1uVnF4aEMrenJqTllxb2pvOXlxaGpjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2NURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VwcFlXNW5JRk4xTVJBd0RnWURWUVFIRXdkTwpZVzVLYVc1bk1STXdFUVlEVlFRS0V3cHRZWGg2YUdGdkxXTmhNUll3RkFZRFZRUUxFdzFsZEdOa0lGTmxZM1Z5CmFYUjVNUkF3RGdZRFZRUURFd2R0WVhoNmFHRnZNQjRYRFRJeU1EWXdNakExTXprd01Gb1hEVEkzTURZd01UQTEKTXprd01Gb3djVEVMTUFrR0ExVUVCaE1DUTA0eEVUQVBCZ05WQkFnVENFcHBZVzVuSUZOMU1SQXdEZ1lEVlFRSApFd2RPWVc1S2FXNW5NUk13RVFZRFZRUUtFd3B0WVhoNmFHRnZMV05oTVJZd0ZBWURWUVFMRXcxbGRHTmtJRk5sClkzVnlhWFI1TVJBd0RnWURWUVFERXdkdFlYaDZhR0Z2TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEEKTUlJQkNnS0NBUUVBeGZWQmo0YnBPUTlFZ1h3M2tESm5jbzhrRmxuMDFkdE9XbllQNDM0b092K0hrSGlkK0R2NQo3OERkQTZodVJnL1JYbTkxbTN2TVg2ZDFQQkl1ckNoU2IyZTE1b3ZVaEZrK3B6U29qNFp1ckFNYVdPbXJmbWgrCnJsR3RwRkdqeFU5VG5kQlVlSkVVbEEvTkRhNXZtRkNpSGpmYUJ5VjhVa3lybUFZbDJGUlN2V0FIdkdjbHZOZUEKOHc5NHNSVEV2OVFzOUJrV1V1RmFPQ2FUVkVRWWE1dnBwNVZJWE5CNUFYSDNLWjZIOGU4N0NnVHlTQUNUTTFrOApHUWVsVE0wamxYU1JNdE13MlNkR3VsaTJ1dTRrUjczNWhDNXAzeEJRMTBhUEJDZVB2WTFPTUkzOFVjT01qZTFGCjNiK0FxUk9mTGl1Rm5abGJDMHZPbklnZ0tvMEhwelFtRXdJREFRQUJvMEl3UURBT0JnTlZIUThCQWY4RUJBTUMKQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTlsbGNvUm15Ui9aSjF3STY3T21CZ0hwRQpLYk13RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUh2QTEwbVQ4bGkzeWlQSEs1bnN2UGs4UTVRSWJDTkM5L1h5CnRNeUdaZThHYXlHODdqa3JQazB4MEdUOE5DcU42TWU2Y0NQWjZmcXRRT0pVY3dHWC9BT2JCTmFaZ1BzdHhvMHAKYVZNaStYcmZTNEF3bTNsdzJGa09nK1V6eHhoalJGYThiVWNXazJxQ21qYzNhQ0x6R1Y2MWptU2llQjFUbzdndwpIZTN6NUxHcHNFR24rR1p4bVIxd1BrMFZLRTRKQWg5R2lpa0hhMFBOWitZRGw5RXJtYVdtV0RWNGU3d1BTTFh1CkdsR0dhdmh5RmZBNDNDWTNMVlRQNlhVT0VkZHNPdEtaR0ttYkc0QjV2OElyc3laWkxid0dMWWNzWU83bGhQWjAKUGY3bEdidHVDWXNQTjZWdGREY0U1dDJvaXN1T0t6OTV0bVUvTzN3UmhDVUtsY2NNRzc0PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Configure this with the location of your etcd cluster.
etcd_endpoints: "https://192.168.2.158:2379,https://192.168.2.159:2379,https://192.168.2.160:2379"
# If you're using TLS enabled etcd uncomment the following.
# You must also populate the Secret below with these files.
etcd_ca: "/calico-secrets/etcd-ca" # "/calico-secrets/etcd-ca"
etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
etcd_key: "/calico-secrets/etcd-key" # "/calico-secrets/etcd-key"
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"

# Configure the MTU to use for workload interfaces and tunnels.
# By default, MTU is auto-detected, and explicitly setting this field should not be required.
# You can override auto-detection by providing a non-zero value.
veth_mtu: "0"

# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"log_file_path": "/var/log/calico/cni/cni.log",
"etcd_endpoints": "__ETCD_ENDPOINTS__",
"etcd_key_file": "__ETCD_KEY_FILE__",
"etcd_cert_file": "__ETCD_CERT_FILE__",
"etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}

---
# Source: calico/templates/calico-kube-controllers-rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Pods are monitored for changing labels.
# The node controller monitors Kubernetes nodes.
# Namespace and serviceaccount labels are used for policy.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
- serviceaccounts
verbs:
- watch
- list
- get
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---

---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
# EndpointSlices are used for Service-based network policy rule
# enforcement.
- apiGroups: ["discovery.k8s.io"]
resources:
- endpointslices
verbs:
- watch
- list
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Pod CIDR auto-detection on kubeadm needs access to config maps.
- apiGroups: [""]
resources:
- configmaps
verbs:
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
spec:
nodeSelector:
kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: docker.io/calico/cni:v3.23.1
command: ["/opt/cni/bin/install"]
envFrom:
- configMapRef:
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
name: kubernetes-services-endpoint
optional: true
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
- mountPath: /calico-secrets
name: etcd-certs
securityContext:
privileged: true
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: docker.io/calico/node:v3.23.1
envFrom:
- configMapRef:
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
name: kubernetes-services-endpoint
optional: true
env:
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
# Set noderef for node controller.
- name: CALICO_K8S_NODE_REF
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
value: "Never"
# Enable or Disable VXLAN on the default IPv6 IP pool.
- name: CALICO_IPV6POOL_VXLAN
value: "Never"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the VXLAN tunnel device.
- name: FELIX_VXLANMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the Wireguard tunnel device.
- name: FELIX_WIREGUARDMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# KubeProxyConfiguration.clusterCIDR
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
lifecycle:
preStop:
exec:
command:
- /bin/calico-node
- -shutdown
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
timeoutSeconds: 10
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
- -bird-ready
periodSeconds: 10
timeoutSeconds: 10
volumeMounts:
# For maintaining CNI plugin API credentials.
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
readOnly: false
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- mountPath: /calico-secrets
name: etcd-certs
- name: policysync
mountPath: /var/run/nodeagent
# For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
# parent directory.
- name: sysfs
mountPath: /sys/fs/
# Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
# If the host is known to mount that filesystem already then Bidirectional can be omitted.
mountPropagation: Bidirectional
- name: cni-log-dir
mountPath: /var/log/calico/cni
readOnly: true
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
- name: sysfs
hostPath:
path: /sys/fs/
type: DirectoryOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Used to access CNI logs.
- name: cni-log-dir
hostPath:
path: /var/log/calico/cni
# Mount in the etcd TLS secrets with mode 400.
# See https://kubernetes.io/docs/concepts/configuration/secret/
- name: etcd-certs
secret:
secretName: calico-etcd-secrets
defaultMode: 0400
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
---

apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
nodeSelector:
kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
# The controllers must run in the host network namespace so that
# it isn't governed by policy that would prevent it from working.
hostNetwork: true
containers:
- name: calico-kube-controllers
image: docker.io/calico/kube-controllers:v3.23.1
env:
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: policy,namespace,serviceaccount,workloadendpoint,node
volumeMounts:
# Mount in the etcd TLS secrets.
- mountPath: /calico-secrets
name: etcd-certs
livenessProbe:
exec:
command:
- /usr/bin/check-status
- -l
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
timeoutSeconds: 10
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
periodSeconds: 10
volumes:
# Mount in the etcd TLS secrets with mode 400.
# See https://kubernetes.io/docs/concepts/configuration/secret/
- name: etcd-certs
secret:
secretName: calico-etcd-secrets
defaultMode: 0440

---

apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system

---

# This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml

---
# Source: calico/templates/kdd-crds.yaml

导入配置

kubectl apply -f calico-etcd.yaml

错误

failed to \"CreatePodSandbox\" for \"coredns

no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

是因为calico 没有启动成功

calico-node:CrashLoopBackOff,x509: certificate is valid for

kubectl logs -n kube-system -f calico-node-96jzl

结果

2022-06-05 04:02:04.693 [ERROR][9] startup/startup.go 158: failed to query kubeadm's config map error=Get "https://10.244.0.1:443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=2s": x509: certificate is valid for 10.96.0.1, 192.168.2.158, 127.0.0.1, 192.168.2.159, 192.168.2.160, 192.168.2.161, not 10.244.0.1
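排查时可以先确认 apiserver 证书里实际包含了哪些 SAN(示意命令):

# 查看 apiserver 证书的 Subject Alternative Name
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"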

方案一:重置后添加扩展IP

kubeadm init --apiserver-cert-extra-sans=10.244.0.1

也有人讲

kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.244.0.1

方案二:修复证书

# root 用户执行
rm -f /etc/kubernetes/pki/apiserver.crt
rm -f /etc/kubernetes/pki/apiserver.key
rm -f /etc/kubernetes/pki/apiserver-kubelet-client.crt
rm -f /etc/kubernetes/pki/apiserver-kubelet-client.key
rm -f /etc/kubernetes/pki/apiserver.*
# 如果两个文件都已存在,则 kubeadm 将跳过生成步骤,使用现有文件。
kubeadm init phase certs apiserver --config=kubeadm-config-init.yaml
kubeadm init phase certs apiserver-kubelet-client --config=kubeadm-config-init.yaml
kubectl get pods -A -o wide
kubectl delete pod -n kube-system kube-apiserver-master-158
kubectl delete pod -n kube-system kube-apiserver-master-159
kubectl delete pod -n kube-system kube-apiserver-master-160
# 分发到其它节点
scp -r /etc/kubernetes/pki/apiserver* root@192.168.2.159:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/apiserver* root@192.168.2.160:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/apiserver* root@192.168.2.161:/etc/kubernetes/pki/
systemctl restart kubelet

参考:

kubeadm init phase

[Invalid x509 certificate for kubernetes master](https://stackoverflow.com/questions/46360361/invalid-x509-certificate-for-kubernetes-master)

Client.Timeout exceeded while awaiting headers

failed to query kubeadm's config map error=Get "https://10.244.0.1:443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=2s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

node 节点找不到 apiserver 地址,需要配置 apiserver 的 IP 和 port

# KUBERNETES_SERVICE_HOST 
# KUBERNETES_SERVICE_PORT
cat > calico-config.yaml << EOF
kind: ConfigMap
apiVersion: v1
metadata:
name: kubernetes-services-endpoint
namespace: tigera-operator
data:
KUBERNETES_SERVICE_HOST: "192.168.2.158"
KUBERNETES_SERVICE_PORT: "6443"
EOF
kubectl apply -f calico-config.yaml

等待 60 秒让 kubelet 获取 ConfigMap(参见 Kubernetes issue #30189);然后重新启动 tigera-operator 以使更改生效:

kubectl delete pod -n tigera-operator -l k8s-app=tigera-operator

configure-calico-to-connect-directly-to-the-api-server
Calico Kubernetes Hosted Install

issue #30189

本文地址: https://github.com/maxzhao-it/blog/post/ebede6df/

通过配置文件创建集群

kubeadm init --config=kubeadm-config.yaml

配置文件内容

v1beta3文档

配置类型

一个 kubeadm 配置文件可以包含多个配置类型,使用三条短横线(---)分隔。

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
# 配置集群范围的设置
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# 用于更改将传递给集群中部署的所有 `kubelet` 实例的配置
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 用于更改传递给集群中部署的 `kube-proxy` 实例的配置
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# join 的配置
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration

打印 init 和 join 的默认值

kubeadm config print init-defaults
kubeadm config print init-defaults --component-configs KubeletConfiguration
kubeadm config print init-defaults --component-configs KubeProxyConfiguration
kubeadm config print join-defaults

Kubeadm init 配置类型

使用 --config 选项执行 kubeadm init 时,可以使用以下配置类型:InitConfiguration、ClusterConfiguration、KubeProxyConfiguration、KubeletConfiguration。

但 InitConfiguration 和 ClusterConfiguration 两者中只有一个是必需的。

InitConfiguration

用于配置运行时设置,在 kubeadm init 的情况下是 bootstrap token 的配置,以及特定于执行 kubeadm 的节点的所有设置,包括:

  • NodeRegistration,包含与将新节点注册到集群相关的字段;使用它来自定义节点名称、要使用的 CRI 套接字或应仅适用于该节点的任何其他设置(例如节点 ip)。
  • LocalAPIEndpoint,表示要在该节点上部署的 API 服务器实例的端点;使用它,例如自定义 API 服务器广告地址。

ClusterConfiguration

用于配置集群范围的设置,包括以下设置:

  • Networking,保存集群网络拓扑的配置;使用它,例如自定义 pod 子网或服务子网。
  • Etcd ;使用它,例如自定义本地 etcd 或配置 API 服务器以使用外部 etcd 集群。
  • kube-apiserver、kube-scheduler、kube-controller-manager 配置;使用它通过添加自定义设置或覆盖 kubeadm 默认设置来自定义控制平面组件。

KubeProxyConfiguration

用于更改传递给集群中部署的 kube-proxy 实例的配置。如果未提供或仅部分提供此对象,kubeadm 将应用默认值。

kubernetes.io kube-proxy 官方文档

KubeletConfiguration

用于更改将传递给集群中部署的所有 kubelet 实例的配置。如果未提供或仅部分提供此对象,kubeadm 将应用默认值。

kubernetes.io kubelet 官方文档

godoc.org/k8s.io kubelet 官方文档

Kubeadm join 配置类型

使用 --config 选项执行 kubeadm join 时,应提供 JoinConfiguration 类型。

JoinConfiguration 类型应该用于配置运行时设置,在 kubeadm join 的情况下,是用于访问集群信息的发现方法以及特定于执行 kubeadm 的节点的所有设置,包括:

  • NodeRegistration,包含与将新节点注册到集群相关的字段;使用它来自定义节点名称、要使用的 CRI 套接字或应仅适用于该节点的任何其他设置(例如节点 ip)。
  • APIEndpoint,表示最终将部署在该节点上的 API 服务器实例的端点。

本次实例

完整示例及字段说明

host158 master

mkdir -p  /etc/kubernetes/pki/etcd/
\cp /etc/certs/etcd/ca.pem /etc/kubernetes/pki/etcd/etcd-ca.crt
\cp /etc/certs/etcd/etcd-158.pem /etc/kubernetes/pki/etcd/etcd.crt
\cp /etc/certs/etcd/etcd-158-key.pem /etc/kubernetes/pki/etcd/etcd.key
# 计算 CA 证书公钥的 sha256 哈希(可用于 join 时的 caCertHashes / --discovery-token-ca-cert-hash)
#默认证书 /etc/kubernetes/pki/ca.crt
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
# vim ~/kubeadm-config-init.yaml
# 查看host中是否有映射
cat /etc/hosts
# 写入每个节点的hosts
cat >> /etc/hosts << EOF
192.168.2.158 host158
192.168.2.159 host159
192.168.2.160 host160
192.168.2.161 host161
192.168.2.240 host240
192.168.2.240 host241
192.168.2.158 master-158
192.168.2.159 master-159
192.168.2.160 master-160
192.168.2.161 node-161
EOF
echo "192.168.2.158 cluster.158" >> /etc/hosts
#使用自己的ca证书,InitConfiguration.skipPhases 跳过 certs/ca
mkdir -p /etc/kubernetes/pki/
\cp /etc/certs/etcd/ca.pem /etc/kubernetes/pki/ca.crt
\cp /etc/certs/etcd/ca-key.pem /etc/kubernetes/pki/ca.key

参考
pkg.go.dev/k8s.io v1beta3/types.go
github.com/kubernetes v1beta3/types.go

kubelet-config:KubeletConfiguration

写入配置

  • 使用外部 etcd
  • 使用 ipvs
# 写入配置
cat > ~/kubeadm-config-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: fk3wpg.gs0mcv4twx3tz2mc
ttl: 240h0m0s
usages:
- signing
- authentication
description: "描述设置了一个人性化的消息,为什么这个令牌存在以及它的用途"
# NodeRegistration 包含与将新控制平面节点注册到集群相关的字段
nodeRegistration:
name: master-158
criSocket: unix:///var/run/containerd/containerd.sock
ignorePreflightErrors:
- IsPrivilegedUser
# LocalAPIEndpoint表示部署在这个控制平面节点上的API服务器实例的端点。
localAPIEndpoint:
advertiseAddress: 192.168.2.158
bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: k8s-cluster
etcd:
external:
endpoints:
- "https://192.168.2.158:2379"
- "https://192.168.2.159:2379"
- "https://192.168.2.160:2379"
caFile: "/etc/certs/etcd/ca.pem"
certFile: "/etc/certs/etcd/etcd-158.pem"
keyFile: "/etc/certs/etcd/etcd-158-key.pem"
# 网络持有集群的网络拓扑结构的配置
networking:
dnsDomain: cluster.158
serviceSubnet: 10.96.0.0/16
podSubnet: "10.244.0.0/16"
kubernetesVersion: 1.24.1
controlPlaneEndpoint: "192.168.2.158:6443"
# APIServer 包含 API 服务器控制平面组件的额外设置
apiServer:
extraArgs:
bind-address: 0.0.0.0
authorization-mode: "Node,RBAC"
#service-cluster-ip-range: 10.96.0.0/16
#service-node-port-range: 30000-32767
timeoutForControlPlane: 4m0s
certSANs:
- "localhost"
- "cluster.158"
- "127.0.0.1"
- "master-158"
- "master-159"
- "master-160"
- "node-161"
- "10.96.0.1"
- "10.244.0.1"
- "192.168.2.158"
- "192.168.2.159"
- "192.168.2.160"
- "192.168.2.161"
- "host158"
- "host159"
- "host160"
- "host161"
# 包含控制器管理器控制平面组件的额外设置
controllerManager:
extraArgs:
bind-address: 0.0.0.0
#"node-cidr-mask-size": "20"
#cluster-cidr: 10.244.0.0/16
#service-cluster-ip-range: 10.96.0.0/16
#config: /etc/kubernetes/scheduler-config.yaml
#extraVolumes:
# - name: schedulerconfig
# hostPath: /home/johndoe/schedconfig.yaml
# mountPath: /etc/kubernetes/scheduler-config.yaml
# readOnly: true
# pathType: "File"
# 调度程序包含调度程序控制平面组件的额外设置
scheduler:
extraArgs:
bind-address: 0.0.0.0
#config: /etc/kubernetes/kubescheduler-config.yaml
#extraVolumes:
# - hostPath: /etc/kubernetes/kubescheduler-config.yaml
# mountPath: /etc/kubernetes/kubescheduler-config.yaml
# name: kubescheduler-config
# readOnly: true
# DNS 定义集群中安装的 DNS 插件的选项。
dns: {}
certificatesDir: /etc/kubernetes/pki
#imageRepository: k8s.gcr.io
imageRepository: registry.aliyuncs.com/google_containers
#用户启用的 FeatureGates。
#featureGates:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
acceptContentTypes: ""
burst: 0
contentType: ""
kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
qps: 0
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 2s
conntrack:
maxPerCore: null
min: null
tcpCloseWaitTimeout: 60s
tcpEstablishedTimeout: 2s
# 默认 LocalModeClusterCIDR
detectLocalMode: ""
detectLocal:
bridgeInterface: ""
interfaceNamePrefix: ""
enableProfiling: false
healthzBindAddress: "0.0.0.0:10256"
hostnameOverride: "kube-proxy-158"
ipvs:
excludeCIDRs: null
minSyncPeriod: 1m
scheduler: ""
strictARP: true
syncPeriod: 1m
tcpFinTimeout: 0s
tcpTimeout: 0s
udpTimeout: 0s
metricsBindAddress: "127.0.0.1:10249"
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
udpIdleTimeout: 0s
winkernel:
enableDSR: false
forwardHealthCheckVip: false
networkName: ""
rootHnsEndpointName: ""
sourceVip: ""
EOF

创建集群

kubeadm init --config=/root/kubeadm-config-init.yaml --v=5
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

其它操作

# 执行结束后
# `host158`上生成加入节点的脚本:
kubeadm token create --print-join-command
# 查看步骤
kubeadm init --help
# 更新配置文件
kubeadm init phase upload-config all --config=/root/kubeadm-config-init.yaml --v=5
# 更新配置文件 kubeconfig
kubeadm init phase kubeconfig all --config=/root/kubeadm-config-init.yaml --v=5
# 更新 kube-proxy
kubeadm init phase addon kube-proxy --config=/root/kubeadm-config-init.yaml --v=5
# 查询 kubeadm 配置文件
kubectl describe cm -n kube-system kubeadm-config
kubectl get cm -n kube-system kubeadm-config -o yaml
kubectl describe cm -n kube-system kubelet-config
kubectl describe cm -n kube-system kube-proxy
kubectl get cm -n kube-system kube-proxy -o yaml
kubectl describe cm -n kube-system
# 编辑
kubectl edit cm -n kube-system kubeadm-config
kubectl edit cm -n kube-system kubelet-config
kubectl edit cm -n kube-system kube-proxy
# 更新 ConfigMap 内容到本地文件 /var/lib/kubelet/config.conf
kubeadm upgrade node phase kubelet-config --v=10
systemctl restart kubelet
kubeadm upgrade node phase control-plane --v=5
kubeadm upgrade node phase preflight --v=5

使用启动引导令牌

启动引导令牌的 Secret 格式

cat > bootstrap-token-fk3wpg.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
# name 必须是 "bootstrap-token-<token id>" 格式的
name: bootstrap-token-fk3wpg
namespace: kube-system
# type 必须是 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
# 供人阅读的描述,可选。
description: "启用 kubeadm init 时token ."
# 令牌 ID 和秘密信息,必需。stringData 中直接填写明文,无需 base64。
token-id: fk3wpg
token-secret: gs0mcv4twx3tz2mc
# 可选的过期时间字段
expiration: "2022-06-10T03:22:11Z"
# 允许的用法
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
# 令牌要认证为的额外组,必须以 "system:bootstrappers:" 开头
auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress,system:bootstrappers:maxzhao-ca
EOF
kubectl apply -f bootstrap-token-fk3wpg.yaml
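创建后可以校验令牌是否生效(示意命令):

# 通过 kubeadm 查看引导令牌
kubeadm token list
# 直接查看对应的 Secret
kubectl get secret -n kube-system bootstrap-token-fk3wpg -o yaml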

使用 kubeadm 管理令牌

被签名的 ConfigMap 是 kube-public 名字空间中的 cluster-info。典型的工作流中,客户端在未经认证和忽略 TLS 报错的状态下读取这个 ConfigMap,并通过检查 ConfigMap 中嵌入的签名来校验 ConfigMap 的载荷。

查看 kube-public cluster-info

kubectl get configmap -n kube-public cluster-info -o yaml

自定义

cat > kube-public-cluster-info.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: "2022-06-01T08:02:52Z"
name: cluster-info
namespace: kube-public
data:
jws-kubeconfig-fk3wpg: eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9..tYEfbo6zDNo40MQE07aZcQX2m3EB2rO3NuXtxVMYm9U
kubeconfig: |
apiVersion: v1
clusters:
- cluster:
# cat ~/.kube/config
certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3RENDQXFpZ0F3SUJBZ0lVTUh4Y1p0SFJwbVFpelViUElwLzl3Z2JtdjA4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd2VERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VwcFlXNW5JRk4xTVJBd0RnWURWUVFIRXdkTwpZVzVLYVc1bk1SVXdFd1lEVlFRS0V3eGxkR05rTFcxaGVIcG9ZVzh4RmpBVUJnTlZCQXNURFdWMFkyUWdVMlZqCmRYSnBkSGt4RlRBVEJnTlZCQU1UREdWMFkyUXRjbTl2ZEMxallUQWVGdzB5TWpBMU1qa3hOREl6TURCYUZ3MHkKTnpBMU1qZ3hOREl6TURCYU1IZ3hDekFKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoS2FXRnVaeUJUZFRFUQpNQTRHQTFVRUJ4TUhUbUZ1U21sdVp6RVZNQk1HQTFVRUNoTU1aWFJqWkMxdFlYaDZhR0Z2TVJZd0ZBWURWUVFMCkV3MWxkR05rSUZObFkzVnlhWFI1TVJVd0V3WURWUVFERXd4bGRHTmtMWEp2YjNRdFkyRXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzR2UVcxN0wxNkw3RWtYcFozTXJpSVhKek8ySTRyV2pMSQpKeDU1SFQ2czJFQjRQbE5DS3c4aFJtOWRjQVl2TVB6OEY1WVc4R3AxcWNLNk8vRW94T1grNkVXVzlYazh1RXQxCkxyZUUyUVo2b1JVenZTZFNXdllkRnhPUXM0VFBJcUNOcGFUNHgyZkFEU21sNDdaNWlkYnQrQ3M2ZEo0Z1Ftb2QKSkIxOWMvbDY3VnlwNmxPMWMwVWtNdFNod0xSZ2NFdGpKY1pmd0xPQXFibkRtYzg5Uks2cTh6b1AxMDhuZkwxbwpXZHBoREpsSVRHTmd6UTBGZU10aHZGUHJlU25EOEhWcHlqdjZtZ0pSZWlIWExhSWtEZ0tOQmh4UHRSdzdZb0pQCjNQSStuWkpBTVFOVW1nTWtoeC9mVGVzdW11VUs4RjI3R0N1b0VrNTZYRHJ0R042aEdGenhBZ01CQUFHalFqQkEKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlJISkVWWgp0WEl5RXpJaThndy80SFh0cTg3RllEQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHYzZoR0Vqd010TDVUeDVVCmxBTUZSTlprRHJlMGRrL2t4Rmlra01EOWIvdUxkYTB5TXRSeEhLUjZYUTNDV3pyNDVINGh1SHRjZXlrL3BzZTIKbzdvSXZySnFzbThJRHBJTVU4Wk9qSGpBdVE0TytRenBtOEpHSXQrNy9JelpkMG1iSW5KdWJZZ2dtZG5GUG5UdQowbFpyTG5JL04vZHA5eVNJUGVnY252QnREaHdUa0tkbkRLZEtLdFdaalRMa1crNmJGUDZSaTZBWmpuczc1cmpvClhEaUtyTGZRS2Y1RlJFbHRHL3N1YnhkNlFDWFNpWDVqUTZlSk9xR1d1UkpRRHovWlZoWGhvdWFWaTVWOGpuRWcKNkVJZWxtR1A5QWRJaE4raFVMYTJZclRLMDJ4OTk3TlBQRnpDMm55NWV3VlFYdzZlNTE1QWU2S3BLUnZBcEE2RQplQWdwTVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
server: https://192.168.2.158:6443
name: ""
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
EOF
kubectl apply -f kube-public-cluster-info.yaml

host159加入集群

证书

# 在 158 上执行
ssh root@192.168.2.159 "mkdir -p /etc/kubernetes/pki"
scp -r /etc/kubernetes/pki/* root@192.168.2.159:/etc/kubernetes/pki

加入

cat > /root/kubeadm-config-join.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
# NodeRegistration 包含与将新控制平面节点注册到集群相关的字段
nodeRegistration:
name: master-159
criSocket: unix:///var/run/containerd/containerd.sock
ignorePreflightErrors:
- IsPrivilegedUser
imagePullPolicy: IfNotPresent
caCertPath: "/etc/kubernetes/pki/ca.crt"
# Discovery 指定 kubelet 在 TLS 引导过程中使用的选项
discovery:
# BootstrapToken用于设置基于引导令牌的发现选项。
# BootstrapToken和File是互斥的
bootstrapToken:
token: fk3wpg.gs0mcv4twx3tz2mc
# APIServerEndpoint 是从其获取信息的 API 服务器的 IP 或域名
apiServerEndpoint: 192.168.2.158:6443
caCertHashes:
- sha256:011acbb00e4983761f3cbe774f45477b75a99a42d14004f72c043ffbb6e5b025
# UnsafeSkipCAVerification允许基于令牌的发现,无需通过CACertHashes进行CA验证。
# 这可能会削弱kubeadm的安全性,因为其他节点可以模拟控制平面。
unsafeSkipCAVerification: false
tlsBootstrapToken: fk3wpg.gs0mcv4twx3tz2mc
# 超时修改发现超时
timeout: 4m0s
# 如果为零,则不会部署额外的控制平面实例。
controlPlane:
# LocalAPIEndpoint 表示要在此节点上部署的 API 服务器实例的端点。
localAPIEndpoint:
# AdvertiseAddress 设置 API 服务器发布的 IP 地址。
advertiseAddress: 192.168.2.159
# BindPort 设置 API Server 绑定的安全端口。
bindPort: 6443
certificateKey: 011acbb00e4983761f3cbe774f45477b75a99a42d14004f72c043ffbb6e5b025
#skipPhases:
# - addon/kube-proxy
EOF
kubeadm join --config=/root/kubeadm-config-join.yaml --v=10

配置和校验

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

host160加入集群

证书

# 在 158 上执行
ssh root@192.168.2.160 "mkdir -p /etc/kubernetes/pki"
scp -r /etc/kubernetes/pki/* root@192.168.2.160:/etc/kubernetes/pki

加入

cat > /root/kubeadm-config-join.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
# NodeRegistration 包含与将新控制平面节点注册到集群相关的字段
nodeRegistration:
# 如果未提供,则默认为节点的主机名。
name: master-160
# CRISocket 用于检索容器运行时信息。此信息将被注释到 Node API 对象,以供以后重用
criSocket: unix:///var/run/containerd/containerd.sock
taints: null
# kubeletExtraArgs:
# v: 4
# IgnorePreflightErrors 提供了在当前节点注册时要忽略的 pre-flight 错误片段。
ignorePreflightErrors:
- IsPrivilegedUser
imagePullPolicy: IfNotPresent
#caCertPath: "/etc/kubernetes/pki/ca.crt"
# Discovery 指定 kubelet 在 TLS 引导过程中使用的选项
discovery:
# BootstrapToken用于设置基于引导令牌的发现选项。
# BootstrapToken和File是互斥的
bootstrapToken:
token: fk3wpg.gs0mcv4twx3tz2mc
# APIServerEndpoint 是从其获取信息的 API 服务器的 IP 或域名
apiServerEndpoint: 192.168.2.158:6443
#caCertHashes:
#- sha256:011acbb00e4983761f3cbe774f45477b75a99a42d14004f72c043ffbb6e5b025
# UnsafeSkipCAVerification允许基于令牌的发现,无需通过CACertHashes进行CA验证。
# 这可能会削弱kubeadm的安全性,因为其他节点可以模拟控制平面。
unsafeSkipCAVerification: true
#file:
#kubeConfigPath: /etc/kubernetes/admin.conf
tlsBootstrapToken: fk3wpg.gs0mcv4twx3tz2mc
# 超时修改发现超时
timeout: 4m0s
# 如果为零,则不会部署额外的控制平面实例。
controlPlane:
# LocalAPIEndpoint 表示要在此节点上部署的 API 服务器实例的端点。
localAPIEndpoint:
# AdvertiseAddress 设置 API 服务器发布的 IP 地址。
advertiseAddress: 192.168.2.160
# BindPort 设置 API Server 绑定的安全端口。
bindPort: 6443
certificateKey: 011acbb00e4983761f3cbe774f45477b75a99a42d14004f72c043ffbb6e5b025
#skipPhases:
# - addon/kube-proxy
EOF
kubeadm join --config=/root/kubeadm-config-join.yaml --v=10

配置和校验

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

host161加入集群

kubeadm join 192.168.2.158:6443 --node-name node-161 --token fk3wpg.gs0mcv4twx3tz2mc \
--discovery-token-ca-cert-hash sha256:011acbb00e4983761f3cbe774f45477b75a99a42d14004f72c043ffbb6e5b025
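加入完成后,可以在 master 上确认节点状态;如有需要,也可以给工作节点打上角色标签(标签键值为示意):

kubectl get nodes -o wide
# 可选:为工作节点设置 role 标签,便于在 kubectl get nodes 中展示
kubectl label node node-161 node-role.kubernetes.io/worker=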

导出kubeadm 配置文件

kubectl describe cm -n kube-system kubeadm-config > ./kubeadm-config.yaml
kubectl describe cm -n kube-system kubelet-config > ./kubelet-config.yaml
#更新配置文件
kubeadm init phase kubeconfig all --config=./kubeadm-config.yaml --v=5

kubelet 配置

  • 用于 TLS 引导程序的 KubeConfig 文件为 /etc/kubernetes/bootstrap-kubelet.conf, 但仅当 /etc/kubernetes/kubelet.conf 不存在时才能使用。
  • 具有唯一 kubelet 标识的 KubeConfig 文件为 /etc/kubernetes/kubelet.conf
  • 包含 kubelet 的组件配置的文件为 /var/lib/kubelet/config.yaml
  • 包含动态环境变量 KUBELET_KUBEADM_ARGS 的文件来源于 /var/lib/kubelet/kubeadm-flags.env
  • 包含用户指定标志替代项 KUBELET_EXTRA_ARGS 的文件来源于 /etc/default/kubelet(对于 DEB)或 /etc/sysconfig/kubelet(对于 RPM)。KUBELET_EXTRA_ARGS 在标志链中排在最后,并且在设置冲突时具有最高优先级(快速核对这些文件的命令见下)
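下面是快速核对上述文件的示意命令:

cat /etc/kubernetes/kubelet.conf | head
cat /var/lib/kubelet/config.yaml | head
cat /var/lib/kubelet/kubeadm-flags.env
# DEB 系为 /etc/default/kubelet,RPM 系为 /etc/sysconfig/kubelet
cat /etc/default/kubelet 2>/dev/null || cat /etc/sysconfig/kubelet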

KubeProxy 配置

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
acceptContentTypes: ""
burst: 0
contentType: ""
kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
qps: 0
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 2s
conntrack:
maxPerCore: null
min: null
tcpCloseWaitTimeout: 60s
tcpEstablishedTimeout: 2s
# 默认 LocalModeClusterCIDR
detectLocalMode: ""
detectLocal:
bridgeInterface: ""
interfaceNamePrefix: ""
enableProfiling: false
healthzBindAddress: "0.0.0.0:10256"
hostnameOverride: "kube-proxy-158"
ipvs:
excludeCIDRs: null
minSyncPeriod: 1m
scheduler: ""
strictARP: true
syncPeriod: 1m
tcpFinTimeout: 0s
tcpTimeout: 0s
udpTimeout: 0s
metricsBindAddress: "127.0.0.1:10249"
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
udpIdleTimeout: 0s
winkernel:
enableDSR: false
forwardHealthCheckVip: false
networkName: ""
rootHnsEndpointName: ""
sourceVip: ""
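mode 设置为 ipvs 后,可以确认 kube-proxy 实际生效的模式以及 IPVS 规则(示意命令,ipvsadm 需单独安装):

# 确认 ConfigMap 中的 mode
kubectl get cm -n kube-system kube-proxy -o yaml | grep "mode:"
# 查看 IPVS 转发规则
ipvsadm -Ln | head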

官方示例

这是一个完整的示例:一个单独的 YAML 文件中包含了要在 kubeadm init 运行期间使用的多个配置类型。

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- token: "9a08jv.c0izixklcxtmnze7"
description: "kubeadm bootstrap token"
ttl: "24h"
- token: "783bde.3f89s0fje9f38fhf"
description: "another bootstrap token"
usages:
- authentication
- signing
groups:
- system:bootstrappers:kubeadm:default-node-token
nodeRegistration:
name: "ec2-10-100-0-1"
criSocket: "unix:///var/run/containerd/containerd.sock"
taints:
- key: "kubeadmNode"
value: "someValue"
effect: "NoSchedule"
kubeletExtraArgs:
v: 4
ignorePreflightErrors:
- IsPrivilegedUser
imagePullPolicy: "IfNotPresent"
localAPIEndpoint:
advertiseAddress: "10.100.0.1"
bindPort: 6443
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"
skipPhases:
- addon/kube-proxy
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
# one of local or external
local:
imageRepository: "k8s.gcr.io"
imageTag: "3.2.24"
dataDir: "/var/lib/etcd"
extraArgs:
listen-client-urls: "http://10.100.0.1:2379"
serverCertSANs:
- "ec2-10-100-0-1.compute-1.amazonaws.com"
peerCertSANs:
- "10.100.0.1"
# external:
# endpoints:
# - "10.100.0.1:2379"
# - "10.100.0.2:2379"
# caFile: "/etc/kubernetes/pki/ca.crt"
# certFile: "/etc/kubernetes/pki/etcd/etcd.crt"
# keyFile: "/etc/kubernetes/pki/etcd/etcd.key"
networking:
serviceSubnet: "10.96.0.0/16"
podSubnet: "10.244.0.0/16"
dnsDomain: "cluster.local"
kubernetesVersion: "v1.21.0"
controlPlaneEndpoint: "10.100.0.1:6443"
apiServer:
extraArgs:
authorization-mode: "Node,RBAC"
extraVolumes:
- name: "some-volume"
hostPath: "/etc/some-path"
mountPath: "/etc/some-pod-path"
readOnly: false
pathType: FileOrCreate
certSANs:
- "10.100.1.1"
- "ec2-10-100-0-1.compute-1.amazonaws.com"
timeoutForControlPlane: 4m0s
controllerManager:
extraArgs:
"node-cidr-mask-size": "20"
extraVolumes:
- name: "some-volume"
hostPath: "/etc/some-path"
mountPath: "/etc/some-pod-path"
readOnly: false
pathType: FileOrCreate
scheduler:
extraArgs:
address: "10.100.0.1"
extraVolumes:
- name: "some-volume"
hostPath: "/etc/some-path"
mountPath: "/etc/some-pod-path"
readOnly: false
pathType: FileOrCreate
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "k8s.gcr.io"
clusterName: "example-cluster"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet specific options here
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy specific options here
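把上面的多文档 YAML 保存为 kubeadm-config.yaml(文件名为示例)后,大致这样使用:

# 打印默认配置,便于与上面的字段对照
kubeadm config print init-defaults
kubeadm config print join-defaults
# 使用该配置初始化控制平面
kubeadm init --config kubeadm-config.yaml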

配置文件

kubelet 配置

/var/lib/kubelet/config.yaml

master-158

cat > /var/lib/kubelet/config.yaml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet specific options here
# 启用 kubelet 的安全服务器。 注意:kubelet 的不安全端口由 readOnlyPort 选项控制。
enableServer: true
#发送给 kubelet 服务器的请求是如何进行身份认证的
authentication:
#包含与匿名身份认证相关的配置信息。
anonymous:
# enabled允许匿名用户向 kubelet 服务器发送请求。 未被其他身份认证方法拒绝的请求都会被当做匿名请求。
# 匿名请求对应的用户名为system:anonymous,对应的用户组名为 system:unauthenticated。
enabled: false
# 包含与 Webhook 持有者令牌认证相关的配置。
webhook:
# cacheTTL启用对身份认证结果的缓存。
cacheTTL: 2m
#enabled允许使用tokenreviews.authentication.k8s.io API 来提供持有者令牌身份认证。
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
# 发送给 kubelet 服务器的请求是如何进行鉴权的
authorization:
#鉴权模式:AlwaysAllow和Webhook
mode: Webhook
# WebHook 配置
webhook:
#设置来自 Webhook 鉴权组件的 'authorized' 响应的缓存时长。
cacheAuthorizedTTL: 5m
#设置来自 Webhook 鉴权组件的 'unauthorized' 响应的缓存时长。
cacheUnauthorizedTTL: 10s
# kubelet 用来操控宿主系统上控制组 (CGroup) 的驱动程序(cgroupfs 或 systemd)。
cgroupDriver: systemd
# 集群的 DNS IP 地址
#clusterDNS是集群 DNS 服务器的 IP 地址的列表。
# 如果设置了,kubelet 将会配置所有容器使用这里的 IP 地址而不是宿主系统上的 DNS 服务器来完成 DNS 解析。
clusterDNS:
- 10.96.0.10
# clusterDomain是集群的 DNS 域名。如果设置了此字段,kubelet 会配置所有容器,使之在搜索主机的搜索域的同时也搜索这里指定的 DNS 域。
clusterDomain: cluster.158
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
#healthz 服务器用来提供服务的 IP 地址。
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
# 日志
logging:
# 设置日志消息的结构。默认的格式取值为 text。
format: "text"
# 对日志进行清洗的最大间隔纳秒数 (例如,1s = 1000000000)
flushFrequency: 1000
# verbosity 用来确定日志消息记录的详细程度阈值。默认值为 0, 意味着仅记录最重要的消息。数值越大,额外的消息越多。出错消息总是会被记录下来。
# vmodule 会在单个文件层面重载 verbosity 阈值的设置。 这一选项仅支持 "text" 日志格式。
# options 中包含特定于不同日志格式的配置参数。 只有针对所选格式的选项会被使用,但是合法性检查时会查看所有选项配置。
options:
json:
# splitStream 将错误信息重定向到标准错误输出(stderr), 而将提示信息重定向到标准输出(stdout),并为二者提供缓存。 默认设置是将二者都写出到标准输出,并且不提供缓存。
# infoBufferSize 在分离数据流时用来设置提示数据流的大小。 默认值为 0,相当于禁止缓存。
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
# 指向要运行的本地(静态)Pod 的目录, 或者指向某个静态 Pod 文件的路径。
staticPodPath: /etc/kubernetes/manifests
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF
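kubelet 不会自动重新加载这个文件,修改 /var/lib/kubelet/config.yaml 后需要重启并确认状态(示意):

systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet --no-pager
# 跟踪日志排查配置错误
journalctl -u kubelet -f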

/etc/kubernetes/kubelet.conf

master-158
cat > /etc/kubernetes/kubelet.conf << EOF
apiVersion: v1
clusters:
- cluster:
# certificate-authority-data: cat /etc/kubernetes/pki/ca.crt | base64 -w 0
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXlNakl4TkRRd04xb1hEVE15TURVeE9USXhORFF3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUFMCjRkN2xWekIwZnlhazYrcFdaaGJiNzBqOWFqa3NveXkrTnIvaWYzV2RxVHRXNUNHZVcyRzEwRldaenNLd3RUWnoKYjRTSGRBV21LL1dTNUJocWVEeFVENVNPcTVXNXpqUHZFNWpGanlONThlN0RVM1gzS0NJSGcxbXUyVnRQSzZWWQpob2RISHJIaXBEM3lOalBQTk1ERzRGU1V3NUFMWmRsRTFLV3VMdWVlWHhnbEd5WlRodHFBUkU1N1Q4MUFNMm5zCkN1cmI5N3AwKyt1L3pnZHdDRG1iWmtWYUFrTUM3MjB3cTdsUWs2UnpLbHVHMHJreVVLb0lQTk9YTkhxby9UdXMKNUh1ditFby9SbGo4MUlyMkh2OUE1dCt4YUo0SDB6NS9GajQxNnAwQ0xENjBrWi9YQlJoNE5MTC9EYkNQQll6cgorUzhwbEQ2Y0VRMi8xellSeWI4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZEL2EzQ0JYT2h1Y2hrYzNkY3JhR291TDNLYzNNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSStQdnZPK2VnMk5idUIrUUN2aQo3RktGUjJzOUQzSGxRNWtsRVZTbWJvSGRsN2Z5dkZhTUNPLzVQSk56cWUwRDNOL3F0MlJsWk43UEw0Q2VNaVFrCnM5Nk8zS2U4WVkrRmRUeXFJTW9MN2FKODNDTjE2NFR2S29Ld3JmL1pVaHZkQW1xS2NhM3I5blNGcjZreXVPZVAKV2cxMGlxcDMzS3ZqQjlJVFJBckxscDdRWm5JdmhMOVRVUk9YOW84amJvTVpjbU1QbG1lOEppRDcySkYzS0diUQpOMEJoTDYrVHo4RkJWSkNJYW9BRVQrQXlBdVFKdTl6cjMvd2FsZk9PcEdTSlhENmpjWGVsMldRVVFGQTJjUEpICndLSEtKNkRSL3pwKzNlTVRDWnhLMmloV2oyT0ZNbkV2UE9TcDZ5M0VtNmxmd0lXMktKSjVjakhnWTZOUTN1UWEKUUFBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.2.158:6443
name: cluster.158
contexts:
- context:
cluster: cluster.158
user: system:node:master-158
name: system:node:master-158@cluster.158
current-context: system:node:master-158@cluster.158
kind: Config
preferences: {}
users:
- name: system:node:master-158
user:
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
EOF

/var/lib/kubelet/pki/kubelet-client-current.pem

cat > /var/lib/kubelet/pki/kubelet-client-current.pem << EOF
# 这里面是公钥私钥
# Issuer: CN=kubernetes
# Subject: CN=kubernetes
EOF

错误处理

join 时的 JWS 问题

The cluster-info ConfigMap does not yet contain a JWS signature for token ID "j2lxkq", will try again

这里 kubeadm token list 可以看到 token 都很正常。

cluster info中的 JWS 需要在kube-controller-manager运行后创建。
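cluster-info ConfigMap 位于 kube-public 命名空间,可以用下面的命令(示意)确认 JWS 签名是否已经写入,正常情况下 data 中会出现 jws-kubeconfig-<token-id> 这样的键:

kubectl -n kube-public get configmap cluster-info -o yaml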

kubectl get pods -A

image-20220602104516917

# 查看
kubectl describe -n kube-system kube-controller-manager-master-158
kubectl logs -n kube-system kube-controller-manager-master-158
kubectl logs -n kube-system kube-controller-manager-master-158 --v=10

image-20220602104739590

节点NotReady

kubectl describe nodes master-160

image-20220603102357056

cni plugin not initialized

sudo systemctl restart containerd

卸载集群

# 删除对集群的本地引用,集群名称 k8s-cluster
kubectl config delete-cluster k8s-cluster
# 重置 `kubeadm` 安装的状态
echo "y" | kubeadm reset
# 删除节点信息
rm -rf /etc/kubernetes/
# 删除本地配置
rm -rf $HOME/.kube/config
# 删除网路
rm -rf /var/lib/kubelet/
# 重启kubelet
systemctl restart kubelet

生成 token

一键生成

kubeadm token create --print-join-command --ttl=240h 

生成 certificateKey

在安装 etcd 集群一节中讲到了 etcd-ca.crt。

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
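除了用 openssl 手工计算,kubeadm 本身也可以生成这两个值,下面是一个示意(假设 kubeadm 版本为 1.20 及以上,提供这些子命令):

# 生成可用于 certificateKey / --certificate-key 的随机密钥
kubeadm certs certificate-key
# 重新上传控制平面证书,同时打印 certificate key
kubeadm init phase upload-certs --upload-certs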

完整示例

host158 master

mkdir -p  /etc/kubernetes/pki/etcd/
#\cp /etc/certs/etcd/ca.pem /etc/kubernetes/pki/ca.crt
\cp /etc/certs/etcd/etcd-158.pem /etc/kubernetes/pki/etcd/etcd.crt
\cp /etc/certs/etcd/etcd-158-key.pem /etc/kubernetes/pki/etcd/etcd.key
# 生成 certificateKey
#默认证书 /etc/kubernetes/pki/ca.crt
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
vim ~/kubeadm-config-init.yaml

参考
pkg.go.dev/k8s.io v1beta3/types.go
github.com/kubernetes v1beta3/types.go

kubelet-config:KubeletConfiguration

写入

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
# Groups指定当使用/进行身份验证时,此令牌将作为身份验证的额外组
- groups:
- system:bootstrappers:kubeadm:default-node-token
# token id.secret
token: fk3wpg.gs0mcv4twx3tz2mc
# 过期时间
ttl: 240h0m0s
# 用法描述了这个标记可以被使用的方式。默认情况下可用于建立双向信任,但这里可以更改。
usages:
- signing
- authentication
description: "描述设置了一个人性化的消息,为什么这个令牌存在以及它的用途"
# NodeRegistration 包含与将新控制平面节点注册到集群相关的字段
nodeRegistration:
# Name 是将在此 `kubeadm init` 或 `kubeadm join` 操作中创建的 Node API 对象的 `.Metadata.Name` 字段。
# 该字段也用于 kubelet 的客户端证书到 API 服务器的 CommonName 字段。
# 如果未提供,则默认为节点的主机名。
name: master-158
# CRISocket 用于检索容器运行时信息。此信息将被注释到 Node API 对象,以供以后重用
criSocket: unix:///var/run/containerd/containerd.sock
# Taints指定节点API对象应该注册的 taints 。如果该字段未设置,即为nil,它将默认带有控制平面 taints 的控制平面节点。
# 如果你不想 taints 你的控制平面节点,将这个字段设置为一个空片,即。' taints:[] '在YAML文件中。此字段仅用于节点注册。
# 挺复杂 https://pkg.go.dev/k8s.io/api/core/v1#Taint
taints: []
# taints:
# - key: "kubeadmNode"
# value: "someValue"
# effect: "NoSchedule"
# KubeletExtraArgs传递额外的参数给kubelet。
# 这里的参数通过kubeadm在运行时为kubelet写入源代码的环境文件传递到kubelet命令行。
# 这覆盖了kubelet-config中通用的基本级配置,ConfigMap Flags在解析时具有更高的优先级。
# 这些值是本地的,并且特定于kubeadm正在执行的节点。这个映射中的键是出现在命令行上的标志名,除非没有前导破折号。
kubeletExtraArgs:
v: 4
# IgnorePreflightErrors 提供了在当前节点注册时要忽略的 pre-flight 错误片段。
ignorePreflightErrors:
- IsPrivilegedUser
# ImagePullPolicy 指定在 kubeadm "init" 和 "join" 操作期间拉取镜像的策略。
# The value of this field must be one of "Always", "IfNotPresent" or "Never".
# 如果未设置此字段,kubeadm 将默认为“IfNotPresent”,或者如果主机上不存在所需的图像,则拉取所需的图像。
imagePullPolicy: IfNotPresent
# LocalAPIEndpoint表示部署在这个控制平面节点上的API服务器实例的端点。
# ControlPlaneEndpoint的意义是,ControlPlaneEndpoint是集群的全局端点,然后将请求负载均衡到每个单独的API服务器。
# 这个配置对象允许您自定义本地API服务器发布的可访问的IP/DNS名称和端口。
# 默认情况下,kubeadm尝试自动检测默认接口的IP并使用它,但如果这个过程失败,您可以在这里设置所需的值
localAPIEndpoint:
# AdvertiseAddress 设置 API 服务器发布的 IP 地址。
advertiseAddress: 192.168.2.158
# BindPort 设置 API Server 绑定的安全端口。
bindPort: 6443
# 设置一个秘钥,该秘钥将对 uploadcerts init 阶段上传到集群中某 Secret 内的秘钥和证书加密。
certificateKey: "011acbb00e4983761f3cbe774f45477b75a99a42d14004f72c043ffbb6e5b025"
# SkipPhases是执行命令时要跳过的阶段列表。
# 阶段列表可以通过“kubeadm init——help”命令获取。
# 标记"——skip-phases"优先于此字段。
skipPhases: []
# - addon/kube-proxy
# Patches 包含与将补丁应用于 kubeadm 部署的组件相关的选项
# https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3#Patches
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: k8s-cluster
etcd:
# one of local or external
# local:
# imageRepository: "k8s.gcr.io"
# imageTag: "3.2.24"
# dataDir: "/var/lib/etcd"
# extraArgs:
# listen-client-urls: "http://10.100.0.1:2379"
# serverCertSANs:
# - "ec2-10-100-0-1.compute-1.amazonaws.com"
# peerCertSANs:
# - "10.100.0.1"
# 外部
external:
endpoints:
- "https://192.168.2.158:2379"
- "https://192.168.2.159:2379"
- "https://192.168.2.160:2379"
caFile: "/etc/kubernetes/pki/etcd/etcd-ca.crt"
certFile: "/etc/kubernetes/pki/etcd/etcd.crt"
keyFile: "/etc/kubernetes/pki/etcd/etcd.key"
# 网络持有集群的网络拓扑结构的配置
networking:
# DNSDomain 是 k8s 服务使用的 dns 域。默认为“cluster.local”。
dnsDomain: cluster.158
# ServiceSubnet 是 k8s 服务使用的子网。默认为“10.96.0.0/12”。
serviceSubnet: 10.96.0.0/16
# Pod 使用的子网
podSubnet: "10.244.0.0/16"
kubernetesVersion: 1.24.1
#设置一个稳定的 IP 地址或 DNS 名称
# 如果 controlPlaneEndpoint 未设置,则使用 advertiseAddress + bindPort。
controlPlaneEndpoint: "192.168.2.158:6443"
# APIServer 包含 API 服务器控制平面组件的额外设置
apiServer:
# ExtraArgs是传递给控制面组件的额外标志集。
# 这个映射中的键是出现在命令行上的标志名,除非没有前导破折号。
# 待办事项:这是暂时的,理想情况下,我们想要切换所有组件,使用ComponentConfig + ConfigMaps。
extraArgs:
# 在安全端口上进行鉴权的插件的顺序列表。 逗号分隔的列表:AlwaysAllow、AlwaysDeny、ABAC、Webhook、RBAC、Node。
# 默认 AlwaysAllow
authorization-mode: "Node,RBAC"
# ExtraVolumes is an extra set of host volumes, mounted to the control plane component.
extraVolumes:
- name: "some-volume"
# HostPath 是主机中将被挂载的路径
hostPath: "/etc/some-path"
# MountPath 是 pod 内将挂载 hostPath 的路径。
mountPath: "/etc/some-pod-path"
readOnly: false
# https://pkg.go.dev/k8s.io/api/core/v1#HostPathType
pathType: FileOrCreate
# certSANs:
# - "10.100.1.1"
# - "ec2-192.168.2.158.compute-1.maxzhao.com"
timeoutForControlPlane: 4m0s
# 包含控制器管理器控制平面组件的额外设置
controllerManager:
extraArgs:
"node-cidr-mask-size": "20"
extraVolumes:
- name: "some-volume"
hostPath: "/etc/some-path"
mountPath: "/etc/some-pod-path"
readOnly: false
pathType: FileOrCreate
# 调度程序包含调度程序控制平面组件的额外设置
scheduler:
extraArgs:
address: "192.168.2.158"
extraVolumes:
- name: "kube-config"
hostPath: "/etc/kubernetes/scheduler.conf"
mountPath: "/etc/some-pod-path"
readOnly: false
pathType: FileOrCreate
- name: "kube-config"
hostPath: "/etc/kubernetes/kubescheduler-config.yaml"
mountPath: "/etc/some-pod-path"
readOnly: false
pathType: FileOrCreate
# DNS 定义集群中安装的 DNS 插件的选项。
dns: {}
# CertificatesDir 指定存储或查找所有必需证书的位置。
certificatesDir: /etc/kubernetes/pki
#imageRepository: k8s.gcr.io
imageRepository: registry.aliyuncs.com/google_containers
#用户启用的 FeatureGates。
#featureGates:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet specific options here
# 启用 kubelet 的安全服务器。 注意:kubelet 的不安全端口由 readOnlyPort 选项控制。
enableServer: true
#发送给 kubelet 服务器的请求是如何进行身份认证的
authentication:
#包含与匿名身份认证相关的配置信息。
anonymous:
# enabled允许匿名用户向 kubelet 服务器发送请求。 未被其他身份认证方法拒绝的请求都会被当做匿名请求。 匿名请求对应的用户名为system:anonymous,对应的用户组名为 system:unauthenticated。
enabled: false
# 包含与 Webhook 持有者令牌认证相关的配置。
webhook:
# cacheTTL启用对身份认证结果的缓存。
cacheTTL: 2m
#enabled允许使用tokenreviews.authentication.k8s.io API 来提供持有者令牌身份认证。
enabled: true
x509:
# 是一个指向 PEM 编发的证书包的路径。
# 如果设置了此字段,则能够提供由此证书包中机构之一所签名的客户端证书的请求会被成功认证,
# 并且其用户名对应于客户端证书的CommonName、组名对应于客户端证书的 Organization。
# clientCAFile: /etc/kubernetes/pki/ca.crt
clientCAFile: /etc/kubernetes/pki/etcd/etcd-ca.crt
# 发送给 kubelet 服务器的请求是如何进行鉴权的
authorization:
#鉴权模式:AlwaysAllow和Webhook
mode: Webhook
# WebHook 配置
webhook:
#设置来自 Webhook 鉴权组件的 'authorized' 响应的缓存时长。
cacheAuthorizedTTL: 5m
#设置来自 Webhook 鉴权组件的 'unauthorized' 响应的缓存时长。
cacheUnauthorizedTTL: 30s
# kubelet 用来操控宿主系统上控制组 (CGroup) 的驱动程序(cgroupfs 或 systemd)。
cgroupDriver: systemd
# 集群的 DNS IP 地址
#clusterDNS是集群 DNS 服务器的 IP 地址的列表。
# 如果设置了,kubelet 将会配置所有容器使用这里的 IP 地址而不是宿主系统上的 DNS 服务器来完成 DNS 解析。
clusterDNS:
- 10.96.0.10
# clusterDomain是集群的 DNS 域名。如果设置了此字段,kubelet 会配置所有容器,使之在搜索主机的搜索域的同时也搜索这里指定的 DNS 域。
clusterDomain: cluster.local
#healthz 服务器用来提供服务的 IP 地址。
healthzBindAddress: 192.168.2.158
healthzPort: 10248
# 日志
logging:
# 设置日志消息的结构。默认的格式取值为 text。
format: "text"
# 对日志进行清洗的最大间隔纳秒数 (例如,1s = 1000000000)
flushFrequency: 1000
# verbosity 用来确定日志消息记录的详细程度阈值。默认值为 0, 意味着仅记录最重要的消息。数值越大,额外的消息越多。出错消息总是会被记录下来。
# vmodule 会在单个文件层面重载 verbosity 阈值的设置。 这一选项仅支持 "text" 日志格式。
# options 中包含特定于不同日志格式的配置参数。 只有针对所选格式的选项会被使用,但是合法性检查时会查看所有选项配置。
options:
json:
# splitStream 将错误信息重定向到标准错误输出(stderr), 而将提示信息重定向到标准输出(stdout),并为二者提供缓存。 默认设置是将二者都写出到标准输出,并且不提供缓存。
# infoBufferSize 在分离数据流时用来设置提示数据流的大小。 默认值为 0,相当于禁止缓存。
infoBufferSize: "0"
verbosity: 0
# 配置容器负载可用的交换内存。
memorySwap: {}
# 指向要运行的本地(静态)Pod 的目录, 或者指向某个静态 Pod 文件的路径。
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy specific options here
# 代理服务器提供服务时所用 IP 地址(设置为 0.0.0.0 时意味着在所有网络接口上提供服务)。
bindAddress: 0.0.0.0
#设置为 true 时, kube-proxy 将无法绑定到某端口这类问题视为致命错误并直接退出。
bindAddressHardFail: false
clientConnection:
#定义客户端在连接到服务器时所发送的 Accept 头部字段。 此设置值会覆盖默认配置 'application/json'。 此字段会控制某特定客户端与指定服务器的所有链接。
acceptContentTypes: ""
#允许客户端超出其速率限制时可以临时累积的额外查询个数。
burst: 0
#从此客户端向服务器发送数据时使用的内容类型(Content Type)
contentType: ""
kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
#控制此连接上每秒钟可以发送的查询请求个数。
qps: 0
#集群中 Pods 所使用的 CIDR 范围。 这一地址范围用于对来自集群外的请求流量进行桥接。 如果未设置,则 kube-proxy 不会对非集群内部的流量做桥接。
clusterCIDR: ""
# 从 API 服务器刷新配置的频率。此值必须大于 0。
configSyncPeriod: 2s
#包含与 conntrack 相关的配置选项。
conntrack:
#每个 CPU 核所跟踪的 NAT 链接个数上限 (0 意味着保留当前上限限制并忽略 min 字段设置值)。
maxPerCore: null
# 要分配的链接跟踪记录个数下限。 设置此值时会忽略 maxPerCore 的值(将 maxPerCore 设置为 0 时不会调整上限值)。
min: null
#用来设置空闲的、处于 CLOSE_WAIT 状态的 conntrack 条目 保留在 conntrack 表中的时间长度(例如,'60s')。 此设置值必须大于 0。
tcpCloseWaitTimeout: 60s
#空闲 TCP 连接的保留时间(例如,'2s')。 此值必须大于 0。
tcpEstablishedTimeout: 2s
#确定检测本地流量的方式,默认为 LocalModeClusterCIDR
detectLocalMode: ""
#确定检测本地流量的方式,默认为 LocalModeClusterCIDR
detectLocal:
#一个表示单个桥接接口名称的字符串参数。 Kube-proxy 将来自这个给定桥接接口的流量视为本地流量。 如果 DetectLocalMode 设置为 LocalModeBridgeInterface,则应设置该参数。
bridgeInterface: ""
#一个表示单个接口前缀名称的字符串参数。 Kube-proxy 将来自一个或多个与给定前缀匹配的接口流量视为本地流量。 如果 DetectLocalMode 设置为 LocalModeInterfaceNamePrefix,则应设置该参数。
interfaceNamePrefix: ""
#通过 '/debug/pprof' 处理程序在 Web 界面上启用性能分析。 性能分析处理程序将由度量值服务器执行
enableProfiling: false
#健康状态检查服务器提供服务时所使用的的 IP 地址和端口, 默认设置为 '0.0.0.0:10256'。
healthzBindAddress: "0.0.0.0:10256"
#非空时, 所给的字符串(而不是实际的主机名)将被用作 kube-proxy 的标识。
hostnameOverride: "kube-proxy-158"
iptables:
#通知 kube-proxy 在使用纯 iptables 代理模式时对所有流量执行 SNAT 操作。
masqueradeAll: false
# iptables fwmark 空间中的具体一位, 用来在纯 iptables 代理模式下设置 SNAT。此值必须介于 [0, 31](含边界值)。
masqueradeBit: null
# iptables 规则被刷新的最小周期(例如,'5s'、'1m'、'2h22m')。
minSyncPeriod: 1m
# iptables 规则的刷新周期(例如,'5s'、'1m'、'2h22m')。此值必须大于 0。
syncPeriod: 1m
ipvs:
#取值为一个 CIDR 列表,ipvs 代理程序在清理 IPVS 服务时不应触碰这些 IP 地址。
excludeCIDRs: null
minSyncPeriod: 1m
#IPVS 调度器。
scheduler: ""
#配置 arp_ignore 和 arp_announce,以避免(错误地)响应来自 kube-ipvs0 接口的 ARP 查询请求。
strictARP: true
syncPeriod: 1m
#设置 IPVS TCP 会话在收到 FIN 之后的超时值。 默认值为 0,意味着使用系统上当前的超时值设置。
tcpFinTimeout: 0s
#用于设置空闲 IPVS TCP 会话的超时值。 默认值为 0,意味着使用系统上当前的超时值设置。
tcpTimeout: 0s
#设置 IPVS UDP 包的超时值。 默认值为 0,意味着使用系统上当前的超时值设置。
udpTimeout: 0s
#度量值服务器提供服务时所使用的的 IP 地址和端口, 默认设置为 '127.0.0.1:10249'(设置为 0.0.0.0 意味着在所有接口上提供服务)。
metricsBindAddress: "127.0.0.1:10249"
# 代理模式 'userspace'(相对较老,即将被淘汰)、 'iptables'(相对较新,速度较快)、'ipvs'(最新,在性能和可扩缩性上表现好)

mode: "ipvs"
# kube-proxy 进程的 --nodeport-addresses 命令行参数设置。
#此值必须是合法的 IP 段。所给的 IP 段会作为参数来选择 NodePort 类型服务所使用的接口。
#如果有人希望将本地主机(Localhost)上的服务暴露给本地访问,同时暴露在某些其他网络接口上 以实现某种目标,可以使用 IP 段的列表。
#如果此值被设置为 "127.0.0.0/8",则 kube-proxy 将仅为 NodePort 服务选择本地回路(loopback)接口。
#如果此值被设置为非零的 IP 段,则 kube-proxy 会对 IP 作过滤,
#仅使用适用于当前节点的 IP 地址。 空的字符串列表意味着选择所有网络接口。
nodePortAddresses: null
#主机端口的范围,形式为 ‘beginPort-endPort’(包含边界), 用来设置代理服务所使用的端口。如果未指定(即‘0-0’),则代理服务会随机选择端口号。
portRange: "0-0"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
# NodeRegistration 包含与将新控制平面节点注册到集群相关的字段
nodeRegistration:
# Name 是将在此 `kubeadm init` 或 `kubeadm join` 操作中创建的 Node API 对象的 `.Metadata.Name` 字段。
# 该字段也用于 kubelet 的客户端证书到 API 服务器的 CommonName 字段。
# 如果未提供,则默认为节点的主机名。
name: master-158
# CRISocket 用于检索容器运行时信息。此信息将被注释到 Node API 对象,以供以后重用
criSocket: unix:///var/run/containerd/containerd.sock
# Taints指定节点API对象应该注册的 taints 。如果该字段未设置,即为nil,它将默认带有控制平面 taints 的控制平面节点。
# 如果你不想 taints 你的控制平面节点,将这个字段设置为一个空片,即。' taints:[] '在YAML文件中。此字段仅用于节点注册。
# 挺复杂 https://pkg.go.dev/k8s.io/api/core/v1#Taint
taints: null
# taints:
# - key: "kubeadmNode"
# value: "someValue"
# effect: "NoSchedule"
# KubeletExtraArgs传递额外的参数给kubelet。
# 这里的参数通过kubeadm在运行时为kubelet写入源代码的环境文件传递到kubelet命令行。
# 这覆盖了kubelet-config中通用的基本级配置,ConfigMap Flags在解析时具有更高的优先级。
# 这些值是本地的,并且特定于kubeadm正在执行的节点。这个映射中的键是出现在命令行上的标志名,除非没有前导破折号。
kubeletExtraArgs:
v: 4
# IgnorePreflightErrors 提供了在当前节点注册时要忽略的 pre-flight 错误片段。
ignorePreflightErrors:
- IsPrivilegedUser
# ImagePullPolicy 指定在 kubeadm "init" 和 "join" 操作期间拉取镜像的策略。
# The value of this field must be one of "Always", "IfNotPresent" or "Never".
# 如果未设置此字段,kubeadm 将默认为“IfNotPresent”,或者如果主机上不存在所需的图像,则拉取所需的图像。
imagePullPolicy: IfNotPresent
# CACertPath is the path to the SSL certificate authority used to
# secure comunications between node and control-plane.
# Defaults to "/etc/kubernetes/pki/ca.crt".
caCertPath: "/etc/kubernetes/pki/etcd/etcd-ca.crt"
# Discovery 指定 kubelet 在 TLS 引导过程中使用的选项
discovery:
# BootstrapToken用于设置基于引导令牌的发现选项。
# BootstrapToken和File是互斥的
bootstrapToken:
token: fk3wpg.gs0mcv4twx3tz2mc
# APIServerEndpoint 是从其获取信息的 API 服务器的 IP 或域名
apiServerEndpoint: 192.168.2.158:6443
# CACertHashes指定一组公钥引脚,以便在使用基于令牌的发现时进行验证。
# 发现期间发现的根CA必须与这些值中的一个匹配。
# 指定空集将禁用根CA固定,这可能是不安全的。
# 每个散列的格式为 "<type>:<value>",其中目前唯一支持的类型是 "sha256"。
# 这是在DER-encoded ASN.1中主题公钥信息(SPKI)对象的一个十六进制编码的SHA-256哈希值。这些散列可以使用OpenSSL等进行计算。
caCertHashes:
- "sha256:011acbb00e4983761f3cbe774f45477b75a99a42d14004f72c043ffbb6e5b025"
# UnsafeSkipCAVerification允许基于令牌的发现,无需通过CACertHashes进行CA验证。
# 这可能会削弱kubeadm的安全性,因为其他节点可以模拟控制平面。
unsafeSkipCAVerification: false
# File 用于指定 kubeconfig 文件的文件或 URL,从中加载集群信息
# BootstrapToken和File是互斥的
#file:
# KubeConfigPath 用于指定 kubeconfig 文件的实际文件路径或 URL,从中加载集群信息
#kubeConfigPath:
# TLSBootstrapToken 是用于 TLS 引导的令牌。
# 如果设置了 .BootstrapToken,则该字段默认为 .BootstrapToken.Token,但可以被覆盖。
# 如果设置了.File,则该字段**必须设置**,以防 KubeConfigFile 不包含任何其他身份验证信息
tlsBootstrapToken: fk3wpg.gs0mcv4twx3tz2mc
# 超时修改发现超时
timeout: 4m0s
# ControlPlane 定义要在加入节点上部署的附加控制平面实例。
# 如果为零,则不会部署额外的控制平面实例。
controlPlane:
# LocalAPIEndpoint 表示要在此节点上部署的 API 服务器实例的端点。
localAPIEndpoint:
# AdvertiseAddress 设置 API 服务器发布的 IP 地址。
advertiseAddress: 192.168.2.158
# BindPort 设置 API Server 绑定的安全端口。
bindPort: 6443
certificateKey: 011acbb00e4983761f3cbe774f45477b75a99a42d14004f72c043ffbb6e5b025
# SkipPhases 是命令执行期间要跳过的阶段列表。
# 阶段列表可以通过“kubeadm init——help”命令获取。
# 标记"——skip-phases"优先于此字段。
skipPhases:
- addon/kube-proxy
# Patches 包含与将补丁应用于 kubeadm 部署的组件相关的选项
# https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3#Patches

本文地址: https://github.com/maxzhao-it/blog/post/cdb1e23h/

前言

安装好docker

这里只做笔记,官网文档介绍很全面

官网文档

我这里有

  • 192.168.222.180 master
  • 192.168.222.181 master
  • 192.168.222.182 master
  • 192.168.222.185 node
  • 192.168.222.186 node
  • 192.168.222.240 nfs

Rancher安装

Docker单节点

docker run -d --privileged --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /var/lib/rancher:/var/lib/rancher \
rancher/rancher:latest

docker ps

注意:如果是虚拟机,必须是固定IP

设置密码:登录界面会提示获取密码脚本

docker logs  container-id  2>&1 | grep "Bootstrap Password:"

高可用Rancher

1、安装RKE

# 180上执行
cd ~/
curl -L https://github.com/rancher/rke/releases/download/v1.3.11/rke_linux-amd64 -o rke_linux-amd64
mv rke_linux-amd64 rke
chmod +x rke
mv rke /usr/local/bin/
rke --version
# 分发到其它master
scp /usr/local/bin/rke root@192.168.222.181:/usr/local/bin/
scp /usr/local/bin/rke root@192.168.222.182:/usr/local/bin/
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.222.180
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.222.181
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.222.182

GitHub RKE发布页面

国内资源

操作

2、安装 kubectl

配置镜像后,所有节点都要安装

sudo yum install -y kubectl kubelet
# 关闭swap(k8s禁止虚拟内存以提高性能)
swapoff -a
#永久
sed -ri 's/.*swap.*/#&/' /etc/fstab

# 确保 br_netfilter 模块被加载
lsmod | grep br_netfilter
# 显式加载该模块
sudo modprobe overlay
sudo modprobe br_netfilter
#为了让你的 Linux 节点上的 iptables 能够正确地查看桥接流量
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
#验证 br_netfilter 模块是否已加载
lsmod | grep br_netfilter
# 设置必需的 sysctl 参数,这些参数在重新启动后仍然存在。
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system

3、安装 RKE 集群

RKE安装集群

cluster.yml示例

nodes:
- address: 192.168.222.180
  internal_address: 192.168.222.180
  user: docker
  role: [controlplane, worker, etcd]
  hostname_override: host180
- address: 192.168.222.181
  internal_address: 192.168.222.181
  user: docker
  role: [controlplane, worker, etcd]
  hostname_override: host181
- address: 192.168.222.182
  internal_address: 192.168.222.182
  user: docker
  role: [controlplane, worker, etcd]
  hostname_override: host182

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

# 当使用外部 TLS 终止,并且使用 ingress-nginx v0.22或以上版本时,必须。
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
# 安装
rke up --config cluster.yml
# 卸载
rke remove --config cluster.yml
# 更新
rke up --update-only --config cluster.yml
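rke up 成功后会在当前目录生成 kube_config_cluster.yml,可以直接用它访问集群(示意):

export KUBECONFIG=$(pwd)/kube_config_cluster.yml
kubectl get nodes -o wide
kubectl get pods -A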

4、安装helm

cd ~/
curl https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz -o helm-v3.9.0-linux-amd64.tar.gz
tar -zxvf helm-v3.9.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
# 添加 chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo bitnami
# 添加 rancher repo
# helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
# helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
# 国内
helm repo add rancher-latest http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/latest
helm repo add rancher-stable http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/stable
helm repo list
helm repo update
#helm repo remove
# example
helm install bitnami/mysql --generate-name

helm releases

快速入门指南

5、安装 Rancher

我这里是从其它Rancher安装的集群,需要配置kubeconfig

  1. 在集群管理-选择集群-更多-Download KubeConfig

  2. 所有节点写入 kubeconfig:

mkdir -p ~/.kube
cat > ~/.kube/config << EOF
apiVersion: v1
kind: Config
clusters:
- name: "ctmd"
  cluster:
    server: "https://192.168.222.251/k8s/clusters/c-gwdd5"
    certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJwekNDQVUyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQTdNUnd3R2dZRFZRUUtFeE5rZVc1aGJXbGoKYkdsemRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR1Z1WlhJdFkyRXdIaGNOTWpJdwpOakE0TURZd01qQTRXaGNOTXpJd05qQTFNRFl3TWpBNFdqQTdNUnd3R2dZRFZRUUtFeE5rZVc1aGJXbGpiR2x6CmRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR1Z1WlhJdFkyRXdXVEFUQmdjcWhrak8KUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVJUNHhxcGNQM0ZKVnhKYW5HZjRIVTJLbUFWRkJuTUc2YUZjbFFVWitDdgo4UEx2R0FMTDFsWTkveGVFeFhQMHNRQjdzcUVINDFSSzhqSHBwcG1laVFRd28wSXdRREFPQmdOVkhROEJBZjhFCkJBTUNBcVF3RHdZRFZSMFRBUUgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVWGJNQ0VmNHVIRXZFUnNwUTJ3N2QKK3pjVWRYY3dDZ1lJS29aSXpqMEVBd0lEU0FBd1JRSWhBTFhuZDhFNzNWU080UnFoeElpZHJ3YWxjMk5iYW03bQpwR1hjdUl6MW9YNVRBaUEvRVZZMWwrT1JSRzRCK0JCUEc4UTlzd1dsVzljdndDVk1ydENlb2p1VTBRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ=="
users:
- name: "ctmd"
  user:
    token: "kubeconfig-user-6x5rds8w65:tkpcjwlpvhv9qrsvp8djbtb9qwwfqfh9hdtpb24hzkf9bjb6m2mwqw"
contexts:
- name: "ctmd"
  context:
    user: "ctmd"
    cluster: "ctmd"
current-context: "ctmd"
EOF

  3. 创建 cattle-system 命名空间:

kubectl create namespace cattle-system
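命名空间创建好之后,通常就是用前面添加的 rancher chart 仓库安装 Rancher。下面是一个示意,其中 hostname、replicas 等参数都是假设值,证书签发方式按默认(Rancher 自动生成)处理,实际按环境调整:

helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.maxzhao.local \
  --set replicas=3
kubectl -n cattle-system rollout status deploy/rancher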

系统配置

关闭防火墙

sudo systemctl stop firewalld
sudo systemctl disable firewalld
# 关闭selinux
setenforce 0

Docker镜像

sudo vim /etc/docker/daemon.json

改为:

{
  "registry-mirrors": ["https://aj2rgad5.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn"],
  "dns": ["8.8.8.8", "8.8.4.4"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

远程访问:"hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]

配置 k8s 镜像

sudo vim /etc/yum.repos.d/kubernetes.repo

写入

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

刷新

# 由于官网未开放同步方式, 可能会有索引gpg检查失败的情况
# sudo yum clean all && sudo yum makecache

华为云镜像

[kubernetes]
name=Kubernetes
baseurl=https://repo.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://repo.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg
https://repo.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg

卸载集群

kubectl config delete-cluster 删除对集群的本地引用。

删除节点

使用适当的凭证与控制平面节点通信,运行:

kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets

在删除节点之前,请重置 kubeadm 安装的状态:

kubeadm reset

重置过程不会重置或清除 iptables 规则或 IPVS 表。如果你希望重置 iptables,则必须手动进行:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

如果要重置 IPVS 表,则必须运行以下命令:

ipvsadm -C

现在删除节点:

kubectl delete node <node name>

如果你想重新开始,只需运行 kubeadm init 或 kubeadm join 并加上适当的参数。

附录

Rancher集群配置文件

cluster.yml

# If you intended to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 192.168.222.180
port: "22"
internal_address: 192.168.222.180
role:
- controlplane
- worker
- etcd
hostname_override: host180
user: root
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_rsa
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- address: ""
port: "22"
internal_address: ""
role:
- controlplane
hostname_override: ""
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_rsa
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- address: ""
port: "22"
internal_address: ""
role:
- controlplane
hostname_override: ""
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_rsa
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
services:
etcd:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_binds: []
win_extra_env: []
external_urls: []
ca_cert: ""
cert: ""
key: ""
path: ""
uid: 0
gid: 0
snapshot: null
retention: ""
creation: ""
backup_config: null
kube-api:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_binds: []
win_extra_env: []
service_cluster_ip_range: 10.43.0.0/16
service_node_port_range: ""
pod_security_policy: true
always_pull_images: false
secrets_encryption_config: null
audit_log: null
admission_configuration: null
event_rate_limit: null
kube-controller:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_binds: []
win_extra_env: []
cluster_cidr: 10.42.0.0/16
service_cluster_ip_range: 10.43.0.0/16
scheduler:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_binds: []
win_extra_env: []
kubelet:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_binds: []
win_extra_env: []
cluster_domain: cluster.local
infra_container_image: ""
cluster_dns_server: 10.43.0.10
fail_swap_on: false
generate_serving_certificate: false
kubeproxy:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_binds: []
win_extra_env: []
network:
plugin: canal
options: {}
mtu: 0
node_selector: {}
update_strategy: null
tolerations: []
authentication:
strategy: x509
sans: []
webhook: null
addons: ""
addons_include: []
system_images:
etcd: rancher/mirrored-coreos-etcd:v3.5.3
alpine: rancher/rke-tools:v0.1.80
nginx_proxy: rancher/rke-tools:v0.1.80
cert_downloader: rancher/rke-tools:v0.1.80
kubernetes_services_sidecar: rancher/rke-tools:v0.1.80
kubedns: rancher/mirrored-k8s-dns-node-cache:1.21.1
dnsmasq: rancher/mirrored-k8s-dns-dnsmasq-nanny:1.21.1
kubedns_sidecar: rancher/mirrored-k8s-dns-sidecar:1.21.1
kubedns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.5
coredns: rancher/mirrored-coredns-coredns:1.9.0
coredns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.5
nodelocal: rancher/mirrored-k8s-dns-node-cache:1.21.1
kubernetes: rancher/hyperkube:v1.23.6-rancher1
flannel: rancher/mirrored-coreos-flannel:v0.15.1
flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
calico_node: rancher/mirrored-calico-node:v3.22.0
calico_cni: rancher/mirrored-calico-cni:v3.22.0
calico_controllers: rancher/mirrored-calico-kube-controllers:v3.22.0
calico_ctl: rancher/mirrored-calico-ctl:v3.22.0
calico_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.22.0
canal_node: rancher/mirrored-calico-node:v3.22.0
canal_cni: rancher/mirrored-calico-cni:v3.22.0
canal_controllers: rancher/mirrored-calico-kube-controllers:v3.22.0
canal_flannel: rancher/mirrored-flannelcni-flannel:v0.17.0
canal_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.22.0
weave_node: weaveworks/weave-kube:2.8.1
weave_cni: weaveworks/weave-npc:2.8.1
pod_infra_container: rancher/mirrored-pause:3.6
ingress: rancher/nginx-ingress-controller:nginx-1.2.0-rancher1
ingress_backend: rancher/mirrored-nginx-ingress-controller-defaultbackend:1.5-rancher1
ingress_webhook: rancher/mirrored-ingress-nginx-kube-webhook-certgen:v1.1.1
metrics_server: rancher/mirrored-metrics-server:v0.6.1
windows_pod_infra_container: rancher/mirrored-pause:3.6
aci_cni_deploy_container: noiro/cnideploy:5.1.1.0.1ae238a
aci_host_container: noiro/aci-containers-host:5.1.1.0.1ae238a
aci_opflex_container: noiro/opflex:5.1.1.0.1ae238a
aci_mcast_container: noiro/opflex:5.1.1.0.1ae238a
aci_ovs_container: noiro/openvswitch:5.1.1.0.1ae238a
aci_controller_container: noiro/aci-containers-controller:5.1.1.0.1ae238a
aci_gbp_server_container: noiro/gbp-server:5.1.1.0.1ae238a
aci_opflex_server_container: noiro/opflex-server:5.1.1.0.1ae238a
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
mode: rbac
options: {}
ignore_docker_version: null
enable_cri_dockerd: null
kubernetes_version: ""
private_registries: []
ingress:
provider: ""
options: {}
node_selector: {}
extra_args: {}
dns_policy: ""
extra_envs: []
extra_volumes: []
extra_volume_mounts: []
update_strategy: null
http_port: 0
https_port: 0
network_mode: ""
tolerations: []
default_backend: null
default_http_backend_priority_class_name: ""
nginx_ingress_controller_priority_class_name: ""
default_ingress_class: null
cluster_name: ""
cloud_provider:
name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
address: ""
port: ""
user: ""
ssh_key: ""
ssh_key_path: ""
ssh_cert: ""
ssh_cert_path: ""
ignore_proxy_env_vars: false
monitoring:
provider: ""
options: {}
node_selector: {}
update_strategy: null
replicas: null
tolerations: []
metrics_server_priority_class_name: ""
restore:
restore: false
snapshot_name: ""
rotate_encryption_key: false
dns: null

CentOS7镜像

# 备份
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# 下载源文件
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# 或者
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# 生成缓存
yum makecache

Docker

# 需要挂载 ISO 镜像
sudo mkdir /mnt/cdrom
sudo mount /dev/cdrom /mnt/cdrom/
sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

使用阿里云

# step 1: 安装必要的一些系统工具
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: 添加软件源信息
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: 更新并安装Docker-CE
sudo yum makecache fast

配置 k8s 镜像

配置:/etc/yum.repos.d/kubernetes.repo

使用阿里云镜像

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

刷新

Docker镜像加速

手动修改

mkdir /etc/docker
vim /etc/docker/daemon.json

写入

{
  "registry-mirrors": [
    "https://aj2rgad5.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.cn-hangzhou.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://05f073ad3c0010ea0f4bc00b7105ec20.mirror.swr.myhuaweicloud.com"
  ],
  "dns": ["8.8.8.8", "8.8.4.4"]
}

Spray安装后使用脚本

可以使用任意一个地址

# Docker中国 mirror
# export REGISTRY_MIRROR="https://registry.docker-cn.com"
# 腾讯云 docker hub mirror
# export REGISTRY_MIRROR="https://mirror.ccs.tencentyun.com"
# 华为云镜像
# export REGISTRY_MIRROR="https://05f073ad3c0010ea0f4bc00b7105ec20.mirror.swr.myhuaweicloud.com"
# DaoCloud 镜像
# export REGISTRY_MIRROR="http://f1361db2.m.daocloud.io"
# 阿里云 docker hub mirror
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}

systemctl restart kubelet # 假设您安装了 kubernetes

查看修改结果

docker info

image-20220522214702999

本文地址: https://github.com/maxzhao-it/blog/post/72c978ad/

前言

了解证书

k8s 的证书都在 /etc/kubernetes/pki

├── apiserver.crt  apiserver 证书
├── apiserver.key apiserver 证书
├── apiserver-kubelet-client.crt
├── apiserver-kubelet-client.key
├── apiserver-etcd-client.crt apiserver访问etcd的证书
├── apiserver-etcd-client.key apiserver访问etcd的证书
├── ca.crt 根证书
├── ca.key 根证书
├── etcd
│   ├── etcd-ca.crt
│   ├── healthcheck-client.crt pod中Liveness探针客户端证书
│   ├── healthcheck-client.key pod中Liveness探针客户端证书
│   ├── etcd.crt 节点通信证书
│   └── etcd.key
├── front-proxy-ca.crt 代理根证书
├── front-proxy-ca.key
├── front-proxy-client.crt 代理根证书签发的客户端证书
├── front-proxy-client.key
├── sa.key
└── sa.pub

非对称加密会生成一个密钥对,如上面的 sa.key、sa.pub 就是一对密钥:公钥用于加密,私钥用于解密。

kubeletapiserver 会互相访问,所有他们都有证书。

kubelet变化频繁,一般只需要指定 ca 根证书。

apiserver 变化不频繁,所以在创建集群时,提前分配好用作 kube-apiserver 的 IP 或主机名/域名。

但是由于部署在node节点上的kubelet会因为集群规模的变化而频繁变化, 而无法预知node的所有IP信息, 所以kubelet上一般不会明确指定服务端证书,
而是只指定ca根证书, 让kubelet根据本地主机信息自动生成服务端证书并保存到配置的cert-dir文件夹中
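排查证书问题时,可以先确认各证书的有效期与签发者,下面两条命令是常用的检查方式(示意):

# kubeadm 集群可以直接查看所有证书的过期时间
kubeadm certs check-expiration
# 或者用 openssl 查看单个证书
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -subject -issuer -dates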

代理根证书:

front-proxy-ca.crt
front-proxy-ca.key

由代理根证书签发的客户端证书:

front-proxy-client.crt
front-proxy-client.key

比如使用kubectl proxy代理访问时,kube-apiserver使用这个证书来验证客户端证书是否是自己签发的证书。

我这里有

  • 192.168.2.158 master-158
  • 192.168.2.159 master-159
  • 192.168.2.160 master-160
  • 192.168.2.161 node-161

CA

配置时间

yum install  ntpdate -y 
ntpdate time1.aliyun.com

cfssl 生成自签名 TLS 证书的方法

host158上执行

生成自签名 root CA 证书

#rm -f /opt/cfssl* 
rm -rf /opt/certs
mkdir -p /opt/certs
cd /opt/certs
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 -o /usr/local/bin/cfssl
chmod +x /usr/local/bin/cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 -o /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl-certinfo
# 查看版本
/usr/local/bin/cfssl version
/usr/local/bin/cfssljson -h

生成

# 创建根证书签名请求文件
cat > /opt/certs/ca-csr.json <<EOF
{
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "maxzhao-ca",
"OU": "etcd Security",
"L": "NanJing",
"ST": "Jiang Su",
"C": "CN"
}
],
"CN": "maxzhao"
}
EOF
# CN:Common Name:kube-apiserver 从证书中提取该字段作为请求的用户名 (User Name),
# O:Organization:kube-apiserver 从证书中提取该字段作为请求用户所属的组 (Group);
# kube-apiserver 将提取的 User、Group 作为 RBAC 授权的用户标识;

# 证书配置文件
cat > /opt/certs/ca-config.json <<EOF
{
"signing": {
"default": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "175200h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "175200h"
},
"etcd": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "175200h"
}
}
}
}
EOF
# signing:表示该证书可用于签名其它证书(生成的 ca.pem 证书中 CA=TRUE);
# server auth:表示 client 可以用该证书对 server 提供的证书进行验证;
# client auth:表示 server 可以用该证书对 client 提供的证书进行验证;
# "expiry": "175200h" 有效期20年

生成ca 证书和私钥

# 生成
cfssl gencert --initca /opt/certs/ca-csr.json | cfssljson --bare /opt/certs/ca

# verify
openssl x509 -in /opt/certs/ca.pem -text -noout

结果

# CSR configuration
/opt/certs/ca-csr.json
# 证书签名请求文件(CSR)
/opt/certs/ca.csr
# self-signed root CA public key 其它文档里会叫 ca.crt
/opt/certs/ca.pem
# self-signed root CA private key
/opt/certs/ca-key.pem
# 证书配置文件 for other TLS assets
/opt/certs/ca-config.json

使用私钥生成本地颁发的证书(etcd

# peer 
cat > /opt/certs/etcd-158-ca-csr.json <<EOF
{
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "maxzhao-ca",
"OU": "etcd Security",
"L": "NanJing",
"ST": "Jiang Su",
"C": "CN"
}
],
"CN": "etcd-158",
"hosts": [
"127.0.0.1",
"192.168.2.158",
"192.168.2.159",
"192.168.2.160",
"192.168.2.161",
"10.96.0.1",
"10.244.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"kubernetes.default.svc.cluster.158"
]
}
EOF
# 生成etcd用的证书文件 peer
cfssl gencert \
--ca /opt/certs/ca.pem \
--ca-key /opt/certs/ca-key.pem \
--config /opt/certs/ca-config.json \
-profile=etcd \
/opt/certs/etcd-158-ca-csr.json | cfssljson --bare /opt/certs/etcd-158
# -profile=etcd 使用 ca-config.json 中的 etcd profile(含 server auth 与 client auth),表示客户端与服务端双向认证
# verify
openssl x509 -in /opt/certs/etcd-158.pem -text -noout

信任自签名的 CA 证书

yum install -y ca-certificates
cp /opt/certs/ca.pem /etc/pki/ca-trust/source/anchors/maxzhao-ca.crt
update-ca-trust

生成之后

# 传输到每一个节点
rm -rf /etc/certs/etcd
mkdir -p /etc/certs/etcd
\cp /opt/certs/ca.pem /etc/certs/etcd/ca.pem
\cp /opt/certs/etcd-158-key.pem /etc/certs/etcd/etcd-158-key.pem
\cp /opt/certs/etcd-158.pem /etc/certs/etcd/etcd-158.pem
# 拷贝 ca.pem, etcd-158.pem, etcd-158-key.pem
ssh root@192.168.2.159 "mkdir -p /etc/certs/etcd"
ssh root@192.168.2.160 "mkdir -p /etc/certs/etcd"
ssh root@192.168.2.161 "mkdir -p /etc/certs/etcd"
scp -r /etc/certs/etcd/* root@192.168.2.159:/etc/certs/etcd/
scp -r /etc/certs/etcd/* root@192.168.2.160:/etc/certs/etcd/
scp -r /etc/certs/etcd/* root@192.168.2.161:/etc/certs/etcd/

ca-cert-hash

#默认证书 /etc/kubernetes/pki/ca.crt
openssl x509 -pubkey -in /etc/certs/etcd/ca.pem | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

base64

cat /etc/certs/etcd/ca.pem | base64 -w 0

K8S

当前 ca base64 值

cat /etc/kubernetes/ca.pem | base64 -w 0

会在各个配置中,
比如:

  • .kube/config
  • /etc/kubernetes/kubelet.conf
  • /var/lib/kubelet/config.yaml

K8S 中使用自定义CA

kubernetes verion=1.24.1

为用户帐户配置证书

你必须手动配置以下管理员帐户和服务帐户

| 文件名 | 凭据名称 | 默认 CN | O (位于 Subject 中) |
| --- | --- | --- | --- |
| admin.conf | default-admin | kubernetes-admin | system:masters |
| kubelet.conf | default-auth | system:node:&lt;nodeName&gt; (参阅注释) | system:nodes |
| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
| scheduler.conf | default-scheduler | system:kube-scheduler | |
KUBECONFIG=/etc/kubernetes/admin.conf 
#配置集群的管理员
kubectl config set-cluster default-cluster --server=https://192.168.2.159:6443 --certificate-authority /etc/certs/etcd/ca.pem --embed-certs
kubectl config set-cluster default-cluster --server=https://192.168.2.160:6443 --certificate-authority /etc/certs/etcd/ca.pem --embed-certs
kubectl config set-cluster default-cluster --server=https://192.168.2.161:6443 --certificate-authority /etc/certs/etcd/ca.pem --embed-certs
# 集群中的每个节点都需要一份
kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs
#必需添加到 manifests/kube-controller-manager.yaml 清单中
kubectl config set-context default-system --cluster default-cluster --user <credential-name>
# 必需添加到 manifests/kube-scheduler.yaml 清单中
kubectl config use-context default-system

PKI 证书和要求

/etc/kubernetes/admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf

修复证书

# root 用户执行
rm -f /etc/kubernetes/pki/apiserver.crt
rm -f /etc/kubernetes/pki/apiserver.key
rm -f /etc/kubernetes/pki/apiserver-kubelet-client.crt
rm -f /etc/kubernetes/pki/apiserver-kubelet-client.key
rm -f /etc/kubernetes/pki/apiserver.*
# 如果两个文件都已存在,则 kubeadm 将跳过生成步骤,使用现有文件。
kubeadm init phase certs apiserver --config=kubeadm-config-init.yaml
kubeadm init phase certs apiserver-kubelet-client --config=kubeadm-config-init.yaml
kubectl get pods -A -o wide
kubectl delete pod -n kube-system kube-apiserver-master-158
kubectl delete pod -n kube-system kube-apiserver-master-159
kubectl delete pod -n kube-system kube-apiserver-master-160
systemctl restart kubelet

参考:

kubeadm init phase

[Invalid x509 certificate for kubernetes master](https://stackoverflow.com/questions/46360361/invalid-x509-certificate-for-kubernetes-master)

本文地址: https://github.com/maxzhao-it/blog/post/ebede6dg/

查看所有打开的端口

sudo firewall-cmd --zone=public --list-ports

查看端口是否开放

sudo firewall-cmd --zone=public --query-port=80/tcp

添加开放端口

sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=20003/tcp --permanent
# 开放服务
sudo firewall-cmd --zone=public --add-service=nfs --permanent

--permanent 表示永久生效,没有此参数时重启后失效

更新防火墙规则

sudo firewall-cmd --reload

删除开放端口

sudo firewall-cmd --zone=public --remove-port=80/tcp --permanent

端口转发

make
make install
sed -i 's/#user nobody;/user nginx;/' /opt/nginx/nginx/conf/nginx.conf
# 默认情况下Linux的1024以下端口是只有root用户才有权限占用 ,所以80 端口改为 50080
# firewall配置端口转发
sudo firewall-cmd --permanent --add-masquerade
# sudo firewall-cmd --permanent --zone=public --add-forward-port=port=80:proto=tcp:toaddr=127.0.0.1:toport=50080
sudo firewall-cmd --permanent --zone=public --add-forward-port=port=80:proto=tcp:toport=50080
sudo firewall-cmd --reload
/opt/nginx/nginx/sbin/nginx -c /opt/nginx/nginx/conf/nginx.conf

源IP

# 192.168.1.1/24
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.1.1/24" port protocol="tcp" port="3306" accept"

本文地址: https://github.com/maxzhao-it/blog/post/93d07d99/

sed -i 's/str/targetStr/g'  filename

示例:修改 IP 地址

sed -i 's/UUID=6695c513-e9fa-4d7b-83c1-b795ce225485-122/UUID=6695c513-e9fa-4d7b-83c1-b795ce225485-158/g'  /etc/sysconfig/network-scripts/ifcfg-ens33
sed -i 's/IPADDR=192.168.2.122/IPADDR=192.168.2.158/g' /etc/sysconfig/network-scripts/ifcfg-ens33
sed -i 's/host122/host158/g' /etc/hostname

本文地址: https://github.com/maxzhao-it/blog/post/ba563af2/

在安装之前

cd nginx-1.22.0
vim src/core/nginx.h

image-20220727162934378

配置

sudo yum install -y make gcc gcc-c++ pcre-devel zlib-devel
./configure --prefix=/opt/nginx/nginx

编译安装

make
make install
# 如果使用nginx用户安装,需要修改 nginx.conf 中的用户
# sed -i 's/#user nobody;/user nginx;/' /opt/nginx/nginx/conf/nginx.conf
# 默认情况下Linux的1024以下端口是只有root用户才有权限占用 ,所以80 端口改为 50080
# firewall配置端口转发
sudo firewall-cmd --permanent --add-masquerade
# sudo firewall-cmd --permanent --zone=public --add-forward-port=port=80:proto=tcp:toaddr=127.0.0.1:toport=50080
sudo firewall-cmd --permanent --zone=public --add-forward-port=port=80:proto=tcp:toport=50080
sudo firewall-cmd --reload
/opt/nginx/nginx/sbin/nginx -c /opt/nginx/nginx/conf/nginx.conf

开启防火墙

sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --reload

问题

问题

checking for C compiler ... not found

安装

sudo yum install -y gcc

问题

the HTTP rewrite module requires the PCRE library

安装

sudo yum install -y  pcre-devel 

每日一个日志文件

http {
    map $time_iso8601 $logdate {
        '~^(?<ymd>\d{4}-\d{2}-\d{2})' $ymd;
        default 'date-not-found';
    }
    access_log /opt/nginx/nginx/logs/access-$logdate.log json_log;
}

本文地址: https://github.com/maxzhao-it/blog/post/12e2d0c0/

多节点负载的配置

upstream client {
    server 192.168.2.1:20031 weight=1;
    server 192.168.2.1:20011 weight=1;
}
server {
    listen 80;
    server_name 0.0.0.0;
    location / {
        proxy_pass http://client;
    }
}

upstream 的分配方式

  1. 轮询

    upstream client {
    server 192.168.2.1:20031;
    server 192.168.2.1:20011;
    }
  2. 权重

    upstream client {
    server 192.168.2.1:20031 weight=1;
    server 192.168.2.1:20011 weight=1;
    }
  3. ip_hash:每个请求按照访问 IP 的 hash 结果分配,这样每个 client 固定访问一个后端,可以解决 session 一致的问题。

    upstream client {
    ip_hash;
    server 192.168.2.1:20031 weight=1;
    server 192.168.2.1:20011 weight=1;
    }
  4. url_hash:每个请求按照 url 的 hash 结果分配,使得每个 url 固定访问一个后端,主要用于缓存服务器。

    upstream client {
    hash $request_uri;
    # 一致性算法
    hash_method crc32;
    server 192.168.2.1:20031 weight=1;
    server 192.168.2.1:20011 weight=1;
    }
  5. fair:根据后端响应时间来分配,即 rt 越短,优先分配的级别越高

    upstream client {
    fair;
    server 192.168.2.1:20031 weight=1;
    server 192.168.2.1:20011 weight=1;
    }

upstream节点参数配置

server 后的参数

upstream client {
server 192.168.2.1:20031 weight=1 max_fails=3 fail_timeout=30s;
server 192.168.2.1:20011 weight=1;
}
  • address:地址必须配置
  • down:标记 server 停用
  • backup:标记是备用服务器
  • weight:设置权重
  • max_fails、fail_timeout:允许的最大失败次数,以及达到失败次数后多长时间内不再转发到该节点(组合示例见下)
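下面是把这些参数组合起来的一个示意配置,其中 20041、20051 两个端口是为了演示 backup、down 假设出来的:

upstream client {
    server 192.168.2.1:20031 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.2.1:20011 weight=1;
    # 平时不参与负载,只有上面的节点都不可用时才启用
    server 192.168.2.1:20041 backup;
    # 临时下线的节点
    server 192.168.2.1:20051 down;
}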

引用参数

fastcgi.conf 文件里有一些可以使用的参数

location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://client;
}

一个多节点负载的配置

http {
include mime.types;
default_type application/octet-stream;

sendfile on;

#keepalive_timeout 0;
keepalive_timeout 65;

server {
listen 80;
server_name 0.0.0.0;
location / {
proxy_pass http://client;
}
}
upstream client {
server 192.168.2.1:20031 weight=1;
server 192.168.2.1:20011 weight=1;
}
}

校验客户端IP指向固定的集群地址

location / {
    # 可以配置多个IP
    if ($remote_addr ~ "192.168.2.8|9") {
        proxy_pass http://client2;
    }
    proxy_pass http://client;
}

proxy_pass 路径

URL地址:127.0.0.1/demo/index.html

实际访问地址:client/demo/index.html

location /demo {
proxy_pass http://client;
}
# 等同于
location / {
proxy_pass http://client/demo/;
}

静态地址

URL地址:127.0.0.1/index.html

实际访问地址:/home/front/index.html

location / {
root /home/front/;
}

安装之前

sudo yum install -y yum-utils

前后端分离的集群 session处理

后端是一个集群部署,后端路径与前端路径不同,则需要复制 session。

server {
    listen 10001;
    server_name 0.0.0.0;
    location /auth {
        proxy_pass http://authServer;
        proxy_cookie_path /auth /;
    }
    location / {
        root /home/maxzhao/auth/frontend/;
        index index.html;
    }
}

如果是 java 后端,还需要配置

registry.addMapping("/**")
.allowedOriginPatterns("*")
.allowedHeaders()
.allowedMethods()
.exposedHeaders();
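上面这段是 WebMvcConfigurer#addCorsMappings 中的片段,下面给出一个可编译的最小示意(类名、放行范围都是假设,allowedOriginPatterns 需要 Spring 5.3+,按项目安全要求收紧):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class CorsConfig implements WebMvcConfigurer {
    @Override
    public void addCorsMappings(CorsRegistry registry) {
        // 需要携带 cookie 时不能用 allowedOrigins("*"),因此这里用 allowedOriginPatterns
        registry.addMapping("/**")
                .allowedOriginPatterns("*")
                .allowedHeaders("*")
                .allowedMethods("GET", "POST", "PUT", "DELETE", "OPTIONS")
                .exposedHeaders("Set-Cookie")
                .allowCredentials(true);
    }
}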

本文地址: https://github.com/maxzhao-it/blog/post/aa6dc455/

加解密JAVA 实现

引入依赖


<dependencies>
<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.15</version>
</dependency>
</dependencies>

代码

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.crypto.Cipher;
import java.nio.charset.StandardCharsets;
import java.security.*;
import java.security.spec.EncodedKeySpec;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

public class SecretUtil {
private static final Logger log = LoggerFactory.getLogger(SecretUtil.class);
private static final String SM2 = "SM2";
private static final String SM2_ALGORITHM = "EC";
private static final String RSA = "RSA";
private static final int RSA_KEY_SIZE = 2048;

private SecretUtil() {
}


/**
* 生成 rsa key
*
* @return [publicKey, privateKey]
*/
public static String[] generateRsaKey() {
return generatePairKey(RSA, RSA_KEY_SIZE, null);
}

private static String[] generatePairKey(String algorithm, int keySize, Provider provider) {
KeyPairGenerator keyPairGenerator = null;
try {
/*KeyPairGenerator类用于生成公钥和私钥对,基于RSA算法生成对象*/
keyPairGenerator = provider == null
? KeyPairGenerator.getInstance(algorithm)
: KeyPairGenerator.getInstance(algorithm, provider);
/* 初始化密钥对生成器 */
keyPairGenerator.initialize(keySize, new SecureRandom());
} catch (NoSuchAlgorithmException e) {
log.warn("{} 公私钥生成失败", algorithm, e);
}
/*判断是否生成成功*/
if (keyPairGenerator == null) {
/*反馈空数组*/
return new String[0];
}
/*生成秘钥*/
KeyPair keyPair = keyPairGenerator.generateKeyPair();
PublicKey publicKey = keyPair.getPublic();
byte[] publicKeyEncoded = publicKey.getEncoded();
String publicKeyString = Base64.getEncoder().encodeToString(publicKeyEncoded);
PrivateKey privateKey = keyPair.getPrivate();
byte[] privateKeyEncoded = privateKey.getEncoded();
String privateKeyString = Base64.getEncoder().encodeToString(privateKeyEncoded);
return new String[]{publicKeyString, privateKeyString};
}


/**
* RSA 加密
*
* @param content 待加密内容
* @param key 公钥
* @return 加密后结果(Base64编码)
*/

public static String rsaEncrypt(String content, String key) {
return encryptPk(content, key, RSA, null);
}

public static String encryptPk(String content, String key, String algorithm, Provider provider) {
try {
EncodedKeySpec keySpec = new X509EncodedKeySpec(Base64.getDecoder().decode(key));
KeyFactory keyFactory = KeyFactory.getInstance(SM2.equalsIgnoreCase(algorithm) ? SM2_ALGORITHM : algorithm);
Cipher cipher = provider == null
? Cipher.getInstance(algorithm)
: Cipher.getInstance(algorithm, provider);
cipher.init(Cipher.ENCRYPT_MODE, keyFactory.generatePublic(keySpec));
byte[] encryptStr = cipher.doFinal(content.getBytes(StandardCharsets.UTF_8));
return Base64.getEncoder().encodeToString(encryptStr);
} catch (Exception e) {
log.error("{} 数据加密失败:{}", algorithm, e.getMessage(), e);
return null;
}
}

/**
* RSA 解密
*
* @param content 内容(Base64编码)
* @param key 私钥
* @return 解密后数据
*/
public static String rsaDecrypt(String content, String key) {
return decryptPk(content, key, RSA, null);
}

public static String decryptPk(String content, String key, String algorithm, Provider provider) {
try {
EncodedKeySpec keySpec = new PKCS8EncodedKeySpec(Base64.getDecoder().decode(key));
KeyFactory keyFactory = KeyFactory.getInstance(SM2.equalsIgnoreCase(algorithm) ? SM2_ALGORITHM : algorithm);
Cipher cipher = provider == null
? Cipher.getInstance(algorithm)
: Cipher.getInstance(algorithm, provider);
cipher.init(Cipher.DECRYPT_MODE, keyFactory.generatePrivate(keySpec));
byte[] decryptBytes = cipher.doFinal(Base64.getDecoder().decode(content));
return new String(decryptBytes);
} catch (Exception e) {
log.error("{} 数据解密失败:{}", algorithm, e.getMessage(), e);
return null;
}
}


}
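一个简单的使用示意(假设示例类与 SecretUtil 在同一个包下):

public class SecretUtilDemo {
    public static void main(String[] args) {
        // 返回值 [0] 为公钥,[1] 为私钥
        String[] keys = SecretUtil.generateRsaKey();
        String cipherText = SecretUtil.rsaEncrypt("hello maxzhao", keys[0]);
        String plainText = SecretUtil.rsaDecrypt(cipherText, keys[1]);
        System.out.println(cipherText);
        System.out.println(plainText);
    }
}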

本文地址: https://github.com/maxzhao-it/blog/post/efc116f5/