
Windows setup

Create the shared folder on Windows.

In this example the share is \\Maxzhao-work\80, with an account vm-80/vm-80 added for it.

On Windows 11 you can add a user by running netplwiz.

Linux setup

Install cifs-utils

sudo yum install -y cifs-utils

Mount the directory

#             fs type  network path     local mount point  -o mount options
sudo mount -t cifs //Maxzhao-work/80 /home -o rw,username="vm-80",password="vm-80"

Mount at boot

Append the mount command to rc.local and make the script executable:

vim /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
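rc.local only runs what you put in it, so the mount command itself still needs to be appended; an /etc/fstab entry is an alternative. Both lines below reuse the share and credentials from the example above (a sketch; adjust the paths to your share):

```shell
# Append the mount command so it runs at boot (rc.local executes as root)
echo 'mount -t cifs //Maxzhao-work/80 /home -o rw,username=vm-80,password=vm-80' >> /etc/rc.d/rc.local

# Alternative: a single /etc/fstab line; _netdev delays the mount until the
# network is up. Plaintext credentials in fstab are world-readable; a
# credentials= file with mode 600 is safer.
# //Maxzhao-work/80  /home  cifs  rw,username=vm-80,password=vm-80,_netdev  0  0
```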

Mount for a specific user

sudo useradd es
sudo passwd es
# look up the user's uid and gid
cat /etc/passwd | grep es
# fs type  network path  local mount point  -o mount options
sudo mount -t cifs //192.168.222.1/80/es /home/es -o rw,uid=1001,gid=1001,username="vm-80",password="vm-80"
sudo mount -t cifs //192.168.222.1/80/nginx /home/nginx -o rw,uid=55557,username="vm-80",password="vm-80"
sudo mount -t cifs //192.168.222.1/80/n9e /home/n9e -o rw,uid=55556,username="vm-80",password="vm-80"
sudo mount -t cifs //192.168.222.1/80/nexus /home/nexus -o rw,uid=55558,username="vm-80",password="vm-80"
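Rather than reading /etc/passwd by hand, the uid/gid for the mount options can be derived with `id`. A sketch (root stands in for the es account so the snippet runs anywhere):

```shell
# Build the -o option string for mount.cifs from a user name.
MOUNT_USER=root            # replace with es, nginx, ... in practice
MOUNT_UID=$(id -u "$MOUNT_USER")
MOUNT_GID=$(id -g "$MOUNT_USER")
OPTS="rw,uid=${MOUNT_UID},gid=${MOUNT_GID},username=vm-80,password=vm-80"
echo "$OPTS"
```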

Post URL: https://github.com/maxzhao-it/blog/post/3ec01b/

LVM disk expansion

Check the partitions

fdisk -l

Check disk usage

df -h

In the output, find the LV to extend; here it is /dev/mapper/centos_localhost-root

Filesystem                         Size  Used Avail Use% Mounted on
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 8.9M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mapper/centos_localhost-root 222G 8.8G 213G 4% /
/dev/sda1 1014M 182M 833M 18% /boot
tmpfs 783M 0 783M 0% /run/user/0

Check the block devices

lsblk

Result
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 99G 0 part
│ ├─centos_localhost-root 253:0 0 221.1G 0 lvm /
│ └─centos_localhost-swap 253:1 0 7.9G 0 lvm [SWAP]
└─sda3 8:3 0 50G 0 part
└─centos_localhost-root 253:0 0 221.1G 0 lvm /
sdb 8:16 0 80G 0 disk
└─sdb1 8:17 0 80G 0 part
└─centos_localhost-root 253:0 0 221.1G 0 lvm /
sdc 8:32 0 50G 0 disk

Scenario 1: extra space added to a virtual disk

lsblk shows that disk sda is 200G with three partitions sda1, sda2, and sda3 totalling 150G, leaving 50G unallocated. We will add that 50G to LVM and assign it to the logical volume centos_localhost-root.

Create a partition from sda's free space

Partition sda4: sda1, sda2, and sda3 already exist, so the next partition will be sda4.

Remember the partition number used in this example: 4

The steps:

1. Run: fdisk /dev/sda

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help):

2. Type n and press Enter

Command (m for help): n
Partition type:
p primary (3 primary, 0 extended, 1 free)
e extended

3. Type p and press Enter

Partition type:
p primary (3 primary, 0 extended, 1 free)
e extended
Select (default e): p

4. One of two prompts appears:

Case 1: if this is the 4th partition, just press Enter

Selected partition 4
First sector (314572800-419430399, default 314572800):

Pressing Enter uses all remaining disk space for the new partition.

Case 2: if the following appears, press Enter

Partition number (1-3, default 3):

Press Enter

First sector (314572800-419430399, default 314572800):

5. After step 4 the following appears; press Enter again

Last sector, +sectors or +size{K,M,G} (314572800-419430399, default 419430399):

After Enter you will see

Command (m for help):

6. Type t and press Enter

7. Type the partition number 4 and press Enter

8. Type 8e (Linux LVM) and press Enter

9. Type w and press Enter

If you see a "device or resource busy" warning:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)

then run:

partprobe

Check the partitioning result

lsblk

Result:
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 99G 0 part
│ ├─centos_localhost-root 253:0 0 221.1G 0 lvm /
│ └─centos_localhost-swap 253:1 0 7.9G 0 lvm [SWAP]
├─sda3 8:3 0 50G 0 part
│ └─centos_localhost-root 253:0 0 221.1G 0 lvm /
└─sda4 8:4 0 50G 0 part
sdb 8:16 0 80G 0 disk
└─sdb1 8:17 0 80G 0 part
└─centos_localhost-root 253:0 0 221.1G 0 lvm /
sdc 8:32 0 50G 0 disk

Add partition sda4 to the logical volume

Partition: sda4

Logical volume: centos_localhost-root

# the partition device path
CURRENT_PARTITION=/dev/sda4
# the LV name
CURRENT_LV_NAME=centos_localhost-root
pvcreate $CURRENT_PARTITION
CURRENT_VG_NAME=`vgdisplay |grep Name|awk '{print $3}' `
vgextend $CURRENT_VG_NAME $CURRENT_PARTITION
lvextend -l +100%FREE /dev/mapper/$CURRENT_LV_NAME
xfs_growfs /dev/mapper/$CURRENT_LV_NAME
df -h

Scenario 2: a new disk

lsblk shows the new disk sdc with 50G. We will add that 50G to LVM and assign it to the logical volume centos_localhost-root.

Create a partition on sdc

Partition sdc1: the first partition on this disk is number 1

Remember the partition number used in this example: 1

The steps:

1. Run: fdisk /dev/sdc

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help):

2. Type n and press Enter

3. Press Enter repeatedly to accept each default

until you see

Command (m for help):

4. Type t, press Enter, then Enter again

5. Type 8e and press Enter

6. Type w and press Enter

If you see a "device or resource busy" warning:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)

then run:

partprobe

Check the partitioning result

lsblk

Result:
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 99G 0 part
│ ├─centos_localhost-root 253:0 0 221.1G 0 lvm /
│ └─centos_localhost-swap 253:1 0 7.9G 0 lvm [SWAP]
├─sda3 8:3 0 50G 0 part
│ └─centos_localhost-root 253:0 0 221.1G 0 lvm /
└─sda4 8:4 0 50G 0 part
sdb 8:16 0 80G 0 disk
└─sdb1 8:17 0 80G 0 part
└─centos_localhost-root 253:0 0 221.1G 0 lvm /
sdc 8:32 0 50G 0 disk
└─sdc1 8:33 0 50G 0 part

Add partition sdc1 to the logical volume

Partition: sdc1

Logical volume: centos_localhost-root

# create the PV
pvcreate /dev/sdc1
# add the PV to the VG
CURRENT_VG_NAME=`vgdisplay |grep Name|awk '{print $3}' `
vgextend $CURRENT_VG_NAME /dev/sdc1
# extend the LV
lvextend -l +100%FREE /dev/mapper/centos_localhost-root
# grow the filesystem
xfs_growfs /dev/mapper/centos_localhost-root
df -h

Detailed walkthrough of adding a partition to a logical volume

# create a PV on the partition
pvcreate /dev/sda4
# Physical volume "/dev/sda4" successfully created.
# look up the VG name
vgdisplay |grep Name|awk '{print $3}'
# VG Name centos_localhost
# add the PV to the VG
vgextend centos_localhost /dev/sda4
# Volume group "centos_localhost" successfully extended
# extend the LV
lvextend -l +100%FREE /dev/mapper/centos_localhost-root
# Size of logical volume centos_localhost/root changed from 221.11 GiB (56605 extents) to <271.11 GiB (69404 extents).
# Logical volume centos_localhost/root successfully resized.
# grow the filesystem
xfs_growfs /dev/mapper/centos_localhost-root
# check disk capacity
df -h

xfs_growfs extends an XFS filesystem.

resize2fs extends ext filesystems.
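The split can be captured in a tiny helper that maps a filesystem type to its grow command (a sketch; `grow_cmd` is a name made up here):

```shell
# Map a filesystem type to the command that grows it online.
grow_cmd() {
  case "$1" in
    xfs)            echo "xfs_growfs" ;;   # XFS grows via the mount point
    ext2|ext3|ext4) echo "resize2fs"  ;;   # ext* grows via the device path
    *)              echo "unsupported" ;;
  esac
}
grow_cmd xfs    # xfs_growfs
grow_cmd ext4   # resize2fs
```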

Check the filesystem type

lsblk -f
df -Th

Change the filesystem (this reformats the partition and destroys its data)

mkfs -t xfs /dev/sdb1

Common ext filesystems are ext2, ext3, and ext4.

Post URL: https://github.com/maxzhao-it/blog/post/cc5b4532/

Deploy a jar to Nexus manually

mvn deploy:deploy-file -DgroupId=com.skytech.ark.auth  -DartifactId=com.skytech.ark.auth.stateless  -Dversion=1.0.1-SNAPSHOT -Dfile=F:\Downloads\1\1.0.1-SNAPSHOT\com.skytech.ark.auth.stateless-1.0.1-SNAPSHOT.jar -DrepositoryId=nexus-snapshot -Durl=http://nexus.skytech.io/repository/maven-skytech-snapshot/

settings.xml configuration
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository>D:\repository</localRepository>
  <servers>
    <!-- Used for deploy war to tomcat from tomcat6/7-maven-plugin-->
    <server>
      <id>nexus-release</id>
      <username>deployer</username>
      <password>xxx</password>
    </server>
    <server>
      <id>nexus-snapshot</id>
      <username>deployer</username>
      <password>xxx</password>
    </server>
  </servers>

  <mirrors>
    <mirror>
      <id>nexus-release</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.skytech.io/repository/maven-skytech/</url>
    </mirror>
    <mirror>
      <id>nexus-snapshot</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.skytech.io/repository/maven-skytech-snapshot/</url>
    </mirror>
    <mirror>
      <id>maven-59-release</id>
      <mirrorOf>*</mirrorOf>
      <url>http://32.1.0.59:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>

Post URL: https://github.com/maxzhao-it/blog/post/450e8a25/

Installation

Recommended: the official guide

Download

Download links

mkdir ~/tools && cd ~/tools
wget https://dlcdn.apache.org/nifi/1.18.0/nifi-1.18.0-bin.zip --no-check-certificate
wget https://dlcdn.apache.org/nifi/1.18.0/nifi-toolkit-1.18.0-bin.zip --no-check-certificate
unzip nifi-1.18.0-bin.zip
unzip nifi-toolkit-1.18.0-bin.zip
mv nifi-1.18.0 nifi
mv nifi-toolkit-1.18.0 nifi-toolkit
mv nifi ../
mv nifi-toolkit ../
cd ../nifi

Start

bin/nifi.sh start

# check status
bin/nifi.sh status

# stop
bin/nifi.sh stop

Configuration

/home/nifi/nifi/conf/bootstrap.conf

View the default credentials generated at first start

cat logs/nifi-app.log | grep Generated

Change the port

vim ~/nifi/conf/nifi.properties
# set nifi.web.https.port=58443

Change the username and password

# syntax: bin/nifi.sh set-single-user-credentials <username> <password>
~/nifi/bin/nifi.sh set-single-user-credentials nifi Skynj@123QWE

Configure certificates

The default certificate is only valid for 60 days, so generate a new one.

cd ~/
~/nifi-toolkit/bin/tls-toolkit.sh standalone -n '192.168.15.45' -C 'CN=Skynj,OU=NIFI' -o 'target' -d 3650

Check the generated files, then copy the certificates into the nifi configuration:

cp -rf ~/target/192.168.15.45/* ~/nifi/conf/

Configure nifi

vim ~/nifi/conf/nifi.properties
# a 12+ character key: nifi.sensitive.props.key=skynj@123qwe
# the web host: nifi.web.https.host=192.168.14.122
# the web port: nifi.web.https.port=58443
# start
~/nifi/bin/nifi.sh start
~/nifi/bin/nifi.sh status
# if startup fails, check the log
cat ~/nifi/logs/nifi-app.log

Check that the endpoint responds (-k skips verification of the self-signed certificate)

curl -k https://127.0.0.1:58443/nifi/login
curl -k https://192.168.14.122:58443/nifi/login

Access

Open in a browser: https://192.168.14.122:58443/nifi/login

Pseudo-cluster

Overview

  • Three nodes: node1, node2, node3
  • Node hostnames: node1.nifi, node2.nifi, node3.nifi
  • Primary node: node1
  • Embedded ZooKeeper
    • Client ports: 12181, 22181, 32181
    • Quorum ports: 12888, 22888, 32888
    • Leader-election ports: 13888, 23888, 33888
  • NiFi ports
    • Load-balance ports: 16342, 26342, 36342
    • HTTPS UI/API ports: 19443, 29443, 39443
    • Site-to-site ports: 10443, 20443, 30443
    • Cluster protocol ports: 11443, 21443, 31443

Prepare the environment

Edit the hosts file

# run as root; if this is not a pseudo-distributed setup, use the real LAN IPs; add these on every node
# 127.0.0.1 cannot be used here
echo '192.168.1.1 node1.nifi' >> /etc/hosts
echo '192.168.1.1 node2.nifi' >> /etc/hosts
echo '192.168.1.1 node3.nifi' >> /etc/hosts

Create users

# run as root
useradd -d "/home/nifi1" -m -s "/bin/bash" nifi1
useradd -d "/home/nifi2" -m -s "/bin/bash" nifi2
useradd -d "/home/nifi3" -m -s "/bin/bash" nifi3
# set passwords
passwd nifi1
passwd nifi2
passwd nifi3

SSH access from the primary node

su nifi1
ssh-keygen -t ecdsa
# press Enter through the prompts
# distribute the key to the other two nodes; some openssh-clients builds lack the ssh-copy-id command
ssh-copy-id -i ~/.ssh/id_ecdsa.pub nifi2@node2.nifi
ssh-copy-id -i ~/.ssh/id_ecdsa.pub nifi3@node3.nifi
# without ssh-copy-id, use:
scp ~/.ssh/id_ecdsa.pub nifi2@node2.nifi:/home/nifi2/
scp ~/.ssh/id_ecdsa.pub nifi3@node3.nifi:/home/nifi3/
ssh nifi2@node2.nifi 'mkdir ~/.ssh ; chmod 700 ~/.ssh;cat /home/nifi2/id_ecdsa.pub >> ~/.ssh/authorized_keys;chmod 600 ~/.ssh/authorized_keys '
ssh nifi3@node3.nifi 'mkdir ~/.ssh ; chmod 700 ~/.ssh;cat /home/nifi3/id_ecdsa.pub >> ~/.ssh/authorized_keys;chmod 600 ~/.ssh/authorized_keys'

Test

ssh nifi2@node2.nifi
exit
ssh nifi3@node3.nifi
exit

Install NiFi

Download

# install on node1
su nifi1
mkdir ~/tools ; cd ~/tools
wget https://dlcdn.apache.org/nifi/1.18.0/nifi-1.18.0-bin.zip --no-check-certificate
wget https://dlcdn.apache.org/nifi/1.18.0/nifi-toolkit-1.18.0-bin.zip --no-check-certificate
unzip nifi-1.18.0-bin.zip
unzip nifi-toolkit-1.18.0-bin.zip
mv nifi-1.18.0 nifi ; mv nifi ~/
mv nifi-toolkit-1.18.0 nifi-toolkit ; mv nifi-toolkit ~/
echo 'PATH=$PATH:/home/nifi1/nifi/bin:/home/nifi1/nifi-toolkit/bin' >> ~/.bashrc
echo 'export PATH' >> ~/.bashrc
source ~/.bashrc
cd ~/
# copy to node2 and node3
scp -r nifi nifi2@node2.nifi:/home/nifi2/
scp -r nifi nifi3@node3.nifi:/home/nifi3/

Cluster CA

Generate the CA

cd ~/
# generate in one batch
tls-toolkit.sh standalone -n 'node[1-3].nifi' -C 'CN=nifi' -c 'ca.nifi' -o 'ca' -d 3650
# or generate one at a time
tls-toolkit.sh standalone -n 'node1.nifi' -c 'ca.nifi' -o 'ca' -d 3650
tls-toolkit.sh standalone -n 'node2.nifi' -o 'ca' -d 3650
tls-toolkit.sh standalone -n 'node3.nifi' -o 'ca' -d 3650
tls-toolkit.sh standalone -C 'CN=nifi' -o 'ca' -d 3650
ll ca

Result

#  The client certificate in a PKCS12 keystore
-rw------- 1 nifi1 nifi1 3469 Dec 15 14:47 CN=nifi.p12
# The corresponding file containing the randomly-generated password. Use -b or --clientCertPassword when generating to specify a password
-rw------- 1 nifi1 nifi1 43 Dec 15 14:47 CN=nifi.password
-rw------- 1 nifi1 nifi1 1224 Dec 15 14:47 nifi-cert.pem
-rw------- 1 nifi1 nifi1 1675 Dec 15 14:47 nifi-key.key
drwx------ 2 nifi1 nifi1 71 Dec 15 14:47 node1.nifi
drwx------ 2 nifi1 nifi1 71 Dec 15 14:47 node2.nifi
drwx------ 2 nifi1 nifi1 71 Dec 15 14:47 node3.nifi

Copy the certificates

cp -R  ~/ca/node1.nifi/* ~/nifi/conf/
scp -r ~/ca/node2.nifi/* nifi2@node2.nifi:/home/nifi2/nifi/conf/
scp -r ~/ca/node3.nifi/* nifi3@node3.nifi:/home/nifi3/nifi/conf/

Configure the nodes

Run everything on node1.

node1 configuration

# run directly on node1.nifi
sed -i 's?nifi.state.management.embedded.zookeeper.start=false?nifi.state.management.embedded.zookeeper.start=true?g' ~/nifi/conf/nifi.properties
# 10443 and 11443 are already node1's values; these two seds are kept for symmetry with node2/node3
sed -i 's?nifi.remote.input.socket.port=10443?nifi.remote.input.socket.port=10443?g' ~/nifi/conf/nifi.properties
sed -i 's?nifi.web.https.port=9443?nifi.web.https.port=19443?g' ~/nifi/conf/nifi.properties
sed -i 's?nifi.cluster.is.node=false?nifi.cluster.is.node=true?g' ~/nifi/conf/nifi.properties
sed -i 's?nifi.cluster.node.protocol.port=11443?nifi.cluster.node.protocol.port=11443?g' ~/nifi/conf/nifi.properties
sed -i 's?nifi.cluster.load.balance.host=?nifi.cluster.load.balance.host=node1.nifi?g' ~/nifi/conf/nifi.properties
sed -i 's?nifi.cluster.load.balance.port=6342?nifi.cluster.load.balance.port=16342?g' ~/nifi/conf/nifi.properties
sed -i 's?nifi.zookeeper.connect.string=?nifi.zookeeper.connect.string=node1.nifi:12181,node2.nifi:22181,node3.nifi:32181?g' ~/nifi/conf/nifi.properties

node2 configuration

# run directly on node1.nifi
ssh nifi2@node2.nifi "sed -i 's?nifi.state.management.embedded.zookeeper.start=false?nifi.state.management.embedded.zookeeper.start=true?g' ~/nifi/conf/nifi.properties"
ssh nifi2@node2.nifi "sed -i 's?nifi.remote.input.socket.port=10443?nifi.remote.input.socket.port=20443?g' ~/nifi/conf/nifi.properties"
ssh nifi2@node2.nifi "sed -i 's?nifi.web.https.port=9443?nifi.web.https.port=29443?g' ~/nifi/conf/nifi.properties"
ssh nifi2@node2.nifi "sed -i 's?nifi.cluster.is.node=false?nifi.cluster.is.node=true?g' ~/nifi/conf/nifi.properties"
ssh nifi2@node2.nifi "sed -i 's?nifi.cluster.node.protocol.port=11443?nifi.cluster.node.protocol.port=21443?g' ~/nifi/conf/nifi.properties"
ssh nifi2@node2.nifi "sed -i 's?nifi.cluster.load.balance.host=?nifi.cluster.load.balance.host=node2.nifi?g' ~/nifi/conf/nifi.properties"
ssh nifi2@node2.nifi "sed -i 's?nifi.cluster.load.balance.port=6342?nifi.cluster.load.balance.port=26342?g' ~/nifi/conf/nifi.properties"
ssh nifi2@node2.nifi "sed -i 's?nifi.zookeeper.connect.string=?nifi.zookeeper.connect.string=node1.nifi:12181,node2.nifi:22181,node3.nifi:32181?g' ~/nifi/conf/nifi.properties"

node3 configuration

# run directly on node1.nifi
ssh nifi3@node3.nifi "sed -i 's?nifi.state.management.embedded.zookeeper.start=false?nifi.state.management.embedded.zookeeper.start=true?g' ~/nifi/conf/nifi.properties"
ssh nifi3@node3.nifi "sed -i 's?nifi.remote.input.socket.port=10443?nifi.remote.input.socket.port=30443?g' ~/nifi/conf/nifi.properties"
ssh nifi3@node3.nifi "sed -i 's?nifi.web.https.port=9443?nifi.web.https.port=39443?g' ~/nifi/conf/nifi.properties"
ssh nifi3@node3.nifi "sed -i 's?nifi.cluster.is.node=false?nifi.cluster.is.node=true?g' ~/nifi/conf/nifi.properties"
ssh nifi3@node3.nifi "sed -i 's?nifi.cluster.node.protocol.port=11443?nifi.cluster.node.protocol.port=31443?g' ~/nifi/conf/nifi.properties"
ssh nifi3@node3.nifi "sed -i 's?nifi.cluster.load.balance.host=?nifi.cluster.load.balance.host=node3.nifi?g' ~/nifi/conf/nifi.properties"
ssh nifi3@node3.nifi "sed -i 's?nifi.cluster.load.balance.port=6342?nifi.cluster.load.balance.port=36342?g' ~/nifi/conf/nifi.properties"
ssh nifi3@node3.nifi "sed -i 's?nifi.zookeeper.connect.string=?nifi.zookeeper.connect.string=node1.nifi:12181,node2.nifi:22181,node3.nifi:32181?g' ~/nifi/conf/nifi.properties"
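The three per-node blocks differ only in the leading digit of each port. That numbering scheme can be stated once (a sketch; `node_ports` is a helper invented here, not part of NiFi):

```shell
# Derive one node's ports from its index n: UI n9443, site-to-site n0443,
# cluster protocol n1443, load balance n6342, ZooKeeper client n2181.
node_ports() {
  n=$1
  echo "https=${n}9443 s2s=${n}0443 cluster=${n}1443 balance=${n}6342 zk=${n}2181"
}
node_ports 1
node_ports 2
node_ports 3
```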

Adjust the cluster election wait time and candidate count

# node1
sed -i 's?nifi.cluster.flow.election.max.wait.time=5 mins?nifi.cluster.flow.election.max.wait.time=1 mins?g' ~/nifi/conf/nifi.properties
sed -i 's?nifi.cluster.flow.election.max.candidates=?nifi.cluster.flow.election.max.candidates=3?g' ~/nifi/conf/nifi.properties
# node2
ssh nifi2@node2.nifi "sed -i 's?nifi.cluster.flow.election.max.wait.time=5 mins?nifi.cluster.flow.election.max.wait.time=1 mins?g' ~/nifi/conf/nifi.properties"
ssh nifi2@node2.nifi "sed -i 's?nifi.cluster.flow.election.max.candidates=?nifi.cluster.flow.election.max.candidates=3?g' ~/nifi/conf/nifi.properties"
# node3
ssh nifi3@node3.nifi "sed -i 's?nifi.cluster.flow.election.max.wait.time=5 mins?nifi.cluster.flow.election.max.wait.time=1 mins?g' ~/nifi/conf/nifi.properties"
ssh nifi3@node3.nifi "sed -i 's?nifi.cluster.flow.election.max.candidates=?nifi.cluster.flow.election.max.candidates=3?g' ~/nifi/conf/nifi.properties"

Set the sensitive-properties key on each node

# node1
sed -i 's?nifi.sensitive.props.key=?nifi.sensitive.props.key=qweQWE123123?g' ~/nifi/conf/nifi.properties
# node2
ssh nifi2@node2.nifi "sed -i 's?nifi.sensitive.props.key=?nifi.sensitive.props.key=qweQWE123123?g' ~/nifi/conf/nifi.properties"
# node3
ssh nifi3@node3.nifi "sed -i 's?nifi.sensitive.props.key=?nifi.sensitive.props.key=qweQWE123123?g' ~/nifi/conf/nifi.properties"

ZooKeeper configuration

Run directly on node1.nifi.

Add the ZooKeeper server entries

# node1
sed -i 's?server.1=?server.1=node1.nifi:12888:13888;12181?g' ~/nifi/conf/zookeeper.properties
echo '' >> ~/nifi/conf/zookeeper.properties
echo 'server.2=node2.nifi:22888:23888;22181' >> ~/nifi/conf/zookeeper.properties
echo 'server.3=node3.nifi:32888:33888;32181' >> ~/nifi/conf/zookeeper.properties
# node2
scp ~/nifi/conf/zookeeper.properties nifi2@node2.nifi:/home/nifi2/nifi/conf/
# node3
scp ~/nifi/conf/zookeeper.properties nifi3@node3.nifi:/home/nifi3/nifi/conf/

Set each node's ZooKeeper id

# node1
mkdir -p ~/nifi/state/zookeeper
echo 1 > ~/nifi/state/zookeeper/myid
# node2
ssh nifi2@node2.nifi "mkdir -p ~/nifi/state/zookeeper;echo 2 > ~/nifi/state/zookeeper/myid"
# node3
ssh nifi3@node3.nifi "mkdir -p ~/nifi/state/zookeeper;echo 3 > ~/nifi/state/zookeeper/myid"

Configure the state-management connect string

# node1
sed -i 's?<property name="Connect String"></property>?<property name="Connect String">node1.nifi:12181,node2.nifi:22181,node3.nifi:32181</property>?g' ~/nifi/conf/state-management.xml
# node2
scp ~/nifi/conf/state-management.xml nifi2@node2.nifi:/home/nifi2/nifi/conf/
# node3
scp ~/nifi/conf/state-management.xml nifi3@node3.nifi:/home/nifi3/nifi/conf/

Identity configuration

# node1
sed -i 's?<property name="Initial User Identity 1"></property>?<property name="Initial User Identity 1">CN=nifi</property>\n<property name="Initial User Identity 2">CN=node1.nifi, OU=NIFI</property>\n<property name="Initial User Identity 3">CN=node2.nifi, OU=NIFI</property>\n<property name="Initial User Identity 4">CN=node3.nifi, OU=NIFI</property>?g' ~/nifi/conf/authorizers.xml
sed -i 's?<property name="Initial Admin Identity"></property>?<property name="Initial Admin Identity">CN=nifi</property>?g' ~/nifi/conf/authorizers.xml
sed -i 's?<property name="Node Identity 1"></property>?<property name="Node Identity 1">CN=node1.nifi, OU=NIFI</property>\n<property name="Node Identity 2">CN=node2.nifi, OU=NIFI</property>\n<property name="Node Identity 3">CN=node3.nifi, OU=NIFI</property>?g' ~/nifi/conf/authorizers.xml
# node2
scp ~/nifi/conf/authorizers.xml nifi2@node2.nifi:/home/nifi2/nifi/conf/
# node3
scp ~/nifi/conf/authorizers.xml nifi3@node3.nifi:/home/nifi3/nifi/conf/
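The repeated Initial User Identity properties inserted above can also be generated instead of hand-written; a sketch (plain echo, not part of any NiFi tooling):

```shell
# Emit the per-node identity properties for authorizers.xml.
i=2   # identity 1 is the admin certificate, CN=nifi
for node in node1.nifi node2.nifi node3.nifi; do
  echo "<property name=\"Initial User Identity ${i}\">CN=${node}, OU=NIFI</property>"
  i=$((i+1))
done
```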

Start

~/nifi/bin/nifi.sh start
ssh nifi2@node2.nifi "source /etc/profile;~/nifi/bin/nifi.sh start"
ssh nifi3@node3.nifi "source /etc/profile;~/nifi/bin/nifi.sh start"

Watch the startup log

tail -f ~/nifi/logs/nifi-app.log

Open the pseudo-cluster ports in the firewall

firewall-cmd --zone=public --add-port=19443/tcp --add-port=29443/tcp --add-port=39443/tcp --permanent

Cluster

Install ZooKeeper

An external ZooKeeper is assumed at:

192.168.15.44:22181

Install the cluster

Configure certificates

Node 58

cd ~/
~/nifi-toolkit/bin/tls-toolkit.sh standalone -n '192.168.15.58,192.168.15.59' -o 'target' -c 'ca.nifi' -d 3650
scp target/192.168.15.58/* nifi@192.168.15.58:/home/nifi/nifi/conf/
scp target/192.168.15.59/* nifi@192.168.15.59:/home/nifi/nifi/conf/

vim ~/nifi/conf/authorizers.xml

<accessPolicyProvider>
<property name="Node Identity 1">CN=192.168.15.58, OU=NIFI</property>
<property name="Node Identity 2">CN=192.168.15.59, OU=NIFI</property>
<property name="Initial Admin Identity">192.168.15.58</property>
</accessPolicyProvider>

Configure NiFi on node 58

sed -i 's/nifi.cluster.is.node=false/nifi.cluster.is.node=true/g' ~/nifi/conf/nifi.properties
sed -i 's/nifi.zookeeper.connect.string=/nifi.zookeeper.connect.string=192.168.15.44:22181/g' ~/nifi/conf/nifi.properties
sed -i 's/nifi.sensitive.props.key=/nifi.sensitive.props.key=Skynj@123QWE/g' ~/nifi/conf/nifi.properties
sed -i 's?<property name=\"Connect String\"></property>?<property name=\"Connect String\">192.168.15.44:22181</property>?g' ~/nifi/conf/state-management.xml
cat ~/nifi/conf/nifi.properties

The resulting values should include:

nifi.sensitive.props.key=Skynj@123QWE
nifi.cluster.is.node=true
# election
nifi.cluster.flow.election.max.candidates=
nifi.zookeeper.connect.string=192.168.15.44:22181

Reset the state files

All cluster nodes must share consistent authorizations.xml and users.xml.

rm ~/nifi/conf/authorizations.xml
rm ~/nifi/conf/users.xml
rm ~/nifi/conf/flow.*

Change the username and password

Not needed on follower nodes.

~/nifi/bin/nifi.sh set-single-user-credentials nifi Skynj@123QWE

Start

~/nifi/bin/nifi.sh start
~/nifi/bin/nifi.sh status
cat ~/nifi/logs/nifi-app.log
tail -f ~/nifi/logs/nifi-app.log

Restart

~/nifi/bin/nifi.sh restart
~/nifi/bin/nifi.sh status

Using the embedded ZooKeeper

Edit nifi.properties

nifi.state.management.configuration.file=./conf/state-management.xml
nifi.state.management.embedded.zookeeper.start=true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# https
nifi.remote.input.secure=true
nifi.web.https.host=192.168.15.58
nifi.web.https.port=9443

nifi.sensitive.props.key=Skynj@123QWE

nifi.cluster.protocol.is.secure=true
nifi.cluster.is.node=true
nifi.cluster.node.address=192.168.15.58
nifi.cluster.node.protocol.port=11443
# recommended: number of nodes x 7
nifi.cluster.node.protocol.max.threads=16
nifi.cluster.flow.election.max.wait.time=5 mins
# election
nifi.cluster.flow.election.max.candidates=


nifi.cluster.load.balance.host=192.168.15.58
nifi.cluster.load.balance.port=6342
nifi.zookeeper.connect.string=192.168.15.44:22181
nifi.zookeeper.root.node=/nifi-test

Configure ~/nifi/conf/state-management.xml on each node

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<stateManagement>
  <local-provider>
    <id>local-provider</id>
    <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
    <property name="Directory">./state/local</property>
    <property name="Always Sync">false</property>
    <property name="Partitions">16</property>
    <property name="Checkpoint Interval">2 mins</property>
  </local-provider>
  <cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">192.168.15.58:2181,192.168.15.59:2181</property>
    <property name="Root Node">/nifi-test</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
  </cluster-provider>
  <!-- <cluster-provider>
    <id>redis-provider</id>
    <class>org.apache.nifi.redis.state.RedisStateProvider</class>
    <property name="Redis Mode">Standalone</property>
    <property name="Connection String">localhost:6379</property>
  </cluster-provider>
  -->
</stateManagement>

~/nifi/conf/zookeeper.properties

server.1=192.168.15.58:2888:3888;2181
server.2=192.168.15.59:2888:3888;2181

Format: server.<node id>=IP:2888:3888;2181
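Given that format, the server.N lines can be generated from a host list (a sketch):

```shell
# Print one server.N line per host, numbering from 1.
i=1
for host in 192.168.15.58 192.168.15.59; do
  echo "server.${i}=${host}:2888:3888;2181"
  i=$((i+1))
done
```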

Node 58

mkdir -p ~/nifi/state/zookeeper
echo 1 > ~/nifi/state/zookeeper/myid

Node 59

mkdir -p ~/nifi/state/zookeeper
echo 2 > ~/nifi/state/zookeeper/myid

Post URL: https://github.com/maxzhao-it/blog/post/ba72ba5e/

Packaging a Java k8s image

Packaging

Configuration

pom.xml

<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>1.4.13</version>
  <executions>
    <execution>
      <id>default</id>
      <goals>
        <!-- comment these out if you do not want docker to run during packaging -->
        <goal>build</goal>
        <goal>push</goal>
        <goal>tag</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <imageName>${project.artifactId}</imageName>
    <!--docker.image.prefix/docker.project.artifactId-->
    <repository>maxzhao/${project.artifactId}</repository>
    <dockerDirectory>src/main/docker</dockerDirectory>
    <buildArgs>
      <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
    </buildArgs>
    <resources>
      <resource>
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>${project.build.finalName}.jar</include>
      </resource>
    </resources>
  </configuration>
</plugin>

Dockerfile

# base image: Java
FROM openjdk:8-jdk-alpine
MAINTAINER maxzhao
# VOLUME declares /tmp as a mount point: docker creates a temporary
# directory under /var/lib/docker on the host and links it to /tmp in the container
VOLUME /tmp
ENV PARAMS=""

EXPOSE 8080
ARG JAR_FILE
# copy the jar into the container as app.jar
ADD ${JAR_FILE} /app.jar
ENTRYPOINT ["sh","-c","java $JAVA_OPTS -jar /app.jar $PARAMS"]

Build

mvn package dockerfile:build
# save the image to a local file
docker save -o /admin.tar image-name
# load a local image file
docker load </usr/local/admin.tar
# list images
docker images
# run the image
docker run --name admin -p 8000:8000 -d image-name
# stop
docker ps
docker stop <container-id>
# remove
docker rm <container-id>
docker rmi <image-name>

Post URL: https://github.com/maxzhao-it/blog/post/b46c777d/

Syntax reference

Add the Maven plugin

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>3.2.2</version>
  <inherited>true</inherited>
  <configuration>
    <!--https://maven.apache.org/shared/maven-archiver/index.html-->
    <!--https://maven.apache.org/shared/maven-archiver/#class_manifest-->
    <archive>
      <manifestFile>src/main/resources/META-INF/MANIFEST.MF</manifestFile>
      <forced>false</forced>
      <manifest>
        <addClasspath>false</addClasspath>
        <addDefaultEntries>true</addDefaultEntries>
        <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
        <addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
        <addBuildEnvironmentEntries>true</addBuildEnvironmentEntries>
        <addExtensions>false</addExtensions>
      </manifest>
    </archive>
  </configuration>
</plugin>

Configure MANIFEST.MF

Build-By: zhaoliansheng

Post URL: https://github.com/maxzhao-it/blog/post/b46c777d/

Using Nacos as an example.

Download WinSW

Copy WinSW-x64.exe into nacos/bin and rename it Nacos.exe.

Create a Nacos.xml file next to it with:

<service>
  <id>Nacos-2.2.0</id>
  <name>Nacos-2.2.0 Service</name>
  <description>Nacos.</description>
  <executable>D:\develop\nacos\nacos\bin\startup.cmd</executable>
  <arguments> -m standalone </arguments>
  <stopexecutable>D:\develop\nacos\nacos\bin\shutdown.cmd</stopexecutable>
  <logpath>D:\develop\nacos\nacos\logs\</logpath>
  <log mode="roll"></log>
</service>

Open a command prompt in that directory and run:

Nacos.exe install

Start the service

net start Nacos-2.2.0

Other options

The full sample WinSW configuration, for reference:
<!--
MIT License
Copyright (c) 2008-2020 Kohsuke Kawaguchi, Sun Microsystems, Inc., CloudBees,
Inc., Oleg Nenashev and other contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
-->

<!--
This is a sample configuration of the Windows Service Wrapper.
This configuration file should be placed near the WinSW executable, the name should be the same.
E.g. for myapp.exe the configuration file name should be myapp.xml

You can find more information about configuration options here: https://github.com/kohsuke/winsw/blob/master/doc/xmlConfigFile.md
-->
<service>

<!--
SECTION: Mandatory options
All options in other sections are optional
-->

<!-- ID of the service. It should be unique accross the Windows system-->
<id>myapp</id>
<!-- Display name of the service -->
<name>MyApp Service (powered by WinSW)</name>
<!-- Service description -->
<description>This service is a service created from a sample configuration</description>

<!-- Path to the executable, which should be started -->
<executable>%BASE%\myExecutable.exe</executable>

<!--
SECTION: Installation
These options are being used during the installation only.
Their modification will not take affect without the service re-installation.
-->

<!--
OPTION: serviceaccount
Defines account, under which the service should run.
-->
<!--
<serviceaccount>
<domain>YOURDOMAIN</domain>
<user>useraccount</user>
<password>Pa55w0rd</password>
<allowservicelogon>true</allowservicelogon>
</serviceaccount>
-->

<!--
OPTION: onfailure
Defines a sequence of actions, which should be performed if the managed executable fails.
Supported actions: restart, reboot, none
-->
<!--
<onfailure action="restart" delay="10 sec"/>
<onfailure action="restart" delay="20 sec"/>
<onfailure action="reboot" />
-->

<!--
OPTION: resetfailure
Time, after which the Windows service resets the failure status.
Default value: 1 day
-->
<!--
<resetfailure>1 hour</resetfailure>
-->

<!--
OPTION: securityDescriptor
The security descriptor string for the service in SDDL form.
For more information, see https://docs.microsoft.com/windows/win32/secauthz/security-descriptor-definition-language.
-->

<!--<securityDescriptor></securityDescriptor>-->

<!--
SECTION: Executable management
-->

<!--
OPTION: arguments
Arguments, which should be passed to the executable
-->
<!--
<arguments>-classpath c:\cygwin\home\kohsuke\ws\hello-world\out\production\hello-world test.Main</arguments>
-->

<!--
OPTION: startarguments
Arguments, which should be passed to the executable when it starts
If specified, overrides 'arguments'.
-->
<!--
<startarguments></startarguments>
-->

<!--
OPTION: workingdirectory
If specified, sets the default working directory of the executable
Default value: Directory of the service wrapper executable.
-->
<!--
<workingdirectory>C:\myApp\work</workingdirectory>
-->

<!--
OPTION: priority
Desired process priority.
Possible values: Normal, Idle, High, RealTime, BelowNormal, AboveNormal
Default value: Normal
-->
<priority>Normal</priority>

<!--
OPTION: stoptimeout
Time to wait for the service to gracefully shut down the executable before it is forcibly killed
Default value: 15 seconds
-->
<stoptimeout>15 sec</stoptimeout>

<!--
OPTION: stopparentprocessfirst
If set, WinSW will terminate the parent process before stopping the children.
Default value: true
-->
<stopparentprocessfirst>true</stopparentprocessfirst>


<!--
OPTION: stopexecutable
Path to an optional executable that performs shutdown of the service.
This executable is used if and only if 'stoparguments' is specified.
If 'stoparguments' is defined without this option, 'executable' is used as the stop executable
-->
<!--
<stopexecutable>%BASE%\stop.exe</stopexecutable>
-->

<!--
OPTION: stoparguments
Additional arguments to pass to the stop executable during termination.
Specifying this option also enables termination of the executable via the stop executable
-->
<!--
<stoparguments>-stop true</stoparguments>
-->
<!--
SECTION: Service management
-->
<!--
OPTION: startmode
Defines the start mode of the service.
Supported modes: Automatic, Manual, Boot, System (the latter two are supported for driver services only)
Default mode: Automatic
-->
<startmode>Automatic</startmode>

<!--
OPTION: delayedAutoStart
Enables Delayed Automatic Start if 'Automatic' is specified in the 'startmode' field.
See the WinSW documentation for supported platform versions and limitations.
-->
<!--<delayedAutoStart/>-->

<!--
OPTION: depend
Optionally specifies services that must start before this service starts.
-->
<!--
<depend>Eventlog</depend>
<depend>W32Time</depend>
-->

<!--
OPTION: waithint
The estimated time required for a pending stop operation.
Before this time elapses, the service should make its next call to the SetServiceStatus function;
otherwise the service will be marked as non-responsive.
Default value: 15 seconds
-->
<waithint>15 sec</waithint>

<!--
OPTION: sleeptime
The time before the service should make its next call to the SetServiceStatus function.
Do not wait longer than the wait hint. A good interval is one-tenth of the wait hint but not less than 1 second and not more than 10 seconds.
Default value: 1 second
-->
<sleeptime>1 sec</sleeptime>

<!--
OPTION: interactive
Indicates the service can interactwith the desktop.
-->
<!--
<interactive/>
-->

<!--
SECTION: Logging
-->

<!--
OPTION: logpath
Sets a custom logging directory for all logs produced by the service wrapper
Default value: the directory containing the wrapper executable
-->
<!--
<logpath>%BASE%\logs</logpath>
-->

<!--
OPTION: log
Defines logging mode for logs produced by the executable.
Supported modes:
* append - Just append to the existing log
* none - Do not save executable logs to the disk
* reset - Wipe the log files on startup
* roll - Roll logs based on size
* roll-by-time - Roll logs based on time
* rotate - Rotate logs based on size (8 logs, 10 MB each). This mode is deprecated; use "roll"
Default mode: append

Each mode has different settings.
See https://github.com/kohsuke/winsw/blob/master/doc/loggingAndErrorReporting.md for more details
-->
<log mode="append">
<!--
<setting1/>
<setting2/>
-->
</log>

<!--
SECTION: Environment setup
-->
<!--
OPTION: env
Sets or overrides environment variables.
Multiple entries may be configured at the top level.
-->
<!--
<env name="MY_TOOL_HOME" value="C:\etc\tools\myTool" />
<env name="LM_LICENSE_FILE" value="host1;host2" />
-->


<!--
OPTION: download
List of downloads to be performed by the wrapper before starting
-->
<!--
<download from="http://www.google.com/" to="%BASE%\index.html" />

Download and fail the service startup on error:
<download from="http://www.nosuchhostexists.com/" to="%BASE%\dummy.html" failOnError="true"/>
An example of insecure Basic authentication (unsafe because the connection is not encrypted):
<download from="http://example.com/some.dat" to="%BASE%\some.dat"
auth="basic" unsecureAuth="true"
username="aUser" password="aPassw0rd" />
Secure Basic authentication via HTTPS:
<download from="https://example.com/some.dat" to="%BASE%\some.dat"
auth="basic" username="aUser" password="aPassw0rd" />
Secure authentication when the target server and the client are members of the same domain or
the server domain and the client domain belong to the same forest with a trust:
<download from="https://example.com/some.dat" to="%BASE%\some.dat" auth="sspi" />
-->

<!--
SECTION: Other options
-->

<!--
OPTION: beeponshutdown
Indicates the service should beep when it finishes on shutdown (if supported by the OS).
-->
<!--
<beeponshutdown/>
-->

<!--
SECTION: Extensions
This configuration section allows specifying custom extensions.
More info is available here: https://github.com/kohsuke/winsw/blob/master/doc/extensions/extensions.md
-->

<!--
<extensions>
Extension 1: id values must be unique
<extension enabled="true" id="extension1" className="winsw.Plugins.SharedDirectoryMapper.SharedDirectoryMapper">
<mapping>
<map enabled="false" label="N:" uncpath="\\UNC"/>
<map enabled="false" label="M:" uncpath="\\UNC2"/>
</mapping>
</extension>
...
</extensions>
-->

</service>
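Pulling the essential options together, a minimal sketch of a working configuration might look like the following. This is an illustrative example, not the full sample above; the id `myapp`, the display name, and the executable path are placeholders you would replace with your own values:

```xml
<!-- Minimal WinSW configuration sketch; 'myapp' and the executable path are placeholders -->
<service>
  <!-- Unique service ID across the Windows system -->
  <id>myapp</id>
  <name>MyApp Service</name>
  <description>Runs MyApp as a Windows service.</description>
  <!-- %BASE% resolves to the directory containing the WinSW executable -->
  <executable>%BASE%\myExecutable.exe</executable>
  <!-- Restart the executable automatically if it fails -->
  <onfailure action="restart" delay="10 sec"/>
  <!-- Roll executable logs based on size -->
  <log mode="roll"></log>
</service>
```

With the WinSW binary renamed to match the configuration file (e.g. `myapp.exe` next to `myapp.xml`), the service is typically registered with `myapp.exe install` and then started with `myapp.exe start`.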

本文地址: https://github.com/maxzhao-it/blog/post/a895626f/