Kubernetes Installation

1. Environment

192.168.31.211 - vhost1 - control-plane node
192.168.31.212 - vhost2 - worker node
192.168.31.213 - vhost3 - worker node
192.168.31.208 - test - local image registry

Operating system: CentOS 7 Linux

Kubernetes version: v1.26.0

2. Install a Local Docker Registry

The official image registries are unreachable from mainland China, so set up a local registry: find and download the required image files online, then import them (see Section 5, Resources, at the end).

Host: test

# docker run --restart=always -d -p 15000:5000 -v /var/lib/registry:/var/lib/registry registry

Configure HTTP access (i.e., no HTTPS required):

# cat /etc/docker/daemon.json
{
  "registry-mirrors" : [
    "https://cr.console.aliyun.com/"
  ],
  # this part
  "insecure-registries": [
    "192.168.31.208:15000"
  ]
}
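
Docker only reads daemon.json at startup, so restart it for the registry settings to take effect (assuming systemd manages Docker):

# systemctl restart docker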

3. Cluster Host Configuration and Required Software

Hosts: vhost1, vhost2, vhost3

3.1 Prerequisites

# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# sudo modprobe overlay
# sudo modprobe br_netfilter

Set the required sysctl parameters; they persist across reboots:

# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

Apply the sysctl parameters without rebooting:

sudo sysctl --system
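
To verify that the modules are loaded and the parameters took effect (quick checks from the upstream Kubernetes docs):

# lsmod | grep br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward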

3.2 Install kubeadm, kubectl, and kubelet

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# setenforce 0
# yum install -y kubelet kubeadm kubectl
# systemctl enable kubelet && systemctl start kubelet
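
setenforce 0 only lasts until the next reboot, and kubeadm also expects swap to be off; a sketch to make both persistent (assumption: permissive SELinux and no swap are acceptable on these lab hosts):

# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab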

3.3 Install containerd

3.3.1 Download and install containerd

# tar Cxzvf /usr/local containerd-1.6.15-linux-amd64.tar.gz
# cp containerd.service /usr/lib/systemd/system/containerd.service
# systemctl daemon-reload
# systemctl enable --now containerd
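
The containerd.service unit file is not included in the tarball; one way to obtain it, assuming internet access, is from the containerd source repository:

# curl -o containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service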

3.3.2 Download and install runc

# install -m 755 runc.amd64 /usr/local/sbin/runc

3.3.3 Verify

# ctr container list
# crictl ps

If the following error messages appear:

WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
ERRO[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD

apply this fix:

# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
# crictl config image-endpoint unix:///run/containerd/containerd.sock
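
These two commands persist the endpoints to /etc/crictl.yaml; a sketch of the resulting file:

# cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock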

3.3.4 Download and install the CNI plugins

# mkdir -p /opt/cni/bin
# tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz

3.3.5 Configure the systemd cgroup driver

# mkdir /etc/containerd
# containerd config default > /etc/containerd/config.toml
# vim /etc/containerd/config.toml  # edit as follows
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

3.3.6 Change the sandbox (pause) image address

# vim /etc/containerd/config.toml  # edit as follows
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"

3.3.7 Configure containerd to use HTTP

# vim /etc/containerd/config.toml  # edit as follows
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = ""

  [plugins."io.containerd.grpc.v1.cri".registry.auths]

  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    # this block
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.31.208:15000"]
      endpoint = ["http://192.168.31.208:15000"]

  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    # this block
    [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.31.208:15000".tls]
      insecure_skip_verify = true

  [plugins."io.containerd.grpc.v1.cri".registry.headers]

3.3.8 Restart containerd

# systemctl restart containerd
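
A quick sanity check after the restart (a sketch; the pulled image name is hypothetical and must actually exist in the local registry):

# crictl info | grep -i systemdcgroup
# crictl pull 192.168.31.208:15000/pause:3.2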

4. Start the Cluster

4.1 Initialize the cluster with kubeadm

Host: vhost1

# kubeadm init --image-repository=registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vhost1] and IPs [10.96.0.1 192.168.31.211]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vhost1] and IPs [192.168.31.211 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vhost1] and IPs [192.168.31.211 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 44.098707 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vhost1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vhost1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 9pk20q.rk8jrrx5valnw0di
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.211:6443 --token 9pk20q.rk8jrrx5valnw0di \
--discovery-token-ca-cert-hash sha256:0c2746c2e25ebb8543c5e905454a54887562bafacf69465fc7a83d6159bf3a1c

4.2 Install a Pod network add-on

Host: vhost1

Download calico.yaml.
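
One way to fetch it (an assumption: the v3.25.0 manifest from the Calico project repository; pick the release you need):

# curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml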

# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl apply -f calico.yaml

4.3 Join the other nodes to the cluster

Hosts: vhost2, vhost3

# kubeadm join 192.168.31.211:6443 --token 9pk20q.rk8jrrx5valnw0di --discovery-token-ca-cert-hash sha256:0c2746c2e25ebb8543c5e905454a54887562bafacf69465fc7a83d6159bf3a1c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

4.4 Verification and other commands

Host: vhost1

4.4.1 Check that the cluster nodes came up

# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
vhost1   Ready    control-plane   19d   v1.26.0
vhost2   Ready    <none>          19d   v1.26.0
vhost3   Ready    <none>          19d   v1.26.0

4.4.2 Tokens

# kubeadm token list

By default, tokens expire after 24 hours. If you need to join a node after the current token has expired, create a new token on the control-plane node:

# kubeadm token create
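
kubeadm can also print a complete, ready-to-paste join command:

# kubeadm token create --print-join-command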

If you do not have the value of --discovery-token-ca-cert-hash, you can obtain it by running the following command chain on the control-plane node:

# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

4.4.3 Rebuild the cluster

# reset
kubeadm reset

# control plane
kubeadm init ...

# worker nodes
kubeadm join ...

5. Resources

6. References

Linux Command Tools: awk Basics

1. Introduction

A pattern scanning and processing language, commonly used as a text formatting tool.

2. Command Format

awk [OPTIONS] 'script' file
awk [OPTIONS] -f scriptfile file

3. Examples

# cat data.txt
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 777/rpcbind
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1075/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1409/master
tcp 0 0 127.0.0.1:17380 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 127.0.0.1:17400 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 127.0.0.1:17221 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 1133/tinyproxy
tcp 0 0 127.0.0.1:17131 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 0.0.0.0:9080 0.0.0.0:* LISTEN 2625/WorkerMan: mas
tcp 0 0 127.0.0.1:8899 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 127.0.0.1:17033 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 127.0.0.1:15640 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 127.0.0.1:15538 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 127.0.0.1:15445 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 0.0.0.0:8090 0.0.0.0:* LISTEN 1412/phpstudy
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 1080/redis-server *
tcp 0 0 127.0.0.1:15329 0.0.0.0:* LISTEN 2497/nginx: master
tcp 0 0 127.0.0.1:5538 0.0.0.0:* LISTEN 2478/php-fpm: maste
tcp 0 0 192.168.31.208:51704 192.168.31.221:9200 ESTABLISHED 1067/metricbeat
tcp 0 0 192.168.31.208:22 192.168.31.151:59684 ESTABLISHED 5894/sshd: root@pts
tcp 0 0 192.168.31.208:22 192.168.31.151:63547 ESTABLISHED 7573/sshd: root@pts
tcp 0 0 127.0.0.1:8090 127.0.0.1:55354 ESTABLISHED 1412/phpstudy
tcp 0 0 127.0.0.1:55354 127.0.0.1:8090 ESTABLISHED 1193/phpstudy
tcp6 0 0 ::1:25 :::* LISTEN 1409/master
tcp6 0 0 :::3306 :::* LISTEN 2433/mysqld
tcp6 0 0 :::111 :::* LISTEN 777/rpcbind
tcp6 0 0 :::22 :::* LISTEN 1075/sshd
tcp6 0 0 :::21 :::* LISTEN 1112/vsftpd
tcp6 0 0 :::9100 :::* LISTEN 764/node_exporter
tcp6 0 0 :::6379 :::* LISTEN 1080/redis-server *
tcp6 0 0 192.168.31.208:9100 192.168.31.222:35232 ESTABLISHED 764/node_exporter
udp 0 0 0.0.0.0:111 0.0.0.0:* 777/rpcbind
udp 0 0 127.0.0.1:323 0.0.0.0:* 824/chronyd
udp 0 0 0.0.0.0:935 0.0.0.0:* 777/rpcbind
udp 0 0 192.168.31.208:59713 111.230.189.174:123 ESTABLISHED 824/chronyd
udp6 0 0 :::111 :::* 777/rpcbind
udp6 0 0 ::1:323 :::* 824/chronyd
udp6 0 0 :::935 :::* 777/rpcbind
raw6 213504 0 :::58 :::* 7 805/NetworkManager

The above is the original content of the file.


# awk '{print $1, $4, $5}' data.txt

This command prints columns 1, 4, and 5, header line included (it does not treat the header specially):

Proto Local Address
tcp 0.0.0.0:111 0.0.0.0:*
tcp 0.0.0.0:80 0.0.0.0:*
tcp 0.0.0.0:22 0.0.0.0:*
tcp 127.0.0.1:25 0.0.0.0:*
tcp 127.0.0.1:17380 0.0.0.0:*
tcp 127.0.0.1:17400 0.0.0.0:*
tcp 127.0.0.1:17221 0.0.0.0:*
tcp 0.0.0.0:8888 0.0.0.0:*
tcp 127.0.0.1:17131 0.0.0.0:*
tcp 0.0.0.0:9080 0.0.0.0:*
tcp 127.0.0.1:8899 0.0.0.0:*
tcp 127.0.0.1:17033 0.0.0.0:*
tcp 127.0.0.1:15640 0.0.0.0:*
tcp 127.0.0.1:15538 0.0.0.0:*
tcp 127.0.0.1:15445 0.0.0.0:*
tcp 0.0.0.0:8090 0.0.0.0:*
tcp 0.0.0.0:6379 0.0.0.0:*
tcp 127.0.0.1:15329 0.0.0.0:*
tcp 127.0.0.1:5538 0.0.0.0:*
tcp 192.168.31.208:51704 192.168.31.221:9200
tcp 192.168.31.208:22 192.168.31.151:59684
tcp 192.168.31.208:22 192.168.31.151:63547
tcp 127.0.0.1:8090 127.0.0.1:55354
tcp 127.0.0.1:55354 127.0.0.1:8090
tcp6 ::1:25 :::*
tcp6 :::3306 :::*
tcp6 :::111 :::*
tcp6 :::22 :::*
tcp6 :::21 :::*
tcp6 :::9100 :::*
tcp6 :::6379 :::*
tcp6 192.168.31.208:9100 192.168.31.222:35232
udp 0.0.0.0:111 0.0.0.0:*
udp 127.0.0.1:323 0.0.0.0:*
udp 0.0.0.0:935 0.0.0.0:*
udp 192.168.31.208:59713 111.230.189.174:123
udp6 :::111 :::*
udp6 ::1:323 :::*
udp6 :::935 :::*
raw6 :::58 :::*

# awk '{printf "%-10s %-22s %-32s \n", $1, $4, $5}' data.txt

Same as the previous command, but with fixed-width formatting:

Proto      Local                  Address                          
tcp 0.0.0.0:111 0.0.0.0:*
tcp 0.0.0.0:80 0.0.0.0:*
tcp 0.0.0.0:22 0.0.0.0:*
tcp 127.0.0.1:25 0.0.0.0:*
tcp 127.0.0.1:17380 0.0.0.0:*
tcp 127.0.0.1:17400 0.0.0.0:*
tcp 127.0.0.1:17221 0.0.0.0:*
tcp 0.0.0.0:8888 0.0.0.0:*
tcp 127.0.0.1:17131 0.0.0.0:*
tcp 0.0.0.0:9080 0.0.0.0:*
tcp 127.0.0.1:8899 0.0.0.0:*
tcp 127.0.0.1:17033 0.0.0.0:*
tcp 127.0.0.1:15640 0.0.0.0:*
tcp 127.0.0.1:15538 0.0.0.0:*
tcp 127.0.0.1:15445 0.0.0.0:*
tcp 0.0.0.0:8090 0.0.0.0:*
tcp 0.0.0.0:6379 0.0.0.0:*
tcp 127.0.0.1:15329 0.0.0.0:*
tcp 127.0.0.1:5538 0.0.0.0:*
tcp 192.168.31.208:51704 192.168.31.221:9200
tcp 192.168.31.208:22 192.168.31.151:59684
tcp 192.168.31.208:22 192.168.31.151:63547
tcp 127.0.0.1:8090 127.0.0.1:55354
tcp 127.0.0.1:55354 127.0.0.1:8090
tcp6 ::1:25 :::*
tcp6 :::3306 :::*
tcp6 :::111 :::*
tcp6 :::22 :::*
tcp6 :::21 :::*
tcp6 :::9100 :::*
tcp6 :::6379 :::*
tcp6 192.168.31.208:9100 192.168.31.222:35232
udp 0.0.0.0:111 0.0.0.0:*
udp 127.0.0.1:323 0.0.0.0:*
udp 0.0.0.0:935 0.0.0.0:*
udp 192.168.31.208:59713 111.230.189.174:123
udp6 :::111 :::*
udp6 ::1:323 :::*
udp6 :::935 :::*
raw6 :::58 :::*

Skip the header (i.e., do not print the first line):

# awk 'NR!=1 {printf "%-10s %-22s %-32s \n", $1, $4, $5}' data.txt

Built-in variables:

$0          The current record (the entire line)
$1~$n       The nth field of the current record; fields are separated by FS
FS          Input field separator; defaults to space or tab
NF          Number of fields in the current record, i.e., how many columns there are
NR          Number of records read so far, i.e., the line number, starting at 1; with multiple files this value keeps accumulating
FNR         Record number within the current file; unlike NR, it resets to each file's own line number
RS          Input record separator; defaults to newline
OFS         Output field separator; also defaults to space
ORS         Output record separator; defaults to newline
FILENAME    Name of the current input file
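
A small sketch exercising a few of these variables on the same data.txt (NR!=1 skips the header, NF counts the columns, and OFS joins the output fields with commas):

# awk 'BEGIN {OFS=","} NR!=1 {print NR, NF, $1}' data.txt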

# awk '$1=="tcp" && $6=="ESTABLISHED" || NR==1 {printf "%-10s %-22s %-22s %-16s %-30s \n", $1, $4, $5, $6, $7}' data.txt

This command matches lines where column 1 is "tcp" and column 6 is "ESTABLISHED", or where the line number is 1 (i.e., the header is included), and prints them with formatting:

Proto      Local                  Address                Foreign          Address                        
tcp 192.168.31.208:51704 192.168.31.221:9200 ESTABLISHED 1067/metricbeat
tcp 192.168.31.208:22 192.168.31.151:59684 ESTABLISHED 5894/sshd:
tcp 192.168.31.208:22 192.168.31.151:63547 ESTABLISHED 7573/sshd:
tcp 127.0.0.1:8090 127.0.0.1:55354 ESTABLISHED 1412/phpstudy
tcp 127.0.0.1:55354 127.0.0.1:8090 ESTABLISHED 1193/phpstudy

For advanced usage and internals, see the references.

4. References

Linux Command Tools: sed Basics

1. Introduction

A non-interactive, stream-oriented editor that can match, filter, and transform text.

2. Command Format

sed [OPTION]... {script-only-if-no-other-script} [input-file]...

3. Examples

# cat data.txt
unix macos macos unix unix unix windows windows unix unix
unix unix unix unix unix unix unix unix unix unix

The above is the original content of the file.


# sed 's/unix/linux/g' data.txt

This command uses the s instruction (format: s/pattern/replacement/flags) to match unix on each line and replace it with linux; the g flag replaces every match on the line. Output:

linux macos macos linux linux linux windows windows linux linux
linux linux linux linux linux linux linux linux linux linux

The previous example does not modify the original file; by default the result is only printed:

# sed -i 's/unix/linux/g' data.txt

This command adds the -i option, which edits the original file in place.

# sed -i.bak 's/unix/linux/g' data.txt

This command uses -i with .bak appended (any string can serve as the suffix, e.g. temp or -save); this creates a backup file with the corresponding name and then edits the original file in place.

# ls -l
data.txt
data.txt.bak

# cat data.txt
unix macos macos unix unix unix windows windows unix unix
unix unix unix unix unix unix unix unix unix unix
# sed '/macos/p' data.txt
unix macos macos unix unix unix windows windows unix unix
unix macos macos unix unix unix windows windows unix unix
unix unix unix unix unix unix unix unix unix unix

This command prints (p) every line that matches macos.
Three lines are output: line 1 is the original first line, line 2 is the extra copy printed because line 1 matched, and line 3 is the original second line.
The -n option suppresses the default printing of every input line, as follows:

# sed -n '/macos/p' data.txt
unix macos macos unix unix unix windows windows unix unix

# cat data.txt
unix macos macos unix unix unix windows windows unix unix
unix unix unix unix unix unix unix unix unix unix
# sed '2d' data.txt

This command deletes (d) the specified line 2; the output is:

unix macos macos unix unix unix windows windows unix unix
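
A couple of related address forms, sketched against the same file (both only print; the file is unchanged):

# sed '$d' data.txt        # delete the last line
# sed '/macos/d' data.txt  # delete every line that matches macos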

4. References

Linux Command Tools: find

1. Introduction

A tool that recursively searches for files in a directory hierarchy.

2. Command Format

find [path...] [expression]

[path] - the root directory of the recursive search (defaults to the current directory)
[expression] can consist of 3 parts:
options - options (effectively global settings)
tests - tests (conditions each file must satisfy)
actions - actions (what to do with each matched file; defaults to -print)

3. Examples

find . -mindepth 2 -name '*.txt' -size +1M -exec ls -sailh {} \;

path: .
expression.options: -mindepth 2
expression.tests: -name '*.txt' -size +1M
expression.actions: -exec ls -sailh {} \;

{} stands for each file found, and \; marks the end of -exec.

This example searches recursively from the current directory for files whose path depth is at least 2 (i.e., >= 2), whose names end in .txt, and whose size exceeds 1M; the action for each match is to display it with ls -sailh.
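
Two more sketches in the same spirit (paths and patterns are illustrative):

find /var/log -name '*.log' -mtime -1          # log files modified within the last day
find . -maxdepth 1 -type d -exec du -sh {} \;  # disk usage of each immediate subdirectory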

4. References

  • linux man find

ab - Apache HTTP Server Benchmarking Tool

1. Introduction

A benchmarking tool for HTTP servers.

2. Synopsis

ab [ -A auth-username:password ] [ -c concurrency ] [ -C cookie-name=value ] [ -d ]
[ -e csv-file ] [ -g gnuplot-file ] [ -h ] [ -H custom-header ] [ -i ] [ -k ]
[ -n requests ] [ -p POST-file ] [ -P proxy-auth-username:password ] [ -q ]
[ -s ] [ -S ] [ -t timelimit ] [ -T content-type ] [ -v verbosity] [ -V ]
[ -w ] [ -x <table>-attributes ] [ -X proxy[:port] ] [ -y <tr>-attributes ]
[ -z <td>-attributes ] [http://]hostname[:port]/path

3. Options

-A auth-username:password
    HTTP Basic authentication
-c concurrency
    Number of concurrent requests; default 1
-C cookie-name=value
    Add a cookie, e.g. -C 'XSESSIONID=aaaa;key2=bbbb'
-d
    Do not show the "Percentage of the requests served within a certain time (ms)" table (legacy support)
-e csv-file
    Write a CSV file
-g gnuplot-file
    Write a gnuplot file
-h
    Display usage information
-H custom-header
    Add a header, e.g. -H "Accept-Encoding: zip/zop;8bit"
-i
    Send HEAD requests
-k
    Enable the HTTP KeepAlive feature; off by default
-n requests
    Number of requests; default 1
-p POST-file
    Send POST requests with the given data file
-P proxy-auth-username:password
    Proxy authentication
-q
    Do not show progress percentages
-s
    HTTPS-related; this feature is experimental and rarely needed
-S
    Do not show the median and standard deviation values, plus some other output; only min/mean/max are shown (legacy support)
-t timelimit
    Maximum time in seconds for the benchmark
-T content-type
    Content-Type header for POST data
-v verbosity
    Set the verbosity level; detail decreases from level 4 down to level 2
-V
    Display the version number
-w
    Print the results as HTML tables (source)
-x <table>-attributes
    String to use as attributes for <table>; attributes are inserted as <table here>, e.g. -x bgcolor=blue => <table bgcolor=blue>
-X proxy[:port]
    Use a proxy server for the requests
-y <tr>-attributes
    Attributes for <tr>
-z <td>-attributes
    Attributes for <td>

Options are not identical across versions; see your version's help with ab -h.
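
A sketch combining several of these options for a POST benchmark (the URL and payload file are hypothetical):

# ab -n 100 -c 10 -k -p payload.json -T application/json http://example.com/api/orders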

4. Examples

# ab -n 100 -c 10 'https://publib.boulder.ibm.com/httpserv/manual70/programs/ab.html'
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking publib.boulder.ibm.com (be patient).....done


Server Software:
Server Hostname: publib.boulder.ibm.com
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,4096,128

Document Path: /httpserv/manual70/programs/ab.html
Document Length: 10193 bytes

Concurrency Level: 10
Time taken for tests: 17.458 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 1045400 bytes
HTML transferred: 1019300 bytes
Requests per second: 5.73 [#/sec] (mean)
Time per request: 1745.789 [ms] (mean)
Time per request: 174.579 [ms] (mean, across all concurrent requests)
Transfer rate: 58.48 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      695   951 198.0    925   1629
Processing:   454   582  57.3    596    693
Waiting:      228   293  30.9    298    368
Total:       1152  1533 233.7   1526   2267

Percentage of the requests served within a certain time (ms)
50% 1526
66% 1558
75% 1594
80% 1614
90% 1652
95% 2222
98% 2267
99% 2267
100% 2267 (longest request)

5. References

Spring Boot Integration with Elasticsearch

1. Maven Configuration

<!-- Spring Data Elasticsearch -->
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-elasticsearch</artifactId>
    <!-- version follows spring-boot-starter-parent -->
</dependency>

2. Java Configuration

package com.yeshimin.ysmspace.config;

import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.RestClients;
import org.springframework.data.elasticsearch.config.AbstractElasticsearchConfiguration;

@Configuration
public class ElasticsearchConfiguration extends AbstractElasticsearchConfiguration {

    @Override
    public RestHighLevelClient elasticsearchClient() {
        final ClientConfiguration clientConfiguration = ClientConfiguration.builder()
                .connectedTo("vhost4:9200") // Elasticsearch server address
                .build();

        return RestClients.create(clientConfiguration).rest();
    }
}

3. Data Access Layer

package com.yeshimin.ysmspace.model.elasticsearch;

// ... import

@Data
@Document(indexName = Common.PROJECT_NAME + "_product")
public class ProductDoc {

    /**
     * Primary key ID
     */
    @Id
    @Field(name = "id", type = FieldType.Long)
    private Long id;

    /**
     * Creation time
     */
    @Field(name = "create_time", type = FieldType.Date, format = DateFormat.basic_date_time)
    private LocalDateTime createTime;

    /**
     * Product name
     */
    @Field(name = "name", type = FieldType.Keyword)
    private String name;

    // ...
}
package com.yeshimin.ysmspace.dal.elasticsearch;

// ... import

@Repository
public interface ProductEsRepository extends ElasticsearchRepository<ProductDoc, Long> {
}

4. Usage

Follows the Spring Data conventions.

Example:

@Autowired
private ProductEsRepository productEsRepository;

ProductDoc productDoc = new ProductDoc();
// ...
productEsRepository.save(productDoc);
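
Spring Data can also derive queries from repository method names; a sketch (findByName is a hypothetical method, assuming the name keyword field mapped above):

@Repository
public interface ProductEsRepository extends ElasticsearchRepository<ProductDoc, Long> {

    // Derived query: documents whose name keyword equals the argument
    List<ProductDoc> findByName(String name);
}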

5. Version Information

  • springboot parent - 2.3.5.RELEASE
  • spring data elasticsearch - 7.6.2
  • elasticsearch - 7.9.3

Spring Boot Integration with RabbitMQ: Cancelling Timed-Out Orders

1. Solution

Use RabbitMQ's dead-letter queue mechanism to cancel timed-out orders automatically.
Concretely: when an order is placed, publish an order message with a TTL to queue A; if it is still unconsumed when the TTL expires, the message is forwarded to the corresponding dead-letter queue B, where a listening consumer picks it up and handles it.
The consumer's cancel operation must be idempotent, because the order may already have been cancelled by the user.

2. Maven Configuration

<!-- Spring RabbitMQ -->
<dependency>
    <groupId>org.springframework.amqp</groupId>
    <artifactId>spring-rabbit</artifactId>
    <!-- version follows spring-boot-starter-parent -->
</dependency>

3. YAML Configuration

spring:
  rabbitmq:
    host: 192.168.31.208
    port: 5672
    virtual-host: ysmspace
    username: username1
    password: password1

4. Java Configuration

package com.yeshimin.ysmspace.config;

import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.HashMap;
import java.util.Map;

@Configuration
public class RabbitMqConfiguration {

    public static final String EXCHANGE_DEFAULT_DIRECT = "amq.direct";
    public static final String QUEUE_ORDER_PENDING = "q.order.pending";
    public static final String ROUTING_ORDER_PENDING = "rk.order.pending";
    public static final String DLX_ORDER_PENDING = EXCHANGE_DEFAULT_DIRECT;
    public static final String DLQ_ORDER_PENDING = "q.dl.order.pending";
    public static final String DLR_ORDER_PENDING = "rk.dl.order.pending";

    @Bean
    public AmqpAdmin amqpAdmin(ConnectionFactory connectionFactory) {
        return new RabbitAdmin(connectionFactory);
    }

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }

    @Bean
    public DirectExchange defaultDirectExchange() {
        return new DirectExchange(EXCHANGE_DEFAULT_DIRECT);
    }

    @Bean
    public Queue orderPendingQueue() {
        Map<String, Object> args = new HashMap<>();
        // args.put("x-message-ttl", 30 * 60 * 1000); the TTL is set per message instead
        args.put("x-dead-letter-exchange", DLX_ORDER_PENDING);
        args.put("x-dead-letter-routing-key", DLR_ORDER_PENDING);
        return new Queue(QUEUE_ORDER_PENDING, true, false, false, args);
    }

    @Bean
    public Queue orderPendingDeadLetterQueue() {
        return new Queue(DLQ_ORDER_PENDING);
    }

    @Bean
    public Binding orderPendingBinding() {
        return new Binding(QUEUE_ORDER_PENDING, Binding.DestinationType.QUEUE,
                EXCHANGE_DEFAULT_DIRECT, ROUTING_ORDER_PENDING, null);
    }

    @Bean
    public Binding orderPendingDeadLetterBinding() {
        return new Binding(DLQ_ORDER_PENDING, Binding.DestinationType.QUEUE,
                DLX_ORDER_PENDING, DLR_ORDER_PENDING, null);
    }
}

5. Publishing, Consuming, and Related Configuration

5.1 Message post-processor for setting the message TTL

package com.yeshimin.ysmspace.rabbitmq;

import org.springframework.amqp.AmqpException;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessagePostProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class DefaultMessagePostProcessor implements MessagePostProcessor {

    @Value("${order.timeout}")
    private Integer orderTimeout;

    @Override
    public Message postProcessMessage(Message message) throws AmqpException {
        message.getMessageProperties().setExpiration(String.valueOf(orderTimeout));
        return message;
    }
}

5.2 Publishing messages

@Autowired
private RabbitTemplate rabbitTemplate;
@Autowired
private DefaultMessagePostProcessor defaultMessagePostProcessor;

// ...

rabbitTemplate.convertAndSend(
        RabbitMqConfiguration.EXCHANGE_DEFAULT_DIRECT,
        RabbitMqConfiguration.ROUTING_ORDER_PENDING, // must match the binding of q.order.pending
        JSON.toJSONString(message), defaultMessagePostProcessor);

// ...

5.3 Consuming messages

package com.yeshimin.ysmspace.rabbitmq;

// ... import

@Slf4j
@Component
public class DefaultRabbitListener {

    @Autowired
    private OrderAppService orderAppService;

    @RabbitListener(queues = RabbitMqConfiguration.DLQ_ORDER_PENDING)
    public void listen(String message) {
        log.debug("listen(), message: {}", message);

        OrderPendingMessage jsonData = JSON.parseObject(message, OrderPendingMessage.class);

        try {
            orderAppService.cancel(jsonData.getOrderId());
        } catch (Exception e) {
            log.warn("Failed to cancel the order!");
            e.printStackTrace();
        }
    }
}

6. Version Information

  • springboot parent - 2.3.5.RELEASE
  • rabbitmq - 3.3.5

MySQL Replication: One Master, Two Slaves

1. Environment

Hosts: vhost4, vhost5, vhost6
OS: CentOS 7
vhost4: MySQL.slave
vhost5: MySQL.master
vhost6: MySQL.slave
MySQL: v5.7.36

2. Configuration

2.1 Master configuration (one node)

2.1.1 my.cnf configuration

cat /etc/my.cnf

... (omitted)

[mysqld]
# enable the binary log
log-bin=mysql-bin
# set the server_id
server-id=1
# For the greatest possible durability and consistency in a replication setup using InnoDB with transactions, you should use innodb_flush_log_at_trx_commit=1 and sync_binlog=1 in the source's my.cnf file
innodb_flush_log_at_trx_commit=1
sync_binlog=1

... (omitted)

2.1.2 Make sure MySQL's skip_networking variable is OFF

SHOW VARIABLES LIKE '%skip_networking%';

2.1.3 Restart the MySQL service

systemctl restart mysqld

2.1.4 Create a replication user account on the master

CREATE USER 'repl'@'%' IDENTIFIED BY '<password>';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

2.1.5 Get the master's binary log coordinates

mysql> FLUSH TABLES WITH READ LOCK; # lock to read-only
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000354 |  4649036 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.01 sec)

2.1.6 Sync existing data: dump it from the master

mysqldump -uroot -p --all-databases --master-data > dbdump.db

2.1.7 Release the lock

On the master, run UNLOCK TABLES; (this pairs with the FLUSH TABLES WITH READ LOCK; above).

2.2 Slave configuration (each node)

2.2.1 Replica settings (usually no need to enable the binary log)

cat /etc/my.cnf
Configure server-id as 2 and 3 on the two slaves respectively.

... (omitted)

[mysqld]
server-id=2

... (omitted)

2.2.2 Point the replica at the master

The MASTER_LOG_FILE and MASTER_LOG_POS parameters are the master's binary log coordinates obtained above.

CHANGE MASTER TO
    MASTER_HOST='vhost5',
    MASTER_USER='repl',
    MASTER_PASSWORD='<password>',
    MASTER_LOG_FILE='mysql-bin.000354',
    MASTER_LOG_POS=4649036;

2.2.3 Start the slave to begin replication

1. Import the master's data
2. START SLAVE;
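
A minimal sketch of these two steps on each slave (assuming dbdump.db has been copied over from the master):

mysql -uroot -p < dbdump.db
mysql -uroot -p -e "START SLAVE; SHOW SLAVE STATUS\G"

In the status output, Slave_IO_Running and Slave_SQL_Running should both show Yes.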

3. References

Simple PowerShell Script Usage

1. Background

Hyper-V virtual machines on Windows do not correctly (as expected) save and restore VM state across a normal host shutdown and startup.
The concrete symptom is that the prometheus service fails to start inside a CentOS 7 Linux VM (the exact error was not recorded).
After failing to achieve the expected behavior through Hyper-V settings, another approach was taken.
As long as each VM goes through a normal guest shutdown and startup, the result matches expectations.
The final solution uses Windows PowerShell automation scripts.

2. Code

2.1 Startup script: StartHyperVMs.ps1

# Self-elevate the script if required
# This block runs the script as Administrator; without it, a permission error is raised
if (-Not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')) {
    if ([int](Get-CimInstance -Class Win32_OperatingSystem | Select-Object -ExpandProperty BuildNumber) -ge 6000) {
        $Command = "-File `"" + $MyInvocation.MyCommand.Path + "`" " + $MyInvocation.UnboundArguments
        Start-Process -FilePath PowerShell.exe -Verb RunAs -ArgumentList $Command
        Exit
    }
}

# Place your script here
Start-VM -Name vhost*

2.2 Shutdown script: StopHyperVMs.ps1

# Self-elevate the script if required
# This block runs the script as Administrator; without it, a permission error is raised
if (-Not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')) {
    if ([int](Get-CimInstance -Class Win32_OperatingSystem | Select-Object -ExpandProperty BuildNumber) -ge 6000) {
        $Command = "-File `"" + $MyInvocation.MyCommand.Path + "`" " + $MyInvocation.UnboundArguments
        Start-Process -FilePath PowerShell.exe -Verb RunAs -ArgumentList $Command
        Exit
    }
}

# Place your script here
Stop-VM -Name vhost*

3. Configuration

3.1 Open the Local Group Policy Editor

Press Win + R and run gpedit.msc

3.2 Configure the startup script

Computer Configuration -> Windows Settings -> Scripts (Startup/Shutdown) -> Startup -> PowerShell Scripts -> Add -> Script Name (set it to the path of StartHyperVMs.ps1)

3.3 Configure the shutdown script

Computer Configuration -> Windows Settings -> Scripts (Startup/Shutdown) -> Shutdown -> PowerShell Scripts -> Add -> Script Name (set it to the path of StopHyperVMs.ps1)

4. References

Monitoring Hosts and MySQL with Prometheus + Grafana

1. Environment

Hosts: vhost4, vhost5, vhost6
OS: CentOS 7
vhost4: MySQL.slave, Prometheus, NodeExporter, MysqlExporter
vhost5: MySQL.master, NodeExporter, MysqlExporter
vhost6: MySQL.slave, NodeExporter, MysqlExporter
Prometheus: v2.24.1
Grafana: v7.4.0
NodeExporter: v1.1.0
MysqlExporter: v0.14.0

2. Prometheus

2.1 Download and extract the tarball

# ls -al /opt/prometheus/prometheus/prometheus-2.24.1.linux-amd64
total 165812
drwxr-xr-x  5 3434 3434      144 May 10 16:35 .
drwxr-xr-x  3 root root       84 May  5 13:51 ..
drwxr-xr-x  2 3434 3434       38 Jan 20  2021 console_libraries
drwxr-xr-x  2 3434 3434      173 Jan 20  2021 consoles
drwxr-xr-x 13 root root     4096 May 10 17:00 data
-rw-r--r--  1 3434 3434    11357 Jan 20  2021 LICENSE
-rw-r--r--  1 3434 3434     3420 Jan 20  2021 NOTICE
-rwxr-xr-x  1 3434 3434 89894679 Jan 20  2021 prometheus
-rw-r--r--  1 root root     1324 May 10 16:35 prometheus.yml
-rwxr-xr-x  1 3434 3434 79870425 Jan 20  2021 promtool

2.2 Configuration

# cat /opt/prometheus/prometheus/prometheus-2.24.1.linux-amd64/prometheus.yml 
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
    - targets: ['vhost4:9100', 'vhost5:9100', 'vhost6:9100']

  - job_name: 'mysqld'
    static_configs:
    - targets: ['vhost4:9104','vhost5:9104','vhost6:9104']

2.3 Run as a service

# cat /usr/lib/systemd/system/prometheus.service 
[Unit]
Description=Prometheus

[Service]
WorkingDirectory=/opt/prometheus/prometheus/prometheus-2.24.1.linux-amd64
ExecStart=/opt/prometheus/prometheus/prometheus-2.24.1.linux-amd64/prometheus --config.file=/opt/prometheus/prometheus/prometheus-2.24.1.linux-amd64/prometheus.yml
User=root

[Install]
WantedBy=multi-user.target
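
After creating the unit file, reload systemd and enable the service (standard systemd steps; the unit name matches the file above):

# systemctl daemon-reload
# systemctl enable --now prometheus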

3. Grafana

3.1 Download and install the package

# ls -al /opt/prometheus/grafana/
total 49732
drwxr-xr-x 2 root root       40 Feb  8  2021 .
drwxr-xr-x 6 root root       83 May 10 16:33 ..
-rw-r--r-- 1 root root 50921720 Feb  8  2021 grafana-7.4.0-1.x86_64.rpm
# rpm -ivh /opt/prometheus/grafana/grafana-7.4.0-1.x86_64.rpm

3.2 Configuration

Defaults.

3.3 Run as a service

# systemctl start grafana-server.service
# systemctl enable grafana-server.service

4. NodeExporter

4.1 Download and extract the tarball

# ls -al node_exporter/node_exporter-1.1.0.linux-amd64
total 18720
drwxr-xr-x 2 root root       56 Feb  6  2021 .
drwxr-xr-x 3 root root       88 Feb  8  2021 ..
-rw-r--r-- 1 root root    11357 Feb  6  2021 LICENSE
-rwxr-xr-x 1 root root 19151814 Feb  6  2021 node_exporter
-rw-r--r-- 1 root root      463 Feb  6  2021 NOTICE

4.2 Configuration

None; node_exporter runs with its defaults.

4.3 Run as a service

# cat /usr/lib/systemd/system/node-exporter.service
[Unit]
Description=NodeExporter

[Service]
ExecStart=/opt/prometheus/node_exporter/node_exporter-1.1.0.linux-amd64/node_exporter
User=root

[Install]
WantedBy=multi-user.target

5. MysqlExporter

5.1 Download and extract the tarball

# ls -al /opt/prometheus/mysqld_exporter/mysqld_exporter-0.14.0.linux-amd64/
total 14828
drwxr-xr-x 2 root root       72 May 10 16:37 .
drwxr-xr-x 3 root root       48 May 10 16:34 ..
-rw-r--r-- 1 root root    11357 May 10 16:34 LICENSE
-rw-r--r-- 1 root root       35 May 10 16:37 my.cnf
-rwxr-xr-x 1 root root 15163162 May 10 16:34 mysqld_exporter
-rw-r--r-- 1 root root       65 May 10 16:34 NOTICE

5.2 Configuration

# cat /opt/prometheus/mysqld_exporter/mysqld_exporter-0.14.0.linux-amd64/my.cnf 
[client]
user=<db-user>
password=<db-password>

You can also specify host and port as needed.
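
The exporter account needs enough privileges to read server status; a sketch following the mysqld_exporter README (the user name 'exporter' and its password are placeholders):

CREATE USER 'exporter'@'localhost' IDENTIFIED BY '<db-password>' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';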

5.3 Run as a service

# cat /usr/lib/systemd/system/mysqld-exporter.service 
[Unit]
Description=MysqldExporter

[Service]
ExecStart=/opt/prometheus/mysqld_exporter/mysqld_exporter-0.14.0.linux-amd64/mysqld_exporter --config.my-cnf=/opt/prometheus/mysqld_exporter/mysqld_exporter-0.14.0.linux-amd64/my.cnf
User=root

[Install]
WantedBy=multi-user.target

6. References