2022-01-16: upgraded to filebeat 8.6.0
Installation

Download

```shell
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.6.0-linux-x86_64.tar.gz
tar -zxf filebeat-8.6.0-linux-x86_64.tar.gz
mv filebeat-8.6.0-linux-x86_64 filebeat
```
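Since this post is itself an upgrade note, the download steps recur with every release. A small sketch that derives the artifact URL from a single version variable instead of hard-coding it (the variable names are my own):

```shell
# Build the artifact URL from a version variable (variable names are illustrative).
FILEBEAT_VERSION="8.6.0"
TARBALL="filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz"
URL="https://artifacts.elastic.co/downloads/beats/filebeat/${TARBALL}"
echo "$URL"
# wget "$URL" then fetches the same file as above; only FILEBEAT_VERSION
# needs to change for the next upgrade.
```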
Enable the nginx module

```shell
/home/nginx/filebeat/filebeat modules enable nginx -c /home/nginx/filebeat/filebeat.yml
```
Edit the nginx.yml file

```shell
cat > /home/nginx/filebeat/modules.d/nginx.yml <<EOF
- module: nginx
  # Access logs
  access:
    enabled: true
    var.paths: ["/home/nginx/nginx/logs/access.log*"]
  # Error logs
  error:
    enabled: true
    var.paths: ["/home/nginx/nginx/logs/error.log*"]
EOF
cat /home/nginx/filebeat/modules.d/nginx.yml
```
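On a live host it can be safer to stage the module file in a scratch directory and inspect it before copying it into modules.d/. A sketch under that assumption (the scratch path comes from mktemp; the YAML matches the heredoc above):

```shell
# Stage the module config in a scratch directory before installing it.
STAGE_DIR="$(mktemp -d)"
cat > "$STAGE_DIR/nginx.yml" <<EOF
- module: nginx
  access:
    enabled: true
    var.paths: ["/home/nginx/nginx/logs/access.log*"]
  error:
    enabled: true
    var.paths: ["/home/nginx/nginx/logs/error.log*"]
EOF
# Both log-path entries should be present before copying into modules.d/.
grep -c 'var.paths' "$STAGE_DIR/nginx.yml"
```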
Reference

Specifying the paths on the command line

```shell
-M "nginx.access.var.paths=[/home/nginx/nginx/logs/access.log*]" -M "nginx.error.var.paths=[/home/nginx/nginx/logs/error.log*]"
```
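For context, these `-M` overrides are appended to a normal Filebeat invocation, so the same paths can be set at startup without editing modules.d/nginx.yml at all (paths and config file taken from this post):

```shell
/home/nginx/filebeat/filebeat -c /home/nginx/filebeat/filebeat.yml -e \
  -M "nginx.access.var.paths=[/home/nginx/nginx/logs/access.log*]" \
  -M "nginx.error.var.paths=[/home/nginx/nginx/logs/error.log*]"
```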
Output to Elasticsearch

```shell
cp /home/nginx/filebeat/filebeat.yml /home/nginx/filebeat/filebeat.yml-bak
vim /home/nginx/filebeat/filebeat.yml
```
Edit the configuration

Reference:

```yaml
filebeat.inputs:
  - type: filestream
    id: my-filestream-id
    enabled: false
    paths:
      - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 60s

setup.template.settings:
  index.number_of_shards: 1
setup.template.name: "nginx-log"
setup.template.pattern: "nginx-log-*"
setup.template.overwrite: false

setup.kibana:
  host: "192.168.14.123:45601"
  protocol: "http"
  username: "maxzhao"
  password: "maxzhao."

output.elasticsearch:
  hosts: ["192.168.14.123:49200"]
  protocol: "http"
  index: "nginx-log-%{+yyyy.MM.dd}"
  username: "maxzhao"
  password: "maxzhao."

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
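Before starting, the output section can be checked against the running Elasticsearch with Filebeat's built-in connectivity test. This step is not in the original post, but the subcommand is standard:

```shell
cd /home/nginx/filebeat
./filebeat test output -c filebeat.yml -e
```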
Start

```shell
cd /home/nginx/filebeat
# Test the configuration
./filebeat test config -c filebeat.yml -e
# Run in the foreground
./filebeat -c filebeat.yml -e
# Load the index template and Kibana dashboards
./filebeat setup -c filebeat.yml --dashboards
# Run in the background and follow the log
nohup /home/nginx/filebeat/filebeat -c /home/nginx/filebeat/filebeat.yml -e >> /home/nginx/filebeat/filebeat.log 2>&1 &
tail -f /home/nginx/filebeat/filebeat.log
```
Appendix

Start/stop scripts

```shell
echo "nohup /home/nginx/filebeat/filebeat -c /home/nginx/filebeat/filebeat.yml -e >> /home/nginx/filebeat/filebeat.log 2>&1 &" > /home/nginx/filebeat-start.sh
echo "ps -ef|grep '/filebeat' | grep -v grep |awk '{print \$2}' |xargs -I {} kill '{}'" > /home/nginx/filebeat-shutdown.sh
chmod +x /home/nginx/filebeat-*.sh
```
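Usage of the generated scripts (paths as created above; note the shutdown script kills every process whose command line matches `/filebeat`):

```shell
sh /home/nginx/filebeat-start.sh      # start Filebeat in the background
tail -f /home/nginx/filebeat/filebeat.log
sh /home/nginx/filebeat-shutdown.sh   # stop all matching filebeat processes
```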
nginx

Configuration

```nginx
http {
    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    '$request_time $upstream_response_time $upstream_addr $upstream_status';
}
```
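To make the format concrete, here is a hypothetical access-log line laid out in the field order of the `log_format` above (every value is invented for illustration):

```shell
# One made-up log line: remote_addr, remote_user, time_local, request, status,
# body_bytes_sent, referer, user_agent, x_forwarded_for, request_time,
# upstream_response_time, upstream_addr, upstream_status.
LINE='192.0.2.10 - - [16/Jan/2022:12:00:00 +0800] "GET /api/list HTTP/1.1" 200 512 "-" "curl/7.61.1" "-" 0.003 0.002 127.0.0.1:8080 200'
echo "$LINE"
```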
filebeat

Configuration

`filebeat/module/nginx/access/ingest/pipeline.yml`:

```yaml
description: Pipeline for parsing Nginx access logs. Requires the geoip and user_agent plugins.
processors:
  - set:
      field: event.ingested
      value: '{{_ingest.timestamp}}'
  - rename:
      field: message
      target_field: event.original
  - grok:
      field: event.original
      patterns:
        - '(%{NGINX_HOST} )?"?(?:%{NGINX_ADDRESS_LIST:nginx.access.remote_ip_list}|%{NOTSPACE:source.address}) - (-|%{DATA:user.name}) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} "(-|%{DATA:http.request.referrer})" (-|%{NUMBER:nginx.access.request_time:double}) (-|%{NUMBER:nginx.access.response_time:double}) %{DATA:nginx.access.upstream_addr} (-|%{NUMBER:nginx.access.upstream_status_code:long}) "(-|%{DATA:user_agent.original})"'
      pattern_definitions:
        NGINX_HOST: (?:%{IP:destination.ip}|%{NGINX_NOTSEPARATOR:destination.domain})(:%{NUMBER:destination.port})?
        NGINX_NOTSEPARATOR: "[^\t ,:]+"
        NGINX_ADDRESS_LIST: (?:%{IP}|%{WORD})("?,?\s*(?:%{IP}|%{WORD}))*
      ignore_missing: true
  - grok:
      field: nginx.access.info
      patterns:
        - '%{WORD:http.request.method} %{DATA:_tmp.url_orig} HTTP/%{NUMBER:http.version}'
        - ""
      ignore_missing: true
  - uri_parts:
      field: _tmp.url_orig
      ignore_failure: true
  - set:
      field: url.domain
      value: "{{destination.domain}}"
      if: ctx.url?.domain == null && ctx.destination?.domain != null
  - remove:
      field:
        - nginx.access.info
        - _tmp.url_orig
      ignore_missing: true
  - split:
      field: nginx.access.remote_ip_list
      separator: '"?,?\s+'
      ignore_missing: true
  - split:
      field: nginx.access.origin
      separator: '"?,?\s+'
      ignore_missing: true
  - set:
      field: source.address
      if: ctx.source?.address == null
      value: ""
  - script:
      if: ctx.nginx?.access?.remote_ip_list != null && ctx.nginx.access.remote_ip_list.length > 0
      lang: painless
      source: |
        boolean isPrivate(def dot, def ip) {
          try {
            StringTokenizer tok = new StringTokenizer(ip, dot);
            int firstByte = Integer.parseInt(tok.nextToken());
            int secondByte = Integer.parseInt(tok.nextToken());
            if (firstByte == 10) {
              return true;
            }
            if (firstByte == 192 && secondByte == 168) {
              return true;
            }
            if (firstByte == 172 && secondByte >= 16 && secondByte <= 31) {
              return true;
            }
            if (firstByte == 127) {
              return true;
            }
            return false;
          } catch (Exception e) {
            return false;
          }
        }
        try {
          ctx.source.address = null;
          if (ctx.nginx.access.remote_ip_list == null) {
            return;
          }
          def found = false;
          for (def item : ctx.nginx.access.remote_ip_list) {
            if (!isPrivate(params.dot, item)) {
              ctx.source.address = item;
              found = true;
              break;
            }
          }
          if (!found) {
            ctx.source.address = ctx.nginx.access.remote_ip_list[0];
          }
        } catch (Exception e) {
          ctx.source.address = null;
        }
      params:
        dot: .
  - remove:
      field: source.address
      if: ctx.source.address == null
  - grok:
      field: source.address
      patterns:
        - ^%{IP:source.ip}$
      ignore_failure: true
  - set:
      copy_from: '@timestamp'
      field: event.created
  - date:
      field: nginx.access.time
      target_field: '@timestamp'
      formats:
        - dd/MMM/yyyy:H:m:s Z
      on_failure:
        - append:
            field: error.message
            value: '{{ _ingest.on_failure_message }}'
  - remove:
      field: nginx.access.time
  - user_agent:
      field: user_agent.original
      ignore_missing: true
  - geoip:
      field: source.ip
      target_field: source.geo
      ignore_missing: true
  - geoip:
      database_file: GeoLite2-ASN.mmdb
      field: source.ip
      target_field: source.as
      properties:
        - asn
        - organization_name
      ignore_missing: true
  - rename:
      field: source.as.asn
      target_field: source.as.number
      ignore_missing: true
  - rename:
      field: source.as.organization_name
      target_field: source.as.organization.name
      ignore_missing: true
  - set:
      field: event.kind
      value: event
  - append:
      field: event.category
      value: web
  - append:
      field: event.type
      value: access
  - set:
      field: event.outcome
      value: success
      if: "ctx?.http?.response?.status_code != null && ctx.http.response.status_code < 400"
  - set:
      field: event.outcome
      value: failure
      if: "ctx?.http?.response?.status_code != null && ctx.http.response.status_code >= 400"
  - append:
      field: related.ip
      value: "{{source.ip}}"
      if: "ctx?.source?.ip != null"
  - append:
      field: related.ip
      value: "{{destination.ip}}"
      if: "ctx?.destination?.ip != null"
  - append:
      field: related.user
      value: "{{user.name}}"
      if: "ctx?.user?.name != null"
  - script:
      lang: painless
      description: This script processor iterates over the whole document to remove fields with null values.
      source: |
        void handleMap(Map map) {
          for (def x : map.values()) {
            if (x instanceof Map) {
              handleMap(x);
            } else if (x instanceof List) {
              handleList(x);
            }
          }
          map.values().removeIf(v -> v == null);
        }
        void handleList(List list) {
          for (def x : list) {
            if (x instanceof Map) {
              handleMap(x);
            } else if (x instanceof List) {
              handleList(x);
            }
          }
        }
        handleMap(ctx);
on_failure:
  - set:
      field: error.message
      value: '{{ _ingest.on_failure_message }}'
```
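The edited pipeline only takes effect after it is pushed to Elasticsearch, and it can be dry-run there with the `_simulate` API. A sketch using the host and credentials from the filebeat.yml above; the pipeline name shown is an assumption (Filebeat registers module pipelines under version-prefixed names, so list the real ones first):

```shell
# Push the edited module pipelines to Elasticsearch.
cd /home/nginx/filebeat
./filebeat setup --pipelines --modules nginx -c filebeat.yml

# Dry-run with one sample document. The pipeline name below is an assumption;
# list the actual names with:
#   curl -u maxzhao:maxzhao. 'http://192.168.14.123:49200/_ingest/pipeline?pretty'
curl -u maxzhao:maxzhao. -H 'Content-Type: application/json' \
  'http://192.168.14.123:49200/_ingest/pipeline/filebeat-8.6.0-nginx-access-pipeline/_simulate?pretty' \
  -d '{"docs":[{"_source":{"message":"192.0.2.10 - - [16/Jan/2022:12:00:00 +0800] \"GET / HTTP/1.1\" 200 512 \"-\" 0.003 0.002 127.0.0.1:8080 200 \"curl/7.61.1\""}}]}'
```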
In `filebeat/fields.yml`, add the following under the `nginx` structure:

```yaml
- name: request_time
  type: double
  description: >
    The request_time.
- name: response_time
  type: double
  description: >
    The upstream_response_time.
- name: upstream_addr
  type: keyword
  description: >
    The upstream_addr.
- name: upstream_status_code
  type: long
  description: >
    The upstream_status.
```
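After changing fields.yml, the index template should be regenerated so the new fields get proper mappings rather than dynamic defaults. A sketch of that step (not shown in the original post; note that with `setup.template.overwrite: false` in the config above, the existing template would need to be deleted or overwrite enabled for the update to apply):

```shell
cd /home/nginx/filebeat
./filebeat setup --index-management -c filebeat.yml -e
```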
Permalink: https://github.com/maxzhao-it/blog/post/e0916ba3/