[TOC]

# **Log Center**

## Log processing flow

![](https://box.kancloud.cn/926dd5b6dbdc5fa90b65844e9fdfb14c_1581x368.png)

![](https://box.kancloud.cn/7bec6d917700607b454082de7ac270a4_1068x531.png)

## log-spring-boot-starter: the base logging module

logback-spring.xml in log-spring-boot-starter:

![](https://img.kancloud.cn/4e/e7/4ee7712c1e8b72b1bee94577e60f1b62_1695x454.png)

Change this line:

```
<springProperty name="LOG_FILE" scope="context" source="logging.file" defaultValue="/logs/${APP_NAME}"/>
```

so that all logs are written under /logs, and run the following on every machine that hosts a microservice:

```
mkdir /logs
chmod -R 777 /logs
```

Alternatively, without changing the logback-spring.xml configuration above, you can symlink every microservice's log directory into /logs with `ln -s`:

```
mkdir /logs
ln -s /app/ocp/user-center/logs/user-center/ /logs
ln -s /app/ocp/eureka-server/logs/eureka-server/ /logs
ln -s /app/ocp/api-gateway/logs/auth-gateway/ /logs
ln -s /app/ocp/api-gateway/logs/api-gateway/ /logs
ln -s /app/ocp/auth-server/logs/auth-server/ /logs
ln -s /app/ocp/file-center/logs/file-center/ /logs
```

![](https://img.kancloud.cn/a8/84/a88418ced3a70f708d59b5243c8aeef7_850x84.png)

Both approaches collect every microservice's logs under /logs, so that Filebeat can harvest that directory.

## log-center and its Elasticsearch index

* Index fields in Elasticsearch

![](https://img.kancloud.cn/94/50/94508481f1029388825d1ad32408b036_1769x819.png)

* Mapping between the Java object and the Elasticsearch index

![](https://img.kancloud.cn/2a/72/2a72b15f93d9ad78d4c0edbb37ae88a1_1914x494.png)

* ServiceLogDao reads Elasticsearch data into ServiceLogDocument

![](https://img.kancloud.cn/bc/50/bc5025f3fc971e6eb7bfea9d75cfe113_1511x362.png)

![](https://img.kancloud.cn/d6/da/d6dac0f5e83e0892bf62b9eea4acada3_600x571.png)

## Core principles

![](https://img.kancloud.cn/79/0b/790b3bc93718a062ac6d4f4d8365aae1_859x194.png)

### Greenwich.SR6 with Elasticsearch 6

With this combination you must adjust application.yml, because the default configuration targets ES 7. For ES 6, change application.yml as follows:

```
spring:
  #elasticsearch configuration
  data:
    elasticsearch:
      cluster-name: elasticsearch
      cluster-nodes: 47.99.88.28:9300
      repositories:
        enabled: true
      properties:
        transport:
          tcp:
            connect_timeout: 120s
```

With ES 6 the document type is `doc`.

![](https://img.kancloud.cn/32/b4/32b426952258e9f19eef7082c5b6218d_1499x510.png)

### Hoxton.SR8 with Elasticsearch 7 ([ES 7 setup guide](https://www.kancloud.cn/owenwangwen/open-capacity-platform/1656401))

ES 7 is the default configuration; Spring Data Elasticsearch is integrated through the high level REST client. Configuration:

```
spring:
  #elasticsearch configuration
  elasticsearch:
    rest:
      uris:
        - http://192.168.11.130:9200
  data:
    elasticsearch:
      client:
        reactive:
          endpoints: 192.168.11.130:9200
          socket-timeout: 3000
          connection-timeout: 3000
```

With ES 7 the document type is `_doc`.

![](https://img.kancloud.cn/7b/7f/7b7fae8f3ca28c12afae35f0635040f4_1005x559.png)

## Spring Data Elasticsearch

![](https://img.kancloud.cn/31/48/3148ec8530f6da153efec32f9cf8c848_2374x930.png)

* AbstractElasticsearchConfiguration: creates the ElasticsearchRestTemplate.
* AbstractReactiveElasticsearchConfiguration: creates the ReactiveElasticsearchTemplate.
* ElasticsearchRepositoryConfigExtension
* ReactiveElasticsearchRepositoryConfigurationExtension
* ElasticsearchCrudRepository: abstract interface with CRUD support.
* ReactiveElasticsearchRepository: abstract interface with reactive CRUD support.

![](https://img.kancloud.cn/70/ec/70ec075c489d25bf65d3bca2f3e0ccf3_1472x858.png)

ReactiveElasticsearchClient uses the request/response objects provided by the Elasticsearch core project; calls operate directly on the reactive stack rather than handing responses off to an asynchronous thread pool. This is the approach supported by default in recent Spring Data Elasticsearch versions, configured as follows:

```
elasticsearch:
  rest:
    uris:
      - http://192.168.11.130:9200
data:
  elasticsearch:
    client:
      reactive:
        endpoints: 192.168.11.130:9200
        socket-timeout: 3000
        connection-timeout: 3000
```

## Accessing the API through Swagger

![](https://img.kancloud.cn/fa/e2/fae29d0804b8891cad96f7400491b190_1895x981.png)

Before starting log-center, Filebeat, Logstash and Elasticsearch need to be deployed:

| Software | Version | Notes |
| --- | --- | --- |
| CentOS | 7.5 | |
| JDK | 1.8 | on 47.99.88.28 |
| elasticsearch | 6.5.4 | on 47.99.88.28 |
| elasticsearch-head | 6.x | windows |
| filebeat | 6.5.4 | on 47.99.88.28 |
| logstash | 6.5.4 | on 47.99.88.28 |
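The ServiceLogDocument entity and ServiceLogDao repository referenced above are only shown as screenshots. As a rough orientation, here is a minimal sketch of what such a pair could look like with Spring Data Elasticsearch; the field names follow the structured log record described in the Filebeat section below, while the index name, annotations and field types are assumptions rather than the exact project source.

```
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

// Sketch of the document read from the ocp-log-* indices; the real index is created
// per day by Logstash (ocp-log-yyyy.MM.dd), so "ocp-log" here is only illustrative.
@Document(indexName = "ocp-log", type = "doc") // ES 6 style; drop "type" on ES 7 / Hoxton
public class ServiceLogDocument {

    @Id
    private String id;

    @Field(type = FieldType.Keyword)
    private String appName;        // service name, e.g. user-center

    @Field(type = FieldType.Keyword)
    private String serverIp;       // service ip

    @Field(type = FieldType.Keyword)
    private String serverPort;     // service port

    @Field(type = FieldType.Keyword)
    private String contextTraceId; // trace id of the calling context

    @Field(type = FieldType.Keyword)
    private String currentTraceId; // trace id of the current call

    @Field(type = FieldType.Keyword)
    private String logLevel;

    @Field(type = FieldType.Keyword)
    private String threadName;

    @Field(type = FieldType.Keyword)
    private String classname;

    @Field(type = FieldType.Text)
    private String message;        // raw log message

    @Field(type = FieldType.Date)
    private java.util.Date timestamp;

    // getters and setters omitted for brevity
}

// Repository sketch: Spring Data Elasticsearch generates the implementation,
// so log-center only needs to declare the interface and its query methods.
interface ServiceLogDao extends ElasticsearchRepository<ServiceLogDocument, String> {
}
```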
# Elasticsearch installation

## Create the directory

```
mkdir /app
cd /app
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.4.tar.gz
tar -zxvf elasticsearch-6.5.4.tar.gz
useradd es
cd elasticsearch-6.5.4

# set the JVM heap in config/jvm.options to half of the machine's memory
vi config/jvm.options
-Xms512m
-Xmx512m

# adjust the max file and max virtual memory parameters:
# as root (or via sudo) edit /etc/sysctl.conf
vi /etc/sysctl.conf
# add the following setting
vm.max_map_count=655360
# and apply it
sysctl -p
```

## Update /etc/security/limits.conf

```
grep -q "* - nofile" /etc/security/limits.conf || cat >> /etc/security/limits.conf << EOF
########################################
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
EOF
```

## Update elasticsearch.yml

```
vi /app/elasticsearch-6.5.4/config/elasticsearch.yml

cluster.name: elasticsearch
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
node.max_local_storage_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
```

## Grant permissions and start

```
chown -R es:es /app/elasticsearch-6.5.4/
su - es -c '/app/elasticsearch-6.5.4/bin/elasticsearch -d'
```

## Inspect the process

```
jinfo -flags 2114
VM Flags:
-XX:+AlwaysPreTouch -XX:CICompilerCount=2 -XX:CMSInitiatingOccupancyFraction=75 -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:InitialHeapSize=536870912 -XX:MaxDirectMemorySize=268435456 -XX:MaxHeapSize=536870912 -XX:MaxNewSize=87228416 -XX:MaxTenuringThreshold=6 -XX:MinHeapDeltaBytes=196608 -XX:NewSize=87228416 -XX:NonNMethodCodeHeapSize=5825164 -XX:NonProfiledCodeHeapSize=122916538 -XX:OldSize=449642496 -XX:-OmitStackTraceInFastThrow -XX:ProfiledCodeHeapSize=122916538 -XX:-RequireSharedSpaces -XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:ThreadStackSize=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:-UseSharedSpaces
```

## Inspect JVM flags

```
jinfo -flag MaxHeapSize 2114
-XX:MaxHeapSize=536870912

jinfo -flag NewSize 2114
-XX:NewSize=87228416

jinfo -flag ThreadStackSize 2114
-XX:ThreadStackSize=1024

jinfo -flag OldSize 2114
-XX:OldSize=449642496
```

## Change JVM flags at runtime

```
jinfo -flag +HeapDumpOnOutOfMemoryError 2114
jinfo -flag HeapDumpPath=/app/elasticsearch-6.5.4/dump 2114
```

# logstash

## Logstash installation and configuration

```
cd /app
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
tar -zxvf logstash-6.5.4.tar.gz
cd logstash-6.5.4/
ls
cd bin
```

## logstash.conf

vi logstash.conf

```
input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][docType] == "sys-log" {
    grok {
      patterns_dir => ["/app/logstash-6.5.4/patterns"]
      match => { "message" => "\[%{NOTSPACE:appName}\:%{NOTSPACE:serverIp}\:%{NOTSPACE:serverPort}\] \[%{MYAPPNAME:contextTraceId},%{MYAPPNAME:currentTraceId}\] %{TIMESTAMP_ISO8601:logTime} %{LOGLEVEL:logLevel} %{WORD:pid} \[%{MYTHREADNAME:threadName}\] %{NOTSPACE:classname} %{GREEDYDATA:message}" }
      overwrite => ["message"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS"]
      target => "timestamp"
    }
    mutate {
      remove_field => "logTime"
      remove_field => "@version"
      remove_field => "host"
      remove_field => "offset"
    }
  }
}
output {
  if [fields][docType] == "sys-log" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      manage_template => false
      index => "ocp-log-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  if [fields][docType] == "biz-log" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      manage_template => false
      index => "biz-log-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
```
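To see what the grok expression above extracts from an actual log line, the following Java snippet applies a hand-written regex approximation of that pattern to a sample line from the platform. It is for illustration only and is not part of the project; in production the parsing is done by Logstash, using the custom MYAPPNAME/MYTHREADNAME patterns defined in the next section.

```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GrokPatternDemo {

    // Rough Java-regex equivalent of the grok pattern in logstash.conf, using named groups.
    private static final Pattern LOG_LINE = Pattern.compile(
            "\\[(?<appName>[^:\\]]+):(?<serverIp>[^:\\]]+):(?<serverPort>[^\\]]+)\\] "
          + "\\[(?<contextTraceId>[0-9a-zA-Z_-]*),(?<currentTraceId>[0-9a-zA-Z_-]*)\\] "
          + "(?<logTime>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}) "
          + "(?<logLevel>[A-Z]+) (?<pid>\\w+) "
          + "\\[(?<threadName>[^\\]]+)\\] (?<classname>\\S+) (?<message>.*)");

    public static void main(String[] args) {
        String line = "[user-center:172.16.26.117:7000] [869f32593b6bbf6b,5aa8fbe5ba17b0d8] "
                + "2019-02-25 00:40:58.749 INFO 3417 [http-nio-7000-exec-197] "
                + "com.open.capacity.log.aop.LogAnnotationAOP start request, transid=869f32593b6bbf6b";

        Matcher m = LOG_LINE.matcher(line);
        if (m.matches()) {
            // These are the fields Logstash ships to the ocp-log-* index.
            System.out.println("appName        = " + m.group("appName"));
            System.out.println("serverIp       = " + m.group("serverIp"));
            System.out.println("serverPort     = " + m.group("serverPort"));
            System.out.println("contextTraceId = " + m.group("contextTraceId"));
            System.out.println("currentTraceId = " + m.group("currentTraceId"));
            System.out.println("logLevel       = " + m.group("logLevel"));
            System.out.println("threadName     = " + m.group("threadName"));
            System.out.println("classname      = " + m.group("classname"));
            System.out.println("message        = " + m.group("message"));
        }
    }
}
```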
## Using grok patterns in Logstash

~~~
mkdir -p /app/logstash-6.5.4/patterns
cd /app/logstash-6.5.4/patterns
vi java

# user-center
MYAPPNAME ([0-9a-zA-Z_-]*)
MYTHREADNAME ([0-9a-zA-Z._-]|\(|\)|\s)*
~~~

## Permissions

```
chmod -R 777 /app/logstash-6.5.4
```

## Start

```
cd /app/logstash-6.5.4/bin
nohup ./logstash -f logstash.conf >&/dev/null &
```

# filebeat

## Note: Filebeat and Logstash run on the same server here

Filebeat (collection, aggregation) -> Logstash (filtering, structuring) -> ES

![](https://box.kancloud.cn/c07fb298e033603ef6992a906a5105ac_918x717.png)

Filebeat harvests /logs/*/*.log, so you can use symlinks to gather the logs of the different modules under /logs.

* Preparation

```
mkdir /logs
ln -s /app/ocp/user-center/logs/user-center/ /logs
ln -s /app/ocp/eureka-server/logs/eureka-server/ /logs
ln -s /app/ocp/api-gateway/logs/auth-gateway/ /logs
ln -s /app/ocp/auth-server/logs/auth-server/ /logs
ln -s /app/ocp/file-center/logs/file-center/ /logs
```

![](https://img.kancloud.cn/a8/84/a88418ced3a70f708d59b5243c8aeef7_850x84.png)

* Download Filebeat

```
cd /app
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-linux-x86_64.tar.gz
tar -zxvf filebeat-6.5.4-linux-x86_64.tar.gz
cd /app/filebeat-6.5.4-linux-x86_64
```

* Configure filebeat.yml

vi filebeat.yml

```
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log
  enabled: true
  paths:
    #- /var/log/*.log
    - /logs/*/*.log                          ## console logs
  exclude_lines: ['^DEBUG']
  ## extra fields
  fields:
    docType: sys-log
    project: open-capacity-platform
  # multiline aggregation
  multiline:
    pattern: ^\[
    negate: true
    match: after

- type: log
  enabled: true
  paths:
    #- /var/log/*.log
    - /app/ocp/user-center/logs/biz/*.log    ## business logs
  exclude_lines: ['^DEBUG']
  ## extra fields
  fields:
    docType: biz-log
    project: open-capacity-platform
  # json aggregation
  # keys_under_root puts the parsed fields at the root of the event; default is false
  json.keys_under_root: true
  # overwrite existing keys of the same name
  json.overwrite_keys: true
  # message_key is used to combine multi-line json logs; it also requires the multiline settings
  json.message_key: message
  # store messages that fail to parse in the error.message field
  json.add_error_key: true

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched.
  # The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
#index.codec: best_compression
#_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["192.168.28.130:9200"]
  # index: "filebeat-log"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["127.0.0.1:5044"]
  bulk_max_size: 2048

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
```

## If Filebeat and Logstash are not on the same server, the configuration above needs to change

```
output.logstash:
  # The Logstash hosts, assuming Logstash is deployed on 47.99.88.66
  hosts: ["47.99.88.66:5044"]
  bulk_max_size: 2048
```

* Permissions

```
chmod -R 777 /app/filebeat-6.5.4
chmod go-w /app/filebeat-6.5.4/filebeat.yml
```

* Start

```
nohup ./filebeat -e -c filebeat.yml >&/dev/null &
```

* Check the startup with lsof -p <filebeat pid>

```
[root@iZbp178t3hp8rt4k9u953rZ filebeat-6.5.4]# lsof -p `ps | grep "filebeat" | grep -v "grep" |awk '{print $1}'`
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
filebeat 15802 root cwd DIR 253,1 4096 2359362 /app/filebeat-6.5.4
filebeat 15802 root rtd DIR 253,1 4096 2 /
filebeat 15802 root txt REG 253,1 35930715 2359779 /app/filebeat-6.5.4/filebeat
filebeat 15802 root mem REG 253,1 61624 1052387 /usr/lib64/libnss_files-2.17.so
filebeat 15802 root mem REG 253,1 2156160 1052369 /usr/lib64/libc-2.17.so
filebeat 15802 root mem REG 253,1 19288 1052375 /usr/lib64/libdl-2.17.so
filebeat 15802 root mem REG 253,1 142232 1052395 /usr/lib64/libpthread-2.17.so
filebeat 15802 root mem REG 253,1 163400 1052362 /usr/lib64/ld-2.17.so
filebeat 15802 root 0w CHR 1,3 0t0 18 /dev/null
filebeat 15802 root 1w CHR 1,3 0t0 18 /dev/null
filebeat 15802 root 2w CHR 1,3 0t0 18 /dev/null
filebeat 15802 root 3u IPv4 583959162 0t0 TCP iZbp178t3hp8rt4k9u953rZ:47346->izbp1jc2amxbl3xjw02s2xz:XmlIpcRegSvc (ESTABLISHED)
filebeat 15802 root 4u a_inode 0,10 0 6091 [eventpoll]
filebeat 15802 root 5r REG 253,1 627836 1710139 /app/openresty/nginx/logs/access.log
filebeat 15802 root 6r REG 253,1 298509 1442290 /app/ocp/user-center/logs/user-center/user-center-info.log
filebeat 15802 root 7r REG 253,1 956962 1442293 /app/ocp/api-gateway/logs/api-gateway/api-gateway-info.log
filebeat 15802 root 8r REG 253,1 1347580 1442289 /app/ocp/auth-server/logs/auth-server/auth-server-info.log
filebeat 15802 root 10u IPv4 583959848 0t0 TCP iZbp178t3hp8rt4k9u953rZ:47348->izbp1jc2amxbl3xjw02s2xz:XmlIpcRegSvc (ESTABLISHED)
filebeat 15802 root 11r REG 253,1 50445 1442291 /app/ocp/file-center/logs/file-center/file-center-info.log
```

* The structured log records are stored in ES in the following shape

```
{
  "contextTraceId": "context trace id",
  "currentTraceId": "current trace id",
  "timestamp": "time",
  "message": "the log message",
  "threadName": "thread name",
  "serverPort": "service port",
  "serverIp": "service ip",
  "logLevel": "log level",
  "appName": "application name",
  "classname": "class name"
}
```

* Counting calls per URL on Linux

```
awk '{print $7}' user-center-info.log | sort | uniq -c | sort -fr
```

![](https://img.kancloud.cn/d8/de/d8deb79578608efde3f99670578b2644_700x110.png)

# elasticsearch-head

## Installation

1. Install ElasticSearch 6.x and open http://47.99.88.28:9200/ to verify the installation.
2. Install Node and verify it with node -v.
3. Install grunt with npm install -g grunt-cli and verify it with grunt -version.
4. Install elasticsearch-head.
    * Go to https://github.com/mobz/elasticsearch-head and download the head plugin (choose the zip download).
    ![](https://box.kancloud.cn/7d134411776127fbad0490d8dbcd7271_778x486.png)
    * Edit ~\\elasticsearch-6.6.2\\elasticsearch-head-master\\Gruntfile.js and add a hostname: '\*' entry in the corresponding place.
    ![](https://box.kancloud.cn/de8a0294efc7874039014cff43b8676c_572x253.png)
    * Run npm install under ~\\elasticsearch-6.6.2\\elasticsearch-head-master; when it finishes, run grunt server or npm run start to start the head plugin.
    ![](https://box.kancloud.cn/201550aa6875ddcaf4ab9013105e02f8_792x386.png)
    * Once installed successfully, open http://localhost:9100/.
    ![](https://box.kancloud.cn/9d183a58df74ce8240d2b16d7f80f93a_1890x650.png)
5. Troubleshooting: connecting to ES from head fails.

![](https://box.kancloud.cn/2a438ae6d7385e9eb21850beee5c2fe2_1733x899.png)

For the Access-Control-Allow-Origin problem, append the following to the ~\\config\\elasticsearch.yml of ElasticSearch 6.x:

```
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
```

After updating the configuration, restart ES and the connection will succeed.

## Usage

![](https://img.kancloud.cn/3f/08/3f087c2cf254c14b6d4d5f17bf21ad05_1082x341.png)

# Installing grokdebug

## Install Docker

```
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@localhost ~]# cat /etc/selinux/config
[root@localhost ~]# getenforce
[root@localhost ~]# setenforce 0
[root@localhost ~]# getenforce
[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@localhost ~]# swapoff -a
[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@localhost ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@localhost ~]# yum -y install docker-ce-18.06.1.ce-3.el7
[root@localhost ~]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@localhost ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a
```

## Install Docker Compose

```
curl -L https://github.com/docker/compose/releases/download/1.24.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```

## Configure docker-compose.yml

```
[root@JD app]# cat docker-compose.yml
version: "3"
services:
  grok:
    image: epurs/grokdebugger
    ports:
      - "80:80"
```

## Start

```
[root@JD app]# docker-compose up -d
```

## Verify

* [user-center:172.16.26.117:7000] [869f32593b6bbf6b,5aa8fbe5ba17b0d8] 2019-02-25 00:40:58.749 INFO 3417 [http-nio-7000-exec-197] com.open.capacity.log.aop.LogAnnotationAOP 开始请求,transid=869f32593b6bbf6b, url=com.open.capacity.user.controller.SysUserController/findByUsername , httpMethod=null, reqData=["admin"]
* \[%{NOTSPACE:appName}\:%{NOTSPACE:serverIp}\:%{NOTSPACE:serverPort}\] \[%{MYAPPNAME:contextTraceId},%{MYAPPNAME:currentTraceId}\] %{TIMESTAMP_ISO8601:logTime} %{LOGLEVEL:logLevel} %{WORD:pid} \[%{MYTHREADNAME:threadName}\] %{NOTSPACE:classname} %{GREEDYDATA:message}
* MYAPPNAME ([0-9a-zA-Z_-]*)
* MYTHREADNAME ([0-9a-zA-Z._-]|\(|\)|\s)*

![](https://img.kancloud.cn/c4/6b/c46bfbe09e56c87715fd9617145256f9_1912x982.png)

## Parsed result

```
{
  "appName": [ [ "user-center" ] ],
  "serverIp": [ [ "172.16.26.117" ] ],
  "serverPort": [ [ "7000" ] ],
  "contextTraceId": [ [ "869f32593b6bbf6b" ] ],
  "currentTraceId": [ [ "5aa8fbe5ba17b0d8" ] ],
  "logTime": [ [ "2019-02-25 00:40:58.749" ] ],
  "YEAR": [ [ "2019" ] ],
  "MONTHNUM": [ [ "02" ] ],
  "MONTHDAY": [ [ "25" ] ],
  "HOUR": [ [ "00", null ] ],
  "MINUTE": [ [ "40", null ] ],
  "SECOND": [ [ "58.749" ] ],
  "ISO8601_TIMEZONE": [ [ null ] ],
  "logLevel": [ [ "INFO" ] ],
  "pid": [ [ "3417" ] ],
  "threadName": [ [ "http-nio-7000-exec-197" ] ],
  "classname": [ [ "com.open.capacity.log.aop.LogAnnotationAOP" ] ],
  "message": [ [ "开始请求,transid=869f32593b6bbf6b, url=com.open.capacity.user.controller.SysUserController/findByUsername , httpMethod=null, reqData=["admin"] " ] ]
}
```

# Business logs

Business logs are written as structured JSON, for example:

{"message":"tttt","transId":"46d803fd318f1dd3","token":"6d8ea03f-c82d-4e39-bdf2-11347d8f6be5","username":"admin","msg":"hello","error":null,"host":"130.75.131.208","appName":"user-center"}

so Logstash's grok filter is no longer needed to structure these logs.

* Usage in code

BizLog.info("角色列表", LogEntry.builder().clazz(this.getClass().getName()).method("findRoles").msg("hello").path("/roles").build());

* Result

![](https://img.kancloud.cn/c4/b0/c4b05d582f3efa19c6e7724cdb355ac5_1910x394.png)
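The BizLog helper and LogEntry builder used above come from log-spring-boot-starter, and only their call site is shown here. As a rough sketch of how such a helper could be implemented (assuming Lombok and Jackson; the field names follow the JSON sample above, and the class bodies are illustrative rather than the actual project source):

```
import java.util.Map;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.Builder;
import lombok.Data;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch of a structured business-log entry; the fields mirror the JSON sample above.
@Data
@Builder
class LogEntry {
    private String clazz;    // calling class
    private String method;   // calling method
    private String path;     // request path, e.g. /roles
    private String msg;      // business message
    private String transId;  // trace id, normally taken from the current trace context
    private String username; // current user, normally taken from the security context
    private String error;    // error detail, null when the call succeeds
}

// Sketch of the helper: serialize the entry as one JSON object per line on a dedicated
// "biz-log" logger whose file appender writes to logs/biz/*.log, the directory that the
// biz-log Filebeat input (docType: biz-log, json.keys_under_root: true) picks up.
final class BizLog {

    private static final Logger LOG = LoggerFactory.getLogger("biz-log");
    private static final ObjectMapper MAPPER = new ObjectMapper();

    private BizLog() {
    }

    public static void info(String message, LogEntry entry) {
        try {
            Map<String, Object> event =
                    MAPPER.convertValue(entry, new TypeReference<Map<String, Object>>() {});
            event.put("message", message);
            LOG.info(MAPPER.writeValueAsString(event));
        } catch (Exception e) {
            LOG.warn("failed to write business log", e);
        }
    }
}
```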
## Shipping structured logs from Filebeat directly to ES

```
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log
  enabled: true
  paths:
    #- /var/log/*.log
    - /app/ocp/user-center/logs/biz/*.log
  exclude_lines: ['^DEBUG']
  ## extra fields
  fields:
    docType: biz-log
    project: open-capacity-platform
  # json aggregation
  # keys_under_root puts the parsed fields at the root of the event; default is false
  json.keys_under_root: true
  # overwrite existing keys of the same name
  json.overwrite_keys: true
  # message_key is used to combine multi-line json logs; it also requires the multiline settings
  json.message_key: message
  # store messages that fail to parse in the error.message field
  json.add_error_key: true

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
#index.codec: best_compression
#_source.enabled: false

setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

output.elasticsearch:
  enabled: true
  hosts: ["127.0.0.1:9200"]
  index: "biz-log-%{+yyyy-MM-dd}"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  #- add_host_metadata: ~
  #- add_cloud_metadata: ~
  - drop_fields:
      fields: ["beat.name", "beat.version", "host.architecture", "host.name", "beat.hostname", "log.file.path"]
```

![](https://img.kancloud.cn/fa/c9/fac97114bd6cb5e4602f379aa2532816_1916x262.png)

For production recommendations, see [13. Unified log center](11.%E7%BB%9F%E4%B8%80%E6%97%A5%E5%BF%97%E4%B8%AD%E5%BF%83.md).