Spring Cloud: Configuring Logback to Output the Sleuth spanId and traceId
For a source-level analysis of Spring Boot's Logback support, this article is well worth reading: https://juejin.im/post/5cd0cdcc6fb9a0323120a3f4
1. Everyday logging configuration
Here is a logback-spring.xml that covers the usual logging needs:
1. Console logs (with color output)
2. A full log file (all log levels, one file per day, compressed automatically)
3. An error log file (ERROR level only, one file per day, compressed automatically)
Notes:
a. The log directory needs to be set in application.yml via logging.path: /data/logs/${spring.application.name} (a sketch follows below).
b. The configuration can be extended as needed, for example with an ELK appender, different log levels, or extra filters; that is outside the scope of this article.
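For note (a), a minimal application.yml sketch — the service name test-project is only a placeholder, and on Spring Boot 2.2+ the property is logging.file.path rather than logging.path:

spring:
  application:
    name: test-project
logging:
  # Spring Boot resolves the placeholder, so each service logs to its own directory
  path: /data/logs/${spring.application.name}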
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">
<!-- Based on Spring Boot's default logback configuration, with an error log file added -->
<!-- org/springframework/boot/logging/logback/base.xml -->
<conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
<conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
<conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />
<property name="LOG_PATH" value="${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}"/>
<property name="CONSOLE_LOG_PATTERN" value="${CONSOLE_LOG_PATTERN:-%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
<property name="FILE_LOG_PATTERN" value="${FILE_LOG_PATTERN:-%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
<!-- Console logs -->
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>${CONSOLE_LOG_PATTERN}</pattern>
</encoder>
</appender>
<!-- Full log (all levels) -->
<appender name="FILE_ALL" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<file>${LOG_PATH}/all.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${LOG_PATH}/all.log.%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
<maxFileSize>${LOG_FILE_MAX_SIZE:-50MB}</maxFileSize>
<maxHistory>${LOG_FILE_MAX_HISTORY:-0}</maxHistory>
</rollingPolicy>
</appender>
<!-- Error log -->
<appender name="FILE_ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<file>${LOG_PATH}/err.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${LOG_PATH}/err.log.%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
<maxFileSize>${LOG_FILE_MAX_SIZE:-50MB}</maxFileSize>
<maxHistory>${LOG_FILE_MAX_HISTORY:-0}</maxHistory>
</rollingPolicy>
<!-- Only let ERROR-level events through -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<!-- Root logger (master switch) -->
<root level="INFO">
<appender-ref ref="CONSOLE" />
<appender-ref ref="FILE_ALL" />
<appender-ref ref="FILE_ERROR" />
</root>
<!-- Silence chatty framework loggers -->
<logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
<logger name="org.apache.catalina.util.LifecycleBase" level="ERROR"/>
<logger name="org.apache.coyote.http11.Http11NioProtocol" level="WARN"/>
<logger name="org.apache.sshd.common.util.SecurityUtils" level="WARN"/>
<logger name="org.apache.tomcat.util.net.NioSelectorPool" level="WARN"/>
<logger name="org.eclipse.jetty.util.component.AbstractLifeCycle" level="ERROR"/>
<logger name="org.hibernate.validator.internal.util.Version" level="WARN"/>
</configuration>
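As a quick sanity check of the three appenders, here is a minimal sketch using plain SLF4J (the class name and messages are invented for illustration): an INFO event should reach the console and all.log, while an ERROR event should additionally land in err.log via the LevelFilter.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingSmokeTest {

    private static final Logger log = LoggerFactory.getLogger(LoggingSmokeTest.class);

    public static void main(String[] args) {
        // Accepted by CONSOLE and FILE_ALL, rejected by FILE_ERROR's LevelFilter
        log.info("info message: console + all.log");
        // Accepted by all three appenders
        log.error("error message: console + all.log + err.log",
                new IllegalStateException("demo exception"));
    }
}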
2. How do Sleuth and Zipkin get the traceId into the logs?
I had always wondered why, once sleuth + zipkin are on the classpath, the logs start carrying a traceId and a spanId:
2019-06-11 16:16:00.117 INFO [test-project,29ce39ac8a54f753,29ce39ac8a54f753,false] 1636 --- [http-nio-12001-exec-2] ...
2019-06-11 16:16:00.332 INFO [test-project,29ce39ac8a54f753,29ce39ac8a54f753,false] 1636 --- [http-nio-12001-exec-2] ...
The bracketed block reads [application name, traceId, spanId, export flag]. Nothing in the FILE_LOG_PATTERN of the logback configuration above produces these fields, so something else must be at work. Digging through the Sleuth dependency, the answer turns out to live in spring-cloud-sleuth-core-xxx.jar, in this class:
org.springframework.cloud.sleuth.autoconfig.TraceEnvironmentPostProcessor
@Override
public void postProcessEnvironment(ConfigurableEnvironment environment,
        SpringApplication application) {
    Map<String, Object> map = new HashMap<String, Object>();
    // This doesn't work with all logging systems but it's a useful default so you see
    // traces in logs without having to configure it.
    if (Boolean.parseBoolean(environment.getProperty("spring.sleuth.enabled", "true"))) {
        map.put("logging.pattern.level",
                "%5p [${spring.zipkin.service.name:${spring.application.name:-}},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}]");
    }
    addOrReplace(environment.getPropertySources(), map);
}
At application startup this code rewrites the logging.pattern.level property, adding spring.application.name, X-B3-TraceId and X-B3-SpanId. In Logback terms it effectively replaces the LOG_LEVEL_PATTERN variable. Now go back to the FILE_LOG_PATTERN in the logback configuration:
${FILE_LOG_PATTERN:-%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}
The ${LOG_LEVEL_PATTERN:-%5p} fragment is exactly where that variable is referenced.
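Two practical consequences follow, sketched below with the property names taken from the code above. The post-processor only contributes default properties, so a value you set yourself in application.yml should normally take precedence; and setting spring.sleuth.enabled to false switches the injection off along with the rest of Sleuth. A hedged application.yml sketch:

# Option 1: disable Sleuth entirely (and with it the pattern injection)
spring:
  sleuth:
    enabled: false

# Option 2: keep Sleuth but supply your own level pattern, e.g. without the export flag
logging:
  pattern:
    level: "%5p [${spring.application.name:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-}]"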
Is that the end of the story? Not quite! Where do the values of spring.application.name, X-B3-TraceId and X-B3-SpanId actually come from? Look at the next class:
org.springframework.cloud.sleuth.log.Slf4jCurrentTraceContext
This class uses MDC (Mapped Diagnostic Context) to put the traceId and spanId into the logging context, which is how they end up in every log line.
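The idea can be reproduced with a few lines of plain SLF4J. This is only a minimal sketch of the mechanism, not Sleuth's actual code, and the trace/span values are made up:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcDemo {

    private static final Logger log = LoggerFactory.getLogger(MdcDemo.class);

    public static void main(String[] args) {
        // Anything put into the MDC is visible to the pattern layout via %X{key}
        MDC.put("X-B3-TraceId", "29ce39ac8a54f753");
        MDC.put("X-B3-SpanId", "29ce39ac8a54f753");
        try {
            // With LOG_LEVEL_PATTERN rewritten as above, this line carries both ids
            log.info("hello with trace context");
        } finally {
            // Always clean up: the MDC is thread-local
            MDC.remove("X-B3-TraceId");
            MDC.remove("X-B3-SpanId");
        }
    }
}

Slf4jCurrentTraceContext does essentially this each time a span becomes current on a thread, and removes the keys again when the scope closes, so every log statement written inside the span automatically carries its ids.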