Spring Cloud in Practice 8: Distributed Tracing

1. Spring Cloud Sleuth and Correlation IDs

By adding Spring Cloud Sleuth to your Spring microservices, you can:

  • Transparently create and inject a correlation ID into service calls if one does not already exist
  • Manage the propagation of the correlation ID to outbound service calls, so that the correlation ID for a transaction is automatically added to outbound calls
  • Add the correlation information to Spring's MDC logging, so that the correlation ID is automatically recorded by Spring Boot's default SLF4J and Logback implementation
  • Optionally publish the tracing information in the service call to the Zipkin distributed tracing platform

Adding Spring Cloud Sleuth to the licensing and organization services

Add the dependency:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
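
The starter above declares no version, which assumes the project imports the Spring Cloud dependency BOM. If yours does not, a dependencyManagement entry along the following lines is needed (the Finchley.SR1 release train is an assumption; use whichever train your project targets):

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.SR1</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>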

Anatomy of a Spring Cloud Sleuth trace

Spring Cloud Sleuth adds four pieces of information to each log entry: the application (service) name, the trace ID, the span ID, and an export flag indicating whether the span is sent to Zipkin.
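
For example, a log line from the licensing service might look like the following (the service name, IDs, and controller class are illustrative):

2018-10-10 12:00:00.123  DEBUG [licensingservice,2f6c726fb7172363,3e5f241c4d3a8f2b,false] 1 --- [nio-8080-exec-1] c.t.license.controller.LicenseController : Looking up license

Here licensingservice is the spring.application.name, 2f6c726fb7172363 is the trace ID shared by every service participating in the transaction, 3e5f241c4d3a8f2b is the span ID for this hop, and false means the span is not exported to Zipkin.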

2. Log Aggregation and Spring Cloud Sleuth

Options for log aggregation with Spring Boot include the open source ELK stack (Elasticsearch, Logstash, Kibana) and Graylog, as well as commercial and cloud offerings such as Splunk, Sumo Logic, and Papertrail; this article uses ELK.

Implementing log aggregation with Spring Cloud Sleuth and ELK

Add the Logback-related dependencies:

<dependency>
	<groupId>net.logstash.logback</groupId>
	<artifactId>logstash-logback-encoder</artifactId>
	<version>4.9</version>
</dependency>
<dependency>
	<groupId>ch.qos.logback</groupId>
	<artifactId>logback-classic</artifactId>
	<version>1.2.3</version>
</dependency>
<dependency>
	<groupId>ch.qos.logback</groupId>
	<artifactId>logback-core</artifactId>
	<version>1.2.3</version>
</dependency>

Create the logback-spring.xml logging configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName" source="spring.application.name"/>

    <!-- You can override this to have a custom pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Appender to log to console -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <!-- Minimum logging level to be presented in the console logs-->
            <level>DEBUG</level>
        </filter>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>
    <!-- Appender that ships JSON-formatted log events to Logstash -->
    <appender name="logstash-tcp"
              class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>logstash:9601</destination>
        <!-- Log output encoder -->
        <encoder
                class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="console"/>
        <appender-ref ref="logstash-tcp"/>
    </root>
</configuration>
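
With this configuration, each log event reaches Logstash as a single JSON document. An event from the organization service might look like the following (all values are illustrative):

{
  "@timestamp": "2018-10-10T12:00:00.123Z",
  "severity": "INFO",
  "service": "organizationservice",
  "trace": "2f6c726fb7172363",
  "span": "3e5f241c4d3a8f2b",
  "exportable": "false",
  "pid": "1",
  "thread": "http-nio-8080-exec-1",
  "class": "o.s.w.s.DispatcherServlet",
  "rest": "Completed initialization in 12 ms"
}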

The ELK log collection UI (ELK is deployed via Docker; the deployment files are available in the source code)
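
For the appender above to be understood, Logstash needs a TCP input listening on port 9601 with a JSON codec. A minimal pipeline sketch (the elasticsearch hostname is an assumption based on the Docker service name; the actual deployment files are in the source repository):

input {
  tcp {
    # Must match the <destination> in logback-spring.xml
    port => 9601
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}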

3. Distributed Tracing with Open Zipkin

Add the Zipkin dependency to the services:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
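
Because the configuration below sets spring.zipkin.sender.type to rabbit, trace data is published to RabbitMQ instead of being posted over HTTP; per the Sleuth documentation this also requires spring-rabbit on the classpath:

<dependency>
	<groupId>org.springframework.amqp</groupId>
	<artifactId>spring-rabbit</artifactId>
</dependency>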

Configure the services to point to Zipkin:

spring:
  zipkin:
    base-url: http://zipkin:9411/
    sender:
      type: rabbit
  sleuth:
    sampler:
      probability: 1.0
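
sampler.probability: 1.0 tells Sleuth to send 100% of spans to Zipkin, which is convenient in development; in production you would normally sample only a fraction (for example 0.1 for 10%) to limit overhead. Note that with the rabbit sender type, traces travel through the message broker and, per the Sleuth documentation, base-url is not used for transport.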

Install and configure the Zipkin server, deployed with Docker:

version: '3'
services:
  zipkin:
    image: openzipkin/zipkin
    restart: always
    ports:
      - "9411:9411"
    environment:
      # Point Zipkin's RabbitMQ collector at the broker
      # (mq-dev is the broker's hostname on the Docker network)
      RABBIT_ADDRESSES: "mq-dev"
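
After docker-compose up -d, the Zipkin UI is reachable on port 9411 of the Docker host (http://localhost:9411 in a local setup).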

Zipkin trace results: in the Zipkin UI, each transaction appears as a trace broken into spans, one per service hop, showing where time is spent.

Reposted from: https://my.oschina.net/u/869718/blog/2250373