Implementing database and table sharding with SpringBoot and sharding-jdbc

Overview

Sharding-JDBC is positioned as a lightweight Java framework that provides extra services at Java's JDBC layer. It connects to the database directly from the client side and is delivered as a jar, with no extra deployment or dependencies required. It can be thought of as an enhanced JDBC driver, fully compatible with JDBC and all common ORM frameworks.

  • Works with any Java-based ORM framework, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or plain JDBC.
  • Works on top of any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, HikariCP, etc.
  • Supports any database that implements the JDBC specification; currently MySQL, Oracle, SQL Server and PostgreSQL are supported.

Sharding configuration

Add the dependency

<!-- sharding-jdbc -->
<dependency>
	<groupId>org.apache.shardingsphere</groupId>
	<artifactId>sharding-jdbc-spring-boot-starter</artifactId>
	<version>4.0.0-RC2</version>
</dependency>

1. Configuring via properties

# Enable sharding and declare the data source names
spring.shardingsphere.enabled=true
spring.shardingsphere.datasource.names=ds0,ds1

# Data sources
spring.shardingsphere.datasource.ds0.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.ds0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds0.url=jdbc:mysql://localhost:3306/ds_0?characterEncoding=utf-8
spring.shardingsphere.datasource.ds0.username=root
spring.shardingsphere.datasource.ds0.password=123456

spring.shardingsphere.datasource.ds1.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.ds1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds1.url=jdbc:mysql://localhost:3306/ds_1?characterEncoding=utf-8
spring.shardingsphere.datasource.ds1.username=root
spring.shardingsphere.datasource.ds1.password=123456

# Table sharding rules for the user table
spring.shardingsphere.sharding.tables.user.actual-data-nodes=ds$->{0..1}.user_$->{0..2}
spring.shardingsphere.sharding.tables.user.table-strategy.inline.sharding-column=id
spring.shardingsphere.sharding.tables.user.table-strategy.inline.algorithm-expression=user_$->{id % 3}
spring.shardingsphere.sharding.tables.user.key-generator.column=id
spring.shardingsphere.sharding.tables.user.key-generator.type=SNOWFLAKE

# Custom table sharding algorithm (standard strategy)
#spring.shardingsphere.sharding.tables.user.table-strategy.standard.sharding-column=created_date
#spring.shardingsphere.sharding.tables.user.table-strategy.standard.precise-algorithm-class-name=com.hsoft.vip.server.config.TableShardingAlgorithm

# Database sharding rules
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds$->{id % 2}

# Default data source for tables without sharding rules
#spring.shardingsphere.sharding.default-data-source-name=ds0
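
With this configuration in place, application code does not need to change: any SQL executed against the sharded DataSource is routed by the id column. Below is a minimal sketch, assuming a hypothetical user table with id and name columns and a plain Spring JdbcTemplate; the UserService class and column names are illustrative, not part of the original configuration.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class UserService {

	@Autowired
	private JdbcTemplate jdbcTemplate; // backed by the sharding DataSource

	public void insert(long id, String name) {
		// The id is supplied explicitly here to make routing visible; normally the
		// SNOWFLAKE key generator would fill it in.
		// id = 7  ->  ds1.user_1  (7 % 2 = 1 for the database, 7 % 3 = 1 for the table)
		jdbcTemplate.update("INSERT INTO user (id, name) VALUES (?, ?)", id, name);
	}

	public String findName(long id) {
		// Because id is the sharding column, this query is routed to exactly one table.
		return jdbcTemplate.queryForObject("SELECT name FROM user WHERE id = ?", String.class, id);
	}
}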

2. Configuring via Java code

@Bean("datasource") // 声明其为Bean实例
@Primary // 在同样的DataSource中,首先使用被标注的DataSource
public DataSource dataSource() {
	
	DataSource datasource = createDataSource();
	// 分库设置
	Map<String, DataSource> dataSourceMap = new HashMap<>(2);
	// 添加两个数据库database0和database1
	// 这里只是示例,实际业务不可能两个datasource一样
	dataSourceMap.put("database0", datasource);
	dataSourceMap.put("database1", datasource);
	
	// 分表设置,根据字段数据规则映射到对应的表
	TableRuleConfiguration tableRuleConfigs =new TableRuleConfiguration("t_vip_flow","database${0..1}.t_vip_flow_${201908..201910}01")  ;
	ShardingStrategyConfiguration tableShardingStrategyConfig =new StandardShardingStrategyConfiguration("created_date",new IdShardingAlgorithm(), new TableShardingAlgorithm());
	tableRuleConfigs.setTableShardingStrategyConfig(tableShardingStrategyConfig);
	// 主键生成策略
	KeyGeneratorConfiguration key =new KeyGeneratorConfiguration("SNOWFLAKE", "id");
	tableRuleConfigs.setKeyGeneratorConfig(key);
	ShardingRuleConfiguration shardingRuleConfiguration=new ShardingRuleConfiguration();
	shardingRuleConfiguration.setTableRuleConfigs(Arrays.asList(tableRuleConfigs));
	// 分库策略
	ShardingStrategyConfiguration defaultDatabaseShardingStrategyConfig =new StandardShardingStrategyConfiguration("id",new DatabaseShardingAlgorithm());
	shardingRuleConfiguration.setDefaultDatabaseShardingStrategyConfig(defaultDatabaseShardingStrategyConfig);
	try {
		Properties props =new Properties();
		props.put("sql.show", true);
		DataSource dataSource = ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfiguration, props);
		return dataSource;
	} catch (SQLException e) {
		e.printStackTrace();
	}
	return datasource;
}

private DataSource createDataSource() {
	DruidDataSource datasource = new DruidDataSource();
	datasource.setUrl(url);
	datasource.setUsername(username);
	datasource.setPassword(password);
	datasource.setDriverClassName(driver);
	datasource.setInitialSize(initialSize);
	datasource.setMinIdle(minIdle);
	datasource.setMaxActive(maxActive);
	datasource.setMaxWait(maxWait);
	datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
	datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
	datasource.setValidationQuery(validationQuery);
	datasource.setTestWhileIdle(testWhileIdle);
	datasource.setTestOnBorrow(testOnBorrow);
	datasource.setTestOnReturn(testOnReturn);
	datasource.setPoolPreparedStatements(poolPreparedStatements);
	datasource.setMaxPoolPreparedStatementPerConnectionSize(maxPoolPreparedStatementPerConnectionSize);
	// Enabling removeAbandoned has some performance cost, so turn it on only when a connection leak is suspected.
	// With the settings below, a connection left unclosed for more than 30 minutes is forcibly reclaimed,
	// and the call stack from when the connection was acquired is logged.
	datasource.setRemoveAbandoned(true);
	datasource.setLogAbandoned(true);
	datasource.setRemoveAbandonedTimeout(1800);// 30 minutes
	try {
		datasource.setFilters(filters);
	} catch (SQLException e) {
		logger.error("druid configuration initialization filter", e);
	}
	datasource.setConnectionProperties(connectionProperties);
	return datasource;
}
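
The createDataSource() method reads its settings (url, username, password and the Druid pool parameters) from fields that the post does not show; presumably they are injected from the application configuration. A minimal sketch of what those fields might look like, assuming standard spring.datasource.* properties; the property names used for the Druid settings are illustrative assumptions:

import org.springframework.beans.factory.annotation.Value;

@Value("${spring.datasource.url}")
private String url;

@Value("${spring.datasource.username}")
private String username;

@Value("${spring.datasource.password}")
private String password;

@Value("${spring.datasource.driver-class-name}")
private String driver;

// Assumed property names for the Druid pool settings
@Value("${spring.datasource.druid.initial-size:5}")
private int initialSize;

@Value("${spring.datasource.druid.min-idle:5}")
private int minIdle;

@Value("${spring.datasource.druid.max-active:20}")
private int maxActive;

// ...the remaining pool parameters (maxWait, validationQuery, filters, connectionProperties, etc.)
// would be injected in the same way.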

Table sharding algorithms

Precise date mapping

public class IdShardingAlgorithm implements PreciseShardingAlgorithm<Date> {

	@Override
	public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Date> shardingValue) {
		Date date = shardingValue.getValue();
		String firstDay = DateOperator.formatDate(DateOperator.getFirstDayOfMonth(date), "yyyyMMdd");
		return availableTargetNames.stream().filter(p -> p.endsWith(firstDay)).findFirst().orElse(null);
	}

}

Date range mapping

public class TableShardingAlgorithm implements RangeShardingAlgorithm<Date> {
	@Override
	public Collection<String> doSharding(Collection<String> availableTargetNames,
			RangeShardingValue<Date> shardingValue) {
		Date dateFrom = shardingValue.getValueRange().lowerEndpoint();
		Date dateEnd = shardingValue.getValueRange().upperEndpoint();
		int monthFrom = Integer.valueOf(DateOperator.formatDate(DateOperator.getFirstDayOfMonth(dateFrom), "yyyyMM"));
		int monthEnd = Integer.valueOf(DateOperator.formatDate(DateOperator.getFirstDayOfMonth(dateEnd), "yyyyMM"));
		List<String> lstMonth = new ArrayList<>();
		for (int i = monthFrom; i <= monthEnd; i++) {
			lstMonth.add(i + "01");
		}
		return availableTargetNames.stream().filter(p -> lstMonth.contains(p.substring(p.length() - 8))).collect(Collectors.toList());
	}
}
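
Both algorithms above depend on a DateOperator helper that the post does not include. A minimal sketch of what it is assumed to do (truncate a date to the first day of its month, and format a date with a given pattern):

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

// Assumed helper used by the sharding algorithms above; only the two methods they call are sketched.
public class DateOperator {

	// Returns the first day of the month that the given date falls in (the time part is irrelevant here).
	public static Date getFirstDayOfMonth(Date date) {
		Calendar calendar = Calendar.getInstance();
		calendar.setTime(date);
		calendar.set(Calendar.DAY_OF_MONTH, 1);
		return calendar.getTime();
	}

	// Formats a date with the given pattern, e.g. "yyyyMMdd" -> "20190801".
	public static String formatDate(Date date, String pattern) {
		return new SimpleDateFormat(pattern).format(date);
	}
}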

Database sharding algorithm

Shard databases by id (this is only an example; a real system would not shard databases purely by id)

public class DatabaseShardingAlgorithm implements PreciseShardingAlgorithm<Comparable<?>> {

	@Override
	public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Comparable<?>> shardingValue) {
		Long value;
		if (shardingValue.getValue() instanceof Integer) {
			value = Long.valueOf(shardingValue.getValue().toString());
		} else {
			value = (Long) shardingValue.getValue();
		}
		long index = value % 2L;
		return availableTargetNames.stream().filter(p -> p.endsWith(index + "")).findFirst().orElse(null);
	}

}

Notes

  1. The column used for database/table sharding must appear in the query conditions.
  2. Only three kinds of query conditions are currently supported for routing: =, IN and BETWEEN.
package org.apache.shardingsphere.core.constant;
public enum ShardingOperator {
    EQUAL, BETWEEN, IN
}
  3. The framework ships with two built-in id generators (SNOWFLAKE and UUID). When the framework generates the id automatically, none of the other column values in the insert statement may be null; otherwise the generated id is assigned to the wrong column.
@RequiredArgsConstructor
@Getter
public abstract class InsertOptimizeResultUnit {
    // ... other code omitted ...
    public final void addColumnParameter(final Object parameter) {
        parameters[getCurrentIndex(parameters)] = parameter;
    }
    
    private int getCurrentIndex(final Object[] array) {
        int count = 0;
        for (Object each : array) {
            // only values that are not null are counted here
            if (null != each) {
                count++;
            }
        }
        return count;
    }
    // ... other code omitted ...
}
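
The effect of that non-null count can be reproduced with a small standalone demo (not ShardingSphere code, just the same index logic): when one of the regular columns is passed as null, the generated key is written into the null slot instead of its own position.

import java.util.Arrays;

public class NullColumnPitfallDemo {

	public static void main(String[] args) {
		// Intended layout: [name, age, id]; age was passed as null in the insert statement.
		Object[] parameters = new Object[3];
		parameters[0] = "tom"; // name
		parameters[1] = null;  // age
		addColumnParameter(parameters, 123456789L); // the generated SNOWFLAKE id
		// Prints [tom, 123456789, null]: the id took the slot meant for age.
		System.out.println(Arrays.toString(parameters));
	}

	// Same logic as InsertOptimizeResultUnit.addColumnParameter: the insert position
	// is the count of non-null values, so a null "hole" gets overwritten.
	private static void addColumnParameter(Object[] parameters, Object parameter) {
		parameters[currentIndex(parameters)] = parameter;
	}

	private static int currentIndex(Object[] array) {
		int count = 0;
		for (Object each : array) {
			if (each != null) {
				count++;
			}
		}
		return count;
	}
}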

References

  • https://shardingsphere.apache.org/document/current/cn/overview/
  • https://github.com/yinjihuan/sharding-jdbc
