ogg_for_bigdata (replicating Oracle data to HBase via OGG)


1. ZooKeeper installation

For the base environment, see the ogg_for_bigdata HDFS installation doc: https://blog.csdn.net/weixin_43761300/article/details/116246042?spm=1001.2014.3001.5501

ZooKeeper is installed on node01, node02, and node03.

cd /opt
tar -zxvf apache-zookeeper-3.6.3-bin.tar.gz
cd apache-zookeeper-3.6.3-bin/conf
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg
# modify the following
dataDir=/opt/data/zookeeper
# add the following
#-------------------------
server.0=node01:2888:3888
server.1=node02:2888:3888
server.2=node03:2888:3888
#-------------------------
# save and exit
vi /etc/profile
export ZK_HOME=/opt/apache-zookeeper-3.6.3-bin
export PATH=$PATH:$ZK_HOME/bin

Copy the ZooKeeper directory and /etc/profile to the other two machines with scp.

scp -r /opt/apache-zookeeper-3.6.3-bin node02:/opt
scp -r /opt/apache-zookeeper-3.6.3-bin node03:/opt
vi /etc/profile # run on node02; node02 already has other environment variables, so append rather than overwrite
export ZK_HOME=/opt/apache-zookeeper-3.6.3-bin
export PATH=$PATH:$ZK_HOME/bin

scp /etc/profile node03:/etc

Create the ZooKeeper data directory and the myid file on all three machines, then reload the environment variables.

mkdir -p /opt/data/zookeeper # run on all three machines
echo 0 >/opt/data/zookeeper/myid # on node01
echo 1 >/opt/data/zookeeper/myid # on node02
echo 2 >/opt/data/zookeeper/myid # on node03
source /etc/profile # run on all three machines
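
Each myid must match the server.N index declared in zoo.cfg (server.0 is node01, server.1 is node02, server.2 is node03). A quick sanity check on any node, just a sketch:

cat /opt/data/zookeeper/myid              # expect 0 on node01, 1 on node02, 2 on node03
grep '^server\.' $ZK_HOME/conf/zoo.cfg    # compare against the server.N entries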

Start ZooKeeper

Run zkServer.sh start on each of the three machines.
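
Once all three nodes are started, zkServer.sh status is the quickest health check; in a three-node ensemble one node should report leader and the other two follower:

zkServer.sh status   # run on each node; expect Mode: leader on one node, Mode: follower on the other two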

Stop ZooKeeper

Run zkServer.sh stop on each of the three machines.

2. HBase installation

cd /opt
tar -zxvf hbase-1.3.1-bin.tar.gz
cp /opt/hadoop-2.7.7/etc/hadoop/hdfs-site.xml /opt/hbase-1.3.1/conf/
vi /opt/hbase-1.3.1/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node01,node02,node03</value>
    <description>Comma-separated list of servers in the ZooKeeper quorum.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/data/zookeeper</value>
    <description>
        Note: this ZooKeeper data directory is shared with the Hadoop HA setup, i.e. it must match the dataDir configured in zoo.cfg.
        Property from ZooKeeper config zoo.cfg.
        The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node01:9000/hbase</value>
    <description>
        The directory shared by RegionServers.
        The official docs stress repeatedly that this directory should not be created in advance; HBase creates it itself. If it already exists, HBase attempts a migration, which causes errors.
        As for the port, some setups use 8020 and others 9000; check $HADOOP_HOME/etc/hadoop/hdfs-site.xml. In this lab it is set by
        dfs.namenode.rpc-address.hdfscluster.nn1 and dfs.namenode.rpc-address.hdfscluster.nn2
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>
        For a distributed cluster this must be set to true; for a single-node setup, set it to false.
        The mode the cluster will be in. Possible values are
        false: standalone and pseudo-distributed setups with managed ZooKeeper
        true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)
    </description>
  </property>
</configuration>
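
The port in hbase.rootdir has to match the NameNode RPC address from the Hadoop configuration. A quick way to read it back (assuming the nameservice from the earlier HDFS document is named hdfscluster):

hdfs getconf -confKey dfs.namenode.rpc-address.hdfscluster.nn1   # e.g. node01:9000
hdfs getconf -confKey dfs.namenode.rpc-address.hdfscluster.nn2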

vi regionservers
node02
node03

vi backup-masters
node02

vi hbase-env.sh
export JAVA_HOME=/opt/jdk1.8.0_141
export HBASE_MANAGES_ZK=false

vi /etc/profile
export HBASE_HOME=/opt/hbase-1.3.1
export PATH=$PATH:$HBASE_HOME/bin


Copy the HBase directory to the other two machines and configure the environment variables there as well.
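
A minimal sketch, following the same pattern as the ZooKeeper copy above:

scp -r /opt/hbase-1.3.1 node02:/opt
scp -r /opt/hbase-1.3.1 node03:/opt
# on node02 and node03: add the HBASE_HOME and PATH exports to /etc/profile, then
source /etc/profile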

start-hbase.sh # run on node01; starts the HMaster, the backup master from backup-masters, and the RegionServers listed in regionservers
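
Once the cluster is up, a quick verification (assuming the HMaster is on node01 and the RegionServers on node02/node03, as configured above):

jps                          # expect HMaster on node01, HRegionServer on node02 and node03
echo "status" | hbase shell  # reports active/backup masters and live RegionServers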

3. OGG for HBase configuration

In GGSCI on the target (OGG for Big Data) side, create the replicat parameter file:

edit params r2hbase

REPLICAT r2hbase
sourcedefs /opt/ogg/dirdef/tcloud.t_ogg
TARGETDB LIBFILE libggjava.so SET property=dirprm/hbase.props
REPORTCOUNT EVERY 1 MINUTES, RATE
handlecollisions
GROUPTRANSOPS 10000
MAP tcloud.t_ogg, TARGET tcloud.t_ogg;
MAP tcloud.stu_info, TARGET tcloud.stu_info;

add replicat r2hbase exttrail /opt/ogg/dirdat/tc
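
The exttrail prefix must match the remote trail delivered to this host by the source-side pump (set up in the earlier OGG documents). Before adding the replicat, it is worth confirming that trail files are actually arriving:

ls -l /opt/ogg/dirdat/tc*   # should list one or more trail files with the tc prefix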

cd /opt/ogg/dirprm

vi hbase.props

gg.handlerlist=hbase
gg.handler.hbase.type=hbase
gg.handler.hbase.mode=tx
gg.handler.hbase.hBaseColumnFamilyName=cf
gg.handler.hbase.includeTokens=true
gg.handler.hbase.keyValueDelimiter=CDATA[=]
gg.handler.hbase.keyValuePairDelimiter=CDATA[,]
gg.handler.hbase.encoding=UTF-8
gg.handler.hbase.pkUpdateHandling=abend
gg.handler.hbase.nullValueRepresentation=CDATA[NULL]
gg.handler.hbase.authType=none
gg.classpath=/opt/hbase-1.3.1/lib/*:/opt/hbase-1.3.1/conf
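
gg.classpath has to let the handler find both the HBase client jars and hbase-site.xml; a quick check from the OGG user, using the paths configured above:

ls /opt/hbase-1.3.1/lib/hbase-client-*.jar
ls /opt/hbase-1.3.1/conf/hbase-site.xml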

start r2hbase  # back in GGSCI
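
To verify the pipeline end to end (a sketch; the exact HBase table names depend on how the handler maps the TARGET entries, so list them first):

# in GGSCI: the replicat should be RUNNING and its checkpoint should advance as changes arrive
info r2hbase
# from the shell: tables created by the handler should show up after the first replicated operation
echo "list" | hbase shell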

