Reading and writing HBase from PySpark (selecting specific columns)

Spark reads and writes HBase data through newAPIHadoopRDD and saveAsNewAPIHadoopDataset. Straight to the code:

1. Reading from HBase with Spark:

# Converters come from the Spark examples jar; host is the ZooKeeper quorum, table the HBase table name.
readkeyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
readvalueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"
conf = {"hbase.zookeeper.quorum": host, "hbase.mapreduce.inputtable": table}
hbase_rdd = spark.sparkContext.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=readkeyConv, valueConverter=readvalueConv, conf=conf)

This reads every column family in the table. To read only specific columns, add the following entry to conf:

"hbase.mapreduce.scan.columns": "basic_info:allbuildings"

Going one step further, the scan can be restricted to a range of row keys with these settings (see the sketch after the list below):

hbase.mapreduce.scan.row.start
hbase.mapreduce.scan.row.stop

Other useful settings:

hbase.mapreduce.scan.row.start
hbase.mapreduce.scan.row.stop
hbase.mapreduce.scan.column.family
hbase.mapreduce.scan.columns
hbase.mapreduce.scan.timestamp
hbase.mapreduce.scan.timerange.start
hbase.mapreduce.scan.timerange.end
hbase.mapreduce.scan.maxversions
hbase.mapreduce.scan.cacheblocks
hbase.mapreduce.scan.cachedrows
hbase.mapreduce.scan.batchsize

All of these settings take effect only when added to the conf dictionary.
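A sketch that combines several of these settings into one read conf; the row-key values and the column name are placeholders, and every value must be passed as a string:

conf = {"hbase.zookeeper.quorum": host,
        "hbase.mapreduce.inputtable": table,
        "hbase.mapreduce.scan.columns": "basic_info:allbuildings",
        # placeholder row keys: scan the range [row_00100, row_00200)
        "hbase.mapreduce.scan.row.start": "row_00100",
        "hbase.mapreduce.scan.row.stop": "row_00200",
        # return at most 3 versions of each cell
        "hbase.mapreduce.scan.maxversions": "3"}
hbase_rdd = spark.sparkContext.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=readkeyConv, valueConverter=readvalueConv, conf=conf)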

2. Writing to HBase with Spark:

keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
conf = {"hbase.zookeeper.quorum": host,
        "hbase.mapred.outputtable": table,
        "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
        "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable",
        # TableOutputFormat writes to HBase, not to this path; some environments still expect an output dir to be set
        "mapreduce.output.fileoutputformat.outputdir": "/tmp"}
rdd.saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)

The RDD being written must contain elements of the form (rowkey, [rowkey, col_family, column, value]), where every field is a string.
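A minimal write sketch using the conf above; the column family basic_info and the sample row keys and values are placeholders:

# keys and list elements are all strings, as StringListToPutConverter expects
rows = [("row1", ["row1", "basic_info", "allbuildings", "12"]),
        ("row2", ["row2", "basic_info", "allbuildings", "34"])]
rdd = spark.sparkContext.parallelize(rows)
rdd.saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)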

 

