Writing HBase data into Hive as a DataFrame with PySpark
Author: Gao Jingyang  Date: 2021-09-03 18:16:00  Views: 1399
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext  # HiveContext lives in pyspark.sql, not in the top-level pyspark package
conf = SparkConf()
sc = SparkContext(conf=conf)
# list_filter_websiteids is the DataFrame previously loaded from HBase;
# filter it down to the rows we want to write
df_tmp = list_filter_websiteids.where('WebsiteID in ({})'.format(','.join(['1','71']))).filter(list_filter_websiteids['IsDeleted']==True)
df_tmp.registerTempTable('test_hive')  # register the DataFrame as a temporary table
hivec = HiveContext(sc)  # create a HiveContext
hivec.sql('create table test.product as select * from test_hive')  # CTAS: write the temp table's rows into a Hive table (note the required "as")
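The `where` predicate above is assembled by joining the website IDs into a SQL `IN` list. A minimal standalone sketch of that string construction (the IDs are just the example values from the listing; in practice they would come from your filter configuration):

```python
# Build the SQL predicate passed to DataFrame.where(...) above.
website_ids = ['1', '71']  # example IDs from the article
predicate = 'WebsiteID in ({})'.format(','.join(website_ids))
print(predicate)  # WebsiteID in (1,71)
```

Because the IDs are interpolated directly into the SQL string, make sure they come from a trusted source (or validate them as integers) before building the predicate.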
Permanent link to this article:
<a href="http://r4.com.cn/art196.aspx">pyspark将hbase的数据以dataframe的形式写入hive</a>