Python: after reading data from MySQL, how do you convert it to a Spark RDD?
Author: Gao Jingyang  Date: 2021-01-29 18:27:52  Views: 1679
Straight to the code:
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

conf = SparkConf()
sc = SparkContext(conf=conf)

# Data read from MySQL; the result is a plain Python list of (WebsiteID, count) rows
list_url_group_data = ListUrlDA().select_list_url_count_group_by_websiteid(list_schedule_website_id)
list_url_rdd = sc.parallelize(list_url_group_data)  # convert the list to an RDD

spark = SparkSession.builder.master("local").appName("SparkMysql").getOrCreate()
schema = StructType([
    # the third argument (True) means the field is nullable
    StructField("WebsiteID", StringType(), True),
    StructField("count", StringType(), True)
])
df_list_url = spark.createDataFrame(list_url_rdd, schema=schema)  # convert the RDD to a DataFrame
df_list_url.show()

calculate_product_count(list_filter_websiteids, df_list_url)  # business-logic processing
spark.stop()
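ListUrlDA is a project-specific data-access class, so its implementation isn't shown in this post. As a rough sketch of what its query might look like (the pymysql connection details and the list_url table/column names below are placeholders, not taken from the original code), the only thing that matters is that it returns a plain Python list of row tuples that sc.parallelize can accept:

import pymysql

def select_list_url_count_group_by_websiteid(website_ids):
    # Hypothetical sketch: count list-url rows per WebsiteID; table and column names are made up
    conn = pymysql.connect(host='127.0.0.1', user='root', password='***', db='spider', charset='utf8mb4')
    try:
        with conn.cursor() as cursor:
            placeholders = ','.join(['%s'] * len(website_ids))
            sql = 'SELECT WebsiteID, COUNT(*) FROM list_url WHERE WebsiteID IN (%s) GROUP BY WebsiteID' % placeholders
            cursor.execute(sql, website_ids)
            # Return string tuples so the rows match the all-string schema used above
            return [(str(website_id), str(cnt)) for website_id, cnt in cursor.fetchall()]
    finally:
        conn.close()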
Execution result:
+---------+------+
|WebsiteID| count|
+---------+------+
|        1|215923|
|       17|  7843|
|       23|  1563|
|       24|  1720|
|       71|334890|
|       94|  6782|
|      103|     9|
|      108|   319|
|      167|  8352|
|      168|  5417|
|      171|  8598|
|      221|    43|
|      237|  1128|
|      238|    11|
|      242|   111|
|      243|   922|
|      251|   445|
|      253|   372|
|      279|  1739|
|      282|    59|
+---------+------+
only showing top 20 rows
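As a side note, Spark can also pull the aggregated data straight out of MySQL through its built-in JDBC data source, without building a Python list first. A minimal sketch, where the connection URL, credentials and query are placeholders and the MySQL Connector/J JAR must be on Spark's classpath:

df_list_url = (spark.read.format("jdbc")
               .option("url", "jdbc:mysql://127.0.0.1:3306/spider")  # placeholder connection string
               .option("dbtable", "(SELECT WebsiteID, COUNT(*) AS count FROM list_url GROUP BY WebsiteID) t")
               .option("user", "root")
               .option("password", "***")
               .option("driver", "com.mysql.cj.jdbc.Driver")
               .load())
df_list_url.show()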
Permanent link to this article: http://r4.com.cn/art173.aspx