Spark reduceByKey Operator: Learning and Usage Explained
Author: Gao Jingyang  Date: 2020-10-25 10:50:56  Views: 2388
reduceByKey:
Groups the elements of a pair RDD by key and reduces the values of each key with the given function.
Example code:
Count how many times each word appears in the RDD:
from pyspark import SparkContext

sc = SparkContext(appName='reduceByKeyDemo')  # in the pyspark shell, sc already exists

def my_reduceByKey():
    data = ['hello spark', 'hello world', 'hello world']
    rdd = sc.parallelize(data)
    # split each line into words, then map each word to a (word, 1) pair
    map_rdd = rdd.flatMap(lambda line: line.split(' ')).map(lambda x: (x, 1))
    # sum the counts for each word
    reduce_by_key = map_rdd.reduceByKey(lambda a, b: a + b)
    print(reduce_by_key.collect())
Output:
[('hello', 3), ('spark', 1), ('world', 2)]
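As a further sketch (assuming the same SparkContext sc as above), reduceByKey works with any associative and commutative function, not just addition. The data and function names here are hypothetical, added only for illustration; for example, keeping the maximum value per key:

def my_reduceByKey_max():
    # hypothetical (key, value) data for illustration
    scores = [('spark', 3), ('hello', 1), ('spark', 7), ('world', 5)]
    rdd = sc.parallelize(scores)
    # keep the largest value seen for each key
    max_by_key = rdd.reduceByKey(lambda a, b: max(a, b))
    print(max_by_key.collect())  # e.g. [('spark', 7), ('hello', 1), ('world', 5)]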
Permanent link to this article:
<a href="http://r4.com.cn/art152.aspx">Spark reduceByKey Operator: Learning and Usage Explained</a>