
Spark reduceByKey Operator: A Detailed Usage Guide

Author: 高景洋  Date: 2020-10-25 10:50:56  Views: 1826

reduceByKey:

     Groups the elements of a pair RDD by key and reduces the values within each group using the supplied function. Because Spark may merge values in any order, both inside a partition and when combining partition results, the function must be associative and commutative.


Example code:

Count how many times each word appears in an RDD:

from pyspark import SparkContext

sc = SparkContext(appName='reduceByKeyDemo')

def my_reduceByKey():
    data = ['hello spark', 'hello world', 'hello world']
    rdd = sc.parallelize(data)
    # Split each line into words, then pair each word with a count of 1
    map_rdd = rdd.flatMap(lambda line: line.split(' ')).map(lambda x: (x, 1))
    # Sum the counts for each key
    reduce_by_key = map_rdd.reduceByKey(lambda a, b: a + b)
    print(reduce_by_key.collect())

Output:
[('hello', 3), ('spark', 1), ('world', 2)]
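
For comparison, the same count can be written with groupByKey followed by a per-key sum. This is a minimal sketch reusing the map_rdd from the example above; reduceByKey is usually preferred because it combines values inside each partition before the shuffle, so less data crosses the network:

# Same result via groupByKey + mapValues, shown for comparison only.
# Unlike reduceByKey, groupByKey ships every (word, 1) pair across the
# shuffle before summing, which moves more data on large inputs.
group_rdd = map_rdd.groupByKey().mapValues(sum)
print(group_rdd.collect())  # e.g. [('hello', 3), ('spark', 1), ('world', 2)]

Note that collect() makes no ordering guarantee, so the pairs may come back in a different order across runs.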

Permanent link to this article: http://r4.com.cn/art152.aspx