
Spark groupByKey Operator: A Detailed Usage Guide

Author: 高景洋  Date: 2020-10-25 10:39:47  Views: 1407

groupByKey():

    Groups the values of a (key, value) pair RDD by key, returning an RDD of (key, iterable of values) pairs.

Example code:

from pyspark import SparkContext

def my_groupByKey():
    sc = SparkContext('local', 'groupByKey_demo')
    data = ['hello spark', 'hello world', 'hello world']
    rdd = sc.parallelize(data)
    # Split each line into words and pair each word with a count of 1
    map_rdd = rdd.flatMap(lambda line: line.split(' ')).map(lambda x: (x, 1))
    # Group all values sharing the same key into one iterable per key
    group_by_rdd = map_rdd.groupByKey()
    print(group_by_rdd.collect())
    # Materialize each ResultIterable as a list so the output is readable
    result_rdd = group_by_rdd.map(lambda x: {x[0]: list(x[1])})
    print(result_rdd.collect())

Output:
[('hello', <pyspark.resultiterable.ResultIterable object at 0x1061b45f8>), ('spark', <pyspark.resultiterable.ResultIterable object at 0x1061b4630>), ('world', <pyspark.resultiterable.ResultIterable object at 0x1061b4898>)]

[{'hello': [1, 1, 1]}, {'spark': [1]}, {'world': [1, 1]}]
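As a follow-up sketch (not part of the original post; it reuses the map_rdd and group_by_rdd variables defined above), you can turn the grouped iterables into actual per-word counts by summing each group with mapValues. For plain counting, though, reduceByKey is usually preferable to groupByKey, because it pre-aggregates values within each partition before shuffling them across the network:

    # Sum each group's values to get a per-word count
    count_rdd = group_by_rdd.mapValues(sum)
    print(count_rdd.collect())  # e.g. [('hello', 3), ('spark', 1), ('world', 2)]

    # Equivalent result with reduceByKey, which combines values
    # on each partition before the shuffle
    reduce_rdd = map_rdd.reduceByKey(lambda a, b: a + b)
    print(reduce_rdd.collect())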

Permanent link to this article:
http://r4.com.cn/art151.aspx (Spark groupByKey Operator: A Detailed Usage Guide)