
Adding a specific column to a PySpark DataFrame

Author: 高景洋  Date: 2020-12-11 11:44:35  Views: 1411

from pyspark.sql.functions import lit, rand


df = spark.read.csv('file:///Users/jasongao/Documents/tmp/hbase-0.csv', schema)  # original DataFrame

df11 = df.withColumn('RandomKey', rand())  # add a random RandomKey column to df, producing a new DataFrame

df11.orderBy(df11['RandomKey']).show()  # sort ascending by the new column and display the result
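
The snippet above assumes a SparkSession named spark and a schema object defined elsewhere in the post. A minimal self-contained sketch might look like the following; the session setup and the schema fields are assumptions for illustration only, since the structure of hbase-0.csv is not shown in the original.

# Sketch only: session, schema fields, and app name are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.functions import rand

spark = SparkSession.builder.appName('add-column-demo').getOrCreate()

# Hypothetical schema; adjust to match the actual CSV columns.
schema = StructType([
    StructField('row_key', StringType(), True),
    StructField('value', IntegerType(), True),
])

df = spark.read.csv('file:///Users/jasongao/Documents/tmp/hbase-0.csv', schema)
df11 = df.withColumn('RandomKey', rand())   # new column of uniform random values in [0, 1)
df11.orderBy(df11['RandomKey']).show()      # ascending sort on the random key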

PS: When adding a column, prefer the built-in functions under pyspark.sql.functions and avoid generating column contents with user-defined functions wherever possible. Built-in functions are optimized by Spark's Catalyst engine, while Python UDFs force row-by-row execution in a Python worker and add serialization overhead.
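
To illustrate the recommendation, here is a hedged comparison sketch: adding a constant column with the built-in lit() versus the same result through a Python UDF. The column name 'Source' and the constant value are hypothetical, not part of the original post.

from pyspark.sql.functions import lit, udf
from pyspark.sql.types import StringType

# Preferred: lit() is a built-in expression handled entirely by the optimizer.
df_lit = df11.withColumn('Source', lit('hbase-0'))

# Discouraged: a Python UDF yields the same column but bypasses Catalyst
# optimizations and runs the lambda once per row in a Python worker.
constant_udf = udf(lambda: 'hbase-0', StringType())
df_udf = df11.withColumn('Source', constant_udf())

df_lit.show(5)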

Permanent link to this article:
<a href="http://r4.com.cn/art169.aspx">Adding a specific column to a PySpark DataFrame</a>