
apache spark - Split contents of string column in PySpark DataFrame

I have a PySpark data frame which has a column containing strings. I want to split this column into words.

Code:

>>> sentenceData = sqlContext.read.load('file://sample1.csv', format='com.databricks.spark.csv', header='true', inferSchema='true')
>>> sentenceData.show(truncate=False)
+---+---------------------------+
|key|desc                       |
+---+---------------------------+
|1  |Virat is good batsman      |
|2  |sachin was good            |
|3  |but modi sucks big big time|
|4  |I love the formulas        |
+---+---------------------------+


Expected Output
---------------

>>> sentenceData.show(truncate=False)
+---+-------------------------------------+
|key|desc                                 |
+---+-------------------------------------+
|1  |[Virat,is,good,batsman]              |
|2  |[sachin,was,good]                    |
|3  |....                                 |
|4  |...                                  |
+---+-------------------------------------+

How can I achieve this?


1 Answer


Use the split function, splitting on a whitespace regex (note the pattern needs to be \s+, not s+):

from pyspark.sql.functions import split

# split "desc" on one or more whitespace characters
df.withColumn("desc", split("desc", r"\s+"))
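For example, applied to the sentenceData frame from the question (a minimal sketch, assuming the same session and the sample "desc" column shown above):

>>> from pyspark.sql.functions import split
>>> result = sentenceData.withColumn("desc", split("desc", r"\s+"))
>>> result.show(truncate=False)
+---+------------------------------+
|key|desc                          |
+---+------------------------------+
|1  |[Virat, is, good, batsman]    |
|2  |[sachin, was, good]           |
...

split returns an array<string> column, so each row of "desc" now holds a list of tokens instead of a single string.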
