apache spark sql - How to get all columns in PySpark code with selectExpr

0 votes · 334 views · asked in Technique by (71.8m points)

I have the following code:

from pyspark.sql import functions as F

# Delimiters that mark where the title should be cut
stop_words = ['/', ' / ', '/ ', ' /', ' & ', '& ', ' &', '&', '.', '-', ' - ', '- ', ' -']

df = df1.withColumn('idx',
    F.coalesce(
        # Get the smallest 1-based index of a stop word in the string
        F.least(*[F.when(F.instr('Title_lower_case', s) != 0,
                         F.instr('Title_lower_case', s)) for s in stop_words]),
        # If no stop word is found, use the whole string
        F.length('Title_lower_case') + 1)
).selectExpr('trim(substring(Title_lower_case, 1, idx - 1)) AS Title_lower_case')

It removes everything in Title_lower_case after the first occurrence of any of those delimiters. The problem is that the result contains only the Title_lower_case column, but I need the data set with all of its columns.
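
For reference, a minimal reproducible setup (column names are from the question; the values are made up):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df1 = spark.createDataFrame(
    [(1, 'laptop / 15 inch', 3), (2, 'usb-c cable', 7)],
    ['id', 'Title_lower_case', 'count'])

# After the snippet above runs, df.columns == ['Title_lower_case']:
# selectExpr projects only the expressions that are listed,
# so id and count are dropped.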

Would it be correct to simply add the other columns to .selectExpr? For example:

df.selectExpr('trim(substring(Title_lower_case, 1, idx - 1)) AS Title_lower_case', 'id', 'count')
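
This does work: selectExpr returns exactly the expressions you list, so naming id and count keeps them, but every column you want to carry along has to be enumerated. A sketch of an alternative that keeps all columns without listing them, replacing Title_lower_case in place with withColumn (a common pattern, not an accepted answer from this thread):

from pyspark.sql import functions as F

stop_words = ['/', ' / ', '/ ', ' /', ' & ', '& ', ' &', '&', '.', '-', ' - ', '- ', ' -']

df = (df1
      # Helper column: 1-based position where the title should be cut
      .withColumn('idx', F.coalesce(
          F.least(*[F.when(F.instr('Title_lower_case', s) != 0,
                           F.instr('Title_lower_case', s)) for s in stop_words]),
          F.length('Title_lower_case') + 1))
      # Reusing an existing column name replaces that column and keeps the rest
      .withColumn('Title_lower_case',
                  F.expr('trim(substring(Title_lower_case, 1, idx - 1))'))
      # Remove the helper column once it is no longer needed
      .drop('idx'))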
Question from: https://stackoverflow.com/questions/65896980/how-to-get-all-column-in-pyspark-code-with-selectexpr


1 Answer

0 votes · by (71.8m points)
Waiting for answers.
