I have the following code:
    from pyspark.sql import functions as F

    df = df1.withColumn(
        'idx',
        F.coalesce(
            # Get the smallest index of a stop word in the string
            F.least(*[F.when(F.instr('Title_lower_case', s) != 0,
                             F.instr('Title_lower_case', s))
                      for s in ['/', ' / ', '/ ', ' /', ' & ', '& ', ' &', '&',
                                '.', '-', ' - ', '- ', ' -']]),
            # If no stop word is found, use the whole string
            F.length('Title_lower_case') + 1,
        ),
    ).selectExpr('trim(substring(Title_lower_case, 1, idx - 1)) Title_lower_case')
It works: it removes everything from the first occurrence of any of those stop words onward. The problem is that this code gives me only the Title_lower_case column, but I need the resulting data set with all of the original columns.
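To make the problem concrete, here is a minimal, self-contained reproduction; the SparkSession setup and the sample rows are my own hypothetical illustration, with id and count standing in for the other columns:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    # Hypothetical sample data (not from the original post)
    df1 = spark.createDataFrame(
        [(1, 'big data / spark', 10), (2, 'python', 7)],
        ['id', 'Title_lower_case', 'count'],
    )
    # Running the snippet above on this df1 yields a single column:
    # +----------------+
    # |Title_lower_case|
    # +----------------+
    # |big data        |
    # |python          |
    # +----------------+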
Is it correct to simply list the other columns in .selectExpr as well? For example:

    df.selectExpr('trim(substring(Title_lower_case, 1, idx - 1)) Title_lower_case',
                  'id', 'count')
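For what it's worth, that does work as long as every column you want to keep is listed explicitly. A sketch of an alternative that keeps all columns without enumerating them, assuming the same df1 as above, is to overwrite Title_lower_case via withColumn and drop the helper column; this is one common pattern, not necessarily what any answer there uses:

    from pyspark.sql import functions as F

    stop_words = ['/', ' / ', '/ ', ' /', ' & ', '& ', ' &', '&',
                  '.', '-', ' - ', '- ', ' -']
    # Position of the first stop word, or length + 1 when none is found
    idx = F.coalesce(
        F.least(*[F.when(F.instr('Title_lower_case', s) != 0,
                         F.instr('Title_lower_case', s))
                  for s in stop_words]),
        F.length('Title_lower_case') + 1,
    )

    df = (df1
          .withColumn('idx', idx)
          # Overwrite the column in place; id, count, etc. survive untouched
          .withColumn('Title_lower_case',
                      F.expr('trim(substring(Title_lower_case, 1, idx - 1))'))
          .drop('idx'))

Overwriting with withColumn also avoids having to keep the selectExpr list in sync with the schema if columns are added later.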
question from: https://stackoverflow.com/questions/65896980/how-to-get-all-column-in-pyspark-code-with-selectexpr