I am very new to PySpark, and I hope I can get an answer here. I need a solution using the DataFrame API.
My question: find the number of lines in the text file test.txt that contain the word "testA", "testB", or "testC".
My attempt:

    lines = spark.read.text("C:/test.txt")
    listStr = ["testA", "testB", "testC"]
    lines.filter(lines.isin(listStr)).count()

but this is returning all the lines in the text file.
P.S.: even better if it can be solved using a lambda.
If you want to use a lambda function, you can go through the RDD:

    lines.rdd.filter(lambda r: any(s in r[0] for s in listStr)).count()