I have a column in a Spark DataFrame that contains text.
I want to extract all the words that start with the special character '#', and I am using regexp_extract on each row of that text column. If the text contains multiple words starting with '#', it returns only the first one.
I am looking for a way to extract multiple words that match my pattern in Spark.
data_frame.withColumn("Names", regexp_extract($"text", "(?<=^|(?<=[^a-zA-Z0-9-_\\.]))#([A-Za-z]+[A-Za-z0-9_]+)", 1)).show()
Sample input: #always_nidhi #YouTube no i dnt understand bt i loved the music nd their dance awesome all the song of this mve is rocking
Sample output: #always_nidhi,#YouTube
You can create a UDF in Spark as below:
import java.util.regex.Pattern
import org.apache.spark.sql.functions.{col, lit, udf}
def regexp_extractAll = udf((job: String, exp: String, groupIdx: Int) => {
  val pattern = Pattern.compile(exp)
  val m = pattern.matcher(job)
  var result = Seq[String]()
  while (m.find) {
    result = result :+ m.group(groupIdx)
  }
  result.mkString(",")
})
Then call the UDF as below:
data_frame.withColumn("Names", regexp_extractAll(col("text"), lit("#\\w+"), lit(0))).show()
The above gives you output as below:
+--------------------+
| Names|
+--------------------+
|#always_nidhi,#Yo...|
+--------------------+
I have used this regex as per the output you posted in the question. You can modify it to suit your needs.
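For example, a sketch (my illustration, not from the original answer) that plugs the question's own pattern into the UDF; group 0 keeps the leading '#' in the output, group 1 would drop it:

data_frame.withColumn(
  "Names",
  regexp_extractAll(col("text"), lit("(?<=^|(?<=[^a-zA-Z0-9-_\\.]))#([A-Za-z]+[A-Za-z0-9_]+)"), lit(0))
).show()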
You can use Java regex to extract those words. Below is the working code.
import org.apache.spark.{SparkConf, SparkContext}

val sparkConf = new SparkConf().setAppName("myapp").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
import org.apache.spark.sql.functions.{col, udf}
import java.util.regex.Pattern

// User-defined function to extract all matches of the pattern
def toExtract(str: String) = {
  val pattern = Pattern.compile("#\\w+")
  val tmplst = scala.collection.mutable.ListBuffer.empty[String]
  val matcher = pattern.matcher(str)
  while (matcher.find()) {
    tmplst += matcher.group()
  }
  tmplst.mkString(",")
}

val Extract = udf(toExtract _)
val values = List("#always_nidhi #YouTube no i dnt understand bt i loved the music nd their dance awesome all the song of this mve is rocking")
val df = sc.parallelize(values).toDF("words")
df.select(Extract(col("words"))).show()
Output
+--------------------+
| UDF(words)|
+--------------------+
|#always_nidhi,#Yo...|
+--------------------+
In Spark 3.1+, this is possible using regexp_extract_all.
Test with your input:
import spark.implicits._
val df = Seq(
  "#always_nidhi #YouTube no",
  "#always_nidhi",
  "no"
).toDF("text")
val col_re_list = expr("regexp_extract_all(text, '(?<=^|(?<=[^a-zA-Z0-9-_\\\\.]))#([A-Za-z]+[A-Za-z0-9_]+)', 0)")
df.withColumn("Names", array_join(col_re_list, ", ")).show(false)
// +-------------------------+-----------------------+
// |text |Names |
// +-------------------------+-----------------------+
// |#always_nidhi #YouTube no|#always_nidhi, #YouTube|
// |#always_nidhi |#always_nidhi |
// |no | |
// +-------------------------+-----------------------+
array_join is used because you wanted the results in string format, while regexp_extract_all returns an array.
Note the escaping: because the pattern sits inside an SQL expression string, a backslash that you would write as \\ in a plain Scala regex string must be written as \\\\ inside expr, until regexp_extract_all becomes available directly, without expr.
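For instance (my own illustration, not from the answer above), the simpler pattern #\w+ would be written with doubled escaping inside the expr string:

// \w is written \\\\w: doubled once for the Scala string, once for the SQL literal
df.withColumn("Names", array_join(expr("regexp_extract_all(text, '#\\\\w+', 0)"), ", ")).show(false)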
I took the suggestion of Amit Kumar, created a UDF, and then ran it in Spark SQL:
select Words(status) as people from dataframe
Words is my UDF and status is my DataFrame column.
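For completeness, a minimal sketch of what that setup might look like (the extraction logic and the view name are my assumptions, based on the accepted answer above):

// Hypothetical registration of the Words UDF and a temp view for the SQL query
spark.udf.register("Words", (text: String) => "#\\w+".r.findAllIn(text).mkString(","))
data_frame.createOrReplaceTempView("dataframe")
spark.sql("select Words(status) as people from dataframe").show()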
Say I have a DataFrame df1 with a column "color" that contains a bunch of colors, and another DataFrame df2 with a column "phrase" that contains various phrases.
I'd like to join the two DataFrames where the color in df1 appears in the phrase in df2. I cannot use df1.join(df2, df2("phrase").contains(df1("color"))), since that would join wherever the color appears anywhere within the phrase. I don't want to match words like scaRED, for example, where RED is part of another word. I only want to join when the color appears as a separate word in the phrase.
Can I use a regular expression to solve this? Which function can I use, and what is the syntax when I need to reference a column in the expression?
You could create a REGEX pattern that checks for word boundaries (\b) when matching colors and use a regexp_replace check as the join condition:
val df1 = Seq(
  (1, "red"), (2, "green"), (3, "blue")
).toDF("id", "color")

val df2 = Seq(
  "red apple", "scared cat", "blue sky", "green hornet"
).toDF("phrase")

val patternCol = concat(lit("\\b"), df1("color"), lit("\\b"))

df1.join(df2, regexp_replace(df2("phrase"), patternCol, lit("")) =!= df2("phrase")).show
// +---+-----+------------+
// | id|color| phrase|
// +---+-----+------------+
// | 1| red| red apple|
// | 3| blue| blue sky|
// | 2|green|green hornet|
// +---+-----+------------+
Note that "scared cat" would have been a match in the absence of the enclosed word boundaries.
Building on your own solution, you can also try this:
d1.join(d2, array_contains(split(d2("phrases"), " "), d1("color")))
Did not see your data, but here is a starter with a little variation. No need for regex as far as I can see, but who knows:
// You need to do some parsing, like stripping of . and ?, and maybe lowercasing or uppercasing
// You did not provide an example of the JOIN
import org.apache.spark.sql.functions._
import scala.collection.mutable.WrappedArray

val checkValue = udf { (array: WrappedArray[String], value: String) =>
  array.iterator.map(_.toLowerCase).contains(value.toLowerCase)
}

// Gen some data
val dfCompare = spark.sparkContext.parallelize(Seq("red", "blue", "gold", "cherry")).toDF("color")
val rdd = sc.parallelize(Array(
  ("red", "hello how are you red", 10),
  ("blue", "I am fine but blue", 20),
  ("cherry", "you need to do some parsing and I like cherry", 30),
  ("thebluephantom", "you need to do some parsing and I like fanta", 30)
))

val df = rdd.toDF()
val df2 = df.withColumn("_4", split($"_2", " "))
df2.show(false)
dfCompare.show(false)

val res = df2.join(dfCompare, checkValue(df2("_4"), dfCompare("color")), "inner")
res.show(false)
returns:
+------+---------------------------------------------+---+--------------------------------------------------------+------+
|_1 |_2 |_3 |_4 |color |
+------+---------------------------------------------+---+--------------------------------------------------------+------+
|red |hello how are you red |10 |[hello, how, are, you, red] |red |
|blue |I am fine but blue |20 |[I, am, fine, but, blue] |blue |
|cherry|you need to do some parsing and I like cherry|30 |[you, need, to, do, some, parsing, and, I, like, cherry]|cherry|
+------+---------------------------------------------+---+--------------------------------------------------------+------+
I am having trouble retrieving a value from a JSON string using regex in Spark.
My pattern is:
val st1 = """id":"(.*?)"""
val pattern = s"${'"'}$st1${'"'}"
//pattern is: "id":"(.*?)"
My test string in a DF is
import spark.implicits._
val jsonStr = """{"type":"x","identifier":"y","id":"1d5482864c60d5bd07919490"}"""
val df = sqlContext.sparkContext.parallelize(Seq(jsonStr)).toDF("request")
I am then trying to parse out the id value and add it to the df through a UDF like so:
def getSubStringGroup(pattern: String) = udf((request: String) => {
  val patternWithResponseRegex = pattern.r
  request match {
    case patternWithResponseRegex(idextracted) => Array(idextracted)
    case _ => Array("na")
  }
})
val dfWithIdExtracted = df.select($"request")
.withColumn("patternMatchGroups", getSubStringGroup(pattern)($"request"))
.withColumn("idextracted", $"patternMatchGroups".getItem(0))
.drop("patternMatchGroups")
So I want my df to look like:

| request                                                       | id                       |
|---------------------------------------------------------------|--------------------------|
| {"type":"x","identifier":"y","id":"1d5482864c60d5bd07919490"} | 1d5482864c60d5bd07919490 |
However, when I try the above method, my match comes back as "null", despite the pattern working on regex101.com.
Could anyone advise or suggest a different method? Thank you.
Following Krzysztof's solution, my table now looks like this:

| request                                                       | id                              |
|---------------------------------------------------------------|---------------------------------|
| {"type":"x","identifier":"y","id":"1d5482864c60d5bd07919490"} | "id":"1d5482864c60d5bd07919490" |
I created a new UDF to trim the unnecessary characters and added it to the df:
def trimId = udf((idextracted: String) => {
  // drop the leading "id":" (6 characters) and the trailing quote
  idextracted.drop(6).dropRight(1)
})
val dfWithIdExtracted = df.select($"request")
.withColumn("patternMatchGroups", getSubStringGroup(pattern)($"request"))
.withColumn("idextracted", $"patternMatchGroups".getItem(0))
.withColumn("id", trimId($"idextracted"))
.drop("patternMatchGroups", "idextracted")
The df now looks as desired. Thanks again Krzysztof!
When you use pattern matching with a regex, you're trying to match the whole string, which obviously can't succeed here. You should rather use findFirstIn:
def getSubStringGroup(pattern: String) = udf((request: String) => {
  val patternWithResponseRegex = pattern.r
  patternWithResponseRegex.findFirstIn(request).map(Array(_)).getOrElse(Array("na"))
})
You're also creating your pattern in a very bizarre way, unless you have a special use case for it. You could just do:
val pattern = """"id":"(.*?)""""
Given a string like abab/docId/example-doc1-2019-01-01, I want to use Regex to extract these values:
firstPart = example
fullString = example-doc1-2019-01-01
I have this:
import scala.util.matching.Regex

case class Read(theString: String) {
  val stringFormat: Regex = """.*\/docId\/([A-Za-z0-9]+)-([A-Za-z0-9-]+)$""".r
  val stringFormat(firstPart, fullString) = theString
}
But this separates it like this:
firstPart = example
fullString = doc1-2019-01-01
Is there a way to retain the fullString and run a regex on it to get the part before the first hyphen? I know I can do this using the String split method, but is there a way to do it using regex?
You may use
val stringFormat: Regex = ".*/docId/(([A-Za-z0-9]+)-[A-Za-z0-9-]+)$".r
                                    ||_ Group 2 __|              |
                                    |__________ Group 1 _________|
Note how the capturing parentheses are rearranged. Also, you need to swap the variables in the regex match call (fullString should come before firstPart), as in the demo below.
Scala demo:
val theString = "abab/docId/example-doc1-2019-01-01"
val stringFormat = ".*/docId/(([A-Za-z0-9]+)-[A-Za-z0-9-]+)".r
val stringFormat(fullString, firstPart) = theString
println(s"firstPart: '$firstPart'\nfullString: '$fullString'")
Output:
firstPart: 'example'
fullString: 'example-doc1-2019-01-01'
I tried this solution to test whether a substring occurs in a string:
val reg = ".*\\[CS_RES\\].*".r
reg.findAllIn(my_DataFrame).length
But it is not working, because I can't apply findAllIn to a DataFrame.
I tried a second solution and converted my DataFrame to an RDD:
val rows: RDD[Row] = myDataFrame.rdd
val processedRDD = rows.map { str =>
  val patternReg = ".*\\[CS_RES\\].*".r
  val result = patternReg.findAllIn(str).length
  (str, result)
}
It displays an error:
<console>:69: error: type mismatch;
found : org.apache.spark.sql.Row
required: CharSequence
val result = patternReg.findAllIn(str).length
How can I apply a regex to a DataFrame in Scala, as in the first solution, to count the lines that contain the string [CS_RES]? Or does someone have a fix for the second solution?
You can use the regexp_extract function to filter and count the lines. For example:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
private val session: SparkSession = ...
import session.implicits._
val myDataFrame = Seq(
  (1L, "abc"),
  (2L, "def"),
  (3L, "a[CS_RES]b"),
  (4L, "adg")
).toDF("id", "text")
val resultRegex = myDataFrame.where(regexp_extract($"text", "\\[CS_RES\\]", 0).notEqual("")).count()
println(resultRegex) // outputs 1
The idea: if the first group (i = 0, i.e. the whole match) returned by regexp_extract is not an empty string, the substring was found. The call to count() then returns the total number of such rows.
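Equivalently (a small variation on the same idea), the filter can be expressed with rlike, which returns true when the regex matches anywhere in the string:

val resultRlike = myDataFrame.where($"text".rlike("\\[CS_RES\\]")).count()
println(resultRlike) // outputs 1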
But if you only need to find exact matches of the substring, the solution can be simplified by using the locate function:
val resultLocate = myDataFrame.where(locate("[CS_RES]", $"text") > 0).count()
println(resultLocate) // outputs 1
import org.apache.spark.sql.functions.udf

val reg = ".*\\[CS_RES\\].*".r
val contains = udf((s: String) => reg.findAllIn(s).length > 0)
val cnt = df.select($"summary").filter(contains($"summary")).count()
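If you would rather repair the RDD version from the question, the type mismatch can be fixed by pulling the String out of each Row first (a sketch, assuming the text column is named "text" as in the example above):

val processedRDD = myDataFrame.rdd.map { row =>
  val str = row.getAs[String]("text") // extract the String; a Row itself is not a CharSequence
  val patternReg = ".*\\[CS_RES\\].*".r
  (str, patternReg.findAllIn(str).length)
}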
I have a text file I want to parse. The file has multiple items I want to extract. I want to capture everything between a colon ":" and a particular word. Take the following example:
Description : a pair of shorts
amount : 13 dollars
requirements : must be blue
ID1 : 199658
----
The following code parses the information out.
import re
f = open ("parse.txt", "rb")
fileRead = f.read()
Description = re.findall("Description :(.*?)amount", fileRead, re.DOTALL)
amount = re.findall("amount :(.*?)requirements", fileRead, re.DOTALL)
requirements = re.findall("requirements :(.*?)ID1", fileRead, re.DOTALL)
ID1 = re.findall("ID1 :(.*?)-", fileRead, re.DOTALL)
print Description[0]
print amount[0]
print requirements[0]
print ID1[0]
f.close()
The problem is that sometimes the text file will have a newline, like this:
Description
: a pair of shorts
amount
: 13 dollars
requirements: must be blue
ID1: 199658
----
In this case my code will not work, because it is unable to find "Description :" now that it is separated onto a new line. If I change the search to ":(.*?)requirements", it will not return just the 13 dollars; it will return both "a pair of shorts" and "13 dollars", because all of that text sits between the first colon and the word "requirements". I want a way to parse out the information whether or not there is a line break. I have hit a roadblock and your help would be greatly appreciated.
You can use a regex like this:
Description[^:]*(.*)
^^^^^^^^^^^--- use the keyword you want
Adapting your code, you could use:
import re
f = open ("parse.txt", "rb")
fileRead = f.read()
Description = re.findall("Description[^:]*(.*)", fileRead)
amount = re.findall("amount[^:]*(.*)", fileRead)
requirements = re.findall("requirements[^:]*(.*)", fileRead)
ID1 = re.findall("ID1[^:]*(.*)", fileRead)
print Description[0]
print amount[0]
print requirements[0]
print ID1[0]
f.close()
You can simply do this:
import re
f = open ("new.txt", "rb")
fileRead = f.read()
keyvals = {k.strip():v.strip() for k,v in dict(re.findall('([^:]*):(.*)(?=\b[^:]*:|$)',fileRead,re.M)).iteritems()}
print(keyvals)
f.close()
Output:
{'amount': '13 dollars', 'requirements': 'must be blue', 'Description': 'a pair of shorts', 'ID1': '199658'}