I have raw strings in a log file. After applying many filters and other operations, I have reached the following problem: I need to convert the string into JSON format so that I can save it as a single object.
Suppose I have the following data:
val CDataTime = "20191012"
val LocationId = "12345"
val SetInstruc = "Comm=Qwe123,Elem=12345,Elem123=Test"
I am trying to create a DataFrame that contains datetime|location|jsonofinstruction.
The JSON column is the JSON form of the third value. I tried splitting the string first by comma and then by the equals sign, looping through the pairs to build a map with the first element as key and the second as value, but the JSON is not created. Please help.
You can use scala.util.parsing.json.JSONObject to convert a map to JSON and then to a string.
import scala.util.parsing.json.JSONObject
import spark.implicits._

val df = spark.createDataset(Seq("Comm=Qwe123,Elem=12345,Elem123=Test")).toDF("col3")

val dfWithJson = df.map { row =>
  val insMap = row.getAs[String]("col3").split(",").map { kv =>
    val kvArray = kv.split("=")
    (kvArray(0), kvArray(1))
  }.toMap
  val insJson = JSONObject(insMap).toString()
  (row.getAs[String]("col3"), insJson)
}.toDF("col3", "col4")

dfWithJson.show()
Result -
+--------------------+--------------------+
| col3| col4|
+--------------------+--------------------+
|Comm=Qwe123,Elem=...|{"Comm" : "Qwe123...|
+--------------------+--------------------+
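To get the full datetime|location|jsonofinstruction row the question asks for, a minimal sketch combining the three values (variable names follow the question; it assumes a SparkSession with spark.implicits._ in scope):

import scala.util.parsing.json.JSONObject

val CDataTime  = "20191012"
val LocationId = "12345"
val SetInstruc = "Comm=Qwe123,Elem=12345,Elem123=Test"

// Parse "k=v,k=v,..." into a Map, then render it as a JSON string.
val insJson = JSONObject(
  SetInstruc.split(",").map { kv =>
    val Array(k, v) = kv.split("=", 2)
    k -> v
  }.toMap
).toString()

val df = Seq((CDataTime, LocationId, insJson))
  .toDF("datetime", "location", "jsonofinstruction")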
I have a function that extracts a node from JSON document as follows:
...
Json = GetJson(Url),
Value = Json[#"values"]
values corresponds to an actual node within the JSON document.
I would like to generalize this piece of code and provide the name of the node as a variable like:
let myFunc = (parentNodeName as text) =>
...
Json = GetJson(Url),
Value = Json[parentNodeName]
However, I am getting this error:
An error occurred in the ‘myFunc’ query. Expression.Error: The field 'parentNodeName' of the record wasn't found.
How can I refer to the JSON node dynamically?
Try
(Json, parentNodeName) =>
let
    ...
    Value = Record.Field(Json, parentNodeName)
in
    Value
sample code:
let
    Json = Json.Document(Web.Contents("http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=json")),
    Value = myFunc(Json, "title")
in
    Value
and myFunc:
(Json, parentNodeName) =>
let
    Value = Record.Field(Json, parentNodeName)
in
    Value
We have a .txt log file, and I used Scala Spark to read it. The file contains sets of data row-wise, and I read them one by one as below:
val sc = spark.sparkContext
// textFile returns an RDD[String], not a DataFrame
val lines = sc.textFile("/path/to/log/*.txt")
val get_set_element = sc.textFile("filepath.txt")
val pattern = """(\S+) "([\S\s]+)" (\S+) (\S+) (\S+) (\S+)""".r
val test = get_set_element.map { line =>
  (for {
    m <- pattern.findAllIn(line).matchData
    g <- m.subgroups
  } yield g).toList
}.map(l => (l(0), l(1), l(2), l(3), l(4), l(5)))
I want to create a DataFrame so that I can save it into a CSV file.
It can be created from an RDD[Row], with a schema assigned:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// instead of: map(l => (l(0), l(1), l(2), l(3), l(4), l(5)))
.map(Row.fromSeq)

val fields = (0 to 5).map(idx => StructField(name = "l" + idx, dataType = StringType, nullable = true))
val df = spark.createDataFrame(test, StructType(fields))
Output:
+---+---+---+---+---+---+
|l0 |l1 |l2 |l3 |l4 |l5 |
+---+---+---+---+---+---+
|a |b |c |d |e |f |
+---+---+---+---+---+---+
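Since the stated goal is saving to a CSV file, the resulting DataFrame can then be written out; a quick sketch (the output path is a placeholder):

// Write the DataFrame as CSV with a header row (path is hypothetical).
df.write
  .option("header", "true")
  .csv("/path/to/output")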
Assuming I have a DataFrame with many columns: some are of type string, others of type int, and others of type map, e.g.:
field/column types: stringType | intType | mapType<string,int> | ...
|------------------|--------|----------------------------------------------------------|---
| myString1        | myInt1 | myMap1                                                   |...
|------------------|--------|----------------------------------------------------------|---
|"this_is_#string" | 123    | {"str11_in#map":1, "str21_in#map":2, "str31_in#map": 31} |...
|"this_is_#string" | 456    | {"str12_in#map":1, "str22_in#map":2, "str32_in#map": 32} |...
|"this_is_#string" | 789    | {"str13_in#map":1, "str23_in#map":2, "str33_in#map": 33} |...
|------------------|--------|----------------------------------------------------------|---
I want to remove some characters like '_' and '#' from all columns of String and Map type
so the resulting DataFrame/RDD would be:
|---------------|--------|-----------------------------------------------------|---
| myString1     | myInt1 | myMap1                                              |...
|---------------|--------|-----------------------------------------------------|---
|"thisisstring" | 123    | {"str11inmap":1, "str21inmap":2, "str31inmap": 31}  |...
|"thisisstring" | 456    | {"str12inmap":1, "str22inmap":2, "str32inmap": 32}  |...
|"thisisstring" | 789    | {"str13inmap":1, "str23inmap":2, "str33inmap": 33}  |...
|---------------|--------|-----------------------------------------------------|---
I am not sure whether it is better to convert the DataFrame into an RDD and work with that, or to do the work on the DataFrame itself.
Also, I am not sure how best to handle the regexp with the different column types (I am using Scala).
I would like to perform this action for all columns of these two types (string and map), avoiding hard-coded column names like:
def cleanRows(mytabledata: DataFrame): RDD[String] = {
  // this will do the work for a specific column (myString1) of type string
  val oneColumn_clean = mytabledata.withColumn("myString1", regexp_replace(col("myString1"), "[_#]", ""))
  ...
  // return type can be RDD or DataFrame...
}
Is there any simple solution to perform this?
Thanks
One option is to define two UDFs to handle String-type and Map-type columns separately:
import org.apache.spark.sql.functions.udf
import spark.implicits._ // for toDF and the $"col" syntax

val df = Seq(("this_is#string", 3, Map("str1_in#map" -> 3))).toDF("myString", "myInt", "myMap")
df.show
+--------------+-----+--------------------+
| myString|myInt| myMap|
+--------------+-----+--------------------+
|this_is#string| 3|Map(str1_in#map -...|
+--------------+-----+--------------------+
1) UDF to handle String-type columns:
def remove_string: String => String = _.replaceAll("[_#]", "")
def remove_string_udf = udf(remove_string)
2) UDF to handle Map-type columns:
def remove_map: Map[String, Int] => Map[String, Int] = _.map{ case (k, v) => k.replaceAll("[_#]", "") -> v }
def remove_map_udf = udf(remove_map)
3) Apply the UDFs to the corresponding columns to clean them up:
df.withColumn("myString", remove_string_udf($"myString")).
withColumn("myMap", remove_map_udf($"myMap")).show
+------------+-----+-------------------+
| myString|myInt| myMap|
+------------+-----+-------------------+
|thisisstring| 3|Map(str1inmap -> 3)|
+------------+-----+-------------------+
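To avoid hard-coding column names, as the question asks, the same UDFs can be applied schema-driven by dispatching on each column's data type. A minimal sketch, assuming the two UDFs above and that every map column is a Map[String, Int] as in the example:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{MapType, StringType}

// Fold over the schema, rewriting String and Map columns in place.
def cleanAllColumns(df: DataFrame): DataFrame =
  df.schema.fields.foldLeft(df) { (acc, field) =>
    field.dataType match {
      case StringType => acc.withColumn(field.name, remove_string_udf(acc(field.name)))
      case _: MapType => acc.withColumn(field.name, remove_map_udf(acc(field.name)))
      case _          => acc // leave other types (e.g. Int) untouched
    }
  }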
I am new to Spark and Scala, coming from an R background. After a few transformations of an RDD, I get an RDD of type
Description: RDD[(String, Int)]
Now I want to apply a regular expression to the String part of the RDD, extract substrings from it, and put just the substring in a new column.
Input data:
BMW 1er Model,278
MINI Cooper Model,248
Output I am looking for:
Input                 | Brand | Series
BMW 1er Model,278     | BMW   | 1er
MINI Cooper Model,248 | MINI  | Cooper
where Brand and Series are substrings newly extracted from the String part of the RDD.
What I have done so far:
I could achieve this for a single String using a regular expression, but I cannot apply it to all lines.
val brandRegEx = """^.*[Bb][Mm][Ww]+|.[Mm][Ii][Nn][Ii]+.*$""".r //to look for BMW or MINI
Then I can use
brandRegEx.findFirstIn("hello this mini is bmW testing")
But how can I use it on all the lines of the RDD, and apply different regular expressions, to achieve the output above?
I read about this code snippet, but I am not sure how to put it all together.
val brandRegEx = """^.*[Bb][Mm][Ww]+|.[Mm][Ii][Nn][Ii]+.*$""".r
def getBrand(col4: String): String = col4 match {
  case brandRegEx(str) => str // note: binding str requires a capture group in the regex
  case _               => ""
}
Any help would be appreciated !
Thanks
To apply your regex to each item in the RDD, you should use the RDD map function, which transforms each row in the RDD using some function (in this case, a partial function, in order to extract the two parts of the tuple that make up each row):
import org.apache.spark.{SparkConf, SparkContext}

object Example extends App {
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("Example"))

  val data = Seq(
    ("BMW 1er Model", 278),
    ("MINI Cooper Model", 248))

  val dataRDD = sc.parallelize(data)

  val processedRDD = dataRDD.map {
    case (inString, inInt) =>
      val brandRegEx = """^.*[Bb][Mm][Ww]+|.[Mm][Ii][Nn][Ii]+.*$""".r
      // findFirstIn returns an Option; default to "NOT FOUND" (matches the output below)
      val brand = brandRegEx.findFirstIn(inString).getOrElse("NOT FOUND")
      //val seriesRegEx = ...
      //val series = seriesRegEx.findFirstIn(inString)
      val series = "foo"
      (inString, inInt, brand, series)
  }

  processedRDD.collect().foreach(println)
  sc.stop()
}
Note that I think you have some problems in your regular expression, and you also need a regular expression for finding the series. This code outputs:
(BMW 1er Model,278,BMW,foo)
(MINI Cooper Model,248,NOT FOUND,foo)
But if you correct your regexes for your needs, this is how you can apply them to each row.
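For the series, one possible approach (an assumption, not from the original answer: the brand comes first, with the series as the next token) is a single regex with two capture groups:

// Hypothetical regex: case-insensitive brand, then the next token as the series.
val brandSeriesRegEx = """(?i)^(BMW|MINI)\s+(\S+).*$""".r

"BMW 1er Model" match {
  case brandSeriesRegEx(brand, series) => (brand, series) // ("BMW", "1er")
  case _                               => ("NOT FOUND", "NOT FOUND")
}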
Hi, I was just looking at another question and came across this one. The above problem can be solved using ordinary transformations:
val a = sc.parallelize(collection)
a.map { case (x, y) => x.split(" ")(0) + " " + x.split(" ")(1) }.collect
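For reference, a runnable version of that snippet with a sample collection (the collection value is assumed, since the post does not define it):

// Assumes a SparkContext named sc is already available.
val collection = Seq(("BMW 1er Model", 278), ("MINI Cooper Model", 248))
val a = sc.parallelize(collection)
// Keep the first two whitespace-separated tokens, e.g. "BMW 1er".
a.map { case (x, y) => x.split(" ")(0) + " " + x.split(" ")(1) }.collect()
// => Array(BMW 1er, MINI Cooper)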
Just wondering: does filter turn the data into tuples? For example:
val filesLines = sc.textFile("file.txt")
val split_lines = filesLines.map(_.split(";"))
val filteredData = split_lines.filter(x => x(4) == "Blue")
// from here, if we wanted to map the data, would it be using tuple format, i.e. x._3 OR x(3)?
val blueRecords = filteredData.map(x => (x._1, x._2))
OR
val blueRecords = filteredData.map(x => (x(0), x(1)))
No. All filter does is take a predicate function and use it so that any datapoints in the set that return false when passed through the predicate are not passed on to the resulting set. So the data remains the same:
filesLines   // RDD[String] (lines of the file)
split_lines  // RDD[Array[String]] (lines split on semicolons)
filteredData // RDD[Array[String]] (lines split on semicolons where the 5th item is "Blue")
So, to use filteredData, you will have to access the data as an array, using parentheses with the appropriate index.
filter will not change the RDD: the filtered data is still an RDD[Array[String]].
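So indexing with parentheses is the right form here; a quick sketch:

// Elements are Array[String], so index with parentheses, not tuple accessors:
val blueRecords = filteredData.map(x => (x(0), x(1))) // RDD[(String, String)]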