Convert List of Maps into a Single Map in Groovy

I have a list of maps like below:
List mapOne = [[hi:1], [hello:2], [xyx:4]]
This list should be converted into one single map like below:
Map resultMap = [hi:1, hello:2, xyx:4]
Do we have any built-in functions in Groovy for this?

Just do:
Map resultMap = mapOne.collectEntries()

Another option is sum:
groovy:000> [[hi:1], [hello:2], [xyx:4]].sum()
===> [hi:1, hello:2, xyx:4]

Flutter: Create map from list using a delimiter

I am trying to store a list of activities, each with a specific color, locally, and to convert the list into either a map or a list of lists.
Using shared preferences to save the data locally, I have the following list:
List<String> value = ['Sleep: Blue', 'Meditation: Green', 'Running: Red'];
prefs.setStringList('ActivityList', value); //save data locally
But I want to be able to retrieve an object of the form:
values = [ {'Sleep', 'Blue'}, {'Meditation', 'Green'}, {'Running', 'Red'} ];
What would be the best way to do this and how would I use the delimiter ':' to split the data accordingly?
Thanks in advance!
I am not sure what you mean by array of objects. If you simply want an array of pairs, then the following should work for you:
value.map((item) => item.split(": "))
Or if you want a key-value map from your data, then you can do something like this:
Map.fromEntries(value.map((item) {
  List<String> pair = item.split(": ");
  return MapEntry(pair[0], pair[1]);
}));

How to show which values of one list are in other list?

Having 2 lists, I want to check which values of List1 are in List2. I'm trying as below, but I get an error:
List1 = {3,2,8,7,5}
List2 = {1,3,4,2,6,7,9}
= List.Transform(List1, each Text.Contains(List2, _))
Expression.Error: We cannot convert a value of type List to type Text.
Details:
Value=[List]
Type=[Type]
My expected output would be 3,2,7.
How can I do this?
See the List.Intersect documentation:
Intersect = List.Intersect({List1,List2})
#horseyride has probably the best answer, but using your original logic, you could also write the intersection like this:
List.Select(List1, each List.Contains(List2, _))
This uses Select instead of Transform, since you are selecting/filtering rather than transforming the elements, and it uses List.Contains instead of Text.Contains, since you are testing membership in a list rather than in a text value.

PySpark: converting a list of tuples of mixed type into a DataFrame gives null values

I have a function that calculates something and returns a list of tuples; it looks like this:
def check():
    [...]
    return [("valid", 1), ("wrong", 4), ("lines", ["line1", "line2"])]
Then I'd like to add all these values together to have the final counts
rdd = lines.mapPartitions(lambda x: check()).reduceByKey(lambda a,b: a+b)
The result is something like:
[("valid", 102), ("wrong", 322), ("lines", ["test1", "test2", "test2"])]
My goal is to be able to write the 'lines' tuple to a file (or multiple files), and the valid and wrong counts to a separate file.
My question is: Is there a better data structure than what I'm currently using? If not, how can I search for the "lines" tuple in my list?
Or, maybe better, is it possible to transform that RDD into a DataFrame so that I could run a SQL select on it?
I tried rdd.toDF().show(), but for some reason the value column of "lines" becomes null.
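Since no answer is shown here, the following is only a hedged sketch of one possible direction, not a confirmed solution: keep the reduceByKey result as it is, but filter by key before converting, so the homogeneous count pairs can become a DataFrame while the "lines" list is written out separately. The sample data and the "lines_output" path are made up for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# The (key, value) pairs produced by the reduceByKey step in the question,
# reproduced as literals so the sketch runs on its own.
rdd = sc.parallelize([("valid", 102), ("wrong", 322),
                      ("lines", ["test1", "test2", "test2"])])

# The count pairs are homogeneous (str, int), so they convert cleanly to a DataFrame.
counts_df = rdd.filter(lambda kv: kv[0] in ("valid", "wrong")).toDF(["name", "count"])
counts_df.show()

# The "lines" value is a list of strings; flatten it and write it on its own
# (the output path is hypothetical).
rdd.filter(lambda kv: kv[0] == "lines") \
   .flatMap(lambda kv: kv[1]) \
   .saveAsTextFile("lines_output")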

Loop through dict, compare values to another dict's keys, then put matches in a new dict

I have a rather large dictionary right now that is set up like this:
largedict = {'journalname': {'code': 2065}, 'journalname1': {'code': 3055}}
and so on and so on. And another dictionary:
codes = {3055: 'medicine', 3786: 'sciences'}
And I want to loop through largedict and compare its code value to the keys in codes, then either add all journalname key/value pairs whose code matches to a different dictionary, or delete from largedict all those that don't.
new_dic = {journal_name : journal_body for journal_name, journal_body in largedict.items() if journal_body["code"] in codes}
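As a quick self-contained check of that comprehension (sample data taken from the question, with the inner 'code' keys quoted), only the journal whose code appears in codes is kept:
largedict = {'journalname': {'code': 2065}, 'journalname1': {'code': 3055}}
codes = {3055: 'medicine', 3786: 'sciences'}

# Keep only the journals whose 'code' value appears as a key in codes
new_dic = {journal_name: journal_body
           for journal_name, journal_body in largedict.items()
           if journal_body["code"] in codes}

print(new_dic)  # {'journalname1': {'code': 3055}}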

Convert a Java List of Lists to Scala without O(n) iteration?

Answers to this question do a good job of explaining how to use Scala's Java Converters to change a Java List into a Scala List. Unfortunately, I need to convert a List of Lists from Java to Scala types, and that solution doesn't work:
// pseudocode
java.util.List[java.util.List[String]].asScala
-> scala.collection.immutable.List[java.util.List[String]]
Is there a way to do this conversion without an O(N) iteration over the Java object?
You need to convert the nested lists as well, but that would require the up-front O(n):
import scala.collection.JavaConverters._
val javaListOfLists = List(List("a", "b", "c").asJava, List("d", "e", "f").asJava).asJava
val scalaListOfLists = javaListOfLists.asScala.toList.map(_.asScala.toList)
Alternatively, you could convert the outer list into a Stream[List[T]]; that would only apply the conversion cost as you access each item:
val scalaStreamOfLists = javaListOfLists.asScala.toStream.map(_.asScala.toList)
If you don't want to pay the conversion cost at all, you could write a wrapper around java.util.List which would give you a Scala collection interface. A rough shot at that would be:
def wrap[T](javaIterator: java.util.Iterator[T]): Stream[T] = {
  if (javaIterator.hasNext)
    javaIterator.next #:: wrap(javaIterator)
  else
    Stream.empty
}
val outerWrap = wrap(javaListOfLists.iterator).map(inner => wrap(inner.iterator()))
Alternatively, you can use the scalaj-collection library I wrote specifically for this purpose:
import com.daodecode.scalaj.collection._
val listOfLists: java.util.List[java.util.List[String]] = ...
val s: mutable.Seq[mutable.Seq[String]] = listOfLists.deepAsScala
That's it. It will convert all nested Java collections and primitive types to their Scala versions. You can also convert directly to immutable data structures using deepAsScalaImmutable (with some copying overhead, of course).