Java 8 Lambda: compare two Lists and transform to a Map

Suppose I have two classes:
class Key {
private Integer id;
private String key;
}
class Value {
private Integer id;
private Integer key_id;
private String value;
}
Now I fill the first list as follows:
List<Key> keys = new ArrayList<>();
keys.add(new Key(1, "Name"));
keys.add(new Key(2, "Surname"));
keys.add(new Key(3, "Address"));
And the second one:
List<Value> values = new ArrayList<>();
values.add(new Value(1, 1, "Mark"));
values.add(new Value(2, 3, "Fifth Avenue"));
values.add(new Value(3, 2, "Fischer"));
Can you please tell me how I can rewrite the following code:
for (Key k : keys) {
for (Value v : values) {
if (k.getId().equals(v.getKey_Id())) {
map.put(k.getKey(), v.getValue());
break;
}
}
}
Using Lambdas?
Thank you!
------- UPDATE -------
Yes, sure, it works; I forgot "using Lambdas" in the first post (now added). I would like to rewrite the two nested for loops with Lambdas.

Here is how you would do it using streams:
Stream the key list.
Stream an index for indexing the value list.
Filter matching ids.
Package the key instance's key and the value instance's value into a SimpleEntry.
Then collect those entries into a map.
// Needs imports: java.util.AbstractMap, java.util.Map.Entry,
// java.util.stream.Collectors, java.util.stream.IntStream
Map<String, String> results = keys.stream()
    .flatMap(k -> IntStream.range(0, values.size())
        .filter(i -> k.getId().equals(values.get(i).getKey_id()))
        .mapToObj(i -> new AbstractMap.SimpleEntry<>(
            k.getKey(), values.get(i).getValue())))
    .collect(Collectors.toMap(Entry::getKey, Entry::getValue));
results.entrySet().forEach(System.out::println);
prints
Address=Fifth Avenue
Surname=Fischer
Name=Mark
IMO, your way is much clearer and easier to understand. Streams with lambdas or method references are not always the best approach.
A hybrid approach might also be considered.
Allocate a map.
Iterate over the keys.
Stream the values, trying to find a match on key_id, and return the first one found.
If the value was found (isPresent), add it to the map.
Map<String, String> map = new HashMap<>();
for (Key k : keys) {
    Optional<Value> opt = values.stream()
        .filter(v -> k.getId().equals(v.getKey_id()))
        .findFirst();
    if (opt.isPresent()) {
        map.put(k.getKey(), opt.get().getValue());
    }
}
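For completeness, here is a sketch of an O(n) variant of the same idea (my own addition, not from the original answers): index the values by key_id once, then resolve each key with a single lookup. It assumes the getters used above and that key_ids are unique, since Collectors.toMap throws on duplicate keys.
// Needs imports: java.util.Map, java.util.stream.Collectors
// Build a lookup table from key_id to value first (one pass over values)
Map<Integer, String> byKeyId = values.stream()
    .collect(Collectors.toMap(Value::getKey_id, Value::getValue));
// Then resolve each key with an O(1) lookup instead of scanning values
Map<String, String> result = keys.stream()
    .filter(k -> byKeyId.containsKey(k.getId()))
    .collect(Collectors.toMap(Key::getKey, k -> byKeyId.get(k.getId())));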

Related

How To Retrieve Group Of Elements From IEnumerable Without Iterating

I have the following:
IEnumerable<Personel> personel= page.Retrieve<Personel>....
Then I have a List which contains only personel IDs:
List<int> personelIDs....
I need to retrieve all 'personels' from the IEnumerable and assign them to a new List matching the personel IDs from the 'personelIDs' list.
I can do it by iterating, verifying the IDs, and if they're equal assigning them to another List,
but is there a shortcut here where I can retrieve them without iterating or having multiple lines of code?
Basically, is there a way to shorten this?
List<int> pIds = ....// contains only specific personellID's
IEnumerable<Personel> personelIEn = // contains Personel data like personel IDs, name..etc
List<Personel> personel = personelIEn.ToList();
List<Personel> personelByTag = new List<Personel>();
foreach (Personel b in personel) {
    if (pIds.Contains(b.DocumentID)) {
        personelByTag.Add(b);
    }
}
return personelByTag;
Basically I'm trying to find ways to shorten the above code.
You can use a predicate:
public List<Personel> Search(List<int> pIds, List<Personel> list)
{
    // Keep only the personel whose DocumentID appears in the id list
    Predicate<Personel> predicate = (Personel p) => pIds.Contains(p.DocumentID);
    return list.FindAll(predicate);
}
Could that help?

How to add an item to a list in Kotlin?

I'm trying to add elements to a list of strings, but I found that Kotlin's List does not have an add function like Java's, so please help me out with how to add items to the list.
class RetrofitKotlin : AppCompatActivity() {
var listofVechile:List<Message>?=null
var listofVechileName:List<String>?=null
var listview:ListView?=null
var progressBar:ProgressBar?=null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_retrofit_kotlin)
listview=findViewById<ListView>(R.id.mlist)
var apiInterfacee=ApiClass.client.create(ApiInterfacee::class.java)
val call=apiInterfacee.getTaxiType()
call.enqueue(object : Callback<TaxiTypeResponse> {
override fun onResponse(call: Call<TaxiTypeResponse>, response: Response<TaxiTypeResponse>) {
listofVechile=response.body()?.message!!
println("Sixze is here listofVechile ${listofVechile!!.size}")
if (listofVechile!=null) {
for (i in 0..listofVechile!!.size-1) {
//how to add the name only listofVechileName list
}
}
//println("Sixze is here ${listofVechileName!!.size}")
val arrayadapter=ArrayAdapter<String>(this@RetrofitKotlin,android.R.layout.simple_expandable_list_item_1,listofVechileName)
listview!!.adapter=arrayadapter
}
override fun onFailure(call: Call<TaxiTypeResponse>, t: Throwable) {
}
})
}
}
A more idiomatic approach would be to use MutableList instead of specifically ArrayList. You can declare:
val listOfVehicleNames: MutableList<String> = mutableListOf()
And add to it that way. Alternatively, you may wish to prefer immutability, and declare it as:
var listOfVehicleNames: List<String> = emptyList()
And in your completion block, simply reassign it:
listOfVehicleNames = response.body()?.message()?.orEmpty()
.map { it.name() /* assumes name() function exists */ }
Talking about an idiomatic approach... 🙄
When you can get away with only using immutable lists (which is usually the case in Kotlin), simply use + or plus. It returns a new list
with all elements of the original list plus the newly added one:
val original = listOf("orange", "apple")
val modified = original + "lemon" // [orange, apple, lemon]
original.plus("lemon") yields the same result as original + "lemon". Slightly more verbose but might come in handy when combining several collection operations:
return getFruit()
.plus("lemon")
.distinct()
Besides adding a single element, you can use plus to concatenate a whole collection too:
val original = listOf("orange", "apple")
val other = listOf("banana", "strawberry")
val newList = original + other // [orange, apple, banana, strawberry]
Disclaimer: this doesn't directly answer OP's question, but I feel that in a question titled "How to add an item to a list in Kotlin?", which is a top Google hit for this topic, plus must be mentioned.
If you don't want to (or can't) use a mutable list directly, copy the list first; note that toMutableList() returns a new copy, so keep a reference to it (the original list is not modified):
val mutableItems = itemsList.toMutableList()
mutableItems.add(item)
itemsList: your list of items
item: the item you want to add
Instead of using a regular list, which is immutable, use arrayListOf, which is mutable.
So your regular list becomes:
var listOfVehicleNames = arrayListOf("list items here")
Then you can use the add function:
listOfVehicleNames.add("what you want to add")
You should use a MutableList such as ArrayList:
var listofVechileName:List<String>?=null
becomes
var listofVechileName:ArrayList<String>?=null
and with that you can use the add method:
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/-mutable-list/add.html
For any specific class, the following may help:
val newSearchData = mutableListOf<FIRListValuesFromServer>()
for (i in 0 until this.singleton.firListFromServer.size) {
    if (searchText.equals(this.singleton.firListFromServer.get(i).FIR_SRNO)) {
        newSearchData.add(this.singleton.firListFromServer.get(i))
    }
}
Declare a mutable list like this:
val listofVechile = mutableListOf<String>()
and you will be able to add elements to the list:
listofVechile.add("car")
https://kotlinlang.org/docs/collections-overview.html

Remove all duplicate and the original elements from a list

I have a list of sObject elements. I want to remove the duplicate elements that share a record name, along with the original record.
Like, suppose I have a list of elements having record names as
Chair1,Chair2,Chair3,Chair4,Chair5,Chair6,Chair7,Chair1,Chair2
I want to print a list having only the elements that have no duplicates. For this case I should get the list Chair3, Chair4, Chair5, Chair6, Chair7.
I am using the below code to achieve this, but I am getting the records as: Chair1, Chair2, Chair3, Chair4, Chair5, Chair6, Chair7.
In the ideal case we should not get the records Chair1 and Chair2, as these already have duplicate records.
List <Chair__c> chairList = [SELECT
ID,
Name
FROM Chair__c
ORDER BY Name ASC];
System.debug('chairListOrderbyName::'+chairList);
List <String> chairNameList = new List <String>();
for(Integer i = 0; i < chairList.size();i++) {
for(Integer j = 0;j < chairList.size();j++) {
if(chairList[i].Name.equalsIgnoreCase(chairList[j].Name) && i != j) {
chairList.remove(i);
chairList.remove(j);
}
}
}
System.debug('chairList::'+chairList);
If names are really all you need, you could do it with pure SOQL using GROUP BY and HAVING. Something like:
SELECT Name
FROM Chair__c
GROUP BY Name
HAVING COUNT(Id) = 1 // only unique entries
If you need full sObjects then I'd make a helper Set<String> and loop through the results. If the name isn't in the set, add it. But if it's already there, remove it!
Actually, let's make it a Map, similar idea...
Map<String, Chair__c> chairs = new Map<String, Chair__c>();
for(Chair__c c : [SELECT ...]){
if(chairs.containsKey(c.Name)){
chairs.remove(c.Name);
} else {
chairs.put(c.Name, c);
}
}
System.debug(JSON.serializePretty(chairs));
System.debug(chairs.values());
Try using a Set<String> instead of a List<String>. Refer to the solution below:
List <Chair__c> chairList = [SELECT
ID,
Name
FROM Chair__c
ORDER BY Name ASC];
Set<String> chairNameSet = new Set<String>();
for (Chair__c item : chairList) {
    chairNameSet.add(item.Name);
}

Univocity - parse each TSV file row to different Type of class object

I have a TSV file which has fixed rows, but each row is mapped to a different Java class.
For example.
recordType recordValue1
recordType recordValue1 recordValue2
For the first row I have the following class:
public class FirstRow implements ItsvRecord {
    @Parsed(index = 0)
    private String recordType;
    @Parsed(index = 1)
    private String recordValue1;
    public FirstRow() {
    }
}
and for the second row I have:
public class SecondRow implements ItsvRecord {
    @Parsed(index = 0)
    private String recordType;
    @Parsed(index = 1)
    private String recordValue1;
    @Parsed(index = 2)
    private String recordValue2;
    public SecondRow() {
    }
}
I want to parse the TSV file directly to the respective objects but I am falling short of ideas.
Use an InputValueSwitch. This will match a value in a particular column of each row to determine what RowProcessor to use. Example:
Create two (or more) processors for each type of record you need to process:
final BeanListProcessor<FirstRow> firstProcessor = new BeanListProcessor<FirstRow>(FirstRow.class);
final BeanListProcessor<SecondRow> secondProcessor = new BeanListProcessor<SecondRow>(SecondRow.class);
Create an InputValueSwitch:
//0 means that the first column of each row has a value that
//identifies what is the type of record you are dealing with
InputValueSwitch valueSwitch = new InputValueSwitch(0);
//assigns the first processor to rows whose first column contains the 'firstRowType' value
valueSwitch.addSwitchForValue("firstRowType", firstProcessor);
//assigns the second processor to rows whose first column contains the 'secondRowType' value
valueSwitch.addSwitchForValue("secondRowType", secondProcessor);
Parse as usual:
TsvParserSettings settings = new TsvParserSettings(); //configure...
// your row processor is the switch
settings.setProcessor(valueSwitch);
TsvParser parser = new TsvParser(settings);
Reader input = new StringReader(""+
"firstRowType\trecordValue1\n" +
"secondRowType\trecordValue1\trecordValue2");
parser.parse(input);
Get the parsed objects from your processors:
List<FirstRow> firstTypeObjects = firstProcessor.getBeans();
List<SecondRow> secondTypeObjects = secondProcessor.getBeans();
The output will be*:
[FirstRow{recordType='firstRowType', recordValue1='recordValue1'}]
[SecondRow{recordType='secondRowType', recordValue1='recordValue1', recordValue2='recordValue2'}]
* Assuming you have a sane toString() implemented in your classes.
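For instance, a minimal toString() for FirstRow matching the output above could look like this (my own sketch, not from the original answer):
@Override
public String toString() {
    // Mirrors the FirstRow{...} format shown in the sample output
    return "FirstRow{recordType='" + recordType +
            "', recordValue1='" + recordValue1 + "'}";
}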
If you want to manage associations among the objects that are parsed:
If your FirstRow should contain the elements parsed for records of type SecondRow, simply override the rowProcessorSwitched method:
InputValueSwitch valueSwitch = new InputValueSwitch(0) {
@Override
public void rowProcessorSwitched(RowProcessor from, RowProcessor to) {
if (from == secondProcessor) {
List<FirstRow> firstRows = firstProcessor.getBeans();
FirstRow mostRecentRow = firstRows.get(firstRows.size() - 1);
mostRecentRow.addRowsOfOtherType(secondProcessor.getBeans());
secondProcessor.getBeans().clear();
}
}
};
The above assumes your FirstRow class has a addRowsOfOtherType method that takes a list of SecondRow as parameter.
And that's it!
You can even mix and match other types of RowProcessor. There's another example here that demonstrates this.
Hope this helps.

Spark - Reduce operation taking too long

I'm making an application with Spark that will run some topic extraction algorithms. For that, I first need to do some preprocessing, extracting the document-term matrix at the end. I could do that, but for a (not that much) big collection of documents (only 2 thousand, 5MB), this process is taking forever.
So, debugging, I've found where the program kind of gets stuck, and it's in a reduce operation. What I'm doing in this part of the code is counting how many times each term occurs in the collection, so first I do a "map", counting it for each RDD, and then I "reduce" it, saving the result inside a HashMap. The map operation is very fast, but in the reduce, it splits the operation into 40 blocks, and each block takes 5~10 minutes to process.
So I'm trying to figure out what I'm doing wrong, or if reduce operations really are that costly.
SparkConf: standalone mode, using local[2]. I've tried to use it as "spark://master:7077", and it worked, but with the same slowness.
Code:
"filesIn" is a JavaPairRDD where the key is the file path and the value is the content of the file.
So, first the map, where I take this "filesIn", split the words, and count their frequency (in this case it doesn't matter which document they come from).
And then the reduce, where I create a HashMap (term, freq).
JavaRDD<HashMap<String, Integer>> termDF_ = filesIn.map(new Function<Tuple2<String, String>, HashMap<String, Integer>>() {
@Override
public HashMap<String, Integer> call(Tuple2<String, String> t) throws Exception {
String[] allWords = t._2.split(" ");
HashMap<String, Double> hashTermFreq = new HashMap<String, Double>();
ArrayList<String> words = new ArrayList<String>();
ArrayList<String> terms = new ArrayList<String>();
HashMap<String, Integer> termDF = new HashMap<String, Integer>();
for (String term : allWords) {
if (hashTermFreq.containsKey(term)) {
Double freq = hashTermFreq.get(term);
hashTermFreq.put(term, freq + 1);
} else {
if (term.length() > 1) {
hashTermFreq.put(term, 1.0);
if (!terms.contains(term)) {
terms.add(term);
}
if (!words.contains(term)) {
words.add(term);
if (termDF.containsKey(term)) {
int value = termDF.get(term);
value++;
termDF.put(term, value);
} else {
termDF.put(term, 1);
}
}
}
}
}
return termDF;
}
});
HashMap<String, Integer> termDF = termDF_.reduce(new Function2<HashMap<String, Integer>, HashMap<String, Integer>, HashMap<String, Integer>>() {
@Override
public HashMap<String, Integer> call(HashMap<String, Integer> t1, HashMap<String, Integer> t2) throws Exception {
HashMap<String, Integer> result = new HashMap<String, Integer>();
Iterator iterator = t1.keySet().iterator();
while (iterator.hasNext()) {
String key = (String) iterator.next();
if (result.containsKey(key) == false) {
result.put(key, t1.get(key));
} else {
result.put(key, result.get(key) + 1);
}
}
iterator = t2.keySet().iterator();
while (iterator.hasNext()) {
String key = (String) iterator.next();
if (result.containsKey(key) == false) {
result.put(key, t2.get(key));
} else {
result.put(key, result.get(key) + 1);
}
}
return result;
}
});
Thanks!
OK, so just off the top of my head:
Spark transformations are lazy. This means that map is not executed until you call the subsequent reduce action, so what you describe as a slow reduce is most likely a slow map + reduce.
ArrayList.contains is O(N), so all these words.contains and terms.contains calls are extremely inefficient.
The map logic smells fishy. In particular:
if a term has already been seen, you never get into the else branch;
at first glance, words and terms should have exactly the same content and should be equivalent to the hashTermFreq keys or termDF keys;
it looks like values in termDF can only take the value 1. If this is what you want and you ignore frequencies, what is the point of creating hashTermFreq?
The reduce phase as implemented here means an inefficient linear scan with a growing object over the data, when what you really want is reduceByKey.
Using Scala as pseudocode, your whole code can be efficiently expressed as follows:
val termDF = filesIn.flatMap{
case (_, text) =>
text.split(" ") // Split
.toSet // Take unique terms
.filter(_.size > 1) // Remove single characters
.map(term => (term, 1))} // map to pairs
.reduceByKey(_ + _) // Reduce by key
termDF.collectAsMap // Optionally
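If you prefer to stay in Java, a rough equivalent is sketched below (my own translation, assuming Java 8 and Spark 2.x, where flatMap expects an Iterator):
// Needs: java.util.*, scala.Tuple2, org.apache.spark.api.java.JavaPairRDD
JavaPairRDD<String, Integer> counts = filesIn
    .flatMap(t -> {
        // Unique terms per document, dropping single characters
        Set<String> terms = new HashSet<>(Arrays.asList(t._2.split(" ")));
        terms.removeIf(term -> term.length() <= 1);
        return terms.iterator();
    })
    .mapToPair(term -> new Tuple2<>(term, 1))
    .reduceByKey(Integer::sum); // distributed count per term
Map<String, Integer> termDF = counts.collectAsMap(); // optionally bring to the driver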
Finally, it looks like you're reinventing the wheel. At least some of the tools you need are already implemented in mllib.feature or ml.feature.
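For instance, the document-term matrix itself can be obtained with a few lines of Spark ML. A sketch under assumptions (a Dataset<Row> named docs with an array<string> column "words"; CountVectorizer lives in ml.feature):
// Needs: org.apache.spark.ml.feature.CountVectorizer,
// org.apache.spark.ml.feature.CountVectorizerModel, org.apache.spark.sql.*
// Fit a vocabulary over the corpus, then turn each document into a term-count vector
CountVectorizerModel model = new CountVectorizer()
    .setInputCol("words")     // tokenized documents
    .setOutputCol("features") // sparse term-count vectors
    .fit(docs);
Dataset<Row> termMatrix = model.transform(docs);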