Filling a map in Groovy without using a list

I'm trying to fill my HashMap with objects without using List<Order>. I know about .with, which is used like a forEach, but first I have to put index [0]. How do I properly fill this map?
Map<String, Object> orders = new HashMap<>()
try {
    context.apiClient.getDAO(pl.bonita.integrator.OrderDAO.class).findByArchived(IsArchived, startIndex, maxRows)[0].with {
        Map<String, Object> order = new HashMap<>()
        if (!it) {
            response = "No case with id ${processId}"
        }
        order.put("CaseId", it.getCaseId())
        order.put("AssignedName", it.assignedName)
        orders.put(it.getCaseId().toString(), order)
    }
} catch (Exception e) {
    response = "Exception: " + e.getMessage()
}

Related

Export a table in PostGIS to a shapefile without a specific column

I would like to export a PostgreSQL (PostGIS) table to a shapefile, but without a certain column. I don't want to delete the column in the db first. How do I exclude this column?
This is the export function:
private void exportShapeFile(String name, String path) {
    try {
        DataStore pgDatastore = Snippets.createPostgisDataStore();
        SimpleFeatureCollection sfc = Snippets.getSimpleFeatureCollection(pgDatastore, name);
        final SimpleFeatureType TYPE = Snippets.getPostgisSimpleFeatureType(pgDatastore, name);
        String filename = path + "\\" + name + ".shp";
        File newFile = new File(filename);
        CoordinateReferenceSystem sourceCRS = Snippets.getCRS(name);
        Object[] a = sourceCRS.getIdentifiers().toArray();
        String crsOrig = a[0].toString();
        String wkt = null;
        ShapefileDataStoreFactory dataStoreFactory = new ShapefileDataStoreFactory();
        Map<String, Serializable> params = new HashMap<String, Serializable>();
        params.put("url", newFile.toURI().toURL());
        params.put("create spatial index", Boolean.TRUE);
        File directory = new File(txtFieldDir.getText());
        ShapefileDataStore newDataStore = (ShapefileDataStore) dataStoreFactory.createNewDataStore(params);
        newDataStore.createSchema(TYPE);
        Transaction transaction = new DefaultTransaction("create");
        String typeName = newDataStore.getTypeNames()[0];
        SimpleFeatureSource featureSource = newDataStore.getFeatureSource(typeName);
        if (featureSource instanceof SimpleFeatureStore) {
            SimpleFeatureStore featureStore = (SimpleFeatureStore) featureSource;
            featureStore.setTransaction(transaction);
            try {
                featureStore.addFeatures(sfc);
                transaction.commit();
            } catch (Exception problem) {
                problem.printStackTrace();
                transaction.rollback();
            } finally {
                transaction.close();
                pgDatastore.dispose();
                newDataStore.dispose();
            }
        } else {
            Snippets.appendToPane(txtLog, "ERROR:" + typeName + " does not support read/write access.\n", MainDialog.colRed);
        }
    } catch (MalformedURLException e) {
        Snippets.appendToPane(txtLog, e.toString() + "\n", MainDialog.colRed);
        e.printStackTrace();
    } catch (IOException e) {
        Snippets.appendToPane(txtLog, e.toString() + "\n", MainDialog.colRed);
        e.printStackTrace();
    }
}
You need to generate a new schema for your shapefile and then retype your features to that schema. DataUtilities provides useful methods for this: createSubType to generate a new schema limited to a shorter list of attributes, and reType to change a feature into one that matches the new schema.
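For illustration, a minimal sketch of that approach (the attribute names in keep are hypothetical; TYPE, sfc, newDataStore and featureStore are the variables from the export code above; imports and SchemaException handling are omitted):
// Build a reduced schema containing only the attributes to keep
// (everything except the unwanted column).
String[] keep = {"the_geom", "name", "population"}; // hypothetical attribute names
SimpleFeatureType subType = DataUtilities.createSubType(TYPE, keep);
newDataStore.createSchema(subType);

// Retype each feature from the PostGIS collection to the reduced schema
// before writing it to the shapefile.
List<SimpleFeature> retyped = new ArrayList<SimpleFeature>();
SimpleFeatureIterator iterator = sfc.features();
try {
    while (iterator.hasNext()) {
        retyped.add(DataUtilities.reType(subType, iterator.next()));
    }
} finally {
    iterator.close();
}
featureStore.addFeatures(DataUtilities.collection(retyped));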

Understanding Window query type in Siddhi

I am trying to implement a basic window on an input stream in Siddhi.
This is the window query:
executionPlan = "" +
"define stream inputStream (height int); " +
"" +
"#info(name = 'query1') " +
"from inputStream #window.length(5) " +
"select avg(height) as avgHt " +
"insert into outputStream ;";
And this is how I am giving data to the input Stream.
Object[] obj1 = {10};
Object[] obj2 = {5};
for (int i = 0; i < 10; i++) {
    try {
        inputHandler.send(obj1);
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
for (int i = 0; i < 20; i++) {
    try {
        inputHandler.send(obj2);
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
Am I wrong in supposing that the query should give a callback after each input to the inputHandler? In that case the initial output should be 10, then it should gradually decrease and become 5. At the point where I have sent all the 10's and two 5's, I should get a callback with the average (10+10+10+5+5)/5 = 8. But this is not happening currently. With this implementation I get two callbacks, with averages 10 and 5 respectively. Why isn't there a gradual decrease from 10 to 5?
This is how I add the callback:
executionPlanRuntime.addCallback("query1", new QueryCallback() {
    @Override
    public void receive(long timeStamp, Event[] inEvents, Event[] removeEvents) {
        // printing inEvents
        EventPrinter.print(inEvents);
    }
});
What am I missing here?
Since you are sending the events in a burst, Siddhi batches them internally. But if you add a Thread.sleep(100) between the events you send, it will output as you expected.
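For example, a minimal sketch of the send loops with that pause added (same inputHandler and payloads as in the question; InterruptedException handling omitted):
// Pause briefly between sends so the window processes events one by one
// instead of receiving them as a single burst.
for (int i = 0; i < 10; i++) {
    inputHandler.send(new Object[]{10});
    Thread.sleep(100);
}
for (int i = 0; i < 20; i++) {
    inputHandler.send(new Object[]{5});
    Thread.sleep(100);
}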

Spark - Reduce operation taking too long

I'm building an application with Spark that will run some topic extraction algorithms. For that, I first need to do some preprocessing, ending up with the document-term matrix. I got that working, but for a not particularly big collection of documents (only 2 thousand, 5 MB), this process takes forever.
So, while debugging, I found where the program gets stuck, and it's in a reduce operation. What I'm doing in this part of the code is counting how many times each term occurs in the collection: first I do a "map", counting it for each RDD element, and then I "reduce" it, saving the result inside a HashMap. The map operation is very fast, but in the reduce, it splits the operation into 40 blocks, and each block takes 5-10 minutes to process.
So I'm trying to figure out what I'm doing wrong, or if reduce operations are just that costly.
SparkConf: standalone mode, using local[2]. I've also tried "spark://master:7077", and it worked, but with the same slowness.
Code:
"filesIn" is a JavaPairRDD where the key is the file path and the value is the content of the file.
So, first the map, where I take this "filesIn", split the words, and count their frequency (in this case it doesn't matter which document it is).
And then the reduce, where I create a HashMap (term, freq).
JavaRDD<HashMap<String, Integer>> termDF_ = filesIn.map(new Function<Tuple2<String, String>, HashMap<String, Integer>>() {
    @Override
    public HashMap<String, Integer> call(Tuple2<String, String> t) throws Exception {
        String[] allWords = t._2.split(" ");
        HashMap<String, Double> hashTermFreq = new HashMap<String, Double>();
        ArrayList<String> words = new ArrayList<String>();
        ArrayList<String> terms = new ArrayList<String>();
        HashMap<String, Integer> termDF = new HashMap<String, Integer>();
        for (String term : allWords) {
            if (hashTermFreq.containsKey(term)) {
                Double freq = hashTermFreq.get(term);
                hashTermFreq.put(term, freq + 1);
            } else {
                if (term.length() > 1) {
                    hashTermFreq.put(term, 1.0);
                    if (!terms.contains(term)) {
                        terms.add(term);
                    }
                    if (!words.contains(term)) {
                        words.add(term);
                        if (termDF.containsKey(term)) {
                            int value = termDF.get(term);
                            value++;
                            termDF.put(term, value);
                        } else {
                            termDF.put(term, 1);
                        }
                    }
                }
            }
        }
        return termDF;
    }
});
HashMap<String, Integer> termDF = termDF_.reduce(new Function2<HashMap<String, Integer>, HashMap<String, Integer>, HashMap<String, Integer>>() {
    @Override
    public HashMap<String, Integer> call(HashMap<String, Integer> t1, HashMap<String, Integer> t2) throws Exception {
        HashMap<String, Integer> result = new HashMap<String, Integer>();
        Iterator iterator = t1.keySet().iterator();
        while (iterator.hasNext()) {
            String key = (String) iterator.next();
            if (result.containsKey(key) == false) {
                result.put(key, t1.get(key));
            } else {
                result.put(key, result.get(key) + 1);
            }
        }
        iterator = t2.keySet().iterator();
        while (iterator.hasNext()) {
            String key = (String) iterator.next();
            if (result.containsKey(key) == false) {
                result.put(key, t2.get(key));
            } else {
                result.put(key, result.get(key) + 1);
            }
        }
        return result;
    }
});
Thanks!
OK, so just off the top of my head:
Spark transformations are lazy. This means that map is not executed until you call the subsequent reduce action, so what you describe as a slow reduce is most likely a slow map + reduce.
ArrayList.contains is O(N), so all these words.contains and terms.contains calls are extremely inefficient.
The map logic smells fishy. In particular:
if a term has already been seen, you never get into the else branch;
at first glance, words and terms should have exactly the same content and should be equivalent to the keys of hashTermFreq or termDF;
it looks like values in termDF can only take the value 1. If this is what you want and you ignore frequencies, what is the point of creating hashTermFreq?
The reduce phase as implemented here means an inefficient linear scan with a growing object over the data, while what you really want is reduceByKey.
Using Scala as pseudocode, your whole code can be efficiently expressed as follows:
val termDF = filesIn.flatMap {
  case (_, text) =>
    text.split(" ")            // Split
      .toSet                   // Take unique terms
      .filter(_.size > 1)      // Remove single characters
      .map(term => (term, 1))  // Map to pairs
}.reduceByKey(_ + _)           // Reduce by key

termDF.collectAsMap            // Optionally
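For reference, a rough Java equivalent of that sketch (an assumption: it presumes the Spark 2.x Java API, where flatMapToPair expects an Iterator, and reuses the filesIn pair RDD from the question):
// Emit (term, 1) once per document for every distinct term longer than one
// character, then sum the ones per term with reduceByKey.
JavaPairRDD<String, Integer> termDF = filesIn
        .flatMapToPair(t -> {
            Set<String> unique = new HashSet<>(Arrays.asList(t._2().split(" ")));
            return unique.stream()
                    .filter(term -> term.length() > 1)
                    .map(term -> new Tuple2<>(term, 1))
                    .iterator();
        })
        .reduceByKey((a, b) -> a + b);

Map<String, Integer> asMap = termDF.collectAsMap(); // optionally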
Finally, it looks like you're reinventing the wheel. At least some of the tools you need are already implemented in mllib.feature or ml.feature.
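As an illustration of that last point, a hypothetical sketch with mllib.feature (note this produces hashed term-frequency / TF-IDF vectors rather than the exact per-term document-frequency map built above):
// Turn each document into a hashed term-frequency vector, then let IDF
// fold in document frequencies. filesIn is the same pair RDD as above.
HashingTF tf = new HashingTF();
JavaRDD<Vector> tfVectors = tf.transform(
        filesIn.map(t -> Arrays.asList(t._2().split(" "))));
tfVectors.cache();
IDFModel idfModel = new IDF().fit(tfVectors);
JavaRDD<Vector> tfidf = idfModel.transform(tfVectors);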

Using Mockito to test the Java HBase API

This is the method that I am testing. It gets some bytes from an HBase database based on a specific id, in this case called dtmid. The reason why I want to return specific values is that I realized there is no way to know whether an id will always be in HBase. Also, the column family and column name could change.
@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
    try {
        if (tuple.size() > 0) {
            Long dtmid = tuple.getLong(0);
            byte[] rowKey = HBaseRowKeyDistributor.getDistributedKey(dtmid);
            Get get = new Get(rowKey);
            get.addFamily("a".getBytes());
            Result result = table.get(get);
            byte[] bidUser = result.getValue("a".getBytes(),
                    "co_created_5076".getBytes());
            collector.emit(new Values(dtmid, bidUser));
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
In my main class, when this method is called I want it to return a specific value. The method should return some bytes.
byte[] bidUser = result.getValue("a".getBytes(),
"co_created_5076".getBytes());
This is what I have in my unit test.
@Test
public void testExecute() throws IOException {
    long dtmId = 350000000770902930L;
    final byte[] COL_FAMILY = "a".getBytes();
    final byte[] COL_QUALIFIER = "co_created_5076".getBytes();
    // setting a key value pair to put in result
    List<KeyValue> kvs = new ArrayList<KeyValue>();
    kvs.add(new KeyValue("--350000000770902930".getBytes(), COL_FAMILY, COL_QUALIFIER, Bytes.toBytes("ExpedtedBytes")));
    // I create an instance of Result
    Result result = new Result(kvs);
    // A mock tuple with a single dtmid
    Tuple tuple = mock(Tuple.class);
    bolt.table = mock(HTable.class);
    Result mcResult = mock(Result.class);
    when(tuple.size()).thenReturn(1);
    when(tuple.getLong(0)).thenReturn(dtmId);
    when(bolt.table.get(any(Get.class))).thenReturn(result);
    when(mcResult.getValue(any(byte[].class), any(byte[].class))).thenReturn(Bytes.toBytes("Bytes"));
    BasicOutputCollector collector = mock(BasicOutputCollector.class);
    // Execute the bolt.
    bolt.execute(tuple, collector);
    ArgumentCaptor<Values> valuesArg = ArgumentCaptor.forClass(Values.class);
    verify(collector).emit(valuesArg.capture());
    Values d = valuesArg.getValue();
    // casting this object into a byte array
    byte[] i = (byte[]) d.get(1);
    assertEquals(dtmId, d.get(0));
}
I am using this (shown below) to return my bytes. For some reason it is not working.
when(mcResult.getValue(any(byte[].class), any(byte[].class)))
        .thenReturn(Bytes.toBytes("myBytes"));
For some reason when I capture the values, I still get the bytes that I specified here:
List<KeyValue> kvs = new ArrayList<KeyValue>();
kvs.add(new KeyValue("--350000000770902930".getBytes(), COL_FAMILY, COL_QUALIFIER,
        Bytes.toBytes("ExpedtedBytes")));
Result result = new Result(kvs);
How about replacing
when(bolt.table.get(any(Get.class))).thenReturn(result);
with...
when(bolt.table.get(any(Get.class))).thenReturn(mcResult);
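The stub on mcResult only takes effect if mcResult is the Result the bolt actually receives from table.get; as long as table.get returns the real Result built from kvs, the getValue stubbing is never consulted. Roughly (same variables as in the test above):
// With the mocked Result wired in, the stubbed bytes are what the bolt reads.
when(bolt.table.get(any(Get.class))).thenReturn(mcResult);
when(mcResult.getValue(any(byte[].class), any(byte[].class)))
        .thenReturn(Bytes.toBytes("myBytes"));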

How to verify a counter-signed XML document?

How do I verify a counter-signed XML document using the Xades4j library?
I am getting the following error when verifying with Xades4j:
xades4j.verification.CounterSignatureSigValueRefException: Verification failed for property 'CounterSignature': the counter signature doesn't reference the SignatureValue element of the countersigned signature
    at xades4j.verification.CounterSignatureVerifier.verify(CounterSignatureVerifier.java:75)
    at xades4j.verification.CounterSignatureVerifier.verify(CounterSignatureVerifier.java:37)
    at xades4j.verification.GenericDOMDataVerifier.verify(GenericDOMDataVerifier.java:65)
    at xades4j.verification.GenericDOMDataVerifier.verify(GenericDOMDataVerifier.java:30)
    at xades4j.verification.QualifyingPropertiesVerifierImpl.verifyProperties(QualifyingPropertiesVerifierImpl.java:59)
    at xades4j.verification.XadesVerifierImpl.verify(XadesVerifierImpl.java:187)
    at com.fit.einvoice.ingcountersigner.service.xades.XadesVerifyOperation.verifySignature(XadesVerifyOperation.java:92)
    at com.fit.einvoice.ingcountersigner.service.xades.XadesVerifyOperation.verifySignature(XadesVerifyOperation.java:87)
    at com.fit.einvoice.ingcountersigner.service.xades.XadesVerifyOperation.verifySignature(XadesVerifyOperation.java:64)
My validation function:
static void checkSigned(File file) {
    InputStream inputStream = null;
    try {
        inputStream = new FileInputStream(file);
        XadesVerifyOperation verifyOperation = new XadesVerifyOperation();
        ArrayList<XadesVerificationResults> results = verifyOperation.verifySignature(inputStream);
        System.out.println("results size: " + results.size());
        for (XadesVerificationResults result : results) {
            System.out.println(result.SigningCertificate.getIssuerDN());
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            inputStream.close();
        } catch (IOException ex) {
        }
    }
}
EDIT:
My counter-signing function:
public void CounterSign() throws TransformerFactoryConfigurationError, Exception {
    Document doc = SignatureServicesBase.getDocument(_inputStream);
    Element sigElem = (Element) doc.getElementsByTagNameNS(Constants.SignatureSpecNS, Constants._TAG_SIGNATURE).item(0);
    System.out.println(sigElem.getNodeName());
    org.apache.xml.security.Init.init();
    XMLSignature xmlSig = new XMLSignature(sigElem, doc.getBaseURI());
    // Create counter signer
    XadesBesSigningProfile signingProfile = new XadesBesSigningProfile(new Pkcs11KeyingDataProvider(_certInfo));
    signingProfile.withAlgorithmsProvider(Sha1AlgProvider.class);
    signingProfile.withBasicSignatureOptionsProvider(new MyBasicSignatureOptionsProvider(true, true, false));
    final XadesSigner counterSigner = signingProfile.newSigner();
    // Extend with counter signature
    XadesFormatExtenderProfile extenderProfile = new XadesFormatExtenderProfile();
    XadesSignatureFormatExtender extender = extenderProfile.getFormatExtender();
    List unsignedProps = Arrays.asList(new CounterSignatureProperty(counterSigner));
    extender.enrichSignature(xmlSig, new UnsignedProperties(unsignedProps));
    SignatureServicesBase.outputDocument(doc, _outStream);
    if (!_isStream) {
        _inputStream.close();
        _outStream.close();
    }
}
I'm not sure I completely understood your question. If you're asking how to verify a counter signature property, it is already done as part of the verification of the "main" signature. Please note:
The same XadesVerifier is used for both the main signature and the counter signature.
If the validation succeeds, a property of type CounterSignatureProperty is added to the result.
You can access the property through the verification result of the main signature:
XAdESVerificationResult res = ...;
CounterSignatureProperty p = res.getPropertiesFilter().getOfType(CounterSignatureProperty.class);
EDIT:
The message says everything: the counter signature is probably invalid. By definition, a counter signature must include a reference to the countersigned SignatureValue element.
Can you lookup the CounterSignature element on the original XML document and post it here?