SWF Flow Framework DataConverterException while serializing CancellationException - amazon-web-services

I am trying to explicitly cancel a TryCatchFinally statement using:
TryCatchFinally executionTask = new TryCatchFinally() {

    @Override
    protected void doTry() throws Throwable {
        // some nested try/catch in @Async methods here
    }

    @Override
    protected void doCatch(final Throwable e) throws Throwable {
        // all fatal exceptions
        if (e instanceof GameDayExecutionFailureException) {
            // doSomething
        } else if (e instanceof CancellationException) {
            // doSomething
        } else if (e instanceof Exception) {
            // doSomething
        }
    }

    @Override
    protected void doFinally() throws Throwable {
        // doCleanUp from here
    }
};
Then, inside an @Signal method:
executionTask.cancel(null); // tried passing a new RuntimeException here as well
The problem is that, after all child activity tasks have been cancelled and doFinally() of the executionTask has run, the workflow does not exit gracefully; instead it throws a further CancellationException, which leads to the following stack trace:
com.amazonaws.services.simpleworkflow.flow.DataConverterException: Failure serializing "java.util.concurrent.CancellationException" of type "class java.util.concurrent.CancellationException" when mapping key "null"
at com.amazonaws.services.simpleworkflow.flow.JsonDataConverter.throwDataConverterException(JsonDataConverter.java:90)
at com.amazonaws.services.simpleworkflow.flow.JsonDataConverter.toData(JsonDataConverter.java:78)
at com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition.throwWorkflowException(POJOWorkflowDefinition.java:177)
at com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition.access$300(POJOWorkflowDefinition.java:30)
at com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition$1.doCatch(POJOWorkflowDefinition.java:93)
at --- continuation ---.(:0)
at com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition.execute(POJOWorkflowDefinition.java:67)
at com.amazonaws.services.simpleworkflow.flow.worker.AsyncDecider$WorkflowExecuteAsyncScope.doAsync(AsyncDecider.java:68)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion (StackOverflowError) (through reference chain: com.fasterxml.jackson.databind.ObjectMapper["factory"]->
com.fasterxml.jackson.databind.MappingJsonFactory["codec"]->
com.fasterxml.jackson.databind.ObjectMapper["factory"]->
com.fasterxml.jackson.databind.MappingJsonFactory["codec"]->
com.fasterxml.jackson.databind.ObjectMapper["factory"]->
com.fasterxml.jackson.databind.MappingJsonFactory["codec"]->
com.fasterxml.jackson.databind.ObjectMapper["factory"]->
com.fasterxml.jackson.databind.MappingJsonFactory["codec"]->
com.fasterxml.jackson.databind.ObjectMapper["factory"]->
com.fasterxml.jackson.databind.MappingJsonFactory["codec"]->
..
I checked, and most of the fields of the CancellationException instance (message, cause, etc.) are null.
But shouldn't the exception be consumed by doCatch()? Why is it being thrown again?
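In case it helps narrow this down, below is a minimal sketch (not a confirmed fix; the wrapper class, method name, and the Runnable hand-off are mine, not from the question) of wrapping the workflow body in an outer TryCatch, so that anything escaping the inner TryCatchFinally is intercepted before POJOWorkflowDefinition.throwWorkflowException() asks the JsonDataConverter to serialize it:

import java.util.concurrent.CancellationException;

import com.amazonaws.services.simpleworkflow.flow.core.TryCatch;

public class GuardedWorkflowBody {

    // call this from the @Execute method; runExecution stands in for the code
    // that builds the executionTask TryCatchFinally shown above
    public void executeGuarded(final Runnable runExecution) {
        new TryCatch() {
            @Override
            protected void doTry() throws Throwable {
                runExecution.run();
            }

            @Override
            protected void doCatch(Throwable e) throws Throwable {
                if (e instanceof CancellationException) {
                    // an escaped cancellation stops here instead of reaching the
                    // framework, so the data converter never has to serialize it
                    return;
                }
                // let anything else fail the workflow, wrapped with a plain message
                throw new RuntimeException("workflow execution failed: " + e.getMessage(), e);
            }
        };
    }
}

If the second CancellationException still shows up with this in place, that would at least confirm it is being raised outside the executionTask scope rather than rethrown from its doCatch().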

Related

How can I prevent a method call when JUnit testing with Mockito?

For the life of me, I can't seem to figure out how to prevent the method I'm testing from calling a method in another class.
Here is my test class:
@ExtendWith(MockitoExtension.class)
class QueryHandlerTest {

    @InjectMocks
    QueryHandler queryHandler;

    @Mock
    ResponseBuilder responseBuilder;

    @BeforeEach
    void setUp() {
        MockitoAnnotations.openMocks(this);
    }

    @Test
    void TC5() {
        doThrow(AddMessageResponseException.class).when(responseBuilder).addMessageResponse(isA(Boolean.class), isA(Boolean.class));

        assertThrows(AddMessageResponseException.class, () -> queryHandler.addMessage("Hello", true));
    }
}
Here is the method that I'm testing:
public void addMessage(String message, boolean lengthExceedsLimit) {
    boolean messageAdded;

    if (checkIfJarExists()) {
        if (!lengthExceedsLimit) {
            // attempt to add the message to the jar
            messageAdded = addMessageQuery(new Message(event.getMessageAuthor().getIdAsString(), message));
        } else {
            messageAdded = false;
        }
    } else {
        messageAdded = false;
    }

    responseBuilder.addMessageResponse(messageAdded, lengthExceedsLimit);

    if (messageAdded) {
        // check to see if the jar's message limit has been reached; if so, perform opening ceremony
        if (checkMessageLimit()) {
            responseBuilder.performOpeningEvent(currentJar);
            deleteJarQuery(this.serverId);
        }
    }
}
And here is the method that it's calling:
public void addMessageResponse(boolean messageAdded, boolean lengthExceedsLimit) {
    if (lengthExceedsLimit) {
        event.getChannel().sendMessage("I'm sorry, your message is too long. Please limit your message " +
                "to 250 characters or less.");
    } else if (messageAdded) {
        String nickname = getNickname();
        event.getChannel().sendMessage("Thanks, " + nickname + "! Your message has " +
                "been added to the jar!");
    } else {
        event.getChannel().sendMessage("Sorry, it looks like a jar has not been set up for your server. " +
                "If you're a server admin, you can create a jar! " +
                "Please use '!tiko help' to see a list of my commands.");
    }
}
When I run the test, I get this output:
org.opentest4j.AssertionFailedError: Unexpected exception type thrown,
Expected :class com.tikoJar.exceptions.AddMessageResponseException
Actual :class java.lang.NullPointerException
<Click to see difference>
...
Caused by: java.lang.NullPointerException: Cannot invoke "org.javacord.api.entity.channel.TextChannel.sendMessage(String)" because the return value of "org.javacord.api.event.message.MessageCreateEvent.getChannel()" is null
at com.tikoJar.DTO.ResponseBuilder.addMessageResponse(ResponseBuilder.java:35)
at com.tikoJar.DTO.QueryHandler.addMessage(QueryHandler.java:74)
at com.tikoJar.DTO.QueryHandlerTest.lambda$TC5$0(QueryHandlerTest.java:68)
at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:53)
... 73 more
As you can see, the method I'm testing is calling and running the addMessageResponse() method in the ResponseBuilder class, even though I specified in my test that a custom exception should be thrown when attempting to call that method.
I've also tried specifying:
doNothing().when(responseBuilder).addMessageResponse(isA(Boolean.class), isA(Boolean.class));
... but the method still gets called and run. What can I do here?
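For comparison, here is a minimal sketch of the same test relying on MockitoExtension alone: the manual openMocks(this) call is dropped (the extension already initializes the annotations) and anyBoolean() is used for the primitive parameters. It assumes QueryHandler receives its ResponseBuilder through constructor or field injection; if addMessage() (or the QueryHandler constructor) creates its own ResponseBuilder, the mock is never used and the real addMessageResponse() runs, which would explain the NullPointerException you see.

import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.ArgumentMatchers.anyBoolean;
import static org.mockito.Mockito.doThrow;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class QueryHandlerTest {

    @Mock
    ResponseBuilder responseBuilder;

    // injection only helps if QueryHandler actually takes a ResponseBuilder as a
    // dependency; a ResponseBuilder created inside addMessage() bypasses this mock
    @InjectMocks
    QueryHandler queryHandler;

    @Test
    void TC5() {
        // stub the mocked collaborator to throw instead of running its real code
        doThrow(AddMessageResponseException.class)
                .when(responseBuilder).addMessageResponse(anyBoolean(), anyBoolean());

        assertThrows(AddMessageResponseException.class,
                () -> queryHandler.addMessage("Hello", true));
    }
}

A quick way to check the injection assumption is to add a verify(responseBuilder).addMessageResponse(anyBoolean(), anyBoolean()) to a passing test and see whether Mockito reports the interaction at all.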

How to apply DebuggerHiddenAttribute to dependent/cascaded methods

I'm trying to ignore an exception that happens when I run my test methods. I'm using a unit-test project.
The problem appears when the TestCleanup method runs. It's a recursive method that cleans up all entities created in the DB during the test; it's recursive because of dependencies between entities.
This method calls the generic delete method of my ORM (PetaPoco), which throws an exception if it can't delete an entity. That in itself is not a problem: the cleanup simply runs again recursively until everything is deleted.
The actual problem is that, while debugging, Visual Studio stops many times in the Execute method because of the failed deletes, and I can't modify that method to ignore them. I need a way to suppress these stops while debugging tests, something like DebuggerHiddenAttribute.
Thanks!
I tried DebuggerHiddenAttribute, but it doesn't work for methods called by the attributed method.
[TestCleanup(), DebuggerHidden]
public void CleanData()
{
    ErrorDlt = new Dictionary<Guid, object>();

    foreach (var entity in TestEntity.CreatedEnt)
    {
        try
        {
            CallingTest(entity);
        }
        catch (Exception e)
        {
            if (!ErrorDlt.ContainsKey(entity.Key))
                ErrorDlt.Add(entity.Key, entity.Value);
        }
    }

    if (ErrorDlt.Count > 0)
    {
        TestEntity.CreatedEnt = new Dictionary<Guid, object>();
        ErrorDlt.ForEach(x => TestEntity.CreatedEnt.Add(x.Key, x.Value));
        CleanData();
    }
}
public int Execute(string sql, params object[] args)
{
    try
    {
        OpenSharedConnection();
        try
        {
            using (var cmd = CreateCommand(_sharedConnection, sql, args))
            {
                var retv = cmd.ExecuteNonQuery();
                OnExecutedCommand(cmd);
                return retv;
            }
        }
        finally
        {
            CloseSharedConnection();
        }
    }
    catch (Exception x)
    {
        OnException(x);
        throw new DatabaseException(x.Message, LastSQL, LastArgs);
    }
}
Error messages are not required.

SolrJ - NPE when accessing SolrCloud

I'm running the following test code against SolrCloud using the SolrJ library:
public static void main(String[] args) {
    String zkHostString = "192.168.56.99:2181";
    SolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();

    List<MyBean> beans = new ArrayList<>();
    for (int i = 0; i < 10000; i++) {
        // creating a bunch of MyBean to be indexed
        // and temporarily storing them in a List
        // no Solr operations performed here
    }

    System.out.println("Adding...");
    try {
        solr.addBeans("myCollection", beans);
    } catch (IOException | SolrServerException e1) {
        // TODO Auto-generated catch block
        e1.printStackTrace();
    }

    System.out.println("Committing...");
    try {
        solr.commit("myCollection");
    } catch (SolrServerException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
This code fails with the following exception:
Exception in thread "main" java.lang.NullPointerException
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1175)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:357)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:312)
at com.togather.solr.testing.SolrIndexingTest.main(SolrIndexingTest.java:83)
This is the full stacktrace of the exception. I just "upgraded" from a Solr standalone installation to a SolrCloud (with an external Zookeeper single instance, not the embedded one). With standalone Solr the same code (with just some minor differences, like the host URL) used to work perfectly.
The NPE points somewhere inside the SolrJ library, which I'm not familiar with.
Can anyone help me understand where the problem originates and how I can overcome it? Due to my inexperience and the brevity of the error message, I can't figure out where to start investigating.
Looking at your code, I would suggest specifying the default collection first:
CloudSolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
solr.setDefaultCollection("myCollection");
Regarding the NPE you're experiencing, it is very likely due to a network error.
In these lines, your exception is raised by the for loop: for (DocCollection ext : requestedCollections)
if (wasCommError) {
    // it was a communication error. it is likely that
    // the node to which the request to be sent is down . So , expire the state
    // so that the next attempt would fetch the fresh state
    // just re-read state for all of them, if it has not been retired
    // in retryExpiryTime time
    for (DocCollection ext : requestedCollections) {
        ExpiringCachedDocCollection cacheEntry = collectionStateCache.get(ext.getName());
        if (cacheEntry == null) continue;
        cacheEntry.maybeStale = true;
    }
    if (retryCount < MAX_STALE_RETRIES) {//if it is a communication error , we must try again
        //may be, we have a stale version of the collection state
        // and we could not get any information from the server
        //it is probably not worth trying again and again because
        // the state would not have been updated
        return requestWithRetryOnStaleState(request, retryCount + 1, collection);
    }
}
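Putting the two suggestions above together, a minimal end-to-end setup might look like the sketch below. This is only a sketch under my own assumptions: the chroot hint and the explicit connect() call are guesses at the likely network/cluster-state problem, not a confirmed diagnosis, and the trivial MyBean here is just a stand-in for the question's bean class.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.beans.Field;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class SolrIndexingSketch {

    public static class MyBean {
        @Field
        public String id;

        public MyBean(String id) {
            this.id = id;
        }
    }

    public static void main(String[] args) throws SolrServerException, IOException {
        // include the chroot if your cluster state lives under one, e.g. "192.168.56.99:2181/solr"
        String zkHostString = "192.168.56.99:2181";

        CloudSolrClient solr = new CloudSolrClient.Builder()
                .withZkHost(zkHostString)
                .build();
        solr.setDefaultCollection("myCollection");

        // connect() talks to Zookeeper up front, so an unreachable ensemble or
        // missing cluster state fails here rather than deep inside a request
        solr.connect();

        List<MyBean> beans = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            beans.add(new MyBean("doc-" + i));
        }

        System.out.println("Adding...");
        solr.addBeans(beans); // the default collection set above is used

        System.out.println("Committing...");
        solr.commit();

        solr.close();
    }
}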

EMF model transaction

In our project we use an implementation of HL7 documents from openehealth. This implementation uses EMF as its underlying model and delegates all calls to EMF. We need to handle a large volume of documents, and our flows involve concurrent processing of documents (read, validate, query). In a concurrent environment the EMF layer crashes with UnsupportedOperationException. The openehealth site says to handle synchronization in the client API, but this would decrease our system's performance and we don't want that. I tried the EMF Transaction API (TransactionalEditingDomain), which claims to support read-only model transactions, but without success. My test looks something like this:
ExecutorService executorService = Executors.newFixedThreadPool(4);
final List<ClinicalDocument> documents = new ArrayList<ClinicalDocument>();

for (int i = 0; i < 100; i++) {
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            try {
                int randomNum = 1 + (int) (Math.random() * 6);
                ClinicalDocument cda = readCda();
                processIntensiveWork(cda);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}
private void processIntensiveWork(final ClinicalDocument document) {
    for (final Method method : document.getClass().getMethods()) {
        if (method.getName().startsWith("get")) {
            try {
                domain.runExclusive(new RunnableWithResult.Impl() {
                    @Override
                    public void run() {
                        try {
                            method.invoke(document);
                            System.out.println("Invoked method: " + method.getName());
                            setResult(null);
                        } catch (UnsupportedOperationException e) {
                            e.printStackTrace();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
For this test case we frequently caught java.lang.UnsupportedOperationException.
I should mention that for some test cases I also caught the following error from the EMF Transaction API: java.lang.IllegalArgumentException: Can only deactivate the active transaction.
Any suggestions are kindly appreciated. Feel free to ask for any other information that might help in resolving the problem.
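One possible workaround, sketched below under the assumption that the UnsupportedOperationException comes from several threads touching the same shared EMF objects (adapter lists, lazily created feature lists), is to give each worker its own deep copy via EcoreUtil.copy and keep the original single-threaded. Whether copying is cheap enough for your document volume is something you would have to measure; the class and method names here are mine, and the ClinicalDocument is only referred to generically as an EObject.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.util.EcoreUtil;

public class PerThreadCopyProcessor {

    private final ExecutorService executorService = Executors.newFixedThreadPool(4);

    // 'original' would be the parsed ClinicalDocument; copies are made on the
    // submitting thread so the original is never read concurrently
    public void submitWork(EObject original, int jobs) {
        for (int i = 0; i < jobs; i++) {
            final EObject privateCopy = EcoreUtil.copy(original);
            executorService.submit(new Runnable() {
                @Override
                public void run() {
                    // each task reads, validates and queries only its own copy,
                    // so no EMF internal structures are shared between threads
                    process(privateCopy);
                }
            });
        }
    }

    private void process(EObject document) {
        // read / validate / query the private copy here
    }
}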

How to handle exceptions from StorageFile::OpenAsync when URI is bad

I have a section of code that correctly loads images from http URIs when the URIs are valid, but I cannot figure out how to catch the exception OpenAsync throws when the URI is invalid (results in a 404).
The problem is that the exception is thrown when the lambda which contains the call to OpenAsync exits; it is not thrown while inside the try/catch block.
The question is:
What is the correct way to catch the exception thrown by StorageFile::OpenAsync?
auto bm = ref new BitmapImage();
try {
    Uri^ uri = ref new Uri("http://invaliduri.tumblr.com/avatar/128");
    auto task = Concurrency::create_task(CreateStreamedFileFromUriAsync("temp-web-file.png", uri, nullptr));
    task.then([] (StorageFile^ file) {
        try {
            return file->OpenAsync(FileAccessMode::Read);
        } catch (...) {
            // this does not catch the exception because the exception
            // occurs after this lambda is exited
        }
    }).then([bm](IRandomAccessStream^ inputStream) {
        try {
            return bm->SetSourceAsync(inputStream);
        } catch (...) {
            // this does not catch the exception because the exception
            // occurs before this lambda is entered
        }
    });
} catch (...) {
    // and obviously this would not catch the exception
}
I had this same question 3 years later. I referenced this article; my scenario was then solved as follows:
#include <ppltasks.h>
...
auto uri = ref new Windows::Foundation::Uri("ms-appx:///SomeFile.txt");
concurrency::create_task(Windows::Storage::StorageFile::GetFileFromApplicationUriAsync(uri))
    .then([](Windows::Storage::StorageFile^ f) {
        return Windows::Storage::FileIO::ReadTextAsync(f);
    })
    .then([this](String^ s) {
        this->someFileContent = s;
    })
    .then([](concurrency::task<void> t) {
        try {
            t.get();
        } catch (Platform::COMException^ e) {
            OutputDebugString(e->Message->Data());
        }
    });
This async task chain may fail in GetFileFromApplicationUriAsync or in ReadTextAsync, throwing an exception. The key is that, when an exception is thrown, the only matching then(...) prototype is the final one. On entering its try block, task::get re-throws the exception that the concurrency classes caught on your behalf.
task.then([] (StorageFile^ file) { // this is where the exception is actually thrown
The exception is most likely thrown on this line because, in order to pass the StorageFile into the lambda, the .then is doing an implicit get() on the task. You're using what is called a "value continuation", whereas you probably want a "task continuation" and should check for exceptions there.
auto task = Concurrency::create_task(CreateStreamedFileFromUriAsync("temp-web-file.png", uri, nullptr));
task.then([] (concurrency::task<StorageFile^> fileTask) {
    StorageFile^ file;
    try
    {
        file = fileTask.get(); // this is what actually throws if Uri is wrong
        create_task(file->OpenAsync(FileAccessMode::Read)).then(/* ... */);
    }
    catch (...)
    {
        // nothing to do here
    }
});