I have a requirement where a Lambda function polls messages from an SQS event source. I have nested loops and need to write unit tests for them.
My code looks something like this:
public Void handleRequest(final SQSEvent sqsEvent, final Context context) {
    for (SQSMessage msg : sqsEvent.getRecords()) {
        List<ClassA> classAList = ClassB.getClassAList(msg.getBody());
        for (ClassA item : classAList) {
            var1 = ClassC.getVar(item.getAttr1());
            item.setAttr2(var1);
            ClassB.update(classAList);
        }
    }
    return null;
}
How do I test if the functions "getVar" and "update" are being called?
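Since getClassAList, getVar and update are static calls, one option is Mockito's static mocking (MockedStatic, available via mockito-inline, Mockito 3.4+). Below is a minimal sketch under some assumptions that are not in the snippet above: the handler class is called Handler, getClassAList takes the message body as a String, and the any()/anyList() matchers fit the real parameter types, so adjust as needed.
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyList;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;

import java.util.Collections;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import org.junit.jupiter.api.Test;
import org.mockito.MockedStatic;
import org.mockito.Mockito;

public class HandlerTest {

    @Test
    public void handleRequest_callsGetVarAndUpdate() {
        // build a minimal SQS event with a single message
        SQSEvent.SQSMessage msg = new SQSEvent.SQSMessage();
        msg.setBody("{\"some\":\"payload\"}");
        SQSEvent sqsEvent = new SQSEvent();
        sqsEvent.setRecords(Collections.singletonList(msg));

        ClassA item = mock(ClassA.class);

        try (MockedStatic<ClassB> classB = Mockito.mockStatic(ClassB.class);
             MockedStatic<ClassC> classC = Mockito.mockStatic(ClassC.class)) {

            classB.when(() -> ClassB.getClassAList(anyString()))
                  .thenReturn(Collections.singletonList(item));

            // "Handler" is an assumed name for the class containing handleRequest
            new Handler().handleRequest(sqsEvent, mock(Context.class));

            // verify the static calls were made inside the nested loops
            classC.verify(() -> ClassC.getVar(any()));
            classB.verify(() -> ClassB.update(anyList()));
        }
    }
}
The static mocks are scoped to the try-with-resources block, so the real ClassB/ClassC behaviour is restored once the test finishes.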
I'm trying to unit test an API call made with Retrofit and RxJava.
In order to do that I'm mocking the API call object, but the API call's subscriber never triggers its onNext method.
ApiCallsTest.java:
// custom object replacing the API call response object
Observable<FastRechargeClass[]> apiObservable = Observable.just(fastRechargeList);
InterfaceAPI api = mock(InterfaceAPI.class);
when(retrofitApi.getApiInterface(context)).thenReturn(api);
when(api.getLatestTransactions("token")).thenReturn(apiObservable);

apiCalls.getLatestTransactions("token", context);
ApiCalls.java:
public void getLatestTransactions(String token, final Context context) {
    String methodName = "getLatestTransactions";
    InterfaceAPI api = retrofitApi.getApiInterface(context);
    Observable<FastRechargeClass[]> call = api.getLatestTransactions(token);
    call.observeOn(AndroidSchedulers.mainThread())
        .subscribeOn(Schedulers.io())
        .subscribe(new Observer<FastRechargeClass[]>() {
            @Override
            public void onSubscribe(Disposable d) {
                WriteLog.print("onSubscribe");
            }

            @Override
            public void onNext(FastRechargeClass[] fastRechargeClasses) {
                fastRechargeManager.runUpdateFastRechargeDb(fastRechargeClasses);
            }

            @Override
            public void onError(Throwable e) {
                logOnFailureRequests(methodName, e.getMessage());
            }

            @Override
            public void onComplete() {
            }
        });
}
When running the test, onSubscribe is called and then execution just stops.
You need to trigger the emission manually. To do this, call
.blockingFirst()
or
.blockingGet()
depending on the reactive type you are using (blockingFirst() for an Observable, blockingGet() for a Single or Maybe).
So you have to add
call.blockingFirst()
at the end of the getLatestTransactions method, or the method should return the created Observable and the blocking call should be made inside the test method.
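A rough sketch of the second option (returning the Observable and blocking inside the test), reusing the mocks and fields from ApiCallsTest.java above. One assumption on my part, not from the question: AndroidSchedulers.mainThread() is not available in a plain JVM unit test, so the returned chain should either omit the observeOn/subscribeOn operators or swap them for Schedulers.trampoline() when under test.
@Test
public void getLatestTransactions_emitsMockedResponse() {
    Observable<FastRechargeClass[]> apiObservable = Observable.just(fastRechargeList);
    InterfaceAPI api = mock(InterfaceAPI.class);
    when(retrofitApi.getApiInterface(context)).thenReturn(api);
    when(api.getLatestTransactions("token")).thenReturn(apiObservable);

    // blockingFirst() subscribes and waits for the first emission,
    // so onNext is actually driven from the test thread
    FastRechargeClass[] result =
            apiCalls.getLatestTransactions("token", context).blockingFirst();

    assertArrayEquals(fastRechargeList, result);
}
This assumes getLatestTransactions is changed to return Observable<FastRechargeClass[]> instead of void.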
I expect the uploadImage method to return once the file is uploaded to AWS, while the scanFile method keeps running asynchronously in the background:
@RestController
public class EmailController {

    @PostMapping("/upload")
    @ResponseStatus(HttpStatus.OK)
    public void uploadImage(@RequestParam MultipartFile photos) {
        awsAPIService.uploadImage(photos);
    }
}
...
@Service
public class AwsAPIService {

    public void uploadImage(MultipartFile file) {
        try {
            File fileToUpload = this.convertMultiPartToFile(file);
            String fileName = this.generateFileName(file);
            s3client.putObject(new PutObjectRequest(AWS_S3_QUARANTINE_BUCKET_NAME, fileName, fileToUpload));
            fileToUpload.delete();
            // start scan file
            scanFile();
        } ...
    }

    @Async
    public void scanFile() {
        log.info("Start scanning");
        String queueUrl = sqs.getQueueUrl("bucket-antivirus").getQueueUrl();
        List<Message> messages = sqs.receiveMessage(new ReceiveMessageRequest().withQueueUrl(queueUrl)
                .withWaitTimeSeconds(20)).getMessages();
        for (Message message : messages) {
            // delete message
            ...
        }
    }
}
...
@EnableAsync
public class AppConfig {

    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(2);
        taskExecutor.setQueueCapacity(200);
        taskExecutor.afterPropertiesSet();
        return taskExecutor;
    }
}
But this still seems to run synchronously. What is the problem here?
By default, @Async and other Spring method-level annotations such as @Transactional work only on external, bean-to-bean method calls. An internal call from uploadImage() to scanFile() within the same bean won't go through the proxy that implements the Spring behaviour. As per the Spring docs:
In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation, in effect, a method within the target object calling another method of the target object, will not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional. Also, the proxy must be fully initialized to provide the expected behaviour so you should not rely on this feature in your initialization code, i.e. @PostConstruct.
You could configure AspectJ so that these annotations also work on self-invocations, but it's usually easier to refactor the code, e.g. by moving scanFile() into a separate bean, as sketched below.
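A minimal sketch of that refactoring; FileScanService is an illustrative name that is not in the original code, and each class lives in its own file:
// FileScanService.java -- @Async now sits on a separate bean
@Service
public class FileScanService {

    @Async
    public void scanFile() {
        // the SQS polling logic moves here unchanged
    }
}

// AwsAPIService.java
@Service
public class AwsAPIService {

    private final FileScanService fileScanService;

    public AwsAPIService(FileScanService fileScanService) {
        this.fileScanService = fileScanService;
    }

    public void uploadImage(MultipartFile file) {
        // ... upload to S3 as before ...
        // this call now crosses a bean boundary, so it goes through the async proxy
        fileScanService.scanFile();
    }
}
Also double-check that AppConfig carries @Configuration next to @EnableAsync, and keep the @Async method public: the proxy only intercepts public methods invoked from another bean.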
I have 2 Lambda functions: my first function, which we'll call "PostStep", invokes another Lambda function, "RetryStep", asynchronously whenever a timeout occurs. It has been working fine ever since. Now I have to make some code changes, and during testing a weird issue occurred.
Upon timeout, PostStep still calls the RetryStep function asynchronously, but the function is not actually invoked. When I invoke the PostStep function again, I noticed that only then is the RetryStep function really invoked, but with the previous request's data.
Here's how I'm doing the invocation:
LivePostingService.class
@Override
public void postTransaction(Transaction transaction) {
    ... some posting logic ...
    conditionalCallRetryLambdaFunction(transaction);
}

private void conditionalCallRetryLambdaFunction(Transaction transaction) {
    try {
        String payload = objectMapper.writeValueAsString(transaction);
        lambdaInvokerService.invokeAsync(lambdaRetryFunctionName, payload);
    } catch (JsonProcessingException e) {
        if (LOGGER.isErrorEnabled()) {
            LOGGER.error(e.getMessage(), e);
        }
    }
}
LambdaInvokerService.class
@Service
public class LambdaInvokerServiceImpl implements LambdaInvokerService {

    @Override
    public void invokeAsync(String functionName, String payload) {
        LOGGER.info("Calling lambda function: " + functionName + ", with payload: " + payload);
        InvokeRequest req = new InvokeRequest()
                .withFunctionName(functionName)
                .withInvocationType(InvocationType.Event)
                .withPayload(payload);

        AsyncHandler<InvokeRequest, InvokeResult> asyncHandler = new AsyncHandler<InvokeRequest, InvokeResult>() {
            @Override
            public void onError(Exception e) {
                LOGGER.error(e.getMessage());
            }

            @Override
            public void onSuccess(InvokeRequest request, InvokeResult invokeResult) {
                LOGGER.info("Success! " + invokeResult);
            }
        };

        lambdaAsyncClient.invokeAsync(req, asyncHandler);
    }
}
Here's my handler:
@Override
public Void handleRequest(DynamodbEvent dynamodbEvent, Context context) {
    // ... transaction is built from the DynamoDB stream records (omitted) ...
    livePostingService.postTransaction(transaction);
    return null;
}
As you can see from the code, the "Calling lambda function..." log appears at the end of the PostStep invocation, but the "Success!" log only appears at the beginning of the PostStep function when I invoke it again.
It's the same code as what's currently on our production. I even did a git checkout of our master branch and ran it on dev, but the same issue still occurs. Do you have any idea what's going on?
Thanks!
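Not an answer from the original thread, but one detail worth checking given the AWS SDK async client used here: lambdaAsyncClient.invokeAsync(req, asyncHandler) returns a java.util.concurrent.Future<InvokeResult> and performs the HTTP call on a background thread. If handleRequest returns before that call completes, Lambda freezes the execution environment, and the pending request only goes out when the next invocation thaws the container, which would match the log ordering described above. A sketch of waiting for the hand-off inside invokeAsync (the exception handling is illustrative; requires java.util.concurrent.Future and java.util.concurrent.ExecutionException):
// Block until the Event invocation has been accepted by the Lambda service,
// so the handler doesn't return while the HTTP call is still in flight.
Future<InvokeResult> future = lambdaAsyncClient.invokeAsync(req, asyncHandler);
try {
    InvokeResult result = future.get();
    LOGGER.info("Accepted with status code: " + result.getStatusCode()); // 202 for InvocationType.Event
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    LOGGER.error("Interrupted while invoking " + functionName, e);
} catch (ExecutionException e) {
    LOGGER.error("Failed to invoke " + functionName, e.getCause());
}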
For example, I have this event bus from the documentation:
import akka.actor.ActorRef
import akka.event.EventBus
import akka.event.LookupClassification

final case class MsgEnvelope(topic: String, payload: Any)

class LookupBusImpl extends EventBus with LookupClassification {
  type Event = MsgEnvelope
  type Classifier = String
  type Subscriber = ActorRef

  override protected def classify(event: Event): Classifier = event.topic

  override protected def publish(event: Event, subscriber: Subscriber): Unit = {
    subscriber ! event.payload
  }

  override protected def compareSubscribers(a: Subscriber, b: Subscriber): Int =
    a.compareTo(b)

  override protected def mapSize: Int = 128
}
If my subscriber (an actor) dies, it will not be removed from the subscribers. Is that correct? Should I unsubscribe in postStop, or is there a better way?
Use the postStop hook; it's the simplest solution that does the job:
override def postStop(): Unit = {
  lookupBus.unsubscribe(self) // lookupBus being the LookupBusImpl instance the actor subscribed to
}
Also take a look at whether you can extend akka.event.ManagedActorClassification, as it already contains an implementation that watches actors and unsubscribes them when they terminate. If extending that trait isn't possible, you can at least get some ideas from its implementation of how to do it.
I'm rather new to JavaFX 8 and facing the following problem. In my current app, which is for document processing/editing, I have two rather expensive tasks: opening a document and saving a document.
My app has the buttons "import next", "export current" and "export current and import next". For import and export, I have two Tasks of the following structure:
private class Export extends Task<Void> {

    public Export() {
        this.setOnRunning(event -> {
            // do stuff (change cursor etc.)
        });
        this.setOnFailed(event -> {
            // do stuff, e.g. show error box
        });
        this.setOnSucceeded(event -> {
            // do stuff
        });
    }

    @Override
    protected Void call() throws Exception {
        // do expensive stuff
        return null;
    }
}
I submit the tasks using Executors.newSingleThreadExecutor().
For the "export current and import next" functionality, my goal is to submit the Export and Import tasks to the executor, but the Import task should only run if the Export task was successful and the EventHandler given in setOnSucceeded (which runs on the GUI thread) has finished. If the export fails, it does not make sense to load the next document, because user interaction is needed. How can this be achieved?
First I tried to put the entire logic/error handling in the call method, but this does not work, as I cannot change the GUI from that method (e.g. to show an error box).
As a workaround, I'm manually submitting the Import task on the last line of setOnSucceeded in the Export task, but this is not very flexible, because sometimes I want this task to export only (without a subsequent import)...
Don't call the handler property methods setOnXXX in your Task subclass constructor. These actually set a property on the task, so if you also call those methods from elsewhere you will replace the functionality you're implementing in the class itself, rather than add to it.
Instead, override the protected convenience methods:
public class Export extends Task<Void> {

    @Override
    protected void succeeded() {
        super.succeeded();
        // do stuff...
    }

    @Override
    protected void running() {
        super.running();
        // do stuff...
    }

    @Override
    protected void failed() {
        super.failed();
        // do stuff...
    }

    @Override
    protected Void call() {
        // do expensive stuff....
        return null;
    }
}
Now you can safely use setOnXXX(...) externally to the Export class without breaking its functionality:
Export export = new Export();
export.setOnSucceeded(e -> {
    Import importTask = new Import(); // note: "import" is a reserved word, so it can't be used as the variable name
    executor.submit(importTask);
});
executor.submit(export);
This puts the logic for chaining the tasks at the point where you actually create them, which would seem to be the correct place to do it.
Note that another way to provide multiple handlers for the change of state is to register listeners with the stateProperty():
Export export = new Export();
export.stateProperty().addListener((obs, oldState, newState) -> {
    if (newState == Worker.State.SUCCEEDED) {
        // ...
    }
});
From testing, it appears the order of execution of these different mechanisms is:
state listeners
the onSucceeded handler
the Task.succeeded method
All are executed on the FX Application Thread.
So if you want the code in the Task subclass to be executed before the handler added externally, do
public class Export extends Task<Void> {

    public Export() {
        stateProperty().addListener((obs, oldState, newState) -> {
            if (newState == Worker.State.RUNNING) {
                // do stuff
            } else if (newState == Worker.State.SUCCEEDED) {
                // do stuff
            } else if (newState == Worker.State.FAILED) {
                // do stuff
            }
        });
    }

    @Override
    public Void call() {
        // ...
        return null;
    }
}
Finally, you could implement the entire logic in your call method: if you need to interact with the UI you can wrap those calls in a Platform.runLater(() -> {});. However, separating the functionality into different tasks as you have done is probably cleaner anyway.
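If you do take that route, here is a minimal sketch of what call() could look like (Platform is javafx.application.Platform; the Alert is just an illustration of a UI interaction and is not part of the original code):
@Override
protected Void call() {
    try {
        // do the expensive export work ...
    } catch (Exception e) {
        // hop back onto the FX Application Thread for any UI interaction
        Platform.runLater(() ->
                new Alert(Alert.AlertType.ERROR, "Export failed: " + e.getMessage())
                        .showAndWait());
        return null; // stop here so no subsequent import is triggered
    }
    // export succeeded: submit the Import task here, or leave the chaining to the caller
    return null;
}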