Spring Boot @Async not working - amazon-web-services

I expect the uploadImage method to finish once the file is uploaded to AWS, while the scanFile method keeps running asynchronously in the background:
@RestController
public class EmailController {

    @PostMapping("/upload")
    @ResponseStatus(HttpStatus.OK)
    public void uploadImage(@RequestParam MultipartFile photos) {
        awsAPIService.uploadImage(photos);
    }
}
...
@Service
public class AwsAPIService {

    public void uploadImage(MultipartFile file) {
        try {
            File fileToUpload = this.convertMultiPartToFile(file);
            String fileName = this.generateFileName(file);
            s3client.putObject(new PutObjectRequest(AWS_S3_QUARANTINE_BUCKET_NAME, fileName, fileToUpload));
            fileToUpload.delete();
            // start scan file
            scanFile();
        } ...
    }

    @Async
    public void scanFile() {
        log.info("Start scanning");
        String queueUrl = sqs.getQueueUrl("bucket-antivirus").getQueueUrl();
        List<Message> messages = sqs.receiveMessage(new ReceiveMessageRequest().withQueueUrl(queueUrl)
                .withWaitTimeSeconds(20)).getMessages();
        for (Message message : messages) {
            // delete message
            ...
        }
    }
}
...
@EnableAsync
public class AppConfig {

    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(2);
        taskExecutor.setQueueCapacity(200);
        taskExecutor.afterPropertiesSet();
        return taskExecutor;
    }
}
But this still seems to run synchronously. What is the problem here?

By default, @Async and other Spring method-level annotations such as @Transactional work only on external, bean-to-bean method calls. An internal call from uploadImage() to scanFile() in the same bean won't go through the proxy that implements the Spring behaviour. As per the Spring docs:
In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation, in effect, a method within the target object calling another method of the target object, will not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional. Also, the proxy must be fully initialized to provide the expected behaviour so you should not rely on this feature in your initialization code, i.e. @PostConstruct.
You could configure AspectJ to make these annotations work on internal method calls, but it's usually easier to refactor the code, for example as sketched below.
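One common refactoring, shown here purely as an illustration (FileScanService is a made-up name, and the S3/SQS details are assumed to stay as in the question), is to move scanFile() into its own bean and inject it, so the call from uploadImage() crosses a bean boundary and goes through the async proxy:

@Service
public class FileScanService {

    // Runs on the TaskExecutor because the call now comes in through the Spring proxy.
    @Async
    public void scanFile() {
        log.info("Start scanning");
        // ... poll the SQS queue and handle messages, as in the original scanFile()
    }
}

@Service
public class AwsAPIService {

    private final FileScanService fileScanService;

    public AwsAPIService(FileScanService fileScanService) {
        this.fileScanService = fileScanService;
    }

    public void uploadImage(MultipartFile file) {
        // ... convert and upload to S3 as before ...
        fileScanService.scanFile(); // external, bean-to-bean call: intercepted and run asynchronously
    }
}

With this arrangement uploadImage() returns as soon as the upload is done, while scanFile() continues on the executor thread.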

Related

Unknown thread spawns which ignores the filter chain and fails on async decorator

I am currently facing a strange issue that I am not able to reproduce locally, but which happens regularly in AWS ECS, causing the application to crash or run slowly.
We have a Spring Boot application which extracts the tenant from the incoming GraphQL request and stores it in a ThreadLocal instance.
To support DataLoader from GraphQL Java Kickstart, we propagate the tenant to each child thread used by the GraphQL DataLoader. The tenant is mandatory to determine the database schema.
The executor
@Bean
@Override
public Executor getAsyncExecutor() {
    log.info("Configuring async executor for multi tenancy...");
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(15);
    executor.setThreadNamePrefix("tenant-child-executor-");
    // Important part: Set the MultiTenancyTaskDecorator to populate current tenant to child thread
    executor.setTaskDecorator(new MultiTenancyAsyncTaskDecorator());
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    executor.setWaitForTasksToCompleteOnShutdown(true);
    log.info("Executor configured successfully!");
    executor.initialize();
    return executor;
}
Task Decorator
@NonNull
@Override
public Runnable decorate(@NonNull Runnable runnable) {
    if (Objects.isNull(CurrentTenantContext.getTenant())) {
        log.warn("Current tenant is null while decorating a new thread!");
    }
    final TenantIdentifier parentThreadTenantIdentifier = Objects.isNull(CurrentTenantContext.getTenant()) ? TenantIdentifier.asSystem() : CurrentTenantContext.getTenant();
    // Also need to get the MDC context map as it is bound to the current local thread
    final Map<String, String> parentContextMap = MDC.getCopyOfContextMap();
    final var requestAttributes = RequestContextHolder.getRequestAttributes();
    return () -> {
        try {
            CurrentTenantContext.setTenant(TenantIdentifier.of(parentThreadTenantIdentifier.getTenantName()));
            if (Objects.isNull(requestAttributes)) {
                log.warn("RequestAttributes are not available!");
                log.warn("Running on tenant: {}", parentThreadTenantIdentifier.getTenantName());
            } else {
                RequestContextHolder.setRequestAttributes(requestAttributes, true);
            }
            if (Objects.isNull(parentContextMap)) {
                log.warn("Parent context map not available!");
                log.warn("Running on tenant: {}", parentThreadTenantIdentifier.getTenantName());
            } else {
                MDC.setContextMap(parentContextMap);
            }
            runnable.run();
        } finally {
            // Will be executed after thread finished or on exception
            RequestContextHolder.resetRequestAttributes();
            CurrentTenantContext.clear();
            MDC.clear();
        }
    };
}
Tenant Context
public class CurrentTenantContext {

    private static final ThreadLocal<TenantIdentifier> currentTenant = new ThreadLocal<>();

    private CurrentTenantContext() {
        // Hide constructor to only provide static functionality
    }

    public static TenantIdentifier getTenant() {
        return currentTenant.get();
    }

    public static String getTenantName() {
        return getTenant().getTenantName();
    }

    public static void setTenant(TenantIdentifier tenant) {
        currentTenant.set(tenant);
    }

    public static void clear() {
        currentTenant.remove();
    }

    public static boolean isTenantSet() {
        return Objects.nonNull(currentTenant.get());
    }
}
Locally, this works like a charm, even in a Docker Compose environment with limited resources (CPU and memory) similar to AWS. Even with 100,000 requests (JMeter) everything works as expected.
On AWS we can easily make the application crash.
After one or two requests containing some child objects for GraphQL to resolve, we see a thread being spawned that seems to ignore, or not go through, the filter chain:
Thread-110 | [sys ] | WARN | MultiTenancyAsyncTaskDecorator | Current tenant is null while decorating a new thread!
An interesting thing in this line is the name of the thread.
Each incoming request has the pattern http-nio-9100-exec-[N] and each child thread the pattern tenant-child-executor-[I], but this one has the pattern Thread-[Y].
Now I am wondering where this thread is coming from and why is it not reproducible locally.
I was able to find the solution to the problem.
I needed to change
private static final ThreadLocal<TenantIdentifier> currentTenant = new ThreadLocal<>();
to
private static final InheritableThreadLocal<TenantIdentifier> currentTenant = new InheritableThreadLocal<>();
But I don't know why it works with InheritableThreadLocal but not with ThreadLocal within the AWS environment.
Further, I wonder why this change was not necessary for local testing, which works either way.
Maybe somebody can provide some ideas.
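Not an answer to the AWS-specific part, but here is a minimal sketch of the general difference (this example is not from the original post): an InheritableThreadLocal copies the parent's value into a thread at the moment that thread is constructed, whereas a plain ThreadLocal starts empty in every new thread. That would explain why a directly spawned thread such as Thread-110 only sees the tenant with the inheritable variant; note that reused pool threads do not pick up later changes either way.

public class InheritableThreadLocalDemo {

    private static final ThreadLocal<String> plain = new ThreadLocal<>();
    private static final InheritableThreadLocal<String> inheritable = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        plain.set("tenant-a");
        inheritable.set("tenant-a");

        Thread child = new Thread(() -> {
            System.out.println("plain:       " + plain.get());       // prints: null
            System.out.println("inheritable: " + inheritable.get()); // prints: tenant-a
        });
        child.start();
        child.join();
        // The copy happens only once, at thread construction time, so values set
        // by the parent afterwards are not visible to the child.
    }
}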

Mockito mock(clazz, delegatesTo(..)) vs anonymous class

I'm working on a gRPC client for the server.
In the gRPC repo the advice is to mock a service in the following manner:
private final GreeterGrpc.GreeterImplBase serviceImpl =
        mock(GreeterGrpc.GreeterImplBase.class, delegatesTo(
                new GreeterGrpc.GreeterImplBase() {
                    // By default the client will receive Status.UNIMPLEMENTED for all RPCs.
                    // You might need to implement necessary behaviors for your test here, like this:
                    //
                    // @Override
                    // public void sayHello(HelloRequest request, StreamObserver<HelloReply> respObserver) {
                    //     respObserver.onNext(HelloReply.getDefaultInstance());
                    //     respObserver.onCompleted();
                    // }
                }));
https://github.com/grpc/grpc-java/blob/master/examples/src/test/java/io/grpc/examples/helloworld/HelloWorldClientTest.java
I wonder what would change if I just replaced
mock(GreeterGrpc.GreeterImplBase.class, delegatesTo(
with an anonymous class like this:
private final GreeterGrpc.GreeterImplBase serviceImpl =
        new GreeterGrpc.GreeterImplBase() {
            // By default the client will receive Status.UNIMPLEMENTED for all RPCs.
            // You might need to implement necessary behaviors for your test here, like this:
            //
            // @Override
            // public void sayHello(HelloRequest request, StreamObserver<HelloReply> respObserver) {
            //     respObserver.onNext(HelloReply.getDefaultInstance());
            //     respObserver.onCompleted();
            // }
        };
I don't see any benefit Mockito can offer here, as all calls are delegated to the delegate anyway.
Is that correct, or am I missing something?
You will lose the ability to use Mockito to verify that your service was interacted with in some specific way. E.g. the "verify(serviceImpl)" call you can see in HelloWorldClientTest would not work.
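For illustration, here is roughly what that looks like in the linked HelloWorldClientTest (client, serviceImpl and the Hello* types come from that example, so treat the names as assumptions): verify() and ArgumentCaptor only work because serviceImpl is a Mockito mock.

// Fragment in the style of HelloWorldClientTest; assumes the usual Mockito and JUnit imports
// (org.mockito.ArgumentCaptor, org.mockito.ArgumentMatchers, static org.mockito.Mockito.verify,
// static org.junit.Assert.assertEquals).
@Test
public void greet_sendsExpectedRequest() {
    ArgumentCaptor<HelloRequest> requestCaptor = ArgumentCaptor.forClass(HelloRequest.class);

    client.greet("test name");

    // Possible only with mock(..., delegatesTo(...)); a plain anonymous subclass
    // cannot be handed to verify().
    verify(serviceImpl).sayHello(requestCaptor.capture(), ArgumentMatchers.<StreamObserver<HelloReply>>any());
    assertEquals("test name", requestCaptor.getValue().getName());
}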

Getting SQS dead letter queue to work with Spring Boot and JMS

I've been working on a small Spring Boot application that receives messages from Amazon SQS. However, I foresee that processing these messages may fail, which is why I thought adding a dead letter queue would be a good idea.
There is a problem though: when the processing fails (which I force by throwing an Exception for some of the messages), the message is not reattempted later on and it's not moved to the dead letter queue. I am struggling to find the issue, since there doesn't seem to be much info on it.
However, if I look at Amazon's documentation, they seem to be able to do it, but without using the Spring Boot annotations. Is there any way I can make the code below work transactionally without writing too much of the JMS code myself?
This is the current configuration that I am using.
@Configuration
public class AWSConfiguration {

    @Value("${aws.sqs.endpoint}")
    private String endpoint;

    @Value("${aws.iam.key}")
    private String iamKey;

    @Value("${aws.iam.secret}")
    private String iamSecret;

    @Value("${aws.sqs.queue}")
    private String queue;

    @Bean
    public JmsTemplate createJMSTemplate() {
        JmsTemplate jmsTemplate = new JmsTemplate(getSQSConnectionFactory());
        jmsTemplate.setDefaultDestinationName(queue);
        jmsTemplate.setDeliveryPersistent(true);
        jmsTemplate.setDeliveryMode(DeliveryMode.PERSISTENT);
        return jmsTemplate;
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(getSQSConnectionFactory());
        factory.setConcurrency("1-1");
        return factory;
    }

    @Bean
    public JmsTransactionManager jmsTransactionManager() {
        return new JmsTransactionManager(getSQSConnectionFactory());
    }

    @Bean
    public ConnectionFactory getSQSConnectionFactory() {
        return SQSConnectionFactory.builder()
                .withAWSCredentialsProvider(awsCredentialsProvider)
                .withEndpoint(endpoint)
                .withNumberOfMessagesToPrefetch(10).build();
    }

    private final AWSCredentialsProvider awsCredentialsProvider = new AWSCredentialsProvider() {
        @Override
        public AWSCredentials getCredentials() {
            return new BasicAWSCredentials(iamKey, iamSecret);
        }

        @Override
        public void refresh() {
        }
    };
}
And finally the receiving end:
@Service
public class QueueReceiver {

    private static final String EXPERIMENTAL_QUEUE = "${aws.sqs.queue}";

    @JmsListener(destination = EXPERIMENTAL_QUEUE)
    public void receiveSegment(String jsonSegment) throws IOException {
        Segment segment = Segment.fromJSON(jsonSegment);
        if (segment.shouldFail()) {
            throw new IOException("This segment is expected to fail");
        }
        System.out.println(segment.getText());
    }
}
Spring Cloud AWS
You can greatly simplify your configuration by leveraging Spring Cloud AWS.
MessageHandler
@Service
public class MessageHandler {

    @SqsListener(value = "test-queue", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
    public void queueListener(String msg, Acknowledgment acknowledgment) {
        System.out.println("message: " + msg);
        if (/*successful*/) {
            acknowledgment.acknowledge();
        }
    }
}
The example shown above is all you need to receive messages. This assumes you've created an SQS queue with an associated dead letter queue. If your messages aren't acknowledged, they will be retried until they reach the maximum number of receives, after which they will be forwarded to the dead letter queue.
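If the redrive policy isn't configured yet, that is the piece that makes SQS move messages to the dead letter queue after the maximum number of receives. Below is a minimal sketch using the AWS SDK for Java v1 (the queue names are illustrative; the same configuration can be done in the SQS console):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

import java.util.Collections;

public class DeadLetterQueueSetup {

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        String sourceQueueUrl = sqs.getQueueUrl("test-queue").getQueueUrl();
        String dlqUrl = sqs.getQueueUrl("test-queue-dlq").getQueueUrl();

        // Look up the ARN of the dead letter queue.
        String dlqArn = sqs.getQueueAttributes(
                new GetQueueAttributesRequest(dlqUrl).withAttributeNames("QueueArn"))
                .getAttributes().get("QueueArn");

        // After 3 unsuccessful receives, SQS moves the message to the dead letter queue.
        String redrivePolicy = String.format(
                "{\"maxReceiveCount\":\"3\",\"deadLetterTargetArn\":\"%s\"}", dlqArn);

        sqs.setQueueAttributes(new SetQueueAttributesRequest(
                sourceQueueUrl, Collections.singletonMap("RedrivePolicy", redrivePolicy)));
    }
}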

Eclipse Scout client unit tests with ScoutClientTestRunner

I am trying to create a unit test with the Scout context and I can't find a proper tutorial or example for it.
When I create a test with ScoutClientTestRunner, I get this error:
java.lang.Exception: Client session class is not set. Either set the default client session using 'ScoutClientTestRunner.setDefaultClientSessionClass' or annotate your test class and/or method with 'ClientTest'
I tried to set the client session class like this:
@Before
public void setClassSession() throws Exception {
    ScoutClientTestRunner.setDefaultClientSessionClass(ClientSession.class);
}
and
@BeforeClass
public void setClassSession() throws Exception {
    ScoutClientTestRunner.setDefaultClientSessionClass(ClientSession.class);
}
I tried to add @ClientTest to the class and to all methods, but I still get the same error.
How do you set the client session in tests if you use ScoutClientTestRunner?
The ScoutClientTestRunner ensures that the JUnit tests are executed with the whole Scout context (OSGi and so on) available.
Your attempts with @Before or @BeforeClass are too late. You need to provide the Scout context initialization parameters before that. As the exception message says, you have two possibilities:
(1) @ClientTest annotation
You can annotate test classes or methods with @ClientTest using the clientSessionClass parameter:
@RunWith(ScoutClientTestRunner.class)
@ClientTest(clientSessionClass = ClientSession.class)
public class DesktopFormTest {

    @Test
    public void test1() throws Exception {
        // Do something requiring a scout context:
        // for example instantiate a DesktopForm.
    }
}
If necessary you can also do it at method level:
@RunWith(ScoutClientTestRunner.class)
public class DesktopFormTest {

    @Test
    @ClientTest(clientSessionClass = Client1Session.class)
    public void test1() throws Exception {
        // client session is an instance of Client1Session.
    }

    @Test
    @ClientTest(clientSessionClass = Client2Session.class)
    public void test2() throws Exception {
        // client session is an instance of Client2Session.
    }
}
(2) Defining a TestEnvironment
When the test is run (directly or with maven-tycho), the runner looks up a class with the fully qualified name org.eclipse.scout.testing.client.runner.CustomClientTestEnvironment.
The CustomClientTestEnvironment class should implement org.eclipse.scout.testing.client.runner.IClientTestEnvironment.
The method setupGlobalEnvironment() is called once and can be used to define the default client session with ScoutClientTestRunner.setDefaultClientSessionClass(..). This method can also be used to register required services.
Here is an example:
package org.eclipse.scout.testing.client.runner; // <= can not be changed.

// add imports

public class CustomClientTestEnvironment implements IClientTestEnvironment {

    @Override
    public void setupGlobalEnvironment() {
        // Set client session:
        ScoutClientTestRunner.setDefaultClientSessionClass(ClientSession.class);
    }

    @Override
    public void setupInstanceEnvironment() {
    }
}
Of course (1) and (2) are compatible: the second mechanism only defines the default, and a ClientSession configured with (1) will override it.

How to unit test an interceptor?

I want to write some unit tests for an interceptor that intercepts the Loggable base class (which implements ILoggable).
The Loggable base class has no methods to call and it is used only to be initialized by the logging facility.
To my understanding I should:
Mock an ILoggable and an ILogger
Initialize the logging facility
Register my interceptor on it
Invoke some method of the mocked ILoggable
The problem is that my ILoggable interface has no methods to call and thus nothing will be intercepted.
What is the right way to act here?
Should I mock ILoggable manually and add a stub method to call?
Also, should I be mocking the container as well?
I am using Moq and NUnit.
EDIT:
Here's my interceptor implementation for reference:
public class LoggingWithDebugInterceptor : IInterceptor
{
    #region IInterceptor Members

    public void Intercept(IInvocation invocation)
    {
        var invocationLogMessage = new InvocationLogMessage(invocation);
        ILoggable loggable = invocation.InvocationTarget as ILoggable;

        if (loggable == null)
            throw new InterceptionFailureException(invocation, string.Format("Class {0} does not implement ILoggable.", invocationLogMessage.InvocationSource));

        loggable.Logger.DebugFormat("Method {0} called with arguments {1}", invocationLogMessage.InvokedMethod, invocationLogMessage.Arguments);

        Stopwatch stopwatch = new Stopwatch();
        try
        {
            stopwatch.Start();
            invocation.Proceed();
            stopwatch.Stop();
        }
        catch (Exception e)
        {
            loggable.Logger.ErrorFormat(e, "An exception occured in {0} while calling method {1} with arguments {2}", invocationLogMessage.InvocationSource, invocationLogMessage.InvokedMethod, invocationLogMessage.Arguments);
            throw;
        }
        finally
        {
            loggable.Logger.DebugFormat("Method {0} returned with value {1} and took exactly {2} to run.", invocationLogMessage.InvokedMethod, invocation.ReturnValue, stopwatch.Elapsed);
        }
    }

    #endregion IInterceptor Members
}
If it's just the interceptor that uses the Logger property on your class, then why have it there at all? You might just as well have it on the interceptor (like Ayende explained in his post here).
Other than that, the interceptor is just a class which interacts with an interface; it's all highly testable.
I agree with Krzysztof: if you're looking to add logging through AOP, the responsibility and implementation details of logging should be transparent to the caller, so it's something the interceptor can own. I'll try to outline how I would test this.
If I follow the question correctly, your ILoggable is really just a naming container to annotate the class so that the interceptor can determine whether it should perform logging. It exposes a property that contains the Logger. (The downside to this is that the class still needs to configure the Logger.)
public interface ILoggable
{
    ILogger Logger { get; set; }
}
Testing the interceptor should be a straightforward process. The only challenge I see is how to manually construct the IInvocation input parameter so that it resembles runtime data. Rather than trying to reproduce this through mocks, etc., I would suggest you test it using classic state-based verification: create a proxy that uses your interceptor and verify that your log reflects what you expect.
This might seem like a bit more work, but it provides a really good example of how the interceptor works independently from other parts of your code-base. Other developers on your team benefit from this as they can reference this example as a learning tool.
public class TypeThatSupportsLogging : ILoggable
{
    public ILogger Logger { get; set; }

    public virtual void MethodToIntercept()
    {
    }

    public void MethodWithoutLogging()
    {
    }
}
public class TestLogger : ILogger
{
    private StringBuilder _output;

    public TestLogger()
    {
        _output = new StringBuilder();
    }

    public void DebugFormat(string message, params object[] args)
    {
        _output.AppendFormat(message, args);
    }

    public string Output
    {
        get { return _output.ToString(); }
    }
}
[TestFixture]
public class LoggingWithDebugInterceptorTests
{
    protected TypeThatSupportsLogging Input;
    protected LoggingWithDebugInterceptor Subject;
    protected ILogger Log;

    [SetUp]
    public void Setup()
    {
        // create your interceptor
        Subject = new LoggingWithDebugInterceptor();

        // create your proxy
        var generator = new Castle.DynamicProxy.ProxyGenerator();
        Input = generator.CreateClassProxy<TypeThatSupportsLogging>(Subject);

        // setup the logger
        Log = new TestLogger();
        Input.Logger = Log;
    }

    [Test]
    public void DemonstrateThatTheInterceptorLogsInformationAboutVirtualMethods()
    {
        // act
        Input.MethodToIntercept();

        // assert
        StringAssert.Contains("MethodToIntercept", Log.Output);
    }

    [Test]
    public void DemonstrateNonVirtualMethodsAreNotLogged()
    {
        // act
        Input.MethodWithoutLogging();

        // assert
        Assert.AreEqual(String.Empty, Log.Output);
    }
}
No methods? What are you testing?
Personally, this sounds like it goes too far. I realize that TDD and code coverage are dogma, but if you mock an interface with no methods and prove that the mocking framework does what you instructed it to do, what have you really proven?
There's another misdirection going on here: logging is the "hello world" of aspect oriented programming. Why aren't you doing logging in an interceptor/aspect? If you did it that way, there'd be no reason for all your classes to implement ILoggable; you could decorate them with logging capability declaratively. I think it's a less invasive design and a better use of interceptors.