Unknown thread spawns, ignoring the filter chain and failing in the async decorator - amazon-web-services

I am currently facing a strange issue that I am not able to reproduce locally, but which happens regularly in AWS ECS and causes the application to crash or run slowly.
We have a Spring Boot application which extracts the tenant from the incoming GraphQL request and stores it in a ThreadLocal instance.
To support DataLoader from GraphQL Java Kickstart, we propagate the tenant to each child thread used by the GraphQL DataLoader. The tenant is mandatory to determine the database schema.
The executor
@Bean
@Override
public Executor getAsyncExecutor() {
    log.info("Configuring async executor for multi tenancy...");
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(15);
    executor.setThreadNamePrefix("tenant-child-executor-");
    // Important part: set the MultiTenancyAsyncTaskDecorator to propagate the current tenant to child threads
    executor.setTaskDecorator(new MultiTenancyAsyncTaskDecorator());
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    executor.setWaitForTasksToCompleteOnShutdown(true);
    log.info("Executor configured successfully!");
    executor.initialize();
    return executor;
}
Task Decorator
@NonNull
@Override
public Runnable decorate(@NonNull Runnable runnable) {
    if (Objects.isNull(CurrentTenantContext.getTenant())) {
        log.warn("Current tenant is null while decorating a new thread!");
    }
    final TenantIdentifier parentThreadTenantIdentifier =
            Objects.isNull(CurrentTenantContext.getTenant()) ? TenantIdentifier.asSystem() : CurrentTenantContext.getTenant();
    // Also copy the MDC context map, as it is bound to the current thread
    final Map<String, String> parentContextMap = MDC.getCopyOfContextMap();
    final var requestAttributes = RequestContextHolder.getRequestAttributes();
    return () -> {
        try {
            CurrentTenantContext.setTenant(TenantIdentifier.of(parentThreadTenantIdentifier.getTenantName()));
            if (Objects.isNull(requestAttributes)) {
                log.warn("RequestAttributes are not available!");
                log.warn("Running on tenant: {}", parentThreadTenantIdentifier.getTenantName());
            } else {
                RequestContextHolder.setRequestAttributes(requestAttributes, true);
            }
            if (Objects.isNull(parentContextMap)) {
                log.warn("Parent context map not available!");
                log.warn("Running on tenant: {}", parentThreadTenantIdentifier.getTenantName());
            } else {
                MDC.setContextMap(parentContextMap);
            }
            runnable.run();
        } finally {
            // Executed after the task finishes or when it throws
            RequestContextHolder.resetRequestAttributes();
            CurrentTenantContext.clear();
            MDC.clear();
        }
    };
}
Tenant Context
public class CurrentTenantContext {

    private static final ThreadLocal<TenantIdentifier> currentTenant = new ThreadLocal<>();

    private CurrentTenantContext() {
        // Hide constructor to only provide static functionality
    }

    public static TenantIdentifier getTenant() {
        return currentTenant.get();
    }

    public static String getTenantName() {
        return getTenant().getTenantName();
    }

    public static void setTenant(TenantIdentifier tenant) {
        currentTenant.set(tenant);
    }

    public static void clear() {
        currentTenant.remove();
    }

    public static boolean isTenantSet() {
        return Objects.nonNull(currentTenant.get());
    }
}
Locally, this works like a charm, even in a Docker Compose environment with limited resources (CPU and memory) similar to AWS. Even with 100,000 requests (JMeter), everything works as expected.
On AWS, we can easily make the application crash.
After one or two requests containing some child objects for GraphQL to resolve, we see a thread spawn that seems to ignore, or not go through, the decorator chain:
Thread-110 | [sys ] | WARN | MultiTenancyAsyncTaskDecorator | Current tenant is null while decorating a new thread!
An interesting detail in this line is the name of the thread.
Each incoming request follows the pattern http-nio-9100-exec-[N] and each child thread the pattern tenant-child-executor-[I], but this one follows the pattern Thread-[Y].
Now I am wondering where this thread comes from and why the problem is not reproducible locally.

I was able to find the solution to the problem.
I needed to change
private static final ThreadLocal<TenantIdentifier> currentTenant = new ThreadLocal<>();
to
private static final InheritableThreadLocal<TenantIdentifier> currentTenant = new InheritableThreadLocal<>();
But I don't know why it works with InheritableThreadLocal but not with ThreadLocal in the AWS environment.
Furthermore, I wonder why this change was not necessary for local testing, which works both ways.
Maybe somebody can provide some ideas.
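For what it's worth, the difference between the two only matters for threads created outside the decorated executor (such as the Thread-[Y] one in the log, which never passes through the TaskDecorator). A plain ThreadLocal is invisible to a newly spawned child thread, while an InheritableThreadLocal is copied into the child when the thread is constructed. A minimal, self-contained sketch (the tenant name is made up):

public class ThreadLocalInheritanceDemo {

    private static final ThreadLocal<String> plain = new ThreadLocal<>();
    private static final InheritableThreadLocal<String> inheritable = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        plain.set("tenant-a");
        inheritable.set("tenant-a");

        // A raw thread like Thread-110 bypasses any TaskDecorator.
        Thread child = new Thread(() -> {
            System.out.println("plain:       " + plain.get());       // null
            System.out.println("inheritable: " + inheritable.get()); // tenant-a, copied at construction
        });
        child.start();
        child.join();
    }
}

Note that the copy happens at thread construction time, so InheritableThreadLocal does not help with pooled threads created before the tenant was set; a likely explanation for the environment difference is that some library spawns its own threads on AWS (for example under different load or resource limits) where locally everything stays on the decorated executor.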

Related

Spring Boot @Async not working

I expect the uploadImage method to finish once the file is uploaded to AWS, while the scanFile method keeps running asynchronously in the background:
@RestController
public class EmailController {
    @PostMapping("/upload")
    @ResponseStatus(HttpStatus.OK)
    public void uploadImage(@RequestParam MultipartFile photos) {
        awsAPIService.uploadImage(photos);
    }
}
...
@Service
public class AwsAPIService {
    public void uploadImage(MultipartFile file) {
        try {
            File fileToUpload = this.convertMultiPartToFile(file);
            String fileName = this.generateFileName(file);
            s3client.putObject(new PutObjectRequest(AWS_S3_QUARANTINE_BUCKET_NAME, fileName, fileToUpload));
            fileToUpload.delete();
            // start scan file
            scanFile();
        } ...
    }

    @Async
    public void scanFile() {
        log.info("Start scanning");
        String queueUrl = sqs.getQueueUrl("bucket-antivirus").getQueueUrl();
        List<Message> messages = sqs.receiveMessage(new ReceiveMessageRequest().withQueueUrl(queueUrl)
                .withWaitTimeSeconds(20)).getMessages();
        for (Message message : messages) {
            // delete message
            ...
        }
    }
}
...
@EnableAsync
public class AppConfig {
    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(2);
        taskExecutor.setQueueCapacity(200);
        taskExecutor.afterPropertiesSet();
        return taskExecutor;
    }
}
But this still seems to run synchronously. What is the problem here?
By default, @Async and other Spring method-level annotations like @Transactional work only on external, bean-to-bean method calls. An internal method call from uploadImage() to scanFile() in the same bean won't trigger the proxy implementing the Spring behaviour. As per the Spring docs:
In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation, in effect, a method within the target object calling another method of the target object, will not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional. Also, the proxy must be fully initialized to provide the expected behaviour so you should not rely on this feature in your initialization code, i.e. @PostConstruct.
You could configure AspectJ to enable annotations on internal method calls, but it's usually easier to refactor the code.
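For illustration, the usual refactor is to move the @Async method into a separate bean so the call crosses a proxy boundary. A minimal sketch under that assumption (FileScanService is a made-up name; the method bodies are elided as in the question):

@Service
public class FileScanService {

    @Async
    public void scanFile() {
        // ... poll SQS and process the messages as before ...
    }
}

@Service
public class AwsAPIService {

    private final FileScanService fileScanService;

    public AwsAPIService(FileScanService fileScanService) {
        this.fileScanService = fileScanService;
    }

    public void uploadImage(MultipartFile file) {
        // ... upload to S3 as before ...
        fileScanService.scanFile(); // crosses the proxy boundary, so it runs asynchronously
    }
}

Note also that AppConfig needs @Configuration alongside @EnableAsync for the @Bean method to be picked up.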

Getting SQS dead letter queue to work with Spring Boot and JMS

I've been working on a small Spring Boot application that receives messages from Amazon SQS. However, I foresee that processing these messages may fail, which is why I thought adding a dead letter queue would be a good idea.
There is a problem though: when the processing fails (which I force by throwing an Exception for some of the messages), the message is not reattempted later on and it is not moved to the dead letter queue. I am struggling to find the issue, since there doesn't seem to be much info on it.
However, if I look at Amazon's documentation, they seem to be able to do it, but without using the Spring Boot annotations. Is there any way I can make the code below work transactionally without writing too much of the JMS code myself?
This is the current configuration that I am using.
@Configuration
public class AWSConfiguration {
    @Value("${aws.sqs.endpoint}")
    private String endpoint;
    @Value("${aws.iam.key}")
    private String iamKey;
    @Value("${aws.iam.secret}")
    private String iamSecret;
    @Value("${aws.sqs.queue}")
    private String queue;

    @Bean
    public JmsTemplate createJMSTemplate() {
        JmsTemplate jmsTemplate = new JmsTemplate(getSQSConnectionFactory());
        jmsTemplate.setDefaultDestinationName(queue);
        jmsTemplate.setDeliveryPersistent(true);
        jmsTemplate.setDeliveryMode(DeliveryMode.PERSISTENT);
        return jmsTemplate;
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(getSQSConnectionFactory());
        factory.setConcurrency("1-1");
        return factory;
    }

    @Bean
    public JmsTransactionManager jmsTransactionManager() {
        return new JmsTransactionManager(getSQSConnectionFactory());
    }

    @Bean
    public ConnectionFactory getSQSConnectionFactory() {
        return SQSConnectionFactory.builder()
                .withAWSCredentialsProvider(awsCredentialsProvider)
                .withEndpoint(endpoint)
                .withNumberOfMessagesToPrefetch(10).build();
    }

    private final AWSCredentialsProvider awsCredentialsProvider = new AWSCredentialsProvider() {
        @Override
        public AWSCredentials getCredentials() {
            return new BasicAWSCredentials(iamKey, iamSecret);
        }

        @Override
        public void refresh() {
        }
    };
}
And finally the receiving end:
@Service
public class QueueReceiver {
    private static final String EXPERIMENTAL_QUEUE = "${aws.sqs.queue}";

    @JmsListener(destination = EXPERIMENTAL_QUEUE)
    public void receiveSegment(String jsonSegment) throws IOException {
        Segment segment = Segment.fromJSON(jsonSegment);
        if (segment.shouldFail()) {
            throw new IOException("This segment is expected to fail");
        }
        System.out.println(segment.getText());
    }
}
Spring Cloud AWS
You can greatly simplify your configuration by leveraging Spring Cloud AWS.
MessageHandler
@Service
public class MessageHandler {

    @SqsListener(value = "test-queue", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
    public void queueListener(String msg, Acknowledgment acknowledgment) {
        System.out.println("message: " + msg);
        if (/* successful */) {
            acknowledgment.acknowledge();
        }
    }
}
The example shown above is all you need to receive messages. It assumes you've created an SQS queue with an associated dead letter queue. If your messages aren't acknowledged, they will be retried until they reach the maximum number of receives, and then they will be forwarded to the dead letter queue.
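As a pointer (not part of the original answer), the dead letter queue is attached through the queue's RedrivePolicy attribute on the AWS side, not through Spring. A rough sketch using the AWS SDK for Java v1, where the queue name, region, account ID, and maxReceiveCount are all made-up values and the DLQ is assumed to exist already:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.CreateQueueRequest;

public class QueueSetup {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // Hypothetical ARN of an already-created dead letter queue.
        String dlqArn = "arn:aws:sqs:eu-central-1:123456789012:test-queue-dlq";

        // After 5 failed receives, SQS moves the message to the DLQ.
        sqs.createQueue(new CreateQueueRequest("test-queue")
                .addAttributesEntry("RedrivePolicy",
                        "{\"maxReceiveCount\":\"5\",\"deadLetterTargetArn\":\"" + dlqArn + "\"}"));
    }
}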

Task chaining in JavaFX8: Start next Task after onSucceeded finished on previous task

I'm rather new to JavaFX 8 and facing the following problem. In my current app, which is for document processing/editing, I have two rather expensive tasks: opening a document and saving a document.
My app has the buttons "import next", "export current" and "export current and import next". For import and export, I have two Tasks of the following structure:
private class Export extends Task<Void> {
    public Export() {
        this.setOnRunning(event -> {
            // do stuff (change cursor etc.)
        });
        this.setOnFailed(event -> {
            // do stuff, e.g. show error box
        });
        this.setOnSucceeded(event -> {
            // do stuff
        });
    }

    @Override
    protected Void call() throws Exception {
        // do expensive stuff
        return null;
    }
}
I submit the tasks using Executors.newSingleThreadExecutor().
For the "export current and import next" functionality, my goal is to submit the Export and Import tasks to the executor, but the Import task should only run if the Export task was successful and the EventHandler given in setOnSucceeded (which runs on the GUI thread) has finished. If the export fails, it does not make any sense to load the next document, because user interaction is needed. How can this be achieved?
First I tried to put the entire logic/error handling in the call method, but this does not work, as I cannot change the GUI from this method (i.e. to show an error box).
As a workaround, I'm manually submitting the import task on the last line of my setOnSucceeded in the export task, but this is not very flexible, because I want to be sure this task can also export only (without a subsequent import)...
Don't call the handler property methods setOnXXX in your Task subclass constructor. These actually set a property on the task, so if you also call those methods from elsewhere you will replace the functionality you're implementing in the class itself, rather than add to it.
Instead, override the protected convenience methods:
public class Export extends Task<Void> {
    @Override
    protected void succeeded() {
        super.succeeded();
        // do stuff...
    }

    @Override
    protected void running() {
        super.running();
        // do stuff...
    }

    @Override
    protected void failed() {
        super.failed();
        // do stuff...
    }

    @Override
    protected Void call() {
        // do expensive stuff....
        return null;
    }
}
Now you can safely use setOnXXX(...) externally to the Export class without breaking its functionality:
Export export = new Export();
export.setOnSucceeded(e -> {
    // "import" is a reserved word in Java, so the task needs another variable name
    Import importTask = new Import();
    executor.submit(importTask);
});
executor.submit(export);
This puts the logic for chaining the tasks at the point where you actually create them, which would seem to be the correct place to do it.
Note that another way to provide multiple handlers for the change of state is to register listeners with the stateProperty():
Export export = new Export();
export.stateProperty().addListener((obs, oldState, newState) -> {
    if (newState == Worker.State.SUCCEEDED) {
        // ...
    }
});
From testing, it appears the order of execution of these different mechanisms is:
state listeners
the onSucceeded handler
the Task.succeeded method
All are executed on the FX Application Thread.
So if you want the code in the Task subclass to be executed before the handler added externally, do
public class Export extends Task<Void> {
    public Export() {
        stateProperty().addListener((obs, oldState, newState) -> {
            if (newState == Worker.State.RUNNING) {
                // do stuff
            } else if (newState == Worker.State.SUCCEEDED) {
                // do stuff
            } else if (newState == Worker.State.FAILED) {
                // do stuff
            }
        });
    }

    @Override
    public Void call() {
        // ...
    }
}
Finally, you could implement the entire logic in your call method: if you need to interact with the UI, you can wrap those calls in Platform.runLater(() -> { ... }). However, separating the functionality into different tasks as you have done is probably cleaner anyway.
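A minimal sketch of that single-task variant, assuming hypothetical doExport()/doImport() helpers and a statusLabel that lives on the scene graph:

private class ExportThenImport extends Task<Void> {
    @Override
    protected Void call() throws Exception {
        doExport(); // expensive work, runs on the background thread

        // Any UI access must be wrapped to run on the FX Application Thread.
        Platform.runLater(() -> statusLabel.setText("Export finished, importing..."));

        doImport(); // only reached if doExport() did not throw
        return null;
    }
}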

Action Composition in Play 2 Java

I'm currently developing a web service with Play 2 and I have a problem with action composition.
Here is one of the methods available in my web service:
@Authenticated(Secured.class)
@BodyParser.Of(BodyParser.Json.class)
public static Result createObject() {
    try {
        JsonNode json = request().body().asJson();
        // Retrieve user from request
        User user;
        try {
            user = getUserFromRequest();
        } catch (BeanNotFoundException e) {
            return badRequest(Messages.get("userNotFound"));
        }
        // Retrieve owner from user
        Owner owner;
        try {
            owner = getOwnerFromUser(user);
        } catch (BeanNotFoundException e) {
            return badRequest(Messages.get("ownerNotFound"));
        }
        // Create the object
        // Here is the code using the User and Owner previously found
    } catch (BeanValidationException e) {
        return badRequest(JsonUtils.beanValidationMessagesToJson(e));
    }
}
The problem is that I have to repeat the code to retrieve the user and the owner in each method of my web service.
How can I use action composition to do that, given that I'm calling these methods in the middle of my main action?
I read the documentation at http://www.playframework.com/documentation/2.1.1/JavaActionsComposition, but I don't understand how to change the behavior of the action with a simple annotation.
Thank you
There are examples of Play Java action composition here:
https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/play-java/app/controllers/Application.java
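As a rough sketch of the idea (my own illustration, not taken from the linked examples): following the Play 2.1 Java API, a composed action can resolve the user once, stash it in Http.Context.args, and short-circuit with badRequest on failure. ResolveUser and the lookup helper are made-up names:

public class ResolveUser extends play.mvc.Action.Simple {
    @Override
    public Result call(Http.Context ctx) throws Throwable {
        try {
            // Hypothetical helper that extracts the user from the request.
            User user = getUserFromRequest(ctx.request());
            ctx.args.put("user", user); // make it available to the wrapped action
        } catch (BeanNotFoundException e) {
            return play.mvc.Results.badRequest(Messages.get("userNotFound"));
        }
        return delegate.call(ctx); // continue with the real action
    }
}

The controller method is then annotated with @With(ResolveUser.class) and reads the user from the context instead of resolving it itself:

@With(ResolveUser.class)
public static Result createObject() {
    User user = (User) Http.Context.current().args.get("user");
    // ... create the object using the resolved user ...
}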

Calling CMT bean from BMT bean results in "Local transaction already has 1 non-XA Resource"

I have one EJB with bean-managed transactions:
@Singleton
@TransactionManagement(TransactionManagementType.BEAN)
public class BmtBean {
    @Resource
    private DataSource ds1;
    @Resource
    private SessionContext sessionCtx;
    @EJB
    private CmtBean cmtBean;

    public void callCmtBean() {
        Connection conn1 = null;
        try {
            conn1 = ds1.getConnection();
            // create a PreparedStatement and execute a query
            // process result set
            while (resultSet.next()) {
                // map resultSet to an entity
                Entity entity = mapResultSetToEntity(resultSet);
                sessionCtx.getUserTransaction().begin();
                // pass the entity to another EJB
                // that operates on a different JTA data source
                cmtBean.call(entity);
                sessionCtx.getUserTransaction().commit();
            }
        } finally {
            // release connection
        }
    }
}
And another bean with container-managed transaction:
@Singleton
@TransactionManagement(TransactionManagementType.CONTAINER)
public class CmtBean {
    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.MANDATORY)
    public void call(Entity entities) {
        // persist passed entities
        // em.flush();
        // em.clear();
    }
}
Calling cmtBean#call doesn't cause a TransactionRequiredException, because prior to that I start a UserTransaction. But when em#flush is called, this exception is thrown:
Caused by: javax.resource.spi.ResourceAllocationException: Error in allocating a connection. Cause: java.lang.IllegalStateException: Local transaction already has 1 non-XA Resource: cannot add more resources.
After digging through some EclipseLink code, I see that when em#flush() is called, it attempts to obtain a new connection from the data source and fails to do so.
Is this a bug or expected behaviour? How can I fix this?
UPDATE:
See the updated code example.
Also, I have to stress that I do use two non-XA JTA data sources. But since the connection in BmtBean is set to autocommit by default, by the time CmtBean is called, the transaction should already have been committed.
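A plausible reading of the error message (my interpretation, not from this thread): conn1 is still open when the UserTransaction begins, so it is enlisted as the first non-XA resource, and the EntityManager in CmtBean then cannot enlist a second one. A sketch of one possible restructuring, assuming the result set fits in memory, that closes the first connection before any transaction starts:

public void callCmtBean() throws Exception {
    List<Entity> entities = new ArrayList<>();

    // Read everything and close the first non-XA connection
    // before any transaction is started.
    try (Connection conn1 = ds1.getConnection();
         PreparedStatement ps = conn1.prepareStatement("SELECT ..."); // query elided as in the question
         ResultSet resultSet = ps.executeQuery()) {
        while (resultSet.next()) {
            entities.add(mapResultSetToEntity(resultSet));
        }
    }

    // Each transaction now only enlists the CMT bean's data source.
    for (Entity entity : entities) {
        sessionCtx.getUserTransaction().begin();
        cmtBean.call(entity);
        sessionCtx.getUserTransaction().commit();
    }
}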