dynamic container creation in spring-rabbitmq per queue - concurrency

My application has multiple queues (the queue names are read from a database), and each queue will consume a large volume of data daily.
For this purpose, I need one container and message listener to be created per queue, so that each queue gets its own thread. In addition, some queues may be created dynamically, and I need a container to be assigned to each newly created queue.
My Consumer class starts like this:

@Component
public class RequestConsumer implements MessageListener {

and below is the code with which I am creating the message listener container:

@Bean
@Scope(value = "prototype")
public SimpleMessageListenerContainer simpleMessageListenerNotification(
        ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer simpleMessageListenerContainer =
            new SimpleMessageListenerContainer(connectionFactory);
    RabbitAdmin rabbitAdmin = getRabbitAdmin(connectionFactory);
    RequestConsumer requestConsumer = (RequestConsumer) beanFactory.getBean("requestConsumer");
    simpleMessageListenerContainer.setupMessageListener(requestConsumer);
    simpleMessageListenerContainer.setAutoDeclare(true);
    for (String queueName : requestConsumerQueueList()) {
        Queue queue = new Queue(queueName);
        rabbitAdmin.declareQueue(queue);
        simpleMessageListenerContainer.addQueues(queue);
    }
    simpleMessageListenerContainer.start();
    return simpleMessageListenerContainer;
}
My current code creates only one container with one MessageListener for all the queues, whereas I expect a separate container for each queue.

First, you should not be declaring queues in a bean definition - it is too early in the context's lifecycle.
You should also not be calling start() in the bean definition - again, too early.
You should do something like this:
@SpringBootApplication
public class So56951298Application {

    public static void main(String[] args) {
        SpringApplication.run(So56951298Application.class, args);
    }

    @Bean
    public Declarables queues() {
        return new Declarables(Arrays.asList(new Queue("q1"), new Queue("q2")));
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            Queue queue) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueues(queue);
        container.setMessageListener(msg -> System.out.println(msg));
        return container;
    }

    @Bean
    public ApplicationRunner runner(ConnectionFactory connectionFactory, Declarables queues) {
        return args -> {
            queues.getDeclarables().forEach(dec -> container(connectionFactory, (Queue) dec).start());
        };
    }
}
The framework will automatically declare the queues at the right time, as long as there is a RabbitAdmin in the application context (which Spring Boot configures automatically).
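To also cover the queues that are created dynamically at runtime, the same idea can be applied outside of any bean definition: declare the new queue with the RabbitAdmin and pull a fresh prototype container from the context. Below is a minimal sketch assuming the prototype container bean above; DynamicQueueService and addQueue() are illustrative names, not part of the original answer.

// Hypothetical runtime helper: one new prototype container per newly discovered queue
@Component
public class DynamicQueueService {

    private final ApplicationContext context;
    private final RabbitAdmin rabbitAdmin;
    private final ConnectionFactory connectionFactory;

    public DynamicQueueService(ApplicationContext context, RabbitAdmin rabbitAdmin,
            ConnectionFactory connectionFactory) {
        this.context = context;
        this.rabbitAdmin = rabbitAdmin;
        this.connectionFactory = connectionFactory;
    }

    public SimpleMessageListenerContainer addQueue(String queueName) {
        Queue queue = new Queue(queueName);
        rabbitAdmin.declareQueue(queue); // safe at runtime, unlike inside a bean definition
        // prototype scope: each getBean() call yields a fresh container for this queue
        SimpleMessageListenerContainer container =
                context.getBean(SimpleMessageListenerContainer.class, connectionFactory, queue);
        container.start();
        return container;
    }
}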

Related

Unknown thread spawns which ignores the filter chain and fails on async decorator

I am currently facing a strange issue that I am not able to reproduce locally, but that happens regularly in AWS ECS, causing the application to crash or run slowly.
We have a Spring Boot application which extracts the tenant from the incoming GraphQL request and sets it on a ThreadLocal instance.
To support the DataLoader from GraphQL Java Kickstart, we propagate the tenant to each child thread used by the GraphQL DataLoader. The tenant is mandatory to select the database schema.
The executor
@Bean
@Override
public Executor getAsyncExecutor() {
    log.info("Configuring async executor for multi tenancy...");
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(15);
    executor.setThreadNamePrefix("tenant-child-executor-");
    // Important part: set the MultiTenancyAsyncTaskDecorator to propagate the current tenant to child threads
    executor.setTaskDecorator(new MultiTenancyAsyncTaskDecorator());
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    executor.setWaitForTasksToCompleteOnShutdown(true);
    log.info("Executor configured successfully!");
    executor.initialize();
    return executor;
}
Task Decorator
@NonNull
@Override
public Runnable decorate(@NonNull Runnable runnable) {
    if (Objects.isNull(CurrentTenantContext.getTenant())) {
        log.warn("Current tenant is null while decorating a new thread!");
    }
    final TenantIdentifier parentThreadTenantIdentifier =
            Objects.isNull(CurrentTenantContext.getTenant())
                    ? TenantIdentifier.asSystem()
                    : CurrentTenantContext.getTenant();
    // Also need to copy the MDC context map, as it is bound to the current thread
    final Map<String, String> parentContextMap = MDC.getCopyOfContextMap();
    final var requestAttributes = RequestContextHolder.getRequestAttributes();
    return () -> {
        try {
            CurrentTenantContext.setTenant(TenantIdentifier.of(parentThreadTenantIdentifier.getTenantName()));
            if (Objects.isNull(requestAttributes)) {
                log.warn("RequestAttributes are not available!");
                log.warn("Running on tenant: {}", parentThreadTenantIdentifier.getTenantName());
            } else {
                RequestContextHolder.setRequestAttributes(requestAttributes, true);
            }
            if (Objects.isNull(parentContextMap)) {
                log.warn("Parent context map not available!");
                log.warn("Running on tenant: {}", parentThreadTenantIdentifier.getTenantName());
            } else {
                MDC.setContextMap(parentContextMap);
            }
            runnable.run();
        } finally {
            // Executed after the thread finishes or on exception
            RequestContextHolder.resetRequestAttributes();
            CurrentTenantContext.clear();
            MDC.clear();
        }
    };
}
Tenant Context
public class CurrentTenantContext {

    private static final ThreadLocal<TenantIdentifier> currentTenant = new ThreadLocal<>();

    private CurrentTenantContext() {
        // Hide constructor to only provide static functionality
    }

    public static TenantIdentifier getTenant() {
        return currentTenant.get();
    }

    public static String getTenantName() {
        return getTenant().getTenantName();
    }

    public static void setTenant(TenantIdentifier tenant) {
        currentTenant.set(tenant);
    }

    public static void clear() {
        currentTenant.remove();
    }

    public static boolean isTenantSet() {
        return Objects.nonNull(currentTenant.get());
    }
}
Locally, this works like a charm, even in a Docker Compose environment with limited resources (CPU and memory) like in AWS. Even with 100,000 requests (JMeter), everything works as expected.
On AWS, we can easily make the application crash.
After one or two requests containing some child objects to be resolved by GraphQL, we see a thread spawn which seems to ignore, or not go through, the filter chain:
Thread-110 | [sys ] | WARN | MultiTenancyAsyncTaskDecorator | Current tenant is null while decorating a new thread!
An interesting thing in this line is the name of the thread.
Each incoming request has the pattern http-nio-9100-exec-[N] and each child thread the pattern tenant-child-executor-[I], but this one has the pattern Thread-[Y].
Now I am wondering where this thread comes from and why this is not reproducible locally.
I was able to find the solution to the problem.
I needed to change
private static final ThreadLocal<TenantIdentifier> currentTenant = new ThreadLocal<>();
to
private static final InheritableThreadLocal<TenantIdentifier> currentTenant = new InheritableThreadLocal<>();
But I don't know why it works with InheritableThreadLocal but not with ThreadLocal within the AWS environment.
Furthermore, I wonder why this change was not necessary for local testing, which works both ways.
Maybe somebody can provide some ideas.
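For what it's worth, here is a minimal plain-Java sketch (mine, not from the original post) of the semantic difference: an InheritableThreadLocal copies the parent thread's value into any thread the parent constructs, whereas a plain ThreadLocal starts out empty in the child. A raw thread named Thread-[Y] is constructed directly instead of being borrowed from the decorated executor, so only the inheritable value would survive in it.

public class InheritanceDemo {

    private static final ThreadLocal<String> plain = new ThreadLocal<>();
    private static final InheritableThreadLocal<String> inheritable = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        plain.set("tenant-a");
        inheritable.set("tenant-a");

        // A directly constructed child thread never passes through the TaskDecorator
        Thread child = new Thread(() -> {
            System.out.println("plain: " + plain.get());             // null
            System.out.println("inheritable: " + inheritable.get()); // tenant-a
        });
        child.start();
        child.join();
    }
}

Note that the value is captured when the child thread is constructed, which is also why InheritableThreadLocal is unreliable with pooled executors: pool threads are created once and reused, so they keep whatever value was current when the pool spawned them.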

Akka: Can an actor of some class become an actor of a different class?

As a course project, I am trying to implement a (simulation of the) Raft protocol.
In this post, I will not use Raft terminology at all; instead, I will use a simplified one.
The protocol is run by a number of servers (for example, 5) which can be in three different states (A, B, C).
The servers inherit some state variables and behavior from a "base" kind, but they all also have many unique state variables and methods, and respond to different messages.
At some point in the protocol, a server in some state (for example, A) is required to switch to another state (for example, B).
In other words, the server should:
1. Lose the state variables and methods of state A and acquire those of state B, but keep the variables of the "base" kind.
2. Stop responding to messages destined for state A, and start responding to messages destined for state B.
In Akka, Point 2 can be implemented using Receives and become().
Point 1 is needed because, for example, an actor of class B should not have access to the state variables and methods of an actor of class A. This aims at separating concerns and achieving a better code organization.
The issues I am facing in implementing Point 1 are the following:
Right now, my implementation has only one actor class, which contains both A's and B's state variables and methods.
The protocol I am trying to implement requires each server to keep a reference to the others (i.e., the ActorRef of each other server).
I can't simply spawn a new actor in state B, transfer the values of the "base" state variables to it, and stop the old actor, because the newly spawned actor has a new ActorRef; the other servers are in the dark about it and will keep sending messages to the old ActorRef (therefore, the new actor would receive nothing, and both parties would time out).
A way to circumvent the issue would be for the newly spawned actor to "advertise" itself by sending a message to the other actors, including its old ActorRef.
However, again due to the protocol, the other servers may be temporarily unavailable (i.e., crashed), so they might not receive and process the advertisement.
In the project, I must use extensions of AbstractActor rather than FSM (finite state machines), and I have to use Java.
Is there any Akka pattern or functionality that solves this use case? Thank you for any insight. Below is a simplified example.
public abstract class BaseActor extends AbstractActor {

    protected int x = 0;
    // some state variables and methods that make sense for both A and B

    @Override
    public Receive createReceive() {
        return new ReceiveBuilder()
                .matchEquals("x", msg -> {
                    System.out.println(x);
                    x++;
                })
                .build();
    }
}

public class A extends BaseActor {

    protected int a = 10;
    // many other state variables and methods that belong to A and do NOT make sense for B

    @Override
    public Receive createReceive() {
        return new ReceiveBuilder()
                .matchEquals("a", msg -> {
                    System.out.println(a);
                })
                .matchEquals("change", msg -> {
                    // here I want A to become B, but maintain the value of x
                })
                .build()
                .orElse(super.createReceive());
    }
}

public class B extends BaseActor {

    protected int b = 20;
    // many other state variables and methods that belong to B and do NOT make sense for A

    @Override
    public AbstractActor.Receive createReceive() {
        return new ReceiveBuilder()
                .matchEquals("b", msg -> {
                    System.out.println(b);
                })
                .matchEquals("change", msg -> {
                    // here I want B to become A, but maintain the value of x
                })
                .build()
                .orElse(super.createReceive());
    }
}

public class Example {
    public static void main(String[] args) {
        var system = ActorSystem.create("example");
        // actor has class A
        var actor = system.actorOf(Props.create(A.class));
        actor.tell("x", ActorRef.noSender()); // prints "0"
        actor.tell("a", ActorRef.noSender()); // prints "10"
        // here, the actor should become of class B,
        // preserving the value of x, a variable of the "base" kind
        actor.tell("change", ActorRef.noSender());
        // actor has class B
        actor.tell("x", ActorRef.noSender()); // should print "1"
        actor.tell("b", ActorRef.noSender()); // should print "20"
    }
}
This is a sketch of how this could look.
You model each of the states as a separate class:
public class BaseState {
    // base state fields/getters/setters
}

public class StateA {
    BaseState baseState;
    // state A fields/getters/setters
    ..

    // factory methods
    public static StateA fromBase(BaseState baseState) {...}

    // if you need to go from StateB to StateA:
    public static StateA fromStateB(StateB stateB) {...}
}

public class StateB {
    BaseState baseState;
    // state B fields/getters/setters

    // factory methods
    public static StateB fromBase(BaseState baseState) {...}

    // if you need to go from StateA to StateB:
    public static StateB fromStateA(StateA stateA) {...}
}
Then, in your actor, you can have receive functions defined for both A and B, and initialize with A or B depending on which one is the initial state:
private static class MyActor extends AbstractActor {

    private AbstractActor.Receive receive4StateA(StateA stateA) {
        return new ReceiveBuilder()
                .matchEquals("a", msg -> stateA.setSomeProperty(msg))
                .matchEquals("changeToB", msg -> getContext().become(
                        receive4StateB(StateB.fromStateA(stateA))))
                .build();
    }

    private AbstractActor.Receive receive4StateB(StateB stateB) {
        return new ReceiveBuilder()
                .matchEquals("b", msg -> stateB.setSomeProperty(msg))
                .matchEquals("changeToA", msg -> getContext().become(
                        receive4StateA(StateA.fromStateB(stateB))))
                .build();
    }

    // assuming stateA is the initial state
    @Override
    public AbstractActor.Receive createReceive() {
        return receive4StateA(StateA.fromBase(new BaseState()));
    }
}
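One point worth noting, with a hypothetical usage snippet (mine, and it assumes MyActor is made accessible to Props.create, since the sketch above declares it private): because become() swaps behavior inside the same actor instance, the ActorRef never changes, so the references the other servers hold stay valid across state transitions.

var system = ActorSystem.create("example");
var actor = system.actorOf(Props.create(MyActor.class));
actor.tell("a", ActorRef.noSender());         // handled by receive4StateA
actor.tell("changeToB", ActorRef.noSender()); // become() swaps the Receive in place
actor.tell("b", ActorRef.noSender());         // now handled by receive4StateB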
Admittedly, my Java is rusty, but, for example, this actor (or something very much like it) will take in strings until it receives a Lock message, after which it can be queried for how many distinct strings it received before being locked. In its first Receive, it tracks a Set of the strings received in order to dedupe them. On a Lock, it transitions to a second Receive which no longer carries the Set (just an Integer field) and ignores String and Lock messages.
import akka.actor.AbstractActor;
import akka.japi.JavaPartialFunction;
import java.util.HashSet;
import scala.runtime.BoxedUnit;

public class StringCounter extends AbstractActor {

    public StringCounter() {}

    public static class Lock {
        private Lock() {}
        public static final Lock INSTANCE = new Lock();
    }

    public static class Query {
        private Query() {}
        public static final Query INSTANCE = new Query();
    }

    /** The taking-in-strings state */
    public class AcceptingStrings extends JavaPartialFunction<Object, BoxedUnit> {
        private HashSet<String> strings;

        public AcceptingStrings() {
            strings = new HashSet<String>();
        }

        public BoxedUnit apply(Object msg, boolean isCheck) {
            if (msg instanceof String) {
                if (!isCheck) {
                    strings.add((String) msg);
                }
            } else if (msg instanceof Lock) {
                if (!isCheck) {
                    context().become(new Queryable(strings.size()), true);
                }
            } else {
                // not handling any other message
                throw noMatch();
            }
            return BoxedUnit.UNIT;
        }
    }

    /** The responding-to-queries state */
    public class Queryable extends JavaPartialFunction<Object, BoxedUnit> {
        private Integer ans;

        public Queryable(int answer) {
            ans = Integer.valueOf(answer);
        }

        public BoxedUnit apply(Object msg, boolean isCheck) {
            if (msg instanceof Query) {
                if (!isCheck) {
                    getSender().tell(ans, getSelf());
                }
            } else {
                // not handling any other message
                throw noMatch();
            }
            return BoxedUnit.UNIT;
        }
    }

    @Override
    public Receive createReceive() {
        return new Receive(new AcceptingStrings());
    }
}
Note that in Queryable the Set is long gone. One thing to be careful of is that the JavaPartialFunction will typically have apply called once with isCheck set to true, and if that call doesn't throw the exception returned by noMatch(), it will be called again "for real" with isCheck set to false. You therefore need to be careful to do nothing except throw noMatch() or return when isCheck is true.
This pattern is exceptionally similar to what happens in Akka Typed (especially in the functional API) under the hood.
Hopefully this illuminates the approach. There's a chance, of course, that your instructors will not accept this; in that case it might be worth pushing back with the arguments that:
in the actor model state and behavior are effectively the same thing
all the functionality is contained within an AbstractActor
I'd also not necessarily recommend using this approach in everyday Java Akka code (an AbstractActor with state in its fields feels a lot more Java-y).
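A hypothetical usage sketch (mine, not from the answer; it assumes akka.pattern.Patterns and java.time.Duration are imported) showing the two behaviors in sequence:

ActorSystem system = ActorSystem.create("demo");
ActorRef counter = system.actorOf(Props.create(StringCounter.class));

counter.tell("alpha", ActorRef.noSender());
counter.tell("beta", ActorRef.noSender());
counter.tell("alpha", ActorRef.noSender()); // duplicate, deduped by the HashSet
counter.tell(StringCounter.Lock.INSTANCE, ActorRef.noSender()); // switch to Queryable

// After the Lock, only Query messages are handled
Patterns.ask(counter, StringCounter.Query.INSTANCE, Duration.ofSeconds(3))
        .thenAccept(count -> System.out.println("distinct strings: " + count)); // prints 2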

Spring Boot @Async not working

I expect the uploadImage method to finish once the file is uploaded to AWS, while the scanFile method keeps running asynchronously in the background:
@RestController
public class EmailController {

    @PostMapping("/upload")
    @ResponseStatus(HttpStatus.OK)
    public void uploadImage(@RequestParam MultipartFile photos) {
        awsAPIService.uploadImage(photos);
    }
}
...
@Service
public class AwsAPIService {

    public void uploadImage(MultipartFile file) {
        try {
            File fileToUpload = this.convertMultiPartToFile(file);
            String fileName = this.generateFileName(file);
            s3client.putObject(new PutObjectRequest(AWS_S3_QUARANTINE_BUCKET_NAME, fileName, fileToUpload));
            fileToUpload.delete();
            // start scan file
            scanFile();
        } ...
    }

    @Async
    public void scanFile() {
        log.info("Start scanning");
        String queueUrl = sqs.getQueueUrl("bucket-antivirus").getQueueUrl();
        List<Message> messages = sqs.receiveMessage(new ReceiveMessageRequest().withQueueUrl(queueUrl)
                .withWaitTimeSeconds(20)).getMessages();
        for (Message message : messages) {
            // delete message
            ...
        }
    }
}
...
@EnableAsync
public class AppConfig {

    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(2);
        taskExecutor.setQueueCapacity(200);
        taskExecutor.afterPropertiesSet();
        return taskExecutor;
    }
}
But this still seems to run synchronously. What is the problem here?
By default, @Async and other Spring method-level annotations such as @Transactional work only on external, bean-to-bean method calls. An internal call from uploadImage() to scanFile() in the same bean won't go through the proxy that implements the Spring behaviour. As per the Spring docs:
In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation, in effect, a method within the target object calling another method of the target object, will not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional. Also, the proxy must be fully initialized to provide the expected behaviour so you should not rely on this feature in your initialization code, i.e. @PostConstruct.
You could configure AspectJ to enable annotations on internal method calls, but it's usually easier to refactor the code.
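A minimal sketch of such a refactoring (the FileScanService class name is illustrative, not from the original answer): moving scanFile() into its own bean makes the call cross a proxy boundary, so @Async takes effect.

// Hypothetical extraction: the @Async method lives in a separate bean,
// so calls from AwsAPIService go through the Spring async proxy.
@Service
public class FileScanService {

    @Async
    public void scanFile() {
        // ... SQS polling logic moved here unchanged ...
    }
}

@Service
public class AwsAPIService {

    private final FileScanService fileScanService;

    public AwsAPIService(FileScanService fileScanService) {
        this.fileScanService = fileScanService;
    }

    public void uploadImage(MultipartFile file) {
        // ... upload to S3 as before ...
        fileScanService.scanFile(); // external call, intercepted by the proxy
    }
}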

Getting SQS dead letter queue to work with Spring Boot and JMS

I've been working on a small Spring Boot application that receives messages from Amazon SQS. However, I foresee that processing these messages may fail, which is why I thought adding a dead letter queue would be a good idea.
There is a problem though: when the processing fails (which I force by throwing an Exception for some of the messages), the message is not retried later and is not moved to the dead letter queue. I am struggling to find the issue, since there doesn't seem to be much info on it.
However, if I look at Amazon's documentation, they seem to be able to do it, but without using the Spring Boot annotations. Is there any way I can make the code below work transactionally without writing too much of the JMS code myself?
This is the current configuration that I am using.
@Configuration
public class AWSConfiguration {

    @Value("${aws.sqs.endpoint}")
    private String endpoint;

    @Value("${aws.iam.key}")
    private String iamKey;

    @Value("${aws.iam.secret}")
    private String iamSecret;

    @Value("${aws.sqs.queue}")
    private String queue;

    @Bean
    public JmsTemplate createJMSTemplate() {
        JmsTemplate jmsTemplate = new JmsTemplate(getSQSConnectionFactory());
        jmsTemplate.setDefaultDestinationName(queue);
        jmsTemplate.setDeliveryPersistent(true);
        jmsTemplate.setDeliveryMode(DeliveryMode.PERSISTENT);
        return jmsTemplate;
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(getSQSConnectionFactory());
        factory.setConcurrency("1-1");
        return factory;
    }

    @Bean
    public JmsTransactionManager jmsTransactionManager() {
        return new JmsTransactionManager(getSQSConnectionFactory());
    }

    @Bean
    public ConnectionFactory getSQSConnectionFactory() {
        return SQSConnectionFactory.builder()
                .withAWSCredentialsProvider(awsCredentialsProvider)
                .withEndpoint(endpoint)
                .withNumberOfMessagesToPrefetch(10).build();
    }

    private final AWSCredentialsProvider awsCredentialsProvider = new AWSCredentialsProvider() {
        @Override
        public AWSCredentials getCredentials() {
            return new BasicAWSCredentials(iamKey, iamSecret);
        }

        @Override
        public void refresh() {
        }
    };
}
And finally the receiving end:
@Service
public class QueueReceiver {

    private static final String EXPERIMENTAL_QUEUE = "${aws.sqs.queue}";

    @JmsListener(destination = EXPERIMENTAL_QUEUE)
    public void receiveSegment(String jsonSegment) throws IOException {
        Segment segment = Segment.fromJSON(jsonSegment);
        if (segment.shouldFail()) {
            throw new IOException("This segment is expected to fail");
        }
        System.out.println(segment.getText());
    }
}
Spring Cloud AWS
You can greatly simplify your configuration by leveraging Spring Cloud AWS.
MessageHandler
@Service
public class MessageHandler {

    @SqsListener(value = "test-queue", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
    public void queueListener(String msg, Acknowledgment acknowledgment) {
        System.out.println("message: " + msg);
        if (/* successful */) {
            acknowledgment.acknowledge();
        }
    }
}
The example shown above is all you need to receive messages. This assumes you've created an SQS queue with an associated dead letter queue. If your messages aren't acknowledged, they will be retried until they reach the maximum number of receives, and then forwarded to the dead letter queue.
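For reference, here is a hedged sketch of wiring up the dead letter queue itself, assuming the AWS SDK for Java v1 and made-up queue names and ARN; the queue's RedrivePolicy attribute controls how many receives are allowed before SQS moves a message to the DLQ.

// Hypothetical setup: after 5 failed receives, "test-queue" forwards messages to its DLQ
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
String queueUrl = sqs.getQueueUrl("test-queue").getQueueUrl();
String dlqArn = "arn:aws:sqs:us-east-1:123456789012:test-queue-dlq"; // made-up ARN

sqs.setQueueAttributes(new SetQueueAttributesRequest()
        .withQueueUrl(queueUrl)
        .addAttributesEntry("RedrivePolicy",
                "{\"maxReceiveCount\":\"5\",\"deadLetterTargetArn\":\"" + dlqArn + "\"}"));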

Connect HTTP thread handlers to SessionPools.

We are using the model set out in the POCO project library documentation: a thread/handler is spawned for every connection to the HTTP server. We want to connect each thread to a shared SessionPoolContainer (SPC). We are working on the assumption that we should instantiate the SPC in the HandlerFactory and give each handler a reference to it.
class Handler : public Poco::Net::HTTPRequestHandler {
public:
    Handler(SessionPoolContainer& spc) {
        // Here is where it goes wrong: "spc is private."
        SessionPool sp = spc.getPool("p1");
        // TODO: fetch a session once we have the session pool reference.
    }

    void handleRequest(Poco::Net::HTTPServerRequest& request, Poco::Net::HTTPServerResponse& response) {
        // Do stuff.
    }
};

class HandlerFactory : public Poco::Net::HTTPRequestHandlerFactory {
public:
    SessionPoolContainer spc;

    Poco::Net::HTTPRequestHandler* createRequestHandler(const Poco::Net::HTTPServerRequest& request) {
        Poco::Data::MySQL::Connector::registerConnector();
        AutoPtr<SessionPool> p1 = new SessionPool("MySQL",
            "host=127.0.0.1;port=3306;db=testdb2;user=bachelor;password=bachelor;compress=true;auto-reconnect=true");
        spc.add(p1);
        if (request.getContentType().compare("Application/JSON")) {
            return new Handler(spc);
        }
    }
};

class MyWebHTTPServerApplication : public Poco::Util::ServerApplication {
protected:
    int main(const std::vector<std::string>& args) {
        // Instantiate HandlerFactory
        Poco::Net::HTTPServer server(new HandlerFactory(), socket, pParams);
        server.start();
        // SIC
    }
};
The error we get from this is (from the 3rd line):
/home/notandi/git/poco-1.7.2-all/cmake_install/debug/include/Poco/Data/SessionPool.h:187:9: error: 'Poco::Data::SessionPool::SessionPool(const Poco::Data::SessionPool&)' is private
SessionPool(const SessionPool&);
^
/home/notandi/QT/MySQLWithPool/main.cpp:68:41: error: within this context
SessionPool sp = spc.getPool("p");
From where I'm sitting, this just needs to work by passing the reference around.
I have tried adding "friend class Handler;" with no change in status.
The relevant part of SessionPoolContainer looks like:
private:
typedef std::map<std::string, AutoPtr<SessionPool>, Poco::CILess> SessionPoolMap;
SessionPoolContainer(const SessionPoolContainer&);
SessionPoolContainer& operator = (const SessionPoolContainer&);
SessionPoolMap _sessionPools;
Poco::FastMutex _mutex;
Do I edit and recompile POCO so that SessionPoolContainer declares "friend class Handler;"? How do I get around this, or am I just thinking about this all wrong?
From the posted code, it looks like the pool container is not needed at all, because the p1 pool is never added to it; so, even if you could compile, spc would contain no session pools. Using SessionPoolContainer makes sense only if you are connecting to multiple databases, in which case you have to add each session pool to the container:
spc.add(p1);
However, if there is no need for a SessionPoolContainer, then just pass a reference to the SessionPool to the handler and get a session from it:
class Handler : public Poco::Net::HTTPRequestHandler {
public:
    Handler(SessionPool& sp) {
        Session s = sp.get();
    }
    //...
};
Look at this code to get a better understanding of how to use session pools and containers thereof.