Currently I am reading from my SQS queue with the following code:

public class SqsReceiver implements Runnable {

    // declared here so the snippet compiles; any SLF4J-style logger works
    private static final Logger logger = LoggerFactory.getLogger(SqsReceiver.class);

    @Autowired
    protected AmazonSQS sqs;

    protected ReceiveMessageRequest receiveMessageRequest;

    public void run() {
        List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
        for (Message msg : messages) {
            logger.info("Processing message: {}", msg.getBody());
            try {
                processMessage(msg);
            } catch (Exception e) {
                logger.error("Error processing message", e);
            } finally {
                sqs.deleteMessage(new DeleteMessageRequest()
                        .withQueueUrl(receiveMessageRequest.getQueueUrl())
                        .withReceiptHandle(msg.getReceiptHandle()));
            }
        }
    }
}
And I have a scheduler reading it every second:
ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
scheduler.setPoolSize(1);
scheduler.initialize();
scheduler.schedule(sqsReceiver, new CronTrigger("* * * * * *"));
But recently I learned that Amazon released Using JMS with Amazon SQS. Besides making my code independent of the underlying implementation, what benefits would switching to JMS bring in terms of performance and cost?
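For context, here is a minimal sketch of what the JMS-based receiver might look like, using the amazon-sqs-java-messaging-lib that the article describes. The queue name and message handling below are placeholders, not my production code; note that the library long-polls SQS under the hood, so the one-second scheduler goes away:

import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnection;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class JmsSqsReceiver {
    public static void main(String[] args) throws Exception {
        // wrap the plain SQS client in a JMS ConnectionFactory
        SQSConnectionFactory factory = new SQSConnectionFactory(
                new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());
        SQSConnection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("MyQueue"); // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);
        // push-style delivery instead of scheduled polling
        consumer.setMessageListener(message -> {
            try {
                // assumes text messages; adjust the cast for other payload types
                System.out.println("Received: " + ((TextMessage) message).getText());
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        connection.start();
        Thread.currentThread().join(); // keep the demo alive; manage lifecycle properly in a real app
    }
}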
I am using this sample: https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/appsecret
I made some minor modifications to the TestProducer class (adding lines 18 and 26) because I want to produce messages to 2 different Event Hubs namespaces (meaning 2 different Kafka producers for 2 bootstrap servers) in ONE console application; see the code:
public class TestProducer {
    // Change constant to send messages to the desired topic; for this example we use 'do.kafka.oauth'
    private final static String TOPIC = "do.kafka.oauth";
    private final static int NUM_THREADS = 1;

    public static void main(String... args) throws Exception {
        // Create one Kafka producer per namespace
        final Producer<Long, String> producer = createProducer(false);
        final Producer<Long, String> producer_auto = createProducer(true);

        final ExecutorService executorService = Executors.newFixedThreadPool(NUM_THREADS);
        // Run NUM_THREADS TestDataReporters per producer
        for (int i = 0; i < NUM_THREADS; i++) {
            executorService.execute(new TestDataReporter(producer, TOPIC));
            executorService.execute(new TestDataReporter(producer_auto, TOPIC));
        }
    }

    private static Producer<Long, String> createProducer(boolean isAuto) {
        try {
            Properties properties = new Properties();
            if (isAuto)
                properties.load(new FileReader("src/main/resources/producer_auto.config"));
            else
                properties.load(new FileReader("src/main/resources/producer.config"));
            properties.put(ProducerConfig.CLIENT_ID_CONFIG, "KafkaExampleProducer");
            properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
            properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            return new KafkaProducer<>(properties);
        } catch (Exception e) {
            System.out.println("Failed to create producer with exception: " + e);
            System.exit(0);
            return null; // unreachable
        }
    }
}
Here is producer.config:
bootstrap.servers=advantcoeventhubs.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler
and producer_auto.config:
bootstrap.servers=autoeventhubtesting.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler
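Note that both config files reference the same CustomAuthenticateCallbackHandler class. For reference, here is a skeleton of what such an OAUTHBEARER callback handler can look like; this is a hedged sketch, not the sample's exact code, and the token acquisition itself is elided as a placeholder. The idea shown is deriving the token audience from each producer's own bootstrap.servers entry:

import java.util.List;
import java.util.Map;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.AppConfigurationEntry;

import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.oauthbearer.OAuthBearerToken;
import org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback;

public class CustomAuthenticateCallbackHandler implements AuthenticateCallbackHandler {
    private String sbUri; // e.g. https://advantcoeventhubs.servicebus.windows.net

    @Override
    public void configure(Map<String, ?> configs, String saslMechanism,
                          List<AppConfigurationEntry> jaasConfigEntries) {
        // each producer's own client configs are passed in here, so the handler
        // can derive the token audience from that producer's bootstrap.servers
        String bootstrapServer = configs.get("bootstrap.servers").toString()
                .replaceAll("\\[|\\]", ""); // [host:9093] -> host:9093
        this.sbUri = "https://" + bootstrapServer.split(":")[0];
    }

    @Override
    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof OAuthBearerTokenCallback) {
                // placeholder: acquire an AAD token whose audience matches sbUri
                OAuthBearerToken token = acquireTokenFor(sbUri);
                ((OAuthBearerTokenCallback) callback).token(token);
            } else {
                throw new UnsupportedCallbackException(callback);
            }
        }
    }

    private OAuthBearerToken acquireTokenFor(String audience) {
        throw new UnsupportedOperationException("token acquisition elided in this sketch");
    }

    @Override
    public void close() {
    }
}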
When I execute the code, the application can produce messages only to the first namespace (advantcoeventhubs), and it throws an exception when producing to the second namespace (autoeventhubtesting):
"ERROR NetworkClient [Producer clientId=KafkaExampleProducer] Connection to node -1 (autoeventhubtesting.servicebus.windows.net/13.66.138.74:9093) failed authentication due to: Invalid SASL mechanism response, server may be expecting a different protocol"
Can any experts advise on the root cause and a workaround? Thank you so much!
Why is AWS SQS not a default connector for Apache Flink? Is there some technical limitation, or was it just something that didn't get done? I want to implement this; any pointers would be appreciated.
Probably too late for an answer to the original question... I wrote an SQS consumer as a SourceFunction, using the Java Message Service (JMS) library for SQS:
public class SQSConsumer extends RichParallelSourceFunction<String> {
    private volatile boolean isRunning;
    private transient AmazonSQS sqs;
    private transient SQSConnectionFactory connectionFactory;
    private transient ExecutorService consumerExecutor;

    @Override
    public void open(Configuration parameters) throws Exception {
        String region = ...
        AWSCredentialsProvider credsProvider = ...
        // maybe use a blocking, array-backed thread pool to handle surges?
        consumerExecutor = Executors.newCachedThreadPool();
        ClientConfiguration clientConfig = PredefinedClientConfigurations.defaultConfig();
        this.sqs = AmazonSQSAsyncClientBuilder.standard()
                .withRegion(region)
                .withCredentials(credsProvider)
                .withClientConfiguration(clientConfig)
                .withExecutorFactory(() -> consumerExecutor)
                .build();
        this.connectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), sqs);
        this.isRunning = true;
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        SQSConnection connection = connectionFactory.createConnection();
        // ack each msg explicitly
        Session session = connection.createSession(false, SQSSession.UNORDERED_ACKNOWLEDGE);
        Queue queue = session.createQueue(<queueName>);
        MessageConsumer msgConsumer = session.createConsumer(queue);
        msgConsumer.setMessageListener(msg -> {
            try {
                String msgId = msg.getJMSMessageID();
                String evt = ((TextMessage) msg).getText();
                ctx.collect(evt);
                msg.acknowledge();
            } catch (JMSException e) {
                // log and move on to the next msg, or bail with an exception;
                // make sure a dead-letter queue is configured so the message is not lost;
                // the msg is not acknowledged, so it may be picked up again by another consumer instance
            }
        });
        // check if we were canceled
        if (!isRunning) {
            return;
        }
        connection.start();
        while (!consumerExecutor.awaitTermination(1, TimeUnit.MINUTES)) {
            // keep waiting
        }
    }

    @Override
    public void cancel() {
        isRunning = false;
        // this method might be called before the task actually starts running
        if (sqs != null) {
            sqs.shutdown();
        }
        if (consumerExecutor != null) {
            consumerExecutor.shutdown();
            try {
                consumerExecutor.awaitTermination(1, TimeUnit.MINUTES);
            } catch (Exception e) {
                // log e
            }
        }
    }

    @Override
    public void close() throws Exception {
        cancel();
        super.close();
    }
}
Note that if you are using a standard SQS queue, you may have to de-duplicate the messages, depending on whether exactly-once guarantees are required; a sketch of one way to do that follows.
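A minimal de-dup sketch using Flink keyed state. It assumes the source above is changed to emit (messageId, body) pairs as Tuple2<String, String>, and that the stream is keyed by the message id before this filter; both are assumptions, not part of the code above:

import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;

// usage: stream.keyBy(t -> t.f0).filter(new DedupFilter())
public class DedupFilter extends RichFilterFunction<Tuple2<String, String>> {
    private transient ValueState<Boolean> seen;

    @Override
    public void open(Configuration parameters) {
        // consider adding a StateTtlConfig in production so this state does not grow forever
        seen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("seen-message-ids", Boolean.class));
    }

    @Override
    public boolean filter(Tuple2<String, String> msg) throws Exception {
        if (seen.value() != null) {
            return false; // duplicate delivery from the at-least-once queue; drop it
        }
        seen.update(true);
        return true;
    }
}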
Reference:
Working with JMS and Amazon SQS
At the moment, there is no connector for AWS SQS in Apache Flink. Have a look at the already existing connectors. I assume you already know about this, so I will just give some pointers. I was also looking for an SQS connector recently and found this mail thread.
The Apache Kinesis connector is somewhat similar to what you would implement here. See whether you can use it as a starting point.
I am hitting an unhandled TaskCanceledException every time my code invokes an AWS Lambda. The code runs on an Android device. (It's written in C# with Xamarin.Android and references AWSSDK.Core, AWSSDK.Lambda).
1. Why is the task timing out? [Update: this has been figured out]
2. Why isn't the exception handled?
3. Why can't I see any diagnostics from the AWS SDK for .NET in the logs?
Code:
public class SomeActivity : Activity
{
    private AmazonLambdaClient mAWSLambdaClient;

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        SetContentView(...);
        FindViewById(...).Click += ButtonClickAsync;

        // System.Diagnostics.Trace redirects to Log.Debug with TAG="System.Diagnostics.Trace"
        System.Diagnostics.Trace.Listeners.Add(new MyAndroidTraceListener("System.Diagnostics.Trace"));
        System.Diagnostics.Trace.TraceInformation("Android trace listener installed");

        // AWS logs to System.Diagnostics
        AWSConfigs.LoggingConfig.LogTo = LoggingOptions.SystemDiagnostics;
        AWSConfigs.LoggingConfig.LogResponses = ResponseLoggingOption.Always;
    }

    protected override void OnStart()
    {
        base.OnStart();
        var idToken = ...
        var awsCredentials = new CognitoAWSCredentials("IdentityPoolID", AWSConfig.RegionEndpoint);
        awsCredentials.AddLogin("accounts.google.com", idToken);
        mAWSLambdaClient = new AmazonLambdaClient(awsCredentials, AWSConfig.RegionEndpoint);
    }

    protected override void OnStop()
    {
        base.OnStop();
        mAWSLambdaClient.Dispose();
        mAWSLambdaClient = null;
    }

    private async void ButtonClickAsync(object sender, System.EventArgs e)
    {
        await DoSomethingAsync();
    }

    private async Task DoSomethingAsync()
    {
        var lambdaRequest = ...
        try
        {
            var lambdaInvokeTask = mAWSLambdaClient.InvokeAsync(lambdaRequest);
            var invokeResponse = await lambdaInvokeTask; // <= VS breaks here after ~30 to 60 seconds
        }
        catch (TaskCanceledException e) // also tried catching Exception, no luck
        {
            Log.Error(TAG, "Lambda Task Canceled: {0}, {1}", e.Message, e.InnerException);
            return;
        }
    }
}
Visual Studio breaks on the await line, telling me I have an unhandled TaskCanceledException ("a task was canceled"). Weird, since I do handle that exception.
After the unhandled exception, I check the Device Log in Visual Studio. I filter by TAG="System.Diagnostics.Trace" and all I find is:
base apk Information 0:
Android trace listener installed
Where is the AWS SDK log I should have gotten according to logging-with-the-aws-sdk-for-net?
UPDATE:
I've figured out question 1 (why it times out). It was due to a lambdaRequest with a bad PayloadStream: a MemoryStream whose position had not been reset to 0 after JSON-serializing an object to the stream.
I have not figured out question 2 (why the exception wasn't handled by the try/catch) or question 3 (why the AWS SDK did not log as requested).
I'm guessing that either the TaskCanceledException instance is not from the same namespace your code expects in the catch statement, or it is being thrown from the line just above your try/catch, i.e., mAWSLambdaClient.InvokeAsync(lambdaRequest). What happens if you move that line (and possibly more lines) inside the try/catch block?
If this doesn't help, please post the stack trace.
I am trying to implement the following use case as part of my Akka learning.
I would like to calculate the total number of streets in all cities of all states. I have a database that contains the details needed. Here is what I have so far.
Configuration
akka.actor.deployment {
    /CityActor {
        router = random-pool
        nr-of-instances = 10
    }
    /StateActor {
        router = random-pool
        nr-of-instances = 1
    }
}
Main
public static void main(String[] args) {
    try {
        Config conf = ConfigFactory
                .parseReader(
                        new FileReader(ClassLoader.getSystemResource("config/forum.conf").getFile()))
                .withFallback(ConfigFactory.load());
        System.out.println(conf);
        final ActorSystem system = ActorSystem.create("AkkaApp", conf);
        final ActorRef masterActor = system.actorOf(Props.create(MasterActor.class), "Migrate");
        masterActor.tell("", ActorRef.noSender());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
MasterActor
public class MasterActor extends UntypedActor {
    private final ActorRef randomRouter = getContext().system()
            .actorOf(Props.create(StateActor.class).withRouter(new akka.routing.FromConfig()), "StateActor");

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            getContext().watch(randomRouter);
            for (String aState : getStates()) {
                randomRouter.tell(aState, getSelf());
            }
            randomRouter.tell(new Broadcast(PoisonPill.getInstance()), getSelf());
        } else if (message instanceof Terminated) {
            Terminated ater = (Terminated) message;
            if (ater.getActor().equals(randomRouter)) {
                getContext().system().terminate();
            }
        }
    }

    public List<String> getStates() {
        return new ArrayList<String>(Arrays.asList("CA", "MA", "TA", "NJ", "NY"));
    }
}
StateActor
public class StateActor extends UntypedActor {
    private final ActorRef randomRouter = getContext().system()
            .actorOf(Props.create(CityActor.class).withRouter(new akka.routing.FromConfig()), "CityActor");

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            System.out.println("Processing state " + message);
            for (String aCity : getCitiesForState((String) message)) {
                randomRouter.tell(aCity, getSelf());
            }
            Thread.sleep(1000);
        }
    }

    public List<String> getCitiesForState(String stateName) {
        return new ArrayList<String>(Arrays.asList("Springfield-" + stateName, "Salem-" + stateName,
                "Franklin-" + stateName, "Clinton-" + stateName, "Georgetown-" + stateName));
    }
}
CityActor
public class CityActor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            System.out.println("Processing city " + message);
            Thread.sleep(1000);
        }
    }
}
Did I implement this use case properly?
I cannot get the code to terminate properly; I get dead-letter messages. I know why I am getting them, but I am not sure how to implement this properly.
Any help is greatly appreciated.
Thanks
I tested and ran your use case with Akka 2.4.17. It works and terminates properly, without any dead letters logged.
Here are some remarks/suggestions to improve your understanding of the Akka toolkit:
Do not use Thread.sleep() inside an actor. It is basically never good practice, since the same thread may perform many tasks for many actors (this is the default behavior with a shared thread pool). Instead, you can use the Akka scheduler (as sketched below) or assign a dedicated thread to a specific actor (see this post for more details). See also the Akka documentation about that topic.
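For instance, here is a minimal sketch, against the same Akka 2.4 untyped API used in the question, of how CityActor could replace Thread.sleep(1000) with a message scheduled to itself (the Tick marker class is my own addition):

import java.util.concurrent.TimeUnit;

import akka.actor.UntypedActor;
import scala.concurrent.duration.Duration;

public class CityActor extends UntypedActor {
    // private marker message that stands in for "the second has elapsed"
    private static final class Tick {
    }

    @Override
    public void onReceive(Object message) {
        if (message instanceof String) {
            System.out.println("Processing city " + message);
            // deliver a Tick to ourselves in one second instead of blocking the dispatcher thread
            getContext().system().scheduler().scheduleOnce(
                    Duration.create(1, TimeUnit.SECONDS),
                    getSelf(), new Tick(), getContext().dispatcher(), getSelf());
        } else if (message instanceof Tick) {
            // resume the time-delayed part of the work here
        }
    }
}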
Having some dead letters is not always an issue. It generally arises when the system stops an actor that still had messages in its mailbox; the remaining unprocessed messages are then sent to the deadLetters actor of the ActorSystem. I recommend checking the configuration you provided for the logging of dead letters. If the file forum.conf you provided is your complete Akka configuration, you may want to customize some additional settings. See the pages Logging of Dead Letters and Stopping actors on Akka's website. For instance, you could have a section like this:
akka {
    # instead of System.out.println(conf);
    log-config-on-start = on
    # Max number of dead letters to log
    log-dead-letters = 10
    log-dead-letters-during-shutdown = on
}
Instead of using System.out.println() to log/debug, it is more convenient to set up a dedicated logger for each actor, which gives you additional information such as the dispatcher, the actor's name, etc. If you are interested, have a look at the Logging page.
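A minimal sketch of such a per-actor logger (MyActor is a placeholder name):

import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class MyActor extends UntypedActor {
    // actor-local logger: entries are tagged with the actor's path automatically
    private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    @Override
    public void onReceive(Object message) {
        log.info("Received: {}", message);
    }
}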
Use custom immutable message objects instead of using Strings everywhere. At first, it may seem painful to declare additional classes, but in the end it helps you design complex behaviors and it is more readable. For instance, an actor A can answer a RequestMsg coming from an actor B with an AnswerMsg or a custom ErrorMsg. Your actor B then ends up with the following onReceive() method:
@Override
public void onReceive(Object message) {
    if (message instanceof AnswerMsg) {
        // OK
        AnswerMsg answerMsg = (AnswerMsg) message;
        // ...
    } else if (message instanceof ErrorMsg) {
        // Not OK
        ErrorMsg errorMsg = (ErrorMsg) message;
        // ...
    } else {
        // Unexpected behaviour, log it
        log.error("Error, received " + message.toString() + " object.");
    }
}
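For completeness, a hypothetical AnswerMsg for your street-counting use case could be as simple as a final class with final fields (the field names here are just an illustration):

public final class AnswerMsg {
    private final String state;
    private final int streetCount;

    public AnswerMsg(String state, int streetCount) {
        this.state = state;
        this.streetCount = streetCount;
    }

    public String getState() {
        return state;
    }

    public int getStreetCount() {
        return streetCount;
    }
}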
I hope these resources will be useful to you.
Happy Akka programming! ;)
In our project we use an implementation of HL7 documents from openehealth. This implementation uses EMF as its underlying model and delegates all calls to EMF. We need to handle a large volume of documents, and our flows involve concurrent processing of documents (read, validate, query). In a concurrent environment the EMF layer crashes with UnsupportedOperationException. The openehealth site says to synchronize the processing in the client API, but this would decrease our system's performance, and we don't want that. I tried the EMF Transaction API (TransactionalEditingDomain), which claims to support read-only model transactions, but without success. My test looks something like this:
ExecutorService executorService = Executors.newFixedThreadPool(4);
final List<ClinicalDocument> documents = new ArrayList<ClinicalDocument>();
for (int i = 0; i < 100; i++) {
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            try {
                int randomNum = 1 + (int) (Math.random() * 6);
                ClinicalDocument cda = readCda();
                processIntensiveWork(cda);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}
private void processIntensiveWork(final ClinicalDocument document) {
    for (final Method method : document.getClass().getMethods()) {
        if (method.getName().startsWith("get")) {
            try {
                domain.runExclusive(new RunnableWithResult.Impl() {
                    @Override
                    public void run() {
                        try {
                            method.invoke(document);
                            System.out.println("Invoked method: " + method.getName());
                            setResult(null);
                        } catch (UnsupportedOperationException e) {
                            e.printStackTrace();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
For this test case we frequently caught java.lang.UnsupportedOperationException.
Note that for some test cases I also caught the following error from the EMF Transaction API: java.lang.IllegalArgumentException: Can only deactivate the active transaction.
Any suggestions are kindly appreciated. Feel free to ask for any other information that might help you resolve the problem.
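One workaround I am considering, under the assumption (which I have not verified) that the crashes come from EMF lazily initializing its internal structures on first access, is to eagerly walk each document once, in a single thread, before sharing it read-only across workers. A minimal sketch:

import java.util.Iterator;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.util.EcoreUtil;

public final class EmfWarmup {
    private EmfWarmup() {
    }

    // Touch the whole content tree once so lazy structures are built up front.
    // This is a sketch of an idea, not a guaranteed fix for the openehealth model.
    public static void warmUp(EObject root) {
        EcoreUtil.resolveAll(root); // resolve cross-document proxies eagerly
        for (Iterator<EObject> it = root.eAllContents(); it.hasNext(); ) {
            it.next(); // forces each object's containment structures to initialize
        }
    }
}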