Kafka OAUTHBEARER: Could not produce messages to multiple Event Hub namespaces in one application - azure-eventhub

I am using this sample: https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/appsecret
I made some minor modifications to the TestProducer class (adding line 18 and line 26). I want to produce messages to 2 different Event Hub namespaces (that is, creating 2 different Kafka producers for 2 bootstrap servers) in ONE console application. See the code:
public class TestProducer {
    // Change constant to send messages to the desired topic; for this example we use 'do.kafka.oauth'
    private final static String TOPIC = "do.kafka.oauth";
    private final static int NUM_THREADS = 1;

    public static void main(String... args) throws Exception {
        // Create the Kafka producers, one per config file
        final Producer<Long, String> producer = createProducer(false);
        final Producer<Long, String> producer_auto = createProducer(true);
        final ExecutorService executorService = Executors.newFixedThreadPool(NUM_THREADS);
        // Run NUM_THREADS TestDataReporters
        for (int i = 0; i < NUM_THREADS; i++) {
            executorService.execute(new TestDataReporter(producer, TOPIC));
            executorService.execute(new TestDataReporter(producer_auto, TOPIC));
        }
    }

    private static Producer<Long, String> createProducer(boolean isAuto) {
        try {
            Properties properties = new Properties();
            if (isAuto)
                properties.load(new FileReader("src/main/resources/producer_auto.config"));
            else
                properties.load(new FileReader("src/main/resources/producer.config"));
            properties.put(ProducerConfig.CLIENT_ID_CONFIG, "KafkaExampleProducer");
            properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
            properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            return new KafkaProducer<>(properties);
        } catch (Exception e) {
            System.out.println("Failed to create producer with exception: " + e);
            System.exit(0);
            return null; // unreachable
        }
    }
}
Here is producer.config:
bootstrap.servers=advantcoeventhubs.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler
and producer_auto.config:
bootstrap.servers=autoeventhubtesting.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler
When I execute the code, the application can produce messages to only the first namespace (advantcoeventhubs), and throws an exception when producing to the second namespace (autoeventhubtesting):
"ERROR NetworkClient [Producer clientId=KafkaExampleProducer] Connection to node -1 (autoeventhubtesting.servicebus.windows.net/13.66.138.74:9093) failed authentication due to: Invalid SASL mechanism response, server may be expecting a different protocol"
Can any experts advise on the root cause and a workaround?
Thank you so much!

Related

Connect AWS SQS to Apache-Flink

Why is AWS SQS not a default connector for Apache Flink? Is there some technical limitation to doing this? Or was it just something that didn't get done? I want to implement this; any pointers would be appreciated.
Probably too late for an answer to the original question... I wrote an SQS consumer as a SourceFunction, using the Java Message Service (JMS) library for SQS:
public class SQSConsumer extends RichParallelSourceFunction<String> {
    private volatile boolean isRunning;
    private transient AmazonSQS sqs;
    private transient SQSConnectionFactory connectionFactory;
    private transient ExecutorService consumerExecutor;

    @Override
    public void open(Configuration parameters) throws Exception {
        String region = ...
        AWSCredentialsProvider credsProvider = ...
        // may be use a blocking array backed thread pool to handle surges?
        consumerExecutor = Executors.newCachedThreadPool();
        ClientConfiguration clientConfig = PredefinedClientConfigurations.defaultConfig();
        this.sqs = AmazonSQSAsyncClientBuilder.standard().withRegion(region).withCredentials(credsProvider)
                .withClientConfiguration(clientConfig)
                .withExecutorFactory(() -> consumerExecutor).build();
        this.connectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), sqs);
        this.isRunning = true;
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        SQSConnection connection = connectionFactory.createConnection();
        // ack each msg explicitly
        Session session = connection.createSession(false, SQSSession.UNORDERED_ACKNOWLEDGE);
        Queue queue = session.createQueue(<queueName>);
        MessageConsumer msgConsumer = session.createConsumer(queue);
        msgConsumer.setMessageListener(msg -> {
            try {
                String msgId = msg.getJMSMessageID();
                String evt = ((TextMessage) msg).getText();
                ctx.collect(evt);
                msg.acknowledge();
            } catch (JMSException e) {
                // log and move on to the next msg, or bail with an exception
                // have a dead letter queue configured so this message is not lost
                // msg is not acknowledged so it may be picked up again by another consumer instance
            }
        });
        // check if we were canceled
        if (!isRunning) {
            return;
        }
        connection.start();
        while (!consumerExecutor.awaitTermination(1, TimeUnit.MINUTES)) {
            // keep waiting
        }
    }

    @Override
    public void cancel() {
        isRunning = false;
        // this method might be called before the task actually starts running
        if (sqs != null) {
            sqs.shutdown();
        }
        if (consumerExecutor != null) {
            consumerExecutor.shutdown();
            try {
                consumerExecutor.awaitTermination(1, TimeUnit.MINUTES);
            } catch (Exception e) {
                // log e
            }
        }
    }

    @Override
    public void close() throws Exception {
        cancel();
        super.close();
    }
}
Note that if you are using a standard SQS queue, you may have to de-duplicate the messages, depending on whether exactly-once guarantees are required; see the sketch below.
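For instance, here is a minimal de-duplication sketch against the consumer above; the field name seenIds and the 10,000-entry bound are my own illustrative choices, not part of the original answer:
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// hypothetical addition to SQSConsumer: a bounded set of recently seen message IDs
private transient Set<String> seenIds;

// in open(): an insertion-ordered map that evicts the oldest entry past the bound;
// note that very old IDs get evicted and could, in principle, be duplicated again
seenIds = Collections.newSetFromMap(new LinkedHashMap<String, Boolean>() {
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
        return size() > 10_000;
    }
});

// in the message listener, guard the collect; add() returns false for a repeat
if (seenIds.add(msgId)) {
    ctx.collect(evt);
}
msg.acknowledge(); // acknowledge either way so the duplicate is not redelivered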
Reference:
Working with JMS and Amazon SQS
At the moment, there is no connector for AWS SQS in Apache Flink. Have a look at the already existing connectors; I assume you already know about them, so I would just like to give some pointers. I was also looking for an SQS connector recently and found this mail thread.
The Apache Kinesis connector is somewhat similar to what you would implement for this. See whether you can use it as a starting point.

TaskCanceledException while invoking AWS Lambda

I am hitting an unhandled TaskCanceledException every time my code invokes an AWS Lambda. The code runs on an Android device. (It's written in C# with Xamarin.Android and references AWSSDK.Core, AWSSDK.Lambda).
Why is the task timing out? [Update: this has been figured out]
Why isn't the exception handled?
Why can't I see any diagnostics from AWS SDK for .NET in the logs?
Code:
public class SomeActivity : Activity
{
    private AmazonLambdaClient mAWSLambdaClient;

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        SetContentView(...);
        FindViewById(...).Click += ButtonClickAsync;
        // System.Diagnostics.Trace redirects to Log.Debug with TAG="System.Diagnostics.Trace"
        System.Diagnostics.Trace.Listeners.Add(new MyAndroidTraceListener("System.Diagnostics.Trace"));
        System.Diagnostics.Trace.TraceInformation("Android trace listener installed");
        // AWS logs to System.Diagnostics
        AWSConfigs.LoggingConfig.LogTo = LoggingOptions.SystemDiagnostics;
        AWSConfigs.LoggingConfig.LogResponses = ResponseLoggingOption.Always;
    }

    protected override void OnStart()
    {
        base.OnStart();
        var idToken = ...
        var awsCredentials = new CognitoAWSCredentials("IdentityPoolID", AWSConfig.RegionEndpoint);
        awsCredentials.AddLogin("accounts.google.com", idToken);
        mAWSLambdaClient = new AmazonLambdaClient(awsCredentials, AWSConfig.RegionEndpoint);
    }

    protected override void OnStop()
    {
        base.OnStop();
        mAWSLambdaClient.Dispose();
        mAWSLambdaClient = null;
    }

    private async void ButtonClickAsync(object sender, System.EventArgs e)
    {
        await DoSomethingAsync();
    }

    private async Task DoSomethingAsync()
    {
        var lambdaRequest = ...
        try
        {
            var lambdaInvokeTask = mAWSLambdaClient.InvokeAsync(lambdaRequest);
            var invokeResponse = await lambdaInvokeTask; // <= VS breaks here after ~30 to 60 seconds
        }
        catch (TaskCanceledException e) // also tried catching Exception, no luck
        {
            Log.Error(TAG, "Lambda Task Canceled: {0}, {1}", e.Message, e.InnerException);
            return;
        }
    }
}
Visual Studio breaks on the await line, telling me I have an unhandled TaskCanceledException: a task was canceled. Weird, since I do handle that exception.
After the unhandled exception, I check the Device Log in Visual Studio. I filter by TAG="System.Diagnostics.Trace" and all I find is:
base apk Information 0:
Android trace listener installed
Where is the AWS SDK log I should have gotten according to logging-with-the-aws-sdk-for-net?
UPDATE:
I've figured out question 1, why it times out: it was due to a lambdaRequest with a bad PayloadStream, a MemoryStream whose position had not been reset to 0 after JSON-serializing an object into it.
I have not figured out question 2 (why the exception wasn't handled by the try/catch) or question 3 (why the AWS SDK did not log as requested).
I'm guessing either the TaskCanceledException instance is not from the same namespace your code is expecting in the catch statement, or it is being thrown from the line just above your try-catch, i.e., mAWSLambdaClient.InvokeAsync(lambdaRequest). What happens if you move that line, and possibly more lines, inside the try-catch block?
If this doesn't help, please post the stack trace.

SolrJ - NPE when accessing SolrCloud

I'm running the following test code against SolrCloud, using the SolrJ library:
public static void main(String[] args) {
    String zkHostString = "192.168.56.99:2181";
    SolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
    List<MyBean> beans = new ArrayList<>();
    for (int i = 0; i < 10000; i++) {
        // creating a bunch of MyBean to be indexed
        // and temporarily storing them in a List
        // no Solr operations performed here
    }
    System.out.println("Adding...");
    try {
        solr.addBeans("myCollection", beans);
    } catch (IOException | SolrServerException e1) {
        // TODO Auto-generated catch block
        e1.printStackTrace();
    }
    System.out.println("Committing...");
    try {
        solr.commit("myCollection");
    } catch (SolrServerException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
This code fails with the following exception:
Exception in thread "main" java.lang.NullPointerException
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1175)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:357)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:312)
at com.togather.solr.testing.SolrIndexingTest.main(SolrIndexingTest.java:83)
This is the full stack trace of the exception. I just "upgraded" from a standalone Solr installation to SolrCloud (with an external single-instance ZooKeeper, not the embedded one). With standalone Solr, the same code (with just some minor differences, like the host URL) used to work perfectly.
The NPE points me inside the SolrJ library, which I don't know.
Can anyone help me understand where the problem originates and how I can overcome it? Due to my inexperience and the brevity of the error message, I can't figure out where to start investigating.
Looking at your code, I would suggest specifying the default collection as the first thing:
CloudSolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
solr.setDefaultCollection("myCollection");
Regarding the NPE you're experiencing, it is very likely due to a network error. In the following lines of CloudSolrClient, your exception is raised inside the for loop for (DocCollection ext : requestedCollections):
if (wasCommError) {
    // it was a communication error. it is likely that
    // the node to which the request was to be sent is down. So, expire the state
    // so that the next attempt would fetch the fresh state
    // just re-read state for all of them, if it has not been retired
    // in retryExpiryTime time
    for (DocCollection ext : requestedCollections) {
        ExpiringCachedDocCollection cacheEntry = collectionStateCache.get(ext.getName());
        if (cacheEntry == null) continue;
        cacheEntry.maybeStale = true;
    }
    if (retryCount < MAX_STALE_RETRIES) { // if it is a communication error, we must try again
        // may be, we have a stale version of the collection state
        // and we could not get any information from the server
        // it is probably not worth trying again and again because
        // the state would not have been updated
        return requestWithRetryOnStaleState(request, retryCount + 1, collection);
    }
}

Graceful termination

I am trying to implement the following use case as part of my Akka learning:
I would like to calculate the total number of streets in all cities of all states. I have a database that contains the details needed. Here is what I have so far.
Configuration
akka.actor.deployment {
  /CityActor {
    router = random-pool
    nr-of-instances = 10
  }
  /StateActor {
    router = random-pool
    nr-of-instances = 1
  }
}
Main
public static void main(String[] args) {
    try {
        Config conf = ConfigFactory
                .parseReader(
                        new FileReader(ClassLoader.getSystemResource("config/forum.conf").getFile()))
                .withFallback(ConfigFactory.load());
        System.out.println(conf);
        final ActorSystem system = ActorSystem.create("AkkaApp", conf);
        final ActorRef masterActor = system.actorOf(Props.create(MasterActor.class), "Migrate");
        masterActor.tell("", ActorRef.noSender());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
MasterActor
public class MasterActor extends UntypedActor {
    private final ActorRef randomRouter = getContext().system()
            .actorOf(Props.create(StateActor.class).withRouter(new akka.routing.FromConfig()), "StateActor");

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            getContext().watch(randomRouter);
            for (String aState : getStates()) {
                randomRouter.tell(aState, getSelf());
            }
            randomRouter.tell(new Broadcast(PoisonPill.getInstance()), getSelf());
        } else if (message instanceof Terminated) {
            Terminated ater = (Terminated) message;
            if (ater.getActor().equals(randomRouter)) {
                getContext().system().terminate();
            }
        }
    }

    public List<String> getStates() {
        return new ArrayList<String>(Arrays.asList("CA", "MA", "TA", "NJ", "NY"));
    }
}
StateActor
public class StateActor extends UntypedActor {
    private final ActorRef randomRouter = getContext().system()
            .actorOf(Props.create(CityActor.class).withRouter(new akka.routing.FromConfig()), "CityActor");

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            System.out.println("Processing state " + message);
            for (String aCity : getCitiesForState((String) message)) {
                randomRouter.tell(aCity, getSelf());
            }
            Thread.sleep(1000);
        }
    }

    public List<String> getCitiesForState(String stateName) {
        return new ArrayList<String>(Arrays.asList("Springfield-" + stateName, "Salem-" + stateName,
                "Franklin-" + stateName, "Clinton-" + stateName, "Georgetown-" + stateName));
    }
}
CityActor
public class CityActor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            System.out.println("Processing city " + message);
            Thread.sleep(1000);
        }
    }
}
Did I implement this use case properly?
I cannot get the code to terminate properly; I get dead-letter messages. I know why I am getting them, but I am not sure how to implement this properly.
Any help is greatly appreciated.
Thanks
I tested and ran your use case with Akka 2.4.17. It works and terminates properly, without any dead letters logged.
Here are some remarks/suggestions to improve your understanding of the Akka toolkit:
Do not use Thread.sleep() inside an actor. It is basically never good practice, since the same thread may execute tasks for many actors (this is the default behavior with a shared thread pool). Instead, you can use an Akka scheduler (see the sketch just below) or assign a dedicated thread to a specific actor (see this post for more details). See also the Akka documentation on that topic.
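A minimal sketch of the scheduler approach, assuming the Akka 2.4 Java API used in the question; the "tick" follow-up message is a placeholder of my own:
import java.util.concurrent.TimeUnit;
import scala.concurrent.duration.Duration;

// inside an UntypedActor: instead of blocking with Thread.sleep(1000),
// schedule a follow-up message to yourself and return immediately
getContext().system().scheduler().scheduleOnce(
        Duration.create(1, TimeUnit.SECONDS),
        getSelf(),                  // receiver
        "tick",                     // placeholder follow-up message
        getContext().dispatcher(),  // execution context
        getSelf());                 // sender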
Having some dead letters is not always an issue. They generally arise when the system stops an actor that still had messages in its mailbox; the remaining unprocessed messages are sent to the deadLetters of the ActorSystem. I recommend checking the configuration you provided for the logging of dead letters. If the file forum.conf you provided is your complete Akka configuration, you may want to customize some additional settings; see the pages Logging of Dead Letters and Stopping actors on Akka's website. For instance, you could have a section like this:
akka {
  # instead of System.out.println(conf);
  log-config-on-start = on
  # Max number of dead letters to log
  log-dead-letters = 10
  log-dead-letters-during-shutdown = on
}
Instead of using System.out.println() to log/debug, it is more convenient to set up a dedicated logger for each actor, which gives you additional information such as the dispatcher, actor name, etc. If you are interested, have a look at the Logging page; a sketch follows.
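A minimal sketch using the question's CityActor as the host, with Akka's standard LoggingAdapter:
import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class CityActor extends UntypedActor {
    // per-actor logger; log entries automatically carry the actor's path, system, thread, etc.
    private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            log.info("Processing city {}", message);
        }
    }
}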
Use custom immutable message objects instead of Strings everywhere. At first, it may seem painful to declare additional classes, but in the end it helps you design complex behaviors and is more readable. For instance, an actor A can answer a RequestMsg coming from an actor B with an AnswerMsg or a custom ErrorMsg. Then, for your actor B, you will end up with the following onReceive() method:
@Override
public void onReceive(Object message) {
    if (message instanceof AnswerMsg) {
        // OK
        AnswerMsg answerMsg = (AnswerMsg) message;
        // ...
    } else if (message instanceof ErrorMsg) {
        // Not OK
        ErrorMsg errorMsg = (ErrorMsg) message;
        // ...
    } else {
        // Unexpected behavior, log it
        log.error("Error, received " + message.toString() + " object.");
    }
}
I hope that these resources will be useful for you.
Happy Akka programming! ;)

mysterious console output to stderr from jetty?

When running my embedded jetty web app launcher, I see the following output to stderr. I just started seeing this after moving my build to maven-2. Has anyone seen this before?
IDLE SCEP#988057 [d=false,io=1,w=true,rb=false,wb=false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 5469 bytesProduced = 5509
It repeats occasionally, at seemingly random times.
This seems to be coming from Jetty's NIO support -- it appears that Jetty feels it is appropriate to log to stderr when it closes idle connections:
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.checkIdleTimestamp(SelectChannelEndPoint.java:231)
at org.eclipse.jetty.io.nio.SelectorManager$SelectSet$2.run(SelectorManager.java:768)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
For those with similar problems, I overrode System.err with a mock output stream:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.regex.Pattern;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugOutputStream extends OutputStream {
    private final Logger s_logger = LoggerFactory.getLogger(DebugOutputStream.class);
    private final OutputStream m_realStream;
    private final ByteArrayOutputStream baos = new ByteArrayOutputStream();
    private final Pattern m_searchFor;

    public DebugOutputStream(OutputStream realStream, String regex) {
        m_realStream = realStream;
        m_searchFor = Pattern.compile(regex);
    }

    @Override
    public void write(int b) throws IOException {
        baos.write(b);
        // log a stack trace when the buffered line matches the unwanted pattern
        if (m_searchFor.matcher(baos.toString()).matches()) {
            s_logger.info("unwanted output detected", new RuntimeException());
        }
        if (b == '\n') baos.reset();
        m_realStream.write(b);
    }
}
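To actually install it, wrap the real stderr early in startup; the regex here is only an illustrative guess at the shape of the unwanted lines:
import java.io.PrintStream;

// replace stderr with the filtering stream; autoflush avoids buffering surprises
System.setErr(new PrintStream(
        new DebugOutputStream(System.err, ".*IDLE SCEP.*"), true));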