Azure Event Hub across regions - azure-eventhub

Can someone help?
1) We have one Event Hub (Eh1) in region X. Can we use the same Event Hub (Eh1) across different regions at the same time, or at least as part of a disaster-recovery process?
In detail:
We have a server running in the primary region (Server Primone) and a similar replica server (Server Secone) running in the secondary region. A mobile device sends messages to the server (Primone) in the primary region via the Event Hub (Eh1). Can I use the same Event Hub (Eh1) if my server (Primone) in the primary region goes down and is not accessible?
Suggestions would be helpful.
Thank you.

Yes, you can.
Event Hubs does not care about the region of its consumers: clients pull events from the hub, so if the primary server goes down, the secondary server can connect to the same Event Hub (Eh1) and continue reading the events.
Please let me know if you still have any issues.
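To make the failover idea concrete, here is a minimal sketch (the server names come from the question; the idea that an external health check drives the switch is an assumption, not something Event Hubs does for you):

```python
# Minimal failover sketch. Both Primone and Secone can attach to the SAME
# Event Hub (Eh1); an orchestrator (hypothetical here) decides which one
# actively consumes, based on a health check of the primary server.
def pick_active_consumer(primone_healthy: bool) -> str:
    """Return the server that should be reading from Eh1 right now."""
    return "Primone" if primone_healthy else "Secone"

# Normal operation: the primary region's server consumes from Eh1.
assert pick_active_consumer(True) == "Primone"
# Primone goes down: Secone connects to the very same Eh1 and takes over.
assert pick_active_consumer(False) == "Secone"
```

Note that for the secondary server to resume where the primary left off, it must read from the same checkpoints (e.g. a shared checkpoint store), otherwise it will re-read or skip events.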

Related

WSO2 CEP horizontal scalability

I'm investigating the new WSO2 CEP and WSO2 Stream Processor products and I would like some information:
I would like to know whether they can manage scalability in a configuration where I have multiple instances installed in a cluster on multiple servers, with each instance sharing the same information (rules, events, streams, etc.).
Is it possible to aggregate events across the servers? For example, given the rule "select * from my_stream.window(10 minutes) having count = 2", if server 1 receives the first event and server 2 the second, the condition should be validated and the associated action fired only once (not once per server/instance).
Is it possible to aggregate events across the servers using a pattern with a where condition?
I can answer only for WSO2 SP.
"I would like to know whether it can manage scalability in a configuration where I have multiple instances installed in a cluster on multiple servers, with each instance sharing the same information (rules, events, streams, etc.)"
Indeed, you can have one or more manager nodes, and all worker nodes will synchronize the apps (rules, streams, ...) from the manager node.
For the other questions: the state (aggregated values) can be persisted across the cluster, so it should work.
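A small sketch of why cluster-shared state makes the "count = 2" rule fire exactly once (a plain dict stands in for the cluster-wide state store that the workers would share; the server names are hypothetical):

```python
# A dict stands in for the persisted, cluster-wide state store.
shared_state = {"my_stream_count": 0}
fired_actions = []

def on_event(server_name, event):
    # Every worker updates the SAME aggregated value...
    shared_state["my_stream_count"] += 1
    # ...so the "count = 2" condition is evaluated against cluster-wide
    # state and the action fires exactly once, not once per server.
    if shared_state["my_stream_count"] == 2:
        fired_actions.append((server_name, event))

on_event("server1", "event-1")  # first event arrives at server 1
on_event("server2", "event-2")  # second event arrives at server 2

assert len(fired_actions) == 1  # the action fired exactly once
```

With per-server (unshared) state, each server would see a count of 1 and the action would never fire, or would fire once per server once each saw two events.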

(AWS SWF) Is there a way to get a list of all activity workers listening on a particular tasklist?

In our beta stack, we have a single EC2 instance listening to a task list. Sometimes another developer on the team starts his own instance for testing purposes and forgets to turn it off. This creates problems for the next developer who tries to start an activity, only for it to be picked up by the previous developer's machine. Is there a way to get the hostnames of all activity workers listening to a particular task list?
It is not currently possible to get a list of pollers waiting on a task list through the SWF API. The workaround is to look at the identity field on the ActivityTaskStarted history event after the task was picked up by the wrong worker.
One way to avoid this issue is to always use a task list name that is specific to a machine or developer, so collisions cannot happen.
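Both workarounds can be sketched in a few lines (the base task list name "beta-activities" is hypothetical; the identity string format is just a convention, not mandated by SWF):

```python
import os
import socket

# 1) Put the hostname (and PID) in the poller's `identity`, so the
#    ActivityTaskStarted history event reveals WHICH machine took the task.
def worker_identity() -> str:
    return f"{socket.gethostname()}:{os.getpid()}"

# 2) Use developer-specific task list names so workers never collide.
def task_list_for(base: str, developer: str) -> str:
    return f"{base}-{developer}"

assert task_list_for("beta-activities", "alice") == "beta-activities-alice"
assert ":" in worker_identity()
```

The identity string would then be passed to the activity poller (e.g. the `identity` parameter of SWF's PollForActivityTask call) so it shows up in the workflow history.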

WSO2 API Manager 2.1 : Gateway not enforcing Throttling Limits

We have deployed API-M 2.1 in a distributed way (each component, GW, TM, KM, running in its own Docker image) on top of DC/OS 1.9 (Mesos).
We are having trouble getting the gateway to enforce throttling policies (whether subscription tiers or application-level policies). Here is what we have established so far:
The Traffic Manager itself does its job: it receives the event streams, analyzes them on the fly, and pushes an event onto the JMS topic throttledata.
The Gateway reads the message properly.
So basically we have discarded a communication issue.
However, we found two potential issues:
In the event pushed to the TM component, the value of appTenant is null (instead of carbon.super), even though we have a single tenant defined.
When the gateway receives the throttling message, it lets the request through, thinking "stopOnQuotaReach" is set to false, when it is actually set to true (we checked the value in the database).
Digging into the source code, we traced both issues to a single cause: both values are read from the authContext and are apparently set incorrectly. We are stuck, running out of things to try, and would appreciate pointers to a potential source of the problem and things to check.
Can somebody help, please?
Thanks- Isabelle.
Are there two TMs with HA enabled in the system?
If the TM is HA enabled, how do the gateways publish data to the TMs? Is it load-balanced or failover data publishing?
Did you follow the articles below to configure the environment for your deployment?
http://wso2.com/library/articles/2016/10/article-scalable-traffic-manager-deployment-patterns-for-wso2-api-manager-part-1/
http://wso2.com/library/articles/2016/10/article-scalable-traffic-manager-deployment-patterns-for-wso2-api-manager-part-2/
Is throttling not working at all in your environment?
Have you noticed any JMS connection related logs in the gateway nodes?
In these tests, we disabled HA to avoid possible complications. Neither subscription nor application throttling policies are working, in both cases because parameters that should have values do not have the correct ones (appTenant, stopOnQuotaReach).
Our scenario is far more basic: with one instance of each component, it fails as Isabelle described. The only thing we know is that both parameters come from the Authentication Context.
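For clarity, here is a simplified sketch of the decision the symptoms point at (this mirrors the intent of the gateway's check, not the actual API-M source code):

```python
# Simplified model of the gateway's quota decision. The bug described above
# means stop_on_quota_reach arrives as False from the auth context even
# though the database says True, so over-quota requests are let through.
def should_block(quota_exceeded: bool, stop_on_quota_reach: bool) -> bool:
    # Block only when the quota is exhausted AND the tier says to stop.
    return quota_exceeded and stop_on_quota_reach

# Correct value from the database: the over-quota request is blocked.
assert should_block(True, True) is True
# Value wrongly read as False from the auth context: the request passes.
assert should_block(True, False) is False
```

This is why the fix has to be upstream, where the Authentication Context is populated, rather than in the throttling check itself.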
Thank you!

I need help clarifying a high level use-case of Amazon SQS

So I need a second pair of eyes to correct or confirm my understanding of Amazon SQS. From my understanding, you can add an unlimited number of messages to one queue. A message can be up to 256 KB in size, and if it needs to be larger than that, you can use Amazon S3 to store up to 2 GB. Reading around online, there appear to be many use cases for this queuing service; for example, SQS can act as a database buffer.
But here's what I'm looking to do: I want to make a real-time messaging system. My current functionality acts more like a message board: the implementation inserts into the database, then reads the data and packages it into JSON to be inserted into SQLite on the mobile phone. That works great, but I'm getting a lot of requests from people to make it real-time.
So what I'm wondering is: can I use Amazon SQS to write and read messages for a chat application? In my theoretical use case, SQS would have a message queue to write to, and the mobile app would poll that queue every second to check for messages. But here's where I'm confused. Since you cannot "query" a particular message from the queue, would it make sense to have a queue per user plus a generic queue for the app server to read from? Or am I just talking crazy, and should I spend my cognitive resources on implementing an open connection on an EC2 instance?
Any help would be great,
Thanks!
Have you thought about using Amazon SNS to push the chat messages to your mobile devices? Each user publishes to a topic and the readers subscribe to that topic. You just have to be ok with missing messages if the app isn't running.
If you only have a few users (say, fewer than 100), you could have one SQS queue per user. Beyond that, the solution won't be operationally feasible.
If you were to have one generic queue, SQS won't help, because it doesn't allow querying for a given field across all available messages.
I can think of the following options for your use case:
1) Set up a Redis cluster, possibly on Amazon ElastiCache, with one message list per user.
2) A Messages table in MySQL, possibly on AWS RDS. This provides an easy way to query messages for a given user.
You can also use DynamoDB instead of MySQL in option 2.
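Option 1 can be sketched like this (an in-memory dict stands in for Redis; on a real cluster you would use LPUSH/LRANGE on a key such as "messages:&lt;user&gt;"; the user names are hypothetical):

```python
from collections import defaultdict

# One message list per user; defaultdict stands in for the Redis keyspace.
inbox = defaultdict(list)

def send(to_user: str, message: str) -> None:
    inbox[to_user].append(message)       # Redis: LPUSH messages:<user> msg

def fetch(user: str) -> list:
    msgs, inbox[user] = inbox[user], []  # Redis: LRANGE then delete/trim
    return msgs

send("bob", "hi bob")
send("bob", "are you there?")
send("carol", "hello carol")

assert fetch("bob") == ["hi bob", "are you there?"]
assert fetch("bob") == []                # already consumed
assert fetch("carol") == ["hello carol"]
```

The key point is that "give me all messages for this user" becomes a single key lookup, which is exactly the query SQS cannot express.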

AWS Kinesis leaseOwner confusion

A very simple application is running on a Spark cluster with 2 workers, using Kinesis with 2 shards.
I check the Kinesis Streams application state in DynamoDB (shown in this screenshot) in the North Virginia region.
I start and stop workers from time to time, and I noticed that when the leaseOwner of both shards is the same worker, the application works fine.
But when I stop the current leaseOwner (10.0.7.63), ownership switches and the new owner becomes the other worker (10.0.7.62); my application then pulls data but gets nothing back from Kinesis (although the connection to Kinesis is still up).
My guess is that when ownership switches to another worker, the checkpoints on the new owner do not match what is left in Kinesis, so pulling data returns nothing.
Could anyone please explain a bit what's going on here? Is my guess right?
Thanks a lot.
First of all, a friendly reminder: define the workerID in your application's configuration using the hostname; it will give you more user-friendly names.
Second, are you sure shard-000 receives data at all? Maybe you've set a static partition key on the producer side, and that is causing all the data to land only on shard-001?
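To see why a static partition key starves one shard, here is a sketch of how Kinesis routes records: the MD5 hash of the partition key, read as a 128-bit integer, is matched against each shard's hash-key range (the even split below is the default for a 2-shard stream; whether this particular stream uses it is an assumption):

```python
import hashlib

# Default hash-key ranges for a 2-shard stream: the 128-bit space split in half.
SHARDS = [
    ("shard-000", 0, 2**127 - 1),
    ("shard-001", 2**127, 2**128 - 1),
]

def shard_for(partition_key: str) -> str:
    """Map a partition key to a shard the way Kinesis does: MD5 -> range."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    for shard_id, lo, hi in SHARDS:
        if lo <= h <= hi:
            return shard_id
    raise AssertionError("unreachable: ranges cover the full hash space")

# A static partition key always hashes to the SAME shard, so the other
# shard never receives any data.
assert shard_for("static-key") == shard_for("static-key")
assert shard_for("static-key") in ("shard-000", "shard-001")
```

So if the producer always sends the same partition key, one shard carries all the traffic, and a worker that only owns the other shard's lease will correctly read nothing.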