AWS IVS: get notified on stream start and stream end - amazon-web-services

I am using AWS IVS (Interactive Video Service) for live streaming. I need a notification when the stream starts and when it ends. In Amazon EventBridge, I created a rule with IVS as the source and an SQS queue as the target, but I am not receiving any messages in the queue when the stream starts or ends. I am polling the queue, but it is empty. I think the event pattern in the EventBridge rule is wrong. Can someone help me validate the event pattern below, or explain how to get a notification from AWS IVS when a stream starts or ends?
{
  "source": [
    "aws.ivs"
  ],
  "detail": {
    "stream_status": [
      "Stream End",
      "Stream Start",
      "Session Created"
    ]
  }
}

The EventBridge sample event had a bug where event_name was shown improperly as eventName. If you manually specify event_name, the events will properly fire and you should be good to use this rule for your needs.
Refer to the documentation here.
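In other words, the detail key in your rule should be event_name rather than stream_status. For reference, a pattern along these lines should match (a sketch based on the IVS EventBridge documentation, where stream state changes use the detail-type "IVS Stream State Change"; double-check the field names against the current docs):
{
  "source": [
    "aws.ivs"
  ],
  "detail-type": [
    "IVS Stream State Change"
  ],
  "detail": {
    "event_name": [
      "Stream Start",
      "Stream End",
      "Session Created"
    ]
  }
}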

Imo, you have to manage it yourself. AWS does not provide any automated messages when your IVS endpoint is ingesting data.
The best solution I can think of right now is an observer pattern using WebSockets.
The dirtier implementation would be to send a message over a WebSocket whenever your data source is streaming. This means you have to trigger it somewhere from your interface if you're using another broadcasting service.
The better way would be a service that checks your stream health and sessions regularly and notifies your clients whenever there is a live session, as well as providing info whenever your session health is dropping.
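If you go the polling route, a minimal sketch with boto3 could look like the following (the channel ARN is a placeholder, and this assumes the IVS GetStream API and its ChannelNotBroadcasting error; verify both against your SDK version):
import boto3

ivs = boto3.client("ivs")
CHANNEL_ARN = "arn:aws:ivs:us-west-2:123456789012:channel/example"  # placeholder

def is_live(channel_arn: str) -> bool:
    # GetStream returns the active stream if one exists; otherwise the
    # service reports that the channel is not broadcasting.
    try:
        stream = ivs.get_stream(channelArn=channel_arn)["stream"]
        return stream.get("state") == "LIVE"
    except ivs.exceptions.ChannelNotBroadcasting:
        return False

# Run this on a schedule (cron, Lambda, etc.) and notify your clients
# over the websocket whenever the result changes.
print(is_live(CHANNEL_ARN))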

Related

How to redirect multiple ECS log streams into a single log stream in CloudWatch

I currently have my application running in ECS. I have enabled the awslogs log driver, specifying the log group and the region. Everything works great: the logs are sent to the log group and a log stream is created. However, every time I restart the container, it creates a new log stream.
Is there a way that, instead of a new log stream being created each time the container restarts, everything goes into a single log stream?
I've been looking for a solution for a long time and I haven't found anything.
For example, instead of there being two log streams, there would be only one, no matter how many times the container is restarted.
The simplest way is to use the PutLogEvents API directly. Beyond that you can get as fancy as you want. You could use a FireLens sidecar container in your task to handle all events with a logging API that writes directly to CloudWatch.
For example, you can do this in Python with the boto3 CloudWatch Logs put_log_events call:
import time
import boto3

logs = boto3.client("logs")
response = logs.put_log_events(
    logGroupName="your-log-group",
    logStreamName="your-log-stream",
    logEvents=[
        # timestamp must be in milliseconds since the Unix epoch
        {"timestamp": int(time.time() * 1000), "message": "log message"},
    ],
)

Ability to ensure message was successfully sent to Event Hub from APIM

Is it possible to ensure that a message was successfully delivered to an Event Hub when sending it with the log-to-eventhub policy in API Management?
Edit: In our solution we cannot allow any request to proceed if a message was not delivered to the Event Hub. As far as I can tell the log-to-eventhub policy doesn't check for this.
Welcome to Stack Overflow!
Note: Once the data has been passed to an Event Hub, it is persisted and will wait for Event Hub consumers to process it. The Event Hub does not care how it is processed; it just cares about making sure the message will be successfully delivered.
For more details, refer to “Why send to an Azure Event Hub?”.
Hope this helps.
Event Hubs is built on top of Service Bus. According to the Service Bus documentation,
Using any of the supported Service Bus API clients, send operations into Service Bus are always explicitly settled, meaning that the API operation waits for an acceptance result from Service Bus to arrive, and then completes the send operation.
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a "tracking-id" inside of it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
When using the AMQP protocol, which is the exclusive protocol for the .NET Standard client and the Java client and which is an option for the .NET Framework client, message transfers and settlements are pipelined and completely asynchronous, and it is recommended that you use the asynchronous programming model API variants.
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operations to different entities overlap. The completions might also occur out of the original send order.
I think this means the SDK is getting a receipt for each message.
This theory is further aided by the RetryPolicy Class used in the ClientEntity.RetryPolicy Property of the EventHubSender Class.
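As an illustration of that settlement behavior outside of API Management, here is a minimal sketch using the Python azure-eventhub client (v5 SDK assumed; the connection string and hub name are placeholders). send_batch only returns once the service has accepted the batch, and delivery failures surface as exceptions:
from azure.eventhub import EventData, EventHubProducerClient
from azure.eventhub.exceptions import EventHubError

producer = EventHubProducerClient.from_connection_string(
    conn_str="<connection-string>",    # placeholder
    eventhub_name="<event-hub-name>",  # placeholder
)

try:
    batch = producer.create_batch()
    batch.add(EventData("message payload"))
    producer.send_batch(batch)  # blocks until the service acknowledges the batch
except EventHubError:
    # Delivery failed; stop the request from proceeding instead of ignoring it.
    raise
finally:
    producer.close()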
In the API Management section on logging-to-eventhub, there is also a section on retry intervals. Below that are sections on modifying the return response or taking action on certain status codes.
Once the status codes of a failed logging attempt are known, you can modify the policies to take action on failed logging attempts.

Lambda Handler to read CloudWatch events coming from Elemental

I want to write a Lambda handler in Java to read CloudWatch events. These events are coming from the MediaConvert API.
Steps that I covered:
Configured Eclipse using the AWS Toolkit.
Created an AWS Project with a Lambda function.
My doubts begin here:
Which event type should I select to create the Lambda handler? It is showing the following options:
S3, SNS, Custom, Stream Request Handler, Kinesis Event, Cognito Event.
Note: there is no mention of an Elemental MediaConvert event type for the events written to the CloudWatch event stream.
What is the Stream Request Handler here? Is it a handler that can be configured to listen for event-stream-based events? If so, kindly help me figure this out.
Added flow:
a) The MediaConvert service is used to change the format of submitted media.
b) The documentation states that all events are published to the CloudWatch event stream when the job status changes.
c) Here, I want to read these events about job status changes from the CloudWatch event stream.
You can write a small Lambda function that prints the incoming event to the log file:
def lambda_handler(event, context):
    # Print the raw event so its structure shows up in the CloudWatch Logs output
    print(event)
    return
Then, trigger the function via CloudWatch. The function will write the event to the log files. You can examine the logs to see what information was passed into the function.
This will show you the real content of the message. The other message types are simply for testing in case you do not have a trigger setup to create a real event.
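For the trigger itself, a CloudWatch Events / EventBridge rule pattern along these lines should invoke the function on MediaConvert job status changes (a sketch based on the MediaConvert documentation; adjust the status values to the ones you care about):
{
  "source": [
    "aws.mediaconvert"
  ],
  "detail-type": [
    "MediaConvert Job State Change"
  ],
  "detail": {
    "status": [
      "COMPLETE",
      "ERROR"
    ]
  }
}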

Multiple different consumers of same Kinesis stream

I have a Kinesis producer which writes a single type of message to a stream. I want to process this stream in multiple, completely different consumer applications. So, a pub/sub with a single publisher for a given topic/stream. I also want to make use of checkpointing to ensure that each consumer processes every message written to the stream.
Initially, I was using the same App Name for all consumers and producers. However, I started getting the following error once I started more than one consumer:
com.amazonaws.services.kinesis.model.InvalidArgumentException: StartingSequenceNumber 49564236296344566565977952725717230439257668853369405442 used in GetShardIterator on shard shardId-000000000000 in stream PackageCreated under account ************ is invalid because it did not come from this stream. (Service: AmazonKinesis; Status Code: 400; Error Code: InvalidArgumentException; Request ID: ..)
This seems to be because consumers are clashing with their checkpointing as they are using the same App Name.
From reading the documentation, it seems the only way to do pub/sub with checkpointing is by having a stream per consumer application, which requires each producer to know about all possible consumers. This is more tightly coupled than I want; it's really just a queue.
It seems like Kafka supports what I want: arbitrary consumption of a given topic/partition, since consumers are completely in control of their own checkpointing. Is my only option to move to Kafka, or some other alternative, if I want pub/sub with checkpointing?
My RecordProcessor code, which is identical in each consumer:
override def processRecords(processRecordsInput: ProcessRecordsInput): Unit = {
  log.trace("Received record(s) from kinesis")
  for {
    record <- processRecordsInput.getRecords
    json <- jawn.parseByteBuffer(record.getData).toOption
    msg <- decode[T](json.toString).toOption
  } yield subscriber ! msg
  processRecordsInput.getCheckpointer.checkpoint()
}
The code parses the message and sends it off to the subscriber. For now, I'm simply marking all messages as successfully received. I can see messages being sent on the AWS Kinesis dashboard, but no reads happen, presumably because each application has its own AppName and doesn't see any other messages.
The pattern you want, one publisher writing to a single Kinesis stream with multiple completely different consumers reading from it, is supported. You don't need a separate stream per consumer.
How do you do that? You need to give a different application name to every consumer. That way, the checkpointing info of one consumer won't collide with that of another.
Check the first response to this: https://forums.aws.amazon.com/message.jspa?messageID=554375
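For illustration, a minimal sketch with the KCL 1.x Worker builder (the consumer name is hypothetical; only the applicationName differs between consumers, while the stream name stays the same, so each consumer keeps its own DynamoDB checkpoint table):
import java.util.UUID
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.{KinesisClientLibConfiguration, Worker}

// "package-created-email-consumer" is a hypothetical name; give every
// consumer application its own unique applicationName.
val config = new KinesisClientLibConfiguration(
  "package-created-email-consumer",          // unique per consumer application
  "PackageCreated",                          // the shared stream
  new DefaultAWSCredentialsProviderChain(),
  UUID.randomUUID().toString                 // workerId
)

val worker = new Worker.Builder()
  .recordProcessorFactory(myRecordProcessorFactory) // your existing record processor factory
  .config(config)
  .build()

worker.run()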

WSO2 CEP output event adaptor error

I am new to WSO2 CEP
I have created the entire flow to read the JMS message and split it using the text formatter. The problem is that when I try to push messages into the queue, they are not able to reach the output event adaptor. I have a MySQL event adaptor configured in my event formatter, but I keep getting the message below in my log:
[2014-02-13 21:20:06,347] ERROR - {ReceiverGroup} No receiver is reachable at reconnection, can't publish the events
[2014-02-13 21:20:06,352] ERROR - {AsyncDataPublisher} Reconnection failed for for tcp://localhost:7661
Can someone help me understand what this tcp://localhost:7661 is all about?
Regards
Subbu
tcp://localhost:7661 is the default port to which Thrift (WSO2) events are published. It seems a default event formatter has been created and is trying to publish events to that port.
Can you check your list of event formatters and ensure that no event formatters of type WSO2Event have been created? Such an event formatter might be created automatically if you set an exported stream to 'pass-through' when creating the execution plan.
You can enable event tracing [1] and monitor it to determine exactly how far the event gets in your configured flow.
[1] http://docs.wso2.org/display/CEP300/CEP+Event+Tracer
HTH,
Lasantha