Trigger Argo Workflow with webhook - argocd

I have a use case where I want to trigger an Argo workflow when GitHub push events occur.
So far, from what I understand, these would be the steps of my approach:
Create a GitHub webhook
and then create the following in Kubernetes:
Event source (receives the event from the webhook and writes it to the event bus) -> Event bus -> Sensor (listens for events from the event bus and triggers the actions) -> trigger a workflow template
Now, I have a few questions:
Can multiple event sources use the same event bus?
How do we connect the event source to Argo and to the GitHub webhook? Do we create a service and an ingress from the event source?
Are there any examples or documentation available for this?
After spending a couple of days on this, I am still confused about how to bridge the gaps between these pieces.
I am a little new to Argo, so it would be helpful if you could point out the gaps in my understanding.

After going through the official documentation:
https://argoproj.github.io/argo-events/eventsources/setup/github/
It explains that once an EventSource is created, it automatically creates a service and a pod. The service name follows the {event-source-name}-eventsource-svc format.
We can then create an ingress on top of that service to expose the webhook endpoint so GitHub can reach it.
https://argoproj.github.io/argo-events/eventsources/naming/ This further explains how to name the EventSource correctly.
Finally, we can create a sensor file and define trigger conditions, if any (https://argoproj.github.io/argo-events/sensors/trigger-conditions/).
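Putting those pieces together, a minimal sketch of a GitHub EventSource plus a Sensor might look like the following. All names here (the EventSource name, repository owner/name, secret reference, and WorkflowTemplate name) are placeholders for illustration, not values from the thread:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: github                      # placeholder name
spec:
  service:
    ports:
      - port: 12000                 # {name}-eventsource-svc listens here
        targetPort: 12000
  github:
    example:                        # event name referenced by the Sensor
      repositories:
        - owner: my-org             # placeholder
          names:
            - my-repo               # placeholder
      webhook:
        endpoint: /push
        port: "12000"
        method: POST
      events:
        - push
      apiToken:
        name: github-access         # placeholder Secret
        key: token
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: github-sensor
spec:
  dependencies:
    - name: push-dep
      eventSourceName: github       # matches the EventSource above
      eventName: example
  triggers:
    - template:
        name: run-workflow
        argoWorkflow:
          operation: submit
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: ci-run-
              spec:
                workflowTemplateRef:
                  name: my-workflow-template   # placeholder template
```

An ingress (or a LoadBalancer service) then points the GitHub webhook URL at the github-eventsource-svc service on port 12000.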


AWS AppConfig Amazon EventBridge extension

Ref - https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions-about-predefined-notification-eventbridge.html
Based on the above documentation, by enabling the EventBridge extension for AWS AppConfig, we can get notifications from AppConfig on actions like ON_DEPLOYMENT_START and ON_DEPLOYMENT_COMPLETE.
I am trying to implement a pub/sub-type architecture using EventBridge and SNS topics. The idea is that services will request the latest configuration from AppConfig once they receive an ON_DEPLOYMENT_COMPLETE event. For the 'AllAtOnce' deployment strategy this seems straightforward: we get a notification when the deployment is complete.
I would like to know how the notifications work with other strategies, like 'AppConfig.Linear50PercentEvery30Seconds', where deployments are rolled out incrementally to a certain percentage of hosts.
Can somebody help me with this?
Thanks!
With strategies like AppConfig.Linear50PercentEvery30Seconds or AppConfig.Canary10Percent20Minutes, we get only one completion notification, the same as with AllAtOnce.
This is not described explicitly, but the docs contain sentences like "If no alarms are received in this time, the deployment is complete."
It seems there is no way to get notified when each growth-factor step ends.
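Since there is only the single completion notification, a consumer can treat it as a "refresh now" signal. Below is a hedged Python sketch: the field names checked in is_deployment_complete are an assumption about the notification payload (not confirmed by the docs quoted here), while the AppConfigData calls (start_configuration_session / get_latest_configuration) are the standard way to pull the latest configuration:

```python
def is_deployment_complete(event: dict) -> bool:
    # ASSUMED payload shape: the extension's ON_DEPLOYMENT_COMPLETE action
    # is guessed here to surface as a "Type" field in the event detail.
    detail = event.get("detail", {})
    return detail.get("Type") == "OnDeploymentComplete"

def fetch_latest_config(app_id: str, env_id: str, profile_id: str) -> bytes:
    # Lazy import so the module loads even without the AWS SDK installed.
    import boto3
    client = boto3.client("appconfigdata")
    session = client.start_configuration_session(
        ApplicationIdentifier=app_id,
        EnvironmentIdentifier=env_id,
        ConfigurationProfileIdentifier=profile_id,
    )
    resp = client.get_latest_configuration(
        ConfigurationToken=session["InitialConfigurationToken"]
    )
    return resp["Configuration"].read()

def handle(event: dict):
    # Hypothetical identifiers; replace with your own.
    if is_deployment_complete(event):
        return fetch_latest_config("app-id", "env-id", "profile-id")
    return None
```

Verify the actual event shape against a sample message from your own event bus before relying on the parsing above.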

How to pull data from AWS Security Hub using Scheduler?

How can I pull data from AWS Security Hub automatically using a scheduler?
I am new to AWS. On doing some analysis, I found the following:
In Security Hub, data is in JSON format, and we don't have an option to export it to CSV/Excel?
Are all Security Hub findings/insights automatically sent to EventBridge? Is that true? If yes, where can I check this in EventBridge?
Are there any other options to pull data from Security Hub automatically every 12 hours? I want to take the data from Security Hub and pass it to an ETL process in order to apply some logic to it.
Is EventBridge the only and best approach for this?
Taking your points in order:
It is JSON-based, but in AWS's own format, named the AWS Security Finding Format (ASFF).
It is true (for all resources that Security Hub supports and is able to see). It should be noted that each Security Hub Findings - Imported event contains a single finding.
In order to see those events, you'll need to create an EventBridge rule based on the format for each type of event.
How to create a rule for automatically sent events (Security Hub Findings - Imported)
In addition, you can create a custom action in Security Hub and then have an EventBridge event filter for it too.
Once you have that set up, the event could trigger an automatic action like:
Invoking an AWS Lambda function
Invoking the Amazon EC2 run command
Relaying the event to Amazon Kinesis Data Streams
Activating an AWS Step Functions state machine
Notifying an Amazon SNS topic or an Amazon SQS queue
Sending a finding to a third-party ticketing, chat, SIEM, or incident response and management tool.
In general, EventBridge is the way forward, but rather than using a schedule-based approach, you'll need to resort to an event-based one.
In order to intercept all findings, instead of the rule being triggered by just a specific one, you'll need to adjust the filter and essentially create a catch-all rule for Security Hub, which will then trigger your ETL job.
EDIT (as requested in comment):
The filter in the rule would look like this:
{
  "source": [
    "aws.securityhub"
  ]
}
With regard to the ETL, it really depends on your use case: having Kinesis Data Firehose dump the findings to S3 and then using Athena, as you suggested yourself, would work. Another common approach is to send the data to Elasticsearch (or now OpenSearch).
This blog post describes them both; you can adjust it based on your needs.
EDIT 2:
Based on the discussion in the comments section: if you really want to use a cron-based approach, you'll need to use the SDK for your preferred language and build something around the GetFindings API that polls Security Hub for data.
As an example, you can use this function in Python, which extracts data from Security Hub to Azure Sentinel.
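As a rough Python sketch of such polling with boto3, the helper below builds a GetFindings filter for the last N hours with an explicit Start/End DateFilter and pages through the results. The function names are made up for illustration:

```python
from datetime import datetime, timedelta, timezone

def updated_since_filter(hours: int) -> dict:
    """Build a GetFindings Filters dict selecting findings whose
    UpdatedAt falls within the last `hours` hours (UTC)."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    return {
        "UpdatedAt": [{
            "Start": start.strftime("%Y-%m-%dT%H:%M:%SZ"),
            "End": end.strftime("%Y-%m-%dT%H:%M:%SZ"),
        }]
    }

def poll_findings(hours: int = 12) -> list:
    # Lazy import so the module loads without the AWS SDK installed.
    import boto3
    client = boto3.client("securityhub")
    paginator = client.get_paginator("get_findings")
    findings = []
    for page in paginator.paginate(Filters=updated_since_filter(hours)):
        findings.extend(page["Findings"])
    return findings
```

A scheduled job (cron, EventBridge Scheduler, etc.) running every 12 hours could call poll_findings(12) and hand the result to your ETL step.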

Monitoring Alert for Cloud Build failure on master

I would like to receive a notification on my notification channel every time a build on master fails in Cloud Build.
There were mentions of using the Log Viewer, but it seems there is no immediate way of accessing the branch.
Is there another way to create a monitoring alert/metric that is specific to master?
An easy solution might be to define a logging metric and link an alerting trigger to it:
Configure Slack alerting in Notification channels of GCP.
Define your logging metric trigger in Logs-based Metrics. Make a Counter with Units 1 and filter using the logging query language:
resource.type="build"
severity=ERROR
or
resource.type="build"
textPayload=~"^ERROR:"
Create an Alerting Policy with that metric you've just defined and link the trigger to your Slack notification channel you've configured in step 1.
Alternatively, you can configure Cloud Build notifiers to send updates to your desired channels, such as Slack or an SMTP server. Cloud Build also publishes messages to a Pub/Sub topic when your build's state changes, such as when your build is created or transitions to a working state.
I just went through the pain of trying to get the official GCP Slack integration via Cloud Run working. It was too cumbersome and didn't let me customize what I wanted.
The best solution I see is to set up Cloud Build to send Pub/Sub messages to the cloud-builds topic. With that, you can use the repo below, which I just made public, to filter on the specific branch you want by looking at the data_json['substitutions']['BRANCH_NAME'] field.
https://github.com/Ucnt/gcp-cloud-build-slack-notifier
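As a sketch of that filtering, a Pub/Sub subscriber (for example, a Cloud Function) could decode the cloud-builds message and check the status and branch. The field names below follow the Build resource JSON, but treat the exact shape as an assumption to verify against your own messages:

```python
import base64
import json

def should_alert(pubsub_message: dict, branch: str = "master") -> bool:
    """Return True if a cloud-builds Pub/Sub message describes a failed
    build on the given branch. The build JSON is base64-encoded in the
    message's "data" field; BRANCH_NAME lives under substitutions."""
    build = json.loads(base64.b64decode(pubsub_message["data"]).decode("utf-8"))
    return (
        build.get("status") == "FAILURE"
        and build.get("substitutions", {}).get("BRANCH_NAME") == branch
    )
```

A subscriber would call should_alert(message) and forward to a Slack webhook only when it returns True.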

How to extract event relayed from AWS EventBridge to ECS Fargate

I'll articulate the question as follows:
Is the EventBridge event relayed to the ECS Task? (I can't see how useful it could be if the event is not relayed.)
If the event is relayed, how can it be extracted from within, say, a Node app running as a Task?
Some context is due: it is possible to set an EventBridge rule to trigger ECS Fargate tasks as the result of events sourced from, say, CodeCommit. Mind you, the issue here is the sink/target, not the source. I was able to trigger a Fargate task as I updated my repo; I could have used other events. My challenge lies in extracting the relayed event (in this case, the repository name, commit ID, etc.) from Fargate.
The EventBridge documentation is clear on how to set up rules to trigger on events but is mum on how events can be extracted, which makes sense, as that belongs in the sink/target documentation. But the ECS documentation is not clear on how to extract relayed events either.
I was able to inspect the task metadata and process.env; I could not find the event in either of those stores.
I added a CloudWatch log group as a target for the same rule and was able to extract the event there. So the event is certainly relayed to some targets, but I am not sure whether events are relayed to the ECS Task.
Therefore, the questions arise: is the event relayed to the ECS Task, and if so, how would you access it?
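The thread leaves this open, but one commonly used pattern (an assumption here, not confirmed above) is that the event is not injected into the container automatically; instead, you attach an input transformer to the EventBridge target that maps event fields into the RunTask container overrides, which then surface as environment variables inside the task. A sketch, with the container name "app" and the CodeCommit detail fields as placeholders:

```json
{
  "InputPathsMap": {
    "repo": "$.detail.repositoryName",
    "commit": "$.detail.commitId"
  },
  "InputTemplate": "{\"containerOverrides\": [{\"name\": \"app\", \"environment\": [{\"name\": \"REPO_NAME\", \"value\": \"<repo>\"}, {\"name\": \"COMMIT_ID\", \"value\": \"<commit>\"}]}]}"
}
```

With something like this in place, a Node app in the task would read process.env.REPO_NAME and process.env.COMMIT_ID. Verify the exact target/override syntax against the EventBridge ECS target documentation for your setup.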

AWS CodeCommit: Repository Notifications vs Repository Triggers

Notifications: https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-repository-email.html
Triggers: https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-notify.html
The official documentation states that for CodeCommit repository events which follow CloudWatch Events rules (like pull requests), we use repository notifications.
Whereas for CodeCommit repository events which are just operational events (like creating branches or pushing code to a branch), we use repository triggers.
I don't understand the difference between 'events which follow CloudWatch Events rules' and 'operational events'. To me, both pull requests and pushing code to a branch seem like similar events.
Thus, I am confused about why we need both repository notifications and repository triggers.
I asked the same question today, and I found this in the docs:
Repository notifications are different from repository triggers. Although you can configure a trigger to use Amazon SNS to send emails about some repository events, those events are limited to operational events, such as creating branches and pushing code to a branch. Triggers do not use CloudWatch Events rules to evaluate repository events. They are more limited in scope. For more information about using triggers, see Manage Triggers for a Repository.
IMO, the AWS documentation does not clearly state the difference between notifications, triggers, and CloudWatch Events. Here is my understanding:
Notifications should be used for literal notification, not for taking action based on the event.
Triggers are supposed to initiate action. So, if I need to invoke some service based on the event the trigger watches, I would use a trigger, hence the option to integrate the Lambda service. It is a way to add automation after CodeCommit events.
However, CloudWatch Events provides a wide variety of integration options for CodeCommit events which are not available with triggers.
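For example, a minimal CloudWatch Events/EventBridge pattern that matches pull request events (something triggers cannot react to) might look like this sketch:

```json
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Pull Request State Change"]
}
```

Triggers, by contrast, only fire on the operational events the docs list (branch creation/deletion and pushes) and can only target SNS or Lambda.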