how to add shareIdentifier to AWS EventBridge rule for scheduled execution of AWS Batch job - amazon-web-services

I configured an AWS EventBridge rule (via the web GUI) for running an AWS Batch job. The rule is triggered, but I am getting the following error after invocation:
shareIdentifier must be specified. (Service: AWSBatch; Status Code: 400; Error Code: ClientException; Request ID: 07da124b-bf1d-4103-892c-2af2af4e5496; Proxy: null)
My job uses a scheduling policy and needs shareIdentifier to be set, but I don't know how to set it. Here is a screenshot of the rule configuration:
There are no additional settings for further job arguments/parameters; the only thing I can configure is retries. I also checked the aws-cli command for putting a rule (https://awscli.amazonaws.com/v2/documentation/api/latest/reference/events/put-rule.html), but it doesn't seem to have any additional settings either. Any suggestions on how to solve this? Or working examples?
Edited:
I ended up using the Java SDK for AWS Batch: https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-batch. I have a scheduled method that periodically spawns jobs with the following piece of code:
AWSBatch client = AWSBatchClientBuilder.standard().withRegion("eu-central-1").build();
SubmitJobRequest request = new SubmitJobRequest()
        .withJobName("example-test-job-java-sdk")
        .withJobQueue("job-queue")
        .withShareIdentifier("default")   // the share identifier the rule could not set
        .withJobDefinition("job-type");
SubmitJobResult response = client.submitJob(request);
log.info("job spawn response: {}", response);

Have you tried providing additional settings to your target via the input transformer, as referenced in the AWS docs under AWS Batch Jobs as EventBridge Targets?
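For reference, a target with an input transformer is configured roughly like this via `aws events put-targets` (the ARNs and names below are placeholders, not from the thread); note that the answers here report that no combination of these settings carried shareIdentifier through to Batch:

```json
{
  "Rule": "my-batch-schedule",
  "Targets": [
    {
      "Id": "batch-job-target",
      "Arn": "arn:aws:batch:eu-central-1:123456789012:job-queue/job-queue",
      "RoleArn": "arn:aws:iam::123456789012:role/events-invoke-batch",
      "BatchParameters": {
        "JobDefinition": "job-type",
        "JobName": "example-test-job"
      },
      "InputTransformer": {
        "InputPathsMap": { "time": "$.time" },
        "InputTemplate": "{\"scheduledTime\": <time>}"
      }
    }
  ]
}
```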
FWIW I'm running into the same problem.

I had a similar issue. From both the CLI and the GUI, I just couldn't find a way to pass ShareIdentifier from an EventBridge rule. In the end I had to use a state machine (Step Function) instead:
"States": {
  "Batch SubmitJob": {
    "Type": "Task",
    "Resource": "arn:aws:states:::batch:submitJob.sync",
    "Parameters": {
      "JobName": <name>,
      "JobDefinition": <Arn>,
      "JobQueue": <QueueName>,
      "ShareIdentifier": <Share>
    },
    ...
As you can see, it handles ShareIdentifier fine.
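Fleshed out, a complete state machine definition along these lines might look like this (the names and ARNs are placeholders):

```json
{
  "Comment": "Submit an AWS Batch job with a ShareIdentifier",
  "StartAt": "Batch SubmitJob",
  "States": {
    "Batch SubmitJob": {
      "Type": "Task",
      "Resource": "arn:aws:states:::batch:submitJob.sync",
      "Parameters": {
        "JobName": "example-test-job",
        "JobDefinition": "arn:aws:batch:eu-central-1:123456789012:job-definition/job-type:1",
        "JobQueue": "job-queue",
        "ShareIdentifier": "default"
      },
      "End": true
    }
  }
}
```

The EventBridge rule then targets this state machine instead of the Batch job queue directly.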

Related

Specifying Share Identifier in EventBridge rule for an AWS Batch job

I am writing a CloudFormation template for an AWS Batch job triggered by an EventBridge rule. However, I am getting the following error:
shareIdentifier must be specified. (Service: AWSBatch; Status Code: 400; Error Code: ClientException;
I cannot find any documentation on how to pass a shareIdentifier to my Batch job. How can I add it to my EventBridge rule's CloudFormation template?
I have tried passing it as the Input variable:
Input: |
  {
    "shareIdentifier": "mid"
  }
This was not picked up. I also tried passing shareIdentifier/ShareIdentifier directly in the BatchParameters; this was an unrecognised key.
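For context, here is a sketch of what the rule's target looks like in CloudFormation (resource names are placeholders); at the time of the question, BatchParameters only exposed properties such as JobDefinition, JobName, ArrayProperties, and RetryStrategy, which is why ShareIdentifier was rejected:

```yaml
MyBatchRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 hour)
    Targets:
      - Id: batch-job-target
        Arn: !Ref MyJobQueueArn        # the job queue ARN (placeholder)
        RoleArn: !GetAtt EventsRole.Arn
        BatchParameters:               # note: no ShareIdentifier property
          JobDefinition: !Ref MyJobDefinition
          JobName: my-batch-job
          RetryStrategy:
            Attempts: 2
```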
In the end, I couldn't crack this; I had to wrap it in a state machine step and call that from EventBridge instead. This was the logic in the state machine to add the Share Identifier:
"States": {
  "Batch SubmitJob": {
    "Type": "Task",
    "Resource": "arn:aws:states:::batch:submitJob.sync",
    "Parameters": {
      "JobName": <name>,
      "JobDefinition": <Arn>,
      "JobQueue": <QueueName>,
      "ShareIdentifier": <Share>
    },
If anyone works out how to do it directly from EventBridge, I'd love to hear it.

Phillips-Labs terraform-aws-github-runner not creating EC2 instance

I am trying to set up self-hosted runners for GitHub using Terraform with the Phillips-Labs terraform-aws-github-runner module. I see the GitHub webhook send/receive messages, the SQS queue receiving messages, and those messages being retrieved. The scale-up lambda is firing, and I see the following logs:
2023-01-31 11:50:15.879 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119051 scaleUp] Received workflow_job from {my-org}/terraform-aws-github-self-hosted-runners
{}
2023-01-31 11:50:15.880 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119084 scaleUp] Received event
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
2023-01-31 11:50:16.188 DEBUG [gh-auth:22b11002-76d2-5596-9451-4c51746730c2 index.js:118486 createAuth] GHES API URL: {"runnerType":"Org","runnerOwner":"my-org","event":"workflow_job","id":"11002102910"}
2023-01-31 11:50:16.193 WARN [scale-runners:22b11002-76d2-5596-9451-4c51746730c2 index.js:118529 Runtime.handler] Ignoring error: error:1E08010C:DECODER routines::unsupported
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
I do not see any EC2 instances being created. I suspect the GHES API URL: should have a value after it, but I'm not certain. Also, the final log says it is ignoring an error...
I have confirmed my private key pem file is stored as a multi-line secret in secrets manager.
Any advice would be much appreciated!
It looks like not all the permissions needed by the GitHub app are documented. I needed to add a subscription to the Workflow run event.

Cloud Tasks Not Triggering HTTP Request Endpoints

I have one simple Cloud Tasks queue and have successfully submitted a task to it. The task is supposed to deliver a JSON payload to my API to perform a basic database update. The task is created at the end of a process in a .NET Core 3.1 app running locally on my desktop, triggered by Postman, and the API is a Go app running in Cloud Run. However, the task never seems to fire and never registers an error.
The tasks-in-queue count is always 0 and the running-tasks count is always blank. I have hit the "Run Now" button dozens of times, but it never changes anything, and no log entries or failed attempts are ever registered.
The task is created with an OIDC token, with the service account and audience set to a service account that is authorized to create tokens and invoke the Cloud Run instance.
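For reference, this is roughly how an OIDC-authenticated HTTP task maps onto the Cloud Tasks REST API (`tasks.create`) body; the URLs and service account below are placeholders, and `body` is base64-encoded (here, `{"id": 1}`):

```json
{
  "task": {
    "httpRequest": {
      "url": "https://my-api-abc123-uc.a.run.app/update",
      "httpMethod": "PUT",
      "headers": { "Content-Type": "application/json" },
      "body": "eyJpZCI6IDF9",
      "oidcToken": {
        "serviceAccountEmail": "tasks-invoker@my-project.iam.gserviceaccount.com",
        "audience": "https://my-api-abc123-uc.a.run.app"
      }
    }
  }
}
```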
Screen Shot of Tasks Queue in Google Cloud Console
The task creation log entry shows that it was created OK:
{
  "insertId": "efq7sxb14",
  "jsonPayload": {
    "taskCreationLog": {
      "targetAddress": "PUT https://{redacted}",
      "targetType": "HTTP",
      "scheduleTime": "2020-04-25T01:15:48.434808Z",
      "status": "OK"
    },
    "@type": "type.googleapis.com/google.cloud.tasks.logging.v1.TaskActivityLog",
    "task": "projects/{redacted}/locations/us-central1/queues/database-updates/tasks/0998892809207251757"
  },
  "resource": {
    "type": "cloud_tasks_queue",
    "labels": {
      "target_type": "HTTP",
      "project_id": "{redacted}",
      "queue_id": "database-updates"
    }
  },
  "timestamp": "2020-04-25T01:15:48.435878120Z",
  "severity": "INFO",
  "logName": "projects/{redacted}/logs/cloudtasks.googleapis.com%2Ftask_operations_log",
  "receiveTimestamp": "2020-04-25T01:15:49.469544393Z"
}
Any ideas as to why the tasks are not running? This is my first time using Cloud Tasks so don't rule out the idiot between the keyboard and the chair.
Thanks!
You might be using a non-default service. See Configuring Cloud Tasks queues
Try creating a task from the command line and watch the logs, e.g.
gcloud tasks create-app-engine-task --queue=default \
--method=POST --relative-uri=/update_counter --routing=service:worker \
--body-content=10
In my own case, I used --routing=service:api and it worked straight away. Then I added AppEngineRouting to the AppEngineHttpRequest.
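The AppEngineRouting mentioned here sits inside the task's AppEngineHttpRequest; a sketch of the REST shape (values are placeholders, and `body` is the base64 encoding of `10`, matching the gcloud example above):

```json
{
  "task": {
    "appEngineHttpRequest": {
      "httpMethod": "POST",
      "relativeUri": "/update_counter",
      "appEngineRouting": { "service": "api" },
      "body": "MTA="
    }
  }
}
```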

Is there a way to write git check status back to GitHub when there is a failure in AWS CodePipeline

I am setting up AWS CodePipeline based on the git developer branch.
When the developer commits code, the pipeline is triggered via the webhook. The idea is that when there is a failure in the pipeline and the developer opens a pull request, the reviewer should know that this is a bad branch; he should be able to see the git status of the branch showing the failure.
Earlier I used a build tool called Codeship, which has a GitHub app to do this. Now I have gone through the GitHub API
https://developer.github.com/v3/checks/runs/#create-a-check-run
But I am not sure where to start.
To send a notification when a stage fails, please follow these steps:
Based on the CloudWatch events emitted by CodePipeline [0], trigger a Lambda function [1].
The Lambda function can call the "list-pipeline-executions" API [2], and from that you can fetch all the required values such as the commit ID, status message, etc. [3].
Once the values are fetched, you can publish them to SNS from the same Lambda function. The following posts show how to publish to SNS using Lambda [4][5].
[0] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-cloudwatch-sns-notifications.html
[1] https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
[2] https://docs.aws.amazon.com/cli/latest/reference/codepipeline/list-pipeline-executions.html
[3] https://docs.aws.amazon.com/cli/latest/reference/codepipeline/index.html
[4] https://gist.github.com/jeremypruitt/ab70d78b815eae84e037
[5] Can you publish a message to an SNS topic using an AWS Lambda function backed by node.js?
I have done the following to write the status back to the git repo:
I made use of the GitHub status API:
https://developer.github.com/v3/repos/statuses/
I wrote a Lambda function to do a POST request with these details:
state, target_url, description, context
{
  "state": "success",
  "target_url": "https://example.com/build/status",
  "description": "The build succeeded!",
  "context": "continuous-integration/jenkins"
}
(target_url is the build tool URL, e.g. CodeBuild or Jenkins.)
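A minimal sketch of building that POST request with only the Python standard library (the function and variable names are mine, not from the thread); pass the result to urllib.request to actually send it:

```python
import json

GITHUB_API = "https://api.github.com"

def build_status_request(owner, repo, sha, token, state,
                         target_url, description, context):
    """Return (url, headers, body) for POST /repos/{owner}/{repo}/statuses/{sha}."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/statuses/{sha}"
    headers = {
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github+json",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "state": state,              # one of: pending, success, failure, error
        "target_url": target_url,    # build tool URL, e.g. CodeBuild or Jenkins
        "description": description,
        "context": context,          # e.g. continuous-integration/jenkins
    })
    return url, headers, body
```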
The response should contain a URL such as:
"url": "https://api.github.com/repos//-/statuses/6dcb09b5b57875f334f61aebed695e2e4193db5e",
All these details can be obtained from the CloudWatch event for the pipeline, using patterns of the event detail list:
event.detail.state; event.detail.pipeline; event.region; event['detail']['execution-id'],
data.pipelineExecution.artifactRevisions[0].revisionUrl
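A minimal sketch of pulling those fields out of the incoming event inside the Lambda handler (the field names follow the CodePipeline event shape referenced above; the helper name is mine):

```python
def build_status_payload(event):
    """Extract the values to forward to SNS / the GitHub status API."""
    detail = event.get("detail", {})
    return {
        "state": detail.get("state"),               # e.g. FAILED or SUCCEEDED
        "pipeline": detail.get("pipeline"),
        "execution_id": detail.get("execution-id"),
        "region": event.get("region"),
    }
```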
Cheers.

aws pinpoint update-apns-sandbox-channel command results in: missing credentials

aws --version
aws-cli/1.16.76 Python/2.7.10 Darwin/16.7.0 botocore/1.12.66
I'm trying to programmatically add an APNS_SANDBOX channel to a Pinpoint app. I'm able to do this successfully via the Pinpoint console, but not with the AWS CLI or a Lambda function, which is the end goal. Changes to our Test/Prod environments can only be made via CodePipeline, but for testing purposes I'm trying to achieve this with the AWS CLI.
I've tried both the AWS CLI (using the root credentials) and a Lambda function -- both result in the following error:
An error occurred (BadRequestException) when calling the UpdateApnsSandboxChannel operation: Missing credentials
I have tried setting the Certificate field in the UpdateApnsSandboxChannel JSON object both as the path to the .p12 certificate file and as a string value retrieved from the openssl tool.
Today I worked with someone from AWS support, and they were not able to figure out the issue after a couple of hours of debugging. They said they would send an email to the Pinpoint team, but they did not have an ETA on when they might respond.
Thanks
I ended up getting this to work successfully -- this is why it was failing:
I was originally making the CLI call with the following request object, as this is what is included in the documentation:
aws pinpoint update-apns-sandbox-channel --application-id [physicalID] --cli-input-json file:///path-to-requestObject.json
{
  "APNSSandboxChannelRequest": {
    "BundleId": "com.bundleId.value",
    "Certificate": "P12_FILE_PATH_OR_CERT_AS_STRING",
    "DefaultAuthenticationMethod": "CERTIFICATE",
    "Enabled": true,
    "PrivateKey": "PRIVATEKEY_FILE_PATH_OR_AS_STRING",
    "TeamId": "",
    "TokenKey": "",
    "TokenKeyId": ""
  },
  "ApplicationId": "Pinpoint_PhysicalId"
}
After playing around with it some more, I got it to work by removing BundleId, TeamId, TokenKey, and TokenKeyId. I believe these fields are needed when using a .p8 certificate.
{
  "APNSSandboxChannelRequest": {
    "Certificate": "P12_FILE_PATH_OR_CERT_AS_STRING",
    "DefaultAuthenticationMethod": "CERTIFICATE",
    "Enabled": true,
    "PrivateKey": "PRIVATEKEY_FILE_PATH_OR_AS_STRING"
  },
  "ApplicationId": "Pinpoint_PhysicalId"
}