How to get the AWS SSM Document execution id inside runCommand - amazon-web-services

I am trying to run a shell script inside an SSM document and trigger it every 2 minutes through event triggers. For logging purposes, I want to get the execution ID of the document for each run so that I can append it to the execution log, which will later be pushed to Datadog.
Here is my document code.
        mainSteps:
          - action: aws:runShellScript
            name: runCurlCommand
            inputs:
              runCommand:
              - |
                #!/bin/bash
                exec  1> >(tee -ia /log/infrascheduler.log)
                exec  2> >(tee -ia /log/infrascheduler.log >& 2)
                echo "{{automation:EXECUTION_ID}}"  # (How to get this..?)
                echo "other commands"
How can I get the execution ID inside the SSM document? I am looking for something similar to automation:EXECUTION_ID in https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-variables.html.
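For what it's worth, {{automation:EXECUTION_ID}} is only resolved in Automation documents, not in Command documents run through Run Command, so it will not be substituted here. One workaround I have seen (a sketch based on an assumption about amazon-ssm-agent's working-directory layout, worth verifying on your own instances) is to recover the command ID from the directory the agent runs the step in:

              runCommand:
              - |
                #!/bin/bash
                exec 1> >(tee -ia /log/infrascheduler.log)
                exec 2> >(tee -ia /log/infrascheduler.log >&2)
                # Assumption: the agent executes this step from a path like
                # /var/lib/amazon/ssm/<instance-id>/document/orchestration/<command-id>/awsrunShellScript/...
                COMMAND_ID=$(pwd | sed -n 's|.*/orchestration/\([^/]*\)/.*|\1|p')
                echo "SSM command id: ${COMMAND_ID}"
                echo "other commands"

Alternatively, if the event trigger goes through a small wrapper (e.g. a Lambda calling send-command), that wrapper receives the CommandId in the send-command response and can log or forward it for correlation.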

Related

Cypress AWS codebuild error: spec must be a string or comma-separated list

I am trying to implement parallel testing in AWS CodeBuild. I created a buildspec.yml file like this sample project:
https://github.com/cypress-io/cypress-realworld-app/blob/develop/buildspec.yml
My problem is that the environment variables I use in the Cypress command come through empty.
- echo $CY_GROUP_SPEC
- CY_GROUP=$(echo $CY_GROUP_SPEC | cut -d'|' -f1)
- CY_BROWSER=$(echo $CY_GROUP_SPEC | cut -d'|' -f2)
- CY_SPEC=$(echo $CY_GROUP_SPEC | cut -d'|' -f3)
- CY_CONFIG=$(echo $CY_GROUP_SPEC | cut -d'|' -f4)
The Cypress CodeBuild job then fails with this error:
Opening Cypress...
Cypress encountered an error while parsing the argument: --spec
You passed: true
The error was: spec must be a string or comma-separated list
I use this command to run cypress:
- NO_COLOR=1 ./node_modules/.bin/cypress run --browser $CY_BROWSER --spec "$CY_SPEC" --config "$CY_CONFIG" --headless. --record --key $CYPRESS_KEY --parallel --ci-build-id $CODEBUILD_INITIATOR --group "$CY_GROUP"
I defined these env variables at the top of the file like this:
batch:
  build-matrix:
    dynamic:
      env:
        image:
          - ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/cypress:latest
        variables:
          CY_GROUP_SPEC:
            - "UI - Chrome|chrome|cypress/e2e/account/*"
            - "UI - Chrome|chrome|cypress/e2e/auth/*"
            - "UI - Chrome|chrome|cypress/e2e/mastering/*"
            - "UI - Chrome|chrome|cypress/e2e/pages/**/*"
            - "UI - Chrome|chrome|cypress/e2e/user-flows/**/*"
          WORKERS:
            - 1
            - 2
            - 3
            - 4
            - 5
How can I fix this problem?
Thanks
The errors definitely tell you that the command is wrong. Check that carefully.
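Two things stand out in the command as posted (observations from the question text, not from the comment above): there is a trailing "." after --headless, and the CY_GROUP_SPEC entries contain only three |-separated fields, so the fourth field (CY_CONFIG) will always be empty. It may also be worth confirming the build was actually started as a batch build; my assumption is that the build-matrix variables are only injected in that mode. A minimal debugging sketch for the build phase commands:

    - echo "CY_GROUP_SPEC=[$CY_GROUP_SPEC]"
    - echo "CY_GROUP=[$CY_GROUP] CY_BROWSER=[$CY_BROWSER] CY_SPEC=[$CY_SPEC] CY_CONFIG=[$CY_CONFIG]"
    - |
      if [ -z "$CY_SPEC" ]; then
        echo "CY_SPEC is empty - was this started as a batch build, and does CY_GROUP_SPEC have enough '|' fields?"
        exit 1
      fi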

Ansible match inventory groups in role with dynamic inventory in GCP

I have a dynamic inventory in GCP for an app that runs on multiple VMs.
However I cannot seem to match any hosts with my playbook that calls a role.
The dynamic inventory creates groups with this naming convention, based on GCP labels and the gcp.yml dynamic inventory file:
demo4_costing_db
demo4_costing_app
I can interrogate the entire inventory with ansible-inventory --list -i gcp.yml
Example output from the list command:
"demo4_costing_db": {
"hosts": [
"10.10.60.194",
"10.10.60.195",
"10.10.60.196",
"10.10.60.197",
"10.10.60.198"
]
and so on, for various server functions in the app.
I then have a role that needs to do various disparate tasks to the different inventory groups.
The tasks/main.yml in the role looks like this:
- name: "Install packages"
ansible.builtin.dosomething
action
when: "'costing_db' in group_names"
- name: "Install Python packages"
ansible.builtin.dosomethingelse
different_action
when: "'costing_app' in group_names"
The role is invoked with a playbook like this:
ansible-playbook deploy.yml -i ../inventory/gcp-dynamic/demo4/gcp.yml --extra-vars "targets=demo4" -u ansible --key-file ~/ansible.pem --vault-password-file ~/.ansible/vault_pass.txt
The deployment playbook looks like this:
- hosts: all
  become: true
  become_user: root
  roles:
    - ../roles/postgres
Why does my playbook fail with no hosts matched? From other examples of the when: conditional, I should be able to string-match on group names like that.
My inventory looks something like this:
ansible-inventory --list -i gcp.yml
"all": {
"children": [
"demo4_component_admin",
"demo4_component_artemis",
"demo4_component_batch",
"demo4_component_discovery",
"demo4_component_elastic",
"demo4_component_gateway",
"demo4_component_inbox",
"demo4_component_tools",
"demo4_component_transfer",
"demo4_costing_app",
"demo4_costing_db",
"demo4_inventory_demo4",
"demo4_schema_artemisdb",
"demo4_schema_batchdb",
"demo4_schema_gatewaydb",
"demo4_schema_inboxdb",
"demo4_schema_transferdb",
"ungrouped"
]
}
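One note on the when: clauses themselves (the "no hosts matched" error comes from the play's hosts: pattern / inventory plugin rather than from when:): group_names is a list of exact group names, so 'costing_db' in group_names only matches a group literally named costing_db, not demo4_costing_db. A minimal sketch of both forms, assuming the group names listed above:

- name: "Runs only for hosts in a group literally named demo4_costing_db"
  ansible.builtin.debug:
    msg: "db host (exact group match)"
  when: "'demo4_costing_db' in group_names"

- name: "Runs for hosts in any group whose name contains costing_db"
  ansible.builtin.debug:
    msg: "db host (substring match)"
  when: group_names | select('search', 'costing_db') | list | length > 0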

A playbook with two roles: running role B complains about role A's code, which ran successfully

I am experiencing strange behavior: when I run role B, it complains about role A's code, which I can run successfully on its own! I have reduced this to the following minimal example:
$ cat playbooka.yml
- hosts:
    - host_a
  roles:
    - role: rolea
      tags:
        - taga
    - role: roleb
      tags:
        - tagb
I have tagged the two roles because I want to selectively run role A or role B. They consist of simple tasks, as shown below in this minimal example:
$ cat roles/rolea/tasks/main.yml
- name: Get service_facts
  service_facts:

- debug:
    msg: '{{ ansible_facts.services["amazon-ssm-agent"]["state"] }}'

- when: ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
  meta: end_play
$ cat roles/roleb/tasks/main.yml
- debug:
    msg: "I am roleb"
The preview confirms that I can run individual roles as specified by tags:
$ ansible-playbook playbooka.yml -t taga -D -C --list-hosts --list-tasks
playbook: playbooka.yml
play #1 (host_a): host_a TAGS: []
pattern: ['host_a']
hosts (1):
3.11.111.4
tasks:
rolea : Get service_facts TAGS: [taga]
debug TAGS: [taga]
$ ansible-playbook playbooka.yml -t tagb -D -C --list-hosts --list-tasks
playbook: playbooka.yml
play #1 (host_a): host_a TAGS: []
pattern: ['host_a']
hosts (1):
3.11.111.4
tasks:
debug TAGS: [tagb]
I can run role A OK:
$ ansible-playbook playbooka.yml -t taga -D -C
PLAY [host_a] *************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
TASK [rolea : Get service_facts] ******************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
TASK [rolea : debug] ******************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4] => {
"msg": "running"
}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************
3.11.111.4 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
But when I run role B, it complains about the code in role A, which I just ran successfully!
$ ansible-playbook playbooka.yml -t tagb -D -C
PLAY [host_a] *************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
ERROR! The conditional check 'ansible_facts.services["amazon-ssm-agent"]["state"] != "running"' failed. The error was: error while evaluating conditional (ansible_facts.services["amazon-ssm-agent"]["state"] != "running"): 'dict object' has no attribute 'services'
The error appears to be in '<path>/roles/rolea/tasks/main.yml': line 9, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- when: ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
^ here
We could be wrong, but this one looks like it might be an issue with
unbalanced quotes. If starting a value with a quote, make sure the
line ends with the same set of quotes. For instance this arbitrary
example:
foo: "bad" "wolf"
Could be written as:
foo: '"bad" "wolf"'
I have two questions:
Why should role A's code be involved at all?
Even if it gets involved, ansible_facts has services, and the service is "running", as shown above by running role A.
PS: I am using the latest Ansible 2.10.2 and the latest Python 3.9.1 locally on macOS. The remote Python can be either 2.7.12 or 3.5.2 (Ubuntu 16.04). I worked around the problem by testing whether the dictionary has the services key:
ansible_facts.services is not defined or ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
but it still surprises me that role B interprets role A's code, and interprets it incorrectly. Is this a bug that I should report?
From the notes in the meta module documentation:
Skipping meta tasks with tags is not supported before Ansible 2.11.
Since you run ansible 2.10, the when condition for your meta task in rolea is always evaluated, whatever tag you use. When you use -t tagb, ansible_facts.services["amazon-ssm-agent"] does not exist as you skipped service_facts, and you then get the error you reported.
You can either:
upgrade to Ansible 2.11 (it might be a little soon as I write this answer, since it is not yet available over pip...)
rewrite your condition so that the meta task is skipped when the var does not exist, e.g.
when:
  - ansible_facts.services["amazon-ssm-agent"]["state"] is defined
  - ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
The second solution is still good practice IMO in any situation (e.g. sharing your work with someone running an older version, or accidentally running against a host without the agent installed...).
One other possibility in your specific case is to move the service_facts task to another role higher in the play order, or into the pre_tasks section of your playbook, and tag it always. In that case the task will always play and the fact will always exist, whatever tag you use.
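A minimal sketch of that last option, assuming the playbooka.yml from the question, with the fact gathering moved to pre_tasks and tagged always so it runs whichever tag is selected:

- hosts:
    - host_a
  pre_tasks:
    - name: Get service_facts
      service_facts:
      tags: always
  roles:
    - role: rolea
      tags:
        - taga
    - role: roleb
      tags:
        - tagb

With this layout the meta: end_play condition in rolea can always be evaluated, because ansible_facts.services exists even when only -t tagb is used.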

Gitlab CI pipeline to run jobs parallel in same stage and invoke/trigger other jobs of same stage

I am trying to create an automation pipeline for data loads. I have a scenario as explained below:
stages:
  - stage1
  - stage2

job1:
  stage: stage1
  script:
    - echo "stage 1 job 1"

job2:
  stage: stage1
  script:
    - echo "stage 1 job 2"

job3:
  stage: stage1
  script:
    - echo "stage 1 job 3"

job4:
  stage: stage1
  script:
    - echo "stage 1 job 4"
I want to run job1 and job2 in parallel in the same stage. Then, after job1 and job2 succeed:
job1 will invoke/trigger job3, i.e. job3 will start automatically when job1 succeeds
job2 will invoke/trigger job4, i.e. job4 will start automatically when job2 succeeds
I am writing the pipeline in the .gitlab-ci.yml.
Can anyone help me to implement this?
A strict implementation of your requirements is not possible (to my knowledge); jobs 3 and 4 would need to be in a separate stage (although support for putting them in the same stage is planned). To be clear, the other functional requirements can be fulfilled, i.e.:
job1 and job2 start in parallel
job1 will trigger the job3 (immediately, without waiting for the job2 to finish)
job2 will trigger the job4 (immediately, without waiting for the job1 to finish)
The key is using the needs keyword to convert the pipeline into a directed acyclic graph:
stages:
  - stage-1
  - stage-2

job-1:
  stage: stage-1
  needs: []
  script:
    - echo "job-1 started"
    - sleep 5
    - echo "job-1 done"

job-2:
  stage: stage-1
  needs: []
  script:
    - echo "job-2 started"
    - sleep 60
    - echo "job-2 done"

job-3:
  stage: stage-2
  needs: [job-1]
  script:
    - echo "job-3 started"
    - sleep 5
    - echo "job-3 done"

job-4:
  stage: stage-2
  needs: [job-2]
  script:
    - echo "job-4 started"
    - sleep 5
    - echo "job-4 done"
As the screenshot (not reproduced here) showed, job-3 starts even though job-2 is still running.

Can't execute AWS Lambda function built with Micronaut and Graal: Error decoding JSON stream

I built a native Java AWS Lambda function using Graal and Micronaut, as explained here.
After deploying it to AWS Lambda (custom runtime), I can't successfully execute it.
The error that AWS shows is:
{
    "errorType": "Runtime.ExitError",
    "errorMessage": "RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Error: Runtime exited with error: exit status 1"
}
The AWS log output is:
START RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Version: $LATEST
01:13:08.015 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [ec2, cloud, function]
Error executing function (Use -x for more information): Error decoding JSON stream for type [request]: No content to map due to end-of-input
at [Source: (BufferedInputStream); line: 1, column: 0]
END RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2
REPORT RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Duration: 698.31 ms Billed Duration: 700 ms Memory Size: 512 MB Max Memory Used: 54 MB
RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Error: Runtime exited with error: exit status 1
Runtime.ExitError
But when I test it locally using
echo '{"value":"testing"}' | ./server
I got
01:35:56.675 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [function]
{"value":"New value: testing"}
The function code is:
@FunctionBean("user-data-function")
public class UserDataFunction implements Function<UserDataRequest, UserData> {

    private static final Logger LOG = LoggerFactory.getLogger(UserDataFunction.class);

    private final UserDataService userDataService;

    public UserDataFunction(UserDataService userDataService) {
        this.userDataService = userDataService;
    }

    @Override
    public UserData apply(UserDataRequest request) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Request: {}", request.getValue());
        }
        return userDataService.get(request.getValue());
    }
}
And the UserDataService is:
@Singleton
public class UserDataService {

    public UserData get(String value) {
        UserData userData = new UserData();
        userData.setValue("New value: " + value);
        return userData;
    }
}
To test it on AWS console, I configured the following test event:
{ "value": "aws lambda test" }
PS: I uploaded a zip file to AWS Lambda that contains the "server" and "bootstrap" files to enable the "custom runtime", as explained before.
What am I doing wrong?
Thanks in advance.
Tiago Peixoto.
EDIT: added the lambda test event used on AWS console.
OK, I figured it out. I just changed the bootstrap file from this:
#!/bin/sh
set -euo pipefail
./server
to this:
#!/bin/sh
set -euo pipefail
# Processing
while true
do
  HEADERS="$(mktemp)"
  # Get an event
  EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
  # Execute the handler function from the script
  RESPONSE=$(echo "$EVENT_DATA" | ./server)
  # Send the response
  curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
as explained here