How to make blocking AWS CLI calls - amazon-web-services

Most AWS CLI calls are asynchronous: the command returns immediately, so you have no idea whether the end result was successful.
Is there a simple way to check, for example, that an environment was created successfully, other than writing timed polling/verification calls?
Sorry, I did not mention this previously, but I am specifically looking for solutions in PowerShell.

You can check the exit status of the CLI command.
What is an exit code in the bash shell?
Every Linux or Unix command executed by the shell script or user has an exit status. The exit status is an integer; an exit status of 0 means the command completed successfully without any errors.
code snippet:
aws <cli-command>
if [ $? -ne 0 ]; then
    echo "Error"
    exit 1
else
    echo "Passed"
fi
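Since `if` tests a command's exit status directly, the `$?` check can also be folded into the `if` itself. A minimal sketch, with a helper function and shell built-ins standing in for a real AWS CLI call (substitute something like `aws s3 ls` in practice):

```shell
# Run any command and report based on its exit status.
# "$@" is the command to run; replace the placeholders below
# with a real AWS CLI invocation.
run_and_check() {
    if "$@"; then
        echo "Passed"
    else
        echo "Error"
        return 1
    fi
}

# Built-ins standing in for an AWS CLI call:
run_and_check true                                  # prints "Passed"
run_and_check false || echo "non-zero exit caught"  # prints "Error", then the message
```

This form avoids the subtle bug where an intervening command between the AWS call and the `$?` test overwrites the exit status you meant to check.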
Another method is to wait for a response from the command:
while :
do
    sleep 10
    echo "Waiting for elasticsearch domain endpoint..."
    ELASTICSEARCH_ENDPOINT=$(aws es describe-elasticsearch-domain --domain-name ${ES_DOMAIN_NAME} --region ${AWS_REGION} --output text --query 'DomainStatus.Endpoints.vpc')
    if [ "${ELASTICSEARCH_ENDPOINT}" != "null" ]
    then
        echo "Elasticsearch endpoint: ${ELASTICSEARCH_ENDPOINT}"
        break
    fi
done
Note that `local` is only valid inside a function, so it has been dropped from the assignment, and the variable in the test is quoted so an empty response does not break the comparison.
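Also worth noting: many services ship built-in waiters (`aws <service> wait <subcommand>`, e.g. `aws cloudformation wait stack-create-complete --stack-name my-stack`) that do this polling for you. Where no waiter exists, the loop above can be generalized into a reusable helper with a bounded number of attempts, so a script cannot hang forever. A sketch, with the AWS call left as a placeholder parameter:

```shell
# Poll a command until it prints something other than "" or "null",
# or give up after a fixed number of attempts.
# $1 = max attempts, $2 = sleep seconds between attempts,
# remaining args = the command to run (in real use, the
# "aws es describe-elasticsearch-domain ... --query ..." call).
wait_for_output() {
    local max_attempts=$1 interval=$2
    shift 2
    local attempt=1 result
    while [ "$attempt" -le "$max_attempts" ]; do
        result=$("$@")
        if [ -n "$result" ] && [ "$result" != "null" ]; then
            echo "$result"
            return 0
        fi
        echo "Attempt $attempt/$max_attempts: not ready yet..." >&2
        sleep "$interval"
        attempt=$((attempt + 1))
    done
    return 1
}

# Placeholder command standing in for the AWS CLI call:
wait_for_output 5 1 echo "vpc-endpoint.example.com"
```

The non-zero return on timeout lets the caller distinguish "resource never became ready" from a successful wait.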

Related

Why my AWS Lambda function doesn't wait for SSM Send Command (which runs Shell script remotely on EC2) to complete?

I have the below Python code inside a Lambda function:
print("Executing shell script-")
response = ssm_client.send_command(
    InstanceIds=ec2_instance,
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [f"sh /bin/test/publishImage.sh -c {ecr} -r {ecr_repo} -r {region} -e {environment}"]
    },
    OutputS3BucketName="deploymentlogs",
    OutputS3Region=region
)
print("Done - Executing shell script.")
When executed, Lambda returns immediately with status code 200 in just a few seconds, whereas the publishImage.sh script takes around 3 minutes to complete.
I want Lambda to return only after the publishImage.sh script completes its execution, irrespective of success or failure.
How can I achieve this? Please help.
Thanks
That's expected, because send_command does not wait for the completion of the command on the remote host.
If you want the lambda function to wait until the shell script has finished running, you will need to implement some kind of waiting logic to check the status of the command. You can check the status of the remote command execution by using the get_command_invocation function.
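If the orchestration were done from a shell instead of boto3, the same waiting logic can be sketched as a polling loop around the status returned by the command invocation (the AWS CLI also ships an `aws ssm wait command-executed` waiter for this). In the sketch below the status-fetching command is a placeholder parameter, so the loop logic is self-contained; in real use it would be the `aws ssm get-command-invocation --command-id ... --instance-id ... --query Status --output text` call:

```shell
# Poll until the SSM command reaches a terminal state.
# "$@" = a command that prints the current Status string.
wait_for_ssm_command() {
    local status
    while true; do
        status=$("$@")
        case "$status" in
            Success)
                echo "Command succeeded"
                return 0 ;;
            Failed|Cancelled|TimedOut)
                echo "Command ended with status: $status"
                return 1 ;;
            *)  # Pending, InProgress, etc. - keep waiting
                sleep 2 ;;
        esac
    done
}

# Placeholder status source standing in for the real CLI call:
wait_for_ssm_command echo "Success"
```

Keep in mind that a Lambda function waiting like this is billed for the whole wait, and the function timeout must be longer than the script's run time (about 3 minutes here).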

aws command not getting recognized after MSI install

I am new to PowerShell. I can't figure out why, after successfully installing the AWS CLI, I intermittently get an "aws command not recognized" error. I put in a sleep thinking some environment variables might be getting set in the background. I need help figuring out what I have to do here to be able to successfully execute the $putItem command.
I followed the instructions here https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html
PLEASE NOTE: The whole thing has to be automated, so I can't manually log in to a host and fix something, as this same script has to run on 100+ hosts.
Write-Output "Checking if AWS CLI support exists..."
cmd.exe /c "aws --version"
if ($LASTEXITCODE -eq 0) {
    Write-Output "AWS CLI installed already"
} else {
    Write-Output "Installing AWS CLI V2"
    cmd.exe /c "msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi /qn"
    if ($LASTEXITCODE -eq 0) {
        Write-Output "AWS CLI installed successfully"
        Start-Sleep -s 5
    } else {
        Write-Output "Could not install AWS CLI"
        exit 1
    }
}
$putItem = 'aws dynamodb put-item --table-name ' + $instanceStatusDDBTable + ' --item "{\"HostName\" : {\"S\" : \"' + $instanceName + '\"}, \"Modules\" : {\"M\" : {}}, \"DAGName\" : {\"S\" : \"' + $dagName +'\"}}"'
Write-Output "Executing DB put item query $putItem"
cmd.exe /c $putItem
if ($LASTEXITCODE -eq 0) {
    Write-Output "Created entry for $instanceName in $instanceStatusDDBTable DDB table"
} else {
    Write-Output "Could not complete put Item operation for $instanceName"
    exit 1
}
Here is the output
Checking if AWS CLI support exists...
Installing AWS CLI V2
AWS CLI installed successfully
Executing DB put item query aws dynamodb put-item --table-name Ex2019-HostStatusTable --item "{\"HostName\" : {\"S\" : \"Host1\"}, \"Modules\" : {\"M\" : {}}, \"DAGName\" : {\"S\" : \"USW-D01\"}}"
Could not complete put Item operation for Host1
Error output -
'aws' is not recognized as an internal or external command,
operable program or batch file.
Try adding the code below after you check your $LASTEXITCODE variable. The shell session has to re-read the updated environment variables your installer just added, since PATH changes made by the MSI are not visible to the session that started it. See this response for more info.
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
You may also want to consider using Start-Process with the -Wait and -PassThru parameters to invoke your installer, as cmd may not wait long enough for the app to finish installing. You can read up on it here. I agree with David: you could just check whether it's installed by running aws --version and then reading the version number, or catching the error in a try/catch block.

Jupyterlab health check failing for AI Notebook

The JupyterLab health check is intermittently failing for my AI Notebook. I noticed that Jupyter service still works even when the health check fails, since my queries still get executed successfully, and I'm able to open my notebook by clicking "Open Jupyterlab".
How can I debug why this health check is failing?
jupyterlab status failing
This is a new feature in AI Platform Notebooks: there is now a health agent that runs inside the notebook instance, and this specific check verifies the Jupyter service status:
The check is:
if [ "$(systemctl show --property ActiveState jupyter | awk -F "=" '{print $2}')" == "active" ]; then echo 1; else echo -1; fi
You can use gcloud notebooks instances get-health to get more details.
What DLVM version/image and machine type are you using?
If you are using container based the check is:
docker exec -i payload-container ps aux | grep /opt/conda/bin/jupyter-lab | grep -v grep >/dev/null 2>&1; test $? -eq 0 && echo 1 || echo -1
And make sure: report-container-health and report-system-health are both True.

Waiting for K8S Job to finish [duplicate]

This question already has answers here:
Tell when Job is Complete
(7 answers)
Closed 3 years ago.
I'm looking for a way to wait for a Job to finish executing successfully once deployed.
The Job is deployed from Azure DevOps through CD onto K8S on AWS. It runs one-time incremental database migrations using Fluent migrations each time it's deployed. I need to read the pod.status.phase field.
If the field is "Succeeded", the CD will continue. If it's "Failed", the CD stops.
Anyone have an idea how to achieve this?
I think the best approach is to use the kubectl wait command:
Wait for a specific condition on one or many resources.
The command takes multiple resources and waits until the specified
condition is seen in the Status field of every given resource.
It will only return when the Job is completed (or the timeout is reached):
kubectl wait --for=condition=complete job/myjob --timeout=60s
If you don't set a --timeout, the default wait is 30 seconds.
Note: kubectl wait was introduced on Kubernetes v1.11.0. If you are using older versions, you can create some logic using kubectl get with --field-selector:
kubectl get pod --field-selector=status.phase=Succeeded
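For those older versions, the field-selector check can be wrapped in the same kind of polling loop. A sketch with the kubectl call parameterized as a placeholder, so the loop logic is self-contained; in real use the placeholder would be something like `kubectl get pod "$POD_NAME" -o jsonpath='{.status.phase}'`:

```shell
# Poll a phase-reporting command until the pod reaches a terminal phase.
# "$@" = a command that prints the current pod phase.
wait_for_pod_phase() {
    local phase
    while true; do
        phase=$("$@")
        case "$phase" in
            Succeeded) return 0 ;;
            Failed)    return 1 ;;
            *)         sleep 5 ;;  # Pending, Running, etc. - keep waiting
        esac
    done
}

# Placeholder standing in for the kubectl call:
wait_for_pod_phase echo "Succeeded" && echo "Job finished"
```

Note that for Jobs specifically, checking the Job's own `.status.conditions` (as `kubectl wait --for=condition=complete` does) is more robust than the pod phase, since a Job may retry failed pods.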
We can check Pod status using K8S Rest API.
In order to connect to API, we need to get a token:
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-kubectl-proxy
# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select name of cluster you want to interact with from above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server referring the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Gets the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 -d)
From above code we have acquired TOKEN and APISERVER address.
On Azure DevOps, on your target Release, on Agent Job, we can add Bash task:
#name of K8S Job object we are waiting to finish
JOB_NAME=name-of-db-job
APISERVER=set-api-server-from-previous-code
TOKEN=set-token-from-previous-code
#log APISERVER and JOB_NAME for troubleshooting
echo API Server: $APISERVER
echo JOB NAME: $JOB_NAME
#keep calling API until you get status Succeeded or Failed.
while true; do
    #read all pods and query for the pod containing JOB_NAME using jq.
    #note that you should not have pod names similar to the job name, otherwise you will get multiple results. This script does not expect multiple results.
    res=$(curl -X GET $APISERVER/api/v1/namespaces/default/pods/ --header "Authorization: Bearer $TOKEN" --insecure | jq --arg JOB_NAME "$JOB_NAME" '.items[] | select(.metadata.name | contains($JOB_NAME))' | jq -r '.status.phase')
    if [ "$res" == "Succeeded" ]; then
        echo Succeeded
        exit 0
    elif [ "$res" == "Failed" ]; then
        echo Failed
        exit 1
    else
        echo $res
    fi
    sleep 2
done
Note two fixes to the original: the comparison must use [ "$res" == "Succeeded" ] rather than (res=="Succeeded"), which runs a subshell and never compares anything, and jq needs -r so the phase is printed without surrounding quotes.
If Failed, the script exits with code 1 and the CD stops (if configured that way).
If Succeeded, it exits with code 0 and the CD continues.
In final setup:
- Script is part of artifact and I'm using it inside Bash task in Agent Job.
- I have placed JOB_NAME into Task Env. Vars so it can be used for multiple DB migrations.
- Token and API Server address are in Variable group on global level.
TODO:
curl does not exit with a non-zero code if the URL is invalid. It needs the --fail flag, but even then the line above exits 0, because the pipe means $res takes jq's exit status, not curl's.
The "Unknown" pod status should be handled as well.
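Both TODO items can be sketched together: surface fetch failures as a distinct exit code, and treat "Unknown" as a failure instead of looping forever. The status source is a placeholder parameter here so the logic is self-contained; in the real pipeline it would be the `curl --fail -sS ... | jq ...` pipeline from the script above, ideally run with `set -o pipefail` so curl's failure propagates:

```shell
# Poll a status-reporting command until a terminal phase is seen.
# "$@" = a command that prints the current pod phase.
poll_job_status() {
    local res
    while true; do
        if ! res=$("$@"); then
            echo "Failed to fetch pod status" >&2
            return 2
        fi
        case "$res" in
            Succeeded) echo "Succeeded"; return 0 ;;
            Failed)    echo "Failed";    return 1 ;;
            Unknown)   echo "Unknown pod status"; return 1 ;;
            *)         echo "$res"; sleep 2 ;;
        esac
    done
}

# Placeholder status source:
poll_job_status echo "Succeeded"
```

Distinguishing exit code 2 (could not reach the API) from 1 (the Job failed) lets the CD pipeline retry transient API errors without masking real migration failures.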

command terminated with exit code 1 when executing aws s3 cp in Jenkins Declarative Pipeline

I'm trying to upload my files to an AWS S3 bucket from my Kubernetes pod through Jenkins and got "command terminated with exit code 1". Subsequent commands are also terminated.
I've tried to manually run my command on the pod, and it works perfectly fine.
Here's my code on jenkins declarative pipeline:
sh (script: "kubectl exec -i ${MONGODB_POD_NAME} -n ${ENVIRONMENT} -- bash -c 'aws s3 sync /dump/${AUTOMATION_DUMP_DIRECTORY} s3://${S3_BUCKET}/${ENVIRONMENT}/${AUTOMATION_DUMP_DIRECTORY} --debug'")
Here's the log:
2019-04-29 09:55:23,373 - Thread-1 - awscli.customizations.s3.results - DEBUG - Shutdown request received in result processing thread, shutting down result thread.
command terminated with exit code 1