I'm using 'eb deploy' in my continuous integration script. I'm having 2 problems with it:
It always returns return code 0, even if there is an error. This breaks my deploy pipeline, because there is no way to detect an error.
It displays output only after the command has finished.
Is there any way to make 'eb deploy' work like any normal script and return proper error codes?
This is a known issue reported upstream here. You can fix it by using grep in a pretty straightforward way. Instead of:
eb deploy
Use grep to look for the success string. This will return a non-zero status (i.e. failure) if it can't be found:
eb deploy | tee /dev/tty | grep "update completed successfully"
Note how I used tee to make sure that the output can still be seen on the continuous integration portal (in my case CircleCI).
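For completeness, here is a minimal sketch of how that line could be wrapped into a CI step so a missing success string fails the build (the set -e wrapper and the final echo are my additions, not part of the original answer):
#!/usr/bin/env bash
set -e  # a non-zero status from the grep below aborts the step
eb deploy | tee /dev/tty | grep "update completed successfully"
echo "eb deploy reported success"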
I'm currently moving an application off of static EC2 servers to ECS; until now the release process has been to SSH into the server to git pull and migrate the database.
I've created everything I need using Terraform to deploy my code from my organisation's Elastic Container Registry. I have a cluster, some services and task definitions.
I can deploy the app successfully for any given version now; however, my main problem is finding a way to run migrations.
My approach so far has been to split the application into 3 services. I have a 'web' service, which handles all HTTP traffic (serving the frontend, responding to API requests); a 'cron' service, which handles things like sending emails/push notifications at specific times/events; and a 'migrate' service, which is just the 'cron' service with the container's entryPoint overwritten to run only the migrations (I don't need any of the apache2 stuff for this container, and I didn't see a reason to make another image just for migrations).
The problem I had with this was that the 'migrate' service would constantly try to schedule more tasks for migrating the database, even though it only needed to be done once. So I've scrapped it as a service but kept the task definition, so that I can still run it in my cluster.
As part of the deploy process I'm writing, I run that task inside the cluster via a bash script, so I can wait until the migrations finish before deciding whether to take the application out of maintenance mode (if the migrations fail) or to deploy the new 'web'/'cron' containers once the migration has completed.
Currently this is inside a shell script (run by GitHub Actions) that looks like this:
#!/usr/bin/env bash
CLUSTER_NAME=$1
echo $CLUSTER_NAME
OUTPUT=$(aws ecs run-task --cluster "${CLUSTER_NAME}" --task-definition saas-app-migrate)
if [ $? -ne 0 ]; then
>&2 echo "$OUTPUT"
exit 1
fi
TASKS=$(echo "$OUTPUT" | jq '.tasks[].taskArn' | jq @sh | sed -e "s/'//g" | sed -e 's/"//g')
for task in $TASKS
do
# check for task to be done
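# (Sketch, not part of the original post: one possible way to fill in this loop,
# assuming the standard `aws ecs wait tasks-stopped` and `describe-tasks` commands.)
aws ecs wait tasks-stopped --cluster "${CLUSTER_NAME}" --tasks "$task"
EXIT_CODE=$(aws ecs describe-tasks --cluster "${CLUSTER_NAME}" --tasks "$task" | jq '.tasks[0].containers[0].exitCode')
if [ "$EXIT_CODE" != "0" ]; then
>&2 echo "Migration task $task failed with exit code $EXIT_CODE"
exit 1
fi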
done
Because $TASKS contains the taskArn of any tasks spawned by this, I am freely able to query the tasks; however, I don't know what information I'm looking for.
The AWS documentation says I should use the 'describe-tasks' command to find out why a task has reached the 'STOPPED' status, as it provides 'stopCode' and 'stoppedReason' properties in the response. However, it doesn't say what these values would be if the task stopped successfully. I don't want to have to introduce a manual step in my deployment where I wait until the migrations are done - with the application not being usable - to then tell my release process to continue.
Is there a link to documentation I might have missed with the values I'm searching for, or an alternate way to handle this case?
Here's a problem that is driving me nuts. First off, I am not a Linux expert, so I might just be missing some detail.
I am trying to restart an application (namely rpi-webrtc-streamer, but that shouldn't matter) using a shell script. The reason is that when a configuration change happens I need to update the config files and restart.
The idea is to call a bash script using the system() function and pass in the PID of the current process. The script should then just kill the process using the supplied PID and execute it again. In theory this shouldn't be a problem...
What may be complicating things is that the process needs to run with sudo. I'm not sure if that's the cause, but I thought I should mention it.
Now this is the script:
#!/bin/bash
echo "restarting streamer..."
echo "killing process with PID $1"
kill $1
# I have tried different intervals, even 10 seconds, doesn't help
sleep 2
echo "running new streamer instance"
echo "path:"
pwd
#printenv
echo "id -u"
# just to verify the script runs with sudo
id -u
./webrtc-streamer --verbose
echo "done"
The problem is that the application fails with the following error:
(direct_socket.cc:77): Failed to listen 0.0.0.0:8888.
... and then it shuts down. Obviously it's not able to open the port. It almost looks as if the previous instance of the app is still holding the port open. I have tried tweaking the number of seconds in the sleep, but that shouldn't be the problem: first, the script continues only after the process is actually killed, and second, the process shuts down immediately anyway, as I can see from the logs.
If I run the app immediately after the script fails, from the shell that executed the initial app in the first place, it runs without any issues (it is able to open the port), no matter how many seconds the script waited in the sleep previously.
The only other thing I thought of was that the bash script might be running with different environment variables. I tried to print those, but I don't see anything significant.
I also verified that the app does not change the working directory, but again that should not be a problem, as it actually launches; it just exits after not being able to open the port.
I also tried adding sudo before the app execution in the script (which shouldn't be necessary AFAIK). Doesn't make a difference.
Any ideas?
As suggested by jordanm in the comments, I solved the problem by using systemd.
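In case it helps anyone, here is a minimal sketch of what such a unit could look like (the unit name, paths and user directory are placeholders, not from the original setup):
# /etc/systemd/system/webrtc-streamer.service  (hypothetical path/name)
[Unit]
Description=rpi-webrtc-streamer
After=network.target

[Service]
WorkingDirectory=/home/pi/rpi-webrtc-streamer
ExecStart=/home/pi/rpi-webrtc-streamer/webrtc-streamer --verbose
Restart=on-failure

[Install]
WantedBy=multi-user.target
After sudo systemctl daemon-reload and sudo systemctl enable --now webrtc-streamer, the application can trigger a clean restart with system("sudo systemctl restart webrtc-streamer") instead of killing and re-spawning the process itself; systemd stops the old instance (releasing the port) before starting the new one.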
I'm looking for a way to wait for a Job to finish executing successfully once deployed.
The Job is being deployed from Azure DevOps through CD onto K8S on AWS. It runs one-time incremental database migrations using Fluent Migrations each time it's deployed. I need to read the pod.status.phase field.
If the field is "Succeeded", then the CD will continue. If it's "Failed", the CD stops.
Anyone have an idea how to achieve this?
I think the best approach is to use the kubectl wait command:
Wait for a specific condition on one or many resources.
The command takes multiple resources and waits until the specified
condition is seen in the Status field of every given resource.
It will only return when the Job is completed (or the timeout is reached):
kubectl wait --for=condition=complete job/myjob --timeout=60s
If you don't set a --timeout, the default wait is 30 seconds.
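For example, a CD gate could be as simple as the following sketch (the job name myjob and the 300s timeout are just placeholders); note that if the Job fails, kubectl wait --for=condition=complete will typically keep waiting until the timeout rather than returning early:
if kubectl wait --for=condition=complete job/myjob --timeout=300s; then
  echo "Job completed successfully"
else
  echo "Job did not complete within the timeout (or failed)" >&2
  exit 1
fi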
Note: kubectl wait was introduced in Kubernetes v1.11.0. If you are using an older version, you can create some logic using kubectl get with --field-selector:
kubectl get pod --field-selector=status.phase=Succeeded
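On those older versions, a small polling loop (a sketch; the job name myjob is assumed) can also emulate the wait by checking the Job's own status counters instead of pod phases. Note that with a non-zero backoffLimit the failed counter can increment while retries are still in progress:
while true; do
  succeeded=$(kubectl get job myjob -o jsonpath='{.status.succeeded}')
  failed=$(kubectl get job myjob -o jsonpath='{.status.failed}')
  if [ "$succeeded" = "1" ]; then
    exit 0
  elif [ -n "$failed" ] && [ "$failed" -ge 1 ]; then
    exit 1
  fi
  sleep 2
done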
We can check the Pod status using the K8S REST API.
In order to connect to the API, we need to get a token:
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-kubectl-proxy
# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select name of cluster you want to interact with from above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server, referring to the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Gets the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}" | base64 -d)
From the above code we have acquired the TOKEN and the APISERVER address.
On Azure DevOps, on your target Release, in the Agent Job, we can add a Bash task:
#name of K8S Job object we are waiting to finish
JOB_NAME=name-of-db-job
APISERVER=set-api-server-from-previous-code
TOKEN=set-token-from-previous-code
#log APISERVER and JOB_NAME for troubleshooting
echo API Server: $APISERVER
echo JOB NAME: $JOB_NAME
#keep calling API until you get status Succeeded or Failed.
while true; do
#read all pods and query for pod containing JOB_NAME using jq.
#note that you should not have pod names similar to the job name, otherwise you will get multiple results. This script is not expecting multiple results.
res=$(curl -X GET "$APISERVER/api/v1/namespaces/default/pods/" --header "Authorization: Bearer $TOKEN" --insecure | jq -r --arg JOB_NAME "$JOB_NAME" '.items[] | select(.metadata.name | contains($JOB_NAME)) | .status.phase')
if [ "$res" = "Succeeded" ]; then
echo Succeeded
exit 0
elif [ "$res" = "Failed" ]; then
echo Failed
exit 1
else
echo "$res"
fi
sleep 2
done
If it Failed, the script exits with code 1 and the CD stops (if configured that way).
If it Succeeded, the script exits with code 0 and the CD continues.
In final setup:
- The script is part of the artifact and I'm using it inside a Bash task in the Agent Job.
- I have placed JOB_NAME into the Task's Env. Vars so it can be used for multiple DB migrations.
- The Token and API Server address are in a Variable group at the global level.
TODO:
curl does not exit with a non-zero code if the URL is invalid. It needs the --fail flag, but even with it the line above still exits 0, because the exit status of the pipeline is that of the last command (jq).
The "Unknown" Pod status should be handled as well.
(Docker container on AWS-ECS exits before all the logs are printed to CloudWatch Logs)
Why are some streams of a CloudWatch Logs group incomplete (i.e., the Fargate Docker container exits successfully but the logs stop being updated abruptly)? I'm seeing this intermittently, in almost all log groups, but not on every log stream/task run. I'm running on version 1.3.0.
Description:
A Dockerfile runs node.js or Python scripts using the CMD command.
These are not servers/long-running processes, and my use case requires the containers to exit when the task completes.
Sample Dockerfile:
FROM node:6
WORKDIR /path/to/app/
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "node", "run-this-script.js" ]
All the logs are printed correctly to my terminal's stdout/stderr when this command is run on the terminal locally with docker run.
To run these as ECS Tasks on Fargate, the log driver is set to awslogs from a CloudFormation template.
...
LogConfiguration:
LogDriver: 'awslogs'
Options:
awslogs-group: !Sub '/ecs/ecs-tasks-${TaskName}'
awslogs-region: !Ref AWS::Region
awslogs-stream-prefix: ecs
...
Seeing that sometimes the CloudWatch Logs output is incomplete, I have run tests and checked every limit from the CW Logs Limits page and am certain the problem is not there.
I initially thought this was an issue with Node.js exiting asynchronously before console.log() is flushed, or that the process was exiting too soon, but the same problem occurs when I use a different language as well - which makes me believe this is not an issue with the code, but rather with CloudWatch specifically.
Inducing delays in the code by adding a sleep timer has not worked for me.
It's possible that since the Docker container exits immediately after the task is completed, the logs don't get enough time to be written over to CW Logs, but there must be a way to ensure that this doesn't happen?
sample logs:
incomplete stream:
{ "message": "configs to run", "data": {"dailyConfigs":"filename.json"]}}
running for filename
completed log stream:
{ "message": "configs to run", "data": {"dailyConfigs":"filename.json"]}}
running for filename
stdout: entered query_script
... <more log lines>
stderr:
real 0m23.394s
user 0m0.008s
sys 0m0.004s
(node:1) DeprecationWarning: PG.end is deprecated - please see the upgrade guide at https://node-postgres.com/guides/upgrading
UPDATE: This now appears to be fixed, so there is no need to implement the workaround described below
I've seen the same behaviour when using ECS Fargate containers to run Python scripts - and had the same resulting frustration!
I think it's due to CloudWatch Logs Agent publishing log events in batches:
How are log events batched?
A batch becomes full and is published when any of the following conditions are met:
The buffer_duration amount of time has passed since the first log event was added.
Less than batch_size of log events have been accumulated but adding the new log event exceeds the batch_size.
The number of log events has reached batch_count.
Log events from the batch don't span more than 24 hours, but adding the new log event exceeds the 24 hours constraint.
(Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html)
So a possible explanation is that log events are buffered by the agent but not yet published when the ECS task is stopped. (And if so, that seems like an ECS issue - any AWS ECS engineers willing to give their perspective on this...?)
There doesn't seem to be a direct way to ensure the logs are published, but it does suggest one could wait at least buffer_duration seconds (by default, 5 seconds), and any prior logs should be published.
With a bit of testing that I'll describe below, here's a workaround I landed on. A shell script run_then_wait.sh wraps the command to trigger the Python script, to add a sleep after the script completes.
Dockerfile
FROM python:3.7-alpine
ADD run_then_wait.sh .
ADD main.py .
# The original command
# ENTRYPOINT ["python", "main.py"]
# To run the original command and then wait
ENTRYPOINT ["sh", "run_then_wait.sh", "python", "main.py"]
run_then_wait.sh
#!/bin/sh
set -e
# Wait 10 seconds on exit: twice the `buffer_duration` default of 5 seconds
trap 'echo "Waiting for logs to flush to CloudWatch Logs..."; sleep 10' EXIT
# Run the given command
"$#"
main.py
import logging
import time
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
if __name__ == "__main__":
    # After testing some random values, had most luck to induce the
    # issue by sleeping 9 seconds here; would occur ~30% of the time
    time.sleep(9)
    logger.info("Hello world")
Hopefully the approach can be adapted to your situation. You could also implement the sleep inside your script, but it can be trickier to ensure it happens regardless of how it terminates.
It's hard to prove that the proposed explanation is accurate, so I used the above code to test whether the workaround was effective. The test was the original command vs. with run_then_wait.sh, 30 runs each. The results were that the issue was observed 30% of the time, vs 0% of the time, respectively. Hope this is similarly effective for you!
Just contacted AWS support about this issue and here is their response:
...
Based on that case, I can see that this occurs for containers in a
Fargate Task that exit quickly after outputting to stdout/stderr. It
seems to be related to how the awslogs driver works, and how Docker in
Fargate communicates to the CW endpoint.
Looking at our internal tickets for the same, I can see that our
service team are still working to get a permanent resolution for this
reported bug. Unfortunately, there is no ETA shared for when the fix
will be deployed. However, I've taken this opportunity to add this
case to the internal ticket to inform the team of the similar and try
to expedite the process
In the meantime, this can be avoided by extending the lifetime of the
exiting container by adding a delay (~>10 seconds) between the logging
output of the application and the exit of the process (exit of the
container).
...
Update:
Contacted AWS around August 1st, 2019; they say this issue has been fixed.
I observed this as well. It must be an ECS bug?
My workaround (Python 3.7):
import atexit
import logging
from time import sleep

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()

def finalizer():
    logger.info("All tasks have finished. Exiting.")
    # Workaround:
    # Fargate will exit and the final batch of CloudWatch logs will be lost
    sleep(10)

atexit.register(finalizer)
I had the same problem with flushing logs to CloudWatch.
Following asavoy's answer, I switched from the exec form to the shell form of ENTRYPOINT and added a 10-second sleep at the end.
Before:
ENTRYPOINT ["java","-jar","/app.jar"]
After:
ENTRYPOINT java -jar /app.jar; sleep 10
I made a task definition in AWS ECS as shown in the screenshots:
Now when I run the task definition in a cluster, it runs successfully, but the status of the container remains unhealthy forever. When I run the same health check command (a curl command) manually inside the container, it succeeds. I have also tried CMD instead of CMD-SHELL, but nothing works. Inside the container, Apache is running on port 80.
Note: I am making the Docker image by committing a Docker container, not from a Dockerfile.
I don't understand why the health check is not working, and I haven't found anything significant online. Please help if you have faced this issue before.
You have made a mistake in the command in the HEALTHCHECK section (the double pipe).
I suppose you want to use CMD-SHELL,curl --fail http://localhost/ || exit 1 instead of ...|exit 1|
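For reference, the equivalent healthCheck block in the task definition JSON would look roughly like this (a sketch; the interval/timeout/retries values are just examples):
"healthCheck": {
    "command": ["CMD-SHELL", "curl --fail http://localhost/ || exit 1"],
    "interval": 30,
    "timeout": 5,
    "retries": 3
}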
Remove the double quotes around the command; it should work.