Do a task regardless of upstream task status - directed-acyclic-graphs

I have a relatively simple DAG defined as follows:
set_variables >> create_conf_file >> check_running_stat_job >> transform_openidm_data >> update_orc_file_variable
create_conf_file >> remove_conf_file
check_running_stat_job >> remove_conf_file
transform_openidm_data >> remove_conf_file
update_orc_file_variable >> remove_conf_file
My goal here is to ensure that remove_conf_file is always executed regardless of the status of all previous tasks.
I've tried using trigger_rule=TriggerRule.ALL_DONE in my PythonOperator call, but remove_conf_file is only executed when all previous tasks succeed.
If the check_running_stat_job task fails, the remove_conf_file task won't be executed.
I want the file to be removed whatever the status of the upstream tasks is: success, failure, or not run.
I've tried several DAG configurations but none seems to work.
[EDIT]
Here's the DAG tree view and DAG view in Airflow:

I've replicated the task structure displayed in your image and successfully ran the remove_conf_file task after the spark_etl task failed:
The key was adding trigger_rule='all_done' to the remove_conf_file task. This doesn't mean that all upstream tasks need to execute successfully, just that they need to be finished (irrespective of success or failure). I used a BashOperator instead of the PythonOperator you mention in your question.
This git repository contains the corresponding Dockerfile with the DAG.
EDIT:
For the reference, here is what the successful code looks like if we want to test failing of spark_etl task and subsequent successful execution of remove_conf_file task:
t4 = BashOperator(
    task_id='spark_etl',
    bash_command='exit 123',  # simulation of task failure
    dag=dag)
t6 = BashOperator(
    task_id='remove_conf_file',
    bash_command='echo "Task 6"',
    dag=dag,
    trigger_rule='all_done')
(Full code can be found in my git repository mentioned above.)
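For reference, the difference between the default all_success rule and all_done can be sketched in plain Python. This is only a model of the two rules' semantics, not Airflow's actual scheduler code, and the state lists are illustrative:

```python
# Model of two Airflow trigger rules (illustrative only):
# - "all_success": run only when every upstream task succeeded
# - "all_done":    run once every upstream task finished, whatever the outcome
def should_trigger(upstream_states, rule="all_success"):
    finished = {"success", "failed", "upstream_failed", "skipped"}
    if rule == "all_success":
        return all(s == "success" for s in upstream_states)
    if rule == "all_done":
        return all(s in finished for s in upstream_states)
    raise ValueError("unsupported rule: %s" % rule)

# spark_etl failed, everything else finished:
states = ["success", "success", "failed", "upstream_failed"]
assert should_trigger(states, "all_done") is True      # remove_conf_file runs
assert should_trigger(states, "all_success") is False  # default rule blocks it
```

This is why all_done is the right rule for a cleanup task: it waits for the upstream tasks to finish but does not care how they finished.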

Related

Couldn't start the DAG in Apache Airflow MWAA

I am experiencing a weird error while starting my sample DAG. It executed fine when I created it the first time, but now executions seem to be failing.
Below is the error I am getting:
The DAG is just simple bash commands, which first print the date, followed by some more bash tasks. But it seems the first task is failing before execution starts.
And since it hasn't executed, it doesn't seem to be producing any logs either.

After clearing logs Dag is in running state but task not getting scheduled/executed

I have airflow (v1.10.4) running in Kubernetes cluster using Celery Executor. We are using RDS Postgres as an external metadata db.
Problem Statement: All DAGs are getting scheduled and run on time, but the problem happens when we clear any old run to re-run the pipeline: Airflow puts the DAG in the running state but none of the tasks gets executed. We also saw that when we re-mark those cleared runs as success, Airflow schedules the next run but none of the tasks gets executed (none of the tasks goes into the queued or running state).
The dag in question has depends_on_past=True and catchup=True.
Does anyone know what's going wrong here?
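For context, depends_on_past=True means a task instance is only scheduled if the same task succeeded in the previous DAG run, so with catchup=True one unresolved run can block every later one. A minimal pure-Python sketch of that scheduling rule (no Airflow required; the function and state values are illustrative, not Airflow's real scheduler code):

```python
# Sketch of the depends_on_past scheduling rule (illustrative): a task
# instance in run N is only schedulable if its own instance in run N-1
# finished successfully. A cleared instance has state None.
def schedulable(task_states_by_run, run_index, depends_on_past=True):
    if not depends_on_past or run_index == 0:
        return True
    return task_states_by_run[run_index - 1] == "success"

# One cleared (state=None) run blocks every subsequent run:
states = ["success", None, "success"]
assert schedulable(states, 1) is True   # the cleared run itself can be retried
assert schedulable(states, 2) is False  # run 2 is blocked by cleared run 1
```

Under this rule, marking the cleared run as success should unblock the chain, which makes the behaviour reported above (next run scheduled but tasks never queued) look like something else, e.g. a scheduler or executor problem.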

Missing log lines when writing to cloudwatch from ECS Docker containers

(Docker container on AWS-ECS exits before all the logs are printed to CloudWatch Logs)
Why are some streams of a CloudWatch Logs group incomplete (i.e., the Fargate Docker container exits successfully but the logs stop being updated abruptly)? I'm seeing this intermittently, in almost all log groups, though not on every log stream/task run. I'm running on platform version 1.3.0.
Description:
A Dockerfile runs node.js or Python scripts using the CMD command.
These are not servers/long-running processes, and my use case requires the containers to exit when the task completes.
Sample Dockerfile:
FROM node:6
WORKDIR /path/to/app/
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "node", "run-this-script.js" ]
All the logs are printed correctly to my terminal's stdout/stderr when this command is run on the terminal locally with docker run.
To run these as ECS tasks on Fargate, the log driver is set to awslogs from a CloudFormation template.
...
LogConfiguration:
  LogDriver: 'awslogs'
  Options:
    awslogs-group: !Sub '/ecs/ecs-tasks-${TaskName}'
    awslogs-region: !Ref AWS::Region
    awslogs-stream-prefix: ecs
...
Seeing that sometimes the CloudWatch Logs output is incomplete, I have run tests and checked every limit from CW Logs Limits and am certain the problem is not there.
I initially thought this was an issue with Node.js exiting asynchronously before console.log() is flushed, or that the process was exiting too soon, but the same problem occurs when I use a different language as well, which makes me believe this is not an issue with the code but rather with CloudWatch specifically.
Inducing delays in the code by adding a sleep timer has not worked for me.
It's possible that since the docker container exits immediately after the task is completed, the logs don't get enough time to be written over to CWLogs, but there must be a way to ensure that this doesn't happen?
sample logs:
incomplete stream:
{ "message": "configs to run", "data": {"dailyConfigs": ["filename.json"]}}
running for filename
completed log stream:
{ "message": "configs to run", "data": {"dailyConfigs": ["filename.json"]}}
running for filename
stdout: entered query_script
... <more log lines>
stderr:
real 0m23.394s
user 0m0.008s
sys 0m0.004s
(node:1) DeprecationWarning: PG.end is deprecated - please see the upgrade guide at https://node-postgres.com/guides/upgrading
UPDATE: This now appears to be fixed, so there is no need to implement the workaround described below
I've seen the same behaviour when using ECS Fargate containers to run Python scripts - and had the same resulting frustration!
I think it's due to CloudWatch Logs Agent publishing log events in batches:
How are log events batched?
A batch becomes full and is published when any of the following conditions are met:
The buffer_duration amount of time has passed since the first log event was added.
Less than batch_size of log events have been accumulated but adding the new log event exceeds the batch_size.
The number of log events has reached batch_count.
Log events from the batch don't span more than 24 hours, but adding the new log event exceeds the 24 hours constraint.
(Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html)
So a possible explanation is that log events are buffered by the agent but not yet published when the ECS task is stopped. (And if so, that seems like an ECS issue - any AWS ECS engineers willing to give their perspective on this...?)
There doesn't seem to be a direct way to ensure the logs are published, but it does suggest one could wait at least buffer_duration seconds (by default, 5 seconds), and any prior logs should be published.
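Those flush conditions can be modeled in a few lines of Python. This is a sketch of the documented agent behaviour, not the agent's actual code; buffer_duration's 5-second default is from the agent reference quoted above, while the batch_size and batch_count values here are placeholders:

```python
# Sketch of the CloudWatch Logs agent batching rules (illustrative):
# a buffered batch is published when any of these conditions is met.
def should_publish(batch_bytes, batch_events, seconds_since_first_event,
                   buffer_duration=5.0, batch_size=32768, batch_count=1000):
    return (seconds_since_first_event >= buffer_duration
            or batch_bytes >= batch_size
            or batch_events >= batch_count)

# A small batch written just before the container exits is NOT yet published:
assert should_publish(batch_bytes=200, batch_events=3,
                      seconds_since_first_event=1.0) is False
# ...which is why sleeping past buffer_duration before exiting lets it flush:
assert should_publish(batch_bytes=200, batch_events=3,
                      seconds_since_first_event=6.0) is True
```

If the model is right, a container that exits within buffer_duration of its last log line leaves that final partial batch unpublished, which matches the symptom of the last few lines going missing.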
With a bit of testing that I'll describe below, here's a workaround I landed on. A shell script run_then_wait.sh wraps the command to trigger the Python script, to add a sleep after the script completes.
Dockerfile
FROM python:3.7-alpine
ADD run_then_wait.sh .
ADD main.py .
# The original command
# ENTRYPOINT ["python", "main.py"]
# To run the original command and then wait
ENTRYPOINT ["sh", "run_then_wait.sh", "python", "main.py"]
run_then_wait.sh
#!/bin/sh
set -e
# Wait 10 seconds on exit: twice the `buffer_duration` default of 5 seconds
trap 'echo "Waiting for logs to flush to CloudWatch Logs..."; sleep 10' EXIT
# Run the given command, passing through all arguments
"$@"
main.py
import logging
import time
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
if __name__ == "__main__":
    # After testing some random values, had most luck to induce the
    # issue by sleeping 9 seconds here; would occur ~30% of the time
    time.sleep(9)
    logger.info("Hello world")
Hopefully the approach can be adapted to your situation. You could also implement the sleep inside your script, but it can be trickier to ensure it happens regardless of how it terminates.
It's hard to prove that the proposed explanation is accurate, so I used the above code to test whether the workaround was effective. The test was the original command vs. with run_then_wait.sh, 30 runs each. The results were that the issue was observed 30% of the time, vs 0% of the time, respectively. Hope this is similarly effective for you!
Just contacted AWS support about this issue and here is their response:
...
Based on that case, I can see that this occurs for containers in a
Fargate Task that exit quickly after outputting to stdout/stderr. It
seems to be related to how the awslogs driver works, and how Docker in
Fargate communicates to the CW endpoint.
Looking at our internal tickets for the same, I can see that our
service team are still working to get a permanent resolution for this
reported bug. Unfortunately, there is no ETA shared for when the fix
will be deployed. However, I've taken this opportunity to add this
case to the internal ticket to inform the team of the similar report
and try to expedite the process.
In the meantime, this can be avoided by extending the lifetime of the
exiting container by adding a delay (~>10 seconds) between the logging
output of the application and the exit of the process (exit of the
container).
...
Update:
Contacted AWS around August 1st, 2019, they say this issue has been fixed.
I observed this as well. It must be an ECS bug?
My workaround (Python 3.7):
import atexit
import logging
from time import sleep

logger = logging.getLogger(__name__)

def finalizer():
    logger.info("All tasks have finished. Exiting.")
    # Workaround:
    # Fargate will exit and the final batch of CloudWatch logs will be lost
    sleep(10)

atexit.register(finalizer)
I had the same problem with flushing logs to CloudWatch.
Following asavoy's answer, I switched from the exec form to the shell form of the ENTRYPOINT and added a 10-second sleep at the end.
Before:
ENTRYPOINT ["java","-jar","/app.jar"]
After:
ENTRYPOINT java -jar /app.jar; sleep 10

Dataflow process hanging

I am running a batch job on dataflow, querying from BigQuery. When I use the DirectRunner, everything works, and the results are written to a new BigQuery table. Things seem to break when I change to DataflowRunner.
The logs show that 30 worker instances are spun up successfully. The graph diagram in the web UI shows the job has started. The first 3 steps show "Running", the rest show "not started". None of the steps show any records transformed (i.e., output collections all show '-'). The logs show many messages that look like this, which may be the issue:
skipping: failed to "StartContainer" for "python" with CrashLoopBackOff: "Back-off 10s restarting failed container=python pod=......
I took a step back and just ran the minimal wordcount example, and that completed successfully. So all the necessary APIs seem to be enabled for Dataflow runner. I'm just trying to get a sense of what is causing my Dataflow job to hang.
I am executing the job like this:
python2.7 script.py --runner DataflowRunner --project projectname --requirements_file requirements.txt --staging_location gs://my-store/staging --temp_location gs://my-store/temp
I'm not sure if my solution was the cause of the error pasted above, but fixing dependency problems (which were not showing up as errors in the log at all!) did solve the hanging Dataflow processes.
So if you have a hanging process, make sure your workers have all their necessary dependencies. You can provide them through the --requirements_file argument, or through a custom setup.py script.
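As a sketch, the command line from the question can be assembled programmatically so --requirements_file is never forgotten. The helper name is made up for illustration; the flags and paths mirror the invocation shown above:

```python
# Hypothetical helper that builds the Dataflow options used in the question,
# always including --requirements_file so workers get their dependencies.
def dataflow_args(project, bucket, requirements="requirements.txt"):
    return [
        "--runner", "DataflowRunner",
        "--project", project,
        "--requirements_file", requirements,
        "--staging_location", "gs://%s/staging" % bucket,
        "--temp_location", "gs://%s/temp" % bucket,
    ]

args = dataflow_args("projectname", "my-store")
assert "--requirements_file" in args
```

For packages with custom or compiled dependencies, a setup.py passed via --setup_file is the usual alternative to a flat requirements.txt.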
Thanks to the help I received in this post, the pipeline appears to be operating, albeit VERY SLOWLY.

unable to update source code using cfn-hup in aws

I am trying to update source code on an EC2 instance using cfn-hup service in cloud formation (AWS).
When I update the stack with new source code using the build number, the source code does not change on the EC2 instance.
The cfn-hup service is running fine and all configurations are OK.
Below are the logs of cfn-hup.
2016-03-05 08:48:19,912 [INFO] Data has changed from previous state; action for cfn-auto-reloader-hook will be run
2016-03-05 08:48:19,912 [INFO] Running action for cfn-auto-reloader-hook
2016-03-05 08:48:20,191 [WARNING] Action for cfn-auto-reloader-hook exited with 1; will retry on next iteration
Can anyone please help me with this?
The error states Action for cfn-auto-reloader-hook exited with 1. This means that the action specified in your cfn-auto-reloader-hook has been executed, but returned an error code of 1 indicating a failure state. The good news is that everything else is set up correctly (the cfn-hup script is installed and running, it correctly detected a metadata change, and it found the cfn-auto-reloader hook).
Look at the action= line in your cfn-hup entry for this hook. A typical hook will look something like this:
[cfn-auto-reloader-hook]
triggers=post.update
path=Resources.WebServerInstance.Metadata.AWS::CloudFormation::Init
action=some_shell_command_here
runas=root
To find the hook, run cat /etc/cfn/hooks.d/cfn-auto-reloader.conf on the instance, or trace back where these file contents are defined in your CloudFormation template (e.g., in the example LAMP stack, this hook is created by the files section of an AWS::CloudFormation::Init Metadata Resource, used by the cfn-init helper script). Try manually executing the line in a local shell. If it fails, use the relevant output or error logs to continue debugging. Change the command and cfn-hup should succeed the next time it runs.
This means one of your cfn-init items is failing. If it works the first time, it's likely that you have a commands section item that needs a "test" clause to determine if it needs to run or not.
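For example, in cfn-init's commands section a command only runs if its test clause exits 0, so work that has already been done is skipped on re-runs. A minimal sketch (the key name, paths, and commands here are illustrative, not from the question):

```yaml
# Illustrative AWS::CloudFormation::Init fragment: "test" must exit 0
# for "command" to run, so repeated cfn-init/cfn-hup runs skip finished work.
commands:
  01_extract_source:
    test: '! test -d /var/www/app'   # run only if the app dir is missing yet
    command: 'tar xzf /tmp/app.tar.gz -C /var/www'
```

Without the test clause, the extract command would run on every update and fail once the target already exists, producing exactly the "exited with 1" retry loop shown in the logs.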