I have followed the documentation and previous Stack Overflow links, e.g. Airflow sla_miss_callback function not triggering.
Still, I am not able to trigger sla_miss_callback in my case.
I have a top-level DAG (DAG X1) with a schedule interval of 0 18 * * *, i.e. run every day at 6 PM.
This calls a sub-DAG (DAG Y1) using TriggerDagRunOperator.
Sub-DAG Y1 does not have a schedule interval attached, so I cannot apply an SLA to it because of this piece of code - Airflow Code
So I have attached the SLA to my top-level DAG task exactly the same way as here - https://airflow.apache.org/docs/apache-airflow/2.3.1/concepts/tasks.html#slas
@dag(
    schedule_interval="0 18 * * *",
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    sla_miss_callback=sla_callback,
    default_args={'sla': timedelta(seconds=60)},
)
def sla_callback(dag, task_list, blocking_task_list, slas, blocking_tis):
    print(
        "The callback arguments are: ",
        {
            "dag": dag,
            "task_list": task_list,
            "blocking_task_list": blocking_task_list,
            "slas": slas,
            "blocking_tis": blocking_tis,
        },
    )
In my DAG processor log I do see this line:
{{processor.py:377}} INFO - Running SLA Checks for DAG X1, coming from this part of the Airflow Code
But after that I don't see any log lines related to SLAs, and the SLA callback function is not getting kicked off. My DAG run time is more than an hour long.
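For completeness, here is a minimal, self-contained sketch of the setup I am describing (assuming Airflow 2.3-style decorators; the task body is just a placeholder that deliberately overruns the 60-second SLA):
import time
from datetime import timedelta

import pendulum
from airflow.decorators import dag, task


def sla_callback(dag, task_list, blocking_task_list, slas, blocking_tis):
    print("SLA missed on DAG:", dag)


@dag(
    schedule_interval="0 18 * * *",
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    catchup=False,
    sla_miss_callback=sla_callback,
    default_args={'sla': timedelta(seconds=60)},
)
def dag_x1():
    @task
    def slow_task():
        time.sleep(120)  # longer than the 60 s SLA, so a miss should be recorded

    slow_task()


dag_obj = dag_x1()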
I have the following code in my Lambda (Python and boto3):
rds.restore_db_instance_from_db_snapshot(
DBSnapshotIdentifier=snapshot_name,
DBInstanceIdentifier=db_id,
DBInstanceClass=rds_instance_class,
VpcSecurityGroupIds=secgroup,
DBSubnetGroupName=rds_subnet_groupname,
MultiAZ=False,
PubliclyAccessible=False,
CopyTagsToSnapshot=True
)
waiter = rds.get_waiter('db_instance_available')
waiter.wait(DBInstanceIdentifier=db_id)
# some other operation that expects that DB is up and running.
The waiter was added in an attempt to wait properly for the DB to come up. However, it looks like the waiter times out.
What would be the correct waiter to use in this case?
Try setting waiter.config.delay and/or waiter.config.max_attempts:
waiter = rds.get_waiter('db_instance_available')
waiter.config.delay = 123 # this is in seconds
waiter.config.max_attempts = 123
waiter.wait(DBInstanceIdentifier=db_id)
OR
waiter = rds.get_waiter('db_instance_available')
waiter.wait(
    DBInstanceIdentifier=db_id,
    WaiterConfig={
        'Delay': 123,
        'MaxAttempts': 123
    }
)
WaiterConfig (dict) - A dictionary that provides parameters to control waiting behavior.
Delay (integer) - The amount of time in seconds to wait between attempts. Default: 30
MaxAttempts (integer) - The maximum number of attempts to be made. Default: 60
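Note that the waiter gives up after roughly Delay x MaxAttempts seconds in total, so the defaults allow 30 x 60 = 30 minutes. As a sketch (the numbers are placeholders sized for a restore that can take about an hour, not recommendations):
waiter = rds.get_waiter('db_instance_available')
waiter.wait(
    DBInstanceIdentifier=db_id,
    WaiterConfig={
        'Delay': 30,        # poll every 30 seconds
        'MaxAttempts': 120  # 120 polls x 30 s = up to 60 minutes
    }
)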
Could it be that your waiter is actually checking the existing DB and seeing that it's available before the status can update from the previous restore command?
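If that race is the concern, one way to rule it out is to log the instance status right after the restore call and before waiting. A diagnostic sketch, reusing the rds client from the question:
# Right after restore_db_instance_from_db_snapshot the instance should
# report 'creating'; if this prints 'available', the waiter is likely
# seeing a stale/previous instance with the same identifier.
status = rds.describe_db_instances(
    DBInstanceIdentifier=db_id
)['DBInstances'][0]['DBInstanceStatus']
print("status before waiting:", status)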
I have a job which runs at 13:30. Its first task takes almost 1 hour to complete, and after that we need to wait 15 minutes. So I am using TimeDeltaSensor like below.
waitfor15min = TimeDeltaSensor(
task_id='waitfor15min',
delta=timedelta(minutes=15),
dag=dag)
However, in the logs it is showing schedule_interval + 15 min, like below:
[2020-11-05 20:36:27,013] {time_delta_sensor.py:45} INFO - Checking if the time (2020-11-05T13:45:00+00:00) has come
[2020-11-05 20:36:27,013] {base_sensor_operator.py:79} INFO - Success criteria met. Exiting.
[2020-11-05 20:36:30,655] {logging_mixin.py:95} INFO - [2020-11-05 20:36:30,655] {jobs.py:2612} INFO - Task exited with return code 0
How can I create a delay between tasks?
You could use PythonOperator and write a function that simply waits 15 minutes. Here is an example of what a wait task could look like:
import time

def my_sleeping_function(random_base, **kwargs):
    """This is a function that will run within the DAG execution"""
    time.sleep(random_base)

# Generate 5 sleeping tasks, sleeping from 0.0 to 0.4 seconds respectively
for i in range(5):
    task = PythonOperator(
        task_id='sleep_for_' + str(i),
        python_callable=my_sleeping_function,
        op_kwargs={'random_base': float(i) / 10},
        provide_context=True,
        dag=dag,
    )
    run_this >> task
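Adapted to the 15-minute case in the question, a single wait task might look like this sketch (first_task and next_task are hypothetical names for your real tasks):
wait_15_min = PythonOperator(
    task_id='wait_15_min',
    python_callable=lambda: time.sleep(15 * 60),  # block for 15 minutes
    dag=dag,
)
first_task >> wait_15_min >> next_task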
I started migrating my code to boto3, and one nice addition I noticed is the waiters.
I want to create a snapshot from a DB instance, and I want to check for its availability before I resume with my code.
My approach is the following:
# Notice: Step : Check snapshot availability [1st account - Oregon]
print "--- Check snapshot availability [1st account - Oregon] ---"
new_snap = client1.describe_db_snapshots(DBSnapshotIdentifier=new_snapshot_name)['DBSnapshots'][0]
# print pprint.pprint(new_snap) #debug
waiter = client1.get_waiter('db_snapshot_completed')
print "Manual snapshot is -pending-"
sleep(60)
waiter.wait(
DBSnapshotIdentifier = new_snapshot_name,
IncludeShared = True,
IncludePublic = False
)
print "OK. Manual snapshot is -available-"
But the documentation says that it polls the status every 15 seconds, 40 times. That is 10 minutes, yet a rather big DB will need more than that.
How could I use the waiter to allow for that?
Waiters have the configuration parameters 'delay' and 'max_attempts', like this:
waiter = rds_client.get_waiter('db_instance_available')
print( "waiter delay: " + str(waiter.config.delay) )
waiter.py on github
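For example, a sketch that stretches the snapshot waiter to roughly an hour (the values are illustrative):
waiter = client1.get_waiter('db_snapshot_completed')
waiter.config.delay = 60         # poll every 60 seconds instead of 15
waiter.config.max_attempts = 60  # up to 60 polls, i.e. about an hour
waiter.wait(DBSnapshotIdentifier=new_snapshot_name)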
You could do it without the waiter if you like.
From the documentation for that waiter:
Polls RDS.Client.describe_db_snapshots() every 15 seconds until a successful state is reached. An error is returned after 40 failed checks.
Basically that means it does the following:
RDS = boto3.client('rds')
RDS.describe_db_snapshots()
You can just run that but filter to your snapshot ID; here is the syntax: http://boto3.readthedocs.io/en/latest/reference/services/rds.html#RDS.Client.describe_db_snapshots
response = client.describe_db_snapshots(
DBInstanceIdentifier='string',
DBSnapshotIdentifier='string',
SnapshotType='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
IncludeShared=True|False,
IncludePublic=True|False
)
This will end up looking something like this:
snapshot_description = RDS.describe_db_snapshots(DBSnapshotIdentifier='YOURIDHERE')
Then you can just loop until that returns a snapshot which is available. So here is a very rough idea:
import boto3
import time

RDS = boto3.client('rds')

snapshot_description = RDS.describe_db_snapshots(DBSnapshotIdentifier='YOURIDHERE')
while snapshot_description['DBSnapshots'][0]['Status'] != 'available':
    print("still waiting")
    time.sleep(15)
    snapshot_description = RDS.describe_db_snapshots(DBSnapshotIdentifier='YOURIDHERE')
I think the other answer alluded to this solution but here it is expressly.
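One caveat with the rough loop above: it can spin forever if the snapshot never becomes available. A sketch of the same loop with an overall deadline (the one-hour figure is an arbitrary placeholder):
deadline = time.time() + 3600  # give up after an hour
while snapshot_description['DBSnapshots'][0]['Status'] != 'available':
    if time.time() > deadline:
        raise RuntimeError("snapshot did not become available in time")
    time.sleep(15)
    snapshot_description = RDS.describe_db_snapshots(DBSnapshotIdentifier='YOURIDHERE')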
[snip]
...
# Create your waiter
waiter_db_snapshot = client1.get_waiter('db_snapshot_completed')
# Increase the max number of tries as appropriate
waiter_db_snapshot.config.max_attempts = 120
# Add a 60 second delay between attempts
waiter_db_snapshot.config.delay = 60
print "Manual snapshot is -pending-"
....
[snip]
I'm requesting a file of around 14 MB from a slow server with urllib2.urlopen, and it spends more than 60 seconds getting the data, so I'm getting the error:
Deadline exceeded while waiting for HTTP response from URL:
http://bigfile.zip?type=CSV
Here is my code:
class CronChargeBT(webapp2.RequestHandler):
    def get(self):
        taskqueue.add(queue_name='optimized-queue', url='/cronChargeBTB')

class CronChargeBTB(webapp2.RequestHandler):
    def post(self):
        url = "http://bigfile.zip?type=CSV"
        url_request = urllib2.Request(url)
        url_request.add_header('Accept-encoding', 'gzip')
        urlfetch.set_default_fetch_deadline(300)
        response = urllib2.urlopen(url_request, timeout=300)
        buf = StringIO(response.read())
        f = gzip.GzipFile(fileobj=buf)
        # ...work with the data inside the file...
I created a cron task that calls CronChargeBT. Here is the cron.yaml:
- description: cargar BlueTomato
url: /cronChargeBT
target: charge
schedule: every wed,sun 01:00
It creates a new task and inserts it into a queue. Here is the queue configuration:
- name: optimized-queue
rate: 40/s
bucket_size: 60
max_concurrent_requests: 10
retry_parameters:
task_retry_limit: 1
min_backoff_seconds: 10
max_backoff_seconds: 200
Of course the timeout=300 isn't working, because of the 60-second limit in GAE, but I think I can avoid it by using a task... Does anyone know how I can get the data in the file while avoiding this timeout?
Thanks a lot!!!
Cron jobs are limited to a 10-minute deadline, not 60 seconds. If your download fails, perhaps just retry? Does the download work if you download it from your computer? There's nothing you can do on GAE if the server you are downloading from is too slow or unstable.
Edit: According to https://cloud.google.com/appengine/docs/java/outbound-requests#request_timeouts, there is a maximum deadline of 60 seconds for cron job requests. Therefore, you can't get around it.
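If the fetch has to stay on App Engine, one option is to use urlfetch directly in the task handler instead of urllib2. A sketch, with the caveat that the deadline is only honoured up to whatever cap applies to your request type:
from google.appengine.api import urlfetch

class CronChargeBTB(webapp2.RequestHandler):
    def post(self):
        urlfetch.set_default_fetch_deadline(300)
        result = urlfetch.fetch(
            "http://bigfile.zip?type=CSV",
            headers={'Accept-Encoding': 'gzip'},
            deadline=300,  # fetch() also takes an explicit per-call deadline
        )
        if result.status_code == 200:
            data = result.content
            # ...work with the data...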
code=1000 [Unavailable exception] message="Cannot achieve consistency level ONE" info={'required_replicas': 1, 'alive_replicas': 0, 'consistency': 'ONE'}
code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
I am inserting into Cassandra 2.0.13 (a single node, for testing) with the Python cassandra-driver version 2.6.
The following are my keyspace and table definitions:
CREATE KEYSPACE test_keyspace WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' };
CREATE TABLE test_table (
key text PRIMARY KEY,
column1 text,
...,
column17 text
) WITH COMPACT STORAGE AND
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'SnappyCompressor'};
What I tried:
1) multiprocessing (protocol version set to 1)
Each process has its own cluster and session (default_timeout set to 30.0):
def get_cassandra_session():
"""creates cluster and gets the session base on key space"""
# be aware that session cannot be shared between threads/processes
# or it will raise OperationTimedOut Exception
if CLUSTER_HOST2:
cluster = cassandra.cluster.Cluster([CLUSTER_HOST1, CLUSTER_HOST2])
else:
# if only one address is available, we have to use older protocol version
cluster = cassandra.cluster.Cluster([CLUSTER_HOST1], protocol_version=1)
session = cluster.connect(KEY_SPACE)
session.default_timeout = 30.0
return session
2) batch insert (protocol version set to 2 because BatchStatement is enabled on Cassandra 2.X)
def batch_insert(session, batch_queue, batch):
try:
insert_user = session.prepare("INSERT INTO " + db.TABLE + " (" + db.COLUMN1 + "," + db.COLUMN2 + "," + db.COLUMN3 +
"," + db.COLUMN4 + ") VALUES (?,?,?,?)")
while batch_queue.qsize() > 0:
'''batch queue size is 1000'''
row_tuple = batch_queue.get()
batch.add(insert_user, row_tuple)
session.execute(batch)
except Exception as e:
logger.error("batch insert fail.... %s", e)
The above function is invoked by:
batch = BatchStatement(consistency_level=ConsistencyLevel.ONE)
batch_insert(session, batch_queue, batch)
tuples are stored in batch_queue.
3) synchronous execution
Several days ago I posted another question, Cassandra update fails; Cassandra was complaining about a timeout issue. I was using synchronous execution for updating.
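For reference, the synchronous path looked roughly like this (a sketch; the statement and variable names are stand-ins, not my exact code):
prepared = session.prepare(
    "UPDATE test_table SET column1 = ? WHERE key = ?")
for value, key in rows:  # 'rows' stands in for the real data source
    session.execute(prepared, (value, key))  # blocks until the write completes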
Can anyone help? Is this an issue with my code, with the Python cassandra-driver, or with Cassandra itself?
Thanks a million!
If your question is about those errors at the top, those are server-side error responses.
The first says that the coordinator you contacted cannot satisfy the request at CL.ONE, with the nodes it believes are alive. This can happen if all replicas are down (more likely with a low replication factor).
The other two errors are timeouts, where the coordinator didn't get responses from 'live' nodes within the time configured in cassandra.yaml.
All of these indicate that the cluster you're connected to is not healthy. This could be because it is overwhelmed (high GC pauses), or experiencing network issues. Check the server logs for clues.
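Client-side, a retry with backoff won't cure an unhealthy cluster, but it can surface these errors more gracefully. A sketch (the exception types are from the driver; the wrapper itself is hypothetical):
import time
from cassandra import Unavailable, WriteTimeout

def execute_with_backoff(session, statement, params, retries=5):
    """Retry a write on Unavailable/WriteTimeout with exponential backoff."""
    for attempt in range(retries):
        try:
            return session.execute(statement, params)
        except (Unavailable, WriteTimeout):
            time.sleep(2 ** attempt)  # 1 s, 2 s, 4 s, ...
    raise RuntimeError("write kept failing; the cluster is likely unhealthy")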
I got the following error, which looks very similar:
cassandra.Unavailable: Error from server: code=1000 [Unavailable exception] message="Cannot achieve consistency level LOCAL_ONE" info={'consistency': 'LOCAL_ONE', 'alive_replicas': 0, 'required_replicas': 1}
When I added a sleep(0.5) in the code, it worked fine. I was trying to write too much too fast...
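In code, that workaround was just a pause between writes, something like this sketch ('rows' and 'insert_stmt' are placeholders):
import time

for row in rows:
    session.execute(insert_stmt, row)
    time.sleep(0.5)  # throttle so the single test node can keep up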