How do I execute part of a daily workflow only on Sundays - Informatica

I have one workflow, 'wf_OFSLL_WVR_CONV_ERROR_LOG_ARC', which should run only on Sunday. I have created another workflow which triggers 'wf_OFSLL_WVR_CONV_ERROR_LOG_ARC', and it should do so only on Sunday. What should the condition be for this to trigger only on Sunday?

Can you schedule the workflow using the scheduler?
Edit the workflow > go to Scheduler > change the schedule so it runs weekly on Sunday.
Make sure the workflow runs forever and repeats every week.
EDIT - per comment:
Let the workflow link run every day and don't put any condition on it. Then add the piece of code below to your script, so it kicks off the workflow only on Sundays.
#!/bin/sh
if [ "$(date +%u)" -eq 7 ]; then
    echo 'Sunday, time to kick off the infa workflow'
    ## write your pmcmd here
    exit
fi
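For reference, the pmcmd line inside could look something like this (a sketch only; the Integration Service, domain, credentials and folder names are placeholders, not values from the original setup):

#!/bin/sh
# Sketch only: replace the service, domain, user, password and folder with your own values.
if [ "$(date +%u)" -eq 7 ]; then
    echo 'Sunday, time to kick off the infa workflow'
    pmcmd startworkflow -sv INT_SVC_DEV -d Domain_Dev -u infa_user -p infa_password -f MY_FOLDER wf_OFSLL_WVR_CONV_ERROR_LOG_ARC
fi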

Related

Is there an AWS CloudWatch Log Export Task finish event?

I have a flow that once a day:
(cron at hour H): Creates an AWS CloudWatch Logs export task
(cron at hour H+2): Consumes the logs exported in step 1
Things were kept simple by design:
The two steps are separate scripts that don't relate to each other.
Step 2 doesn't have the task ID created in step 1.
Step 2 is not aware of step 1 and doesn't know if the logs export task is finished.
I could add a mechanism by which the first script publishes the task ID somewhere, and the second script consumes that task ID, queries CloudWatch to check whether the export task is finished, and only proceeds when it is (a rough sketch of this is shown below).
However, I'd prefer to keep it where there's no such handoff from step 1 to step 2.
What I'd like is for step 2 to start automatically when the log export is done.
👉 Is there an event "CloudWatch Export Task finished" that can be used to trigger the start of step 2?
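For completeness, a minimal sketch of what the task-ID handoff and polling described above could look like, assuming step 1 wrote the export task id to a file that step 2 can read:

#!/bin/bash
# Sketch only: assumes step 1 wrote the export task id to /tmp/export_task_id.
task_id=$(cat /tmp/export_task_id)

# Poll CloudWatch Logs until the export task reports COMPLETED.
until [ "$(aws logs describe-export-tasks --task-id "$task_id" --query 'exportTasks[0].status.code' --output text)" == "COMPLETED" ]
do
    sleep 60
done

# The export is finished - step 2 can consume the exported logs now.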

What will be the query to check completion of a workflow?

I have to check the status of a workflow, i.e. whether it completed within its scheduled time or not, using a SQL query. I also need to send an email with the workflow status, like 'completed within time' or 'not completed within time'. Please help me out.
You can do this using either option 1 or option 2.
Option 1: You need access to the repository metadata database.
Create a post-session shell script. You can pass the workflow name and a benchmark value to the shell script.
Get the workflow run time from the repository metadata.
SQL you can use:
SELECT WORKFLOW_NAME,
       (END_TIME - START_TIME) * 24 * 60 * 60 AS diff_seconds
FROM   REP_WFLOW_RUN
WHERE  WORKFLOW_NAME = 'myWorkflow'
You can then compare the above value with the benchmark value. The shell script can send a mail depending on the outcome.
You need to create another workflow to check this workflow.
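A rough sketch of such a post-session script (assuming an Oracle repository reachable via sqlplus and mailx for the notification; the connection string, recipient and helper name are made up):

#!/bin/sh
# Hypothetical helper: check_wf_runtime.sh <workflow_name> <benchmark_seconds>
WF_NAME=$1
BENCHMARK=$2

# Run time (in seconds) of the latest run, taken from the repository view REP_WFLOW_RUN.
DIFF_SECONDS=$(sqlplus -s rep_user/rep_pwd@REPDB <<EOF
SET HEADING OFF FEEDBACK OFF
SELECT ROUND((END_TIME - START_TIME) * 24 * 60 * 60)
FROM   (SELECT * FROM REP_WFLOW_RUN
        WHERE  WORKFLOW_NAME = '$WF_NAME'
        ORDER  BY START_TIME DESC)
WHERE  ROWNUM = 1;
EOF
)
DIFF_SECONDS=$(echo $DIFF_SECONDS)   # trim whitespace/newlines from the sqlplus output

if [ "$DIFF_SECONDS" -le "$BENCHMARK" ]; then
    echo "$WF_NAME completed within time ($DIFF_SECONDS s)" | mailx -s "Workflow status" team@example.com
else
    echo "$WF_NAME NOT completed within time ($DIFF_SECONDS s)" | mailx -s "Workflow status" team@example.com
fi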
Option 2: If you do not have access to the metadata, follow the steps above, except for the metadata SQL.
Use pmcmd GetWorkflowDetails to check the status, start time and end time of a workflow.
pmcmd GetWorkflowDetails -sv service -d domain -f folder myWorkflow
You can then grep the start and end time from the output and compare them with your benchmark values. The problem is the date format etc., so you need a little bit of scripting here.
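A sketch of that scripting (the field labels in the GetWorkflowDetails output can vary between PowerCenter versions, and GNU date is assumed for timestamp parsing, so treat this as a starting point):

#!/bin/sh
BENCHMARK=3600   # example benchmark in seconds

OUT=$(pmcmd getworkflowdetails -sv service -d domain -f folder myWorkflow)

# Pull the values out of the bracketed fields, e.g. "Start time: [Mon Jan 01 10:00:00 2020]".
STATUS=$(echo "$OUT" | grep -i "workflow run status" | sed 's/.*\[\(.*\)\].*/\1/')
START=$(echo "$OUT" | grep -i "start time" | sed 's/.*\[\(.*\)\].*/\1/')
END=$(echo "$OUT" | grep -i "end time" | sed 's/.*\[\(.*\)\].*/\1/')

# Compare the run time against the benchmark (GNU date assumed for parsing).
DIFF=$(( $(date -d "$END" +%s) - $(date -d "$START" +%s) ))

if [ "$STATUS" = "Succeeded" ] && [ "$DIFF" -le "$BENCHMARK" ]; then
    echo "myWorkflow completed within time" | mailx -s "Workflow status" team@example.com
else
    echo "myWorkflow NOT completed within time" | mailx -s "Workflow status" team@example.com
fi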

Google App Engine, tasks in Task Queue are not executed automatically

My tasks are added to the Task Queue, but nothing is executed automatically. I need to click the "Run now" button to run the tasks, and then they are executed without problem. Have I missed some configuration?
I use the default queue configuration, standard App Engine with Python 2.7.
from google.appengine.api import taskqueue

taskqueue.add(
    url='/inserturl',
    params={'name': 'tablename'})
This documentation is for the API you are now mentioning. The idea would be the same: you need to specify the parameter for when you want the task to be executed. In this case, you have different options, such as countdown or eta. Here is the specific documentation for the method you are using to add a task to the queue (taskqueue.add).
ORIGINAL ANSWER
If you follow this tutorial to create queues and tasks, you will see it is based on the following GitHub repo. If you go to the file where the tasks are created (create_app_engine_queue_task.py), that is where you should specify the time when the task must be executed. In this tutorial, to finally create the task, they use the following command:
python create_app_engine_queue_task.py --project=$PROJECT_ID --location=$LOCATION_ID --queue=$QUEUE_ID --payload=hello
However, it is missing the time when you want to execute it; it should look like this:
python create_app_engine_queue_task.py --project=$PROJECT_ID --location=$LOCATION_ID --queue=$QUEUE_ID --payload=hello --in_seconds=["countdown" for when the task will be executed, in seconds]
Basically, the key is in this part of the code in create_app_engine_queue_task.py:
if in_seconds is not None:
    # Convert "seconds from now" into an rfc3339 datetime string.
    d = datetime.datetime.utcnow() + datetime.timedelta(seconds=in_seconds)

    # Create Timestamp protobuf.
    timestamp = timestamp_pb2.Timestamp()
    timestamp.FromDatetime(d)

    # Add the timestamp to the tasks.
    task['schedule_time'] = timestamp
If you create the task now and go to your console, you will see the task execute and disappear from the queue after the number of seconds you specified.

Dataprep: job finish event

We are considering using Dataprep on an automatic schedule in order to wrangle & load a folder of GCS .gz files into BigQuery.
The challenge is: how can the source .gz files be moved to cold storage once they are processed?
I can't find an event generated by Dataprep that we could hook up to in order to perform the archiving task. What would be ideal is if Dataprep could archive the source files by itself.
Any suggestions?
I don't believe there is a way to get notified when a job is done directly from Dataprep. What you could do instead is poll the underlying dataflow jobs. You could schedule a script to run whenever your scheduled dataprep job runs. Here's a simple example:
#!/bin/bash
# list running dataflow jobs, filter so that only the one with the "dataprep" string in its name is actually listed and keep its id
id=$(gcloud dataflow jobs list --status=active --filter="name:dataprep" | sed -n 2p | cut -f 1 -d " ")

# loop until the state of the job changes to done
until [ $(gcloud dataflow jobs describe $id | grep currentState | head -1 | awk '{print $2}') == "JOB_STATE_DONE" ]
do
    # sleep so that you reduce API calls
    sleep 5m
done

# send to cold storage, e.g. gsutil mv ...
echo "done"
The problem here is that the above assumes you only run one Dataprep job. If you schedule many concurrent Dataprep jobs, the script would be more complicated.
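If you do run several Dataprep jobs at once, one way to extend it (an untested sketch) is to collect all the matching job ids and wait for each of them:

#!/bin/bash
# Collect the ids of all active Dataflow jobs with "dataprep" in their name.
ids=$(gcloud dataflow jobs list --status=active --filter="name:dataprep" --format="value(id)")

# Wait for every one of them to finish.
for id in $ids; do
    until [ "$(gcloud dataflow jobs describe $id --format='value(currentState)')" == "JOB_STATE_DONE" ]
    do
        sleep 5m
    done
done

# All monitored jobs are done: send the source files to cold storage, e.g. gsutil mv ...
echo "done"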

How do I run a TeamCity build step only if a previous build step fails? [duplicate]

We're using TeamCity 7 and wondered if it's possible to have a step run only if a previous one has failed. The options in the build step configuration give you the choice to execute a step only if all previous steps were successful, even if a step failed, or always.
Is there a means to execute a step only if a previous one failed?
There's no way to set up a step to execute only if a previous one failed.
The closest I've seen to this is to set up a second build with a "Finish Build" trigger, so it always executes after your first build finishes (regardless of success or failure).
Then, in that second build, you could use the TeamCity REST API to determine whether the last execution of the first build was successful. If it wasn't, you could do whatever it is you want to do.
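For example, something along these lines could run in the second build (a sketch; the server URL, credentials and the build configuration id FirstBuildConfigId are placeholders):

#!/bin/bash
# Ask the TeamCity REST API (basic auth) for the status of the latest finished build of the first configuration.
status=$(curl -s -u "$TC_USER:$TC_PASSWORD" "https://teamcity.example.com/httpAuth/app/rest/builds?locator=buildType:FirstBuildConfigId,count:1" | grep -o 'status="[A-Z]*"' | head -1 | cut -d'"' -f2)

if [ "$status" != "SUCCESS" ]; then
    echo "first build failed - run the failure-handling logic here"
fi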
As a workaround, it is possible to set a variable via a command-line step that only runs on success, which can be checked later.
echo "##teamcity[setParameter name='env.BUILD_STATUS' value='SUCCESS']"
This can then be queried inside a PowerShell step that is set to run even if a step fails.
if ($env:BUILD_STATUS -ne "SUCCESS") {
    # a previous step failed - handle it here
}
I was surprised that TeamCity did not support this out of the box in 2021, but the API gives you a lot of useful features and you can do it.
As a solution, you can write a bash script that calls the TeamCity API:
Set up an API key in My Settings & Tools => Access Tokens.
Create an env variable with the API token.
Create a step in your configuration with "Execute step: Even if some of the previous steps failed".
Build your own container with jq, or use any existing container with jq support.
Place this bash script:
#!/bin/bash
set -e -x

# Fetch the 2 most recent builds (current and previous) of this build configuration as JSON.
declare api_response=$(curl -v -H "Accept: application/json" -H "Authorization: Bearer %env.teamcity_internal_api_key%" %teamcity.serverUrl%/app/rest/latest/builds?locator=buildType:%system.teamcity.buildType.id%,running:any,canceled:all,count:2\&fields=build\(id,status\))

# jq -r strips the surrounding quotes so the statuses can be compared as plain strings.
declare current_status=$(echo ${api_response} | jq -r '.build[0].status')
declare prev_status=$(echo ${api_response} | jq -r '.build[1].status')

if [ "$current_status" != "$prev_status" ]; then
    : # do your code here
fi
Some explanation of the code above: with the API call you get the last 2 builds of the current buildType, i.e. the last build and the previous one. You then assign their statuses to variables and compare them in the if statement. If you need to run some code only when the current build failed, use:
if [ "$current_status" = "FAILURE" ]; then
write your code here
fi
Another solution is Webhooks.
This plugin can send a webhook to a URL when a build fails, too.
On the webhook side, you can handle some actions, for example sending a notification.