Based on the previous post, I have an AWS Glue Python shell job that needs to retrieve some information from the arguments that are passed to it through a boto3 call.
My Glue job name is test_metrics
The Glue Python shell code looks like below:
import sys
from awsglue.utils import getResolvedOptions
args = getResolvedOptions(sys.argv,
                          ['test_metrics',
                           's3_target_path_key',
                           's3_target_path_value'])

print("Target path key is: ", args['s3_target_path_key'])
print("Target Path value is: ", args['s3_target_path_value'])
The boto3 code that calls this job is below:
import boto3

glue = boto3.client('glue')

response = glue.start_job_run(
    JobName='test_metrics',
    Arguments={
        '--s3_target_path_key': 's3://my_target',
        '--s3_target_path_value': 's3://my_target_value'
    }
)
print(response)
I see a 200 response after I run the boto3 code on my local machine, but the Glue error log tells me:
test_metrics.py: error: the following arguments are required: --test_metrics
What am I missing?
Which job are you trying to launch, a Spark job or a Python shell job?
For a Spark job, JOB_NAME is a mandatory parameter; for a Python shell job, it is not needed at all.
So in your Python shell job, replace
args = getResolvedOptions(sys.argv,
                          ['test_metrics',
                           's3_target_path_key',
                           's3_target_path_value'])
with
args = getResolvedOptions(sys.argv,
                          ['s3_target_path_key',
                           's3_target_path_value'])
Seems like the documentation is kinda broken.
I had to update the boto3 code like below to make it work
glue = boto3.client('glue')

response = glue.start_job_run(
    JobName='test_metrics',
    Arguments={
        '--test_metrics': 'test_metrics',
        '--s3_target_path_key': 's3://my_target',
        '--s3_target_path_value': 's3://my_target_value'
    }
)
We can then get the Glue job name in the Python shell job from sys.argv.
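For example, here is a minimal sketch (not from the original post) that dumps sys.argv in the Python shell job and pulls a value out by its flag, handling both the --key value and --key=value forms:

import sys

# Print everything Glue passed on the command line, reserved and custom args alike.
print(sys.argv)

def get_arg(flag):
    """Return the value for --flag from sys.argv, handling '--flag value' and '--flag=value'."""
    for i, token in enumerate(sys.argv):
        if token == flag and i + 1 < len(sys.argv):
            return sys.argv[i + 1]
        if token.startswith(flag + '='):
            return token.split('=', 1)[1]
    return None

# With the workaround above, the job name arrives as the custom '--test_metrics' argument.
print(get_arg('--test_metrics'))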
Related
According to https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-jobs-job.html#aws-glue-api-jobs-job-GetJob, the GetJob action (AWS Glue) should return CodeGenConfigurationNodes for a given JobName as input; however, it only returns the Job object.
Am I missing a request parameter that is undocumented? Is there another way to get the CodeGenConfigurationNodes for an AWS Glue Job in Step Functions?
[UPDATE]
OK, I tried using the get_job() Python function in a Lambda:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import json

import boto3

glue = boto3.client(service_name='glue', region_name='eu-west-2',
                    endpoint_url='https://glue.eu-west-2.amazonaws.com')


def lambda_handler(event, context):
    job_name = event.get('JobName')
    job_details = glue.get_job(JobName=job_name)
    return json.dumps(job_details, indent=4, sort_keys=True, default=str)
And that doesn't return CodeGenConfigurationNodes either, so is it a bug in the Python library?
CodeGenConfigurationNodes is only returned if the job is a Glue Studio job. It represents the visual DAG, so jobs that are not Glue Studio jobs do not have it.
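So one way to check is simply to test for the key in the GetJob response; a quick sketch (the job name here is a placeholder):

import json

import boto3

glue = boto3.client('glue')
job = glue.get_job(JobName='my_visual_job')['Job']  # placeholder job name

if 'CodeGenConfigurationNodes' in job:
    print(json.dumps(job['CodeGenConfigurationNodes'], indent=2, default=str))
else:
    print("Not a Glue Studio (visual) job, so no CodeGenConfigurationNodes is returned.")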
I have been working with AWS Glue workflow for orchestrating batch jobs.
We need to pass a push down predicate in order to limit the processing for the batch job.
When we run Glue jobs alone, we can pass the push down predicate as a command line argument at run time (i.e. aws glue start-job-run --job-name foo.scala --arguments --arg1-text ${arg1}..). But when we use a Glue workflow to execute Glue jobs, it is a bit unclear.
When we orchestrate batch jobs using AWS Glue workflows, we can add run properties while creating the workflow.
Can I use run properties to pass push down predicate for my Glue Job ?
If yes, then how can I define the value for the run property (push down predicate) at run time? The reason I want to define the value at run time is that the predicate changes arbitrarily every day (i.e. run the glue-workflow for the past 10 days, past 20 days, past 2 days, etc.).
I tried:
aws glue start-workflow-run --name workflow-name | jq -r '.RunId '
aws glue put-workflow-run-properties --name workflow-name --run-id "ID" --run-properties --pushdownpredicate="some value"
I am able to see the run property I have passed using put-workflow-run-properties:
aws glue put-workflow-run-properties --name workflow-name --run-id "ID"
But I am not able to detect "pushdownpredicate" in my Glue Job.
Any idea how to access the workflow's run properties in the Glue job?
If you are using Python as the programming language for your Glue job, then you can issue the get_workflow_run_properties API call to retrieve the properties and use them inside your Glue job.
response = client.get_workflow_run_properties(
    Name='string',
    RunId='string'
)
This will give you the response below, which you can parse and use:
{
    'RunProperties': {
        'string': 'string'
    }
}
If you are using Scala, then you can use the equivalent AWS SDK.
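For example, inside the Glue job you could resolve the workflow name and run ID that Glue passes to jobs started by a workflow, then read the property. A rough sketch, assuming the run property is named pushdownpredicate as in the question:

import sys

import boto3
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ['WORKFLOW_NAME', 'WORKFLOW_RUN_ID'])

client = boto3.client('glue')
run_properties = client.get_workflow_run_properties(
    Name=args['WORKFLOW_NAME'], RunId=args['WORKFLOW_RUN_ID']
)['RunProperties']

push_down_predicate = run_properties.get('pushdownpredicate')
print("Push down predicate for this run:", push_down_predicate)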
First of all, you need to be sure that the job is running from a workflow:
from typing import Dict

import boto3


def get_worfklow_params(args: Dict[str, str]) -> Dict[str, str]:
    """
    get_worfklow_params is delegated to retrieve the WORKFLOW parameters
    """
    glue_client = boto3.client("glue")
    if "WORKFLOW_NAME" in args and "WORKFLOW_RUN_ID" in args:
        workflow_args = glue_client.get_workflow_run_properties(
            Name=args['WORKFLOW_NAME'], RunId=args['WORKFLOW_RUN_ID'])["RunProperties"]
        print("Found the following workflow args: \n{}".format(workflow_args))
        return workflow_args
    print("Unable to find run properties for this workflow!")
    return None
This method will return a map of the workflow input parameters.
Then you can use the following method to retrieve a given parameter:
def get_worfklow_param(args: Dict[str, str], arg) -> str:
    """
    get_worfklow_param is delegated to verify if the given parameter is present in the job and return it. In case of no presence None will be returned
    """
    if args is None:
        return None
    return args[arg] if arg in args else None
To reuse the code, in my opinion it is better to create a Python (whl) module and set the module in the script path of your job. This way, you can retrieve the methods with a simple import, as sketched below.
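A rough sketch of that setup (the module and bucket names here are hypothetical, not from the original answer): package the two helpers into a wheel, upload it to S3, attach it to the job via the --extra-py-files special parameter (the "Python library path" in the console), and then the job script only needs the import:

# Assumes a hypothetical wheel built from workflow_utils.py containing the two
# helpers above, attached to the job with:
#   --extra-py-files s3://my-bucket/libs/workflow_utils-0.1-py3-none-any.whl
import sys

from awsglue.utils import getResolvedOptions
from workflow_utils import get_worfklow_params, get_worfklow_param

args = getResolvedOptions(sys.argv, ['JOB_NAME', 'WORKFLOW_NAME', 'WORKFLOW_RUN_ID'])
workflow_params = get_worfklow_params(args)
my_parameter = get_worfklow_param(workflow_params, "WORKFLOW_CUSTOM_PARAMETER")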
Without the whl module, you can proceed in the following way:
def MyTransform(glueContext, dfc) -> DynamicFrameCollection:
    import sys
    from typing import Dict

    import boto3
    from awsglue.utils import getResolvedOptions

    def get_worfklow_params(args: Dict[str, str]) -> Dict[str, str]:
        """
        get_worfklow_params is delegated to retrieve the WORKFLOW parameters
        """
        glue_client = boto3.client("glue")
        if "WORKFLOW_NAME" in args and "WORKFLOW_RUN_ID" in args:
            workflow_args = glue_client.get_workflow_run_properties(
                Name=args['WORKFLOW_NAME'], RunId=args['WORKFLOW_RUN_ID'])["RunProperties"]
            print("Found the following workflow args: \n{}".format(workflow_args))
            return workflow_args
        print("Unable to find run properties for this workflow!")
        return None

    def get_worfklow_param(args: Dict[str, str], arg) -> str:
        """
        get_worfklow_param is delegated to verify if the given parameter is present in the job and return it. In case of no presence None will be returned
        """
        if args is None:
            return None
        return args[arg] if arg in args else None

    _args = getResolvedOptions(sys.argv, ['JOB_NAME', 'WORKFLOW_NAME', 'WORKFLOW_RUN_ID'])
    worfklow_params = get_worfklow_params(_args)
    job_run_id = get_worfklow_param(_args, "WORKFLOW_RUN_ID")
    # Custom run properties live in worfklow_params (the workflow run properties), not in the job arguments.
    my_parameter = get_worfklow_param(worfklow_params, "WORKFLOW_CUSTOM_PARAMETER")
If you run a Glue job from a workflow, then sys.argv (which is a list) will contain the parameters --WORKFLOW_NAME and --WORKFLOW_RUN_ID. You can use this fact to check whether the Glue job is being run by a workflow and, if so, retrieve the workflow run properties:
import sys

import boto3
from awsglue.utils import getResolvedOptions

glue_client = boto3.client("glue")


def get_workflow_properties():
    # Wrapped in a function because the snippet returns the properties to the caller.
    if '--WORKFLOW_NAME' in sys.argv and '--WORKFLOW_RUN_ID' in sys.argv:
        glue_args = getResolvedOptions(
            sys.argv, ['WORKFLOW_NAME', 'WORKFLOW_RUN_ID']
        )
        workflow_args = glue_client.get_workflow_run_properties(
            Name=glue_args['WORKFLOW_NAME'], RunId=glue_args['WORKFLOW_RUN_ID']
        )["RunProperties"]
        return {**workflow_args}
    else:
        raise Exception("GlueJobNotStartedByWorkflow")
Is there a way to send a JSON response (a dictionary of outputs) from an AWS Glue Python shell job, similar to returning a JSON response from AWS Lambda?
I am calling the Glue Python shell job like below:
response = glue.start_job_run(
    JobName='test_metrics',
    Arguments={
        '--test_metrics': 'test_metrics',
        '--s3_target_path_key': 's3://my_target',
        '--s3_target_path_value': 's3://my_target_value'
    }
)
print(response)
The response I get is a 200, stating that the Glue start_job_run call was a success. From the documentation, all I see is that the result of a Glue job is written either to S3 or to some other database.
I tried adding return {'result':'some_string'} at the end of my Glue Python shell job to test whether it works, with the code below.
import sys
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv,
                          ['JOB_NAME',
                           's3_target_path_key',
                           's3_target_path_value'])

print("Target path key is: ", args['s3_target_path_key'])
print("Target Path value is: ", args['s3_target_path_value'])

return {'result': "some_string"}
But it throws the error SyntaxError: 'return' outside function.
Glue is not made to return a response, as it is expected to run long-running operations. Blocking for a response from a long-running task is not the right approach in itself. Instead, you can use a launch job (service 1) -> execute job (service 2) -> get result (service 3) pattern. You can send a JSON response to the AWS service (service 3) that you launch from service 2 (the job that executes); e.g. if you launch a Lambda from the Glue job, you can send a JSON payload to it.
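A minimal sketch of that handoff (the bucket and Lambda names below are placeholders, not from the original post): at the end of the Glue Python shell job, write the result somewhere the next service can pick it up, or push it to that service directly:

import json

import boto3

result = {'result': 'some_string'}

# Option 1: write the result to S3 so the caller (or a Step Functions state) can read it.
s3 = boto3.client('s3')
s3.put_object(
    Bucket='my-results-bucket',                 # placeholder bucket
    Key='glue-results/test_metrics.json',
    Body=json.dumps(result),
)

# Option 2: hand the result to a downstream Lambda asynchronously.
lambda_client = boto3.client('lambda')
lambda_client.invoke(
    FunctionName='process_glue_result',         # placeholder function name
    InvocationType='Event',
    Payload=json.dumps(result),
)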
I am trying to access the AWS ETL Glue job id from the script of that job. This is the RunID that you can see in the first column in the AWS Glue Console, something like jr_5fc6d4ecf0248150067f2. How do I get it programmatically with pyspark?
As documented in https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-get-resolved-options.html, it's passed in as a command line argument to the Glue job. You can access JOB_RUN_ID and other default/reserved or custom job parameters using the getResolvedOptions() function.
import sys
from awsglue.utils import getResolvedOptions
args = getResolvedOptions(sys.argv)
job_run_id = args['JOB_RUN_ID']
NOTE: JOB_RUN_ID is a default identity parameter; we don't need to include it in the options (the second argument to getResolvedOptions()) to get its value at runtime in a Glue job.
You can use the boto3 SDK for Python to access the AWS services.
import boto3


def lambda_handler(event, context):
    glue = boto3.client(service_name='glue', region_name='us-east-2',
                        endpoint_url='https://glue.us-east-2.amazonaws.com')

    # Start a crawler.
    glue.start_crawler(Name='test_crawler')

    # Look up the job definition, then start a run of it (job name assumed from the earlier example).
    myJob = glue.get_job(JobName='test_metrics')['Job']
    myNewJobRun = glue.start_job_run(JobName=myJob['Name'])
    print(myNewJobRun['JobRunId'])
I'm looking for a good boto3 example for an already-running AWS EMR cluster into which I wish to inject a Pig step. Previously, I used boto 2.42 like this:
from boto.emr.connection import EmrConnection
from boto.emr.step import InstallPigStep, PigStep

# AWS_ACCESS_KEY = ''  # REQUIRED
# AWS_SECRET_KEY = ''  # REQUIRED
# conn = EmrConnection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
# loop next element on bucket_compare list
pig_file = 's3://elasticmapreduce/samples/pig-apache/do-reports2.pig'
INPUT = 's3://elasticmapreduce/samples/pig-apache/input/access_log_1'
OUTPUT = ''  # REQUIRED, S3 bucket for job output

pig_args = ['-p', 'INPUT=%s' % INPUT,
            '-p', 'OUTPUT=%s' % OUTPUT]
pig_step = PigStep('Process Reports', pig_file, pig_args=pig_args)
steps = [InstallPigStep(), pig_step]

conn.run_jobflow(name='prs-dev-test', steps=steps,
                 hadoop_version='2.7.2-amzn-2', ami_version='latest',
                 num_instances=2, keep_alive=False)
The main problem now is that boto3 doesn't use from boto.emr.connection import EmrConnection or from boto.emr.step import InstallPigStep, PigStep, and I can't find an equivalent set of modules.
After a bit of checking, I've found a very simple way to inject Pig script commands from within Python by calling the AWS CLI through the subprocess module. One can encapsulate and inject the desired Pig steps into an already running EMR cluster with:
import subprocess  # the AWS CLI is invoked as a shell command, so importing awscli is not needed

cmd = 'aws emr add-steps --cluster-id j-GU07FE0VTHNG --steps Type=PIG,Name="AggPigProgram",ActionOnFailure=CONTINUE,Args=[-f,s3://dev-end2end-test/pig_scripts/AggRuleBag.pig,-p,INPUT=s3://dev-end2end-test/input_location,-p,OUTPUT=s3://end2end-test/output_location]'

push = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
output, _ = push.communicate()  # wait for the CLI call to finish so returncode is populated
print(push.returncode)
Of course, you'll have to find your JobFlowID using something like:
aws emr list-clusters --active
run with the same subprocess call as above. Of course, you can add monitoring to your heart's delight instead of just a print statement.
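For example, a small sketch along the same lines that polls the step list for the same cluster ID and prints each step's state by parsing the CLI's JSON output:

import json
import subprocess

cmd = 'aws emr list-steps --cluster-id j-GU07FE0VTHNG'
out = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, check=True).stdout

for step in json.loads(out)['Steps']:
    print(step['Name'], step['Status']['State'])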
Here is how to add a new step to an existing EMR cluster job flow for a Pig job using boto3.
Note: your script log file, input and output directories should have the complete path in the format 's3://<bucket>/<directory>/<file_or_key>'.
import boto3

emrcon = boto3.client("emr")
cluster_id1 = cluster_status_file_content  # Retrieved from S3, where it was recorded on creation

step_id = emrcon.add_job_flow_steps(
    JobFlowId=str(cluster_id1),
    Steps=[{
        'Name': str(pig_job_name),
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['pig', '-l', str(pig_log_file_full_path),
                     '-f', str(pig_job_run_script_full_path),
                     '-p', 'INPUT=' + str(pig_input_dir_full_path),
                     '-p', 'OUTPUT=' + str(pig_output_dir_full_path)]
        }
    }]
)
(Screenshot in the original answer: monitoring the added step in the EMR console's Steps view.)
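If you'd rather monitor programmatically than through the console, here is a short sketch continuing from the snippet above: add_job_flow_steps returns the new step IDs, which you can poll with describe_step until the Pig job finishes.

import time

for sid in step_id['StepIds']:
    while True:
        state = emrcon.describe_step(ClusterId=str(cluster_id1), StepId=sid)['Step']['Status']['State']
        print(sid, state)
        if state in ('COMPLETED', 'FAILED', 'CANCELLED'):
            break
        time.sleep(30)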