I'm trying to convert a DataFrame to a DynamicFrame using the toDF and fromDF functions (https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-dynamic-frame.html#aws-glue-api-crawler-pyspark-extensions-dynamic-frame-fromDF), as per the code snippet below:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## #params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## #type: DataSource
## #args: [database = "test-3", table_name = "test", transformation_ctx = "datasource0"]
## #return: datasource0
## #inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test-3", table_name = "test", transformation_ctx = "datasource0")
foo = datasource0.toDF()
bar = DynamicFrame.fromDF(foo, glueContext, "bar")
However, I'm getting an error on the line:
bar = DynamicFrame.fromDF(foo, glueContext, "bar")
The error says:
NameError: name 'DynamicFrame' is not defined
I've tried the usual googling to no avail, and I can't see what I've done wrong compared to other examples. Does anyone know why I'm getting this error and how to resolve it?
from awsglue.dynamicframe import DynamicFrame
Import DynamicFrame
You need to import the DynamicFrame class from the awsglue.dynamicframe module:
from awsglue.dynamicframe import DynamicFrame
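With that import added at the top of the script, the conversion in the question should run as-is. A minimal sketch, reusing the question's datasource0 and glueContext:
from awsglue.dynamicframe import DynamicFrame

# DynamicFrame -> Spark DataFrame, then back to a DynamicFrame
foo = datasource0.toDF()
bar = DynamicFrame.fromDF(foo, glueContext, "bar")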
There are a lot of things missing from the examples provided with the AWS Glue ETL documentation.
However, you can refer to the following GitHub repository, which contains lots of examples of performing basic tasks with Glue ETL:
AWS Glue samples
Related
I am trying to get all records from an RDS PostgreSQL database in AWS and load them into an S3 bucket as a CSV file.
The script I am using is:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)
# Script generated for node JDBC Connection
JDBCConnection_node1 = glueContext.create_dynamic_frame.from_catalog(
    database="core",
    table_name="core_public_table",
    transformation_ctx="JDBCConnection_node1",
)
# Script generated for node ApplyMapping
ApplyMapping_node2 = ApplyMapping.apply(
    frame=JDBCConnection_node1,
    mappings=[
        ("sentiment", "string", "sentiment", "string"),
        ("scheduling", "long", "scheduling", "long"),
    ],
    transformation_ctx="ApplyMapping_node2",
)
# Script generated for node S3 bucket
S3bucket_node3 = glueContext.write_dynamic_frame.from_options(
    frame=ApplyMapping_node2,
    connection_type="s3",
    format="csv",
    connection_options={"path": "s3://path_to_s3", "partitionKeys": []},
    transformation_ctx="S3bucket_node3",
)
job.commit()
But this job is failing with the error: An error occurred while calling getDynamicFrame.
I have read every post about this on Stack Overflow, but I can't solve it. Can anyone please help me with this issue?
I use AWS Glue and Apache Hudi to replicate data from RDS to S3. If I execute the following job, 2 Parquet files (the initial one and the updated one) are generated in the S3 bucket (basePath). In this case, I want only the latest file and would like to delete the old one.
Does anyone know how to keep only the latest file in the bucket?
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from awsglue.context import GlueContext
from awsglue.job import Job
## #params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
spark = SparkSession.builder.config('spark.serializer','org.apache.spark.serializer.KryoSerializer').getOrCreate()
sc = spark.sparkContext
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
dataGen = sc._jvm.org.apache.hudi.QuickstartUtils.DataGenerator()
inserts = sc._jvm.org.apache.hudi.QuickstartUtils.convertToStringList(dataGen.generateInserts(5))
df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
df.show()
tableName = 'hudi_mor_athena_sample'
bucketName = 'cm-sato-hudi-sample--datalake'
basePath = f's3://{bucketName}/{tableName}'
hudi_options = {
    'hoodie.table.name': tableName,
    'hoodie.datasource.write.storage.type': 'MERGE_ON_READ',
    'hoodie.compact.inline': 'false',
    'hoodie.datasource.write.recordkey.field': 'uuid',
    'hoodie.datasource.write.partitionpath.field': 'partitionpath',
    'hoodie.datasource.write.table.name': tableName,
    'hoodie.datasource.write.operation': 'upsert',
    'hoodie.datasource.write.precombine.field': 'ts',
    'hoodie.upsert.shuffle.parallelism': 2,
    'hoodie.insert.shuffle.parallelism': 2,
}
df.write.format("hudi"). \
options(**hudi_options). \
mode("overwrite"). \
save(basePath)
updates = sc._jvm.org.apache.hudi.QuickstartUtils.convertToStringList(dataGen.generateUpdates(3))
df = spark.read.json(spark.sparkContext.parallelize(updates, 2))
df.show()
# update
df.write.format("hudi"). \
options(**hudi_options). \
mode("append"). \
save(basePath)
job.commit()
Instead of mode("append"), use mode("overwrite").
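Applied to the question's script, only the mode of the second (update) write changes. A minimal sketch:
# update step rewritten with overwrite instead of append
df.write.format("hudi"). \
    options(**hudi_options). \
    mode("overwrite"). \
    save(basePath)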
I wanted to find out if I can apply the FindMatches ML transform in AWS Glue to a Spark DataFrame. Currently I can use it on a DynamicFrame. Below is the syntax for using the FindMatches transform on a DynamicFrame:
<output DynamicFrame on which the ml transform has been applied> = FindMatches.apply(
    frame = <input DynamicFrame>,
    transformId = <transformation id of the FindMatches ML transform created separately>)
I have tried using a DataFrame in place of the input DynamicFrame, and when I run the Glue job it fails. The error shown is below:
"AttributeError: 'DataFrame' object has no attribute 'glue_ctx'"
Below is the code where I tried using a DataFrame:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglueml.transforms import FindMatches
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "hospitality",
    table_name = "personinputdata", transformation_ctx = "datasource0")
df0 = datasource0.toDF()
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "MATCH_CATALOG",
    database = "hospitality", table_name = "personinputdata", transformation_ctx = "resolvechoice1")
findmatchdf = FindMatches.apply(frame = df0,
    transformId = "tfm-01cc9b02c93640cfc7ce5ea91745e24258cb2e01")
findmatchdf.show()
And below is the code where I used a DynamicFrame instead of a DataFrame; this version works.
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglueml.transforms import FindMatches
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "hospitality",
    table_name = "patientinputdata", transformation_ctx = "datasource0")
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "MATCH_CATALOG",
    database = "hospitality", table_name = "patientinputdata", transformation_ctx = "resolvechoice1")
findmatches2 = FindMatches.apply(frame = resolvechoice1,
    transformId = "tfm-0cadd1e6d2da40d7c18db7836e92be93833b6019", transformation_ctx = "findmatches2")
I tried searching online for the source code of the FindMatches ML transform but could not find it anywhere.
FindMatches works on dynamic frames only, as you already know...
So you can convert your Spark DataFrame to a DynamicFrame whenever you want to run it:
from awsglue.dynamicframe import DynamicFrame
glueContext = GlueContext(SparkContext.getOrCreate())
Dyf0 = DynamicFrame.fromDF(df0, glueContext, "anyname")
And then run FindMatches as required.
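A minimal sketch of that last step, reusing the transformId from the question's script (the variable name findmatches_result is only illustrative):
from awsglueml.transforms import FindMatches

# Apply the ML transform to the converted DynamicFrame, not the DataFrame
findmatches_result = FindMatches.apply(
    frame = Dyf0,
    transformId = "tfm-01cc9b02c93640cfc7ce5ea91745e24258cb2e01")
findmatches_result.show()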
I am new to AWS Glue. As per the AWS Glue documentation, the Spigot function helps you write sample records from a DynamicFrame to an S3 directory. But when I run this, it is not creating any file under that S3 directory. Any input on what I am doing wrong? Below is the test code.
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## #params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "amssurveydb", table_name = "amssurvey", transformation_ctx = "datasource0")
## #type: SplitRows
split1 = SplitRows.apply(datasource0, {"count": {">": 50}}, "split11", "split12", transformation_ctx = "split1")
selFromCol1 = SelectFromCollection.apply(dfc = split1, key = "split11", transformation_ctx = "selFromCol1")
selFromCol2 = SelectFromCollection.apply(dfc = split1, key = "split12", transformation_ctx = "selFromCol2")
spigot1 = Spigot.apply(frame = selFromCol1, path = "s3://asgqatestautomation3/SourceFiles/spigot1Op", options = {"topk":5},transformation_ctx ="spigot1")
job.commit()
Consider the following aws glue job code:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
## #params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
medicare_dynamicframe = glueContext.create_dynamic_frame.from_catalog(
    database = "my_database",
    table_name = "my_table")
medicare_dynamicframe.printSchema()
job.commit()
It prints something like this (note that price_key is not in the second position):
root
|-- day_key: string
...
|-- price_key: string
Meanwhile, my_table in the data lake is defined with day_key as int (first column) and price_key as decimal(25,0) (second column).
Maybe I am wrong, but from the sources it looks like AWS Glue uses the table and database only to get the S3 path to the data and completely ignores any type definitions. Maybe that is normal for some data formats like Parquet, but not for CSV.
How do I configure AWS Glue to set the schema from the data lake table definition for a dynamic frame backed by CSV?