We have a JSON structure that we need to parse and use in Impala/Hive.
Since the JSON structure is evolving, we thought we could use Avro.
We plan to parse the JSON and format it as Avro.
The Avro-formatted data can be used directly by Impala. Let's say we store it in the HDFS directory /user/hdfs/person_data/.
We will keep putting Avro-serialized data into that folder as we parse the input JSON, one record at a time.
Let's say we have an Avro schema file for person (hdfs://user/hdfs/avro/scheams/person.avsc) like this:
{
"type": "record",
"namespace": "avro",
"name": "PersonInfo",
"fields": [
{ "name": "first", "type": "string" },
{ "name": "last", "type": "string" },
{ "name": "age", "type": "int" }
]
}
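For illustration, the "parse the JSON and write it as Avro" step could look something like the sketch below (fastavro and the file names are just assumptions for the example, not part of our actual setup); the resulting container file is then put into /user/hdfs/person_data/:
import json
from fastavro import parse_schema, writer

# Load the schema shown above (assumes a local copy of person.avsc)
with open("person.avsc") as f:
    schema = parse_schema(json.load(f))

# One parsed input JSON record, shaped to match the schema
records = [{"first": "John", "last": "Doe", "age": 30}]

# Write an Avro container file, then put it into /user/hdfs/person_data/
with open("person_data_0001.avro", "wb") as out:
    writer(out, schema, records)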
For this, we will create an external table in Hive:
CREATE EXTERNAL TABLE kst
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/user/hdfs/person_data/'
TBLPROPERTIES (
'avro.schema.url'='hdfs://user/hdfs/avro/scheams/person.avsc');
Let's say tomorrow we need to change this schema (hdfs://user/hdfs/avro/scheams/person.avsc) to:
{
"type": "record",
"namespace": "avro",
"name": "PersonInfo",
"fields": [
{ "name": "first", "type": "string" },
{ "name": "last", "type": "string" },
{ "name": "age", "type": "int" },
{ "name": "city", "type": "string" }
]
}
Can we keep putting the newly serialized data in the same HDFS directory /user/hdfs/person_data/, and will Impala/Hive still work, returning NULL for the city column on old records?
Yes, you can, but for all new columns you should specify a default value:
{ "name": "newField", "type": "int", "default":999 }
or mark them as nullable:
{ "name": "newField", "type": ["null", "int"] }
I have a DMS CDC (change data capture) task set up to stream from a MySQL database to a Kinesis stream that a Lambda is connected to.
I was hoping to ultimately receive only the values that have changed, not an entire dump of the row; that way I would know which column was changed (at the moment it's impossible to decipher this without setting up another system to track changes myself).
For example, with the following mapping rule:
{
"rule-type": "selection",
"rule-id": "1",
"rule-name": "1",
"object-locator": {
"schema-name": "my-schema",
"table-name": "product"
},
"rule-action": "include",
"filters": []
},
and if I changed the name property of a record in the product table, I would hope to receive a record like this:
{
"data": {
"name": "newValue"
},
"metadata": {
"timestamp": "2021-07-26T06:47:15.762584Z",
"record-type": "data",
"operation": "update",
"partition-key-type": "schema-table",
"schema-name": "my-schema",
"table-name": "product",
"transaction-id": 8633730840
}
}
However, what I actually receive is something like this:
{
"data": {
"name": "newValue",
"id": "unchangedId",
"quantity": "unchangedQuantity",
"otherProperty": "unchangedValue"
},
"metadata": {
"timestamp": "2021-07-26T06:47:15.762584Z",
"record-type": "data",
"operation": "update",
"partition-key-type": "schema-table",
"schema-name": "my-schema",
"table-name": "product",
"transaction-id": 8633730840
}
}
As you can see when receiving this, it's impossible to decipher what property has changed without setting up additional systems to track this.
I've found another Stack Overflow thread where someone posted an issue because their CDC is doing exactly what I want mine to do. Can anyone point me in the right direction to achieve this?
I found the answer after digging into AWS documentation some more.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.BeforeImage
Different source database engines provide different amounts of information for a before image:
Oracle provides updates to columns only if they change.
PostgreSQL provides only data for columns that are part of the primary key (changed or not).
MySQL generally provides data for all columns (changed or not).
I used BeforeImageSettings in the task settings to include the original data with the payloads.
"BeforeImageSettings": {
"EnableBeforeImage": true,
"FieldName": "before-image",
"ColumnFilter": "all"
}
While this still gives me the whole record, it gives me enough data to work out what's changed without additional systems.
{
"data": {
"name": "newValue",
"id": "unchangedId",
"quantity": "unchangedQuantity",
"otherProperty": "unchangedValue"
},
"before-image": {
"name": "oldValue",
"id": "unchangedId",
"quantity": "unchangedQuantity",
"otherProperty": "unchangedValue"
},
"metadata": {
"timestamp": "2021-07-26T06:47:15.762584Z",
"record-type": "data",
"operation": "update",
"partition-key-type": "schema-table",
"schema-name": "my-schema",
"table-name": "product",
"transaction-id": 8633730840
}
}
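With the before image included, working out which columns changed can be done in the consumer itself. Here is a rough, illustrative sketch (not my production code) of a Lambda handler that diffs the two images, assuming the DMS records arrive base64-encoded in the Kinesis event:
import base64
import json

def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("metadata", {}).get("operation") != "update":
            continue
        after = payload.get("data", {})
        before = payload.get("before-image", {})
        # Columns whose value differs between the before image and the new data
        changed = {col: after[col] for col in after if before.get(col) != after[col]}
        print(changed)  # e.g. {'name': 'newValue'}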
I am trying to save this Avro schema. I get the message that the schema is not valid. Can someone share why it's not valid?
{
"type": "record",
"name": "Interactions",
"namespace": "com.amazonaws.personalize.schema",
"fields": [
{
"name": "InvoiceNo",
"type": "int"
},
{
"name": "StockCode",
"type": "int"
},
{
"name": "Description",
"type": "long"
},
{
"name": "Quantity",
"type": "string"
},
{
"name": "InvoiceDate",
"type": "string"
},
{
"name": "UnitPrice",
"type": "string"
},
{
"name": "CustomerID",
"type": "string"
},
{
"name": "CustomerID",
"type": "string"
},
{
"name": "Country",
"type": "string"
}
],
"version": "1.0"
}
I'm a bit late to the party here but I think your issue is twofold.
(1) You haven't reformatted your columns to use the field names that Personalize wants to see. The required fields for Interactions are USER_ID, ITEM_ID, and TIMESTAMP (with TIMESTAMP being in Unix epoch format). See the reference here.
(2) The five specified fields for Interactions are USER_ID, ITEM_ID, TIMESTAMP, EVENT_TYPE, and EVENT_VALUE. If you include more fields, they will be considered metadata fields, and you can only include up to 5 metadata fields. If you do include them AND their data type is "string", they must be specified as "categorical". See page 35 of the Personalize Developer's Guide for an example. (Also note that CustomerID appears twice in your field list; duplicate field names alone make an Avro record schema invalid.)
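For reference, a valid Interactions schema along those lines could look like this; which columns you map to EVENT_TYPE/EVENT_VALUE and which you keep as categorical metadata is up to you (the choices below are purely illustrative):
{
  "type": "record",
  "name": "Interactions",
  "namespace": "com.amazonaws.personalize.schema",
  "fields": [
    { "name": "USER_ID", "type": "string" },
    { "name": "ITEM_ID", "type": "string" },
    { "name": "TIMESTAMP", "type": "long" },
    { "name": "EVENT_TYPE", "type": "string" },
    { "name": "EVENT_VALUE", "type": "float" },
    { "name": "COUNTRY", "type": "string", "categorical": true }
  ],
  "version": "1.0"
}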
Hope this helps!
I've been unable to get any query to work against my AWS Glue Partitioned table. The error I'm getting is
HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: Error:
type expected at the position 0 of 'STRING' but 'STRING' is found.
(Service: null; Status Code: 0; Error Code: null; Request ID: null)
I've found one other thread that brings up the fact that the database name and table name cannot contain characters other than alphanumerics and underscores. So I made sure the database name, table name, and all column names adhere to this restriction. The only object that does not adhere to it is my S3 bucket name, which would be very difficult to change.
Here are the table definitions and parquet-tools dumps of the data.
AWS Glue Table Definition
{
"Table": {
"UpdateTime": 1545845064.0,
"PartitionKeys": [
{
"Comment": "call_time year",
"Type": "INT",
"Name": "date_year"
},
{
"Comment": "call_time month",
"Type": "INT",
"Name": "date_month"
},
{
"Comment": "call_time day",
"Type": "INT",
"Name": "date_day"
}
],
"StorageDescriptor": {
"OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
"SortColumns": [],
"InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
"SerdeInfo": {
"SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe",
"Name": "ser_de_info_system_admin_created",
"Parameters": {
"serialization.format": "1"
}
},
"BucketColumns": [],
"Parameters": {},
"Location": "s3://ph-data-lake-cududfs2z3xveg5t/curated/system/admin_created/",
"NumberOfBuckets": 0,
"StoredAsSubDirectories": false,
"Columns": [
{
"Comment": "Unique user ID",
"Type": "STRING",
"Name": "user_id"
},
{
"Comment": "Unique group ID",
"Type": "STRING",
"Name": "group_id"
},
{
"Comment": "Date and time the message was published",
"Type": "TIMESTAMP",
"Name": "call_time"
},
{
"Comment": "call_time year",
"Type": "INT",
"Name": "date_year"
},
{
"Comment": "call_time month",
"Type": "INT",
"Name": "date_month"
},
{
"Comment": "call_time day",
"Type": "INT",
"Name": "date_day"
},
{
"Comment": "Given name for user",
"Type": "STRING",
"Name": "given_name"
},
{
"Comment": "IANA time zone for user",
"Type": "STRING",
"Name": "time_zone"
},
{
"Comment": "Name that links to geneaology",
"Type": "STRING",
"Name": "family_name"
},
{
"Comment": "Email address for user",
"Type": "STRING",
"Name": "email"
},
{
"Comment": "RFC BCP 47 code set in this user's profile language and region",
"Type": "STRING",
"Name": "language"
},
{
"Comment": "Phone number including ITU-T ITU-T E.164 country codes",
"Type": "STRING",
"Name": "phone"
},
{
"Comment": "Date user was created",
"Type": "TIMESTAMP",
"Name": "date_created"
},
{
"Comment": "User role",
"Type": "STRING",
"Name": "role"
},
{
"Comment": "Provider dashboard preferences",
"Type": "STRUCT<portal_welcome_done:BOOLEAN,weekend_digests:BOOLEAN,patients_hidden:BOOLEAN,last_announcement:STRING>",
"Name": "preferences"
},
{
"Comment": "Provider notification settings",
"Type": "STRUCT<digest_email:BOOLEAN>",
"Name": "notifications"
}
],
"Compressed": true
},
"Parameters": {
"classification": "parquet",
"parquet.compress": "SNAPPY"
},
"Description": "System wide admin_created messages",
"Name": "system_admin_created",
"TableType": "EXTERNAL_TABLE",
"Retention": 0
}
}
AWS Athena schema
CREATE EXTERNAL TABLE `system_admin_created`(
`user_id` STRING COMMENT 'Unique user ID',
`group_id` STRING COMMENT 'Unique group ID',
`call_time` TIMESTAMP COMMENT 'Date and time the message was published',
`date_year` INT COMMENT 'call_time year',
`date_month` INT COMMENT 'call_time month',
`date_day` INT COMMENT 'call_time day',
`given_name` STRING COMMENT 'Given name for user',
`time_zone` STRING COMMENT 'IANA time zone for user',
`family_name` STRING COMMENT 'Name that links to geneaology',
`email` STRING COMMENT 'Email address for user',
`language` STRING COMMENT 'RFC BCP 47 code set in this user\'s profile language and region',
`phone` STRING COMMENT 'Phone number including ITU-T ITU-T E.164 country codes',
`date_created` TIMESTAMP COMMENT 'Date user was created',
`role` STRING COMMENT 'User role',
`preferences` STRUCT<portal_welcome_done:BOOLEAN,weekend_digests:BOOLEAN,patients_hidden:BOOLEAN,last_announcement:STRING> COMMENT 'Provider dashboard preferences',
`notifications` STRUCT<digest_email:BOOLEAN> COMMENT 'Provider notification settings')
PARTITIONED BY (
`date_year` INT COMMENT 'call_time year',
`date_month` INT COMMENT 'call_time month',
`date_day` INT COMMENT 'call_time day')
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
's3://ph-data-lake-cududfs2z3xveg5t/curated/system/admin_created/'
TBLPROPERTIES (
'classification'='parquet',
'parquet.compress'='SNAPPY')
parquet-tools cat
role = admin
date_created = 2018-01-11T14:40:23.142Z
preferences:
.patients_hidden = false
.weekend_digests = true
.portal_welcome_done = true
email = foo.barr+123#example.com
notifications:
.digest_email = true
group_id = 5a5399df23a804001aa25227
given_name = foo
call_time = 2018-01-11T14:40:23.000Z
time_zone = US/Pacific
family_name = bar
language = en-US
user_id = 5a5777572060a700170240c3
parquet-tools schema
message spark_schema {
optional binary role (UTF8);
optional binary date_created (UTF8);
optional group preferences {
optional boolean patients_hidden;
optional boolean weekend_digests;
optional boolean portal_welcome_done;
optional binary last_announcement (UTF8);
}
optional binary email (UTF8);
optional group notifications {
optional boolean digest_email;
}
optional binary group_id (UTF8);
optional binary given_name (UTF8);
optional binary call_time (UTF8);
optional binary time_zone (UTF8);
optional binary family_name (UTF8);
optional binary language (UTF8);
optional binary user_id (UTF8);
optional binary phone (UTF8);
}
I ran into a similar PrestoException, and the cause was using uppercase letters for the column type. Once I changed 'VARCHAR(10)' to 'varchar(10)', it worked.
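If you build the column definitions programmatically, a small guard like the sketch below (assuming Glue-style column dicts, as in the boto3 snippet further down) normalizes the type names before the table is created:
columns = [
    {"Name": "user_id", "Type": "STRING"},
    {"Name": "date_year", "Type": "INT"},
]
# Lowercase the Hive type names ('STRING' -> 'string') before calling create_table;
# this is what avoided the PrestoException in my case.
columns = [{**c, "Type": c["Type"].lower()} for c in columns]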
I was declaring the partition keys as fields in the table. I also ran into the Parquet vs. Hive difference in TIMESTAMP handling and switched those columns to ISO 8601 strings. From there I pretty much gave up, because Athena throws a schema error if all the Parquet files in the S3 bucket do not have the same schema as the Athena table; with optional fields and sparse columns this is guaranteed to happen.
I also ran into this error and, of course, the error message ended up telling me nothing about the actual problem. I had the exact same error as the original poster.
I am creating my Glue tables via the Python boto3 API, feeding it the column names, types, partition columns, and a few other things. The problem:
Here is the code I was using to create the table:
import boto3

glue_clt = boto3.client("glue", region_name="us-east-1")

# database, table, table_cols, table_location and partition_cols are built elsewhere
glue_clt.create_table(
    DatabaseName=database,
    TableInput={
        "Name": table,
        "StorageDescriptor": {
            "Columns": table_cols,
            "Location": table_location,
            "InputFormat": "org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe",
            }
        },
        "PartitionKeys": partition_cols,
        "TableType": "EXTERNAL_TABLE"
    }
)
So I ended up defining all the column names and types in the Columns input to the API, and I was also giving it the name and type of each partition column in the PartitionKeys input. When I browsed to the AWS console, I realized that because I had defined the partition columns in both Columns and PartitionKeys, each of them was defined twice in the table.
Interestingly enough, if you try to do this via the console, it will throw a more descriptive error letting you know that the column already exists (if you try adding a partition column that already exists in the table).
TO SOLVE:
I removed the partition columns and their types from the Columns input and instead fed them only through the PartitionKeys input, so they wouldn't be put on the table twice (see the sketch below). Very frustrating that this ultimately caused the same error message as the OP's when querying through Athena.
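A sketch of that fix (the column names here are just illustrative): build the two lists separately so a partition column never ends up in Columns:
# Illustrative only: split the full column list so that partition columns
# go into PartitionKeys and nowhere else.
all_cols = [
    {"Name": "user_id", "Type": "string"},
    {"Name": "date_year", "Type": "int"},
]
partition_names = {"date_year"}

table_cols = [c for c in all_cols if c["Name"] not in partition_names]
partition_cols = [c for c in all_cols if c["Name"] in partition_names]
# table_cols and partition_cols are then passed to create_table() as
# Columns and PartitionKeys respectively, as in the snippet above.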
This could also be related to how you created your database (whether through CloudFormation, the UI, or the CLI), or to whether you have any forbidden characters like '-'. We have hyphens in our database and table names and it renders much of the functionality useless.
I'm trying to use Data Pipeline to run a Spark application. How can I access the input/output I specify (S3DataNode) for the EmrActivity inside my Spark application?
My question is similar to this - https://forums.aws.amazon.com/message.jspa?messageID=507877
Earlier I used to pass the input and output as arguments to the Spark application in steps.
Thanks
I ran across the same question. There's very limited documentation around this. This is my understanding:
You specify the input and output for the EmrActivity. This will create the dependencies between the data nodes and the activity.
In the EmrActivity, you can reference the input and output sources like this: #{input.directoryPath} and #{output.directoryPath}
Example:
...
{
"name": "Input Data Node",
"id": "inputDataNode",
"type": "S3DataNode",
"directoryPath": "s3://my/raw/data/path"
},
{
"name": "transform",
"id": "transform",
"type": "EmrActivity",
"step": [
"s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar,s3://my/transform/script.sh,#{input.directoryPath},#{output.directoryPath}"
],
"runsOn": {
"ref": "emrcluster"
},
"input": {
"ref": "inputDataNode"
},
"output": {
"ref": "outputDataNode"
}
},
{
"name": "Output Data Node",
"id": "outputDataNode",
"type": "S3DataNode",
"directoryPath": "s3://path/to/output/"
},
...
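On the Spark side, the two resolved paths simply arrive as trailing arguments of the step, so the application can read them like any other command-line arguments. Here is a minimal, illustrative PySpark skeleton (assuming script.sh forwards its arguments to spark-submit, and that the input happens to be JSON):
import sys
from pyspark.sql import SparkSession

if __name__ == "__main__":
    # Arguments appended by Data Pipeline: #{input.directoryPath} and #{output.directoryPath}
    input_path, output_path = sys.argv[1], sys.argv[2]

    spark = SparkSession.builder.appName("transform").getOrCreate()
    df = spark.read.json(input_path)  # adjust the reader to your actual input format
    df.write.mode("overwrite").parquet(output_path)
    spark.stop()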
I have been using the UNLOAD statement in Redshift for a while now; it makes it easy to dump files to S3 so that people can analyse them.
The time has come to try to automate it. We have Amazon Data Pipeline running several tasks, and I wanted to run a SQLActivity to execute UNLOAD automatically. I use a SQL script hosted in S3.
The query itself is correct, but what I have been trying to figure out is how I can dynamically assign the name of the file. For example:
UNLOAD('<the_query>')
TO 's3://my-bucket/' || to_char(current_date)
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF
doesn't work, and of course I suspect that you can't execute functions (to_char) in the TO line. Is there any other way I can do it?
And if UNLOAD is not the way to go, do I have any other options for automating such tasks with the currently available infrastructure (Redshift + S3 + Data Pipeline; our Amazon EMR is not active yet)?
The only thing I thought could work (but I'm not sure) is, instead of pointing to a script file, to copy the script into the Script option of the SQLActivity (at the moment it points to a file) and reference {#ScheduleStartTime}.
Why not use RedshiftCopyActivity to copy from Redshift to S3? Its input is a RedshiftDataNode and its output is an S3DataNode, where you can specify an expression for directoryPath.
You can also specify the transformSql property in RedshiftCopyActivity to override the default query of select * from the input Redshift table.
Sample pipeline:
{
"objects": [{
"id": "CSVId1",
"name": "DefaultCSV1",
"type": "CSV"
}, {
"id": "RedshiftDatabaseId1",
"databaseName": "dbname",
"username": "user",
"name": "DefaultRedshiftDatabase1",
"*password": "password",
"type": "RedshiftDatabase",
"clusterId": "redshiftclusterId"
}, {
"id": "Default",
"scheduleType": "timeseries",
"failureAndRerunMode": "CASCADE",
"name": "Default",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
}, {
"id": "RedshiftDataNodeId1",
"schedule": {
"ref": "ScheduleId1"
},
"tableName": "orders",
"name": "DefaultRedshiftDataNode1",
"type": "RedshiftDataNode",
"database": {
"ref": "RedshiftDatabaseId1"
}
}, {
"id": "Ec2ResourceId1",
"schedule": {
"ref": "ScheduleId1"
},
"securityGroups": "MySecurityGroup",
"name": "DefaultEc2Resource1",
"role": "DataPipelineDefaultRole",
"logUri": "s3://myLogs",
"resourceRole": "DataPipelineDefaultResourceRole",
"type": "Ec2Resource"
}, {
"myComment": "This object is used to control the task schedule.",
"id": "DefaultSchedule1",
"name": "RunOnce",
"occurrences": "1",
"period": "1 Day",
"type": "Schedule",
"startAt": "FIRST_ACTIVATION_DATE_TIME"
}, {
"id": "S3DataNodeId1",
"schedule": {
"ref": "ScheduleId1"
},
"directoryPath": "s3://my-bucket/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
"name": "DefaultS3DataNode1",
"dataFormat": {
"ref": "CSVId1"
},
"type": "S3DataNode"
}, {
"id": "RedshiftCopyActivityId1",
"output": {
"ref": "S3DataNodeId1"
},
"input": {
"ref": "RedshiftDataNodeId1"
},
"schedule": {
"ref": "ScheduleId1"
},
"name": "DefaultRedshiftCopyActivity1",
"runsOn": {
"ref": "Ec2ResourceId1"
},
"type": "RedshiftCopyActivity"
}]
}
Are you able to SSH into the cluster? If so, I would suggest writing a shell script where you can create variables and whatnot, then pass those variables into the connection's statement query.
You can do this with a Redshift procedural (stored procedure) wrapper around the UNLOAD statement, dynamically deriving the S3 path name.
In your job, call the procedure; it builds the UNLOAD statement dynamically and executes it.
This way you can avoid the other services, but it depends on what kind of use case you are working on.
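If you would rather keep the logic outside the database, the same "derive the path, then run UNLOAD" idea can be sketched from a small script instead of a stored procedure (connection details are placeholders and psycopg2 is just one possible driver):
from datetime import date

import psycopg2  # any Postgres-compatible driver works for Redshift

# Derive the S3 prefix dynamically, e.g. one folder per day
s3_path = f"s3://my-bucket/{date.today().isoformat()}/"

unload_sql = f"""
    UNLOAD ('<the_query>')
    TO '{s3_path}'
    WITH CREDENTIALS '<credentials>'
    ALLOWOVERWRITE
    PARALLEL OFF
"""

with psycopg2.connect(host="<cluster-endpoint>", port=5439,
                      dbname="<db>", user="<user>", password="<password>") as conn:
    with conn.cursor() as cur:
        cur.execute(unload_sql)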