This is more a lack of understanding on my part, but I cannot seem to debug this.
I have created a CodePipeline that runs terraform apply (which in turn creates the AWS infrastructure for me). The pipeline seems to be working.
I need to implement the same pipeline in another account. How can I do so?
I tried to get the pipeline's JSON using the command below:
aws codepipeline get-pipeline --name
I then converted the JSON to YAML.
When I try to deploy the YAML template in the other account, I get the error below:
Template format error: At least one Resources member must be defined.
Issues:
1.) What is the best way to export a CodePipeline to a CloudFormation template?
2.) The approach I used didn't work; how do I solve it?
{
"pipeline": {
"name": "my-code-pipeline",
"roleArn": "arn:aws:iam::aws-account-id:role/service-role/AWSCodePipelineServiceRole-aws-region-my-code-pipeline",
"artifactStore": {
"type": "S3",
"location": "codepipeline-aws-region-45856771421"
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "ThirdParty",
"provider": "GitHub",
"version": "1"
},
"runOrder": 1,
"configuration": {
"Branch": "master",
"OAuthToken": "****",
"Owner": "github-account-name",
"PollForSourceChanges": "false",
"Repo": "repo-name"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"inputArtifacts": [],
"region": "aws-region",
"namespace": "SourceVariables"
}
]
},
{
"name": "codebuild-for-terraform-init-and-plan",
"actions": [
{
"name": "codebuild-for-terraform-init",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ProjectName": "my-code-pipeline-build-stage"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"region": "aws-region"
}
]
},
{
"name": "manual-approve",
"actions": [
{
"name": "approval",
"actionTypeId": {
"category": "Approval",
"owner": "AWS",
"provider": "Manual",
"version": "1"
},
"runOrder": 1,
"configuration": {
"NotificationArn": "arn:aws:sns:aws-region:aws-account-id:Email-Service"
},
"outputArtifacts": [],
"inputArtifacts": [],
"region": "aws-region"
}
]
},
{
"name": "codebuild-for-terraform-apply",
"actions": [
{
"name": "codebuild-for-terraform-apply",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ProjectName": "codebuild-project-for-apply"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"region": "aws-region"
}
]
}
],
"version": 11
},
"metadata": {
"pipelineArn": "arn:aws:codepipeline:aws-region:aws-account-id:my-code-pipeline",
"created": "2020-09-17T13:12:50.085000+05:30",
"updated": "2020-09-21T15:46:19.613000+05:30"
}
}
The JSON above is what I converted to YAML and used as the CloudFormation template.
The aws codepipeline get-pipeline --name CLI command returns the pipeline structure and pipeline metadata, but it is not in the same format as a CloudFormation template (or the resource section of one).
There is no built-in support for exporting existing AWS resources to a CloudFormation template, though you do have a couple of options:
1. Use former2 (built and maintained by AWS Hero Ian Mckay) to generate a CloudFormation template from the resources you select.
2. Take the JSON output from the aws codepipeline get-pipeline --name command you used and manually craft a CloudFormation template. The pipeline will be one resource in the Resources section of the full template. The info it contains is pretty close, but it needs some adjustments to conform to the CloudFormation resource specification for a CodePipeline, which you can find here. You'll also need to do the same for any other resources you want to bring into the template, using the corresponding aws <service-name> describe commands.
If you go with option 2 (and even if you don't), I recommend using cfn-lint with your code editor to help adhere to the spec.
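To make option 2 concrete, here is a minimal sketch in Python (the file names and logical resource ID are assumptions): it wraps the get-pipeline output in the skeleton whose absence causes the "At least one Resources member must be defined" error, leaving the property-name conversion to you.

```python
# Hedged sketch: wrap the `aws codepipeline get-pipeline` output in a minimal
# CloudFormation skeleton. File names and the logical resource ID are assumptions.
import json

with open("pipeline.json") as f:
    pipeline = json.load(f)["pipeline"]   # drop the top-level "metadata" block

pipeline.pop("version", None)             # pipeline revision, not a CloudFormation property

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyCodePipeline": {
            "Type": "AWS::CodePipeline::Pipeline",
            # NOTE: CloudFormation expects UpperCamelCase keys (Name, RoleArn,
            # ArtifactStore, Stages, ...), so the lowerCamelCase keys returned by
            # get-pipeline still need to be renamed by hand; cfn-lint will flag the rest.
            "Properties": pipeline,
        }
    },
}

with open("codepipeline-template.json", "w") as f:
    json.dump(template, f, indent=2)
```

Before deploying the result in the second account, you would also parameterize the account-specific values such as the service role ARN, the artifact bucket, and the GitHub OAuth token.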
We have a requirement to pass the output schema of the Wrangler as a runtime argument.
Below are the formats we tried, but nothing seems to work. Can anyone guide us on how to provide the schema as a runtime argument through the UI or a REST API call?
[
{
"name": "etlSchemaBody",
"schema": {
"type": "record",
"name": "etlSchemaBody",
"fields": [
{
"name": "body_1",
"type": [
"string",
"null"
]
},
{
"name": "body_2",
"type": [
"string",
"null"
]
},
{
"name": "body_3",
"type": [
"string",
"null"
]
},
{
"name": "body_4",
"type": [
"string",
"null"
]
},
{
"name": "body_5",
"type": [
"string",
"null"
]
}
]
}
}
]
"{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"body_1\",\"type\":[\"string\",\"null\"]},{\"name\":\"body_2\",\"type\":[\"string\",\"null\"]},{\"name\":\"body_3\",\"type\":[\"string\",\"null\"]},{\"name\":\"body_4\",\"type\":[\"string\",\"null\"]},{\"name\":\"body_5\",\"type\":[\"string\",\"null\"]}]}"
Instead of
"{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"body_1\",\"type\":[\"string\",\"null\"]},{\"name\":\"body_2\",\"type\":[\"string\",\"null\"]},{\"name\":\"body_3\",\"type\":[\"string\",\"null\"]},{\"name\":\"body_4\",\"type\":[\"string\",\"null\"]},{\"name\":\"body_5\",\"type\":[\"string\",\"null\"]}]}"
use
{"type":"record","name":"etlSchemaBody","fields":[{"name":"body_1","type":["string","null"]},{"name":"body_2","type":["string","null"]},{"name":"body_3","type":["string","null"]},{"name":"body_4","type":["string","null"]},{"name":"body_5","type":["string","null"]}]}
as the value for your schema macro.
My guess is that your pipeline run failed with a malformed JSON error.
If this does not work, please post logs from the pipeline run.
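For the REST API route, a minimal sketch in Python follows; the host, port, namespace, app name, and the runtime-argument key are assumptions for your environment (the key must match the macro name you used on the Wrangler's output schema), and the endpoint is the standard CDAP program lifecycle start call.

```python
# Hedged sketch: start the deployed pipeline with the schema passed as a runtime
# argument over the CDAP lifecycle REST API. Host, namespace, app name, and the
# runtime-argument key ("schema.body" here) are assumptions -- use the macro name
# you configured on the Wrangler's output schema.
import json
import requests

schema = {
    "type": "record",
    "name": "etlSchemaBody",
    "fields": [{"name": f"body_{i}", "type": ["string", "null"]} for i in range(1, 6)],
}

runtime_args = {"schema.body": json.dumps(schema)}  # plain JSON string, no manual escaping

resp = requests.post(
    "http://<cdap-host>:11015/v3/namespaces/default/apps/<pipeline-name>"
    "/workflows/DataPipelineWorkflow/start",
    json=runtime_args,   # requests serializes the body once; don't pre-escape the quotes
    timeout=30,
)
resp.raise_for_status()
```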
I have created a simple ShellCommandActivity which echoes some text. It runs on a plain EC2 (VPC) instance. I can see that the host has spun up, but it never executes the task, and the task remains in WAITING_FOR_RUNNER status. After all the retries I get this error:
Resource is stalled. Associated tasks not able to make progress.
I followed this troubleshooting link, but it didn't resolve my problem.
Here is the JSON description of the pipeline:
{
"objects": [
{
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"name": "ec2-compute",
"id": "ResourceId_viWO9",
"type": "Ec2Resource"
},
{
"failureAndRerunMode": "CASCADE",
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"pipelineLogUri": "s3://xyz-logs/",
"scheduleType": "ONDEMAND",
"name": "Default",
"id": "Default"
},
{
"name": "EchoActivity",
"id": "ShellCommandActivityId_kc8xz",
"runsOn": {
"ref": "ResourceId_viWO9"
},
"type": "ShellCommandActivity",
"command": "echo HelloWorld"
}
],
"parameters": []
}
What could be the problem here?
Thanks in advance.
I figured this out. The route table for the VPC subnets was not properly configured.
To be specific, in my case the route table didn't have 0.0.0.0/0 mapped to an internet gateway. When I added this route, everything started working.
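For reference, adding that route with boto3 looks roughly like this (the region, route table ID, and internet gateway ID are placeholders for your VPC); without it the Task Runner on the Ec2Resource cannot reach the Data Pipeline endpoint, which is why the task sits in WAITING_FOR_RUNNER.

```python
# Hedged sketch: add a 0.0.0.0/0 route to the subnet's route table so the Task Runner
# on the Ec2Resource can reach the Data Pipeline service. IDs and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # route table used by the instance's subnet
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",      # internet gateway attached to the VPC
)
```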
I'm trying to run a simple AWS Data Pipeline for a POC. My case is the following: take CSV data stored on S3, perform a simple Hive query on it, and put the results back to S3.
I've created a very basic pipeline definition and tried to run it on different EMR versions, 4.2.0 and 5.3.1 - both fail, though in different places.
So the pipeline definition is as follows:
{
"objects": [
{
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"maximumRetries": "1",
"enableDebugging": "true",
"name": "EmrCluster",
"keyPair": "Jeff Key Pair",
"id": "EmrClusterId_CM5Td",
"releaseLabel": "emr-5.3.1",
"region": "us-west-2",
"type": "EmrCluster",
"terminateAfter": "1 Day"
},
{
"column": [
"policyID INT",
"statecode STRING"
],
"name": "SampleCSVOutputFormat",
"id": "DataFormatId_9sLJ0",
"type": "CSV"
},
{
"failureAndRerunMode": "CASCADE",
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"pipelineLogUri": "s3://aws-logs/datapipeline/",
"scheduleType": "ONDEMAND",
"name": "Default",
"id": "Default"
},
{
"directoryPath": "s3://data-pipeline-input/",
"dataFormat": {
"ref": "DataFormatId_KIMjx"
},
"name": "InputDataNode",
"id": "DataNodeId_RyNzr",
"type": "S3DataNode"
},
{
"s3EncryptionType": "NONE",
"directoryPath": "s3://data-pipeline-output/",
"dataFormat": {
"ref": "DataFormatId_9sLJ0"
},
"name": "OutputDataNode",
"id": "DataNodeId_lnwhV",
"type": "S3DataNode"
},
{
"output": {
"ref": "DataNodeId_lnwhV"
},
"input": {
"ref": "DataNodeId_RyNzr"
},
"stage": "true",
"maximumRetries": "2",
"name": "HiveTest",
"hiveScript": "INSERT OVERWRITE TABLE ${output1} select policyID, statecode from ${input1};",
"runsOn": {
"ref": "EmrClusterId_CM5Td"
},
"id": "HiveActivityId_JFqr5",
"type": "HiveActivity"
},
{
"name": "SampleCSVDataFormat",
"column": [
"policyID INT",
"statecode STRING",
"county STRING",
"eq_site_limit FLOAT",
"hu_site_limit FLOAT",
"fl_site_limit FLOAT",
"fr_site_limit FLOAT",
"tiv_2011 FLOAT",
"tiv_2012 FLOAT",
"eq_site_deductible FLOAT",
"hu_site_deductible FLOAT",
"fl_site_deductible FLOAT",
"fr_site_deductible FLOAT",
"point_latitude FLOAT",
"point_longitude FLOAT",
"line STRING",
"construction STRING",
"point_granularity INT"
],
"id": "DataFormatId_KIMjx",
"type": "CSV"
}
],
"parameters": []
}
And the CSV file looks like this:
policyID,statecode,county,eq_site_limit,hu_site_limit,fl_site_limit,fr_site_limit,tiv_2011,tiv_2012,eq_site_deductible,hu_site_deductible,fl_site_deductible,fr_site_deductible,point_latitude,point_longitude,line,construction,point_granularity
119736,FL,CLAY COUNTY,498960,498960,498960,498960,498960,792148.9,0,9979.2,0,0,30.102261,-81.711777,Residential,Masonry,1
448094,FL,CLAY COUNTY,1322376.3,1322376.3,1322376.3,1322376.3,1322376.3,1438163.57,0,0,0,0,30.063936,-81.707664,Residential,Masonry,3
206893,FL,CLAY COUNTY,190724.4,190724.4,190724.4,190724.4,190724.4,192476.78,0,0,0,0,30.089579,-81.700455,Residential,Wood,1
The HiveActivity is just a simple query (copied from the AWS docs):
"INSERT OVERWRITE TABLE ${output1} select policyID, statecode from ${input1};"
However, it fails when running on emr-5.3.1:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
/mnt/taskRunner/./hive-script:617:in `<main>': Error executing cmd: /usr/share/aws/emr/scripts/hive-script "--base-path" "s3://us-west-2.elasticmapreduce/libs/hive/" "--hive-versions" "latest" "--run-hive-script" "--args" "-f"
Digging into the logs, I found the following exception:
2017-02-25T00:33:00,434 ERROR [316e5d21-dfd8-4663-a03c-2ea4bae7b1a0 main([])]: tez.DagUtils (:()) - Could not find the jar that was being uploaded
2017-02-25T00:33:00,434 ERROR [316e5d21-dfd8-4663-a03c-2ea4bae7b1a0 main([])]: exec.Task (:()) - Failed to execute tez graph.
java.io.IOException: Previous writer likely failed to write hdfs://ip-170-41-32-05.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/31ae6d21-dfd8-4123-a03c-2ea4bae7b1a0/emr-hive-goodies.jar. Failing because I am unlikely to write too.
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1022)
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:294)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155)
When running on emr-4.2.0 I get a different crash:
Number of reduce tasks is set to 0 since there's no reduce operator
java.lang.NullPointerException
at org.apache.hadoop.fs.Path.<init>(Path.java:105)
at org.apache.hadoop.fs.Path.<init>(Path.java:94)
at org.apache.hadoop.hive.ql.exec.Utilities.toTempPath(Utilities.java:1517)
at org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3555)
at org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3520)
Both S3 and the EMR cluster are in the same region and run under the same AWS account. I've tried a bunch of experiments with the S3DataNode and EmrCluster configurations, but it always crashes.
Also, I couldn't find any working example of a Data Pipeline with a HiveActivity, either in the documentation or on GitHub.
Can someone please help me figure it out? Thank you.
I was facing the same problem when updating my EMR cluster from a 4.x release to the 5.28.0 release. After changing the release label, I followed @andrii-gorishnii's comment and added
delete jar /mnt/taskRunner/emr-hive-goodies.jar;
to the beginning of my Hive script and it solved my problem! Thanks @andrii-gorishnii
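If you manage the pipeline definition as a JSON file, as in the question above, the workaround can be applied mechanically; a sketch, assuming the definition is stored in pipeline-definition.json and re-uploaded afterwards with aws datapipeline put-pipeline-definition:

```python
# Hedged sketch: prepend the jar cleanup to every HiveActivity's hiveScript in the
# Data Pipeline definition before re-uploading it. File name is an assumption.
import json

with open("pipeline-definition.json") as f:
    definition = json.load(f)

for obj in definition["objects"]:
    if obj.get("type") == "HiveActivity":
        obj["hiveScript"] = (
            "delete jar /mnt/taskRunner/emr-hive-goodies.jar;\n" + obj["hiveScript"]
        )

with open("pipeline-definition.json", "w") as f:
    json.dump(definition, f, indent=2)
```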
I have been using the UNLOAD statement in Redshift for a while now; it makes it easy to dump files to S3 and then let people analyse them.
The time has come to try to automate it. We have Amazon Data Pipeline running several tasks, and I wanted to run a SQLActivity to execute the UNLOAD automatically. I use a SQL script hosted in S3.
The query itself is correct, but what I have been trying to figure out is how I can dynamically assign the name of the file. For example:
UNLOAD('<the_query>')
TO 's3://my-bucket/' || to_char(current_date)
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF
doesn't work, and of course I suspect that you can't execute functions (to_char) in the TO line. Is there any other way I can do it?
And if UNLOAD is not the way, do I have any other options for automating such tasks with the currently available infrastructure (Redshift + S3 + Data Pipeline; our Amazon EMR is not active yet)?
The only thing that I thought could work (but I'm not sure) is, instead of pointing to a script file, to copy the script into the Script option of the SQLActivity (at the moment it points to a file) and reference #{@scheduledStartTime}.
Why not use RedshiftCopyActivity to copy from Redshift to S3? The input is a RedshiftDataNode and the output is an S3DataNode, where you can specify an expression for directoryPath.
You can also specify the transformSql property in RedshiftCopyActivity to override the default value of select * from + inputRedshiftTable.
Sample pipeline:
{
"objects": [{
"id": "CSVId1",
"name": "DefaultCSV1",
"type": "CSV"
}, {
"id": "RedshiftDatabaseId1",
"databaseName": "dbname",
"username": "user",
"name": "DefaultRedshiftDatabase1",
"*password": "password",
"type": "RedshiftDatabase",
"clusterId": "redshiftclusterId"
}, {
"id": "Default",
"scheduleType": "timeseries",
"failureAndRerunMode": "CASCADE",
"name": "Default",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
}, {
"id": "RedshiftDataNodeId1",
"schedule": {
"ref": "ScheduleId1"
},
"tableName": "orders",
"name": "DefaultRedshiftDataNode1",
"type": "RedshiftDataNode",
"database": {
"ref": "RedshiftDatabaseId1"
}
}, {
"id": "Ec2ResourceId1",
"schedule": {
"ref": "ScheduleId1"
},
"securityGroups": "MySecurityGroup",
"name": "DefaultEc2Resource1",
"role": "DataPipelineDefaultRole",
"logUri": "s3://myLogs",
"resourceRole": "DataPipelineDefaultResourceRole",
"type": "Ec2Resource"
}, {
"myComment": "This object is used to control the task schedule.",
"id": "DefaultSchedule1",
"name": "RunOnce",
"occurrences": "1",
"period": "1 Day",
"type": "Schedule",
"startAt": "FIRST_ACTIVATION_DATE_TIME"
}, {
"id": "S3DataNodeId1",
"schedule": {
"ref": "ScheduleId1"
},
"directoryPath": "s3://my-bucket/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
"name": "DefaultS3DataNode1",
"dataFormat": {
"ref": "CSVId1"
},
"type": "S3DataNode"
}, {
"id": "RedshiftCopyActivityId1",
"output": {
"ref": "S3DataNodeId1"
},
"input": {
"ref": "RedshiftDataNodeId1"
},
"schedule": {
"ref": "ScheduleId1"
},
"name": "DefaultRedshiftCopyActivity1",
"runsOn": {
"ref": "Ec2ResourceId1"
},
"type": "RedshiftCopyActivity"
}]
}
Are you able to SSH into the cluster? If so, I would suggest writing a shell script where you can create variables and whatnot, and then pass those variables into a connection's statement query.
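Sketched in Python rather than a shell script (psycopg2, the table, bucket, and connection details are all assumptions), the idea is just to build the date-stamped S3 prefix on the client and send the finished UNLOAD to the cluster:

```python
# Hedged sketch: build the UNLOAD statement with a date-stamped S3 prefix client-side
# and execute it over a regular Redshift connection. Table, bucket, and credentials
# are placeholders.
import datetime
import psycopg2

s3_prefix = f"s3://my-bucket/{datetime.date.today():%Y-%m-%d}/"

unload_sql = f"""
UNLOAD ('select * from my_table')
TO '{s3_prefix}'
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF;
"""

conn = psycopg2.connect(host="<cluster-endpoint>", port=5439,
                        dbname="dev", user="admin", password="<password>")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute(unload_sql)
conn.close()
```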
Use a Redshift procedural (stored procedure) wrapper around the UNLOAD statement and dynamically derive the S3 path name inside it.
In your job, call the procedure; it dynamically builds the UNLOAD statement and executes it.
This way you can avoid the other services, but it depends on what kind of use case you are working on.
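A rough sketch of that wrapper, issued here from Python with psycopg2 (procedure name, table, bucket, and credentials are placeholders; check that UNLOAD is permitted from a stored procedure on your cluster version):

```python
# Hedged sketch: a Redshift stored procedure that derives the S3 path from current_date,
# builds the UNLOAD statement as a string, and runs it with EXECUTE. The scheduled job
# then only has to CALL the procedure. Names and credentials are placeholders.
import psycopg2

DDL = """
CREATE OR REPLACE PROCEDURE unload_my_table_daily()
AS $$
DECLARE
  l_sql VARCHAR(MAX);
BEGIN
  l_sql := 'UNLOAD (''select * from my_table'') '
        || 'TO ''s3://my-bucket/' || to_char(current_date, 'YYYY-MM-DD') || '/'' '
        || 'WITH CREDENTIALS ''<credentials>'' ALLOWOVERWRITE PARALLEL OFF';
  EXECUTE l_sql;
END;
$$ LANGUAGE plpgsql;
"""

conn = psycopg2.connect(host="<cluster-endpoint>", port=5439,
                        dbname="dev", user="admin", password="<password>")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute(DDL)                              # one-time setup
    cur.execute("CALL unload_my_table_daily();")  # what the scheduled job runs
conn.close()
```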