Update asset files using Tokens with aws-cdk

I have created this stack:
import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';
import * as apigateway from '@aws-cdk/aws-apigateway';
import * as lambda from '@aws-cdk/aws-lambda';
import * as path from 'path';
import * as updateWebsiteUrl from './updateWebsiteUrl'; // path illustrative

export class InfrastructureStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const bucket = new s3.Bucket(this, "My Hello Website", {
      websiteIndexDocument: 'index.html',
      websiteErrorDocument: 'error.html',
      publicReadAccess: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY
    });

    const api = new apigateway.RestApi(this, "My Endpoint", {
      restApiName: "My rest API name",
      description: "Some cool description"
    });

    const myLambda = new lambda.Function(this, 'My Backend', {
      runtime: lambda.Runtime.NODEJS_8_10,
      handler: 'index.handler',
      code: lambda.Code.fromAsset(path.join(__dirname, 'code'))
    });

    const apiToLambda = new apigateway.LambdaIntegration(myLambda);
    api.root.addMethod('GET', apiToLambda);

    updateWebsiteUrl.newUrl(api.url);
  }
}
The last line of code calls my function that updates an asset which will be deployed to S3 as a website, injecting the API URL that is created during deployment. It is just a plain Node.js script that replaces the file's PLACEHOLDER with api.url.
Of course, at synth (compile) time the CDK does not know what the final address of the REST endpoint will be, because that is only resolved at deploy time, so it updates my URL with something like:
'https://${Token[TOKEN.26]}.execute-api.${Token[AWS::Region.4]}.${Token[AWS::URLSuffix.1]}/${Token[TOKEN.32]}/'
Is there any way I can update this after the Lambda has been integrated with the API endpoint, i.e. after those are deployed?
I would like to use the @aws-cdk/aws-s3-deployment module to deploy the code to the newly created bucket, all in the same stack, so that a single cdk deploy updates everything I need.
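For reference, this is roughly how I would wire that module in (a minimal sketch, assuming the @aws-cdk/aws-s3-deployment import; names and paths are illustrative):
import * as s3deploy from '@aws-cdk/aws-s3-deployment';

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
  sources: [s3deploy.Source.asset('./front')], // path illustrative
  destinationBucket: bucket,
});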
To avoid confusion, my updateWebsiteUrl is:
import * as fs from 'fs';
import * as path from 'path';

export function newUrl(newUrl: string): void {
  const scriptPath = path.join(__dirname, '/../../front/');
  const scriptName = 'script.js';
  fs.readFile(scriptPath + scriptName, (err, buf) => {
    let scriptContent: string = buf.toString();
    let newScript = scriptContent.replace('URL_PLACEHOLDER', newUrl);
    fs.writeFile(scriptPath + 'newScript.js', newScript, () => {
      console.log('done writing');
    });
  });
}
And my script is quite simple:
const url = 'URL_PLACEHOLDER';

function foo() {
  let req = new XMLHttpRequest();
  req.open('GET', url, false);
  req.send(null);
  if (req.status == 200) {
    replaceContent(req.response);
  }
}

function replaceContent(content) {
  document.getElementById('content').innerHTML = content;
}

I ran into the same issue today and managed to find a solution for it.
The C# code I am using in my CDK program is the following:
// At synth time this is just a token; at deploy time it resolves to JSON in the format {"api":{"baseUrl":"https://your-url"}}
var configJson = stack.ToJsonString(new Dictionary<string, object>
{
    ["api"] = new Dictionary<string, object>
    {
        ["baseUrl"] = api.Url
    }
});

var configFile = new AwsCustomResource(this, "config-file", new AwsCustomResourceProps
{
    OnUpdate = new AwsSdkCall
    {
        Service = "S3",
        Action = "putObject",
        Parameters = new Dictionary<string, string>
        {
            ["Bucket"] = bucket.BucketName,
            ["Key"] = "config.json",
            ["Body"] = configJson,
            ["ContentType"] = "application/json",
            ["CacheControl"] = "max-age=0, no-cache, no-store, must-revalidate"
        },
        PhysicalResourceId = PhysicalResourceId.Of("config"),
    },
    Policy = AwsCustomResourcePolicy.FromStatements(
        new[]
        {
            new PolicyStatement(new PolicyStatementProps
            {
                Actions = new[] { "s3:PutObject" },
                Resources = new[] { bucket.ArnForObjects("config.json") }
            })
        })
});
You will need to install the custom-resources package to have these types available: https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html
It is basically part of the solution you can find in the answers to the question "AWS CDK passing API Gateway URL to static site in same Stack", or in this GitHub repository: https://github.com/jogold/cloudstructs/blob/master/src/static-website/index.ts#L134
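For anyone working in TypeScript like the original question, roughly the same thing looks like this (a sketch assuming the @aws-cdk/custom-resources and @aws-cdk/aws-iam modules, with bucket and api being the constructs from the question's stack):
import * as iam from '@aws-cdk/aws-iam';
import * as cr from '@aws-cdk/custom-resources';

// Resolves at deploy time to {"api":{"baseUrl":"https://..."}}
const configJson = this.toJsonString({ api: { baseUrl: api.url } });

new cr.AwsCustomResource(this, 'config-file', {
  onUpdate: {
    service: 'S3',
    action: 'putObject',
    parameters: {
      Bucket: bucket.bucketName,
      Key: 'config.json',
      Body: configJson,
      ContentType: 'application/json',
      CacheControl: 'max-age=0, no-cache, no-store, must-revalidate',
    },
    physicalResourceId: cr.PhysicalResourceId.of('config'),
  },
  policy: cr.AwsCustomResourcePolicy.fromStatements([
    new iam.PolicyStatement({
      actions: ['s3:PutObject'],
      resources: [bucket.arnForObjects('config.json')],
    }),
  ]),
});
The front end then fetches /config.json at runtime instead of having the URL baked in at build time, so no placeholder replacement is needed.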

Related

Flutter AWS Amplify not returning data when calling GraphQL API

On button click I call a GraphQL API that is connected to a Lambda function, and the function pulls data from a DynamoDB table. The query does not produce any error, but it doesn't give me any results either. I have also checked the CloudWatch logs and I don't see any traces of the function being called. Not sure what careless mistake I am making here.
Here is my API call:
void findUser() async {
  try {
    String graphQLDocument = '''query getUserById(\$userId: ID!) {
      getUserById(userId: \$id) {
        id
        name
      }
    }''';
    var operation = Amplify.API.query(
        request: GraphQLRequest<String>(
            document: graphQLDocument,
            variables: {'id': 'USER-14160000000'}));
    var response = await operation.response;
    var data = response.data;
    print('Query result: ' + data);
  } on ApiException catch (e) {
    print('Query failed: $e');
  }
}
Here is my Lambda function:
const getUserById = require('./user-queries/getUserById');

exports.handler = async (event) => {
  var userId = event.arguments.userId;
  var name = event.arguments.name;
  var avatarUrl = event.arguments.avatarUrl;
  //console.log('Received Event - ', JSON.stringify(event,3));
  console.log(userId);
  switch (event.info.fieldName) {
    case "getUserById":
      return getUserById(userId);
  }
};

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'ca-central-1' });

async function getUserById(userId) {
  const params = {
    TableName: "Bol-Table",
    KeyConditionExpression: 'pk = :hashKey and sk = :sortKey',
    ExpressionAttributeValues: {
      ':hashKey': userId,
      ':sortKey': 'USER'
    }
  };
  try {
    const Item = await docClient.query(params).promise();
    console.log(Item);
    return {
      id: Item.Items[0].pk,
      name: Item.Items[0].details.displayName,
      avatarUrl: Item.Items[0].details.avatarUrl,
      createdAt: Item.Items[0].details.createdAt,
      updatedAt: Item.Items[0].details.updatedAt
    };
  } catch (err) {
    console.log("BOL Error: ", err);
  }
}

module.exports = getUserById;
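For reference, the query above assumes items in the table shaped roughly like this (values are illustrative, not real data):
const exampleItem = {
  pk: 'USER-14160000000',
  sk: 'USER',
  details: {
    displayName: 'Jane Doe',
    avatarUrl: 'https://example.com/avatar.png',
    createdAt: '2021-01-01T00:00:00Z',
    updatedAt: '2021-01-02T00:00:00Z'
  }
};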
Upon button click I get this
Moving my comment to an answer:
Can you try changing your graphQLDocument to
String graphQLDocument = '''query getUserById(\$id: ID!) {
  getUserById(userId: \$id) {
    id
    name
  }
}''';
Your variable is $userId and then $id. Try calling it $id in both places like in your variables object.
Your Flutter code is working fine, but the Lambda on the AWS side is returning a blank string "", so there is nothing to print.

AWS Cloudfront for S3 backed website + Rest API: (Error - MethodNotAllowed / The specified method is not allowed against this resource)

I have an AWS S3 backed static website and a RestApi. I am configuring a single Cloudfront Distribution for the static website and the RestApi. I have OriginConfigs done for the S3 origins and the RestApi origin. I am using AWS CDK to define the infrastructure in code.
The approach has been adopted from this article: https://dev.to/evnz/single-cloudfront-distribution-for-s3-web-app-and-api-gateway-15c3
The APIs are defined under the relative path /r/<resourcename> or /r/api/<methodname>. Examples would be /r/Account, referring to the Account resource, and /r/api/Validate, referring to an rpc-style method called Validate (in this case an HTTP POST method). The Lambda functions that implement the resource methods handle the CORS preflight OPTIONS request, with the static website's URL listed in the allowed origins for that resource. For example, the /r/api/Validate method's Lambda has:
exports.main = async function(event, context) {
  try {
    var method = event.httpMethod;
    if (method === "OPTIONS") {
      const response = {
        statusCode: 200,
        headers: {
          "Access-Control-Allow-Headers": "*",
          "Access-Control-Allow-Credentials": true,
          "Access-Control-Allow-Origin": website_url,
          "Vary": "Origin",
          "Access-Control-Allow-Methods": "OPTIONS,POST,GET,DELETE"
        }
      };
      return response;
    } else if (method === "POST") {
      ...
    }
    ...
  }
}
The API and website are deployed fine. Here's the CDK deployment code fragment.
const string api_domain = "myrestapi.execute-api.ap-south-1.amazonaws.com";
const string api_stage = "prod";

internal WebAppStaticWebsiteStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
{
    // The S3 bucket to hold the static website contents
    var bucket = new Bucket(this, "WebAppStaticWebsiteBucket", new BucketProps {
        PublicReadAccess = false,
        BlockPublicAccess = BlockPublicAccess.BLOCK_ALL,
        RemovalPolicy = RemovalPolicy.DESTROY,
        WebsiteIndexDocument = "index.html",
        Cors = new ICorsRule[] {
            new CorsRule() {
                AllowedHeaders = new string[] { "*" },
                AllowedMethods = new HttpMethods[] { HttpMethods.GET, HttpMethods.POST, HttpMethods.PUT, HttpMethods.DELETE, HttpMethods.HEAD },
                AllowedOrigins = new string[] { "*" }
            }
        }
    });

    var cloudfrontOAI = new OriginAccessIdentity(this, "CloudfrontOAI", new OriginAccessIdentityProps() {
        Comment = "Allows cloudfront access to S3"
    });

    bucket.AddToResourcePolicy(new PolicyStatement(new PolicyStatementProps() {
        Sid = "Grant cloudfront origin access identity access to s3 bucket",
        Actions = new [] { "s3:GetObject" },
        Resources = new [] { bucket.BucketArn + "/*" },
        Principals = new [] { cloudfrontOAI.GrantPrincipal }
    }));

    // The cloudfront distribution for the website
    var distribution = new CloudFrontWebDistribution(this, "WebAppStaticWebsiteDistribution", new CloudFrontWebDistributionProps() {
        ViewerProtocolPolicy = ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        DefaultRootObject = "index.html",
        PriceClass = PriceClass.PRICE_CLASS_ALL,
        GeoRestriction = GeoRestriction.Whitelist(new [] {
            "IN"
        }),
        OriginConfigs = new [] {
            new SourceConfiguration() {
                CustomOriginSource = new CustomOriginConfig() {
                    OriginProtocolPolicy = OriginProtocolPolicy.HTTPS_ONLY,
                    DomainName = api_domain,
                    AllowedOriginSSLVersions = new OriginSslPolicy[] { OriginSslPolicy.TLS_V1_2 },
                },
                Behaviors = new IBehavior[] {
                    new Behavior() {
                        IsDefaultBehavior = false,
                        PathPattern = $"/{api_stage}/r/*",
                        AllowedMethods = CloudFrontAllowedMethods.ALL,
                        CachedMethods = CloudFrontAllowedCachedMethods.GET_HEAD_OPTIONS,
                        DefaultTtl = Duration.Seconds(0),
                        ForwardedValues = new CfnDistribution.ForwardedValuesProperty() {
                            QueryString = true,
                            Headers = new string[] { "Authorization" }
                        }
                    }
                }
            },
            new SourceConfiguration() {
                S3OriginSource = new S3OriginConfig() {
                    S3BucketSource = bucket,
                    OriginAccessIdentity = cloudfrontOAI
                },
                Behaviors = new [] {
                    new Behavior() {
                        IsDefaultBehavior = true,
                        //PathPattern = "/*",
                        DefaultTtl = Duration.Seconds(0),
                        Compress = false,
                        AllowedMethods = CloudFrontAllowedMethods.ALL,
                        CachedMethods = CloudFrontAllowedCachedMethods.GET_HEAD_OPTIONS
                    }
                },
            }
        }
    });

    // The distribution domain name - output
    var domainNameOutput = new CfnOutput(this, "WebAppStaticWebsiteDistributionDomainName", new CfnOutputProps() {
        Value = distribution.DistributionDomainName
    });

    // The S3 bucket deployment for the website
    var deployment = new BucketDeployment(this, "WebAppStaticWebsiteDeployment", new BucketDeploymentProps() {
        Sources = new [] { Source.Asset("./website/dist") },
        DestinationBucket = bucket,
        Distribution = distribution
    });
}
I am encountering the following error (extracted from Browser console error log):
bundle.js:67 POST https://mywebapp.cloudfront.net/r/api/Validate 405
bundle.js:67
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>MethodNotAllowed</Code>
<Message>The specified method is not allowed against this resource.</Message>
<Method>POST</Method>
<ResourceType>OBJECT</ResourceType>
<RequestId>xxxxx</RequestId>
<HostId>xxxxxxxxxxxxxxx</HostId>
</Error>
The intended flow is that the POST call (made using the fetch() API) to https://mywebapp.cloudfront.net/r/api/Validate is forwarded to the RestApi backend by CloudFront. It appears CloudFront is forwarding it, but the backend is returning an error (based on the error message).
What am I missing? How do I make this work?
This was fixed by doing the following:
Moving to the Distribution construct (which, per the AWS documentation, is the construct to use going forward, as it receives the latest updates).
Adding a CachePolicy and an OriginRequestPolicy to control cookie and header forwarding.
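For illustration, a minimal TypeScript sketch of that change (assuming the newer aws-cloudfront / aws-cloudfront-origins modules; the C# props mirror these, and bucket / cloudfrontOAI are the constructs defined above):
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import { Duration } from 'aws-cdk-lib';

// The Authorization header must be part of the cache key; it cannot go in an origin request policy.
const apiCachePolicy = new cloudfront.CachePolicy(this, 'ApiCachePolicy', {
  minTtl: Duration.seconds(0),
  defaultTtl: Duration.seconds(0),
  maxTtl: Duration.seconds(1), // keep caching technically enabled so headers may be in the cache key
  headerBehavior: cloudfront.CacheHeaderBehavior.allowList('Authorization'),
  queryStringBehavior: cloudfront.CacheQueryStringBehavior.all(),
  cookieBehavior: cloudfront.CacheCookieBehavior.none(),
});

const apiOriginRequestPolicy = new cloudfront.OriginRequestPolicy(this, 'ApiOriginRequestPolicy', {
  headerBehavior: cloudfront.OriginRequestHeaderBehavior.allowList('Origin', 'Content-Type', 'Accept'),
  queryStringBehavior: cloudfront.OriginRequestQueryStringBehavior.all(),
  cookieBehavior: cloudfront.OriginRequestCookieBehavior.all(),
});

new cloudfront.Distribution(this, 'WebAppDistribution', {
  defaultRootObject: 'index.html',
  defaultBehavior: {
    origin: new origins.S3Origin(bucket, { originAccessIdentity: cloudfrontOAI }),
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
  additionalBehaviors: {
    '/prod/r/*': {
      origin: new origins.HttpOrigin('myrestapi.execute-api.ap-south-1.amazonaws.com'),
      allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
      viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
      cachePolicy: apiCachePolicy,
      originRequestPolicy: apiOriginRequestPolicy,
    },
  },
});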

How to Get Signed S3 Url in AWS-SDK JS Version 3?

I am following the solution proposed by Trivikr for adding support for the s3.getSignedUrl API, which is not currently available in the newer v3. I am trying to make a signed URL for getting an object from a bucket.
Just for convenience, the code is being added below:
const { S3, GetObjectCommand } = require("@aws-sdk/client-s3"); // 1.0.0-gamma.2 version
const { S3RequestPresigner } = require("@aws-sdk/s3-request-presigner"); // 0.1.0-preview.2 version
const { createRequest } = require("@aws-sdk/util-create-request"); // 0.1.0-preview.2 version
const { formatUrl } = require("@aws-sdk/util-format-url"); // 0.1.0-preview.1 version
const fetch = require("node-fetch");

(async () => {
  try {
    const region = "us-east-1";
    const Bucket = `SOME_BUCKET_NAME`;
    const Key = `SOME_KEY_VALUE`;
    const credentials = {
      accessKeyId: ACCESS_KEY_HERE,
      secretAccessKey: SECRET_KEY_HERE,
      sessionToken: SESSION_TOKEN_HERE
    };
    const S3Client = new S3({ region, credentials, signatureVersion: 'v4' });
    console.log('1'); // for quick debugging
    const signer = new S3RequestPresigner({ ...S3Client.config });
    console.log('2');
    const request = await createRequest(
      S3Client,
      new GetObjectCommand({ Key, Bucket })
    );
    console.log('3');
    let signedUrl = formatUrl(await signer.presign(request));
    console.log(signedUrl);
    let response = await fetch(signedUrl);
    console.log("Response", response);
  } catch (e) {
    console.error(e);
  }
})();
I successfully create the S3 client and signer, but on creating the request I get the following error:
clientStack.concat(...).filter is not a function
Is there anything I am doing wrong?
Please also note that I am using webpack for bundling.
Just to add my example in TypeScript:
import { S3Client, GetObjectCommand, S3ClientConfig } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3Configuration: S3ClientConfig = {
  credentials: {
    accessKeyId: '<ACCESS_KEY_ID>',
    secretAccessKey: '<SECRET_ACCESS_KEY>'
  },
  region: '<REGION>',
};
const s3 = new S3Client(s3Configuration);
const command = new GetObjectCommand({ Bucket: '<BUCKET>', Key: '<KEY>' });
const url = await getSignedUrl(s3, command, { expiresIn: 15 * 60 }); // expires in seconds
console.log('Presigned URL: ', url);
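The same helper also works for uploads; for example, a presigned PUT URL can be created the same way (a sketch, with placeholder bucket/key values):
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const client = new S3Client({ region: '<REGION>' });
const putCommand = new PutObjectCommand({ Bucket: '<BUCKET>', Key: '<KEY>', ContentType: 'text/plain' });
// Anyone holding this URL can PUT the object body directly (e.g. with fetch) until it expires.
const uploadUrl = await getSignedUrl(client, putCommand, { expiresIn: 15 * 60 });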
RESOLVED
I ended up successfully creating the signed URLs by installing the beta versions of the packages rather than the preview (default) ones.

AWS Lambda not working when deployed with CDK

I am trying to deploy a lambda using AWS CDK and it seems not to be working/deployed properly.
The "box" in the pipeline is green, so no errors are returned.
Everything appears to be fine, but when I run it manually to test, I receive the following message:
{
  "errorType": "LambdaException",
  "errorMessage": "Could not find the required 'QuickSight.Lambdas.SpiceRefresh.deps.json'. This file should be present at the root of the deployment package."
}
The odd thing is that if I download the artifact manually to my machine and upload it with the function package upload button, it works properly.
I have one stack which contains a CfnParametersCode; this is the stack I use to create the Lambda.
public class LambdaStack : Stack
{
    public CfnParametersCode LambdaCode { get; set; }

    //code

    private Function BuildSpiceRefreshLambda()
    {
        LambdaCode = Code.FromCfnParameters();
        var func = new Function(this, Constants.Lambda.LambdaName, new FunctionProps
        {
            Code = LambdaCode,
            Handler = Constants.Lambda.LambdaHandler,
            FunctionName = Constants.Lambda.LambdaName,
            MemorySize = 1024,
            Tracing = Tracing.ACTIVE,
            Timeout = Duration.Seconds(480),
            Runtime = Runtime.DOTNET_CORE_2_1,
            Environment = new Dictionary<string, string>()
            {
                {"ENVIRONMENT", Fn.Ref(Constants.EnvironmentVariables.Environment)},
                {"APPLICATION_NAME", Constants.Lambda.ApplicationName},
                {"AWS_ACCOUNT_ID", Fn.Ref("AWS::AccountId")},
                {"LOG_GROUP_NAME", Constants.Lambda.LogGroupName}
            },
            ReservedConcurrentExecutions = 1,
            Role = SpiceRefreshLambdaRole,
            Vpc = this.GetProjectVpc(),
            SecurityGroups = new ISecurityGroup[]
            {
                securityGroup
            }
        });
        return func;
    }
}
}
and then I have the pipeline, where one of the steps builds the Lambda:
var lambdaBuild = new PipelineProject(this, "appLambda", new PipelineProjectProps
{
    BuildSpec = BuildSpec.FromObject(new Dictionary<string, object>
    {
        ["version"] = "0.2",
        ["phases"] = new Dictionary<string, object>
        {
            ["install"] = new Dictionary<string, object>
            {
                ["commands"] = new string[]
                {
                    "echo \"Installing lambda tools for dotnet\"",
                    "dotnet tool install -g Amazon.Lambda.Tools",
                }
            },
            ["build"] = new Dictionary<string, object>
            {
                ["commands"] = new string[]
                {
                    "echo \"Packaging app lambda\"",
                    "(cd app/src/Lambdas/app.Lambdas.Action; dotnet lambda package)"
                }
            }
        },
        ["artifacts"] = new Dictionary<string, object>
        {
            ["files"] = new[]
            {
                "app/src/Lambdas/app.Lambdas.Action/bin/Release/netcoreapp2.1/app.Lambdas.Action.zip",
            }
        }
    }),
    Environment = new BuildEnvironment
    {
        BuildImage = LinuxBuildImage.STANDARD_2_0
    }
});

var lambdaBuildOutput = new Artifact_("LambdaBuildOutput");

new Amazon.CDK.AWS.CodePipeline.Pipeline(this, "appPipeline", new PipelineProps
{
    ArtifactBucket = Bucket.FromBucketAttributes(this, "artifact-bucket", new BucketAttributes
    {
        BucketArn = "bucket",
        EncryptionKey = "key"
    }),
    Role = "role",
    Stages = new[]
    {
        new StageProps
        {
            StageName = "Source",
            Actions = new[]
            {
                new CodeCommitSourceAction(new CodeCommitSourceActionProps
                {
                    ActionName = "Source",
                    Repository = code,
                    Output = sourceOutput,
                })
            }
        },
        new StageProps
        {
            StageName = "Build",
            Actions = new[]
            {
                new CodeBuildAction(new CodeBuildActionProps
                {
                    ActionName = "Lambda_Build",
                    Project = lambdaBuild,
                    Input = sourceOutput,
                    Outputs = new[] { lambdaBuildOutput },
                }),
            }
        },
        new StageProps
        {
            StageName = "Deploy",
            Actions = new[]
            {
                new CloudFormationCreateUpdateStackAction(new CloudFormationCreateUpdateStackActionProps
                {
                    ActionName = "DeployLambdaapp",
                    TemplatePath = props.appLambdaStack.StackTemplate,
                    StackName = "appLambdaDeploymentStack",
                    AdminPermissions = true,
                    ParameterOverrides = props.appLambdaStack.LambdaCode.Assign(lambdaBuildOutput.S3Location),
                    ExtraInputs = new[] { lambdaBuildOutput },
                    Role = "role",
                    DeploymentRole = "deployRole"
                }),
            }
        }
    }
});
There are more steps, but they are not relevant here.
So, as you can see, I am applying the ParameterOverrides with props.appLambdaStack.LambdaCode.Assign(lambdaBuildOutput.S3Location), which seems to be fine: the Lambda gets created and its package size is the size it was supposed to be. But when I execute it I receive that "errorMessage": "Could not find the required 'QuickSight.Lambdas.SpiceRefresh.deps.json'. This file should be present at the root of the deployment package."
The resulting CloudFormation template seems to be fine too:
"appLambdaF0BB8286": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": {
"Ref": "appLambdaSourceBucketNameParameter"
},
"S3Key": {
"Ref": "appLambdaSourceObjectKeyParameter"
}
},
"Handler": "Constants.Lambda.LambdaHandler", //same as the constant in c#
//Rest of the properties
}
}
I checked before creating this post, and most people with this error had a problem with the handler. Unfortunately, if I manually download the object referenced by appLambdaSourceBucketNameParameter / appLambdaSourceObjectKeyParameter and upload it to the Lambda, it works perfectly, so I think that rules the handler out.
Any idea what could be wrong?
Found the solution.
The issue is that in the artifacts section I was returning the Lambda .zip:
["files"] = new[]
{
"app/src/Lambdas/app.Lambdas.Action/bin/Release/netcoreapp2.1/app.Lambdas.Action.zip",
}
But what I really need to return is the Lambda's binaries (the publish folder):
["artifacts"] = new Dictionary<string, object>
{
["base-directory"] = "app/src/Lambdas/app.Lambdas.Action/bin/Release/netcoreapp2.1/publish",
["files"] = new[] { "**.*" }
}
Nothing else changed, and it worked.
The CloudFormation translation:
Before, I was exporting artifact::app.Lambdas.Action.zip, and AWS was then trying to find the binaries (which it couldn't).
Now it is exporting artifact::**.*, i.e. all the files.
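To make it concrete: with the publish folder as the artifact's base directory, the deployment package now has the published output at its root, which is where Lambda looks for the .deps.json. An illustrative layout of the package contents (file names taken from the error message; the exact set of dependencies will differ):
QuickSight.Lambdas.SpiceRefresh.dll
QuickSight.Lambdas.SpiceRefresh.deps.json
QuickSight.Lambdas.SpiceRefresh.runtimeconfig.json
... (other published dependencies)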

Launching Cloud Dataflow from Cloud Functions

How do I launch a Cloud Dataflow job from a Google Cloud Function? I'd like to use Google Cloud Functions as a mechanism to enable cross-service composition.
I've included a very basic example of the WordCount sample below. Please note that you'll need to include a copy of the java binary in your Cloud Function deployment, since it is not in the default environment. Likewise, you'll need to package your deploy jar with your Cloud Function as well.
module.exports = {
  wordcount: function (context, data) {
    const spawn = require('child_process').spawn;
    const child = spawn(
      'jre1.8.0_73/bin/java',
      ['-cp',
       'MY_JAR.jar',
       'com.google.cloud.dataflow.examples.WordCount',
       '--jobName=fromACloudFunction',
       '--project=MY_PROJECT',
       '--runner=BlockingDataflowPipelineRunner',
       '--stagingLocation=gs://STAGING_LOCATION',
       '--inputFile=gs://dataflow-samples/shakespeare/*',
       '--output=gs://OUTPUT_LOCATION'
      ],
      { cwd: __dirname });
    child.stdout.on('data', function(data) {
      console.log('stdout: ' + data);
    });
    child.stderr.on('data', function(data) {
      console.log('error: ' + data);
    });
    child.on('close', function(code) {
      console.log('closing code: ' + code);
    });
    context.success();
  }
}
You could further enhance this example by using the non-blocking runner and having the function return the Job ID, so that you can poll for job completion separately. This pattern should be valid for other SDKs as well, so long as their dependencies can be packaged into the Cloud Function.
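For example, with the googleapis Node client (as used in the next answer), polling for completion could look roughly like this (a sketch; it assumes application-default credentials and a jobId returned from the launch call):
import { google } from 'googleapis';

// Sketch: poll a Dataflow job until it reaches a terminal state.
async function waitForJob(projectId: string, jobId: string): Promise<string> {
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const dataflow = google.dataflow({ version: 'v1b3', auth });
  while (true) {
    const res = await dataflow.projects.jobs.get({ projectId, jobId });
    const state = res.data.currentState || 'JOB_STATE_UNKNOWN'; // e.g. JOB_STATE_RUNNING
    console.log(`Job ${jobId} is ${state}`);
    if (['JOB_STATE_DONE', 'JOB_STATE_FAILED', 'JOB_STATE_CANCELLED'].includes(state)) {
      return state;
    }
    await new Promise((resolve) => setTimeout(resolve, 30000)); // wait 30s between polls
  }
}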
The best way to launch it is via a Cloud Function, but be careful: if you use a Cloud Function triggered by Google Cloud Storage, a Dataflow job will be launched for every file uploaded.
const { google } = require('googleapis');

const templatePath = "gs://template_dir/df_template";
const project = "<project_id>";
const tempLoc = "gs://tempLocation/";
const authScope = ['https://www.googleapis.com/auth/cloud-platform'];

exports.PMKafka = (data, context, callback) => {
  const file = data;

  console.log(`Event ${context.eventId}`);
  console.log(`Event Type: ${context.eventType}`);
  console.log(`Bucket Name: ${file.bucket}`);
  console.log(`File Name: ${file.name}`);
  console.log(`Metageneration: ${file.metageneration}`);
  console.log(`Created: ${file.timeCreated}`);
  console.log(`Updated: ${file.updated}`);
  console.log(`Uploaded File Name - gs://${file.bucket}/${file.name}`);

  google.auth.getApplicationDefault(function (err, authClient, projectId) {
    if (err) {
      throw err;
    }
    if (authClient.createScopedRequired && authClient.createScopedRequired()) {
      authClient = authClient.createScoped(authScope);
    }
    const dataflow = google.dataflow({ version: 'v1b3', auth: authClient });

    var inputDict = {
      inputFile: `gs://${file.bucket}/${file.name}`,
      ...
      ...
      <other_runtime_parameters>
    };

    var env = {
      tempLocation: tempLoc
    };

    var resource_opts = {
      parameters: inputDict,
      environment: env,
      jobName: config.jobNamePrefix + "-" + new Date().toISOString().toLowerCase().replace(":", "-").replace(".", "-")
    };

    var opts = {
      gcsPath: templatePath,
      projectId: project,
      resource: resource_opts
    };

    console.log(`Dataflow Run Time Options - ${JSON.stringify(opts)}`);

    dataflow.projects.templates.launch(opts, function (err, response) {
      if (err) {
        console.error("problem running dataflow template, error was: ", err);
        slack.publishMessage(null, null, false, err);
        return;
      }
      console.log("Dataflow template response: ", response);
      var jobid = response["data"]["job"]["id"];
      console.log("Dataflow Job ID: ", jobid);
    });
    callback();
  });
};