gcloud codebuild sdk, trigger build from cloud function - build

I'm trying to use the @google-cloud/cloudbuild client library in a Cloud Function to trigger a manual build against a project, but with no luck. My function runs asynchronously and does not throw an error.
Function:
const { CloudBuildClient } = require('@google-cloud/cloudbuild');

exports.index = async (req, res) => {
  const json = // json that contains build steps using docker, and project id

  // Creates a client
  const cb = new CloudBuildClient();
  try {
    const result = await cb.createBuild({
      projectId: "myproject",
      build: JSON.parse(json)
    });
    return res.status(200).json(result);
  } catch (error) {
    return res.status(400).json(error);
  }
};
I am assuming from the documentation that my default service account is used implicitly and that credentials are sourced properly, or it would throw an error.
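For what it's worth, a quick way to sanity-check that assumption is something like the sketch below (it uses google-auth-library, which the Google client libraries already depend on; this is not part of my function, just a diagnostic):
// Sketch: log which project and service account the ambient
// (Application Default) credentials resolve to inside the function.
const { GoogleAuth } = require('google-auth-library');

async function checkCredentials() {
  const auth = new GoogleAuth();
  const projectId = await auth.getProjectId();          // project the credentials resolve to
  const { client_email } = await auth.getCredentials(); // service account the function runs as
  console.log('projectId:', projectId, 'serviceAccount:', client_email);
}

checkCredentials().catch(console.error);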
Advice appreciated.

Related

Can write to Google Cloud Storage from one machine but not another

I have a weird bug where I'm able to write to my Cloud Storage bucket from one machine but not another. I can't tell if the issue is Vercel or my configuration, but the app is deployed on Vercel, so it should behave the same no matter where I'm accessing it from.
upload.ts
import { IncomingMessage } from "http";
import { Storage } from "@google-cloud/storage";
import formidable from "formidable";

export const upload = async (req: IncomingMessage, userId: string) => {
  const storage = new Storage({
    // credentials
  });
  const bucket = storage.bucket(process.env.GCS_BUCKET_NAME as string);
  const form = formidable();
  // parseForm is a local helper (not shown) that promisifies form.parse(req)
  const { files } = await parseForm(form, req);
  const file = files.filepond;
  const { path } = file;
  const options = {
    destination: `products/${userId}/${file.name}`,
    preconditionOpts: {
      // ifGenerationMatch: 0 means the upload only succeeds if the
      // destination object does not already exist
      ifGenerationMatch: 0
    }
  };
  await bucket.upload(path, options);
};
Again, my app is deployed on Vercel, and I'm able to upload images from my own machine but can't if I try on my phone or another PC/Mac. My Cloud Storage bucket is also public, so I should be able to read/write to it from anywhere. Any clues?

TestCafe works with SAM Local but not after SAM Deploy

I'm currently trying to set up an AWS Lambda (nodejs10.x) function that should execute a simple TestCafe test.
If I run my Lambda locally with sam local invoke --no-event it executes just fine:
2019-12-03T13:39:46.345Z 7b906b79-d7e5-1aa6-edb7-3e749d4e4b08 INFO hello world
 ✓ My first test
 1 passed (1s)
2019-12-03T13:39:46.578Z 7b906b79-d7e5-1aa6-edb7-3e749d4e4b08 INFO Tests failed: 0
After deploying it with sam build and sam deploy, it stops working. It just throws the following error in AWS:
{
  "errorType": "Runtime.UnhandledPromiseRejection",
  "errorMessage": "Error: Page crashed!",
  "trace": [
    "Runtime.UnhandledPromiseRejection: Error: Page crashed!",
    "    at process.on (/var/runtime/index.js:37:15)",
    "    at process.emit (events.js:203:15)",
    "    at process.EventEmitter.emit (domain.js:448:20)",
    "    at emitPromiseRejectionWarnings (internal/process/promises.js:140:18)",
    "    at process._tickCallback (internal/process/next_tick.js:69:34)"
  ]
}
My lambda handler looks like this:
const createTestCafe = require("testcafe");
const chromium = require("chrome-aws-lambda");

let testcafe = null;

exports.lambdaHandler = async (event, context) => {
  const executablePath = await chromium.executablePath;

  await createTestCafe()
    .then(tc => {
      testcafe = tc;
      const runner = testcafe.createRunner();
      return runner
        .src("sample-fixture.js")
        // run the test in the bundled headless Chromium via the puppeteer-core provider
        .browsers(
          "puppeteer-core:launch?arg=--no-sandbox&arg=--disable-gpu&arg=--disable-setuid-sandbox&path=" +
            executablePath
        )
        .run({
          skipJsErrors: true,
          selectorTimeout: 50000
        });
    })
    .then(failedCount => {
      console.log("Tests failed: " + failedCount);
      testcafe.close();
    });

  return {
    statusCode: 200
  };
};
My sample-fixture.js looks like this:
fixture `Getting Started`
  .page `http://devexpress.github.io/testcafe/example`;

test('My first test', async t => {
  console.log('hello world');
});
I'm using the following dependencies:
"testcafe": "^1.7.0"
"chrome-aws-lambda": "^2.0.1"
"testcafe-browser-provider-puppeteer-core": "^1.1.0"
Does anyone have an idea why my Lambda function works locally on my machine but not in AWS?
I cannot currently say anything precise based only on this information.
I suggest you try the following:
Check that the deployed package size is less than 50 MB (https://github.com/puppeteer/puppeteer/blob/master/docs/troubleshooting.md#running-puppeteer-on-aws-lambda)
Check that testcafe-browser-provider-puppeteer-core can run on AWS Lambda
So I've been able to run testcafe in an AWS Lambda. The key for me was to take advantage of Lambda Layers (https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html).
The issue with testcafe is that it's a huge package; both it and chrome-aws-lambda should be deployed as Layers, or you may exceed the bundle size limit.
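To make the Layers point concrete, here is a minimal sketch of the handler side (not my exact setup): once testcafe and chrome-aws-lambda are attached as Layers, the handler requires them exactly as before, since Node.js Lambdas resolve layer modules from /opt/nodejs/node_modules automatically.
// Sketch: the function's own deployment package no longer bundles these modules;
// both are provided by attached Lambda Layers and resolve from /opt/nodejs/node_modules.
const createTestCafe = require("testcafe");     // from the testcafe layer
const chromium = require("chrome-aws-lambda");  // from the chrome-aws-lambda layer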
Also, you may run into speed issues on the Lambda, as testcafe compiles the test files with Babel. If your files are already compiled, this can add unneeded extra time to your execution.
Hope this helps!

AWS RDSDataService query not running

I'm trying to use RDSDataService to query an Aurora Serverless database. When I try to query, my Lambda just times out (I've set its timeout to 5 minutes just to make sure that isn't the problem). I have 1 record in my database, and when I try to query it, I get no results, and neither the error nor the data callback is invoked. I've verified executeSql is called by removing the dbClusterOrInstanceArn from my params, which makes it throw the exception for not having it.
I have also run SHOW FULL PROCESSLIST in the query editor to see if the queries were still running, and they are not. I've given the Lambda both the AmazonRDSFullAccess and AmazonRDSDataFullAccess policies without any luck either. As you can see from the code below, I've already tried what was recommended in issue #2376.
Not that this should matter, but this Lambda is triggered by a Kinesis event trigger.
const AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
  const RDS = new AWS.RDSDataService({ apiVersion: '2018-08-01', region: 'us-east-1' });

  for (const record of event.Records) {
    const payload = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf-8'));
    const data = compileItem(payload); // helper not shown
    const params = {
      awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx',
      dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx',
      sqlStatements: `select * from MY_DATABASE.MY_TABLE`
      // database: 'MY_DATABASE'
    };
    console.log('calling executeSql');
    RDS.executeSql(params, (error, data) => {
      if (error) {
        console.log('error', error);
        callback(error, null);
      } else {
        console.log('data', data);
        callback(null, { success: true });
      }
    });
  }
};
EDIT: We've run the command through the AWS CLI and it returns results.
EDIT 2: I'm able to connect to the database using the mysql2 package and the connection URI, so it's definitely an issue with either the aws-sdk or how I'm using it.
Node.js execution is not waiting for the result, which is why the process exits before the request completes.
Use a MySQL client library such as serverless-mysql (https://www.npmjs.com/package/serverless-mysql)
OR
use context.callbackWaitsForEmptyEventLoop = false
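For example, a rough sketch of the handler rewritten along those lines (not the asker's exact code; it reuses the placeholder ARNs from the question): make the handler async and await the call via .promise(), so the Lambda actually waits for the Data API response instead of exiting first.
const AWS = require('aws-sdk');

// Sketch only: await the Data API call so the function doesn't return
// before the query finishes. compileItem and error handling are omitted.
exports.handler = async (event) => {
  const RDS = new AWS.RDSDataService({ apiVersion: '2018-08-01', region: 'us-east-1' });

  for (const record of event.Records) {
    const params = {
      awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx',
      dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx',
      sqlStatements: 'select * from MY_DATABASE.MY_TABLE'
    };
    const data = await RDS.executeSql(params).promise(); // resolves with the result or throws
    console.log('data', JSON.stringify(data));
  }

  return { success: true };
};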
The problem was that the RDS cluster had been created in a VPC that the Lambdas were not in.

Setting Lambda environment variables using the ASK CLI?

How can I use the ASK CLI to set Lambda function environment variables? I tried setting them using the AWS console, but after I do that, I get this error when I run ask deploy:
[Error]: Lambda update failed. Lambda ARN: arn:aws:lambda:us-east-1:608870357221:function:ask-custom-talk_stem-default
The Revision Id provided does not match the latest Revision Id. Call the GetFunction/GetAlias API to retrieve the latest Revision Id
Hello, have you tried using the --force option?
ask deploy --force
The only solution I've found is to update the variables through the AWS console, manually fetch the function's info using the AWS CLI, and update the local revision ID to match the revision ID that's live on AWS. Here is my script:
const path = require('path');
const { readFileSync, writeFileSync } = require('fs');
const execa = require('execa');
const skillRoot = path.join(__dirname, '..');
const functionRoot = path.join(skillRoot, 'lambda', 'custom');
const askConfigPath = path.join(skillRoot, '.ask', 'config');
const askConfig = JSON.parse(readFileSync(askConfigPath, 'utf8'));
const { functionName } = askConfig.deploy_settings.default.resources.lambda[0];
async function main() {
  console.log('Downloading function info from AWS');
  const result = await execa('aws', ['lambda', 'get-function', '--function-name', functionName]);
  const functionInfo = JSON.parse(result.stdout);
  const revisionId = functionInfo.Configuration.RevisionId;

  console.log('Downloading function contents from AWS');
  await execa('ask', ['lambda', 'download', '--function', functionName], { cwd: functionRoot, stdio: 'inherit' });

  console.log('Updating skill\'s revisionId');
  askConfig.deploy_settings.default.resources.lambda[0].revisionId = revisionId;
  writeFileSync(askConfigPath, JSON.stringify(askConfig, null, 2));

  console.log('Done');
}

main();

How to invoke AWS CLI command from CodePipeline?

I want to copy artifacts from an S3 bucket in Account 1 to an S3 bucket in Account 2. I was able to set up replication, but I want to know whether there is a way to invoke an AWS CLI command from within a pipeline.
Can it be invoked using a Lambda function? If yes, a small sample script would be helpful.
Yes, you can add a Lambda Invoke action to your pipeline and have the function call the copyObject API. The core part of the Lambda function is as follows.
const AWS = require('aws-sdk')
const s3 = new AWS.S3()

exports.copyRepoToProdS3 = (event, context) => {
  const jobId = event['CodePipeline.job'].id
  const s3Location = event['CodePipeline.job'].data.inputArtifacts[0].location.s3Location
  const cpParams = JSON.parse(event['CodePipeline.job'].data.actionConfiguration.configuration.UserParameters)

  // prodBuckets is expected to come from configuration (e.g. the UserParameters above);
  // its definition is not shown in this snippet.
  let promises = []
  for (let bucket of prodBuckets) {
    let params = {
      Bucket: bucket,
      CopySource: s3Location['bucketName'] + '/' + s3Location['objectKey'],
      Key: cpParams['S3ObjectKey']
    }
    promises.push(s3.copyObject(params).promise())
  }

  return Promise.all(promises)
    .then((data) => {
      console.log('Successfully copied repo to buckets!')
    }).catch((error) => {
      console.log('Failed to copy repo to buckets!', error)
    })
}
More detailed steps for adding roles and reporting the processing result back to CodePipeline can be found at the following link: https://medium.com/@codershunshun/how-to-invoke-aws-lambda-in-codepipeline-d7c77457af95
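As a rough sketch of that result reporting (not the linked article's code), the function can acknowledge the CodePipeline job with putJobSuccessResult / putJobFailureResult from the aws-sdk, using the jobId captured above; otherwise the pipeline action hangs until it times out.
const AWS = require('aws-sdk')
const codepipeline = new AWS.CodePipeline()

// Sketch: report the outcome of the copy back to CodePipeline.
// jobId comes from event['CodePipeline.job'].id in the handler above.
function reportResult(jobId, error, context) {
  if (!error) {
    return codepipeline.putJobSuccessResult({ jobId }).promise()
  }
  return codepipeline.putJobFailureResult({
    jobId,
    failureDetails: {
      type: 'JobFailed',
      message: String(error),
      externalExecutionId: context.awsRequestId
    }
  }).promise()
}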