I'm currently trying to set up an AWS Lambda (nodejs10.x) function that should execute a simple TestCafe test.
If I run my Lambda locally with sam local invoke --no-event it executes just fine:
2019-12-03T13:39:46.345Z 7b906b79-d7e5-1aa6-edb7-3e749d4e4b08 INFO hello world
 ✓ My first test
 1 passed (1s)
2019-12-03T13:39:46.578Z 7b906b79-d7e5-1aa6-edb7-3e749d4e4b08 INFO Tests failed: 0
After deploying it with sam build and sam deploy, it stops working. It just throws the following error in AWS:
{
    "errorType": "Runtime.UnhandledPromiseRejection",
    "errorMessage": "Error: Page crashed!",
    "trace": [
        "Runtime.UnhandledPromiseRejection: Error: Page crashed!",
        "    at process.on (/var/runtime/index.js:37:15)",
        "    at process.emit (events.js:203:15)",
        "    at process.EventEmitter.emit (domain.js:448:20)",
        "    at emitPromiseRejectionWarnings (internal/process/promises.js:140:18)",
        "    at process._tickCallback (internal/process/next_tick.js:69:34)"
    ]
}
My lambda handler looks like this:
const createTestCafe = require("testcafe");
const chromium = require("chrome-aws-lambda");
let testcafe = null;
exports.lambdaHandler = async (event, context) => {
    const executablePath = await chromium.executablePath;
    await createTestCafe()
        .then(tc => {
            testcafe = tc;
            const runner = testcafe.createRunner();
            return runner
                .src("sample-fixture.js")
                .browsers(
                    "puppeteer-core:launch?arg=--no-sandbox&arg=--disable-gpu&arg=--disable-setuid-sandbox&path=" + executablePath
                )
                .run({
                    skipJsErrors: true,
                    selectorTimeout: 50000
                });
        })
        .then(failedCount => {
            console.log("Tests failed: " + failedCount);
            testcafe.close();
        });
    return {
        statusCode: 200
    };
};
My sample-fixture.js looks like this:
fixture `Getting Started`
    .page `http://devexpress.github.io/testcafe/example`;

test('My first test', async t => {
    console.log('hello world');
});
I'm using the following dependencies:
"testcafe": "^1.7.0"
"chrome-aws-lambda": "^2.0.1"
"testcafe-browser-provider-puppeteer-core": "^1.1.0"
Does anyone have an idea why my Lambda function works locally on my machine but not in AWS?
I cannot currently say anything precise based only on this information.
I suggest you try the following:
Check that the deployed package size is less than 50 MB (https://github.com/puppeteer/puppeteer/blob/master/docs/troubleshooting.md#running-puppeteer-on-aws-lambda); see the size check sketch after this list
Check that testcafe-browser-provider-puppeteer-core can be run on AWS Lambda
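For the first check, a quick way to inspect the unpacked bundle size locally (a sketch, assuming the default sam build output directory):
du -sh .aws-sam/build/*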
So I've been able to run testcafe in an AWS lambda. The key for me was to take advantage of Lambda Layers. (https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html)
The issue with testcafe is that it's a huge package; both it and chrome-aws-lambda should be deployed in Layers, as you may otherwise exceed the bundle size limit.
Also, you may run into an issue with speed on the Lambda, as testcafe compiles the test files with Babel. If your files are already compiled, this may add unneeded time to your execution.
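A rough sketch of packaging the heavy dependencies into a layer (the layer name and paths are placeholders; Node.js layers expect modules under nodejs/node_modules):
mkdir -p nodejs && cp -r node_modules nodejs/
zip -r layer.zip nodejs
aws lambda publish-layer-version --layer-name testcafe-deps --zip-file fileb://layer.zip --compatible-runtimes nodejs10.x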
Hope this helps!
Related
I am trying to learn to make a web app by following the tutorial provided by AWS, but I am running into this issue when creating a Lambda function.
{
    "errorType": "ReferenceError",
    "errorMessage": "exports is not defined in ES module scope",
    "trace": [
        "ReferenceError: exports is not defined in ES module scope",
        "    at file:///var/task/index.mjs:3:1",
        "    at ModuleJob.run (node:internal/modules/esm/module_job:193:25)",
        "    at async Promise.all (index 0)",
        "    at async ESMLoader.import (node:internal/modules/esm/loader:530:24)",
        "    at async _tryAwaitImport (file:///var/runtime/index.mjs:921:16)",
        "    at async _tryRequire (file:///var/runtime/index.mjs:970:86)",
        "    at async _loadUserApp (file:///var/runtime/index.mjs:994:16)",
        "    at async UserFunction.js.module.exports.load (file:///var/runtime/index.mjs:1035:21)",
        "    at async start (file:///var/runtime/index.mjs:1200:23)",
        "    at async file:///var/runtime/index.mjs:1206:1"
    ]
}
index.mjs
// Define handler function, the entry point to our code for the Lambda service
// We receive the object that triggers the function as a parameter
exports.handler = async (event) => {
    // Extract values from event and format as strings
    let name = JSON.stringify(`Hello from Lambda, ${event.firstName} ${event.lastName}`);
    // Create a JSON object with our response and store it in a constant
    const response = {
        statusCode: 200,
        body: name
    };
    // Return the response constant
    return response;
};
Test event JSON:
{
    "firstName": "Tyler",
    "lastName": "Schnitzer"
}
It is supposed to be a simple hello world, but I am confused about why I can't get the event to work.
I am hoping someone can help explain this error and how to solve it. I tried looking through the AWS Lambda troubleshooting page, but I still don't understand.
I had the same error. I deleted and recreated the Lambda function with a runtime of Node.js 14.x, and it works fine. The error occurred with Node.js 18.x, which is the default as of this writing.
I was following this tutorial, https://www.eventbox.dev/published/lesson/innovator-island/2-realtime/2-backend.html
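For what it's worth, the error message itself points at the cause: index.mjs is loaded as an ES module, where the CommonJS exports object does not exist. A minimal sketch of an ESM-style handler that should also run on Node.js 18.x:
// index.mjs uses ES module syntax, so `exports` is never referenced
export const handler = async (event) => {
    const name = JSON.stringify(`Hello from Lambda, ${event.firstName} ${event.lastName}`);
    return { statusCode: 200, body: name };
};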
We have ECS commands running in our pipeline to deploy a Drupal 8 website. We have added about five commands like the one below to our CDK code and execute them using Lambda.
'use strict';
const AWS = require('aws-sdk');
const ecs = new AWS.ECS();
const codeDeploy = new AWS.CodeDeploy({ apiVersion: '2014-10-06' });
exports.handler = async (event, context, callback) => {
    console.log('Drush execution lambda invoked');
    const deploymentId = event.DeploymentId;
    const lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;
    try {
        let validationTestResult = 'Failed';
        const clusterName = process.env.CLUSTER_NAME;
        const containerName = process.env.CONTAINER_NAME;
        const taskListParams = {
            cluster: clusterName,
            desiredStatus: 'RUNNING',
        };
        const taskList = await ecs.listTasks(taskListParams).promise();
        const activeTask = taskList.taskArns[0];
        console.log('Active task: ' + activeTask);
        .......................
        const cimParams = {
            command: 'drush cim -y',
            interactive: true,
            task: activeTask,
            cluster: clusterName,
            container: containerName
        };
        await ecs.executeCommand(cimParams, function (err, data) {
            if (err) {
                console.log(err, err.stack, "FAILED on drush cim -y");
            } else {
                validationTestResult = 'Succeeded';
                console.log(data, "Succeeded on drush cim -y");
            }
        }).promise();
        .............................
        // Pass CodeDeploy the prepared validation test results.
        await codeDeploy.putLifecycleEventHookExecutionStatus({
            deploymentId: deploymentId,
            lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
            status: validationTestResult // status can be 'Succeeded' or 'Failed'
        }).promise();
    } catch (e) {
        console.log(e);
        console.log('Drush execution lambda failed');
        await codeDeploy.putLifecycleEventHookExecutionStatus({
            deploymentId: deploymentId,
            lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
            status: 'Failed' // status can be 'Succeeded' or 'Failed'
        }).promise();
    }
};
The problem we have is that when these commands are executed, they report success, but we still can't see the changes on the website. If we run the pipeline a second time, the changes are applied successfully.
The commands do not show any errors or fail the pipeline, but the changes only apply to the site if the pipeline is executed twice.
We are not sure if this is a Drupal/Drush or an ECS issue at this stage.
When we realized that the changes were not applying the first time, we SSH'd into the ECS container and manually executed drush cim -y, and it applied the changes to the site. So that tells us this is probably not an issue with the command, but with the ECS execution? (The equivalent CLI check is sketched below.)
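For reference, the equivalent one-off check via ECS Exec from the AWS CLI looks roughly like this (cluster, task, and container names are placeholders):
aws ecs execute-command --cluster my-cluster --task <task-arn> --container my-container --interactive --command "drush cim -y"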
Can anyone see if we are doing anything wrong here? Is there a known issue with CDK or ECS commands like this?
Most importantly, if someone could tell us how to debug ECS commands correctly, that would be great, because the current level of logging is not enough to find where the problem is.
Thanks in advance for taking the time to read this question.
Trying to use the @google-cloud/cloudbuild client library in a Cloud Function to trigger a manual build against a project, but no luck. My function runs async and does not throw an error.
Function:
const { CloudBuildClient } = require('@google-cloud/cloudbuild');

exports.index = async (req, res) => {
    const json = // json that contains build steps using docker, and project id

    // Creates a client
    const cb = new CloudBuildClient();
    try {
        const result = await cb.createBuild({
            projectId: "myproject",
            build: JSON.parse(json)
        });
        return res.status(200).json(result);
    } catch (error) {
        return res.status(400).json(error);
    }
};
I am assuming from the documentation that my default service account is implicit and credentials are sourced properly, or it would throw an error.
Advice appreciated.
I have an API that is containerized and running inside Cloud Run. How can I get the current project ID where my Cloud Run service is executing? I have tried:
I can see it in the textPayload in the logs, but I am not sure how to read the textPayload inside the POST function; the Pub/Sub message I receive is missing this information.
I have read up on querying the metadata API, but it is not very clear how to do that from within the API itself. Any links?
Is there any other way?
Edit:
After some comments below, I ended up with this code inside my .NET API running inside Cloud Run.
private string GetProjectid()
{
    var projectid = string.Empty;
    try
    {
        var PATH = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Metadata-Flavor", "Google");
            projectid = client.GetStringAsync(PATH).Result.ToString();
        }
        Console.WriteLine("PROJECT: " + projectid);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message + " --- " + ex.ToString());
    }
    return projectid;
}
Update: it works. My build pushes had been failing and I did not notice. Thanks everyone.
You get the project ID by sending a GET request to http://metadata.google.internal/computeMetadata/v1/project/project-id with the Metadata-Flavor: Google header.
See this documentation
In Node.js for example:
index.js:
const express = require('express');
const axios = require('axios');
const app = express();

const axiosInstance = axios.create({
    baseURL: 'http://metadata.google.internal/',
    timeout: 1000,
    headers: {'Metadata-Flavor': 'Google'}
});

app.get('/', (req, res) => {
    let path = req.query.path || 'computeMetadata/v1/project/project-id';
    axiosInstance.get(path).then(response => {
        console.log(response.status);
        console.log(response.data);
        res.send(response.data);
    });
});

const port = process.env.PORT || 8080;
app.listen(port, () => {
    console.log('Hello world listening on port', port);
});
package.json:
{
    "name": "metadata",
    "version": "1.0.0",
    "description": "Metadata server",
    "main": "app.js",
    "scripts": {
        "start": "node index.js"
    },
    "author": "",
    "license": "Apache-2.0",
    "dependencies": {
        "axios": "^0.18.0",
        "express": "^4.16.4"
    }
}
Others have shown how to get the project name via HTTP API, but in my opinion the easier, simpler, and more performant thing to do here is to just set the project ID as a run-time environment variable. To do this, when you deploy the function:
gcloud functions deploy myFunction --set-env-vars PROJECT_ID=my-project-name
And then you would access it in code like:
exports.myFunction = (req, res) => {
    console.log(process.env.PROJECT_ID);
}
You would simply need to set the proper value for each environment where you deploy the function. This has the very minor downside of requiring a one-time command line parameter for each environment, and the very major upside of not making your function depend on successfully authenticating with and parsing an API response. This also provides code portability, because virtually all hosting environments support environment variables, including your local development environment.
@Steren's answer in Python:
import os
import json  # needed for the local-credentials fallback below

def get_project_id():
    # In python 3.7, this works
    project_id = os.getenv("GCP_PROJECT")
    if not project_id:  # > python37
        # Only works on runtime (queries the metadata server).
        import urllib.request
        url = "http://metadata.google.internal/computeMetadata/v1/project/project-id"
        req = urllib.request.Request(url)
        req.add_header("Metadata-Flavor", "Google")
        project_id = urllib.request.urlopen(req).read().decode()
    if not project_id:  # Running locally
        with open(os.environ["GOOGLE_APPLICATION_CREDENTIALS"], "r") as fp:
            credentials = json.load(fp)
        project_id = credentials["project_id"]
    if not project_id:
        raise ValueError("Could not get a value for PROJECT_ID")
    return project_id
I followed the Using Pub/Sub with Cloud Run tutorial.
I added the gcloud module to requirements.txt:
Flask==1.1.1
pytest==5.3.0; python_version > "3.0"
pytest==4.6.6; python_version < "3.0"
gunicorn==19.9.0
gcloud
I changed the index function in main.py:
def index():
    envelope = request.get_json()
    if not envelope:
        msg = 'no Pub/Sub message received'
        print(f'error: {msg}')
        return f'Bad Request: {msg}', 400

    if not isinstance(envelope, dict) or 'message' not in envelope:
        msg = 'invalid Pub/Sub message format'
        print(f'error: {msg}')
        return f'Bad Request: {msg}', 400

    pubsub_message = envelope['message']

    name = 'World'
    if isinstance(pubsub_message, dict) and 'data' in pubsub_message:
        name = base64.b64decode(pubsub_message['data']).decode('utf-8').strip()

    print(f'Hello {name}!')

    # code added
    from gcloud import pubsub  # Or whichever service you need
    client = pubsub.Client()
    print('This is the project {}'.format(client.project))

    # Flush the stdout to avoid log buffering.
    sys.stdout.flush()

    return ('', 204)
I checked the logs:
Hello (pubsub message).
This is the project my-project-id.
Here is a snippet of Java code that fetches the current project ID:
String url = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
HttpURLConnection conn = (HttpURLConnection) (new URL(url).openConnection());
conn.setRequestProperty("Metadata-Flavor", "Google");
try {
    InputStream in = conn.getInputStream();
    projectId = new String(in.readAllBytes(), StandardCharsets.UTF_8);
} finally {
    conn.disconnect();
}
With Google's official client library:
import gcpMetadata from 'gcp-metadata'
const projectId = await gcpMetadata.project('project-id')
It should be possible to use the Platform class from Google.Api.Gax (https://github.com/googleapis/gax-dotnet/blob/master/Google.Api.Gax/Platform.cs). The Google.Api.Gax package is usually installed as a dependency for the other Google .NET packages like Google.Cloud.Storage.V1.
var projectId = Google.Api.Gax.Platform.Instance().ProjectId;
On the GAE platform, you can also simply check environment variables GOOGLE_CLOUD_PROJECT and GCLOUD_PROJECT
var projectId = Environment.GetEnvironmentVariable("GOOGLE_CLOUD_PROJECT")
?? Environment.GetEnvironmentVariable("GCLOUD_PROJECT");
I'm trying to use RDSDataService to query an Aurora Serverless database. When I try to query, my Lambda just times out (I've set the timeout to 5 minutes just to make sure it isn't a problem with that). I have 1 record in my database, and when I try to query it I get no results, and neither the error nor the data callback is invoked. I've verified executeSql is called by removing the dbClusterOrInstanceArn from my params, which makes it throw the exception for not having it.
I have also run SHOW FULL PROCESSLIST in the query editor to see if the queries were still running, and they are not. I've given the Lambda both the AmazonRDSFullAccess and AmazonRDSDataFullAccess policies without any luck either. You can see from the code below that I've already tried what was recommended in issue #2376.
Not that this should matter, but this lambda is triggered by a Kinesis event trigger.
const AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
    const RDS = new AWS.RDSDataService({ apiVersion: '2018-08-01', region: 'us-east-1' });
    for (record of event.Records) {
        const payload = JSON.parse(new Buffer(record.kinesis.data, 'base64').toString('utf-8'));
        const data = compileItem(payload);
        const params = {
            awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx',
            dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx',
            sqlStatements: `select * from MY_DATABASE.MY_TABLE`
            // database: 'MY_DATABASE'
        };
        console.log('calling executeSql');
        RDS.executeSql(params, (error, data) => {
            if (error) {
                console.log('error', error);
                callback(error, null);
            } else {
                console.log('data', data);
                callback(null, { success: true });
            }
        });
    }
};
EDIT: We've run the command through the AWS CLI and it returns results.
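(For context, the equivalent Data API call from the CLI looks roughly like this, with the ARNs elided as above; the flag names follow the original 2018-08-01 ExecuteSql API and may differ on newer CLI versions:)
aws rds-data execute-sql --aws-secret-store-arn arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx --db-cluster-or-instance-arn arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx --sql-statements "select * from MY_DATABASE.MY_TABLE"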
EDIT 2: I'm able to connect to it using the mysql2 package and connecting to it through the URI, so it's definitely an issue with either the aws-sdk or how I'm using it.
The Node.js execution is not waiting for the result, which is why the process exits before completing the request. Either:
use the serverless-mysql library (https://www.npmjs.com/package/serverless-mysql)
OR
set context.callbackWaitsForEmptyEventLoop = false
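A minimal sketch of the second option, applied to a handler like the one in the question:
exports.handler = (event, context, callback) => {
    // Freeze the function as soon as callback() fires instead of
    // waiting for the event loop to drain
    context.callbackWaitsForEmptyEventLoop = false;
    // ... issue the RDSDataService call and invoke callback() as before
};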
The problem was that the RDS cluster had been created in a VPC, which the Lambdas were not in.
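For anyone hitting the same thing, attaching the function to the cluster's VPC can be done from the CLI, roughly like this (the function name, subnet, and security group IDs are placeholders):
aws lambda update-function-configuration --function-name my-function --vpc-config SubnetIds=subnet-xxxxxxxx,SecurityGroupIds=sg-xxxxxxxx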