We can use GCP Cloud Functions to start and stop GCP instances, but I need to implement scheduled suspend and resume of GCP instances using Cloud Functions and Cloud Scheduler.
From the GCP documentation, I learned that we can start and stop instances using the Cloud Functions sample available below:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/functions/scheduleinstance
Do we have the same Node.js (or other language) packages available to suspend and resume GCP instances?
If not, can we create our own for suspend/resume?
When I tried writing one, I got the error below:
"TypeError: compute.zone(...).vm(...).resume is not a function"
Edit: thanks Chris and Guillaume; after going through your links I have edited my code, and below is my index.js file now.
For some reason, when I do
gcloud functions deploy resumeInstancePubSub --trigger-topic resume-instance --runtime nodejs10 --allow-unauthenticated
I always get
Function 'resumeInstancePubSub1' is not defined in the provided module.
resumeInstancePubSub1 2020-09-04 10:57:00.333 Did you specify the correct target function to execute?
I have not worked with Node.js or JS before. I was expecting something similar to the start/stop documentation, which I could easily make work using the git repo below:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git
My index.js file:
// BEFORE RUNNING:
// ---------------
// 1. If not already done, enable the Compute Engine API
//    and check the quota for your project at
//    https://console.developers.google.com/apis/api/compute
// 2. This sample uses Application Default Credentials for authentication.
//    If not already done, install the gcloud CLI from
//    https://cloud.google.com/sdk and run
//    `gcloud beta auth application-default login`.
//    For more information, see
//    https://developers.google.com/identity/protocols/application-default-credentials
// 3. Install the Node.js client library by running
//    `npm install googleapis --save`

const {google} = require('googleapis');
var compute = google.compute('beta');

authorize(function(authClient) {
  var request = {
    // Project ID for this request.
    project: 'my-project', // TODO: Update placeholder value.
    // The name of the zone for this request.
    zone: 'my-zone', // TODO: Update placeholder value.
    // Name of the instance resource to resume.
    instance: 'my-instance', // TODO: Update placeholder value.
    resource: {
      // TODO: Add desired properties to the request body.
    },
    auth: authClient,
  };

  exports.resumeInstancePubSub = async (event, context, callback) => {
    try {
      const payload = _validatePayload(
        JSON.parse(Buffer.from(event.data, 'base64').toString())
      );
      const options = {filter: `labels.${payload.label}`};
      const [vms] = await compute.getVMs(options);
      await Promise.all(
        vms.map(async (instance) => {
          if (payload.zone === instance.zone.id) {
            const [operation] = await compute
              .zone(payload.zone)
              .vm(instance.name)
              .resume();
            // Operation pending
            return operation.promise();
          }
        })
      );
      // Operation complete. Instance successfully started.
      const message = `Successfully started instance(s)`;
      console.log(message);
      callback(null, message);
    } catch (err) {
      console.log(err);
      callback(err);
    }
  };

  compute.instances.resume(request, function(err, response) {
    if (err) {
      console.error(err);
      return;
    }
    // TODO: Change code below to process the `response` object:
    console.log(JSON.stringify(response, null, 2));
  });
});

function authorize(callback) {
  google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  }).then(client => {
    callback(client);
  }).catch(err => {
    console.error('authentication failed: ', err);
  });
}
Here and here is the documentation for the new beta version of the API. You can see that you can suspend an instance like:
compute.instances.suspend(request, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  // ...process `response`...
});
And you can resume an instance in a similar way:
compute.instances.resume(request, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  // ...process `response`...
});
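To put those calls in context, here is a minimal sketch (my own, not an official sample) of a Pub/Sub-triggered function that resumes a single instance through the beta API. The project ID is a placeholder, and the message payload shape ({"zone": ..., "instance": ...}) is an assumption:

// Sketch only: resume one instance from a Pub/Sub-triggered Cloud Function.
// Assumes the Pub/Sub message carries {"zone": "...", "instance": "..."}.
const {google} = require('googleapis');
const compute = google.compute('beta');

exports.resumeInstancePubSub = async (event, context) => {
  const authClient = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const payload = JSON.parse(Buffer.from(event.data, 'base64').toString());
  // googleapis methods return a promise when no callback is passed.
  const res = await compute.instances.resume({
    project: 'my-project', // TODO: update placeholder value
    zone: payload.zone,
    instance: payload.instance,
    auth: authClient,
  });
  console.log('Resume operation started:', res.data.name);
};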
GCP recently added a "create schedule" feature to start and stop VM instances based on a configured schedule.
More details can be found at
https://cloud.google.com/compute/docs/instances/schedule-instance-start-stop#managing_instance_schedules
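If scheduled start/stop (rather than suspend/resume) is enough, the schedule can be created and attached from the gcloud CLI roughly as follows; the policy name, region, zone, instance name, and cron expressions below are all placeholders:

# Create an instance schedule (placeholders throughout), then attach it to a VM.
gcloud compute resource-policies create instance-schedule my-schedule \
    --region=us-central1 \
    --vm-start-schedule="0 8 * * *" \
    --vm-stop-schedule="0 18 * * *" \
    --timezone=UTC

gcloud compute instances add-resource-policies my-instance \
    --resource-policies=my-schedule \
    --zone=us-central1-a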
My goal is to develop an endpoint using a Google Cloud Function that updates a Pulumi stack configuration.
I am using the Pulumi Automation API for it.
exports.updateServiceImageVersion = async (req, res) => {
  const auto = require("@pulumi/pulumi/automation");
  const pulumiProgram = async () => {
    console.log("test");
  };
  const args = {
    stackName: "xxx",
    projectName: "xxx-service",
    program: pulumiProgram,
  };
  try {
    const stack = await auto.LocalWorkspace.selectStack(args);
    await stack.setConfig("aws:region", {
      value: "us-west-2",
    });
    await stack.refresh({
      onOutput: console.info,
    });
  } catch (err) {
    console.log(err.message);
  }
  res.status(200).send("test");
};
I get the error below when the selectStack method is called. Could you help me solve this error? The documentation mentions that the Pulumi Automation API requires the Pulumi CLI to be installed and available in the PATH variable. How can I do that for a Google Cloud Function?
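(Not from the docs, just a hedged sketch of one workaround to experiment with: bundle a Linux pulumi binary with the function source, say under ./bin, and prepend that directory to PATH before the Automation API runs.)

// Hypothetical workaround sketch: assumes a Linux `pulumi` binary has been
// vendored into a ./bin directory of the deployed function source.
const path = require("path");
process.env.PATH = `${path.join(__dirname, "bin")}:${process.env.PATH}`;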
I have been working with AWS for the last month, and I need to know how we can update a Step Function without changing the name of the file.
The approach the documentation suggests for making changes to a Step Function is to rename the existing one and add the changes in the file. But that would eliminate the logs that have been created in the AWS CLI.
For example, if I replace the following code with something else, I have to change the whole dynamic of the project in order to make the changes appear in the AWS CLI.
Can somebody please provide a solution for this?
The update part can be done through the AWS SDK. Follow the code below; it will keep all the changes in the execution logs as well.
let aws = require("aws-sdk");

let roleArn = "roleARN goes here";
let definition = { /* state machine definition goes here */ };
let params = {
  name: stepFunctionName,
  roleArn: roleArn,
  definition: JSON.stringify(definition),
};

let stepFunctions = new aws.StepFunctions();
stepFunctions.createStateMachine(params, function (err, data) {
  if (err) {
    console.log("error occurred while creating the step function");
    console.log(err, err.stack);
    if (err.code === "StateMachineAlreadyExists" && err.statusCode === 400) {
      let paramsUpdate = {
        stateMachineArn: "stateMachine ARN for the existing stateMachine",
        definition: JSON.stringify(definition),
        loggingConfiguration: {
          includeExecutionData: true,
        },
        roleArn: roleArn,
      };
      stepFunctions.updateStateMachine(
        paramsUpdate,
        function (error, updateData) {
          if (error) {
            console.log("error occurred while updating the step function.");
            console.log("Error", error.stack);
            return;
          }
          console.log("step function updated successfully");
          console.log("response", updateData);
        }
      );
    } else {
      // The error was not "already exists": creation failed outright.
      console.log("step function does not exist and the creation failed in the process.");
      console.log("definition", JSON.stringify(definition));
    }
  } else {
    console.log(data); // successful response
  }
});
What I want to do
I want to create a REST API that returns data from my DynamoDB table, which was created by a GraphQL model.
What I've done
Create GraphQL model
type Public @model {
  id: ID!
  name: String!
}
Create REST API with Lambda Function with access to my PublicTable
$ amplify add api
? Please select from one of the below mentioned services: REST
? Provide a friendly name for your resource to be used as a label for this category in the project: rest
? Provide a path (e.g., /book/{isbn}): /items
? Choose a Lambda source Create a new Lambda function
? Provide an AWS Lambda function name: listPublic
? Choose the runtime that you want to use: NodeJS
? Choose the function template that you want to use: Hello World
Available advanced settings:
- Resource access permissions
- Scheduled recurring invocation
- Lambda layers configuration
? Do you want to configure advanced settings? Yes
? Do you want to access other resources in this project from your Lambda function? Yes
? Select the category storage
? Storage has 8 resources in this project. Select the one you would like your Lambda to access Public:@model(appsync)
? Select the operations you want to permit for Public:@model(appsync) create, read, update, delete
You can access the following resource attributes as environment variables from your Lambda function
API_MYPROJECT_GRAPHQLAPIIDOUTPUT
API_MYPROJECT_PUBLICTABLE_ARN
API_MYPROJECT_PUBLICTABLE_NAME
ENV
REGION
? Do you want to invoke this function on a recurring schedule? No
? Do you want to configure Lambda layers for this function? No
? Do you want to edit the local lambda function now? No
Successfully added resource listPublic locally.
Next steps:
Check out sample function code generated in <project-dir>/amplify/backend/function/listPublic/src
"amplify function build" builds all of your functions currently in the project
"amplify mock function <functionName>" runs your function locally
"amplify push" builds all of your local backend resources and provisions them in the cloud
"amplify publish" builds all of your local backend and front-end resources (if you added hosting category) and provisions them in the cloud
Succesfully added the Lambda function locally
? Restrict API access No
? Do you want to add another path? No
Successfully added resource rest locally
Edit my Lambda function
/* Amplify Params - DO NOT EDIT
  API_MYPROJECT_GRAPHQLAPIIDOUTPUT
  API_MYPROJECT_PUBLICTABLE_ARN
  API_MYPROJECT_PUBLICTABLE_NAME
  ENV
  REGION
Amplify Params - DO NOT EDIT */
const AWS = require("aws-sdk");

const region = process.env.REGION;
AWS.config.update({ region });

const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: "PublicTable"
};

async function listItems() {
  try {
    const data = await docClient.scan(params).promise();
    return data;
  } catch (err) {
    return err;
  }
}

exports.handler = async (event) => {
  try {
    const data = await listItems();
    return { body: JSON.stringify(data) };
  } catch (err) {
    return { error: err };
  }
};
Push my updates
$ amplify push
Open my REST API endpoint /items
{
  "message": "User: arn:aws:sts::829736458236:assumed-role/myprojectLambdaRolef4f571b-dev/listPublic-dev is not authorized to perform: dynamodb:Scan on resource: arn:aws:dynamodb:us-east-1:8297345848236:table/Public-ssrh52tnjvcdrp5h7evy3zdldsd-dev",
  "code": "AccessDeniedException",
  "time": "2021-04-21T21:21:32.778Z",
  "requestId": "JOA5KO3GVS3QG7RQ2V824NGFVV4KQNSO5AEMVJF66Q9ASUAAJG",
  "statusCode": 400,
  "retryable": false,
  "retryDelay": 28.689093010346657
}
Problems
What did I do wrong?
How do I access my table, and why didn't I get access when I created it?
Why are API_MYPROJECT_PUBLICTABLE_NAME and the other constants needed?
Solution
The problem turned out to be either the Node.js version or the amplify-cli version. After updating amplify-cli and installing Node 14.16.0, everything worked.
I also changed the table name to the one Amplify creates for us, although this code did not work before. The code now looks like this:
/* Amplify Params - DO NOT EDIT
  API_MYPROJECT_GRAPHQLAPIIDOUTPUT
  API_MYPROJECT_PUBLICTABLE_ARN
  API_MYPROJECT_PUBLICTABLE_NAME
  ENV
  REGION
Amplify Params - DO NOT EDIT */
const AWS = require("aws-sdk");

const region = process.env.REGION;
const tableName = process.env.API_MYPROJECT_PUBLICTABLE_NAME;
AWS.config.update({ region });

const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: tableName
};

async function listItems() {
  try {
    const data = await docClient.scan(params).promise();
    return data;
  } catch (err) {
    return err;
  }
}

exports.handler = async (event) => {
  try {
    const data = await listItems();
    return { body: JSON.stringify(data) };
  } catch (err) {
    return { error: err };
  }
};
I'm trying to connect to AWS DocumentDB from a Lambda function, but I'm not able to connect.
MongoClient.connect never calls its callback function.
TLS is off on the DocumentDB cluster, and I'm able to connect via the mongo shell.
The Lambda and DocumentDB are in the same VPC and security group.
'use strict';

module.exports.search = async (event, context, callback) => {
  const MongoClient = require('mongodb').MongoClient;
  const url = "mongodb://xxx:xxxx@xxx-db.cluster-xxx.us-east-2.docdb.amazonaws.com:27017";

  console.log("Starting");
  MongoClient.connect(url, { useNewUrlParser: true }, function (err, client) {
    if (err) throw err;
    console.log("Connected");

    const db = client.db('mydb');
    const col = db.collection('mycollection');
    col.find({}).toArray().then(result => {
      console.log(result);
      return { statusCode: 200, body: result };
    }).catch(err => {
      console.log('=> an error occurred: ', err);
      return { statusCode: 500, body: 'error' };
    });
  });
};
The output only prints "Starting", which is logged before MongoClient.connect is called.
How can I identify or debug the issue?
Just from looking at the code, I am pretty sure your function exits before it is able to complete, so your callback is never executed: MongoClient.connect runs asynchronously.
Take a look at some resources on async/await/promises and Lambda:
https://medium.com/tensult/async-await-on-aws-lambda-function-for-nodejs-2783febbccd9
How to wait for async actions inside AWS Lambda?
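For illustration, here is a minimal sketch of the same handler rewritten with async/await so the function only returns after the query finishes (the URL, database, and collection names are the placeholders from the question):

'use strict';
const { MongoClient } = require('mongodb');

module.exports.search = async (event) => {
  const url = "mongodb://xxx:xxxx@xxx-db.cluster-xxx.us-east-2.docdb.amazonaws.com:27017";
  let client;
  try {
    // With no callback, connect() returns a promise that we can await.
    client = await MongoClient.connect(url, { useNewUrlParser: true });
    const result = await client
      .db('mydb')
      .collection('mycollection')
      .find({})
      .toArray();
    return { statusCode: 200, body: JSON.stringify(result) };
  } catch (err) {
    console.log('=> an error occurred: ', err);
    return { statusCode: 500, body: 'error' };
  } finally {
    // Close the connection so the event loop can drain.
    if (client) await client.close();
  }
};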
I'm building a React Native app with the Serverless Framework using AWS services.
I created a REST API with a Lambda function (nodejs8.10 runtime) and API Gateway to use Rekognition services such as indexFaces, listCollections, etc. My Lambda is in a VPC with RDS (later it'll be Aurora) to store face IDs and other data.
Everything works fine except the Rekognition services.
When I call any Rekognition service, it shows "Task timed out after 270.04 seconds". But it works when I call it locally using the serverless-offline plugin.
I attached all the necessary permissions to my Lambda, like AmazonRekognitionFullAccess.
Here is my code:
index.js
app.post('/myapi', function (req, res) {
  var params = {
    MaxResults: 3,
  };
  const rekognition = aws_config(); // rekognition configuration
  rekognition.listCollections(params, function (err, data) {
    if (err) {
      res.json(err.stack);
      console.log(err, err.stack);
    } else {
      res.json(data);
      console.log(data);
    }
  });
});

function aws_config() {
  const $options = {
    region: 'ap-southeast-2',
    apiVersion: '2016-06-27',
    accessKeyId: config.ENV.aws_key,
    secretAccessKey: config.ENV.aws_secret,
  };
  return new AWS.Rekognition($options);
}
How can I solve this timeout error, given that it doesn't show any error in the CloudWatch logs?