Pulumi Automation API in Google Cloud Function - google-cloud-platform

My goal is to develop an endpoint using a Google Cloud Function that updates a Pulumi stack configuration.
I am using the Pulumi Automation API for it.
exports.updateServiceImageVersion = async (req, res) => {
  const auto = require("@pulumi/pulumi/automation");
  const pulumiProgram = async () => {
    console.log("test");
  };
  const args = {
    stackName: "xxx",
    projectName: "xxx-service",
    program: pulumiProgram
  };
  try {
    const stack = await auto.LocalWorkspace.selectStack(args);
    await stack.setConfig("aws:region", {
      value: "us-west-2"
    });
    await stack.refresh({
      onOutput: console.info
    });
  } catch (err) {
    console.log(err.message);
  }
  res.status(200).send("test");
};
I get an error (screenshot attached) when the selectStack method is called. Could you help me solve it? The documentation mentions that the Pulumi Automation API requires the Pulumi CLI to be installed and available on the PATH. How can I do that for a Google Cloud Function?

Related

AWS: Get the public Image id's (AMI), for an Account, which are used to launch EC2 instances

We are trying to create a Config rule that warns and notifies when a public image is used to launch our EC2 instances in an account. This is to avoid security issues and unwanted user scripts being run against our instances. We use Node to develop the logic, and I am using the describeImages method of EC2 to get the public images by passing the is-public filter. The results are all the public images available in that account, but we need only the public images that are being used to launch our instances, and I couldn't figure out how to achieve this. Any help would be greatly appreciated. The sample code I am using to test this is as follows.
const aws = require('aws-sdk');
aws.config.update({ region: 'us-west-2' });
const ec2 = new aws.EC2();

const getPublicImages = async () => {
  const params = {
    DryRun: false,
    Filters: [
      {
        Name: 'is-public',
        Values: ['true']
      }
    ]
    // Owners: [event['accountId']]
  };
  return ec2
    .describeImages(params)
    .promise()
    .catch(err => err);
};

const handleGetPublicImages = async () => {
  const res = await getPublicImages();
  console.log(res);
};

handleGetPublicImages();
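To narrow the results to public images that are actually in use, one approach is to collect the ImageId of every instance and intersect that set with the public-image list. This is a sketch: `publicImagesInUse` is a hypothetical helper, and the commented wiring assumes the aws-sdk v2 calls `describeInstances` and `describeImages`.

```javascript
// Pure helper: keep only the public images whose ImageId is used by
// at least one instance in the account.
function publicImagesInUse(instanceImageIds, publicImages) {
  const used = new Set(instanceImageIds);
  return publicImages.filter((img) => used.has(img.ImageId));
}

// Intended wiring (requires aws-sdk and credentials):
// const reservations = (await ec2.describeInstances({}).promise()).Reservations;
// const imageIds = reservations
//   .flatMap((r) => r.Instances)
//   .map((i) => i.ImageId);
// const { Images } = await ec2.describeImages(params).promise();
// const flagged = publicImagesInUse(imageIds, Images);
```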

Problems accessing storage from Lambda function in Amplify

What I want to do?
I want to create REST API that returns data from my DynamoDB table which is being created by GraphQL model.
What I've done
Create GraphQL model
type Public @model {
  id: ID!
  name: String!
}
Create REST API with Lambda Function with access to my PublicTable
$ amplify add api
? Please select from one of the below mentioned services: REST
? Provide a friendly name for your resource to be used as a label for this category in the project: rest
? Provide a path (e.g., /book/{isbn}): /items
? Choose a Lambda source Create a new Lambda function
? Provide an AWS Lambda function name: listPublic
? Choose the runtime that you want to use: NodeJS
? Choose the function template that you want to use: Hello World
Available advanced settings:
- Resource access permissions
- Scheduled recurring invocation
- Lambda layers configuration
? Do you want to configure advanced settings? Yes
? Do you want to access other resources in this project from your Lambda function? Yes
? Select the category storage
? Storage has 8 resources in this project. Select the one you would like your Lambda to access Public:@model(appsync)
? Select the operations you want to permit for Public:@model(appsync) create, read, update, delete
You can access the following resource attributes as environment variables from your Lambda function
API_MYPROJECT_GRAPHQLAPIIDOUTPUT
API_MYPROJECT_PUBLICTABLE_ARN
API_MYPROJECT_PUBLICTABLE_NAME
ENV
REGION
? Do you want to invoke this function on a recurring schedule? No
? Do you want to configure Lambda layers for this function? No
? Do you want to edit the local lambda function now? No
Successfully added resource listPublic locally.
Next steps:
Check out sample function code generated in <project-dir>/amplify/backend/function/listPublic/src
"amplify function build" builds all of your functions currently in the project
"amplify mock function <functionName>" runs your function locally
"amplify push" builds all of your local backend resources and provisions them in the cloud
"amplify publish" builds all of your local backend and front-end resources (if you added hosting category) and provisions them in the cloud
Succesfully added the Lambda function locally
? Restrict API access No
? Do you want to add another path? No
Successfully added resource rest locally
Edit my Lambda function
/* Amplify Params - DO NOT EDIT
  API_MYPROJECT_GRAPHQLAPIIDOUTPUT
  API_MYPROJECT_PUBLICTABLE_ARN
  API_MYPROJECT_PUBLICTABLE_NAME
  ENV
  REGION
Amplify Params - DO NOT EDIT */
const AWS = require("aws-sdk");
const region = process.env.REGION;
AWS.config.update({ region });
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: "PublicTable"
};

async function listItems() {
  try {
    const data = await docClient.scan(params).promise();
    return data;
  } catch (err) {
    return err;
  }
}

exports.handler = async (event) => {
  try {
    const data = await listItems();
    return { body: JSON.stringify(data) };
  } catch (err) {
    return { error: err };
  }
};
Push my updates
$ amplify push
Open my REST API endpoint /items
{
"message": "User: arn:aws:sts::829736458236:assumed-role/myprojectLambdaRolef4f571b-dev/listPublic-dev is not authorized to perform: dynamodb:Scan on resource: arn:aws:dynamodb:us-east-1:8297345848236:table/Public-ssrh52tnjvcdrp5h7evy3zdldsd-dev",
"code": "AccessDeniedException",
"time": "2021-04-21T21:21:32.778Z",
"requestId": "JOA5KO3GVS3QG7RQ2V824NGFVV4KQNSO5AEMVJF66Q9ASUAAJG",
"statusCode": 400,
"retryable": false,
"retryDelay": 28.689093010346657
}
Problems
What did I do wrong?
How do I access my table, and why didn't I get access when I created it?
Why are API_MYPROJECT_PUBLICTABLE_NAME and the other constants needed?
Solution
The problem turned out to be either the Node.js version or the amplify-cli version. After updating amplify-cli and installing Node 14.16.0, everything worked.
I also changed the table name to the one Amplify creates for us, although this code did not work before. The code became:
/* Amplify Params - DO NOT EDIT
  API_MYPROJECT_GRAPHQLAPIIDOUTPUT
  API_MYPROJECT_PUBLICTABLE_ARN
  API_MYPROJECT_PUBLICTABLE_NAME
  ENV
  REGION
Amplify Params - DO NOT EDIT */
const AWS = require("aws-sdk");
const region = process.env.REGION;
const tableName = process.env.API_MYPROJECT_PUBLICTABLE_NAME;
AWS.config.update({ region });
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: tableName
};

async function listItems() {
  try {
    const data = await docClient.scan(params).promise();
    return data;
  } catch (err) {
    return err;
  }
}

exports.handler = async (event) => {
  try {
    const data = await listItems();
    return { body: JSON.stringify(data) };
  } catch (err) {
    return { error: err };
  }
};
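One caveat worth noting about the handler above: DocumentClient.scan returns at most 1 MB of items per call, so larger tables require following LastEvaluatedKey across pages. A sketch of that loop, where `scanFn` is a stand-in for `(p) => docClient.scan(p).promise()` so the pagination logic can be shown without AWS credentials:

```javascript
// Collect every item by following LastEvaluatedKey until the table
// is exhausted. `scanFn` abstracts the actual DocumentClient call.
async function scanAll(scanFn, params) {
  const items = [];
  let ExclusiveStartKey;
  do {
    const page = await scanFn({ ...params, ExclusiveStartKey });
    items.push(...page.Items);
    ExclusiveStartKey = page.LastEvaluatedKey;
  } while (ExclusiveStartKey);
  return items;
}
```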

AccessDeniedException while starting pipeline

I'm using AWS Lambda with the AWS CDK to start my pipeline:
const PipelinesParams = {
  name: "GatsbyPipelineLolly",
};
try {
  const pipeline = new AWS.CodePipeline();
  await docClient.put(params).promise();
  pipeline.startPipelineExecution(
    PipelinesParams,
    function (err: any, data: any) {
      if (err) {
        console.log(err);
      } else {
        console.log(data);
      }
    }
  );
} catch (err) {
  console.log(err);
}
and these are the actions I authorized:
const policy = new PolicyStatement();
policy.addActions('s3:*');
policy.addResources('*');
policy.addActions('codepipeline:*');
I am still getting the unauthorized error; an image is also attached for review.
Are you sure the policy is attached to the role with which you are deploying the pipeline?
It looks like you've created a policy but haven't attached it to the role you are using (from your error message). Please see:
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-iam-readme.html#using-existing-roles and https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-iam.Policy.html#roles
const role = iam.Role.fromRoleArn(this, 'Role', 'arn:aws:iam...')
policy.attachToRole(role)
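For reference, a least-privilege policy document for this case might look like the sketch below. The account ID and pipeline ARN are placeholders, not values from the question. The point from the answer stands either way: the policy must actually be attached to the Lambda's execution role, not merely created.

```javascript
// Hypothetical IAM policy document: allow only starting the named
// pipeline, instead of the broad codepipeline:* grant shown above.
// The Resource ARN below is a placeholder.
const startPipelinePolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['codepipeline:StartPipelineExecution'],
      Resource: ['arn:aws:codepipeline:us-east-1:111122223333:GatsbyPipelineLolly'],
    },
  ],
};

console.log(JSON.stringify(startPipelinePolicy, null, 2));
```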

GCP cloud function to suspend and resume the GCP instances

We can use GCP Cloud Functions to start and stop GCP instances, but I need to implement scheduled suspend and resume of GCP instances using a Cloud Function and Cloud Scheduler.
From the GCP documentation, I found that we can start and stop instances using the Cloud Functions available at
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/functions/scheduleinstance
Do we have the same Node.js or other-language packages available to suspend and resume GCP instances?
If not, can we create our own for suspend/resume?
When I tried one, I got the error below:
TypeError: compute.zone(...).vm(...).resume is not a function
Edit: thanks Chris and Guillaume; after going through your links I have edited my code, and below is my index.js file now.
For some reason, when I run
gcloud functions deploy resumeInstancePubSub --trigger-topic resume-instance --runtime nodejs10 --allow-unauthenticated
I always get
Function 'resumeInstancePubSub1' is not defined in the provided module.
resumeInstancePubSub1 2020-09-04 10:57:00.333 Did you specify the correct target function to execute?
I have not worked with Node.js or JS before; I was expecting something similar to the start/stop documentation, which I could easily make work using the git repo below:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git
My index.js file:
// BEFORE RUNNING:
// ---------------
// 1. If not already done, enable the Compute Engine API
//    and check the quota for your project at
//    https://console.developers.google.com/apis/api/compute
// 2. This sample uses Application Default Credentials for authentication.
//    If not already done, install the gcloud CLI from
//    https://cloud.google.com/sdk and run
//    `gcloud beta auth application-default login`.
//    For more information, see
//    https://developers.google.com/identity/protocols/application-default-credentials
// 3. Install the Node.js client library by running
//    `npm install googleapis --save`
const {google} = require('googleapis');
var compute = google.compute('beta');

authorize(function(authClient) {
  var request = {
    // Project ID for this request.
    project: 'my-project', // TODO: Update placeholder value.
    // The name of the zone for this request.
    zone: 'my-zone', // TODO: Update placeholder value.
    // Name of the instance resource to resume.
    instance: 'my-instance', // TODO: Update placeholder value.
    resource: {
      // TODO: Add desired properties to the request body.
    },
    auth: authClient,
  };

  exports.resumeInstancePubSub = async (event, context, callback) => {
    try {
      const payload = _validatePayload(
        JSON.parse(Buffer.from(event.data, 'base64').toString())
      );
      const options = {filter: `labels.${payload.label}`};
      const [vms] = await compute.getVMs(options);
      await Promise.all(
        vms.map(async (instance) => {
          if (payload.zone === instance.zone.id) {
            const [operation] = await compute
              .zone(payload.zone)
              .vm(instance.name)
              .resume();
            // Operation pending
            return operation.promise();
          }
        })
      );
      // Operation complete. Instance successfully started.
      const message = `Successfully started instance(s)`;
      console.log(message);
      callback(null, message);
    } catch (err) {
      console.log(err);
      callback(err);
    }
  };

  compute.instances.resume(request, function(err, response) {
    if (err) {
      console.error(err);
      return;
    }
    // TODO: Change code below to process the `response` object:
    console.log(JSON.stringify(response, null, 2));
  });
});

function authorize(callback) {
  google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  }).then(client => {
    callback(client);
  }).catch(err => {
    console.error('authentication failed: ', err);
  });
}
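As an aside on the payload handling in the handler above: Pub/Sub delivers the message body base64-encoded in event.data, which is why the code decodes it before validating. The decoding step in isolation looks like this (a sketch, without the `_validatePayload` helper from the sample):

```javascript
// Decode a Pub/Sub message body: base64 -> UTF-8 string -> JSON.
function decodePayload(data) {
  return JSON.parse(Buffer.from(data, 'base64').toString());
}
```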
Here and here is the documentation for the new beta version of the API. You can see that you can suspend an instance like:
compute.instances.suspend(request, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  // ...
});
And you can resume an instance in a similar way:
compute.instances.resume(request, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  // ...
});
GCP recently added "create schedule" feature to start and stop the VM instances based on the configured schedule.
More details can be found at
https://cloud.google.com/compute/docs/instances/schedule-instance-start-stop#managing_instance_schedules

lambda task timed out error while using rekognition

I'm building a React Native app with the Serverless Framework using AWS services.
I created a REST API with a Lambda function (Node.js 8.10 environment) and API Gateway to use Rekognition services such as indexFaces, listCollections, etc. My Lambda is in a VPC with RDS (later I'll use Aurora) to store face IDs and other data.
Everything works fine except the Rekognition services.
When I call any Rekognition service, it shows Task timed out after 270.04 seconds. But it works when I call it locally using the serverless-offline plugin.
I attached all necessary permissions to my Lambda, such as AmazonRekognitionFullAccess.
Here is my code
index.js
app.post('/myapi', function (req, res) {
  var params = {
    MaxResults: 3,
  };
  const rekognition = aws_config(); // rekognition configuration
  rekognition.listCollections(params, function (err, data) {
    if (err) {
      res.json(err.stack);
      console.log(err, err.stack);
    } else {
      res.json(data);
      console.log(data);
    }
  });
});

function aws_config() {
  const $options = {
    'region': 'ap-southeast-2',
    'version': '2016-06-27',
    'accessKeyId': config.ENV.aws_key,
    'secretAccessKey': config.ENV.aws_secret,
  };
  return new AWS.Rekognition($options);
}
How can I solve this timeout error, given that it doesn't show any error in the CloudWatch logs?
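A Lambda attached to a VPC with no NAT route cannot reach public AWS endpoints such as Rekognition, and the SDK call then hangs until the function times out with no application error logged, which matches the symptom above. Tightening the SDK's timeouts does not fix the missing route, but it surfaces the connectivity failure quickly instead of a silent 270-second timeout. A sketch of such client options (the numeric values are assumptions, not recommendations):

```javascript
// Sketch: aws-sdk v2 client options with explicit HTTP timeouts so a
// missing network route fails fast instead of hitting the Lambda
// timeout. All values below are illustrative assumptions.
const rekognitionOptions = {
  region: 'ap-southeast-2',
  maxRetries: 2,
  httpOptions: {
    connectTimeout: 5000, // ms to establish the TCP connection
    timeout: 10000,       // ms for the whole request
  },
};

// Intended usage: new AWS.Rekognition(rekognitionOptions);
```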