Setting lambda environment variable using ask CLI? - amazon-web-services

How can I use the ASK CLI to set Lambda function environment variables? I tried setting them using the AWS console, but after I do that I get this error when I run ask deploy:
[Error]: Lambda update failed. Lambda ARN: arn:aws:lambda:us-east-1:608870357221:function:ask-custom-talk_stem-default
The Revision Id provided does not match the latest Revision Id. Call the GetFunction/GetAlias API to retrieve the latest Revision Id

Hello, have you tried using the --force option?
ask deploy --force

The only solution I've found is to update the variables through the AWS console, then manually fetch the function's info using the AWS CLI and update the local revision id to match the one that's live on AWS. Here is my script:
const path = require('path');
const { readFileSync, writeFileSync } = require('fs');
const execa = require('execa');

const skillRoot = path.join(__dirname, '..');
const functionRoot = path.join(skillRoot, 'lambda', 'custom');
const askConfigPath = path.join(skillRoot, '.ask', 'config');
const askConfig = JSON.parse(readFileSync(askConfigPath, 'utf8'));
const { functionName } = askConfig.deploy_settings.default.resources.lambda[0];

async function main() {
  console.log('Downloading function info from AWS');
  const result = await execa('aws', ['lambda', 'get-function', '--function-name', functionName]);
  const functionInfo = JSON.parse(result.stdout);
  const revisionId = functionInfo.Configuration.RevisionId;

  console.log('Downloading function contents from AWS');
  await execa('ask', ['lambda', 'download', '--function', functionName], { cwd: functionRoot, stdio: 'inherit' });

  console.log('Updating skill\'s revisionId');
  askConfig.deploy_settings.default.resources.lambda[0].revisionId = revisionId;
  writeFileSync(askConfigPath, JSON.stringify(askConfig, null, 2));

  console.log('Done');
}

main();

Related

How to correctly run ECS commands via AWS CDK in a pipeline and debug?

We have ECS commands running in our pipeline to deploy a Drupal 8 website. We have added about 5 commands like the one below to our CDK code, and we execute them using Lambda.
'use strict';
const AWS = require('aws-sdk');
const ecs = new AWS.ECS();
const codeDeploy = new AWS.CodeDeploy({ apiVersion: '2014-10-06' });
exports.handler = async (event, context, callback) => {
  console.log('Drush execution lambda invoked');
  const deploymentId = event.DeploymentId;
  const lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;
  try {
    let validationTestResult = 'Failed';
    const clusterName = process.env.CLUSTER_NAME;
    const containerName = process.env.CONTAINER_NAME;
    const taskListParams = {
      cluster: clusterName,
      desiredStatus: 'RUNNING',
    };
    const taskList = await ecs.listTasks(taskListParams).promise();
    const activeTask = taskList.taskArns[0];
    console.log('Active task: ' + activeTask);
    .......................
    const cimParams = {
      command: 'drush cim -y',
      interactive: true,
      task: activeTask,
      cluster: clusterName,
      container: containerName
    };
    await ecs.executeCommand(cimParams, function (err, data) {
      if (err) {
        console.log(err, err.stack, "FAILED on drush cim -y");
      } else {
        validationTestResult = 'Succeeded';
        console.log(data, "Succeeded on drush cim -y");
      }
    }).promise();
    .............................
    // Pass CodeDeploy the prepared validation test results.
    await codeDeploy.putLifecycleEventHookExecutionStatus({
      deploymentId: deploymentId,
      lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
      status: validationTestResult // status can be 'Succeeded' or 'Failed'
    }).promise();
  } catch (e) {
    console.log(e);
    console.log('Drush execution lambda failed');
    await codeDeploy.putLifecycleEventHookExecutionStatus({
      deploymentId: deploymentId,
      lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
      status: 'Failed' // status can be 'Succeeded' or 'Failed'
    }).promise();
  }
};
The problem is that when these commands are executed they report success, but we still can't see the changes on the website. The commands show no errors and don't fail the pipeline, yet the changes only apply to the site if we run the pipeline a second time.
We are not sure if this is a Drupal/Drush or an ECS issue at this stage.
When we realized that the changes were not applying the first time, we SSHed into the ECS container and manually executed drush cim -y, and it applied the changes to the site. So this is probably not an issue with the command itself, but with the ECS execution?
Can anyone see if we are doing anything wrong here? Is there a known issue with CDK or ECS commands like this?
Most importantly, if someone can tell us how to debug ECS commands correctly, that would be great, because the current level of logging is not enough to find where the problem is.
Thanks in advance for taking the time to read this question.
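One thing worth checking, sketched below: the ECS ExecuteCommand API opens an SSM session and returns as soon as the session is started; the response carries a session id, not the command's result. So the Lambda can report 'Succeeded' before drush cim -y has actually finished, which would match the behavior described. This is a hedged illustration, not a confirmed diagnosis; the helper name is hypothetical and ecs is assumed to be an AWS.ECS client from aws-sdk v2.

```javascript
// Hedged sketch: executeCommand resolves once the session is OPENED, not when
// the command inside the container finishes. "Success" at this point only
// means "session started" -- the command's exit status must be checked
// separately (e.g. via the SSM session output or container/CloudWatch logs).
async function startDrushCommand(ecs, params) {
  const { session } = await ecs.executeCommand(params).promise();
  // Only a session id comes back here; drush may still be running.
  return session.sessionId;
}
```

If this is the cause, the fix would be to wait on the command's completion (or poll for its effect) before reporting the lifecycle hook status to CodeDeploy.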

Get generated API key from AWS AppSync API created with CDK

I'm trying to access data from my stack where I'm creating an AppSync API. I want to be able to use the generated stack's url and apiKey, but I'm running into issues with them being encoded/tokenized.
In my stack I'm setting some fields to the outputs of the deployed stack:
this.ApiEndpoint = graphAPI.url;
this.Authorization = graphAPI.graphqlApi.apiKey;
When trying to access these properties I get something like ${Token[TOKEN.209]} and not the values.
If I'm trying to resolve the token like so: this.resolve(graphAPI.graphqlApi.apiKey) I instead get { 'Fn::GetAtt': [ 'AppSyncAPIApiDefaultApiKey537321373E', 'ApiKey' ] }.
But I would like to retrieve the key itself as a string, like da2-10lksdkxn4slcrahnf4ka5zpeemq5i.
How would I go about actually extracting the string values for these properties?
The actual values of such tokens are available only at deploy-time. Before then you can safely pass these token properties between constructs in your CDK code, but they are opaque placeholders until deployed. Depending on your use case, one of these options can help retrieve the deploy-time values:
If you define a CloudFormation Output for a variable, CDK will (apart from creating it in CloudFormation) print its value to the console after cdk deploy, and optionally write it to a JSON file you pass with the --outputs-file flag.
// AppsyncStack.ts
new cdk.CfnOutput(this, 'ApiKey', {
  value: this.api.apiKey ?? 'UNDEFINED',
  exportName: 'api-key',
});
// at deploy-time, if you use a flag: --outputs-file cdk.outputs.json
{
  "AppsyncStack": {
    "ApiKey": "da2-ou5z5di6kjcophixxxxxxxxxx",
    "GraphQlUrl": "https://xxxxxxxxxxxxxxxxx.appsync-api.us-east-1.amazonaws.com/graphql"
  }
}
Alternatively, you can write a script to fetch the data post-deploy using the listGraphqlApis and listApiKeys commands from the appsync JS SDK client. You can run the script locally or, for advanced use cases, wrap the script in a CDK Custom Resource construct for deploy-time integration.
Thanks to @fedonev I was able to extract the API key and url like so:
const { AppSyncClient, ListGraphqlApisCommand, ListApiKeysCommand } = require('@aws-sdk/client-appsync');
const { flatMap } = require('lodash');

const client = new AppSyncClient({ region: "eu-north-1" });
const command = new ListGraphqlApisCommand({ maxResults: 1 });
const res = await client.send(command);
if (res.graphqlApis) {
  const apiKeysCommand = new ListApiKeysCommand({
    apiId: res.graphqlApis[0].apiId,
  });
  const apiKeyResponse = await client.send(apiKeysCommand);
  const urls = flatMap(res.graphqlApis[0].uris);
  if (apiKeyResponse.apiKeys && res.graphqlApis[0].uris) {
    sendSlackMessage(urls[1], apiKeyResponse.apiKeys[0].id || "");
  }
}

gcloud codebuild sdk, trigger build from cloud function

Trying to use the @google-cloud/cloudbuild client library in a Cloud Function to trigger a manual build against a project, but no luck. My function runs async and does not throw an error.
Function:
const { CloudBuildClient } = require('@google-cloud/cloudbuild');

exports.index = async (req, res) => {
  const json = // json that contains build steps using docker, and project id
  // Creates a client
  const cb = new CloudBuildClient();
  try {
    const result = await cb.createBuild({
      projectId: "myproject",
      build: JSON.parse(json)
    });
    return res.status(200).json(result);
  } catch (error) {
    return res.status(400).json(error);
  }
};
I am assuming from the documentation that my default service account is implicit and credentials are sourced properly, or it would throw an error.
Advice appreciated.
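One possible reason the function "runs async and does not throw" is that createBuild returns a long-running operation: its promise resolves once the build has been created, not when it has finished, so build failures never surface in the function's response. A hedged sketch of waiting for completion (the helper name is hypothetical; client is assumed to be a CloudBuildClient from @google-cloud/cloudbuild):

```javascript
// Trigger a build and wait for the long-running operation to finish.
// With @google-cloud/cloudbuild, createBuild() resolves to [operation], and
// operation.promise() resolves only once the build itself has completed.
async function runBuildToCompletion(client, projectId, build) {
  const [operation] = await client.createBuild({ projectId, build });
  const [finishedBuild] = await operation.promise();
  return finishedBuild; // carries the final build status
}
```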

How to add an environment variable to AWS Lambda without removing what's there?

Is there a way to add a new environment variable to an AWS Lambda function without removing the ones already there?
(With the command line tools, that is.)
Using the Lambda console you can just append new environment variables.
Doing it with the CLI is harder: aws lambda update-function-configuration lets you selectively update aspects of a Lambda function, but it has no helper to append environment variables. You can use aws lambda get-function-configuration to get the current list of environment variables, and combine the two with some bash/PowerShell scripting (or the matching SDK functions in a language of your choice).
For example:
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

const FunctionName = 'FUNCTION_NAME';
const AppendVars = { NEW_KEY: 'new value' }; // placeholder: the variables to append

async function appendVars() {
  const { Environment: { Variables } } = await lambda.getFunctionConfiguration({ FunctionName }).promise();
  await lambda.updateFunctionConfiguration({
    FunctionName,
    Environment: { Variables: { ...Variables, ...AppendVars } },
  }).promise();
}

appendVars();
I successfully used this as a bash script. The name of the Lambda function comes in as a parameter. My goal was simply to change a variable in order to restart the function.
LAMBDA=$1
# fetch the current variables, append a throwaway "kick" variable
# (getrandom is a local helper that prints a random string),
# then push the merged set back
CURRENTVARIABLES=$(aws lambda get-function-configuration --function-name $LAMBDA | jq '.Environment.Variables')
NEWVARIABLES=$(echo $CURRENTVARIABLES | jq '. += {"kick":"'$(getrandom 8)'"}')
COMMAND="aws lambda update-function-configuration --function-name $LAMBDA --environment '{\"Variables\":$NEWVARIABLES}'"
eval $COMMAND

AWS SDK connection - How is this working?? (Beginner)

I am working on my AWS cert and I'm trying to figure out how the following bit of js code works:
var AWS = require('aws-sdk');
var uuid = require('node-uuid');
// Create an S3 client
var s3 = new AWS.S3();
// Create a bucket and upload something into it
var bucketName = 'node-sdk-sample-' + uuid.v4();
var keyName = 'hello_world.txt';
s3.createBucket({Bucket: bucketName}, function() {
  var params = {Bucket: bucketName, Key: keyName, Body: 'Hello'};
  s3.putObject(params, function(err, data) {
    if (err)
      console.log(err);
    else
      console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
  });
});
This code successfully uploads a txt file containing the word "Hello". I do not understand how it can identify MY AWS account. It does! But how? It somehow determines that I want a new bucket inside MY account, even though this code was taken directly from the AWS docs.
As per Class: AWS.CredentialProviderChain, the AWS SDK for JavaScript looks for credentials in the following locations:
AWS.CredentialProviderChain.defaultProviders = [
  function () { return new AWS.EnvironmentCredentials('AWS'); },
  function () { return new AWS.EnvironmentCredentials('AMAZON'); },
  function () { return new AWS.SharedIniFileCredentials(); },
  function () {
    // if AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set
    return new AWS.ECSCredentials();
    // else
    return new AWS.EC2MetadataCredentials();
  }
]
Environment Variables (useful for testing, or when running code on a local computer)
Local credentials file (useful for running code on a local computer)
ECS credentials (useful when running code in Elastic Container Service)
Amazon EC2 Metadata (useful when running code on an Amazon EC2 instance)
It is highly recommended to never store credentials within an application. If the code is running on an Amazon EC2 instance and a role has been assigned to the instance, the SDK will automatically retrieve credentials from the instance metadata.
The next best method is to store credentials in the ~/.aws/credentials file.
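The shared credentials file uses a simple INI layout; a minimal sketch (the key values below are AWS's documented placeholder examples, not real credentials):

```ini
; ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

The SDK's SharedIniFileCredentials provider picks up the [default] profile automatically; other profiles can be selected with the AWS_PROFILE environment variable.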