The create_profile_job() method is not accepting the new parameters

I am trying to use a Lambda function (written in Python) to create a series of profile jobs in DataBrew. AWS recently added a new parameter to this method ("Configuration"), which I have added to my code. However, when I call the function, I get the following error message: 'Unknown parameter in input: "Configuration", must be one of: DatasetName, EncryptionKeyArn, EncryptionMode, Name, LogSubscription, MaxCapacity, MaxRetries, OutputLocation, RoleArn, Tags, Timeout, JobSample.' This does not match the parameter list in the boto3 documentation, which was recently updated to align with the new features added to DataBrew on 07/23/21. Has anyone else had this issue? If so, is there a timeline for this bug to be fixed?

It turns out that the version of boto3 bundled with the Lambda runtime by default is not the latest version. Hence, in order to use all the parameters for this method, you have to add the latest version of boto3 (and its dependencies) as a Lambda layer.
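A quick way to confirm which boto3 version your function actually sees is to log it from the handler, for example:

import boto3

def handler(event, context):
    # If this prints a version older than the one the DataBrew docs
    # require for "Configuration", the bundled SDK is the problem and
    # a newer boto3 needs to be attached as a Lambda layer.
    print("boto3 version:", boto3.__version__)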

How to run a Lambda created in CDK on a regular basis?

As the title says - I've created a Lambda in the Python CDK and I'd like to know how to trigger it on a regular basis (e.g. once per day).
I'm sure it's possible, but I'm new to the CDK and I'm struggling to find my way around the documentation. From what I can tell it will use some sort of event trigger - but I'm not sure how to use it.
Can anyone help?
Sure - it's fairly simple once you get the hang of it.
First, make sure you're importing the right libraries:
from aws_cdk import core, aws_events, aws_events_targets
Then you'll need to make an instance of the Schedule class, using core.Duration to set the interval. Let's say 1 day, for example:
lambda_schedule = aws_events.Schedule.rate(core.Duration.days(1))
Then you want to create the event target - this is the actual reference to the Lambda you created in your CDK earlier:
event_lambda_target = aws_events_targets.LambdaFunction(handler=lambda_defined_in_cdk_here)
Lastly you bind it all together in an aws_events.Rule like so:
lambda_cw_event = aws_events.Rule(
    self,
    "Rule_ID_Here",
    description="The once per day CloudWatch event trigger for the Lambda",
    enabled=True,
    schedule=lambda_schedule,
    targets=[event_lambda_target],
)
Hope that helps!
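Putting those pieces together, here is a minimal sketch of a complete stack (CDK v1; the construct IDs, runtime, and asset path are illustrative):

from aws_cdk import core, aws_lambda, aws_events, aws_events_targets

class DailyLambdaStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # The Lambda to be triggered (assumes ./lambda contains index.py
        # with a handler() function).
        fn = aws_lambda.Function(
            self, "MyFunction",
            runtime=aws_lambda.Runtime.PYTHON_3_8,
            handler="index.handler",
            code=aws_lambda.Code.from_asset("lambda"),
        )

        # Fire once per day via a CloudWatch Events rule.
        aws_events.Rule(
            self, "DailyTrigger",
            schedule=aws_events.Schedule.rate(core.Duration.days(1)),
            targets=[aws_events_targets.LambdaFunction(fn)],
        )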
The question is for Python, but I thought it might be useful to post a JavaScript equivalent:
const { Duration } = require("aws-cdk-lib");
const aws_events = require("aws-cdk-lib/aws-events");
const aws_events_targets = require("aws-cdk-lib/aws-events-targets");

const MyLambdaFunction = <...CDK code for Lambda function here...>

new aws_events.Rule(this, "my-rule-identifier", {
  schedule: aws_events.Schedule.rate(Duration.days(1)),
  targets: [new aws_events_targets.LambdaFunction(MyLambdaFunction)],
});
Note: The above is for version 2 of the CDK (aws-cdk-lib); version 1 imported from per-service @aws-cdk/* packages and would need a few tweaks.

Deploying lambda code inside a folder with an autogenerated name

I am trying to set up a Lambda in pulumi-aws, but when my function code is deployed it ends up wrapped in a folder with the same name as the generated Lambda function name.
I would prefer not to have this as it's unnecessary, but more than that, it means I can't work out what my handler should be, since the folder name is generated.
(I realise I can probably use a reference to get this generated name, but I don't like the added complexity for no reason. I don't see a good reason for having this folder inside the Lambda.)
E.g. my function code is a single index.js file with one named export, handler. I would expect my Lambda handler to be index.handler.
(Note: I am using TypeScript for my Pulumi code, but the Lambda itself is in JavaScript.)
I have tried a couple of options for the code property:
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
  code: new pulumi.asset.AssetArchive({
    "index.js": new pulumi.asset.FileAsset("./lambdas/add-timesheet-entry/index.js"),
  }),
  // ...handler, runtime, role, etc.
});
In this example the zip file was simply an index.js with no folder information in the zip.
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
  code: new pulumi.asset.FileArchive("lambdatest.zip"),
  // ...handler, runtime, role, etc.
});
AWS Lambda code is always displayed in a "folder" named after the function name; a Lambda created in the web console shows the same layout. [screenshot omitted]
It doesn't affect the naming of the handler though: index.handler is just fine.
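For reference, a minimal sketch of the same setup in Pulumi's Python SDK (the question uses TypeScript, but the shape is identical; the resource names, runtime, and role are illustrative):

import pulumi
import pulumi_aws as aws

# The archive places index.js at its root, so the handler is just
# "index.handler" regardless of the folder name shown in the console.
fn = aws.lambda_.Function(
    "add-timesheet-entry",
    code=pulumi.AssetArchive({
        "index.js": pulumi.FileAsset("./lambdas/add-timesheet-entry/index.js"),
    }),
    handler="index.handler",
    runtime="nodejs18.x",
    role=my_lambda_role.arn,  # assumed to be an IAM role defined elsewhere
)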

AWS parameter store access in lambda function

I'm trying to access the parameter store in an AWS lambda function. This is my code, pursuant to the documentation here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SSM.html
var ssm = new AWS.SSM({apiVersion: '2014-11-06'});
var ssm_params1 = {
  Name: 'XXXX', /* required */
  WithDecryption: true
};
ssm.getParameter(ssm_params1, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else clientId = data.Parameter.Value; // note: the value lives at data.Parameter.Value
});
Upon execution, I get the error:
"TypeError: ssm.getParameter is not a function"
Did Amazon change this without updating the docs? Did this function move to another type of object?
Please check and try the latest version of the SDK. It is not the case that Amazon has ditched the getParameter method in favor of only getParameters. In fact, getParameter, together with getParametersByPath, is a newly added method; an old version of the SDK will not resolve these methods.
The answer here is that Amazon must have ditched the getParameter() method in favor of only maintaining one method, getParameters(). But they didn't update the documentation. That method seems to work just fine.
I have tried both the getParameter and getParameters functions, and both of them work fine.
It could be that you are getting an error because you are passing apiVersion: '2014-11-06' to the SSM constructor.
Do not pass any apiVersion parameter to the constructor; it should work fine.
There seems to be a bug in AWS where certain environments do not include the correct SDK version. This can be confirmed by logging the SDK version in use:
console.log("AWS-SDK Version: " + require('aws-sdk/package.json').version);
Including the required aws-sdk package explicitly solved the problem for us.
Try adding the following to the dependencies in package.json:
"aws-sdk": "^2.339.0"

How to increase deploy timeout limit at AWS Opsworks?

I would like to increase the deploy timeout, in a stack layer that hosts many apps (AWS OpsWorks).
Currently I get the following error:
Error
[2014-05-05T22:27:51+00:00] ERROR: Running exception handlers
[2014-05-05T22:27:51+00:00] ERROR: Exception handlers complete
[2014-05-05T22:27:51+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache/chef-stacktrace.out
[2014-05-05T22:27:51+00:00] ERROR: deploy[/srv/www/lakers_test] (opsworks_delayed_job::deploy line 65) had an error: Mixlib::ShellOut::CommandTimeout: Command timed out after 600s:
Thanks in advance.
First of all, as mentioned in a ticket reporting a similar issue, the OpsWorks team recommends trying to speed up the deploy command first (there's always room for optimization).
If that doesn't work, we can go down the rabbit hole: the deploy recipe ends up calling OpsWorks::ShellOut.shellout, which in turn calls Mixlib::ShellOut.new, which happens to have a timeout option that you can pass in the initializer!
Now you can use an OpsWorks custom cookbook to override the initial method and pass the corresponding timeout option. OpsWorks merges the contents of its base cookbooks with the contents of your custom cookbook, so you only need to add and edit one single file in your custom cookbook: opsworks_commons/libraries/shellout.rb:
module OpsWorks
  module ShellOut
    extend self

    # This would be your new default timeout, in seconds.
    DEFAULT_OPTIONS = { timeout: 900 }

    def shellout(command, options = {})
      cmd = Mixlib::ShellOut.new(command, DEFAULT_OPTIONS.merge(options))
      cmd.run_command
      cmd.error!
      [cmd.stderr, cmd.stdout].join("\n")
    end
  end
end
Notice how the only additions are the DEFAULT_OPTIONS constant and merging those options into the Mixlib::ShellOut.new call.
An improvement to this method would be to make the timeout configurable via a Chef attribute, which you could in turn update via your custom JSON in the OpsWorks interface. That means passing the timeout attribute at the call sites of OpsWorks::ShellOut.shellout rather than in the method definition - but this depends on how the shellout method actually gets called...

Java code to get currently running beanstalk version label?

From within a Java application running on Beanstalk, how can I get the Beanstalk version label that is currently running?
[Multiple Edits later...]
After a few back-and-forth comments with Sony (see below), I wrote the following code, which works for me now. If you put meaningful comments in your version label when you deploy, then this will tell you what you're running. We have a continuous build environment, so we can get our build system to supply a label that leads back to the check-in comments for the related code. Put this all together, and your server can tell you exactly what code it's running relative to your source-code check-ins. Really useful for us.
OK, now I'm actually answering my own question here, but with invaluable help from Sony. It seems a shame you can't remove the hard-coded values and query for those at runtime.
String getMyVersionLabel() throws IOException {
    Region region = Region.getRegion(Regions.fromName("us-west-2")); // Need to hard-code this
    AWSCredentialsProvider credentialsProvider = new ClasspathPropertiesFileCredentialsProvider();
    AWSElasticBeanstalkClient beanstalk = region.createClient(AWSElasticBeanstalkClient.class, credentialsProvider, null);
    String environmentName = System.getProperty("PARAM2", "DefaultEnvironmentName"); // Need to hard-code this too
    DescribeEnvironmentsResult environments = beanstalk.describeEnvironments();
    for (EnvironmentDescription ed : environments.getEnvironments()) {
        if (ed.getEnvironmentName().equals(environmentName)) {
            return "Running version " + ed.getVersionLabel() + " created on " + ed.getDateCreated();
        }
    }
    return null;
}
You can use the AWS Java SDK and call the Elastic Beanstalk API directly.
See the details of the describeApplicationVersions API for how to get all the versions in an application. Be sure to specify your region as well (otherwise you will get the versions from the default AWS region).
Now, if you need to know the version currently deployed, you additionally need to call DescribeEnvironmentsRequest. This has the versionLabel, which tells you the version currently deployed.
Here again, if you need to know the environment name in the code, you need to pass it as a param to the Beanstalk configuration in the AWS console and access it as a PARAM.
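For comparison, the same lookup from Python with boto3 looks roughly like this (the region and environment name are placeholders):

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-west-2")
envs = eb.describe_environments(EnvironmentNames=["my-env-name"])["Environments"]
for env in envs:
    # VersionLabel is the label of the application version currently deployed.
    print("Running version", env["VersionLabel"], "created on", env["DateCreated"])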