I have created a REST API backend with Express.js and used Claudia.js to deploy my endpoints as Lambda functions, and everything went smoothly. The endpoints work as expected and return the correct information. My only issue is that when I go to my AWS console I do not see the Lambda functions that were created, so I am not sure where these endpoints are being hosted. Has anyone else had this issue when working with Claudia.js?
In your claudia.json file you should see something like:
"lambda": {
"role": "example-role",
"name": "example-test",
"region": "us-west-2"
},
us-west-2 being Oregon. Switch the region selector in the Lambda console to the region listed in claudia.json and your functions should appear; Claudia creates them there, not in whatever region your console happens to be showing.
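If you want to confirm from code rather than the console, a minimal boto3 sketch (region taken from the claudia.json above) can list what Claudia actually created:

import boto3

# Use the region recorded in claudia.json, not your console's current region.
client = boto3.client("lambda", region_name="us-west-2")

# The function Claudia created ("example-test" above) should be listed here.
for fn in client.list_functions()["Functions"]:
    print(fn["FunctionName"], fn["FunctionArn"])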
I am trying to create a CloudFormation stack that uses Amazon EventBridge (AWS Events) to trigger an API call on a schedule. Most of the stack is working; however, the AWS::Events::Connection resource is failing to create and I am not sure why.
This is the CF snippet that is failing. (Note: the API doesn't have any authentication yet; however, CloudFormation requires the AuthParameters property.)
"CronServerApiConnection": {
"Type": "AWS::Events::Connection",
"Properties": {
"Name": "api-connection",
"AuthorizationType": "API_KEY",
"AuthParameters": {
"ApiKeyAuthParameters": {
"ApiKeyName": "foo",
"ApiKeyValue": "bar"
}
}
}
},
In the CloudFormation console this fails to create with the following error:
Resource handler returned message: "Error occurred during operation 'AWS::Events::Connection'." (RequestToken: xxxxxxxxxxxxxxxxx, HandlerErrorCode: GeneralServiceException)
I can't for the life of me figure this one out. From what I can see, my CF snippet matches exactly what AWS specifies in their docs here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-connection.html
I ran into this issue myself a few weeks ago, and while looking for an answer I found this question unresolved, so I thought I would share the answer. The Events API is not descriptive at all with any of its errors; in my case the issues were permissions related. While it is not clear in the documentation, AWS::Events::Connection not only needs permissions for the Events API but also for the Secrets Manager API, since it creates some secrets for you under the hood. I solved this by adding full API permissions to the role creating the stack, but of course I scoped the permissions by resource to avoid security issues, something like:
effects: "Allow"
actions: [
"events:*",
"secretsmanager:*"
]
resources: [
"arn:aws:secretsmanager:<your region>:<your-account-id>:secret:events!connection/<yoursecretnameprefix>-*"
]
I will leave the addition of the Events resource to you, but essentially it is the same: just scope it by the ARN of your resource. The above is just an example; please replace the placeholders with the correct values.
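Once the stack does create, one way to see the secret EventBridge manages under the hood is to describe the connection; a minimal boto3 sketch, assuming the connection name from the template above:

import boto3

events = boto3.client("events")

# DescribeConnection returns the ARN of the Secrets Manager secret that
# EventBridge created behind the scenes -- the resource the scoped
# secretsmanager permissions above must cover.
resp = events.describe_connection(Name="api-connection")
print(resp["ConnectionState"])
print(resp["SecretArn"])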
I have a simple proof of concept which is working in the GUI. I have an S3 bucket, and a Lambda function, and when the S3 bucket's contents are altered, the Lambda logs the changes to CloudWatch.
I want to build on this and automate as much of the deployment as possible. I have written a shell script that zips up the Lambda's source and uploads it, creates a new version, and grabs the version's ARN to put into the S3 bucket's Event Notification so that the bucket uses the new version.
This works fine in the GUI. I paste the ARN in and the bucket calls whatever version I've pasted in.
However, when I try to script that last step, it always fails with:
An error occurred (InvalidArgument) when calling the PutBucketNotificationConfiguration operation: Unable to validate the following destination configurations
There is never any content listed after 'the following destination configurations'.
The command which is failing is:
aws s3api put-bucket-notification-configuration --bucket my_bucket_name --notification-configuration file://config.json
The config file is nearly identical to the output of:
aws s3api get-bucket-notification-configuration --bucket my_bucket_name
Except that the version number at the end of the ARN has changed. Using the output of that command as the input for the previous command is successful.
That looks like:
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "my_S3_event_notificaton",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:666666666666:function:my_lambda_name:4",
      "Events": [
        "s3:ObjectCreated:*",
        "s3:ObjectRemoved:*"
      ],
      "Filter": {
        "Key": {
          "FilterRules": [
            {
              "Name": "Prefix",
              "Value": ""
            }
          ]
        }
      }
    }
  ]
}
Changing the '4' at the end to '7' fails.
TL;DR Workaround: omit the :7 qualifier on the notification configuration, use the unqualified (latest) lambda version.
Error replicated
I replicated the OP's error using the CLI and using boto3 directly. put-bucket-notification-configuration fails with qualified ARNs (e.g. :4) unless the given version has previously been added as a target manually in the S3 Console. Odd. Chrome network traces of the Console's request-response traffic did not clarify why.
import boto3

lambda_client = boto3.client("lambda")
s3_client = boto3.client("s3")
func = "arn:aws:lambda:us-east-1:666666666666:function:my_lambda_name"  # function ARN
bucket = "my_bucket_name"

# List the lambda versions
res_versions = lambda_client.list_versions_by_function(FunctionName=func)
versions = [v["Version"] for v in res_versions["Versions"]]
# -> ['$LATEST', '1', '2', '3']

# Put the configuration with a qualified ARN (func + ":2").
# Fails unless the version has previously been manually set as the target in
# the Console; an unqualified ARN always succeeds as expected.
config = {"LambdaFunctionConfigurations": [{"LambdaFunctionArn": func + ":2", "Events": ["s3:ObjectCreated:*"]}]}
res_put = s3_client.put_bucket_notification_configuration(Bucket=bucket, NotificationConfiguration=config)
Workaround: use the unqualified lambda version
You are manually updating the S3 notification to point at the latest Lambda version. This step is not strictly necessary, and life is easier if you omit the :4 version qualifier on your ARN: "When you invoke a function using an unqualified ARN, Lambda implicitly invokes $LATEST." The notification configuration is then set once and does not change no matter how many times you update your Lambda.
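Concretely, reusing the s3_client from the snippet above (bucket and function names taken from the question), an unqualified configuration should stick on the first try:

# Unqualified ARN: no ":4" suffix, so S3 invokes $LATEST at event time.
config = {
    "LambdaFunctionConfigurations": [{
        "Id": "my_S3_event_notificaton",
        "LambdaFunctionArn": "arn:aws:lambda:us-east-1:666666666666:function:my_lambda_name",
        "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
    }]
}
s3_client.put_bucket_notification_configuration(
    Bucket="my_bucket_name", NotificationConfiguration=config
)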
AWS has invented the wheel
As @Maurice pointed out, you should consider AWS's mature infrastructure-as-code tools like CloudFormation and the Cloud Development Kit, whose job it is to "automate as much of the deployment as possible".
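For instance, a minimal CDK v2 sketch in Python (the construct names and the lambda_src/ asset directory are illustrative assumptions, not from the question) wires the bucket to the function and handles versioning and permissions for you:

from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda, aws_s3_notifications as s3n
from constructs import Construct

class S3LambdaDemoStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "Bucket")

        fn = _lambda.Function(
            self, "LogChanges",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda_src"),  # hypothetical source dir
        )

        # CDK writes the notification configuration and the invoke permission
        # itself, so there is no manual put-bucket-notification step to script.
        bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.LambdaDestination(fn))
        bucket.add_event_notification(s3.EventType.OBJECT_REMOVED, s3n.LambdaDestination(fn))

app = App()
S3LambdaDemoStack(app, "S3LambdaDemo")
app.synth()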
I have performed aws configure and ask configure after installing ask-cli.
While setting up a new skill using ask new, I selected NodeJS and AWS with CloudFormation.
Trying to deploy the skill using ask deploy, I get: [Error]: CliError: The CloudFormation deploy failed for Alexa region "default": Access Denied.
I tried setting the region in ~/.aws/config and in ~/.aws/credentials, but I still run into the same error.
What should be done to fix the issue?
(Screenshots from the original post: skill creation, and the error deploying the skill.)
I've been able to deploy.
After running aws configure, I called ask new, and I think the solution was to select AWS Lambda rather than AWS With CloudFormation.
I wanted to use an existing skill that I had previously created in the web UI, so I created two folders: lambda and skill-package. Then I used ask init, saying I don't want to use AWS CloudFormation to deploy.
Next, I added my region in ask-resources.json, under skillInfrastructure:
{
  "askcliResourcesVersion": "2020-03-31",
  "profiles": {
    "default": {
      "skillMetadata": {
        "src": "./skill-package"
      },
      "code": {
        "default": {
          "src": "./lambda"
        }
      },
      "skillInfrastructure": {
        "type": "@ask-cli/lambda-deployer",
        "userConfig": {
          "runtime": "nodejs12.x",
          "handler": "index.js",
          "awsRegion": "eu-west-1"
        }
      }
    }
  }
}
And I finished with ask deploy, which worked!
We have developed an AWS serverless Lambda application using .NET Core to perform operations on EC2 instances (say, starting or stopping an EC2 instance) and integrated it with AWS API Gateway.
serverless.template in the .NET Core application:
"StartInstanceById" : {
"Type" : "AWS::Serverless::Function",
"Properties": {
"Handler": "EC2_Monitoring_Serverless::EC2_Monitoring_Serverless.Functions::StartInstanceById",
"Runtime": "dotnetcore2.1",
"CodeUri": "",
"MemorySize": 256,
"Timeout": 30,
"Role": "arn:aws:iam::2808xxxx1013:role/lamda_start_stop",
"Policies": [ "AWSLambdaBasicExecutionRole" ],
"Events": {
"PutResource": {
"Type": "Api",
"Properties": {
"Path": "/instances",
"Method": "Get"
}
}
}
}
}
The above Lambda function works fine for starting the EC2 instance when I invoke the API Gateway URL.
For calling these APIs, we have created an Angular 6 application and provided authentication using AWS Cognito User Pools.
So the Cognito user logs into the website and gets all the EC2 information.
If the user wants to stop or start an EC2 instance, they click the relevant button, which invokes the corresponding API Gateway URL of the Lambda function, and it works fine.
Now the question is: who performed that action? After much research on Stack Overflow and the AWS community forums about how to find out who started or stopped EC2 instances, I found that AWS CloudTrail logs the information when a user starts or stops an instance.
So I created a trail, and I can see the logs in S3 buckets. But in every log I opened, I saw the role "arn:aws:iam::2808xxxx1013:role/lamda_start_stop" captured. I know this is because of the Lambda function, but I want to know who really stopped the instance.
Please advise how to capture user details!
The Lambda execution role shows up in CloudTrail because it is the identity that initiated the stop of the EC2 instance: the role is assumed, so it appears instead of the actual user.
To record your actual user, you need to add logging in your Lambda, which writes to CloudWatch Logs. You can include the actual user, or any other custom information, in those logs.
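The original functions are .NET, but the idea translates directly. A minimal Python sketch, assuming a Cognito User Pools authorizer on the API Gateway REST API (the claim names follow Cognito's standard ID-token layout):

import json

def handler(event, context):
    # With a Cognito User Pools authorizer, API Gateway passes the caller's
    # token claims to the Lambda in the request context.
    claims = (event.get("requestContext", {})
                   .get("authorizer", {})
                   .get("claims", {}))
    user = claims.get("cognito:username") or claims.get("email", "unknown")
    # This line lands in CloudWatch Logs and ties the EC2 action to a person.
    print(json.dumps({"action": "StartInstanceById", "invokedBy": user}))
    # ... start/stop the instance as before ...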
I'm trying to forward the EC2 Launch logs to CloudWatch from my Windows 2016-based EC2 instance.
For some reason I can't see the log groups for this specific category.
Here's an example of my AWS.EC2.Windows.CloudWatch.json:
{
  "IsEnabled": true,
  "EngineConfiguration": {
    "PollInterval": "00:00:15",
    "Components": [
      {
        "Id": "Ec2Config",
        "FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogDirectoryPath": "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Log",
          "TimestampFormat": "yyyy-MM-ddTHH:mm:ss.fffZ:",
          "Encoding": "UTF-8",
          "Filter": "UserdataExecution.log",
          "CultureName": "en-US",
          "TimeZoneKind": "UTC"
        }
      },
      {
        "Id": "EC2ConfigSink",
        "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "Region": "eu-west-1",
          "LogGroup": "/my-customer/deployment/ec2config-userdata",
          "LogStream": "ec2config-userdata"
        }
      }
      ... (I have a few more definitions in this file) ...
    ],
    "Flows": {
      "Flows": [
        "Ec2Config,EC2ConfigSink"
        ... (other references here)
      ]
    }
  }
}
The CloudWatch agent starts and doesn't report any errors, and I can see data from other sources (some application log files; I skipped those definitions intentionally).
That means the CloudWatch config file is correct and is applied / placed in the correct directory.
Logs are coming through with no problem, except for the EC2 Launch logs.
I'm wondering if anybody has run into this problem? It works perfectly on Windows 2012-based images.
Apparently, the SSM Agent starts after EC2 Launch executes the UserData script. I can see this from the SSM Agent's log file modification timestamps.
Therefore, there's no log forwarding happening during the EC2 Launch.
When the SSM Agent starts and loads the CloudWatch plugin, the log files are already filled with entries and never change (the wallpaper log is the only exception), so they never end up in the CloudWatch console.
There have been a lot of changes on the AWS side: they switched to .NET Core, removed the EC2Config service, and moved the log-forwarding logic to the SSM Agent (CloudWatch plugin) for Windows 2016-based AMIs.
It looks like the behavior has changed quite significantly too, so there's no way to get the EC2 Launch logs into CloudWatch using the AWS toolset alone.
Basically we have to stick to our application logs only, which is very unfortunate: we rely on the EC2 Launch logs to see whether the instance started and successfully executed its user data.
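If shipping the file yourself is an acceptable fallback, one escape hatch outside the AWS toolset is to push the launch log to CloudWatch Logs at the very end of user data. A minimal boto3 sketch; the log group and stream names here are illustrative, not from the config above:

import time
import boto3

LOG_FILE = r"C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log"
GROUP = "/my-customer/deployment/ec2config-userdata"  # illustrative names
STREAM = "launch-log"

logs = boto3.client("logs", region_name="eu-west-1")
try:
    logs.create_log_group(logGroupName=GROUP)
except logs.exceptions.ResourceAlreadyExistsException:
    pass
try:
    logs.create_log_stream(logGroupName=GROUP, logStreamName=STREAM)
except logs.exceptions.ResourceAlreadyExistsException:
    pass

# Read the finished launch log and ship it in one batch; timestamps must be
# non-decreasing, so a single "now" for every line is acceptable here.
now_ms = int(time.time() * 1000)
with open(LOG_FILE, encoding="utf-8") as f:
    events = [{"timestamp": now_ms, "message": line.rstrip()} for line in f if line.strip()]
logs.put_log_events(logGroupName=GROUP, logStreamName=STREAM, logEvents=events)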