AWS CDK & PowerShell Lambda - amazon-web-services

I have a PowerShell Lambda that I would like to deploy via the AWS CDK; however, I'm having issues getting it to run.
Deploying the PowerShell script via a manual Publish-AWSPowerShellLambda works:
Publish-AWSPowerShellLambda -ScriptPath .\PowershellLambda.ps1 -Name PowershellLambda
However, the same script deployed with the CDK doesn't log to CloudWatch Logs, even though it has permission:
import events = require('@aws-cdk/aws-events');
import targets = require('@aws-cdk/aws-events-targets');
import lambda = require('@aws-cdk/aws-lambda');
import cdk = require('@aws-cdk/core');

export class LambdaCronStack extends cdk.Stack {
  constructor(app: cdk.App, id: string) {
    super(app, id);

    const lambdaFn = new lambda.Function(this, 'Singleton', {
      code: new lambda.AssetCode('./PowershellLambda/PowershellLambda.zip'),
      handler: 'PowershellLambda::PowershellLambda.Bootstrap::ExecuteFunction',
      timeout: cdk.Duration.seconds(300),
      runtime: lambda.Runtime.DOTNET_CORE_2_1
    });

    const rule = new events.Rule(this, 'Rule', {
      schedule: events.Schedule.expression('rate(1 minute)')
    });
    rule.addTarget(new targets.LambdaFunction(lambdaFn));
  }
}

const app = new cdk.App();
new LambdaCronStack(app, 'LambdaCronExample');
app.synth();
The PowerShell script currently contains just the following lines and works when deployed via Publish-AWSPowerShellLambda on the CLI:
#Requires -Modules @{ModuleName='AWSPowerShell.NetCore';ModuleVersion='3.3.335.0'}
Write-Host "Powershell Lambda Executed"
Note: For the CDK Deployment I generate the .zip file using a build step in package.json:
"scripts": {
"build": "tsc",
"build-package": "pwsh -NoProfile -ExecutionPolicy Unrestricted -command New-AWSPowerShellLambdaPackage -ScriptPath './PowershellLambda/PowershellLambda.ps1' -OutputPackage ./PowershellLambda/PowershellLambda.zip",
"watch": "tsc -w",
"cdk": "cdk"
}
The CDK stack deploys fine and the Lambda is invoked on schedule, but the only thing in CloudWatch Logs is this:
START RequestId: 4c12fe1a-a9e0-4137-90cf-747b6aecb639 Version: $LATEST
I've checked that the handler in the CDK script matches the output of Publish-AWSPowerShellLambda, and that the zip file uploaded correctly and contains the right code.
Any suggestions as to why this isn't working?

Setting the memory size to 512 MB within the lambda.Function has resolved the issue.
The CloudWatch entry showed the Lambda starting, but it appears there wasn't enough memory to initialize and run the .NET runtime.
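For reference, a minimal sketch of the fix: memorySize is a standard lambda.Function prop (the default is 128 MB), and 512 is simply the value that worked here.

const lambdaFn = new lambda.Function(this, 'Singleton', {
  code: new lambda.AssetCode('./PowershellLambda/PowershellLambda.zip'),
  handler: 'PowershellLambda::PowershellLambda.Bootstrap::ExecuteFunction',
  timeout: cdk.Duration.seconds(300),
  memorySize: 512, // the 128 MB default is too little for the .NET/PowerShell host
  runtime: lambda.Runtime.DOTNET_CORE_2_1
});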

Related

Why doesn't my 'AWS Lambda Invoke Function' task in an Azure DevOps build pipeline fail if the Lambda returns 400?

I have this Python code inside the Lambda:
# This script will run as a Lambda function on AWS.
import time, json

cmdStatus = "Failed"
message = ""
statusCode = 200

def lambda_handler(event, context):
    time.sleep(2)
    if cmdStatus == "Failed":
        message = "Command execution failed"
        statusCode = 400
    elif cmdStatus == "Success":
        message = "The script execution is successful"
        statusCode = 200
    else:
        message = "The cmd status is: " + cmdStatus
        statusCode = 500
    return {
        'statusCode': statusCode,
        'body': json.dumps(message)
    }
and I am invoking this Lambda from an Azure DevOps build pipeline with the AWS Lambda Invoke Function task.
As you can see in the above code, I have intentionally set cmdStatus to Failed to make the Lambda fail, but when executed from the Azure DevOps build pipeline, the task succeeds. Strange.
How can I make the pipeline fail in this case? Please help.
Thanks
I have been working with a similar issue myself, and it looks like a bug in the task itself. It was reported in 2019 and nothing has happened since, so I wouldn't hold out much hope.
https://github.com/aws/aws-toolkit-azure-devops/issues/175
My workaround for this issue was to use the AWS CLI task instead, with:
Command: lambda
Subcommand: invoke
Options and Parameters: --function-name {nameOfYourFunction} response.json
Followed immediately by a bash task with an inline bash script:
cat response.json
if grep -q "errorType" "response.json"; then
    echo "An error was found"
    exit 1
fi
echo "No error was found"

AWS SAM Incorrect region

I am using AWS SAM to test my Lambda functions in the AWS cloud.
This is my code for testing Lambda:
# Set "running_locally" flag if you are running the integration test locally
running_locally = True
def test_data_extraction_validate():
if running_locally:
lambda_client = boto3.client(
"lambda",
region_name="eu-west-1",
endpoint_url="http://127.0.0.1:3001",
use_ssl=False,
verify=False,
config=botocore.client.Config(
signature_version=botocore.UNSIGNED,
read_timeout=10,
retries={'max_attempts': 1}
)
)
else:
lambda_client = boto3.client('lambda',region_name="eu-west-1")
####################################################
# Test 1. Correct payload
####################################################
with open("payloads/myfunction/ok.json","r") as f:
payload = f.read()
# Correct payload
response = lambda_client.invoke(
FunctionName="myfunction",
Payload=payload
)
result = json.loads(response['Payload'].read())
assert result['status'] == True
assert result['error'] == ""
This is the command I am using to start AWS SAM locally:
sam local start-lambda -t template.yaml --debug --region eu-west-1
Whenever I run the code, I get the following error:
botocore.exceptions.ClientError: An error occurred (ResourceNotFound) when calling the Invoke operation: Function not found: arn:aws:lambda:us-west-2:012345678901:function:myfunction
I don't understand why it's trying to invoke a function in us-west-2 when I explicitly told the code to use the eu-west-1 region. I also tried using an AWS profile with a hardcoded region; same error.
When I switch the running_locally flag to False and run the code against AWS directly instead of through SAM, everything works fine.
===== Updated =====
The list of env variables:
# env | grep 'AWS'
AWS_PROFILE=production
My AWS configuration file:
# cat /Users/alexey/.aws/config
[profile production]
region = eu-west-1
My AWS Credentials file
# cat /Users/alexey/.aws/credentials
[production]
aws_access_key_id = <my_access_key>
aws_secret_access_key = <my_secret_key>
region=eu-west-1
Make sure you are actually running the correct local endpoint! In my case the problem was that I had previously started the local Lambda endpoint with an incorrect configuration, so my invocation was not invoking what I thought it was. Try killing the process on the port you have specified, kill $(lsof -ti:3001), run again and see if that helps!
This also assumes that you have built the function (FunctionName="myfunction") correctly; make sure the function name is spelled correctly in the template file you use during sam build.
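As a further sanity check, the console running sam local start-lambda logs each invocation it receives (and with --debug it also shows what it parsed from the template), so you can confirm whether your test is really reaching the local endpoint at all.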

Startup scripts with Google Compute Engine in Node.js

I'm a novice with the Google Cloud Compute API in Node. I'm using this library:
https://googleapis.dev/nodejs/compute/latest/index.html
I'm authenticated and can make API requests; that is all set up.
All I'm trying to do is make a startup script that will download from this URL:
http://eve-robotics.com/release/EveAIO_setup.exe and place the file on the desktop.
I have something, but I'm 100% sure it is way off based on some articles and docs I am seeing, and I know nothing about bash or startup scripts.
This is what I have:
const Compute = require('@google-cloud/compute');

const compute = new Compute();
const zone = compute.zone('us-central1-c');

async function createVM() {
  const vmName = 'start-script-trial3';
  // const [vm, operation] = await zone.createVM(vmName, {
  // })
  const config = {
    os: 'windows',
    http: true,
    metadata: {
      items: [
        {
          key: 'startup-script',
          value: `curl http://eve-robotics.com/release/EveAIO_setup.exe --output Eve`,
        },
      ]
    }
  };
  const vm = zone.vm(vmName);
  const [gas, operation] = await vm.create(config);
  console.log(operation.id);
}

createVM();
I was able to do it in bash:
I made a .bat script for Windows:
@ECHO OFF
curl http://eve-robotics.com/release/EveAIO_setup.exe --output C:\Users\Eve
I copied the script to GCS:
gsutil cp file.bat gs://my-bucket/
Then I run the gcloud command:
gcloud compute instances create example-windows-instance --scopes storage-ro --image-family=windows-1803-core --image-project=windows-cloud --metadata windows-startup-script-url=gs://my-bucket/file.bat --zone=europe-west1-c
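The same approach should also translate back to Node without shelling out to gcloud: point the windows-startup-script-url metadata key (which Windows images read instead of startup-script) at the .bat file in GCS. A minimal sketch, reusing the zone, VM name, and bucket names from above:

const Compute = require('@google-cloud/compute');

const compute = new Compute();
const zone = compute.zone('us-central1-c');

async function createVM() {
  const vm = zone.vm('start-script-trial3');
  const [, operation] = await vm.create({
    os: 'windows',
    http: true,
    metadata: {
      items: [
        {
          // Windows images read this key instead of 'startup-script'
          key: 'windows-startup-script-url',
          value: 'gs://my-bucket/file.bat',
        },
      ],
    },
  });
  console.log(operation.id);
}

createVM();

As in the gcloud example, the instance needs read access to the bucket (the storage-ro scope) for the script to be fetched.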

Terraform - output EC2 instance IDs to calling shell script

I am using 'terraform apply' in a shell script to create multiple EC2 instances. I need to capture the list of generated IPs in a script variable and use the list in another sub-script. I have defined an output variable for the IPs in a Terraform config file: 'instance_ips'.
output "instance_ips" {
value = [
"${aws_instance.gocd_master.private_ip}",
"${aws_instance.gocd_agent.*.private_ip}"
]
}
However, the terraform apply command prints the entire EC2 provisioning output in addition to the output variables.
terraform init \
  -backend-config="region=$AWS_DEFAULT_REGION" \
  -backend-config="bucket=$TERRAFORM_STATE_BUCKET_NAME" \
  -backend-config="role_arn=$PROVISIONING_ROLE" \
  -reconfigure \
  "$TERRAFORM_DIR"

OUTPUT=$(terraform apply \
  -var="aws_region=$AWS_DEFAULT_REGION" \
  -auto-approve \
  -input=false \
  "$TERRAFORM_DIR"
)
terraform output instance_ips
So the 'OUTPUT' script variable content is
Terraform command: apply
Initialising the backend...
Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes.
Initialising provider plugins...
Terraform has been successfully initialised!
.
.
.
aws_route53_record.gocd_agent_dns_entry[2]: Creation complete after 52s (ID: <zone ............................)
aws_route53_record.gocd_master_dns_entry: Creation complete after 52s (ID: <zone ............................)
aws_route53_record.gocd_agent_dns_entry[1]: Creation complete after 53s (ID: <zone ............................)
Apply complete! Resources: 9 added, 0 changed, 0 destroyed.
Outputs:
instance_ips = [ 10.39.209.155, 10.39.208.44, 10.39.208.251, 10.39.209.227 ]
instead of just the EC2 IPs.
Running 'terraform output instance_ips' throws an 'Initialisation Required' error, which I understand means 'terraform init' is required.
Is there any way to suppress the EC2 provisioning output and just print the output variables? If not, how can I retrieve the IPs using the 'terraform output' command without needing to do a terraform init?
If I understood the context correctly, you can write the IPs to a file in that directory and have your sub-shell script read it. You can do this with either a null_resource or a local_file.
Here is how to use them in a modularized structure.
Using null_resource:
resource "null_resource" "instance_ips" {
triggers {
ip_file = "${sha1(file("${path.module}/instance_ips.txt"))}"
}
provisioner "local-exec" {
command = "echo ${module.ec2.instance_ips} >> instance_ips.txt"
}
}
Using local_file:
resource "local_file" "instance_ips" {
  content  = "${module.ec2.instance_ips}"
  filename = "${path.module}/instance_ips.txt"
}
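Either way, the calling shell script can then consume the file directly, for example with INSTANCE_IPS=$(cat instance_ips.txt), rather than scraping the apply output.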

ASP.NET Core on AWS EBS - write permissions and .ebextensions

We have deployed an ASP.NET Core app on AWS EBS and have a problem with writing files on it:
Access to the path C:\inetpub\AspNetCoreWebApps\app\App_Data\file.txt is denied
I added .ebextensions\[app_name].config, but it did nothing:
{
  "container_commands": {
    "01": {
      "command": "icacls \"C:/inetpub/AspNetCoreWebApps/app/App_Data\" /grant DefaultAppPool:(OI)(CI)"
    }
  }
}
I know this is a permission problem because when I RDP'd to the machine and changed the permissions manually, it solved the problem. I would like to do this during deployment using .ebextensions\[app_name].config.
The .ebextensions\[app_name].config commands run before deployment, and the folder is recreated during deployment; that is why it was not working. I fixed it by adding a postInstall PowerShell script to aws-windows-deployment-manifest.json:
"scripts": {
"postInstall": {
"file": "SetupScripts/PostInstallSetup.ps1"
}
# PostInstallSetup.ps1
$SharePath = "C:\inetpub\AspNetCoreWebApps\app\App_Data"
$Acl = Get-ACL $SharePath
$AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("DefaultAppPool", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$Acl.AddAccessRule($AccessRule)
Set-Acl $SharePath $Acl
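For context on why this works: the rule grants the IIS DefaultAppPool identity FullControl over App_Data, with ContainerInherit and ObjectInherit so that files and subfolders created later inherit the permission, and because the script runs as a postInstall step it is applied after deployment has recreated the folder.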