I am using AWS SAM to test my Lambda functions in the AWS cloud.
This is my code for testing Lambda:
# Set "running_locally" flag if you are running the integration test locally
running_locally = True
def test_data_extraction_validate():
if running_locally:
lambda_client = boto3.client(
"lambda",
region_name="eu-west-1",
endpoint_url="http://127.0.0.1:3001",
use_ssl=False,
verify=False,
config=botocore.client.Config(
signature_version=botocore.UNSIGNED,
read_timeout=10,
retries={'max_attempts': 1}
)
)
else:
lambda_client = boto3.client('lambda',region_name="eu-west-1")
####################################################
# Test 1. Correct payload
####################################################
with open("payloads/myfunction/ok.json","r") as f:
payload = f.read()
# Correct payload
response = lambda_client.invoke(
FunctionName="myfunction",
Payload=payload
)
result = json.loads(response['Payload'].read())
assert result['status'] == True
assert result['error'] == ""
This is the command I am using to start AWS SAM locally:
sam local start-lambda -t template.yaml --debug --region eu-west-1
Whenever I run the code, I get the following error:
botocore.exceptions.ClientError: An error occurred (ResourceNotFound) when calling the Invoke operation: Function not found: arn:aws:lambda:us-west-2:012345678901:function:myfunction
I don't understand why it's trying to invoke a function located in us-west-2 when I explicitly told the code to use the eu-west-1 region. I also tried using an AWS profile with a hardcoded region - same error.
When I switch the running_locally flag to False and run the code without AWS SAM, everything works fine.
===== Updated =====
The list of env variables:
# env | grep 'AWS'
AWS_PROFILE=production
My AWS configuration file:
# cat /Users/alexey/.aws/config
[profile production]
region = eu-west-1
My AWS credentials file:
# cat /Users/alexey/.aws/credentials
[production]
aws_access_key_id = <my_access_key>
aws_secret_access_key = <my_secret_key>
region=eu-west-1
Make sure you are actually running the correct local endpoint! In my case the problem was that I had previously started the local endpoint with an incorrect configuration, so my invocation was not hitting what I thought it was. Try killing the process on the port you have specified (kill $(lsof -ti:3001)), run it again, and see if that helps!
This also assumes that you have built the function FunctionName="myfunction" correctly (make sure the function name is spelled correctly in the template file you use during sam build).
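If you want the integration test to fail fast when the endpoint is not what you expect, a small sanity check can help; this is only a sketch and assumes the 127.0.0.1:3001 endpoint configured in the question:

import socket

# Fail early if nothing is listening on the sam local endpoint (assumed to be
# 127.0.0.1:3001, as in the question) before creating the boto3 client.
def sam_local_is_up(host="127.0.0.1", port=3001, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

assert sam_local_is_up(), "sam local start-lambda does not appear to be listening on 127.0.0.1:3001"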
I have this python code inside Lambda:
# This script will run as a Lambda function on AWS.
import time, json

cmdStatus = "Failed"
message = ""
statusCode = 200

def lambda_handler(event, context):
    time.sleep(2)
    if cmdStatus == "Failed":
        message = "Command execution failed"
        statusCode = 400
    elif cmdStatus == "Success":
        message = "The script execution is successful"
        statusCode = 200
    else:
        message = "The cmd status is: " + cmdStatus
        statusCode = 500
    return {
        'statusCode': statusCode,
        'body': json.dumps(message)
    }
and I am invoking this Lambda from Azure DevOps Build Pipeline - AWS Lambda Invoke Function.
As you can see in the above code, I have intentionally set cmdStatus to Failed to make the Lambda fail, but when it is executed from the Azure DevOps Build Pipeline the task succeeds. Strange.
How can I make the pipeline fail in this case? Please help.
Thanks
I have been working with a similar issue myself and it looks like a bug in the task itself. It was reported in 2019 and nothing has happened since, so I wouldn't hold out much hope.
https://github.com/aws/aws-toolkit-azure-devops/issues/175
My workaround to this issue was to instead use the AWS CLI task with
Command: lambda
Subcommand: invoke
Options and Parameters: --function-name {nameOfYourFunction} response.json
Followed immediately by a bash task with an inline bash script
cat response.json
if grep -q "errorType" "response.json"; then
  echo "An error was found"
  exit 1
fi
echo "No error was found"
I am trying to run aws-runas to use AWS STS to do some development work with a specific user role. I already have the required role access to the AWS account.
The command executed to run AWS STS:
aws-runas.exe <profile_name>
[default]
output = json
region = <example1>
saml_auth_url = <url>
saml_username = <email>
saml_provider = <provider>
federated_username = <username>
credentials_duration = 8h
[profile <name>]
role_arn = arn:aws:iam::<account_id>:role/<role_name>
source_profile = <profile_name>
This is how I formed the config file, which is located at C:\Users\<NAME>\.aws
The debug output is as follows.
Does anyone have a clue what's wrong here?
I have a Lambda function that describes the instances running in an AWS account, but when I scale out instances using Auto Scaling, the Lambda function returns the wrong number of instances.
To check for this:
I used the same logic to create a CLI command, and the CLI command gives me the correct number of instances.
CLI command:
aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query 'Reservations[].Instances[].{Instance:InstanceId}' --output json --region eu-west-1
After that I created a Python script, which I executed on the server, but it gives the same answer as the Lambda function.
Lambda function code:
import boto3

client = boto3.client('ec2', region_name='eu-west-1')
response_dict = client.describe_instances(Filters=[
    {
        'Name': 'instance-state-name',
        'Values': ['running']
    }
])
instances_list = response_dict['Reservations']
result_dict = {}
for instance_dict in instances_list:
    instanceDetailsresult_dict = {}
    instance_role = None
    instance = instance_dict['Instances'][0]
    instanceId = instance['InstanceId']
    print(instanceId)
Note:
This is just a code snippet, all libraries are included.
instance_dict['Instances'] is a list, and with [0] you are only getting the first instance out of it. I suggest iterating over that list, as in the sketch below.
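A rough sketch of that iteration, reusing the same filter from the question. The count comes out right because Auto Scaling can launch several instances under a single reservation, so taking only Instances[0] per reservation undercounts:

import boto3

client = boto3.client('ec2', region_name='eu-west-1')
response_dict = client.describe_instances(Filters=[
    {'Name': 'instance-state-name', 'Values': ['running']}
])

# Walk every instance in every reservation instead of taking only Instances[0].
running_instance_ids = []
for reservation in response_dict['Reservations']:
    for instance in reservation['Instances']:
        running_instance_ids.append(instance['InstanceId'])

print(len(running_instance_ids), running_instance_ids)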
In AWS SSM, I use RunRemoteScript document to run a PowerShell script to install some software on SSM managed instances. The script is hosted in a public accessible S3 bucket.
The RunCommand works fine when the script takes no parameters; the software was successfully deployed to the managed instances. But my script has a unique CID embedded in the code. For security reasons, I need to take it out and pass it as a parameter to the PS script. Ever since then, the RunCommand just keeps failing.
My script looks like below (with parameter CID):
param (
    [Parameter(Position = 0, Mandatory = 1)]
    [string]$CID
)

Start-Transcript -Path "$([System.Environment]::GetEnvironmentVariable('TEMP','Machine'))\app_install.log" -Append

function Install-App {
    <#
        Installs App
    #>
    [CmdletBinding()]
    [OutputType([PSCustomObject])]
    param (
        [Parameter(Position = 0, Mandatory = 1)]
        [string]$msiURL,
        [Parameter(Position = 2, Mandatory = 1)]
        [string]$InstallCheck,
        [Parameter(Position = 3, Mandatory = 1)]
        [string]$CustomerID
    )

    if ( -not (Test-Path $installCheck)) {
        # Do stuff
        ...
    }
    else {
        Write-Host ("$installCheck - Already Installed")
        Return "Already Installed, Skipped $(($msiURL -split '([^\\/]+$)')[1])"
    }
}

Install-App -msiURL "https://s3.amazonaws.com/app.foo.com/Windows/app.exe" -InstallCheck "C:\Program Files\App\app.exe" -CustomerID $CID
Stop-Transcript
Following the AWS SSM documentation linked below, I run this command to kick off the RunCommand:
https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-remote-scripts.html
aws ssm send-command --document-name "AWS-RunRemoteScript" --targets "Key=instanceids,Values=mi-abc12345" \
    --parameters '{"sourceType":["S3"],"sourceInfo":["{\"path\": \"https://s3.amazonaws.com/app.foo.com/Windows/app_install.ps1\"}"],"commandLine":["app_install.ps1 abcd123456"]}'
The RunCommand keeps failing with the error below:
----------ERROR-------
app_install.ps1 : The term 'app_install.ps1' is not recognized
as the name of a cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the path is
correct and try again.
At C:\ProgramData\Amazon\SSM\InstanceData\mi-abcd1234\document\orchest
ration\a6811111d-c411-411-a222-bad123456\runPowerShellScript\_script.ps1:4
char:2
+ app_install.ps1 abcd123456
+ ~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (app_install.ps1:String)
[], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
failed to run commands: exit status 255
I suspect this has to do with the way RunCommand handles the argument for the PowerShell script. But I cannot find any examples other than the official documentation, which I followed. Can anyone point out what the issue is here?
BTW, I already tried prefixing the ps1 with ".\", without luck.
I found out the cause of the issue. The IAM role attached to the instance did not have sufficient rights to access the S3 bucket that holds the script. As a result, SSM wasn't able to download the script to the instance, hence the error "...ps1 is not recognized".
So it's not actually related to the code.
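If you want to confirm that diagnosis from the instance itself, a quick sketch is to try fetching the script with the instance profile credentials; the bucket and key below are read off the URL in the question and are assumptions:

import boto3

# Run on the managed instance, using its instance-profile credentials.
# Bucket/key taken from the URL in the question - adjust for your setup.
s3 = boto3.client("s3")
s3.head_object(Bucket="app.foo.com", Key="Windows/app_install.ps1")
# Raises a ClientError (403/404) if the role cannot read the object SSM needs to download.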
I am using 'terraform apply' in a shell script to create multiple EC2 instances. I need to capture the list of generated IPs in a script variable and use the list in another sub-script. I have defined an output variable for the IPs in a Terraform config file, 'instance_ips':
output "instance_ips" {
value = [
"${aws_instance.gocd_master.private_ip}",
"${aws_instance.gocd_agent.*.private_ip}"
]
}
However, the terraform apply command prints the entire EC2 creation output along with the output variables.
terraform init \
  -backend-config="region=$AWS_DEFAULT_REGION" \
  -backend-config="bucket=$TERRAFORM_STATE_BUCKET_NAME" \
  -backend-config="role_arn=$PROVISIONING_ROLE" \
  -reconfigure \
  "$TERRAFORM_DIR"

OUTPUT=$( terraform apply <input variables, e.g. -var="aws_region=$AWS_DEFAULT_REGION"> \
  -auto-approve \
  -input=false \
  "$TERRAFORM_DIR"
)

terraform output instance_ips
So the 'OUTPUT' script variable content is
Terraform command: apply Initialising the backend... Successfully
configured the backend "s3"! Terraform will automatically use this
backend unless the backend configuration changes. Initialising provider
plugins... Terraform has been successfully initialised!
.
.
.
aws_route53_record.gocd_agent_dns_entry[2]: Creation complete after 52s
(ID:<zone ............................)
aws_route53_record.gocd_master_dns_entry: Creation complete after 52s
(ID:<zone ............................)
aws_route53_record.gocd_agent_dns_entry[1]: Creation complete after 53s
(ID:<zone ............................)
Apply complete! Resources: 9 added, 0 changed, 0 destroyed. Outputs:
instance_ips = [ 10.39.209.155, 10.39.208.44, 10.39.208.251,
10.39.209.227 ]
instead of just the EC2 ips.
Running 'terraform output instance_ips' throws an 'Initialisation Required' error, which I understand means 'terraform init' is required.
Is there any way to suppress the EC2 creation output and just print the output variables? If not, how can I retrieve the IPs using the 'terraform output' command without needing to do a terraform init?
If I understood the context correctly, you can actually create a file in that directory, and that file can be used by your sub-shell script. You can do it by using a null_resource or a local_file resource.
Here is how we can use it in a modularized structure -
Using null_resource -
resource "null_resource" "instance_ips" {
triggers {
ip_file = "${sha1(file("${path.module}/instance_ips.txt"))}"
}
provisioner "local-exec" {
command = "echo ${module.ec2.instance_ips} >> instance_ips.txt"
}
}
Using local_file -
resource "local_file" "instance_ips" {
content = "${module.ec2.instance_ips}"
filename = "${path.module}/instance_ips.txt"
}
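On the consuming side, a rough sketch of a sub-script reading that file; this assumes the list is rendered as a comma- or whitespace-separated line, which depends on your Terraform version and how the module output is formatted:

# Hypothetical reader for instance_ips.txt written by either resource above.
with open("instance_ips.txt") as f:
    raw = f.read()

# Split on commas/whitespace and strip any brackets or quotes around each entry.
tokens = [tok.strip("[]\"'") for tok in raw.replace(",", " ").split()]
instance_ips = [ip for ip in tokens if ip]
print(instance_ips)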