I have the Lambda function pushed, as I can see it in LocalStack based on the command/output below:
aws lambda get-function --function-name books1 --endpoint-url=http://localhost:4574
{
"Code": {
"Location": "http://localhost:4574/2015-03-31/functions/books1/code"
},
"Configuration": {
"Version": "$LATEST",
"FunctionName": "books1",
"CodeSize": 50,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:books1",
"Environment": {},
"Handler": "main",
"Runtime": "go1.x"
}
}
When I try to execute it as shown below, I get an error. My LocalStack is running inside a Docker container.
aws --endpoint-url=http://localhost:4574 lambda invoke --function-name books1 /tmp/output.json
An error occurred (InternalFailure) when calling the Invoke operation (reached max retries: 4): Error executing Lambda function: Unable to find executor for Lambda function "books1". Note that Node.js and .NET Core Lambdas currently require LAMBDA_EXECUTOR=docker
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 269, in run_lambda
event, context=context, version=version, asynchronous=asynchronous)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 466, in execute
process.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 462, in do_execute
result = lambda_function(event, context)
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 390, in generic_handler
'Note that Node.js and .NET Core Lambdas currently require LAMBDA_EXECUTOR=docker') % lambda_name)
Exception: Unable to find executor for Lambda function "books1". Note that Node.js and .NET Core Lambdas currently require LAMBDA_EXECUTOR=docker
This lambda is written in Go and when I manually execute it on real AWS, it works just fine.
You should run the LocalStack container with the LAMBDA_EXECUTOR=docker environment variable set and the /var/run/docker.sock:/var/run/docker.sock volume mounted, so that LocalStack can spin up Lambda containers through the host's Docker daemon:
docker run \
-itd \
-v /var/run/docker.sock:/var/run/docker.sock \
-e LAMBDA_EXECUTOR=docker \
-p 4567-4583:4567-4583 -p 8080:8080 \
--name localstack \
localstack/localstack
I am currently trying to get VS Code working with the Amazon Linux 2 distribution (under WSL). I get all the way up to the run step, and then it falls over with "SAM debug: missing AWS credentials (Toolkit is not connected)" and "SAM CLI not configured".
Debug log below:
2023-01-27 19:42:44 [WARN]: SAM debug: missing AWS credentials (Toolkit is not connected)
2023-01-27 19:42:45 [INFO]: SAM CLI location: [object Object]
2023-01-27 19:42:46 [INFO]: SAM CLI location: [object Object]
2023-01-27 19:42:47 [INFO]: SAM CLI location: [object Object]
2023-01-27 19:42:47 [INFO]: SAM CLI location: [object Object]
2023-01-27 19:42:48 [INFO]: Preparing to debug locally: Lambda "HelloWorld::HelloWorld.Function::FunctionHandler"
2023-01-27 19:42:48 [INFO]: Building SAM application...
2023-01-27 19:42:48 [INFO]: SAM CLI location: [object Object]
2023-01-27 19:42:48 [INFO]: Command: (not started) [/usr/local/bin/sam build --build-dir /tmp/aws-toolkit-vscode/vsctkkE0arS/output --template /src/sample/template.yaml]
2023-01-27 19:42:48 [INFO]: SAM CLI not configured, using SAM found at: '/usr/local/bin/sam'
2023-01-27 19:42:54 [INFO]: Build complete.
2023-01-27 19:42:54 [INFO]: Installing .NET Core Debugger to /src/sample/src/HelloWorld/.vsdbg...
2023-01-27 19:42:56 [INFO]: Command: (not started) [/src/sample/src/HelloWorld/.vsdbg/installVsdbgScript.sh -v latest -r linux-x64 -l /src/sample/src/HelloWorld/.vsdbg]
2023-01-27 19:42:56 [INFO]: Starting SAM application locally
2023-01-27 19:42:56 [INFO]: SAM CLI location: [object Object]
2023-01-27 19:42:56 [INFO]: AWS.running.command
2023-01-27 19:42:56 [INFO]: SAM CLI not configured, using SAM found at: '/usr/local/bin/sam'
2023-01-27 19:42:56 [INFO]: Command: (not started) [/usr/local/bin/sam local invoke HelloWorldFunction --template /tmp/aws-toolkit-vscode/vsctkkE0arS/output/template.yaml -d 5858 --debugger-path /src/sample/src/HelloWorld/.vsdbg]
2023-01-27 19:42:58 [ERROR]: SamLaunchRequestError: Failed to run SAM application locally
I ran the sam local invoke again with the --debug option, and got the following:
2023-01-27 19:49:24,748 | Config file location: /src/sample/samconfig.toml
2023-01-27 19:49:24,748 | Config file '/src/sample/samconfig.toml' does not exist
2023-01-27 19:49:24,752 | Using SAM Template at /src/sample/template.yaml
2023-01-27 19:49:24,854 | Using config file: samconfig.toml, config environment: default
2023-01-27 19:49:24,854 | Expand command line arguments to:
2023-01-27 19:49:24,854 | --template_file=/src/sample/template.yaml --no_event --layer_cache_basedir=/root/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1
2023-01-27 19:49:24,854 | local invoke command is called
2023-01-27 19:49:24,858 | No Parameters detected in the template
2023-01-27 19:49:24,868 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2023-01-27 19:49:24,868 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2023-01-27 19:49:24,868 | 0 stacks found in the template
2023-01-27 19:49:24,868 | No Parameters detected in the template
2023-01-27 19:49:24,876 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2023-01-27 19:49:24,876 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2023-01-27 19:49:24,876 | 2 resources found in the stack
2023-01-27 19:49:24,876 | Found Serverless function with name='HelloWorldFunction' and CodeUri='./src/HelloWorld/'
2023-01-27 19:49:24,876 | --base-dir is not presented, adjusting uri ./src/HelloWorld/ relative to /src/sample/template.yaml
2023-01-27 19:49:24,883 | Found one Lambda function with name 'HelloWorldFunction'
2023-01-27 19:49:24,883 | Invoking HelloWorld::HelloWorld.Function::FunctionHandler (dotnetcore3.1)
2023-01-27 19:49:24,883 | Loading AWS credentials from session with profile 'None'
2023-01-27 19:49:24,979 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2023-01-27 19:49:24,988 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': 'cf4207d7-0554-4bcf-9e05-1dda22ed4df8', 'installationId': 'fb284138-2b0f-4c16-98b3-b665990567fc', 'sessionId': 'fd097a74-adc6-42c5-a6cc-de3bceaadc9b', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.7.10', 'samcliVersion': '1.71.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'metricSpecificAttributes': {'projectType': 'CFN', 'gitOrigin': None, 'projectName': 'af2bdbe1aa9b6ec1e2ade1d694f41fc71a831d0268e9891562113d8a62add1bf', 'initialCommit': None}, 'duration': 125, 'exitReason': 'SSOTokenLoadError', 'exitCode': 255}}]}
2023-01-27 19:49:25,735 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Error: Error loading SSO Token: The SSO access token has either expired or is otherwise invalid.
Traceback:
File "click/core.py", line 1055, in main
File "click/core.py", line 1657, in invoke
File "click/core.py", line 1657, in invoke
File "click/core.py", line 1404, in invoke
File "click/core.py", line 760, in invoke
File "click/decorators.py", line 84, in new_func
File "click/core.py", line 760, in invoke
File "samcli/lib/telemetry/metric.py", line 183, in wrapped
File "samcli/lib/telemetry/metric.py", line 150, in wrapped
File "samcli/lib/utils/version_checker.py", line 41, in wrapped
File "samcli/cli/main.py", line 92, in wrapper
File "samcli/commands/local/invoke/cli.py", line 116, in cli
File "samcli/commands/local/invoke/cli.py", line 202, in do_cli
File "samcli/commands/local/lib/local_lambda.py", line 133, in invoke
File "samcli/commands/local/lib/local_lambda.py", line 185, in get_invoke_config
File "samcli/commands/local/lib/local_lambda.py", line 285, in _make_env_vars
File "samcli/commands/local/lib/local_lambda.py", line 341, in get_aws_creds
File "botocore/credentials.py", line 435, in access_key
File "botocore/credentials.py", line 527, in _refresh
File "botocore/credentials.py", line 543, in _protected_refresh
File "botocore/credentials.py", line 684, in fetch_credentials
File "botocore/credentials.py", line 694, in _get_cached_credentials
File "botocore/credentials.py", line 2053, in _get_credentials
File "botocore/utils.py", line 2679, in __call__
An unexpected error was encountered while executing "sam local invoke".
Search for an existing issue:
https://github.com/aws/aws-sam-cli/issues?q=is%3Aissue+is%3Aopen+Bug%3A%20sam%20local%20invoke%20-%20SSOTokenLoadError
Or create a bug report:
https://github.com/aws/aws-sam-cli/issues/new?template=Bug_report.md&title=Bug%3A%20sam%20local%20invoke%20-%20SSOTokenLoadError
So it can't seem to authenticate against AWS. I did this on another machine and it all worked without a hitch; I didn't have to provide an AWS login. I am currently logged in to AWS on the host system (Windows 11) using aws sso login. Does anyone know what's broken so I can fix it? Thanks
It turned out that I needed to go back into Docker Desktop (which I had previously installed), go into Settings > WSL Integration, make sure "Enable integration with my default WSL distro" was ticked, and ensure Amazon2 was switched on. Then I was able to debug.
I want to send a PromQL query to Amazon Managed Prometheus via awscurl, but I am not able to filter the result based on namespace.
I am able to apply the same filter on a local Prometheus via prometheus_api_client.PrometheusConnect, but I cannot use that against AWS (because of auth).
Is there any way?
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222/api/v1/query?query=sum(storage_level_sst_num) by (namespace, instance, level_index)"
{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {
"instance": "10.0.3.68:1250",
"level_index": "0_MVGroup",
"namespace": "benchmark"
},
"value": [
1665730049.128,
"8"
]
}
]
}
}
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num{namespace="benchmark"}) by (instance, level_index)"
{"message":null}
Traceback (most recent call last):
File "/opt/homebrew/bin//awscurl", line 33, in <module>
sys.exit(load_entry_point('awscurl==0.26', 'console_scripts', 'awscurl')())
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/awscurl/awscurl.py", line 521, in main
inner_main(sys.argv[1:])
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/awscurl/awscurl.py", line 515, in inner_main
response.raise_for_status()
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num%7Bnamespace=benchmark%7D)%20by%20(instance,%20level_index)
I have the following scenario in the k8s cluster:
An AWS-managed Redis cluster which is exposed by an upstream service called redis.
I have opened a tunnel locally using kube-proxy.
curl 127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "redis",
"namespace": "intekspersistence",
"selfLink": "/api/v1/namespaces/intekspersistence/services/redis",
...
"spec": {
"type": "ExternalName",
"sessionAffinity": "None",
"externalName": "xxx.xxx.usw2.cache.amazonaws.com"
},
"status": {
"loadBalancer": {
}
}
As shown, I am able to route to the redis service locally, and it is pointing to the actual Redis host.
Now I am trying to validate and ping the Redis host using the Python script below.
from redis import Redis
import logging
logging.basicConfig(level=logging.INFO)
redis = Redis(host='127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis')
if redis.ping():
logging.info("Connected to Redis")
Upon running this, it throws an error saying the host was not found (probably due to the inclusion of the port in the host).
python test.py
Traceback (most recent call last):
File "test.py", line 7, in <module>
if redis.ping():
File "/home/appsadm/.local/lib/python2.7/site-packages/redis/client.py", line 1378, in ping
return self.execute_command('PING')
File "/home/appsadm/.local/lib/python2.7/site-packages/redis/client.py", line 898, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/home/appsadm/.local/lib/python2.7/site-packages/redis/connection.py", line 1192, in get_connection
connection.connect()
File "/home/appsadm/.local/lib/python2.7/site-packages/redis/connection.py", line 563, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to 127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis:6379. Name or service not known.
Is there a workaround to trim the port from the host, or to route to the host using the above Python module?
I am running localstack inside of a docker container with this docker-compose.yml file.
version: '2.1'
services:
localstack:
image: localstack/localstack
ports:
- "4567-4597:4567-4597"
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- SERVICES=${SERVICES- }
- DEBUG=1
- DATA_DIR=${DATA_DIR- }
- PORT_WEB_UI=${PORT_WEB_UI- }
- LAMBDA_EXECUTOR=docker
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
To start localstack I run TMPDIR=/private$TMPDIR docker-compose up.
I have created two lambdas. When I run aws lambda list-functions --endpoint-url http://localhost:4574 --region=us-east-1 this is the output.
{
"Functions": [
{
"TracingConfig": {
"Mode": "PassThrough"
},
"Version": "$LATEST",
"CodeSha256": "qmDiumefhM0UutYv32By67cj24P/NuHIhKHgouPkDBs=",
"FunctionName": "handler",
"LastModified": "2019-08-08T17:56:58.277+0000",
"RevisionId": "ffea379b-4913-420b-9444-f1e5d51b5908",
"CodeSize": 5640253,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:handler",
"Environment": {
"Variables": {
"DB_NAME": "somedbname",
"IS_PRODUCTION": "FALSE",
"SERVER": "xxx.xxx.xx.xxx",
"DB_PASS": "somepass",
"DB_USER": "someuser",
"PORT": "someport"
}
},
"Handler": "handler",
"Role": "r1",
"Timeout": 3,
"Runtime": "go1.x",
"Description": ""
},
{
"TracingConfig": {
"Mode": "PassThrough"
},
"Version": "$LATEST",
"CodeSha256": "wbT8YzTsYW4sIOAXLtjprrveq5NBMVUaa2srNvwLxM8=",
"FunctionName": "paymentenginerouter",
"LastModified": "2019-08-08T18:00:28.923+0000",
"RevisionId": "bd79cb2e-6531-4987-bdfc-25a5d87e93f4",
"CodeSize": 6602279,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:paymentenginerouter",
"Environment": {
"Variables": {
"DB_QUERY_LAMBDA": "handler",
"AWS_REGION": "us-east-1"
}
},
"Handler": "handler",
"Role": "r1",
"Timeout": 3,
"Runtime": "go1.x",
"Description": ""
}
]
}
Inside the paymentenginerouter code I am attempting to call the handler lambda via:
lambdaParams := &invoke.InvokeInput{
FunctionName: aws.String(os.Getenv("DB_QUERY_LAMBDA")),
InvocationType: aws.String("RequestResponse"),
LogType: aws.String("Tail"),
Payload: payload,
}
result, err := svc.Invoke(lambdaParams)
if err != nil {
resp.StatusCode = 500
log.Fatal("Error while invoking lambda:\n", err.Error())
}
Where invoke is an import: invoke "github.com/aws/aws-sdk-go/service/lambda"
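The question does not show how svc is constructed. Presumably it is something like the following minimal sketch (an assumption on my part, using the aws and session packages from aws-sdk-go alongside the invoke import above); with no endpoint override, the SDK resolves the real AWS Lambda endpoint, which is consistent with the error below:
// Assumed client setup (not shown in the question): no custom endpoint is set,
// so the SDK targets the real lambda.us-east-1.amazonaws.com service and the
// LocalStack dummy credentials are rejected.
sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
svc := invoke.New(sess)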
When I run the paymentenginerouter lambda via:
aws lambda invoke --function paymentenginerouter --payload '{ "body": "{\"id\":\"12\",\"internalZoneCode\":\"xxxxxx\",\"vehicleId\":\"xxxxx\",\"vehicleVrn\":\"vehicleVrn\",\"vehicleVrnState\":\"vehicleVrnState\",\"durationInMinutes\":\"120\",\"verification\":{\"Token\":null,\"lpn\":null,\"ZoneCode\":null,\"IsExtension\":null,\"ParkingActionId\":null},\"selectedBillingMethodId\":\"xxxxxx\",\"startTimeLocal\":\"2019-07-29T11:36:47\",\"stopTimeLocal\":\"2019-07-29T13:36:47\",\"vehicleVin\":null,\"orderId\":\"1\",\"parkingActionType\":\"OnDemand\",\"digitalPayPaymentInfo\":{\"Provider\":\"<string>\",\"ChasePayData\":{\"ConsumerIP\":\"xxxx\",\"DigitalSessionID\":\"xxxx\",\"TransactionReferenceKey\":\"xxxx\"}}}"
}' --endpoint-url=http://localhost:4574 --region=us-east-1 out --debug
I receive this error:
localstack_1 | 2019/08/08 20:02:28 Error while invoking lambda:
localstack_1 | UnrecognizedClientException: The security token included in the request is invalid.
localstack_1 | status code: 403, request id: bd4e3c15-47ae-44a2-ad6a-376d78d8fd92
Note
I can run the handler lambda without error by calling it directly through the cli:
aws lambda invoke --function handler --payload '{"body": "SELECT TOKEN, NAME, CREATED, ENABLED, TIMESTAMP FROM dbo.PAYMENT_TOKEN WHERE BILLING_METHOD_ID='xxxxxxx'"}' --endpoint-url=http://localhost:4574 --region=us-east-1 out --debug
I thought the AWS credentials were set up according to the environment variables in LocalStack, but I could be mistaken. Any idea how to get past this problem?
I am quite new to AWS Lambdas and an absolute noob when it comes to LocalStack, so please ask for more details if you need them. It's possible I am missing a critical piece of information in my description.
The error you are receiving, The security token included in the request is invalid., means that your Lambda is trying to call out to the real AWS with invalid credentials, rather than going to LocalStack.
When running a Lambda inside LocalStack where the Lambda code itself has to call out to other AWS services hosted in LocalStack, you need to ensure that the endpoints it uses are redirected to LocalStack.
To do this, there is a documented feature in LocalStack, found here:
https://github.com/localstack/localstack#configurations
LOCALSTACK_HOSTNAME: Name of the host where LocalStack services are available.
This is needed in order to access the services from within your Lambda functions
(e.g., to store an item to DynamoDB or S3 from Lambda). The variable
LOCALSTACK_HOSTNAME is available for both, local Lambda execution
(LAMBDA_EXECUTOR=local) and execution inside separate Docker containers
(LAMBDA_EXECUTOR=docker).
Within your Lambda code, ensure you use this environment variable to build the endpoint (prefixed with http:// and suffixed with the port number of that service in LocalStack, e.g. :4569 for DynamoDB). This will ensure that the calls go to the right place.
Example Go snippet that would be added to your Lambda where you are making a call to DynamoDB:
awsConfig.WithEndpoint("http://" + os.Getenv("LOCALSTACK_HOSTNAME") + ":4569")
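For the Lambda-to-Lambda call in the question, a minimal sketch of a client that targets LocalStack might look like the following (an illustration only, assuming aws-sdk-go v1 and LocalStack's legacy Lambda port 4574; newer LocalStack releases expose all services on 4566):
import (
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	invoke "github.com/aws/aws-sdk-go/service/lambda"
)

// newLambdaClient returns a Lambda client that talks to LocalStack when
// LOCALSTACK_HOSTNAME is set (i.e. when the code runs inside LocalStack)
// and falls back to the default AWS endpoint otherwise.
func newLambdaClient() *invoke.Lambda {
	cfg := aws.NewConfig().WithRegion("us-east-1")
	if host := os.Getenv("LOCALSTACK_HOSTNAME"); host != "" {
		// Redirect the SDK to LocalStack's Lambda endpoint instead of real AWS.
		cfg = cfg.WithEndpoint("http://" + host + ":4574")
	}
	sess := session.Must(session.NewSession(cfg))
	return invoke.New(sess)
}
The svc used in the question's Invoke call could then be created via this helper, so the same binary works both inside LocalStack and against real AWS.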
I submit this on the command line (I'm omitting all the other params that I know work):
aws cloudformation create-stack ... --parameters ParameterKey=Region,ParameterValue=us-east-1
It yields:
Unable to construct an endpoint for cloudformation in regionNone
If I submit the same exact params using the https://console.aws.amazon.com/cloudformation web UI, it works.
How do I specify the region using aws.exe for Windows? The .json template file I use even has it as the default, but it still does not take effect if I omit the region from the command line:
"Region": {
"Type": "String",
"Description": "Which Region to launch in",
"Default": "us-east-1",
"AllowedValues": [
"us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-northeast-1"
]
}
In debug mode I get:
File "awscli\clidriver.pyc", line 206, in main
File "awscli\clidriver.pyc", line 354, in __call__
File "awscli\clidriver.pyc", line 461, in __call__
File "awscli\clidriver.pyc", line 555, in invoke
File "botocore\service.pyc", line 161, in get_endpoint
File "botocore\endpoint.pyc", line 265, in create_endpoint
File "botocore\regions.pyc", line 67, in construct_endpoint
UnknownEndpointError: Unable to construct an endpoint for cloudformation in region None
2014-10-27 22:52:38,631 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
The region is an argument of the aws command:
aws --region eu-west-1 cloudformation create-stack --stack-name ...
You can also configure it using aws configure or, if you have already run that, you can find it in ~/.aws/config. Example:
[default]
region=us-east-1
The regions are as follows (see the second column):
$ ec2-describe-regions
REGION eu-central-1 ec2.eu-central-1.amazonaws.com
REGION sa-east-1 ec2.sa-east-1.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION eu-west-1 ec2.eu-west-1.amazonaws.com
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION us-west-1 ec2.us-west-1.amazonaws.com
REGION us-west-2 ec2.us-west-2.amazonaws.com
REGION ap-southeast-2 ec2.ap-southeast-2.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com