Mock AWS Lambda with moto

I have a Lambda function in Python that I want to mock using the moto framework. When I use the mock_lambda() context manager (i.e. with mock_lambda(): # do stuff), I still get an error like "error running docker: Error while fetching server API version: HTTPSConnectionPool(host='<host>', port=<port>): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at <location>>: Failed to establish a new connection: [WinError <error code>] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))".

Is there a way to run AWS Lambdas in a mocked AWS environment with fake credentials and IAM roles without having to spin up a local Docker daemon? How can I ensure that this type of unit test works in GitHub Actions? If Lambda absolutely requires a Docker daemon, are there any other good frameworks besides moto that don't require one?
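One common workaround: moto only needs a Docker daemon when you actually Invoke the function (it runs your code in a docker-lambda container); the control-plane calls (create_function, get_function, and so on) work without one. So you can mock the surrounding resources (IAM role, function metadata) with moto and call your handler in-process for the unit test itself. A minimal sketch, assuming your handler lives in a hypothetical module my_lambda — the fake credentials guarantee boto3 never touches real AWS, which is also what makes this safe in GitHub Actions:

```python
import io
import os
import zipfile

import boto3
from moto import mock_iam, mock_lambda

# Fake credentials: moto intercepts the API calls, and these ensure the
# test can never hit real AWS -- locally or in GitHub Actions.
os.environ.setdefault("AWS_ACCESS_KEY_ID", "testing")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "testing")
os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")

from my_lambda import handler  # hypothetical module containing your handler


def _zip_bytes(source: str) -> bytes:
    """Package source code into an in-memory zip, as create_function expects."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("my_lambda.py", source)
    return buf.getvalue()


@mock_iam
@mock_lambda
def test_create_function_without_docker():
    # Pure metadata operations: no Docker daemon involved.
    iam = boto3.client("iam")
    role_arn = iam.create_role(
        RoleName="lambda-role",
        AssumeRolePolicyDocument='{"Version": "2012-10-17", "Statement": []}',
    )["Role"]["Arn"]

    awslambda = boto3.client("lambda")
    awslambda.create_function(
        FunctionName="my-func",
        Runtime="python3.9",
        Role=role_arn,
        Handler="my_lambda.handler",
        Code={"ZipFile": _zip_bytes("def handler(event, context): return event")},
    )
    assert awslambda.get_function(FunctionName="my-func")


def test_handler_logic():
    # Invoking through moto would spin up a docker-lambda container,
    # so exercise the handler in-process instead.
    assert handler({"key": "value"}, None) is not None
```

If you truly need the Invoke code path itself, moto (and LocalStack, by default) both execute the function in a container, so a Docker daemon is unavoidable there; in GitHub Actions that is less painful than on Windows, since the ubuntu-latest runners ship with Docker preinstalled.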

Related

AWS CDK unstable deployment of Lambda CustomResource

I use the CDK to deploy my AWS stack. It's a Next.js app with an RDS instance. I initialize the database using the CustomResource approach (a Lambda built from a Docker image), as suggested in that article.
Sometimes my deployment fails with the error message:
Received response status [FAILED] from custom resource. Message returned: Connection timed out after 120000ms
I'm sure it's because my database initialization takes too much time. I fill the database with "INSERT INTO" SQL queries, repeated about 5000 times.
Could you advise how to avoid this error? The deployment script is unstable and I can't rely on it. Many thanks.
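Two things that may help, hedged since the stack code isn't shown: give the init Lambda more headroom (the CDK default function timeout is only a few seconds, while Lambda allows up to 15 minutes), and batch the ~5000 INSERTs instead of issuing them one at a time. A sketch of the handler side, assuming pymysql and a hypothetical load_seed_rows() that yields the rows:

```python
import os

import pymysql  # assumed driver; any DB-API 2.0 client batches the same way


def load_seed_rows():
    """Hypothetical: return the ~5000 (col_a, col_b) tuples to insert."""
    return [("a", 1), ("b", 2)]


def handler(event, context):
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            # executemany sends the inserts in batches: far fewer round-trips
            # than 5000 individual INSERT statements.
            cur.executemany(
                "INSERT INTO example (col_a, col_b) VALUES (%s, %s)",
                load_seed_rows(),
            )
        conn.commit()
    finally:
        conn.close()
    return {"PhysicalResourceId": "db-init"}
```

On the CDK side, setting timeout=Duration.minutes(15) on the image-based function (and total_timeout on the Provider, if you use the provider framework) keeps a long init from being cut off mid-run.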

AWS Codebuild Project Unable to communicate with RDS db

I am attempting to have AWS CodeBuild run a Flyway migration. The DB and CodeBuild project are created via Terraform (the pipeline runs as a GitHub Action, if it matters).
That code is here.
I figured this solution would make the difference: AWS CodeBuild fails to interact with RDS instance
When the CodeBuild project is executed by my GitHub workflow (using the aws-actions/aws-codebuild-run-build action), the migration times out:
[Container] 2022/10/07 21:03:56 Running command flyway -user=$DB_USER -password=$DB_PASSWORD -url=jdbc:mariadb://$DB_HOST:$DB_PORT/$DB_NAME -createSchemas=true migrate
ERROR: Unable to obtain connection from database (jdbc:mariadb://***:***/***) for user '***': Could not connect to address=(host=***)(port=***)(type=master) : Socket fail to connect to host:***, port:***. connect timed out
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08000
Error Code : -1
Message : Could not connect to address=(host=***)(port=***)(type=master) : Socket fail to connect to host:***, port:***. connect timed out
Caused by: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=***)(port=***)(type=master) : Socket fail to connect to host:***, port:***. connect timed out
Caused by: java.sql.SQLNonTransientConnectionException: Socket fail to connect to host:***, port:***. connect timed out
Caused by: java.net.SocketTimeoutException: connect timed out
This tells me it's some sort of networking problem but I can't put my finger on what route might be missing. No NACLs other than the defaults. Just security groups. I have a similar pipeline in the AWS CDK that works. As near as I can tell, the security groups and IAM permissions are identical, as is the database config itself.
Looking for debugging tips or anything that's missing.
Consider setting the vpc_security_group_ids parameter on your aws_db_instance resource. That collection should include the security group you associated with your CodeBuild project. Currently it doesn't appear that your database has an associated security group, so traffic coming from your CodeBuild project isn't allowed through.
See the Terraform docs.
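As a first debugging step, a boto3 sketch like this (the instance identifier is hypothetical) can confirm whether the DB's security groups actually include an ingress rule referencing the CodeBuild project's security group:

```python
import boto3

rds = boto3.client("rds")
ec2 = boto3.client("ec2")

# Hypothetical identifier; substitute your own instance name.
db = rds.describe_db_instances(DBInstanceIdentifier="my-db")["DBInstances"][0]
db_sg_ids = [sg["VpcSecurityGroupId"] for sg in db["VpcSecurityGroups"]]
print("DB security groups:", db_sg_ids)

# For each group, list which ports are open and which source security
# groups are allowed -- the CodeBuild project's SG should appear here.
for sg in ec2.describe_security_groups(GroupIds=db_sg_ids)["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        sources = [p["GroupId"] for p in rule.get("UserIdGroupPairs", [])]
        print(sg["GroupId"], rule.get("FromPort"), rule.get("ToPort"), sources)
```

If the DB instance only carries the VPC's default group, that matches the symptom: the connection is silently dropped and Flyway times out instead of being refused.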

AWS Data Pipeline RDS to S3 activity error: Unable to establish connection to jdbc://mysql:

I am currently setting up an AWS Data Pipeline using the RDStoRedshift template. During the first RDStoS3Copy activity I am receiving the following error:
"[ERROR] (TaskRunnerService-resource:df-04186821HX5MK8S5WVBU_#Ec2Instance_2021-02-09T18:09:17-0) df-04186821HX5MK8S5WVBU amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to jdbc://mysql:/myhostname:3306/mydb No suitable driver found for jdbc://mysql:/myhostname:3306/mydb"
I'm relatively new to AWS services, but it seems the copy activity spins up an EC2 instance to do the copy. The error clearly states there isn't a driver available. Do I need to stand up an EC2 instance for AWS Data Pipeline to use and install the driver there?
Typically, when you are coding a solution that interacts with a MySQL RDS instance, especially a Java solution such as a Lambda function written with the Java runtime API or a cloud-based web app (e.g., a Spring Boot web app), you specify the driver using a POM/Gradle dependency.
For this use case, there is information about specifying a driver file here: https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-jdbcdatabase.html
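Two hedged observations: the URL in the error reads jdbc://mysql:/..., whereas the standard MySQL JDBC form is jdbc:mysql://host:port/db, so the connection string itself may be malformed; and per the linked page, a JdbcDatabase pipeline object lets you point jdbcDriverJarUri at a driver JAR staged in S3, so you don't have to install anything on the EC2 resource yourself. A sketch of what that object could look like when defined via boto3 (pipeline id, bucket, and JAR name are all hypothetical):

```python
import boto3

dp = boto3.client("datapipeline")

jdbc_db = {
    "id": "rds_mysql",
    "name": "rds_mysql",
    "fields": [
        {"key": "type", "stringValue": "JdbcDatabase"},
        # Note the corrected scheme: jdbc:mysql://..., not jdbc://mysql:/...
        {"key": "connectionString", "stringValue": "jdbc:mysql://myhostname:3306/mydb"},
        {"key": "jdbcDriverClass", "stringValue": "com.mysql.jdbc.Driver"},
        # Driver JAR staged on S3; the task runner fetches it at run time.
        {"key": "jdbcDriverJarUri", "stringValue": "s3://my-bucket/mysql-connector-java-8.0.33.jar"},
        {"key": "username", "stringValue": "admin"},
        {"key": "*password", "stringValue": "secret"},
    ],
}

dp.put_pipeline_definition(pipelineId="df-EXAMPLE", pipelineObjects=[jdbc_db])
```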

AWS ECS task fails to start because the daemon can't create a log stream

I have two versions of a service that run in the same cluster. I'm using the awslogs driver.
The v2 logs work fine; however, the v1 task fails to start because it can't create a log stream.
The setup is identical between services except for the container being used.
The log group exists, and the role has permissions to create a log stream and put log events; this is pretty much the same setup as v2, just in a different group.
CannotStartContainerError: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: RequestError: send request failed caused by: Post https://logs.eu-west-1-v1.amazonaws.com/: dial tcp: lookup logs.eu-west-1
I've set up a new service and tried to spin it up again, but it failed, so I thought this had to do with the container setup.
The official documentation here recommends adding this to the environment variables:
ECS_AVAILABLE_LOGGING_DRIVERS '["json-file","awslogs"]'
After adding this, it still failed. I've been searching on this for a while and would appreciate any help or, preferably, guidance.
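One hedged observation: the endpoint in the error, logs.eu-west-1-v1.amazonaws.com, is not a real CloudWatch Logs endpoint. The awslogs driver builds logs.<awslogs-region>.amazonaws.com from the awslogs-region option, and the DNS lookup failure ("dial tcp: lookup ...") suggests the region option in the v1 task definition may have the service version appended ("eu-west-1-v1" instead of "eu-west-1"). A sketch of what the corrected log configuration would look like when registering the task definition via boto3 (family, names, and image are hypothetical):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="service-v1",
    containerDefinitions=[
        {
            "name": "service-v1",
            "image": "my-repo/service:v1",
            "memory": 512,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/service-v1",
                    # Must be a bare region name; "eu-west-1-v1" would produce
                    # the unresolvable endpoint seen in the error.
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "v1",
                },
            },
        }
    ],
)
```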

Best practices with OpsWorks Setup Failure

Yesterday I set up our AWS OpsWorks bench. We are using a custom cookbook which we host on GitHub. I saw that the setup process failed and had a look at the logs: the custom cookbook could not be fetched from GitHub because GitHub was having server problems. Therefore the setup on the server failed and the process stopped.
Does anyone know how I could handle this sort of failure and restart the setup process until it succeeds?
One way to avoid issues like this is to host your assets on S3. Alternatively, you can poll the deployment status to determine whether it succeeded or failed and then apply some retry logic.
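A minimal sketch of the polling-and-retry idea with boto3, assuming a hypothetical stack id — create_deployment runs the setup command, and describe_deployments reports running/successful/failed:

```python
import time

import boto3

opsworks = boto3.client("opsworks")

STACK_ID = "my-stack-id"  # hypothetical


def run_setup_with_retries(max_attempts=3, poll_seconds=30):
    for attempt in range(1, max_attempts + 1):
        deployment_id = opsworks.create_deployment(
            StackId=STACK_ID,
            Command={"Name": "setup"},
        )["DeploymentId"]

        # Poll until the deployment leaves the "running" state.
        while True:
            dep = opsworks.describe_deployments(
                DeploymentIds=[deployment_id]
            )["Deployments"][0]
            if dep["Status"] != "running":
                break
            time.sleep(poll_seconds)

        if dep["Status"] == "successful":
            return deployment_id
        print(f"Setup attempt {attempt} failed; retrying...")

    raise RuntimeError("Setup never succeeded after retries")
```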