AWS Lambda for .NET Core using GitLab

I am new to GitLab and need to use it to push zipped Lambda code written in .NET Core to S3, which will later be referenced by Terraform.
Below are the steps that work WITHOUT GitLab:
Created a .NET Core Lambda application for DynamoDB using the command
"dotnet new lambda.Dynamo" (standard template)
Ran the command dotnet lambda package => generated a zip file in the release
folder of the solution
Wrote a Terraform script which references this zip file and ran
terraform to create the Lambda function in AWS
Now I need to achieve all this USING GITLAB.
Below are the pseudo steps:
Create a new GitLab repo and upload the .NET Core Lambda application for
DynamoDB
Add .gitlab-ci.yml =>
A. This script should create a zip file for the .NET Core project
B. This script should push the generated zip file to S3
Have Terraform reference this zip file from S3
Question 1 - Do the above pseudo steps for GitLab make logical sense?
Question 2 - Can I get some guidance on a build script which can help achieve the pseudo steps? Thanks!
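For reference, a minimal sketch of a .gitlab-ci.yml along these lines. The SDK image tag, the MY_DEPLOY_BUCKET variable, and the object key are assumptions to be replaced with your own values, and AWS credentials are assumed to be provided via protected CI/CD variables:

# .gitlab-ci.yml - minimal sketch; image tags and names are placeholders
stages:
  - build
  - upload

build:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:6.0    # pick the SDK matching your project
  script:
    - dotnet tool install -g Amazon.Lambda.Tools
    - export PATH="$PATH:$HOME/.dotnet/tools"
    - dotnet lambda package -c Release -o lambda.zip
  artifacts:
    paths:
      - lambda.zip

upload:
  stage: upload
  image:
    name: amazon/aws-cli
    entrypoint: [""]                          # override the image's aws entrypoint
  script:
    - aws s3 cp lambda.zip "s3://$MY_DEPLOY_BUCKET/lambda.zip"

On the Terraform side, the function would then point at that bucket and key instead of a local file. A sketch, with the handler string, runtime, and IAM role reference as placeholders:

# Sketch: reference the zip that CI pushed to S3 (names match the CI sketch above)
resource "aws_lambda_function" "dynamo_handler" {
  function_name = "dynamo-handler"
  s3_bucket     = "my-deploy-bucket"          # same bucket as $MY_DEPLOY_BUCKET
  s3_key        = "lambda.zip"
  handler       = "MyProject::MyProject.Function::FunctionHandler"
  runtime       = "dotnetcore2.1"
  role          = aws_iam_role.lambda_role.arn
}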

Related

sam local invoke does not work with symlink

I am building a TypeScript project with AWS CDK and SAM.
I am trying to set up a backend where I have several Lambda functions which share a common lib (which will ultimately talk to DynamoDB).
I created a very simple demo with one hello-world lambda importing from one common my-lib package (using yarn workspaces).
https://github.com/ziggy6792/aws-cdk-lambda-shared-package
I am using yarn workspaces to share my common my-lib library with my hello-world lambda.
If I deploy this stack to AWS and run my hello-world lambda (by testing from the AWS console), it works! (It successfully imports my-lib and does not error.)
However, I can't invoke my lambda function locally.
I have tried to use this method (which I found here) to mock locally (this method works fine when I don't import my common my-lib):
cdk synth --no-staging > template.yml (to find the logical lambda function id = HelloWorldLambda5A02458E)
sam local invoke HelloWorldLambda5A02458E
But I get an error
{"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'my-lib/MyLib'\nRequire stack:\n- /var/task/index.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js"}
It seems sam local invoke does not like my local dependency, which is created as a symlink by yarn workspaces. If I replace the symlink my-lib with a hard copy of the my-lib package, the code runs fine locally. (But I don't want to do this every time I want to run locally.)
My question is:
How can I invoke my hello-world lambda function locally?
Thanks a lot
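One way to avoid doing the hard-copy workaround by hand is to script the copy-and-restore around the invocation. A rough sketch, assuming the symlink lives at node_modules/my-lib (adjust the paths and the logical ID to your project):

#!/bin/sh
# Temporarily replace the yarn-workspace symlink with a real copy,
# invoke the function locally, then restore the symlink.
TARGET=$(readlink node_modules/my-lib)                 # remember the link target
cp -RL node_modules/my-lib node_modules/.my-lib.real   # -L dereferences the symlink
rm node_modules/my-lib
mv node_modules/.my-lib.real node_modules/my-lib

cdk synth --no-staging > template.yml
sam local invoke HelloWorldLambda5A02458E

rm -rf node_modules/my-lib                             # put the symlink back
ln -s "$TARGET" node_modules/my-lib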

AWS: .NET Core: zip the built code and copy to S3 output bucket

I am a .NET developer using a .NET Core 2.x application; I build and upload the release code to an S3 bucket. Later that code will be used to deploy to an EC2 instance.
I am new to CI/CD on AWS and in the learning phase.
To create CI/CD for my sample project, I went through some AWS tutorials and was able to create the following buildspec.yml file. Using that file I can run a successful build.
The problem comes in the UPLOAD_ARTIFACTS phase. I am unable to understand how to create a zip file to upload to the S3 bucket specified in the build project.
My buildspec.yml contains the following code; please help me find what is wrong or what I am missing.
version: 0.2
phases:
  build:
    commands:
      - dotnet restore
      - dotnet build
artifacts:
  files:
    - target/cicdrepo.zip
    - .\bin\Debug\netcoreapp2.1\*
I think I have to add post_build and some commands that will generate the zip file, but I don't know the commands.
Your file is good. All you need to do is create an S3 bucket, then
configure your CodeBuild project to zip (or not) your artifacts for you and store them in S3.
This is configured in the build project's artifact settings.
Edit:
if you want all your files to be copied at the root of your zip file, you can use:
artifacts:
  files:
    - ...
  discard-paths: yes
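Putting the two together, a sketch of a buildspec that publishes the app and lets CodeBuild do the zipping. The output path and framework are assumptions based on the question's netcoreapp2.1 project, and the CodeBuild project's artifact packaging is assumed to be set to Zip:

version: 0.2
phases:
  build:
    commands:
      - dotnet restore
      - dotnet build
  post_build:
    commands:
      - dotnet publish -c Release -o ./publish
artifacts:
  files:
    - '**/*'
  base-directory: publish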

AWS C# Lambda function code not deployed after successful deployment

I am trying to deploy a C# .NET Core 2.0 Lambda function, created in Visual Studio, to AWS Lambda.
I am using these commands on the command line:
dotnet lambda package -c Release -f netcoreapp2.0
which creates the release folder with the zip deployment file.
After that I issue:
dotnet lambda deploy-function -fn AWSLambda1
and the function is created on AWS.
But when I open the Lambda function in the console, there is no code in it.
When I try to upload the zip deployment file, it does not work and the code is not deployed.
Please help.
Thanks
Got the same issue: it uploads the function but not the code. Also tried overwriting an existing lambda, no joy.
OK, I think I figured this out: when you publish a dotnet lambda project from the CLI, by default it creates a DLL; the deploy function then zips and uploads the DLL to AWS Lambda. Naturally you can't then inspect individual code files, as they are compiled into the DLL. Maybe there's some option to upload the raw code files.
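A quick way to sanity-check the package before deploying is to list the zip contents and confirm the compiled DLL is inside; the path below is the default output location for the commands above and is an assumption:

unzip -l bin/Release/netcoreapp2.0/AWSLambda1.zip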
Lambda deployment through the command line:
Step 1: dotnet tool install -g Amazon.Lambda.Tools
Step 2: dotnet lambda deploy-serverless
Note: Step 2 deploys the whole serverless application; it is required for the first deployment.
Step 3: If you want to deploy a specific Lambda, use the command below.
dotnet lambda deploy-function Getdata
Note: Getdata is the function name mentioned in the Resources section of the serverless.template file.
Add the configuration below to "aws-lambda-tools-defaults.json":
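The configuration itself was not shown; for reference, a typical aws-lambda-tools-defaults.json for a serverless deployment looks something like this sketch (all values are placeholders to be replaced with your own):

{
  "profile": "default",
  "region": "us-east-1",
  "configuration": "Release",
  "framework": "netcoreapp2.0",
  "s3-bucket": "my-deployment-bucket",
  "template": "serverless.template",
  "stack-name": "my-lambda-stack"
}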

Lambda function throws class not found exception when deployed with Jenkins generated zip file

I'm working on an AWS Lambda function. I deploy it by uploading a zip file of the source code (a project written in Java 8).
The project is built using Gradle. Upon a successful build, it generates the deployment zip.
This works perfectly fine when I deploy the locally generated zip to the Lambda function.
Working scenario:
Zip generated through a Gradle build locally in my workspace -> copied to an AWS S3
location -> specify the S3 zip path in the Lambda upload/specify-URL-path field.
But when I generate the Gradle build from Jenkins, the zip which is generated does not work in the Lambda function; it throws a "class not found" exception.
Exception scenario:
Zip generated through Gradle in Jenkins -> copied to an AWS S3 location ->
specify the S3 zip path in the Lambda upload/specify-URL-path field.
Class not found: com.sample.HelloWorld: java.lang.ClassNotFoundException
java.lang.ClassNotFoundException: com.sample.HelloWorld
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
I suspected this could be an issue with the file permissions of the content inside the zip file. I verified this by comparing both zips' contents in a Linux
environment. I could see that files in the zip generated by Jenkins lacked some permissions, so I added permission handling for the zip contents in
my Gradle build code.
task zip(type: Zip) {
    archiveName 'lambda-project.zip'
    fileMode 0777
    from sourceSets.main.output.files.each { zipTree(it) }
    from (configurations.runtime) {
        into 'lib'
    }
}
But I'm still getting the same error. I can see the file contents now have full permissions, yet the error persists.
Note:
Tried making the deployment package a jar and tested; still the same error.
I have configured the Lambda handler correctly. Example: the class name is "HelloWorld.java" and the package name is com.sample, so
my Lambda handler configuration is com.sample.HelloWorld. I'm pretty confident about this point because the same configuration
works fine when the zip is generated locally.
I have compared the zip contents (locally generated and Jenkins generated) and could not see any difference between them.
The directories inside the zip file were lacking permissions. I had provided file permissions earlier, but it worked only after also providing permissions for directories in the Gradle build:
dirMode 0777
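For clarity, this is the question's zip task with that one line added (the classes line is also written in the more common from sourceSets.main.output form):

task zip(type: Zip) {
    archiveName 'lambda-project.zip'
    fileMode 0777
    dirMode 0777    // directory entries inside the zip need permissions too
    from sourceSets.main.output
    from(configurations.runtime) {
        into 'lib'
    }
}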
I would recommend using the Serverless Framework for Lambda deployment; it lets you deploy Lambda functions without much hassle. But if you want to set up CI, CD, monitoring and logging, then you can refer to the book below.

Deploy .war to AWS

I want to deploy a war from Jenkins to the cloud.
Could you please let me know how to deploy a war file from Jenkins on my local machine to AWS Elastic Beanstalk?
I tried using a Jenkins post-process plugin to copy the artifact to S3, but I get the following error:
ERROR: Failed to upload files java.io.IOException: put Destination [bucketName=https:, objectName=/s3-eu-west-1.amazonaws.com/bucketname/test.war]:
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Connect to s3.amazonaws.com/s3.amazonaws.com/ timed out at hudson.plugins.s3.S3Profile.upload(S3Profile.java:85) at hudson.plugins.s3.S3BucketPublisher.perform(S3BucketPublisher.java:143)
Some work has been done on this.
http://purelyinstinctual.com/2013/03/18/automated-deployment-to-amazon-elastic-beanstalk-using-jenkins-on-ec2-part-2-guide/
Basically, this just adds a post-build task to run the standard command-line deployment scripts.
From the referenced page, assuming you have the Post-build Task plugin in Jenkins and the AWS command line tools installed:
STEP 1
In a Jenkins job configuration screen, add a “Post-build action” and choose the plugin “Publish artifacts to S3 bucket”. Specify the Source (in our case we use Maven, so the source is target/*.war) and the Destination, your S3 bucket name.
STEP 2
Then, add a “Post-build task” (if you don’t have it, it is available as a Jenkins plugin) to the same section above (“Post-build Actions”) and drag it below “Publish artifacts to S3 bucket”. This is important: we want to make sure the war file is uploaded to S3 before proceeding with the scripts.
In the Post-build task portion, make sure you check the box “Run script only if all previous steps were successful”
In the script text area, put in the path of the script that automates the deployment (described in STEP 3 below). For us, we put something like this:
<path_to_script_file>/deploy.sh "$VERSION_NUMBER" "$VERSION_DESCRIPTION"
$VERSION_NUMBER and $VERSION_DESCRIPTION are Jenkins build parameters and must be specified when a deployment is triggered. Both variables are used for the AEB deployment.
STEP 3
The script
#!/bin/sh
export AWS_CREDENTIAL_FILE=<path_to_your aws.key file>
export PATH=$PATH:<path to bin file inside the "api" folder inside the AEB Command line tool (A)>
export PATH=$PATH:<path to root folder of s3cmd (B)>

# Get the current time and append it to the name of the .war file being deployed.
# This creates a unique identifier for each .war file and allows us to roll back easily.
current_time=$(date +"%Y%m%d%H%M%S")
original_file="app.war"
new_file="app_$current_time.war"

# Rename the deployed war file with the new name.
s3cmd mv "s3://<your S3 bucket>/$original_file" "s3://<your S3 bucket>/$new_file"

# Create an application version in AEB and link it with the renamed WAR file.
elastic-beanstalk-create-application-version -a "Hoiio App" -l "$1" -d "$2" -s "<your S3 bucket>/$new_file"