.. aaaand me again :)
This time with a very interesting problem.
Again an AWS Lambda function, Node.js 12, JavaScript, Ubuntu 18.04 for local development, aws cli/aws sam/Docker/IntelliJ. Everything works perfectly locally, so it's time to deploy.
So I set up an AWS account for tests, created and assigned an access key/secret, and finally tried to deploy.
Almost at the end an error pops up, aborting the deployment.
I'm showing the SAM CLI version from a terminal, but the same happens with IntelliJ.
(Of course I've masked/changed some names.)
From a terminal I go to where I have my local sandbox with the project and then:
$ sam deploy --guided
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: MyActualProjectName
AWS Region [us-east-1]: us-east-2
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]: y
SAM configuration environment [default]:
Looking for resources needed for deployment: Not found.
Creating the required resources...
Successfully created!
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-7qo1hy7mdu9z
A different default S3 bucket can be set in samconfig.toml
Saved arguments to config file
Running 'sam deploy' for future deployments will use the parameters saved above.
The above parameters can be changed by modifying samconfig.toml
Learn more about samconfig.toml syntax at
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
Error: Unable to upload artifact MyFunctionName referenced by CodeUri parameter of MyFunctionName resource.
ZIP does not support timestamps before 1980
$
I spent quite some time looking around for this problem but I found only some old threads.
In theory this problem was solved back in 2018 ... but probably some npm libraries I had to use contain something old ... how in the world do I fix this stuff?
In one thread I found a kind of workaround.
In the buildspec.yml file somebody suggested adding, AFTER the npm install:
ls $CODEBUILD_SRC_DIR
find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
Basically the idea is to touch all the files installed by the npm install, but the error still happens.
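(For reference, a more targeted variant of the same idea - just a sketch, assuming GNU find is available in the build image - would touch only the files whose mtime actually predates the ZIP epoch of 1980-01-01:)
# List the offending files first (GNU findutils syntax):
find node_modules ! -newermt "1980-01-01" -print
# Then normalize just those files to the current time:
find node_modules ! -newermt "1980-01-01" -exec touch {} \;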
This is my buildspec.yml file after the modification:
version: 0.2
phases:
  install:
    commands:
      # Install all dependencies (including dependencies for running tests)
      - npm install
      - ls $CODEBUILD_SRC_DIR
      - find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
  pre_build:
    commands:
      # Discover and run unit tests in the '__tests__' directory
      - npm run test
      # Remove all unit tests to reduce the size of the package that will be ultimately uploaded to Lambda
      - rm -rf ./__tests__
      # Remove all dependencies not needed for the Lambda deployment package (the packages from devDependencies in package.json)
      - npm prune --production
  build:
    commands:
      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
I will continue to search, but again I wonder if somebody here has had this kind of problem and has some suggestions/methodology about how to solve it.
Many many thanks!
Steve
Related
I have a CI/CD build pipeline in Azure DevOps for building a .NET Core AWS API Gateway serverless application. The pipeline is using hosted Windows 2019. The step that fails is:
steps:
  - task: AmazonWebServices.aws-vsts-tools.LambdaNETCoreDeploy.LambdaNETCoreDeploy@1
    displayName: 'Build solution and generate CloudFormation template.'
    inputs:
      awsCredentials: 'AWS - Development (Infrastructure)'
      regionName: 'ap-southeast-2'
      command: deployServerless
      packageOnly: true
      packageOutputFile: '$(Build.ArtifactStagingDirectory)\serverless-output.yaml'
      lambdaProjectPath: testapi/LCSApi.csproj
      s3Bucket: 'api-dev-xxxxxxxx-s3'
      s3Prefix: 'azure_devops_builds/lcs/'
      additionalArgs: '-template serverless.template'
All I get from the error is the following:
Beginning Serverless Deployment
Performing package-only build of serverless application, output template will be placed in D:\a\1\a\serverless-output.yaml
"C:\Program Files\dotnet\dotnet.exe" lambda package-ci -ot D:\a\1\a\serverless-output.yaml --region ap-southeast-2 --s3-bucket api-dev-xxxxxx-s3 --s3-prefix azure_devops_builds/lcs/ --disable-interactive true -template serverless.template
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET Core program, but dotnet-lambda does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
##[error]Error: The process 'C:\Program Files\dotnet\dotnet.exe' failed with exit code 1
Finishing: Build solution and generate CloudFormation template.
However, if I re-run the pipeline straight after this failure, it works fine. Additionally, it does not always fail with this error; around 70-80% of the time the pipeline works fine. What could this be and how can I address it?
Can you try adding this before your step:
- powershell: |
    dotnet tool install --global Amazon.Lambda.Tools --version 3.1.1
    dotnet tool update -g Amazon.Lambda.Tools
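Put together with the task from your pipeline, the step order would look roughly like this (a sketch; the displayName of the install step is illustrative):
steps:
  # Make sure the dotnet-lambda global tool exists before the AWS task needs it
  - powershell: |
      dotnet tool install --global Amazon.Lambda.Tools --version 3.1.1
      dotnet tool update -g Amazon.Lambda.Tools
    displayName: 'Install Amazon.Lambda.Tools'
  - task: AmazonWebServices.aws-vsts-tools.LambdaNETCoreDeploy.LambdaNETCoreDeploy@1
    displayName: 'Build solution and generate CloudFormation template.'
    inputs:
      awsCredentials: 'AWS - Development (Infrastructure)'
      regionName: 'ap-southeast-2'
      command: deployServerless
      packageOnly: true
      packageOutputFile: '$(Build.ArtifactStagingDirectory)\serverless-output.yaml'
      lambdaProjectPath: testapi/LCSApi.csproj
      s3Bucket: 'api-dev-xxxxxxxx-s3'
      s3Prefix: 'azure_devops_builds/lcs/'
      additionalArgs: '-template serverless.template'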
For the buildspec.yml (Generalized)
version: 0.2
env:
  secrets-manager:
    User: CodeBuild/Auth:User_Name
    Password: CodeBuild/Auth:Password
phases:
  pre_build:
    commands:
      - echo ${User}
      - echo ${Password}
  post_build:
    commands:
      - mvn clean deploy -Dnexus.user=$User -Dnexus.password=$Password
The echo commands give me ***, which is masked, so I think I'm good up to this point. Also, I'm using the DefaultEncryptionKey for AWS Secrets Manager.
Inside of the settings.xml for Maven I have
<username>${nexus.user}</username> and <password>${nexus.password}</password>
But when the mvn command runs it's returning a 401 authorization error...
I added AdministratorAccess to the CodeBuild role just in case, same 401 error.
If I declare the variables in clear text, the mvn command works. I'm just missing one thing but I can't find any documentation on how to get around this. Any help would be greatly appreciated.
Update:
I switched everything over to Parameter Store and it worked with minimal adjustments. I would like to know what I did wrong with setting up secrets-manager, but I successfully got around it. I'll leave this here just in case someone else struggles with this.
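(For reference, the parameter-store equivalent in a buildspec looks roughly like this; the parameter names are illustrative:)
env:
  parameter-store:
    User: /CodeBuild/nexus_user
    Password: /CodeBuild/nexus_password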
The format that you used to specify the secrets from secrets manager looks a bit off. Here's the documentation: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax.
The secrets are referenced as:
key: secret-id:json-key:version-stage:version-id
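So, as a minimal sketch of that syntax (assuming a secret named prod/Auth with json keys User_Name and Password, pinned to the AWSCURRENT version stage):
env:
  secrets-manager:
    # <variable>: <secret-id>:<json-key>[:<version-stage>[:<version-id>]]
    User: prod/Auth:User_Name:AWSCURRENT
    Password: prod/Auth:Password:AWSCURRENT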
I have just started working with AWS. I am trying to deploy a Node.js application using Codeship and AWS CodeDeploy. I have succeeded in deploying the application from Codeship to an EC2 instance. But the problem is that I am not able to run the hooks file in appspec.yml. My appspec.yml is given below:
---
version: 0.0
os: linux
files:
  - destination: /home/ec2-user/node-project
    source: /
hooks:
  ApplicationStart:
    - location: bin/app-start.sh
      runas: root
      timeout: 100
In app-start.sh I have:
#!/bin/bash
npm install
The app-start.sh never works and node_modules are never installed. I have also tried to debug via the CodeDeploy log path (/var/log/aws/codedeploy-agent/codedeploy-agent.log), but there are no errors or warnings. I have also tried multiple things but nothing is working.
The project is successfully installed on the EC2 instance, but appspec.yml never launches app-start.sh. Any help would be appreciated.
The issue is that your files are copied to /home/ec2-user/node-project during the Install phase, which happens before your app-start.sh gets run at the ApplicationStart lifecycle hook, but hook scripts do not run from that directory. You need to cd into the right directory before running npm install.
Updated ApplicationStart scripts:
#!/bin/bash
cd /home/ec2-user/node-project
npm install
# You'll need to start your application too.
npm start
As an aside, you may want to use the AfterInstall lifecycle hook to run npm install, just for organization purposes, but it will make no functional difference.
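If you do split it that way, a sketch of the hooks section (assuming a hypothetical second script bin/after-install.sh that does the cd plus npm install) would be:
hooks:
  AfterInstall:
    - location: bin/after-install.sh   # cd /home/ec2-user/node-project && npm install
      runas: root
      timeout: 100
  ApplicationStart:
    - location: bin/app-start.sh       # cd /home/ec2-user/node-project && npm start
      runas: root
      timeout: 100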
What I am trying to do is enable continuous delivery from GitLab to my Compute Engine instance on Google Cloud. I have Ubuntu 16.04 LTS running over there. I installed all the components needed to run my project, like Swift, Vapor and nginx.
I have managed to install the GitLab runner as well and created a runner which is accessible from my GitLab repo. Every time I push to master the runner triggers. What happens is a failure due to:
could not create leading directories of '/home/gitlab-runner/builds/2bbbbbd/0/Server/Packages/vapor.git': Permission denied
If I change the permissions with chmod -R 777, it will hang on "running" for the build stage, visible in the GitLab pipeline.
I did something like:
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/cache
but this hasn't helped; the error is the same Permission denied.
Below is my .gitlab-ci.yml:
before_script:
  - swift --version

stages:
  - build
  - deploy

job_build:
  stage: build
  before_script:
    - vapor clean
  script:
    - vapor build --release
  only:
    - master

job_run_app:
  stage: deploy
  script:
    - echo "Deploy a API"
    - vapor run --name=App --env=production
  environment:
    name: production

job_run_frontend:
  stage: deploy
  script:
    - echo "Deploy a Frontend"
    - vapor run --name=Frontend --env=production
  environment:
    name: production
But that never passes to the next stage, i.e. deploy. I waited more than 14h for it, but without result.
And... I have a few more questions:
The GitLab runner creates builds under /home/gitlab-runner/builds/, and in this location every new job gets its own folder, e.g. /home/gitlab-runner/builds/2bbbbbd/, in which my project is checked out and the commands are executed. So what happens when the first one is running and I deploy a new version? Are the ports blocked by the first instance, and so on?
If I want to enable supervisor, how do I do that when the folder is different on every deploy?
Can anyone explain, show me, or point me to a tutorial on how to do continuous deployment without Docker?
How to start a service using GitLab runner
Thanks to a long, deep search I finally found an answer! The full article can be found above.
Briefly: the GitLab CI documentation recommends using dpl for deployment. The GitLab runner should run the tests and then the process should end; the runner is designed to kill all the processes it created after finishing each build, and it is unable to perform operations outside its build directory.
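As an illustration of that constraint (just a sketch, assuming the app is wrapped in a hypothetical systemd unit named vapor-app that the gitlab-runner user is allowed to restart via sudo), the deploy job hands the long-running process over to the service manager instead of starting it directly:
job_run_app:
  stage: deploy
  script:
    # Hand off to systemd so the app survives after the runner
    # kills every process it spawned for this job.
    - sudo systemctl restart vapor-app
  environment:
    name: production
  only:
    - master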
I was able to set up integration between GitHub and AWS CodePipeline, so now my code is uploaded to S3 after a push event by a Lambda function. That works very well.
A new ZIP with source code on S3 triggers a pipeline, which builds the code. That's fine. Now I'd like to also build a Docker image for the project.
The first problem is that you can't mix a project (nodejs) build and a Docker build. That's fine, makes sense. The next issue is that you can't have another buildspec.yml for the Docker build. You have to specify the build commands manually; ok, that works as a workaround.
The biggest problem though, or lack of my understanding, is how to put the Docker build into the pipeline: the first build step builds the project, then the next build step builds the Docker image. Two standalone AWS CodeBuild projects.
The thing is that a pipeline build step has to produce an artifact on the output. But a docker build doesn't produce any files, and it looks like the final docker push after docker build doesn't qualify as an artifact for the pipeline service.
Is there a way how to do it?
Thanks
A bit late, but hopefully it will be helpful for someone. You should have the Docker image published as part of your post_build phase commands. Here's an example of a buildspec.yml:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region $AWS_REGION)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE .
      - "docker tag $IMAGE $REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}"
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - "docker push $REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}"
      - "echo {\\\"image\\\":\\\"$REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}\\\"} > image.json"
artifacts:
  files:
    - 'image.json'
As you can see, the CodeBuild project expects a few parameters - AWS_REGION, REPO and IMAGE - and publishes the image to AWS ECR (but you can use a registry of your choice). It also uses the existing CODEBUILD_BUILD_ID environment variable to extract a dynamic value for the image tag. After the image is pushed, it creates a JSON file with the full path to the image and publishes it as an artifact for CodePipeline to use.
For this to work, the CodeBuild project "environment image" should be of type "docker" with the "privileged" flag activated. When creating the CodeBuild project in your pipeline, you can also specify the environment variables that are used in the buildspec file above.
There is a good tutorial on this topic here:
http://queirozf.com/entries/using-aws-codepipeline-to-automatically-build-and-deploy-your-app-stored-on-github-as-a-docker-based-beanstalk-application
Sorry about the inconvenience. Making it less restrictive is on our roadmap. Meanwhile, in order to use the CodeBuild action, you can use a dummy file as the output artifact.
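A minimal sketch of that dummy-file trick in a buildspec (the file name is arbitrary):
post_build:
  commands:
    - touch dummy.txt   # CodePipeline requires the action to emit some artifact
artifacts:
  files:
    - dummy.txt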