I'm using the claudia.js CLI to deploy functions and a web API to AWS Lambda and API Gateway.
My project file structure is as follows:
functions
--function1
---- node_modules
---- package.json
---- index.js
---- claudia.json
--function2
---- node_modules
---- package.json
---- index.js
---- claudia.json
The problem is that in order to deploy a new version I have to run "claudia update" separately in every function folder, once per function. Is there a way to tell claudia.js to update all my functions at once?
Rather than getting ClaudiaJS to do the work, use a tool to run ClaudiaJS.
Most monorepo tools will suffice, such as Lerna, but there is a gamut of less opinionated tools if you don't care for what Lerna offers - Lolaus is pretty low-level.
With Lerna you would need to adopt its prescribed repo structure and get linked node_modules; then lerna run deploy would run the deploy npm script of each package that defines one.
With Lolaus you would search for all of your functions and then run an arbitrary command in each directory: lolaus "*/*/claudia.json" claudia update
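As a rough sketch of the Lerna route (hedged: it assumes your functions are converted into Lerna packages and that each package.json gains a "deploy" script that runs claudia update), the workflow looks roughly like this:

# one-time setup: create lerna.json and the prescribed package layout
npx lerna init
# link/install node_modules for every package
npx lerna bootstrap
# run the "deploy" npm script in every package that defines one
npx lerna run deploy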
We have a lambda repo with multiple separate lambdas, each in its own subfolder.
> lambdas
> |_lambda1
> |___main.js
> |___main.spec.js
> |___claudia.json
> |___package.json
> |_lambda2
> |___main.js
> |___main.spec.js
> |___claudia.json
> |___package.json
> |_helpers
> |_test.sh
> |_deploy.sh
We use npm and a bash script to iterate over each lambda and run a consistent set of npm/eslint commands on it. If that passes, the build process runs a claudia command the same way on each lambda. There is some copy and paste involved, but it works for us; a sketch of the deploy script is below.
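A minimal sketch of what such a deploy.sh could look like (the folder layout matches the tree above; the exact npm scripts are assumptions):

#!/usr/bin/env bash
# Iterate over every lambda subfolder and deploy it with Claudia.
set -euo pipefail

for dir in lambdas/*/; do
  # Skip folders that are not Claudia-managed lambdas (e.g. helpers/).
  [ -f "${dir}claudia.json" ] || continue

  echo "Deploying ${dir} ..."
  (
    cd "$dir"
    npm ci            # clean install of dependencies
    npm run lint      # assumed eslint script
    npm test          # run the *.spec.js tests
    claudia update    # deploy the new version
  )
done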
When using the AWS SAM CLI to build a serverless application, it locates dependencies magically and installs them all as part of the "build" step. For example, with a Node.js application:
$> sam build
Building resource 'HelloWorldFunction'
Running NodejsNpmBuilder:NpmPack
Running NodejsNpmBuilder:CopyNpmrc
Running NodejsNpmBuilder:CopySource
Running NodejsNpmBuilder:NpmInstall
Running NodejsNpmBuilder:CleanUpNpmrc
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Deploy: sam deploy --guided
$>
Looking at the official documentation they're happy to simply treat it like magic, saying that it:
iterates through the functions in your application, looks for a manifest file (such as requirements.txt) that contains the dependencies, and automatically creates deployment artifacts that you can deploy to Lambda
But what if I have a dependency beyond just those specified in the manifest file? What if my code depends on a compiled binary file, or a static data file?
I would like to add additional build steps so that when I run sam build it compiles these files or copies them appropriately. Is there any way to do this?
sam build runs npm install, so if you hook your own script into a lifecycle step such as preinstall (under "scripts" in package.json), sam build will execute that step as well.
package.json
{
  ...
  "scripts": {
    "preinstall": "cp -r ../../../common ./"
  },
  ...
}
The above preinstall script is a hack that copies the common directory from the root of the project created by sam init into the zip of each Lambda handler, so that it can be referenced from each of them.
You should also create a symbolic link in the local Lambda handler directory, like ln -s ../common ./common, so that local runs and the deployed Lambda work with the same code.
You will need to wrap this command into another custom command and add the steps you need to it.
You can create a Makefile with multiple targets that satisfy your requirements.
I haven't used sam build before; I usually have a make target for that purpose.
You can give it a try with this bootstrap template: https://github.com/healthbridgeltd/nodejs-sam-bootstrap, which is more efficient than using sam build.
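To illustrate the wrapping idea, a small shell script (a sketch only; the file names and extra steps are assumptions, not part of SAM) can do the additional work and then delegate to sam build:

#!/usr/bin/env bash
# build.sh - hypothetical wrapper around `sam build` with custom pre-build steps.
set -euo pipefail

# 1. Produce the artifacts that no manifest file knows about.
mkdir -p hello-world/bin
gcc -O2 -o hello-world/bin/mytool tools/mytool.c   # example compiled binary
cp data/lookup-table.json hello-world/             # example static data file

# 2. Run the stock SAM build, which packages whatever now sits in the function folder.
sam build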
Our Jenkins is set up in AWS and we did not manage to use build agents (slaves). Since the platform is big and some artifacts contain many others, our Jenkins reaches its limits when multiple developers commit to different repositories and it is forced to run multiple jobs at the same time.
The aim is to:
- Stay with Jenkins, since our processes are documented around it and we use many plugins, e.g. test result summary and GitHub integration
- Run jobs in CodeBuild and get the feedback in Jenkins, to improve performance
Are there best practices for this?
We did the following steps to build big artifacts outside of Jenkins:
- Install the Jenkins CodeBuild plugin
- Create a Jenkins pipeline
- Store the settings.xml for the Maven build in S3
- Store credentials in Systems Manager Parameter Store for use in CodeBuild and Maven
- Create a CodeBuild project with the necessary permissions and the following functionality (sketched below):
-- Get settings.xml from S3
-- Run Maven with the necessary credentials
-- Store the test results in S3
- Create a Jenkinsfile with the following functionality:
-- Get the commit ID and run CodeBuild with it
-- Get the generated test result files from S3
-- Delete the generated files from S3
-- Pass the files to Jenkins to show the test results
With this approach we managed to reduce the runtime to 5 mins.
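A hedged sketch of the shell commands behind that CodeBuild project (the bucket names, the SSM parameter, and the Maven property are placeholders):

# Get settings.xml from S3.
aws s3 cp s3://my-build-config/settings.xml ./settings.xml

# Read the repository credentials from Systems Manager Parameter Store.
REPO_PASSWORD=$(aws ssm get-parameter --name /build/repo-password --with-decryption --query Parameter.Value --output text)

# Run the Maven build with the downloaded settings and the injected credentials.
mvn -s settings.xml -Drepo.password="$REPO_PASSWORD" clean verify

# Store the test results in S3 so the Jenkinsfile can fetch and display them.
aws s3 cp --recursive target/surefire-reports "s3://my-build-results/$CODEBUILD_RESOLVED_SOURCE_VERSION/"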
The next challenge we had was to build an Angular application on top of a Java microservice, create a Docker image and push it to different environments. This job was running around 25 minutes in Jenkins.
We did the following steps to build the Docker images outside of Jenkins:
- Install the Jenkins CodeBuild plugin
- Create a Jenkins pipeline
- Store the settings.xml for the Maven build in S3
- Store credentials in Systems Manager Parameter Store for use in CodeBuild and Maven
- Create a CodeBuild project with the necessary permissions and the following functionality (sketched below):
-- Get settings.xml from S3
-- Log in to ECR in all environments
-- Build the Angular app
-- Build the Java app
-- Copy the files needed for the Docker build
-- Build the Docker image
-- Push it to all environments
- Create a Jenkinsfile with the following functionality:
-- Get the branch names of both repositories to build the Docker image from
-- Get each branch's latest commit ID
-- Call the CodeBuild project with both commit IDs (note that the main repository needs the buildspec)
With this approach we managed to reduce the runtime from around 25 minutes to 5 minutes.
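A hedged sketch of the CodeBuild commands for this second project (the account ID, region, directory names and image name are placeholders):

# Get settings.xml from S3 and log in to ECR in each target account/environment.
aws s3 cp s3://my-build-config/settings.xml ./settings.xml
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 111111111111.dkr.ecr.eu-central-1.amazonaws.com

# Build the Angular app and the Java app, then assemble the Docker build context.
(cd frontend && npm ci && npm run build)
mvn -s settings.xml -f backend/pom.xml clean package
cp -r frontend/dist backend/target/*.jar docker/

# Build the image and push it (repeat the login/push per environment).
docker build -t 111111111111.dkr.ecr.eu-central-1.amazonaws.com/my-service:latest docker/
docker push 111111111111.dkr.ecr.eu-central-1.amazonaws.com/my-service:latest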
Sample code in: https://github.com/felipeloha/samples/tree/master/jenkins-codebuild
GitHub's Google Cloud Build integration does not detect a cloudbuild.yaml or Dockerfile if it is not in the root of the repository.
When using a monorepo that contains multiple cloudbuild.yamls, how can GitHub's Google Cloud Build integration be configured to detect the correct cloudbuild.yaml?
File paths:
services/api/cloudbuild.yaml
services/nginx/cloudbuild.yaml
services/websocket/cloudbuild.yaml
You can do this by adding a cloudbuild.yaml in the root of your repository with a single gcr.io/cloud-builders/gcloud step. This step should:
Traverse each subdirectory or use find to locate additional cloudbuild.yaml files.
For each found cloudbuild.yaml, fork and submit a build by running gcloud builds submit.
Wait for all the forked gcloud commands to complete.
There's a good example of one way to do this in the root cloudbuild.yaml within the GoogleCloudPlatform/cloud-builders-community repo.
If we strip out the non-essential parts, basically you have something like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      for d in */; do
        config="${d}cloudbuild.yaml"
        if [[ ! -f "${config}" ]]; then
          continue
        fi
        echo "Building $d ... "
        (
          gcloud builds submit "$d" --config="${config}"
        ) &
      done
      wait
We are migrating to a mono-repo right now, and I haven't found any CI/CD solution that handles this well.
The key is to detect not only the change itself, but also any services that depend on that change. Here is what we are doing:
- Requiring every service to have a Makefile with a build command.
- Putting a cloudbuild.yaml at the root of the monorepo.
- Running a custom build step with this little tool (old but still seems to work): https://github.com/jharlap/affected, which lists all packages that have changed and all packages that depend on those packages, etc.
- The shell script then runs make build on any service that is affected by the change (see the sketch below).
So far it is working well, but I totally understand if this doesn't fit your workflow.
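A simplified sketch of that custom build step, using a plain git diff against the base branch instead of the affected tool (the service layout and base ref are assumptions, and the transitive-dependency analysis is omitted):

#!/usr/bin/env bash
# Build every top-level service directory that contains changes since origin/master.
set -euo pipefail

for service in services/*/; do
  if ! git diff --quiet origin/master...HEAD -- "$service"; then
    echo "Changes detected in ${service}, building ..."
    make -C "$service" build
  fi
done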
Another option many people use is Bazel. It is not the simplest tool, but it is especially great if you have many different languages or build processes across your monorepo.
You can create a build trigger for your repository. When setting up a trigger with cloudbuild.yaml for build configuration, you need to provide the path to the cloudbuild.yaml within the repository.
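For example (a hedged sketch; the exact flags can vary between gcloud versions, and the repo/owner values are placeholders), one trigger per service can point at that service's config and only fire when files under that service change:

gcloud builds triggers create github \
  --repo-name=my-repo --repo-owner=my-org \
  --branch-pattern="^master$" \
  --build-config=services/api/cloudbuild.yaml \
  --included-files="services/api/**"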
I am migrating a Django application from OpenShift v2 to v3 (in case you don't know, Red Hat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/).
So I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its execute (x) permission. I log into the failing container with oc debug and can see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details: any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
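A minimal sketch of such a run script (the exact start command is an assumption; use whatever actually starts your app):

#!/bin/bash
# .s2i/bin/run - custom S2I run script.
# Invoking app.sh through bash sidesteps the missing execute bit entirely;
# alternatively, start your process directly here (e.g. gunicorn) and drop app.sh.
exec /bin/bash /opt/app-root/src/app.sh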
Regarding your immediate problem, there is a very simple reason why you cannot change the permissions of the script: you were trying to modify the permissions in the deployed pod, not in the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, which definitely do not match the file ownership generated by the build. Hence the permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make app.sh executable and push it to your repo as such.
If Git does not track this permission change (as happened for me), you have to run git update-index --chmod=+x app.sh for it to work.
I am trying to run git in AWS lambda to make a checkout of a repository.
This is my setup:
- I am using Node.js 4.3
- I am not using nodegit because I want to use the "--depth=1" parameter, which is not supported by nodegit.
- I have copied the git and ssh executables from the correct AWS AMI and placed them in a "bin" folder in the zip I upload.
I added them to PATH with this:
process.env['PATH'] = process.env['LAMBDA_TASK_ROOT'] + "/bin:" + process.env['PATH'];
The input variables are set like this:
"checkout_url": "git#...",
"branch":"master
Now I do this (for brevity, I mixed some pseudo-code in):
// download the deployment key (pseudo-code helper) and lock down its permissions
downloadDeploymentKeyFromS3Sync('/tmp/ssh_key');
fs.chmodSync("/tmp/ssh_key", 0600);
// point git at the key and disable host key checking
process.env['GIT_SSH_COMMAND'] = 'ssh -o StrictHostKeyChecking=no -i /tmp/ssh_key';
execSync("git clone --depth=1 " + checkout_url + " --branch " + branch + " /tmp/checkout");
Running this on my local computer using lambda-local, everything works fine! But when I test it in Lambda, I get:
warning: templates not found /usr/share/git-core/templates
PRIV_END: seteuid: Operation not permitted\r
fatal: Could not read from remote repository.
The "warning" is of course, because I did not install git but just copied the binary. Is that a reason why this should not work?
Why is git needing "setuid"? I read that in some shells, that is disabled for security reasons. So it makes sense that it does not work in lambda. Can git somehow be instructed to not "need" this command?
Yep, this is definitely possible - I've created a Lambda Layer that achieves just this. No need to mess with any env variables; it should work out of the box:
https://github.com/lambci/git-lambda-layer
As stated in the README, all you need to do is add a layer with the following ARN:
arn:aws:lambda:<region>:553035198032:layer:git:<version>
(replace <region> and <version>, check README for latest version)
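For example (a hedged sketch: the function name is a placeholder and <region>/<version> stay as described in the README), attaching the layer from the AWS CLI looks like this:

aws lambda update-function-configuration \
  --function-name my-git-function \
  --layers arn:aws:lambda:<region>:553035198032:layer:git:<version>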
The issue is that you cannot copy just the git binary. You need a portable version of git, and even with that you're going to have a bad time, because you cannot guarantee that the OS the Lambda function runs on is going to be compatible with the binary.
Stepping back, I would walk away from this approach completely. I would clone and build the package elsewhere, and then download the result in the Lambda function pretty much the same way you do with downloadDeploymentKeyFromS3Sync.
You might consider this a non-answer, but I've found the easiest way to run arbitrary binaries from Lambda is... not to. If I cannot do the work from within a platform-independent, non-binary approach, I integrate Docker into the workflow, managing Docker containers from the Lambda function.
On AWS one way to do this is to use the Elastic Container Service (ECS) to spawn a task that runs git.
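A hedged sketch of that hand-off (the cluster and task definition names are hypothetical, and the task definition is assumed to wrap a container image that has git installed):

aws ecs run-task \
  --cluster build-tools \
  --task-definition git-clone \
  --overrides '{"containerOverrides":[{"name":"git","command":["git","clone","--depth=1","git@example.com:org/repo.git","/work"]}]}'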
If you stand up a Docker Swarm instance or integrate another Docker-API compatible service such as Rackspace Carina or Joyent's Triton, then you could use a project I personally put together specifically for integrating AWS Lambda with Docker: "Dockaless".
Good luck!