Vert.x Gradle multi module build

Both Vert.x and Gradle are quite new to me. I'm familiar with the basics and Hello World demos. I'm looking for guidelines on setting up a multi-module build.
Requirements for the project:
dependencies are managed at the top level (not in each module)
all modules are located at the same level (no nested submodules)
module properties are managed in one place (e.g. version and groupId/owner are defined once for the whole project)
there is one starter module which is responsible for loading all verticles from other modules (thus there are dependencies between modules)
single (executable) fat jar is produced as build result
the build should produce the files needed to import the project into an IDE (e.g. Eclipse)
different languages can be used for development (Java, Scala, JS, etc.)
I did some testing with vertx-gradle-template and vertx-gradle-plugin. Neither of them is a good fit for my requirements.

I have a similar project and I chose vertx-gradle-plugin. It simply groups several useful plugins.
I use this structure:
root-folder
├── app
├── build.gradle
├── settings.gradle
├── service 1
├── service 2
Services 1 and 2 are my different modules.
app is the module that assembles everything. It uses the plugin above and so contains the main verticle. It depends on services 1 and 2. The configuration of the distribution jar lives here too.
settings.gradle and the root build.gradle are used to centralize the versions and declare the different modules; a rough sketch is shown below.
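As an illustration only (the module names, group, and the Vert.x artifact/version are placeholders I've chosen, and the fat-jar configuration is omitted), the centralized files could look roughly like this:

// settings.gradle -- declares all modules at the same level
rootProject.name = 'my-vertx-project'
include 'app', 'service1', 'service2'

// build.gradle (root) -- group, version and dependency versions defined once
subprojects {
    apply plugin: 'java'
    group = 'com.example'   // defined once for the whole project
    version = '1.0.0'

    repositories {
        mavenCentral()
    }

    dependencies {
        compile 'io.vertx:vertx-core:2.1.5'  // placeholder version, managed here only
    }
}

// app/build.gradle -- the starter module depends on the service modules
dependencies {
    compile project(':service1')
    compile project(':service2')
}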

Related

Building a compiled application with Docker

I am building a server, written in C++ and want to deploy it using Docker with docker-compose. What is the "right way" to do it? Should I invoke make from Dockerfile or build manually, upload to some server and then COPY binaries from Dockerfile?
I had difficulties automating our build with docker-compose, and I ended up using docker build for everything:
Three layers for building
Run → develop → build
Then I copy the build outputs into the 'deploy' image:
Run → deploy
Four layers to play with:
Run
Contains any packages required for the application to run - e.g. libsqlite3-0
Develop
FROM <projname>:run
Contains packages required for the build - e.g. g++, cmake, libsqlite3-dev
Dockerfile executes any external builds - e.g. steps to build boost-python3 (not in package manager repositories)
Build
FROM <projname>:develop
Contains source
Dockerfile executes internal build (code that changes often)
Built binaries are copied out of this image for use in deploy
Deploy
FROM <projname>:run
Output of build copied into image and installed
RUN or ENTRYPOINT used to launch the application
The folder structure looks like this (a sketch of what the corresponding Dockerfiles might contain follows below):
.
├── run
│   └── Dockerfile
├── develop
│   └── Dockerfile
├── build
│   ├── Dockerfile
│   └── removeOldImages.sh
└── deploy
    ├── Dockerfile
    └── pushImage.sh
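Purely as an illustration (the base image, package names, build commands and paths here are my assumptions, not taken from the answer above), the four Dockerfiles could be sketched as:

# run/Dockerfile -- runtime dependencies only
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends libsqlite3-0 \
    && rm -rf /var/lib/apt/lists/*

# develop/Dockerfile -- build toolchain layered on the run image
FROM <projname>:run
RUN apt-get update && apt-get install -y --no-install-recommends g++ cmake libsqlite3-dev \
    && rm -rf /var/lib/apt/lists/*
# any external builds (e.g. boost-python3) would also be scripted here

# build/Dockerfile -- compile the frequently changing project source
FROM <projname>:develop
COPY . /src
RUN mkdir -p /src/build && cd /src/build && cmake .. && make
# the resulting binaries are copied out of this image for the deploy step

# deploy/Dockerfile -- install only the build outputs onto the run image
FROM <projname>:run
# 'myapp' is assumed to have been copied out of the build image into this folder
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]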
Setting up the build server means executing:
docker build -f run/Dockerfile -t <projname>:run .
docker build -f develop/Dockerfile -t <projname>:develop .
Each time we make a build, this happens:
# Execute the build
docker build -f build/Dockerfile -t <projname>:build .
# Install build outputs
docker build -f deploy/Dockerfile -t <projname>:<version> .
# If successful, push deploy image to Docker Hub
docker tag <projname>:<version> <projname>:latest
docker push <projname>:<version>
docker push <projname>:latest
I refer people to the Dockerfiles as documentation about how to build/run/install the project.
If a build fails and the output is insufficient for investigation, I can run /bin/bash in <projname>:build and poke around to see what went wrong.
I put together a GitHub repository around this idea. It works well for C++, but you could probably use it for anything.
I haven't explored the feature, but @TaylorEdmiston pointed out that my pattern here is quite similar to multi-stage builds, which I didn't know about when I came up with this. It looks like a more elegant (and better documented) way to achieve the same thing.
My recommendation would be to develop, build and test completely on the container itself. This follows the Docker philosophy that the developer's environment should be the same as the production environment; see The Modern Developer Workstation on MacOS with Docker.
This is especially true for C++ applications, which usually have dependencies on shared libraries/object files.
I don't think a standardized process for developing, testing and deploying C++ applications on Docker exists yet.
To answer your question, the way we do it as of now is to treat the container as your development environment and enforce a set of practices on the team, such as:
Our codebase (except config files) always lives on a shared volume (on the local machine) and is versioned in Git.
Shared/dependent libraries, binaries, etc. always live in the container.
Build and test in the container, and before committing the image, clean out unwanted object files, libraries, etc. and make sure the docker diff changes are as expected.
Changes/updates to the environment, including shared libraries and dependencies, are always documented and communicated to the team.
Update
For anyone visiting this question after 2017, please see the answer by fuglede about using multi-stage Docker builds. That is a better solution than my answer below, which dates from 2015, well before multi-stage builds were available.
Old answer
The way I would do it is to run your build outside of your container and only copy the output of the build (your binary and any necessary libraries) into your container. You can then upload your container to a container registry (e.g., use a hosted one or run your own), and then pull from that registry onto your production machines. Thus, the flow could look like this:
build binary
test / sanity-check the binary itself
build container image with binary
test / sanity-check the container image with the binary
upload to container registry
deploy to staging/test/qa, pulling from the registry
deploy to prod, pulling from the registry
Since it's important that you test before production deployment, you want to test exactly the same thing that you will deploy in production, so you don't want to extract or modify the Docker image in any way after building it.
I would not run the build inside the container you plan to deploy in prod, as then your container will have all sorts of additional artifacts (such as temporary build outputs, tooling, etc.) that you don't need in production and needlessly grow your container image with things you won't use for your deployment.
While the solutions presented in the other answers -- in particular Misha Brukman's suggestion in the comments to this answer to use one Dockerfile for development and one for production -- would have been considered idiomatic at the time the question was written, the problems they are trying to solve, in particular cleaning up the build environment to reduce image size while still being able to use the same container environment in development and production, have effectively been solved by multi-stage builds, which were introduced in Docker 17.05.
The idea here would be to split up the Dockerfile into two parts, one that's based on your favorite development environment, such as a fully-fledged Debian base image, which is concerned with creating the binaries that you want to deploy at the end of the day, and another which simply runs the built binaries in a minimal environment, such as Alpine.
This way you avoid possible discrepancies between development and production environments as alluded to by blueskin in one of the comments, while still ensuring that your production image is not polluted with development tooling.
The documentation provides the following example of a multi-stage build of a Go application, which you could then adapt to a C++ development environment (a rough sketch follows the example below); one gotcha is that Alpine uses musl libc, so you have to be careful when linking in your development environment.
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
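As a rough illustration of such an adaptation to C++ (the base images, file names and the build command below are my assumptions, not taken from the documentation), it could be sketched as:

# build stage: full glibc-based toolchain (see the musl caveat above)
FROM gcc:latest AS builder
WORKDIR /src
COPY . .
# statically link so the binary does not depend on the build image's libc
RUN g++ -O2 -static -o app main.cpp

# runtime stage: only the binary in a minimal image
FROM alpine:latest
WORKDIR /root/
COPY --from=builder /src/app .
CMD ["./app"]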

Can't load JSTL in multi module maven project with embedded jetty

I have created a test framework for testing .jsp files and .tag files using embedded Jetty. I start the Jetty server programmatically using the Java API, add a servlet holder and a wrapper test JSP, and initialize the server with the project's web root.
There were some issues with Jasper discovering TLD locations at runtime when run from the Maven Surefire plugin. I fixed that by providing
<useManifestOnlyJar>false</useManifestOnlyJar>
in the plugin's classpath settings. Everything works well now when I run the tests with mvn clean install.
Running the tests from the Eclipse context menu has one issue: if any other project from the multi-module Maven build is open in the workspace, the TLDs in that project are not resolved. One workaround I tried was to 'close' that project in the Eclipse workspace, and that worked.
However, I would like this to work with all projects open in the workspace, running from the Eclipse JUnit context menu. The problem is that Jasper's TldScanner looks for TLD files only in jars and in the WEB-INF of the current project:
TldScanner.scanTlds():
    processWebDotXml();
    scanJars();
    processTldsInFileSystem("/WEB-INF/");
I'm using org.glassfish.web jsp-impl version 2.2.2-b06 with Jetty 8.1.0-RC5.
Is there a way to specify file-based TLD scanning in Jasper for extra classpath items?

How do I split up a Django project into separate, already created applications?

I have a Django project with some applications that come from the directories of some GitHub repositories. I've made some changes to them and would like to push those changes to my forks of the projects. The project directory looks like this:
mainproject
--foo
--bar
where foo and bar are two directories from different GitHub repositories. The mainproject is itself a git repository. The repository containing foo looks like this:
project
--foo
--someotherfolder
Ideally I'd like to keep the applications within the mainproject folder.
The way I manage this in my project is by using git submodules.
I have a dependencies folder (in my main project app) where I track all my dependencies, i.e. third-party Django apps.
You will need to do:
git submodule add /foo.git /foo
git submodule add /bar.git /bar
You may find this link helpful: http://git-scm.com/book/en/Git-Tools-Submodules

best way to deploy jetty application--too many options?

I need to deploy a production version of a web application. So far, I've been testing it with mvn jetty:run. I've used actual Jetty installations before, but they seem necessary only when you want to serve multiple wars on the same web server. In some ways this is the most straightforward option, however (mvn package and copy it over).
My other option is to create a runnable jar (mvn assembly:single) that starts a server, but I need to tweak the configuration so that the static content in src/main/webapp is served and the web.xml can be found.
I've also read about a "runnable war". This might avoid the src/main/webapp problem, since those files are already laid out in the war file. I don't know how to go about building one, however.
I could also stick with mvn jetty:run, but this doesn't seem like the best option because then the production deployment is tied to code instead of being a standalone jar.
Any opinions on the best way or pros and cons of these different approaches? Am I missing some options?
The jetty-console-maven-plugin from simplericity is simple to use and works great. When you run mvn package you get two wars -- one of which is executable. Running java -jar mywar.war --help prints the usage, which allows a bit of configuration (port, etc.).
I'm not that familiar with Maven, but this is how we approach deployment using embedded Jetty:
We create a single-file JAR with the embedded Jetty app and the necessary lib jars packed in.
We deploy the static content in a WAR file (which you can package into the JAR as well). Everything is generated by an Ant file that:
1) Builds the static-files WAR (this also creates the web.xml)
2) Copies the WAR into the application resources
3) Compiles an executable JAR
To get embedded Jetty to "find and serve" your static files, add the WAR with a WebAppContext to the Jetty handlers:
// Jetty 7/8 API (org.eclipse.jetty.*)
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.HandlerList;
import org.eclipse.jetty.webapp.WebAppContext;

Server jetty = new Server(port);
HandlerList handlers = new HandlerList();
WebAppContext staticContentAsWar = new WebAppContext();
staticContentAsWar.setContextPath("/static/");
staticContentAsWar.setWar(resource_Path_to_WAR);
handlers.addHandler(staticContentAsWar); // register the webapp context
jetty.setHandler(handlers);              // a HandlerList is itself a Handler
jetty.start();
HTH

Maven dependencies for Clojure libraries on GitHub

I'm developing some applications in Clojure + Java using Eclipse and Maven with m2eclipse.
This works great when my dependencies are in a Maven repository. However there are some libraries that I would like to use that aren't in any repository - e.g. small open source Clojure libraries hosted on GitHub.
These projects typically have a build.xml or a project.clj but not a pom.xml.
Is there any way to set up Maven to handle these dependencies automatically? Or do I need to manually download and build all of these?
Unfortunately no, you'll either have to:
find a repository containing those libraries
manually add these to your local repository using mvn install:install-file (a sketch follows this list), and if you're kind enough, ask for them to be published in the central Maven repo
ask the developers if they would be so kind as to provide a mavenized version and publish it in some Maven repository
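For the second option, a minimal sketch of installing a locally built jar into your local repository (the coordinates and file name are placeholders):

mvn install:install-file \
    -Dfile=some-clojure-lib-0.1.0.jar \
    -DgroupId=com.example \
    -DartifactId=some-clojure-lib \
    -Dversion=0.1.0 \
    -Dpackaging=jar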
Clojure libraries often publish their artifacts to Clojars; you might solve your issue just by adding it as a repository in your pom.xml, as sketched below.
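A minimal sketch of that repository entry (the id is arbitrary, and the URL is my assumption of the Clojars repository location, so verify it against the Clojars documentation):

<repositories>
  <repository>
    <id>clojars</id>
    <url>https://clojars.org/repo</url>
  </repository>
</repositories>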
Another option, when integrating Leiningen and Maven builds, is to automatically generate a POM from the project.clj via lein pom.
This would allow you to include the libraries in your build as long as you have checked them out locally.