PCF Staticfile_buildpack not considering Staticfile - cloud-foundry

I'm trying to deploy an Angular 6 application to PCF using the "cf push -f manifest.yml" command.
Deployment works fine, except it's not considering the options set in "Staticfiles".
To be specific, I have the below values in the Staticfile, to force HTTPS and to include the ".well-known" folder, which is excluded by default due to the "." prefix.
force_https: true
host_dot_files: true
location_include: .well-known/assetlinks.json
I also have a manifest.yml file with the below values:
applications:
- name: MyApp
  instances: 1
  memory: 256M
  random-route: false
  buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
  path: ./dist
  routes:
  - route: myapp.example.com
Is there an alternative way to set these params in manifest.yml, or how do I achieve this?

First...
buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
Don't do this. You don't want to point at the master branch of a buildpack; it could be unstable. Point to a release, like https://github.com/cloudfoundry/staticfile-buildpack.git#v1.4.31, or use the system-provided staticfile buildpack, staticfile_buildpack.
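For example, the buildpack line in the manifest could look like either of the following (a minimal sketch; the release tag is just the one mentioned above, check the buildpack's releases page for the current one):
buildpack: staticfile_buildpack
# or, pinned to a specific release:
buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git#v1.4.31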
To be specific, I've below values in Staticfiles
It's not Staticfiles, plural; it's Staticfile, singular. Make sure you have the correct file name and that it's in the root of your project (i.e. the same directory that you're pushing), which is ./dist based on the path: entry in your manifest.
Update: For Angular, the "Staticfile" should go under the "src" folder, so that it ends up under "dist" when the project is built.
https://docs.cloudfoundry.org/buildpacks/staticfile/index.html#config-process
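One caveat for the Angular CLI (an assumption on my part, since the question doesn't show angular.json): files under src are only copied into dist if they are listed in the assets array, under projects > your-project > architect > build > options, roughly like this:
"assets": [
  "src/favicon.ico",
  "src/assets",
  "src/Staticfile"
]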

Related

Copying a war file from GitLab CI to Tomcat

I am trying to create a CI/CD pipeline to build a WAR file and deploy it to a Tomcat container from GitLab.
I am using a Maven image to build the project. Once the WAR file is created, I would like to copy it to some folder so that from there it can be copied to the Tomcat server's webapps directory in a container.
My current approach and goal is to use a Dockerfile in my project: from a Tomcat image, run Tomcat using the project WAR. I tried using ADD in the Dockerfile, but the directory where the WAR resides, $CI_PROJECT_DIR, is not where the Dockerfile is looking.
The following is the ".gitlab-ci.yml" file.
stages:
  # Build project
  - build
  - package
  # Build and deploy mailService
  #- deploy

variables:
  # Variables that can be used throughout the pipeline are defined here.
  # Maven variables
  MAVEN_CLI_OPTS: '-s /appdir/.m2/settings.xml'
  MAVEN_PATH: '/appdir/opt/apache-maven-3.8.4/bin'
  IMAGE_PATH: 'gitlab-registry.gs.mil/gteam-development/docker'

project-build:
  image: ${IMAGE_PATH}/maven
  services:
    - tomcat:latest
  stage: build
  script:
    - ${MAVEN_PATH}/mvn ${MAVEN_CLI_OPTS} clean package
    - ls -als
    - ls -als target

build docker image:
  stage: package
  image: docker
  services:
    - docker:dind
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE
  tags:
    - dind
The following is the "Dockerfile" I am using to run Tomcat. I need to copy my WAR file to the Tomcat webapps folder and build another image.
FROM tomcat:latest
LABEL maintainer="Jacquelyne Wilson"
# ADD $CI_PROJECT/target/geoint-rfi-data-api.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["catalina.sh", "run"]
NOTE: These images are in our GitLab Container Registry. Hope this is enough information.
This is my first experience creating a GitLab CI pipeline. My apologies if my terms and approach are not ideal.
You can provide the WAR file built in your project-build job as an artifact, so it is available in the following job. By the way, you should probably not use spaces in the job name.
This could look like the following:
project-build:
  image: ${IMAGE_PATH}/maven
  services:
    - tomcat:latest
  stage: build
  artifacts:
    paths:
      - /path/to/your/war
  script:
    - ${MAVEN_PATH}/mvn ${MAVEN_CLI_OPTS} clean package
    - ls -als
    - ls -als target
And in the docker build job, you can then copy the WAR file from the artifact to the docker build context path so it can be ADDed to your image.
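As a sketch (the artifact path and WAR name are assumptions based on the Maven output and the commented-out ADD line in the question; adjust them to your project):
project-build:
  stage: build
  script:
    - ${MAVEN_PATH}/mvn ${MAVEN_CLI_OPTS} clean package
  artifacts:
    paths:
      - target/*.war    # restored into the workspace of jobs in later stages

build-docker-image:
  stage: package
  image: docker
  services:
    - docker:dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE .    # target/ from the artifact is part of the build context
    - docker push $CI_REGISTRY_IMAGE
The Dockerfile can then reference the WAR relative to the build context instead of $CI_PROJECT_DIR:
ADD target/geoint-rfi-data-api.war /usr/local/tomcat/webapps/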
Hope this helps :)
Best regards
Andreas

Reuse a Cloud Foundry app without having to rebuild from scratch

I deploy a Django application with Cloud Foundry. Building the app takes some time, but I need to launch the application with different start commands, and the only solution I have today is to fully rebuild the application each time.
With Docker, changing the start command is very easy and doesn't require rebuilding the whole container, so there must be a more efficient way to do this:
Here are the applications launched:
FrontEndApp-Prod: The Django App using gunicorn
OrchesterApp-Prod: The Django Celery Camera & Heartbeat
WorkerApp-Prod: The Django Celery Workers
All these apps are basically identical; they just use different routes, configurations and start commands.
Below is the manifest.yml file I use:
defaults: &defaults
  timeout: 120
  memory: 768M
  disk_quota: 2G
  path: .
  stack: cflinuxfs2
  buildpack: https://github.com/cloudfoundry/buildpack-python.git
  services:
    - PostgresDB-Prod
    - RabbitMQ-Prod
    - Redis-Prod

applications:
- name: FrontEndApp-Prod
  <<: *defaults
  routes:
    - route: www.myapp.com
  instances: 2
  command: chmod +x ./launch_server.sh && ./launch_server.sh
- name: OrchesterApp-Prod
  <<: *defaults
  memory: 1G
  instances: 1
  command: chmod +x ./launch_orchester.sh && ./launch_orchester.sh
  health-check-type: process
  no-route: true
- name: WorkerApp-Prod
  <<: *defaults
  instances: 3
  command: chmod +x ./launch_worker.sh && ./launch_worker.sh
  health-check-type: process
  no-route: true
Two options I can think of for this:
You can use some of the new v3 API features and take advantage of their support for multiple processes in a Procfile. With that, you'd essentially have a Procfile like this:
web: ./launch_server.sh
orchester: ./launch_orchester.sh
worker: ./launch_worker.sh
The platform should then stage your app once, but deploy it three times based on the droplet that is produced from staging. It's slick because you end up with only one application that has multiple processes running off of it. The drawback is that this is an experimental API at the time of writing, so it still has some rough edges, plus the exact support you get could vary depending on how quickly your CF provider installs new versions of the Cloud Controller API.
You can read all the details about this here:
https://www.cloudfoundry.org/blog/build-cf-push-learn-procfiles/
You can use cf local. This is a cf CLI plugin which allows you to build a droplet locally (staging occurs in a Docker container on your local machine). You can then take that droplet and deploy it as much as you want.
The process would look roughly like this; you'll just need to fill in some options/flags (hint: run cf local -h to see all the options):
cf local stage
cf local push FrontEndApp-Prod
cf local push OrchesterApp-Prod
cf local push WorkerApp-Prod
The first command will create a file ending in .droplet in your current directory, the subsequent three commands will deploy that droplet to your provider and run it. The net result is that you should end up with three applications, like you have now, that are all deployed from the same droplet.
The drawback is that your droplet is local, so you're uploading it three times, once for each app.
I suppose you also have a third option, which is to just use a Docker container. That has its own advantages and drawbacks, though.
Hope that helps!

Can't find Jetty's root.war

I'm trying to build a Docker image with my WAR file and Jetty, and the tutorials seem pretty straightforward except for one thing.
FROM jetty
ADD mysample.war /var/lib/jetty/webapps/root.war
EXPOSE 8080
but I don't have /var/lib/jetty/webapps/root.war on my system. Brew installed Jetty into /usr/local/Cellar/jetty/9.4.8.v20171121, but there isn't a root.war under that path.
I'm running macOS 10.12.6 if that matters.
If you are using the official docker image ...
https://hub.docker.com/_/jetty/
... the /var/lib/jetty path is the ${jetty.base} directory.
When your Dockerfile uses:
ADD mysample.war /var/lib/jetty/webapps/root.war
It is taking your mysample.war and putting it in ${jetty.base}/webapps/ with the special reserved name root.war that uses contextPath = "/".
The locally installed path /usr/local/Cellar/jetty/9.4.8.v20171121 has nothing to do with your Docker image, and it's likely not a ${jetty.base} directory (it looks like a ${jetty.home} directory path).
If you had used the following instead ...
ADD mysample.war /var/lib/jetty/webapps/hello.war
Then that war would have been deployed to contextPath = "/hello", meaning you would access it via the general URL ...
<scheme>://<host:port>/<contextPath>/<resourceInWar>
Examples:
http://localhost:8080/hello/
https://machine.com/hello/main.css
Reference: https://www.eclipse.org/jetty/documentation/9.4.x/automatic-webapp-deployment.html

AWS CodeDeploy ScriptFailed Error in AfterInstall

While trying to deploy a Django project with CodeDeploy for the first time, I keep getting the following error in the AfterInstall phase:
Error Code: ScriptFailed
Script Name: /setup.sh
Message: Script at specified location: /setup.sh failed with exit code 2
Log Tail: LifecycleEvent - AfterInstall
Script - /setup.sh
[stderr]python: can't open file 'setup_start.py': [Errno 2] No such file or directory
This is probably because I'm misunderstanding the files section of the AppSpec. Below is a snippet of what I'm doing for that section:
files:
  - source: ./BlackBoxes
    destination: project/BlackBoxes
  - source: ./Documentation
    destination: project/Documentation
  - source: ./manage.py
    destination: project
  - source: ./setup_start.py
    destination: project
...
I did make the project folder manually on the S3 bucket but none of the subfolders.
And the AfterInstall section:
hooks:
  ...
  AfterInstall:
    - location: /setup.sh
      timeout: 180
What I originally thought source was supposed to mean was the relative path of the file/directory with respect to the root of the project directory on my local development machine. I also assumed that any folder needed that didn't exist on the S3 bucket would be created automatically. Clearly, I am misunderstanding something about CodeDeploy, most likely pertaining to the AppSpec file. What exactly am I doing wrong with the deployment and what am I supposed to be doing instead?
You're going wrong in the files section. This section is where you tell CodeDeploy: "place this directory/file from my repo in this location on my EC2 instance". You only need to do this for the content you're deploying; you don't need to do it for deployment hook scripts. Refer to the docs for the fine details on this section.
In a hook, the location is the relative path to your hook script from the root of your repo. So /setup.sh isn't correct; you need to give it the relative path. Again, the docs are the place to read more on this.
What I usually do is create a folder called e.g. scripts in the root of my repo and store the hook scripts there.
Say my repo directory structure looks like this:
application_code/
scripts/
appspec.yml
I can then set up my appspec.yml like this:
version: 0.0
os: linux
files:
  - source: application_code
    destination: /desired/code/location # path to where the code should be put on the instance
hooks:
  ApplicationStop:
    - location: scripts/some_script.sh
      timeout: 300
      runas: root
CodeDeploy is simple to use once you've read the documentation comprehensively. Until then, you're going to keep encountering problems like this.
Best of luck! :)

What's the best way to implement parallel tasks with Django and Elastic Beanstalk?

I have been trying to implement Celery with Django and Elastic Beanstalk using SQS, but I still don't know how I should start the workers in the background; it seems that I need to create an AMI outside of EB. Am I even on the right path? Is there a better way to run parallel tasks?
Update:
I found an alternative solution for this that is simpler and more stable. See my answer in this question: How do you run a worker with AWS Elastic Beanstalk?
I just needed to figure this out for a project I am working on. It took some tinkering, but in the end the solution is quite easy to implement. You can add three files "dynamically" to the server using the files: directive in an ebextensions config file. The three files are:
A script that starts the daemon (located in /etc/init.d/)
A config file, configuring the daemon starting script, located in /etc/default/
A shell script that copies the env vars from your app to the environment of celeryd and starts the service (post deployment)
The start script can be the default one from the repository, so it is sourced directly from GitHub.
The config has to be adapted to your project. You need to add your own app's name to the CELERY_APP setting, and you can pass additional arguments to the worker through the CELERYD_OPTS setting (for instance, the concurrency value could be set there).
Then you also need to pass the environment variables of your project to the worker daemon, as it needs the same environment variables as the main app. Examples are the AWS secret keys that the celery worker needs in order to connect to SQS and possibly S3. You can do that by simply appending the env vars from the current app to the configuration file:
cat /opt/python/current/env | tee -a /etc/default/celeryd
Finally, the celery worker should be started. This step needs to happen after the codebase has been deployed to the server, so it needs to be activated "post" deployment. You can do that by using the undocumented post-deploy hooks: any shell file in /opt/elasticbeanstalk/hooks/appdeploy/post/ will be executed by Elastic Beanstalk after deployment. So you can add a service celeryd restart command to a script file in that folder. For convenience, I placed both the copying of environment variables and the start command in one file.
Note that you cannot use the services: directive directly to start the daemon, as this would try to start the celeryd worker before the codebase is deployed to the server, so that won't work (hence the "post" deploy script).
OK, putting all that together, the only thing needed is to create a file ./ebextensions/celery.config in the main directory of your codebase with the following content (adapted to your codebase, of course):
files:
  "/etc/init.d/celeryd":
    mode: "000755"
    owner: root
    group: root
    source: https://raw2.github.com/celery/celery/22ae169f570f77ae70eab03346f3d25236a62cf5/extra/generic-init.d/celeryd

  "/etc/default/celeryd":
    mode: "000755"
    owner: root
    group: root
    content: |
      CELERYD_NODES="worker1"
      CELERY_BIN="/opt/python/run/venv/bin/celery"
      CELERY_APP="yourappname"
      CELERYD_CHDIR="/opt/python/current/app"
      CELERYD_OPTS="--time-limit=30000"
      CELERYD_LOG_FILE="/var/log/celery/%N.log"
      CELERYD_PID_FILE="/var/run/celery/%N.pid"
      CELERYD_USER="ec2-user"
      CELERYD_GROUP="ec2-user"
      CELERY_CREATE_DIRS=1

  "/opt/elasticbeanstalk/hooks/appdeploy/post/myapp_restart_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Copy env vars to celeryd and restart service
      su -c "cat /opt/python/current/env | tee -a /etc/default/celeryd" $EB_CONFIG_APP_USER
      su -c "service celeryd restart" $EB_CONFIG_APP_USER

services:
  sysvinit:
    celeryd:
      enabled: true
      ensureRunning: false
Hope this helps.