Can't find jetty's root.war

I'm trying to build a Docker image with my war file and jetty, and the tutorials seem pretty straightforward except for one thing.
FROM jetty
ADD mysample.war /var/lib/jetty/webapps/root.war
EXPOSE 8080
but I don't have /var/lib/jetty/webapps/root.war on my system. Brew installed jetty into /usr/local/Cellar/jetty/9.4.8.v20171121, but there isn't a root.war under that path.
I'm running macOS 10.12.6 if that matters.

If you are using the official Docker image ...
https://hub.docker.com/_/jetty/
... the /var/lib/jetty path is the ${jetty.base} directory.
When your Dockerfile uses:
ADD mysample.war /var/lib/jetty/webapps/root.war
It takes your mysample.war and puts it in ${jetty.base}/webapps/ under the special reserved name root.war, which deploys with contextPath = "/".
The locally installed path /usr/local/Cellar/jetty/9.4.8.v20171121 has nothing to do with your Docker image, and it's likely not a ${jetty.base} directory (it looks like a ${jetty.home} directory path).
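If you want to verify this yourself, you can list the webapps directory inside the image (a quick sanity check; --entrypoint overrides the image's default startup command):
docker run --rm --entrypoint ls jetty /var/lib/jetty/webapps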
If you had used the following instead ...
ADD mysample.war /var/lib/jetty/webapps/hello.war
Then that war would have been deployed to contextPath = "/hello", meaning you would access it via the general URL ...
<scheme>://<host:port>/<contextPath>/<resourceInWar>
Examples:
http://localhost:8080/hello/
https://machine.com/hello/main.css
Reference: https://www.eclipse.org/jetty/documentation/9.4.x/automatic-webapp-deployment.html
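For reference, a complete minimal setup under the assumptions above (the image tag mysample is made up):
FROM jetty
# root.war is the reserved name that deploys to contextPath = "/"
ADD mysample.war /var/lib/jetty/webapps/root.war
EXPOSE 8080
Build and run it with:
docker build -t mysample .
docker run -p 8080:8080 mysample
and the app should answer at http://localhost:8080/.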

How to load static files with spring boot inside a docker container hosted on AWS?

I got my Spring Boot app running in a Docker container built from a Dockerfile and hosted on an AWS Ubuntu instance.
Everything is working perfectly, except I have an image, a CSS file, and a JS file that do not load. Upon inspecting the page, these files show a 404 Not Found error.
I have used WinSCP to upload my files to my AWS instance. The myApp folder is where my Dockerfile is and where I build my container.
Directory Structure is:
myApp
  -Dockerfile
  -target
    -myApp.jar
  -src
    -main
      -java
        -[all my code in respective subdirectories]
      -resources
        -static
          -my.js
          -my.css
          -my.jpg
        -templates
          -folder1
            -html
            -html2
          -folder2
            -html
            -html2
I am almost certain my problem lies in the Docker container and my Dockerfile.
Spring Boot automatically looks for static files in src/main/resources/static, and I'm thinking my Docker container does not have this file structure.
Here is my Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get install -y openjdk-14-jdk
WORKDIR /usr/local/bin/myApp
ADD . /src/main/resources/static
ADD target/myapp.jar .
ENTRYPOINT ["java", "-jar", "myApp.jar"]
When I build the container it shows everything copied and built successfully, but the files cannot be reached. And what is weird to me is that Spring Boot is serving the correct templates. I am at a complete loss on this one. I have tried adding the resources individually from the Dockerfile and still no luck.
You are setting WORKDIR and then copying the .jar into that location using a relative path, but when you copy the other stuff (/src/main/resources/static) you use an absolute path, which completely destroys your folder structure (those files are not copied into the folder referenced by your WORKDIR). You probably forgot the . (dot) in front of that path: ./src/main/resources/static.
If you are not sure, run docker exec -it <container-id> bash to get into your running container and see what was copied where; fixing your Dockerfile should be easy from then on.
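Under that assumption, a minimal sketch of the corrected Dockerfile (it also uses one consistent jar name, since the original mixes myapp.jar and myApp.jar):
FROM ubuntu:latest
RUN apt-get update && apt-get install -y openjdk-14-jdk
WORKDIR /usr/local/bin/myApp
# the leading "./" makes the destination relative to WORKDIR
ADD . ./src/main/resources/static
ADD target/myApp.jar .
ENTRYPOINT ["java", "-jar", "myApp.jar"]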

AWS SAM/AWS Toolkit Docker Mounting Error

What path do I add to docker to enable AWS SAM to locally debug? Adding the path to the directory in which I work normally does not work.
Short answer: add this path to Docker: C:\Users\{user}\AppData\Local\Temp\aws-toolkit-vscode
The directory that needs to be mounted is the directory that SAM compiles to, NOT the directory you normally work in, e.g. OneDrive or Documents.

Access Elastic Beanstalk environment properties in NGINX configs running on AWS Linux 2

I had this working before on the Amazon Linux AMI but no luck with Amazon Linux 2.
I need to access my environment properties from the Nginx configuration file during the EB application deployment. It's a single-instance Node server.
I did it like this with the AWS Linux AMI and it worked without a problem:
.ebextensions/00_options.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    DOMAIN: socket.example.com
    MASTER_DOMAIN: https://example.com
    etc..
.ebextensions/10_proxy.config
... some configs ...
files:
  /etc/nginx/conf.d/proxy.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      upstream nodejs {
        server 127.0.0.1:8081;
        keepalive 256;
      }
      map $http_origin $cors_header {
        hostnames;
        default "";
        `{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "MASTER_DOMAIN"}}` "$http_origin";
      }
      server {
        listen 80;
        listen 8080;
        server_name `{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "DOMAIN"}}`;
        location ~ /.well-known {
          allow all;
          root /usr/share/nginx/html;
        }
        location / {
          return 301 https://$host$request_uri;
        }
      }
      etc..
.... some more configs ....
I'm not including most of the configs above because they're not relevant.
So when I did this before, everything worked as expected: the config file inserted my properties and created the file at /etc/nginx/conf.d/proxy.conf.
Now with Amazon Linux 2 the specs have changed, and we have to add our Nginx configuration files to the .platform/nginx/conf.d folder in the application bundle root.
Here is the reference (see Reverse proxy configuration).
So I created a proxy.conf file in the location mentioned above with the content that was previously inserted in /etc/nginx/conf.d/proxy.conf.
.platform/nginx/conf.d/proxy.conf
upstream nodejs {
  server 127.0.0.1:8081;
  keepalive 256;
}
map $http_origin $cors_header {
  hostnames;
  default "";
  `{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "MASTER_DOMAIN"}}` "$http_origin";
}
etc...
And then the problems began...
This first trial threw unexpected "{" in /var/proxy/staging/nginx/conf.d/proxy.conf:11 at me.
After that I tried a lot of things. I tried it with ${MASTER_DOMAIN} and messed around with the new EB Amazon Linux 2 hooks (see the link above, Platform hooks). All to no avail; it seems you can't access the properties from the Nginx configs. I read an article or some documentation from Nginx mentioning something similar today, but I can't find it anymore (did a lot of googling).
I also tried to create a config file like in the working version, whose purpose was to save a temp file somewhere with the properties included, and then include that file in the needed .platform/nginx/conf.d/proxy.conf, because I started to think there is no way to include them directly with the new specs.
.ebextensions/10_proxy.config
... some configs ....
files:
  /var/proxy/staging/custom_folder/proxy.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      etc...
.platform/nginx/conf.d/proxy.conf
include custom_folder/proxy.conf;
With this idea in mind I did a lot of nonsense: I created hooks to create (mkdir) directories in which I tried to temporarily save the file, which led to new permission errors. I wasn't able to give the proper permissions to the prebuild and postdeploy files, but that is another issue.
And a lot more of trying and failing...
But then I've read (also from the link above):
"If you configure your proxy to send traffic to multiple application processes, you can configure several environment properties, and use their values in both proxy configuration and your application code."
And hope came back... Does this mean I actually CAN directly add environment variables to the Nginx configs located in the .platform directory? I don't know. Do you?
I could continue to describe all the things I tried all night long, so I will stop here. I hope you get the issue; if not, ask me and I will do my best to make all this understandable.
Also, my mind isn't very clear anymore after 14 hours of battling this issue. I need a break.
If you made it to the end, thank you for your time; any help would be greatly appreciated.
Summary
One way to do it is to create a shell script in .platform/hooks/postdeploy.
Here is a simplified example, assuming you have an Elastic Beanstalk environment property called MASTER_DOMAIN:
#!/bin/bash
# write nginx config file
cat > /etc/nginx/conf.d/elasticbeanstalk/test.conf << LIMIT_STRING
location /test/ {
    default_type text/html;
    return 200 "nginx variable: \$host, and EB env property: $MASTER_DOMAIN";
}
LIMIT_STRING
# restart nginx service so the config takes effect
systemctl restart nginx.service
The location block from this example can be replaced by the nginx content from .ebextensions/10_proxy.config in the original post. No need for the Fn::GetOptionSetting stuff though.
I think you also need a duplicate script in .platform/confighooks/postdeploy.
Details below.
(sorry for the wall of text)
Environment variables in nginx
Actually, as discussed here and here, it is not possible (out of the box) to use OS environment variables inside the http, server, or location blocks in nginx config files. There are some workarounds, such as using Lua, Perl, or templates, but let's not get into those. This part has nothing to do with AWS.
In the OP's original configuration for the Amazon Linux AMI (AL1), using the files section in .ebextensions/10_proxy.config, they were effectively using a shell script to write the nginx config file during deployment. The shell script expanded the environment variables at deploy time, so the resulting proxy.conf for nginx never actually accessed any environment variables.
That's why it worked on AL1.
Platform hooks
Now, for Amazon Linux 2 (AL2), we can do something similar using shell scripts in the .platform/hooks and .platform/confighooks folders.
These .platform hook scripts are executed as the root user, and they have access to the Elastic Beanstalk (EB) environment properties. The EB environment properties can be accessed just like normal OS environment variables, so there is no need to use the Fn::GetOptionSetting stuff.
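A quick way to convince yourself of this is a throwaway hook that just echoes a property (MASTER_DOMAIN is the property name assumed throughout this answer; its output shows up in the deployment logs):
#!/bin/bash
# hypothetical .platform/hooks/predeploy/00_debug_env.sh
echo "MASTER_DOMAIN is: $MASTER_DOMAIN"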
Basically, we need to create a shell script that writes a file with the content from your original .ebextensions/10_proxy.config. However, there are two questions we need to consider:
Should we use a prebuild, predeploy, or postdeploy hook?
What is the proper destination directory for our nginx proxy.conf file?
File locations
To answer these questions, we have to refer to the AWS documentation for Extending Elastic Beanstalk Linux platforms, and specifically the Instance deployment workflow section.
... The current working directory (cwd) for platform hooks is the application's root directory. For prebuild and predeploy files it's the application staging directory, and for postdeploy files it's the current application directory. If one of the files fails (exits with a non-zero exit code), the deployment aborts and fails.
This is interesting, but leaves some questions, e.g. where is the "application staging directory" located? We can fill in the blanks by inspecting one of our deployment log files. Based on our eb-engine.log, here's what happens with the platform hooks and nginx config files during app deployment (skipping a lot of details):
the source bundle is downloaded from S3 and extracted to /var/app/staging/
platform hooks in .platform/hooks/prebuild/ are executed
proxy server configuration is copied from /var/app/staging/.platform/nginx/ to /var/proxy/staging/nginx
platform hooks in .platform/hooks/predeploy/ are executed
proxy server is started, configuration is copied from /var/proxy/staging/nginx/ to /etc/nginx
platform hooks in .platform/hooks/postdeploy/ are executed
Note, after deployment the app is located in /var/app/current.
Based on the above, there are several options:
Create a shell script in .platform/hooks/postdeploy that writes to /etc/nginx/conf.d/proxy.conf.
The nginx service is already running at this stage, so we need to restart it for the configuration to take effect.
Below is a minimal test example. In this example we write to the elasticbeanstalk subdirectory, because we just want to add a location inside the default server block. We can then visit the /test/ page in a browser, to check that the configuration works.
We use some bash I/O redirection (<<, >) to write the nginx config file.
Note that we need to escape any nginx variables, e.g. $host becomes \$host, otherwise the shell will interpret them as environment variables.
Also note that the shell scripts need to have execution permission, as explained under More about platform hooks in the docs.
#!/bin/bash
cat > /etc/nginx/conf.d/elasticbeanstalk/test.conf << LIMIT_STRING
location /test/ {
    default_type text/html;
    return 200 "nginx variable: \$host, and EB env property: $MASTER_DOMAIN";
}
LIMIT_STRING
systemctl restart nginx.service
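Remember that hook scripts must be marked executable before you zip the source bundle, e.g. (the filename here is just an example):
chmod +x .platform/hooks/postdeploy/01_write_nginx_conf.sh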
Alternatively, we could create a shell script in .platform/hooks/predeploy that writes to /var/proxy/staging/nginx/conf.d/proxy.conf.
There is no need to restart the nginx service in this case, because this hook is executed before the server configuration is applied.
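A minimal sketch of this variant, reusing the map block from the original post (the filename is made up, and MASTER_DOMAIN is the property assumed above):
#!/bin/bash
# hypothetical .platform/hooks/predeploy/02_write_proxy_conf.sh
# write into the staging area; EB copies it to /etc/nginx when it starts the proxy
cat > /var/proxy/staging/nginx/conf.d/proxy.conf << LIMIT_STRING
map \$http_origin \$cors_header {
    hostnames;
    default "";
    $MASTER_DOMAIN "\$http_origin";
}
LIMIT_STRING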
BEWARE:
Not sure if this is a bug or a design feature, but our newly created proxy.conf disappears after a configuration deployment (as opposed to an application deployment), unless we put a duplicate script in the .platform/confighooks/postdeploy directory. Not very DRY...
EDIT: AWS support confirmed that we need duplicate scripts in hooks and confighooks in this case. The application example in the docs also shows some duplicates (at least duplicate filenames) in hooks and confighooks.
EDIT:
Instead of duplicating scripts, we can also write a confighook that calls a hook, e.g. .platform/confighooks/predeploy/01_my_confighook.sh could look like this:
#!/bin/bash
source "/var/app/current/.platform/hooks/predeploy/01_my_hook.sh"
Disclaimer: This was tested on a freshly created single instance EB environment with "Python 3.7 running on 64bit Amazon Linux 2/3.1.5" using all default configuration and the default AWS Python sample application (only extended with our custom hooks).

Why does AWS elastic beanstalk fail to build my app?

I have an app written in Go, which I attempted to deploy to EB.
When trying to access it, I get an Error 502 from nginx, presumably because the app is not running.
Looking at the logs, I get a lot of errors like
14:01:29 build.1 | application.go:10:2: cannot find package "github.com/aws/aws-sdk-go/aws" in any of:
14:01:29 build.1 | /opt/elasticbeanstalk/lib/go/src/github.com/aws/aws-sdk-go/aws (from $GOROOT)
14:01:29 build.1 | /var/app/current/src/github.com/aws/aws-sdk-go/aws (from $GOPATH)
This is despite the fact that I have all of my dependencies included in the application bundle under a vendor subdirectory. How come EB does not use vendoring? According to the dashboard it is running Go 1.9, so vendoring should be supported.
You need to set the GOPATH in your EB environment to the root of your project directory, assuming there is a src directory where your vendor directory is located.
For instance, pretend this is your project structure:
app/
  src/
    vendor/
And pretend that project is located in ~/home, which makes its location ~/home/app.
Then your GOPATH should be set to ~/home/app. Go will attempt to access the dependencies through $GOPATH/src/vendor.
If this was the kind of structure you were using before, then you would need to have your GOPATH updated during local development as well; if you aren't already doing that, I imagine you're using a different kind of setup. This solution, however, will work as long as your project is structured as described above.
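As a minimal sketch of that local setup, using the hypothetical ~/home/app location from above:
# point GOPATH at the project root; vendored packages then resolve via $GOPATH/src/vendor
export GOPATH=~/home/app
# build everything under src/
cd $GOPATH/src && go build ./...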

Implementing Aptana Studio + Django + Vagrant&Virtual box

I set up my virtual environment via Vagrant and VirtualBox. I use the Aptana IDE for Django development, and I'm wondering if there's a way to integrate new projects in Aptana with the VM.
I've previously used virtualenv, and I just changed my Python path to include my virtualenv directory. However, with VirtualBox I'm not sure how to do that. I thought it'd be the same procedure, but apparently not. With virtualenv I was able to locate the projects I created within that directory. When I create a project via Vagrant + VirtualBox, I'm not able to locate the project directory anywhere; it's not in the dedicated directory I set up for virtual environments. Please help.
Thanks.
You can do this with Vagrant. Let me give you an example:
I. project structure:
/yourdjangoapp
  /...        # all your app stuff here
  /manage.py
/vagrant
  /provisioning
    /init.sh
/Vagrantfile
/.project
II. /Vagrantfile (simple example w/ default precise64 box):
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"
  config.vm.provision :shell, :path => "vagrant/provisioning/init.sh"
  config.vm.network :forwarded_port, guest: 8000, host: 8000
end
III. /vagrant/provisioning/init.sh
#!/bin/bash
# one-time setup: only run on the first boot of the VM
if [ ! -f /home/vagrant/.vm_initialized ]
then
    rm -rf /var/www
    ln -fs /vagrant /var/www
    touch /home/vagrant/.vm_initialized
fi
What's happening?
Vagrant creates a shared folder for each VM; this defaults to /vagrant within the guest OS and to the folder where you put your Vagrantfile on the host machine.
we add a provisioning shell script to our Vagrant config, so Vagrant will run it after booting the VM
in /vagrant/provisioning/init.sh we create a symbolic link from the shared folder to /var/www (just an example, so we can access it with Apache without further configuration)
as Vagrant runs this on every "vagrant up", we have to check whether the VM is already initialized (the if block in init.sh)
Where to go from here?
Well, you could start your development server with:
python /var/www/yourdjangoapp/manage.py runserver 0.0.0.0:8000
And you should be able to access it from your host machine via
http://localhost:8000/
You can now edit your project on the host with your favorite editor (it lives in /yourdjangoapp).
For a new project create/copy the project structure. As Vagrant creates a new VM for each project, the shared folder in the guest is always linked to your current project's folder.
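Day to day, the workflow then looks roughly like this (paths are the example ones from above):
cd /path/to/yourproject    # the folder containing the Vagrantfile
vagrant up                 # boots the VM and runs vagrant/provisioning/init.sh
vagrant ssh                # log in to the guest
python /var/www/yourdjangoapp/manage.py runserver 0.0.0.0:8000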
Working without provisioners
The example above shows the setup I use. My init.sh includes stuff like installing packages, pulling a Git repo, and so on.
Of course you can omit the provisioning part and work directly with the shared folder available at /vagrant in the guest OS. In that case you should be able to start the development server with:
python /vagrant/yourdjangoapp/manage.py runserver 0.0.0.0:8000
HTH
Christian