How to change Default App in Dokku for Domain Host Name - digital-ocean

As a personal project I've deployed the Dokku image on DigitalOcean and got everything working well (it works very well, as I've done this before), but I have a question on how I can change what "default" app the domain host name points to.
Here is my setup.
I set up Dokku on DigitalOcean with the "Hostname" and "Virtualhost Naming" options selected. So basically this means that I have my own domain name being used to point to apps instead of the IP/port. Let's assume my domain name is mydomain.com.
I have 2 Dokku apps running on this DigitalOcean server. Let's call them app-a and app-b. As I enabled "virtualhost naming" these apps can be accessed like so:
app-a.mydomain.com
app-b.mydomain.com
All this works perfectly.
Now I notice that if I go to "mydomain.com" or "www.mydomain.com" in the browser it "defaults/redirects with masking" to "app-b.mydomain.com". My guess is that because app-b was the last app I set up, NGINX has defaulted to it.
So how can I change this behaviour? I.e., I need "mydomain.com" or "www.mydomain.com" to go to app-a instead.
Thanks very much in advance.
Mark.

By default, dokku will route any received request with an unknown HOST header value to the lexicographically first site in the nginx config stack.
I believe you can add the root domain using the domains plugin:
# add a domain to an app
dokku domains:add app-a mydomain.com
or
You can also specify fully qualified names as the name of the app
$ git remote add dokku dokku@dokku.me:mydomain.com
$ git push dokku master
or
Alternatively, you may push an app to your dokku host with a name like "00-default". As long as it sorts first alphabetically, it will be used as the default nginx vhost.
References:
http://progrium.viewdocs.io/dokku/application-deployment/
http://progrium.viewdocs.io/dokku/nginx/

Related

Non-starting rails 5.2 app - ActiveSupport::MessageEncryptor::InvalidMessage

I have deployed two rails apps to Digital Ocean, Ubuntu 18.04 with Passenger and Nginx.
Both apps were built on rails 5.2.2 with ruby 2.5.1, and the second app has all the same gems at the same versions. While the first app runs fine, the second will not launch.
The last useful line of the Passenger log says:
[ E 2020-08-06 22:41:56.6186 30885/T1i age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/html/AppName_Prod/current: The application encountered the following error: ActiveSupport::MessageEncryptor::InvalidMessage (ActiveSupport::MessageEncryptor::InvalidMessage)
I know this is something to do with the master.key file, but that is present and contains the correct key. I'm not using environment vars to store the master keys - they are in the master.key file inside each app's dir structure.
I've read every SO post I could find on this and none have solved my issue.
Any suggestions for getting these two apps (and more) to work on the same droplet?
I'm all out of ideas.
Thank you for any help you can offer.
For anyone who might have the same issue, it was a bit deceptive.
I had tried rails credentials:edit and it didn't fix the issue, but I found that the app's containing folder was owned by user:user, whereas my other app was owned by user:root.
When I changed this, everything started to work.
I hope it helps someone because I didn't find this info anywhere online and it was a lot of trial and error.
Use ls -l to list the current owner of folders in the current working directory, so you can compare them.
For me, this turned out to be somewhat complicated. I had provisioned my server using Ansible, which has a task to copy the Nginx conf. After provisioning the server, I changed RAILS_MASTER_KEY.
It turns out that my Ansible task does not re-write the Nginx conf if it already exists on the server (the file contents are not compared, I guess). So although I updated RAILS_MASTER_KEY in my Ansible playbook (and it was even getting copied across to the server's environment variables!), it was not accessible to Rails through Passenger, because Passenger does not pass on the user's environment variables.
Whew!
To fix this (and create a snowflake server in the process...) I manually logged into the server and updated RAILS_MASTER_KEY to my new value in the Nginx passenger_env_var.

With docker compose, how do I access a service internally and externally using the same address?

My problem boils down to this: I have two services in docker compose: app and storage. I'm looking for a way to access the storage service (port 9000) from inside app and from outside using the same address.
app is a Django app using django-storages with S3 backend.
storage is a minio server (S3 compatible, used only for development).
From app, I can access storage using http://storage:9000. From outside docker, I can access storage at http://localhost:9000, or http://0.0.0.0:9000, or even at http://192.168.xxx.yyy (using a different device on the network). No surprises there.
However, when the URL is generated, I don't know whether it's going to be used internally or externally (or both).
docker-compose.yml
services:
  app:
    build: backend/
    ports:
      - "8000:8000"
    volumes:
      - ./backend/src:/app/src
    command: /usr/local/bin/python src/manage.py runserver 0.0.0.0:8000
  storage:
    image: minio/minio:RELEASE.2019-06-19T18-24-42Z
    volumes:
      - storage:/data
    environment:
      MINIO_ACCESS_KEY: "DevelopmentAccessKey"
      MINIO_SECRET_KEY: "DevelopmentSecretKey"
    ports:
      - "9000:9000"
    command: minio server /data
volumes:
  storage:
I've looked into changing the backend to yield endpoint urls depending on the context, but that is far from trivial (and would only be for development, production uses external S3 storage, I like to keep them as similar as possible).
I've played around with docker-compose network configs but I cannot seem to make this work.
Any thoughts on how to approach this in docker-compose?
Additional info:
I've played around with host.docker.internal (and gateway.docker.internal) but to no avail. host.docker.internal resolves to 192.168.65.2, I can access storage from app with that ip, but from the browser 192.168.65.2:9000 gives a timeout.
But it seems that using my computer's external IP works. If I use 192.168.3.177:9000 I can access storage from app, from the browser, and even from external devices (perfect!). However, this IP is not fixed and obviously not the same for my colleagues, so it seems all I need is a way to dynamically assign it when doing docker-compose up.
It's been a while but I thought I'd share how I ended up solving this issue for my situation, should anyone ever come across a similar problem. Relevant XKCD
Practical solution
After spending quite some time to make it work with docker only (see below), I ended up going down the practical road and fix it on the Django side of things.
Since I'm using Django Rest Framework to expose the urls of objects in the store, I had to patch the default output of object urls created by the Django Storages S3 backend, to swap the host when developing locally.
Internally, Django uses the API key to connect directly to the object store, but externally the files are only accessible with signed urls (private bucket). And because the hostname can be part of what is signed, it needs to be set correctly before the signature is generated (otherwise a dirty find-and-replace for hostname would suffice.)
Three situations I had to patch:
signed urls (for viewing in the browser)
signed download urls (to provide a download button)
presigned post urls (for uploading)
I wanted to use the host of the current request as host of the object links (but on port 9000 for Minio). The advantages of this are:
works with localhost, 127.0.0.1, and whatever ip address my machine is assigned. So I can use localhost on my machine and use my 192.168.x.x address from a mobile for testing without changing code.
requires no setup for different developers
doesn't require a container restart when ip is changed
The situations above were implemented as follows:
# dev settings, should be read from env for production etc.
AWS_S3_ENDPOINT_URL = 'http://storage:9000'
AWS_S3_DEV_ENDPOINT_URL = 'http://{}:9000'

# imports assumed for the snippet below (django-storages S3 backend and DRF)
from django.conf import settings
from rest_framework import serializers
from storages.backends.s3boto3 import S3Boto3Storage

def get_client_for_presigned_url(request=None):
    # specific client for presigned urls
    endpoint_url = settings.AWS_S3_ENDPOINT_URL
    if request and settings.DEBUG and settings.AWS_S3_DEV_ENDPOINT_URL:
        endpoint_url = settings.AWS_S3_DEV_ENDPOINT_URL.format(
            request.META.get('SERVER_NAME', 'localhost'))
    storage = S3Boto3Storage(
        endpoint_url=endpoint_url,
        access_key=settings.AWS_ACCESS_KEY_ID,
        secret_key=settings.AWS_SECRET_ACCESS_KEY,
    )
    return storage.connection.meta.client

class DownloadUrlField(serializers.ReadOnlyField):
    # example usage as pre-signed download url
    def to_representation(self, obj):
        url = get_client_for_presigned_url(self.context.get('request')).generate_presigned_url(
            "get_object",
            Params={
                "Bucket": settings.AWS_STORAGE_BUCKET_NAME,
                "Key": str(obj.file_object),  # file_object is key for object store
                "ResponseContentDisposition": f'filename="{obj.name}"',  # name is user readable filename
            },
            ExpiresIn=3600,
        )
        return url

# similar for normal url and pre-signed post
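For the pre-signed post case mentioned in that last comment, the pattern is the same; here is a minimal sketch reusing the client helper above (the helper and settings names come from this answer, while the function name and expiry are illustrative):
def get_presigned_post_data(obj, request=None):
    # returns a dict with 'url' and 'fields' for a direct browser upload form
    client = get_client_for_presigned_url(request)
    return client.generate_presigned_post(
        Bucket=settings.AWS_STORAGE_BUCKET_NAME,
        Key=str(obj.file_object),
        ExpiresIn=3600,
    )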
This gives me and other developers an easy to use, local, offline available development object store, at the price of a small check in code.
Alternative solution
I quickly found out that to fix it on the docker side, what I really needed was to get the IP address of the host machine (not the docker host) and use that to create links to my Minio storage. Like I mentioned in my question, this was not the same as the host.docker.internal address.
Solution: using env variable to pass in the host ip.
docker-compose.yml
services:
  app:
    build: backend/
    ports:
      - "8000:8000"
    environment:
      HOST_IP: $DOCKER_HOST_IP
    volumes:
      - ./backend/src:/app/src
    command: /usr/local/bin/python src/manage.py runserver 0.0.0.0:8000
  # ... same as in question
settings.py
import os

AWS_S3_ENDPOINT_URL = f'http://{os.environ["HOST_IP"]}:9000'
When the environment variable DOCKER_HOST_IP is set at the time docker-compose up is called, this will create URLs that use that IP, properly signed.
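If you want the container to keep working even when the variable is missing, a small variation is possible; in this sketch the fallback to the internal service name storage is my own assumption, not part of the original setup:
import os

# assumption: fall back to the internal docker-compose hostname when HOST_IP is unset
AWS_S3_ENDPOINT_URL = 'http://{}:9000'.format(os.environ.get('HOST_IP', 'storage'))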
Several ways to get the environment variable to docker-compose:
set it up in .bash_profile
pass it to the command with the -e flag
set it up in PyCharm
For .bash_profile I used the following shortcut:
alias myip='ifconfig | grep "inet " | grep -v 127.0.0.1 | cut -d" " -f2'
export DOCKER_HOST_IP=$(myip)
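If parsing ifconfig output feels brittle, the same address can be derived programmatically; here is a hedged Python sketch of one common approach (the 8.8.8.8 target is arbitrary, and a UDP connect sends no actual traffic):
import socket

def get_host_ip():
    # "connecting" a UDP socket only selects a route; nothing is sent to 8.8.8.8
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(('8.8.8.8', 80))
        return s.getsockname()[0]

if __name__ == '__main__':
    print(get_host_ip())  # e.g. export DOCKER_HOST_IP=$(python get_host_ip.py)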
For PyCharm (very useful for debugging) setup was a little trickier, since the default environment variables cannot be dynamic. You can, however, define a script that runs 'Before launch' for a 'run configuration'. I created a command that sets the environment variable in the same way as in .bash_profile, and miraculously it seems that PyCharm keeps that environment when running the docker-compose command, making it work the way I want.
Issues:
need to restart container when ip changes (wifi off/on, sleep/wake in different location, ethernet unplugged)
needs a dynamic value in the environment, finicky to set up properly
doesn't work when not connected to a network (no ethernet and no wifi)
cannot use localhost, need to use current ip address
only works for one ip address (so need to pick one when using ethernet and wifi)
Because of these issues I ended up going with the practical solution.

How to deploy Play on Amazon Beanstalk keeping /public editable for a single page application?

I am looking for alternative methods of deploying a Play application to Elastic Beanstalk. It is a single page app that relies on Ember.js. It would be nice to be able to edit the contents of the /public folder so I don't need to rebuild the docker image every time something is fixed on the Ember side that doesn't affect the Play app itself.
I am currently using sbt's docker:stage command and zipping the generated docker folder along with this Dockerfile and Dockerrun.
Dockerfile
FROM java:latest
WORKDIR /opt/docker
ADD stage /
RUN ["chown", "-R", "daemon:daemon", "."]
EXPOSE 9000
USER daemon
ENTRYPOINT ["bin/myapp", "-Dconfig.resource=application-prod.conf"]
CMD []
Dockerrun
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{ "ContainerPort": "9000" }],
  "Volumes": []
}
Once I zip the files I upload them using the Beanstalk console. But this involves rebuilding the app every time a typo is fixed on the front end. It is annoying because it means all the updated front-end code has to wait until I get a chance to push it up so the boss can view it and give feedback. It would be nice if there was a way to have the /public folder (Play just serves /public/index.html) accessible so the front-end dev could access it directly for his edits.
Ideally I would like some method that can be used for both development and production. I don't know the requirements imposed by Beanstalk so it can properly spin up extra instances of the app when needed. Maybe something where, when the instance starts, it does a git pull on the backend repo and a git pull on the front-end repo, then runs my custom Ember build script to generate the /dist folder, moves it into Play's /public folder, and creates gzips of each file. Then it starts the Play app. The front-end dev could then ssh into the development instance and do git pull and ember build as needed for his edits.
It would also be nice for the development instance to run the Play server using run or ~run so I can just do git pull and have it rebuild the backend.
Or maybe I am approaching this in the completely wrong way. I have never done any of this before so I am sort of guessing my way through all of it.
Thanks for any suggestions and pointers in the correct direction.
Adam
Edit
Since we are really only using Play as a RESTful API, would it be better to just run an nginx/Apache server on something like EC2 and use Beanstalk to handle the Play app without it serving any content besides API calls? I would assume the EC2 nginx could be pretty tiny, since only the first access would pull files from the HTTP server. After that it is all API calls. Then we run the Play app from Beanstalk so it can handle load balancing for the API. This at least saves me from rebuilding the image for front-end edits. Would this be a more correct setup?

How to find info of the environment "I'm" deployed in

I'm running an application on elastic beanstalk.
How do I find my application name? In other words, how does the application running in Elastic Beanstalk find out information about itself, or other information about the environment that the current application is running in?
I wouldn't be surprised if some of this information is available via system properties.
UPDATE: something I forgot to mention (sorry). It's a Java app and I'd prefer to use the Java SDK to acquire this information.
An alternative, and rather nasty, way to find the name of the environment is to check the root folder of the Elastic Beanstalk instance. As of today, there is a file named /<aws-env-name>_LaunchFile in the root folder of the EC2 instance (caveat emptor: this can change at any time).
For example, if your environment name is "mycoolapp-dev" then there will be a file called mycoolapp-dev_LaunchFile in the root directory of your Elastic Beanstalk instance. For things like Loggly and New Relic to work correctly, it's sometimes useful to give your host a proper hostname (both services still record the IP, which is the original EC2 IP).
The command snippet below can be pasted into a .config file in your .ebextensions folder to set the hostname to mycoolapp-dev so these services work.
commands:
  00_set_hostname:
    command: "hostname `ls /*_LaunchFile | sed -e 's/\/\(.*\)_LaunchFile$/\1/'`"
Or a really nice solution is to use this link by Steel Pangolin - Jeremy Ehrhardt
You can query Metadata about the instance by navigating from the instance to this internal address:
http://169.254.169.254/latest/meta-data/
There are many different methods you can use to query and parse the results of this data for your purposes.
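For example, a minimal Python sketch against the metadata endpoint (this assumes the older unauthenticated IMDSv1-style access; the same GET can be made from any language, including Java):
import urllib.request

def get_metadata(path):
    # IMDSv1-style GET against the link-local instance metadata service
    url = 'http://169.254.169.254/latest/meta-data/' + path
    with urllib.request.urlopen(url, timeout=2) as resp:
        return resp.read().decode()

print(get_metadata('instance-id'))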
See more information about Instance Metadata and Userdata here.

Why doesn't better_errors work on cloud 9 ide?

I'm working on a number of projects in the Cloud9 IDE, and it's really frustrating that I can't get the better_errors gem to work correctly. It isn't supposed to need initializing; it should just work out of the box. However, I still only get the usual ugly red errors page. I should specify that it is included in my Gemfile, and I have already run bundle install.
How can I get better errors to work correctly? Is there an installation step I'm missing?
The trick I used to get the 'better_errors' gem working in Cloud9 is setting the value of TRUSTED_IP to the public IP address of the computer my browser session is attached to. (As far as I can tell, it has nothing to do with the IP address of the remote server or Cloud9 server addresses.)
I'll outline the process I used to get 'better_errors' working on my Cloud9 workspace, from my Chromebook on my residential network... maybe it will also work for you and others!
Add gem "better_errors" to the development group in the project Gemfile.
Add gem "binding_of_caller" to the project Gemfile.
Run bundle in the project's Cloud9 terminal.
Edit the project config/environments/development.rb file and add the following line of code to the end of the Rails.application.configure block.
BetterErrors::Middleware.allow_ip! ENV['TRUSTED_IP'] if ENV['TRUSTED_IP']
Create a new "runner" in Cloud9 by clicking "Run" > "Run With" > "New Runner".
Cloud9 creates a basic runner file in a new tab for you to modify. Replace the contents of this file with the following code.
{
  "cmd": [
    "bash",
    "--login",
    "-c",
    "TRUSTED_IP=XXX.XXX.XXX.XXX rails server -p $port -b $ip $args"
  ],
  "working_dir": "$project_path",
  "info": "Your code is running at \\033[01;34m$url\\033[00m.\n\\033[01;31m",
  "selector": "source.ru"
}
Replace XXX.XXX.XXX.XXX in the code above with the local computer's public IP address. (I use http://ifconfig.me/ to find the public IP assigned to my Chromebook.)
Save the runner file with the name RoR.run into the /.c9/runners path for the project.
Start the projects server by using this new runner. Click Run > Run With > RoR.
Use the popup link that Cloud9 displays, after the runner starts the server, to view the app. Enjoy 'better_errors'!
NOTE: I still have not figured out how to automate the process of feeding the external IP address of my local computer into the RoR.run file that lives on the Cloud9 workspace. I just update it manually every time I move to a new network or my external IP address changes.
WARNING: I actually just started learning RoR, so I have no idea if this is the "correct" way to get this gem to work in a cloud dev server/service environment. I also have no idea how safe this would be. I suspect that my solution exposes the 'better_errors' in-browser REPL to all computers that resolve to that same external IP address. If you are working on a sensitive codebase/database please do not implement my solution.
I just tested this in cloud9.io and this is the simplest way to make this work in cloud9.io:
Add the following line to config/environments/development.rb:
BetterErrors::Middleware.allow_ip! 'xxx.xxx.xxx.0/24'
where xxx.xxx.xxx is the first three sections of the IP address of the local machine that you are using to connect to cloud9.io
There is a good answer in the better_errors issues and the c9 docs.
Issues:
https://github.com/charliesome/better_errors/issues/318
c9 Help
https://community.c9.io/t/white-listing-remote-addr-for-better-errors-gem/4976/4
Use a Rack::Request object to get the IP. You can put the following code in your view.
if Rails.env.development?
  request = Rack::Request.new(env)
  puts "###### Request IP_ADDRESS = #{request.ip}"
end
Change the last octet of the IP you get to 0/24. For example:
BetterErrors::Middleware.allow_ip! '76.168.69.0/24'
(Note: change the last octet to 0/24; your IP address will of course be different from 76.168.69.xx.)
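If you'd rather compute that /24 network than edit the address by hand, the arithmetic is simple; here is a quick illustrative sketch in Python (the address is just the example one above):
import ipaddress

addr = '76.168.69.12'  # substitute the REMOTE_ADDR / request.ip you found
print(ipaddress.ip_network(addr + '/24', strict=False))  # -> 76.168.69.0/24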
Yeah!! I got it! Automatically!
Here is my solution:
1- Similar to what @Grokcodile described: edit the project's config/environments/development.rb file and add the following lines of code to the Rails.application.configure block.
BetterErrors::Middleware.allow_ip! ENV['TRUSTED_IP'] if ENV['TRUSTED_IP']
config.web_console.whitelisted_ips = ENV['TRUSTED_IP']
2- On Cloud9, edit ~/.bashrc...
vi ~/.bashrc
add the line (enter, alt+a):
export TRUSTED_IP='0.0.0.0/0.0.0.0'
Save it (esc, :wq)
3- run rails s -b $IP -p $PORT as usual...
4- Enjoy better errors!!
If you also work on this project at a Virtual Machine(vagrant):
1- edit at your VM (vagrant) your ~/.bash_profile (my case) and add:
export TRUSTED_IP=x.x.x.x
export PORT=3000
export IP=0.0.0.0
x.x.x.x must be equal to the REMOTE_ADDR of ENV. (This is not a problem like on Cloud9, because on my VM the IP doesn't change every time: it's always 10.0.2.2 for me.)
With this I am now able to use the foreman gem: foreman start works in both places with the Procfile:
web: rails s -b $IP -p $PORT
This works because the global env variables are set on both.
I am just starting to learn RoR too, so, hope this is the right thing to do without bringing more problems in the future.
Because Cloud9 is all web-based, you don't access it from localhost, so by default better_errors won't work. If you take a look at the security section of their README (https://github.com/charliesome/better_errors) you can add the following to config/environments/development.rb:
BetterErrors::Middleware.allow_ip! <ipaddress>
So that the errors page shows for your IP. You can find your apparent IP by hitting the old error page's "Show env dump" and looking at "REMOTE_ADDR".