I'm working on a number of projects on the Cloud9 IDE, and it's really frustrating that I can't get the better_errors gem to work correctly. It isn't supposed to need initializing; it should just work out of the box. However, I still only get the usual ugly red errors page. I should specify that it is included in my Gemfile and I have run bundle install already.
How can I get better_errors to work correctly? Is there an installation step I'm missing?
The trick I used to get the 'better_errors' gem working in Cloud9 is setting the value of TRUSTED_IP to the public IP address of the computer my browser session is attached to. (As far as I can tell, it has nothing to do with the IP address of the remote server or the Cloud9 server addresses.)
I'll outline the process I used to get 'better_errors' working on my Cloud9 workspace, from my Chromebook on my residential network... maybe it will also work for you and others!
Add gem "better_errors" to the development group in the project Gemfile.
Add gem "binding_of_caller" to the project Gemfile.
Run bundle in the project's Cloud9 terminal.
Edit the project config/environments/development.rb file and add the following line of code to the end of the Rails.application.configure block.
BetterErrors::Middleware.allow_ip! ENV['TRUSTED_IP'] if ENV['TRUSTED_IP']
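For reference, the end of config/environments/development.rb would then look something like this (the surrounding lines are just illustrative placeholders):
Rails.application.configure do
  # ... your existing development settings ...

  # Whitelist whatever address is supplied via the TRUSTED_IP environment variable
  BetterErrors::Middleware.allow_ip! ENV['TRUSTED_IP'] if ENV['TRUSTED_IP']
end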
Create a new "runner" in Cloud9 by clicking "Run" > "Run With" > "New Runner".
Cloud9 creates a basic runner file in a new tab for you to modify. Replace the contents of this file with the following code.
{
    "cmd": [
        "bash",
        "--login",
        "-c",
        "TRUSTED_IP=XXX.XXX.XXX.XXX rails server -p $port -b $ip $args"
    ],
    "working_dir": "$project_path",
    "info": "Your code is running at \\033[01;34m$url\\033[00m.\n\\033[01;31m",
    "selector": "source.ru"
}
Replace XXX.XXX.XXX.XXX in the code above with the local computer's public IP address. (I use http://ifconfig.me/ to find the public IP assigned to my Chromebook.)
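If you prefer the terminal, a quick request to that service from the local machine (not the Cloud9 terminal) prints the same address:
# run on the local computer whose browser you use; prints its public IP
curl ifconfig.me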
Save the runner file with the name RoR.run into the /.c9/runners path for the project.
Start the project's server using this new runner: click Run > Run With > RoR.
Use the popup link that Cloud9 displays, after the runner starts the server, to view the app. Enjoy 'better_errors'!
NOTE: I still have not figured out how to automate the process of feeding the external IP address of my local computer into the RoR.run file that lives on the Cloud9 workspace. I just update it manually every time I move to a new network or my external IP address changes.
WARNING: I actually just started learning RoR, so I have no idea if this is the "correct" way to get this gem to work in a cloud dev server/service environment. I also have no idea how safe this would be. I suspect that my solution exposes the 'better_errors' in-browser REPL to all computers that resolve to that same external IP address. If you are working on a sensitive codebase/database please do not implement my solution.
I just tested this on cloud9.io, and this is the simplest way to make it work there:
Add the following line to config/environments/development.rb:
BetterErrors::Middleware.allow_ip! 'xxx.xxx.xxx.0/24'
where xxx.xxx.xxx is the first three octets of the IP address of the local machine that you are using to connect to cloud9.io.
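For example, if the local machine's public IP were 203.0.113.57 (a placeholder), the line would be:
BetterErrors::Middleware.allow_ip! '203.0.113.0/24'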
There is a good answer in the better_errors issues and the c9 docs.
Issues:
https://github.com/charliesome/better_errors/issues/318
c9 Help
https://community.c9.io/t/white-listing-remote-addr-for-better-errors-gem/4976/4
Use a Rack::Request object to get the IP. You can put the following code in your view.
if Rails.env.development?
  request = Rack::Request.new(env)
  puts "###### Request IP_ADDRESS = #{request.ip}"
end
Change the last octet of the IP you get to 0/24. For example:
BetterErrors::Middleware.allow_ip! '76.168.69.0/24' # note: change the last octet to 0/24; your IP address will of course differ from 76.168.69.xx
Yeah!! I got it! Automatically!
Here is my solution:
1- Similar to what #Grokcodile described: edit the project's config/environments/development.rb file and add the following lines of code to the Rails.application.configure block.
BetterErrors::Middleware.allow_ip! ENV['TRUSTED_IP'] if ENV['TRUSTED_IP']
config.web_console.whitelisted_ips = ENV['TRUSTED_IP']
2- In Cloud9, edit ~/.bashrc...
vi ~/.bashrc
add the line (enter, alt+a):
export TRUSTED_IP='0.0.0.0/0.0.0.0'
Save it (esc, :wq)
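To confirm the variable is picked up (note that 0.0.0.0/0.0.0.0 whitelists every address, i.e. better_errors will trust any client, which is what makes this work without knowing your IP):
source ~/.bashrc   # reload the file in the current shell
echo $TRUSTED_IP   # should print 0.0.0.0/0.0.0.0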
3- run rails s -b $IP -p $PORT as usual...
4- Enjoy better errors!!
If you also work on this project in a virtual machine (Vagrant):
1- On your VM (Vagrant), edit ~/.bash_profile (in my case) and add:
export TRUSTED_IP=x.x.x.x
export PORT=3000
export IP=0.0.0.0
x.x.x.x must be equal to the REMOTE_ADDR in ENV. (This is not a problem like Cloud9, because on my VM the IP doesn't change every time; it is always 10.0.2.2 for me.)
With this I am now able to use the foreman gem (foreman start) in both places with this Procfile:
web: rails s -b $IP -p $PORT
This works because the global environment variables are set in both places.
I am just starting to learn RoR too, so I hope this is the right thing to do and won't cause more problems in the future.
Because Cloud9 is all web-based, you don't access it from localhost, so by default better_errors won't work. If you take a look at the security section of its README (https://github.com/charliesome/better_errors) you can add the following to config/environments/development.rb:
BetterErrors::Middleware.allow_ip! <ipaddress>
This makes the errors page show for your IP. You can find your apparent IP by hitting the old error page's "Show env dump" and looking at "REMOTE_ADDR".
Related
I have deployed two rails apps to Digital Ocean, Ubuntu 18.04 with Passenger and Nginx.
Both apps were built on rails 5.2.2 with ruby 2.5.1, and the second app has all the same gems at the same versions. While the first app runs fine, the second will not launch.
The last useful line of the Passenger log says:
[ E 2020-08-06 22:41:56.6186 30885/T1i age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/html/AppName_Prod/current: The application encountered the following error: ActiveSupport::MessageEncryptor::InvalidMessage (ActiveSupport::MessageEncryptor::InvalidMessage)
I know this is something to do with the master.key file, but that is present and contains the correct key. I'm not using environment variables to store the master keys; they are in the master.key file inside each app's directory structure.
I've read every SO post I could find on this and none have solved my issue.
Any suggestions for getting these two apps (and more) to work on the same droplet?
I'm all out of ideas.
Thank you for any help you can offer.
For anyone who might have the same issue, it was a bit deceptive.
I had tried rails credentials:edit and it didn't fix the issue, but I found that the app's containing folder was owned by user:user, whereas my other app was owned by user:root.
When I changed this, everything started to work.
I hope it helps someone because I didn't find this info anywhere online and it was a lot of trial and error.
Use ls -l to list the current owner of folders in the current working directory, so you can compare them.
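A rough sketch of the check and fix, assuming the paths from the question (adjust the owner and group to whatever your working app uses):
# compare ownership of the two app directories
ls -l /var/www/html/
# if the broken app's owner differs, change it to match the working one, e.g.:
sudo chown -R user:root /var/www/html/AppName_Prod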
For me, this turned out to be somewhat complicated. I had provisioned my server using Ansible, which has a task to copy the Nginx conf. After provisioning the server, I changed RAILS_MASTER_KEY.
It turns out that my Ansible task does not rewrite the Nginx conf if it already exists on the server (the file is not compared, I guess). So although I updated RAILS_MASTER_KEY in my Ansible playbook (and it was even getting copied across to the server's environment variables!), it was not accessible to Rails through Passenger, because Passenger does not pass on the user's environment variables.
Whew!
To fix this (and create a snowflake server in the process...) I manually logged into the server and updated RAILS_MASTER_KEY to my new value in the Nginx passenger_env_var.
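For reference, the directive goes in the app's server block in the Nginx config; a sketch with a placeholder key looks like this (the real value comes from the app's config/master.key):
server {
    # ... existing listen/root/passenger settings ...
    passenger_env_var RAILS_MASTER_KEY 0123456789abcdef0123456789abcdef;
}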
I am migrating a Django application from Openshift v2 to v3 (In case you don't know, RedHat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress : I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I log into the failing container with oc debug and can see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. In any case, I wanted to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details: any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
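A minimal sketch of such a run script, assuming app.sh is the script that actually starts your Django app (invoking it through bash also sidesteps the missing execute bit on app.sh itself):
#!/bin/bash
# .s2i/bin/run: custom S2I run script (sketch)
exec bash /opt/app-root/src/app.sh
As with app.sh, it's worth committing the run script with the execute bit set (see the git note in the last answer below).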
Regarding your immediate problem, there is a very simple reason why you can not change the permissions of the script: you were trying to modify the permissions in the deployed pod, and not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, and definitely do not match the file ownership as generated by the build. Hence permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I found a way to resolve it.
You have to make app.sh executable and push it to your repo as such.
If git does not track this modification, as was the case for me, you have to use git update-index --chmod=+x app.sh for it to work.
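Roughly, the sequence in the local repo is (a sketch):
chmod +x app.sh                        # set the bit locally
git update-index --chmod=+x app.sh     # make sure git records the mode change
git commit -m "Make app.sh executable"
git push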
I have an Elastic Beanstalk application that I'm trying to configure to connect to a FileMaker Pro database, over JDBC. The code I'm using is:
import jaydebeapi as jdb

jdbc_driver_location = '/tmp/fmjdbc.jar'
conn = jdb.connect(jdbc_driver_class,
                   jdbc_connection_type + '://' + db_url + '/' + db_name,
                   [user_name, password], jdbc_driver_location)
When I attempt this, I get the following error:
java.sql.SQLException: No suitable driver found for jdbc:filemaker://10.120.120.108/carecord-<class 'jpype._jexception.java.sql.SQLExceptionPyRaisable'>
To try to solve the problem, I've added the JDBC .jar to both the /tmp folder of the EC2 instance and the project directory. When I SSH into the EC2 instance and issue the command:
JAVA_HOME=/tmp/fmjdbc.jar
The program will run the next time it's prompted, without issue. After a few hours it gives the original error again and needs the above command to be issued again to work. To fix this I tried adding the following to .ebextensions, to copy the .jar into the /tmp folder from the project directory and issue the above command on the server from the start:
commands:
  command01:
    command: sudo cp /opt/python/current/app/fmjdbc.jar /tmp/fmjdbc.jar
  command02:
    command: JAVA_HOME=/tmp/fmjdbc.jar
But the project still gives the error. Any thoughts on how I can add this driver to the classpath such that the job will run consistently?
To help folks who have this issue in the future, the answer I found was at the end of this thread.
I appended the following:
if jpype.isJVMStarted() and not jpype.isThreadAttachedToJVM():
    jpype.attachThreadToJVM()
    jpype.java.lang.Thread.currentThread().setContextClassLoader(jpype.java.lang.ClassLoader.getSystemClassLoader())
Just above the
jdbc_driver_location = '/tmp/fmjdbc.jar'
section of my original code above. This allows the application to loop and successfully find the necessary driver.
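Put together, the fix sits directly above the driver-location line, roughly like this (a sketch; jdbc_driver_class and the other connection variables are the ones defined in the question above):
import jpype
import jaydebeapi as jdb

# Re-attach this thread to the already-running JVM and restore the system
# class loader so the driver jar stays visible across runs (the fix above)
if jpype.isJVMStarted() and not jpype.isThreadAttachedToJVM():
    jpype.attachThreadToJVM()
    jpype.java.lang.Thread.currentThread().setContextClassLoader(jpype.java.lang.ClassLoader.getSystemClassLoader())

jdbc_driver_location = '/tmp/fmjdbc.jar'
conn = jdb.connect(jdbc_driver_class,
                   jdbc_connection_type + '://' + db_url + '/' + db_name,
                   [user_name, password], jdbc_driver_location)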
JAVA_HOME is supposed to point to the location where Java is installed on the server. You don't use JAVA_HOME to add libraries to the classpath. You shouldn't have to set any environment variables for your code to work.
The root of your problem is that you are copying the file to /tmp/fmjdbc.jar but you are setting jdbc_driver_location to be /tmp/jdbc.jar. Notice how those file names are different. To fix your code change it to this:
jdbc_driver_location = '/tmp/fmjdbc.jar'
Here's what I'm working with right now:
Ubuntu Trusty 14.04
Rails 4.2.6
Ruby 2.2.3
Passenger
Nginx
When I try to visit the IP I get this message:
Incomplete response received from application
When I look at nginx/error.log I see:
Missing `secret_token` and `secret_key_base` for 'production' environment, set these values in `config/secrets.yml`
On the server I did:
RAILS_ENV=production bundle exec rake secret
I placed that result into each of these files for good measure:
~/.bashrc
~/.bash_profile
~/.profile
/app/shared/config/local_env.yml
For all shell scripts the format is:
export SECRET_KEY_BASE="[key]"
For the local_env.yml I used just:
SECRET_KEY_BASE="[key]"
I've also tried entering it without quotation marks.
I've restarted the server each time I made a change. No cigar.
What else might be the issue?
-- UPDATE
I've even added the secret key to the secrets.yml file directly. So now I'm thinking my issue is either something to do with passenger/nginx or with a typo somewhere.
It is more likely that the environment variables are not actually set than that Rails is not picking them up. You're raking secrets, which I don't do; I set them up manually in the Unix /etc/environment and do not check any secrets into source control. The following steps should help you either resolve or home in on the problem.
On your Ubuntu server, for system-wide environment variables:
1- $env
Look for your SECRET_TOKEN and SECRET_KEY_BASE. The error tells you that these are not set; this is just a technique to check the environment. (RAILS_ENV will also be shown in the list if it is set.)
2- $sudo nano /etc/environment
Add the following lines -- use your actual values between double quotes. Do not use a [key] or any programmatic replacement.
export SECRET_TOKEN="T99ABC..."
export SECRET_KEY_BASE="99ABC..."
3- $logout / $login to reload environment vars
4- $env - Check the environment again
Look for your SECRET_TOKEN and SECRET_KEY_BASE to be set.
5- Try deploying again. If it fails, check the environment vars using $env again. It will tell you if something in your deploy is smashing your SECRET_* env vars.
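Separately, if you want Rails to read the key from the environment via config/secrets.yml (the file named in the error), it's worth confirming yours matches the stock Rails 4.2 layout, which looks like this (sketch):
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>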
I'm running an application on elastic beanstalk.
How do I find my application name? In other words, how does the application running in Elastic Beanstalk find out information about itself,
or other information about the environment that the current application is running in?
I wouldn't be surprised if some of this information is available via system properties.
UPDATE: something I forgot to mention (sorry). It's a Java app and I'd prefer to use the Java SDK to acquire this information.
An alternative, and rather nasty, way to find the name of the environment is to check the root folder of the Elastic Beanstalk instance. As of today, there is a file there named /<aws-env-name>_LaunchFile in the root folder of the EC2 instance (caveat emptor: this can change at any time).
For example, if your environment name is "mycoolapp-dev" then there will be a file called mycoolapp-dev_LaunchFile in the root directory of your Elastic Beanstalk instance. For things like Loggly and New Relic to work correctly, it's sometimes useful to give your host a proper hostname (both services still record the IP, which is the original EC2 IP).
The command snippet below can be pasted into a .config file in your .ebextensions folder to set the hostname to mycoolapp-dev so these services work.
commands:
  00_set_hostname:
    command: "hostname `ls /*_LaunchFile | sed -e 's/\/\(.*\)_LaunchFile$/\1/'`"
Or a really nice solution is to use this link by Steel Pangolin - Jeremy Ehrhardt
You can query metadata about the instance by making a request from the instance to this internal address:
http://169.254.169.254/latest/meta-data/
There are many different methods you can use to query and parse the results of this data for your purposes.
See more information about Instance Metadata and Userdata here.
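For example, from a shell on the instance (the same endpoint can be read from Java with any HTTP client):
# list the available top-level metadata keys
curl -s http://169.254.169.254/latest/meta-data/
# fetch a specific value, e.g. the instance ID
curl -s http://169.254.169.254/latest/meta-data/instance-id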