Background: I have a gradle project, and I use jettyRun with success. I want Jetty to pick up code changes immediately without requiring a server bounce. Gradle supports hot deploy, and I understand that all I need to do is turn on "scanning", but I am not sure exactly what I need to do. Here's what I tried:
jettyRun {
    httpPort = 8989
    reload = 'automatic'
    scanIntervalSeconds = 2
    daemon = false
}
Thanks.
That's how it should work. Remember that the jetty plugin picks up the resources from the 'build' folder and not from the src folder.
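In other words, nothing recompiles your sources while jetty is running, so the scanner only sees changes once they land under build/. A sketch of one possible workflow (assuming a Gradle version with continuous build support, the -t flag):
# terminal 1: start jetty with scanning enabled
gradle jettyRun

# terminal 2: rebuild on every source change so the scanner picks up fresh classes
gradle -t classes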
cheers,
René
When working with a ColdFusion server you can access the CFIDE/administrator to set config values, which update the cfusion/lib/ xml files (e.g. neo-runtime.xml, neo-mail.xml, etc.)
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the wddx xml file and replace the attribute values. I'm having trouble finding information about how to do this method.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, 9 to 11 and from 11 to 2016. Environments would have to be mixed as it took time to verify the applications worked with each new version of CF. Each server got their correct XML files for that environment and scripts would copy updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
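For illustration, the per-environment copy scripts amounted to something like this (all paths here are made up):
# push the dev-environment settings files into an instance's lib directory
cp /srv/cf-settings/dev/neo-*.xml /opt/coldfusion/instance1/lib/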
There is a very useful tool for this kind of automation developed by Ortus Solutions, called cfconfig, which can be installed with their CommandBox command-line utility. This tool isn't only capable of setting administrator configurations: it can also export/import settings to a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
1. Install CommandBox as part of the deployment script
2. Install CFConfig as part of the deployment script
3. Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this JSON file in source control for each type/env of box.
4. Use CFConfig to import the config.json as part of the deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSL https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe@11.0.19
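The export side (step 3) is the mirror image, run once against the existing box you want to clone settings from; a sketch with illustrative paths:
# Export settings from an existing server into a JSON file for source control
box cfconfig export to=/<path-to-config>/config.json from=/opt/ColdFusion/cfusion/ fromFormat=adobe@11.0.19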
I have deployed two rails apps to Digital Ocean, Ubuntu 18.04 with Passenger and Nginx.
Both apps were built on rails 5.2.2 with ruby 2.5.1, and the second app has all the same gems at the same versions. While the first app runs fine, the second will not launch.
The last useful line of the Passenger log says:
[ E 2020-08-06 22:41:56.6186 30885/T1i age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/html/AppName_Prod/current: The application encountered the following error: ActiveSupport::MessageEncryptor::InvalidMessage (ActiveSupport::MessageEncryptor::InvalidMessage)
I know this is something to do with the master.key file, but that is present and contains the correct key. I'm not using environment vars to store the master keys - they are in the master.key file inside each app's dir structure.
I've read every SO post I could find on this and none have solved my issue.
Any suggestions for getting these two apps (and more) to work on the same droplet?
I'm all out of ideas.
Thank you for any help you can offer.
For anyone who might have the same issue, it was a bit deceptive.
I had tried rails credentials:edit and it didn't fix the issue, but I found that the app's containing folder was owned by user:user, whereas my other app was owned by user:root.
When I changed this, everything started to work.
I hope it helps someone because I didn't find this info anywhere online and it was a lot of trial and error.
Use ls -l to list the current owner of folders in the current working directory, so you can compare them.
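A sketch of the comparison and fix (user/group and paths are illustrative, based on the apps above):
# compare the owners of the two app folders
ls -l /var/www/html

# align the broken app's ownership with the working one
sudo chown -R user:root /var/www/html/AppName_Prod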
For me, this turned out to be somewhat complicated. I had provisioned my server using Ansible, which has a task to copy the Nginx conf. After provisioning the server, I changed RAILS_MASTER_KEY.
It turns out that my Ansible task does not re-write the Nginx conf if it already exists on the server (the file is not compared, I guess). So although I updated RAILS_MASTER_KEY in my Ansible playbook (and it was even getting copied across to the server's environment variables!), it was not accessible to Rails, because Passenger does not pass on the user's environment variables.
Whew!
To fix this (and create a snowflake server in the process...) I manually logged into the server and updated RAILS_MASTER_KEY to my new value in the Nginx passenger_env_var.
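For reference, that directive goes inside the app's server block in the Nginx config (the value shown is a placeholder):
# Passenger passes this on to the Rails process (passenger_env_var is a Passenger directive)
passenger_env_var RAILS_MASTER_KEY <contents-of-master.key>;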
I had this working before on AWS Linux AMI but no luck with AWS Linux 2.
I need to access my environment properties from the Nginx configuration file during the EB application deployment. It's a Single instance Node Server.
I did it like this with the AWS Linux AMI and it worked without a problem:
.ebextensions/00_options.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    DOMAIN: socket.example.com
    MASTER_DOMAIN: https://example.com
    etc..
.ebextensions/10_proxy.config
... some configs ...
files:
  /etc/nginx/conf.d/proxy.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      upstream nodejs {
        server 127.0.0.1:8081;
        keepalive 256;
      }

      map $http_origin $cors_header {
        hostnames;
        default "";
        `{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "MASTER_DOMAIN"}}` "$http_origin";
      }

      server {
        listen 80;
        listen 8080;

        server_name `{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "DOMAIN"}}`;

        location ~ /.well-known {
          allow all;
          root /usr/share/nginx/html;
        }

        location / {
          return 301 https://$host$request_uri;
        }
      }

      etc..
.... some more configs ....
I'm not including most of the configs above because they're not relevant.
So when I did this before, everything worked as expected. The config file inserted my properties and created the file /etc/nginx/conf.d/proxy.conf.
Now with AWS Linux 2 the specs have changed and we have to add our Nginx configuration files in the .platform/nginx/conf.d folder located in our application bundle root folder.
Here is the reference (see Reverse proxy configuration).
So I created a proxy.conf file in the location mentioned above with the content that was previously inserted in /etc/nginx/conf.d/proxy.conf.
.platform/nginx/conf.d/proxy.conf
upstream nodejs {
  server 127.0.0.1:8081;
  keepalive 256;
}

map $http_origin $cors_header {
  hostnames;
  default "";
  `{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "MASTER_DOMAIN"}}` "$http_origin";
}
etc...
And then the problems began..
This first trial threw unexpected "{" in /var/proxy/staging/nginx/conf.d/proxy.conf:11 at me.
And after that I tried a lot of things. I tried it with ${MASTER_DOMAIN} and messed around with the new EB AWS Linux 2 hooks (see link above, Platform hooks). All to no avail; it seems you can't access the properties from the Nginx configs. I've read an article or some documentation from Nginx mentioning something similar today, but I can't find it anymore (did a lot of googling).
I also tried to create a config file like I did in the working version, whose purpose was to save a temp file somewhere with the properties included, and then include that file in the needed .platform/nginx/conf.d/proxy.conf, because I started to think there is no way to include them directly with the new specs.
.ebextensions/10_proxy.config
... some configs ....
files:
  /var/proxy/staging/custom_folder/proxy.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      etc...
.platform/nginx/conf.d/proxy.conf
include custom_folder/proxy.conf;
With this idea in mind I did a lot of nonsense: I created hooks for creating (mkdir) directories in which I tried to temporarily save the file, which led to new permission errors. I wasn't able to give the proper permissions to the prebuild and postdeploy files, but that is another issue.
And a lot more of trying and failing...
But then I've read (also from the link above):
"If you configure your proxy to send traffic to multiple application processes, you can configure several environment properties, and use their values in both proxy configuration and your application code."
And hope came back.. Does this mean I actually CAN directly add environment variables into the Nginx configs located in the .platform directory? ... I don't know.. Do you?
I could continue to describe all the things I tried all night long so I will stop here. I hope you get the issue. If not ask me and I will do my best to make all this understandable.
Also my mind isn't very clear anymore after 14 hours of battling this issue. I need a break.
If you made it to the end, thank you for your time, and help would be greatly appreciated.
Summary
One way to do it is to create a shell script in .platform/hooks/postdeploy.
Here is a simplified example, assuming you have an Elastic Beanstalk environment property called MASTER_DOMAIN:
#!/bin/bash
# write nginx config file
cat > /etc/nginx/conf.d/elasticbeanstalk/test.conf << LIMIT_STRING
location /test/ {
    default_type text/html;
    return 200 "nginx variable: \$host, and EB env property: $MASTER_DOMAIN";
}
LIMIT_STRING
# restart nginx service so the config takes effect
systemctl restart nginx.service
The location block from this example can be replaced by the nginx content from .ebextensions/10_proxy.config in the original post. No need for the Fn::GetOptionSetting stuff though.
I think you also need a duplicate script in .platform/confighooks/postdeploy.
Details below.
(sorry for the wall of text)
Environment variables in nginx
Actually, as discussed here and here, it is not possible (out of the box) to use OS environment variables inside the http, server, or location blocks in nginx config files. There are some workarounds, such as using lua, perl, or templates, but let's not get into those. This part has nothing to do with AWS.
In the OP's original configuration for Amazon Linux AMI (AL1), using the files section in .ebextensions/10_proxy.config, they were actually using a shell script to write the nginx config file during deployment. The shell script expanded the environment variables, but the resulting proxy.conf for nginx did not actually access any environment variables.
That's why it worked on AL1.
Platform hooks
Now, for Amazon Linux 2 (AL2), we can do something similar using shell scripts in the .platform/hooks and .platform/confighooks folders.
These .platform hook scripts are executed as the root user, and they have access to the Elastic Beanstalk (EB) environment properties. The EB environment properties can be accessed just like normal OS environment variables, so there is no need to use the Fn::GetOptionSetting stuff.
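For example, a hook script can read a property like any ordinary shell variable; a trivial sketch:
#!/bin/bash
# EB environment properties appear as ordinary variables inside platform hooks
echo "MASTER_DOMAIN is $MASTER_DOMAIN"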
Basically, we need to create a shell script that writes a file with the content from your original .ebextensions/10_proxy.config. However, there are two questions we need to consider:
Should we use a prebuild, predeploy, or postdeploy hook?
What is the proper destination directory for our nginx proxy.conf file?
File locations
To answer these questions, we have to refer to the AWS documentation for Extending Elastic Beanstalk Linux platforms, and specifically the Instance deployment workflow section.
... The current working directory (cwd) for platform hooks is the application's root directory. For prebuild and predeploy files it's the application staging directory, and for postdeploy files it's the current application directory. If one of the files fails (exits with a non-zero exit code), the deployment aborts and fails.
This is interesting, but leaves some questions, e.g. where is the "application staging directory" located? We can fill in the blanks by inspecting one of our deployment log files. Based on our eb-engine.log, here's what happens with the platform hooks and nginx config files during app deployment (skipping a lot of details):
the source bundle is downloaded from S3 and extracted to /var/app/staging/
platform hooks in .platform/hooks/prebuild/ are executed
proxy server configuration is copied from /var/app/staging/.platform/nginx/ to /var/proxy/staging/nginx
platform hooks in .platform/hooks/predeploy/ are executed
proxy server is started, configuration is copied from /var/proxy/staging/nginx/ to /etc/nginx
platform hooks in .platform/hooks/postdeploy/ are executed
Note, after deployment the app is located in /var/app/current.
Based on the above, there are several options:
Create a shell script in .platform/hooks/postdeploy that writes to /etc/nginx/conf.d/proxy.conf.
The nginx service is already running at this stage, so we need to restart it for the configuration to take effect.
Below is a minimal test example. In this example we write to the elasticbeanstalk subdirectory, because we just want to add a location inside the default server block. We can then visit the /test/ page in a browser, to check that the configuration works.
We use some bash io redirection (<<, >) to write the nginx config file.
Note that we need to escape any nginx variables, e.g. $host becomes \$host, otherwise the shell will interpret them as environment variables.
Also note that the shell scripts need to have execution permission, as explained under More about platform hooks in the docs.
#!/bin/bash
cat > /etc/nginx/conf.d/elasticbeanstalk/test.conf << LIMIT_STRING
location /test/ {
    default_type text/html;
    return 200 "nginx variable: \$host, and EB env property: $MASTER_DOMAIN";
}
LIMIT_STRING
systemctl restart nginx.service
Alternatively, we could create a shell script in .platform/hooks/predeploy that writes to /var/proxy/staging/nginx/conf.d/proxy.conf.
There is no need to restart the nginx service in this case, because this hook is executed before the server configuration is applied.
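A sketch of that predeploy variant, reusing the nginx content from the original .ebextensions/10_proxy.config (the script filename is illustrative; nginx variables are escaped so the shell only expands the EB property):
#!/bin/bash
# illustrative name: .platform/hooks/predeploy/01_write_proxy_conf.sh
# write the proxy config into the staging area; EB then copies it to /etc/nginx
cat > /var/proxy/staging/nginx/conf.d/proxy.conf << LIMIT_STRING
upstream nodejs {
    server 127.0.0.1:8081;
    keepalive 256;
}

map \$http_origin \$cors_header {
    hostnames;
    default "";
    $MASTER_DOMAIN "\$http_origin";
}
LIMIT_STRING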
BEWARE:
Not sure if this is a bug or a design feature, but our newly created proxy.conf disappears after a configuration deployment (as opposed to an application deployment), unless we put a duplicate script in the .platform/confighooks/postdeploy directory. Not very DRY...
EDIT: AWS support confirmed that we need duplicate scripts in hooks and confighooks in this case. The application example in the docs also shows some duplicates (at least duplicate filenames) in hooks and confighooks.
EDIT:
Instead of duplicating scripts, we can also write a confighook that calls a hook, e.g. .platform/confighooks/predeploy/01_my_confighook.sh could look like this:
#!/bin/bash
source "/var/app/current/.platform/hooks/predeploy/01_my_hook.sh"
Disclaimer: This was tested on a freshly created single instance EB environment with "Python 3.7 running on 64bit Amazon Linux 2/3.1.5" using all default configuration and the default AWS Python sample application (only extended with our custom hooks).
I'm using jest to write tests in my ReactJS application.
So far, to run my test suite, I need to type 'npm test'.
Here's the snippet from package.json:
"scripts": {
"test": "./node_modules/.bin/jest",
(other stuff)
},
"jest": {
"unmockedModulePathPatterns": ["<rootDir>/node_modules/react"],
"scriptPreprocessor": "<rootDir>/node_modules/babel-jest",
"testFileExtensions": [
"es6",
"js"
],
"moduleFileExtensions": [
"js",
"json",
"es6"
]
},
Is it possible to run those tests within my IDE (IDEA/WebStorm) directly, preserving the configuration? I'm not a JS guy, but WebStorm, for example, works perfectly fine with Karma. Shouldn't this also be possible with jest-cli?
To show Jest test results in a tree view (like Karma, etc.), special integration is needed. WebStorm doesn't yet support Jest. Please vote for WEB-14979 to be notified of any progress.
EDIT: as of March 2017 the first version of Jest integration at WebStorm has been released.
In WebStorm 9+ you can set this up as follows:
Install Jest CLI: npm install --save-dev jest-cli
Create a Node run configuration with the JavaScript file set to node_modules/.bin/jest and the application parameters set to --runInBand. runInBand tells Jest to run in a single process; otherwise there's a port conflict when running multiple Node processes in debug mode
Create some tests and run the configuration in Debug mode (Ctrl-D/Cmd-D). If you set breakpoints in your test or app code, they should be hit
It would be great though if you could click on file:line numbers in the output to go directly to the code.
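For reference, that run configuration boils down to roughly this command line on Linux/macOS (a sketch; on Windows the .cmd wrapper in node_modules/.bin is used instead, as the next answer notes):
# run jest in a single Node process so the debugger can attach without port conflicts
node node_modules/.bin/jest --runInBand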
app_sciences's answer is awesome, but does not work for Windows.
For Windows, you can use the following configuration:
The provided configuration is taken from here.
For IDEA I'm using run configurations for that purpose: https://confluence.jetbrains.com/display/IDEADEV/Run+Configurations. For WebStorm it seems you can add your own configuration: https://www.jetbrains.com/webstorm/help/creating-and-editing-run-debug-configurations.html. The configuration you are talking about is at the software level. If you configure it to run via your IDE, it will run with the given environment variables and paths; you just need to add the needed global paths and the commands to run.
I'm on Ubuntu 12.04, using jetty (9_M4), solr (4.0.0) through django-haystack (2.0beta) installed in a django 1.4.2 site.
I've had to make a number of jumps through hoops to get this up and running, as there is very little documentation for getting solr 4.0 up and running in Ubuntu with django-haystack. But how hard could it be?
My main confusion is between what Jetty is doing, and what Solr is doing.
So, I installed Jetty via this tutorial making a small adjustment to the init file as I note in the comment on that tutorial. Jetty is now running, I can see it in browser, even after a reboot.
Great.
Move onto installing Solr via this tutorial again with adjustments. Instead of:
cp -R apache-solr-4.0.0/example/solr /opt
I use:
cp -R apache-solr-4.0.0/example/* /opt/solr/
and therefore add the following to /etc/default/jetty:
JAVA_OPTIONS="-Dsolr.solr.home=/opt/solr/solr $JAVA_OPTIONS"
I can't exactly remember why I did that, but there was a reason at the time. I stop using that tutorial at that point, as I don't understand the solr concept of core very well, and I'm already flustered at how annoyingly difficult this is.
(For context, when I set up django-haystack 2.0 with solr 3.5 about 6 months ago it was terrifyingly easy and didn't require a separate jetty installation - all up took me about two hours)
Anyway, I go back to my Django installation, create the schema.xml, make the stopwords-en.txt changes, copy it across to /opt/solr/solr/collection1/conf.
I edit /opt/solr/solr/collection1/conf/solrconfig.xml to remove the reference to updateLog, since any attempt I made to add a version field to schema.xml failed dismally with some sort of character error. See here (lucene-solr-user mailing list) and here (django-haystack GitHub) for more info on this.
Finally, I cd into /opt/solr and run it:
sudo java -jar start.jar
Ba-da-boom! I get some results (when I go to my django site and use the search I've set up). Fantastic. This is really great. Now I just need to make the starting of solr persistent.
I create an /etc/init/solr that looks like this:
description "Solr Search Server"
# Make sure the file system and network devices have started before
# we begin the daemon
start on (filesystem and net-device-up IFACE!=lo)
# Stop the event daemon on system shutdown
stop on shutdown
# Respawn the process on unexpected termination
respawn
# The meat and potatoes
exec /usr/bin/java -jar /opt/solr/start.jar >> /var/log/solr.log 2>&1
I restart the server and nothing - I can see solr running, but I'm not getting any results in my django search.
I remove the init file and try running from the cli again - yep, sweet.
So, my questions are:
What the hell have I done wrong?
How do I get solr to start at boot, respawn if it dies accidentally, AND produce results through my Django/haystack interface?
Why do I need jetty and solr running simultaneously, and what is the relationship of /opt/jetty/webapps/solr.war to my /opt/solr? Am I creating conflicts?
Why was this so easy with solr 3.5 and so difficult now? I ask this honestly - I don't want a list of excuses or explanations from solr developers - I want to know how my limited understanding was enough to get it running in two hours the first time (solr 3.5), and why I now need a comprehensively deeper understanding of jetty/solr architecture and cli/shell script hacking to get it to run.
I am not promising to cover all your points, but (numbers do not match questions):
1) Jetty is a web-server. Solr runs as a (web) application inside that web server, however:
2) Jetty can also run as an embedded web server, which is how the Solr download works. When you do java -jar start.jar, that runs Jetty with everything preconfigured, in which case you do not need a standalone Jetty. I suggest starting with embedded Jetty, then switching to an external one. However, if only your local app talks to the local Solr server, you may be able to get quite far without needing full Jetty.
3) You don't need all the stuff you find in the example directory - it has multiple configurations and support files and is somewhat nested (which is confusing)
4) To start you need two things: a running Solr, and your configuration directory
5) The easiest way to get Solr running is to put the whole distribution directory (I know - large) somewhere (e.g. /opt/solr).
6) Your configuration directory is very simple. All you need is two files to start, three if you are picky about names:
- (wherever, but make sure Solr can read/write there)
-- solr.xml (if you are picky about the collection name; otherwise you can skip it)
-- collection1/ (that's the default name; you can change it in solr.xml)
-- collection1/conf/ (this is the configuration directory; Solr will add a data directory on the same level once you start it right)
-- collection1/conf/schema.xml
-- collection1/conf/solrconfig.xml
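As a sketch, assembling that minimal layout from the shell could look like this (source paths are illustrative):
# hypothetical: build a minimal Solr home with one core
mkdir -p /opt/solr/solr/collection1/conf
cp /path/to/your/schema.xml /opt/solr/solr/collection1/conf/
cp /path/to/your/solrconfig.xml /opt/solr/solr/collection1/conf/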
7) Then, you need to be in the example directory and run java -Dsolr.solr.home=/path/to/your/solr/home -jar start.jar (the home path is wherever you put the configuration directory from point 6). This will get all the pieces up and running on port 8983. Solr 4 has a pretty new admin interface, so visit it with your browser, maybe do the tutorial, etc.
If you need help with minimal functioning schema/solrconfig files, ask separately, but you cannot just use the ones from the example directory, as they have all the other file references in the fieldType analysers (though you could just comment those lines out).