How can spark-java hot reload be achieved (embedded Jetty)?

I'm using spark-java with Handlebars templates, and I'm trying to make embedded auto-reload work for classes, .hbs templates, CSS, and other static files, without success.
Can someone help me with this?
I simply start my spark-java app with a Main class (not the Maven Jetty plugin). I have tried JVM parameters without any effect.
Thanks for any help.

IIRC, Spark uses an embedded version of Jetty without using the WebApp/WAR concepts.
All of the hot reload features in Jetty depend on having a WebApp/WAR to watch; the reload closes the isolated webapp classloader and properly performs the WebApp lifecycle destruction before redeploying the WebApp/WAR.
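That said, for templates and static assets (though not for recompiled classes) you can get a reasonable development loop out of Spark itself: it can serve static files from an external directory on disk instead of the classpath, so edits show up on refresh. A minimal sketch, assuming Spark 2.5+ and an example path:

    import static spark.Spark.*;

    public class Main {
        public static void main(String[] args) {
            if (System.getProperty("dev") != null) {
                // Dev mode: serve assets straight from the source tree so
                // CSS/JS edits are picked up on refresh (path is an example).
                staticFiles.externalLocation("src/main/resources/public");
            } else {
                // Production: serve the packaged assets from the classpath.
                staticFiles.location("/public");
            }
            get("/", (req, res) -> "hello");
        }
    }

Note that compiled Handlebars templates are typically cached by the template engine, so .hbs changes may still require a restart unless you configure the engine's cache.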

Related

Disable Liquibase execution at startup using Jetty + Spring (not Spring Boot!)

I'm working on an application developed with Spring 5 (not Spring Boot!) that runs on Jetty. This application has a module that uses the liquibase-maven-plugin.
We generate an image from a Dockerfile (base image jetty:9-jre8), where we add the application (WAR file) to the Jetty application directory.
In some specific environments where I deploy the application, I want to be able to disable that execution.
Is it possible to do so?
I've seen in the Spring Boot documentation that it's possible to do so by setting the property spring.liquibase.enabled (or liquibase.enabled on Spring 4) to false, but that doesn't seem to work:
I've tried defining it in the properties file, as an environment property, and as a Java option (-Dspring.liquibase.enabled=false).
The behavior is the same when I deploy the container and when I run the Maven command locally: mvn jetty:run
Do you have any ideas or hints how to do this?
Thank you in advance
Well, I just discovered that it's possible to disable the execution of Liquibase by adding the Java option
-Dliquibase.shouldRun=false
For more details see here
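For the Docker case, one way to pass that flag is through the JAVA_OPTIONS environment variable, which the official jetty images should pick up (paths below are illustrative):

    FROM jetty:9-jre8
    # Skip Liquibase changelog execution at startup
    ENV JAVA_OPTIONS="-Dliquibase.shouldRun=false"
    COPY target/myapp.war /var/lib/jetty/webapps/

For the local case, the same system property can go on the Maven command line: mvn jetty:run -Dliquibase.shouldRun=false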
I will keep this question anyway, in case someone has the same problem I did.

What is the best way to set up your development environment for Sitecore?

The general guidance appears to be to install Sitecore into one folder, e.g. D:\Websites\MyWebSite and then create your Visual Studio project in a separate folder, e.g. C:\Projects\MyWebProject. You would then publish your custom code into the Sitecore folder from Visual Studio (This video explains what I’m describing https://www.youtube.com/watch?v=i3Mwcphtz4w around 13 mins in).
I have the following questions:-
Do people only store their Visual Studio project in source control and not the Sitecore code?
The publish option from VS into the Sitecore folder only has options for adding files or deleting anything not in the VS project. How would files removed from the VS project ever get deleted without doing it manually?
We use web-deploy to publish sites to staging and live environments. In this scenario would you publish from your VS project or would you set up a way to publish the Sitecore folder (if so how)?
Is this actually a good set up to have or do you do something different?
I did a lot of research on this when we started Sitecore development a couple of years ago. I remember reading a post from Sean Kearney that made a lot of sense to me: http://seankearney.com/post/Visual-Studio-Projects-and-Sitecore
We ended up using this approach for both large and small scale projects and it has been great. You will also want to look at a couple of other tools:
Team Development for Sitecore (TDS) from Hedgehog Development (http://www.hhogdev.com/products/team-development-for-sitecore/overview.aspx)
CopySauce from Igloo (http://www.igloo.com.au/blog/copysauce-igloos-sitecore-development-utility/)
SitecoreRocks for Visual Studio
So to answer your questions:
All of your code and some of the Sitecore items are stored in source control. The approach you want to take is to only store new Sitecore items (layouts, sublayouts, templates, etc.) that you create, along with any items you may need to customize. You do not need to store all of the Sitecore source, content or modules...just what you would need to reapply to get a fresh environment up-to-date. You can manage this manually, but a tool like TDS makes this MUCH easier.
We use TDS to manage the publish/deploy to each of our environments. TDS has configurable settings for handling deleted items, including the ability to move them to the Sitecore recycle bin or simply remove them. You have to be careful with this, but it does work.
We use a separate build environment to assemble and run deployments using TDS and Jenkins. Basically, all of the code is retrieved from the source control system to the Sitecore server and built using MSBuild and TDS. In most cases we use a web deploy directly to the Sitecore webroot, but for production we build TDS packages and then run them on each Content Delivery Server.
We have used this setup for seven Sitecore projects so far and I am very happy with how it has worked out. We have questioned whether TDS is worth the license fee, but the answer always comes back as a yes. The alternative is not very appealing for our development staff, and the time savings far outweigh the costs.
Everything is stored in source control!... just not always in the same area as it resides on the web server. Storing the Sitecore folder in source control is a good idea, as there are changes you will accumulate as you install modules, but you do NOT add the Sitecore folder as part of your solution/project; it should really be there to pull from if need be, not something that is actively tracked/monitored.
Once Sitecore is installed, create a new project that resides in the website folder and only add things like the Properties folder, layouts, XML and other folders that you want. I don't even include the App_Config in my project. And to be clear, it's probably best to keep the Sitecore folder as a sort of reference folder in your source control, but not as part of your website trunk. We have it on the ignore list for the website folder in source control. That said, keep in mind that you will NEED to have it in your website folder.
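As an illustration of that ignore setup, a Git version might look like the following (the paths are examples; the same idea applies to svn:ignore):

    # .gitignore in the website root (example paths)
    # Base Sitecore client files; keep a reference copy elsewhere in source control
    /sitecore/
    /temp/
    /upload/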
Technically speaking, the recommended approach is to install Sitecore onto the server itself as a standalone empty instance... i.e. using the installer in client mode (not full) so that you get the framework for an empty site in place. Then you can create the deployment package/packages/whatever, and it will all be your own code. You should really never have to mess with changing/removing the base Sitecore file system manually.
See above. Generally speaking, unless you have a reason to do otherwise: install Sitecore as an empty instance, then manage your code/files via deployment and just leave the Sitecore folder files alone. You will have very little reason to ever touch them or the Sitecore folder itself outside of an upgrade.
Adding Sitecore itself to source control should be avoided, since you won't be deploying Sitecore as part of your implementation. For modifications to Sitecore itself, you would need a way of handling those inside your implementation, but the config patch system and other mechanisms provide the means for this.
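For reference, the config patch mechanism mentioned above works by dropping files into App_Config/Include; a minimal sketch of a patch file (the setting shown is just an example):

    <!-- App_Config/Include/MySite.config (example) -->
    <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
      <sitecore>
        <settings>
          <setting name="MailServer">
            <patch:attribute name="value">mail.example.com</patch:attribute>
          </setting>
        </settings>
      </sitecore>
    </configuration>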
Redundant files in the web site folder will only be a real problem in your development environment. When publishing to a demo environment or to a live environment, you will only publish the material that you actually want. And the deployment-based setup opens up the possibility of always starting from a clean Sitecore installation - as long as you include your Sitecore modifications as part of your implementation (which is not covered in the video). So there is little risk of this being a problem in real life, and the development method in the video makes eliminating this risk entirely possible.
The Sitecore installation should be handled outside of the deployment of your implementation.
It's a good setup, because the method in the video is the method Sitecore recommends for development, and it is also the method Sitecore teaches to developers in development courses. The most obvious advantages of this method are:
Clean separation between your web site implementation and the Sitecore installation. There is no risk of accidentally mangling the Sitecore installation, and there is no risk of forgetting unmanaged manual modifications to Sitecore that are needed to run your site. This separation is hard to accomplish if you're not using the method in the video.
By using publishing to deploy your implementation, you know that your implementation is deployable on top of a clean Sitecore installation - and works. This means when deploying to a production or demo server in the future, things will work the same and there will be no surprises. This is very hard to be confident about if you're not using the method in the video.
To test your implementation on a different version of Sitecore, you can just deploy to a clean installation of a different version. This is very hard to test if you're not using the method in the video.
There is sample source code for the video on GitHub, along with instructions on how to set up the development environment, including the publishing parts. This sample source directly and indirectly answers some of your questions.

How do those who are not using a backend framework (such as Rails/Symfony/Django) go about developing and deploying an Ember application's assets?

More specifically, when using a backend application framework I am generally afforded some level of asset management that lets me work with multiple uncompressed, unminified files in development; in production mode, those files are automatically minified, compressed, and concatenated into a single file.
I am looking to create an Ember application that is a single page app that interfaces with a separate RESTful services layer. I simply do not need the weight of a framework behind the Ember app and am hoping to serve it as static html+css+js, so I am looking for any guidance on how to easily manage development and deployment of a client-side only app without adding much overhead.
Right now my biggest issue is with including JS (and to a lesser extent, CSS) files. My HTML is static and I have an Ember app comprised of many files, so I have many script tags to include them all. This is clearly not appropriate for production, so I imagine some kind of build tool will be needed to assemble my JavaScript files and rewrite the script tags in the HTML file. Are there tools out there right now that will do this? Is there another approach that I may be overlooking?
This is my first fully client-side application so it's very possible that I just need to make a paradigm shift, having done server-side applications for so long.
Agreed, this can be tricky without a backend framework. Script tags are definitely not the way to go, and you will need some kind of build tool for production deployment.
Ember App Kit is a solution a few of us have been working on. It's still in its early stages, but I've used it for a couple of projects so far and it's been much better than trying to roll my own with Grunt. I would expect it to become the default starting point for Ember apps in the near future; to try it now, just download it as a zip and read the Getting Started Guide.
There are many other solid solutions out there, consider checking out:
ember-tools
brunch-with-ember-reloaded
brunch-with-hamsters
charcoal
I use a combination of RequireJS and Grunt, using these lovely functions and this one, which can compile your ember-handlebars templates into functions. (The grunt-contrib collection includes the ability to watch for changes in your files and perform various build steps, which may differ between development and production.) You can have separate Grunt tasks which run various steps for production or development. Of course, for all of this you are going to need Node!
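To make the Grunt route more concrete, here is a minimal Gruntfile sketch using the grunt-contrib plugins (paths and task names are examples, not a prescribed layout):

    // Gruntfile.js
    module.exports = function (grunt) {
      grunt.initConfig({
        // Concatenate all app scripts into one file for development
        concat: {
          dist: { src: ['app/**/*.js'], dest: 'dist/app.js' }
        },
        // Minify the concatenated file for production
        uglify: {
          dist: { files: { 'dist/app.min.js': ['dist/app.js'] } }
        },
        // Rebuild whenever a source file changes
        watch: {
          scripts: { files: ['app/**/*.js'], tasks: ['concat'] }
        }
      });
      grunt.loadNpmTasks('grunt-contrib-concat');
      grunt.loadNpmTasks('grunt-contrib-uglify');
      grunt.loadNpmTasks('grunt-contrib-watch');
      grunt.registerTask('default', ['concat', 'watch']);
      grunt.registerTask('dist', ['concat', 'uglify']);
    };

You would then point a single script tag at dist/app.js in development and dist/app.min.js in production.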

Django Compressor on a multi-server deployment

I've been fortunate enough to discover django_compressor and implemented it within our stack, which deploys to many servers (currently 6, but growing as we deploy smaller virtual machines).
Now this is all fine and dandy if you're using django_compressor at its finest: compressing raw CSS/JS code.
However, say now I want to introduce some type of precompiler. Let's say for this example it is LESS (CSS). The thought process for this is fairly simple:
Install node, npm, and the less package onto the server.
Add less to your precompilers!
COMPRESS_PRECOMPILERS = (
    ('text/less', 'lessc {infile} {outfile}'),
)
Now you deploy, and your server compiles the less file. Everything is fantastic!
Now let's add 8 more servers to that. Do you have to install node, npm, and less on each server?
This is where something doesn't seem right, and I feel like I'm missing something. I believe the Django community has run into this problem before.
My thoughts thus far have been:
Use a post-commit hook to compile the CSS on the developer's machine. This means that via django_compressor, we link to the compiled static file in the HTML, and our repository contains both the compiled and non-compiled versions. My only downside to this is that it forgoes half of the benefits of django_compressor and may be tedious for developers.
Suck it up and make node, npm, and less part of the server stack.
Update
I did some additional looking around, and it seems that using the COMPRESS_OFFLINE flag (or just --force) with the management command will produce an offline manifest file that does what I need (only tested locally). So setting this up with a pre-deploy hook looks to be the answer.
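In case it helps anyone else, a sketch of the relevant settings plus the pre-deploy step (the setting names are django_compressor's; the hook itself is up to you):

    # settings.py
    COMPRESS_ENABLED = True
    COMPRESS_OFFLINE = True
    COMPRESS_PRECOMPILERS = (
        ('text/less', 'lessc {infile} {outfile}'),
    )

    # pre-deploy hook, run on a machine that has lessc installed:
    #   python manage.py compress --force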
Of course, still open to other ideas :-)
Coupled with the tips in the comments about COMPRESS_OFFLINE, you could look at django-staticfiles' storage backends. You can host the static files on Amazon S3, for instance; hosting it all on one static-hosting server and using that from all your servers could also be a nice solution. You wouldn't need to do anything with the static (and compressed) files on the individual servers.
Alternative solution regarding the multiple servers: I've made a custom Fabric (docs.fabfile.org) script that installs/configures stuff on our servers. I've only recently started using CoffeeScript and LESS, but those two are definitely ending up in my fabfile. That solves the installation problem for me.
(Alternatives to a fabfile are things like a custom debian package with standard dependencies. Or chef or puppet or something similar.)
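For what it's worth, a sketch of such a fabfile task (Fabric 1.x style; package names vary by distro):

    # fabfile.py (sketch)
    from fabric.api import sudo

    def install_less():
        # Install node/npm from the distro, then less globally via npm
        sudo('apt-get install -y nodejs npm')
        sudo('npm install -g less')

Run it against all servers with something like: fab -H server1,server2 install_less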
You can use Puppet for the task.

Is there an ideal way to move from Staging to Production for ColdFusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling the sites each into EAR files to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed, and then the production server's working directory has an SVN update run, which then triggers a code copy from the working directory to the actual live code. This worked fine, but has many moving parts, and still required some form of server access to each server to run the commits and updates. Plus, this worked for an individual site; I think it would be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access with the ability to log into some control panel to mark a site for QA, and then have a QA person check the site and mark it as stable/production worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person mind you)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc... the idea being one unit to validate with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base and overlays per site. Export core, then export the site-specific code over it. This ensures that core updates make it in wherever site-specific changes don't override them.
Call this combination a "build". Do builds with Ant. Maintain an Ant script (or, perhaps more flexibly, an Ant configuration file) per core-and-site combination. Track the version numbers of core and site as part of a given build.
If your software is stuffed inside an installer (NSIS, for instance) that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). Point being, one file that encompasses the whole build.
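A sketch of such a build script, with the core-then-overlay export and a zip as the single artifact (URLs, names and layout are illustrative):

    <!-- build.xml (sketch) -->
    <project name="site-build" default="package" basedir=".">
        <property name="build.dir" value="build"/>

        <target name="export">
            <!-- Export core first, then overlay the site-specific code on top -->
            <exec executable="svn" failonerror="true">
                <arg line="export --force https://svn.example.com/core ${build.dir}/site"/>
            </exec>
            <exec executable="svn" failonerror="true">
                <arg line="export --force https://svn.example.com/sites/mysite ${build.dir}/site"/>
            </exec>
        </target>

        <target name="package" depends="export">
            <!-- One file that encompasses the whole build -->
            <zip destfile="${build.dir}/mysite-build.zip" basedir="${build.dir}/site"/>
        </target>
    </project>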
This build file is what QA should validate. So validation includes deployment, configuration and functionality testing. See the deployment section below for how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it. That means QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks what server receives what build file and whatever credentials and connection information is necessary to make that happen. Most likely via FTP. Once transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research as to how it's possible to let one server run commands such as extraction or installation remotely.
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript, and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised by the things you can do with Ant.
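For instance, the transfer-and-flush steps might look like this (the ftp task needs Apache Commons Net on Ant's classpath; host and paths are examples):

    <target name="deploy">
        <!-- Push the build to the production webroot -->
        <ftp server="prod.example.com"
             userid="${ftp.user}"
             password="${ftp.pass}"
             remotedir="/wwwroot/mysite">
            <fileset dir="${build.dir}/site"/>
        </ftp>
        <!-- Hit a URL afterwards, e.g. to clear caches -->
        <get src="http://prod.example.com/admin/flushcache.cfm"
             dest="${build.dir}/flush-response.html"/>
    </target>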
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.