I've taken a fancy to bundling Jetty with my applications instead of deploying to a standalone container. But there's one big issue I've run into: how can I automate the deploy? When the container ran standalone, it was enough to copy the war file over the old one and it would get picked up. With Jetty as a dependency, I run the application from the command line and Ctrl-C it when done. I can't think of an easy way to automate this. Is there a better solution than writing scripts to manage the job: stop the container, restart it, keep track of the process ID, and so on?
Look into setting up a DeploymentManager and an AppProvider that suit your needs.
// === jetty deploy ===
// Snippet, not a complete class: assumes server, contexts (a
// ContextHandlerCollection already added to the server), and the
// jetty_base/jetty_home path strings are defined, as in LikeJettyXml.
import org.eclipse.jetty.deploy.DeploymentManager;
import org.eclipse.jetty.deploy.providers.WebAppProvider;

DeploymentManager deployer = new DeploymentManager();
deployer.setContexts(contexts);
deployer.setContextAttribute(
        "org.eclipse.jetty.server.webapp.ContainerIncludeJarPattern",
        ".*/servlet-api-[^/]*\\.jar$");

// Watch a directory for war files and (re)deploy them as they change.
WebAppProvider webapp_provider = new WebAppProvider();
webapp_provider.setMonitoredDirName(jetty_base + "/webapps");
webapp_provider.setDefaultsDescriptor(jetty_home + "/etc/webdefault.xml");
webapp_provider.setScanInterval(1); // scan for changes every second
webapp_provider.setExtractWars(true);

deployer.addAppProvider(webapp_provider);
server.addBean(deployer);
A full example can be found in LikeJettyXml.java in the Jetty embedded examples. With this in place, deploying a new version is just copying the new war over the old one in the monitored webapps directory: the provider's scanner notices the change and redeploys the context, so there is no process to stop, restart, or track.
I've made a pretty simple web application with a REST API backend service written in Python/Django and a frontend service written in JS/React. Both parts are containerized and can be launched locally via docker-compose. They live in separate GitHub repositories, and every time a new tag is pushed to a repo, an image is built and pushed to the corresponding ECR repository via GitHub Actions. Up to that point everything works smoothly, but the problem is that I don't know how to properly automate the deployment process to the test and production environments. The goal is to have those environments as similar as possible.
My current solution for the test environment is simply to upload the docker-compose file to the EC2 instance via GitHub Actions, then manually run the docker-compose command, which pulls the images from ECR.
Even though it's a simple solution, it's not very scalable or automated and requires manual work every time the application is updated. The desired flow is a manually triggered GitHub Action in each repository that deploys either the BE or the FE to the test or prod environment, without any need to SSH into the server or do any other manipulation with Docker.
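For what it's worth, the manual step could itself be scripted and run from a manually triggered GitHub Actions job. A rough sketch, assuming the runner has SSH access to the instance and the instance is already authenticated against ECR; the host and path are made up:

# deploy.py: hypothetical sketch of automating the current manual step.
# Assumes SSH access to the instance and that it can already pull from ECR.
import subprocess

HOST = "ec2-user@test.example.com"   # made-up SSH target
COMPOSE_DIR = "/opt/myapp"           # made-up location of docker-compose.yml

def run_remote(cmd):
    """Run a command on the instance over SSH, raising on failure."""
    subprocess.run(["ssh", HOST, cmd], check=True)

# Pull the freshly pushed images and restart only the changed services.
run_remote("cd " + COMPOSE_DIR + " && docker-compose pull && docker-compose up -d")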
I was looking at ECS, but it seems to be a solution for bigger apps that need several or more instances to run. I want my app to be used by many users, but I'm not sure when that will happen, so maybe I should stick to something less complicated than ECS. Are there any simpler solutions I am missing, like Elastic Beanstalk or something from another provider?
I'd be happy to receive feedback on anything I wrote in this post. Thanks!
As you can see in the official Cloud Foundry documentation:
https://docs.cloudfoundry.org/devguide/using-tasks.html
A task is an application or script whose code is included as part of a deployed application, but runs independently in its own container.
I'd like to know if there's a way to run commands and manipulate files directly on the main container of an application without using an SSH connection or the manifest file.
Thanks
No. Tasks run in their own container, so they cannot affect other running tasks or running application instances. That's the designed behavior.
If you need to make changes to your application, you should look at one of the following:
Use a .profile script. This lets you execute operations before your application starts up. It runs for every application instance that is started (I do not believe it runs for tasks), so the operation will be applied consistently. (A minimal sketch of this approach follows after these options.)
While not recommended, you can technically background a process from the .profile script and have it continue running for the duration of your app. This is not recommended because nothing monitors that process, and if it crashes, your app container will not be restarted.
Integrate with your buildpack. Some buildpacks, like the PHP buildpack, provide hooks for you to integrate and add behavior during staging. For other buildpacks, you can fork the buildpack and make it do whatever you want, including changing the command the buildpack returns, which tells the platform how to execute your droplet and ultimately what runs in the container.
You can technically modify a running app instance with cf ssh, but I wouldn't recommend it. It's something you should really only do for troubleshooting or maybe testing, definitely not for production apps. If you feel you need to modify a running app instance for some reason, I'd suggest examining why and looking for alternative ways to accomplish your goals.
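For illustration, here is a minimal sketch of the .profile approach from the first option above. Everything in it is hypothetical: the .profile would contain the single line python prestart.py, and the script does whatever pre-start work you need:

# prestart.py: hypothetical pre-start hook, invoked from .profile
# (Cloud Foundry runs .profile before each application instance starts).
import os

# Example operation: write a small runtime config derived from the
# environment. CF_INSTANCE_INDEX is set by Cloud Foundry at runtime.
with open("runtime.cfg", "w") as f:
    f.write("instance_index=%s\n" % os.environ.get("CF_INSTANCE_INDEX", "0"))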
I am searching for a solution for continuous deployment in a cloud environment, more specifically in an Amazon AWS environment.
The code to be deployed is mainly Microsoft ASP and PHP, so the framework should work for both platforms. Because I have an auto-scaling environment, the framework needs to pull the new code, the way Puppet does.
My first thought was to deploy directly from the VCS, but that ran into the problem that all repository history gets mirrored to the servers (this is how Git works, for instance). That is a problem because the repository keeps growing, so the servers would demand more and more space.
I found Ansible, which works the way I need but does not work in a Windows environment. It sends only the production code to the servers, not the VCS repository, and keeps track of which servers have been updated.
Without an easy-to-set-up framework like that, I would need to build a Puppet + Jenkins + VCS pipeline, where Jenkins creates the package from the VCS source code and Puppet delivers it.
Does anybody know of a small framework that fits these needs, or is Puppet + Jenkins + VCS the way to go?
Consider CloudMunch (www.cloudmunch.com) for this. The platform is built exactly to solve this kind of polyglot requirement.
Disclaimer: I work for CloudMunch
I am currently using AWS Elastic Beanstalk, and I was curious how (internally) it knows, when you fire up an instance (or when it does so automatically while scaling), to unpack the zip I uploaded as a version. Is there some environment setting that looks up my zip in my S3 bucket and unpacks it automatically for every instance running in that environment?
If so, could this also be used to automate a task such as running an SQL query on boot-up (instance deployment)? Are these automated tasks changeable or viewable at all?
Thanks
I don't know how Beanstalk knows which version to download and unpack, but running a task on start-up is trivial. Check out cloud-init, a tool written by Canonical (the company behind Ubuntu) that's now packaged in Amazon Linux. It allows you to pass arbitrary shell scripts into the UserData section of the instance configuration, and those scripts will run on startup.
It's a great way to bootstrap instances on startup, which avoids the soul-sucking misery of managing AMIs.
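For example, here is a minimal sketch of launching an instance whose UserData script runs at first boot, using boto3 and assuming configured AWS credentials; the AMI ID is a placeholder:

# launch_with_userdata.py: minimal sketch, assuming AWS credentials
# are configured. The AMI ID below is a placeholder, not a real image.
import boto3

USER_DATA = """#!/bin/bash
# cloud-init runs this once, as root, on first boot.
echo "bootstrapped at $(date)" >> /var/log/bootstrap.log
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,  # boto3 base64-encodes this automatically
)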
A quick (possibly non-applicable) warning: if you're running a SQL query against a database that lives on the Beanstalk instance itself, you're pretty much guaranteed to lose your database at some point. Those machines are designed to be entirely transient. Do not put databases on them. See this answer for more details.
Since your goal seems to be to run custom configuration tasks, the answer is yes, there is a way to do that. You can define custom actions in configuration files under an .ebextensions directory packaged with your app. For example, you can configure a command to run every time a new machine is deployed:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-commands
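For illustration, a hypothetical .config file under .ebextensions (following the docs above; the file and script names are made up):

# .ebextensions/01-run-sql.config (hypothetical file name)
container_commands:
  01_run_migration:
    command: "python scripts/run_migration.py"  # made-up script
    leader_only: true  # run on only one instance per deployment

container_commands run after the new application version is extracted but before it is put into service, which matches the "on deployment" timing you're asking about.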
Heya,
Quick question.
I've got multiple instances on EC2 with a load balancer between them. I have an SVN setup that used to let me push to my live environment at will.
With multiple EC2 instances, how would I push a codebase to all of them at once?
Any thoughts/links would be appreciated.
There are a few different ways to do this.
If You Are Using Elastic Load Balancers
Write a script that:
Removes a machine from the pool
Updates the working copy from the SVN repository
Re-adds the machine to the pool
Repeats for any additional machines
You could also get fancy if you're concerned about consistency: remove one machine and update it, then remove all the other machines and update them while the updated one serves traffic.
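A rough sketch of the basic loop, assuming the classic ELB API via boto3; the load balancer name, instance IDs, hostnames, and paths are made-up examples:

# rolling_update.py: hypothetical sketch of the steps above.
import subprocess
import boto3

elb = boto3.client("elb")  # classic Elastic Load Balancing API
LB_NAME = "my-load-balancer"           # made-up name
INSTANCES = {                          # instance id -> SSH target (made up)
    "i-0abc1234": "deploy@web1.example.com",
    "i-0def5678": "deploy@web2.example.com",
}

for instance_id, host in INSTANCES.items():
    # 1. Remove the machine from the pool.
    elb.deregister_instances_from_load_balancer(
        LoadBalancerName=LB_NAME,
        Instances=[{"InstanceId": instance_id}],
    )
    # 2. Update the working copy from the SVN repository.
    subprocess.check_call(["ssh", host, "svn update /var/www/app"])
    # 3. Re-add the machine to the pool (then the loop repeats).
    elb.register_instances_with_load_balancer(
        LoadBalancerName=LB_NAME,
        Instances=[{"InstanceId": instance_id}],
    )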
If You Are Using a Custom Load Balancing Application
Look into Capistrano. You don't need to use it with Ruby/Rake -- you can write custom cap files that can do parallel deploys.
How about Vlad or Fabric for code deployment?
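Fabric in particular keeps the parallel case short. A minimal sketch using the Fabric 1.x API, with made-up hostnames and path; run it with fab deploy:

# fabfile.py: minimal Fabric 1.x sketch; hostnames and path are made up.
from fabric.api import env, parallel, run

env.hosts = ["web1.example.com", "web2.example.com"]

@parallel
def deploy():
    """Update the live working copy on every host at once."""
    run("svn update /var/www/app")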
We use Scalr. It is available as a service (Scalr.net), or you can run it yourself (it is open source, though the source in the Google Code repository is sometimes a little behind the version the service uses).
Basically, Scalr has a global scripting feature whereby you can specify a script (e.g. bash, PHP, anything with a #! line) and trigger it to run on all instances of a given 'role' (e.g. web instance). In our case we have a script that just does svn checkout or svn update, as appropriate. Scalr supports periodic scheduling of scripts, so in the dev environment I run it every 5 minutes to keep dev in sync with SVN, but obviously I trigger it manually for production.
(I have the script take a parameter specifying the SVN branch to use.)
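For illustration, a hypothetical version of such a script (Scalr only needs the #! line to run it; the repository URL and working-copy path are invented, and the branch arrives as the script's argument):

#!/usr/bin/env python
# sync_svn.py: hypothetical Scalr global script.
import os
import subprocess
import sys

branch = sys.argv[1] if len(sys.argv) > 1 else "trunk"
url = "https://svn.example.com/myapp/" + branch   # made-up repository URL
wc = "/var/www/app"                               # made-up working copy

if os.path.isdir(os.path.join(wc, ".svn")):
    subprocess.check_call(["svn", "update", wc])         # subsequent runs
else:
    subprocess.check_call(["svn", "checkout", url, wc])  # first run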