I have deployed the Node-RED server as a Kubernetes pod and it is up and running. I have modified red.min.js, and now I need to restart Node-RED to reflect the changes. But I could not restart it. How do I restart Node-RED inside a Kubernetes pod?
Short answer: you don't.
Since the container entrypoint is the Node-RED process, you can't restart it without the pod dying, at which point Kubernetes will start a new one (without your modifications).
But since red.min.js is part of the editor, changes to it should not require restarting Node-RED; you just need to force the browser to reload it (without using its locally cached version).
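If you do want the edit to survive pod restarts, one option is to bake it into a custom image rather than patch the running container. A minimal sketch, assuming the stock nodered/node-red base image; the in-image path to red.min.js is an assumption you should verify against your deployment:

FROM nodered/node-red
# Overwrite the stock editor bundle with the locally modified copy
# (the destination path is an assumption; check where your install serves it from)
COPY red.min.js /usr/src/node-red/node_modules/node-red/public/red/red.min.js

Build and push that image, point the Deployment at the new tag, and the modification persists across pod restarts.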
This question seems very basic, but I wasn't able to quickly find an answer at https://cloud.google.com/compute/docs/instances/create-start-instance. I'm running a MicroMDM server on a Google Cloud VM by connecting to it using SSH (from the VM instances page in the Google Cloud Console) and then running the command
> sudo micromdm serve
However, I notice that when I shut down my laptop, the server also stops, which is actually why I wanted to run the server in a VM in the first place.
What would be the recommended way to keep the server running? Should I use systemd or perhaps run the process as a Docker container?
When you run the service from the command line, you "attach" it to your shell process; when you terminate your SSH session, your job gets terminated as well.
To make a process run in the background, simply append an & at the end of the command, in your case:
sudo micromdm serve &
This way your server stays alive even after you quit your session. (Depending on your shell's settings, background jobs may still receive SIGHUP when the session ends; prefixing the command with nohup avoids that.)
I also suggest you add that line to the instance startup script if you want the server to always be up, so that you don't have to run the command by hand each time :)
More on Compute Engine startup scripts here.
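You can set the startup script on an existing instance with gcloud; a sketch, assuming an instance named my-vm and that micromdm is on the default PATH:

gcloud compute instances add-metadata my-vm \
    --metadata startup-script='#! /bin/bash
micromdm serve &'

Startup scripts run as root on every boot, so no sudo is needed inside them.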
As the Using MicroMDM with systemd documentation suggests, you can use systemd to run the MicroMDM service on Linux. First, on the Linux host, we create the micromdm.service file, then we move it to /etc/systemd/system/micromdm.service. Then we can start the service. This way systemd keeps the service running, restarting it if it fails or after the server reboots.
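A minimal unit file might look like the following sketch; the ExecStart path and any serve flags are assumptions that depend on where and how micromdm is installed:

[Unit]
Description=MicroMDM server
After=network.target

[Service]
ExecStart=/usr/local/bin/micromdm serve
Restart=always

[Install]
WantedBy=multi-user.target

After placing the file, enable and start the service with sudo systemctl enable --now micromdm; the Restart=always line is what brings it back up if it crashes.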
I'm setting up a local Kubernetes cluster using Minikube and Xhyve as the VM driver on Mac OS.
I have gotten volume mounting from my laptop to the pod working (when I shell into the pod with kubectl exec -it my-pod -- /bin/bash, I can see the files inside the pod update along with my local filesystem), but the Ember app running inside the pod never reflects the file changes. I have to destroy the deployment, rebuild, and redeploy in order to see file changes take effect.
In my Dockerfile I EXPOSE port 42000 as the live reload port and start the server using a CMD which simply runs ember server --host 0.0.0.0 --live-reload-port 42000.
Is there some trick I'm missing to get the LiveReload feature working with Kubernetes? Thanks in advance.
It looks like it's a bug in Minikube: https://github.com/kubernetes/minikube/issues/1551
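Until that fix lands, a common workaround for file watching across VM mounts is to fall back to polling, since inotify events often don't propagate through the mount. A sketch using Ember CLI's polling watcher, keeping your existing host/port flags:

ember server --host 0.0.0.0 --live-reload-port 42000 --watcher polling

Polling is slower and costs some CPU, but it notices changes on mounted volumes that native filesystem events miss.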
What is the best practice when I push an update for my Django app to production? Should I restart both the gunicorn and nginx services, with
sudo service gunicorn restart
sudo service nginx restart
or is restarting only gunicorn enough? Finally, does the order of the restarts make any difference if I have to do both? Thanks!
It entirely depends on how you've configured your box.
To keep downtime to an absolute minimum, I actually load my new release into a different directory on the box while the old release is still running. I create a new virtual environment based on my new release's requirements.txt. Then I start a second instance of gunicorn with the new release running in it (done via supervisord with entries in supervisord.conf), and leave the old instance still running.
I then update my nginx vhost file to point the server to the new release's gunicorn socket, and finally reload nginx. I do a quick check that the new site is up and functioning, and then I stop the old gunicorn instance. If for some reason it's not responding, I switch my nginx config back to point to the old one again, and then go figure out what's wrong.
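The nginx side of that switch is just repointing proxy_pass at the new release's socket and reloading; a minimal sketch of the vhost, where the socket path and server name are assumptions:

server {
    listen 80;
    server_name example.com;

    location / {
        # point this at the new release's gunicorn socket, then reload nginx
        proxy_pass http://unix:/run/myapp/gunicorn-new.sock;
    }
}

Then sudo nginx -t && sudo nginx -s reload validates the config and reloads without dropping connections; rolling back is just reverting the socket path and reloading again.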
I do all this using an Ansible script, but here's a great article with some Fabric scripts to do something similar: https://medium.com/@healthchecks/deploying-a-django-app-with-no-downtime-f4e02738ab06
If, on the other hand, you just update your code in-place, then there should be no changes needed to your nginx config, so you shouldn't need to reload it. Just reload gunicorn and you're good to go.
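For that in-place case, gunicorn also supports a graceful reload by sending a HUP to its master process; a sketch, assuming a pidfile at /run/gunicorn.pid:

kill -HUP $(cat /run/gunicorn.pid)

On HUP the master spawns fresh workers and gracefully shuts down the old ones, so in-flight requests aren't dropped; as long as the app isn't preloaded (preload_app), the new workers pick up the new code.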
I'm working on developing a web app using Django, hosted on Gunicorn and Nginx. It's getting a bit inconvenient to run "sudo service nginx restart; sudo service gunicorn restart" every time I make a change to the code. Is there a way I can make them restart automatically whenever I make a change, or make it so the changes show up without having to restart?
You could add the '--reload' argument, as mentioned in the gunicorn documentation.
Restart workers when code changes. This setting is intended for development. It will cause workers to be restarted whenever application code changes.
Source: http://docs.gunicorn.org/en/latest/settings.html
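For example, a sketch where the myproject.wsgi module name is an assumption:

gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --reload

Note that --reload only restarts the gunicorn workers; nginx keeps proxying to the same address, so it doesn't need to be touched on code changes.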
I make a change on the local server (something obvious, removing a <h1/>). I see it on the local machine. I then commit and push the changes and do cap deploy.
I do not see the changes when I access my staging environment.
I checked the following: Capistrano does not give me an error. I SSH onto the server, and in the code the change IS there. I restart the server manually with sudo touch tmp/restart.txt, but still I cannot see the change in the browser. Symbolic links from current/ point to the correct revision folder.
What could be causing this? The only non-standard thing I'm doing, I think, is that I deploy not to production but to an environment called dev2, so my server start command is sudo passenger start -e dev2 -p 80 --user=ubuntu (btw, how should I deploy Passenger in production? passenger start always deploys it in development, for some reason).
So to summarize, when I deploy with capistrano, I don't see the changes, although the server is restarted and the codebase does have the changes.