Ember LiveReload inside Kubernetes Pod - ember.js

I'm setting up a local Kubernetes cluster using Minikube with the xhyve VM driver on macOS.
I have volume mounting from my laptop into the pod working (when I exec into the pod with kubectl exec -it my-pod -- /bin/bash, I can see the files inside the pod update along with my filesystem), but the Ember app running inside the pod never reflects those file changes. I have to destroy the deployment, rebuild, and redeploy in order for file changes to take effect.
In my Dockerfile I EXPOSE port 42000 as the live-reload port and start the server with a CMD that simply runs ember server --host 0.0.0.0 --live-reload-port 42000.
Is there some trick I'm missing to get the LiveReload feature working with Kubernetes? Thanks in advance.

It looks like it's a bug in Minikube: https://github.com/kubernetes/minikube/issues/1551
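Separately from the Minikube bug, a common workaround sketch for this class of problem: inotify events often don't propagate across VM-mounted volumes, so ember-cli's file watcher never fires. Switching the watcher to polling (`--watcher polling` is a standard ember-cli flag) sidesteps that, at the cost of extra CPU. The host/port values below just mirror the setup described in the question:

```shell
# Workaround sketch: poll the filesystem instead of relying on inotify
# events, which often fail to cross VM volume mounts (Minikube + xhyve).
EMBER_ARGS="server --host 0.0.0.0 --live-reload-port 42000 --watcher polling"
if command -v ember >/dev/null 2>&1; then
  ember $EMBER_ARGS
else
  echo "ember-cli not found; inside the pod this would run: ember $EMBER_ARGS"
fi
```

You would put the polling variant into the Dockerfile's CMD in place of the original ember server invocation.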

Related

docker up and running on my Mac but "sam local invoke" command results in Error: Running AWS SAM projects locally requires Docker

I have Docker up and running on my Mac, but the sam local invoke command results in: Error: Running AWS SAM projects locally requires Docker. Have you got it installed and running?
Does anyone know what the reason could be?
There was a change in Docker Desktop's default context settings.
On Mac, it may default to desktop-linux if the symlink to /var/run/docker.sock was not created. This seems to cause problems for the SAM CLI (and probably a lot of other apps) that expect to communicate with Docker through this socket.
The easiest way to fix this is to set the DOCKER_HOST environment variable to point to the socket file associated with desktop-linux.
Run export DOCKER_HOST="unix:///Users/<username>/.docker/run/docker.sock"
(add this to your .zshrc or .bashrc file so that you don't have to define the variable every time you start a shell)
and then try running the sam command again.
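The fix above can be sketched without hard-coding a username (path assumes a default Docker Desktop install on macOS, where the per-user socket lives under ~/.docker/run):

```shell
# Point the Docker CLI (and anything that talks to the default socket,
# such as the SAM CLI) at Docker Desktop's per-user socket.
export DOCKER_HOST="unix://$HOME/.docker/run/docker.sock"
echo "DOCKER_HOST is now: $DOCKER_HOST"
```

Using $HOME keeps the line copy-pasteable into .zshrc or .bashrc for any user.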

How to restart node-red inside a Kubernetes pod?

I have deployed the node-red server as a Kubernetes pod and it is up and running. I have modified red.min.js, and now I need to restart node-red for the changes to take effect. But I could not restart it — how do I restart node-red inside a Kubernetes pod?
Short answer: you don't.
Since the container entrypoint is the Node-RED process, you can't restart it without the pod dying, at which point Kubernetes will start a new one (without your modifications).
But since red.min.js is part of the editor, any changes to it should not require restarting Node-RED; you just need to force the browser to reload it (without using its locally cached version).

Hyperledger fabric behave tests failing "cannot connect to Docker endpoint"

Using Hyperledger fabric, I run make behave-deps then make behave, yet several of the behave test scenarios fail ("Error starting container: cannot connect to Docker endpoint") - how would I go about fixing this?
Typically this problem is encountered when running outside of Vagrant.
Ensure you can run
docker run hello-world
without sudo.
If this fails, it can be resolved by adding the user to the docker group as described in the installation docs.
If you are running the Vagrant-based development environment described here, a change was recently made to the Docker port mapping that would manifest itself as these failed tests. Reconstruct your development environment with vagrant destroy and vagrant up from the $GOPATH/src/github.com/hyperledger/fabric/devenv directory.
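The "without sudo" check above can be sketched as a quick group-membership test (a sketch, assuming a Linux host; `docker` is the conventional group name used by the Docker installation docs):

```shell
# Check whether the current user is in the docker group, i.e. whether the
# Docker socket should be usable without sudo.
if id -Gn | grep -qw docker; then
  DOCKER_ACCESS="ok"
else
  DOCKER_ACCESS="missing"
  echo "not in the docker group; fix with: sudo usermod -aG docker \$USER"
  echo "(then log out and back in for the group change to take effect)"
fi
echo "docker group membership: $DOCKER_ACCESS"
```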

Stop detached strongloop application

I installed LoopBack on my server (Ubuntu), then created an app and used the command slc run to run it... everything works as expected.
Now I have one question and also one issue I am facing:
The question: I need to use the slc run command but keep the app "alive" even after I close the terminal. For that I used the --detach option and it works. What I wanted to know is whether the --detach option is best practice, or whether I need to do it in a different way.
The issue: after I use --detach I don't really know how to stop the app. Is there a command I can use to stop the process from running?
To stop a --detached process, go to the same directory it was run from and run slc runctl stop. There are a number of runctl commands, but stop is probably the one you are most interested in.
Best practice is a longer answer. The short version is: don't ever use --detach; instead, use an init script to run your app and keep it running (probably Upstart, since you're on Ubuntu).
Using slc run
If you want to run slc run as an Upstart job, you can install strong-service-install with npm install -g strong-service-install. This gives you sl-svc-install, a utility for creating Upstart and systemd services.
You'll end up running something like sudo sl-svc-install --name my-app --user youruser --cwd /path/to/app/root -- slc run . which should create an Upstart job named my-app that runs your app as your uid from the app's root. Your app's stdout/stderr will be sent to /var/log/upstart/my-app.log. If you are using a version of Ubuntu older than 12.04, you'll need to specify --upstart 0.6, and your logs will go to syslog instead.
Using slc pm
Another, possibly easier, route is to use slc pm, which operates at a level above slc run and happens to be easier to install as an OS service. For this route you already have everything installed. Run sudo slc pm-install and a strong-pm Upstart service will be installed, along with a strong-pm user to run it as, with a $HOME of /var/lib/strong-pm.
Where the PM approach gets slightly more complicated is that you have to deploy your app to it. Most likely this is just a matter of going to your app root and running slc deploy http://localhost:8701/, but the specifics will depend on your app. You can configure environment variables for your app, deploy new versions, and your logs will show up in /var/log/upstart/strong-pm.log.
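The PM-route deploy step above can be condensed into a sketch (assumes strong-pm was already installed via slc pm-install and is listening on its default port 8701, and that you run this from the app root):

```shell
# Deploy the current app to a locally running strong-pm instance.
PM_URL="http://localhost:8701/"
if command -v slc >/dev/null 2>&1; then
  slc deploy "$PM_URL"
else
  echo "slc not installed; on the server, run: slc deploy $PM_URL"
fi
```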
General Best Practices
For either of the options above, I recommend not doing npm install -g strongloop on your server since it includes things like yeoman generators and other tools that are more useful on a workstation than a server.
If you want to go the slc run route, you would do npm install -g strong-supervisor strong-service-install and replace your slc run with sl-run.
If you want to go the slc pm route, you would do npm install -g strong-pm and replace slc pm-install with sl-pm-install.
Disclaimer
I work at StrongLoop and primarily work on these tools.
View the status of running apps using:
slc ctl status
Example output:
Service ID: 1
Service Name: app
Environment variables:
No environment variables defined
Instances:
Version Agent version Debugger version Cluster size Driver metadata
5.2.1 2.0.3 n/a 1 N/A
Processes:
ID PID WID Listening Ports Tracking objects? CPU profiling? Tracing? Debugging?
1.1.2708 2708 0
1.1.5836 5836 1 0.0.0.0:3001
Service ID: 2
Service Name: default
Environment variables:
No environment variables defined
Instances:
Version Agent version Debugger version Cluster size Driver metadata
5.2.1 2.0.3 n/a 1 N/A
Processes:
ID PID WID Listening Ports Tracking objects? CPU profiling? Tracing? Debugging?
2.1.2760 2760 0
2.1.1676 1676 1 0.0.0.0:3002
To kill the first app, use slc ctl stop:
slc ctl stop app
Service "app" hard stopped
What if I have to run the application as a cluster? Can I still do it via the Upstart job that was created?
Like
sudo sl-svc-install --name my-app --user youruser --cwd /path/to/app/root -- slc run --cluster 4 .
I tried doing this, but /etc/init/my-app.conf does not show any information about the cluster.

New deployment does not show up with capistrano/passenger command-line

I make a change on the local server (something obvious, like removing an <h1>). I see it on the local machine. I then commit, push the changes, and run cap deploy.
I do not see the changes when I access my staging environment.
I checked the following: Capistrano does not give me an error. When I ssh onto the server, the change IS there in the code. I restart the server manually with sudo touch tmp/restart.txt, but I still cannot see the change in the browser. Symbolic links from current/ point to the correct revision folder.
What could be causing this? The only non-standard thing I'm doing, I think, is that I deploy not to production but to an environment called dev2. So my server start command is sudo passenger start -e dev2 -p 80 --user=ubuntu (by the way, how should I deploy Passenger in production? passenger start always runs it in development, for some reason).
So to summarize, when I deploy with capistrano, I don't see the changes, although the server is restarted and the codebase does have the changes.