I have the following setup on Windows 10:
Ember-cli: 2.4.3
Node: 6.11.0
Npm: 5.0.3
I am executing the command ember server from an admin command prompt and get the error below:
Livereload failed on http://localhost:49152. It is either in use or you do not have permission.
Then I tried ember serve --port 8080 --live-reload-port 35735 and it hangs. Please tell me how to correct this.
This is a shot in the dark, but it could be that you already have another process running on that port. Given the error, it sounds likely. Run the following command:
netstat -a -o -n
Check that command's output for any processes running on the ports in question.
You can try killing any processes running on those ports using (obviously, some caution is in order here):
taskkill /F /PID [pid from previous command here]
And in case you're curious, I found this on a Java developer's blog: http://therealdanvega.com/blog/2015/04/16/windows-kill-process-by-port-number
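For example, to narrow that netstat output down to just the LiveReload port from the error message and then kill the offender, something along these lines should work from the same admin prompt (49152 is the port from the error above; 1234 is a placeholder PID):
netstat -a -o -n | findstr :49152
taskkill /F /PID 1234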
I am using Code Server within my Cloud Shell. I need to use port 3000 for a specific npm package. Unfortunately, port 3000 is already used by Theia, the default editor within Cloud Shell.
I have already tried the following:
sudo kill {{PID of Theia process}} ...but it restarts again immediately
searched for settings within /google/devshell/editor/theia ...but could not find any port settings
sudo netstat -tlnp gives the following output:
Any help is much appreciated.
As mentioned by JShinigami, that issue was resolved here by changing the port of the other application. An alternative way of resolving this issue is as follows:
First, I would recommend that you reset your Cloud Shell.
You can refer to this answer to follow the steps for killing a process running on a particular port.
Option 1: A one-liner to kill only the LISTEN process on a specific port:
kill -9 $(lsof -t -i:3000 -sTCP:LISTEN)
Option 2: If you have npm installed, you can also run
npx kill-port 3000
I also found this answer on Stack Overflow that may be relevant, as it shows how they were able to kill the process once they obtained its PID.
Could you run the following command:
sudo netstat -tlnp
From the output you will be able to tell which processes are running on which ports. From there you may spot an "auto restart" configuration somewhere that causes the process to reappear even after a kill command.
Found this useful article on ways to list processes running on ports.
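As a rough sketch of that approach (assuming lsof is available in your Cloud Shell), you can list only the listening sockets together with their owning PIDs:
sudo lsof -i -P -n | grep LISTEN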
It is cloudshelledit that occupies the port.
If you don't need cloudshelledit, you can kill it off.
Note that if you open cloudshelledit again, this process will start again.
I have a bash script that I would like to run continuously on a Google Cloud server. I connected to my VM via SSH in the browser, but after I closed my browser, the script stopped.
I tried to use Cloud Shell, but if I restart my laptop, the script launches from the start. It doesn't run continuously!
Is it possible to launch my script in Google Cloud, shut down my laptop, and be sure that my script keeps running?
The solution: GNU screen. This awesome little tool lets you run a process after you've SSH'd into your remote server and then detach from it, leaving it running as it would in the foreground (not stopped in the background).
So after we've ssh'ed into our GCE VM, we will need to:
1. Install GNU screen:
apt-get update
apt-get upgrade
apt-get install screen
2. Type screen. This will open up a new screen, similar in look and feel to what clear would give you.
3. Run the process (e.g. ./init-dev.sh to fire up a ChicagoBoss Erlang server).
4. Type Ctrl + A and then Ctrl + D. This will detach your screen session but leave your processes running!
5. Feel free to close the SSH terminal. Whenever you feel like it, SSH back into your GCE VM and type screen -r to resume your previously detached session.
To kill all detached screens, run:
screen -ls | grep pts | cut -d. -f1 | awk '{print $1}' | xargs kill
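A small optional addition to the steps above: you can give each screen session a name so you can resume the right one later ("myjob" is just an example name):
screen -S myjob
screen -ls
screen -r myjob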
You have the following options:
1. Task scheduling, which involves cron jobs. Check this sample, via this answer;
2. Using startup scripts.
I performed the following test and it worked for me:
I created an instance in GCE, SSH'd into it, and created the following script, myscript.bash:
#!/bin/bash
sleep 15s
echo Hello World > result.txt
and then ran
$ bash myscript.bash
and immediately closed the browser window holding the SSH session.
I then waited for at least 15 seconds, re-engaged in an SSH connection with the VM in question and ran $ ls and voila:
myscript.bash result.txt
So the script ran even after closing the browser holding the SSH session.
Still, technically, I believe your solution lies with 1. or 2.
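As a minimal sketch of option 1, a single cron entry can relaunch the script on every boot; the path below is just an example, and you would add the line with crontab -e:
@reboot /home/user/myscript.bash >> /home/user/myscript.log 2>&1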
You can use
nohup ./yourscript.sh > output_log_file.log 2>&1 &
so the script keeps running after you disconnect.
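After reconnecting, you can check that it is still alive, for example (assuming the same file names as above):
ps aux | grep yourscript.sh
tail -f output_log_file.log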
I faced a similar issue. I logged into the virtual machine through the gcloud command on my local machine and tried to exit by closing the terminal; that halted the script running on the instance.
Use the exit command to log out of the cloud console in the local machine's PuTTY console (twice).
Make sure you have not enabled "PREEMPT INSTANCE" while creating the VM instance.
A preemptible instance is forced to shut down within 24 hours, in exchange for a much lower cost.
I have a NodeJS project and I solved this with pm2.
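For reference, a rough sketch of the pm2 approach (assuming a Node entry point named app.js; pm2 can also run plain shell scripts via --interpreter bash):
npm install -g pm2
pm2 start app.js
pm2 save
pm2 startup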
I'm trying to get URL Rewrite 2.0 installed using this Dockerfile:
FROM microsoft/aspnet:4.6.2
WORKDIR /inetpub/wwwroot
COPY obj/Docker/publish .
ADD https://download.microsoft.com/download/C/9/E/C9E8180D-4E51-40A6-A9BF-776990D8BCA9/rewrite_amd64.msi /install/rewrite_amd64.msi
RUN net start MSIServer
RUN msiexec.exe /i c:\install\rewrite_amd64.msi /quiet /passive /qn /L*v "C:\package.log"
When I build the container image, I see this error message:
The Windows Installer Service could not be accessed. This can occur if the Windows Installer is not correctly installed. Contact your support personnel for assistance.
Looking at package.log after running the container, I see this:
MSI (c) (30:A4) [08:32:10:438]: Failed to connect to server. Error: 0x80040150
MSI (c) (30:A4) [08:32:10:438]: Note: 1: 2774 2: 0x80040150: 2774 2: 0x80040150
Executing net start msiserver on the running container returns a message that the service is already started, and Google says 0x80040150 could be a problem reading the registry.
Is it expected that installing URL Rewrite this way should work, or do I need to elevate permissions somehow?
Update: Running the same msiexec command on the running container successfully installs URL Rewrite.
I finally figured it out thanks to this article. Using PowerShell to run msiexec with the appropriate switches works. Oddly, it threw "Unable to connect to the remote server" when trying to also download the MSI using PowerShell, so I resorted to using ADD.
Here's the relevant portion of my Dockerfile:
WORKDIR /install
ADD https://download.microsoft.com/download/C/9/E/C9E8180D-4E51-40A6-A9BF-776990D8BCA9/rewrite_amd64.msi rewrite_amd64.msi
RUN Write-Host 'Installing URL Rewrite' ; \
Start-Process msiexec.exe -ArgumentList '/i', 'rewrite_amd64.msi', '/quiet', '/norestart' -NoNewWindow -Wait
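For context, here is a sketch of how that fragment might sit in a complete Dockerfile. The SHELL instruction is my assumption, added so the RUN line executes under PowerShell rather than cmd; the rest mirrors the snippets above:
FROM microsoft/aspnet:4.6.2
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
WORKDIR /install
ADD https://download.microsoft.com/download/C/9/E/C9E8180D-4E51-40A6-A9BF-776990D8BCA9/rewrite_amd64.msi rewrite_amd64.msi
RUN Write-Host 'Installing URL Rewrite' ; \
    Start-Process msiexec.exe -ArgumentList '/i', 'rewrite_amd64.msi', '/quiet', '/norestart' -NoNewWindow -Wait
WORKDIR /inetpub/wwwroot
COPY obj/Docker/publish .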
My development environment has already started after installing all the needed prerequisites:
vagrant up
vagrant ssh
make membersrvc
make peer
But when trying to start membersrvc by running membersrvc after cd $GOPATH/src/github.com/hyperledger/fabric, it is not responding!
No response even after one hour!
Any suggestions?
This is exactly how membersrvc is supposed to behave: when you execute the membersrvc command you don't see any output whatsoever. However, you can verify that it is running by opening a separate terminal window and running the following command:
ps -a | grep membersrvc
Besides, as Sergey Balashevich commented, you also need to make sure that membersrvc is started and running before the peer process, so that the peer is able to get a valid certificate. This means that you need to start both the membersrvc and peer processes in separate terminal windows simultaneously.
If you want to run all the processes in a single terminal window, you can execute them in the background, as in membersrvc > result 2>&1 &. This will start the process and redirect both stdout and stderr to a result file which you can specify. If you don't care about the output at all, you can use /dev/null instead of specifying a file.
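Putting that together, a minimal sketch for a single terminal window might look like this (assuming the peer is launched with peer node start; the log file names are arbitrary):
membersrvc > membersrvc.log 2>&1 &
sleep 5
peer node start > peer.log 2>&1 &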
I am trying out gunicorn, and I installed it inside a virtualenv with a django site. I got gunicorn running with this command:
gunicorn_django -b 127.0.0.1:9000
Which is all well and good. I haven't setup a bash script or hooked it to upstart (I am on Ubuntu) yet, because I am testing it out.
In the meantime, my connection to the server broke, so I lost the console, and after reconnecting I can no longer press Ctrl + C to stop the server.
How do I stop gunicorn_django, when it is already running?
The general solution to problems like this is to do ps ax | grep gunicorn to look for the relevant process, then do kill xxxx, where xxxx is the number in the first column.
Just found this also - pkill - which will kill all processes matching the search text:
$ pkill gunicorn
No idea how well supported it is, but can confirm that it works on Ubuntu 12.04
(from http://www.howtogeek.com/howto/linux/kill-linux-processes-easier-with-pkill/)
A faster way:
> kill -9 `ps aux | grep gunicorn | awk '{print $2}'`
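As an alternative sketch (assuming your gunicorn version supports the --pid option), you can have gunicorn write its master PID to a file when you start it, and later send that PID a TERM signal for a graceful shutdown:
gunicorn_django -b 127.0.0.1:9000 --pid /tmp/gunicorn.pid
kill -TERM $(cat /tmp/gunicorn.pid)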
This was a bug that has just been fixed here.