Change port of Theia editor within Cloud Shell

I am using Code Server within my Cloud Shell. I need to use port 3000 for a specific npm package. Unfortunately, port 3000 is already used by Theia, the default editor within Cloud Shell.
I have already tried the following:
sudo kill {{PID of Theia process}} ...but it restarts again immediately
searched for settings within /google/devshell/editor/theia ...but could not find any port settings
sudo netstat -tlnp gives the following output:
Any help is much appreciated.

As mentioned by JShinigami, that issue got resolved here by changing the port of the other application. An alternative way of resolving this issue is as follows:
First, I would recommend resetting your Cloud Shell.
You can refer to this answer to follow the steps on how to kill a process running on a particular port.
Option 1: A one-liner to kill only the process LISTENing on a specific port:
kill -9 $(lsof -t -i:3000 -sTCP:LISTEN)
Option 2: If you have npm installed, you can also run
npx kill-port 3000
I also found this answer on Stack Overflow that may be relevant, as it shows how they were able to kill the process once they obtained its PID.
Could you run the following command:
sudo netstat -tlnp
From the output you will be able to tell which processes are running on which ports. From there you may spot an "auto restart" configuration somewhere that causes the process to reappear even after a kill command.
Found this useful article on ways to list processes running on ports.
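One way to see what is doing the restarting (a sketch, reusing the lsof one-liner from above) is to inspect the parent of the process bound to the port:
# find the PID listening on port 3000, then show its parent process
PID=$(lsof -t -i:3000 -sTCP:LISTEN)
ps -o ppid= -p "$PID" | xargs ps -fp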

It is cloudshelledit that occupies the port.
If you don't need cloudshelledit, you can kill it off.
Note that if you open cloudshelledit again, the process will start back up.

Related

AWS EC2 User Data not working (Tried Installing and starting httpd via User Data)

The following is my EC2 User Data:
#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
In the Security Group, SSH (port 22) and HTTP (port 80) are open.
Yet when I try accessing http://public_ip_of_instance, the Apache HTTP page doesn't load.
Also, Apache is not installed on the instance when I check with sudo systemctl status httpd.
I then manually installed it on the EC2 server and it worked. Then I removed it through yum remove, as I wanted to see whether User Data works.
I stopped the instance and started it again, but the User Data script doesn't seem to run: I am unable to access the HTTP page through the browser, and httpd is not installed on the instance.
Where is the actual issue? I remember this same thing working on another instance some months back.
Your user data is correct. Whatever is happening with your website is not due to the user data code that you provided.
There could be many reasons it does not work. The public IP of the instance has changed, as always happens when you stop/start an instance. The instance may have pre-existing software that clashes with httpd.
Here's some general advice on running UserData once or on each startup.
Short answer: as John mentioned in the comments, EC2 instances only run the UserData (aka bootstrap) script once, on initialization.
The user data Bash/PowerShell script is Infrastructure-as-Code. You deploy the script and it installs and configures the machine.
This causes confusion for everyone starting with AWS. When you think about it, though, it doesn't make sense to run the UserData script each time when the machine has already been configured.
What people often do instead is make "Golden Images" (aka Amazon Machine Images, AMIs) of pre-configured EC2s, typically for machines that take a long time to install and configure. The beauty of this is that you can set up Auto Scaling groups to use the images, which avoids any long installation during a scale-up event.
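As a sketch (the instance ID, image name, and description here are placeholders), capturing such a golden image can be done with the AWS CLI:
# create an AMI from an already-configured instance
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-golden-image" \
    --description "pre-configured web server"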
Pro tip: when developing a UserData script, run through and test it manually on the EC2 instance. Trust me, it's far quicker than troubleshooting unattended EC2 UserData errors.
Long answer: you can run the UserData on each boot of the machine using a MIME multi-part file. A MIME multi-part file allows your script to override how frequently user data is run by the cloud-init package.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
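A minimal sketch of such a MIME multi-part user data file, following the cloud-init format described at the link above (the echo line is just an illustration):
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
echo "ran at $(date)" >> /tmp/userdata-test.txt
--//--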
For all those who run into this problem: first of all, check the log with the command:
sudo cat /var/log/cloud-init-output.log
Then, if you notice connection errors to the various repositories, the reason is that you don't have an internet connection yet. If, once inside your EC2, you manage to run the update and install commands manually, then the reason they fail in the UserData is that your EC2 takes a few seconds to get the internet connection, and it executes the commands before having it. To solve this problem, just add this command after #!/bin/bash:
#!/bin/bash
until ping -c1 8.8.8.8 &>/dev/null; do :; done
sudo yum update -y
...
This will prevent your EC2 from executing commands before an internet connection is established.

Add a permanent command on boot with centos 7

When my CentOS 7 server boots, I want to run
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
because I want to reuse open connections.
Systemd is already installed, but my command is not a service, just a one-time command executed at startup.
How can I run this command automatically at startup? Thanks!
Add your command to
/etc/rc.d/rc.local
and it will run at startup.
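Note that on CentOS 7 this file is not executable by default, so for it to run at boot you must make it executable first:
chmod +x /etc/rc.d/rc.local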
The correct way to make the changes persistent is to edit the file:
/etc/sysctl.conf
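For the setting in this question, the line to add would be:
net.ipv4.tcp_tw_reuse = 1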
After the change, type:
sysctl -p
This will load the changes into the current session. The fact that the settings are in /etc/sysctl.conf will ensure that they load on reboots.

How to access UI in Airflow 1.10?

To start with, I am trying to upgrade from version 1.9 to 1.10, so my setup contains two VMs running different versions of Airflow with different port forwarding.
I can access the UI from the VM running 1.9, but I am not able to access the UI from 1.10.
To debug, I want to confirm whether the Airflow webserver is running. If I execute
sudo systemctl start airflow-webserver
it throws no error, but when
I look at netstat, I do not see any process listening on port 8080 (the default).
Also, I have not created any user, as I do not need RBAC authentication. Can that be a problem?
As requested by kaxil, below is the output of ps aux | grep airflow
Can someone provide some suggestions on how to fix this problem? If you need any further information, I can provide it; I am not sure what is relevant here.
Output of journalctl -u airflow-webserver.service -b
The error message shows that there is an issue with the airflow.cfg file, i.e. there might be a character in your airflow.cfg that is causing the issue. Recheck your config file; if you don't find an issue, post your config file in your question and we will try to figure it out.
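As a quick check (a sketch, assuming a default install on the VM), running the webserver in the foreground prints any config parse error straight to the terminal instead of hiding it behind systemd:
airflow webserver -p 8080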

google cloud instance terminate after close browser

I have a bash script that I would like to run continuously on a Google Cloud server. I connected to my VM via SSH in the browser, but after I closed my browser the script was stopped.
I tried to use Cloud Shell, but if I restart my laptop, the script launches from the start. It doesn't run continuously!
Is it possible to launch my script in Google Cloud, shut down my laptop, and be sure that my script keeps running?
The solution: GNU screen. This awesome little tool lets you run a process after you've SSH'ed into your remote server and then detach from it, leaving it running as it would run in the foreground (not stopped in the background).
So after we've ssh'ed into our GCE VM, we will need to:
1. Install GNU screen:
apt-get update
apt-get upgrade
apt-get install screen
2. Type screen. This will open up a new screen, similar in look and feel to what clear would result in.
3. Run the process (e.g. ./init-dev.sh to fire up a ChicagoBoss Erlang server).
4. Type Ctrl + A, and then Ctrl + D. This will detach your screen session but leave your processes running!
5. Feel free to close the SSH terminal. Whenever you feel like it, SSH back into your GCE VM and type screen -r to resume your previously detached session.
6. To kill all detached screens, run:
screen -ls | grep pts | cut -d. -f1 | awk '{print $1}' | xargs kill
You have the following options:
1. Task schedules, which involve cron jobs. Check this sample, via this answer;
2. Using startup scripts (see the sketch after this list).
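For option 2, a minimal sketch of attaching a startup script to an existing instance (the instance name and script path are placeholders):
gcloud compute instances add-metadata my-instance \
    --metadata-from-file startup-script=myscript.bash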
I performed the following test and it worked for me:
I created an instance in GCE, SSH'ed into it, and created the following script, myscript.bash:
#!/bin/bash
sleep 15s
echo Hello World > result.txt
and then, ran
$ bash myscript.bash
and immediately closed the browser window holding the SSH session.
I then waited for at least 15 seconds, re-engaged in an SSH connection with the VM in question and ran $ ls and voila:
myscript.bash result.txt
So the script ran even after closing the browser holding the SSH session.
Still, technically, I believe your solution lies with 1. or 2.
You can use nohup, sending the script to the background so the session can be closed:
nohup yourscript.sh > output_log_file.log 2>&1 &
I faced a similar issue. I logged into the virtual machine through the gcloud command on my local machine and tried to exit by closing the terminal; it halted the script running in the instance.
Use the exit command (twice) to log out of the cloud console from your local machine's PuTTY console.
Make sure you have not enabled "Preemptibility" while creating the VM instance.
A preemptible instance is forcibly stopped within 24 hours; it is offered to reduce cost by a huge margin.
I have a NodeJS project and I solved this with pm2.
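A minimal sketch of the pm2 approach (assuming Node.js/npm are installed; the script name is a placeholder):
npm install -g pm2
pm2 start yourscript.sh --interpreter bash
pm2 save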

debugging distcc: no job seems to run on slave

First, my ultimate goal is to cross-compile OpenCV for ARM, so I have tried 2 approaches, but no success so far.
This question is related to using distcc for compiling, using the target to run the make command but taking advantage of a beefy server to speed things up.
Basically, the target doesn't seem to be sending jobs to the slave server.
I installed distcc on both machines (apt-get install distcc)
As I understand it, the daemon only needs to run on the slave.
I set up hosts in /etc/distcc/hosts: In that file I have the IPs of both the target at 192.168.10.45 and slave at 192.168.10.34
I run the daemon with
distccd --daemon --allow 192.168.10.45
to allow the target.
With ps aux | grep distcc
I can see the 32 instances of distccd running.
If I use
netstat -pant | grep distcc
I see the daemon listening.
Now, if I tail the log file at /var/log/distccd.log, there is nothing there, and nothing happening.
When I run a job on the target with
make -j33 CC=distcc
it seems to run fine, but I see nothing happening on the slave.
ufw is disabled, and the two machines can ping and talk to each other via SSH.
What am I missing here?
You must define the list of compilation hosts (through the /etc/distcc/hosts file or through the DISTCC_HOSTS environment variable) on the master (target) machine. Check the host list by running distcc --show-hosts on the master.
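A minimal sketch of that hosts file on the target, using the IPs from the question (list only the machines that should receive jobs):
# /etc/distcc/hosts on the target (192.168.10.45)
192.168.10.34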
Specify distcc as a compiler for C++ as well:
make -j33 CC=distcc CXX=distcc
Did you run:
sudo update-distcc-symlinks
The official installation documentation currently omits this step. I had the same symptoms and had some trouble finding the log, but eventually saw that I had to specify logging in an environment variable:
DISTCCD_OPTS="${DISTCCD_OPTS} --log-file /dev/shm/distccd.log"
Which said:
(dcc_warn_masquerade_whitelist) CRITICAL! /usr/local/lib/distcc not found. You must see up masquerade (see distcc(1)) to list whitelisted compilers or pass --enable-tcp-insecure. To set up masquerade automatically run update-distcc-symlinks.