How to kill a process running on a port on a remote AWS server - amazon-web-services

I am trying to kill a process running at
http://54.218.73.244:7002/
I have used the command
fuser -k 7002/tcp
but it is not working; the process continues to run.
I am using Express.js on the server to run the server script.
How can I resolve this?

ssh 54.218.73.244 fuser -k 7002/tcp
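If fuser still has no effect, one hedged alternative is to SSH in and kill the listener by PID instead, assuming lsof is installed on the instance and you can log in as the default user:
ssh 54.218.73.244 'bash -s' <<'EOF'
# Sketch: find whatever is listening on TCP 7002 and terminate it
PIDS=$(lsof -t -i tcp:7002)
if [ -n "$PIDS" ]; then
  kill $PIDS                          # polite SIGTERM first
  sleep 2
  kill -9 $PIDS 2>/dev/null || true   # force-kill only if something is still alive
else
  echo "nothing is listening on port 7002"
fi
EOF
Note that if the Express app is supervised by something like pm2, forever, or systemd, the supervisor will simply restart it; in that case stop the supervisor service instead of killing the node process.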

Related

Shell Script stops after connecting to external server

I am in the process of trying to automate deployment to an AWS server as a cool project for my coding course. I'm using a shell script to automate different processes, but once the script connects to the AWS EC2 Ubuntu server it will not run any other shell command until I close the connection. Is there any way to have it continue sending commands while connected?
read -p "Enter Key Name: " KEYNAME
read -p "Enter Server IP With Dashes: " IPWITHD
chmod 400 $KEYNAME.pem
ssh -i "$KEYNAME.pem" ubuntu#ec2-$IPWITHD.us-east-2.compute.amazonaws.com
ANYTHING HERE AND BELOW WILL NOT RUN UNTIL SERVER IS DISCONNECTED
A couple of basic points:
A shell script is a sequential set of commands for the shell to execute. It runs a program, waits for it to exit, and then runs the next one.
The ssh program connects to the server and tells it what to do. Once it exits, you are no longer connected to the server.
The instructions that you put in after ssh will only run when ssh exits. Those commands will then run on your local machine instead of the server you are sshed into.
So what you want to do instead is to run ssh and tell it to run a set of steps on the server, and then exit.
Look at man ssh. It says:
ssh destination [command]
If a command is specified, it is executed on the remote host instead of a login shell.
So, to run a command like echo hi, you use ssh like this:
ssh -i "$KEYNAME.pem" ubuntu#ec2-$IPWITHD.us-east-2.compute.amazonaws.com "echo hi"
Or, for longer commands, use a bash heredoc:
ssh -i "$KEYNAME.pem" ubuntu#ec2-$IPWITHD.us-east-2.compute.amazonaws.com <<EOF
echo "this will execute on the server"
echo "so will this"
cat /etc/os-release
EOF
Or, put all those commands in a separate script and pipe it to ssh:
cat commands-to-execute-remotely.sh | ssh -i "$KEYNAME.pem" ubuntu@ec2-$IPWITHD.us-east-2.compute.amazonaws.com
Definitely read "What is the cleanest way to ssh and run multiple commands in Bash?" and its answers.

How do I restart airflow webserver?

I am using Airflow for my data pipeline project. I have configured my project in Airflow and started the Airflow server as a background process using the following command
airflow webserver -p 8080 -D True
The server runs successfully in the background. Now I want to enable authentication in Airflow and have made the configuration changes in airflow.cfg, but the authentication functionality is not reflected in the server. When I stop and start the Airflow server on my local machine, it works.
So how can I restart my daemonized airflow webserver process on my server?
I advise running Airflow in a robust way, with auto-recovery, under systemd
so you can do:
- to start: systemctl start airflow
- to stop: systemctl stop airflow
- to restart: systemctl restart airflow
For this you'll need a systemd 'unit' file.
As a (working) example you can use the following:
put it in /lib/systemd/system/airflow.service
[Unit]
Description=Airflow webserver daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service
[Service]
PIDFile=/run/airflow/webserver.pid
EnvironmentFile=/home/airflow/airflow.env
User=airflow
Group=airflow
Type=simple
ExecStart=/bin/bash -c 'export AIRFLOW_HOME=/home/airflow ; airflow webserver --pid /run/airflow/webserver.pid'
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
Restart=on-failure
RestartSec=42s
PrivateTmp=true
[Install]
WantedBy=multi-user.target
P.S.: change AIRFLOW_HOME to wherever the airflow folder with your config lives
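Once the unit file is in place, a typical sequence to enable and start it (standard systemctl usage, sketched here for completeness) is:
sudo systemctl daemon-reload        # pick up the new unit file
sudo systemctl enable airflow       # start the service at boot
sudo systemctl start airflow
sudo systemctl status airflow       # verify the webserver came up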
Can you check $AIRFLOW_HOME/airflow-webserver.pid for the process id of your webserver daemon?
Then send it a kill signal to kill it:
cat $AIRFLOW_HOME/airflow-webserver.pid | xargs kill -9
Then clear the pid file
cat /dev/null > $AIRFLOW_HOME/airflow-webserver.pid
Then just run
airflow webserver -p 8080 -D True
to restart the daemon.
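If you find yourself repeating these three steps, they can be wrapped in a small helper script; this is only a sketch, assuming AIRFLOW_HOME is set and the default pid file location is used:
#!/usr/bin/env bash
# restart-airflow-webserver.sh -- sketch of the steps above
set -e
PIDFILE="$AIRFLOW_HOME/airflow-webserver.pid"
if [ -f "$PIDFILE" ]; then
  kill -9 "$(cat "$PIDFILE")" 2>/dev/null || true   # kill the old daemon if it is still around
fi
: > "$PIDFILE"                                      # clear the stale pid file
airflow webserver -p 8080 -D                        # start the daemon again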
This worked for me (multiple times! :D )
Find the process id (assuming 8080 is the port):
lsof -i tcp:8080
Kill it:
kill <pid>
Use Airflow webserver's (gunicorn) signal handling
Airflow uses gunicorn as its HTTP server, so you can send it standard POSIX-style signals. A signal commonly used to tell a daemon to restart is HUP.
You'll need to locate the pid file for the airflow webserver daemon in order to get the right process id to send the signal to. This file could be in $AIRFLOW_HOME or in /var/run, which is where you'll find many pid files.
Assuming the pid file is in /var/run, you could run the command:
cat /var/run/airflow-webserver.pid | xargs kill -HUP
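If the pid file turns out to live under $AIRFLOW_HOME instead, the equivalent command is:
kill -HUP "$(cat "$AIRFLOW_HOME/airflow-webserver.pid")"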
gunicorn uses a preforking model, so it has master and worker processes. The HUP signal is sent to the master process, which performs these actions:
HUP: Reload the configuration, start the new worker processes with a new configuration and gracefully shutdown older workers. If the application is not preloaded (using the preload_app option), Gunicorn will also load the new version of it.
More information in the gunicorn signal handling docs.
This is mostly an expanded version of captaincapsaicin's answer, but using HUP (SIGHUP) instead of KILL (SIGKILL) to reload the process instead of actually killing it and restarting it.
In my case I wanted to kill the previous Airflow processes and start over.
For that, the following command did the magic:
killall -9 airflow
As the question was related to the webserver, this is something that worked in my case:
systemctl restart airflow-webserver
Just run:
airflow webserver -p 8080 -D
Find the pid with:
airflow webserver
which will give: "The webserver is already running under PID 21250."
Then kill the webserver process with:
kill 21250
None of these worked for me. I had to delete the $AIRFLOW_HOME/airflow-webserver.pid file, and then running airflow webserver worked.
Create an init script and use the "daemon" command to run this as a service.
daemon --user="${USER}" --pidfile="${PID_FILE}" airflow webserver -p 8090 >> "${LOG_FILE}" 2>&1 &
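A minimal sketch of what such an init-style wrapper could look like; USER, PID_FILE, and LOG_FILE are placeholders you would set for your host, and the daemon utility is assumed to be installed:
#!/usr/bin/env bash
# /etc/init.d/airflow-webserver -- minimal sketch, not a complete LSB init script
USER="airflow"
PID_FILE="/run/airflow/webserver.pid"
LOG_FILE="/var/log/airflow/webserver.log"

case "$1" in
  start)
    daemon --user="${USER}" --pidfile="${PID_FILE}" airflow webserver -p 8090 >> "${LOG_FILE}" 2>&1 &
    ;;
  stop)
    kill "$(cat "${PID_FILE}")"
    ;;
  restart)
    "$0" stop
    sleep 2
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    ;;
esac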
The recommended approach is to create and enable the airflow webserver as a service. If you named the webserver as 'airflow-webserver', run the following command to restart the service:
systemctl restart airflow-webserver
You can use a ready-made AMI (namely, LightningFlow) from AWS Marketplace that provides Airflow services (webserver, scheduler, worker) enabled at startup.
Note: LightningFlow comes pre-integrated with all required libraries, Livy, custom operators, and local Spark cluster.
Link for AWS Marketplace: https://aws.amazon.com/marketplace/pp/Lightning-Analytics-Inc-LightningFlow-Integrated-o/B084BSD66V
Just by killing processes!!
Assuming the default airflow home directory is ~/airflow/
List the three parent processes running Airflow (PIDs):
cat ~/airflow/airflow-scheduler.pid
cat ~/airflow/airflow-webserver.pid
cat ~/airflow/airflow-webserver-monitor.pid
Get their PGID using:
ps -xjf
And finally run a loop to kill the whole process tree of each parent (PID):
for child in $(ps x -o "%P %p %r" | awk -v pid="$your_first_PID" -v pgid="$your_first_PGID" '{ if ($1 == pid || $3 == pgid) print $2 }'); do kill "$child"; done
To restart Airflow you need to restart Airflow webserver and Airflow scheduler.
Check if Airflow servers are running:
ps aux | grep airflow
If you see entries like this in the list of running processes:
ubuntu 49601 0.1 1.6 266668 135520 ? S 12:19 0:00 [ready] gunicorn: worker [airflow-webserver]
This means that Airflow webserver is running.
If you see entries like this:
ubuntu 49653 0.6 2.3 308912 187596 ? S 12:19 0:00 airflow scheduler -- DagFileProcessorManager
That means that Airflow scheduler is running.
Stop Airflow servers (webserver and scheduler):
pkill -f "airflow scheduler"
pkill -f "airflow webserver"
Now run ps aux | grep airflow again to check that they have really shut down.
Start Airflow servers in background (daemon):
airflow webserver -D
airflow scheduler -D

haproxy in docker container

I'm new to Docker and HAProxy. I tried to follow the example from the official Docker Hub repo.
So, I have Dockerfile
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
and a simple haproxy config (which I expect to forward local calls to my EB instance)
global
    # daemon
    maxconn 256
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
frontend http-in
    bind *:80
    default_backend servers
backend servers
    server server1 {my-app}.elasticbeanstalk.com:80 maxconn 32
Build and run
$ docker build .
$ docker run --rm d4598bcc293f
The container starts and gets stuck; Ctrl+C doesn't stop it. Only "docker kill" helps.
My EB resource is up and running
$ curl {my-app}.elasticbeanstalk.com/status
{
"status": "OK"
}
But local calls fail
$ boot2docker ip
192.168.59.104
$ curl 192.168.59.104/status
curl: (7) Failed to connect to 192.168.59.104 port 80: Connection refused
What am I missing or doing wrong?
Thank you!
UPDATE: I've found the problem with the call forwarding. Wrong port
number in haproxy.cfg.
But this problem still annoys me... The container starts and gets stuck;
Ctrl+C doesn't stop it. Only "docker kill" helps.
If you want to be able to exit with Ctrl+C, do docker run -i <image>. The -i means to pass input to the containerized program, and if HAProxy gets a Ctrl+C then it will terminate, which stops the container.
HAProxy doesn't produce any output unless you run it in debug mode, so there's not really much point in running attached, though. You might have a better time with docker run -d <image>, which will detach from the container and let it run in the background. To stop it, use docker kill.
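A hedged sketch of that workflow; the image tag haproxy-eb is made up here, and -p 80:80 publishes the port so the container is reachable from the boot2docker IP:
docker build -t haproxy-eb .             # tag the image instead of using the raw ID
docker run -d --name haproxy-eb -p 80:80 haproxy-eb
docker logs haproxy-eb                   # will be empty unless haproxy runs in debug mode
docker kill haproxy-eb                   # stop the detached container when done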

C++ Windows Server - how to accept an SSH connection?

I have a server application I'm making specifically for Windows. However, the client that needs to connect to it was originally written for a Linux server. The client connects through SSH and runs simple commands that execute bash scripts. I'd like to make my server work with this client without making any changes to the client.
So the client would usually SSH into the Linux server -> run a command to execute bash -> the bash script did its work.
I'd like the Windows server to accept the SSH connection -> grab the command -> execute a function that does what the bash script would have done.
My question here is: how can I make my server accept that SSH connection and get the command the client sends through?
Maybe these resources will help.
Set up a free SSH server on Windows 7 with freeSSHd
http://www.techrepublic.com/blog/tr-dojo/set-up-a-free-ssh-server-on-windows-7-with-freesshd/
OpenSSH for Windows
http://sshwindows.sourceforge.net/
MobaSSH: Free SSH server for Windows
http://mobassh.mobatek.net/

How to free up the used port in virtual env

I am using virtualenv to run a Flask app, and often my localhost port does not work and I have to do export PORT=500* . I mean, after using foreman start a couple of times on a specific port, the port gets engaged, and when I try to start again it keeps retrying to connect and then fails.
I have to change the port every time I experience this problem. Is there a command by which I can free the port or delete it?
This often happens because foreman doesn't shut down properly. Try looking to see whether there are processes still running in the background that might be using the port. For example, if you use foreman to launch a python app, try:
ps aux | grep python
to see all your running python processes. You can automatically kill all running python processes using the following command:
ps aux | grep python | tr -s ' ' '\t' | awk '{system("kill " $2)}'
but be careful, as this will kill every python process you have running.
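If you only need to free a specific port rather than kill every python process, a narrower approach (a sketch, assuming the stuck port is 5000) is:
lsof -i tcp:5000                      # see what is holding the port
kill $(lsof -t -i tcp:5000)           # terminate it; use kill -9 only as a last resort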