Shell Script stops after connecting to external server

I am in the process of trying to automate deployment to an AWS server as a project for my coding course. I'm using shell script to automate the different processes, but the script stops once it connects to the AWS EC2 Ubuntu server: while the connection is open, no further shell commands run until I close it. Is there any way to have it continue sending commands while connected?
read -p "Enter Key Name: " KEYNAME
read -p "Enter Server IP With Dashes: " IPWITHD
chmod 400 $KEYNAME.pem
ssh -i "$KEYNAME.pem" ubuntu#ec2-$IPWITHD.us-east-2.compute.amazonaws.com
ANYTHING HERE AND BELOW WILL NOT RUN UNTIL SERVER IS DISCONNECTED

A couple of basic points:
A shell script is a sequential set of commands for the shell to execute. It runs a program, waits for it to exit, and then runs the next one.
The ssh program connects to the server and tells it what to do. Once it exits, you are no longer connected to the server.
The commands you put after ssh only run once ssh exits, and they then run on your local machine, not on the server you were sshed into.
So what you want to do instead is to run ssh and tell it to run a set of steps on the server, and then exit.
Look at man ssh. It says:
ssh destination [command]
If a command is specified, it is executed on the remote host instead of a login shell
So, to run a command like echo hi, you use ssh like this:
ssh -i "$KEYNAME.pem" ubuntu#ec2-$IPWITHD.us-east-2.compute.amazonaws.com "echo hi"
Or, for longer commands, use a bash heredoc:
ssh -i "$KEYNAME.pem" ubuntu#ec2-$IPWITHD.us-east-2.compute.amazonaws.com <<EOF
echo "this will execute on the server"
echo "so will this"
cat /etc/os-release
EOF
Or, put all those commands in a separate script and pipe it to ssh:
cat commands-to-execute-remotely.sh | ssh -i "$KEYNAME.pem" ubuntu@ec2-$IPWITHD.us-east-2.compute.amazonaws.com
Definitely read What is the cleanest way to ssh and run multiple commands in Bash? and its answers.
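Putting it all together, the original script could look something like this (a sketch; the steps inside the heredoc are placeholder deployment commands):
#!/bin/bash
read -p "Enter Key Name: " KEYNAME
read -p "Enter Server IP With Dashes: " IPWITHD
chmod 400 "$KEYNAME.pem"
# Everything up to EOF runs on the server; quoting 'EOF' keeps the local
# shell from expanding anything inside the heredoc before it is sent.
ssh -i "$KEYNAME.pem" "ubuntu@ec2-$IPWITHD.us-east-2.compute.amazonaws.com" <<'EOF'
echo "running on the server"
sudo apt-get update    # placeholder deployment step
EOF
# ssh has exited, so from here on the script is running locally again
echo "back on the local machine"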

Related

Docker SSH login fails remotely

I've created a Docker container on an AWS server which runs an SSH service.
I relied on the following example: https://docs.docker.com/engine/examples/running_ssh_service/ and added my own logic to the Dockerfile.
When trying to log in to the container remotely I get the password prompt, but the password I set for the SSH user does not work. When I try the exact same password over a local ssh connection (from within the AWS server to 127.0.0.1 -p exported_SSH_port) it works perfectly.
Any ideas?
There's a little bug in the Docker docs: in the default sshd_config the PermitRootLogin line ships commented out, so the sed pattern has to include the leading #.
You should change
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
To
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
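After rebuilding the image, you can verify that the substitution took effect by grepping the config inside the running container (the container name here is a placeholder):
docker exec my-sshd-container grep PermitRootLogin /etc/ssh/sshd_config
# expected output: PermitRootLogin yes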

How do I connect to an AWS EC2 server from a Chromebook using the Secure Shell extension?

I am trying to connect to my ec2 instance from my chromebook using the secure shell extension but I keep getting the following error:
Loading NaCl plugin... done.
ssh: connect to host (public DNS) port 22: Connection refused
NaCl plugin exited with status code 255.
I have been following the steps on this site but with no success.
http://www.mattburns.co.uk/blog/2012/11/15/connecting-to-ec2-from-chromes-secure-shell-using-only-a-pem-file/
Help please.
If you're doing this on your chromebook, you should have developer mode enabled so that you can enter the console and execute Linux commands. Once developer mode is enabled, enter the console with ctrl+alt+t and then type in shell.
First you'll want to change the permissions of your .pem key. ssh-keygen won't run if the key's permissions aren't restrictive enough.
sudo chmod 400 myKeyPair.pem
Next you'll want to generate your own public key with ssh-keygen, as mentioned in the linked guide.
ssh-keygen -y -f myKeyPair.pem > myKeyPair.pub
After this, copy the private key into a new file with no extension:
cat myKeyPair.pem > myKeyPair
Next, open up the Secure Shell extension.
Enter the connection information for your machine, and don't forget to specify the port number. When importing the key pair, select both the myKeyPair.pub and myKeyPair files using ctrl.
That's it, you should be connected!
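If the extension still reports "Connection refused", it can help to rule the key in or out by attempting the same connection directly from the crosh shell (the hostname is a placeholder for your instance's public DNS):
ssh -i myKeyPair.pem ubuntu@ec2-xx-xx-xx-xx.us-east-2.compute.amazonaws.com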

haproxy in docker container

I'm new to docker and haproxy. I tried to follow the example from the official Docker Hub repo.
So, I have Dockerfile
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
and a simple haproxy config (which I expect to redirect local calls to my EB instance)
global
    # daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 {my-app}.elasticbeanstalk.com:80 maxconn 32
Build and run
$ docker build .
$ docker run --rm d4598bcc293f
The container starts and hangs; Ctrl+C doesn't stop it, only "docker kill" does.
My EB resource is up and running
$ curl {my-app}.elasticbeanstalk.com/status
{
"status": "OK"
}
But local calls fail
$ boot2docker ip
192.168.59.104
$ curl 192.168.59.104/status
curl: (7) Failed to connect to 192.168.59.104 port 80: Connection refused
What am I missing or doing wrong?
Thank you!
UPDATE: I've found the problem with the call redirection: a wrong port number in haproxy.cfg. But one thing still annoys me: the container starts and hangs, Ctrl+C doesn't stop it, only "docker kill" does.
If you want to be able to exit with control-c, do docker run -i <image>. The -i means to pass input to the containerized program, and if HAProxy gets a control-c then it will terminate which will stop the container.
HAProxy doesn't produce any output unless you run it in debug mode, so there's not really much point to running attached, though. You might have a better time with docker run -d <image>, which will detach from the container and let it run in the background. To stop it, use docker kill.
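For example (the image tag and container name here are assumptions, not from the question):
docker build -t my-haproxy .
docker run -d --name lb -p 80:80 my-haproxy   # run detached, publish port 80
docker kill lb                                # stop it when you're done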

How to kill a process running on a port on a remote AWS server

I am trying to kill a process running at
http://54.218.73.244:7002/
I have used the command
fuser -k 7002/tcp
but it is not working; the process continues to run. I am using ExpressJS to run the server script. How can I resolve this?
The fuser command has to run on the server itself, not on your local machine, so send it over ssh:
ssh 54.218.73.244 fuser -k 7002/tcp
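If fuser isn't available on the server, a similar sketch (the ubuntu user is an assumption; the IP and port are from the question) looks the PID up with lsof instead:
ssh ubuntu@54.218.73.244 'kill $(lsof -t -i :7002)'   # -t prints bare PIDs suitable for kill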

How to figure out why ssh session does not exit sometimes?

I have a C++ application that uses ssh to summon a connection to the server. I find that sometimes the ssh session is left lying around long after the command to summon the server has exited. Looking at the CentOS 4 man page for ssh I see the following:
The session terminates when the command or shell on the remote machine
exits and all X11 and TCP/IP connections have been closed. The exit
status of the remote program is returned as the exit status of ssh.
I see that the command has exited, so I imagine not all the X11 and TCP/IP connections have been closed. How can I figure out which of these ssh is waiting for, so that I can fix my C++ application to clean up whatever is being left behind that keeps the ssh session open?
I wonder why this failure only occurs some of the time and not on every invocation. It happens roughly 50% of the time. What could my C++ application be leaving around to trigger this?
More background: The server is a daemon, when launched, it forks and the parent exits, leaving the child running. The client summons by using:
popen("ssh -n -o ConnectTimeout=300 user#host \"sererApp argsHere\""
" 2>&1 < /dev/null", "r")
Use libssh or libssh2, rather than calling popen(3) from C only to invoke ssh(1) which itself is another C program. If you want my personal experience, I'd say try libssh2 - I've used it in a C++ program and it works.
I find some hints here:
http://www.snailbook.com/faq/background-jobs.auto.html
This problem is usually due to a feature of the OpenSSH server. When writing an SSH server, you have to answer the question, "When should the server close the SSH connection?" The obvious answer might seem to be: close it when the server-side user program started by client request (shell or remote command) exits. However, it's actually a bit more complicated; this simple strategy allows a race condition which can cause data loss (see the explanation below). To avoid this problem, sshd instead waits until it encounters end-of-file (eof) on the pipes connecting to the stdout and stderr of the user program.
@sienkiew: If you really want to execute a command or script via ssh and exit, have a look at the daemon tool from the libslack package. (Similar tools that can detach a command from its standard streams are screen, tmux or detach.)
To inspect stdin, stdout & stderr of the command executed via ssh on the command line, you can, for example, use lsof.
# sample code to figure out why ssh session does not exit
# sleep keeps its stdout open, so sshd only sees EOF after command completion
ssh localhost 'sleep 10 &'                                # blocks: the background sleep still holds stdout open
ssh localhost 'sleep 10 1>&- &'                           # does not block: stdout is closed, so sshd sees EOF
ssh localhost 'sleep 10 & lsof -p ${!}'                   # inspect the open fds of the backgrounded sleep
ssh localhost 'sleep 10 1>&- & lsof -p ${!}'              # same, with stdout closed
ssh localhost 'sleep 10 1>/dev/null & lsof -p ${!}'       # same, with stdout redirected
ssh localhost 'sleep 10 1>/dev/null 2>&1 & lsof -p ${!}'  # same, with stdout and stderr redirected
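Applied to the original problem, the same idea means redirecting the daemon's streams when it is launched, so sshd sees EOF immediately and the ssh started by popen() exits as soon as the command does (a sketch reusing the placeholder names from the question):
ssh -n -o ConnectTimeout=300 user@host 'serverApp argsHere >/dev/null 2>&1 </dev/null'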