Using plink commands from RStudio on AWS EC2 has made the instance DNS unreachable - amazon-web-services

I was trying to process GWAS data using plink 1.9 on a rented AWS Ubuntu server, executing the plink commands from the terminal window in RStudio Server.
It turns out that if I execute a plink command that overloads the server, my RStudio Server becomes inaccessible, and the problem does not resolve itself.
For example, my RStudio Server on port 8787 has become unavailable:
http://ec2-54-64-41-xxx.ap-northeast-1.compute.amazonaws.com:8787/
I have accidentally done this twice. The first time I ran something like cat xxx.vcf (how stupid of me); the server simply froze and RStudio Server crashed.
Since I could still access the server with PuTTY, WinSCP, and so on, I managed to move my files to a new instance. Then I tried to use plink to do some QC, something like:
./plink --bfile xxx --mind 1 --geno 0.01 --maf 0.05 --make-bed --out yyy
This again overloaded the server, and the same RStudio Server trouble occurred.
Both instances are still accessible from PuTTY. I logged on to check the running processes and everything seemed fine: there were no active heavy jobs and no zombie processes.
The CPU monitoring looks fine too.
The only problem is that the RStudio Server link is not working.
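Edit: in case it helps, here is roughly what I checked from the PuTTY session (the rstudio-server service commands are my assumption about the standard install; I am not sure they cover everything):

ps aux | grep -i rserver          # is the RStudio Server daemon still alive?
sudo rstudio-server status        # assuming the stock rstudio-server service
sudo rstudio-server restart       # try to bring the web UI on 8787 back
dmesg | grep -i "out of memory"   # did the kernel OOM killer strike?

I will also try capping plink's workspace with its --memory flag (size in MB), e.g. ./plink --memory 2048 --bfile xxx ..., so one heavy command cannot exhaust the instance's RAM.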
Does anyone have similar experiences? Your advice is very much appreciated.
mindy

Related

Google Compute Engine goes to sleep after some time

I'm trying to run my application on a GCE VM. It uses Node.js for the frontend and Java for the backend. I use this server to communicate with my local computer over MQTT. This works, but after some time (about an hour and a half) the server seems to go to sleep (or the ports close?).
Both the MQTT connection and the SSH terminal connection are lost.
When I connect back, the application is not running anymore; it seems as if the VM restarted.
Do you have any idea how to keep my server alive? I can give further details.
Answering my own question, as John Hanley explained the solution in the comments:
"By what method are you running your frontend and backend code. VMs do not go to sleep. If you are starting your software via an SSH session, that will be an issue. Your code needs to run as a service. Edit your question with more details."
I was indeed running my application via the SSH terminal, which caused the problem. The solution for me was to access the VM remotely via vncserver and launch the application from the VM's own terminal.
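For anyone landing here later: the more robust version of "run as a service" is a systemd unit rather than a VNC terminal. A minimal sketch, assuming a Node.js entry point at /opt/myapp/server.js (the unit name, path, and user are placeholders):

sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=My app frontend
After=network.target

[Service]
# placeholder path; point this at your real entry point
ExecStart=/usr/bin/node /opt/myapp/server.js
Restart=always
User=ubuntu

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp

This way the process survives SSH disconnects and is restarted automatically if it crashes.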

Websocket server on AWS EC2 instance doesn't respond after two days of inactivity

We are using an AWS EC2 (ubuntu-xenial-16.04-amd64-server) instance to run a PHP WebSocket server.
We use the following command to keep the WebSocket server running continuously:
nohup php -q server.php >/dev/null 2>&1 &
It runs very well for up to two days, but if no client has connected to the WebSocket server in the last two days, it automatically stops responding.
I checked the status of the WebSocket port with lsof -i:9000 and got the following output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
php 1467 ubuntu 4u IPv4 17137 0t0 TCP *:9000 (LISTEN)
It seems the WebSocket server is running, but the client (i.e. the mobile application) is not able to connect.
Is there any specific reason behind this problem? We are not able to figure out the exact issue.
You'll need to provide more information for the SO community to be able to help you.
Let's look at the layers of your infrastructure and make an educated guess where the problem might be.
We have:
an external connector (the mobile app)
a PHP script acting as a server (receiver)
the OS (Ubuntu)
The OS kernel can kill running processes for various kinds of misbehaviour; the most common culprit is the OOM killer (out-of-memory), and a quick check for it is sketched below.
It's not uncommon to see PHP scripts become unresponsive, especially when stream (socket) programming is involved; we'd need to see that code.
You say that everything is fine for two days, so we can rule out a problem with the external connector and concentrate on resource-management problems: garbage collection, memory leaks, stream leaks, etc. Either some external process is killing your PHP script, or the PHP script itself becomes unresponsive.
The investigation should start with:
sharing server.php, and then moving to
log analysis.
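If the OOM killer is the suspect, the kernel log will usually say so explicitly. A quick check, as a sketch (the exact message wording varies by kernel version):

dmesg -T | grep -i -E "out of memory|killed process"   # human-readable timestamps
journalctl -k | grep -i oom                            # same info via the journal on systemd hosts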

amazon aws server was working last night, not in the morning

I have an AWS EC2 micro instance (free tier, so no tech support). As expected, when I got back in the morning my PuTTY session had timed out after being connected for a few hours; this has happened multiple times before. When I went to restart the connection, I got a connection refusal. I tried more times, nothing. WinSCP, which I use for file transfer, also couldn't open a new login, but, praise the lord, it allowed me to download all the files I had on the server. I tried another device, and telnet on all the open incoming ports: connections refused. The MindTerm client (what you get when you click Connect in Firefox) also couldn't connect.
The instance is all green, the volume is all green, and it is passing all status checks as well.
I have looked at other threads, but they are about problems after a restart or a change to the SSH config. I haven't touched anything, and I doubt the permissions or anything else have been compromised.
Apparently just hitting reboot on the server did it. Can't believe I didn't think of something so easy.
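For the record, the same reboot can be issued from the AWS CLI if the console is inconvenient (the instance ID below is a placeholder):

aws ec2 reboot-instances --instance-ids i-0123456789abcdef0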

EC2 Database through Laravel Forge has stopped being accessible

I've been running an EC2 instance through Laravel Forge for about 2000 hours, and this morning I got this error while trying to reach it:
SQLSTATE[08006] [7] could not connect to server: Connection refused
Is the server running on host "172...***" and accepting TCP/IP connections on port 5432?
After SSHing into the server, I get a similar error when trying to run a command. I've dug through AWS but don't see any errors being thrown. I double-checked the IP address for the instance to make sure it hadn't changed for any reason. Of course, I'm a little behind on my backups for the application, so I'm hoping someone might have some ideas about what else I can do to try to access this data. I haven't made any changes to the app in about 10 days, but I found the error while pushing an update. I have six other instances of the same app that weren't affected (thankfully), which makes me even more confused about the cause of the issue.
In case anyone comes across a similar issue, here's what had happened: an error running in the background had filled the EC2 instance's hard drive with log output. Since the default Laravel/Forge image runs the database inside the EC2 instance itself, once it ran out of room everything stopped working. I was able to SSH in and delete the log, though, and everything started working again.
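If someone hits the same symptoms, a quick way to confirm a full disk and find the offending log is something like this (the log path is only an example):

df -h                                    # is the root volume at 100%?
sudo du -xh /var/log | sort -h | tail    # which logs are the biggest?
sudo truncate -s 0 /var/log/example.log  # empty a runaway log without deleting the file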
To prevent the issue from happening again, I then created an Amazon RDS instance and used that rather than the database on the EC2 instance. It's about three or four times the price of just an EC2 instance, but still not that much, and the confidence I now have in the system is well worth it.

Controlling Gunicorn in a new ssh session

I am using Gunicorn to power a Django application on a remote Ubuntu server, which I connect to over SSH. Once Gunicorn has started, the status log pops up showing what is going on. However, when I close my SSH session and reconnect later, I can't seem to reopen the process without killing Gunicorn and rebooting the server.
Not sure if I understand your issue correctly...
When running Django/Gunicorn, it is usually helpful to use a tool to control the processes. One really good option is supervisord:
http://docs.gunicorn.org/en/latest/deploy.html#supervisor
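A minimal supervisord program section, as a sketch (the paths, WSGI module name, and user are placeholders for your setup):

sudo tee /etc/supervisor/conf.d/gunicorn.conf >/dev/null <<'EOF'
[program:gunicorn]
; placeholder paths and module name
command=/home/ubuntu/venv/bin/gunicorn myproject.wsgi:application --bind 127.0.0.1:8000
directory=/home/ubuntu/myproject
user=ubuntu
autostart=true
autorestart=true
EOF
sudo supervisorctl reread
sudo supervisorctl update

supervisord will then start Gunicorn at boot and restart it if it dies, independent of any SSH session.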
If you just want to run the process directly and be able to disconnect and reconnect, screen is generally a good option.
It allows you to disconnect an SSH session while leaving your virtual terminals running.
Just re-ssh to your server and reconnect using:
screen -xr
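The full round trip looks roughly like this (the session name and Gunicorn arguments are placeholders):

screen -S gunicorn                                       # start a named session
gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
# detach with Ctrl-A then D; gunicorn keeps running
# later, after reconnecting over ssh:
screen -r gunicorn                                       # or screen -xr if it is still attached elsewhere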