Is there any way to improve sshfs speed on amazon aws? - amazon-web-services

My ping to the AWS instance is around 50 ms and cat'ing files over SSH takes well under a second, but when I mount a directory using sshfs and open it in Sublime Text 3 or Gedit, the lag is greater than 10 seconds.
1. Is there anything I could do to reduce those lags?
2. Why does it behave like this?
3. Are there better tools for remote file editing?
My ssh config:
Host myinstance
HostName ********
User ec2-user
IdentityFile ~/idfile
Compression no
Ciphers arcfour
ServerAliveInterval 15

As a first step, I'd suggest adding this line to your settings (Preferences -> Settings-User):
"atomic_save": false
and see if that does the trick. My answer to this question has some more details on why this works, but basically, with atomic_save enabled, Sublime creates a new temp file, deletes the original file, then renames the temp file back to the original's name. This leads to a very significant increase in traffic over the connection, and if the server on the other side of the pipe is at all laggy, it can really slow Sublime down.
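If that alone doesn't help, sshfs mount options can also cut down on round-trips. A minimal sketch, assuming the myinstance host alias from the config above and a hypothetical mount point (exact option availability varies by sshfs/FUSE version):
mkdir -p ~/mnt/myinstance
sshfs -o reconnect \
      -o ServerAliveInterval=15 \
      -o Compression=no \
      -o cache=yes \
      -o kernel_cache \
      myinstance:/home/ec2-user ~/mnt/myinstance
The caching options let repeated reads of the same file be served locally instead of generating a fresh round-trip each time, which is where editors that stat and re-read files aggressively tend to lose the most time.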

Related

Possible to keep the GCloud VM Instances running without connection?

The title explains the question itself. My problem is that every time I connect to my VM through SSH, the session times out after a period of time. I'd like to let my Python script run on its own for hours or days. Any advice? Thanks.
The VM instance will keep running even if your SSH session times out.
You can keep the SSH session alive by adding the following lines:
Host remotehost
HostName remotehost.com
ServerAliveInterval 240
to your $HOME/.ssh/config file.
There's a similar keepalive option in PuTTY.
To keep a process alive after disconnecting, you have multiple options, including those already suggested in the comments:
nohup
screen
setsid
cron
service/daemon
Which one to choose depends on the specifics of the task the script performs; a minimal example with the first two is shown below.
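As a quick illustration (my_script.py is a placeholder for your own script):
# nohup: detach the script from the terminal; redirect output to a log file
nohup python3 my_script.py > my_script.log 2>&1 &
# screen: run inside a named session, detach with Ctrl-A D
screen -S worker
python3 my_script.py
# reattach later, from a new SSH session
screen -r worker
nohup is the lightest option for fire-and-forget jobs; screen (or tmux) is more convenient when you want to come back and inspect the script's live output.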

Problems connecting ssh to GCP's compute engine

I paused the instance and changed the CPU to improve the performance of the Compute Engine VM (Ubuntu 18.04).
However, after applying the change, an SSH connection is not possible at all, either from the console or from VS Code.
When an SSH connection is attempted, the GCP serial port log shows the following:
May 25 02:07:52 nt-ddp-jpc GCEGuestAgent[1244]: 2021-05-25T02:07:52.4696Z GCEGuestAgent Info: Adding existing user root to google-sudoers group.
May 25 02:07:52 nt-ddp-jpc GCEGuestAgent[1244]: 2021-05-25T02:07:52.4730Z GCEGuestAgent Error non_windows_accounts.go:152: gpasswd: /etc/group.1540: No space left on device
gpasswd: cannot lock /etc/group; try again later.
Also, when I try to SSH from VS Code I get a permission-denied error.
What is the exact cause of the problem, and how do I resolve it?
Thanks as always for your help.
This is a "No space left on device" error.
To solve it, as John commented, you can follow GCP's official guide to increasing the size of a full boot disk. Once the boot disk has been resized, logging in through SSH will be possible again.
As a best practice, create a snapshot first, and keep in mind that increasing the boot disk size and/or keeping a snapshot can slightly increase the cost of your project.
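For reference, both steps can be done with the gcloud CLI. A sketch, assuming a boot disk named my-instance in zone us-central1-a (substitute your own disk name, zone, and target size):
# snapshot first, as suggested above
gcloud compute disks snapshot my-instance --zone=us-central1-a --snapshot-names=my-instance-backup
# then grow the boot disk
gcloud compute disks resize my-instance --zone=us-central1-a --size=50GB
On Ubuntu 18.04 images the root filesystem is typically grown automatically on the next boot, so restarting the instance after the resize is usually enough.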

How to store rabbitmq RABBITMQ_MNESIA_DIR on remote disk

We have two EC2 servers, both running Amazon Linux 2. One has RabbitMQ on it; the second is a new one for storage purposes.
On the second one we just attached a new volume, mounted at /data:
/dev/nvme1n1 70G 104M 70G 1% /data
This is where we would love to push our RabbitMQ queues and data. Basically, we would like to set RABBITMQ_MNESIA_DIR on the first (RabbitMQ) server so that it connects directly to the second instance and saves queues in the remote /data.
Currently it is /var/lib/rabbitmq/mnesia, and our RabbitMQ config file is just the default /etc/rabbitmq/rabbitmq.conf.
I wonder if somebody has done this before, or can point us in the right direction on how to set RABBITMQ_MNESIA_DIR to connect directly to the remote EC2 instance and store and work with queues from there. Thank you.
At the end of the day, @Parsifal was right.
We ended up making one instance bigger and changing RABBITMQ_MNESIA_DIR.
This was a bit tricky, because the problems only showed up after restarting with service rabbitmq-server restart.
First off, we needed to make sure we had the right permissions on the /data/mnesia directory we mounted. I managed it with chmod -R 755 /data, though read/write should be sufficient based on the docs.
Then we had to figure out why it kept producing errors like "broker forced connection closure with reason 'shutdown'" and "Error on AMQP connection" after startup.
So I checked the ownership of the current mnesia dir against the new one, and it turned out the new one was owned by root:root, unlike the original.
Switching it to drwxr-xr-x 4 rabbitmq rabbitmq 97 Dec 16 14:57 mnesia got things working.
Maybe this will save you some headaches; I didn't realize there was a separate user and group for rabbitmq, since I didn't create it.
The only thing to add: when you move the current working mnesia directory, consider copying its contents to the new location first, since a lot of live state is kept there. I tried it without copying, and even the admin password didn't work. :D
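Putting the above together, here is a sketch of the sequence we followed (assuming the /data mount point from the question; paths and service names can differ per install, and note that variables in rabbitmq-env.conf are written without the RABBITMQ_ prefix):
# stop the broker before touching its data directory
sudo service rabbitmq-server stop
# copy the existing state (users, vhosts, queues) so nothing is lost
sudo cp -a /var/lib/rabbitmq/mnesia /data/mnesia
# rabbitmq must own the new location, not root
sudo chown -R rabbitmq:rabbitmq /data/mnesia
# point the broker at the new base directory
echo 'MNESIA_BASE=/data/mnesia' | sudo tee -a /etc/rabbitmq/rabbitmq-env.conf
sudo service rabbitmq-server start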

Where is my socket server on AWS EC2 instance?

I have taken over a project that has an AWS EC2 Elastic Beanstalk instance running a socket.io server.
I have logged in via SSH and done the following steps:
SSH in
cd into the /var directory
ls shows: account app cache db elasticbeanstalk empty games kerberos lib local lock log mail nis opt preserve run spool tmp www yp
I can't seem to find the socket server code, nor the logs for the socket server.
I am not sure where they would be located.
There are myriad methods to get code onto a server and to run it. Without any information about the code or the OS, but assuming it's a Linux-based OS: when I'm looking for something and know what it is, I can usually find it by searching for a string that I know will appear in at least one of the files on the server.
In this case, since it's socket.io, I'd search for socket.io:
grep -rnw '/' -e 'socket.io'
The result will show all files, with their full paths, that contain "socket.io", and you should be able to determine where the code is pretty quickly.
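As a narrower starting point: on Elastic Beanstalk Linux platforms the deployed application usually lives under /var/app/current (this can vary by platform version), which makes for a much faster search than grepping all of /:
ls /var/app/current
grep -rn 'socket.io' /var/app/current
Logs are typically under /var/log; for example, /var/log/web.stdout.log on newer Amazon Linux 2 platforms, or /var/log/nodejs/ on older Node.js platforms.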

Promise Technology VTrak configure webserver

I inherited management of some Promise VTrak disk array servers, which recently had to be transferred to a different location. We've got them set up, networking is all configured, and we even have a Linux server mounting them. Before the transfer I was trained on the web GUI they come with. However, since the move we have not been able to connect to the web GUI interface.
I can SSH into the system and do pretty much everything from there, but I would love to figure out why the webserver is not coming up.
The VTrak system does not allow for much configuration, it seems. From the CLI I can start, stop, or restart the webserver, and the only thing I can configure is the amount of time someone can stay logged into the GUI. I don't see anywhere to configure HTTP or anything like that.
We're also pretty sure it's not a firewall issue.
I got the same issue this morning, and resolved it as follows:
It appears there are conditions that can leave the WebPAM webserver with invalid HttpPort and HttpsPort configuration values. One condition that causes this is setting the sessiontimeout parameter above 1439 in the GUI. The software then apparently writes an invalid configuration that locks you out of the GUI, because the HTTP ports get changed to 25943 and 29538.
To fix this, log in through the CLI:
swmgt -a mod -n webserver -s "sessiontimeout=29"
swmgt -a mod -n webserver -s "HttpPort=80"
swmgt -a mod -n webserver -s "HttpsPort=443"
swmgt -a restart -n Webserver
You should now be able to access the WebPAM webserver interface again through your web browser.
After discussing with my IT department we found it was a firewall issue.