I have taken over a project that has an AWS EC2 Elastic Beanstalk instance running a socket.io server.
I have logged in and done the following steps:
SSH in
cd to the /var directory.
ls shows: account app cache db elasticbeanstalk empty games kerberos lib local lock log mail nis opt preserve run spool tmp www yp
I can't seem to find the socket server code, nor the logs for it.
I am not sure where they would be located.
There are myriad methods to get code onto a server and to run it. Without any information about the code or the OS, and assuming it’s a Linux-based OS: when I’m looking for something and know what it is, I can usually find it by searching for a string that I know will appear in at least one of the files on the server.
In this case, since it’s socket.io, I’d search for socket.io:
grep -rnw '/' -e 'socket.io'
The result will show all files, with their full paths, that contain “socket.io”, and you should be able to determine where the code is pretty quickly.
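On Elastic Beanstalk instances the deployed application usually lives under /var/app (often /var/app/current) and the platform logs live under /var/log, so it may be quicker to narrow the search to those paths first; a sketch, with paths that can vary by platform version:
grep -rnw /var/app -e 'socket.io'
ls /var/log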
I am attempting to create VMWare templates using Packer. I have a simple file that is essentially a copy of https://github.com/guillermo-musumeci/packer-vsphere-iso-windows/tree/master/win2019.base.
When I build this it times out at "Waiting for IP".
The network it is using is set for static IP, so I suspect it is that, but how do I define a static IP for this? And does it really need this for template creation?
Thanks
I’ve had similar issues with vsphere-iso Packer builds. In my case it was caused by using the wrong IP for the HTTP directory, especially when I was on my company’s VPN versus being hardwired, so the build was continually stuck at 'Waiting for IP'. The issue was the order of priority Packer uses to decide which interface to use for the HTTP directory, which contains my kickstart file. The interface it was choosing was not accessible from the vSphere instance. Could this be the issue?
How we solved this is with a shell wrapper that calls Packer. Within that script, we ask the user for the IP at which the HTTP directory should be reached; I use ifconfig and pick the 10. address in the list. The shell script then passes that value on to my Packer build.json. It’s not the cleanest solution, but I’ve been using this fix for months.
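A minimal sketch of such a wrapper, assuming the template declares a user variable named http_ip that it uses for the HTTP directory (the variable name and template file name here are placeholders):
#!/bin/sh
# Suggest the 10.x address reported by ifconfig, but let the user override it.
DETECTED=$(ifconfig | grep -o '10\.[0-9.]*' | head -n 1)
printf "IP for the Packer HTTP directory [%s]: " "$DETECTED"
read HTTP_IP
HTTP_IP=${HTTP_IP:-$DETECTED}
# Hand the chosen address to the template as a user variable.
packer build -var "http_ip=${HTTP_IP}" build.json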
I inherited management of some Promise VTrak disk array servers. They recently had to be transferred to a different location. We've got them set up, networking is all configured, and we even have a Linux server mounting them. Before they were transferred I was trained on the web GUI they come with. However, since the move we have not been able to connect to the web GUI interface.
I can SSH into the system and really do everything from there, but I would love to figure out why the webserver is not coming up.
The VTrak system does not allow for much configuration, it seems. From the CLI I can start, stop, or restart the webserver, and the only thing I can configure is how long someone can stay logged into the GUI. I don't see anywhere to configure HTTP or anything like that.
We're also pretty sure it's not a firewall issue.
I got the same issue this morning, and resolved it as follows:
It appears there are conditions that can leave the WebPam webserver with invalid HttpPort and HttpsPort configuration values. One condition that causes this is setting the sessiontimeout parameter above 1439 in the GUI. Apparently the software then generates an invalid configuration that locks you out of the GUI, because the HTTP ports have been changed to 25943 and 29538.
To fix this, log in through the CLI:
swmgt -a mod -n webserver -s "sessiontimeout=29"
swmgt -a mod -n webserver -s "HttpPort=80"
swmgt -a mod -n webserver -s "HttpsPort=443"
swmgt -a restart -n Webserver
You should now be able to access the WebPam web interface again through your web browser.
After discussing with my IT department we found it was a firewall issue.
I have to log in to a server from a remote server. I am able to log in to the remote server using phpseclib. After that I am able to log in to the next server from there, but the next command executes on the first server and not the second one. For example:
Log in to server1.example.com via SSH.
Log in to remote-server.example.com using an internal script from server1.
Execute 'ls'.
ls returns output from server1 rather than remote-server.
Are you absolutely sure your script on server1 actually logs into remote-server (and does not immediately log out)? The only explanation I can think of is that the "ls" command is not really run on the remote server. If you share the script and exact commands, that could help figure it out. (Output of "script" from the whole exchange might also be helpful.)
Something like the following might also work for you:
ssh server1.example.com ssh remote-server.example.com ls
Depending on your remote command, you might also do something like:
ssh server1.example.com "ssh remote-server.example.com ls"
(The latter might be needed if there are, e.g., redirects involved that could otherwise be interpreted by your local shell.)
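If the OpenSSH client doing the first hop is reasonably recent (7.3 or later), the jump-host option expresses the same chain a bit more cleanly; a sketch, assuming key-based access to both hosts:
ssh -J server1.example.com remote-server.example.com ls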
As you use phpseclib, that might handle the first ssh in the examples above, so you might use something like:
$ssh->exec("ssh remote-server.example.com ls")
Or if you are using public key authentication for the second step, maybe:
$ssh->exec("ssh -i ~/.ssh/keyfile remote-server.example.com ls")
There's a quick summary of how to run commands remotely with ssh at https://www.ssh.com/ssh/command/
I have a small- to medium-sized Django project where the client has been forced to change hosts. The new host convinced them they definitely needed a couple of web servers behind a load balancer (and to break the database off to a third server). I have everything ported over to the new setup, but I can't make it live yet as I'm not sure what's the best way to handle file uploads on the site as they will only get pushed up to the server the user is currently connected to. Given the three servers (counting the db which could double as a static file server if I had to), what's the cleanest and easiest way to handle this situation?
A simple solution, though it adds some latency and doesn't scale beyond a handful of servers: use rsync between the hosts. Simply add it to cron to sync the upload directory in both directions. Sticky sessions would also help here, so that the uploader sees their file as available immediately, while other visitors can get it after the next rsync completes.
This way you also get a free backup.
/usr/bin/rsync -url --size-only -e "ssh -i servers_ssh.key" user@server2:/dir /dir
(you'll have to have this in cron on both servers)
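A crontab sketch of that setup; the five-minute interval, key file, and paths are just the assumptions carried over from the command above, and the matching entry on server2 would point back at server1:
*/5 * * * * /usr/bin/rsync -url --size-only -e "ssh -i servers_ssh.key" user@server2:/dir /dir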
I have Railo (Railo 3.1.2.001 final) installed on an Amazon EC2 instance and everything seems to be working fine for the tests I have done. I can connect to MySQL and simple commands work. The applications I am planning to run on it make extensive use of CFFTP to pull files in from clients and process them. The OPEN command works fine and succeeds in both active and passive mode, but when I try to do anything (check for a file, put a file, download) I get: 500 Illegal PORT command.
My thought here is that the Amazon firewall is blocking some ports and something needs to be set up for this to work.
Anyone have any experience with this and can point me in the correct direction?
Thanks in advance,
Jeff
Do you connect from outside Amazon to the instance? If you do, check the security group and allow the IP/port for your application.
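A sketch of what allowing a port in the security group looks like with the AWS CLI; the group ID, port, and source CIDR below are placeholders for whatever your application actually needs:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 21 --cidr 203.0.113.10/32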