I want to set up an API server with a load balancer. I will use one machine as the master, which will assign tasks in a round-robin manner to two slave machines. All machines are hosted on AWS.
To process a request, I need to run a Python script. When the master receives a request, it can trigger the command on one of the slaves over ssh, but this adds an extra couple of seconds of delay to the processing time.
Is there a way to reduce/remove the ssh delay?
I'm not sure whether you already have something implemented or are just collecting ideas.
The basic use case is described on Wikibooks, but the easiest approach is to set up public key authentication and an ssh_config entry (the config for machine2 would be almost the same):
Host machine1
    HostName machine1.example.org
    # Path of the socket used for the shared master connection
    ControlPath ~/.ssh/controlmasters/%r@%h:%p
    # Reuse one master connection for all sessions to this host
    ControlMaster auto
    # Keep the master connection open after the last session closes
    ControlPersist yes
    IdentityFile ~/.ssh/id_rsa-something
And then call the remote script like this:
ssh machine1 ./remote_script.py
The first ssh call will initiate the connection (and will take a bit longer); every subsequent call will reuse the existing connection and will be almost immediate.
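Note that the directory used by ControlPath is not created automatically, and ssh's control commands let you inspect or close the shared connection:

mkdir -p ~/.ssh/controlmasters   # the ControlPath directory has to exist

# Check whether a master connection to machine1 is currently up
ssh -O check machine1

# Close the master connection explicitly when it is no longer needed
ssh -O exit machine1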
If you are using Python, you can achieve similar behaviour with paramiko, or even Ansible if you want to step one level up (but that really depends on your use case).
The title explains the question itself. My problem is that every time I connect to my VM through SSH, the session times out after a while. I'd like to let my Python script run on its own for hours or days. Any advice? Thanks.
The VM instance will keep running even if your SSH session times out.
You can keep the SSH session alive by adding the following lines:
Host remotehost
    HostName remotehost.com
    # Send a keep-alive packet every 240 seconds so the session is not dropped
    ServerAliveInterval 240
to your $HOME/.ssh/config file.
There's a similar option in PuTTY.
To keep the process alive after disconnecting, you have multiple options, including those already suggested in the comments:
nohup
screen
setsid
cron
service/daemon
Which one to choose depends on the specifics of the task the script performs; a quick sketch of the first two is below.
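For example, a minimal sketch of the first two options (long_job.py just stands in for your script):

# nohup: ignore hangups, detach from the terminal, log the output
nohup python3 long_job.py > long_job.log 2>&1 &

# screen: run the script in a named session you can re-attach to later
screen -dmS long_job python3 long_job.py
# after reconnecting: screen -r long_job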
I am attempting to create VMware templates using Packer. I have a simple template that is essentially a copy of https://github.com/guillermo-musumeci/packer-vsphere-iso-windows/tree/master/win2019.base.
When I build this, it times out at "Waiting for IP".
The network it is using is set up for static IPs, so I suspect that is the cause, but how do I define a static IP for this? And does it really need one for template creation?
Thanks
I've had similar issues with the vsphere-iso Packer build. In my case it was caused by using the wrong IP for the HTTP directory, especially when I was on my company's VPN rather than hardwired, so the build was continually stuck at 'Waiting for IP'. The problem was the order of priority Packer uses to decide which interface serves the HTTP directory that contains my kickstart file; the interface it chose was not reachable from the vSphere instance. Could this be your issue?
How we solved this is with a shell wrapper that calls Packer. Within that script, we ask the user for the IP that the HTTP directory should be reachable at (I run ifconfig and pick the 10.x address in the list). The shell script then passes that value as a variable to my Packer build.json. It's not the cleanest solution, but I've been using this fix for months.
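A rough sketch of such a wrapper (the http_ip variable name is only an example and has to match whatever your build.json actually declares):

#!/bin/sh
# Ask which local IP the Packer HTTP directory (kickstart files) should be
# served on, since Packer may otherwise pick an interface (e.g. the VPN one)
# that the vSphere host cannot reach.
printf "IP to serve the HTTP directory on: "
read HTTP_IP

# Pass it into the template as a user variable; build.json is expected to
# declare "http_ip" and use it in its boot_command / kickstart URL.
packer build -var "http_ip=$HTTP_IP" build.json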
I'm using an Amazon compute instance with Windows Server 2012 R2 to run some executable I own for data processing.
Right now, what I do is send my data via FTP (I set up an FTP server on the remote Windows machine) and manually start the data processing. When the processing is complete, I download the outputs back from FTP and manually stop the remote Amazon instance.
I want to automate this process. Namely, I want to find a way to automatically start the remote machine when I start sending my data, then automatically trigger the processing (which I can handle via scripting), and then send back the data and shut down the machine automatically (which I think I can also handle).
So, to sum up, I need to know how I can automatically start the machine when I send my data to it.
I am using an FTP server on that machine and an EBS drive, but there may be a better way. Also, does anyone have any more suggestions on this setup?
Thank you
There are many ways to automate this. Is your control machine (from where you will be controlling the EC2 instance) a Linux or Windows machine?
Ansible: the easiest and most straightforward option if you are familiar with Ansible. Barely 20 lines of code to achieve what you want, and it is free. You would use the ec2 module to start/stop your instances and one of many modules to transfer files. However, there is a bit of a learning curve.
AWS CLI: a one-line command starts (or stops) your instance. Once the instance is up and running, you can automate the file transfer part; a rough sketch is below.
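For example (the instance ID is a placeholder):

# Start the instance and wait until it is in the running state
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0

# ... transfer the data, trigger the processing, fetch the results ...

# Stop the instance again once processing has finished
aws ec2 stop-instances --instance-ids i-0123456789abcdef0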
Is there a way to determine through a command line interface or other trick if an AWS EC2 instance is ready to receive ssh connections?
The running state does not seem to be enough. When trying to connect in the first minutes of the running state, the machine's status checks still show "initialising" and ssh times out while trying to connect.
(I am using the awscli pip package.)
Running is similar to turning a computer on and finishing a BIOS check. As far as the hypervisor is concerned, your instance is on.
The best way to know when your instance is ready is to run a script at the end of startup (or when certain services are up) that reports its status to some other listener. Using that data, or event, you will know that your instance is ready to be connected to. This is purposely vague since there are so many different ways it can be accomplished.
You could also time the expected startup, try to connect after that, and retry the connection if it fails. You still need a point at which you stop trying, as instances can fail to launch in some cases; a rough sketch with the CLI is below.
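For example (the instance ID, user, and host are placeholders):

# Block until the instance passes both EC2 status checks
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef0

# Then retry the actual SSH connection a bounded number of times
for i in $(seq 1 30); do
    ssh -o ConnectTimeout=5 -o BatchMode=yes ec2-user@ec2-host true && break
    sleep 10
done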
I'm using MATLAB to connect to a database hosted in AWS (using the Database Toolbox). To do that, I supply the database URL as a local port and create an SSH tunnel to the AWS host.
The issue is that this tunnel needs to exist for the code to run. If it does not, no error message is generated, but MATLAB hangs and needs to be killed. I would like to deploy this code to users who will not be able to troubleshoot a missing tunnel.
My question is: is there a way to check for an open local port in MATLAB? How would I check whether the tunnel is set up?
Since you are using the Database Toolbox, you might want to use the logintimeout function. As the documentation says:
Note: If you do not specify a value for logintimeout and the MATLAB session cannot establish a database connection, your MATLAB session may freeze.
And you would wrap your code inside a try/catch block.
I am not familiar with MATLAB's TCP objects, but there is a system command that executes a program and returns its exit code (see its documentation). So what would probably do the job is a small program or script (as portable as needed) that tries to connect to the local port.
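For example, netcat can do that check and returns a non-zero exit code when nothing is listening (3306 is just a placeholder for the tunnel's local port):

# Exits with 0 if something is listening on the local port, non-zero otherwise
nc -z localhost 3306

MATLAB's system command will hand that exit code back to you, so a non-zero status before connecting means the tunnel is not up.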
Alternatively, the small program/script could actually open (or re-open) the tunnel and return 0 on success. (This possibly adds the question of how MATLAB handles forked processes; I don't know how it deals with that.)
There is probably some way to do this check-if-open, re-open-if-not housekeeping from within MATLAB itself, but I have no clue.