Using pseudo tty with ssh results in warning - c++

I am using ssh from my application and must pass "-t -t" to ssh for it to work correctly; otherwise, the call to ssh interferes with my application's stdin. Forcing a pseudo terminal with -t -t avoids this issue, but instead produces the following obscure error message from ssh, although the application otherwise seems to work correctly:
tcgetattr: Inappropriate ioctl for device
I'd like to prevent this message from occurring rather than just suppressing it, but I am not sure why it appears or what I should do about it. I only get the message when -t -t is passed to ssh.
Note a similar question was asked here:
http://www.perlmonks.org/?node_id=664789
The man page for ssh says:
-t Force pseudo-tty allocation. This can be used to execute arbitrary
screen-based programs on a remote machine, which can be very useful,
e.g., when implementing menu services. Multiple -t options force tty
allocation, even if ssh has no local tty.

One may work around the issue by passing -n to ssh instead of -t -t. From the ssh man page:
-n Redirects stdin from /dev/null (actually, prevents reading from
stdin). This must be used when ssh is run in the background. A
common trick is to use this to run X11 programs on a remote
machine. For example, ssh -n shadows.cs.hut.fi emacs & will
start an emacs on shadows.cs.hut.fi, and the X11 connection will
be automatically forwarded over an encrypted channel. The ssh
program will be put in the background. (This does not work if
ssh needs to ask for a password or passphrase; see also the -f
option.)
So this is another way around the issue of stdin being taken from the calling process and given to ssh; however, I'd like to understand how to avoid the warning when using -t -t.
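For illustration, here is a rough Python sketch of the same idea (the host name and remote command are placeholders, not from the question): invoke ssh with its stdin explicitly redirected away from the parent process, which is what -n does for you, so the child can no longer consume the caller's stdin.

import subprocess

# Redirecting stdin to /dev/null mimics ssh's -n option: the ssh child
# process cannot read (and thereby interfere with) the parent's stdin.
proc = subprocess.run(
    ["ssh", "-n", "user@example.com", "uptime"],
    stdin=subprocess.DEVNULL,
    capture_output=True,
    text=True,
)
print(proc.stdout)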


Change port of Theia editor within Cloud Shell

I am using Code Server within my Cloud Shell. I need port 3000 for a specific npm package. Unfortunately, port 3000 is already used by Theia, the default editor within Cloud Shell.
I have already tried the following:
sudo kill {{PID of Theia process}} ...but it restarts again immediately
searched for settings within /google/devshell/editor/theia ...but could not find any port settings
sudo netstat -tlnp gives the following output:
Any help is much appreciated.
As mentioned by JShinigami, that issue got resolved by changing the port of the other application. Another way of resolving this issue is as follows:
First, I would recommend resetting your Cloud Shell.
You can refer to the linked answer for the steps on how to kill a process running on a particular port.
Option 1: A one-liner to kill only the process LISTENing on a specific port:
kill -9 $(lsof -t -i:3000 -sTCP:LISTEN)
Option 2: If you have npm installed, you can also run
npx kill-port 3000
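If you prefer to script this instead of typing the one-liner, here is a rough Python sketch of the same idea as Option 1 (port 3000 is assumed from the question): ask lsof for the PIDs listening on the port and send them SIGKILL.

import os
import signal
import subprocess

PORT = 3000  # the port held by the editor, per the question

# Same query as Option 1: lsof -t prints only PIDs, -sTCP:LISTEN limits
# the match to processes actually listening on the port.
result = subprocess.run(
    ["lsof", "-t", f"-i:{PORT}", "-sTCP:LISTEN"],
    capture_output=True, text=True,
)
for pid in result.stdout.split():
    print(f"Killing PID {pid} listening on port {PORT}")
    os.kill(int(pid), signal.SIGKILL)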
I also found this answer on Stack Overflow that may be relevant, as it shows how they were able to kill the process once they obtained its PID.
Could you run the following command:
sudo netstat -tlnp
From its output you will be able to tell which processes are running on which ports. That should also reveal whether some "auto restart" configuration somewhere is causing the process to reappear even after the kill command.
I found this useful article on ways to list processes running on ports.
It is cloudshelledit that occupies the port. If you don't need cloudshelledit, you can kill it off; but note that if you open cloudshelledit again, the process will come back and will not stay shut off.

Python exec_command throws Unknown command: `ls' [duplicate]

I am connecting via SSH from a terminal (on Mac) and also running a Paramiko Python script, and for some reason the two sessions seem to behave differently: the PATH environment variable is different in the two cases.
This is the code I run:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # auto-accept unknown host keys
ssh.connect('host', username='myuser', password='mypass')
stdin, stdout, stderr = ssh.exec_command('echo $PATH')  # runs without a pseudo terminal
print(stdout.readlines())
Any idea why the environment variables are different?
And how can I fix it?
The SSHClient.exec_command by default does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced (in particular, .bash_profile is not sourced for non-interactive sessions), and/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable.
To emulate the default Paramiko behavior with ssh, use the -T switch:
ssh -T myuser@host
See the ssh man:
-T Disable pseudo-tty allocation.
Conversely, to emulate the default ssh behavior with Paramiko, set the get_pty parameter of exec_command to True:
def exec_command(self, command, bufsize=-1, timeout=None, get_pty=False):
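As a minimal sketch (the host and credentials are the placeholders from the question), the workaround looks like this:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host', username='myuser', password='mypass')

# get_pty=True requests a pseudo terminal for the session, so the remote
# shell takes the same startup path as a normal interactive ssh login
# and $PATH should match what you see in the terminal.
stdin, stdout, stderr = ssh.exec_command('echo $PATH', get_pty=True)
print(stdout.read().decode())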
Though rather than working around the issue by allocating a pseudo terminal in Paramiko, you would do better to fix your startup scripts to set the same PATH for all sessions.
For that see Some Unix commands fail with "<command> not found", when executed using Python Paramiko exec_command.
Working with the Channel object instead of the SSHClient object solved my problem.
chan = ssh.invoke_shell()   # start an interactive shell session (with a PTY)
chan.send('echo $PATH\n')   # type the command into the shell
print(chan.recv(1024))      # read up to 1024 bytes of the shell's output
For more details, see the documentation.

Hyperledger: 'membersrvc' not Responding

My development environment has already started after all the needed prerequisites:
vagrant up
vagrant ssh
make membersrvc
make peer
But when I try to start membersrvc by running membersrvc after changing into the folder $GOPATH/src/github.com/hyperledger/fabric, it is not responding!
There is no response even after one hour!
Any suggestions?
This is exactly how membersrvc is supposed to behave. When you execute the membersrvc command you don't see any output whatsoever; however, you can verify that it is running by opening a separate terminal window and running the
ps -a | grep membersrvc
command.
Besides, as Sergey Balashevich commented, you also need to make sure that membersrvc is started and running before the peer process, so that the peer is able to get a valid certificate; this means you need to start both the membersrvc and peer processes in separate terminal windows simultaneously.
If you want to run all the processes in a single terminal window, you can execute them in the background, as in membersrvc > result 2>&1 &, which will start the process and redirect both stdout and stderr to a result file that you can specify. If you don't care about the output at all, you can use /dev/null instead of specifying a file.

Qt QProcess read ssh verbose output

I managed to read ssh's verbose-mode output with QProcess. But, just as in the terminal, once ssh has successfully logged in it stops producing output to the process. In the terminal, though, I can still see verbose output whenever there are connections using ssh.
I run ssh for dynamic forwarding with this command:
ssh -vfCND31338 -l username -p 22 myhost
The problem is that QProcess stops reading output once ssh has successfully logged in; it no longer reads the rest of the verbose output. What should I do about this?
I do not think this is possible off-hand with stock Qt, but you could, for example, poll at a certain interval to check whether the ssh session is still running. In that case, pgrep is your friend.
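Not a Qt-specific answer, but as an illustrative sketch of that polling idea in Python (the command-line pattern is taken from the question and is an assumption about how you launch ssh):

import subprocess
import time

# Pattern matched against the full command line of running processes;
# adjust it to whatever ssh command you actually start.
PATTERN = "ssh -vfCND31338"

while True:
    # pgrep -f returns exit code 0 when at least one process whose
    # command line matches the pattern is still running.
    alive = subprocess.run(
        ["pgrep", "-f", PATTERN],
        stdout=subprocess.DEVNULL,
    ).returncode == 0
    print("ssh session is", "running" if alive else "gone")
    if not alive:
        break
    time.sleep(5)  # poll interval in seconds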

How do I keep a Perl script running on Unix after I log off?

I have a script that takes a lot of time to complete.
Instead of waiting for it to finish, I'd rather just log out and retrieve its output later on.
I've tried:
at -m -t 03030205 -f /path/to/./thescript.pl
nohup /path/to/./thescript.pl &
And I have also verified that the processes actually exist with ps and at -l, depending on which scheduling syntax I used.
Both these processes die when I exit out of the shell. Is there a way to keep a script from terminating when I close the connection?
We have crons here and they are set up and are working properly, but I would like to use at or nohup for single-use scripts.
Is there something wrong with my syntax? Are there any other methods to producing the desired outcome?
EDIT:
I cannot use screen or disown - they aren't installed in my HP Unix setup and I am not in a position to install them either.
Use screen. It creates a terminal which keeps going when you log out. When you log back in you can switch back to it.
If you want to keep a process running after you log out:
disown -h <pid>
is a useful bash built-in. Unlike nohup, you can run disown on an already-running process.
First, stop your job with control-Z, get the pid from ps (or use echo $!), use bg to send it to the background, then use disown with the -h flag.
Don't forget to background your job or it will be killed when you logout.
This is just a guess, but something I've seen with some versions of ssh and nohup: if you've logged in with ssh then you may need to redirect stdout, stderr and stdin to avoid having the session hang when you exit. (One of those may still be attached to the terminal.) I would try:
nohup /path/to/./thescript.pl > whatever.stdout 2> whatever.stderr < /dev/null &
(This is no longer the case with my current versions of ssh and nohup - the latter redirects them if it detects that any is attached to a terminal - but you may be using different versions.)
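If you happen to be launching the job from Python rather than a shell, here is a rough sketch of the same redirection idea (the output file names are placeholders): detach the child from the terminal and run it in its own session so it survives logout.

import subprocess

# start_new_session=True runs the child in a new session (like setsid),
# and redirecting all three standard streams detaches it from the
# terminal, which is the same effect as the nohup command above.
with open("whatever.stdout", "wb") as out, open("whatever.stderr", "wb") as err:
    subprocess.Popen(
        ["/path/to/./thescript.pl"],
        stdin=subprocess.DEVNULL,
        stdout=out,
        stderr=err,
        start_new_session=True,
    )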
Syntax for nohup looks ok, but your account may not allow for processes to run after logout. Also, try redirecting the stdout/stderr to a log file or /dev/null.
Run your command in the background.
/path/to/./thescript.pl &
To get a list of your background jobs, run
jobs
Now you can selectively disown any of the above jobs by its job ID.
disown <jobid>
All the disowned processes should keep on running even after you have logged out.