I managed to read ssh's verbose-mode output with QProcess. But, just as in the terminal, once ssh has logged in successfully, the process stops. In the terminal, however, I can still see verbose output whenever connections go through ssh.
I make ssh for dynamic forwarding like this command:
ssh -vfCND31338 -l username -p 22 myhost
The problem is that QProcess stops reading output once ssh has logged in successfully; it never reads the rest of the verbose output. What should I do about this?
I do not think this is possible off-hand with stock Qt, but you could, for example, poll at a regular interval to check whether the ssh session is still running. In that case, pgrep is your friend.
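For background: -f tells ssh to go to the background just before command execution, so the process QProcess started exits (taking its stdout/stderr with it) while a forked child keeps the tunnel alive. Below is a minimal shell sketch of the polling idea; the pgrep pattern is a guess based on the question's command, and a QTimer could drive the same check from Qt:
# poll every 5 seconds until the forked ssh tunnel is gone
while pgrep -f 'ssh -vfCND31338' > /dev/null; do
    sleep 5
done
echo "ssh tunnel has exited"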
I need to create AWS CentOS 7 instance images for a customer, and need them to automatically send the IP and instance ID to our AWS server every time an instance boots. For example, this is the very basic test version of the script I need to run:
#!/bin/bash
serverIP=""
# quote the whole URL so the shell does not treat & as a background operator
curl "https://${serverIP}/myphp.php?id=sentid&ip=sentip"
If the script is run directly, it works fine and is received by the server and processed there. But I can't get it to run at boot. I cannot put the script in the "User Data" directly, due to security concerns, as the customer could then see it easily; it needs to be in a script in the filesystem of the image.
I've tried several things that work fine on a physical Linux server, but not on AWS. I know profile.d runs every time someone logs in, but over-sending like that is fine.
/etc/profile.d/myscript.sh
This stops the AWS instance from booting. Even just
#!/bin/bash
echo "hello world"
prevents it from booting. The instance starts, but when you go to ssh into it you get "Network Error: connection timed out", which is the standard error if you put in a wrong IP, or upset it by leaving a service like httpd enabled.
However, a blank bash script containing just #!/bin/bash will allow the instance to start. Removing the script via User Data usually makes it boot; sometimes it just dies.
The first thing I tried was crontab. I did:
crontab -e
@reboot /var/ook/myscript.sh
systemctl enable crond.service
But the instance wouldn't start. So I put "systemctl disable crond.service" in the User Data and one instance booted, but another still stayed dead. myscript.sh was just another echo "doob" >> file, which worked fine when run directly.
I tried putting in /etc/systemd/system/my-startup.service:
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/var/ook/writedood.sh
[Install]
WantedBy=multi-user.target
then:
systemctl enable my-startup.service
But this did nothing. My script "writedood.sh" was just echo "doob" >> ./file.txt ensuring file.txt was chmod 777. At least it didn't prevent the instance from starting.
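For what it's worth, a unit of roughly this shape is a bit more robust. The [Unit] section is optional, but ordering after the network can matter for a script that phones home; also note that systemd runs ExecStart with / as the working directory, so a relative path like ./file.txt inside the script will not land where you expect, and the script itself must be executable. A minimal sketch under those assumptions:
[Unit]
Description=Run writedood.sh once at boot
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/var/ook/writedood.sh

[Install]
WantedBy=multi-user.target
After editing the unit, run systemctl daemon-reload and then systemctl enable my-startup.service again.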
To give context, an instance won't start if httpd is left enabled on shutdown, but will if you disable it in User Data.
I wanted to have a go at putting something in init.d but I'm not sure how to simply tell it to run a script once in the background, and given the plethora of success I've had so far with the instance not restarting, I'm not holding out much hope that that would work.
Thanks in advance!
EDIT: I realised that sometimes the AWS EC2 Instances console is causing the problem where I can't ssh in after stopping and starting. It blanks the public IPv4 address when I click stop, but when I start, it puts the old address up and hangs. If I refresh the page, or uncheck and recheck the instance, the IP changes to the new address. This has caused much consternation.
Crontab worked once I placed the script and the output file in different folders. It's very finicky: any error, such as not being able to write to the output file, and the instance won't start. I put startscript.sh in /usr/local/src and output.out in /tmp/ to ensure there were no permission problems, and now the instance starts and runs the script on boot.
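For reference, the crontab entry that layout implies would look something like this sketch (the explicit redirection is an assumption about how output.out gets written):
@reboot /usr/local/src/startscript.sh >> /tmp/output.out 2>&1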
My development environment is already up, after running all the needed prerequisites:
vagrant up
vagrant ssh
make membersrvc
make peer
But when I try to start membersrvc by running membersrvc after changing into the folder with cd $GOPATH/src/github.com/hyperledger/fabric, it is not responding!
There is no response even after one hour!
Any suggestions?
This is exactly how membersrvc is supposed to behave. When you execute the membersrvc command you don't see any output whatsoever; however, you can verify that it is running by opening a separate terminal window and running
ps -a | grep membersrvc
command.
Besides, as Sergey Balashevich commented, you also need to make sure that membersrvc is started and running before the peer process will be able to get a valid certificate, which means that you need to start both the membersrvc and peer processes in separate terminal windows simultaneously.
If you want to run all the processes in a single terminal window, you can execute them in the background, as in membersrvc > result 2>&1 &. This starts the process and redirects both stdout and stderr to a result file that you can specify. If you don't care about the output at all, you can use /dev/null instead of specifying a file.
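Putting that together, a single-terminal startup might look like the following sketch; the log file names are arbitrary, and peer node start assumes the peer CLI's usual start command, so adjust to your build:
# assuming both binaries are built and on your PATH
membersrvc > membersrvc.log 2>&1 &
peer node start > peer.log 2>&1 &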
I am developing software in Qt in which I created a terminal. I run different commands through QProcess, but when I run root commands it asks for a password in the terminal. I tried running via sudo, but it only accepts the password in the terminal. Is there any way to supply the password from another source, such as a pop-up widget or a text file?
I have created a QProcess with "bash" as the program.
Then just write this to its standard input:
echo mypassword | sudo -S ifconfig eth0 192.168.1.123\n
The -S flag makes sudo read the password from standard input instead of from the terminal.
You could try
Running your application as root (which is really a very bad idea, actually!)
Editing the sudoers file and adding the commands you want to run to it. Then you can run these commands like sudo run_x_cmd with no password, i.e., your QProcess can run these commands and you won't be asked for a password; see the sketch after this list.
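As an illustration, a sudoers entry of this form grants passwordless execution of a single command; the user name and command path here are placeholders, and the file should always be edited with visudo:
# allow "myuser" to run this one command as root without a password
myuser ALL=(root) NOPASSWD: /sbin/ifconfig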
Adding a password to a text file in order to source input for the command is a very bad idea, as it weakens security.
Version 1.8 of sudo provides a plugin architecture, which would allow you to link to it from your application and may provide a solution for you.
The SDK for the sudo plugin API can be found in the documentation.
I have a script that takes a lot of time to complete.
Instead of waiting for it to finish, I'd rather just log out and retrieve its output later on.
I've tried:
at -m -t 03030205 -f /path/to/./thescript.pl
nohup /path/to/./thescript.pl &
I have also verified that the processes actually exist with ps and at -l, depending on which scheduling syntax I used.
Both these processes die when I exit out of the shell. Is there a way to keep a script from terminating when I close the connection?
We have crons here and they are set up and are working properly, but I would like to use at or nohup for single-use scripts.
Is there something wrong with my syntax? Are there any other methods to producing the desired outcome?
EDIT:
I cannot use screen or disown; they aren't installed on my HP-UX setup and I am not in a position to install them either.
Use screen. It creates a terminal which keeps going when you log out. When you log back in you can switch back to it.
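A rough sketch of the workflow, with an arbitrary session name:
screen -S longjob          # start a named session
/path/to/thescript.pl      # run the script inside it
# detach with Ctrl-A d and log out; later, log back in and reattach:
screen -r longjob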
If you want to keep a process running after you log out:
disown -h <pid>
is a useful bash built-in. Unlike nohup, you can run disown on an already-running process.
First, stop your job with control-Z, get the pid from ps (or use echo $!), use bg to send it to the background, then use disown with the -h flag.
Don't forget to background your job or it will be killed when you log out.
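Putting those steps together, assuming the script ends up as job 1:
/path/to/thescript.pl      # start the job, then press Ctrl-Z to suspend it
bg                         # resume it in the background
disown -h %1               # keep job 1 from being sent SIGHUP at logout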
This is just a guess, but something I've seen with some versions of ssh and nohup: if you've logged in with ssh, then you may need to redirect stdout, stderr, and stdin to avoid having the session hang when you exit. (One of those may still be attached to the terminal.) I would try:
nohup /path/to/./thescript.pl > whatever.stdout 2> whatever.stderr < /dev/null &
(This is no longer the case with my current versions of ssh and nohup - the latter redirects them if it detects that any is attached to a terminal - but you may be using different versions.)
Syntax for nohup looks ok, but your account may not allow for processes to run after logout. Also, try redirecting the stdout/stderr to a log file or /dev/null.
Run your command in the background.
/path/to/./thescript.pl &
To get a list of your background jobs:
jobs
Now you can selectively disown any of the above jobs using its job ID.
disown <jobid>
All the disowned processes should keep running even after you have logged out.
I am using ssh from my application and must pass "-t -t" to ssh in order for it to work correctly. Otherwise, the stdin of my application is interfered with by the call to ssh. Forcing a pseudo terminal to ssh via the -t -t avoids this issue, but instead results in the following obscure error message coming back from ssh, although the application seems to work correctly otherwise:
tcgetattr: Inappropriate ioctl for device
I'd like to keep this message from happening in the first place, rather than just suppressing it, but am not sure why it appears or what I should do to prevent it. I only get the message when -t -t is passed to ssh.
Note a similar question was asked here:
http://www.perlmonks.org/?node_id=664789
The man page for ssh says:
-t Force pseudo-tty allocation. This can be used to execute arbitrary
screen-based programs on a remote machine, which can be very useful,
e.g., when implementing menu services. Multiple -t options force tty
allocation, even if ssh has no local tty.
One may work around the issue by passing -n to ssh instead of -t -t. From the ssh man page:
-n Redirects stdin from /dev/null (actually, prevents reading from
stdin). This must be used when ssh is run in the background. A
common trick is to use this to run X11 programs on a remote
machine. For example, ssh -n shadows.cs.hut.fi emacs & will
start an emacs on shadows.cs.hut.fi, and the X11 connection will
be automatically forwarded over an encrypted channel. The ssh
program will be put in the background. (This does not work if
ssh needs to ask for a password or passphrase; see also the -f
option.)
So this is another way around the issue of stdin being taken from the calling process and given to ssh; however, I'd still like to understand how to avoid the warning when using -t -t.
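Since -n is documented as redirecting stdin from /dev/null, either of these sketches should keep ssh away from the calling process's stdin; the host, user, and remote command are placeholders:
ssh -n -l username -p 22 myhost 'remote-command' &
# or, equivalently, redirect stdin yourself:
ssh -l username -p 22 myhost 'remote-command' < /dev/null &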