How can I access my Amazon EC2 instance the same way I access a normal server running SSH?
I would like to type in my Mac terminal: ssh root@[amazon.ip.address]
then the password for root
Instead I have to use these stupid public/private keys that have to live somewhere on my computer. I use dozens of computers throughout my day. I don't want to have to carry around my key on a flash drive all day.
Does anyone know how I can achieve the above?
Thanks
It's not recommended to use password authentication, as it's susceptible to man-in-the-middle attacks. If you don't want to keep track of your keys, you can always use the ssh-add command (ssh-agent) on Linux or macOS, or something like Pageant from the PuTTY suite on Windows.
For example on Linux:
ssh-add <your-keyname>
To list the keys in your ssh-agent:
ssh-add -l
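For example, you can add an EC2 key pair once per login session and then connect without passing -i every time (the key file name and login user below are assumptions):
ssh-add ~/.ssh/my-ec2-key.pem
ssh ubuntu@ec2-host.compute-1.amazonaws.com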
The drawback is that there's a limit to the number of keys the agent will offer before most SSH servers with a default configuration start rejecting the connection (OpenSSH's default MaxAuthTries is 6). You can work around this by setting the following in your /etc/ssh/sshd_config file on the server:
MaxAuthTries <number of keys you want to try>
And if you want to knock yourself out and use password authentication, you can simply enable it in your /etc/ssh/sshd_config file as well:
PasswordAuthentication yes
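As a rough sketch of what that looks like on the instance itself (the login user and service name depend on your AMI, so treat them as assumptions):
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo passwd ubuntu                # give your login user a password
sudo systemctl restart sshd       # on some distributions the service is called ssh
Some cloud images also override PasswordAuthentication in a drop-in file under /etc/ssh/sshd_config.d/, so check there if the change doesn't seem to take effect.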
I accidentally enabled UFW on my Google Cloud Compute Debian instance and unfortunately port 22 is blocked now. I've tried every way to get into the VM but I can't...
I'm trying to access it through the serial port, but it asks for a user and password that were never set.
Does anyone have any idea what I can do?
If I could 'edit' the files on disk, it would be possible to change the firewall rules and disable it. I already thought about mounting the VM disk on another instance, but Google doesn't allow you to "hot detach" it.
I also tried to create another VM from a snapshot of the VM disk, but of course the new instance came with the same problem.
There are lots of important files inside and I can't get in...
This is the classic example of locking yourself out of the house with the key still inside.
There are several ways to get back inside a virtual machine on Google Cloud Platform when SSH is not working; from my point of view the easiest is to make use of a startup script.
Startup scripts run as root when your machine boots, so you can basically change the configuration without logging into the virtual machine.
Therefore you can:
simply run a command to disable UFW and then access the machine again
if that is not enough and you really need to get in to fix the configuration, you can set a password for the root user with the startup script and then log in through the serial console, i.e. without SSH (basically it is as if your keyboard were directly connected to the hardware); a sketch of such a script follows below. Note that as soon as you get into the instance you should remove, or at least change, that password, since it was visible to anyone with access to the project. A safer approach is to write the password to a private file in a bucket and download it onto the instance with the startup script.
Note that you can redirect the output of a command to a file and then upload that file to a bucket if you need to debug the script, read the contents of a file, understand what is going on, etc.
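As a concrete sketch of that second option, a startup script along these lines disables the firewall and sets a temporary root password for the serial console (the password is a placeholder; change it again as soon as you are back in):
#! /bin/bash
/usr/sbin/ufw disable
echo 'root:CHANGE-ME-TEMPORARY' | chpasswd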
The easiest way is to create a startup-script that disables ufw and gets executed whenever the instance boots:
Go into your Google Cloud Console, open your VM instance and click the Edit button.
Scroll down to "Custom metadata" and add "startup-script" as the key and the following script as the value:
#! /bin/bash
/usr/sbin/ufw disable
Click Save and reboot your instance.
Afterwards, delete that startup-script and click Save again, so that it won't get executed on future boots.
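If you prefer the command line, the same metadata can be set with gcloud, assuming the two-line script above is saved locally as disable-ufw.sh (the instance name and zone are placeholders):
gcloud compute instances add-metadata INSTANCE_NAME --zone=ZONE --metadata-from-file startup-script=disable-ufw.sh
gcloud compute instances reset INSTANCE_NAME --zone=ZONE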
You can try Google's serial console. From there you can enable SSH:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
ufw allow ssh
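Assuming the gcloud CLI is set up, connecting to the serial console looks roughly like this (the instance name and zone are placeholders); once logged in you can run the ufw command above:
gcloud compute connect-to-serial-port INSTANCE_NAME --zone=ZONE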
When creating droplets on Digital Ocean using Terraform, the created machines' passwords are sent via email. If I read the documentation of the Digital Ocean provider correctly, you can also specify the SSH IDs of the keys to use.
If I am bootstrapping a data center using Terraform, which option should I choose?
Somehow, it feels wrong to have a different password for every machine (somehow using passwords per se feels wrong), but it also feels wrong if every machine is linked to the SSH key of my user.
How do you do that? Is there a way that can be considered good (best?) practice here? Should I create an SSH key pair only for this and commit it with the Terraform files to Git as well? …?
As you mentioned, using passwords on instances is an absolute pain once you have an appreciable number of them. It's also less secure than SSH keys that are properly managed (kept secret). Obviously you are going to have some trouble linking the rest of your automation to credentials that are delivered out of band to your automation tooling, so if you need to actually configure these servers to do anything, the password-by-email option is pretty much out.
I tend to use a different SSH key for each application and deployment stage (e.g. dev, testing/staging, production), but everything inside that combination gets the same public key for ease of management. Separating it that way means that if one key is compromised you don't need to replace the public key everywhere, which minimises the blast radius of the event. It also means you can rotate keys independently, especially as some environments may move faster than others.
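As a rough illustration of that layout (key names and paths are purely illustrative), you would generate one key pair per environment and keep the private halves out of the repository:
ssh-keygen -t ed25519 -f ~/.ssh/myapp_dev -C 'myapp dev'
ssh-keygen -t ed25519 -f ~/.ssh/myapp_staging -C 'myapp staging'
ssh-keygen -t ed25519 -f ~/.ssh/myapp_prod -C 'myapp production'
Only the .pub files are referenced from Terraform; the private keys live in your secrets store or on the machines that need them.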
As a final word of warning, do not put your private SSH key into the same Git repo as the rest of your code, and definitely do not publish the private SSH key to a public repo. You will probably want to look into secrets management such as HashiCorp's Vault if you are in a large team, or at least distribute these shared private keys out of band if they need to be used by multiple people.
Please help me change the key pair for a running EC2 instance without stopping the instance.
Regards,
Ashwini.
I'm assuming a Linux EC2 instance, and the example is for an Ubuntu AMI. In any case the instructions are very similar.
ssh ubuntu@ec2-host.compute-1.amazonaws.com
edit ~/.ssh/authorized_keys
# remove the entry in the above file.
# add your new public key
If you haven't added keys or users to the machine, that's all you need to do. There should only be one entry in the authorized_keys file when you open it, and there should only be one when you've finished. I recommend logging in from another session before closing this one to ensure you haven't broken anything; if you have, you still have a session open to fix it.
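If you prefer not to edit the file by hand, something like the following appends the new public key over your existing connection (the key file names are assumptions):
cat ~/.ssh/new_key.pub | ssh -i ~/.ssh/old_key.pem ubuntu@ec2-host.compute-1.amazonaws.com 'cat >> ~/.ssh/authorized_keys'
Once you've confirmed from a second session that the new key works, remove the old entry.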
Verify that your new key has the same format. It should be approximately the same length. If not, you'll need to convert it. Here's a sample of key conversion, though it isn't precisely what you might need.
As discussed in Using an SSH keyfile with Fabric, it is possible to set an ssh keyfile using env.key_filename. How does this setting interact with defining remote hosts in env.roledefs?
If I set key_filename, will Fabric try to use that key with all hosts? What if different hosts require different keys?
A workaround would be to set env.hosts and env.key_filename in a separate task for each set of hosts, but is there a way that makes use of the roledefs feature?
You can set env.key_filename to a list of filenames, each of which would then be tried for each connection. Anything more specific you would have to script yourself.
From this doc.
So to answer:
.. but is there a way that makes use of the roledefs feature?
No.
We have a Buffalo NAS drive as a backup drive.
When we map this drive as B:\, our backup application understands the mapping and runs fine as a regular application.
But when run as a service, it does not recognize the mapping and crashes.
I tried giving \\\192.168.x.x\Backups\ as the backup path; the service runs, but then a lot of submodules fail because they see the \\\ as an escape character.
What is the workaround so that the Windows service can see the mapped drive?
I am trying to run zip.exe via CreateProcess():
""C:\Users\jvenkatraj\Documents\SQLite\Debug\zip.exe" -9 -q -g -u "\\\192.168.123.60\Backup\store\location1\50\f2\25\43\d8\88\b9\68\49\8d\2b\d0\08\9e\7e\df\z.zip" "\\\192.168.123.60\Backup\store\temp\SPD405.tmp\file_contents""
The backslashes are messing with the quotes. It is a WCHAR type, and I can't change it to any other type, or else I would have to redefine it elsewhere as well. How many backslashes should I use?
You can map a network drive inside the service itself using the WNetAddConnection2 API function.
Create a symbolic link somewhere to the NAS share:
mklink /D c:\nas-backups \\192.168.x.x\Backups
and point your backup application to c:\nas-backups\etc.
Try running your service under a user that "sees" the network, such as the "Network Service" user or even as the "human" user that mapped the network drive.
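If you go that route, one way to switch the account from an elevated prompt is with sc (the service name is a placeholder):
sc config MyBackupService obj= "NT AUTHORITY\NetworkService"
Note that Network Service authenticates on the network as the computer account, so the NAS share has to grant that account access; otherwise use a dedicated user account that can reach the share.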
The easiest way to do this would probably be to access the network path directly, e.g. in C#:
string path = @"\\192.168.x.x\Backups\";
Another thing you have to make sure of is that the service has access to this path. If your service is logged on as a user that does NOT have access, you have to change the service's logon credentials to a user/domain account that does have access to this path.