Python Fabric roledefs and SSH keys

As discussed in Using an SSH keyfile with Fabric, it is possible to set an ssh keyfile using env.key_filename. How does this setting interact with defining remote hosts in env.roledefs?
If I set key_filename, will Fabric try to use that key with all hosts? What if different hosts require different keys?
A workaround would be to set env.hosts and env.key_filename in a separate task for each set of hosts, but is there a way that makes use of the roledefs feature?

You can set env.key_filename to a list of filenames, each of which would then be tried for each connection. Anything more specific you would have to script yourself.
From this doc.
So to answer:
… but is there a way that makes use of the roledefs feature?
No.
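For illustration, a minimal fabfile sketch of the list approach (the role names, hosts, and key paths here are hypothetical):

from fabric.api import env, run, task

# Hypothetical roles and hosts.
env.roledefs = {
    'web': ['web1.example.com', 'web2.example.com'],
    'db':  ['db1.example.com'],
}

# Every key in this list is tried for every connection, regardless of role.
env.key_filename = ['~/.ssh/web_key', '~/.ssh/db_key']

@task
def uptime():
    run('uptime')

Running fab -R web uptime then connects to the web hosts, trying each listed key until one is accepted; there is no built-in per-role key mapping.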

Related

How to globally enable --no-verify-ssl in the AWS CLI, and how to disable it again?

I am getting an SSL validation failure while my Python code is trying to interact with an S3 bucket. The error further says
unable to get local issuer certificate. (_ssl.c:1131)
That issue will eventually be fixed the proper way.
But for now I need a quick fix. Many answers mention the risks of using --no-verify-ssl, but none of them explain how to actually use it.
Is it possible to enable it globally? And can I disable it again once the issue is fixed properly?
Do not use --no-verify-ssl. Doing so opens you up to man-in-the-middle attacks and ignores your obligations under the shared responsibility model.
If you don't have the CAs installed in your environment's trust store, you can manually pass the CA bundle using the --ca-bundle option or the ca_bundle config key.
Example Usage
Create the AWS CA Bundle
curl https://www.amazontrust.com/repository/{SFSRootCAG2,AmazonRootCA4,AmazonRootCA3,AmazonRootCA2,AmazonRootCA1}.pem >> ~/.aws/ca_bundle.pem
Configure the SDK (choice)
Add it to the config file at ~/.aws/config. If you have the CLI installed, the file should already exist, and the key can be set with a command: aws configure set ca_bundle '~/.aws/ca_bundle.pem'
Pass it as an environment variable: AWS_CA_BUNDLE='~/.aws/ca_bundle.pem'
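With the config-file option, ~/.aws/config then contains a one-line setting (assuming the default profile):

[default]
ca_bundle = ~/.aws/ca_bundle.pem

To turn it off again once the certificates are fixed properly, just delete that line (or unset the AWS_CA_BUNDLE environment variable); the CLI then falls back to its default trust store.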
https://docs.aws.amazon.com/cli/latest/reference/index.html
https://www.amazontrust.com/repository/

How can I automate script executions on AWS EC2 using the Go SDK?

I'm building an app that manages multiple EC2 instances using the Go SDK. I would like to run scripts on these instances in an automated way.
How can I achieve that? I don't think os/exec => ssh => raw script stored as a string in the code is best practice. Is there any clean way to achieve this?
Thanks
Is there any clean way to achieve this?
To bootstrap your instance, you would create a UserData script. The script runs only once, just after your instance is launched.
For running commands remotely after launch, you can use SSM Run Command to execute a command on a single instance or on many at once.
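As a rough sketch of that route with aws-sdk-go (v1 is assumed; the instance ID and command are placeholders):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	// Region and credentials come from the usual SDK sources (env vars, shared config).
	sess := session.Must(session.NewSession())
	client := ssm.New(sess)

	// AWS-RunShellScript is a built-in SSM document that runs shell commands.
	out, err := client.SendCommand(&ssm.SendCommandInput{
		DocumentName: aws.String("AWS-RunShellScript"),
		InstanceIds:  []*string{aws.String("i-0123456789abcdef0")},
		Parameters: map[string][]*string{
			"commands": {aws.String("uptime")},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(aws.StringValue(out.Command.CommandId))
}

The instances need the SSM agent running and an instance profile that permits SSM; you can then poll GetCommandInvocation with the returned command ID to collect the output.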
The way you suggest is actually valid and can work. I agree with you though, it wouldn't be my first choice either. I would either use the golang.org/x/crypto/ssh package maintained by the Go project (not part of the standard library, but the closest thing to it) or an external solution like github.com/appleboy/easyssh-proxy.
I would lean towards x/crypto/ssh, but if you have no preference there, the Scp function of the latter package might be of particular interest to you. You can find examples of it in the project's readme.
Secure copy protocol (SCP) is a means of securely transferring computer files between a local host and a remote host or between two remote hosts. It is based on the Secure Shell (SSH) protocol.
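For the plain-SSH route, a minimal sketch using golang.org/x/crypto/ssh (the host address, user, and key path are hypothetical, and the host-key check is deliberately disabled for brevity):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/key.pem")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	config := &ssh.ClientConfig{
		User: "ec2-user",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Accepts any host key; use ssh.FixedHostKey in real deployments.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "203.0.113.10:22", config)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("uptime")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}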
EDIT: After seeing Marcin's answer, I think my answer is more the plain SSH answer, AWS independent. For the idiomatic answer for AWS please definitely look at his suggested solution!

How can I get a list of vCenter servers in my environment using PowerShell or a REST API?

I'm trying to get a list of vCenter servers in my environment. Is there a PowerShell cmdlet, or a REST API call, that can return a list of vCenter servers?
The vCenter server generally serves as the authentication point and the source of the API services, so this isn't something that's easy to do.
There is one caveat though, and that is when/if Linked Mode is enabled. In those cases you could use PowerCLI (a set of PowerShell modules that are easy to download from the PowerShell Gallery) and use the following commands:
Connect-VIServer vcenter-name.fqdn -AllLinked   # connect to this vCenter and every vCenter linked to it
$global:DefaultVIServers   # lists all vCenter connections in the current session
To be very clear, the above will not provide all the vCenters in your environment, but only the ones that are in some form of linked mode.

How to bootstrap droplets using Terraform?

When creating droplets on DigitalOcean using Terraform, the created machines' root passwords are sent via email. If I read the documentation on the DigitalOcean provider right, you can also specify the IDs of SSH keys to use.
If I am bootstrapping a data center using Terraform, which option should I choose?
Somehow, it feels wrong to have a different password for every machine (using passwords at all feels wrong, somehow), but it also feels wrong if every machine is linked to my personal SSH key.
How do you handle this? Is there a way that can be considered good (best?) practice here? Should I create an SSH key pair just for this and commit it to Git along with the Terraform files? …?
As you mentioned, using passwords on instances is an absolute pain once you have an appreciable number of them. It's also less secure than SSH keys that are properly managed (kept secret). You are also going to have trouble linking the rest of your automation to credentials that are delivered out of band to your tooling, so if you need to actually configure these servers to do something, the password-by-email option is pretty much out.
I tend to use a different SSH key for each application and deployment stage (e.g. dev, testing/staging, production), but everything inside that combination gets the same public key for ease of management. Separating things that way means that if one key is compromised you don't need to replace the public key everywhere, which minimises the blast radius of the event. It also means you can rotate keys independently, which matters because some environments move faster than others.
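In Terraform terms, that pattern looks roughly like this sketch for the DigitalOcean provider (the key name, image, region, and size are placeholders):

resource "digitalocean_ssh_key" "prod" {
  name       = "prod-deploy"
  public_key = file(pathexpand("~/.ssh/prod_deploy.pub"))
}

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-22-04-x64"
  name   = "web-1"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  # Droplets created with ssh_keys set do not trigger the password email.
  ssh_keys = [digitalocean_ssh_key.prod.fingerprint]
}

Swapping in a different key per stage is then just a matter of which public key file each environment's configuration points at.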
As a final word of warning, do not put your private SSH key into the same git repo as the rest of your code, and definitely do not publish a private SSH key to a public repo. You will probably want to look into secrets management such as HashiCorp's Vault if you are in a large team, or at least distribute these shared private keys out of band if they need to be used by multiple people.

Access Amazon EC2 without private/public keys

How can I access my Amazon EC2 instance the same way I access a normal server running SSH?
I would like to type in my Mac terminal: ssh root@[amazon.ip.address]
then the password for root
Instead I have to use these stupid public/private keys that have to live somewhere on my computer. I use dozens of computers throughout my day. I don't want to have to carry around my key on a flash drive all day.
Does any one know how I can achieve the above?
Thanks
It's not recommended to use password authentication, as it's susceptible to man-in-the-middle attacks. If you don't want to keep track of your keys you can always use the ssh-add command on Linux, or something like Pageant (PuTTY's key agent) on Windows.
For example on Linux:
ssh-add <your-keyname>
To list the keys in your ssh-agent
ssh-add -l
The drawback is that there's a limit on the number of keys you can offer before most SSH servers with a default configuration start rejecting them (OpenSSH's default MaxAuthTries is 6). You can work around this by setting, in your /etc/ssh/sshd_config file:
MaxAuthTries <number of keys you want to try>
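An alternative the above doesn't mention, but which sidesteps the limit entirely, is standard OpenSSH client configuration: pin one key per host in your ~/.ssh/config so only that key is ever offered (hostname and path are hypothetical):

Host myserver
    HostName ec2-203-0-113-10.compute-1.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/myserver.pem
    IdentitiesOnly yes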
And if you really want to knock yourself out and use password authentication, you can simply enable it in your /etc/ssh/sshd_config file as well:
PasswordAuthentication yes
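Note that in either case sshd only picks up the change after a reload, e.g. sudo systemctl restart sshd (the service is called ssh on some distributions).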