Required AWS Policies to run BOTO Python library for ec2.py - amazon-web-services

The title says it, though it's a bit garbled. I am looking for the IAM policies/permissions that I will need to grant to a user in
order to create a usable profile to run boto.
The root of this is that we use the ec2.py inventory script for
Ansible, which needs to list instance IPs so that Ansible can log in.
I currently have a god-level user (all access) that works fine, but I
will need to restrict this further so we can create runnable jobs
without wide-open permissions. I imagine we will need something with
describe-*, but that's about as far as I've been able to figure out.

It all depends on which AWS services you will be using and which operations you will be performing. Do you need read-only access (operations that don't make any change) or power-user access?
You mentioned you need to list IPs. To use Ansible's ec2.py script, read-only access is enough.
As a starting point, you can use the AmazonEC2ReadOnlyAccess managed policy that ships with IAM, which will solve your issue. If you want it more granular, copy the AmazonEC2ReadOnlyAccess policy, remove the actions that are not needed, and save it as a custom policy.
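As a sketch, a trimmed-down custom policy for ec2.py usually only needs the describe actions. The exact action list below is an assumption, not the official minimum: the rds/elasticache statements only matter if those lookups are enabled in your ec2.ini. Building the document as a Python dict makes it easy to prune:

```python
import json

# Sketch of a minimal read-only policy for the ec2.py inventory script.
# ec2:Describe* covers the instance/tag lookups the script performs;
# rds/elasticache are only needed if those features are on in ec2.ini.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "rds:Describe*",
                "elasticache:Describe*",
            ],
            "Resource": "*",
        }
    ],
}

# Paste the output into a new customer-managed IAM policy.
print(json.dumps(policy, indent=2))
```

Attach the resulting policy to the user (or role) whose credentials ec2.py runs under, instead of the current all-access user.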

List of services used in AWS

How can I get the list of all services I am using?
I have gone to Service Quotas at
https://ap-east-1.console.aws.amazon.com/servicequotas/home?region=ap-east-1
and on the dashboard I could see a list of items, e.g. EC2, VPC, RDS, DynamoDB, etc., but I did not understand what was there.
Since I did not request some of the services I am seeing, I also went into Budgets at
https://console.aws.amazon.com/billing/home?region=ap-east-1#/budgets
and into credits, thinking maybe I could find the services I have been given credits to use.
Also, how can I stop any service that I do not want?
The Billing console is not giving me tangible information either. I do not want the bill to pile up before I start taking the needed steps.
Is there a place where I can see all the services I am using, or some command I can run that would produce such a result?
You can use the AWS Config Resource Inventory feature.
AWS Config will discover resources that exist in your account, record their current configuration, and capture any changes to these configurations. Config will also retain configuration details for resources that have been deleted. A comprehensive snapshot of all resources and their configuration attributes provides a complete inventory of resources in your account.
https://aws.amazon.com/config/
There is not an easy answer on this one, as there is not an AWS service that you can use to do this out of the box (yet).
There are some AWS services that you can use to get you close, like:
AWS Config (as suggested by @kepils)
Another option is to use Resource Groups and Tagging to list all resources within a region within account (as described in this answer).
In both cases, however, Config and Resource Groups share the same limitation: neither can see every AWS service on its own.
Another option would be to use a third-party tool such as aws-inventory or cloudmapper, if your end goal is to find out what you currently have running in your account.
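A lightweight do-it-yourself variant of the Resource Groups approach is to pull resource ARNs from the Resource Groups Tagging API and group them by service. A sketch, with the same caveat that only taggable resources show up; the boto3 call is commented out because it needs credentials, while the parsing works on any list of ARNs:

```python
# Sketch: summarise which AWS services own a set of resource ARNs.
# In practice the ARNs would come from the Resource Groups Tagging API:
#   import boto3
#   paginator = boto3.client("resourcegroupstaggingapi").get_paginator("get_resources")
#   arns = [r["ResourceARN"] for page in paginator.paginate()
#           for r in page["ResourceTagMappingList"]]
from collections import Counter

def services_in_use(arns):
    # An ARN looks like arn:partition:service:region:account:resource,
    # so the third colon-separated field is the service name.
    return Counter(arn.split(":")[2] for arn in arns)

# Made-up sample ARNs for illustration:
sample = [
    "arn:aws:ec2:ap-east-1:123456789012:instance/i-0abcd1234",
    "arn:aws:ec2:ap-east-1:123456789012:vpc/vpc-0abcd1234",
    "arn:aws:rds:ap-east-1:123456789012:db:mydb",
]
print(services_in_use(sample))  # Counter({'ec2': 2, 'rds': 1})
```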
On the second part of your question, how to stop any services you don't want, you can do the following:
Don't grant excessive permissions to your users. If someone needs to work on EC2 instances, their IAM role and policy should allow only that, instead of, for example, full access.
You can limit which services may be used within an account by creating Service Control Policies that allow only the specific services you plan to use.
Set up AWS Budgets notifications and potentially AWS Budgets actions.

How can I copy an AMI to another account using Packer?

I have two AWS Accounts:
Test Account
Prod Account
I am creating an AMI using Packer in the Test Account and want to copy the AMI to the Prod Account after that.
How can I use Packer to do that and also remove the actual AMI after the job is done?
I already checked following questions but they didn't resolve my query:
How do I bulk copy AMI AWS account number permissions from one AMI image to another?
how to copy AMI from one aws account to other aws account?
You can accomplish this behavior by using the ami_users directive in packer. This will allow the specified accounts to access the created AMIs from the source account.
If you are looking to have a deep copy of the AMIs in each account (distinct IDs) then you will have to re-run packer build with credentials into the other account.
As answered above, use ami_users.
The way we use this in production is: we have a vars file for each environment in a "vars" folder. One of the values in the vars JSON file is "nonprod_account_id": "1234567890". Then in packer.json, use ami_users as below.
"ami_users": ["{{user `nonprod_account_id`}}"]
I'm unclear on why you would want to remove the AMI from the account where it was built after copying it to another account, rather than just building it in the "destination" account. Perhaps there are stronger access restrictions in Prod, but in that case I would question copying in an AMI built where things are "loose".
To do the copying itself, you may want this plugin:
https://github.com/martinbaillie/packer-post-processor-ami-copy
The removal from the source account might need to be "manual", or it could be automated by a cleanup process that removes AMIs older than a certain age. As of May 2019 it is possible to create an AMI in one account and share access to both unencrypted AND encrypted AMIs (the ability to copy/use encrypted AMIs is what's new compared to the other answers).
A couple of Amazon posts on the new capabilities:
https://aws.amazon.com/about-aws/whats-new/2019/05/share-encrypted-amis-across-accounts-to-launch-instances-in-a-single-step/
https://aws.amazon.com/blogs/security/how-to-share-encrypted-amis-across-accounts-to-launch-encrypted-ec2-instances/
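The cleanup mentioned above could be a small script that selects AMIs older than a cutoff and deregisters them. A sketch: the boto3 calls are shown in comments (they need credentials), while the age filter is a plain function exercised on made-up data:

```python
# Sketch: pick AMIs older than a cutoff for cleanup.
# With credentials you would fetch your images via:
#   images = boto3.client("ec2").describe_images(Owners=["self"])["Images"]
# and then remove each stale one with ec2.deregister_image(ImageId=...).
from datetime import datetime, timedelta, timezone

def stale_amis(images, max_age_days=30, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        img["ImageId"]
        for img in images
        # CreationDate is ISO 8601 with a trailing Z, e.g. 2019-01-01T00:00:00.000Z
        if datetime.fromisoformat(img["CreationDate"].replace("Z", "+00:00")) < cutoff
    ]

# Illustrative sample data:
sample = [
    {"ImageId": "ami-old", "CreationDate": "2019-01-01T00:00:00.000Z"},
    {"ImageId": "ami-new", "CreationDate": "2019-05-20T00:00:00.000Z"},
]
print(stale_amis(sample, max_age_days=30,
                 now=datetime(2019, 6, 1, tzinfo=timezone.utc)))  # ['ami-old']
```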
This article outlines a process of using Packer to copy an AMI between accounts, rather than just referencing a source in another account; you could probably extend it to perform the cleanup.
https://www.helecloud.com/single-post/2019/03/21/How-to-overcome-AWS-Copy-AMI-boundaries-by-using-Hashicorp%E2%80%99s-Packer
This one shows an updated version of the process above that uses cross-account access grants to avoid creating multiple copies of the AMI, one per account/environment where you want to use it.
https://www.helecloud.com/single-post/2019/11/06/Overcome-AWS-Copy-AMI-boundaries-%E2%80%93-share-encrypted-AMIs-with-Packer-%E2%80%93-follow-up

Duplicity/Duply Backup to S3 without API Keys?

Goal: Automated full and incremental backups of an AWS EFS filesystem to an S3 bucket.
I have been looking at Duplicity/Duply to accomplish this, and it looks like it could work. I do have one concern: you would have to store API keys in the clear on an AMI for this to work. Is there any way to accomplish this using a role?
I do backups exactly as you describe, and it can be done, since duplicity has support for instance profiles. Make sure to give appropriate access to your role and attach it to your instance.
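Concretely, with an IAM role attached to the instance, a duply profile needs no keys at all: duplicity's boto3 S3 backend falls back to the instance profile credentials. A sketch of the relevant lines in ~/.duply/&lt;profile&gt;/conf (bucket name and paths are placeholders):

```shell
# No AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY exported anywhere;
# the boto3+s3 backend picks up the instance profile automatically.
TARGET='boto3+s3://my-backup-bucket/efs-backups'
SOURCE='/mnt/efs'
# Monthly fulls, incrementals in between (stock duply conf options):
MAX_FULLBKP_AGE=1M
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE "
```

The role's policy needs s3:PutObject, s3:GetObject, s3:DeleteObject, and s3:ListBucket on the backup bucket.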

Is it possible to use s3 buckets to create and grant admin privileges on different directories in my ec2 instance?

I have an ec2 instance that I use as sort of a staging environment for small websites and custom Wordpress websites.
What I'm trying to find out is: can I create a bucket for /var/www/html/site1 and assign FTP access to Developer X to work on this particular site within this particular bucket?
No. Directories on your EC2 instance have no relationship with S3.*
If you want to set up permissions for files stored on your EC2 instance, you'll have to do it by making software configuration changes on that instance, just as if it were any other Linux-based server.
*: Assuming you haven't set up something weird like s3fs, which I assume isn't the case here.

AWS EC2: SSH access for new user to existing VMs

A new developer joined our team and I need to grant him access to all VMs we have in AWS EC2. After reading a bunch of AWS docs it seems to me that we have two options:
Share the private key used when VMs were spun up with the developer
Have developer generate a new key pair and add his public key to authorized_keys on each VM.
Neither option is ideal, because #1 violates security practices and #2 requires me to make changes to a bunch of VMs.
What's the recommended way to do this?
The question is rather broad, so my answer will be broad.
Yeah, sharing private keys is a bad thing. So I'll skip that and focus on the other portion.
It sounds like you want to centrally manage accounts, rather than manually adding/removing/modifying them on each individual server.
You can set up something like NIS to manage user accounts. This would require changes to every single VM.
If you use something like puppet, chef, or salt you can create recipes to control user access (e.g. pushing out public keys or even creating accounts and configuring sudo).
You can use something like pssh (parallel ssh) to execute commands on multiple hosts at the same time. It could simply add a public key to an existing authorized_keys file or even add a user, its key, and necessary sudo access. (Note: if you do this be very careful. A poorly written command could cut off access for everyone and cause unnecessary down time).
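For the pssh route, the command you push to every host can be made idempotent so re-running it is safe. A sketch: the key is a made-up placeholder, and AUTH_FILE stands in for ~/.ssh/authorized_keys on each host:

```shell
# Idempotently append a public key to an authorized_keys file.
# Via pssh you would run the grep/echo line on all hosts at once.
KEY='ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAtestkey newdev@example.com'
AUTH_FILE="$(mktemp)"   # stand-in for ~/.ssh/authorized_keys in this sketch

grep -qxF "$KEY" "$AUTH_FILE" || echo "$KEY" >> "$AUTH_FILE"
# Running it again is a no-op, so it is safe to repeat across the fleet:
grep -qxF "$KEY" "$AUTH_FILE" || echo "$KEY" >> "$AUTH_FILE"

echo "entries: $(grep -c . "$AUTH_FILE")"   # prints "entries: 1"
```

In production you would also check ownership and permissions (authorized_keys must not be group- or world-writable, or sshd will ignore it).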
An aside: Having multiple users share a single account is a bad idea, generally a security and QA nightmare. Instead of allowing multiple access to the same account each user should have their own account with the minimal privileged access they need.
Do as you will.
Have you checked out the Run Command feature? You could use it to execute a simple script that adds or removes users.