I have two AWS Accounts:
Test Account
Prod Account
I am creating an AMI using Packer in the Test Account and want to copy the AMI to the Prod Account after that.
How can I use Packer to do that and also remove the actual AMI after the job is done?
I already checked the following questions, but they didn't resolve my query:
How do I bulk copy AMI AWS account number permissions from one AMI image to another?
how to copy AMI from one aws account to other aws account?
You can accomplish this by using the ami_users directive in Packer. It allows the specified accounts to access the created AMIs from the source account.
If you are looking to have a deep copy of the AMIs in each account (distinct IDs), then you will have to re-run packer build with credentials for the other account.
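For illustration, here is a minimal amazon-ebs builder sketch (the region, source AMI, AMI name, and the account ID 222222222222 are placeholders, not values from the question):

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0abcdef1234567890",
      "instance_type": "t3.micro",
      "ssh_username": "ec2-user",
      "ami_name": "my-app-{{timestamp}}",
      "ami_users": ["222222222222"]
    }
  ]
}

Note that ami_users only grants launch permission on the original AMI; the image itself still lives in (and is billed to) the account that built it.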
As answered above, use ami_users.
The way we use this in production: we keep a vars file for each environment in a "vars" folder. One of the values in the vars JSON file is "nonprod_account_id": "1234567890". Then, in packer.json, use ami_users as below.
"ami_users": ["{{user `nonprod_account_id`}}"]
I'm unclear on why you would want to remove the AMI from the account where it was built after copying it to another account, rather than just building it in the "destination" account; unless maybe there are stronger access restrictions in Prod, but in that case I would question copying in an AMI built where things are "loose".
To specifically do the copying you may want this plugin.
https://github.com/martinbaillie/packer-post-processor-ami-copy
The removal from the source account might need to be "manual", or it could be automated by a cleanup process that removes AMIs older than a certain age (a sketch of such a cleanup is at the end of this answer). As of May 2019 it is possible to create an AMI in one account and share access to both unencrypted AND encrypted AMIs (the ability to copy/utilize encrypted AMIs is the new bit compared to the other answers).
A couple of Amazon posts on the new capabilities:
https://aws.amazon.com/about-aws/whats-new/2019/05/share-encrypted-amis-across-accounts-to-launch-instances-in-a-single-step/
https://aws.amazon.com/blogs/security/how-to-share-encrypted-amis-across-accounts-to-launch-encrypted-ec2-instances/
This article outlines a process of using Packer to copy an AMI between accounts rather than just referencing a source in another account; you can probably extend it to perform the cleanup:
https://www.helecloud.com/single-post/2019/03/21/How-to-overcome-AWS-Copy-AMI-boundaries-by-using-Hashicorp%E2%80%99s-Packer
This one shows an updated version of the process above that uses the ability to grant access across accounts, avoiding the creation of multiple copies of the AMI, one for each account/environment where you want to utilize it:
https://www.helecloud.com/single-post/2019/11/06/Overcome-AWS-Copy-AMI-boundaries-%E2%80%93-share-encrypted-AMIs-with-Packer-%E2%80%93-follow-up
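For the source-account cleanup mentioned above, a minimal boto3 sketch (the name prefix, region, and seven-day retention period are assumptions, not anything from the question); it deregisters self-owned AMIs older than the cutoff and deletes their backing snapshots:

#!/usr/bin/python
# Sketch: deregister old self-owned AMIs and delete the snapshots that back them.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
cutoff = datetime.now(timezone.utc) - timedelta(days=7)  # retention period (assumption)

# Only consider AMIs owned by this account whose name matches our prefix (assumption).
images = ec2.describe_images(
    Owners=['self'],
    Filters=[{'Name': 'name', 'Values': ['my-app-*']}])['Images']

for image in images:
    created = datetime.strptime(
        image['CreationDate'], '%Y-%m-%dT%H:%M:%S.%fZ').replace(tzinfo=timezone.utc)
    if created < cutoff:
        print('Deregistering', image['ImageId'])
        ec2.deregister_image(ImageId=image['ImageId'])
        # Clean up the EBS snapshots that backed the AMI.
        for mapping in image.get('BlockDeviceMappings', []):
            snapshot_id = mapping.get('Ebs', {}).get('SnapshotId')
            if snapshot_id:
                ec2.delete_snapshot(SnapshotId=snapshot_id)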
Bear with me, what I am requesting may be impossible. I am an AWS noob.
So I am going to describe to you the situation I am in...
I am doing a freelance gig and was essentially handed the keys to AWS. That is, I was handed the root user login credentials for the AWS account that powers this website.
Now there are 3 EC2 instances. One of the instances is a Linux box that, from what I am being told, is running a Django Python backend.
My new "service", if you will, must exist within this instance.
How do I introduce new source code into this instance? Is there a way to pull down the existing source code that lives within it?
I am not being helped by any existing/previous developers, so I have just been handed the AWS credentials and have no idea where to start.
Is this even possible? That is, is it possible to pull the source code from an EC2 instance and/or modify the code? How do I do this?
EC2 instances are just virtual machines, so you can use SSH/SCP/SFTP to move files to and from them. You can use the AWS CLI tools to copy stuff from S3. Dealer's choice...
Now, to get into this instance: if you look in the web console you can find its IP(s), its security groups (firewall rules), and its key pair name. Hopefully they gave you the keys; you need these to SSH in.
You'll also want to check to make sure there's a security group applied that has SSH open. Hopefully only to your IP :)
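For example (the key file name, login user, and IP below are placeholders; the user depends on the AMI, e.g. ec2-user for Amazon Linux or ubuntu for Ubuntu):

# Connect to the instance
ssh -i my-key.pem ec2-user@203.0.113.10

# Copy the existing source code down to your machine
scp -i my-key.pem -r ec2-user@203.0.113.10:/path/to/app ./app-backup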
If you don't have the keys you'll have to create an AMI image of the instance so you can create a new one with a key pair you do have.
Amazon has a set of tools for you in the AWS CodeSuite.
The tool used for "deploying" the code is AWS CodeDeploy. With this service you install an agent onto your host; when triggered, it pulls down an artifact of the code base and installs it on matching hosts. You can even specify additional commands through the hooks system.
You probably also want to trigger this to happen, maybe even automatically? CodeDeploy can be orchestrated using the AWS CodePipeline tool.
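As a hedged illustration of the agent/hooks model, a minimal appspec.yml for CodeDeploy might look like this (the destination path and script names are made up for the example):

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/my-django-app
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/restart_service.sh
      timeout: 60
      runas: root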
I am exploring backing up our AWS services configuration to a backup disk or source control.
Only configs, e.g. IAM policies, users, roles, Lambdas, Route 53 configs, Cognito configs, VPN configs, route tables, security groups, etc.
We have a tactical account where we have created some resources on an ad-hoc basis, and now we have a new official account set up via CloudFormation.
We are also planning, in the near future, to migrate the tactical account's resources to the new account, either manually or using the backed-up configs.
I looked at the AWS CLI, but it is time consuming. Is there any script that crawls through AWS and backs up the resources?
Thank You.
The "correct" way is not to 'backup' resources. Rather, it is to initially create those resources in a reproducible manner.
For example, creating resources via an AWS CloudFormation template allows the same resources to be deployed in a different region or account. Only the data itself, such as the information stored in a database, would need a 'backup'. Everything else could simply be redeployed.
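As an illustration, a minimal template sketch like the one below (the resource names here are made up) can be deployed unchanged into any account or region:

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal reproducible-infrastructure sketch
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
  AppRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole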
There is a poorly-maintained service called CloudFormer that attempts to create CloudFormation templates from existing resources, but it only supports limited services and still requires careful editing of the resulting templates before they can be deployed in other locations (due to cross-references to existing resources).
There is also the relatively recent ability to Import Existing Resources into a CloudFormation Stack | AWS News Blog, but it requires that the template already includes the resource definition. The existing resources are simply matched to that definition rather than being recreated.
So, you now have the choice to 'correctly' deploy resources in your new account (involves work), or just manually recreate the ad-hoc resources that already exist (pushes the real work to the future). Hence the term Technical debt - Wikipedia.
Terraform allows provisioning AWS infrastructure with custom Ansible scripts.
Terraform's aws_ami_from_instance resource can convert an instance into an AMI, and aws_instance can do the opposite.
I am quite new to these tools and I might not understand their subtleties, but why should the common pattern of using Packer to generate the AMI instantiated by Terraform be used?
I am not going to repeat what "ydaetskcoR" has already mentioned; great points. Another use case that Packer really handles is sharing the AMI with multiple accounts. In our setup, we create the AMI in one account and share it with the other accounts that use it. Packer is specifically built to create AMIs and so has many more features than Terraform's simple ami_from_instance. My 2 cents.
Because Packer creates an AMI with configuration as code, you will have a reproducible recipe for how your AMIs are created.
If you used Terraform's ami_from_instance you would instead create clones of a non-reproducible source, thus creating snowflake servers (all slightly different).
Also, an important feature of a public cloud is autoscaling, and for that you want to start off with AMIs that include as much as possible so the startup time is small. This makes a pre-baked AMI better than a generic one with an initialisation script that installs and configures all the adaptations for your production environment.
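To make the "reproducible recipe" point concrete, a Packer template pairs a builder with provisioners that describe, in code, everything baked into the image (the script name below is a placeholder):

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "source_ami_filter": {
        "filters": { "name": "amzn2-ami-hvm-*-x86_64-gp2" },
        "owners": ["amazon"],
        "most_recent": true
      },
      "instance_type": "t3.micro",
      "ssh_username": "ec2-user",
      "ami_name": "baked-app-{{timestamp}}"
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "install_app.sh" }
  ]
}

Every change to the image goes through this file and the script it references, so the AMI can be rebuilt identically at any time, which is exactly what ami_from_instance cannot guarantee.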
I've got project A with Storage buckets A_B1 and A_B2. Now Dataproc jobs running in project B need to have read access to buckets A_B1 and A_B2. Is that possible somehow?
Motivation: project A is the production environment, with production data stored in Storage. Project B is an "experimental" environment running experimental Spark jobs on production data. The goal is obviously to separate billing for the production and experimental environments. Something similar can be done with dev.
Indeed, the Dataproc cluster will be acting on behalf of a service account in project "B"; generally it'll be the default GCE service account, but this is also customizable to use any other service account you create inside of project B.
You can double check the service account name by getting the details of one of the VMs in your Dataproc cluster, for example by running:
gcloud compute instances describe my-dataproc-cluster-m
It might look something like <project-number>-compute@developer.gserviceaccount.com. Now, in your case, if you already have data in A_B1 and A_B2 you will have to recursively edit the permissions on all the contents of those buckets to add access for your service account, using something like gsutil -m acl ch -r -u <project-number>-compute@developer.gserviceaccount.com:R gs://foo-bucket; while you're at it, you might also want to change the buckets' "default object ACL" so that new objects also get that permission. This could get tedious to do for lots of projects, so if planning ahead you could either:
Grant blanket GCS access into project A for project B's service account by adding the service account as a project member with a "Storage Reader" role, or
Update the buckets that might need to be shared in project A with read and/or write/owner access for a new Google Group you create to manage groupings of permissions. Then you can atomically add service accounts as members of your Google Group without having to re-run a recursive update of all the objects in the bucket. (Sketches of both options follow below.)
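Hedged sketches of both options (the project ID, group address, and bucket names are placeholders; roles/storage.objectViewer is the closest current role to the legacy "Storage Reader" mentioned above):

# Option 1: project-wide read access for project B's service account
gcloud projects add-iam-policy-binding project-a \
    --member=serviceAccount:<project-number>-compute@developer.gserviceaccount.com \
    --role=roles/storage.objectViewer

# Option 2: grant a Google Group read access on the buckets and their default object ACLs,
# then manage membership of the group instead of re-editing objects
gsutil -m acl ch -r -g data-readers@example.com:R gs://a_b1 gs://a_b2
gsutil defacl ch -g data-readers@example.com:R gs://a_b1 gs://a_b2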
I'm trying to create a CloudFormation template that would create an EC2 instance, mount a 2GB volume and take periodic snapshots, while also deleting the ones that are, say, a week or more old.
While I could obtain and integrate the access and secret keys, it seems that a signing certificate is required to delete snapshots. I could not find a way to create a new certificate with CloudFormation, so it seems like I should create a new user and certificate manually and pass those in as template parameters? In that case, is it correct that the user would be able to delete all the snapshots, including the ones that are not from that instance?
Is there a way to restrict snapshot deleting to only the ones with matching description? Or what's the proper way to handle deleting old snapshots?
My recommendation is to create an IAM role (not IAM user) with CloudFormation and assign this role to the instance (again using CloudFormation). The role should be allowed to delete snapshots as appropriate.
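A hedged CloudFormation sketch of such a role and its instance profile (the resource names and the exact set of snapshot permissions shown are illustrative, and you may want to scope them down further):

Resources:
  SnapshotCleanupRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: snapshot-cleanup
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ec2:DescribeSnapshots
                  - ec2:CreateSnapshot
                  - ec2:DeleteSnapshot
                Resource: '*'
  SnapshotCleanupProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref SnapshotCleanupRole

The EC2 instance in the same template then references SnapshotCleanupProfile via its IamInstanceProfile property.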
One of the easiest ways to delete the snapshot using the IAM role on the instance is to use the boto Python AWS library. Boto automatically finds and uses the correct credentials if you run it on the instance with the assigned IAM role.
Here is a simple boto script I just used to delete snapshot snap-51930522 in us-east-1:
#!/usr/bin/python
import boto.ec2
boto.ec2.connect_to_region('us-east-1').delete_snapshot('snap-51930522')
Alternatively, you might have an external server run the snapshot cleanup instead of running it on the instances themselves. In addition to simplifying credential management and cron job distribution, it also lets you clean up after stopped or terminated instances.