Packer not able to find private AMI - amazon-web-services

I created a private AMI on Amazon and installed a few things on it manually. I am new to Packer and now want to use that image as the base and create a new AMI with Packer. However, I keep getting an error that my base image does not exist. Here is my Packer file:
data "amazon-ami" "cocktails" {
filters = {
virtualization-type = "hvm"
name = "test-ami-24112022"
root-device-type = "ebs"
}
owners = ["my-account-id"]
most_recent = true
}
source "amazon-ebs" "cocktails" {
instance_type = "t2.micro"
region = "eu-west-2"
ssh_username = "ec2-user"
ami_name = "test-${uuidv4()}"
source_ami = data.amazon-ami.cocktails.id
}
build {
  sources = ["source.amazon-ebs.cocktails"]

  provisioner "file" {
    source      = "test.txt"
    destination = "/home/ec2-user/test.txt"
  }
}
This is the error I am getting:
Datasource.Execute failed: No AMI was found matching filters: {
Filters: [{
Name: "root-device-type",
Values: ["ebs"]
},{
Name: "virtualization-type",
Values: ["hvm"]
},{
Name: "name",
Values: ["test-ami-24112022"]
}],
Owners: ["my-account-id"]
}
on main.pkr.hcl line 1:
(source code not available)

The issue was in my AWS config file. It had some preloaded data from earlier use that pointed to the wrong region; when I deleted the config and recreated it with the correct region, it worked.
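One way to make the template independent of the CLI config is to pin the region in the data source as well; a minimal sketch, assuming the amazon-ami data source accepts the same region/profile access options as the amazon-ebs source:

data "amazon-ami" "cocktails" {
  # pin the region explicitly instead of relying on whatever
  # ~/.aws/config happens to contain
  region = "eu-west-2"
  filters = {
    virtualization-type = "hvm"
    name                = "test-ami-24112022"
    root-device-type    = "ebs"
  }
  owners      = ["my-account-id"]
  most_recent = true
}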

Related

How to create a temporary instance for a custom AMI creation in AWS with terraform?

I'm trying to create a custom AMI for my AWS deployment with Terraform. It's working quite well, and it's also possible to run a bash script. The problem is that it's not possible to create the instance temporarily and then terminate the EC2 instance and all its dependent resources with Terraform.
First I build an "aws_instance", then I provide a bash script in the /tmp folder and have it executed via an SSH connection in the Terraform script. It looks like the following:
First, the aws_instance is created based on a standard Amazon Machine Image (AMI). It is later used to create an image from it.
resource "aws_instance" "custom_ami_image" {
tags = { Name = "custom_ami_image" }
ami = var.ami_id //base custom ami id
subnet_id = var.subnet_id
vpc_security_group_ids = [var.security_group_id]
iam_instance_profile = "ec2-instance-profile"
instance_type = "t2.micro"
ebs_block_device {
//...further configurations
}
Now a bash script is provided. The source is the location of the bash script on the local Linux box you are executing Terraform from; the destination is on the new AWS instance. In that file I install further things like Python 3, Oracle drivers, and so on...
provisioner "file" {
source = "../bash_file"
destination = "/tmp/bash_file"
}
Then I change the permissions on the bash script and execute it as the SSH user:
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bash_file",
"sudo /tmp/bash_file",
]
}
Now you can log in as the SSH user with the previously created key.
  connection {
    type        = "ssh"
    user        = "ssh-user"
    password    = ""
    private_key = file("${var.key_name}.pem")
    host        = self.private_ip
  }
}
With aws_ami_from_instance, the AMI can be created from the just-built EC2 instance. It is then available for further deployments, and it is also possible to share it into further AWS accounts.
resource "aws_ami_from_instance" "custom_ami_image {
name = "acustom_ami_image"
source_instance_id = aws_instance.custom_ami_image.id
}
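Regarding sharing into further accounts: launch access to the resulting AMI can be granted with aws_ami_launch_permission; a minimal sketch, with a placeholder account id:

resource "aws_ami_launch_permission" "share" {
  image_id   = aws_ami_from_instance.custom_ami_image.id
  account_id = "123456789012" // placeholder: the target AWS account
}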
It's working fine, but what bothers me is the resulting EC2 instance! It keeps running, and it's not possible to terminate it with Terraform. Does anyone have an idea how I can handle this? Sure, the running costs are manageable, but I don't like creating data garbage...
I think the best way to create AMI images is Packer, which, like Terraform, is from HashiCorp.
What is Packer?
Packer is HashiCorp's open-source tool for creating machine images from source configuration. You can configure Packer images with an operating system and software for your specific use-case.
Packer creates a temporary instance with a temporary key pair, security group, and IAM role. Custom inline commands are possible in the "shell" provisioner. Afterwards you can use this AMI in your Terraform code.
A sample script could look like this:
packer {
  required_plugins {
    amazon = {
      version = ">= 0.0.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}
source "amazon-ebs" "linux" {
# AMI Settings
ami_name = "ami-oracle-python3"
instance_type = "t2.micro"
source_ami = "ami-xxxxxxxx"
ssh_username = "ec2-user"
associate_public_ip_address = false
ami_virtualization_type = "hvm"
subnet_id = "subnet-xxxxxx"
launch_block_device_mappings {
device_name = "/dev/xvda"
volume_size = 8
volume_type = "gp2"
delete_on_termination = true
encrypted = false
}
# Profile Settings
profile = "xxxxxx"
region = "eu-central-1"
}
build {
  sources = [
    "source.amazon-ebs.linux"
  ]

  provisioner "shell" {
    inline = [
      "export no_proxy=localhost"
    ]
  }
}
You can find the Packer documentation here.
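As mentioned above, the resulting AMI can then be consumed from Terraform with a data source lookup; a minimal sketch, assuming the ami_name from the template above:

data "aws_ami" "oracle_python3" {
  most_recent = true
  owners      = ["self"] # the account that ran Packer

  filter {
    name   = "name"
    values = ["ami-oracle-python3"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.oracle_python3.id
  instance_type = "t2.micro"
}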

Terraform getting UnsupportedOperation: Microsoft SQL when trying to start an amazon-linux instance

I'm trying to start an instance on AWS using Terraform with the amazon-linux-2 image, but I'm getting an error about Microsoft SQL Server Enterprise Edition.
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
}
resource "aws_instance" "dev" {
ami = "${data.aws_ami.amazon-linux-2.id}"
instance_type = "t2.micro"
key_name = "terraform"
tags = {
"Name" = "dev-terraform"
}
}
data "aws_ami" "amazon-linux-2" {
most_recent = true
owners = ["amazon"]
filter {
name = "owner-alias"
values = ["amazon"]
}
filter {
name = "name"
values = ["amzn2-ami-hvm*"]
}
filter {
name = "architecture"
values = ["x86_64"]
}
}
The error:
Error: Error launching source instance: UnsupportedOperation: Microsoft SQL Server Enterprise Edition is not supported for the instance type 't2.micro'.
status code: 400, request id: 72df60b9-46ac-4616-ab3f-b964ab2f3156
on main.tf line 6, in resource "aws_instance" "dev":
6: resource "aws_instance" "dev" {
I don't have any relation with Microsoft, and I cannot understand why this error is appearing.
The error is correct. Your data source query returns:
amzn2-ami-hvm-2.0.20190313-x86_64-gp2-SQL_2017_Enterprise-2021.02.25
Please use the following data source instead; the trailing $ in the name_regex requires the name to end in gp2, which excludes the SQL Server variants:
data "aws_ami" "latest_amazon_2" {
most_recent = true
owners = ["amazon"]
name_regex = "^amzn2-ami-hvm-.*x86_64-gp2$"
}
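To double-check what a data source resolves to before launching anything, an output helps; a minimal sketch:

output "resolved_ami_name" {
  # shows the resolved image name on apply, e.g. to confirm it ends in gp2
  value = "${data.aws_ami.latest_amazon_2.name}"
}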

Terraform - Multiple accounts with multiple environments (regions)

I am developing the infrastructure (IaC) I want to have in AWS with Terraform. To test, I am using an EC2 instance.
This code has to be deployable across multiple accounts and multiple regions (environments) per developer. This is an example:
account-999:
  developer1: us-east-2
  developer2: us-west-1
  developerN: us-east-1
account-666:
  Staging:    us-east-1
  Production: eu-west-2
I've created two .tfvars files, account-999.env.tfvars and account-666.env.tfvars, containing profile="account-999" and profile="account-666" respectively.
This is my main.tf, which contains the aws provider and the EC2 instance:
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
profile = var.profile
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
tags = {
Name = "HelloWorld"
}
}
And the variable.tf file:
variable "profile" {
type=string
}
variable "region" {
description = "Region by developer"
type = map
default = {
developer1 = "us-west-2"
developer2 = "us-east-2"
developerN = "ap-southeast-1"
}
}
But I'm not sure if I'm managing it well. For example, the region variable only contains the values for the account-999 account. How can I solve that?
On the other hand, with this structure, would it be possible to implement modules?
You could use a provider alias to accomplish this. More info about provider aliases can be found here.
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "west"
region = "us-west-2"
}
resource "aws_instance" "foo" {
provider = aws.west
# ...
}
Another way to look at this is by using Terraform workspaces. Here is an example:
terraform workspace new account-999
terraform workspace new account-666
Then this is an example of your AWS credentials file:
[account-999]
aws_access_key_id=xxx
aws_secret_access_key=xxx
[account-666]
aws_access_key_id=xxx
aws_secret_access_key=xxx
A reference to that account can be used within the provider block:
provider "aws" {
region = "us-east-1"
profile = "${terraform.workspace}"
}
You could even combine both methods!
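A minimal sketch of combining them, with the workspace selecting the account profile and an alias selecting the region:

provider "aws" {
  region  = "us-east-1"
  profile = "${terraform.workspace}"
}

provider "aws" {
  alias   = "west"
  region  = "us-west-2"
  profile = "${terraform.workspace}"
}

resource "aws_instance" "foo" {
  provider = aws.west
  # ...
}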

Cannot get source from artifact AWS CodeBuild in Terraform

I need to create a pipeline with a build step using Terraform. I need to get the source from the artifact, but the Terraform documentation is not very clear. This is my code so far:
resource "aws_codebuild_project" "authorization" {
name = "authorization"
description = "BuildProject for authrorization service"
build_timeout = "5"
service_role = "${aws_iam_role.codebuild_role.arn}"
artifacts {
type = "CODEPIPELINE"
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/docker:17.09.0"
type = "LINUX_CONTAINER"
privileged_mode = true
environment_variable {
"name" = "SOME_KEY1"
"value" = "SOME_VALUE1"
}
environment_variable {
"name" = "SOME_KEY2"
"value" = "SOME_VALUE2"
}
}
source {
type = "CODEPIPELINE"
buildspec = "buildspecs.yml"
}
tags {
"Environment" = "alpha"
}
}
The problem is that pointing to the file gets me this error during the pipeline execution of that step:
DOWNLOAD_SOURCE Failed
[Container] 2018/03/29 11:15:31 Waiting for agent ping
[Container] 2018/03/29 11:15:31 Waiting for DOWNLOAD_SOURCE
Message: Access Denied
This is what my pipeline looks like:
resource "aws_codepipeline" "foo" {
name = "tf-test-pipeline"
role_arn = "${aws_iam_role.codepipeline_role.arn}"
artifact_store {
location = "${aws_s3_bucket.foo.bucket}"
type = "S3"
encryption_key {
id = "${aws_kms_key.a.arn}"
type = "KMS"
}
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeCommit"
version = "1"
output_artifacts = ["src"]
configuration {
RepositoryName = "authorization"
BranchName = "master"
}
}
}
stage {
name = "Build"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["src"]
version = "1"
configuration {
ProjectName = "${aws_codebuild_project.authorization.name}"
}
}
}
}
I guess I did something wrong, but I can't seem to find my case described anywhere.
The source needs to be received from the Source step in CodePipeline, and that step is OK. I know how the pipeline works, but the Terraform implementation is pretty confusing.
EDIT: I've checked the S3 bucket and can confirm that the Source step is successfully uploading the artifacts there. So the problem remains that I cannot access the source in the second step. The role allows all access on all resources. The console view of the pipeline looks normal, with nothing left unfilled. The role is fine.
This generally happens when you already have a CodeBuild project and then integrate it into a CodePipeline project. CodeBuild then no longer downloads the sources from the CodeCommit/GitHub repo; instead, it tries to download the source artifact that CodePipeline created in its S3 bucket. So you need to grant the CodeBuild role access to the CodePipeline bucket in S3.
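A minimal sketch of such a policy, reusing the role and bucket from the question (if the artifact store is KMS-encrypted as above, the role additionally needs kms:Decrypt on that key):

resource "aws_iam_role_policy" "codebuild_artifacts" {
  name = "codebuild-artifacts-access"
  role = "${aws_iam_role.codebuild_role.name}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion", "s3:PutObject"],
      "Resource": ["${aws_s3_bucket.foo.arn}/*"]
    }
  ]
}
POLICY
}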

Terraform Spot Instance inside VPC

I'm trying to launch a spot instance inside a VPC using Terraform.
I had a working aws_instance setup and just changed it to an aws_spot_instance_request, but I always get this error:
* aws_spot_instance_request.machine: Error requesting spot instances: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
status code: 400, request id: []
My .tf file looks like this:
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
resource "template_file" "userdata" {
filename = "${var.userdata}"
vars {
domain = "${var.domain}"
name = "${var.name}"
}
}
resource "aws_spot_instance_request" "machine" {
ami = "${var.amiPuppet}"
key_name = "${var.key}"
instance_type = "c3.4xlarge"
subnet_id = "${var.subnet}"
vpc_security_group_ids = [ "${var.securityGroup}" ]
user_data = "${template_file.userdata.rendered}"
wait_for_fulfillment = true
spot_price = "${var.price}"
tags {
Name = "${var.name}.${var.domain}"
Provider = "Terraform"
}
}
resource "aws_route53_record" "machine" {
zone_id = "${var.route53ZoneId}"
name = "${aws_spot_instance_request.machine.tags.Name}"
type = "A"
ttl = "300"
records = ["${aws_spot_instance_request.machine.private_ip}"]
}
I don't understand why it isn't working...
The documentation states that aws_spot_instance_request supports all parameters of aws_instance, so I just changed a working aws_instance to a spot_instance_request (with the addition of the price)... am I doing something wrong?
I originally opened this as an issue in the Terraform repo, but no one replied.
It's a bug in Terraform; it seems to be fixed in master.
https://github.com/hashicorp/terraform/issues/1339