I have a Lambda
resource "aws_lambda_function" "api" {
function_name = "ApiController"
timeout = 10
s3_bucket = "mn-lambda"
s3_key = "mn/v1.0.0/sketch-avatar-api-1.0.0-all.jar"
handler = "io.micronaut.function.aws.proxy.MicronautLambdaHandler"
runtime = "java11"
memory_size = 1024
role = aws_iam_role.api_lambda.arn
vpc_config {
security_group_ids = [aws_security_group.lambda.id]
subnet_ids = [for subnet in aws_subnet.private: subnet.id]
}
}
Within a VPC
resource "aws_vpc" "vpc" {
cidr_block = var.vpc_cidr_block
enable_dns_support = true
enable_dns_hostnames = true
}
I created an aws_vpc_endpoint because I read that's what's needed for my VPC to access S3
resource "aws_vpc_endpoint" "s3" {
vpc_id = aws_vpc.vpc.id
service_name = "com.amazonaws.${var.region}.s3"
}
I created and attached a policy allowing access to S3
resource "aws_iam_role_policy_attachment" "s3" {
role = aws_iam_role.api_lambda.name
policy_arn = aws_iam_policy.s3.arn
}
resource "aws_iam_policy" "s3" {
policy = data.aws_iam_policy_document.s3.json
}
data "aws_iam_policy_document" "s3" {
statement {
effect = "Allow"
resources = ["*"]
actions = [
"s3:*",
]
}
}
It might be worth noting that the bucket I'm trying to access was created using the AWS CLI (not with Terraform), but it is in the same region.
The problem is that my Lambda is timing out when I try to read files from S3.
The full project can be found here should anyone want to take a peek.
You are creating com.amazonaws.${var.region}.s3, which is a gateway VPC endpoint, not to be confused with an interface VPC endpoint.
One of the key differences between the two is that the gateway type requires an association with route tables. Thus you should use route_table_ids to associate your S3 gateway endpoint with the route tables of your subnets.
For example, to use default main VPC route table:
resource "aws_vpc_endpoint" "s3" {
vpc_id = aws_vpc.vpc.id
service_name = "com.amazonaws.${var.region}.s3"
route_table_ids = [aws_vpc.vpc.main_route_table_id]
}
Alternatively, you can use aws_vpc_endpoint_route_table_association to do it as well.
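For example, a minimal sketch of that resource, assuming a route table named aws_route_table.private exists for the Lambda's private subnets (the name is illustrative):
resource "aws_vpc_endpoint_route_table_association" "s3_private" {
  # Associates the S3 gateway endpoint with the private subnets' route table.
  route_table_id  = aws_route_table.private.id # assumed route table resource
  vpc_endpoint_id = aws_vpc_endpoint.s3.id
}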
I have created a policy X with EC2 and VPC full access and attached it to userA. userA has console access, so, using switch role, userA can create an instance from the console.
Now, userB has programmatic access with policy Y with EC2 and VPC full access. But when I tried to create an instance using Terraform, I got this error:
Error: creating Security Group (allow-80-22): UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message:
Even aws ec2 describe-instances gives an error:
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
Can anyone help me with this?
Thanks in advance.
To be honest, there are a couple of mistakes in the question itself, but I have ignored them and provided a solution for two scenarios:
Create resources using an IAM user with only programmatic access that has the required policies attached directly to it
In general, if you have an AWS IAM user who has programmatic access and already has the required policies attached, then creating any resource within those permissions is straightforward, like any normal use case.
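For example, a minimal provider configuration for this case could look like the sketch below, assuming the user's keys are supplied via the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables (the region is illustrative):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Credentials come from the environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY)
# or the shared credentials file; no assume_role block is needed because the
# policies are attached directly to the user.
provider "aws" {
  region = "eu-central-1"
}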
Create resources using an IAM user with only programmatic access by assuming a role that has the required policies attached to it (role only)
providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

## If you hardcoded the role_arn then it is not required to have two provider configs (one with the hardcoded value is enough, without any alias).
provider "aws" {
  region = "eu-central-1"
}

provider "aws" {
  alias  = "ec2_and_vpc_full_access"
  region = "eu-central-1"

  assume_role {
    role_arn = data.aws_iam_role.stackoverflow.arn
  }
}
resources.tf
/*
 !! Important !!
 * Currently the AWS secrets (AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY) used to
 * authenticate Terraform belong to the user that has the AWS managed policy
 * [IAMFullAccess] attached directly to it, so that it can read the role ARN.
*/

# If you have hardcoded role_arn in the provider config, this can be ignored and no alias provider config is required.
## Using the default provider to read the role.
data "aws_iam_role" "stackoverflow" {
  name = "stackoverflow-ec2-vpc-full-access-role"
}
# Using provider with the role having AWS managed policies [ec2 and vpc full access] attached
data "aws_vpc" "default" {
  provider = aws.ec2_and_vpc_full_access
  default  = true
}

# Using provider with the role having AWS managed policies [ec2 and vpc full access] attached
resource "aws_key_pair" "eks_jump_host" {
  provider   = aws.ec2_and_vpc_full_access
  key_name   = "ec2keypair"
  public_key = file("${path.module}/../../ec2keypair.pub")
}
# Example from https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
# Using provider with the role having AWS managed policies [ec2 and vpc full access] attached
data "aws_ami" "ubuntu" {
  provider    = aws.ec2_and_vpc_full_access
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}
# Using provider with the role having AWS managed policies [ec2 and vpc full access] attached
resource "aws_instance" "terraform-ec2" {
  provider        = aws.ec2_and_vpc_full_access
  ami             = data.aws_ami.ubuntu.id
  instance_type   = "t2.micro"
  key_name        = "ec2keypair"
  security_groups = [aws_security_group.t-allow_tls.name]
}

# Using provider with the role having AWS managed policies [ec2 and vpc full access] attached
resource "aws_security_group" "t-allow_tls" {
  provider    = aws.ec2_and_vpc_full_access
  name        = "allow-80-22"
  description = "Allow TLS inbound traffic"
  vpc_id      = data.aws_vpc.default.id

  ingress {
    description      = "http"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
For a full solution refer to the GitHub repo. I hope this is what you were looking for and that it helps.
TL;DR: Does my EC2 instance need an IAM role to be added to my ECS cluster? If so, how do I set that?
I have an EC2 instance created using an autoscaling group. (ASG definition here.) I also have an ECS cluster, which is set on the spawned instances via user_data. I've confirmed that /etc/ecs/ecs.config on the running instance looks correct:
ECS_CLUSTER=my_cluster
However, the instance never appears in the cluster, so the service task doesn't run. There are tons of questions on SO about this, and I've been through them all. The instances are in a public subnet and have access to the internet. The error in ecs-agent.log is:
Error getting ECS instance credentials from default chain: NoCredentialProviders: no valid providers in chain.
So I am guessing that the problem is that the instance has no IAM role associated with it. But I confess that I am a bit confused about all the various "roles" and "services" involved. Does this look like a problem?
If that's it, where do I set this? I'm using Cloud Posse modules. The docs say I shouldn't set a service_role_arn on a service task if I'm using "awsvpc" as the networking mode, but I am not sure whether I should be using a different mode for this setup (multiple containers running as tasks on a single EC2 instance). Also, there are several other roles I can configure here? The ECS service task looks like this:
module "ecs_alb_service_task" {
source = "cloudposse/ecs-alb-service-task/aws"
# Cloud Posse recommends pinning every module to a specific version
version = "0.62.0"
container_definition_json = jsonencode([for k, def in module.flask_container_def : def.json_map_object])
name = "myapp-web"
security_group_ids = [module.sg.id]
ecs_cluster_arn = aws_ecs_cluster.default.arn
task_exec_role_arn = [aws_iam_role.ec2_task_execution_role.arn]
launch_type = "EC2"
alb_security_group = module.sg.name
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
network_mode = "awsvpc"
desired_count = 1
task_memory = (512 * 3)
task_cpu = 1024
deployment_controller_type = "ECS"
enable_all_egress_rule = false
health_check_grace_period_seconds = 10
deployment_minimum_healthy_percent = 50
deployment_maximum_percent = 200
ecs_load_balancers = [{
container_name = "web"
container_port = 80
elb_name = null
target_group_arn = module.alb.default_target_group_arn
}]
}
And here's the policy for the ec2_task_execution_role:
data "aws_iam_policy_document" "ec2_task_execution_role" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}
Update: Here is the rest of the declaration of the task execution role:
resource "aws_iam_role" "ec2_task_execution_role" {
name = "${var.project_name}_ec2_task_execution_role"
assume_role_policy = data.aws_iam_policy_document.ec2_task_execution_role.json
tags = {
Name = "${var.project_name}_ec2_task_execution_role"
Project = var.project_name
}
}
resource "aws_iam_role_policy_attachment" "ec2_task_execution_role" {
role = aws_iam_role.ec2_task_execution_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
# Create a policy for the EC2 role to use Session Manager
resource "aws_iam_role_policy" "ec2_role_policy" {
name = "${var.project_name}_ec2_role_policy"
role = aws_iam_role.ec2_task_execution_role.id
policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Action" : [
"ssm:DescribeParameters",
"ssm:GetParametersByPath",
"ssm:GetParameters",
"ssm:GetParameter"
],
"Resource" : "*"
}
]
})
}
Update 2: The EC2 instances are created by the Auto Scaling Group; see here for my code. The ECS cluster is just this:
# Create the ECS cluster
resource "aws_ecs_cluster" "default" {
  name = "${var.project_name}_cluster"

  tags = {
    Name    = "${var.project_name}_cluster"
    Project = var.project_name
  }
}
I was expecting there to be something like instance_role in the ec2-autoscaling-group module, but there isn't.
You need to set the EC2 instance profile (IAM instance role) via the iam_instance_profile_name setting in the module "autoscale_group".
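One way to wire that up is sketched below, under the assumption that the Cloud Posse ec2-autoscale-group module is instantiated as module "autoscale_group" and accepts the iam_instance_profile_name variable mentioned above (the role and profile names are illustrative):
# Instance role that the ECS agent on the EC2 host uses.
data "aws_iam_policy_document" "ecs_instance_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecs_instance" {
  name               = "${var.project_name}_ecs_instance_role"
  assume_role_policy = data.aws_iam_policy_document.ecs_instance_assume.json
}

# AWS managed policy that lets the ECS agent register the instance with the cluster.
resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = aws_iam_role.ecs_instance.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance" {
  name = "${var.project_name}_ecs_instance_profile"
  role = aws_iam_role.ecs_instance.name
}

module "autoscale_group" {
  source = "cloudposse/ec2-autoscale-group/aws" # assumed module source
  # ... existing settings ...

  # Attach the instance profile so the ECS agent can obtain credentials.
  iam_instance_profile_name = aws_iam_instance_profile.ecs_instance.name
}
Note that this instance role (trusted by ec2.amazonaws.com) is separate from the task execution role in the question, which is trusted by ecs-tasks.amazonaws.com; the ECS agent on the host needs the former to register the instance with the cluster.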
How can I update the default route table that is automatically created when I create a VPC by using Terraform?
I would like to add some tags to it.
This is how I create my VPC
module "aws_vpc" {
source = "../../modules/Virtual Private Cloud"
vpc_cidr = "10.0.0.0/16"
vpc_instance_tenancy = "default"
vpc_tags = {
Name = "Web Application VPC"
project = "Alpha"
cost_center = "92736"
developer = "J.Pean"
}
}
Module looks like this:
resource "aws_vpc" "new" {
cidr_block = var.vpc_cidr
instance_tenancy = "default"
tags = var.vpc_tags
}
resource "null_resource" "tag_default_route_table" {
triggers = {
route_table_id = aws_vpc.new.default_route_table_id
}
provisioner "local-exec" {
interpreter=["/bin/bash", "-c"]
command = <<EOF
set -euo pipefail
aws ec2 create-tags --resources route_table_id --tags 'Key="somekey",Value=test'
EOF
}
}
The above works by tagging the default route table with a null_resource and a local-exec provisioner. Alternatively, you can manage the default route table created with the VPC (as well as the default NACL and default security group) through these resources:
aws_default_network_acl
aws_default_route_table
aws_default_security_group
Terraform documentation
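For the tagging use case in the question, a minimal sketch with aws_default_route_table could look like this (the tag values are illustrative and aws_vpc.new refers to the module's VPC resource above):
# Adopts the main route table that AWS created with the VPC so Terraform can manage its tags.
resource "aws_default_route_table" "main" {
  default_route_table_id = aws_vpc.new.default_route_table_id

  tags = {
    Name    = "Web Application VPC default route table"
    project = "Alpha"
  }
}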
In my application I am using an AWS Auto Scaling group created with Terraform. I launch an Auto Scaling group giving it a number of instances in a region. But since only 20 instances are allowed per region, I want to launch an Auto Scaling group that will create instances across multiple regions so that I can launch more. I had this configuration:
# ---------------------------------------------------------------------------------------------------------------------
# THESE TEMPLATES REQUIRE TERRAFORM VERSION 0.8 AND ABOVE
# ---------------------------------------------------------------------------------------------------------------------
terraform {
  required_version = ">= 0.9.3"
}
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
provider "aws" {
alias = "eu-west-1"
region = "eu-west-1"
}
provider "aws" {
alias = "eu-central-1"
region = "eu-central-1"
}
provider "aws" {
alias = "ap-southeast-1"
region = "ap-southeast-1"
}
provider "aws" {
alias = "ap-southeast-2"
region = "ap-southeast-2"
}
provider "aws" {
alias = "ap-northeast-1"
region = "ap-northeast-1"
}
provider "aws" {
alias = "sa-east-1"
region = "sa-east-1"
}
resource "aws_launch_configuration" "launch_configuration" {
name_prefix = "${var.asg_name}-"
image_id = "${var.ami_id}"
instance_type = "${var.instance_type}"
associate_public_ip_address = true
key_name = "${var.key_name}"
security_groups = ["${var.security_group_id}"]
user_data = "${data.template_file.user_data_client.rendered}"
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN AUTO SCALING GROUP (ASG)
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_autoscaling_group" "autoscaling_group" {
name = "${var.asg_name}"
max_size = "${var.max_size}"
min_size = "${var.min_size}"
desired_capacity = "${var.desired_capacity}"
launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
vpc_zone_identifier = ["${data.aws_subnet_ids.default.ids}"]
lifecycle {
create_before_destroy = true
}
tag {
key = "Environment"
value = "production"
propagate_at_launch = true
}
tag {
key = "Name"
value = "clj-${var.job_id}-instance"
propagate_at_launch = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH CLIENT NODE WHEN IT'S BOOTING
# ---------------------------------------------------------------------------------------------------------------------
data "template_file" "user_data_client" {
template = "${file("./user-data-client.sh")}"
vars {
company_location_job_id = "${var.job_id}"
docker_login_username = "${var.docker_login_username}"
docker_login_password = "${var.docker_login_password}"
}
}
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTER IN THE DEFAULT VPC AND SUBNETS
# Using the default VPC and subnets makes this example easy to run and test, but it means Instances are
# accessible from the public Internet. In a production deployment, we strongly recommend deploying into a custom VPC
# and private subnets.
# ---------------------------------------------------------------------------------------------------------------------
data "aws_subnet_ids" "default" {
vpc_id = "${var.vpc_id}"
}
But this configuration does not work: it only launches instances in a single region and throws an error once they reach 20.
How can I create instances across multiple regions in an Auto Scaling group?
You correctly instantiate multiple aliased providers, but are not using any of them.
If you really need to create resources in different regions from one configuration, you must pass the alias of the provider to the resource:
resource "aws_autoscaling_group" "autoscaling_group_eu-central-1" {
provider = "aws.eu-central-1"
}
And repeat this block as many times as needed (or, better, extract it into a module and pass the providers to the module, as sketched below).
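A sketch of the module approach, using current Terraform syntax and assuming the launch configuration and ASG above are extracted into a local ./asg module (the module path and names are illustrative):
# One module instance per region, each wired to the matching aliased provider.
module "asg_us_east_1" {
  source = "./asg" # assumed local module containing the launch configuration and ASG

  providers = {
    aws = aws # default provider (us-east-1)
  }
}

module "asg_eu_central_1" {
  source = "./asg"

  providers = {
    aws = aws.eu-central-1
  }
}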
But, as mentioned in a comment, if all you want to achieve is to have more than 20 instances, you can increase your limit by opening a ticket with AWS support.
I am trying to create a VPC peering connection between accounts and auto-accept it, but it fails with a permissions error.
Here are the providers in the main.tf
provider "aws" {
region = "${var.region}"
shared_credentials_file = "/Users/<username>/.aws/credentials"
profile = "sandbox"
}
data "aws_caller_identity" "current" { }
Here is the vpc_peer module:
resource "aws_vpc_peering_connection" "peer" {
peer_owner_id = "${var.peer_owner_id}"
peer_vpc_id = "${var.peer_vpc_id}"
vpc_id = "${var.vpc_id}"
auto_accept = "${var.auto_accept}"
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
tags {
Name = "VPC Peering between ${var.peer_vpc_id} and ${var.vpc_id}"
}
}
Here is the module invocation in main.tf:
module "peering" {
source = "../modules/vpc_peer"
region = "${var.region}"
peer_owner_id = "<management account number>"
peer_vpc_id = "<vpc-********>"
vpc_id = "${module.network.vpc_id}"
auto_accept = "true"
}
Now, the IAM user I am using with the "sandbox" provider has permissions for VPC peering on the VPC that is in the management account.
I used the following procedure from AWS: http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
Unfortunately I keep failing with the following error:
1 error(s) occurred:
* aws_vpc_peering_connection.peer: Unable to accept VPC Peering Connection: OperationNotPermitted: User 651267440910 cannot accept peering pcx-f9c55290
status code: 400, request id: cfbe1163-241e-413b-a8de-d2bca19726e5
Any ideas?
I managed to use a local-exec provisioner which accepts the peering connection.
Here is an example:
resource "aws_vpc_peering_connection" "peer" {
peer_owner_id = "${var.peer_owner_id}"
peer_vpc_id = "${var.peer_vpc_id}"
vpc_id = "${var.vpc_id}"
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.peer.id} --region=${var.region} --profile=${var.profile}"
}
tags {
Name = "VPC Peering between ${var.peer_vpc_id} and ${var.vpc_id}"
}
}
The latest documentation example works fine for me (cross-account usage).
The other answers were not working for me.
Here is an example with Terraform version > 1:
provider "aws" {
alias = "requester"
# Requester's credentials.
}
provider "aws" {
alias = "accepter"
# Accepter's credentials.
}
resource "aws_vpc" "main" {
provider = aws.requester
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
}
resource "aws_vpc" "peer" {
provider = aws.accepter
cidr_block = "10.1.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
}
data "aws_caller_identity" "peer" {
provider = aws.accepter
}
# Requester's side of the connection.
resource "aws_vpc_peering_connection" "peer" {
provider = aws.requester
vpc_id = aws_vpc.main.id
peer_vpc_id = aws_vpc.peer.id
peer_owner_id = data.aws_caller_identity.peer.account_id
auto_accept = false
tags = {
Side = "Requester"
}
}
# Accepter's side of the connection.
resource "aws_vpc_peering_connection_accepter" "peer" {
provider = aws.accepter
vpc_peering_connection_id = aws_vpc_peering_connection.peer.id
auto_accept = true
tags = {
Side = "Accepter"
}
}
resource "aws_vpc_peering_connection_options" "requester" {
provider = aws.requester
# As options can't be set until the connection has been accepted
# create an explicit dependency on the accepter.
vpc_peering_connection_id = aws_vpc_peering_connection_accepter.peer.id
requester {
allow_remote_vpc_dns_resolution = true
}
}
resource "aws_vpc_peering_connection_options" "accepter" {
provider = aws.accepter
vpc_peering_connection_id = aws_vpc_peering_connection_accepter.peer.id
accepter {
allow_remote_vpc_dns_resolution = true
}
}
The auto_accept argument in Terraform can only be used on VPCs in the same account. From the documentation:
auto_accept - (Optional) Accept the peering (both VPCs need to be in
the same AWS account).
...
If both VPCs are not in the same AWS account do not enable the
auto_accept attribute. You will still have to accept the VPC Peering
Connection request manually using the AWS Management Console, AWS CLI,
through SDKs, etc.
So you'll just need to make the peering connection on this side in Terraform without auto_accept, and then manually or programmatically accept it in the target account. Some programmatic options:
AWS CLI: accept-vpc-peering-connection
AWS API: AcceptVpcPeeringConnection
The AWS SDK in your language of choice should have a matching method for this, as well.
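On the Terraform side, a minimal sketch of the requester-only connection without auto_accept, reusing the variable names from the question's module (acceptance then happens out of band in the target account):
resource "aws_vpc_peering_connection" "peer" {
  peer_owner_id = "${var.peer_owner_id}"
  peer_vpc_id   = "${var.peer_vpc_id}"
  vpc_id        = "${var.vpc_id}"

  # auto_accept is deliberately omitted: the connection stays in
  # "pending-acceptance" until it is accepted from the peer account
  # (console, accept-vpc-peering-connection CLI call, or SDK).

  tags {
    Name = "VPC Peering between ${var.peer_vpc_id} and ${var.vpc_id}"
  }
}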
VPC peering can be set up within the same account or between different accounts; in either case the peering request must be accepted on the accepter's side before one VPC can reach the other.