I have created a policy X with EC2 and VPC full access and attached it to userA. userA has console access, so using switch role userA can create an instance from the console.
Now, userB has programmatic access with policy Y with EC2 and VPC full access. But when I tried to create an instance using Terraform, I got this error:
Error: creating Security Group (allow-80-22): UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message:
Even aws ec2 describe-instances
gives an error:
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
Can anyone help me with this?
Thanks in advance.
To be honest, there are a couple of mistakes in the question itself, but I have ignored them and provided a solution covering the two scenarios below.
Creating resources using an IAM user with only programmatic access and the required policies attached directly
In general, if you have an AWS IAM user who has programmatic access and already has the required policies attached, then it's pretty straightforward to create any resources within those permissions, like any normal use case.
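For completeness, a minimal sketch of this first scenario (the region and key path are example values; the user's keys are assumed to be exported as environment variables):

provider "aws" {
  # Credentials are read from the environment:
  # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the user with the policies attached.
  region = "eu-central-1"
}

# Any resource within the attached policies then works directly, e.g.:
resource "aws_key_pair" "example" {
  key_name   = "example-key"
  public_key = file("~/.ssh/id_rsa.pub")
}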
Creating resources using an IAM user with only programmatic access by assuming a role that has the required policies attached (role only)
providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

## If you hardcode the role_arn, two provider configs are not required (one with the hardcoded value is enough, without any alias).
provider "aws" {
  region = "eu-central-1"
}

provider "aws" {
  alias  = "ec2_and_vpc_full_access"
  region = "eu-central-1"

  assume_role {
    role_arn = data.aws_iam_role.stackoverflow.arn
  }
}
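As the comment above says, if you are fine with hardcoding the role ARN, a single un-aliased provider is enough. A minimal sketch with a placeholder account id:

provider "aws" {
  region = "eu-central-1"

  assume_role {
    # Placeholder ARN; substitute the real role ARN from your account.
    role_arn = "arn:aws:iam::111111111111:role/stackoverflow-ec2-vpc-full-access-role"
  }
}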
resources.tf
/*
  !! Important !!
  The AWS secrets (AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY) used to authenticate Terraform here
  belong to the user that has the AWS managed policy [IAMFullAccess] attached directly,
  which is what allows it to read the role ARN.
*/

# If you have hardcoded the role_arn in the provider config, this can be ignored and no aliased provider config is needed.
## Using the default provider to read the role.
data "aws_iam_role" "stackoverflow" {
  name = "stackoverflow-ec2-vpc-full-access-role"
}
# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
data "aws_vpc" "default" {
  provider = aws.ec2_and_vpc_full_access
  default  = true
}

# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
resource "aws_key_pair" "eks_jump_host" {
  provider   = aws.ec2_and_vpc_full_access
  key_name   = "ec2keypair"
  public_key = file("${path.module}/../../ec2keypair.pub")
}
# Example from https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
data "aws_ami" "ubuntu" {
  provider    = aws.ec2_and_vpc_full_access
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}
# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
resource "aws_instance" "terraform-ec2" {
  provider        = aws.ec2_and_vpc_full_access
  ami             = data.aws_ami.ubuntu.id
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.eks_jump_host.key_name
  security_groups = [aws_security_group.t-allow_tls.name]
}

# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
resource "aws_security_group" "t-allow_tls" {
  provider    = aws.ec2_and_vpc_full_access
  name        = "allow-80-22"
  description = "Allow HTTP inbound traffic"
  vpc_id      = data.aws_vpc.default.id

  ingress {
    description      = "http"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
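One assumption baked into all of this is that the role actually trusts userB; otherwise the sts:AssumeRole call itself fails. A rough sketch of the role and its trust policy (the account id is a placeholder, and attaching the AWS managed EC2/VPC policies is just one way to grant the permissions):

resource "aws_iam_role" "stackoverflow" {
  name = "stackoverflow-ec2-vpc-full-access-role"

  # Trust policy: allow the programmatic user to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::111111111111:user/userB" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ec2_full_access" {
  role       = aws_iam_role.stackoverflow.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}

resource "aws_iam_role_policy_attachment" "vpc_full_access" {
  role       = aws_iam_role.stackoverflow.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonVPCFullAccess"
}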
For a full solution refer to the GitHub repo. I hope this is what you were looking for and that it helps.
Related
When trying to bind the cloudsql.client role to a service account in GCP with Terraform, the following error is produced:
Error setting IAM policy for service account '{}': googleapi: Error 400: Role roles/cloudsql.client is not supported for this resource., badRequest
This is my Terraform code in main.tf:
resource "google_service_account" "default" {
account_id = var.service_account_name
display_name = "ECB API Caller Cloud Function SA"
}
data "google_iam_policy" "default" {
binding {
members = ["serviceAccount:${google_service_account.default.email}"]
role = "roles/cloudsql.client"
}
}
resource "google_service_account_iam_policy" "default" {
service_account_id = google_service_account.default.name
policy_data = data.google_iam_policy.default.policy_data
}
Edit: Sorted
After further investigation I found a way. I'm not completely sure why, but as far as I can tell, you cannot grant this role on the service account itself; it has to go the other way around, granted at the project level.
In any case, this code snippet does the trick:
resource "google_service_account" "default" {
account_id = var.service_account_name
display_name = "ECB API Caller Cloud Function SA"
}
resource "google_project_iam_binding" "project" {
project = var.project_id
role = "roles/cloudsql.client"
members = [
"serviceAccount:${google_service_account.default.email}",
]
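A side note on this fix: google_project_iam_binding is authoritative for the role, so it overwrites any other members that already hold roles/cloudsql.client on the project. If that is a concern, a non-authoritative sketch using google_project_iam_member (same variables as above) would be:

resource "google_project_iam_member" "cloudsql_client" {
  project = var.project_id
  role    = "roles/cloudsql.client"
  member  = "serviceAccount:${google_service_account.default.email}"
}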
I'm trying to access a public Cloud Run service and not sure why I keep getting this error message ({"message":"PERMISSION_DENIED:API basic-express-api-1yy1jgrw4nwy2.apigateway.chrome-courage-336400.cloud.goog is not enabled for the project.","code":403}) when hitting the gateway default hostname path with the API key in the query string. The config has a service account with the role needed to invoke Cloud Run services, and all required APIs are enabled. Here is a link to my entire codebase, but below is my API Gateway specific Terraform configuration.
resource "google_api_gateway_api" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control]
provider = google-beta
api_id = "basic-express-api"
}
resource "google_api_gateway_api_config" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control, google_api_gateway_api.basic_express]
provider = google-beta
api = google_api_gateway_api.basic_express.api_id
api_config_id = "basic-express-cfg"
openapi_documents {
document {
path = "api-configs/openapi-spec-basic-express.yaml"
contents = filebase64("api-configs/openapi-spec-basic-express.yaml")
}
}
lifecycle {
create_before_destroy = true
}
gateway_config {
backend_config {
google_service_account = google_service_account.apig_gateway_basic_express_sa.email
}
# https://cloud.google.com/api-gateway/docs/configure-dev-env?&_ga=2.177696806.-2072560867.1640626239#configuring_a_service_account
# when I added this terraform said that the resource already exists, so I had to tear down all infrastructure and re-provision - also did not make a difference, still getting a 404 error when trying to hit the gateway default hostname endpoint - this resource might be immutable...
}
}
resource "google_api_gateway_gateway" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control, google_api_gateway_api_config.basic_express, google_api_gateway_api.basic_express]
provider = google-beta
api_config = google_api_gateway_api_config.basic_express.id
gateway_id = "basic-express-gw"
region = var.region
}
resource "google_service_account" "apig_gateway_basic_express_sa" {
account_id = "apig-gateway-basic-express-sa"
depends_on = [google_project_service.iam]
}
# "Identity to be used by gateway"
resource "google_project_iam_binding" "project" {
project = var.project_id
role = "roles/run.invoker"
members = [
"serviceAccount:${google_service_account.apig_gateway_basic_express_sa.email}"
]
}
# https://cloud.google.com/api-gateway/docs/configure-dev-env?&_ga=2.177696806.-2072560867.1640626239#configuring_a_service_account
Try:
PROJECT=[[YOUR-PROJECT]]
SERVICE="basic-express-api-1yy1jgrw4nwy2.apigateway.chrome-courage-336400.cloud.goog"
gcloud services enable ${SERVICE} \
--project=${PROJECT}
As others have pointed out, you need to enable the API service. You can do this via Terraform with the google_project_service resource:
resource "google_project_service" "basic_express" {
project = var.project_id
service = google_api_gateway_api.basic_express.managed_service
timeouts {
create = "30m"
update = "40m"
}
disable_dependent_services = true
}
I am new to Terraform and still learning.
I know there is a way to import your existing infrastructure into Terraform and have the state file created. But as of now I have multiple AWS accounts, with multiple regions in those accounts, which have multiple VPCs.
My task is to create VPC flow logs through Terraform.
Is it possible?
If it is, could you please help me or direct me on how to get this done?
It is possible, albeit a little messy. You will need to create a provider block (with a unique alias) for each account/region combination you have (you can use profiles, but I think roles are best), and select those providers in your resources appropriately.
provider "aws" {
alias = "acct1uswest2"
region = "us-west-2"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
}
}
provider "aws" {
alias = "acct2useast1"
region = "us-east-1"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
}
}
resource "aws_flow_log" "flow1" {
provider = aws.acct1uswest2
vpc_id = "vpc id in account 1" # you mentioned the vpc already exists, so you can either import the vpc and reference it's .id attribute here, or just put the id here as a string
...
}
resource "aws_flow_log" "flow2" {
provider = aws.acct2useast1
vpc_id = "vpc id in account 2"
...
}
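For reference, a hypothetical fleshed-out version of one of these flow log resources, assuming delivery to an S3 bucket that already exists (the VPC id and bucket ARN below are placeholders):

resource "aws_flow_log" "flow1_complete" {
  provider             = aws.acct1uswest2
  vpc_id               = "vpc-0abc123def4567890"
  traffic_type         = "ALL" # capture both accepted and rejected traffic
  log_destination_type = "s3"
  log_destination      = "arn:aws:s3:::my-flow-log-bucket" # placeholder bucket
}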
I suggest you read up on importing resources (and the implications) here.
More on multiple providers here.
I'm trying to create data roles in three environments in AWS using Terraform.
One is a role in the root account. This role is used to log in to AWS and can assume data roles in production and staging. This works fine and uses a separate module.
I have problems when trying to create the roles in prod and staging from a module.
My module's main.tf looks like this:
resource "aws_iam_role" "this" {
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
assume_role_policy = "${length(var.custom_principals) == 0 ? data.aws_iam_policy_document.assume_role.json : data.aws_iam_policy_document.assume_role_custom_principals.json}"
}
resource "aws_iam_policy" "this" {
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
policy = "${var.policy}"
}
data "aws_iam_policy_document" "assume_role" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = ["arn:aws:iam::${var.account_id}:root"]
}
}
}
data "aws_iam_policy_document" "assume_role_custom_principals" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [
"${var.custom_principals}",
]
}
}
}
resource "aws_iam_role_policy_attachment" "this" {
role = "${aws_iam_role.this.name}"
policy_arn = "${aws_iam_policy.this.arn}"
}
I also have the following in output.tf:
output "role_name" {
value = "${aws_iam_role.this.name}"
}
Next I try to use the module to create two roles in prod and staging.
main.tf:
module "data_role" {
source = "../tf_data_role"
account_id = "${var.account_id}"
name = "data"
policy_description = "Role for data engineers"
custom_principals = [
"arn:aws:iam::${var.master_account_id}:root",
]
policy = "${data.aws_iam_policy_document.data_access.json}"
}
Then I'm trying to attach AWS managed policies like this:
resource "aws_iam_role_policy_attachment" "data_readonly_access" {
role = "${module.data_role.role_name}"
policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
resource "aws_iam_role_policy_attachment" "data_redshift_full_access" {
role = "${module.data_role.role_name}"
policy_arn = "arn:aws:iam::aws:policy/AmazonRedshiftFullAccess"
}
The problem I encounter here is that when I run this module, the above two policies are attached not in staging but in the root account. How can I fix this so the policies are attached in staging?
I'll assume from your question that staging is its own AWS account, separate from your root account. From the Terraform docs:
You can define multiple configurations for the same provider in order to support multiple regions, multiple hosts, etc.
This also applies to creating resources in multiple AWS accounts. To create Terraform resources in two AWS accounts, follow these steps.
In your entrypoint main.tf, define aws providers for the accounts you'll be targeting:
# your normal provider targeting your root account
provider "aws" {
  version = "1.40"
  region  = "us-east-1"
}

provider "aws" {
  version = "1.40"
  region  = "us-east-1"
  alias   = "staging" # define custom alias

  # either use an assumed role or allowed_account_ids to target another account
  assume_role {
    role_arn = "arn:aws:iam::STAGINGACCOUNTNUMBER:role/Staging"
  }
}
(Note: the role arn must exist already and your current AWS credentials must have permission to assume it)
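For reference, the trust policy on that Staging role must allow principals in your root account to assume it. A minimal sketch of that role (created in the staging account itself; the account number is a placeholder):

resource "aws_iam_role" "staging" {
  name = "Staging"

  # Trust policy: allow principals in the root account to assume this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": { "AWS": "arn:aws:iam::ROOTACCOUNTNUMBER:root" }
    }
  ]
}
EOF
}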
To use them in your module, call your module like this:
module "data_role" {
source = "../tf_data_role"
providers = {
aws.staging = "aws.staging"
aws = "aws"
}
account_id = "${var.account_id}"
name = "data"
... remainder of module
}
and define the providers within your module like this
provider "aws" {
alias = "staging"
}
provider "aws" {}
Now when you are declaring resources within your module, you can dictate which AWS provider (and hence which account) to create the resources in, e.g.:
resource "aws_iam_role" "this" {
provider = "aws.staging" # this aws_iam_role will be created in your staging account
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
assume_role_policy = "${length(var.custom_principals) == 0 ? data.aws_iam_policy_document.assume_role.json : data.aws_iam_policy_document.assume_role_custom_principals.json}"
}
resource "aws_iam_policy" "this" {
# no explicit provider is set here so it will use the "default" (un-aliased) aws provider and create this aws_iam_policy in your root account
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
policy = "${var.policy}"
}
Terraform doesn't seem to be able to create AWS private hosted Route53 zones, and dies with the following error when I try to create a new private hosted zone associated with an existing VPC:
Error applying plan:
1 error(s) occurred:
aws_route53_zone.analytics: InvalidVPCId: The VPC: vpc-xxxxxxx you provided is not authorized to make the association.
status code: 400, request id: b411af23-0187-11e7-82e3-df8a3528194f
Here's my .tf file:
provider "aws" {
region = "${var.region}"
profile = "${var.environment}"
}
variable "vpcid" {
default = "vpc-xxxxxx"
}
variable "region" {
default = "eu-west-1"
}
variable "environment" {
default = "dev"
}
resource "aws_route53_zone" "analytics" {
vpc_id = "${var.vpcid}"
name = "data.int.example.com"
}
I'm not sure if the error is referring to one of these:
The VPC somehow needs to be authorized to associate with the zone in advance.
The AWS account running Terraform needs the correct IAM permissions to associate the zone with the VPC.
Would anyone have a clue how I could troubleshoot this further?
Sometimes you also face this issue when the AWS region configured in the provider config differs from the region in which the VPC is deployed. For such cases we can use an alias for the AWS provider, like below:
provider "aws" {
region = "us-east-1"
}
provider "aws" {
region = "ap-southeast-1"
alias = "singapore"
}
Then we can use it in Terraform resources as below:
resource "aws_route53_zone_association" "vpc_two" {
provider = "aws.singapore"
zone_id = "${aws_route53_zone.dlos_vpc.zone_id}"
vpc_id = "${aws_vpc.vpc_two.id}"
}
The above snippet is helpful when you need your Terraform script to deploy across multiple regions.
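Also worth noting: the InvalidVPCId error in the question typically shows up when the zone and the VPC live in different accounts rather than different regions, in which case the zone owner has to authorize the association first. A rough sketch, assuming an aws.vpc_owner provider alias for the account that owns the VPC (the alias and resource names here are illustrative):

# Run in the account that owns the private hosted zone
resource "aws_route53_vpc_association_authorization" "auth" {
  zone_id = "${aws_route53_zone.analytics.zone_id}"
  vpc_id  = "${var.vpcid}"
}

# Run in the account that owns the VPC
resource "aws_route53_zone_association" "cross_account" {
  provider = "aws.vpc_owner"
  zone_id  = "${aws_route53_vpc_association_authorization.auth.zone_id}"
  vpc_id   = "${aws_route53_vpc_association_authorization.auth.vpc_id}"
}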
First, check whether you are running the latest Terraform version.
Second, your code differs from the sample:
data "aws_route53_zone" "selected" {
name = "test.com."
private_zone = true
}
resource "aws_route53_record" "www" {
zone_id = "${data.aws_route53_zone.selected.zone_id}"
name = "www.${data.aws_route53_zone.selected.name}"
type = "A"
ttl = "300"
records = ["10.0.0.1"]
}
The error code you're getting is because either your user/role doesn't have the necessary VPC-related permissions or you are using the wrong VPC id.
I'd suggest you double-check the VPC id you are using, potentially using the aws_vpc data source to fetch it:
# Assuming you use the "Name" tag on the VPC resource to identify your VPCs
variable "vpc_name" {}

data "aws_vpc" "selected" {
  tags {
    Name = "${var.vpc_name}"
  }
}

resource "aws_route53_zone" "analytics" {
  vpc_id = "${data.aws_vpc.selected.id}"
  name   = "data.int.example.com"
}
You also want to check that your user/role has the necessary VPC-related permissions. For this you'll probably want all of the permissions listed in the docs: