I'm facing an issue importing existing resources using the S3 module. I can import one bucket fine:
terraform import 'module.s3_bucket.aws_s3_bucket.this[0]' bucket-1
But when I try to import another bucket:
terraform import 'module.s3_bucket.aws_s3_bucket.this[0]' bucket-2
I get the error below:
Error: Resource already managed by Terraform
I know it is an indexing issue, but how do I resolve it? I have 50+ buckets. My configuration:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.6.1"
bucket = "bucket-1"
}
Update 1:
I managed to import multiple buckets using for_each, like this:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.6.1"
for_each = local.s3_buckets
bucket = each.key
}
and my locals.tf:
locals {
  s3_buckets = {
    bucket-1 = {}
  }
}
but even after a successful import, Terraform still wants to create the existing buckets:
Plan: 2 to add, 0 to change, 0 to destroy.
Update 2:
I was finally able to import the buckets by including the for_each key in the resource address, as below:
terraform import 'module.s3_bucket["bucket-1"].aws_s3_bucket.this[0]' bucket-1
terraform import 'module.s3_bucket["bucket-2"].aws_s3_bucket.this[0]' bucket-2
Thanks !!
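With 50+ buckets, a shell loop over the same command pattern saves typing each import by hand. A minimal sketch, assuming every key in local.s3_buckets matches the real bucket name:

for b in bucket-1 bucket-2; do   # list all bucket names here
  terraform import "module.s3_bucket[\"$b\"].aws_s3_bucket.this[0]" "$b"
done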
Related
I am trying to import an existing GCP compute instance with the terraform import command.
But I am encountering an error that says the resource does not exist when I run the import command:
terraform import google_compute_instance.tf-instance-2 my_project_id
google_compute_instance.tf-instance-2: Import prepared!
Prepared google_compute_instance for import
google_compute_instance.tf-instance-2: Refreshing state... [id=projects/qwiklabs-gcp-02-67a8ccc33dba/zones/us-central1-a/instances/qwiklabs-gcp-02-67a8ccc33dba]
╷
│ Error: Cannot import non-existent remote object
│
│ While attempting to import an existing object to "google_compute_instance.tf-instance-2", the provider detected that no object exists with the given id. Only pre-existing objects can be imported; check that the id is correct and that it is
│ associated with the provider's configured region or endpoint, or use "terraform apply" to create a new remote object for this resource.
╵
But when I list the available gcloud compute instances, tf-instance-2 (the instance I am trying to import) is there:
NAME: tf-instance-1
ZONE: us-central1-a
MACHINE_TYPE: n1-standard-1
PREEMPTIBLE:
INTERNAL_IP: 10.128.0.3
EXTERNAL_IP: 34.121.38.65
STATUS: RUNNING
NAME: tf-instance-2
ZONE: us-central1-a
MACHINE_TYPE: n1-standard-1
PREEMPTIBLE:
INTERNAL_IP: 10.128.0.2
EXTERNAL_IP: 35.184.192.60
STATUS: RUNNING
The instances that I am trying to import are automatically created by GCP's codelabs.
My main.tf consists of only 3 blocks: terraform, the google provider, and the google_compute_instance resource.
Things I have tried:
Changing the versions of Terraform and the google provider
terraform init and terraform init -reconfigure before running the import commands.
Making sure all the attributes of the instances are in Terraform.
main.tf file:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.8.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

resource "google_compute_instance" "tf-instance-2" {
  name = "tf-instance-2"
  # id = "4193295884192005746"
  project      = var.project_id
  zone         = var.zone
  machine_type = "n1-standard-1"

  labels = {
    "goog-dm" = "qldm-10079641-937281f7192921b3"
  }

  boot_disk {
    initialize_params {
      image = "debian-10-buster-v20220118"
    }
  }

  network_interface {
    network = "default"
    access_config {
    }
  }

  allow_stopping_for_update = true

  metadata_startup_script = <<-EOT
    #!/bin/bash
  EOT
}
According to the import documentation for google_compute_instance:
Instances can be imported using any of these accepted formats:
$ terraform import google_compute_instance.default projects/{{project}}/zones/{{zone}}/instances/{{name}}
$ terraform import google_compute_instance.default {{project}}/{{zone}}/{{name}}
$ terraform import google_compute_instance.default {{name}}
name would probably be easiest here, and so we can modify the import command to target it accordingly:
terraform import google_compute_instance.tf-instance-2 tf-instance-2
Use
$ terraform import google_compute_instance.tf-instance-2 {{my_project_id}}/{{zone}}/tf-instance-2
as shown in the documentation:
$ terraform import google_compute_instance.default projects/{{project}}/zones/{{zone}}/instances/{{name}}
I encountered the same issue, with an initial Terraform API call resulting in status code 409, Error: Error creating Service: googleapi: Error 409: Requested entity already exists. Thinking that if it already exists, I should import it, I was then presented with OP's error Error: Cannot import non-existent remote object.
In the case of Cloud Monitoring services, the name is far from simple. To find it, I had to navigate to Cloud Monitoring services, select the three dots at the end of the relevant service's row, choose Edit Display Name, and then copy the name out of the displayed JSON. The name wound up having the form, projects/<project_number>/services/wl:<project_name>-zone-<zone>-<cluster_name>-<namespace>-Deployment-<deployment_name> (this is for a GKE Workload).
Landed here, and Matt's earlier reply gave us the hint.
To add some value: if you're using Cloud Shell, be aware that it is launched against a specific project, so if you want to import cross-project resources you need to add the projects/{{project}}/ or {{project}}/ prefix.
Pretty obvious, but it gave us some headaches; hope it helps someone.
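For example, a minimal sketch assuming a hypothetical project ID other-project and the zone from the instance listing above:

terraform import google_compute_instance.tf-instance-2 projects/other-project/zones/us-central1-a/instances/tf-instance-2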
How are you?
I'm trying to execute a SageMaker job but I get this error:
ClientError: Failed to download data. Cannot download s3://pocaaml/sagemaker/xsell_sc1_test/model/model_lgb.tar.gz, a previously downloaded file/folder clashes with it. Please check your s3 objects and ensure that there is no object that is both a folder as well as a file.
I do have that model_lgb.tar.gz at that S3 path.
This is my code:
import boto3
import sagemaker
from time import gmtime, strftime
from sagemaker import get_execution_role
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

project_name = 'xsell_sc1_test'
s3_bucket = "pocaaml"
prefix = "sagemaker/" + project_name
account_id = "029294541817"
s3_bucket_base_uri = "{}{}".format("s3://", s3_bucket)
dev = "dev-{}".format(strftime("%y-%m-%d-%H-%M", gmtime()))
region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))

# Get a SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()

boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")  # is this needed?
sagemaker_session = sagemaker.session.Session(
    boto_session=boto_session, sagemaker_client=sagemaker_boto_client
)

sklearn_processor = SKLearnProcessor(
    framework_version="0.23-1", role=role, instance_type='ml.m5.4xlarge', instance_count=1
)

PREPROCESSING_SCRIPT_LOCATION = 'funciones_altas.py'

preprocessing_input_code = sagemaker_session.upload_data(
    PREPROCESSING_SCRIPT_LOCATION,
    bucket=s3_bucket,
    key_prefix="{}/{}".format(prefix, "code"),
)
preprocessing_input_data = "{}/{}/{}".format(s3_bucket_base_uri, prefix, "data")
preprocessing_input_model = "{}/{}/{}".format(s3_bucket_base_uri, prefix, "model")
preprocessing_output = "{}/{}/{}/{}/{}".format(s3_bucket_base_uri, prefix, dev, "preprocessing", "output")

processing_job_name = project_name.replace("_", "-") + "-preprocess-{}".format(strftime("%d-%H-%M-%S", gmtime()))

sklearn_processor.run(
    code=preprocessing_input_code,
    job_name=processing_job_name,
    inputs=[ProcessingInput(input_name="data",
                            source=preprocessing_input_data,
                            destination="/opt/ml/processing/input/data"),
            ProcessingInput(input_name="model",
                            source=preprocessing_input_model,
                            destination="/opt/ml/processing/input/model")],
    outputs=[
        ProcessingOutput(output_name="output",
                         destination=preprocessing_output,
                         source="/opt/ml/processing/output")],
    wait=False,
)

preprocessing_job_description = sklearn_processor.jobs[-1].describe()
and in funciones_altas.py I'm using ohe_altas.tar.gz, not model_lgb.tar.gz, which makes this error look really strange.
Can you help me?
It looks like you are using the SageMaker-generated execution role, and the error is related to S3 permissions.
Here are a couple of things you can do:
Make sure the policies attached to the role grant access to your bucket.
Check whether the objects in your bucket are encrypted; if so, also add a KMS policy to the role you are attaching to the job: https://aws.amazon.com/premiumsupport/knowledge-center/s3-403-forbidden-error/
You can always create your own role as well and pass its ARN to the code to run the processing job.
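As a quick check from the same notebook, you can list what actually exists under the model prefix with the s3_client the question already creates. A minimal sketch using the bucket and prefix from the question (an AccessDenied error here points at the role policy, while a key that exists both as .../model and .../model/... points at the file/folder clash from the error message):

import boto3

s3_client = boto3.client("s3")
# List every key under the processing job's model prefix
resp = s3_client.list_objects_v2(Bucket="pocaaml", Prefix="sagemaker/xsell_sc1_test/model")
for obj in resp.get("Contents", []):
    print(obj["Key"])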
I have created an AWS security group using the Terraform-provided AWS module terraform-aws-modules/security-group/aws//modules/web. Below is the snippet of code used; the resource was created properly.
module "app_security_group" {
source = "terraform-aws-modules/security-group/aws//modules/web"
version = "3.17.0"
name = "web-server-sg"
description = "Security group for web-servers with HTTP ports open within VPC"
vpc_id = module.vpc.vpc_id
ingress_cidr_blocks = module.vpc.public_subnets_cidr_blocks
}
But I am not able to import it using the command below:
terraform import -var aws_region=us-east-1 -state-out=us-east-1-recover.terraform.tfstate module.app_security_group.aws_security_group.web-server-sg sg-01c3b636f23c07ed0
I am getting this error:
Error: resource address "module.app_security_group.aws_security_group.this" does not exist in the configuration.
Before importing this resource, please create its configuration in module.app_security_group. For example:
resource "aws_security_group" "web-server-sg" {
# (resource arguments)
}
Try this command
terraform import -var aws_region=us-east-1 -state-out=us-east-1-recover.terraform.tfstate module.app_security_group.web-server-sg sg-01c3b636f23c07ed0
Also note that when you created the resource, it should have already been added to the state file.
Following the answers to the question Load S3 Data into AWS SageMaker Notebook, I tried to load data from an S3 bucket into a SageMaker Jupyter Notebook.
I used this code:
import pandas as pd
bucket='my-bucket'
data_key = 'train.csv'
data_location = 's3://{}/{}'.format(bucket, data_key)
pd.read_csv(data_location)
I replaced 'my-bucket' with the ARN (Amazon Resource Name) of my S3 bucket (e.g. "arn:aws:s3:::name-of-bucket") and replaced 'train.csv' with the CSV filename stored in the S3 bucket. Other than that, I did not change anything at all. What I got was this ValueError:
ValueError: Failed to head path 'arn:aws:s3:::name-of-bucket/name_of_file_V1.csv': Parameter validation failed:
Invalid bucket name "arn:aws:s3:::name-of-bucket": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$" or be an ARN matching the regex "^arn:(aws).*:s3:[a-z\-0-9]+:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-]{1,63}$|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9\-]{1,63}$"
What did I do wrong? Do I have to modify the name of my S3 bucket?
The path should be:
data_location = 's3://{}/{}'.format(bucket, data_key)
where bucket is the <bucket-name>, not the ARN. For example, bucket = 'my-bucket-333222'.
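Putting it together, a minimal sketch using the bucket and file names from the error message (reading s3:// paths with pandas also requires the s3fs package in the notebook kernel):

import pandas as pd

bucket = 'name-of-bucket'          # plain bucket name, not the ARN
data_key = 'name_of_file_V1.csv'   # object key within the bucket
data_location = 's3://{}/{}'.format(bucket, data_key)
df = pd.read_csv(data_location)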
So I want to start importing the existing infrastructure from an AWS account and I have this simple code in a main.tf file:
provider "aws" {
shared_credentials_file = "$HOME/.aws/credentials"
profile = "profile_I_want"
region = "us-east-1"
}
resource "aws_ecs_cluster" "my_cluster" {
# name = "my-cluster"
}
And my credentials file for AWS has two profiles + the default one set:
[default]
aws_secret_access_key = secretkey2
aws_access_key_id = accesskey2
[another_profile]
aws_secret_access_key = secretkey1
aws_access_key_id = accesskey1
[profile_I_want]
aws_secret_access_key = secretkey2
aws_access_key_id = accesskey2
Note: another_profile and profile_I_want correspond to different AWS accounts.
These are the versions I'm working with:
Terraform v0.12.28
+ provider.aws v2.70.0
And when I execute terraform import aws_ecs_service.my_service my-service, the following error shows up:
aws_ecs_cluster.my_cluster: Refreshing state... [id=arn:aws:ecs:us-east-1:another_profile_ID:cluster/my-cluster]
Error: Cannot import non-existent remote object
Notice the another_profile_ID account ID in the ARN.
So these are my questions:
Terraform is selecting another_profile by default at some point, and I don't know how to change that. Can I import infrastructure from the profile_I_want account instead?
Can I import infrastructure when there's no previous .tfstate file in the directory?
I think I found a solution, at least in my case: it seems terraform import takes the values of the environment variables AWS_ACCESS_KEY and AWS_SECRET_KEY, so I just changed those two variables while working on this project.
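A minimal sketch of the same idea with the standard provider environment variables (the AWS provider also honors AWS_PROFILE and AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY; the profile and cluster names are the ones from the question):

# Point Terraform's AWS provider at the right account before importing
export AWS_PROFILE=profile_I_want
terraform import aws_ecs_cluster.my_cluster my-cluster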