I'm trying to execute a query on my table in Amazon Athena, but I can't run any query. I get this error message:
Before you run your first query, you need to set up a query result location in Amazon S3.
Your query has the following error(s):
No output location provided. An output location is required either through the Workgroup result configuration setting or as an API input. (Service: AmazonAthena; Status Code: 400; Error Code: InvalidRequestException; Request ID: b6b9aa41-20af-4f4d-91f6-db997e226936)
So I'm trying to add a workgroup, but I run into this problem:
'Error: error creating Athena WorkGroup: InvalidRequestException: primary workGroup could not be created
{
RespMetadata: {
StatusCode: 400,
RequestID: "c20801a0-3c13-48ba-b969-4e28aa5cbf86"
},
AthenaErrorCode: "INVALID_INPUT",
Message_: "primary workGroup could not be created"
}
'
My code:
resource "aws_s3_bucket" "tony" {
bucket = "tfouh"
}
resource "aws_athena_workgroup" "primary" {
name = "primary"
depends_on = [aws_s3_bucket.tony]
configuration {
enforce_workgroup_configuration = false
publish_cloudwatch_metrics_enabled = true
result_configuration {
output_location = "s3://${aws_s3_bucket.tony.bucket}/"
encryption_configuration {
encryption_option = "SSE_S3"
}
}
}
}
Is there a solution for this?
This probably happens because you already have a primary workgroup, so you can't create a new one with the same name. Just create a workgroup with a different name if you want:
name = "primary2"
Marcin suggested a valid approach, but what may be closer to what you are looking for is to import the existing workgroup into the state:
terraform import aws_athena_workgroup.primary primary
Once the state knows about the already existing resource, Terraform can plan and apply any further changes to it.
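As a side note, if you are on Terraform 1.5 or newer, the same import can also be declared in configuration instead of running the CLI command (a sketch; the workgroup name doubles as the import ID):

# Terraform >= 1.5: declarative alternative to `terraform import`
import {
  to = aws_athena_workgroup.primary
  id = "primary" # name of the existing Athena workgroup
}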
I would like to create an eventarc trigger for GCS object creation. According to the Eventarc documentation, this should use the direct GCS trigger. I can create it like this, but I don't know where to put the bucket name:
resource "google_eventarc_trigger" "upload" {
name = "upload"
location = "europe-west1"
matching_criteria {
attribute = "type"
value = "google.cloud.storage.object.v1.finalized"
}
destination {
workflow = google_workflows_workflow.process_file.id
}
service_account = google_service_account.workflow.email
}
When I run this example, I get the following error:
Error: Error creating Trigger: googleapi: Error 400: The request was invalid: The request was invalid: missing required attribute "bucket" in trigger.event_filters
Reading the documentation didn't help, but after reading the Creating Eventarc triggers with Terraform blog post multiple times I found the answer. The bucket can be provided as another matching_criteria block, like this:
resource "google_eventarc_trigger" "upload" {
name = "upload"
location = "europe-west1"
matching_criteria {
attribute = "type"
value = "google.cloud.storage.object.v1.finalized"
}
matching_criteria {
attribute = "bucket"
value = google_storage_bucket.uploads.name
}
destination {
workflow = google_workflows_workflow.process_file.id
}
service_account = google_service_account.workflow.email
}
How can I use the data source for aws_security_group?
I have an existing security group in my AWS account. How can I attach it to a newly created instance in my Terraform code? I am using a Terraform data source, but I am getting an error. I have pasted my code and the error below; can anyone please tell me how to resolve it?
provider "aws" {
profile = "default"
region = "us-east-2"
}
data "aws_vpc" "tesing" {
filter {
name = "tag:Name"
values = ["test-vpc"]
}
}
data "aws_security_group" "sg" {
filter {
name = "group-name"
values = ["testing"]
}
filter {
name = "vpc-id"
values = ["data.aws_vpc.testing.id"]
}
}
resource "aws_instance" "example" {
ami = "ami-03657b56516ab7912"
instance_type = "t2.micro"
vpc_security_group_ids = ["data.aws_security_group.sg.id"]
}
output "ipddress" {
value = aws_instance.example.public_ip
}
I am getting the error below. Can you please help me resolve it?
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.aws_security_group.sg: Refreshing state...
data.aws_vpc.tesing: Refreshing state...
Error: InvalidParameterValue: vpc-id
status code: 400, request id: 22e0f8c9-2265-4077-b271-6231b4787db1
Error: no matching VPC found
How do I resolve this?
First, you have a spelling mistake:
data "aws_vpc" "tesing"
It should be:
data "aws_vpc" "testing"
Second,
values = ["data.aws_vpc.testing.id"]
should be:
values = [data.aws_vpc.testing.id]
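With both fixes applied, the data sources and the instance would look roughly like this (a sketch using the same names and AMI as in the question):

data "aws_vpc" "testing" {
  filter {
    name   = "tag:Name"
    values = ["test-vpc"]
  }
}

data "aws_security_group" "sg" {
  filter {
    name   = "group-name"
    values = ["testing"]
  }
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.testing.id]
  }
}

resource "aws_instance" "example" {
  ami                    = "ami-03657b56516ab7912"
  instance_type          = "t2.micro"
  # Reference the data source directly instead of quoting it as a string
  vpc_security_group_ids = [data.aws_security_group.sg.id]
}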
Given the following alert policy in GCP (created with terraform)
resource "google_monitoring_alert_policy" "latency_alert_policy" {
display_name = "Latency of 95th percentile more than 1 second"
combiner = "OR"
conditions {
display_name = "Latency of 95th percentile more than 1 second"
condition_threshold {
filter = "metric.type=\"custom.googleapis.com/http/server/requests/p95\" resource.type=\"k8s_pod\""
threshold_value = 1000
duration = "60s"
comparison = "COMPARISON_GT"
aggregations {
alignment_period = "60s"
per_series_aligner= "ALIGN_NEXT_OLDER"
cross_series_reducer= "REDUCE_MAX"
group_by_fields = [
"metric.label.\"uri\"",
"metric.label.\"method\"",
"metric.label.\"status\"",
"metadata.user_labels.\"app.kubernetes.io/name\"",
"metadata.user_labels.\"app.kubernetes.io/component\""
]
}
trigger {
count = 1
percent = 0
}
}
}
}
I get the following error (this is part of a Terraform project that also creates the cluster):
Error creating AlertPolicy: googleapi: Error 404: The metric referenced by the provided filter is unknown. Check the metric name and labels.
Now, this is a custom metric (emitted by a Spring Boot app with Micrometer), so the metric does not exist yet when the infrastructure is created. Does GCP have to know about a metric before an alert can be created for it? That would mean a Spring Boot app has to be deployed on the cluster and sending metrics before this policy can be created.
Am I missing something (like this should not be done in Terraform / infrastructure code)?
Interesting question. The reason for the 404 is that the resource was not found: the metric descriptor appears to be a prerequisite for the alert policy. I would create the metric descriptor first (you can use the resource below as a reference), and then go on to create the alerting policy.
This is one way you may be able to avoid the problem. Please comment if it makes sense, and if you get it working like this, share the result.
For reference (according to the Terraform docs, this can then be referenced from the alert policy):
resource "google_monitoring_metric_descriptor" "p95_latency" {
description = ""
display_name = ""
type = "custom.googleapis.com/http/server/requests/p95"
metric_kind = "GAUGE"
value_type = "DOUBLE"
labels {
key = "status"
}
labels {
key = "uri"
}
labels {
key = "exception"
}
labels {
key = "method"
}
labels {
key = "outcome"
}
}
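If the descriptor and the alert policy live in the same configuration, an explicit depends_on can make Terraform create the descriptor before the policy (a minimal sketch; whether the descriptor alone is enough for the alerting API to accept the filter is worth verifying):

resource "google_monitoring_alert_policy" "latency_alert_policy" {
  # ... same configuration as in the question ...

  # Ensure the custom metric descriptor exists before the policy is created
  depends_on = [google_monitoring_metric_descriptor.p95_latency]
}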
I've been trying to deploy AWS WorkSpaces infrastructure using Terraform. The code itself passes validate and plan, but it fails to apply.
Source:
module "networking" {
source = "../../modules/networking"
region = var.region
main_cidr_block = var.main_cidr_block
cidr_block_1 = var.cidr_block_1
cidr_block_2 = var.cidr_block_2
size = var.size
}
resource "aws_directory_service_directory" "main" {
name = var.aws_ds_name
password = var.aws_ds_passwd
size = var.size
type = "SimpleAD"
vpc_settings {
vpc_id = module.networking.main_vpc
subnet_ids = ["${module.networking.private-0}", "${module.networking.private-1}"]
}
}
resource "aws_workspaces_directory" "main" {
directory_id = aws_directory_service_directory.main.id
subnet_ids = ["${module.networking.private-0}", "${module.networking.private-1}"]
}
resource "aws_workspaces_ip_group" "main" {
name = "Contractors."
description = "Main IP access control group"
rules {
source = "10.0.0.0/16"
description = "Contractors"
}
}
Error code:
ValidationException: 2 validation errors detected: Value at 'password' failed to satisfy constraint: Member must satisfy regular expression pattern: (?=^.{8,64}$)((?=.*\d)(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[^A-Za-z0-9\s])(?=.*[a-z])|(?=.*[^A-Za-z0-9\s])(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[A-Z])(?=.*[^A-Za-z0-9\s]))^.*; Value '' at 'name' failed to satisfy constraint: Member must satisfy regular expression pattern: ^([a-zA-Z0-9]+[\\.-])+([a-zA-Z0-9])+$
status code: 400, request id: 073f6e61-775e-4ff9-a88e-e1eab97f8519
on modules/workspaces/workspaces.tf line 10, in resource "aws_directory_service_directory" "main":
10: resource "aws_directory_service_directory" "main" {
I am aware that it is a regex issue with the username/passwords, but I haven't set any users for now, and I've reset the security policies for testing reasons.
Has anyone had this issue before?
The AWS API for the directory service enforces a constraint on the password attribute, and it matches what you are seeing in that error when you run terraform apply:
Password
The password for the directory administrator. The directory creation
process creates a directory administrator account with the user name
Administrator and this password.
If you need to change the password for the administrator account, you
can use the ResetUserPassword API call.
Type: String
Pattern:
(?=^.{8,64}$)((?=.*\d)(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[^A-Za-z0-9\s])(?=.*[a-z])|(?=.*[^A-Za-z0-9\s])(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[A-Z])(?=.*[^A-Za-z0-9\s]))^.*
Required: Yes
Normally Terraform would be able to validate this with the plan or validate commands, but unfortunately the AWS provider is currently missing an appropriate ValidateFunc, so at the minute it will only fail at apply time.
If you want this to be caught at plan or validate time then you should raise a feature request for it on the provider issue tracker.
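Until then, one way to catch at least part of the constraint at plan time is a variable validation block (a sketch, assuming the password is supplied through the existing var.aws_ds_passwd variable; Terraform's RE2 regex engine does not support the lookaheads in AWS's pattern, so only the length rule is checked here):

variable "aws_ds_passwd" {
  type      = string
  sensitive = true

  validation {
    # AWS requires 8-64 characters plus a complexity rule; RE2 cannot express
    # the lookahead-based complexity pattern, so only the length is validated.
    condition     = length(var.aws_ds_passwd) >= 8 && length(var.aws_ds_passwd) <= 64
    error_message = "The directory administrator password must be between 8 and 64 characters."
  }
}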
I have a Lambda function which copies an RDS snapshot from the eu-west-3 region to eu-central-1.
Here is my code:
import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.rds.AmazonRDS;
import com.amazonaws.services.rds.AmazonRDSClientBuilder;
import com.amazonaws.services.rds.model.CopyDBSnapshotRequest;
import com.amazonaws.services.rds.model.DBSnapshot;
import com.amazonaws.services.rds.model.DescribeDBSnapshotsRequest;
import com.amazonaws.services.rds.model.DescribeDBSnapshotsResult;

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;

public class CopySnapshot implements RequestHandler<String, String> {

    public String handleRequest(String input, Context context) {
        // RDS client in the Lambda's home region (eu-west-3)
        AmazonRDS client = AmazonRDSClientBuilder.standard().build();

        DescribeDBSnapshotsRequest request = new DescribeDBSnapshotsRequest()
                .withDBInstanceIdentifier(System.getenv("DB_IDENTIFIER"))
                .withSnapshotType(System.getenv("SNAPSHOT_TYPE"))
                .withIncludeShared(true)
                .withIncludePublic(false);

        DescribeDBSnapshotsResult response = client.describeDBSnapshots(request);
        System.out.println("Found the snapshot " + response);

        // Get the latest snapshot
        List<DBSnapshot> list = response.getDBSnapshots();
        if (list.size() > 0) {
            DBSnapshot d = list.get(list.size() - 1);
            String snapshotArn = d.getDBSnapshotArn();
            System.out.println(snapshotArn);

            // RDS client in the destination (DR) region
            AmazonRDS client_dr_region = AmazonRDSClientBuilder
                    .standard()
                    .withRegion(Regions.EU_CENTRAL_1)
                    .build();

            SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yy-MM-dd-HH-mm");

            CopyDBSnapshotRequest copyDbSnapshotRequest = new CopyDBSnapshotRequest()
                    .withSourceDBSnapshotIdentifier(snapshotArn)
                    .withSourceRegion("eu-west-3")
                    .withKmsKeyId(System.getenv("OTHER_KMS_KEY_ID"))
                    .withTargetDBSnapshotIdentifier("dr-snapshot-copy" + "-" + simpleDateFormat.format(new Date()));

            DBSnapshot response_snapshot_copy = client_dr_region
                    .copyDBSnapshot(copyDbSnapshotRequest)
                    .withKmsKeyId(System.getenv("OTHER_KMS_KEY_ID"))
                    .withSourceRegion("eu-west-3");

            System.out.println("Snapshot request submitted successfully " + response_snapshot_copy);
            return "Snapshot copy request successfully submitted";
        } else {
            return "No Snapshot found";
        }
    }
}
While executing the code, it shows the error below:
{
"errorMessage": "PreSignedUrl could not be authenticated. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 7f794176-a21f-448e-acb6-8a5832925cab)",
"errorType": "com.amazonaws.services.rds.model.AmazonRDSException",
"stackTrace": [
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1726)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1381)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1127)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:784)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:745)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)",
"com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)",
"com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)",
"com.amazonaws.services.rds.AmazonRDSClient.doInvoke(AmazonRDSClient.java:9286)",
"com.amazonaws.services.rds.AmazonRDSClient.invoke(AmazonRDSClient.java:9253)",
"com.amazonaws.services.rds.AmazonRDSClient.invoke(AmazonRDSClient.java:9242)",
"com.amazonaws.services.rds.AmazonRDSClient.executeCopyDBSnapshot(AmazonRDSClient.java:1262)",
"com.amazonaws.services.rds.AmazonRDSClient.copyDBSnapshot(AmazonRDSClient.java:1234)",
"fr.aws.rds.CopySnapshot.handleRequest(CopySnapshot.java:59)",
"fr.aws.rds.CopySnapshot.handleRequest(CopySnapshot.java:19)"
]
}
From an environment variable I am fetching the KMS key ID of eu-central-1, which is the destination region for the snapshot copy.
The Lambda has full permission on KMS (for trial purposes), but it does not work.
I added an inline policy to the specific Lambda role with describe and create-grant permissions on the key (full ARN specified), but it still shows the same error.
The key is enabled, but I am not sure why I get this error.
Many thanks for your valuable feedback.
I resolved this by adding one more attribute to the request: the source region.
CopyDBSnapshotRequest copyDbSnapshotRequest = new CopyDBSnapshotRequest()
        .withSourceDBSnapshotIdentifier(snapshotArn)
        .withSourceRegion(System.getenv("SOURCE_REGION"))
        .withKmsKeyId(System.getenv("OTHER_KMS_KEY_ID"))
        .withTargetDBSnapshotIdentifier("dr-snapshot-copy" + "-" + simpleDateFormat.format(new Date()));

DBSnapshot response_snapshot_copy = client_dr_region
        .copyDBSnapshot(copyDbSnapshotRequest)
        .withKmsKeyId(System.getenv("OTHER_KMS_KEY_ID"))
        .withSourceRegion(System.getenv("SOURCE_REGION"));
And voilà, it worked.