Why does terraform refresh make changes to my tfstate file outputs?

We may be using our state file incorrectly, or we may be using terraform refresh incorrectly.
Anyway, I want to keep track of which key (primary or secondary) is being used for a particular resource. After terraform apply we get an output like this:
Outputs:
is_storage_account_primary_key_active = true
When you run terraform refresh at any point, it updates the output to
is_storage_account_primary_key_active = false
I would rather that not happen, and that the refresh command not update my outputs, as we use that value to determine which key to use the next time we run terraform plan and terraform apply.
Any ideas / suggestions would be greatly appreciated.
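For what it's worth, a hedged sketch of why this can happen: an output is an expression that Terraform recomputes whenever state is written, including during terraform refresh; it is not a stored constant. The data source and the active_key variable below are illustrative assumptions, not the asker's actual configuration:

# Hypothetical wiring: because the output is derived from a data source
# attribute, `terraform refresh` re-reads the storage account and can
# flip the output's value.
variable "active_key" {}

data "azurerm_storage_account" "example" {
  name                = "examplestorage"
  resource_group_name = "example-rg"
}

output "is_storage_account_primary_key_active" {
  value = "${data.azurerm_storage_account.example.primary_access_key == var.active_key}"
}

If the flag must survive a refresh, it needs to live somewhere refresh does not recompute, such as an input variable or a store outside Terraform, rather than in an output.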

Related

Forced region split in HBase not causing any splits

I have an EMR cluster running HBase on s3. I have a table with the following configuration
hbase.regionserver.region.split.policy = org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy
I have disabled the split policy because I want to run split commands manually.
So I have a region, say 'e85b1fe7c708500a7ae44427a76b3391', whose size is 14 GB. I issue the following split command on the region:
split 'e85b1fe7c708500a7ae44427a76b3391'
The command runs successfully in the hbase shell, but no region split occurs. Can anyone help me with this?
Even though a force-split (such as the split command in the hbase shell) would be expected to ignore DisabledRegionSplitPolicy and try splitting the region, I have found, at least in the 1.4.x versions, that the policy does stop the force-split.
You may try changing to some other policy, say ConstantSizeRegionSplitPolicy. If that doesn't fix it, the issue could be something else; in that case, enable debug-level logging in both the master and the region server hosting the target region, and troubleshoot from there.
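For reference, the split policy can be changed per table from the hbase shell; this is a sketch using the table-attributes syntax shown in the HBase reference guide, with 'mytable' as a placeholder name:

hbase> alter 'mytable', {TABLE_ATTRIBUTES => {METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}}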
The HBase book (https://hbase.apache.org/book.html#manual_region_splitting_decisions) states: "The DisabledRegionSplitPolicy policy blocks manual region splitting."
If you want to avoid automatic splits while keeping manual splits possible, you can instead increase the size threshold for auto-split to something very large.
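For example (table name and size are placeholders), the per-table split threshold can be raised from the hbase shell so that automatic splits effectively never trigger while manual splits keep working:

hbase> alter 'mytable', MAX_FILESIZE => '107374182400'

Here 107374182400 bytes is roughly 100 GB, far above the 14 GB region in the question.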

How to conditionally update a resource in Terraform

It seems to be common practice to use count on a resource to conditionally create it in Terraform, via a ternary expression.
I'd like to conditionally update an AWS Route 53 entry based on a push_to_prod variable. Meaning, I don't want to delete the resource if I'm not pushing to production; I only want to update it, or leave the CNAME value as it is.
Has anyone done something like this before in Terraform?
Currently, as it stands, interpolation syntax isn't supported in lifecycle blocks (you can read more here), which makes this harder, because otherwise you could use prevent_destroy. However, without more specifics, I am going to take my best guess on how to get you there.
I would use the allow_overwrite property on the Route 53 record and set that based on your flag. That way, if you are pushing to prod, you can set it to false, which should trigger creating a new one. I haven't tested that.
Also note that if you don't make any changes to the Route 53 resource, it shouldn't trigger any changes in Terraform to be applied. Conversely, updating any part of the record will trigger the deployment.
You may want to combine this with some lifecycle events, but I don't have enough time to dig into that specific resource and how it happens.
Two examples I can think of are:
type = "${var.push_to_prod == "true" ? "CNAME" : var.other_value}" - this will have a fixed other_value, there is no way to have terraform "ignore" the resource once it's being managed by terraform.
or
type = "${var.aws_route53_record_type}" and you can have dev.tfvars and prod.tfvars, with aws_route53_record_type defined as whatever you want for dev and CNAME for prod.
The thing is, with what you're trying to do ("I only want to update it, or leave the CNAME value as it is"), that's not how Terraform works. Terraform either manages the resource for you or it doesn't. If it's managing it, it'll update the resource based on the config you've defined in your .tf file. If it's not managing the resource, it won't modify it. It sounds like what you're really after is the second solution, where you pass two different configs from your .tfvars files into your .tf file and, based on the different configs, different resources are created. You can couple this with count to determine whether a resource should be created or not.
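To make that concrete, here is a minimal sketch in the 0.11-era interpolation syntax the question uses; the zone ID and record values are made up for illustration, and only push_to_prod and aws_route53_record_type come from the answers above:

variable "push_to_prod" {}
variable "aws_route53_record_type" {}

# Created only when pushing to prod; dev runs skip it entirely.
resource "aws_route53_record" "example" {
  count   = "${var.push_to_prod == "true" ? 1 : 0}"
  zone_id = "Z123EXAMPLE"
  name    = "app.example.com"
  type    = "${var.aws_route53_record_type}"
  ttl     = "300"
  records = ["target.example.com"]
}

Switching between terraform apply -var-file=dev.tfvars and -var-file=prod.tfvars then selects the behaviour. Note that flipping count back to 0 destroys a record Terraform previously created, which is exactly the either-managed-or-not trade-off described above.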

Creating a google_sql_user with terraform always recreates the resource

Whenever I run terraform plan using the following:
resource "google_sql_user" "users" {
name = "me"
instance = "${google_sql_database_instance.master.name}"
host = "me.com"
password = "changeme"
}
This only happens when I am running this against a Postgres instance on Google Cloud SQL.
Terraform always outputs that the plan will create a user, even though it's already created. I'm using Terraform version 0.11.1. Is there something that I'm missing? I tried setting the id value, but it still recreates itself.
It turns out that the Terraform state of the user was not being added. The cause seemed to be that when using Postgres, the host = "me.com" value is not required, but causes problems if it's left there.
Once that was removed, the Terraform state was correct.
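In other words, the configuration that stops the perpetual recreation is the question's block minus the host line (values carried over from the question):

resource "google_sql_user" "users" {
  name     = "me"
  instance = "${google_sql_database_instance.master.name}"
  password = "changeme"
}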

Importing data from Excel sheet to DynamoDB table

I am having a problem importing data from an Excel sheet into an Amazon DynamoDB table. I have the Excel sheet in an Amazon S3 bucket and I want to import data from this sheet to a table in DynamoDB.
Currently I am following Import and Export DynamoDB Data Using AWS Data Pipeline, but my pipeline is not working normally.
It gives me a WAITING_FOR_RUNNER status, and after some time the status changes to CANCELED. Please suggest what I am doing wrong, or is there any other way to import data from an Excel sheet to a DynamoDB table?
The potential reasons are as follows:
Reason 1:
If your pipeline is in the SCHEDULED state and one or more tasks appear stuck in the WAITING_FOR_RUNNER state, ensure that you set a valid value for either the runsOn or workerGroup fields for those tasks. If both values are empty or missing, the task cannot start because there is no association between the task and a worker to perform the tasks. In this situation, you've defined work but haven't defined what computer will do that work. If applicable, verify that the workerGroup value assigned to the pipeline component is exactly the same name and case as the workerGroup value that you configured for Task Runner.
Reason 2:
Another potential cause of this problem is that the endpoint and access key provided to Task Runner are not the same as those for the AWS Data Pipeline console or the computer where the AWS Data Pipeline CLI tools are installed. You might have created new pipelines with no visible errors, but Task Runner polls the wrong location due to the difference in credentials, or polls the correct location with insufficient permissions to identify and run the work specified by the pipeline definition.
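Separately, since the question asks whether there is any other way: for a one-off load, a small script can bypass Data Pipeline entirely. This is a hedged sketch, assuming pandas (with openpyxl) and boto3 are installed; the bucket, key, and table names are placeholders, and the sheet's column headers are assumed to match the table's attribute names:

import boto3
import pandas as pd
from decimal import Decimal

# Pull the workbook down from S3 and read it into a DataFrame.
s3 = boto3.client("s3")
s3.download_file("my-bucket", "data/items.xlsx", "/tmp/items.xlsx")
df = pd.read_excel("/tmp/items.xlsx")

# batch_writer() batches put_item calls and retries unprocessed items.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("my-table")
with table.batch_writer() as batch:
    for record in df.to_dict(orient="records"):
        # DynamoDB rejects Python floats; convert numerics to Decimal.
        item = {k: Decimal(str(v)) if isinstance(v, float) else v
                for k, v in record.items()}
        batch.put_item(Item=item)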

AWS DynamoDB resource not found exception

I have a problem connecting to DynamoDB. I get this exception:
com.amazonaws.services.dynamodb.model.ResourceNotFoundException: Requested resource not found (Service: AmazonDynamoDB; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: ..
But I do have the table, and the region is correct.
From the docs, it's either that you don't have a table with that name, or it is in CREATING status.
I would double-check to verify that the table does in fact exist, in the correct region, and that you're using an access key that can reach it.
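A quick way to verify all three at once is a describe-table call made with the same credentials and region as the application; a sketch with boto3, where the table and region names are placeholders:

import boto3

client = boto3.client("dynamodb", region_name="us-east-1")
print(client.meta.endpoint_url)  # confirms which regional endpoint is being hit
# Raises ResourceNotFoundException if no such table exists in this region.
resp = client.describe_table(TableName="my-table")
print(resp["Table"]["TableStatus"])  # e.g. ACTIVE, or CREATING (which also causes the error)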
My problem was a silly one, but maybe someone has the same... I had recently changed the default AWS credentials (~/.aws/credentials); I was testing in another account and forgot to roll back the values to the regular account.
I spent 1 day researching the problem in my project and now I should repay a debt to humanity and reduce the entropy of the universe a little.
Usually, this message says that your client can't reach a table in your DB.
You should check the following things:
1. Your database is running.
2. Your accessKey and secretKey are valid for the database.
3. Your DB endpoint is valid and contains the correct protocol ("http://" or "https://"), the correct hostname, and the correct port (see the sketch after this list).
4. Your table was created in the database.
5. Your table was created in the database in the same region that you set as a parameter in your credentials. (This one is optional, because some database environments, e.g. Testcontainers Dynalite, don't validate the region, so any non-empty region value is accepted.)
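As an illustration of point 3, this is how a boto3 client might be pointed at a local stand-in such as Dynalite; the port and dummy credentials below are placeholders (Dynalite listens on 4567 by default):

import boto3

client = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:4567",  # protocol, hostname and port must all match
    region_name="us-east-1",               # any non-empty value is accepted locally
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)
print(client.list_tables()["TableNames"])  # your table must appear here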
In my case the problem was that I couldn't save and load data from a table in tests, with DynamoDB substituted by Testcontainers and Dynalite. I found out that in our project tables are created by a Spring component marked with the @Component annotation. And in tests we use a global setting to load components lazily, so our component didn't load by default because nothing called it explicitly in the test. ¯\_(ツ)_/¯
If the DynamoDB table is in a different region, make sure to set it before initialising DynamoDB:
AWS.config.update({ region: "your-dynamoDB-region" });
This works for me :)
Always ensure that you do one of the following:
1. The right default region is set up in the AWS CLI configuration files on all the servers and development machines that you are working on.
2. The best choice is to always specify these constants explicitly in a separate class/config in your project. Always import this in code and use it in the boto3 calls. This will provide flexibility if you were to add or change things based on enterprise requirements.
If your resources are like mine and all over the place, you can define the region_name when you're creating the resource.
I do this for all my instantiations as it forces me to think about what I'm putting/calling where.
boto3.resource("dynamodb", region_name='us-east-2')
I was getting this issue in my .NET Core application.
The following fixed the issue for me, in the Startup class, in the ConfigureServices method:
services.AddDefaultAWSOptions(
    new AWSOptions
    {
        Region = RegionEndpoint.GetBySystemName("eu-west-2")
    });
I got this error from a Lambda: lifecycleIteration=0 lambda handler returned an error: ResourceNotFoundException: Requested resource not found
I spent a week fixing the issue.
The root cause, and the steps taken to find and fix it, are described in the GitHub issue thread below:
https://github.com/soto-project/soto/issues/595