STANDARD network tier is not supported for global address - google-cloud-platform

I'd like to add an A-type DNS record on GCP with the following Terraform code:
data "google_dns_managed_zone" "env_dns_zone" {
name = "env-zone"
}
resource "google_compute_global_address" "argo_events_webhook" {
name = "argo-events-webhook"
}
/*
resource "google_dns_record_set" "argo-events-webhook" {
name = "argo-events-webhook.${data.google_dns_managed_zone.env_dns_zone.dns_name}"
managed_zone = data.google_dns_managed_zone.env_dns_zone.name
rrdatas = [google_compute_global_address.argo_events_webhook.address]
ttl = 600
type = "A"
}
*/
(The commented-out part is not causing the error, but it may be relevant as it shows more about what I want to achieve.)
But this yields the following error message ...
...
module.gke.google_compute_global_address.argo_events_webhook: Creating...
Error: Error creating GlobalAddress: googleapi: Error 400: STANDARD network tier (the project's default network tier) is not supported: STANDARD network tier is not supported for global address., badRequest
... for which I can't find more information. Does somebody have an idea how to solve this?
What I find confusing is that other A records have already been added this way, and my Terraform code is copy-pasted from their corresponding Terraform code (with only the names adjusted).

The Standard network tier doesn't use Google's global fiber network; it uses the "standard internet", local to the region. A global address is globally reachable, so you need the Premium network tier to use this feature.
More details here.
In your case, you have to update the project's default network tier to Premium. You can achieve this with Terraform:
resource "google_compute_project_default_network_tier" "default" {
network_tier = "PREMIUM"
}
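Alternatively, if a regional external address is enough for your webhook (for example, behind a regional rather than a global load balancer), you can keep the STANDARD tier and reserve a regional address instead. A minimal sketch, with the region chosen purely for illustration; the DNS record would then reference this address:

resource "google_compute_address" "argo_events_webhook" {
  name         = "argo-events-webhook"
  region       = "europe-west1" # illustrative; use your own region
  network_tier = "STANDARD"
}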

Related

GCP BigTable Metrics - what do 404 requests mean?

We switched to Bigtable some time ago, and since then there have been a number of "404 requests" and also a high number of errors in the GCP metrics console.
We see no errors in our logs, and even data storage/retrieval seems to work as expected.
What is the cause of these errors, and how can we find out what is causing them?
As mentioned previously, 404 means the resource was not found. The relevant resource here is the Bigtable table (which could mean that either the instance id or the table id is misconfigured in your application).
I'm guessing that you are looking at the metrics under APIs & Services > Cloud Bigtable API. These metrics show the response code from the Cloud Bigtable service. You should be able to see this error rate under Monitoring > Metrics Explorer > metric:bigtable.googleapis.com/server/error_count, grouping by instance, method, error_code and app_profile. This will tell you which instance and which RPC are causing the errors, which lets you grep your source code for incorrect usages.
A significantly more complex approach is to install an interceptor in the Bigtable client that:
dumps the resource name of each RPC
logs the stack trace of the caller once you identify the problematic table name
Something along these lines:
import com.google.cloud.bigtable.data.v2.BigtableDataSettings;
import com.google.cloud.bigtable.data.v2.stub.EnhancedBigtableStubSettings;
import com.google.common.collect.ImmutableList;
import com.google.protobuf.Descriptors.FieldDescriptor;
import com.google.protobuf.Message;
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall;
import io.grpc.MethodDescriptor;
import java.util.concurrent.ConcurrentHashMap;

BigtableDataSettings.Builder builder = BigtableDataSettings.newBuilder()
    .setProjectId("...")
    .setInstanceId("...");

// Remember which table names have already been logged.
ConcurrentHashMap<String, Boolean> seenTables = new ConcurrentHashMap<>();

builder.stubSettings().setTransportChannelProvider(
    EnhancedBigtableStubSettings.defaultGrpcTransportProviderBuilder()
        .setInterceptorProvider(() -> ImmutableList.of(new ClientInterceptor() {
          @Override
          public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
              MethodDescriptor<ReqT, RespT> methodDescriptor, CallOptions callOptions,
              Channel channel) {
            return new ForwardingClientCall.SimpleForwardingClientCall<ReqT, RespT>(
                channel.newCall(methodDescriptor, callOptions)) {
              @Override
              public void sendMessage(ReqT message) {
                // Every Bigtable data RPC carries the fully qualified table name.
                Message protoMessage = (Message) message;
                FieldDescriptor desc = protoMessage.getDescriptorForType()
                    .findFieldByName("table_name");
                if (desc != null) {
                  String tableName = (String) protoMessage.getField(desc);
                  if (seenTables.putIfAbsent(tableName, true) == null) {
                    System.out.println("Found new tableName: " + tableName);
                  }
                  if ("projects/my-project/instances/my-instance/tables/my-misspelled-table".equals(
                      tableName)) {
                    // Print a stack trace to find the caller using the bad table id.
                    new RuntimeException(
                        "Fake error to get caller location of misspelled table id").printStackTrace();
                  }
                }
                delegate().sendMessage(message);
              }
            };
          }
        }))
        .build()
);
Google Cloud Support here.
Without more insight I won't be able to provide valid information about this 404 issue.
The issue must be either a typo or a configuration problem, but I cannot confirm that with the shared data.
In order to provide more meaningful support, I would suggest you open a Public Issue Tracker issue or a Google Cloud Support ticket.

How can I avoid "IN_USE_ADDRESSES" errors when starting multiple Dataflow jobs from the same template?

I have created a Dataflow template which allows me to import data from a CSV file in Cloud Storage into BigQuery. I use a Cloud Function for Firebase to create jobs from this template at a certain time every day. This is the code in the function (with some irrelevant parts removed):
const filePath = object.name?.replace(".csv", "");

// Exit function if file changes are in temporary or staging folder
if (
  filePath?.includes("staging") ||
  filePath?.includes("temp") ||
  filePath?.includes("templates")
)
  return;

const dataflow = google.dataflow("v1b3");
const auth = await google.auth.getClient({
  scopes: ["https://www.googleapis.com/auth/cloud-platform"],
});

let request = {
  auth,
  projectId: process.env.GCLOUD_PROJECT,
  location: "asia-east1",
  gcsPath: "gs://my_project_bucket/templates/csv_to_bq",
  requestBody: {
    jobName: `csv-to-bq-${filePath?.replace(/\//g, "-")}`,
    environment: {
      tempLocation: "gs://my_project_bucket/temp",
    },
    parameters: {
      input: `gs://my_project_bucket/${object.name}`,
      output: biqQueryOutput,
    },
  },
};

return dataflow.projects.locations.templates.launch(request);
This function is triggered every time a file is written in Cloud Storage. I am working with sensors, so I have to import at least 89 different data sets, i.e. different CSV files, within 15 minutes.
The whole process works fine if there are only 4 jobs working at the same time. However, when the function tried to create the fifth job, the API returned many different types of errors.
Error 1 (not exact since somehow I cannot find the error anymore):
Error Response: [400] The following quotas were exceeded: IN_USE_ADDRESSES
Error 2:
Dataflow quota error for jobs-per-project quota. Project *** is running 25 jobs.
Please check the quota usage via GCP Console.
If it exceeds the limit, please wait for a workflow to finish or contact Google Cloud Support to request an increase in quota.
If it does not, contact Google Cloud Support.
Error 3:
Quota exceeded for quota metric 'Job template requests' and limit 'Job template requests per minute per user' of service 'dataflow.googleapis.com' for consumer 'project_number:****'.
I know I can space out starting the jobs to avoid Errors 2 and 3. However, I don't know how to start jobs in a way that won't fill up the addresses. So, how do I avoid that? If I cannot, what approach should I use instead?
I had answered this in another post here - Which Compute Engine quotas need to be updated to run Dataflow with 50 workers (IN_USE_ADDRESSES, CPUS, CPUS_ALL_REGIONS ..)?.
Let me know if that helps.
This is a GCP external IP quota issue, and the best solution is not to use any public IPs for Dataflow jobs as long as your pipeline resources stay within Google Cloud networks.
To turn off public IPs for Dataflow jobs:
Create or update your subnetwork to allow Private Google Access. This is fairly simple to do in the console: VPC network > subnetworks > tick "Enable Private Google Access".
In the parameters of your Cloud Dataflow job, specify --usePublicIps=false and --network=[NETWORK] or --subnetwork=[SUBNETWORK]. (A Terraform sketch of this setup follows below.)
Note: for internal IP IN_USE errors, just change your subnet CIDR range to accommodate more addresses; for example, 20.0.0.0/16 gives you close to 65k internal IP addresses.
This way you will never exceed your internal IP range.
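For reference, here is a minimal Terraform sketch of that setup, assuming you launched the template through Terraform's google_dataflow_job resource rather than the Node googleapis client; the network, subnet range, and parameter values are illustrative:

# Subnetwork with Private Google Access, so workers without public IPs
# can still reach Google APIs (Cloud Storage, BigQuery, ...).
resource "google_compute_subnetwork" "dataflow" {
  name                     = "dataflow-subnet"
  region                   = "asia-east1"
  network                  = google_compute_network.default.id # illustrative network
  ip_cidr_range            = "10.10.0.0/16"
  private_ip_google_access = true
}

# Launch the templated job with internal worker IPs only.
resource "google_dataflow_job" "csv_to_bq" {
  name              = "csv-to-bq"
  region            = "asia-east1"
  template_gcs_path = "gs://my_project_bucket/templates/csv_to_bq"
  temp_gcs_location = "gs://my_project_bucket/temp"
  subnetwork        = google_compute_subnetwork.dataflow.self_link
  ip_configuration  = "WORKER_IP_PRIVATE"

  parameters = {
    input  = "gs://my_project_bucket/some_file.csv" # illustrative
    output = "my_dataset.my_table"                  # illustrative
  }
}

If you keep launching through the Dataflow templates.launch API as in the question, the request's environment block accepts the equivalent ipConfiguration ("WORKER_IP_PRIVATE") and subnetwork fields.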

Want to deploy a Google Cloud Run service via Terraform

I want to deploy a Google Cloud Run service using Terraform. When I try to deploy with a 'port' block to define the container port, I get an error. I have to pass the container port from the template block, but I am unable to do that. Here is my .tf file:
resource "google_cloud_run_service" "default" {
name = "cloudrun-srv"
location = "us-central1"
template {
spec {
containers {
image = "us.gcr.io/xxxxxx/xxxx.app"
port {
container_port = 19006
}
}
}
}
traffic {
percent = 100
latest_revision = true
}
}
data "google_iam_policy" "noauth" {
binding {
role = "roles/run.invoker"
members = [
"allUsers",
]
}
}
resource "google_cloud_run_service_iam_policy" "noauth" {
location = google_cloud_run_service.default.location
project = google_cloud_run_service.default.project
service = google_cloud_run_service.default.name
policy_data = data.google_iam_policy.noauth.policy_data
}
output "url" {
value = "${google_cloud_run_service.default.status[0].url}"
}
With the port block, here is the error:
And if I don't pass the port block, here is the error:
I have to pass the container port value as 19006 because my container runs on that port only. How do I pass container port 19006 instead of the default port 8080?
I had a look at the REST API exposed by Google for creating a Cloud Run service.
It starts with this entry:
POST https://{endpoint}/apis/serving.knative.dev/v1/{parent}/services
where the body contains a Service.
which contains a ServiceSpec
which contains a RevisionTemplate
which contains a RevisionSpec
which contains a Container
which contains a ContainerPort
If we now map this to the source of the Terraform provider that handles creation of Cloud Run services, we find:
https://github.com/terraform-providers/terraform-provider-google/blob/2dc3da62e3844d14fb2136e09f13ea934b038411/google/resource_cloud_run_service.go#L90
and in the comments, we find the following:
In the context of a Revision, we disallow a number of the fields of
this Container, including: name, ports, and volumeMounts. The runtime
contract is documented here:
https://github.com/knative/serving/blob/master/docs/runtime-contract.md
While name and volumeMounts seem OK to me at this point, I don't see the reason that ports are not mapped.
From this, though, the inability to specify a port through Terraform seems to be deliberate rather than an omission, while the ability to specify a port is indeed present in Google's REST API.
I was then going to suggest that you raise a defect through GitHub, but then I wondered if one was already present. I did some digging, and there is already a request for the missing feature:
Allow specifying 'container_port' and 'request_timeout' for google_cloud_run_service
My belief is that the core answer to your question then becomes:
What you are trying to do should work with Terraform and has been raised as an issue; we must wait for the resolution in the Terraform provider.
The block should be ports (i.e. plural), not port
I needed to expose port 9000 and solved it this way:
resource "google_cloud_run_service" "service" {
...
template {
spec {
containers {
...
ports {
container_port = 9000
}
}
}
}
}

Terraform - Creating Google Cloud SQL instance not working

I use the following Terraform configuration to try to create a network and a Cloud SQL MySQL 5.6 instance on Google Cloud Platform.
resource "google_compute_network" "default" {
name = "my-default-network"
auto_create_subnetworks = "true"
project = "${google_project.project.project_id}"
}
resource "google_sql_database_instance" "wordpress" {
region = "${var.region}"
database_version = "MYSQL_5_6"
project = "${google_project.project.project_id}"
settings {
tier = "db-n1-standard-1"
ip_configuration {
private_network = "${google_compute_network.default.self_link}"
}
}
}
But applying this plan gives me the following vague error. I also tried destroying the entire project and building it up again, but I get the same error.
google_sql_database_instance.wordpress: Still creating... (20s elapsed)
google_sql_database_instance.wordpress: Still creating... (30s elapsed)
google_sql_database_instance.wordpress: Still creating... (40s elapsed)
Error: Error applying plan:
1 error(s) occurred:
* google_sql_database_instance.wordpress: 1 error(s) occurred:
* google_sql_database_instance.wordpress: Error waiting for Create Instance:
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Can anyone see what I'm doing wrong here?
Edit:
When running terraform apply with TF_LOG=debug, I get the following error:
"error": {
"kind": "sql#operationErrors",
"errors": [{
"kind": "sql#operationError",
"code": "INTERNAL_ERROR"
}]
}
Edit 2: Simplified the network setup, but getting the exact same error.
A bit late to the party, but I have just had and overcome this issue. In my case it was related to using the private_network option. My suggestion is to read the documentation, paying attention to the "Network Requirements" section, and check the following (a Terraform sketch of this setup follows below):
You have the servicenetworking.googleapis.com API enabled in your project
The service account you are running Terraform with has the "Service Networking Admin" role
I found that verifying that private networking was the issue (by removing it and setting ipv4_enabled = "true" on a temporary instance) helped focus my debugging efforts.
Good luck!
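For reference, a minimal Terraform sketch of the private services access prerequisites mentioned above, assuming a sufficiently recent Google provider; the resource names and the /16 prefix are illustrative:

# Enable the Service Networking API used for private IP Cloud SQL.
resource "google_project_service" "servicenetworking" {
  project = "${google_project.project.project_id}"
  service = "servicenetworking.googleapis.com"
}

# Reserve an internal range in the VPC for Google-managed services.
resource "google_compute_global_address" "private_ip_range" {
  name          = "private-ip-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = "${google_compute_network.default.self_link}"
  project       = "${google_project.project.project_id}"
}

# Peer the VPC with the Service Networking service using that range.
resource "google_service_networking_connection" "private_vpc_connection" {
  network                 = "${google_compute_network.default.self_link}"
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = ["${google_compute_global_address.private_ip_range.name}"]
}

Making google_sql_database_instance.wordpress depend on google_service_networking_connection.private_vpc_connection (for example via depends_on) ensures the peering exists before the instance is created.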

Does AWS CPP S3 SDK support "Transfer acceleration"

I enabled "Transfer acceleration" on my bucket. But I dont see any improvement in speed of Upload in my C++ application. I have waited for more than 20 minutes that is mentioned in AWS Documentation.
Does the SDK support "Transfer acceleration" by default or is there a run time flag or compiler flag? I did not spot anything in the SDK code.
thanks
Currently, there isn't a configuration option that simply turns on Transfer Acceleration. You can, however, use the endpoint override in the client configuration to set the accelerated endpoint.
What I did to get a working Transfer Acceleration setup:
Set "Transfer Acceleration" to enabled in the bucket configuration in the AWS console.
Add the s3:PutAccelerateConfiguration permission to the IAM user that I use inside my C++ application.
Add the following code to the S3 transfer configuration (bucket_ is your bucket name; the final URL must match the one shown in the AWS console under "Transfer Acceleration"):
Aws::Client::ClientConfiguration config;
/* other configuration options */
config.endpointOverride = bucket_ + ".s3-accelerate.amazonaws.com";
Request acceleration for the bucket before the transfer (docs here):
auto s3Client = Aws::MakeShared<Aws::S3::S3Client>("Uploader",
    Aws::Auth::AWSCredentials(id_, key_), config);
Aws::S3::Model::PutBucketAccelerateConfigurationRequest bucket_accel;
bucket_accel.SetAccelerateConfiguration(
    Aws::S3::Model::AccelerateConfiguration().WithStatus(
        Aws::S3::Model::BucketAccelerateStatus::Enabled));
bucket_accel.SetBucket(bucket_);
s3Client->PutBucketAccelerateConfiguration(bucket_accel);
You can check in the detailed logs of the AWS SDK that your code is using the accelerated endpoint, and you can also check that before the transfer starts there is a call to /?accelerate (info).
What worked for me:
Enabling S3 Transfer Acceleration within the AWS console
When configuring the client, using only the accelerated endpoint service hostname:
clientConfig->endpointOverride = "s3-accelerate.amazonaws.com";
@gabry - your solution was extremely close. I think the reason it wasn't working for me was perhaps due to SDK changes since it was originally posted, as the change is relatively small. Or maybe because I am constructing put object templates for requests used with the transfer manager.
Looking through the logs (debug level), the SDK automatically concatenates the bucket used in transferManager::UploadFile() with the overridden endpoint. I was getting unresolved host errors because the requested host looked like:
[DEBUG] host: myBucket.myBucket.s3-accelerate.amazonaws.com
This way I could still keep the same S3_BUCKET macro name while only selectively calling this when instantiating a new configuration for upload.
e.g.
...
auto putTemplate = new Aws::S3::Model::PutObjectRequest();
putTemplate->SetStorageClass(STORAGE_CLASS);
transferConfig->putObjectTemplate = *putTemplate;

auto multiTemplate = new Aws::S3::Model::CreateMultipartUploadRequest();
multiTemplate->SetStorageClass(STORAGE_CLASS);
transferConfig->createMultipartUploadTemplate = *multiTemplate;

transferMgr = Aws::Transfer::TransferManager::Create(*transferConfig);
auto transferHandle = transferMgr->UploadFile(localFile, S3_BUCKET, s3File);
...