I want to create two databases in one Cloud SQL instance, but when I write the configuration in the following way it results in an error.
resources:
- name: test-instance
  type: sqladmin.v1beta4.instance
  properties:
    region: us-central
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    settings:
      tier: db-f1-micro
- name: test_db1
  type: sqladmin.v1beta4.database
  properties:
    instance: $(ref.test-instance.name)
    charset: utf8mb4
    collation: utf8mb4_general_ci
- name: test_db2
  type: sqladmin.v1beta4.database
  properties:
    instance: $(ref.test-instance.name)
    charset: utf8mb4
    collation: utf8mb4_general_ci
output:
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation
[operation-********]
- code: RESOURCE_ERROR
location: /deployments/sample-deploy/resources/test_db2
message:
'{"ResourceType":"sqladmin.v1beta4.database","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"errors":[{"domain":"global","message":"Operation
failed because another operation was already in progress.","reason":"operationInProgress"}],"message":"Operation
failed because another operation was already in progress.","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/****/instances/test-instance/databases","httpMethod":"POST"}}'
What can I do to resolve this error?
The "ResourceErrorCode" shows that this error originates from the Cloud SQL API.
The issue here is that Deployment Manager tries to run all resource modifications in parallel unless you specify a dependency between resources. Because the configuration is declarative, Deployment Manager creates the resources concurrently whether or not they are independent of each other.
In this specific case, Cloud SQL cannot create two databases on the same instance at the same time, which is why you are seeing the error message: Operation failed because another operation was already in progress.
Because of the underlying system architecture, only one pending operation is allowed per instance at any point in time; this is a limitation on concurrent administrative operations against a Cloud SQL instance.
To resolve the issue, create the two databases in sequence rather than in parallel by declaring an explicit dependency between them, as in the sketch below.
For more information on how to do so, you may consult the Deployment Manager documentation on explicit dependencies.
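A minimal sketch of how that dependency can be declared, assuming the resource names from the configuration above (the $(ref.test-instance.name) reference already makes each database depend on the instance; the metadata.dependsOn entry additionally serializes the two databases):

- name: test_db1
  type: sqladmin.v1beta4.database
  properties:
    instance: $(ref.test-instance.name)
    charset: utf8mb4
    collation: utf8mb4_general_ci
- name: test_db2
  type: sqladmin.v1beta4.database
  metadata:
    dependsOn:
    - test_db1
  properties:
    instance: $(ref.test-instance.name)
    charset: utf8mb4
    collation: utf8mb4_general_ci

With this in place, Deployment Manager waits for test_db1 to finish before it issues the create call for test_db2, so the Cloud SQL API never sees two concurrent operations on the instance.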
Related
I am trying to set up cloud replication for my master/slave database. The master resides on an external VPC and I want to set up a slave in Google Cloud SQL. I have followed the steps described here to set up the databases.
They are set up fine and I can see the initial replication taking place from my master; the data is synchronized. However, shortly afterwards replication becomes disabled. I cannot seem to start it again, and each attempt gives the following error:
The instance or operation is not in an appropriate state to handle the request.
I checked the suggestions from here, but that didn't work.
Running gcloud sql instances describe replica-instance1 gives me the following (excerpt):
state: RUNNABLE
replicaConfiguration:
  failoverTarget: false
  kind: sql#replicaConfiguration
I can update the question with more of the output if you need it, but it all looks fine. Can anyone help?
Edit:
This is from the PostgreSQL logs:
resource: {
labels: {3}
type: "cloudsql_database"
}
severity: "ERROR"
textPayload: "2023-01-20 22:10:36.354 UTC [282]: [2-1] db=postgres,user=[unknown] ERROR: data stream ended"
timestamp: "2023-01-20T22:10:36.354863Z"
}
I have the following (minimal test) CloudFormation template:
AWSTemplateFormatVersion: 2010-09-09
Description: Test template
Resources:
  TestTargetGroupListener:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
        - Type: fixed-response
          FixedResponseConfig:
            ContentType: text/plain
            MessageBody: It works
            StatusCode: 200
      Conditions:
        - Field: host-header
          HostHeaderConfig:
            Values:
              - example.com
      ListenerArn: arn:aws:elasticloadbalancing:eu-west-1:<accountid>:listener/app/<alb name>/xxx/xxx
      Priority: 10
When I attempt to deploy this I get the message:
Resource of type 'AWS::ElasticLoadBalancingV2::ListenerRule' with identifier 'Priority '10' is currently in use (Service: ElasticLoadBalancingV2, Status Code: 400, Request ID: ..., Extended Request ID: null)' already exists." (RequestToken: ..., HandlerErrorCode: AlreadyExists)
I have checked the listener and confirmed that there are currently 9 rules (plus the default last rule).
I have also tried setting the priority to 9 (in case it is 0-based) and to 11 (because I wasn't sure whether "last" counted in the priorities); however, I get the same message for each priority I tried.
This is what the listener rules look like:
I am not sure why this is happening. I used similar templates before without any issues on the same listener.
Update: I got this to work by using listener priority 4, which (surprisingly) worked and made the rule appear second in the console! I still don't understand how it works. I figured out I could use 4 when I attempted to create an ECS service in the AWS web console, attached to the same listener, and hit the same issue when selecting a listener priority; in the web console I was able to try numbers a lot quicker than via a CloudFormation template. I still do not understand what the issue was, and I still do not know how to properly diagnose this error.
Stumbled upon the same issue. You can find the existing priorities by going to the EC2 management console, selecting your load balancer, opening the Listeners tab, and clicking the desired listener ID; it opens in a new tab, and the priorities are shown on the Rules tab.
It is kind of unbelievable that AWS CloudFormation hasn't solved this yet.
I needed to add new rules and re-arrange all priorities in live environments.
After modifying my template with the new listener rules and the updated priorities, I deployed it, and it failed due to the "existing priority" error.
For those in a hurry: the only thing that worked for me in this scenario was to make a copy of my template with all listener rules deleted, deploy those changes, and then deploy my original template with the new and updated listener rules again, so that no conflicts could arise (sketched after the list below). "Inspired" by this.
Things you should not do since they will fail:
manually delete all rules and try to re-create them from the CloudFormation template.
re-create the rules manually as they are in the CFT, and then try to update the stack.
Using higher/different priorities temporarily to update in 2 steps works sometimes, and sometimes it creates conflicts.
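A rough sketch of that two-pass approach, reusing the rule from the minimal template above (the priority shown and the assumption that the rest of the stack stays untouched are illustrations, not something confirmed in the original posts):

# Pass 1: deploy a copy of the template with every ListenerRule resource removed,
# so CloudFormation deletes the existing rules and frees their priorities.
# (All other resources in the stack stay exactly as they are.)
#
# Pass 2: deploy the original template again, with the rules at their final priorities:
TestTargetGroupListener:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    Actions:
      - Type: fixed-response
        FixedResponseConfig:
          ContentType: text/plain
          MessageBody: It works
          StatusCode: 200
    Conditions:
      - Field: host-header
        HostHeaderConfig:
          Values:
            - example.com
    ListenerArn: arn:aws:elasticloadbalancing:eu-west-1:<accountid>:listener/app/<alb name>/xxx/xxx
    Priority: 10

The point of the first pass is simply that the priorities are released before the second pass tries to claim them, which avoids the AlreadyExists conflict.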
We are using Google Cloud Build as our CI/CD tool, and we use private pools to be able to connect to our database over private IPs.
Since 08/27 our builds using private pools have been stuck in Queued and are never executed or fail due to a timeout; they just hang there until we cancel them.
We have already tried without success:
Change the worker pool to another region (from southamerica-east1 to us-central1);
Recreate the worker pool with different configurations;
Recreate all triggers and connections.
Removing the worker pool configuration (i.e. running the build in the global pool) does execute the build.
cloudbuild.yaml:
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    id: Backup database
    args: ['gcloud', 'sql', 'backups', 'create', '--instance=${_DATABASE_INSTANCE_NAME}']
  - name: 'node:14.17.4-slim'
    id: Migrate database
    entrypoint: npm
    dir: 'build'
    args: ['...']
    secretEnv: ['DATABASE_URL']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    id: Migrate traffic to new version
    dir: 'build'
    entrypoint: bash
    args: ['-c', 'gcloud app services set-traffic ${_SERVICE_NAME} --splits ${_VERSION_NAME}=1']
availableSecrets:
  secretManager:
    - versionName: '${_DATABASE_URL_SECRET}'
      env: 'DATABASE_URL'
options:
  pool:
    name: 'projects/$PROJECT_ID/locations/southamerica-east1/workerPools/<project-id>'
our worker pool configuration:
$ gcloud builds worker-pools describe <worker-pool-id> --region=southamerica-east1 --project=<project-id>
createTime: '2021-08-30T19:35:57.833710523Z'
etag: W/"..."
name: <worker-pool-id>
privatePoolV1Config:
  networkConfig:
    egressOption: PUBLIC_EGRESS
    peeredNetwork: projects/<project-id>/global/networks/default
  workerConfig:
    diskSizeGb: '1000'
    machineType: e2-medium
state: RUNNING
uid: ...
updateTime: '2021-08-30T20:14:13.918712802Z'
This comes from my discussion last week with the Cloud Build PM... TL;DR: if you don't have a support subscription or a corporate account, you can't (for now).
In detail, if you check the first link in RJC's answer, you will see the quotas page.
If you take a closer look, you can see (with my personal account, even though I have an Organization structure) that the Concurrent Builds per worker pool quota is set to 0. That is the reason your build job queues forever.
The most annoying part is this one: tick the checkbox on the Concurrent builds per worker pool line and then click Edit to change the limit. Here is what you get.
Read it carefully: set a limit between 0 and 0.
Therefore, if you don't have a support subscription (like me), you can't use the feature with your personal account. I was able to use it with my corporate account, even though I shouldn't...
For now I don't have a solution, only this latest message from the PM:
The behaviour around quota restrictions in private pools is a recent change that we're still iterating on and appreciate the feedback to make it easier for personal accounts to try out the feature.
A build stuck in the Queued state can have the following possible causes:
Concurrency limits. Cloud Build enforces quotas on running builds for various reasons. By default, Cloud Build allows only 10 concurrent builds, while a worker pool allows 30 concurrent builds. You can also check this link for the quota limits.
Using a custom machine size. In addition to the standard machine type, Cloud Build provides four high-CPU virtual machine types to run your builds.
You are using the worker pools alpha and have too few nodes available.
Additionally, if the issue still persists, you can submit a bug to Google Cloud. I see that your colleague has already filed a public issue tracker entry at this link. In addition, if you have a free trial or paid support plan, it would be better to use it to file the issue.
I recently did a configuration where I created a Greengrass component from a recipe using the AWS console, and another where I imported the configuration from a Lambda function. Both work well when I do it through the AWS console. However, I want to be able to produce this same configuration using CloudFormation. I have read the documentation here (component version) and it says I can add a recipe file inline or supply a Lambda function using LambdaFunctionRecipeSource. However, all my attempts fail with the error
Resource handler returned message: "User: arn:aws:iam::accountIDHere:user/harisu is not
authorized to perform: null (Service: GreengrassV2, Status Code: 403, Request ID: f517f1ff-a387-
4380-8a47-bd6d41fd628e, Extended Request ID: null)"
(RequestToken: d6f8042d-687e-0afa-e75d-d80f27a7f177, HandlerErrorCode: AccessDenied)
I have, however, granted administrator access to the user harisu and ensured it has full access to the Greengrass service.
My example CloudFormation file is:
TestComponentVersion:
Type: AWS::GreengrassV2::ComponentVersion
Properties:
InlineRecipe: "---
RecipeFormatVersion: '2020-01-25'
ComponentName: com.example.HelloWorld
ComponentVersion: 1.0.0
ComponentDescription: My first AWS IoT Greengrass component.
ComponentPublisher: Amazon
ComponentConfiguration:
DefaultConfiguration:
Message: world
Manifests:
- Name: Linux
Platform:
os: linux
Lifecycle:
Run: |
python3 {artifacts:path}/hello_world.py '{configuration:/Message}'
Artifacts:
- URI: s3://DOC-EXAMPLE-BUCKET/artifacts/com.example.HelloWorld/1.0.0/hello_world.py
"
I will appreciate any help
For anyone else who comes across this when troubleshooting, refer to https://greengrassv2.workshop.aws/en/chapter4_createfirstcomp/30_step3.html. It states that this is typically caused by the recipe definition not being valid JSON or YAML. In my case this was true; I had a YAML syntax error. I can't immediately see whether that is the case here or where your error is.
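One likely pitfall, offered as an assumption rather than a confirmed diagnosis: in the template above the recipe is embedded as a multi-line double-quoted YAML string, and YAML folds the line breaks of such a scalar into spaces, so the recipe may reach the service as one long line that no longer parses. A minimal sketch of the same resource using a literal block scalar, which preserves the line breaks (the recipe content itself is unchanged from the question):

TestComponentVersion:
  Type: AWS::GreengrassV2::ComponentVersion
  Properties:
    InlineRecipe: |
      ---
      RecipeFormatVersion: '2020-01-25'
      ComponentName: com.example.HelloWorld
      ComponentVersion: 1.0.0
      ComponentDescription: My first AWS IoT Greengrass component.
      ComponentPublisher: Amazon
      ComponentConfiguration:
        DefaultConfiguration:
          Message: world
      Manifests:
        - Name: Linux
          Platform:
            os: linux
          Lifecycle:
            Run: |
              python3 {artifacts:path}/hello_world.py '{configuration:/Message}'
          Artifacts:
            - URI: s3://DOC-EXAMPLE-BUCKET/artifacts/com.example.HelloWorld/1.0.0/hello_world.py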
My Google Cloud instance (Compute Engine) was shut down automatically for an unknown reason.
Here is the event log for the instance:
{"shutdownEvent":{},"bootCounter":"6","#type":"type.googleapis.com/cloud_integrity.IntegrityEvent"}
{
insertId: "0"
jsonPayload: {
#type: "type.googleapis.com/cloud_integrity.IntegrityEvent"
bootCounter: "6"
shutdownEvent: {
}
}
logName: "projects/<xxxx>/logs/compute.googleapis.com%2Fshielded_vm_integrity"
receiveTimestamp: "2019-05-09T10:14:29.112673979Z"
resource: {
labels: {
instance_id: "<xxxx>"
project_id: "<xxxx>"
zone: "us-west1-b"
}
type: "gce_instance"
}
severity: "NOTICE"
timestamp: "2019-05-09T10:14:26.928978844Z"
}
You can have the instance re-learn its integrity policy baseline with:
gcloud compute instances update INSTANCE_NAME --shielded-learn-integrity-policy
The instance must be running and have Integrity Monitoring enabled.
I don't know if this is secure though. Read more:
https://cloud.google.com/compute/docs/instances/integrity-monitoring#updating-baseline
This seems to be a shielded VM with "integrity monitoring" enabled
... and most likely this is caused by an integrity validation failure.
You can use Stackdriver Advanced Logs Filters to get more details about the reason behind the shutdown; for example, sometimes instances are live-migrated (Live Migration), which can cause the behavior reported.
Some filters that may help you determine what caused the shutdown are:
To check for migrations, use the following entries in the Stackdriver advanced logs filter:
compute.instances.migrationOnHostMaintenance
OR
compute.instances.migrateOnHostMaintenance
For hostError, you can use the following filter:
"compute.instances.hostError"
For VM Instance Stop:
resource.type="gce_instance"
protoPayload.methodName="v1.compute.instances.stop"
I had the same problem and confirmed with GCP that my VM had been moved from one physical host to another.
Please feel free to open a case; Google customer support will help you diagnose it.