Create SQL Instance fails with "An unknown error occurred." - google-cloud-platform

I'm trying to create a Cloud SQL instance but am unable to do so.
I have tried many variations:
DB: Postgres 9.6 / 11.1, MySQL 5.9
Tier: micro, 1 vCPU / 3.75 GiB
Public network (none selected)
Via the console or via gcloud
I have an up-to-date billing account (I have other GCP services running OK). I am the project owner and am logged into the console and gcloud as such.
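For reference, a gcloud invocation equivalent to the settings shown in the log below would be something like this (the instance name is a placeholder):
$ gcloud sql instances create test-instance --database-version=POSTGRES_11 --tier=db-custom-1-3840 --region=us-central1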
Each time I see the same unhelpful, generic error message: "An unknown error occurred." See below for an example error log.
{
  insertId: "XXX"
  logName: "projects/XXXX/logs/cloudaudit.googleapis.com%2Factivity"
  protoPayload: {
    @type: "type.googleapis.com/google.cloud.audit.AuditLog"
    authenticationInfo: {
      principalEmail: "XXXX"
    }
    authorizationInfo: [
      0: {
        granted: true
        permission: "cloudsql.instances.create"
        resource: "instances/XXXX"
        resourceAttributes: {
        }
      }
    ]
    methodName: "cloudsql.instances.create"
    request: {
      @type: "type.googleapis.com/cloudsql.admin.InstancesInsertRequest"
      clientIp: "XXXX"
      resource: {
        backendType: "SECOND_GEN"
        databaseVersion: "POSTGRES_11"
        instanceName: {
          fullProjectId: "XXXX"
          instanceId: "XXXX"
        }
        instanceType: "CLOUDSQL_INSTANCE"
        region: "us-central1"
        settings: {
          activationPolicy: "ALWAYS"
          availabilityType: "ZONAL"
          backupConfig: {
            enabled: true
            replicationLogArchivingEnabled: false
            specification: "every day 07:00"
          }
          dataDiskSizeGb: "10"
          dataDiskType: "PD_SSD"
          ipConfiguration: {
            enabled: true
          }
          locationPreference: {
            gceZone: "us-central1-a"
          }
          maintenanceWindow: {
          }
          pricingPlan: "PER_USAGE"
          storageAutoResize: true
          storageAutoResizeLimit: "0"
          tier: "db-custom-1-3840"
        }
      }
    }
    requestMetadata: {
      callerIp: "XXXX"
      destinationAttributes: {
      }
      requestAttributes: {
        auth: {
        }
        time: "2019-06-04T05:46:10.255Z"
      }
    }
    resourceName: "instances/XXXX"
    response: {
      @type: "type.googleapis.com/cloudsql.admin.InstancesInsertResponse"
    }
    serviceName: "cloudsql.googleapis.com"
    status: {
      code: 2
      message: "UNKNOWN"
    }
  }
  receiveTimestamp: "2019-06-04T05:46:10.297960113Z"
  resource: {
    labels: {
      database_id: "XXXX"
      project_id: "XXXX"
      region: "us-central"
    }
    type: "cloudsql_database"
  }
  severity: "ERROR"
  timestamp: "2019-06-04T05:46:10.250Z"
}

Since, as you've said, the error message is too generic, the appropriate course of action, in my mind, is to dig into the issue more deeply by inspecting your project.
As a GCP Support employee, I recommend you open a private issue, providing your project number, in the following component:
https://issuetracker.google.com/issues/new?component=187202
Once you have created it, please let me know, and I'll start working on it as soon as possible.

I've had a similar issue trying to create a SQL instance using Terraform.
As it turns out, database instance names can't be reused immediately:
If I delete my instance, can I reuse the instance name?
Yes, but not right away. The instance name is unavailable for up to a week before it can be reused.
To solve this I had to add a random suffix to the database instance name upon creation.
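In Terraform itself this can be done with the random provider (for example, a random_id resource). The same idea as a minimal TypeScript sketch, where the helper name is mine and not part of any official API:

import { randomBytes } from "crypto";

// Hypothetical helper: append a short random suffix so a recently deleted
// instance name is never reused within the roughly one-week blackout window.
function uniqueInstanceName(base: string): string {
  const suffix = randomBytes(3).toString("hex"); // e.g. "a1b2c3"
  return `${base}-${suffix}`;
}

console.log(uniqueInstanceName("my-postgres")); // e.g. "my-postgres-a1b2c3"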

Related

Could not find a value associated with JSONKey in SecretString

I want to create an RDS cluster and a proxy with a generated credential.
However, I bumped into this error:
14:32:32 | CREATE_FAILED | AWS::RDS::DBCluster | DatabaseB269D8BB
Could not find a value associated with JSONKey in SecretString
My script is below:
const rdsCredentials: rds.Credentials = rds.Credentials.fromGeneratedSecret(dbInfos['user'], { secretName: `cdk-st-${targetEnv}-db-secret` });

const dbCluster = new rds.DatabaseCluster(this, 'Database', {
  parameterGroup,
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  credentials: rdsCredentials,
  cloudwatchLogsExports: ['slowquery', 'general', 'error', 'audit'],
  backup: backupProps,
  instances: 2,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  clusterIdentifier: dbInfos['cluster'], // clusterIdentifier
  defaultDatabaseName: dbInfos['database'], // defaultDatabaseName
  instanceProps: {
    // optional, defaults to t3.medium
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
    },
    vpc,
    securityGroups: [dbSecurityGroup],
  },
});

const proxy = new rds.DatabaseProxy(this, 'Proxy', {
  proxyTarget: rds.ProxyTarget.fromCluster(dbCluster),
  secrets: [dbCluster.secret!],
  vpc,
});
I guess this error is related to secrets: [dbCluster.secret!].
I googled around and found that this error happens when secrets are deleted.
However, I want to use the credential that was just generated for RDS.
Is that impossible? How can I fix this?
More testing
I tried another way, but it produces the error below:
/node_modules/aws-cdk-lib/aws-rds/lib/proxy.ts:239
secret.grantRead(role);
My code is here:
dbCluster.addProxy('testProxy', {
  secrets: [rdsCredentials.secret!],
  vpc
});
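A note on this second error, hedged since the full stack isn't shown: with Credentials.fromGeneratedSecret, the secret is only created later by the cluster construct, so rdsCredentials.secret is still undefined at this point, and proxy.ts:239's secret.grantRead(role) fails on it. A minimal sketch of the variant that avoids the undefined secret, assuming the cluster's generated secret is the one intended:

dbCluster.addProxy('testProxy', {
  // dbCluster.secret is the secret actually generated for the cluster;
  // rdsCredentials.secret is undefined when using fromGeneratedSecret.
  secrets: [dbCluster.secret!],
  vpc
});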

How to get rid of the BootstrapVersion parameter with cdk synth?

When generating the CloudFormation template with the AWS CDK:
cdk synth
I always get:
"Parameters": {
"BootstrapVersion": {
"Type": "AWS::SSM::Parameter::Value<String>",
...
Here is the code:
import * as cdk from 'aws-cdk-lib';
import { Stack } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as sqs from 'aws-cdk-lib/aws-sqs';

export class MyStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const queue = new sqs.Queue(this, 'Example', {
      visibilityTimeout: cdk.Duration.seconds(300)
    });
  }
}

const app = new cdk.App();
new MyStack(app, 'MyStack');
Full output (shortened in places with ...):
$ cdk synth
Resources:
  ExampleA925490C:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 300
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
    Metadata:
      aws:cdk:path: MyStack/Example/Resource
  CDKMetadata:
    Type: AWS::CDK::Metadata
    Properties:
      Analytics: v2:deflate64:H4sIAAAAAAAA/zPSM9EzUEwsL9ZNTsnWzclM0qsOLklMztYBCsUXFxbrVQeWppam6jin5YEZtSBWUGpxfmlRMljUOT8vJbMkMz+vVicvPyVVL6tYv8zQTM8YaGpWcWamblFpXklmbqpeEIQGAChZc6twAAAA
    Metadata:
      aws:cdk:path: MyStack/CDKMetadata/Default
    Condition: CDKMetadataAvailable
Conditions:
  CDKMetadataAvailable:
    Fn::Or:
      - Fn::Or:
          - Fn::Equals:
              - Ref: AWS::Region
              - af-south-1
          ...
      - Fn::Or:
          - Fn::Equals:
              - Ref: AWS::Region
              - us-west-1
      ...
Parameters:
  BootstrapVersion:
    Type: AWS::SSM::Parameter::Value<String>
    Default: /cdk-bootstrap/hnb659fds/version
    Description: Version of the CDK Bootstrap resources in this environment, automatically retrieved from SSM Parameter Store. [cdk:skip]
Rules:
  CheckBootstrapVersion:
    Assertions:
      - Assert:
          Fn::Not:
            - Fn::Contains:
                - - "1"
                  - "2"
                  - "3"
                  - "4"
                  - "5"
                - Ref: BootstrapVersion
        AssertDescription: CDK bootstrap stack version 6 required. Please run 'cdk bootstrap' with a recent version of the CDK CLI.
Here is the environment:
$ cdk doctor
ℹ️ CDK Version: 2.8.0 (build 8a5eb49)
ℹ️ AWS environment variables:
- AWS_PAGER =
- AWS_DEFAULT_PROFILE = sbxb.admin
- AWS_STS_REGIONAL_ENDPOINTS = regional
- AWS_NODEJS_CONNECTION_REUSE_ENABLED = 1
- AWS_SDK_LOAD_CONFIG = 1
ℹ️ No CDK environment variables
How do I get rid of that CloudFormation parameter?
I just want to use the CDK to create a CloudFormation template.
Later I want to use that template with Service Catalog, and I don't want the BootstrapVersion parameter to be exposed, nor do I need it.
Here is the modified code, which works:
import * as cdk from 'aws-cdk-lib';
import { DefaultStackSynthesizer, Stack } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as sqs from 'aws-cdk-lib/aws-sqs';

export class MyStack extends Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const queue = new sqs.Queue(this, 'Example', {
      visibilityTimeout: cdk.Duration.seconds(300)
    });
  }
}

const app = new cdk.App();
new MyStack(app, 'MyStack', {
  synthesizer: new DefaultStackSynthesizer({
    generateBootstrapVersionRule: false
  })
});
As mentioned in the other answer, one has to configure the DefaultStackSynthesizer with generateBootstrapVersionRule: false.
Edit: updated the answer to mention the generateBootstrapVersionRule parameter. See @Felix's answer for the code.
By default, the following is included in all templates when using the DefaultStackSynthesizer:
"Parameters": {
"BootstrapVersion": {
"Type": "AWS::SSM::Parameter::Value<String>",
"Default": "/cdk-bootstrap/hnb659fds/version",
"Description": "Version of the CDK Bootstrap resources in this environment, automatically retrieved from SSM Parameter Store. [cdk:skip]"
}
},
"Rules": {
"CheckBootstrapVersion": {
"Assertions": [
{
"Assert": {
"Fn::Not": [
{
"Fn::Contains": [
[
"1",
"2",
"3",
"4",
"5"
],
{
"Ref": "BootstrapVersion"
}
]
}
]
},
"AssertDescription": "CDK bootstrap stack version 6 required. Please run 'cdk bootstrap' with a recent version of the CDK CLI."
}
]
}
}
The BootstrapVersion parameter and the associated rule are used by the CDK to check the version of the bootstrap stack deployed in your environment. You can remove them if you are confident that your stack doesn't require bootstrapping or that you have the correct bootstrap version. The parameter isn't used anywhere else in the stack.
By default, CDK v2 uses the DefaultStackSynthesizer, so this parameter will always be included. One way to avoid this is to create a custom synthesizer with the generateBootstrapVersionRule parameter set to false (see @Felix's answer for the code). Alternatively, you can specify the LegacyStackSynthesizer when instantiating the stack to avoid creating the parameter; however, this makes a few changes in the way your stack is synthesized and how you use the bootstrap stack. A table of differences is given in the v1 documentation linked below.
CDK v1 is the opposite and uses the LegacyStackSynthesizer by default.
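For completeness, a minimal sketch of the LegacyStackSynthesizer option, reusing the MyStack class from above (illustrative only; keep in mind the synthesis differences mentioned above):

import * as cdk from 'aws-cdk-lib';
import { LegacyStackSynthesizer } from 'aws-cdk-lib';

const app = new cdk.App();
// No BootstrapVersion parameter or CheckBootstrapVersion rule is emitted,
// but asset handling differs from the default v2 synthesizer.
new MyStack(app, 'MyStack', {
  synthesizer: new LegacyStackSynthesizer(),
});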
References
https://github.com/aws/aws-cdk/issues/17942#issuecomment-992295898
https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html#bootstrapping-synthesizers (v2 documentation)
https://docs.aws.amazon.com/cdk/v1/guide/bootstrapping.html#bootstrapping-synthesizers (v1 documentation)

Why can't I run multiple tasks in AWS ECS?

I'm working on a file processing system where files can be uploaded to S3 and then processed in a container. I have been triggering ECS tasks from Lambda, passing a few environment variables.
S3 -> Lambda -> ECS
I'm running into a problem where I can't seem to run more than one task at once. If a task is already running, then any subsequent tasks get stuck in "PROVISIONING" and eventually disappear altogether.
Here is my lambda function that runs the ECS task:
const params: RunTaskRequest = {
  launchType: "FARGATE",
  cluster: "arn:aws:ecs:us-east-1:XXXXXXX:cluster/FileProcessingCluster",
  taskDefinition: "XXX",
  networkConfiguration: {
    awsvpcConfiguration: {
      subnets: [
        "subnet-XXX",
        "subnet-XXX"
      ],
      securityGroups: [
        "..."
      ],
      assignPublicIp: "DISABLED"
    }
  },
  overrides: {
    containerOverrides: [
      {
        name: "FileProcessingContainer",
        environment: [
          ...
        ]
      },
    ]
  },
};

try {
  await ecs.runTask(params).promise();
} catch (e) {
  console.error(e, e.stack)
}
I'm using the AWS CDK to create the ECS infrastructure:
const cluster = new ecs.Cluster(this, 'FileProcessingCluster', {
  clusterName: "FileProcessingCluster"
});

const taskDefinition = new ecs.FargateTaskDefinition(this, "FileProcessingTask", {
  memoryLimitMiB: 8192,
  cpu: 4096,
});

taskDefinition.addContainer("FileProcessingContainer", {
  image: ecs.ContainerImage.fromAsset("../local-image"),
  logging: new ecs.AwsLogDriver({
    streamPrefix: `${id}`
  }),
  memoryLimitMiB: 8192,
  cpu: 4096,
});
Is there something I'm missing here? Perhaps a setting related to concurrent tasks?
It turns out that I had misconfigured the subnets in the task request; this was preventing the image pull from ECR.
You can read more about it here:
ECS task not starting - STOPPED (CannotPullContainerError: “Error response from daemon request canceled while waiting for connection”)
And:
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-pull-container-error/
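For anyone hitting the same thing, here is a hedged sketch of the networking fix on the Lambda side, assuming the subnets passed in are public (placeholders kept from the question; alternatively, keep assignPublicIp: "DISABLED" and use private subnets that reach ECR through a NAT gateway or interface VPC endpoints):

import { ECS } from "aws-sdk";

const ecs = new ECS();

// Inside the Lambda handler:
const params: ECS.RunTaskRequest = {
  launchType: "FARGATE",
  cluster: "arn:aws:ecs:us-east-1:XXXXXXX:cluster/FileProcessingCluster",
  taskDefinition: "XXX",
  networkConfiguration: {
    awsvpcConfiguration: {
      subnets: ["subnet-XXX"], // public subnets in this variant
      securityGroups: ["..."],
      // Was "DISABLED" in the failing setup; a Fargate task in a public
      // subnet needs a public IP to reach ECR and pull its image.
      assignPublicIp: "ENABLED"
    }
  }
};
await ecs.runTask(params).promise();

When a task dies during PROVISIONING like this, running aws ecs describe-tasks on the stopped task usually surfaces the CannotPullContainerError in its stoppedReason field.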

Getting 400 HTTP error when creating a new cloud composer environment in GCP

When creating a composer environment in GCP, I get this error:
Http error status code: 400 Http error message: BAD REQUEST Additional errors: {"ResourceType":"gcp-types/storage-v1:storage.buckets.insert","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"errors":[{"domain":"global","message":"Invalid argument","reason":"invalid"}],"message":"Invalid argument","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/storage/v1/b","httpMethod":"POST"}}
It appears the GKE cluster is successfully created, and there is a failed k8s job named composer-agent-a62cd8ef-f7a8-4797-c732-fc15673528e8
Looking in the logs of the job's containers, I can see that a gsutil cp to a bucket named us-central1-test-airflow--a62cd8ef-bucket failed.
Looking in GCS, this bucket indeed doesn't exist.
I looked in Stackdriver and found that the creation of this bucket indeed fails. Here is a sample Stackdriver record of the failure:
{
  insertId: "59zovdfe3w3r"
  logName: "projects/analytics-analysis-1/logs/cloudaudit.googleapis.com%2Factivity"
  protoPayload: {
    @type: "type.googleapis.com/google.cloud.audit.AuditLog"
    authenticationInfo: {
      principalEmail: "234453461464@cloudservices.gserviceaccount.com"
      serviceAccountDelegationInfo: [
        0: {
          firstPartyPrincipal: {
            principalEmail: "cloud-dm@prod.google.com"
          }
        }
      ]
    }
    authorizationInfo: [
      0: {
        granted: true
        permission: "storage.buckets.create"
        resource: "projects/_/buckets/us-central1-test-airflow--a62cd8ef-bucket"
        resourceAttributes: {
        }
      }
    ]
    methodName: "storage.buckets.create"
    requestMetadata: {
      callerIp: "64.233.172.244"
      callerSuppliedUserAgent: "Google-Deployment-Manager,gzip(gfe)"
      destinationAttributes: {
      }
      requestAttributes: {
      }
    }
    resourceLocation: {
      currentLocations: [
        0: "us"
      ]
    }
    resourceName: "projects/_/buckets/us-central1-test-airflow--a62cd8ef-bucket"
    serviceName: "storage.googleapis.com"
    status: {
      code: 3
      message: "INVALID_ARGUMENT"
    }
  }
  receiveTimestamp: "2019-04-05T09:29:59.198703635Z"
  resource: {
    labels: {
      bucket_name: "us-central1-test-airflow--a62cd8ef-bucket"
      location: "us"
      project_id: "exp-project-airflow"
    }
    type: "gcs_bucket"
  }
  severity: "ERROR"
  timestamp: "2019-04-05T09:28:54.019Z"
}
I am guessing the failure to create the bucket is what is failing the entire Airflow environment creation.
How do I solve this?

Google Cloud Deployment Manager: add instances to instance group via yaml configuration

I'm trying to create an unmanaged instanceGroup with several VMs in it via a Deployment Manager configuration (YAML file).
I can easily find docs about addInstances via the Google API, but couldn't find docs about how to do this in a YAML file:
instances
instanceGroups
What properties should be included in the instances/instanceGroups resources to make it work?
The YAML below will create a Compute Engine instance, create an unmanaged instance group, and add the instance to the group.
resources:
- name: instance-1
  type: compute.v1.instance
  properties:
    zone: australia-southeast1-a
    machineType: zones/australia-southeast1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      type: PERSISTENT
      diskType: zones/australia-southeast1-a/diskTypes/pd-ssd
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/debian-9-stretch-v20180716
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
- name: ig-1
  type: compute.v1.instanceGroup
  properties:
    zone: australia-southeast1-a
    network: global/networks/default
- name: ig-1-members
  action: gcp-types/compute-v1:compute.instanceGroups.addInstances
  properties:
    project: YOUR_PROJECT_ID
    zone: australia-southeast1-a
    instanceGroup: ig-1
    instances: [ instance: $(ref.instance-1.selfLink) ]
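Assuming the configuration above is saved as config.yaml (the deployment name here is illustrative), it can then be deployed with:
$ gcloud deployment-manager deployments create ig-demo --config config.yaml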
There is no way to do this right now with Google Cloud Deployment Manager.
This was tested, and it seemed that while Deployment Manager was able to complete without issue when given the following snippet:
{
  "instances": [
    {
      "instance": string
    }
  ]
}
it did not add the instances specified, but only created the instance group.
However, Terraform seems to be able to do it: https://www.terraform.io/docs/providers/google/r/compute_instance_group.html
I think @mcourtney's answer is correct.
I just had this scenario, and I used a Python template with a YAML config to add instances to an unmanaged instance group.
Here is the snippet of the resource definition in my Python template:
{
    'name': name + '-ig-members',
    'action': 'gcp-types/compute-v1:compute.instanceGroups.addInstances',
    'properties': {
        'project': '<YOUR PROJECT ID>',
        'zone': context.properties['zone'],  # defined in the config YAML
        'instanceGroup': '<YOUR Instance Group name ( not url )>',
        'instances': [
            {
                'instance': 'projects/<PROJECT ID>/zones/<YOUR ZONE>/instances/<INSTANCE NAME>'
            }
        ]
    }
}
The reference API is documented here:
https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroups/addInstances
This is just an example; you can abstract all the hard-coded values into either the YAML configuration or variables at the top of the Python template.