Invalid ssh key entry - unrecognized format: ssh-rsa AAAA - google-cloud-platform

{ insertId: "1uvaqlxfcfvjgw" jsonPayload: { localTimestamp: "2023-01-30T11:32:57.2877Z" message: "Invalid ssh key entry - expired key: selman_keskin:ecdsa-sha2-nistp256 AAAA...= google-ssh {"userName":"selman.keskin#company.com","expireOn":"2023-01-30T11:32:49+0000"}" omitempty: null } labels: {1} logName: "projects/xxx-testproject/logs/GCEGuestAgent" receiveTimestamp: "2023-01-30T11:32:57.610158055Z" resource: {2} severity: "ERROR" sourceLocation: {3} timestamp: "2023-01-30T11:32:57.287845863Z" }
I get this error in the GCP log stream.
Is there a bug in non_windows_accounts.go?
Troubleshooting SSH errors guide:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-ssh-errors
How do I resolve the "Invalid ssh key entry - unrecognized format: ssh-rsa AAAA..." error?

Related

Connecting KafkaSource to a SASL-enabled Kafka broker of an AWS MSK cluster for Knative Eventing

We are trying to implement an event-driven architecture for our applications using Knative Eventing.
We wish to connect to an Apache Kafka cluster (AWS MSK) and have its messages flow through Knative Eventing.
Following the guide below, we have deployed a KafkaSource, but it fails to connect to the MSK brokers when SASL authentication is enabled on the MSK cluster side.
https://knative.dev/docs/eventing/sources/kafka-source/#enabling-sasl-for-kafkasources
Note: we are able to connect over plaintext with no authentication.
Please suggest a way to connect the KafkaSource to MSK brokers with SASL enabled.
The KafkaSource manifest is below:
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: kntive-groups
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9096   # MSK broker
  topics:
    - knative-input-topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-app-service
    uri: /myAppUrl
  net:
    sasl:
      enable: true
      user:
        secretKeyRef:
          name: msk-secret
          key: user
      password:
        secretKeyRef:
          name: msk-secret
          key: password
      type:
        secretKeyRef:
          name: msk-secret
          key: saslType
    tls:
      enable: false
      caCert:
        secretKeyRef:
          name: msk-secret-tf
          key: ca.crt
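For completeness, a hedged sketch of the Secret those secretKeyRefs point to (user and password values are placeholders; MSK's SASL/SCRAM listener on port 9096 normally also requires TLS, so the saslType would typically be SCRAM-SHA-512):
apiVersion: v1
kind: Secret
metadata:
  name: msk-secret
  namespace: my-ns
stringData:
  user: my-msk-username        # placeholder
  password: my-msk-password    # placeholder
  saslType: SCRAM-SHA-512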
The logs are below. (We noticed that it always throws the same error regardless of what is wrong in the config: "client has run out of available brokers to talk to". But the brokers are actually reachable.)
kubectl get kafkasource kafka-source -n my-ns -o yaml
status:
  conditions:
  - lastTransitionTime: "2023-02-03T06:10:15Z"
    message: 'kafka: client has run out of available brokers to talk to (Is your cluster reachable?)'
    reason: ClientCreationFailed
    status: "False"
    type: ConnectionEstablished
  - lastTransitionTime: "2023-02-03T06:10:15Z"
    status: Unknown
    type: Deployed
  - lastTransitionTime: "2023-02-03T06:10:15Z"
    status: Unknown
    type: InitialOffsetsCommitted
  - lastTransitionTime: "2023-02-03T06:10:15Z"
    message: 'kafka: client has run out of available brokers to talk to (Is your cluster reachable?)'
    reason: ClientCreationFailed
    status: "False"
    type: Ready
  - lastTransitionTime: "2023-02-03T06:10:15Z"
    status: "True"
    type: SinkProvided
Logs:
kubectl logs deployment.apps/kafka-controller-manager -n knative-eventing
{
"level": "info",
"ts": "2023-02-03T06:21:18.865Z",
"logger": "kafka-controller",
"caller": "client/config.go:288",
"msg": "Built Sarama config: &{Admin:{Retry:{Max:5 Backoff:100ms} Timeout:3s} Net:{MaxOpenRequests:5 DialTimeout:30s ReadTimeout:30s WriteTimeout:30s TLS:{Enable:false Config:<nil>} SASL:{Enable:true Mechanism:SCRAM-SHA-512 Version:0 Handshake:true AuthIdentity: User:maas-ml-testuser2 Password: SCRAMAuthzID: SCRAMClientGeneratorFunc:0x1861980 TokenProvider:<nil> GSSAPI:{AuthType:0 KeyTabPath: KerberosConfigPath: ServiceName: Username: Password: Realm: DisablePAFXFAST:false}} KeepAlive:0s LocalAddr:<nil> Proxy:{Enable:false Dialer:<nil>}} Metadata:{Retry:{Max:3 Backoff:250ms BackoffFunc:<nil>} RefreshFrequency:10m0s Full:true Timeout:0s AllowAutoTopicCreation:true} Producer:{MaxMessageBytes:1000000 RequiredAcks:1 Timeout:10s Compression:none CompressionLevel:-1000 Partitioner:0x17cf660 Idempotent:false Return:{Successes:true Errors:true} Flush:{Bytes:0 Messages:0 Frequency:0s MaxMessages:0} Retry:{Max:3 Backoff:100ms BackoffFunc:<nil>} Interceptors:[]} Consumer:{Group:{Session:{Timeout:10s} Heartbeat:{Interval:3s} Rebalance:{Strategy:0x2f67290 Timeout:1m0s Retry:{Max:4 Backoff:2s}} Member:{UserData:[]}} Retry:{Backoff:2s BackoffFunc:<nil>} Fetch:{Min:1 Default:1048576 Max:0} MaxWaitTime:250ms MaxProcessingTime:100ms Return:{Errors:true} Offsets:{CommitInterval:0s AutoCommit:{Enable:true Interval:1s} Initial:-2 Retention:0s Retry:{Max:3}} IsolationLevel:0 Interceptors:[]} ClientID:sarama RackID: ChannelBufferSize:256 ApiVersionsRequest:true Version:1.0.0 MetricRegistry:0xc002ca4080}",
"commit": "394f005-dirty",
"knative.dev/controller": "knative.dev.eventing-kafka.pkg.source.reconciler.source.Reconciler",
"knative.dev/kind": "sources.knative.dev.KafkaSource",
"knative.dev/traceid": "4d6b80c4-2116-4acb-b5bc-e1d074c2a380",
"knative.dev/key": "coal-dev/uat-kafka-source"
}
{
"level": "error",
"ts": "2023-02-03T06:21:19.654Z",
"logger": "kafka-controller",
"caller": "source/kafkasource.go:184",
"msg": "unable to create a kafka client",
"commit": "394f005-dirty",
"knative.dev/controller": "knative.dev.eventing-kafka.pkg.source.reconciler.source.Reconciler",
"knative.dev/kind": "sources.knative.dev.KafkaSource",
"knative.dev/traceid": "4d6b80c4-2116-4acb-b5bc-e1d074c2a380",
"knative.dev/key": "coal-dev/uat-kafka-source",
"error": "kafka: client has run out of available brokers to talk to (Is your cluster reachable?)",
"stacktrace": "knative.dev/eventing-kafka/pkg/source/reconciler/source.(*Reconciler).ReconcileKind\n\tknative.dev/eventing-kafka/pkg/source/reconciler/source/kafkasource.go:184\nknative.dev/eventing-kafka/pkg/client/injection/reconciler/sources/v1beta1/kafkasource.(*reconcilerImpl).Reconcile\n\tknative.dev/eventing-kafka/pkg/client/injection/reconciler/sources/v1beta1/kafkasource/reconciler.go:239\nknative.dev/pkg/controller.(*Impl).processNextWorkItem\n\tknative.dev/pkg#v0.0.0-20220818004048-4a03844c0b15/controller/controller.go:542\nknative.dev/pkg/controller.(*Impl).RunContext.func3\n\tknative.dev/pkg#v0.0.0-20220818004048-4a03844c0b15/controller/controller.go:491"
}
{
"level": "error",
"ts": "2023-02-03T06:21:19.655Z",
"logger": "kafka-controller",
"caller": "kafkasource/reconciler.go:302",
"msg": "Returned an error",
"commit": "394f005-dirty",
"knative.dev/controller": "knative.dev.eventing-kafka.pkg.source.reconciler.source.Reconciler",
"knative.dev/kind": "sources.knative.dev.KafkaSource",
"knative.dev/traceid": "4d6b80c4-2116-4acb-b5bc-e1d074c2a380",
"knative.dev/key": "coal-dev/uat-kafka-source",
"targetMethod": "ReconcileKind",
"error": "kafka: client has run out of available brokers to talk to (Is your cluster reachable?)",
"stacktrace": "knative.dev/eventing-kafka/pkg/client/injection/reconciler/sources/v1beta1/kafkasource.(*reconcilerImpl).Reconcile\n\tknative.dev/eventing-kafka/pkg/client/injection/reconciler/sources/v1beta1/kafkasource/reconciler.go:302\nknative.dev/pkg/controller.(*Impl).processNextWorkItem\n\tknative.dev/pkg#v0.0.0-20220818004048-4a03844c0b15/controller/controller.go:542\nknative.dev/pkg/controller.(*Impl).RunContext.func3\n\tknative.dev/pkg#v0.0.0-20220818004048-4a03844c0b15/controller/controller.go:491"
}
{
"level": "error",
"ts": "2023-02-03T06:21:19.655Z",
"logger": "kafka-controller",
"caller": "controller/controller.go:566",
"msg": "Reconcile error",
"commit": "394f005-dirty",
"knative.dev/controller": "knative.dev.eventing-kafka.pkg.source.reconciler.source.Reconciler",
"knative.dev/kind": "sources.knative.dev.KafkaSource",
"knative.dev/traceid": "4d6b80c4-2116-4acb-b5bc-e1d074c2a380",
"knative.dev/key": "coal-dev/uat-kafka-source",
"duration": 0.813508291,
"error": "kafka: client has run out of available brokers to talk to (Is your cluster reachable?)",
"stacktrace": "knative.dev/pkg/controller.(*Impl).handleErr\n\tknative.dev/pkg#v0.0.0-20220818004048-4a03844c0b15/controller/controller.go:566\nknative.dev/pkg/controller.(*Impl).processNextWorkItem\n\tknative.dev/pkg#v0.0.0-20220818004048-4a03844c0b15/controller/controller.go:543\nknative.dev/pkg/controller.(*Impl).RunContext.func3\n\tknative.dev/pkg#v0.0.0-20220818004048-4a03844c0b15/controller/controller.go:491"
}
{
"level": "info",
"ts": "2023-02-03T06:21:19.655Z",
"logger": "kafka-controller.event-broadcaster",
"caller": "record/event.go:285",
"msg": "Event(v1.ObjectReference{Kind:\"KafkaSource\", Namespace:\"coal-dev\", Name:\"uat-kafka-source\", UID:\"1b6dd5c4-539a-424a-811c-fd16a5d2468d\", APIVersion:\"sources.knative.dev/v1beta1\", ResourceVersion:\"56774522\", FieldPath:\"\"}): type: 'Warning' reason: 'InternalError' kafka: client has run out of available brokers to talk to (Is your cluster reachable?)",
"commit": "394f005-dirty"
}
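One way to verify the brokers and SASL credentials from inside the cluster, as a hedged sketch (the broker address is the one from the manifest above; image tag, username and password are placeholders):
kubectl run kcat -n my-ns --rm -it --restart=Never --image=edenhill/kcat:1.7.1 -- \
  -L -b my-cluster-kafka-bootstrap.kafka:9096 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=SCRAM-SHA-512 \
  -X sasl.username=my-msk-username \
  -X sasl.password=my-msk-password
If this lists the cluster metadata, the credentials and network path are fine and the problem is in the KafkaSource configuration; if it fails the same way, the issue is on the broker/SASL side.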

GKE cluster upgrade failed with "DeployPatch failed" error

My cluster upgrade failed:
gcloud beta container operations describe "operation-sdfsdfsdf" --zone us-central1
clusterConditions:
- canonicalCode: UNKNOWN
  message: DeployPatch failed
detail: DeployPatch failed
endTime: '2022-06-30T12:36:48.246662261Z'
error:
  code: 2
  message: DeployPatch failed
name: operation-sdfsdfsdf
operationType: UPGRADE_NODES
progress:
  metrics:
  - intValue: '7'
    name: NODES_TOTAL
  - intValue: '1'
    name: NODES_FAILED
  - intValue: '6'
    name: NODES_COMPLETE
  - intValue: '7'
    name: NODES_DONE
  - intValue: '2454'
    name: NODE_PDB_DELAY_SECONDS
selfLink: https://container.googleapis.com/v1beta1/projects/xxxxxx/locations/us-central1/operations/operation-sdfsdfsdf
startTime: '2022-06-30T10:36:14.709547456Z'
status: DONE
statusMessage: DeployPatch failed
targetLink: https://container.googleapis.com/v1beta1/projects/xxxxxx/locations/us-central1/clusters/mycluster/nodePools/mypool
zone: us-central1
This is the only thing I've been able to find with this same error: Fail to enable Workload identity on GKE.
I searched GitHub as well; this error is nowhere in Google's docs.
I also see zero error logs on the nodes.
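The NODE_PDB_DELAY_SECONDS metric above suggests PodDisruptionBudgets at least delayed the node drains. As a hedged sketch of next steps (cluster, pool and zone come from the operation output; the target version is a placeholder):
# Look for PodDisruptionBudgets that can block node drains
kubectl get pdb --all-namespaces
# Retry the node pool upgrade; a failed UPGRADE_NODES operation can typically be re-run
gcloud container clusters upgrade mycluster \
  --node-pool=mypool \
  --cluster-version=TARGET_VERSION \
  --zone=us-central1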

I can't create a Google API Gateway. Error: "Cannot convert to service config. 'location: "unknown location"'"

I'm trying to create an API Gateway on Google Cloud Platform. After filling in all the fields and clicking on "Create gateway", the following error message appears:
Cannot convert to service config. 'location: "unknown location" kind: ERROR message: "Unable to parse the content. while parsing a block mapping\n in 'reader', line 1, column 1:\n swagger: '2.0'\n ^\nexpected <block end>, but found BlockMappingStart\n in 'reader', line 2, column 5:\n info:\n ^\n\n at [Source: (StringReader); line: 1, column: 15]" '
I believe this is related to the configuration of the YAML file that is required in the API Spec field.
My YAML file is configured as follows:
swagger: '2.0'
    info:
      title: API Gateway for Cycle
      description: "Send a deal object for the data to be treated"
      version: "1.0.0"
    host: teste.apigateway.project-teste-homolog.cloud.goog
    schemes:
      - "https"
    produces:
      - "application/json"
    paths:
      "/data-verification-homologation":
        post:
          x-google-backend:
            address: URL.example
          description: "Jailson esteve aqui =)"
          operationId: "dataVerification"
          parameters:
            - name: iataCode
              in: query
              required: true
              type: string
          responses:
            200:
              description: "Sucess"
              schema:
                type: string
            400:
              description: "Error"
I've already checked the following Google documentation, https://cloud.google.com/endpoints/docs/grpc/troubleshoot-config-deployment, but I couldn't solve the error.
Your indentation is incorrect.
swagger: "2.0"
info:
title: "API Gateway for Cycle"
description: ...
YAML requires very precise indentation.
See YAML Swagger (OpenAPI) example here: https://swagger.io/docs/specification/basic-structure/
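Applied to the spec in the question, a hedged sketch of the same file with standard two-space indentation (host, backend address and path are the poster's own values; run it through a YAML validator before deploying):
swagger: "2.0"
info:
  title: API Gateway for Cycle
  description: "Send a deal object for the data to be treated"
  version: "1.0.0"
host: teste.apigateway.project-teste-homolog.cloud.goog
schemes:
  - https
produces:
  - application/json
paths:
  /data-verification-homologation:
    post:
      x-google-backend:
        address: URL.example
      description: "Data verification endpoint"
      operationId: dataVerification
      parameters:
        - name: iataCode
          in: query
          required: true
          type: string
      responses:
        "200":
          description: "Success"
          schema:
            type: string
        "400":
          description: "Error"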

Create SQL Instance fails with "An unknown error occurred."

I'm trying to create a Cloud SQL instance but am unable to do so.
I have tried many variations:
DB: Postgres 9.6 / 11.1, MySQL 5.9
Tier: micro, 1 vCPU / 3.75 GiB
Public network (not selected)
Via the console or via gcloud
I have an up-to-date billing account (other GCP services are running fine), and I am the project owner, logged into the console and gcloud as such.
Each time I see the same unhelpful, generic error message: "An unknown error occurred." See below for an error log example.
{
insertId: "XXX"
logName: "projects/XXXX/logs/cloudaudit.googleapis.com%2Factivity"
protoPayload: {
#type: "type.googleapis.com/google.cloud.audit.AuditLog"
authenticationInfo: {
principalEmail: "XXXX"
}
authorizationInfo: [
0: {
granted: true
permission: "cloudsql.instances.create"
resource: "instances/XXXX"
resourceAttributes: {
}
}
]
methodName: "cloudsql.instances.create"
request: {
#type: "type.googleapis.com/cloudsql.admin.InstancesInsertRequest"
clientIp: "XXXX"
resource: {
backendType: "SECOND_GEN"
databaseVersion: "POSTGRES_11"
instanceName: {
fullProjectId: "XXXX"
instanceId: "XXXX"
}
instanceType: "CLOUDSQL_INSTANCE"
region: "us-central1"
settings: {
activationPolicy: "ALWAYS"
availabilityType: "ZONAL"
backupConfig: {
enabled: true
replicationLogArchivingEnabled: false
specification: "every day 07:00"
}
dataDiskSizeGb: "10"
dataDiskType: "PD_SSD"
ipConfiguration: {
enabled: true
}
locationPreference: {
gceZone: "us-central1-a"
}
maintenanceWindow: {
}
pricingPlan: "PER_USAGE"
storageAutoResize: true
storageAutoResizeLimit: "0"
tier: "db-custom-1-3840"
}
}
}
requestMetadata: {
callerIp: "XXXX"
destinationAttributes: {
}
requestAttributes: {
auth: {
}
time: "2019-06-04T05:46:10.255Z"
}
}
resourceName: "instances/XXXX"
response: {
#type: "type.googleapis.com/cloudsql.admin.InstancesInsertResponse"
}
serviceName: "cloudsql.googleapis.com"
status: {
code: 2
message: "UNKNOWN"
}
}
receiveTimestamp: "2019-06-04T05:46:10.297960113Z"
resource: {
labels: {
database_id: "XXXX"
project_id: "XXXX"
region: "us-central"
}
type: "cloudsql_database"
}
severity: "ERROR"
timestamp: "2019-06-04T05:46:10.250Z"
}
Since, as you've said, the error message is too generic, the appropriate course of action, in my mind, is to have the issue dug into more deeply by inspecting your project.
As a GCP Support employee, I recommend you open a private issue and provide your project number in the following component:
https://issuetracker.google.com/issues/new?component=187202
Once you have created it, please let me know, and I'll start working on it as soon as possible.
I've had a similar issue trying to create a SQL instance using Terraform.
As it turns out, database instance names can't be reused immediately:
If I delete my instance, can I reuse the instance name?
Yes, but not right away. The instance name is unavailable for up to a week before it can be reused.
To solve this I had to add a random suffix to the database instance name upon creation.
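For example, a hedged sketch of the same workaround with gcloud (database version, tier and region mirror the audit log above; the $RANDOM suffix is only there to avoid reusing a name that was deleted within the last week):
gcloud sql instances create "mydb-${RANDOM}" \
  --database-version=POSTGRES_11 \
  --tier=db-custom-1-3840 \
  --region=us-central1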

Getting 400 HTTP error when creating a new cloud composer environment in GCP

When creating a Cloud Composer environment in GCP, I get this error:
Http error status code: 400 Http error message: BAD REQUEST Additional errors: {"ResourceType":"gcp-types/storage-v1:storage.buckets.insert","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"errors":[{"domain":"global","message":"Invalid argument","reason":"invalid"}],"message":"Invalid argument","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/storage/v1/b","httpMethod":"POST"}}
It appears the GKE cluster is successfully created, and there is a failed k8s job named composer-agent-a62cd8ef-f7a8-4797-c732-fc15673528e8
Looking at the logs of the job's containers, I can see that a gsutil cp to a bucket named us-central1-test-airflow--a62cd8ef-bucket failed.
Looking in GCS, this bucket indeed doesn't exist.
I looked in Stackdriver and found that the creation of this bucket indeed fails. Here is a sample Stackdriver record of the failure:
{
insertId: "59zovdfe3w3r"
logName: "projects/analytics-analysis-1/logs/cloudaudit.googleapis.com%2Factivity"
protoPayload: {
#type: "type.googleapis.com/google.cloud.audit.AuditLog"
authenticationInfo: {
principalEmail: "234453461464#cloudservices.gserviceaccount.com"
serviceAccountDelegationInfo: [
0: {
firstPartyPrincipal: {
principalEmail: "cloud-dm#prod.google.com"
}
}
]
}
authorizationInfo: [
0: {
granted: true
permission: "storage.buckets.create"
resource: "projects/_/buckets/us-central1-test-airflow--a62cd8ef-bucket"
resourceAttributes: {
}
}
]
methodName: "storage.buckets.create"
requestMetadata: {
callerIp: "64.233.172.244"
callerSuppliedUserAgent: "Google-Deployment-Manager,gzip(gfe)"
destinationAttributes: {
}
requestAttributes: {
}
}
resourceLocation: {
currentLocations: [
0: "us"
]
}
resourceName: "projects/_/buckets/us-central1-test-airflow--a62cd8ef-bucket"
serviceName: "storage.googleapis.com"
status: {
code: 3
message: "INVALID_ARGUMENT"
}
}
receiveTimestamp: "2019-04-05T09:29:59.198703635Z"
resource: {
labels: {
bucket_name: "us-central1-test-airflow--a62cd8ef-bucket"
location: "us"
project_id: "exp-project-airflow"
}
type: "gcs_bucket"
}
severity: "ERROR"
timestamp: "2019-04-05T09:28:54.019Z"
}
I am guessing the failure to create the bucket is what is failing the entire Airflow environment creation.
How do I solve this?
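One way to narrow it down, as a hedged sketch: try creating the same bucket by hand with the project and location from the audit log above and see which concrete INVALID_ARGUMENT reason comes back:
gsutil mb -p exp-project-airflow -l us gs://us-central1-test-airflow--a62cd8ef-bucket/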