I cannot create a Spanner instance using the Node.js client library.
{ Error: Creating an instance that is in the process of getting deleted.
    at /Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:442:17
  code: 9,
  metadata: Metadata { _internal_repr: { 'google.rpc.resourceinfo-bin': [Object] } },
  note: 'Exception occurred in retry method that was not classified as transient' }
I looked at the Activity log, and it turns out that when I deleted the instance yesterday, the delete operation failed with this error:

Resource name: projects/spanner-gss-ge/instances/business-data
Error message: Deadline exceeded (HTTP 504): Deadline expired before operation could complete.
In the console, no instance with this name is listed.
How can I resolve this issue?
Since the instance was not actually deleted, am I being charged for it on a daily basis?
I have no problem creating an instance under a different name.
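One possible workaround, while the orphaned delete clears, is to retry the create call with exponential backoff and swallow only the FAILED_PRECONDITION error shown above. A minimal sketch, assuming the standard @google-cloud/spanner client; the instance config name is a placeholder:

const {Spanner} = require('@google-cloud/spanner');

const spanner = new Spanner({projectId: 'spanner-gss-ge'});

// Retry instance creation while the stale delete resolves; give up after
// `attempts` tries. gRPC code 9 is FAILED_PRECONDITION, the code seen in
// the error above.
async function createInstanceWithRetry(id, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    try {
      const [instance, operation] = await spanner.createInstance(id, {
        config: 'regional-europe-west1', // placeholder config
        nodes: 1,
      });
      await operation.promise(); // wait for the long-running operation
      return instance;
    } catch (err) {
      if (err.code !== 9) throw err;
      await new Promise((r) => setTimeout(r, 2 ** i * 1000)); // back off
    }
  }
  throw new Error(`Instance ${id} is still blocked by the pending delete`);
}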
Intermittently getting the following error when connecting to an AWS keyspace using a Lambda layer:
All host(s) tried for query failed. First host tried, 3.248.244.53:9142: Host considered as DOWN. See innerErrors.
I am trying to query a table in a keyspace from a Node.js Lambda function as follows:
import cassandra from 'cassandra-driver';
import fs from 'fs';

export default class AmazonKeyspace {
  tpmsClient = null;

  constructor () {
    const auth = new cassandra.auth.PlainTextAuthProvider('cass-user-at-xxxxxxxxxx', 'zzzzzzzzz');

    // TLS is required by Amazon Keyspaces; the root CA is bundled in the
    // Lambda layer under /opt/utils.
    const sslOptions1 = {
      ca: [fs.readFileSync('/opt/utils/AmazonRootCA1.pem', 'utf-8')],
      host: 'cassandra.eu-west-1.amazonaws.com',
      rejectUnauthorized: true
    };

    this.tpmsClient = new cassandra.Client({
      contactPoints: ['cassandra.eu-west-1.amazonaws.com'],
      localDataCenter: 'eu-west-1',
      authProvider: auth,
      sslOptions: sslOptions1,
      keyspace: 'tpms',
      protocolOptions: { port: 9142 }
    });
  }

  // Resolves with the matching rows, or rejects with the driver's error message.
  getOrganisation = async (orgKey) => {
    const SQL = 'select * FROM organisation where organisation_id=?;';
    return new Promise((resolve, reject) => {
      this.tpmsClient.execute(SQL, [orgKey], { prepare: true }, (err, result) => {
        if (!err?.message) resolve(result.rows);
        else reject(err.message);
      });
    });
  };
}
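For reference, here is a sketch of how such a class would typically be wired into the handler, instantiated once at module scope so that warm invocations reuse the driver's connection pool (the file name and event shape are assumptions):

import AmazonKeyspace from './AmazonKeyspace.js'; // assumed file name

// Created once per container, not per invocation, so the driver's
// connection pool survives across warm invocations.
const keyspace = new AmazonKeyspace();

export const handler = async (event) => {
  // `event.orgKey` is an assumed input shape.
  const rows = await keyspace.getOrganisation(event.orgKey);
  return { statusCode: 200, body: JSON.stringify(rows) };
};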
I am basically following the recommended AWS documentation:
https://docs.aws.amazon.com/keyspaces/latest/devguide/using_nodejs_driver.html
It seems that around 10-20% of the time the Lambda function (Cassandra driver) cannot connect to the endpoint.
I am pretty familiar with Cassandra (I already manage a six-node cluster) and have no such issues there.
Could this be a timeout, or do I need more contact points?
I followed the recommended guides and checked the AWS console for errors, but none are shown.
UPDATE:
I am occasionally (about 1 in 50 calls when I invoke the function with 5 concurrent calls) getting the error below:
"All host(s) tried for query failed. First host tried,
3.248.244.5:9142: DriverError: Socket was closed at Connection.clearAndInvokePending
(/opt/node_modules/cassandra-driver/lib/connection.js:265:15) at
Connection.close
(/opt/node_modules/cassandra-driver/lib/connection.js:618:8) at
TLSSocket.
(/opt/node_modules/cassandra-driver/lib/connection.js:93:10) at
TLSSocket.emit (node:events:525:35)\n at node:net:313:12\n at
TCP.done (node:_tls_wrap:587:7) { info: 'Cassandra Driver Error',
isSocketError: true, coordinator: '3.248.244.5:9142'}
This exception may be caused by throttling on the Amazon Keyspaces side, resulting in the driver error you are seeing sporadically.
I would suggest taking a look at this repo, which should help you put measures in place to either prevent this issue or at least reveal its true cause.
For some of the errors you see in the logs, you will need to investigate Amazon CloudWatch metrics to determine whether you are hitting throttling or system errors. I've built this AWS CloudFormation template to deploy a CloudWatch dashboard with all the appropriate metrics, which will give your application better observability.
A System Error indicates an event that must be resolved by AWS and often part of normal operations. Activities such as timeouts, server faults, or scaling activity could result in server errors. A User error indicates an event that can often be resolved by the user such as invalid query or exceeding a capacity quota. Amazon Keyspaces passes the System Error back as a Cassandra ServerError. In most cases this a transient error, in which case you can retry your request until it succeeds. Using the Cassandra driver’s default retry policy customers can also experience NoHostAvailableException or AllNodesFailedException or messages like yours "All host(s) tried for query failed". This is a client side exception that is thrown once all host in the load balancing policy’s query plan have attempted the request.
Take a look at this retry policy for NodeJs which should help resolve your "All hosts failed" exception or pass back the original exception.
The retry policies in the Cassandra drivers are pretty crude and will not be able to do more sophisticated things like circuit breaker patters. You may want to eventually use a "failfast" retry policy for the driver and handle the exceptions in your application code.
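A minimal sketch of such a policy (the class name, retry cap, and host-selection choice are my own assumptions, not the linked repo's exact code):

import cassandra from 'cassandra-driver';

// Retry each failed request on a different host up to `maxRetries` times,
// then surface the original error instead of blindly exhausting all hosts.
class KeyspacesRetryPolicy extends cassandra.policies.retry.RetryPolicy {
  constructor(maxRetries = 3) {
    super();
    this.maxRetries = maxRetries;
  }

  _retryOrReject(info, consistency) {
    return info.nbRetry < this.maxRetries
      ? this.retryResult(consistency, false) // false = try the next host
      : this.rejectResult();
  }

  onReadTimeout(info, consistency) { return this._retryOrReject(info, consistency); }
  onWriteTimeout(info, consistency) { return this._retryOrReject(info, consistency); }
  onUnavailable(info, consistency) { return this._retryOrReject(info, consistency); }
  onRequestError(info, consistency) { return this._retryOrReject(info, consistency); }
}

// Wire it into the existing client options:
// new cassandra.Client({ ...existingOptions, policies: { retry: new KeyspacesRetryPolicy(5) } });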
I am receiving the following error when adding the notification integration property to a TASK object in Snowflake hosted on AWS:

invalid property 'ERROR_INTEGRATION' for 'TASK'

I have successfully defined the notification integration <my_notification_int>.
The task definition is as follows:

create task mytask
  schedule = '5 MINUTE'
  error_integration = <my_notification_int>
as
  insert into mytable(ts) values(current_timestamp);
As per the Snowflake documentation, ERROR_INTEGRATION is supported on TASK objects.
Any suggestions on resolving this error?
I tried to create a Google Cloud Composer environment, but on the setup page I get the following errors:
Service Error: Failed to load GKE machine types. Please leave the field empty to apply default values or retry later.
Service Error: Failed to load regions. Please leave the field empty to apply default values or retry later.
Service Error: Failed to load zones. Please leave the field empty to apply default values or retry later.
Service Error: Failed to load service accounts. Please leave the field empty to apply default values or retry later.
The only parameters GCP lets me change are the region and the number of nodes, but it still lets me create the environment. After 30 minutes the environment creation fails with the following error:
CREATE operation on this environment failed 1 day ago with the following error message:
Http error status code: 400
Http error message: BAD REQUEST
Errors in: [Web server]; Error messages:
Failed to deploy the Airflow web server. This might be a temporary issue. You can retry the operation later.
If the issue persists, it might be caused by problems with permissions or network configuration. For more information, see https://cloud.google.com/composer/docs/troubleshooting-environment-creation.
An internal error occurred while processing task /app-engine-flex/flex_await_healthy/flex_await_healthy>2021-07-20T14:31:23.047Z7050.xd.0: Your deployment has failed to become healthy in the allotted time and therefore was rolled back. If you believe this was an error, try adjusting the 'app_start_timeout_sec' setting in the 'readiness_check' section.
Got error "Another operation failed." during CP_DEPLOYMENT_CREATING_STANDARD []
Is it a problem with permissions? If so, what permissions do I need? Thank you!
It looks more like a temporary issue:
The first set of errors states that the console cannot load metadata (the regions list, the zones list, and so on); you don't have an explicit PERMISSION_DENIED error.
The second error also suggests this: "This might be a temporary issue."
I have a workflow that contains a bunch of activities, and I store each activity's response in an S3 bucket.
I pass the S3 key as an input to each activity. Inside the activity, a method retrieves the data from S3 and performs some operation. But my last activity failed and threw this error:
Caused by: com.amazonaws.AmazonServiceException: Request entity too large (Service: AmazonSimpleWorkflow; Status Code: 413; Error Code: Request entity too large; Request ID: null)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:820)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:439)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:245)
at com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient.invoke(AmazonSimpleWorkflowClient.java:3173)
at com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient.respondActivityTaskFailed(AmazonSimpleWorkflowClient.java:2878)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.respondActivityTaskFailed(SynchronousActivityTaskPoller.java:255)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.respondActivityTaskFailedWithRetry(SynchronousActivityTaskPoller.java:246)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.execute(SynchronousActivityTaskPoller.java:208)
at com.amazonaws.services.simpleworkflow.flow.worker.ActivityTaskPoller$1.run(ActivityTaskPoller.java:97)
... 3 more
I know AWS SWF has some limits on data size, but I am only passing an S3 key to the activity; inside the activity, it reads from S3 and processes the data. I am not sure why I am getting this error. If anyone knows, please help! Thanks a lot!
Your activity had already failed; the respondActivityTaskFailed SWF API call is what appears in the stack trace. So my guess is that the exception message plus stack trace exceeded the maximum size allowed by the SWF service.
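If that is the cause, trimming the failure reason and details to SWF's documented limits before responding avoids the 413. A sketch using the AWS SDK for JavaScript (the equivalent in the Java Flow framework is to trim the exception message and stack trace before they are reported):

import AWS from 'aws-sdk';

const swf = new AWS.SWF();

const MAX_REASON = 256;    // SWF limit for the `reason` field
const MAX_DETAILS = 32768; // SWF limit for the `details` field

// Truncate the error before reporting it, so respondActivityTaskFailed
// itself cannot fail with "Request entity too large".
async function failActivity(taskToken, err) {
  await swf.respondActivityTaskFailed({
    taskToken,
    reason: String(err.message || err).slice(0, MAX_REASON),
    details: String(err.stack || '').slice(0, MAX_DETAILS),
  }).promise();
}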
I have defined a new artifact type and am successfully creating new asset instances of it in the publisher, which works well. Recently I experimented with my own create_form.hbs under publisher/extensions/assets//themes/default/partials/ and then decided against continuing. After deleting the newly created create_form.hbs, I found that when I now try to publish a new instance of the artifact, the following error is thrown:
[2016-11-11 11:17:06,833] ERROR - Failed to invoke action: Create for the asset of id: "9a3a4e55-a5a3-4c94-a2d0-152a10e4ab45". The following exception was thrown: JavaException: org.wso2.carbon.registry.core.exceptions.RegistryException: Preprequest action must be completed before Create {rxt.asset}
[2016-11-11 11:17:06,833] ERROR - org.wso2.carbon.registry.core.exceptions.RegistryException: Preprequest action must be completed before Create {asset_api_endpoints}
org.mozilla.javascript.WrappedException: Wrapped org.wso2.carbon.registry.core.exceptions.RegistryException: Preprequest action must be completed before Create (eval code#1(eval)#87)
    at org.mozilla.javascript.Context.throwAsScriptRuntimeEx(Context.java:1754)
    at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:148)
    at org.mozilla.javascript.NativeJavaMethod.call(NativeJavaMethod.java:22
Despite this, and the fact that I get the user-friendly 'Error' message on the publication page telling me the asset could not be created, the new instance does in fact get created (I can see it when I go to the asset list page). I can also edit it with no problems.
I'm unsure whether this error is related to the create_form.hbs page I previously created (and then deleted) or whether that is just a coincidence.
Is there a caching problem going on?
Any help on what the error means and how to resolve it would be greatly appreciated.
Thanks in advance.
The defaultAction value (under meta.lifecycle) in the asset file was defined as "Create", which didn't match anything in the lifecycle config. This action must match a valid lifecycle action; otherwise the publish page displays an error message despite successfully creating the new asset.
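For illustration, the relevant fragment of the asset extension file would look roughly like this (a sketch; the lifecycle and action names are placeholders, the point being that defaultAction must name an action the configured lifecycle actually defines):

// asset.js (publisher extension) - sketch only, names are placeholders
asset.configure = function () {
  return {
    meta: {
      lifecycle: {
        name: 'SampleLifeCycle',  // the lifecycle bound to this RXT
        commentRequired: false,
        defaultAction: 'Promote', // was 'Create', which the lifecycle did not define
        deletableStates: ['*'],
        publishedStates: ['Published']
      }
    }
  };
};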