InsufficientBalanceException("Insufficient spendable states identified for $requiredAmount.") - blockchain

I am getting the CordaRuntimeException below
InsufficientBalanceException: Insufficient spendable states identified for $requiredAmount.
when I try to execute a transaction transferring 100 tokens from one account (sender) to another account (receiver). Note that I did not issue any tokens to the sender account first.
I understand that the exception itself is expected: Corda throws InsufficientBalanceException when there are not enough spendable tokens in the vault to cover the transfer.
**But I want to understand why, when this exception is thrown, the flow takes about 10 seconds to complete, whereas a transfer with sufficient tokens takes only about 800 ms.**
Why does Corda take so much longer to execute a transaction when it throws InsufficientBalanceException?
Thanks in Advance.
Test scenario I have tried:
I did not issue any tokens to the sender account, but I executed a transaction transferring 100 tokens from the sender account to the receiver account. I get the CordaRuntimeException:
InsufficientBalanceException: Insufficient spendable states identified for $requiredAmount.
The flow completes by throwing this exception, but it takes approximately 10 seconds, whereas a successful transfer (when the sender has enough tokens) takes about 1 second. What are the reasons Corda takes so much longer to complete the flow when it throws InsufficientBalanceException?
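For what it's worth, the timing pattern (sub-second on success, roughly 10 seconds before the failure surfaces) is what you would expect if the token-selection step re-queries the vault several times with a pause between attempts before giving up. Below is a purely conceptual Java sketch of such a retry loop; it is not the Tokens SDK source, and the vault interface, retry count, and sleep interval are all made up for illustration.

import java.util.List;

// Conceptual illustration only -- NOT the actual token-selection code.
public class RetryingSelectionSketch {

    static class InsufficientBalanceException extends RuntimeException {
        InsufficientBalanceException(String msg) { super(msg); }
    }

    interface Vault {
        // Returns the spendable states found for the requested amount (possibly empty).
        List<String> querySpendableStates(long requiredAmount);
    }

    static List<String> selectStates(Vault vault, long requiredAmount,
                                     int maxRetries, long sleepMillis) throws InterruptedException {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            List<String> states = vault.querySpendableStates(requiredAmount);
            if (!states.isEmpty()) {
                return states;                 // success path: usually one query, sub-second
            }
            Thread.sleep(sleepMillis);         // failure path: one sleep per retry adds up
        }
        throw new InsufficientBalanceException(
                "Insufficient spendable states identified for " + requiredAmount + ".");
    }
}

If the selector in use behaves like this with, say, eight or so retries and around a second of sleep between them, the sleeps alone would account for most of the extra time on the failing path; the actual retry configuration of the selection code you are running would need to be checked to confirm.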

Related

Alert Policies keep incidents open after the uptime checks recover

Since 9 June 2021 I have been experiencing a problem with GCP Alert Policies: after the uptime checks recover to an OK status, the alert policy stays triggered as active.
The alerts were configured some time ago and the uptime checks all appear green, but I have had 7 incidents open since that date.
Is anybody else experiencing the same problem?
Your case seems to be as described below. If so, it is expected behavior, and your incidents will be closed automatically after 7 days.
Partial metric data: Missing or delayed metric data can result in policies not alerting and incidents not closing. Delays in data arriving from third-party cloud providers can be as high as 30 minutes, with 5-15 minute delays being the most common. A lengthy delay—longer than the duration window—can cause conditions to enter an "unknown" state. When the data finally arrives, Cloud Monitoring might have lost some of the recent history of the conditions. Later inspection of the time-series data might not reveal this problem because there is no evidence of delays once the data arrives
It sometimes happens that, in the event of an outage longer than 30 minutes (as mentioned above), your alerting policy enters this "unknown" state: metric reporting drops out completely and Monitoring loses track of the condition's history. Once reporting recovers, the tool by default keeps the last readable value; because reporting had stopped entirely, it considers this a null value, which is translated to 0.000.
This unknown-state behavior causes the tool to "observe no changes" even though the metrics are reporting again at their normal state and pace, so the "7 days without an observable change" rule applies, as described in Managing incidents: incidents are automatically closed if the system observes that the condition stopped being met, or if 7 days pass without an observation that the condition continued to be met.

BigQuery unable to insert job. Workflow failed

I need to run a batch job from GCS to BigQuery via Dataflow and Beam. All my files are Avro with the same schema.
I've created a Dataflow Java application that succeeds on a smaller set of data (~1 GB, about 5 files).
But when I try to run it on a bigger set of data (>500 GB, >1000 files), I receive this error message:
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: Failed to create load job with id prefix 1b83679a4f5d48c5b45ff20b2b822728_6e48345728d4da6cb51353f0dc550c1b_00001_00000, reached max retries: 3, last failed load job: ...
After 3 retries it terminates with:
Workflow failed. Causes: S57....... A work item was attempted 4 times without success....
This step is the load to BigQuery.
Stackdriver says the processing is stuck in step ....for 10m00s... and
Request failed with code 409, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes.....
I looked up the 409 error code, which indicates that I might have an existing job, dataset, or table. I removed all the tables and re-ran the application, but it still shows the same error message.
I am currently limited to 65 workers, using n1-standard-4 machines.
I believe there are other ways to move the data from GCS to BigQuery, but I need to demonstrate Dataflow.
"java.lang.RuntimeException: Failed to create job with prefix beam_load_csvtobigqueryxxxxxxxxxxxxxx, reached max retries: 3, last failed job: null.
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryHelpers$PendingJob.runJob(BigQueryHelpers.java:198)..... "
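The load step is wired up roughly like this (a simplified sketch, not my exact code: the table spec is a placeholder, rows is the PCollection<TableRow> built from the Avro files, and schema is the shared table schema):

import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.values.PCollection;

public class LoadStepSketch {
    // FILE_LOADS issues BigQuery load jobs, which is where the
    // "Failed to create load job ... reached max retries: 3" error comes from.
    static void applyLoadStep(PCollection<TableRow> rows, TableSchema schema) {
        rows.apply("WriteToBigQuery",
                BigQueryIO.writeTableRows()
                        .to("my-project:my_dataset.my_table")   // placeholder table spec
                        .withSchema(schema)
                        .withMethod(BigQueryIO.Write.Method.FILE_LOADS)
                        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
    }
}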
One possible cause is a privilege issue. Ensure that the account interacting with BigQuery has the "bigquery.jobs.create" permission, which is included in the predefined "BigQuery User" role.
Posting the comment of @DeaconDesperado as community wiki: they experienced the same error, and what they did was remove the characters not permitted in the table name (BigQuery table names may contain only Unicode letters, marks, numbers, connectors, dashes, or spaces), and the error was gone.
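If you need to do that clean-up programmatically, here is a minimal Java sketch, assuming the allowed set really is Unicode letters, marks, numbers, connectors, dashes, and spaces (anything else is replaced with an underscore):

import java.util.regex.Pattern;

public class TableNameSanitizer {
    // Keep Unicode letters (L), marks (M), numbers (N), connectors (Pc),
    // dashes (Pd) and spaces (Zs); replace everything else with '_'.
    private static final Pattern DISALLOWED =
            Pattern.compile("[^\\p{L}\\p{M}\\p{N}\\p{Pc}\\p{Pd}\\p{Zs}]");

    public static String sanitize(String tableName) {
        return DISALLOWED.matcher(tableName).replaceAll("_");
    }

    public static void main(String[] args) {
        System.out.println(sanitize("events:2021/06#raw"));   // prints events_2021_06_raw
    }
}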
I had the same problem using "roles/bigquery.jobUser", "roles/bigquery.dataViewer", and "roles/bigquery.user". Only after granting "roles/bigquery.admin" was the issue resolved.

When using the Admin SDK directory API to insert Org Units a dailyLimitExceeded error is returned even though that quota has not been reached

I work for a Student Information System, and we're using the Admin SDK Directory API to create school districts' Google org unit structures from within our software.
POST https://www.googleapis.com/admin/directory/v1/customer/customerId/orgunits
When generating these API requests we're consistently receiving dailyLimitExceeded errors even when the district's quota has not been reached.
The error can be worked around by catching it and implementing an exponential back-off routine, but it behaves much more like the quotaExceeded error is intended to behave than like dailyLimitExceeded, in that the request succeeds on the first retry.
In detail, the test I just ran successfully completed 9 of these API calls and then I received this response on the 10th:
Google.Apis.Requests.RequestError
Quota limit exceeded for the day. [403]
Errors [Message[Quota limit exceeded for the day.] Location[ - ] Reason[dailyLimitExceeded] Domain[usageLimits]
From the start of the batch of API calls it took about 10 seconds to get to the point where the error occurred.
Thanks for your help!
What I would suggest is to slow down your API requests. Don't make something like 10 requests in 1 second; leave some space between requests. You are correct to implement exponential backoff. Also, if you can, use other accounts to make requests as well.
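As a rough illustration of that advice (shown in Java here; the actual org unit insert call and the check for a 403 dailyLimitExceeded response are placeholders you would supply from your own client library):

import java.util.concurrent.Callable;
import java.util.function.Predicate;

// Minimal retry-with-exponential-backoff helper for rate-limited API calls.
public class BackoffRetry {

    public static <T> T callWithBackoff(Callable<T> call,
                                        Predicate<Exception> isRetryable,
                                        int maxAttempts) throws Exception {
        long delayMillis = 1_000;                       // initial backoff
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts || !isRetryable.test(e)) {
                    throw e;                            // give up: not retryable or out of attempts
                }
                // Add jitter so parallel callers do not retry in lockstep.
                long jitter = (long) (Math.random() * 500);
                Thread.sleep(delayMillis + jitter);
                delayMillis = Math.min(delayMillis * 2, 64_000);
            }
        }
    }
}

You would wrap the org unit insert call in callWithBackoff and pass a predicate that inspects the error for the dailyLimitExceeded reason, keeping maxAttempts small so a genuine daily-quota problem still surfaces quickly.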

A timeout occurred while waiting for memory resources to execute the query in resource pool 'SloDWPool'

I have a series of Azure SQL Data Warehouse databases (for our development/evaluation purposes). Following a recent unplanned extended outage (caused by an issue with the Tenant Ring associated with some of these databases), I decided to resume the canary queries I had been running before, which I had quiesced for a couple of months because of frequent exceptions.
The canary queries do not run particularly frequently on any specific database, say every 15 minutes. On one database, I've received two indications of problems completing the canary query within 24 hours. The error is:
Msg 110802, Level 16, State 1, Server adwscdev1, Line 1110802;An internal DMS error occurred that caused this operation to fail. Details: A timeout occurred while waiting for memory resources to execute the query in resource pool 'SloDWPool' (2000000007). Rerun the query.
This database is under essentially no load, running at more than 100 DWU.
Other databases on the same logical server may be running under a load, but I have not seen the error on them.
What is the explanation for this error?
Please open a support ticket for this issue; support will have full access to the DMS logs and will be able to see exactly what is going on. This behavior is not expected.
While I agree a support case would be reasonable, I think you should also try scaling up to, say, DWU400 and retrying. I would also consider trying largerc or xlargerc on DWU100 and DWU400, as described here. Note that this gives each query more memory and resources.
Run the following then retry your query:
EXEC sp_addrolemember 'largerc', 'yourLoginName'

How AWS Cognito User Pool defends against bruteforce attacks

I am going to use the AWS Cognito User Pool product as the user directory for an application, and I have several questions:
Does Amazon throttle requests to a Cognito User Pool, and if so, at what rate of calls does throttling kick in?
How does Cognito defend against brute-force attacks on login/password?
After a couple of hours of searching I found these two exceptions in the source code:
TooManyFailedAttemptsException: This exception gets thrown when the user has made too many failed attempts for a given action (e.g., sign in).
HTTP Status Code: 400
TooManyRequestsException: This exception gets thrown when the user has made too many requests for a given operation.
HTTP Status Code: 400
Also, I tried to log in with wrong credentials to test the limits; I got NotAuthorizedException: Password attempts exceeded after the 5th attempt.
In a similar scenario, I tried to brute-force the forgot-password flow, but after 10 failed attempts I got LimitExceededException: Attempt limit exceeded, please try after some time.
I think that is how they do it.
Yes, Cognito User Pools protects against brute-force attacks using various security mechanisms. Throttling is one of those mechanisms. We do not share the limits, as they vary dynamically.
This contains the latest documentation on the lockout policies for Cognito.
https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html
We allow five failed sign-in attempts. After that we start temporary lockouts with exponentially increasing times starting at 1 second and doubling after each failed attempt up to about 15 minutes. Attempts during a temporary lockout period are ignored. After the temporary lockout period, if the next attempt fails, a new temporary lockout starts with twice the duration as the last. Waiting about 15 minutes without any attempts will also reset the temporary lockout. Please note that this behavior is subject to change.
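Reading that description literally, the lockout schedule would look something like the following (a Java illustration of the quoted behavior only, not AWS code, and the exact numbers are subject to change as noted above):

public class CognitoLockoutSchedule {
    public static void main(String[] args) {
        long lockoutSeconds = 1;             // first temporary lockout after the 5th failure
        final long capSeconds = 15 * 60;     // "up to about 15 minutes"
        for (int failedAttempt = 6; failedAttempt <= 17; failedAttempt++) {
            System.out.printf("failed attempt %d -> locked out for ~%d s%n",
                    failedAttempt, Math.min(lockoutSeconds, capSeconds));
            lockoutSeconds *= 2;             // doubles after each further failed attempt
        }
    }
}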
Rather than (or in addition to) focusing on brute-forcing the login endpoint, I think the forgot-password flow deserves some attention.
The forgot-password email contains a 6-digit code that can be used to set a new password.
This code is valid for 1 hour (see the User Pools code validity resource quotas).
In my tests I could make 5 attempts to set a new password within an hour for a single user before throttling came into effect (LimitExceededException: Attempt limit exceeded, please try after some time).
Now, if I do the math correctly, there are 1,000,000 possible values for a code (in my tests I never saw codes starting with 0, so there may be fewer). You have 5 attempts per hour to guess the code, so each hour you have a 5/1000000 * 100 = 0.0005% chance of resetting the password without knowing the code.
Is this a small chance? It seems so.
Considering a large-scale attack brute-forcing multiple users concurrently, with retries, should I sleep well at night? I don't know!
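As a back-of-the-envelope check on that concern (the user count and attack duration below are made-up figures, not anything measured):

public class ResetCodeOdds {
    public static void main(String[] args) {
        double attemptsPerHour = 5;            // observed throttle per user per hour
        double codeSpace = 1_000_000;          // 6-digit code -> 10^6 possible values
        double perUserPerHour = attemptsPerHour / codeSpace;   // 0.000005 = 0.0005 %
        System.out.printf("Per user, per hour: %.4f %%%n", perUserPerHour * 100);

        // Hypothetical large-scale attack: 100,000 targeted users over 24 hours.
        double users = 100_000;
        double hours = 24;
        System.out.printf("Expected successful resets: %.1f%n", users * hours * perUserPerHour);
    }
}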
To solve the issue once and for all, why can't Cognito use longer codes that are hard to guess (I want to sleep well at night)? Maybe it has something to do with the fact that the same code mechanism is used in text messages. I wish there were an official comment.