Preventing: Store reset while query was in flight (not completed in link chain) - apollo

I get this error randomly in my Apollo Client code. I'm using the anti-pattern of guarding every setState call with an isMounted flag to prevent attempts to set state after a component has unmounted or while the user is leaving the page. But I still get this:
Store reset while query was in flight (not completed in link chain)
What gives?
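For context, here is a minimal sketch of the isMounted guard described above (assuming React hooks; the useSafeState name is illustrative). Note that it only suppresses setState after unmount; it does not stop or cancel in-flight Apollo queries:
// Illustrative sketch of the isMounted guard pattern described in the question.
import { useEffect, useRef, useState } from "react";

function useSafeState<T>(initial: T): [T, (value: T) => void] {
  const isMounted = useRef(true);
  const [state, setState] = useState(initial);

  useEffect(() => {
    return () => {
      isMounted.current = false; // flipped on unmount / page navigation
    };
  }, []);

  const setSafeState = (value: T) => {
    if (isMounted.current) {
      setState(value); // skipped once the component has unmounted
    }
  };

  return [state, setSafeState];
}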

Related

SCTP State Cookie

I know that SCTP prevents SYN flooding (denial of service) by the use of state cookies, with every echoed cookie stored at a session-level buffer.
But what does the state cookie actually contain?
RFC 4960, Section 5.1.3 describes it in detail:
5.1.3. Generating State Cookie
When sending an INIT ACK as a response to an INIT chunk, the sender
of INIT ACK creates a State Cookie and sends it in the State Cookie
parameter of the INIT ACK. Inside this State Cookie, the sender
should include a MAC (see [RFC2104] for an example), a timestamp on
when the State Cookie is created, and the lifespan of the State
Cookie, along with all the information necessary for it to establish
the association.
The following steps SHOULD be taken to generate the State Cookie:
A) Create an association TCB using information from both the received INIT and the outgoing INIT ACK chunk,
B) In the TCB, set the creation time to the current time of day, and the lifespan to the protocol parameter 'Valid.Cookie.Life' (see Section 15),
C) From the TCB, identify and collect the minimal subset of information needed to re-create the TCB, and generate a MAC using this subset of information and a secret key (see [RFC2104] for an example of generating a MAC), and
D) Generate the State Cookie by combining this subset of information and the resultant MAC.
After sending the INIT ACK with the State Cookie parameter, the
sender SHOULD delete the TCB and any other local resource related to
the new association, so as to prevent resource attacks.
The hashing method used to generate the MAC is strictly a private
matter for the receiver of the INIT chunk. The use of a MAC is
mandatory to prevent denial-of-service attacks. The secret key
SHOULD be random ([RFC4086] provides some information on randomness
guidelines); it SHOULD be changed reasonably frequently, and the
timestamp in the State Cookie MAY be used to determine which key
should be used to verify the MAC.
An implementation SHOULD make the cookie as small as possible to
ensure interoperability.
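For illustration, here is a rough sketch of the generate-and-verify steps described above, using Node's crypto module; the field selection, serialization, and MAC algorithm are assumptions rather than the RFC 4960 wire format:
// Illustrative sketch of State Cookie generation/verification.
// The TcbSubset fields and JSON serialization are assumptions, not SCTP's wire format.
import { createHmac, randomBytes, timingSafeEqual } from "crypto";

const SECRET_KEY = randomBytes(32);  // SHOULD be random and rotated reasonably frequently
const VALID_COOKIE_LIFE_MS = 60_000; // stands in for the 'Valid.Cookie.Life' parameter

interface TcbSubset {
  peerVerificationTag: number;
  localVerificationTag: number;
  peerInitialTsn: number;
  localInitialTsn: number;
  createdAt: number;   // timestamp of cookie creation
  lifespanMs: number;  // cookie lifespan
}

function generateStateCookie(subset: TcbSubset): Buffer {
  const payload = Buffer.from(JSON.stringify(subset));
  const mac = createHmac("sha256", SECRET_KEY).update(payload).digest();
  // Cookie = minimal TCB subset + MAC; the sender keeps no per-association state.
  return Buffer.concat([payload, mac]);
}

function verifyStateCookie(cookie: Buffer): TcbSubset | null {
  const macLength = 32; // SHA-256 output size
  const payload = cookie.subarray(0, cookie.length - macLength);
  const mac = cookie.subarray(cookie.length - macLength);
  const expected = createHmac("sha256", SECRET_KEY).update(payload).digest();
  if (!timingSafeEqual(mac, expected)) return null;  // forged or corrupted cookie
  const subset: TcbSubset = JSON.parse(payload.toString());
  if (Date.now() - subset.createdAt > subset.lifespanMs) return null; // expired cookie
  return subset; // re-create the TCB from this subset
}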

Step Functions: How to share context between Lambdas?

I have a data processing workflow like this. The Download task creates a session ID (GUID) and passes it to the Parse task and then the Post task. If any exception occurs in these three tasks, the workflow jumps to the Failed task. The Failed task then updates the status of the process as failed in DynamoDB. To do that, it needs the session ID.
Is there any way to pass the session ID to the Failed task?
Or, if the session ID is created outside and passed in to the workflow, is it possible to share this ID to all the tasks?
Specify the ResultPath property in the error catcher. By default it is $, which means that the output of a failed Parallel State will contain only the error info. However, if you set ResultPath to, for example, $.error_info, then you will preserve the state input and the error data will be accessible under the error_info property.
For more details, you may be interested in https://docs.aws.amazon.com/step-functions/latest/dg/concepts-error-handling.html (Error Handling).
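For illustration, here is a minimal state machine definition with such a catcher, written as a TypeScript object; the Lambda ARNs are placeholders and the state names follow the workflow described above:
// Sketch of an Amazon States Language definition showing Catch with ResultPath,
// so the Failed task still receives the original input (including the session ID)
// alongside the error data.
const definition = {
  StartAt: "Download",
  States: {
    Download: {
      Type: "Task",
      Resource: "arn:aws:lambda:REGION:ACCOUNT:function:Download", // placeholder ARN
      Catch: [
        { ErrorEquals: ["States.ALL"], ResultPath: "$.error_info", Next: "Failed" },
      ],
      Next: "Parse",
    },
    Parse: {
      Type: "Task",
      Resource: "arn:aws:lambda:REGION:ACCOUNT:function:Parse", // placeholder ARN
      Catch: [
        { ErrorEquals: ["States.ALL"], ResultPath: "$.error_info", Next: "Failed" },
      ],
      Next: "Post",
    },
    Post: {
      Type: "Task",
      Resource: "arn:aws:lambda:REGION:ACCOUNT:function:Post", // placeholder ARN
      Catch: [
        { ErrorEquals: ["States.ALL"], ResultPath: "$.error_info", Next: "Failed" },
      ],
      End: true,
    },
    Failed: {
      Type: "Task",
      Resource: "arn:aws:lambda:REGION:ACCOUNT:function:Failed", // placeholder ARN
      // Input here looks like { sessionId: "...", ..., error_info: { Error, Cause } }
      End: true,
    },
  },
};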

NSUbiquitousKeyValueStore Notification for Current Device

I am using NSUbiquitousKeyValueStore to store user preferences in iCloud. To ensure that I understand how NSUbiquitousKeyValueStore works, I successfully stored a String value and monitored the value on a separate device. The following explains my configuration.
I started by observing the Notification that is posted when the NSUbiquitousKeyValueStore changes externally:
NSUbiquitousKeyValueStore.didChangeExternallyNotification
Then, I set a String value that was the input of a UITextField:
let store = NSUbiquitousKeyValueStore.default
store.set(text, forKey: key)
store.synchronize()
To ensure that this works, I created a UIAlertController in the method that responds to the notification. I observed the alert appear on the secondary device within six seconds of setting the value on the primary device. However, I never observed the alert appear on the primary device after setting the value.
After reading the documentation for NSUbiquitousKeyValueStore, I was unable to find a reason that the primary device would not also receive the notification after updating the NSUbiquitousKeyValueStore.
Do I need a solution to save the updated values locally before setting them in the NSUbiquitousKeyValueStore? I can persist the values locally in UserDefaults before persisting them in NSUbiquitousKeyValueStore. However, it would require me to revert the values in UserDefaults if an error should occur during the attempt to persist them in the NSUbiquitousKeyValueStore. Without the primary device being notified of the changes that are persisted from it, will I even be notified of the error in the observation of the notification? I struggle to believe that I will if I am not receiving notifications when successful.
Is there something that I am missing, or is this the expected behavior?
Perhaps they updated the documentation (it has been over a year since you asked).
"This notification is sent only upon a change received from iCloud; it is not sent when your app sets a value."
I assume that means that because no change is observed (the local device and iCloud are already in sync), you don't get a notification.

Detecting when a spanner read timestamp has expired

I am trying to build a cursor-based pagination API on top of a Spanner dataset. To do this, I am using the read timestamp from the initial request to retrieve data and then encoding it into a cursor, which can then be used to do an "Exact staleness" (https://cloud.google.com/spanner/docs/timestamp-bounds) read in subsequent paging requests.
For example, the processing of a request for the first page looks something like:
ReadOnlyTransaction tx = spanner.singleUseReadOnlyTransaction();
tx.executeQuery(statement); // result set containing the first page of data
tx.getReadTimestamp(); // read timestamp that gets returned in a cursor
And for subsequent requests:
ReadOnlyTransaction tx = spanner.singleUseReadOnlyTransaction(TimestampBound.ofReadTimestamp(cursorTs));
I'd also like to return a message to the user when the cursor timestamp has expired (the documentation linked to above states they are valid for roughly an hour) and to do this I have the following code:
try {
  // process the Spanner result set
} catch (SpannerException e) {
  if (ErrorCode.FAILED_PRECONDITION.equals(e.getErrorCode())) {
    // cursor has expired, return an appropriate error message
  }
}
This works fine when testing manually against a long-running Spanner database. However, in my test code I create a Spanner database and tear it down once the test is complete, and in these tests the Spanner exception is only thrown intermittently when I use a read timestamp that should definitely be expired (say, over a year old). In the cases where no exception is thrown, I get an empty result set. If I make multiple requests to Spanner in my test with this expired read timestamp, eventually the database seems to consistently throw the "failed precondition" error.
Is this behaviour expected for a newly provisioned spanner database?
I believe the reason for this behavior is that you are using read-only transactions. As explained in the documentation, read-only transactions always observe a consistent state of the database and the transaction commit history at a chosen point. In your case, the database is created and torn down before and after your test completes, so there is no transaction commit history to be observed until after a number of attempts.

Architecture for robust payment processing

Imagine 3 system components:
1. External ecommerce web service to process credit card transactions
2. Local Database to store processing results
3. Local UI (or win service) to perform payment processing of the customer order document
The external web service is obviously not transactional, so how do I guarantee:
1. that results are eventually persisted to the database when received from the web service, even if the database is not accessible at that moment (network issue, DB timeout)
2. that clients are prevented from processing the customer order while a payment has been initiated by another client but its results have not yet been successfully persisted to the database (and are waiting in some kind of recovery queue)
The aim is to do the processing with non-transactional system components and guarantee that the transaction won't be repeated by another process in case of failure.
(Please look at this in the context of post-sale payment processing, where multiple operators might attempt manual payment processing; not a web checkout application.)
Ask the payment processor whether they can detect duplicate transactions based on an order ID you supply. Then if you are unable to store the response due to a database failure, you can safely resubmit the request without fear of double-charging (at least one PSP I've used returned the same response/auth code in this scenario, along with a flag to say that this was a duplicate).
Alternatively, just set a flag on your order immediately before attempting payment, and don't attempt payment if the flag was already set. If an error then occurs during payment, you can investigate and fix the data at your leisure.
I'd be reluctant to go down the route of trying to automatically cancel the order and resubmitting, as this just gets confusing (e.g. what if cancelling fails - should you retry or not?). Best to keep the logic simple so when something goes wrong you know exactly where you stand.
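A minimal sketch of the two ideas above (claim the order with a flag first, and rely on the processor de-duplicating on your order ID); the repository and gateway interfaces are hypothetical:
// Sketch of the "set a flag before attempting payment" idea above.
// OrderRepository and PaymentGateway are hypothetical interfaces, not a real library.
interface PaymentResult {
  authCode: string;
  duplicate: boolean; // set by some PSPs when the same order ID is submitted twice
}

interface OrderRepository {
  // Atomically sets a payment-in-progress flag; returns false if it was already set.
  tryMarkPaymentInProgress(orderId: string): Promise<boolean>;
  savePaymentResult(orderId: string, result: PaymentResult): Promise<void>;
}

interface PaymentGateway {
  // The processor is asked to de-duplicate on orderId so a retry cannot double-charge.
  charge(orderId: string, amountCents: number): Promise<PaymentResult>;
}

async function processPayment(
  orders: OrderRepository,
  gateway: PaymentGateway,
  orderId: string,
  amountCents: number,
): Promise<void> {
  // 1. Claim the order first; a second operator/client gets `false` here and stops.
  const claimed = await orders.tryMarkPaymentInProgress(orderId);
  if (!claimed) {
    throw new Error(`Payment for order ${orderId} is already in progress`);
  }

  // 2. Charge the card. If storing the result later fails, the request can be
  //    resubmitted because the gateway de-duplicates on orderId.
  const result = await gateway.charge(orderId, amountCents);

  try {
    await orders.savePaymentResult(orderId, result);
  } catch (dbError) {
    // 3. Database unavailable: record the result somewhere durable (queue, file, log)
    //    for recovery instead of blindly retrying the charge.
    console.error("Could not persist payment result; queue for recovery", {
      orderId,
      authCode: result.authCode,
      dbError,
    });
  }
}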
In any system like this, you need robust error handling and error reporting. This is doubly true when it comes to dealing with payments, where you absolutely do not want to accidentally take someone's money and not deliver the goods.
Because you're outsourcing your payment handling to a 3rd party, you're ultimately very reliant on the gateway having robust error handling and reporting systems.
In general then, you hand off control to the payment gateway and start a task that waits for a response from the gateway, which is either 'payment accepted' or 'payment declined'. When you get that response, you move on to the next step in your process and everything is good.
When you don't get a response at all (time out), or the response is invalid, then how you proceed very much depends on the payment gateway:
If the gateway supports it, send a 'cancel payment' style request. If the payment cancels successfully then you probably want to send the user to a 'sorry, please try again' style page.
If the gateway doesn't support cancelling, or you have no communication with the gateway, then you will need to manually (in person, such as by telephone) contact the 3rd party to discover what went wrong and how to proceed. To aid this you need to dump as much detail as you have to error logs, such as date/time, customer ID, transaction value, product IDs, etc.
Once you're back on your site (and payment is accepted), you're much more in control of errors; in brief, if you can't complete the order, you should either dump the details to disk (such as a CSV file for manual handling) or contact the gateway to cancel the payment.
It's also worth having a system in place to track errors as they occur, and if an excessive number occur then consider what should happen. If it's a high-traffic site, for example, you may want to temporarily prevent further customers from placing orders whilst the issue is investigated.
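For illustration, here is a rough sketch of that flow (submit, wait with a timeout, then cancel or log for manual follow-up); the gateway interface and timeout value are assumptions:
// Sketch of the flow described above. The Gateway interface is hypothetical.
type GatewayResponse = { status: "accepted" | "declined"; authCode?: string };

interface Gateway {
  submitPayment(orderId: string, amountCents: number): Promise<GatewayResponse>;
  cancelPayment(orderId: string): Promise<void>; // only if the gateway supports it
}

async function takePayment(gateway: Gateway, orderId: string, amountCents: number) {
  const timeoutMs = 30_000; // illustrative timeout
  try {
    const response = await Promise.race([
      gateway.submitPayment(orderId, amountCents),
      new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error("gateway timeout")), timeoutMs),
      ),
    ]);
    return response; // 'accepted' or 'declined': move on to the next step
  } catch (err) {
    try {
      // If the gateway supports it, try to cancel so the user can retry.
      await gateway.cancelPayment(orderId);
    } catch {
      // No cancel support / no communication: dump everything needed for a
      // manual (e.g. telephone) follow-up with the payment provider.
      console.error("Payment in unknown state, manual follow-up required", {
        orderId,
        amountCents,
        timestamp: new Date().toISOString(),
        error: String(err),
      });
    }
    throw err;
  }
}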
Distributed messaging.
When your payment gateway returns, submit a message to a durable queue that guarantees a handler will eventually get it and process it. The handler would update the database. Should a failure occur at that point, the handler can leave the message in the queue, repost it to the queue, or post an alternate message.
Should something occur later that invalidates the transaction, another message could be queued to "undo" the change.
There's a fair amount of buzz lately about eventual consistency and distributed messaging. NServiceBus is the new component hotness. I suggest looking into it; I know we are.
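A minimal sketch of that pattern; the DurableQueue abstraction is hypothetical and stands in for whatever broker you use (NServiceBus, SQS, RabbitMQ, ...):
// Sketch of the durable-queue idea above.
interface PaymentCompletedMessage {
  orderId: string;
  authCode: string;
  amountCents: number;
}

interface DurableQueue<T> {
  publish(message: T): Promise<void>;                       // persisted before publish() resolves
  subscribe(handler: (message: T) => Promise<void>): void;  // redelivers if the handler fails
}

// Producer: runs as soon as the gateway responds, before any database work.
async function onGatewayResponse(
  queue: DurableQueue<PaymentCompletedMessage>,
  msg: PaymentCompletedMessage,
) {
  await queue.publish(msg); // the broker now guarantees eventual processing
}

// Consumer: updates the database; if that fails the message stays queued
// (or is reposted), so the result is persisted eventually.
function startHandler(
  queue: DurableQueue<PaymentCompletedMessage>,
  saveResult: (m: PaymentCompletedMessage) => Promise<void>,
) {
  queue.subscribe(async (message) => {
    await saveResult(message); // throwing here leaves the message for redelivery
  });
}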