Upserting a document in CosmosDb, but only if "newer" - concurrency

I need to update documents in a CosmosDb.
{
  "firstname": "...",
  "lastname": "...",
  ...
  "lastmodified": "2022-01-13T12:06:18.712Z"
}
Due to concurrency (i.e. multiple concurrent "updaters"), the update (or insertion, hence upsert?) should only take place if the incoming data is newer (data.lastmodified) than the data already persisted.
What is the proposed way to achieve this?
In plain SQL I'd opt for:
UPDATE address SET ... WHERE address.lastmodified < newdata.lastmodified
or INSERT ON DUPLICATE KEY UPDATE
If Upsert allowed specifying constraints for when to effectively upsert (i.e. address.lastmodified < newdata.lastmodified), I'd use those. But I guess ItemRequestOptions is not meant for that?
"concurrency"-context: updates are being posted into a service bus queue and an event-triggered AzureFunction handles the Events. Chances are, that multiple Events for the same data end up "concurrently" in the queue and hence are being executed "concurrently"
Thanks for your advice,
Clemens (new to CosmosDb et al.)

This can be achieved using partial document update with a conditional update. You can use Add or Set patch operations and supply a filter predicate equivalent to your WHERE address.lastmodified < newdata.lastmodified clause to control whether the patch executes.
For more information, see Partial Document Updates in Azure Cosmos DB.
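A minimal sketch with the .NET SDK (Microsoft.Azure.Cosmos); the Address type and the container, id, and updated variables are illustrative, not from the question:

List<PatchOperation> ops = new List<PatchOperation>
{
    PatchOperation.Set("/firstname", updated.Firstname),
    PatchOperation.Set("/lastname", updated.Lastname),
    PatchOperation.Set("/lastmodified", updated.LastModified) // ISO 8601 string
};
PatchItemRequestOptions options = new PatchItemRequestOptions
{
    // the patch only applies when the stored document matches this predicate;
    // ISO 8601 UTC timestamps of equal precision compare correctly as strings
    FilterPredicate = $"FROM c WHERE c.lastmodified < '{updated.LastModified}'"
};
try
{
    await container.PatchItemAsync<Address>(id, new PartitionKey(id), ops, options);
}
catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.PreconditionFailed)
{
    // the stored document is already newer: drop the stale update
}

Note that a patch only modifies an existing item; for the insert half of the upsert you would still have to create the document when the patch fails with NotFound (404).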

Related

Limiting array size in Cloud Firestore security rules using request.resource.data?

I have the two following security rules; the latter checks whether the document's premiumUntill value in the DB is greater than the current time, meaning the premium is still valid.
The issue is with the first rule: I want to limit the array size so it won't exceed a length of 50, and I am pushing with arrayUnion(data). Should I check the size of resource.data rather than request.resource.data? In my testing request.resource.data.arr.size() < 50 works, but it doesn't seem to make sense to check the incoming data, since the request carries only the payload. Is it something about arrayUnion() that makes it work?
await updateDoc(docRef, {
  arr: arrayUnion(payload),
}).catch((error) => {
  errorHandeling(error, 'An error has happened', reduxDispatch, SetSnackBarMsg);
});
&& request.resource.data.arr.size() < 50
&& resource.data.premiumUntill > request.time
In Cloud Firestore security rules, resource refers to the existing document in the database, and request.resource refers to the document as it exists in the request (during a write, i.e. a set or update).
From the documentation on data validation:
The resource variable refers to the requested document, and resource.data is a map of all of the fields and values stored in the document. For more information on the resource variable, see the reference documentation.
When writing data, you may want to compare incoming data to existing data. In this case, if your ruleset allows the pending write, the request.resource variable contains the future state of the document. For update operations that only modify a subset of the document fields, the request.resource variable will contain the pending document state after the operation. You can check the field values in request.resource to prevent unwanted or inconsistent data updates:
service cloud.firestore {
  match /databases/{database}/documents {
    // Make sure all cities have a positive population and
    // the name is not changed
    match /cities/{city} {
      allow update: if request.resource.data.population > 0
                    && request.resource.data.name == resource.data.name;
    }
  }
}
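Applied to your rules, checking request.resource.data is therefore correct: with arrayUnion the pending document state already contains the merged array, so its size reflects the result of the write. A sketch of the combined rule (the collection path here is an assumption):

match /users/{userId} {
  // post-write array size, i.e. after arrayUnion(payload) is applied
  allow update: if request.resource.data.arr.size() < 50
                && resource.data.premiumUntill > request.time;
}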
Additionally, you may watch this video and also have a look at this Stack Overflow post.

Different results when making a Corda query via API and inside a flow

I'm getting some strange behaviour. When I update a state with a list of partner ids (other nodes) and read the state afterwards, it seems that via rpcOps.vaultQueryBy I can see the updated (unconsumed) state with the updated list of partners, but if I do the same query via serviceHub.vaultService.queryBy it looks like the state's partner list hasn't changed at all.
If I get all states in the flow, including the consumed ones, it looks like there has been no change, but via the API all updates to the partners list are visible. Is this some sort of bug I have encountered, or am I just not understanding something?
We're using Corda 4.0.
Via API
var servicestates = rpcOps.vaultQueryBy<ServiceState>().states.map { it.state.data }
var services = getServices().filter {
    it.linearId == UniqueIdentifier.fromString(serviceId)
}.single()
Inside flow
val serviceStateAndRef = serviceHub.vaultService.queryBy<ServiceState>(
    QueryCriteria.LinearStateQueryCriteria(linearId = listOf(serviceLinearId))
).states.single()
@Ashutosh Meher You got it near enough. The problem was in a previous flow: when creating a new partner state, the contract Command listed only the caller as a signer.
So
Command(ServiceContract.Commands.AddPartner(),listOf(ourIdentity.owningKey))
had to be edited to include necessary other parties.
Command(ServiceContract.Commands.AddPartner(),updatedServiceState.participants.map { it.owningKey })
That resulted in the other node not seeing the change. It was right under my eyes all the time... ;)
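Worth adding for completeness (this part is an assumption about the surrounding flow, not shown above): once the command requires every participant's key, the initiating flow also has to collect those signatures before finality, along these lines:

// open sessions with every other participant and gather their signatures
val sessions = (updatedServiceState.participants - ourIdentity)
    .map { initiateFlow(it as Party) }
val fullySignedTx = subFlow(CollectSignaturesFlow(partiallySignedTx, sessions))
subFlow(FinalityFlow(fullySignedTx, sessions))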

Google Cloud Datastore - get after insert in one request

I am trying to retrieve an entity immediately after it was saved. When debugging, I insert the entity, check the entities in the Google Cloud console, and I can see it was created.
Key key = datastore.put(fullEntity)
After that, I continue with getting the entity with
datastore.get(key)
but nothing is returned. How do I retrieve the saved entity within one request?
I've read this question (Missing entities after insertion in Google Cloud DataStore), but I am only saving one entity, not tens of thousands like in that question.
I am using Java 11 and Google Cloud Datastore (the com.google.cloud.datastore package).
Edit: added the code showing how the entity was created.
public Key create.... {
    // creating the entity inside a method
    Transaction txn = this.datastore.newTransaction();
    this.datastore = DatastoreOptions.getDefaultInstance().getService();
    Builder<IncompleteKey> builder = newBuilder(entitykey);
    setLongOrNull(builder, "price", purchase.getPrice());
    setTimestampOrNull(builder, "validFrom", of(purchase.getValidFrom()));
    setStringOrNull(builder, "invoiceNumber", purchase.getInvoiceNumber());
    setBooleanOrNull(builder, "paidByCard", purchase.getPaidByCard());
    newPurchase = entityToObject(this.datastore.put(builder.build()));
    if (newPurchase != null && purchase.getItems() != null && purchase.getItems().size() > 0) {
        for (Item item : purchase.getItems()) {
            newPurchase.getItems().add(this.itemDao.save(item, newPurchase));
        }
    }
    txn.commit();
    return newPurchase.getKey();
}
After that, I try to retrieve the created entity:
Key key = create(...);
Entity e = datastore.get(key);
I believe there are a few issues with your code, but since we can't see the logic behind many of your methods, here is my guess.
First of all, as you can see in the documentation, it is possible to save and retrieve an entity in the same code path, so that in itself is not the problem.
It seems you are using a transaction, which is the right tool for performing multiple operations as a single action, but you aren't using it properly: you only instantiate and commit it, without putting any operations on it. Furthermore, you use this.datastore to save to the database, which completely bypasses the transaction.
So either save the object once it already has all of its items added, or create a transaction that saves all the entities at once.
Also, I believe you should use the entityKey to fetch the added purchase afterwards, but don't mix the two approaches.
Also you are creating the Transaction object from this.datastore before instantiating the latter, but I assume this is a copy-paste error.
Since you're creating a transaction for this operation, the entity put should happen inside the transaction:
txn.put(builder.build());
Also, the operations inside the loop where you add the purchase.getItems() to the newPurchase object should also be done in the context of the same transaction.
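For illustration, here is a minimal sketch of the save-then-read pattern with the put going through the transaction (the kind and property names are made up, not taken from your code):

import com.google.cloud.datastore.*;

Datastore datastore = DatastoreOptions.getDefaultInstance().getService();
KeyFactory keyFactory = datastore.newKeyFactory().setKind("Purchase");
Key key = datastore.allocateId(keyFactory.newKey()); // pre-allocate a complete key

Entity purchase = Entity.newBuilder(key)
        .set("invoiceNumber", "INV-1")
        .set("price", 100L)
        .build();

Transaction txn = datastore.newTransaction();
try {
    txn.put(purchase); // staged inside the transaction
    txn.commit();      // nothing is persisted until this succeeds
} finally {
    if (txn.isActive()) {
        txn.rollback();
    }
}

Entity reloaded = datastore.get(key); // visible now that the commit happened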
Let me know if this resolves the issue.
Cheers!

Postgres advisory lock within function allows concurrent execution

I'm encountering an issue with a function that is intended to require serialized access under certain circumstances. This seemed like a good case for using advisory locks. However, under fairly heavy load, I'm finding that the serialized access isn't happening and I'm seeing concurrent executions of the function.
The intention of this function is to provide "inventory control" for an event: it is meant to limit concurrent ticket purchases for a given event so that the event is not oversold. These are the only advisory locks used within the application/database.
I'm finding that occasionally an event ends up with more tickets than the eventTicketMax value. This doesn't seem like it should be possible because of the advisory locks. When testing with low volume (or with manually introduced delays such as pg_sleep after acquiring the lock), things work as expected.
CREATE OR REPLACE FUNCTION createTicket(
    userId int,
    eventId int,
    eventTicketMax int
) RETURNS integer AS $$
DECLARE insertedId int;
DECLARE numTickets int;
BEGIN
    -- first get the event lock
    PERFORM pg_advisory_lock(eventId);
    -- make sure we aren't over ticket max
    numTickets := (SELECT count(*) FROM api_ticket
                   WHERE event_id = eventId and status <> 'x');
    IF numTickets >= eventTicketMax THEN
        -- raise an exception if this puts us over the max
        -- and bail
        PERFORM pg_advisory_unlock(eventId);
        RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
    END IF;
    -- create the ticket
    INSERT INTO api_ticket (
        user_id,
        event_id,
        created_ts
    )
    VALUES (
        userId,
        eventId,
        now()
    )
    RETURNING id INTO insertedId;
    -- update the ticket count
    UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;
    -- release the event lock
    PERFORM pg_advisory_unlock(eventId);
    RETURN insertedId;
END;
$$ LANGUAGE plpgsql;
Here's my environment setup:
Django 1.8.1 (django.db.backends.postgresql_psycopg2 w/ CONN_MAX_AGE 300)
PGBouncer 1.7.2 (session mode)
Postgres 9.3.10 on Amazon RDS
Additional variables which I tried tuning:
setting CONN_MAX_AGE to 0
Removing pgbouncer and connecting directly to DB
In my testing, I noticed that in cases where an event was oversold, the tickets were purchased from different webservers, so I don't think there is any funny business about a shared session, but I can't say for sure.
As soon as PERFORM pg_advisory_unlock(eventId) is executed, another session can grab that lock. But since the INSERT of session #1 is not yet committed, it will not be counted in the COUNT(*) of session #2, resulting in the over-booking.
If you keep the advisory lock strategy, you must use transaction-level advisory locks (pg_advisory_xact_lock), as opposed to session-level locks. Those locks are automatically released at COMMIT time.
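A sketch of the relevant portion of the function rewritten that way (same tables and variables as above, untested):

-- take a transaction-level lock; it is released automatically when the
-- enclosing transaction commits or rolls back, i.e. only after the INSERT
-- below has become visible to other sessions
PERFORM pg_advisory_xact_lock(eventId);
numTickets := (SELECT count(*) FROM api_ticket
               WHERE event_id = eventId AND status <> 'x');
IF numTickets >= eventTicketMax THEN
    -- no explicit unlock needed: RAISE aborts the transaction,
    -- which releases the lock
    RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
END IF;
-- ... INSERT and UPDATE exactly as before, with both
-- pg_advisory_unlock calls removed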

Time tracking when status is changed

I have a question about a specific functionality in Siebel, regarding service requests.
Is there a way to track how long a service request stays in a given status/substatus, for example "Waiting on Customer"? When the service request changes to another status that is no longer a "waiting on somebody" status, I have to stop counting the time.
I don't know of any out-of-the-box solution for your needs, but there are many ways to achieve it with a bit of customisation. For example:
Create two new fields, Waiting Time (with predefault value: 0) and Waiting Date.
Create the following BC user properties:
On Field Update Set x = "Status", "Waiting Time", "IIF([Waiting Date] IS NULL, [Waiting Time], [Waiting Time] + (Timestamp() - [Waiting Date]))"
On Field Update Set y = "Status", "Waiting Date", "IIF([Status]='Waiting on Customer',Timestamp(),NULL)"
Your Waiting Date field will store the last time the service request changed to "Waiting on Customer", or NULL if it's on another status. Then, Waiting Time will accumulate the total time the request has been in that status.
I have not tested the solution, it might need some more work, for example, it's possible that Siebel doesn't allow you to use the expression [Waiting Time] + (Timestamp() - [Waiting Date]) directly and you'll have to decompose it using auxiliary calculated fields.
Note also that the On Field Update Set user property has changed its syntax from Siebel 7.7-7.8 to Siebel 8.x.
If you're familiar with server scripting, you could implement something similar quite easily in the BusComp_PreSetFieldValue event: if the field being changed is Status, check whether you're entering or leaving the "Waiting on Customer" status, and update the two fields accordingly, as in the sketch below.
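A rough, untested eScript sketch of that idea (it assumes the two custom fields from above; the date parsing/formatting is simplified and would need to match your actual field formats):

function BusComp_PreSetFieldValue (FieldName, FieldValue)
{
    if (FieldName == "Status")
    {
        var oldStatus = this.GetFieldValue("Status");
        if (oldStatus == "Waiting on Customer" && FieldValue != "Waiting on Customer")
        {
            // leaving the status: accumulate the elapsed minutes
            var started = new Date(this.GetFieldValue("Waiting Date"));
            var elapsed = (new Date() - started) / 60000;
            var total = ToNumber(this.GetFieldValue("Waiting Time")) + elapsed;
            this.SetFieldValue("Waiting Time", ToString(total));
            this.SetFieldValue("Waiting Date", "");
        }
        else if (oldStatus != "Waiting on Customer" && FieldValue == "Waiting on Customer")
        {
            // entering the status: remember when the wait started
            this.SetFieldValue("Waiting Date", ToString(new Date()));
        }
    }
    return (ContinueOperation);
}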