We are working with a set of web services and we're looking for the best option to return errors to the web service's consumer. This is the current response:
Response
Some data about the server
Some data about the user
Some resulting data of executing the transaction
So, we need to return errors too. These are our options:
Composite message
We'll return one of two kinds of response, depending on whether the transaction was approved or had an error:
First:
Type identifier (this message is serialized, so I need to know which kind of message I'm dealing with in order to deserialize the last part)
Some data about the server
Some data about the user
Some resulting data of executing the transaction
Second:
Type identifier (this message is serialized, so I need to know which kind of message I'm dealing with in order to deserialize the last part)
Some data about the server
Some data about the user
The errors
Optional fields
The transaction data and error fields will be optional. If there are no errors, I will know the transaction was approved.
Some data about the server
Some data about the user
Some resulting data of executing the transaction
The errors
Which option is more appropriate?
This is debatable, and more a matter of personal opinion than of best practice.
My personal preference is the optional fields, because an error is a possible outcome of an operation. I would expect the client to always check the (optional) error properties of the returned result first, before parsing the rest. This also allows returning non-fatal errors and partial results together. Exclusive makes it so ... exclusive. Optional is more flexible.
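As a rough illustration of the optional-fields shape, here is a minimal sketch in Java; the class and member names (TransactionResponse, errors, and so on) are made up for the example, not part of the actual service:

import java.util.Collections;
import java.util.List;

// Hypothetical response type for the "optional fields" option: the error
// list is always present, and an empty list means the transaction was approved.
public class TransactionResponse {
    private String serverData;          // some data about the server
    private String userData;            // some data about the user
    private String transactionResult;   // set on success, may be null on error
    private List<String> errors = Collections.emptyList();

    public boolean isApproved() {
        return errors.isEmpty();
    }

    public String getServerData() { return serverData; }
    public String getUserData() { return userData; }
    public String getTransactionResult() { return transactionResult; }
    public List<String> getErrors() { return errors; }
}

A client would then always check getErrors() (or isApproved()) first and only parse getTransactionResult() afterwards, which is what leaves room for non-fatal errors alongside partial results.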
My model represents users with unique names. In order to achieve that I store user and its name as 2 separate items using TransactWriteItems. The approximate structure looks like this:
PK                  | data
--------------------+---------------------
userId#<userId>     | {user data}
userName#<userName> | {userId: <userId>}
Data arrives at a Lambda from a Kinesis stream. If one Lambda invocation processes an "insert" event and another Lambda request comes in at about the same time (the difference can be 5 milliseconds), the "update" event causes a TransactionConflictException: Transaction is ongoing for the item error.
Should I just retry the update a second or so later? I couldn't really find a resolution strategy.
That implies you're getting data about the same user in quick succession, and both writes are hitting the same items. One succeeds while the other throws.
Is it always duplicate data? If you’re sure it is, then you can ignore the second write. It would be a no-op.
Is it different data? Then you've got to decide how to handle that conflict. You'll have one dataset in the database and a different dataset live in your code. That's a business-logic question, not a database question.
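If you do end up retrying, a sketch of what that might look like with the AWS SDK for Java v2 follows. The table name, attribute names, and backoff numbers are assumptions for illustration, and note that a failed uniqueness condition should not be retried, only genuine transaction conflicts:

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;

public class UserWriter {
    private static final String TABLE = "users"; // hypothetical table name

    private final DynamoDbClient ddb = DynamoDbClient.create();

    public void insertUser(String userId, String userName, String userJson)
            throws InterruptedException {
        TransactWriteItemsRequest request = TransactWriteItemsRequest.builder()
            .transactItems(
                // Item 1: the user record itself.
                TransactWriteItem.builder().put(Put.builder()
                    .tableName(TABLE)
                    .item(Map.of(
                        "PK", AttributeValue.builder().s("userId#" + userId).build(),
                        "data", AttributeValue.builder().s(userJson).build()))
                    .build()).build(),
                // Item 2: the name record that enforces uniqueness.
                TransactWriteItem.builder().put(Put.builder()
                    .tableName(TABLE)
                    .item(Map.of(
                        "PK", AttributeValue.builder().s("userName#" + userName).build(),
                        "data", AttributeValue.builder().s(userId).build()))
                    .conditionExpression("attribute_not_exists(PK)")
                    .build()).build())
            .build();

        // Retry with exponential backoff and jitter while another transaction
        // holds the same items. A TransactionCanceledException whose reason is
        // ConditionalCheckFailed means the name is already taken; don't retry that.
        for (int attempt = 0; ; attempt++) {
            try {
                ddb.transactWriteItems(request);
                return;
            } catch (TransactionConflictException e) {
                if (attempt == 4) throw e; // give up after a few tries
                Thread.sleep((long) (100 * Math.pow(2, attempt) + Math.random() * 50));
            }
        }
    }
}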
I am using Cloud Endpoints with Objectify and Firestore in Datastore mode. Although the documentation says that all queries are strongly consistent, I have found that they are not, as the following examples show:
Example 1
I made an endpoint that queries for an entity by a property, adds +1 to a count property on it, and saves it back to the Datastore. I then have 50 different clients all execute that method at the same time. I would expect the count property to be 50; however, it usually ends up somewhere between 25 and 30.
Example 2
I have an endpoint that queries for an entity by a property. If the entity does not exist, I create it and save it to the Datastore. If it exists, I just return it. Again, I hit this endpoint with 50 different clients at the same time. I would expect there to be only one entity in the Datastore; however, I will have maybe 5-10 copies of the same entity.
It seems to me this is not strongly consistent. If I take the code in the above endpoints and put it in a transaction with retries, all works as intended. I looked around in Objectify to see if a ReadOptions is set somewhere, but from what I can see it is not, so it should be using the default of read_consistency=STRONG.
For Example 1, you need to use transactions to ensure that writes do not stomp on each other.
For Example 2, again, you need to use a transaction to get consistency across clients.
Strong consistency means that if a client writes a value, it can read or query it back after the write succeeds. It does not mean that if one client reads a value, another client reads the same value, each applies a transformation, and both try to write the result, the two clients' blind writes will merge together.
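For what it's worth, a sketch of Example 1 inside an Objectify transaction might look like the following. The Counter entity is hypothetical, and the Runnable overload of transact() assumes Objectify 6:

import static com.googlecode.objectify.ObjectifyService.ofy;

import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
class Counter {
    @Id Long id;
    long count;
    Counter() {}
    Counter(long id) { this.id = id; }
}

public class CounterService {
    // Datastore detects concurrent modification of the entity and Objectify
    // retries the whole closure, so 50 parallel calls converge on count == 50.
    public void increment(long counterId) {
        ofy().transact(() -> {
            Counter counter = ofy().load().type(Counter.class).id(counterId).now();
            if (counter == null) {
                counter = new Counter(counterId); // also covers Example 2's create-if-absent
            }
            counter.count++;
            ofy().save().entity(counter).now();
        });
    }
}

Note that the lookup inside the transaction is by key rather than by property query; loading by key within the transaction is what gives you the read-modify-write guarantee.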
I'm using a CDatabase*/CRecordset* duo to read an HFSQL (WinDev) database through an ODBC DSN.
There are many issues with HFSQL's handling of binary blobs, especially when they're empty.
One such problem causes my app to fire warnings and exceptions in a loop as I read a table. I use a custom class that manages the recordset, fetches every field once in ascending ordinal order, and stores the resulting CDBVariant vars in a vector for my own later use. The error I get alternates between "Warning: ODBC Success With Info on field 8." when the field has content and "Error: GetFieldValue operation failed on field 8. Data already fetched for this field." when it has none. Clearly I have not fetched the field before, so either the wrong error message is displayed, or the CRecordset believes it is correct and I should be able to detect the condition beforehand.
How can I detect whether my CRecordset considers a field to have already been fetched? GetODBCFieldInfo does not give me any useful information; have I missed something?
I am trying to use IBM MQ client 9 with C++. I would like to read only messages that have group ID '2'. I have tried everything, but it just does not work. Can someone assist, please?
I tried setting the group ID and the flag to match on group.
MQGET
gmoptions.setMatchOptions(MQMO_MATCH_GROUP_ID);
MQBYTE24 bGroupId("2");
ImqBinary _groupId;
_groupId.set(bGroupId, sizeof(bGroupId));
message.setGroupId(_groupId);
q->get(message, gmoptions);
MQPUT
MQBYTE24 bGroupId("2");
ImqBinary _groupId;
_groupId.set(bGroupId, sizeof(bGroupId));
message.setGroupId(_groupId);
ImqPutMessageOptions pmo;
pmo.setOptions(MQPMO_LOGICAL_ORDER);
pmo.setRecordFields(MQPMRF_GROUP_ID);
q->put(message, pmo);
MQGET should be able to get all the messages with group ID "2", but it does not. It can read the message as soon as I remove setMatchOptions, though.
Basically, I want to use the group ID as a filter, where server instance 1 reads only messages in group 1, server instance 2 reads only messages in group 2, and so on, instead of creating a separate queue for each server instance.
Maybe the following can help me if the group ID is only for batching rather than filtering, though I'm not sure how to do 'Selection using the MQSUB and MQOPEN function calls' in C++:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q022990_.htm
Is there any C++ equivalent of MQSETMP? I am unable to find any interface in ImqQueue or ImqObject that will let me set a message property or a selection string.
I don't think you are going about this the right way.
IBM published a Java/MQ sample program called GetGroup.java that gets messages in a group. You can use it as a model for your C++ program.
Basically, the code retrieves a message from the queue and then checks the messageFlags field to see if the message is part of a group.
if ((myMessage.messageFlags & CMQC.MQMF_MSG_IN_GROUP) == CMQC.MQMF_MSG_IN_GROUP)
If the message is part of a group, the code sets the matchOptions to match on the group ID and retrieves all of the messages in the group.
Note: You will probably want to add logical order to the GMO options.
gmo.options |= CMQC.MQGMO_LOGICAL_ORDER;
Finally, what is this?
pmo.setRecordFields(MQPMRF_GROUP_ID);
That doesn't make any sense. You should be setting the messageFlags field to MQMF_MSG_IN_GROUP.
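A rough Java sketch of that pattern, modeled loosely on GetGroup.java (queue setup, wait intervals, and error handling are simplified or guessed; check them against the sample):

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.constants.CMQC;

public class GroupReader {
    // Get the first available message; if it belongs to a group, keep
    // matching on its group ID until the last message in the group is read.
    public void readGroup(MQQueue queue) throws MQException {
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_LOGICAL_ORDER;
        gmo.waitInterval = 5000;

        MQMessage message = new MQMessage();
        queue.get(message, gmo);

        if ((message.messageFlags & CMQC.MQMF_MSG_IN_GROUP) == CMQC.MQMF_MSG_IN_GROUP) {
            byte[] groupId = message.groupId;
            while ((message.messageFlags & CMQC.MQMF_LAST_MSG_IN_GROUP)
                    != CMQC.MQMF_LAST_MSG_IN_GROUP) {
                message = new MQMessage();
                message.groupId = groupId;
                gmo.matchOptions = CMQC.MQMO_MATCH_GROUP_ID;
                queue.get(message, gmo);
                // ... process each message of the group here ...
            }
        }
    }
}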
You can use the concept of SELECTORS in IBM MQ. A message selector is a variable-length string used by an application to register its interest in only those messages that have properties that satisfy the Structured Query Language (SQL) query that the selection string represents.
A message selector is a concept that has been in the JMS specification for a long time. It is a way of limiting the messages that are passed to an application to those that meet certain criteria. Those criteria are based on the values of the message properties, and only the values of the message properties. It is important to understand that selection cannot be based on any values of the message payload, only on the message property values.
In your case, the PUT application will have to put messages with a certain property populated in the MQMD or MQRFH2 header, and using the MQ interface function calls the GET application should be able to pick out only the messages with a certain value, which in your case is the group ID value.
Below are a few reference links on the concept:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.dev.doc/q022990_.htm
http://publib.boulder.ibm.com/infocenter/ieduasst/v1r1m0/topic/com.ibm.iea.wmq_v7/wmq/7.0/MQI/iea_330_wmqv7_API_3_Selectors.pdf (downloads a PDF)
To understand message properties: https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_8.0.0/com.ibm.mq.dev.doc/q022920_.htm
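Since selectors come from JMS, the idea is easiest to sketch there. The following uses IBM MQ classes for JMS; the property name (myGroup), queue manager, and queue name are illustrative only:

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.ibm.mq.jms.MQConnectionFactory;

public class SelectorSketch {
    public static void main(String[] args) throws JMSException {
        MQConnectionFactory factory = new MQConnectionFactory();
        factory.setQueueManager("QM1"); // illustrative queue manager

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("queue:///APP.QUEUE"); // illustrative queue

        // Sender: stamp each message with a property to select on later.
        TextMessage message = session.createTextMessage("payload");
        message.setStringProperty("myGroup", "2"); // hypothetical property name
        MessageProducer producer = session.createProducer(queue);
        producer.send(message);

        // Receiver: the selector limits delivery to matching messages, so each
        // server instance can consume only "its" messages from a shared queue.
        MessageConsumer consumer = session.createConsumer(queue, "myGroup = '2'");
        connection.start();
        Message received = consumer.receive(5000);
        // ... process 'received' ...
        connection.close();
    }
}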
Background
In distributed systems, messages can arrive out of order. For example, if message A is sent at time T1 and message B is sent at T2, there is a chance that B is received before A. This matters, for example, if A is a message such as "CustomerRegistered" and B is "CustomerUnregistered".
In other databases I'd typically write a tombstone if CustomerUnregistered is received for a customer that is not present in the database. I can then check whether this tombstone exists when the CustomerRegistered message is received (and perhaps simply ignore the message, depending on the use case). I could of course do something similar with Datomic, but I hope Datomic can help me avoid this.
One potential solution I'm thinking of is this:
Can you perhaps retract a non-existing customer entity (CustomerUnregistered) and later, when CustomerRegistered is received, write the customer entity at a time in history before the retraction? It would be neat (I think) if :db/txInstant could be set to a timestamp defined in the message.
Question
How would one deal with this scenario in Datomic in an idiomatic way?
As a general principle, do not let your application code manipulate :db/txInstant. :db/txInstant represents the time at which you learned a fact, not the time at which it happened.
Maybe you should consider un-registration as adding a datom about a customer (e.g. via an instant-typed :customer/unregistered attribute) instead of retracting the datoms of that customer (which means: "forget that this customer existed").
However, if retracting the datoms of the customer is really the way you want to do things, I'd use a marker record that prevents the customer registration transaction from taking place, which I'd enforce via a transaction function.
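A small sketch of the first suggestion, using the Datomic Peer API from Java. The attributes :customer/id and :customer/unregistered are illustrative, and the map form assumes :customer/id is declared :db.unique/identity so the transaction upserts onto the existing entity (or creates it if the unregister event arrives first):

import java.util.Date;
import datomic.Connection;
import datomic.Util;

public class CustomerEvents {
    // Record un-registration as a datom on the customer entity instead of
    // retracting the entity. When a late CustomerRegistered event arrives,
    // the unregistration fact is still there to be checked.
    public static void recordUnregistered(Connection conn, String customerId, Date when)
            throws Exception {
        conn.transact(Util.list(Util.map(
            ":customer/id", customerId,        // assumed :db.unique/identity (upsert)
            ":customer/unregistered", when     // assumed instant-typed attribute
        ))).get();
    }
}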