I have a code block that redirects a Cassandra query to different Cassandra tables based on the available parameters, so I check multiple logical conditions in multiple if statements. I'm trying my hand at Java 8 and looking to reduce these conditions to lambda expressions. Here's how the code currently looks:
String processTable(String cid, String postcode, String[] ratingvalue, String ratingType) {
    String table = "";
    if (postcode != null && ratingvalue == null) {
        table = cassconf.getTable1();
    }
    if (postcode != null && ratingvalue != null) {
        table = cassconf.getTable2();
    }
    if (cid != null && ratingvalue == null) {
        table = cassconf.getTable3();
    }
    if (cid != null && ratingvalue != null) {
        table = cassconf.getTable4();
    }
    if (cid != null && postcode != null && ratingvalue == null) {
        table = cassconf.getTable5();
    }
    if (cid != null && postcode != null && ratingvalue != null) {
        table = cassconf.getTable6();
    }
    return table;
}
My problem is that even if I store the arguments in a map and filter the unavailable values from the stream, I don't know how to return the final table value based on these six different conditions.
Considering that ratingvalue can only be null or non-null, you can simplify the code by collapsing each pair of checks into a conditional expression:
String processTable(String cid, String postcode, String[] ratingvalue, String ratingType) {
    if (cid != null)
        if (postcode != null)
            return ratingvalue == null? cassconf.getTable5(): cassconf.getTable6();
        else
            return ratingvalue == null? cassconf.getTable3(): cassconf.getTable4();
    if (postcode != null)
        return ratingvalue == null? cassconf.getTable1(): cassconf.getTable2();
    return "";
}
Testing the conditions in order of precedence and returning immediately is also more efficient than testing all conditions in the reverse order and overwriting the results of previous evaluations.
You could also write the entire evaluation as a single conditional expression:
String processTable(String cid, String postcode, String[] ratingvalue, String ratingType) {
    return cid != null?
               postcode != null? ratingvalue == null? cassconf.getTable5(): cassconf.getTable6():
                                 ratingvalue == null? cassconf.getTable3(): cassconf.getTable4():
           postcode != null?     ratingvalue == null? cassconf.getTable1(): cassconf.getTable2():
           "";
}
An alternative is to use a map lookup:
String processTable(String cid, String postcode, String[] ratingvalue, String ratingType) {
    final int hasCID = 1, hasPostcode = 2, hasRatingValue = 4;
    Map<Integer, Supplier<String>> map = new HashMap<>();
    map.put(hasCID|hasPostcode, cassconf::getTable5);
    map.put(hasCID|hasPostcode|hasRatingValue, cassconf::getTable6);
    map.put(hasCID, cassconf::getTable3);
    map.put(hasCID|hasRatingValue, cassconf::getTable4);
    map.put(hasPostcode, cassconf::getTable1);
    map.put(hasPostcode|hasRatingValue, cassconf::getTable2);
    return map.getOrDefault(
        (cid != null? hasCID: 0) | (postcode != null? hasPostcode: 0)
            | (ratingvalue != null? hasRatingValue: 0),
        () -> "").get();
}
The key point of this alternative is that, depending on what cassconf is and when it is initialized, the map can be prepared once at an earlier stage, so that processTable reduces to the single return map.getOrDefault(…).get() operation.
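A minimal sketch of that idea, assuming cassconf is available at construction time (the CassConf type and the constructor wiring are assumptions made for illustration, not part of the original code):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

interface CassConf {                 // placeholder for the real configuration object
    String getTable1(); String getTable2(); String getTable3();
    String getTable4(); String getTable5(); String getTable6();
}

class TableResolver {
    private static final int HAS_CID = 1, HAS_POSTCODE = 2, HAS_RATING_VALUE = 4;

    // built once, reused for every lookup
    private final Map<Integer, Supplier<String>> map = new HashMap<>();

    TableResolver(CassConf cassconf) {
        map.put(HAS_CID|HAS_POSTCODE, cassconf::getTable5);
        map.put(HAS_CID|HAS_POSTCODE|HAS_RATING_VALUE, cassconf::getTable6);
        map.put(HAS_CID, cassconf::getTable3);
        map.put(HAS_CID|HAS_RATING_VALUE, cassconf::getTable4);
        map.put(HAS_POSTCODE, cassconf::getTable1);
        map.put(HAS_POSTCODE|HAS_RATING_VALUE, cassconf::getTable2);
    }

    String processTable(String cid, String postcode, String[] ratingvalue, String ratingType) {
        int key = (cid != null? HAS_CID: 0)
                | (postcode != null? HAS_POSTCODE: 0)
                | (ratingvalue != null? HAS_RATING_VALUE: 0);
        return map.getOrDefault(key, () -> "").get();
    }
}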
I was thinking of this more as an exercise; it has nothing to do with streams, though, since they can't really help you here.
You could pre-compute a HashMap that offers O(1) time for finding a value, like so:
Map<Integer, String> map = new HashMap<>(16);
map.put(0b1110, "table-6");
map.put(0b1100, "table-5");
map.put(0b1010, "table-4");
map.put(0b1000, "table-3");
map.put(0b0110, "table-2");
map.put(0b0100, "table-1");
This corresponds to whether your cid (bit 3, the leading 1 in these four-bit keys), postcode (bit 2) and ratingvalue (bit 1) are null or not. These are the six combinations that you are looking for.
Also, with the initial capacity of 16 used above, each of these keys lands in its own bucket, so this Map will have one entry per bucket and finding the value that you are interested in will be really fast.
Computing the key you need for the lookup is fairly trivial: you just set the corresponding bit for every value that is not null.
String processTable(String cid, String postcode, String[] ratingvalue, String ratingType) {
    int x = 0; // build the lookup key by setting one bit per non-null parameter
    if (cid != null) {
        x = x | 1 << 3;
    }
    if (postcode != null) {
        x = x | 1 << 2;
    }
    if (ratingvalue != null) {
        x = x | 1 << 1;
    }
    return map.get(x);
}
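For completeness, a minimal self-contained sketch of how the two pieces might fit together, assuming the map is built once up front (the "table-N" strings are the placeholders used above, not real configuration values, and the unmapped combinations fall back to an empty string here as a design choice of the sketch):

import java.util.HashMap;
import java.util.Map;

class TableLookup {
    // built once; keys encode which parameters are present
    private static final Map<Integer, String> MAP = new HashMap<>(16);
    static {
        MAP.put(0b1110, "table-6");   // cid + postcode + ratingvalue
        MAP.put(0b1100, "table-5");   // cid + postcode
        MAP.put(0b1010, "table-4");   // cid + ratingvalue
        MAP.put(0b1000, "table-3");   // cid only
        MAP.put(0b0110, "table-2");   // postcode + ratingvalue
        MAP.put(0b0100, "table-1");   // postcode only
    }

    static String processTable(String cid, String postcode, String[] ratingvalue, String ratingType) {
        int x = 0;
        if (cid != null)         x |= 1 << 3;
        if (postcode != null)    x |= 1 << 2;
        if (ratingvalue != null) x |= 1 << 1;
        return MAP.getOrDefault(x, "");
    }

    public static void main(String[] args) {
        System.out.println(processTable("c1", "12345", null, null)); // prints table-5
    }
}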
Do note that this code was written just as an exercise (well, we do have something close to this in real life, but there are compelling reasons for it - mainly speed).
Java 8 is not a magic wand you can simply wave at a piece of code to instantly improve its readability.
Using null as a sentinel is bad practice and that's fundamentally why your code is hard to read. I suggest you rethink having nullable parameters.
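One way to make the absence of a value explicit is to use java.util.Optional parameters. This is only an illustration of that suggestion, not code from the question; the selectTable name is made up, and cassconf is assumed to be the same configuration object used in the question:

import java.util.Optional;

// Callers state explicitly whether a value is present instead of passing null.
String selectTable(Optional<String> cid, Optional<String> postcode, Optional<String[]> ratingvalue) {
    if (cid.isPresent() && postcode.isPresent()) {
        return ratingvalue.isPresent() ? cassconf.getTable6() : cassconf.getTable5();
    }
    if (cid.isPresent()) {
        return ratingvalue.isPresent() ? cassconf.getTable4() : cassconf.getTable3();
    }
    if (postcode.isPresent()) {
        return ratingvalue.isPresent() ? cassconf.getTable2() : cassconf.getTable1();
    }
    return "";
}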
Without fixing that, this is probably the best you can do:
if (ratingvalue == null)
{
    if (cid != null && postcode != null) {
        table = cassconf.getTable5();
    }
    else if (postcode != null) {
        table = cassconf.getTable1();
    }
    else if (cid != null) {
        table = cassconf.getTable3();
    }
}
else
{
    if (cid != null && postcode != null) {
        table = cassconf.getTable6();
    }
    else if (postcode != null) {
        table = cassconf.getTable2();
    }
    else if (cid != null) {
        table = cassconf.getTable4();
    }
}
On the CRCase screen I added a new field, UsrFinishDate. It is generated when I assign an owner to the case, and I allow users to change the DateTime of UsrFinishDate if they want. My requirements are:
The default value of UsrFinishDate is calculated from the SLAETA when I assign a user.
If owners want to add more days beyond the SLA, they can change the DateTime on UsrFinishDate, and it should take the new date together with the current time when they change it. My problem is that it is not calculated correctly: the time always comes out as 12:00 AM or 12:00 PM, and when I change UsrFinishDate the time part keeps the same current time even after the user has changed it.
Below is my code:
protected void CRCase_UsrFinishDate_FieldDefaulting(PXCache cache, PXFieldDefaultingEventArgs e, PXFieldDefaulting InvokeBaseHandler)
{
if (InvokeBaseHandler != null)
InvokeBaseHandler(cache, e);
CRCase row = e.Row as CRCase;
CRCaseExt rowExt = PXCache<CRCase>.GetExtension<CRCaseExt>(row);
if (row == null || row.AssignDate == null) return;
if (row.ClassID != null && row.Severity != null)
{
var severity = (CRClassSeverityTime)PXSelect<CRClassSeverityTime,
Where<CRClassSeverityTime.caseClassID, Equal<Required<CRClassSeverityTime.caseClassID>>,
And<CRClassSeverityTime.severity, Equal<Required<CRClassSeverityTime.severity>>>>>.
Select(Base, row.ClassID, row.Severity);
if (severity != null && severity.TimeReaction != null)
{
e.NewValue = ((DateTime)row.AssignDate).AddMinutes((int)severity.TimeReaction);
e.Cancel = true;
}
}
if (row.Severity != null && row.ContractID != null)
{
var template = (Contract)PXSelect<Contract, Where<Contract.contractID, Equal<Required<CRCase.contractID>>>>.Select(Base, row.ContractID);
if (template == null) return;
var sla = (ContractSLAMapping)PXSelect<ContractSLAMapping,
Where<ContractSLAMapping.severity, Equal<Required<CRCase.severity>>,
And<ContractSLAMapping.contractID, Equal<Required<CRCase.contractID>>>>>.
Select(Base, row.Severity, template.TemplateID);
if (sla != null && sla.Period != null)
{
e.NewValue = ((DateTime)row.AssignDate).AddMinutes((int)sla.Period);
e.Cancel = true;
}
}
}
protected void CRCase_UsrFinishDate_FieldUpdated(PXCache cache, PXFieldUpdatedEventArgs e)
{
    var row = e.Row as CRCase;
    CRCaseExt rowExt = PXCache<CRCase>.GetExtension<CRCaseExt>(row);
    if (rowExt != null)
    {
        System.DateTime today = (DateTime)rowExt.UsrFinishDate;
        // note: this TimeSpan is zero, so the Add below leaves UsrFinishDate unchanged
        System.TimeSpan duration = new System.TimeSpan(0, 0, 0, 0);
        rowExt.UsrFinishDate = today.Add(duration);
    }
}
It does not seem correct at all. Please help!
Could you please describe your usage scenario? In particular, this information would be helpful:
What date do you want to store in this field?
What do you expect to see there? Date? Time?
DAC field definition and attributes
What do you mean by the "duration" field?
Thanks.
As in the question I'm asking above on the Case screen (screen ID: CR306000), the default value sets the SLA DateTime from the DateTime the case was created.
For example: Case 1 was created on 01/01/2016 09:58 AM with severity H for 3 days, so the SLA DateTime should be 01/03/2016 09:58 AM. But I assigned case 1 to the owner on 01/02/2016 09:58 AM. I just added a new field named Test SLA:
[PXDBDate(PreserveTime = true, DisplayMask = "g")]
[PXUIField(DisplayName="Test SLA")]
[PXFormula(typeof(Default<CRCase.contractID, CRCase.severity, CRCase.caseClassID>))]
protected void CRCase_UsrTestSLA_FieldDefaulting(PXCache cache, PXFieldDefaultingEventArgs e)
{
CR.CRCase row = e.Row as CR.CRCase;
if (row == null || row.AssignDate == null) return;
if (row.ClassID != null && row.Severity != null)
{
CR.CRClassSeverityTime severity = PXSelect<CR.CRClassSeverityTime,
Where<CR.CRClassSeverityTime.caseClassID, Equal<Required<CR.CRClassSeverityTime.caseClassID>>,
And<CR.CRClassSeverityTime.severity, Equal<Required<CR.CRClassSeverityTime.severity>>>>>
.Select(Base,row.ClassID,row.Severity);
if (severity != null && severity.TimeReaction != null)
{
e.NewValue = ((DateTime)row.AssignDate).AddMinutes((int)severity.TimeReaction);
e.Cancel = true;
}
}
if (row.Severity != null && row.ContractID != null)
{
Contract template = PXSelect<Contract, Where<Contract.contractID, Equal<Required<CRCase.contractID>>>>
.Select(Base, row.ContractID);
if (template == null) return;
ContractSLAMapping sla = PXSelect<ContractSLAMapping,
Where<ContractSLAMapping.severity, Equal<Required<CRCase.severity>>,
And<ContractSLAMapping.contractID, Equal<Required<CRCase.contractID>>>>>
.Select(Base, row.Severity, template.TemplateID);
if (sla != null && sla.Period != null)
{
e.NewValue = ((DateTime)row.AssignDate).AddMinutes((int)sla.Period);
e.Cancel = true;
}
}
}
I am working on a legacy code base which has the following snippet:
if ((results[0].Length == 0))
customerName = "";
else
customerName = results[0].Substring(18);
if ((results[1].Length == 0))
meterSerialNumber = "";
else
meterSerialNumber = results[1];
if ((results[2].Length == 0))
customerID = "";
else
customerID = results[2];
if ((results[3].Length == 0))
meterCreditAmount = "";
else
meterCreditAmount = results[3];
if ((results[4].Length == 0))
debtInstallmentDeduction = "";
else
debtInstallmentDeduction = results[4];
if ((results[5].Length == 0))
vatOnEnergyAmount = "";
else
vatOnEnergyAmount = results[5];
if ((results[6].Length == 0))
vatOnDebt = "";
else
vatOnDebt = results[6];
if ((results[7].Length == 0))
outstandingDebtAmount = "";
else
outstandingDebtAmount = results[7];
if ((results[8].Length == 0))
tariffCategory = "";
else
tariffCategory = results[8];
if ((results[9].Length == 0))
tariffId = "";
else
tariffId = results[9];
if ((results[10].Length == 0))
encryptedToken1 = "";
else
encryptedToken1 = results[10];
if ((results[11].Length == 0))
encryptedToken2 = "";
else
encryptedToken2 = results[11];
if ((results[12].Length == 0))
encryptedToken3 = "";
else
encryptedToken3 = results[12];
if ((results[13].Length == 0))
encryptedToken4 = "";
else
encryptedToken4 = results[13];
if ((results[14].Length == 0))
systemMessage = "";
else
systemMessage = results[14];
if ((results[15].Length == 0))
customerMessage = "";
else
customerMessage = results[15];
if ((results[16].Length == 0))
predefinedMessage = "";
else
predefinedMessage = results[16];
if ((results[17].Length == 0))
transactionAcknowledgeNumber = "";
else
transactionAcknowledgeNumber = results[17];
What would be the best way to refactor this for acceptable coding standards? Would it be acceptable to make this a case statement instead?
This is not a case-wise selection, so it can't be refactored into a switch-case. However, it can be converted to functional-style code and then factored out into a separate method, so that the "ugly" part is hidden behind a method call.
Step#1 - Making the code functional
Here, we rewrite the code by following functional code writing practices. The rewritten code will look like:
customerName = (results[0].Length == 0) ? "" : results[0].Substring(18);
meterSerialNumber = (results[1].Length == 0) ? "" : results[1];
customerID = (results[2].Length == 0) ? "" : results[2];
meterCreditAmount = (results[3].Length == 0) ? "" : results[3];
debtInstallmentDeduction = (results[4].Length == 0) ? "" : results[4];
vatOnEnergyAmount = (results[5].Length == 0) ? "" : results[5];
.
.
.
transactionAcknowledgeNumber = (results[17].Length == 0) ? "" : results[17];
There are numerous advantages of writing the code this way. Important ones include:
terseness of code;
values being assigned in one place (by means of the ternary operator) instead of two (once in the if clause and once in the else clause).
Step#2 - Factoring out the method
Now that the values are being initialized functionally, you can create a class (or you may already have one) containing the properties customerName, meterSerialNumber, ..., transactionAcknowledgeNumber. Either the constructor of the class can read results and populate the class members, or you can write a method to do it. So it will look like:
ResultValues resultVal = new ResultValues();
resultVal.Read(results);
.
.
.
//Accessing the values later in the code
Print(resultVal.customerName);
...
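A rough sketch of what such a Read method might look like, shown in Java here (the question's code is C#, but the structure is analogous); the field list is abbreviated and the names are taken from the answer above, so treat them as placeholders:

// Hypothetical container that hides the repetitive extraction behind one call.
class ResultValues {
    String customerName;
    String meterSerialNumber;
    String customerID;
    // ... remaining fields up to transactionAcknowledgeNumber ...

    void read(String[] results) {
        customerName      = results[0].isEmpty() ? "" : results[0].substring(18);
        meterSerialNumber = results[1].isEmpty() ? "" : results[1];
        customerID        = results[2].isEmpty() ? "" : results[2];
        // ... and so on for the remaining indices ...
    }
}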
PS:
1. I admit that ResultValues may not be a good class to create. Alternatively, you may create multiple classes by grouping the related data and then have the Read() method of each class read its values from results.
2. The essential idea of Step#2 is to factor out the "ugly" part to another simple and readable method call(s).
I am using the Cassandra C++ driver in my application and I am observing many crashes. After debugging I identified that even when the query output is zero rows, the if (result == NULL) check is false, and when I iterate through the result it crashes at one place or another. Below is a code sample. Please suggest a solution.
const char* query = "SELECT variable_id, variable_name FROM aqm_wfvariables WHERE template_id = ?;";
CassError rc = CASS_OK;
CassSession* session = NULL;
if((session=CassandraDbConnect::getInstance()->getSessionForCassandra())==NULL){
return false;
}
CassStatement* statement = cass_statement_new(query, 1);
cass_statement_bind_int32(statement, 0, wf_template_id );
CassFuture* query_future = cass_session_execute(session, statement);
cass_future_wait(query_future);
rc = cass_future_error_code(query_future);
if (rc != CASS_OK) {
logMsg(DEBUG, 7, "cass_session_execute failed for query #%d:%s:%s", 1, __FILE__, query);
cass_statement_free(statement);
return false;
}
cass_statement_free(statement);
const CassResult* result = cass_future_get_result(query_future);
if (result == NULL) {
cass_future_free(query_future);
logMsg(DEBUG, 7, "No values are returned for query #%d:%s:%s", 1, __FILE__, query);
return false;
}
cass_future_free(query_future);
CassIterator* row_iterator = cass_iterator_from_result(result);
while (cass_iterator_next(row_iterator)) {
const CassRow* row = cass_iterator_get_row(row_iterator);
/* Copy data from the row */
You should use
(cass_result_row_count(result) > 0)
instead of
(result == NULL)
to verify whether the query returned zero rows. In your code, result is always a valid CassResult and not a null pointer when zero rows are returned.
The source code of the method scanAndLockForPut in ConcurrentHashMap in JDK 7 is:
private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
HashEntry<K,V> first = entryForHash(this, hash);
HashEntry<K,V> e = first;
HashEntry<K,V> node = null;
int retries = -1; // negative while locating node
while (!tryLock()) {
HashEntry<K,V> f; // to recheck first below
if (retries < 0) {
if (e == null) {
if (node == null) // speculatively create node
node = new HashEntry<K,V>(hash, key, value, null);
retries = 0;
}
else if (key.equals(e.key))
retries = 0;
else
e = e.next;
}
else if (++retries > MAX_SCAN_RETRIES) {
lock();
break;
}
else if ((retries & 1) == 0 &&
(f = entryForHash(this, hash)) != first) {
e = first = f; // re-traverse if entry changed
retries = -1;
}
}
return node;
}
I understand what the code means, but what I don't understand is this else if branch:
else if ((retries & 1) == 0 && (f = entryForHash(this, hash)) != first)
My question is:
Why do we have to do "(retries & 1) == 0"?
EDIT:
I think I've figured it out. It's all because of the constant MAX_SCAN_RETRIES:
static final int MAX_SCAN_RETRIES = Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
On a single-core processor, MAX_SCAN_RETRIES = 1, so the second time the thread goes around the while (!tryLock()) loop it doesn't have to check whether the first node has changed.
On a multi-core processor, however, this behaves like checking whether the first node has changed on every second iteration of the while loop.
Is the above explanation correct?
Let's break this down:
1:
(retries & 1) == 0
retries & 1 is 1 for odd numbers and 0 for even numbers, so this condition is true only when retries is even. Basically, the check gets past this guard on every other retry.
2:
f = entryForHash(this, hash)
f is a temporary variable used to store the current first entry of the bucket, re-read from the table.
3:
(/* ... */) != first
Checks whether the first entry has changed. If it has, the traversal restarts from the new head (e = first = f and retries = -1) while the loop keeps trying to acquire the lock.
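To make the alternation concrete, here is a tiny standalone sketch (illustrative only, not part of the JDK source) of which spin counts trigger the head re-check:

public class HeadRecheckTrace {
    public static void main(String[] args) {
        for (int retries = 0; retries <= 6; retries++) {
            // same parity guard as in scanAndLockForPut
            boolean recheckHead = (retries & 1) == 0;
            System.out.println("retries=" + retries + " -> recheck head: " + recheckHead);
        }
        // prints true for 0, 2, 4, 6 and false for 1, 3, 5: every other spin re-reads the head
    }
}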
I've asked this question on the concurrency-interest mailing list, and the author (Doug Lea) himself replied:
Yes. We need only ensure that staleness is eventually detected.
Alternating the head-checks works fine, and simplifies use of
the same code for both uni- and multi- processors.
link
So I think this is the end of this question.
I think there is a bug in this method!
First, let us look at the put method:
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
HashEntry<K,V> node = tryLock() ? null :
scanAndLockForPut(key, hash, value);//1. scanAndLockForPut only return
// null or a new Entry
V oldValue;
try {
HashEntry<K,V>[] tab = table;
int index = (tab.length - 1) & hash;
HashEntry<K,V> first = entryAt(tab, index);
for (HashEntry<K,V> e = first;;) {
if (e != null) {
K k;
if ((k = e.key) == key ||
(e.hash == hash && key.equals(k))) {
oldValue = e.value;
if (!onlyIfAbsent) {
e.value = value;
++modCount;
}
break;
}
e = e.next;
}
else {
// 2. here the node is null or a new Entry
// and the node.next is the origin head node
if (node != null)
node.setNext(first);
else
node = new HashEntry<K,V>(hash, key, value, first);
int c = count + 1;
if (c > threshold && tab.length < MAXIMUM_CAPACITY)
rehash(node);
else
setEntryAt(tab, index, node);//3. finally, the node become
// the new head,so eventually
// every thing we put will be
// the head of the entry list
// and it may appears two equals
// entry in the same entry list.
++modCount;
count = c;
oldValue = null;
break;
}
}
} finally {
unlock();
}
return oldValue;
}
Step 1: scanAndLockForPut only returns null or a new Entry.
Step 2: at this point node is either null or a new Entry, and its next is the original head node.
Step 3: finally, the node becomes the new head, so everything we put becomes the head of the entry list, and two equal entries may appear in the same entry list when the ConcurrentHashMap is used in a concurrent environment.
That is my opinion, and I am not entirely sure whether it is right or not, so I hope you can all give me some advice. Thanks a lot!