How to handle NULL values in BigQuery while writing through Dataflow? - google-cloud-platform

I am ingesting data from a database into BigQuery using the JdbcIO source connector and the BigQueryIO sink connector provided by Apache Beam.
Below is my sample table data:
As we can see, a few columns, such as id and booking_date, contain NULL values. So when I try to write the data into BigQuery, it gives the below error:
"message": "Error while reading data, error message: JSON parsing error in row starting at position 0: Only optional fields can be set to NULL. Field: status; Value: NULL
If I pass null in booking_date, it gives an invalid date format error.
Below is the RowMapper I am using to convert the JdbcIO ResultSet into a TableRow. It is the same code that the GCP JdbcToBigQuery template uses:
public TableRow mapRow(ResultSet resultSet) throws Exception {
  ResultSetMetaData metaData = resultSet.getMetaData();
  TableRow outputTableRow = new TableRow();
  for (int i = 1; i <= metaData.getColumnCount(); i++) {
    if (resultSet.getObject(i) == null) {
      outputTableRow.set(getColumnRef(metaData, i), resultSet.getObject(i));
      // outputTableRow.set(getColumnRef(metaData, i), String.valueOf(resultSet.getObject(i)));
      continue;
    }
    /*
     * DATE: EPOCH MILLISECONDS -> yyyy-MM-dd
     * DATETIME: EPOCH MILLISECONDS -> yyyy-MM-dd hh:mm:ss.SSSSSS
     * TIMESTAMP: EPOCH MILLISECONDS -> yyyy-MM-dd hh:mm:ss.SSSSSSXXX
     *
     * MySQL drivers have ColumnTypeName in all caps and postgres in small case
     */
    switch (metaData.getColumnTypeName(i).toLowerCase()) {
      case "date":
        outputTableRow.set(
            getColumnRef(metaData, i), dateFormatter.format(resultSet.getDate(i)));
        break;
      case "datetime":
        outputTableRow.set(
            getColumnRef(metaData, i),
            datetimeFormatter.format((TemporalAccessor) resultSet.getObject(i)));
        break;
      case "timestamp":
        outputTableRow.set(
            getColumnRef(metaData, i), timestampFormatter.format(resultSet.getTimestamp(i)));
        break;
      case "clob":
        Clob clobObject = resultSet.getClob(i);
        if (clobObject.length() > Integer.MAX_VALUE) {
          LOG.warn(
              "The Clob value size {} in column {} exceeds 2GB and will be truncated.",
              clobObject.length(),
              getColumnRef(metaData, i));
        }
        outputTableRow.set(
            getColumnRef(metaData, i), clobObject.getSubString(1, (int) clobObject.length()));
        break;
      default:
        outputTableRow.set(getColumnRef(metaData, i), resultSet.getObject(i).toString());
    }
  }
  return outputTableRow;
}
For more details, see the JdbcToBigQuery template.
Solutions I tried without success:
I tried to skip the column when it is null, but then I get the error Missing required field.
I tried to hard-code the value as the string "null" for all cases so that I could handle it later, but then I get the error Could not convert value 'string_value: \t \"null\"' to integer.
How can I handle all the NULL cases? Please note that I cannot simply drop these rows, since other columns in them do contain values.

To solve your issue, you have to pass null if the date value is null, and you have to set the associated BigQuery columns to NULLABLE:
public TableRow mapRow(ResultSet resultSet) throws Exception {
  ResultSetMetaData metaData = resultSet.getMetaData();
  TableRow outputTableRow = new TableRow();
  for (int i = 1; i <= metaData.getColumnCount(); i++) {
    if (resultSet.getObject(i) == null) {
      outputTableRow.set(getColumnRef(metaData, i), resultSet.getObject(i));
      // outputTableRow.set(getColumnRef(metaData, i), String.valueOf(resultSet.getObject(i)));
      continue;
    }
    /*
     * DATE: EPOCH MILLISECONDS -> yyyy-MM-dd
     * DATETIME: EPOCH MILLISECONDS -> yyyy-MM-dd hh:mm:ss.SSSSSS
     * TIMESTAMP: EPOCH MILLISECONDS -> yyyy-MM-dd hh:mm:ss.SSSSSSXXX
     *
     * MySQL drivers have ColumnTypeName in all caps and postgres in small case
     */
    switch (metaData.getColumnTypeName(i).toLowerCase()) {
      case "date":
        String date = Optional.ofNullable(resultSet.getDate(i))
            .map(d -> dateFormatter.format(d))
            .orElse(null);
        outputTableRow.set(getColumnRef(metaData, i), date);
        break;
      case "datetime":
        String datetime = Optional.ofNullable(resultSet.getObject(i))
            .map(d -> datetimeFormatter.format((TemporalAccessor) d))
            .orElse(null);
        outputTableRow.set(getColumnRef(metaData, i), datetime);
        break;
      case "timestamp":
        String timestamp = Optional.ofNullable(resultSet.getTimestamp(i))
            .map(t -> timestampFormatter.format(t))
            .orElse(null);
        outputTableRow.set(getColumnRef(metaData, i), timestamp);
        break;
      case "clob":
        Clob clobObject = resultSet.getClob(i);
        if (clobObject.length() > Integer.MAX_VALUE) {
          LOG.warn(
              "The Clob value size {} in column {} exceeds 2GB and will be truncated.",
              clobObject.length(),
              getColumnRef(metaData, i));
        }
        outputTableRow.set(
            getColumnRef(metaData, i), clobObject.getSubString(1, (int) clobObject.length()));
        break;
      default:
        outputTableRow.set(getColumnRef(metaData, i), resultSet.getObject(i).toString());
    }
  }
  return outputTableRow;
}
For the date, datetime and timestamp blocks, I apply the transformation only if the value is not null; otherwise I set a null value.
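For reference, here is a minimal sketch of what declaring the target columns as NULLABLE can look like when building the schema passed to BigQueryIO via withSchema(). The column names and types (id, booking_date, status) are only examples taken from the question, and the helper name buildNullableSchema is hypothetical; adapt it to your actual table:
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Arrays;

// Sketch: every column that may contain NULL must be declared with mode NULLABLE,
// otherwise BigQuery rejects rows where the field is null ("Only optional fields can be set to NULL").
static TableSchema buildNullableSchema() {
  return new TableSchema()
      .setFields(
          Arrays.asList(
              new TableFieldSchema().setName("id").setType("INTEGER").setMode("NULLABLE"),
              new TableFieldSchema().setName("booking_date").setType("DATE").setMode("NULLABLE"),
              new TableFieldSchema().setName("status").setType("STRING").setMode("NULLABLE")));
}

// Usage (sketch): rows.apply(BigQueryIO.writeTableRows().to(tableSpec).withSchema(buildNullableSchema()) ...);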

Related

Passing Side Input in PCollection Partition

I want to pass a side input to PCollection Partition and, on the basis of that, I need to divide my PCollection. Is there any way to do this?
PCollectionList<TableRow> part = merged.apply(Partition.of(Pcollection Count Function Called, new PartitionFn<TableRow>(){
@Override
public int partitionFor(TableRow arg0, int arg1) {
return 0;
}
}));
Is there any other way through which I can partition my PCollection?
//Without Dynamic destination partitioning BigQuery table
merge.apply("write into target", BigQueryIO.writeTableRows()
.to(new SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>() {
@Override
public TableDestination apply(ValueInSingleWindow<TableRow> value) {
TableRow row = value.getValue();
TableReference reference = new TableReference();
reference.setProjectId("XYZ");
reference.setDatasetId("ABC");
System.out.println("date of row " + row.get("authorized_transaction_date_yyyymmdd").toString());
LOG.info("date of row "+
row.get("authorized_transaction_date_yyyymmdd").toString());
String str = row.get("authorized_transaction_date_yyyymmdd").toString();
str = str.substring(0, str.length() - 2) + "01";
System.out.println("str value " + str);
LOG.info("str value " + str);
reference.setTableId("TargetTable$" + str);
return new TableDestination(reference, null);
}
}).withFormatFunction(new SerializableFunction<TableRow, TableRow>() {
@Override
public TableRow apply(TableRow input) {
LOG.info("format function:"+input.toString());
return input;
}
})
.withSchema(schema1).withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
Now I have to use a dynamic destination instead of this and do the partitioning. Any solution?
Based on seeing TableRow in your code, I suspect that you want to write a PCollection to BigQuery, sending different elements to different BigQuery tables. BigQueryIO.write() already provides a method to do that, using BigQueryIO.write().to(DynamicDestinations). See Writing different values to different BigQuery tables in Apache Beam.
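As a rough sketch only (adapted from the code above, not verified against your pipeline), the month-partition routing could be expressed with DynamicDestinations along these lines. The field name authorized_transaction_date_yyyymmdd, the ids XYZ/ABC, the TargetTable name and schema1 come from the question; everything else is an assumption, and schema1 is assumed to be available and serializable:
merge.apply("write into target",
    BigQueryIO.writeTableRows()
        .to(new DynamicDestinations<TableRow, String>() {
          @Override
          public String getDestination(ValueInSingleWindow<TableRow> element) {
            // Derive the month partition (yyyymm01) from the row's date field, as in the original code.
            String str = element.getValue().get("authorized_transaction_date_yyyymmdd").toString();
            return str.substring(0, str.length() - 2) + "01";
          }

          @Override
          public TableDestination getTable(String partition) {
            // Partition decorator on the target table.
            return new TableDestination("XYZ:ABC.TargetTable$" + partition, null);
          }

          @Override
          public TableSchema getSchema(String partition) {
            return schema1; // same schema for every partition
          }
        })
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
With this approach no withFormatFunction is needed, because writeTableRows() already receives TableRow elements.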

How to delete all the items from all tables of Amazon's DynamoDB?

Just like backing up all the tables of DynamoDB, I also want to clear all the tables of my test environment after testing, without deleting the tables themselves.
We implemented the backup in such a way that we don't need the schema structure or a Java object of the table schema, as below:
Map<String, AttributeValue> exclusiveStartKey = null;
do {
// Let the rate limiter wait until our desired throughput "recharges"
rateLimiter.acquire(permitsToConsume);
ScanSpec scanSpec = new ScanSpec().withReturnConsumedCapacity(ReturnConsumedCapacity.TOTAL)
.withMaxResultSize(25);
if(exclusiveStartKey!=null){
KeyAttribute hashKey = getExclusiveStartHashKey(exclusiveStartKey, keySchema);
KeyAttribute rangeKey = getExclusiveStartRangeKey(exclusiveStartKey, keySchema);
if(rangeKey!=null){
scanSpec.withExclusiveStartKey(hashKey, rangeKey);
}else{
scanSpec.withExclusiveStartKey(hashKey);
}
}
Table table = dynamoDBInstance.getTable(tableName);
ItemCollection<ScanOutcome> response = table.scan(scanSpec);
StringBuffer data = new StringBuffer();
Iterator<Item> iterator = response.iterator();
while (iterator.hasNext()) {
Item item = iterator.next();
data.append(item.toJSON());
data.append("\n");
}
logger.debug("Data read from table: {} ", data.toString());
if(response.getLastLowLevelResult()!=null){
exclusiveStartKey = response.getLastLowLevelResult().getScanResult().getLastEvaluatedKey();
}else{
exclusiveStartKey = null;
}
// Account for the rest of the throughput we consumed,
// now that we know how much that scan request cost
if(response.getTotalConsumedCapacity()!=null){
double consumedCapacity = response.getTotalConsumedCapacity().getCapacityUnits();
if(logger.isDebugEnabled()){
logger.debug("Consumed capacity : " + consumedCapacity);
}
permitsToConsume = (int)(consumedCapacity - 1.0);
if(permitsToConsume <= 0) {
permitsToConsume = 1;
}
}
} while (exclusiveStartKey != null);
Is it possible to delete all items without knowing the table schema? Can we do it using DeleteItemSpec?
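One possible approach (just a sketch, assuming the same AWS SDK for Java document API used in the scan code above): the key attribute names can be discovered at run time via describe(), so no hard-coded schema or Java object of the table schema is needed, and each scanned item can then be deleted by its primary key. The helper name clearTable is hypothetical, and the rate limiting / batching (BatchWriteItem) from the backup code is omitted for brevity:
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.ScanOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.ScanSpec;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;

// Sketch: clear every item of a table without knowing its schema in advance.
static void clearTable(DynamoDB dynamoDBInstance, String tableName) {
  Table table = dynamoDBInstance.getTable(tableName);

  // Discover the hash/range key attribute names from the table description.
  String hashKeyName = null;
  String rangeKeyName = null;
  for (KeySchemaElement kse : table.describe().getKeySchema()) {
    if ("HASH".equals(kse.getKeyType())) {
      hashKeyName = kse.getAttributeName();
    } else {
      rangeKeyName = kse.getAttributeName();
    }
  }

  // Scan the table and delete each item by its primary key.
  ItemCollection<ScanOutcome> items = table.scan(new ScanSpec());
  for (Item item : items) {
    if (rangeKeyName == null) {
      table.deleteItem(hashKeyName, item.get(hashKeyName));
    } else {
      table.deleteItem(hashKeyName, item.get(hashKeyName), rangeKeyName, item.get(rangeKeyName));
    }
  }
}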

Salesforce Trigger Test Error

Hello!
I am working on unit tests for a trigger within Salesforce, and I keep encountering an error that I can't seem to solve, so I am hoping someone with more experience can help me get back on track. I've hunted around on Google and messed with the structure of my code many times, but I'm unable to find a solution.
Purpose:
I have been tasked with writing a trigger that will handle the logic required to maintain case rankings per developer. Each developer is assigned cases and those cases may or may not have a priority determined by the business. Each developer may only have 10 cases prioritized at any one time. Any other cases will just have a null value in the ranking field. If a case with a ranking is inserted, updated, or deleted then all the other cases assigned to that developer with a rank must automatically update accordingly. Any case with a rank higher than 10 will get nulled out.
Problem:
I have completed the trigger and the trigger handler class; now I am writing unit tests to cover the code. When I finished my first unit test method, I got an error that referenced a workflow issue. I found and corrected the issue, but after I finished my second unit test method I got the same error again. I can comment out either of the two methods and the other passes, but whenever I run them together I get a pass on the first and a fail on the second, with the same original error.
Error:
System.DmlException: Upsert failed. First exception on row 0; first error: CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY, A workflow or approval field update caused an error when saving this record. Contact your administrator to resolve it.
Developer Assigned Email: invalid email address: false: []
Code:
Trigger Code -
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/09/15
* @brief: This is a trigger for the Case object that will modify the rank of the Cases
* assigned to the developer based on a priority set by the Administrator.
***************************************************************************************/
trigger CaseRankTrigger on Case (before insert, before update, before delete) {
// trigger level variables
private static Boolean firstRun = true;
private RecordType ticketRecordType = [SELECT Id FROM RecordType WHERE SobjectType = 'Case' AND Name = 'Salesforce Service Ticket' LIMIT 1];
private List<Case> newTrigger = trigger.new;
private List<Case> currentTrigger = trigger.old;
private List<Case> qualifiedNewCases = new List<Case>();
private List<Case> qualifiedCurrentCases = new List<Case>();
// makes sure that the trigger only runs once
if (firstRun) {
firstRun = false;
// populate trigger Case lists
qualifyCases();
if (qualifiedNewCases.size() == 1 || qualifiedCurrentCases.size() == 1) {
// the CaseRankTriggerHandler constructor method takes (List<Case>, List<Case>, String)
if (trigger.isInsert) CaseRankTriggerHandler handler = new CaseRankTriggerHandler(qualifiedNewCases, qualifiedCurrentCases, 'Insert'); // if a new Case is being inserted into the database
if (trigger.isUpdate) CaseRankTriggerHandler handler = new CaseRankTriggerHandler(qualifiedNewCases, qualifiedCurrentCases, 'Update'); // if an existing Case is being updated
if (trigger.isDelete) CaseRankTriggerHandler handler = new CaseRankTriggerHandler(qualifiedNewCases, qualifiedCurrentCases, 'Delete'); // if an existing Case is deleted from the database
}
}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/24/15
* @brief: The qualifyCases method populates a list of Cases for each trigger
* that are of the Salesforce Service Ticket record type only.
* @return: Void
***************************************************************************************/
public void qualifyCases() {
if (newTrigger != null ) {
for (Case c : newTrigger) {
if (c.recordTypeId == ticketRecordType.Id) {
qualifiedNewCases.add(c);
}
}
}
if (currentTrigger != null) {
for (Case c : currentTrigger) {
if (c.recordTypeId == ticketRecordType.Id) {
qualifiedCurrentCases.add(c);
}
}
}
}
}
Trigger Handler Code -
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/09/15
* @brief: This is a Case object trigger handler class that provides logic to the CaseRankTrigger for manipulating
* the ranks of all Cases assigned to a developer based on a priority that is set by an Administrator.
***************************************************************************************/
public with sharing class CaseRankTriggerHandler {
// class level variables
private static Boolean firstRun = true;
private static Boolean modify = false;
private static Integer MAX = 10;
private static Integer MIN = 1;
private List<Case> newTrigger {get; set;}
private List<Case> currentTrigger {get; set;}
private List<Case> cases {get; set;}
private List<Case> newList {get; set;}
private List<Case> currentList {get; set;}
private String developer {get; set;}
private Decimal newRank {get; set;}
private Decimal currentRank {get; set;}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/16/15
* @brief: Class constructor method.
* @return: Void
***************************************************************************************/
public CaseRankTriggerHandler(List<Case> newT, List<Case> oldT, String type) {
if (firstRun) { // makes sure that the trigger only runs once
firstRun = false;
InitializeTrigger(newT, oldT, type); // initializes the trigger
if (developer != null) { // skips trigger if DML is performed on a Case with no developer assigned
ModificationCheck(type); // determines if Cases need to be modified
if (modify) ModificationLogic(type); // modifies Cases if needed
}
}
}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/16/15
* @brief: The InitializeTrigger method initializes the handler class based on the type of trigger fired.
* @return: Void
***************************************************************************************/
private void InitializeTrigger(List<Case> newT, List<Case> oldT, String type) {
if (type == 'Insert') {
this.newTrigger = newT;
this.developer = newTrigger[0].Resource_Assigned__c;
this.newRank = newTrigger[0].Case_Rank__c;
this.newList = [SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Resource_Assigned__c = :developer AND Case_Rank__c != null AND Case_Rank__c = :newRank ORDER BY Case_Rank__c];
} else if (type == 'Update') {
this.newTrigger = newT;
this.currentTrigger = oldT;
this.developer = newTrigger[0].Resource_Assigned__c;
this.newRank = newTrigger[0].Case_Rank__c;
this.currentRank = currentTrigger[0].Case_Rank__c;
this.newList = [SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Resource_Assigned__c = :developer AND Case_Rank__c != null AND Case_Rank__c = :newRank ORDER BY Case_Rank__c];
this.currentList = [SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Resource_Assigned__c = :developer AND Case_Rank__c != null AND Case_Rank__c = :currentRank ORDER BY Case_Rank__c];
} else if (type == 'Delete') {
this.currentTrigger = oldT;
this.developer = currentTrigger[0].Resource_Assigned__c;
this.currentRank = currentTrigger[0].Case_Rank__c;
}
}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/16/15
* @brief: The ModificationCheck method ensures various conditions are met, depending on the type
* of trigger that was fired, before modifying the ranks of the Cases assigned to the developer.
* @return: Void
***************************************************************************************/
private void ModificationCheck(String type) {
if (type == 'Insert') {
// the Case being inserted has a new rank not equal to null and if the assigned developer already has a Case with the
// same rank as the new rank, we will proceed to modification, if not the record will be inserted without modification.
if (newRank != null && !newList.isEmpty()) {
modify = true;
}
} else if (type == 'Update') {
// if the Case being updated has ranks with different values in both triggers we will proceed to the next check, if not the record is updated without modification.
if (newRank != currentRank) {
// if the Case being updated has a (new rank equal to null and a current rank not equal to 10) or
// if the Case being updated has a new rank not equal to null, we will proceed to the next check,
// if not the record is updated without modification.
if ((newRank == null && currentRank != 10) || newRank != null) {
// if the assigned developer on the Case being updated already has a Case with the same rank as the new or current rank, we will proceed to modification,
// if not the record is updated without modification.
if (!newList.isEmpty() || !currentList.isEmpty()) {
modify = true;
}
}
}
} else if (type == 'Delete') {
// if the Case being deleted has current rank not equal to null, we will proceed to modification, if not the record is deleted without modification.
if (currentRank != null) {
modify = true;
}
}
}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/16/15
* @brief: If a Case rank needs to be updated the ModificationLogic method calls the appropriate
* computation method based on trigger type and the values of newRank and currentRank.
* @return: Void
***************************************************************************************/
private void ModificationLogic(String type) {
if (type == 'Insert') {
for (Case c : newTrigger) {
// calls the IncreaseCaseRank method and passes it a list of Cases that are assigned to the developer that have a rank greater than or equal to the new rank.
IncreaseCaseRank([SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Id NOT IN :newTrigger AND Resource_Assigned__c = :developer AND Case_Rank__c >= :newRank ORDER BY Case_Rank__c]);
}
} else if (type == 'Update') {
for (Case c : newTrigger) {
if (currentRank == null) {
// if the current rank is null - calls the IncreaseCaseRank method and passes it a list of Cases that are assigned to the developer that have a rank greater than or equal to the new rank.
IncreaseCaseRank([SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Id NOT IN :newTrigger AND Resource_Assigned__c = :developer AND Case_Rank__c >= :newRank ORDER BY Case_Rank__c]);
} else if (newRank == null) {
// if the new rank is null - calls the DecreaseCaseRank method and passes it a list of Cases that are assigned to the developer that have a rank greater than the current rank.
DecreaseCaseRank([SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Id NOT IN :newTrigger AND Resource_Assigned__c = :developer AND Case_Rank__c > :currentRank ORDER BY Case_Rank__c]);
} else if (newRank > currentRank) {
// if the new rank is greater than the current rank - calls the DecreaseCaseRank method and passes it a list of Cases that are assigned to the developer that have a rank less than or equal to the new rank and greater than to the current rank.
DecreaseCaseRank([SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Id NOT IN :newTrigger AND Resource_Assigned__c = :developer AND (Case_Rank__c <= :newRank AND Case_Rank__c > :currentRank) ORDER BY Case_Rank__c]);
} else if (newRank < currentRank) {
// if the new rank is less than the current rank - calls the IncreaseCaseRank method and passes it a list of Cases that are assigned to the developer that have a rank a.
IncreaseCaseRank([SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Id NOT IN :newTrigger AND Resource_Assigned__c = :developer AND (Case_Rank__c >= :newRank AND Case_Rank__c < :currentRank) ORDER BY Case_Rank__c]);
}
}
} else if (type == 'Delete') {
for (Case c : currentTrigger) {
// calls the DecreaseCaseRank method and passes it a list of Cases that are assigned to the developer that have a rank greater than the current rank.
DecreaseCaseRank([SELECT Subject, CaseNumber, Case_Rank__c FROM Case WHERE Id NOT IN :currentTrigger AND Resource_Assigned__c = :developer AND Case_Rank__c > :currentRank ORDER BY Case_Rank__c]);
}
}
}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/16/15
* @brief: The DecreaseCaseRank method provides the logic required to properly
* decrease or null out the ranks of the Cases assigned to the developer.
* @return: Void
***************************************************************************************/
private void DecreaseCaseRank(List<Case> cases) {
// if the list of Cases passed in by the ModificationLogic method isn't empty then it will iterate through the
// list and decrease their ranks by 1 or null out the rank if it is not within the acceptable limits (1-10).
if (!cases.isEmpty()) {
for (Case c : cases) {
if (c.Case_Rank__c >= 1 && c.Case_Rank__c <= 10) {
c.Case_Rank__c = c.Case_Rank__c - 1;
} else {
c.Case_Rank__c = null;
}
}
update cases;
}
return;
}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/16/15
* @brief: The IncreaseCaseRank method provides the logic required to properly
* increase or null out the ranks of the Cases assigned to the developer.
* @return: Void
***************************************************************************************/
private void IncreaseCaseRank(List<Case> cases) {
// if the list of Cases passed in by the ModificationLogic method isn't empty then it will iterate through the
// list and increase their ranks by 1 or null out the rank if it is not within the acceptable limits (1-10).
if (!cases.isEmpty()) {
for (Case c : cases) {
if (c.Case_Rank__c >= 1 && c.Case_Rank__c < 10) {
c.Case_Rank__c = c.Case_Rank__c + 1;
} else {
c.Case_Rank__c = null;
}
}
update cases;
}
return;
}
}
Trigger Handler Test Code -
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/24/15
* @brief: This is the test class for the CaseRankTriggerHandler class
***************************************************************************************/
@isTest
public with sharing class CaseRankTriggerHandlerTest {
// class level variables
static User testRequestor = createTestRequestor();
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/24/15
* @brief: The InsertCase_NewRankNull test method ensures that the insert functionality of the
* CaseRankTrigger is working as intended when a new Case is inserted with a null rank.
***************************************************************************************/
@isTest
static void InsertCase_NewRankNull() {
// creates the initial case load for 'Test Developer' by passing in a list of integers that will become the ranks for the cases
createDeveloperCase_Multiple(new List<Integer> {1, 2, 3, 4, 5, 6, 7, 8, 9, 10});
// starting the test by inserting a new Case with a null rank
Test.startTest();
createDeveloperCase_Single('Null', null);
Test.stopTest();
// queries the system to create a map of Cases assigned to 'Test Developer' that are keyed by Rank with Subject as the value
Map<Decimal, String> caseMap = createCaseMap();
// system asserts to ensure that Cases are in the proper order
System.assertEquals('Test Case (1)', caseMap.get(1), 'Test Developer should have \'Test Case (1)\' as rank 1 but instead has ' + caseMap.get(1));
System.assertEquals('Test Case (2)', caseMap.get(2), 'Test Developer should have \'Test Case (2)\' as rank 2 but instead has ' + caseMap.get(2));
System.assertEquals('Test Case (3)', caseMap.get(3), 'Test Developer should have \'Test Case (3)\' as rank 3 but instead has ' + caseMap.get(3));
System.assertEquals('Test Case (4)', caseMap.get(4), 'Test Developer should have \'Test Case (4)\' as rank 4 but instead has ' + caseMap.get(4));
System.assertEquals('Test Case (5)', caseMap.get(5), 'Test Developer should have \'Test Case (5)\' as rank 5 but instead has ' + caseMap.get(5));
System.assertEquals('Test Case (6)', caseMap.get(6), 'Test Developer should have \'Test Case (6)\' as rank 6 but instead has ' + caseMap.get(6));
System.assertEquals('Test Case (7)', caseMap.get(7), 'Test Developer should have \'Test Case (7)\' as rank 7 but instead has ' + caseMap.get(7));
System.assertEquals('Test Case (8)', caseMap.get(8), 'Test Developer should have \'Test Case (8)\' as rank 8 but instead has ' + caseMap.get(8));
System.assertEquals('Test Case (9)', caseMap.get(9), 'Test Developer should have \'Test Case (9)\' as rank 9 but instead has ' + caseMap.get(9));
System.assertEquals('Test Case (10)', caseMap.get(10), 'Test Developer should have \'Test Case (10)\' as rank 10 but instead has ' + caseMap.get(10));
System.assertEquals('Test Case (Null)', caseMap.get(null), 'Test Developer should have \'Test Case (Null)\' as rank null but instead has ' + caseMap.get(null));
delete [SELECT Id FROM Case WHERE Resource_Assigned__c = 'Test Developer'];
}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/24/15
* @brief: The InsertCase_NewRankNotNull test method ensures that the insert functionality of the
* CaseRankTrigger is working as intended when a new Case is inserted with a rank that is not null.
***************************************************************************************/
@isTest
static void InsertCase_NewRankNotNull() {
// creates the initial case load for 'Test Developer' by passing in a list of integers that will become the ranks for the cases
createDeveloperCase_Multiple(new List<Integer> {1, 2, 3, 4, 5, 6, 7, 8, 9, 10});
// starting the test by inserting a new Case with a null rank
Test.startTest();
createDeveloperCase_Single('NewNotNull', 4);
Test.stopTest();
// queries the system to create a map of Cases assigned to 'Test Developer' that are keyed by Rank with Subject as the value
Map<Decimal, String> caseMap = createCaseMap();
// system asserts to ensure that Cases are in the proper order
System.assertEquals('Test Case (1)', caseMap.get(1), 'Test Developer should have \'Test Case (1)\' as rank 1 but instead has ' + caseMap.get(1));
System.assertEquals('Test Case (2)', caseMap.get(2), 'Test Developer should have \'Test Case (2)\' as rank 2 but instead has ' + caseMap.get(2));
System.assertEquals('Test Case (3)', caseMap.get(3), 'Test Developer should have \'Test Case (3)\' as rank 3 but instead has ' + caseMap.get(3));
System.assertEquals('Test Case (NewNotNull)', caseMap.get(4), 'Test Developer should have \'Test Case (NewNotNull)\' as rank 4 but instead has ' + caseMap.get(4));
System.assertEquals('Test Case (4)', caseMap.get(5), 'Test Developer should have \'Test Case (4)\' as rank 5 but instead has ' + caseMap.get(5));
System.assertEquals('Test Case (5)', caseMap.get(6), 'Test Developer should have \'Test Case (5)\' as rank 6 but instead has ' + caseMap.get(6));
System.assertEquals('Test Case (6)', caseMap.get(7), 'Test Developer should have \'Test Case (6)\' as rank 7 but instead has ' + caseMap.get(7));
System.assertEquals('Test Case (7)', caseMap.get(8), 'Test Developer should have \'Test Case (7)\' as rank 8 but instead has ' + caseMap.get(8));
System.assertEquals('Test Case (8)', caseMap.get(9), 'Test Developer should have \'Test Case (8)\' as rank 9 but instead has ' + caseMap.get(9));
System.assertEquals('Test Case (9)', caseMap.get(10), 'Test Developer should have \'Test Case (9)\' as rank 10 but instead has ' + caseMap.get(10));
System.assertEquals('Test Case (10)', caseMap.get(null), 'Test Developer should have \'Test Case (10)\' as rank null but instead has ' + caseMap.get(null));
delete [SELECT Id FROM Case WHERE Resource_Assigned__c = 'Test Developer'];
}
/***************************************************************************************
* @author: Michael *REDACTED*
* @email: michael.*REDACTED*@*REDACTED*.com
* @date: 11/24/15
* @brief: The createCaseMap method queries all the developer's Cases, then creates a map
* keyed by Rank with the Subject as the value. This map will be used to ensure that
* the Cases are in the proper order after any DML has been performed on a Case.
* @return: Map<Decimal, String>
***************************************************************************************/
static Map<Decimal, String> createCaseMap() {
List<Case> caseList = [SELECT Case_Rank__c, Subject FROM Case WHERE Resource_Assigned__c = 'Test Developer' ORDER BY Case_Rank__c];
Map<Decimal, String> caseMap = new Map<Decimal, String>();
for (Case c : caseList) {
caseMap.put(c.Case_Rank__c, c.Subject);
}
return caseMap;
}
/***************************************************************************************
* TEST DATA SECTION - Refactor out of test class after creating Test Data Factory
***************************************************************************************/
static User createTestRequestor() {
Profile testProfile = [SELECT Id from Profile where Name = 'Standard User'];
User requestor = new User(FirstName = 'Test', LastName = 'Requestor', Alias = 'Test.Req', Email = 'newtestrequestor@null.com', UserName = 'newtestrequestor@null.com', ProfileId = testProfile.Id,
TimeZoneSidKey = 'America/Los_Angeles', LocaleSidKey = 'en_US', EmailEncodingKey = 'UTF-8', LanguageLocaleKey = 'en_US');
insert requestor;
return requestor;
}
static List<Case> createDeveloperCase_Multiple(List<Integer> ranks) {
List<Case> developerCaseLoad = new List<Case>();
Case developerCase;
Integer count = 0;
for (Integer rank : ranks) {
count++;
developerCase = new Case(Subject = 'Test Case (' + count + ')', Service_Request_Type__c = 'Development', Requestor__c = testRequestor.Id, Description = 'Foo', Business_Value_of_Change__c = 'Bar',
Business_Area__c = 'Warranty', Requested_Implementation_Date__c = Date.today(), Resource_Assigned__c = 'Test Developer', Resource_Assigned_Email__c = 'newtestdeveloper@null.com', Case_Rank__c = rank);
developerCaseLoad.add(developerCase);
}
for (Case c : developerCaseLoad) {
}
upsert developerCaseLoad;
return developerCaseLoad;
}
static Case createDeveloperCase_Single(String name, Integer rank) {
Case developerCase = new Case(Subject = 'Test Case (' + name + ')', Service_Request_Type__c = 'Development', Requestor__c = testRequestor.Id, Description = 'Foo', Business_Value_of_Change__c = 'Bar',
Business_Area__c = 'Warranty', Requested_Implementation_Date__c = Date.today(), Resource_Assigned__c = 'Test Developer', Case_Rank__c = rank);
upsert developerCase;
return developerCase;
}
}
Workflow Code - I didn't write this one:
CASE( Resource_Assigned__c ,
"Kimberly REDACTED","foo#bar.com",
"Josh REDACTED","foo#bar.com",
"Robert REDACTED","foo#bar.com",
"Jose REDACTED","foo#bar.com",
"Ryan REDACTED","foo#bar.com",
"Lloyd REDACTED","foo#bar.com",
"Nathan REDACTED","foo#bar.com",
"Amber REDACTED","foo#bar.com",
"Ora REDACTED","foo#bar.com",
"Jason REDACTED","foo#bar.com",
"Shalini REDACTED","foo#bar.com",
"Siva REDACTED","foo#bar.com",
"Quinn REDACTED","foo#bar.com",
"Adrienne REDACTED","foo#bar.com",
"Vasantha REDACTED","foo#bar.com",
"Michael REDACTED","foo#bar.com",
"Sudheera REDACTED","foo#bar.com",
"Test Developer","newtestdeveloper#null.com",
"false")
I really appreciate any help you all can give me on this one!
Regards,
Michael
Here is what I did to fix my issue.
I refactored the test and removed the creation of the two test developers. Instead I grabbed two random developers that are contained in the dropdown list we use and then I used those developers in the test.
Due to the way everything is set up, I did not need to use production data (SeeAllData=true) to make this fix work and after making the code change I never had another issue with testing.

pig puzzle: re-writing an involved reducer as a simple pig script?

There are account ids, each with a timestamp, grouped by username. For each of these username groups I want all pairs of (oldest account, other account).
I have a Java reducer that does this; can I rewrite it as a simple Pig script?
Schema:
{group: (username), A: {(id, create_dt)}}
Input:
(batman,{(id1,100), (id2,200), (id3,50)})
(lulu ,{(id7,100), (id9,50)})
Desired output:
(batman,{(id3,id1), (id3,id2)})
(lulu ,{(id9,id7)})
Not that anyone seems to care, but here goes. You have to create a UDF:
desired = foreach my_input generate group as n, FIND_PAIRS(A) as pairs_bag;
And the UDF:
public class FindPairs extends EvalFunc<DataBag> {
@Override
public DataBag exec(Tuple input) throws IOException {
Long pivotCreatedDate = Long.MAX_VALUE;
Long pivot = null;
DataBag accountsBag = (DataBag) input.get(0);
for (Tuple account : accountsBag){
Long accountId = Long.parseLong(account.get(0).toString());
Long creationDate = Long.parseLong(account.get(1).toString());
if (creationDate < pivotCreatedDate ) {
// pivot is the one with the minimal creation_dt
pivot = accountId;
pivotCreatedDate = creationDate;
}
}
DataBag allPairs = BagFactory.getInstance().newDefaultBag();
if (pivot != null){
for (Tuple account : accountsBag){
Long accountId = Long.parseLong(account.get(0).toString());
Long creationDate = Long.parseLong(account.get(1).toString());
if (!accountId.equals(pivot)) {
// we don't want any self-pairs
Tuple output = TupleFactory.getInstance().newTuple(2);
if (pivot < accountId){
output.set(0, pivot.toString());
output.set(1, accountId.toString());
}
else {
output.set(0, accountId.toString());
output.set(1, pivot.toString());
}
allPairs.add(output);
}
}
}
return allPairs;
}
and if you wanna play real nicely, add this:
/**
* Letting pig know that we emit a bag with tuples, each representing a pair of accounts
*/
@Override
public Schema outputSchema(Schema input) {
try{
Schema pairSchema = new Schema();
pairSchema.add(new FieldSchema(null, DataType.BYTEARRAY));
pairSchema.add(new FieldSchema(null, DataType.BYTEARRAY));
return new Schema(
new FieldSchema(null,
new Schema(pairSchema), DataType.BAG));
}catch (Exception e){
return null;
}
}
}

mysql returns null when asking for difference between some value and null

In my table I have 3 columns:
id, somevalue (float), current timestamp
The code below takes the latest value for today's date and subtracts the value from Monday of the same week. But I don't have any value stored for Monday this week, so it is NULL at the moment. Shouldn't the result of the code below be some value, not NULL? I don't understand how this is possible. Please explain.
select(SELECT power FROM newdb.newmeter
where date(dt)=curdate() order by dt desc limit 1)-
(select Power from newdb.newmeter
where date(dt)=(select date(subdate(now(), interval weekday(now()) day))));
As I was reading similar questions and answers, it looks like anything you do with NULL in MySQL is NULL. Is that true?
If yes, how do I resolve this?
Update:
I tried this but it didn't work:
select sum(amount) - coalesce(sum(due),0)
I just wanted to add something more to this.
I'm calling Querydb as follows for MySQL in C++:
bool Querydb(char *query, double Myarray[1024])
{
//snip//
if (mysql_query(conn, query)) {
fprintf(stderr, "%s\n", mysql_error(conn));
return 0;
}
else {
res = mysql_use_result(conn);
//output table name
//printf("MySQL Tables in mysql database:\n");
//checking for null value in database
while((row = mysql_fetch_row(res))==NULL){
printf("ERROR_____NULL VALUE IN DATABASE ");
return 0;
}
//if not null then ...
while ((row = mysql_fetch_row(res)) != NULL){
printf("rows fetched %s\n", row[0]);
sprintf(buffer,"%s",row[0]);
value1 = atof(buffer);
Myarray[i]=value1;
//printf("Myarray in sql for daybutton = %f\n",Myarray[i]);
i++;
}
i=0;
//for(i=0;i<5;i++){
// printf("mya arr in sqlfunction = %f\n",Myarray[i]);}
return 1;
}
printf("if here then....where??\n");
//close connection
// mysql_free_result(res);
//return 0;
}
The above function works OK with a different query when the database has NULL, but it doesn't work with this query:
select(SELECT power FROM newdb.newmeter
where date(dt)=curdate() order by dt desc limit 1)-
(select Power from newdb.newmeter
where date(dt)=(select date(subdate(now(), interval weekday(now()) day))));
It returns 1 even though the answer is NULL...
Consult the manual on working with NULL values; they are treated specially: http://dev.mysql.com/doc/refman/5.0/en/working-with-null.html
What value would you want it to be? This isn't particularly specific to MySQL - operations involving null operands (including comparisons) have null results in most SQL dialects.
You may want to use COALESCE() to provide a "default" value which is used when your real target value is null.
A workaround would be to wrap the whole Monday subquery in COALESCE(), so it also covers the case where no row exists for that day (a scalar subquery that returns no rows yields NULL, so a COALESCE inside it would not help):
select(SELECT power FROM newdb.newmeter
where date(dt)=curdate() order by dt desc limit 1)-
COALESCE((select Power from newdb.newmeter
where date(dt)=(select date(subdate(now(), interval weekday(now()) day)))), 0);