SolrJ - NPE when accessing SolrCloud

I'm running the following test code against SolrCloud using the SolrJ library:
public static void main(String[] args) {
    String zkHostString = "192.168.56.99:2181";
    SolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
    List<MyBean> beans = new ArrayList<>();
    for (int i = 0; i < 10000; i++) {
        // creating a bunch of MyBean to be indexed
        // and temporarily storing them in a List
        // no Solr operations performed here
    }
    System.out.println("Adding...");
    try {
        solr.addBeans("myCollection", beans);
    } catch (IOException | SolrServerException e1) {
        // TODO Auto-generated catch block
        e1.printStackTrace();
    }
    System.out.println("Committing...");
    try {
        solr.commit("myCollection");
    } catch (SolrServerException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
This code fails with the following exception:
Exception in thread "main" java.lang.NullPointerException
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1175)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:357)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:312)
at com.togather.solr.testing.SolrIndexingTest.main(SolrIndexingTest.java:83)
This is the full stack trace of the exception. I just "upgraded" from a standalone Solr installation to SolrCloud (with a single external ZooKeeper instance, not the embedded one). With standalone Solr the same code (with some minor differences, like the host URL) used to work perfectly.
The NPE points somewhere inside the SolrJ library, which I'm not familiar with.
Can anyone help me understand where the problem originates and how I can overcome it? Given my inexperience and the brevity of the error message, I can't figure out where to start investigating.

Looking at your code, I would suggest specifying the default collection as the first thing:
CloudSolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
solr.setDefaultCollection("myCollection");
Regarding the NPE you're experiencing, it is very likely due to a communication error.
Inside CloudSolrClient.requestWithRetryOnStaleState, your exception is raised by the for loop (for (DocCollection ext : requestedCollections)) in the following fragment:
if (wasCommError) {
    // it was a communication error. it is likely that
    // the node to which the request to be sent is down . So , expire the state
    // so that the next attempt would fetch the fresh state
    // just re-read state for all of them, if it has not been retired
    // in retryExpiryTime time
    for (DocCollection ext : requestedCollections) {
        ExpiringCachedDocCollection cacheEntry = collectionStateCache.get(ext.getName());
        if (cacheEntry == null) continue;
        cacheEntry.maybeStale = true;
    }
    if (retryCount < MAX_STALE_RETRIES) {//if it is a communication error , we must try again
        //may be, we have a stale version of the collection state
        // and we could not get any information from the server
        //it is probably not worth trying again and again because
        // the state would not have been updated
        return requestWithRetryOnStaleState(request, retryCount + 1, collection);
    }
}

Related

Unit testing suspend coroutine

I'm a bit new to Kotlin and to testing it... I am trying to test a DAO object wrapper that uses a suspend method, which in turn uses awaitFirst() on a SQL return object. However, when I wrote the unit test for it, it just gets stuck in a loop, and I would think it is because the awaitFirst() is not in the same scope as the test.
Implementation:
suspend fun queryExecution(querySpec: DatabaseClient.GenericExecuteSpec): OrderDomain {
    var result: Map<String, Any>?
    try {
        result = querySpec.fetch().first().awaitFirst()
    } catch (e: Exception) {
        if (e is DataAccessResourceFailureException)
            throw CommunicationException(
                "Cannot connect to " + DatabaseConstants.DB_NAME +
                    DatabaseConstants.ORDERS_TABLE + " when executing querySelect",
                "querySelect",
                e
            )
        throw InternalException("Encountered R2dbcException when executing SQL querySelect", e)
    }
    if (result == null)
        throw ResourceNotFoundException("Resource not found in Aurora DB")
    try {
        return OrderDomain(result)
    } catch (e: Exception) {
        throw InternalException("Exception when parsing to OrderDomain entity", e)
    } finally {
        logger.info("querySelect;stage=end")
    }
}
Unit Test:
@Test
fun `get by orderid id, null`() = runBlocking {
    // Assign
    Mockito.`when`(fetchSpecMock.first()).thenReturn(monoMapMock)
    Mockito.`when`(monoMapMock.awaitFirst()).thenReturn(null)
    // Act & Assert
    val exception = assertThrows<ResourceNotFoundException> {
        auroraClientWrapper.queryExecution(
            databaseClient.sql("SELECT * FROM orderTable WHERE orderId=:1").bind("1", "123")
        )
    }
    assertEquals("Resource not found in Aurora DB", exception.message)
}
I noticed this issue on https://github.com/Kotlin/kotlinx.coroutines/issues/1204 but none of the workarounds has worked for me...
Using runBlocking within the unit test just causes my tests to never complete. Using runBlockingTest explicitly throws an error saying "Job never completed"... Does anyone have any idea? Any hack at this point?
I also broadly understand the point that you should not be using suspend together with a blocking call, because that kind of defeats the purpose of suspend: suspending releases the thread to continue later, whereas blocking forces the thread to wait for a result. But then how does this work?
private suspend fun queryExecution(querySpec: DatabaseClient.GenericExecuteSpec): Map<String, Any>? {
    var result: Map<String, Any>?
    try {
        result = withContext(Dispatchers.Default) {
            querySpec.fetch().first().block()
        }
        return result
    } catch (e: Exception) {
        throw e // catch block elided in the original snippet
    }
}
Does this mean withContext will utilize a new thread and reuse the old thread elsewhere? That wouldn't really optimize anything, since I will still have one thread being blocked regardless of spawning a new context, right?
Found the solution.
The monoMapMock is a mock value from Mockito. It seems the kotlinx coroutines test machinery can't intercept an async call that returns a mocked Mono. So I forced the method that I can mock to return a real Mono value instead of a mocked one. To do so, as suggested by Louis, I stopped mocking it and returned a real value:
@Test
fun `get by orderid id, null`() = runBlocking {
    // Assign
    Mockito.`when`(fetchSpecMock.first()).thenReturn(Mono.empty())
    Mockito.`when`(monoMapMock.awaitFirst()).thenReturn(null)
    // Act & Assert
    val exception = assertThrows<ResourceNotFoundException> {
        auroraClientWrapper.queryExecution(
            databaseClient.sql("SELECT * FROM orderTable WHERE orderId=:1").bind("1", "123")
        )
    }
    assertEquals("Resource not found in Aurora DB", exception.message)
}

Univocity - Issue in file header validation during multiple reads

I'm using Univocity-Parsers' bean iterator to read each line of a file and get the bean. I have observed weird behavior in the library when I attempt to read the same file multiple times.
Code when passing the File object to the CsvParser instance:
private static void testBeanIterator() throws Exception {
    try {
        File sampleFile = generateFile(0);
        /*
        System.out.println("Sample file content = " + FileUtils.readFileToString(sampleFile,
            Charset.defaultCharset()));
        */
        for (int i = 0; i < 1000; i++) {
            BufferedReader reader =
                new BufferedReader(new InputStreamReader(new FileInputStream(sampleFile),
                    StandardCharsets.UTF_8));
            AtomicInteger atomicInteger = new AtomicInteger();
            final BeanProcessor<CustomerSegmentMapping> rowProcessor =
                new BeanProcessor<CustomerSegmentMapping>(CustomerSegmentMapping.class) {
                    @Override
                    public void beanProcessed(@Nonnull final CustomerSegmentMapping customerSegmentMapping,
                                              @Nonnull final ParsingContext context) {
                        try {
                            System.out.println(OBJECT_MAPPER.writeValueAsString(customerSegmentMapping));
                            atomicInteger.getAndAdd(1);
                        } catch (Exception ex) {
                            throw new RuntimeException("error");
                        }
                    }
                };
            rowProcessor.setStrictHeaderValidationEnabled(true);
            final CsvParserSettings parserSettings = new CsvParserSettings();
            parserSettings.setRowProcessor(rowProcessor);
            parserSettings.setHeaderExtractionEnabled(true);
            final CsvParser parser = new CsvParser(parserSettings);
            //parser.parse(reader);
            parser.parse(sampleFile);
            System.out.println("Finished parser");
            if (atomicInteger.get() != 10) {
                throw new Exception("mismatch");
            }
            reader.close();
        }
    } catch (Exception ex) {
        throw new RuntimeException("exception = " + ex.getMessage(), ex);
    }
}
On executing the code, the following console output is produced:
{"customerId":"6bc12a7a-2c28-4aea-a7be-6be45e16ffb2","segmentId":"S1"}
{"customerId":"da736310-e508-47ff-92b8-59d490e37a72","segmentId":"S1"}
{"customerId":"9a5d4454-e6d4-49a5-bb04-8354154d0493","segmentId":"S1"}
{"customerId":"ec2ed5cc-cd18-443b-bd69-e56fc09ba0f5","segmentId":"S1"}
{"customerId":"94ea24b0-0c83-4039-a391-1d2439c88be8","segmentId":"S1"}
{"customerId":"2baef5f9-d8cd-451d-b579-a626cb58b284","segmentId":"S1"}
{"customerId":"022a184b-1b06-49aa-b1c4-b94a6f343b04","segmentId":"S1"}
{"customerId":"bcb3984c-0495-4da8-b146-9af3983cc158","segmentId":"S1"}
{"customerId":"feef62de-1aaf-43d4-a83b-afe053db97cf","segmentId":"S1"}
{"customerId":"5825c924-55d5-4fd6-8468-ca36d47a7cae","segmentId":"S1"}
Finished parser
{"customerId":"6bc12a7a-2c28-4aea-a7be-6be45e16ffb2","segmentId":"S1"}
{"customerId":"da736310-e508-47ff-92b8-59d490e37a72","segmentId":"S1"}
{"customerId":"9a5d4454-e6d4-49a5-bb04-8354154d0493","segmentId":"S1"}
{"customerId":"ec2ed5cc-cd18-443b-bd69-e56fc09ba0f5","segmentId":"S1"}
{"customerId":"94ea24b0-0c83-4039-a391-1d2439c88be8","segmentId":"S1"}
{"customerId":"2baef5f9-d8cd-451d-b579-a626cb58b284","segmentId":"S1"}
{"customerId":"022a184b-1b06-49aa-b1c4-b94a6f343b04","segmentId":"S1"}
{"customerId":"bcb3984c-0495-4da8-b146-9af3983cc158","segmentId":"S1"}
{"customerId":"feef62de-1aaf-43d4-a83b-afe053db97cf","segmentId":"S1"}
{"customerId":"5825c924-55d5-4fd6-8468-ca36d47a7cae","segmentId":"S1"}
Finished parser
{"customerId":"6bc12a7a-2c28-4aea-a7be-6be45e16ffb2","segmentId":"S1"}
{"customerId":"da736310-e508-47ff-92b8-59d490e37a72","segmentId":"S1"}
{"customerId":"9a5d4454-e6d4-49a5-bb04-8354154d0493","segmentId":"S1"}
{"customerId":"ec2ed5cc-cd18-443b-bd69-e56fc09ba0f5","segmentId":"S1"}
{"customerId":"94ea24b0-0c83-4039-a391-1d2439c88be8","segmentId":"S1"}
{"customerId":"2baef5f9-d8cd-451d-b579-a626cb58b284","segmentId":"S1"}
{"customerId":"022a184b-1b06-49aa-b1c4-b94a6f343b04","segmentId":"S1"}
{"customerId":"bcb3984c-0495-4da8-b146-9af3983cc158","segmentId":"S1"}
{"customerId":"feef62de-1aaf-43d4-a83b-afe053db97cf","segmentId":"S1"}
{"customerId":"5825c924-55d5-4fd6-8468-ca36d47a7cae","segmentId":"S1"}
Finished parser
Exception in thread "main" java.lang.RuntimeException: exception = Could not find fields [CustomerId]' in input. Names found: [ustomerId, SegmentId]
Internal state when error was thrown: line=2, column=0, record=1, charIndex=60, headers=[ustomerId, SegmentId]
at com.poppins.cube.common.UnivocityNahiHatanaHai.testBeanIterator(UnivocityNahiHatanaHai.java:95)
at com.poppins.cube.common.UnivocityNahiHatanaHai.main(UnivocityNahiHatanaHai.java:37)
Caused by: com.univocity.parsers.common.DataProcessingException: Could not find fields [CustomerId]' in input. Names found: [ustomerId, SegmentId]
Internal state when error was thrown: line=2, column=0, record=1, charIndex=60, headers=[ustomerId, SegmentId]
at com.univocity.parsers.common.processor.core.BeanConversionProcessor.mapFieldIndexes(BeanConversionProcessor.java:414)
at com.univocity.parsers.common.processor.core.BeanConversionProcessor.mapValuesToFields(BeanConversionProcessor.java:340)
at com.univocity.parsers.common.processor.core.BeanConversionProcessor.createBean(BeanConversionProcessor.java:508)
at com.univocity.parsers.common.processor.core.AbstractBeanProcessor.rowProcessed(AbstractBeanProcessor.java:54)
at com.univocity.parsers.common.Internal.process(Internal.java:21)
at com.univocity.parsers.common.AbstractParser.rowProcessed(AbstractParser.java:596)
at com.univocity.parsers.common.AbstractParser.parse(AbstractParser.java:133)
at com.univocity.parsers.common.AbstractParser.parse(AbstractParser.java:605)
at com.poppins.cube.common.UnivocityNahiHatanaHai.testBeanIterator(UnivocityNahiHatanaHai.java:83)
... 1 more
Process finished with exit code 1
Following is the content of the file:
CustomerId,SegmentId
6bc12a7a-2c28-4aea-a7be-6be45e16ffb2,S1
da736310-e508-47ff-92b8-59d490e37a72,S1
9a5d4454-e6d4-49a5-bb04-8354154d0493,S1
ec2ed5cc-cd18-443b-bd69-e56fc09ba0f5,S1
94ea24b0-0c83-4039-a391-1d2439c88be8,S1
2baef5f9-d8cd-451d-b579-a626cb58b284,S1
022a184b-1b06-49aa-b1c4-b94a6f343b04,S1
bcb3984c-0495-4da8-b146-9af3983cc158,S1
feef62de-1aaf-43d4-a83b-afe053db97cf,S1
5825c924-55d5-4fd6-8468-ca36d47a7cae,S1
From what I can understand, the issue arises because I'm passing a File object to CsvParser. CsvParser internally creates an InputStream which is not closed.
If I pass a BufferedReader instead of the File object, the issue does not arise.
I'm not able to work out whether this is a known issue with Univocity-Parsers or whether there is something I'm missing.
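For reference, this is the Reader-based variant of the loop body above that avoids the issue in my runs (same rowProcessor and settings as in the code above; since the reader is constructed with an explicit charset, the parser never has to guess the encoding):

BufferedReader reader = new BufferedReader(new InputStreamReader(
        new FileInputStream(sampleFile), StandardCharsets.UTF_8));
CsvParserSettings parserSettings = new CsvParserSettings();
parserSettings.setRowProcessor(rowProcessor);
parserSettings.setHeaderExtractionEnabled(true);
new CsvParser(parserSettings).parse(reader); // parse(Reader) succeeds on every iteration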
Author of the library here. I can see from your exception that it got the header ustomerId instead of CustomerId.
This looks like a bug introduced in version 2.5.0 that was fixed in version 2.5.6, if I'm not mistaken. It plagued me for a while, as it was an internal concurrency issue that was hard to track down. Basically, when you pass a File without an explicit encoding, the parser tries to find a UTF BOM marker in the input (effectively consuming the first character) to determine the encoding automatically. This happened only for InputStreams and Files.
Anyway, this has been fixed, so simply updating to the latest version should get rid of the problem for you (please let me know if you are not using version 2.5.something).
If you want to remain on the version you currently have, the error will go away if you call
parser.parse(sampleFile, Charset.defaultCharset());
This will prevent the parser from trying to discover whether there's a BOM marker in your file, therefore avoiding that pesky bug.
Hope this helps

EMF model transaction

In our project we use an implementation of HL7 documents from openehealth. This implementation uses EMF as the underlying model and delegates all calls to EMF. We need to handle a large volume of documents, and our flows involve concurrent processing of documents (read, validate, query). In a concurrent environment the EMF layer crashes with UnsupportedOperationException. The openehealth site says to synchronize processing in the client API, but this would decrease our system's performance and we want to avoid it. I tried the EMF Transaction API's TransactionalEditingDomain, which claims to support read-only model transactions, but without success. My test looks something like this:
ExecutorService executorService = Executors.newFixedThreadPool(4);
final List<ClinicalDocument> documents = new ArrayList<ClinicalDocument>();
for (int i = 0; i < 100; i++) {
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            try {
                int randomNum = 1 + (int) (Math.random() * 6);
                ClinicalDocument cda = readCda();
                processIntensiveWork(cda);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}
private void processIntensiveWork(final ClinicalDocument document) {
    for (final Method method : document.getClass().getMethods())
        if (method.getName().startsWith("get")) {
            try {
                domain.runExclusive(new RunnableWithResult.Impl() {
                    @Override
                    public void run() {
                        try {
                            method.invoke(document);
                            System.out.println("Invoked method: " + method.getName());
                            setResult(null);
                        } catch (UnsupportedOperationException e) {
                            e.printStackTrace();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
}
For this test case we frequently caught java.lang.UnsupportedOperationException.
I should mention that for some test cases I also caught the following error from the EMF Transaction API: java.lang.IllegalArgumentException: Can only deactivate the active transaction.
Any suggestions are kindly appreciated. Feel free to ask for any other information that might help you in resolving the problem.

MongoDB C++ driver handling replica set connection failures

So the MongoDB C++ driver documentation says:
On a failover situation, expect at least one operation to return an
error (throw an exception) before the failover is complete. Operations
are not retried
Kind of annoying, but that leaves it up to me to handle a failed operation. Ideally I would just like the application to sleep for a few seconds (the app is single threaded) and retry, in the hope that a new primary mongod has been established. In the case of a second failure, I take it the connection is truly messed up and I just want to throw an exception.
Within my MongoManager class this means all operations have this kind of double try/catch block. I was wondering if there is a more elegant solution?
Example method:
template <typename T>
std::string
MongoManager::insert(std::string ns, T object)
{
    mongo::BSONObj oo = convertToBson(object);
    std::string result;
    try {
        connection_->insert(ns, oo); // connection_ = shared_ptr<DBClientReplicaSet>
        result = connection_->getLastError();
        lastOpSucceeded_ = true;
    }
    catch (mongo::SocketException& ex)
    {
        lastOpSucceeded_ = false;
        boost::this_thread::sleep(boost::posix_time::seconds(5));
    }
    // try again?
    if (!lastOpSucceeded_) {
        try {
            connection_->insert(ns, oo);
            result = connection_->getLastError();
            lastOpSucceeded_ = true;
        }
        catch (mongo::SocketException& ex)
        {
            // do some clean up, throw exception
        }
    }
    return result;
}
That's indeed more or less how you need to handle it. Perhaps instead of having two try/catch blocks I would use the following strategy (see the sketch below):
keep a count of how many times you have tried
create a while loop with the terminating condition (count < 5 && !lastOpSucceeded)
sleep pow(2, count) seconds, so you back off longer on every iteration
And then, when all else fails, bail out.
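A minimal sketch of that strategy, written in Java here rather than C++ for brevity; operation.call() stands in for the driver calls from the question (connection_->insert plus getLastError), and the 5-try limit and exponential sleep are the assumptions named above.

import java.util.concurrent.Callable;

public final class RetrySketch {
    // Runs the operation, retrying with exponential backoff on failure.
    static <T> T withRetry(Callable<T> operation) throws Exception {
        int count = 0;
        boolean lastOpSucceeded = false;
        T result = null;
        Exception lastError = null;
        while (count < 5 && !lastOpSucceeded) {
            try {
                result = operation.call();   // e.g. insert + getLastError
                lastOpSucceeded = true;
            } catch (Exception ex) {          // SocketException in the C++ code
                lastError = ex;
                Thread.sleep((long) Math.pow(2, count) * 1000L); // 1s, 2s, 4s, ...
                count++;
            }
        }
        if (!lastOpSucceeded) {
            throw lastError;                  // all retries failed: bail out
        }
        return result;
    }
}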

J2ME Stub generates an unknown exception with JPA Entity types

I created a web service stub using NetBeans 7.0, and when I try using it, it throws an unknown exception. I don't even know what part of my code to show here; all I know is that the marked line below generates an unknown exception:
public Businesses[] findBusiness(String query) throws java.rmi.RemoteException {
    Object inputObject[] = new Object[]{
        query
    };
    Operation op = Operation.newInstance(_qname_operation_findBusiness, _type_findBusiness, _type_findBusinessResponse);
    _prepOperation(op);
    op.setProperty(Operation.SOAPACTION_URI_PROPERTY, "");
    Object resultObj;
    try {
        resultObj = op.invoke(inputObject);
    } catch (JAXRPCException e) {
        Throwable cause = e.getLinkedCause();
        if (cause instanceof java.rmi.RemoteException) {
            throw (java.rmi.RemoteException) cause;
        }
        throw e;
    }
    return businesses_ArrayfromObject((Object[]) resultObj);
}

private static Businesses[] businesses_ArrayfromObject(Object obj[]) {
    if (obj == null) {
        return null;
    }
    Businesses result[] = new Businesses[obj.length];
    for (int i = 0; i < obj.length; i++) {
        result[i] = new Businesses();
        Object[] oo = (Object[]) obj[i];
        result[i].setAddress((String) oo[0]); // exception here
        result[i].setEmail((String) oo[1]);
        result[i].setId((Integer) oo[2]);
        result[i].setName((String) oo[3]);
        result[i].setPhoneno((String) oo[4]);
        result[i].setProducts((String) oo[5]);
    }
    return result;
}
I tried to consume the same web service from a web application and it works quite well. I don't have a single clue as to what is causing this exception. Any comment would be appreciated.
Update
I tried other services that return a String data type and they work fine. So I suspect J2ME has issues with JPA entity types.
So my question is: how do I return the data properly so that the J2ME client can read it?