So the MongoDB C++ driver documentation says:
On a failover situation, expect at least one operation to return an
error (throw an exception) before the failover is complete. Operations
are not retried
Kind of annoying, but that leaves it up to me to handle a failed operation. Ideally I would just like the application to sleep for a few seconds (the app is single-threaded) and retry, in the hope that a new primary mongod has been established. In the case of a second failure, I take it the connection is truly messed up and I just want to throw an exception.
Within my MongodbManager class this means all operations have this kind of double try/catch block setup. I was wondering if there is a more elegant solution?
Example method:
template <typename T>
std::string
MongoManager::insert(std::string ns, T object)
{
    mongo::BSONObj oo = convertToBson(object);
    std::string result;
    try {
        connection_->insert(ns, oo); // connection_ = shared_ptr<DBClientReplicaSet>
        result = connection_->getLastError();
        lastOpSucceeded_ = true;
    }
    catch (mongo::SocketException& ex)
    {
        lastOpSucceeded_ = false;
        boost::this_thread::sleep( boost::posix_time::seconds(5) );
    }

    // try again?
    if (!lastOpSucceeded_) {
        try {
            connection_->insert(ns, oo);
            result = connection_->getLastError();
            lastOpSucceeded_ = true;
        }
        catch (mongo::SocketException& ex)
        {
            // do some clean up, throw exception
        }
    }
    return result;
}
That's indeed roughly how you need to handle it. Instead of having two try/catch blocks, I would use the following strategy:
keep a count of how many times you have tried
create a while loop with the condition (count < 5 && !lastOpSucceeded)
and then sleep for pow(2, count) seconds, so the delay grows with every iteration.
And then, when all else fails, bail out (see the sketch below).
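A minimal sketch of that loop, reusing the connection_, lastOpSucceeded_ and convertToBson members from the question (and assuming <cmath> and <stdexcept> are included; the final runtime_error is just a placeholder for whatever exception you prefer):

template <typename T>
std::string
MongoManager::insert(std::string ns, T object)
{
    mongo::BSONObj oo = convertToBson(object);
    std::string result;
    int count = 0;
    lastOpSucceeded_ = false;

    while (count < 5 && !lastOpSucceeded_) {
        try {
            connection_->insert(ns, oo);
            result = connection_->getLastError();
            lastOpSucceeded_ = true;
        }
        catch (mongo::SocketException& ex) {
            // back off pow(2, count) seconds: 1s, 2s, 4s, 8s, 16s
            boost::this_thread::sleep(
                boost::posix_time::seconds(static_cast<long>(std::pow(2.0, count))));
            ++count;
        }
    }

    if (!lastOpSucceeded_) {
        // do some clean-up here, then bail out
        throw std::runtime_error("insert failed after 5 attempts");
    }
    return result;
}

This keeps a single try/catch in one loop and makes the retry budget and backoff schedule explicit in one place.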
I'm a bit new to Kotlin and testing it... I am trying to test a DAO wrapper that uses a suspend method, which in turn uses awaitFirst() for an SQL return object. However, when I wrote the unit test for it, it just gets stuck in a loop, and I would think it is because the awaitFirst() is not in the same scope as the test.
Implementation:
suspend fun queryExecution(querySpec: DatabaseClient.GenericExecuteSpec): OrderDomain {
    var result: Map<String, Any>?
    try {
        result = querySpec.fetch().first().awaitFirst()
    } catch (e: Exception) {
        if (e is DataAccessResourceFailureException)
            throw CommunicationException(
                "Cannot connect to " + DatabaseConstants.DB_NAME +
                    DatabaseConstants.ORDERS_TABLE + " when executing querySelect",
                "querySelect",
                e
            )
        throw InternalException("Encountered R2dbcException when executing SQL querySelect", e)
    }
    if (result == null)
        throw ResourceNotFoundException("Resource not found in Aurora DB")
    try {
        return OrderDomain(result)
    } catch (e: Exception) {
        throw InternalException("Exception when parsing to OrderDomain entity", e)
    } finally {
        logger.info("querySelect;stage=end")
    }
}
Unit Test:
@Test
fun `get by orderid id, null`() = runBlocking {
    // Assign
    Mockito.`when`(fetchSpecMock.first()).thenReturn(monoMapMock)
    Mockito.`when`(monoMapMock.awaitFirst()).thenReturn(null)
    // Act & Assert
    val exception = assertThrows<ResourceNotFoundException> {
        auroraClientWrapper.queryExecution(
            databaseClient.sql("SELECT * FROM orderTable WHERE orderId=:1").bind("1", "123")
        )
    }
    assertEquals("Resource not found in Aurora DB", exception.message)
}
I noticed this issue at https://github.com/Kotlin/kotlinx.coroutines/issues/1204 but none of the workarounds have worked for me...
Using runBlocking within the unit test just causes my tests to never complete. Using runBlockingTest explicitly throws an error saying "Job never completed"... Does anyone have any idea? Any hack at this point?
Also, I roughly understand the point that you should not combine suspend with a blocking call, because that kind of defeats the purpose of suspend: suspending releases the thread to continue later, whereas blocking forces the thread to wait for a result... But then how does this work?
private suspend fun queryExecution(querySpec: DatabaseClient.GenericExecuteSpec): Map {
    var result: Map<String, Any>?
    try {
        result = withContext(Dispatchers.Default) {
            querySpec.fetch().first().block()
        }
        return result
    }
Does this mean withContext will utilize a new thread and re-use the old thread elsewhere? Doesn't that fail to really optimize anything, since I will still have one thread that is blocked regardless of spawning a new context?
Found the solution.
The monoMapMock is a mocked value from Mockito. It seems the kotlinx coroutines test machinery can't intercept an async call that returns a mocked Mono. So I forced the method that I can mock to return a real Mono value instead of a mocked Mono. To do so, as suggested by Louis, I stopped mocking it and returned a real value:
@Test
fun `get by orderid id, null`() = runBlocking {
    // Assign
    Mockito.`when`(fetchSpecMock.first()).thenReturn(Mono.empty())
    Mockito.`when`(monoMapMock.awaitFirst()).thenReturn(null)
    // Act & Assert
    val exception = assertThrows<ResourceNotFoundException> {
        auroraClientWrapper.queryExecution(
            databaseClient.sql("SELECT * FROM orderTable WHERE orderId=:1").bind("1", "123")
        )
    }
    assertEquals("Resource not found in Aurora DB", exception.message)
}
I found some interesting write-operation code while looking at SpannerIO, and I want to understand the reasons behind it.
In write (WriteToSpannerFn) with the REPORT_FAILURES failure mode, it seems to try writing failed mutations twice.
I think it's done so each mutation's exception can be logged individually. Is that a correct assumption, and is there any workaround?
Below, I removed some lines for simplicity.
public void processElement(ProcessContext c) {
  Iterable<MutationGroup> mutations = c.element();
  boolean tryIndividual = false;
  try {
    Iterable<Mutation> batch = Iterables.concat(mutations);
    spannerAccessor.getDatabaseClient().writeAtLeastOnce(batch);
  } catch (SpannerException e) {
    if (failureMode == FailureMode.REPORT_FAILURES) {
      tryIndividual = true;
    } else {
      ...
    }
  }
  if (tryIndividual) {
    for (MutationGroup mg : mutations) {
      try {
        spannerAccessor.getDatabaseClient().writeAtLeastOnce(mg);
      } catch (SpannerException e) {
        LOG.warn("Failed to submit the mutation group", e);
        c.output(failedTag, mg);
      }
    }
  }
}
So rather than write each Mutation individually to the database, the SpannerIO.write() connector tries to write a batch of Mutations in a single transaction for efficiency.
If just one of these Mutations in the batch fails, then the whole transaction fails, so in REPORT_FAILURES mode, the mutations are re-tried individually to find which Mutation(s) are the problematic ones...
I'm running the following test code on SolrCloud using Solrj library:
public static void main(String[] args) {
    String zkHostString = "192.168.56.99:2181";
    SolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();

    List<MyBean> beans = new ArrayList<>();
    for (int i = 0; i < 10000; i++) {
        // creating a bunch of MyBean to be indexed
        // and temporarily storing them in a List
        // no Solr operations performed here
    }

    System.out.println("Adding...");
    try {
        solr.addBeans("myCollection", beans);
    } catch (IOException | SolrServerException e1) {
        // TODO Auto-generated catch block
        e1.printStackTrace();
    }

    System.out.println("Committing...");
    try {
        solr.commit("myCollection");
    } catch (SolrServerException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
This code fails due to the following exception
Exception in thread "main" java.lang.NullPointerException
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1175)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:357)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:312)
at com.togather.solr.testing.SolrIndexingTest.main(SolrIndexingTest.java:83)
This is the full stack trace of the exception. I just "upgraded" from a standalone Solr installation to SolrCloud (with a single external ZooKeeper instance, not the embedded one). With standalone Solr the same code (with just some minor differences, like the host URL) used to work perfectly.
The NPE points somewhere inside the SolrJ library, which I don't know.
Can anyone help me understand where the problem originates and how I can overcome it? Due to my inexperience and the brevity of the error message, I can't figure out where to start investigating.
Looking at your code, I would suggest specifying the default collection as the first thing:
CloudSolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
solr.setDefaultCollection("myCollection");
Regarding the NPE you're experiencing, it is very likely due to a network error.
In the lines below, your exception is raised by the for loop: for (DocCollection ext : requestedCollections)
if (wasCommError) {
    // it was a communication error. it is likely that
    // the node to which the request to be sent is down . So , expire the state
    // so that the next attempt would fetch the fresh state
    // just re-read state for all of them, if it has not been retired
    // in retryExpiryTime time
    for (DocCollection ext : requestedCollections) {
        ExpiringCachedDocCollection cacheEntry = collectionStateCache.get(ext.getName());
        if (cacheEntry == null) continue;
        cacheEntry.maybeStale = true;
    }
    if (retryCount < MAX_STALE_RETRIES) { //if it is a communication error , we must try again
        //may be, we have a stale version of the collection state
        // and we could not get any information from the server
        //it is probably not worth trying again and again because
        // the state would not have been updated
        return requestWithRetryOnStaleState(request, retryCount + 1, collection);
    }
}
I'm basically facing a blocking problem.
I have my server built on C++ Boost.ASIO, using 8 threads since the server has 8 logical cores.
My problem is that a thread may block for 0.2~1.5 seconds on a MySQL query, and I honestly don't know how to get around that, since the MySQL C++ Connector does not support asynchronous queries and I don't know how to design the server "correctly" to use multiple threads for the queries.
This is where I'm asking for opinions on what to do in this case.
Create 100 threads for async SQL queries?
Could I have an opinion from experts about this?
Okay, the proper solution to this would be to extend Asio and write a mysql_service implementation to integrate this. I was almost going to find out how this is done right away, but I wanted to get started using an "emulation".
The idea is to have
your business processes using an io_service (as you are already doing)
a database "facade" interface that dispatches async queries into a different queue (io_service) and posts the completion handler back onto the business_process io_service
A subtle tweak is needed here: you need to keep the io_service on the business-process side from shutting down as soon as its job queue is empty, since it might still be awaiting a response from the database layer.
So, modeling this into a quick demo:
namespace database
{
    // data types
    struct sql_statement { std::string dml; };
    struct sql_response { std::string echo_dml; }; // TODO cover response codes, resultset data etc.
I hope you will forgive my gross simplifications :/
    struct service
    {
        service(unsigned max_concurrent_requests = 10)
            : work(io_service::work(service_)),
              latency(mt19937(), uniform_int<int>(200, 1500)) // random 0.2 ~ 1.5s
        {
            for (unsigned i = 0; i < max_concurrent_requests; ++i)
                svc_threads.create_thread(boost::bind(&io_service::run, &service_));
        }

        friend struct connection;
      private:
        void async_query(io_service& external, sql_statement query, boost::function<void(sql_response response)> completion_handler)
        {
            service_.post(bind(&service::do_async_query, this, ref(external), std::move(query), completion_handler));
        }

        void do_async_query(io_service& external, sql_statement q, boost::function<void(sql_response response)> completion_handler)
        {
            this_thread::sleep_for(chrono::milliseconds(latency())); // simulate the latency of a db-roundtrip
            external.post(bind(completion_handler, sql_response { q.dml }));
        }

        io_service service_;
        thread_group svc_threads; // note the order of declaration
        optional<io_service::work> work;

        // for random delay
        random::variate_generator<mt19937, uniform_int<int> > latency;
    };
The service is what coordinates a maximum number of concurrent requests (on the "database io_service" side) and ping/pongs the completion back onto another io_service (the async_query/do_async_query combo). This stub implementation emulates latencies of 0.2~1.5s in the obvious way :)
Now comes the client "facade"
    struct connection
    {
        connection(int connection_id, io_service& external, service& svc)
            : connection_id(connection_id),
              external_(external),
              db_service_(svc)
        { }

        void async_query(sql_statement query, boost::function<void(sql_response response)> completion_handler)
        {
            db_service_.async_query(external_, std::move(query), completion_handler);
        }

      private:
        int connection_id;
        io_service& external_;
        service& db_service_;
    };
connection is really only a convenience so we don't have to explicitly deal with various queues on the calling site.
Now, let's implement a demo business process in good old Asio style:
namespace domain
{
    struct business_process : id_generator
    {
        business_process(io_service& app_service, database::service& db_service_)
            : id(generate_id()), phase(0),
              in_progress(io_service::work(app_service)),
              db(id, app_service, db_service_)
        {
            app_service.post([=] { start_select(); });
        }

      private:
        int id, phase;
        optional<io_service::work> in_progress;
        database::connection db;

        void start_select() {
            db.async_query({ "select * from tasks where completed = false" }, [=] (database::sql_response r) { handle_db_response(r); });
        }

        void handle_db_response(database::sql_response r) {
            if (phase++ < 4)
            {
                if ((id + phase) % 3 == 0) // vary the behaviour slightly
                {
                    db.async_query({ "insert into tasks (text, completed) values ('hello', false)" }, [=] (database::sql_response r) { handle_db_response(r); });
                } else
                {
                    db.async_query({ "update * tasks set text = 'update' where id = 123" }, [=] (database::sql_response r) { handle_db_response(r); });
                }
            } else
            {
                in_progress.reset();
                lock_guard<mutex> lk(console_mx);
                std::cout << "business_process " << id << " has completed its work\n";
            }
        }
    };
}
This business process starts by posting itself on the app service. It then does a number of db queries in succession, and eventually exits (by doing in_progress.reset() the app service is made aware of this).
A demonstration main, starting 10 business processes on a single thread:
int main()
{
    io_service app;
    database::service db;
    ptr_vector<domain::business_process> bps;

    for (int i = 0; i < 10; ++i)
    {
        bps.push_back(new domain::business_process(app, db));
    }

    app.run();
}
In my sample, business_processes don't do any CPU-intensive work, so there's no use in scheduling them across CPUs, but if you wanted to, you could easily achieve this by replacing the app.run() line with:
thread_group g;
for (unsigned i = 0; i < thread::hardware_concurrency(); ++i)
    g.create_thread(boost::bind(&io_service::run, &app));
g.join_all();
See the demo running Live On Coliru
I'm not a MySQL guru, but the following is generic multithreading advice.
Having NumberOfThreads == NumberOfCores is appropriate when none of the threads ever block and you are just splitting the load over all CPUs.
A common pattern is to have multiple threads per CPU, so one is executing while another is waiting on something.
In your case, I'd be inclined to set NumberOfThreads = n * NumberOfCores, where 'n' is read from a config file, a registry entry or some other user-settable value. You can test the system with different values of 'n' to find the optimum. I'd suggest somewhere around 3 for a first guess.
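As a rough illustration only (the environment variable, the default of n = 3 and the commented-out query call are my assumptions, not part of the advice above), the worker pool could be sized like this on top of Boost.Asio:

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <cstdlib>

int main()
{
    boost::asio::io_service io;
    boost::asio::io_service::work keep_alive(io); // keep run() from returning while idle

    // read 'n' (threads per core) from a user-settable value; an environment
    // variable stands in here for a config file or registry entry
    const char* env = std::getenv("WORKERS_PER_CORE");
    unsigned n = env ? static_cast<unsigned>(std::atoi(env)) : 3; // ~3 as a first guess
    unsigned cores = boost::thread::hardware_concurrency();
    if (cores == 0) cores = 1;

    boost::thread_group pool;
    for (unsigned i = 0; i < n * cores; ++i)
        pool.create_thread([&io] { io.run(); });

    // post blocking MySQL work onto the pool, e.g.:
    // io.post([] { run_blocking_query(); }); // run_blocking_query() is hypothetical
    // while one thread waits on the database, the others keep serving requests

    pool.join_all(); // blocks for the life of the server; a real server would
                     // destroy keep_alive (or call io.stop()) on shutdown
}

Whether 3 is actually right depends on how much of each request's time is spent blocked inside MySQL, which is exactly what the 'n' knob lets you tune.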
While stress-testing a prototype of our brand-new primary system, I ran into a concurrency issue with AppFabric Cache. When concurrently calling many DataCache.Get() and Put() with the same cacheKey, where I attempt to store a relatively large object, I receive "ErrorCode:SubStatus:There is a temporary failure. Please retry later." It is reproducible with the following code:
var dcfc = new DataCacheFactoryConfiguration
{
    Servers = new[] { new DataCacheServerEndpoint("localhost", 22233) },
    SecurityProperties = new DataCacheSecurity(DataCacheSecurityMode.None, DataCacheProtectionLevel.None),
};

var dcf = new DataCacheFactory(dcfc);
var dc = dcf.GetDefaultCache();

const string key = "a";
var value = new int[256 * 1024]; // 1MB

for (int i = 0; i < 300; i++)
{
    var putT = new Thread(() => dc.Put(key, value));
    putT.Start();
    var getT = new Thread(() => dc.Get(key));
    getT.Start();
}
When Get() is called with a different key, or access to the DataCache is synchronized, this issue does not appear. Obtaining the DataCache from the DataCacheFactory on each call (DataCache is supposed to be thread-safe) or prolonging the timeouts has no effect; the error is still received.
It seems very strange to me that MS would leave such a bug. Has anybody faced a similar issue?
I also see the same behavior and my understanding is that this is by design. The cache contains two concurrency models:
Optimistic Concurrency Model methods: Get, Put, ...
Pessimistic Concurrency Model: GetAndLock, PutAndLock, Unlock
If you use optimistic concurrency model methods like Get then you have to be ready to get DataCacheErrorCode.RetryLater and handle that appropriately - I also use a retry approach.
You might find more information at MSDN: Concurrency Models
We have seen this problem as well in our code. We solved it by overloading the Get method to catch the exceptions and then retry the call N times before falling back to a direct request to SQL.
Here is the code that we use to get data from the cache:
private static bool TryGetFromCache(string cacheKey, string region, out GetMappingValuesToCacheResult cacheResult, int counter = 0)
{
    cacheResult = new GetMappingValuesToCacheResult();
    try
    {
        // use as instead of cast, as this will return null instead of exception caused by casting.
        if (_cache == null) return false;
        cacheResult = _cache.Get(cacheKey, region) as GetMappingValuesToCacheResult;
        return cacheResult != null;
    }
    catch (DataCacheException dataCacheException)
    {
        switch (dataCacheException.ErrorCode)
        {
            case DataCacheErrorCode.KeyDoesNotExist:
            case DataCacheErrorCode.RegionDoesNotExist:
                return false;
            case DataCacheErrorCode.Timeout:
            case DataCacheErrorCode.RetryLater:
                if (counter > 9) return false; // we tried 10 times, so we will give up.
                counter++;
                Thread.Sleep(100);
                return TryGetFromCache(cacheKey, region, out cacheResult, counter);
            default:
                EventLog.WriteEntry(EventViewerSource, "TryGetFromCache: DataCacheException caught:\n" +
                    dataCacheException.Message, EventLogEntryType.Error);
                return false;
        }
    }
}
Then when we need to get something from the cache we do:
TryGetFromCache(key, region, out cachedMapping)
This allows us to use Try methods that encapsulate the exceptions. If it returns false, we know something is wrong with the cache and we can access SQL directly.