Design puzzle about database connectivity architecture - C++

I am designing a database browsing application which until now had only MySQL support, but recently I started adding SQLite support too, and I am facing some ugliness in the way the connectivity architecture is implemented. This is only about the "connection" part (i.e. where you get the user/db/host, or for SQLite the filename), not the database functionality. That is sorted out already.
I have a base class Connection which exposes "normal" methods like name(), as well as pure virtual methods such as virtual string fullLocation() = 0, which returns a string that can be used to identify the database (such as database#host for MySQL, or /etc/mydb.sqlite for SQLite).
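Roughly, the base class looks like this (a simplified sketch; only name() and fullLocation() are described above, the rest is trimmed):
// Simplified sketch of the base class.
class Connection
{
public:
    virtual ~Connection() {}

    // "Normal" method: a display name for the connection.
    string name() const { return m_name; }

    // Engine-specific identification, e.g. "database#host" for MySQL
    // or "/etc/mydb.sqlite" for SQLite.
    virtual string fullLocation() const = 0;

private:
    string m_name;
};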
Now, the user of course needs to specify a database he wants to connect to, so in the GUI of the application he simply chooses the type and then fills in the credentials. And here my troubles start. I have created MySqlConnection and SqliteConnection classes, both derived from Connection, but in most cases I end up with something like:
Connection* c = 0;
if (gui->engine_name() == "MYSQL")
{
    string host = gui->getHost();
    string user = gui->getUser();
    string password = gui->getPassword();
    int port = gui->getPort();
    string db = gui->getDatabase();
    c = new MySqlConnection(host, user, password, db, port);
}
else
{
    string dbFile = gui->getSqliteDbFile();
    c = new SqliteConnection(dbFile);
}
string meta = application->use_connection(&c);
and I fear that this will continue throughout the entire application, due to the very different nature of these two database engines.
Do you have any guidance on how to solve this in an elegant way?

You need the Factory pattern, which will create a Connection for you in an abstract way. That is the superficial answer.
It would be nice to parametrize this Factory with the Builder pattern. Something like this:
ParamBuilder *b = new ParamBuilder;
if (gui->engine_name() == "MYSQL")
{
    b->setHost(gui->getHost())
     ->setUser(gui->getUser())
     ->setPassword(gui->getPassword());
    // ...
}
else
{
    b->setFile(gui->getSqliteDbFile());
}
Connection *c = globalConnectionFactory->createConnection(b);
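A rough sketch of what createConnection() could then do with the builder (hasFile() and the getter names on ParamBuilder are assumptions for this sketch, mirroring the setters above, not an existing API):
Connection* ConnectionFactory::createConnection(ParamBuilder* b)
{
    // Sketch only: hasFile(), file(), host(), etc. are assumed accessors.
    if (b->hasFile())  // a file parameter implies SQLite
        return new SqliteConnection(b->file());

    return new MySqlConnection(b->host(), b->user(), b->password(),
                               b->database(), b->port());
}
This keeps the engine-specific branching in exactly one place; the rest of the application only ever sees Connection*.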

A much more elegant way would be to design a factory class and handle the GUI inputs in a GenerateConnection() method of that factory:
Connection* ConnectionFactory::GenerateConnection()
{
    // Return the new connection; the caller takes ownership.
    Connection* c = 0;
    if (gui->engine_name() == "MYSQL")
    {
        string host = gui->getHost();
        string user = gui->getUser();
        string password = gui->getPassword();
        int port = gui->getPort();
        string db = gui->getDatabase();
        c = new MySqlConnection(host, user, password, db, port);
    }
    else
    {
        string dbFile = gui->getSqliteDbFile();
        c = new SqliteConnection(dbFile);
    }
    return c;
}
If you would rather not have a dependency on the GUI, you can define a struct named Parameters, populate an instance of it from the GUI inputs, and pass that object into the connection-generating method of the factory.
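For example (a sketch; the field names are illustrative, not prescribed):
// Plain parameter object: the factory never touches the GUI.
struct Parameters
{
    string engine;                           // "MYSQL" or "SQLITE"
    string host, user, password, database;   // MySQL only
    int    port;                             // MySQL only
    string dbFile;                           // SQLite only
};

Connection* ConnectionFactory::GenerateConnection(const Parameters& p)
{
    if (p.engine == "MYSQL")
        return new MySqlConnection(p.host, p.user, p.password,
                                   p.database, p.port);
    return new SqliteConnection(p.dbFile);
}
The GUI layer fills in a Parameters instance and hands it over, so the factory can be unit-tested without any GUI at all.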

Related

Converting an unruly dependency injection model with a service locator

I've been using DI for a game engine project for a while and I just hit a wall. Given the below order of creation, the job system does not depend on anything, and everything depends on the file logger. It makes sense to create the job system first, then the file logger, then pass the created references for each dependency down to its dependents' constructors.
App::App(const std::string& cmdString)
    : EngineSubsystem()
    , _theJobSystem{std::make_unique<JobSystem>(-1, static_cast<std::size_t>(JobType::Max), new std::condition_variable)}
    , _theFileLogger{std::make_unique<FileLogger>(*_theJobSystem.get(), "game")}
    , _theConfig{std::make_unique<Config>(KeyValueParser{cmdString})}
    , _theRenderer{std::make_unique<Renderer>(*_theJobSystem.get(), *_theFileLogger.get(), *_theConfig.get())}
    , _theInputSystem{std::make_unique<InputSystem>(*_theFileLogger.get(), *_theRenderer.get())}
    , _theUI{std::make_unique<UISystem>(*_theFileLogger.get(), *_theRenderer.get(), *_theInputSystem.get())}
    , _theConsole{std::make_unique<Console>(*_theFileLogger.get(), *_theRenderer.get())}
    , _theAudioSystem{std::make_unique<AudioSystem>(*_theFileLogger.get())}
    , _theGame{std::make_unique<Game>()}
{
    SetupEngineSystemPointers();
    SetupEngineSystemChainOfResponsibility();
    LogSystemDescription();
}
void App::SetupEngineSystemPointers() {
    g_theJobSystem = _theJobSystem.get();
    g_theFileLogger = _theFileLogger.get();
    g_theConfig = _theConfig.get();
    g_theRenderer = _theRenderer.get();
    g_theUISystem = _theUI.get();
    g_theConsole = _theConsole.get();
    g_theInputSystem = _theInputSystem.get();
    g_theAudioSystem = _theAudioSystem.get();
    g_theGame = _theGame.get();
    g_theApp = this;
}
void App::SetupEngineSystemChainOfResponsibility() {
    g_theConsole->SetNextHandler(g_theUISystem);
    g_theUISystem->SetNextHandler(g_theInputSystem);
    g_theInputSystem->SetNextHandler(g_theApp);
    g_theApp->SetNextHandler(nullptr);
    g_theSubsystemHead = g_theConsole;
}
As you can see, passing the different subsystems around to the other subsystems' constructors is starting to get messy, in particular when dealing with jobs, logging, console commands, UI, configuration, and audio (and physics, not pictured).
(Side note: these are eventually going to be replaced with interfaces created via factories for cross-compatibility; i.e. the Renderer is strictly a DirectX/Windows-only renderer, but I want to eventually support OpenGL/Linux. That's why everything is passed around as references and created as pointers instead of as concrete types.)
I've run into situations where pretty much every subsystem is in some way dependent on every other subsystem.
But due to construction-order problems, dependency injection does not work, because one or more of the required-to-exist subsystems hasn't been constructed yet. The same problem occurs with two-phase construction: the subsystem may not have been initialized by the time it's needed further downstream.
I looked into the service locator pattern, and this question deems it a bad idea, but the game industry likes using bad ideas (like global variables to every subsystem for game-specific code to use) if they work.
Would converting to a service locator fix this problem?
What other implementations do you know of that could also fix the issue?
I ultimately went with the ServiceLocator pattern, deriving every subsystem that was a dependency from a service interface:
App::App(const std::string& cmdString)
    : EngineSubsystem()
    , _theConfig{std::make_unique<Config>(KeyValueParser{cmdString})}
{
    SetupEngineSystemPointers();
    SetupEngineSystemChainOfResponsibility();
    LogSystemDescription();
}
void App::SetupEngineSystemPointers() {
    ServiceLocator::provide(*static_cast<IConfigService*>(_theConfig.get()));
    _theJobSystem = std::make_unique<JobSystem>(-1, static_cast<std::size_t>(JobType::Max), new std::condition_variable);
    ServiceLocator::provide(*static_cast<IJobSystemService*>(_theJobSystem.get()));
    _theFileLogger = std::make_unique<FileLogger>("game");
    ServiceLocator::provide(*static_cast<IFileLoggerService*>(_theFileLogger.get()));
    _theRenderer = std::make_unique<Renderer>();
    ServiceLocator::provide(*static_cast<IRendererService*>(_theRenderer.get()));
    _theInputSystem = std::make_unique<InputSystem>();
    ServiceLocator::provide(*static_cast<IInputService*>(_theInputSystem.get()));
    _theAudioSystem = std::make_unique<AudioSystem>();
    ServiceLocator::provide(*static_cast<IAudioService*>(_theAudioSystem.get()));
    _theUI = std::make_unique<UISystem>();
    _theConsole = std::make_unique<Console>();
    _theGame = std::make_unique<Game>();
    g_theJobSystem = _theJobSystem.get();
    g_theFileLogger = _theFileLogger.get();
    g_theConfig = _theConfig.get();
    g_theRenderer = _theRenderer.get();
    g_theUISystem = _theUI.get();
    g_theConsole = _theConsole.get();
    g_theInputSystem = _theInputSystem.get();
    g_theAudioSystem = _theAudioSystem.get();
    g_theGame = _theGame.get();
    g_theApp = this;
}
void App::SetupEngineSystemChainOfResponsibility() {
    g_theConsole->SetNextHandler(g_theUISystem);
    g_theUISystem->SetNextHandler(g_theInputSystem);
    g_theInputSystem->SetNextHandler(g_theRenderer);
    g_theRenderer->SetNextHandler(g_theApp);
    g_theApp->SetNextHandler(nullptr);
    g_theSubsystemHead = g_theConsole;
}
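For reference, the locator itself can stay very small. A minimal sketch, assuming service interfaces shaped like the I*Service casts above (the getRenderer() name is mine, not from the snippet):
// Minimal service-locator sketch: one provide()/get() pair per service
// interface. IRendererService is assumed from the static_cast usage above.
class ServiceLocator
{
public:
    static void provide(IRendererService& renderer) { _renderer = &renderer; }
    static IRendererService& getRenderer() { return *_renderer; }
    // ... repeat for IJobSystemService, IFileLoggerService, etc. ...
private:
    static IRendererService* _renderer;
};

IRendererService* ServiceLocator::_renderer = nullptr;
Subsystems then pull their dependencies inside their own constructors via ServiceLocator::getRenderer() and friends, which removes the construction-order coupling from the App constructor.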

C++ connect to MySQL/MariaDB is very slow

I am writing a C++ HTTPS server which accepts a connection, makes a query to the database, and sends the answer. If I send a GET request for the index.html endpoint I get a really good result, but if I send a POST that connects to MySQL, the requests/sec figure is very small.
I have tried different connectors: mysql, mariadb++, mariadbcpp (used in the code example below), etc.; the result is the same.
Code example:
nlohmann::json PostResult = {};
const char *uri = "tcp://192.168.1.130:3306/test";
const char *user = "root";
const char *passwd = "123";
MariaCpp::scoped_library_init maria_lib_init;
try {
    MariaCpp::Connection conn;
    conn.connect(MariaCpp::Uri(uri), user, passwd);
    // std::unique_ptr instead of the deprecated std::auto_ptr
    std::unique_ptr<MariaCpp::PreparedStatement> stmt(conn.prepare("SELECT a.id, a.msg_id, a.NUMBER, a.sign FROM chiffa a WHERE a.id = 1"));
    stmt->execute();
    while (stmt->fetch()) {
        PostResult["id"] = stmt->getInt(0);
        PostResult["msg_id"] = stmt->getString(1);
        PostResult["NUMBER"] = stmt->getString(2);
        PostResult["sign"] = stmt->getString(3);  // column 3 is a.sign; a second "msg_id" key would overwrite the value above
    }
    conn.close();
}
catch (MariaCpp::Exception &e) {
    std::cerr << e << std::endl;
}
Can anyone help me figure out how to increase the requests/sec? Thanks!
200 connections and queries per second isn't that bad for a naive implementation, particularly when the database is being hosted across another network boundary.
I'm certainly not surprised that a handler which just returns some HTML is 50 times faster than one that opens a database connection.
Your server/application should maintain one connection (or a pool of connections) that it keeps alive all the time, and use that when a request is made, rather than constructing a fresh connection for each request.
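A minimal pool sketch along those lines, reusing the MariaCpp calls from the question (the ConnectionPool class and its acquire()/release() naming are mine; reconnection and error handling are omitted):
#include <cstddef>
#include <memory>
#include <mutex>
#include <vector>
#include <condition_variable>

// Connections are opened once, up front, and recycled across requests.
class ConnectionPool {
public:
    ConnectionPool(const char* uri, const char* user, const char* passwd,
                   std::size_t size) {
        for (std::size_t i = 0; i < size; ++i) {
            auto conn = std::make_unique<MariaCpp::Connection>();
            conn->connect(MariaCpp::Uri(uri), user, passwd);
            _idle.push_back(std::move(conn));
        }
    }

    // Blocks until a connection is free.
    std::unique_ptr<MariaCpp::Connection> acquire() {
        std::unique_lock<std::mutex> lock(_mutex);
        _cv.wait(lock, [this] { return !_idle.empty(); });
        auto conn = std::move(_idle.back());
        _idle.pop_back();
        return conn;
    }

    // Hand the connection back instead of closing it.
    void release(std::unique_ptr<MariaCpp::Connection> conn) {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _idle.push_back(std::move(conn));
        }
        _cv.notify_one();
    }

private:
    std::mutex _mutex;
    std::condition_variable _cv;
    std::vector<std::unique_ptr<MariaCpp::Connection>> _idle;
};
Each request handler then does auto conn = pool.acquire(); ... pool.release(std::move(conn)); instead of connect()/close(), which removes the TCP and MySQL handshake from the per-request cost.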

GATE Embedded runtime

I want to use GATE over the web, so I decided to create a SOAP web service in Java with the help of GATE Embedded.
But for the same document and saved pipeline, I get a different run-time duration when GATE Embedded runs as a Java web service.
The same code has a constant run time when it runs as a Java application project.
In the web service, the run time keeps increasing after each execution until I get a timeout error.
Does anyone have experience with this kind of problem?
This is my code:
@WebService(serviceName = "GateWS")
public class GateWS {
    @WebMethod(operationName = "gateengineapi")
    public String gateengineapi(@WebParam(name = "PipelineNumber") String PipelineNumber, @WebParam(name = "Documents") String Docs) throws Exception {
        try {
            System.setProperty("gate.home", "C:\\GATE\\");
            System.setProperty("shell.path", "C:\\cygwin2\\bin\\sh.exe");
            Gate.init();
            File GateHome = Gate.getGateHome();
            File FrenchGapp = new File(GateHome, PipelineNumber);
            CorpusController FrenchController;
            FrenchController = (CorpusController) PersistenceManager.loadObjectFromFile(FrenchGapp);
            Corpus corpus = Factory.newCorpus("BatchProcessApp Corpus");
            FrenchController.setCorpus(corpus);
            File docFile = new File(GateHome, Docs);
            Document doc = Factory.newDocument(docFile.toURL(), "utf-8");
            corpus.add(doc);
            FrenchController.execute();
            String docXMLString = doc.toXml();
            String outputFileName = doc.getName() + ".out.xml";
            File outputFile = new File(docFile.getParentFile(), outputFileName);
            FileOutputStream fos = new FileOutputStream(outputFile);
            BufferedOutputStream bos = new BufferedOutputStream(fos);
            OutputStreamWriter out = new OutputStreamWriter(bos, "utf-8");
            out.write(docXMLString);
            out.close();
            gate.Factory.deleteResource(doc);
            return outputFileName;
        } catch (Exception ex) {
            return "ERROR: -> " + ex.getMessage();
        }
    }
}
I really appreciate any help you can provide.
The problem is that you're loading a new instance of the pipeline for every request, but then not freeing it again at the end of the request. GATE maintains a list internally of every PR/LR/controller that is loaded, so anything you load with Factory.createResource or PersistenceManager.loadObjectFrom... must be freed using Factory.deleteResource once it is no longer needed, typically using a try-finally:
FrenchController = (CorpusController) PersistenceManager.loadObjectFromFile(FrenchGapp);
try {
    // ...
} finally {
    Factory.deleteResource(FrenchController);
}
But...
Rather than loading a new instance of the pipeline every time, I would strongly recommend you explore a more efficient approach: load a smaller number of pipeline instances, but keep them in memory to serve multiple requests. There is a fully worked-through example of this technique in the training materials on the GATE wiki, in particular module number 8 (track 2, Thursday).

How to create new record from web service in ADF?

I have created a class and published it as a web service. I have created a web method like this:
public void addNewRow(MyObject cob) {
    MyAppModule myAppModule = new MyAppModule();
    try {
        ViewObjectImpl vo = myAppModule.getMyVewObject1();
        // ================> vo object is now null
        Row r = vo.createRow();
        r.setAttribute("Param1", cob.getParam1());
        r.setAttribute("Param2", cob.getParam2());
        vo.executeQuery();
        getTransaction().commit();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
As noted in the code, myAppModule.getMyVewObject1() returns a null object, and I do not understand why. As far as I know, the AppModule is supposed to initialize the object by itself when I call getMyVewObject1(), but maybe I am wrong, or maybe this is not how it works for web methods. Has anyone ever faced this issue? Any help would be much appreciated.
You can check out a nice tutorial: Building and Using Web Services with JDeveloper.
It gives you a general idea of how you should build your web services with ADF.
Another approach, when you need to call an existing Application Module from some bean that doesn't have the needed environment (a servlet, etc.), is to initialize it like this:
String appModuleName = "org.my.package.name.model.AppModule";
String appModuleConfig = "AppModuleLocal";
ApplicationModule am = Configuration.createRootApplicationModule(appModuleName, appModuleConfig);
Don't forget to release it:
Configuration.releaseRootApplicationModule(am, true);
Here is why you shouldn't really do it like this.
And even more...
A better approach is to get access to the binding layer and make the call from there.
Here is a nice article.
Per our PM: if you don't use it in the context of an ADF application, then the following code should be used (the sample code is from a project I am involved in). Note the release of the AM at the end of the request:
@WebService(serviceName = "LightViewerSoapService")
public class LightViewerSoapService {
    private final String amDef = "oracle.demo.lightbox.model.viewer.soap.services.LightBoxViewerService";
    private final String config = "LightBoxViewerServiceLocal";
    LightBoxViewerServiceImpl service;
    public LightViewerSoapService() {
        super();
    }
    @WebMethod
    public List<Presentations> getAllUserPresentations(@WebParam(name = "userId") Long userId) {
        ArrayList<Presentations> al = new ArrayList<Presentations>();
        service = (LightBoxViewerServiceImpl) getApplicationModule(amDef, config);
        ViewObject vo = service.findViewObject("UserOwnedPresentations");
        VariableValueManager vm = vo.ensureVariableManager();
        vm.setVariableValue("userIdVariable", userId.toString());
        vo.applyViewCriteria(vo.getViewCriteriaManager().getViewCriteria("byUserIdViewCriteria"));
        Row rw = vo.first();
        if (rw != null) {
            Presentations p = createPresentationFromRow(rw);
            al.add(p);
            while (vo.hasNext()) {
                rw = vo.next();
                p = createPresentationFromRow(rw);
                al.add(p);
            }
        }
        releaseAm((ApplicationModule) service);
        return al;
    }
Have a look here too:
http://www.youtube.com/watch?v=jDBd3JuroMQ

Virtual Server IIS WMI Problem

I have been tasked with finding out what is causing an issue with this bit of code:
public static ArrayList GetEthernetMacAddresses()
{
    ArrayList addresses = new ArrayList();
    ManagementClass mc = new ManagementClass("Win32_NetworkAdapter");
    // This causes GetInstances(options)
    // to return all subclasses of Win32_NetworkAdapter
    EnumerationOptions options = new EnumerationOptions();
    options.EnumerateDeep = true;
    foreach (ManagementObject mo in mc.GetInstances(options)) {
        string macAddr = mo["MACAddress"] as string;
        string adapterType = mo["AdapterType"] as string;
        if (!StringUtil.IsBlank(macAddr) && !StringUtil.IsBlank(adapterType))
        {
            if (adapterType.StartsWith("Ethernet")) {
                addresses.Add(macAddr);
            }
        }
    }
    return addresses;
}
On our (Win2003) virtual servers, this works when run as part of a console application but not from a web service running on IIS (on that same machine).
Alternatively, I can use this code in a web service on IIS (on the virtual server) and get the correct return values:
public static string GetMacAddresses()
{
    StringBuilder sb = new StringBuilder();
    ManagementClass mgmt = new ManagementClass(
        "Win32_NetworkAdapterConfiguration"
    );
    ManagementObjectCollection objCol = mgmt.GetInstances();
    foreach (ManagementObject obj in objCol)
    {
        if ((bool)obj["IPEnabled"])
        {
            if (sb.Length > 0)
            {
                sb.Append(";");
            }
            sb.Append(obj["MacAddress"].ToString());
        }
        obj.Dispose();
    }
    return sb.ToString();
}
Why does the second one work and not the first one?
Why only when called through an IIS web service on a virtual machine?
Any help would be appreciated.
UPDATE: After much telephone time with all different levels of MS Support, they've come to the conclusion that this is "As Designed".
Since it is at the driver level of the virtual network adapter driver, the answer was that we should change our code "to work around the issue".
This means that you cannot reliably test on virtual servers the same code that you use on physical servers, since it can't be guaranteed that the servers are exact replicas...
Okay, so I wrote this code to test the issue:
public void GetWin32_NetworkAdapter()
{
    DataTable dt = new DataTable();
    dt.Columns.Add("AdapterName", typeof(string));
    dt.Columns.Add("ServiceName", typeof(string));
    dt.Columns.Add("AdapterType", typeof(string));
    dt.Columns.Add("IPEnabled", typeof(bool));
    dt.Columns.Add("MacAddress", typeof(string));
    // Try getting it via Win32_NetworkAdapter
    ManagementClass mgmt = new ManagementClass("Win32_NetworkAdapter");
    EnumerationOptions options = new EnumerationOptions();
    options.EnumerateDeep = true;
    ManagementObjectCollection objCol = mgmt.GetInstances(options);
    foreach (ManagementObject obj in objCol)
    {
        DataRow dr = dt.NewRow();
        dr["AdapterName"] = obj["Caption"].ToString();
        dr["ServiceName"] = obj["ServiceName"].ToString();
        dr["AdapterType"] = obj["AdapterType"];
        dr["IPEnabled"] = (bool)obj["IPEnabled"];
        if (obj["MacAddress"] != null)
        {
            dr["MacAddress"] = obj["MacAddress"].ToString();
        }
        else
        {
            dr["MacAddress"] = "none";
        }
        dt.Rows.Add(dr);
    }
    gvConfig.DataSource = dt;
    gvConfig.DataBind();
}
When it's run on a physical IIS box I get this:
Physical IIS server: http://img14.imageshack.us/img14/8098/physicaloutput.gif
Same code on a virtual IIS server:
Virtual server: http://img25.imageshack.us/img25/4391/virtualoutput.gif
See the difference? It's on the first line: the virtual server doesn't return the "AdapterType" string, which is why the original code was failing.
This brings up an interesting thought: if Virtual Server is supposed to be a "virtual" representation of a real IIS server, why doesn't it return the same values?
Why are the two returning different results? It's possible that, due to the different user accounts, you'll get different results running from the console than from a service.
Why does (1) fail while (2) works? Is it possible that a null AdapterType comes back for some adapters? If so, would the code handle this condition?