I'm trying to use Windows RPC for communication between two processes (C/C++, 32-bit, Win7).
I've followed the example here Instruction to RPC and successfully got RPC to work. Now I'm having difficulty getting the proxy/stub part to work as well.
The IDL file looks like this:
import "unknwn.idl";

[ uuid(3cb112c0-688a-4611-83b6-31d33d87ea28), object ]
interface IDemo : IUnknown
{
    HRESULT ThisIsAMethod([in, string] const char* test);
}

[ uuid(60ad6a21-ba49-483a-b0a2-faa5187b8299), version(1.0),
  implicit_handle(handle_t hDemoBinding) ]
interface IDemoRPC
{
    void SimpleTest();
    void GetDemo([out] IDemo** service);
    void Shutdown();
}
I can invoke SimpleTest() remotely on the server from the client; that works just fine. But GetDemo() gives me an access violation when the server 'returns' something other than NULL.
Here's what I've done:
Built a DLL from the generated demo_i.c, demo_p.c, and dlldata.c, with REGISTER_PROXY_DLL defined and a .def file containing the five private entries. I've registered it with regsvr32 (the 32-bit one from SysWOW64).
Created a DemoImpl class in the server process that derives from IDemo and implements ThisIsAMethod as well as AddRef and friends.
Implemented GetDemo(IDemo** service) as a one-liner: *service = new DemoImpl();
When I invoke GetDemo from the client process, the server process terminates with an access violation (0x00000014). The stack trace shows that it happens on a separate thread, deep inside rpcrt4.
I would have expected that the thing returns a proxy to the client.
I suspect I'm doing something fundamentally wrong here. For one thing, I can't find an example where instances of interface objects are created with new; there's always some magic with CoGetClassObject or similar. I have no clue how those functions would know where to find the implementation.
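For illustration, here is a minimal sketch of the reference-counting contract a returned interface pointer has to honor. The interface below is a simplified stand-in for IUnknown (no Windows headers, no QueryInterface), and this DemoImpl is hypothetical; the point is that GetDemo must hand out an owned reference (AddRef before returning) and the object must destroy itself only when the count reaches zero:

```cpp
#include <atomic>

// Simplified stand-in for IUnknown (real COM adds QueryInterface and HRESULTs).
struct IDemoLike {
    virtual unsigned long AddRef() = 0;
    virtual unsigned long Release() = 0;
    virtual void ThisIsAMethod(const char* test) = 0;
    virtual ~IDemoLike() = default;
};

// Hypothetical implementation; the name mirrors the question's DemoImpl.
class DemoImpl : public IDemoLike {
    std::atomic<unsigned long> mRefs{0};
public:
    unsigned long AddRef() override { return ++mRefs; }
    unsigned long Release() override {
        unsigned long n = --mRefs;
        if (n == 0) delete this;   // the object owns its own lifetime
        return n;
    }
    void ThisIsAMethod(const char*) override {}
};

// Sketch of GetDemo: the callee hands out an *owned* reference.
void GetDemo(IDemoLike** service) {
    DemoImpl* impl = new DemoImpl();
    impl->AddRef();                // the caller receives one reference
    *service = impl;
}
```

In real COM the stub additionally marshals the pointer through the registered proxy/stub DLL, so the object must be a complete COM object (QueryInterface included), and the proxy DLL must be registered for the bitness of both processes; a proxy registered with the SysWOW64 regsvr32 is only visible to 32-bit processes.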
I have a default website in IIS in which I have created one virtual directory, "wsdls".
I want to gather statistics on how many requests hit my virtual directory. That requires intercepting requests at the web-server level and collecting the statistics there. An HttpModule was one of the many solutions I considered that is suitable for this scenario, so I started building one.
For testing purposes, I wanted to create an HTTP module that applies to files with a particular extension (say *.wsdl): on every GET request for a .wsdl file in this virtual directory, the module should redirect to "www.google.com". This would be a good demonstration of how an HTTP module can be built and deployed on IIS.
The HttpModule, written in Visual Studio, is shown below:
using System;
using System.Web;

namespace Handler.App_Code
{
    public class HelloWorldModule : IHttpModule
    {
        public HelloWorldModule()
        {
        }

        public String ModuleName
        {
            get { return "HelloWorldModule"; }
        }

        // In the Init function, register for HttpApplication
        // events by adding your handlers.
        public void Init(HttpApplication application)
        {
            application.BeginRequest += new EventHandler(this.Application_BeginRequest);
            application.EndRequest += new EventHandler(this.Application_EndRequest);
        }

        private void Application_BeginRequest(Object source, EventArgs e)
        {
            // Recover the HttpApplication and HttpContext objects to access
            // request and response properties.
            HttpApplication application = (HttpApplication)source;
            HttpContext context = application.Context;
            context.Response.Redirect("www.google.com");
        }

        private void Application_EndRequest(Object source, EventArgs e)
        {
            // Nothing to be done here.
        }

        public void Dispose() { }
    }
}
Now I have built this project for x64 and the DLL builds successfully. Next I have to register this DLL in IIS so that whenever I access a *.wsdl file, the request is automatically diverted to "www.google.com". Here is the next step I took:
Then I enabled the handler mappings as shown below.
I assumed that was it and nothing more needed to be done: I should be able to intercept all HTTP requests of the form "*.wsdl", meaning that whenever I access any WSDL on the server, control should go to Google (because of the logic written in BeginRequest). But unfortunately I failed to achieve that. What can be done here?
One thing I noticed: when you are redirecting to an external URL, include the scheme (http://). So change
context.Response.Redirect("www.google.com");
to
context.Response.Redirect("http://www.google.com", true);
I was able to solve the problem I was facing. Below are the observations that were missing from my understanding and that helped me solve it:
Locating the proper web.config file:
Every website in IIS can have a web.config file that controls the application.
Since I am working with the "Default Website", this refers to the directory "C:\\inetpub\\wwwroot".
There should be a "web.config" file present in this directory; create it if it is not already present.
Modifying web.config:
Once you have identified the file that needs to be modified, add the necessary module configuration to web.config.
In this case, we want to add a module to the default website; a probable setting is shown below.
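The original post did not include the actual snippet, so the following is a hedged reconstruction of what the module registration probably looked like, assuming IIS 7+ in integrated pipeline mode and the namespace, class, and assembly names from the code above:

```xml
<configuration>
  <system.webServer>
    <modules>
      <!-- type = "Namespace.ClassName, AssemblyName" -->
      <add name="HelloWorldModule"
           type="Handler.App_Code.HelloWorldModule, Handler" />
    </modules>
  </system.webServer>
</configuration>
```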
Adding contents to the bin directory:
If you try to run the application now, IIS will not find any DLL or executable to run, so we need to place the binaries in a particular location.
Create a directory named "bin" at the root of the website, if not already present, and place all the DLLs that you want this website to load there. A sample is shown below.
General points to be considered:
Proper access must be granted for the folder containing the DLL.
Ideally, do not modify the entire website; it is better to work only on your own web application.
If web.config is not found, create one.
If bin is not present in the web root directory, create one.
Background:
Sorry this is such a complex problem, but it is driving me nuts, and finding a solution may help others who need a compartmentalized application.
I have a Qt program that is VERY compartmentalized because it is meant to host plugins and be used in a variety of situations, sometimes as a server, sometimes as a client, sometimes as both. The plugins that are loaded are login dependent. (Because the access defined for the user is not necessarily up to the user and the user's access to data and functionality may be limited).
The application relies on a core DLL library (specific to the application) which is used by the main exe, the client, the server, and all plugin dlls. Client and server functionality are also in separate dlls. I am new to this style of programming so that may be leading to my issue.
My Problem:
I have a class called "BidirectionalTcpConnection", defined in the core DLL, which is used by the executable, the client DLL, and the server DLL. It keeps track of data passed back and forth over a QTcpSocket. I wrote the class to avoid the very problem I am having now: it originally occurred when using QTcpSocket::readAll(). (If I read all but the last byte, and then read the last byte using QTcpSocket::peek(...), it worked fine.)
My new class successfully reads from and writes to the socket without error, but when I try to close or abort the socket (this happened with my earlier workaround too), I get the same error I used to get when reading the last byte: "Invalid address specified to RtlValidateHeap". Basically it triggers a "user breakpoint" in dbgheap.c.
My Hypothesis (What I believe is wrong):
The dbgheap.c source documents that it checks whether the address is valid and resides on the current heap.
It is possible that the need to compartmentalize my application is leading to this issue. The data supplied to the socket for sending was originally being allocated on the executable's heap, along with the instance of BidirectionalTcpConnection. (I am trying to send the login and receive the permissions for application access.) The socket itself, however, is being allocated on the core DLL's heap (assuming the DLL has a heap separate from the exe's for internal data). I tried to avoid this by deep-copying, inside the core DLL code, each piece of data to be sent over the socket, but that hasn't solved the problem, presumably because the BidirectionalTcpConnection is still allocated on a different heap than the socket itself.
My question(s) for anyone who can help:
Is the assumption in my hypothesis correct?
Do I need to allocate the socket and the connection on the same heap? How do I overcome this issue?
Also, if you look at the code: will I need to delete the returned string (which is processed by the executable) inside the core DLL in order to avoid the same issue?
If you need some code, I have supplied what I think is necessary and can supply more on request.
Some Code:
For starters, here is some basic code showing the way things are allocated. The login is performed in main before the main interface is shown; w is the main interface window class instance. Here is the code that starts the process leading to the crash:
while (loginFailed)
{
    splash->showLogin();
    while (splash->isWaitingOnLogin())
        a.processEvents();

    QString username(*splash->getUserName());
    QString password(*splash->getPassword());
    // LATER: encrypt login for sending
    loginFailed = w.loginFailed(username, password, a);
}
Here is the code that instantiates the BidirectionalTcpConnection on the executable's heap and sends the login data. This code lives in a few separate private methods of the Qt main window class.
// method A
// processes Qstring parameters into sendable data...
// then calls method B
// which creates the instance of *BidirectionalTcpConnection*
...
if (getServerAddress() == QString("LOCAL"))
mTcpConnection = new BidirectionalTcpConnection(getHostAddressIn()->toString(),
(quint16)ServerPorts::loginRequest, (long)15, this);
else
mTcpConnection = new BidirectionalTcpConnection(*getServerAddress(),
(quint16)ServerPorts::loginRequest, (long)15, this);
...
// back to method A...
mTcpConnection->sendBinaryData(*dataStream);
mTcpConnection->flushMessages(); // sends the data across the socket
...
// waits for response and then parses user data when it comes
while (waitForResponse)
{
if (mTcpConnection->hasBufferedMessages())
{
QString* loginXML = mTcpConnection->getNextMessageAsText();
// parse the xml
if (parseLogin(*loginXML))
{
waitForResponse = false;
}
...
}
}
...
// calls method that closes the socket which causes crash
mTcpConnection->abortConnection(); // crash occurs inside this method
delete mTcpConnection;
mTcpConnection = NULL;
Here is the relevant BidirectionalTcpConnection code, in order of use. Note that this code is located in the core DLL, so presumably it allocates data on a separate heap...
BidirectionalTcpConnection::BidirectionalTcpConnection(const QString& destination,
quint16 port, long timeOutInterval, TimeUnit unit, QObject* parent) :
QObject(parent),
mSocket(parent),
...
{ }
void BidirectionalTcpConnection::sendBinaryData(QByteArray& data)
{
// notice I try and avoid different heaps where I can by copying the data...
mOutgoingMessageQueue.enqueue(new QByteArray(data)); // member is of QQueue type
}
QString* BidirectionalTcpConnection::getNextMessageAsText()
// NOTE: somehow I need to delete the returned pointer to prevent memory leak
{
if (mIncomingMessageQueue.size() == 0)
return NULL;
else
{
QByteArray* data = mIncomingMessageQueue.dequeue();
QString* stringData = new QString(*data);
delete data;
return stringData;
}
}
void BidirectionalTcpConnection::abortConnection()
{
mSocket.abort(); // **THIS CAUSES ERROR/CRASH**
clearQueues();
mIsConnected = false;
}
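One way to sidestep the ownership question from above (who is allowed to delete the QString* returned by getNextMessageAsText?) is to return by value, so that no heap allocation ever crosses the DLL boundary. The sketch below uses std::string and std::queue as stand-ins for the Qt types, since the point is purely the ownership pattern:

```cpp
#include <queue>
#include <string>

// Simplified stand-in for BidirectionalTcpConnection's incoming message queue.
class MessageQueue {
    std::queue<std::string> mIncoming;
public:
    void push(const std::string& raw) { mIncoming.push(raw); }

    // Return by value: the copy lives in the caller's frame, so the caller
    // never has to free memory that was allocated inside this module.
    std::string getNextMessageAsText() {
        if (mIncoming.empty())
            return std::string();          // empty string instead of NULL
        std::string msg = mIncoming.front();
        mIncoming.pop();
        return msg;
    }
};
```

The same applies with the Qt types: returning QString by value is cheap because QString is implicitly shared (copy-on-write), so no deep copy is made on return.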
I have an emulator program written in C++ running on Ubuntu 12.04. There are some settings and options needed for running the program, which are given as arguments to main. I need to query these options via HTTPS from a remote machine/mobile device (so basically imagine I want to return main's arguments). I was wondering if someone could help me with that.
There are probably libraries that make this easier, for example Poco. I'm not sure how suitable it is for my case, but here is an example of connection setup in Poco. Using a library is not a must, though; I'm just after the most efficient/simplest way.
Mongoose (or its non-GPL fork Civetweb) is an embedded web server. It is very easy to set up and to add controllers to (typically half a dozen lines of code).
Just add the project file (one C file) to your project and build, add a line to start the server listening with whatever options you like, and add a callback function to handle requests. It does SSL out of the box (though IIRC you'll need OpenSSL installed too).
There was another SO answer with some comparisons. I used Civetweb at work and was impressed at how easy it all was. There's not much documentation, though.
Here's a stripped-down POCO version; for the full code, see the HTTPSTimeServer example.
struct MyRequestHandler: public HTTPRequestHandler
{
    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
    {
        response.setContentType("text/html");
        // ... do your work here
        std::ostream& ostr = response.send();
        ostr << "<html><head><title>HTTPServer example</title></head>"
             << "<body>Success!</body></html>";
    }
};

struct MyRequestHandlerFactory: public HTTPRequestHandlerFactory
{
    HTTPRequestHandler* createRequestHandler(const HTTPServerRequest& request)
    {
        return new MyRequestHandler;
    }
};
// ...
// set-up a server socket
SecureServerSocket svs(port);
// set-up a HTTPServer instance (you may want to new the factory and params
// prior to constructing object to prevent the possibility of a leak in case
// of exception)
HTTPServer srv(new MyRequestHandlerFactory, svs, new HTTPServerParams);
// start the HTTPServer
srv.start();
I have an application built using MFC to which I need to add Bonjour/Zeroconf service discovery. I've had a bit of trouble figuring out how best to do it, but I've settled on using the DLL stub provided in the mDNSResponder source code and linking my application against the static lib generated from it (which in turn uses the system dnssd.dll).
However, I'm still having problems: the callbacks don't always seem to be called, so my device discovery stalls. What confuses me is that it all works absolutely fine under OS X using the dns-sd terminal tool, and under Windows using the dns-sd command-line tool. On that basis, I'm ruling out the client service as the problem and trying to figure out what's wrong with my Windows code.
I'm basically calling DNSServiceBrowse(), then in that callback calling DNSServiceResolve(), then finally calling DNSServiceGetAddrInfo() to get the IP address of the device so I can connect to it.
All of these calls use WSAAsyncSelect like this:
DNSServiceErrorType err = DNSServiceResolve(&client,
                                            kDNSServiceFlagsWakeOnResolve,
                                            interfaceIndex,
                                            serviceName,
                                            regtype,
                                            replyDomain,
                                            ResolveInstance,
                                            context);
if (err == 0)
{
    err = WSAAsyncSelect((SOCKET)DNSServiceRefSockFD(client), p->m_hWnd,
                         MESSAGE_HANDLE_MDNS_EVENT, FD_READ | FD_CLOSE);
}
But sometimes the callback just never gets called even though the service is there and using the command line will confirm that.
I'm totally stumped as to why this isn't 100% reliable when it is if I use the same DLL from the command line. My only possible explanation is that DNSServiceResolve tries to invoke the callback before WSAAsyncSelect has registered the handling message for the socket, but I can't see any way around this.
I've spent ages on this and am now completely out of ideas. Any suggestions would be welcome, even if they're "that's a really dumb way to do it, why aren't you doing X, Y, Z".
I call DNSServiceBrowse, with a "shared connection" (see dns_sd.h for documentation) as in:
DNSServiceCreateConnection(&ServiceRef);
// Need to copy the main ref to another variable.
DNSServiceRef BrowseServiceRef = ServiceRef;
DNSServiceBrowse(&BrowseServiceRef, // Receives reference to Bonjour browser object.
kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
kDNSServiceInterfaceIndexAny, // Browse on all network interfaces.
"_servicename._tcp", // Browse for service types.
NULL, // Browse on the default domain (e.g. local.).
BrowserCallBack, // Callback function when Bonjour events occur.
this); // Callback context.
This is inside a main run method of a thread class called ServiceDiscovery. ServiceRef is a member of ServiceDiscovery.
Then immediately following the above code, I have a main event loop like the following:
while (true)
{
    err = DNSServiceProcessResult(ServiceRef);
    if (err != kDNSServiceErr_NoError)
    {
        DNSServiceRefDeallocate(BrowseServiceRef);
        DNSServiceRefDeallocate(ServiceRef);
        ServiceRef = nullptr;
        break;   // stop processing once the connection is torn down
    }
}
Then, in BrowserCallback you have to setup the resolve request:
void DNSSD_API ServiceDiscovery::BrowserCallBack(DNSServiceRef inServiceRef,
DNSServiceFlags inFlags,
uint32_t inIFI,
DNSServiceErrorType inError,
const char* inName,
const char* inType,
const char* inDomain,
void* inContext)
{
(void) inServiceRef; // Unused
ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
...
// Pass a copy of the main DNSServiceRef (just a pointer). We don't
// hang on to the local copy since it's passed to the resolve callback,
// where we deallocate it.
DNSServiceRef resolveServiceRef = sd->ServiceRef;
DNSServiceErrorType err =
DNSServiceResolve(&resolveServiceRef,
kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
inIFI,
inName,
inType,
inDomain,
ResolveCallBack,
sd);
Then in ResolveCallback you should have everything you need.
// Callback for Bonjour resolve events.
void DNSSD_API ServiceDiscovery::ResolveCallBack(DNSServiceRef inServiceRef,
DNSServiceFlags inFlags,
uint32_t inIFI,
DNSServiceErrorType inError,
const char* fullname,
const char* hosttarget,
uint16_t port, /* In network byte order */
uint16_t txtLen,
const unsigned char* txtRecord,
void* inContext)
{
ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
assert(sd);
// Save off the connection info, get TXT records, etc.
...
// Deallocate the DNSServiceRef.
DNSServiceRefDeallocate(inServiceRef);
}
hosttarget and port contain your connection info, and any text records can be obtained using the DNS-SD API (e.g. TXTRecordGetCount and TXTRecordGetItemAtIndex).
With shared connection references, you have to deallocate each reference based on (or copied from) the parent reference when you are done with it. I think the DNS-SD API does some reference counting (and parent/child tracking) when you pass copies of a shared reference to one of its functions. Again, see the documentation for details.
I tried not using shared connections at first and just passed ServiceRef down, causing it to be overwritten in the callbacks and my main loop to get confused. I imagine that if you don't use shared connections, you need to maintain a list of references that need further processing (and process each one), then destroy them when you're done. The shared-connection approach seemed much easier.
I'm coding a long-running, multi-threaded server in C++. It receives requests on a socket, does database lookups and returns responses on a socket.
The server reads various runtime information from a configuration file, including database connectivity parameters. I have to use a database abstraction class from the company's code library. I don't want to lazily instantiate the DB connection at the first search (due to complexity not shown here, and the need to exit with an error at startup if the DB connection cannot be made).
My problem is how to get the database connection information down into the search class without doing any number of "ugly" or bad OOP things that would technically work. I want to learn how to do this right way.
Is there a good design pattern for doing this? Should I be using the "Parameterize from Above" pattern? Am I missing some simpler Composition pattern?
// Read config file.
// Open DB connection using config values.
Server::process_request(string request, string &response) {
try {
Process process(request);
if (process.do_parse(response)) {
return REQ_OK;
} else {
// handle error
}
} catch (...) {
// handle exceptions
}
}
class Process : public GenericRequest {
public:
    Process(string *input) : generic_process(input) {};
    bool do_parse(string &output);
};
bool Process::do_parse(string &output) {
// Parse the input request.
Search search; // database search object
search.init( search parameters from parsing above );
output = format_response(search.get_results());
}
class Search {
// must use the Database library connection handle.
}
How do I get the DB connection from the Server class at top into the Search class instance at the bottom of the pseudo-code above?
It seems that the problem you are trying to solve is one of object dependencies, and it is well solved using dependency injection.
Your class Process requires an instance of Search, which must be configured somehow. Instead of having instances of Process allocate their own Search instance, it is easier to have them receive a ready-made one at construction time. The Process class then won't have to know about the Search configuration details, and an unnecessary dependency is avoided.
But then the problem cascades up to whichever object must create a Process, because now that object has to know the configuration details! In your situation, it is not really a problem, since the Server class is the one creating Process instances, and it happens to know the configuration details for Search.
However, a better solution is to implement a specialized class - for instance DBService, which will encapsulate the DB details acquired from the configuration step, and provide a method to get ready made Search instances. With this setup, no other objects will depend on the Search class for its construction and configuration. As an added benefit, you can easily implement and inject a DBService mockup object which will help you build test cases.
class DBSearch : public Search {
    /* implements/extends the Search interface/class for the DB */
};

class DBService {
    /* constructor reads configuration details somehow: command line, file */
public:
    Search *newSearch() {
        return new DBSearch(config); // search object specialized on the DB
    }
};
The code above somewhat illustrates the solution. Note that the newSearch method is not constrained to build only a Search instance; it may build any object specializing that class (for example the DBSearch class above). The dependency is thus almost removed from Process, which now only needs to know about the Search interface it actually manipulates.
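To make the wiring concrete, here is a minimal, self-contained sketch of the injection (hypothetical names; a real DBSearch would wrap the company's database library). The Server-side code owns the DBService, Process receives a ready-made Search at construction, and a mock Search could be injected the same way for tests:

```cpp
#include <memory>
#include <string>

// The interface Process actually depends on.
struct Search {
    virtual ~Search() = default;
    virtual std::string get_results(const std::string& query) = 0;
};

// Hypothetical DB-backed implementation (stands in for the company library).
class DBSearch : public Search {
    std::string mConnInfo;
public:
    explicit DBSearch(std::string connInfo) : mConnInfo(std::move(connInfo)) {}
    std::string get_results(const std::string& query) override {
        return "db[" + mConnInfo + "]:" + query;   // placeholder for a real lookup
    }
};

// Factory encapsulating configuration: the only place that knows DB details.
class DBService {
    std::string mConnInfo;
public:
    explicit DBService(std::string connInfo) : mConnInfo(std::move(connInfo)) {}
    std::unique_ptr<Search> newSearch() {
        return std::make_unique<DBSearch>(mConnInfo);
    }
};

// Process is handed a ready-made Search: no knowledge of configuration.
class Process {
    std::unique_ptr<Search> mSearch;
public:
    explicit Process(std::unique_ptr<Search> search) : mSearch(std::move(search)) {}
    std::string do_parse(const std::string& request) {
        return mSearch->get_results(request);
    }
};
```

Because Process only sees the Search interface, swapping the DB-backed implementation for a mock in a unit test requires no change to Process itself.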
The central element of good OOP design highlighted here is reducing coupling between objects, which reduces the amount of work needed when modifying or enhancing parts of the application.
Please look up dependency injection on SO for more information on that design pattern.