I'm trying to write a Bluetooth server as an MFC app. While I got it working as a console app with blocking sockets, I can't get it working using CAsyncSocket.
The error it returns is 10035 (WSAEWOULDBLOCK) as soon as I call Accept().
I could copy the code, but it's way too long, so I'll just outline the general idea:
- create and bind a regular socket and start listening, just like in the Microsoft SDK example app
- attach this socket to a CAsyncSocket object
- call Accept() (this is where the error occurs)
Any ideas how to get Bluetooth working with CAsyncSocket?
CAsyncSocket's OnAccept member function is called when you can Accept. Subclass CAsyncSocket and handle the OnAccept notification.
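A minimal sketch of that pattern, assuming plain MFC (the class name CListeningSocket is made up for illustration; whether it behaves identically over a Bluetooth socket I can't confirm):

#include <afxsock.h>

class CListeningSocket : public CAsyncSocket
{
public:
    virtual void OnAccept(int nErrorCode)
    {
        if (nErrorCode == 0)
        {
            CAsyncSocket connected;
            // Accept() succeeds here because MFC only fires OnAccept
            // when a connection is actually pending.
            if (Accept(connected))
            {
                // Hand the connected socket off to the rest of the app
                // (Detach() it or keep it alive beyond this scope).
            }
        }
        CAsyncSocket::OnAccept(nErrorCode);
    }
};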
Thanks, I've corrected that, but OnAccept(), OnConnect(), etc. were never executed, not even when called directly. It turned out that I had to delete all the temporary files the compiler and linker use to discover that my global shorthand function log() clashed with the log() defined in math.h, which caused some weird behavior.
See this thread for more details: http://www.codeguru.com/forum/showthread.php?t=339413
UPDATE: you can now download the finished app and the whole source code on the Brm Bluetooth Remote Control homepage!
I've tried to see if anyone else is having this problem, but I haven't found anything online yet. Does anything in this code look like I'm invoking Boost incorrectly?
This code works when I am logged into the machine that is starting the TCP server, but fails when no one is logged in. I stripped the code down to only look at the boost asio logic.
//create _acceptor, which will eventually listen for incoming connections, asynchronously
_acceptor = boost::shared_ptr<tcp::acceptor>(new tcp::acceptor(*_io_service));
_acceptor->open(tcp::endpoint(tcp::v4(), _port).protocol());
_acceptor->set_option(tcp::acceptor::reuse_address(false));
//omitted logic find a port that is open
_acceptor->bind(tcp::endpoint(tcp::v4(), _port));
//omitted error handling logic if open port not found
//Start listening for incoming connections asynchronously.
_acceptor->listen();
sslSocketPtr ssl_socket(sslSocketPtr(new ssl::stream<ip::tcp::socket>(*_io_service, _sslContext)));
_acceptor->async_accept(ssl_socket->lowest_layer(),
    boost::bind(&TCPServer::handle_sslAccept, shared_from_this(),
        boost::asio::placeholders::error, ssl_socket));
When no one is logged into the machine, the ssl_socket constructor throws the exception: "static_mutex: Access is denied".
If I define BOOST_ASIO_ENABLE_OLD_SSL the code works correctly, but I think that may be contributing to another bug in my code. So I am trying to use the latest SSL logic from Boost.
Any help would be appreciated!
I'm going to assume from the symptoms that you run on Windows.
On Windows, static_mutex is implemented as a named (interprocess) mutex and gets "opened" using CreateMutexW:
If the mutex is a named mutex and the object existed before this function call, the return value is a handle to the existing object, GetLastError returns ERROR_ALREADY_EXISTS, bInitialOwner is ignored, and the calling thread is not granted ownership. However, if the caller has limited access rights, the function will fail with ERROR_ACCESS_DENIED and the caller should use the OpenMutex function.
As you can see you don't have the required permissions. However, you could still have this working if the mutex is always created by a privileged process. In that case you could modify the code that opens an existing named mutex with OpenMutex as the documentation hints.
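For illustration, a hedged sketch of the fallback pattern the documentation describes (the function and mutex name are hypothetical, and actually patching this into Boost's internals is left aside):

#include <windows.h>

// Try to create/open a named mutex; if the object already exists and we
// lack the access rights CreateMutexW demands, fall back to OpenMutexW
// requesting only the rights we actually need.
HANDLE AcquireNamedMutex(const wchar_t* name)
{
    HANDLE h = CreateMutexW(NULL, FALSE, name);
    if (h == NULL && GetLastError() == ERROR_ACCESS_DENIED)
        h = OpenMutexW(SYNCHRONIZE, FALSE, name);
    return h; // NULL if both attempts failed
}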
It's likely easier, though, to run the process under a user that has the required permissions.
I have a DLL written in C++ to be consumed by our clients (third-party developers) in their applications. The DLL has facilities to connect to our company's servers through Winsock and communicate with them using a specific protocol.
The outstanding task is to send a kind of farewell message to the active server when the application closes. But the only way I know of to learn about that event (the application closing) is the DLL_PROCESS_DETACH case in DllMain. I know it is not recommended; I read Raymond Chen's article about this and the MSDN documentation, but instead of cautions I need a solution.
Of course I tried to send the message from DllMain, but it seems that by that point the Winsock library has already been shut down, as all I got was the WSANOTINITIALISED (10093) error.
Also I tried to create a static finalizer like the following:
struct Finalization
{
    ~Finalization() {
        // sending the message
    }
};
static Finalization f;
without success either.
I feel that what I need is a trigger point of some kind to know when the process is about to terminate. The case of the library being used by multiple applications simultaneously can be ignored, as the library's specifics make that scenario meaningless.
What I am thinking about is that the library is bundled with an interface header that gets included in the customer's application. I could take advantage of that and place something in the header, a mutex or something like that.
Maybe it's worth mentioning that the previous version of the library was written in Delphi, and the parting message was sent from one of the finalization sections; it worked perfectly, perhaps just by chance.
Thank you in advance for your ideas.
Add to your DLL functions that initialise and finalise the DLL. Make it so that all consumers of the DLL call these functions. In the finalise function you can do whatever it is that you wish to do.
As you have discovered, leaving it to DllMain is no good. There's no way for you to escape that. DllMain is simply the wrong place to attempt socket communication.
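A minimal sketch of that approach; the exported names and SendFarewellToServer() are hypothetical:

#include <winsock2.h>

void SendFarewellToServer(); // hypothetical: sends the parting message

extern "C" __declspec(dllexport) int MyLibInitialize()
{
    WSADATA wsa;
    return WSAStartup(MAKEWORD(2, 2), &wsa); // 0 on success
}

extern "C" __declspec(dllexport) void MyLibFinalize()
{
    SendFarewellToServer(); // Winsock is still initialised at this point
    WSACleanup();
}

Clients call MyLibInitialize() at startup and MyLibFinalize() before exiting, so the farewell message goes out while Winsock is still alive.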
I am trying to make use of Borland's TClientSocket component in non-blocking mode inside a multithreaded C++ Windows application. I am creating multiple threads (classes derived from TThread), each of which creates its own TClientSocket object. I then assign member functions of the thread class to act as event handlers for the OnConnect, OnDisconnect and OnSocketError events of the socket.
The problem I am having is that whenever I call the TClientSocket::Open() function from within the TThread::Execute() function, the OnConnect event never fires. However, when I call the Open() function from the VCL thread before TThread::Execute() gets called, all of the events fire and I can use the thread-socket combination as I would like.
Now, I have not read anything in the documentation that says TClientSocket should not be used in non-blocking mode inside a thread, but it appears to me that there is perhaps something conceptually wrong in the way I am trying to use this class. Borland's documentation is quite poor on the subject, and these components have now been deprecated, so reliable information is hard to come by. Despite the deprecation I have to use them, as there is no alternative in the Builder 6 package I have.
Can anyone please advise me whether there is a right/wrong way to use TThread and a non-blocking TClientSocket in combination? I have never had problems using it from the VCL thread, and never had problems with TServerSocket before, and I really cannot understand why some events are not firing.
TClientSocket in non-blocking mode uses a hidden window internally to receive socket events. If you use a non-blocking TClientSocket in a TThread then you must implement a message loop inside of your TThread::Execute() method in order to dispatch those messages to the socket's window. Also, being window-based, that also means that the socket messages are sent to the thread that actually creates the socket window, so you have to make sure you are opening the TClientSocket from inside of your TThread::Execute() method.
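A rough sketch of what that Execute() might look like; TMyThread, the handler member, host and port are all assumptions, and this is untested against BCB6:

void __fastcall TMyThread::Execute()
{
    // Create the socket here so its hidden notification window belongs
    // to this thread, not to the VCL thread.
    TClientSocket *client = new TClientSocket(NULL);
    client->ClientType = ctNonBlocking;
    client->OnConnect = SocketConnect; // hypothetical handler method
    client->Host = "example.com";
    client->Port = 12345;
    client->Open();

    // Pump messages so the socket's hidden window receives its
    // notifications and the events actually fire.
    MSG msg;
    while (!Terminated && GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    delete client;
}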
BTW, BCB6 shipped with Indy 8, which is an alternative. You can also install an up-to-date version of Indy, or even another third-party library like ICS or Synapse.
I am using boost::asio for network programming and running into timing issues. The issue currently lies mostly with the client.
The protocol begins with the server returning a date-time string to the client, which reads it. Up to that point it works fine. But what I also want is to be able to write commands to the server, which then processes them. To accomplish this I use the io_service.post() function as shown below.
io_service.post(boost::bind(...)); // the bound function calls async_write()
For some reason the write attempt happens before the initial client/server communication, when the socket has not been created yet, and I get a bad socket descriptor error.
Now the io_service's run method is indeed called in another thread.
When I place a sleep(2) call before the post method, it works fine.
Is there a way to synchronize this, so that the socket is created before any posted calls are executed?
When creating the socket and establishing the connection using boost::asio, you can define a method to be called when these operations have either completed or failed. So, you should trigger your "posted call" in the success callback.
Relevant methods and classes are :
boost::asio::ip::tcp::resolver::async_resolve(...)
boost::asio::ip::tcp::socket::async_connect(...)
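A minimal sketch of that pattern, assuming a hypothetical Client class wrapping the socket and resolver (old io_service-era Boost.Asio API, to match the question):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <string>

using boost::asio::ip::tcp;

class Client {
public:
    Client(boost::asio::io_service& io) : socket_(io), resolver_(io) {}

    void start(const std::string& host, const std::string& port)
    {
        resolver_.async_resolve(tcp::resolver::query(host, port),
            boost::bind(&Client::handle_resolve, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::iterator));
    }

private:
    void handle_resolve(const boost::system::error_code& ec,
                        tcp::resolver::iterator endpoints)
    {
        if (!ec)
            socket_.async_connect(*endpoints,
                boost::bind(&Client::handle_connect, this,
                    boost::asio::placeholders::error));
    }

    void handle_connect(const boost::system::error_code& ec)
    {
        if (!ec) {
            // The socket is now open: only from here (or from work this
            // handler posts) is it safe to issue async_write().
        }
    }

    tcp::socket socket_;
    tcp::resolver resolver_;
};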
I think the link below will give you some help:
http://www.boost.org/doc/libs/1_42_0/doc/html/boost_asio/reference/io_service.html
I'm writing a DLL (on Windows, in MSVC++ 2008) which provides some functionality as an XML-RPC server. To implement the XML-RPC server I'm using the xmlrpc-c library.
I can start the XML-RPC server in several different ways. The most interesting ones are:
run method - this runs the XML-RPC server forever, so the DLL cannot regain control until the server is terminated.
runOnce method - this runs the XML-RPC server just long enough to process one RPC, and if there is no pending request it waits for one.
I can't keep control inside the DLL for a long time. I need to process some RPCs, give control back to the program that is using the DLL, and process further RPCs the next time the DLL gets control.
runOnce looks OK, but there is the possibility that there are no RPCs to process, in which case it blocks waiting for one. That is unacceptable.
There is also one exception:
runOnce aborts waiting for a connection request and returns immediately if the process receives a signal. Note that unless you have a handler for that signal, the signal will probably kill the whole process, so set up a signal handler — even one that does nothing — if you want to exploit this. But before Xmlrpc-c 1.06 (June 2006), signals have no effect — there is no way to make runOnce abort the wait and return.
Can I use that as a workaround to get control back to the DLL? Is it possible for the DLL to send a signal to itself? How does that work on Windows?
Or maybe there is some better solution to this issue?
Signals (of the kind that makes an Xmlrpc-c library call abort early) don't exist in Windows.
The best solution is to create a new thread for the server.
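For instance, a hedged sketch using xmlrpc-c's C++ Abyss server interface on a Windows worker thread; the registry setup is elided and the port number is an arbitrary assumption:

#include <windows.h>
#include <xmlrpc-c/registry.hpp>
#include <xmlrpc-c/server_abyss.hpp>

static DWORD WINAPI serverThreadProc(LPVOID param)
{
    // run() blocks forever, but only on this worker thread.
    static_cast<xmlrpc_c::serverAbyss*>(param)->run();
    return 0;
}

void startServer(xmlrpc_c::registry& registry)
{
    // Kept alive for the life of the process; port 8080 is arbitrary.
    static xmlrpc_c::serverAbyss server(
        xmlrpc_c::serverAbyss::constrOpt()
            .registryP(&registry)
            .portNumber(8080));
    CreateThread(NULL, 0, serverThreadProc, &server, 0, NULL);
    // Control returns to the DLL immediately; RPCs are handled on the
    // worker thread.
}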