The following code used to work:
cl_context_properties Properties [] = {
CL_GL_CONTEXT_KHR, (cl_context_properties) glXGetCurrentContext(),
CL_GLX_DISPLAY_KHR, (cl_context_properties) glXGetCurrentDisplay(),
CL_CONTEXT_PLATFORM, (cl_context_properties) CL.Platform,
0
};
CL.Context = clCreateContext(Properties, 1, CL.Device, 0, 0, &err);
if (err < 0) printf("Context error %i!\n", err);
but now prints
Context error -1000!
If I comment out
//CL_GL_CONTEXT_KHR, (cl_context_properties) glXGetCurrentContext(),
//CL_GLX_DISPLAY_KHR, (cl_context_properties) glXGetCurrentDisplay(),
then it works fine. So, it would seem the issue is with the glX calls.
Now, what has changed is that I have upgraded X on my machine. I run AMD Catalyst, and the upgrade cost me my display; after purging and reinstalling fglrx I regained it, but I suspect something got broken in the process. As an aside, I used to play Zandronum on this machine, but since the upgrade any attempt to play yields the following error:
zandronum: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.0.
I don't think this is a coincidence.
However, I'm not sure how to proceed in debugging. I can print the results of the glX calls in gdb:
(gdb) p Properties
$1 = {8200, 8519632, 8202, 6308672, 4228, 140737247522016, 0}
but I don't know how to verify any of it, or get further information on the values these calls are returning. What steps can I take to get to the root of the problem? Am I even looking in the right place?
I am trying to replicate a real-time application on a Windows computer to make it easier to debug and change, but I ran into an issue with delayed ACK. I have already disabled Nagle and confirmed that it improves the speed a bit. When sending lots of small packets, Windows doesn't ACK right away but delays it by 200 ms. Doing more research about it, I came across this. The problem with changing the registry value is that it affects the whole system rather than just the application I am working with. Is there any way to disable delayed ACK on Windows per socket, like TCP_QUICKACK from Linux using setsockopt? I tried hard-coding option 12, but got WSAEINVAL from WSAGetLastError.
I saw some devs on GitHub mention using SIO_TCP_SET_ACK_FREQUENCY, but I didn't see any example of how to actually use it.
So I tried the following:
#define SIO_TCP_SET_ACK_FREQUENCY _WSAIOW(IOC_VENDOR,23)
result = WSAIoctl(sock, SIO_TCP_SET_ACK_FREQUENCY, 0, 0, info, sizeof(info), &bytes, 0, 0);
and I got WSAEFAULT as the error code. Please help!
I've seen several references online suggesting that TCP_QUICKACK may actually be supported by Winsock via setsockopt() (option 12), even though it is NOT documented or officially confirmed anywhere.
But regarding your actual question: your use of SIO_TCP_SET_ACK_FREQUENCY is failing because you are not providing an input buffer to WSAIoctl() containing the actual frequency value. Try something like this:
int freq = 1; // can be 1..255, default is 2
result = WSAIoctl(sock, SIO_TCP_SET_ACK_FREQUENCY, &freq, sizeof(freq), NULL, 0, &bytes, NULL, NULL);
Note that SIO_TCP_SET_ACK_FREQUENCY is available in Windows 7 / Server 2008 R2 and later.
The challenge is to determine whether the wanted "interface to use" for the multicast socket is unavailable.
Currently an ACE_SOCK_Dgram_Mcast is created and join() is called with explicit interface selection and reuse_addr == 1 (see note 1).
I would have expected join() to return -1 (error) or set some errno. But it returns 0, even when all Ethernet adapters are unplugged (and the loopback adapter is deactivated).
I had the idea to call open() with similar parameters before doing join(), but this also works perfectly fine (return value 0).
Can anyone explain this? I was expecting join()/open() to return a failure when the wanted interface for multicast is not available; and to me, an unplugged interface means "not available".
What am I missing?
By the way: "ipconfig /all" in the Windows terminal lists all devices as "unplugged or not available", so the wanted interface (specified by IP address) is not listed.
Setup:
C++ with the ACE 6.3.1 library; Win7; built as an x86_32 binary, but will later also be deployed on Ubuntu 14.04 (Linux)
note 1:
int join (const ACE_INET_Addr &mcast_addr,
int reuse_addr = 1, // (see above)
const ACE_TCHAR *net_if = 0);
I am trying to initialise a sound port from pjsip and pjsua with the standard pjmedia_snd_port_create, but the result is always unsuccessful.
pj_caching_pool_init(&cp, &pj_pool_factory_default_policy, 0);
pool = pj_pool_create(&cp.factory,
"pool1",
4000,
4000,
NULL);
pjmedia_snd_port *snd_port1 = NULL;
status = pjmedia_snd_port_create(pool, id1, id1, clock_rate,
channel_count, samples_per_frame,
bits_per_sample,
0, &snd_port1);
My device id1 is 0, as I got it from the audio device manager. I've tried -1 for the defaults, but it always fails. I have an endpoint created with the pjsua2 API from a C++ class; the lib is OK and running, and I can create conference bridges, but the sound port fails on me. A bit of a hint would be great.
I've fixed it with an initialising loop. I guess it has nothing to do with the setup; what I actually needed was to register all of my playback and recording devices from my hardware.
I am trying to figure out where I made a mistake in my C++ Poco code. On Ubuntu 14 the program runs correctly, but when recompiled for ARM via gnueabi it just crashes with SIGSEGV.
This is the trace at the point where it falls:
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("192.168.2.101")}, 16) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x6502a8c4} ---
+++ killed by SIGSEGV +++
And this is the code where it falls (it should connect to the TCP server):
this->address = SocketAddress(this->host, (uint16_t)this->port);
this->socket = StreamSocket(this->address); // !HERE
Note that I am catching all exceptions (like ECONNREFUSED), and the program fails cleanly when it can't connect. It is when it does connect to the server side that it falls.
When trying to start it under valgrind, it aborts with the error below. I have no idea what a "shadow memory range" is:
==4929== Shadow memory range interleaves with an existing memory mapping. ASan cannot proceed correctly. ABORTING.
http://pastebin.com/Ky4RynQc here is full log
Thank you
I don't know why, but this compiled badly on Ubuntu; when compiled on Fedora (same script, same build settings, same GNU toolchain), it works.
Thank you guys for your comments.
Using Win8.1 and Visual Studio 2013, I've tested every example of Windows Registered I/O that I can find (about 5). All result in error 10045 on RIOCreateRequestQueue(), as shown below for one of them.
c:>rioServerTest.exe
InitialiseRio Start
InitialiseRio End
CreateCompletionQueue Start
CreateCompletionQueue End
CreateRequestQueue start
RIOCreateRequestQueue Error: 10045
Related code is :
void *pContext = 0;
printf("CreateRequestQueue start\n");
g_requestQueue = g_rio.RIOCreateRequestQueue(
g_socket, // Socket
(ULONG) 10, // MaxOutstandingReceive
(ULONG) 1, // MaxReceiveDataBuffers
(ULONG) 10, // MaxOutstandingSend
(ULONG) 1, // MaxSendDataBuffers
g_completionQueue, // ReceiveCQ
g_completionQueue, // SendCQ
pContext); // SocketContext
if (g_requestQueue == RIO_INVALID_RQ) {
printf_s("RIOCreateRequestQueue Error: %d\n", GetLastError());
exit(1);
}
printf("CreateRequestQueue End\n");
According to the literature that I have read, Registered I/O is intended to work with Windows 8 and later and Windows Server 2012 and later.
Can anyone explain to me via an example how to get this to work on Win8.1? TIA
10045 is WSAEOPNOTSUPP, the description of which is: "Operation not supported. The attempted operation is not supported for the type of object referenced. Usually this occurs when a socket descriptor to a socket that cannot support this operation is trying to accept a connection on a datagram socket."
So it's likely that the code we actually need to see is where you create your socket.
Your socket creation code should look something like this:
socket = ::WSASocket(
AF_INET,
SOCK_DGRAM,
IPPROTO_UDP,
NULL,
0,
WSA_FLAG_REGISTERED_IO);
I have some example articles (including a whole suite of RIO UDP server designs with complete source code) here; all of these run on all operating systems that RIO supports.