I am using turnserver (http://code.google.com/p/rfc5766-turn-server/) with the --alternate-server option for relaying media streams, and the pjnath library on the client side.
But when the TURN server returns a 300 error code (Try Alternate) in response to an Allocate request, pjnath simply treats it as an error and doesn't connect to the alternate server.
So my questions are: does pjnath support the ALTERNATE-SERVER option? Does it try to connect to the alternate server on a 300 error code?
Has anybody had a similar problem with pjnath? How can I make pjnath connect to the alternate server?
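For context, the relay is started roughly like this (the port and address here are placeholders, not my real configuration):
turnserver --listening-port 3478 --alternate-server 203.0.113.20:3478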
Any help will be appreciated.
Checking version 2.2.1, PJNATH does not support the 300 ALTERNATE-SERVER option.
However, rfc5766-turn-server supports two other means of spreading load:
DNS SRV-based load balancing (sketched below)
a network load-balancer server
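The general idea behind the DNS SRV approach is to publish several relay hosts under one service name and let the client's resolver pick among them. A minimal sketch of what such records might look like, with made-up hostnames and weights:
_turn._udp.example.com. 86400 IN SRV 0 50 3478 turn1.example.com.
_turn._udp.example.com. 86400 IN SRV 0 50 3478 turn2.example.com.
Whether the client side actually performs SRV lookups is a separate question; with pjnath you would have to verify that its DNS resolver is configured to do so.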
I have been trying to connect two machines, both virtual machines: one Xubuntu and the other Ubuntu. I'm also very new to OpenDDS, but the best way to do it, or so it seems, is to use the .ini files.
However, when I try to connect, I seem to fail at changing the discovery server, since the default is localhost:12345. Can somebody help me with that so I can configure the file properly?
I have tried using dds_udp_conf.ini and the TCP one, but it doesn't seem to work.
Also, I tried using unicast, but that failed too.
The .ini file:
[common]
DCPSDebugLevel=0
DCPSInfoRepo=corbaloc::localhost::12345/DCPSInfoRepo
DCPSGlobalTransportConfig=config1
[config/config1]
transports=udp1
[transport/udp1]
transport_type=udp
And I use the syntax:
./publisher -DCPSConfigFile conf.ini
The publisher and subscriber are supposed to connect, but the publisher prints some error messages and nothing happens in the other VM.
I seem to fail because I can't change the localhost discovery configuration.
When I try to run the server with a parameter other than localhost:12345, it prints error messages too.
It's unclear to me where you're running the InfoRepo if both the publisher and subscriber are told the InfoRepo is running at localhost. Regardless, I would recommend using RTPS discovery and transport instead. It's easy to set up because the participants can find each other over multicast without an InfoRepo. This config is the simplest way to use RTPS with OpenDDS:
[common]
DCPSDefaultDiscovery=DEFAULT_RTPS
DCPSGlobalTransportConfig=$file
[transport/the_rtps_transport]
transport_type=rtps_udp
Just give this to both programs and they should find each other. If not, there's probably something wrong with how networking is set up on your VMs.
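For example, assuming the config above is saved as rtps.ini (the file name is just an example, and the subscriber binary is assumed to follow the same pattern as the publisher), both sides would be started the same way as in your question:
./publisher -DCPSConfigFile rtps.ini
./subscriber -DCPSConfigFile rtps.ini
Since discovery happens over multicast, no address of the other machine has to appear in the file.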
I am working on a project that needs to send a periodic keep-alive message to an HTTPS server.
Because of security concerns, we need to use a minimal number of ports (blocking as many unused ports as we can).
I am using the libcurl easy interface from C++ to send the HTTPS requests on Linux.
I have tried to reuse the same curl handle (CURL object) and set CURLOPT_LOCALPORT to a fixed port number. The first request is OK, but on the second one libcurl's verbose output says the address is already in use.
However, when I comment out the CURLOPT_LOCALPORT setting, the second connection works as well, and with VERBOSE set to 1 I can see "Re-using existing connection" printed, which is missing in the version that sets the local port.
Checking with netstat on Linux, I can see it is using the same port.
I cannot figure out why setting the local port makes it fail.
I have also tried closing the connection with curl_easy_cleanup, but because of the TCP TIME_WAIT state the port cannot be reused for a while, and that's not what I want.
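The relevant part of the code looks roughly like this (simplified reconstruction; the URL and port number are placeholders):
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/alive"); // placeholder URL
    curl_easy_setopt(curl, CURLOPT_LOCALPORT, 50000L);                // pin the local source port
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);

    curl_easy_perform(curl);   // first request: succeeds
    curl_easy_perform(curl);   // second request: "Address already in use"

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}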
Could anyone provide a solution or a suggestion? Thanks a lot.
Edit
My reason for using one port is to avoid opening and closing connections too often.
Because of the security issue ...
There is no security issue. You need to get over this phobia about using multiple local outbound ports. There is zero security benefit in using fewer, or in constraining them in any way.
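If the underlying goal is just to avoid constantly opening and closing connections, a minimal sketch of what I mean (the URL and interval are hypothetical) is to keep one easy handle alive and not set CURLOPT_LOCALPORT at all; libcurl then reuses the established connection, which also means the same local port stays in use, exactly as you observed:
#include <curl/curl.h>
#include <unistd.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/alive"); // placeholder URL
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
    // No CURLOPT_LOCALPORT: the kernel picks a source port once, and libcurl
    // keeps reusing the established connection for every later request.

    for (int i = 0; i < 5; ++i) {  // a few keep-alive rounds for illustration
        curl_easy_perform(curl);   // "Re-using existing connection" after the first call
        sleep(30);                 // placeholder keep-alive interval
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}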
Currently, we are running into a timeout problem. Our application is based on Jetty and uses Zeus for load balancing. maxIdleTime is set to its default value of 30000 in jetty.xml. When a request/connection exceeds 30 seconds, the connection status changes to TIME_WAIT, but we get an HTTP 500 Internal Error on the browser side.
I guess the HTTP 500 error comes from Zeus, but I want to confirm this: how does Zeus handle the closed connection?
OR
Does the Jetty service send the 500 to Zeus? If so, how can I confirm this?
The surefire way to work out what is happening here is to sniff the packets between the load balancer and the Jetty server with something like Ethereal or tcpdump, and to use the network tooling in something like Firebug or the Chrome developer tools to see what is happening on the browser side of the connection. You can also turn on debug logging on the Jetty side to see what it is doing specifically.
Regardless, if you're hitting your timeout settings then you need to either increase them or decide on a proper strategy for dealing with them, assuming you don't want that 500 error in the browser.
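If raising the timeout is the route you take, a sketch of what that might look like in jetty.xml (assuming a Jetty 7/8-style SelectChannelConnector; the class name, port, and value depend on your Jetty version and setup):
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <Set name="maxIdleTime">60000</Set>   <!-- raised from the 30000 default -->
    </New>
  </Arg>
</Call>
Keep in mind that the Zeus side has its own idle/response timeouts, and those need to be consistent with whatever value you choose here.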
Is it possible to create an HTTP tunnel in Delphi or C++?
My application connects to several HTTP servers that do not belong to the company I work for. Because of that, our users need to open firewall ports to allow those connections. I thought about creating a tunnel at my company and redirecting the HTTP requests made by my application through that tunnel. This way, my clients will only need to open one port and the tunnel will handle all requests. All requests are made with POST or GET using Indy components.
EDIT: I can't use an HTTP proxy. Some of my users already have their own HTTP proxy, and it is impossible to connect to two different proxy servers at the same time.
Here is a free component; it is kind of old, but it works and you can take inspiration from it:
TGpHTTPProxy
Or you can try these samples:
https://sites.google.com/site/delphibasics/home/delphibasicssnippets/examplesocks4proxybyaphex
https://sites.google.com/site/delphibasics/home/delphibasicssnippets/multi-threadedhttpproxyserver
As Warren P. and Rob Kennedy suggest, you really just need a proxy server. Don't write a tunnel yourself; it's huge overkill and far from easy (writing a robust socket application is more time-consuming than it first appears).
If you want something dead simple, look at datapipe.c or the netcat (nc) Unix command. SSH can create tunnels too (look in the OpenSSH and PuTTY docs).
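For example, an SSH local port forward through a host at your company can act as such a pipe; the hostnames and ports below are placeholders:
ssh -N -L 8080:destination.example.com:80 user@tunnel.mycompany.example
The application then sends its requests to localhost:8080, and the only outbound port the user's firewall has to allow is the SSH port to your company's host.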
Here is a free open source HTTP-Tunnel and UDP-Tunnel: http://barbatunnel.codeplex.com/
I'm not really up to speed on exactly what role(s) today's proxy servers can play, and I'm still learning, so go easy on me :-) I have a client/server system written with a homegrown protocol, and I need to enhance the client side to negotiate its way out of a proxy environment.
I have an existing client and server system written in C and C++ for speed, with a small amount of MFC in the client to handle the user interface. I wrote both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows for everything; not my choice), sticking to Berkeley sockets via wsock32 for efficiency. The clients connect to the server through a nonstandard port (using port 80 is an option for getting out of some environments, even though the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the client's participation in real-time conferences.
Our customer base is expanding into all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 using secure sockets, which lets the protocol pass through many environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server, and my direct connections don't make it through. My old-school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly caching popular material locally for faster access, and also allowing the IT staff to blacklist certain destination sites. Customers are complaining that my software doesn't recognize and easily navigate its way through their proxy environments, but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that, packets can come from either side at any time; it's basically your typical custom client/server system for a specific niche.
My first reaction is "why can't they just add my server's addresses to their whitelist?", but if there is a programmatic way to get through without requiring help from their IT staff, it is politically better and arguably a better solution anyway. Plus, maybe I'm still not understanding the role and purpose that proxy servers and environments have grown into these days.
My first attempt at a solution was to use WinInet and its various proxy capabilities to establish a connection over port 80 to my non-standard protocol server (which knows enough to recognize a simple HTTP-looking GET request and answer it with a simple HTTP response page, to get around environments that employ initial packet sniffing / DPI). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and hoped to use that in place of my software's existing SOCKET connection, ideally without changing much else on the client side. It initially seemed to be my solution, but on closer inspection it appears the OS gets first chance at the data received on this socket: when I am notified of events via a standard select(...) call on the socket and query the amount of data available via ioctlsocket, the call succeeds but reports 0 bytes available, the reads don't work, and it goes downhill from there.
Can someone point me to a client-side library (commercial is fine) that will let me get past these proxy server environments with as little help from users and IT staff as possible? From what I've read this has grown past SOCKS, and I figure someone must have solved this problem before me.
Thanks for reading my long-winded question,
Ripred
If your software can make an SSL connection on port 443, then you are 99% of the way there.
Typically HTTP proxies are set up to proxy SSL-on-443 (for the purposes of HTTPS). You just need to teach your software to use the HTTP proxy. Check the HTTP RFCs for the full details, but the Cliffs Notes version is:
Connect to the HTTP proxy on the proxy port;
Send to the proxy:
CONNECT your.real.server:443 HTTP/1.1\r\n
Host: your.real.server:443\r\n
User-Agent: YourSoftware/1.234\r\n
\r\n
Then parse the proxy response, which will start with an HTTP status line, followed by HTTP headers and a blank line. You'll then be talking to your destination (if the status code indicated success, anyway), and can start talking SSL.
In many corporate environments you'll have to authenticate with the proxy - this is almost always HTTP Basic Authentication, which is pretty easy - again, see the RFCs.
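A rough sketch of that handshake over plain Berkeley sockets (POSIX calls here; Winsock is analogous). Error handling is minimal, the proxy authentication header is only shown as a comment, and the TLS handoff after the tunnel is up is left to whatever SSL layer you already use:
#include <string>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

// Returns a socket tunnelled to host:443 through the proxy, or -1 on failure.
// Once this succeeds, the caller starts its SSL handshake on the returned socket.
int open_via_proxy(const char *proxy_host, const char *proxy_port, const char *host)
{
    addrinfo hints{}, *res = nullptr;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(proxy_host, proxy_port, &hints, &res) != 0)
        return -1;

    int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) != 0) {
        freeaddrinfo(res);
        if (s >= 0) close(s);
        return -1;
    }
    freeaddrinfo(res);

    std::string req = std::string("CONNECT ") + host + ":443 HTTP/1.1\r\n"
                      "Host: " + host + ":443\r\n"
                      "User-Agent: YourSoftware/1.234\r\n"
                      // "Proxy-Authorization: Basic <base64(user:pass)>\r\n"  // if the proxy requires auth
                      "\r\n";
    send(s, req.c_str(), req.size(), 0);

    // Read until the blank line that ends the proxy's response headers.
    std::string resp;
    char buf[512];
    while (resp.find("\r\n\r\n") == std::string::npos) {
        ssize_t n = recv(s, buf, sizeof(buf), 0);
        if (n <= 0) { close(s); return -1; }
        resp.append(buf, n);
    }

    // The first line looks like "HTTP/1.1 200 Connection established";
    // anything other than a 200 means the tunnel was refused.
    bool ok = resp.compare(0, 5, "HTTP/") == 0 && resp.find(" 200") == 8;
    if (!ok) { close(s); return -1; }
    return s;  // tunnel is up; start SSL on this socket
}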