FTP NLST results in '425: Can't open data connection for transfer' only on some client machines - c++

I'm currently running a FileZilla FTP server on a network. My issue is that on seemingly random machines, when the user navigates to a directory (which they are able to do) and attempts to ls (i.e. a data transfer), their end hangs waiting for a response, while the server reports the 425: Can't open data connection for transfer error mentioned above. The result varies depending on the client machine used: some (local or remote) are able to proceed, while others get stuck here. I understand that this is because simple FTP commands like CWD travel over the control connection on port 21, whereas FTP data transfers use a separate connection on some other port, which may in turn be blocked by a firewall somewhere along the chain. My question is: how do I account for these varying ports (if this truly is the issue), since as best I know they could be anything above 1024?
My end goal with this project is to implement a very simple FTP solution, ideally using WinINet, however, so far I've run into the same problem:
BOOL CWebFileFinder::FindFile(const CString& URL)
{
    BOOL More = FALSE;
    CString ServerName;
    CString strObject;
    INTERNET_PORT nPort;
    DWORD dwServiceType = AFX_INET_SERVICE_FTP;

    try
    {
        if (AfxParseURL(URL, dwServiceType, ServerName, strObject, nPort))
        {
            m_Connection = m_Session.GetFtpConnection(ServerName, m_Username, m_Password, nPort/*, true*/); // passing true (passive) results in FindFile still failing
            if (m_Connection)
            {
                m_Connection->SetCurrentDirectory(_T("sms")); // CDs into this dir successfully
                m_Finder = new CFtpFileFind(m_Connection);
                if (m_Finder)
                {
                    More = m_Finder->FindFile(_T("*.*")); // hangs here
                }
            }
        }
    }
    catch (CException* pEx)
    {
        CString str;
        LPTSTR error = str.GetBuffer(255);
        pEx->GetErrorMessage(error, 255);
        pEx->Delete();
        str.ReleaseBuffer();
    }
    return More;
}
As far as I can see, either I need to open this data port somehow prior to the LIST, or find the firewalls blocking these ports and create a rule to allow them (What ports does Wininet listen on for Active FTP data connection?). Of course I could also be completely off base. Any insights at all would be greatly appreciated!
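For what it's worth, if the problem really were active-mode data ports, forcing passive mode with plain WinInet calls would be one way to test that theory. A minimal sketch (the "sms" directory comes from the code above; the server name and credentials are placeholders, and this is only an illustration, not the project code):

#include <windows.h>
#include <wininet.h>
#include <cstdio>
#pragma comment(lib, "wininet.lib")

// Sketch: list an FTP directory using passive mode, so the client only ever
// makes outbound connections and the server's passive port range can be
// opened in the server-side firewall instead of arbitrary client-side ports.
bool ListFtpDirectoryPassive()
{
    HINTERNET hInet = InternetOpenW(L"ftp-test", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    if (!hInet) return false;

    HINTERNET hFtp = InternetConnectW(hInet, L"my.ftp.server" /* placeholder */,
                                      INTERNET_DEFAULT_FTP_PORT,
                                      L"user", L"password",
                                      INTERNET_SERVICE_FTP,
                                      INTERNET_FLAG_PASSIVE,   // ask for PASV data connections
                                      0);
    bool ok = false;
    if (hFtp && FtpSetCurrentDirectoryW(hFtp, L"sms"))
    {
        WIN32_FIND_DATAW fd;
        HINTERNET hFind = FtpFindFirstFileW(hFtp, L"*.*", &fd, 0, 0);
        if (hFind)
        {
            ok = true;
            do { wprintf(L"%s\n", fd.cFileName); } while (InternetFindNextFileW(hFind, &fd));
            InternetCloseHandle(hFind);
        }
    }
    if (hFtp) InternetCloseHandle(hFtp);
    InternetCloseHandle(hInet);
    return ok;
}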

Your FTP server seems to require an encrypted connection (TLS/SSL).
WinInet does not support encrypted FTP.
See C++/Win32 The basics of FTP security and using SSL.

Related

Boost Asio SSL not able to receive data for 2nd time onwards (1st time OK)

I'm working with Boost Asio and Boost Beast on a simple RESTful server. For plain HTTP over a TCP socket it works perfectly; I put it under load test with JMeter and everything works fine.
I then tried to add the SSL socket. I set up the 'ssl::context' and also called 'async_handshake()' (the additional steps for SSL compared to a plain socket). It works the first time only: the client can connect to me (the server) and I am also able to receive the data via 'boost::beast::http::async_read()'.
Because this is RESTful, the connection is dropped after each request and response. I call 'SSL_Socket.shutdown()' followed by 'SSL_Socket.lowest_layer().close()' to close the SSL socket.
On the next incoming request, the client is able to connect to me (the server). I call 'SSL_Socket.async_handshake()' and then 'boost::beast::http::async_read()', but this time I am not able to receive any data, even though the connection is established successfully.
Does anyone have a clue what I missed?
Thank you very much!
If you want to reuse the stream instance, you need to manipulate SSL_Socket.native_handle() with OpenSSL library functions. After the SSL shutdown, call SSL_clear() before starting a new SSL handshake.
Please read the linked documentation (and pay attention to the warnings) for details:
SSL_clear() resets the SSL object to allow for another connection. The reset operation however keeps several settings of the last sessions (some of these settings were made automatically during the last handshake)
[...]
WARNINGS
SSL_clear() resets the SSL object to allow for another connection. The reset operation however keeps several settings of the last sessions (some of these settings were made automatically during the last handshake). It only makes sense for a new connection with the exact same peer that shares these settings, and may fail if that peer changes its settings between connections. Use the sequence SSL_get_session(3); SSL_new(3); SSL_set_session(3); SSL_free(3) instead to avoid such failures (or simply SSL_free(3); SSL_new(3) if session reuse is not desired).
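Applied to the stream from the question, that reset might look roughly like the sketch below (SSL_Socket is assumed to be the ssl::stream from the question; this is an illustration, not the asker's code):

#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>

// Sketch: reset the underlying OpenSSL object between connections so the
// same ssl::stream can run another handshake (only safe with the same peer,
// per the warning above).
template <class SslStream>
void reset_for_next_connection(SslStream& SSL_Socket)
{
    // After SSL_Socket.shutdown() / lowest_layer().close() for the previous client:
    if (SSL_clear(SSL_Socket.native_handle()) != 1)
    {
        // SSL_clear failed; the safer route is to destroy and recreate the
        // stream object (see the boost::optional / emplace answer further down).
    }
    // ...then accept the next TCP connection on SSL_Socket.lowest_layer()
    // and call SSL_Socket.async_handshake(...) again.
}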
Regarding the SSL shutdown issue, the linked page explains how the Boost.Asio SSL shutdown works.
In Boost.Asio, the shutdown() operation is considered complete upon error or if the party has sent and received a close_notify message.
If you look at the Boost.Asio (1.68) source code in boost\asio\ssl\detail\impl\engine.ipp, it shows how Boost.Asio does the SSL shutdown; stream_truncated happens when there is still data to be read, or when the SSL shutdown expected from the peer was not received.
int engine::do_shutdown(void*, std::size_t)
{
  int result = ::SSL_shutdown(ssl_);
  if (result == 0)
    result = ::SSL_shutdown(ssl_);
  return result;
}

const boost::system::error_code& engine::map_error_code(
    boost::system::error_code& ec) const
{
  // ...
  // If there's data yet to be read, it's an error.
  if (BIO_wpending(ext_bio_))
  {
    ec = boost::asio::ssl::error::stream_truncated;
    return ec;
  }
  // ...
  // Otherwise, the peer should have negotiated a proper shutdown.
  if ((::SSL_get_shutdown(ssl_) & SSL_RECEIVED_SHUTDOWN) == 0)
  {
    ec = boost::asio::ssl::error::stream_truncated;
  }
  // ...
}
You can also see that the Boost.Asio SSL shutdown routine may call OpenSSL's SSL_shutdown() twice if the first call returns 0. The OpenSSL documentation allows this, but advises calling SSL_read() to complete a bidirectional shutdown when the first SSL_shutdown() returns 0.
Read the linked documentation for details.
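A rough illustration of that advice (a sketch only, blocking I/O assumed and error handling omitted):

#include <openssl/ssl.h>

// Sketch: bidirectional TLS shutdown as suggested by the OpenSSL docs.
void bidirectional_shutdown(SSL* ssl)
{
    int rc = SSL_shutdown(ssl);        // sends our close_notify
    if (rc == 0)
    {
        // Our close_notify went out but the peer's has not arrived yet;
        // keep reading until it does (SSL_read returns <= 0 at that point).
        char buf[256];
        while (SSL_read(ssl, buf, sizeof(buf)) > 0)
            ;                          // discard any remaining application data
        (void)SSL_shutdown(ssl);       // second call completes the shutdown
    }
}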
I had a similar issue: from the second time onward my asynchronous accept always failed with "session id uninitialized".
I solved this problem by calling SSL_CTX_set_session_id_context on the context, or alternatively by setting the context cache mode to SSL_SESS_CACHE_OFF and adding SSL_OP_NO_TICKET to the context options.
That is my two cents on someone else's problem.
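In Asio terms, those two variants might look roughly like this (a sketch; the context variable name is assumed):

#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>

// Variant 1: give the server context an explicit session id context.
void set_session_id_context(boost::asio::ssl::context& ctx)
{
    static const unsigned char sid[] = "my-server";   // any application-specific bytes
    SSL_CTX_set_session_id_context(ctx.native_handle(), sid, sizeof(sid) - 1);
}

// Variant 2: disable session caching and session tickets altogether.
void disable_session_reuse(boost::asio::ssl::context& ctx)
{
    SSL_CTX_set_session_cache_mode(ctx.native_handle(), SSL_SESS_CACHE_OFF);
    SSL_CTX_set_options(ctx.native_handle(), SSL_OP_NO_TICKET);
}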
I managed to resolve the problem by wrapping the 'ssl::stream' socket in a 'boost::optional' and calling 'SSL_Socket.emplace(io_context, oSSLContext)' each time the socket is shut down and closed.
Big credit to sehe at 'Can't implement boost::asio::ssl::stream<boost::asio::ip::tcp::socket> reconnect to server'. His statement "the purest solution would be to not reuse the stream/socket objects" rocks! Saved my time.
Thanks.
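A minimal sketch of that emplace pattern (the io_context, ssl context and member names here are assumptions, not the asker's real code):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/optional.hpp>

namespace asio = boost::asio;
namespace ssl  = boost::asio::ssl;

struct Session
{
    asio::io_context& io;
    ssl::context&     ctx;
    // Wrapping the stream in boost::optional lets us destroy and rebuild it
    // for every connection instead of reusing the same SSL object.
    boost::optional<ssl::stream<asio::ip::tcp::socket>> SSL_Socket;

    Session(asio::io_context& io_, ssl::context& ctx_) : io(io_), ctx(ctx_)
    {
        SSL_Socket.emplace(io, ctx);
    }

    void on_connection_closed()
    {
        // After shutdown()/close() of the previous connection, throw the old
        // stream away and construct a fresh one in place.
        SSL_Socket.emplace(io, ctx);
        // ...now accept the next client into SSL_Socket->lowest_layer() and
        // call SSL_Socket->async_handshake(ssl::stream_base::server, ...).
    }
};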

SSH local port forwarding using libssh

Problem
I am trying to do local port forwarding using libssh with the libssh C++ wrapper. My intention is to forward port localhost:3306 on a server to localhost:3307 on my machine via SSH, so that I can connect with MySQL to localhost:3307.
void ssh_session::forward(){
    ssh::Channel channel(this->session);
    // remotehost, remoteport, localhost, localport
    channel.openForward("localhost", 3306, "localhost", 3307);
    std::cout << "Channel is " << (channel.isOpen() ? "open!" : "closed!") << std::endl;
}
with session in the constructor of ssh::Channel being of type ssh::Session.
The code above prints Channel is open!. If I try to connect to localhost:3307 using the MySQL Connector/C++ I get
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (61)
Observations
If I use the shell command $ ssh -L 3307:localhost:3306 me@myserver.com everything works fine and I can connect.
If I use the ssh::Session session passed to the constructor, or the ssh::Channel channel, to execute remote shell commands, everything works; the session itself is therefore fine!
The documentation of libssh (which is total crap for the C++ wrapper libsshpp.hpp since a lot of public member functions are not documented and you have to look into the source code) shows that ssh::Channel::openForward() is a wrapper for the C function ssh_channel_open_forward()
The documentation of ssh_channel_open_forward() states
Warning
This function does not bind the local port and does not automatically forward the content of a socket to the channel. You still have to use channel_read and channel_write for this.
I think that could be the cause of the problem. I have no trouble reading from and writing to the ssh::Channel, but that is not how the MySQL Connector/C++ works.
Question
How can I achieve the same behaviour produced by the common shell command
$ ssh -L 3307:localhost:3306 me@myserver.com
using libssh?
Warning
This function does not bind the local port and does not automatically forward the content of a socket to the channel. You still have to use channel_read and channel_write for this.
This is telling you that you need to write your own local socket code. Unfortunately, it doesn't do it for you.
The simplest implementation would be to bind a local socket and use ssh_select to listen for events (e.g. a new connection to accept, socket or channel events). You can keep your socket fds and ssh_channels in a vector for easy management.
When you get any event, just loop over all the operations in a non-blocking way, i.e.
try to accept a new connection, and append the fd, and a new ssh_channel (created as in your question) to your vectors.
try to read all the socket fds, and forward anything to the corresponding ssh channel using ssh_channel_write (make sure to setsockopt SO_RCVTIMEO to 0)
try to read all the channels, using ssh_channel_read_nonblocking, and forward to the socket fd using write.
You also need to handle errors everywhere, and close the corresponding fd and ssh_channel.
Overall it's probably going to be too much code for a StackOverflow answer, but I may come back and add it in if I get time.
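A heavily simplified, single-connection version of that idea might look like the sketch below (POSIX sockets, a plain select() polling loop instead of ssh_select, most error handling omitted; all names are illustrative):

#include <libssh/libssh.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <unistd.h>

// Sketch: forward one local TCP connection on 127.0.0.1:3307 to
// localhost:3306 on the remote side of an already-connected ssh_session.
bool forward_one_connection(ssh_session session)
{
    // 1. Bind and listen on the local port ourselves -- libssh does not do this.
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(3307);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    if (bind(listener, (sockaddr*)&addr, sizeof(addr)) < 0 || listen(listener, 1) < 0)
        return false;

    int client = accept(listener, nullptr, nullptr);   // e.g. the MySQL client connects here

    // 2. Open the forwarding channel, exactly as in the question.
    ssh_channel channel = ssh_channel_new(session);
    if (!channel || ssh_channel_open_forward(channel, "localhost", 3306, "localhost", 3307) != SSH_OK)
        return false;

    // 3. Pump bytes in both directions until either side closes.
    char buf[4096];
    while (!ssh_channel_is_eof(channel))
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(client, &fds);
        timeval tv{0, 50000};                          // poll the channel every 50 ms
        if (select(client + 1, &fds, nullptr, nullptr, &tv) > 0)
        {
            int n = recv(client, buf, sizeof(buf), 0);
            if (n <= 0) break;
            ssh_channel_write(channel, buf, n);        // local socket -> SSH channel
        }
        int m = ssh_channel_read_nonblocking(channel, buf, sizeof(buf), 0);
        if (m == SSH_ERROR) break;
        if (m > 0 && send(client, buf, m, 0) <= 0)     // SSH channel -> local socket
            break;
    }

    ssh_channel_send_eof(channel);
    ssh_channel_free(channel);
    close(client);
    close(listener);
    return true;
}

A real implementation would of course keep accepting connections and handle several fd/channel pairs at once, as described in the steps above.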
The tempting alternative to all that would be to just run ssh -L ... as a subprocess using fork & exec, avoiding all that boilerplate socket code, and benefitting from an efficient, bug-free implementation.

C++: One client communicating with multiple servers

I was wondering if it is possible to let one client communicate with multiple servers at the same time. As far as I know, common browsers such as Firefox do exactly this.
The problem I have now is that the client has to listen and wait for data from the servers, rather than requesting it itself, and it has to listen to multiple servers at once. Is this even possible? What happens if the client is listening to server 1 and server 2 sends something? Is the packet lost, or will it be resent until the client acknowledges successful receipt? The protocol used is TCP.
edit: platform is Windows. Thanks for pointing this out Arunmu.
This is nothing different from regular socket programming using select/poll/epoll, OR using a thread pool, OR using a process per connection, OR whatever model you know.
I can show you rough pseudo-code on how to do it with epoll.
NOTE: None of my functions exist as written in C++; this is just for explanation purposes. And I am also assuming that you are on Linux, since you have mentioned nothing about the platform.
socket sd = connect("server.com", 8080);
sd.set_nonblocking(1);

epoll_event event;
event.data.fd = sd;
event.events = EPOLLIN;
epoll_ctl(ADD, event);       // register the socket with the epoll instance
...
...
while (true) {
    auto n = epoll_wait(events, 1);
    for (int i : 1...n) {
        if (events[i].data.fd == sd) // the socket added via epoll_ctl
        {
            // call the read in another thread or in the same thread
            std::thread(&Session::read_handler, rd_hndler_, sd);
        }
    }
}
I hope you got the gist. In essence, think of the server like a client and the client like a server, and you have your problem solved (kind of). Check out the link below to learn more about epoll:
https://banu.com/blog/2/how-to-use-epoll-a-complete-example-in-c/
To see a fully functional server design using epoll, check out:
https://github.com/arun11299/cpp-reactor-server/blob/master/epoll/reactor.cc
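For reference, a more concrete version of the pseudo-code above, using the real epoll API (Linux only; the server addresses and the read handling are placeholders):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

// Connect to one server and register the socket with an epoll instance.
static int connect_and_register(int epfd, const char* ip, int port)
{
    int sd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    connect(sd, (sockaddr*)&addr, sizeof(addr));       // error handling omitted
    fcntl(sd, F_SETFL, fcntl(sd, F_GETFL, 0) | O_NONBLOCK);

    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = sd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, sd, &ev);
    return sd;
}

int main()
{
    int epfd = epoll_create1(0);
    // One client, several servers: simply register every connection.
    connect_and_register(epfd, "127.0.0.1", 8080);     // placeholder addresses
    connect_and_register(epfd, "127.0.0.1", 8081);

    epoll_event events[16];
    while (true)
    {
        int n = epoll_wait(epfd, events, 16, -1);      // wait for any server to send data
        for (int i = 0; i < n; ++i)
        {
            char buf[4096];
            ssize_t len = read(events[i].data.fd, buf, sizeof(buf));
            if (len > 0)
                printf("got %zd bytes from fd %d\n", len, events[i].data.fd);
            else                                       // 0 = server closed, <0 = error
                close(events[i].data.fd);
        }
    }
}

Data sent by a server while the client is busy with another socket is not lost: TCP buffers it in the kernel, and epoll reports that fd as readable on the next epoll_wait call.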

HttpAddUrl fails with ERROR_SHARING_VIOLATION (32L)

I am attempting to write a price listener.
The data arrives as a 'push' response, i.e. chunked transfer-encoding.
I have decided to use the HTTP Server API, since both the async WinInet and WinHTTP read-data APIs close the connection if there is no data for a short while.
First of all, am I correct to use the HTTP Server API?
Second, if I try to, as per the MSDN example:
ULONG retCode = HttpInitialize(
    HttpApiVersion,
    HTTP_INITIALIZE_SERVER,
    NULL
);                                      // returns NO_ERROR

retCode = HttpCreateHttpHandle(
    &hReqQueue,
    0
);                                      // returns NO_ERROR

std::wstring url = L"http://apidintegra.tkfweb.com:80/";

retCode = HttpAddUrl(
    hReqQueue,
    url.c_str(),
    NULL
);                                      // always fails with ERROR_SHARING_VIOLATION
I always get a sharing violation. Do I need to use netsh to configure the connection somehow? If so, how? I've seen mention of configuring http.sys, and I've even tried executing the above code as an administrator.
I would be extremely grateful for some help, as there seems to be very little code out there for this!
Many thanks,
Jon
This error happens if the port is already in use by another process. That means another application uses the port (for example IIS or another web server).
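One quick way to confirm this is to register the same host on a port that nothing else is likely to own, for example 8080, and see whether HttpAddUrl succeeds there. A sketch (the port number is an assumption, and depending on your account you may still need administrator rights or a netsh http add urlacl reservation):

#include <windows.h>
#include <http.h>
#pragma comment(lib, "httpapi.lib")

// Sketch: same call sequence as the question, but on a port that is
// unlikely to be owned by IIS or another web server.
bool TryAddUrlOnFreePort()
{
    HTTPAPI_VERSION version = HTTPAPI_VERSION_1;
    if (HttpInitialize(version, HTTP_INITIALIZE_SERVER, NULL) != NO_ERROR)
        return false;

    HANDLE hReqQueue = NULL;
    if (HttpCreateHttpHandle(&hReqQueue, 0) != NO_ERROR)
        return false;

    // If this succeeds while port 80 fails with ERROR_SHARING_VIOLATION,
    // another process owns port 80 on this machine.
    ULONG rc = HttpAddUrl(hReqQueue, L"http://apidintegra.tkfweb.com:8080/", NULL);

    CloseHandle(hReqQueue);
    HttpTerminate(HTTP_INITIALIZE_SERVER, NULL);
    return rc == NO_ERROR;
}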

Socket in use error when reusing sockets

I am writing an XMLRPC client in C++ that is intended to talk to a Python XMLRPC server.
Unfortunately, at this time, the Python XMLRPC server is only capable of fielding one request per connection before it shuts the connection down; I discovered this thanks to mhawke's response to my previous query about a related subject.
Because of this, I have to create a new socket connection to my Python server every time I want to make an XMLRPC request, which means the creation and deletion of a lot of sockets. Everything works fine until I approach ~4000 requests. At this point I get socket error 10048, "Socket in use".
I've tried sleeping the thread to let Winsock fix its file descriptors, a trick that worked when a Python client of mine had an identical issue, to no avail.
I've tried the following:
int err = setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char*)TRUE, sizeof(BOOL)); // note: this passes the value TRUE cast to a pointer; setsockopt expects a pointer to a BOOL holding TRUE
with no success.
I'm using Winsock 2.0, so WSADATA::iMaxSockets shouldn't come into play, and either way, I checked and it's set to 0 (I assume that means infinity).
4000 requests doesn't seem like an outlandish number of requests to make during the run of an application. Is there some way to use SO_KEEPALIVE on the client side while the server continually closes and reopens connections?
Am I totally missing something?
The problem is being caused by sockets hanging around in the TIME_WAIT state which is entered once you close the client's socket. By default the socket will remain in this state for 4 minutes before it is available for reuse. Your client (possibly helped by other processes) is consuming them all within a 4 minute period. See this answer for a good explanation and a possible non-code solution.
Windows dynamically allocates port numbers in the range 1024-5000 (3977 ports) when you do not explicitly bind the socket address. This Python code demonstrates the problem:
import socket

sockets = []
while True:
    s = socket.socket()
    try:
        s.connect(('some_host', 80))
    except socket.error:
        break                      # stop once Windows runs out of dynamic ports
    sockets.append(s.getsockname())
    s.close()

print len(sockets)
sockets.sort()
print "Lowest port: ", sockets[0][1], " Highest port: ", sockets[-1][1]

# on Windows you should see something like this...
# 3960
# Lowest port:  1025  Highest port:  5000
If you try to run this again immediately, it should fail very quickly, since all dynamic ports are in the TIME_WAIT state.
There are a few ways around this:

1. Manage your own port assignments and use bind() to explicitly bind your client socket to a specific port that you increment each time you create a socket. You'll still have to handle the case where a port is already in use, but you will not be limited to dynamic ports. e.g.

port = 5000
while True:
    s = socket.socket()
    s.bind(('your_host', port))
    s.connect(('some_host', 80))
    s.close()
    port += 1

2. Fiddle with the SO_LINGER socket option. I have found that this sometimes works in Windows (although not exactly sure why):

s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 1)

3. I don't know if this will help in your particular application, however, it is possible to send multiple XMLRPC requests over the same connection using the multicall method. Basically this allows you to accumulate several requests and then send them all at once. You will not get any responses until you actually send the accumulated requests, so you can essentially think of this as batch processing - does this fit in with your application design?
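For the C++ side of option 1, explicitly binding the client socket before connect() might look roughly like this (a sketch only; the port range and error handling are assumptions):

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

// Sketch: bind the client socket to an explicit local port before connecting,
// cycling through our own range instead of Windows' 1024-5000 dynamic pool.
// Assumes WSAStartup has already been called.
SOCKET ConnectWithExplicitLocalPort(const sockaddr_in& serverAddr, u_short& nextLocalPort)
{
    for (int attempts = 0; attempts < 1000; ++attempts)
    {
        SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
        if (s == INVALID_SOCKET) return INVALID_SOCKET;

        sockaddr_in local = {};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(nextLocalPort);
        if (++nextLocalPort == 0) nextLocalPort = 10000;        // wrap around, range assumed

        // If this port is still in TIME_WAIT (WSAEADDRINUSE), just try the next one.
        if (bind(s, (const sockaddr*)&local, sizeof(local)) == SOCKET_ERROR)
        {
            closesocket(s);
            continue;
        }
        if (connect(s, (const sockaddr*)&serverAddr, sizeof(serverAddr)) == SOCKET_ERROR)
        {
            closesocket(s);
            return INVALID_SOCKET;
        }
        return s;
    }
    return INVALID_SOCKET;
}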
Update:
I tossed this into the code and it seems to be working now.
if (::connect(s_, (sockaddr *)&addr, sizeof(sockaddr)))
{
    int err = WSAGetLastError();
    if (err == 10048) // if "socket in use" error, force-kill and reopen the socket
    {
        closesocket(s_);
        WSACleanup();

        WSADATA info;
        WSAStartup(MAKEWORD(2, 0), &info);

        s_ = socket(AF_INET, SOCK_STREAM, 0);
        BOOL x = TRUE;
        setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char*)&x, sizeof(BOOL));
    }
}
Basically, if you encounter the 10048 error (socket in use), you can simply close the socket, call WSACleanup, restart WSA with WSAStartup, then recreate the socket and set its socket options again.
(The last setsockopt may not be necessary.)
I must have been missing the WSACleanup/WSAStartup calls before, because closesocket() and socket() were definitely being called.
This error only occurs once every ~4000 calls.
I am curious as to why this may be, even though this seems to fix it.
If anyone has any input on the subject I would be very curious to hear it.
Do you close the sockets after using them?