Live555 RTSP server does not use UDP - c++

I have a pretty basic live555 RTSP server and client, written in C++, to stream an H.264 stream.
Here's the code I have for the client (adapted from testProgs/testRTSPClient.cpp, bundled with live555)
client->scheduler = BasicTaskScheduler::createNew();
client->env = BasicUsageEnvironment::createNew(*client->scheduler);
client->rtspClient = NULL;
RTSP_CLIENT::eventLoopWatchVariable = 0;
openURL(client, *client->env, string(string("rtsp://") + ip_address + ":" + to_string(BASE_RTSP_PORT + iris_id) + "/iris").c_str());
client->env->taskScheduler().doEventLoop(&RTSP_CLIENT::eventLoopWatchVariable);
void openURL(RTSP_CLIENT* client, UsageEnvironment& env, char const* rtspURL) {
    // Begin by creating a "RTSPClient" object. Note that there is a separate "RTSPClient" object for each stream that we wish
    // to receive (even if more than one stream uses the same "rtsp://" URL).
    while (!client->rtspClient) {
        client->rtspClient = ourRTSPClient::createNew(env, rtspURL, RTSP_CLIENT_VERBOSITY_LEVEL, "main");
    }
    // Next, send a RTSP "DESCRIBE" command, to get a SDP description for the stream.
    // Note that this command - like all RTSP commands - is sent asynchronously; we do not block, waiting for a response.
    // Instead, the following function call returns immediately, and we handle the RTSP response later, from within the event loop:
    client->rtspClient->sendDescribeCommand(continueAfterDESCRIBE);
}
void continueAfterDESCRIBE(RTSPClient* rtspClient, int resultCode, char* resultString) {
    do {
        UsageEnvironment& env = rtspClient->envir(); // alias
        StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias
        if (resultCode != 0) {
            env << *rtspClient << "Failed to get a SDP description: " << resultString << "\n";
            delete[] resultString;
            break;
        }
        char* const sdpDescription = resultString;
        env << *rtspClient << "Got a SDP description:\n" << sdpDescription << "\n";
        // Create a media session object from this SDP description:
        scs.session = MediaSession::createNew(env, sdpDescription);
        delete[] sdpDescription; // because we don't need it anymore
        if (scs.session == NULL) {
            env << *rtspClient << "Failed to create a MediaSession object from the SDP description: " << env.getResultMsg() << "\n";
            break;
        } else if (!scs.session->hasSubsessions()) {
            env << *rtspClient << "This session has no media subsessions (i.e., no \"m=\" lines)\n";
            break;
        }
        // Then, create and set up our data source objects for the session. We do this by iterating over the session's 'subsessions',
        // calling "MediaSubsession::initiate()", and then sending a RTSP "SETUP" command, on each one.
        // (Each 'subsession' will have its own data source.)
        scs.iter = new MediaSubsessionIterator(*scs.session);
        setupNextSubsession(rtspClient);
        return;
    } while (0);

    // An unrecoverable error occurred with this stream.
    shutdownStream(rtspClient);
}
Here's the code I have for the server (adapted from testProgs/testOnDemandRTSPServer.cpp, bundled with live555)
rtsp_server->taskSchedular = BasicTaskScheduler::createNew();
rtsp_server->usageEnvironment = BasicUsageEnvironment::createNew(*rtsp_server->taskSchedular);
rtsp_server->rtspServer = RTSPServer::createNew(*rtsp_server->usageEnvironment, BASE_RTSP_PORT + iris_id, NULL);
rtsp_server->eventLoopWatchVariable = 0;
if(rtsp_server->rtspServer == NULL) {
    *rtsp_server->usageEnvironment << "Failed to create rtsp server ::" << rtsp_server->usageEnvironment->getResultMsg() << "\n";
    return false;
}
rtsp_server->sms = ServerMediaSession::createNew(*rtsp_server->usageEnvironment, "iris", "iris", "stream");
rtsp_server->liveSubSession = H264LiveServerMediaSession::createNew(*rtsp_server->usageEnvironment, true);
rtsp_server->sms->addSubsession(rtsp_server->liveSubSession);
rtsp_server->rtspServer->addServerMediaSession(rtsp_server->sms);
rtsp_server->taskSchedular->doEventLoop(&rtsp_server->eventLoopWatchVariable);
I was under the assumption that live555 by default uses UDP to transport data from the server to the client, which is what I wanted for its latency benefits over TCP. However, while running the server and client I happened to check netstat and I found this:
~# netstat | grep 8554
tcp 0 0 x.x.x.x:8554 wsip-x-x-x-x:39224 ESTABLISHED
It is, however, showing that the communication is going over TCP, not UDP. I am a bit confused here: am I misinterpreting netstat?
Is there anything I need to tune in my C++ code to force the communication to go over UDP rather than TCP?

Okay, so I figured out the answer. To help anyone else who is curious about this: the code is actually all correct, and there is no misinterpretation of netstat either. RTSP does indeed run over TCP, not UDP. However, the A/V data itself is transported over RTP, a separate connection that RTSP merely negotiates and instantiates, and RTP almost always runs over UDP. To figure out what port and protocol the A/V data stream is using, you will need to sniff the packets negotiated via RTSP. In my case the A/V data stream was indeed still going over UDP.
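For anyone who wants to control this from the client side: in testRTSPClient.cpp the transport is chosen when the SETUP command is sent, via the streamUsingTCP argument of sendSetupCommand(). Here is a sketch of the relevant function, adapted from the demo code (ourRTSPClient, StreamClientState, and continueAfterSETUP are the demo's own helpers):
// False requests plain RTP over UDP (the default); True asks the server to
// interleave RTP over the RTSP TCP connection instead.
#define REQUEST_STREAMING_OVER_TCP False

void setupNextSubsession(RTSPClient* rtspClient) {
    UsageEnvironment& env = rtspClient->envir();
    StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs;

    scs.subsession = scs.iter->next();
    if (scs.subsession != NULL) {
        if (!scs.subsession->initiate()) {
            env << "Failed to initiate the subsession: " << env.getResultMsg() << "\n";
            setupNextSubsession(rtspClient); // give up on this one; try the next
        } else {
            // The fourth argument selects the transport for this subsession:
            rtspClient->sendSetupCommand(*scs.subsession, continueAfterSETUP,
                                         False, REQUEST_STREAMING_OVER_TCP);
        }
        return;
    }
    // All subsessions are set up; continue with the RTSP "PLAY" command...
}
To confirm what is actually on the wire, watch for the RTP packets themselves (for example with tcpdump -i any udp while the stream plays); netstat only shows the established RTSP control connection.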

Related

cppzmq fails to receive tcp messages

I am trying to use a ZMQ socket on my Ubuntu machine to communicate with an ESP8266 edge device. I tried this piece of Python code, which works fine:
import zmq
ctx = zmq.Context()
router = ctx.socket(zmq.ROUTER)
router.router_raw = True
router.bind("tcp://*:8081")
while True:
    msg = router.recv_multipart()
    identity, body = msg
    print(identity)
    print(body)
as it gives (server side)
b'\x00k\x8bEg'
b''
b'\x00k\x8bEg'
b'hello from ESP8266'
b'\x00k\x8bEg'
b'\r\n'
but when I translate it into C++ as
#include <zmq_addon.hpp>
int main () {
    zmq::context_t context;
    zmq::socket_t socket(context, zmq::socket_type::router);
    int router_raw = 1;
    zmq_setsockopt(&socket, ZMQ_ROUTER_RAW, &router_raw, 1);
    socket.bind("tcp://*:8081");
    while (true) {
        std::cout << "listening " << std::endl;
        std::vector<zmq::message_t> msgs;
        if (zmq::recv_multipart(socket, std::back_inserter(msgs))) {
            std::cout << "got " << static_cast<const char *>(msgs.front().data())
                      << std::endl;
        }
    }
    return 0;
}
it doesn't work any more and hangs in recv_multipart, though at the same time the ESP8266 client does receive some weird symbols, which I guess indicates that the TCP connection succeeds.
The weird symbols that the client receives are ZMQ's internal protocol, which ZMQ_ROUTER_RAW normally suppresses, except that your call to set ZMQ_ROUTER_RAW is wrong. The last parameter to zmq_setsockopt is supposed to be the size in bytes of the new option value you are passing. It should be sizeof(router_raw) instead of 1.
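For reference, a corrected sketch of that call (assuming a reasonably recent cppzmq, where socket_t exposes handle() and the typed option constants; the rest of the program stays the same):
// Raw C API: the last argument is the size in bytes of the option value.
int router_raw = 1;
zmq_setsockopt(socket.handle(), ZMQ_ROUTER_RAW, &router_raw, sizeof(router_raw));

// Or, equivalently, with the typed setter in recent cppzmq (>= 4.7),
// which cannot get the size wrong:
socket.set(zmq::sockopt::router_raw, 1);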

How to communicate locally between a Node.js (pref. Express module) server and a C++ application using IPC (Unix Domain Sockets)

I have one machine simultaneously running a C++ application and a Node.js server.
Use-case:
I want to be able to trigger my C++ application and make it pass some data (let's say a string) into a socket file. Then my Node.js server shall fetch that data from the socket and print it on some web page via a TCP port (code not included here yet). The same should happen the other way around.
What I've done so far:
I was able to write some strings from my Node.js server into the socket file with the following code:
server.js
var net = require('net');
var fs = require('fs');
var socketPath = '/tmp/sock';
fs.stat(socketPath, function(err) {
    if (!err) fs.unlinkSync(socketPath);
    var unixServer = net.createServer(function(localSerialConnection) {
        localSerialConnection.on('data', function(data) {
            // data is a buffer from the socket
            console.log('Something happened!');
        });
        // write to socket with localSerialConnection.write()
        localSerialConnection.write('HELLO\n');
        localSerialConnection.write('I\'m\n');
        localSerialConnection.write('DOING something!\n');
        localSerialConnection.write('with the SOCKS\n');
    });
    unixServer.listen(socketPath);
});
and reading the content with nc -U /tmp/sock gives the following output: https://i.stack.imgur.com/ye2Dx.png.
When I run my C++ code:
cpp_socket.cpp
#include <boost/asio.hpp>
#include <iostream>
int main() {
    using boost::asio::local::stream_protocol;
    boost::system::error_code ec;

    ::unlink("/tmp/sock"); // Remove previous binding.
    boost::asio::io_service service;
    stream_protocol::endpoint ep("/tmp/sock");
    stream_protocol::socket s(service);
    std::cout << "passed setup section" << std::endl;
    s.connect(ep);
    std::cout << "passed connection" << std::endl;
    std::string message = "Hello from C++!";
    std::cout << "before sending" << std::endl;
    boost::asio::write(s, boost::asio::buffer(message), boost::asio::transfer_all());
    /* s.write_some(boost::asio::buffer("hello world!"), ec); */
    std::cout << "after sending" << std::endl;
}
I get the following output:
./cpp_socket
passed setup section
terminate called after throwing an instance of 'boost::wrapexcept<boost::system::system_error>'
what(): connect: No such file or directory
Aborted (core dumped)
Even though the /tmp/sock file still exists.
When I comment out ::unlink("/tmp/sock"); // Remove previous binding. it runs through, but my Node.js server stops running and nc -U /tmp/sock loses its connection.
Neither the .write() nor the .write_some() function seems to work.
I assume that I miss something trivial or I'm not following basic concepts of unix socket communication.
Questions:
Is it even possible for one Node.js server application to listen on a TCP port and a Unix socket at the same time?
Am I understanding the concept of unix socket communication correctly, judging from my input?
How can I read or write from C++ from/into a socket, preferably with C++ boost/asio library. But not necessarily necessary :-)
Am I asking the right questions?
As you might see, I'm not too experienced with these subjects. If I haven't addressed my issues accordingly and precisely enough, it's due to my lack of experience.
Thanks a lot in advance. Let's have a fruitful discussion.
Oh oops. The error was in plain sight:
::unlink("/tmp/sock"); // Remove previous binding.
Removes the socket. That's not good if you wanted to connect to it.
Removing that line made it work:
passed setup section
passed connection: Success
before sending
after sending
And on the listener side:
Which is, I guess, to be expected because the client isn't complete yet.
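For completeness, here is a minimal sketch of the corrected client (assuming the Node.js server above is already listening on /tmp/sock); it connects without unlinking the path and also reads back the greeting the server writes on connect:
#include <boost/asio.hpp>
#include <iostream>

int main() {
    using boost::asio::local::stream_protocol;

    boost::asio::io_service service;
    stream_protocol::socket s(service);

    // Do NOT unlink("/tmp/sock") here: the path *is* the server's listening
    // endpoint; removing it makes connect() fail with "No such file or directory".
    s.connect(stream_protocol::endpoint("/tmp/sock"));

    std::string message = "Hello from C++!";
    boost::asio::write(s, boost::asio::buffer(message));

    // Read whatever the server wrote to us (it sends several lines on connect).
    char reply[256];
    boost::system::error_code ec;
    std::size_t n = s.read_some(boost::asio::buffer(reply), ec);
    if (!ec) std::cout.write(reply, n) << std::endl;
}
The key point is that the socket path is the rendezvous: only the listener creates (and unlinks) it; clients just connect to it.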
Disclaimer:
I made it work with TCP sockets, but I would like to see how it's possible with Unix sockets. One more open port could lead to potential security threats (correct me if I'm wrong). So if you (sehe) or someone else knows how to achieve this, please feel free to share. Since I wasn't able to find this in my searches over the internet, it could be helpful for others, too.
What I did now:
Creating a NodeJS server which is listening to two ports. One port for the web-browser and one for the C++ application
Connect the C++ application with one port
Sending strings using telnet
server.js
const net = require('net');
const express = require('express');
const app = express();

const c_port = 6666;
const si_port = 8888;

//------------- From here Browser stream is handled -------------//
app.get('/', (req, res) => {
    res.send('Hello from Node!');
});
app.get('/index.html', (req, res) => {
    res.sendFile(__dirname + "/" + "index.html");
});
app.listen(si_port, (req, res) => {
    console.log(`Listening on http://localhost:${si_port}`);
});

//------------- From here C++ stream is handled -------------//
var server = net.createServer(function(c) { // 'connection' listener
    console.log('client connected');
    c.on('end', function() {
        console.log('client disconnected');
    });
    c.write('hello\r\n');
    c.on('data', function(data) {
        var read = data.toString();
        console.log(read);
        // var message = c.read();
        // console.log(message);
    });
    // c.pipe(c);
    c.write('Hello back to C++'); // But only if you shut down the server
});
server.listen(c_port, function() { // 'listening' listener
    console.log(`Listening for input from C++ application on port:${c_port}`);
});
client.cpp
#include <iostream>
#include <boost/asio.hpp>
int main(int argc, char* argv[])
{
    if (argc != 4) {
        std::cout << "Wrong parameter\n" << "Example usage ./client 127.0.0.1 1234 hello" << std::endl;
        return -1;
    }
    auto const address = boost::asio::ip::make_address(argv[1]);
    auto const port = std::atoi(argv[2]);
    std::string msg = argv[3];
    msg = msg + '\n';

    boost::asio::io_service io_service;
    // socket creation
    boost::asio::ip::tcp::socket socket(io_service);
    // connection
    boost::system::error_code ec;
    socket.connect(boost::asio::ip::tcp::endpoint(address, port), ec);
    if (ec) { std::cout << ec.message() << std::endl; return 1; }
    // request/message from client
    //const string msg = "Hello from Client!\n";
    boost::system::error_code error;
    boost::asio::write(socket, boost::asio::buffer(msg), error);
    if (error) {
        std::cout << "send failed: " << error.message() << std::endl;
    }
    // getting response from server
    boost::asio::streambuf receive_buffer;
    boost::asio::read(socket, receive_buffer, boost::asio::transfer_all(), error);
    if (error && error != boost::asio::error::eof) {
        std::cout << "receive failed: " << error.message() << std::endl;
    }
    else {
        const char* data = boost::asio::buffer_cast<const char*>(receive_buffer.data());
        std::cout << data << std::endl;
    }
    return 0;
}
With telnet localhost 6666 I can easily connect on that port and send random strings.
Executing my binary with additional arguments and a string, I was able to send some data from my C++ application: ./clientcpp 127.0.0.1 6666 "HELLO from C++". And here is the output:
Thanks a lot again.

Periodic latency spikes from UDP socket caused by periodic sendto()/recvfrom() delay, C++ for Linux RT-PREEMPT system

I have set up two Raspberry Pis to use UDP sockets, one as the client and one as the server. The kernel has been patched with RT-PREEMPT (4.9.43-rt30+). The client acts as an echo to the server to allow for the calculation of Round-Trip Latency (RTL). At the moment a send frequency of 10 Hz is being used on the server side with 2 threads: one for sending the messages to the client and one for receiving the messages from the client. The threads are set up with a scheduling priority of 95 using Round-Robin scheduling.
The server constructs a message containing the time the message was sent and the time past since messages started being sent. This message is sent from the server to the client then immediately returned to the server. Upon receiving the message back from the client the server calculates the Round-Trip Latency and then stores it in a .txt file, to be used for plotting using Python.
The problem is that when analysing the graphs I noticed there is a periodic spike in the RTL. The top graph of the image shows RTL latency and sendto() + recvfrom() times (in the legend I have used RTT instead of RTL). These spikes are directly related to the spikes shown in the server-side sendto() and recvfrom() calls. Any suggestion on how to remove these spikes, as my application is very reliant on consistency?
Things I have tried and noticed:
The size of the message being sent has no effect. I have tried larger messages (1024 bytes) and smaller messages (0 bytes) and the periodic delay does not change. This suggests to me that it is not a buffer issue as there is nothing filling up?
The frequency at which the messages are sent does play a big role: if the frequency is doubled, the latency spikes occur twice as often. This suggests that something is filling up, and while it empties the sendto()/recvfrom() calls experience a delay?
Changes to the buffer size with setsockopt() have no effect.
I have tried quite a few other settings (MSG_DONTWAIT, etc) to no avail.
I am by no means an expert in sockets/C++ programming/Linux, so any suggestions will be greatly appreciated as I am out of ideas. Below is the code used to create the socket and start the server threads for sending and receiving the messages. Below that is the code for sending the messages from the server; if you need the rest, please let me know, but for now my concern is centred on the delay caused by the sendto() function. Thanks.
thread_priority = priority;
recv_buff = recv_buff_len;

std::cout << del << " Second start-up delay..." << std::endl;
sleep(del);
std::cout << "Delay complete..." << std::endl;

// master socket creation
master = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
if (master == 0) { // Try to create the UDP socket
    perror("Could not create the socket: ");
    exit(EXIT_FAILURE);
}
std::cout << "Master Socket Created..." << std::endl;
std::cout << "Adjusting send and receive buffers..." << std::endl;
setBuff();

// Server address and port creation
serv.sin_family = AF_INET; // Address family
serv.sin_addr.s_addr = INADDR_ANY; // Server IP address; INADDR_ANY will work on the server side only
serv.sin_port = htons(portNum);
server_len = sizeof(serv);

// Binding of master socket to specified address and port
if (bind(master, (struct sockaddr *) &serv, sizeof(serv)) < 0) { // Attempt to bind master socket to address
    perror("Could not bind socket...");
    exit(EXIT_FAILURE);
}

// Show what address and port is being used
char IP[INET_ADDRSTRLEN];
inet_ntop(AF_INET, &(serv.sin_addr), IP, INET_ADDRSTRLEN); // INADDR_ANY allows all network interfaces so it will always show 0.0.0.0
std::cout << "Listening on port: " << htons(serv.sin_port) << ", and address: " << IP << "..." << std::endl;

// Options specific to the server RPi
if (server) {
    std::cout << "Run Time: " << duration << " seconds." << std::endl;
    client.sin_family = AF_INET; // Address family
    inet_pton(AF_INET, clientIP.c_str(), &(client.sin_addr));
    client.sin_port = htons(portNum);
    client_len = sizeof(client);
    serv_send = std::thread(&SocketServer::serverSend, this);
    serv_send.detach(); // The server send thread just runs continuously
    serv_receive = std::thread(&SocketServer::serverReceive, this);
    serv_receive.join();
} else { // Specific to client RPi
    SocketServer::clientReceiveSend();
}
And the code for sending the messages:
// Setup the priority of this thread
param.sched_priority = thread_priority;
int result = sched_setscheduler(getpid(), SCHED_RR, &param);
if (result) {
    perror("The following error occurred while setting serverSend() priority");
}
int ched = sched_getscheduler(getpid());
printf("serverSend() priority result %i : Scheduler priority id %i \n", result, ched);

std::ofstream Out;
std::ofstream Out1;
Out.open(file_name);
Out << duration << std::endl;
Out << frequency << std::endl;
Out << thread_priority << std::endl;
Out.close();
Out1.open("Server Side Send.txt");

packets_sent = 0;
Tbegin = std::chrono::high_resolution_clock::now();

// Send messages for a specified time period at a specified frequency
while (!stop) {
    // Setup the message to be sent
    Tstart = std::chrono::high_resolution_clock::now();
    TDEL = std::chrono::duration_cast<std::chrono::duration<double>>(Tstart - Tbegin); // Total time passed before sending message
    memcpy(&message[0], &Tstart, sizeof(Tstart)); // Send the time the message was sent with the message
    memcpy(&message[8], &TDEL, sizeof(TDEL)); // Send the time that had passed since Tstart

    // Send the message to the client
    T1 = std::chrono::high_resolution_clock::now();
    sendto(master, &message, 16, MSG_DONTWAIT, (struct sockaddr *)&client, client_len);
    T2 = std::chrono::high_resolution_clock::now();
    T3 = std::chrono::duration_cast<std::chrono::duration<double>>(T2 - T1);
    Out1 << T3.count() << std::endl;
    packets_sent++;

    // Pause so that the required message send frequency is met
    while (true) {
        Tend = std::chrono::high_resolution_clock::now();
        Tdel = std::chrono::duration_cast<std::chrono::duration<double>>(Tend - Tstart);
        if (Tdel.count() > 1/frequency) {
            break;
        }
    }
    TDEL = std::chrono::duration_cast<std::chrono::duration<double>>(Tend - Tbegin);

    // Check to see if the program has run as long as required
    if (TDEL.count() > duration) {
        stop = true;
        break;
    }
}
std::cout << "Exiting serverSend() thread..." << std::endl;

// Save extra results to the end of the last file
Out.open(file_name, std::ios_base::app);
Out << packets_sent << "\t\t " << packets_returned << std::endl;
Out.close();
Out1.close();
std::cout << "^C to exit..." << std::endl;
I have sorted out the problem. It was not the ARP tables: even with the ARP functionality disabled there was still a periodic spike, although with ARP disabled each occurrence was a single latency spike rather than a series of spikes.
It turned out to be a problem with the threads I was using as there were two threads on a CPU only capable of handling one thread at a time. The one thread that was sending the information was being affected by the second thread that was receiving information. I changed the thread priorities around a lot (send priority higher than receive, receive higher than send and send equal to receive) to no avail. I have now bought a Raspberry Pi that has 4 cores and I have set the send thread to run on core 2 while the receive thread runs on core 3, preventing the threads from interfering with each other. This has not only removed the latency spikes but also reduced the mean latency of my setup.
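For anyone wanting to reproduce the fix, here is a minimal sketch of pinning each thread to its own core on Linux, using pthread_setaffinity_np() on the std::thread native handle (the member names mirror the question's code):
#include <pthread.h> // pthread_setaffinity_np is a GNU extension; it needs
                     // _GNU_SOURCE, which g++ defines by default
#include <thread>

// Pin a std::thread to one CPU core; returns 0 on success, an errno value otherwise.
static int pinThreadToCore(std::thread& t, int core) {
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(core, &cpuset);
    return pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &cpuset);
}

// Usage, mirroring the server setup above:
// serv_send = std::thread(&SocketServer::serverSend, this);
// pinThreadToCore(serv_send, 2);    // send thread on core 2
// serv_receive = std::thread(&SocketServer::serverReceive, this);
// pinThreadToCore(serv_receive, 3); // receive thread on core 3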

Network Programming Issue - buffer will only send once to the server

I am trying to send a file to a server using socket programming. My server and client are able to connect to each other successfully; however, I am expecting the while loop below to go through the entire file and send it to the server. The issue I am having is that it only sends the first chunk and not the rest.
On the client side I have the following:
memset(szbuffer, 0, sizeof(szbuffer)); // Initialize the buffer to zero
int file_block_size;
// Loop while there is still content in the file
while ((file_block_size = fread(szbuffer, sizeof(char), 256, file)) > 0) {
    if (send(s, szbuffer, file_block_size, 0) < 0) {
        throw "Error: failed to send file";
        exit(1);
    }
    memset(szbuffer, 0, sizeof(szbuffer)); // Reset the buffer to zero
}
On the server side I have the following:
while (1)
{
    FD_SET(s, &readfds); // always check the listener
    if (!(outfds = select(infds, &readfds, NULL, NULL, tp))) {}
    else if (outfds == SOCKET_ERROR) throw "failure in Select";
    else if (FD_ISSET(s, &readfds)) cout << "got a connection request" << endl;

    // Found a connection request, try to accept.
    if ((s1 = accept(s, &ca.generic, &calen)) == INVALID_SOCKET)
        throw "Couldn't accept connection\n";

    // Connection request accepted.
    cout << "accepted connection from " << inet_ntoa(ca.ca_in.sin_addr) << ":"
         << hex << htons(ca.ca_in.sin_port) << endl;

    // Fill in szbuffer from accepted request.
    while (szbuffer > 0) {
        if ((ibytesrecv = recv(s1, szbuffer, 256, 0)) == SOCKET_ERROR)
            throw "Receive error in server program\n";

        // Print receipt of successful message.
        cout << "This is the message from client: " << szbuffer << endl;
        File.open("test.txt", ofstream::out | ofstream::app);
        File << szbuffer;
        File.close();

        // Send to Client the received message (echo it back).
        ibufferlen = strlen(szbuffer);
        if ((ibytessent = send(s1, szbuffer, ibufferlen, 0)) == SOCKET_ERROR)
            throw "error in send in server program\n";
        else cout << "Echo message:" << szbuffer << endl;
    }
} // wait loop
} // try loop
The code above is the setup for the connection between the client and the server, which works great. It is in a constant while loop waiting to receive new requests. The issue is with my buffer: once I send the first buffer over, the next one doesn't seem to go through. Does anyone know what I can do to get the server to receive more than just one buffer? I've tried a while loop but did not have any luck.
Your code that sends the file to the server appears to send consecutive sections of the file correctly.
Your code that appears to have the intention of receiving the file from the client performs the following steps:
1) Wait for and accept a socket.
2) Read up to 256 bytes from the socket.
3) Write those bytes back to the socket.
At this point the code appears to go back to waiting for another connection, keeping the original connection open and, at least based on the code you posted, leaking the file descriptor.
So the issue seems to be that the client and the server disagree on what should happen. The client tries to send the entire file and doesn't read from the socket. The server reads the first 256 bytes from the socket and writes them back to the client.
Of course, it's entirely possible that portions of the code not shown implement some of the missing pieces, but there's definitely a disconnect here between what the sending side is doing and what the receiving side is doing.
buffer will only send once to the server
No, your server is only reading once from the client. You have to loop, just like the sending loop does.
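As a minimal sketch (assuming the same s1, szbuffer, and File variables as the question), a receive loop that mirrors the sending loop could look like this; note that recv() returns 0 on a clean close, and the byte count must be used instead of strlen(), since the received data is not NUL-terminated:
int ibytesrecv;
// Keep reading until the client closes the connection (recv returns 0).
while ((ibytesrecv = recv(s1, szbuffer, 256, 0)) > 0) {
    File.open("test.txt", ofstream::out | ofstream::app);
    File.write(szbuffer, ibytesrecv); // write exactly the bytes received
    File.close();
}
if (ibytesrecv == SOCKET_ERROR)
    throw "Receive error in server program\n";
closesocket(s1); // transfer complete; release the per-connection socket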

FTP server file transfer

I am uncertain about a few things regarding FTP file transfer. I am writing an FTP server and I am trying to figure out how to make the file transfer work correctly. So far it works somehow, but I have certain doubts. Here is my file transfer function (only retrieve so far):
void RETRCommand(int & clie_sock, int & c_data_sock, char buffer[]){
    ifstream file; // clie_sock is used for commands and c_data_sock for data transfer
    char *file_name, packet[PACKET_SIZE]; // packet size is 2040
    int packet_len, pre_pos = 0, file_end;

    file_name = new char[strlen(buffer + 5)];
    strcpy(file_name, buffer + 5);
    sprintf(buffer, "150 Opening BINARY mode data connection for file transfer\r\n");
    if (send(clie_sock, buffer, strlen(buffer), 0) == -1) {
        perror("Error while writing ");
        close(clie_sock);
        exit(1);
    }
    cout << "sent: " << buffer << endl;
    file_name[strlen(file_name) - 2] = '\0';
    file.open(file_name, ios::in | ios::binary);
    if (file.is_open()) {
        file.seekg(0, file.end);
        file_end = (int) file.tellg();
        file.seekg(0, file.beg);
        while (file.good()) {
            pre_pos = file.tellg();
            file.read(packet, PACKET_SIZE);
            if ((int) file.tellg() == -1)
                packet_len = file_end - pre_pos;
            else
                packet_len = PACKET_SIZE;
            if (send(c_data_sock, packet, packet_len, 0) == -1) {
                perror("Error while writing ");
                close(clie_sock);
                exit(1);
            }
            cout << "sent some data" << endl;
        }
    }
    else {
        sprintf(buffer, "550 Requested action not taken. File unavailable\r\n", packet);
        if (send(clie_sock, buffer, packet_len + 2, 0) == -1) {
            perror("Error while writing ");
            close(clie_sock);
            exit(1);
        }
        cout << "sent: " << buffer << endl;
        delete(file_name);
        return;
    }
    sprintf(buffer, "226 Transfer complete\r\n");
    if (send(clie_sock, buffer, strlen(buffer), 0) == -1) {
        perror("Error while writing ");
        close(clie_sock);
        exit(1);
    }
    cout << "sent: " << buffer << endl;
    close(c_data_sock);
    delete(file_name);
}
So one problem is the data transfer itself. I am not exactly sure how it is supposed to work. Right now it works like this: the server sends all the data to c_data_sock, closes this socket, and then the client starts doing something. Shouldn't the client receive the data while the server is sending it? And the other problem is the ABOR command. How am I supposed to receive the ABOR command? I tried recv with the flag set to MSG_OOB but then I get an error saying "Invalid argument". I would be glad if someone could give me a hint or an example of how to do it right, as I don't seem to be able to figure it out myself.
Thanks,
John
FTP uses two connections. The first is the command connection; in your case it is clie_sock. The 'ABOR' command should be received through it, the same way you received the 'RETR' command.
To receive a file, the client establishes a data connection with your server (the c_data_sock socket). It will not be opened until the client connects, so this answers your question about the data transfer: you cannot start the client after the server executes this function. First the client sends the 'RETR' command to your command socket. Then your server waits for a new connection from the client (after sending it the data IP and port). Then the client connects (now you have your c_data_sock ready) and the server sends all the data to that socket, which is in turn received by the client.
You probably need to read more about networking in general if you feel you don't understand it. I prefer this one: http://beej.us/guide/bgnet/
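To make the ordering concrete, here is a minimal passive-mode sketch (data_listen is a hypothetical listening socket created when the client sent PASV; error handling omitted):
// Passive-mode ordering on the server side:
// 1. Client sends PASV; the server opens a listening socket and replies
//    "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" with its IP and port.
// 2. Client connects to that port; the server accepts the data connection.
// 3. Client sends RETR; the server streams the file over the accepted socket.
int c_data_sock = accept(data_listen, NULL, NULL); // blocks until the client connects
// ... only now is it safe to call RETRCommand(clie_sock, c_data_sock, buffer) ...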
Also, you have a memory leak here: after you allocate an array with
file_name = new char[strlen(buffer + 5)];
you need to delete it using
delete [] file_name;
Otherwise delete treats file_name as a single object rather than an array, which is undefined behaviour and will typically leak the array's memory; that is especially bad in a long-running server.
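An even simpler fix is to avoid manual new[]/delete[] altogether; a sketch using std::string, assuming the same "RETR <name>\r\n" buffer layout as the question:
#include <string>

std::string file_name(buffer + 5);          // skip "RETR "
if (file_name.size() >= 2)
    file_name.erase(file_name.size() - 2);  // strip the trailing "\r\n"
file.open(file_name.c_str(), ios::in | ios::binary);
// No delete needed: the string releases its memory automatically.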