I'm creating a website in C++ using FastCGI on nginx. My problem now is tracking a user (i.e., a session). I can read HTTP_COOKIE from the environment, but I have no clue how to create a new cookie with a name and a value and send it to the client.
Searching Google, I only found material for PHP, Python, and other scripting languages running under CGI/FastCGI.
You can set a cookie by emitting a Set-Cookie response header yourself. In CGI/FastCGI, headers are just lines printed before the blank line that separates them from the body:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv)
{
    int count = 0;
    /* Headers (including Set-Cookie) go before the blank line that
       separates them from the response body. */
    printf("Content-type: text/html\r\n"
           "Set-Cookie: name=value\r\n"
           "\r\n"
           "<title>CGI Hello!</title>"
           "<h1>CGI Hello!</h1>"
           "Request number %d running on host <i>%s</i>\n",
           ++count, getenv("SERVER_NAME"));
    return 0;
}
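Note that as a plain one-shot CGI program the example above always reports request number 1. Under FastCGI the process persists across requests; here is a minimal sketch using the FCGI_Accept() loop from libfcgi's fcgi_stdio.h wrapper (the cookie name and value are placeholders):

#include "fcgi_stdio.h" /* from libfcgi; link with -lfcgi */

int main(void)
{
    int count = 0;
    /* FCGI_Accept() blocks until the next request arrives and
       returns a negative value when the server shuts us down. */
    while (FCGI_Accept() >= 0) {
        printf("Content-type: text/html\r\n"
               "Set-Cookie: session=abc123; Path=/; HttpOnly\r\n"
               "\r\n"
               "<h1>Request number %d</h1>\n",
               ++count);
    }
    return 0;
}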
I'm using nghttp2_asio. I compiled it using ./configure --enable-asio-lib, then added /usr/local/lib to the /etc/ld.so.conf file. The code is as follows:
#include "bits/stdc++.h"
#include "nghttp2/asio_http2_server.h"
using namespace std;
using namespace nghttp2::asio_http2;
using namespace nghttp2::asio_http2::server;
int main(int argc, char **argv) {
http2 srv;
srv.num_threads(4);
srv.handle("/", [](const request &req, const response &res) {
cout << req.uri().path << endl;
header_map headers;
headers.emplace("content-type", header_value{ "text/html", false });
res.write_head(200, headers);
res.end(file_generator("index.html"));
});
boost::system::error_code ec;
if (srv.listen_and_serve(ec, "localhost", "8080")) cerr << ec.message() << endl;
return 0;
}
When I open http://localhost:8080 in a browser (Chrome or Firefox), it gives me the following error:
This page isn't working
localhost didn't send any data.
ERR_EMPTY_RESPONSE
Even when I try with curl, it gives me an error:
curl: (52) Empty reply from server
The only thing that works is curl http://localhost:8080 --http2-prior-knowledge.
Is there a solution for this?
It looks like your browser refuses to do HTTP/2 over an unencrypted connection. The Wikipedia page has the following to say:
Although the standard itself does not require usage of encryption,[51] all major client implementations (Firefox,[52] Chrome, Safari, Opera, IE, Edge) have stated that they will only support HTTP/2 over TLS, which makes encryption de facto mandatory.[53]
cURL has a different problem: by default it speaks HTTP/1.1, which your HTTP/2-only server does not understand. The --http2-prior-knowledge flag makes it use the HTTP/2 binary protocol directly. Alternatively, connecting to an HTTPS endpoint will automatically negotiate HTTP/2.
See the libnghttp2_asio documentation for an example of how to serve with encryption:
int main(int argc, char *argv[]) {
    boost::system::error_code ec;

    boost::asio::ssl::context tls(boost::asio::ssl::context::sslv23);
    tls.use_private_key_file("server.key", boost::asio::ssl::context::pem);
    tls.use_certificate_chain_file("server.crt");
    configure_tls_context_easy(ec, tls);

    http2 server;
    // add server handlers here

    if (server.listen_and_serve(ec, tls, "localhost", "3000")) {
        std::cerr << "error: " << ec.message() << std::endl;
    }
}
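Note that this expects server.key and server.crt to exist in the working directory. Once the server speaks TLS, clients can negotiate HTTP/2 via ALPN, so browsers can connect and curl no longer needs the --http2-prior-knowledge flag.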
I am trying to send TCP messages over my local network that exceed the MTU limit of ~1500 bytes. I know that the network stack splits messages exceeding 1500 bytes into individual packets at some level, but I am unclear whether that is something I need to deal with at the application level. So I wrote a test application in ROS and Qt (C++) to test the behavior; I picked these because they are what my overall project is written in.
My tcp_fragmentation_test node sets up a server, then the test_client node establishes a connection and sends a message exceeding 1500 bytes. The goal is to get the server to receive this full message as one cohesive unit.
When I test these nodes on the same computer, the server receives the full message as one unit. However, when I put the server and client on separate computers (still on the same network), the server receives the message in multiple chunks. The chunks are either 1500 bytes or multiples thereof, so I believe they are sometimes getting squashed together in the read buffer in groups of two or three.
Here is my code:
tcp_fragmentation_test.cpp
#include "ros/ros.h"
#include <QtCore/QCoreApplication>
#include <QTcpSocket>
#include <QTcpServer>
#include <QDebug>
//prototype functions
void initialConnect();
void receiveMessage();
//global(ish) variables
extern QTcpServer* subserver;
extern QTcpSocket* subsocket;
QTcpServer* subserver;
QTcpSocket* subsocket;
int main(int argc, char **argv)
{
QCoreApplication app(argc, argv);
//initiate ROS
ros::init(argc, argv, "tcp_fragmentation_test_node");
ros::NodeHandle n;
subserver = new QTcpServer();
//call initialConnect on first connection attempt
QObject::connect(subserver, &QTcpServer::newConnection,initialConnect);
quint16 server_port = 9998;
//start listening
QHostAddress server_IP = QHostAddress("127.0.0.1");
//start listening for incoming C2 connections
if(!subserver->listen(server_IP, server_port))
{
qDebug().noquote() << "subserver failed to start on: " + server_IP.toString() + "/" + QString::number(server_port);
}
else
{
qDebug().noquote() << "subserver started on: " + server_IP.toString() + "/" + QString::number(server_port);
}
return app.exec(); //inifite loop that starts qt event listener
}
//accept incoming connection
void initialConnect()
{
//accept the incoming connection
subsocket = subserver->nextPendingConnection();
QObject::connect(subsocket, &QTcpSocket::readyRead, receiveMessage);
}
void receiveMessage()
{
//read data in
QByteArray received_message = subsocket->readAll();
qDebug() << "message received: ";
qDebug() << received_message;
}
test_client.cpp
#include "ros/ros.h"
#include <QtCore/QCoreApplication>
#include <QTcpSocket>
#include <QTcpServer>
#include <QDebug>
//function prototypes
bool initialConnect(QString server_IP, quint16 server_port);
void sendMessage(QByteArray message);
//global(ish) variables
QTcpSocket* test_socket;
int main(int argc, char **argv)
{
QCoreApplication app(argc, argv);
ros::init(argc, argv, "test_client");
ros::NodeHandle n;
initialConnect("127.0.0.1",9998);
//send 1500+ byte message, to test IP fragmentation
sendMessage("<message large than 1500 bytes>");
return app.exec(); //inifite loop that starts qt event listener
}
bool initialConnect(QString server_IP, quint16 server_port)
{
//create test_socket
test_socket = new QTcpSocket();
//try to connect, then wait a bit to make sure it was successful
test_socket->connectToHost(server_IP, server_port);
if(test_socket->waitForConnected(1000))
{
return true;
}
else
{
qDebug() << "Failed to connect to server";
return false;
}
}
//send test message
void sendMessage(QByteArray message)
{
//send message
test_socket->write(message);
test_socket->waitForBytesWritten(1000);
return;
}
I talked to one of my colleagues about this, and they said they had not run into this issue when sending messages exceeding the MTU in either the Tcl scripting language or HTML WebSockets. They sent me their code to check out, but unfortunately it is a small part of a really large and poorly documented codebase, so I am having trouble parsing out what they did and why they did not run into the same issues as me.
I know that I could probably avoid this issue by including an identifying header with each message that contains its length, and then combining reads into an overall buffer until the correct number of bytes have been read (as sketched below). However, I'm trying to see if there's a simpler way, especially after my colleague told me he never ran into this issue. It seems like something that should be handled behind the scenes by the TCP protocol, and I'm trying to avoid dealing with it at the application level if at all possible.
Any ideas?
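For reference, here is a minimal sketch of the length-prefix framing described above, in Qt; the 4-byte big-endian header, the function names, and the rx_buffer variable are illustrative choices, not part of the original code:

#include <QTcpSocket>
#include <QtEndian>
#include <QDebug>

// sender side: prepend a 4-byte big-endian length to each payload
void sendFramed(QTcpSocket* socket, const QByteArray& payload)
{
    QByteArray frame(4, 0);
    qToBigEndian<quint32>(quint32(payload.size()),
                          reinterpret_cast<uchar*>(frame.data()));
    frame.append(payload);
    socket->write(frame);
}

// receiver side: accumulate bytes, emit only complete messages
static QByteArray rx_buffer;

void receiveFramed(QTcpSocket* socket)
{
    rx_buffer.append(socket->readAll());
    while (rx_buffer.size() >= 4) {
        quint32 len = qFromBigEndian<quint32>(
            reinterpret_cast<const uchar*>(rx_buffer.constData()));
        if (rx_buffer.size() < int(4 + len))
            break; // wait for the rest of this message
        QByteArray message = rx_buffer.mid(4, int(len));
        rx_buffer.remove(0, int(4 + len));
        qDebug() << "complete message:" << message;
    }
}

Hooking receiveFramed up as the readyRead handler in place of receiveMessage would reassemble each framed message regardless of how TCP splits or coalesces the bytes on the wire.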
I'm new to socket programming. I'm working with the Poco library, and I found this example online (https://pocoproject.org/slides/200-Network.pdf):
#include "Poco/Net/SocketAddress.h"
#include "Poco/Net/StreamSocket.h"
#include "Poco/Net/SocketStream.h"
#include "Poco/StreamCopier.h"
#include <iostream>
int main(int argc, char** argv)
{
Poco::Net::SocketAddress sa("www.appinf.com", 80);
Poco::Net::StreamSocket socket(sa)
Poco::Net::SocketStream str(socket);
str << "GET / HTTP/1.1\r\n"
"Host: www.appinf.com\r\n"
"\r\n";
str.flush();
Poco::StreamCopier::copyStream(str, std::cout);
return 0;
}
I understand that a socket stream is created, but I cannot understand the commands. What does the "/" after "GET" do, and what is "1.1"? Please explain what that particular line means.
This code does give me output, but how do the commands work? And is there a way to give the commands from the console? Thanks.
I'm not sure what you want to do here. Are you trying to speak HTTP or not?
As for the request line itself: in "GET / HTTP/1.1", GET is the HTTP method, "/" is the path of the requested resource (here the root of the site), and "1.1" is the protocol version.
If you are not doing HTTP, then send your own text, but don't use port 80, as that is the well-known HTTP port.
If you want to just send whatever you type over a TCP socket, then you probably could use the StreamCopier to send everything from std::cin to str, as sketched below.
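A minimal sketch of that idea, assuming you type a complete request and end console input with EOF (Ctrl-D); everything else is the original example:

#include "Poco/Net/SocketAddress.h"
#include "Poco/Net/StreamSocket.h"
#include "Poco/Net/SocketStream.h"
#include "Poco/StreamCopier.h"
#include <iostream>

int main(int argc, char** argv)
{
    Poco::Net::SocketAddress sa("www.appinf.com", 80);
    Poco::Net::StreamSocket socket(sa);
    Poco::Net::SocketStream str(socket);

    // forward everything typed on the console to the socket...
    Poco::StreamCopier::copyStream(std::cin, str);
    str.flush();

    // ...then print whatever the server sends back
    Poco::StreamCopier::copyStream(str, std::cout);
    return 0;
}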
I'm in trouble with ZeroMQ and IPv6. When I use a connection over IPv4, or if I use "tcp://[::1]:5558", it connects like a charm. However, if I use the server's full IPv6 address (on my local host or a remote host), it connects but no data arrives at the other endpoint.
Here is my code sample:
client.cpp
#include <stdio.h>
#include <zmq.h>

int main(int argc, char** argv)
{
    void* context = zmq_ctx_new();
    void* socket = zmq_socket(context, ZMQ_SUB);

    int ipv6 = 1;
    zmq_setsockopt(socket, ZMQ_IPV6, &ipv6, sizeof(int));

    zmq_connect(socket, "tcp://[fe80::52e5:49ff:fef8:dbc6]:5558");
    //zmq_connect(socket, "tcp://[::1]:5558");
    zmq_setsockopt(socket, ZMQ_SUBSCRIBE, "pub", 3);

    zmq_msg_t message;
    int more;
    do {
        zmq_msg_init(&message);
        zmq_msg_recv(&message, socket, 0);
        printf("%.*s\n", (int)zmq_msg_size(&message), (char*)zmq_msg_data(&message));
        more = zmq_msg_more(&message); // must be checked before closing
        zmq_msg_close(&message);
    } while (more);
}
And server.cpp
#include <string.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <zmq.h>

int main(int argc, char** argv)
{
    void* context = zmq_ctx_new();
    void* publisher = zmq_socket(context, ZMQ_PUB);

    int ipv6 = 1;
    zmq_setsockopt(publisher, ZMQ_IPV6, &ipv6, sizeof(int));
    zmq_bind(publisher, "tcp://*:5558");

    char buffer[4] = "pub";
    unsigned tries = 0;
    while (tries < 10) {
        zmq_send(publisher, buffer, strlen(buffer), 0);
        tries++;
        sleep(1);
    }
    return 0;
}
I'm using ZeroMQ 4.0.0 RC, but the same thing happens on 3.2. I'm on Linux (Slackware) and installed it from source. I also tested with a Java server using JeroMQ, and the problem is the same; another test with a REQ-REP connection shows the same behavior.
Thanks in advance for any help.
fe80:: addresses are link-local, so you must also specify the local host's interface name (the zone index) in the address, e.g. fe80::52e5:49ff:fef8:dbc6%eth1.
fe80::/10 — Addresses in the link-local prefix are only valid and unique on a single link. Within this prefix only one subnet is allocated (54 zero bits), yielding an effective format of fe80::/64. The least significant 64 bits are usually chosen as the interface hardware address constructed in modified EUI-64 format. A link-local address is required on every IPv6-enabled interface—in other words, applications may rely on the existence of a link-local address even when there is no IPv6 routing. These addresses are comparable to the auto-configuration addresses 169.254.0.0/16 of IPv4.
http://en.wikipedia.org/wiki/IPv6_address#Local_addresses
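Applied to the client above, the connect call would look like this; eth1 is a placeholder for whatever your actual interface is called, and this assumes your libzmq build accepts a zone index in the endpoint:

// append the zone index (interface name) to the link-local address
zmq_connect(socket, "tcp://[fe80::52e5:49ff:fef8:dbc6%eth1]:5558");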
I'm working on an application that needs to perform network communication, and I decided to use the POCO C++ libraries. After going through the network tutorial, I can't seem to find any form of validation for establishing a network connection.
In the following example, a client tries to connect to a server using a TCP socket stream:
#include "Poco/Net/SocketAddress.h"
#include "Poco/Net/StreamSocket.h"
#include "Poco/Net/SocketStream.h"
#include "Poco/StreamCopier.h"
#include <iostream>
int main(int argc, char** argv)
{
Poco::Net::SocketAddress sa("www.appinf.com", 80);
Poco::Net::StreamSocket socket(sa);
Poco::Net::SocketStream str(socket);
str << "GET / HTTP/1.1\r\n"
"Host: www.appinf.com\r\n"
"\r\n";
str.flush();
Poco::StreamCopier::copyStream(str, std::cout);
return 0;
}
However, I couldn't find any information related to:
Error checking (what if www.appinf.com is unavailable or doesn't exist, for that matter)
The type of exceptions these calls may raise
The only mention is that a SocketStream may hang if the receive timeout is not set for the socket when using formatted input.
How can I check whether a host is alive and a TCP connection can be set up, i.e., implement a method such as this:
void TCPClient::connectTo(std::string host, bool& connected, unsigned int port) {
    std::string hi = "hi";
    Poco::Net::SocketAddress clientSocketAddress(host, port);
    Poco::Net::StreamSocket clientStreamSocket;

    // try to connect and avoid a hang by setting a timeout
    Poco::Timespan timeout(2, 0); // e.g. 2 seconds
    clientStreamSocket.connect(clientSocketAddress, timeout);

    // check if the connection has failed or not,
    // set the connected parameter accordingly

    // additionally try to send bytes over this connection
    Poco::Net::SocketStream clientSocketStream(clientStreamSocket);
    clientSocketStream << hi << std::endl;
    clientSocketStream.flush();

    // close the socket stream
    clientSocketStream.close();
    // shut down the socket
    clientStreamSocket.shutdown();
}
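For what it's worth, a minimal sketch of the kind of error handling being asked about: POCO reports failures via exceptions derived from Poco::Exception (Poco::TimeoutException for a connect timeout, and Poco::Net::NetException subclasses such as ConnectionRefusedException and HostNotFoundException for most network errors). The function name and the 2-second timeout are illustrative:

#include "Poco/Net/SocketAddress.h"
#include "Poco/Net/StreamSocket.h"
#include "Poco/Net/NetException.h"
#include "Poco/Timespan.h"
#include <iostream>
#include <string>

bool tryConnect(const std::string& host, unsigned short port)
{
    try {
        // name resolution can already throw (e.g. HostNotFoundException)
        Poco::Net::SocketAddress address(host, port);

        Poco::Net::StreamSocket socket;
        socket.connect(address, Poco::Timespan(2, 0)); // 2-second timeout
        socket.shutdown();
        return true;
    }
    catch (const Poco::TimeoutException& e) {
        std::cerr << "connect timed out: " << e.displayText() << std::endl;
    }
    catch (const Poco::Net::NetException& e) {
        std::cerr << "network error: " << e.displayText() << std::endl;
    }
    catch (const Poco::Exception& e) {
        std::cerr << "error: " << e.displayText() << std::endl;
    }
    return false;
}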