I'm trying to use the Boost.Beast HTTP library for an HTTP client. It works without any issues against a simulated server, but when I connect to the real server, boost::beast::http::read throws an exception saying "partial message".
I've been working on this issue for a couple of days now but can't figure out why. Until now I've been using a different HTTP client library, and the server communication has worked without any similar issues.
I'd be grateful for any idea or hint as to why this is happening, and why it doesn't seem to be an issue when using a different library.
boost::beast::http::read throws an exception saying "partial message".
This happens because the message being parsed wasn't complete. A typical cause is a wrong Content-Length header, or the sender abandoning the connection prematurely. E.g.:
Live On Compiler Explorer
This is what http::[async_]read ends up doing under the hood, but without the network related stuff:
#include <iostream>
#include <iomanip>
#include <string_view>
#include <boost/beast/http.hpp>

int main() {
    using namespace boost::beast::http;
    using boost::asio::buffer;

    for (std::string_view buf : {
             "GET / HTTP/1.1\r\n", // incomplete headers
             "GET / HTTP/1.1\r\nHost: example.com\r\nContent-Length: 0\r\n\r\ntrailing data",
             "GET / HTTP/1.1\r\nHost: example.com\r\nContent-Length: 42\r\n\r\nshort",
         })
    {
        //std::cout << std::quoted(buf) << "\n";
        std::cout << "---------------------" << "\n";
        request_parser<string_body> parser;

        boost::system::error_code ec;
        size_t n = parser.put(buffer(buf), ec);

        if (n && !ec && !parser.is_done()) {
            buf.remove_prefix(n);
            n = parser.put(buffer(buf), ec); // body
        }

        if (!ec)
            parser.put_eof(ec);

        buf.remove_prefix(n);

        std::cout
            << (parser.is_header_done() ? "headers ok" : "incomplete headers")
            << " / " << (parser.is_done() ? "done" : "not done")
            << " / " << ec.message() << "\n";

        if (parser.is_header_done() && !parser.is_done())
            std::cout << parser.content_length_remaining().value_or(0)
                      << " more content bytes expected\n";
        if (!buf.empty())
            std::cout << "Remaining buffer: " << std::quoted(buf) << "\n";
    }
}
Prints
---------------------
incomplete headers / not done / need more
---------------------
headers ok / done / Success
Remaining buffer: "trailing data"
---------------------
headers ok / not done / partial message
37 more content bytes expected
If you're not passing an error_code to your calls, they will throw a system_error exception carrying the same code, which is exactly what you see.
Side Note
If another library doesn't have this "problem" there are two options:
the library is sloppy (i.e. bad)
you're using it wrong (maybe you're not checking for errors)
Related
I have a program that uses the modbus protocol to send chunks of data between a 64-bit Raspberry Pi 4 (running Raspberry Pi OS 64) and a receiving computer. My intended setup for the serial port is baud rate of 57600, 8 data bits, two stop bits, no flow control, and no parity. I have noticed that the data is only properly interpreted when the receiving computer is set to view one stop bit and no parity, regardless of the settings on the Raspberry Pi.
What is interesting is this program works as expected when run on Windows, only the Pi has caused problems at the moment. This was originally seen in ASIO 1.20 and can still be reproduced in 1.24 on the Pi.
I wrote a minimal example that reproduces the issue for me on the Pi:
#include <asio.hpp>
#include <asio/serial_port.hpp>
#include <array>
#include <iostream>

int main() {
    asio::io_service ioService;
    asio::serial_port serialPort(ioService, "/dev/serial0");

    serialPort.set_option(asio::serial_port_base::baud_rate(57600));
    serialPort.set_option(asio::serial_port_base::character_size(8));
    serialPort.set_option(asio::serial_port_base::stop_bits(asio::serial_port_base::stop_bits::two));
    serialPort.set_option(asio::serial_port_base::flow_control(asio::serial_port_base::flow_control::none));
    serialPort.set_option(asio::serial_port_base::parity(asio::serial_port_base::parity::none));

    std::string test("Test#");
    asio::write(serialPort, asio::buffer(test.data(), test.size()));

    std::array<char, 5> buf;
    asio::read(serialPort, asio::buffer(buf.data(), buf.size()));
    std::cout << "Received: " << std::string(std::begin(buf), std::end(buf)) << std::endl;

    serialPort.close();
    return 0;
}
I looked closer at the issue and used a Saleae Logic Analyzer to see what data is being sent between the machines. Below you can see the expected behavior for a successful run, this is when the test is run on Windows.
Here you can see the behavior that occurs on the Raspberry Pi when it runs the test code. The analyzer fails to interpret the data using the parameters set in the code.
Below you can see that when the analyzer is set with one stop bit rather than two, it interprets the hex without an issue.
Overall you can see that the issue takes place on the Pi's end because of the responses seen in the logic analyzer. The program running on the Pi can interpret messages sent to it using the given parameters without any issue, however when it tries to reply to those messages it seems that the ASIO port settings are not being applied.
Any insight that can be provided would be very helpful. Let me know if you need more information. Thanks for the help!
UPDATE: Ran @sehe's test code as they recommended and the results are as follows:
baud_rate: Success
character_size: Success
stop_bits: Success
flow_control: Success
parity: Success
parity: 0 (Success)
flow_control: 0 (Success)
stop_bits: 0 (Success)
character_size: 8 (Success)
baud_rate: 57600 (Success)
ModbusTest: Main.cpp:37: int main(): Assertion `sb.value() == serial_port::stop_bits::two' failed.
It appears that the setting for stop bits did not successfully apply, but rather failed silently. Any ideas on how to proceed with further debugging?
UPDATE 2: Also wanted to mention that I ran minicom with the same hardware setup and was able to communicate without issue using two stop bits.
Very solid debugging and analysis info.
I don't immediately see something wrong with the code. My intuition was to separate construction from open(), so the options could be set prior to opening, but it turns out that just doesn't work.
So maybe you can verify that the set_option calls had their desired effect. I can imagine hardware limitations that don't allow certain configurations.
This should definitely uncover any unexpected behavior:
Live On Coliru
//#undef NDEBUG
#include <boost/asio.hpp>
#include <boost/asio/serial_port.hpp>
#include <array>
#include <cassert>
#include <iostream>
namespace asio = boost::asio;
using asio::serial_port;
using boost::system::error_code;

int main() {
    asio::io_service ioService;
    asio::serial_port sp(ioService);
    sp.open("/dev/serial0");

    serial_port::baud_rate      br{57600};
    serial_port::character_size cs{8};
    serial_port::stop_bits      sb{serial_port::stop_bits::two};
    serial_port::flow_control   fc{serial_port::flow_control::none};
    serial_port::parity         pb{serial_port::parity::none};

    error_code ec;
    if (!ec) { sp.set_option(br, ec); std::cout << "baud_rate: "      << ec.message() << std::endl; }
    if (!ec) { sp.set_option(cs, ec); std::cout << "character_size: " << ec.message() << std::endl; }
    if (!ec) { sp.set_option(sb, ec); std::cout << "stop_bits: "      << ec.message() << std::endl; }
    if (!ec) { sp.set_option(fc, ec); std::cout << "flow_control: "   << ec.message() << std::endl; }
    if (!ec) { sp.set_option(pb, ec); std::cout << "parity: "         << ec.message() << std::endl; }

    sp.get_option(pb, ec); std::cout << "parity: "         << pb.value() << " (" << ec.message() << ")" << std::endl;
    sp.get_option(fc, ec); std::cout << "flow_control: "   << fc.value() << " (" << ec.message() << ")" << std::endl;
    sp.get_option(sb, ec); std::cout << "stop_bits: "      << sb.value() << " (" << ec.message() << ")" << std::endl;
    sp.get_option(cs, ec); std::cout << "character_size: " << cs.value() << " (" << ec.message() << ")" << std::endl;
    sp.get_option(br, ec); std::cout << "baud_rate: "      << br.value() << " (" << ec.message() << ")" << std::endl;

    assert(br.value() == 57600);
    assert(cs.value() == 8);
    assert(sb.value() == serial_port::stop_bits::two);
    assert(fc.value() == serial_port::flow_control::none);
    assert(pb.value() == serial_port::parity::none);

    std::string test("Test#");
    write(sp, asio::buffer(test));

    std::array<char, 5> buf;
    auto n = read(sp, asio::buffer(buf));
    std::cout << "Received: " << std::string(buf.data(), n) << std::endl;
}
Which on my system (Ubuntu host, using /dev/ttyS0) prints e.g.
baud_rate: Success
character_size: Success
stop_bits: Success
flow_control: Success
parity: Success
parity: 0 (Success)
flow_control: 0 (Success)
stop_bits: 2 (Success)
character_size: 8 (Success)
baud_rate: 57600 (Success)
As expected
I was able to discover the cause and fix the problem!
I am using a Raspberry Pi 4 for this project and interfacing with GPIO pins 14/15 to use /dev/serial0. With the default configuration /dev/serial0 maps to /dev/ttyS0 which is a mini UART and is not capable of using multiple stop bits, etc.
Disabling Bluetooth sets the symlink to map to /dev/ttyAMA0 which is a full UART and is capable of parity and multiple stop bits.
In /boot/config.txt I added the following lines:
[all]
dtoverlay=disable-bt
If you are experiencing a similar problem with /dev/serial0, this may be worth a shot.
I'm trying to learn the Boost.Asio networking library for C++ by watching this video, but I got stuck at making a request asynchronously using threads.
The code :
#include "stdafx.h"
#include <iostream>
#include <chrono>
#include <thread>
#include <vector>
#include <boost/asio.hpp>
#include <boost/asio/ts/buffer.hpp>
#include <boost/asio/ts/internet.hpp>
#include <boost/system/error_code.hpp>

std::vector<char> vBuffer(20 * 1024);

void GrabSomeData(boost::asio::ip::tcp::socket& socket) {
    socket.async_read_some(boost::asio::buffer(vBuffer.data(), vBuffer.size()),
        [&](std::error_code ec, std::size_t length)
        //boost::system::error_code ec
        {
            if (!ec)
            {
                std::cout << "\n\nRead " << length << " bytes\n\n";
                for (std::size_t i = 0; i < length; i++)
                    std::cout << vBuffer[i];

                GrabSomeData(socket);
            }
        });
}

int main()
{
    boost::system::error_code ec;
    boost::asio::io_context context;
    boost::asio::io_context::work idleWork(context);

    boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::make_address("13.107.21.200", ec), 80);
    boost::asio::ip::tcp::socket socket(context);

    std::thread thrContext = std::thread([&]() { context.run(); });

    std::cout << "Starting " << std::endl;
    socket.connect(endpoint, ec);

    if (!ec)
    {
        std::cout << "Connected ! " << std::endl;
    }
    else {
        std::cout << "Fail to connect ! " << ec.message() << std::endl;
    }

    if (socket.is_open()) {
        GrabSomeData(socket);

        std::string sRequest =
            "GET /index.html HTTP/1.1\r\n"
            "Host: www.example.com\r\n"
            "Connection: close\r\n\r\n";

        socket.write_some(boost::asio::buffer(sRequest.data(), sRequest.size()), ec);

        using namespace std::chrono_literals;
        std::this_thread::sleep_for(2800ms);
        //std::this_thread::sleep_for(1ms);

        context.stop();
        if (thrContext.joinable()) thrContext.join();
    }

    system("pause");
    return 0;
}
Microsoft Visual Studio gives me this:
Error C2752 'asio_prefer_fn::call_traits<boost::asio::execution::any_executor<boost::asio::execution::context_as_t<boost::asio::execution_context &>,boost::asio::execution::detail::blocking::never_t<0>,boost::asio::execution::prefer_only<boost::asio::execution::detail::blocking::possibly_t<0>>,boost::asio::execution::prefer_only<boost::asio::execution::detail::outstanding_work::tracked_t<0>>,boost::asio::execution::prefer_only<boost::asio::execution::detail::outstanding_work::untracked_t<0>>,boost::asio::execution::prefer_only<boost::asio::execution::detail::relationship::fork_t<0>>,boost::asio::execution::prefer_only<boost::asio::execution::detail::relationship::continuation_t<0>>> &,void (const boost::asio::execution::detail::blocking::possibly_t<0> &,boost::asio::execution::allocator_t<std::allocator<void>>),void>': more than one partial specialization matches the template argument list boostasiotest c:\boost\boost_1_75_0\boost\asio\detail\handler_work.hpp 353
Error C2893 Failed to specialize function template 'enable_if<asio_prefer_fn::call_traits<T,void(P0,P1,PN...),void>::overload==,call_traits<T,void(P0,P1,PN...),void>::result_type>::type asio_prefer_fn::impl::operator ()(T &&,P0 &&,P1 &&,PN &&...) noexcept(<expr>) const' boostasiotest c:\boost\boost_1_75_0\boost\asio\detail\handler_work.hpp 353
Everything worked fine until I added the GrabSomeData function, and I have absolutely no idea how to fix it; any help would be appreciated.
PS: there is an example on the Boost website on this subject, but it is object-oriented and all the pointers refer to the class, so I (think) it can't help.
Like the commenter, I cannot repro your message: it just compiles,
MSVC 19, /std:c++14, Boost 1.75.0: compiler explorer
Now, I do see other issues:
write_some may not write all the data - you will want to ensure a composed-write operation
a race condition: since you're doing GrabSomeData on a thread, you need to synchronize access to the tcp::socket and buffer (the shared resources).
io_context itself is thread-safe.
In this case, it's really easy to avoid, since you don't need to
start the async operation until after you sent the request:
write(socket, boost::asio::buffer(sRequest));
GrabSomeData(socket);
async_read_some has a similar problem to the write side. You will want a composed read operation that reads the expected output: read_until(socket, buf, "\r\n\r\n"), then read however much content is expected based on the Content-Length header, Connection: close, and so on (think chunked encoding).
You currently have no good way to store and access the response. It would be a lot easier to use a streambuf, a single composed read.
If you want to be really solid, use Beast to receive an HTTP/1.1 response (which can even be chunked) and not worry about when it's complete (the library does it for you):
auto GrabSomeData(tcp::socket& socket) {
    http::response<http::string_body> res;
    auto buf = boost::asio::dynamic_buffer(vBuffer);
    http::read(socket, buf, res);
    return res;
}
Oh, and don't do it on a thread (why was that, anyway? It literally just introduced undefined behavior for no gain):
Simplified Code
Live On Coliru
Compiles on MSVC: Godbolt
#include <boost/asio.hpp>
#include <boost/beast/http.hpp>
#include <iostream>
#include <iomanip>
#include <vector>
using boost::asio::ip::tcp;

std::vector<char> vBuffer; // TODO FIXME global variable

auto GrabSomeData(tcp::socket& socket) {
    namespace http = boost::beast::http;
    http::response<http::string_body> res;
    auto buf = boost::asio::dynamic_buffer(vBuffer);
    http::read(socket, buf, res);
    return res;
}

int main() try {
    boost::asio::io_context context;
    std::cout << "Starting " << std::endl;

    tcp::endpoint endpoint(boost::asio::ip::make_address("13.107.21.200"), 80);
    tcp::socket   socket(context);
    socket.connect(endpoint);
    std::cout << "Connected" << std::endl;

    std::string const sRequest = "GET /index.html HTTP/1.1\r\n"
                                 "Host: www.example.com\r\n"
                                 "Connection: close\r\n"
                                 "\r\n";
    write(socket, boost::asio::buffer(sRequest));

    auto response = GrabSomeData(socket);

    std::cout << "Response body length: " << response.body().size() << std::endl;
    std::cout << "Response headers: " << response.base() << std::endl;
    std::cout << "Response body: " << std::quoted(response.body()) << std::endl;

    context.run(); // run_for(10s) e.g.
} catch (boost::system::system_error const& se) {
    std::cerr << "Error: " << se.code().message() << std::endl;
}
This sample printed:
Starting
Connected
Response body length: 194
Response headers: HTTP/1.1 400 Bad Request
Transfer-Encoding: chunked
X-MSEdge-Ref: 0BstXYAAAAACeQ2y+botzQISiBe2U3iGCQ0hHRURHRTE0MDgARWRnZQ==
Date: Sun, 21 Mar 2021 22:39:02 GMT
Connection: close
Response body: "<h2>Our services aren't available right now</h2><p>We're working to restore all services as soon as possible. Please check back soon.</p>0BstXYAAAAACeQ2y+botzQISiBe2U3iGCQ0hHRURHRTE0MDgARWRnZQ=="
NOTE It does indeed use chunked encoding, which you didn't seem to be anticipating.
Thanks @sehe for the answer and recommendations. It took a while, but I've upgraded to a fresh Windows 10, Visual Studio 2019, and Boost 1.76, and the problem is solved!
Those errors were totally irrelevant to the code!
I'm new to C++ and Boost.Asio. I wanted to use Boost.Asio to make a simple HTTP GET request, with OAuth2 (bearer token), to a website in my program, but it doesn't work and I don't know why. I have tried libcurl and a separate HTTP request in CLion, and they worked well without any problems.
#include "netCommon/net_Common.h"
#include <chrono>
#include <iostream>
#include <sstream>
#include <thread>
#include <vector>
using namespace boost;
typedef asio::ip::tcp ip;

int main() {
    boost::system::error_code ec;
    asio::io_service ios;

    // The website
    std::string host_name = "api-fxpractice.oanda.com";
    std::string port      = "443";

    // Resolve DNS
    ip::resolver resolver_dns(ios);
    ip::resolver::query query_dns(host_name, port, asio::ip::tcp::resolver::numeric_service);
    ip::resolver::iterator it = resolver_dns.resolve(query_dns, ec);

    // Create a socket
    ip::socket sock(ios);

    // Connect to endpoint
    asio::connect(sock, it);
    if (ec) {
        std::cout << " Failed to connect: " << ec.message() << std::endl;
        return ec.value();
    }
    std::cout << "Connected successfully !!! " << std::endl;

    if (sock.is_open()) {
        // Create a stream buffer
        std::stringstream request_stream;
        request_stream << "GET https://api-fxpractice.oanda.com/v3/accounts HTTP/1.1\r\n"
                          "Authorization: Bearer <my token>\r\n"
                          "Connection: close\r\n\r\n";
        // Tried to change to Host: https://api-fxpractice.oanda.com, doesn't work either

        const auto request = request_stream.str();
        asio::write(sock, asio::buffer(request));

        using namespace std::chrono_literals;
        std::this_thread::sleep_for(2000ms);

        size_t bytes = sock.available();
        if (bytes > 0) {
            std::vector<char> vBuffer(bytes);
            std::cout << "Have something to read !" << std::endl;
            asio::read(sock, asio::buffer(vBuffer));
            for (auto c : vBuffer) {
                std::cout << c;
            }
        }
    }
}
It connects to the site, but I can't get any data.
It keeps sending me either "HTTP/1.1 400 Bad Request" or "The plain HTTP request was sent to HTTPS port".
I'm not sure what I'm doing wrong here. :(
Thank you guys so much.
I'm catching errors in a Boost.Asio program like this:
if (!error)
{
    //do stuff
}
else
{
    std::cout << "Error : " << error << std::endl;
    //handle error
}
But the error isn't human-readable (e.g. connecting to an SSL server without a certificate gives the error asio.ssl:335544539). Is there a better way to display the error?
You are likely using boost::system::error_code, so you can call:
error.message()
to get a more human-friendly message.
Using operator<< translates into:
os << ec.category().name() << ':' << ec.value()
Here you can check a detailed overview of the available members in error_code.
What is the meaning of boost::asio::placeholders::bytes_transferred in async_read_until()? In the callback function it reports a smaller value than streambuf.size(). The streambuf was empty before the callback. To sum up, bytes_transferred is not the actual number of bytes that went through the socket, but less. Have I misunderstood all of this, or what?
EDIT: I read the following protocol from a socket:
Y43,72,0,,91009802000000603=0000000000000000000
"Y43," - is the header.
"Y" - is message type.
"43" - additional bytes to read
"," - delimiter. The header is the until the first "," encountered.
My code for reading looks like this:
void handle_write(const boost::system::error_code& error,
                  size_t bytes_transferred)
{
    if (!error)
    {
        boost::asio::async_read_until(
            socket_,
            inputStreamBuffer_,
            ',',
            boost::bind(
                &client::handle_read1, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }
    else
    {
        std::cout << "Write failed: " << error << "\n";
    }
}

void handle_read1(const boost::system::error_code& error,
                  size_t bytes_transferred)
{
    cout << "bytes_transferred=" << bytes_transferred << endl;

    if (!error)
    {
        cout << "0 size=" << inputStreamBuffer_.size() << endl;

        istream is(&inputStreamBuffer_);
        char c[1000];
        is.read(c, bytes_transferred);
        c[bytes_transferred] = 0;

        for (size_t i = 0; i < bytes_transferred; ++i)
        {
            cout << dec << "c[" << i << "]=" << c[i]
                 << " hex=" << hex << static_cast<int>(c[i]) << "#" << endl;
        }
    }
    else
    {
        std::cout << "Read failed: " << error << "\n";
    }
}
For stream sent from the other side:
Y43,71,0,,91009802000000595=0000000000000000000
Sometimes, I read this:
bytes_transferred=4
0 size=47
c[0]=Y hex=59#
c[1]=4 hex=34#
c[2]=3 hex=33#
c[3]=, hex=2c#
For stream sent from the other side:
Y43,72,0,,91009802000000603=0000000000000000000
But other times, I read this:
bytes_transferred=7
0 size=47
c[0]= hex=0#
c[1]= hex=0#
c[2]= hex=0#
c[3]= hex=0#
c[4]=7 hex=37#
c[5]=2 hex=32#
c[6]=, hex=2c#
The socket is secured with SSL, and the client and server apps are slightly modified examples from boost_asio/example/ssl/* .
In the second example I lose the entire header :(
There are four overloads of the function, but let's assume the first one is used. If you look at the documentation, you'll see that bytes_transferred is the number of bytes up to and including the specified delimiter.
And furthermore:
After a successful async_read_until operation, the streambuf may contain additional data beyond the delimiter. An application will typically leave that data in the streambuf for a subsequent async_read_until operation to examine.
Resolved. I was passing a std::string object to boost::asio::buffer() instead of std::string::c_str() when sending the reply from the server.
As the docs suggest, you should be able to ignore anything beyond bytes_transferred and just call async_read_until again.
However if you happen to be using the all-new SSL implementation in ASIO 1.5.3 (which is not officially part of boost yet), you might run into the same issue I did (for which I submitted a patch):
http://comments.gmane.org/gmane.comp.lib.boost.asio.user/4803
It doesn't look like you're using the new version or running into the same problem, but it's something to be aware of if you hit some limitations and are tempted by the advantages of the new implementation:
The new implementation compiles faster, shows substantially improved performance, and supports custom memory allocation and handler invocation. It includes new API features such as certificate verification callbacks and has improved error reporting. The new implementation is source-compatible with the old for most uses.