In an Elastic Beanstalk (EB) environment I am getting these errors from worker instances, which poll the SQS queues through the AWS API to check for new messages. This started 4-5 days ago and now recurs roughly every hour. Is it due to some network issue on the AWS side?
Update: There are no issues in the Personal Health Dashboard (PHD) affecting the region where this is deployed.
An exception of type HttpErrorResponseException was handled in ErrorHandler.Amazon.Runtime.Internal.HttpErrorResponseException: The remote server returned an error: (503) Server Unavailable. ---> System.Net.WebException: The remote server returned an error: (503) Server Unavailable.
at System.Net.HttpWebRequest.GetResponse()
at Amazon.Runtime.Internal.HttpRequest.GetResponse() in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\_bcl\HttpWebRequestFactory.cs:line 106
--- End of inner exception stack trace ---
at Amazon.Runtime.Internal.HttpRequest.GetResponse() in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\_bcl\HttpWebRequestFactory.cs:line 114
at Amazon.Runtime.Internal.HttpHandler`1.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\HttpHandler.cs:line 85
at Amazon.Runtime.Internal.PipelineHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\PipelineHandler.cs:line 55
at Amazon.Runtime.Internal.Unmarshaller.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\Handlers\Unmarshaller.cs:line 48
at Amazon.Runtime.Internal.PipelineHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\PipelineHandler.cs:line 55
at Amazon.SQS.Internal.ValidationResponseHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Services\SQS\Custom\Internal\ValidationResponseHandler.cs:line 28
at Amazon.Runtime.Internal.PipelineHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\PipelineHandler.cs:line 55
at Amazon.Runtime.Internal.ErrorHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\ErrorHandler\ErrorHandler.cs:line 72
And this:
An exception of type WebException was handled in ErrorHandler.System.Net.WebException: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
--- End of inner exception stack trace ---
at System.Net.HttpWebRequest.GetResponse()
at Amazon.Runtime.Internal.HttpRequest.GetResponse() in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\_bcl\HttpWebRequestFactory.cs:line 118
at Amazon.Runtime.Internal.HttpHandler`1.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\HttpHandler.cs:line 85
at Amazon.Runtime.Internal.PipelineHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\PipelineHandler.cs:line 55
at Amazon.Runtime.Internal.Unmarshaller.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\Handlers\Unmarshaller.cs:line 48
at Amazon.Runtime.Internal.PipelineHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\PipelineHandler.cs:line 55
at Amazon.SQS.Internal.ValidationResponseHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Services\SQS\Custom\Internal\ValidationResponseHandler.cs:line 28
at Amazon.Runtime.Internal.PipelineHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\PipelineHandler.cs:line 55
at Amazon.Runtime.Internal.ErrorHandler.InvokeSync(IExecutionContext executionContext) in E:\JenkinsWorkspaces\v3-trebuchet-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\ErrorHandler\ErrorHandler.cs:line 72
Related
I'm running the following code to create a listener on a Unix domain socket.
Under macOS this code works fine, but on Windows the acceptor produces the following error: WSAEOPNOTSUPP.
Here's a minimal reproducible example:
#include <iostream>
#include <boost/asio/local/stream_protocol.hpp>
constexpr const char* kFileName = "file.sock";
using namespace std;
using namespace boost::asio;
int main(int argc, char* argv[])
{
io_context my_io_context;
::_unlink(kFileName); // Remove previous binding.
local::stream_protocol::endpoint server(kFileName);
local::stream_protocol::acceptor acceptor(my_io_context, server);
local::stream_protocol::socket socket(my_io_context);
acceptor.accept(socket);
return 0;
}
While debugging inside the Boost library, I saw that the failure comes from the internal bind in the following code:
and these are the frame variables (it's clearly visible that sa_family = AF_UNIX (1)):
I know that Unix domain sockets were introduced in Windows 10 a few years ago, and I'm working with the latest version, so they should be supported. Any idea what's wrong with my code?
EDIT: I've found out that on a Linux-based machine I pass the following sockaddr to ::bind:
(const boost::asio::detail::socket_addr_type) *addr = (sa_len = '\0', sa_family = '\x01', sa_data = "/tmp/server.sock")
(lldb) memory read addr
0x7ffeefbffa00: 00 01 2f 74 6d 70 2f 73 65 72 76 65 72 2e 73 6f ../tmp/server.so
0x7ffeefbffa10: 63 6b 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ck..............
and on Windows I get a slightly different struct:
{sa_family=1 sa_data=0x000000fffd33f682 "C:\\temp\\UnixSo... }const sockaddr *
Notice that the sa_len field is missing on the Windows platform.
Thanks
The issue seems to be the SO_REUSEADDR socket option, which Asio sets by default. Setting the option itself succeeds, but it causes the subsequent bind to fail.
Construct the acceptor with reuse_addr = false, and the bind should then succeed:
local::stream_protocol::acceptor acceptor(my_io_context, server, false);
I've written a little program which, in a loop, first sends a POST request to a special machine; the machine returns another API path in its response header, and afterwards I can execute a GET request on that API path to get an extremely long string (about 15,000 characters). This works pretty fine - the string is returned and I can print it afterwards. But then something really strange happens: after each print of the string, my ESP seems to restart for some reason. Please see the output:
005 03.07. 08:25
========================
ENDE NC NL
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x4015c44c PS : 0x00060430 A0 : 0x800d30f2 A1 : 0x3ffb1dc0
A2 : 0x3ffb1f10 A3 : 0x00000000 A4 : 0x00003de8 A5 : 0x00003de8
A6 : 0x3ffd9098 A7 : 0x00003de8 A8 : 0x800d5105 A9 : 0x3ffb1da0
A10 : 0x3ffd7938 A11 : 0x3f4015a2 A12 : 0x00000002 A13 : 0x0000ff00
A14 : 0x00ff0000 A15 : 0xff000000 SAR : 0x0000000a EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000010 LBEG : 0x400014fd LEND : 0x4000150d LCOUNT : 0xffffffff
Backtrace: 0x4015c44c:0x3ffb1dc0 0x400d30ef:0x3ffb1de0 0x400d3161:0x3ffb1e00 0x400d1ae2:0x3ffb1e20 0x400d6301:0x3ffb1fb0 0x40088b9d:0x3ffb1fd0
Rebooting...
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1044
load:0x40078000,len:8896
load:0x40080400,len:5816
entry 0x400806ac
Connecting
ENDE NC NL is the last line of the string; each line is at most 24 characters long and ends with \n (LF). The program starts with Connecting.
If I do not print the string directly, but first do some string "manipulation" operations and then try to print the manipulation result, the Guru Meditation Error is shown for the first 2 or 3 rounds, then suddenly the output changes to the following:
Connecting
..
Connected to WiFi network with IP Address: 192.168.200.244
Timer set to 5 seconds (timerDelay variable), it will take 5 seconds before publishing the first reading.
POST-Request to: https://192.168.200.101/api/vdai-daten
HTTP Response code: 201
Header Payload: /api/vdai-daten/vdai-daten.txt
=> GET-Request to: https://192.168.200.101/api/vdai-daten/vdai-daten.txt
2nd HTTP Response code: 200
Guru Meditation Error: Core 1 panic'ed (Unhandled debug exception)
Debug exception reason: Stack canary watchpoint triggered (loopTask)
Core 1 register dump:
PC : 0x400d55b4 PS : 0x00060236 A0 : 0x800d57e9 A1 : 0x3ffaf5c0
A2 : 0x3ffb005c A3 : 0x00000025 A4 : 0x00000000 A5 : 0x0000ff00
A6 : 0x00ff0000 A7 : 0xff000000 A8 : 0x80000000 A9 : 0x3ffaf5a0
A10 : 0x3ffb0050 A11 : 0x3f4016d2 A12 : 0x00000001 A13 : 0x3ffb1cc0
A14 : 0x00000000 A15 : 0x3ffb0060 SAR : 0x0000000a EXCCAUSE: 0x00000001
EXCVADDR: 0x00000000 LBEG : 0x4000c46c LEND : 0x4000c477 LCOUNT : 0x00000000
Backtrace: 0x400d55b4:0x3ffaf5c0 0x400d57e6:0x3ffaf5e0 0x400d1afd:0x3ffaf600 0x400d63e9:0x3ffb1fb0 0x40088b9d:0x3ffb1fd0
Rebooting...
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1044
load:0x40078000,len:8896
load:0x40080400,len:5816
entry 0x400806ac
Connecting
..
Connected to WiFi network with IP Address: 192.168.200.244
Timer set to 5 seconds (timerDelay variable), it will take 5 seconds before publishing the first reading.
POST-Request to: https://192.168.200.101/api/vdai-daten
HTTP Response code: 201
Header Payload: /api/vdai-daten/vdai-daten.txt
=> GET-Request to: https://192.168.200.101/api/vdai-daten/vdai-daten.txt
2nd HTTP Response code: 200
***ERROR*** A stack overflow in task loopTask has been detected.
abort() was called at PC 0x4008c69c on core 1
Backtrace: 0x4008c454:0x3ffaf430 0x4008c685:0x3ffaf450 0x4008c69c:0x3ffaf470 0x40089768:0x3ffaf490 0x4008b3cc:0x3ffaf4b0 0x4008b382:0x833dcfb1
Rebooting...
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1044
load:0x40078000,len:8896
load:0x40080400,len:5816
entry 0x400806ac
Connecting
Has anyone had something like this before, or can explain what is happening and how I can handle this 15,000-character string?
I would be really happy about every answer.
Best regards
P.S.: Finally, let me add my code... the commented-out part is the string "manipulation" part.
#include <WiFi.h>
#include <HTTPClient.h>
const char* ssid = "MyWiFiName";
const char* password = "MyWiFiPassword";
const char* serverName = "https://192.168.200.101/api/vdai-daten";
unsigned long lastTime = 0;
unsigned long timerDelay = 5000;
const char * headerKeys[] = {"HeaderKey"} ;
const size_t numberOfHeaders = 1;
String payload;
void setup() {
Serial.begin(115200);
WiFi.begin(ssid, password);
Serial.println("Connecting");
while(WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Serial.println("");
Serial.print("Connected to WiFi network with IP Address: ");
Serial.println(WiFi.localIP());
Serial.println("Timer set to 5 seconds (timerDelay variable), it will take 5 seconds before publishing the first reading.");
}
void loop() {
if ((millis() - lastTime) > timerDelay) {
//Check WiFi connection status
if(WiFi.status()== WL_CONNECTED){
HTTPClient http;
Serial.print("POST-Request to: ");
Serial.println(serverName);
http.begin(serverName);
http.collectHeaders(headerKeys, numberOfHeaders);
http.setAuthorization("Username", "Password");
http.addHeader("Content-Type", "application/x-www-form-urlencoded");
// Send HTTP POST request
int httpResponseCode = http.POST("kassierung=false");
Serial.print("HTTP Response code: ");
Serial.println(httpResponseCode);
String headerServer = http.header("HeaderKey");
Serial.print("Header Payload: ");
Serial.println(headerServer);
// Free resources
http.end();
HTTPClient http2;
String newServerName = "https://192.168.200.101" + headerServer;
Serial.print("=> GET-Request to: ");
Serial.println(newServerName);
http2.begin(newServerName.c_str());
http2.setAuthorization("Username", "Password");
httpResponseCode = http2.GET();
Serial.print("2nd HTTP Response code: ");
Serial.println(httpResponseCode);
Serial.println("=> Big string: ");
payload = http2.getString();
Serial.println(payload);
/*
//Count amount of lines
int counter = 0;
for (int i=0; i < payload.length(); i++){
if(payload.charAt(i) == '\n'){
counter++;
}
}
//Divide big string into a array, where each entry is one line
String sa[counter];
int r=0;
int t=0;
for (int i=0; i < payload.length(); i++){
if(payload.charAt(i) == '\n') {
sa[t] = payload.substring(r, i);
r=(i+1);
t++;
}
}
Serial.println("=> First line: ");
Serial.println(sa[0]);
*/
http2.end();
}
else {
Serial.println("WiFi Disconnected");
}
lastTime = millis();
}
}
It's clearly a stack overflow issue!
Either use heap-allocated (malloc'ed) memory for storing the string instead, OR increase the stack size of loopTask to about 20 KB.
I'm trying to implement a client which connects to a WebSocket (the Discord gateway, to be precise) using the websocketpp library, but I'm getting an error when I try to send a JSON payload to the server.
The code I'm using is:
//Standard C++:
#include <string>
//JSON Header (nlohmann's library):
#include <json.hpp>
//Networking Headers:
#include <websocketpp/client.hpp>
#include <websocketpp/config/asio_client.hpp>
#define WEBSOCKETPP_STRICT_MASKING
std::string token;
static websocketpp::lib::shared_ptr<boost::asio::ssl::context> on_tls_init(websocketpp::connection_hdl)
{
websocketpp::lib::shared_ptr<boost::asio::ssl::context> ctx = websocketpp::lib::make_shared<boost::asio::ssl::context>(boost::asio::ssl::context::sslv23);
ctx->set_options(boost::asio::ssl::context::default_workarounds |
boost::asio::ssl::context::no_sslv2 |
boost::asio::ssl::context::no_sslv3 |
boost::asio::ssl::context::single_dh_use);
return ctx;
}
void onMessage(websocketpp::client<websocketpp::config::asio_tls_client>* client, websocketpp::connection_hdl hdl, websocketpp::config::asio_tls_client::message_type::ptr msg)
{
//Get the payload
nlohmann::json payload = nlohmann::json::parse(msg->get_payload());
//If the op code is 'hello'
if (payload.at("op") == 10)
{
//HEARTBEAT STUFF HAS BEEN REMOVED FOR SIMPLICITY
//Create the identity JSON
nlohmann::json identity =
{
{"token", token},
{"properties", {
{"$os", "linux"},
{"$browser", "my_library"},
{"$device", "my_library"},
{"$referrer", ""},
{"$referring_domain", ""}
}},
{"compress", false},
{"large_threshold", 250},
{"shard", {0, 1}}
};
//Create the error code object
websocketpp::lib::error_code errorCode;
//Send the identity JSON
client->send(hdl, std::string(identity.dump()), websocketpp::frame::opcode::text, errorCode);
//If the request was invalid
if (errorCode) {std::cerr << "Identify handshake failed because " << errorCode.message() << std::endl;}
}
}
int main(int argc, char** argv)
{
if (argc > 1)
{
//Set the token
token = argv[1];
}
else
{
std::cout << "Error, please specify the token as an argument to this program" << std::endl;
return -1;
}
//Create the client
websocketpp::client<websocketpp::config::asio_tls_client> client;
client.set_tls_init_handler(on_tls_init);
client.init_asio();
client.set_access_channels(websocketpp::log::alevel::all);
client.set_message_handler(websocketpp::lib::bind(&onMessage, &client, websocketpp::lib::placeholders::_1, websocketpp::lib::placeholders::_2));
//Create an error object
websocketpp::lib::error_code errorCode;
//Get the connection from the gateway (usually you'd use GET for the URI, but I'm hardcoding it for simplicity)
websocketpp::client<websocketpp::config::asio_tls_client>::connection_ptr connection = client.get_connection("wss://gateway.discord.gg/?v=5&encoding=json", errorCode);
//Check for errors
if (errorCode)
{
std::cout << "Could not create an connection because " << errorCode.message() << std::endl;
}
//Connect
client.connect(connection);
//Run it
client.run();
return 0;
}
(Obviously this code is simplified and is only for connecting to the Discord gateway and sending the payload)
When I do this, I get this output in my terminal:
[2016-09-24 16:36:47] [connect] Successful connection
[2016-09-24 16:36:48] [connect] WebSocket Connection 104.16.60.37:443 v-2 "WebSocket++/0.7.0" /?v=5&encoding=json 101
[2016-09-24 16:36:48] [frame_header] Dispatching write containing 1 message(s) containing 8 header bytes and 238 payload bytes
[2016-09-24 16:36:48] [frame_header] Header Bytes:
[0] (8) 81 FE 00 EE C3 58 3C 0C
[2016-09-24 16:36:48] [frame_payload] Payload Bytes:
[0] (238) [1] �z_c�(Ni�+6�9P�t`�*[i�,T~�+Tc�<6�m
�(Nc�=Nx�=O.�#(�*S{�=N.�zQu�4Un�9Nu�t(�=Je�=6�5ES�1^~�*E.�zc�z.�1Ry�z.�*Yj�*Ni�z.�t(�=Zi�*Ub�Xc�9Ub�b.�t�9Nh�bg<�ia �,Sg�66�VI�xg�Fg�hU�VI�VU�v�+w{�hW;�KM�<RA� ch�f8�
Mv�k8�%
[2016-09-24 16:36:49] [control] Control frame received with opcode 8
[2016-09-24 16:36:49] [frame_header] Dispatching write containing 1 message(s) containing 6 header bytes and 31 payload bytes
[2016-09-24 16:36:49] [frame_header] Header Bytes:
[0] (6) 88 9F 07 DD CB 5A
[2016-09-24 16:36:49] [frame_payload] Payload Bytes:
[0] (31) [8] 08 7F 8E 28 75 B2 B9 7A 70 B5 A2 36 62 FD AF 3F 64 B2 AF 33 69 BA EB 2A 66 A4 A7 35 66 B9 E5
[2016-09-24 16:36:49] [error] handle_read_frame error: websocketpp.transport:8 (TLS Short Read)
[2016-09-24 16:36:49] [disconnect] Disconnect close local:[1006,TLS Short Read] remote:[4002,Error while decoding payload.]
EDIT:
After doing some research, it appears the error is caused by the gateway rejecting the request, so I assume websocketpp isn't correctly encoding the JSON (or is encoding it into the wrong format).
I figured out my problem. Using the Discordia source code as a reference, I discovered I was creating the JSON incorrectly, so I changed this code:
{"token", token},
{"properties", {
{"$os", "linux"},
{"$browser", "my_library"},
{"$device", "my_library"},
{"$referrer", ""},
{"$referring_domain", ""}
}},
{"compress", false},
{"large_threshold", 250},
{"shard", {0, 1}}
to:
{"op", 1},
{"d", {
{"token", token},
{"properties", {
{"$os", "linux"},
{"$browser", "orfbotpp"},
{"$device", "orfbotpp"},
{"$referrer", ""},
{"$referring_domain", ""}
}},
{"compress", false},
{"large_threshold", 250},
{"shard", {0, 1}}
}}
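For reference, with a placeholder token, the corrected identity object above serializes to roughly the following JSON (note that nlohmann's `{0, 1}` initializer for "shard" becomes a JSON array):

```json
{
  "op": 1,
  "d": {
    "token": "YOUR_TOKEN_HERE",
    "properties": {
      "$os": "linux",
      "$browser": "orfbotpp",
      "$device": "orfbotpp",
      "$referrer": "",
      "$referring_domain": ""
    },
    "compress": false,
    "large_threshold": 250,
    "shard": [0, 1]
  }
}
```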
I have two servers (EC2 instances). On one server (server 1) I have 3 batch jobs, and on the other (server 2) I have 4 batch jobs. Now, one of the batches on server 2 must be executed only after the successful execution of a batch on server 1.
Updated:
Promise<Void> r12 = null;
new TryCatchFinally() {
    @Override
    protected void doTry() throws Throwable {
        // First server job sequencing
        Promise<Void> r11 = client1.b1();
        r12 = client1.b2(r11);
        Promise<Void> r13 = client1.b3(r12);
        Promise<Void> r14 = client1.b4(r13);
    }
    @Override
    protected void doCatch(Throwable e) throws Throwable {
        System.out.println("Failed to execute commands in server 1");
    }
    @Override
    protected void doFinally() throws Throwable {
        // cleanup
    }
};
new TryCatchFinally() {
    @Override
    protected void doTry() throws Throwable {
        // Second server job sequencing
        Promise<Void> r21 = client2.b1();
        // Will execute only when both parameters are ready
        Promise<Void> r22 = client2.b2(r21, r12);
        Promise<Void> r23 = client2.b3(r22);
        Promise<Void> r24 = client2.b4(r23);
    }
    @Override
    protected void doCatch(Throwable e) throws Throwable {
        System.out.println("Failed to execute commands in server 2");
    }
    @Override
    protected void doFinally() throws Throwable {
        // cleanup
    }
};
Any activity on either server can throw a custom exception. But the execution of activities on one server should not be cancelled because of an exception thrown by an activity on the other server. Activities on a server should only be cancelled if one of the activities on the same server throws an exception. (A dependent activity should also get cancelled, irrespective of server, if the activity it depends on fails or throws an exception.) For this, I wrapped the two sequences in two separate TryCatchFinally blocks.
How do I terminate the workflow execution if activities on both server 1 and server 2 throw exceptions or fail?
You can wrap each Spring Batch execution in an SWF activity and then use an SWF decider to sequence these activities. See the AWS Flow Framework documentation and recipes for more info.
Added after reading the updated description of the problem:
You can use Promises to sequence activities in any way. So in your case I would do something like:
// First server job sequencing
Promise<Void> r11 = client1.b1();
Promise<Void> r12 = client1.b2(r11);
Promise<Void> r13 = client1.b3(r12);
Promise<Void> r14 = client1.b4(r13);
// Second server job sequencing
Promise<Void> r21 = client2.b1();
// Will execute only when both parameters are ready
Promise<Void> r22 = client2.b2(r21, r12);
Promise<Void> r23 = client2.b3(r22);
Promise<Void> r24 = client2.b4(r23);
If any of the activities throws an exception, it cancels all outstanding activities and fails the workflow, unless the exception is explicitly caught and handled using TryCatchFinally. An activity that wasn't started (for example, because it is waiting for its parameters of type Promise to become ready) is cancelled immediately. An activity that is already executing should handle cancellation explicitly. See the "Activity Heartbeat" section of the Error Handling page of the AWS Flow Framework Guide for more info.
Added the error handling part:
You wrap the part that shouldn't affect other parts of the workflow in a TryCatch. So in this example, any client2 activity throwing an exception cancels all future client2 activities, but not activities called on client1, as the exception is not thrown into their scope.
// First server job sequencing
Promise<Void> r11 = client1.b1();
final Promise<Void> r12 = client1.b2(r11);
Promise<Void> r13 = client1.b3(r12);
Promise<Void> r14 = client1.b4(r13);
new TryCatch(){
@Override
protected void doTry() throws Throwable {
// Second server job sequencing
Promise<Void> r21 = client2.b1();
// Will execute only when both parameters are ready
Promise<Void> r22 = client2.b2(r21, r12);
Promise<Void> r23 = client2.b3(r22);
Promise<Void> r24 = client2.b4(r23);
}
@Override
protected void doCatch(Throwable e) throws Throwable {
// Handle exception without rethrowing it.
}
}
So, working off of the Boost HTTP Server 3 example, I want to modify connection::handle_read to support sending a body along with the message. However, the method for doing this is not apparent to me. I want to write something like:
void connection::handle_read(const boost::system::error_code& e,
std::size_t bytes_transferred)
{
...
if (result)
{
boost::asio::async_write(socket_, reply.to_buffers(),
strand_.wrap(
boost::bind(&connection::write_body, shared_from_this(),
boost::asio::placeholders::error)));
}
}
void connection::write_body(const boost::system::error_code& e)
{
boost::asio::async_write(socket_, body_stream_,
strand_.wrap(
boost::bind(&connection::handle_write, shared_from_this(),
boost::asio::placeholders::error)));
}
where body_stream_ is an asio::windows::stream_handle.
But this approach doesn't handle the HTTP chunking at all (all that means is that the size of each chunk is sent before the chunk). What is the best way to approach this problem? Do I write my own wrapper for an ifstream that adheres to the requirements of a Boost const buffer? Or try to simulate the effect of async_write with multiple calls to async_write_some in a loop? I should mention that a requirement of the solution is that I never have the entire file in memory at any given time - only one or a few chunks.
Very new to Asio and sockets; any advice is appreciated!
It may be easier to visualize asynchronous programming as a chain of functions rather than looping. When breaking apart the chains, I find it to be helpful to break operations into two parts (initiation and completion), then illustrate the potential call paths. Here is an example illustration that asynchronously reads some data from body_stream_, then writes it out the socket via HTTP Chunked Transfer Encoding:
void connection::start()
{
socket.async_receive_from(..., &handle_read); --.
} |
.----------------------------------------------'
| .-----------------------------------------.
V V |
void connection::handle_read(...) |
{ |
if (result) |
{ |
body_stream_.assign(open(...)) |
|
write_header(); --------------------------------|-----.
} | |
else if (!result) | |
boost::asio::async_write(..., &handle_write); --|--. |
else | | |
socket_.async_read_some(..., &handle_read); ----' | |
} | |
.---------------------------------------------------' |
| |
V |
void connection::handle_write() |
{} |
.------------------------------------------------------'
|
V
void connection::write_header()
{
// Start chunked transfer coding. Write http headers:
// HTTP/1.1. 200 OK\r\n
// Transfer-Encoding: chunked\r\n
// Content-Type: text/plain\r\n
// \r\n
boost::asio::async_write(socket_, ...,
&handle_write_header); --.
} .-------------------------'
|
V
void connection::handle_write_header(...)
{
if (error) return;
read_chunk(); --.
} .-------------'
| .--------------------------------------------.
V V |
void connection::read_chunk() |
{ |
boost::asio::async_read(body_stream_, ..., |
&handle_read_chunk); --. |
} .-----------------------' |
| |
V |
void connection::handle_read_chunk(...) |
{ |
bool eof = error == boost::asio::error::eof; |
|
// On non-eof error, return early. |
if (error && !eof) return; |
|
write_chunk(bytes_transferred, eof); --. |
} .-------------------------------------' |
| |
V |
void connection::write_chunk(...) |
{ |
// Construct chunk based on rfc2616 section 3.6.1 |
// If eof has been reached, then append last-chunk. |
boost::asio::async_write(socket_, ..., |
&handle_write_chunk); --. |
} .------------------------' |
| |
V |
void connection::handle_write_chunk(...) |
{ |
// If an error occured or no more data is available, |
// then return early. |
if (error || eof) return; |
|
// Read more data from body_stream_. |
read_chunk(); ---------------------------------------'
}
As illustrated above, the chunking is done via an asynchronous chain, where data is read from body_stream_, prepared for writing based on the HTTP Chunked Transfer Encoding specification, then written to the socket. If body_stream_ still has data, then another iteration occurs.
I do not have a Windows environment to test on, but here is a basic complete example on Linux that chunks data 10 bytes at a time.
#include <fcntl.h> // for open()
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
using boost::asio::ip::tcp;
namespace posix = boost::asio::posix;
// Constant strings.
const std::string http_chunk_header =
"HTTP/1.1 200 OK\r\n"
"Transfer-Encoding: chunked\r\n"
"Content-Type: text/html\r\n"
"\r\n";
const char crlf[] = { '\r', '\n' };
const char last_chunk[] = { '0', '\r', '\n' };
std::string to_hex_string(std::size_t value)
{
std::ostringstream stream;
stream << std::hex << value;
return stream.str();
}
class chunk_connection
{
public:
chunk_connection(
boost::asio::io_service& io_service,
const std::string& pipe_name)
: socket_(io_service),
body_stream_(io_service),
pipe_name_(pipe_name)
{}
/// Get the socket associated with the connection
tcp::socket& socket() { return socket_; }
/// Start asynchronous http chunk coding.
void start(const boost::system::error_code& error)
{
// On error, return early.
if (error)
{
close();
return;
}
std::cout << "Opening pipe." << std::endl;
int pipe = open(pipe_name_.c_str(), O_RDONLY);
if (-1 == pipe)
{
std::cout << "Failed to open pipe." << std::endl;
close();
return;
}
// Assign native descriptor to Asio's stream_descriptor.
body_stream_.assign(pipe);
// Start writing the header.
write_header();
}
private:
// Write http header.
void write_header()
{
std::cout << "Writing http header." << std::endl;
// Start chunked transfer coding. Write http headers:
// HTTP/1.1. 200 OK\r\n
// Transfer-Encoding: chunked\r\n
  // Content-Type: text/plain\r\n
  // \r\n
  boost::asio::async_write(socket_,
      boost::asio::buffer(http_chunk_header),
      boost::bind(&chunk_connection::handle_write_header, this,
        boost::asio::placeholders::error));
  }

  // Handle writing of the HTTP header.
  void handle_write_header(const boost::system::error_code& error)
  {
    // On error, return early.
    if (error)
    {
      close();
      return;
    }
    read_chunk();
  }

  // Read a file chunk.
  void read_chunk()
  {
    std::cout << "Reading from body_stream_...";
    std::cout.flush();
    // Read body_stream_ into the chunk_data_ buffer.
    boost::asio::async_read(body_stream_,
        boost::asio::buffer(chunk_data_),
        boost::bind(&chunk_connection::handle_read_chunk, this,
          boost::asio::placeholders::error,
          boost::asio::placeholders::bytes_transferred));
  }

  // Handle reading a file chunk.
  void handle_read_chunk(const boost::system::error_code& error,
      std::size_t bytes_transferred)
  {
    bool eof = error == boost::asio::error::eof;
    // On non-eof error, return early.
    if (error && !eof)
    {
      close();
      return;
    }
    std::cout << bytes_transferred << " bytes read." << std::endl;
    write_chunk(bytes_transferred, eof);
  }

  // Prepare a chunk and write it to the socket.
  void write_chunk(std::size_t bytes_transferred, bool eof)
  {
    std::vector<boost::asio::const_buffer> buffers;
    // If data was read, create a chunk-body.
    if (bytes_transferred)
    {
      // Convert the bytes-transferred count to a hex string.
      chunk_size_ = to_hex_string(bytes_transferred);
      // Construct the chunk based on RFC 2616 section 3.6.1.
      buffers.push_back(boost::asio::buffer(chunk_size_));
      buffers.push_back(boost::asio::buffer(crlf));
      buffers.push_back(boost::asio::buffer(chunk_data_, bytes_transferred));
      buffers.push_back(boost::asio::buffer(crlf));
    }
    // If eof, append the last-chunk to the outbound data.
    if (eof)
    {
      buffers.push_back(boost::asio::buffer(last_chunk));
      buffers.push_back(boost::asio::buffer(crlf));
    }
    std::cout << "Writing chunk..." << std::endl;
    // Write the chunk to the socket.
    boost::asio::async_write(socket_, buffers,
        boost::bind(&chunk_connection::handle_write_chunk, this,
          boost::asio::placeholders::error,
          eof));
  }

  // Handle writing a chunk.
  void handle_write_chunk(const boost::system::error_code& error,
      bool eof)
  {
    // If eof or error, then shut down the socket and return.
    if (eof || error)
    {
      // Initiate graceful connection closure.
      boost::system::error_code ignored_ec;
      socket_.shutdown(tcp::socket::shutdown_both, ignored_ec);
      close();
      return;
    }
    // Otherwise, body_stream_ still has data.
    read_chunk();
  }

  // Close the socket and body_stream_.
  void close()
  {
    boost::system::error_code ignored_ec;
    socket_.close(ignored_ec);
    body_stream_.close(ignored_ec);
  }

private:
  // Socket for the connection.
  tcp::socket socket_;
  // Stream descriptor for the file being chunked.
  posix::stream_descriptor body_stream_;
  // Buffer to read part of the file into.
  boost::array<char, 10> chunk_data_;
  // Holds the hex-encoded size of the valid portion of chunk_data_.
  std::string chunk_size_;
  // Name of the pipe.
  std::string pipe_name_;
};

int main()
{
  boost::asio::io_service io_service;
  // Listen on port 80.
  tcp::acceptor acceptor_(io_service, tcp::endpoint(tcp::v4(), 80));
  // Asynchronously accept a connection.
  chunk_connection connection(io_service, "example_pipe");
  acceptor_.async_accept(connection.socket(),
      boost::bind(&chunk_connection::start, &connection,
        boost::asio::placeholders::error));
  // Run the service.
  io_service.run();
}
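The to_hex_string() helper called from write_chunk() is not shown above; its exact implementation is an assumption, but a minimal sketch consistent with the call site could be:

```cpp
#include <sstream>
#include <string>

// Possible implementation of the to_hex_string() helper used by
// write_chunk(): formats the chunk size as a lowercase hex string,
// matching the chunk-size field of RFC 2616 section 3.6.1.
std::string to_hex_string(std::size_t value)
{
  std::ostringstream stream;
  stream << std::hex << value;
  return stream.str();
}
```

With the 10-byte chunk_data_ buffer above, this yields the "a" (10) and "7" chunk sizes that show up in the capture further down.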
I have a small HTML file that will be served over chunked encoding, 10 bytes at a time:
<html>
<body>
Test transfering html over chunked encoding.
</body>
</html>
Running server:
$ mkfifo example_pipe
$ sudo ./a.out &
[1] 28963
<open browser and connect to port 80>
$ cat html > example_pipe
The output of the server:
Opening pipe.
Writing http header.
Reading from body_stream_...10 bytes read.
Writing chunk...
Reading from body_stream_...10 bytes read.
Writing chunk...
Reading from body_stream_...10 bytes read.
Writing chunk...
Reading from body_stream_...10 bytes read.
Writing chunk...
Reading from body_stream_...10 bytes read.
Writing chunk...
Reading from body_stream_...10 bytes read.
Writing chunk...
Reading from body_stream_...10 bytes read.
Writing chunk...
Reading from body_stream_...7 bytes read.
Writing chunk...
The Wireshark output shows no malformed data:
0000 48 54 54 50 2f 31 2e 31 20 32 30 30 20 4f 4b 0d HTTP/1.1 200 OK.
0010 0a 54 72 61 6e 73 66 65 72 2d 45 6e 63 6f 64 69 .Transfe r-Encodi
0020 6e 67 3a 20 63 68 75 6e 6b 65 64 0d 0a 43 6f 6e ng: chun ked..Con
0030 74 65 6e 74 2d 54 79 70 65 3a 20 74 65 78 74 2f tent-Typ e: text/
0040 68 74 6d 6c 0d 0a 0d 0a 61 0d 0a 3c 68 74 6d 6c html.... a..<html
0050 3e 0a 3c 62 6f 0d 0a 61 0d 0a 64 79 3e 0a 20 20 >.<bo..a ..dy>.
0060 54 65 73 74 0d 0a 61 0d 0a 20 74 72 61 6e 73 66 Test..a. . transf
0070 65 72 69 0d 0a 61 0d 0a 6e 67 20 68 74 6d 6c 20 eri..a.. ng html
0080 6f 76 0d 0a 61 0d 0a 65 72 20 63 68 75 6e 6b 65 ov..a..e r chunke
0090 64 0d 0a 61 0d 0a 20 65 6e 63 6f 64 69 6e 67 2e d..a.. e ncoding.
00a0 0d 0a 61 0d 0a 0a 3c 2f 62 6f 64 79 3e 0a 3c 0d ..a...</ body>.<.
00b0 0a 37 0d 0a 2f 68 74 6d 6c 3e 0a 0d 0a 30 0d 0a .7../htm l>...0..
00c0 0d 0a ..
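To illustrate the framing seen in the capture, the chunked body can be decoded by walking the "<hex-size>\r\n<data>\r\n" records until the zero-size last-chunk. This decoder is not part of the server, just an illustrative sketch:

```cpp
#include <cstdlib>
#include <string>

// Minimal sketch of a chunked-body decoder: each chunk is
// "<hex-size>\r\n<data>\r\n", and a zero-size chunk ends the body.
std::string decode_chunked(const std::string& body)
{
  std::string result;
  std::size_t pos = 0;
  for (;;)
  {
    // Parse the hex chunk-size line.
    std::size_t line_end = body.find("\r\n", pos);
    if (line_end == std::string::npos)
      break;
    std::size_t size =
        std::strtoul(body.substr(pos, line_end - pos).c_str(), 0, 16);
    if (size == 0)
      break;  // last-chunk reached
    // Copy the chunk-data and skip its trailing CRLF.
    result += body.substr(line_end + 2, size);
    pos = line_end + 2 + size + 2;
  }
  return result;
}
```

Running it over the captured payload reassembles the original HTML file, confirming the chunk boundaries fall mid-tag ("<html>\n<bo" + "dy>\n...") exactly as the hex dump shows.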
The example is very simple; it just shows how to handle an HTTP request. Chunked transfer encoding is not supported in that example.
Some suggestions for you:
To learn what chunked transfer encoding is, see RFC 2616, section 3.6.
Do the following before sending:
set HTTP headers indicating that the response message uses chunked transfer encoding;
encode your data with chunked transfer encoding.
The logic will look like this:
std::string http_head;
std::string http_body;
char buff[10000];
read_file_to_buff(buff);
set_http_head_values(http_head);
encode_chunk_format(buff, http_body);
boost::asio::async_write(socket_,
    boost::asio::buffer(http_head.c_str(), http_head.length()),
    boost::bind(&connection::handle_write, shared_from_this(),
      boost::asio::placeholders::error));
boost::asio::async_write(socket_,
    boost::asio::buffer(http_body.c_str(), http_body.length()),
    boost::bind(&connection::handle_write, shared_from_this(),
      boost::asio::placeholders::error));
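The helpers read_file_to_buff(), set_http_head_values(), and encode_chunk_format() are left undefined in the snippet above. A hedged sketch of encode_chunk_format() following RFC 2616 section 3.6.1, with the signature simplified to std::string for clarity, might look like:

```cpp
#include <sstream>
#include <string>

// Hypothetical sketch of encode_chunk_format(): frames the data as one
// chunk (hex size, CRLF, data, CRLF) followed by the zero-length
// last-chunk that terminates a chunked body.
void encode_chunk_format(const std::string& data, std::string& http_body)
{
  std::ostringstream stream;
  stream << std::hex << data.size() << "\r\n"  // chunk-size in hex
         << data << "\r\n"                     // chunk-data
         << "0\r\n\r\n";                       // last-chunk and final CRLF
  http_body = stream.str();
}
```

Note that encoding the whole body as a single chunk only makes sense for small payloads; the point of chunked encoding is usually to emit many chunks as data becomes available.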
When you test your program, you can use Fiddler2 to monitor the HTTP messages.
So the solution I came up with is to have two write functions: write_body() and write_complete(). Once reading is done, assuming we have a body to send, we call
body_fp_.open(bodyFile);
async_write(get_headers(), write_body)
inside write_body, we do something like
vector<boost::asio::const_buffer> buffers;
body_fp_.read(buffer_, buffer_.size())
buffers.push_back(...size, newlines, etc...);
buffers.push_back(buffer_, body_fp_.gcount());
and once we finish writing the file contents, write a final time with:
boost::asio::async_write(socket_, boost::asio::buffer(misc_strings::eob),
strand_.wrap(
boost::bind(&connection::write_complete, shared_from_this(),
boost::asio::placeholders::error)));
This seems to work pretty well, and has alleviated concerns with memory usage. Any comments on this solution are appreciated; hopefully this will help someone else with a similar problem in the future.