Send RSocket SETUP frame using Postman

Per this question, I am using spring-boot-starter-webflux and spring-boot-starter-rsocket version 2.7.1 and trying to run some simple WebSocket endpoint tests in Postman.
In Postman, when I set up the MIME types and use
{
    "data": "test",
    "metadata": 4
}
I get the following error on the server
DEBUG [reactor-http-nio-6] lambda$receive$0: receiving ->
Frame => Stream ID: 2064452128 Type: REQUEST_N Flags: 0b100000 Length: 42
RequestN: 539124833
Data:
DEBUG [reactor-http-nio-6] sendErrorAndClose: sending -> InvalidSetupException: SETUP or RESUME frame must be received before any others
How can I send the SETUP frame?
Per RSC it looks like this:
I also tried using Wireshark to capture the RSC binary frames and sending those bytes via Postman.
But I get: An outbound error could not be processed java.nio.channels.ClosedChannelException
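For reference, this is roughly what an RSocket client library does on connect; a minimal sketch using rsocket-js over WebSocket (the endpoint URL, MIME types, and keep-alive values are assumptions for illustration, not taken from the question):
// Sketch only: assumes rsocket-core / rsocket-websocket-client (rsocket-js 0.x)
// and a Spring RSocket-over-WebSocket endpoint at ws://localhost:8080/rsocket.
const { RSocketClient } = require('rsocket-core');
const RSocketWebSocketClient = require('rsocket-websocket-client').default;

const client = new RSocketClient({
    setup: {
        // These values end up in the SETUP frame that Postman never sends.
        keepAlive: 60000,
        lifetime: 180000,
        dataMimeType: 'application/json',
        metadataMimeType: 'message/x.rsocket.routing.v0',
    },
    transport: new RSocketWebSocketClient({ url: 'ws://localhost:8080/rsocket' }),
});

client.connect().subscribe({
    onComplete: socket => console.log('SETUP sent, connection established'),
    onError: error => console.error('Connection failed', error),
});
The point is that the SETUP frame is produced by the RSocket client itself as part of the handshake; a plain WebSocket client like Postman only sends your payload as an opaque frame, which the server then misinterprets (hence the random frame type and stream ID in the log above).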

Related

AWS HTTP API Gateway 503 Service Unavailable

I have an HTTP API Gateway with an HTTP Integration backend server on EC2. The API gets lots of queries during the day, and looking at the logs I realized that the API sometimes returns a 503 HTTP code with the body:
{ "message": "Service Unavailable" }
When I found this out, I tried the API and ran the HTTP requests many times in Postman; when I try twenty times, I get at least one 503.
I then thought that the HTTP integration server was busy, but the server is not loaded, and when I go directly to the HTTP integration server I get 200 responses every time.
The timeout parameter is set to 30000 ms and the endpoint's average response time is 200 ms, so timeout is not the problem. Also, the HTTP 503 does not come 30 seconds after the request but instantly.
Can anyone help me?
Thanks
I solved this issue by editing the keep-alive connection parameters of my internal integration server. AWS API Gateway expects standard keep-alive settings on the integration side, so I kept tweaking my NGINX server parameters until the 503s disappeared.
Had the same issue on a self-made Node microservice that was integrated into AWS API Gateway. After reconfiguring the CloudWatch logs I got a further indicator of what was wrong: INTEGRATION_NETWORK_FAILURE
Verify your problem is the same, i.e. through more detailed log output:
In API Gateway > Logging, add more output in "Log format"
Use this or similar content for "Log format":
{"httpMethod":"$context.httpMethod","integrationErrorMessage":"$context.integrationErrorMessage","protocol":"$context.protocol","requestId":"$context.requestId","requestTime":"$context.requestTime","resourcePath":"$context.resourcePath","responseLength":"$context.responseLength","routeKey":"$context.routeKey","sourceIp":"$context.identity.sourceIp","status":"$context.status","errMsg":"$context.error.message","errType":"$context.error.responseType","intError":"$context.integration.error","intIntStatus":"$context.integration.integrationStatus","intLat":"$context.integration.latency","intReqID":"$context.integration.requestId","intStatus":"$context.integration.status"}
After hitting the API Gateway endpoint and getting a failure, consult the logs again; they should look something like this:
Solve it in a Node.js microservice (using Express)
Add timeouts for headers and keep-alive to the Express server's socket configuration when it starts listening.
const app = require('express')();

// If you also want to advertise the keep-alive through the HTTP response,
// you might add middleware like this (optional):
/*
app.use((req, res, next) => {
    res.setHeader('Connection', 'keep-alive');
    res.setHeader('Keep-Alive', 'timeout=30');
    next();
});
*/

/* ... your main logic ... */

const server = app.listen(8080, 'localhost', () => {
    console.warn(`⚡️[server]: Server is running at http://localhost:8080`);
});

// <- the important lines: keep the socket open longer than the gateway's reuse window
server.keepAliveTimeout = 30 * 1000;
server.headersTimeout = 35 * 1000;
Reason
Some AWS components seem to expect the connection to be kept alive, even if the server says otherwise (Connection: close). When API Gateway (and possibly AWS ELBs) tries to reuse such a connection, the recycling fails because the other side has most likely already closed it, hence the assumed INTEGRATION_NETWORK_FAILURE.
The error is intermittent, since API Gateway at least seems to close unused connections after a while, which gives a clean execution the next time. I can only assume they do this for performance reasons rather than diverting to anything slower.

How to send multiple HTTP/2 requests over the same connection with libcurl

I'm using https://curl.haxx.se/libcurl/c/http2-download.html to send multiple HTTP/2 requests to a demo HTTP server. The server is based on Spring WebFlux. To verify that libcurl can send HTTP/2 requests concurrently, the server delays 10 seconds before returning a response. This way, I hoped to observe the server receiving multiple HTTP/2 requests at almost the same time over the same connection and, after 10 seconds, the client receiving the responses.
However, I noticed that the server received the requests sequentially. It seems that the client doesn't send the next request before getting the response to the previous one.
Here is the server log; the requests arrived every 10 seconds.
2021-05-07 17:14:57.514 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
2021-05-07 17:15:07.532 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
2021-05-07 17:15:17.541 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
Can anyone help me figure out my mistake? Thank you.
For me,
curl -v --http2 --parallel --config urls.txt
did exactly what you need, where urls.txt was like
url = "localhost:8080/health"
url = "localhost:8080/health"
The result was that curl first sent the first request via HTTP/1.1, received a 101 upgrade to HTTP/2, immediately sent the second request without waiting for a response, and then received two 200 responses in succession.
Note: -v is added for verbosity to validate it works as expected. You don't need it other than for printing the underlying protocol conversation.

Multipart file upload failing sometimes in a Docker container on AWS

I have a hapi server running in a Docker container on AWS, and it exposes a file upload API. This API runs smoothly on my local machine, but when deployed to AWS it sometimes fails with the error "Incomplete multipart payload". The error does not occur always, only some of the time.
The images I am uploading are small (less than 100 kB), and the failure is not because of a slow network, as I have tested it on multiple networks.
After debugging the hapi modules for payload parsing, I found that the Pez module, which parses the payload, is throwing this error. I also noticed that when this error happens, Pez's onClose event is called and none of the parse events occur, hence the "Incomplete multipart payload" error. The Pez state is "preamble" when this happens; in the successful parse case the state is "epilogue".
My hapi route config is
config: {
    payload: {
        maxBytes: 20971520,
        output: 'data',
        parse: true,
        allow: 'multipart/form-data'
    }
}
Can somebody suggest why the parsing fails at times, or why the onClose event is called before parsing happens?
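For context, here is a minimal sketch of how that payload config sits in a full hapi v16-style route; the port, path, and handler are made up for illustration and are not from the question:
// Sketch only: hapi v16-style route registration; path and handler are hypothetical.
const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ port: 3000 });

server.route({
    method: 'POST',
    path: '/upload',
    config: {
        payload: {
            maxBytes: 20971520,            // ~20 MB upper bound
            output: 'data',                // buffer the parsed parts in memory
            parse: true,                   // let hapi (Subtext/Pez) parse the multipart body
            allow: 'multipart/form-data'
        }
    },
    handler: (request, reply) => {
        // request.payload holds the parsed form fields / file buffers
        reply({ received: Object.keys(request.payload) });
    }
});

server.start(() => console.log('Server running at', server.info.uri));
With parse: true, the "Incomplete multipart payload" error is raised before the handler is ever invoked, which is consistent with Pez's onClose firing while the parser is still in the "preamble" state.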

Is it possible to connect to the Google IoT Core MQTT Bridge via JavaScript?

I've been trying to use the JavaScript version of the Eclipse Paho MQTT client to access the Google IoT Core MQTT Bridge, as suggested here:
https://cloud.google.com/iot/docs/how-tos/mqtt-bridge
However, whatever I do, any attempt to connect with known good credentials (working with other clients) results in this connection error:
errorCode: 7, errorMessage: "AMQJS0007E Socket error:undefined."
Not much to go on there, so I'm wondering if anyone has ever been successful in connecting to the MQTT Bridge via JavaScript with Eclipse Paho, the client implementation suggested by Google in their documentation.
I've gone through their troubleshooting steps, and things seem to be on the up and up, so no help there either.
https://cloud.google.com/iot/docs/troubleshooting
I have noticed that in their docs they have sample code for Java, Python, etc., but not JavaScript, so I'm wondering if it's simply not supported and their documentation just fails to mention it.
I've simplified my code to just use the 'Hello World' example in the Paho documentation, and as far as I can tell I've done things correctly (including using my device path as the ClientID, the JWT token as the password, specifying an 'unused' userName field and explicitly requiring MQTT v3.1.1).
In the meantime I'm falling back to polling via their HTTP bridge, but that has obvious latency and network traffic shortcomings.
// Create a client instance
client = new Paho.MQTT.Client("mqtt.googleapis.com", Number(8883), "projects/[my-project-id]/locations/us-central1/registries/[my registry name]/devices/[my device id]");

// set callback handlers
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;

// connect the client
client.connect({
    mqttVersion: 4, // maps to MQTT V3.1.1, required by IoT Core
    onSuccess: onConnect,
    onFailure: onFailure,
    userName: 'unused', // suggested by Google for this field
    password: '[My Confirmed Working JWT Token]' // working JWT token
});

function onFailure(resp) {
    console.log(resp);
}

// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("World");
    message = new Paho.MQTT.Message("Hello");
    message.destinationName = "World";
    client.send(message);
}

// called when the client loses its connection
function onConnectionLost(responseObject) {
    if (responseObject.errorCode !== 0) {
        console.log("onConnectionLost:" + responseObject.errorMessage);
    }
}

// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:" + message.payloadString);
}
I'm a Googler (but I don't work in Cloud IoT).
Your code looks good to me and it should work. I will try it for myself this evening or tomorrow and report back to you.
I've spent the past day working on a Golang version of the samples published in Google's documentation. Like you, I was disappointed not to see all of Google's usual languages covered by samples.
Are you running the code from a browser, or is it running on Node.JS?
Do you have a package.json (if Node) that you could share too, please?
Update
Here's a Node.JS (JavaScript, but non-browser) sample that connects to Cloud IoT, subscribes to /devices/${DEVICE}/config and publishes to /devices/${DEVICE}/events.
https://gist.github.com/DazWilkin/65ad8890d5f58eae9612632d594af2de
Place all the files in the same directory
Replace the values in index.js with the location of Google's CA and your key
Replace the [[YOUR-X]] values in config.json
Use "npm install" to pull the packages
Run node index.js
You should be able to pull messages from the Pub/Sub subscription and you should be able to send config messages to the device.
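For a rough idea of what such a non-browser client looks like, here is a minimal sketch using the mqtt and jsonwebtoken npm packages; the project, registry, device names and the key path are placeholders, not values taken from the gist:
// Sketch only: Node.js MQTT client for Cloud IoT Core over TLS (not WebSocket).
// Assumes the `mqtt` and `jsonwebtoken` npm packages and an RS256 device key.
const fs = require('fs');
const jwt = require('jsonwebtoken');
const mqtt = require('mqtt');

const projectId = '[my-project-id]';
const deviceId = '[my-device-id]';
const clientId = `projects/${projectId}/locations/us-central1/registries/[my-registry]/devices/${deviceId}`;

// Cloud IoT Core authenticates the device with a short-lived JWT as the password.
const now = Math.floor(Date.now() / 1000);
const token = jwt.sign(
    { iat: now, exp: now + 20 * 60, aud: projectId },
    fs.readFileSync('rsa_private.pem'),
    { algorithm: 'RS256' }
);

const client = mqtt.connect({
    host: 'mqtt.googleapis.com',
    port: 8883,
    protocol: 'mqtts',
    clientId: clientId,
    username: 'unused',
    password: token,
});

client.on('connect', () => {
    client.subscribe(`/devices/${deviceId}/config`, { qos: 1 });
    client.publish(`/devices/${deviceId}/events`, 'Hello from Node');
});

client.on('message', (topic, message) => {
    console.log(`config received on ${topic}: ${message.toString()}`);
});
The key difference from the Paho browser approach is that this connects over plain TLS on port 8883, which Node.js can do but a browser cannot.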
The short answer is no. Google Cloud IoT Core doesn't support WebSockets.
All the browser JavaScript MQTT libraries use WebSocket, because in-browser JavaScript is restricted to HTTP requests and WebSocket connections only.

Graylog2 not showing any messages in a specific stream

In our Symfony2 application we query an external API for a certain service we provide. This API (let's call it Acme API) sometimes throws error messages that we forward to Graylog2 via Monolog and GELF to keep track of outages. Every error is logged at error level with $logger->err().
The messages are shown in the normal message pool, but the custom stream that collects these API error messages isn't showing any messages at all.
So my main question is: Why is Graylog refusing to show messages in the stream and what can we do to change that behaviour?
Configurations
There is a total of 35 streams at the moment (this is because we have a number of applications running on our servers).
Every message that is given to Monolog has the same pattern:
Acme API Error on "{user action}": {error description}. Additional information: "{more information provided from the API}" on server "{web server name}" and domain "{domain}" for user "{session ID}"
The Graylog stream rules are as follows:
Host (regex): ^((?!mycompany-staging).)*$ // Needed to show only logs from the live servers
Facility: app
Full Message (regex): Acme API.*
(We've also tried to set the Full Message regex to .*Acme API.* and Acme API Error.*, but none of these worked)
The monolog configuration is as follows:
# config_prod.yml
# ...
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
            level: debug
        nested:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
        graylog:
            type: gelf
            level: warning
            publisher:
                hostname: mycompany-monitoring.mycompany.ch
# ...
It seemed to be a problem with Graylog2 itself; the stream started working normally after updating Graylog to the newest version and recreating the stream.