I have a program that generates a fair amount of output to console.log. I'm exceeding the size of the console.log buffer and the first parts of my logs are being lost.
How can I up the size of the console so my output is not lost?
WebStorm's console buffer is currently limited to 1024 KB. You can try increasing the idea.cycle.buffer.size property value - see http://intellij-support.jetbrains.com/entries/23395793.
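Assuming a recent IntelliJ-platform build, the property can be added via Help | Edit Custom Properties (which edits idea.properties); the value is in KB, and 2048 below is just an example size, not a recommended one:

```properties
# idea.properties — restart the IDE after changing this.
# Value is in KB; 2048 doubles the default 1024 KB console buffer.
idea.cycle.buffer.size=2048
```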
I have been searching this site and the Boost.Log doc for a way to do this but have come up empty so far.
The doc (https://www.boost.org/doc/libs/1_74_0/libs/log/doc/html/log/detailed/sink_backends.html) mentions the ability to set a text_stream_backend to flush after each log record written by calling auto_flush(true).
While this works well for debugging, I was wondering if it was possible to configure a custom number of log records received by the core (or sink?) before a flush() occurs. My goal is to strike a balance between useful live logging (I can see the log records frequently enough with a tail -f) and performance.
Alternatively, would it be possible to configure the size of the buffer containing log records so that once it fills up, it gets flushed?
I've resorted to Stack Overflow because AWS doesn't provide technical support for free tiers.
Someone reported an issue using httpx, the ruby HTTP client library I maintain: https://gitlab.com/honeyryderchuck/httpx/issues/64
The report came after a recent upgrade to improve HTTP/2 spec compliance in the parser. Although the library now passes h2spec, there seem to be legitimate issues when requesting from CloudFront, due to a part of the spec it doesn't appear to comply with: when a flow-control window over 2 ** 31 - 1 is advertised, the sender must not allow it and must return a flow control error.
Is it correct?
sbordet's answer is not fully correct.
He is right that the flow-control window can't exceed 2^31-1 bytes and that the initial flow-control window size is 65535 bytes. However, the claim that CloudFront sends a wrong value of 65536 is incorrect, as any endpoint is allowed to modify the default initial window size, as stated in RFC 7540 Sec. 6.9.2:
Both endpoints can adjust the initial window size for new streams by including a value for SETTINGS_INITIAL_WINDOW_SIZE in the SETTINGS frame that forms part of the connection preface.
Note that this setting applies only to new streams, not to the connection flow-control window. The connection flow-control window can be updated only through WINDOW_UPDATE frames, as mentioned in the next line of the RFC:
The connection flow-control window can only be changed using WINDOW_UPDATE frames.
So after CloudFront updates SETTINGS_INITIAL_WINDOW_SIZE to 65536 bytes, the connection flow-control window is still at 65535 bytes, and the next WINDOW_UPDATE of 2147418112 bytes increases it to 2^31-1 bytes (a valid value according to the RFC), not 2^31 bytes.
You are correct that the flow control window cannot exceed 2^31-1, as indicated in the specification.
The initial flow-control window is 65535, not 65536 as sent by CloudFront, so the subsequent enlargement of the flow-control window by 2147418112 yields 2^31, which is off by one too big for the flow-control window.
Your client correctly sends a GOAWAY with error FLOW_CONTROL_ERROR.
I was creating a server application, which uses socket.write() to send data back to the clients.
Here I ran into the problem of managing the buffer size. Suppose the connection between server and client breaks down and the server is not aware of the problem, so it keeps writing to the buffer. In such a situation, I want to limit the maximum buffer size so that errors are thrown before it consumes lots of resources.
I know kMaxLength in node_buffer.h controls the size of buffer, but I do not think it's a good idea to change its value via some self-defined methods. Is there any other method to manage the size?
kMaxLength is the limit value for memory allocation; it is much bigger than socket.bufferSize.
http://nodejs.org/docs/v0.11.5/api/smalloc.html
You can read about socket.bufferSize here:
http://nodejs.org/api/net.html#net_socket_buffersize
You can also watch socket.bytesRead and socket.bytesWritten to monitor your data transmission.
I am working on a project where we can have input data stream with 100 Mbps.
My program can be used overnight to capture these data, and thus will generate a huge data file. My program logic, which interprets the data, is complex and can process only 1 Mb of data per second.
We also dump the bytes to a log file after they are processed. We do not want to lose any incoming data and at the same time want the program to work in real time, so we maintain a circular buffer which acts as a cache.
Right now, the only way to keep incoming data from getting lost is to increase the size of this buffer.
Please suggest a better way to do this. What alternative caching approaches can I try?
Stream the input to a file. Really, there is no other choice. It comes in faster than you can process it.
You could create one file per second of input data. That way you can start processing completed files while new data is still being streamed to disk.
I am writing an application on Ubuntu Linux in C++ to read data from a serial port. It is working successfully by my code calling select() and then ioctl(fd,FIONREAD,&bytes_avail) to find out how many bytes are available before finally obtaining the data using read().
My question is this: Every time select returns with data, the number of bytes available is reported as 8. I am guessing that this is a buffer size set somewhere and that select returns notification to the user when this buffer is full.
I am new to Linux as a developer (but not new to C++) and I have tried to research (without success) if it is possible to change the size of this buffer, or indeed if my assumptions are even true. In my application timing is critical and I need to be alerted whenever there is a new byte on the read buffer. Is this possible, without delving into kernel code?
You want to use the serial IOCTL TIOCSSERIAL which allows changing both receive buffer depth and send buffer depth (among other things). The maximums depend on your hardware, but if a 16550A is in play, the max buffer depth is 14.
You can find code that does something similar to what you want to do here
The original link went bad: http://www.groupsrv.com/linux/about57282.html
The new one will have to do until I write another or find a better example.
You can try playing with the VMIN and VTIME values of the c_cc member of the termios struct.
Some info here, especially in section 3.2.