I am migrating from Jetty 9.0.x to 9.4.x
org.eclipse.jetty.server.ResourceCache has been removed in Jetty 9.4.x.
Questions:
1) What is the replacement for this class in 9.4.x?
2) I found CachedContentFactory to be the closest equivalent of this class, but its constructor takes one extra parameter, CompressedContentFormat[] precompressedFormats. If this is the correct replacement, I am not sure what I should pass in for this param. Can it be an empty array? Sorry, the javadocs did not help a lot either.
First, some history.
Back during the major release Jetty 9.0.0 there were 2 main ways to handle static content:
DefaultServlet (and the inferior ResourceHandler).
When the major release Jetty 9.4.0 rolled out (four major Jetty releases after 9.0.0), an effort was made to make both of those components use a common codebase, so the ResourceService was created to standardize the servicing of static content in a single place. The differences between DefaultServlet and ResourceHandler were thereby vastly reduced. (Note: DefaultServlet still supports more features of its own and more features of the various HTTP specs.)
Next, Issue #539 was resolved to allow the ResourceHandler (and now DefaultServlet) to have customized directory listings. To accomplish this, the HttpContent.ContentFactory interface was introduced.
The new HttpContent.ContentFactory was responsible for returning the HttpContent representing the path provided (with an optional maximum buffer size configuration option).
So that means, at this point, we have ...
A DefaultServlet (or ResourceHandler)
Which has a ResourceService
Which gets its content from an HttpContent.ContentFactory
The returned HttpContent can be a static resource, a directory listing, or a welcome file.
When it comes time to send a piece of static content the steps taken are ...
Ask for the HttpContent object from HttpContent.ContentFactory.getContent(path, maxBufferSize)
Ask for a representation of the HttpContent that can be used to send the referenced content, one of the following (in this order):
If the HttpChannel is configured to use "direct buffers", ask for HttpContent.getDirectBuffer() representing the entire content. (This could be a memory-mapped file using a negligible amount of heap memory.)
Ask for HttpContent.getIndirectBuffer() representing the entire content. (This could also be a memory-mapped file using a negligible amount of heap memory.)
Ask for HttpContent.getReadableByteChannel() to send content.
Ask for HttpContent.getInputStream() to send content.
Return an error indicating "Unknown Content"
There are 2 major implementations of HttpContent.ContentFactory present in Jetty 9.4.0+:
ResourceContentFactory, which handles transient content (not cached); if the content exceeds maxBufferSize then raw ByteBuffer versions will not be returned.
CachedContentFactory, which will cache the various ByteBuffer values returned from previous usage.
The CachedContentFactory has an isCacheable(Resource) method that is interrogated to decide whether the supplied resource should enter the in-memory cache.
With regard to the CompressedContentFormat[] precompressedFormats parameter of the CachedContentFactory constructor: it refers to the "pre-compressed" formats supported by both the ResourceService and the CachedContentFactory. (And yes, if you serve no precompressed variants, an empty array is a valid value; it simply disables the precompressed lookup.)
The typical, default setup is ...
CompressedContentFormat[] precompressedFormats = {
    CompressedContentFormat.GZIP, // gzip compressed
    CompressedContentFormat.BR,   // brotli compressed
    new CompressedContentFormat("bzip", ".bz") // bzip compressed
};

CachedContentFactory cachedContentFactory = new CachedContentFactory(parentContentFactory,
    resourceFactory, mimeTypes, useFileMappedBuffers,
    useEtags, precompressedFormats);

resourceService.setContentFactory(cachedContentFactory);
These precompressedFormats refer to static (and immutable) content that has been precompressed before server startup.
This allows a client to send a request for say ...
GET /css/main.css HTTP/1.1
Host: example.hostname.com
Accept-Encoding: gzip, deflate
and if the "Base Resource" directory has a ${resource.basedir}/css/main.css AND a ${resource.basedir}/css/main.css.gz then the response will be served from the main.css.gz (not the main.css), resulting in an HTTP response like ...
HTTP/1.1 200 OK
Date: Wed, 15 May 2019 20:17:22 GMT
Vary: Accept-Encoding
Last-Modified: Wed, 15 May 2019 20:17:22 GMT
Content-Type: text/css
Content-Encoding: gzip
ETag: W/"H/6qTDwA8vsH/6rJoEknqc"
Accept-Ranges: bytes
Content-Length: 11222
Related
I need a library for my C++ program. But the problem is, I don't know the name of the data type I'm looking for.
I have an NPAPI plugin (I know this API is deprecated and removed from modern browsers) which issues HTTP range requests to a server. The requests are asynchronous, and the data may arrive in any order, in chunks of any size.
So I need to track the ranges I have already requested from the server.
For example, if initially I requested bytes [10-20] (inclusively) and then requested [30-40], the data type I need should keep them as two intervals:
[10-20],[30-40]
But if I then request [21-29], or even [15-35], they should be merged into one interval:
[10-20],[30-40] + [15-35] = [10-40]
I also need subtraction for when a requested block arrives:
[10-40] - [20-30] = [10-19],[31-40]
(requested - arrived = what we're still waiting for)
I had a look at the boost::numeric::interval library, but at first glance it is too big for this task (1583 files, 13 MB of sources after './dist/bin/bcp numeric/interval ~/boost').
Also, GNU ddrescue has some similar arithmetic inside, but the code there isn't a library; it is coupled too tightly to the application's specifics.
UPDATE:
Here is what I've found along the way:
A container for integer intervals, such as RangeSet, for C++
https://en.wikipedia.org/wiki/Interval_tree
Boost.ICL
NCBI C++ Toolkit, CIntervalTree
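For reference, a minimal sketch of the requested arithmetic using Boost.ICL from the list above; its interval_set joins overlapping and adjacent intervals on addition and splits them on subtraction:

#include <boost/icl/interval_set.hpp>
#include <iostream>

int main()
{
    namespace icl = boost::icl;
    icl::interval_set<int> pending; // ranges requested but not yet received

    pending += icl::discrete_interval<int>::closed(10, 20); // {[10,20]}
    pending += icl::discrete_interval<int>::closed(30, 40); // {[10,20], [30,40]}
    pending += icl::discrete_interval<int>::closed(15, 35); // merged: {[10,40]}
    pending -= icl::discrete_interval<int>::closed(20, 30); // [10,19] and [31,40] remain

    // interval_set is streamable; note ICL may print discrete bounds
    // in half-open form, e.g. [10,20) instead of [10,19].
    std::cout << pending << '\n';
}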
I have an app that appears to enable gzip encoding by default while sending data to the server.
We tried disabling the gzip compression by explicitly using:
IXMLHttpRequest2::SetRequestHeader(L"Accept-Encoding", L"") (on the HTTP Request Object, of course)
This still doesn't seem to help. Is there any way to disable GZIP being enabled in the HTTP request headers from the C++ app?
Thanks!
To ask a server not to use a specific encoding, you should provide a list of Accept-Encoding values. From section 14.11 of RFC 2616 (HTTP/1.1) you can see that the header takes one of these forms (values are examples):
Accept-Encoding: compress, gzip
Accept-Encoding:
Accept-Encoding: *
Accept-Encoding: compress;q=0.5, gzip;q=1.0
Accept-Encoding: gzip;q=1.0, identity; q=0.5, *;q=0
If the content-coding is one of the content-codings listed in the Accept-Encoding field, then it is acceptable, unless it is accompanied by a qvalue of 0. (As defined in section 3.9, a qvalue of 0 means "not acceptable.")
So, to ask the server not to use gzip compression, you should provide, instead of an empty string, this value for Accept-Encoding:
gzip;q=0
This requires the server not to use gzip, but you then have to allow another encoding. See section 3.5 for the available encodings, and use the quality parameter q to inform the server of your preferences (do not forget that if the server can't satisfy any encoding you accept, it will reply with error 406). For example:
identity;q=1.0, gzip;q=0.5
In this way you ask for the identity encoding and, in case it's not available, you accept gzip too (this prevents the server from replying with an error if, for any reason, it can't use any other encoding for your request). You may also try other encodings (compress and deflate, for example) and compare their performance.
Code
Then, finally, you have to use IXMLHttpRequest2::SetRequestHeader(L"Accept-Encoding", L"identity;q=1.0, gzip;q=0.5"). From the SetRequestHeader documentation you can see that it appends to the headers sent by default, so if you specify an empty string, the value won't actually be changed. (How this is interpreted may depend on the server; I didn't find any proper specification about it, so inspect both your HTTP request and response to check what is actually sent/received.)
Old value: Accept-Encoding: compress
Call: IXMLHttpRequest2::SetRequestHeader(L"Accept-Encoding", L"")
New value: Accept-Encoding: compress
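Putting it together, a minimal sketch (the helper name is mine; note that the SDK header spells the interface IXMLHTTPRequest2, and the instance is assumed to be already created via CoCreateInstance and Open()ed):

#include <msxml6.h> // declares IXMLHTTPRequest2

// Call this before Send(); it asks for identity encoding first and
// accepts gzip only as a low-priority fallback.
HRESULT preferIdentityEncoding(IXMLHTTPRequest2* xhr)
{
    return xhr->SetRequestHeader(L"Accept-Encoding",
                                 L"identity;q=1.0, gzip;q=0.5");
}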
Symptom
I think I messed something up, because both Mozilla Firefox and Google Chrome produce the same error: they don't receive the whole response the webserver sends them. cURL never misses; the last line of the quickly scrolling response is always "</html>".
Reason
The reason is that I send the response in multiple parts:
sendHeaders();                   // calls sendResponse() with a fixed header
sendResponse(html_opening_part);
for ( ...scan some data... ) {
    sendResponse(the_data);
} // for
sendResponse(html_closing_part);
The browsers stop receiving data between sendResponse() calls. Also, the webserver does not close() the socket until the very end.
(Why I do it this way: the program I'm writing is designed for a non-Linux system; it will run on an embedded computer that does not have much memory, most of which is occupied by the lwIP stack. So, to avoid assembling the - relatively - huge web page in one piece, I send it in parts. The browsers there like it; no broken HTML occurs the way it does under Linux.)
Environment
The platform is GNU/Linux (Ubuntu 32-bit with a 3.0 kernel). My small webserver sends the stuff back to the client in the standard way:
int sendResponse(char* data, int length) {
    int x = send(fd, data, length, MSG_NOSIGNAL);
    if (x == -1) {
        perror("this message never printed, so there's no error \n");
        if (errno == EPIPE) return 0;
        if (errno == ECONNRESET) return 0;
        ... panic() ... (never happened) ...
    } // if send()
    return x; // the number of bytes sent
} // sendResponse()
And here's the fixed header I am using:
sendResponse(
"HTTP/1.0 200 OK\n"
"Server: MyTinyWebServer\n"
"Content-Type: text/html; charset=UTF-8\n"
"Cache-Control: no-store, no-cache\n"
"Pragma: no-cache\n"
"Connection: close\n"
"\n"
);
Question
Is this normal? Do I have to send the whole response with a single send()? (Which I'm working on now, until a quick solution arrives.)
If you read RFC 2616, you'll see that you should be using CR+LF for the ends of lines.
Aside from that, open the browser developer tools to see the exact requests they are making. Use a tool like Netcat to duplicate the requests, then eliminate each header in turn until it starts working.
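One more thing to check while you are in there: send() on a blocking socket may write fewer bytes than requested. A partial-write loop avoids silently truncated responses; a minimal sketch (the sendAll name is mine, fd is the question's client socket):

#include <sys/socket.h>
#include <errno.h>

static int sendAll(int fd, const char* data, int length)
{
    int total = 0;
    while (total < length) {
        int x = send(fd, data + total, length - total, MSG_NOSIGNAL);
        if (x == -1) {
            if (errno == EPIPE || errno == ECONNRESET)
                return 0; /* peer went away, as in the original code */
            return -1;    /* unexpected error */
        }
        total += x;       /* account for a partial write */
    }
    return total;
}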
Gotcha!
As @Jim advised, I tried sending the same headers with cURL that Mozilla sends: fail, broken pipe, etc. I deleted half of the headers: okay. I added them back one by one: fail. I deleted another half of the headers: okay... So, there is an error only if the header is too long. Bingo.
As I said, there is a very small amount of memory in the embedded device. So I don't read the whole request header, only 256 bytes of it. I need only the GET params and the "Host" header (I don't really even need that; it's just to perform redirects with the same "Host" instead of the IP address).
So, if I don't recv() the whole request header, I cannot send() back the whole response.
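In hindsight the mechanism makes sense: if unread request bytes are still sitting in the socket buffer when the server close()s, the kernel answers with a TCP RST, and the client may throw away response data it has buffered but not yet read. A minimal sketch of draining the header first (hedged, not my original code; the helper name and the 8192-byte cap are made up):

#include <string>
#include <sys/socket.h>

// Read until the blank line that terminates the request header.
// fd is assumed to be the blocking client socket.
static bool drainRequestHeader(int fd, std::string& header)
{
    char buf[256];
    while (header.find("\r\n\r\n") == std::string::npos) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0)
            return false;       // error, or peer closed early
        header.append(buf, (size_t)n);
        if (header.size() > 8192)
            return false;       // sanity cap on header size
    }
    return true; // the GET line and Host: can now be parsed from header
}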
Thanks for your advices, dudes!
I am using URLOpenPullStream along with IBindStatusCallback and IHttpNegotiate callbacks to handle the negotiation, status, and data messages. The problem I have is when the content is gzip-encoded (e.g. Content-Encoding: gzip): the data I receive via OnDataAvailable is compressed, and I need the uncompressed data. I am using the BINDF_PULLDATA | BINDF_GETNEWESTVERSION | BINDF_NOWRITECACHE binding flags. I have read some posts that say it should support the gzip format.
I initially tried to change the Accept-Encoding request header to specify that I did not want gzip, but was unsuccessful. I can change or add headers in BeginningTransaction, but it fails to change Accept-Encoding. I was able to change the User-Agent and to add a new header, so the process works, but it would not override Accept-Encoding for some reason.
The other option is to un-gzip the data myself. In a quick test using a C++ gzip library, I was able to un-gzip the content, so this may be an option. If this is what I need to do, what is the best way to detect that the content is gzip? I noticed that I got an OnProgress event with BINDSTATUS_MIMETYPEAVAILABLE and the text set to "application/x-gzip-compressed". Is this how I should detect it?
I'm looking for any solution to get around this problem, but I do want to stay with URLOpenPullStream. This is a product that has already been released, and I wish to keep changes to a minimum.
I will answer my own question after more research. It seems that the website I am having the issue with is returning something incorrect, such that IE, FF, and URLOpenPullStream do not recognize it as valid gzip content. The headers appear to be fine, e.g.
HTTP/1.1 200 OK
Content-Type: text/html; charset=iso-8859-1
Content-Encoding: none
Server: Microsoft-IIS/6.0
MSNSERVER: H: COL102-W41 V: 15.4.317.921 D: 2010-09-21T20:29:43
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 4258
Date: Wed, 27 Oct 2010 20:48:15 GMT
Connection: keep-alive
Set-Cookie: xidseq=4; domain=.live.com; path=/
Set-Cookie: LD=; domain=.live.com; expires=Wed, 27-Oct-2010 19:08:15 GMT; path=/
Cache-Control: no-cache, no-store
Pragma: no-cache
Expires: -1
Expires: -1
but URLOpenPullStream just downloaded it in raw compressed format, IE reported an error when you tried to access the site, and FF showed garbage.
After doing a test with a site that does return valid gzip content (e.g. www.webcompression.org), IE, FF, and URLOpenPullStream all worked fine. So it appears that URLOpenPullStream does support gzip content, and in this case it was transparent: in OnDataAvailable I received the uncompressed data, and in OnResponse the headers did not show the Content-Encoding as gzip.
Unfortunately, this still did not solve my problem. I resolved it by checking the response headers in the OnResponse event: if the Content-Encoding was gzip, I set a flag, and when the download was complete I used zlib's gzip routines to uncompress the content. This seems to work fine, and should be fine for my rare case, since typically I will never see Content-Encoding: gzip in the OnResponse headers (URLOpenPullStream handles the uncompression transparently).
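For reference, this is roughly the shape of the zlib fallback described above; a hedged sketch rather than my production code (the 16 + MAX_WBITS window-bits value is what tells inflate() to expect a gzip header instead of a raw zlib stream):

#include <zlib.h>
#include <vector>

bool gunzipBuffer(const std::vector<unsigned char>& in,
                  std::vector<unsigned char>& out)
{
    z_stream strm = {};
    if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)
        return false;

    strm.next_in  = const_cast<Bytef*>(in.data());
    strm.avail_in = static_cast<uInt>(in.size());

    unsigned char chunk[16384];
    int rc = Z_OK;
    do {
        strm.next_out  = chunk;
        strm.avail_out = sizeof(chunk);
        rc = inflate(&strm, Z_NO_FLUSH);
        if (rc != Z_OK && rc != Z_STREAM_END) {
            inflateEnd(&strm); // corrupt or truncated input
            return false;
        }
        // append whatever inflate() produced this round
        out.insert(out.end(), chunk, chunk + (sizeof(chunk) - strm.avail_out));
    } while (rc != Z_STREAM_END);

    inflateEnd(&strm);
    return true;
}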
I'm serving some files locally over HTTP using QTcpSocket. My problem is that only wget downloads the file properly; Firefox adds four extra bytes to the end. This is the header I send:
HTTP/1.0 200 Ok
Content-Length: 382917;
Content-Type: application/x-shockwave-flash;
Content-Disposition: attachment; filename=file.swf;
This is the code used to send the response:
QTextStream os(socket);
os.setAutoDetectUnicode(true);
QString name = tokens[1].right(tokens[1].length() - 1);
QString resname = ":/" + name; // the served file is a Qt resource
QFile f(resname); f.open(QIODevice::ReadOnly);
os << "HTTP/1.0 200 Ok\r\n" <<
"Content-Length: " << f.size() << ";\r\n" <<
"Content-Type: application/x-shockwave-flash;\r\n" <<
"Content-Disposition: attachment; filename=" << name <<
";\r\n\r\n";
os.flush();
QDataStream ds(socket);
ds << f.readAll();
socket->close();
if (socket->state() == QTcpSocket::UnconnectedState)
{
delete socket;
}
As I stated above, wget gets it right and downloads the file properly. The problem is that Firefox (and my target application, a Flash ActiveX instance) doesn't.
The four extra bytes are always the same: 4E E9 A5 F4
Hex dump http://www.freeimagehosting.net/uploads/a5711fd7af.gif
My question is what am I doing wrong, and what should I change to get it right? Thanks in advance.
You should not be terminating the lines with semicolons. At first glance this seems like the most likely problem.
I don't know much about QDataStream (or Qt in general); however, a quick look at the QDataStream documentation mentions operator<<(char const*). If you are passing a null-terminated string to QDataStream, you are almost certainly going over the end of the final buffer.
Try using QDataStream::writeRawData().
If you remove the semicolons, the clients should at least read the correct number of bytes for the response and ignore the last four bytes.
I'd leave out "Content-Disposition" too. That's a MIME thing, not an HTTP thing.
So I found the whole solution to the question, and I think someone might need it, so here it is:
The first problem was the four extra bytes. The reason for this is that, according to the QDataStream documentation, "each item written to the stream is written in a predefined binary format that varies depending on the item's type". And as QFile::readAll() returns a QByteArray, QDataStream::operator<< wrote that object in the following format:
If the byte array is null: 0xFFFFFFFF (quint32)
Otherwise: the array size (quint32) followed by the array bytes, i.e. size bytes
(link)
So, the four extra bytes were the four bytes of the quint32 that denoted the array size.
The solution, according to janm's answer, was to use the writeRawData() function:
QDataStream ds(socket);
ds.writeRawData(f.readAll().data(), f.size());
wget probably got it right the first time because it strictly enforces the Content-Length field of the HTTP header, while apparently Firefox does not.
The second problem was that despite the right header and working sockets, the Flash player did not display the desired content at all. I experimented with various fields to make it work, and noticed that when uploaded to a real server it works all right. I copied the header from the server, and tada! It works. This is the header:
HTTP/1.1 200 OK
Server: Apache/2.2.15 (Fedora)
Accept-Ranges: bytes
Content-Length: 382917
Content-Type: application/x-shockwave-flash
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
At first I only tried setting the version to 1.1, but that didn't help. It's probably the keep-alive thing, but honestly, I don't care at all as long as it works :).
There shouldn't be any semicolons at the ends of the lines.