I make a request to a web service to download a file as an MTOM/XOP attachment; the file is an Excel (.xlsx) file.
The response in the SoapUI tool comes back 200 OK with the typical SOAP envelope, the attachment appears in the SoapUI attachments grid, and I can export it from the grid to a file, which verifies OK (it is my original Excel file).
The real question is about the dump file that got created: it is a garbled binary file, and I have no idea what its contents are, what format it is in, or whether it includes both the SOAP XML response and the attachment. More importantly, how can I decode it into something useful?
Got the answer. Rather than deleting the question, I'll leave it up here in case anyone else struggles with this as I did!
In the raw response in SoapUI we can see "Content-Encoding: gzip"; whether this header is present depends on the configuration of the web service / web server.
So after decoding the dump file with GZipStream (I used C#), I got an intelligible format, and I can see the original Excel file embedded in there!
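The same decoding can be sketched in Python's standard library (the asker used C#'s GZipStream; the idea is identical). This is a minimal sketch assuming the dump file is a plain gzip stream of the response body; the file paths are hypothetical placeholders:

```python
import gzip

def decode_gzip_dump(dump_path, out_path):
    """Decompress a gzip-encoded HTTP response body that was dumped to disk."""
    with gzip.open(dump_path, "rb") as src:
        data = src.read()            # raw bytes: SOAP envelope plus MTOM parts
    with open(out_path, "wb") as dst:
        dst.write(data)
    return data

# Hypothetical paths for illustration:
# decode_gzip_dump("response.dump", "response-decoded.txt")
```

After this pass the output should contain the readable multipart MIME response, with the attachment as one of the parts.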
I am getting a "Not a gzipped file" exception while retrieving a gzipped sitemap XML (tested on amazon.de).
According to the bug trackers, there used to be a bug regarding "Not a gzipped file".
I am using Python 2.7.3 and Scrapy 0.24.4.
Can anyone confirm this as a bug, or am I overlooking something?
UPDATE
I think this is some valuable information; I have already posted it on GitHub as well.
Possible bug: retrieving a gzipped sitemap XML (tested on amazon.de) fails.
Reproduce with:
1. Modify the gunzip method in /utils/gz.py to write the incoming data to a file.
2. Gunzip that file on the command line: the unzipped file contains garbled content.
3. Gunzip the file with the garbled content a second time, and you get the correct content.
I suspect that the content coming from the target server is already gzip-compressed, and that Scrapy has a bug causing its HTTP gzip decompression not to work properly, so a doubly compressed file arrives at the gunzip method in /utils/gz.py.
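The double-compression hypothesis is easy to check in plain Python (Python 3's gzip module is shown for brevity, even though the question uses 2.7): gzip a payload twice, gunzip once, and the result is still a gzip stream (it starts with the 0x1f 0x8b magic bytes, which looks "garbled" in an editor); gunzip a second time and the original content reappears, exactly matching the symptoms above.

```python
import gzip

def gunzip(data):
    """One decompression pass, like the gunzip helper in scrapy's utils/gz.py."""
    return gzip.decompress(data)

original = b"<urlset>example sitemap</urlset>"

# What a doubly compressed body would look like:
double_zipped = gzip.compress(gzip.compress(original))

once = gunzip(double_zipped)
assert once != original
assert once[:2] == b"\x1f\x8b"   # still a gzip stream: the "garbled" file

twice = gunzip(once)
assert twice == original          # second pass recovers the sitemap
```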
This is the first time I write on Stack Overflow. My question is the following.
I am trying to write a OneDrive C++ API based on the cpprest SDK (Casablanca) project:
https://casablanca.codeplex.com/
In particular, I am currently implementing read operations on OneDrive files.
Actually, I have been able to download a whole file with the following code:
http_client api(U("https://apis.live.net/v5.0/"), m_http_config);
api.request(methods::GET, file_id + L"/content").then([=](http_response response){
    return response.body();
}).then([=](istream is){
    streambuf<uint8_t> rwbuf = file_buffer<uint8_t>::open(L"test.txt").get();
    is.read_to_end(rwbuf).get();
    rwbuf.close();
}).wait();
This code basically downloads the whole file to the computer (file_id is the id of the file I am trying to download). Of course, I can then extract an input stream from the file and use it to read the file.
However, this could give me issues if the file is big. What I had in mind was to download a part of the file while the caller is reading it (and to cache that part in case he comes back).
Then, my question would be:
Is it possible, using the OneDrive REST API + cpprest, to download a part of a file stored on OneDrive? I have found that uploading files in chunks apparently is not possible (Chunked upload (resumable upload) for OneDrive?). Is this also true for downloads?
Thank you in advance for your time.
Best regards,
Giuseppe
OneDrive supports byte-range reads, so you should be able to request chunks of whatever size you want by adding a Range header.
For example,
GET /v5.0/<fileid>/content
Range: bytes=0-1023
This will fetch the first KB of the file.
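In cpprest you would set the same header on the request before sending it. As a quick standalone illustration of the ranged-download loop, here is a minimal sketch using only Python's standard library (the URL is a placeholder; the range arithmetic is the part that carries over):

```python
import urllib.request

def range_header(start, end):
    """Build an HTTP Range header value for bytes start..end inclusive."""
    return "bytes=%d-%d" % (start, end)

def fetch_chunk(url, start, end):
    """Fetch one byte range; a server honouring it replies 206 Partial Content."""
    req = urllib.request.Request(url)
    req.add_header("Range", range_header(start, end))
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read()

# e.g. first kilobyte, matching the example above:
# status, chunk = fetch_chunk("https://apis.live.net/v5.0/<fileid>/content",
#                             0, 1023)
```

A caller can then walk the file in fixed-size windows (0-1023, 1024-2047, ...) and cache each window, which is the read-side behaviour the question asks about.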
I am implementing an HTTP/1.0 server that processes GET and HEAD requests.
I've finished Date, Last-Modified, and Content-Length, but I don't know how to determine the Content-Type of a file.
It has to return "directory" for a directory (which I can detect using the stat() function); for a regular file, it should return text/html for a text or HTML file, and image/gif for an image or GIF file.
Should this be hard-coded, using the name of the file?
I wonder if there is any function to get this Content-Type.
You could either look at the file extension (which is what most web servers do; see e.g. the /etc/mime.types file), or you could use libmagic to automatically determine the content type by looking at the first few bytes of the file.
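The extension-based lookup can be sketched with Python's standard mimetypes module, which consults tables like /etc/mime.types; in the C server you would hand-roll the same suffix-to-type table. A minimal sketch, with "directory" as this server's convention for directories:

```python
import mimetypes
import os

def content_type(path):
    """Map a filesystem path to a Content-Type the way a simple server might."""
    if os.path.isdir(path):
        return "directory"                        # this server's convention
    guessed, _encoding = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"  # safe fallback for unknown suffixes
```

For example, content_type("index.html") yields "text/html" and content_type("logo.gif") yields "image/gif", matching the two cases the question names.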
It depends how sophisticated you want to be.
If the files in question are all properly named and there are only a few types to handle, a switch based on the file suffix is sufficient. Going to the extreme, making the right decision no matter what the file is would probably require either duplicating the functionality of the Unix file command or running it on the file in question (and then translating its output to the proper Content-Type).
I've read that NSURLConnection will automatically decompress a compressed (zipped) resource; however, I cannot find Apple documentation or an official statement anywhere that specifies the logic determining when this decompression occurs. I'm also curious how this relates to streamed data.
The Problem
I have a server that streams files to my app using chunked encoding, I believe. It is a WCF service. Incidentally, we're going with streaming because it should alleviate server load during high use, and also because our files are going to be very large (hundreds of MB). The files may be compressed or uncompressed. I think that in my case, because we're streaming the data, the Content-Encoding header is not available, and neither is Content-Length. I only see "Transfer-Encoding" = Identity in my response.
I am using the AFNetworking library to write these files to disk with AFHTTPRequestOperation's inputStream and outputStream. I have also tried AFDownloadRequestOperation, with similar results.
Now, the AFNetworking docs state that compressed files will automatically be decompressed (via NSURLConnection, I believe) after download, but this is not happening. I write them to my Documents directory with no problems, yet they are still zipped. I can unzip them manually as well, so the files are not corrupted. Do they not auto-unzip because I'm streaming the data and Content-Encoding is not specified?
What I'd like to know:
Why are my compressed files not decompressing automatically? Is it because of streaming? I know I could use another library to decompress them afterward, but I'd like to avoid that if possible.
When exactly does NSURLConnection decide to decompress a downloaded file automatically? I can't find this anywhere in the docs. Is it tied to a header value?
Any help would be greatly appreciated.
NSURLConnection will decompress automatically when an appropriate Content-Encoding (e.g. gzip) is present in the response headers. That's down to your server to arrange.
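Concretely, a response that NSURLConnection will transparently gunzip advertises its encoding explicitly. The header values below are illustrative only:

```http
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Encoding: gzip
Content-Length: 1482317
```

If the server streams with Transfer-Encoding: chunked, Content-Length disappears, but Content-Encoding can and should still be set. A trace showing "Transfer-Encoding: Identity" and no Content-Encoding, as described above, is consistent with the files arriving still zipped: the client has no signal that the body is compressed.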
The HTTP file and its contents are already downloaded and present in memory. I just have to pass the content to a decoder in GStreamer and play it. However, I am not able to find the connecting link between the two.
After reading the documentation, I understood that GStreamer uses souphttpsrc for downloading and parsing HTTP files. But in my case I have my own parser as well as my own file downloader, which takes the URL and returns the data in parts to be used by the decoder. I am not sure how to bypass souphttpsrc and use my parser instead, or how to link it to the decoder.
Please let me know if anyone knows how things can be done.
You can use appsrc, and push chunks of your data into the app source as needed.