Compression

Is it possible to POST data from the browser to the server in a compressed format?
If so, how can we do that?

Compressing data sent from the browser to the server is not natively supported by browsers.
You'll have to find a workaround, using a client-side language (maybe a JavaScript gzip implementation, or a Java applet, or ...). Be sure to show the user what the browser is doing and why it is taking some time.
I don't know the scope of your application, but on company websites you could simply restrict input to compressed files: ask your users to upload .zip/.7z/.rar/... archives.

The server->client responses can be gzip compressed automagically by the server.
Compressing the client->server messages is not standard, so it will require some work on your part. Take your very large POST data and compress it client-side, using JavaScript, then decompress it manually on the server side.
This is usually not worth doing unless bandwidth usage is a major bottleneck, since compression costs both time and CPU.
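If you do go this route, the server has to inflate the body by hand. Below is a minimal sketch (in C++ with zlib) of decompressing a gzip-encoded request body; it assumes the raw POST body has already been read into a buffer, and the error handling is deliberately simplified.

    // Minimal sketch: inflate a gzip-encoded request body with zlib.
    // Assumes the raw POST body has already been read into `compressed`.
    #include <zlib.h>
    #include <stdexcept>
    #include <string>
    #include <vector>

    std::string gunzip(const std::vector<unsigned char> &compressed)
    {
        z_stream strm{};                          // zalloc/zfree/opaque = null
        // 16 + MAX_WBITS tells zlib to expect a gzip header, not raw deflate.
        if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)
            throw std::runtime_error("inflateInit2 failed");

        strm.next_in  = const_cast<unsigned char *>(compressed.data());
        strm.avail_in = static_cast<uInt>(compressed.size());

        std::string out;
        unsigned char buf[16384];
        int ret = Z_OK;
        do {
            strm.next_out  = buf;
            strm.avail_out = sizeof(buf);
            ret = inflate(&strm, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END) {
                inflateEnd(&strm);
                throw std::runtime_error("inflate failed");
            }
            out.append(reinterpret_cast<char *>(buf), sizeof(buf) - strm.avail_out);
        } while (ret != Z_STREAM_END);

        inflateEnd(&strm);
        return out;                               // the original uncompressed POST data
    }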

Related

How can I send the client's bandwidth to the server?

Is there a way for a server (the web server behind a web page, or a file server) to know what bandwidth the client had during its last access or during a page/file request? Can this information be sent via a cookie or together with the page/file request?
I guess this is more of a theoretical question, since I want to know whether a server can provide a lower-resolution image to clients with poor bandwidth.
Yes; use the JavaScript Image onload event to time the download of a small image (like a logo), then do something useful with the result, such as downloading a larger image if the client has the bandwidth.

Most efficient solution for sending multiple files to a client from a web service?

I wonder what the community considers the most efficient solution (in terms of I/O and speed) for delivering multiple files back from a single request to a web service. The client is not a web browser.
The options I see so far:
Creating a zip archive and streaming it back to the client.
Base64-encoding the files and returning an array of strings that would need to be decoded by the client.
Using MIME multipart/related and sending MIME headers for each file in iteration, also potentially streamed back to the client (a framing sketch for this option appears at the end of this entry).
Maybe there are others I haven't considered?
CLARIFICATION:
Let's assume the files may be in the tens of megabytes, and that memory is around 4 GB, but there are likely other processes and/or simultaneous requests.
I think you need to consider the bindings (streaming) and transport protocols (SOAP, REST). How large is the average file?
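To make the multipart option more concrete, here is a rough C++ sketch of how such a body is framed. The boundary string, content types, and struct are made up for illustration; a real service would generate a unique boundary and stream each part instead of buffering the whole body in memory.

    // Rough sketch of framing several files as a multipart body.
    #include <string>
    #include <vector>

    struct FilePart {
        std::string name;
        std::string contentType;
        std::string data;   // raw bytes of the file
    };

    std::string buildMultipartBody(const std::vector<FilePart> &parts,
                                   const std::string &boundary)
    {
        std::string body;
        for (const FilePart &p : parts) {
            body += "--" + boundary + "\r\n";
            body += "Content-Type: " + p.contentType + "\r\n";
            body += "Content-Disposition: attachment; filename=\"" + p.name + "\"\r\n";
            body += "\r\n";
            body += p.data;                        // binary payload, no base64 needed
            body += "\r\n";
        }
        body += "--" + boundary + "--\r\n";        // closing boundary
        return body;
    }
    // The response's Content-Type header would then carry the same boundary,
    // e.g. multipart/related; boundary="<boundary>"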

How can I use compression with QHttp?

In an existing Qt application, the QHttp class is used to access data over the network. This communication is currently uncompressed, but the server supports compression (and browsers actually use it). How can I make QHttp accept compression?
QHttp is an obsolete class. Try using QNetworkAccessManager instead; it accepts compressed responses by default.
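For illustration, a minimal Qt 5 style sketch (the URL is a placeholder and the project needs QT += network); QNetworkAccessManager advertises gzip in Accept-Encoding by default and hands back the reply body already decompressed:

    // Minimal sketch: QNetworkAccessManager negotiates gzip transparently,
    // so the body read from the reply is already decompressed.
    #include <QCoreApplication>
    #include <QDebug>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QUrl>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QNetworkAccessManager manager;
        QNetworkRequest request(QUrl("http://example.com/data"));   // placeholder URL

        QNetworkReply *reply = manager.get(request);
        QObject::connect(reply, &QNetworkReply::finished, [&]() {
            QByteArray body = reply->readAll();                     // decompressed content
            qDebug() << "received" << body.size() << "bytes";
            reply->deleteLater();
            app.quit();
        });

        return app.exec();
    }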
According to these two sources, the server is only capable of sending compressed responses and the browser does the decompression, so I think you will have to use or implement a decompression function yourself:
http://www.http-compression.com/
QHttp & HTTP 1.1 Compression

Video streaming through web service and rendering - Any Issues?

We have a web service that sends video content in the response as binary (in different formats: asx, asf, ram, mpeg, mpg, mpe, qt, mov, avi, movie, wmv, smil, mp4, mxf, gxf, flv, 3gp, f4v, mj2, omf, dv, vob).
Do you see any performance issues if I have an intermediate application that makes a request to the web service to retrieve the video content and renders it in the browser?
Thanks
As long as the web service returns binary data directly, there will be no performance hit. If it is an XML or SOAP web service that wraps the whole thing in a SOAP envelope and base64-encodes it to make it all text, then you will not be able to play it directly, and it will have a big impact on bandwidth, CPU, and memory.
Also note that by serving the video directly instead of using a true streaming protocol, the user will only be able to seek within the portion downloaded so far. A streaming protocol like RTSP, RTMP, or one of the many varieties of HTTP streaming allows seeking to any part of the file and downloads only the part that is seeked to.

What would you use to implement a fast and lightweight file server?

I need, as part of a desktop application, a file server that should respond as fast as possible to file transfer requests (from remote clients, usually located on the same LAN). There will be many requests for small files. The server should provide both upload and download services.
I am not tied to any particular technology, so I am open to any programming language, toolkit, or library, as long as it can run on Windows.
My initial take is to go with a C/C++ implementation using Windows Sockets, or to use the services provided by libraries such as Boost (Asio or similar). I have also thought of Erlang, but I would have to learn it, so the performance benefits should justify the increased development time.
LATER EDIT: I appreciate the answers that say use FTP or HTTP or basically anything that already exists, but assuming you still want to write one from scratch, what would you do?
Why not just go with FTP? You should be able to find an adequate server implementation in any language, and client access libraries too.
It sounds like a lot of wheel-reinvention. Granted, FTP is not ideal, and has a few odd spots, but ... it's there, it's standard, well-known, and already very widely implemented.
For frequent uploads of small files, the fastest way would be to implement your own proprietary protocol, but that would require a considerable amount of work - and it would also be non-standard, meaning future integration would be difficult unless you can implement your protocol in every client you will support. If you choose to do it anyway, this is my suggestion for a simple protocol:
Command: 1 byte identifying what is to be done (0x01 for upload request, 0x02 for download request, 0x11 for upload response, 0x12 for download response, etc.).
File name: can be fixed-size or prefixed with a length byte (assuming the name is shorter than 255 bytes).
Checksum: MD5, for instance (if upload request or download response).
File size (if upload request or download response).
Payload (if upload request or download response).
This could be implemented on top of a simple TCP socket. You could also use UDP, avoiding the cost of establishing a connection, but then you have to deal with retransmission yourself. A minimal sketch of this framing is shown below.
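Here is a minimal C++ sketch of that framing, packing an upload request into a byte buffer that would then be written to a TCP socket. The command codes follow the list above; the 16-byte MD5 digest and the big-endian 64-bit size field are illustrative choices.

    // Minimal sketch of the framing described above. The resulting buffer
    // would be send()'d over a TCP socket; the receiver parses the same layout.
    #include <cstdint>
    #include <string>
    #include <vector>

    enum Command : uint8_t {
        UploadRequest    = 0x01,
        DownloadRequest  = 0x02,
        UploadResponse   = 0x11,
        DownloadResponse = 0x12,
    };

    std::vector<uint8_t> frameUpload(const std::string &fileName,
                                     const uint8_t md5[16],
                                     const std::vector<uint8_t> &payload)
    {
        std::vector<uint8_t> msg;
        msg.push_back(UploadRequest);                          // command (1 byte)
        msg.push_back(static_cast<uint8_t>(fileName.size()));  // name length (<= 255)
        msg.insert(msg.end(), fileName.begin(), fileName.end());
        msg.insert(msg.end(), md5, md5 + 16);                  // checksum

        uint64_t size = payload.size();                        // file size, big-endian
        for (int shift = 56; shift >= 0; shift -= 8)
            msg.push_back(static_cast<uint8_t>(size >> shift));

        msg.insert(msg.end(), payload.begin(), payload.end()); // file contents
        return msg;
    }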
Before deciding to implement your own protocol, take a look at HTTP libraries such as libcurl; you could have your server use standard HTTP methods like GET for download and POST for upload. This would save a lot of work, and you would be able to test downloads with any web browser.
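As a sketch of the HTTP route, downloading one file with libcurl's easy interface looks roughly like this (the URL and output path are placeholders; an upload would be done with an HTTP POST in the same style):

    // Sketch: fetch a file over HTTP with libcurl (GET). By default libcurl
    // fwrite()s the body to the FILE* given via CURLOPT_WRITEDATA.
    #include <cstdio>
    #include <curl/curl.h>

    int main()
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        CURL *curl = curl_easy_init();
        FILE *out  = std::fopen("download.bin", "wb");   // placeholder output path
        if (curl && out) {
            curl_easy_setopt(curl, CURLOPT_URL,
                             "http://localhost:8080/files/report.bin");  // placeholder URL
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);

            CURLcode res = curl_easy_perform(curl);
            if (res != CURLE_OK)
                std::fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));
        }
        if (out)  std::fclose(out);
        if (curl) curl_easy_cleanup(curl);

        curl_global_cleanup();
        return 0;
    }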
Another suggestion to improve performance is to use something like SQLite as the file repository instead of the filesystem. You can create a single table containing a text column for the file name and a blob column for the file contents. Since SQLite is lightweight and caches efficiently, you will avoid the disk-access overhead most of the time.
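A short sketch of that idea with the SQLite C API follows; the table and column names are made up, and the table would be created once with CREATE TABLE IF NOT EXISTS files(name TEXT PRIMARY KEY, data BLOB).

    // Sketch: store a file as a blob in SQLite instead of on the filesystem.
    #include <sqlite3.h>
    #include <string>
    #include <vector>

    bool storeFile(sqlite3 *db, const std::string &name,
                   const std::vector<unsigned char> &contents)
    {
        const char *sql = "INSERT OR REPLACE INTO files(name, data) VALUES(?, ?);";

        sqlite3_stmt *stmt = nullptr;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
            return false;

        sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
        sqlite3_bind_blob(stmt, 2, contents.data(),
                          static_cast<int>(contents.size()), SQLITE_TRANSIENT);

        bool ok = (sqlite3_step(stmt) == SQLITE_DONE);
        sqlite3_finalize(stmt);
        return ok;
    }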
I'm assuming you don't need client authentication.
Finally: although C++ is your preference because it gives you raw native-code speed, that is rarely the major bottleneck in this kind of application; most probably it will be disk access and network bandwidth. I mention this because in Java you could probably write a servlet that does exactly the same thing (HTTP GET for download and POST for upload) in less than 100 lines of code. Use Derby instead of SQLite in that case, put the servlet in any container (Tomcat, Glassfish, etc.), and it's done.
If all the machines are running on Windows on the same LAN, why do you need a server at all? Why not simply use Windows file sharing?
I would suggest not using FTP, SFTP, or any other connection-oriented technique. Instead, go for a connectionless protocol or technique.
The reason is that, if you require lots of small files to be uploaded or downloaded, and the response should be as fast as possible, you want to avoid the cost of setting up and destroying connections.
I would suggest that you look at either using an existing implementation or implementing your own HTTP or HTTPS server/service.
Your bottlenecks are likely to come from one of the following sources:
Harddisk I/O - The WD VelociRaptor is supposed to have a random access speed of about 100 MB/s. It also matters whether you set your disks up as RAID 0, 1, 5, or whatnot; some configurations read fast but write slowly. Trade-offs.
Network I/O - Assuming you have the fastest harddisks in a fast RAID setup, your network will be slow unless you use gigabit I/O. And even if your pipes are big, you still need to supply them with data.
Memory cache - The in-memory file-system cache will need to be big enough to buffer all the network I/O so that it does not slow you down. That will require large amounts of memory for the kind of workload you're looking at.
File-system structure - Assuming you have gigabytes worth of memory, the bottleneck will most likely be the data structure you use for the file system. If the file-system structure is cumbersome, it will slow you down.
Only once all the other problems are solved should you worry about your application itself. Notice that most of the bottlenecks are outside your software's control, so whether you code it in C/C++ or use specific libraries, you will still be at the mercy of the OS and hardware.
It sounds like you should use an SFTP (SSH) server: it's firewall/NAT-safe, secure, and already does what you want and more. You could also use Samba or Windows file sharing for an even simpler implementation.
Why not use something that already exists? For example, a normal web server handles lots of small files (images) very well and very fast.
And lots of people have already spent time optimizing the code.
A second benefit is that the transfer is done over HTTP, which is an established protocol and is easily switched to SSL if you need more security.
Uploads are also no problem with a script or custom module, and with the same method you can add authorization.
As long as you don't need to seek within the files dynamically, I guess this would be one of the best solutions.
Is this a new part of an existing desktop application? What's the goal of the server? Is it protecting the files that are uploaded/downloaded and providing authentication and/or authorization? Does it provide some kind of structure for the uploads to be stored in?
One option may be to install Apache HTTP Server on the machine and serve the files via that. Use POST to upload and GET to download.
If the clients are within a LAN could you not just share a drive?