I want to transfer files (upload and download) between clients and a server. Which approach would be better for me? Any recommendations?
One way would be to use HTTP, with the client sending different parameters in the request and the server sending or requesting a file depending on the result of some algorithm.
Another way might be to implement a web service on the LAMP server and then use RPCs to decide whether to upload or download.
I have basic knowledge of web services.
Thanks
Can anyone explain the difference between chunking and streaming, and which method should be preferred when uploading big files from an iPad to a WCF REST service? Right now we get a timeout error when uploading big files from the iPad and we'd like to fix this. Our key requirement is that the WCF service should know whether the whole file was uploaded or not, so that if the client for some reason can't upload the whole file, WCF does not perform any operation on the uploaded content (as far as I understand, a streaming upload won't allow us to implement this).
Some more questions that confuse me:
1) How do both of these modes work in terms of HTTP?
2) I found that in chunked mode there is a "Transfer-Encoding: chunked" header in the first request. Then the client sends chunks within separate requests to the server and a final zero-length request. Do I need to set the Transfer-Encoding header in every request? What other headers should be used? (See the wire-format sketch after this list of questions.)
3) Do I need to send only one HTTP request in streaming mode?
4) Do I need to tell WCF Service somehow that I'm sending streamed content?
5) Let's say the default connection timeout for the WCF service is 30 seconds. How does this timeout affect the streaming and chunking modes?
6) Can anyone explain in short how both of these modes should be implemented on the server and the client? (No code required, just a high-level description.) The more I read on this topic, the more confused I get.
Many thanks!
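For reference on question 2: in HTTP chunked transfer encoding, all the chunks travel inside a single request. Each chunk is prefixed with its length in hex, a zero-length chunk terminates the body, and the Transfer-Encoding header is sent once, in that one request's headers. Here is a minimal raw-socket sketch of what that looks like on the wire (the host, path, and payload are placeholders, not anything from the question):

    import socket

    # A chunked HTTP/1.1 upload written by hand over a raw socket.
    HOST, PORT = "example.com", 80
    body_parts = [b"first chunk of data", b"second chunk"]

    request = (
        b"POST /upload HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Transfer-Encoding: chunked\r\n"   # sent once, in the single request
        b"\r\n"
    )
    for part in body_parts:
        # each chunk: <length in hex>\r\n<bytes>\r\n
        request += format(len(part), "x").encode() + b"\r\n" + part + b"\r\n"
    request += b"0\r\n\r\n"                 # zero-length chunk ends the body

    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(request)
        print(s.recv(4096).decode(errors="replace"))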
I have a web service which performs the submission of a small amount of data. It provides a synchronous request response service for my clients. This is working well. I have a new requirement to also support the submission of a much larger amount of the same data; about 10,000 times more data volume. Naturally the larger data will be an asynchronous service for my clients.
The infrastructure I use for the small amount of data cannot support both types of service; the large volume submissions will kill the responsiveness of my small volume submissions.
What I would like to do is be flexible with my deployment and make life simple for the people developing the client software which submits the data. I have been looking for a standards based way to do this:
- client calls my data submission web service
- server determines the amount of data being submitted
- if the data is too big, the server responds to the client with a different URI; the URI is for the client to do the submission, i.e. redirect the client to bigger infrastructure
- client calls the different URI and gets service (a rough sketch of this flow follows the list)
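For what it's worth, HTTP itself offers a standards-based hook for this: a 307 Temporary Redirect preserves the request method and body, so a POSTed submission can be bounced to a totally different place. A rough client-side sketch using Python's third-party requests library (the URLs and payload are placeholders):

    import requests

    # Placeholder endpoint for the normal (small-volume) submission service.
    SMALL_URL = "https://api.example.com/submit"

    payload = {"records": ["r1", "r2"]}  # whatever the submission body is

    # requests follows redirects by default; on a 307 it preserves the
    # method and body, replaying the same POST against the Location
    # header the server returned (the big-infrastructure URI).
    resp = requests.post(SMALL_URL, json=payload, allow_redirects=True)
    print(resp.status_code, resp.url)  # resp.url is where it finally landed

The obvious cost is that the large body is transmitted once before the redirect fires; a cheap variant is a small preflight call that only reports the size and gets the target URI back.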
I've done some searching and the general response is that this isn't something that is done in web services. I don't understand why; this seems like a reasonable requirement that is probably also true for clustered server scenarios.
Does anyone know if there are standards which cover this? If not, is there a better way?
A subtlety in my case is that I want all the traffic to flow differently for the large submission, so I can't simply front my infrastructure with a content-aware web service proxy. I need to push the web service call to a totally different place, much like an HTTP redirect.
Any help is appreciated.
I am writing an application, similar to SETI@home, that allows users to run processing on their home machines and then upload the result to the central server.
However, the final result is maybe a 10K binary file. (The processing to achieve this output takes several hours.)
What is the simplest reliable automatic method to upload this file to the central server? What do I need to do server-side to prevent blocking? Perhaps having the client send mail is simple and reliable? NB the client program is currently written in Python, if that matters.
Email is not a good solution; you will run into potential ISP blocking and other anti-spam mechanisms.
The easiest way is over HTTP via a simple web service. Have a listener at your server that accepts the uploaded files as part of an HTTP POST and then dumps them wherever they need to be post-processed.
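Since the client is already in Python, here is a minimal sketch of such an upload using the third-party requests library (the endpoint URL, form field name, and retry policy are all assumptions for illustration):

    import time
    import requests

    UPLOAD_URL = "https://results.example.com/upload"  # placeholder endpoint

    def upload_result(path, client_id, attempts=5):
        """POST the result file, retrying with backoff; after hours of
        computation you want the upload to eventually succeed on its own."""
        for attempt in range(attempts):
            try:
                with open(path, "rb") as f:
                    resp = requests.post(
                        UPLOAD_URL,
                        files={"result": f},            # multipart/form-data
                        data={"client_id": client_id},  # identify the worker
                        timeout=60,
                    )
                resp.raise_for_status()  # treat non-2xx responses as failures
                return
            except requests.RequestException:
                time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
        raise RuntimeError("upload failed after %d attempts" % attempts)

    upload_result("result.bin", client_id="worker-42")

On the server side, any framework that can read a multipart POST and write the file to disk will do; a 10K payload is far too small to block anything.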
Ok so coming in from a completely different field of software development, I have a problem that's a little out of my experience. I'll state it as plainly as possible without giving out confidential details:
I want to make a server that "does stuff" when requested by a client on the same network. The client will most likely be a back-end to a content management system.
The request consists of some parameters, an input file and several output files.
The files are quite large, from 10MB to 100MB of data that must be processed (possibly more). The client can specify the destination for output files.
The client needs to be able to find out the status of the request - eg position in queue, percent complete. And obviously when and where to pick up output.
So, my questions are - What is a good method for the client and server to communicate? Should the client poll the server, or provide a "callback" somehow for status updates?
At this point the implementation platform is completely open; anything from C to scripting languages like Ruby is available (at either end). My main issue is how the communication should occur.
First thought: set up some web services between the machines. But web services aren't going to be too friendly or efficient with the large files.
Simple approach (a rough client-side sketch follows the steps):
ServerA hits a web method on ServerB, "BeginProcess". The response gives you back an FTP location, username/password, and a ticket number.
ServerA delivers the files to the FTP location.
ServerA regularly polls a web method "GetProcessStatus(ticketNumber)"; possible return values: Awaiting files, Percent complete, Finished.
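Assuming those web methods are exposed as plain HTTP endpoints returning JSON, the client side might look roughly like this (every URL, field name, and status value here is invented for illustration):

    import time
    import ftplib
    import requests

    SERVER_B = "http://serverb.example.com"  # placeholder base URL

    # 1. BeginProcess: get the FTP credentials and a ticket number.
    job = requests.post(SERVER_B + "/BeginProcess").json()

    # 2. Deliver the input files over FTP with the returned credentials.
    with ftplib.FTP(job["ftp_host"], job["ftp_user"], job["ftp_password"]) as ftp:
        with open("input.dat", "rb") as f:
            ftp.storbinary("STOR input.dat", f)

    # 3. Poll GetProcessStatus(ticketNumber) until the job is done.
    while True:
        status = requests.get(
            SERVER_B + "/GetProcessStatus", params={"ticket": job["ticket"]}
        ).json()
        if status["state"] == "Finished":
            break
        time.sleep(30)  # "Awaiting files" / percent complete in the meantime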
Slightly more complicated approach, without the polling (a sketch of the callback listener follows these steps):
ServerA hits a web method on ServerB, "BeginProcess(postUrl)", sending along a URL you want status updates POSTed to. Response: FTP location, username/password, and ticket number.
ServerA delivers the files to the FTP location.
ServerB POSTs updates through to that location on ServerA every XXX% completed.
For extra resilience you would keep the GetProcessStatus in case something gets lost in the ether...
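On ServerA's side, the postUrl only needs a small HTTP listener; a minimal standard-library sketch (the port and payload format are arbitrary choices):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StatusHandler(BaseHTTPRequestHandler):
        """Receives the progress updates ServerB POSTs to our postUrl."""
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            update = self.rfile.read(length)  # e.g. ticket number + percent done
            print("status update:", update.decode(errors="replace"))
            self.send_response(200)
            self.end_headers()

    # ServerA would hand http://servera.example.com:8080/ to BeginProcess.
    HTTPServer(("", 8080), StatusHandler).serve_forever()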
Files of up to 100MB aren't a good fit for a web service, since you run the risk of the HTTP session timing out before you have completed your processing.
Having a web service for checking the status of these jobs is a better fit. Handle the file transfers via FTP or whatever file transfer method you choose, and poll a web service for status updates. When the process is complete, you might return an output file URL that can be downloaded.
My company is planning on implementing a remote programming tool to configure embedded devices in the field. I assumed that these devices would have an HTTP client on them, and planned to implement some REST services for them to access. Unfortunately, I found out that they have a TCP stack but no HTTP client. One of my co-workers suggested that we try to send “soap packets” over port 80 without an HTTP client. The devices also don’t have any SOAP client. Is this possible? Would there be implications if there was a web server running on the network the devices are connected to? I’d appreciate any advice or best practices on how to implement something like this.
If your servers are serving simple files, the embedded devices really only need to send an HTTP GET request (possibly with a little extra data identifying the device, so the server can know which firmware version to send).
From there, it's pretty much a simple matter of reading the raw data coming in on the embedded device's socket -- you might need to only disregard the HTTP header on the response, or you could possibly configure your server to not send it for those requests.
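To make that concrete, here is the raw exchange in Python, just to illustrate the bytes; on the device it would be the same sequence written through the C sockets API (the host, path, and device-identifying query parameter are placeholders):

    import socket

    # A hand-rolled HTTP GET: the same bytes an embedded device would
    # write to its TCP socket.
    HOST, PATH = "firmware.example.com", "/firmware?device=1234"

    request = (
        "GET " + PATH + " HTTP/1.1\r\n"
        "Host: " + HOST + "\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

    with socket.create_connection((HOST, 80)) as s:
        s.sendall(request)
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk

    # Everything after the first blank line is the payload; the device can
    # simply skip past the response headers.
    headers, _, body = raw.partition(b"\r\n\r\n")
    print("payload bytes:", len(body))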
You don't really need an HTTP client per se. HTTP is a very simple text-based protocol that you can implement yourself if you need to.
That said, you probably won't need to implement it yourself. If they have a TCP stack and a standard sockets library, you can probably find a simple C library (such as this one) that wraps up HTTP or SOAP functionality for you. You could then just build that library into your application.
Basic HTTP is not a particularly difficult protocol to implement by hand. It's a text- and line-based protocol, save for the payload, and servers work quite well with "primitive, ham-fisted" clients, which is all a simple client needs to be.
If you can get by with just a subset, which is likely, then simply write it and be done.
You can implement a trivial HTTP client over sockets (here is an example of how to do it in Ruby: http://www.tutorialspoint.com/ruby/ruby_socket_programming.htm ).
It probably depends on what technology you have available on your embedded devices; if you can easily consume JSON or XML, then a web service approach using the above may work for you.