I have some software which periodically uploads data to a remote location. At the remote location (maintained by a customer or a supplier, so somewhat out of my control), there's a script running which detects changes to a file's size or last-modified time-stamp and, if either has changed, passes the updated information on to other systems.
If I want to trigger the remote server to pass that information on without deleting and then re-uploading the contents, is there a way I can just 'touch' the file to change its last-modified date?
I'm using the EnterpriseDT FTP Pro (.NET) module to do the uploading from a C++/CLI application on a Windows platform.
I just tried resuming a transfer and transferring 0 bytes, which appears to do exactly what I need. I'll leave this Q&A up in case anyone else is interested in doing the same.
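For anyone wanting to do the same outside of EnterpriseDT's library, here is a rough sketch of the zero-byte-resume trick using Apache Commons Net's FTPClient in Java. The host, credentials, and path are placeholders, and whether the server actually bumps the time-stamp on a zero-byte restarted STOR depends on the server implementation, so test against your customer's setup first:

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class FtpTouch {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");         // placeholder host
        ftp.login("user", "password");          // placeholder credentials
        ftp.enterLocalPassiveMode();
        ftp.setFileType(FTP.BINARY_FILE_TYPE);

        String remotePath = "data/upload.csv";  // placeholder path

        // Find the current size so the "resume" starts at the very end of the file.
        FTPFile[] matches = ftp.listFiles(remotePath);
        if (matches.length == 1) {
            ftp.setRestartOffset(matches[0].getSize());
            // Resume the transfer and send zero bytes: the contents are untouched,
            // but many servers update the last-modified time-stamp.
            try (InputStream empty = new ByteArrayInputStream(new byte[0])) {
                ftp.storeFile(remotePath, empty);
            }
        }
        ftp.logout();
        ftp.disconnect();
    }
}

If the server supports the MFMT extension, ftp.sendCommand("MFMT", "20240101120000 " + remotePath) sets the modification time directly and avoids relying on the resume behavior.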
I have a REST service with a simple GET and POST method in Java EE. The POST method saves the received JSON to a file using Gson and a FileWriter. On my local system the file is saved in C:\Users...\Documents\Glassfish Domains\Domain\config. The GET method reads this file and returns the JSON.
When I test this on my local system using Postman, everything works fine, but when I deploy the project on an Ubuntu Server VM with Glassfish installed, I am able to connect but I get an HTTP 500 Internal Server Error. I managed to find out that the errors are thrown when the FileReader/FileWriter tries to do its work. I suspect that access to this directory is restricted on a real Glassfish instance.
So my question is whether there is a file path where I am allowed to write a file and read it afterwards. This file has to stay there (at least while the application runs) and has to be the same for every request (a scheduler writes some stuff into the file every 24 hours). If anyone has a simple alternative for saving the JSON in Java EE without an extra database instance, that would be helpful, too :)
If you have access to the server, then you can create a directory owned by the Glassfish server user. Configure this path in some property file in your application and then use this property for reading and writing the file. This way you can configure different directory paths in different environments, as sketched below.
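For illustration, a minimal sketch of that idea as a JAX-RS resource. The property name data.dir, the app.properties file, the data.json file name, and the /opt/appdata fallback are all assumptions for the example, not Glassfish conventions:

import com.google.gson.Gson;
import com.google.gson.JsonObject;
import java.io.*;
import java.util.Properties;
import javax.ws.rs.*;

@Path("/data")
public class DataResource {

    // Load the configured directory once; app.properties ships on the classpath
    // and can differ per environment.
    private static final String DATA_DIR = loadDataDir();

    private static String loadDataDir() {
        Properties props = new Properties();
        try (InputStream in = DataResource.class.getResourceAsStream("/app.properties")) {
            props.load(in);
        } catch (IOException e) {
            throw new IllegalStateException("Cannot read app.properties", e);
        }
        return props.getProperty("data.dir", "/opt/appdata");
    }

    @POST
    @Consumes("application/json")
    public void save(String json) throws IOException {
        // Round-trip through Gson so only valid JSON lands on disk.
        JsonObject parsed = new Gson().fromJson(json, JsonObject.class);
        try (FileWriter writer = new FileWriter(new File(DATA_DIR, "data.json"))) {
            new Gson().toJson(parsed, writer);
        }
    }

    @GET
    @Produces("application/json")
    public String load() throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new FileReader(new File(DATA_DIR, "data.json")))) {
            String line;
            while ((line = reader.readLine()) != null) sb.append(line);
        }
        return sb.toString();
    }
}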
We are implementing the SMB2 protocol. In order to show previous file versions, the client sends an SMB2 IOCTL request with a CtlCode of FSCTL_SRV_ENUMERATE_SNAPSHOTS. We send a response as described in http://download.microsoft.com/download/9/5/E/95EF66AF-9026-4BB0-A41D-A4F81802D92C/%5BMS-SMB2%5D.pdf, section 3.3.5.15.1, Handling an Enumeration of Previous Versions Request.
When I click on Properties -> Previous Versions for a directory, it shows the previous versions we returned, but for files it doesn't show anything. I checked that we return the same response for both files and directories.
Why doesn't it work for files? How are files and directories different with regard to previous versions? What other requests need to be supported to view previous versions of a file in a Windows client?
I've sniffed some localhost communication when opening directory/file properties (Previous Versions tab). I found that the client sends CreateFile requests ([MS-SMB2] 2.2.13, SMB2 CREATE Request) with SMB2_CREATE_TIMEWARP_TOKEN ([MS-SMB2] 2.2.13.2.7) in CreateContexts. The client gets the list of snapshots and then cycles through the timestamps, issuing a CREATE request with each timestamp in SMB2_CREATE_TIMEWARP_TOKEN.
Presumably the client tries to open the file from the different snapshots, compares them using the file modification time, and then displays all the distinct versions.
This may be either behavior particular to one Windows flavor or a bug in your server. We tested with our NQ Storage server and it worked well for both files and folders when the client was Windows 2012. We tested with several other Windows versions, but I cannot currently recall which ones. Honestly, we did not test snapshots with very many Windows flavors.
If you take a network capture, it can give you a hint as to which side (client or server) is at fault.
A vendor's remote system has data that one of our internal systems needs daily. Our system currently receives the data daily via the vendor's system pushing a CSV file over SFTP. The data is < 1KB in size.
We are considering using a pull via SFTP instead. The file "should" always be ready no later than a defined time (5 ET). So, one problem with this approach could be that our system may have to do some polling to eventually get the file.
How should a system get data from a remote third party data source? The vendor also provides a web service and subscription feed service. They will also consider other ideas for us to acquire the data.
Assuming your system is Unix-like and the other side has an SSH server running, I would add the public key of the user that your application runs under to the authorized_keys file on the remote side. After this, your application would be able to poll for the existence of an updated file by running:
ssh username_at_remote_end@ip_address_of_remote stat -c %Z path_to_file
This will output the seconds since the Unix epoch of the last change to the file (if found), or fail with a non-zero exit code if the file is not found.
To actually retrieve the file (after checking that the time-stamp is within the last 24 hours), I would use
t=$(mktemp -d) && scp username_at_remote_end@ip_address_of_remote:path_to_file "$t" && echo "$t"
This will copy it into a temporary directory under /tmp, readable only by the user that your application is running under, and will print the name of that directory.
All programming languages support running commands locally (in C, using system(); in Java, using a Process; ...). To keep things simple, each command would be added to a script file (say poll.sh and retrieve.sh). If the remote end changes, you only have to update and test the scripts. There are direct interfaces to OpenSSH, but it is probably simpler to outsource all of that work to the shell via scripts as seen above; a Java sketch of the calling side follows below.
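As a concrete example of the Java route, a minimal sketch that runs poll.sh via ProcessBuilder, checks the age of the remote file, and then runs retrieve.sh. The script names and the 24-hour window come from this answer; the rest is an assumption:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RemotePoller {
    public static void main(String[] args) throws Exception {
        // poll.sh wraps: ssh username_at_remote_end@ip_address_of_remote stat -c %Z path_to_file
        Process poll = new ProcessBuilder("./poll.sh").start();
        String output;
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(poll.getInputStream()))) {
            output = reader.readLine();
        }

        if (poll.waitFor() == 0 && output != null) {
            long mtime = Long.parseLong(output.trim());
            long ageSeconds = System.currentTimeMillis() / 1000 - mtime;
            if (ageSeconds < 24 * 3600) {
                // File changed within the last 24 hours: fetch it.
                new ProcessBuilder("./retrieve.sh").inheritIO().start().waitFor();
            }
        } else {
            System.err.println("Remote file not found or ssh failed");
        }
    }
}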
If you have similar requirements for more than one case, you can consider using an integration server (middleware) to implement this. There you can design a trigger which will invoke that particular pull after 5 ET.
If this is required for only one case, then ask your vendor about the web service option, where you can call their web service once a day after 5 ET by sending a SOAP request for the data, and they will return a SOAP response rather than a CSV. You can implement it very easily in your system. It will be more secure and efficient, and you will have more control over the data, transport, and security.
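As a rough sketch of what the daily SOAP pull could look like in Java (the endpoint URL, SOAPAction, and request body are hypothetical; the vendor's WSDL defines the real ones):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class DailySoapPull {
    public static void main(String[] args) throws Exception {
        String envelope =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soapenv:Body><GetDailyData/></soapenv:Body>"
          + "</soapenv:Envelope>";

        HttpURLConnection conn = (HttpURLConnection)
            new URL("https://vendor.example.com/dataService").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", "\"GetDailyData\"");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
        // Read conn.getInputStream() here to parse the SOAP response.
    }
}

Schedule it once a day after 5 ET with cron or a scheduler in your own system.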
I am writing an application, similar to SETI@home, that allows users to run processing on their home machines and then upload the result to the central server.
However, the final result is maybe a 10K binary file. (The processing to achieve this output takes several hours.)
What is the simplest reliable automatic method to upload this file to the central server? What do I need to do server-side to prevent blocking? Perhaps having the client send mail is simple and reliable? NB the client program is currently written in Python, if that matters.
Email is not a good solution; you will run into potential ISP blocking and other anti-spam mechanisms.
The easiest way is over HTTP via a simple web service. Have a listener at your server that accepts the uploaded files as part of an HTTP POST and then dumps them wherever they need to be post-processed.
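A minimal sketch of such a listener using the JDK's built-in HTTP server; the port, the /upload path, and the /var/uploads directory are assumptions for the example:

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.file.*;

public class UploadListener {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/upload", exchange -> {
            if (!"POST".equals(exchange.getRequestMethod())) {
                exchange.sendResponseHeaders(405, -1);
                exchange.close();
                return;
            }
            // Dump the raw request body into a uniquely named file for later processing.
            Path dest = Files.createTempFile(Paths.get("/var/uploads"), "result-", ".bin");
            try (InputStream body = exchange.getRequestBody()) {
                Files.copy(body, dest, StandardCopyOption.REPLACE_EXISTING);
            }
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();
    }
}

The Python client then only needs a single HTTP POST of the file body (for example with urllib or the requests library).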
Ok so coming in from a completely different field of software development, I have a problem that's a little out of my experience. I'll state it as plainly as possible without giving out confidential details:
I want to make a server that "does stuff" when requested by a client on the same network. The client will most likely be a back-end to a content management system.
The request consists of some parameters, an input file and several output files.
The files are quite large, from 10MB - 100MB of data that must be processed (possibly more). The client can specify destination for output files.
The client needs to be able to find out the status of the request - eg position in queue, percent complete. And obviously when and where to pick up output.
So, my questions are - What is a good method for the client and server to communicate? Should the client poll the server, or provide a "callback" somehow for status updates?
At this point the implementation platform is completely open - anything from C to scripting languages like Ruby are available (at either end), my main issue is how the communication should occur.
First thought, set up some webservices between the machines. But webservices aren't going to be too friendly or efficient with the large files.
Simple approach:
ServerA hits a web method on ServerB, "BeginProcess". The response gives you back an FTP location, a username/password, and a ticket number.
ServerA delivers the files to the FTP location.
ServerA regularly polls a web method, "GetProcessStatus(ticketNumber)"; possible return values: Awaiting files, Percent complete, Finished. A minimal sketch of this polling loop follows below.
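Here is that polling loop sketched in Java, assuming GetProcessStatus is exposed as a plain HTTP GET returning a one-line status; the URL and parameter name are placeholders, and the real transport depends on how ServerB publishes the web method:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class StatusPoller {
    public static void main(String[] args) throws Exception {
        String ticket = args[0];  // ticket number returned by BeginProcess
        while (true) {
            URL url = new URL("http://serverb.example.com/GetProcessStatus?ticket=" + ticket);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            String status;
            try (Scanner in = new Scanner(conn.getInputStream())) {
                status = in.hasNextLine() ? in.nextLine() : "";
            }
            System.out.println("Status: " + status);
            if ("Finished".equals(status)) break;
            Thread.sleep(30_000);  // poll every 30 seconds
        }
    }
}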
Slightly more complicated approach, without the polling.
ServerA hits a web method on ServerB, "BeginProcess(postUrl)", sending along a URL you want status updates POSTed to. Response: an FTP location, a username/password, and a ticket number.
ServerA delivers the files to the FTP location.
ServerB sends updates to the POST location on ServerA every XXX% completed.
For extra resilience you would keep the GetProcessStatus in case something gets lost in the ether...
Files that will be up to 100MB aren't a good choice for a web service, since you run the risk of the HTTP session timing out before you have completed your processing.
Having a web service for checking the status of these jobs would be a better fit. Handle the file transfers via FTP or whatever file transfer method you choose, and poll a web service for status updates. When the process is completed, it might return an output file URL that can be downloaded.