I have a REST service with simple GET and POST methods in Java EE. The POST method saves the received JSON to a file using Gson and a FileWriter. On my local system the file is saved in C:\Users...\Documents\Glassfish Domains\Domain\config. The GET method reads this file and returns the JSON.
When I test this on my local system using Postman, everything works fine, but when I deploy the project to an Ubuntu Server VM with GlassFish installed, I can connect but I get an HTTP 500 Internal Server Error. I managed to find out that the errors are thrown when the FileReader/FileWriter does its work. I suppose access to this directory is restricted on a real GlassFish instance.
So my question is whether there is a file path where I am allowed to write a file and read it afterwards. The file has to stay there (at least while the application runs) and has to be the same for every request (a scheduler writes some data into the file every 24 hours). If anyone has a simple alternative for saving the JSON in Java EE without an extra database instance, that would be helpful, too :)
If you have access to the server, you can create a directory owned by the user the GlassFish server runs as. Configure this path in a property file in your application and use that property for reading and writing the file, as in the sketch below. This way you can configure different directory paths in different environments.
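As a minimal sketch of that approach, assuming a classpath file app.properties with a data.dir entry (both names are placeholders, not from the original post):

import com.google.gson.Gson;
import java.io.*;
import java.util.Properties;

public class JsonStore {
    private final File dataFile;
    private final Gson gson = new Gson();

    public JsonStore() throws IOException {
        // app.properties only carries the environment-specific directory,
        // e.g. data.dir=/var/lib/myapp (the key name is a placeholder)
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/app.properties")) {
            props.load(in);
        }
        File dir = new File(props.getProperty("data.dir"));
        if (!dir.isDirectory() && !dir.mkdirs()) {
            throw new IOException("Cannot create data directory: " + dir);
        }
        dataFile = new File(dir, "data.json");
    }

    public void save(Object payload) throws IOException {
        try (Writer w = new FileWriter(dataFile)) {
            gson.toJson(payload, w);   // same Gson/FileWriter combination as in the question
        }
    }

    public <T> T load(Class<T> type) throws IOException {
        try (Reader r = new FileReader(dataFile)) {
            return gson.fromJson(r, type);
        }
    }
}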
A vendor's remote system has data that one of our internal systems needs daily. Our system currently receives the data daily by the vendor's system pushing a CSV file via SFTP. The data is < 1KB in size.
We are considering using a pull via SFTP instead. The file "should" always be ready no later than a defined time (5 ET). So, one problem with this approach could be that our system may have to do some polling to eventually get the file.
How should a system get data from a remote third party data source? The vendor also provides a web service and subscription feed service. They will also consider other ideas for us to acquire the data.
Assuming your system is Unix-like and the other side runs an SSH server, I would add the public key of the user your application runs as to the authorized_keys file on the remote side. After this, your application can poll for the existence of an updated file by running
ssh username_at_remote_end@ip_address_of_remote stat -c %Z path_to_file
This outputs the seconds since the Unix epoch of the last change to the file (if found), or fails with a non-zero exit code if the file is not found.
To actually retrieve the file (after checking that the timestamp is within the last 24 hours), I would use
t=$(mktemp -d) && scp username_at_remote_end@ip_address_of_remote:path_to_file "$t" && echo "$t"
This copies the file into a temporary directory under /tmp, readable only by the user your application runs as, and prints the name of that directory.
All programming languages support running commands locally (in C with system(); in Java with a Process; ...). To keep things simple, each command would go into its own script (say poll.sh and retrieve.sh); if the remote end changes, you only have to update and test the scripts. There are direct interfaces to OpenSSH, but it is probably simpler to outsource all of that work to bash via scripts as seen above. A possible Java caller is sketched below.
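For example, a rough Java caller for the two scripts above (the parsing details are my own assumptions):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RemoteFilePoller {

    // Runs a script and returns its first line of output, or null on failure.
    private static String run(String script) throws Exception {
        Process p = new ProcessBuilder("bash", script).redirectErrorStream(true).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            return p.waitFor() == 0 ? line : null;
        }
    }

    public static void main(String[] args) throws Exception {
        String mtime = run("poll.sh");           // seconds since the epoch, or null
        if (mtime == null) return;               // file not there yet, poll again later
        long ageSeconds = System.currentTimeMillis() / 1000L - Long.parseLong(mtime.trim());
        if (ageSeconds < 24 * 60 * 60) {
            String tempDir = run("retrieve.sh"); // prints the temp directory holding the copy
            System.out.println("File fetched into " + tempDir);
        }
    }
}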
If you have similar requirements for more than one case, you can consider using an integration server (middleware) to implement this. There you can design a trigger that invokes that particular pull after 5 ET.
If this is required for only one case, ask your provider about the web service option. You can call their web service once a day after 5 ET by sending a SOAP request for the data, and they return a SOAP response rather than a CSV file. You can implement this very easily in your system, and it will be more secure and efficient: you have more control over the data, transport, and security. A sketch of such a daily pull follows.
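A sketch of what such a daily pull could look like in Java; the endpoint URL and the GetDailyData body are placeholders that the vendor's actual WSDL would define, and for brevity the schedule fires every 24 hours from startup rather than computing the next 5 ET:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DailySoapPull {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(DailySoapPull::pull, 0, 24, TimeUnit.HOURS);
    }

    static void pull() {
        String envelope =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soapenv:Body><GetDailyData/></soapenv:Body></soapenv:Envelope>";
        try {
            HttpURLConnection con =
                (HttpURLConnection) new URL("https://vendor.example.com/dataService").openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            con.setDoOutput(true);
            try (OutputStream out = con.getOutputStream()) {
                out.write(envelope.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Vendor responded with HTTP " + con.getResponseCode());
            // Parse the SOAP response body here instead of the CSV file.
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}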
I am using ColdFusion 8.
I am trying to write a file to a networked path on Windows.
// THIS WORKS
CatalogDirectory = getDirectoryFromPath("E:\INETPUB\WWWROOT\AVCATALOGS\AVCAT\");
// THIS DOES NOT WORK
CatalogDirectory = getDirectoryFromPath("\\ourserver\e$\InetPub\wwwroot\AVCATALOGS\");
I can't find any good documentation on what you CAN'T do.
// TOO VAGUE
http://livedocs.adobe.com/coldfusion/8/htmldocs/help.html?content=functions_e-g_36.html
Is there a way to copy a file from one server to another over networked drives?
You must be running ColdFusion as a network user, and that user must have permission to access the server you are connecting to. By default the ColdFusion service on Windows runs as the local SYSTEM account, which has no rights on UNC paths like \\ourserver\e$.
What I'm trying to design is a system whereby a user can upload a compiled .hex file to a web site, and that web server then sends it on to another server which has a microprocessor connected to it via USB. The web server would then trigger a script on that server which loads the .hex file onto the micro.
So my question is: is it possible for a web server to trigger a shell script or C/Java program on another (trusted) machine?
Yes, it's possible. Web servers often support invocation of scripts, usually through the CGI interface. These scripts can perform the required interaction with the other server. How this is implemented depends on the web server and its configuration; a sketch of the receiving end follows.
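For illustration, a hedged sketch of the receiving end as a Java servlet on the trusted machine; the script path /opt/flash/flash.sh and the /flash URL are placeholders:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.*;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

@WebServlet("/flash")
public class FlashServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Save the raw request body as the firmware image.
        Path hex = Files.createTempFile("firmware-", ".hex");
        try (InputStream in = req.getInputStream()) {
            Files.copy(in, hex, StandardCopyOption.REPLACE_EXISTING);
        }
        // Hand the file to the flashing script; it talks to the micro over USB.
        Process p = new ProcessBuilder("/opt/flash/flash.sh", hex.toString())
                .inheritIO().start();
        try {
            resp.setStatus(p.waitFor() == 0 ? 200 : 500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            resp.setStatus(500);
        }
    }
}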
Whether a microcontroller is involved, and how it is connected to the second server, is not related to this question, so I suggest removing the microcontroller tag. It's also doubtful whether this is related to shell at all.
We have a Django app that needs to post messages and upload files from the web server to another server via an XML API. We need to do X asynchronous file uploads and then make another XML API request when they have finished uploading. I'd also like the files to stream from disk without having to load them completely into memory first. Finally, I need to send the files as application/octet-stream in a POST body (rather than a more typical form data MIME type) and I wasn't able to find a way to do this with urllib2 or httplib.
I ended up integrating Twisted into the app. It seemed perfect for this task, and sure enough I was able to write a beautifully clean implementation with deferreds for each upload. I use my own IBaseProducer to read the data from the file in chunks and send it to the server in a POST request body. Unfortunately I then found out that the Twisted reactor cannot be restarted, so I can't just run it and stop it whenever I want to upload files. Since Twisted is apparently used more for full-blown servers, I'm now wondering whether this was the right choice.
I'm not sure if I should:
a) Configure the WSGI container (currently I'm testing with manage.py) to start a Twisted thread on startup and use blockingCallFromThread to trigger my file uploads.
b) Use Twisted as the WSGI container for the Django app. I'm assuming we'll want to deploy later on Apache and I'm not sure what the implications are if we take this route.
c) Simply scrap Twisted and use some other approach for the file uploads. Kind of a shame, since the Twisted approach with deferreds is elegant and works.
Which of these should we choose, or is there some other alternative?
Why would you want to deploy later on Apache? Twisted is rad. I would do (b) until someone presented specific, compelling reasons not to. Then I would do (a). Fortunately, your application code looks the same either way. blockingCallFromThread works fine whether Twisted is your WSGI container or not - either way, you're just dealing with running code in a separate thread than the reactor is running in.
I am writing an application, similar to SETI@home, that allows users to run processing on their home machines and then upload the result to the central server.
However, the final result is maybe a 10K binary file. (The processing to produce this output takes several hours.)
What is the simplest reliable automatic method to upload this file to the central server? What do I need to do server-side to prevent blocking? Perhaps having the client send mail is simple and reliable? NB the client program is currently written in Python, if that matters.
Email is not a good solution; you will run into potential ISP blocking and other anti-spam mechanisms.
The easiest way is over HTTP via a simple web service. Have a listener on your server that accepts the uploaded file as the body of an HTTP POST and dumps it wherever it needs to be post-processed; a sketch follows.
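As an illustration, a minimal listener using the JDK's built-in com.sun.net.httpserver; the port and spool directory are placeholders, and the Python client would simply POST its binary result to /upload:

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.*;

public class ResultCollector {
    public static void main(String[] args) throws Exception {
        Path spool = Files.createDirectories(Paths.get("/var/spool/results"));
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/upload", exchange -> {
            // Dump the raw body into the spool directory; a separate job
            // post-processes the files, so the request never blocks on that work.
            Path dest = Files.createTempFile(spool, "result-", ".bin");
            try (InputStream in = exchange.getRequestBody()) {
                Files.copy(in, dest, StandardCopyOption.REPLACE_EXISTING);
            }
            byte[] ok = "received".getBytes();
            exchange.sendResponseHeaders(200, ok.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(ok);
            }
        });
        server.start();
    }
}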