We are implementing the SMB2 protocol. To show previous file versions, the client sends an SMB2 IOCTL request with a CtlCode of FSCTL_SRV_ENUMERATE_SNAPSHOTS. We send a response as described in section 3.3.5.15.1, "Handling an Enumeration of Previous Versions Request", of http://download.microsoft.com/download/9/5/E/95EF66AF-9026-4BB0-A41D-A4F81802D92C/%5BMS-SMB2%5D.pdf
When I click Properties -> Previous Versions on a directory, it shows the previous versions we returned, but for files it doesn't show anything. I have checked that we return the same response for both files and directories.
Why doesn't it work for files? How do files and directories differ with respect to previous versions? What other requests must be supported to view previous versions of a file in the Windows client?
I've sniffed some localhost traffic while opening the Previous Versions tab of directory and file properties. It turns out the client sends Create requests ([MS-SMB2] 2.2.13, SMB2 CREATE Request) with an SMB2_CREATE_TIMEWARP_TOKEN ([MS-SMB2] 2.2.13.2.7) in CreateContexts. The client gets the list of snapshots and then cycles through the timestamps, issuing a Create request with each timestamp in the SMB2_CREATE_TIMEWARP_TOKEN.
Presumably the client tries to open the file from the different snapshots, compares the file modification times to find changes, and then displays all the distinct versions.
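For reference, the body of the enumeration response is the SRV_SNAPSHOT_ARRAY structure ([MS-SMB2] 2.2.32.2). Below is a minimal packing sketch, as I read the spec (the token value is illustrative); a wrong SnapShotArraySize, or tokens not in the @GMT-YYYY.MM.DD-HH.MM.SS form, are worth double-checking on the server side:

```python
import struct

def pack_snapshot_array(tokens):
    """Pack an [MS-SMB2] 2.2.32.2 SRV_SNAPSHOT_ARRAY response body.

    tokens are snapshot labels such as "@GMT-2013.01.01-00.00.00";
    each is encoded as a NUL-terminated UTF-16LE string.
    """
    snapshots = "".join(t + "\x00" for t in tokens).encode("utf-16-le")
    header = struct.pack(
        "<III",
        len(tokens),     # NumberOfSnapShots
        len(tokens),     # NumberOfSnapShotsReturned
        len(snapshots),  # SnapShotArraySize, in bytes
    )
    return header + snapshots

body = pack_snapshot_array(["@GMT-2013.01.01-00.00.00"])
```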
This may be either behavior peculiar to one Windows flavor or a bug in your server. We tested with our NQ Storage server and it worked well for both files and folders when the client was Windows Server 2012. We tested with several other Windows versions, but I cannot currently recall which ones. Honestly, we did not test snapshots with very many Windows flavors.
If you take a capture, it can give you a hint as to which side (client or server) is at fault.
What existing options are there to make the client's GET of "myserver://api/download/12345.jpg" download from some_cloudfront_server://files/12345.jpg without redirecting the client to that CloudFront path? I.e., I want the client to see only myserver://api/download/12345.jpg the whole time.
It has to be some kind of tunneling solution, as downloading the full file to the Django server first and then sending it on to the client is not workable (it takes so long that the client hits a timeout before a response to its query arrives). Are there any existing libraries for this? If I have to create one myself, I'd welcome even just tips on where to start, as Django's communication layer is not too familiar to me.
The problem is that we create CloudFront signatures with wildcards over a certain set of files, in the format files/<object_id>*, allowing the client direct CloudFront access only to all files of a given object. This works fine as long as file-access traffic from clients is low, but if we start creating separate access signatures for a hundred different files at the same time, CloudFront starts throttling our requests. The solution I came up with is to create and store on the Django server one generic allow-all signature for files/*, used only by the Django server and never given to any client, and then let the Django server decide whether it should fetch files for the client. So I can't give the client a CloudFront path with an allow-all signature, but I can channel CloudFront data through Django's endpoint to the client without exposing the signature.
The environment I'm working with is Django 1.11 and Django REST Framework 3.4's ViewSets.
A colleague heard about my problem and mentioned that signatures can be generated locally on the server, so no CloudFront connection is needed each time a signature is created, and we can keep the signatures file-specific. He took this task for himself, so I don't know the details yet.
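For what it's worth, that local approach is exactly how CloudFront signed URLs work: you build a "custom policy" JSON document, base64-encode it with CloudFront's URL-safe character substitutions (+ to -, = to _, / to ~), and RSA-sign it with your CloudFront key pair, all without contacting AWS. A sketch of the policy-building and encoding half (the resource path and expiry are illustrative; the RSA signing step is omitted):

```python
import base64
import json
import time

def cloudfront_policy(resource, expires_epoch):
    # Custom policy granting access to `resource` until expiry.
    # Wildcards are allowed, e.g. ".../files/12345*".
    return json.dumps(
        {
            "Statement": [
                {
                    "Resource": resource,
                    "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
                }
            ]
        },
        separators=(",", ":"),  # CloudFront expects no whitespace in the policy
    )

def cloudfront_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64 variant: + -> -, = -> _, / -> ~
    return base64.b64encode(data).decode("ascii").translate(str.maketrans("+=/", "-_~"))

policy = cloudfront_policy("https://dxxx.cloudfront.net/files/12345*", int(time.time()) + 3600)
encoded = cloudfront_b64(policy.encode("utf-8"))
```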
I'm using the following line to download a file, and when I do, it doesn't download the most recent version of the file.
HRESULT hr = URLDownloadToFile(NULL, _T("http://example.com/users.txt"), _T("users.txt"), 0, NULL);
On the first run, users.txt has 3 names in it; if you remove a name and run the program again, it still downloads a file with 3 names.
I'm using remove("users.txt"); to delete the file prior to the download.
This is probably operating-system specific; at the very least you need a library for the HTTP client side.
You need to read a lot more about the HTTP protocol; the formulation of your question suggests you don't yet understand much about it.
On some OSes (notably Linux and other POSIX-compliant ones), you can use libcurl, which is a good free-software HTTP client library.
URLDownloadToFile seems to be a Windows-specific function. Did you read its documentation carefully? It returns an error code: do you check hr and handle failures? Note also that URLDownloadToFile goes through the WinINet (Internet Explorer) cache, so it may hand you a stale cached copy; removing the cache entry with DeleteUrlCacheEntry before downloading forces a fresh fetch.
You can only get what the HTTP protocol (the web server's response to your GET request) gives you: mostly the MIME type of the content, the content size, and the content bytes (plus content encoding, etc.). The fact that the content contains 3 names is your interpretation of it.
Try to read more about the HTTP protocol and understand what is really going on. Are any cookies or sessions involved? Did you try using something like telnet to perform the HTTP exchange manually? Are you able to show and understand it? What is the HTTP response code?
If you have access to the server (e.g. via ssh) and are able to look into its log files, try to understand what exchanges happened and what HTTP status code was sent back. Perhaps set up a Linux box locally for initial tests, or set up an HTTP server locally and use http://localhost/, etc.
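As a concrete way to practice that advice, here is a self-contained sketch (the 3-name payload and names are made up) that serves a fixed users.txt from a local HTTP server and inspects the status, headers, and body of the exchange:

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    # A stand-in for the remote server; serves a fixed 3-name users.txt.
    def do_GET(self):
        payload = b"alice\nbob\ncarol\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging

def fetch(url):
    # Return the parts of the HTTP exchange worth inspecting.
    with urllib.request.urlopen(url) as resp:
        return resp.status, dict(resp.headers), resp.read()

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
status, headers, body = fetch(f"http://127.0.0.1:{server.server_address[1]}/users.txt")
server.shutdown()
```

Once you can see the status code and headers for your real server, it becomes much easier to tell whether the stale content comes from the server or from a client-side cache.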
I am a little new to ActiveMQ, so please bear with me.
I am trying to take advantage of the ActiveMQ priority backup feature for some of my Java and C++ applications. I have two brokers on two different servers (local and remote), and I want the following behavior for my apps:
Always connect to local broker on startup
If local broker goes down, connect to remote
While connected to remote, if local comes back up, reconnect to local.
I have had success testing this in the Java apps by simply adding priorityBackup to my URI options,
i.e.
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true
However, things aren't going as smoothly on the C++ side.
The following works fine in the C++ apps (with basic failover functionality, i.e. jumping to remote when local goes down):
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false
But updating the URI options with priorityBackup seems to break failover functionality completely: my apps never fail over to the remote broker; they just stay in some kind of broker-less limbo state when their local broker goes down.
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true
Is there anything I am missing here? Are there extra URI options that I should have included?
UPDATE: Transport connector info
<transportConnectors>
<transportConnector name="ClientOpenwire" uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=7000"/>
<transportConnector name="Broker2BrokerOpenwire" uri="tcp://0.0.0.0:62627?wireFormat.maxInactivityDuration=5000"/>
<transportConnector name="stompConnector" uri="stomp://0.0.0.0:62623"/>
</transportConnectors>
The backup and priorityBackup parameters are handled in completely different ways in the Java and C++ implementations of the library.
The Java implementation works well, but unfortunately the C++ implementation is broken. There are no extra options that can fix this issue; serious changes to the library are required to resolve it.
I tested this issue using activemq-cpp-library 3.8.3 and brokers in various versions (5.10.0 and 5.11.1). The issue is not fixed in the 3.8.4 release.
When developing apps, your server and your iPhone app both evolve, and backward compatibility is not always possible.
I guess adding a protocol version number to every request should do the trick (rather than, say, a separate web service for the protocol version).
Amazon does it in every request (here is a sample):
https://forums.aws.amazon.com/message.jspa?messageID=269876
So, when the app is too old for the current protocol, it shows a message blocking the app and asking the user to update. Otherwise, customers with an old app will see an app that doesn't work very well, and that's not good.
My question is: how do you implement a similar versioning scheme in a JSON reply without interfering with the parser, object mapping, and entity mapping? Any suggestions?
Perhaps there is another scheme, like passing the app version in the headers of every request; on error, the server then returns an error message like this one (from Twitter):
{"errors":[{"message":"Sorry, that page does not exist","code":34}]}
I'd like to know how you solve this problem.
Regards.
You can version your API using the media-type. For instance, if your media-type is application/vnd.ricardo+json and you need to make a change that is not backwards compatible, you would create an application/vnd.ricardo-v2+json media type.
New versions of the app will send Accept: application/vnd.ricardo-v2+json and get the new content. Old versions of the app will continue to send Accept: application/vnd.ricardo+json and will never see the incompatible change.
When you want to retire old versions of the app, simply have requests that only accept application/vnd.ricardo+json return 406 Not Acceptable. You can then use this in your app to trigger logic that prompts the user to upgrade.
If (for whatever reason) you need the app to support multiple versions of the server, you can have it send Accept: application/vnd.ricardo-v2+json, application/vnd.ricardo+json; q=0.5. In this situation, the server will respond with application/vnd.ricardo-v2+json content if it can, and application/vnd.ricardo+json otherwise.
You could use this to give yourself greater control over the "release" of a new feature. For instance, you could release an update for the app and use request statistics to determine when a sufficient number of users have upgraded. You can then coordinate releasing the new feature by flipping a feature toggle on your servers and running a marketing effort at the same time.
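The q-weighted Accept header above works because the server performs standard HTTP content negotiation. A toy sketch of that server-side choice, reusing the media-type names from this answer (the parsing is deliberately simplified compared to the full RFC 7231 grammar):

```python
def parse_accept(header):
    # Return [(media_type, q)] sorted by descending preference.
    # Simplified: only the q parameter is honored.
    prefs = []
    for part in header.split(","):
        fields = [f.strip() for f in part.split(";")]
        q = 1.0
        for f in fields[1:]:
            if f.startswith("q="):
                q = float(f[2:])
        prefs.append((fields[0], q))
    return sorted(prefs, key=lambda p: -p[1])

def negotiate(header, supported):
    # Pick the most-preferred media type the server supports.
    for media_type, _q in parse_accept(header):
        if media_type in supported:
            return media_type
    return None  # no match: respond 406 Not Acceptable

accept = "application/vnd.ricardo-v2+json, application/vnd.ricardo+json; q=0.5"
```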
I'm in the process of creating a utility to back up users' media files. The media isn't being shared; it's only a backup utility.
I'm trying to think of the best way to protect users from ISPs accusing them of downloading illegal media files, by using some sort of secure connection.
The utility is written in C++ using the Qt library, and so far I've only been able to find the QtSslSocket component for secure connections. The domain already has a valid SSL certificate for the next few years.
Can anyone suggest the best way to go about implementing this from both the server and the client side? I.e., what does the server need to have in place, and is there anything in particular the backup utility needs to implement on the client side to ensure secure transfers?
Are there any known, stable sftp or ftps servers available?
As far as I know, Qt doesn't have support for secure FTP transfers.
I'm not sure what other information would be useful to make the question clearer, but any advice or help pointing me in the right direction is most welcome.
EDIT: I'm also Java-competent, so a Java solution will work just as well.
As Martin wrote, you can wrap a command-line client. But if you don't want to do that, you can use libssh.
I searched for some sort of solution to this for a couple of days and then forgot about the problem. Then today I stumbled across this little gem in the Qt Creator source: Utils::ssh, which includes support for SFTP, plain-old SSH, and all sorts of goodies.
Disentangling stuff from Qt Creator can be a pain, but having gone through the process, it amounts to grabbing Botan (one of the other libs in Qt Creator) plus Utils.
When it rains, it pours: I found two solutions to this problem within an hour. One is http://nullget.sourceforge.net/ (the site requires translation from Chinese), but from their summary:

NullGet is written with Qt, runs on multiple platforms, and is a multi-threaded, multi-protocol download tool with a GUI. With NullGet you can easily download a variety of network protocol data streams at high download speeds; the protocols currently supported are HTTP, HTTPS, FTP, MMS, and RTSP. It can run on most currently popular operating systems, including Windows, Linux, FreeBSD, and so on.
The easiest way would be to just wrap a command-line sftp client with a Qt front end.
On the server side, any SSH server should do sftp pretty much out of the box.
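A sketch of the "wrap a command-line sftp client" idea (a Qt front end would do the equivalent via QProcess; the host, user, and paths here are placeholders, and key-based authentication is assumed so no password prompt appears):

```python
import os
import subprocess
import tempfile

def sftp_batch(local_path, remote_path):
    # Commands for OpenSSH sftp's -b (batch) mode.
    return f"put {local_path} {remote_path}\nquit\n"

def sftp_upload(user, host, local_path, remote_path):
    # Drive the stock OpenSSH sftp client; returns its exit code.
    with tempfile.NamedTemporaryFile("w", suffix=".batch", delete=False) as f:
        f.write(sftp_batch(local_path, remote_path))
        batch_file = f.name
    try:
        return subprocess.run(["sftp", "-b", batch_file, f"{user}@{host}"]).returncode
    finally:
        os.unlink(batch_file)
```

Batch mode makes the transfer scriptable and non-interactive, which is what a GUI wrapper needs: on a non-zero exit code the front end can report the failure instead of hanging on a prompt.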
As Synthesizerpatel says, Qt Creator implements SFTP. So I have isolated the library that contains the SSH and SFTP support and created a new project named QSsh on GitHub (https://github.com/lvklabs/QSsh). The aim of the project is to provide SSH and SFTP support for any Qt application.
I have written an example of how to upload a file using SFTP in examples/SecureUploader/.
I hope it might be helpful.