Is DataCache's Add method 'timeout' parameter sliding in nature - AppFabric

I am implementing AppFabric for distributed caching and need to know whether the timeout provided in one of the Add overloads is sliding or not, i.e. whether it is extended with every access, and, if possible, how to change the setting from sliding to absolute.

The MSDN documentation doesn't clearly specify, but I would expect that if it used a sliding expiry it would have discussed it in the comments, and therefore I believe it to be an absolute timeout.

Is there a way to query if a message filter is already in effect?

Due to enhanced security requirements, in order to use the WM_COPYDATA message in modern versions of Windows you first need to call the ChangeWindowMessageFilter() function with MSGFLT_ADD to allow the message through the filter.
See MSDN ChangeWindowMessageFilter()
Is there a way to query if it already is allowed (without using SendMessage() or PostMessage() to wait and see if it comes through)?
The answer is yes. I researched it on MSDN.
Use ChangeWindowMessageFilterEx instead of ChangeWindowMessageFilter, which is due to be deprecated anyway. Pass in a pChangeFilterStruct to receive the extended result. See https://learn.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-changefilterstruct
Check whether ExtStatus contains the value MSGFLTINFO_ALREADYALLOWED_FORWND.
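A minimal sketch of that approach, assuming a Win32 build targeting Windows 7 or later (_WIN32_WINNT >= 0x0601) and that hwnd is the window that should receive WM_COPYDATA:

#include <windows.h>
#include <iostream>

// Allows WM_COPYDATA for the window and reports whether it was already allowed.
bool AllowCopyDataAndReport(HWND hwnd)
{
    CHANGEFILTERSTRUCT cfs = {};
    cfs.cbSize = sizeof(cfs);

    // MSGFLT_ALLOW adds WM_COPYDATA to this window's filter; ExtStatus tells us
    // what the state was before the call.
    if (!ChangeWindowMessageFilterEx(hwnd, WM_COPYDATA, MSGFLT_ALLOW, &cfs))
        return false;

    if (cfs.ExtStatus == MSGFLTINFO_ALREADYALLOWED_FORWND)
        std::cout << "WM_COPYDATA was already allowed for this window\n";
    return true;
}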

How to log more frequently than evaluating with `ray.tune.Trainable`

I am interested in using the tune library for reinforcement learning and I would like to use the in-built tensorboard capability. However, the metric that I am using to tune my hyperparameters is based on a time-consuming evaluation procedure that should be run infrequently.
According to the documentation, it looks like the _train method returns a dictionary that is used both for logging and for tuning hyperparameters. Is it possible to perform logging more frequently within the _train method? Alternatively, could I return the values that I wish to log from the _train method but, some of the time, omit the expensive-to-compute metric from the dictionary?
One option is to use your own logging mechanism in the Trainable. You can log to the trial-specific directory (Trainable.logdir). If this conflicts with the built-in Tensorboard logging, you can disable it by passing loggers=None to tune.run().
Another option is, as you mentioned, to omit the expensive-to-compute metric from the dictionary some of the time. If you run into issues with that, you can also return None as the value for the metrics that you don't plan to compute in a particular iteration.
Hope that helps!

libtorrent speed greater than limits

I'm trying to use the libtorrent library and I have the following problem.
I create torrents with default settings and set limits on them (e.g., 100 KB/s) like this:
torrent_handle.set_download_limit(limit);
And when I query the speed of the current torrent:
torrent_handle.status ().download_payload_rate
sometimes I get a value greater than the limit (e.g., about 200 KB/s or 300 KB/s).
What's wrong? Why aren't my torrent limits applied?
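For reference, a minimal sketch of the two calls described above (not the asker's actual code; exact headers and signatures vary a bit between libtorrent versions). One thing worth noting is that set_download_limit() takes its limit in bytes per second, so 100 KB/s is 100 * 1024, not 100:

#include <libtorrent/torrent_handle.hpp>
#include <libtorrent/torrent_status.hpp>

// 'h' is a handle to a torrent already added to a session.
void limit_and_report(libtorrent::torrent_handle h)
{
    // The limit is expressed in bytes per second, so 100 KB/s is 100 * 1024.
    h.set_download_limit(100 * 1024);

    // download_payload_rate is the current payload-only download rate in bytes/s.
    libtorrent::torrent_status st = h.status();
    int rate = st.download_payload_rate;
    (void)rate;
}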

Disable application after expiry date for trial

I am writing a simple application for a semi-trusted client, and have no say on certain specifics. The client must be given a copy of a binary, myTestApp, which makes use of proprietary code in an external library, libsecrets. It is a Windows application that will run on a few separate Windows 7 laptops. I have been informed that after the application has served its purpose, it will be deleted. I know there is no perfect solution to this, but I would like to implement an expiry date in the program, and hinder efforts to potentially reverse engineer the code, or at least to prevent the contents of libsecrets from being exposed too easily.
So, my first step will be to statically link myTestApp against libsecrets so that everything is contained in one binary, only the needed pieces of libsecrets are included in the final binary, and its interfaces are no longer published.
Second, I want to implement some sort of getTime mechanism that is not naive. Is there anything in Windows that does a "secure" getTime call, so it can't be tricked by changing the time in the system tray or the BIOS?
Thirdly, if there is no "secure" getTime call, I could also modify myTestApp to use NTP to query a trusted time server, and fail if it can't get the time from it or the trial period has elapsed. But this could be fooled by messing with DNS on the gateway, unless there is some sort of certificates mechanism in place to verify the time server. I don't know much about this though, and would need some suggestions on how to implement it.
Next, is there some way to alter the binary so that it is impractical for individuals to attempt to reverse engineer it by viewing the assembly code? Maybe some sort of wrapper that encrypts the binary and requires a third-party authentication tool? Or maybe some sort of certificate I create that is required to run it and expires later?
Finally, is there any software out there (i.e., packaging or publishing software) that can do this for me, either by repacking the final .exe or as some sort of plugin for Microsoft Visual Studio?
Thank you all in advance.
Edit: This is NOT meant to be a bulletproof system, and if it fails, that is acceptable. I just want to make it inconvenient for a non-technical person to attempt to crack. The people using it are technical Luddites, and the only way the software would be cracked is if they hired someone to do it. Since the names and the company name are watermarked into the application, and only one person could benefit from its use, it's unlikely they would redistribute it.
You can't make things completely secure, but you can make them harder to break.
Packing with UPX adds some level of complexity for an attacker.
You can check at runtime if you're running under a debugger in several places or if you're running under a virtual machine.
You can encrypt a DLL you're using and load it manually (complicated).
You can write a loader that checks a hash of your application and your application can check the hash of the loader.
You can get the system time and compare it to a system time you already wrote to disk, to check that it's monotonic (see the sketch after this list).
All depends on the level of protection you want.
If you go to PirateBay or any other torrent site, you'll see that everything gets hacked if hackers are interested.
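A minimal sketch of that monotonic-time check, assuming the last observed time is persisted in a file called last_seen.bin (the file name and location are placeholders, not from the question):

#include <chrono>
#include <cstdint>
#include <fstream>

// Returns false if the system clock has apparently been set backwards since the
// last run, which is a hint that someone is tampering with the time.
bool clock_looks_untampered()
{
    using namespace std::chrono;
    const char* path = "last_seen.bin";
    const int64_t now = duration_cast<seconds>(system_clock::now().time_since_epoch()).count();

    int64_t last = 0;
    std::ifstream in(path, std::ios::binary);
    if (in.read(reinterpret_cast<char*>(&last), sizeof(last)) && now < last)
        return false; // the clock went backwards: assume tampering

    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    out.write(reinterpret_cast<const char*>(&now), sizeof(now));
    return true;
}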
There is one way to make it really difficult for them to use the application after expiry. The main idea of this trick is to make your expiration independent of the system time and base it instead on the number of hours the application has actually been used, regardless of whatever the system time may be.
You will have to create a separate thread to perform this task.
Suppose you want the application to expire after they have used it for 70 hours.
Create a binary file called "record" and store a number in it that is hard to guess (I will explain later why you have to put this number in a binary file).
When your application starts, it checks whether that number is present. If it is, the application gets the current time and stores it in the file along with hour=1 (replacing the number that was there). The thread you created keeps checking whether the hour of the system time has changed; when it has, it stores the current time in the file along with hour=2, and so on. Eventually hour will reach 70.
Add this check in two places: inside that thread and at the start of your application (it assumes <iostream> and <cstdlib> are included and that hour has just been read from the file):
/* The purpose of storing the current time is to find out later whether the hour has changed. */
/* Read hour from the file. */
if (hour == 70)
{
    std::cout << "Your trial period has expired" << std::endl;
    return EXIT_SUCCESS;
}
Now, whenever hour reaches 70, the application will not work.
Earlier I told you to keep a number in your binary file. Whenever they run your application, the binary file is read; if that number is found there, your application replaces it with the current time and hour=1. Now suppose they use your application for 5 hours, close it, and run it again later. When your application starts, it checks the binary file; since the number has been replaced with a previously stored time and hour=5, you store the current time along with hour = stored hour + 1. With this scheme, even if they change the system time or do anything else, it will not affect your expiration period, because the expiration check is no longer based on the system time; it is based on the hours of actual use, whatever the clock may say.
The absence of that number indicates that the file is not being accessed for the first time, so the hour currently present in the file should be incremented. Use a binary file so that the client can't see the number.
One last thing
Your binary file's format should be like this:
current time, hour="any number", another_secret_number
another_secret_number is placed there so that even if they somehow modify your binary file, they will not be able to put that another_secret_number back, because they don't know it. This means that while reading your binary file you have to make sure that every entry ends with another_secret_number.
For checking purposes, both hidden numbers will also be hard-coded in your code, which they can't see, and they can't read the binary file either, so there is no way for them to learn the numbers.
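A rough sketch of the record-file scheme described above. To keep it short it simply sleeps for an hour of real use instead of watching the system clock's hour field, and the file name, struct layout, and the two secret numbers are placeholders:

#include <chrono>
#include <cstdint>
#include <fstream>
#include <thread>

const uint64_t FIRST_RUN_MARKER = 0x5A17C0DE12345678ULL; // placeholder secret number
const uint64_t ENTRY_CHECKSUM   = 0x0BADF00DCAFEBABEULL; // placeholder "another_secret_number"
const uint32_t MAX_HOURS        = 70;

struct Record { uint64_t marker_or_time; uint32_t hours; uint64_t checksum; };

// Reads the record file; 0 hours used on the very first run, expired if tampered with.
uint32_t load_hours_used(const char* path)
{
    Record r{};
    std::ifstream in(path, std::ios::binary);
    if (!in.read(reinterpret_cast<char*>(&r), sizeof(r)) || r.marker_or_time == FIRST_RUN_MARKER)
        return 0;
    return r.checksum == ENTRY_CHECKSUM ? r.hours : MAX_HOURS;
}

void save_hours_used(const char* path, uint32_t hours)
{
    using namespace std::chrono;
    Record r{ static_cast<uint64_t>(duration_cast<seconds>(system_clock::now().time_since_epoch()).count()),
              hours, ENTRY_CHECKSUM };
    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    out.write(reinterpret_cast<const char*>(&r), sizeof(r));
}

// Run this on a std::thread from main(); the rest of the application refuses to
// work once load_hours_used() reports MAX_HOURS.
void usage_counter_thread()
{
    const char* path = "record.bin";
    uint32_t hours = load_hours_used(path);
    while (hours < MAX_HOURS)
    {
        std::this_thread::sleep_for(std::chrono::hours(1)); // one more hour of actual use
        save_hours_used(path, ++hours);
    }
}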
I hope it will help you.
Nothing stops hackers!!!
Your question is like searching for a needle in a haystack.
Assembly-level analysis gives attackers plenty of room to work in.
You can only make things harder; nothing will ever stop 'bad' people.
As for UPX: it is well known, don't use it!!!

Pattern for synching object lists among computers (in C++)?

I've got an app that has about 10 types of objects. There will be potentially a few thousand object instances of each type. These lists of objects need to stay synchronized between apps running on different machines. If an object is added, changed or deleted, that needs to propagate to the other machines.
This will be a star topology -- there is a central master, and the rest are clients.
I DO have the concept of a session, so I can store data about each client.
Is there a good design pattern to follow for this? Even better, is there a (template based?) library that would handle asking the container what has changed since client X came by and getting that delta to send out?
Right now I'm thinking every object-type container has an update counter. When something is added/changed/removed, the update counter is incremented, and the changed object(s) are tagged with that value. Each client will save the value of the update counter when it gets an update. Later it will come back and ask for any changes since its update counter value. Finally, deletes are kept as tombstone records (although I'm not exactly sure when to clear them out).
One thing that makes this harder is that clients can come and go without the central server necessarily knowing, although I guess there could be a timeout concept (if the server hasn't heard from a client in 5 minutes, it assumes the client is gone).
Is this a well-known pattern? Any additional suggestions?
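For what it's worth, a minimal sketch of the update-counter scheme described in the question, with deletes kept as tombstone entries; all type and member names below are illustrative, not from an existing library:

#include <cstdint>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using ObjectId = uint64_t;
struct Object { ObjectId id; std::string payload; };

class SyncedContainer
{
public:
    void upsert(const Object& obj) { entries_[obj.id] = { obj, ++counter_, false }; }
    void remove(ObjectId id)       { entries_[id] = { Object{ id, {} }, ++counter_, true }; } // tombstone

    uint64_t version() const       { return counter_; }

    // Everything that changed after the version the client last saw, including
    // tombstones (second == true) so the client can delete locally.
    std::vector<std::pair<Object, bool>> changes_since(uint64_t client_version) const
    {
        std::vector<std::pair<Object, bool>> out;
        for (const auto& [id, e] : entries_)
            if (e.version > client_version)
                out.emplace_back(e.obj, e.deleted);
        return out;
    }

private:
    struct Entry { Object obj; uint64_t version; bool deleted; };
    uint64_t counter_ = 0;
    std::unordered_map<ObjectId, Entry> entries_;
};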
How you implement synchronization very much depends on your needs. Do the changes need to be pushed to the clients, or is it sufficient that a client checks whether an object is up to date whenever it uses it? How about using the Proxy pattern? This pattern allows you to create a proxy implementation of your objects that can check whether they are up to date, update them if they are not, and then return the result. I would do this by having a lastChanged timestamp on the objects on the master and a lastUpdated timestamp on the client objects. If latency is an issue, checking whether an object is up to date on each call is probably not a good idea. Consider having a separate thread that queries the master for changed objects and marks them "dirty". This could dramatically reduce the network traffic as well.
You could also look into the Observer pattern and Publish/Subscribe.
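A rough sketch of that proxy idea, using the lastChanged/lastUpdated timestamps mentioned above; master_last_changed() and fetch_from_master() are hypothetical stand-ins for whatever transport you use:

#include <cstdint>
#include <string>

struct Object { uint64_t id; std::string payload; };

// Placeholders for the real network layer.
Object   fetch_from_master(uint64_t id);
uint64_t master_last_changed(uint64_t id);

class ObjectProxy
{
public:
    explicit ObjectProxy(uint64_t id) : id_(id) {}

    const Object& get()
    {
        uint64_t changed = master_last_changed(id_);   // cheap metadata query
        if (changed > lastUpdated_)                    // stale: refresh from the master
        {
            cached_      = fetch_from_master(id_);
            lastUpdated_ = changed;
        }
        return cached_;
    }

private:
    uint64_t id_;
    uint64_t lastUpdated_ = 0;   // mirrors the master's lastChanged when fresh
    Object   cached_{};
};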
An option that might be simple to implement and still pretty efficient is to treat the pile of objects as an opaque blob and use librsync to synchronize them. It sounds like all of the updates flow one direction, from master to clients, and there's probably some persistent representation of the objects on the clients -- a file or something. I'm assuming it's a file for the rest of this answer, though any sequence of bytes can be used.
The way it would work is that each client would generate a librsync "signature" of its local copy of the blob and send that signature to the master. The signature is about 1% of the size of the blob. The master would then use librsync to compute a delta between that signature and the current data, and send the delta to the client, which would use librsync to apply the delta to its local copy of the blob.
The librsync API is simple, and the signature/delta data transfer is relatively efficient.
If that's not workable, it may still be useful to take a more manual "delta-based" approach, to avoid having to do per-object versioning. Each time the master makes a change, it should log that change to a journal, recording what was done and to which object. Versioning is done at the whole-database level, so in effect a version number is assigned to each journal entry.
When a client connects, it should send its version of the whole object collection, and the server can then respond with the contents of the journal between the client's version and the newest entry. If updates on a given object are done by completely replacing the object contents, then you can optimize this by filtering out all but the most recent version of each object. If the master also keeps track of which versions it has sent to which client, it can know when it is safe to discard old journal entries. Even if it doesn't track that, you can still discard old journal entries according to some heuristic (probably just age) and if you receive a connection from a client whose last version is older than your oldest journal entry, then you just have to send the entire set of objects to that client.
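A sketch of that journal approach, with a whole-database version number assigned to each journal entry; all names are illustrative:

#include <cstdint>
#include <deque>
#include <optional>
#include <string>
#include <vector>

struct JournalEntry
{
    uint64_t    version;      // whole-database version assigned to this change
    uint64_t    object_id;
    bool        deleted;      // true for removals
    std::string new_contents;
};

class Journal
{
public:
    uint64_t record(uint64_t object_id, bool deleted, std::string contents)
    {
        entries_.push_back({ ++current_version_, object_id, deleted, std::move(contents) });
        return current_version_;
    }

    // Entries newer than the client's version, or nullopt if the client is older
    // than our oldest retained entry and needs a full resync instead.
    std::optional<std::vector<JournalEntry>> since(uint64_t client_version) const
    {
        if (!entries_.empty() && client_version + 1 < entries_.front().version)
            return std::nullopt;
        std::vector<JournalEntry> out;
        for (const auto& e : entries_)
            if (e.version > client_version)
                out.push_back(e);
        return out;
    }

    void discard_older_than(uint64_t version)   // prune once all clients have caught up
    {
        while (!entries_.empty() && entries_.front().version < version)
            entries_.pop_front();
    }

private:
    uint64_t current_version_ = 0;
    std::deque<JournalEntry> entries_;
};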