I'm working with a complex set of SAS algorithms, created by a group outside of my company, to prepare a report required each year. Unfortunately, I am running into a file lock problem:
ERROR: A lock is not available for WORK._TEMP_OP_OTHER.DATA
I had a similar issue last year, but then it appeared to be a somewhat random problem that cropped up only rarely during execution. I reviewed the logs to see whether the problem had occurred and, if it had, cleaned up the output files and ran the algorithm again.
This year's report is consistently producing the error in the same place every time I run the algorithm. I have tried a couple of things to give the system more time in the hopes that the lock will become available: inserting a SLEEP command and also setting FILELOCKWAIT=n in libname statements. Neither has worked as I'd hoped.
FILELOCKWAIT seems like the most promising option, but when observing the execution of the algorithm and reviewing the logs it's clear that the process is failing immediately at that section, consistent with the default FILELOCKWAIT value of 0 seconds.
I am far from an expert in SAS, but I am wondering if I need to set FILELOCKWAIT for WORK, as that is where the lock issue is coming up. Is there a way to do this, and might it help my problem? If not, are there other options I could look into?
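Something like the line below is roughly what I had in mind, but I'm only guessing here: the libref name is made up, I don't know whether pointing a second libref at the physical WORK location like this is even valid, and the supplied algorithms would still be referencing WORK rather than this new libref.

/* Hedged sketch: aim a new libref at WORK's physical path and allow lock waits there */
libname worklck "%sysfunc(pathname(work))" filelockwait=60;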
(Note: I am aware of the TRYLOCK macro, but want to introduce as few changes as possible to the algorithms I'm running. As mentioned above, they are complex and I am concerned about introducing unintended problems which may be difficult to notice, diagnose, and fix).
I'd like to optimize my notification system, so here is how it works now:
Every time a change occurs in the application, we enqueue a background job (Sidekiq) to compute some values and then notify users via email.
This approach worked very well for a while, but then we got a memory leak: actions were happening very frequently and we had about 30-50 workers per second, so I need to refactor this.
What I would like to do is, instead of running the worker immediately, store the job in an array and perform it a bit later.
But I'm afraid that will just cause the same problem, only "delayed".
I'm looking forward to hearing other approaches and solutions as well.
Thanks in advance
So I found one very interesting solution:
I'm storing values directly in Redis as key-value pairs, where the value is the dataset with the data I'll need later for the computation. Then a simple cron job kicks off a service that is responsible for reading the data from Redis and doing the computation. I changed the Sidekiq workers to run only when the cron job executes; everything works perfectly fine and is even much faster than before.
I'm still eager to hear if there is any other approach/solution.
Thanks
We just got a new server at work that will be used only for running SAS code, and I've been asked to run some tests to make sure it performs better than our other servers. I'm not an expert at this, so I want to avoid naive mistakes that fail to properly measure the server's performance. My header looks like this:
options fullstimer;
%LET BenchStartTime = %sysfunc(datetime(),22.);
I use this as a check against the "real time" reported in the log. I have a vague understanding of the difference between "user cpu time" and "system cpu time", but if anyone wants to offer additional information on that, it would be helpful.
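(For the comparison at the end of the test, something like this is what I mean, printing the elapsed time against that start value:)

/* Elapsed wall-clock seconds since the header's start timestamp */
%PUT Benchmark elapsed seconds: %sysevalf(%sysfunc(datetime()) - &BenchStartTime);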
Anyway, the main point of this post is that I want to know whether there are any standard benchmark tests I should be using to see if this new server is better than the old ones. Currently I'm using something I found online that just appends a bunch of copies of sashelp.class (but I think this might be a bad idea: pulling from the C: drive and loading onto a different drive might perform the same across all servers, right? If the C: drive is the slowest drive, doesn't it become the bottleneck?), and I'm also using code that generates a bunch of random data of a fixed size and compares runtimes. Is this the correct approach? Is there something else I should be doing? How many times should I run these benchmark tests to make sure the results aren't a fluke?
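For reference, the random-data test is shaped roughly like this (the row count is arbitrary; I just keep it fixed across servers):

/* Generate a fixed amount of random data, then read it back so the  */
/* test isn't purely CPU-bound; FULLSTIMER reports the timings.      */
data work.random_bench;
    do i = 1 to 10000000;
        x = ranuni(0);
        y = rannor(0);
        output;
    end;
run;

proc means data=work.random_bench mean std;
    var x y;
run;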
Thanks for your help!
I would test by doing the things that you normally do. If you run large merges, then you're basically talking I/O, so just make a very large dataset, write it out, read it in, etc., and perform the same test on the other machine. Run each test a few times, each time in a fresh SAS session.
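For instance, something along these lines; the "bench" libref and its path are placeholders for the data drive you actually want to exercise, and the sizes are arbitrary:

/* Point a libref at the disk under test (placeholder path) */
libname bench "/sasdata/benchmark";

/* Write a large dataset out to the test drive */
data bench.big;
    do id = 1 to 20000000;
        x = ranuni(0);
        output;
    end;
run;

/* Read it back, then merge it with itself to mimic a large merge */
data work.copy;
    set bench.big;
run;

data bench.merged;
    merge bench.big bench.big(rename=(x=x2));
    by id;
run;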
Further, it sounds like you need to make sure the new server can handle multiple concurrent sessions. You can simulate this in part by submitting many connections from one computer using SAS/CONNECT, which lets you start multiple concurrent sessions. For example, create a script that starts a local SAS session, signs on to the server, and rsubmits a job of normal difficulty that takes maybe 5 to 10 minutes; then run that script 20 times concurrently (you can script this or just double-click a .bat file 20 times). See how the new server handles it compared to the other server.
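A rough sketch of what one of those client sessions might look like; the server name, port, and the remote test job are all placeholders, so adjust the signon to your site's SAS/CONNECT setup:

/* Placeholder host and spawner port for the new server */
%let newsrv=new-sas-server 7551;
options comamid=tcp;
signon newsrv;

rsubmit newsrv;
    /* Stand-in for a "normal difficulty" job: build and sort a large dataset remotely */
    data work.big;
        do i = 1 to 5000000;
            x = ranuni(0);
            output;
        end;
    run;
    proc sort data=work.big;
        by descending x;
    run;
endrsubmit;

signoff newsrv;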
On SAS 9.2 and later you can use the %SASHOME%/SASFoundation/9.x/sasiotest.exe utility. You will find further guidance in SAS Support note 51659.
Edit:
A similar utility can be downloaded for Unix platforms; details are in SAS Support note 51660.
Suppose an application will never have an internet connection during its lifetime: how can you prevent piracy of the software?
A product key required only at installation isn't enough, because once the software is installed legitimately, anybody can copy the installation and redistribute it.
So every time the application runs it should check for something and crash if the check fails.
Now what could it possibly check?
Initially I thought keeping an encrypted binary file would do the job, but as answered here, that seems to offer negligible protection.
Any attacker can modify the executable so that, instead of crashing when the check fails, it continues running.
So no matter how difficult the check is, the cracked application will always run.
Now I cannot see any possible solution to this problem.
PS: I am a single independent developer building productivity software sold at a very low price. Seeing this question, I believe I just have to let it go. Sigh...
EDIT: I would like to thank all the contributors in this discussion for letting me know the grim reality...
What I understand now is that you are indirectly handing out your application's source in the form of the target executable. That executable can be modified by anybody using a debugger, so ANY method of preventing piracy implemented in your application's own code is useless. The only possible solution to this problem is to keep your legitimate customers happy by providing them services (apart from the software) and keeping your price below their expectations.
I had been thinking about how to solve this problem for the past 3 days, and now that effort seems wasted, but I still learnt a lot in the process, which I wouldn't have otherwise...
The only standalone thing I've seen that is semi-effective is hardware keys that come with the boxed software. They used to attach to a parallel port or a serial port and get checked when you started the program.
AutoCAD and similar programs used to do this, but it is a BIG PAIN for your customers. Any time the key fails to read, or a key goes bad, customer productivity suffers. It hurts your legitimate customers far more than those who end up pirating it anyway, and a sufficiently motivated pirate can make a VM that will overcome this. Modern versions of this use USB.
My recommendation is to trust people. Upon install, make them click an "I promise I paid for this" button and be done with it. If they click "I didn't pay for this", show them a small paragraph about how to help keep good software coming and prevent customer-harming DRM schemes by simply contributing to the success of good software authors.
You could generate a unique copy for each user, create a database, and check it against copies you find online, if you like playing the biggest game of whack-a-mole ever.
We would like to be able to create intermediate releases of our software that would time-bomb or expire after a certain fixed time or number of uses that would not easily be manipulated. We are using Visual C++ with mixed native and managed assemblies.
I imagine we may need to rely on a registry tag, but this seems insecure.
Can anyone offer some advice on how to do this?
I was working on a "trial-ware" solution a while back; it used a combination of registry keys, information stored in a flat file at a certain position surrounded by junk data, and it also had an option to reach out to a web service that would verify the install back with the software creators.
However, as FrustratedWithFormsDesigner stated, there is no 100% fool-proof way to do this. There is always a way that a hacker can get around whatever precautions you put in place.
If you are using a database for the application, then it might be better to store an install date (datetime) and a number-of-uses counter (int), and then add code that checks those fields when the program is starting / loading / initializing. If they are past a certain count or date (these limits could also be stored in the db), then exit the program.
This is very hard, if not impossible, to do in a foolproof way. In any event, there's nothing to stop somebody from removing and reinstalling the software (you do support that, right?).
If you cannot limit the function of these intermediate releases (a much better incentive for people to move to official bits), it might be more trouble than it's worth to implement such a scheme.
Set a variable in the program to a specific date; then, every time the program is run, read the system date and check whether it is equal to or later than the specified date. If it is, start the expiry process and display a message or alert panel to the user.
Have the binary download a tiny bit of code on startup from one of your servers.
Keep track of the activation counter on the server; when the counter reaches the limit, return a piece of code that displays the 'sorry!' message.
You could deploy it as a ClickOnce application with a certificate that expires on a certain date. If I recall correctly, the app will error on startup after that date.
A couple of caveats:
The only option for the user may be to uninstall the app, which is a jerk move.
You will end up maintaining a ton of different deployments.
It will be a shock to the user as it will just happen without warning.
During the process of building a software application, you start testing what you have built in stages, even before it is complete, and you start seeing issues/bugs. How do you track them? Do you use your regular bug-tracking tool to add them as issues (a waste of time, since it is a work in progress), just keep them in your head to fix later, or keep a simple text list?
What would be an efficient way to make sure that whatever you have found is eventually fixed as development progresses? Are there any tiny tools to do that?
What I usually do is the following:
Gauge the size of the bug/issue
If it's too big, create an issue in the bug tracker.
If it's small enough, write a failing unit test and then come back to it after I've finished the original functionality.
I've found that the simplest and most efficient way to track tasks of all types (todos, work items, bugs, etc.) is to use a single system, typically a bug-tracking system. This allows you to see all of the work remaining on your project in a single place.
Having multiple tracking systems almost always results in lost data. People eventually pick different systems, don't tell people about the system they are on, lose the piece of paper which has the list of work items, etc ...
Most bug tracking systems allow you to categorize your bugs so it's easy to distinguish the type of work remaining as well.
Make sure your CI tools, such as CruiseControl.NET, run unit tests as part of the build. This will cause the build to show as broken when a unit test fails, and the person who last checked in will be responsible for fixing it.