C++ program to find a file currently open in gvim?

I want to rename a folder, e.g. "mv -f old_proj_name new_proj_name".
But since a file inside it is open in the gvim editor, the rename operation on the folder is not allowed.
The file is not moved to the new folder name.
I have manually used unlocker software to check whether the file is locked by another process.
fopen() does not show that the file is locked when it is open in gvim.
I tried the opendir() API as well, but it didn't help.
Now I want to implement the lock-checking functionality in my code, so that before performing the rename operation I can know whether it will succeed.
Please guide me.

before performing the rename operation I can know whether it will succeed.
This is a fallacy. You can only know whether you could perform the operation successfully at the time of the check. To know whether you can do it now, you need to check for it now. But when you actually get around to performing it, that "now" will turn to "back then". To have a reliable indication, you need to check again.
Don't you think it will get tiresome really fast?
So there are two ways of dealing with this.
First, you can hope (but never know) that nothing important happens between the check and the actual operation.
Second, you may skip the check altogether and just attempt the operation. If it fails, then you can't do it. There, you have killed two birds with one stone: you have checked whether an operation is possible, and performed it in the case it is indeed possible.
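For illustration, here is a minimal sketch of the "just attempt it" approach in standard C++ (the folder names are placeholders): instead of probing for locks, perform the rename and inspect the result.

    #include <cstdio>   // std::rename
    #include <cerrno>   // errno
    #include <cstring>  // std::strerror
    #include <iostream>

    int main()
    {
        // Hypothetical folder names, for illustration only.
        const char* old_name = "old_proj_name";
        const char* new_name = "new_proj_name";

        // Just attempt the operation; the return value tells you whether it worked.
        if (std::rename(old_name, new_name) != 0)
        {
            // errno explains why it failed (e.g. EACCES).
            std::cerr << "rename failed: " << std::strerror(errno) << '\n';
            return 1;
        }

        std::cout << "rename succeeded\n";
        return 0;
    }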
Update
If your data is organised in such a way that you have to perform several operations that may fail, and data consistency depends on all these operations succeeding or failing at once, then there's an inherent problem. You can check for some known failure conditions, but (a) you can never check for all possible failure conditions, and (b) any check is valid just for the moment it's performed. So any such check will not be fully reliable. You may be able to prevent some failures but not others. An adequate solution to this would be data storage with proper rollback facility built in, i.e. a database.
Hope it helps.

Related

C++ file handling using transactions like SQL commit and rollback

I need to write to multiple files; if anything goes wrong, I want to roll back all changes, and otherwise commit to all files at the same time, in C++ on Windows. Is this possible, or is there a library for it? Please suggest one.
This would require some kind of transaction mechanism for file system operations, resembling the database transactions you mention, which the C++ standard doesn't provide. So you'd rely entirely on the operating system, and none of the ones I'm aware of provides it either (maybe some specialised Linux distribution does?).
All you can do is try to get as close as possible, e.g. with the following approach:
Write out all the new files as temporary copies, ideally into a dedicated temporary directory.
Rename all original files/move them into the dedicated directory (we'll keep them as backup for now).
Rename/move all new temporaries to original file names/folders.
Finally delete the backups.
If anything goes wrong then you can delete all the new temporaries again and rename the backups back to their original names – unless the error occurred on deleting the backups, then you might just leave the remaining ones.
If you keep a log file and add an entry for every task when it is started as well as when it completes, you know exactly where an error occurred and can safely restore the original state later on even if recovery failed, since the actual point of failure can be determined precisely (similar to the logging that database management systems do internally to realise their transactions).
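A rough sketch of that approach using std::filesystem follows; the helper name and the assumption that all target files live in one directory are mine, not from the question, and this is not production code.

    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <stdexcept>
    #include <string>
    #include <utility>
    #include <vector>

    namespace fs = std::filesystem;

    // Hypothetical helper sketching the backup-and-swap approach above.
    bool replace_files(const std::vector<std::pair<fs::path, std::string>>& files,
                       const fs::path& tmp_dir, const fs::path& backup_dir)
    {
        std::error_code ec;
        fs::create_directories(tmp_dir, ec);
        fs::create_directories(backup_dir, ec);

        std::vector<fs::path> temporaries;
        try {
            // 1. Write all new contents to temporary copies first.
            for (const auto& f : files) {
                const fs::path tmp = tmp_dir / f.first.filename();
                std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
                out << f.second;
                if (!out)
                    throw std::runtime_error("failed to write " + tmp.string());
                temporaries.push_back(tmp);
            }
            // 2. Move the originals into the backup directory.
            for (const auto& f : files)
                if (fs::exists(f.first))
                    fs::rename(f.first, backup_dir / f.first.filename());
            // 3. Promote the temporaries to the original names.
            for (std::size_t i = 0; i < files.size(); ++i)
                fs::rename(temporaries[i], files[i].first);
            // 4. Finally delete the backups.
            fs::remove_all(backup_dir, ec);
            return true;
        } catch (const std::exception& e) {
            std::cerr << "aborting: " << e.what() << '\n';
            // Roll back: delete the temporaries and restore any backups made so far.
            for (const auto& t : temporaries)
                fs::remove(t, ec);
            for (const auto& entry : fs::directory_iterator(backup_dir, ec))
                fs::rename(entry.path(),
                           files.front().first.parent_path() / entry.path().filename(), ec);
            return false;
        }
    }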

SAS EG mute/comment/put on hold/hide a branch

In SAS EG, is there a way to put the execution of a branch on hold (the task would be grayed out, for example) so that it's not executed when I execute a parent process?
If not, would you advise a good practice for putting some tasks aside without losing the process tree structure?
It depends on the version of your SAS EG. Check whether you have an "add condition" option when you click on any of the tasks. Once a condition has been added, it shows as a flag on the task.
You can add a condition as shown in the other answer, and have it depend on a prompt value (if you sometimes might want to run this), or on a macro value that you just define in your program by hand (if you currently never would want to run it).
You won't be able to keep your links, though, without some gymnastics that don't really make sense. Using the conditional execution means not going down the rest of the branch.
I'd also suggest that if you have extra programs that you want to keep-but-not-run, you move them to another process flow, unless you have a very good reason for keeping them in that particular process flow. I usually have a few process flows:
In Development: where I have programs that are in development and that I don't want to run along with the whole process flow (maybe I'm not sure yet where they go in the order, or whether I will include them at all)
Other Programs: where I put programs that I might run on an ad-hoc basis but not regularly.
Deprecated Programs: where I put programs that are "old" and not used anymore, but I want to keep around for reference or just to remember what I've done.
Finally, if you use version control properly, you can always get back to the program you had before, so you don't need to keep programs around "just in case" when you're fairly sure they're no longer needed.

Under what circumstances would ostream::write or ostream::operator<< fail?

In my C++ code, I am constantly writing different values into a file. My question is whether there are any circumstances under which write or << could fail, given that the file was opened successfully. Do I need to check every single call of write or << to make sure it was carried out correctly?
There are too many failure reasons to list them all. Possible ones would be:
the partition eventually fills up
the user exceeds his disk quota
the partition has been brutally unmounted
the partition has been damaged (filesystem bug)
the disk failed physically
...
Do I need to check every single call of write or << to make sure it was carried out correctly?
If you want your program to be resilient to failures then, definitely, yes. If you don't, it simply means the data you are writing may or may not be written, which amounts to saying you don't care about it.
Note: rather than checking the stream state after every operation (which soon becomes extremely tedious), you can set the stream's exception mask via std::ostream::exceptions() to your liking, so that the stream throws an exception when it fails (which shouldn't be a problem, since such disk failures are quite exceptional by definition).
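For example, a minimal sketch of that approach (the file name is just a placeholder):

    #include <fstream>
    #include <iostream>

    int main()
    {
        std::ofstream out("output.dat", std::ios::binary);

        try {
            // Ask the stream to throw instead of silently setting error flags.
            out.exceptions(std::ofstream::failbit | std::ofstream::badbit);

            for (int i = 0; i < 1000; ++i)
                out << i << '\n';      // any failed write now throws
            out.close();               // flush; an error here also throws
        } catch (const std::ios_base::failure& e) {
            std::cerr << "write failed: " << e.what() << '\n';
            return 1;
        }
        return 0;
    }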
There are any number of reasons why a write could fail. Off the top of my head here are a few:
The disk is full
The disk fails
The file is on an NFS mount and the network goes down
The stream you're writing to (remember that an ostream isn't always a file) happens to be a pipe that closes when the downstream reader crashes
The stream you're writing to is a TCP socket and the peer goes away
And so on.
EDIT: I know you've said that you're writing to a file; I just wanted to draw attention to the fact that your code should only care that it's writing to an ostream, which could represent any kind of stream.
The others covered situations that might result in output failure.
But:
Do I need to check every single call of write or << to make sure it was carried out correctly?
To this, I would answer "no". You could conceivably just as well check
if the file was opened successfully, and
if the stream is still good() after you wrote your data.
This depends, of course, on the type of data written, and the possibility / relative complexity of recovering from partial writes vs. re-running the application.
If you need closer control on when exactly a write failed (e.g. in order to do a graceful recovery), the ostream exceptions syam linked to are the way to go. Polling stream state after each operation would bloat the code.
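By way of illustration, a small sketch of that "check once at the end" pattern (the file name is hypothetical):

    #include <fstream>
    #include <iostream>

    int main()
    {
        std::ofstream out("results.txt");
        if (!out) {                    // 1. did the file open successfully?
            std::cerr << "could not open file\n";
            return 1;
        }

        for (int i = 0; i < 100; ++i)  // write everything without per-call checks
            out << "value " << i << '\n';

        out.flush();
        if (!out.good()) {             // 2. did all the buffered writes succeed?
            std::cerr << "one of the writes failed\n";
            return 1;
        }
        return 0;
    }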

Error handling design problem on collection of items

I have a collection of items and some operation on them. This operation is part of a remote call between client and server, and it should run on all items at once. On the server side it runs on each item in turn and may succeed or fail. I need to know which items succeeded and which failed. I guess this is a rather common case and there are good solutions for it. How should I design it?
it should run on all items at once
You will hate your life if you don't treat this as a design requirement. All or nothing is the right way to handle it. It will simplify everything you do.
If that isn't an option, just do the dumbest thing possible. Wrap each call in a try/catch and give some report. Chances are no one will be able to consume the report, which is another reason all or nothing is the right thing to do.
edit:
To elaborate: when batching, writing simple logic to report errors is fine, but writing logic to recover from errors is very complicated. I've never seen a system really handle recovery well when batching. I'm sure there are some corner cases where each item is completely independent, at which point it doesn't matter that one or another failed, but that is usually not the case.
Generally, I expect any errors that happen during a batching operation to not be critical. By that I mean the system should be able to ignore errors and continue operating as if the message that caused the error never existed.
If it's really vital that these messages get processed, then I would definitely try for all or nothing.
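As an illustration of the "wrap each call in a try/catch and give some report" idea, here is a minimal sketch (the ItemResult type and the process_all name are invented for the example):

    #include <exception>
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical per-item outcome used to build the report.
    struct ItemResult {
        std::string item;
        bool        ok;
        std::string error;
    };

    // Run the operation on every item, catching failures so one bad item
    // doesn't stop the batch, and collect which items succeeded or failed.
    std::vector<ItemResult> process_all(const std::vector<std::string>& items,
                                        const std::function<void(const std::string&)>& op)
    {
        std::vector<ItemResult> report;
        for (const auto& item : items) {
            try {
                op(item);
                report.push_back({item, true, {}});
            } catch (const std::exception& e) {
                report.push_back({item, false, e.what()});
            }
        }
        return report;
    }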

Should programs check for failure on WinAPI functions that "shouldn't", but can, fail?

Recently I was updating some code used to take screenshots using the GetWindowDC -> CreateCompatibleDC -> CreateCompatibleBitmap -> SelectObject -> BitBlt -> GetDIBits series of WinAPI functions. Now I check all of those for failure because they can and sometimes do fail. But then I have to perform cleanup by deleting the created bitmap, deleting the created DC, and releasing the window DC. In every example I've seen -- even on MSDN -- the related functions (DeleteObject, DeleteDC, ReleaseDC) aren't checked for failure, presumably because if they were retrieved/created OK, they will always be deleted/released OK. But they still can fail.
That's just one notable example since the calls are all right next to each other. But occasionally there are other functions that can fail but in practice never do, such as GetCursorPos. Or functions that can fail only if passed invalid data, such as FileTimeToSystemTime.
So, is it good practice to check ALL functions that can fail for failure? Or are some OK not to check? And as a corollary, when checking these should-never-fail functions for failure, what is proper: throwing a runtime exception, using an assert, something else?
The question of whether to test or not depends on what you would do if it failed. Most samples exit once cleanup is finished, so verifying proper cleanup serves no purpose; the program is exiting in either case.
Not checking something like GetCursorPos could lead to bugs, but the amount of code required to guard against this determines whether you should check or not. If checking it would add 3 lines around all your calls, then you are likely better off taking the risk. However, if you have a macro set up to handle it, then it wouldn't hurt to add that macro just in case.
Whether FileTimeToSystemTime should be checked depends on what you are passing into it. A file time from the system? Probably safe to ignore it. A custom value built from user input? Probably better to make sure.
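For instance, a small helper of the kind mentioned above might look like this (the names are made up, and note that not every API sets a meaningful GetLastError on failure):

    #include <windows.h>
    #include <iostream>

    // Hypothetical helper: logs the failing call together with GetLastError(),
    // then passes the result through so callers can still branch on it.
    static BOOL check_win32(BOOL result, const char* call)
    {
        if (!result)
            std::cerr << call << " failed, GetLastError() = " << GetLastError() << '\n';
        return result;
    }

    #define CHECK_WIN32(call) check_win32((call), #call)

    // Usage sketch:
    //   CHECK_WIN32(DeleteObject(bitmap));
    //   CHECK_WIN32(DeleteDC(memoryDc));
    //   CHECK_WIN32(ReleaseDC(window, windowDc));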
Yes. You never know when a promised service will surprise you by not working. It's best to report an error even for the surprises. Otherwise you will find yourself with a customer saying your application doesn't work, and the reason will be a complete mystery; you won't be able to respond in a timely, useful way to your customer and you both lose.
If you organize your code to always do such checks, it isn't that hard to add the check for the next API you call.
It's funny that you mention GetCursorPos since that fails on Wow64 processes when the address passed is >2Gb. It fails every time. The bug was fixed in Windows 7.
So, yes, I think it's wise to check for errors even when you don't expect them.
Yes, you need to check, but if you're using C++ you can take advantage of RAII and leave cleanup to the various resources that you are using.
The alternative would be to have a jumble of if-else statements, and that's really ugly and error-prone.
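A minimal sketch of what such RAII wrappers might look like for the GDI handles in the question (the class names are invented; a real project might use existing wrapper libraries instead):

    #include <windows.h>

    // Hypothetical RAII wrappers so cleanup runs automatically even when a
    // later step of the capture sequence fails or returns early.
    class GdiBitmap {
        HBITMAP handle_;
    public:
        explicit GdiBitmap(HBITMAP h) : handle_(h) {}
        ~GdiBitmap() { if (handle_) DeleteObject(handle_); }
        GdiBitmap(const GdiBitmap&) = delete;
        GdiBitmap& operator=(const GdiBitmap&) = delete;
        HBITMAP get() const { return handle_; }
        explicit operator bool() const { return handle_ != nullptr; }
    };

    class MemoryDc {
        HDC handle_;
    public:
        explicit MemoryDc(HDC h) : handle_(h) {}
        ~MemoryDc() { if (handle_) DeleteDC(handle_); }
        MemoryDc(const MemoryDc&) = delete;
        MemoryDc& operator=(const MemoryDc&) = delete;
        HDC get() const { return handle_; }
        explicit operator bool() const { return handle_ != nullptr; }
    };

    // Usage sketch: early returns no longer leak the memory DC or the bitmap.
    //   MemoryDc memDc(CreateCompatibleDC(windowDc));
    //   if (!memDc) return false;
    //   GdiBitmap bmp(CreateCompatibleBitmap(windowDc, width, height));
    //   if (!bmp) return false;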
Yes. Suppose you don't check what a function returned and the program just continues after the failure. What happens next? How will you know why your program misbehaves a long time later?
One quite reliable solution is to throw an exception, but this will require your code to be exception-safe.
Yes. If a function can fail, then you should protect against it.
One helpful way to categorise potential problems in code is by the potential causes of failure:
invalid operations in your code
invalid operations in client code (code that calls yours, written by someone else)
external dependencies (file system, network connection etc.)
In situation 1, it is enough to detect the error and not perform recovery, as this is a bug that should be fixable by you.
In situation 2, the error should be notified to client code (e.g. by throwing an exception).
In situation 3, your code should recover as far as possible automatically, and notify any client code if necessary.
In both situations 2 & 3, you should endeavour to make sure that your code recovers to a valid state, e.g. you should try to offer the "strong exception guarantee" etc.
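A rough illustration of how those three categories might translate into code (the function names and details are invented for the example):

    #include <cassert>
    #include <fstream>
    #include <iterator>
    #include <stdexcept>
    #include <string>

    // Situation 1: a violated internal invariant is a bug in our own code,
    // so detect it rather than try to recover from it.
    void internal_step(int index, int size)
    {
        assert(index >= 0 && index < size && "internal indexing bug");
    }

    // Situation 2: invalid input from client code is reported by throwing.
    void set_percentage(int value)
    {
        if (value < 0 || value > 100)
            throw std::invalid_argument("percentage must be between 0 and 100");
        // ... store the value ...
    }

    // Situation 3: an external dependency failed; recover automatically where
    // possible (here by falling back to a default) before involving the caller.
    std::string read_config(const std::string& path, const std::string& fallback)
    {
        std::ifstream in(path);
        if (!in)
            return fallback;
        return std::string(std::istreambuf_iterator<char>(in),
                           std::istreambuf_iterator<char>());
    }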
The longer I've coded with WinAPIs with C++ and to a lesser extent PInvoke and C#, the more I've gone about it this way:
Design the usage to assume it will fail (eventually) regardless of what the documentation seems to imply
Make sure you know how the return value indicates pass/fail, as sometimes 0 means pass, and vice versa
Check whether GetLastError is documented for the call, and decide what value that information can give your app
If robustness is a serious enough goal, you may consider it a worthy time-investment to see if you can do a somewhat fault-tolerant design with redundant means to get whatever it is you need. Many times with WinAPIs there's more than one way to get to the specific info or functionality you're looking for, and sometimes that means using other Windows libraries/frameworks that work in-conjunction with the WinAPIs.
For example, getting screen data can be done with straight WinAPIs, but a popular alternative is to use GDI+, which plays well with WinAPIs.