I wrote a lot of C++ code and saved it. Later I wanted to try some example code I had found, so I pasted it into my project's main.cpp file (where my own code was). I tried the example code, and then by mistake I closed the file. When I reopened main.cpp, I could no longer undo the changes with Ctrl-Z. I had only wanted to try the example code and then undo the changes with Ctrl-Z, but my mistake was closing the file. Is it possible to undo changes after closing a file, or to restore it?
Your original code is probably gone for good. However, perhaps this is a good time for you to consider adding a version control system to your tool set, which will help you avoid this kind of mistake in future, as well as give you a lot of other benefits.
Also, it is not a wise idea to paste example code over your own work in the way that you've done, for exactly the reason you've discovered. Insert a new file into your project, or create a separate project for testing example code. I have a separate Visual Studio solution specifically for this purpose.
EDIT: I say "probably" because I can't rule out all possibility of recovery based on the information you've supplied (e.g. you might have some kind of scheduled backup which caught your previous version). Also, if the code you pasted over it was shorter than your original code, it's possible that some of it still exists as unused data blocks on your hard drive, and might be recoverable, assuming something else hasn't already overwritten them.
This is my first time writing in this forum, and I hope someone can help me. I have been searching the Web but have not found any answer related to my question.
I have a very large file (about 25,000 lines) with thousands of definitions that must be used by another file.
All these files (and about 600 more of them) are converted to .c files using a special tool. I am almost sure this conversion is done properly.
If I create an .exe with all these files, there is no problem and everything works all right. Unfortunately, I need a .dll, which crashes when I try to access the very large file.
I have checked that its .obj file is larger than 65 MB, so I added the /bigobj compiler option, as suggested on the Internet, but it didn't solve the problem.
I have also confirmed that the problem happens on access to the large file, because everything works fine when I merge both files (which is not possible in my development).
I am using Visual Studio 2008.
Could it be related to compiling the code as C (/TC) or C++ (/TP)? What difference between an .exe and a .dll could make my program crash?
Any ideas please?
Thanks in advance
Indeed, without the code not much can be said... (though I'm not sure anyone would have the patience to read 600 files, each with 25k lines of code :) )
My advice: rebuild the .exe and .dll in debug mode, run the .exe from MSVC, and put a breakpoint where you know it crashes. Next, set a data breakpoint on the variable once you have its address from the watch window. Assuming the app does what it should, the pointer is set correctly but lost along the way, which means the data breakpoint should be triggered twice.
Alternatively, try an assertion check.
Another scenario is that the variable is volatile.
Another is that the value is returned from a temporary and gets lost...
And last but not least, the value is never set because of wrong/bad conditions...
If your problem is the crash and not the missing value, just do a null check and return early if you really want to avoid the complication; however, I would recommend finding out why the value isn't set. Your choice.
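For illustration, here is a minimal sketch of that kind of defensive check (the type and function names here are hypothetical, just to show the pattern):

    #include <cassert>
    #include <cstddef>

    // Hypothetical type, standing in for whatever the crashing code dereferences.
    struct Definition
    {
        void Apply() const {}
    };

    void UseValue(const Definition* def)
    {
        // Defensive guard: return early instead of crashing on a null pointer.
        // Note: this masks the real bug, so keep investigating why def isn't set.
        if (def == NULL)
        {
            assert(!"def was never set -- find out why");
            return;
        }
        def->Apply();
    }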
I would like to set a breakpoint that triggers on modification of a particular file, or on opening of a particular file. This is a file which our software opens and modifies in portions of legacy code. I'm not exactly sure how to approach this problem. One approach I have thought about is to find all of the places where we open files, break on all of them, and inspect the file path to determine whether it is the path I'm concerned with. The other approach is to set a breakpoint in the file-system opening code when the path matches the one I'm concerned with (possibly more difficult, as I am presently running under Windows; this might be an option under Linux, but a Visual Studio 2005 solution would be ideal and a Linux solution potentially useful).
Presently, I am using Visual Studio 2005 for my C++ software project. I was not able to find anything online about this as an option or an approach others have taken.
Normally, I would say that I should just work out where this file is being opened. Unfortunately, this section of code is quite difficult to understand and will be refactored, but for the immediate future this functionality would help me.
Thank you very much for reading my question,
-Brian J. Stinar-
Put a conditional breakpoint on kernel32!CreateFileW and check the file name.
When it hits, you will have the file handle that CreateFileW returns, so you can then put a conditional breakpoint on kernel32!WriteFile and check the file handle to catch modifications.
You can also hook CreateFileW and call __debugbreak() in it.
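For example, here is a minimal sketch of such a hook using Microsoft Detours (assuming Detours is available to your build; the file name being matched is a placeholder):

    #include <windows.h>
    #include <wchar.h>
    #include <detours.h>   // Microsoft Detours, linked separately

    // Pointer to the real CreateFileW, rewritten by Detours when attached.
    static HANDLE (WINAPI *TrueCreateFileW)(
        LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileW;

    static HANDLE WINAPI HookedCreateFileW(
        LPCWSTR lpFileName, DWORD dwDesiredAccess, DWORD dwShareMode,
        LPSECURITY_ATTRIBUTES lpSecurityAttributes, DWORD dwCreationDisposition,
        DWORD dwFlagsAndAttributes, HANDLE hTemplateFile)
    {
        // Break into the debugger only for the file we care about.
        // "legacy_data.cfg" is a placeholder for the real path.
        if (lpFileName && wcsstr(lpFileName, L"legacy_data.cfg") != NULL)
            __debugbreak();

        return TrueCreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                               lpSecurityAttributes, dwCreationDisposition,
                               dwFlagsAndAttributes, hTemplateFile);
    }

    // Call this once, early in startup (e.g. at the top of main()).
    void InstallCreateFileHook()
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
        DetourTransactionCommit();
    }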
I have a solution with 21 C++ projects and 1 VB.NET project.
The IDE responds very slowly when I simply move the caret in a file or try to open a menu. The process seems to use 50% of the CPU for each movement.
It only happens with this solution and only on my machine.
The solution has total of 2380 source and header files, of which 1280 are header files.
I tried removing all connections to source control (Perforce), but it didn't help.
Also, I have Visual Assist installed, but even after removing it (uninstalling), the same behavior continued.
Any idea?
Deactivate IntelliSense.
IntelliSense parses the whole project and slows down the IDE drastically. If you use Visual Assist, you won't really need it. Visual Assist is less resource-hungry and scans in the background, whereas IntelliSense steals too many resources during its parsing.
Could this apply in your case?
http://coolthingoftheday.blogspot.com/2008/03/visual-basic-2008-hotfix-to-fix-slow.html
Note that disabling IntelliSense may also break features like the Class Wizard (at least I'm pretty sure it does in VS2005). As already suggested, it's a good idea to get rid of temporary files like the .ncb regularly, because they can get huge and will slow down the IDE.
Also, if you're using Visual Assist, try rebuilding the database, disabling it or installing a different version.
I have a few solutions with over 100 projects, so I know exactly how you feel. Solutions containing managed projects are especially bad. Disabling IntelliSense helps a lot. I've never seen such problems from Visual Assist (or other similar refactoring tools), and it fills in a lot of the functionality you lose by turning IntelliSense off.
I've also encountered some projects with code that would cause the IntelliSense thread to loop endlessly and never finish parsing. Most of the time we were never able to pin down the exact bit of code causing the problem, but heavy use of templates and nested macros was often high on the suspicion list.
The only good way to be sure that IntelliSense is disabled is to create a directory with the same name as the .ncb file. Go to your solution directory, delete the .ncb, and create a directory named your_solution_name.ncb. Because Visual Studio can't recreate the .ncb file, you'll get an error box to click through every time you open the solution, but that's a small price to pay.
Simply deleting the .ncb means VS will just create it again. The methods I've seen inside the VS options turn off some of the features but do not prevent it from trying to parse all your code.
I frequently encounter the following debugging scenario:
A tester provides repro steps for a bug. To find out where the problem is, I play with these steps to find the minimum necessary repro. Sometimes, luckily, I find that after a minor change to the steps, the problem is gone.
Then the job becomes finding the difference in code workflow between the two sets of repro steps. This job is tedious and painful, especially when you are working on a large code base and the workflow goes through a lot of code and involves many state changes you are not familiar with.
So I was wondering whether there are any tools available to compare "code workflows". Having learned the "wt" command in WinDbg, I thought it might make this possible. For example, I could run "wt" on some outermost function with the two different repro steps and then compare the outputs. It should then be easy to find where the code flow starts to diverge.
But the problem with WinDbg is that "wt" is quite slow (maybe I should write to a log file instead of the screen) and not very user-friendly (compared with the Visual Studio debugger). So I want to ask: are there any existing tools for this, or would it be possible (and how difficult) to develop a plug-in for the Visual Studio debugger to support this functionality?
Thanks
I'd run it under a profiler in "coverage" mode, then use diff on the results to see which parts of the code were executed in one run but not the other.
Sorry, I don't know of a tool which can do what you want, but even if it existed it doesn't sound like the quickest approach to finding out where the lower layer code is failing.
I would recommend instrumenting your code with high-level logging so you can tell which module fails, stalls, and so on. In debug builds, your logger can write to a file, to the output debug window, etc.
In general, failing fast and using exceptions are good ways to find out easily where things go bad.
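As a rough sketch of that kind of instrumentation (the macro and function names here are mine, not from any particular library):

    #include <cstdio>
    #include <windows.h>   // OutputDebugStringA

    // Minimal high-level trace macro: writes to the debugger's Output window.
    // A real logger would also write to a timestamped file.
    #define TRACE_MSG(msg)                                      \
        do {                                                    \
            char buf[512];                                      \
            _snprintf(buf, sizeof(buf), "[%s:%d] %s\n",         \
                      __FILE__, __LINE__, (msg));               \
            buf[sizeof(buf) - 1] = '\0';  /* _snprintf may not terminate */ \
            OutputDebugStringA(buf);                            \
        } while (0)

    // Hypothetical module entry point, instrumented at a high level.
    void LoadConfiguration()
    {
        TRACE_MSG("LoadConfiguration: enter");
        // ... real work here ...
        TRACE_MSG("LoadConfiguration: exit");
    }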
Doing something after the fact is not going to cut it, since your problem is reproducing it.
The issue with bugs is seldom some internal wackiness, but usually what the user is actually doing. If you log all the commands the user enters, they can simply send you the log. You can substitute button clicks, mouse selections, etc. This has some cost, but certainly much less than something that keeps track of every method visited.
I am assuming that if you have a large application, you have good logging or tracing.
I work on a large server product with over 40 processes and over a million lines of code. Most of the time, the error in the trace file is enough to identify the location of the problem. However, sometimes the error I see in the trace file is caused by earlier code, and the reason can be hard to spot. Then I use a comparative debugging technique:
1. Reproduce the first scenario and copy the trace to a new file (if the application is multi-threaded, make sure you keep only the trace for the thread that does the work).
2. Reproduce the second scenario and copy the trace to a new file.
3. Remove the timestamps from the log files (I use awk or sed for this; a small filter like the sketch below also works).
4. Compare the log files with WinMerge or similar to see where and how they diverge.
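Here is the small filter mentioned in step 3, as a sketch (it assumes each trace line begins with a fixed-width timestamp; adjust the width for your format):

    #include <iostream>
    #include <string>

    // Stand-in for the awk/sed step: strips a leading timestamp from each
    // trace line. Assumes the timestamp is a fixed-width prefix such as
    // "2009-02-20 13:45:01.123 " (24 characters); adjust for your format.
    int main()
    {
        const std::string::size_type kTimestampWidth = 24;
        std::string line;
        while (std::getline(std::cin, line))
        {
            if (line.size() > kTimestampWidth)
                std::cout << line.substr(kTimestampWidth) << '\n';
            else
                std::cout << line << '\n';
        }
        return 0;
    }

Run it as a pipe, e.g. strip_ts < scenario1.log > scenario1.txt, then diff the two .txt files.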
This technique can be a little time-consuming, but it is much quicker than stepping through thousands of lines in the debugger.
Another useful technique is producing UML sequence diagrams from trace files. For this you need the function entry and exit points logged consistently. Then write a small script to parse your trace files and use sequence.jar to produce UML diagrams as PNG files. This is a great way to understand the logic of code you haven't touched in a while. I wrapped a small awk script in a batch file; I just provide the trace file and a line number to start from, and it untangles the threads, generates the input text for sequence.jar, and then runs it to create the UML diagram.
I have a code base of about 500 files in about 10 directories. I need to refactor the directory structure; this includes changing the directory hierarchy and renaming some directories.
I am using SVN for version control. There are two ways to refactor: one preserving SVN history (using the svn move command) and the other without preserving it. I think refactoring while preserving SVN history is a lot easier using Eclipse CDT and the SVN plugin (Visual Studio is not suited at all to directory restructuring).
But right now, since the code is not released, we have the option not to preserve history.
Still, there remains the task of changing the #include directives wherever the headers are included. I am thinking of writing a small Python script that receives a map from current filename to new filename and performs the rename wherever needed (using something like sed). Has anyone done this kind of directory refactoring? Do you know of good related tools?
If you're having to rewrite the #includes to do this, you did it wrong. Change all your #includes to use a very simple directory structure, at most two levels deep, and only use a second level to organize around architecture or OS dependencies (like sys/types.h).
Then change your make files to use -I include paths.
Voila. You'll never have to hack the code again for this, and compiles will blow up instantly if something goes wrong.
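To make that concrete, here is a hypothetical before-and-after (the paths and flags are only illustrative):

    // Before: the #include hard-codes the directory layout, so every
    // restructuring means editing this file.
    #include "../../protocols/serialization/SerializerBase.h"

    // After: a flat include; the layout lives in the build system instead.
    #include "serialization/SerializerBase.h"

    // The makefile (or project settings) supplies the root via -I:
    //   g++ -Isrc/protocols -c Client.cpp      (GCC)
    //   cl /I src\protocols /c Client.cpp      (MSVC)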
As for the history part, I personally find it easier to make a clean start when doing this sort of thing: archive the old repository, make a new repository v2, and go from there. The counterargument applies when there is a long history of changes, or lots of open issues against the existing code.
Oh, and you do have good tests, and you're not doing this with a release coming right up, right?
I would preserve the history, even if it takes a small amount of extra time. There's a lot of value in being able to read through commit logs and understand why function X is written in a weird way, or that this really is an off-by-one error because it was written by Oliver, who always gets that wrong.
The argument against preserving the history can be made in the following cases:
- your code might contain embarrassing things, like profanity and fighting among developers
- you don't care about the commit history of your code, because it's not going to change or be maintained in the future
I did some directory refactoring like this last year on our code base. If your code is reasonably structured to begin with, you can do about 75-90% of the work with scripts written in your language of choice (I used Perl). In my case, we were moving from a set of files all in one big directory to a series of nested directories based on namespaces. So a file that declared the class protocols::serialization::SerializerBase was located in src/protocols/serialization/SerializerBase. The mapping from old name to new name was trivial, so doing a find-and-replace on the #includes in every source file in the tree was straightforward, although it was a big change. There were a couple of weird edge cases we had to fix by hand, but that seemed a lot better than either doing everything by hand or writing our own C++ parser.
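To illustrate how mechanical that mapping is, here is a small sketch of the transformation (the helper name and the source root are my inventions, not from the original scripts):

    #include <iostream>
    #include <string>

    // Hypothetical helper showing the old-name -> new-name mapping:
    // replace "::" with "/" and prepend the source root.
    std::string headerPathFor(std::string qualifiedName)
    {
        std::string::size_type pos;
        while ((pos = qualifiedName.find("::")) != std::string::npos)
            qualifiedName.replace(pos, 2, "/");
        return "src/" + qualifiedName;
    }

    int main()
    {
        // Prints: src/protocols/serialization/SerializerBase
        std::cout << headerPathFor("protocols::serialization::SerializerBase") << "\n";
        return 0;
    }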
Hacking up a shell script to do the svn moves is trivial. In tcsh it's foreach F ( $FILES ) ... end to adjust a set of files. Perl & Python offer better utility.
It really is worth saving the history. Especially when trying to track down some exotic bug. Those who do not learn from history are doomed to repeat it, or some such junk...
As for altering all the files... There was a similar question just the other day over at:
https://stackoverflow.com/questions/573430/c-include-header-path-change-windows-to-linux/573531#573531