Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I've been working for years on my personal project, an operating system made from scratch. As you may imagine, it's quite complicated stuff. The problem is that I've restarted it from scratch many times. Meaning that at some point (quite an advanced one, too; I had hard-disk read/write and some basic networking), things got too confused and I decided to throw it all out the window and try again.
Over the years I've learnt how to make the code look nicer. I read "Clean Code: A Handbook of Agile Software Craftsmanship" by Robert Martin and it helped a lot. I learnt to make functions smaller, to organize things into classes (I used C, now C++) and namespaces, to handle errors appropriately with exceptions, and to test.
This new approach, however, has got me stuck to the point that I spend most of my time checking that everything is going well: that the code reads well, is easy to follow, well commented and tested. Basically, I haven't made any relevant progress in months. When I see my well-written code, it's difficult to add a new functionality: I think "where should I put this? Have I already used this piece of code? What would be the best way to do this?", and too often I postpone the work.
So, here's the problem. Do you know any code-writing strategy that makes you write working, tested, nice code without spending 90% of your time thinking about how to make it working, tested and nice?
Thanks in advance.
Do you know any code-writing strategy that makes you write working, tested, nice code without spending 90% of your time thinking about how to make it working, tested and nice?
Yes, here.
Seriously, no. It is not possible to write good code without thinking.
When I see my well-written code, it's difficult to add a new functionality: I think "where should I put this? Have I already used this piece of code? What would be the best way to do this?", and too often I postpone the work.
This is called "analysis paralysis". You might be interested in reading the "Good Enough Software" section of The Pragmatic Programmer. Your code doesn't have to be perfect.
These things are widely discussed. To me, this legendary blog entry by Joel Spolsky and the follow-up discussion all over the web (Robert Martin answered it) contain all the pros and cons, and they are still fun to read.
To get an idea here's a quote by Jamie Zawinski which appears in the post linked to above:
“At the end of the day, ship the fu****g thing! It’s great to rewrite your code and make it cleaner and by the third time it’ll actually be pretty. But that’s not the point—you’re not here to write code; you’re here to ship products.”
I suggest you give TDD (test driven development) a run.
With TDD, you write automated tests for each piece of functionality before implementing it, then run the tests after completing the feature.
If the tests pass, then you are done and can start another feature. As a bonus, the tests will accumulate over time, and you will soon have a test suite you can use for regression testing (to make sure you haven't broken anything while writing new code); this addresses your fear of breaking things in the "nice code".
Also, TDD will let you focus on developing exactly what you need, not more, so it tends to lead to nicer and simpler design (especially in interfaces, since you have to think about interfaces before you start coding, so "thought" drives the interfaces, rather than "whatever happens to be handier when I'm coding it".)
However, be aware that applying automated tests to an OS may provide some amount of technical challenge!
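As a sketch of that test-first cycle (in Python with the standard unittest module; the Stack class is an invented illustration, not anything from the question's OS project):

```python
import unittest

class Stack:
    """Hypothetical feature under test: a minimal LIFO stack."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

# Step 1 of the cycle: these tests are written before the feature exists
# and fail; step 2 is implementing just enough Stack to make them pass.
class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()
```

Running `python -m unittest` against the file executes both tests; each new feature gets the same failing-test-first treatment, and the growing suite doubles as the regression net described above.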
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have been tasked with developing a document for internal testing standards and procedures in our company. I've been doing plenty of research and found some good articles, but I always like to reach out to the community for input on here.
That being said, my question is this: How do you take a company that has a very large legacy code base that is barely testable, if at all testable, and try to test what you can efficiently? Do you have any tips on how to create some useful automated test cases for tightly coupled code?
All of our new code is being written to be as loosely coupled as possible, and we're all pretty proud of the direction we're going with new development. For the record, we're a Microsoft shop transitioning from VB to C# ASP.NET development.
There are actually two aspects to this question: technical, and political.
The technical approach is quite well defined in Michael Feathers' book Working Effectively With Legacy Code. Since you can't test the whole blob of code at once, you hack it apart along imaginary non-architectural "seams". These would be logical chokepoints in the code, where a block of functionality seems like it is somewhat isolated from the rest of the code base. This isn't necessarily the "best" architectural place to split it, it's all about selecting an isolated block of logic that can be tested on its own. Split it into two modules at this point: the bulk of the code, and your isolated functions. Now, add automated testing at that point to exercise the isolated functions. This will prove that any changes you make to the logic won't have adverse effects on the bulk of the code.
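A minimal sketch of cutting such a seam (in Python, with invented names; this shows the pattern, not any particular codebase): the legacy function reaches straight into a global database, so the pricing logic is extracted into a pure function and the I/O is injected, making the isolated block testable on its own.

```python
# Before (invented legacy shape): logic tangled with its environment.
# def order_total(order_id):
#     rows = GLOBAL_DB.query("SELECT ... WHERE id = ?", order_id)
#     ... pricing math mixed in with the I/O ...

# After cutting the seam: pricing math is an isolated, testable module.
def price_items(items, tax_rate):
    """Isolated logic: pure function, no database, no globals."""
    subtotal = sum(qty * unit_price for qty, unit_price in items)
    return round(subtotal * (1 + tax_rate), 2)

def order_total(order_id, fetch_items, tax_rate=0.08):
    """Thin compatibility layer at the seam: I/O comes in via fetch_items."""
    return price_items(fetch_items(order_id), tax_rate)

# A test now exercises the isolated logic with a fake fetcher, proving
# changes to the pricing math don't depend on the bulk of the code:
fake_fetch = lambda order_id: [(2, 10.00), (1, 5.00)]
assert order_total(1, fake_fetch, tax_rate=0.0) == 25.00
```

The seam here is the `fetch_items` parameter: it is not the "best" architecture, just the nearest chokepoint where the logic could be pried loose and pinned down with tests.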
Now you can go to town and refactor the isolated logic following the SOLID OO design principles, the DRY principle, etc. Martin Fowler's Refactoring book is an excellent reference here. As you refactor, add unit tests to the newly refactored classes and methods. Try to stay "behind the line" you drew with the split you created; this will help prevent compatibility issues.
What you want to end up with is a well-structured set of fully unit tested logic that follows best OO design; this will attach to a temporary compatibility layer that hooks it up to the seam you cut earlier. Repeat this process for other isolated sections of logic. Then, you should be able to start joining them, and discarding the temporary layers. Finally, you'll end up with a beautiful codebase.
Note in advance that this will take a long, long time. And thus enters the politics. Even if you convince your manager that improving the code base will enable you to make changes better/cheaper/faster, that viewpoint probably will not be shared by the executives above them. What the executives see is that time spent refactoring code is time not spent on adding requested features. And they're not wrong: what you and I may consider to be necessary maintenance is not where they want to spend their limited budgets. In their minds, today's code works just fine even if it's expensive to maintain. In other words, they're thinking "if it ain't broke, don't fix it."
You'll need to present them a plan to get to a refactored code base. This will include the approach, the steps involved, the big chunks of work you see, and an estimated timeline. It's also good to present alternatives here: would you be better served by a full rewrite? Should you change languages? Should you move to a service-oriented architecture? Should you move it into the cloud and sell it as a hosted service? All of these are questions they should be considering at the top, even if they aren't thinking about them today.
If you do finally get them to agree, waste no time in upgrading your tools and setting up a modern development chain that includes practices such as peer code reviews and automated unit test execution, packaging, and deployment to QA.
Having personally barked up this tree for 11 years, I can assure you it's anything but easy. It requires a change all the way at the top of the tech ladder in your organization: CIO, CTO, SVP of Development, or whoever. You also have to convince your technical peers: you may have people with a long history with the old product who don't really want to change it. They may even see your complaints about its current state as a personal attack on their skills as coders, and may look to sabotage or sandbag your efforts.
I sincerely wish you nothing but good luck on your venture!
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I started a rather large 2D game engine project a few months ago, and I began noticing that the code from the first one or two months is quite different from the more recent code:
The naming of variables feels a bit different
Some code style aspects are different
I sometimes wonder why I named a function the way I did, and can easily think of a better name
The code feels rather messy
There are parts where almost instantly a better way of doing it comes to my mind
The code seems as if its quality was significantly lower
However, at the time I wrote it, I was taking care to do everything right, the same way I do now.
Now, for my questions:
Is this a common situation, even in large commercial-style projects?
Should I consider investing (a lot of) time in refactoring and maybe even rewriting the affected code?
Is it normal that as a project grows and changes, large parts of code have to be refactored or rewritten from ground up? Is this bad?
Is this a common situation, even in large commercial-style projects?
Yes.
Should I consider investing (a lot of) time in refactoring and maybe even rewriting the affected code?
Are you going to do that again tomorrow, too?
No. Not unless you're actually working on the code you want to refactor.
Is it normal that as a project grows and changes, large parts of code have to be refactored or rewritten from ground up?
Yes.
Is this bad?
It would certainly be a lot easier if we were all perfect, yes.
Yes, this is a common pattern with my projects as well. ABR: Always Be Refactoring. When I feel a new pattern emerge, I try to update older code to match it. As a project grows, your experience working in the problem domain influences your style, and it's a good idea to keep updating older code to match.
As a corollary, if your first project commit is still in your project unchanged a few months later, something is going wrong. I view development as an exploratory practice, and a big part of that is updating old code and ironing out your style. No one knows their final design/API before they start coding. Find any large open source project and walk up its commit history; it happens everywhere.
If you've been working on a drawing or a painting for a while, your style develops sophistication the longer you do it. Also, your first layer or first few sketches are rarely the inked lines that appear in the final result.
A big takeaway lesson from this experience: you're getting better. Or, at least, you're changing. Certainly, from today's perspective, the code you're writing today looks better to you. If the code you wrote back then looks bad today - make it look better. Your responsibility today is not just the code you write today; it is the entire code base. So make it right - and be glad you're getting better.
Yes, this happens. I would even say that it's expected and typical as you delve further into your solution.
Only update your code when you go back and touch it. Don't forget to write unit tests before adjusting it.
It's very tempting to rewrite bad code for no reason, particularly when you don't have a deadline looming. You can easily get stuck in a loop that way.
Remember, shipping is a feature.
Is this a common situation, even in large commercial-style projects?
I must confess that my belief is that if you design first and code later you can avoid many issues. So I would say it depends. If one starts with a good design, and has some company standards in place to ensure that the code based on that design follows the same important rules no matter who wrote it, then at least you have a chance to avoid such situations. However, I am not sure this is always the case :-).
Should I consider investing (a lot of) time in refactoring and maybe even rewriting the affected code?
Making things better can never hurt :-).
Is it normal that as a project grows and changes, large parts of code have to be refactored or rewritten from the ground up? Is this bad?
I would say yes, and refactoring should normally be considered a good thing when the resulting code is better than the old one. The world never stays the same, and even if something was appropriate at some point in time, it may no longer stand up to the needs of today. So I would say it would be bad if the company you work for said to you: "you cannot refactor this code. It's holy." Change (if it is for the better) is always good.
Fred Brooks wrote, "Build one to throw away, you will anyway." While it's not as true as it used to be, it is far from uncommon to not really understand the problem until you start working on it.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I was wondering how valuable open-source projects are to learn from?
I still consider myself a "beginner", due more to a lack of experience than a lack of knowledge. I've read many C/C++ tutorials, taken classes and such, but I just really lack experience. So even though I understand the major data types and coding techniques, I don't have a solid grasp on what approach to take to use them in my own programming. Therefore, while I continue to read/practice/learn, I have been downloading lots of open-source code (random applications, emulators, games). Is it worthwhile looking at them to help me learn? I find it extremely interesting, but more often than not I just get lost.
Second question, where does one usually start when doing this? Do you hunt down a main() function somewhere? Do you look at headers to see what functions will be available throughout the code and get an idea of what is around to work with?
Please let me know!
R
I personally wouldn't recommend the reading of the source code of open-source projects to a beginner, especially if they're mature projects. It may be overwhelming for a beginner since they tend to be rather large projects with thousands of lines of code, most likely of a non-trivial design.
If you lack experience, then the best way to gain experience is by writing your own programs and taking on your own projects that are of interest to you, in my opinion. You can certainly read other people's code to see "how it's done", but actually trying to implement those ideas yourself in practice does more to help you understand how to write code than just passively reading code. In turn, the gained understanding and experience will allow you to make better sense of other people's code.
It's sort of like math; you may know the formulae, and you can see how mathematicians/teachers/professors/etc. use those formulae, but you won't really understand them until you try them out yourself. Once you do understand them, then the kinds of things mathematicians write will make much more sense.
Try to focus on things you want to do, there's not a lot of point in looking at code for an application that you have no reference point for.
The best place to start would probably be to look at a project like Boost.
But formulate a series of application tasks that you'd like to investigate, perhaps graphics, text editing or socket programming... and then work from there.
Getting a good IDE or programmers editor that will help you navigate the code is a major plus.
For example, Emacs + ECTAGS/CEDET/Semantic will help you browse all the functions / classes in a C / C++ project.
I agree with In silico. It's very useful to see others' code, but only when it's a little bit above your level, so that you can learn something. I've seen quite a few projects that were "over-engineered"; learning from them when you can't really tell the good from the bad would be a bad idea.
Another thing is to learn from another programmer, where you can ask why he did it one way and not another. In this case, the difference in levels does not matter.
So I'd suggest programming by yourself, and then looking at other people's code for the same thing after you've tried it. That way you'll be able to compare the choices you've seen, and the decisions you've made, with someone else's (when you don't know a problem in depth, any suggested solution seems right). You know: in theory, theory and practice are the same. In practice, they are not.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
When I get excited about a new feature I'm just about to implement, or about a bug that I've just "understood", there is the urge to jump right into the code and get hacking. It takes some effort to stop myself from doing that and write the corresponding test first. Later, the test often turns out to be a trivial four-liner, but before writing it there's still the thought in the back of my head: "maybe I can skip this one, just this once?" Ideally, I'd like to get an urge to write the test, and only then, perhaps, the code :)
What method (or way of thinking or mind trick or self-reward policy or whatever) do you use to help maintain the discipline? Or do you just practice it until it feels natural?
I like the instant feedback from the test, that's reward enough for me. If I can reproduce a bug in a test that's a good feeling, I know I'm headed in the right direction as opposed to guessing and possibly wasting my time.
I like working Test-First because I feel like it keeps me more in tune with what the code is actually doing as opposed to guessing based on a possibly inaccurate mental model. Being able to confirm my assumptions iteratively is a big payoff for me.
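That "reproduce the bug in a test first" step can be tiny; a sketch in Python, where the function and the bug report are both invented for illustration:

```python
# Invented bug report: "average() crashes on an empty list".
# Step 1 was a failing test reproducing the crash; step 2 is the fix below.
def average(values):
    # The original buggy version was just sum(values) / len(values),
    # which raises ZeroDivisionError for [].
    return sum(values) / len(values) if values else 0.0

def test_average_of_empty_list_is_zero():
    assert average([]) == 0.0   # this reproduced the crash before the fix

def test_average_of_normal_list():
    assert average([2, 4, 6]) == 4.0

test_average_of_empty_list_is_zero()
test_average_of_normal_list()
```

Once the red test goes green, you know the fix addressed the reported behavior rather than a guess about it, and the test stays behind as a tripwire.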
I find that writing tests helps me to sketch out my approach to the problem at hand. Often, if you can't write a good test, it means you haven't necessarily thought enough about what it is that you're supposed to be doing. The satisfaction of being confident that I know how to tackle the problem once the tests are written is rather useful.
I'll let you know when I find a method that works. :-)
But seriously, I think your "practice until it feels natural" comment pretty much hits the nail on the head. A four-line test may appear trivial, but as long as what you are testing represents a real failure point, it is worth doing.
One thing I have found to be helpful is to include code coverage validation as part of the build process. If I fail to write tests, the build will complain at me. If I continue failing to write tests, the continuous integration build will "error out" and everyone nearby will hear the sound I have wired to the "broken build" notification. After a few weeks of "Good grief... You broke it again?", and similar comments, I soon started writing more tests to avoid embarrassment.
One other thing (which only occurred to me after I had submitted the answer the first time) is that once I really got into the habit of writing tests first, I got great positive reinforcement from the fact that I could deliver bug-fixes and additional features with much greater confidence than I could in my pre-automated-test days.
Easiest way I've found is to just use TDD a lot. At some point, writing code without unit tests becomes a very, very nervous activity.
Also, try to focus on interaction or behavioral testing rather than state-based testing.
Wear a green wristband.
1) You pair with somebody else in your team. One person writes the test, the other implements.
It's called "ping-pong" pairing.
Doing this will force you to discuss design and work out what to do.
Having this discussion also makes it easier to see what tests you're going to need.
2) When I'm working on my own, I like to try out chunks of code interactively, typing them in at the Ruby prompt. When I'm experimenting like this, I often need to set up some data to experiment with, and some printout statements to see what the result is.
These little, self-contained throwaway experiments are usually:
a quick way to establish the feasibility of an implementation, and
a good place to start formalising into a test.
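Those throwaway prompt experiments formalise naturally into tests. A sketch in Python for illustration (the answer above uses Ruby; the function and inputs here are invented): the snippet typed at the prompt becomes the test body, and the eyeballed printouts become assertions.

```python
import re

def slug(title):
    """Invented function under test: URL-friendly version of a title."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# At the prompt the experiment might have looked like:
#   >>> slug("Hello, World!")
#   'hello-world'        <- printed and eyeballed
#
# Formalised: same inputs, with the eyeballing turned into asserts.
def test_slug():
    assert slug("Hello, World!") == "hello-world"
    assert slug("  spaces  ") == "spaces"
    assert slug("") == ""

test_slug()
```

The experiment already did the hard part (choosing representative inputs and checking the outputs by hand), so the test is mostly a transcription.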
I think the important part of keeping yourself in check as far as TDD is concerned is to have the test project set up properly. That way adding a trivial test case is indeed trivial.
If, to add a test, you need to first create a test project, then work out how to isolate components, when to mock things, etc., it goes into the "too hard" basket.
So I guess it comes back to having unit tests fully integrated into your development process.
When I first started doing TDD around 2000, it felt very unnatural. Then came the first version of .NET and NUnit, the JUnit port, and I started practicing TDD at the Shu level (of Shu-Ha-Ri), which meant testing (first) everything, with the same questions as yours.
A few years later, at another workplace, together with a very dedicated, competent senior developer, we took the steps necessary to reach the Ha level. This meant, for example, not blindly staring at the coverage report, but asking "is this kind of test really useful, and does it add more value than it costs?".
Now, at another workplace, together with yet another great colleague, I feel that we're taking our first steps towards the Ri level. For us that currently means a great focus on BDD/executable stories. With those in place verifying the requirements at a higher level, I feel more productive, since I don't need to (re-)write a bunch of unit tests each time a class' public interface needs to change, replace a static call with an extension method, and so on.
Don't get me wrong, the usual TDD class tests are still used and provide great value for us. It's hard to put into words, but we're just so much better at "feeling" and "sensing" what tests make sense, and how to design our software, than I was capable of ten years ago.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I keep reading about people who are "test infected", meaning that they don't just "get" TDD but also can't live without it. They've "had the makeover" as it were. The question is, how do I get like that?
Part of the point of being "test infected" is that you've used TDD enough and seen the successes enough that you don't want to code without it. Once you've gone through a cycle of writing tests first, then coding and refactoring and seeing your bug counts go down and your code get better as a result, not only does it become second nature like Zxaos said, you have a hard time going back to Code First. This is being test infected.
You've already read about TDD; reading more isn't going to excite you.
Instead, you need a genuine personal success story.
Here's how. Grab some code from a core module, code that doesn't depend on external systems or too many other subroutines. It doesn't matter how complex or simple the routine is.
Then start writing unit tests against it. (I'm assuming you have xUnit or similar for your language.) Be really obnoxious with the tests: test every boundary case, test max-int and min-int, test nulls, test strings and lists with millions of elements, test strings with Korean and control characters and right-to-left Arabic and quotes and backslashes and periods and other things that tend to break code if not escaped.
What you'll find is... bugs! At first you might think these bugs aren't important: you haven't run into these problems yet, your code probably would never do this, and so on. But my experience is that if you keep pushing forward, you'll be amazed at the number of little problems. Eventually it becomes hard to believe that none of these bugs will ever cause a problem.
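To give a taste of what "obnoxious" boundary tests turn up, here is a sketch in Python around an invented one-line helper that "obviously works" for typical inputs:

```python
# Invented routine under test: shorten text, adding "..." if it was cut.
def truncate(text, limit):
    if len(text) <= limit:
        return text
    return text[:limit - 3] + "..."

# Typical case: fine, and the length bound holds.
assert truncate("hello world", 8) == "hello..."
assert len(truncate("hello world", 8)) <= 8

# Boundary case: what if the limit is smaller than the ellipsis itself?
out = truncate("hello", 2)
# text[:2 - 3] is text[:-1] == "hell", so out == "hell..." (7 chars),
# which is *longer* than both the limit and the original string.
assert len(out) > 2   # the "impossible" case the boundary test exposes
```

Nobody plans to call `truncate` with a limit of 2, right up until some configuration or user input does; the boundary test finds the hole while it is still cheap to fix.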
Plus, you get a great feeling of accomplishment when something is done really, really well. We know code is never perfect and rarely free of bugs, so it's nice when we've exhausted so many tests that we really do feel confident. Confidence is a nice feeling.
Finally, I think the last event that will trigger the love will happen weeks or months later. Maybe you're fixing a bug or adding a feature or refactoring some code, and something you do will break a unit test. "Huh?" you'll say, not understanding why the new change was even relevant to the broken test. Then you'll find it, and find enlightenment. Because you really didn't know that you were breaking code, and the tests saved you.
Hallelujah!
Learn about TDD to start, and then begin integrating it into your workflow. If you use the methodologies enough, you'll find that they become second nature and you'll start framing all of your development tasks within that framework.
Also, start using the JUnit (or xUnit) framework for your language of choice.
One word: practice! There is some overhead to doing TDD, and the way to overcome it is to practice and make sure you are using tools that help the process. You need to learn the tools like the back of your hand. Once you learn the tools that go along with the process, it will click and you will become fluent at writing tests first to flesh out the code. Then you will be "test infected".
I answered a similar question a while back; you may want to check it out as well. I mention some tools there and explain learning TDD. Of these tools, ReSharper and a good mocking framework are critical for doing TDD. I can't stress enough how important it is to learn these tools along with the testing framework you are using.