Broken Unit Test [closed]

Our project is in an agile environment where the requirements change every sprint. The annoying part is that the unit tests keep failing whenever the requirements change, and fixing and maintaining them now takes longer and longer.
Do you have any suggestions, in general, on a better approach for this situation? Thanks in advance.

This is a common issue, and is related to how committed you are to maintaining your unit tests.
You mention your tests break when requirements change, so I'm assuming you update the code to meet the changing requirements but don't update your tests at the same time.
A development approach fully committed to the benefits of repeatable unit tests updates the test code at the same time the production code changes. If you don't, how do you re-test the code changes, or prove that they work?
If you're not committed to maintaining unit tests at the same time as code changes, then you might as well embrace that fact and throw them away as soon as the code changes, because at that point, as you're finding out, the tests become useless.
It's a common problem and one that many projects struggle with. Are tests written once, when the code is first written, and discarded after that, or are they maintained every time the code changes? Sure, maintaining the tests adds effort, but in return you get a suite of repeatable tests you can run at any time, before and after any change, to confirm the code is working as expected.
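As a concrete sketch of what "maintain the tests with the code" looks like, here is a hypothetical Google Test example (the discount rule and function name are invented for illustration). When the requirement changes, the expectation changes in the same commit as the production code:

    #include <gtest/gtest.h>

    // Hypothetical production code. This sprint the requirement changed
    // from a 10% discount to a 15% discount for orders over 100.
    double discountedTotal(double total) {
        return total > 100.0 ? total * 0.85 : total;  // 15% off per the new rule
    }

    // The test encodes the *current* requirement. It was updated in the
    // same commit that changed discountedTotal(), so it never goes stale.
    TEST(DiscountTest, OrdersOver100Get15PercentOff) {
        EXPECT_DOUBLE_EQ(170.0, discountedTotal(200.0));
    }

    TEST(DiscountTest, SmallOrdersAreNotDiscounted) {
        EXPECT_DOUBLE_EQ(50.0, discountedTotal(50.0));
    }

Treating the test edit and the code edit as one unit of work is what keeps the suite a safety net instead of a pile of red.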

Related

Should you end-to-end test code in production? If yes, how do you avoid corrupting database? [closed]

Context:
Let's say you have two branches, a development branch and a production branch.
You unit test the dev branch, it passes, and then you merge the dev branch into the production branch. (I'm assuming end-to-end tests start on the server side.)
Questions
Should you then run end-to-end tests against the production branch to assure you that the code works in the production environment as well?
If the answer is yes, then how do you avoid corrupting the state of the database in the production branch? For example, I don't think it's unrealistic that someday someone might write a faulty test that creates/updates/deletes something it shouldn't.
If the answer is no, how would you know whether your code does indeed work in the production environment? I'm guessing simple smoke testing?
Edit
I previously misused the term unit test. I replaced it with end-to-end tests.
Unit tests don't modify database state, so it shouldn't matter. If you're doing end-to-end tests, then yeah, this is a possibility, but proper database permissions and access control should easily prevent it from happening.
You should be able to run all your tests far, far away from your production server. Never run tests against the server directly unless the tests are designed specifically for that (i.e., health checks).
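As a cheap safeguard against the "faulty test corrupts production" scenario, the test fixture itself can refuse to run against anything that looks like production. A sketch, assuming an invented TEST_DB_URL environment variable:

    #include <gtest/gtest.h>
    #include <cstdlib>
    #include <string>

    // Hypothetical guard: abort the test unless it is explicitly pointed
    // at a dedicated test database. TEST_DB_URL is an invented name.
    class EndToEndTest : public ::testing::Test {
    protected:
        void SetUp() override {
            const char* url = std::getenv("TEST_DB_URL");
            ASSERT_NE(url, nullptr) << "TEST_DB_URL not set; refusing to guess a database";
            // Crude but effective: never touch anything labelled production.
            ASSERT_EQ(std::string(url).find("prod"), std::string::npos)
                << "TEST_DB_URL points at a production-looking database";
        }
    };

    TEST_F(EndToEndTest, CreatesAndDeletesOnlyTestData) {
        // ... exercise the system against the test database here ...
    }

Database permissions remain the real defense; a guard like this just makes an honest mistake fail fast.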

Is it acceptable to create unit tests only after QA testing is done? [closed]

I came to know that in some shops, code is developed first, given to QA for testing, and then developers write unit tests for that code. Is this approach acceptable? If yes, what are the pros and cons?
I got some clues from the answers to a related question: Is Unit Testing worth the effort?
But I also need answers specific to my question.
A lot of serious dev "shops" do this.
When you develop complex applications for a client, you never actually "care" about the simple unit tests, the ones you could write on any day of the coding project. You have to "test at a coarser level of granularity" (32:30 in the video linked below), and you generally want to test things that are not supposed to change, so you aren't rewriting tests over and over whenever the architecture changes a bit.
To answer your question: creating unit tests at the end is a fail-safe for later, so that when you fix bugs you can make sure they don't break functionality the client requires. Writing tests at the end also gives you the insight you need to write them: the client's wishes are known and no longer subject to change.
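A sketch of that fail-safe idea: once QA reports a bug and it is fixed, pin the fix with a regression test. The function and bug number here are invented for illustration:

    #include <gtest/gtest.h>
    #include <string>

    // Hypothetical function QA found a bug in: it used to misbehave on
    // input containing no spaces. The fix below is pinned by the test.
    std::string firstWord(const std::string& text) {
        const auto pos = text.find(' ');
        return pos == std::string::npos ? text : text.substr(0, pos);
    }

    // Regression test written after the fix. If a later change brings the
    // bug back, this fails before the client ever sees it again.
    TEST(FirstWordRegression, Bug1234NoSpaceInputReturnsWholeString) {
        EXPECT_EQ("hello", firstWord("hello"));
        EXPECT_EQ("", firstWord(""));
    }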
Bottom line: it's not a science; you only get good at it by doing it.
PS: Not a fan, but this one is "right on the money" https://www.youtube.com/watch?v=9LfmrkyP81M

How to know whether a project is using only TDD [closed]

I want to provide statistics on TDD usage in a company, so I need to identify which projects use only TDD and which have test code written after the production code. I thought about using file-change timestamps, but does anybody have a better solution for this?
A pretty broad question, but I think there is actually a fact-based answer.
That answer is: you can't solve social problems on the technical layer.
In other words: your goal/requirement is already flawed: you will not be able to generate those clear statistics. You might be able to apply some heuristics, but unless you get access to all the information on every developer's machine, timestamps won't help you. You see, the normal approach is to do some coding and, at some point, release all of it into the version control system.
So, sometimes it might be clear from timestamps that X was written before XTest; but very often, X and XTest will land in the repository within a single commit. Now, which one was written first?
Thus: start thinking on the "social" level first. Meaning: talk to the development teams. Ask them about their practices. And when they claim to do TDD, look into their specific commit history and see if that tells you anything.
Following the Test-Driven Development practice usually implies continuously repeating small Red-Green-Refactor cycles. As @GhostCat stated, looking into the commit history is an excellent way to check whether the devs follow TDD principles. Every change in the production code should be reflected in a corresponding unit test.
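For illustration, here is what one such cycle leaves behind, sketched as a single Google Test file (the kata and names are invented). In a genuinely TDD history, the commit adding the test precedes, or at least accompanies, the commit adding the implementation:

    #include <gtest/gtest.h>
    #include <string>

    // Green (commit 2): minimal implementation, written only after the
    // tests below existed and failed (Red, commit 1). Refactoring
    // (commit 3) then happens with the tests kept green throughout.
    std::string fizzbuzz(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0)  return "Fizz";
        if (n % 5 == 0)  return "Buzz";
        return std::to_string(n);
    }

    TEST(FizzBuzz, MultiplesOfThreeReturnFizz) {
        EXPECT_EQ("Fizz", fizzbuzz(3));
    }

    TEST(FizzBuzz, MultiplesOfFifteenReturnFizzBuzz) {
        EXPECT_EQ("FizzBuzz", fizzbuzz(15));
    }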
You may also check the code coverage. High coverage is not the goal in itself, but it can be a good indication that TDD practices are being followed.

Why is unit testing so important in agile? [closed]

I keep reading that without unit testing, you cannot be agile. While I understand the purpose of unit tests, why are they so crucial in agile? Is it because the frequent builds might easily break something?
And what about integration testing in agile, is that the same case?
Thanks
"...without unit testing, you cannot be agile". Strictly speaking, that is a false statement. Agile doesn't prescribe a particular testing methodology. Anybody who tells you otherwise doesn't understand agile. Agile is about delivering high quality code and being able to respond to change. If you can do that without writing unit tests, you can still be agile.
That being said, unit tests are an important part of software development no matter what the methodology. It's difficult to write high-quality software on a large scale without them. They help you verify that the individual units of your code behave the way they were designed to. Whether you use unit tests, and how many you write, depends on how important it is that your code is correct, how hard defects are to fix once they reach production, and so on.
I would say that for most projects, having a robust, well-maintained set of unit and integration tests helps your team be more agile. A good set of unit tests is very liberating as a developer: you are free to make changes more quickly because you have a safety net. That makes it easier to develop stories quickly and verify they are correct.
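To sketch the safety-net point (all names invented): the test below pins observable behavior, not implementation details, so the body can be rewritten mid-sprint without touching the test:

    #include <gtest/gtest.h>
    #include <algorithm>
    #include <functional>
    #include <vector>

    // Hypothetical unit. The tests only care about what comes out, so the
    // implementation can be swapped (hand-rolled loop, std::sort, a heap)
    // while the suite keeps confirming the behavior is unchanged.
    std::vector<int> topThree(std::vector<int> values) {
        std::sort(values.begin(), values.end(), std::greater<int>());
        if (values.size() > 3) values.resize(3);
        return values;
    }

    TEST(TopThree, ReturnsLargestValuesInDescendingOrder) {
        EXPECT_EQ((std::vector<int>{9, 7, 5}), topThree({5, 1, 9, 7, 2}));
    }

    TEST(TopThree, HandlesFewerThanThreeValues) {
        EXPECT_EQ((std::vector<int>{2, 1}), topThree({1, 2}));
    }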

Provide unit testing for C++ code [closed]

I need to provide unit tests for an application written in C++. It is a very big application containing many source files (.h, .cpp), and honestly I don't know where to start or how to proceed.
So any help is more than welcome.
Thanks
Did you upset someone? Given there are no unit tests, the chances that the code was written to be testable range from slim to absolutely none.
Without seeing the code and spending several weeks if not months with it no one can give you more than a general strategy.
There will be some functions you can write unit tests for. Those will be the ones whose arguments are easy to generate, that do very few things (one thing would be nice), and that have no side effects. Attack these first and get them out of the way.
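A sketch of such a first target (the function is invented): easy arguments, one job, no side effects:

    #include <gtest/gtest.h>

    // Hypothetical legacy function: pure and trivial to call, which makes
    // it one of the cheapest places to start building the test suite.
    int clampPercentage(int value) {
        if (value < 0)   return 0;
        if (value > 100) return 100;
        return value;
    }

    TEST(ClampPercentage, PassesThroughValuesInRange) {
        EXPECT_EQ(42, clampPercentage(42));
    }

    TEST(ClampPercentage, ClampsOutOfRangeValues) {
        EXPECT_EQ(0, clampPercentage(-5));
        EXPECT_EQ(100, clampPercentage(250));
    }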
There will be others that nearly fit the above. You'll be tempted to re-engineer them a bit so they do; don't, until you have some sort of test in place. Write unit tests for the bits you can, and integration tests where you can't.
So the basic idea is: get as many tests as you can in place before you start changing the code, so that you can verify it, and then make the smallest change possible to improve the code, writing the tests first!
There are a fair few patterns and strategies you can use (get a good book on refactoring legacy code); start with the simple ones.
Prepare for dismay, hard work and rework, but the best piece of advice I can give is: don't try to take shortcuts. After all, that's what the chuffer who left you with this did, isn't it?
Grab a good test framework.
I used Google Test a lot at my last company, and it was pretty good, though there are likely better ones around.
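For reference, the minimal Google Test entry point is just this (compile your tests together with it and link against gtest):

    #include <gtest/gtest.h>

    // Runs every TEST/TEST_F compiled into this binary.
    int main(int argc, char** argv) {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }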
Reading:
http://code.google.com/p/googletest/
Comparison of C++ unit test frameworks