How to know whether a project is using only TDD [closed] - unit-testing

Closed 5 years ago. This question needs to be more focused and is not currently accepting answers.
I want to provide statistics on TDD usage in a company, so I need to identify which projects use only TDD and which have test code written after the production code. I thought of using file-change timestamps, but does anybody have a better solution for this?

A pretty broad question, but I think there is actually a fact-based answer.
That answer is: you can't solve social problems on the technical layer.
In other words: your goal/requirement itself is flawed: you will not be able to generate those clear-cut statistics. You might be able to apply some heuristics, but unless you get access to all information from all developer systems, timestamps won't help you. You see, the normal approach is to do some coding and, at some point, commit all of it to the version control system.
So, sometimes it might be clear from timestamps that X was written before XTest; but very often, X and XTest will land in the repository within a single commit. Now - which one was written first?
Thus: start thinking on the "social" level first. Meaning: talk to the development teams. Ask them about their practices. And when they claim to do TDD, look into their specific commit history and see whether it tells you anything.
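As a rough illustration of such a heuristic (my own sketch, not an established method), the following Go program shells out to git log and flags commits that touch production .go files without touching any *_test.go file. It assumes Go-style test naming and that it is run from the repository root; adapt the suffix checks for other languages.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Flags commits that change production .go files without touching any
// *_test.go file. Only a heuristic: as noted above, code and tests
// committed together still tell you nothing about which came first.
func main() {
	// --name-only prints the changed files under each commit; the "@"
	// prefix lets us tell hash lines apart from file names.
	out, err := exec.Command("git", "log", "--name-only", "--pretty=format:@%H").Output()
	if err != nil {
		panic(err) // e.g. not run inside a git repository
	}

	var hash string
	prod, test := false, false
	report := func() {
		if hash != "" && prod && !test {
			fmt.Println("no tests touched in commit", hash)
		}
	}
	for _, line := range strings.Split(string(out), "\n") {
		switch {
		case strings.HasPrefix(line, "@"):
			report()
			hash, prod, test = strings.TrimPrefix(line, "@"), false, false
		case strings.HasSuffix(line, "_test.go"):
			test = true
		case strings.HasSuffix(line, ".go"):
			prod = true
		}
	}
	report()
}
```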

Usually, following the Test-Driven Development practice implies continuously repeating small Red-Green-Refactor cycles. As @GhostCat stated, looking into the commit history is an excellent way to check whether the devs follow TDD principles. Every change in the production code should be reflected in a corresponding unit test.
You may also check the code coverage. High coverage is not the goal in itself, but it can be a good indication that TDD practices are being followed.
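To make the coverage check concrete, here is a minimal sketch that could run in CI, assuming a Go project and using go test's built-in coverage tooling; the 70% threshold is an arbitrary example, not a recommendation:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

// Runs the test suite with coverage and fails below a threshold.
func main() {
	if err := exec.Command("go", "test", "-coverprofile=cover.out", "./...").Run(); err != nil {
		panic(err)
	}
	out, err := exec.Command("go", "tool", "cover", "-func=cover.out").Output()
	if err != nil {
		panic(err)
	}
	// The last line looks like: "total:  (statements)  82.5%"
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	fields := strings.Fields(lines[len(lines)-1])
	pct, _ := strconv.ParseFloat(strings.TrimSuffix(fields[len(fields)-1], "%"), 64)
	fmt.Printf("total coverage: %.1f%%\n", pct)
	if pct < 70 { // arbitrary example threshold
		os.Exit(1)
	}
}
```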

Related

Documenting Unit tests in Go [closed]

Closed 1 year ago. This question needs to be more focused and is not currently accepting answers.
Whoever writes unit tests in Go, how are you documenting them?
Is there some kind of 'docstring' (like in Python) convention?
If so, how do you maintain this documentation afterwards?
Is it possible to generate Docs based on the description from Unit tests with some automatic tool?
I am asking because, as a QA person on my team, I wish to document those tests and maintain them as part of an ongoing dev cycle.
Whoever writes unit tests in Go, how are you documenting them?
Not in any systematic way (if at all).
Is there some kind of 'docstring' (like in Python) convention?
No. (For executable examples there is, of course.)
If so, how do you maintain this documentation afterwards?
N/A. Nothing to maintain.
Is it possible to generate Docs based on the description from Unit tests with some automatic tool?
Asking for 3rd-party software/libraries is off-topic on SO.
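For reference, the "executable examples" mentioned above are functions named ExampleXxx in a _test.go file; godoc renders them as documentation for Xxx, and go test verifies their output against the // Output: comment. A minimal sketch, where the mathutil package and its Add function are hypothetical:

```go
package mathutil_test

import (
	"fmt"

	"example.com/mathutil" // hypothetical package under test
)

// ExampleAdd is rendered by godoc as documentation for Add, and
// go test fails if the printed output differs from the Output comment.
func ExampleAdd() {
	fmt.Println(mathutil.Add(1, 2))
	// Output: 3
}
```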
Regarding automatic documentation, take a look at godoc: https://go.dev/blog/godoc
I don't know of anything specific to unit tests in Go (everyone uses them, same as in other languages), but besides them BDD is also popular; for that kind of thing, take a look at godog: https://github.com/cucumber/godog
As for "asking for 3rd-party software/libraries is off-topic on SO": Go is all about 3rd-party libraries :)
P.S. You can probably use the pattern godo(.{1}) to find any relevant Go packages :)

How to architect/design a knowledge base to solve issues from its history analysis? [closed]

Closed 5 years ago. This question needs to be more focused and is not currently accepting answers.
I have a ticketing system (let's say JIRA or similar) in which issues with my application are filed. My requirement is to build a knowledge base from it so that I can predict the solution to similar issues in the future by churning through that knowledge base.
To explain further: the knowledge base would tell me how many times this kind of issue has arisen in the past and what its root cause has been most of the time (let's say 80% of the time). The repository should therefore hold an analysis of each and every issue, its likely root cause, and other relevant information about the issue.
To start building such a knowledge base, I need to know the following:
What is the most commonly used technology/mechanism to achieve this?
How do I need to architect/design a system to serve this kind of requirement?
Does it require learning any particular language/database?
I ask the community experts to point me to the required information, or at least give me a starting point in this direction.
Thanks.
I would suggest against a ‘reinvent the wheel’ approach.
There are perfectly good tools out there that cover your use cases. Look at ServiceNow or Desk.com as CRMs for tickets, or, if you just want a wiki that integrates with Jira, look at Atlassian's own wiki (Confluence).
You can also generate reports from Jira itself; I'm not sure why anyone would want to build their own when such great tools are available.
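If generating reports from Jira appeals to you, Jira exposes a REST search endpoint that accepts a JQL query. A minimal sketch in Go; the base URL and JQL are placeholders, and authentication (which a real instance will require) is omitted:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Count resolved issues matching a JQL query; the instance URL and
	// query below are placeholders. maxResults=0 returns only the total.
	jql := url.QueryEscape(`project = APP AND resolution = Done`)
	resp, err := http.Get("https://jira.example.com/rest/api/2/search?jql=" + jql + "&maxResults=0")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result struct {
		Total int `json:"total"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println("matching issues:", result.Total)
}
```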

Is it acceptable to create unit tests only after QA testing is done? [closed]

Closed 5 years ago. This question is opinion-based and is not currently accepting answers.
I came to know that in some shops, code is developed first, given to QA for testing, and then the developers write unit tests for that code. Is this approach acceptable? If so, what are the pros and cons?
I got some clues from the answers to another question: Is Unit Testing worth the effort?
But I also need answers specific to my question.
A lot of serious dev "shops" do this.
When you develop complex applications for a client, you never actually "care" about the simple unit tests, the ones you could write on any day of the project. You have to "test at a coarser level of granularity" (32:30 in the video linked below), and you generally want to test things that are not supposed to change, so you don't have to rewrite tests over and over whenever the architecture changes a bit.
To answer your question: creating unit tests at the end is a fail-safe for later; when you fix bugs, they make sure you don't break functionality the client requires. Writing tests at the end also gives you the insight you need to write them: the client's wishes are known and no longer subject to change.
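To make that fail-safe concrete: a typical after-the-fact unit test is a regression test pinned to a fixed bug. A minimal sketch in Go, where the order package, its FinalPrice function, and the ticket number are all hypothetical:

```go
package order_test

import (
	"testing"

	"example.com/order" // hypothetical package under test
)

// Regression test for hypothetical ticket APP-1234: discounts over
// 100% must be clamped to a zero price, not produce a negative one.
// Written after the fix, it guards behaviour the client signed off on.
func TestDiscountClampedAt100Percent(t *testing.T) {
	got := order.FinalPrice(100.0, 1.5) // 150% discount requested
	if got != 0 {
		t.Fatalf("FinalPrice(100, 1.5) = %v, want 0", got)
	}
}
```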
Bottom line: It's not a science, you only get good at it while doing it.
PS: Not a fan, but this one is "right on the money": https://www.youtube.com/watch?v=9LfmrkyP81M

Provide unit testing for C++ code [closed]

Closed 9 years ago. This question needs to be more focused and is not currently accepting answers.
I need to provide unit tests for an application written in C++. It is a very big application containing many source files (.h, .cpp), and I don't know where to start or how to proceed.
So any help is more than welcome.
Thanks
Did you upset someone? Given there are no unit tests, the chances of the code being written to be testable range from slim to absolutely none.
Without seeing the code and spending several weeks if not months with it no one can give you more than a general strategy.
There will be some functions you can write unit tests for: the ones whose arguments are easy to generate, that do very few things (one thing would be nice), and that have no side effects. Attack these first and get them out of the way.
There will be others that nearly fit the above. You'll be tempted to re-engineer them a bit so that they do; don't do it until you have some sort of test. Write tests for the bits you can, and write integration tests where you can't.
So the basic idea is: get as many tests in place as you can before you start changing the code, so that the code stays tested; then make the smallest change possible to improve the code, and write the tests first!
There are a fair few patterns and strategies you can use (get a good book on refactoring legacy code); start with the simple ones.
Prepare for dismay, hard work, and rework. But the best piece of advice I can give is: don't try to take shortcuts; after all, that's what the chuffer who left you with this did, isn't it?
Grab a good test framework.
I have used Google Test a lot at my last company, and it was pretty good, though there may be better ones around.
Reading:
http://code.google.com/p/googletest/
Comparison of C++ unit test frameworks

How do you find a particular piece of functionality in a large codebase? [closed]

Closed 3 years ago. This question is opinion-based and is not currently accepting answers.
I was fascinated by the "press Tab to search site" feature in Chromium, so naturally I wanted to see how exactly it is implemented in code.
A little background for anybody who isn't familiar with this: after navigating to some site, say Wikipedia, and doing a search, Chromium remembers the name of the query variable and lets you press Tab and search the site directly from the address bar. Neat!
The problem is that the codebase for Chromium is huge, and I've had no luck finding the method/function that handles this.
How do you approach a large codebase when you are looking for the implementation of a particular piece of functionality? Any tricks for narrowing it down? Preferably it should not require building the software with debug symbols and following the flow through the program.
There is no one-size-fits-all approach to this sort of problem. But for this one I would try these:
If there are any unique messages associated with the operation, grep all the source files for that string. A common pitfall of this technique is that messages might be assembled from pieces within the application, so it is often helpful to grep for a unique short phrase—or even a single word—to identify the source of the message. Once the text is found, then finding what references it often requires more text searches.
Trace execution from an easy-to-find point, like the command processing and dispatch loop. I'd look for a Tab key case and follow where it leads.
Look at source code directory and filenames for hints. Software is often constructed rationally, with good engineers dividing and conquering in a sensible way.
A test coverage tool is a good way to do this; it tells you what parts of an application are exercised by a test.
Instrument the application to collect test coverage. Execute the functionality you care about. Record what is executed. Execute something similar, but not the same as the functionality you want. Record this. Take the set difference over the coverage. The diff selects code involved in the functionality of interest, excluding code which is common to similar functionality.
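Here is a minimal sketch of that set-difference step in Go, assuming each coverage run has already been dumped to a text file with one executed file:line location per line (the dump format is an assumption; real coverage tools each use their own):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// readSet loads one executed "file:line" location per line into a set.
func readSet(path string) map[string]bool {
	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	set := make(map[string]bool)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		set[sc.Text()] = true
	}
	return set
}

// Usage: covdiff feature.cov baseline.cov
// Prints locations executed by the feature run but not the baseline,
// i.e. the code specific to the functionality of interest.
func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: covdiff feature.cov baseline.cov")
		os.Exit(2)
	}
	feature := readSet(os.Args[1])
	baseline := readSet(os.Args[2])
	for loc := range feature {
		if !baseline[loc] {
			fmt.Println(loc)
		}
	}
}
```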
Ask the Chromium team. They don't give points or bronze pixels, but they're definitely the authority and the right people to ask this sort of question.