I want to test a function, and I want this test to fail. After initializing the project with cabal init, I created my findOddInt function in Main.hs in the app directory:
module Main where
findOddInt :: [Int] -> Int
findOddInt (x:xs) = undefined
main :: IO ()
main = putStrLn "Hello, Haskell!"
I created a directory named tests and created the FindOddIntTest.hs file:
module FindOddInt ( main ) where
import Main hiding ( main )
import Test.HUnit
import qualified System.Exit as Exit
test1 :: Test
test1 = TestCase (assertEqual "should return an odd integer" 3 (findOddInt [0, 1, 0, 1, 0]))
tests :: Test
tests = TestList [TestLabel "test1" test1]
main :: IO ()
main = do
    result <- runTestTT tests
    if failures result > 0 then Exit.exitFailure else Exit.exitSuccess
My .cabal file is as follows:
cabal-version: 2.4
name: find-odd-int
version: 0.1.0.0
-- A short (one-line) description of the package.
-- synopsis:
-- A longer description of the package.
-- description:
-- A URL where users can report bugs.
-- bug-reports:
-- The license under which the package is released.
-- license:
author: André Ferreira
maintainer: andresouzafe@gmail.com
-- A copyright notice.
-- copyright:
-- category:
extra-source-files: CHANGELOG.md
executable find-odd-int
  main-is: Main.hs
  -- Modules included in this executable, other than Main.
  -- other-modules:
  -- LANGUAGE extensions used by modules in this package.
  -- other-extensions:
  build-depends: base ^>=4.14.3.0
  hs-source-dirs: app
  default-language: Haskell2010

test-suite tests
  type: exitcode-stdio-1.0
  main-is: FindOddIntTest.hs
  build-depends: base ^>=4.14, HUnit ^>=1.6
  hs-source-dirs: app, tests
  other-modules: Main
  default-language: Haskell2010
All set, I ran cabal configure --enable-tests && cabal test and got the output:
Build profile: -w ghc-8.10.7 -O1
In order, the following will be built (use -v for more details):
- find-odd-int-0.1.0.0 (test:tests) (file app/Main.hs changed)
Preprocessing test suite 'tests' for find-odd-int-0.1.0.0..
Building test suite 'tests' for find-odd-int-0.1.0.0..
[1 of 2] Compiling Main ( app/Main.hs, /home/asf/Projects/codewars-exercises/haskell/6-kyu/find-odd-int/dist-newstyle/build/x86_64-linux/ghc-8.10.7/find-odd-int-0.1.0.0/t/tests/build/tests/tests-tmp/Main.o )
[2 of 2] Compiling FindOddInt ( tests/FindOddIntTest.hs, /home/asf/Projects/codewars-exercises/haskell/6-kyu/find-odd-int/dist-newstyle/build/x86_64-linux/ghc-8.10.7/find-odd-int-0.1.0.0/t/tests/build/tests/tests-tmp/FindOddInt.o ) [Main changed]
Linking /home/asf/Projects/codewars-exercises/haskell/6-kyu/find-odd-int/dist-newstyle/build/x86_64-linux/ghc-8.10.7/find-odd-int-0.1.0.0/t/tests/build/tests/tests ...
Running 1 test suites...
Test suite tests: RUNNING...
Test suite tests: PASS
Test suite logged to:
/home/asf/Projects/codewars-exercises/haskell/6-kyu/find-odd-int/dist-newstyle/build/x86_64-linux/ghc-8.10.7/find-odd-int-0.1.0.0/t/tests/test/find-odd-int-0.1.0.0-tests.log
1 of 1 test suites (1 of 1 test cases) passed.
I already looked at the following posts:
Running “cabal test” passes although there are test failures
Create and run a minimal test suite in Haskell using Hunit only
These posts don't cover my case. Any help is appreciated.
You included the source of your executable as part of your test suite (hs-source-dirs), and this confuses the compiler. When compiling a test or a regular executable, GHC looks for main in a module named Main; in this case that is app/Main.hs, whose main does nothing interesting, so your test module is compiled but never actually used.
Don't put app in the hs-source-dirs of the test suite. And more generally, don't include a directory in more than one component (library, executable, test or benchmark suite), unless you know what you're doing. If you need to reuse code, you can put it in a library and have executable and test suite depend on it.
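For example, here is a minimal sketch of that layout, assuming findOddInt moves into a new library module (the FindOddInt module name and the src directory are placeholder choices, not something from the original project):

library
  exposed-modules: FindOddInt
  hs-source-dirs: src
  build-depends: base ^>=4.14.3.0
  default-language: Haskell2010

executable find-odd-int
  main-is: Main.hs
  hs-source-dirs: app
  build-depends: base ^>=4.14.3.0, find-odd-int
  default-language: Haskell2010

test-suite tests
  type: exitcode-stdio-1.0
  main-is: FindOddIntTest.hs
  hs-source-dirs: tests
  build-depends: base ^>=4.14, HUnit ^>=1.6, find-odd-int
  default-language: Haskell2010

Here src/FindOddInt.hs declares module FindOddInt where and contains findOddInt; both app/Main.hs and the test import it, and no directory appears in more than one component.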
The file that you put under main-is: in the .cabal file should include module Main where or no module line. The file name can be anything, but to avoid confusing it with a library module, it may be a good idea to use a lowercase name.
If the module is not going to be imported by another module (Main, for example), then you are free to use any filename for it.
--- https://downloads.haskell.org/ghc/latest/docs/users_guide/separate_compilation.html
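Concretely, building on the library layout sketched above (with FindOddInt as an assumed library module name), tests/FindOddIntTest.hs can then declare itself as Main and import the function from the library instead of from another Main. A sketch:

module Main where

import Test.HUnit
import qualified System.Exit as Exit

import FindOddInt ( findOddInt )  -- the library module, not the executable's Main

test1 :: Test
test1 = TestCase (assertEqual "should return an odd integer" 3 (findOddInt [0, 1, 0, 1, 0]))

main :: IO ()
main = do
    result <- runTestTT (TestList [TestLabel "test1" test1])
    -- An undefined function is reported by HUnit as an error, not a failure,
    -- so count both when deciding the exit code:
    if errors result + failures result > 0 then Exit.exitFailure else Exit.exitSuccess

With findOddInt still undefined, this builds and the test suite fails at runtime, which is exactly the red test the question is after.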
You're inconsistently trying to declare two different modules as Main - for the purpose of finding the main entry point you want FindOddIntTest.hs to be considered Main, but for the purpose of importing things you want Main.hs to be considered Main.
In your cabal file you have:
test-suite tests
  type: exitcode-stdio-1.0
  main-is: FindOddIntTest.hs
This says that the Main module should be in a file called FindOddIntTest.hs (in one of the hs-source-dirs). However, in FindOddIntTest.hs itself you have:
module FindOddInt ( main ) where
import Main hiding ( main )
So there you're saying that that same file is not the Main module after all, it's module FindOddInt. And to make matters worse you're actually importing from another module called Main (which the compiler is going to have to find by the usual module to filename convention, so it'll look for a Main.hs file in any of the hs-source-dirs).
Apparently, the way this ends up being compiled, the compiler actually uses the Main.hs file as the definition of the Main module, effectively overriding the main-is: FindOddIntTest.hs configuration in the cabal file. That means your test suite just runs putStrLn "Hello, Haskell!", which of course succeeds. You can confirm this by running cabal test --test-show-details=direct, so that it shows the output from successful tests as well as from failures. I get something like this:
Build profile: -w ghc-9.0.2 -O1
In order, the following will be built (use -v for more details):
- find-odd-int-0.1.0.0 (test:tests) (first run)
Preprocessing test suite 'tests' for find-odd-int-0.1.0.0..
Building test suite 'tests' for find-odd-int-0.1.0.0..
Running 1 test suites...
Test suite tests: RUNNING...
Hello, Haskell!
Test suite tests: PASS
Test suite logged to:
/tmp/scratch/dist-newstyle/build/x86_64-linux/ghc-9.0.2/find-odd-int-0.1.0.0/t/tests/test/find-odd-int-0.1.0.0-tests.log
1 of 1 test suites (1 of 1 test cases) passed.
You can see the "Hello, Haskell" being printed there.
If you change your FindOddIntTest.hs file so that its header is module Main (since your cabal file says that is the Main module), then you get this error:
<no location info>: error:
module ‘main:Main’ is defined in multiple files: tests/FindOddIntTest.hs
                                                 tests/FindOddIntTest.hs
That's because your other-modules: Main is telling it to look for the Main module again. If you remove that, you get:
Building test suite 'tests' for find-odd-int-0.1.0.0..
Module imports form a cycle:
module ‘Main’ (tests/FindOddIntTest.hs) imports itself
This is because the import Main is now being resolved against FindOddInt.hs, not Main.hs.
Basically, there is no way to make this work the way you seem to want it to. It is arguably a bug in Cabal and/or GHC that your original config compiles at all; it should possibly report an error about the mismatch between your cabal file's main-is and the referenced file's own declared module name (and if not, it's extremely surprising that it uses another file as Main instead of that one). But no amount of fixing problems upstream is going to solve the fundamental contradiction in what you're trying to do. Either FindOddIntTest.hs is Main (in which case you can't import things from another Main), or it isn't.
In my experience, it is not really a good idea to ever import something from Main. A source file for Main can only really be included in a single application, so as soon as you want 2 application entry points (like your normal executable and a test suite), any code you want to share between them can't be in Main.
What this ends up meaning for my coding practices is that I only put very minimal code in Main that I don't intend to ever test with a library-level test suite (I might do further end-to-end testing of the whole binary, but that's different). Most of my code is typically in an internal library, so it can be imported by both my Main module and by my test suite; my actual app folder will usually only have a very thin Main.hs that stitches together a definition for main that is just combining imported things from the library (sometimes even it's just main = SomeModule.realMain, if I wanted run library tests on realMain).
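As a minimal sketch of that shape (Lib and realMain are hypothetical names, not from the question):

-- app/Main.hs: a thin entry point with nothing worth unit testing
module Main where

import qualified Lib

main :: IO ()
main = Lib.realMain

Everything interesting lives in the library module Lib, so the test suite imports Lib directly and never needs to touch Main.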
I also agree with Li-yao Xia's recommendation that you don't put the same folder in the hs-source-dirs of multiple components. It's possible to do that sensibly (and is one option for giving your test suite access to modules that a library doesn't expose), but it also has drawbacks even when it doesn't cause errors like this one (mainly that you waste time compiling files multiple times for each component, instead of compiling once and then linking them). One folder per component is the simplest and easiest way to structure your project, unless you have a good reason otherwise.
For myself, I don't like giving Main a different name than Main.hs. You still can't have another Main.hs that is a different module, so I think it avoids confusion all round to stick firmly to the usual module naming convention and have the Main.hs filename be occupied by the Main module. You likely would have spotted what was happening if you had done that and tried to have a Main.hs importing from Main (and the compiler certainly would have spotted the two files claiming to be Main, as in an error I quoted above). There's not much additional safety in putting Main in a file with a name that can't be imported; it just enforces the "don't import from Main" rule that you end up having to stick to anyway. This makes main-is: Main.hs a pointless bit of boilerplate, but at least you don't have to think about it.
Related
I have a custom Julia package with a hopefully quite standard structure: a single package, a src directory with the sources, and a test directory with all the tests.
In my package I have 4 modules (1 "main" entry-point module and 3 submodules).
I am slowly adding tests to the test directory. The tests import the modules with the using keyword.
Now the problem is that very often I am testing some "private" method that doesn't need to be visible from outside, and I have to export those functions even though I would not export them otherwise.
How do I solve this? I was thinking that each module could have a "Private" submodule containing all these "private" functions and constants used for unit testing, so that I don't bloat the exports of my clean module API.
Copied from the comments, as that seems to be the solution the OP is looking for:
You can always test functions that are not exported by calling them with MyModule.MySubModule.func()
I've got the start of a Haskell application and I want to see how the build tools behave. One of the things I would like to see is the Haskell coverage reports, through hpc (Haskell Program Coverage; as a side note, I didn't find this tag on SO, where hpc points to high-performance computing).
The structure of my application is
Main
src/
  ModuleA
  ModuleB
tests/
  ModuleBTest
I have unit tests for moduleB, and I run those unit tests through cabal test. Beforehand, I configure cabal to spit out the hpc data through
cabal configure --ghc-options=-fhpc --enable-tests
I then build and test,
cabal build
cabal test unit-tests (that's the name of the test suite in the cabal file)
and I indeed see a report and all seems well. However, moduleA is not referenced from within moduleB; it's only referenced from Main. I don't have tests (yet) for the Main module.
The thing is, I expected to see moduleA pop up in the hpc output, highlighted completely in yellow and really waving at me that there are no tests for this module, but that doesn't seem to be the case. I noticed that the .mix files are created for this 'unused' module, so I suspect the build step went ok but it goes wrong in the cabal test step.
If I go through ghci and compile the unit tests while explicitly putting moduleA on the list of modules to compile, then I do get hpc to show me that this module has no tests at all. So I suspect cabal optimizes this moduleA away (as it's 'unused') somewhere, but I don't really see how or where.
Now, I do realize that this might not be a real-life situation, as moduleA is only referenced from within the main method, moduleB doesn't reference moduleA, and I don't test the Main module (yet). But still, I would have felt a lot better if it had at least shown up in the program coverage as a hole in my tests the size of a battleship. Does anybody have an idea?
Note : I realize that my question might boil down to : "How do I tell cabal not to optimize unused modules away?" but I wanted to present the complete problem.
Kasper
First, make sure all of your modules are listed in the other-modules cabal field.
Even though, in my experience, applications sometimes seem to work their way around without everything being specified there, omissions can cause mysterious linking issues, and I assume they could cause situations like yours.
Now, other than that, I don't think cabal optimizes your modules away like that; it's GHC's dead code elimination. If a module is not used at all (a single actual usage per module is enough to keep it), GHC won't even care for it.
Unfortunately I haven't seen a flag to change that. You may want to make a meaningless usage for every module in your test project, just to get things visible.
2.1 Dead code elimination
Does GHC remove code that you're not actually using?
Yes and no. If there is something in a module that isn't exported and
isn't used by anything that is exported, it gets ignored. (This makes
your compiled program smaller.) So at the module level, yes, GHC does
dead code elimination.
On the other hand, if you import a module and use just 1 function from
it, all of the code for all of the functions in that module get linked
in. So in this sense, no, GHC doesn't do dead code elimination.
(There is a switch to make GHC spit out a separate object file for
each individual function in a module. If you use this, only the
functions that are actually used will get linked into your executable. But
this tends to freak out the linker program...)
If you want to be warned about unused code (Why do you have it there
if it's unused? Did you forget to type something?) you can use the
-fwarn-unused-binds option (or just -Wall).
- GHC optimisations - HaskellWiki
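For instance, the "meaningless usage" suggested above could look like this in the test module; ModuleA.someFunctionInA is a placeholder for any binding that ModuleA actually exports:

import qualified ModuleA

-- One reference is enough: GHC eliminates dead code at module granularity,
-- so touching a single binding keeps ModuleA in the build and therefore in
-- the hpc report.
_keepModuleA :: ()
_keepModuleA = ModuleA.someFunctionInA `seq` ()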
I just started a python project and I'm trying out different test frameworks.
The problem I have is that nose2 does not find my tests:
$ nose2 --verbose
Ran 0 tests in 0.000s
OK
while nosetests find them all
$ nosetests --collect-only
.................................
Ran 33 tests in 0.004s
OK
On the other hand, I can execute a single test with nose2 from the same directory:
$ nose2 myproj.client.test.mypkg.mymodule_test
.
Ran 1 test in 0.007s
OK
where myproj.client.test.mypkg.mymodule_test is like:
'''
Created on 18/04/2013

@author: julia
'''
from unittest import TestCase, main
import os
from myproj.client.mymodule import SUT
from mock import Mock
import tempfile
class SUTTest(TestCase):

    def setUp(self):
        self.folder = tempfile.mkdtemp(suffix='myproj')
        self.sut = SUT(self.folder, Mock())
        self.sut.init()

    def test_wsName(self):
        myfolder = os.path.join(self.folder, 'myfolder')
        os.mkdir(myfolder)
        self.sut.change_dir(myfolder)
        self.assertEqual(self.sut.name, 'myfolder')

if __name__ == "__main__":
    main()
I've been looking at the documentation and I cannot find a possible cause for this.
Running Python 2.7.3 on Mac OS 10.8.3.
Adding to MichaelJCox's answer, another problem is that nose2, by default, is looking for your test file names to begin with 'test'. In other words, 'testFilePattern == test*.py' (you can find that in nose2/session.py).
You can fix this in two ways:
Specify a different test file pattern in a configuration file:
Create a configuration file somewhere in your project (the base directory is a good place, or wherever you will run nose2). nose2 will look for and load any file called nose2.cfg or unittest.cfg.
Add this to that configuration file.
[unittest]
test-file-pattern=*.py
Now run nose2 again and it'll find those old test cases. I'm unsure if this could adversely affect nose2 performance or what, but so far so good for me.
Rename your test files so that they begin with test.
For example, if you have a project like this:
/tests/
  __init__.py
  fluxcapacitor.py
Rename /tests/fluxcapacitor.py to /tests/test_fluxcapacitor.py, now nose2 will find the tests inside fluxcapacitor.py again.
More verbose output
Finally, this is unrelated to your question but might be helpful in the future: if --verbose doesn't output enough info, you can also pass the additional argument --log-level debug for even more output.
It looks like nose2 needs one of three things to find the tests:
Your tests need to be in packages (just create __init__.py files in each dir of your test structure)
You need a directory named 'test' in the same directory in which nose2 is being run
The test files need to be in the same directory in which nose2 is run
nose2's _discovery method (in nose2/plugins/loader/discovery.py) is explicitly looking for directories named 'test' or directories that are packages (if it doesn't just pick up your test files from the same directory):
if ('test' in path.lower()
        or util.ispackage(entry_path)
        or path in self.session.libDirs):
    for test in self._find_tests(event, entry_path, top_level):
        yield test
If I set up a similar test file (called tests.py) and run nose2 in the same directory, it gives me the 1 test OK back.
If I create a directory named 'test', move the file to it, and run nose2 from the top directory, I get an error stating that it can't import my py file. Creating an __init__.py in that directory fixes that error.
If I make a directory 'blah' instead and move the file there, then I see the issue you list above:
Ran 0 tests in 0.000s
OK
However, if I then create an __init__.py in directory 'blah', the test runs and I get my 1 test found and OK'd.
I have an error similar to the one in this post. Now, I'm sure I've made some stupid error somewhere, probably related to releasing an object or an observer or what-not, but since I can't seem to find a way to debug the code I thought I could use the NSDebugEnabled, NSZombieEnabled and MallocStackLogging (as shown here).
Can it be done using OCUnit? If so, how? I just can't find an "executable" to set these parameters on...
Thanks!
Aviad.
Unfortunately, Dave's solution didn't work - I kept getting errors and mistakes. I eventually got GHUnit to work on my project, found the problem by debugging, but it had its own problems so I now use both it and OCUnit which is slightly better integrated in terms of showing the results in the results tab.
sigh. When will we get to see a good, complete unit testing framework for Obj-C?
This may have been fixed in recent Xcodes, but I get zombies by doing
Go into schemes (cmd <)
Open Test, then Arguments tab
Uncheck "Use the Run action's arguments and environment variables"
"+" an environment variable "NSZombieEnabled" = "YES"
Well, NSZombieEnabled and friends are environment variables, which means they have to be set on an executable. The default setup for a unit testing bundle is for the tests to be run during the build process, and not during execution.
So the way to fix this is to make it so that your tests don't run during the build phase, but instead run them as part of an executable.
Here's how I do that:
Inside your Unit Test bundle target, remove the "Run Script" build phase. It's that step that executes the tests after compiling them.
From the Project menu, choose "New Custom Executable..." and name it something meaningful, like "otest"
Make the executable path to be the otest binary, which should be located at /Developer/Tools/otest
Set the following environment variables on the otest executable:
DYLD_FRAMEWORK_PATH => {UnitTest.bundle}/Contents/Frameworks
DYLD_LIBRARY_PATH => {UnitTest.bundle}/Contents/Frameworks
Set the following program arguments on the otest executable:
-SenTest All (this will run all of the unit tests)
{UnitTest.bundle}
You can now select your unit test bundle as the active target, and the otest executable as the active executable, and then build and debug. This will let you set breakpoints, set other environment variables (like NSZombieEnabled), and so on.
If you only want to debug a certain suite or specific unit test, you can change the -SenTest All argument to -SenTest MyUnitTestSuite or -SenTest MyUnitTestSuite/myUnitTestMethod.
It took me quite some time but I finally managed to make it work for my project.
To create the "logic" tests I followed Apple guidelines on creating logic tests.
This works fine once you understand that the logic tests are run during build.
To be able to debug those tests, it is required to create a custom executable that will call them. The article by Sean Miceli on the Grokking Cocoa blog provides all the information needed to do this. Following it, however, did not yield immediate success and needed some tweaking.
I will go over the main steps presented in Sean's tutorial providing some "for dummies" outline which took me some time to figure out:
Setup a target that contains the unit tests but DOES NOT run them
Setup the otest executable to run the tests
Setup the otest environment variables so that otest can find your unit tests
Step 1 - Setting up the target
Duplicate your unit tests target located under your project Targets. This will also create a duplicate of your unit tests product (.octest file). In my setup, "UnitTest" was the original target.
Rename both the unit tests target and the unit tests product (.octest file) to the same name; I called the duplicate target "UnitTestsDebug".
Delete the Run Script phase of the new target.
The name of both can be anything but I would avoid spaces.
Step 2 - Setting up otest
The most important point here is to get the correct otest, i.e. the one for your current iOS and not the default Mac version. This is well described in Sean's tutorial. Here are a few more details which helped me set things up right:
Go to Project -> New Custom Executable. This will pop open a window prompting you to enter an Executable Name and an Executable Path.
Type anything you wish for the name.
Copy paste the path to your iOS otest executable. In my case this was /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator4.2.sdk/Developer/usr/bin/otest
Press enter. This will bring you to the configuration page of your executable.
The only thing to change at this point is to select "Path Type: Relative to current SDK". Do not type in the path, this was done at step 3.
Step 3 - Setting up the otest arguments and environment variables
The otest arguments are straightforward to set up... but this proved to be my biggest problem. I had initially named my logic test target "LogicTests Debug". With this name, and "LogicTests Debug.octest" (with quotes) as the argument to otest, otest kept terminating with exit code 1 and NEVER stopped in my code...
The solution: no space in your target name!
The arguments to otest are:
-SenTest Self (or All or a test name - type man otest in terminal to get the list)
{LogicTestsDebug}.octest - Where {LogicTestsDebug} needs to be replaced by your logic test bundle name.
Here is the list of environment variables for copy/pasting:
DYLD_ROOT_PATH: $SDKROOT
DYLD_FRAMEWORK_PATH: "${BUILD_PRODUCTS_DIR}:${SDK_ROOT}:${DYLD_FRAMEWORK_PATH}"
IPHONE_SIMULATOR_ROOT: $SDKROOT
CFFIXED_USER_HOME: "${HOME}/Library/Application Support/iPhone Simulator/User"
DYLD_LIBRARY_PATH: ${BUILD_PRODUCTS_DIR}:${DYLD_LIBRARY_PATH}
DYLD_NEW_LOCAL_SHARED_REGIONS: YES
DYLD_NO_FIX_PREBINDING: YES
Note that I also tried DYLD_FORCE_FLAT_NAMESPACE, but this simply made otest crash.
Step 4 - Running your otest executable
To run your otest executable and start debugging your tests you need to:
Set your active target to your unit test target (LogicTestsDebug in my case)
Set your active executable to your otest executable
You can build and run your executable and debug your tests with breakpoints.
As a side note, if you are having problems running your otest executable, it can be related to:
Faulty path. I had lots of problems initially because I was pointing to the Mac otest. I kept crashing on launch with termination code 6.
Faulty arguments. Until I removed the space from the bundle (.octest) name, I kept having otest crash with exit code 1.
Wrong paths in environment variables. Sean's tutorial has lots of follow-up questions giving some insight into what other people tried. The set I have now seems to work, so I suggest you start with it.
You may get some messages in the console which might lead you to think something is wrong with your environment variables. You may notice a message regarding CFPreferences; it does not prevent the tests from running properly, so don't focus on it if you have problems running otest.
Lastly, once everything is working, you will be able to stop at breakpoints in your tests.
One last thing...
I've read on many blogs that the main limitation of the integrated Xcode SenTestKit is that tests cannot be run while building the application. Well, as it turns out, this is in fact quite easy to manage. You simply need to add your logic tests bundle as a dependency of your application target. This will make sure your logic tests bundle is built, i.e. all tests are run, before your application is built.
To do this you can drag and drop your logic test bundle onto your application target.
I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verification. I would like to just type make check to have them run automatically. My project is in C++, although I am still curious about writing automated tests for other languages as well.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It should then be automatically run when you make check, and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
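If the test executable itself still needs to be built, a common pattern (the file names here are hypothetical) is to declare it with check_PROGRAMS so that it is compiled only for make check and never installed:

# Built for "make check" only, never installed:
check_PROGRAMS = my-test-executable
my_test_executable_SOURCES = test-main.cpp

# Run everything declared above:
TESTS = $(check_PROGRAMS)

Note that the dashes in the program name become underscores in the canonical _SOURCES variable name.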
I'm using Check 0.9.10
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
./configure.ac
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
./tests/Makefile.am for the test code
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_builddir)/src/foo.h
check_foo_CFLAGS = @CHECK_CFLAGS@
check_foo_LDADD = @CHECK_LIBS@
and write the test code in ./tests/check_foo.c:
START_TEST (test_foo)
{
    ck_assert( foo() == 0 );
    ck_assert_int_eq( foo(), 0 );
}
END_TEST
/// Plus the usual Suite/TCase boilerplate (suite_create, tcase_add_test,
/// srunner_run_all) to run this test
Using Check you can set test timeouts and check for raised signals; it is very helpful.
You seem to be asking two questions in the first paragraph.
The first is about adding tests to the GNU autotools toolchain - but those tests, if I'm understanding you correctly, are both for validating that the environment necessary to build your application exists (dependent libraries and tools) and for adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests. You've proposed doing so from the autotools toolchain, presumably from the configure script. Doing that isn't conventional though; putting a 'test' target in your Makefile is a more conventional way of executing your test suite. The typical steps for building and installing an application with autotools (at least from a user's perspective, not from yours, the developer's) are to run the configure script, then run make, then optionally run make test and finally make install.
For the second issue, not wanting cppunit to be a dependency, why not just distribute it with your C++ application? Could you just put it right in whatever archive format you're using (be it tar.gz, tar.bz2 or .zip) along with your source code? I've used cppunit in the past and was happy with it, having used JUnit and other xUnit-style frameworks.
Here is a method without dependencies:
#src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
make check will then work naturally and show formatted and summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make dist, the test programs are not built or installed as part of what users get; the test sources do end up in the tarball, though, since automake distributes everything listed in _SOURCES (and make distcheck needs them).
When you do a make distcheck it will run make check and run your tests.
You can use Automake's TESTS to run programs generated with check_PROGRAMS, but this assumes that you are using Automake's test harness (its log driver and log compiler) for the output. It is probably easier to still use check_PROGRAMS but to invoke the test suite using a local rule in the Makefile:
check_PROGRAMS=testsuite
testsuite_SOURCES=...
testsuite_CFLAGS=...
testsuite_LDADD=...
check-local:
	./testsuite