Running a suite of Unit Tests to ensure adequate Use Case and Code Coverage requires a testing framework. There are several out there, but my favorite is still MbUnit. (Many of the concepts that will be discussed here carry over to the other frameworks as well.)
This is by no means a complete tutorial on MbUnit, just some of the features I use most often. Also, I will be using the MbUnit GUI for all of my examples. MbUnit supports most (if not all) of the automated build processes (like NAnt, MSBuild, etc.), and integrates with code coverage utilities (like NCover).
Best Practices For Test Projects
There are different schools of thought (some might call them religions) about organization of test projects. The common denominator is that the test projects need to mirror your projects being tested.
So, if you have a project named DAL you would have a project named DALTests. Each class in your DAL project would have a matching TestFixture in the DALTests project (including folder structure). For example, if you have a class named DataAccess in the folder Utilities in the DAL project, the TestFixture DataAccessTests will be in the Utilities folder in the DALTests project.
One variant of this rule is to have one project, then folders for each test "project", and then the above rule is applied through the folder structure.
While this doesn't strictly follow the best practice, it does simplify testing in small projects, and it is how the sample code from Day Of Dot Net is laid out.
The TestFixture
A TestFixture is a class that provides context and specific resources to run one or more Unit Tests. For MbUnit, this is a class in a test project with the [TestFixture] attribute. TestFixtures provide the entire framework for running unit tests.
There are two method attributes at the TestFixture level:
- TestFixtureSetUp - fires once for the TestFixture before any other decorated methods execute (but after the default constructor)
- TestFixtureTearDown - fires after all decorated methods execute (but before the Dispose method if IDisposable is implemented)
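A minimal sketch of a TestFixture using these two attributes, following the logging-file example discussed next (the file path and method names are illustrative):

```csharp
using System.IO;
using MbUnit.Framework;

[TestFixture]
public class DataAccessTests
{
    private StreamWriter _log;

    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // Runs once, before any Test in this fixture executes
        _log = new StreamWriter("TestActivity.log");
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
        // Runs once, after all Tests in this fixture have finished
        _log.Flush();
        _log.Close();
        _log.Dispose();
    }
}
```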
These are somewhat analogous to constructor and dispose methods. A typical use for these methods is to initialize an expensive resource that is to be used by the tests but does not need to be reinitialized for each test. An example would be initializing a logging file in the TestFixtureSetUp method to log all of the test activity. In the TestFixtureTearDown method the logging file would be Flushed, Closed, and Disposed.
SetUp/Test/TearDown Attributes
There are three Test-related attributes:
- SetUp - fires once before every Test method (even Ignored Tests)
- Test - This is the actual Test method that contains the assertions
- TearDown - fires once after every Test method (even Ignored Tests)
The Test attribute is used to decorate the methods that are the actual Tests. The SetUp and TearDown methods are used to set up (and tear down) the stage before (and after) each Test. There are a couple of important items to consider:
- While there is no guarantee of the order of execution of Tests within a TestFixture (unless using the TestSequence attribute), they typically fire in the order they appear. So, it is extremely important to make sure each Test is completely independent of any other Test (see my prior post on unit testing: Unit Testing with .Net (Part 1))
- The SetUp and TearDown methods fire even if a test is ignored, furthering the case for ensuring that Tests are independent of each other.
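The three attributes above can be sketched together in one fixture (the ICalculator type and Calculator class are assumptions carried over from the calculator example):

```csharp
using MbUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    private ICalculator _sut; // hypothetical System Under Test

    [SetUp]
    public void SetUp()
    {
        // Runs before each Test, so every Test gets a fresh instance
        _sut = new Calculator();
    }

    [Test]
    public void Should_Add_Two_Integers()
    {
        Assert.AreEqual(6, _sut.Add(3, 3));
    }

    [TearDown]
    public void TearDown()
    {
        // Runs after each Test, even if the Test failed
        _sut = null;
    }
}
```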
Assertions are the meat of the Tests, and how the code is actually validated (except for the ExpectedException Tests). After all of the (TestFixture)SetUp is complete, it's time to actually test the System Under Test.
The format of the Assertions that do comparisons (AreEqual, AreNotEqual, AreSame, AreNotSame, etc.) is AssertionType(expectedResult, actualResult). Thus, the Assertion for the Add method would be coded Assert.AreEqual(6, Calculator.Add(3, 3)).
The boolean Assertions (IsTrue, IsFalse) just take the actual result (since the method name indicates the expected result).
All of the assertions can take additional parameters, such as messages to display upon failure. For the calculator example we discussed in my previous post, we decided to code the following Assertions:
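The original listing did not survive here; below is a sketch of the eight Assertions, reconstructed from the Row values used later in this post (the ICalculator interface and CreateSUT() helper are assumptions):

```csharp
[Test]
public void Should_Add_Two_Integers()
{
    // CreateSUT() is a hypothetical factory returning the System Under Test
    ICalculator sut = CreateSUT();

    // Assertions 1-3: non-negative operands
    Assert.AreEqual(6, sut.Add(3, 3));
    Assert.AreEqual(3, sut.Add(0, 3));
    Assert.AreEqual(3, sut.Add(3, 0));

    // Assertions 4-8: negative operands
    Assert.AreEqual(1, sut.Add(3, -2));
    Assert.AreEqual(1, sut.Add(-2, 3));
    Assert.AreEqual(-2, sut.Add(-1, -1));
    Assert.AreEqual(-3, sut.Add(0, -3));
    Assert.AreEqual(-3, sut.Add(-3, 0));
}
```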
For the test to pass, ALL of the Assertions must pass. There is an issue with coding all of these assertions in one test: when the test fails, the breadth of the Use Cases covered can make it more complicated to resolve the issue. (Again, this is a trivial example, but consider the implications of a much more complicated System Under Test.)
I recommend coding like Use Cases together into Tests. For the above example, I would place the non-negative Assertions (1-3) into one Test, and the negative Assertions (4-8) into a separate Test. From a TDD standpoint, there is another issue with this code. There is way too much duplicate code, and it needs to be refactored. Enter the RowTest attribute to save the day!
One of the most powerful features of MbUnit is the RowTest feature. This allows Use Case coverage with a single Test by passing different values into the Test, eliminating the duplicate code. The Test method changes from a no-arg method to one with parameters, passed in by the Row attribute. So we can take the previous example, and replace all of those assertions with a single assertion:
We then replace the [Test] attribute with the [RowTest] attribute, and add our Use Case coverage through Row Attributes:
[RowTest]
[Row(6, 3, 3)]
[Row(3, 0, 3)]
[Row(3, 3, 0)]
[Row(1, 3, -2)]
[Row(1, -2, 3)]
[Row(-2, -1, -1)]
[Row(-3, 0, -3)]
[Row(-3, -3, 0)]
public void Should_Add_Two_Integers(int result, int a, int b)
{
    ICalculator sut = CreateSUT();
    Assert.AreEqual(result, sut.Add(a, b));
}
Each of these Rows becomes a separate test in the Test Runner, each of which can be run individually.
In our Add example, the business decided to change the requirements to not allow negative integers. With the Rows as separate tests, we will see individual failures (all of the rows with parameters < 0).
The Row attribute takes an ExpectedException (it must be passed into the Row constructor as a named parameter) such as:
[Row(-3, -3, 0, ExpectedException = typeof(Exception))]
In this way, you can check for failure at the row level. However, I still believe that the two sets should be separated, if for no other reason than the name. The failing test should be named along the lines of Should_Fail_With_Negative_Operands to be clearer.
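A sketch of what that separated Test might look like under the new requirement (the specific exception type, ICalculator, and CreateSUT() are assumptions):

```csharp
[RowTest]
[Row(3, -2, ExpectedException = typeof(ArgumentOutOfRangeException))]
[Row(-2, 3, ExpectedException = typeof(ArgumentOutOfRangeException))]
[Row(-1, -1, ExpectedException = typeof(ArgumentOutOfRangeException))]
public void Should_Fail_With_Negative_Operands(int a, int b)
{
    ICalculator sut = CreateSUT();
    // Each row passes only if this call throws the expected exception
    sut.Add(a, b);
}
```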
The ExpectedException Attribute
Although we touched on this attribute in the RowTest example, ExpectedException is also a stand-alone attribute. When we tag a test with [ExpectedException(typeof(SomeException))], we are telling the Test Runner that we want the Test to fail unless we get that exception. This is an important test to cover for both Use Case and Code Coverage.
So, what exception should we code in the attribute? Similar to good coding practices for exception handling, you should never catch something as general as Exception. The Test should be tagged with the most specific exception possible to ensure accurate Test coverage.
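A minimal sketch of the stand-alone attribute with a specific exception (the Divide method on the hypothetical ICalculator is an assumption):

```csharp
[Test]
[ExpectedException(typeof(DivideByZeroException))]
public void Should_Fail_When_Dividing_By_Zero()
{
    ICalculator sut = CreateSUT();
    // The Test passes only if this throws DivideByZeroException
    sut.Divide(6, 0);
}
```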
Other Useful Attributes
- Ignore - applied at either the Test or the TestFixture level; causes the Test/TestFixture to be ignored by the Test console/GUI
- TestsOn - allows grouping by Type in the GUI
- Author - Tags the TestFixture with the author of the tests
- FixtureCategory - Tags the TestFixture with one or more category values
- SqlRestoreInfo/RestoreDatabaseFirst - Sets the connection string and backup file location for a SQL Server, then restores the database prior to executing any Tests with the RestoreDatabaseFirst attribute. While entirely cool, I'm not convinced of the usefulness of these, unless the database is small and the restore time is extremely fast.
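A sketch of how several of these attributes might combine on one fixture (the class names, author details, and category value are illustrative):

```csharp
using MbUnit.Framework;

[TestFixture]
[TestsOn(typeof(Calculator))]
[Author("Dave", "dave@example.com")]
[FixtureCategory("Math")]
public class CalculatorTests
{
    [Test]
    [Ignore("Waiting on the new requirements")]
    public void Should_Subtract_Two_Integers()
    {
        // This Test is skipped by the console/GUI runners
    }
}
```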
We've covered a lot of the basics here. Hopefully this gets you well on your way to unit testing. Next post we'll discuss custom TestFixtures and Code Coverage.