As I have been promising for some time (quite literally 5 years (!), I am ashamed to admit) I am finally unclenching and releasing the Smoketest framework into the wild, ready or not. The code is published and will continue to be updated in a GitHub repository.
Documentation is still a little scant at the moment, but there is a quite comprehensive Getting Started guide in the wiki of that GitHub project, though it is primarily concerned with the libs project as a whole rather than with Smoketest specifically.
So, to get you started with Smoketest, here is a very quick overview.
Creating a Smoketest Project
Smoketest lacks any IDE support for producing test projects or test cases at this stage, so unfortunately it is a manual process. Fortunately it isn’t very arduous.
This is a template, skeleton Smoketest project:
```pascal
program MyTestProject;

uses
  Deltics.Smoketest;

begin
  Smoketest.Ready;
end.
```
GUI or CONSOLE ?
You do not need to worry about whether to specify a “GUI runner” or a “console runner”. Smoketest takes care of initialising itself appropriately for either GUI or CONSOLE use according to the project being compiled.
All you have to do is add the {$APPTYPE CONSOLE} directive to the project.
But wait!
You don’t even need to do that if you don’t want to disrupt the DPR. Just add a CONSOLE conditional define to the project and the deltics.inc include file will take care of the $APPTYPE declaration for you (I know, I was surprised myself when I discovered this worked even if not $included in the DPR).
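For illustration, the mechanism boils down to something like the following conditional block. This is a simplified sketch of the idea, not the actual contents of deltics.inc:

```pascal
// Simplified sketch only (not the actual deltics.inc contents):
// if the CONSOLE define is set in the project options, emit the
// $APPTYPE directive so the compiler produces a console application.
{$ifdef CONSOLE}
  {$APPTYPE CONSOLE}
{$endif}
```

Because the define is evaluated at compile time, the same DPR can produce either a GUI or a console build with no source changes.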
If you have a CI environment, you can simply configure the build of the test project to produce a console app (either with the specific compiler switch or, again, by specifying the CONSOLE conditional define in the compiler options). Your developers can continue to work with the Smoketest GUI when running the project under the IDE, with the same DPR producing the automation-friendly console build.
So much for the test project. If you run this you get nothing but an empty test which isn’t much use. We need to add some tests.
Your First Test Case
Test cases are similarly very simple to create. This is a skeleton Smoketest test case, implemented in its own unit:
```pascal
unit MyTestCase;

interface

uses
  Deltics.Smoketest;

type
  TMyFirstTestCase = class(TTestCase);

implementation

initialization
  Smoketest.Add([TMyFirstTestCase]);
end.
```
By adding the test case to Smoketest in the unit initialization, the test project itself only needs to use the unit in order to add the test case to the test project:
```pascal
program MyTestProject;

uses
  Deltics.Smoketest,
  MyTestCase in 'MyTestCase.pas';

begin
  Smoketest.Ready;
end.
```
Running the project now we see our test project starting to come together:
Smoketest prettifies our test case name and assigns it a reference number.
NOTE: The reference number is derived from the position of the case in the project so is not guaranteed to be fixed as your test project evolves.
If you wish to provide an even more meaningful name for the case you can do so by declaring your intent to provide that name, in the form of an interface on the test case. This can be useful if a test case contains tests for a complete unit and you wish to name the test case after that unit (including any peculiar capitalisation, dots in the name, etc).
Also note that the test case is highlighted in red and has a greyed icon as a warning that it contains no tests.
Let’s add a custom name and also a first test method:
```pascal
TMyFirstTestCase = class(TTestCase, INameCase)
private
  function NameForCase: UnicodeString;
published
  procedure MyFirstTest;
end;
```
The UnicodeString type is supported by Smoketest in all versions of Delphi, right back to Delphi 7. For pre-Unicode versions of Delphi it is declared in Deltics.Strings and is aliased in Deltics.Smoketest for convenience.
NameForCase is the only method that the INameCase interface requires. Being an interface method it doesn’t need any particular visibility on the class itself.
Any test methods, on the other hand, are required to be published since they are discovered by Smoketest using RTTI. Test methods are parameterless procedures.
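For example, a test case exercising a string utility unit might be declared as follows (the class and method names here are hypothetical, invented purely for illustration):

```pascal
type
  TStringUtilsTests = class(TTestCase)
  published
    // Discovered automatically by Smoketest via RTTI:
    // published, parameterless procedures
    procedure TrimRemovesLeadingAndTrailingSpaces;
    procedure UpperCaseHandlesEmptyString;
  end;
```

Note that no registration of individual methods is needed; publishing them is enough.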
We shall come to writing the test method itself shortly. For now we’ll just implement the NameForCase method and leave a stub for the test method:
```pascal
function TMyFirstTestCase.NameForCase: UnicodeString;
begin
  result := 'First EVER Test Case!';
end;

procedure TMyFirstTestCase.MyFirstTest;
begin
end;
```
Now if we run the project we see our newly named test case and (if we double click the case to expand it) the test method within it:
I already covered the basics of writing tests in a previous post, using strings as the test subject, so for this exercise let’s look at some of the other tests supported by the framework.
These will be pretty useless tests but are intended only to show how to write tests, not necessarily how to write good ones. :-)
TObject of the Exercise
First, let’s make sure that we have a real test by testing that the test case really exists!
```pascal
procedure TMyFirstTestCase.MyFirstTest;
begin
  Test.Expect(self).IsAssigned;
end;
```
If we run the project and execute the test (the “play” button in the toolbar) we should find (to our enormous relief, but hopefully not surprise) that the test case is assigned.
Since I didn’t specify a label for the test, the expectation has provided what it thinks will be a useful default. Usually this will be a string representation of the value being tested (in this case self). The object expectation uses the class name and the value of the reference. If I label the test, my label will be used instead:
```pascal
Test('Test Case').Expect(self).IsAssigned;
```
To avoid having to always expand tests and start them executing, command line parameters can be specified. When running in the IDE, simply add these to the Run Parameters options for the project:
| Option | Action |
|---|---|
| -xa | Expands all test cases |
| -r | Automatically runs the tests |
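So, assuming the MyTestProject executable from the skeleton project above, a fully automated run could be launched from a script or CI step with something like:

```
MyTestProject.exe -xa -r
```

In the IDE the same switches go in the Run Parameters dialog, so the tests expand and run as soon as the project starts.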
There are others which I will cover another time.
The Importance of Good Test Results
Very often you will have some test that – should it fail – is bad news for the rest of your test process. If you fail to obtain an expected reference, for example, pressing ahead with further tests that use that reference is just going to lead to predictable failures. You might as well abort the test right there and then.
With Smoketest you can express this directly in your test logic to various levels of importance:
```pascal
Test('Test Case').Expect(self).IsAssigned.IsRequired;
By adding “IsRequired” to the end of the test, if the test fails then the test method halts and is marked as aborted, and the test case moves on to the next test method.
The next level of importance is indicated by “IsCritical“. If this is specified and the test fails, the entire test case is aborted. No further tests in that test case will be executed and the run will move on to the next test case in the project.
The final level of importance is “IsShowStopper“. With this level of importance, if the test fails then the entire test project is halted.
You can experiment with these different techniques in your test project by constructing a test that will deliberately fail, for example by testing Expect(NIL).IsAssigned.
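Putting these together, a sketch of a test method exercising the importance levels might look like this (the test labels are invented for illustration):

```pascal
procedure TMyFirstTestCase.MyFirstTest;
begin
  // If this fails, abort this test method only and move on
  // to the next test method in the case
  Test('Test Case').Expect(self).IsAssigned.IsRequired;

  // A deliberately failing test: NIL is never assigned, so with
  // IsCritical the remainder of this test case would be aborted
  Test('Deliberate failure').Expect(NIL).IsAssigned.IsCritical;
end;
```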
You can also explore the various test expectations supported for various types of value.
Or you can wait and read along as I post about these in forthcoming blog posts and update (Ok, create) the documentation on the repository wiki.
Just the FAQs, Ma’am
You will also no doubt find a few areas which don’t seem to work either at all or as intended. The most obvious ones I think will be:
- “The stop button does not appear to stop the tests when running.” Yes, this broke some time ago and I haven’t yet gotten around to fixing it. My tests don’t run long enough for it to be a big enough problem as yet.
- “What are the two additional columns in the GUI list view for?” They are intended to show some basic performance data for performance tests. I haven’t discussed those yet, so just ignore those columns for now.
- “How do I get test results when running the console version?” The simple answer is that command line parameters can be used to direct the console runner (and the GUI) to output an XML file of results containing either all test results or just the failures. I’ll cover that next time.
- “How do I perform setup and tear-down/clean-up for my tests or test methods?” This is actually very straightforward and will also be covered in my next Smoketest post.
Looks promising!
This may be a silly question that you covered in an earlier blog post. What are the advantages of this testing framework over, say, DUnit or DUnitX? Ie, what are the “selling points”?
I shall cover this in a bit more detail once I’ve figured out what they are myself. :-)
But overall, the aim with Smoketest was to create something specifically for Delphi rather than taking something originally created for Java then adapted to .NET then adapted to Delphi.
One example is the explicit need to invoke either a GUI Runner or a Console Runner, something which shouldn’t be necessary but which is a vestigial requirement in DUnit, inherited from the *Unit legacy.
This is also reflected in the much “cleaner” environment when writing tests. Instead of relying on inherited methods for various aspects of the framework which then end up polluting the test cases (Setup/Teardown etc) I lean much more heavily on interfaces and use these to create contextual “shifts” for the test author.
I also wanted to be able to write tests more expressively so that instead of having to describe what is being tested, have the test describe itself (reducing the opportunity to make mistakes in the tests themselves).
Finally, a framework for automatically running code is also ideally suited to conduct benchmarking tests, another aspect of testing but somewhat different from simple “correctness”. Smoketest incorporates performance test cases with different capabilities than correctness test cases.
What is the difference between your smoketest and the Spring Framework ?
If you mean the Java Spring framework, the difference is that Smoketest is specifically for writing test code and is not a general purpose application framework. Was there some other Spring framework that I am not aware of that is more connected with testing?
thx 4 yr wrk
why not integrate in Delphi Spring?
http://www.spring4d.org/
First of all, thanks for clarifying – now I know what Pio was talking about. :-)
Two reasons for not integrating with Spring4D.
1. First, Smoketest is an alternative to DUnit, not a general purpose framework. DUnit isn’t part of Spring either. :-)
2. Second, and more importantly, like it or not there are still a lot of people on older versions of Delphi. Spring4D requires the use of Delphi 2010. All my code supports Delphi versions as far back as Delphi 7. This may change as time goes on, but right now even if I dropped support for Delphi 7, I would still want to support non-Unicode versions of Delphi which means up to and including Delphi 2007 but still rules out Delphi 2010 as a minimum requirement.
Oh, and a 3rd reason: I wasn’t aware of Spring4D in the first place. :-)
sorry, my fault here. Amanda is right. That is what I was speaking about.
Your testing framework looks interesting! Kudos!
“I also wanted to be able to write tests more expressively so that instead of having to describe what is being tested, have the test describe itself (reducing the opportunity to make mistakes in the tests themselves).”
Could you expand on how a test can describe itself besides using an interface to give it a name?
Btw, how about optionally supporting attributes so you could annotate tests to describe themselves? (Yes, only people using newer Delphi versions could use that, but it shouldn’t damage compatibility for those who don’t. Just a suggestion.)
Thanks for finally releasing this. I have a faint memory that you mentioned once that this framework can be used to test performance. Is it still the case?
Yes, this is the same framework. The basic performance testing capabilities are still there but are an area that still need a bit of work to be really useful (plus some recent enhancements have some wrinkles that need ironing out). Once it is more “race fit”, I shall be covering it in the blog.
:-)