Testing is the process of writing and running small pieces of code to verify that your software behaves as intended. Effective testing automates a process you've already performed countless times: write some code, run it, and see that it works. This automation is essential. Rather than relying on humans to perform repeated manual checks perfectly, let the computer do it.
Perl 5 provides great tools to help you write the right tests.
Perl testing begins with the core module Test::More and its ok() function. ok() takes two parameters, a boolean value and a string which describes the test's purpose:
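A minimal test file might look like this sketch; the assertion descriptions are invented for illustration, and the second and third assertions fail deliberately, to show how ok() reports both outcomes:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

# Each ok() call records one assertion: a boolean value and a description.
ok(     1, 'the number one should be true'         );
ok(     0, '... and the number zero should not'    );  # fails: 0 is false
ok(    '', 'the empty string should be false'      );  # fails: '' is false
ok( '0.0', '... and a non-empty string should not' );  # passes: '0.0' is true

done_testing();
```
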
Any condition you can test in your program can eventually become a binary value. Every test assertion is a simple question with a yes or no answer: does this tiny piece of code work as I intended? A complex program may have thousands of individual conditions, and, in general, the smaller the granularity the better. Isolating specific behaviors into individual assertions lets you narrow down bugs and misunderstandings, especially as you modify the code in the future.
The function done_testing() tells Test::More that the program has successfully executed all of the expected testing assertions. If the program encountered a runtime exception or otherwise exited unexpectedly before the call to done_testing(), the test framework will notify you that something went wrong. Without a mechanism like done_testing(), how would you know? Admittedly this example code is too simple to fail, but code that's too simple to fail fails far more often than anyone would expect.
The resulting program is now a full-fledged Perl 5 program which produces the output:
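For a test file with four ok() assertions in which the second and third fail, the TAP output looks similar to this (the descriptions, file name, and line numbers are placeholders):

```
ok 1 - the number one should be true
not ok 2 - ... and the number zero should not
#   Failed test '... and the number zero should not'
#   at t/ok.t line 7.
not ok 3 - the empty string should be false
#   Failed test 'the empty string should be false'
#   at t/ok.t line 8.
ok 4 - ... and a non-empty string should not
1..4
# Looks like you failed 2 tests of 4.
```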
This format adheres to a standard of test output called TAP, the Test Anything Protocol (http://testanything.org/). Failed TAP tests produce diagnostic messages as a debugging aid.
The output of a test file containing multiple assertions (especially multiple failed assertions) can be verbose. In most cases, you want to know either that everything passed or the specifics of any failures. The core module Test::Harness interprets TAP, and its related program prove runs tests and displays only the most pertinent information:
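Running the failing file under prove (the file name is hypothetical) produces a summary resembling:

```
$ prove t/ok.t
t/ok.t .. 1/?
#   Failed test '... and the number zero should not'
#   at t/ok.t line 7.
#   Failed test 'the empty string should be false'
#   at t/ok.t line 8.
# Looks like you failed 2 tests of 4.
t/ok.t .. Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/4 subtests

Test Summary Report
-------------------
t/ok.t (Wstat: 512 Tests: 4 Failed: 2)
  Failed tests:  2-3
Result: FAIL
```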
That's a lot of output to display what is already obvious: the second and third tests fail because zero and the empty string evaluate to false. It's easy to fix that failure by inverting the sense of the condition with the use of boolean coercion (boolean_coercion):
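Inverting the sense of those conditions with the boolean not operator (!) might look like this sketch; ! coerces its operand to a boolean and negates it, so !0 and !'' are both true:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

ok(    1, 'the number one should be true'         );
ok(  ! 0, '... and the number zero should not'    );  # !0  coerces to true
ok( ! '', 'the empty string should be false'      );  # !'' coerces to true
ok( '0.0', '... and a non-empty string should not' );

done_testing();
```
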
With those two changes, prove now displays:
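With every assertion passing, prove's summary shrinks to something like:

```
$ prove t/ok.t
t/ok.t .. ok
All tests successful.
Files=1, Tests=4
Result: PASS
```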
Even though the heart of all automated testing is the boolean condition "is this true or false?", reducing everything to that boolean condition is tedious and offers few diagnostic possibilities. Test::More provides several other convenient assertion functions.
The is() function compares two values using the eq operator. If the values are equal, the test passes. Otherwise, the test fails with a diagnostic message:
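For example (the descriptions are invented for illustration, and the second assertion fails deliberately):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

is( 4,         2 + 2, 'addition should hold steady across the universe' );
is( 'pancake', 100,   'pancakes should have a delicious numeric value'  );

done_testing();
```
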
As you might expect, the first test passes and the second fails:
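Assuming a failing assertion such as is( 'pancake', 100, '...' ), the diagnostic looks similar to this (file name and line number are placeholders):

```
ok 1 - addition should hold steady across the universe
not ok 2 - pancakes should have a delicious numeric value
#   Failed test 'pancakes should have a delicious numeric value'
#   at t/is.t line 8.
#          got: 'pancake'
#     expected: '100'
```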
Where ok() only provides the line number of the failing test, is() displays the expected and received values.
is() applies implicit scalar context to its values (prototypes). This means, for example, that you can check the number of elements in an array without explicitly evaluating the array in scalar context:
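A sketch, with a hypothetical list of names; because is() imposes scalar context, @cousins evaluates to its element count:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

# Hypothetical data; in scalar context, @cousins is its element count.
my @cousins = qw( Rick Kristen Alex Kaycee Eric Corey );
is( @cousins, 6, 'I should have only six cousins' );

done_testing();
```
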
... though some people prefer to write scalar @cousins for the sake of clarity.
Test::More's corresponding isnt() function compares two values using the ne operator, and passes if they are not equal. It also provides scalar context to its operands.
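A small illustration, with values invented for the example:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

my $dinner = 'pizza';
isnt( $dinner, 'brussels sprouts', 'dinner should be edible' );

done_testing();
```
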
Both is() and isnt() apply string comparisons with the Perl 5 operators eq and ne. This almost always does the right thing, but for complex values such as objects with overloading (overloading) or dual vars (dualvars), you may prefer explicit comparison testing. The cmp_ok() function allows you to specify your own comparison operator:
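For example (the variable and values are hypothetical); the third argument is any binary operator Perl understands:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

my $current_balance = 150;   # hypothetical value
cmp_ok( $current_balance, '>=', 100, 'should have at least $100'       );
cmp_ok( 7,                '==', 7.0, 'numeric comparison ignores form' );

done_testing();
```
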
Classes and objects provide their own interesting ways to interact with tests. Test that a class or object extends another class (inheritance) with isa_ok():
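A self-contained sketch; the Robot and RobotMonkey classes here are minimal stand-ins defined inline so the example runs on its own:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

# Minimal, hypothetical classes so this example is self-contained.
package Robot;
sub new { bless {}, shift }

package RobotMonkey;
our @ISA = ( 'Robot' );

package main;

my $chimpzilla = RobotMonkey->new;

isa_ok( $chimpzilla, 'RobotMonkey' );
isa_ok( $chimpzilla, 'Robot' );

done_testing();
```
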
isa_ok() provides its own diagnostic message on failure.
can_ok() verifies that a class or object can perform the requested method (or methods):
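Another self-contained sketch; the class and its methods are hypothetical, defined inline so the example runs on its own:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

# A hypothetical class providing the methods under test.
package RobotMonkey;
sub new          { bless {}, shift }
sub eat_banana   { 'om nom nom' }
sub breathe_fire { 'whoosh' }

package main;

my $chimpzilla = RobotMonkey->new;

can_ok( $chimpzilla, 'eat_banana' );
can_ok( $chimpzilla, 'breathe_fire' );

done_testing();
```
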
The is_deeply() function compares two references to ensure that their contents are equal:
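A sketch with hypothetical data; the two structures are built independently but hold identical contents, so the assertion passes:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

# Two independently built structures with identical contents.
my $expected = { name => 'Chimpzilla', limbs => [qw( arm arm leg leg tail )] };
my $actual   = { name => 'Chimpzilla', limbs => [qw( arm arm leg leg tail )] };

is_deeply( $actual, $expected, 'monkey characteristics should match' );

done_testing();
```
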
If the comparison fails, Test::More will do its best to provide a reasonable diagnostic indicating the position of the first inequality between the structures. See the CPAN modules Test::Differences and Test::Deep for more configurable tests.
Test::More has several more test functions, but these are the most useful.
CPAN distributions should include a t/ directory containing one or more test files named with the .t suffix. By default, when you build a distribution with Module::Build or ExtUtils::MakeMaker, the testing step runs all of the t/*.t files, summarizes their output, and succeeds or fails on the results of the test suite as a whole. There are no concrete guidelines on how to manage the contents of individual .t files, though two strategies are popular:
Each .t file should correspond to a .pm file
Each .t file should correspond to a feature
A hybrid approach is the most flexible; one test can verify that all of your modules compile, while other tests verify that each module behaves as intended. As distributions grow larger, the utility of managing tests in terms of features becomes more compelling; larger test files are more difficult to maintain.
Separate test files can also speed up development. If you're adding the ability to breathe fire to your RobotMonkey, you may want only to run the t/breathe_fire.t test file. When you have the feature working to your satisfaction, run the entire test suite to verify that local changes have no unintended global effects.
Test::More relies on a testing backend known as Test::Builder. The latter module manages the test plan and coordinates the test output into TAP. This design allows multiple test modules to share the same Test::Builder backend. Consequently, the CPAN has hundreds of test modules available--and they can all work together in the same program.
Test::Fatal helps test that your code throws (and does not throw) exceptions appropriately. You may also encounter Test::Exception.

Test::MockObject and Test::MockModule allow you to test difficult interfaces by mocking (emulating but producing different results).

Test::WWW::Mechanize helps test web applications, while Plack::Test, Plack::Test::Agent, and the subclass Test::WWW::Mechanize::PSGI can do so without using an external live web server.

Test::Database provides functions to test the use and abuse of databases. DBICx::TestDatabase helps test schemas built with DBIx::Class.

Test::Class offers an alternate mechanism for organizing test suites. It allows you to create classes in which specific methods group tests. You can inherit from test classes just as your code classes inherit from each other. This is an excellent way to reduce duplication in test suites. See Curtis Poe's excellent Test::Class series at http://www.modernperlbooks.com/mt/2009/03/organizing-test-suites-with-testclass.html. The newer Test::Routine distribution offers similar possibilities through the use of Moose (moose).

Test::Differences tests strings and data structures for equality and displays any differences in its diagnostics. Test::LongString adds similar assertions.

Test::Deep tests the equivalence of nested data structures (nested_data_structures).

Devel::Cover analyzes the execution of your test suite to report on how much of your code your tests actually exercise. In general, the more coverage the better--though 100% coverage is not always possible, 95% is far better than 80%.
See the Perl QA project (http://qa.perl.org/) for more information about testing in Perl.