Testing

Testing Unicon

Unit testing is an important part of software development; the more tests, the better. Testing can be ad hoc, and that ad hoc form is probably still the most prevalent: a developer writes some code, then runs the program to see if the new feature behaves as expected. This helps ensure the current work is acceptable and catches typos and the like, but it is not very reproducible and becomes a linear time burden; each testing pass takes time the programmer has to set aside.

Wisdom dictates that it is smarter to automate some of that testing burden: spend a little extra time during development to create reproducible test cases. Written once, these tests can be evaluated many times, building continual confidence in the software and retesting older work to ensure that new work does not break previous assumptions or expectations.
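
Even before reaching for a framework, a reproducible test can be as small as comparing an actual result against an expected one. A minimal Unicon sketch of the idea (the check procedure is a hypothetical helper written just for this illustration, not from any library):

#
# check.icn, a minimal reproducible test, no framework
#
procedure main()
    check("add", add(3, 4), 7)
    check("add negative", add(-2, 0), -2)
end

procedure add(x, y)
    return x + y
end

# report pass or fail for one expected result
# (check is a hypothetical helper, written for this sketch)
procedure check(name, actual, expected)
    if actual === expected then
        write("PASS: ", name)
    else
        write("FAIL: ", name, ": expected ", expected, ", got ", actual)
end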


Unit testing

There are a few de facto standard unit testing frameworks, TAP and xUnit to name two, and orders of magnitude more engines that run tests and produce output compatible with those frameworks: cUnit, check, and jUnit, to name just a few from a long list. Some programming languages have unit testing baked into the design of the syntax; the D programming language, for instance:

class Sum
{
    int add(int x, int y) { return x + y; }

    unittest
    {
        Sum sum = new Sum;
        assert(sum.add(3,4) == 7);
        assert(sum.add(-2,0) == -2);
    }
}

The D compilers support a -unittest command-line option that sets up special compiles for running the unittest blocks.
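
A typical invocation might look like the following (assuming the class above is saved as sum.d; -main supplies an empty entry point so that only the unittest blocks run):

prompt$ dmd -unittest -main -run sum.d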

unitest

Now Unicon gets unitest, an engine that can assist with Unicon unit testing. Note the single t: uni-test. Yet another unit testing engine. unitest follows the xUnit framework specification by default[1].

#
# unitest.icn, Unicon unit testing
#
link fullimag, lists

# test suite container, and testresults aggregate
record testcontrol(testname, speaktest, looplimit, xmlout, testcases)
record testresults(control, trials, skips, 
                   errors, pass, fails, breaks)
global testlabel

procedure testsuite(testname, speak, looplimit, xmlout)
    local control, suite
    testlabel := create ("test-" || seq())
    control := testcontrol(testname, speak, looplimit, xmlout, [])
    suite := testresults(control, 0, 0, 0, 0, 0, 0)
    return suite
end

#
# single result testing
#
procedure test(suite, code, arglist, result, output, error)
    local task, r
    suite.trials +:= 1
    put(suite.control.testcases, [@testlabel, 0, &null])

    if suite.control.speaktest > 0 then {
        write(repl("#", 18), " Test: ", right(suite.trials, 4), " ",
              repl("#", 18))
        writes(image(code))
        if \arglist then write("!", fullimage(arglist)) else write()
        if \result then write("Expecting: ", result)
    }

    case type(code) of {
        "string"    : task := uval(code)
        "procedure" : task := create code!arglist
        default     : testFailure("Unknown code type: " || type(code))
    }

    if \task then {
        suite.control.testcases[suite.trials][2] := gettimeofday()
        # fetch a result
        r := @task
        if \result then
            if r === result then pass(suite) else fails(suite)
        else pass(suite)
        suite.control.testcases[suite.trials][2] := 
            bigtime(suite.control.testcases[suite.trials][2])
    } else errors(suite)

    if suite.control.speaktest > 0 then {
        if \result then write("Received: ", type(r), ", ", image(r))
        write("Trials: ", suite.trials, " Errors: ", suite.errors,
              "  Pass: ", suite.pass, " Fail: ", suite.fails)
        write(repl("#", 48), "\n")
    }
end

#
# record a pass
#
procedure pass(suite)
    suite.pass +:= 1
end

#
# record a fail
#
procedure fails(suite)
    suite.fails +:= 1
    suite.control.testcases[suite.trials][3] := 1
end
 
#
# record an error
#
procedure errors(suite)
    suite.errors +:= 1
    suite.control.testcases[suite.trials][3] := 2
end

#
# report, summary and possibly XML
#
procedure testreport(suite)
    write("Trials: ", suite.trials, " Errors: ", suite.errors,
          "  Pass: ", suite.pass, " Fail: ", suite.fails)
    write()

    if suite.control.xmlout > 0 then {
        write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>")
        write("<testsuite name=\"", suite.control.testname,
              "\" tests=\"", suite.trials,
              "\" errors=\"", suite.errors, "\" failures=\"", suite.fails,
              "\" skip=\"", suite.skips, "\">")
        every testcase := !suite.control.testcases do {
            write("    <testcase classname=\"", &progname,
                  "\" name=\"", testcase[1], "\" time=\"", testcase[2], "\">")
            if \testcase[3] = 1 then {
                write("        <failure type=\"unitest\"> unitest failure </failure")
            }
            if \testcase[3] = 2 then {
                write("        <error type=\"unicon\" message=\"code error\">")
                write("            CodeError: code problem")
                write("        </error>")
            }
            write("    </testcase>")
        }
        write("</testsuite>")
    }
end

#
# Multiple result testing
#
procedure tests(suite, code, arglist, result, output, error)
    local task, resultList, loops
    suite.trials +:= 1
    put(suite.control.testcases, [@testlabel, 0, &null])
    if suite.control.speaktest > 0 then {
        write(repl("#", 8), " Generator test: ", right(suite.trials, 4),
              " ", repl("#", 18))
        writes(image(code))
        if \arglist then write("!", fullimage(arglist)) else write()
        if \result then write("Expecting: ", limage(result))
    }

    case type(code) of {
        "string"    : task := uvalGenerator(code)
        "procedure" : task := create code!arglist
        default     : testFailure("Unknown code type: " || type(code))
    }

    resultList := list()
    loops := 0
    if \task then {
        suite.control.testcases[suite.trials][2] :=
            gettimeofday()
        # fetch a result list
        while put(resultList, @task) do {
            loops +:= 1
            if loops > suite.control.looplimit > 0 then {
                suite.breaks +:= 1
                pull(resultList)
                break &null
                # should limiter breaks ever count as a pass?  todo
            }
        }
        if \result then
            if lequiv(resultList, result) then pass(suite) else fails(suite)
        else pass(suite)
        suite.control.testcases[suite.trials][2] := 
            bigtime(suite.control.testcases[suite.trials][2])
    } else errors(suite)

    if suite.control.speaktest > 0 then {
        if \result then write("Received: ", limage(resultList))
        write("Trials: ", suite.trials, "  Errors: ", suite.errors,
              "  Limits: ", suite.breaks,
              "  Pass: ", suite.pass, " Fail: ", suite.fails)
        write(repl("#", 48), "\n")
    }
end

#
# timer calculation
#
procedure bigtime(timer)
    local now, secs, usecs
    now := gettimeofday()
    secs := now.sec - timer.sec
    usecs := now.usec - timer.usec
    return secs * 1000000 + usecs
end

#
# usage failure
#
procedure testFailure(s)
    write(&errout, s)
end

#
# uval.icn, an eval function
#
# Author: Brian Tiffin
# Dedicated to the public domain
#
# Date: September 2016
# Modified: 2016-09-17/14:48-0400
#
$define base "/tmp/child-xyzzy"

link ximage

#
# try an evaluation
#
procedure uval(code)
    program := "# temporary file for unitest eval, purge at will\n_
        procedure main()\n" || code || "\nreturn\nend"
    return eval(program)
end

#
# try a generator
#
procedure uvalGenerator(code)
    program := "# temporary file for unitest eval, purge at will\n_
        procedure main()\n" || code || "\nend"
    return eval(program)
end

#
# eval, given string (either code or filename with isfile)
#
procedure eval(s, isfile)
    local f, codefile, code, coex, status, child, result

    if \isfile then {
        f := open(s, "r") | fail
        code := ""
        while code ||:= read(f) || "\n"
        close(f)
    } else code := s

    # compile and load the code
    codefile := open(base || ".icn", "w") | fail
    write(codefile, code)
    close(codefile)

    status := system("unicon -s -o " || base  || " " ||
                     base || ".icn 2>/dev/null")

    # task can have io redirection here for stdout compares...
    if status = 0 then coex := load(base)

    remove(base || ".icn")
    remove(base)
    return coex
end

programs/unitest/unitest.icn

This is currently a work in progress. Support for other framework integrations and extended capabilities is in the work plan.

unitest can be used in two ways. In source, which requires a couple of Unicon preprocessor lines to define two different main procedures depending on a compile-time define (demonstrated in tested.icn below). Or it can be used to load one or more secondary programs into the Unicon multi-tasking virtual machine space and act as a monitoring command and control utility.

Todo

Monitoring test mode is not yet ready for prime time; nor, for that matter, is xUnit compatibility actually finished.

In both of these modes, unit testing can be done with on the fly expression compiles, given as strings, or with more conventional procedure testing of the module under test.
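
As a minimal sketch of both forms against one suite (assuming unitest.icn is compiled alongside, as in the example pass below; the double procedure is hypothetical, standing in for a module under test):

#
# forms.icn, both testing forms against one suite
#
link unitest

procedure main()
    # name, speaktest, looplimit, xmlout
    suite := testsuite("forms", 0, 100, 0)
    test(suite, "return 1 + 2",, 3)    # on the fly expression compile
    test(suite, double, [21], 42)      # conventional procedure test
    testreport(suite)
end

# a stand-in procedure under test (hypothetical)
procedure double(v)
    return v * 2
end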

[1] Fibbing: release 0.6 of unitest.icn is not yet xUnit compatible.

Character escapes

Any expression string passed to test carries a burden of backslash escaping on the part of the test writer. To test:

generator(arg)\1

the string needs to be passed as:

test("generator(arg)\\1")

That is due to Unicon string compilation: certain characters need to be protected inside string literals. For the most part, test writers will need to protect:

' quote
" double quote
\ backslash
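
Double quotes follow the same pattern. To test:

write("hello")

the string needs to be passed as:

test(suite, "write(\"hello\")")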

An example of in-source test expressions:

#
# tested.icn, Unit testing, in source
#

$ifndef UNITEST
#
# unit testing example
#
procedure main()
    write("Compile with -DUNITEST for the demonstration")
end

$else
link unitest
#
# unit test trial
#
procedure main()
    speaktest := 0
    xmlout := 1
    looplimit := 100
    suite := testsuite("unitest", speaktest, looplimit, xmlout)

    test(suite, "1 + 2")
    test(suite, "xyzzy ::=+ 1")
    test(suite, "delay(1000)")
    test(suite, "return 1 + 2",, 3)
    test(suite, "return 1 + 2",, 0)
    test(suite, "return write(1 + 2)",, 3, "3\n")
    test(suite, internal, [21], 42)

    testreport(suite)

    speaktest := 1
    # set up for loop limit break testing
    looplimit := 3
    suite := testsuite("unitest generators", speaktest, looplimit, xmlout)

    tests(suite, "suspend 1 to 3",, [1,2,3])
    tests(suite, "syntaxerror 1 to 3",, [1,2,3])
    tests(suite, "suspend seq()\\4",, [1,2,3,4])
    tests(suite, generator, [1,3], [1,2,3])
    tests(suite, generator, [1,2], [1,2,3])

    testreport(suite)
end
$endif

#
# some internal procedure tests
# todo: need some way of handling the arguments
#
procedure internal(v)
    return v * 2
end

procedure generator(low, high)
    suspend low to high
end

programs/unitest/tested.icn

Example testing pass:

prompt$ unicon -s -c unitest.icn
prompt$ unicon -s tested.icn -x
Compile with -DUNITEST for the demonstration
prompt$ unicon -s -DUNITEST tested.icn -x
3
Trials: 7 Errors: 1  Pass: 5 Fail: 1

<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="unitest" tests="7" errors="1" failures="1" skip="0">
    <testcase classname="tested" name="test-1" time="20">
    </testcase>
    <testcase classname="tested" name="test-2" time="0">
        <error type="unicon" message="code error">
            CodeError: code problem
        </error>
    </testcase>
    <testcase classname="tested" name="test-3" time="1001078">
    </testcase>
    <testcase classname="tested" name="test-4" time="9">
    </testcase>
    <testcase classname="tested" name="test-5" time="19">
        <failure type="unitest"> unitest failure </failure>
    </testcase>
    <testcase classname="tested" name="test-6" time="22">
    </testcase>
    <testcase classname="tested" name="test-7" time="7">
    </testcase>
</testsuite>
######## Generator test:    1 ##################
"suspend 1 to 3"
Expecting: [1,2,3]
Received: [1,2,3]
Trials: 1  Errors: 0  Limits: 0  Pass: 1 Fail: 0
################################################

######## Generator test:    2 ##################
"syntaxerror 1 to 3"
Expecting: [1,2,3]
Received: []
Trials: 2  Errors: 1  Limits: 0  Pass: 1 Fail: 0
################################################

######## Generator test:    3 ##################
"suspend seq()\\4"
Expecting: [1,2,3,4]
Received: [1,2,3]
Trials: 3  Errors: 1  Limits: 1  Pass: 1 Fail: 1
################################################

######## Generator test:    4 ##################
procedure generator![1,3]
Expecting: [1,2,3]
Received: [1,2,3]
Trials: 4  Errors: 1  Limits: 1  Pass: 2 Fail: 1
################################################

######## Generator test:    5 ##################
procedure generator![1,2]
Expecting: [1,2,3]
Received: [1,2]
Trials: 5  Errors: 1  Limits: 1  Pass: 2 Fail: 2
################################################

Trials: 5 Errors: 1  Pass: 2 Fail: 2

<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="unitest generators" tests="5" errors="1" failures="2" skip="0">
    <testcase classname="tested" name="test-1" time="19">
    </testcase>
    <testcase classname="tested" name="test-2" time="0">
        <error type="unicon" message="code error">
            CodeError: code problem
        </error>
    </testcase>
    <testcase classname="tested" name="test-3" time="36">
        <failure type="unitest"> unitest failure </failure>
    </testcase>
    <testcase classname="tested" name="test-4" time="32">
    </testcase>
    <testcase classname="tested" name="test-5" time="24">
        <failure type="unitest"> unitest failure </failure>
    </testcase>
</testsuite>

Test Assisted Development

The author of unitest is not actually a practitioner of the more formal Test Driven Development (TDD) method, but follows a slightly looser model: test assisted development, TAD.

Writing a test first, then the code, is not the modus operandi in test assisted development. The goal is to write code and then verify that it works with various test cases; let the implementation take shape during coding, not as a side effect of how it will pass or fail various tests.

Using unitest does not preclude TDD, but TDD is not a strict requirement or expectation.

