
Unicon performance

Unicon is a very high level language. The runtime engine, with generators, co-expressions, threading, and an assortment of other features, means that most operations include a fair number of conditional tests to verify correctness. While this adds some overhead, Unicon still performs at a very reasonable level. The C compilation mode can help when performance is a priority, and loadfunc is always available when C or Assembly level speed is necessary for certain operations.

Unicon, interpreting icode files, runs from 20 to 40 times slower than optimized C code in simple loops and straight-up computations, on par with similar code in Python. Compiled Unicon (via unicon -C) is roughly 10 times slower than similar C when timing a tight numeric computation loop, given the overhead required by the very high level language features of Unicon, mentioned above.

As always, those are somewhat unfair comparisons. The tasks that Unicon can be applied to, and the development time required to get correct solutions, can easily outweigh a few seconds of runtime per program pass. Saving a week of effort can mean that many thousands of program runs are required before a developer hits a break-even point. Five eight-hour days total 144,000 seconds. If the delta for a program run between C and Unicon is 5 seconds per run, you'd need to run a program over 28,000 times to make up for a one-week difference in development time. Critical routines can leverage loadable functions when needed. All these factors have to be weighed when discussing performance issues.
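
The break-even arithmetic above can be sketched in a few lines of Python (the 5 second delta and one-week saving are the hypothetical figures from the paragraph, not measurements):

```python
# Hypothetical break-even point: one week of saved development time
# versus a 5 second per-run speed difference (figures from the text).
seconds_per_week = 5 * 8 * 3600   # five eight-hour days
delta_per_run = 5                 # assumed extra seconds per Unicon run
break_even_runs = seconds_per_week // delta_per_run

print(seconds_per_week)   # 144000
print(break_even_runs)    # 28800
```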

With all that in mind, it’s still nice to have an overall baseline for daily tasks, to help decide when to bother with loadfunc or which language is the right tool for the task at hand[1].

[1] Or more explicitly: not the wrong tool for the task at hand. Most general purpose programming languages can provide a workable solution to almost any computing problem, but sometimes the problem-specific advantages of one language make it an obvious choice for mixing with Unicon, for performance and/or development time benefits.


This section is unashamedly biased towards Unicon. It’s the point of the exercise. All comparisons assume that point of view and initial bias.

Summing integers

The trial is a simple program: creating a sum of numbers in a tight loop, comparing Unicon with Python and C. Other scripting and compiled languages are included for general interest. Each listing runs 16.8 million iterations while tallying a sum.
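
As a sanity check for the listings that follow, the value every tightloop program should produce can be computed in closed form. A quick Python sketch (not part of the timing trial):

```python
# Gauss closed form: the sum 0 + 1 + ... + n equals n * (n + 1) / 2.
n = 2 ** 24
expected = n * (n + 1) // 2

# Confirm the formula against brute force on a small range first.
assert sum(range(1001)) == 1000 * 1001 // 2

print(expected)   # 140737496743936
```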


The representative timings below each listing are approximate, and results will vary from run to run. A fixed number is included with each listing to account for times when document generation occurred while the build system was blocked or busy during the timing capture. Tested on Xubuntu with an AMD A10-5700 quad-core APU. Different hardware will have different base values, but should show equivalent relative timings.


Unicon, tightloop.icn

# tightloop trial, sum of values from 0 to 16777216
procedure main()
    total := 0
    every i := 0 to 2^24 do total +:= i
end

Representative timing: 2.02 seconds, 0.55 seconds (-C compiled)


unicon (icode)

prompt$ time -p unicon -s tightloop.icn -x
real 2.06
user 2.04
sys 0.00

unicon -C

prompt$ unicon -s -o tightloop-uc -C tightloop.icn
prompt$ time -p ./tightloop-uc
real 0.55
user 0.55
sys 0.00



Python

# Sum of values from 0 to 16777216
total = 0
for i in range(0, 2**24 + 1):
    total += i

Representative timing: 2.06 seconds


prompt$ time -p python
real 2.09
user 1.90
sys 0.18


C, tightloop-c.c

/* sum of values from 0 to 16777216 */
#include <stdio.h>

int main(int argc, char** argv)
{
    int i;
    unsigned long total;

    total = 0;
    for (i = 0; i <= 1 << 24; i++) total += i;
    printf("%lu\n", total);
    return 0;
}

Representative timing: 0.05 seconds


prompt$ gcc -o tightloop-c tightloop-c.c
prompt$ time -p ./tightloop-c
real 0.05
user 0.05
sys 0.00


Ada, tightloopada.adb

-- Sum of values from 0 to 16777216 
with Ada.Long_Long_Integer_Text_IO;

procedure TightLoopAda is
    total : Long_Long_Integer;
begin
    total := 0;
    for i in 0 .. 2 ** 24 loop
        total := total + Long_Long_Integer(i);
    end loop;
    Ada.Long_Long_Integer_Text_IO.Put (total);
end TightLoopAda;

Representative timing: 0.06 seconds


GNAT Ada, 5.4.0

prompt$ gnatmake tightloopada.adb
gnatmake: "tightloopada" up to date.
prompt$ time -p ./tightloopada
real 0.06
user 0.05
sys 0.00


ALGOL, tightloop-algol.a68 [2]

# Sum of values from 0 to 16777216 #
BEGIN
    LONG INT total := 0;
    FOR i FROM 0 BY 1 TO 2 ** 24 DO
        total +:= LENG i
    OD;
    print ((total))
END

Representative timing: 5.91 seconds


prompt$ time -p a68g tightloop-algol
real 5.91
user 5.86
sys 0.03


Assembler, tightloop-assembler.s

# Sum of integers from 0 to 16777216
            .data
aslong:     .asciz  "%ld\n"

            .text
            .globl main
main:       push %rbp                  # need to preserve base pointer
            movq %rsp, %rbp            # local C stack, frame size 0

            movl $1, %eax              # eax counts down
            shll $24, %eax             # 2^24
            movq $0, %rbx              # rbx total

top:        addq %rax, %rbx
            decl %eax                  # decrement counter
            jnz top                    # if counter not 0, then loop again
done:       movq %rbx, %rsi            # store sum in rsi for printf arg 2

            lea aslong(%rip), %rdi     # format string
            movl $0, %eax              # no vector args for varargs printf
            call printf                # output formatted value

            movl $0, %eax              # shell result code
            pop %rbp                   # restore base pointer
            ret

Representative timing: 0.01 seconds


prompt$ gcc -o tightloop-assembler tightloop-assembler.s
prompt$ time -p ./tightloop-assembler
real 0.01
user 0.01
sys 0.00


BASIC, tightloop-basic.bac

REM Sum of values from 0 to 16777216
total = 0
FOR i = 0 TO 1<<24
    total = total + i
NEXT
PRINT total

Representative timing: 0.05 seconds


prompt$ bacon -y tightloop-basic.bac >/dev/null
prompt$ time -p ./tightloop-basic
real 0.05
user 0.05
sys 0.00

C (baseline)

See above, Unicon, C and Python are the ballpark for this comparison.


COBOL, tightloop-cobol.cob

      *> Sum of values from 0 to 16777216
       identification division.
       program-id. tightloop-cob.

       data division.
       working-storage section.
       01 total        usage binary-double value 0.
       01 counter      usage binary-long.
       01 upper        usage binary-long.

       procedure division.
       compute upper = 2 ** 24
       perform varying counter from 0 by 1 until counter > upper
           add counter to total
       end-perform
       display total
       goback.
       end program tightloop-cob.

Representative timing: 0.06 seconds


GnuCOBOL 2.0-rc3

prompt$ cobc -x tightloop-cobol.cob
prompt$ time -p ./tightloop-cobol
real 0.07
user 0.07
sys 0.00


D, tightloop-d.d

/* Sum of values from 0 to 16777216 */
module tightloop;
import std.stdio;

void main(string[] args)
{
    long total = 0;
    for (int n = 0; n <= 1<<24; n++) total += n;
    writeln(total);
}

Representative timing: 0.05 seconds



prompt$ gdc tightloop-d.d -o tightloop-d
prompt$ time -p ./tightloop-d
real 0.05
user 0.05
sys 0.00


ECMAScript, tightloop-js.js

/* Sum of values from 0 to 16777216 */
var total = 0;
for (var i = 0; i <= Math.pow(2,24); i++) total += i

try { print(total); } catch(e) {}
try { console.log(total); } catch(e) {}

Representative timing: 0.83 seconds (node.js), 10.95 (gjs), 63.37 (duktape)



prompt$ time -p nodejs tightloop-js.js
real 0.78
user 0.77
sys 0.01

gjs [2]

prompt$ time -p gjs tightloop-js.js
real 10.96
user 10.95
sys 0.00

Duktape [2]

prompt$ time -p duktape tightloop-js.js
real 63.37
user 63.36
sys 0.00


Elixir, tightloop-elixir.ex

# Sum of values from 0 to 16777216
Code.compiler_options(ignore_module_conflict: true)
defmodule Tightloop do
    def sum() do
        limit = :math.pow(2, 24) |> round
        IO.puts Enum.sum(0..limit)
    end
end

Tightloop.sum()

Representative timing: 2.03 seconds


elixirc 1.1.0-dev

prompt$ time -p elixirc tightloop-elixir.ex
real 2.10
user 2.14
sys 0.06



Forth

( Sum of values from 0 to 16777216)
variable total
: tightloop ( -- )  1 24 lshift 1+ 0 do  i total +!  loop ;
0 total !  tightloop  total ? cr

Representative timing: 0.52 seconds (Ficl), 0.12 seconds (Gforth)



prompt$ time -p ficl
real 0.49
user 0.49
sys 0.00


prompt$ time -p gforth
real 0.12
user 0.12
sys 0.00


Fortran, tightloop-fortran.f

! sum of values from 0 to 16777216
program tightloop
    use iso_fortran_env
    implicit none

    integer :: i
    integer(kind=int64) :: total

    total = 0
    do i=0,2**24
        total = total + i
    end do
    print *,total
end program tightloop

Representative timing: 0.06 seconds



prompt$ gfortran -o tightloop-fortran -ffree-form tightloop-fortran.f
prompt$ time -p ./tightloop-fortran
real 0.06
user 0.06
sys 0.00


Groovy, tightloop-groovy.groovy

/* Sum of values from 0 to 16777216 */

public class TightloopGroovy {
    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i <= 1<<24; i++) {
            total += i
        }
    }
}

Representative timing: 0.47 seconds (will use multiple cores)


groovyc 1.8.6, OpenJDK 8

prompt$ groovyc tightloop-groovy.groovy
prompt$ time -p java -cp ".:/usr/share/groovy/lib/*" TightloopGroovy
real 0.46
user 0.83
sys 0.04



Java

/* Sum of values from 0 to 16777216 */
public class tightloopjava {
    public static void main(String[] args) {
        long total = 0;

        for (int n = 0; n <= Math.pow(2, 24); n++) {
            total += n;
        }
    }
}

Representative timing: 0.11 seconds


OpenJDK javac

prompt$ javac
prompt$ time -p java -cp . tightloopjava
real 0.11
user 0.11
sys 0.00


Lua, tightloop-lua.lua

-- Sum of values from 0 to 16777216
total = 0
for n=0,2^24,1 do
    total = total + n
end
print(string.format("%d", total))

Representative timing: 0.73 seconds



prompt$ time -p lua tightloop-lua.lua
real 0.71
user 0.70
sys 0.00


Neko, tightloop-neko.neko

// Sum of values from 0 to 16777216
var i = 0;
var total = 0.0;
var limit = 1 << 24;
while i <= limit {
    total += i;
    i += 1;
}
$print(total, "\n");

Representative timing: 0.89 seconds


nekoc, neko

prompt$ nekoc tightloop-neko.neko
prompt$ time -p neko tightloop-neko
real 0.85
user 0.90
sys 0.14


Nickle, tightloop-nickle.c5

/* Sum of values from 0 to 16777216 */
int total = 0;
for (int i = 0; i <= 1 << 24; i++) {
    total += i;
}
printf("%g\n", total);

Representative timing: 4.85 seconds


Nickle 2.77 [2]

prompt$ time -p nickle tightloop-nickle.c5
real 4.85
user 4.83
sys 0.01


Nim, tightloopNim.nim

# Sum of values from 0 to 16777216
var total = 0
for i in countup(0, 1 shl 24):
    total += i
echo total

Representative timing: 0.31 seconds



prompt$ nim compile --verbosity:0 --hints:off tightloopNim.nim
prompt$ time -p ./tightloopNim
real 0.31
user 0.31
sys 0.00



Perl

# sum of values from 0 to 16777216
my $total = 0;
for (my $n = 0; $n <= 2 ** 24; $n++) {
    $total += $n;
}
print "$total\n";

Representative timing: 1.29 seconds


perl 5.22

prompt$ time -p perl
real 1.37
user 1.37
sys 0.00


PHP, tightloop-php.php

<?php
# Sum of values from 0 to 16777216
$total = 0;
for ($i = 0; $i <= 1 << 24; $i++) {
    $total += $i;
}
echo $total.PHP_EOL;

Representative timing: 0.39 seconds


PHP 7.0.15, see PHP.

prompt$ time -p php tightloop-php.php
real 0.41
user 0.41
sys 0.00


Python (baseline)

See above. Unicon, C and Python are the ballpark for this comparison.


REBOL, tightloop-rebol.r

; Sum of values from 0 to 16777216
total: 0
for n 0 to integer! 2 ** 24 1 [total: total + n]
print total

Representative timing: 2.22 seconds



prompt$ time -p r3 tightloop-rebol.r3
real 2.82
user 2.18
sys 0.00


REXX, tightloop-rexx.rex

/* Sum of integers from 0 to 16777216 */
parse version host . .
parse value host with 6 which +6
if which = "Regina" then
    numeric digits 16

total = 0
do n=0 to 2 ** 24
    total = total + n
end
say total

Representative timing: 2.38 seconds (oorexx), 4.48 seconds (regina)


oorexx 4.2

prompt$ time -p /usr/bin/rexx tightloop-rexx.rex
real 2.55
user 2.54
sys 0.00

regina 3.9, slowed by NUMERIC DIGITS 16 for clean display [2]

prompt$ time -p rexx tightloop-rexx.rex
real 4.84
user 4.84
sys 0.00


Ruby, tightloop-ruby.rb

# Sum of values from 0 to 16777216
total = 0
for i in 0..2**24
    total += i
end

Representative timing: 1.16 seconds


ruby 2.3

prompt$ time -p ruby tightloop-ruby.rb
real 1.13
user 1.13
sys 0.00



Rust

// sum of values from 0 to 16777216
fn main() {
    let mut total: u64 = 0;
    for i in 0..(1u64 << 24) + 1 {
        total += i;
    }
    println!("{}", total);
}

Representative timing: 0.44 seconds


rustc 1.16.0

prompt$ /home/btiffin/.cargo/bin/rustc
prompt$ time -p ./tightloop-rust
real 0.37
user 0.36
sys 0.00


Scheme, tightloop-guile.scm

; sum of values from 0 to 16777216
(define (sum a b)
  (do ((i a (+ i 1))
       (result 0 (+ result i)))
      ((> i b) result)))

(display (sum 0 (expt 2 24)))

Representative timing: 0.85 seconds


guile 2.0.11

prompt$ time -p guile -q tightloop-guile.scm
real 0.85
user 0.85
sys 0.00



Shell

# Sum of integers from 0 to 16777216
i=0
total=0
while [ $i -le $((2**24)) ]; do
    let total=total+i
    let i=i+1
done
echo $total

Representative timing: 281.29 seconds


bash 4.3.46 [2]

prompt$ time -p source
real 281.29
user 280.08
sys 1.11



S-Lang

% Sum of values from 0 to 16777216
variable total = 0L;
variable i;
for (i = 0; i <= 1<<24; i++)
    total += i;

Representative timing: 4.92 seconds


slsh 0.9.1 with S-Lang 2.3 [2]

prompt$ time -p slsh
real 4.92
user 4.92
sys 0.00



Smalltalk

"sum of values from 0 to 16777216"
| total |
total := 0
0 to: (1 bitShift: 24) do: [:n | total := total + n]
(total) printNl

Representative timing: 4.60 seconds


GNU Smalltalk 3.2.5 [2]

prompt$ time -p gst
real 4.56
user 4.55
sys 0.00


SNOBOL, tightloop-snobol.sno

* Sum of values from 0 to 16777216
        total = 0
        n = 0
loop    total = total + n
        n = lt(n, 2 ** 24) n + 1               :s(loop)
        output = total
end

Representative timing: 5.83 seconds


snobol4 CSNOBOL4B 2.0 [2]

prompt$ time -p snobol4 tightloop-snobol.sno
real 5.83
user 5.82
sys 0.00


Tcl, tightloop-tcl.tcl

# Sum of values from 0 to 16777216
set total 0
for {set i 0} {$i <= 2**24} {incr i} {
    incr total $i
}
puts "$total"

Representative timing: 4.59 seconds (jimsh), 17.69 seconds (tclsh)


jimsh 0.76 [2]

prompt$ time -p jimsh tightloop-tcl.tcl
real 4.59
user 4.59
sys 0.00

tclsh 8.6 [2]

prompt$ time -p tclsh tightloop-tcl.tcl
real 17.69
user 17.67
sys 0.00


Vala, tightloop-vala.vala

/* Sum of values from 0 to 16777216 */
int main(string[] args) {
    long total = 0;
    for (var i = 0; i <= 1<<24; i++) total += i;
    stdout.printf("%ld\n", total);
    return 0;
}

Representative timing: 0.16 seconds


valac 0.34.2

prompt$ valac tightloop-vala.vala
prompt$ time -p ./tightloop-vala
real 0.10
user 0.10
sys 0.00



Genie

[indent=4]
/* Sum of values from 0 to 16777216 */
init
    total:long = 0
    for var i = 0 to (1<<24)
        total += i
    print("%ld", total)

Representative timing: 0.16 seconds


valac 0.34.2

prompt$ valac
prompt$ time -p ./tightloop-genie
real 0.10
user 0.10
sys 0.00

Unicon loadfunc

A quick test of speeding up Unicon with a loadable C function

Unicon loadfunc, tightloop-loadfunc.icn

# tightloop trial, sum of values from 0 to 16777216
procedure main()
    faster := loadfunc("./tightloop-cfunc.so", "tightloop")
    total := faster(2^24)
end

Representative timing: 0.05 seconds


C loadfunc for Unicon, tightloop-cfunc.c

/* sum of values from 0 to integer in argv[1] */

#include "../icall.h"

int tightloop(int argc, descriptor argv[])
{
    int i;
    unsigned long total;

    ArgInteger(1);                 /* verify the Unicon argument is an integer */
    total = 0;
    for (i = 0; i <= IntegerVal(argv[1]); i++) total += i;
    RetInteger(total);             /* return the sum to Unicon */
}


unicon with loadfunc

prompt$ gcc -O3 -shared -fpic -o tightloop-cfunc.so tightloop-cfunc.c
prompt$ time -p unicon -s tightloop-loadfunc.icn -x
real 0.04
user 0.02
sys 0.01




In the image above, bars for Bash shell and Tclsh are not included. ECMAScript value is from Node.js (V8). [3]

There was no attempt to optimize any code. Compile times are only included when that is the normal development cycle. Routines can always be optimized, tightened, and fretted over. The point of this little exercise is mainly rough guidance (perhaps with healthy doses of confirmation, self-serving, halo effect, academic, and/or experimenter's bias [8]). [4]

While there may be favouritism leaking through this summary, to the best of my knowledge and belief there is no deliberate shilling. As this is free documentation in support of free software, I can attest to no funding, bribery or insider trading bias as well. Smiley. I will also attest to the belief that Unicon is awesome. You should use it for as many projects as possible, and not feel any regret or self-doubt while doing so. Double smiley.

With that out of the way, here is a recap:

Unicon translated to icode is nearly equivalent to Python and Elixir in terms of timing (variance occurs between runs, but within a few tenths or hundredths of a second, up and down).

C wins, well over an order of magnitude faster than the scripted language trials and marginally faster than most of the other compiled trials.

A later addition of gcc -O3 compiles and an Assembler sample run faster, but that counts as fretting [4].

The GnuCOBOL, gfortran, GNAT/Ada and BaCon programs fare well, only a negligible fraction slower than the baseline C. Both GnuCOBOL and BaCon use C intermediates on the way to a native compile. gfortran uses the same base compiler technology as C in these tests (GCC). Unicon can load any of these modules when pressed for time.

D also fares well, with sub tenth of a second timing.

Java and Gforth test at about one third the speed of C, admirable, and neck and neck with Vala and Genie. Nim, PHP and Rust clock in shortly after those timings.

Unicon compiled via -C completes roughly 4 times faster than the icode virtual machine interpreter version, at about 1/10th C speed.

Ruby, Perl and Neko are faster than interpreted Unicon for this sample.

Elixir clocks in next and like Python, pretty close to the bar set by interpreted Unicon.

REBOL and REXX clock in just a little slower than the Unicon mark.

Python and Elixir perform this loop with similar timings to interpreted Unicon, all within 2% of each other's elapsed time. Widening the circle a little, REXX and REBOL become comparable as well. Revealing a bit of the author's bias, let's call these the close competition when discussing raw performance. I have a different (overlapping) set of languages in mind when it comes to overall competitive development strategies [7].

Tcl takes about twice as long as Unicon when using the JimTcl interpreter, and approaching 9 times slower with a full Tcl interpreter. Gjs ended up timing in between the two Tcl engines.

ALGOL, S-Lang, Smalltalk and SNOBOL took two to three times as long as Unicon on this trial.

Duktape came in second to last, at just over a minute.

bash was unsurprisingly the slowest of the bunch, approaching a 5 minute run time for the 16.8 million iterations.

Native compiled Unicon timing is on par with Ficl, and a little faster than Lua, node.js and Guile, but still about 10 times slower than the compiled C code. (Unicon includes automatic detection of native integer overflow, and will seamlessly switch to large integer support routines when needed. Those checks account for some of the timing difference in this particular benchmark when compared to straight-up C.)

Once again, these numbers are gross estimates, no time spent fretting over how the code was run, or worrying about different implementations, just using the tools as normal (for this author, when not fretting [4]).

Unicon stays reasonably competitive, all things considered.

Scale Languages
<1x Assembler, -O3 C
1x C, Ada, BASIC, COBOL, D, Fortran, Unicon loadfunc
3x Java, GForth, Vala, Genie
6x Nim, PHP, Rust
10x Unicon -C
15x Ficl, Lua, Scheme
20x Neko, Node.js
25x Perl, Ruby
40x Unicon, Elixir, Python, REBOL, REXX
100x Algol, JimTcl, S-Lang, Smalltalk, SNOBOL
200x Gjs
350x Tcl
1200x Duktape
5600x Shell
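
The scale factors in that table are simply each representative timing divided by the 0.05 second C baseline, then rounded to a friendly value. A Python sketch using a few of the timings quoted above:

```python
# Rough scale factors: representative timing over the C baseline.
baseline = 0.05   # seconds, the C trial
timings = {
    "Unicon (icode)": 2.02,
    "Unicon -C": 0.55,
    "Python": 2.06,
    "Duktape": 63.37,
    "Shell": 281.29,
}
for name, seconds in timings.items():
    print("%-15s %5.0fx" % (name, seconds / baseline))
```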

The Unicon perspective

With Unicon in large (or small) scale developments, if there is a need for speed, you can always write critical sections in another compiled language and load the routines into Unicon space. The overhead for this is minimal in terms of both source code wrapper requirements and runtime data conversions[5]. The number of systems that can produce compatible loadable objects is fairly extensive; C, C++, COBOL, Fortran, Vala/Genie, Ada, Pascal, Nim, BASIC, Assembler, to name but a few. This all means that in terms of performance, Unicon is never a bad choice for small, medium or large projects. The leg up on development time may outweigh a lot of the other complex considerations.

A note on development time

With a program this short and simple-minded, development time differences are negligible. Each sample takes a couple of minutes to write, test and document [6].

That would change considerably as problems scaled up in size. Not just in terms of lines of code needed, but also the average time to get each line written, tested and verified correct.

General know how with each environment would soon start to influence productivity and correctness, and bring up a host of other issues. A lot can be said for domain expertise with the language at hand. Without expertise, development time may extend; referencing materials, searching the ecosystem for library imports, becoming comfortable with idioms, with testing and with debugging.

Needing fewer lines to solve a task can add up to major benefits when comparing languages on non-trivial development efforts.

[2] Due to the length of the timing trials, some results are not automatically captured during documentation generation. Results for those runs were captured separately and copied into the document source. All other program trials are evaluated during each Sphinx build of this document (results will vary slightly from release to release).

[3] The bar chart graphic was generated with a small Unicon program.


[4] Ok, I eventually fretted. Added loadfunc() to show off mixed programming for speed with Unicon. Also added -O3 gcc timing and an assembler sample that both clock in well under the baseline timing.
[5]There are a few examples of how much (little) code is required to wrap compiled code in Unicon in this docset. See libsoldout markdown for one example, wrapping a C library to perform Markdown to HTML processing in about 30 lines of mostly boilerplate source. Other entries in the Programs chapter exercise and demonstrate the loadfunc feature that allows these mixed language integrations.
[6](Except for the BaCon trial). That example stole a few extra minutes for debugging, out of the blue, many days after the initial code writing and verification pass. It turns out BaCon generated a temporary Makefile during its translation to C phase, and I had to track down the mystery when an unrelated Makefile sample kept disappearing during generation of new versions of this document. That led to moving all the performance samples to a separate sub-directory to avoid the problem, and any others that may occur in the future when dealing with that many programming environments all at the same time and in the same space. BaCon was fixed the same day as the inconvenience report.
[7] Not to leave you hanging; I put C, C++, C#, Erlang, Go, Java, Lua, Node.js, Perl, Python, PHP, Red, and Ruby firmly in the Unicon competitive arena. Bash/Powershell and JavaScript count as auxiliary tools that will almost always be mixed in with project developments. The same can be said for things like HTML/CSS and SQL, tools that will almost always be put to use, but don't quite count as "main" development systems. For Erlang, I also count Elixir, and for Java that includes Scala, Groovy and the like. I live in GNU/Linux land, so my list doesn't include Swift or VB.NET etc; your short list may differ. I also never quite took to the Lisp-y languages, so Scheme and Clojure don't take up many brain cycles during decision making. And finally, I'm a huge COBOL nerd, and when you need practical programming, COBOL should always be part of the development efforts.

Development time

Even though execution time benchmarking is hard to quantify in an accurate way (there are always issues unaccounted for, or secrets yet to uncover) and is fraught with biased opinion[8], judging development time is magnitudes harder. Sometimes lines pour out of fingers and code works better than expected. Sometimes an off by one error can take an entire afternoon to uncover and flush productivity down the toilet.

In general, for complex data processing issues, very high level languages beat high level languages which beat low level languages when it comes to complete solution development time. It’s not a hard and fast rule, but in general.

Unicon counts as a very high level language. There are features baked into Unicon that ease a lot of the day to day burdens faced by many developers. Easy to wield data structures, memory managed by the system, and very high level code control mechanisms can all work together to increase productivity and decrease development time. In general. For some tasks, Ruby may be the better choice; for other tasks Python or C or Tcl, or some language you have never heard of, may be the wisest course. Each with a strength, and each having skilled practitioners who can write code faster in that language than in any other.

Within the whole mix, Unicon provides a language well suited to productive, timely development with good levels of performance. Other languages can be mixed with Unicon when appropriate, including loadable routines that can be as close to the hardware as hand and machine optimized assembler can manage.

If the primary factor is development time, Unicon offers an extremely competitive environment. The feature set leaves very few application domains wanting.


Unicon has a few places that can expose hard-to-notice construction problems.

Goal-directed evaluation can spawn exponential backtracking problems when two or more expressions are involved. Some expression syntax can end up doing a lot more work in the background than it would seem at a glance. Bounded expressions (or lack thereof) can cause some head scratching at times.

There are tools in place with Unicon to help find these issues, but nothing will ever beat experience, and experience comes from writing code, lots of code.

Luckily, Unicon is as at home programming in the small as it is with middling and large scale efforts[9]. The class, method, and package features, along with the older link directive, make for a programming environment that begs for application. There are a lot of problem domains that Unicon can be applied to, and that all helps in gaining experience.

Due to some of the extraordinary features of Unicon, it can be applied to very complex problems. Complex problems always shout out for elegant solutions, and that lure can lead to some false positives with initial Unicon programs. It can take practice to know when an edge case is not properly handled, or when a data dependent bug sits waiting for the right combination of inputs to cause problems. Rely on Unicon, but keep a healthy level of skepticism when starting out. This is advice from an author that is just starting out, so keep that in mind. Read the other Technical Reports, articles, and Unicon books; as this document is very much entry level to intermediate Unicon. Wrap expressions in small test heads and throw weird data at your code. Experiment. Turn any potential Unicon downsides into opportunities.

[8] With biased opinions comes cognitive filtering. While writing the various tightloop programs, I wanted Unicon to perform well in the timing trials. That cognitive bias may have influenced how the results were gathered and reported here. Not disappointed with the outcomes, but C sure sets a high bar when it comes to integer arithmetic.

Having no actual experience beyond middling sized projects, I asked the forum if anyone had worked on a large Unicon system with over 100,000 lines of code. Here are a couple of the responses:

The biggest project I worked on using Unicon was CVE (not sure if we broke the 100,000 LOC [mark] though). I don’t see any reason why you can’t write large projects using Unicon. With the ability to write very concise code in Unicon, I’d argue it is even easier to go big.


The two largest known Unicon projects are SSEUS at the National Library of Medicine, and mKE by Richard McCullough of Context Knowledge Systems, not necessarily in that order. They are in the 50-100,000 lines range. CVE approaches that ballpark when client and server are bundled together.

Ralph Griswold used to brag that Icon programs were often 10x smaller than corresponding programs in mainstream languages, so this language designed for programs mostly under 1,000 lines is applicable for a far wider range of software problems than it sounds. While Icon was designed for programming in the small, its size limits have gradually been eliminated. Unicon has further improved scalability in multiple aspects of the implementation, both the compiler/linker and the runtime system. In addition, Unicon was explicitly intended to further support larger scale software systems, and that is why classes and packages were added.


Those quotes don’t answer all the questions, like what maintainers go through, or how long it takes new contributors to get up to speed, but as anecdotes, I now feel quite comfortable telling anyone and everyone that Unicon is up to the task of supporting large scale development and deployments, along with the small.

Unicon Benchmark Suite

There is a Technical Report (UTR16a, 2014-06-09) by Shea Newton and Clinton Jeffery, A Unicon Benchmark Suite, in the Unicon source tree under doc/utr/utr16.tex.

The results table shows that unicon runs of the benchmarks range from

  • 345.2x (n-body calculation) to
  • 1.6x for the thread-ring trial, compared to the C code baseline timings.

uniconc compiled versions range from

  • 57.9x (n-body) to
  • 0.6x (regex-dna) of the C baseline.

Uniconc increases the n-body run speed by a factor of 6 compared to the icode interpreter. The regex-dna trial actually ran faster under Uniconc than in C. Take a look at the TR for full details.
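
The factor-of-6 figure follows directly from the two n-body ratios quoted above, both expressed relative to the same C baseline:

```python
# n-body trial: icode interpreter at 345.2x of C, uniconc at 57.9x of C.
interpreted_vs_c = 345.2
compiled_vs_c = 57.9
speedup = interpreted_vs_c / compiled_vs_c
print(round(speedup, 1))   # 6.0
```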

Another caveat: runtime versus development time. There should be a column in any benchmark result sheet that quantifies the development time needed to get correctly functioning programs. I'd wager that Unicon would shine very bright in that column.


You can run your own test pass in the tests/bench sub-directory of the source tree.

prompt$ cd tests/bench
prompt$ make
prompt$ ./run-benchmark

A local pass came up looking like

prompt$ make
generating input files.............done!

prompt$ ./run-benchmark

Times reported below reflect averages over three executions.
Expect 2-20 minutes for suite to run to completion.

Word Size  Main Memory  C Compiler  clock    OS
64 bit     7.272 GB     gcc 5.4.0   3.4 GHz  UNIX

4x AMD A10-5700 APU with Radeon(tm) HD Graphics

                                        Elapsed time h:m:s         Concurrent
benchmark                            Sequential     Concurrent    Performance
concord concord.dat                      3.213            N/A
deal 50000                               2.469            N/A
ipxref ipxref.dat                        1.345            N/A
queens 12                                3.554            N/A
rsg rsg.dat                              2.815            N/A
binary-trees 14                          5.172          7.736         0.668x
chameneos-redux 65000                      N/A          5.706
fannkuch 9                               3.601            N/A
fasta 250000                             3.174            N/A
k-nucleotide 150-thou.dat                5.153            N/A
mandelbrot 750                           13.877          7.662         1.811x
meteor-contest 600                       4.793            N/A
n-body 100000                            4.326            N/A
pidigits 7000                            2.903            N/A
regex-dna 700-thou.dat                   4.231          3.452         1.225x
reverse-complement 15-mil.dat            4.828            N/A
spectral-norm 300                        3.469            N/A
thread-ring 700000                         N/A          6.460

To compare (and take part) visit

If there are no results for your particular machine type and chipset on the Accumulated Results chart, Clinton Jeffery collects summaries with information on how to report them at

The makefile creates some fairly large test files, so you’ll probably want to clean up after running the benchmark pass.

prompt$ make clean
prompt$ rm *-thou.dat *-mil.dat ipxref.dat

Unfortunately, make clean does not remove the benchmarking data files.
