ARMVLS - Atomic Reaction Model Visual Language System -
A New Way of Animating Algorithms
 
 Kai Warendorf, Wen Jing Hsu, Poh Yeen Seah
 
School of Applied Science, Nanyang Technological University
Blk N4 Nanyang Avenue #2a-36, Singapore 639798

Abstract

A visual language (VL) is a programming language without textual code, and algorithm animation (AA) is the visualization of a computer algorithm. Visual programming (VP) and AA have traditionally been separate factions in software visualization (SV). With our project, the Atomic Reaction Model (ARM), we have bridged these two major branches of SV.

Due to advancements in graphics technology, many developers have produced graphical applications that claim to be VLs. Within this diversity of VP systems we see two main categories of VLs: the application-specific and the general-programming (GP) VLs. GPVLs are developed with the eventual aim of substituting for textual programming. Currently, there are two main approaches: graphical rewrite systems and dataflow VLs.

To date, AA systems have used the approach pioneered by the BALSA AA system to animate algorithms. A textual program is first written, compiled and then run to generate a script file. The script file is subsequently read by an algorithm animator to animate the algorithm. This approach is not effective for active learning, which proposes that the student should animate the algorithm he wishes to learn.

Opsis, a Java applet for teaching binary tree algorithms, has shown that the visual approach to AA is more effective as a learning tool. Therefore, in this project we have created a VP system that can animate most of the algorithms traditionally animated by textual coding. ARM offers an alternative to the textual codes used in BALSA-style systems. We have also shown that a VL is capable of solving complex problems without sacrificing its visual clarity, and that graphical rewrite systems are not necessarily simplistic or lacking in flow control compared with dataflow VLs. We have also laid down the framework of a heterogeneous VL. Our research has opened the door to true GPVLs.

1 Introduction

In this paper, we describe a visual language algorithm animator that models the fundamental cause-and-effect behaviour of a simple atomic reaction. We call the model we developed the Atomic Reaction Model, or ARM for short, and have named our project the ARM Visual Language System (ARMVLS). Briefly, it is a system that allows the end-user to create images that visually demonstrate, or assist in describing, how a computer algorithm works by drawing and moving images on screen. This is algorithm animation (AA). The end-user can be a teacher preparing the images as an instructional aid for his lessons, or a student learning the algorithm through active participation [4], i.e. actually creating his own examples in order to learn or understand the algorithm better. Take the example of a teacher who wishes to teach his students the procedure for inserting a new node into a binary tree. In the traditional classroom, he draws the binary tree on the blackboard, then draws the new node and redraws it at each level of the tree as he explains how the node traverses down towards the leaves to its insertion point. AA automates this process, and it proves very useful because many algorithms are too complex and time-consuming to be drawn manually on the blackboard every time a lesson is conducted. This is only one of the many uses of AA.
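A short sketch of the insertion procedure being animated may help make the example concrete. The code below is ours and purely illustrative (ARMVLS itself involves no such textual code); it walks the new node down from the root by key comparison until an empty leaf position is found.

    // Minimal sketch of binary search tree insertion, for illustration only.
    class BSTNode {
        int key;
        BSTNode left, right;
        BSTNode(int key) { this.key = key; }
    }

    class BSTInsertDemo {
        // Walk down from the root, branching by key comparison, until an
        // empty slot (a leaf position) is found for the new node.
        static BSTNode insert(BSTNode root, int key) {
            if (root == null) return new BSTNode(key);          // empty slot reached
            if (key < root.key) root.left = insert(root.left, key);
            else                root.right = insert(root.right, key);
            return root;
        }

        public static void main(String[] args) {
            BSTNode root = null;
            for (int k : new int[] {8, 3, 10, 1, 6}) root = insert(root, k);
            System.out.println("root key = " + root.key);       // prints 8
        }
    }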

What distinguishes ARMVLS from most other existing AA systems is that visual techniques, instead of textual codes, are employed to specify the animation sequence. This is what most researchers define as visual programming (VP). So ARM combines AA and VP, linking these two traditionally separate factions in software visualization (SV) technology [7].

2 Evolution

The importance of visual representations in understanding computer programs was recognized as early as the 1950's, when Goldstein and von Neumann demonstrated the usefulness of flowcharts [7]. Subsequently, Knuth developed an automatic flowchart generator that worked by systematically integrating documentation with source code. In the 1970's, the Nassi-Shneiderman chart (or box diagram) [6] was developed to counter the unstructured nature of standard flowcharts. Roy and St. Denis then came up with a system for automatically generating box diagrams from source code through a specialized compilation process.

Modern software visualization (SV) research started in the 1980's with the newly emerged graphical workstation technology, and emphasis was placed on the human-computer interface. BALSA [3] was the most important and best known among the systems that utilized bitmap displays and window interfaces. Along with Balsa-II [1], BALSA is regarded as the pioneer of today's algorithm animators.

2.1 BALSA

BALSA stands for Brown University Algorithm Simulator and Animator [7]. It is one of the earliest AA systems to make use of the graphics and windowing capabilities of the personal workstations of the 1980's. It was designed for two purposes: as a teaching tool for undergraduates and as an aid to algorithm design and analysis. As a teaching tool, it allows the teacher to give a running commentary on a prepared graphical animation running on each student's machine; this automates the traditional classroom teaching example that we gave in the previous section. Figure 1 shows BALSA with several view windows. A view is an animated picture portraying the events generated by an algorithm [2]. Each view can exist as a separate window to show a different aspect of the algorithm being animated.
Figure 1 BALSA with multiple view windows

As Figure 1 shows, BALSA supports multiple simultaneous views of the running algorithm. The contents of each view window depend on what the teacher has provided and cannot be changed by the student. A code view (the window in the lower-left quarter of the BALSA screen) is usually provided to show the listing of the current procedure in Pascal, with the current line highlighted. Views of data structures can range from dots or sticks to complex graphs, trees or computational-geometry abstractions. BALSA is able to display multiple views of the same data structure, and it was also the first system that could show algorithms racing with each other on the same display (e.g. to compare the speeds of various sorting algorithms). This pioneering work made it the benchmark against which subsequent AA systems have been measured.

To animate a computer algorithm in BALSA, the teacher starts with a standard Pascal implementation of the algorithm and annotates it with markers at points where the code changes interesting data structures or enters/exits a subroutine. These markers are called interesting events, and they make calls to BALSA's Interesting Event manager with parameters that typically identify program data [2]. After annotating the source code, the teacher builds views. Each view controls some screen real estate and is notified by the Interesting Event manager when an interesting event is encountered in the algorithm. The notified view is then responsible for updating its graphical display appropriately, either by modifying an existing view from the BALSA library or by building a completely new view using the built-in graphics primitives. We demonstrate this approach with examples in subsequent sections.
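The division of labour between the annotated algorithm and its views can be pictured with a generic sketch of this interesting-event pattern. The sketch is written in Java purely for readability, and the names (InterestingEventManager, View, update) are our own illustrative stand-ins, not BALSA's actual Pascal interfaces.

    import java.util.ArrayList;
    import java.util.List;

    // Generic sketch of the interesting-event pattern used by BALSA-style
    // animators; names are illustrative and do not mirror BALSA's real code.
    interface View {
        void update(String event, int[] data);    // redraw this view for the event
    }

    class InterestingEventManager {
        private final List<View> views = new ArrayList<>();
        void register(View v) { views.add(v); }
        // Called from markers planted in the annotated algorithm.
        void interestingEvent(String event, int... data) {
            for (View v : views) v.update(event, data);
        }
    }

    class AnnotatedBubbleSort {
        public static void main(String[] args) {
            InterestingEventManager iem = new InterestingEventManager();
            // A trivial "view" that simply prints each event it is notified of.
            iem.register((e, d) -> System.out.println(e + " " + java.util.Arrays.toString(d)));

            int[] a = {3, 1, 2};
            for (int i = 0; i < a.length; i++)
                for (int j = 0; j + 1 < a.length - i; j++)
                    if (a[j] > a[j + 1]) {
                        int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                        iem.interestingEvent("swap", j, j + 1);   // interesting-event marker
                    }
        }
    }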

The initial versions of BALSA in 1983 ran in black-and-white on Apollo workstations. Brown's next version, Balsa-II, ran in color on Apple Macintosh personal computers and allowed rudimentary sounds to be associated with events. Figure 2 shows an algorithm race in Balsa-II.
 

Figure 2 Balsa-II demonstrating a race between two bin-packing programs

2.2 Opsis

Opsis [5] is a Java applet designed to teach binary tree algorithms. Figure 3 shows the Opsis applet in an X Window environment. Opsis employs visual programming (VP) methodologies to manipulate abstract tree fragments.
 

Figure 3 Constructing binary search algorithm in Opsis

In Opsis, it is the student who creates the animation, rather than the teacher. Michail cites empirical evidence that students learn better through active rather than passive activities. He thus advocates that students should learn algorithms by programming their own animations, instead of viewing those prepared by their teachers.

Opsis uses a state-based model (SBM) to represent an animation program. Each state of the tree is one animation frame, so the smooth animation found in systems such as XTango is not achieved. The student specifies transitions from one state to another by manipulating abstract objects in a visual manner; the idea is to specify the algorithm in full generality. The sequence of state transitions is displayed in the column of visual or state diagrams in Figure 3. A user-defined function is modeled as a state graph, a directed graph whose nodes represent states and whose directed edges represent operations that transform one state into another. Computation starts at the initial state and ends at one of the final states. Parameters to be passed into the function are specified in the initial state, which has no incoming edges. The final states have no outgoing edges, and each serves to return a possible value for the function. An example of an initial state is (1) in Figure 4; final states are (3b) and (4c).
 

Figure 4 Opsis visual code for binary tree search

It is not difficult to see that the SBM requires no control-flow constructs, as it subsumes sequencing, iteration and conditionals. In the figure above, sequencing occurs from (4b) to (2) by a simple state transition; iteration, or a loop, occurs in the cycle (2) (3a) (4a) (2); and a conditional, realized through operations that result in two or more states, occurs from (2) to (3a) and (3b).
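To make the flavour of such a state graph concrete, the sketch below encodes a simple linear search as states joined by operations; control flow is never written explicitly, yet sequencing, a loop and a conditional all emerge from the shape of the graph. The names (Ctx, the state labels) are ours and do not correspond to Opsis's internal representation.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // Illustrative sketch of a state-based model: nodes are states, edges are
    // operations, and computation runs from the initial state to a final state.
    class StateGraphDemo {
        static class Ctx { int[] keys = {3, 7, 9}; int target = 7; int i = 0; }

        public static void main(String[] args) {
            Map<String, Function<Ctx, String>> ops = new HashMap<>();
            ops.put("initial", c -> "compare");                      // sequencing
            ops.put("compare", c -> {                                // conditional: several outgoing edges
                if (c.i >= c.keys.length) return "final_not_found";
                return c.keys[c.i] == c.target ? "final_found" : "advance";
            });
            ops.put("advance", c -> { c.i++; return "compare"; });   // iteration: cycle back to "compare"

            Ctx ctx = new Ctx();
            String state = "initial";
            while (!state.startsWith("final")) state = ops.get(state).apply(ctx);
            System.out.println(state);                               // prints final_found
        }
    }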

3 ARM: Principal Concepts

Figure 5 shows the Atomic Reaction Model Visual Language System (ARMVLS) after the completion of a three-level binary search tree (BST) construction. The BST was programmed from scratch and made ready to run in less than five minutes, demonstrating the rapid prototyping capability of our Atomic Reaction Model (ARM). There are no functions or modes specially catered to tree construction; all features and constructs of ARM are generic and can be applied to any algorithm that it can animate, where deemed fit. Creating animation in ARMVLS also does not require an expert user. In fact, the BST was animated by a user with no programming background who was briefed only on the aspects of ARM pertaining to the BST, and was taught the BST insertion algorithm itself just minutes before she created the animation.

ARM is designed as a general-programming heterogeneous visual language (GPHVL) with the ultimate aim of animating all algorithms achievable by textual coding. This should not be confused with BALSA-style systems: algorithm animation (AA) using BALSA's approach is basically coded in text and is therefore restricted only by machine power, compiler capability and human creativity. ARMVLS, in contrast, is a visual programming (VP) system that animates algorithms that are themselves programmed in ARM.

Of course, our goal is extremely ambitious and will take many more years of research to achieve, considering the level of VL technology today. Nevertheless, we feel that our contribution to the VL research community lies in the approach adopted in ARM.

3.1 The Atomic Reaction

The main component of ARM is, as the name suggests, the atomic reaction. In our simple atomic reaction, we assume only two types of atoms: the active and the inactive. Active atoms have kinetic energy; they are unstable particles and they move around. They are called reactors. Inactive atoms, or reactants, are stable and do not move. Because reactors move, they initiate interactions with reactants. An interaction has two possible outcomes: either nothing happens, or there is excitation. We say the interaction is sparked when there is excitation. If the excitation involves only one reactor and one reactant, it is a reaction; if many reactants are involved, it is a compound reaction. ARM does not define the many-reactor-to-one-reactant and many-reactor-to-many-reactant cases. In subsequent discussions, we use the term 'reaction' to include compound reactions as well, unless we explicitly state that compound reactions are excluded.

In a reaction, there can be three resulting scenarios:

  1. the reactor remains excited and the reactant remains stable,
  2. the reactor remains excited and the reactant becomes excited, or
  3. the reactor stabilizes and the reactant becomes excited.
(The case in which the reactor stabilizes and the reactant remains unexcited is an unsparked interaction, not a reaction.) When the reactor stabilizes, it becomes a reactant; the reverse is also true: when the reactant is excited, it becomes a reactor and fires an excitation response (potential energy converted to kinetic energy) that throws it into motion. The reactor also fires an excitation response in order to keep itself in motion if it is to remain excited. We use active responses to refer to the set of excitation responses that an atom fires as a reactor, and passive responses for those excitation responses that the atom fires as a reactant. Dual responses form the intersection of these two response types, i.e. a dual response is both an active and a passive response. We say that an atom is triggered when it either becomes or remains active after a reaction.
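A minimal sketch of a single (non-compound) reaction under these three scenarios might look as follows. The class names, the Outcome enumeration and the printed "responses" are ours, introduced only to make the scenarios concrete; in ARM itself all of this is specified visually rather than in code.

    // Illustrative sketch of one (non-compound) reaction in ARM's model.
    class Atom {
        final String name;
        boolean active;                        // active = reactor, inactive = reactant
        Atom(String name, boolean active) { this.name = name; this.active = active; }
        void fireActiveResponse()  { System.out.println(name + " fires an active response (stays in motion)"); }
        void firePassiveResponse() { System.out.println(name + " fires a passive response (thrown into motion)"); }
    }

    enum Outcome { REACTOR_STAYS_EXCITED, BOTH_EXCITED, REACTANT_TAKES_OVER }

    class ReactionDemo {
        // Apply one of the three reaction scenarios to a reactor/reactant pair.
        static void react(Atom reactor, Atom reactant, Outcome outcome) {
            switch (outcome) {
                case REACTOR_STAYS_EXCITED:             // scenario 1
                    reactor.fireActiveResponse();
                    break;
                case BOTH_EXCITED:                      // scenario 2
                    reactor.fireActiveResponse();
                    reactant.active = true;             // reactant is triggered
                    reactant.firePassiveResponse();
                    break;
                case REACTANT_TAKES_OVER:               // scenario 3
                    reactor.active = false;             // reactor stabilizes, becomes a reactant
                    reactant.active = true;
                    reactant.firePassiveResponse();
                    break;
            }
        }

        public static void main(String[] args) {
            Atom a = new Atom("A", true), b = new Atom("B", false);
            react(a, b, Outcome.REACTANT_TAKES_OVER);
            System.out.println("A active: " + a.active + ", B active: " + b.active);
        }
    }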
 
Figure 5 Atomic Reaction Simulation Model

3.2 Critical Structures

In order to deal with critical structures, we introduce two more terms: the follow-up response and the self-combust response. Figure 5 shows how these two entities fit into our atomic reaction model. The diagram illustrates the state changes of an individual atom and does not include the time dimension.

To assume that a reactor maintains its response path, or flight orbit, after a reaction until it meets another reactant is over-simplistic. Many physical forces can affect the response path, and these forces are too complex to model. For our purposes, we theorize very simply that a reactor can fire a sequence of responses to simulate the alteration of its response path. These responses are called follow-up responses. Figure 5 shows that the reactant's counterpart to the follow-up response appears to be the self-combust response, but the underlying concept is different. When a reactant gains sufficient momentum from the forces around it, it throws itself into orbit by firing a self-combust response; this response is triggered without the need to react with a reactor. The state of the atoms in the system just before self-combustion occurs is known as the critical structure of the interaction arena. The purpose of identifying a critical structure is to allow the animation programmer to specify a restoration procedure to resolve the structure; the restoration procedure is simply a set of self-combust responses.
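The sketch below illustrates, under our own simplifying assumptions, how a critical structure might be detected and resolved by a restoration procedure of self-combust responses. The predicate used here (every atom in the arena has stabilized) is a hypothetical example only; in ARMVLS the critical structure and its restoration procedure are specified visually by the animation programmer.

    import java.util.List;

    // Illustrative sketch of critical-structure handling: when the interaction
    // arena matches a programmer-specified critical structure, a restoration
    // procedure fires self-combust responses to put chosen atoms back into orbit.
    class CriticalStructureDemo {
        static class Atom {
            final String name; boolean active;
            Atom(String name, boolean active) { this.name = name; this.active = active; }
            void selfCombust() { active = true; System.out.println(name + " self-combusts into orbit"); }
        }

        // Hypothetical critical structure: every atom in the arena has stabilized.
        static boolean isCriticalStructure(List<Atom> arena) {
            return arena.stream().noneMatch(a -> a.active);
        }

        // Restoration procedure: a set of self-combust responses chosen by the programmer.
        static void restore(List<Atom> arena) {
            arena.get(0).selfCombust();                 // e.g. re-excite the first atom
        }

        public static void main(String[] args) {
            List<Atom> arena = List.of(new Atom("A", false), new Atom("B", false));
            if (isCriticalStructure(arena)) restore(arena);
        }
    }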

3.3 The Time Dimension

Although the atomic reaction model presented so far appears quite sophisticated, it is still not complete. ARM also incorporates the time perspective in order to simulate real-life situations; to do this, there must be a history and a future. In the next few paragraphs, we introduce the last active response and the next active response. Before we touch on these two types of responses, we first clarify the trigger condition. We have assumed in previous examples that the trigger condition is simply a weight-comparison function, or weight trigger. In fact it consists of another part: the visual trigger. Therefore,

    trigger_condition = weight_trigger <operator> visual_trigger;                                 (1)

where <operator> can either be && (AND) or || (OR). The visual trigger can be expressed as a function of three visual diagrams:

    visual_trigger(visual_diagram(t - 1), visual_diagram(t), visual_diagram(t + 1));              (2)

where t is the time of interaction (note that (1) and (2) serve only to assist in our explanation; the user of ARMVLS does not enter them). The last and next active responses enable visual_diagram(t - 1) and visual_diagram(t + 1) to be defined respectively. Figure 6 shows their relationship.
Figure 6 Relationship between visual diagrams and the last and next active responses in the time perspective.
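As a rough sketch of how equations (1) and (2) combine, the fragment below evaluates a trigger condition from a weight trigger and a visual trigger over the visual diagrams at t - 1, t and t + 1. The types and names are ours, and, as noted above, the user of ARMVLS never writes such expressions.

    import java.util.function.BiPredicate;

    // Illustrative evaluation of the trigger condition of equations (1) and (2):
    // a weight trigger combined by && or || with a visual trigger over the
    // visual diagrams at t - 1, t and t + 1. All names here are assumptions.
    class TriggerDemo {
        record VisualDiagram(String snapshot) {}        // stand-in for a visual diagram

        interface VisualTrigger {
            boolean test(VisualDiagram prev, VisualDiagram now, VisualDiagram next);
        }

        enum Op { AND, OR }

        static boolean triggerCondition(boolean weightTrigger, Op op, boolean visualTrigger) {
            return op == Op.AND ? (weightTrigger && visualTrigger)
                                : (weightTrigger || visualTrigger);     // equation (1)
        }

        public static void main(String[] args) {
            BiPredicate<Integer, Integer> weightTrigger = (rw, tw) -> rw >= tw;           // weight comparison
            VisualTrigger visualTrigger = (prev, now, next) -> !now.snapshot().isEmpty(); // equation (2)

            boolean w = weightTrigger.test(5, 3);
            boolean v = visualTrigger.test(new VisualDiagram("t-1"), new VisualDiagram("t"),
                                           new VisualDiagram("t+1"));
            System.out.println(triggerCondition(w, Op.AND, v));                           // prints true
        }
    }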

4 Conclusion

In this paper, we have discussed an alternative to the traditional BALSA approach of animating algorithms. Instead of using textual codes and script files, we have produced a live system that allows visual codes to be used. The Atomic Reaction Model (ARM) has disproved the common belief that visual languages (VLs) are limited in their capabilities to solve complex problems. We have also proven that graphical rewrite systems (GRSs) are not necessarily inferior to dataflow VLs in terms of problem-solving ability. In fact, by integrating the textual and visual paradigms, we have explored the uncharted areas of visual programming (VP) and reaped some significant successes. Our contribution to the software visualization (SV) community is our framework for a VL algorithm animator.

Our goal remains extremely ambitious and will take many more years of research to achieve, considering the level of VL technology today. Even though ARMVLS is at an early stage of development, we have demonstrated that it can be used to program many algorithms with a fraction of the effort and time needed by conventional programming and other existing visual programming systems.

References

[1] M.H. Brown, "Exploring Algorithms Using Balsa-II", IEEE Computer, 1988, 21(5), pp. 14-36.

[2] M.H. Brown, "Zeus: A System for Algorithm Animation and Multi-View Editing", Proceedings of the IEEE Workshop on Visual Languages, 1991, pp. 10-17.

[3] M.H. Brown and R. Sedgewick, "A System for Algorithm Animation", Computer Graphics, 1984, 18(3), pp. 177-186.

[4] A.W. Lawrence, A.M. Badre and J.T. Stasko, "Empirically Evaluating the Use of Animation to Teach Algorithms", Proceedings of the IEEE Symposium on Visual Languages, IEEE Computer Society Press, Los Alamitos, CA, 1994, pp. 48-54.

[5] A. Michail, "Teaching Binary Tree Algorithms through Visual Programming", Proceedings of the IEEE Symposium on Visual Languages, 1996.

[6] R.S. Pressman, Software Engineering: A Practitioner's Approach, 3rd ed., McGraw-Hill, Singapore, 1992.

[7] B.A. Price, R.M. Baecker and I.S. Small, "A Principled Taxonomy of Software Visualisation", Journal of Visual Languages and Computing, Vol. 4, No. 3, September 1993.