Some Numerical Aspects of Functional Iteration and Chaos in Mathematica

Download this example as a Mathematica notebook.

Robert Knapp and Mark Sofroniou

Mathematica provides a substantial environment for technical computing which includes powerful, easy-to-use tools for symbolic computation, numerical computation and graphics. These tools all use the same fundamental data structure, so the results from one command can be shared with others in a natural way. The glue that unifies the system is a rich and powerful programming language.

The Mathematica kernel, or computational engine, has hundreds of commands which range from simple arithmetic to complex symbolic or numerical algorithms, such as matrix decomposition or numerical differential equation solvers. While we could perhaps fill this entire issue with examples of how these numerical commands can be used, this would not give you a good impression of how the components of Mathematica fit together. Therefore we have chosen instead to show how you can interactively investigate a particular class of scientific problems.

The Mathematica front end, or user interface, allows you to enter equations in a familiar two-dimensional mathematically typeset form. The main difference from an ordinary textbook is that instead of being a static page description, your calculations and programs can be presented in a dynamic document in which computations can be run and developed interactively. Since Mathematica files, or notebooks, consist purely of ASCII characters, they are platform independent and can easily be exchanged with colleagues. In fact, the manuscript for this article was entirely prepared using Mathematica's typesetting system and the resulting notebook document was then automatically translated by Mathematica into HTML.

Introduction

A difference equation or map is an equation of the form [Graphics:Iterationsgr1.gif], which together with some specified values, or initial conditions, defines a sequence [Graphics:Iterationsgr2.gif]. Despite the seemingly simple form, difference equations have a wide variety of applications and can display a wide and fascinating range of dynamics. Since maps describe iterative processes, it is not surprising that they come up frequently in computer science. Also, many of the approximations found in numerical analysis, such as numerical solutions of differential equations, typically approximate continuous dynamical systems using discrete systems of difference equations. The advent of the digital computer has made it possible to explore these sequences in new and exciting ways. Modelling a map using a computer is equivalent to studying the process of functional composition or functional iteration.

The most interesting set of discoveries was made for a restricted class of maps of the form [Graphics:Iterationsgr3.gif] with [Graphics:Iterationsgr4.gif] when [Graphics:Iterationsgr5.gif]. May [May, 1976] used the logistic map, where f(x) is given by

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr6.gif]

Notice how LogisticFunction can be defined using currying, e.g. f[x,y] can be written as f[x][y]. Some functional languages support only functions of one argument, and this is the only way that multivariate functions can be implemented [Maeder, 1996]. Mathematica supports both curried and multivariate functional descriptions.
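Since the original definition appears above only as an image, here is a sketch of how such a curried definition might be written (an assumption, not necessarily the authors' exact input):

LogisticFunction[r_][x_] := r x (1 - x)

With this definition, LogisticFunction[r] behaves as a function of one argument, so that LogisticFunction[2.5][0.1], for example, evaluates to 0.225.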

The logistic map models populations with discrete generations, such as grasshoppers, and is normalized to carrying capacity. The parameter r is determined by the base birth rate. When the population becomes large, the net birth rate decreases in a nonlinear way because of lack of resources. When [Graphics:Iterationsgr8.gif], the logistic function satisfies the conditions above. May discovered that as the parameter was increased from 0 to 4, there were different regimes of behavior, divided by so-called bifurcations. For r sufficiently small, the population would die out. Next, there was a range of values of r where the population would approach a stable equilibrium number. Above that, he found population cycles, where the population would vary in alternate generations; as the parameter was increased further, the length of the cycles doubled, from 2 generations, to 4, to 8, and so on. Beyond this, the population showed seemingly random behavior: it would vary from generation to generation in an unpredictable, chaotic way. In fact, this provides a classic example of a transition from regular behavior to deterministic chaos, which is characterized by unpredictable behavior with extreme sensitivity to initial values.

In this article, we will set up programs in Mathematica and also use links to external programs to demonstrate some of these dynamical properties. Using the interactive notebook version, you can investigate them further on your own. We hope that these examples show you how Mathematica provides a natural environment for setting up and investigating computer simulations. Mathematica's programming language is quite easy to learn for C/C++ programmers, since the syntax is similar in many ways. One key difference is that the language is unified by a single data structure, an expression, which forms a tree. As you will see, this single data structure gives you the flexibility of invoking the same program with various argument types, depending on whether you wish to find results symbolically, or numerically with, say, machine arithmetic or arbitrary precision arithmetic.

Simple Iterations

In Mathematica, the command NestList repeatedly composes (or iterates) a function given an initial condition and returns the values in a list. Using the parametric function LogisticFunction defined above with r = 2.5 we can easily form a sequence of iterates and visualize the results as follows.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr9.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr10.gif]
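The input and output above are rendered as images; an equivalent computation, assuming the curried LogisticFunction sketched earlier and an arbitrary initial value of 0.1, might look like this:

iterates = NestList[LogisticFunction[2.5], 0.1, 30];
ListPlot[iterates]

NestList returns the initial value together with the first 30 iterates, and ListPlot displays them against the iteration number.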

You can see that the population approaches an equilibrium value. You can easily find (an approximation to) the equilibrium value by looking at the last iterate:

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr11.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr12.gif]
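With the iterates from the sketch above, this amounts to:

Last[iterates]
(* 0.6 *)

This agrees with the exact equilibrium: the nonzero fixed point satisfies x = r x (1 - x), that is x = 1 - 1/r, which is 0.6 for r = 2.5.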

More interesting is that with r = 3.1,

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr13.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr14.gif]

there is a period two cycle.
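A sketch of such a computation, again assuming the curried LogisticFunction and an arbitrary starting value:

Take[NestList[LogisticFunction[3.1], 0.1, 200], -4]
(* approximately {0.765, 0.558, 0.765, 0.558}, possibly in the other phase *)

After the transient dies away, the iterates simply alternate between the two values of the cycle.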

It turns out that the cycle is stable, which means that no matter what initial value you start with, the population eventually settles down and cycles between the two values shown in the graph above. What was of greater interest to the scientific community at large, however, was that as the parameter r was increased the period of the cycle doubled. Eventually, for large enough r, the period has in effect doubled infinitely many times. Just above this value, there are cycles of every period, but none of them are stable. In fact, the population varies from year to year in an unpredictable, seemingly random, way known as deterministic chaos. For example, with r = 3.6,

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr15.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr16.gif]

The population never settles down to a consistent cycle and is said to be chaotic. One of the key components of chaos is sensitivity to initial conditions. For example, if you start with a value very near the example shown above, the distance between iterates eventually becomes large.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr17.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr18.gif]
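The inputs above are images; a sketch of this kind of comparison, assuming a perturbation of 10^-8 in the initial value, is:

x1 = NestList[LogisticFunction[3.6], 0.1, 60];
x2 = NestList[LogisticFunction[3.6], 0.1 + 10.^-8, 60];
ListPlot[Abs[x1 - x2]]

The difference grows roughly exponentially until it is of the same order as the iterates themselves.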

The exponential rate of divergence is reflected by the Lyapunov exponent of the system. We will investigate the sensitivity to initial conditions in more detail below and show how the Lyapunov exponent of the system can be computed.

The Bifurcation Diagram

As the parameter r varies, a point at which the period of the iterates doubles is known as a bifurcation point. In what follows, we will develop programs which will allow you to investigate the sequence of period-doubling bifurcations and the transition to chaos. A bifurcation diagram allows you to visualize what values the population (or the iterates of a map) takes on as a function of the parameter. The idea works as follows. Start with some initial value and iterate the map many times. Then iterate the map several more times, and remove duplicate values amongst those iterates. If the parameter is in a range where there is a stable periodic cycle, then the number of values you are left with will be the period of the cycle. On the other hand, if you are in a range where there is chaos, then all of the values will be different and you will just see a range of points. To successfully make a bifurcation diagram, you need to be able to iterate the map many times for many different values of the parameter.

Mathematica's programming language often allows you to set up a general framework for solving a class of problems with very little programming effort. Instead of writing a function which works for a particular map, such as the logistic map, we can design our program to accept an arbitrary specification for the map and its variables. The reason this is possible is because Mathematica allows us to use higher order functions to create values of functional type at run time. See Chapter 3 of [Maeder, 1996] for more on this subject.

A common procedural way of building up the points to plot in the bifurcation diagram is to iterate the map for each value of r separately. However, because Mathematica's arithmetic is distributive over lists, we can do the iterations for a whole set of values of r at once. A program which generates the points for a bifurcation diagram is shown here. Notice how the combination of high-level functions makes the program quite short. For example, Union is used to remove duplicates from a list of values, taking into account the fact that rounding errors require floating point numbers to be compared using a tolerance, allowing for discrepancies in some of the least significant bits. Having such issues handled for you frees you to focus on conceptual details rather than programming details. The use of functional programming constructs makes the program general, extensible and virtually loop-free.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr19.gif]
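The program itself appears above only as an image. The following is a sketch of one way it could be written; the call signature matches the one used later with MathLinkBifurcationDiagram, but the details (in particular the duplicate-removal tolerance) are assumptions:

MakeMapFunction[map_, {r_, x_}] := Function @@ {{r, x}, map}

BifurcationDiagram[map_, {r_, rmin_, rmax_, dr_}, {x_, x0_}, start_, combine_] :=
  Module[{f = MakeMapFunction[map, {r, x}], rvals, xvals, data},
    rvals = Range[rmin, rmax, dr];
    (* iterate all values of r at once; the arithmetic threads over the lists *)
    xvals = Nest[f[rvals, #] &, Table[N[x0], {Length[rvals]}], start];
    (* collect some further iterates for each value of r *)
    data = Transpose[NestList[f[rvals, #] &, xvals, combine]];
    Flatten[
      MapThread[
        Function[{rv, pts},
          {rv, #} & /@ Union[pts, SameTest -> (Abs[#1 - #2] < 10.^-6 &)]],
        {rvals, data}],
      1]]

The result is a list of {r, x} pairs that can be passed directly to ListPlot.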

For reasons that will become clearer later on, we construct the function using a separate routine called MakeMapFunction. Trying BifurcationDiagram for a small number of values of r allows you to see the form of the output

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr20.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr21.gif]

and also that between r = 3.4 and r = 3.5, there has been a bifurcation from period 2 to period 4. We can easily specify a different map, such as the sine map. Below we precompute a numerical approximation to the constant π using N to avoid doing the coercion at each iteration.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr22.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr23.gif]

We observe that this map also undergoes at least one bifurcation.
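A sketch of such a call, assuming the sine map x -> r Sin[π x] and an arbitrary handful of parameter values and starting point:

npi = N[Pi];
BifurcationDiagram[r Sin[npi x], {r, 0.6, 1., 0.1}, {x, 0.5}, 1000, 8]

Because npi is already a machine number, the iteration stays entirely in machine arithmetic.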

Even with only a few values of r, you can see from the logistic map output that there is a bifurcation between r = 3.4 and r = 3.5. We are now ready to generate the data which can be used to make a bifurcation diagram. This program will yield a decent diagram, but you might have to be patient since it may take some time. Since Mathematica is a typeless, or polymorphic, interpreted language, it has to determine types at run time. Hence it will not be as fast as compiled code. For simple calculations like this, the overhead is noticeable. On the other hand, for larger problems, such as those that might arise in matrix algebra, most of the work is done in intrinsic functions and not by the interpreter, so the overhead is minor. It turns out that there are two relatively simple ways to speed up the calculation. First, you can use the Mathematica compiler, and second, you can set up an external program which will do the iterations and send the result back to Mathematica for subsequent plotting or analysis.

Most of the work in our program is spent in the evaluation of the function defining the map. By compiling the function definition we can speed up evaluation considerably. Mathematica's numerical compiler Compile takes Mathematica commands and compiles them into pseudo-code instructions which are interpreted using machine numbers. The result is a CompiledFunction object which contains the sequence of virtual machine instructions describing how the compiled function should execute. Unlike binary machine code, the virtual machine description is portable and thus allows compiled programs to be transferred between different versions of Mathematica running on different platforms. The compiler supports various types of numbers as well as tensors. The rank of a tensor is specified together with the type at compile time, where a rank one tensor is a vector or list. By redefining our map constructor function we can instead create a compiled map operating on a range of parameters and values as follows.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr24.gif]

Note that the Compile command takes type information. In this case, r and x are specified as rank 1 tensors (i.e. vectors) of machine floating-point real numbers.
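A sketch of such a compiled constructor, keeping the same MakeMapFunction interface as before (the definition used in the article is shown only as an image):

MakeMapFunction[map_, {r_, x_}] :=
  Compile @@ {{{r, _Real, 1}, {x, _Real, 1}}, map}

For example, MakeMapFunction[r x (1 - x), {r, x}] returns a CompiledFunction which, applied to two equal-length vectors of reals, applies the logistic map element-wise to all of the parameter values at once.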

In a production quality code you could add an option to BifurcationDiagram to allow a user to select between the compiled and uncompiled versions. Many of Mathematica's numerical functions have the option Compiled to do just this.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr25.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr26.gif]

The compiled version gives the same results as the uncompiled version, but takes only a fraction (roughly 1/6th) of the time to produce the data for a bifurcation diagram, enabling you to visualize more efficiently how the period-doubling bifurcations cascade into chaos.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr27.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr28.gif]

Using different settings, you can zoom in on different portions of the plot to see its fine-scale structure. If you do a lot of zooming, however, you will notice that even though Mathematica's compiled code is reasonably fast, it cannot generally match the speed of code written in a compiled language such as C, particularly for simple iteration examples like this. There are a few reasons for this. Firstly, even though the pseudo-code produced by the Mathematica compiler is relatively efficient, it is not binary code at the machine instruction level. Secondly, although it is generally hidden from the user, Mathematica is very thorough in its machine exception and overflow checking. For example, in many languages the following computation would simply return an IEEE floating-point infinity rather than an actual value.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr29.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr30.gif]
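The input above is an image; a representative example of this kind (an assumption, not necessarily the one used here) is the exponential of a large number. In C, exp(1000.0) overflows to an IEEE Infinity, whereas

Exp[1000.]
(* about 1.97007*10^434 *)

returns a perfectly good number.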

Mathematica seamlessly reverts to arbitrary-precision software arithmetic, which has a much larger exponent range. If a run-time error such as an IEEE exception occurs during the execution of the virtual machine instructions in a CompiledFunction object, the uncompiled Mathematica function is used to continue the evaluation.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr31.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr32.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr33.gif]
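The same behaviour can be seen directly with a small compiled function; this sketch is an assumption about the form of the example above:

cexp = Compile[{{x, _Real}}, Exp[x]];
cexp[1000.]

The compiled (machine-arithmetic) evaluation overflows, Mathematica issues a message, and the uncompiled Exp takes over, returning the same arbitrary-precision result as before.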

This behaviour is important because it allows a user's program to continue to execute even if a compiled step is buried deep inside a complicated program.

Linking to C

For many programs it is possible to use C or other compiled code for the time-critical part of the calculation, because Mathematica comes with an easy-to-use protocol called MathLink for communicating with external programs. The protocol is quite general, and MathLink programs have been written which allow you to access Mathematica from within, for example, Microsoft Word or Excel. MathLink allows you to transmit data based upon the uniform tree data structure used throughout Mathematica. You can establish communication between processes on one computer or on several. In fact, Mathematica uses MathLink to communicate between its own kernel and front end.

The example shown below demonstrates how to link Mathematica, using MathLink, to a C program which carries out the iterations. Since the time-critical portion of the calculation is the actual iteration, this is all that we need to put into the C program. Also, since C arithmetic does not automatically generalize to operating on an entire list at a time, the C program will do the computation for a single value of r at a time. To generate the program for a general functional map, the Mathematica command Splice is used to set up a simple code generation scheme. Below, a Mathematica function, MathLinkSpliceAndCompile, is shown which will generate the C code for a given map, compile it, set up a communication link to the compiled executable, and then call it to do the desired iterations.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr34.gif]

The function MathLinkBifurcationDiagram will use Install to set up communication between Mathematica and the C program using MathLink, then call a Mathematica function which in turn calls the C function map_iterate.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr35.gif]

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr36.gif]

The file map.mc looks like this

#include "mathlink.h"
#include "mdefs.h"
#include <stdlib.h>
#include <math.h>

int main(argc, argv)
int argc; char* argv[];
{
return MLMain(argc, argv);
}

void map_iterate(double r, double x, int start, int combine)
{
    int i;
    double *result;

    for (i = 0; i < start; i++)
        x = <* MapFunction[r][x] *>;

    result = (double *) malloc(combine*sizeof(double));
    for (i = 0; i < combine; i++) {
        x = <* MapFunction[r][x] *>;
        result[i] = x;
    }

    MLPutRealList(stdlink, result, combine);

    free(result);
}

double lyapunov_exponent(double r, double x, int iterates)
{
    int i;
    double lexp = 0.;

    for (i = 0; i < iterates; i++) {
        x = <* MapFunction[r][x] *>;
        lexp += log(fabs(<* Simplify[D[MapFunction[r][x],x]] *>));
        if (!finite(lexp)) return 0.;
    }

    return lexp/iterates;
}

When Splice["map.mc"] is executed, the items inside the delimiters <* and *> are replaced with their Mathematica evaluations in a new file, map.c. In Mathematica, a Block is a programming construct in which variables keep their names but are assigned local values. Since Splice is called from within the Block, MapFunction has a defined value there, and MapFunction[r][x] evaluates to the user's map in terms of the variables r and x. As long as only simple functions with standard C library implementations are called, this will work. If, for example, you need access to special functions, it is not difficult to design the code to call back to Mathematica for those values. You can also use more sophisticated code generation capabilities by performing functional transformations on your expressions in Mathematica, such as converting a polynomial to Horner (nested) form, or applying common subexpression optimization to save some numerical work. See [Sofroniou, 1993] and [Sofroniou, 1997] for more information.
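Inside MathLinkSpliceAndCompile, the template is processed along these lines (a sketch; the actual definitions are shown above only as images):

Block[{MapFunction = Function[rr, Function[xx, rr xx (1 - xx)]]},
  Splice["map.mc"]]

With MapFunction locally bound to the user's map in curried form, <* MapFunction[r][x] *> splices in the map expression and <* Simplify[D[MapFunction[r][x], x]] *> splices in its derivative, as in the generated file below.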

The C file generated by MathLinkBifurcationDiagram[r x (1 - x), {r, 3, 4, 0.0005}, {x, 0.1}, 10000, 100] is:

#include "mathlink.h"
#include "mdefs.h"
#include <stdlib.h>
#include <math.h>

int main(argc, argv)
int argc; char* argv[];
{
return MLMain(argc, argv);
}

void map_iterate(double r, double x, int start, int combine)
{
    int i;
    double *result;

    for (i = 0; i < start; i++)
        x = r*(1 - x)*x;

    result = (double *) malloc(combine*sizeof(double));
    for (i = 0; i < combine; i++) {
        x = r*(1 - x)*x;
        result[i] = x;
    }

    MLPutRealList(stdlink, result, combine);

    free(result);
}

double lyapunov_exponent(double r, double x, int iterates)
{
    int i;
    double lexp = 0.;

    for (i = 0; i < iterates; i++) {
        x = r*(1 - x)*x;
        lexp += log(fabs(r - 2*r*x));
        if (!finite(lexp)) return 0.;
    }

    return lexp/iterates;
}

The next stage is to specify how the C functions will be accessed from Mathematica. There is a template mechanism provided with MathLink which makes this really simple to do. The file map.tm specifies the mapping from Mathematica to the C functions:

:Begin:
:Function: map_iterate
:Pattern: MLMapIterate[r_, x_, start_Integer, combine_Integer]
:Arguments: { r, x, start, combine }
:ArgumentTypes: { Real, Real, Integer, Integer }
:ReturnType: Manual
:End:

:Begin:
:Function: lyapunov_exponent
:Pattern: MLLyapunovExponent[r_, x_, iterates_Integer]
:Arguments: { r, x, iterates }
:ArgumentTypes: { Real, Real, Integer }
:ReturnType: Real
:End:

which defines the Mathematica functions corresponding to map_iterate and lyapunov_exponent to be MLMapIterate and MLLyapunovExponent, respectively. Notice that, unlike in Mathematica, which uses reference counting to reclaim memory, in C we have to attend to programming details like memory allocation and deallocation. The MathLink function (including the time for compilation) runs faster than the version produced by the Mathematica compiler (about 1/6th of the time) and gives an essentially identical plot. In fact, it is fast enough to produce good plots in seconds, so it becomes feasible to consider interactively zooming in on particular ranges of r to investigate the fine detail of the bifurcations or other features of the plot. The significance of the Lyapunov exponent is discussed next.

The Lyapunov Exponent

The Lyapunov exponent measures the exponential rate of divergence (or convergence) of nearby initial conditions. It is not difficult to show that it can be calculated as the long-run average of log |f'(x_n)| along an orbit, that is,

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr37.gif]

for a given sequence of iterates, [Graphics:Iterationsgr38.gif]. Numerically, λ can be approximated reasonably well using a finite number of iterations. The Mathematica function MathLinkLyapunovExponent will use the lyapunov_exponent function in the generated C program to compute numerical values.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr39.gif]
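For comparison, here is a pure-Mathematica sketch of the same finite-sum approximation for the logistic map (this is not the definition shown above, which calls the C code through MathLink):

LyapunovExponent[r_, x0_, n_] :=
  Module[{x = x0, sum = 0.},
    Do[x = r x (1 - x); sum += Log[Abs[r - 2 r x]], {n}];
    sum/n]

LyapunovExponent[4., 0.1, 10000]
(* close to Log[2], about 0.693 *)

As in the C code, the logarithm of the absolute value of the derivative is accumulated along the orbit and then averaged.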

Now we can easily make a plot of the Lyapunov exponent as a function of the parameter.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr40.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr41.gif]

Where the Lyapunov exponent is negative, there is convergence to a fixed point or cycle. However, when it is positive, nearby initial conditions diverge. You can see this by comparing with the bifurcation diagram. Notice that even beyond the transition to chaos, there are values of r for which the Lyapunov exponent is negative. These correspond to "windows" where cycles of various periods appear and undergo their own period-doubling bifurcations. An example is where a period 3 cycle appears and bifurcates near r = 3.85. Since the Lyapunov exponent as a function of r has structure at every scale, the resolution of the plot is insufficient to capture the entire structure of the function. However, the Mathematica function makes it trivial to zoom in on any portion of the range. For example,

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr42.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr43.gif]

shows a part of the bifurcation diagram and the Lyapunov exponent for the values of r right near the onset of chaos at approximately 3.5699.

Loss of Precision

One consequence of a positive Lyapunov exponent is numerical loss of precision. When there is exponential divergence of nearby initial conditions, even small roundoff errors will eventually be magnified. Let's take a closer look at what is happening at the value r = 4. At this value, our approximations show that the Lyapunov exponent is

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr44.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr45.gif]

which is very near to Log[2]. Actually, for r = 4, there is a transformation which makes it easier to understand the divergence.
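The substitution behind this transformation is, in effect, x = Sin[π t]^2, under which the map at r = 4 sends t to 2t (mod 1). A quick symbolic check of the underlying identity (a sketch; the actual steps appear in the output below):

Simplify[4 Sin[Pi t]^2 (1 - Sin[Pi t]^2) - Sin[2 Pi t]^2]
(* 0 *)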

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr46.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr47.gif]

and with one more step,

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr48.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr49.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr50.gif]

the iterations are very simple. The message from Solve indicates that since the Sin function is not one-to-one, only the principal value has been found ([Graphics:Iterationsgr51.gif]=2([Graphics:Iterationsgr52.gif]+π) is also a solution, for example). If the [Graphics:Iterationsgr53.gif] are thought of as binary numbers, the transformed map is just a shift operator. Since the distance between two nearby points doubles at each iteration, this shows that the Lyapunov exponent should be exactly Log[2]. Another way to look at this is that small errors are doubled at each iteration: in other words, one bit of precision is lost per iteration. This means that if you do the calculations with IEEE double-precision numbers, which have 53 bits of mantissa precision, you can expect to have lost all precision after 53 iterations. To verify this, let's start with the initial value [Graphics:Iterationsgr54.gif] approximated by a machine floating point number.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr55.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr56.gif]

On the other hand, the exact value of the 53rd iterate should be

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr57.gif]

Evaluation using machine numbers is very inaccurate, since the C math library function sin() (which Mathematica uses for machine numbers) needs to do argument reduction before it approximates the sine. Argument reduction on a number of this size will result in severe loss of precision (in this case in a single step!). Mathematica's adaptive precision evaluation gets a result correct to the requested precision.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr58.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr59.gif]
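To make this concrete, here is a sketch using a hypothetical initial value of 1/10 (the value actually used above appears only as an image). The 53rd iterate at r = 4 has the closed form Sin[2^53 ArcSin[Sqrt[x0]]]^2:

x0 = 1/10;                               (* hypothetical initial value *)
N[Sin[2^53 ArcSin[Sqrt[x0]]]^2, 20]      (* adaptive precision: correct to 20 digits *)
Sin[2.^53 ArcSin[Sqrt[0.1]]]^2           (* machine evaluation: argument reduction ruins the result *)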

N works by adaptively raising the precision to which subexpressions are approximated until a result is found to the requested precision. As you can see below none of the digits of the 53rd iterate computed with machine numbers are correct.

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr60.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr61.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr62.gif]

The graph shows successive loss of bits. One solution is to start with higher precision. For example,

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr63.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr64.gif]

obtains a result with 18 digits assured to be correct. You may have noticed that the loss of precision exceeded 53 bits (32 digits ~ 106 bits). The reason for this is that significance arithmetic works by doing a worst case analysis assuming all errors are independent. In this case, the combination of operations is not as bad as it could possibly be. In cases where you know how much precision is lost per iteration, you can override the automatic precision control. For example,

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr65.gif]
[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr66.gif]

only loses 53 bits.
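One way to run a fixed-precision computation of this kind (a sketch, not necessarily the exact input used above, and again with a hypothetical initial value of 1/10) is to pin the precision of arbitrary-precision arithmetic:

Block[{$MinPrecision = 40, $MaxPrecision = 40},
  Nest[4 # (1 - #) &, N[1/10, 40], 53]]

With the precision fixed at 40 digits, the true error still doubles at each step, so roughly 16 of the 40 digits (53 bits) have been lost by the end, but no more than that.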

Since all the plots above were calculated using machine numbers, in many cases with thousands of iterations and a positive Lyapunov exponent, you may be wondering if they have any value at all. Though not strictly accurate, the graphs do show the overall (or ergodic) properties of the map, as they are intended to. If you needed a particular value of some iterate, then you would need to use higher precision. However, this could take substantial time if it were thousands of iterates out, since you would need to start with thousands of bits of precision. Nonetheless, the framework of arbitrary-precision arithmetic is there in Mathematica when you need it.

Further Investigations

There is still more information that can be obtained from this relatively simple example. Feigenbaum [Feigenbaum, 1978] showed that

[Graphics:Iterationsgr7.gif][Graphics:Iterationsgr67.gif]

where [Graphics:Iterationsgr68.gif] are the values of r at which successive period-doubling bifurcations occur, has the limit [Graphics:Iterationsgr69.gif] (approximately 4.669), the Feigenbaum number. More importantly, he showed that this number does not depend on the details of the logistic map; the function need only have a quadratic maximum (nonzero second derivative), so δ is universal for all such maps. He was further able to argue that whenever a period-doubling cascade occurs in a dynamical system, δ should be observed, independently of the phase-space dimension of the system. Therefore the number applies to a much broader set of systems than the simple maps explored here.

With the programs that we have set up, you can verify the universality of the Feigenbaum number, by trying out some different maps which have a period-doubling sequence of bifurcations, such as the sine map. You can do this by zooming in on the parts of the bifurcation diagram where the bifurcations occur or by using the values of the Lyapunov exponent. As you zoom in, you should also be able to see the self-similar fractal nature of the bifurcation diagram. If you get the electronic Mathematica version of this article, you have access to all the commands to carry out explorations interactively.

Maeder gives a Mathematica program to compute the Feigenbaum constant. He also points out that computing a long sequence of iterates for a periodic point in a bifurcation diagram can be somewhat wasteful and shows how to search for such points. See Chapter 7 of [Maeder, 1996] for more details. Some more details relating to Mathematica's programming language can also be found in [Maeder, 1992].

Summary

Even though Mathematica has symbolic algorithms for solving difference equations, no general solution method exists when the equations are nonlinear. Therefore we often have no option but to resort to numerical methods and experimentation to investigate the dynamics. The various arithmetic models available in Mathematica enable a wide and varied study of phenomena that would otherwise fall outside the scope of conventional programming languages. For example, there has been a recent resurgence of interest in the use of interval arithmetic methods to obtain validated computer-based proofs of certain physical properties.

We have shown you some relatively simple programs that are nonetheless able to demonstrate complex behavior and properties of a map. Though we made use of only a few of the sophisticated symbolic and numerical algorithms built into Mathematica, the ideas and programs are extensible and can be used to study more complicated problems in essentially the same style. The integrated environment of symbolic and numerical mathematics, along with graphics and mathematical typesetting, makes it possible to interactively develop your programs, explore the results and present them to others in an intuitive way.

References

[Feigenbaum, 1978] M. Feigenbaum, Quantitative universality for a class of nonlinear transformations, J. Stat. Phys., 19, pp. 25-52, 1978.

[Maeder, 1992] R. E. Maeder, Mathematica as a programming language, Dr Dobbs Journal, Feb. 1992.

[Maeder, 1996] R. E. Maeder, The Mathematica Programmer II, Academic Press, San Diego, 1996.

[May, 1976] R. M. May, Simple mathematical models with very complicated dynamics, Nature, 261, pp. 459-467, 1976.

[Sofroniou, 1993] M. Sofroniou, Extending the built-in format rules, The Mathematica Journal, 3(3), pp. 74-80, 1993.

[Sofroniou, 1997] M. Sofroniou, Expression optimization in Mathematica, in Elements of Mathematica Programming, Troels Petersen editor, TELOS/Springer Verlag, Santa Clara (to appear).