
How Do I Make My Mathematica Programs Run Faster?





Use the Newest Version

New versions of Mathematica often include major speed gains and new built-in functions that can speed up your calculations significantly.

For example, compare Version 4 with Version 3 for calculating autocorrelations.

[Graphics:Images/index_gr_1.gif]

In Version 3, this calculation takes 150 seconds.

[Graphics:Images/index_gr_2.gif]
[Graphics:Images/index_gr_3.gif]

Version 4 is about five times faster right out of the box.

[Graphics:Images/index_gr_4.gif]
[Graphics:Images/index_gr_5.gif]

But Version 4 also has a new built-in function, ListCorrelate, for calculating list correlations. Using it, the calculation is over 340 times faster than before.

[Graphics:Images/index_gr_6.gif]
[Graphics:Images/index_gr_7.gif]
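
The inputs and outputs above survive only as images, so the exact data and definitions are not recoverable. As a rough sketch of the kind of benchmark being described, assuming a hypothetical random data set and autocorrelations at lags 0 through 100, one might write:

    (* hypothetical data set; the original data is not recoverable from the images *)
    data = Table[Random[], {20000}];

    (* autocorrelations at lags 0 through 100, written as an explicit sum *)
    Timing[acf1 = Table[Sum[data[[i]] data[[i + k]], {i, Length[data] - k}], {k, 0, 100}];]

    (* the same quantities with the built-in ListCorrelate, new in Version 4 *)
    Timing[acf2 = Take[ListCorrelate[data, data, {1, 1}, 0], 101];]

    (* the two results agree up to roundoff *)
    Max[Abs[acf1 - acf2]]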

Some other speed improvements in Mathematica 4 include:
[Additional Version 4 timing comparisons, shown as images in the original.]




Use Built-in Functions

Built-in Mathematica functions are usually far faster and more efficient than user-written code with the same functionality. Part of the reason is that many of them use proprietary algorithms, are highly optimized, and are implemented in C. Checking whether a function you want to implement is already built into Mathematica is time well spent. Also keep in mind that Mathematica contains not only thousands of mathematical functions but also a large number of utility functions such as Sort.

For example, here is a pretty nifty way to calculate an autocorrelation.

[Graphics:Images/index_gr_24.gif]

The above is about the most terse top-level code possible for this calculation. At least it's a lot better than the following:

[Graphics:Images/index_gr_25.gif]
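
Both definitions exist only as images, so here are hedged guesses at what a terse top-level version and a clumsier index-by-index version of the lag-k autocorrelation might look like (the names autocorrelation and autocorrelationSlow are hypothetical):

    (* terse: correlate the list with a shifted copy of itself *)
    autocorrelation[x_List, k_Integer] := Drop[x, k] . Take[x, Length[x] - k]

    (* verbose: build the same sum one index at a time *)
    autocorrelationSlow[x_List, k_Integer] := Sum[x[[i]] x[[i + k]], {i, Length[x] - k}]

Whatever the original definitions actually were, the terse version is the one timed below.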

And it looks reasonably fast for small sets of numbers.

[Graphics:Images/index_gr_26.gif]
[Graphics:Images/index_gr_27.gif]
[Graphics:Images/index_gr_28.gif]

But let's try this on a rather large set of data.

[Graphics:Images/index_gr_29.gif]
[Graphics:Images/index_gr_30.gif]
[Graphics:Images/index_gr_31.gif]

The built-in function ListCorrelate cuts the time needed for the large data set by a factor of over 2350.

[Graphics:Images/index_gr_32.gif]
[Graphics:Images/index_gr_33.gif]




Use Machine-Precision Numbers

One of the most important things to notice when doing numerical calculations is that Mathematica uses fundamentally different evaluation mechanisms for machine-precision floating-point numbers and for exact numerical quantities such as integers and fractions. Floating-point calculations are usually much faster than exact arithmetic, so use machine-precision numbers wherever appropriate.

Take a look at the following example.

[Graphics:Images/index_gr_34.gif]
[Graphics:Images/index_gr_35.gif]
[Graphics:Images/faster2_gr_1.gif]
[Graphics:Images/index_gr_37.gif]
[Graphics:Images/index_gr_38.gif]

Mathematica takes about 2.36 seconds to calculate the eigenvalues of this matrix, which is actually pretty fast. But when you simply change the entries to floating-point numbers,

[Graphics:Images/index_gr_39.gif]

the calculation of the eigenvalues is all but instantaneous.

[Graphics:Images/index_gr_40.gif]
[Graphics:Images/index_gr_41.gif]
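
The matrix itself is shown only as an image. Here is the same kind of comparison with a small stand-in matrix of exact rational entries; the matrix is a hypothetical choice, not the original:

    (* stand-in matrix with exact rational entries *)
    m = Table[1/(i + j - 1), {i, 20}, {j, 20}];

    (* eigenvalues computed with exact arithmetic; typically returns Root objects and is slow *)
    Timing[Eigenvalues[m];]

    (* the same matrix with machine-precision entries; N[m] converts every entry *)
    Timing[Eigenvalues[N[m]];]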




Avoid Procedural Programming

For users coming from languages such as C, Fortran, or MATLAB, procedural constructs are tempting. With very few exceptions, however, procedural code in Mathematica performs far worse than equivalent functional code.

[Graphics:Images/index_gr_42.gif]
[Graphics:Images/index_gr_43.gif]
[Graphics:Images/index_gr_44.gif]
[Graphics:Images/index_gr_45.gif]
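
The code being compared survives only as images; as a hypothetical illustration of the contrast, compare a Do loop that grows a list with AppendTo against a single Map over the whole list:

    data = Table[Random[], {10^4}];

    (* procedural: loop over indices and grow the result with AppendTo *)
    Timing[
      result1 = {};
      Do[AppendTo[result1, Sqrt[data[[i]]] + 1], {i, Length[data]}];
    ]

    (* functional: apply the same operation to the whole list at once *)
    Timing[result2 = Map[Sqrt[#] + 1 &, data];]

    result1 == result2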




Operate on Lists as a Whole, Not on Individual Parts

Mathematica can operate on whole lists at once or, using the new packed-array technology, even on lists in an internal compressed form. This is much faster than stepping through a list and computing one element at a time. Also keep in mind list-manipulation functions such as Partition.

The speed increase is already quite significant even for simple operations such as taking the sine of all elements in a list.

[Graphics:Images/index_gr_46.gif]
[Graphics:Images/index_gr_47.gif]
[Graphics:Images/index_gr_48.gif]
[Graphics:Images/index_gr_49.gif]
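
As a sketch of this kind of comparison (the list itself is hypothetical), take the sine of a million numbers element by element and then of the whole list at once:

    data = Table[Random[], {10^6}];

    (* element by element *)
    Timing[s1 = Table[Sin[data[[i]]], {i, Length[data]}];]

    (* on the whole list at once; Sin is Listable and works on the packed array directly *)
    Timing[s2 = Sin[data];]

    s1 == s2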

For a more realistic example, let's say we have two vectors of x and y values that we want to combine into a list of {x, y} pairs.

[Graphics:Images/index_gr_50.gif]

Constructing a new table would be a pretty straightforward approach.

[Graphics:Images/index_gr_51.gif]
[Graphics:Images/index_gr_52.gif]
[Graphics:Images/index_gr_53.gif]
[Graphics:Images/index_gr_54.gif]

Combining, transposing, and partitioning the lists gives the same result but is a great deal faster.

[Graphics:Images/index_gr_55.gif]
[Graphics:Images/index_gr_56.gif]
[Graphics:Images/index_gr_57.gif]
[Graphics:Images/index_gr_58.gif]
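
A hedged sketch of the two approaches, with hypothetical xdata and ydata standing in for the vectors above (the original may also have used Partition together with Flatten, as the text suggests; Transpose alone is one common whole-list route):

    xdata = Table[Random[], {10^6}];
    ydata = Table[Random[], {10^6}];

    (* element by element: build each {x, y} pair with Table *)
    Timing[pairs1 = Table[{xdata[[i]], ydata[[i]]}, {i, Length[xdata]}];]

    (* whole lists at once: combine the two vectors and transpose *)
    Timing[pairs2 = Transpose[{xdata, ydata}];]

    pairs1 == pairs2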




Use Compile

In many numerical computations, one of the easiest ways to get better performance is to use the built-in compiler. Often this is as simple as wrapping your calculation in Compile[{vars}, expr].

[Graphics:Images/index_gr_59.gif]
[Graphics:Images/index_gr_60.gif]
[Graphics:Images/index_gr_61.gif]
[Graphics:Images/index_gr_62.gif]
[Graphics:Images/index_gr_63.gif]
[Graphics:Images/index_gr_64.gif]
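
The compiled example survives only as images; here is a minimal, hypothetical sketch of the Compile idiom and its timing comparison (the function f is made up for illustration):

    (* an ordinary definition and a compiled counterpart of the same expression *)
    f[x_] := Sin[x] + x^2 - Exp[-x]
    fc = Compile[{{x, _Real}}, Sin[x] + x^2 - Exp[-x]];

    data = Table[Random[], {10^5}];

    (* uncompiled *)
    Timing[r1 = Map[f, data];]

    (* compiled *)
    Timing[r2 = Map[fc, data];]

    (* both give the same values up to roundoff *)
    Max[Abs[r1 - r2]]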

For more information about Compile, see the relevant sections of The Mathematica Book.




Use the Parallel Computing Toolkit

If you have to run large-scale calculations, the Parallel Computing Toolkit might be just the solution you are looking for.

The Parallel Computing Toolkit brings parallel computation to anyone with access to more than one computer on a network or to a multiprocessor machine. It implements many parallel-programming primitives and includes high-level commands for parallel execution of operations such as animation, plotting, and matrix manipulation. Popular programming approaches such as parallel Monte Carlo simulation, visualization, searching, and optimization are also supported. The implementations of all high-level commands in the Parallel Computing Toolkit are provided in Mathematica source form, so they can serve as templates for building additional parallel programs.




Use MathCode C++

MathCode C++ is an application package for Mathematica that generates optimized C++ code for numerical computations.

Some of Its Features and Advantages
  • Compiled code can easily be called from within Mathematica.
  • Optionally, stand-alone code can be generated (i.e., code that can be executed independently of Mathematica).
  • Performance increases by up to five thousand times for some computations.
  • External libraries of C, C++, and Fortran77 code can easily be connected and called from within Mathematica.
  • Extended functionality is available for extracting parts of matrices.

More information about MathCode C++ is available on our product pages.
