\documentstyle[12pt]{article}
\setcounter{secnumdepth}{2}
\hyphenation{Conjugate-Anti-Symmetric
Conjugate-Symmetric Dir-i-chlet Math-e-mat-ica para-digm ZTrans-form}
%%%
%%% Modified combination of "Symbolic Transforms with Applications to Signal
%%% Processing" which appeared in the second issue of The Mathematica Journal.
%%% and "Mathematica As An Educational Tool" which appeared in the Proceedings
%%% of the 1991 IEEE Southeastern Conference.
%%%
\setlength{\textwidth}{6.5in}
\setlength{\textheight}{9.0in}
\setlength{\topmargin}{-.65in}
\setlength{\oddsidemargin}{-.375in}
\setlength{\evensidemargin}{-.375in}
\renewcommand{\baselinestretch}{1.30}
\reversemarginpar
%
\def\sinc{\mathop{\rm sinc}\nolimits}
\def\dt#1{{\it #1}}
\def\math/{\MATH}
\def\half{{\textstyle {1\over 2}}}
%
\newcommand{\MATH}{{\it Mathematica}}
\newcommand{\zT}{$z$-Transform}
\newcommand{\zt}{$z$-transform}
\newcommand{\JNOTE}[1]{ \marginpar{\small\em #1} }
%
\def\oned/{one-dimen\-sional}
\def\twod/{two-dimen\-sional}
\def\threed/{three-dimen\-sional}
\newcommand{\fo}{f_0}
%
\newcommand{\singlespace}{\par \baselineskip 0.741\baselineskip}
\newcommand{\doublespace}{\baselineskip 1.35\baselineskip}
\def\tctext#1#2{\parbox[t]{#1}{\singlespace{#2}}}
%
\begin{document}
%
\begin{titlepage}
\title{User's Guide to the \\
Signal Processing Packages and Notebooks for \\
Implementing Linear System Theory in \\
\MATH\ 1.2 and 2.0}
\author{ {\em Brian L.\ Evans, James H.\ McClellan, Lina J.\ Karam, } \\
{\em Kevin A.\ West, Wallace B.\ McClure, Terry A.\ Crone,} \\
{\em Todd R.\ Bacon, and James A.\ Proctor} \\ \\
Digital Signal Processing Laboratory \\
School of Electrical Engineering \\
Georgia Institute of Technology \\
Atlanta, GA 30332-0250}
\date{EMAIL:~~evans@eedsp.gatech.edu}
\maketitle
\end{titlepage}
%
% Abstract
%
\begin{abstract}
We describe a hierarchical set of packages to perform basic analyses
of signals (functions) and systems (operators).
The packages are based on transform theory, and implement a general
mechanism for encoding knowledge about transforms, so their applicability
extends beyond signal processing to any field where transform analysis is
needed.
We support (bilateral) $z$- and Laplace transforms, as well
as continuous-time, discrete-time, and discrete Fourier transforms,
all in arbitrary dimension.
Each transform is implemented as a rule base that can fully justify its answers.
When a rule base cannot find a particular transform, users have the choice of
either specifying the missing transform pair(s) or letting the rule base
apply the transform's definition using built-in \MATH\ operations.
The packages can perform a variety of symbolic, graphical, and numerical
operations on signals and systems.
In symbolic terms, the packages can solve linear constant-coefficient
difference and differential equations by using the $z$ and Laplace transform
capabilities, respectively.
Another supported symbolic operation is convolution which is implemented in
both the discrete and continuous domains.
Symbolic analyses include simplification of signal processing expressions,
finding optimal implementations of algorithms, and reasoning about properties
of signals, such as stability.
Plotting capabilities for 1-D and 2-D signals include
continuous and discrete plots, magnitude and phase plots, and
root-locus plots.
Pole-zero diagrams for 1-D and 2-D signals are based on their $z$ or Laplace
transforms and include the region of convergence.
The packages and the accompanying Notebooks work
under \MATH\ 1.2, 2.0, and 2.1 alike.
Four of the Notebooks are tutorials on the topics of analog filter design,
discrete Fourier analysis, discrete/continuous convolution, and the
$z$-transform (in three parts).
Another set of Notebooks provides on-line help.
A third set forms the handout we give students learning
\MATH\ and our extensions.
Two supplemental sets of Notebooks also exist.
One set helps students evaluate their own knowledge of signal processing.
The other, written by an undergraduate student, explains difficult
concepts in signals and systems.
%%%
%%%\keywords{DSP, Laplace transform, rule base, signal processing,
%%% symbolic processing, transform, \zt}
%%%
\end{abstract}
\section*{Introduction}
Transform theory is a powerful tool employed by scientists
[Oberhettinger and Badii 1973], mathematicians [Churchill 1958], and engineers
[Lathi 1983; Muth 1978; Oppenheim and Schafer 1989].
One important use of this theory is to transform equations
involving integrals and derivatives into algebraic equations that
are easier to solve.
The solutions to the original equations are obtained by inverse
transforming the algebraic solutions.
This is the case when the Laplace transform is applied to linear
differential equations with constant coefficients, or when the
\zt\ is applied to linear difference
equations with constant coefficients.
This process is one of the staples of {\it signal processing}, the area
of engineering that deals with the design and analysis of
systems that act on signals in prescribed ways.
A {\it signal} is essentially a function of one or more
variables; often, in the one-variable case, the variable is time.
The variable can be continuous (for an \dt{analog signal}), or discrete
(for a \dt{digital}, or \dt{sampled}, signal).
Thus a \oned/ discrete-time signal is a finite or infinite
sequence of values, or \dt{samples}.
A \dt{system} is a mathematical operator that transforms one signal into
another, according to characteristic rules.
\dt{Filters}, or time-invariant linear systems, form a particularly
important class of systems, eminently suited to the application of
transform methods.
A filter is characterized by a function, its \dt{impulse response},
in terms of which the filter's action on any signal can be determined.
The output signal is the \dt{convolution} of the input signal with
the filter's impulse response; the operation of convolution involves an
integral for continuous signals and a summation for discrete ones.
Using transforms, one can reduce the task of convolving these two functions
to that of multiplying two algebraic expressions (the functions'
transforms) and then inverse transforming the product.
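Although the packages carry this out symbolically in \MATH, the underlying identity is easy to check numerically. The short Python sketch below (illustrative only; it uses a hand-written DFT rather than any package code) convolves two sequences directly and via the transform route:

```python
import cmath

def conv(x, h):
    """Direct discrete convolution of two finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

def dft(x, L):
    """L-point DFT of x, zero-padded to length L."""
    xp = list(x) + [0.0] * (L - len(x))
    return [sum(xp[n] * cmath.exp(-2j * cmath.pi * k * n / L)
                for n in range(L)) for k in range(L)]

def idft(X):
    """Inverse DFT, returning the real parts."""
    L = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / L)
                for k in range(L)).real / L for n in range(L)]

x, h = [1.0, 2.0, 3.0], [1.0, -1.0, 0.5]
L = len(x) + len(h) - 1                     # full output length
direct = conv(x, h)
# Multiply the transforms, then inverse transform the product.
via_dft = idft([Xk * Hk for Xk, Hk in zip(dft(x, L), dft(h, L))])
assert all(abs(u - v) < 1e-9 for u, v in zip(direct, via_dft))
```

Zero-padding both sequences to the full output length is what makes the circular convolution of the DFT agree with the ordinary (linear) convolution.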
For an example of analog system design,
consider the transmission of AM radio signals.
Each AM radio station broadcasts at a specific frequency $\fo$
which is between 550$\,$kHz and 1600$\,$kHz and is a multiple of 10$\,$kHz
[Lathi 1983].
In order to avoid overlap with other stations, a station must
transmit its signal between the frequencies $\fo - 5\,$kHz and
$\fo + 5\,$kHz.
Before transmission, a station filters its signal so that
frequencies above 5$\,$kHz are rejected and frequencies below 5$\,$kHz are
passed.
This filtered signal is then shifted by $\fo$ in the frequency
domain and transmitted.
The entire design and analysis of this system can be carried out in
the frequency domain:~ the Laplace transform would be used for the
design and implementation of the 5 kHz low-pass filter, and
the Fourier transform would be used to analyze the transmission system.
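The frequency-shifting step can also be illustrated numerically. The Python sketch below (with toy frequencies standing in for the kilohertz values above) modulates a low-frequency tone onto a carrier and shows that the DFT energy moves from the tone frequency to the carrier's neighborhood:

```python
import math, cmath

def dft_mag(x):
    """Magnitudes of the L-point DFT of a real sequence x."""
    L = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / L)
                    for n in range(L))) for k in range(L)]

N, fs = 64, 64.0                     # 64 samples at 64 "Hz": 1 Hz bins
f_sig, f0 = 2.0, 16.0                # message tone and toy carrier
t = [n / fs for n in range(N)]
message = [math.cos(2 * math.pi * f_sig * tn) for tn in t]
am = [m * math.cos(2 * math.pi * f0 * tn) for m, tn in zip(message, t)]

mag_message = dft_mag(message)
mag_am = dft_mag(am)
msg_peak = max(range(N // 2), key=lambda k: mag_message[k])
am_peak = max(range(N // 2), key=lambda k: mag_am[k])
assert msg_peak == 2                 # energy at the tone frequency...
assert am_peak in (14, 18)           # ...moves to f0 -/+ f_sig
assert mag_am[2] < 1e-6              # nothing left at baseband
```

The two sidebands at $f_0 \pm f_{\rm sig}$ are exactly the $\pm 5\,$kHz band that an AM station must keep around its carrier.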
By contrast, digital signal processing (DSP) deals with
discrete-time signals (sequences) and systems that act on them
[Oppenheim and Schafer 1989].
Primarily, digital signals come from sampling and quantizing
analog signals.
In many cases, implementing a system in digital hardware is faster,
more flexible, and more cost-effective than building it from
analog components.
One reason for this is that increasing precision in discrete systems
is as simple as adding bits, whereas more expensive components may be
required in analog systems.
Another reason is that the time to prototype a system is often
shorter for discrete systems.
Apart from that, some DSP operators, like those
that alter the sampling rate, have little meaning in the
continuous domain, and may be very difficult to implement with analog
components.
The most commonly used transforms in DSP are the \zt,
the discrete-time Fourier transform (DTFT)
and the discrete Fourier transform (DFT).
Just as the Laplace transform is a generalization of the Fourier
transform, the \zt\ is a generalized form of the DTFT.
The \zt\ is most useful in the implementation of
digital filters as digital hardware or programs for DSP chips, while
the DTFT is preferred when the frequency-domain characteristics of a
system must be defined.
Neither digital nor analog signal processing is confined to a
single dimension.
Many spatial array and imaging applications require two- and
\threed/
processing.
This is the case for detecting edges in monochrome digital images,
or finding angles of arrival with an array of sensors.
A digitized image can be thought of as a function defined on
an $N \times M$ grid, where each grid square is assigned an intensity
value known as a grey level (usually a positive integer).
In such a digitized image, human eyes perceive an edge as a sharp
transition between grey levels in adjacent regions.
Such a sharp transition gives rise to high-frequency components in
the image.
Therefore, filtering the image to boost its high-frequency components
and decrease its low-frequency components would sharpen edges
in the image.
This operation on the image is a \twod/ high-pass filter; it corresponds
to convolution in the \twod/ ``time'' domain.
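The effect of high-pass filtering on an edge can be seen even in one dimension. The following Python fragment (a deliberately tiny illustration, not an image-processing routine) applies the first backward difference, the simplest discrete high-pass filter, to a blurred edge:

```python
# A blurred one-dimensional "edge": grey levels ramp from 0 to 1.
signal = [0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0]

# First backward difference: the simplest discrete high-pass filter.
diff = [signal[n] - signal[n - 1] for n in range(1, len(signal))]

# The response is large only across the transition, so thresholding
# it locates the edge; the flat regions give exactly zero.
edge = [n for n, d in enumerate(diff, start=1) if abs(d) > 0.1]
assert edge == [3, 4, 5, 6]
```

In two dimensions, the same idea with a 2-D difference kernel marks the grey-level transitions that the eye perceives as edges.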
Digital signal processing has always been tied closely to computer
implementations, where the signals are viewed as a stream of numbers.
The machine provides an efficient way to compute the DSP operator
that transforms one stream of numbers into another.
On the other hand, the design of a signal processing system treats the
signals as functions (in the mathematical sense).
Over the last decade, special attention has been devoted by several
researchers to the field of symbolic signal processing [Kopec 1980, 1985;
Myers 1986; Covell 1989].
In symbolic processing, the signal is represented in a computer
as a formula, rather than as a sequence of numbers.
Thus, the value of a signal might only be known in terms of a
formula, instead of a number.
In a similar manner, signal processing operators, the building
blocks for systems, are maintained in symbolic form.
This enables a machine to simplify, rearrange, and rewrite symbolic
expressions until they take a desired form.
For example, the desired form might be an optimal implementation of
an expression in which the number of additions and multiplications
is minimized.
The remainder of this paper describes a set of packages
that provide a general platform for implementing symbolic transforms
in \math/ and that represent a first step toward the building
of a comprehensive signal processing environment in \MATH.
The packages are divided into three groups, corresponding to the contexts
\verb+SignalProcessing`Support`+,
\verb+SignalProcessing`Digital`+ and
\verb+SignalProcessing`Analog`+.
Each of the next three sections focuses on one of these groups
of packages [Evans and McClellan 1990; Evans and McClellan 1992; Evans 1993].
The remaining section describes the Notebooks that accompany
the packages [Evans and McClellan 1991; Evans {\it et al.} 1993; Evans 1993].
Note that the signal processing packages are properly initialized in
\MATH\ by evaluating the command
%%%
\begin{center}
\verb:Needs[ "SignalProcessing`Master`" ]:
\end{center}
%%%
where the quote characters are back quotes (grave accents), not apostrophes.
\section*{Description of the Platform}
\subsection*{Mathematica and Signal Processing}
\MATH\ presents an attractive platform upon which to build
an environment for signal processing, providing as it does many
computational functions, arbitrary-precision arithmetic, and
an array of symbolic operations, such as
differentiation and integration, partial fraction decomposition,
power series expansion, and polynomial factoring and expansion.
These symbolic capabilities play a key role in computing
symbolic transforms.
As a programming environment, \MATH\ supports not only traditional
programming paradigms (procedural, functional, data-directed), but also
an object-oriented approach, since the programmer can attach data and
code to the head of an expression, so that an object can ``know''
information about itself.
Therefore, one can write code that does not have to
be modified to accommodate a new data type.
That is, properly written \MATH\ code is incrementally extensible.
Three other appealing aspects of \MATH\ for the
representation and manipulation of signal processing expressions are
the cascading of systems, dynamic data typing, and
(conditional) pattern matching.
As a consequence of {\MATH}'s support of functional programming,
a cascade of systems (operators) can be represented as
nested calls to the corresponding objects.
Dynamic data typing allows valueless symbols and permits variables
to assume any data type.
Finally, the availability of a powerful pattern matcher,
including conditional rules, permits the development of
complex rule bases to perform mathematical transforms.
\subsection*{Signal-Processing Functions}
Many of the common mathematical functions
used in signal processing are built-in:
sinusoids, exponentials, Bessel functions, etc.
Other functions that are not built-in, and all the common signal-processing
operators, are defined in the packages making up the context
\verb+SignalProcessing`Support`+.
We now examine them in more detail.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|ll|ll|}
\hline
Object & & & Meaning \\
\hline
{\tt CPulse} & & & continuous pulse: \\
& & & ~~~~$\sqcap_{L}(t) = \left\{
\begin{array}{ll}
0 & t < 0 \\
1 \over 2 & t = 0 \\
1 & 0 < t < L \\
1 \over 2 & t = L \\
0 & t > L
\end{array}
\right.$ \\
{\tt CStep} & & & continuous step function: \\
& & & ~~~~$u_{-1} (t) = \left\{ \begin{array}{ll}
1 & t > 0 \\
1 \over 2 & t = 0 \\
0 & t < 0
\end{array}
\right.$ \\
{\tt Delta} & & & Dirac delta function \\
{\tt Dirichlet} & & & Dirichlet discrete kernel: \\
& & & ~~~~$d[N, \omega] =
\displaystyle{ \sin(N \omega/ 2) \over
{ N \sin(\omega / 2) } }$ \\
& & & Also called the {\tt ASinc} function \\
{\tt Impulse} & & & Kronecker delta function \\
{\tt LineImpulse} & & & $m$-D delta function, e.g. $\delta(t_1 - t_2)$ \\
{\tt Pulse} & & & discrete pulse: \\
& & & ~~~~$\sqcap_{L}[n] = \left\{
\begin{array}{ll}
0 & n < 0 \\
1 & 0 \leq n \leq L - 1 \\
0 & n \geq L
\end{array}
\right.$ \\
{\tt Sinc} & & & $\mbox{\rm sinc}(t) \equiv \sin(t) / t$ so $\mbox{\rm sinc}(0) = 1$ \\
{\tt Step} & & & discrete step function: \\
& & & ~~~~$u[n] = \left\{ \begin{array}{ll}
1 & n \geq 0 \\
0 & n < 0
\end{array}
\right.$ \\
{\tt Unit} & & & family of functions which \\
& & & includes {\tt CStep} and {\tt Delta} \\
\hline
\end{tabular}
\end{center}
\caption{Signal objects defined in {\tt SignalProcessing`Support`}.
For the syntax of {\tt CPulse}, {\tt Dirichlet}, {\tt FIR},
{\tt IIR}, {\tt LineImpulse} and {\tt Pulse}, consult the
on-line documentation (e.g., {\tt ?CPulse}).}
\end{table}
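As a check on the table entry for {\tt Dirichlet}, the kernel $d[N, \omega]$ is, up to a linear phase factor, the discrete-time Fourier transform of a length-$N$ {\tt Pulse} divided by $N$. A brief numerical confirmation in Python (illustrative only, not package code):

```python
import math, cmath

def dirichlet(N, w):
    """d[N, w] = sin(N w / 2) / (N sin(w / 2)); d[N, 0] = 1."""
    if abs(math.sin(w / 2)) < 1e-12:
        return 1.0
    return math.sin(N * w / 2) / (N * math.sin(w / 2))

# The DTFT of a length-N discrete pulse is a geometric sum whose
# magnitude equals N * |d[N, w]|.
N = 8
for w in [0.0, 0.3, 1.1, 2.5]:
    dtft = sum(cmath.exp(-1j * w * n) for n in range(N))
    assert abs(abs(dtft) / N - abs(dirichlet(N, w))) < 1e-9
```

This is why the Dirichlet kernel is sometimes called the aliased sinc ({\tt ASinc}): it plays the role for sequences that $\sinc$ plays for continuous signals.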
For example, an all-pole analog filter with roots $-1$, $-2$, and $-3$
is represented as {\tt CIIR[t, \{6, 11, 6, 1\}]}; the roots are embedded
in the filter but not displayed.
To see the entire data structure, use {\tt FullForm}.
Applying {\tt TheFunction} to these filters will generate a time-domain
function corresponding to their impulse response.
Thus, the time-response of filters can be plotted using {\tt SignalPlot}
and {\tt SequencePlot} for analog and digital filters, respectively.
An expression with head \verb$IIR$ or \verb$FIR$ can also be seen as
the operator of convolution with the respective functions; that is,
when \verb$IIR$ or \verb$FIR$ operate on a signal, the operation can
be seen as the action of a filter with the specified zeros or poles.
The filter can be analog or digital, depending on what it is applied
to; for example, \verb+IIR[n, {1, 1/2, 1/4}][a^n Step[n]]+ represents
the output of the filter described in the previous paragraph, when the
input is $a^nu[n]$.
This filter is digital only because of the
presence of the discrete step function.
\subsection*{Signal Processing Operators}
The twenty-five new operators for signal processing are listed in Table 2.
Most of them need one or more arguments in order to be fully specified.
One of the arguments---the last or, for the transforms, the
second-to-last---indicates which symbols are to be treated as variables in
the function to be acted on.
Thus, \verb+Shift[7, t]+ has no effect on
the expression \verb+Cos[x]+, but \verb+Shift[7, t][Cos[t]]+ {\it
represents} \verb+Cos[t-7]+.
An impulse train, for example, in continuous time would be written
either as {\tt Periodic[T, t][Delta[t]]} or as
{\tt Summation[n, -Infinity, Infinity, 1][Delta[t + n T]]} where
{\tt n} is an integer index variable, {\tt T} is the sampling period,
and {\tt t} is the continuous time variable.
For the transforms, the last two
arguments indicate the variables before or after transformation.
Other arguments are explained in the table.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ll}
%%%{\tt Aliasby[sc, w]} & aliases a continuous function of {\tt w} giving it a period of $2 \pi$ / {\tt sc} \\
%%% & and divides by {\tt sc} \\
{\tt CFIR[t, \{h0, h1, \ldots\}]} & ``all-zero'' analog filter \\
{\tt CIIR[t, \{a0, a1, \ldots\}]} & all-pole analog filter \\
{\tt CircularShift[n0, N, n]} & shifts a sequence by {\tt (n0 + n) mod N}
in {\tt n} \\
{\tt CConvolve[t]} & continuous convolution in {\tt t} \\
{\tt Convolve[n]} & discrete convolution in {\tt n} \\
{\tt DFT[L, n, k]} & the {\tt L}-sample discrete Fourier transform \\
& of a function in {\tt n} to a function in {\tt k} \\
{\tt DTFT[n, w]} & the discrete-time Fourier transform of a sequence \\
& in {\tt n} to a continuous periodic function in {\tt w} \\
{\tt Difference[i, n]} & the $i$th backward difference in {\tt n} \\
{\tt Downsample[m, n]} & keep first sample in every block of {\tt m} input samples; \\
& sampling rate decreases by a factor of {\tt m} \\
{\tt FIR[n, \{h0, h1, \ldots\}]} & all-zero digital filter with finite impulse response {\tt h0, h1, \ldots}. \\
{\tt FT[t, w]} & continuous Fourier transform of a function of {\tt t} into a function of {\tt w} \\
{\tt IIR[n, \{a0, a1, \ldots\}]} & all-pole digital filter with infinite impulse response \\
{\tt Interleave[n]} & interleaves (combines) samples from each input function of {\tt n} into one \\
{\tt InvDFT[L, k, n]} & inverse discrete Fourier transform (see {\tt DFT})\\
{\tt InvDTFT[w, n]} & inverse discrete-time Fourier transform (see {\tt DTFT}) \\
{\tt InvFT[w, t]} & inverse continuous Fourier transform (see {\tt FT}) \\
{\tt InvL[s, t]} & inverse Laplace transform (see {\tt L}) \\
{\tt InvZ[z, n]} & inverse $z$-transform (see {\tt Z}) \\
{\tt L[t, s]} & Laplace transform of a function of {\tt t} \\
{\tt Periodic[p, v]} & argument is made periodic with period {\tt p} with respect to variable {\tt v} \\
{\tt Rev[v]} & reverse the function of variable {\tt v} (flip the {\tt v} axis) \\
%%% {\tt ScaleAxis[sc, w]} & scale the {\tt w} axis (variable) by {\tt sc} \\
{\tt Shift[v0, v]} & shift function of {\tt v} by {\tt v0} points \\
{\tt Summation[i, ib, ie, inc]} & summation operator, {\tt i} =
{\tt ib} to {\tt ie} step {\tt inc} \\
{\tt Upsample[l, n]} & insert {\tt l} - 1 zeros after each input sample;\\
& sampling rate increases by a factor of {\tt l} \\
{\tt Z[n, z]} & $z$-transform of a function of {\tt n} \\
\end{tabular}
\end{center}
\caption{(Parameterized) Operators defined in {\tt SignalProcessing`Support`}.
For the transforms, the last two arguments indicate the variable(s)
in the function to be acted on and in the result of the
transformation.
For all other operators except {\tt Summation}, the
last argument indicates the variable(s) in the function to be acted on.}
\end{table}
%%%\begin{table}
%%%\begin{center}
%%%\begin{tabular}{|lcl|}
%%%\hline
%%%Operator&\TeX form&Represents\\
%%%\hline
%%%{\tt Aliasby[sc, w]} && alias a continuous function with period
%%% $2 \pi/{\tt sc}$ \\
%%%{\tt Convolve[n]} &$*_n$& convolve two functions\\
%%%{\tt DFT[L, n, k]} && $L$-sample discrete Fourier transform \\
%%%{\tt DTFT[n, w]} && discrete-time Fourier transform \\
%%%{\tt Difference[i, n]} &$\Delta_{i,n}$&
%%% $i$-th iterated difference between \\ && $\quad$consecutive samples \\
%%%{\tt Downsample[m, n]} &$\downarrow_{m,n}$& keep only every $m$-th sample\\
%%%{\tt FT[t, w]} &${\cal F}_t$ & continuous Fourier transform \\
%%%{\tt Interleave[n]} && interleave two or more sequences \\
%%%{\tt InvDFT[L, k, n]} &${\cal DFT}^{-1}_{k}$
%%% &$L$-sample inverse discrete Fourier transform\\
%%%{\tt InvDTFT[w, n]} &
%%%${\cal DTFT}^{-1}_{w}$ & inverse discrete-time Fourier transform \\
%%%{\tt InvFT[w, t]}&${\cal F}^{-1}_{w}$&inverse continuous Fourier transform \\
%%%{\tt InvL[s, t]} &${\cal L}^{-1}_{s}$& inverse Laplace transform \\
%%%{\tt InvZ[z, n]} &${\cal Z}^{-1}_{z}$& inverse $z$-transform \\
%%%{\tt L[t, s]} &${\cal L}_{t}$& Laplace transform \\
%%%{\tt Periodic[p, n]} && make periodic of period {\tt p} \\
%%%{\tt Rev[n]} && flip the axis \\
%%%{\tt ScaleAxis[sc, w]} && scale the w-axis by {\tt sc} \\
%%%{\tt Shift[n0, n]} && shift the n-axis by {\tt n0} \\
%%%{\tt Summation[i, a, b, d]} &
%%%$\displaystyle\sum_{i=a,i=i+d}^b$ &
%%%sum for $i=a,a+d,\dots,a+\displaystyle\left[b-a\over d\right]d$\\
%%%{\tt Upsample[l, n]} &$\uparrow_{l,n}$
%%%&insert $l-1$ zeros between samples \\
%%%{\tt Z[n, z]} && $z$-transform \\
%%%\hline
%%%\end{tabular}
%%%\end{center}
%%%\caption{Operators defined in {\tt SignalProcessing`Support`}.
%%%For the transforms, the last two arguments indicate the variable(s)
%%%in the function to be acted on and in the result of the
%%%transformation. For all other operators except {\tt Summation}, the
%%%last argument indicates the variable(s) in the function to be acted on.}
%%%\end{table}
For \oned/ operators, a variable is a symbol, like \verb+t+.
Arguments other than variables can be numbers, symbols or
expressions.
By using a list of symbols in place of a variable,
like \verb+{t1, t2, t3}+ instead of \verb+t+, we make the operator
multidimensional, the length of the list indicating the
dimensionality.
For multidimensional operators, upsampling and downsampling factors
become square matrices, and other parameters become lists of
expressions, one expression per dimension.
For example, the downsample operator requires a downsampling factor
{\tt m} and a symbolic variable {\tt n}.
In one-dimensional downsampling, {\tt m} is an integer and {\tt n}
is a symbolic index.
In the multidimensional case, {\tt m} is a matrix and {\tt n}
is a list of symbolic indices.
In either case, the form of the operator is {\tt Downsample[m, n]},
and its syntax is {\tt Downsample[m, n][f]}, where {\tt f} is a discrete
function in {\tt n}.
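In one dimension, the behavior that {\tt Downsample} and {\tt Upsample} represent can be mimicked in a few lines of Python (an illustration of the semantics only, not the package code):

```python
def downsample(m, x):
    """Keep the first sample in every block of m input samples."""
    return x[::m]

def upsample(l, x):
    """Insert l - 1 zeros after each input sample."""
    out = []
    for sample in x:
        out.extend([sample] + [0] * (l - 1))
    return out

x = [1, 2, 3, 4, 5, 6]
assert downsample(2, x) == [1, 3, 5]
assert upsample(3, [1, 2]) == [1, 0, 0, 2, 0, 0]
# Downsampling undoes upsampling by the same factor.
assert downsample(3, upsample(3, x)) == x
```

In the multidimensional case the integer factors become matrices acting on the vector of indices, but the one-dimensional picture above is the building block.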
It must be emphasized that when one of the operators of Table 2 is applied to
a function, no evaluation takes place: the idea is to
store the resulting function symbolically until it becomes convenient
to compute it explicitly.
This is done by applying \verb$TheFunction$ to the expression,
much as a numerical evaluation in \MATH\ is triggered by the action
of \verb$N$.
Thus \verb$Z[n, z][a^n Step[n]]$ represents the $z$-transform of $a^nu[n]$,
but the transformation is not carried out; to obtain an explicit transform
one must apply \verb$TheFunction$ to the expression.
(However, certain obvious simplifications are carried out: for example,
\verb+InvZ[z,n][Z[n,z][f]]+
reduces to \verb$f$.)
Similarly, the functions in Table 1 are not reduced to \MATH\ built-in
objects until they appear as arguments to \verb$TheFunction$.
One may wish to reduce them, for example, in order to use \MATH's
built-in plotting routines to plot them.
Naturally, some functions in Table 1, like \verb+Delta+ and \verb+Unit+,
cannot be expressed in terms of \math/ built-in objects,
so \verb$TheFunction$ leaves them alone.
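The ``store symbolically, evaluate on demand'' convention can be sketched in ordinary Python (a toy analogy with invented names such as {\tt the\_function}; the real mechanism is \MATH\ pattern matching, not Python classes):

```python
# Toy sketch of deferred evaluation: applying an operator builds a
# data structure; nothing is computed until evaluation is requested.
class Shift:
    """Shift(v0): represents the operator f(t) -> f(t - v0)."""
    def __init__(self, v0):
        self.v0 = v0

    def __call__(self, f):
        # Return a deferred application instead of computing anything.
        return ("Shift", self.v0, f)

def the_function(expr):
    """Force a deferred expression into an ordinary Python function
    (playing the role that TheFunction plays in the packages)."""
    tag, v0, f = expr
    assert tag == "Shift"
    return lambda t: f(t - v0)

expr = Shift(7)(lambda t: t * t)   # represents f(t - 7); not yet computed
g = the_function(expr)
assert g(10) == 9                  # (10 - 7)^2
```

Keeping the operator application as data is what lets a rule base inspect, simplify, or transform the expression before any numerical work is done.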
\subsection*{Other Features}
Another facility provided by \verb+SignalProcessing`Support`+ is the
plotting of signals and transforms.
For discrete-time signals, time-domain plots are displayed in the
conventional ``lollipop'' style [Oppenheim and Schafer 1989].
Continuous-time signals are plotted using built-in routines,
augmented by new routines that display Dirac delta functions as arrows.
Finally, pole-zero diagrams, root maps, and plots of
magnitude and phase responses are available
via the functions \verb$PoleZeroPlot$ and \verb$MagPhasePlot$.
Other support routines provide drivers for multidimensional
transforms.
The multidimensional drivers simply call a
one-dimensional rule base for each dimension of the transform and
properly combine the region of convergence information obtained from
each call.
The end result of these drivers is a transform object.
However, if a call to a transform rule base produces an incomplete
transform, the multidimensional drivers exit immediately.
In the digital and analog signal processing packages, these drivers are
incorporated into the forward and inverse $z$- and Laplace-transform
packages.
The last set of support routines, listed in Table 3, is needed in
order to implement transform properties, because transform properties
alter the transform function and the region of convergence.
For all of these objects, the first argument is the name of a routine
that returns {\tt True}
%
\begin{table}[hbt]
\begin{center}
\tt
\begin{tabular}{|lllll|}
\hline
AddT & & ConjT & & ConvolveT \\
DerivativeT & & IntegrateT & & LineImpulsemDT \\
MultT & & ScaleT & & SubT \\
& & SubstituteForT & & \\
\hline
\end{tabular}
\end{center}
\caption{Objects implementing transform properties}
\end{table}
%
when its argument is a valid transform ({\tt LForm} for the Laplace
transform and {\tt ZForm} for the \zt).
The second argument is a transform or transform code to be evaluated.
When the second argument is a valid transform, the routine implements
the property on it.
Other arguments vary with the object.
This convention allows a nesting or cascading of properties.
\section*{Digital Signal Processing}
We now turn to the packages in the context
\verb$SignalProcessing`Digital`$.
Their important features are the
\dt{rule bases} for evaluation of forward and inverse $z$-transforms, and
forward and inverse discrete-time Fourier transforms.
Application of these rule bases is triggered by calls to the
evaluation functions
\verb$ZTransform$, \verb$InvZTransform$, \verb$DTFTransform$ and
\verb$InvDTFTransform$.
Such calls are automatically performed when \verb$TheFunction$ is
applied:~ thus, {\tt TheFunction[ Z[n, z] [ Exp[- n] Step[n] ] ]} evaluates to
{\tt ZTransform[ Exp[- n] Step[n], n, z ]}, which is then further evaluated
to an explicit form.
\subsection*{The Forward $z$-Transform}
Before we get into the details of the rule bases, we
define the $z$- and discrete-time Fourier transforms.
The forward \zt\ produces an analytic function $X(z)$, from a
sequence $x[n]$:
\begin{equation}
X(z)= {\cal Z} \{ x[n] \} = \sum_{n = -\infty}^{\infty} x[n] z^{-n}
\end{equation}
[Oppenheim and Schafer 1989].
(The bilateral version of the $z$- and Laplace transforms was adopted
in these packages because it is commonly used in signal processing.
The unilateral form, in which the sum extends
only over non-negative $n$, is sometimes better suited to solving
initial-value problems; but the bilateral form can also handle
initial conditions by means of delta and step functions.)
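Definition (1) can be sanity-checked numerically: for $x[n] = a^n u[n]$, truncating the sum at a few hundred terms reproduces the closed-form transform $1/(1 - a z^{-1})$ at any point inside the region of convergence. A Python sketch (illustrative only):

```python
# Truncate the defining sum (1) for x[n] = a^n u[n] and compare with
# the closed form X(z) = 1 / (1 - a / z) at a point where |z| > |a|.
a = 0.5
z = 1.2 + 0.3j                      # |z| ~ 1.24 > |a|: inside the ROC

partial = sum(a ** n * z ** (-n) for n in range(200))
closed = 1 / (1 - a / z)
assert abs(partial - closed) < 1e-9
```

Outside the region of convergence ($|z| < |a|$) the partial sums grow without bound, which is exactly why the rule bases must track the region of convergence along with the transform.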
The sum in (1) converges for values of $z$ in a region of the form
$\{ z : r_- < |z| < r_+ \}$, called the \dt{region of convergence} (ROC).
The rule bases track the region of convergence along with the
transform itself.
For example, evaluating
\begin{center}
\begin{minipage}{3.5in}
\begin{verbatim}
In[1]:= ZTransform[a^n Step[n], n, z]
\end{verbatim}
\end{minipage}
\end{center}
produces a transform object containing the closed form
$1/(1 - a z^{-1})$ together with the region-of-convergence information
{\tt Rminus[Abs[a]]} and {\tt Rplus[Infinity]} and the transform
variables {\tt ZVariables[z]}.
In this way, information about the stability of a signal can be
obtained.
(A signal is called \dt{stable} if the sum of its absolute
values remains bounded as time increases.)
The function {\tt Stable} operates on the
\zt\ of a digital signal, returning {\tt True} if the signal is
stable and {\tt False} otherwise.
If the stability is dependent on values of free parameters,
{\tt Stable} returns a set of stability conditions:
%%%%
\begin{center}
\begin{minipage}{2.0in}
\begin{verbatim}
In[2]:= Stable[%]
Out[2]= Abs[a] < 1
\end{verbatim}
\end{minipage}
\end{center}
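The condition reported by {\tt Stable} matches the definition directly: $\sum_n |a^n u[n]|$ is a geometric series that stays bounded exactly when $|a| < 1$. A quick numerical illustration in Python (not package code):

```python
# Stability of a^n u[n]: the sum of absolute values is a geometric
# series, bounded exactly when |a| < 1 (the condition Stable returns).
def abs_sum(a, terms=2000):
    return sum(abs(a) ** n for n in range(terms))

assert abs(abs_sum(0.9) - 10.0) < 1e-6    # converges to 1 / (1 - 0.9)
assert abs_sum(1.1) > 1e6                 # grows without bound
```

Equivalently, the pole at $z = a$ must lie inside the unit circle, which is how {\tt Stable} reads the condition off the \zt.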
The multidimensional \zt\ is obtained by applying the
one-dimensional \zt\ once for each dimension.
For example, the two-dimensional \zt\ $X(z_1, z_2)$ of the function
$x[n_1, n_2]$ is defined as
\begin{eqnarray}
X(z_1, z_2) &=& \sum_{n_2 = -\infty}^{\infty}
\sum_{n_1 = -\infty}^{\infty}
x[n_1, n_2] z_1^{-n_1} z_2^{-n_2}\\
&= & \displaystyle{
\sum_{n_2 = -\infty}^{\infty}
\left( \sum_{n_1 = -\infty}^{\infty}
x[n_1, n_2] z_1^{- n_1} \right) z_2^{- n_2} }
\nonumber\\
&= & {\cal Z}_{n_2} \{ {\cal Z}_{n_1} \{ x[n_1, n_2] \} \}\nonumber
\end{eqnarray}
[Dudgeon and Mersereau 1984].
However, care must be taken in computing the region of convergence.
If the function $x[n_1,n_2]$ is \dt{separable}, that is, a product
of functions of the two variables separately, the
region of convergence of the double series (2) is the Cartesian product
of the regions of convergence of the series (1) for the factors.
For a non-separable function, however, this is not so, and we
must take that into account.
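For the separable case, the factoring of the double series (2) into a product of one-dimensional transforms can be verified numerically; the Python fragment below (illustrative only) truncates both sums for $x[n_1, n_2] = a^{n_1} b^{n_2} u[n_1] u[n_2]$:

```python
# x[n1, n2] = a^n1 b^n2 u[n1] u[n2] is separable, so its 2-D
# z-transform is the product of the two 1-D transforms.
a, b = 0.4, 0.6
z1, z2 = 1.1, 1.3                  # |z1| > |a|, |z2| > |b|: inside the ROC

N = 200
double_sum = sum(a ** n1 * b ** n2 * z1 ** (-n1) * z2 ** (-n2)
                 for n1 in range(N) for n2 in range(N))
product = (1 / (1 - a / z1)) * (1 / (1 - b / z2))
assert abs(double_sum - product) < 1e-9
```

The region of convergence is then the Cartesian product $\{|z_1| > |a|\} \times \{|z_2| > |b|\}$; for non-separable sequences no such product decomposition exists, which is what the multidimensional drivers must account for.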
The \verb$ZTransform$ function requires two arguments: the function
to be transformed and the name of the ``time'' variable(s).
The second argument determines the dimensionality
and causes all other symbols in the expression to be treated as
constants.
An optional third argument allows the user to specify
the names of the transform variable(s); the default is \verb$z$ in
one dimension and \verb${z1, z2, ...}$ in several.
Any other arguments are treated as options.
Currently there are only two options,
\verb:TransformLookup: (explained in Appendix A) and \verb:Dialogue:.
Setting \verb:Dialogue: to \verb$True$ causes the program to tell the
user what strategies are being employed and what functions,
if any, could not be transformed.
This is the default.
\verb+Dialogue+ can also be set to {\tt False} (no justification)
or {\tt All} (complete justification).
The {\tt All} setting causes each step of the transformation to be
displayed, so it is useful for teaching the mechanics of taking
the \zt.
\verb$ZTransform$ works by calling
a one-dimensional rule base, {\tt MyZTransform}, once for each variable to
be transformed.
The seventy-six rules in the {\tt MyZTransform} rule base are divided
into six groups, as shown in Table 4.
The first group is needed in order to track the region of
convergence of non-separable functions.
%
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|r|c|}
\hline
Type of rule & Number & Table \\
\hline
multidimensional hooks & 9 & \\
rational transform pairs & 15 & 5 \\
non-rational transform pairs & 7 & 6 \\
transform properties & 13 & 7 \\
transforms of operators & 22 & 8 \\
transform strategies & 10 & 9 \\
\hline
\end{tabular}
\end{center}
\caption{Synopsis of rules for one-dimensional forward $z$-transforms}
\end{table}
The next two groups encode rational (see Table 5) and non-rational
(see Table 6) transform pairs.
These rules apply to terms that contain either
an impulse or a step function, so that every term is one-sided.
Only right-sided transform pairs are encoded because special rules exist
that take the left-sided transform by reversing the sequence in time,
finding the transform, and then adjusting the result.
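For example, to transform the anti-causal sequence $-a^{n} u[-1-n]$,
the rule base reverses it in time to obtain the right-sided sequence
$-a^{-n} u[n-1]$, whose transform follows from the geometric series:
\[
{\cal Z} \{ -a^{-n} u[n-1] \} = -\sum_{n=1}^{\infty} (a z)^{-n}
= {1 \over {1 - a z}}, \quad |z| > {1 \over {|a|}}.
\]
The adjustment step then substitutes $1/z$ for $z$, giving
$1/(1 - a z^{-1})$ with region of convergence $|z| < |a|$.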
\begin{table}
\[
\begin{array}{|ll|ll|}
\hline
\mbox{Expression} & & & \mbox{$z$-Transform} \\
\hline
0 & & & 0 \\
u[n] & & & \displaystyle {1 \over {1 - z^{-1}}} \\*[0.1in]
\delta[n] & & & 1 \\*[0.1in]
\sin[\omega n + b] u[n] & & & \displaystyle {{\sin(b) +
\sin(\omega-b) z^{-1}} \over {1 - 2 \cos(\omega) z^{-1} + z^{-2}}} \\*[0.1in]
\cos[\omega n + b] u[n] & & & \displaystyle {{\cos(b) +
\cos(\omega-b) z^{-1}} \over {1 - 2 \cos(\omega) z^{-1} + z^{-2}}} \\*[0.1in]
\sinh[\omega n + b] u[n] & & & \displaystyle {{\sinh(b) +
\sinh(\omega-b) z^{-1}} \over {1 - 2 \cosh(\omega) z^{-1} + z^{-2}}} \\*[0.1in]
\cosh[\omega n + b] u[n] & & & \displaystyle {{\cosh(b) +
\cosh(\omega-b) z^{-1}} \over {1 - 2 \cosh(\omega) z^{-1} + z^{-2}}} \\*[0.1in]
a^{n} u[n] & & & \displaystyle {1 \over {1 -
a z^{-1}}}, \mbox{ if } |z| > |a| \\*[0.1in]
-a^{n} u[-1 - n] & & & \displaystyle {1 \over {1 -
a z^{-1}}}, \mbox{ if } |z| < |a| \\*[0.1in]
\left( \begin{array}{c}
n + k - j \\ k
\end{array}
\right) a^{n} u[n - j] & & & \displaystyle { a^{j} z^{-j} \over {(1 - a z^{-1})^{k+1}}} \\*[0.1in]
\left( \begin{array}{c}
k \\ n
\end{array}
\right) a^{n} b^{k - n} u[n] & & & (b + a z^{-1})^{k} \\*[0.1in]
{\rm FIR}[n, \{h_0, h_1, h_2, \ldots\}] & & & h_0 + h_1 z^{-1} + h_2 z^{-2} + \ldots
\\*[0.1in]
{\rm IIR}[n, \{a_0, a_1, a_2, \ldots\}] & & & \displaystyle {a_0
\over {1 - a_1 z^{-1} - a_2 z^{-2} - \ldots}} \\*[0.1in]
\hline
\end{array}
\]
\caption{Rational $z$-transform pairs;
adapted from [Muth 1978] and [Oppenheim and Schafer 1989].}
\end{table}
\begin{table}
\[
\begin{array}{|l|l|}
\hline
{\mbox{Expression}} & {\mbox{$z$-Transform}} \\
\hline & \\
\displaystyle {u[n] \over n!} & e^{z^{-1}} \\
& \\
\displaystyle {u[n] \over {2 n + 1}} & \sqrt{z} \arctan(z^{-1}) \\
& \\
\displaystyle {u[n - 1] \over n} & -\ln(1 - z^{-1}) \\
& \\
\displaystyle {u[n] \over {(2n)!}} & \cosh{ \sqrt{z^{-1}} } \\
& \\
\displaystyle {{(2n)!} \over {(2^{n} n!)^2}} u[n] & \displaystyle{\sqrt{ 1 \over {1 - z^{-1}}}} \\
& \\
\displaystyle {\Gamma[r] \over {\Gamma[n+1] \Gamma[r - n]}} u[n]
& (1 + z^{-1})^{r - 1} \quad \mbox{for real-valued } r > 0 \\
& \\
\hline
\end{array}
\]
\caption{Non-rational $z$-Transform Pairs.
The last transform pair is from [Clements and Pease 1989];
all others are from [Muth 1978].}
\end{table}
The next group of rules implements the properties of the \zt\
listed in Table 7. For example, the shift property
$$
{\cal Z} \{ x[n + m] \} = z^m {\cal Z} \{ x[n] \}
$$
is implemented by the (somewhat simplified) rules
\begin{verbatim}
MyZTransform[ f_. Step[n_ + m_], n_, z_] :=
ScaleZ[ MyZTransform[(f /. n -> n - m) Step[n], n, z], z^m ] /;
FreeQ[m,n]
MyZTransform[ f_. Step[m_ - n_], n_, z_] :=
ScaleZ[ MyZTransform[(f /. n -> n + m) Step[-n], n, z], z^m ] /;
FreeQ[m,n]
MyZTransform[ a_. Impulse[n_ + m_.], n_, z_] :=
Transform[ (a /. n -> -m ) z^m, 0, Infinity ]
\end{verbatim}
The first rule is for right-sided (causal) sequences, and the second
is for left-sided (anti-causal) sequences; thus
the pattern {\tt f\_.\ Step[m\_ - n\_]}
matches anti-causal functions such as {\tt Cos[Pi k / 5] Step[3 - k]},
where {\tt k} is the variable being transformed.
The third rule transforms shifted impulses.
The {\tt z\verb:^:m} term has been factored out in all three rules.
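As a concrete illustration of these rules, consider $u[n-3]$.
The first (causal) rule matches with {\tt f} equal to 1 and {\tt m}
equal to $-3$; the rule transforms $u[n]$ and then scales the result
by $z^{-3}$:
\[
{\cal Z} \{ u[n-3] \} = z^{-3} \, {\cal Z} \{ u[n] \}
= {z^{-3} \over {1 - z^{-1}}}.
\]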
The fifth section of {\tt MyZTransform} takes the \zt\ of different
operators, according to Table 8.
Because this rule base is recursive,
{\zt}s of cascaded systems can be found---for example, the \zt\ of an
upsampled, time-reversed impulse response of an FIR filter.
\begin{table}
\[
\begin{array}{|ll|lll|}
\hline
\mbox{Structure} & & & \mbox{$z$-Transform} & \\
\hline & & & & \\
x^{*}[n] & & & X^{*} ( z^{*} ) & \\
& & & & \\
x[n] * y[n] \ & & & X(z) Y(z) & \\
& & & & \\
\Delta_{k}{x[n]} & & & (1 - z^{-1})^k X(z) & \\
& & & & \\
\Im{m} ( x[n] ) & & & \displaystyle{{1 \over {2j}} [X(z) - X^{*}(z^{*})]} & \\
& & & & \\
x[n] u[n] & & & \displaystyle{{1 \over {1 -
z^{-k}}} X_k(z)}, & \mbox{ $x[n]$ periodic of period $k$ and } \\
& & & & X_k(z) = \sum\limits_{n = 0}^{k - 1} x[n] z^{- n} \\
& & & & \\
\Re{e} ( x[n] ) & & & \displaystyle{{1 \over 2} [ X(z) + X^{*}(z^{*}) ]} & \\
& & & & \\
\uparrow_M x[n] & & & \displaystyle{X( z^{M} )} & \\
& & & & \\
\sum\limits_{j=0}^{n} x[j] & & & \displaystyle{{1 \over {1 - z^{-1}}} X(z)} & \\
& & & & \\
\hline
\end{array}
\]
\caption{$z$-Transforms of operators:
adapted from [Muth 1978] and [Oppenheim and Schafer 1989].}
\end{table}
The last group of rules (Table 9) adds strategies to try if all preceding
rules have failed to give the transform completely.
The repeated application of any of the starred strategy rules would
cause an infinite loop, so local state information is associated with
every expression (and inherited by every sub-expression) to prevent
each of these rules from being applied more than once.
The local state information is implemented as a list of five boolean values,
one for each critical rule.
\begin{table}
\begin{quote}
\begin{itemize}
\item [1.] rewrite affine step functions
\item [2.] break up any time-dependent absolute value function into
a sum of a causal and an anti-causal step function
\item [3.] collect exponential terms
(change an exponential divided by another into
a ratio raised to an exponent)
\item [4.] expand $\displaystyle{{1 \over n} x[n] u[n]}$ into
$\displaystyle{
\left\{
\lim_{n \rightarrow 0}{x[n] \over n}
\right\}
\delta[n] +
{1 \over n} x[n] u[n-1]}$
if $\displaystyle{\lim_{n \rightarrow 0} x[n] = 0}$
\item [*5.] distribute products like $(n + 1)(n + 2)$ into $n^2 + 3 n + 2$
\item [*6.] factor the denominator
\item [*7.] expand all numerator terms
\item [*8.] expand all denominator terms
\item [9.] try a two-sided transform
\item [*10.] substitute mathematical definitions for all terms
\end{itemize}
\caption{Strategies for forward $z$-transforms. An asterisk
means that once the rule is applied to an expression,
it will no longer be applied to any part of that expression.}
\end{quote}
\end{table}
\subsection*{The Inverse $z$-Transform}
The inverse \zt\ requires the evaluation of the contour
integral
$$
{\cal Z}^{-1} \{ X(z) \}
= {1 \over {2 \pi j}} \oint_{C} X(z) z^{n-1} dz = x[n]
$$
[Oppenheim and Schafer 1989], where $C$ is a counterclockwise contour
encircling the origin and $j$ is the imaginary number $\sqrt{-1}$,
represented in \math/ by \verb$I$.
Usually, the Cauchy Residue Theorem is applied to $X(z)$ in order
to reduce the contour integral to an algebraic evaluation of residues.
The inverse \zt\ is equivalent to the Laurent series expansion of $X(z)$.
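For example, for $X(z) = 1/(1 - a z^{-1})$ with region of convergence
$|z| > |a|$, the integrand is $X(z) z^{n-1} = z^{n}/(z-a)$, and for
$n \ge 0$ the contour encloses only the pole at $z = a$:
\[
x[n] = {1 \over {2 \pi j}} \oint_{C} {z^{n} \over {z - a}} \, dz
= \left. z^{n} \right|_{z = a} = a^{n},
\]
so $x[n] = a^{n} u[n]$, in agreement with Table 5.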
The command \verb+InvZTransform+ implements the inverse $z$-transform
of one-dimensional and multidimensional expressions.
In two or more dimensions, the general inverse is quite difficult because it
involves multiple contour integrations, which \MATH\ cannot perform.
The rule base, however, covers many of these cases.
{\tt InvZTransform} calls a one-dimensional rule base for each dimension
to be transformed.
The rules are organized in a manner similar to the
one-dimensional forward \zt\ rules (Table 4), the main
difference being that only two multidimensional hooks are encoded
(which invert transforms corresponding to upsampled sequences).
Application of the one-dimensional rule bases automatically inverts
transforms corresponding to line impulses and multinomials.
\begin{table}[hbt]
\begin{center}
\begin{tabular}{|l|r|c|}
\hline
Type of rule & Number & Table \\
\hline
multidimensional hooks & 2 & \\
rational transform pairs & 26 & 5 \\
non-rational transform pairs & 9 & 6 \\
transform properties & 12 & 7 \\
transforms of operators & 4 & 8 \\
transform strategies & 9 & 10 \\
\hline
\end{tabular}
\end{center}
\caption{Synopsis of rules for one-dimensional inverse $z$-transforms}
\end{table}
Six of the inverse transform pairs concern {\tt Impulse}, {\tt Step},
and sinusoidal forms.
The other pairs return the inverse transform of
rational {\zt}s (with first-order and second-order denominators) and
non-rational {\zt}s (containing logarithmic, factorial and square-root
forms).
The properties of the transform and the transforms of
operators are the same as in the forward case.
Transforms corresponding to exponentials in the time domain are inverted
by the exponential property rule, not by table lookup.
Some strategies for inverting {\zt}s (Table 10) are similar to those
applied in taking forward {\zt}s, but some new ones are also needed.
Two such strategies are partial fractions and power series expansion.
The goal of partial fractions is to break up rational functions into a
sum of terms involving only first-order and second-order denominators.
Then, each individual term can be easily transformed. Power series
expansion can be used to approximate the first $N$ polynomial terms of
the inverse transform.
The repeated application of either of these
two strategies, however, would cause an infinite loop.
In order to avoid this, local state information ensures that any strategy
with this behavior is applied at most once per expression.
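For example, the partial fraction strategy rewrites a transform with
poles at $z = 1$ and $z = \half$ as a sum of first-order terms,
\[
X(z) = {1 \over {(1 - z^{-1}) (1 - \half z^{-1})}}
= {2 \over {1 - z^{-1}}} - {1 \over {1 - \half z^{-1}}},
\]
each of which is inverted by table lookup; assuming a right-sided
inverse, $x[n] = \bigl( 2 - (\half)^{n} \bigr) u[n]$.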
\begin{table}[htb]
\begin{quote}
\begin{itemize}
\item [*1.] perform partial fraction expansion on rational polynomials
\item [*2.] perform partial fraction expansion
\item [3.] factor terms inside logarithmic functions
\item [4.] factor terms inside exponential functions
\item [*5.] normalize the denominator of a rational polynomial:
$1 + a_{1} z^{-1} + a_{2} z^{-2} + \cdots$
\item [*6.] expand the definitions of all terms
\item [*7.] take a left-sided transform using right-sided rules by
substituting $z^{-1}$ for $z$, taking the inverse $z$-transform,
and substituting $-n$ for $n$ in the result
\item [8.] complex cepstrum:
$\displaystyle {\cal Z}^{-1} \{ \log X(z) \} \rightarrow
-{1 \over n} {\cal Z}^{-1}
\left\{ {z \over X(z)} {d \over dz} X(z) \right\}$
\item [*9.] apply the inverse $z$-transform to the first $N$ terms
of a series expansion about $z=0$
\end{itemize}
\caption{Strategies for inverse $z$-transforms. An asterisk
means that once the rule is applied to an expression,
it will no longer be applied to any part of that expression.}
\end{quote}
\end{table}
As an example, consider the complex cepstrum of a weighted sum of two
impulses, $\delta[n] + a \, \delta[n-3]$.
(The \dt{complex cepstrum} of a sequence is the
inverse \zt\ of the logarithm of the \zt\ of the sequence.)
Thus, we are interested in the inverse $z$-transform of
$\log ( 1 + a z^{-3} )$.
In the first pass through the rule base, none of the transform pairs
and none of the property rules apply.
However, the upsampling structure rule does apply, since
the greatest common divisor of all exponents of {\tt z} in
{\tt Log [ 1 + a z\verb:^:-3 ]} is 3.
This rule fires and rewrites the inverse transform as
the upsampling by three of the inverse \zt\ of
{\tt Log [ 1 + a z\verb:^:-1 ]}.
This can now be inverse-transformed by table lookup (Table 5), giving
\verb+Upsample[3,n][- (-1)^n a^n Step[n-1]/n]+.
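The result can be checked against the power series of the logarithm:
for $|a z^{-3}| < 1$,
\[
\log ( 1 + a z^{-3} ) = \sum_{m=1}^{\infty}
{ {(-1)^{m+1} a^{m}} \over m } \, z^{-3m},
\]
so the inverse transform equals $-(-1)^{m} a^{m}/m$ at $n = 3m$ for
$m \ge 1$ and zero elsewhere, which is precisely the upsampled
sequence above.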
In addition to the \verb$Dialogue$ option, \verb$InvZTransform$
accepts the option \verb$Terms$.
If set to a positive integer, this causes \verb$InvZTransform$
to attempt a series expansion as a last resort, as shown
in Figure 1.
%%%%
\begin{figure}
\begin{center}
\begin{minipage}{4.0in}{
\begin{verbatim}
In[3]:= InvZTransform[ Exp[Tan[z]], z, n,
Terms -> 5, Dialogue -> True ]
\end{verbatim}
}
\end{minipage}
\end{center}
\begin{center}
\begin{minipage}{4.0in}{
\begin{verbatim}
Tan[z]
Breaking up the expression E
into its series representation
2 3 4 5
z z 3 z 37 z 6
1 + z + -- + -- + ---- + ----- + O[z]
2 2 8 120
The inverse z-transform will now be
applied to the first 5 terms.
Out[3]= Impulse[n] + Impulse[1 + n] +
Impulse[2 + n] Impulse[3 + n]
> -------------- + -------------- +
2 2
3 Impulse[4 + n] 37 Impulse[5 + n]
> ---------------- + -----------------
8 120
\end{verbatim}
}
\end{minipage}
\end{center}
\caption{An example of series expansion}
\end{figure}
\subsection*{Solving Difference Equations}
A common use of the \zt\ is to solve difference equations.
\verb:ZSolve:, which is defined in the context
\verb$SignalProcessing`Digital`$, solves
linear difference equations with constant coefficients:
%%%%
\begin{center}
\begin{minipage}{6.0in}
\begin{verbatim}
In[4]:= ZSolve[ y[n] - 3/2 y[n - 1] + 1/2 y[n - 2] == (1/4)^n Step[n],
y[n], y[-1] -> 4, y[-2] -> 10, Dialogue -> False ]
1 n 1 n
(2 + (-) + 3 (-) ) Step[n]
4 2
Out[4]= ---------------------------
3
\end{verbatim}
\end{minipage}
\end{center}
%%%%
Here, we have asked for no justification ({\tt Dialogue -> False})
and implied that the solution is right-sided, since the default value
of the {\tt RightSided} option is {\tt True}.
Because the solution is assumed to be right-sided, the
{\tt Step} term in the driving function is not necessary.
In this case, a full set of initial conditions is given.
In general, initial conditions are optional and the missing ones
are considered to be zero.
When solving a difference equation, {\tt ZSolve} substitutes the
initial conditions into the left-hand side of the difference equation.
This gives rise to impulse functions on the right-hand side of the
difference equation that account for behavior before the driving function
turns on.
This approach enables the bilateral \zt\ to solve the augmented
difference equation completely.
\begin{quote}
{\small
The bilateral approach to solving difference equations is first to assume
that all missing initial conditions equal zero.
Then, augment the difference equations to account for the initial conditions.
Finally, transform each side, solve for the output transform $Y(z)$, and
inverse transform to obtain the sequence $y[n]$.
For example, the difference equation
%
\begin{equation}
\label{`example_diffequ'}
y[n] - {3 \over 2} y[n-1] + {1 \over 2} y[n-2] = \left( {1 \over 4} \right)^{n} u[n]
\mbox{~~such that~~} y[-1] = 4 \mbox{~and~} y[-2] = 10
\end{equation}
%
provides all of the initial conditions so none have to be assumed zero.
Substituting the initial conditions into Equation~\ref{`example_diffequ'}
gives
%
\[
\begin{array}{lll}
\mbox{for $n$ = -1:} & \mbox{~~~~} &
y[-1] - {3 \over 2} y[-2] + {1 \over 2} y[-3] =
(4) - {3 \over 2} (10) + {1 \over 2} (0) = -11 \\
\mbox{for $n$ = -2:} & &
y[-2] - {3 \over 2} y[-3] + {1 \over 2} y[-4] =
(10) - {3 \over 2} (0) + {1 \over 2} (0) = 10
\end{array}
\]
%
The augmented difference equation, therefore, is
%
\begin{equation}
y[n] - {3 \over 2} y[n-1] + {1 \over 2} y[n-2] =
\left( {1 \over 4} \right)^{n} u[n] - 11 \delta[n+1] + 10 \delta[n+2]
\end{equation}
%
which can be solved by taking the \zt\ of both sides, solving for $Y(z)$,
and inverse transforming the result to obtain $y[n]$. [Evans and McClellan 1992]
}
\end{quote}
\subsection*{The Discrete-Time Fourier Transform}
The \oned/ discrete-time Fourier transform (DTFT) transforms a
sequence $x[n]$ into a function $X(e^{j \omega})$ of a continuous
variable $\omega$, called frequency:
$$
X(e^{j \omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j \omega n}.
$$
The inverse transformation is given by
$$
x[n] = {1 \over {2 \pi}} \int\limits_{-\pi}^\pi X(e^{j \omega}) e^{j \omega n} d\omega
$$
[Oppenheim and Schafer 1989].
Note that the Fourier transform is periodic in $\omega$ with period $2 \pi$.
The equation for the forward DTFT is the same as for the forward \zt,
with $z = \exp(j \omega)$, so it is tempting to think that the DTFT
can always be obtained from the \zt.
But this is not so, for two reasons: first, some sequences,
such as $\sinc n$, have a DTFT but not a \zt.
Second, even when the sequence does have a $z$-transform, the
substitution $z = \exp(j \omega)$ is valid only when the unit circle
lies in the region of convergence.
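For example, $a^{n} u[n]$ with $|a| < 1$ has region of convergence
$|z| > |a|$, which contains the unit circle, so the substitution is
legitimate:
\[
X(e^{j \omega}) = \left. {1 \over {1 - a z^{-1}}} \right|_{z = e^{j \omega}}
= {1 \over {1 - a e^{-j \omega}}}.
\]
For $|a| > 1$, the unit circle lies outside the region of convergence
and the sequence has no DTFT.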
The context \verb$SignalProcessing`Digital`$ provides the command {\tt
DTFTransform}, which first checks a special DTFT rule base, then
applies the \zt\ rule base (the substitution $z = \exp(j \omega)$
rewrites the \zt\ as a DTFT).
The specific DTFT rule base contains transform pairs of
forms that do not have {\zt}s, and rules describing downsampling and
upsampling.
(A downsampled signal does not have a \zt, and the
upsample operator behaves differently in the frequency domain.)
The only redundancy between the two rule bases is the recasting of three
property rules (homogeneity, additivity, and multiplication by $n$)
and one strategy rule (expand all terms).
The inverse discrete-time Fourier transform command, {\tt
InvDTFTransform}, is implemented in a similar manner.
Since both DTFT rule bases rely on the \zt\ rule bases,
the DTFT works in several dimensions, for one- and multi-sided signals.
The inverse DTFT properly handles many non-separable transforms.
\subsection*{The Discrete Fourier Transform}
When the sequence $x[n]$ is finite---say it is defined for only $N$
points, $0\le n\le N-1$---then $N$ samples of the Fourier
transform, uniformly distributed through the period, give enough
information to reconstruct the original signal.
This new sequence of samples is called the discrete Fourier transform (DFT).
Thus the \oned/ DFT $X[k]$ of the $N$-point sequence $x[n]$ is given by
$$
X[k] = \sum_{n=0}^{N - 1} x[n] e^{- j 2 \pi k n / N},
$$
and the inverse transformation is
$$
x[n] = {1 \over N} \sum_{k=0}^{N - 1} X[k] e^{j 2 \pi k n / N}.
$$
The symbolic DFT can be implemented either by this formula, or as the
DTFT of $x[n]$ over the range $n = 0, 1, \ldots, N-1$, followed by
sampling at the points $\omega_k = 2 \pi k / N$, for $k = 0, 1, \ldots, N-1$.
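For example, the constant sequence $x[n] = 1$, $0 \le n \le N-1$,
has the DFT
\[
X[k] = \sum_{n=0}^{N-1} e^{-j 2 \pi k n / N}
= \left\{ \begin{array}{ll}
N, & k = 0 \\
0, & k = 1, \ldots, N-1,
\end{array} \right.
\]
since the geometric sum vanishes whenever $e^{-j 2 \pi k / N} \neq 1$.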
Since the DFT is discrete in both the time and frequency domains, it
can be computed and stored numerically, as well as symbolically.
The {\tt Fourier} function provided with the 1.2 release of \math/
implements the numerical DFT, whereas the \verb+SignalProcessing`Digital`+
function \verb$DFTransform$ implements a symbolic DFT.
Given a sequence, this function first takes its DTFT, then
factors the result, if possible, into the product of a real-valued amplitude
and a phase function.
For example:
%%%%
\begin{center}
\begin{minipage}{5.0in}
\begin{verbatim}
In[5]:= DFTransform[Exp[I 2 n], 10, n, k]
I (9 - 9 / 10 Pi k)
E Sin[ 10 - Pi k ]
Out[5]= DFTData[ --------------------------------------,
Pi k
Sin[ 1 - ------ ]
10
Start[0], Finish[9], FVariables[k] ]
\end{verbatim}
\end{minipage}
\end{center}
%%%%
The phase function here is the exponential, and the amplitude function
is a ratio of two sinusoids.
In fact, the amplitude is none other than the aliased sinc function
introduced in Table 1.
In discrete time, this function is the DTFT of an $N$-point rectangular
window.
Like the forward DFT, the inverse DFT rule base relies on a DTFT rule base.
{\tt InvDFTransform} operates on finite-extent sequences and
requires three arguments: $X[k]$, $N$, and $k$.
Its only strategy rewrites the sampled frequency response into a continuous
one and then calls the inverse DTFT.
\subsection*{Putting it all together}
The \verb$SignalProcessing`Digital`$ packages provide a high-level
function, \verb$DSPAnalyze$, which exercises most of the features
discussed above.
This function, like \verb$ZTransform$, requires two arguments: the expression
to analyze and the variable(s) involved.
Optional third and fourth arguments specify the domain over which
to plot the discrete-time function.
Only one- and \twod/ signals can be plotted.
\verb$DSPAnalyze$ finds the signal's $z$-transform, if it exists,
and gives a stability criterion based on the transform's region of
convergence.
If the $z$-transform does not exist, the frequency
response is obtained using {\tt DTFTransform}.
Using extensions to {\MATH}'s graphics abilities, \verb$DSPAnalyze$
plots the expression, its magnitude response, and its phase response.
For \oned/ and separable \twod/ expressions, it plots the pole-zero diagram,
and for non-separable \twod/ expressions, it plots the pole and zero root
locus in each dimension.
As an example, consider the \oned/ signal \verb$Upsample[3,n][Sinc[n]]$.
This expression does not have a $z$-transform (which causes the generation
of many warning messages), but it does have a Fourier
transform, so the magnitude and phase response can be computed and plotted.
The magnitude response is periodic of period $2\pi/3$ (a period
of $2\pi/k$ is characteristic of signals upsampled by a factor of $k$),
and the phase response is zero, so it is omitted here, although
{\tt DSPAnalyze} plots it.
%%%%
\begin{center}
\begin{minipage}{5.5in}
\begin{verbatim}
In[6]:= DSPAnalyze[ Upsample[3,n][Sinc[n]], n, -20, 20 ]
[graphs/ex1.time.ps]
1
Power::infy: Infinite expression - encountered.
0
Transform::incomplete:
1
The rule base could not compute the forward z-transform of -
n
with respect to n.
1
The function - does not have a valid z-transform with respect to n
n
\end{verbatim}
{\em ZTransform::notvalid:} \\
\hspace*{0.5in}{\em The forward z-transform could not be found.}
\begin{verbatim}
Upsample [Sinc[n]]
3,0,n
has the following frequency response:
ScaleAxis [Pi (-CStep[-1 + w] + CStep[1 + w])]
3,w
[graphs/ex1.mag.ps]
Out[6]= -Incomplete z-Transform-
\end{verbatim}
\end{minipage}
\end{center}
As a second example, consider the two-variable signal
%%%%
\[
{ {(n_1 + n_2)!} \over {n_{1}! n_{2}!} }
a^{n_1} b^{n_2} u[n_1, n_2],
\]
%%%%
with $a={1\over 2}$ and $b={4\over 5}$.
The increasing nature of the discrete domain plot hints at instability,
but the \zt\ cannot determine this.
Both pole root maps, however, do indicate instability because
some of the poles in the loci reside outside the unit circle.
Dudgeon and Mersereau [1984] have shown that a function of this form
is stable if and only if $| a | + | b | < 1$; here, $|a| + |b| = {13 \over 10} > 1$.
Because the time function is not stable, and because the \zt\ is used to
compute the Fourier transform, the Fourier transform and the
frequency response are physically meaningless.
For this reason we omit the magnitude and phase response graphs from
the output.
\begin{verbatim}
In[7]:= DSPAnalyze[ (n1 + n2)! (1/2)^n1 (4/5)^n2 Step[n1,n2] / (n1! n2!),
{n1, n2} ]
[graphs/ex2.time.ps]
1 n1 4 n2
(-) (-) (n1 + n2)! Step[n1] Step[n2]
2 5
----------------------------------------
n1! n2!
has the following z-transform:
-10
-------------
5 8
-10 + -- + --
z1 z2
The region of convergence is:
1 4 1
- < |z1| < Infinity - Abs[--------] < |z2| < Infinity
2 5 1
1 - ----
2 z1
The system is not stable.
The zeroes are: {}
I w
-5. 2.71828
The poles are: {-------------------}
I w
8. - 10. 2.71828
\end{verbatim}
\newpage
\begin{verbatim}
[graphs/ex2.pole1.ps]
The zeroes are: {}
I w
-8. 2.71828
The poles are: {-------------------}
I w
5. - 10. 2.71828
[graphs/ex2.pole2.ps]
1 n1 4 n2
(-) (-) (n1 + n2)! Step[n1] Step[n2]
2 5
----------------------------------------
n1! n2!
has the following frequency response:
-10
-------------------------
-I w1 -I w2
-10 + 5 E + 8 E
1
4 Abs[--------]
1
1 - ----
-10 1 2 z1
Out[7]= ZTransData[-------------, Rminus[{-, ---------------}],
5 8 2 5
-10 + -- + --
z1 z2
> Rplus[{Infinity, Infinity}], ZVariables[{z1, z2}]]
\end{verbatim}
\section*{Analog Signal Processing}
The packages making up the context \verb$SignalProcessing`Analog`$
implement the Laplace transform and the (continuous-time) Fourier transform.
Given a continuous-time function $f(t)$, its bilateral Laplace
transform $F(s)$ is defined as
\[
F(s) = \int\limits_{-\infty}^{+\infty} \; f(t) \; e^{- s t} \; dt
\; \; \; \; \mbox{and} \; \; \; \;
f(t) = {1 \over {2 \pi j}} \int\limits_{\sigma - j \infty}^{\sigma +
j \infty} \; F(s) \; e^{s t} \; ds
\]
\noindent
[Oppenheim and Willsky 1983].
The region of convergence for the Laplace transform is a strip of the
form $\{ s \in {\bf C} : R_{-} < \Re{e} \; s < R_{+} \}$.
In the inverse transform, the integration is along the vertical line
$\Re{e} \; s = \sigma$, chosen to lie inside the strip of convergence;
when the contour is closed, it encloses either the half-plane
$\Re{e} \; s < \sigma$ or the half-plane $\Re{e} \; s > \sigma$.
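As a simple example, the causal exponential $e^{-at} u_{-1}(t)$ has
\[
F(s) = \int\limits_{0}^{\infty} \; e^{-at} \; e^{-st} \; dt
= {1 \over {s + a}}, \quad \Re{e} \; s > -\Re{e} \; a,
\]
so $R_{-} = -\Re{e} \; a$ and $R_{+} = +\infty$; this is the same form of
strip of convergence that {\tt ASPAnalyze} reports later in this section.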
\par The continuous-time Fourier transform is a function of a real variable
$\omega$:
\[
F(\omega) = \int\limits_{-\infty}^{+\infty} \; f(t) \; e^{- j \omega t} \; dt
\; \; \; \; \mbox{and} \; \; \; \;
f(t) = {1 \over {2 \pi}} \int\limits_{-\infty}^{+\infty} \; F(\omega)
\; e^{j \omega t} \; d\omega
\]
The corresponding commands are \verb$LaPlace$, \verb$InvLaPlace$,
\verb$CTFTransform$ and \linebreak \verb$InvCTFTransform$.
Like their digital counterparts, these commands are implemented using
rule bases.
The organization of the forward Laplace transform rule
base follows the same outline as the forward \zt\ rule base (see Table
4).
There are more transform pairs and property rules than in the
\zt\ rule base, but fewer structure (operator) rules.
The strategies are the same, except that strategy 4 of Table 9 does not
apply in the continuous case.
Similarly, the {\tt InvLaPlace} rule base has the same
structure as the {\tt InvZTransform} rule base, but contains more transform
pairs and fewer operator rules.
{\tt LaPlace} and {\tt InvLaPlace} are more comprehensive than the
Laplace-transform commands provided with the 1.2 release of \MATH,
which support only one-dimensional, one-sided Laplace transforms, and
do not keep track of the region of convergence.
Both versions of the Laplace transform, however, can be loaded into
\MATH\ at the same time, since the bilateral and unilateral transforms
follow different naming conventions.
The continuous-time analogs of {\tt DTFTransform}, {\tt InvDTFTransform},
and {\tt ZSolve} are \linebreak {\tt CTFTransform}, {\tt InvCTFTransform}, and
{\tt LSolve}.
The forward Fourier transform command {\tt CTFTransform}
relies on {\tt LaPlace}, and the inverse Fourier transform command
{\tt InvCTFTransform} is built on top of {\tt InvLaPlace}.
Because their definitions are very similar, {\tt CTFTransform} and
{\tt InvCTFTransform} share the same rule base.
{\tt LSolve} solves linear differential equations with constant
coefficients; it takes the equation and the unknown function as arguments
and the initial conditions and justification level as options.
For example, the differential equation
$y''(t) + {3 \over 2} y'(t) + {1 \over 2} y(t) = \exp(a t) u_{-1}(t)$
with initial conditions $y'(0) = {1 \over 2}$ and $y(0) = 4$ is solved as
follows:
\begin{verbatim}
In[8]:= LSolve[ y''[t] + 3/2 y'[t] + 1/2 y[t] == Exp[a t] CStep[t],
y[t], y'[0] -> 1/2, y[0] -> 4, Dialogue -> False ]
Out[8]= (-10 - 36 a) Exp[- t/2] CStep[t] / (2 (-1 - 2 a)) +
(3 + 5 a) Exp[-t] CStep[t] / (-1 - a) -
2 Exp[a t] CStep[t] / (-1 - 3 a - 2 a^2)
\end{verbatim}
The function \verb$ASPAnalyze$ is the analog counterpart of \verb$DSPAnalyze$.
As an example of its use, consider the function
$$
t e^{-at}\cos \left(3\pi t\over 16\right) u_{-1}(t).
$$
To be able to plot the function, {\tt ASPAnalyze} substitutes a
default value of 1 for all unknown parameters ({\tt a} in this case).
The Laplace transform is a rational polynomial in $s$ with two poles,
each with multiplicity 2.
In the pole-zero diagram, a single zero is represented as {\tt O}
and a single pole as {\tt X}; multiple poles or zeroes appear as
{\tt *X*} or {\tt *O*}.
\begin{verbatim}
In[9]:= ASPAnalyze [ t Exp[- a t] Cos[3 Pi t / 16] CStep[t], t, 0, 3 Pi ]
For plotting only, these symbols will
be assigned a value of 1: {a}
Real part of the function is shown as solid lines.
Imaginary part of the function is shown as dashed lines.
[graphs/ex4.time.ps]
3 Pi t
t CStep[t] Cos[------]
16
----------------------
a t
E
has the following Laplace transform:
\end{verbatim}
\newpage
\begin{verbatim}
2
-2 (a + s) 1
-(------------------- + ----------------)
2 2
9 Pi 2 2 9 Pi 2
(----- + (a + s) ) ----- + (a + s)
256 256
The strip of convergence is: -Re[a] < Re(s) < Infinity
The system is stable if Re[a] > 0
The zeroes are: {-1.58905, -0.410951}
The poles are: {-1. + 0.589049 I, -1. - 0.589049 I,
-1. + 0.589049 I, -1. - 0.589049 I}
[graphs/ex4.pole.ps]
3 Pi t
t CStep[t] Cos[------]
16
----------------------
a t
E
has the following frequency response:
2
1 2 (a + I w)
-(------------------ - ---------------------)
2 2
9 Pi 2 9 Pi 2 2
----- + (a + I w) (----- + (a + I w) )
256 256
[graphs/ex4.mag.ps]
[graphs/ex4.pha.ps]
2
-2 (a + s) 1
Out[9]= LTransData[-(------------------- + ----------------),
2 2
9 Pi 2 2 9 Pi 2
(----- + (a + s) ) ----- + (a + s)
256 256
> Rminus[-Re[a]], Rplus[Infinity], LVariables[s]]
\end{verbatim}
Finally, here is an example with Dirac delta functions.
The time-domain plot represents them with arrowheads,
a common engineering convention.
The value of the global variable {\tt \$DeltaFunctionScaling} determines how
Delta functions are plotted: {\tt None} means that every Delta function
has the same height as the plot itself, and
{\tt Scaled} (the default) means that the height of each Delta function
equals its area.
This function has a Laplace transform but no poles or zeroes,
so a pole-zero diagram is not displayed.
{\tt ASPAnalyze} does plot its magnitude and phase frequency responses,
although we have omitted them here.
\begin{verbatim}
In[10]:= ASPAnalyze [ Delta[t - 1/2] + Delta[t - 3/2], t, 0, 2 ]
Real part of the function is shown as solid lines.
Imaginary part of the function is shown as dashed lines.
[graphs/ex5.time.ps]
3 1
Delta[-(-) + t] + Delta[-(-) + t]
2 2
has the following Laplace transform:
s
1 + E
--------
(3 s)/2
E
The strip of convergence is: -Infinity < Re(s) < Infinity
The system is stable.
\end{verbatim}
{\em PoleZeroPlot::notrational:} \\
\hspace*{0.5in}{\em Transform is not a rational polynomial.} \\
{\em PoleZeroPlot::noplot:} \\
\hspace*{0.5in}{\em A pole-zero plot cannot be generated.}
\begin{verbatim}
3 1
Delta[-(-) + t] + Delta[-(-) + t]
2 2
has the following frequency response:
(-3 I)/2 w I w
E (1 + E )
\end{verbatim}
{\em ASPAnalyze::notinteresting:} \\
\hspace*{0.5in}
{\em Could not determine the important section of the frequency response.}
\begin{verbatim}
.
.
.
s
1 + E
Out[10]= LTransData[--------, Rminus[-Infinity],
(3 s)/2
E
> Rplus[Infinity], LVariables[s]]
\end{verbatim}
\section*{Graphical Tools}
The signal processing packages introduce new objects to create plots
that follow engineering convention.
One- and two-dimensional sequences are displayed in ``lollipop'' style
[Oppenheim and Schafer 1989] via {\tt SequencePlot}, whereas {\tt SignalPlot}
graphs one- and two-dimensional continuous signals.
Both {\tt SignalPlot} and {\tt SequencePlot} support the same options
as does the standard plotting function {\tt Plot}.
{\tt PoleZeroPlot} displays pole-zero diagrams for one- and two-dimensional
transfer functions.
This function supports only one option, {\tt Dialogue}, which, when set to
{\tt True} or {\tt All}, prints the values of the
poles and zeroes before they are plotted.
{\tt RootLocus} and {\tt MagPhasePlot} generate root locus plots
and plot magnitude/phase responses, respectively.
{\tt MagPhasePlot} displays continuous-time as well as
discrete-time Fourier transforms as long as the options are set properly.
The default options are biased toward DTFT's:
\verb+Domain -> Continuous+, \verb+DomainScale -> Linear+,
\verb+MagRangeScale -> Linear+, \verb+PhaseRangeScale -> Degree+, and
\verb+PlotRange -> All+.
For CTFT's, the \verb+DomainScale+ and \verb+MagRangeScale+ options
should both be set to \verb+Log+, which gives the familiar Bode diagram.
These routines also support the same options as {\tt Plot} does.
Two of the options, {\tt PlotRange} and {\tt DisplayFunction}, are useful for
generating animations of frequency dependence on a parameter.
\section*{Signal Processing Notebooks}
Accompanying the signal processing packages are several Notebooks.
Three are tutorials covering the topics of convolution,
analog filter design, and the $z$-transform (which comes in three parts).
One, \verb:SignalProcessingIntroduction:, introduces \MATH,
signal processing, and the signal processing packages.
The remaining Notebooks serve as on-line help.
The signal processing Notebooks are listed in Table 11.
%%%%
\begin{table}
\begin{center}
\begin{tabular}{ll}
{\tt AnalogFilters} & designing/analyzing 1-D analog filters \\
{\tt DTFT} & discrete Fourier analysis \\
{\tt EducationalTool} & \tctext{3.5in}{interactive version of a
conference paper describing educational
impact of \MATH} \\
{\tt LaPlaceTest} & testing procedure for Laplace transforms \\
{\tt PiecewiseConvolution} & tutorial on discrete/continuous convolution \\
{\tt README} & brief introduction \\
{\tt SignalProcessingExamples} & \tctext{3.5in}{interactive version of paper
in {\em The Mathematica Journal}} \\
{\tt SignalProcessingIntroduction} & \tctext{3.5in}{introduction to \MATH,
signal processing, and the signal
processing packages} \\
{\tt SignalProcessingUsage} & \tctext{3.5in}{usage information about
every new object defined by the signal
processing packages} \\
{\tt zTransformI} & $z$-transform tutorial, part I \\
{\tt zTransformII} & $z$-transform tutorial, part II \\
{\tt zTransformIII} & $z$-transform tutorial, part III
\end{tabular}
\end{center}
\caption{List of the signal processing Notebooks}
\end{table}
%%%%
This section summarizes the contents of the tutorial Notebooks.
\subsection*{Piecewise Convolution Notebook}
The linear convolution of $f(t)$ and $g(t)$ is defined as
\[
f(t) \star g(t) =
\int\limits_{-\infty}^{+\infty} \; f(u) \; g(t - u) \; du
\; \; .
\]
Here, $f(t)$ and $g(t)$ are piecewise-continuous functions.
Piecewise convolution is a time-domain approach that finds the
convolution by computing the integral over each
interval where the two functions overlap as $g(t - u)$ ``slides''
through $f(u)$.
This is usually the first approach for solving convolution problems
that engineers learn.
The piecewise convolution Notebook shows the student how to solve
one-dimensional continuous-time convolution problems using
the piecewise convolution package.
In the introduction, the Notebook informs the user that a piecewise
function can be represented as a list of intervals.
Each interval is itself a list containing the function defined on the
interval, the start of the interval, and the end of the interval.
For example, the function $\triangle(t)$, a unit triangle defined
on $t \in [-1, 1]$, can be represented as an expression \\
\begin{center}
{\tt (1 + t) CPulse[1, 1 + t] + (1 - t) CPulse[1, t]}
\end{center}
\noindent
or in list form as
%{\tt \{\{1 + t, -1, 0\}, \{1 - t, 0, 1\}\}.}
\begin{center}
{\tt \{\{1 + t, -1, 0\}, \{1 - t, 0, 1\}\}.}
\end{center}
The Notebook gives several examples of piecewise convolution using
the newly defined object {\tt PiecewiseConvolution}, which
requires three arguments--- $f(t)$, $g(t)$, and $t$.
For example, the \MATH\ dialogue
%
\begin{verbatim}
In[11]:= PiecewiseConvolution[Delta[t - a], f[t], t]
Out[11]= {{ f[t - a], -Infinity, Infinity }}
\end{verbatim}
\noindent
illustrates the shifting property of the Dirac delta function.
We could have used the piecewise form of
{\tt Delta[t - a]}, which is {\tt \{\{Area[1], a, a\}\}}, in this example.
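The result {\tt Out[11]} is the sifting property of the Dirac delta written as a convolution:
\[
\delta(t - a) \star f(t)
= \int\limits_{-\infty}^{+\infty} \; \delta(u - a) \; f(t - u) \; du
= f(t - a) \; .
\]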
The Notebook also gives an example of a convolution that produces an
unstable function:
\noindent
\verb+In[12]:= PiecewiseConvolution[CStep[t], CStep[t], t]+ \\
{\it Integrate::divg: Integral does not converge} \\
\verb+Out[12]= {{ t, 0, Infinity }}+ \\
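One way to see this result by hand: for finite $t \geq 0$, the overlap integral is elementary,
\[
u_{-1}(t) \star u_{-1}(t)
= \int\limits_{-\infty}^{+\infty} \; u_{-1}(u) \; u_{-1}(t - u) \; du
= \int\limits_{0}^{t} du = t \; ,
\]
so the convolution is the ramp $\{\{t, 0, \infty\}\}$: finite for every $t$, but unbounded as $t \rightarrow \infty$, hence unstable.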
Part of computing a piecewise convolution is identifying the
overlapping intervals, usually by drawing pictures of $f(u)$ and $g(t - u)$.
So, for two simple functions,
% $f(t) = 2$ for $t \in (0,1)$ and
% $g(t) = 1$ for $t \in (0,2)$,
the Notebook shows an {\it animation} of $g(t - u)$ sliding through
$f(u)$.
Each frame shows an important interval over which to integrate, and
the reader can control the frame rate.
The animation was generated by using the {\tt Dialogue -> True} option
to {\tt PiecewiseConvolution}.
{\tt PiecewiseConvolution} also supports a {\tt Domain} option which is
either {\tt Continuous} or {\tt Discrete}.
The default is the value of {\tt \$ConvolutionDomain} which is initially
{\tt Continuous}.
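The interval bookkeeping that {\tt PiecewiseConvolution} carries out symbolically can be mimicked numerically. The Python sketch below is an illustration of the idea, not the package's algorithm: it takes functions in the same {\tt \{function, start, end\}} interval form and approximates the overlap integral with a Riemann sum.

```python
def piecewise(intervals):
    """Turn a list of (function, start, end) intervals into a callable."""
    def value(t):
        return sum(f(t) for f, a, b in intervals if a <= t < b)
    return value

def convolve(f_ints, g_ints, t, du=1e-3):
    """Riemann-sum approximation of (f * g)(t) over the support of f."""
    f, g = piecewise(f_ints), piecewise(g_ints)
    lo = min(a for _, a, _ in f_ints)
    hi = max(b for _, _, b in f_ints)
    steps = int((hi - lo) / du)
    return sum(f(lo + k * du) * g(t - (lo + k * du))
               for k in range(steps)) * du

# A unit pulse on [0, 1) convolved with itself gives a unit triangle on [0, 2].
pulse = [(lambda t: 1.0, 0.0, 1.0)]
```

The symbolic package returns the exact interval list instead of sampled values, but the sliding-overlap picture is the same.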
\subsection*{Analog Filter Design Notebook}
Linear filters are certainly the workhorse of signal processing.
The signal processing packages allow a user to design Butterworth,
Chebyshev (type I and type II), elliptic, and Bessel filters using
the object {\tt DesignFilter}.
For lowpass and highpass filters, {\tt DesignFilter} takes eight arguments:
{\tt Analog}, filter class ({\tt Butterworth}, $\ldots$),
filter type ({\tt Lowpass}, $\ldots$),
time variable, $\delta_1$, $\delta_2$, $w_1$, and $w_2$.
Two additional arguments, $w_3$ and $w_4$, are necessary for specifying
a bandpass or bandstop filter.
Frequencies are in rad/sec and given in ascending order
(e.g., for a lowpass filter, $w_1$ is $w_p$ and $w_2$ is $w_s$).
The magnitude response lies in $[1 - \delta_1, 1]$
over all passbands and in $[0, \delta_2]$ over all stopbands.
Since the Laplace and Fourier rule bases can transform the
time-domain expressions returned by {\tt DesignFilter},
the pole-zero diagram and frequency response of a filter
can be plotted using {\tt PoleZeroPlot} and {\tt MagPhasePlot}.
The Notebook introduces the topic of filter design and guides the reader
through several design examples.
These examples include a lowpass design example for each filter class,
as well as a bandpass, bandstop, and highpass design example.
Using animation, the student gets a chance to visualize the changes
in the filter's magnitude response as one design parameter varies
(two animations are provided--- one for ripple control parameter $\epsilon$
and one for the filter order $N$).
The last section of the Notebook examines the effects of precision
in implementation--- the student can truncate the coefficients
of the filter and observe the frequency response.
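As a worked example of the computation behind a lowpass design, the classical Butterworth order bound follows from the magnitude-squared response $|H(j\omega)|^2 = 1/(1 + (\omega/\omega_c)^{2N})$ together with the $\delta_1$, $\delta_2$ specifications above. The Python sketch below states that textbook bound; it is not necessarily the procedure {\tt DesignFilter} uses internally.

```python
import math

def butterworth_order(d1, d2, wp, ws):
    """Smallest Butterworth order N with |H(j wp)| >= 1 - d1 and
    |H(j ws)| <= d2, given |H(j w)|^2 = 1 / (1 + (w/wc)^(2N))."""
    A = 1.0 / (1.0 - d1) ** 2 - 1.0  # bound on (wp/wc)^(2N) in the passband
    B = 1.0 / d2 ** 2 - 1.0          # bound on (ws/wc)^(2N) in the stopband
    return math.ceil(math.log(B / A) / (2.0 * math.log(ws / wp)))
```

For example, $\delta_1 = \delta_2 = 0.1$ with $w_p = 1$ and $w_s = 2$ rad/sec requires a fifth-order filter.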
\subsection*{The $z$-Transform Notebook}
The objects {\tt ZTransform} and {\tt InvZTransform} compute
the forward and inverse $z$-transform, respectively, according to
the standard definitions [12]:
\[
{\cal Z} \{ x[n] \} =
\sum_{n = -\infty}^{\infty} x[n] z^{-n} = X(z)
\]
\[
{\cal Z}^{-1} \{ X(z) \}
= {1 \over {2 \pi j}} \oint_{C} X(z) z^{n-1} dz = x[n] \; .
\]
\noindent
The forward rule base works for 1-D and $m$-D signals,
whereas the inverse rule base only inverts 1-D and separable $m$-D
transforms (\MATH\ simply cannot perform the multiple contour
integrations required in the general $m$-D case).
Since the rule bases implement the bilateral form of the $z$-transform,
{\tt ZTransform} tracks the region of convergence (ROC) so that
{\tt InvZTransform} can uniquely determine the inverse.
The $z$-transform Notebook comes in three parts because of its size.
Part I defines the bilateral $z$-transform and discusses the importance
of the ROC in uniquely defining the transform.
Discussion of the ROC leads naturally into introducing stability and
causality of a signal based on the location of its poles relative
to the ROC.
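The standard textbook example illustrating why the ROC is needed is the pair of geometric sequences that share one algebraic transform:
\[
{\cal Z} \{ a^n \, u[n] \} = {z \over {z - a}} \, , \;\; |z| > |a| \, ,
\qquad \mbox{but} \qquad
{\cal Z} \{ -a^n \, u[-n-1] \} = {z \over {z - a}} \, , \;\; |z| < |a| \, ,
\]
so only the ROC distinguishes the causal sequence from the anticausal one.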
Part II links the $z$-transform with the discrete-time Fourier
transform (DTFT) and includes several animations relating pole-zero
diagrams to magnitude frequency responses, a basic idea behind
digital filter design [Oppenheim \& Schafer, 1989, ch.~5].
A figure in Part II, for example, shows one animated frame for a three-pole,
two-zero filter whose magnitude response is being evaluated at a
frequency of $\pi$ rad.
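The geometric idea behind these animations is that the magnitude response at $z = e^{j\omega}$ equals the product of the distances from $e^{j\omega}$ to the zeroes divided by the product of the distances to the poles. A small Python sketch of that rule (illustrative only, unrelated to the package code):

```python
import cmath

def magnitude(zeros, poles, w, gain=1.0):
    """|H(e^{j w})| as distances to zeroes over distances to poles."""
    point = cmath.exp(1j * w)  # evaluation point on the unit circle
    num = 1.0
    for z in zeros:
        num *= abs(point - z)
    den = 1.0
    for p in poles:
        den *= abs(point - p)
    return gain * num / den

# One pole at z = 0.5 and one zero at the origin: H(z) = z / (z - 0.5).
```

Sweeping $\omega$ from $0$ to $\pi$ with this function traces the same magnitude curve the animation evaluates frame by frame.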
Scattered throughout the Notebook are several examples and
exercises with solutions, many of which demonstrate high-level
digital signal analysis using {\tt DSPAnalyze}, the discrete-time
version of \mbox{\tt ASPAnalyze}.
Part III concludes the $z$-transform Notebook with two animations showing the
deterioration of the magnitude response of a fourth-order elliptic filter
when the locations of the poles are slightly perturbed.
\section*{Conclusion}
The packages described in this article are a first step toward building
a comprehensive signal processing environment in \MATH.
We have implemented the common signal processing transforms
and other rudimentary analysis capabilities.
But \MATH\ has the potential to do much more.
We have developed packages that aid in filter design and perform
discrete and continuous convolution in one dimension.
We are developing an exhaustive collection of windowing functions,
which are also important in time series analysis.
Other packages under development will take further the idea of
representing signals and systems as abstract objects,
empowering \MATH\ to reason about properties of
signals and transforms, to run simulations of systems, and
to find optimal implementations of signal processing expressions.
The ultimate goal is to have \MATH\ facilitate fast prototyping of algorithms
by first optimizing an algorithm (subject to design specifications and the
limitations of the target architecture) and then generating the equivalent
code for that architecture.
This paper also describes how students could use \MATH\ to supplement
classroom lectures and laboratory experience.
Using Notebooks, students can load the signal processing packages and
tackle homework problems in linear systems while documenting the solution
as they go.
Notebooks can also serve as tutorials.
We have written four Notebooks covering the topics of piecewise convolution,
analog filter design, Laplace transform, and the $z$-transform.
The Notebooks are currently in use in several engineering and mathematics
departments, including those at Rose-Hulman Institute of Technology,
Georgia Institute of Technology, the University of Pennsylvania,
and Pennsylvania State University.
Although some schools are integrating \MATH\ into their
curricula, we hope that more schools will encourage students to use it.
\section*{Acknowledgements}
The authors would like to thank Drs.\ Silvio Levy, Paul Abbott, and
Bruce Sawhill at Wolfram Research Inc.\ for their helpful comments
and suggestions.
The authors would also like to recognize some of the pioneers
in the field of symbolic signal processing:
Michele Covell, Gary Kopec, Cory Myers, and Alan Oppenheim.
\section*{References}
\begin{enumerate}
\item Clements, Mark, and Jerrold Pease. 1989.
``On Causal Linear Phase IIR Digital Filters,''
{\em IEEE Transactions on Acoustics, Speech, and Signal Processing},
{\bf 37} (4), pp.\ 479--484.
%
\item Churchill, Ruel. 1958. {\em Operational Mathematics}.
McGraw-Hill, New York.
%
\item Covell, Michele. 1989.
{\em An Algorithm Design Environment for Signal Processing}.
Ph.\ D.\ Thesis and RLE Tech.\ Rep.\ \#549.
MIT, Cambridge (MA).
%
\item Dudgeon, Dan, and Russell Mersereau. 1984.
{\em Multidimensional Digital Signal Processing}.
Prentice Hall, Englewood Cliffs (NJ).
%
\item Evans, Brian L., James H.\ McClellan, and Wallace B.\ McClure. 1990.
``Symbolic Transforms with Applications to Signal Processing.''
{\em The Mathematica Journal}. vol.\ 1, no.\ 2 (Fall), pp.\ 70--80.
%
\item Evans, Brian L., James H.\ McClellan, and Kevin A.\ West. 1991.
``\MATH\ as an Educational Tool for Signal Processing.''
{\em IEEE Southeastern Conference}. pp.\ 1162--1166.
%
\item Evans, Brian L., and James H.\ McClellan. 1992.
``Symbolic Analysis of Signals and Systems.''
ch.\ 3 of {\em Symbolic and Knowledge-Based Signal Processing}.
eds.\ A.\ Oppenheim and H.\ Nawab.
Prentice Hall, Englewood Cliffs (NJ).
%
\item Evans, Brian L., Lina J.\ Karam, Kevin A.\ West, and J.\ H.\ McClellan,
``Learning Signals and Systems with {Mathematica}.''
{\em IEEE Transactions on Education}. vol.~36, pp.~72--78, Feb.\ 1993.
%
\item Evans, Brian L.
``A Knowledge-Based Environment for the Design and Analysis of
Multidimensional Multirate Signal Processing Algorithms.''
Ph.\ D.\ Thesis. Georgia Institute of Technology. Atlanta, GA.
June, 1993.
%
%\item Harris, Fredric J.
% ``On the Use of Windows for Harmonic Analysis with the DFT''.
% {\em IEEE Proceedings}. Jan., 1978. pp.\ 51-83.
%
\item Kopec, Gary. 1980.
``The Representation of Discrete-Time Signals and Systems
in Programs.'' Ph.\ D.\ Thesis. MIT, Cambridge (MA).
%
\item Kopec, Gary. 1985. ``The Signal Representation Language SRL.''
{\em IEEE Transactions on Acoustics, Speech, and Signal Processing},
{\bf 33} (4), pp.\ 921--932.
%
\item Lathi, B. P. 1983.
{\em Modern Digital and Analog Communication Systems}.
Holt, Rinehart, and Winston, New York.
%
\item Muth, Eginhard. 1978. {\em Transform Methods}.
Prentice Hall, Englewood Cliffs (NJ).
%
\item Myers, Cory S. 1986.
``Signal Representation for Symbolic and Numeric Processing''.
Ph.\ D.\ Thesis and RLE Tech.\ Rep.\ \#521.
MIT, Cambridge (MA).
%
\item Oberhettinger, Fritz, and Larry Badii. 1973.
{\em Tables of Laplace Transforms}.
Springer-Verlag, New York.
%
\item Oppenheim, Alan, and Alan Willsky. 1983.
{\em Signals and Systems}.
Prentice Hall, Englewood Cliffs (NJ).
%
\item Oppenheim, Alan, and Ronald Schafer. 1989.
{\em Discrete-Time Signal Processing}.
Prentice Hall, Englewood Cliffs (NJ).
\end{enumerate}
\newpage
\section*{Appendix A: Options for the Transform Rule Bases}
All 10 transform rule bases support the {\tt Dialogue} and
{\tt TransformLookup} options.
The {\tt Dialogue} option indicates how much information the
transform rule bases give to the user:
{\tt False} means to print no information,
{\tt True} means to report which strategies are used and which
parts of the expression (if any) could not be transformed,
and {\tt All} means to display each step of the transformation process.
(This {\tt Dialogue} option is also supported by the difference and
differential equation solvers {\tt ZSolve} and {\tt LSolve}.)
The {\tt TransformLookup} option allows users to specify their own
transform pairs.
This means that users can transform general (undefined) signals.
For example, letting $y(t) \leftrightarrow Y(s)$:
\begin{verbatim}
In[15]:= LaPlace[ y''[t] CStep[t], t, s, TransformLookup -> {y[t] :> Y[s]} ]
2
Out[15]= LTransData[ s Y[s] - s y[0] - y'[0], Rminus[0],
Rplus[Infinity], LVariables[s] ]
\end{verbatim}
(The region of convergence is significant because
${\cal L} \{ y''(t) \} \leftrightarrow s^2 \, Y(s)
\mbox{ for } -\infty < {\rm Re}(s) < \infty$
but
${\cal L} \{ y''(t) u_{-1}(t) \} \leftrightarrow
s^2 \, Y(s) - s \, y(0^+) - y'(0^+)
\mbox{ for } 0 < {\rm Re}(s) < \infty$.)
The following table lists and explains these and other options.
%%%and the subsequent table indicates the options supported by the
%%%different transforms.
%%%
%%% Meaning of Options for the Transform Rule Bases
%%%
\begin{table}[htbp]
\begin{center}
{\it
\begin{tabular}{|l|l||l|}
\hline
Option & Possible Values & Meaning \\
\hline
\hline
{\tt Apart} & {\tt Rational, All} & Partial fraction decomposition only\\
& & applies to polynomials with real or\\
& & rational coefficients \\
\hline
{\tt Definition}& {\tt True, False} & Use the transform definition if all\\
& & else fails to find the transform \\
& & (does not apply to the inverse z \\
& & or Laplace transforms) \\
\hline
{\tt Dialogue} & {\tt False, True, All} & Ascending levels of justification \\
\hline
{\tt Simplify} & {\tt True, False} & Apply {\tt SPSimplify} to result \\
\hline
{\tt Terms} & {\tt False} or {\em integer} & Number of terms in series expansion\\
& & ({\tt False} means none) \\
\hline
{\tt TransformLookup} & list of rules & Users can specify their own transform \\
& & pairs, like \verb+{x[n] :> X[z]}+ or \\
& & \verb+{y[t1,t2] :> Y[s1,s2]}+ \\
\hline
\end{tabular}
}
\end{center}
\caption{{\bf Meaning of the Options for the Transform Rule Bases}}
\end{table}
%%%
%%% Mathematica Options for the Various Linear Transforms
%%%
\begin{table}[htbp]
\begin{center}
{\tt
\begin{tabular}{|l|l|l|}
\hline
{\it Transform} & {\it Option} & {\it Default Value} \\
\hline
CTFTransform & Dialogue & False \\
& Simplify & True \\
& & \\
DFTransform & Dialogue & False \\
& & \\
InvDFTransform & Dialogue & False \\
& Terms & False \\
& & \\
DTFTransform & Dialogue & False \\
& & \\
LaPlace & Dialogue & True \\
& Simplify & True \\
& & \\
InvCTFTransform & Apart & Rational \\
& Dialogue & False \\
& Simplify & True \\
& Terms & False \\
& & \\
InvDTFTransform & Dialogue & False \\
& Terms & False \\
& & \\
InvLaPlace & Apart & Rational \\
& Dialogue & True \\
& Simplify & True \\
& Terms & 10 \\
& & \\
InvZTransform & Dialogue & True \\
& Terms & 10 \\
& & \\
ZTransform & Dialogue & True \\
\hline
\end{tabular}
}
\end{center}
\caption[Options for the Transform Rule Bases]
{Options for the Transform Rule Bases.
{\tt Definition} always defaults to {\tt False} and
{\tt TransformLookup} always defaults to an empty list.}
\end{table}
%%%
%%% Examples of the TeX forms of the signal and systems defined by the SPP
%%%
\section*{Appendix B: \TeX\ Forms for Signals and Operators}
\[
\begin{array}{ll}
{\verb:Aliasby[m, w][X[w]]:}
&
{{{\it Periodic}_{{{2\,\pi }\over m},w}(X(w))}\over m}
\\
{\verb:CConvolve[t][f[t], g[t]]:}
&
f(t) \star_{t} g(t)
\\
{\verb:CPulse[{L1, L2}, {t1, t2}]:}
&
\sqcap_{{\it L1},{\it L2}} ({\it t1},{\it t2})
\\
{\verb:CStep[a + t]:}
&
u_{-1}(a + t)
\\
{\verb:CStep[t1] CStep[t2]:}
&
u_{-1}({\it t1})\,u_{-1}({\it t2})
\\
{\verb:Delta[1 + t]:}
&
\delta(1 + t)
\\
{\verb:Difference[k, n]:}
&
\Delta_{k,n}
\\
{\verb:Downsample[m, n]:}
&
\downarrow_{m,n}
\\
{\verb:DFT[L, n, k]:}
&
{\cal DFT}_{{n\rightarrow k},{\rm L} = L}
\\
{\verb:FT[{t1, t2}, {w1, w2}]:}
&
{\cal F}_{\matrix{ {{{\it t1}}\rightarrow {{\it w1}}} \cr
{{{\it t2}}\rightarrow {{\it w2}}} \cr } }
\\
{\verb:Impulse[K + m]:}
&
\delta[K + m]
\\
{\verb:Interleave[n][x1[n], x2[n]]:}
&
{\it Interleave}_{n}({\it x1}(n),{\it x2}(n))
\\
{\verb:InvDFT[L, k, n][X[k]]:}
&
{\cal DFT}_{{k\rightarrow n},{\rm L} = L}^{-1}(X(k))
\\
{\verb:InvDTFT[w, n][X[Exp[I w]]]:}
&
{\cal DTFT}_{{w\rightarrow n}}^{-1}(X({e^{i\,w}}))
\\
{\verb:InvFT[w, t]:}
&
{\cal F}_{{w\rightarrow t}}^{-1}
\\
{\verb:InvL[s, t][X[s]]:}
&
{\cal L}_{{s\rightarrow t}}^{-1}(X(s))
\\
{\verb:InvZ[z, n][Y[z]]:}
&
{\cal Z}_{{z\rightarrow n}}^{-1}(Y(z))
\\
{\verb:L[t, s][X[s]]:}
&
{\cal L}_{{t\rightarrow s}}(X(s))
\\
{\verb:Pulse[L, n]:}
&
\sqcap_{L} [n]
\\
{\verb:Rev[x][y[x]]:}
&
{\it Rev}_{x}(y(x))
\\
{\verb:ScaleAxis[m, w][W[w]]:}
&
{\it ScaleAxis}_{m,w}(W(w))
\\
{\verb:Shift[n0, n][m[n]]:}
&
{\it Shift}_{{\it n0},n}(m(n))
\\
{\verb:Sinc[Pi x^2]:}
&
{\rm sinc}(\pi \,{x^2})
\\
{\verb:Summation[i, 0, -1 + L, 1][i v^2]:}
&
\sum\limits_{i=0}^{-1 + L}(i\,{v^2})
\\
{\verb:Summation[j, 0, -1 + K, 2][j^2]:}
&
\sum\limits_{\stackrel{j=0}{j=j+2}}^{-1 + K}({j^2})
\\
{\verb:Unit[4][t]:}
&
u_{4}(t)
\\
{\verb:Upsample[2, n][x[n]]:}
&
\uparrow_{2,n}(x(n))
\\
{\verb:Upsample[{{1, 2}, {3, 4}}, {n1, n2}]:}
&
\uparrow_{\left[ \matrix{ 1 & 2 \cr 3 & 4 \cr } \right],
\matrix{ {\it n1} \cr {\it n2} \cr } }
\\
{\verb:Z[n, z][x[n]]:}
&
{\cal Z}_{{n\rightarrow z}}(x(n))
\end{array}
\]
\section*{Appendix C: A Multirate Signal Processing Example}
The following example uses the signal processing packages to derive the
input-output relationship of a two-channel non-uniform filter bank
with frequency band splits of $2\over3$ and $1\over3$ in the upper
and lower channels, respectively.
\vspace*{0.2in}
%%%% Part a
\noindent
\underline{The Flow Graph}
\vspace*{0.2in}
\begin{picture}(460,120)(0,-60)
%%% input and split
\put( 0, 10){$x[n]$}
\put( 0, 0){\line(1,0){30}}
\put( 30,-40){\line(0,1){80}}
%%%% upper branch
\put( 30, 40){\vector(1,0){20}}
\put( 50, 20){\framebox(40,40){$\uparrow_n 2$}}
\put( 90, 40){\vector(1,0){20}}
\put(110, 20){\framebox(40,40){$h_0[n]$}}
\put(150, 40){\vector(1,0){20}}
\put(170, 20){\framebox(40,40){$\downarrow_n 3$}}
\put(210, 40){\vector(1,0){40}}
\put(250, 20){\framebox(40,40){$\uparrow_n 3$}}
\put(290, 40){\vector(1,0){20}}
\put(310, 20){\framebox(40,40){$g_0[n]$}}
\put(350, 40){\vector(1,0){20}}
\put(370, 20){\framebox(40,40){$\downarrow_n 2$}}
\put(410, 40){\line(1,0){20}}
\put(430, 40){\line(0,-1){30}}
%%%% lower branch
\put( 30,-40){\vector(1,0){80}}
\put(110,-60){\framebox(40,40){$h_1[n]$}}
\put(150,-40){\vector(1,0){20}}
\put(170,-60){\framebox(40,40){$\downarrow_n 3$}}
\put(210,-40){\vector(1,0){40}}
\put(250,-60){\framebox(40,40){$\uparrow_n 3$}}
\put(290,-40){\vector(1,0){20}}
\put(310,-60){\framebox(40,40){$g_1[n]$}}
\put(350,-40){\line(1,0){80}}
\put(430,-40){\line(0,1){30}}
%%%% output
\put(430, 0){\circle{20}}
\put(430, 0){\makebox(0,0){+}}
\put(440, 0){\vector(1,0){20}}
\put(440, 10){$y[n]$}
\end{picture}
\vspace*{0.4in}
%%%% Part b
\noindent
\underline{Representation in \MATH}
\vspace*{0.2in}
\begin{center}
\begin{minipage}{5.2in}
\begin{verbatim}
upperchannel =
Downsample[2,n][
Convolve[n][g0[n], Upsample[3,n][
Downsample[3,n][
Convolve[n][h0[n],
Upsample[2,n][x[n]]]]]]]
lowerchannel =
Convolve[n][g1[n], Upsample[3,n][
Downsample[3,n][
Convolve[n][h1[n], x[n]]]]]
\end{verbatim}
\end{minipage}
\end{center}
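On finite sequences, the two rate-changing operators in the flow graph act as below. This Python sketch is a plain-list illustration, not the packages' symbolic {\tt Upsample} and {\tt Downsample} objects:

```python
def upsample(x, L):
    """Insert L - 1 zeros after every sample: y[n] = x[n/L] if L divides n."""
    out = []
    for v in x:
        out.append(v)
        out.extend([0] * (L - 1))
    return out

def downsample(x, M):
    """Keep every M-th sample: y[n] = x[M n]."""
    return x[::M]
```

Cascading {\tt upsample} by 3 with {\tt downsample} by 3, as in the middle of each channel, returns the original sequence.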
\vspace*{0.4in}
%%%% Part c
\noindent
\underline{\TeX\ form of the input-output relationship
derived using {\tt ZTransform}}
\vspace*{0.2in}
\begin{center}
$
\begin{array}{l}
{1 \over 6}
{{\left( {\it G_0}(-{\sqrt{z}}) {\it H_0}(-{\sqrt{z}}) +
{\it G_0}({\sqrt{z}}) {\it H_0}({\sqrt{z}}) +
2 {\it G_1}(z) {\it H_1}(z) \right) X(z)}} + \\
{1 \over 6}
{{\left( {\it G_0}(-{\sqrt{z}}) {\it H_0}({e^{{i\over 3} \pi }} {\sqrt{z}}) +
{\it G_0}({\sqrt{z}}) {\it H_0}({e^{{{4 i}\over 3} \pi }} {\sqrt{z}}) +
2 {\it G_1}(z) {\it H_1}({e^{{{2 i}\over 3} \pi }} z) \right)
X({e^{{{2 i}\over 3} \pi }} z)}} + \\
{1 \over 6}
{{\left( {\it G_0}({\sqrt{z}})
{\it H_0}({e^{{{2 i}\over 3} \pi }} {\sqrt{z}}) +
{\it G_0}(-{\sqrt{z}}) {\it H_0}({e^{{{5 i}\over 3} \pi }}
{\sqrt{z}}) + 2 {\it G_1}(z) {\it H_1}({e^{{{4 i}\over 3} \pi }} z)
\right) X({e^{{{4 i}\over 3} \pi }} z)}}
\end{array}
$
\end{center}
\end{document}