LIE POISSON SYNTHESIS

Steven Benzel
Berry College
Department of Mathematics and Computer Science
sbenzel@berry.edu

ABSTRACT

The theory of Lie Poisson differential equations is presented and a numerical algorithm for integrating these equations is introduced as a method for synthesis. The algorithm will be seen to be stable and capable of generating an extremely rich set of acoustic signals. Although the algorithm discretizes time and is thus a digital theory, signal analysis is conducted using the continuous time variable t before discretization takes place. Thus the method is more akin to an analog theory than to a digital theory. The algorithm is computationally intensive and requires care in coding if realtime performance is required. This can be achieved on modest machines using assembly language, and some experimental results are discussed.

1. INTRODUCTION

There are two general concerns when one wishes to consider a function ψ(t) as representing an acoustic signal. The first concern is the nature of the function. For example, if ψ(t) grows without bound as t increases, one would be hard pressed to consider it as representing the pressure of a sound wave passing through a point in space. A function decaying monotonically towards zero could represent a pressure wave, but we would perhaps not consider it an acoustic signal of interest. Thus, we might require in addition to boundedness that ψ(t) have some type of oscillatory behavior, be it simple or chaotic. The second concern is that of actually generating the acoustic signal represented by the function ψ(t), be it through a mechanical, analog, or digital method. The study of the nature of functions is certainly a worthy mathematical endeavor, but if we wish to consider the function ψ(t) as representing an acoustic signal we will eventually want to hear it and thus will require a means to generate it.

Physical modeling is currently a popular method of synthesis [6]. Synthesis based on physical models is rooted in the partial differential equations governing a physical instrument, and acoustic signals are produced by simplification and discretization of these PDEs. However, the theory of ordinary differential equations offers an abundant source of possible acoustic signals. Not only can we use the mathematical machinery developed for the study of ODEs to find suitable candidates for acoustic signals, there are also a number of numerical methods we can use to produce signals. If ψ(t) is a solution to a differential equation, the output of a numerical method will be a sequence {ψ_n} of discrete values approximating the true solution at the discrete times nT, that is, ψ_n ≈ ψ(nT). Thus we are first led to consider ODEs whose solutions are bounded and have at least some type of oscillatory behavior. Secondly, we must take care in our choice of numerical method if we wish to produce viable acoustic signals discretely. From the considerations given above we require our numerical solutions to be bounded. This is indeed a severe restraint and eliminates simple numerical methods such as the Euler method from further consideration. We also require that the numerical method preserve the oscillatory nature of the true solutions.
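To make the exclusion of the Euler method concrete, consider the harmonic oscillator x' = -y, y' = x, whose true solutions move on circles. The following minimal sketch (in Python with NumPy, included purely as an illustration) shows that each forward Euler step multiplies the radius by sqrt(1 + T^2), so the numerical "signal" grows without bound no matter how small the timestep T is.

    # Illustration only: forward Euler applied to x' = -y, y' = x.
    # True solutions satisfy x^2 + y^2 = const; the Euler update
    # multiplies this radius by sqrt(1 + T^2) at every step.
    import numpy as np

    T = 0.01                        # timestep
    state = np.array([1.0, 0.0])    # (x, y), radius 1
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])     # x' = -y, y' = x

    for _ in range(100_000):
        state = state + T * (A @ state)   # forward Euler step

    print(np.hypot(*state))   # about (1 + T^2)^(steps/2), roughly 148, far from 1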
In the following we first review some basic facts from the theories of linear algebra and ordinary differential equations. We then present the Lie Poisson differential equation. The solutions to this equation, or rather class of equations, are always bounded and for the most part oscillatory. We then present a numerical algorithm for integrating these equations that preserves the boundedness and oscillatory behavior. We call the implementation of this algorithm Lie Poisson Synthesis. We will see that a Lie Poisson differential equation is specified by a particular choice of function known as a hamiltonian. The freedom to choose this hamiltonian as one desires and to modify it in realtime is where Lie Poisson Synthesis derives its dynamic complexity. We will close with basic examples of LP synthesis and demonstrate that FM synthesis can be achieved under LP synthesis with a very simple choice of hamiltonian.

2. LINEAR ALGEBRA AND ODES

In this section we recall some basic theory of linear algebra and ordinary differential equations defined on vector spaces. We also define much of the notation used later. A basic understanding of calculus and linear algebra will be assumed, and the reader is referred to standard texts such as [2] and [3] for a detailed review.

2.1. Linear Algebra

We denote by C^N the vector space over C of dimension N. We let gl(N, C) be the space of all linear functions from C^N to C^N, or equivalently the space of N x N matrices with values in C. We will be interested in two subsets of gl(N, C), namely su(N) and SU(N) [7].

SU(N) is known as the special unitary Lie group in N dimensions and is the set of all N x N matrices g of determinant 1 such that

    g g* = I                                                  (1)

where I is the N x N identity matrix and g* is the conjugate transpose of g. The components of g* are given by

    (g*)_ij = conj(g_ji)                                      (2)

It is a standard fact that if g1 and g2 are two elements of SU(N) then their product g1 g2 is also an element of SU(N). su(N) is known as the special unitary Lie algebra in N dimensions and is the set of all N x N matrices X of trace 0 such that

    X + X* = 0                                                (3)

su(N) is in fact a vector space over R of dimension N^2 - 1 with a natural inner product given by

    < X, Y > = -trace(XY)                                     (4)

A concept that will be used quite extensively is the action of SU(N) on su(N):

    g . X = g X g*                                            (5)

This action can be thought of as a rotation on the vector space su(N). Another concept is that of the exponential of a matrix. For Y in gl(N) we define

    exp(Y) = Σ_{n=0}^∞ (1/n!) Y^n                             (6)

It is easy to see that exp(Y*) = (exp(Y))*. This implies that the exponential of an element of su(N) is an element of SU(N), and we define the Lie bracket of two elements X and Y of su(N) to be the element of su(N) given by

    [X, Y] = d/dt ( exp(tX) . Y ) |_{t=0} = XY - YX           (7)

The SU(N) action enjoys the following three properties:

    exp(g . X) = g (exp(X)) g*                                (8)
    < g . X, g . Y > = < X, Y >                               (9)
    [g . X, g . Y] = g . [X, Y]                               (10)

The Lie bracket has the property that

    < [X, Y], Z > = < X, [Y, Z] >                             (11)

The definition of the exponential given above is not computationally viable. A practical method for computing the exponential on su(N) is to note that if Y is a diagonal element of su(N),

    Y = diag(iλ_1, ..., iλ_N)                                 (12)

then

    exp(Y) = diag(e^{iλ_1}, ..., e^{iλ_N})                    (13)

It is a fundamental fact of linear algebra that given any element X of su(N) there is an element g of SU(N) such that Y = g . X is a diagonal matrix. Code to find g given X is readily available; for example, the LAPACK routine zheev [4] will provide g and Y. The exponential of X can now be computed as

    exp(X) = g* . (exp(Y))                                    (14)

For su(2) the exponential can be written down succinctly. For X in su(2),

    X = [  iz        y + ix ]                                 (15)
        [ -y + ix     -iz   ]

we let p = sqrt(x^2 + y^2 + z^2). If p = 0 then X is the zero matrix and exp(X) = I. For p > 0,

    exp(X) = cos(p) I + (sin(p)/p) X                          (16)

2.2. ODEs on a Vector Space

Let V be a vector space over R with an inner product. We will denote elements of V by capital Latin letters X, Y, and Z. For X and Y in V we denote their inner product by < X, Y >. We will use the Greek letter ξ to represent a function ξ: R -> V. We consider ξ as defining a path in V; thus for t in R, ξ(t) is an element of V which moves as t changes. Given a function F: V x R -> V, ξ(t) is said to satisfy the ordinary differential equation on V determined by F if for all time t

    ξ'(t) = F(ξ(t), t)                                        (17)

where ξ'(t) = dξ(t)/dt is the time derivative of ξ. Usually we are given an initial condition ξ(0), and (17) can be considered as the equation of motion determining ξ(t). (17) is called an autonomous equation if F is not an explicit function of time, F = F(X). If Y is a fixed element of V and ξ(t) is a solution to (17), we can consider σ(t) = < Y, ξ(t) > as a candidate for representing an acoustic signal. Boundedness of σ(t) can be guaranteed if F satisfies the property

    < X, F(X) > = 0                                           (18)

for all X in V. To see this, let ξ(t) be a solution to (17); then

    d/dt < ξ(t), ξ(t) > = 2 < ξ'(t), ξ(t) > = 0               (19)

so < ξ(t), ξ(t) > will be constant in time t. By the Cauchy-Schwarz inequality

    |σ(t)|^2 <= < Y, Y > < ξ(t), ξ(t) >                       (20)

and σ(t) will be a bounded function.
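Both the diagonalization formula (14) and the closed su(2) form (16) are straightforward to realize numerically. The sketch below (in Python with NumPy, given only as an illustration; one can just as well call the LAPACK routine zheev [4] directly) computes exp(X) for X in su(N) by diagonalizing the Hermitian matrix iX, and checks the result against (16) on su(2).

    # Illustration only: exponentials on su(N).
    import numpy as np

    def exp_suN(X):
        """exp(X) for anti-Hermitian, traceless X, via diagonalization as in
        (12)-(14).  numpy's eigh plays the role of LAPACK's zheev here: it
        diagonalizes the Hermitian matrix 1j*X."""
        w, V = np.linalg.eigh(1j * X)                   # 1j*X = V diag(w) V*
        return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

    def exp_su2(x, y, z):
        """Closed form (15)-(16) for an element of su(2)."""
        X = np.array([[1j * z,       y + 1j * x],
                      [-y + 1j * x, -1j * z]])
        p = np.sqrt(x * x + y * y + z * z)
        if p == 0.0:
            return np.eye(2, dtype=complex)
        return np.cos(p) * np.eye(2) + (np.sin(p) / p) * X

    # The two formulas agree on su(2) (here x = 0, y = 0.2, z = 0.3):
    X = 0.3 * np.array([[1j, 0], [0, -1j]]) + 0.2 * np.array([[0, 1], [-1, 0]])
    assert np.allclose(exp_suN(X), exp_su2(0.0, 0.2, 0.3))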

One final concept we require is that of the gradient of a function. If H: V -> R is a function on V then the gradient of H is the function ∇H: V -> V defined using the inner product as

    < ∇H(X), Y > = d/dt H(X + tY) |_{t=0}                     (21)

For example, given Z, a fixed element of V, and n a positive integer, we let

    H(X) = < X, Z >^n                                         (22)

The gradient of H will be given as

    ∇H(X) = n < X, Z >^{n-1} Z                                (23)

3. THE LIE POISSON EQUATIONS

We are now in a position to define the Lie Poisson equation as a differential equation on the vector space su(N). Let H: su(N) -> R be a function with gradient ∇H: su(N) -> su(N). Then for X an element of su(N), ∇H(X) will be another element of su(N) and we can consider the function F defined via the Lie bracket, F(X) = [∇H(X), X]. The Lie Poisson equation with hamiltonian H is defined as

    ξ'(t) = [∇H(ξ(t)), ξ(t)]                                  (24)

This equation has been well studied in the mathematical physics literature and its solutions have many useful properties. It is used in mechanics to model various rotational systems such as a spinning top or a satellite in orbit [1], where angular momentum is involved. A pure rotational system differs from a spring-mass type system in that the energy (which is actually the hamiltonian H) does not oscillate between kinetic and potential terms. By (11), F satisfies the boundedness criterion (18).

The simplest example is that of a linear hamiltonian. Let H: su(N) -> R be defined by H(X) = < Z, X > where Z is a fixed element of su(N); then solutions to (24) have the form

    ξ(t) = exp(tZ) . ξ(0)                                     (25)

In particular, if N = 2 and

    Z = [ iλ    0  ]                                          (26)
        [ 0   -iλ  ]

then σ(t) = < Y, ξ(t) > will have the form

    σ(t) = A sin(2λt + δ)                                     (27)

where A and δ are constants depending on Y and ξ(0). For general N we can assume Z is a diagonal element of su(N),

    Z = diag(iλ_1, ..., iλ_N)                                 (28)

and σ(t) will be given as a sum of sine functions with angular frequencies λ_i - λ_j.

If H is independent of time and ξ(t) is a solution to (24), then

    d/dt H(ξ(t)) = < ∇H(ξ(t)), ξ'(t) > = 0                    (29)

by (24) and (11). Thus H(ξ(t)) is constant in time. This allows us to visualize solutions for N = 2. Now su(2) is a 3-dimensional vector space. Since < ξ(t), ξ(t) > is constant in time by (19), the motion is restricted to a sphere in su(2) centered at the origin. If we restrict H to this sphere then we see that ξ(t) is confined to a level curve of H. It is helpful to think of H as a height function on the sphere with ξ(t) circling along a level height contour.

4. THE NUMERICAL ALGORITHM

Let T > 0 be a small timestep. Given a hamiltonian H and an initial condition ξ_0 = ξ(0), a numerical solution to (24) is a sequence {ξ_n} such that ξ_n ≈ ξ(nT), where ξ(t) is the true solution to (24) with the given initial condition. A very tempting algorithm for generating ξ_n is the Lie-Euler algorithm, defined as follows: given ξ_{n-1}, compute ∇H(ξ_{n-1}) and define

    ξ_n = exp(T ∇H(ξ_{n-1})) . ξ_{n-1}                        (30)

Since the algorithm is implemented by the SU(N) action, we have

    < ξ_n, ξ_n > = < ξ_{n-1}, ξ_{n-1} >                       (31)

and σ_n = < Y, ξ_n > will be a bounded sequence. Unfortunately, it will miserably fail to maintain the harmonic nature of the solutions to (24) for general hamiltonians. However, this algorithm will actually generate the true solution (ξ_n = ξ(nT)) for hamiltonians of the form H(X) = c < Z, X >^n. We will use this to define the general algorithm. Let H be a polynomial hamiltonian of the form

    H(X) = Σ_{k=1}^{M} c_k < Z_k, X >^{n_k}                   (32)

which we write as

    H(X) = H_1(X) + ... + H_M(X)                              (33)

We define Lie Poisson synthesis as the following algorithm: given ξ_n, let η_0 = ξ_n and define η_k, k = 1, ..., M, recursively by

    η_k = exp(T ∇H_k(η_{k-1})) . η_{k-1}                      (34)

and now let ξ_{n+1} = η_M.
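Written out, one step of the scheme is only a few lines. The sketch below (in Python with NumPy and SciPy, given only as an illustration; the helper names lp_step and ip and the use of scipy.linalg.expm are choices made for this sketch) advances ξ_n by composing the exact flows of the terms H_k as in (34). The short example that follows uses a linear hamiltonian on su(2), for which the step is exact and the output is the sinewave of (25)-(27).

    # Illustration only: one step of Lie Poisson synthesis, eqs. (32)-(34).
    import numpy as np
    from scipy.linalg import expm

    def ip(X, Y):
        """Inner product < X, Y > = -trace(XY) on su(N), eq. (4)."""
        return -np.trace(X @ Y).real

    def lp_step(xi, terms, T):
        """Advance xi in su(N) by one timestep T for the polynomial hamiltonian
        H(X) = sum_k c_k < Z_k, X >**n_k of eq. (32).  `terms` is a list of
        tuples (c_k, Z_k, n_k).  Each term is integrated exactly by the
        Lie-Euler step (30); composing the terms gives (34)."""
        eta = xi
        for c, Z, n in terms:
            grad = c * n * ip(Z, eta) ** (n - 1) * Z    # gradient of H_k, eq. (23)
            g = expm(T * grad)                          # an element of SU(N)
            eta = g @ eta @ g.conj().T                  # the SU(N) action, eq. (5)
        return eta

    # Example: the linear hamiltonian H(X) = < Z, X > on su(2), eqs. (25)-(27).
    X1 = np.array([[0, 1j], [1j, 0]])
    X3 = np.array([[1j, 0], [0, -1j]])
    xi, samples = X1, []
    for _ in range(1000):
        xi = lp_step(xi, [(1.0, X3, 1)], T=0.01)    # exact for a linear hamiltonian
        samples.append(ip(X1, xi))                  # a bounded, oscillatory signal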
This algorithm falls into a class of algorithms known as symplectic integration algorithms [5] and as such enjoys several wonderful properties. One of these is that, for T small, the sequence {ξ_n} produced by the algorithm samples the true solution to (24) for a hamiltonian close to H. For the most part, the spectrum of σ(t) = < Y, ξ(t) > will depend continuously on H, ξ(0), and Y. This means that if H' is a function 'near' H then the spectrum of σ'(t) will be 'near' the spectrum of σ(t). This notion can be given as a precise mathematical statement, but perhaps it is best for us to proceed to examples.

Figure 1. 500 samples of z(t)

Figure 2. 972 samples of the vowel /i/

5. EXAMPLES

We now consider some basic examples. A basis for su(2) is given by

    X1 = [ 0   i ]                                            (35)
         [ i   0 ]

    X2 = [ 0   1 ]                                            (36)
         [ -1  0 ]

    X3 = [ i    0 ]                                           (37)
         [ 0   -i ]

and we let

    ξ(t) = x(t) X1 + y(t) X2 + z(t) X3
         = [  iz(t)          y(t) + ix(t) ]                   (38)
           [ -y(t) + ix(t)      -iz(t)    ]

5.1. A Nonlinear Oscillator

Let H(ξ(t)) = c < X1, ξ(t) > < X2, ξ(t) > = 4c x(t) y(t). The equations of motion (24) will have the form

    x' = -4c x z
    y' = 4c y z                                               (39)
    z' = 4c (x^2 - y^2)

Letting p^2 = x^2 + y^2 + z^2 and taking the derivative of z' gives

    z'' + 32 c^2 p^2 z - 32 c^2 z^3 = 0                       (40)

We see that z(t) satisfies the equation of an oscillator with a quartic potential. We can implement the algorithm by writing

    H(X) = (c/2) ( < X1 + X2, X >^2 - < X1, X >^2 - < X2, X >^2 )        (41)

If we take c = 1, T = 0.1 and initial condition x(0) = 0.5, y(0) = 0.001, z(0) = 0.0, the algorithm produces the waveform z(t) given in Figure 1.

5.2. The Vowel /i/

Figure 2 displays 972 samples of the vowel /i/ recorded at 44100 Hz, while Figure 3 displays 972 samples generated by a three term hamiltonian on su(2),

    H = H_1 + H_2 + H_3                                       (42)

given in Table 1. Now su(2) embeds naturally in su(3), and the inner product allows us to extend functions from su(2) to su(3). Figure 4 is generated by the hamiltonian from Figure 3 extended to su(3) with a small perturbation term added.

Figure 3. 972 samples generated by the hamiltonian of Table 1

            X1        X2        X3        c
    H_1     -0.8      0.4       0.8       0.2
    H_2     0.18      0.0       0.3       -0.03
    H_3     0.8       -0.093    -0.3      -0.42

Table 1. Hamiltonian for Figure 3
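Before turning to FM, the nonlinear oscillator of section 5.1 is easy to make concrete. The sketch below (in Python with NumPy and SciPy, given only as an illustration) applies the splitting (34) to the three quadratic terms of (41) with c = 1, T = 0.1 and the initial condition x(0) = 0.5, y(0) = 0.001, z(0) = 0, and records z_n = < X3, ξ_n > / 2 as the output; the resulting waveform is comparable to Figure 1.

    # Illustration only: the nonlinear oscillator of section 5.1 via the
    # splitting (34) applied to the three quadratic terms of (41).
    import numpy as np
    from scipy.linalg import expm

    X1 = np.array([[0, 1j], [1j, 0]])               # basis (35)-(37)
    X2 = np.array([[0, 1], [-1, 0]], dtype=complex)
    X3 = np.array([[1j, 0], [0, -1j]])

    def ip(X, Y):                                   # < X, Y > = -trace(XY)
        return -np.trace(X @ Y).real

    c, T = 1.0, 0.1
    terms = [(c / 2, X1 + X2, 2), (-c / 2, X1, 2), (-c / 2, X2, 2)]   # eq. (41)
    xi = 0.5 * X1 + 0.001 * X2                      # x(0) = 0.5, y(0) = 0.001, z(0) = 0
    z = []
    for _ in range(500):
        for ck, Zk, nk in terms:                    # one LP synthesis step (34)
            g = expm(T * ck * nk * ip(Zk, xi) ** (nk - 1) * Zk)
            xi = g @ xi @ g.conj().T
        z.append(ip(X3, xi) / 2)                    # z_n; compare Figure 1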

Figure 4. A hamiltonian on su(3)

5.3. FM Synthesis

Oscillatory frequency can be controlled via hamiltonians defined on V = su(2) ⊕ su(2), two copies of su(2). We write ξ(t) = ξ1(t) + ξ2(t) for a path in V, where ξ1(t) is a path in the first copy of su(2) and ξ2(t) is a path in the second copy. Consider a hamiltonian of the form

    H(ξ) = (1 + h1(ξ2)) < Z, ξ1 > + h2(ξ2)                    (43)

where h1 and h2 are two functions defined on su(2). The equations of motion will have the form

    ξ1' = (1 + h1(ξ2)) [Z, ξ1]
    ξ2' = < Z, ξ1 > [∇h1(ξ2), ξ2] + [∇h2(ξ2), ξ2]             (44)

It is easy to see that < Z, ξ1(t) > is actually constant in time, so if < Z, ξ1(0) > = 0 the second equation reduces to the Lie Poisson equation on su(2)

    ξ2' = [∇h2(ξ2), ξ2]                                       (45)

and ξ1(t) will have the form

    ξ1(t) = exp((t + f(t)) Z) . ξ1(0)                         (46)

where f satisfies the differential equation

    f'(t) = h1(ξ2(t))                                         (47)

For FM synthesis we can take h1 and h2 to be linear functions. Let

    h2(X) = b < X3, X >,   h1(X) = c < X2, X >,   ξ2(0) = X2  (48)

Then h1(ξ2(t)) = c cos(2bt), so by (47) f(t) = (c/2b) sin(2bt). If Z = X2, Y = a X1, and ξ1(0) = X1, then

    σ(t) = < Y, ξ1(t) > = a cos(2t + (c/b) sin(2bt))          (49)

A straightforward generalization of the above is a bioscillator. In FM synthesis we have two oscillators, with one affecting the frequency of the second. In a bioscillator, we again have two oscillators, but now each affects the frequency of the other. We take V = su(n) ⊕ su(n) and again write ξ(t) = ξ1(t) + ξ2(t). The hamiltonian is given as

    H(ξ) = < Z1, ξ1 > + < Z2, ξ2 > + a < X1, ξ1 > < X2, ξ2 >  (50)

where Z1, Z2, X1, X2 are elements of su(n) and a is a constant called the coupling constant. The simplest bioscillator has n = 2 with

    Z1 = Z2 = [ i    0 ]                                      (51)
              [ 0   -i ]

and

    X1 = X2 = [ 0   1 ]                                       (52)
              [ -1  0 ]

We examine the evolution of the first oscillator as a changes by taking a fixed initial condition ξ(0) and σ(t) = < Y1, ξ1(t) >. For a = 0 the two oscillators are uncoupled and σ(t) is a sine wave. The spectrum of σ for increasing values of a is shown in Figures 5, 6, and 7. σ for a = 1.0 is shown in Figure 8.

Figure 5. a = 0.1

Figure 6. a = 0.8
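The bioscillator can be realized with the same splitting as the earlier examples. In the sketch below (Python with NumPy and SciPy, offered only as an illustration) the coupling term a < X1, ξ1 > < X2, ξ2 > is expanded with the polarization identity already used in (41), so that every term again has the form c_k < Z_k, X >^{n_k}; the direct sum su(2) ⊕ su(2) is represented by block-diagonal 4 x 4 matrices, and the timestep, initial condition and readout element are assumed values chosen for the sketch rather than quantities taken from the text above.

    # Illustration only: the bioscillator (50)-(52) via the splitting of section 4.
    import numpy as np
    from scipy.linalg import expm, block_diag

    X1b = np.array([[0, 1j], [1j, 0]])               # su(2) basis (35)-(37)
    X2b = np.array([[0, 1], [-1, 0]], dtype=complex)
    X3b = np.array([[1j, 0], [0, -1j]])
    O = np.zeros((2, 2), dtype=complex)

    def ip(X, Y):                                    # < X, Y > = -trace(XY)
        return -np.trace(X @ Y).real

    def bioscillator(a, T=0.05, steps=2000):
        """sigma_n = < Y1, xi1(nT) > for the bioscillator with coupling constant a."""
        Z = X3b                                      # Z1 = Z2, eq. (51)
        C = X2b                                      # the coupling elements X1 = X2, eq. (52)
        terms = [(1.0, block_diag(Z, O), 1),         # < Z1, xi1 >
                 (1.0, block_diag(O, Z), 1),         # < Z2, xi2 >
                 (a / 2, block_diag(C, C), 2),       # polarization of a< X1,xi1 >< X2,xi2 >
                 (-a / 2, block_diag(C, O), 2),
                 (-a / 2, block_diag(O, C), 2)]
        xi = block_diag(X1b, X1b)                    # assumed initial condition
        Y1 = block_diag(X1b, O)                      # readout in the first copy
        out = []
        for _ in range(steps):
            for ck, Zk, nk in terms:                 # one LP synthesis step (34)
                g = expm(T * ck * nk * ip(Zk, xi) ** (nk - 1) * Zk)
                xi = g @ xi @ g.conj().T
            out.append(ip(Y1, xi))
        return np.array(out)

    sigma_uncoupled = bioscillator(a=0.0)            # a pure sinewave
    sigma_coupled = bioscillator(a=1.0)              # a richer spectrum, cf. Figures 7 and 8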

Figure 7. a = 1.0

Figure 8. σ(t) for a = 1.0

6. CONCLUSION

The above are examples of time independent hamiltonians on relatively low dimensional spaces, and they produce acoustic signals that do not evolve in time. It is possible to generate complex signals that do evolve in time by taking N large. Currently, these signals tend to be unpredictable. Alternatively, complicated evolution can be achieved through the use of time dependent hamiltonians H: su(N) x R -> R for N relatively small. It is perhaps useful to think of a time dependent hamiltonian as a path in H_N, the space of all hamiltonians on su(N); using LP synthesis we can smoothly transition from one hamiltonian to another. In fact, existence theorems from the theory of differential equations ensure that σ(t) will be continuous even if our hamiltonian path has jump discontinuities. Exploration of low dimensional subspaces of H_N through the use of controllers is straightforward.

For signal generation in real time, efficient coding is a must. Using the transcendental instruction FSINCOS on the x86 processor allows for short assembler code for LP synthesis on a direct sum of copies of su(2). The author has had success on a 160 MHz Linux box using 3 copies of su(2).

7. REFERENCES

[1] Arnold, V. I. Mathematical Methods of Classical Mechanics, Second Edition. Springer-Verlag, New York, 1989.

[2] Boyce, W. E. and DiPrima, R. C. Differential Equations, Eighth Edition. Wiley & Sons, 2005.

[3] Lang, S. Linear Algebra, Second Edition. Addison-Wesley, Reading, MA, 1971.

[4] LAPACK, www.netlib.org

[5] Scovel, C. "Symplectic Numerical Integration of Hamiltonian Systems", in The Geometry of Hamiltonian Systems, T. Ratiu, Ed. Mathematical Sciences Research Institute Publications 22. Springer-Verlag, New York, 1991.

[6] Smith, J. O. "Physical Modeling Synthesis Update", Computer Music Journal, vol. 20, no. 2, pp. 44-56. MIT Press, 1996.

[7] Varadarajan, V. S. Lie Groups, Lie Algebras, and Their Representations. Springer-Verlag, New York, 1984.