A New Connectionist Model for Associative Retrieval of Harmonic Tunes

Seong-Won Yeo (1), Min-Uk Han (2), Chong-Ho Lee (3)
Department of Electrical Engineering, Inha University, Inchon, Korea 402-751

Abstract

Many of the existing connectionist models based on neural associative memory have two separate stages for learning and retrieval, and are not structurally suitable for temporal pattern storage and classification, in that the user must properly specify the duration of a unit sector of the continuously incoming patterns. In this paper, a new connectionist model of on-line associative memory for temporal pattern storage and retrieval is suggested. The input signal that the suggested model deals with is a set of multi-channel discrete sequences of binary patterns, which can be an interpretation of a musical tune played on a keyboard. Simulation shows that the suggested model can associatively retrieve a few stored patterns and adaptively tune its stored patterns upon a change of input patterns.

1 Introduction

Most of the stimuli which humans receive with their sensory systems are diverse spatio-temporal patterns which change continuously in space and time. To deal with such sequences of patterns, an intelligent processing method for temporal patterns is required. Here, a new connectionist model of on-line associative memory for temporal pattern processing is presented. The suggested memory model is based on on-line unsupervised learning [1]. It stores multi-channel time-varying binary signals [2] and retrieves one of the stored patterns in response to a similar input pattern.

2 Memory Structure

2.1 Neuron and Activation Function

The suggested memory model consists of a series of layers. As shown in Figure 1, each layer can be visualized as a two-dimensional array of neurons. Each neuron is based on the outstar [3] type neuron model. In this model, synapse connection does not take place at the back stage of a neuron.
A neuron owns weights in its forward direction, and the output of its activation function is multiplied by these weights. Weight connections stretch to neurons on the next layer. A neuron has a limited number of weights toward the next layer, as many as 10, which is much smaller than the number of neurons on each layer. Which neurons on the next layer are connected by these weights is not fixed; the connections change during network processing. Initially all weights of a neuron are disconnected. As the suggested memory model stores more temporal patterns, neurons gradually activate and connect weights as necessary. If no weight remains available for storing a new pattern, the weakest weight connection is disconnected and reallocated to the new pattern. This strategy resembles the way the human brain forgets trivial memories and pays more attention to important ones.

The activation function is defined in Equation 1 and shown in Figure 2. A neuron fires a binary output through this activation function. The net value of the activation function is accumulated from the previous layer. The function has two threshold values, and the output is 1 if the net value is near 1:

    f(net) = 1, if 1 − δ < net < 1 + δ
             0, otherwise                          (1)

2.2 Encoding and Decoding

The suggested memory model has encoding and decoding processes at the input and output stages. The encoding process converts time domain temporal patterns to event domain [2] coded patterns, while the decoding process generates time domain output patterns from coded output patterns.
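The two-threshold activation of Equation 1 can be written directly in code. The following is a minimal sketch; the parameter name `delta` and its default value are illustrative assumptions, not taken from the paper.

```python
def activation(net, delta=0.1):
    """Two-threshold activation of Equation 1 (sketch).

    Fires 1 only when the accumulated net value lies near 1,
    i.e. within the open interval (1 - delta, 1 + delta).
    The default delta is an illustrative assumption.
    """
    return 1 if (1 - delta) < net < (1 + delta) else 0
```

For example, a neuron whose incoming weights sum exactly to 1 fires, while a neuron receiving only a partial sum stays silent.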

Figure 1. Structure of Layers

Figure 2. Activation Function

A time domain temporal pattern consists of several channels; it flows in each channel as a binary signal. An event is generated at every starting point of a 1-signal in any channel. One event indicates that one coded pattern vector is generated for that part of the temporal pattern. A coded pattern vector has, as its first element, the time interval between the previous and the current events; its remaining elements are the durations of the 1-signals of each channel. Figure 3 shows an example of the encoding process. The input signal consists of nine channels, and the horizontal bars appearing in the time domain indicate a 1-signal being given to a channel.

Here, we define the given pattern matrix, G, as the encoded temporal pattern that is given from the input channels and is to be stored by the memory model. This matrix consists of row vectors called given pattern vectors (g_i, i = 1...n), which are in the form of coded pattern vectors. The number of given pattern vectors corresponds to the number of events generated when the memory model encodes one temporal pattern. In the example of Figure 3, the eight events yield the coded pattern vectors (event interval followed by the nine channel durations):

    g_1 = [1  6 0 0 0 0 0 0 0 0]
    g_2 = [1  0 0 0 0 0 4 0 0 0]
    g_3 = [1  0 0 0 3 0 0 0 0 0]
    g_4 = [3  0 5 0 0 0 0 0 0 0]
    g_5 = [1  0 0 0 0 6 0 0 5 0]
    g_6 = [1  0 0 5 0 0 0 0 0 0]
    g_7 = [1  0 0 0 0 0 0 2 0 0]
    g_8 = [5  3 0 0 5 0 0 0 0 4]

A layer is a two-dimensional array of neurons: columns represent channels, and rows represent length (the duration time of a 1-signal). A given pattern vector is applied to one layer by activating the corresponding neurons. Figure 3 also shows an example of applying the given pattern vector g_8 on a layer, with the activated neurons marked.

Figure 3. Example of Encoding Process

2.3 Overall Memory Structure

The suggested associative memory model consists of a series of layers. An index indicates which layer is currently receiving the given pattern vector. The given pattern vectors, which are in the form of coded pattern vectors and constitute part of a given pattern matrix, are sequentially applied to the layers according to this index. The first layer associates the first coded pattern vector with the second coded pattern vector, which lies on the second layer. The number of layers, L, represents the number of coded pattern vectors that the suggested memory model can store. If the number of coded pattern vectors to be stored exceeds the number of layers, the remaining coded pattern vectors are applied again from the first layer; that is, the last layer associates the Lth coded pattern vector with the (L+1)th coded pattern vector by applying that vector to the first layer. Once the encoding process generates a given pattern vector, it is applied to the indexed layer. Then the pattern process spans an output pattern vector (o_i, i = 1...n) on this layer. With this output pattern vector, the decoding process generates the time domain output pattern. The overall memory structure is displayed in Figure 4.
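The event-domain encoding of Section 2.2 can be sketched as follows. This assumes the temporal pattern arrives as per-channel binary sequences sampled at unit time steps; the function and variable names are illustrative, and the convention that the first event's interval is measured from time zero is an assumption.

```python
def encode(channels):
    """Sketch of the Section 2.2 encoding (names are illustrative).

    channels: list of equal-length binary lists, one per input channel.
    Returns the given pattern matrix G as a list of coded pattern
    vectors: [interval since previous event, duration of the 1-signal
    starting at this event in channel 1, ..., channel n] (0 if none).
    """
    n_steps = len(channels[0])
    onsets = {}  # event time -> {channel index: run duration}
    for ch, sig in enumerate(channels):
        t = 0
        while t < n_steps:
            # An event is the starting point of a 1-run in a channel.
            if sig[t] == 1 and (t == 0 or sig[t - 1] == 0):
                d = 0
                while t + d < n_steps and sig[t + d] == 1:
                    d += 1
                onsets.setdefault(t, {})[ch] = d
                t += d
            else:
                t += 1
    # One coded pattern vector per event time, in temporal order.
    G, prev = [], 0
    for t in sorted(onsets):
        G.append([t - prev] + [onsets[t].get(ch, 0)
                               for ch in range(len(channels))])
        prev = t
    return G
```

Two channels whose 1-runs start at different times therefore produce two coded pattern vectors, one per event, matching the row-per-event structure of G in Figure 3.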

Figure 4. Overall Memory Structure

3. Pattern Store and Retrieval

3.1 Weight Increase and Decrease

Weights are tuned by the weight update algorithm, which is discussed in Section 3.4. If a weight is connected to a correct neuron on the next layer, the weight is increased. The weight increase shows a saturation property; that is, as learning of one pattern continues, the weights converge to a predetermined value according to Equation 2:

    w_ji = w_ji + c (1/N − w_ji)                   (2)

Here, c is a learning factor, N is the number of activated neurons on the current layer minus one, and j, i denote activated neurons on the previous and current layers, respectively. By Equation 2, weights converge to the value 1/N. If a weight is connected to a close but not exactly correct neuron, a new weight is created to connect the correct neuron by Equation 3, and the misleading weight is decreased by Equation 4, where d is a forgetting factor:

    w_ji = c (1/N)                                 (3)

    w_ji = w_ji − d · w_ji                         (4)

3.2 Temporal Pattern Stored on Layers

When the first given pattern vector is applied to the first layer and the second given pattern vector to the second layer, the activated neurons on both layers are connected by the weight update algorithm. In this way, with successively applied temporal patterns, the neurons activated by the first given pattern vector can transfer their outputs, via sufficiently increased weights, to the next layer. The activated neurons on the second layer can then represent the second given pattern vector. This process applies identically to the ith and (i+1)th pattern vectors. In this memory model, one layer can superpose several coded pattern vectors belonging to different temporal patterns. An example of pattern store is illustrated in Figure 5, where activated neurons are marked on the layers and the converged values of the weights are indicated. In this figure, three neurons on layer 1 are connected to all activated neurons on layer 2 by weights of 1/3, so the net values of the three neurons on layer 2 accumulate to 1 and the activation function fires 1. The output pattern vectors in the example are o_1 = {4,2,3,0,3}, o_2 = {2,0,2,0,3}, and o_3 = {1,0,0,2,0}.

Figure 5. Example of Pattern Store

3.3 Pattern Process

The suggested associative memory model needs a pattern process to store and retrieve patterns, which is summarized in Figure 6. When zero input is received at the input stage while a temporal pattern is being given, self-recall mode is activated. The weight update algorithm is not executed in self-recall mode. In this mode, the neurons on the current layer that are activated by the weight connections from the previous layer form a predicted pattern vector. Taking this vector as the output pattern vector, associations occur consecutively on the following layers, and the remaining part of the temporal pattern is retrieved without input. When input is present, the output pattern vector is determined from among the neurons activated by the previous layer (which form the predicted pattern vector) as those located close to the neurons of the given pattern vector. If there is no weight connecting from the previous layer to a neuron of the given pattern vector, a new weight connection is created by the weight update algorithm (see Section 3.4).
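The three weight rules of Equations 2–4 in Section 3.1 can be sketched as small update functions. This is a minimal sketch; the function names are illustrative, and c, d, and N are the learning factor, forgetting factor, and neuron count as defined in Section 3.1.

```python
def increase_weight(w, c, N):
    """Equation 2 (sketch): saturating increase; w converges to 1/N."""
    return w + c * (1.0 / N - w)

def create_weight(c, N):
    """Equation 3 (sketch): initial value of a newly created weight."""
    return c * (1.0 / N)

def decrease_weight(w, d):
    """Equation 4 (sketch): a misleading weight decays by factor d."""
    return w - d * w
```

Repeatedly applying `increase_weight` drives a weight toward 1/N, which is exactly the converged value 1/3 seen in the three-neuron example of Figure 5, while `decrease_weight` shrinks a misleading weight geometrically toward zero.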

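The self-recall retrieval of Section 3.3 can be sketched as a simple propagation loop. The data layout (a dict of outgoing weights per activated neuron, per layer) and all names are illustrative assumptions; the activation follows Equation 1.

```python
def fires(net, delta=0.1):
    """Equation 1 activation (sketch); delta is an assumed margin."""
    return 1 if (1 - delta) < net < (1 + delta) else 0

def self_recall(weights_per_layer, initial_vector):
    """Sketch of self-recall mode (Section 3.3), names illustrative.

    weights_per_layer[k][j] maps neuron j on layer k to a dict
    {i: w} of weights toward neurons i on layer k+1. Starting from
    the neurons activated by the initial given pattern vector, the
    remaining pattern is retrieved layer by layer without input.
    """
    retrieved = [initial_vector]
    active = initial_vector
    for layer in weights_per_layer:
        # Accumulate net values on the next layer from current output.
        net = {}
        for j in active:
            for i, w in layer.get(j, {}).items():
                net[i] = net.get(i, 0.0) + w
        # Predicted pattern vector: neurons whose net value is near 1.
        active = sorted(i for i, v in net.items() if fires(v))
        retrieved.append(active)
    return retrieved
```

With three neurons each holding converged weights of 1/3 toward the next layer's activated neurons, as in Figure 5, the accumulated net value is 1 and the next layer's pattern is recalled.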
Determine Self-Recall Mode
* If Given Pattern Vector = Zero Input then set Self-Recall Mode := True
* else if Self-Recall Mode = True then set Self-Recall Mode := False and prepare to receive new input pattern vectors

Span Output Pattern
* Take the activation function at the net values of neurons on the currently indexed layer
* If Self-Recall Mode = True then determine the Output Pattern Vector from the neurons on the layer whose activation functions result in 1
* else determine the Output Pattern Vector from the neurons whose activation functions result in 1 and near which the Given Pattern Vector exists

Weight Update
* If Self-Recall Mode = False then update the weights from neurons on the previous layer to neurons on the current layer (refer to Figure 7)

Determine Net Values of Neurons on the Next Layer
* Take the weights of the neurons at the Output Pattern Vector or the Given Pattern Vector on the current layer
* Sum the net values of the neurons on the next layer pointed to by these weights

Figure 6. Pattern Process

3.4 Weight Update

The weight update algorithm is summarized in Figure 7. If the given pattern vector is slightly different from the predicted pattern vector, the given pattern vector may be noisy, so the generated output pattern vector is the same as the predicted pattern vector. However, if the different given pattern vector is continually applied, this may mean a user wants to change the contents of the stored pattern. By the weight update algorithm, the weights connected to the predicted pattern vector are decreased, and weights connecting the new given pattern vector are created and increased. Eventually, the predicted pattern vector becomes the same as the changed given pattern vector. In this way, the memory model adaptively tunes stored patterns.

1. Select o' = {y_1, y_2, y_3, ..., y_N}, N = number of all activated neurons on the current layer
2. Select one activated neuron of the given pattern on the previous layer; with the weights connected from it to the current layer:
* If there exists a weight connected exactly to y_i then w_ji = w_ji + c (1/N − w_ji) (increase weight)
* else if there exists a weight connected near y_i then w = w − d × w (decrease weight)
* else create a new weight connected to y_i
3. Repeat step 2 with the rest of the neurons of the given pattern on the previous layer; repeat step 1 with the rest of the neurons of the given pattern on the current layer

Figure 7. Weight Update Algorithm

4. Simulation

Figure 8 shows the computer simulation output, in the time domain, of the memory model. It stores and retrieves two nine-channel temporal patterns. The model can completely retrieve a temporal pattern when only part of the pattern is given. The figure also shows how the memory model adaptively tunes its stored pattern: the output pattern is at first unchanged, then distorted, and finally fixed to the changed input as that input is repeatedly applied.

Figure 8. Simulation Output of the Memory Model and Progress of Tuning Stored Pattern (c = 0.4, d = 0.11)

5. Conclusion

In this paper a new on-line associative memory model suitable for storage and retrieval of temporal patterns is suggested. Storage and retrieval can be done in real time, since neurons have a limited number of weights and only activated neurons are considered during processing. The suggested memory model can adaptively tune its stored patterns, and self-recall mode makes it possible for the memory model to retrieve a whole temporal pattern when provided with only the initial part of the pattern.

References

[1] Zurada, J.M. 1992. Introduction to Artificial Neural Systems. West Publishing, pp. 313-323.
[2] Risset, J.C. 1996. "Real-Time Performance Interaction with a Computer-Controlled Acoustic Piano." Computer Music Journal, Vol. 20, No. 1, pp. 62-75.
[3] Freeman, J.A. 1991. Neural Networks. Addison-Wesley, pp. 230-234.