
4.3 Cause of the Imperfect Experimental Result

The cause of the unsuccessful recognition is identified in this thesis. Table 4.2 shows the absolute-weights of cell(4,4), the cell that is recognized unsuccessfully, under three simulation conditions. The absolute-weight ss44M is simulated by Matlab and serves as the ideal weight. The absolute-weight ss44TT is simulated by Hspice in the typical-typical (TT) corner condition, and the absolute-weight ss44FS is simulated by Hspice in the fast-slow (FS) corner condition. The absolute-weights ss44TT and ss44FS deviate from the ideal values. In the practical circuit the absolute-weights are stored on the capacitor Cw in Fig. 2.4, and the Hspice simulation shows that the charging and discharging currents are unbalanced. As described in Chapter 2, the ratio weights are generated according to the absolute mean of the absolute-weights. Table 4.3 shows the ratio weights generated from the absolute-weights in Table 4.2. Because the absolute-weights ss44TT and ss44FS are wrong, their absolute means are wrong as well. Although the mean of ss44TT is wrong, there is still only one weight larger than the mean of ss44TT, so the ratio weight generated from ss44TT is the same as the ratio weight generated from ss44M; these two ratio weights are correct. The mean of ss44FS, however, is too far off to give correct ratio weights: three weights in ss44FS are larger than the mean of ss44FS, so the ratio weight generated from ss44FS is completely wrong. The wrong ratio weight results in the wrong recognition. The cause of the wrong absolute-weights is explained below.
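The effect of a corrupted mean on this thresholding step can be illustrated with a short behavioural sketch in Python. The numbers below are made up for illustration (they are not the values of Table 4.2), and the rule of keeping only weights whose magnitude exceeds the absolute mean is a simplified reading of the generation step described in Chapter 2, not the exact circuit behaviour.

```python
import numpy as np

def ratio_weights(abs_weights):
    """Simplified sketch of the mean-thresholding step: weights whose
    magnitude exceeds the absolute mean of the cell's weights are kept
    (feature enhanced), the rest are suppressed."""
    w = np.asarray(abs_weights, dtype=float)
    mean_abs = np.mean(np.abs(w))                    # absolute mean of the cell's weights
    return np.where(np.abs(w) > mean_abs, np.sign(w), 0.0)

# Made-up numbers, not the values of Table 4.2:
ideal_like = [0.9, 0.20, 0.10, 0.15]   # only one weight exceeds the mean (as for ss44M, ss44TT)
fs_like    = [0.9, 0.85, 0.80, 0.10]   # unbalanced charging lifts three weights above the mean

print(ratio_weights(ideal_like))   # [1. 0. 0. 0.]  -> correct ratio weight
print(ratio_weights(fs_like))      # [1. 1. 1. 0.]  -> wrong ratio weights, as in the FS corner
```

Only one weight survives the threshold in the first set, but three survive in the second, which mirrors how the inflated mean in the FS corner produces a completely wrong ratio weight.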

Chapter 2 explained the learning structure and all the detailed sub-circuits. Fig. 4.19 shows the learning structure. The block W charges or discharges the capacitor Cw according to the inputs of the two neighboring cells, and the direction of the charging current is controlled by the XOR gate in Fig. 4.20, which is a part of Fig. 2.10. The two inputs of the XOR gate are the sign bits of the two neighboring cells. Fig. 4.21 shows that one of the two inputs of the XOR gate is connected to the Vin of T2.
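Behaviourally, the XOR of the two sign bits makes Cw charge when the neighbouring inputs have the same sign and discharge when they differ. The sketch below is a behavioural model only (not the transistor-level circuit); which XOR polarity charges and which discharges is assumed here, and the parameter values are illustrative.

```python
def learn_step(w, x_i, x_j, i_charge=1e-9, c_w=1e-12, dt=1e-6):
    """Behavioural model of the W block in Fig. 4.19: the XOR of the two
    sign bits selects the direction in which I_charge is applied to Cw.
    Parameter values are illustrative, not the chip's values."""
    same_sign = (x_i >= 0) == (x_j >= 0)          # XOR of the two sign bits
    direction = +1 if same_sign else -1           # assumed polarity: equal signs charge Cw
    return w + direction * i_charge * dt / c_w    # voltage step on Cw

w = 0.0
for xi, xj in [(+1, +1), (+1, -1), (-1, -1)]:     # three learning steps with neighbouring inputs
    w = learn_step(w, xi, xj)
print(w)                                          # net weight voltage accumulated on Cw
```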

When a new pattern is learned, the shift registers need to transfer it. The pattern transfer takes some time, and during this period the MOS M26 in Fig. 2.8 is turned on, which makes the current I_charge in Fig. 4.19 very small.

However, this small current still influences the absolute-weights on Cw. Fig. 4.22 shows the small current during the pattern transfer; note that the current is not zero in this period. Because M26 in Fig. 2.8 is turned on during the pattern transfer, one input of the XOR gate is held at Vref (1.5 V). With one input at 1.5 V, the output of the XOR gate is unpredictable, so the effect of the small current during the pattern transfer is out of control and the absolute-weights are disturbed by it.
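A small behavioural model of this disturbance (with assumed names and illustrative parameter values) shows why the drift is out of control: the residual current is small, but its direction is effectively random because one XOR input sits at Vref.

```python
import random

def transfer_disturbance(w, n_steps=100, i_leak=20e-12, c_w=1e-12, dt=1e-6, seed=0):
    """Behavioural model of the original design during pattern transfer:
    a small residual current still reaches Cw, and its direction is random
    because one XOR input is held at Vref (1.5 V).  Values are illustrative."""
    rng = random.Random(seed)
    for _ in range(n_steps):
        direction = rng.choice([+1, -1])        # unpredictable XOR output
        w += direction * i_leak * dt / c_w      # uncontrolled drift of the stored weight
    return w

print(transfer_disturbance(0.5))                # the weight no longer equals the stored 0.5
```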

The modified circuit is shown in Fig. 4.23. A new path connected to a dummy load is inserted. The path turns on while a pattern is being transferred, so the small current during the pattern transfer no longer influences the absolute-weights. Fig. 4.24 shows the simulation result of the modified T2D: the modified T2D no longer contributes a small current to Cw. A one-pixel model with the modified T2D is also simulated, and the modified design can indeed recognize the noisy pixel.
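Under the same illustrative model, the modification simply steers the residual current into the dummy-load path during the transfer window, so nothing integrates onto Cw:

```python
def transfer_disturbance_modified(w, n_steps=100, dt=1e-6, c_w=1e-12):
    """Behavioural model of the modified T2D (Fig. 4.23): during pattern
    transfer the residual current is absorbed by the dummy-load path,
    so none of it integrates onto Cw.  Values are illustrative."""
    for _ in range(n_steps):
        i_into_cw = 0.0                  # residual current flows into the dummy load instead
        w += i_into_cw * dt / c_w        # no drift on the stored weight
    return w

print(transfer_disturbance_modified(0.5))   # prints 0.5: the stored weight is preserved
```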

Table 4.2 The absolute-weights of cell(4,4) under three simulation conditions (columns: simulation condition; absolute-weight of cell(4,4). Rows: Matlab (ideal), Hspice TT corner, Hspice FS corner)

Table 4.3 The absolute mean and generated ratio weights of cell(4,4) under three simulation conditions (columns: simulation condition; absolute-weight of cell(4,4); mean; ratio weights of cell(4,4). Rows: Matlab (ideal), Hspice TT corner, Hspice FS corner)

Fig. 4.19 The absolute-weights learning structure

Fig. 4.20 The structure that controls the flow direction of I_charge

Fig. 4.21 The connection between T2 and input of XOR gate

Fig. 4.22 The integral of the T2D output current over time

Fig. 4.23 The modified circuit

Fig. 4.24 The integral of the T2D output current over time: 1) the modified design, 2) the original design

Fig. 4.25 Simulation result of the one-cell model: 1) the original design, 2) the modified design

CHAPTER 5

CONCLUSION AND FUTURE WORK

5.1 Conclusion

A new circuit, the RMCNN w/o EO, is implemented. The new circuit has the same recognition rate as the RMCNN with elapsed operation, but the operation of the RMCNN w/o EO is simpler.

The new RMCNN w/o EO does not need an elapsed period to obtain the feature-enhanced ratio weights. It can generate the feature-enhanced ratio weights directly after pattern learning, and it achieves a recognition rate as good as that of the RMCNN with elapsed operation. Although its operation is simpler, the circuit of the RMCNN w/o EO is not more complicated. The RMCNN w/o EO does not need the multi-divider (M/D) [18] used in the RMCNN with elapsed operation, so its transistor count is lower than that of the RMCNN with elapsed operation. Moreover, although there is a division behavior in the RMCNN w/o EO, there is no divider in the circuit. Thus the hardware of the RMCNN w/o EO is simple.

The RMCNN w/o EO learns three patterns: the Chinese characters for one, two, and four (一, 二, and 四). The maximum standard deviation of the normally distributed noise that can be tolerated is about 0.3. The number of learning patterns that the RMCNN w/o EO can remember is still small. To increase this number, the learning algorithm or the recognition algorithm should be further improved in the future.
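For reference, the noise level quoted above can be reproduced in a test sketch by adding zero-mean Gaussian noise with a standard deviation of up to about 0.3 to the bipolar (+1/-1) pixels of a learned pattern. The 9x9 canvas and the stroke position below are illustrative assumptions, not the actual pattern data used for the chip.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(pattern, sigma=0.3):
    """Add zero-mean Gaussian noise to a bipolar (+1/-1) pattern;
    sigma = 0.3 corresponds to the maximum noise level quoted above."""
    p = np.asarray(pattern, dtype=float)
    return p + rng.normal(0.0, sigma, size=p.shape)

one = np.full((9, 9), -1.0)     # assumed 9x9 canvas, background pixels = -1
one[4, 1:8] = +1.0              # a single horizontal stroke, roughly the character "一"
noisy_one = add_noise(one)      # noisy test input presented to the network
```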

In the experimental result, some vertical lines of pattern "四" are unrecognized, while the recognition of patterns "一" and "二" is successful and all lines in "一" and "二" are horizontal. This does not mean that horizontal lines are recognized better than vertical lines. The recognition failure is due to the ratio weights around a pixel and the inputs of the neighboring pixels. If the ratio weights around a pixel are wrong, the recognition fails even if that pixel lies on a horizontal line. Thus the recognition failure would also appear in other patterns such as "五" if wrong ratio weights were generated. Fig. 5.1 shows some examples in which recognition failure may happen, and not all of the failing examples are vertical lines.

Fig. 5.1 Examples of recognition failure

5.2 Future Works

The RMCNN w/o EO in this thesis cannot recognize all of the three patterns. The cause has been found, and the circuit has been redesigned in this thesis. Simulation confirms that the modified design can indeed recognize all three patterns, so the RMCNN w/o EO should be taped out again. To reduce the chip area, the routing of the RMCNN w/o EO should be modified as well.

Some modifications for the next chip are proposed in this thesis:

1. The capacitance value should be optimized in the future. The operating speed of the RMCNN w/o EO should be decided first, and then the capacitance of Cw and the saturated output current Iysat of the T2D can be chosen according to that speed (a sketch of this trade-off follows the list).

2. The routing of the layout can be made more efficient to save die area, and a smaller die area suffers less process variation.

3. A static D flip-flop should be used instead of the dynamic D flip-flop.

4. The modified output stage can be used to reduce power consumption.
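A minimal sketch of the trade-off in item 1, assuming the weight voltage on Cw is built up linearly by the saturated T2D output current (so the learning time scales as Cw·ΔV/Iysat). The numbers are placeholders, not design values.

```python
def learning_time(delta_v, c_w, i_ysat):
    """Time needed to build a weight swing of delta_v volts on Cw with a
    constant saturated T2D output current i_ysat (linear charging assumed)."""
    return c_w * delta_v / i_ysat

# Placeholder numbers: 1 pF weight capacitor, 1 V weight swing, 10 nA saturated current.
print(learning_time(delta_v=1.0, c_w=1e-12, i_ysat=10e-9))   # 1e-4 s per learning phase
```

The same relation can be read the other way: once the target operating speed fixes the allowed learning time, Cw and Iysat can be traded against each other.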

REFERENCES

[1] L. O. Chua and L. Yang, "Cellular neural networks: theory," IEEE Trans. Circuits Syst., vol. 35, no. 10, pp. 1257-1272, Oct. 1988.

[2] L. O. Chua and L. Yang, "Cellular neural networks: applications," IEEE Trans. Circuits Syst., vol. 35, no. 10, pp. 1273-1290, Oct. 1988.

[3] T. Roska, “Analog events and a dual computing structure using analog and digital circuits and operators,” Discrete Event Systems: Models and Applications, pp. 225-238, P. Varaiya and A. B. Kurzhanski (ed), Springer Verlag, New York, 1988.

[4] D. Liu and A. N. Michel, "Cellular neural networks for associative memories," IEEE Trans. Circuits Syst. II, vol. 40, no. 2, pp. 119-121, Feb. 1993.

[5] A. Lukianiuk, "Capacity of cellular neural networks as associative memories," in Proc. IEEE Int. Workshop on Cellular Neural Networks and their Applications, CNNA, June 1996, pp. 37-40.

[6] M. Brucoli, L. Carnimeo, and G. Grassi, "An approach to the design of space-varying cellular neural networks for associative memories," in Proc. 37th Midwest Symposium on Circuits and Systems, 1994, vol. 1, pp. 549-552.

[7] H. Kawabata, M. Nanba, and Z. Zhang, "On the associative memories in cellular neural networks," in Proc. IEEE Int. Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, 1997, vol. 1, pp. 929-933.

[8] P. Szolgay, I. Szatmari, and K. Laszlo, “A fast fixed point learning method to implement associative memory on CNNs,” IEEE Trans. Circuits and Syst. I, vol. 44, no. 4, pp. 362-366, Apr. 1997.

[9] R. Perfetti and G. Costantini, "Multiplierless digital learning algorithm for cellular neural networks," IEEE Trans. Circuits Syst. I, vol. 48, no. 5, pp. 630-635, May 2001.

[10] A. Paasio, K. Halonen, and V. Porra, “CMOS implementation of associative memory using cellular neural network having adjustable template coefficients,” in Proc. IEEE Int. Symposium on Circuits and Syst., ISCAS, 1994, vol. 6, pp. 487-490.

[11] S. Grossberg, “Nonlinear difference-differential equations in prediction and learning theory,” in Proc. Natl. Acad. Sci. USA, vol. 58, pp. 1329-1334, 1967.

[12] J. A. Feldman and D. H. Ballard, "Connectionist models and their properties," Cognitive Science, vol. 6, pp. 205-254, 1982.

[13] S. Grossberg, The Adaptive Brain I: Cognition, Learning, Reinforcement, and Rhythm, Elsevier/North-Holland, Amsterdam, 1986.

[14] C.-Y. Wu and J.-F. Lan, “CMOS current-mode neural associative memory design with on-chip learning,” IEEE Trans. Neural Networks, vol. 7, no. 1, pp. 167-181, 1996.

[15] J.-F. Lan and C.-Y. Wu, “CMOS current-mode outstar neural networks with long-period analog ratio memory,” in Proc. IEEE Int. Symposium on Circuits and Systems, ISCAS, 1995, vol. 3, pp. 1676-1679.

[16] B. Kosko, “Bidirectional associative memories,” IEEE Trans. Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49-60, Jan./Feb. 1988.

[17] L. O. Chua, M. Hasler, G. S. Moschytz, and J. Neirynck, "Autonomous cellular neural networks: a unified paradigm for pattern formation and active wave propagation," IEEE Trans. Circuits Syst. I, vol. 42, pp. 559-577, Mar. 1995.

[18] C.-Y. Wu and C.-H. Cheng, "A learnable cellular neural network structure with ratio memory for image processing," IEEE Trans. Circuits Syst. I: Fundamental Theory and Applications, vol. 49, no. 12, pp. 1713-1723, Dec. 2002.

[19] C.-Y. Wu and C.-H. Cheng, "The design of cellular neural network with ratio memory for pattern learning and recognition," in Proc. Cellular Neural Networks and Their Applications (CNNA 2000), May 2000, pp. 301-307.

[20] J.-F. Lan (advisor: C.-Y. Wu), "The Designs and Implementations of the Artificial Neural Networks with Ratio Memories and Their Applications," Chapter 3, June 1996, pp. 64-67.

VITA

Name: 吳諭

Education:
Taipei Municipal Chien Kuo Senior High School (Sept. 1996 - June 1999)
Department of Electrical Engineering, National Central University (Sept. 1999 - June 2003)
Master's program, Institute of Electronics, National Chiao Tung University (Sept. 2003 - Sept. 2005)

Graduate courses taken:
Analog Integrated Circuits I (Prof. 吳介琮)
Analog Integrated Circuits II (Prof. 吳重雨)
Digital Integrated Circuits (Prof. 柯明道)
Special Topics on ESD Protection Design of Integrated Circuits (Prof. 柯明道)
Digital Communications (Prof. 桑梓賢)
Advanced Digital Signal Processing (Prof. 劉志尉)
Stochastic Processes (Prof. 王聖智)
Mixed-Signal Integrated Circuit Design and Laboratory I (Prof. 吳介琮)

Permanent address: 板橋市介壽街66之2號
Email: vivid175.ee92g@nctu.edu.tw
u88084100@cc.ncu.edu.tw
m9211657@alab.ee.nctu.edu.tw
