
CHAPTER 5

5.1 Conclusion

A new circuit for the RMCNN w/o EO is implemented. The new circuit achieves the same recognition rate as the RMCNN with elapsed operation, but the operation of the RMCNN w/o EO is simpler.

The new RMCNN w/o EO does not need an elapsed period to obtain the feature-enhanced ratio weights. It can generate the feature-enhanced ratio weights directly after pattern learning, and its recognition rate is as good as that of the RMCNN with elapsed operation. Besides the simpler operation, the circuit of the RMCNN w/o EO is not complicated either. It does not need the multi-divider (M/D) [18] used in the RMCNN with elapsed operation, so its transistor count is lower than that of the RMCNN with elapsed operation. Moreover, although generating the ratio weights involves a division, no divider circuit is needed. Thus the hardware of the RMCNN w/o EO is simple.
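As a minimal software sketch of how ratio weights could be produced directly after pattern learning, the following Python snippet accumulates Hebbian-type weights over the learned patterns and then normalizes the weights around each pixel by their absolute sum. The 4-pixel neighborhood, the bipolar (+1/-1) pattern encoding, and the function name learn_ratio_weights are assumptions made for illustration, not the circuit's exact rule; the explicit division in the last line is what the hardware realizes implicitly, without a dedicated divider.

```python
import numpy as np

def learn_ratio_weights(patterns):
    """Sketch: Hebbian learning followed by ratio normalization.

    Assumptions (illustrative only): bipolar (+1/-1) patterns of equal
    shape, a 4-pixel neighborhood, and ratio weight = learned weight
    divided by the sum of absolute learned weights around the pixel.
    """
    rows, cols = patterns[0].shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right
    z = np.zeros((rows, cols, len(offsets)))          # accumulated weights

    # Hebbian accumulation: correlate every pixel with each neighbor.
    for p in patterns:
        padded = np.pad(p.astype(float), 1)
        for k, (dr, dc) in enumerate(offsets):
            z[:, :, k] += p * padded[1 + dr:1 + dr + rows, 1 + dc:1 + dc + cols]

    # Ratio memory: normalize by the absolute sum around each pixel.
    denom = np.abs(z).sum(axis=2, keepdims=True)
    denom[denom == 0] = 1.0                           # avoid division by zero
    return z / denom
```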

The number of learning patterns of the RMCNN w/o EO is three: the Chinese characters one, two, and four (一, 二, and 四). The maximum standard deviation of the normal-distribution noise that can be tolerated is about 0.3. The number of learning patterns that the RMCNN w/o EO can remember is still small. To increase the number of patterns that can be remembered, the learning algorithm or the recognition algorithm should be further improved in future work.
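For reference, a noisy test pattern at this noise level can be generated as in the short sketch below; the function name and the choice of keeping the noisy input analog (rather than re-thresholding it) are assumptions for illustration.

```python
import numpy as np

def noisy_pattern(pattern, sigma=0.3, rng=None):
    """Corrupt a pattern with zero-mean normal-distribution noise.

    sigma=0.3 corresponds to the largest noise level reported as still
    recognizable; how the noisy input is fed to the network (analog or
    re-thresholded) is an assumption of this sketch.
    """
    rng = np.random.default_rng() if rng is None else rng
    return pattern + rng.normal(0.0, sigma, size=pattern.shape)
```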

In the experimental results, some vertical lines of pattern "四" are not recognized, while patterns "一" and "二", whose strokes are all horizontal, are recognized successfully. This does not mean that horizontal lines are recognized better than vertical lines. A recognition failure is caused by the ratio weights around a pixel and the inputs of the neighboring pixels: if the ratio weights around a pixel are wrong, recognition fails even when that pixel lies on a horizontal line. Therefore, recognition failures can also appear in other patterns such as "五" if wrong ratio weights are generated. Fig. 5.1 shows some examples in which recognition failure may occur, and not all of these examples are vertical lines.
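The toy calculation below illustrates this failure mode under a deliberately simplified, assumed recognition rule (a pixel is recovered as the sign of the ratio-weighted sum of its neighbors' outputs, ignoring the full CNN dynamics): with correct ratio weights the pixel is recovered, while one corrupted set of ratio weights flips the result regardless of the line's orientation.

```python
import numpy as np

def cell_output(ratio_weights, neighbour_outputs):
    """Toy steady-state rule (assumption): the recovered pixel is the sign
    of the ratio-weighted sum of its neighbors' outputs."""
    return 1 if np.dot(ratio_weights, neighbour_outputs) >= 0 else -1

neighbours = np.array([1, 1, -1, -1])            # up, down, left, right
good_w = np.array([0.3, 0.3, -0.2, -0.2])        # |weights| sum to 1
bad_w  = np.array([-0.3, -0.3, -0.2, -0.2])      # corrupted ratio weights

print(cell_output(good_w, neighbours))   # 1  -> pixel recovered
print(cell_output(bad_w, neighbours))    # -1 -> recognition fails
```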

Fig. 5.1 Examples of recognition failure

5.2 Future Works

The RMCNN w/o EO in this thesis cannot recognize all three patterns. The cause has been found and the circuit has been redesigned in this thesis. Simulations confirm that the modified design can indeed recognize all three patterns, so the RMCNN w/o EO should be taped out again. To reduce the chip area, the routing of the RMCNN w/o EO should also be modified.

Several modifications for the next chip are proposed in this thesis:

1. The capacitance value should be optimized in the future. Once the operating speed of the RMCNN w/o EO is decided, the capacitance of Cw and the saturated output current Iysat of the T2D can be chosen accordingly.

2. The layout routing can be made more efficient to save die area; a smaller die area also suffers less process variation.

3. A static D flip-flop should be used instead of the dynamic D flip-flop.

4. A modified output stage can be used to reduce power consumption.

REFERENCES

[1] L. O. Chua and L. Yang, “Cellular neural networks: theory,” IEEE Trans. Circuits Syst., vol. 35, pp. 1257-1272, Oct. 1988.

[2] L. O. Chua and L. Yang, “Cellular neural networks: applications,” IEEE Trans. Circuits Syst., vol. 35, no. 10, pp. 1273-1290, Oct. 1988.

[3] T. Roska, “Analog events and a dual computing structure using analog and digital circuits and operators,” in Discrete Event Systems: Models and Applications, P. Varaiya and A. B. Kurzhanski, Eds. New York: Springer-Verlag, 1988, pp. 225-238.

[4] D. Liu and A. N. Michel, “Cellular neural networks for associative memories,” IEEE Trans. Circuits Syst. II, vol. 40, no. 2, pp. 119-121, Feb. 1993.

[5] A. Lukianiuk, “Capacity of cellular neural networks as associative memories,” in Proc. IEEE Int. Workshop on Cellular Neural Networks and Their Applications, CNNA, June 1996, pp. 37-40.

[6] M. Brucoli, L. Carnimeo, and G. Grassi, “An approach to the design of space-varying cellular neural networks for associative memories,” in Proc. 37th Midwest Symposium on Circuits and Syst., 1994, vol. 1, pp. 549-552.

[7] H. Kawabata, M. Nanba, and Z. Zhang, “On the associative memories in cellular neural networks,” in Proc. IEEE Int. Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, 1997, vol. 1, pp. 929-933.

[8] P. Szolgay, I. Szatmari, and K. Laszlo, “A fast fixed point learning method to implement associative memory on CNNs,” IEEE Trans. Circuits Syst. I, vol. 44, no. 4, pp. 362-366, Apr. 1997.

[9] R. Perfetti and G. Costantini, “Multiplierless digital learning algorithm for cellular neural networks,” IEEE Trans. Circuits Syst. I, vol. 48, no. 5, pp. 630-635, May 2001.

[10] A. Paasio, K. Halonen, and V. Porra, “CMOS implementation of associative memory using cellular neural network having adjustable template coefficients,” in Proc. IEEE Int. Symposium on Circuits and Syst., ISCAS, 1994, vol. 6, pp. 487-490.

[11] S. Grossberg, “Nonlinear difference-differential equations in prediction and learning theory,” in Proc. Natl. Acad. Sci. USA, vol. 58, pp. 1329-1334, 1967.

[12] J. A. Feldman and D. H. Ballard, “Connectionist models and their properties,” Cognitive Science, vol. 6, pp. 205-254, 1982.

[13] S. Grossberg, The Adaptive Brain I: Cognition, Learning, Reinforcement, and Rhythm, Elsevier/North-Holland, Amsterdam, 1986.

[14] C.-Y. Wu and J.-F. Lan, “CMOS current-mode neural associative memory design with on-chip learning,” IEEE Trans. Neural Networks, vol. 7, no. 1, pp. 167-181, 1996.

[15] J.-F. Lan and C.-Y. Wu, “CMOS current-mode outstar neural networks with long-period analog ratio memory,” in Proc. IEEE Int. Symposium on Circuits and Systems, ISCAS, 1995, vol. 3, pp. 1676-1679.

[16] B. Kosko, “Bidirectional associative memories,” IEEE Trans. Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49-60, Jan./Feb. 1988.

[17] L. O. Chua, M. Hasler, G. S. Moschytz, and J. Neirynck, “Autonomous cellular neural networks: a unified paradigm for pattern formation and active wave propagation,” IEEE Trans. Circuits Syst. I, vol. 42, pp. 559-577, Mar. 1995.

[18] C.-Y. Wu and C.-H. Cheng, “A learnable cellular neural network structure with ratio memory for image processing,” IEEE Trans. Circuits Syst. I, vol. 49, no. 12, pp. 1713-1723, Dec. 2002.

[19] C.-Y. Wu and C.-H. Cheng, “The design of cellular neural network with ratio memory for pattern learning and recognition,” in Proc. IEEE Int. Workshop on Cellular Neural Networks and Their Applications, CNNA, May 2000, pp. 301-307.

[20] J.-F. Lan (Advisor: C.-Y. Wu), “The Designs and Implementations of the Artificial Neural Networks with Ratio Memories and Their Applications,” Chapter 3, pp. 64-67, June 1996.

VITA

Name: 吳諭

Education:

Taipei Municipal Chien Kuo Senior High School (Sept. 1996 – June 1999)

Department of Electrical Engineering, National Central University (Sept. 1999 – June 2003)

Master's program, Institute of Electronics, National Chiao Tung University (Sept. 2003 – Sept. 2005)

Graduate courses taken:

Analog Integrated Circuits I, Prof. 吳介琮

Analog Integrated Circuits II, Prof. 吳重雨

Digital Integrated Circuits, Prof. 柯明道

Special Topics on ESD Protection Design for Integrated Circuits, Prof. 柯明道

Digital Communications, Prof. 桑梓賢

Advanced Digital Signal Processing, Prof. 劉志尉

Stochastic Processes, Prof. 王聖智

Mixed-Signal Integrated Circuit Design and Laboratory I, Prof. 吳介琮

Permanent address: 板橋市介壽街66之2號

Email: vivid175.ee92g@nctu.edu.tw

u88084100@cc.ncu.edu.tw m9211657@alab.ee.nctu.edu.tw
