

Chapter 4 Control of Ball and Beam System (BABS)


4.4.5 BABS Controlled by HNN Controller Trained by Nonlinear Reference

We use the following examples to examine the ability of the HNN controller to control the BABS from an initial state different from the one used when training the HNN controller with the nonlinear reference controller. We let the values of w11, w12, w13, w14, w21, w22, w23, w24, w31, w32, w33, w34, w41, w42, w43, and w44 be the same as in Section 4.4.3, as given by equation (4-41).

The weighting matrix of equation (4-41) is fixed for the initial state of BABS used in the training phase: the initial position of the ball = 0.2 (meter), the initial velocity of the ball = 0 (meter/sec), the initial angle of the beam = 10 (degree), and the initial angular speed of the beam = 0 (degree/sec). We then examine the following initial states of BABS, written as (initial position (meter), initial velocity (meter/sec), initial angle (degree), initial angular speed (degree/sec)): (0.1, 0, 5, 0) and (0.4, 0, 20, 0), in the working phase, as shown in the following figures:
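As a rough illustration of how such working-phase initial states evolve, the sketch below takes one Euler step of the commonly used simplified ball-and-beam model (cf. [22]). The ball mass and the solid-ball inertia ratio below are assumed illustrative values, not the plant parameters of Chapter 4, and the beam's angular acceleration is left as an input supplied by a controller.

```python
import math

# Illustrative parameters -- NOT the plant parameters used in Chapter 4.
M_BALL = 0.05                     # ball mass (kg), assumed
G = 9.81                          # gravity (m/s^2)
J_OVER_R2 = 2.0 / 5.0 * M_BALL    # J_b / R^2 for a solid ball, assumed

def babs_step(state, beam_ang_acc, dt):
    """One Euler step of the simplified ball-and-beam model
    r'' = (m*r*th_dot^2 - m*g*sin(th)) / (J_b/R^2 + m)."""
    r, r_dot, th, th_dot = state
    r_acc = (M_BALL * r * th_dot ** 2 - M_BALL * G * math.sin(th)) \
            / (J_OVER_R2 + M_BALL)
    return (r + dt * r_dot, r_dot + dt * r_acc,
            th + dt * th_dot, th_dot + dt * beam_ang_acc)

# The two working-phase initial states examined above (angles in radians).
for r0, v0, a0_deg, w0_deg in [(0.1, 0.0, 5.0, 0.0), (0.4, 0.0, 20.0, 0.0)]:
    state = (r0, v0, math.radians(a0_deg), math.radians(w0_deg))
    state = babs_step(state, 0.0, 0.001)   # one uncontrolled step as a smoke test
    print(state)
```

With a positive beam angle and the ball initially at rest, the ball's velocity becomes negative after the first step, i.e., the ball starts rolling down the tilted beam, which is why a stabilizing control torque is needed.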

Fig-4.48. The control torques of BABS with initial ball's position = 0.1 meter and initial beam's angle = 5 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.49. The ball's positions of BABS with initial ball's position = 0.1 meter and initial beam's angle = 5 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.50. The ball's velocities of BABS with initial ball's position = 0.1 meter and initial beam's angle = 5 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.51. The beam's angles of BABS with initial ball's position = 0.1 meter and initial beam's angle = 5 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.52. The beam's angular speeds of BABS with initial ball's position = 0.1 meter and initial beam's angle = 5 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.53. The node 1 (2) (3) (4) voltage v1 (v2) (v3) (v4) of the HNN circuit

Fig-4.54. The control torques of BABS with initial ball's position = 0.4 meter and initial beam's angle = 20 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.55. The ball's positions of BABS with initial ball's position = 0.4 meter and initial beam's angle = 20 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.56. The ball's velocities of BABS with initial ball's position = 0.4 meter and initial beam's angle = 20 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.57. The beam's angles of BABS with initial ball's position = 0.4 meter and initial beam's angle = 20 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.58. The beam's angular speeds of BABS with initial ball's position = 0.4 meter and initial beam's angle = 20 degrees: the nonlinear reference controller (dashed line) and the HNN controller (solid line)

Fig-4.59. The node 1 (2) (3) (4) voltage v1 (v2) (v3) (v4) of the HNN circuit

Chapter 5

Discussion and Conclusion

5.1 Discussion of Parameters Setting

The resistance R of the HNN controller affects the node voltage v in the sense that a larger resistance causes a larger node voltage. Because a large voltage is not preferred, a small R should be chosen. The time constant τ can be expressed as

τ = RC ,    (5-1)

which is the product of the resistance R and the capacitance C. τ cannot be chosen too large, because a large τ leads to a slow response. The amplification constant camp is also an important parameter of the HNN controller. Because the voltage amplifier is ϕ(•) = tanh(•), the output of a neuron of the HNN is limited between the values -1 and +1, which we express as the following inequality:

-1 ≤ xj = tanh(vj) ≤ 1 .    (5-2)

From inequality (5-2), we can show that the output control signal of the HNN controller is limited as

-n·camp ≤ u = camp·Σ(j=1 to n) xj ≤ n·camp ,    (5-3)

where n is the number of neurons in the HNN. That is, the absolute value of the output control signal of the HNN controller is limited by the product of the amplification constant camp and the number of neurons n.

The learning rate η must be positive, as we discussed in Section 2.3. A large η is not preferred because it would contradict (2-14) [7]. However, η cannot be chosen too small, or the convergence of the weightings will be too slow; therefore, we use a proper value of the learning rate η so that the simulation proceeds well.

The simulation time is chosen long enough so that the regulated state of the controlled system can approach the desired point.

The time interval is set to a small enough value to obtain the required accuracy when numerically solving the differential equations of the system controlled by the HNN controller.
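To make the accuracy requirement concrete, the following sketch forward-Euler-integrates the standard Hopfield node equation C·dvj/dt = Σi wji·tanh(vi) - vj/R + Ij with a time interval much smaller than τ = RC. The weights, bias currents, R, and C below are illustrative values, not those of the trained controller:

```python
import math

# Illustrative two-node Hopfield circuit (NOT the trained Chapter 4 values).
R, C = 1.0, 0.01                  # time constant tau = R*C = 0.01 s
W = [[0.0, 0.5], [0.5, 0.0]]      # symmetric weights
I_bias = [0.1, -0.1]              # bias currents

def hnn_step(v, dt):
    """One forward-Euler step of C dv_j/dt = sum_i w_ji tanh(v_i) - v_j/R + I_j."""
    dv = []
    for j in range(len(v)):
        net = sum(W[j][i] * math.tanh(v[i]) for i in range(len(v)))
        dv.append((net - v[j] / R + I_bias[j]) / C)
    return [vj + dt * dvj for vj, dvj in zip(v, dv)]

v = [0.0, 0.0]
dt = 1e-4                         # time interval << tau for accuracy and stability
for _ in range(2000):             # simulate 0.2 s, i.e. 20 time constants
    v = hnn_step(v, dt)
print(v)                          # node voltages settle toward an equilibrium
```

If dt were chosen comparable to or larger than τ, the explicit Euler scheme would lose accuracy or even diverge, which is why the time interval must be kept small relative to the circuit time constant.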

In many cases one epoch is enough to achieve favorable training performance. More training epochs are unnecessary unless one-epoch training cannot achieve the preferred performance.

If we do not have prior knowledge of a proper weighting vector, we can simply initialize W as the zero vector. Notice that the values of the elements of W are then the same within the same column. This can be briefly explained as follows. Equations (3-22) and (3-24) are very similar; written in simplified form to show the difference, (3-22) reads w11(k+1) = w11(k) − ... × {1 − [tanh(v1)]²}, while (3-24) reads w21(k+1) = w21(k) − ... × {1 − [tanh(v2)]²}. From equations (3-10) and (3-11), we find that if w11 = w21 and w12 = w22, then i1 = i2 by equation (3-11), and furthermore v1 = v2 by equation (3-10). So, interestingly, because we set the initial values of all the weighting factors to zero,

w1j(0) = w2j(0) = ... = wnj(0) = 0 ,  j = 1, 2, ..., n.

At last, the important fact is obtained as follows:

w1j(k) = w2j(k) = ... = wnj(k) ,  j = 1, 2, ..., n ,    (5-4)

for every iteration k. It should be kept in mind that equation (5-4) is satisfied on the premise that all the weighting factors in the same column are initialized to the same value, as with the zero initialization above.
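The column-equality property can be illustrated in code. Since the exact update factor is elided above ("..."), the sketch assumes a generic update of the same shape, wij ← wij − η·e·xj·{1 − [tanh(vi)]²}, with an illustrative common factor e; only the shape of the update matters. Starting from W = 0, every row receives identical updates because v1 = ... = vn at every step, so the elements in each column stay equal, as in (5-4):

```python
import math

# Hedged illustration: e and x_in are illustrative stand-ins for the elided
# common factor of the update rule, not quantities from the thesis.
n = 4
eta = 0.1
W = [[0.0] * n for _ in range(n)]       # zero initialization
x_in = [0.3, -0.2, 0.5, 0.1]            # illustrative neuron outputs feeding the net
e = 0.7                                 # illustrative common error factor

for _ in range(10):
    # v_i depends only on row i of W; identical rows give identical v_i.
    v = [sum(W[i][j] * x_in[j] for j in range(n)) for i in range(n)]
    for i in range(n):
        for j in range(n):
            W[i][j] -= eta * e * x_in[j] * (1 - math.tanh(v[i]) ** 2)

# All rows stay identical, i.e. elements in the same column remain equal.
assert all(W[i] == W[0] for i in range(n))
print(W[0])
```

The invariant would break if the rows of W were initialized with different values, which is exactly the premise noted for equation (5-4).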

After training the weighting factors of the HNN controller, we find that the output (control signal) of the HNN controller approximates that of the well-designed controller, and the outputs of the plant controlled by the HNN controller approximate those of the plant controlled by the well-designed controller. So the trained HNN controller is a good model of the well-designed controller when the plant starts from the same initial state used in training. Furthermore, even when the weighting factors were trained with the plant in an initial state different from the one the HNN controller now faces, the HNN controller can still control the plant well: its control signal and the resulting plant outputs still approximate those of the well-designed controller. So the trained HNN controller can not only "memorize" the output (control signal) of the well-designed controller for the plant in the same initial state, but also "simulate" it for the plant in a different initial state. The HNN controller thus has ability beyond merely memorizing the training data, and this property is important for applications.

Faults due to the aging of a controller in a control system are very common; once they happen, the controller is often difficult to repair. We proposed an HNN controller to address this problem. After discussing the two examples of nonlinear systems controlled by HNN controllers, we see that the HNN has the potential to be a good controller. The key point of the HNN controller is its parameters, especially the weighting factors between each pair of neurons. Designing an HNN controller for a specified nonlinear system is still a challenge. In this thesis, we trained the weighting factors of the HNN controller to mimic an existing controller; the trained HNN controller is then used to replace the existing controller. Can we control the system by an HNN controller trained online without a reference controller? We will focus our future research on exploring this interesting question.

References

[1] K. Mehrotra, C. K. Mohan, and S. Ranka, "Elements of Artificial Neural Networks," The MIT Press, Cambridge, MA, 1996.

[2] L. R. Medsker and L. C. Jain, "Recurrent Neural Networks: Design and Applications," CRC Press LLC, 2000.

[3] C. T. Lin and C. S. G. Lee, "Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems," Prentice Hall PTR, 1996.

[4] S. K. Pal and S. Mitra, "Neuro-Fuzzy Pattern Recognition: Methods in Soft Computing," John Wiley & Sons, Inc., 1999.

[5] M. Friedman and A. Kandel, "Introduction to Pattern Recognition: Statistical, Structural, Neural and Fuzzy Logic Approaches," Imperial College Press, 1999.

[6] T. W. S. Chow, X. D. Li, and Y. Fang, "A Real-Time Learning Control Approach for Nonlinear Continuous-Time System Using Recurrent Neural Networks," IEEE Transactions on Industrial Electronics, vol. 47, pp. 478-486, 2000.

[7] S. Haykin, "Neural Networks: A Comprehensive Foundation," Prentice-Hall, second edition, 1999.

[8] A. Delgado, C. Kambhampati, and K. Warwick, "Dynamic recurrent neural network for system identification and control," IEE Proceedings - Control Theory and Applications, vol. 142, pp. 307-314, 1995.

[9] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences, USA, vol. 81, pp. 3088-3092, 1984.

[10] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences, USA, vol. 79, pp. 2554-2558, 1982.

[11] H. Jing and N. Zhao, "Study on the Global Asymptotic Stability of Hopfield Neural Networks," IEEE International Conference on Control and Automation, pp. 2780-2784, 2007.

[12] L. Wang, Y. S. Xiao, G. Zhou, and Q. Wu, "Further Discussion of Hopfield Neural Network based DC Drive System Identification and Control," Proceedings of the 4th World Congress on Intelligent Control and Automation, pp. 1990-1993, 2002.

[13] T. P. Troudet and S. M. Walters, "Neural Network Architecture for Crossbar Switch Control," IEEE Transactions on Circuits and Systems, vol. 38, pp. 42-56, 1991.

[14] T. W. S. Chow and Y. Fang, "A Recurrent Neural-Network-Based Real-Time Learning Control Strategy Applying to Nonlinear Systems with Unknown Dynamics," IEEE Transactions on Industrial Electronics, vol. 45, pp. 151-161, 1998.

[15] R. Craddock and C. Kambhampati, "Trained Hopfield Neural Networks Need Not Be Black-boxes," Proceedings of the American Control Conference, pp. 368-372, 1999.

[16] N. C. Kan, "Design of Hopfield Neural Network Controller with Its Applications," MS Thesis, Department of Electrical and Control Engineering, NCTU, Hsin-Chu, Taiwan, 2006.

[17] Z. B. Xu and C. P. Kwong, "Global Convergence and Asymptotic Stability of Asymmetric Hopfield Neural Networks," Journal of Mathematical Analysis and Applications, vol. 191, pp. 405-427, 1995.

[18] L. X. Wang, "Adaptive Fuzzy Systems and Control: Design and Stability Analysis," Prentice Hall PTR, 1994.

[19] Y. L. Li, "Coupled Derivatives Compact Schemes for One-Dimensional KDV Equation," MS Thesis, Department of Applied Mathematics, NCTU, Hsin-Chu, Taiwan, 2007.

[20] Y. F. Tung, "Multi-Degrees of Freedom H∞ Controller Design for the Inverted Pendulum to Fix Position," MS Thesis, Institute of Mechanical Engineering, NCTU, Hsin-Chu, Taiwan, 2004.

[21] R. C. Dorf and R. H. Bishop, "Modern Control Systems," Addison-Wesley, eighth edition, 1998.

[22] J. Hauser, S. Sastry, and P. Kokotovic, "Nonlinear Control Via Approximate Input-Output Linearization: The Ball and Beam Example," IEEE Transactions on Automatic Control, vol. 37, pp. 392-398, 1992.