REFERENCES
Amen, M. (2001). Heuristic methods for cost-oriented assembly line balancing: A comparison on solution quality and computing time. International Journal of Production Economics, 69 (3), 255-264.
Andrew, K. (1992). Intelligent design and manufacturing. New York: John Wiley & Sons Inc.
Anjum, M. F., Tasadduq, I., & Al-Sultan, K. (1997). Response surface methodology: A neural network approach. European Journal of Operational Research, 101, 65-73.
Ben-Arieh, D., Kumar, R. R., & Tiwari, M. K. (2004). Analysis of assembly operations’ difficulty using enhanced expert high-level colored fuzzy Petri net model. Robotics and Computer-Integrated Manufacturing, 20, 385-403.
Boart, P. (2005). Life cycle simulation support for functional products. M.Sc. thesis, Luleå University of Technology, SWEDEN.
Bourjault, A. (1984). Contribution á une approche méthodologique de l’assemblage automatisé: Elaboration automatique des séquences opératoires, Unpublished doctoral dissertation, Faculté des Sciences et des Techniques de l’Université de Franche-Comté, France.
Chen, C. L. P. (1990). Neural computation for planning and/or precedence-constraint robot assembly sequences. In: Proceedings of the International Conference on Neural Networks. 127-142. San Diego, CA.
Chen, R. S., Lu, K. Y., & Tai, P. H. (2004a). Optimization of assembly plan through a three-stage integrated approach. International Journal of Computer Applications in Technology, 19(1), 28-38.
Chen, R. S., Lu, K. Y., & Tai, P. H. (2004b). Optimizing assembly planning through a three-stage integrated approach. International Journal of Production Economics, 88, 243-256.
Chen, W. C., & Hsu, S. W. (2007). A neural-network approach for an automatic LED inspection system. Expert Systems with Applications, 33(3), 531-537.
Chen, W. C., Tai, P. H., Deng, W. J., & Hsieh, L. F. (2008). A three-stage integrated approach for assembly sequence planning using neural networks. Expert Systems with Applications, 34, 1777-1786.
Cheng, C. S., & Tseng, C. A. (1995). Neural network in detecting the change of process mean value and variance. Journal of the Chinese Institute of Industrial Engineers, 12 (3), 215-223.
Clive, L. D., & Patric, L. (2009). Engineering Design: A project-based introduction. USA: John Wiley & Sons.
Crowson, R. D. (2006). Assembly Processes: Finishing, packaging, and automation. New York: Taylor & Francis.
De Fazio, T. L., & Whitney, D. E. (1987). Simplified generation of all mechanical assembly sequences. IEEE Transactions on Robotics and Automation, 3(6), 640-658.
Eng, T. H., Ling, Z. K., Olson, W., & Mclean, C. (1999). Feature-based assembly modeling and sequence generation. Computers & Industrial Engineering, 36, 17-33.
Fogel, D. B. (1991). An information criterion for optimal neural network selection. IEEE Transactions on Neural Networks, 2(5), 490-497.
Gu, P., & Norrie, D. H. (2006). Intelligent Manufacturing Planning. USA: Chapman & Hall.
Gu, P., & Yan, X. (1995). CAD-directed automatic assembly sequence planning. International Journal of Production Research, 33(11), 3069-3100.
Guo, Y. W., Li, W. D., Mileham, A. R., & Owen, G. W. (2009). Applications of particle swarm optimization in integrated process planning and scheduling. Robotics and Computer-Integrated Manufacturing, 25, 280-288.
Haupt, R. L. (2004). Practical genetic algorithms. 2nd ed. USA, Wiley-Interscience Publication.
Haykin, S. (1999). Neural Networks: A comprehensive foundation. Canada, Prentice Hall.
Henrioud, J. M., Relange, L., & Perrard, C. (2003). Assembly sequences, assembly constraints, precedence graphs. In: Proceedings of the fifth IEEE symposium on assembly and task planning, France, July 10-11, 90-95.
Holland, W. V., & Bronsvoort, W. F. (2000). Assembly features in modeling and planning. Robotics and Computer-Integrated Manufacturing, 16, 277-294.
Homem de Mello, L. S., & Sanderson, A. C. (1991a). Representations of mechanical assembly sequences. IEEE Transactions on Robotics and Automation, 7(2), 211-227.
Homem de Mello, L. S., & Sanderson, A. C. (1991b). A correct and complete algorithm for the generation of mechanical assembly sequences. IEEE Transactions on Robotics and Automation, 7(2), 228-240.
Hong, D. S., & Cho, H. S. (1995). A neural network based computational scheme for generating optimized robotic assembly sequences. Engineering Applications of Artificial Intelligence, 8(2), 129-145.
Huang, G. Q. (1996). Design for X: Concurrent Engineering Imperatives. London, Chapman & Hall.
Huang, G. Q., & Mak, K. L. (1997). The DFX shell: a generic framework for developing design for X tools. Robotics & Integrated Manufacturing, 13 (3), 271-300.
Hush, D. R., & Horne, B. G. (1993). Progress in supervised neural networks. IEEE Signal Processing Magazine, January, 8-39.
Kai, Y., & Basem, E. H. (2003). Design for Six Sigma: a roadmap for product development, McGraw-Hill, New York.
Kalpakjian, S. (1992). Manufacturing process for engineering materials. 2nd ed. USA, Addison-Wesley.
Khaw, J. F. C., Lim, B. S., & Lim, L. E. N. (1995). Optimal design of neural network using the Taguchi method. Neurocomputing, 7, 225-245.
Kroll, E. (1994). Intelligent assembly planning on triaxial products. Concurrent Engineering: Research and Applications, 1(2), 311-319.
Kulon, J., Broomhead, P., & Mynors, D. J. (2003). Applying knowledge-based engineering to traditional manufacturing design. International Journal of Advanced Manufacturing Technology, 30, 945-951.
Kuo, T. C., Huang, S. H., & Zhang, H. C. (2001). Design for manufacture and design for‘X’: Concepts, applications, and perspectives. Computers & Industrial Engineering, 41, 241-260.
Lai, H. Y., & Huang, C. T. (2004). A systematic approach for automatic assembly sequence plan generation. International Journal of Advanced Manufacturing Technology, 24, 752-763.
Lee, K. (1999). Principles of CAD/CAM/CAE systems. USA, Addison-Wesley.
Lee, S. (1989). Disassembly planning by subassembly extraction. In: M.A., Proceedings of the third ORSA/TIMS Conference on flexible manufacturing systems (pp. 383-388). Cambridge.
Levitin, A. V. (2007). Introduction to the design and analysis of algorithms. 2nd ed. USA: Addison-Wesley.
Lim, S. S., Lee, B. H., Lim, E. N., & Ngoi, B. K. A. (1995). Computer-aided concurrent design of product and assembly processes: a literature review, Journal of Design and Manufacturing, 5, 67-88.
Lin, A. C., & Chang, T. C. (1993). An integrated approach to automated assembly planning for three-dimensional mechanical products. International Journal of Production Research, 31(5), 1201-1227.
Liu, Y., Liu, W., & Zhang, Y. (2001). Inspection of defects in optical fibers based on back-propagation neural networks. Optics Communications, 198(4-6), 369-378.
Lotter, B. (1989). Manufacturing assembly handbook, Butterworths, London.
Lu, C., Wong, Y. S., & Fuh, J. Y. H. (2006). An enhanced assembly planning approach using a multi-objective genetic algorithm. Journal of Engineering Manufacture, 220, 255-272.
Maier, H. R., & Dandy, G. C. (1998). Understanding the behaviour and optimising the performance of back-propagation neural networks: an empirical study. Environmental Modelling & Software, 13, 179-191.
Marian, R.M., Luong, L.H.S., & Abhary, K. (2003). Assembly sequence planning and optimisation using genetic algorithms. Applied Soft Computing, 2(3), 223-253.
Mascle, C., & Zhao, H. P. (2008). Integrating environmental consciousness in product/process development based on life-cycle thinking. International Journal of Production Economics, 12, 5-17.
McDonald, D. B., Grantham, W. J., Tabor, W. L., & Murphy, M. J. (2007). Global and local optimization using radial basis function response models. Applied Mathematical Modelling, 31, 2095-2110.
Murata, N., & Yoshizawa, S. (1994). Network information criterion-determining the number of hidden units for an artificial neural network model. IEEE Transactions on Neural Networks, 5, 865-872.
Nof, S. Y., Wilhelm, W. E., & Warnecke, H. J. (1997). Industrial Assembly. London: Chapman & Hall.
Onoda, T. (1995). Neural network information criterion for optimal number of hidden units. Proceedings of the IEEE International Conference on Neural Networks, 1, 270-280.
Prasad, B. (1997). Concurrent engineering fundamentals: Integrated product development. New Jersey, Prentice-Hall.
Ramos, C., Rocha, J., & Vale, Z. (1998). On the complexity of precedence graphs for assembly and task planning. Computers in Industry, 36, 101-111.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature (London), 323, 533-536.
Sage, A. P. (1990). Concise encyclopedia of information processing in systems and organizations. New York, Pergamon.
Santos, M. S., & Ludermir, B. (1999). Using factorial design to optimize neural networks. International Joint Conference on Neural Networks, 2, 857-861. Washington, DC.
Saridakis, K. M., & Dentsoras, A. J. (2008). Soft computing in engineering design- A review. Advanced Engineering Informatics, 22, 202-221.
Sinanoğlu, C. (2006). A neural predictor to analyze the effects of metal matrix composite structure (6063 Al/SiCp MMC) on journal bearing. Industrial Lubrication and Tribology, 58(2), 95-109.
Smith, G. C., & Smith, S. S. F. (2002). An enhanced genetic algorithm for automated assembly planning. Robotics and Computer-Integrated Manufacturing, 18, 355-364.
Su, Q. (2009). A hierarchical approach on assembly sequence planning and optimal sequence analyzing. Robotics and Computer-Integrated Manufacturing, 25, 224-234.
Tai, P. H. (1997). Feature-based assembly modeling for assembly sequence planning of three-dimensional products. Unpublished master’s thesis, Cranfield University, UK.
Tripathi, M., Agrawal, S., Pandey, M. K., Shankar, R., & Tiwari, M. K. (2009). Real world disassembly modeling and sequencing problem: Optimization by Algorithm of Self-Guided Ants (ASGA). Robotics and Computer-Integrated Manufacturing, 25, 483-496.
Wang, J. F., Liu, J. H., & Zhong, Y. F. (2005). A novel ant colony algorithm for assembly sequence planning, International Journal of Advanced Manufacturing Technology, 25, 1137-1143.
Yao, S., Yan, B., Chen, B., & Zeng, Y. (2005). An ANN-based element extraction method for automatic mesh generation. Expert Systems with Applications, 29, 193-206.
Yin, Z. P., Ding, H., & Xiong, Y. L. (2004). A virtual prototyping approach to generation and evaluation of mechanical assembly sequences. In: Proceedings of the Institution of Mechanical Engineering, January, 218, 87-102.
APPENDIX A Wheel Illustration Over a Full CE System
Prasad draws on a number of fields to illustrate how a full Concurrent Engineering (CE) system works, as shown in Fig. 39. Successful DFX depends on a dedicated CE implementation and its complete realization.
Figure 39. PPO&IPD wheel illustration over a full CE system.
APPENDIX B
Representation of ASP Development Process
Step 1.
Figure 40. Three-axis deployment of the car Above graph.
[Figure 40 panels: Above-X, Above-Y, and Above-Z deployment graphs of the car parts.]
No. Part name
1 MB (MainBody)
2 CP (ChassisPan)
3 DG (DriveGear)
4 GS1_1 (GearSet1_1)
5 GS1_2 (GearSet1_2)
6 GS1_3 (GearSet1_3)
7 GS2_1 (GearSet2_1)
8 GS2_2 (GearSet2_2)
9 GS2_3 (GearSet2_3)
10 GS3_1 (GearSet3_1)
11 GS3_2 (GearSet3_2)
12 PO (Power)
13 LBW (LeftBackWheel)
14 LFW (LeftFrontWheel)
15 BS1 (BaseScrew1)
16 BS2 (BaseScrew2)
17 PP1 (PowerPack1)
18 PP2 (PowerPack2)
19 PPS1 (PowerPackScrew1)
20 PPS2 (PowerPackScrew2)
21 RA (RearAxis)
22 RD (RearDiff)
23 RBW (RightBackWheel)
24 RFW (RightFrontWheel)
25 SL (Spoiler)
26 SP1 (Spring1)
27 SP2 (Spring2)
28 SR (SteeringRack)
Figure 41. Creating a correct car explosion graph via the Above graph rule.
Step 2. Base part: 2CP
Figure 42. Car sub-relational model graph (RMG) at level 1.
Step 3. Base part: 17PP1 (SAGB)
Figure 44. Car sub-RMG and APD for the SAGB.
Figure 43. Car assembly precedence diagram (APD) at level 1.
Step 5. Key connections:
Step 4. Base part: 9GS2_3 (Gear Set)
Figure 45. Sub-RMG and APD for the car gear set.
Figure 46. Car sub-RMG and APD at level 2.
Step 6. Key connections: 2CP 17PP1 3DG 5GS1_2
Figure 47. Complete car RMG and APD.
No. Part name
1 MA (MotorbikeAxle)
2 MB1 (MotorbikeBearing1)
3 MB2_1 (MotorbikeBearing2_1)
4 MB2_2 (MotorbikeBearing2_2)
5 MB3_1 (MotorbikeBearing3_1)
6 MB3_2 (MotorbikeBearing3_2)
7 MH1 (MotorbikeHandle1)
8 MH2 (MotorbikeHandle2)
9 MMB1 (MotorbikeMainBody1)
10 MMB2 (MotorbikeMainBody2)
11 MN (MotorbikeNut)
12 MPN (MotorbikePin)
13 MPE (MotorbikePlate)
14 MS (MotorbikeScrew)
15 MW1 (MotorbikeWheel1)
16 MW2 (MotorbikeWheel2)
17 MW3 (MotorbikeWheel3)
[Figure 48 panels: Above-X and Above-Y deployment graphs of the motorbike parts.]
Figure 48. Three-axis deployment of the motorbike Above graph.
Figure 49. Creating a correct motorbike explosion graph via the Above graph rule.
Base part: 9MMB1
Figure 50. Motorbike sub-RMG and APD.
APPENDIX C Data Matrix of KBE-based ASP System
Product Name: Toy Car
Product Name: Toy Motorbike
APPENDIX D
Statistical Learning Theory and BP Algorithm
A learning theory of NN-based statistical characterization addresses how to control the generalization ability of a supervised learning machine, which is capable of implementing a set of input-output mapping functions. A neural network is a massively parallel distributed processor composed of processing units that store useful experiential knowledge. It resembles the human brain in two respects (Haykin, 1999):
1. Knowledge is obtained by the network through a learning process.
2. Interneuron connection strengths, called synaptic weights, store the acquired knowledge.
Hebbian learning, based on Hebb's postulate of learning, is the most famous of all learning rules. A Hebbian synapse is a time-dependent, highly local, and strongly interactive mechanism for increasing synaptic efficiency. The simplest form of Hebbian learning can be written as

Δw_kj(n) = η y_k(n) x_j(n),

and under the covariance hypothesis,

Δw_kj(n) = η (x_j − x̄)(y_k − ȳ),

where x̄ and ȳ are the time-averaged values of the pre-synaptic signal x_j and the post-synaptic signal y_k, respectively, and n is the time step or epoch.
Method of steepest descent: let g = ∇ε(ω), where ∇ = [∂/∂ω_1, ∂/∂ω_2, …, ∂/∂ω_m]^T is the gradient operator and ω is the weight vector. The steepest descent algorithm is formally described by

ω(n + 1) = ω(n) − η g(n),

where η is a positive constant called the step size or learning-rate parameter, and g(n) is the gradient vector evaluated at the point ω(n).
Newton's method applies a second-order Taylor expansion of the cost function around ω(n):

Δε(ω(n)) = ε(ω(n + 1)) − ε(ω(n)) ≈ g^T(n) Δω(n) + (1/2) Δω^T(n) H(n) Δω(n).

Differentiating with respect to Δω(n) and setting the result to zero gives

g(n) + H(n) Δω(n) = 0,  so  Δω(n) = −H⁻¹(n) g(n),

and therefore

ω(n + 1) = ω(n) + Δω(n) = ω(n) − H⁻¹(n) g(n).

For a cost function expressed as the sum of error squares,

ε(W) = (1/2) Σ_{i=1}^{n} e_i²(W).
Differentiating with respect to the weight vector W: with e(n) = d(n) − x^T(n) W(n),

∂e(n)/∂W = −x(n);  ∂ε(W)/∂W = e(n) ∂e(n)/∂W = −x(n) e(n),

hence the instantaneous gradient estimate is ĝ(n) = −x(n) e(n), where e(n) is the error signal measured at time n. The LMS update is

ŵ(n + 1) = ŵ(n) + η x(n) e(n),  equivalently  ω(n + 1) = ω(n) − η ĝ(n),

where η is the learning-rate parameter and the algorithm forms a feedback loop around the weight vector. The current weight estimate is w(n) = Z⁻¹[ŵ(n + 1)], where Z⁻¹ is the unit-delay operator.
The weight adjustment

Δω(n) = ω(n + 1) − ω(n) = −η g(n)

is an error-correction rule, where ω(n + 1) is the updated value and ω(n) is the old value of the weight vector. The matrix H(n) is the m×m Hessian matrix of ε(ω):

H = ∇²ε(ω) =
[ ∂²ε/∂ω₁²      ∂²ε/∂ω₁∂ω₂   …  ∂²ε/∂ω₁∂ω_m
  ∂²ε/∂ω₂∂ω₁   ∂²ε/∂ω₂²      …  ∂²ε/∂ω₂∂ω_m
  …
  ∂²ε/∂ω_m∂ω₁  ∂²ε/∂ω_m∂ω₂  …  ∂²ε/∂ω_m²  ],

and H⁻¹(n) is the inverse of the Hessian of ε(ω).
The Gauss-Newton method is applicable to a cost function that is expressed as the sum of error squares.
The Least-Mean-Square (LMS) algorithm is based on the use of instantaneous values for the cost function, namely,
As n → ∞, ŵ(n) approaches the Wiener solution ω₀, and the error measure ε[ŵ(n)] approaches a constant.
[Figure: single-layer perceptron with inputs X1(n), X2(n), …, Xm(n), synaptic weights w1, w2, …, wm, bias b, and a hard limiter producing the output φ(v).]

The hard-limiter input, or induced local field, of the neuron is

v = Σ_{i=1}^{m} w_i x_i + b.

E(n) = (1/2) e_k²(n) is the instantaneous value of the error signal e_k(n).
The objective is achieved by minimizing a cost function or index of performance, E(n).
The step-by-step adjustments to the synaptic weights of neuron k continue until the system reaches a steady state (i.e., the synaptic weights are essentially stabilized). The learning process described here is referred to as error-correction learning and belongs to a closed-loop feedback system.
Let w_kj(n) denote the value of the synaptic weight w_kj of neuron k excited by element x_j(n) of the signal vector x(n) at time step n.
The delta rule (or Widrow-Hoff rule): the adjustment Δw_kj(n) applied to the synaptic weight w_kj at time step n is defined by

Δw_kj(n) = η e_k(n) x_j(n),

where η is a positive constant (the learning-rate parameter) that determines the rate of learning.
The delta rule may be stated as: the adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question.
[Figure: nonlinear neuron k with inputs X1(n), X2(n), …, Xm(n), synaptic weights Wk1(n), Wk2(n), …, Wkm(n), bias b, activation function φ(·), induced local field ν_k(n), output y_k(n), desired response d_k(n), and error signal e_k(n) = d_k(n) − y_k(n).]

The synaptic weights are updated as ω_kj(n + 1) = ω_kj(n) + Δω_kj(n), with

X(n) = [+1, x1(n), x2(n), …, xn(n)]^T;  W(n) = [b(n), W1(n), W2(n), …, Wn(n)]^T,

where b(n) is the bias. The perceptron convergence rule is

ω(n + 1) = ω(n) + η[d(n) − y(n)] x(n),  0 < η < 1,

where y(n) = sgn[W^T(n)X(n)] is the actual response and d(n) is the desired response.
A commonly used form of nonlinearity is the logistic function y = 1/(1 + e^{−v}), whose derivative satisfies y′ = e^{−v}/(1 + e^{−v})² = y(1 − y).
y_j = φ_j(υ_j) = 1/(1 + exp(−υ_j)),

where υ_j is the induced local field (i.e., the weighted sum of all synaptic inputs plus the bias) of neuron j, and y_j is the output of the neuron.
Back-Propagation Algorithm
The error signal at the output of neuron j at iteration n is

e_j(n) = d_j(n) − y_j(n),

where d_j(n) is the desired response for neuron j and y_j(n) is the function signal for neuron j.
The total error energy
ε(n) = (1/2) Σ_{j∈C} e_j²(n);  ε_av = (1/N) Σ_{n=1}^{N} ε(n),

where C is the set of output neurons and N is the number of training patterns.
[Figure: neuron j with activation function φ(·), induced local field ν_j(n), output y_j(n), desired response d_j(n), and error signal e_j(n).]

The induced local field of neuron j is

υ_j(n) = Σ_{i=0}^{m} ω_ji(n) y_i(n).
According to the chain rule of calculus,

∂ε(n)/∂ω_ji(n) = [∂ε(n)/∂e_j(n)] [∂e_j(n)/∂y_j(n)] [∂y_j(n)/∂υ_j(n)] [∂υ_j(n)/∂ω_ji(n)]
= e_j(n) · (−1) · φ′_j(υ_j(n)) · y_i(n) = −e_j(n) φ′_j(υ_j(n)) y_i(n),

where e_j(n) = d_j(n) − y_j(n), y_j(n) = φ_j(υ_j(n)), and υ_j(n) = Σ_{i=0}^{m} ω_ji(n) y_i(n).
The local gradient is

δ_j(n) = −∂ε(n)/∂υ_j(n) = e_j(n) φ′_j(υ_j(n)).
The weight correction is

Δω_ji(n) = η δ_j(n) y_i(n),

where η is the learning-rate parameter and y_i is the input signal of neuron j.
Activation functionϕj(υj(n)): 1. Sigmoid nonlinearity
φ_j(υ_j(n)) = 1/(1 + exp(−a υ_j(n))),  a > 0, −∞ < υ_j(n) < ∞, so that 0 ≤ y_j(n) ≤ 1;
φ′_j(υ_j(n)) = a y_j(n)[1 − y_j(n)].

For hidden neuron j,

δ_j(n) = φ′_j(υ_j(n)) Σ_k δ_k(n) ω_kj(n) = a y_j(n)[1 − y_j(n)] Σ_k δ_k(n) ω_kj(n).
2. Hyperbolic tangent function
φ_j(υ_j(n)) = a tanh(b υ_j(n)),  a, b > 0, −∞ < υ_j(n) < ∞, so that −a ≤ y_j(n) ≤ a;
φ′_j(υ_j(n)) = ab sech²(b υ_j(n)) = ab[1 − tanh²(b υ_j(n))] = (b/a)[a − y_j(n)][a + y_j(n)].

For output neuron j,

δ_j(n) = e_j(n) φ′_j(υ_j(n)) = (b/a) e_j(n)[a − y_j(n)][a + y_j(n)];

for hidden neuron j,

δ_j(n) = (b/a)[a − y_j(n)][a + y_j(n)] Σ_k δ_k(n) ω_kj(n).

Data Structure for NeuroSolutions

[Figure: a five-input BPNN with inputs X1(n)–X5(n), hidden-layer induced local fields ν6(n)–ν10(n), weights Wji(n), W61(n), W62(n), Wkj(n), Wkm(n), biases b, activation function φ(·), output y_j(n), desired response d_j(n), and error e_j(n).]
An example of the BPNN algorithm using the NeuroSolutions package is given below. Assume there are five inputs: the value of assembly incidence (AI), total penalty value (TPV), feature number (FN), weight, and volume, which are crucial criteria for assembly sequence planning; the output variable is a global optimal assembly sequence. The BPNN architecture above carries the corresponding weights and biases.
Comparison of BPNN implementations in C and KF programming
The C implementation performs much better than the KF implementation; both programs are listed below.
[C programming]
// back2.cpp : //
#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define Ntest 17
#define Ninp 5
#define Nhid 14
#define Nout 1
#define weight_file "d:\\5925007\\myvc\\xorTrain.wei"
#define test_file "d:\\5925007\\myvc\\xorTest.txt"
#define recall_file "d:\\5925007\\myvc\\xor.rec"
int _tmain(int argc, _TCHAR* argv[]) {
FILE *fp1,*fp2,*fp3;
float X[Ninp],T[Nout],H[Nhid],Y[Nout];
float W_xh[Ninp][Nhid],W_hy[Nhid][Nout];
float Q_h[Nhid],Q_y[Nout];
float sum,mse;
int Itest;
int i,j,h;
/*--- open files ---*/
fp1=fopen(weight_file,"r");
fp2=fopen(test_file,"r");
fp3=fopen(recall_file,"w");
if (fp1==NULL) {
puts("File does not exist!!");
getchar();
exit(1);
}
if (fp2==NULL) {
puts("File does not exist!!");
getchar();
exit(1);
}
/*--- input weights from weight_file ---*/
fseek(fp1,0,0);
for (h=0;h<Nhid;h++) for (i=0;i<Ninp;i++)
fscanf(fp1,"%f",&W_xh[i][h]);
for (j=0;j<Nout;j++)
for (h=0;h<Nhid;h++)
fscanf(fp1,"%f",&W_hy[h][j]);
for (h=0;h<Nhid;h++)
fscanf(fp1,"%f",&Q_h[h]);
for (j=0;j<Nout;j++)
fscanf(fp1,"%f",&Q_y[j]);
/*--- Testing ---*/
fseek(fp2,0,0);
for (Itest=0;Itest<Ntest;Itest++) {
/*--- input one testing example ---*/
for (i=0;i<Ninp;i++)
fscanf(fp2,"%f",&X[i]);
for (j=0;j<Nout;j++)
fscanf(fp2,"%f",&T[j]);
// for (i=0;i<Ninp;i++) { // printf("x[i]=");
// printf("%-8.2f",X[i]);
// }
/*--- compute H, Y ---*/
for (h=0;h<Nhid;h++) {
sum=0.0;
for (i=0;i<Ninp;i++){
// printf("w[%d][%d]=",i,h);
// printf("%-8.2f",W_xh[i][h]);
sum=sum+X[i]*W_xh[i][h];
// printf("sumA[%d]=",h);
// printf("%-8.2f",sum);
}
// printf("sum[%d]=",h);
// printf("%-8.2f",sum);
H[h]=(float)1.0/(1.0+exp(-(sum-Q_h[h])));
//H[h]=(float)(exp(sum-Q_h[h])-exp(-(sum-Q_h[h])))/(exp(sum-Q_h[h])+exp(-(sum-Q_h[h])));
// printf("H[%d]=",h);
// printf("%-8.3f",H[h]);
// printf("Q_h[%d]=",h);
// printf("%-8.3f",Q_h[h]);
}
for (j=0;j<Nout;j++) {
sum=0.0;
for (h=0;h<Nhid;h++)
sum=sum+H[h]*W_hy[h][j];
Y[j]=(float)1.0/(1.0+exp(-(sum-Q_y[j])));
//Y[j]=(float)(exp(sum-Q_y[j])-exp(-(sum-Q_y[j])))/(exp(sum-Q_y[j])+exp(-(sum-Q _y[j])));
// printf("Y[%d]=",j);
// printf("%-8.5f",Y[j]);
}
/*--- compute the mean_square_error ---*/
mse=0.0;
for (j=0;j<Nout;j++)
mse+=(T[j]-Y[j])*(T[j]-Y[j]);
/*--- Write the results to recall_file ---*/
printf("T[j]= ");
fprintf(fp3,"T[j]= ");
for (j=0;j<Nout;j++) { printf("%-8.2f",T[j]);
fprintf(fp3,"%-8.2f",T[j]);
}
printf("Y[j]= ");
fprintf(fp3,"Y[j]= ");
for (j=0;j<Nout;j++) { printf("%-8.5f",Y[j]);
fprintf(fp3,"%-8.5f",Y[j]);
}
printf(" mse= %-8.4f\n\n",mse);
fprintf(fp3," mse= %-8.4f",mse);
fprintf(fp3,"\n");
} /*--- end of recalling for total test examples ---*/
fclose(fp1);
fclose(fp2);
fclose(fp3);
getchar();
return 0;
}
[KF programming]
#! NX/KF 5.0
DefClass: recursion (ug_base_part);
(Number)Ncycle:1;
(Number)Ntrain:45;
(Number)Ninp:5;
(Number)Nhid:14;
(Number)Nout:1;
(Number)eta:0.5;
(Number)alpha:0.9;
(Number)etao:0.7;
(Number)alphao:0.1;
(String) train_file:"D:\NXKF\BB\Proposal\Assembly_Sequencing\train_file.xls";
(String)weight_file:"D:\NXKF\BB\Proposal\Assembly_Sequencing\weight_file.xls";
(String)mse_file:"D:\NXKF\BB\Proposal\Assembly_Sequencing\mse_file.xls";
(List)fp1:@
{
$openExcel << ug_excel_open_file(train_file:, read);
$Bracket << Loop{
For $i from 1 to 6;
For $Read is ug_excel_read_range($openExcel, 1, 1, $i, Ntrain:,$i);
For $sublist is subList($Read,6,Length($Read)+5);
For $MakeNumber is loop{
For $j from 1 to length($sublist);
For $Make is MakeNumber(nth($j,$sublist));
collect $Make;
};
Collect $MakeNumber;
};
$closeFile << ug_excel_close_file($openExcel, True);
$Bracket;
};
(List)ReadTextList:@
{
$openExcel << ug_excel_open_file(weight_file:, read);
$Read << ug_excel_read_range($openExcel, 1, 1, 1, -1,1);
$sublist << subList($Read,6,Length($Read)+5);
$MakeNumber << loop{
For $j from 1 to length($sublist);
For $Make is MakeNumber(nth($j,$sublist));
collect $Make;
};
$closeFile << ug_excel_close_file($openExcel, True);
$MakeNumber ; };
##################################################################
(Method Boolean) writeMethod:(integer $fileOpen, integer $WorkID, integer $rowStar, integer
$colStar, integer $rowEnd, integer $colEnd, List $data)
@{
$write << ug_excel_write_range($fileOpen, {$WorkID, $rowStar, $colStar, $rowEnd, $colEnd} +
$data);
};
(String)dW_hy_file:"D:\NXKF\BB\Proposal\Assembly_Sequencing\dW_hy.xls";
(Uncached any)ALearning:loop{
For $Icycle from 1 to Ncycle:;
Do StoreValue(0, self:, mse);
For $Trainingtimes is ATrainingtimes:+1;
Do StoreValue($Trainingtimes, self:, ATrainingtimes);
For $Icycle2 is loop{
For $Itrain from 1 to Ntrain:;
For $HideNet is loop{
For $h from 1 to Nhid:;
For $Sum is 0;
For $HideNet1 is loop{
For $i from 1 to Ninp:;
For $Sum is $Sum+nth($Itrain,nth($i,X:))*nth($i,nth($h,W_xhList:));
Sum $sum;
};
For $HideNet2 is 1.0/(1.0+exp(-($HideNet1-nth($h,Q_hList:))));
Do StoreValue($HideNet2, self:,nth($h,HStoreList:));
append{$HideNet2};
};
For $YoutputNet is loop{
For $j from 1 to Nout:;
For $Sum is 0;
For $YoutputNet1 is loop{
For $h from 1 to Nhid:;
For $Sum is $Sum+nth($h,H:)*nth($j,nth($h,W_hyList:));
Sum $sum;
};
For $YoutputNet2 is 1.0/(1.0+exp(-($YoutputNet1-Q_y_1:)));
Do StoreValue($YoutputNet2, self:,nth($j,YStoreList:));
append{$YoutputNet2};
};
For $delta_y is loop{
For $j from 1 to Nout:;
For $delta_y1 is nth($j,Y:)*(1.0-nth($j,Y:))*(nth($Itrain,X6:)-nth($j,Y:));
Do StoreValue($delta_y1, self:,nth($j,delta_yStoreList:));
append{$delta_y1};
};
For $delta_h is loop{
For $h from 1 to Nhid:;
For $Sum is 0;
For $delta_h2 is loop{
For $j from 1 to Nout:;
For $Sum is $Sum+nth($j,nth($h,W_hyList:))*nth($j,delta_y:);
Sum $sum;
};
For $delta_h3 is nth($h,H:)*(1.0-nth($h,H:))*$delta_h2;
Do StoreValue($delta_h3, self:,nth($h,delta_hStoreList:));
append{$delta_h3};
};
For $dw_hy is loop{
For $j from 1 to Nout:;
For $dw_hy2 is loop{
For $h from 1 to Nhid:;
For $dw_hy3 is etao:*nth($j,delta_y:)*nth($h,H:)+alphao:*nth($j,nth($h,dw_hyList:));
Do StoreValue($dw_hy3, self:,(nth($j,nth($h,dw_hyStoreList:))));
append{$dw_hy3};
};
append{$dw_hy2};
};
For $dQ_y is loop{
For $j from 1 to Nout:;
For $dw_hy2 is -etao:*nth($j,delta_y:)+alphao:*dQ_y_1:;
Do StoreValue($dw_hy2, self:, dQ_y_1);
append{$dw_hy2};
};
For $dw_xh is loop{
For $h from 1 to Nhid:;
For $dw_xh2 is loop{
For $i from 1 to Ninp:;
For $dw_xh3 is eta:*nth($h,delta_h:)*nth($Itrain,nth($i,X:))+alpha:*nth($i,nth($h,dw_xhList:));
Do StoreValue($dw_xh3, self:,(nth($i,nth($h,dw_xhStoreList:))));
append{$dw_xh3};
};
append{$dw_xh2};
};
For $dQ_h is loop{
For $h from 1 to Nhid:;
For $dQ_h2 is -eta:*nth($h,delta_h:)+alpha:*nth($h,dQ_hList:);
Do StoreValue($dQ_h2, self:,nth($h,dQ_hStoreList:));
append{$dQ_h2};
};
For $W_hy is loop{
For $j from 1 to Nout:;
For $W_hy2 is loop{
For $h from 1 to Nhid:;
For $W_hy3 is nth($j,nth($h,W_hyList:))+nth($j,nth($h,$dw_hy));
Do StoreValue($W_hy3, self:,(nth($j,nth($h,W_hyStoreList:))));
append{$W_hy3};
};
append{$W_hy2};
};
For $Q_y is loop{
For $j from 1 to Nout:;
For $Q_y2 is Q_y_1:+nth($j,$dQ_y);
Do StoreValue($Q_y2, self:, Q_y_1);
append{$Q_y2};
};
For $W_xh is loop{
For $h from 1 to Nhid:;
For $W_xh2 is loop{
For $i from 1 to Ninp:;
For $W_xh3 is nth($i,nth($h,W_xhList:))+nth($i,nth($h,$dw_xh));
Do StoreValue($W_xh3, self:,(nth($i,nth($h,W_xhStoreList:))));
append{$W_xh3};
};
append{$W_xh2};
};
For $Q_h is loop{
For $h from 1 to Nhid:;
For $Q_h2 is nth($h,Q_hList:)+nth($h,$dQ_h);
Do StoreValue($Q_h2, self:,nth($h,Q_hStoreList:));
append{$Q_h2};
};
For $mse is loop{
For $h from 1 to Nout:;
For $mse1 is mse:+(nth($Itrain,X6:)-nth($h,Y:))*(nth($Itrain,X6:)-nth($h,Y:));
Do StoreValue($mse1, self:, mse);
append {$mse1};
};
For $IcycleNumber is $Icycle;
collect{$IcycleNumber};
};
For $openExcel is ug_excel_open_file(weight_file:, Write);