
最佳化 Optimization 地點: M310 數學館 (Location: M310, Mathematics Building)

TMS Annual Meeting
2018 數學年會 (2018 TMS Annual Meeting)

Dec. 8 / 09:30-21:00
Dec. 9 / 09:30-15:50

演講摘要 Speech Abstracts

Dec. 8
11:20-12:05   Yongdo Lim      Strong Convexity of Sandwiched Entropies and Related Optimization Problems
13:30-13:55   Shu-Chin Huang  KKM theorems in Hadamard manifolds
14:00-14:25   Cheng-Feng Hu   Network data envelopment analysis with common weights
14:30-14:55   Wei-Shih Du     New generalizations of Ekeland's variational principle and well-known fixed point theorems with applications to nonconvex optimization problems
15:20-15:45

Dec. 9
10:20-11:05
11:10-11:35
11:40-12:05
13:30-13:55
14:00-14:25
14:30-14:55

Strong Convexity of Sandwiched Entropies and Related Optimization Problems

Yongdo Lim

Department of Mathematics Sungkyunkwan University

E-mail: ylim@skku.edu

We present several theorems on strict and strong convexity for sandwiched quasi-relative entropy (a parametrised version of the classical fidelity). These are crucial for establishing global linear convergence of the gradient projection algorithm for optimization problems for these functions. The case of the classical fidelity is of special interest for the multimarginal optimal transport problem (the n-coupling problem) for Gaussian measures. This is joint work with Rajendra Bhatia and Tanvi Jain.
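For concreteness, the classical fidelity of two positive semidefinite matrices, F(A, B) = tr (A^{1/2} B A^{1/2})^{1/2}, is easy to evaluate numerically. The sketch below is our own illustration, not code from the talk; the eigendecomposition-based matrix square root is an implementation choice.

```python
import numpy as np

def psd_sqrt(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def fidelity(A, B):
    """Classical fidelity F(A, B) = tr (A^{1/2} B A^{1/2})^{1/2}."""
    rA = psd_sqrt(A)
    return float(np.trace(psd_sqrt(rA @ B @ rA)))

A = np.diag([1.0, 2.0])
B = np.diag([4.0, 1.0])
print(fidelity(A, B))  # commuting case: reduces to tr (AB)^{1/2} = 2 + sqrt(2)
```

When A and B commute, the fidelity reduces to tr (AB)^{1/2}, which the diagonal example above checks.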


KKM theorems in Hadamard manifolds

Shue-Chin Huang

Department of Applied Mathematics National Dong Hwa University E-mail: shuang@gms.ndhu.edu.tw

The purpose of this talk is to present a fixed point theorem for generalized KKM mappings in the Hadamard manifold setting. We derive the finite intersection property of this class of mappings. As an application of this property, we also discuss existence conditions for the generalized equilibrium problem. This research is supported by grant MOST 106-2115-M-259-005 from the Ministry of Science and Technology of Taiwan.


Network data envelopment analysis with common weights

Cheng-Feng Hu

Department of Applied Mathematics National Chiayi University E-mail: cfhu@mail.ncyu.edu.tw

Common weight models can combat the computational burden of data envelopment analysis (DEA) in the big data environment. This work studies a common-weights general network DEA model which is applicable to most network systems, except those with feedback and cycles. It shows that the general network DEA model with common weights can be reduced to an auxiliary fuzzy bi-objective mathematical programming problem by applying the basic compromise principle of TOPSIS. The case of Taiwanese non-life insurance companies is used for illustration and comparison. Our results show that the proposed common-weights network DEA model not only compares DMUs on a common basis, but also produces reliable results in measuring efficiency.
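The key point of the common-weights idea (as opposed to classical DEA, where each DMU picks its own most favorable weights) is that all DMUs are scored with one shared weight vector, so the scores are directly comparable. A toy numpy sketch; the data and the fixed weight vectors are invented for illustration and this is not the paper's fuzzy bi-objective model:

```python
import numpy as np

X = np.array([[2.0, 3.0],   # inputs, one row per DMU
              [4.0, 1.0],
              [3.0, 2.0]])
Y = np.array([[5.0],        # outputs, one row per DMU
              [4.0],
              [6.0]])
w_in = np.array([1.0, 1.0])   # common input weights (assumed, not optimized)
w_out = np.array([1.0])       # common output weights (assumed, not optimized)

# Every DMU is scored with the SAME weights: weighted output / weighted input.
scores = (Y @ w_out) / (X @ w_in)
scores /= scores.max()        # normalize so the best DMU scores 1
print(scores.round(3))
```

In the paper the common weights are themselves chosen by solving the TOPSIS-based bi-objective program; here they are simply fixed to show why the resulting efficiencies are comparable across DMUs.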


New generalizations of Ekeland’s variational principle and well-known fixed point theorems with applications to nonconvex optimization problems

Wei-Shih Du

Department of Mathematics National Kaohsiung Normal University

Email: wsdu@mail.nknu.edu.tw

In this talk, we establish new generalizations of Ekeland’s variational principle, Caristi’s fixed point theorem, Takahashi’s nonconvex minimization theorem and nonconvex maximal element theorem for uniformly below sequentially lower semicontinuous from above functions and essential distances. New simultaneous generalizations of fixed point theorems of Mizoguchi-Takahashi type, Nadler type, Banach type, Kannan type, Chatterjea type and others are also presented.

As applications, we concentrate on studying nonconvex optimization and minimax theorems in metric spaces.
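For reference, the classical principle being generalized (Ekeland, 1974) can be stated as follows; this is the textbook form, not the generalized version of the talk:

```latex
% Classical Ekeland variational principle (Ekeland, 1974).
% Setting: (X,d) a complete metric space; f : X -> (-infty, +infty]
% proper, lower semicontinuous, and bounded below.
Let $\varepsilon>0$ and let $u\in X$ satisfy $f(u)\le\inf_X f+\varepsilon$.
Then for every $\lambda>0$ there exists $v\in X$ with
\[
  f(v)\le f(u), \qquad d(u,v)\le\lambda, \qquad
  f(w) > f(v) - \frac{\varepsilon}{\lambda}\, d(v,w)
  \quad \text{for all } w\in X,\ w\ne v .
\]
```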

Keywords: Nonconvex optimization, minimax theorem, Ekeland’s variational principle, Caristi’s (common) fixed point theorem, Takahashi’s nonconvex minimization theorem, nonconvex maximal element theorem, MT-function (or R-function), MT(λ)-function, uniformly below sequentially lower semicontinuous from above, essential distance, Mizoguchi-Takahashi’s fixed point theorem, Nadler’s fixed point theorem, Banach contraction principle, Kannan’s fixed point theorem, Chatterjea’s fixed point theorem.


Deep Learning for Region of Interest Based Clustering of White Matter Fibers

Feng-Sheng Tsai

Department of Biomedical Imaging and Radiological Science China Medical University

E-mail: fstsai@mail.cmu.edu.tw

To cluster white matter fibers in whole-brain tractography, anatomical regions of interest (ROIs) are selected manually in brain diffusion MRI. Those ROIs are used to isolate tracts and cluster fiber bundles accordingly. Deep learning approaches may be applied to voxel-based ROI segmentation directly; however, the number of voxels in ROIs is far smaller than the number of voxels in whole-brain images, so training always suffers from the class imbalance problem when extracting ROI-related voxels. Here we propose a hierarchical sampling technique to resolve the class imbalance problem of deep learning. ROI segmentation with deep learning is divided into hierarchical sub-tasks, from 2-dimensional objective-plane explorations to restricted, bounded hot-zone locations, and then to voxel-based discrimination. The sampling datasets in all sub-tasks are better balanced for training. Specifically, two ROIs for clustering the arcuate fasciculus in whole-brain tractography are presented.
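The class-imbalance issue described above is commonly handled by rebalancing the training index set. A generic undersampling sketch (labels and ratios are invented; this is not the authors' hierarchical pipeline, which instead balances the data within each sub-task):

```python
import numpy as np

# Simulate a severely imbalanced voxel labeling: ~0.1% ROI voxels.
rng = np.random.default_rng(0)
labels = (rng.random(100000) < 0.001).astype(int)

pos = np.flatnonzero(labels == 1)
neg = np.flatnonzero(labels == 0)
# Undersample the majority class to match the minority class size.
neg_sample = rng.choice(neg, size=len(pos), replace=False)
idx = np.concatenate([pos, neg_sample])   # balanced training index set

print(labels[idx].mean())  # exactly 0.5 after balancing
```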


A block symmetric Gauss-Seidel decomposition theorem and its applications in big data nonsmooth optimization

De-Feng Sun

Department of Applied Mathematics The Hong Kong Polytechnic University

E-mail: defeng.sun@polyu.edu.hk

The Gauss-Seidel method is a classical iterative method for solving the linear system Ax = b. It has long been known to be convergent when A is symmetric positive definite. In this talk, we shall focus on introducing a symmetric version of the Gauss-Seidel method and its elegant extensions in solving big data nonsmooth optimization problems. For a symmetric positive semidefinite linear system Ax = b with x = (x1, . . . , xs) partitioned into s blocks, we show that each cycle of the block symmetric Gauss-Seidel (block sGS) method exactly solves the associated quadratic programming (QP) problem, but with an extra proximal term added. By leveraging this connection to optimization, one can extend the classical convergence result, called the block sGS decomposition theorem, to solve a convex composite QP (CCQP) with an additional nonsmooth term in x1. Consequently, one is able to use the sGS method to solve a CCQP. In addition, the extended block sGS method has the flexibility of allowing for inexact computation in each step of the block sGS cycle. At the same time, one can also accelerate the inexact block sGS method to achieve an iteration complexity of O(1/k²) after performing k block sGS cycles.

As a fundamental building block, the block sGS decomposition theorem has played a key role in various recently developed algorithms such as the proximal ALM/ADMM for linearly constrained multi-block convex composite conic programming (CCCP) and the accelerated block coordinate descent method for multi-block CCCP.
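The classical (unblocked) symmetric Gauss-Seidel cycle that the theorem generalizes is easy to state in code. This toy sketch is our own, with scalar "blocks" and an invented test system; one cycle is a forward sweep followed by a backward sweep:

```python
import numpy as np

def sym_gauss_seidel(A, b, x0, cycles=200):
    """Symmetric Gauss-Seidel for Ax = b with A symmetric positive definite:
    each cycle is one forward sweep followed by one backward sweep."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(cycles):
        for i in range(n):                # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):      # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)               # symmetric positive definite test system
b = rng.standard_normal(4)
x = sym_gauss_seidel(A, b, np.zeros(4))
print(np.allclose(A @ x, b))  # → True
```

In the block sGS setting of the talk, each scalar update above becomes the (possibly inexact) solution of a block subproblem, which is where the extra proximal term appears.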


Strong duality in minimizing a quadratic form subject to two homogeneous quadratic inequalities over the unit sphere

Ruey-Lin Sheu

Department of Mathematics National Cheng Kung University E-mail: rsheu@mail.ncku.edu.tw

This problem, called (P), is contrasted with a simpler version which also minimizes a quadratic form but has just one homogeneous quadratic constraint over the unit sphere. The inclusion of an additional homogeneous quadratic constraint can cause (P) to have a positive duality gap, although the simpler version has been proved to enjoy strong duality under Slater’s condition.

On the surface, the underlying problem (P) appears to be different from the CDT (Celis-Dennis-Tapia) problem. Their SDP relaxations, however, share a very similar format. This minute observation turns out to be valuable in deriving a necessary and sufficient condition for (P) to admit strong duality. We will see that, in the sense of strong duality results, problem (P) is a generalization of the CDT problem. Many nontrivial examples are constructed in the paper to help understand the mechanism. Finally, as strong duality in quadratic optimization is closely related to the S-lemma, we derive a new extension of the S-lemma with three homogeneous quadratic inequalities over the unit sphere, with and without the Slater condition.

Keywords: Quadratically constrained quadratic programming, CDT prob-lem, S-lemma, Slater condition, Joint numerical range


Phase retrieval algorithms with random masks

Peng-Wen Chen

Department of Applied Mathematics National Chung Hsing University

E-mail: pengwen@nchu.edu.tw

Phase retrieval aims to recover an unknown vector from its magnitude measurements, e.g., in coherent diffractive imaging, where phase information is missing. The recovery of phase information can be formulated as a minimization problem subject to a nonconvex high-dimensional torus set. In theory, uniqueness of solutions can be obtained under random masks. The introduction of random masks breaks the symmetry of Fourier matrices and creates a spectral gap for the local convergence of many phase retrieval algorithms, including alternating projection methods and Fourier Douglas-Rachford algorithms. The spectral gap is related to the local convergence rate.

On the other hand, these algorithms can still fail to reach the global solution. To alleviate stagnation at local solutions, we propose a null vector method as an initialization method for phase retrieval algorithms. The method is motivated by the following observation: Gaussian random vectors in high-dimensional space are always nearly orthogonal to each other. According to the magnitude data, we can construct a submatrix assembled from the sensing vectors nearly orthogonal to the unknown vector. One candidate for the initialization vector is the singular vector of this submatrix corresponding to the least singular value. Thanks to isometric Fourier matrices, this vector coincides with the dominant singular vector of the complement submatrix. Empirical studies (non-ptychography and ptychography) indicate its remarkable closeness to the unknown vector compared with other existing methods. In this talk, we present a nonasymptotic error bound in the case of random complex Gaussian matrices, which sheds some light on its superior performance in the Fourier coherent diffractive case with random masks.
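The null vector initialization sketched above amounts to a single SVD. A toy real-Gaussian version (the dimensions, the seed, and keeping the half of the rows with the smallest magnitudes are our choices; the talk treats Fourier measurements with random masks):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 400
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))   # rows = real Gaussian sensing vectors
b = np.abs(A @ x_true)            # magnitude-only measurements

# Rows with the smallest magnitudes are nearly orthogonal to x_true,
# so x_true nearly lies in the "null" direction of the submatrix they form.
S = np.argsort(b)[: m // 2]
_, _, Vt = np.linalg.svd(A[S], full_matrices=False)
x0 = Vt[-1]                       # right singular vector for the least singular value

corr = abs(x0 @ x_true)           # recovery is only up to a global sign
print(corr)                       # close to 1 when measurements are plentiful
```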

Keywords: random masks, phase retrieval, null vector method.


The solvabilities of SOCEiCP and SOCQEiCP

Wei-Ming Hsu

Department of Mathematics National Taiwan Normal University

In this paper, we study the solvability of two optimization problems associated with second-order cones: the eigenvalue complementarity problem associated with the second-order cone (SOCEiCP), and the quadratic eigenvalue complementarity problem associated with the second-order cone (SOCQEiCP). We first rewrite SOCEiCP as an instance of the second-order cone complementarity problem (SOCCP), and then rewrite SOCQEiCP as an instance of SOCCP as well. Furthermore, we study some algorithms for solving SOCEiCP and SOCQEiCP.

Keywords: Solvability, eigenvalue, second-order cone.

References

[1] A. Seeger, Eigenvalue analysis of equilibrium processes defined by linear complementarity conditions, Linear Algebra and its Applications, vol. 292, pp. 1-14, 1999.

[2] A. Seeger, Quadratic eigenvalue problems under conic constraints, SIAM Journal on Matrix Analysis and Applications, vol. 32, no.3, pp. 700-721, 2011.

[3] M. Queiroz, J. Júdice, C. Humes, The symmetric eigenvalue complementarity problem, Mathematics of Computation, vol. 73, no. 248, pp. 1849-1863, 2003.

[4] S. Adly, H. Rammal, A new method for solving second-order cone eigenvalue complementarity problems, Journal of Optimization Theory and Applications, vol. 165, issue 1, pp. 563-585, 2015.

[5] C. Brás, M. Fukushima, A. Iusem, J. Júdice, On the quadratic eigenvalue complementarity problem over a general convex cone, Applied Mathematics and Computation, vol. 271, pp. 391-403, 2015.

[6] C. Brás, A. Iusem, J. Júdice, On the quadratic eigenvalue complementarity problem, Journal of Global Optimization, vol. 66, issue 2, pp. 153-171, 2016.


[7] L. Fernandes, M. Fukushima, J. Júdice, H. Sherali, The second-order cone eigenvalue complementarity problem, Optimization Methods and Software, vol. 31, issue 1, pp. 24-52, 2016.

[8] J. Tao, M. Gowda, Some P-properties for nonlinear transformations on Euclidean Jordan algebras, Mathematics of Operations Research, vol. 30, no. 4, pp. 985-1004, 2005.

[9] S.-H. Pan, S. Kum, Y. Lim, J.-S. Chen, On the generalized Fischer-Burmeister merit function for the second-order cone complementarity problem, Mathematics of Computation, vol. 83, no. 287, pp. 1143-1171, 2014.

[10] J. Wu, J.-S. Chen, A proximal point algorithm for the monotone second-order cone complementarity problem, Computational Optimization and Applications, vol. 51, no. 3, pp. 1037-1063, 2012.

[11] J.-S. Chen, S.-H. Pan, A survey on SOC complementarity functions and solution methods for SOCPs and SOCCPs, Pacific Journal of Optimization, vol. 8, no. 1, pp. 33-74, 2012.

[12] S.-H. Pan, J.-S. Chen, A least-square semismooth Newton method for the second-order cone complementarity problem, Optimization Methods and Software, vol. 26, no. 1, pp. 1-22, 2011.

[13] S.-H. Pan, J.-S. Chen, A semismooth Newton method for SOCCPs based on a one-parametric class of complementarity functions, Computational Optimization and Applications, vol. 45, no. 1, pp. 59-88, 2010.

[14] S.-H. Pan, J.-S. Chen, A linearly convergent derivative-free descent method for the second-order cone complementarity problem, Optimization, vol. 59, no. 8, pp. 1173-1197, 2010.

[15] J.-S. Chen, S.-H. Pan, A one-parametric class of merit functions for the second-order cone complementarity problem, Computational Optimization and Applications, vol. 45, no. 3, pp. 581-606, 2010.

[16] S.-H. Pan, J.-S. Chen, A damped Gauss-Newton method for the second-order cone complementarity problem, Applied Mathematics and Optimization, vol. 59, no. 3, pp. 293-318, 2009.

[17] S.-H. Pan, J.-S. Chen, A regularization method for the second-order cone complementarity problems with the Cartesian P0-property, Nonlinear Analysis: Theory, Methods and Applications, vol. 70, no. 4, pp. 1475-1491, 2009.

[18] J.-S. Chen, S.-H. Pan, A descent method for solving a reformulation of the second-order cone complementarity problem, Journal of Computational and Applied Mathematics, vol. 213, no. 2, pp. 547-558, 2008.


[19] J.-S. Chen, Conditions for error bounds and bounded level sets of some merit functions for SOCCP, Journal of Optimization Theory and Applications, vol. 135, no. 3, pp. 459-473, 2007.

[20] J.-S. Chen, Two classes of merit functions for the second-order cone complementarity problem, Mathematical Methods of Operations Research, vol. 64, no. 3, pp. 495-519, 2006.

[21] J.-S. Chen, A new merit function and its related properties for the second-order cone complementarity problem, Pacific Journal of Optimization, vol. 2, no. 1, pp. 167-179, 2006.

[22] J.-S. Chen, P. Tseng, An unconstrained smooth minimization reformulation of the second-order cone complementarity problem, Mathematical Programming, vol. 104, no. 2-3, pp. 293-327, 2005.


Penalty and barrier methods for second-order cone programming

Nguyen Thanh Chieu Department of Mathematics National Taiwan Normal University

E-mail: thanhchieu90@gmail.com

In this talk we will present penalty and barrier methods for solving the convex second-order cone program

min f(x)
s.t. Ax − b ≼_{K^n} 0,

where A is an n × m matrix with n ≥ m and rank A = m, f : ℜ^m → (−∞, +∞] is a closed proper convex function, and K^n is the second-order cone (SOC for short) in ℜ^n given by

K^n := { (x1, x2) ∈ ℜ × ℜ^{n−1} : ∥x2∥ ≤ x1 },

where ∥·∥ denotes the Euclidean norm.
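As a concrete building block, the Euclidean projection onto K^n, which such penalty and barrier schemes frequently rely on, has a well-known closed form. This is a generic sketch, not code from the talk:

```python
import numpy as np

def proj_soc(z):
    """Euclidean projection onto the second-order cone
    K^n = {(z1, z2) : ||z2|| <= z1}, via the standard closed-form formula."""
    z1, z2 = z[0], z[1:]
    nz2 = np.linalg.norm(z2)
    if nz2 <= z1:            # already in the cone: unchanged
        return z.copy()
    if nz2 <= -z1:           # in the polar cone: project to the origin
        return np.zeros_like(z)
    alpha = (z1 + nz2) / 2   # otherwise: project onto the cone's boundary
    out = np.empty_like(z)
    out[0] = alpha
    out[1:] = alpha * z2 / nz2
    return out

print(proj_soc(np.array([1.0, 0.5, 0.0])))   # interior point, unchanged
print(proj_soc(np.array([-2.0, 1.0, 0.0])))  # polar cone, maps to 0
```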

This class of methods extends the penalty and barrier methods for convex optimization presented by A. Auslender et al. in 1997. Under an implementable stopping rule, we show that the sequence generated by the proposed algorithm is bounded and that every accumulation point is a solution of the considered problem. Furthermore, we examine the effectiveness of the algorithm by means of numerical experiments.

Keywords: Second-order cone, penalty and barrier methods, asymptotic functions, recession functions, convex analysis, smoothing functions.


References

[1] A. Auslender, R. Cominetti and M. Haddou, Asymptotic analysis of penalty and barrier methods in convex and linear programming, Mathematics of Operations Research, 22, pp. 43-62, 1997.

[2] A. Auslender, Penalty and barrier methods: A unified framework, SIAM J. Optimization, 10, pp. 211-230, 1999.

[3] A. Auslender, Variational inequalities over the cone of semidefinite positive matrices and over the Lorentz cone, Optimization Methods and Software, pp. 359-376, 2003.

[4] A. Auslender and M. Teboulle, Asymptotic Cones and Functions in Optimization and Variational Inequalities, Springer Monographs in Mathematics, Springer, Berlin Heidelberg New York, 2003.

[5] A. Auslender and H. Ramírez C., Penalty and barrier methods for convex semidefinite programming, Mathematical Methods of Operations Research, 63, pp. 195-219, 2006.

[6] M. S. Bazaraa, H. D. Sherali and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, 3rd Edition, Wiley-Interscience, 2006.

[7] A. Ben-Tal and M. Teboulle, A smoothing technique for nondifferentiable optimization problems, in Optimization, Fifth French German Conference, Lecture Notes in Math. 1405, Springer-Verlag, New York, pp. 1-11, 1989.

[8] A. Ben-Tal and M. Zibulevsky, Penalty-barrier multiplier methods for convex programming problems, SIAM J. Optim., Vol. 7, No. 2, pp. 347-366, 1997.

[9] J. S. Chen, T. K. Liao and S. Pan, Using Schur complement theorem to prove convexity of some soc-functions, Journal of Nonlinear and Convex Analysis, Vol 13, No. 3, pp. 421-431, 2012.

[10] C. Chen and O. L. Mangasarian, A class of smoothing functions for nonlinear and mixed complementarity problems, Comput. Optim. Appl., 5, pp. 97-138, 1996.

[11] J. Faraut and A. Koranyi, Analysis on Symmetric Cones, Oxford Mathematical Monographs, Oxford University Press, New York, 1994.

[12] E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming, vol. 91, pp. 201-213, 2002.

[13] M. Fukushima, Z.-Q. Luo, and P. Tseng, Smoothing functions for second-order-cone complementarity problems, SIAM Journal on Optimization, vol. 12, pp. 436-460, 2002.


[14] L. Mosheyev and M. Zibulevsky, Penalty-barrier multiplier algorithm for semidefinite programming, Optimization Meth. Soft., Vol. 13, pp. 235-261, 2000.

[15] N. Parikh and S. Boyd, Proximal Algorithms, Foundations and Trends in Optimization, Vol. 1, No. 3, pp. 123-231, 2013.

[16] S. H. Pan and J. S. Chen, A proximal-like algorithm using quasi D-function for convex second-order cone programming, J. Optim. Theory Appl., 138, pp. 95-113, 2008.

[17] S. H. Pan and J. S. Chen, A class of interior proximal-like algorithms for convex second-order cone programming, SIAM J. Optim., Vol. 19, No. 2, pp. 883-910, 2008.

[18] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

[19] L. Zhang, J. Gu and X. Xiao, A class of nonlinear Lagrangians for nonconvex second order cone programming, Computational Optimization and Applications, Vol. 49, pp. 61-99, 2011.


Neural networks based on three classes of NCP-functions for solving nonlinear complementarity problems

Jan Harold M. Alcantara Department of Mathematics National Taiwan Normal University

E-mail: janharold27@yahoo.com

We consider a family of neural networks for solving nonlinear complementarity problems (NCP). The neural networks are based on the merit functions induced by three classes of NCP-functions: the generalized natural residual function and its two symmetrizations. We first provide a characterization of the stationary points of the induced merit functions. To describe the level sets of the merit functions, we prove some important properties related to the growth behavior of the complementarity functions. Furthermore, we analyze the stability of the steepest descent-based neural network model for NCP. To illustrate the theoretical results, we provide numerical simulations using our neural networks and compare them with similar neural networks in the literature that are based on other well-known NCP-functions. The numerical results suggest that the neural networks perform better when their common parameter p is smaller. We also found that one of the three families of neural networks we considered is capable of outperforming the other existing neural networks.

This is a joint work with Jein-Shan Chen.
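The talk's networks are built from generalized natural residual functions; as a generic stand-in, the same steepest-descent dynamics can be sketched with the classical Fischer-Burmeister function on a toy linear complementarity problem. The matrix M, vector q, step size, and iteration count below are all our choices, not the authors':

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister NCP-function: fb(a,b)=0 iff a>=0, b>=0, ab=0."""
    return np.sqrt(a**2 + b**2) - a - b

def merit_grad(x, M, q):
    """Gradient of Psi(x) = 0.5*||fb(x, Mx+q)||^2 for the LCP F(x) = Mx + q."""
    y = M @ x + q
    r = np.sqrt(x**2 + y**2) + 1e-12   # guard against division by zero
    phi = fb(x, y)
    da = x / r - 1.0                   # partial of fb w.r.t. its first argument
    db = y / r - 1.0                   # partial of fb w.r.t. its second argument
    return da * phi + M.T @ (db * phi)

# Steepest-descent "neural network" dx/dt = -grad Psi(x),
# integrated with forward Euler on a small monotone LCP.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x = np.array([5.0, -3.0])
for _ in range(40000):
    x = x - 5e-3 * merit_grad(x, M, q)

print(x)  # approaches the LCP solution (1/3, 1/3), where x >= 0, Mx+q >= 0
```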

Keywords: NCP-function, neural network, natural residual function, stability.

References

[1] Y.-L. Chang, J.-S. Chen, C.-Y. Yang, Symmetrization of generalized natural residual function for NCP, Operations Research Letters, 43(2015), 354-358.

[2] J.-S. Chen, C.-H. Ko, and S.-H. Pan, A neural network based on generalized Fischer-Burmeister function for nonlinear complementarity problems, Information Sciences, 180(2010), 697-711.


[3] J.-S. Chen, C.-H. Ko, and X.-R. Wu, What is the generalization of natural residual function for NCP, Pacific Journal of Optimization, 12(2016), 19-27.

[4] J.-S. Chen and S.-H. Pan (2008), A family of NCP functions and a descent method for the nonlinear complementarity problem, Computational Optimization and Applications, vol. 40, 389-404.

[5] R.W. Cottle, J.-S. Pang and R.E. Stone, The Linear Complementarity Problem, Academic Press, New York, 1992.

[6] C. Dang, Y. Leung, X. Gao, and K. Chen (2004), Neural networks for nonlinear and mixed complementarity problems and their applications, Neural Networks, vol. 17, 271-283.

[7] M. C. Ferris, O. L. Mangasarian, and J.-S. Pang, editors, Complementarity: Applications, Algorithms and Extensions, Kluwer Academic Publishers, Dordrecht, 2001.

[8] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Volumes I and II, Springer-Verlag, New York, 2003.

[9] F. Facchinei and J. Soares (1997), A new merit function for nonlinear complementarity problems and a related algorithm, SIAM Journal on Optimization, vol. 7, 225-247.

[10] C. Geiger and C. Kanzow (1996), On the resolution of monotone complementarity problems, Computational Optimization and Applications, vol. 5, 155-173.

[11] J. J. Hopfield and D. W. Tank (1985), Neural computation of decision in optimization problems, Biological Cybernetics, vol. 52, 141-152.

[12] X. Hu and J. Wang (2006), Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network, IEEE Transactions on Neural Networks, vol. 17, 1487-1499.

[13] X. Hu and J. Wang (2007), A recurrent neural network for solving a class of general variational inequalities, IEEE Transactions on Systems, Man, and Cybernetics-B, vol. 37, 528–539.

[14] C.-H. Huang, K.-J. Weng, J.-S. Chen, H.-W. Chu and M.-Y. Li (2017), On four discrete-type families of NCP functions, to appear in Journal of Nonlinear and Convex Analysis, 2017.

[15] C. Kanzow and H. Kleinmichel (1995), A class of Newton-type methods for equality and inequality constrained optimization, Optimization Methods and Software, vol. 5, pp. 173-198.


[16] C. Kanzow (1996), Nonlinear complementarity as unconstrained optimization, Journal of Optimization Theory and Applications, vol. 88, 139-155.

[17] M. P. Kennedy and L. O. Chua (1988), Neural networks for nonlinear programming, IEEE Transactions on Circuits and Systems, vol. 35, 554-562.

[18] M. Kojima and S. Shindo (1986), Extensions of Newton and quasi-Newton methods to systems of PC^1 equations, Journal of the Operations Research Society of Japan, vol. 29, 352-374.

[19] J. P. LaSalle (1968) Stability Theory for Ordinary Differential Equations, Journal of Differential Equations, vol. 4, 57-65.

[20] L.-Z. Liao, H. Qi, and L. Qi (2001), Solving nonlinear complementarity problems with neural networks: a reformulation method approach, Journal of Computational and Applied Mathematics, vol. 131, 342-359.

[21] R. K. Miller and A. N. Michel (1982), Ordinary Differential Equations, Academic Press.

[22] S-K. Oh, W. Pedrycz, and S-B. Roh (2006), Genetically optimized fuzzy polynomial neural networks with fuzzy set-based polynomial neurons, Information Sciences, vol. 176, 3490-3519.

[23] L. Qi and J. Sun (1993), A nonsmooth version of Newton's method, Mathematical Programming, vol. 58, 353-368.

[24] A. Shortt, J. Keating, L. Monlinier, and C. Pannell (2005), Optical implementation of the Kak neural network, Information Sciences, vol. 171, 273-287.

[25] D. W. Tank and J. J. Hopfield (1986), Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit, IEEE Transactions on Circuits and Systems, vol. 33, 533-541.

[26] S. Wiggins (2003), Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, New York, Inc.

[27] Y. Xia, H. Leung, and J. Wang (2002), A projection neural network and its application to constrained optimization problems, IEEE Transactions on Circuits and Systems-I, vol. 49, 447-458.

[28] Y. Xia, H. Leung, and J. Wang (2004), A general projection neural network for solving monotone variational inequalities and related optimization problems, IEEE Transactions on Neural Networks, vol. 15, 318-328.

[29] Y. Xia, H. Leung, and J. Wang (2005), A recurrent neural network for solving nonlinear convex programs subject to linear constraints, IEEE Transactions on Neural Networks, vol. 16, 379-386.


[30] M. Yashtini and A. Malek (2007), Solving complementarity and variational inequalities problems using neural networks, Applied Mathematics and Computation, vol. 190, 216-230.

[31] S. H. Zak, V. Upatising, and S. Hui (1995), Solving linear programming problems with neural networks: a comparative study, IEEE Transactions on Neural Networks, vol. 6, 94-104.

[32] G. Zhang (2007), A neural network ensemble method with jittered training data for time series forecasting, Information Sciences, vol. 177, 5329-5340.


統計 Statistics 地點: M211 數學館 (Location: M211, Mathematics Building)

Dec. 8
11:20-12:05   Yi-Ching Yao    A model bias problem arising from image analysis in cryogenic electron microscopy
13:30-14:15   Wei-Ching Wang  Statistical models for evaluating different university admission channels
14:20-14:45   Chun-Shu Chen   Measuring stabilization in model selection
              Shih-Hao Huang  Optimal designs for binary response models with multiple nonnegative variables
