Sequential Optimization & Probabilistic Analysis Using Adaptively Refined Constraints in LS-OPT®

16th International LS-DYNA® Users Conference, June 10-11, 2020

Abstract
This paper presents some of the sequential optimization and probabilistic analysis methods in LS-OPT, with particular emphasis on the use of classifiers for improving accuracy and efficiency. Classifiers were first introduced in LS-OPT 6.0 for the handling of constraints. This paper provides a review of the basic classification-based constraint handling method and its applications and advantages for specific types of problems. Additionally, the application of classifiers is extended to adaptive sampling using EDSD (explicit design space decomposition) sampling constraints in LS-OPT 6.1. The different adaptive sampling options and approaches are presented through examples. Another aspect of this paper is the extension of the probabilistic analysis method in LS-OPT from single iteration to sequential. The sequential analysis can be performed with or without EDSD sampling constraints, but sampling constraints, if used, can guide the samples adaptively to important regions. Although the EDSD sampling constraints are defined using support vector machine (SVM) classifiers, the adaptive samples are useful in enhancing the constraint boundary accuracy even if it is defined using metamodels.

Overview of LS-OPT Optimization and Probabilistic Analysis Methods

The optimization and probabilistic methodologies or tasks in LS-OPT [1] are broadly divided into direct and metamodel-based methods. Direct methods are in general robust and can solve all types of complex problems if sufficient simulations are performed. Unfortunately, the associated computational cost can often be prohibitive. Metamodels, on the contrary, attempt to build computationally inexpensive surrogates for simulation models using only a few samples [2]. The focus of this paper is on such surrogate-based methods. However, the fidelity of the inexpensive surrogates is determined by the number and quality of the samples, as well as by the complexity of the problem at hand. Therefore, special sampling strategies are sometimes needed, some of which have been part of LS-OPT for quite some time [1]. There is, however, scope for improving these strategies, as this is still an evolving research area. This section provides an overview of the pre-existing metamodel-based methods in LS-OPT, before diving into some of the potential enhancements in the following sections.

Fig.1: Response and feasibility prediction using a metamodel. The predicted response can be used as an objective function or as a constraint for optimization or probabilistic analysis.

Metamodel-based Optimization

There are four metamodel-based optimization strategies in LS-OPT – single iteration, sequential, sequential with domain reduction, and efficient global optimization (EGO) [1]. The single iteration method requires an a priori specified total number of samples to be evaluated; the response values at these samples are fitted to a metamodel. Once the metamodel is trained, a core optimizer is used to solve the approximated optimization problem. The sequential approach adds samples in batches over several iterations; the single iteration optimization is repeated to gradually improve the approximation accuracy and converge to a solution. Both these approaches are rather naïve in their sampling strategy; there is no consideration of the nature of the objective and constraint functions while selecting the samples for evaluation.
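To make the single iteration and sequential strategies concrete, the sketch below shows a minimal sequential metamodel-based optimization loop in Python. It illustrates the general idea only, not LS-OPT's implementation: the simulate() function, the RBF surrogate, the batch sizes and the variable bounds are hypothetical placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Hypothetical expensive "simulation" standing in for an LS-DYNA response.
def simulate(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2 + 0.1 * np.sin(5.0 * x[0])

rng = np.random.default_rng(0)
lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# Initial batch of samples (a simple random design here; LS-OPT would use a
# space-filling design) and their responses.
X = rng.uniform(lb, ub, size=(10, 2))
y = np.array([simulate(x) for x in X])

for it in range(5):                                   # sequential iterations
    # Fit a metamodel to all samples evaluated so far (the small smoothing
    # term guards against nearly duplicate points).
    surrogate = RBFInterpolator(X, y, smoothing=1e-9)

    # A core optimizer solves the approximated problem, multi-started to
    # reduce the risk of getting trapped in a local minimum of the surrogate.
    best = None
    for x0 in rng.uniform(lb, ub, size=(5, 2)):
        res = minimize(lambda v: surrogate(v[None, :])[0], x0,
                       bounds=list(zip(lb, ub)))
        if best is None or res.fun < best.fun:
            best = res

    # Add a new batch: the predicted optimum plus a few exploratory points,
    # then repeat so the approximation gradually improves near the solution.
    X_new = np.vstack([best.x, rng.uniform(lb, ub, size=(4, 2))])
    y_new = np.array([simulate(x) for x in X_new])
    X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
    print(f"iteration {it}: predicted optimum {best.x}, f = {simulate(best.x):.4f}")
```

The loop mirrors the description above: each iteration refits the surrogate on all available samples and repeats the approximate optimization, without otherwise accounting for the nature of the objective or constraints when choosing new samples.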
The domain reduction method [3] uses a more intelligent and informed strategy, which adaptively adjusts the region of interest based on the current approximations. The EGO method [4] is an adaptive approach when applied as a serial sampling method, but is relatively naïve in multiple parallel sampling based on its current LS-OPT implementation. Planned improvements to the parallelization strategy should however mitigate this limitation.

Fig.2: Metamodel-based optimization strategies in LS-OPT.

One thing to note is that design problems are often constrained, and therefore the accuracy of the constraint boundary is also important in addition to that of the objective function [5-8]. For certain types of problems, a classification-based approach can be useful for defining and refining the constraints [9-14].

Metamodel-based Probabilistic Analysis

The probabilistic analysis methods in LS-OPT include single iteration Monte Carlo analysis for failure probability calculation, multi-level framework-based reliability and tolerance analysis, and DynaStats – a tool for spatial-temporal stochastic analysis of LS-DYNA® models [1]. While DynaStats is a very interesting tool for result visualization and the multi-level framework gives a great deal of flexibility for reliability calculation [15], the latter also adds some complexity to the problem setup.

LS-OPT Constraint Handling and Probabilistic Analysis Enhancements

This section presents the contributions of this work, which can be grouped into two categories – allowing sequential sampling and convergence study for probabilistic analysis, and developing a classifier-based constraint handling and adaptive sampling method.

Sequential Metamodel-based Probabilistic Analysis (Version 6.1)

Being limited to single iteration analysis, the single-level reliability capabilities of LS-OPT have been fairly basic for a while. As part of this work, a sequential Monte Carlo strategy has been added to the next LS-OPT version 6.1 in order to alleviate this limitation. The sequential approach facilitates incremental sample addition (Fig 3) and a convergence study of the failure probability (Fig 4). However, the methodology of sample addition can have a great impact on the sampling quality and the failure probability estimate. A classifier-based method has been developed in LS-OPT 6.1 to adaptively guide samples to important regions and improve the failure boundary estimate.

Fig.3: Sequential sampling (left) and Monte Carlo analysis using the predicted failure boundary (right). The numbers indicate the iteration.

Fig.4: LS-OPT GUI for sequential probabilistic analysis (left) and an example of failure probability convergence (right).

Classifier-based Constraint Boundary (Version 6.0) and its Applications

The basic idea of the classifiers implemented in LS-OPT 6.0 [1] is introduced in this section along with its applications. One of the applications is to use them for adaptive sampling [11-13], which is the focus of this paper. A separate section is dedicated later to the classifier-based sampling constraints implemented in version 6.1. In both design optimization and reliability assessment, one of the main tasks is the demarcation between acceptable (feasible/safe) and unacceptable (infeasible/failed) designs. Classification methods use the pass/fail information at a few specified samples to train an optimal boundary that separates the two categories.
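As a hedged illustration of this idea, and of the Monte Carlo analysis on a predicted boundary shown in Fig 3, the sketch below trains a support vector classifier on pass/fail labels only and then estimates a failure probability by Monte Carlo sampling of the classifier's predictions. The limit state passes(), the input distribution and the SVM settings are invented for illustration; the actual LS-OPT algorithm may differ.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical pass/fail check: in practice this would be a simulation whose
# outcome is only known as acceptable (+1) or failed (-1).
def passes(x):
    return x[0] ** 2 + x[1] ** 2 < 1.5

# Training samples carrying only class labels, no response values.
X_train = rng.uniform(-2.0, 2.0, size=(60, 2))
labels = np.where([passes(x) for x in X_train], 1, -1)

# SVM classifier approximating the boundary between safe and failed designs.
clf = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X_train, labels)

# Monte Carlo on the predicted boundary: draw samples from the assumed input
# distribution and count those classified as failed.
X_mc = rng.normal(loc=0.0, scale=0.8, size=(100_000, 2))
p_fail = np.mean(clf.predict(X_mc) == -1)
print(f"Estimated failure probability: {p_fail:.4f}")
```

In a sequential setting, further training samples would be added near the predicted boundary and the failure probability estimate monitored over iterations for convergence, as in Fig 4.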
The difference between metamodel-based and classification-based methods for determining the acceptability of any general design alternative is shown in Fig 5. The classification-based method takes a decision directly based on the position of the new sample in the design space, whereas a metamodel takes the decision based on the corresponding predicted response value and a threshold. Some of the applications of classifiers are listed below:

• Pass/fail information is readily available even for binary responses, e.g. failed simulations or responses that cannot be quantified, making classification the method of choice for such applications.
• A classifier considers the feasibility information during training itself, because of which the emphasis is on accuracy near the decision boundary. This is particularly useful in cases where noise and response discontinuities hamper metamodel construction.
• A single classifier can be constructed for multiple failure modes related to different simulations, thereby providing an opportunity to skip many simulations or to terminate simulations early without any loss of useful data.
• A classifier can also be used to define the sampling domain, which is the main topic of this paper.

Fig.5: Summary of basic classification method (bottom) and comparison to metamodeling (top).

Support Vector Machine Classification

Support Vector Machine (SVM) [16] is a machine learning technique that can be used for both classification and regression. The basic idea of SVM classification, in the context of linear binary separators, is to maximize the margin between two hyperplanes (lines in a two-dimensional space) that are parallel and equidistant on either side of the separating hyperplane. The separating boundary demarcating the samples belonging to the two classes, typically labelled +1 and -1, is referred to as the SVM decision boundary, and the two parallel hyperplanes are known as the support hyperplanes. The SVM decision boundary is constructed such that there is no sample belonging to either class in the margin between the support hyperplanes. The SVM value is equal to zero at the decision boundary and +1 and -1 at the two support hyperplanes. The same idea is extended to nonlinear decision boundaries using a kernel function. In such cases the decision boundary and the support boundaries are linear in a higher dimensional feature space, but they are nonlinear in the original variable space or input space. The SVM values at the decision boundary and the two support boundaries are still 0, +1 and -1. The general SVM boundary for the nonlinear case is obtained as s(x) = 0, where s(x) is given in Eq. (1):

$s(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b$   (1)

Here, $y_i = \pm 1$ (e.g. red vs. green) is the class label, $\alpha_i$ is the Lagrange multiplier for the ith sample, and $b$ is the bias. The kernel $K$ maps the design space to the feature space (the high-dimensional space consisting of basis functions).
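The short sketch below evaluates Eq. (1) directly for a Gaussian (RBF) kernel, K(x_i, x) = exp(-γ‖x_i − x‖²), and checks it against a library SVM's decision function. The synthetic data and the kernel parameter are illustrative assumptions; the point is only the structure of s(x).

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-class data (labels +1 / -1), purely for illustration.
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5, 1, -1)

gamma = 2.0
clf = SVC(kernel="rbf", C=50.0, gamma=gamma).fit(X, y)

def s(x):
    """Eq. (1): s(x) = sum_i alpha_i * y_i * K(x_i, x) + b."""
    sv = clf.support_vectors_           # the samples x_i with nonzero alpha_i
    coef = clf.dual_coef_[0]            # alpha_i * y_i for those samples
    b = clf.intercept_[0]               # the bias b
    K = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))   # Gaussian kernel values
    return coef @ K + b

x_query = np.array([0.3, -0.2])
print(s(x_query))                            # SVM value; its sign gives the class
print(clf.decision_function([x_query])[0])   # agrees with the line above
```

The sign of s(x) classifies a new design, and |s(x)| = 1 corresponds to the support hyperplanes described above.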