"We summarize the functional analytic approach in section 2, mainly to give those familiar with the tradition a guide for recognizing what happens in the rest of the paper."

Solving Ill-Conditioned and Singular Linear Systems: A Tutorial on Regularization.

SIAM Review 40, no. 3 (1998): 636–666

Cited by 640

Abstract

It is shown that the basic regularization procedures for finding meaningful approximate solutions of ill-conditioned or singular linear systems can be phrased and analyzed in terms of classical linear algebra that can be taught in any numerical analysis course. Apart from rewriting many known results in a more elementary form, we also der…

Introduction
  • In many applications of linear algebra, the need arises to find a good approximation x̂ to a vector x ∈ Rn satisfying an approximate equation Ax ≈ y with ill-conditioned or singular A ∈ Rm×n, given y ∈ Rm.

    Usually, y is the result of measurements contaminated by small errors.
  • Section 11 extends the stochastic approach to the situation where the smoothness condition x = Sw is replaced by the condition that some vector Jx, usually composed of suitably weighted finite differences of function values, is reasonably bounded.
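The effect described above can be sketched numerically. The following is a minimal illustration (not the paper's notation): for an ill-conditioned A, a direct solve amplifies even tiny measurement errors, whereas the simplest Tikhonov-type regularized solution x̂ = (AᵀA + t²I)⁻¹Aᵀy stays close to the true vector. The Hilbert matrix, the noise level, and the value of t are illustrative choices.

```python
import numpy as np

n = 12
# Hilbert matrix: a classic example of an extremely ill-conditioned A
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

rng = np.random.default_rng(0)
x_true = np.ones(n)
y = A @ x_true + 1e-6 * rng.standard_normal(n)  # measurements with small errors

# direct solve: the small errors in y are amplified enormously
x_naive = np.linalg.solve(A, y)

# Tikhonov-regularized solve (illustrative regularization parameter t)
t = 1e-6
x_reg = np.linalg.solve(A.T @ A + t**2 * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)  # huge
err_reg = np.linalg.norm(x_reg - x_true)      # modest
```

The regularized system AᵀA + t²I is well conditioned even when A is not, which is the elementary linear-algebra viewpoint the paper develops.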
Highlights
  • In many applications of linear algebra, the need arises to find a good approximation x̂ to a vector x ∈ Rn satisfying an approximate equation Ax ≈ y with ill-conditioned or singular A ∈ Rm×n, given y ∈ Rm.

    Usually, y is the result of measurements contaminated by small errors
  • The importance of the problem can be seen from a glance at the following probably incomplete list of applications: numerical differentiation of noisy data, nonparametric smoothing of curves and surfaces defined by scattered data, multivariate approximation by radial basis functions, training of neural networks, image reconstruction, deconvolution of sequences and images (Wiener filtering), shape from shading, computer-assisted tomography (CAT, PET), indirect measurements and nondestructive testing, inverse scattering, seismic analysis, parameter identification in dynamical systems, analytic continuation, inverse Laplace transforms, calculation of relaxation spectra, air pollution source detection, solution of partial differential equations with nonstandard data, and so on
  • The construction of an approximation satisfying this error bound depends on the knowledge of p and δ or another constant involving δ such as δ/ω; by a theorem of Bakushinskii [1], any technique for choosing regularization parameters in the absence of information about the error level can be defeated by suitably constructed counterexamples whenever the pseudoinverse of T is unbounded
  • By a theorem of Bakushinskii [1], any technique for choosing regularization parameters in the absence of information about the error level can be defeated by suitably constructed counterexamples, and the techniques in use all fail on a small proportion of problems in simulations where the right-hand side is perturbed by random noise
  • If r > 0 or s > 0, we find the (r, s)-generalized cross validation (GCV) merit function frs(t) = log γrs(t) − (r + s) log βrs(t), with γrs(t) and βrs(t) defined in (58)–(59). (In an important special case discussed in section 11 below, the global minimizer of the (0,1)-GCV merit function is the familiar GCV estimate for the regularization parameter.)
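The special case mentioned in the last bullet is ordinary GCV. As a hedged sketch (assuming the standard GCV criterion, not the paper's exact (r, s) family): choose t minimizing GCV(t) = m‖y − Ax̂(t)‖² / (trace(I − H(t)))², where H(t) = A(AᵀA + tI)⁻¹Aᵀ, computed cheaply from the SVD of A. The test problem and grid search are illustrative.

```python
import numpy as np

def gcv(t, U, s, y):
    """Ordinary GCV merit function, evaluated via the SVD A = U diag(s) V^T."""
    f = s**2 / (s**2 + t)            # Tikhonov filter factors
    beta = U.T @ y
    m = len(y)
    # ||y - A x_t||^2 split into fitted and out-of-range parts
    resid2 = np.sum(((1 - f) * beta)**2) + (y @ y - beta @ beta)
    return m * resid2 / (m - np.sum(f))**2

rng = np.random.default_rng(1)
A = np.array([[1.0 / (i + j + 1) for j in range(8)] for i in range(20)])
x_true = np.ones(8)
y = A @ x_true + 1e-4 * rng.standard_normal(20)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
ts = np.logspace(-14, 0, 200)                      # grid of candidate t
t_best = ts[np.argmin([gcv(t, U, s, y) for t in ts])]
x_gcv = Vt.T @ ((s / (s**2 + t_best)) * (U.T @ y))  # regularized solution
```

One SVD suffices for all values of t, so the grid search over the merit function is cheap.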
Results
  • For a well-posed data fitting problem, i.e., one with a well-conditioned normal equation matrix A∗A, the least squares estimate has an error of the order of ∆.
  • By a theorem of Bakushinskii [1], any technique for choosing regularization parameters in the absence of information about the error level can be defeated by suitably constructed counterexamples, and the techniques in use all fail on a small proportion of problems in simulations where the right-hand side is perturbed by random noise.
  • (In an important special case discussed in section 11 below, the global minimizer of the (0,1)-GCV merit function is the familiar GCV estimate for the regularization parameter.)
  • J is usually a matrix of suitably weighted first- or second-order differences that apply to some part of x.
  • If V = I, the authors may use the stochastic setting and obtain from Theorem 8.1 the optimal estimator x̂ = Cy = (SS∗A∗A + ∆²I)⁻¹SS∗A∗y = (A∗A + ∆²J∗J)⁻¹A∗y, and this formula agrees with (66).
  • For large-scale problems, (66) can be solved using one Cholesky factorization for each value of t, and the authors show below how these factorizations can be used to find an appropriate regularization parameter.
  • Once a good regularization parameter t is determined, the solution x̂ of the least squares problem (66) is found by completing (85) with a back substitution, solving a triangular linear system.
  • If one formulates each such constraint as the condition that some linear expression Jνx is assumed to be well scaled and not too large, one may again take account of these constraints as penalty terms in the least squares problem.
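The Cholesky-plus-substitution route in the bullets above can be sketched as follows (a minimal illustration, not the paper's equations (66)/(85); the second-difference J, the smoothing setup with A = I, and the value of t are illustrative assumptions): minimize ‖Ax − y‖² + t‖Jx‖² by forming Bt = AᵀA + tJᵀJ, computing one Cholesky factorization Bt = LLᵀ per value of t, then a forward and a back substitution.

```python
import numpy as np

n = 50
A = np.eye(n)                         # smoothing/denoising: observe x directly
rng = np.random.default_rng(2)
grid = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * grid)
y = x_true + 0.1 * rng.standard_normal(n)

# J: second-order finite differences, penalizing roughness of x
J = np.diff(np.eye(n), 2, axis=0)

def solve_penalized(t):
    """Solve min ||A x - y||^2 + t ||J x||^2 via one Cholesky factorization."""
    B = A.T @ A + t * J.T @ J         # one factorization per value of t
    L = np.linalg.cholesky(B)         # B = L L^T
    z = np.linalg.solve(L, A.T @ y)   # forward substitution
    return np.linalg.solve(L.T, z)    # back substitution

x_smooth = solve_penalized(10.0)      # illustrative regularization parameter
```

Because Bt is symmetric positive definite for t > 0, the Cholesky factorization always exists, and re-solving for a new t reuses only the factorization step.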
Conclusion
  • One may assume that the tν are proportional to some known constants, thereby reducing the problem to one with a single regularization parameter.
  • The GML criterion generalizes in a natural way; (84)–(86) remain valid, but Lt is a Cholesky factor of Bt, and the vector t of regularization parameters may be found by a multivariate minimization of f00(t).
Tables
  • Table1: Failure rates in percent
  • Table2: Number of first, second, and third places
References
  • A. B. BAKUSHINSKII, Remarks on choosing a regularization parameter using the quasioptimality and ratio criterion, USSR Comput. Math. Math. Phys., 24 (1984), pp. 181–182.
  • O. E. BARNDORFF-NIELSEN AND D. R. COX, Inference and Asymptotics, Chapman and Hall, London, 1994.
  • M. BERTERO, C. DE MOL, AND G. A. VIANO, The stability of inverse problems, in Inverse Scattering in Optics, H. P. Baltes, ed., Springer-Verlag, New York, 1980, pp. 161–214.
  • Å. BJÖRCK, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
  • Ill-Posed Problems, H. W. Engl and C. W. Groetsch, eds., Academic Press, Boston, 1987, pp. 165–175.
  • P. CRAVEN AND G. WAHBA, Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation, Numer. Math., 31 (1979), pp. 377–403.
  • L. ELDÉN, Algorithms for the regularization of ill-conditioned least squares problems, BIT, 17 (1977), pp. 134–145.
  • H. W. ENGL, Regularization methods for the stable solution of inverse problems, Surveys Math. Indust., 3 (1993), pp. 71–143.
  • H. W. ENGL, M. HANKE, AND A. NEUBAUER, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.
  • P. E. GILL, W. MURRAY, AND M. H. WRIGHT, Practical Optimization, Academic Press, London, 1981.
  • G. H. GOLUB AND C. F. VAN LOAN, Matrix Computations, Johns Hopkins University Press, Baltimore, MD, 1989.
  • C. W. GROETSCH, Generalized Inverses of Linear Operators, Marcel Dekker, New York, 1977.
  • C. GU AND G. WAHBA, Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method, SIAM J. Sci. Stat. Comput., 12 (1991), pp. 383–398.
  • M. HANKE, Conjugate Gradient Type Methods for Ill-Posed Problems, Pitman Res. Notes Math. Ser., Longman, Harlow, UK, 1995.
  • M. HANKE AND P. C. HANSEN, Regularization methods for large-scale problems, Surveys Math. Indust., 3 (1993), pp. 253–315.
  • M. HANKE AND T. RAUS, A general heuristic for choosing the regularization parameter in ill-posed problems, SIAM J. Sci. Comput., 17 (1996), pp. 956–972.
  • P. C. HANSEN, Analysis of discrete ill-posed problems by means of the L-curve, SIAM Rev., 34 (1992), pp. 561–580.
  • P. C. HANSEN, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion, SIAM, Philadelphia, PA, 1997.
  • A. K. KATSAGGELOS, Digital Image Restoration, Springer-Verlag, Berlin, 1991.
  • L. KAUFMAN AND A. NEUMAIER, PET regularization by envelope guided conjugate gradients, IEEE Trans. Medical Imag., 15 (1996), pp. 385–389.
  • L. KAUFMAN AND A. NEUMAIER, Regularization of ill-posed problems by envelope guided conjugate gradients, J. Comput. Graph. Stat., 6 (1997), pp. 451–463.
  • C. F. VAN LOAN, Generalizing the singular value decomposition, SIAM J. Numer. Anal., 13 (1976), pp. 76–83.
  • C. F. VAN LOAN, Computing the CS and generalized singular value decomposition, Numer. Math., 46 (1985), pp. 479–492.
  • K. MILLER, Least squares methods for ill-posed problems with a prescribed bound, SIAM J. Math. Anal., 1 (1970), pp. 52–74.
  • F. NATTERER, Error bounds for Tikhonov regularization in Hilbert scales, Appl. Anal., 18 (1984), pp. 29–37.
  • A. S. NEMIROVSKI, The regularization properties of the adjoint method in ill-posed problems, USSR Comput. Math. Math. Phys., 26 (1986), pp. 7–16.
  • C. C. PAIGE, Computing the generalized singular value decomposition, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 1126–1146.
  • D. L. PHILLIPS, A technique for the numerical solution of certain integral equations of the first kind, J. Assoc. Comput. Mach., 9 (1962), pp. 84–97.
  • J. D. RILEY, Solving systems of linear equations with a positive definite symmetric but possibly ill-conditioned matrix, Math. Tables Aids Comput., 9 (1956), pp. 96–101.
  • G. W. STEWART, A method for computing the generalized singular value decomposition, in Matrix Pencils, B. Kågström and A. Ruhe, eds., Springer-Verlag, New York, 1983, pp. 207–220.
  • A. N. TIKHONOV, Solution of incorrectly formulated problems and the regularization method, Soviet Math. Dokl., 4 (1963), pp. 1035–1038.
  • C. R. VOGEL AND M. E. OMAN, Iterative methods for total variation denoising, SIAM J. Sci. Comput., 17 (1996), pp. 227–238.
  • G. WAHBA, A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem, Ann. Statist., 13 (1985), pp. 1378–1402.
  • G. WAHBA, Spline Models for Observational Data, SIAM, Philadelphia, PA, 1990.
  • A. H. WELSH, On M-processes and M-estimation, Ann. Statist., 17 (1990), pp. 337–361. (Correction, Ann. Statist., 18 (1990), p. 1500.)
  • N. WIENER, Cybernetics, MIT Press, Cambridge, MA, 1948.
  • G. M. WING AND J. D. ZAHRT, A Primer on Integral Equations of the First Kind, SIAM, Philadelphia, PA, 1991.
  • E. T. WHITTAKER, On a new method of graduation, Proc. Edinburgh Math. Soc., 41 (1923), pp. 63–75.