
We summarize the functional analytic approach in section 2, mainly to give those familiar with the tradition a guide for recognizing what happens in the rest of the paper.

# Solving Ill-Conditioned and Singular Linear Systems: A Tutorial on Regularization

SIAM Review 40, no. 3 (1998): 636–666


It is shown that the basic regularization procedures for finding meaningful approximate solutions of ill-conditioned or singular linear systems can be phrased and analyzed in terms of classical linear algebra that can be taught in any numerical analysis course. Apart from rewriting many known results in a more elementary form, we also der...

• In many applications of linear algebra, the need arises to find a good approximation x̂ to a vector x ∈ Rn satisfying an approximate equation Ax ≈ y with ill-conditioned or singular A ∈ Rm×n, given y ∈ Rm.

Usually, y is the result of measurements contaminated by small errors.
• Section 11 extends the stochastic approach to the situation where the smoothness condition x = Sw is replaced by the condition that some vector Jx, usually composed of suitably weighted finite differences of function values, is reasonably bounded.
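The noise amplification behind this problem is easy to demonstrate numerically. The sketch below uses the classic Hilbert-matrix test problem with a hand-picked Tikhonov penalty; the setup and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# A notoriously ill-conditioned test matrix (8 x 8 Hilbert matrix) with
# a known exact solution.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
y = A @ x_true

# Contaminate the right-hand side with a small measurement error.
rng = np.random.default_rng(0)
y_noisy = y + 1e-8 * rng.standard_normal(n)

# Naive solve: the tiny data error is amplified by roughly cond(A) ~ 1e10.
x_naive = np.linalg.solve(A, y_noisy)

# Tikhonov-regularized solve (penalty parameter chosen by hand here).
x_reg = np.linalg.solve(A.T @ A + 1e-10 * np.eye(n), A.T @ y_noisy)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

Even though the perturbation of y is of size 1e-8, the naive solution is typically far from x_true, while the regularized solution stays much closer.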

• The importance of the problem can be seen from a glance at the following probably incomplete list of applications: numerical differentiation of noisy data, nonparametric smoothing of curves and surfaces defined by scattered data, multivariate approximation by radial basis functions, training of neural networks, image reconstruction, deconvolution of sequences and images (Wiener filtering), shape from shading, computer-assisted tomography (CAT, PET), indirect measurements and nondestructive testing, inverse scattering, seismic analysis, parameter identification in dynamical systems, analytic continuation, inverse Laplace transforms, calculation of relaxation spectra, air pollution source detection, solution of partial differential equations with nonstandard data, and so on.
• The construction of an approximation satisfying this error bound depends on the knowledge of p and δ or another constant involving δ such as δ/ω; by a theorem of Bakushinskii [1], any technique for choosing regularization parameters in the absence of information about the error level can be defeated by suitably constructed counterexamples whenever the pseudoinverse of T is unbounded.
• Moreover, the techniques in use all fail on a small proportion of problems in simulations where the right-hand side is perturbed by random noise.
• If r > 0 or s > 0, we find the (r, s)-generalized cross validation (GCV) merit function f_rs(t) = log γ_rs(t) − (r + s) log β_rs(t) (equations (58) and (59)), where β_rs(t) and γ_rs(t) are sums over the generalized singular values of terms of the form λ_k^r μ_k^s c_k² / (λ_k + t μ_k). In an important special case discussed in section 11 below, the global minimizer of the (0, 1)-GCV merit function is the familiar GCV estimate for the regularization parameter.
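For orientation, the familiar GCV estimate mentioned above can be computed for standard-form Tikhonov regularization (penalty t·|x|², i.e., J = I) from an SVD. This is the textbook GCV function, not the paper's (r, s)-family; the function name and test data are made up:

```python
import numpy as np

def gcv_score(A, y, ts):
    """Standard GCV score m*|y - A x_t|^2 / trace(I - influence matrix)^2
    for Tikhonov x_t = argmin |Ax - y|^2 + t*|x|^2, evaluated on a grid of t."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    c = U.T @ y                        # components of y along left singular vectors
    extra = float(y @ y - c @ c)       # part of y orthogonal to range(A)
    m = A.shape[0]
    out = []
    for t in ts:
        f = s**2 / (s**2 + t)          # Tikhonov filter factors
        resid2 = np.sum(((1.0 - f) * c) ** 2) + extra
        out.append(m * resid2 / (m - np.sum(f)) ** 2)
    return np.array(out)

# Hypothetical test problem with rapidly decaying singular values and
# small noise; the regularization parameter is picked on a log grid.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 12)) @ np.diag(2.0 ** -np.arange(12))
y = A @ np.ones(12) + 1e-5 * rng.standard_normal(40)
ts = 10.0 ** np.linspace(-14, 2, 65)
t_gcv = ts[np.argmin(gcv_score(A, y, ts))]
```

In practice the minimizer of this curve serves as the GCV choice of t; the merit-function framework in the paper generalizes this recipe.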

• For a well-posed data fitting problem, i.e., one with a well-conditioned normal equation matrix A∗A, the least squares estimate has an error of the order of ∆.
• J is usually a matrix of suitably weighted first- or second-order differences applied to some part of x.
• If V = I, one may use the stochastic setting and obtain from Theorem 8.1 the optimal estimator x̂ = Cy = (SS∗A∗A + ∆²I)⁻¹SS∗A∗y = (A∗A + ∆²J∗J)⁻¹A∗y, and this formula agrees with (66).
• For large-scale problems, (66) can be solved using one Cholesky factorization for each value of t, and it is shown below how these factorizations can be used to find an appropriate regularization parameter.
• Once a good regularization parameter t is determined, the solution x̂ of the least squares problem (66) is found by completing (85) with a back substitution that solves the resulting triangular linear system.
• If one formulates each such constraint as the condition that some linear expression J_ν x is well scaled and not too large, one may again take account of these constraints as penalty terms in the least squares problem.
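The strategy in the bullets above (form the normal equations (A∗A + t J∗J)x̂ = A∗y, perform one Cholesky factorization per value of t, then finish with a back substitution) can be written out explicitly. This is a minimal sketch with explicit forward/back substitution loops for clarity; the function name and test data are assumptions:

```python
import numpy as np

def regularized_solve(A, y, J, t):
    """Solve min_x |Ax - y|^2 + t*|Jx|^2 via the normal equations
    (A^T A + t J^T J) x = A^T y, using one Cholesky factorization."""
    B = A.T @ A + t * (J.T @ J)
    L = np.linalg.cholesky(B)            # B = L L^T, L lower triangular
    b = A.T @ y
    n = b.size
    z = np.empty(n)
    for i in range(n):                   # forward substitution: L z = b
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):       # back substitution: L^T x = z
        x[i] = (z[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x

# Usage: a weighted second-difference penalty on a small random problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
y = rng.standard_normal(20)
J = np.diff(np.eye(8), n=2, axis=0)      # second-difference matrix (6 x 8)
x_hat = regularized_solve(A, y, J, 1e-2)
```

Only the factorization depends on t, so scanning many candidate values of t amounts to one Cholesky factorization each, as the bullets describe.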

• One may assume that the t_ν are proportional to some known constants, thereby reducing the problem to one with a single regularization parameter.
• The GML criterion generalizes in a natural way; (84)–(86) remain valid, but L_t is a Cholesky factor of B_t, and the vector t of regularization parameters may be found by a multivariate minimization of f_00(t).
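With several penalty terms Σ_ν t_ν |J_ν x|², the penalized least squares problem can be posed by stacking the weighted penalty rows √t_ν J_ν under A, and fixing the ratios of the t_ν reduces everything to a single scale parameter, as the bullets note. A minimal numpy sketch (function name and test data are made up):

```python
import numpy as np

def multi_penalty_solve(A, y, Js, ts):
    """min_x |Ax - y|^2 + sum_v t_v * |J_v x|^2, solved by stacking the
    weighted penalty blocks sqrt(t_v) J_v under A and using least squares."""
    K = np.vstack([A] + [np.sqrt(t) * J for J, t in zip(Js, ts)])
    rhs = np.concatenate([y] + [np.zeros(J.shape[0]) for J in Js])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x

# Two penalties: first differences plus a ridge term. Fixing their ratio
# (here 10:1) leaves a single overall regularization parameter to tune.
rng = np.random.default_rng(2)
A = rng.standard_normal((15, 6))
y = rng.standard_normal(15)
Js = [np.diff(np.eye(6), axis=0), np.eye(6)]
ratios = np.array([1.0, 0.1])
x_hat = multi_penalty_solve(A, y, Js, 1e-2 * ratios)
```

The stacked formulation avoids forming the normal equations explicitly; for large problems one would instead reuse the Cholesky-based approach with B_t = A∗A + Σ_ν t_ν J_ν∗J_ν.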


• Table 1: Failure rates in percent
• Table 2: Number of first, second, and third places

• A. B. BAKUSHINSKII, Remarks on choosing a regularization parameter using the quasioptimality and ratio criterion, USSR Comput. Math. Math. Phys., 24 (1984), pp. 181–182.
• O. E. BARNDORFF-NIELSEN AND D. R. COX, Inference and Asymptotics, Chapman and Hall, London, 1994.
• M. BERTERO, C. DE MOL, AND G. A. VIANO, The stability of inverse problems, in Inverse Scattering in Optics, H. P. Baltes, ed., Springer-Verlag, New York, 1980, pp. 161–214.
• Å. BJÖRCK, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
• Inverse and Ill-Posed Problems, H. W. Engl and C. W. Groetsch, eds., Academic Press, Boston, 1987, pp. 165–175.
• P. CRAVEN AND G. WAHBA, Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation, Numer. Math., 31 (1979), pp. 377–403.
• L. ELDÉN, Algorithms for the regularization of ill-conditioned least squares problems, BIT, 17 (1977), pp. 134–145.
• H. W. ENGL, Regularization methods for the stable solution of inverse problems, Surveys Math. Indust., 3 (1993), pp. 71–143.
• H. W. ENGL, M. HANKE, AND A. NEUBAUER, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.
• P. E. GILL, W. MURRAY, AND M. H. WRIGHT, Practical Optimization, Academic Press, London, 1981.
• G. H. GOLUB AND C. F. VAN LOAN, Matrix Computations, Johns Hopkins University Press, Baltimore, MD, 1989.
• C. W. GROETSCH, Generalized Inverses of Linear Operators, Marcel Dekker, New York, 1977.
• C. GU AND G. WAHBA, Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method, SIAM J. Sci. Stat. Comput., 12 (1991), pp. 383–398.
• M. HANKE, Conjugate Gradient Type Methods for Ill-Posed Problems, Pitman Res. Notes Math. Ser., Longman, Harlow, UK, 1995.
• M. HANKE AND P. C. HANSEN, Regularization methods for large-scale problems, Surveys Math. Indust., 3 (1993), pp. 253–315.
• M. HANKE AND T. RAUS, A general heuristic for choosing the regularization parameter in ill-posed problems, SIAM J. Sci. Comput., 17 (1996), pp. 956–972.
• P. C. HANSEN, Analysis of discrete ill-posed problems by means of the L-curve, SIAM Rev., 34 (1992), pp. 561–580.
• P. C. HANSEN, Rank-Deficient and Discrete Ill-Posed Problems. Numerical Aspects of Linear Inversion, SIAM, Philadelphia, PA, 1997.
• A. K. KATSAGGELOS, Digital Image Restoration, Springer-Verlag, Berlin, 1991.
• L. KAUFMAN AND A. NEUMAIER, PET regularization by envelope guided conjugate gradients, IEEE Trans. Medical Imag., 15 (1996), pp. 385–389.
• L. KAUFMAN AND A. NEUMAIER, Regularization of ill-posed problems by envelope guided conjugate gradients, J. Comput. Graph. Stat., 6 (1997), pp. 451–463.
• C. F. VAN LOAN, Generalizing the singular value decomposition, SIAM J. Numer. Anal., 13 (1976), pp. 76–83.
• C. F. VAN LOAN, Computing the CS and generalized singular value decomposition, Numer. Math., 46 (1985), pp. 479–492.
• K. MILLER, Least squares methods for ill-posed problems with a prescribed bound, SIAM J. Math. Anal., 1 (1970), pp. 52–74.
• F. NATTERER, Error bounds for Tikhonov regularization in Hilbert scales, Appl. Anal., 18 (1984), pp. 29–37.
• A. S. NEMIROVSKI, The regularization properties of the adjoint method in ill-posed problems, USSR Comput. Math. Math. Phys., 26 (1986), pp. 7–16.
• C. C. PAIGE, Computing the generalized singular value decomposition, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 1126–1146.
• D. L. PHILLIPS, A technique for the numerical solution of certain integral equations of the first kind, J. Assoc. Comput. Mach., 9 (1962), pp. 84–97.
• J. D. RILEY, Solving systems of linear equations with a positive definite symmetric but possibly ill-conditioned matrix, Math. Tables Aids Comput., 9 (1956), pp. 96–101.
• G. W. STEWART, A method for computing the generalized singular value decomposition, in Matrix Pencils, B. Kågström and A. Ruhe, eds., Springer-Verlag, New York, 1983, pp. 207–220.
• A. N. TIKHONOV, Solution of incorrectly formulated problems and the regularization method, Soviet Math. Dokl., 4 (1963), pp. 1035–1038.
• C. R. VOGEL AND M. E. OMAN, Iterative methods for total variation denoising, SIAM J. Sci. Comput., 17 (1996), pp. 227–238.
• G. WAHBA, A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem, Ann. Statist., 13 (1985), pp. 1378–1402.
• G. WAHBA, Spline Models for Observational Data, SIAM, Philadelphia, PA, 1990.
• A. H. WELSH, On M-processes and M-estimation, Ann. Statist., 17 (1989), pp. 337–361. (Correction, Ann. Statist., 18 (1990), p. 1500.)
• N. WIENER, Cybernetics, MIT Press, Cambridge, MA, 1948.
• G. M. WING AND J. D. ZAHRT, A Primer on Integral Equations of the First Kind, SIAM, Philadelphia, PA, 1991.
• E. T. WHITTAKER, On a new method of graduation, Proc. Edinburgh Math. Soc., 41 (1923), pp. 63–75.
