R3 on "W.a.t.m.i. statistical ideas of the past 50 years?" Andrew Gelman, Aki Vehtari (21)

Posted at 2021-10-07

R3 (References on References on References) on "W.a.t.m.i. (What are the most important) statistical ideas of the past 50 years?" Andrew Gelman, Aki Vehtari (21)

R3 (References on References on References) on "W.a.t.m.i. (What are the most important) statistical ideas of the past 50 years?" Andrew Gelman, Aki Vehtari (0)
https://qiita.com/kaizen_nagoya/items/a8eac9afbf16d2188901

What are the most important statistical ideas of the past 50 years?
Andrew Gelman, Aki Vehtari
https://arxiv.org/abs/2012.00174

References

21

Candès, E. J., Romberg, J., and Tao, T. (2006). Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52, 489–509.
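The Candès–Romberg–Tao result is the starting point of compressed sensing: an exactly sparse signal can often be recovered from far fewer linear measurements than unknowns by ℓ1 minimization (basis pursuit). A minimal sketch in Python via the standard linear-programming reformulation; the dimensions, random Gaussian measurement matrix, and use of `scipy.optimize.linprog` are illustrative assumptions for this demo, not code from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to A x = b, via the standard LP reformulation.

    Split x = u - v with u, v >= 0; at the optimum ||x||_1 = sum(u) + sum(v),
    so minimizing sum(u) + sum(v) under A(u - v) = b solves the l1 problem.
    """
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Demo: a 3-sparse vector in R^50, observed through 20 random measurements.
rng = np.random.default_rng(0)
n, m, k = 50, 20, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = basis_pursuit(A, b)
```

By LP optimality the recovered `x_hat` is feasible (`A x_hat = b`) and has ℓ1 norm no larger than that of `x_true`; for generic Gaussian measurements at these sizes it typically coincides with `x_true`.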

Reference on 21

21.1

[1] S. Boucheron, G. Lugosi, and P. Massart, A sharp concentration inequality with applications, Random Structures Algorithms 16 (2000), 277–292.
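As a quick numerical illustration of what such concentration inequalities assert, one can check Hoeffding's classical bound (a much simpler relative of the sharp inequality above) by simulation; the sample sizes and threshold below are arbitrary choices for this demo:

```python
import numpy as np

# Monte Carlo check of Hoeffding's inequality: for the mean of n i.i.d.
# variables with values in [0, 1],
#     P(|mean - E[mean]| >= t) <= 2 * exp(-2 * n * t**2).
rng = np.random.default_rng(1)
n, t, trials = 200, 0.1, 10_000
samples = rng.uniform(0.0, 1.0, size=(trials, n))   # each has mean 1/2
deviations = np.abs(samples.mean(axis=1) - 0.5)
empirical = float(np.mean(deviations >= t))         # observed tail frequency
bound = 2.0 * np.exp(-2.0 * n * t**2)               # Hoeffding's upper bound
print(f"empirical tail: {empirical:.4f}, Hoeffding bound: {bound:.4f}")
```

The empirical tail frequency stays below the bound; in fact at these sizes the bound is quite loose, which is exactly the gap sharper inequalities like the one above address.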

Reference on 21.1

### 21.1.1
New concentration inequalities in product spaces
M. Talagrand
Mathematics
1996
Abstract. We introduce three new ways to measure the “distance” from a point to a subset of a product space and we prove corresponding concentration inequalities. Each of them allows to control the…
### 21.1.2
A new look at independence
M. Talagrand
Mathematics
1996
The concentration of measure phenomenon in product spaces is a far-reaching abstract generalization of the classical exponential inequalities for sums of independent random variables. We attempt to…
### 21.1.3
On the Length of the Longest Monotone Subsequence in a Random Permutation
A. Frieze
Mathematics
1991
In this short note we prove a concentration result for the length L_n of the longest monotone increasing subsequence of a random permutation of the set, but less is known about the concentration of L…
### 21.1.4
On Talagrand's deviation inequalities for product measures
M. Ledoux
Mathematics
1997
We present a new and simple approach to some of the deviation inequalities for product measures deeply investigated by M. Talagrand in the recent years. Our method is based on functional inequalities…
### 21.1.5
About the constants in Talagrand's concentration inequalities for empirical processes
P. Massart
Mathematics
2000
We propose some explicit values for the constants involved in the exponential concentration inequalities for empirical processes which are due to Talagrand. It has been shown by Ledoux that deviation…
### 21.1.6
Bounding $\bar{d}$-distance by informational divergence: a method to prove measure concentration
K. Marton
Mathematics
1996
There is a simple inequality by Pinsker between variational distance and informational divergence of probability measures defined on arbitrary probability spaces. We shall consider probability…
### 21.1.7
A simple proof of the blowing-up lemma
K. Marton
Mathematics, Computer Science
IEEE Trans. Inf. Theory
1986
TL;DR: Here an information-theoretic proof of the blowing-up lemma, generalizing it to continuous alphabets, is given.
### 21.1.8
The Nature of Statistical Learning Theory
V. Vapnik
Computer Science, Mathematics
Statistics for Engineering and Information Science
2000
Setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing…
### 21.1.9
On Increasing Subsequences of I.I.D. Samples
J. Deuschel, O. Zeitouni
Mathematics
1999
We study the fluctuations, in the large deviations regime, of the longest increasing subsequence of a random i.i.d. sample on the unit square. In particular, our results yield the precise upper and…
### 21.1.10
A measure concentration inequality for contracting Markov chains
K. Marton
Mathematics
1996
The concentration of measure phenomenon in product spaces means the following: if a subset A of the n'th power of a probability space X does not have too small a probability then very large probability…

### 21.1.11
On the independence number of random graphs
A. Frieze
Computer Science, Mathematics
Discret. Math.
1990
TL;DR: It is shown that if ε > 0 is fixed, then with probability going to 1 as n → ∞, |α(G_{n,p}) − (2n/d)(log d − log log d − log 2 + 1)| ≤ εn/d provided d_ε ≤ d = o(n), where d_ε is some fixed constant.
### 21.1.12
Concentration of measure and isoperimetric inequalities in product spaces
M. Talagrand
Mathematics
1994
The concentration of measure phenomenon in product spaces roughly states that, if a set A in a product Ω^N of probability spaces has measure at least one half, “most” of the points of Ω^N are “close”…
### 21.1.13
Majorizing measures: the generic chaining
M. Talagrand
Mathematics
1996
Majorizing measures provide bounds for the supremum of stochastic processes. They represent the most general possible form of the chaining argument going back to Kolmogorov. Majorizing measures arose…
### 21.1.14
Nonnegative Entropy Measures of Multivariate Symmetric Correlations
T. Han
Computer Science, Mathematics
Inf. Control.
1978
TL;DR: A “hierarchical structure” of probabilistic dependence relations is proposed, where it is shown that any symmetric correlation associated with a nonnegative entropy is decomposed into pairwise conditional and/or nonconditional correlations.
### 21.1.15
Some applications of concentration inequalities to statistics
P. Massart
Mathematics
2000
We present some applications of concentration inequalities to the solution of model selection problems in statistics. We study in detail two examples for which this…

### 21.1.16
A Catalog of Complexity Classes
D. Johnson
Computer Science, Mathematics
Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity
1990
TL;DR: This chapter discusses the concepts needed for defining the complexity classes: sets of problems of related resource-based complexity that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input.
### 21.1.17
Elements of Information Theory
T. Cover, Joy A. Thomas
Engineering, Computer Science
1991
TL;DR: The authors examine the role of entropy, inequality, and randomness in the design and construction of codes in a rapidly changing environment.
### 21.1.18
Predicting {0,1}-functions on randomly drawn points
D. Haussler, N. Littlestone, Manfred K. Warmuth
Computer Science, Mathematics
COLT '88
1988
TL;DR: This model is related to Valiant's PAC learning model, but does not require the hypotheses used for prediction to be represented in any specified form, and shows how to construct prediction strategies that are optimal to within a constant factor for any reasonable class F of target functions.
### 21.1.19
Structural Risk Minimization Over Data-Dependent Hierarchies
J. Shawe-Taylor, P. Bartlett, R. C. Williamson, M. Anthony
Computer Science
IEEE Trans. Inf. Theory
1998
TL;DR: A result is presented that allows one to trade off errors on the training sample against improved generalization performance, and a more general result in terms of “luckiness” functions, which provides a quite general way of exploiting serendipitous simplicity in observed data to obtain better prediction accuracy from small training sets.
### 21.1.20
Adaptive Model Selection Using Empirical Complexities
G. Lugosi, A. Nobel
Mathematics
1998
Given n independent replicates of a jointly distributed pair (X, Y) in R^d × R, we wish to select from a fixed sequence of model classes F1, F2, … a deterministic prediction rule f: R^d → R whose…

### 21.1.21
A Probabilistic Theory of Pattern Recognition
L. Devroye, L. Györfi, G. Lugosi
Mathematics, Computer Science
Stochastic Modelling and Applied Probability
1996
TL;DR: The Bayes error and Vapnik–Chervonenkis theory are applied as a guide for empirical classifier selection on the basis of explicit specification and explicit enforcement of the maximum likelihood principle.
### 21.1.22
Isoperimetry and Gaussian analysis
M. Ledoux
Mathematics
1996

### 21.1.23
Bounds on conditional probabilities with applications in multi-user communication
R. Ahlswede, P. Gács, J. Körner
Mathematics
1976
### 21.1.24
An inequality related to the isoperimetric inequality
L. H. Loomis, H. Whitney
Mathematics
1949

21.2

[2] E. J. Candès, and P. S. Loh, Image reconstruction with ridgelets, SURF Technical report, California Institute of Technology, 2002.

Reference on 21.2

### 21.2.1
[1] E. Candes, SURF Mentor, 2002.
### 21.2.2
[2] E. Candes and D. Donoho, Ridgelets: a key to higher-dimensional intermittency?, Phil. Trans. R. Soc. Lond. A., vol. 357, pp. 2495-2509, 1999.
### 21.2.3
[3] E. Candes, Ridgelets and Their Derivatives: Representation of Images with Edges, Curves and Surfaces, L. L. Schumaker et al. (eds), Vanderbilt University Press, Nashville, TN, 1999.
### 21.2.4
[4] S. Chen, D. Donoho, and M. Saunders, Atomic Decomposition By Basis Pursuit, SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33-61.
### 21.2.5
[5] W. Richter mentored by E. Candes, The Power of Representation: The Ridgelet Transform, Caltech Science Writing E-Journal, vol. II, 2001.

21.3

[3] S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Scientific Computing 20 (1999), 33–61.
https://web.stanford.edu/group/SOL/papers/BasisPursuit-SIGEST.pdf

Reference on 21.3

21.3.1

[1] R. E. Bixby, Commentary: Progress in linear programming, ORSA J. Comput., 6 (1994), pp. 15–22.

21.3.2

[2] P. Bloomfield and W. Steiger, Least Absolute Deviations: Theory, Applications, and Algorithms, Birkhäuser, Boston, 1983.

21.3.3

[3] J. Buckheit and D. L. Donoho, WaveLab and reproducible research, in Wavelets and Statistics, A. Antoniadis, ed., Springer-Verlag, Berlin, New York, 1995.

21.3.4

[4] S. S. Chen, Basis Pursuit, Ph.D. Thesis, Department of Statistics, Stanford University, Stanford, CA, 1995; see also http://www-stat.stanford.edu/~atomizer/.

21.3.5

[5] S. Chen, S. A. Billings, and W. Luo, Orthogonal least squares methods and their application to non-linear system identification, Internat. J. Control, 50 (1989), pp. 1873–1896.
[6] R. R. Coifman and Y. Meyer, Remarques sur l'analyse de Fourier à fenêtre, C. R. Acad. Sci. Paris (A), 312 (1991), pp. 259–261.
[7] R. R. Coifman and M. V. Wickerhauser, Entropy-based algorithms for best-basis selection, IEEE Trans. Inform. Theory, 38 (1992), pp. 713–718.
[8] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ, 1963.
[9] I. Daubechies, Time-frequency localization operators: A geometric phase space approach, IEEE Trans. Inform. Theory, 34 (1988), pp. 605–612.
[10] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992.
[11] G. Davis, S. Mallat, and Z. Zhang, Adaptive time-frequency decompositions, Optical Engrg., 33 (1994), pp. 2183–2191.
[12] R. A. DeVore and V. N. Temlyakov, Some remarks on greedy algorithms, Adv. Comput. Math., 5 (1996), pp. 173–187.
[13] D. L. Donoho, De-noising by soft thresholding, IEEE Trans. Inform. Theory, 41 (1995), pp. 613–627.
[14] D. L. Donoho, Wedgelets: Nearly-minimax estimation of edges, Ann. Statist., 27 (1999), pp. 859–897.
[15] D. L. Donoho and X. Huo, Uncertainty Principles and Ideal Atomic Decomposition, Technical Report 99-13, Department of Statistics, Stanford University, Stanford, CA, 1999; IEEE Trans. Inform. Theory, to appear.
[16] D. L. Donoho and I. M. Johnstone, Ideal de-noising in an orthonormal basis chosen from a library of bases, C. R. Acad. Sci. Paris Sér. I Math., 319 (1994), pp. 1317–1322.
[17] D. L. Donoho and I. M. Johnstone, Empirical Atomic Decomposition, manuscript, 1995.
[18] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, Wavelet shrinkage: Asymptopia? J. Roy. Statist. Soc. Ser. B, 57 (1995), pp. 301–369.
[19] D. Gabor, Theory of communication, J. Inst. Elect. Eng., 93 (1946), pp. 429–457.
[20] P. E. Gill, W. Murray, D. B. Ponceleón, and M. A. Saunders, Solving Reduced KKT Systems in Barrier Methods for Linear and Quadratic Programming, Report SOL 91-7, Stanford University, Stanford, CA, July 1991.
[21] P. E. Gill, W. Murray, and M. H. Wright, Numerical Linear Algebra and Optimization, Addison-Wesley, Redwood City, CA, 1991.
[22] G. Golub and C. Van Loan, Matrix Computations, 2nd ed., Johns Hopkins University Press, Baltimore, MD, 1989.
[23] X. Huo, Sparse Image Decomposition via Combined Transforms, Ph.D. Thesis, Department of Statistics, Stanford University, Stanford, CA, 1999; see also http://www-stat.stanford.edu/research/abstracts/99-18.ps.
[24] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), pp. 375–395.
[25] M. Kojima, S. Mizuno, and A. Yoshise, A primal-dual interior point algorithm for linear programming, in Progress in Mathematical Programming: Interior Point and Related Methods, Springer-Verlag, New York, 1989.
[26] Y. Li and F. Santosa, A computational algorithm for minimizing total variation in image restoration, IEEE Trans. Image Proc., 5 (1996), pp. 987–995.
[27] I. J. Lustig, R. E. Marsten, and D. F. Shanno, Interior point methods for linear programming: Computational state of the art, ORSA J. Comput., 6 (1994), pp. 1–14.
[28] S. Mallat and W. L. Hwang, Singularity detection and processing with wavelets, IEEE Trans. Inform. Theory, 38 (1992), pp. 617–643.
[29] S. Mallat and Z. Zhang, Matching pursuit in a time-frequency dictionary, IEEE Trans. Signal Proc., 41 (1993), pp. 3397–3415.
[30] S. Mallat and S. Zhong, Wavelet transform maxima and multiscale edges, in Wavelets and Their Applications, M. B. Ruskai, G. Beylkin, and R. Coifman, eds., Jones and Bartlett, Boston, 1992.
[31] MATLAB, The MathWorks, Inc., Natick, MA.
[32] N. Megiddo, On finding primal- and dual-optimal bases, ORSA J. Comput., 3 (1991), pp. 63–65.
[33] Y. Meyer, Ondelettes sur l'intervalle, Rev. Mat. Iberoamericana, 7 (1991), pp. 115–134.
[34] Y. Meyer, Wavelets: Algorithms and Applications, SIAM, Philadelphia, 1993.
[35] Y. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming, SIAM, Philadelphia, 1994.
[36] C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Software, 8 (1982), pp. 43–71.
[37] C. C. Paige and M. A. Saunders, Algorithm 583; LSQR: Sparse linear equations and least-squares problems, ACM Trans. Math. Software, 8 (1982), pp. 195–209.
[38] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition, in Proc. 27th Asilomar Conference on Signals, Systems and Computers, A. Singh, ed., IEEE Comput. Soc. Press, Los Alamitos, CA, 1993.
[39] S. Qian and D. Chen, Signal representation using adaptive normalized Gaussian functions, Signal Process., 36 (1994), pp. 1–11.
[40] C. Roos, T. Terlaky, and J.-Ph. Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, Wiley, Chichester, UK, 1997.

[41] L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total-variation-based noise removal algorithms, Phys. D, 60 (1992), pp. 259–268.
[42] S. Sardy, A. G. Bruce, and P. Tseng, Block coordinate relaxation methods for nonparametric wavelet denoising, J. Comput. Graph. Statist., 9 (2000), pp. 361–379.
[43] M. A. Saunders, Commentary: Major Cholesky would feel proud, ORSA J. Comput., 6 (1994), pp. 23–27.
[44] M. A. Saunders, pdsco.m, Matlab code for minimizing convex separable objective functions subject to Ax = b, x ≥ 0, http://www-stat.stanford.edu/~atomizer/.
[45] E. P. Simoncelli, W. T. Freeman, E. H. Adelson, and D. J. Heeger, Shiftable multiscale transforms, IEEE Trans. Inform. Theory, 38 (1992), pp. 587–607.
[46] M. J. Todd, Commentary: Theory and practice for interior point methods, ORSA J. Comput., 6 (1994), pp. 28–31.
[47] R. J. Vanderbei, Commentary: Interior point methods: Algorithms and formulations, ORSA J. Comput., 6 (1994), pp. 32–34.
[48] L. F. Villemoes, Best approximation with Walsh atoms, Constr. Approx., 13 (1997), pp. 329–355.
[49] M. H. Wright, Interior methods for constrained optimization, Acta Numerica, 1992, pp. 341–407.
[50] S. J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1996; see also http://www.siam.org/books/swright/.
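Several entries above ([29], [38]) concern greedy alternatives to basis pursuit. A minimal, illustrative sketch of orthogonal matching pursuit in Python; the dictionary, dimensions, and data below are made-up demo values, not code from any of the cited papers:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily pick k columns of A to fit b.

    Each step selects the column most correlated with the current residual,
    then recomputes the least-squares fit over all columns selected so far.
    """
    residual = b.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef          # orthogonal re-fit
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Demo: a 3-sparse coefficient vector, 20 measurements, 40 unit-norm atoms.
rng = np.random.default_rng(2)
m, n, k = 20, 40, 3
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
b = A @ x_true
x_hat = omp(A, b, k)
```

Unlike basis pursuit's global ℓ1 program, OMP builds the support one atom at a time; the least-squares re-fit at each step is what distinguishes it from plain matching pursuit [29].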

21.4

[4] A. H. Delaney, and Y. Bresler, A fast and accurate iterative reconstruction algorithm for parallel-beam tomography, IEEE Trans. Image Processing, 5 (1996), 740–753.

21.5

[5] D. C. Dobson, and F. Santosa, Recovery of blocky images from noisy and blurred data, SIAM J. Appl. Math. 56 (1996), 1181–1198.

21.6

[6] D.L. Donoho, P.B. Stark, Uncertainty principles and signal recovery, SIAM J. Appl. Math. 49 (1989), 906–931.

21.7

[7] D.L. Donoho and X. Huo, Uncertainty principles and ideal atomic decomposition, IEEE Transactions on Information Theory, 47 (2001), 2845–2862.

21.8

[8] D. L. Donoho and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. Proc. Natl. Acad. Sci. USA 100 (2003), 2197–2202.

21.9

[9] M. Elad and A.M. Bruckstein, A generalized uncertainty principle and sparse representation in pairs of R^N bases, IEEE Transactions on Information Theory, 48 (2002), 2558–2567.

21.10

[10] P. Feng, and Y. Bresler, Spectrum-blind minimum-rate sampling and reconstruction of multiband signals, in Proc. IEEE Int. Conf. Acoust. Speech and Sig. Proc., (Atlanta, GA), 3 (1996), 1689–1692.

21.11

[11] P. Feng, and Y. Bresler, A multicoset sampling approach to the missing cone problem in computer aided tomography, in Proc. IEEE Int. Symposium Circuits and Systems, (Atlanta, GA), 2 (1996), 734–737.

21.12

[12] A. Feuer and A. Nemirovsky, On sparse representations in pairs of bases, Accepted to the IEEE Transactions on Information Theory in November 2002.

21.13

[13] J. J. Fuchs, On sparse representations in arbitrary redundant bases, IEEE Transactions on Information Theory, 50 (2004), 1341–1344.

21.14

[14] R. Gribonval and M. Nielsen, Sparse representations in unions of bases, Technical report, IRISA, November 2002.

21.15

[15] C. Mistretta, Personal communication (2004).

21.16

[16] F. Santosa, and W. W. Symes, Linear inversion of band-limited reflection seismograms, SIAM J. Sci. Statist. Comput. 7 (1986), 1307–1330.

21.17

[17] P. Stevenhagen, H.W. Lenstra Jr., Chebotarëv and his density theorem, Math. Intelligencer 18 (1996), no. 2, 26–37.

21.18

[18] T. Tao, An uncertainty principle for cyclic groups of prime order, preprint, math.CA/0308286.
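Tao's theorem states that for any nonzero function f on Z_p with p prime, |supp(f)| + |supp(f̂)| ≥ p + 1. A small brute-force check in Python for p = 5, searching only 0/1-valued signals (a restricted but illustrative search that already contains the extremal cases, such as a delta and the all-ones signal):

```python
import numpy as np
from itertools import product

# Brute-force check of the uncertainty principle on Z_p, p prime:
# for any nonzero f, |supp(f)| + |supp(fhat)| >= p + 1.
p = 5
best = 2 * p
for entries in product([0.0, 1.0], repeat=p):
    f = np.array(entries)
    if not f.any():
        continue                                  # skip the zero signal
    fhat = np.fft.fft(f)                          # DFT over Z_p
    total = np.count_nonzero(f) + int(np.sum(np.abs(fhat) > 1e-9))
    best = min(best, total)
print("minimum |supp(f)| + |supp(fhat)| found:", best)  # p + 1 = 6
```

A delta function attains the bound (support 1 in time, full support p in frequency), as does the all-ones signal by duality; no 0/1 signal does better, matching the theorem.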

21.19

[19] J. A. Tropp, Greed is good: Algorithmic results for sparse approximation, Technical Report, The University of Texas at Austin, 2003.

21.20

[20] J. A. Tropp, Just relax: Convex programming methods for subset selection and sparse approximation, Technical Report, The University of Texas at Austin, 2004.

21.21

[21] M. Vetterli, P. Marziliano, and T. Blu, Sampling signals with finite rate of innovation, IEEE Transactions on Signal Processing, 50 (2002), 1417–1428.

21.22

[22] A. C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-optimal sparse Fourier representations via sampling, 34th ACM Symposium on Theory of Computing, Montréal, May 2002.

21.23

[23] A. C. Gilbert, S. Muthukrishnan, and M. Strauss, Beating the B2 bottleneck in estimating B-term Fourier representations, unpublished manuscript, May 2004.

References (参考資料)

Data Scientist Fundamentals (2)
https://qiita.com/kaizen_nagoya/items/8b2f27353a9980bf445c

Iwanami Dictionary of Mathematics: a good deal, with both editions on one CD
https://qiita.com/kaizen_nagoya/items/1210940fe2121423d777

Iwanami Dictionary of Mathematics
https://qiita.com/kaizen_nagoya/items/b37bfd303658cb5ee11e

Ann's Room (mathematics learned from people's names: Iwanami Dictionary of Mathematics), English (24)
https://qiita.com/kaizen_nagoya/items/e02cbe23b96d5fb96aa1

<This article is a personal opinion based on my own past experience. It has no relation to the organization I currently belong to or to my work.>

Thank you very much for reading to the end.

Please press the like icon 💚 and follow me.
