R3 on "What are the most important statistical ideas of the past 50 years? " Andrew Gelman, Aki Vehtari(3)

Posted at 2021-10-02

R3 on "What are the most important statistical ideas of the past 50 years? " Andrew Gelman, Aki Vehtari(0)
https://qiita.com/kaizen_nagoya/items/a8eac9afbf16d2188901

What are the most important statistical ideas of the past 50 years?
Andrew Gelman, Aki Vehtari
https://arxiv.org/abs/2012.00174

References

3

Anderlucci, L., Montanari, A., and Viroli, C. (2019). The importance of being clustered: Uncluttering the trends of statistics from 1970 to 2015. Statistical Science 34, 280–300.
https://arxiv.org/abs/1709.03563

References on 3

3.1

Ambroise, C. and G. Govaert (2000). Data Analysis, Classification, and Related Methods, Chapter EM Algorithm for Partially Known Labels, pp. 161–166. Berlin, Heidelberg: Springer Berlin Heidelberg.
https://link.springer.com/chapter/10.1007/978-3-642-59789-3_26

References on 3.1

3.1.1

DEMPSTER, A.P., LAIRD, N.M. and RUBIN, D.B. (1977): Maximum Likelihood from Incomplete Data via the EM Algorithm, Journal of the Royal Statistical Society, B, 39, 1–38.

3.1.2

GANESALINGAM, S. and MCLACHLAN, G.J. (1978): The Efficiency of Linear Discriminant Function Based on Unclassified Initial Samples, Biometrika, 65, 658–662.

3.1.3

JORDAN, M.I. and JACOBS, R.A. (1994): Hierarchical Mixtures of Experts and the EM Algorithm, Neural Computation, 6, 181–214.

3.1.4

MCLACHLAN, G.J. and BASFORD, K.E. (1989): Mixture Models. Inference and Applications to Clustering. Marcel Dekker, New York.

3.1.5

MCLACHLAN, G. and KRISHNAN, T. (1997): The EM Algorithm and Extensions. Wiley, New York.

3.1.6

O’NEILL, T.J. (1978): Normal Discrimination with Unclassified Observations, Journal of the American Statistical Association, 73, 821–826.

3.1.7

TITTERINGTON, D.M., SMITH, A.F. and MAKOV, U.E. (1985): Statistical Analysis of Finite Mixture Distributions. Wiley, New York.
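
The seven entries above (3.1.1–3.1.7) all revolve around the EM algorithm for mixture models and partially classified data. As a rough illustration only, and not code from any of the cited papers, here is a minimal NumPy sketch of EM for a one-dimensional, two-component Gaussian mixture; the synthetic data, the starting values, and the fixed iteration count are arbitrary assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "incomplete" data: points from two Gaussians, component labels unobserved.
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.5, 300)])

# Arbitrary starting values for the mixing weight, means, and standard deviations.
pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def normal_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

for _ in range(100):
    # E step: responsibility of component 0 for each observation.
    p0 = pi * normal_pdf(x, mu[0], sigma[0])
    p1 = (1.0 - pi) * normal_pdf(x, mu[1], sigma[1])
    r = p0 / (p0 + p1)

    # M step: re-estimate the parameters from the responsibilities.
    pi = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1.0 - r)])
    sigma = np.sqrt(np.array([
        np.average((x - mu[0]) ** 2, weights=r),
        np.average((x - mu[1]) ** 2, weights=1.0 - r),
    ]))

print("weight:", pi, "means:", mu, "sds:", sigma)
```

The alternation of E and M steps is the scheme formalized in 3.1.1 (Dempster, Laird and Rubin); the Ambroise and Govaert chapter in 3.1 adapts it to the case where some labels are already known.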

3.2

Blei, D., A. Ng, and M. Jordan (2003). Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993–1022.

3.3

Bouveyron, C., P. Latouche, and R. Zreik (2017). The stochastic topic block model for the clustering of networks with textual edges. Statistics and Computing, in press.

3.4

Chang, J. and D. Blei (2009). Relational topic models for document networks. In International Conference on Artificial Intelligence and Statistics, 81–88.

3.5

Côme, E., L. Oukhellou, T. Denœux, and P. Aknin (2009). Learning from partially supervised data using mixture models and belief functions. Pattern Recognition 42(3), 334–348.

3.6

Deerwester, S., S. Dumais, G. Furnas, T. Landauer, and R. Harshman (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science 41, 391–407.

3.7

Dhillon, I. S. and D. S. Modha (2001). Concept decompositions for large sparse text data using clustering. Machine Learning 42(1/2), 143–175.

3.8

Diaconis, P. (1988). Group representations in probability and statistics. In Lecture notes, Volume 11 of Monograph Series. Institute of Mathematical Statistics.

3.9

Fligner, M. A. and J. S. Verducci (1986). Distance based ranking models. Journal of the Royal Statistical Society. Series B (Methodological) 48 (3), 359–369.

3.10

Fraley, C. and A. Raftery (2002). Model-based clustering, discriminant analysis and density estimation. Journal of the American Statistical Association 97, 611–631.

3.11

Hofmann, T. (1999). Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, 50–57.

3.12

Maitra, R. and I. P. Ramler (2010). A k-mean-directions algorithm for fast clustering of data on the sphere. Journal of Computational and Graphical Statistics 19 (2), 377–396.

3.13

Mallows, C. L. (1957). Non-Null Ranking Models. I. Biometrika 44(1/2), 114–130.

3.14

McLachlan, G. J. and D. Peel (2000). Finite Mixture Models. Wiley.

3.15

Murphy, T. B. and D. Martin (2003). Mixtures of distance-based models for ranking data. Computational Statistics & Data Analysis 41, 645–655.

3.16

Nigam, K., A. McCallum, S. Thrun, and T. Mitchell (2000). Text classification from labeled and unlabeled documents using EM. Machine Learning 39, 103–134.

3.17

Salton, G. and M. J. McGill (1986). Introduction to Modern Information Retrieval. New York, NY, USA: McGraw-Hill, Inc.

3.18

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal 27, 379–423, 623–656.

3.19

Sun, Y., J. Han, J. Gao, and Y. Yu (2009). iTopicModel: Information network-integrated topic modeling. In Ninth IEEE International Conference on Data Mining, 493–502.

3.20

Vandewalle, V., C. Biernacki, G. Celeux, and G. Govaert (2013). A predictive deviance criterion for selecting a generative model in semi-supervised classification. Computational Statistics & Data Analysis 64, 220–236.

3.21

Zhu, X., A. B. Goldberg, R. Brachman, and T. Dietterich (2009). Introduction to Semi-Supervised Learning. Morgan and Claypool Publishers.
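
Several entries in the list above deal with topic models and latent semantic indexing (3.2 Blei et al., 3.6 Deerwester et al., 3.11 Hofmann). As a hedged illustration only, the following sketch fits a tiny LDA model with scikit-learn; the toy documents, the choice of two topics, and the use of scikit-learn's LatentDirichletAllocation (rather than the implementations in the cited papers) are all assumptions made for this example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus invented for the example.
docs = [
    "the em algorithm fits mixture models to incomplete data",
    "latent dirichlet allocation is a probabilistic topic model",
    "topic models cluster documents by word co-occurrence",
    "mixture models and clustering are closely related",
]

# Bag-of-words counts (the vector space representation discussed in 3.17, Salton and McGill).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Two topics only because the corpus is tiny; real applications tune this choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)  # document-topic proportions

# Print the top words of each topic.
terms = vectorizer.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top = component.argsort()[::-1][:5]
    print(f"topic {k}:", [terms[i] for i in top])
```

On a corpus this small the fitted topics are not meaningful; the point is only the pipeline of counting words and then fitting a latent topic mixture.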

参考資料(References)

Fundamentals of a Data Scientist (2)
https://qiita.com/kaizen_nagoya/items/8b2f27353a9980bf445c

岩波数学辞典 (Iwanami Dictionary of Mathematics): a good deal with two editions on one CD
https://qiita.com/kaizen_nagoya/items/1210940fe2121423d777

岩波数学辞典 (Iwanami Dictionary of Mathematics)
https://qiita.com/kaizen_nagoya/items/b37bfd303658cb5ee11e

Anne's Room (mathematics learned from people's names: 岩波数学辞典), English (24)
https://qiita.com/kaizen_nagoya/items/e02cbe23b96d5fb96aa1

<This article is a personal opinion based on my own past experience. It has nothing to do with the organization I currently belong to or with my work there.>

Thank you very much for reading to the end.

Please press the like icon 💚 and follow me.
