@kzuzuo "Diversity of Knowledge Structures": Notes on AI (12)

Posted at 2024-11-17
  1. (Split on 2021-10-03; updates discontinued) AI for patent SDI: Individuality observed in multiple natural-language deep learning models and its interpretation, prospects for value co-creation evaluating the diversity of knowledge structures from a cognitive perspective, and creativity
    https://qiita.com/kzuzuo/items/d41327433c9cdc6a5fd3

While rereading the article above and following the URLs it cites, I plan to add my observations here.

<This article is a work in progress. I will add to it over time.>

Cited URLs

*Moved and reposted (the original at https://qiita.com/kzuzuo/items/9a149e69642ee7b3221e now returns 404 Not Found).

*On 2021-10-03, split into Parts 1, 2, and 3 as a workaround for 502 Bad Gateway errors.
2) (In progress) Part 1: AI for patent SDI: Individuality observed in multiple natural-language deep learning models and its interpretation, prospects for value co-creation evaluating the diversity of knowledge structures from a cognitive perspective, and creativity
https://qiita.com/kzuzuo/items/4670b5ff7526319680f4
3) (In progress) Part 2: AI for patent SDI: Individuality observed in multiple natural-language deep learning models and its interpretation, prospects for value co-creation evaluating the diversity of knowledge structures from a cognitive perspective, and creativity
https://qiita.com/kzuzuo/items/237b9f5192464817aa40
4) (In progress) Part 3: AI for patent SDI: Individuality observed in multiple natural-language deep learning models and its interpretation, prospects for value co-creation evaluating the diversity of knowledge structures from a cognitive perspective, and creativity
https://qiita.com/kzuzuo/items/756470e6e17c54aa5e2e

Three mysteries in deep learning: Ensemble, knowledge distillation, and self-distillation
Published January 19, 2021
By Zeyuan Allen-Zhu, Senior Researcher, and Yuanzhi Li, Assistant Professor, Carnegie Mellon University
https://www.microsoft.com/en-us/research/blog/three-mysteries-in-deep-learning-ensemble-knowledge-distillation-and-self-distillation/
Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
https://arxiv.org/abs/2012.09816

Are Pre-trained Convolutions Better than Pre-trained Transformers?
https://arxiv.org/abs/2105.03322

An example natural-language AI implementation combining multiple deep learning models, and prospects focusing on model diversity (preliminary report); submitted December 2018
 http://patentsearch.punyu.jp/asia/2018hayashi.pdf
 https://sapi.kaisei1992.com/wp-content/uploads/2019/03/2018hayashi.pdf

Control and Robotics: An Invitation to Manifold Theory (fundamental mathematics lecture, Osaka University Graduate School)
https://www.youtube.com/watch?v=6npSJdMQqVY

SHAP (SHapley Additive exPlanations) https://github.com/slundberg/shap
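As a quick orientation for the SHAP entries below, a minimal sketch, assuming shap and scikit-learn are installed; the dataset and tree model are purely illustrative, not taken from the cited work:

```python
# Minimal SHAP sketch (illustrative setup, not from the cited work):
# attribute a tree model's predictions to its input features.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)              # efficient explainer for tree ensembles
shap_values = explainer.shap_values(X.iloc[:200])  # one additive attribution per feature per row
shap.summary_plot(shap_values, X.iloc[:200])       # global view of which features drive predictions
```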

Explainable machine-learning predictions for the prevention of hypoxaemia during surgery
https://www.nature.com/articles/s41551-018-0304-0

How to use in R model-agnostic data explanation with DALEX & iml
https://www.slideshare.net/kato_kohaku/how-to-use-in-r-modelagnostic-data-explanation-with-dalex-iml
SHAP is covered from p. 116 onward.
Explaining machine learning models with SHAP
https://www.datarobot.com/jp/blog/explain-machine-learning-models-using-shap/

LIME https://github.com/marcotcr/lime
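For contrast with SHAP, a minimal LIME sketch, assuming lime and scikit-learn are installed; the toy corpus and classifier are illustrative assumptions, not the library's own example:

```python
# Minimal LIME sketch (toy corpus and classifier are illustrative assumptions):
# a local, model-agnostic explanation of a single text prediction.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["good movie", "bad movie", "great film", "terrible film",
         "I loved it", "I hated it"]
labels = [1, 0, 1, 0, 1, 0]
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("a great but slow movie",
                                 pipe.predict_proba, num_features=5)
print(exp.as_list())  # (word, weight) pairs from the local linear surrogate
```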

Keras LSTM for IMDB Sentiment Classification
https://slundberg.github.io/shap/notebooks/deep_explainer/Keras%20LSTM%20for%20IMDB%20Sentiment%20Classification.html

Attention is not Explanation https://arxiv.org/abs/1902.10186

Model interpretability, slides 26 onward (2019-08)
https://speakerdeck.com/mimihub/20190827-aws-mlloft-lt5?slide=29

AIST Artificial Intelligence Research Center, 40th AI Seminar: Explaining the rationale behind machine-learning model decisions (Ver. 2) (2020-01)
https://www.slideshare.net/SatoshiHara3/ver2-225753735
Interpretable machine learning with tidymodels + DALEX / Tokyo.R #83 (2020-01)
https://speakerdeck.com/dropout009/tokyo-dot-r83

https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/text.html

Trying SHAP, a method for interpreting machine-learning models, on natural language processing
https://qiita.com/m__k/items/87cf3e4acf414408bfed
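A minimal sketch of SHAP's token-level text explanations, in the spirit of the two links above; the Hugging Face model name is an illustrative assumption, not the model used in those posts:

```python
# Minimal sketch of SHAP text attributions (the model name is an illustrative assumption):
# token-level contributions to a sentiment prediction.
import shap
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

explainer = shap.Explainer(classifier)  # shap wraps the pipeline with a text masker
shap_values = explainer(["This search result is relevant to the claimed invention."])
shap.plots.text(shap_values)            # highlights each token's contribution
```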

Part-of-speech comparison of a contracts corpus and a legal-text corpus, among other analyses
https://speakerdeck.com/mimihub/20190827-aws-mlloft-lt5?slide=18

magnitude
https://github.com/plasticityai/magnitude
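A minimal Magnitude sketch, assuming pymagnitude is installed; the .magnitude file name is an illustrative assumption (any pre-converted embedding file would do):

```python
# Minimal Magnitude sketch (the embedding file name is an illustrative assumption):
# fast, out-of-core word-vector lookups with built-in out-of-vocabulary handling.
from pymagnitude import Magnitude

vectors = Magnitude("wiki-news-300d-1M.magnitude")  # any pre-converted embedding file
print(vectors.dim)                                  # embedding dimensionality
print(vectors.query("patent"))                      # vector lookup; OOV words still return a vector
print(vectors.most_similar("patent", topn=5))       # nearest neighbours in the embedding space
```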

Hottolink large-scale Japanese SNS + web corpus
https://www.hottolink.co.jp/blog/20190304-2

・Juman++ & BPE: Kurohashi-Kawahara Lab (Kyoto University) Japanese BERT pretrained model
http://nlp.ist.i.kyoto-u.ac.jp/index.php?BERT日本語Pretrainedモデル
・SentencePiece: hottoSNS-BERT
https://www.hottolink.co.jp/blog/20190311-2
・MeCab: Stockmark BERT model pretrained on a Japanese business-news corpus
https://qiita.com/mkt3/items/3c1278339ff1bcc0187f
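Each of the models above is paired with the tokenizer it was trained with, and that pairing has to be preserved at inference time. A minimal sketch of loading a Japanese BERT with its own tokenizer, using a publicly available model that is not one of the three above (the model id is an illustrative assumption):

```python
# Minimal sketch (the model id is an illustrative assumption and is NOT one of the
# three models listed above): load a Japanese BERT together with its own tokenizer,
# since subword segmentation differs between Juman++ & BPE, SentencePiece, and MeCab.
from transformers import AutoModel, AutoTokenizer

name = "cl-tohoku/bert-base-japanese"  # MeCab-based public model (needs fugashi/ipadic installed)
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

print(tokenizer.tokenize("特許検索のための深層学習モデル"))  # segmentation depends on the tokenizer
```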

BioBERT
https://arxiv.org/abs/1901.08746
SciBERT
https://arxiv.org/abs/1903.10676

MT-DNN (Multi-Task Deep Neural Networks for Natural Language Understanding)
https://arxiv.org/abs/1901.11504
Improving Language Understanding by Generative Pre-Training(transformers and unsupervised pre-training)
https://openai.com/blog/language-unsupervised/
Unified Language Model Pre-training for Natural Language Understanding and Generation (Microsoft)
https://arxiv.org/abs/1905.03197
XLNet: Generalized Autoregressive Pretraining for Language Understanding
https://arxiv.org/abs/1906.08237
RoBERTa
https://arxiv.org/abs/1907.11692
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://openreview.net/forum?id=H1eA7AEtvS
https://github.com/google-research/google-research/tree/master/albert
Progress as measured on the RACE leaderboard:
http://www.qizhexie.com/data/RACE_leaderboard
*Unsupervised Data Augmentation
https://arxiv.org/abs/1904.12848

*Making Convolutional Networks Shift-Invariant Again
https://arxiv.org/abs/1904.11486
*Predictive Uncertainty Estimation via Prior Networks.
http://arxiv.org/abs/1802.10501

*Two models of double descent for weak features
https://arxiv.org/abs/1903.07571

26 Sep 2019 (modified: 26 Sep 2019), ICLR 2020 Conference submission
https://openreview.net/forum?id=ryGWhJBtDB

*Document Scanner using Computer Vision
https://towardsdatascience.com/document-scanner-using-computer-vision-opencv-and-python-20b87b1cbb06

*Phase transition in PCA with missing data: Reduced signal-to-noise ratio, not sample size! 
https://arxiv.org/abs/1905.00709

・A proposal to establish a new research field, called "machine behaviour", that seeks to understand machines while they remain black boxes
https://www.nature.com/articles/s41586-019-1138-y
https://www.media.mit.edu/publications/review-article-published-24-april-2019-machine-behaviour/

An invitation to high-dimensional science https://japan.cnet.com/blog/maruyama/2019/05/01/entry_30022958/

*Three factors of severe COVID-19 predicted by AI (2020-04)
https://aitimes.media/2020/04/02/4589/?6598

Karen Hao, 2019-05-17
https://www.google.com/amp/s/www.technologyreview.jp/s/141062/deep-learning-could-reveal-why-the-world-works-the-way-it-does/amp/

Information theory holds surprises for machine learning
https://www.santafe.edu/news-center/news/information-theory-holds-surprises-machine-learning
Caveats for information bottleneck in deterministic scenarios
https://arxiv.org/abs/1808.07593
[Paper] Causal reasoning via meta-reinforcement learning
https://qiita.com/kodai_sudo/items/780b3e05c150f9c9dda6

https://www.reddit.com/r/rarepuppers/comments/bb7lfg/the_mystic_tiger_boye/
In that case the answer is mistaken, yet cognitively it is correct.

GRAPH TRANSFORMER
https://openreview.net/pdf?id=HJei-2RcK7

Utilization of Bio-Ontologies for Enhancing Patent Information Retrieval
https://ieeexplore.ieee.org/document/8754131

Atsushi Suyama (2019), Bayesian Deep Learning, Kodansha Scientific
Study-group slides from the HCOMP Lab, University of Tsukuba:
https://speakerdeck.com/catla/beizushen-ceng-xue-xi-3-dot-3-3-dot-4

*Dynamic Bayesian inference
https://arxiv.org/abs/1901.05353

*Jerry Z. Muller (2019), The Tyranny of Metrics: Why Performance Evaluation Fails (Japanese edition), Misuzu Shobo
https://www.msz.co.jp/book/detail/08793.html

*Discourse maps and argument mining
https://speakerdeck.com/cfiken/nlpaper-dot-challenge-wai-bu-zhi-shi-niji-dukuying-da-sheng-cheng-sabei?slide=28
The meaning and individuality of text
Akiko Aizawa, Professor, National Institute of Informatics
NHK STRL R&D, April 2018
https://www.nhk.or.jp/strl/publica/rd/rd168/pdf/P02-03.pdf

Pay Less Attention with Lightweight and Dynamic Convolutions
https://arxiv.org/abs/1901.10430

Channel Equilibrium Networks
Sep 25, 2019, ICLR 2020 Conference Blind Submission
https://openreview.net/forum?id=BJlOcR4KwS
