
The Consciousness Prior【1 Introduction】【Paper DeepL Translation】

Posted at 2020-11-04

This article is something of a personal memo.
It consists mostly of DeepL translation.
I would appreciate it if you pointed out any mistakes.

Translation source
The Consciousness Prior
Author: Yoshua Bengio

Previous: 【Abstract】
Next: 【2 System 2 Processing and Global Workspace Theory of Consciousness】

1 Introduction

In this paper, we propose a new kind of prior for top-level abstract representations of concepts of the kind humans manipulate with natural language. It is inspired by modern theories of consciousness such as the global workspace theory [Baars, 1988, 1997, 2002, Dehaene and Naccache, 2001, Dehaene et al., 2017] as a form of awareness [van Gulick, 2004], i.e., consciousness as defined by Locke: "the perception of what passes in a man's own mind", or awareness of an external object or of something within oneself (Wikipedia definition). The main contribution of this paper is to propose a machine learning justification for an aspect of this theory, stipulating that the elements of a conscious thought are selected through an attention mechanism (such as the content-based attention mechanism introduced earlier [Bahdanau et al., 2015]) and then broadcast to the rest of the brain, strongly influencing downstream perception and action as well as the content of the next conscious thought. The paper views this as a computational mechanism consistent with a hypothesis about the form of the joint distribution over the kinds of high-level variables that can form a conscious thought. Since a conscious thought refers to only very few variables at a time, we suggest that this corresponds to a form of knowledge representation that is factored into pieces involving a few variables at a time. From a probabilistic modeling point of view, this corresponds to a sparse factor graph. Each "factor" captures a possibly strong dependency between a few variables. Although one variable can participate in many such factors, each factor links very few variables, similarly to words or concepts linked together in a sentence in natural language.

Original text

We propose here a new kind of prior for top-level abstract representations of concepts of the kind humans manipulate with natural language, inspired by modern theories of consciousness such as the global workspace theory [Baars, 1988, 1997, 2002, Dehaene and Naccache, 2001, Dehaene et al., 2017] as a form of awareness [van Gulick, 2004], i.e., as defined by Locke, consciousness is “the perception of what passes in a man’s own mind”, or awareness of an external object or something within oneself (Wikipedia definition). The main contribution of this paper is proposing a machine learning justification for an aspect of this theory, stipulating that elements of a conscious thought are selected through an attention mechanism (such as the content-based attention mechanism we introduced in [Bahdanau et al., 2015]) and then broadcast to the rest of the brain, strongly influencing downstream perception and action as well as the content of the next conscious thought. The paper sees this as a computational mechanism which is consistent with a hypothesis about the form of the joint distribution between the type of high-level variables which can form a conscious thought. Since a conscious thought only refers to very few variables at a time, we suggest that this corresponds to a form of knowledge representation which is factored into pieces involving a few variables at a time. From a probabilistic modeling point of view, this corresponds to a sparse factor graph. Each “factor" captures the possibly strong dependency between a few variables. Although a variable can participate in many such factors, each factor links very few variables, similarly to words or concepts linked together in a sentence in natural language.
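To make the "sparse factor graph" idea concrete, here is a minimal sketch (not from the paper; the variables and score functions are hypothetical illustrations). Each factor reads only a small subset of the many high-level variables, and the unnormalized joint probability is the product of the factor scores, just as the text describes.

```python
import itertools

# Hypothetical high-level variables a conscious thought might link.
variables = ["rain", "wet_ground", "umbrella", "traffic", "late"]

# Each factor links very few variables: (linked variable names, score function
# over their boolean values). The score functions below are made-up examples.
factors = [
    (("rain", "wet_ground"), lambda r, w: 3.0 if r == w else 0.5),
    (("rain", "umbrella"),   lambda r, u: 2.0 if r == u else 1.0),
    (("traffic", "late"),    lambda t, l: 4.0 if t == l else 0.2),
]

def unnormalized_prob(assignment):
    """Product of factor scores; sparse because each factor reads few variables."""
    p = 1.0
    for vars_, score in factors:
        p *= score(*(assignment[v] for v in vars_))
    return p

# Normalizing over all 2^5 assignments yields the joint distribution.
Z = sum(unnormalized_prob(dict(zip(variables, vals)))
        for vals in itertools.product([0, 1], repeat=len(variables)))
a = {"rain": 1, "wet_ground": 1, "umbrella": 1, "traffic": 0, "late": 0}
print(unnormalized_prob(a) / Z)  # probability of one consistent assignment
```

Note that "rain" participates in two factors while each factor touches only two variables, mirroring the paper's point that one variable may join many factors but each factor links only a few.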
