Building a Hadoop Cluster with CDH5

Posted at 2014-09-20

Introduction

This article describes how to build a Hadoop cluster with CDH5.

Environment

  • CentOS 6.5
  • CDH5
  • jdk 1.7.0_55

Cluster Layout

Hostname        IP address       ResourceManager  NameNode  NodeManager  DataNode  JobHistoryServer
hadoop-master   192.168.122.101  ✓                ✓         -            -         ✓
hadoop-master2  192.168.122.102  ✓                ✓         -            -         -
hadoop-slave    192.168.122.111  -                -         ✓            ✓         -
hadoop-slave2   192.168.122.112  -                -         ✓            ✓         -
hadoop-slave3   192.168.122.113  -                -         ✓            ✓         -
hadoop-client   192.168.122.201  -                -         -            -         -

Building the Cluster

Prerequisites

  • Install the JDK
$ curl -LO -b "oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u55-b13/jdk-7u55-linux-x64.rpm"
$ sudo yum localinstall jdk-7u55-linux-x64.rpm
$ java -version
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
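The CDH daemons locate Java via bigtop-utils. If the JDK is not auto-detected, one way to pin it (a sketch; the path assumes the RPM installed above) is to set JAVA_HOME in /etc/default/bigtop-utils:
$ sudo sh -c 'echo "export JAVA_HOME=/usr/java/jdk1.7.0_55" >> /etc/default/bigtop-utils'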
  • Add the CDH5 repository
$ curl -LO http://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
$ sudo yum localinstall cloudera-cdh-5-0.x86_64.rpm
$ sudo yum clean all
$ yum repolist
Loaded plugins: fastestmirror, presto
Loading mirror speeds from cached hostfile
cloudera-cdh5                                                                                                                                |  951 B     00:00
cloudera-cdh5/primary                                                                                                                        |  41 kB     00:00
cloudera-cdh5                                                                                                                                               141/141
repo id                                                        repo name                                                                                      status
base                                                           CentOS-6 - Base                                                                                6,367
cloudera-cdh5                                                  Cloudera's Distribution for Hadoop, Version 5                                                    141
extras                                                         CentOS-6 - Extras                                                                                 15
updates                                                        CentOS-6 - Updates                                                                             1,507
repolist: 8,030

$ sudo rpm --import http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
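
To confirm the repository is active, you can list the Hadoop packages it now provides:
$ yum list available 'hadoop*'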

Package Installation

  • Master nodes (hadoop-master, hadoop-master2)
$ sudo yum install hadoop-yarn-resourcemanager hadoop-hdfs-namenode hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
  • Slave nodes (hadoop-slave, hadoop-slave2, hadoop-slave3)
$ sudo yum install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
  • Client (hadoop-client)
$ sudo yum install hadoop-client 
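
As a quick sanity check, you can list the Hadoop packages installed on each node:
$ rpm -qa | grep ^hadoop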

Network Configuration

Add entries so that every node can resolve the cluster host names. (On all master, slave, and client nodes.)

/etc/hosts
192.168.122.101 hadoop-master
192.168.122.102 hadoop-master2
192.168.122.111 hadoop-slave
192.168.122.112 hadoop-slave2
192.168.122.113 hadoop-slave3
192.168.122.201 hadoop-client
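
A quick way to confirm that every host name resolves (run on each node):
$ for h in hadoop-master hadoop-master2 hadoop-slave hadoop-slave2 hadoop-slave3 hadoop-client; do getent hosts $h; done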

HDFS Configuration

  • Copy the configuration template. (On all master, slave, and client nodes.)
$ sudo cp -r /etc/hadoop/conf.empty /etc/hadoop/conf.cluster
$ sudo alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.cluster 50
$ sudo alternatives --set hadoop-conf /etc/hadoop/conf.cluster

$ sudo alternatives --display hadoop-conf
hadoop-conf - status is manual.
 link currently points to /etc/hadoop/conf.cluster
/etc/hadoop/conf.empty - priority 10
/etc/hadoop/conf.impala - priority 5
/etc/hadoop/conf.cluster - priority 50
Current `best' version is /etc/hadoop/conf.cluster.
  • Configure HDFS. (On all master, slave, and client nodes.)
/etc/hadoop/conf/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:8020</value>
  </property>
  <!-- Proxy-user settings for the JobHistoryServer belong in core-site.xml,
       where the NameNode and ResourceManager read them. -->
  <property>
    <name>hadoop.proxyuser.mapred.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.mapred.hosts</name>
    <value>*</value>
  </property>
</configuration>
/etc/hadoop/conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/var/lib/hadoop-hdfs/cache/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/var/lib/hadoop-hdfs/cache/dfs/data</value>
  </property>
</configuration>
$ sudo mkdir -p /var/lib/hadoop-hdfs/cache/dfs/name
$ sudo mkdir -p /var/lib/hadoop-hdfs/cache/dfs/data
$ sudo chown hdfs:hadoop -R /var/lib/hadoop-hdfs/cache/dfs
  • Format the NameNode. (hadoop-master only.)
$ sudo -u hdfs hdfs namenode -format
14/09/20 03:59:16 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/192.168.122.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.3.0-cdh5.1.2
STARTUP_MSG:   classpath = /etc/hadoop/conf:...(省略)...
STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on 2014-08-26T01:36Z
STARTUP_MSG:   java = 1.7.0_55
************************************************************/
14/09/20 03:59:17 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/09/20 03:59:17 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-51ba5115-4500-4b1d-b26c-b8fb9f41e03d
14/09/20 03:59:18 INFO namenode.FSNamesystem: fsLock is fair:true
14/09/20 03:59:18 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/09/20 03:59:18 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/09/20 03:59:18 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.ms is set to 0 ms.
14/09/20 03:59:18 INFO blockmanagement.BlockManager: The block deletion will start around 2014 Sep 20 03:59:18
14/09/20 03:59:18 INFO util.GSet: Computing capacity for map BlocksMap
14/09/20 03:59:18 INFO util.GSet: VM type       = 64-bit
14/09/20 03:59:18 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
14/09/20 03:59:18 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/09/20 03:59:18 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/09/20 03:59:18 INFO blockmanagement.BlockManager: defaultReplication         = 3
14/09/20 03:59:18 INFO blockmanagement.BlockManager: maxReplication             = 512
14/09/20 03:59:18 INFO blockmanagement.BlockManager: minReplication             = 1
14/09/20 03:59:18 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/09/20 03:59:18 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/09/20 03:59:18 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/09/20 03:59:18 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/09/20 03:59:18 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
14/09/20 03:59:18 INFO namenode.FSNamesystem: fsOwner             = hdfs (auth:SIMPLE)
14/09/20 03:59:18 INFO namenode.FSNamesystem: supergroup          = hadoop
14/09/20 03:59:18 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/09/20 03:59:18 INFO namenode.FSNamesystem: HA Enabled: false
14/09/20 03:59:18 INFO namenode.FSNamesystem: Append Enabled: true
14/09/20 03:59:18 INFO util.GSet: Computing capacity for map INodeMap
14/09/20 03:59:18 INFO util.GSet: VM type       = 64-bit
14/09/20 03:59:18 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
14/09/20 03:59:18 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/09/20 03:59:18 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/09/20 03:59:18 INFO util.GSet: Computing capacity for map cachedBlocks
14/09/20 03:59:18 INFO util.GSet: VM type       = 64-bit
14/09/20 03:59:18 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
14/09/20 03:59:18 INFO util.GSet: capacity      = 2^18 = 262144 entries
14/09/20 03:59:18 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/09/20 03:59:18 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/09/20 03:59:18 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/09/20 03:59:18 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/09/20 03:59:18 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/09/20 03:59:18 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/09/20 03:59:18 INFO util.GSet: VM type       = 64-bit
14/09/20 03:59:18 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
14/09/20 03:59:18 INFO util.GSet: capacity      = 2^15 = 32768 entries
14/09/20 03:59:18 INFO namenode.AclConfigFlag: ACLs enabled? false
14/09/20 03:59:18 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1157687996-10.250.0.101-1411185558593
14/09/20 03:59:18 INFO common.Storage: Storage directory /var/lib/hadoop-hdfs/cache/dfs/name has been successfully formatted.
14/09/20 03:59:19 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/09/20 03:59:19 INFO util.ExitUtil: Exiting with status 0
14/09/20 03:59:19 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/192.168.122.101
************************************************************/
  • Start HDFS

Start the NameNode. (Master node.)

$ sudo service hadoop-hdfs-namenode start

Start the DataNodes. (Slave nodes.)

$ sudo service hadoop-hdfs-datanode start
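
Once both daemons are up, you can confirm that the DataNodes have registered with the NameNode:
$ sudo -u hdfs hdfs dfsadmin -report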
  • Create the /tmp directory on HDFS. (Run on the client.)
$ sudo -u hdfs hadoop fs -mkdir /tmp
$ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
$ sudo -u hdfs hadoop fs -ls /
Found 1 items
drwxrwxrwt   - hdfs hadoop          0 2014-09-20 04:30 /tmp

MapReduce v2 (YARN) Configuration

  • Configure MapReduce v2 (YARN). (On all master, slave, and client nodes.)
/etc/hadoop/conf/mapred-site.xml
<configuration>
  <!-- Required so that jobs are submitted to YARN rather than run in the LocalJobRunner. -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop-master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop-master:19888</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>
</configuration>
/etc/hadoop/conf/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <description>List of directories to store localized files in.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///var/lib/hadoop-yarn/cache/local</value>
  </property>
  <property>
    <description>Where to store container logs.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:///var/log/hadoop-yarn/containers</value>
  </property>
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>hdfs://hadoop-master:8020/var/log/hadoop-yarn/apps</value>
  </property>

  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
        $HADOOP_CONF_DIR,
        $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
        $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
        $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
        $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
    </value>
  </property>
</configuration>
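The same conf.cluster directory must exist on every node. A minimal sketch for pushing it out from hadoop-master, assuming root SSH access to each host:
$ for h in hadoop-master2 hadoop-slave hadoop-slave2 hadoop-slave3 hadoop-client; do rsync -a /etc/hadoop/conf.cluster/ root@$h:/etc/hadoop/conf.cluster/; done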
  • Create the required local directories. (On all master, slave, and client nodes.)
$ sudo mkdir -p /var/lib/hadoop-yarn/cache/local
$ sudo chown yarn:hadoop /var/lib/hadoop-yarn/cache/local
$ sudo mkdir -p /var/log/hadoop-yarn/containers
$ sudo chown yarn:hadoop /var/log/hadoop-yarn/containers
  • Create the required directories on HDFS. (Run on the client.)
$ sudo -u hdfs hadoop fs -mkdir -p /user/history
$ sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
$ sudo -u hdfs hadoop fs -chown mapred:hadoop /user/history
$ sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
$ sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
$ sudo -u hdfs hadoop fs -ls -R /
drwxrwxrwt   - hdfs hadoop          0 2014-09-20 04:30 /tmp
drwxr-xr-x   - hdfs hadoop          0 2014-09-20 05:08 /user
drwxrwxrwt   - mapred hadoop          0 2014-09-20 05:08 /user/history
drwxr-xr-x   - hdfs   hadoop          0 2014-09-20 05:08 /var
drwxr-xr-x   - hdfs   hadoop          0 2014-09-20 05:08 /var/log
drwxr-xr-x   - yarn   mapred          0 2014-09-20 05:08 /var/log/hadoop-yarn
  • Start the ResourceManager. (Master node.)
$ sudo service hadoop-yarn-resourcemanager start
  • Start the NodeManagers. (Slave nodes.)
$ sudo service hadoop-yarn-nodemanager start
  • Start the JobHistoryServer. (hadoop-master only.)
$ sudo service hadoop-mapreduce-historyserver start
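
You can check that every NodeManager has joined the cluster (run on the client):
$ yarn node -list
The ResourceManager web UI at http://hadoop-master:8088/ shows the same information.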

Verification

  • Run a sample program. (Run on the client.)
$ sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar pi 1 300
Number of Maps  = 1
Samples per Map = 300
Wrote input for Map #0
Starting Job
14/09/20 06:33:37 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/09/20 06:33:37 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/09/20 06:33:37 INFO input.FileInputFormat: Total input paths to process : 1
14/09/20 06:33:37 INFO mapreduce.JobSubmitter: number of splits:1
14/09/20 06:33:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local936971230_0001
14/09/20 06:33:37 WARN conf.Configuration: file:/tmp/hadoop-hdfs/mapred/staging/hdfs936971230/.staging/job_local936971230_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/09/20 06:33:37 WARN conf.Configuration: file:/tmp/hadoop-hdfs/mapred/staging/hdfs936971230/.staging/job_local936971230_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/09/20 06:33:38 WARN conf.Configuration: file:/tmp/hadoop-hdfs/mapred/local/localRunner/hdfs/job_local936971230_0001/job_local936971230_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/09/20 06:33:38 WARN conf.Configuration: file:/tmp/hadoop-hdfs/mapred/local/localRunner/hdfs/job_local936971230_0001/job_local936971230_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/09/20 06:33:38 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/09/20 06:33:38 INFO mapreduce.Job: Running job: job_local936971230_0001
14/09/20 06:33:38 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/09/20 06:33:38 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/09/20 06:33:38 INFO mapred.LocalJobRunner: Waiting for map tasks
14/09/20 06:33:38 INFO mapred.LocalJobRunner: Starting task: attempt_local936971230_0001_m_000000_0
14/09/20 06:33:38 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/09/20 06:33:38 INFO mapred.MapTask: Processing split: hdfs://hadoop-master:8020/user/hdfs/QuasiMonteCarlo_1411194815372_1775447123/in/part0:0+118
14/09/20 06:33:38 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/09/20 06:33:38 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/09/20 06:33:38 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/09/20 06:33:38 INFO mapred.MapTask: soft limit at 83886080
14/09/20 06:33:38 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/09/20 06:33:38 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/09/20 06:33:38 INFO mapred.LocalJobRunner:
14/09/20 06:33:38 INFO mapred.MapTask: Starting flush of map output
14/09/20 06:33:38 INFO mapred.MapTask: Spilling map output
14/09/20 06:33:38 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600
14/09/20 06:33:38 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
14/09/20 06:33:38 INFO mapred.MapTask: Finished spill 0
14/09/20 06:33:38 INFO mapred.Task: Task:attempt_local936971230_0001_m_000000_0 is done. And is in the process of committing
14/09/20 06:33:38 INFO mapred.LocalJobRunner: map
14/09/20 06:33:38 INFO mapred.Task: Task 'attempt_local936971230_0001_m_000000_0' done.
14/09/20 06:33:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local936971230_0001_m_000000_0
14/09/20 06:33:38 INFO mapred.LocalJobRunner: map task executor complete.
14/09/20 06:33:38 INFO mapred.LocalJobRunner: Waiting for reduce tasks
14/09/20 06:33:38 INFO mapred.LocalJobRunner: Starting task: attempt_local936971230_0001_r_000000_0
14/09/20 06:33:38 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/09/20 06:33:38 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@73616964
14/09/20 06:33:38 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=652528832, maxSingleShuffleLimit=163132208, mergeThreshold=430669056, ioSortFactor=10, memToMemMergeOutputsThreshold=10
14/09/20 06:33:38 INFO reduce.EventFetcher: attempt_local936971230_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
14/09/20 06:33:38 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local936971230_0001_m_000000_0 decomp: 24 len: 28 to MEMORY
14/09/20 06:33:38 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_local936971230_0001_m_000000_0
14/09/20 06:33:38 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->24
14/09/20 06:33:38 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
14/09/20 06:33:38 INFO mapred.LocalJobRunner: 1 / 1 copied.
14/09/20 06:33:38 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
14/09/20 06:33:38 INFO mapred.Merger: Merging 1 sorted segments
14/09/20 06:33:38 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
14/09/20 06:33:38 INFO reduce.MergeManagerImpl: Merged 1 segments, 24 bytes to disk to satisfy reduce memory limit
14/09/20 06:33:38 INFO reduce.MergeManagerImpl: Merging 1 files, 28 bytes from disk
14/09/20 06:33:38 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
14/09/20 06:33:38 INFO mapred.Merger: Merging 1 sorted segments
14/09/20 06:33:38 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
14/09/20 06:33:38 INFO mapred.LocalJobRunner: 1 / 1 copied.
14/09/20 06:33:38 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
14/09/20 06:33:38 INFO mapred.Task: Task:attempt_local936971230_0001_r_000000_0 is done. And is in the process of committing
14/09/20 06:33:38 INFO mapred.LocalJobRunner: 1 / 1 copied.
14/09/20 06:33:38 INFO mapred.Task: Task attempt_local936971230_0001_r_000000_0 is allowed to commit now
14/09/20 06:33:38 INFO output.FileOutputCommitter: Saved output of task 'attempt_local936971230_0001_r_000000_0' to hdfs://hadoop-master:8020/user/hdfs/QuasiMonteCarlo_1411194815372_1775447123/out/_temporary/0/task_local936971230_0001_r_000000
14/09/20 06:33:38 INFO mapred.LocalJobRunner: reduce > reduce
14/09/20 06:33:38 INFO mapred.Task: Task 'attempt_local936971230_0001_r_000000_0' done.
14/09/20 06:33:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local936971230_0001_r_000000_0
14/09/20 06:33:38 INFO mapred.LocalJobRunner: reduce task executor complete.
14/09/20 06:33:39 INFO mapreduce.Job: Job job_local936971230_0001 running in uber mode : false
14/09/20 06:33:39 INFO mapreduce.Job:  map 100% reduce 100%
14/09/20 06:33:39 INFO mapreduce.Job: Job job_local936971230_0001 completed successfully
14/09/20 06:33:39 INFO mapreduce.Job: Counters: 38
        File System Counters
                FILE: Number of bytes read=552150
                FILE: Number of bytes written=988114
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=236
                HDFS: Number of bytes written=451
                HDFS: Number of read operations=19
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=9
        Map-Reduce Framework
                Map input records=1
                Map output records=2
                Map output bytes=18
                Map output materialized bytes=28
                Input split bytes=150
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=28
                Reduce input records=2
                Reduce output records=0
                Spilled Records=4
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=0
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=634388480
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=118
        File Output Format Counters
                Bytes Written=97
Job Finished in 2.024 seconds
Estimated value of Pi is 3.16000000000000000000
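
To submit jobs as a regular user rather than hdfs, first create that user's home directory on HDFS (the user name here is illustrative):
$ sudo -u hdfs hadoop fs -mkdir -p /user/alice
$ sudo -u hdfs hadoop fs -chown alice /user/alice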
