Installing CUDA 9.2, TensorFlow, Keras and PyTorch on Fedora 27 for Deep Learning

Posted at 2018-08-17

A. Installing CUDA 9.2 on Fedora 27

The important points for installing CUDA 9.2

  1. I recommend using the runfile(local) to install CUDA 9.2, because installing by both rpm(local) and rpm(network) failed for me; the graphical interface didn't work properly and I got a black screen when booting.
  2. You have to install elfutils-libelf-devel for runfile installation:
sudo dnf install elfutils-libelf-devel

The step-by-step procedures are shown below.

A-1. Burning a DVD of Fedora 27

I used the image file, Fedora-Workstation-Live-x86_64-27-1.6.iso, to burn a DVD for installing Fedora 27.

A-2. Installing Fedora 27

After booting from the DVD, I selected "Start Fedora-Workstation-Live 27 in basic graphics mode" under "Troubleshooting". This is a fairly safe way to bring up the desktop for the Fedora installer on a wide range of hardware. Install Fedora 27 to a hard drive using the installer, and make sure that your PC is connected to the internet before installation.

A-3. Installing the kernel packages of the version 4.15.x

According to Table 1 in the NVIDIA CUDA Installation Guide for Linux (https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html), kernel version 4.15.x is recommended for CUDA 9.2. Since the default kernel of Fedora 27 is 4.13.x and the upgraded one is 4.16.x, it is better to install the 4.15.x kernel packages. I used the koji builds available from https://kojipkgs.fedoraproject.org/packages/kernel/ and selected version 4.15.17. The packages were installed as follows:

sudo dnf install https://kojipkgs.fedoraproject.org/packages/kernel/4.15.17/300.fc27/x86_64/kernel-core-4.15.17-300.fc27.x86_64.rpm
sudo dnf install https://kojipkgs.fedoraproject.org/packages/kernel/4.15.17/300.fc27/x86_64/kernel-modules-4.15.17-300.fc27.x86_64.rpm
sudo dnf install https://kojipkgs.fedoraproject.org/packages/kernel/4.15.17/300.fc27/x86_64/kernel-devel-4.15.17-300.fc27.x86_64.rpm
sudo dnf install https://kojipkgs.fedoraproject.org/packages/kernel/4.15.17/300.fc27/x86_64/kernel-headers-4.15.17-300.fc27.x86_64.rpm

Check whether the initramfs file for version 4.15.17 is in /boot. If it does not exist, you can generate it with dracut:

sudo dracut --regenerate-all --force
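
If you want to confirm that the file is actually there, a quick check (assuming the standard Fedora naming scheme for initramfs images) is:

# Should list the initramfs image for the 4.15.17 kernel if it exists
ls /boot/initramfs-4.15.17-300.fc27.x86_64.img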

A-4. Installing application software packages

I utilized the group list obtained by the following command to select and install application software packages:

LANG=C dnf group list

I ran the following shell script to install the groups selected from that list. Note that I ran the script as root to avoid entering the sudo password, so that the installation is fully automatic.

#!/bin/sh
# Run as root for fully automatic installation

# Upgrade vim-minimal first; this is the only step that is not a group install
dnf -y upgrade vim-minimal

# Install the groups
for group in "3D Printing" \
   "Administration Tools" \
   "Ansible node" \
   "Audio Production" \
   "Authoring and Publishing" \
   "Books and Guides" \
   "C Development Tools and Libraries" \
   "Cloud Infrastructure" \
   "Cloud Management Tools" \
   "Compiz" \
   "Container Management" \
   "D Development Tools and Libraries" \
   "Design Suite" \
   "Development Tools" \
   "Domain Membership" \
   "Fedora Eclipse" \
   "Editors" \
   "Educational Software" \
   "Electronic Lab" \
   "Engineering and Scientific" \
   "FreeIPA Server" \
   "Games and Entertainment" \
   "Headless Management" \
   "MATE Applications" \
   "Medical Applications" \
   "Milkymist" \
   "Network Servers" \
   "Office/Productivity" \
   "Python Classroom" \
   "Python Science" \
   "Robotics" \
   "RPM Development Tools" \
   "Security Lab" \
   "Sound and Video" \
   "System Tools" \
   "Text-based Internet" \
   "Window Managers"
do
   dnf -y group install "${group}"
done
exit 0

Of course, you don't need to install all the groups shown in the script. However, "C Development Tools and Libraries" should be installed, because gcc is indispensable for installing CUDA. In addition, I recommend installing "Python Science", which includes useful packages such as matplotlib, numpy and pandas that are used with Keras and PyTorch.
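
If you only want the essentials, a minimal sketch covering just these two groups (skipping everything else in the script above) would be:

# Minimal set of groups: gcc for CUDA, plus the Python scientific stack
sudo dnf -y group install "C Development Tools and Libraries" "Python Science"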

A-5. Installing the development packages for CUDA 9.2

The following development package is required to install CUDA 9.2 by runfile(local):

sudo dnf install elfutils-libelf-devel

The following packages should be installed to compile all the CUDA samples:

sudo dnf install mesa-libGLU-devel
sudo dnf install libXi-devel
sudo dnf install libXmu-devel
sudo dnf install freeglut-devel

A-6. Pre-installation actions for CUDA 9.2

I followed the instructions shown in the NVIDIA CUDA Installation Guide for Linux: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html.

A-6.1. Verify you have a CUDA-Capable GPU

lspci | grep -i nvidia

Two graphics cards, Quadro P4000 and GeForce GTX 1080 Ti, were listed on my machine.

15:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
15:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
2d:00.0 VGA compatible controller: NVIDIA Corporation GP104GL [Quadro P4000] (rev a1)
2d:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)

A-6.2. Verify you have a supported version of Linux

uname -m && cat /etc/*release

I got the following output and verified the Linux version:

x86_64
Fedora release 27 (Twenty Seven)
NAME=Fedora
VERSION="27 (Workstation Edition)"
ID=fedora
VERSION_ID=27
PRETTY_NAME="Fedora 27 (Workstation Edition)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:27"
HOME_URL="https://fedoraproject.org/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=27
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=27
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
Fedora release 27 (Twenty Seven)
ROS fuerte (Fuerte Turtle)
Fedora release 27 (Twenty Seven)

A-6.3. Verify the system has gcc installed

gcc --version

The output was:

gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-6)

The version is consistent with the one shown in Table 1 of the NVIDIA CUDA Installation Guide for Linux.

A-6.4. Verify the system has the correct kernel headers and development packages installed

First, I checked the version of the running kernel:

uname -r

The version was:

4.15.17-300.fc27.x86_64

Then, I confirmed the versions of the headers and development packages:

sudo dnf info kernel-headers
sudo dnf info kernel-devel

If the versions of the installed headers and development packages are not consistent with the version of the running kernel, run the following commands:

sudo dnf install kernel-headers-$(uname -r)
sudo dnf install kernel-devel-$(uname -r)

A-6.5. Choose an installation method

I chose the runfile(local), because installing CUDA 9.2 by both rpm(local) and rpm(network) failed; the graphical interface didn't work properly and I got a black screen when booting.

A-6.6 Download the NVIDIA CUDA Toolkit

The runfile(local), cuda_9.2.148_396.37_linux.run, was downloaded from https://developer.nvidia.com/cuda-downloads. The checksum of the file was verified as instructed:

md5sum cuda_9.2.148_396.37_linux.run

I got the same checksum shown in http://developer.download.nvidia.com/compute/cuda/9.2/Prod2/docs/sidebar/md5sum-c.txt.

8303cdf46904e6dea8d5d641b0b46f0d  cuda_9.2.148_396.37_linux.run

A-6.7 Handle conflicting installation methods

Since there was no previous installation, I skipped this step.
If you have any previous installations, check Table 2 and Table 3 of the NVIDIA CUDA Installation Guide for Linux.
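
If a previous runfile installation does exist, the removal commands described in the guide look roughly like the following (a sketch; the exact paths depend on the version that was installed):

# Remove a previous CUDA toolkit installed by runfile (path is version dependent)
sudo /usr/local/cuda-9.2/bin/uninstall_cuda_9.2.pl
# Remove a previous NVIDIA driver installed by runfile
sudo /usr/bin/nvidia-uninstall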

A-7. Runfile installation of CUDA 9.2

A-7.1. Disabling Nouveau

The Nouveau drivers are loaded if the following command prints anything:

lsmod | grep nouveau

If the Nouveau drivers are loaded, create a file at

/usr/lib/modprobe.d/blacklist-nouveau.conf 

with the following contents:

blacklist nouveau
options nouveau modeset=0
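
One way to create this file non-interactively (a sketch using tee; any editor works just as well) is:

# Write the blacklist entries to the modprobe configuration
cat <<'EOF' | sudo tee /usr/lib/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF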

Then, regenerate the kernel initramfs:

sudo dracut --force

A-7.2. Reboot into text mode (runlevel 3).

I added

systemd.unit=multi-user.target

at the end of the system's kernel boot parameters to reboot into text mode.
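
If you prefer not to edit the boot parameters by hand, an alternative (not the method I used) is to switch the default systemd target temporarily:

# Boot into text mode on the next reboot
sudo systemctl set-default multi-user.target
sudo reboot
# After the driver is installed, restore the graphical target:
# sudo systemctl set-default graphical.target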

A-7.3. Verify that the Nouveau drivers are not loaded

Check the output of the following command; it should print nothing:

lsmod | grep nouveau

A-7.4. Run the installer

sudo sh cuda_9.2.148_396.37_linux.run

The installer prompted as follows:

Do you accept the previously read EULA?
accept/decline/quit: accept
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 396.37?
(y)es/(n)o/(q)uit: y
Do you want to install the OpenGL libraries?
(y)es/(n)o/(q)uit [ default is yes ]: 
Do you want to run nvidia-xconfig?
This will update the system X configuration file so that the NVIDIA X driver
is used. The pre-existing X configuration file will be backed up.
This option should not be used on systems that require a custom
X configuration, such as systems with multiple GPU vendors.
(y)es/(n)o/(q)uit [ default is no ]: 
Install the CUDA 9.2 Toolkit?
(y)es/(n)o/(q)uit: y
Enter Toolkit Location
 [ default is /usr/local/cuda-9.2 ]:
Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: y
Install the CUDA 9.2 Samples?
(y)es/(n)o/(q)uit: y
Enter CUDA Samples Location
 [ default is /home/ichimura ]

Note that if the GPU used for display is not an NVIDIA GPU, the NVIDIA OpenGL libraries should not be installed.

While installing the toolkit, you may encounter an error message such as:

The driver installation is unable to locate the kernel source.

I strongly recommend checking the log file created in /tmp, because the error message you see on the screen may not point to the real cause. For example, adding the following option to the runfile:

--kernel-source-path /usr/src/kernels/4.15.17-300.fc27.x86_64

did not eliminate the error shown above. In my case, the actual error that appeared in the log file was:

Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y

and the cause of the error was the missing development package elfutils-libelf-devel. I resolved the error by installing it:

sudo dnf install elfutils-libelf-devel

A-7.5. Reboot the system to reload the graphical interface.

Reboot the system without the kernel boot parameter for the text mode.

A-7.6. Verify the device nodes are created properly.

ls /dev/nvidia*

The number of device nodes was equal to the number of GPUs; nvidia0 and nvidia1 were listed.
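
As an additional check that is not part of the official guide, nvidia-smi should now report both GPUs and the driver version:

# Lists the GPUs, driver version and current utilization
nvidia-smi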

A-8. Post-installation actions for CUDA 9.2

A-8.1. Environment setup

I added

/usr/local/cuda/bin

to PATH in .bash_profile as follows:

# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin:.:/usr/local/cuda/bin
export PATH

Note that I installed a symbolic link at /usr/local/cuda.

For the path for the CUDA libraries, I created a file at

/etc/ld.so.conf.d/cuda9.2-x86_64.conf

with the following content:

/usr/local/cuda/lib64

Then, I ran the following command:

sudo ldconfig
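
To make sure the compiler and libraries are picked up from the new paths, a couple of quick checks (a sketch; run them in a fresh shell so the updated PATH is in effect):

# The CUDA compiler should report release 9.2
nvcc --version
# The CUDA runtime library should appear in the linker cache
ldconfig -p | grep libcudart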

A-8.2. Verify the installation

A-8.2.1. Verify the driver version

The driver version can be found by:

cat /proc/driver/nvidia/version

I got the output:

NVRM version: NVIDIA UNIX x86_64 Kernel Module  396.37  Tue Jun 12 13:47:27 PDT 2018
GCC version:  gcc version 7.3.1 20180712 (Red Hat 7.3.1-6) (GCC)

The version of the driver, 396.37, was consistent with the runfile cuda_9.2.148_396.37_linux.run.

A-8.2.2. Compiling the samples

I compiled the CUDA samples by changing to ~/NVIDIA_CUDA-9.2_Samples and typing make.
I guess you can take a cup of coffee or tea while the sample binaries are being compiled.
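
If the build takes too long, a parallel build is worth trying (a sketch I have not timed; -j with the samples' top-level Makefile generally works):

cd ~/NVIDIA_CUDA-9.2_Samples
make -j"$(nproc)"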

A-8.2.3. Running the binaries

In ~/NVIDIA_CUDA-9.2_Samples/bin/x86_64/linux/release/, I ran deviceQuery and bandwidthTest to confirm that the CUDA-capable devices work properly. In addition, I ran smokeParticles and volumeFiltering to check whether the graphics-related software packages were installed properly.
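
For reference, these checks boil down to the following commands, run from the release directory mentioned above:

cd ~/NVIDIA_CUDA-9.2_Samples/bin/x86_64/linux/release
./deviceQuery
./bandwidthTest
# The graphics samples require a working X session
./smokeParticles
./volumeFiltering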

B. Installing TensorFlow and Keras

I installed TensorFlow from source, because a pip package consistent with CUDA 9.2 and cuDNN 7.2.1 was required. As Keras modules such as keras_applications were necessary to build the pip package for tensorflow-1.10.0rc1, I installed Keras along with TensorFlow.

In the installation process, I basically followed the instructions in https://www.tensorflow.org/install/install_sources.

B-1. Installing cuDNN

The cuDNN library is necessary for TensorFlow. You can download it from https://developer.nvidia.com/cudnn. Note that membership in the NVIDIA Developer Program is required for downloading.

I downloaded "cuDNN v7.2.1 Library for Linux", cudnn-9.2-linux-x64-v7.2.1.38.tgz. After unpacking the tar file, I copied the include file and libraries to the CUDA 9.2 directory:

sudo cp cudnn.h /usr/local/cuda/include/
sudo cp libcudnn.so.7.2.1 /usr/local/cuda/lib64/
sudo cp libcudnn_static.a /usr/local/cuda/lib64/
sudo cp -R libcudnn.so /usr/local/cuda/lib64/
sudo cp -R libcudnn.so.7 /usr/local/cuda/lib64/
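
The cuDNN installation guide also suggests making the copied files readable by all users, which can be added here as an optional step:

# Make the cuDNN header and libraries world-readable
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*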

In addition, I downloaded the code samples to verify that cuDNN had been installed properly. I selected "cuDNN v7.2.1 Code Samples and User Guide for Ubuntu16.04 (Deb)", libcudnn7-doc_7.2.1.38-1+cuda9.2_amd64.deb, because there was no package for Fedora.

I installed alien to unpack the deb file.

sudo dnf install alien

Using alien, I converted the deb file to a tar archive:

sudo alien -t libcudnn7-doc_7.2.1.38-1+cuda9.2_amd64.deb 

I obtained libcudnn7-doc-7.2.1.38.tgz and unpacked it. I typed make in ./usr/src/cudnn_samples_v7/mnistCUDNN. The mnistCUDNN binary compiled successfully, which showed that cuDNN had been installed properly. Then, I ran mnistCUDNN to check that cuDNN worked by classifying the digits 1, 3 and 5.
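
In command form, the verification steps described above are (paths are relative to where the tar archive was unpacked):

cd ./usr/src/cudnn_samples_v7/mnistCUDNN
make
./mnistCUDNN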

B-2. Installing NCCL

The NCCL library is necessary for TensorFlow. You can download it from https://developer.nvidia.com/nccl. I downloaded "NCCL 2.2.13 O/S agnostic and CUDA 9.2", nccl_2.2.13-1+cuda9.2_x86_64.txz. I unpacked the txz file by:

xz -d nccl_2.2.13-1+cuda9.2_x86_64.txz
tar xvf nccl_2.2.13-1+cuda9.2_x86_64.tar

I made the directories /usr/local/cuda/nccl/include and /usr/local/cuda/nccl/lib and copied the include file and libraries:

sudo mkdir -p /usr/local/cuda/nccl/include /usr/local/cuda/nccl/lib
sudo cp nccl.h /usr/local/cuda/nccl/include/
sudo cp libnccl.so.2.2.13 /usr/local/cuda/nccl/lib/
sudo cp libnccl_static.a /usr/local/cuda/nccl/lib/
sudo cp -R libnccl.so /usr/local/cuda/nccl/lib/
sudo cp -R libnccl.so.2 /usr/local/cuda/nccl/lib/

B-3. Making the path for libcupti

I found libcupti.so at /usr/local/cuda/extras/CUPTI/lib64. I added the path for the library to /etc/ld.so.conf.d/cuda9.2-x86_64.conf and ran sudo ldconfig.
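
In command form, this step looks like the following (a sketch; it appends to the configuration file created in A-8.1):

echo "/usr/local/cuda/extras/CUPTI/lib64" | sudo tee -a /etc/ld.so.conf.d/cuda9.2-x86_64.conf
sudo ldconfig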

B-4. Clone the TensorFlow repository

I made a working directory ~/CUDA9.2 and cloned the latest TensorFlow repository into that directory:

git clone https://github.com/tensorflow/tensorflow

B-5. Prepare environment for Linux

B-5.1. Installing Bazel

First, I checked if the copr plugin for dnf was installed:

sudo dnf info dnf-plugins-core

Then, I installed bazel using the package maintained by Vincent Batts @vbatts on Fedora COPR.

sudo dnf copr enable vbatts/bazel
sudo dnf install bazel

B-5.2. Installing TensorFlow Python dependencies including Keras

I checked the python packages as follows:

sudo dnf info python3-numpy
sudo dnf info python3-devel
sudo dnf info python3-pip
sudo dnf info python3-wheel

Note that I used python3. Since numpy, devel and pip had already been installed, I installed only wheel.
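
For completeness, the remaining package can be installed with:

sudo dnf install python3-wheel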

As building tensorflow-1.10.0rc1 required the Keras modules, I installed Keras:

sudo pip3 install Keras

B-6. Configure the installation

I typed ./configure in ~/CUDA9.2/tensorflow. The script prompted as follows:

Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3

Found possible Python library paths:
  /usr/local/lib64/python3.6/site-packages
  /usr/lib/python3.6/site-packages
  /usr/lib64/python3.6/site-packages
  /usr/local/lib/python3.6/site-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib64/python3.6/site-packages]
/usr/lib64/python3.6/site-packages

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: 
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: 
Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: 
Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: 
Amazon AWS Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: 
Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: 
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: 
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: 
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: 
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 9.2

Please specify the location where CUDA 9.2 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.2.1

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 

Do you wish to build TensorFlow with TensorRT support? [y/N]: 
No TensorRT support will be enabled for TensorFlow.

Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 

Please specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:/usr/local/cuda/nccl

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 6.1,6.1]: 

Do you want to use clang as CUDA compiler? [y/N]: 
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/lib64/ccache/gcc]: 

Do you wish to build TensorFlow with MPI support? [y/N]: 
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: 

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: 
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
Configuration finished

The answers that differed from the default settings were:

Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3

Please input the desired Python library path to use.  Default is [/usr/local/lib64/python3.6/site-packages]
/usr/lib64/python3.6/site-packages

Do you wish to build TensorFlow with CUDA support? [y/N]: y

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 9.2

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.2.1

Please specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:/usr/local/cuda/nccl

B-7. Build the pip package

A pip package for TensorFlow with GPU support was built by:

sudo bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

Building the pip package took about an hour. The bazel build command generated a script named build_pip_package; running it as follows built a .whl file in the /tmp/tensorflow_pkg directory:

sudo bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

B-8. Install the pip package

sudo pip3 install /tmp/tensorflow_pkg/tensorflow-1.10.0rc1-cp36-cp36m-linux-x86_64.whl

B-9. Validate the installation

I changed to a directory other than ~/CUDA9.2/tensorflow, the tensorflow subdirectory from which I had invoked the configure command.

There, I invoked python3:

python3

Then, I entered the following short program inside the python interactive shell:

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

The system output the following, which showed that I was ready to begin writing TensorFlow programs:

b'Hello, TensorFlow!'
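
As an extra check beyond the hello-world test, you can confirm that TensorFlow actually sees the GPUs by listing the local devices; the output should include one GPU entry per card:

python3 -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"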

I also validated the installation of TensorFlow and Keras using a Jupyter notebook. I installed Jupyter:

sudo pip3 install jupyter

and started the jupyter notebook:

jupyter notebook

I copied the program at https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py and pasted it into a new Python 3 notebook. Running the program produced the following output, which validated the installation:

Using TensorFlow backend.

x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
60000/60000 [==============================] - 7s 112us/step - loss: 0.2778 - acc: 0.9138 - val_loss: 0.0550 - val_acc: 0.9831
Epoch 2/12
60000/60000 [==============================] - 4s 69us/step - loss: 0.0890 - acc: 0.9734 - val_loss: 0.0468 - val_acc: 0.9843
Epoch 3/12
60000/60000 [==============================] - 4s 69us/step - loss: 0.0664 - acc: 0.9805 - val_loss: 0.0314 - val_acc: 0.9892
Epoch 4/12
60000/60000 [==============================] - 4s 69us/step - loss: 0.0543 - acc: 0.9835 - val_loss: 0.0310 - val_acc: 0.9895
Epoch 5/12
60000/60000 [==============================] - 4s 69us/step - loss: 0.0466 - acc: 0.9855 - val_loss: 0.0322 - val_acc: 0.9893
Epoch 6/12
60000/60000 [==============================] - 4s 70us/step - loss: 0.0399 - acc: 0.9875 - val_loss: 0.0308 - val_acc: 0.9914
Epoch 7/12
60000/60000 [==============================] - 4s 70us/step - loss: 0.0372 - acc: 0.9886 - val_loss: 0.0284 - val_acc: 0.9911
Epoch 8/12
60000/60000 [==============================] - 4s 70us/step - loss: 0.0329 - acc: 0.9900 - val_loss: 0.0287 - val_acc: 0.9904
Epoch 9/12
60000/60000 [==============================] - 4s 71us/step - loss: 0.0318 - acc: 0.9900 - val_loss: 0.0282 - val_acc: 0.9917
Epoch 10/12
60000/60000 [==============================] - 4s 71us/step - loss: 0.0267 - acc: 0.9917 - val_loss: 0.0253 - val_acc: 0.9922
Epoch 11/12
60000/60000 [==============================] - 4s 70us/step - loss: 0.0265 - acc: 0.9918 - val_loss: 0.0276 - val_acc: 0.9915
Epoch 12/12
60000/60000 [==============================] - 4s 71us/step - loss: 0.0247 - acc: 0.9923 - val_loss: 0.0296 - val_acc: 0.9923
Test loss: 0.0296135194057
Test accuracy: 0.9923

C. Installing PyTorch

Installing PyTorch is simple: go to https://pytorch.org/ and select the OS, package manager, Python version and CUDA version you are using. I selected the following:

OS: Linux
Package Manager: pip
Python: 3.6
CUDA: 9.2

You can check the Python version with python3 --version. In my case, I got the following:

Python 3.6.6

After making the selection, the page shows the commands to install PyTorch. I got:

pip3 install http://download.pytorch.org/whl/cu92/torch-0.4.1-cp36-cp36m-linux_x86_64.whl 
pip3 install torchvision
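
To confirm that PyTorch can see the GPUs (a quick check that is not part of the official instructions), the following one-liner is useful; it should print True and the number of GPUs:

python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"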

That's it. Please enjoy writing programs using CUDA, TensorFlow, Keras and PyTorch on Fedora 27!
