Use your old GPU in Ubuntu for Deep Neural Networks

Hi!

I just wanted to share some experience related to using your old GPU with TensorFlow.

Let's say you have an old computer. This computer has an old GPU, and you want to use this (relatively) old device to practice on some Deep Learning projects. But first...

Why use your GPU for Deep Learning or Deep Neural Networks (DNNs) instead of the CPU?


Well, CPUs are designed for generic computing workloads. GPUs, in contrast, are less flexible, since they are usually designed to execute the same instructions in parallel. DNNs are structured in a very uniform manner, such that at each layer of the network thousands of identical artificial neurons perform the same computation. Therefore the structure of a DNN fits quite well with the kind of computation a GPU can perform efficiently.
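To make this concrete, here is a minimal TensorFlow 1.x sketch (the matrix size and device strings are just illustrative, not a rigorous benchmark) that pins the same large matrix multiplication first to the CPU and then to the GPU. On this kind of uniform, parallel workload the GPU placement is typically much faster, even on an old card.

import time
import numpy as np
import tensorflow as tf

# A reasonably large matrix so the parallel speed-up is visible.
data = np.random.rand(2000, 2000).astype(np.float32)

def timed_matmul(device_name):
    """Run one large matmul pinned to the given device and return the elapsed time."""
    tf.reset_default_graph()
    with tf.device(device_name):
        x = tf.constant(data)
        y = tf.matmul(x, x)
    # log_device_placement prints where each op actually runs.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        start = time.time()
        sess.run(y)
        return time.time() - start

print('CPU: {:.3f}s'.format(timed_matmul('/cpu:0')))
print('GPU: {:.3f}s'.format(timed_matmul('/gpu:0')))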



GPUs have additional advantages over CPUs: they have more computational units and higher memory bandwidth. Furthermore, in applications involving image processing (e.g. Convolutional Neural Networks or GANs), GPU graphics-specific capabilities can be exploited to further speed up calculations.

The primary weakness of GPUs compared to CPUs is that data must be transferred into the GPU card by the CPU. This happens through the PCI-E bus, which is much slower than CPU or GPU memory. Another weakness is that GPU clock speeds are about a third of those of high-end CPUs, so on sequential tasks a GPU is not expected to perform comparatively well.
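As a rough sketch of that transfer cost (again just an illustration, assuming TensorFlow 1.x APIs): feeding a batch from host memory with feed_dict copies it over PCI-E on every step, while a tf.Variable resident in GPU memory is copied only once, at initialization.

import time
import numpy as np
import tensorflow as tf

data = np.random.rand(4000, 4000).astype(np.float32)

with tf.device('/gpu:0'):
    # Resident in GPU memory: transferred over PCI-E only once, at init time.
    resident = tf.Variable(data)
    resident_op = tf.matmul(resident, resident)
    # Fed from host memory: transferred over PCI-E on every sess.run call.
    fed = tf.placeholder(tf.float32, shape=data.shape)
    fed_op = tf.matmul(fed, fed)

config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    start = time.time()
    for _ in range(10):
        sess.run(fed_op, feed_dict={fed: data})
    print('with per-step transfer:    {:.3f}s'.format(time.time() - start))
    start = time.time()
    for _ in range(10):
        sess.run(resident_op)
    print('without per-step transfer: {:.3f}s'.format(time.time() - start))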


Well, this should be good enough for us.

Our test bench will be composed of:

  • An old laptop with an NVIDIA GPU of CUDA compute capability 3.0
  • Ubuntu 17
  • NVIDIA CUDA 9.1
  • TensorFlow 1.6.0, compiled from source
  • Python 3.6

But do not despair! We did it, and we'll now describe the few steps to follow.

Procedure


Step 1  Dependencies


First, be sure you have JDK 8 (or newer) installed, since Bazel needs it. If not, please do this:
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer

Do not forget to install the other dependencies we'll need:
$ sudo apt-get install pkg-config zip g++ zlib1g-dev unzip

$ sudo apt-get install python-numpy swig python-dev python-wheel git

Step 2 Install NVIDIA CUDA


Follow the steps at the NVIDIA documentation site.

Be sure you end up with a CUDA directory, probably under /usr/local

I used cuda-9.1, so the final location was: /usr/local/cuda-9.1

WARNING: Ubuntu 17 ships gcc and g++ version 7, which CUDA 9.1 does not support. You will need to point CUDA to an older, compatible compiler (for example gcc/g++ 6). One way to do this:


$ sudo apt-get install gcc-6 g++-6
$ sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
$ sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++


Step 3 Install Bazel (Bazel is an amazing tool to build software)


$ sudo apt-get update && sudo apt-get install bazel
$ sudo apt-get upgrade bazel

Note that apt-get will only find the bazel package after you add Bazel's apt repository; in any case, everything is explained on the Bazel site.

Step 4 Compile and Install Tensorflow


Yes, your suspicions are right. We'll need to get the TensorFlow source code and compile it. The key step here is to adjust the settings before compilation, providing the CUDA compute capability we need to use: 3.0 (if you are not sure of your card's compute capability, you can look it up on NVIDIA's CUDA GPUs page).

If you have any doubt, you can check the TensorFlow documentation about compiling from sources.
$ git clone --recurse-submodules https://github.com/tensorflow/tensorflow.git

or use  a specific branch if you need to:
$ git clone --recurse-submodules https://github.com/tensorflow/tensorflow.git -b <branch-name>

Run the configuration script and enter the settings.

$ cd tensorflow
$ TF_UNOFFICIAL_SETTING=1 ./configure

It is fine to accept the default value proposed for most settings, but pay attention to the compute capability setting.
Indicate the basic settings according to your configuration; for the CUDA compute capability, enter: 3.0

And start the compilation. It will take some time. Read something interesting meanwhile.
$ /usr/bin/bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
$ bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu

If everything went OK you should not see any error message, only a bunch of output lines like:
000009/000009 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]

Step 5 Install compiled Tensorflow as a Python interface


The best option is to use TensorFlow through its Python interface.

Create the pip package using bazel:
$ /usr/bin/bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

Then, install the created pip package. The exact name of the package in the /tmp directory depends on your platform; in my case it was "tensorflow-1.6.0-cp36-cp36m-linux_x86_64".
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip install /tmp/tensorflow_pkg/tensorflow-1.6.0-cp36-cp36m-linux_x86_64.whl

At last, it sounds good! All we need to do now is test the installation.

Final step: Verification


I tested with some notebooks that import tensorflow and they worked OK. You can use this code to test it in your notebooks:
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your DNN.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))

The output should be:
TensorFlow Version: 1.6.0
Default GPU Device: /gpu:0
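
If you also want to confirm which compute capability TensorFlow actually detected (it should match the 3.0 we set during Step 4), here is a small sketch using device_lib, an internal but commonly used TF 1.x module:

from tensorflow.python.client import device_lib

# GPU entries include a physical_device_desc string that mentions
# the detected compute capability, e.g. "... compute capability: 3.0".
for device in device_lib.list_local_devices():
    print(device.name, device.physical_device_desc)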

Run your notebook and check that the kernel does not die as soon as it starts. If it survives, well, we got it!



 

 Some final conclusions


Well, that's all for now. To sum up:

PROS

  • We can use an old GPU for DNN workloads. It frees the CPU from heavy computation and allows better parallelism between processes.

  • We can reuse old metal on-premises for heavy Deep Neural Network workloads if the budget for cloud computing is limited, or simply to save some money. GPU computing in the cloud is really expensive.


CONS

  • However, we need to be careful not to push the GPU temperature too far, especially when working with laptops. In most laptops the GPU cannot be replaced, since it is usually soldered to the motherboard (depending on the model, naturally, but it is a common design).


Are desktops better suited to this kind of work? We'll see in a new post, coming soon.



Cheers and happy Deep Neural Networks with your old and yet useful GPU!
