
TensorFlow — The first week

Maximiliano Tabacman
Published in mercap-blog
8 min read · Sep 2, 2019

As part of my personal assignments at Mercap, I chose to review the TensorFlow tutorials.

The objective was to start learning about its features and current capabilities, to consider how to integrate it best into our Smalltalk products.

This post is a summary of the lessons learned during my first week of interacting with the tutorials, APIs, and projects that serendipitously became entangled with my own.

What is TensorFlow?

Before explaining what I did, let’s talk about the relevance of the task.

TensorFlow is a standard when it comes to Machine Learning. It is managed by Google, used by most tech companies, and has helped breathe new life into the development of Artificial Intelligence systems.

But the use of TensorFlow for Machine Learning is just one way to benefit from its capabilities. As we will see, I worked with it for a week without even reaching its famed deep learning features.

TensorFlow is, basically, a library for managing tensors. A tensor is just a generalization of a matrix to N dimensions. A single number (scalar) is a 0-dimensional tensor. An array/vector is a 1-dimensional tensor. A matrix of numbers is a 2-dimensional tensor. A matrix whose values are themselves vectors is a 3-dimensional tensor. And so on.
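The idea can be sketched in plain Python without TensorFlow at all, using nested lists (the rank helper below is purely illustrative, not part of any TensorFlow API):

```python
def rank(tensor):
    """Count the number of dimensions of a nested-list 'tensor'."""
    dimensions = 0
    while isinstance(tensor, list):
        dimensions += 1
        tensor = tensor[0]
    return dimensions

scalar = 5                          # 0-dimensional tensor
vector = [1, 2, 3]                  # 1-dimensional tensor
matrix = [[1, 2], [3, 4]]           # 2-dimensional tensor
cube = [[[1], [2]], [[3], [4]]]     # 3-dimensional tensor

print(rank(scalar), rank(vector), rank(matrix), rank(cube))  # prints: 0 1 2 3
```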

The default interface

Today, TensorFlow is commonly used through Python. Most tutorials online are written using the Python API.

This meant that the best way to start was to install Python and then run the Keras tutorial.

The only issue with the steps was that the Basic Classification tutorial requires matplotlib but does not indicate it should be installed. That can be easily done by executing: pip install matplotlib.

On to Smalltalk

After completing the Text, Regression and Over/Under fitting tutorials successfully, I was ready to jump out of the Python environment, and on to Smalltalk.

The original idea was that at this point in the training I would move over to Pharo and learn how to call similar TensorFlow features from within Smalltalk, by using the PolyMath bindings.

To my surprise, there was another project which was much closer to my goals: TensorFlow for VASmalltalk. This is a project developed by Gerardo Richarte, with the assistance of Mariano Martinez Peck from Instantiations. This is in turn a port/improvement over Gerardo’s original work in Cuis Smalltalk, done in collaboration with Javier Burroni. For the VAST version, Instantiations also provided the development of FFI calls and added support for returning structs by value in the FFI Framework and Virtual Machine.

I’ve been using VAST as a development environment at Mercap for more than 16 years, so it was a great opportunity to study how to use TensorFlow and eventually connect it to our products.

A little help from my friends

I could not have completed the steps that follow, or committed all the tests that I wrote, without the assistance of marianopeck and gerasdf in the #machine-learning channel of the Buenos Aires Smalltalk Slack.

Thank you both for the time you dedicated to helping my research, and for the knowledge you shared.

Follow my steps

To begin working with TensorFlow in VAST, you need the latest version, which allows connecting to the required libraries (DLLs).

Since the latest release (VA Smalltalk v9.2 ECAP 2 Pre-Release, Released 2019.06.19) had some failing tests, I obtained the unreleased ECAP 3 for my tests. Keep this in mind if you are trying to reproduce the research mentioned here.

I knew I was going to work back and forth on different machines, some of them virtual, so I stayed with the CPU implementation of the TensorFlow C++ library. Adapting the tests I ended up implementing to run on a GPU remains future work for now.

Windows

In order to make the C++ library visible from within VAST, I edited the FULL_PATH_TO_VAST_DIRECTORY\image64\abt.ini configuration file, found the [PlatformLibrary Name Mappings] section, and added the line TENSORFLOW_LIB=FULL_PATH_TO_DIRECTORY_WITH_DOWNLOADED_LIBRARY\tensorflow.dll.
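For reference, the edited section of abt.ini ends up looking like this (the paths are placeholders for your own installation, and the section may already contain other mappings, in which case the line is simply added to them):

```ini
[PlatformLibrary Name Mappings]
TENSORFLOW_LIB=FULL_PATH_TO_DIRECTORY_WITH_DOWNLOADED_LIBRARY\tensorflow.dll
```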

This applies to both Windows 7 and Windows 10. Afterwards I also tried using the Linux version, which you can read about in the next section.

Important: the ini file must be edited after your first use of VAST, otherwise some “first run” scripts will overwrite your change. So, start VAST, save the image, close, edit the ini file, and you are ready to go!

After obtaining the latest commit of the tensorflow-vast Repository, I imported the content of the library at REPOSITORY_LOCATION\envy\TensorFlow.dat. Although I started when it was at version 0.45, as of this writing the version is 0.51 and all tests are passing.

To get the Windows version working, I also needed to install a Visual C++ Redistributable. The one I chose, which worked on the first try, was the Microsoft Visual C++ Redistributable for Visual Studio 2017.

After these steps, and once I imported the 0.51 version of the TensorFlow Configuration Map, I had all expected tests passing!

Linux

While waiting for the Configuration Map with passing tests, I also dabbled in a Xubuntu installation of VAST.

I downloaded the TensorFlow Linux CPU library. All the contents were uncompressed to /usr/local (the libraries do not seem to use a subdirectory, while the include files do). I followed the Linux installation steps by executing sudo ldconfig, and created the sample hello_tf.c file suggested there. I was able to compile and execute this file as suggested, which confirmed the installation was working.

Before I could use VAST on Linux, some prerequisites needed to be installed, which I did by executing sudo apt-get install libxm4 xterm.

As was the case in Windows, the FULL_PATH_TO_VAST_DIRECTORY/image64/abt.ini had to be edited, in this case with: TENSORFLOW_LIB=/usr/local/lib/tensorflow.so. Remember that this change must be done after having started VAST at least once.

Now I was ready to work on either OS with the same results.

Image Recognition

As of this writing, Mariano has published two posts with examples of how to do Image Recognition in VAST using TensorFlow. The first one describes the motivation, the installation, and an example of how to get results in text form. The second one loads a pre-trained neural network and shows the names of objects in an image, including bounding boxes to indicate where they are located.

Following the instructions in the post, I first copied the examples directory to my VAST installation.

The post has an outdated script, because the imageSize attribute of LabelImage must now be an extent and not a number. Since all the images used are square, this just means that x becomes x@x.

I got all of the expected results from the first set of examples.

From the second post, I was only successful in executing the first script. The ones that follow seem to require downloading additional files not provided in the tensorflow-vast Repository.

Doing things my way

At Mercap, we are always trying to embed our knowledge into code. Our philosophy is that once we learn something, we write it so that others can benefit from our knowledge, and we ourselves can remember what we have learned.

The idea was to replicate as much as possible of the TensorFlow official tutorials, trying to learn the equivalent scripts in Smalltalk, and creating classes and messages to make it as easy as possible for others to learn about the wrapper.

Since the Keras API is not currently available in the Smalltalk wrapper, I focused on the Eager execution tutorial which seemed to teach about the basics of using TensorFlow without a higher level interface.

One of the most important lessons from this week of interacting with TensorFlow was that the Python API is more than just TensorFlow. It adds a lot of integration with Python and NumPy, and offers many ways of using the features of the library. I did not have that. VAST interacts through a library that uses the basic TensorFlow C++ routines, in the same way that Python does internally. So we are one level closer to the true implementation of TensorFlow, but that means we will need to provide our own abstractions, support objects, and messages.

To complicate matters further, the documentation for the C++ API is not as complete as the one for Python, and we even found deprecations that are not listed anywhere, not to mention the fact that there are no official examples.

Some suggested tools

If you intend to follow these steps and use VAST, or just use VAST in general, I strongly suggest installing two of the available features: ST: ENVY/QA, which allows formatting the source code, and ST: Refactoring Browser, which will allow you to find references to instance and class variables from within any browser in the IDE.

Everything is an Operation in a Graph

I started reading through the first Python examples. You’ll notice there is no mention of graphs, layers, networks, or anything of the sort in there. The current interaction with the C++ API accessible from VAST does not include the features needed for Eager Execution, so for my research during the week most operations had to be done in the context of a graph; even the sum of two numbers required using the concept of operations. 2 + 3, when using TensorFlow without Eager Execution, can be achieved using a graph whose output is an Add operation with two inputs, which are in turn the outputs of operations that return the constants 2 and 3 respectively. Also, 2 and 3 must be declared as 0-dimensional tensors.
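This graph-based style can be sketched in plain Python (the Op, constant, and add names below are illustrative, not actual TensorFlow API): each operation only records its inputs, and nothing is computed until the output of the graph is run.

```python
class Op:
    """A node in a computation graph: a function plus its input operations."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def run(self):
        # Evaluate all inputs recursively, then apply this operation.
        return self.fn(*(op.run() for op in self.inputs))

def constant(value):
    # A constant is an operation with no inputs that outputs its value.
    return Op(lambda: value)

def add(a, b):
    # Add is an operation whose inputs are the outputs of two other operations.
    return Op(lambda x, y: x + y, a, b)

# 2 + 3 expressed as a graph: an Add operation fed by two constant operations.
graph_output = add(constant(2), constant(3))
print(graph_output.run())  # prints 5
```

Nothing happens when the graph is built; the sum is only produced when run is sent to the output node, which mirrors how a TensorFlow session executes a graph.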

To better learn how to use these features, and the best way to create a compact and simple way of interacting with the API, I created a TensorFlow Environment (represented in the class TensorFlowEnvironment), which you can see in the environment branch of the tensorflow-vast project. In that branch there is a file called REPOSITORY_LOCATION\envy\Environment.dat which contains a Configuration Map with the code generated during the learning process. Since it was done using Test Driven Development, there are also tests for the different messages implemented.

To make things easier, some extensions were created in Number, TFTensor, and TFGraph that made the objects more polymorphic and simpler to use. Some of them have now been integrated into the main branch of the project.

Where to look for answers

One of the most frustrating parts of this research so far has been the lack of documentation and examples for the C++ API. The best tool at our disposal is the Python code on GitHub, since it seems to use the C++ API internally.

To give you an idea, the Custom training: basics tutorial mentions using the function random_normal, which seems to relate to the RandomNormal function in the C++ API (to find that, search for ‘RandomNormal’ with your browser on the main page of the C++ API documentation). But when calling that function, the API responded that there was no such name. Long story short, the function is now called RandomStandardNormal. To find this, go to the Python documentation for the function and click on the View source on GitHub button, which will take you here. From there you’ll see a suspicious call to ops.NotDifferentiable("RandomStandardNormal"). And there you have it. Thanks again to gerasdf for finding out about this.

The current state of things

As of this writing, you can find enough protocol to run some of the examples in the Eager execution basics, Automatic differentiation and gradient tape and Custom training: basics tutorials.

You’ll find that the most important abstraction is that you should create an environment and ask it to calculate things, which will always output a tensor.

For example, TensorFlowEnvironmentTest>>#testAddSquares shows that to do a tensor computation of 2^2+3^2 we write:

TensorFlowEnvironment new calculate: [:calculator |
	(calculator square: 2) + (calculator square: 3)]

or also:

TensorFlowEnvironment new calculate: [:calculator |
	(calculator constant: 2) squared + (calculator constant: 3) squared]

There are certainly more improvements to be made, as we continue exploring the power of TensorFlow and the ease with which we can now integrate it into our Smalltalk developments.
