TensorFlow has quickly become one of the most ubiquitous deep learning libraries in the biz, which is particularly impressive when you consider that most libraries stay specialized to one or two sectors. It's small wonder you've been hearing so much about it lately.
Made by the smarty pants who make up the aptly named Google Brain team, it was initially pegged for internal use only, but it was later released as open-source software and subsequently improved upon over numerous iterations.
But you’re here today because you’re wondering exactly what TensorFlow is.
More specifically, you’re wondering if it’s a compiler, right? Well, let’s check out a few definitions to find the answers you’re looking for.
What Is A Compiler?
Before we can establish if TensorFlow is a compiler, we need to understand what compilers are.
You can think of a compiler as a translator: not one that translates between two human languages, but one that translates a human language into something a computer can act on.
Why is this necessary? Well, machines may be pretty advanced these days, but where complex language is concerned, we organic lifeforms still have a distinct edge.
For coding, we use what are referred to as high-level programming languages, of which there are many (Python, JavaScript, Delphi, Perl, etc.).
Although to the untrained eye these languages look like the musings of a scatting beat-poet robot, they're actually built around human-readable words and structure, which is exactly why computers can't run them directly.
What computers do understand is machine code: low-level binary instructions. So, if we want to execute code expressed in an elegant high-level programming language, we need to translate it into a form the machine can actually run.
This is, in essence, what a compiler does. It takes our sophisticated language and translates it into something that isn't necessarily rudimentary, but is understandable at a machine level.
Now let’s take a look at what TensorFlow is and see if the definitions line up.
What Is TensorFlow?
TensorFlow is a deep learning library, meaning it’s not technically considered a compiler, although compilation is an important aspect of what it does. So, the question then becomes, what’s a deep learning library?
Well, you know how most game studios will design their titles around game engines that already exist in order to save resources and get their products out on the shelves as soon as possible?
Well, deep learning libraries are kind of the machine learning equivalent of game engines.
A deep learning library is a repository of functions and modules created by expert coders that you can pull into your own program, saving you from having to do the legwork yourself. A programming shortcut, if you will.
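To make that a little more concrete, here's a minimal sketch of what "pulling from the library" looks like in practice, using TensorFlow's bundled tf.keras API; the layer sizes and optimizer choice are arbitrary, purely for illustration:

```python
import tensorflow as tf  # the library supplies all the building blocks

# A tiny two-layer network assembled entirely from pre-built pieces.
# The layers, the activation function, the optimizer and the loss are
# all pulled from the library rather than written by hand.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()  # prints the architecture the library assembled for us
```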
Now, there are a plethora of different forms of AI, but the one TensorFlow is optimized for is the DNN (Deep Neural Network), a machine learning approach loosely modeled on the general functioning of the human brain.
TensorFlow can also be of use in traditional machine learning, but DNNs are most certainly its bread and butter.
What Language Is TensorFlow Written In?
The functions and modules you call as a TensorFlow user are written primarily in Python, while the performance-critical core underneath is implemented in C++ and CUDA.
As mentioned earlier, Python is a high-level programming language, so the computations you describe with it aren't something the machine can execute in their raw form, which is why a compiler is required somewhere along the line to turn your model into code the hardware can run.
However, you're not expected to dig around for a suitable compiler that plays nicely with the TensorFlow format, especially considering TensorFlow does a lot of things differently and pre-existing, general-purpose compilers aren't really up to the task.
The compiler that was tailor-made for TensorFlow is called XLA (Accelerated Linear Algebra).
Capable of speeding up TensorFlow models without the need for source code alterations, it's one of the more sophisticated domain-specific compilers around, and it has helped elevate TensorFlow to its lofty position in the tech world today.
Does TensorFlow Arrive With An Integrated Compiler?
All newer TensorFlow iterations arrive with XLA baked into the framework — Hooray!
Even though TensorFlow deals with a vast amount of numerical calculation, XLA keeps everything nice and snappy, giving it an edge over competing libraries such as PyTorch. (Keras, for what it's worth, isn't really a rival: it ships with TensorFlow as its high-level API.)
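To give a rough, hedged illustration (the function and tensor shapes below are invented for the example), recent TensorFlow 2.x releases let you opt any piece of computation into XLA with a single flag on tf.function:

```python
import tensorflow as tf

# Setting jit_compile=True asks TensorFlow to hand this function to XLA,
# which fuses the individual operations and emits optimized machine code
# for whatever device (CPU, GPU, TPU) it ends up running on.
@tf.function(jit_compile=True)
def dense_step(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([128, 64])
w = tf.random.normal([64, 32])
b = tf.zeros([32])
print(dense_step(x, w, b).shape)  # (128, 32), produced by XLA-compiled code
```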
So, Is TensorFlow A Compiler?
While TensorFlow now arrives with a custom compilation protocol, TensorFlow itself is not a compiler.
At its core, TensorFlow is a deep learning library and its compilation protocol was developed as a distinct entity, as evidenced by the fact that XLA programs can also be generated via JAX, Julia, and PyTorch.
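As a quick sketch of that point, here's roughly what the same idea looks like from JAX, which compiles ordinary Python functions with XLA too (the function here is just a toy example):

```python
import jax
import jax.numpy as jnp

# jax.jit traces this Python function and compiles it with XLA, the very
# same compiler TensorFlow uses, underlining that XLA is a separate
# component rather than a part of TensorFlow itself.
@jax.jit
def scaled_sum(x):
    return jnp.sum(x * 2.0)

print(scaled_sum(jnp.arange(8.0)))  # 56.0, computed by XLA-generated code
```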
TensorFlow couldn't do what it does without a compiler, since the high-level code you write isn't something the hardware can execute as-is, but the library existed as a framework before XLA was ever created.
And speaking of frameworks, let’s take a look at what TensorFlow has going on under the hood.
TensorFlow: Framework
There are only two primary components in the TensorFlow framework: tensors and data flow graphs, hence the name.
What Are Tensors?
Every single computation in the TensorFlow library utilizes tensors, which are a generalization of vectors and matrices to potentially higher dimensions.
In other (more accessible) words, tensors are multi-dimensional data arrays, each with its own rank and shape, that, once gathered, are fed as input into a neural network.
One of the drawbacks of neural networks is that they require an incredibly large amount of data, particularly during the initial training process, and, to make matters worse, this data is formatted in a complex manner, which can make coding the learning process an arduous task.
However, tensors compact this data, streamlining the creation of deep learning algorithms and speeding up the learning itself.
Tensors are defined by two fundamental properties (illustrated in the snippet after this list):
- Shape (the dimensions): How many elements the array holds along each axis
- Rank: The number of dimensions (axes) used to organize the information
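Here's a small illustrative snippet (the values are arbitrary) showing rank and shape on a few hand-made tensors:

```python
import tensorflow as tf

scalar = tf.constant(3.0)                    # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])        # rank 1, shape (3,)
matrix = tf.constant([[1.0, 2.0],
                      [3.0, 4.0],
                      [5.0, 6.0]])           # rank 2, shape (3, 2)

print(tf.rank(matrix).numpy())  # 2 -> the number of dimensions (the rank)
print(matrix.shape)             # (3, 2) -> elements along each dimension (the shape)
```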
Confused? I don’t blame you, but the good news is that you don’t really need to know the nitty-gritty of tensor architecture to use TensorFlow.
What Are Data Flow Graphs?
Once all the data is represented by tensors and fed as input into the neural network, there are a great many computations to execute, and these occur in graph form.
Traditionally, programming involves writing sequences of instructions that run one after another, but TensorFlow organizes its work into these data flow graphs, composed of a multitude of nodes.
Each of these nodes takes in tensor values and outputs tensor values.
You can think of them as association-forming checkpoints, the stepping stones of interrelation that give a neural network its calculating capacity. Yet building a data flow graph doesn't automatically put any of the information within it to use.
To execute a graph, you then need to run it. In the original TensorFlow 1.x, that meant initiating a session, which established how the computation would be placed on and coordinated across your hardware; in TensorFlow 2.x, eager execution and tf.function take care of this for you.
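As a rough sketch of what "building and then running a graph" looks like today (the function below is a toy example), tf.function traces your Python code into a graph and executes it when called; the explicit session only survives in the 1.x compatibility module:

```python
import tensorflow as tf

# In TensorFlow 2.x, wrapping a function in tf.function builds the data
# flow graph for you; calling the function then executes that graph.
@tf.function
def multiply_add(a, b):
    return a * b + b

print(multiply_add(tf.constant(2.0), tf.constant(3.0)))  # tf.Tensor(9.0, ...)

# The old 1.x style made the session explicit, e.g.:
# with tf.compat.v1.Session() as sess:
#     result = sess.run(some_graph_output)
```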
Final Thoughts
There you have it: TensorFlow is not a compiler; it's a deep learning library. But with XLA incorporated as standard, compilation is handled by the framework as and when required, making for a faster development process and a more efficient neural network all round.
We’ve covered some serious ground here today, and yet, we’ve only really just scratched the surface of TensorFlow.
So, use what you’ve just learned as a foundational understanding, and continue your research into this game-changing, Google-built deep learning library. It may just make your next AI project far easier.