An online diary of Advait's current thoughts and activities.
Friday, June 02, 2017
Machine Learning Algorithms and Frameworks
In the last decade, many frameworks have emerged that claim to solve the artificial-intelligence puzzle. In my opinion, the following six frameworks have shown great promise and will continue to make great strides in the AI world.
First, let us talk a little about the algorithms involved in the machine learning space. The following mind map describes some of the algorithms relevant to machine learning:
To solve these problems, we need a solid framework that gives us a consistent way to take inputs from different scenarios. Some of those frameworks are:
Microsoft Computational Network Toolkit (CNTK)
The Computational Network Toolkit, from Microsoft Research, is a unified deep-learning toolkit that trains deep-learning algorithms to learn like the human brain. It describes neural networks as a series of computational steps via a directed graph: leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs. CNTK lets you easily realize and combine popular model types such as feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs). It implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers. CNTK has been available under an open-source license since April 2015.
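The directed-graph idea described above, with automatic differentiation driving SGD, can be sketched in a few lines of plain Python. This is not CNTK's actual API, just a toy illustration of the concept:

```python
# Toy illustration (not CNTK's API): a computational graph where leaf
# nodes hold values and interior nodes apply operations, with reverse-mode
# automatic differentiation as used by SGD training.

class Node:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value          # forward result
        self.parents = parents      # upstream nodes
        self.grad_fns = grad_fns    # local derivative w.r.t. each parent
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b), (lambda g: g, lambda g: g))

def mul(a, b):
    return Node(a.value * b.value, (a, b),
                (lambda g: g * b.value, lambda g: g * a.value))

def backward(out):
    # Propagate gradients from the output back to the leaves.
    # A simple stack works for this tree-shaped example; real toolkits
    # traverse the graph in reverse topological order.
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, fn in zip(node.parents, node.grad_fns):
            parent.grad += fn(node.grad)
            stack.append(parent)

# f(w, x) = w * x + x  ->  df/dw = x, df/dx = w + 1
w, x = Node(3.0), Node(2.0)
out = add(mul(w, x), x)
backward(out)
print(out.value, w.grad, x.grad)   # 8.0 2.0 4.0
```

An SGD step would then simply nudge each parameter against its gradient, e.g. `w.value -= 0.01 * w.grad`.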
With its support for multi-machine, multi-GPU backends, CNTK can outperform Theano, TensorFlow, Torch 7, and Caffe. Such a setup can be built quickly using Microsoft Azure's GPU Lab, which is well supported since both products are Microsoft-based.
Google TensorFlow
TensorFlow is the engine behind many features found in Google applications, such as recognizing spoken words, translating from one language to another, and improving Internet search results. As such, its continued support and development is assured in the long term, considering how important it is to the current team at Google.
TensorFlow can run on multiple GPUs and machines. This makes it easy to spin up sessions and run the same code on different machines without having to stop or restart the program.
Besides its easy syntax, Python gives developers access to some of the most powerful libraries for scientific computing, such as NumPy, SciPy, and Pandas, without having to switch languages.
Google has made a powerful suite of visualizations available for both network topology and performance. TensorFlow is written in Python, with the performance-critical parts implemented in C++, but all of the high-level abstraction and development happens in Python. You can introduce and retrieve arbitrary data on any edge of the graph, and combine this with the TensorBoard suite of visualization tools to get clear, easy-to-understand graph visualizations, making debugging even simpler.
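The build-the-graph-first, run-it-later model that makes "fetch any edge" possible can be mimicked in plain Python. This is a toy sketch of the idea, not TensorFlow's real API:

```python
# Toy sketch of the build-then-run dataflow model (not TensorFlow's API):
# operations are recorded lazily, and any node in the graph can be
# fetched, mirroring how TensorFlow lets you retrieve results from any
# edge for debugging or visualization.

class Op:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def run(self, feed):
        # Placeholders are looked up in the feed dict; other ops recurse.
        if self in feed:
            return feed[self]
        return self.fn(*(i.run(feed) for i in self.inputs))

placeholder = lambda: Op(None)                 # value supplied at run time
add = lambda a, b: Op(lambda x, y: x + y, a, b)
square = lambda a: Op(lambda x: x * x, a)

a, b = placeholder(), placeholder()
s = add(a, b)
out = square(s)

feed = {a: 3, b: 4}
print(out.run(feed))  # 49
print(s.run(feed))    # 7 -- any intermediate node can be fetched too
```

Building the graph and executing it are separate steps, which is what lets the same graph be dispatched to different devices or inspected mid-computation.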
Keras
Keras is a high-level neural networks API, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation: being able to go from idea to result with the least possible delay is key to doing good research.
Use Keras if you need a deep learning library that:
Allows for easy and fast prototyping (Great user friendliness, modularity, and extensibility).
Supports both convolutional networks and recurrent networks, as well as combinations of the two.
Runs seamlessly on CPU and GPU.
The advantages of using this framework:
User friendliness. Keras is an API designed for human beings, not machines. It puts user experience at the center of the solution. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear and actionable feedback upon user error.
Modularity. A model is understood as a sequence or a graph of standalone, fully-configurable modules that can be plugged together with as few restrictions as possible. In particular, neural layers, cost functions, optimizers, initialization schemes, activation functions, and regularization schemes are all standalone modules that you can combine to create new models.
Easy extensibility. New modules are simple to add (as new classes and functions), and existing modules provide ample examples. Being able to easily create new modules allows for total expressiveness, making Keras suitable for advanced research.
Work with Python. No separate model configuration files in a declarative format: models are described in Python code, which is compact, easier to debug, and easy to extend.
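The modularity idea, standalone layers plugged into a container that simply chains them, can be illustrated with a toy in NumPy. This is not the real Keras API, just a sketch of the design:

```python
# Toy illustration of Keras-style modularity (not the real Keras API):
# layers are independent callables, and a Sequential container is just
# function composition over them.

import numpy as np

class Dense:
    def __init__(self, units, activation=None):
        self.units, self.activation, self.w = units, activation, None

    def __call__(self, x):
        if self.w is None:                        # lazy build on first call
            self.w = np.ones((x.shape[-1], self.units)) * 0.1
        out = x @ self.w
        return np.maximum(out, 0) if self.activation == "relu" else out

class Sequential:
    def __init__(self, layers):
        self.layers = layers

    def predict(self, x):
        for layer in self.layers:                 # chain the modules
            x = layer(x)
        return x

model = Sequential([Dense(4, activation="relu"), Dense(1)])
print(model.predict(np.ones((2, 3))).shape)       # (2, 1)
```

Because each layer is self-contained, swapping an activation, optimizer, or layer type means swapping one module rather than rewriting the model.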
Theano
Theano is a Python library that lets you define, optimize, and evaluate mathematical expressions, especially ones involving multi-dimensional arrays. Using Theano, it is possible to attain speeds rivaling hand-crafted C implementations for problems involving large amounts of data. It can also surpass C on a CPU by many orders of magnitude by taking advantage of recent advances in GPUs.
Some of the things Theano has going for it:
tight integration with NumPy – Use numpy.ndarray in Theano-compiled functions.
transparent use of a GPU – Perform data-intensive computations much faster than on a CPU.
efficient symbolic differentiation – Theano does your derivatives for functions with one or many inputs.
speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
dynamic C code generation – Evaluate expressions faster.
extensive unit-testing and self-verification – Detect and diagnose many types of errors.
Theano has been powering large-scale, computationally intensive scientific investigations for about a decade.
Torch
Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C implementation.
The goal of Torch is to have maximum flexibility and speed in building scientific algorithms while making the process extremely simple. Torch comes with a large ecosystem of community-driven packages in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking among others, and builds on top of the Lua community.
At the heart of Torch are the popular neural network and optimization libraries which are simple to use, while having maximum flexibility in implementing complex neural network topologies. You can build arbitrary graphs of neural networks, and parallelize them over CPUs and GPUs in an efficient manner.
A summary of core features:
A powerful N-dimensional array
Lots of routines for indexing, slicing and transposing
Excellent interface to C, via LuaJIT
Linear algebra routines
Neural network, and energy-based models
Numeric optimization routines
Fast and efficient GPU support
Embeddable with ports to iOS and Android backends
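Torch's own examples are in Lua; to keep this post's snippets in one language, here is the same N-dimensional array idea, indexing, slicing, transposing, and a linear-algebra routine, shown with NumPy, which offers close analogues of Torch's Tensor operations:

```python
# NumPy analogues of Torch's core Tensor features (Torch itself is Lua;
# this is an illustration of the same concepts, not Torch code).

import numpy as np

t = np.arange(12).reshape(3, 4)      # a powerful N-dimensional array
print(t[1, 2])                       # indexing        -> 6
print(t[:, 1])                       # slicing a column -> [1 5 9]
print(t.T.shape)                     # transposing     -> (4, 3)
print(np.linalg.norm(np.eye(3)))     # a linear algebra routine, ~1.732
```

In Torch, the same operations run unchanged on GPU by moving the tensor there, which is what "puts GPUs first" means in practice.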
It is already used heavily within Facebook, Google, Twitter, NYU, IDIAP, Purdue, and several other companies, universities, and research labs.
Microsoft Infer.NET
Infer.NET is a framework for running Bayesian inference in graphical models. It can also be used for probabilistic programming.
You can use Infer.NET to solve many different kinds of machine learning problems, from standard problems like classification, recommendation or clustering through to customised solutions to domain-specific problems. Infer.NET has been used in a wide variety of domains including information retrieval, bioinformatics, epidemiology, vision, and many others.
Infer.NET provides state-of-the-art message-passing algorithms and statistical routines needed to perform inference for a wide variety of applications. Infer.NET differs from existing inference software in a number of ways:
Rich modelling language
Support for univariate as well as multivariate variables, both continuous and discrete. Models can be constructed from a broad range of factors including arithmetic operations, linear algebra, range and positivity constraints, Boolean operators, Dirichlet-Discrete, Gaussian, and many others. Support for hierarchical mixtures with heterogeneous components.
Multiple inference algorithms
Built-in algorithms include Expectation Propagation, Belief Propagation (a special case of EP), Variational Message Passing and Gibbs sampling.
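To make concrete what these algorithms compute, here is a hand-worked Bayesian update in Python (Infer.NET itself is a .NET library, so this is an illustration of the inference task, not its API). In a conjugate Beta-Bernoulli model the posterior is exact; message-passing schemes like EP or VMP produce approximations to exactly this kind of distribution in models too large for exact inference:

```python
# Illustration of the Bayesian inference task (not Infer.NET code):
# updating a Beta prior on a coin's bias with Bernoulli observations,
# a conjugate model where the posterior has a closed form.

def beta_bernoulli_posterior(alpha, beta, observations):
    """Update a Beta(alpha, beta) prior on a coin's bias with 0/1 data."""
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

# Uniform prior Beta(1, 1), then observe 7 heads and 3 tails.
a, b = beta_bernoulli_posterior(1, 1, [1] * 7 + [0] * 3)
mean = a / (a + b)                    # posterior mean of the bias
print(a, b, mean)                     # 8 4 -> mean ~0.667
```

Gibbs sampling would instead draw samples from this posterior, and EP or VMP would fit an approximating distribution to it; all three agree here because the model is simple enough to solve exactly.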
Designed for large scale inference
In most existing inference programs, inference is performed inside the program itself, so the overhead of running the program slows down the inference. Instead, Infer.NET compiles models into inference source code that can be executed independently with no overhead. It can also be integrated directly into your application. In addition, the source code can be viewed, stepped through, profiled, or modified as needed, using standard development tools.
User-extensible
Probability distributions, factors, message operations, and inference algorithms can all be added by the user. Infer.NET uses a plug-in architecture which makes it open-ended and adaptable. Whilst the built-in libraries support a wide range of models and inference operations, there will always be special cases where a new factor, distribution type, or algorithm is needed. In such cases, custom code can be written and freely mixed with the built-in functionality, minimizing the amount of extra work needed.
A lot of work remains to be done to ensure that these frameworks continue to evolve with the rapid challenges in this space, but looking at the current set of GitHub projects, it is clear that most of them show promise in addressing the different types of algorithms listed above.