Friday, June 02, 2017

Machine Learning Algorithms and Frameworks

In the last decade, a lot of frameworks have emerged that claim to solve the artificial intelligence puzzle. In my opinion, the following six frameworks have shown great promise and will continue to make great strides in the AI world.
First, let us talk a little bit about the algorithms involved in the machine learning space. Following is a mind map describing some of the algorithms relevant to machine learning:
To apply these algorithms, there is a need for a solid framework that gives us a consistent way to take inputs from different scenarios. Some of these frameworks are:
  • Microsoft Computational Network Toolkit (CNTK)
The Computational Network Toolkit, by Microsoft Research, is a unified deep-learning toolkit that trains deep learning algorithms to learn like the human brain. It describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs. CNTK lets you easily realize and combine popular model types such as feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs). It implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers. CNTK has been available under an open-source license since April 2015.
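The directed-graph idea can be sketched in a few lines of plain Python (an illustrative toy, not the actual CNTK API): leaf nodes carry values, and interior nodes apply operations to their children.

```python
# Toy computational graph: leaf nodes carry values (inputs or
# parameters); interior nodes apply an operation to child nodes.

class Leaf:
    def __init__(self, value):
        self.value = value
    def evaluate(self):
        return self.value

class Op:
    def __init__(self, fn, *children):
        self.fn = fn
        self.children = children
    def evaluate(self):
        # Evaluate children first, then apply this node's operation.
        return self.fn(*(c.evaluate() for c in self.children))

# Build the graph for y = (x * w) + b, as a tiny network would.
x, w, b = Leaf(3.0), Leaf(2.0), Leaf(1.0)
y = Op(lambda a, c: a + c, Op(lambda a, c: a * c, x, w), b)
print(y.evaluate())  # 7.0
```

A real toolkit adds gradients flowing backward through the same graph, which is what makes SGD with automatic differentiation possible.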
PROS
CNTK easily outperforms Theano, TensorFlow, Torch 7, and Caffe with its support for multi-machine, multi-GPU backends. Such a setup can be built quickly using Microsoft Azure's GPU Lab, which is well supported since both are Microsoft products.
  • Google Tensorflow
It's the engine behind many features found in Google applications, such as recognizing spoken words, translating from one language to another, and improving Internet search results, making it a crucial component of many Google products. As such, continued support and development are ensured in the long term, considering how important it is to Google.
PROS
  • TensorFlow can run on multiple GPUs. This makes it easy to spin up sessions and run the code on different machines without having to stop or restart the program.
  • Besides having an easy syntax, using Python also gives developers access to some of the most powerful libraries for scientific computing, like NumPy, SciPy, and pandas, without having to switch languages.
Google has made a powerful suite of visualizations available for both network topology and performance. TensorFlow is written in Python, with the parts that are crucial for performance implemented in C++, but all of the high-level abstractions and development are done in Python. You can introduce and retrieve the results of arbitrary data on any edge of the graph. You can also combine this with the TensorBoard suite of visualization tools to get attractive and easy-to-understand graph visualizations, making debugging even simpler.
  • Keras
Keras is a high-level neural networks API, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
Use Keras if you need a deep learning library that:
  • Allows for easy and fast prototyping (Great user friendliness, modularity, and extensibility).
  • Supports both convolutional networks and recurrent networks, as well as combinations of the two.
  • Runs seamlessly on CPU and GPU.
The advantages of using this framework:
  • User friendliness. Keras is an API designed for human beings, not machines. It puts user experience at the center of the solution. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear and actionable feedback upon user error.
  • Modularity. A model is understood as a sequence or a graph of standalone, fully-configurable modules that can be plugged together with as few restrictions as possible. In particular, neural layers, cost functions, optimizers, initialization schemes, activation functions, and regularization schemes are all standalone modules that you can combine to create new models.
  • Easy extensibility. New modules are simple to add (as new classes and functions), and existing modules provide ample examples. Being able to easily create new modules allows for total expressiveness, making Keras suitable for advanced research.
  • Work with Python. There are no separate model configuration files in a declarative format. Models are described in Python code, which is compact, easier to debug, and allows for ease of extensibility.
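The modularity principle (standalone modules plugged together into a model) can be sketched in plain Python (an illustrative toy, not real Keras code):

```python
import math

# Toy "layers" in the spirit of Keras modularity: each layer is a
# standalone callable, and a model is just a chain of layers.

def dense(weights, bias):
    # Returns a layer computing a weighted sum plus bias per output unit.
    def layer(inputs):
        return [sum(w * x for w, x in zip(row, inputs)) + b
                for row, b in zip(weights, bias)]
    return layer

def sigmoid(inputs):
    # Activation functions are standalone modules too.
    return [1.0 / (1.0 + math.exp(-x)) for x in inputs]

def sequential(*layers):
    # Plug standalone modules together into a model.
    def model(inputs):
        for layer in layers:
            inputs = layer(inputs)
        return inputs
    return model

model = sequential(dense([[0.5, -0.5]], [0.0]), sigmoid)
print(model([2.0, 2.0]))  # [0.5]
```

In real Keras the same shape of composition applies, except the modules also carry trainable parameters and gradients.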
  • Theano
Theano is a Python library that lets you define, optimize, and evaluate mathematical expressions, especially ones with multi-dimensional arrays. Using Theano, it is possible to attain speeds competing with custom C implementations for problems involving large amounts of data. It can also surpass C on a CPU by many orders of magnitude by taking advantage of recent advances in the GPU space.
Some of Theano's strengths are:
  • tight integration with NumPy – Use numpy.ndarray in Theano-compiled functions.
  • transparent use of a GPU – Perform data-intensive computations much faster than on a CPU.
  • efficient symbolic differentiation – Theano does your derivatives for functions with one or many inputs.
  • speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
  • dynamic C code generation – Evaluate expressions faster.
  • extensive unit-testing and self-verification – Detect and diagnose many types of errors.
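Two of these capabilities are easy to demonstrate in plain Python (an illustrative sketch, not Theano's actual API): symbolic differentiation over a tiny expression type, and the log(1+x) stability fix, which the standard library exposes as math.log1p.

```python
import math

# Tiny symbolic differentiation sketch: expressions are 'x',
# ('const', c), ('add', a, b) or ('mul', a, b).

def diff(expr):
    if expr == 'x':
        return ('const', 1.0)
    kind = expr[0]
    if kind == 'const':
        return ('const', 0.0)
    if kind == 'add':
        return ('add', diff(expr[1]), diff(expr[2]))
    if kind == 'mul':  # product rule
        a, b = expr[1], expr[2]
        return ('add', ('mul', diff(a), b), ('mul', a, diff(b)))

def evaluate(expr, x):
    if expr == 'x':
        return x
    kind = expr[0]
    if kind == 'const':
        return expr[1]
    if kind == 'add':
        return evaluate(expr[1], x) + evaluate(expr[2], x)
    if kind == 'mul':
        return evaluate(expr[1], x) * evaluate(expr[2], x)

# d/dx of x*x + 3 is 2x: at x = 4 the derivative is 8.
f = ('add', ('mul', 'x', 'x'), ('const', 3.0))
print(evaluate(diff(f), 4.0))  # 8.0

# Numerical stability: for tiny x, 1 + x rounds to 1.0, so the naive
# formula loses the answer; log1p computes log(1 + x) accurately.
print(math.log(1 + 1e-20))  # 0.0 (precision lost)
print(math.log1p(1e-20))    # 1e-20
```

Theano performs this kind of graph rewriting and stabilization automatically, on arrays rather than scalars.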
Theano has been powering large-scale, computationally intensive scientific investigations for about a decade.
  • Torch
Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C language implementation.
The goal of Torch is to have maximum flexibility and speed in building scientific algorithms while making the process extremely simple. Torch comes with a large ecosystem of community-driven packages in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking among others, and builds on top of the Lua community.
At the heart of Torch are the popular neural network and optimization libraries which are simple to use, while having maximum flexibility in implementing complex neural network topologies. You can build arbitrary graphs of neural networks, and parallelize them over CPUs and GPUs in an efficient manner.
A summary of core features:
  • A powerful N-dimensional array
  • Lots of routines for indexing, slicing and transposing
  • Excellent interface to C, via LuaJIT
  • Linear algebra routines
  • Neural network, and energy-based models
  • Numeric optimization routines
  • Fast and efficient GPU support
  • Embeddable with ports to iOS and Android backends
It is already used heavily within Facebook, Google, Twitter, NYU, IDIAP, Purdue and several other companies and research labs.
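While Torch itself is scripted in Lua, the flavor of its numeric optimization routines can be sketched in a few lines of Python (a toy gradient-descent loop, not Torch code):

```python
# Toy numeric optimization in the spirit of torch's optim package:
# minimize f(w) = (w - 3)^2 by plain gradient descent.

def grad(w):
    # Analytic gradient of (w - 3)^2.
    return 2.0 * (w - 3.0)

w = 0.0
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)

print(round(w, 6))  # 3.0, the minimizer
```

Torch's optimizers apply the same update idea to tensors of network parameters, with the gradients supplied by its neural network library.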
  • Infer.Net
Infer.NET is a framework for running Bayesian inference in graphical models. It can also be used for probabilistic programming.
You can use Infer.NET to solve many different kinds of machine learning problems, from standard problems like classification, recommendation or clustering through to customised solutions to domain-specific problems. Infer.NET has been used in a wide variety of domains including information retrieval, bioinformatics, epidemiology, vision, and many others.
Infer.NET provides state-of-the-art message-passing algorithms and statistical routines needed to perform inference for a wide variety of applications. Infer.NET differs from existing inference software in a number of ways:
  • Rich modelling language
Support for univariate as well as multivariate variables, both continuous and discrete. Models can be constructed from a broad range of factors including arithmetic operations, linear algebra, range and positivity constraints, Boolean operators, Dirichlet-Discrete, Gaussian, and many others. Support for hierarchical mixtures with heterogeneous components.
  • Multiple inference algorithms
Built-in algorithms include Expectation Propagation, Belief Propagation (a special case of EP), Variational Message Passing and Gibbs sampling.
  • Designed for large scale inference
In most existing inference programs, inference is performed inside the program - the overhead of running the program slows down the inference. Instead, Infer.NET compiles models into inference source code which can be executed independently with no overhead. It can also be integrated directly into your application. In addition, the source code can be viewed, stepped through, profiled or modified as needed, using standard development tools.
  • User-extendable
Probability distributions, factors, message operations and inference algorithms can all be added by the user. Infer.NET uses a plug-in architecture which makes it open-ended and adaptable. Whilst the built-in libraries support a wide range of models and inference operations, there will always be special cases where a new factor or distribution type or algorithm is needed. In this case, custom code can be written and freely mixed with the built-in functionality, minimizing the amount of extra work that is needed.
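The kind of Bayesian updating that these algorithms generalize can be shown with the simplest possible example in Python (a hand-derived Beta-Bernoulli conjugate update, not Infer.NET's API): observing coin flips updates a Beta prior over the coin's bias.

```python
# Conjugate Beta-Bernoulli inference: the kind of posterior update
# that frameworks like Infer.NET generalize to large factor graphs.

def update_beta(alpha, beta, observations):
    # Each heads (1) increments alpha; each tails (0) increments beta.
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

# Uniform Beta(1, 1) prior; observe 7 heads and 3 tails.
alpha, beta = update_beta(1.0, 1.0, [1, 1, 1, 0, 1, 1, 0, 1, 1, 0])
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 8/12, about 0.667
```

For models without such closed-form updates, Infer.NET's message-passing and sampling algorithms approximate the posterior instead.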
A lot of work remains to be done to ensure these frameworks continue to evolve with the rapid challenges in this space, but looking at the current set of GitHub projects, it is clear that most of them have shown promise in addressing the different types of algorithms listed above.

Tuesday, November 29, 2016

.NET Core to the forefront

Introduction
.NET is Microsoft’s open source, cross-platform development framework that helps developers create mobile, desktop and web applications that run on Windows devices and servers. First released in beta form in late 2000 as .NET 1.0, it has quickly become one of the most popular frameworks for developers.
Latest Trends
.NET Core 1.0 was released on 27 June 2016. It is a cross-platform, free and open-source managed software framework similar to the .NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation of the CLR, the virtual machine that manages the execution of .NET programs. CoreCLR comes with an improved just-in-time (JIT) compiler called RyuJIT. While .NET Core shares a subset of .NET Framework APIs, it also comes with its own APIs that are not part of the .NET Framework. Further, .NET Core contains CoreRT, the .NET Native runtime optimized to be integrated into AOT-compiled native binaries. The command-line interface (CLI) of this framework offers an execution entry point for operating systems and provides developer services like compilation and package management.
Innovation using .NET
The .NET framework has been at the forefront of innovation from the beginning. Some of its innovative features include LINQ (Language Integrated Query), the ASP.NET web framework, asynchronous programming using async/await, .NET Portable Class Libraries and mobile services. The ability of .NET Core to be used in Docker containers is another innovation this framework can boast of.
.NET in Mobility
Xamarin provides the following summarized features:
Native User Interfaces–Xamarin apps are built with standard, native user interface controls. Apps look and behave exactly the way the end user expects.
Native API Access–Xamarin apps have access to the full spectrum of functionality exposed by the underlying platform and device, including platform-specific capabilities like iBeacons and Android Fragments.
Native Performance–Xamarin apps leverage platform-specific hardware acceleration, and are compiled for native performance. This can’t be achieved with solutions that interpret code at runtime.
Productivity–With Xamarin.Forms developers can use the same logic and UI targeting iOS, Android and Windows 10 UWP.
.NET Core Advances
Unlike the traditional .NET Framework, a single system-wide package installation and a Windows-only runtime environment, .NET Core is about decoupling .NET from Windows, allowing it to run in non-Windows environments without having to install a bulky framework. This also makes it possible to run the platform in Linux-based Docker containers.
Basically, the .NET Core platform is packaged and installed in a different way. Instead of being part of the operating system, .NET Core is composed of NuGet packages and is either compiled directly into an application or put into a folder inside the application. This means applications can carry .NET Core with them, and thus multiple versions can run completely side by side on the same machine.
.NET Core consists of a common language runtime, which in .NET Core is named CoreCLR. .NET Core also features an extensive class library. Rather than a single .NET Framework Class Library, however, .NET Core features CoreFX, a modular collection of libraries. This allows you to include just the libraries that your app needs without the overhead of including those that you don’t need.
Value Proposition for .NET Core
The following topics are the main value-propositions of .NET Core:
Cross-platform and Open Source–.NET Core currently supports three main operating systems: Linux, Windows and OS X, with other OS ports, such as FreeBSD and Alpine, in progress. .NET Core libraries run unmodified across supported OSes, while applications need to be recompiled per environment, given that apps use a native host. The .NET Core 1.0 framework is available on GitHub, licensed under the MIT and Apache 2 licenses. It also makes use of a significant set of open-source industry components.
Modular Framework–.NET Core is built with a modular design and distributed as a set of NuGet packages, enabling applications to include only the .NET Core libraries and dependencies that are needed, in line with the latest Docker container-based development. Each application makes its own .NET Core version choice, avoiding conflicts with shared components. You can then choose a .NET image from Docker Hub.
Smaller Deployment Footprint–Even though in v1.0 the size of .NET Core is a lot smaller than the .NET Framework, note that the overall size of .NET Core is not intended to be smaller than the .NET Framework over time; but since it is pay-for-play, most applications that utilize only parts of CoreFX will have a smaller deployment footprint.
Fast Release Cycles of .NET Core–.NET Core's modular architecture provides a modern and much faster release cycle compared to the slow release cycles of larger monolithic frameworks. This approach allows a much faster pace of innovation from Microsoft and the OSS .NET community than was traditionally possible with the .NET Framework.
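Under the 1.0 tooling, this package-based model shows up directly in an application's project.json, which declares exactly the framework and library packages the app needs (an illustrative fragment; the package versions shown are examples):

```json
{
  "version": "1.0.0-*",
  "buildOptions": { "emitEntryPoint": true },
  "dependencies": {
    "Newtonsoft.Json": "9.0.1"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": { "type": "platform", "version": "1.0.0" }
      }
    }
  }
}
```

Restoring this file pulls only the listed packages from NuGet, which is what keeps each application's footprint to just what it uses.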

Sunday, November 27, 2016

Technologies that will heat up in 2017

As 2016 winds down and we start hearing the New Year bells, it is time to start looking at newer technologies that will be areas of focus in the next year. There are 9 areas around which major discussions and enhancements will happen in 2017.
  1. Artificial intelligence and advanced machine learning
  2. Intelligent things that will combine 3 areas of IoT, AI and ML
  3. Virtual assistants applicable to specific domain areas
  4. Virtual Reality and Augmented reality tools and products
  5. Bots
  6. Blockchain technologies and Bitcoin
  7. Conversational systems
  8. Mesh App and service architectures (aka MASA)
  9. Adaptive Security architectures 
 

Saturday, November 26, 2016

Discussion and comparison of Container technologies

Today, I would like to discuss a little bit on the Container technology and why it has become so popular.
One of the best ways to truly implement the "Infrastructure as a Service" paradigm and make it configuration based is through containers.
Containers are a method of operating system virtualization that allows you to run an application and its dependencies in resource-isolated processes. They let you package an application's code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control.

They can help ensure that applications deploy quickly, reliably, and consistently regardless of deployment environment. Running containers like Docker in AWS is a flexible way to make the entire infrastructure configuration-based, removing state-based dependencies from end-to-end architectures. Docker automates the deployment of Linux-based applications in the cloud.
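For example, packaging an application into a container image takes only a few declarative lines in a Dockerfile (an illustrative sketch; the base image, file names and start command are hypothetical):

```dockerfile
# Build a container image for a simple Python application.
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Because the image carries the application's code and dependencies together, the same artifact runs identically on a laptop, a CI server or a cloud host.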

Some of the cool technologies competing in this space come from Docker, CoreOS, IBM, AWS, Google, Red Hat and Microsoft (with Drawbridge).
 

Tuesday, November 22, 2016

Cloud based tools

Summarizing some experiences with the different cloud-based tools that are needed in enterprises:
  • Full stack performance and analytics cloud tools
     - AppNeta
     - CoScale
     - AppDynamics
     - Dynatrace
     - Quantum Metrics
     - New Relic
  • Cloud cost management tools
     - Cloudability
     - Cloudyn
     - VMWare
     - Rightscale
     - Dell
     - Scalr
     - Cloud Cruiser
  • Workload automation tools
     - Ansible
     - Chef
     - Gigaspaces - Cloudify
     - Puppet Labs 
  • Cloud Service Management
     - BMC
     - CA AppLogic
     - FUJITSU Cloud Service Management

Saturday, November 19, 2016

.NET Core and recent advances

Latest Trends
.NET Core 1.0 was released on 27 June 2016. It is a cross-platform, free and open-source managed software framework similar to the .NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation of the CLR, the virtual machine that manages the execution of .NET programs. CoreCLR comes with an improved just-in-time (JIT) compiler called RyuJIT. While .NET Core shares a subset of .NET Framework APIs, it also comes with its own APIs that are not part of the .NET Framework. Further, .NET Core contains CoreRT, the .NET Native runtime optimized to be integrated into AOT-compiled native binaries. The command-line interface of this framework offers an execution entry point for operating systems and provides developer services like compilation and package management.
Innovation using .NET
The .NET framework has been at the forefront of innovation from the beginning. Some of its innovative features include LINQ (Language Integrated Query), the ASP.NET web framework, asynchronous programming using async/await, .NET Portable Class Libraries and mobile services. The ability of .NET Core to be used in Docker containers is another innovation this framework can boast of.
.NET in Mobility
Xamarin provides the following summarized features:
Native User Interfaces – Xamarin apps are built with standard, native user interface controls. Apps not only look the way the end user expects; they behave that way too.
Native API Access – Xamarin apps have access to the full spectrum of functionality exposed by the underlying platform and device, including platform-specific capabilities like iBeacons and Android Fragments.
Native Performance – Xamarin apps leverage platform-specific hardware acceleration, and are compiled for native performance. This can’t be achieved with solutions that interpret code at runtime.
Productivity – With Xamarin.Forms developers can use the same logic and UI targeting iOS, Android and Windows 10 UWP.
.NET Core advances
Unlike the traditional .NET Framework, a single system-wide package installation and a Windows-only runtime environment, .NET Core is about decoupling .NET from Windows, allowing it to run in non-Windows environments without having to install a giant 400 MB set of binaries; instead, applications deploy with the framework itself, supporting side-by-side execution of different versions of the framework. This also makes it possible to run the platform in Linux-based Docker containers.
Basically, the .NET Core platform is packaged and installed in a different way. Instead of being part of the operating system, .NET Core is composed of NuGet packages and is either compiled directly into an application or put into a folder inside the application. This means applications can carry .NET Core with them, and thus multiple versions can run completely side by side on the same machine.
.NET Core consists of a common language runtime, which in .NET Core is named CoreCLR. .NET Core also features an extensive class library. Rather than a single .NET Framework Class Library, however, .NET Core features CoreFX, a modular collection of libraries. This allows you to include just the libraries that your app needs without the overhead of including those that you don’t need.
Value Proposition for .NET Core
The following topics are the main value-propositions of .NET Core:
Cross-platform – .NET Core currently supports three main operating systems: Linux, Windows and OS X, with other OS ports, such as FreeBSD and Alpine, in progress. .NET Core libraries run unmodified across supported OSes, while applications need to be recompiled per environment, given that apps use a native host.
Open Source – .NET Core is available on GitHub, licensed under the MIT and Apache 2 licenses. It also makes use of a significant set of open-source industry components.
Modular framework – .NET Core is built with a modular design, enabling applications to include only the .NET Core libraries and dependencies that are needed, in line with the latest Docker container-based development. Each application makes its own .NET Core versioning choices, avoiding conflicts with shared components.
Natural acquisition – .NET Core is distributed as a set of NuGet packages that developers can pick and choose from and Docker images for it are also available on Docker hub. The runtime and base framework can be acquired from NuGet and OS-specific package managers, such as APT, Homebrew and Yum.
Smaller deployment footprint – Even though in v1.0 the size of .NET Core is a lot smaller than the .NET Framework, note that the overall size of .NET Core is not intended to be smaller than the .NET Framework over time; but since it is pay-for-play, most applications that utilize only parts of CoreFX will have a smaller deployment footprint.
Fast release cycles of .NET Core – .NET Core's modular architecture provides a modern and much faster release cycle compared to the slow release cycles of larger monolithic frameworks. This approach allows a much faster pace of innovation from Microsoft and the OSS .NET community than was traditionally possible with the .NET Framework.

Wednesday, November 16, 2016

Ecommerce shines

Latest ECommerce Trends shaping up
ECommerce has exploded in recent years, hitting new milestones and reaching numbers never seen before. In 2014, sales hit a high of $1.3 trillion, reaching almost $2 trillion by the end of 2016. By 2020, it is estimated that the value of ECommerce sales will have topped $4 trillion. This article discusses some of the top trends in ECommerce that have had far-reaching consequences in the way enterprises design a strategy, develop software and execute a plan that gives a seamless experience to end users while keeping them in line with the latest trends.
Multi-Device Shopping – Given that sixty-six percent of all time spent on ECommerce sites is spent on mobile devices, companies are aligning their web design with a mobile-first strategy. eRetailers will need to ensure that their sites are accessible from devices of different form factors, such as phones, tablets and laptops with different resolutions. Since 61 percent of customers leave a site if it isn’t mobile-friendly, it is hardly a surprise that responsive web design is becoming a key component of any web site’s strategy.
Material Design – This vibrant and content-focused pattern has been gaining popularity since late 2014 and continues to be used extensively in modern ECommerce sites. It is the unified and playful experience provided by Material Design that makes it extremely attractive. Even for development shops, this framework allows for a rapid and consistent baseline that can deliver almost a “game”-like experience.
Custom Product Demonstrations / Real-Time Customization – Providing interactive product demonstrations is a very compelling way to showcase your product offerings and how they work. Many companies would like to give a real-time experience of building models and clothing so that the product is customized to the needs of the customer.
Hidden Menus – With the advent of bold modern designs, the concept of menus has been replaced with large images and alternative mechanisms to toggle between choices. Even the hamburger menu in the corner is being replaced by innovative ways of allowing users to choose between different modes and choices.
Multi-Channel Marketing – Using a combination of indirect and direct ECommerce communication channels like websites, direct mail, email, mobile apps and social media campaigns, companies are maximizing their investment in digital assets and also investing in a variety of digital asset management products. As the pace of this marketing mechanism picks up, there will surely be an uptrend in the usage of these asset management products.
Location Targeting – Given that a GPS device can pinpoint the location of a customer, it is natural for companies to start focusing on location-centric targeting through relevant advertisements. Using beacons and accurate location sensors, coupled with the preferences and profile specifics of individuals, it is possible to pinpoint the kind of discounts and promotions that could be transmitted to an individual to encourage buying specific merchandise.
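A minimal sketch of such proximity-based targeting, in Python (the store coordinates and radius are hypothetical; the haversine formula gives the great-circle distance between two points):

```python
import math

# Haversine great-circle distance in kilometers between two
# (latitude, longitude) points given in degrees.
def distance_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_promotion(customer, store, radius_km=1.0):
    # Offer the promotion only when the customer is within the radius.
    return distance_km(*customer, *store) <= radius_km

store = (40.7580, -73.9855)  # hypothetical store location
print(nearby_promotion((40.7590, -73.9850), store))  # True: ~0.1 km away
print(nearby_promotion((40.6892, -74.0445), store))  # False: several km away
```

Real systems layer beacon signals and customer preferences on top of this distance check to decide which promotion to push.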
Social Media Advertising – Social media advertisements have come a long way in being used as an effective advertising mechanism. Taking the example of Facebook’s Dynamic Product Ads, which automatically promote relevant products from your entire catalog with unique creative, showcasing one or more products across any device, users can literally be reminded of the quality and uniqueness of a product they viewed just minutes earlier, refreshing their memory and giving good visibility to the product.
Marketing Automation – As email campaigns promoting different products become more and more annoying, technology has found ways to classify such mail as junk or “clutter”. Marketing automation helps by making emails a lot more relevant, attractive and possibly worthwhile for the receiver to spend some time understanding the material.
Rich Media – Audio and Video – As the competition for attracting more eyeballs intensifies, companies are trying to find every possible way to add more “stickiness” to their ECommerce channel. One very practical way to do this is by using rich media generously. A recent survey indicated that more than 90% of buyers said a video explaining a product’s features plays a much bigger role in their buying decision. Use of bold colors that are pleasing to the eye can surely help ECommerce retailers generate more interest from their customers.
Data Aggregators – As the concept of “Data as a Service” intensifies, using a variety of data in real time and aggregating it to give a contextual and specific experience continues to make customers’ decisions easier. In this information age, where data is at the fingertips of most millennials, nothing is more irritating for a customer than seeing outdated or wrong information just because aggregation and data cleansing have not been done correctly. This reflects very poorly on the quality of the product and directly influences the buying decision.
Artificial Intelligence in ECommerce – The three areas where artificial intelligence is playing a big role in ECommerce are advanced search, personalization and predictive analysis. Given the behavior of the buyer, most AI algorithms can foretell the taste of the individual and suggest specific products based on context when the user searches. Given that the inventory of most retailers is considerable, this helps zero the customer in on the required merchandise quicker. From a personalization standpoint, using the preferences the user has mentioned in his profile, as well as any traits the algorithm can gather from his publicly available social or professional data points, companies can create a virtual profile of the customer and suggest options accordingly. Finally, predictive analysis helps enterprises predict future purchasing patterns or possible interest points for the customer so that recommendations can be made accordingly. Used effectively by Amazon early on, this has now become standard practice.
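A toy version of such preference-based suggestion in Python (the product feature vectors are hypothetical; real systems learn them from behavior at scale): score each product against the customer's preference vector with cosine similarity and recommend the best match.

```python
import math

# Score products against a customer's preference vector using cosine
# similarity, then recommend the best match.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm

# Hypothetical feature vectors: (sporty, formal, budget-friendly).
products = {
    "running shoes": (0.9, 0.1, 0.6),
    "dress shoes":   (0.1, 0.9, 0.2),
}
customer = (0.8, 0.2, 0.5)  # inferred from profile and browsing history

best = max(products, key=lambda name: cosine(customer, products[name]))
print(best)  # running shoes
```

Production recommenders replace these hand-written vectors with embeddings learned from purchase and browsing data, but the matching step is the same idea.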
Emergence of Payment Wearables – With the emergence of mobile wallets, rings and small wearables capable of making digital payments, there has been a new trend of companies using tap-and-go techniques for payments in the ECommerce arena. There is a huge market for this, and it will continue to expand in the coming years.