Get Started with Apache MXNet on Amazon Linux or Ubuntu

Apache MXNet is a lean, flexible, and ultra-scalable deep learning framework that supports state-of-the-art deep learning models, including convolutional neural networks (CNNs) and long short-term memory networks (LSTMs). The framework has its roots in academia and came about through the collaboration and contributions of researchers at several top universities. It has been designed to excel at computer vision, speech, language processing and understanding, generative models, and recurrent neural networks.

MXNet allows you to define, train, and deploy networks across a wide array of use cases, from massive cloud infrastructure to mobile and connected devices. It provides a flexible environment with support for many common languages and the ability to use both imperative and symbolic programming constructs. MXNet is also very lightweight, which allows it to scale efficiently across multiple GPUs and multiple machines, a real benefit when training on large datasets in the cloud.

Because of these benefits, Apache MXNet is Amazon Web Services' deep learning framework of choice. You can easily get started using Apache MXNet on AWS by launching the AWS Deep Learning AMI, available for both Amazon Linux and Ubuntu.

Contribute to the Apache MXNet Project

Get Involved at GitHub

Grab sample code, notebooks, and tutorial content at the GitHub project page.

Programmability: Simplify network definitions and use languages that you already know.

Portability: Efficient use of memory allows models to run on a broad range of devices and platforms.

Scalability: Scale across multiple GPUs and hosts to train large, sophisticated models quickly.

With Apache MXNet, you can mix both imperative and symbolic programming; in fact, the name “MXNet” comes from “mixed networks”. This means that you can combine the optimization of symbolic executors with the flexible features of imperative languages, such as iteration loops and parameter updates. Because of this mixed nature, MXNet provides a unique set of capabilities that make it easier to work with the multi-layered, complex nature of deep learning models.

import mxnet as mx

# Imperative style: operations execute immediately, like NumPy code
a = mx.nd.zeros((100, 50))
b = mx.nd.ones((100, 50))
c = a + b
c += 1
print(c)

 

import mxnet as mx

# Symbolic style: first declare the computation graph...
net = mx.symbol.Variable('data')
net = mx.symbol.FullyConnected(data=net, num_hidden=128)
net = mx.symbol.SoftmaxOutput(data=net, name='softmax')

# ...then bind it to concrete input shapes, which allocates memory
# and returns an executor for forward and backward passes
texec = net.simple_bind(ctx=mx.cpu(), data=(100, 50))
texec.forward(is_train=True, data=mx.nd.ones((100, 50)))
texec.backward()

In addition, MXNet supports a broad set of programming languages on the front end of the framework, including C++, JavaScript, Python, R, MATLAB, Julia, Scala, and Go, so you can start running your deep learning workloads right away in a language you already know. On the back end, your code is always compiled to C++, so you get consistent performance regardless of which front-end language you choose.


As artificial intelligence applications become more and more a part of daily life, it is increasingly important that they can be deployed across a wide variety of devices. This is particularly true when AI is deployed on mobile and connected devices at the edge where storage may be at a premium.

Apache MXNet models are able to fit in very small amounts of memory. For example, a thousand-layer network requires less than 4GB of storage. The framework is also portable across platforms. The core library (with all dependencies) fits into a single C++ source file, and it can be compiled for both iOS and Android. In fact, using JavaScript, it can even be run within a browser. This flexibility means that you can deploy your models across a very diverse set of use cases to reach the broadest set of users. 

Cloud | Mobile | Browser | Connected Devices

Apache MXNet is built on a dynamic dependency scheduler that parses data dependencies in serial code and automatically parallelizes both declarative and imperative operations on the fly. A graph optimization layer on top of that makes declarative execution fast and memory efficient.
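MXNet's dependency engine is implemented in C++ inside the framework, but the core idea can be sketched in plain Python (an illustrative analogy, not MXNet's actual code): operations whose inputs do not overlap can be dispatched concurrently, while an operation that consumes their results must wait for both.

```python
# Plain-Python sketch of automatic parallelization via data
# dependencies (illustrative only; hypothetical toy matrices).
from concurrent.futures import ThreadPoolExecutor

def matmul(x, y):
    """Naive matrix multiply, standing in for a GPU kernel."""
    return [[sum(p * q for p, q in zip(row, col)) for col in zip(*y)]
            for row in x]

a = [[1.0] * 4 for _ in range(4)]
b = [[2.0] * 4 for _ in range(4)]

with ThreadPoolExecutor() as pool:
    # c = a @ a and d = b @ b share no inputs, so a scheduler is
    # free to run them in parallel
    fut_c = pool.submit(matmul, a, a)
    fut_d = pool.submit(matmul, b, b)
    # e = c + d depends on both, so it blocks until they finish
    c, d = fut_c.result(), fut_d.result()
    e = [[p + q for p, q in zip(rc, rd)] for rc, rd in zip(c, d)]

print(e[0][0])
```

In MXNet itself this bookkeeping is invisible: NDArray operations return immediately, and the engine discovers the same dependency structure on the fly.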

Because of this auto-parallelization, MXNet scales very efficiently. For example, we trained a popular image analysis algorithm, Inception v3, using an increasing number of EC2 P2 GPU-backed instances to benchmark MXNet's efficiency.

[Chart: Inception v3 training throughput as the number of GPUs increases on a single P2 instance]

The red line shows perfect efficiency, where doubling the GPUs doubles the speed. MXNet's throughput rose at almost the same rate as the number of GPUs used: on a single instance with 16 GPUs, Apache MXNet achieves 91% efficiency.
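The efficiency figure is simply measured throughput divided by the ideal linear throughput. A quick sketch of the calculation (the per-GPU throughput numbers below are hypothetical, not the benchmark's actual measurements):

```python
def scaling_efficiency(throughput_1_gpu, throughput_n_gpus, n_gpus):
    """Ratio of measured speedup to ideal linear speedup."""
    ideal = throughput_1_gpu * n_gpus
    return throughput_n_gpus / ideal

# e.g. if 1 GPU processes 150 images/sec and 16 GPUs together
# process 2,184 images/sec, efficiency is 2184 / (150 * 16)
eff = scaling_efficiency(150, 2184, 16)
print(round(eff, 2))  # 0.91
```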

For larger networks, however, you may want to scale across clusters of P2 instances. Running the same benchmark in this scenario shows only a small decrease in efficiency as the number of GPUs continues to double.

[Chart: Inception v3 training throughput as the number of GPUs increases across a cluster of P2 instances]

But don’t simply take our word for it. One of the benefits of the cloud is that you can easily test things for yourself with your own code. We’ve made our benchmark available to everyone on GitHub. We encourage you to look under the hood to see how it works and to run it to obtain your own results. You can also use our AWS CloudFormation template to quickly spin up the capacity that you need.

It's easy to start using Apache MXNet on AWS by visiting the AWS Marketplace and launching the AWS Deep Learning AMI.

The AMI provides the latest stable build of MXNet and many other popular deep learning frameworks and tools at no cost. You pay only for the EC2 time you consume.