Machine Learning in Python with Chainer
Machine learning, and especially deep learning, is increasingly popular and has achieved state-of-the-art performance on many data analysis benchmarks, including image recognition, reinforcement learning for games and robotics, and natural language processing tasks such as machine translation.
When choosing a framework for working on neural networks, it is important to pick one that is flexible and allows for customization. Chainer is a neural network framework written almost entirely in Python. Chainer was the first framework to provide the "define-by-run" approach to neural network definition, in which the computation graph is built dynamically as the forward pass executes, allowing the network's structure to change from run to run. Define-by-run also simplifies debugging: because Chainer provides an imperative Python API, errors surface as ordinary Python exceptions and can be stepped through with standard Python debuggers.
Since Chainer was created from the start in Python, its code can be inspected with standard Python tools and customized when required. Newer neural network models can require new algorithms, and having a framework written in Python means the code can be altered as needed.
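The define-by-run idea can be illustrated with a small, self-contained sketch. This is a conceptual toy, not Chainer's actual API: the graph is recorded while ordinary Python code executes, so plain control flow (if/for) can shape the network, and gradients are obtained by walking the recorded graph backwards.

```python
import numpy as np

# Conceptual sketch of define-by-run (NOT Chainer's real API):
# each operation records a graph node as it runs.

class Variable:
    def __init__(self, data, parents=(), grad_fn=None):
        self.data = data        # the array held by this node
        self.parents = parents  # nodes this one was computed from
        self.grad_fn = grad_fn  # maps output grad -> parent grads
        self.grad = None

def mul(a, b):
    # The graph node is created during execution of the forward pass.
    return Variable(a.data * b.data, parents=(a, b),
                    grad_fn=lambda g: (g * b.data, g * a.data))

def add(a, b):
    return Variable(a.data + b.data, parents=(a, b),
                    grad_fn=lambda g: (g, g))

def backward(out):
    # Walk the recorded graph in reverse, accumulating gradients.
    out.grad = np.ones_like(out.data)
    stack = [out]
    while stack:
        node = stack.pop()
        if node.grad_fn is None:
            continue
        for parent, g in zip(node.parents, node.grad_fn(node.grad)):
            parent.grad = g if parent.grad is None else parent.grad + g
            stack.append(parent)

x = Variable(np.array(3.0))
y = Variable(np.array(4.0))
z = add(mul(x, y), x)  # z = x*y + x; the graph is built here, as it runs
backward(z)
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Because the graph only exists as a trace of what actually ran, a different branch of an `if` on the next iteration simply produces a different graph, which is the flexibility the text attributes to define-by-run.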
Chainer supports computation on either CPUs or GPUs. GPU computation in Chainer is provided by CuPy, a numerical computation library that offers NumPy-compatible array operations. This makes it easy to switch between CPU and GPU as the coding environment requires.
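Because CuPy mirrors the NumPy API, device-agnostic code can be written against a single module alias. A minimal sketch of that pattern, falling back to NumPy when CuPy (and hence a GPU) is unavailable; the `normalize` function and the alias name `xp` are illustrative choices, not part of any library API:

```python
# Sketch of the NumPy/CuPy switching pattern: the same code runs on
# CPU or GPU depending on which module is bound to the alias `xp`.
try:
    import cupy as xp   # GPU arrays via CUDA (assumes CuPy + a working GPU)
except ImportError:
    import numpy as xp  # CPU fallback with the same array API

def normalize(a):
    # Works unchanged on CPU (NumPy) or GPU (CuPy) arrays.
    return (a - xp.mean(a)) / xp.std(a)

a = xp.arange(6, dtype=xp.float32)
print(normalize(a))  # zero-mean, unit-variance array on whichever device
```

Chainer builds helpers on top of this compatibility (for example, utilities that pick the right array module for a given array), so model code rarely needs to branch on the device explicitly.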
This session will cover:
The basics of how to define a neural network
Data formatting and augmentation
Using GPUs to reduce calculation time
Distributing processing over multiple CPUs or GPUs, or multiple machines
How to process images using convolutional networks (convnets)
How to do reinforcement learning
Sources for finding other neural network models, including GANs, RNNs, reinforcement learning agents, and others