//No Comment - Quantized Neural Networks, Generating Faces & cleverhans v0.1
Written by Mike James   
Wednesday, 05 October 2016

• Quantized Neural Networks

• Generating Faces with Deconvolution Networks

• cleverhans v0.1


Sometimes the news is reported well enough elsewhere and we have little to add other than to bring it to your attention.

No Comment is a format where we present original source information, lightly edited, so that you can decide if you want to follow it up. 

 

 

The recent paper below reinforces some expectations about how neural networks actually work. They are not precise machines that need high floating point accuracy; instead, the information they carry is spread across the neurons. We might guess that the precision of the weights in use isn't that important, but perhaps not to the point of expecting 1-bit weights to work!

Quantized Neural Networks

Training Neural Networks with Low Precision Weights and Activations

Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio

We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced.

We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well, which enables gradient computation using only bit-wise operations.

Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
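To give a flavor of what the paper describes, here is a minimal sketch, not taken from the authors' published QNN code, of deterministic 1-bit weight quantization with a straight-through estimator, plus the XNOR-popcount trick that turns a {-1,+1} dot product into bit-wise operations. The function names and the plain NumPy setting are our own illustration:

    import numpy as np

    def binarize(w):
        # Deterministic 1-bit quantization: keep only the sign of each weight.
        return np.where(w >= 0, 1.0, -1.0)

    def ste_grad(w, grad_wrt_binary):
        # Straight-through estimator: sign() has zero gradient almost
        # everywhere, so the backward pass pretends it is the identity,
        # masked to |w| <= 1 so the real-valued "shadow" weights keep learning.
        return grad_wrt_binary * (np.abs(w) <= 1.0)

    def binary_dot(a_bits, b_bits, n):
        # With {-1,+1} values packed as bits (1 -> +1, 0 -> -1), a dot product
        # becomes popcount(XNOR): matches minus mismatches = 2*matches - n.
        matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
        return 2 * matches - n

    w = np.array([0.3, -0.7, 0.05, -1.2])
    print(binarize(w))                    # [ 1. -1.  1. -1.]
    print(binary_dot(0b1010, 0b1000, 4))  # (+1,-1,+1,-1)·(+1,-1,-1,-1) = 2

The XNOR-popcount identity is what lets a binary GPU kernel replace multiply-accumulate with bit operations, which is where the reported speedup comes from.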

 


 

Generating Faces with Deconvolution Networks

Michael D Flynn has been doing some interesting things with neural networks and it's all available on GitHub. He writes:

One of my favorite deep learning papers is Learning to Generate Chairs, Tables, and Cars with Convolutional Networks. It’s a very simple concept – you give the network the parameters of the thing you want to draw and it does it – but it yields an incredibly interesting result. The network seems like it is able to learn concepts about 3D space and the structure of the objects it’s drawing, and because it’s generating images rather than numbers it gives us a better sense about how the network “thinks” as well.

I happened to stumble upon the Radboud Faces Database some time ago, and wondered if something like this could be used to generate and interpolate between faces as well.

The results are actually pretty exciting!
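For a rough idea of the architecture involved, here is a minimal sketch of a parameter-to-image generator built from transposed-convolution ("deconvolution") layers in tf.keras. It is not Flynn's actual code; the parameter vector size and the layer widths are made-up assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_PARAMS = 76  # hypothetical: identity one-hot plus pose/expression codes

    def build_generator():
        # Map a small parameter vector to a 64x64 RGB image by repeatedly
        # upsampling with transposed convolutions.
        return models.Sequential([
            tf.keras.Input(shape=(NUM_PARAMS,)),
            layers.Dense(4 * 4 * 256, activation="relu"),
            layers.Reshape((4, 4, 256)),
            layers.Conv2DTranspose(128, 5, strides=2, padding="same", activation="relu"),  # 8x8
            layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu"),   # 16x16
            layers.Conv2DTranspose(32, 5, strides=2, padding="same", activation="relu"),   # 32x32
            layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid"), # 64x64 RGB
        ])

    gen = build_generator()
    gen.compile(optimizer="adam", loss="mse")  # train to reproduce each labelled photo

Once trained this way, interpolating between two parameter vectors and feeding the intermediate vectors through the generator is what produces the face "morphs".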


cleverhans v0.1: an adversarial machine learning library

Ian Goodfellow, Nicolas Papernot, and Patrick McDaniel

The cleverhans library is a collaboration between OpenAI and the Pennsylvania State University. OpenAI is a non-profit, funded by Elon Musk, Reid Hoffman, Peter Thiel, and Amazon Web Services, that aims to put control of AI back in the people's hands; see AI Goes Open Source To The Tune Of $1 Billion.

Adversarial examples are inputs crafted by making slight perturbations to legitimate inputs with the intent of misleading machine learning models. The perturbations are designed to be small in magnitude, such that a human observer would not have difficulty processing the resulting input. In many cases, the perturbation required to deceive a machine learning model is so small that a human being may not be able to perceive that anything has changed, or even so small that an 8-bit representation of the input values does not capture the perturbation used to fool a model that accepts 32-bit inputs.
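The classic way to craft such a perturbation is the fast gradient sign method, one of the attacks cleverhans provides a reference implementation of. A minimal sketch, written against tf.keras rather than the cleverhans API itself:

    import tensorflow as tf

    def fgsm(model, x, y, eps=0.1):
        # Fast gradient sign method: step each input value by +/-eps in the
        # direction that increases the model's loss, then clip to keep a
        # valid image. eps bounds the maximum change to any single value.
        x = tf.convert_to_tensor(x)
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
        grad = tape.gradient(loss, x)
        return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)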

Although completely effective defenses have yet to be proposed, the most successful to date is adversarial training. The cleverhans library provides reference implementations of the attacks, which are intended for two purposes.

First, machine learning developers may construct robust models by using adversarial training, which requires the construction of adversarial examples during the training procedure; a minimal sketch of such a training step follows below.

Second, we encourage researchers who report the accuracy of their models in the adversarial setting to use the standardized reference implementation provided by cleverhans.

Without a standard reference implementation, different benchmarks are not comparable—a benchmark reporting high accuracy might indicate a more robust model, but it might also indicate the use of a weaker attack implementation. By using cleverhans, researchers can be assured that a high accuracy on a benchmark corresponds to a robust model.
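The training-step sketch promised above mixes clean and adversarial examples in each batch. Again this is a hedged illustration rather than the cleverhans implementation, and it reuses the hypothetical fgsm function from the previous sketch:

    import tensorflow as tf

    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    optimizer = tf.keras.optimizers.Adam()

    def adversarial_train_step(model, x, y, eps=0.1):
        # Craft adversarial versions of the batch, then fit on both the
        # clean and the perturbed inputs so the model learns to resist them.
        x_adv = fgsm(model, x, y, eps)
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x)) + loss_fn(y, model(x_adv))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss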

Implemented in Python, cleverhans is designed as a tool complementing existing numerical computation libraries like TensorFlow and Theano, as well as specialized higher-level machine learning libraries like Keras that help developers to quickly implement models using predefined layers. It is free, open-source software, licensed under the MIT license. The project is available online through GitHub.

 


 

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook, Google+ or Linkedin.






 

Comments

or email your comment to: comments@i-programmer.info

 

Last Updated ( Wednesday, 05 October 2016 )