Tag Archives: Neural Networks

Step Up To Recurrent Neural Networks


// Code Download

Recurrent neural networks can solve some types of problems that regular feed-forward networks cannot handle.

Recurrent Neural Networks Tutorial Part 3: Backpropagation Through Time and Vanishing Gradients


// WildML

This is the third part of the Recurrent Neural Networks Tutorial.

In the previous part of the tutorial we implemented an RNN from scratch, but didn’t go into detail on how the Backpropagation Through Time (BPTT) algorithm calculates the gradients. In this part we’ll give a brief overview of BPTT and explain how it differs from traditional backpropagation. We will then try to understand the vanishing gradient problem, which has led to the development of LSTMs and GRUs, two of the most popular and powerful models currently used in NLP (and other areas). The vanishing gradient problem was originally discovered by Sepp Hochreiter in 1991 and has been receiving attention again recently due to the increased application of deep architectures.

To fully understand this part of the tutorial I recommend being familiar with how partial differentiation and basic backpropagation work. If you are not, you can find excellent tutorials here and here and here, in order of increasing difficulty.
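To get a feel for why gradients vanish (or explode) over many time steps, here is a toy sketch, not taken from the tutorial itself: a scalar linear "RNN" h_t = w * h_{t-1} + x_t, where the gradient of h_T with respect to h_0 is simply w**T, so it shrinks exponentially when |w| < 1 and blows up when |w| > 1.

```python
# Toy scalar linear recurrence: h_t = w * h_{t-1} + x_t.
# By the chain rule, d h_T / d h_0 = w ** T, so the gradient
# flowing back through T steps vanishes for |w| < 1 and
# explodes for |w| > 1 -- the same effect (via products of
# Jacobians) that motivates LSTMs and GRUs in real RNNs.

def gradient_through_time(w, steps):
    """Return d h_T / d h_0 for the scalar linear recurrence."""
    grad = 1.0
    for _ in range(steps):
        grad *= w
    return grad

for T in (1, 10, 50):
    print(T, gradient_through_time(0.5, T), gradient_through_time(1.5, T))
```

With w = 0.5 the gradient is already below 1e-15 after 50 steps, while w = 1.5 sends it past 1e8: information from early time steps either disappears or swamps everything else.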

Cracking captchas with neural networks


via Cracking captchas with neural networks by Johan Fagerbeg on CodePen.

So you’d like to break a captcha, huh? It might come as a surprise, but it’s actually fairly easy to do – even in your browser (as long as you don’t set your expectations all that high). The trick is in using neural networks to find the best match for each character, after training them extensively with sample data. Luckily for us, there already exists an excellent implementation of neural networks in JavaScript, called brain.js. For this guide we’ll be breaking one of the variants generated by Securimage, as they can be made pretty weak (other services, such as captchacreator, are much weaker by default, but I couldn’t find a decent dataset for those) – breaking a more complex one, not to mention reCaptcha, quickly becomes close to impossible. It should go without saying that this is only for educational purposes, even if it can be used on some real captchas.
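The per-character matching idea can be sketched in a few lines. This is a minimal Python stand-in (the guide itself uses brain.js): train one tiny one-vs-rest perceptron per label on labelled glyph bitmaps, then classify each segmented character by the unit that responds most strongly. The 3x3 bitmaps are invented toy data; real captcha glyphs are larger and noisier.

```python
# Toy per-character classifier: one perceptron per label,
# trained one-vs-rest on tiny 3x3 "glyph" bitmaps that stand in
# for segmented captcha characters.

GLYPHS = {
    "T": (1, 1, 1,
          0, 1, 0,
          0, 1, 0),
    "L": (1, 0, 0,
          1, 0, 0,
          1, 1, 1),
}

def train(glyphs, epochs=50, lr=0.1):
    """Perceptron learning rule: nudge weights toward (target - output)."""
    weights = {label: [0.0] * 9 for label in glyphs}
    for _ in range(epochs):
        for label, pixels in glyphs.items():
            for target_label, w in weights.items():
                target = 1.0 if target_label == label else 0.0
                out = 1.0 if sum(wi * p for wi, p in zip(w, pixels)) > 0 else 0.0
                for i, p in enumerate(pixels):
                    w[i] += lr * (target - out) * p
    return weights

def classify(weights, pixels):
    """Pick the label whose unit fires hardest -- the 'best match'."""
    return max(weights,
               key=lambda label: sum(wi * p
                                     for wi, p in zip(weights[label], pixels)))

weights = train(GLYPHS)
print(classify(weights, GLYPHS["T"]))  # prints T
```

Breaking a whole captcha then reduces to segmenting the image into characters and running each one through the classifier; segmentation, not classification, is usually the hard part.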

Neural Network Papers


Table of Contents

  1. Surveys
  2. Datasets
  3. Programming Frameworks
  4. Learning to Compute
  5. Natural Language Processing
  6. Convolutional Neural Networks
  7. Recurrent Neural Networks
  8. Convolutional Recurrent Neural Networks
  9. Autoencoders
  10. Restricted Boltzmann Machines
  11. Biologically Plausible Learning
  12. Supervised Learning
  13. Unsupervised Learning
  14. Reinforcement Learning
  15. Theory
  16. Quantum Computing
  17. Training Innovations
  18. Numerical Optimization
  19. Numerical Precision
  20. Hardware
  21. Cognitive Architectures
  22. Motion Planning
  23. Computational Creativity
  24. Cryptography
  25. Distributed Computing
  26. Clustering

Surveys

Datasets

Programming Frameworks

Learning to Compute

Natural Language Processing

Word Vectors

Sentence and Paragraph Vectors

Character Vectors

Sequence-to-Sequence Learning

Language Understanding

Question Answering, and Conversing

Convolutional

Recurrent

Convolutional Neural Networks

Recurrent Neural Networks

Convolutional Recurrent Neural Networks

Autoencoders

Restricted Boltzmann Machines

Biologically Plausible Learning

Supervised Learning

Unsupervised Learning

Reinforcement Learning

Theory

Quantum Computing

Training Innovations

Numerical Optimization

Numerical Precision

Hardware

Cognitive Architectures

Motion Planning

Computational Creativity

Cryptography

Distributed Computing

Clustering

Neural Networks (Part I) – Understanding the Mathematics behind backpropagation


Overview Artificial Neural Networks (ANNs) are inspired by the biological nervous system and model the learning behavior of the human brain. One of the most intriguing challenges for computer scientists is to model the human brain and effectively create a super-human intelligence that aids humanity in its course to achieve the next stage in evolution. Recent […]

https://biasvariance.wordpress.com/2015/08/18/neural-networks-understanding-the-math-behind-backpropagation-part-i/
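The mathematics the post refers to comes down to the chain rule. As a minimal illustration (my own, not taken from the article): for a single sigmoid neuron y = sigmoid(w*x + b) with squared error E = (y - t)^2 / 2, backpropagation gives dE/dw = (y - t) * y * (1 - y) * x, which we can verify against a numerical finite-difference estimate.

```python
import math

# Chain rule behind backpropagation for one sigmoid neuron:
#   y = sigmoid(w*x + b),  E = (y - t)^2 / 2
#   dE/dw = (y - t) * y * (1 - y) * x
# We sanity-check the analytic gradient numerically.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, t):
    y = sigmoid(w * x + b)
    return 0.5 * (y - t) ** 2

def analytic_grad_w(w, b, x, t):
    y = sigmoid(w * x + b)
    return (y - t) * y * (1.0 - y) * x

w, b, x, t = 0.7, -0.3, 2.0, 1.0
eps = 1e-6
numeric = (loss(w + eps, b, x, t) - loss(w - eps, b, x, t)) / (2 * eps)
print(analytic_grad_w(w, b, x, t), numeric)  # the two should agree closely
```

Backpropagation in a full network is the same computation applied layer by layer, reusing each layer's intermediate results.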

Classifying text with Neural Networks and mimir in JavaScript


via Classifying text with Neural Networks and mimir in JavaScript — Generally Intelligent — Medium.

I have been working a lot with Computer Vision (CV) in the last few years: inevitably, at some stage, the topic of Machine Learning will crop up when working with CV. I had the opportunity to learn about the most popular concepts and models: amongst those, I was particularly intrigued by Artificial Neural Networks (ANN). There are plenty of resources on the web that illustrate the ANN model, so I’m not going to explain it in this article, rather I will show how to use ANNs in JavaScript for text classification purposes.
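Before any network sees the text, it has to be turned into fixed-length numeric vectors; that featurization step is what mimir handles in the article. Here is the same bag-of-words idea sketched in Python rather than JavaScript (the function names are mine, not mimir's API): build a vocabulary from the training texts, then map each text to a 0/1 vector marking which vocabulary words it contains.

```python
# Bag-of-words featurization: texts become fixed-length 0/1
# vectors over a vocabulary, which a neural network can consume.

def build_vocabulary(texts):
    """Collect distinct lowercase words, in first-seen order."""
    vocab = []
    for text in texts:
        for word in text.lower().split():
            if word not in vocab:
                vocab.append(word)
    return vocab

def bow_vector(text, vocab):
    """1 where the vocabulary word occurs in the text, else 0."""
    words = set(text.lower().split())
    return [1 if w in words else 0 for w in vocab]

texts = ["the dog barks", "the cat meows"]
vocab = build_vocabulary(texts)
print(vocab)                             # ['the', 'dog', 'barks', 'cat', 'meows']
print(bow_vector("the dog meows", vocab))  # [1, 1, 0, 0, 1]
```

These vectors (paired with one-hot label vectors) are exactly the kind of input/output pairs a library like brain.js trains on.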

Google’s Deep Dream in PyCharm


via Google’s Deep Dream in PyCharm | JetBrains PyCharm Blog.

Reading the subject of this blog post I hope you’re all ready to have some fun! A month ago, Google released the code in an IPython Notebook letting everyone experiment with neural networks, image recognition algorithms and techniques as described in their Inceptionism: Going Deeper into Neural Networks article. Neural networks are known for their ability to recognize different shapes, forms, sophisticated objects, and even people’s faces. Well, Google engineers used an upside-down approach: you show different images to a pre-trained neural network and let it draw what it sees on the images, with the ultimate goal of generating new creative imagery based on artificial intelligence!
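The "upside-down approach" is gradient ascent on the input: instead of adjusting weights to reduce a loss, Deep Dream adjusts pixels to amplify whatever a chosen layer already responds to. As a toy stand-in (a single number in place of an image, a parabola in place of a layer activation, not Google's actual notebook code):

```python
# Gradient *ascent* on the input, Deep Dream style, in one dimension.
# Toy "activation" a(x) = -(x - 3)^2 peaks at x = 3; we nudge the
# "image" x uphill along da/dx = -2(x - 3) until it saturates the unit.

def activation(x):
    return -(x - 3.0) ** 2

def grad(x):
    return -2.0 * (x - 3.0)

x = 0.0  # "the image", here a single number
for _ in range(100):
    x += 0.1 * grad(x)  # ascend, not descend

print(x)  # climbs toward 3.0, where the activation peaks
```

In the real notebook the activation is a layer of a pretrained convolutional network and the gradient with respect to the pixels is computed by backpropagation, which is why the amplified patterns look like the shapes the network was trained to recognize.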
