Tag Archives: machine learning

Step Up To Recurrent Neural Networks

// Code Download

Recurrent neural networks can solve some types of problems that regular feed-forward networks cannot handle.
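The difference is the hidden state that persists across time steps. A minimal sketch of one step of a vanilla (Elman) RNN, with assumed toy dimensions, shows the mechanism a feed-forward network lacks:

```python
# Minimal sketch of a vanilla RNN step (illustrative, toy sizes assumed):
# the hidden state h carries information from earlier inputs forward,
# which a plain feed-forward network has no way to do.
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4
W_xh = rng.normal(size=(hidden_size, input_size)) * 0.1   # input-to-hidden weights
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One time step: the new state depends on the input AND the previous state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):  # a sequence of 5 inputs
    h = rnn_step(x_t, h)
print(h.shape)  # the final state summarizes the whole sequence
```

Because the same weights are reused at every step, the network can process sequences of any length with a fixed number of parameters.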

Introducing Test-driven Machine Learning

// Packt Publishing

In this article by Justin Bozonier, the author of the book Test-Driven Machine Learning, we will see how to develop complex software (sometimes rooted in randomness) in small, controlled steps. It will also show you how to begin developing solutions to machine learning problems using test-driven development (hereafter, TDD). The book will not make you a master of TDD; instead, it will help you begin your journey and expose you to guiding principles that you can use to creatively solve challenges as you encounter them.
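As an illustrative sketch (not from the book) of what a first TDD-style step for an ML component might look like: seed the randomness so the test is repeatable, and assert only a modest, stable property, such as beating a majority-class baseline.

```python
# Illustrative only (not the book's code): a first TDD-style test for a
# toy classifier. Randomness is seeded, and the assertion is modest --
# the model must beat a majority-class baseline.
import random

def majority_class(labels):
    """The baseline prediction: the most common training label."""
    return max(set(labels), key=labels.count)

class ThresholdClassifier:
    """Toy model under test: predicts 1 when the feature exceeds a threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
    def predict(self, x):
        return 1 if x > self.threshold else 0

def test_beats_majority_baseline():
    random.seed(7)  # controlled randomness: the test is repeatable
    data = [(random.random(), 0) for _ in range(50)] + \
           [(random.random() + 0.6, 1) for _ in range(50)]
    labels = [y for _, y in data]
    baseline_acc = labels.count(majority_class(labels)) / len(labels)
    model = ThresholdClassifier(threshold=0.8)
    model_acc = sum(model.predict(x) == y for x, y in data) / len(data)
    assert model_acc > baseline_acc

test_beats_majority_baseline()
```

The point is the shape of the workflow, not the model: a failing test like this is what drives the first real implementation.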

We will answer three questions in this article:


Recurrent Neural Networks Tutorial Part 3: Backpropagation Through Time and Vanishing Gradients

// WildML

This is the third part of the Recurrent Neural Network Tutorial.

In the previous part of the tutorial we implemented an RNN from scratch, but didn't go into detail on how the Backpropagation Through Time (BPTT) algorithm calculates the gradients. In this part we'll give a brief overview of BPTT and explain how it differs from traditional backpropagation. We will then try to understand the vanishing gradient problem, which led to the development of LSTMs and GRUs, two of the most popular and powerful models currently used in NLP (and other areas). The vanishing gradient problem was originally discovered by Sepp Hochreiter in 1991 and has been receiving attention again recently due to the increased application of deep architectures.

To fully understand this part of the tutorial I recommend being familiar with how partial differentiation and basic backpropagation work. If you are not, you can find excellent tutorials here, here, and here, in order of increasing difficulty.
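The vanishing-gradient effect the tutorial describes can be seen numerically: backpropagating through time multiplies the gradient by the recurrent Jacobian at every step, so its norm tends to shrink geometrically. A hedged sketch with toy, assumed numbers:

```python
# Numerical sketch of the vanishing gradient in BPTT (toy setup, not the
# tutorial's code). Each backward step multiplies the gradient by the
# Jacobian dh_t/dh_{t-1} = diag(tanh'(a_t)) @ W_hh, whose norm here is
# well below 1, so the gradient decays geometrically over time.
import numpy as np

rng = np.random.default_rng(1)
hidden = 10
W_hh = rng.normal(size=(hidden, hidden)) * 0.1  # small recurrent weights

# Forward pass: record the per-step Jacobians dh_t/dh_{t-1}.
h = np.zeros(hidden)
jacobians = []
for _ in range(30):
    a = W_hh @ h + rng.normal(size=hidden)
    h = np.tanh(a)
    jacobians.append(np.diag(1 - h**2) @ W_hh)  # tanh'(a) = 1 - tanh(a)^2

# Backward pass: the gradient w.r.t. earlier hidden states collapses.
grad = np.ones(hidden)
norms = []
for J in reversed(jacobians):
    grad = J.T @ grad
    norms.append(np.linalg.norm(grad))
print(norms[0], norms[-1])  # the norm shrinks toward zero going back in time
```

This is exactly why gradients from far-away time steps contribute almost nothing to learning, and why gating architectures like LSTMs and GRUs were introduced.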

Neural Network Papers

Table of Contents

  1. Surveys
  2. Datasets
  3. Programming Frameworks
  4. Learning to Compute
  5. Natural Language Processing
     - Word Vectors
     - Sentence and Paragraph Vectors
     - Character Vectors
     - Sequence-to-Sequence Learning
     - Language Understanding
     - Question Answering and Conversing
  6. Convolutional Neural Networks
  7. Recurrent Neural Networks
  8. Convolutional Recurrent Neural Networks
  9. Autoencoders
  10. Restricted Boltzmann Machines
  11. Biologically Plausible Learning
  12. Supervised Learning
  13. Unsupervised Learning
  14. Reinforcement Learning
  15. Theory
  16. Quantum Computing
  17. Training Innovations
  18. Numerical Optimization
  19. Numerical Precision
  20. Hardware
  21. Cognitive Architectures
  22. Motion Planning
  23. Computational Creativity
  24. Cryptography
  25. Distributed Computing
  26. Clustering
Scikit-learn and Cross Validation

Python’s scikit-learn package is widely used for running machine learning algorithms. This post shows a simple way to use scikit-learn and pandas to run a random forest classifier with cross-validation on a dataset. In this example, we use pandas.read_csv to read data from the CSV input file. Here, the column Target is expected to […]
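A runnable sketch of that workflow, with a synthetic DataFrame standing in for the CSV (the "Target" column name is from the post; the data and feature names here are made up for illustration):

```python
# Random forest + cross-validation with scikit-learn and pandas.
# Synthetic data stands in for the post's CSV file; only the "Target"
# column name comes from the original text.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feature_a": rng.normal(size=200),
    "feature_b": rng.normal(size=200),
})
df["Target"] = (df["feature_a"] + df["feature_b"] > 0).astype(int)
# With a real file you would instead load it, e.g. df = pd.read_csv("input.csv")

X = df.drop(columns=["Target"])  # features
y = df["Target"]                 # labels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print(scores.mean())
```

`cross_val_score` handles the fold splitting, fitting, and scoring in one call, which is what makes this the "simple" route the post describes.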


Calling Python’s scikit-learn machine learning library from Julia

It turns out to be very easy. Here is Python code for SVM classification, and here is the Julia code that calls it, using the PyCall module.
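The original snippets are not reproduced here; as a hedged sketch of the Python side (not the author's code), an SVM classification with scikit-learn might look like:

```python
# Minimal scikit-learn SVM classification sketch -- the kind of Python
# code that Julia can drive via PyCall. Illustrative only, not the
# post's original snippet.
from sklearn import svm
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)        # a standard built-in dataset
clf = svm.SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)
print(clf.score(X, y))                   # accuracy on the training data
```

From Julia, PyCall imports the `sklearn.svm` module and calls the same objects and methods directly, which is why the translation is so mechanical.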


Neural Networks (Part I) – Understanding the Mathematics behind backpropagation

Overview: Artificial Neural Networks (ANNs) are inspired by the biological nervous system and model the learning behavior of the human brain. One of the most intriguing challenges for computer scientists is to model the human brain and effectively create a super-human intelligence that aids humanity in its course to achieve the next stage in evolution. Recent […]
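The mathematics in question is the chain rule. As a hedged sketch (toy numbers, not the article's derivation): for a one-unit network y = sigmoid(w·x + b) with squared loss, backpropagation gives dL/dw = (y − t)·y·(1 − y)·x, which we can verify against a finite-difference estimate.

```python
# Chain-rule gradient for a one-unit network with squared loss, checked
# numerically. Toy values; illustrative of the backpropagation math,
# not taken from the article.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, t):
    y = sigmoid(w * x + b)
    return 0.5 * (y - t) ** 2

x, t, w, b = 0.7, 1.0, 0.3, -0.1
y = sigmoid(w * x + b)
grad_w = (y - t) * y * (1 - y) * x   # analytic gradient via the chain rule:
                                     # dL/dy * dy/dz * dz/dw

eps = 1e-6                           # central finite-difference check
grad_w_num = (loss(w + eps, b, x, t) - loss(w - eps, b, x, t)) / (2 * eps)
print(abs(grad_w - grad_w_num))      # ~0: the derivation matches
```

The same three-factor product, applied layer by layer, is all that full backpropagation does at scale.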