Tag Archives: Deep Learning

Step Up To Recurrent Neural Networks
// Code Download

Recurrent neural networks can solve some types of problems that regular feed-forward networks cannot handle.
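
As a rough illustration of the claim above, here is a minimal vanilla RNN forward pass in Python/NumPy (a sketch for intuition, not the article's downloadable code; weights and sizes are arbitrary): the hidden state h is carried from one time step to the next, so the final output can depend on the whole input sequence, which a single feed-forward pass over one input cannot do.

import numpy as np

# Minimal vanilla RNN forward pass (illustrative sketch). The hidden state h
# is fed back in at every step, which lets the network remember earlier
# elements of the sequence.
def rnn_forward(xs, Wxh, Whh, bh):
    h = np.zeros(Whh.shape[0])
    for x in xs:                      # one step per sequence element
        h = np.tanh(Wxh @ x + Whh @ h + bh)
    return h                          # a summary of the entire sequence

rng = np.random.default_rng(0)
Wxh = rng.normal(size=(4, 2))         # input-to-hidden weights
Whh = rng.normal(size=(4, 4))         # hidden-to-hidden (recurrent) weights
bh = np.zeros(4)
sequence = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(rnn_forward(sequence, Wxh, Whh, bh))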

Introducing Test-driven Machine Learning
// Packt Publishing

In this article by Justin Bozonier, the author of the book Test Driven Machine Learning, we will see how to develop complex software (sometimes rooted in randomness) in small, controlled steps. It will also guide you on how to begin developing solutions to machine learning problems using test-driven development (written as TDD from here on). The book will not make you a master of TDD; instead, it will help you begin your journey and expose you to guiding principles, which you can use to creatively solve challenges as you encounter them.

We will answer the following three questions in this article:

[…]
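
As a rough sketch of the idea above (not code from the book; the BiasedCoin class and its 0.7 bias are invented for illustration), a first TDD step for code rooted in randomness can pin down a statistical property over many trials rather than an exact value:

import random
import unittest

# Illustrative test-driven step for stochastic behaviour: assert an
# aggregate property with a generous tolerance so the test stays
# controlled rather than flaky.
class BiasedCoin:
    def __init__(self, probability_of_heads):
        self.p = probability_of_heads

    def flip(self):
        return "heads" if random.random() < self.p else "tails"

class BiasedCoinTest(unittest.TestCase):
    def test_flips_heads_roughly_70_percent_of_the_time(self):
        coin = BiasedCoin(0.7)
        flips = [coin.flip() for _ in range(10_000)]
        observed = flips.count("heads") / len(flips)
        self.assertAlmostEqual(observed, 0.7, delta=0.05)

if __name__ == "__main__":
    unittest.main()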

Neural Network Papers


Table of Contents

  1. Surveys
  2. Datasets
  3. Programming Frameworks
  4. Learning to Compute
  5. Natural Language Processing
  6. Convolutional Neural Networks
  7. Recurrent Neural Networks
  8. Convolutional Recurrent Neural Networks
  9. Autoencoders
  10. Restricted Boltzmann Machines
  11. Biologically Plausible Learning
  12. Supervised Learning
  13. Unsupervised Learning
  14. Reinforcement Learning
  15. Theory
  16. Quantum Computing
  17. Training Innovations
  18. Numerical Optimization
  19. Numerical Precision
  20. Hardware
  21. Cognitive Architectures
  22. Motion Planning
  23. Computational Creativity
  24. Cryptography
  25. Distributed Computing
  26. Clustering

Surveys

Datasets

Programming Frameworks

Learning to Compute

Natural Language Processing

Word Vectors

Sentence and Paragraph Vectors

Character Vectors

Sequence-to-Sequence Learning

Language Understanding

Question Answering, and Conversing

Convolutional

Recurrent

Convolutional Neural Networks

Recurrent Neural Networks

Convolutional Recurrent Neural Networks

Autoencoders

Restricted Boltzmann Machines

Biologically Plausible Learning

Supervised Learning

Unsupervised Learning

Reinforcement Learning

Theory

Quantum Computing

Training Innovations

Numerical Optimization

Numerical Precision

Hardware

Cognitive Architectures

Motion Planning

Computational Creativity

Cryptography

Distributed Computing

Clustering

Google’s Deep Dream in PyCharm


via Google’s Deep Dream in PyCharm | JetBrains PyCharm Blog.

Reading the subject of this blog post, I hope you’re all ready to have some fun! A month ago, Google released the code in an IPython Notebook, letting everyone experiment with neural networks, image recognition algorithms, and techniques as described in their Inceptionism: Going Deeper into Neural Networks article. Neural networks are known for their ability to recognize different shapes, forms, sophisticated objects, and even people’s faces. Well, Google engineers used an upside-down approach: you show different images to a pre-trained neural network and let it draw what it sees in the images, with the ultimate goal of generating new creative imagery based on artificial intelligence!

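For intuition, here is a hedged sketch of that upside-down approach using PyTorch/torchvision rather than the Caffe-based IPython Notebook Google released: fix a pre-trained network and run gradient ascent on the input image so it amplifies whatever a chosen layer already responds to. The layer index, step size, and iteration count below are illustrative guesses, not values from the original notebook.

import torch
import torchvision.models as models

# DeepDream-style "upside-down" use of a trained network: instead of updating
# weights, we update the image to boost one layer's activations.
# (Newer torchvision versions load ImageNet weights via weights=... instead
# of pretrained=True.)
model = models.vgg16(pretrained=True).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

def dream_step(img, layer_index=20, lr=0.05):
    """One gradient-ascent step on the image for one chosen layer."""
    img = img.clone().requires_grad_(True)
    x = img
    for i, layer in enumerate(model):
        x = layer(x)
        if i == layer_index:
            break
    loss = x.norm()                   # how strongly the layer responds
    loss.backward()
    with torch.no_grad():
        img = img + lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

# Start from noise (or a real photo normalized to ImageNet statistics).
img = torch.rand(1, 3, 224, 224)
for _ in range(20):
    img = dream_step(img)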

Some simple facial analytics on actors (and my manager)


Some time ago I was at a party. Inevitably, a question came up: “Longhow, what kind of work are you doing?” I answered: I am a data scientist, I have the sexiest job; do you want me to show you how to use deep learning for facial analytics… Oops, it became very quiet. […]

https://longhowlam.wordpress.com/2015/05/28/some-simple-facial-analytics-on-actors-and-my-manager/

Deep Learning (CNN) Architecture and Components.


This is a brief summary of deep learning architectures, referring to Stanford’s CS231n CNN course. I will keep trying to supplement it. 1. Loss Function. Classification: 1.1 Hinge Loss […]. For multi-label classification, […] 1.2 Multi-class Hinge Loss […], where […] is the right class and […] denotes the margin; in neural networks, […] Sometimes, people use the squared […]

https://slowbull.wordpress.com/2015/07/19/deep-learning-cnn-architecture-and-components/
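
The formulas in the excerpt above did not survive extraction. As a hedged reconstruction, the standard CS231n-style multi-class hinge (SVM) loss it appears to describe sums, over every wrong class, the amount by which that class's score comes within a margin of the correct class's score:

import numpy as np

# Reconstruction for reference, not the post's own code:
# L_i = sum over j != y_i of max(0, s_j - s_{y_i} + margin),
# where y_i is the right class and margin is the required score gap.
def multiclass_hinge_loss(scores, correct_class, margin=1.0):
    margins = np.maximum(0.0, scores - scores[correct_class] + margin)
    margins[correct_class] = 0.0      # the correct class is not penalized
    return margins.sum()

scores = np.array([3.2, 5.1, -1.7])   # made-up class scores
print(multiclass_hinge_loss(scores, correct_class=0))
# max(0, 5.1-3.2+1) + max(0, -1.7-3.2+1) = 2.9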

Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images


Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, Martin Riedmiller: Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. arXiv, 2015. This paper deals with the problem of model-based reinforcement learning (RL) from images. The idea behind model-based RL is to learn a model of the transition dynamics of the system/robot […]

https://gridworld.wordpress.com/2015/07/23/embed-to-control-a-locally-linear-latent-dynamics-model-for-control-from-raw-images/
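
To make the paper's core idea concrete, here is a rough sketch (my own illustration under assumed shapes, not the authors' implementation): images are encoded into a low-dimensional latent state z, and the transition to the next latent state is modelled as linear around the current state and control, z_next ≈ A z + B u + o, where A, B, and o would be produced by a learned network.

import numpy as np

# Locally linear latent dynamics step (illustrative only; in the paper the
# matrices A, B and offset o are predicted by a neural network from the
# current latent state, and z comes from an encoder applied to raw images).
def locally_linear_step(z, u, A, B, o):
    return A @ z + B @ u + o

z = np.zeros(3)                        # current latent state (encoded image)
u = np.array([0.1, -0.2])              # control input
A = np.eye(3)                          # local transition matrix (assumed)
B = 0.1 * np.random.randn(3, 2)        # local control matrix (assumed)
o = np.zeros(3)
print(locally_linear_step(z, u, A, B, o))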