Tag Archives: ANN

Classifying text with Neural Networks and mimir in JavaScript

via Classifying text with Neural Networks and mimir in JavaScript — Generally Intelligent — Medium.

I have been working a lot with Computer Vision (CV) over the last few years, and inevitably, at some stage, the topic of Machine Learning crops up. I had the opportunity to learn about the most popular concepts and models; amongst those, I was particularly intrigued by Artificial Neural Networks (ANNs). There are plenty of resources on the web that illustrate the ANN model, so I’m not going to explain it in this article; rather, I will show how to use ANNs in JavaScript for text classification purposes.
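Before any network sees the text, it has to be turned into numbers. As a rough sketch of that first step in Python (the post itself works in JavaScript with mimir, and the function name here is my own), each text becomes a fixed-length vector over a shared vocabulary:

```python
def bag_of_words(texts):
    # Build a sorted vocabulary of every word seen across all texts.
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for t in texts:
        # One 0/1 slot per vocabulary word: 1 if the word occurs in this text.
        v = [0] * len(vocab)
        for w in t.lower().split():
            v[index[w]] = 1
        vectors.append(v)
    return vocab, vectors

vocab, vecs = bag_of_words(["order a pizza", "send a message"])
# Each text is now a fixed-length vector an ANN can take as input.
```

Vectors like these are what the network's input layer consumes, one input neuron per vocabulary word.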


via Artificial Intelligence a Practical Approach – Neural Networks.

Part 1: How Machine Learning Algorithms Work – Artificial Neurons and Single-Layer Neural Networks

via Artificial Neurons.

This article offers a brief glimpse of the history and basic concepts of machine learning. We will take a look at the first algorithmically described neural network and the gradient descent algorithm in the context of adaptive linear neurons, which will not only introduce the principles of machine learning but also serve as the basis for modern multilayer neural networks in future articles.


If you are interested in playing with the code examples, an IPython notebook version can be found at https://github.com/rasbt/pattern_classification.
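As a hedged sketch of the idea described above (the linked notebook is the authoritative version; the class and parameter names here are my own), an adaptive linear neuron trained with batch gradient descent might look like this:

```python
import numpy as np

class Adaline:
    """Adaptive linear neuron trained by batch gradient descent
    on the sum-of-squared-errors cost."""

    def __init__(self, eta=0.01, epochs=100):
        self.eta = eta          # learning rate
        self.epochs = epochs    # passes over the training set

    def fit(self, X, y):
        self.w = np.zeros(X.shape[1])
        self.b = 0.0
        for _ in range(self.epochs):
            output = X @ self.w + self.b   # linear activation, no threshold
            errors = y - output
            # Step along the negative gradient of the SSE cost.
            self.w += self.eta * X.T @ errors
            self.b += self.eta * errors.sum()
        return self

    def predict(self, X):
        # Thresholding is applied only after training, for classification.
        return np.where(X @ self.w + self.b >= 0.0, 1, -1)
```

The key difference from the classic perceptron is that the weight update uses the continuous linear output rather than the thresholded class label, which is what makes the cost differentiable and gradient descent applicable.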

Artificial neural networks and intelligent information processing, with Kurosh Madani

via Robotics News — Artificial neural networks and intelligent….

In this episode, Audrow Nash interviews Kurosh Madani from the University of Paris-EST Créteil (UPEC) about neural networks. The talk begins with an overview of neural networks before discussing their possible applications.

WhitePaper: Positive Neural Networks in Discrete Time Implement Monotone-Regular Behaviors

Click to Download WhitePaper

Many works have investigated the expressive power of various kinds of neural networks. We continue this study with inspiration from biologically plausible models. In particular, we study positive neural networks with multiple input neurons, where neurons only excite each other and do not inhibit each other. Different behaviors can be expressed by varying the connection strengths between the neurons. We show that in discrete time, and in the absence of noise, the class of positive neural networks captures the so-called monotone-regular behaviors, which are based on regular languages. A finer picture emerges if one takes into account the delay by which a monotone-regular behavior is implemented. Each monotone-regular behavior can be implemented by a positive neural network with a delay of one time unit. Some monotone-regular behaviors can be implemented with zero delay. And, interestingly, some simple monotone-regular behaviors cannot be implemented with zero delay.
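A toy illustration of the setting (my own construction, not the paper's formalism): a discrete-time network whose weights are all non-negative, so neurons can only excite one another. The single output neuron fires one step after both inputs fired together, a simple monotone behavior implemented with a delay of one time unit:

```python
def run_positive_net(a_spikes, b_spikes, threshold=2):
    """Discrete-time positive network: two input neurons feed one output
    neuron through non-negative weights (both 1). The output fires at
    time t+1 iff the weighted input at time t reaches the threshold."""
    out = [0]  # at t=0 the output neuron has seen no input yet
    for a, b in zip(a_spikes, b_spikes):
        out.append(1 if 1 * a + 1 * b >= threshold else 0)
    return out[:len(a_spikes)]
```

Because the weights are non-negative, the behavior is monotone: adding extra input spikes can only add output spikes, never remove them, which is the intuition behind the "monotone" in monotone-regular.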

Part 2: Introduction to Artificial Neural Networks

via Introduction to Artificial Neural Networks Part 2 – Learning.

Welcome to part 2 of my introduction to artificial neural networks series. If you haven’t yet read part 1, you should probably go back and read that first!


In part 1 we were introduced to what artificial neural networks are and we learnt the basics of how they can be used to solve problems. In this tutorial we will begin to find out how artificial neural networks can learn, why learning is so useful and what the different types of learning are. We will specifically be looking at training single-layer perceptrons with the perceptron learning rule.

Before we begin, we should probably first define what we mean by the word learning in the context of this tutorial. It is still unclear whether machines will ever be able to learn in the sense of having some kind of metacognition about what they are learning, as humans do. However, they can learn how to perform tasks better with experience. So here, we define learning simply as being able to perform better at a given task, or a range of tasks, with experience.
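The perceptron learning rule the tutorial covers can be sketched in a few lines of Python (a minimal sketch; the function name, hyperparameters and toy dataset are my own):

```python
def train_perceptron(X, y, eta=0.1, epochs=10):
    """Single-layer perceptron with a step activation, trained by the
    perceptron learning rule: nudge the weights only on misclassified
    examples, in the direction that corrects the mistake."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0 else 0
            update = eta * (target - pred)   # zero when the prediction is right
            w = [wj + update * xj for wj, xj in zip(w, xi)]
            b += update
    return w, b

# Learning logical AND, a linearly separable task the rule can solve.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

This is an example of supervised learning: each training input comes paired with the desired output, and the error between the two drives the weight updates.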

Part 1: Introduction to Artificial Neural Networks

via Introduction to Artificial Neural Networks – Part 1.

This is the first part of a three part introductory tutorial on artificial neural networks. In this first tutorial we will discover what neural networks are, why they’re useful for solving certain types of tasks and finally how they work.


Computers are great at solving algorithmic and math problems, but often the world can’t easily be defined with a mathematical algorithm. Facial recognition and language processing are a couple of examples of problems that can’t easily be quantified into an algorithm; these tasks, however, are trivial for humans. The key to Artificial Neural Networks is that their design enables them to process information in a similar way to biological brains, by drawing inspiration from how our own nervous system functions. This makes them useful tools for solving problems like facial recognition, which our biological brains can do easily.