Recurrent Neural Networks have boomed in popularity over the past months, thanks to articles like the amazing The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy.
Long story short, Recurrent Neural Networks (RNNs) are a type of neural network that operates over sequences of vectors, with units that keep track of their own state history.
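The core of that idea, a hidden state carried from one timestep to the next, can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular library's API; the names and dimensions here are made up for the example:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # One recurrence step: the new hidden state mixes the current input
    # with the previous hidden state, so earlier inputs in the sequence
    # keep influencing later outputs.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8          # illustrative sizes
W_xh = rng.normal(size=(input_dim, hidden_dim)) * 0.1
W_hh = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)              # initial state
sequence = rng.normal(size=(5, input_dim))  # a sequence of 5 input vectors
for x_t in sequence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)   # state threads through time
```

Real RNN layers (and the LSTM used below) add gating machinery on top of this loop, but the state-carrying recurrence is the same.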
Neural networks are increasingly easy to use, especially in the Python ecosystem, with libraries like Caffe, Keras or Lasagne making the assembly of neural networks a trivial task.
I was checking the documentation on Keras and found an example that generates text from Nietzsche's writings via a Long Short-Term Memory (LSTM) network.
I ran the example, and after a couple of hours the model started producing pretty convincing, Nietzsche-looking text.