Coded by Mike Lynch for NaNoGenMo 2015.

Images were produced by applying a variant of Audun Øygard's DeepDraw process to a background of random noise, classifying the result with the CaffeNet image-recognition neural network, and using a random sample of the resulting categories as the targets for the next image.
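The feedback loop that carries categories from one image to the next can be sketched as follows. This is a minimal illustration of the control flow only: `deepdraw()` and `classify()` are stubs standing in for the real Caffe-based steps, and the category list is an invented sample, not the actual ImageNet labels used.

```python
import random

# Stand-in for a handful of ImageNet-style category labels (illustrative only).
IMAGENET_SAMPLE = [
    "jellyfish", "sea anemone", "brain coral", "starfish", "sea urchin",
    "conch", "chambered nautilus", "coral fungus", "anemone fish", "lionfish",
]

def deepdraw(image, targets):
    """Stub: the real step gradient-ascends the image toward the targets."""
    return hash((image, tuple(targets)))

def classify(image, top_n):
    """Stub: the real step runs CaffeNet and returns its top categories."""
    rng = random.Random(image)
    return rng.sample(IMAGENET_SAMPLE, top_n)

def feedback_loop(steps, n_targets=8, seed=1):
    """Draw an image, classify it, and feed sampled categories forward."""
    rng = random.Random(seed)
    image = rng.random()                          # stands in for random noise
    targets = rng.sample(IMAGENET_SAMPLE, n_targets)
    history = []
    for _ in range(steps):
        image = deepdraw(image, targets)          # draw toward the targets
        results = classify(image, top_n=n_targets)
        targets = rng.sample(results, n_targets)  # seed the next iteration
        history.append(targets)
    return history
```

Each entry in `history` is the set of eight categories that seeded one image, which is also the material the text generator works from.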

Texts were produced by seeding a Markov chain with the eight categories for each image; the chain was trained on a corpus built from the WordNet definitions of the one thousand ImageNet categories.
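A word-level Markov chain of the kind described above can be built in a few lines. This sketch trains on two invented WordNet-style glosses rather than the real thousand-definition corpus, and is not the project's actual code:

```python
import random
from collections import defaultdict

# Two invented WordNet-style glosses standing in for the real corpus.
GLOSSES = [
    "a marine invertebrate with a soft body and stinging tentacles",
    "a marine mollusk with a soft body enclosed in a spiral shell",
]

def build_chain(texts):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for text in texts:
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from a start word, choosing each successor at random."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)
```

In the project itself, the start words would come from the eight image categories, so each text riffs on what the network saw in its own picture.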

The source code, a detailed description of the process and full acknowledgments are available on GitHub.

Neuralgae is now available in bot form! Follow @neuralgae for regular bulletins from the loss layer.

Send comments / criticisms / shoutouts to @spikelynch on Twitter.

Credits and acknowledgements