NEURALGAE
Coded by Mike Lynch for NaNoGenMo 2015.
Images were produced by applying a variant of Audun Øygard's DeepDraw process to a background of random noise, classifying the result with the CaffeNet image-recognition neural network, and feeding a random sample of the resulting categories back in as the targets for the next image.
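The real pipeline (on GitHub) drives a Caffe network; as a rough sketch of just the feedback loop, with `render` and `classify` as toy stand-ins for the actual DeepDraw gradient-ascent step and the CaffeNet classifier, and `CATEGORIES` standing in for the ImageNet labels:

```python
import random

CATEGORIES = [f"class_{i}" for i in range(1000)]  # stand-in for the ImageNet labels


def render(image, targets):
    """Stand-in for DeepDraw: the real step runs gradient ascent on the
    network's activations for the target classes; here we just perturb."""
    return [pixel + random.uniform(-0.1, 0.1) for pixel in image]


def classify(image):
    """Stand-in for CaffeNet: return (category, score) pairs."""
    return [(c, random.random()) for c in random.sample(CATEGORIES, 50)]


def neuralgae_loop(n_images=3, n_targets=8):
    """The feedback loop: noise -> render -> classify -> sample -> render ..."""
    image = [random.random() for _ in range(64)]      # random-noise background
    targets = random.sample(CATEGORIES, n_targets)    # initial random targets
    frames = []
    for _ in range(n_images):
        image = render(image, targets)
        scores = classify(image)
        # a random sample of the classifier's output categories
        # becomes the target set for the next image
        targets = random.sample([c for c, _ in scores], n_targets)
        frames.append((image, targets))
    return frames
```

Each frame thus inherits its subject matter from how the network "saw" the previous frame, which is what drives the sequence's slow drift between categories.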
Texts were produced by applying a Markov chain algorithm to the eight categories for each image, using a corpus built from the WordNet definitions of the one thousand ImageNet categories.
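The full text generator is in the GitHub repository; as a minimal sketch of the idea, a word-level Markov chain trained on a definition corpus and seeded from a category word might look like this (function names are illustrative, not the project's own):

```python
import random
from collections import defaultdict


def build_chain(corpus):
    """Build a word-level Markov chain: map each word in the corpus
    to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain


def generate(chain, seed, length=12):
    """Walk the chain from a seed word, restarting at a random
    key whenever a word has no recorded successors."""
    word = seed
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        word = random.choice(followers) if followers else random.choice(list(chain))
        out.append(word)
    return " ".join(out)
```

Seeding the walk with each image's eight categories is what ties the bulletin text, however loosely, to what the network claimed to see.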
The source code, a detailed description of the process and full acknowledgments are available on GitHub.
Neuralgae is now available in bot form! Follow @neuraglae for regular bulletins from the loss layer.
Send comments / criticisms / shoutouts to @spikelynch on Twitter.
Credits and acknowledgments
- Princeton University (2010), WordNet, Princeton University
- Bird, Steven, Edward Loper and Ewan Klein (2009), Natural Language Processing with Python, O’Reilly Media Inc.
- Øygard, Audun (2015), 'Visualising GoogLeNet classes'
- Mordvintsev, Alexander, Michael Tyka and Christopher Olah (2015), Deepdream GitHub repository