Preprint of Optical Illusions Paper
With the help of Dr. Yampolskiy, I have put together the information from this series on optical illusions into an academic paper, which can be found here: https://arxiv.org/abs/1810.00415
I downloaded all of the files from Mighty Optical Illusions and ViperLib and cleaned them into JPEG (.jpg) images. They are available for download at https://www.floydhub.com/robertmax/datasets/illusions-jpg, and the source files and build process can be found at https://github.com/robertmaxwilliams/optical-illusion-dataset.
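As a rough illustration of the cleaning step, here is a minimal Python sketch that converts a folder of downloaded images to JPEG. The directory names are placeholders rather than anything from the actual build process; the real scripts live in the repository above.

# Minimal sketch of the cleaning step: convert every downloaded image to JPEG.
# The directory names are placeholders, not those used by the real build scripts.
from pathlib import Path
from PIL import Image

RAW_DIR = Path("raw_images")      # hypothetical folder of scraped files
OUT_DIR = Path("illusions_jpg")   # hypothetical output folder
OUT_DIR.mkdir(exist_ok=True)

for path in RAW_DIR.iterdir():
    try:
        with Image.open(path) as im:
            # JPEG has no alpha channel, so flatten everything to RGB first.
            im.convert("RGB").save(OUT_DIR / (path.stem + ".jpg"), "JPEG", quality=95)
    except OSError:
        # Skip anything Pillow cannot decode (broken downloads, stray HTML pages, etc.).
        print(f"skipping unreadable file: {path}")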
The Fermi Paradox is a huge problem for humanity. The simple question, “Where are they?” has no easy answer. Are there really no aliens out there colonizing the universe? Maybe life is unlikely, or maybe no life ever gets off its home planet. Or perhaps alien life is crowding the galaxy, but we are too small-minded to see it or are intentionally blocked off by a higher order. Assuming the observable universe is as cold and dead as it looks, humans have a few options going forward:
I’ve seen quines before, as well as quines that aren’t really quines, like this dumb bash trick:
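# $BASH_COMMAND expands to the command currently being executed, so this prints its own text without ever reading its own source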
echo $BASH_COMMAND
None of the website owners replied to my emails, so I collected the images myself. All of the content on these sites was itself collected from other sources, so I don’t think there is any issue with copyright. I have collected the image URL and some metadata for every illusion image on both websites, as well as for some of the non-illusion images.
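For a sense of what that collection step looks like, here is a minimal, hypothetical Python sketch that pulls image URLs and titles from a single gallery page. The page URL, the assumed HTML structure, and the output filename are illustrative placeholders, not the actual scraper from the repository.

# Hypothetical sketch of collecting image URLs and basic metadata from one gallery page.
# The page URL and the assumed HTML layout are placeholders for illustration only.
import csv
import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://www.moillusions.com/page/1/"  # placeholder gallery page

html = requests.get(PAGE_URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

rows = []
for post in soup.find_all("article"):
    title_tag = post.find("h2")
    img_tag = post.find("img")
    if img_tag is None:
        continue
    rows.append({
        "title": title_tag.get_text(strip=True) if title_tag else "",
        "image_url": img_tag.get("src", ""),
    })

# Write one CSV row per image so the metadata travels with its URL.
with open("illusion_metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "image_url"])
    writer.writeheader()
    writer.writerows(rows)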
I have been digging around for optical illusion datasets. They don’t seem to exist, so I will be laying the groundwork for creating one and starting to collect data. I don’t know all that much about copyright, so if anything I describe sounds legally dubious, I would really like to know.
Adversarial examples are very revealing about a neural net’s inner workings and weaknesses. This wonderful post by OpenAI discusses the security implications of adversarial examples, and this arXiv paper demonstrates extremely robust “adversarial patches” that transfer to new networks that were not used during their design. Now that adversarial example generation has reached this level of sophistication, it is natural to ask how immune the human visual system is to similar attacks, and what we can learn from attempting to generate adversarial examples for human vision.
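For context on the machine side, one of the simplest ways to generate such an example is the fast gradient sign method (FGSM). The PyTorch sketch below is a generic illustration of that technique, not the procedure used in any of the linked work; the model, image batch, label, and epsilon are placeholders.

# Minimal FGSM (fast gradient sign method) sketch in PyTorch.
# `model`, `image`, and `label` are placeholders for a pretrained classifier,
# an input batch of shape (1, 3, H, W) with pixel values in [0, 1], and the true class index.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    model.eval()
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label.
    logits = model(image)
    loss = F.cross_entropy(logits, label)

    # Backpropagate to get the gradient of the loss w.r.t. the input pixels.
    model.zero_grad()
    loss.backward()

    # Nudge each pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

A perturbation this small usually looks like faint noise to a person, which is exactly the asymmetry between machine and human vision that the question above is probing.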