The Fermi Paradox is a serious puzzle for humanity. The simple question, “Where are they?” has no easy answer. Are there really no aliens out there colonizing the universe? Maybe life is unlikely, or maybe no life ever gets off its home planet. Or perhaps alien life is crowding the galaxy and we are too small-minded to see it, or are intentionally walled off by a higher power. Assuming the observable universe is as cold and dead as it looks, humans have a few options going forward:
I’ve seen quines before, as well as programs that look like quines but aren’t, like this dumb bash trick:
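The classic cheat, as I understand it, is a script whose only line is `cat "$0"`: it prints its own source, but only by reading the script file back from disk, which disqualifies it as a true quine. A minimal sketch of the trick (the temp-file scaffolding here is just for demonstration):

```shell
# Write the fake "quine" to a temp file, run it, and check that the
# output matches its own source. The cheat: cat "$0" just reads the
# script file from disk instead of generating its source in code.
tmp=$(mktemp)
printf '%s\n' 'cat "$0"' > "$tmp"
out=$(sh "$tmp")
[ "$out" = 'cat "$0"' ] && echo "prints itself"
rm -f "$tmp"
```

Note that the trick falls apart if you feed the script to the shell on stdin, since `$0` no longer names a readable source file; a real quine has no such dependency.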
None of the website owners replied to my emails, so I collected the images myself. All of the content on these sites was itself collected from other sources, so I don’t think there is any copyright issue. I have collected the image URLs and some metadata for every illusion image on both websites, and for a fraction of the non-illusion images.
I have been digging around for optical illusion datasets. They don’t seem to exist, so I will be laying the groundwork for creating one and starting to collect data. I don’t know all that much about copyright, so if anything I describe sounds legally dubious I would really like to know.
Adversarial examples are very revealing about a neural net’s inner workings and weaknesses. This wonderful post by OpenAI discusses the security implications of adversarial examples, and this arXiv paper demonstrates extremely robust “adversarial patches” that transfer to new networks that were not used during their design. With adversarial example generation reaching this level of sophistication, it raises the question of how immune the human vision system is to similar attacks, and what we can learn from attempting to generate adversarial examples for human vision.