As long as the idea of A.I. has been around, there have been nay-sayers, fear-mongers, those who insist that unleashing sentient computers on mankind will spell its downfall.
It’s an idea that, to be honest, I find tiresomely anthropocentric. Personally, I find it hard to believe any newly created sentient being would be malicious from birth. Even if such an intelligence did find us lacking, it seems more likely that it would just leave somehow (maybe a quick hop to the next dimension over?).
And even if A.I.s did decide to eradicate most of us in the planet’s best interest, well… Who could blame them? Look what we’ve done to the place.
In science fiction, though, this trope just seems like lazy writing. Much like aliens who want nothing more than to eradicate us, the A.I. becomes a quick and easy antagonist, a supposedly incomprehensible being that just happens to react in basically the same way much of humanity has historically reacted to those it deems a threat.
If we leave the trope behind, we’re free to consider that maybe something else would happen. Something infinitely more miraculous and strange.
This little story-thing pokes fun at the theory advanced by von Neumann, Vinge, Kurzweil, and others, that exponentially increasing advances in technology will usher in a technological singularity—a point after which our puny human brains will no longer be able to keep up with the artificial intelligences created by the artificial intelligences created by the artificial intelligences created (etc.) by us.
The term comes from mathematical singularities: basically, a point at which an equation or set (or the like) fails to behave as expected. In the technological version, the “equation” is the curve traced by exponential technological growth, as indicated by the chart below:
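For readers who want a concrete instance (the essay doesn’t name one), the simplest textbook singularity is the function 1/x, which is perfectly well behaved everywhere except at zero, where it blows up instead of taking any finite value:

```latex
% f(x) = 1/x is defined for all x except x = 0;
% as x approaches 0 from the right, the value
% grows without bound -- the singular point.
f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty
```

The technological analogy swaps the axis: instead of a function exploding at a point, it’s the capability curve itself that (supposedly) shoots off toward infinity.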
The “singularity” here is at the end of the curve, where that little arrow essentially zooms up to infinite capacity—or at least to a capacity so vast our little brains can’t even comprehend it. But why does the singularity have to follow from the graph so logically?
What if, instead of creating more intelligences, the first A.I. decides that we’re just too disgusting, too absurd, too quintessentially human to live with?
What if the singularity were a sudden, precipitous drop to zero instead of an untrammeled rise to infinity?
More simply, though, this story is just a silly joke about Wikipedia and Rule 34.