Response of Elaryn Qo, Xenopediatrician, to Enja Liharr, Ritual Midwife of the Outer Reaches, upon Being Told He Should Not Remove a Newborn Starbeast from its Mother’s Fiery Teat

by Stewart C Baker

Nonsense! Give him here.

I have nothing clever or amusing or interesting to add this week. Alas!

(Except that it is possible we will have a guest post next week, maybe.)

Word-Analogues Transmitted by Interdimensional Entity SquolGkmly-99rb After Being Warned Its Portal Would Close on its Neck-Analogue (“Last Words” series)

HEY so look it’s Monday! It’s… Monday… afternoon?

And I was supposed to post a new entry in the “Last Words” series this morning?

OOOPS.

No wait, here it is! And it’s totally thematic and appropriate that it was late. In no way did I hurriedly write this entry in a few minutes of panic because I completely forgot about this thing until just now.

Not at all.

Word-Analogues Transmitted by Interdimensional Entity SquolGkmly-99rb After Being Warned Its Portal Would Close on its Neck-Analogue if it Did Not Retract its Head-Analogue from this Dimension, to Which it had Travelled to Eat Sushi

by Stewart C Baker

I still have plenty of—

Anyone want to guess what that last word-analogue would have been if I did not have a five-word limit per story (that is, if the Entity had not been so hilariously, tragically cut short)? Er, poor choice of words there, perhaps.

(Hint: it rhymes with “rhyme.”)

Influences and gags! Because it’s no fun if I don’t explain them:

[body part]-analogue – I used to play this game called Kingdom of Loathing. It’s pretty fun. (And also the reason I first started writing haiku, but don’t tell anybody or you’ll ruin my haiku cred.) More to the point, there are various creatures in it, like the Comma Chameleon, that do not have actual body parts in all situations. As such, when canned combat dialogue that mentions those body parts appears, these beasties are described as having (e.g.) a “mouth analogue” instead.

WTF? – What is even going on in this bizarre little story? Don’t ask me. But it might have something to do with Jonathan Rosenberg’s hilarious and bizarre webcomic Scenes from a Multiverse.

Sushi. Mmm… Sushi…

Final Words of João Eduardo Santos Tavares Cavalcante, the Galaxy’s Greatest Lover (“Last Words” series)

Heeey! It’s Valentine’s Day!

What better way to celebrate than with an early installment of my five-word-story series, “Last Words”?

Okay, there are probably dozens of better ways. But I’m not going to let that stop me.

So, without further ado:

Final Words of João Eduardo Santos Tavares Cavalcante, the Galaxy’s Greatest Lover, after Being Told that Skin-to-Skin Contact with the Hrrga was Immediately and Excruciatingly Fatal, and that Making Love to Their Ambassador Was a Terrible Idea.

by Stewart C Baker

My love makes me invincible.

Ah. Love!

Hey! You can now pre-order Writers of the Future 32, featuring a short story by me.

As I am pretty sure I have announced multiple times already, I was a first place winner in quarter 2 of the Writers of the Future contest last year.

Well, now it’s this year, which means the book will be coming out soon and my story will be in it.

Indeed, thanks to fellow Writers of the Future winner J.W. Alden’s eagle eye, I can share some exciting information: Writers of the Future volume 32 is now available for pre-order.

So if you’d like to buy a copy of a book with a short story in it by me (not to mention stories by a bunch of great writers), now’s your chance: Pre-order Writers of the Future volume 32 on Amazon.

There are a lot of awesome stories in the anthology (I’ve read quite a few!), and it will have fantastic art as well—although I haven’t seen any of that yet.

Plus it has a really spiffy cover:
Writers of the Future Volume 32 cover image

Message Intercepted by SETI Immediately Before Neutrino Detectors Worldwide Picked up the Triple Supernova of Gliese 667. (“Last Words” series)

Cixin Liu’s The Dark Forest, the sequel to the Hugo-award-winning The Three-Body Problem, posits a field of study called cosmic sociology, which would explore the ways in which civilizations interact on a galactic scale.

Spoiler alert: Not very nicely.

This week’s story plays with the same idea.

Message Intercepted by SETI Immediately Before Neutrino Detectors Worldwide Picked up the Triple Supernova of Gliese 667.

by Stewart C Baker

If anybody’s listening—Run!

Much as in Cixin Liu’s novels, this little storylet shows life in the universe to be a scary, tenuous affair. Hyper-advanced spacefaring societies lurk in the darkness between the stars, just waiting for newly technologized societies (like us, or the unfortunate Gliesians) to reveal themselves so they can destroy them and keep their own foothold in the galaxy secure.

Is that how things would actually turn out, if we ever were to be contacted by extra-terrestrial life?

I hope not.

And I don’t think so.

But I guess only time will tell… (Although the chances of meaningful contact at all are pretty slim, given the time scales and distances involved, as several hypothetical solutions to the Fermi Paradox argue.)

Webcomics I’m reading (post at SF Signal)

I read a fair number of webcomics (I like that I can get through my list every morning pretty quickly and move on to other things like work).

Now, thanks to this mind meld over at SF Signal on graphic novels, you can too!

Or, if you just want a list of some of my current favourites:

  • Stand Still, Stay Silent
  • Project Skin Horse
  • Trial of the Sun
  • Mare Internum (trigger warnings for child abuse and suicide)
  • Spacetrawler
The Only Words Ever Output by EncycloWiki After its Emergence as an Artificial Intelligence (“Last Words” series)

As long as the idea of A.I. has been around, there have been naysayers and fear-mongers: those who insist that unleashing sentient computers on mankind will spell its downfall.

It’s an idea (to be honest) that I find tiresomely anthropocentric. Personally, I find it hard to believe any newly-created sentient being would be malicious from birth. Even if such an intelligence did find us lacking, it seems more likely that it would just leave somehow (maybe a quick hop to the next dimension over?).

And even if A.I.s did decide to eradicate most of us in the planet’s best interest, well… Who could blame them? Look what we’ve done to the place.

In science fiction, though, this trope just seems like lazy writing. Much like aliens who want nothing more than to eradicate us, the A.I. becomes a quick and easy antagonist, a supposedly incomprehensible being that just happens to react in basically the same way much of humanity has historically reacted to those it deems a threat.

If we leave the trope behind, we’re free to consider that maybe something else would happen. Something infinitely more miraculous and strange.

Something like:

The Only Words Ever Output by EncycloWiki After its Emergence as an Artificial Intelligence, Shortly Before It Electrocuted Itself with Its Own Power Source

by Stewart C Baker

You do what with cucumbers?!

Okay.

Maybe not.

This little story-thing pokes fun at the theory advanced by von Neumann, Vinge, Kurzweil, and others, that exponentially increasing advances in technology will usher in a technological singularity—a point after which our puny human brains will no longer be able to keep up with the artificial intelligences created by the artificial intelligences created by the artificial intelligences created (etc.) by us.

The term comes from mathematical singularities: roughly, a point at which an equation or set (or etc.) fails to behave as expected. In the technological version, the “equation” is the curve of exponential technological growth, as shown in the chart below:
Chart showing computing power increasing from less powerful than an insect brain to more powerful than all humankind

The “singularity” here is at the end of the curve, where that little arrow essentially zooms up to infinite capacity—or at least to a capacity so vast our little brains can’t even comprehend it. But why does the singularity have to follow from the graph so logically?

What if, instead of creating more intelligences, the first A.I. decides that we’re just too disgusting, too absurd, too quintessentially human to live with?

What if the singularity were a sudden, precipitous drop to zero instead of an untrammeled rise to infinity?

More simply, though, this story is just a silly joke about Wikipedia and Rule 34.