Tag Archives: robot

“Sleeping Giants” by Sylvain Neuvel


A number of giant robot pieces of extremely advanced, extraterrestrial design are discovered and assembled in secret.  It’s a superweapon à la MechWarrior, cool!  It needs two pilots inside, one for the arms and one for the legs.  A linguist is also on the team, since they need to decipher the console controls.  Oh no!  Love triangle between girl pilot, boy pilot and boy linguist!!

Kind of an interesting story at the beginning, but it didn’t really go anywhere.  The style is unique – told entirely via “interview records” from some secretive, supra-national, illuminati-esque narrator who is the one calling (most of) the shots.

“The Robots of Dawn” by Isaac Asimov


Well, not even Babe Ruth hit a home run every at-bat.  I thought this novel was, unfortunately, a huge strikeout compared to other Asimov novels.  The plot was 1) boring and 2) much too focused on sex (even, but not limited to, robot sex!); in particular I really disagree with the degree to which Asimov (or his characters) equates sex with love.

It was hard to relate to Baley’s neurosis about being outdoors and being virtually crippled by a rainstorm.  Yes, I get that he and all Earthpeople have been changed by generations of living in enclosed cities.  I got that during the last two books.  This time the point just got hammered over and over again.

There’s a cheesy bit of linkage in the book between the Robot and Foundation series.  One character muses about learning so much about the human brain (via attempts to recreate it in robots) that it might be possible to predict human behavior.  “We could call it, uh I dunno, … psychohistory!  Yeah, I like the sound of that.”  There was another bit that made me groan, where the same character wonders whether, after millennia of colonization and spread throughout the galaxy, mankind will ever forget its origins on Earth.  (That was the focus of one of the Foundation books.)

In the end, the big reveal is that at least one robot has inadvertently been programmed in such a way as to provide telepathic powers.  The robot, Giskard, can read minds and influence them to some extent.  I wonder if this might be a hint about the origins of the Second Foundation – are they really a group of robots quietly monitoring humanity’s development, still obeying the Three Laws?

Relevant xkcd today.

“Wired for War” by P. W. Singer


Interesting overview of the developing use of unmanned systems in warfare.  The vast majority of the book focuses on the ethical, social, and legal implications; all fine and good but the engineer in me did wish it got more into the details of how some of these systems worked.  Some of the specific systems it did mention frequently are iRobot’s PackBot, Foster-Miller’s SWORDS, and Predator.  (Note that the book was published in 2009, so it is missing 6 years of unmanned system development … practically an eternity!)

I am hesitant to agree with calling all these systems “robots” though.  I suppose I fall more into the (Star Trek’s) Data or (Star Wars’) C-3PO camp of what constitutes a “robot” – intelligent yes, but more importantly independent.  The unmanned systems used by the military do not think or act for themselves; they are all remotely controlled by a human.  There is quite a big step between that and creating Skynet-style robots.  And personally, I know just enough about software engineering to be very hesitant to subscribe to a “strong AI” future.  Computers are not smart; they are just fast.

Anyway.  Unmanned systems in war.  There’s some discussion in the book of military doctrine for using unmanned systems.  First is the “mothership” or control center idea – a single operator controls an entire robot army.  Kind of like someone playing StarCraft, but each unit on the screen is an actual robot, ready for battle.  The second option discussed is the “swarm” doctrine.  There is no explicit control over each unit here; rather, the robots as a group are given an objective and work together to complete the mission.
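The contrast between the two doctrines can be sketched as a toy assignment problem (entirely my own construction; the book stays at the conceptual level). In the “mothership” version one controller maps every unit to a task explicitly; in the “swarm” version each unit decides locally, and a sensible plan emerges anyway:

```python
# Toy contrast of the two doctrines.  All names and the 1-D "battlefield"
# are made up for illustration; the book describes no algorithms.

def mothership_assign(units, objectives):
    """Central control: one operator explicitly maps each unit to an objective."""
    return {u: objectives[i % len(objectives)] for i, u in enumerate(units)}

def swarm_assign(units, objectives):
    """No central mapping: each unit greedily claims the nearest unclaimed
    objective (falling back to the closest objective overall)."""
    claimed, plan = set(), {}
    for u in units:
        free = [o for o in objectives if o not in claimed]
        target = min(free or objectives, key=lambda o: abs(o - u))
        claimed.add(target)
        plan[u] = target
    return plan

units = [0, 4, 9]       # unit positions along a 1-D line
objectives = [1, 5, 8]  # objective positions

print(mothership_assign(units, objectives))  # explicit, operator-driven plan
print(swarm_assign(units, objectives))       # plan emerges from local decisions
```

The interesting part (and the book’s point, I think) is that the swarm version has no single point of failure: knock out the “mothership” and the central plan dies, but each swarm unit can keep choosing targets on its own.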

One of the most impressive uses of remotely-piloted aircraft in a real war was by Israel during the opening of its 1982 war with Syria, Operation Mole Cricket 19.  The Syrians had the latest and greatest Soviet radars and SAM sites.  The Israelis first sent in a wave of drones.  This caused the Syrian SAM radars to track the drones and shoot them down.  Thus they were down a few rounds of ammo…but the real secret was that the drones relayed the radar frequencies being used by each individual radar site back to the main Israeli force.  Soon after, Israeli fighters went in with radar-homing missiles locked to the SAM frequencies.  The Syrian air defenses were demolished with few Israeli losses, and the war was pretty much decided since Israeli air superiority was assured.

Another interesting note on the pace of technological development, particularly in military tech.  Paradoxically, being the tech leader is a difficult position.  The leader shoulders virtually all the development cost, whereas others coming in later can easily copy their designs.  (Witness: about a bajillion types of Chinese UAVs on this Wikipedia page)  Also it is easy for the leader to pigeonhole themselves into non-optimal solutions – the newcomer can apply lessons learned and avoid problems, whereas the tech leader may be too invested to change.

Some things to think about for unmanned warfare policy managers (if such a beast exists):

  • Robot warfare is seen as cowardly by the enemy, plus it signals that we are very loss-averse – if they can kill enough soldiers with IEDs then we will give up (even though those IED strikes are usually tactically useless)
  • Robot warfare makes the U.S. public more disconnected from war and also makes leaders more likely to use force.  For the military operators themselves, it can feel like a video game – too easy to forget that people are dying on the other end of the Predator missile strikes.
  • Ethics and legality of robot warfare: who is responsible for accidental targeting of civilians?  Military user?  The programmer who made a mistake in the code?  (I hope not!)

Air Travel Automation

Just rode the Skylink at DFW and, gazing out the window at a dozen luggage carts, meal carts, fuel trucks and the like, all scurrying to and fro from gate to gate, I saw potential for significant automation.  Specifically, I was reminded of Amazon’s warehouse automation.  Why not automate all those airport vehicles?  They would operate in a closed space, so there wouldn’t be many pedestrians or “rogue” vehicles to worry about; plus any number of signs or painted patterns could be applied to assist computer vision systems.

And we could have UAVs flying us around in a few years too.

“House of Suns” by Alastair Reynolds


This story takes place roughly 6 million years from now, when the entire Milky Way has been populated by humans or their evolved (naturally or artificially) offspring.  The universal speed limit (of light) still applies, which forces some creativity in exploring the galaxy.  Sometime in the distant past (but far in our future), it was faddish amongst the rich and powerful to create shatterlings.  The idea is to clone yourself many times (1,000 clones seems to be the norm), equip your shatterlings with souped-up ships, and then have them go off one by one in all directions to explore the galaxy.  Every couple of hundred thousand years or so, they all get together and exchange memories.  Not just by telling stories — there is some kind of actual memory-sharing technology implied, so that each one really does have the memories of all the others, somewhere deep down inside.

Key to the whole shatterling idea is a method of preserving life throughout the long, long sub-light speed voyages between stars.  There are two solutions, first your standard freeze’n’thaw, then something more interesting involving “synchromesh” and stasis chambers, which allow the user to compress time, so that an hour on the inside is a decade on the outside (or whatever compression rate is desired).
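Out of curiosity, the arithmetic on that compression rate (my own extrapolation from the single hour-per-decade figure; the book doesn’t spell this out):

```python
# Quick arithmetic on the "synchromesh" idea: one subjective hour per
# objective decade implies an enormous compression ratio.
HOURS_PER_YEAR = 365.25 * 24  # 8766 hours in an average (Julian) year

def compression_ratio(inside_hours, outside_years):
    """Objective hours elapsed per subjective hour spent in stasis."""
    return outside_years * HOURS_PER_YEAR / inside_hours

ratio = compression_ratio(inside_hours=1, outside_years=10)
print(round(ratio))  # 87660 -> nearly 88,000 objective hours per subjective hour

# At that setting, a ~200,000-year leg of the galactic circuit costs a
# shatterling only a couple of subjective years:
subjective_hours = 200_000 * HOURS_PER_YEAR / ratio
print(round(subjective_hours / HOURS_PER_YEAR, 1))  # 2.3 subjective years per leg
```

So the ~200,000-year gaps between reunions are survivable in stasis even before the freeze’n’thaw option comes into it.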

Thus the shatterlings roam the galaxy, with no real goals other than to soak up knowledge, experience the wonders humanity creates, and maybe help out a civilization here and there by building a stardam (to safely enclose and neutralize a soon-to-be supernova) or serving justice in a local micro-war.  They are more-or-less immortal gods, who dive in and out of history as they please.  They are really just going around in circles, traveling through the same regions of space, if not the same planets too, each time around on the galactic circuit, but since ~200,000 years of surface time have passed since the last visit, the situation on the ground is often totally different.  The Lines (each Line being a set of clones from one individual) actually maintain something called the “Universal Actuary,” which gives continually updated probabilities of existence for planetary or multi-world civilizations and empires.  “Let’s see, the last UA entry for the Poobabian Confederacy of Worlds is only 4,000 years old, but we are 10,000 years out … there’s currently a 62% probability that the Confederacy is still in existence, but by the time we arrive the probability will be only 14%…”  The Lines derisively refer to everyone else as “turnover” civilizations.  Rise and fall of empire is universal and continual.
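Just for fun, a toy version of the Universal Actuary: assume a constant hazard rate (entirely my assumption; the book never gives its model), so survival probability decays exponentially with time since last contact. A mean civilization lifetime of around 8,000 years lands in the same ballpark as the quoted numbers:

```python
# Toy "Universal Actuary" under an assumed constant hazard rate.
# The mean-lifetime figure is made up to roughly match the quote above.
import math

def survival_probability(years_since_confirmed, mean_lifetime_years):
    """P(civilization still exists) under exponential (memoryless) decay."""
    return math.exp(-years_since_confirmed / mean_lifetime_years)

MEAN_LIFETIME = 8_000  # assumed average "turnover" civilization lifespan, years

# Entry last confirmed 4,000 years ago; arrival is another 10,000 years out,
# so the Confederacy must survive 14,000 years total from last contact.
now = survival_probability(4_000, MEAN_LIFETIME)
on_arrival = survival_probability(4_000 + 10_000, MEAN_LIFETIME)
print(f"now: {now:.0%}, on arrival: {on_arrival:.0%}")  # now: 61%, on arrival: 17%
```

Close enough to the 62%/14% in the quote that a simple memoryless model is probably all the Lines would need, plus whatever intelligence updates they pick up along the way.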

Anyway, lots of cool ideas.  The plot is good, but not as great as some of the world (or should I say “galaxy”) building.  The Gentian Line, from Abigail Gentian, gets all but wiped out in a surprise ambush during one of their memory-sharing reunions.  They get to find out why.  Key to the plot (which I won’t totally spoil here) are two other types of beings that have achieved immortality in this galaxy.  First are the Machine People, robots who probably originated with some human intervention but who are now totally independent artificial intelligences.  Then there is a unique personage known as the Spirit of the Air.  He was a human from about the time of the original shatterlings, but took a different path – little by little, he downloaded his consciousness into a computerized network of ever-increasing sophistication, until he wasn’t quite human any longer.  Kind of like Ray Kurzweil and his singularity.

I’m not sure why the whole shatterling fad seemed to be just a singular event.  Was only that one culture advanced enough to make it possible, and none of the turnover civs have achieved that level in all the time since?  Furthermore, even with the time compression and all, the shatterlings have lived way beyond a normal (to us) life expectancy in real time; why hasn’t their longevity tech migrated to other civs?

I always try to predict in my mind where the author is taking the story, and what the big twist is going to be, and I was kind of misled on this one by the flashback entries which precede each section.  I kind of think my version would work better than the real twist in the book: the flashbacks are about Abigail’s experience with Palatial, a virtual reality simulation of a fantasy kingdom.  She’s the good princess; one of her playmates becomes her evil half-brother prince.  They eventually go to war … I thought it would have been cool if the current attack on the Gentian Line was a long-dormant continuation of the battles they fought in Palatial, but such was not the case.


“I, Robot” by Isaac Asimov



A pretty influential work for its time.  But I couldn’t help thinking how it was just a little bit silly.

The big deal with Asimov’s robots is the Three Laws.  These are “built-in” to the robots’ positronic brains somehow – except when they aren’t (Silly Thing #1).  There’s one story about just this possibility becoming reality, but after it we are back to assuming the Three Laws always hold.  Most of the short stories in this collection are about figuring out how the robot brain is interpreting the Laws and how that explains their seemingly odd behavior.  But if the Three Laws are not necessarily in place, then all of the logic in the other stories kind of breaks down in my mind.

Silly Thing #2 is that the robot’s positronic brain = magic.  Even the creators don’t understand why they work.  They do not really seem to be a continuation of today’s computing.  But in one story it mentions that some robots are tasked with designing better robots, and once this has gone on for a few generations you get a design unrecognizable to humans … I guess something like that may happen; compiler output can be pretty inscrutable sometimes.  But it seems to me that having a “debuggable” system (really knowing what’s going on in there) is way more important than imprinting Three Laws and hoping for the best.  Especially see the Scary Thing below, after one more Silly Thing.

Silly Thing #3 – the story about the robot prophet starting a new religion.

Scary Thing, if only subtly so — the final story.  Advanced robots called simply Machines control the world economy, and do so quite well.  But it turns out they are also actively yet indirectly stamping out robotic opposition, since they interpret any challenges to their “rule” as a violation of the First Law.  Their logic is that they alone are the best able to ensure the safety of future humanity.

And in case anyone was wondering – I have not personally seen it, but per the Wikipedia synopsis, the Will Smith movie does not follow any of the original stories whatsoever.  Yet they slap a picture of it on the reprint/audiobook version.  Typical.

“The Cyberiad” by Stanislaw Lem

This is basically a collection of short stories, each starring one or both of a famous pair of “constructor robots,” named Trurl and Klapaucius, who can build machines to do pretty much whatever they want.  Frequently their creations have unintended consequences.  Here’s a list of some of the more memorable / clever stories:

  • “How the World Was Saved” – Trurl creates a machine that can produce anything beginning with ‘n.’  Disaster narrowly avoided when Klapaucius asks it to create “Nothing…” for true Nothingness cannot exist while anything else does.
  • “The Trap of Gargantius” – Atrocitus and Ferocitus are rival kings.  Our two constructor robot heroes separately enter the employ of one of the kings.  Their plan for peace is to network all the machines and soldiers of each army, basically creating a single, gigantic consciousness.  The kings think it is great – no more unnecessary delay and confusion due to too many cooks in the kitchen; maybe finally this will give them the edge they need to crush their enemies!  But when the two giant army-consciousnesses finally meet on the field of battle, they want to do anything but fight.  Funny how it is easier for two individuals to come to peace but so difficult for nations….  [First of all this story reminded me of the Army’s recent FCS program – maybe it’s a good thing that didn’t work out too well!  Second, it is reminiscent of the Christmas Day truce and other such incidents from WWI and I am sure other conflicts – the soldiers on either side generally have more in common with each other than either side’s soldiers do with their own politicians back home who command the fight.]
  • “Trurl’s Electronic Bard” – Trurl’s machine must simulate the entire history of the universe in order to create decent poetry.  [The idea that poetry is the product of the whole chain of events in the universe leading up to the present moment says a lot about the difficulties of creating thinking machines.  Also interesting to me as one who has done a bit of modeling and simulation – simulating the universe is an amusing fantasy that would likely require more resources than the universe itself contains.]
  • “The Mischief of King Balerion” – funny story about a half-crazy king who likes to play hide-and-seek and a device which permits its wearer to switch bodies with anyone else.  Hilarity ensues, obviously!
  • “How Trurl’s Own Perfection Led to No Good” – A cruel king has been exiled to an asteroid and is very bored.  Even though his punishment is just, Trurl feels sorry for him and so creates a simulated civilization-in-a-box to rule.  But his simulation is so perfect it amounts to sentencing the whole population to enslavement.  When Trurl realizes this and hurries back to the asteroid, the people have already rebelled.  [Thoughts about video games – at what point is murder of an AI in a game morally wrong?]
  • “Tale of the Three Storytelling Machines of King Genius” – Lots of layers; stories within stories and at least one of those about dreams within dreams.  “We should like the first to tell stories that are involved but untroubled, the second, stories that are cunning and full of fun, and the third, stories profound and compelling.   In other words, to (1) exercise, (2) entertain and (3) edify the mind.”  [I like that kind of classification … maybe I should choose books to read in the future in such a fashion…]  One story “profound and compelling” is that of Mymosh, a robot who spontaneously comes into being on an abandoned scrap heap planet due to a highly improbable but random configuration of junk.  Quickly immobilized, but with a mind fully formed, Mymosh imagines a whole civilization, populating generations upon generations.  Finally, after many years, rust breaks through to his ad hoc processing unit, water shorts his circuits, and it is all over in an instant.
  • “Altruzine” – Klapaucius creates a machine to simulate the universe [hmmm, sounds familiar] in order to interrogate a member of the H.P.L.D. civilization – the Highest Possible Level of Development.  All the H.P.L.D.’s do is sit around doing nothing – for nothing is left to prove or strive for.  But why don’t they help others?  They relate many instances of trying and always failing due to the imperfection of mortals.  As an example, they give him the formula for Altruzine and tell him to see for himself.  The potent drug causes one’s own emotions to be transmitted telepathically to all those within a certain radius.  The goodly intention is to cause universal peace and brotherhood: rational individuals would only treat others very well, because they themselves would experience the emotions of those whom they interact with.  However, instead of peace and goodwill, the opposite occurs – for example, when a cook burns his finger, a nearby soldier cleaning a gun causes it to discharge, killing his wife and kids.  The soldier’s grief is so strong that a neighbor burns down the whole building just to be free of it all.  Which doubtless creates further ripples of disharmony and discord.

Entertaining stories which make you think.

The translation from Polish must have been a monumental task – good job, Michael Kandel.  There are an incredible number of coined words and bits of wordplay like alliteration and rhyming — all pretty difficult to translate.  Here’s an example passage, describing members of a particularly well-off civilization:

…<each> sat in his palace, which was built for him by his automate (for so they called their triboluminescent slaves), each with essences anointed, each with precious gems appointed, electrically caressed, impeccably dressed, pomaded, braided, gold-brocaded, lapped and laved in ducats gleaming, wrapped and wreathed in incense streaming, showered with treasures, plied with pleasures, marble halls, fanfares, balls, but for all that, strangely discontent and a little depressed.