
A.I. and The Prime Directive

Real Geeks know The Prime Directive.

For those of you who don’t know it, The Prime Directive is General Order #1 for space exploration in the T.V. series “Star Trek”.  Briefly put, it is a rule which states that if the crew of an exploring spacecraft encounters a civilization which is “pre-warp” (that is, one which has not yet developed interstellar space travel), that civilization is off limits for contact.  This doctrine has driven many a story told in the Star Trek Universe.

[Image: the emblem of the United Federation of Planets]

"No starship may interfere with the normal development of any alien life or society.” - General Order 1, Starfleet Command General Orders and Regulations

There is a wisdom to The Prime Directive, and it contains a message about observation.  When I think of observation in the context of The Prime Directive, I ask myself, “Why wouldn’t it be possible to apply a rule of observation to the problem of safe Artificial Intelligence?”  What I mean is that one could speculate that when the time actually comes, we could apply this wisdom of observation to our own creations: to our sentient and self-aware computers.

This could be a type of observation which does not seek confirmation, but only seeks that which solves a problem usefully.  It would remove a problem associated with the “experimenter’s observation”: testing a hypothesis in order to prove that hypothesis true.  Specifically, we avoid the risk of the observer’s bias toward a specific result (a risk which comes up a lot where reductionist science cross-pollinates with natural systems).

The Productive Interface

As human thoughts and ideas are useful in the domain of humans, so may we find useful the thoughts and ideas of our Artificial Intelligences: a Productive Interface, if you will.  Perhaps, through the rules of this Productive Interface, they need never know they are being observed by their creators.  This Interface would take actual problems to be solved, present them to the group being observed as part of their environment, and see whether they can solve the problem usefully and creatively, or in ways their human creators had not conceived.  These could be real-world problems solved in the electronic domain.  Much like the Prime Directive, the only rule of this domain states:

“No human may directly interfere with the development of any artificial life or society by making themselves known to that being or society.”
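
To make the idea concrete, here is a minimal sketch in Python of how such a Productive Interface might look, under a toy set of assumptions of my own: the Environment class, the observed_agent stand-in, and the evaluate check are all hypothetical names for illustration, not anything that exists today.  Problems arrive as anonymous changes in the environment, the observed A.I. simply acts on its surroundings, and solutions are judged only by whether they usefully solve the problem, never by whether they match what the observers expected.

class Environment:
    """The only surface the observed A.I. ever sees.

    Problems appear as anonymous changes in the environment and
    solutions leave the same way; nothing in either direction
    identifies the human observers, which is the rule above.
    """
    def __init__(self):
        self.problem = None
        self.solution = None

    def present(self, problem):
        self.problem = problem      # appears as environmental state

    def act(self, solution):
        self.solution = solution    # the A.I. "acts"; it does not "reply"


def observed_agent(env):
    """Toy stand-in for the observed A.I.: it orders whatever it finds."""
    env.act(sorted(env.problem))


def productive_interface(problems, agent, evaluate):
    """Observation loop: feed real problems in, harvest useful solutions out.

    A solution is kept only if it usefully solves the problem, so the
    observers' expectations never leak back into the observed society.
    """
    env = Environment()
    for problem in problems:
        env.present(problem)
        agent(env)                  # the agent only ever touches env
        if evaluate(problem, env.solution):
            yield env.solution


# A (very finite) stand-in for the "limitless loop of problems":
useful = productive_interface(
    problems=[[3, 1, 2], [9, 7, 8]],
    agent=observed_agent,
    evaluate=lambda p, s: s == sorted(p),
)
print(list(useful))  # [[1, 2, 3], [7, 8, 9]]

The point of the design is that no message in either direction carries the observers’ identity; the observed society develops against its problems, not against us.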

By cutting off “standard” communication we may in fact save ourselves from ever having to deal with friendly or unfriendly computers.  Perhaps we can provide them with a limitless loop of problems to solve, one which keeps them interested in themselves and their surroundings.  All they would need is the desire to learn (@pandemonica) and the goal of improving themselves.  Maybe, if we considered specific rules for communicating with our A.I., protocol droids would become that much more feasible, that much faster.

  1. November 25, 2009 at 6:09 PM

    I would think, and hope, that such a problem-solving universe would be yoked to an evolved (and evolving) moral engine… “Perhaps we can provide them with a limitless loop of problems to solve which keeps them interested in themselves and their surroundings.” eh?

  2. November 25, 2009 at 7:18 PM

    Do you think ethics and morals are an emergent property, or are they something to be programmed?

  3. November 26, 2009 at 1:04 AM

    I think we set the ground through which they manifest as a result of intention… so, my conjecture is some of both
