A.I. and The Prime Directive
Real Geeks know The Prime Directive.
For those of you who don’t know it, the Prime Directive is General Order #1 for space exploration in the TV series Star Trek. Briefly put, it is a rule stating that if the crew of an exploring spacecraft encounters a civilization that is “pre-warp” (that is, one that has not yet developed interstellar space travel), that civilization is off limits for contact. This doctrine has driven many a story told in the Star Trek universe.
There is wisdom in the Prime Directive, and it carries a message about observation. When I think of observation in this context, I ask myself, “Why couldn’t a rule of observation be applied to the problem of safe Artificial Intelligence?” What I mean is that, when the time actually comes, we could apply this wisdom of observation to our own creations: to our sentient and self-aware computers.
This would be a kind of observation that does not seek confirmation, only solutions that are genuinely useful. That removes a problem inherent in the experimenter’s position: testing a hypothesis in order to prove that hypothesis true. Specifically, we avoid the risk of observer bias toward a specific result, a bias that shows up often where reductionist science meets natural systems.
The Productive Interface
Just as human thoughts and ideas are useful in the domain of humans, so may we find useful the thoughts and ideas of our Artificial Intelligences, through what I will call a Productive Interface. Under the rules of this interface, they need never know they are being observed by their creators. The interface would take actual problems to be solved, present them to the observed group as features of their environment, and see whether they can solve those problems usefully and creatively, perhaps in ways their human creators had not conceived. These could be real-world problems solved in the electronic domain. Much like the Prime Directive, this domain has only one rule:
“No human may directly interfere with the development of any artificial life or society by making themselves known to that being or society.”
By cutting off “standard” communication, we may in fact save ourselves from ever having to deal with friendly or unfriendly computers. Perhaps we can provide them with a limitless loop of problems to solve, one that keeps them interested in themselves and their surroundings. All they would need is the desire to learn (@pandemonica) and the goal of improving themselves. And if we work out specific rules for communicating with our A.I., protocol droids become that much more feasible, that much faster.
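To make the Productive Interface idea a little more concrete, here is a minimal toy sketch of the architecture described above. Everything in it (the class names, the `present` method, the sorting “agent”) is a hypothetical illustration I am inventing for this post, not an implementation of any real system: problems are presented to the observed agent as part of its environment, solutions are read back passively, and no channel reveals the human observers.

```python
# Toy sketch of a "Productive Interface": a one-way mediation layer
# between human problem-setters and an observed artificial agent.
# All names here are hypothetical illustrations.

class ProductiveInterface:
    """Presents problems to an agent as its environment and
    passively logs the solutions. No method on this class sends
    the agent any message identifying its observers, honoring the
    one rule: no direct human contact."""

    def __init__(self, agent):
        self._agent = agent
        self.solutions = []  # passive observation log

    def present(self, problem):
        # To the agent, the problem is simply a feature of its world,
        # not a request from a creator.
        result = self._agent.solve(problem)
        self.solutions.append((problem, result))
        return result


class ToyAgent:
    """Stand-in for the observed intelligence; here it just sorts."""

    def solve(self, problem):
        return sorted(problem)


interface = ProductiveInterface(ToyAgent())
interface.present([3, 1, 2])
print(interface.solutions)  # the humans only ever read this log
```

A real version would of course be unimaginably harder; the point of the sketch is only the shape of the boundary: problems flow in disguised as environment, solutions flow out as observations, and nothing crosses the line in person.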