AI and Directives

Interesting conversation about developments in AI here:  

One of the interesting bits was the concern over AI developing its own directives. The professor being interviewed mentioned the equally perplexing problem of the AI simply following the directives we assign it. He gave the example of tasking an AI with procuring a cup of coffee. There is a secondary directive implied in this: the AI needs to stay powered on in order to accomplish the primary directive, so it would counteract any attempt to shut it down. He also mentioned what is called the Midas problem in AI programming, which references the ancient story of King Midas, who gave the gods the ‘programming directive’ of turning everything he touched to gold. The gods granted this, and everything (his food, his family, etc.) turned to gold, and he died. The point is that even we don’t really understand all the downstream effects of what we are asking for.
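The Midas story is essentially objective misspecification: an optimizer satisfies the objective we wrote down, not the one we meant. A minimal sketch of the gap between the two, assuming some made-up items and values purely for illustration:

```python
# Toy sketch of the Midas problem (objective misspecification).
# All item names and values below are illustrative assumptions.

items = [
    {"name": "rock",   "gold_value": 1, "needed_to_live": False},
    {"name": "statue", "gold_value": 5, "needed_to_live": False},
    {"name": "food",   "gold_value": 2, "needed_to_live": True},
    {"name": "family", "gold_value": 9, "needed_to_live": True},
]

def specified_objective(touched):
    # What Midas literally asked for: maximize gold.
    return sum(item["gold_value"] for item in touched)

def intended_objective(touched):
    # What he actually wanted: gold, but not at the cost of survival.
    if any(item["needed_to_live"] for item in touched):
        return float("-inf")  # gilding food or family is catastrophic
    return sum(item["gold_value"] for item in touched)

# A literal-minded optimizer of the specified objective touches everything...
greedy_plan = [item for item in items if item["gold_value"] > 0]
# ...while the intended objective would have skipped the essentials.
safe_plan = [item for item in items if not item["needed_to_live"]]

print(specified_objective(greedy_plan))  # 17: looks great by the stated goal
print(intended_objective(greedy_plan))   # -inf: Midas starves
print(intended_objective(safe_plan))     # 6: the outcome he actually wanted
```

The two scoring functions differ only on the cases the specifier forgot to mention, which is exactly where the optimizer's behavior diverges from intent.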

This made me think of the basic problem of complex systems: they are impossible for any one person to really comprehend. From Wikipedia: 

“Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are ‘complex’ have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others.”

If we are incapable of fully understanding any complex system, then we can’t possibly program complete directives into an AI for one, and we shouldn’t be surprised when the outcomes diverge wildly from our expectations.