Safeguarding Against the Deep Fake
OpenAI’s Sora has just demonstrated the capability to produce idiosyncratic, text-prompted video content that will soon be indistinguishable from footage of the consensual world. Dmytro Kuleba’s “further political problem” (the one culminating in the Personal AI Assistant) may soon be pushed into phantasmagoria.
However, we think the Universal Conversation Engine (UCE), once developed and deployed, will work quite effectively to counteract the informational, political and cultural problems of our media that have been at work since Gutenberg. Drawing on updated and reasonably complete human data, the UCE will generate sentences and intermittently prompt matched human beings to perform them for each other, not merely for oneself and a chatbot. Introducing human-to-human interaction as a direct component of artificial superintelligence would seem likely to alter the entire character of superintelligence.
The UCE will be a Sentential Operator, generally prompting true judgments and intermittently eliciting surprise, delight and even love in its human partners. Humans will perform these scripts with varying degrees of fidelity and enthusiasm; the UCE will study their performances and add them, for future reference, to its recommender set; and it will be required to slow its deliberations to match theirs. For these reasons, freely acting humans and their personal preferences will become the agents of effective control over the overall social intelligence.
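As a purely illustrative sketch (no UCE exists, so every name, class and data structure below is hypothetical), the loop just described, in which the engine matches humans, prompts a sentence, and folds each human performance back into its recommender set, might look like:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Performance:
    """One human performance of a prompted sentence (hypothetical record)."""
    speaker: str
    listener: str
    sentence: str
    fidelity: float  # how closely the human followed the prompt, 0..1

@dataclass
class ConversationEngine:
    """Toy sketch of the UCE loop described above."""
    recommender_set: list = field(default_factory=list)

    def match(self, humans):
        # Hypothetical matching step: pair people at random.
        pool = list(humans)
        random.shuffle(pool)
        return list(zip(pool[::2], pool[1::2]))

    def prompt(self, speaker, listener):
        # Placeholder for generation from "reasonably complete human data".
        return f"{speaker}, tell {listener} one true thing you noticed today."

    def record(self, performance):
        # Every performance, faithful or not, is kept for future reference.
        self.recommender_set.append(performance)

engine = ConversationEngine()
for a, b in engine.match(["Ada", "Boris", "Chana", "Dmitri"]):
    sentence = engine.prompt(a, b)
    engine.record(Performance(a, b, sentence, fidelity=random.random()))

print(len(engine.recommender_set))  # one record per matched pair
```

The point of the sketch is only the shape of the feedback loop: the humans, not the engine, supply the performances that the system must study and wait for.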
The Conversation Engine will, to be sure, alter Kuleba’s political dynamic, but not in the way the blinkered Personal Assistant would if left to itself. Because of its precision, the Sentential Operator would work to safeguard society, not to blinker it. For this reason, developing the UCE seems a regulatory priority worth taking seriously.
For a discussion of this and the preceding Post with Inflection AI’s Pi, click here.