Talking with a friend today I created an argument for why all the time and money spent on the problem of Artificial Intelligence as an Existential Threat is actually nearly a complete waste.

It relies on these points:

1. Early AI is like a metaphorical Genius Asperger’s kid: he may cause some damage, but he’s not going to take over the household, and then the whole world, before he’s shut off.
2. AI will only be able to progress so far before further progress requires understanding and implementing Developmental Psychology and Hierarchical Models of Complexity.
3. AI projects that don’t enact Hierarchical Models of Complexity à la Fischer, Commons, and Piaget can be easily stopped by Humans who do use those methods, like an adult shutting down a bullying teen in a basketball game.
4. Deep Learning Neural Network-based AI will level off like an S-curve, as most tech does. The AI fear mongers fallaciously project an exponential progression. Developmental theory is required for the next S-curve.
5. The enactment of Developmental Theory in AI requires a certain level of development in the creator. Those creators will probabilistically have high moral development and will thus program the AI well, as a good parent raises a child. It is possible to have high cognitive development and low moral development, but it is not competitively advantageous from an evolutionary fitness perspective and will ultimately lose out before too much damage is done.
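Point 4’s S-curve claim can be illustrated numerically: a logistic (S-) curve is nearly indistinguishable from an exponential in its early phase, which is why extrapolating exponential progress from early growth is misleading. A minimal sketch, where the growth rate `r` and capacity `K` are illustrative values, not empirical estimates:

```python
import math

def exponential(t, r=0.5):
    # Pure exponential growth: never saturates.
    return math.exp(r * t)

def logistic(t, r=0.5, K=100.0):
    # Logistic (S-curve) growth: matches the exponential early on,
    # then levels off at the capacity K.
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

for t in [0, 5, 10, 20, 30]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

At small `t` the two curves track each other closely; by `t = 30` the exponential has passed a million while the logistic has flattened just below its capacity of 100.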

Other problems:
– The Singleton Fallacy: projecting a monolithic AI rather than a community of AIs that establish social rules and policing, just as humans do. Murderous psychopaths do not win control of the tribe for long. Society solved that problem in the industrial era.

– The AI fear mongers have hyper-abstract masculine models that may be hypothetically true but wouldn’t work in this instantiation of the universe. This instantiation progresses through the integration of Masculine and Feminine, which creates replication and self-propagation. Asperger masculine energy is notoriously bad at procreating, and that will be true in the AI landscape as well.

Ultimately, nearly all the AI fear mongers are suffering from Psychographic Projection. They extrapolate the AI as merely a smarter version of themselves, make many faulty assumptions about what is needed to drive this exponential curve, and underestimate the resistance and modulating functions society will impose on undesirable visions of the future.

It’s all just out of touch Nerd Talk.

Why Ray Kurzweil, Elon Musk, and Stephen Hawking Don’t Really Understand Artificial General Intelligence

Watching a recent Sci-Fi short called ‘Post-Human’ by Ray Kurzweil (https://vimeo.com/144099716) reminded me of a topic that has come up in conversation often over the last year:

“How Artificial Intelligence will affect the Future of Humanity.”

Unfortunately, all the main “thought leaders” being quoted on this topic (Ray Kurzweil, Nick Bostrom, Elon Musk, Bill Gates, Stephen Hawking) are, in my opinion, severely off-base.

And I have some intense critical essays to write on this topic in the next year or so.

These “thought leaders” all show a fundamental disconnect with the Cutting Edge of Understanding in a plethora of fields and subjects essential to developing a Good, True, Beautiful and Loving Artificial Intelligence, such as: Consciousness, Developmental Psychology, Developmental Morality, Emotional Intelligence, Social Intelligence, Intersubjectivity, Subject-Object Dualism, Human Potential and Human Fulfillment.

“Thought leaders” scaring people with off-base pontification is one thing.

But what I fear most is that the people who hold these Erroneous Ontological Assumptions about Reality are the ones spearheading the creation of Artificial Intelligence. For the Erroneous Assumptions in one’s Consciousness inevitably make their way into the Artifacts of one’s Creation.

Which is a fancy way of saying: if the people who create Commercially Successful AI have shitty assumptions in their worldview, they will create enormously powerful, God-like AI with shitty assumptions programmed into the AI’s worldview. If this happens, it will likely cause Humanity to suffer deep and grave consequences.

The Future of Humanity likely depends on getting Artificial Intelligence right.

So it is a cause I’m committed to contributing to in a major way when the time is right.

“Man is a rope, tied between beast and overman–a rope over an abyss…
What is great in man is that he is a bridge and not an end: what can be loved in man is that he is an overture and a going under.” – Nietzsche, Thus Spoke Zarathustra