Resharing a fun, geekasmic conversation between Micah Daigle and me about the future of humanity and how to maximize its thrivability through the creation of particular types of technology and technological systems.
Micah [to someone else in the thread]:
I used to be a techno-utopian too, so you’ll have a hard time convincing me to revert to an earlier memeverse.
But don’t take me for a dystopian: that’s a false dichotomy. As anyone who knows me can attest, I’m incredibly optimistic and consistently work to advance humanity with technology.
That said, the Kurzweil kool-aid is sour. I no longer believe that simply accelerating our technologies will solve all our problems and continue to accelerate forever. When you look at healthy systems throughout the universe (and begin to have a reverence for biology, rather than the typical transhuman disdain for it)… you start to see that acceleration is only helpful during certain phases in complex systems, and can be very dangerous to the system if left unchecked.
The path to a healthy, syntropic society isn’t a Singularity, but a Collectivity, in which we increase our capacity for empathy and collaboration via better communication tools (which is why I’m excited about neurotech), which then allows us to manage our new powers with greater collective wisdom and restraint.
The test of whether we’ll make it through this period of planetary adolescence isn’t how fast we can speed up (we’ve shown we can do that)… it’s whether we can collectively steer and brake when needed.
Good thoughts. One perspective I like that clearly shows the flaws or pathologies in singularitarianism is to look at its quadrant imbalance. It’s super over-developed in the lower-right quadrant compared to everything else.
Hence hardly any understanding of humans, interiority, empathy, and Collectivity.
A large part of this bias is ultimately the singularity community projecting its own Asperger’s-leaning cognition onto the future of humanity. Which is a dull, black-and-white future indeed.
Kurzweil does pay some lip service to art, but I find it wanting.
I highly doubt humans who are missing a large spectrum of humanity in their own minds and realities will be able to create intelligent machines that possess any more perspective than they themselves have.
You can’t model what you can’t understand.
Thus strong AI projects will fail until the teams working to create them have quadrant-balanced neurodiversity.
Switching into Max-lingo: I largely agree, but I’m skeptical about our attempts to create strong AI even with quadrant-balanced teams. It seems to me that our own agency emerges from the collective agency of our constituent holons, all the way down to the atom and beyond. The universe seems to evolve layer upon layer, rather than jumping laterally onto a different substrate that lacks all the sub-holon complexity of the constituent layers. By trying to jump laterally, we get flat simulations of life rather than life. Which wouldn’t be a problem until you realize that they could kill us all (not due to self-awareness, but due to malfunction), and then it’s pretty scary.
Yes. You know, among other things I’m a pretty strong evolutionary developmentalist.
So emergence layer by layer is a given.
Also, the 95/5 bottom-up/top-down ratio.
This is why I believe the next developmental layer of evolution is man-machine symbiosis and, eventually, strong AI constructed not monolithically a priori, but out of a holarchy of increasingly complex biomimicking, silicon-substrate-based parts.
Or in other words: to achieve substrate transformation from biology to silicon, we must be able to properly model our holarchical biological intelligence in an all-quadrant, all-level, all-line, all-state, all-type fashion, and then engage in the task of horizontally translating biological intelligence to silicon-based substrates.
Once intelligence can be properly translated to non-biological substrates, without horrific compressions of our humanity, a healthy, accelerating syntropian march can continue.