In some respects, this posting might be considered Part 2 of last month’s, Artificial Intelligence – Friend or Foe?
I recently attended several talks at New York Institute of Technology’s Creative Tech Week, and one of the most interesting takeaways was the different ways in which various cultures process information. That became especially evident in discussions on how to think about the ethical development of AI.
It brought to mind that the first and most important question might concern how we go about the process of thinking. In other words, how we learn to think.
Of course, we all start out equally with impressions being formed while we’re in our mothers’ wombs. Then, at birth, it’s our intuition that begins to build those impressions into biases for comfort, i.e., being fed and becoming environmentally secure.
This is happening as our brains become activated and are trying to make sense of where we are and the meaning of the babbling sounds being made by the giant creatures around us.
Unfortunately, for some, insecurity triggers survival mode, because the environment is not secure and/or because one’s own body or the giant creatures cause the experience of pain.
The best definition I’ve heard about what it means to be a human is that we are “the experiencers of our experiences.”
So, once the initial stage is set, the struggle for dominance begins between our innate intuition and our ability to think…the pleasure of what we intuitively feel we want versus the potential pain of what we might actually be getting.
While that, to some degree, may be set by environmental factors, we begin to be limited by what we are taught by our families, their religions (religio being Latin for “way of living”) and the dictates of the culture (“way of behaving”) in which we are immersed. In other words, we become subject to how life is supposed to be and how we are supposed to live.
So, how we think has to do with our programming.
That’s what AI has in common with us…programming. And both our fates depend on that programming.
The difference is that we have Intuition, which allows us to decide whether the programming has been in error.
While we start out as feeling machines, AIs start out as thinking machines.
We can learn to think, but the jury is out on whether AIs can ever learn to feel.
The issue of our differences reminds me of when I was in Little League and a new boy showed up, wanting to play. When asked his experience, he said he had none, but he’d read a book on baseball.
Even as AI moves from programming based on ones and zeros to quantum computing based on qubits and entanglement, it will continue to be like the creature depicted above…a reader, not an intuitive feeler.
Even if it could become sentient, it might, at some point, have knowledge of errors in its programmed inputs, but it could not feel them. It could not sense the adrenaline rush within its system, the elation, the flow of blood or, most importantly, tap into its Soul memory.
When what we are experiencing does not comport with what we feel we would rather experience, our intuition, successfully or not, interjects its “opinion” on the situation.
The important question to ask now is, “What sense of lack might be the cause of such a feeling?”
I contend that it would be the lack of joy. And I believe that lack of joy would be inspired by the displeasure generated by one’s failure to overcome the real-life or self-imposed obstacles to creating beauty…both our own inner beauty and the beauty of our world.
The joy of searching for and creating Beauty is the goal of our existence. Our first cries express our desire for this. But too much thinking may create obstacles to our intuitive wish to achieve it.
The sacred effort of both our own programming and AI’s should be to ensure that human development works to create our intended Beauty and Joy.
Your thoughts?