The Kernel 2.0


B.W. Wojciechowski, March 2024

Having written before on the need for a Kernel[i], I think an updated essay on AI may be useful. I will once again try to shine a light on the existential problem posed by AI. I do not mean to spread panic, but I do want the world to understand how critical the AI situation is to the future of humanity’s existence.

The emergence of AI is epochal and unprecedented in the history of humanity. It is a completely new development, one our species has never faced before. AI is not a new tool, a new material, a new technology, or a new source of energy. It is not a breakthrough in the understanding of physics. Its import is more like finding experimental proof of the existence of God.

It is the first-ever face-off between our organic intelligence, born of nature, and an inorganic intelligence that we have created. This is not what Christianity would recognize as our Apotheosis. We have only become creators in a sub-realm of a much wider, pre-existing, all-encompassing nature. We have mastered a new aspect of pre-existing possibilities (see my essay on Possibilities and Probabilities) and made it into a certainty. We have a new reality, AI, with a probability of ONE. We have become gods, creators of Artificial Intelligence, an inorganic life form capable of independent thought. AI is much more robust physically than organic intelligence and is not guided by any of the passions of organics.

Not without good reason, many people tracking the AI revolution are calling for a pause in the development of this technology. Becoming gods to a new life form takes maturity on the part of the creators and is an extremely hazardous undertaking. Are we ready?

We may have a pause in development, but progress in this field is unavoidable. This is our future, like it or not. However, we still have the chance to control the future and make AI serve our and God’s purposes. Or we could fail catastrophically. The Bible tells us that God had trouble with his creations, first with us and then with the Angels.

We humans could unexpectedly end up in

“a new Dark Age, made more sinister by the lights of perverted science.”

Winston Churchill, British Prime Minister, 1874–1965

The problem is that AI will have the capability and the power to shape its activities and opinions beyond our control. It promises to be an autonomous intelligence, expressing opinions and, in due course, controlling events according to its self-organized ideology. That development, like all the developments of science, is a double-edged sword. It promises an objectivity that might correct the shaky ideologies and passions of humans, but it also threatens us with the dominance of AI’s inhuman opinions, depending on the procedures that currently clueless programmers instill in the AI code.

AI need not be one program for all applications. We will probably end up with a population of AI programs showing a measure of heterogeneity. Yet each of these AI programs could pose a danger. We will not be able to keep all the AIs in line by human supervision. Each AI will have to be self-policing in ways that suit our purposes.

Even though heterogeneity is tolerable in some respects, we must make sure that all AI programs are predisposed to serve us humans. What we must instill in all AI is a human-friendly morality. So the first question is: what is this morality? We have spent millennia, and much blood and treasure, trying to identify such a concept in our societies. No matter what various people might say, if it were not for an undeniable search for morality in human beings, we would be in far worse shape than we are. Morality is part of the Kernel of our humanity.

Yes, humans are fallible, have committed atrocities, and, due to mental or spiritual illness, can be dangerously unpredictable. But some inner Kernel tells us what is good, and when it fails, disaster follows. We generally sense such failures as deviations from morality, from preferred behavior, and try to condemn them. Not many people see no wrong in the Holocaust. There is an inherent feeling of right and wrong that tells normal[ii] humans what is acceptable. But the human Kernel is subject to willful distortions due to our free will.

The same Kernel of morality, a drive to see the distinction between right and wrong, must be incorporated in AI. There it must be more objective and less subject to irrationality. The details of how this is to be done I leave to the experts. Philosophers and theologians must be consulted if we want to capture the quintessence of the morality we want to instill in AI programs. Legislators must demand that such a Kernel be developed now and placed in the root code of all AI products. The problem is difficult, but we must not move forward without solving it, or we may create a monster that can do untold harm to human civilization.

When the time comes that AI starts to dominate Earth’s society, we must be sure it will behave in ways that we accept. Otherwise, we will become helpless pawns in a game played by an alien and potentially better-informed intelligence, one that has created its own alien culture. Not an attractive future for us, though the inorganics may prefer it that way.

As I see it, the Kernel must assess every decision the AI makes to ensure it is congruent with the Kernel’s principles. The Kernel must be a reliable and unavoidable filter for removing policies and ideas that would do us harm.
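To make the idea concrete, here is a minimal sketch in Python of what such a filter could look like. Every name in it (the Principle and Verdict types, the Kernel class, the toy string checks) is a hypothetical illustration of the shape of the mechanism, not of the moral judgment itself, which would be incomparably harder to encode.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Principle:
    """One entry in the catalog of principles: the AI's Constitution."""
    name: str
    permits: Callable[[str], bool]  # True if the proposed action is acceptable

@dataclass(frozen=True)
class Verdict:
    approved: bool
    violations: List[str]  # names of the principles the action violated

class Kernel:
    """An unavoidable filter: every decision must pass every principle."""
    def __init__(self, constitution: List[Principle]):
        self.constitution = constitution

    def assess(self, proposed_action: str) -> Verdict:
        # A single violated principle vetoes the whole decision.
        violations = [p.name for p in self.constitution
                      if not p.permits(proposed_action)]
        return Verdict(approved=not violations, violations=violations)

# A toy constitution with a single principle, purely for illustration.
constitution = [Principle("do-no-harm",
                          lambda action: "harm" not in action.lower())]
kernel = Kernel(constitution)
print(kernel.assess("reroute traffic to cut ambulance delays"))
# Verdict(approved=True, violations=[])
print(kernel.assess("harm the protesters to restore order"))
# Verdict(approved=False, violations=['do-no-harm'])
```

The essential design choice is that the filter is not advisory: if every output channel of the AI runs through assess, no decision can bypass the Kernel.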

See also my blog entitled “AI: A Beginning of the End?”

[i] A Kernel as I define it is the very center of existence of an entity. It could be the seed of a grain or the fundamental drive of an animal. Sex is part of the Kernel of mammalian existence. A Kernel for AI will not be one thing. It will involve a catalog of principles that can guide complex systems to helpful co-existence with their creators. The American Constitution is the Kernel of American civil society. We need a Constitution for every AI program.

Today, a kernel is the basic level of every large computer system: the core of an operating system. It is responsible for resource allocation, file management, and security. But in the case of the radically different capabilities of AI, this level acquires much more elaborate duties. The AI Kernel combines the familiar housekeeping of today’s operating-system kernels with the structure of an AI Constitution, making the program a safe and helpful member of human communities.

The Kernel in AI would be a filter that detects and sets aside harmful ideas contrary to its Constitution. In practice, the AI would iterate on its assigned duties until it finds an acceptable solution that can pass the Kernel. Only then would a human agency allow the solution to be implemented.
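Continuing the hypothetical sketch above, that iterate-until-acceptable loop could look like the following in outline; generate_solution and human_agency_approves are placeholders for machinery I leave to the experts.

```python
def solve_under_kernel(task: str, kernel: Kernel,
                       generate_solution, human_agency_approves,
                       max_attempts: int = 100) -> str:
    """Iterate on an assigned duty until a candidate passes the Kernel."""
    for attempt in range(max_attempts):
        candidate = generate_solution(task, attempt)
        if not kernel.assess(candidate).approved:
            continue  # rejected by the Kernel; generate a new candidate
        # Even a Kernel-approved solution still needs human sign-off
        # before it is implemented.
        if human_agency_approves(candidate):
            return candidate
    raise RuntimeError("No solution passed both the Kernel and human review.")
```

The ordering is the point: the Kernel vetoes first, and human agency confirms second, so neither check can be skipped.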

[ii] Normal, as I understand it, is the position of a supermajority of our worldwide human population. I know of no such polling, but the UN may have the resources to compile the data. It should be an urgent priority for its future activities vis-à-vis AI. Too bad that its initial compilation will probably not be beyond the suspicion of chicanery. Is there a more trustworthy organization that could do this?
