
The greatest challenge to General Artificial Intelligence (GAI) is human intelligence.

We as humans would be the GAI’s creator, and its realization depends on our ability to first understand our own intelligence, a field of study that is improving but has yet to provide answers to the most glaring questions. If, and when, the questions of thinking, consciousness, and intelligence are answered, we’d then need the technical skill to develop that intelligence through duplication or imitation.

With that said, I believe human-compatible GAI is possible one day, but it’s a long time coming. This isn’t so much an issue of hardware or processing power as it is of how little we know about the human brain and how that lump of gray matter translates into a personality (“I”, or consciousness) and intelligence.

The vast majority (all?) of computer processing systems are built on a logical language. While logic is unquestionably important to humans, we are rational creatures who weigh emotions and semantics rather than relying on purely logical quantification. Our “intelligence” allows us to account for (or ignore) things that can’t be explained, yet still survive and thrive. Computer systems in their current state are unable to function without concrete particulars; gaps in knowledge or in quantifiers produce undesirable results (or complete breakage). It’s astounding what humans have been able to accomplish and create without truly “knowing” how any of it works. We create theories of the world, sometimes backed by empirical study and reason, that propel us forward; interestingly, much of that study and reason is later overturned through paradigm shifts as “better” theories are developed. As of now, this is a uniquely human ability and would prove extremely difficult to automate.

As the model currently stands, a GAI system would somehow have to account for and “understand” literal, semantic, and vague concepts such as “nothing” and “all” (think metaphysics), concepts that essentially break, “loop”, or defy human and systematic logic. Humans can still “operate”, recover, and excel in the absence (or degradation) of logical processing, suggesting that we have “multiple” intelligences, whereas AI, as of now, is a purely logical system.

A one-to-one comparison, or duplication, of human intelligence embodied in GAI is a monumental task. At this point, it almost seems an unworthy endeavor until we have a greater understanding of ourselves.