Why AI
Or why humans still matter
Summary: Artificial Intelligence (AI), particularly Generative AI (GenAI), lacks intrinsic human values and requires human intervention to ensure ethical and accurate outputs. Despite its ability to simulate intelligence through Large Language Models (LLMs), GenAI can produce flawed and harmful content due to its dependence on open Internet data. The human end user must guide and refine AI outputs to maintain appropriateness and veracity, highlighting the ongoing importance of human oversight in AI development.
One would have to be a hermit or comatose to be completely unaware of the growing controversy over the use of chatbots and so-called Generative AI (GenAI). The computer models for these types of AI are based on Large Language Models (LLMs) and are considered by many to be inherently flawed.
The LLM concept is relatively simple, although its execution is not: scrape all possible content off the open Internet, use advanced neural network processing to determine the most common patterns found there, and then use the most likely connections to parrot back statistically relevant information in a very human-consumable way, presented to the user as “intelligence”. This is how the AI is “trained”.
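To make the "parrot back statistically relevant information" idea concrete, here is a deliberately tiny sketch, not a real LLM: it counts which word most often follows each word in a toy corpus and then echoes the statistically most likely continuation. Real models work on sub-word tokens with billions of learned parameters, but the underlying "most likely next thing" principle is the same. The corpus and function names are illustrative inventions.

```python
from collections import Counter, defaultdict

# Toy "training corpus" (stand-in for the scraped open Internet).
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count word-to-next-word transitions (a crude bigram model).
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def most_likely_next(word):
    """Parrot back the most frequent follower of `word` in the corpus."""
    return transitions[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

Note what is absent: nothing in this procedure weighs whether a continuation is true, kind, or harmful; it only reflects the frequencies in whatever text it was fed.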
However, that core compute model has no heart, no ethical principles, and no humanity, by which I mean no simulacrum of compassion, empathy, or caring. By geometrically increasing the processing power by which linkages and likely outcomes are computed, it does create a simulacrum of "intelligence": plausible and potentially useful responses that represent a consensus summary or detail on any particular topic.
It uses the interpolated patterns of grammar to render its responses human-friendly, but it can get its utterances wrong, sometimes very wrong. Those seemingly random ramblings that veer off a logical path are often called "hallucinations" but are more accurately described as dissociated reasoning.
Even the most current GenAI platforms must be "disciplined" by human minders, who adjust the relative value of specific attributes ("weights") so that the models better reflect core human values (e.g., kindness or politeness) and avoid amplifying human weaknesses (e.g., cruelty or exploitation).
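As a loose analogy for that "disciplining" (this is a toy sketch, not how production feedback tuning is actually implemented), imagine human reviewers assigning reward scores to candidate outputs, with those scores steering which response the system prefers. The candidate strings and scores below are invented for illustration.

```python
# Two hypothetical candidate responses, initially scored equally.
candidates = {
    "Here is a polite, helpful answer.": 0.0,
    "Here is a rude, dismissive answer.": 0.0,
}

# Hypothetical human feedback: reward kindness, penalize cruelty.
human_feedback = {
    "Here is a polite, helpful answer.": +1.0,
    "Here is a rude, dismissive answer.": -1.0,
}

# Fold the human judgments into each candidate's score.
for response, reward in human_feedback.items():
    candidates[response] += reward

# The system now prefers the response humans rewarded.
best = max(candidates, key=candidates.get)
print(best)
```

The point of the sketch is that the preference for politeness lives entirely in the human-supplied scores, not in the model's raw statistics, which is exactly why such guardrails are bolted on rather than intrinsic.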
Unfortunately, these guardrails are not intrinsic to the models themselves, because the models are again built on the open Internet, where Hitler's Mein Kampf, bomb-making recipes, and paranoid extremism are just another type of content harvested, along with the Bible, Shakespeare, and the Bhagavad Gita.
No AI currently available can extract human values from Internet content – the broadest body of data ("corpus") in human history. And the current LLM approach is very unlikely to ever get us there, no matter how many GPUs we throw at it.
This is where the human end user can and must play an essential role: to guide, aid, edit, or reframe the GenAI output as they see fit. Using a powerful chatbot or GenAI platform is akin to having an army of inexperienced but industrious interns at your fingertips – they can save you hours or days of tedious research and fact-finding. But it is up to you, not the bot, to determine appropriateness, accuracy, and veracity. Just as it would be for the manager of human interns.
The danger here is that the global race to become the dominant AI platform is rendering humanist principles "nice-to-have", not core. Again, the models lack basic human values intrinsic to their data interpolation and projections, and human minders are expensive, especially at scale.
There is an inherent drive to lower cost in the capitalist economic model ("value extraction"), and without self-restraint this can lead to an acute devaluing of the human factor – resulting in job dislocation, greater economic instability, and accelerated wealth concentration among a select few. This is not viable for long-term societal peace and equilibrium.
Again, this dystopian outcome points back to the human factor. It's not really the AI that is the problem – it's how we choose to use it that will drive future outcomes. One can just as easily imagine the use of GenAI to further and foster a human economic and cultural apotheosis. As Charlton Heston declaimed at the end of the film Soylent Green, the answer "…is people!"