The debate between artificial intelligence and human intelligence is not about choosing one over the other; it’s about finding the balance that maximizes the strengths of both.
That’s according to Brevard Nelson, co-founder and managing director of Caribbean Ideas Synapse, situated on Wainwright Street, Port of Spain.
The company helps clients understand modern technology and other aspects of their business.
In an interview with the Sunday Business Guardian, Nelson said proponents of AI argue that it represents the future of efficiency and innovation, and that AI systems, powered by machine learning and big data, can process vast amounts of information at speeds far beyond human capability.
Nelson outlined the key arguments in favour of AI:
* Efficiency and speed: AI can automate repetitive and time-consuming tasks, freeing up human resources for more strategic roles. For example, AI in finance can handle high-frequency trading far more efficiently than human traders;
* Data-driven decision making: AI systems can analyse massive datasets to identify patterns and trends, providing insights that guide more informed decision making. This capability is particularly valuable in fields like healthcare, where AI can assist in diagnosing diseases by analysing medical images; and
* 24/7 availability: AI systems do not require rest, enabling continuous operation. This is critical in industries such as customer service, where AI-powered chatbots can respond immediately to customer inquiries around the clock.
The case for human intelligence
Nelson noted that advocates of human intelligence emphasise the qualities that make people uniquely capable of navigating complex, nuanced situations. These include creativity, emotional intelligence, ethical reasoning and the ability to think abstractly, all of which remain challenging for AI to replicate, at least for now.
On creativity and innovation, Nelson highlighted that humans can think creatively, generating new ideas and concepts that go beyond established patterns.
Also, the technology executive said humans can understand and respond to emotions in a way that AI currently cannot.
“This emotional intelligence is crucial in roles that require empathy, such as counselling, negotiation and leadership. It is important to note, though, that there have been significant strides in bridging this gap with recent models recognising facial expressions and conversational contexts, but there is still some way to go to have this integrated into our daily routines fully,” he explained.
Another aspect is that human intelligence is guided by moral principles and ethical considerations. While AI can make decisions based on data, it cannot navigate the ethical complexities that often accompany those decisions, said Nelson.
The question arises: does the advancement of AI come at the expense of human intelligence, or can both coexist?
He said that while some fear AI could render certain jobs obsolete, leading to a zero-sum scenario, many view the future as a more synergistic relationship between AI and human capabilities.
Looking at complementary strengths, Nelson underscored that AI and human intelligence excel in different areas.
“As shared before, both have their strengths but together, they can achieve more than either could alone.”
Delving deeper, Nelson indicated that instead of replacing human intelligence, AI can augment it.
Giving an example, he said that in healthcare, AI can analyse medical data to suggest potential diagnoses, but it is doctors who use their experience and judgment to make the final decisions.
Nelson pointed out that the global workplace has been here before.
“At the cusp of every industrial revolution, where there’s some major technology introduced, we’ve had this same debate. And history has shown us that—whether it’s a steam engine, the assembly line, personal computers, the internet, you name it—there is this debate.”
“Yes, there has been some displacement of specific kinds of jobs, but more importantly, there has been a creation of new jobs and new industries that didn’t exist before. Roles that involve managing AI systems, interpreting AI-generated insights, and ensuring ethical AI use are becoming increasingly important,” he detailed.
A notable example of this coexistence, Nelson said, is the financial sector, where companies use AI algorithms for trading and investment strategies, but human analysts still play a crucial role in interpreting the data and making strategic decisions.
“This collaboration between AI and human intelligence has enhanced the firm’s productivity and decision-making capabilities,” he argued.
He said the future of productivity and scaling lies not in choosing between AI and human intelligence, but in the potential of augmented intelligence.
Some strategies for coexistence
On task division, Nelson emphasised that the future lies in assigning AI to tasks that require speed, precision, and data analysis, while reserving human intelligence for creativity, strategy, and emotionally complex tasks.
“This division allows both AI and humans to operate at their full potential. Liken it to what was done when computers were introduced in the workplace,” he said.
Human-AI collaboration, Nelson said, assists in decision-making processes.
“For example, AI can provide data-driven recommendations, while humans apply contextual understanding and ethical judgment to make the final call.”
The key to continuous learning and adaptation, the managing director stated, is to invest in training programmes that help the workforce adapt to working alongside AI.
“As AI evolves, so too must the skills of the human workforce to ensure that they can effectively manage and complement AI systems,” he added.
AI risk
An article published on the Internationalbanker.com website last month said there are several “micro” risks arising from AI use that affect individual financial institutions. “Ubiquitous AI use in the financial sector can exacerbate threats to consumer privacy and cybersecurity. Moreover, most AI models have an inherent ‘black box’ nature, and their predictions cannot be easily explained. They may also propagate the biases of the data on which they are trained. Other concerns include the emergence of data silos, model hallucinations, and algorithmic coordination. GenAI models are also prone to the problem of ‘garbage in, garbage out’: the quality of the outputs of these models is only as good as the underlying input data,” according to the article.
However, it stressed that there are also “macro” risks that affect the stability of financial systems.
“As AI use continues to gain momentum, we should remain attentive to the systemic risks it can create. Even with its limited capabilities, early AI use caused flash crashes and financial instability. Notable examples include the 1987 US stock market flash crash caused in part by the reliance on rule-based models by insurance companies,” the article added.