The “Operator Trap” – An Interview with Professor Paul Cheung

 

With an intelligence revolution spurred by AI underway, Professor Paul Cheung, Honorary Professor in Computer Science at The University of Hong Kong, thinks it’s time to reflect upon and rethink our conceptions of education. 

While excited about its use, he cautions against losing our ability to think critically and reducing our role to that of mere machine operators.

 

More than half a century ago, Professor Paul Cheung had his first encounter with AI. Then in its infancy and far less powerful than it is today, AI depended heavily on input from programmers, who took rules and facts derived from human experts and wrote code to mimic their decision-making abilities.

The explosion of AI development, especially over the past 10 to 15 years, is primarily the result of three factors, according to Professor Cheung. First, the availability of a “ridiculous” amount of data, acquired ethically or unethically from the internet; second, the enormous increase in computational power, which has grown trillions-fold; and third, the growing interest, investment and research in AI seen globally, especially in the US and China.

Keeping these factors in mind, one must, according to Professor Cheung, acknowledge that Generative AI, as an application of machine learning, has an impact on us all. It is not something that we can avoid, ignore or even do without. With just a few prompt lines, it can draw on vast amounts of data to produce human-like content in text, images, video and sound. This means, said Professor Cheung, “that while there is still a long way to go, we are closer to the point where it will be harder to differentiate between a human being and a machine, as suggested by the Turing Test¹,” a test proposed by Alan Turing in 1950.

Honorary Professor Paul Cheung of the University of Hong Kong’s Department of Computer Science during an interview with Youth Hong Kong

The Intelligence Revolution: Risks and Opportunities

These developments do not surprise Professor Cheung. In attempting to place what we might call the “Intelligence Revolution” on a historical timeline, he urges us to consider both the “Industrial Revolution” – which liberated humans from backbreaking manual labour through inventions such as the steam engine and electric power – and the “Information Revolution” – which enabled the processing and transmission of vast amounts of information.

 

AI is not something we can avoid, ignore or even do without.

 

Both these “revolutions,” while ostensibly making our lives easier, had concomitant and detrimental effects too. The Industrial Revolution relieved us of physical work but potentially made our bodies weaker, while the Information Revolution continues to provide us with an abundance of information, but not the tools needed for critical thinking and discernment. “I once believed,” confessed Professor Cheung, “that the internet was going to change the world and make it easier to discover the truth. But look at the fake news and the destructive impact of social media. I was totally wrong at that time.”

It is no wonder, therefore, that he expresses optimism while simultaneously urging caution as the Intelligence Revolution unfolds.

On a positive note, AI has the potential to help in areas where knowledge is lacking. Professor Cheung related the example of a scholar who was asked to draft AI policies for a conference. Knowing nothing about policymaking, he used Generative AI to produce a draft version. After rounds of discussion with professionals, plus improved prompting, he arrived at a policy document that he considered better than what he could have produced without AI assistance.

However, such technology raises concerns when we cannot easily verify the quality and accuracy of content, especially when output rests on limited and potentially biased training data collected from largely unknown and unverified sources. In this case, Professor Cheung argued, AI could very well become an “echo chamber” that only reinforces pre-existing data. “We must also,” he continued, “be vigilant regarding legal and copyright issues. Within systems that operate much like a black box, absent human reasoning and values, AI doesn’t know or care what it is about and is not responsible for the content it generates.” Professor Cheung was also concerned that once trained on a fixed dataset, certain AI models could essentially “be static and confined, limiting their ability to adapt to rapidly changing real-world information and scenarios.”

On a more practical level, Professor Cheung emphasised the staggeringly high computational costs involved: training a large language model can potentially cost billions. He further warned about the consumption of massive amounts of energy and water. ChatGPT’s daily power usage is nearly equal to that of 180,000 US households, and a single ChatGPT conversation uses the equivalent of one plastic bottle of water, according to a Forbes report².

Professor Paul Cheung speaks at the University of Hong Kong.

AI and Humans

While agreeing that identifiable risks can be addressed with guidance and regulations, Professor Cheung is more worried about the risk of human devolution – of our becoming intellectually weaker. What happens next, he predicted, is that AI will help “us do our mental work.” He continued, “The risk is that we become more and more like an operator if we fail to deploy technology in a way that benefits us and helps us think more.”

What makes a human different from a mere machine operator, according to Professor Cheung, is the human thinking process, together with the willingness to seek help from professionals. “Ultimately, working with machines makes us think; this is what makes us human and what makes us valuable compared to machines. It is the process that educates you,” he added.

 

Ultimately, working with machines should make us think more. 

 

“Some people are scared of AI and some refuse to learn about it. But what we need is a balanced mindset and the ability to work with it. Before using AI, think and ask what you want to achieve. Read and evaluate the content, digest it, and then rethink. Ask different questions and re-read the content, until you learn. If you are lazy, AI will just turn you into more of an operator.”

 

Rethinking Education

So how does this all relate to young people on a more practical level? The debate continues over whether students must still learn basic computational skills such as coding and programming languages. Some argue that AI has completely closed the technology divide, and that there is no longer any need to learn programming from scratch.

Professor Cheung completely disagrees. He believes basic knowledge, and the ability to understand how something works, is even more essential today. Not knowing why things work the way they do, he said, will only push us into the “operator trap,” where one relies on operating a machine without any understanding of what is going on.

“Yes, of course, we can all use AI to generate program code, write, and create paintings and poems, for example. But that doesn’t mean we don’t need to learn about painting, writing, programming, or anything else. If we are happy to remain devoid of deeper curiosity, we not only become operators or dumb robots, but also remain very shallow in our knowledge, reasoning and ability to be creative and innovative,” he added.

 

The long-term goal of education in an age of rapid technological progress should be to learn and adapt.

 

Professor Cheung emphasised the importance of an all-round education, including not only STEM subjects but also languages, the humanities, social sciences and other non-technical subjects. “Languages, for example, are what humans use to think. Without languages, our ideas remain mere abstract concepts. Languages are the crucial tools that solidify thinking into tangible form.” Unfortunately, he has noticed diminishing language ability among young people in Hong Kong, which he believes may hinder their use of AI.

“The long-term goal of education in an age of rapid technological progress,” Professor Cheung stated, “should be teaching young people how to learn and, after acquiring the fundamental know-how, encouraging them to broaden their horizons, acquire just-in-time knowledge and adapt, because knowledge itself can become outdated in the blink of an eye.” A motto he often shares with his students is: “Knowing what you don’t know is more important than knowing what you do know.”

However, while focusing on technology, Professor Cheung reminds schools not to neglect their role and responsibility of inspiring students and helping them grow and mature beyond acquiring knowledge and skills. At a practical level, students should learn how to get along with people, solve problems and, most importantly, think about their purpose in life, he said.

Beyond schools, other actors also have a duty to young people and their all-round education. The government, parents and companies should work together to equip young people with the necessary skills for the age of AI, according to Professor Cheung, as should youth organisations like the HKFYG. Each stakeholder should embrace the new technologies themselves by adopting them effectively into daily operations. This will help young people develop a balanced mindset.

Looking ahead to a future of unpredictable technological revolutions, Professor Cheung thinks the impact of AI will be “enormous” in just a decade. “You cannot avoid it no matter where you are, how old you are, and what discipline you are in,” he concluded. ■

 


References:

  1. The Turing Test is a method of evaluating whether a computer can exhibit intelligent behaviour equivalent to that of a human. It involves an interrogator conversing with a human and a computer through text-based communication, without knowing which is which.
  2. Forbes, “ChatGPT And Generative AI Innovations Are Creating Sustainability Havoc”, March 12, 2024.