Global Times: Yearender: Chinese researchers catch up with global AI momentum

BEIJING, Dec. 25, 2023 /PRNewswire/ — The year 2023 witnessed a fascinating game of catch-up worldwide after OpenAI released ChatGPT in late 2022. Looking to the year ahead, Zhou Hongyi, founder and chairman of 360 Security Technology, said he is “still quite optimistic” about the overall development of the artificial intelligence (AI) industry in China.

The speed at which China’s large models have developed is already a miracle, but the world needs to be patient with them. The internet’s industrial revolution has been unfolding for at least 10 years, while AI’s turning point emerged only in the past year or two, Zhou told the Global Times.

In March, Baidu took the lead by introducing its first large language model, “Wenxin Yiyan.” Following suit, on March 29, 360 Security Technology unveiled its artificial intelligence strategy along with the release of Zhinao, or “intelligent brain.”

Shortly after, on April 11, Alibaba introduced its “Tongyi Qianwen” large model at the Alibaba Cloud Summit. On May 6, iFlytek launched its Xinghuo large model, with Chairman Liu Qingfeng stating the company’s goal to surpass ChatGPT in Chinese and catch up with ChatGPT in English by October 24. Huawei, JD.com, ByteDance, SenseTime and other companies have also released their own large model products in succession.

Confidence in China’s AI industry for the coming year runs high among leading Chinese AI developers and industry observers, the Global Times found, even as Google reentered the arena with a strong comeback, releasing Gemini on December 13.

Zhou admitted that there is still a gap between Chinese models and GPT-4, but that gap does not prevent China from building its own GPT.

Xiao Yanghua, a computer science professor at Fudan University and director of the Shanghai Key Laboratory of Data Science, agreed that “Chinese enterprises should be a smart follower, actively exploring our own competition track on the premise of ensuring that we do not lag behind.”

Zhou told the Global Times that China possesses great industrial dividends, saying the key for China’s development of large models is to seize the dividends of various AI application scenarios.

China has the most comprehensive range of industrial categories in the world, with complete supply chains and industrial chains. The greatest opportunities lie in the industrialization, specialization and verticalization of the technology, along with a move toward deep customization, Zhou noted, calling for the wide use of GPT-style models across industries, sectors and organizations, combined with vertical AI application scenarios.

Expecting one large model to fit the needs of every industry is far too broad and unrealistic; China’s large-model products can be more fine-grained, Zhou said. Within a single industry, a large model can empower different aspects, specific links and tasks. In the financial sector, for example, customer service is one such fine-grained area, while in intelligent connected vehicles, smart cabins, intelligent navigation and in-car entertainment are similarly specific options.

“Many untapped blue oceans are out there,” Xiao also told the Global Times, mentioning embodied large models, medical large models, scientific large models and other specialized fields.

But Xiao also warned that ChatGPT has created a “flywheel effect,” in which iteration and optimization are pushing the technology into a self-reinforcing phase of rapid development, possibly leading the industry toward a future in which only one or two models become the dominant players.

Adopt AI, think later?

Should AI technology, regarded with a mix of fear and awe, progress faster or slow down? Or should we develop it and regulate it at the same time?

The Global AI Governance Initiative, proposed by the Chinese leader this year, advocates upholding a people-centered approach in developing AI and promoting the establishment of a testing and assessment system based on AI risk levels, so as to make AI technologies more secure, reliable, controllable and equitable.

Zhou concluded that the foreseeable challenges brought by AI include technical security issues, mainly concerning network security and data security, as well as content security issues arising from large models’ ability to “fabricate” content.

More specifically, AI risks being misused as a tool for launching cyberattacks, producing deceptive media, or spreading false information and offensive language, industry observers warned.

Zhang Linghan, from China University of Political Science and Law and a member of the UN High-Level Advisory Body on Artificial Intelligence, told the Global Times that China’s stance has always been to actively promote AI technology development while attaching importance to security.

Various laws and regulations, including the Data Security Law, regulations for managing internet information service algorithms and deep synthesis, and interim measures for governing generative AI services, have also been established to form a comprehensive AI regulatory framework, Zhang noted.

More risks and threats are expected to emerge, according to industry observers, as the AI industry accelerates its evolution toward “multimodal models.”

To address the security issues of large models, “it is necessary to make technological breakthroughs rather than relying solely on the self-discipline of large model enterprises,” Zhou said. He attributed this to the fact that large models already display capabilities surpassing those of humans, and that they are on the verge of becoming “superhumans” in the near future.

“We must prevent any ‘irreversible’ consequences from happening,” Zhou emphasized. To achieve this, humans should avoid relinquishing control of a system to the large model from the outset; instead, they should keep humans in the decision-making loop and ensure that crucial decisions are made by people.

He went on to say that safety measures can be built into the agent architecture to address the security and controllability issues that may arise when large models make use of various kinds of knowledge, skills and tools.
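The human-in-the-loop safeguard Zhou describes can be illustrated with a minimal sketch. The Python example below is not from the article or any specific product; the action names and functions are hypothetical assumptions. It simply shows an agent whose routine actions pass through automatically, while actions flagged as critical are blocked unless a human operator explicitly approves them.

```python
# Minimal, hypothetical sketch of a human-in-the-loop gate in an agent loop.
# All action names and helper functions below are illustrative assumptions,
# not part of any real product mentioned in the article.

# Actions the agent may propose; "critical" ones require human sign-off.
CRITICAL_ACTIONS = {"transfer_funds", "delete_records", "send_external_email"}

def human_approves(action: str, details: str) -> bool:
    """Ask a human operator to confirm a critical action before it runs."""
    answer = input(f"Agent requests '{action}' ({details}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, details: str) -> str:
    # Placeholder for the real tool call the agent would make.
    return f"executed {action}: {details}"

def run_agent_step(action: str, details: str) -> str:
    """Gate critical decisions on a human; let routine ones pass through."""
    if action in CRITICAL_ACTIONS and not human_approves(action, details):
        return f"blocked {action}: human operator declined"
    return execute(action, details)

if __name__ == "__main__":
    print(run_agent_step("summarize_report", "Q3 earnings"))   # runs directly
    print(run_agent_step("transfer_funds", "refund order #42"))  # needs approval
```

In this sketch the crucial decision never leaves human hands: the agent can only propose a critical action, and the default answer is refusal.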

Will AI become conscious?

This year, Elon Musk dropped a bombshell, calling artificial intelligence “one of the biggest threats to humanity.” Prominent figures in the sector, including representatives from OpenAI, Google DeepMind and Anthropic, have jointly warned that AI could potentially cause human extinction.

As for the ultimate challenge posed by large models, Zhou believes that AI has not yet reached that stage. GPT-5 will not appear overnight, and ChatGPT would also need “hands and feet” to connect with the real world in order to pose a real threat, so it is too early to worry about it now.

However, Xiao raised a more immediate, practical problem: addiction to using AI in daily work, with people simply letting the machine replace their thinking.

“Over-reliance on AI for thinking could potentially steer us toward intelligence degeneration, as human intelligence is intricately woven into our evolution-driven nature,” he said.

“In history, no technology has developed at a speed comparable to AI. If traditional technology is a rifle, AI is a hydrogen bomb, completely different in scale,” Xiao said.

But he said protected zones of human activity in which AI cannot interfere could be established as a preventive measure, such as basic education for minors, to guard against thinking degeneration among younger generations.

“When we delegate a large number of writing tasks to machines, we deprive ourselves of opportunities for mental exercise, resulting not only in the decline of generative abilities, such as writing, but also in the decline of human evaluation abilities.”

As for machine consciousness, Chinese observers believe the debate mostly blurs the boundary between science and science fiction, with people making sensational statements to attract attention.

But letting the imagination run wild, Xiao said that machines now possess a brain, in the form of large models, and are acquiring a body, in the form of embodied intelligence; if they then evolve within human society or virtual worlds, and a sufficiently large group of machine intelligences learn and collaborate with one another, it is not impossible for consciousness to emerge.

In that case, the bottom line is to establish a forbidden zone for AI cognitive systems. “For AI machines, the identity of human beings, as ‘the creator’ of the AI world, should be hidden,” Xiao said.

Cision View original content:https://www.prnewswire.com/news-releases/global-times-yearender-chinese-researchers-catch-up-with-global-ai-momentum-302022220.html

SOURCE Global Times
