
How Humans Can Manage the Growing Risks of AI

Artificial Intelligence (AI) represents a technological shift unlike any previous invention. Where earlier tools enhanced physical abilities, AI challenges humanity's core trait: the capacity to generate and apply knowledge. This gives AI the power to transform personal identity, economic structures, and social organization.

The immense benefits of AI are matched by significant risks, demanding a comprehensive global strategy for governance. A narrow debate that pits efficiency against safety is insufficient. Instead, we must adopt a holistic understanding of AI, its applications, and its future evolution.


Rethinking Human-Level Intelligence

Much public discussion focuses on artificial general intelligence (AGI), a vague concept promising human-level performance across all cognitive tasks. The term is inherently unclear, as it is impossible to define the full range of human cognitive abilities. Moreover, AGI discussions often overlook a key aspect of human intelligence: autonomy. True intelligence requires not just task performance, but the ability to understand the world and adaptively achieve goals by combining diverse skills.

The Gap Between Today’s AI and True Autonomy

Current AI systems, such as conversational agents, are far from being able to replace humans in complex organizations. Consider autonomous driving systems, smart grids, factories, cities, and telecommunications networks: each is a highly complex system in which individual agents pursue their own goals while coordinating to achieve collective objectives.

The technical hurdles to this vision of autonomy are immense, exceeding the current capabilities of machine learning. Setbacks in the autonomous vehicle industry, where some companies promised full autonomy by 2020, highlight these challenges. Today's AI agents are limited to low-risk digital tasks. To be trusted in critical roles, AI systems must demonstrate robust reasoning, rational goal pursuit aligned with ethical and legal standards, and a level of reliability that remains aspirational today.


Reliability and Explainability Challenges

A fundamental challenge lies in guaranteeing reliability. AI excels at extracting knowledge from data but remains largely non-explainable. This makes it nearly impossible to meet the high reliability standards required for safety-critical applications. Unlike elevators or airplanes, AI systems cannot be certified through conventional processes.

Human-Centric AI and Ethical Considerations

Beyond technical reliability, AI must also meet human-centric cognitive standards. Concepts like “responsible AI,” “aligned AI,” and “ethical AI” abound, yet most lack a solid scientific foundation. Unlike safety, ethical and social cognition depend on complex processes poorly understood even in humans. Passing a medical exam does not make an AI equivalent to a human doctor. Developing AI systems that genuinely respect social norms and exhibit responsible collective intelligence remains a major challenge.

Categorizing AI Risks

AI risks can be grouped into three interconnected categories:

Technological Risks

AI’s “black box” nature amplifies safety and security risks. Existing risk-management frameworks demand high reliability in critical systems, which current AI cannot meet. Global technical standards are essential to build trust, but efforts are often hindered by technical limitations and by resistance from Big Tech and some U.S. authorities, who argue that standards stifle innovation and who promote self-certification instead.

Anthropogenic Risks

Human-induced risks arise from misuse, abuse, or compliance failures. In autonomous driving, examples include skill atrophy, overconfidence, and mode confusion. Compliance risks stem from manufacturers prioritizing commercial expansion over safety. Tesla’s “Full Self-Driving” system illustrates the dangers of marketing claims exceeding technical realities.

Systemic Risks

AI risks long-term or large-scale disruptions to social, economic, cultural, environmental, and governance systems. Some risks, like monopolies, job displacement, and environmental costs, are widely recognized, but others, such as cognitive outsourcing, are less appreciated. Delegating intellectual work to machines can erode critical thinking, weaken personal responsibility, and homogenize thought. Raising awareness of these subtle cognitive risks is essential.

Toward a Human-Centric AI Vision

Addressing AI’s complex risks requires a human-centric vision that goes beyond the narrow AGI goal promoted by tech giants. This vision must honestly assess AI’s current limitations and encourage international research into new applications in science, industry, and services.

Ideologically, we must reject a “move fast and break things” mentality, which creates technical debt and long-term fragility. Likewise, the dogma of technological determinism, which downplays human agency in shaping technology’s societal role, must be resisted.


China’s Role in Global AI Development

China is well-positioned to contribute to this human-centric vision. Its strong industrial base demands increasingly intelligent products and services. Global standards and regulations will be critical in realizing this vision. By collaborating with other nations, China can help balance global AI power and harmonize development with reliability and safety.

Early initiatives, such as the China AI Safety and Development Association and the World AI Cooperation Organisation, reflect steps toward achieving this goal, emphasizing AI not just as a tool for power but as a service to society.


Written by Hajra Naz
