At the same time, a misused or poorly designed AI system can reinforce discrimination, introduce harmful biases, or even create new forms of exclusion, compounding the harm suffered by communities already in crisis. This is why the ICRC is committed to using AI carefully and only where it truly adds value, ensuring that solutions are fair and transparent and uphold the humanitarian sector's longstanding “do no harm” principle.
Ladies and gentlemen,
We are confronted with a world fraught with intertwined turbulence, in which security disorder, development imbalance, and governance failure are becoming increasingly prominent. The ICRC currently classifies more than 130 armed conflicts around the world, a figure that has more than doubled in the last 15 years, many of them marked by intense violence, widespread destruction, and restrictions on humanitarian aid.
As a humanitarian organisation, the ICRC cannot stop conflict. But to effectively help people in armed conflict, the ICRC must first assess the implications of contemporary and near-future developments in armed conflict.
In fact, the integration of AI into military operations has raised many legal, ethical, and humanitarian concerns. For example, AI-assisted decision-support systems are influencing and accelerating military decisions about who or what is targeted in armed conflict at a speed and scale that surpass human cognitive capacity, which can undermine the quality of decision-making. Meanwhile, autonomous AI agents can orchestrate complex cyberattacks against civilian services in seconds, leaving defenders drastically less time to detect and prevent such attacks. Furthermore, the future use of autonomous weapons systems (AWS) will involve a wider range of targets, longer periods of use, and fewer opportunities for human intervention.
To address these issues, the ICRC submitted a position paper to the UN Secretary-General this year, stating its view on AWS and calling for clear rules restricting or prohibiting them. The organization maintains an unwavering principle: however technologically sophisticated AWS may become, humans must ultimately remain in control.
Ladies and Gentlemen,
When meeting with ICRC President Mirjana Spoljaric in 2023, President Xi Jinping emphasised that “China is an active supporter of, participant in and contributor to the international humanitarian cause”.
In fact, this year marks the 20th anniversary of the establishment of the ICRC Regional Delegation for East Asia in Beijing.
As China has become a major power with significant global influence and one of the ICRC’s key procurement hubs worldwide, the ICRC is deepening its understanding of China’s perspectives on international cooperation, development, armed conflict, and peace. At the same time, the ICRC is broadening its collaboration with China across a wide range of areas, especially in the technology sector. The ICRC proactively engages with the Chinese tech sector through valuable platforms such as the World Internet Conference, both to explore how Chinese AI and other technological solutions could support humanitarian action and to foster dialogue on their responsible development and use.
From 4 to 5 December, the ICRC, jointly with Tsinghua University, will host a Symposium on the Responsible Use of Technology in Humanitarian Action in Beijing, exploring different perspectives on the opportunities and challenges of digital transformation, including the responsible use of technologies such as AI. We sincerely invite you to join our discussion.
Ladies and Gentlemen,
AI is reshaping the world at a pace few could have imagined. For the ICRC, the choice is clear: AI must remain in the service of humanity — protecting dignity, preserving life, and never replacing the human compassion at the heart of humanitarian action. This means embracing innovation where it can strengthen humanitarian action, while firmly rejecting uses that undermine human control, accountability, or compassion. It also means working with global partners, including China, to build common rules and safeguards so that AI becomes a force for protection, not harm.
Thank you!
See also:
The Shaping of International Rules on AI and AI Governance: Some Humanitarian Considerations
