Australian Embassy to the Holy See hosts panel discussion on AI and human rights

AI and ethics: No advancement can ever justify a human rights violation

Following the Paris AI Action Summit, the Australian Embassy to the Holy See holds a panel discussion to address the ethical and human rights challenges in harnessing AI.

By Kielce Gussie

By 2028, global spending on artificial intelligence will skyrocket to $632 billion, according to the International Data Corporation. In a world where smartphones, computers, and ChatGPT remain at the center of debate, it's no wonder that the need for universal regulation and awareness has become a growing topic of discussion.

To address this issue, an international two-day summit focused on AI was held in Paris, France. The goal was to bring stakeholders from the public, private, and academic sectors together to begin building an AI ecosystem that is trustworthy and safe.

Experts from various areas of the artificial intelligence sphere gathered to take part in the discussion, including Australian professor and member of the Australian Government’s Artificial Intelligence Expert Group, Edward Santow. He described feeling hopeful that the summit would advance the AI safety agenda.

Trustworthiness and safety

On the heels of this summit, the Australian Embassy to the Holy See hosted a panel discussion to address the ethical and human rights challenges in utilizing AI. There, Prof. Santow described his experience at the Paris summit, highlighting the difficulty in building an atmosphere of trust with AI on a global scale. “It’s primarily about making sure that those systems that incorporate artificial intelligence are built in a very robust way, so that they don’t exploit people’s personal information for commercial gain,” the professor explained.

Experts from various sectors of the AI world came together to discuss how to include human rights in AI development

Prof. Santow stressed the importance of having safety measures in place to protect people and their data if an AI system fails. But the professor also noted the presence of what he called a counter-narrative at the summit, pushing against the establishment of a “safety net.” While some people argue that focusing on safety and trustworthiness will slow down AI development, he rejected the claim.

Positives and negatives

While advocating for the inclusion of ethics and rights in AI, Prof. Santow acknowledged there are “enormous opportunities…to advance a whole range of human rights” through the use of AI. As a human rights lawyer, the professor described positive examples where AI has helped visually impaired people experience the world around them. “It allows you to have a level of independence and autonomy through the world that you wouldn't otherwise have,” he pointed out.

Yet Prof. Santow warned against letting the benefits of AI negate or overshadow any violation of human rights, whether great or small. “When we look at artificial intelligence and we see both the extraordinary opportunity for good and the horrifying reality that it also causes harm, we need to give proportionate attention to the harm.” A safety net or level of protection could help limit or prevent this harm.

Three points to protecting human rights

To uphold human rights while using and developing AI, Prof. Santow outlined three points. First, there must be a good set of rules that “apply to all technologies.” This does not mean starting from scratch and creating a whole new approach to or moral guideline for technology; rather, it means adding new rules to our already existing values. This matters because there are things “that AI enables that are genuinely new,” and therefore the rules must be adapted to cover AI.

Second, this set of rules needs effective enforcement. Citing his fellow Australian, Fr. Frank Brennan, Prof. Santow explained that “a rule without effective enforcement is not a rule at all. It’s just a good idea.” Courts, governments, and organizations must take action and uphold human rights laws when it comes to AI. This was one motivation behind the Paris AI Action Summit. As France’s Ministry for Europe and Foreign Affairs stated, “It is the international community’s responsibility to maintain balance in our societies and to craft AI that respects universal values.”

The third point Prof. Santow stressed was that the law does not need to have all the solutions now. “If we design systems that incorporate AI in ways that don't exploit people's personal information or violate their right to privacy…then we know that will be probably the most effective way of ensuring that your human rights are upheld.”

Creating and enforcing guidelines that promote human rights means AI can be used in such a way that perhaps one day the benefits can strongly outweigh the risks.


15 February 2025, 12:22