[Speaker B] [635.920s → 683.580s]: Good afternoon, Excellencies, distinguished delegates. I'm delighted to welcome you today to our event on artificial intelligence, human rights and counterterrorism. My name is Ben Saul. I'm the United Nations Special Rapporteur on Human Rights and Counter-Terrorism, and I'm delighted today to launch our new paper on this topic. But firstly, I'd like to hand the floor to Her Excellency Mrs. Pascale Baeriswyl, Ambassador and Permanent Representative of Switzerland here in New York. Switzerland has been a great supporter of my mandate over many, many years, including support for the production of this paper. So welcome, and over to you. [Speaker C] [683.900s → 862.260s]: Thank you very much, Special Rapporteur. Excellencies, distinguished guests and colleagues, it's a great pleasure and honor to welcome you all to today's event marking the launch of the position paper of the Special Rapporteur on the human rights impact of using AI to counter terrorism. I would like to thank the Special Rapporteur, who made it all the way down from Australia, or up from Australia, and actually, he is already in the tomorrow. So Professor Ben Saul, thank you very much, and also to your team, for this timely and thought-provoking report, as well as our partners and colleagues from CTED, OHCHR, civil society and academia who continue to advance dialogue on this really vital issue. My country, Switzerland, is very pleased to co-sponsor and support this important initiative. As a country deeply committed to human rights, the rule of law and multilateral cooperation, we see this work as central to ensuring that new technologies serve people and not the other way around. Today's discussion is of particular interest for Switzerland. We co-organized an expert group meeting in Geneva on 8 May, together with UNOCT, CTED and GCSP, where experts from across disciplines examined the human rights and rule of law dimensions of using AI in counterterrorism. That meeting reminded us that while AI can enhance our ability to prevent and respond to terrorism, it also raises profound ethical and legal challenges. And the question before us remains: how do we harness the benefits of AI without undermining the very rights and freedoms we seek to protect? The Special Rapporteur's recommendations offer an important response. They provide clear guidance for safeguarding human rights in the development, deployment and transfer of AI systems used in counterterrorism. These recommendations are not only timely, they are essential. As AI becomes increasingly embedded in counterterrorism, the risks of bias, discrimination, opacity and overreach multiply. Without safeguards, we risk eroding the very foundations of justice and equality that counterterrorism aims to defend. Switzerland believes that international dialogue and cooperation are crucial to addressing these challenges. We must foster a shared understanding of responsible AI governance, one that places human dignity, transparency and accountability at its core. So let us therefore use today's discussion to reaffirm our collective commitment, which is to ensure that AI serves as a tool for security that is firmly anchored in human rights and the rule of law. So on behalf of Switzerland, I thank the Special Rapporteur once again for his leadership, and I look forward to an engaging and forward-looking discussion. Thank you very much. [Speaker B] [863.540s → 1681.240s]: Thank you very much.
And now I'm delighted to give the floor to myself. Today we were inspired, in fact, by this joint meeting, and I recognize the important work of the co-hosts in Geneva recently. This was one of the inspirations prompting us to write this report. And in fact, our consultant who led the drafting on this project, Jonathan Andrews, was one of the participants in Geneva, and we recognize his work on this as well. Today is something of a soft launch of this report. We'll have a QR code up on the screen shortly, which is designed to give you a bit of a teaser of the report. We're not going to launch the full report for another week or two, but what we've given you here is the recommendations part of the report. And we're really interested to use this opportunity today as a kind of last consultation, so that if anybody has any further feedback on our recommendations, if we've omitted anything or got any language wrong, we'd be very keen to hear from you in the next two weeks so that we can finalize the whole report. Now, obviously, artificial intelligence can potentially be deployed in a broad range of activities in counterterrorism. These include threat detection, risk assessment, physical and digital surveillance, predictive policing and force deployment, border management, detention, and even military decision-making. AI can also enhance and augment existing technologies and capabilities used for law enforcement purposes, from scenario planning, simulations and resource allocation to modeling the logistics of operations and surveillance activities. And this is one of the important features of AI: because it compounds other technologies, it also has the potential to compound the human rights violations caused by those underlying technologies. AI systems can operate with varying degrees of autonomy. The more autonomous a system, the less direct human control there is over its actions. AI can assist in decision-making by allowing for more rapid collection and processing of huge volumes of data, and can potentially even displace human judgment in specified situations. These capacities raise fundamental questions about the ability of certain uses of AI technology to comply with human rights. Critically, certain uses of AI in countering terrorism can profoundly violate human rights. AI algorithms can aggregate and analyze highly personal or sensitive data, such as arrest and criminal records; associations with family members, friends and colleagues; patterns of crime and policing in society; social media networks and posts; communications data; travel information; employment records; and personal information held in government databases for social, health, education and other services. Such information can then be used to profile alleged terrorist risks or associations of individuals and groups, and prompt invasive counterterrorism measures including surveillance, arrest and detention, administrative restrictions (the subject of my report to the General Assembly this week), and other measures to prevent and counter violent extremism and terrorism. In Part One of the paper, we outline some key uses of particular kinds of AI technologies and their specific implications for human rights.
Just to briefly list those, I'd mention: behavioral biometrics; AI-driven facial recognition and networked surveillance technologies; predictive policing and force deployment; vision-language models, which associate text and images online and draw patterns between them; AI-powered surveillance of online platforms and online content moderation; border management operations, one of the fields which draws together many of the pre-existing new technologies; and then, finally, of course, military uses of AI, whether in so-called autonomous weapon systems or, more commonly, in AI-enabled decision support systems, including processes for targeting. Now, for everything AI potentially offers by way of enhanced security, these systems are far from infallible. Their data sets and algorithms may be biased, unrepresentative, illegally sourced or inaccurate, potentially leading to discriminatory profiling, violations of privacy and counterproductive false positives. AI systems can lack adequate human control and analysis, resulting in overreliance on automated decision-making which lacks nuance, context and responsiveness to human rights considerations. AI assessments are merely probabilistic and only as good as the data on which they're based. Diverse uses of AI in counterterrorism consequently have the potential to violate numerous fundamental rights. In Part One of the paper, where we address the different technologies I mentioned, we outline some risks specific to those technologies. And then in Part Two of the paper we zero in on some key rights affected by many AI systems, those being equality and non-discrimination; privacy and data protection; freedom of expression and access to information; liberty and rights in detention; and fair trial and due process. And then, on top of all of that, of course, the right to an effective remedy. Overconfidence in technological solutions can also, we think, detract from addressing the underlying conditions conducive to terrorism, thereby provoking states to use more repressive than preventive means to combat terrorism. And of course there is always the risk of exceptional technology produced for counterterrorism, and justified as exceptionally needed to counter terrorism, then in a very short space of time being used in regular law enforcement and policing in all kinds of non-exceptional situations. Now, fortunately, the General Assembly, the Human Rights Council and many states are increasingly interested in the need to regulate AI systems generally in a way which respects human rights. Calls for inclusive, transparent, accountable AI governance must address the specific risks in counterterrorism and security contexts. And I'd mention in this respect that, although we've got some quite good examples of regulation at the national and regional level, the EU AI Act being one fairly recent example, even the best examples have some troubling aspects to them. One of the ones we're most worried about in the European Union context, for example, is certain blanket exemptions for uses of AI in national security and military contexts, and other kinds of exemptions or exceptions for law enforcement, border and public security purposes, precisely all of the areas where you need the strongest human rights regulation, because that's where human rights are most at risk. So the position paper focuses on these human rights risks in the recommendations section, which we have given you access to today.
We call for explicit legal frameworks to regulate AI to protect human rights, including no blanket exemptions. And if you do need special regulation in certain contexts, like military or security, then you have to have tightly justified, tailored concessions, not blanket exemptions. It's also not good enough to leave AI to a kind of laissez-faire market approach where you don't regulate at all, which is what we've seen with some new technologies in some countries. Secondly, there have to be blanket prohibitions on the use or transfer of certain AI systems that intrinsically cannot meet human rights law standards, and we give some examples of these. And this is where we correspond with the EU approach, I'd say: for example, where assessment of a terrorist risk is based purely on algorithmic profiling or personality traits, including biometrics or sentiment or emotion analysis, in the absence of any human assessment based on objective, verifiable criminal activity; the use of facial recognition databases sourced from untargeted, non-consensual scraping of images from the Internet or surveillance footage; real-time remote biometric identification used indiscriminately in public places; and decisions about military targeting without any human intervention. Thirdly, where AI is not prohibited, we call for heightened regulation, including human rights due diligence assessments by developers and users, public or private, throughout the life cycle of the technology. We call for what, in the jargon of the field, is called transparency and explainability, so that you know what data is being input and how it's being used, thereby allowing for rigorous scrutiny and effective oversight and accountability; stringent export controls; very specific kinds of regulation in the military context, tracking the debates being led by the International Committee of the Red Cross; and, ultimately, human control in decision-making. There have also got to be stringent safeguards on data quality, testing and validation, and, of course, personal data protection and data security. Ultimately, of course, AI systems must be subject to effective independent oversight. There should be a whole range of bodies, including, of course, national human rights institutions and the courts, but it also requires specific regulation by data protection authorities and, ideally, an AI regulator with effective binding powers. And finally, of course, there must be accessible remedies for violations of human rights which flow from the misuse of AI. To conclude: with the ninth review of the Global Counter-Terrorism Strategy coming up next year, if it's not a technical rollover and if the text is reopened, I would be keen to encourage States to ensure that new technologies are addressed in a balanced way in the ninth review. The Pact for the Future recently drew attention only to the misuse of new technologies by terrorists and said nothing at all about the abuse of new technologies by States, which, from where I'm sitting as a human rights independent expert, accounts for by far the greater proportion of abuses of new technologies. And because artificial intelligence has so many uses and can compound so many other new technologies, with pervasive human rights implications, I think we need to get ahead of this and make sure that our global approach to regulation is much stronger than it currently is.
So, as Chair, I'll thank myself for an excellent presentation and, without further ado, pass to our discussants on the paper, bearing in mind that we did dump the full paper, not just the recommendations, on our discussants on Friday, so they've had the weekend to digest it. The paper's about 35 pages, I think, in total. But I'm pleased to pass on. Firstly, I think, Cecilia, you'll go first. So, Cecilia Naddeo, Senior Human Rights Officer from CTED, over to you. Thank you. [Speaker D] [1682.200s → 2067.910s]: Thank you very much, Chair and Special Rapporteur. Good afternoon, Excellencies, distinguished colleagues. It is a pleasure to be here today with you, and a particularly exciting day for us who are based in New York, because we have the opportunity to welcome the Special Rapporteur to our [unclear]. Thank you so much for being here, and also to learn this time about the contours of his much-awaited position paper. I'm also grateful today to the Government of Switzerland for their continued support to this initiative and many others. The United Nations Counter-Terrorism Committee Executive Directorate, or CTED, is a special political mission that supports the Security Council's Counter-Terrorism Committee. Our work focuses on assessing Member States' implementation of their counterterrorism obligations under the relevant Security Council resolutions, and facilitating technical assistance when needed. In addition, we're mandated to monitor trends, emerging issues and developments in terrorism, but also in counterterrorism, with technological advancements in the field of counterterrorism being one of the most dynamic and rapidly evolving areas we track. So the Special Rapporteur's position paper, with its emphasis on principled approaches combined with immediate, actionable recommendations relevant to the deployment of AI tools when countering terrorism, opens up a vital, much-needed conversation in this rapidly evolving space, also marked by a consultative approach that we value dearly, a conversation we trust will continue after today's event. Today's position paper is particularly instructive to CTED's ongoing efforts to better understand the different modalities offered by artificial intelligence tools when used responsibly to focus limited resources more efficiently and thus devise measured interventions aimed at preventing the commission of a terrorist act. CTED is also currently exploring ways in which artificial intelligence can be leveraged to appropriately react to the commission of a terrorist act by, for instance, assisting investigators with enhanced forensic analysis or, further down the road, conducting more targeted risk assessments leading to tailored rehabilitation support for terrorist offenders. Overall, we affirm the commitments of Member States under international human rights law, under which permissible limitations of human rights are to be provided by law, proportionate to the harm to be addressed, necessary to the objective pursued and non-discriminatory in nature. Much like the Special Rapporteur's approach, we have collated examples of the adoption of AI tools, particularly generative AI, in the counterterrorism domain.
These encompass, as detailed by the position paper as well: enhanced surveillance techniques, including the use of behavioral biometrics; automated processing of large amounts of data; predictive policing; devising best intervention tools for targeted content moderation; custom-made counter-narratives; and retooled facial recognition techniques and emotion recognition tools. But we have also started to observe emerging good practices in this space, even if still not widely considered, or less so fully implemented, including: the conduct of human rights impact assessments, in particular privacy impact assessments, prior to the adoption of innovative technology solutions; the affirmation of human-in-the-loop approaches to content moderation and other related interventions; and the deployment of red-teaming techniques aimed at testing inherent weaknesses, including impermissible limitations of human rights, in AI-based counterterrorism responses. CTED acknowledges the need to engage more deeply with government practitioners on this topic to be able to advance a more sophisticated understanding of the opportunities available and the risks to be assessed. This includes, for instance, Member States' approaches not only to developing relevant legal frameworks but also to funding research in this space, procuring this technology and exchanging information among counterparts via formal and informal channels. To this end, CTED will continue to document any element of relevance raised by Member States in the context of our dialogue and country visits on behalf of the Counter-Terrorism Committee. We also take note of the Special Rapporteur's views on the need to develop revitalized due diligence approaches when considering providing United Nations-led capacity building or technical assistance in this space. Likewise, linked to the Committee's efforts to encourage Member States to adopt effective oversight mechanisms for counterterrorism at large, CTED has also embarked on discussions with oversight entities tasked with a mandate to monitor security and law enforcement agencies, with a view to learning more about the tools they are developing to engage on the agencies' use of AI in the discharge of their mandates. We know that the impact of technology is not uniform across different segments of society. Gender, ethnic, racial, socioeconomic and other related factors shape how individuals experience not only terrorist violence but also counterterrorism responses. As such, we emphasize that careful attention is always needed when deploying AI in the counterterrorism domain, with a view to preventing the replication or reinforcement of societal biases and impermissible discriminatory approaches. We will continue to find ways to enhance accountability and transparency in this space. On this, an inclusive approach is needed, one that prioritizes engagement not only with Member States but also with academia, civil society, the private sector and the communities most affected by terrorism and violent extremism, in incrementally targeted discussions. We, and CTED, look forward to today's discussions. We welcome the position paper and look forward to continued work in this space. Thank you very much. [Speaker B] [2069.510s → 2092.609s]: Thanks so much, Cecilia, and delighted as always to pay tribute to the amazing work that your human rights team in CTED does. We have had such a long and positive collaboration.
Next we have Celi Long, with whom we have an even greater collaboration, from the UN Office of the High Commissioner for Human Rights, based here in New York. Celi, thank you so much. [Speaker A] [2094.290s → 2397.090s]: Thank you, Ben. Good afternoon, Excellencies and distinguished colleagues. Congratulations to the Special Rapporteur on the launch of your position paper and the recommendations today. This is an important topic, and thank you also to you and Switzerland for inviting OHCHR to share our views on human rights and the use of artificial intelligence in counterterrorism. So this topic, human rights and AI: they're contrasting concepts, one rooted in centuries of moral progress and now codified in international human rights law, the other born of code, relatively new and rapidly evolving, but both actually shape, and will probably continue to shape, our future. Now, while we see that AI technology can help human rights and human beings in myriad ways, including in countering terrorism, we need to ensure that fundamental human rights are adequately protected in this new digital era. Having said that, OHCHR has documented some concerning examples, and I think these are also highlighted in your paper, Ben. For instance, predictive algorithms can be skewed by biased historical data, leading to discrimination. The use of AI to moderate online content can negatively impact freedom of expression and freedom of opinion. These are just to name a few. Now, to navigate this complex terrain, let me echo the Secretary-General's, the High Commissioner for Human Rights' and the Special Rapporteur's recommendation that the use of AI in counterterrorism must adhere to international human rights law and must integrate robust human rights safeguards. For today, I would like to highlight three overarching principles, in addition to the protection of specific human rights. First, there is a need for transparency, which the Special Rapporteur has mentioned, and this is needed to mitigate the risk that opaqueness may generate from the use of AI. The more likely and the more serious the potential or actual human rights impacts linked to the use of AI, the more transparency is needed. Now, the commitment to public transparency requires, or means, that AI must be explainable. We've heard about the principle of explainability, and this should be incorporated into law to ensure that individuals have the right to understand how decisions affecting their lives are made, and also to provide them with the tools to defend human rights in the face of AI. Transparency also means informing the public and affected individuals and communities about the use of AI. Companies should also make explicit where and how AI technologies and automated techniques are used on their platforms, their services and their applications. For example, companies should publish data on content removals that are AI-driven. Now, the second is accountability. The inherent obscurity and, in some instances, the absence of clarity with regard to AI-based decisions raise very pressing concerns and questions concerning state accountability, particularly when AI informs coercive counterterrorism measures. Laws regulating AI must ensure that when harm occurs, there is redress, responsibility and remedy. This also requires companies to put in place systems of human review and remedy to respond in a timely manner to complaints of users levied at AI-driven systems.
Now, third: where accountability and transparency are goals, independent oversight is the means to achieve them. The deployment of AI systems in counterterrorism and security contexts should be subject to regular audits by external and independent oversight mechanisms. Such mechanisms should be adequately resourced and technologically proficient. They should have the mandate to monitor the use of AI tools in and by the public sector, and to assess them against criteria developed in conformity with international human rights law. This should include the establishment as well of independent data privacy oversight bodies. Now, in closing, I'd like to underscore that these three principles are not optional. Transparency allows us to understand how decisions are made by AI systems. Accountability ensures that decisions can be challenged and corrected. And independent oversight provides the impartial scrutiny necessary to protect human rights. The Global Digital Compact similarly embodies these shared norms and standards in AI development. It calls for digital cooperation to advance, I quote, a responsible, accountable, transparent and human-centered approach to AI. Thank you. [Speaker B] [2427.010s → 2436.370s]: Thanks so much, Celi. Now we turn to our dear civil society colleagues. Firstly, Tommaso Falchetta from Privacy International, over to you. [Speaker E] [2441.170s → 2773.410s]: Thank you for inviting me to this debate. First of all, let me say that Privacy International welcomes the initiative of the Special Rapporteur to develop a position paper on the human rights impacts of using artificial intelligence in counterterrorism. We believe it is a very timely initiative, given the increased interest in and use of AI technologies in counterterrorism. As a general remark, and this is something that I think the Rapporteur has already alluded to and is present in the draft, PI believes that the use of AI technologies in counterterrorism poses significant risks to human rights, risks that in certain cases cannot be adequately mitigated. As a result, governments should not design or deploy AI technologies without having first demonstrated their capacity to comply with existing human rights law. Now I would like to focus my remarks on the risks posed. First, the processing of vast amounts of personal data in an indiscriminate and untargeted fashion raises concerns of mass surveillance and questions about compliance with the principles of necessity and proportionality. Secondly, the consequences of AI decisions based on data processing can lead to serious interference with other human rights, such as the right to liberty and freedom of movement. As mentioned already by the Rapporteur, in counterterrorism contexts, predictions, assessments and decisions made by or with the support of AI technologies turn individuals into suspects, and that raises significant concerns. Third, AI technologies have been used in ways that exacerbate discriminatory practices.
They can perpetuate and even enhance discrimination, for example by reflecting embedded historical racial and ethnic bias in the data sets used. Fourth, and lastly, AI technologies challenge the capacity to remain anonymous online and offline, with serious implications for the rights to privacy, freedom of expression and peaceful assembly. Let me just give you an example, which I think the Rapporteur has already alluded to. Over the last few years, governments have developed their capacity to monitor activities in digital spaces, particularly to track individuals online and to analyze data produced by social media interactions. Social media intelligence, SOCMINT as it's called, is often justified as a form of content moderation for counterterrorism purposes, but it is also abused to surveil peaceful assemblies and profile people's social conduct. Various AI technologies are employed to analyze such highly revealing data, including to profile users and to predict behaviors. Now, left unregulated, social media monitoring leads to the kind of abuses observed in other forms of covert surveillance operations. However, according to the information we have, adequate national legal frameworks are largely missing in most countries. To conclude: we have seen that proponents of AI tend to overstate AI's capabilities as well as its cost-effectiveness. However, let's not forget, and I think the Rapporteur makes it very clear in his report, that AI makes mistakes. There is an inherent uncertainty in the fact that AI algorithms are probabilistic. Moreover, the relevance and accuracy of the data used are often questionable. Unrealistic expectations can lead to the deployment of AI tools that are not equipped to achieve the desired goals. Further, there need to be enhanced human rights safeguards throughout the AI life cycle in order to address the challenges I just mentioned. These safeguards are both costly and time-consuming, but they are necessary if we want AI to be human rights compliant. Let me, as Celi has just done, also mention three safeguards. First, as UN human rights bodies have already said, states should refrain from the use of AI technologies that are impossible to operate in compliance with international human rights law. The fact that AI is deployed for purposes of counterterrorism should not trump the limits and safeguards applicable to such technologies. In fact, given the enhanced human rights risks, it should lead to more stringent limits and controls. Second, modern data protection laws have quite well-developed standards for transparency and accountability, which must apply to AI systems. In this context, it is concerning that in many jurisdictions intelligence and law enforcement agencies are excluded from the provisions of data protection legislation. This is a gap that needs addressing if AI technologies are to be used in counterterrorism. Third, and lastly, in carrying out human rights due diligence prior to the deployment of AI systems, national authorities must include a privacy impact assessment and develop a privacy-by-design and by-default approach. Particularly in counterterrorism, measures should not be assessed in isolation, but considering the cumulative effects of interacting measures. I look forward to continuing this discussion. Thank you very much. [Speaker B] [2775.730s → 2782.370s]: Thanks so much, Tommaso. And now, our last discussant, Antonina Masalica from Article 19 Europe.
[Speaker F] [2782.450s → 3190.490s]: Thank you, Excellencies, colleagues, distinguished guests. It's an honour to speak on behalf of Article 19, a global organization defending freedom of expression and access to information. In the spirit of our efforts on access to information, it's always better to provide feedback on a document that you managed to read than not; despite the short period of time that we were given to review this paper, it was a fascinating read on my flight to New York. We welcome the Special Rapporteur's position paper on artificial intelligence and counterterrorism. This position paper confronts a very important question: how do we protect people from violence and terrorism without undermining the very rights and freedoms that define democratic societies? Artificial intelligence is now deeply woven into counterterrorism, from predictive policing to surveillance, biometrics, content moderation and border management. The paper rightly recognizes that artificial intelligence systems can profoundly affect fundamental rights, particularly the rights to privacy, equality, freedom of expression and access to information. Our shared task is to ensure that new technologies do not deepen repression under the guise of protection. I would like to highlight a few points in this regard. The first one is security with rights, not security versus rights. Counterterrorism is too often framed purely as a matter of national security. When this happens, oversight narrows, secrecy expands, and human rights considerations are sidelined. At Article 19, we urge states and international bodies to treat the governance of AI in counterterrorism as a human rights and democratic governance issue, not merely a security one. Freedom of expression and access to information are not obstacles to safety; they are its precondition. When societies can speak freely, debate policy and access truthful information, they become more resilient to violence and extremist narratives. When surveillance, censorship and opacity dominate, trust erodes, polarization deepens, and the very conditions conducive to terrorism are amplified. The second point is that civil society must have a seat at the table. AI governance cannot be left to intelligence agencies or private contractors operating behind closed doors. Civil society organizations, journalists, academics, technologists and human rights defenders must be included in shaping the frameworks for AI development, deployment and oversight. CSOs bring essential expertise in identifying rights risks, designing human rights impact assessments and ensuring affected communities have remedies. Without that participation, frameworks risk being built on narrow security logics and untested technological optimism. Civil society inclusion is the most reliable early warning mechanism we have. The third point is the dual-use nature of technology. We all recognize that terrorists exploit technology, including AI tools, encryption and online platforms, to propagate, recruit and coordinate. But these very technologies also enable journalists to report safely, activists to organize, and ordinary citizens to participate in public life. The dual-use nature of technology means that how states regulate it determines whether it becomes a tool of safety or a tool of oppression. Overreach and under-regulation are equally dangerous. The fourth point is inclusion and pluralism. Effective counterterrorism measures grounded in human rights require plural voices.
AI systems should be co-designed with affected communities, not imposed upon them. Transparency, consultation and participation are not bureaucratic luxuries; they are democratic safeguards. Also, like colleagues from the Office of the High Commissioner for Human Rights and from Privacy International, I would like to highlight the safeguards that Article 19 particularly welcomes in the paper. The core safeguards, in our opinion, are transparency, human oversight and remedies. Article 19 aligns with the position paper's call for robust safeguards: transparency and accountability by design. Governments should publish public registers of AI systems used for counterterrorism, including data sets, accuracy metrics and oversight mechanisms. Companies developing or supplying such systems must conduct and publish human rights due diligence throughout the life cycle of their products. Human review, not automation alone: automated filtering and flagging tools are context-blind and error-prone. Any restriction on speech must remain subject to lawfulness, necessity and proportionality tests, with meaningful human oversight before punitive action is taken. Accessible remedies: individuals wrongly targeted by algorithmic systems must have accessible, timely mechanisms for appeal and redress. And moratoria on high-risk systems: where AI systems cannot comply with human rights standards, such as untargeted facial recognition, predictive policing or content moderation, deployment should be suspended or prohibited until compliance is verifiable. To conclude: if AI-driven measures to counter terrorism silence dissent or automate discrimination, they do not defend society, they weaken it. Technology can save democracy only if democracy governs technology. And security without rights is not safety, it's surveillance. Let us ensure that AI frameworks in counterterrorism are built not by security logic alone, but through inclusive, transparent and rights-based governance. Thank you. [Speaker B] [3192.250s → 3296.720s]: Okay, thanks so much to all the panelists. Now, we've left about 15 minutes for questions and comments from the floor, so I'd invite you to think about posing a question to the panelists. We've also got some questions preloaded in advance from various people. One I might throw to the panel: in drafting this paper, we were predominantly addressing the need for state regulation of AI, and it was suggested by some stakeholders that we should make a recommendation in favor of a multilateral treaty regulating AI, including data protection, human rights and other aspects in line with the issues we addressed in the paper. Others thought that's a terrible idea in the current geopolitical environment, because you wouldn't get the treaty human rights lawyers would want, and they pointed, for example, to the Cybercrime Convention as quite a divisive convention, which, well, certainly OHCHR and my mandate don't like aspects of, whereas quite a few states seem quite keen on it. So is this the right environment to think about a kind of binding international standard, or is the time not right? Is it better to pursue the kind of soft law approach through, you know, the International Scientific Panel and the AI governance board, whatever it's called, and pursue it through General Assembly and Human Rights Council resolutions, that kind of, you know, softly nudging things in the right direction?
I'm going to dodge this question and pass it to the panelists. [Speaker D] [3312.080s → 3325.970s]: Well, thank you very much. Being that I'm the brave one. No, just to say very quickly, and thank you for the question and thank you for the engagement so far: I think, from a very practical perspective, when we talk about state regulation of AI, [Speaker A] [3325.970s → 3326.170s]: Right. [Speaker D] [3326.170s → 3421.520s]: which has been the core of your position paper, our starting point of thinking within CTED has always been that we need to understand how these tools are actually being used, right, by states and by government officials. And I think we still have a bit of a gap in terms of understanding the full extent of not only the possibilities out there, which might be more limited than we sometimes assume, but also the willingness and the bandwidth, right, with which these particular technologies are being used. I think when we get a bit more information on that space, we will be in a better position to address what the target area should be for further engaging Member States in coming up with regulation, because if you're thinking about a multilateral treaty, it would again be Member States. So we at CTED have also started to develop a bit of an engagement with oversight entities within countries that are mandated to look at the way in which intelligence, security and law enforcement operate, to see how they are seeing their own agencies operating. And there's still a long way to go in terms of internal information sharing by government authorities vis-à-vis their own internal oversight bodies. So I don't have a full-fledged answer. I just wanted to share the fact that we still need to understand a little bit more. And that's why I noted that there's a ton of engagement from civil society and experts, which I think is extremely helpful and relevant. We need to expand, and we also need to understand a little bit the position of the private sector, because there might be opportunities as well for self-regulation in some areas, but it won't happen without, you know, a good understanding of the safeguards that are applicable. [Speaker A] [3421.520s → 3422.040s]: Thank you. [Speaker B] [3422.200s → 3536.640s]: Thanks, Cecilia. And I should add, in writing this paper, one of the frustrations was that we had a lot of information on how counterterrorism might use this technology, but we didn't actually find too many really concrete examples of usage. And I mean, if it is being used, states wouldn't necessarily tell us, right? I think the regulation debate is a lot more advanced in the military context, because there's been a very, very vibrant and strong engagement by states on that for many years now, and some usages in recent conflicts have sharpened that debate. But a lot of this, from a human rights regulation standpoint, is quite speculative, right? And some feedback we had on an earlier draft of the paper was to say, you know, why don't you propose specific safeguards for each of the technologies that you address? And we ultimately had to decide it's too early to do that, because we don't know exactly how it's being used. The second issue, which I think you touched on a little bit as well, is, you know, there is this enormous digital divide in the world between who is developing and using this technology and who just doesn't have access to it for resource and knowledge reasons.
But of course, as we know, for example, from the work the mandate previously did on spyware, you know, this is developed in a select number of countries, but it very quickly finds its way into the hands of authoritarian governments all over the world, who then use it very quickly to smash civil society. And so I think this question of transfer of technology is going to be an absolutely critical one to contain the risks of this technology as it is brought out more and more into the wild, to whoever is willing to pay for it. Did anyone else want to come in on this first question on regulation? Yes, Tommaso. [Speaker E] [3541.200s → 3620.470s]: Briefly. It is challenging. I mean, PI, by the way, shares the concerns about the cybercrime treaty. And so I think to some extent we all had this quite recent experience of the difficulties of building an international treaty with strong human rights safeguards. So I assume you would have similar issues if you were to open up something as amorphous and, to some extent, undefined as AI. I think what is necessary now as a priority is pursuing your approach, which is developing some guidelines based strongly on applicable human rights law, but also maybe leveraging the capacity of the UN to push states to adopt the necessary legislation and the necessary safeguards. And once you have a baseline of states with good practices, let's be optimistic, then one can see how that can be brought up to the international level with something as ambitious as a treaty. Thank you. [Speaker B] [3621.110s → 3633.510s]: Thanks, Tommaso. And I mean, we mentioned the EU AI Act already. I think that middle level of regional organizations is another way you can potentially push forward positive standards. Celi. [Speaker A] [3635.680s → 3694.050s]: Just echoing that, I don't have a direct answer for you, except that maybe this is a difficult environment in which to push for human rights. But I think maybe we should ask ourselves: what is the aim of such a multilateral treaty? Do we need one, before we actually embark on doing one? It takes a lot of resources to come up with a multilateral treaty, and I think perhaps our energy and time could be better used enforcing international human rights law and using that as a minimum benchmark when you use artificial intelligence, whether in countering terrorism or in law enforcement generally. So perhaps we should take a step back and ask ourselves: do we really need this treaty, why do we need it, and what are the parameters? And we do have many treaties on international human rights law that can be used. [Speaker F] [3697.490s → 3794.110s]: Yes, thank you. I have a few thoughts on that. The first one, on the point that we don't really know how the technologies are applied: a lot of governments are now entertaining themselves with so-called foresight exercises. So maybe this is something that can also be done within the mandate of this particular mandate holder, to see what kind of possible developments can be foreseen in the most optimistic scenario and the less optimistic scenario, so to speak. In terms of leveraging the capacity of the United Nations, I agree with Tommaso here. And the overall framework of international human rights standards maybe offers us the option, for example within the UPR, to connect AI usage with the general obligations of states under international human rights standards.
That can also be factored in when it comes to the recommendations, with the newest technology developments and compounding risks. In terms of regulation, I think the EU is the area to look at and to follow actively, because in these years the first results of the implementation process of many instruments related to the new technologies are about to surface. I think that can give a lot of ideas in terms of what can be improved or what can be taken on board from the practice of the EU member states. [Speaker B] [3795.150s → 3822.770s]: Thank you. And I should add, on the regulation question, we had a question about what's the best forum to standardise or harmonize free speech, hate speech and dangerous speech legal benchmarks. But in this era of backsliding on free speech, even among historical friends of free speech over the last couple of years, I think now is probably a terrible time to enter into that debate. A question from Steven Siqueira from UNOCT. [Speaker B] [3827.700s → 4056.700s]: And thank you for the opportunity to comment on it. We'll certainly take a close look at the recommendations, as you've asked. We have a number of programs that help Member States to look at the human rights aspects of new technologies as they are addressing counterterrorism, and so we'd certainly get their feedback on your recommendations. My question for you: maybe you could expand a little bit on the consultations or discussions you had with some of the companies, because the advancement in these spaces is all being undertaken in the private sector, almost uniquely so in this era, and of course governments play catch-up. Regional organizations and international organizations, which have states as members, also then have to play catch-up. So the discussion really is at the level of the private sector. And who are the stakeholders for the private sector? Well, they're investors, they're venture capitalists, they're users. So I'm wondering aloud, and I'm wondering if you've put some thought into, how to address those stakeholder bodies with a view to more private sector-led guidelines that they bind themselves with and then hold themselves accountable to. You'd be very familiar with the GIFCT. That's a model that we work closely with. It has advantages, it has disadvantages, but maybe that's somewhere that would be productive. Thank you. Yeah, it's a great question. And we had a lot of engagement from states and civil society, much less from the private sector. I mean, we reached out but didn't have a huge amount of luck. I mean, we've had quite a bit of engagement in the past with Meta on online content moderation, for example, which is one area that is increasingly AI-powered. You know, industry-led self-regulation, I'm a pretty big skeptic of whether that's enough. I mean, it's an important piece, but from my experience as a human rights person over a long time, I think it rarely is enough. You know, whether it's prisons, private military and security contractors, or whatever is the issue, it doesn't get you far enough. On online content moderation, I think that's a really, really hard one, because if you go for minimum international standards, as we said, you might get a lowest-common-denominator approach, and that could be worse than what some of the good companies are currently doing. At the same time, companies are all over the map on moderation, and with the change of administration in the U.S.,
you know, it flipped pretty quickly in terms of the quality of that moderation. And this is an area which, again, has so many problematic human rights dimensions, because companies are often moderating for terrorism, which isn't defined. They're moderating for violent extremism, or just extremism, also not defined. And then, you know, there are all the forms of engagement, support, etc. with those concepts. And so it becomes such an amorphous area, which inevitably leads to overreach and, you know, preemptively pulling content which should remain there. I mean, you see it in the debates over just small things like 'from the river to the sea', right? You know, it's either terrorist or it's just a plural, happy, multinational future Middle East state. So I think we need to be state-led on regulation, but of course that must involve consulting the private sector also, so you know what's technically possible, because sometimes it's harder than you think. Yes, up the back. [Speaker G] [4065.170s → 4209.140s]: Davis. I am a UN ECOSOC CSO representative, but my background is also private sector, working with governments, youth and every stakeholder you can imagine. There are a couple of things I just wanted to share that I think are missing. I think we're missing a multi-stakeholder approach, because you not only need the private sector. State-led is great, but we need to actually also bring it out to the public and to public awareness. We also need to look at those who have and those who don't have, infrastructure-wise; all of these things come into play. If you're in a rural area and you don't have access, when there's maybe one child or a couple of people that have access to the Internet in that area, and they feel like we're not getting enough work, or we're not getting enough food, or whatever, there's going to be anger and resentment. And people can use that form of anger and resentment to go online towards extremism. Okay. We can then use generative AI to fake celebrities and other people and make them look like they are saying things that they're not. That's the thing that we really have to look at. I teach AI to artists and others around the world, and that's one of the deepfakes that we're looking at. And if they can do it with celebrities and others, they can also do it with heads of government, heads of cities, heads of anything, making the people in those countries think that they're saying something they're actually not. We need those safeguards in place. And also the use of AI with children: there are no safeguards in place. As we know, in certain cultures and countries, the extremism goes all the way down to the child, and they use children as a way of, you know, bombing and things like that. We have to have those safeguards in place too, because children are really connecting with AI, the more it goes into the brain. We just had people here in the last two months, if we look back on the web TV, talking about how AI affects the mind: our minds become atrophied because we're saying 'AI, do it for us', rather than learning how AI really works, how you really are supposed to prompt it. People are not thinking about that in rural countries. They're thinking about 'I need this for my business' or 'I need this to make money'. So we have to look at where extremism is meeting the incessant and desperate needs of people such that they will do anything. So I just wanted to leave it at that. Thank you. [Speaker B] [4209.780s → 4217.950s]: Thanks very much.
We'll take that one as a comment. Thank you. Any questions for the panelists? Yeah, Annabelle. [Speaker H] [4220.750s → 4221.630s]: Thank you very much. [Speaker D] [4221.790s → 4222.670s]: Question, comment? [Speaker H] [4222.830s → 4346.310s]: I'm not sure. I have a lot of thoughts about the discussion today, reflecting a little bit on what you said about regulation. Unfortunately, and you did raise it, we don't have a definition of terrorism. So if I'm now actually relying only on Member States, who, yes, do have and bear the responsibility of developing a legal framework, to what extent are they going to develop a legal framework that serves everyone's interest, and not only their own interest in a certain way? We're not able to collect any kind of evidence today from Member States openly on how they're using those technologies, while we know they're being used for military purposes, security purposes. So to what extent do I also want to leave it only to Member States? I'm not sure. So definitely, multi-stakeholder discussions and dynamics are really important, also in many ways. And I'm sorry to say that, with all the reflections today, we're a little bit late on those discussions. Like, AI in the CT space is done already. It's a done deal. So by the time we, the UN and Member States, the public sector, move forward on those questions, the train will have gone. It goes way too fast for us. And what I fear the most in the current context: you're seeing the teams, 'trust and safety' they were calling it, in the major tech companies before, who were actually the ones mitigating the risk and trying to protect the users, but today they're being completely downsized. They're being renamed, under banners such as 'Democracy for All', 'Tech for Society', really trying to reshift how security and AI is being developed right now within the private sector. How do we make sure that we keep up in this discussion and that we can properly target and protect human rights? But thank you, Celi, also, for raising that we have the tools, they exist. Let's use the laws that we already have; they're strong enough. Let's not develop, as we did for CT, a huge monster again that is potentially more risky than helpful. So yes, let's use our international laws. [Speaker B] [4347.110s → 4371.430s]: Great observations, and it reminds me of the development of new military weapons, right? Like, until states figure out what they can do with this and whether it's a net benefit for them, they're not willing to regulate, and it's only later, when they realize how terrible this stuff is, that they get on the regulation train. Okay, any further questions or comments before we close? [Speaker B] [4372.860s → 4375.020s]: Yeah, please, one more comment, I think. [Speaker F] [4375.020s → 4477.820s]: Yes, to your comment, if I may. I don't think we are that late, because we do have court cases in various countries where predictive AI deployment by various city administrations was effectively prohibited. And I'm based in the Netherlands, so those are the cases from there. There is the very recent case where Meta was struck down by one of the regional courts in the Netherlands and now needs to follow the regulations that are applicable within the European Union. So I don't think we're totally behind.
I think it's important to press on. And possibly, in relation to your question, sir, about the participation or involvement of private companies: I think it's important to send a message that they cannot exclude themselves artificially by developing so-called self-regulating standards. International human rights standards should be applicable to them, in light of the Global Digital Compact or, in general, how human rights standards are interpreted in terms of creating obligations for business entities. And also, I think it should be highlighted that content moderation is not the only problem here. Companies or social media platforms should be disincentivized from propagating content that can instigate people to participate in terrorist or violent activities. And I think this is one of the angles that should be pursued, not only in terms of freedom of expression but of course also counterterrorism measures. Thank you. [Speaker B] [4478.950s → 4620.190s]: Thank you. We will draw to a close now. A few quick announcements before we wind up. Firstly, definition was mentioned, and I've just released a call for inputs for my next report to the Human Rights Council in March next year, which will be on the definition of terrorism, violent extremism and extremism, whatever that one means. It aims to update and revise the Special Rapporteur's model definition of terrorism, which has been around since 2010, in light of new developments, including the need for exceptions, the question of state terrorism, and organized crime and terrorism. Jump on our website; there's a two-page outline of the issues we're interested in. So we're very keen to get submissions from as many states as possible. Secondly, I want to introduce everybody to my New York legal adviser, Lily, over at the back here, who masterminded this event but is also here permanently for us. So for member states in particular who haven't been able to meet our New York contact point, Lily's here and always available to discuss your interests. Finally, on Thursday we are having a 20th anniversary celebration of the mandate of the Special Rapporteur, and three of the four Special Rapporteurs over the last 20 years will be participating in that event. It's at the Australian Mission. Free lunch! Switzerland didn't give us food today, but Australia's outbidding them, so all welcome. If you want to come along, just register. It's a good opportunity to take stock of where we've come from and where we're going in human rights in counterterrorism, which has been, frankly, a pretty tough brief over the last 20 years, and it's not getting easier. Finally, my thanks to all of the panelists for engaging so seriously with our position paper. We will release the full paper very soon. We have your details from registration, so we will give you a heads-up, and it'll be on our website pretty soon. But if you do have any feedback on the preliminary recommendations, if you're able to give us any feedback in the next one to two weeks, that would be amazing and would help to strengthen the final paper. So thank you all for coming, and a round of applause for our panelists. Thank you. [Speaker G] [4625.790s → 4626.670s]: Thank you so much.