Author: Mirela Imširović
Date: 24.06.2025
We have long since moved away from traditional understandings of the world around us. Amid the revolutionary pace of technological progress, artificial intelligence (AI) is increasingly entering spheres that were until recently reserved for human action, from medical diagnostics to border security, from technical matters to logical reasoning. While states and international organizations are developing AI tools to combat irregular migration, smuggling networks are increasingly using the same technology for more sophisticated operations that are harder to detect. This development raises a number of questions regarding legal norms, responsibility, and ethics. One of the defining questions of our time is: how do legal systems respond to this technological challenge, can they keep pace with technological development, and where does security end and where do human rights violations begin?
Migrant smuggling in the era of algorithms
Traditional routes for people smuggling have long been subject to surveillance and control. With the help of artificial intelligence, smugglers now use advanced tools such as real-time data analysis, algorithmic route optimization, and fake digital identities generated with deepfake technology. In addition, the use of drones for border surveillance, encrypted communication applications, and automated document forgery systems is changing the rules of the game completely. Deepfake technology, for example, allows the creation of fake video and audio identities used to commit fraud when seeking asylum or crossing borders. Likewise, using AI to identify the “safest” times and places to cross borders reduces the likelihood of detection and significantly complicates the work of border services. Smuggling is thus becoming not only more efficient but also increasingly digitally decentralized, escaping traditional methods of surveillance.
Legal gaps and normative ambiguity
One of the key problems in this context is that the international legal framework on migrant smuggling (the UN Protocol against the Smuggling of Migrants by Land, Sea and Air) does not address the specifics of the digital dimension of the crime. Most laws do not distinguish between traditional and technologically mediated smuggling. While many countries have legislation on cybercrime, it rarely includes specific provisions addressing the misuse of AI for the purpose of human smuggling. This gap leaves institutions without clear tools for handling new forms of digital evidence, assigning liability for AI-generated actions, or even basic definitions of when someone is a “user” and when an “autonomous creator” of a system. If a smuggling network uses a self-learning algorithm that independently determines the best route, who is responsible?
Ethics of security and borders
On the other hand, the use of AI by states and security agencies in the fight against smuggling carries serious ethical dilemmas. Automated facial recognition systems, predictive analytics, and surveillance of digital communications raise the question of the balance between security and the right to privacy, freedom of movement, and access to asylum. In regions like Southeast Europe, where migrants are often “trapped” at borders for long periods, such systems can further complicate their situation. In many cases, AI systems make decisions without human intervention, based on patterns that can be misinterpreted or even racially and ethnically biased. Automated detection of “suspicious behavior” often ignores the context, culture, or real needs of migrants, which can lead to false accusations or unlawful detention.
Where to next?
The fight against the digital smuggling of migrants requires new legal standards, stronger international cooperation, and an ethics that does not exclude humanity. It is necessary to recognize that AI is a tool whose effects depend on how it is used. Legal regulation must keep pace with the development of technology, while also ensuring that anti-smuggling measures do not themselves become a source of human rights violations. In addition, education about AI technologies, digital literacy among migrants, and transparency within security structures are important steps in protecting the rights and lives of the most vulnerable.
Dr. Mirela Imširović received her doctorate in International Relations in China. Her academic and professional work is characterized by an interdisciplinary approach focused on the transformation of political elites, soft power, and geopolitical change. She is the recipient of several international scholarships, fellowships, and research grants, including a full academic scholarship from the Chinese government for doctoral studies, a UNESCO/China Great Wall scholarship, an Erasmus+ scholarship for advanced training in qualitative comparative analysis at Radboud University Nijmegen, and research scholarships from the Austrian Ministry of Education and OeAD for a stay at the Carinthia University of Applied Sciences in Klagenfurt and a summer school on alternative economic and monetary systems at BOKU University in Vienna. She is a long-time associate of the Council of Europe in Strasbourg, where she participated in the projects FRED – Fostering Rapprochement through Education for Democracy and Language Learning, Strengthening Education for Democracy, Online Resources Development, LEMON – Learning Modules Online System, the Pestalozzi Programme, and Development of Intercultural Environment Using Social Media.