Author: Joshua Ellul (University of Malta) and George Azzopardi (University of Groningen)
Date: 7-Oct-2025
Introduction
Ethereum, one of the most widely used blockchains, supports over a million transactions per day and often sees more than 200,000 new smart contracts deployed in a single day. Alongside this success, it has also become a target for scams, Ponzi schemes, phishing attempts, and fraudulent ICOs. Whilst pseudonymous identities make it hard to identify malicious actors, transparent and public transaction histories provide a mechanism to potentially identify malicious intent. Building trust in such systems requires tools that can automatically flag suspicious behaviour, and in this post we describe one such approach.
The Study
In a paper published in Expert Systems with Applications in 2020, Steven Farrugia, Joshua Ellul, and George Azzopardi examined whether illicit accounts could be detected solely from their transaction histories. They compiled a dataset of 4,681 Ethereum accounts: 2,179 identified as fraudulent by the Ethereum community and 2,502 randomly selected normal accounts. From these, 42 features were extracted, including transaction frequency, the time span of account activity, transaction values, and account balances. These features were used to train an XGBoost classifier, a gradient-boosted decision-tree model known for its accuracy and scalability.
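To make the setup concrete, the sketch below shows how such an account-level feature table could be fed to an XGBoost classifier. The file name, column names, and hyperparameters are illustrative assumptions for the sake of the example, not the paper's exact schema or tuned settings.

# Minimal sketch of the account-level classification setup described above.
# The CSV layout and feature names are assumptions, not the published
# dataset's exact schema.
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hypothetical file: one row per account, 'flag' = 1 for illicit, 0 for normal.
df = pd.read_csv("ethereum_accounts.csv")

X = df.drop(columns=["address", "flag"])   # transaction-derived features
y = df["flag"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = XGBClassifier(
    n_estimators=300,       # illustrative hyperparameters, not the paper's tuned values
    max_depth=4,
    learning_rate=0.1,
    eval_metric="logloss",
)
model.fit(X_train, y_train)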
Findings
The model achieved an accuracy of 96.3% and an AUC of 0.994, demonstrating that transaction histories carry strong signals of illicit behaviour. Fraudulent accounts showed distinctive patterns: they were typically active for only short bursts of time, often ended up with drained balances, and received transactions that differed in value compared to those of normal accounts. These findings suggest that many illicit accounts are created for narrow, short-term schemes and then abandoned.
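For readers who want to run this kind of evaluation themselves, the snippet below continues the sketch above and computes the same two metrics, accuracy and area under the ROC curve, then lists the most influential features. It illustrates the general evaluation procedure rather than the authors' exact pipeline.

# Evaluate the classifier from the previous sketch with accuracy and ROC AUC.
from sklearn.metrics import accuracy_score, roc_auc_score

y_pred = model.predict(X_test)
y_score = model.predict_proba(X_test)[:, 1]   # probability of the illicit class

print(f"Accuracy: {accuracy_score(y_test, y_pred):.3f}")
print(f"AUC:      {roc_auc_score(y_test, y_score):.3f}")

# Feature importances hint at which transaction-history signals drive the
# predictions (e.g. activity span, received-value statistics).
importances = sorted(
    zip(X.columns, model.feature_importances_), key=lambda kv: kv[1], reverse=True
)
for name, score in importances[:5]:
    print(name, round(score, 3))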
Implications for Digital Trust
The relevance for the GLITSS community is clear. Automated approaches of this kind can support compliance with anti-money-laundering obligations, assist investigators in flagging suspicious accounts, and provide policymakers with evidence on the feasibility of technical safeguards. By releasing their dataset publicly, the authors have also provided a benchmark for future work and collaboration across disciplines.
Looking Ahead
While the account-level approach is highly effective, it cannot detect fraudulent behaviour hidden inside smart contracts. Some malicious contracts are deliberately designed to evade detection. Future research should therefore combine account-level monitoring with analysis of smart contract code and internal blockchain operations. This combined approach would further enhance our ability to ensure trust and transparency in blockchain ecosystems.
Reference:
Farrugia, S., Ellul, J., & Azzopardi, G. (2020). Detection of illicit accounts over the Ethereum blockchain. Expert Systems with Applications, 150, 113318.