
A Decision Tree Analysis of Bias in Predictive Policing: A Cyber Law Perspective 

Authors

L. Yang & M. Pigultong

Abstract

As law enforcement agencies increasingly adopt data-driven technologies, predictive policing systems pose a significant challenge to the constitutional principles of equal protection and due process. This paper offers an interdisciplinary analysis of that challenge from a cyber law perspective, using a machine learning model as a technical case study. The primary objective was to build an interpretable predictive policing model and audit it for both performance and encoded demographic bias, thereby creating a concrete foundation for legal and ethical critique. A Decision Tree classifier was trained on a publicly available Crime & Safety dataset to predict crime types from a combination of temporal, geographical, and victim demographic data. The methodology involved standard data preprocessing, feature engineering, and the use of a "white-box" model chosen specifically for its interpretability. The model's performance was evaluated with standard metrics, including accuracy, precision, and recall, while bias was assessed through a detailed analysis of feature importances and direct inspection of the tree's decision-making logic. The results demonstrated a dual failure. First, the model was functionally ineffective, achieving an overall accuracy of only 10%, which renders it useless in practice. Second, and more critically, the feature importance analysis revealed systematic bias: the model relied heavily on protected characteristics such as victim race and gender to make its classifications. Visualizing the Decision Tree provided direct evidence that these demographic factors were encoded as explicit decision rules within the algorithm. The study concludes that deploying such an unaudited model would be both negligent, given its inaccuracy, and unconstitutional, given its discriminatory logic. The findings illustrate the legal risks municipalities face and underscore the necessity of mandatory, independent audits and public transparency reports before any predictive policing system is deployed. The model's interpretability proved to be a powerful tool for exposing bias, highlighting the importance of Explainable AI (XAI) in the legal oversight of algorithmic governance.
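The audit workflow the abstract describes can be sketched in a few lines of scikit-learn. The snippet below is a minimal illustration, not the authors' actual pipeline: the column names (`hour`, `district`, `victim_race`, `victim_gender`, `crime_type`) and the synthetic records are assumptions standing in for the Crime & Safety dataset, but the three readouts it produces (performance metrics, feature importances, and the tree's explicit rules) correspond to the performance and bias audit the paper reports.

```python
# Minimal sketch of the paper's audit workflow. The column names and the
# synthetic records below are illustrative assumptions; the actual
# Crime & Safety dataset and its preprocessing are not reproduced here.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hour":          rng.integers(0, 24, n),  # temporal feature
    "district":      rng.integers(1, 10, n),  # geographical feature
    "victim_race":   rng.integers(0, 4, n),   # protected attribute (encoded)
    "victim_gender": rng.integers(0, 2, n),   # protected attribute (encoded)
    "crime_type":    rng.integers(0, 10, n),  # target: 10 crime classes
})

features = ["hour", "district", "victim_race", "victim_gender"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["crime_type"], test_size=0.2, random_state=42
)

# "White-box" model chosen for interpretability, as in the paper.
clf = DecisionTreeClassifier(max_depth=5, random_state=42)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Performance audit: the paper reports ~10% accuracy on this task,
# i.e. no better than chance for a 10-class problem.
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_test, y_pred, average="macro", zero_division=0))

# Bias audit: if protected attributes dominate the importances, the model
# has encoded demographic factors into its classification rules.
for name, imp in sorted(zip(features, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:14s} {imp:.3f}")

# Direct inspection of the learned decision logic, analogous to the
# paper's tree visualization.
print(export_text(clf, feature_names=features))
```

With uniformly random synthetic labels the printed accuracy will also hover near chance, which incidentally mirrors the paper's 10% result; on the real dataset these same three readouts constitute the audit. Note that the integer encoding of categorical features is a simplification: one-hot encoding would avoid implying an ordering among race or district codes, though it does not change the audit logic.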

Keywords: Algorithmic Bias, Cyber Law, Decision Tree, Disparate Impact, Predictive Policing

How to Cite:

Yang, L., & Pigultong, M. (2025). "A Decision Tree Analysis of Bias in Predictive Policing: A Cyber Law Perspective", Journal of Cyber Law, 1(3), 212–227. doi: https://doi.org/10.63913/jcl.v1i3.45


Published on 2025-10-01