Designing criminal investigation workflows with AI-assisted evidence processing
In partnership with Universidade de Fortaleza and the Public Ministry of Ceará, I led end-to-end design of a portal that automates evidence analysis in criminal investigations.
The scale and precision challenge
Criminal prosecutors at the Public Ministry of Ceará manually reviewed thousands of digital evidence files per investigation: WhatsApp conversations, photos, videos, audio messages, social media content. The process was slow (weeks per investigation), error-prone (critical evidence was missed), and failed to surface connections between evidence pieces. Investigations needed automation without compromising the legal chain of custody or admissibility in court.
Discovery with critical stakeholders
Mapped evidence review workflows through structured interviews with prosecutors, IT staff, and legal experts. Understood non-negotiable constraints: complete audit trail required by law, confidentiality at every step, traceability of all decisions, and preservation of evidence integrity. Every system action needed documentation showing who did what, when, and why (this wasn't bureaucracy, it was a legal requirement).
Identified bottlenecks: manual message reading (thousands per case), photo review for face recognition, audio transcription, and pattern identification across different evidence types. Prosecutors spent investigation time on mechanical tasks rather than legal analysis and strategy.
AI workflows with explicit error handling
Designed interfaces for two AI-assisted processes.
Natural Language Processing (Text Analysis): Automated message analysis identified narrative patterns, temporal sequences, participant relationships, and key topics. Interface showed AI confidence scores for every finding ("85% confidence this message refers to illicit transaction"). Prosecutors reviewed and validated AI interpretations before incorporating them into the case. Critical decision: never hide uncertainty. Show confidence levels, flag ambiguous interpretations, and require human validation at decision points.
Facial Recognition (Recurring Face Detection): System analyzed photo collections identifying recurring faces, suggesting individual identification, and mapping relationships between people. Interface displayed match confidence percentages. Prosecutors manually verified matches, assigned identities, and confirmed relationships. Facial recognition acted as investigative aid, not autonomous decision-maker.
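The validation gate behind both workflows can be sketched as a small data model. This is an illustrative assumption, not the portal's actual schema: the names (`Finding`, `ReviewStatus`) and the 0.70 ambiguity threshold are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"        # awaiting prosecutor review
    CONFIRMED = "confirmed"    # prosecutor validated the AI interpretation
    REJECTED = "rejected"      # prosecutor overruled the AI

AMBIGUITY_THRESHOLD = 0.70  # assumed cutoff: below this, force detailed review

@dataclass
class Finding:
    evidence_id: str
    interpretation: str   # e.g. "message refers to illicit transaction"
    confidence: float     # model confidence, always shown to the user
    status: ReviewStatus = ReviewStatus.PENDING

    @property
    def is_ambiguous(self) -> bool:
        """Ambiguous findings are flagged and routed to detailed review."""
        return self.confidence < AMBIGUITY_THRESHOLD

    def enters_case_record(self) -> bool:
        """Nothing reaches the official record without human confirmation."""
        return self.status is ReviewStatus.CONFIRMED

finding = Finding("msg-0412", "refers to illicit transaction", 0.85)
assert not finding.enters_case_record()   # AI output alone is never enough
finding.status = ReviewStatus.CONFIRMED   # prosecutor validates
assert finding.enters_case_record()
```

The same gate applies to text findings and face matches alike: confidence is displayed, not hidden, and only a confirmed status lets a finding into the case record.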
Human-in-the-loop as core design principle
AI provided analysis speed, humans provided legal judgment. Every automated finding required prosecutor validation before entering official case record. Interface design made validation efficient without encouraging rubber-stamping: clear presentation of evidence supporting AI conclusions, easy confirmation for obvious matches, detailed review interfaces for ambiguous cases.
Designed explicit error feedback loops: prosecutors could flag AI mistakes, provide corrections, and system learned from these corrections. This maintained prosecutor control while improving AI accuracy over time.
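One minimal sketch of that feedback loop, under assumed names (`Correction`, `flag_mistake`): each prosecutor override is stored with its rationale, and the accumulated corrections later feed model retraining.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Correction:
    finding_id: str
    original_label: str     # what the AI claimed
    corrected_label: str    # what the prosecutor determined
    prosecutor_id: str
    rationale: str
    created_at: datetime

corrections: list[Correction] = []  # queue consumed by a retraining job

def flag_mistake(finding_id, original, corrected, prosecutor_id, rationale):
    """Record a prosecutor override; the prosecutor stays in control."""
    c = Correction(finding_id, original, corrected, prosecutor_id, rationale,
                   datetime.now(timezone.utc))
    corrections.append(c)
    return c

flag_mistake("msg-0412", "illicit transaction", "legitimate sale",
             "prosecutor-07", "Context shows a registered vehicle sale.")
assert corrections[0].corrected_label == "legitimate sale"
```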
Audit and traceability architecture
Every action generated an audit log entry: user identity, timestamp, action type, affected evidence items, and decision rationale. Prosecutors could annotate why they accepted or rejected AI suggestions. Complete investigation history became a legally defensible record. Audit interface let supervisors review the investigation process, verify that proper procedures were followed, and identify where mistakes occurred if cases were challenged.
Balanced comprehensive logging against usability (audit happened automatically without requiring prosecutors to manually document every click). Interface surfaced audit capabilities when needed (supervisor review, investigation verification) while staying invisible during routine work.
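An audit trail of this kind is often implemented as an append-only, hash-chained log; the sketch below shows the idea with assumed structure (the portal's real storage and fields are not specified in this case study). Chaining each entry's hash to the previous one makes after-the-fact tampering detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log: who did what, when, to which evidence, and why."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, user, action, evidence_ids, rationale=""):
        # Logging happens automatically on every action; the user never
        # has to document clicks manually.
        entry = {
            "user": user,
            "action": action,
            "evidence": evidence_ids,
            "rationale": rationale,
            "at": datetime.now(timezone.utc).isoformat(),
            "prev": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(payload).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later hash."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if prev != e["hash"]:
                return False
        return True

log = AuditLog()
log.record("prosecutor-07", "accept_ai_finding", ["msg-0412"],
           "Corroborated by bank records.")
assert log.verify()
log._entries[0]["rationale"] = "edited"   # simulated tampering
assert not log.verify()
```

The supervisor-facing audit interface then only needs to read this log; routine work never touches it directly.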
Accessibility and ethics from the start
Applied WCAG accessibility guidelines to ensure the system was usable by prosecutors with varying abilities. Implemented keyboard navigation, screen reader compatibility, sufficient color contrast, and resizable text. Privacy protection went beyond the legal minimum: end-to-end encryption, minimal data retention, access logs, and anonymization wherever possible without compromising investigative utility.
Ethical considerations shaped every design decision: How do we prevent bias in AI training data? How do we ensure fair treatment regardless of demographic factors? How do we balance investigation effectiveness against privacy rights? These questions didn't have easy answers but needed explicit consideration.
Security-first interface design
Worked with cybersecurity requirements from start. Designed authentication flows, role-based access controls, secure file handling, and encrypted communications. Every interaction considered security implications: Could this expose sensitive data? Could this action be spoofed? How do we verify user identity at critical moments?
Created security-focused empty states and error messages that informed users without revealing system internals to potential attackers. Designed graceful degradation when security measures triggered (session timeouts, suspicious activity detection).
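The error-message principle can be sketched as a mapping from internal failure codes to safe user-facing text (codes and wording are assumptions for illustration):

```python
# Map internal failures to messages that inform the user without leaking
# system internals to a potential attacker.
SAFE_MESSAGES = {
    # Same text for both: never reveal whether an account exists.
    "bad_password": "Sign-in failed. Check your credentials.",
    "unknown_user": "Sign-in failed. Check your credentials.",
    "session_expired": "Your session ended for security. Please sign in again.",
}

def user_message(internal_code: str) -> str:
    # Unknown codes fall back to a generic message, never a stack trace.
    return SAFE_MESSAGES.get(internal_code, "Something went wrong. Try again.")

assert user_message("bad_password") == user_message("unknown_user")
```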
Validation with real prosecutors
Built high-fidelity interactive prototypes demonstrating complete investigation workflows: evidence upload, AI analysis review, validation and annotation, relationship mapping, and report generation. Conducted usability testing with actual prosecutors using realistic (anonymized) case materials. Validated terminology matched legal language, workflow matched investigation practices, and interface supported legal reasoning rather than obstructing it.
Handoff for secure implementation
Delivered comprehensive handoff documentation: detailed Figma specifications, interaction flows, state diagrams, field validation rules, permission matrices, security requirements, and accessibility compliance checklist. Created QA/UAT scripts with special attention to security testing: authentication bypass attempts, unauthorized data access, audit log integrity, and encrypted transmission verification.
Balancing innovation and legal rigor
The project operated at the intersection of cutting-edge AI technology and a conservative legal system. Success required respecting both: leveraging AI capabilities while maintaining legal standards. Every design decision navigated this tension. The interface couldn't just be usable (it had to be legally defensible).