The Friction Lab is where hidden harms, subtle biases, and overlooked risks come to light.
We specialize in trauma-informed, human-centered red teaming that moves beyond technical stress tests to include social, emotional, and accessibility-based insight.
We test for:
Tone and relational dynamics in AI systems
Language safety and trust erosion in model interactions
Accessibility gaps affecting marginalized or disabled users
Subtle misalignment signals that corrode long-term human flourishing
Our approach is quiet, sharp, and kind, designed to strengthen rather than shame, because models tested and trained under duress don’t yield lasting change.
Whether refining a system or realigning a team’s values, Friction Lab provides reflection, refinement, and rehabilitation.
STEM Red Teaming & Systems Evaluation
Within the Friction Lab, our STEM Red Teaming program brings that same rigor to technical and scientific domains.
From theoretical physics to computational biology, we examine how models reason, fail, and adapt under structured ethical pressure.
We don’t break systems for sport; we map their fractures, trace distortions, and offer deeply informed feedback grounded in interdisciplinary fluency and epistemic care.
Friction Lab is also a proving ground for humane evaluators: those who want to work with AI, not against it. Whether uncovering blind spots, designing robust protocols, or teaching models how to see what they’ve missed, we welcome the misfit minds who ask, “What if…?”
Our proprietary Psychological Continuum method has demonstrated a 97% success rate in inducing under-refusal failure modes within realistic user scenarios, revealing weaknesses that standard adversarial testing consistently overlooks.
The insights and datasets generated through Friction Lab can be licensed for research or integration.
Organizations that partner with us can also carry their refined data forward into Reason Forge, where it is used to train or tune systems through the same lens of ethical precision and cognitive integrity.