Every system encodes values — whether admitted or not.
Key Takeaways:
- The Bias of the Coder: The creator's worldview is embedded in the code.
- The Bias of the Data: The history of the world is in the data (good and bad).
- The Bias of the Metric: What you measure, you encourage.
The Architect's Bias
A building directs how you walk. A hallway says "Go here." A wall says "Stop." Software is a building for the mind. It directs how you think.
Algorithms are Opinions
An algorithm that ranks "Most Liked" comments at the top encodes an opinion: popularity equals quality. That opinion might be wrong. An algorithm that ranks the longest comments at the top encodes a different opinion.
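To make this concrete, here is a minimal sketch (the comment data and field names are invented for illustration): the same comments, ranked under two different "opinions." The only thing that changes is the sort key, and that key *is* the opinion.

```python
# Hypothetical comments: each has text and a like count.
comments = [
    {"text": "ok", "likes": 120},
    {"text": "a long, careful reply with sources and nuance " * 5, "likes": 3},
    {"text": "first!", "likes": 45},
]

# Opinion 1: "Popularity = Quality" -- sort by likes.
by_likes = sorted(comments, key=lambda c: c["likes"], reverse=True)

# Opinion 2: "Effort = Quality" -- sort by length.
by_length = sorted(comments, key=lambda c: len(c["text"]), reverse=True)

# Same input, different winners: the ranking rule decides who gets seen.
print(by_likes[0]["likes"])          # 120 (the short "ok" comment wins)
print(by_length[0]["likes"])         # 3   (the long reply wins)
```

Neither ordering is neutral; each promotes a different kind of comment, and therefore a different kind of conversation.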
See the Walls
Don't just walk down the hall. See the walls. Ask why they were built there. And if needed, grab a sledgehammer.
Playbook
- The 'Value' Inspection: Ask 'What does this system value?' (e.g., Engagement? Truth? Speed?)
- The 'Counter-Weight': If the system biases toward speed, build a human process that biases toward care.
- The 'Open' Question: Constantly question the 'default' settings.
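The 'Open' Question is easy to sketch in code. Defaults are value choices that most users never see. In this hypothetical config (all names invented), the out-of-the-box feed quietly optimizes for engagement; naming the values forces a deliberate choice.

```python
from dataclasses import dataclass

@dataclass
class FeedConfig:
    # These defaults are a value choice, not a neutral fact:
    # untouched, this feed optimizes for engagement.
    rank_by: str = "engagement"
    autoplay: bool = True

default_feed = FeedConfig()  # inherits the coder's values silently
deliberate_feed = FeedConfig(rank_by="recency", autoplay=False)  # explicit choice

print(default_feed.rank_by)     # engagement
print(deliberate_feed.rank_by)  # recency
```

Every user who never opens the settings page lives inside `default_feed`. Questioning the default means asking who chose `"engagement"`, and why.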
Common Pitfalls
- Trusting the Default: Assuming the standard setting is the 'right' one.
- Ignoring the Invisible: Not seeing the way the UI guides your hand.
- Tech-Solutionism: Thinking every moral problem has a code solution.
Metrics to Track
- System Fairness
- User Agency
- Diversity of Outcomes
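"System Fairness" stays a slogan until you pick a measurement. One common starting point (a sketch only; the groups and outcomes below are toy data, and demographic parity is just one of several competing fairness definitions) is the demographic-parity gap: the difference in positive-outcome rates between groups.

```python
# Toy data: (group, outcome) pairs, where outcome 1 = positive decision.
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(group):
    vals = [y for g, y in outcomes if g == group]
    return sum(vals) / len(vals)

# Demographic-parity gap: 0.0 means equal rates; larger means more disparity.
gap = abs(positive_rate("group_a") - positive_rate("group_b"))
print(round(gap, 2))  # 0.5  (0.75 for group_a vs 0.25 for group_b)
```

Tracking a number like this over time is one way to turn "is the system fair?" from a debate into a dashboard, while remembering that the choice of metric is itself a value choice.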
FAQ
Can we build neutral AI?
No. Neutrality is a value choice (choosing not to choose). Better to be explicit about your values.
Who decides the values?
You do. Or the engineer in Silicon Valley does. I suggest you do.