Bioethics to AI Ethics Inheritance
AI ethics borrows heavily from bioethics, but the translation isn’t clean.
The foundational principles of beneficence, non-maleficence, autonomy, and justice come from the Belmont Report (1979) and Beauchamp and Childress's principlism, frameworks developed for human subjects research and clinical medicine. These concepts assume a relationship between human researchers or clinicians and human subjects, with clear lines of agency and harm.
AI systems break these assumptions. Harms are distributed and statistical. Agency is diffused across developers, deployers, and users. “Consent” means something different when algorithms make decisions at scale without individual interaction. The borrowed vocabulary sounds familiar but maps imperfectly to the actual dynamics of algorithmic systems.
This inheritance creates a false sense of ethical maturity: the concepts feel established because they have worked elsewhere, which obscures how much adaptation they still require.
Related: 05-atom—ethics-principle-proliferation, 05-atom—human-agency-oversight