I’ve been giving a lot of thought over the past few years to how we perceive risk. What counts as an acceptable risk? What are the boundaries of risk perception, and how much variability is there between any two people? How much of that tolerance is innate versus cultural? Can you consent to risks you don’t understand?
Philosophers have been working on these questions for a while, and we don’t seem to have gotten much further than observing that some of us dislike change and some of us are more open to it. Grand causal theories of openness don’t become any more legible through regression analysis, either.
We have little coordination on acceptable risk at the individual, local, national, planetary, and species levels, just when we most need to understand whether we can collectively tolerate the significant social, economic, and political risks that come with new technologies.
We just don’t seem to have consensus on risk much beyond “don’t get someone killed.” Yelling “slow down” barely works with toddlers, so I don’t see how anyone considers it a viable tactic for coping with, let’s just say, artificial intelligence.
I don’t consider myself someone who takes a lot of unnecessary risks. I like to do my homework. I was never a Boy Scout, but I do subscribe to their motto: “Be Prepared.” If you asked my friends and family, though, they’d probably say I’m a risk taker. Who’s right? It’s clear that preparation and planning mitigate known risks. Beyond that, it’s not up to me. It’s probably not up to you either.