What does assigning value mean to you? How do you begin to investigate what is valuable? If someone asked you to value “object X” do you know what tools you would use first to make a measurement?
If I tell you determining value is a cultural problem, you may investigate the problem of value through religious or philosophical frameworks. If I tell you value is an artistic problem, you may use taste in finding value.
If I tell you that assigning value is primarily a computing problem, you may search for weightings, databases and referents to determine value.
So what happens when determining value has to account for multiple or even contradictory frameworks? Which framework assigns the ultimate value? And how do we align them?
Congratulations, you’ve now become an artificial intelligence alignment researcher. I bet you thought that required a doctorate, but it doesn’t.
It’s not an entirely intractable problem. The Industrial Revolution found ways to align competing frameworks. We assigned labor value and made currencies to facilitate the exchange of different goods.
Markets can, and do, spring up for all kinds of previously impossible-to-value things. Capitalism has done its best to make cultural value fungible and legible through agreed-upon prices. Sure, artisans and artists regularly complain that we arrive at the wrong values. But we don’t always agree on value.
Generally we’ve found that what can pay for itself survives and what can profit for others thrives.
Not all people are motivated by profit, but we are all motivated to survive. And so we contribute what we believe has value to each other and hope the frameworks of value that others hold will align with ours. The balance between the two has held humanity together for some time.
But deciding on value isn’t the same thing as something driving a profit, and we have to remember that truth. Between the gaps in the models of what we value is the epsilon of what cannot be calculated.
If you’d like to read a horror story about how assigning fungible value in a database can end up putting a price on something we humans generally don’t consider interchangeable at all, go read this piece on how public hospice care’s incentives have been perverted by private equity’s profit motive.
I don’t always agree with the piece’s author, Cory Doctorow. But I think he’s raising a powerful point about how we assign value when we overlay competing frameworks.
This is the true “AI Safety” risk. It’s not that a chatbot will become sentient and take over the world – it’s that the original artificial lifeform, the limited liability company, will use “AI” to accelerate its murderous shell-game until we can’t spot the trick.
If you aren’t familiar with Doctorow, he’s a powerful voice in right-to-repair circles, a classical hacker opposed to corporate oligopoly, and a bit of an anarcho-syndicalist in his preferred solutions.
I like markets more than governments for most things. More of us can contribute to markets than can contribute to specialist bureaucracies.
But we have assigned value to end-of-life care inside a convoluted system of profit motives and medical ethics, and it’s not the value most of us place on life.
And that’s going to happen a lot more as we get further and further abstracted away from the existing models of value that govern our lives. So remain skeptical when someone tells you they know what you value. How they assign value might be different from how you do.