Categories: Preparedness, Startups

Day 1783 and Good Days In Bad Times

I have spent a lot of time in various states of concern, sadness and frustration this year. Which is too bad, as so many incredible things have happened to me. We passed a right to compute law. Valar Atomics took “accelerate” way more seriously than most.

It’s hard to balance knowing the future won’t be anything like the past while still having to make decisions with the past as the only data you’ve got. Engaging in governance and investing in energy seem like sensible ways of approaching a strange future. Organizing energy is civilization 101 stuff.

I can predict a world with increasing chaos, but how it will affect demand for things like energy, compute and decentralization is a directional bet. You know it’s coming, but how and when? And the downsides are hard to consider. Nobody ever thinks the entropy will apply to them, but it’s already begun.

Every time future shock gets me, I’m surprised I’m managing an imitation of Cayce Pollard at all. I’m practically a poster child for “sensible takes on various concerning challenges” as I get asked about my eccentric revealed preferences.

The Fourth Turning is coming about and we aren’t ready. I use shorthand like the Churn, elite overproduction, The Sort and other minor terminologies and schools of thought to signal to others. I understand this to be my best available way to signal. But who knows; as humans retreat from shared networks, it may not stay that way.

Categories: Culture, Medical, Politics

Day 948 and Assigning Value

What does assigning value mean to you? How do you begin to investigate what is valuable? If someone asked you to value “object X” do you know what tools you would use first to make a measurement?

If I tell you determining value is a cultural problem, you may investigate the problem of value through religious or philosophical frameworks. If I tell you value is an artistic problem, you may use taste in finding value.

If I tell you that assigning value is primarily a computing problem, you may search for weightings, databases and referents to determine value.

So what happens when determining value has to account for multiple or even contradictory frameworks? Which framework assigns the ultimate value? And how do we align them?

Congratulations, you’ve now become an artificial intelligence alignment researcher. I bet you thought that required a doctorate, but it doesn’t.

It’s not an entirely intractable problem. The Industrial Revolution found ways to align competing frameworks. We assigned labor value and made currencies to facilitate the exchange of different goods.

Markets can, and do, spring up for all kinds of previously impossible-to-value things. Capitalism has done its best to make cultural value fungible and legible as an agreed-upon value. Sure, artisans and artists complain that we conclude incorrect values regularly. But we don’t always agree on value.

Generally we’ve found that what can pay for itself survives and what can profit for others thrives.

Not all people are motivated by profit, but we are all motivated to survive. And so we contribute what we believe has value to each other and hope the frameworks of value that others have will align with ours. The balance between the two has held humanity together for some time.

But deciding on value isn’t the same thing as a thing driving a profit and we have to remember that truth. Between the gaps in the models of what we value is the epsilon of what cannot be calculated.

If you’d like to read a horror story about how assigning fungible value in a database can end up putting a price on something we humans generally don’t consider interchangeable at all, go read this piece on how public hospice care’s incentives have been perverted by the private equity profit motive.

I don’t always agree with the author of the piece, Cory Doctorow. But I think he’s raising a powerful point about how we assign value when we overlay competing frameworks.

This is the true “AI Safety” risk. It’s not that a chatbot will become sentient and take over the world – it’s that the original artificial lifeform, the limited liability company, will use “AI” to accelerate its murderous shell-game until we can’t spot the trick.

If you aren’t familiar with Doctorow, he’s a powerful voice in right to repair circles, a classical hacker opposed to corporate oligopoly, and a bit of an anarcho-syndicalist in his preferred solutions.

I like markets more than governments for most things. More of us can contribute to markets than can contribute to specialist bureaucracies.

But we have assigned value to end-of-life care inside a convoluted system of profit motives and medical ethics, and it’s not the value most of us place on life.

And that’s going to happen a lot more as we get further and further abstracted away from the existing models of value that govern our lives. So remain skeptical when someone tells you that they know what you value. How they assign value might be different from how you do.