Categories
Emotional Work Medical

Day 1865 and Letting Things Fester

I let something fester for far too long. A family member had some health troubles that were not immediately threatening and I didn’t want to push them. They promised to see to it after a lengthy set of other issues were resolved.

Well, now the list is finished, or at least that is the rationalization we are all making, as it’s gone too far to be left alone. And it has to be seen to with surgery.

Now they are young and healthy, and the damage can be undone with a little science, but I can’t help but feel I failed them. I knew they were leaving it to fester, but the first rule of medical ethics is informed consent. The patient chooses, even if you think you know better. That goes for family just as much as doctors.

And so here I am, feeling guilty that I knew they were putting it off over things I was partially responsible for resolving. They kept pushing it off, citing this and that needing to be done first.

Now budget was an oft-cited reason, and I aided on that to some degree, but it was really about a whole tangle of issues and managing till it was unendurable. And I don’t control their endurance or capacity to tolerate discomfort.

I know I couldn’t have done anything to force the issue, especially when the pride of an individual is concerned, but I still feel like shit about it.

Why couldn’t I have pushed the other issues and projects forward to remove the excuses? Why wasn’t I more forceful in insisting they get it looked at sooner?

You know how guilt works when you have some responsibility but no ultimate say in the doing of the deed.

Not only did they let it fester but now it will fester with me as I try to forgive myself for something I couldn’t have changed. The body is sovereign and it wasn’t mine so I better let it go and help them recover.

Categories
Culture Medical Politics

Day 948 and Assigning Value

What does assigning value mean to you? How do you begin to investigate what is valuable? If someone asked you to value “object X” do you know what tools you would use first to make a measurement?

If I tell you determining value is a cultural problem, you may investigate the problem of value through religious or philosophical frameworks. If I tell you value is an artistic problem, you may use taste in finding value.

If I tell you that assigning value is primarily a computing problem, you may search for weightings, databases and referents to determine value.

So what happens when determining value has to account for multiple or even contradictory frameworks? Which framework assigns the ultimate value? And how do we align them?

Congratulations, you’ve now become an artificial intelligence alignment researcher. I bet you thought that required a doctorate, but it doesn’t.

It’s not an entirely intractable problem. The Industrial Revolution found ways to align competing frameworks. We assigned labor value and made currencies to facilitate the exchange of different goods.

Markets can, and do, spring up for all kinds of previously impossible-to-value things. Capitalism has done its best to make cultural value fungible and legible to an agreed-upon price. Sure, artisans and artists complain we conclude incorrect values regularly. But we don’t always agree on value.

Generally we’ve found that what can pay for itself survives and what can profit for others thrives.

Not all people are motivated by profit, but we are all motivated to survive. And so we contribute what we believe has value to each other and hope the frameworks of value that others have will align with ours. The balance between the two has held humanity together for some time.

But deciding on value isn’t the same thing as driving a profit, and we have to remember that truth. Between the gaps in the models of what we value is the epsilon of what cannot be calculated.

If you’d like to read a horror story on how assigning fungible value in a database can end up assigning a value to something we humans generally don’t consider interchangeable at all, then I’d go read this piece on how public hospice care’s incentives have been perverted by private equity profit motive.

I don’t always agree with the author of the piece Cory Doctorow. But I think he’s raising a powerful point on how we are assigning value when we overlay competing frameworks.

This is the true “AI Safety” risk. It’s not that a chatbot will become sentient and take over the world – it’s that the original artificial lifeform, the limited liability company, will use “AI” to accelerate its murderous shell-game until we can’t spot the trick.

If you aren’t familiar with Doctorow, he’s a powerful voice in right to repair circles, a classical hacker opposed to corporate oligopoly, and a bit of an anarcho-syndicalist in his preferred solutions.

I like markets more than governments for most things. More of us can contribute to markets than we can contribute to specialist bureaucracies.

But we have assigned value to end-of-life care inside a convoluted system of profit motives and medical ethics, and it’s not the value most of us place on life.

And that’s going to happen a lot more as we get further and further abstracted away from the existing models of value that govern our lives. So remain skeptical when someone tells you that they know what you value. How they assign value might be different from how you do.