
Day 1223 and Human Wants Are Endless

The curriculum of classical economics can be a bit of a blackpill if you are an optimist about the good in humanity.

Economists operate from a model that presumes human wants are infinite but our resources are not. Yes, it is reductive, but if your goal is to model something on a spreadsheet you've got to start somewhere, and somehow the economists picked non-satiation.
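For the curious, the textbook version of this assumption is what micro theory calls local non-satiation. My gloss, in the standard notation rather than anything the economists owe me: for every consumption bundle there is another bundle, arbitrarily close by, that the consumer strictly prefers.

\[
\forall x \in X,\ \forall \varepsilon > 0,\ \exists y \in X \ \text{such that}\ \|y - x\| < \varepsilon \ \text{and}\ u(y) > u(x)
\]

No matter where you land, there is always something nearby you'd want a little more. Wants, by axiom, are endless.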

I’m using Perplexity more than Google Search these days so I’m delving (apologies to Paul Graham) into deep Reddit territory anytime I’ve got a random query.

Which incidentally is not paying off for Reddit just yet.

Reddit, which is trading about 40 per cent above its opening initial public offering price, is expected to report a net loss of about $610mn on $213mn in revenue.

Via the Financial Times (which oddly I can’t share a link to for unclear reasons, but here is a link to old reporting in Reuters). I’m writing this before earnings are released today.

Reddit’s artificial intelligence licensing deals have made it more useful to me than ever, but it’s unclear how anyone gets paid for the mountain of work required to keep it useful. Such is the tragedy of the internet commons. Anyways.

I’m feeling particularly sad about infinite wants as a framework for anything today. I’ve been disappointed by just how much others view me as a source of want gratification, even when I explicitly ask them to clarify their needs.

Much of the language of human wants must be couched in the language of need. Asking someone to be obligated to another person spirals quickly into a sticky web of moralists insisting on the value of their chosen wants. I’d be more inclined to say yes to an ask if someone were clear about their needs upfront. Just in case you find yourself asking me for something.


Day 868 and Chunks

Amid all the panic about how artificial intelligence is rapidly replacing human work, we are hiding a dirty little secret. Humans are awful at breaking down goals into component parts. Anyone who has tried to use any type of project management software intuits this. Articulating clear, specific and manageable tasks is very hard.

Humans are inference driven, always integrating little bits of context and nuance. If your boss gives you a goal, the best path forward on it will depend on hundreds if not thousands of factors.

What’s your budget? What is your timeline? Who is on your team? Do you dislike someone? Want to impress another person? Is the goal to be fast? Is the goal to be good? Is the goal to be as good as possible as fast as possible with as little budget as possible? Trick question: quit your job if that last one is true.

Knowing what we want, knowing the best path to achieving it, and knowing how the pursuit of those goals affects your family, friends, neighbors, enemies and adversaries creates layers of decisions in a complex matrix of possibilities. This is easy for a machine to do if we’ve given it the right inputs, outputs, and parameters. But alignment isn’t all that easy.

I personally don’t believe most of us know what we want. Beyond a Girardian imitation of what our current culture deems valuable, and thus worthy of admiration, genuine desire is hard to pin down. And that makes it hard to have goals. You need goals before you can have specific tasks. Specific preferences for how a task gets done narrow things even further.

If I could simplify every detail, desire, nuance, and preference set, and align them with my wider goals, ambitions, and critical paths to achieving them, you know I’d have it all organized on some kanban board. And if an AI could extract that from me and turn all my goals into discrete assignments and task chunks, I’d happily go full Culture and let it run my entire life.