Categories
Emotional Work

Day 992 and What We Can’t Know

Most of my life I’ve been awash in assurances. Maybe this wasn’t so bad when I was a child. Approaching life with confidence in the world breeds positivity.

We’ve come to expect certitude. Our institutions and elders deliver most of their hard-earned knowledge with certainty.

Nuance and shades of grey feel dangerous these days. Too much room for interpretation leaves room for confusion. After all, if it’s just a small percentage on the edges, why give people cause to worry?

Except we all find ourselves in the small percentage at some point. As normal as we may be in some areas, or even most, each of us will probably find ourselves on the edge eventually.

You will want assurances. And as it turns out, we are not yet good enough at math to know many things. You can get close to the limit. Infinitely close. But you can never get there. Just try calculating out Pi if you are skeptical of my math.
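If you want to see the asymptote for yourself, here is a quick sketch of my own (one illustration among many possible series, not a claim about the best method) using the Leibniz series, which creeps toward Pi with every term but never lands on it:

```python
# Approximate Pi with the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# Each extra term gets you closer; no finite number of terms arrives.
def leibniz_pi(terms: int) -> float:
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 1_000, 100_000):
    print(n, leibniz_pi(n))
```

Even at a hundred thousand terms you are still a few hundred-thousandths away. You can get as close as you like. You just can never get there.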

Categories
Community Internet Culture

Day 989 and Autopoietic Ergodicity

In one of my group chats, I hang out with a bunch of rationalist machine learning engineers who are happily climbing the rungs of accelerating life.

I really love the energy of the community as it’s centered tangibly around making things. It’s a little less talk and a lot more action. It’s got a bit of a feeling of Stack Overflow’s early helpfulness but without the Hacker News nerd sniping culture. It’s like the best of a small Reddit thread but for dudes who want to make shit with artificial intelligence.

Now, of course, every community finds itself with disruptive members and turf fights over social mores. Virtual spaces are notorious for clout chasing and personal dramas. Veterans of green text wars are familiar with Geeks, Mops and Sociopaths in Subculture Evolution.

And so it seems fitting that last night, in a much bigger very public egregore that is e/acc’s online community, we got to witness an immune reaction to someone trying to apply non-consensus standards.

I spent an hour watching it play out last night and then went back to reading before bedtime. I’ve got some personal investment in the space and its people, so of course that’s what I’m doing on a Friday night.

But as I got up the next day and saw everyone going back to work, an insightful lowbie (slang within Twitter subcultures for smaller anon accounts) named bmorphism introduced me to a term I’d never heard before. Autopoietic Ergodicity. Or: how do multi-actor dynamic systems self-regulate?

He introduced me to Autopoietic Ergodicity via a link on PerplexityAI, which seemed appropriate. And it got me thinking about how we as individuals act within a much wider system and how it acts on us.

The term combines two ideas by positing that complex adaptive systems (like living organisms or ecosystems) exhibit self-regulating behavior that enables them to maintain persistent patterns while also experiencing change from external influences. These systems are capable of minimizing changes caused by random factors, ensuring their essential dynamics remain stable without needing to undergo a complete reset or cycle back to the initial state. It’s like having a dampening mechanism that continually adjusts for fluctuations, allowing system resilience and long-term persistence in an ever-changing environment.
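To make that dampening mechanism concrete, here is a toy sketch of my own (not anything from the term’s coiners): a system that keeps absorbing random external shocks while a restoring force pulls it back toward its persistent pattern, so it changes continuously without ever needing a full reset.

```python
import random

# Toy self-regulating system: noise perturbs the state each step, and a
# damping term pulls it back toward the baseline pattern. The state keeps
# changing but stays bounded around its persistent pattern; it never
# resets to exactly the initial condition.
def simulate(steps: int = 1000, baseline: float = 0.0,
             damping: float = 0.1, noise: float = 1.0,
             seed: int = 42) -> list[float]:
    rng = random.Random(seed)
    state = baseline
    path = [state]
    for _ in range(steps):
        state += rng.gauss(0, noise)            # external influence hits
        state -= damping * (state - baseline)   # self-regulation damps it
        path.append(state)
    return path

path = simulate()
```

Run it and the trajectory wanders, but the damping keeps it in a stable band around the baseline: persistence and change at the same time.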

It’s my suspicion that something special is happening across portions of the fracturing social web as most of our platforms go back under more centralized control. The system is fighting back.

A meme using a Dune visual that originally has the elder Atreides saying to Paul “we need to cultivate desert power,” with “autist power” substituted

The grey tribes that have populated Silicon Valley have an opinion about the future. And it’s a positive one. We’ve got to find ways to be resilient in the face of memetic interference on our systems. There will be high energy distractions. We’ve got to be reminded that it’s a competition for efficient use of energy and we shouldn’t let it be drained. We’ve got to focus on making things that speak for themselves.

Categories
Culture Preparedness

Day 986 and Risky Business

I’ve been giving a lot of thought to how we see risk the past few years. What is an acceptable risk? What are the boundaries of risk perception and how much variability is there between two people? How much of those tolerances are innate versus cultural? Can you consent to risks you don’t understand?

Philosophers have been working on these questions for a while, and we don’t seem to have gotten much further than observing that some of us dislike change and some of us are more open to it. Figuring out any grand causal theory of openness doesn’t seem any more legible with regression analysis.

We have little coordination of acceptable risks at the individual, local, national, planetary and species level, just as we most need to understand if we can all collectively tolerate significant social, economic and political risks associated with new technologies.

We just don’t seem to have consensus on risk much beyond “don’t get someone killed.” Yelling “slow down” barely works with toddlers, so I don’t see how anyone considers it a viable tactic for coping with, let’s just say, artificial intelligence.

I don’t consider myself to be someone who takes a lot of unnecessary risks. I like to do my homework. While I was never a Boy Scout, I do subscribe to their motto. “Be Prepared!” But if you asked my friends and family they’d probably say I am a risk taker. Who is right? It’s clear that preparation and planning mitigate known risks. Beyond that it’s not up to me. It’s probably not up to you either.

Categories
Community Internet Culture

Day 980 and I Am Beff Jezos

Humans are horny for hierarchy. We are eager to give our power away as a species. Please will someone else just be responsible for making our decisions for us? Can someone point me to the person in charge? “Take me to your leader!”

If someone seems smarter, richer, more capable, more aggressive, heck even if they have better taste than us, they become an instant candidate for us delegating our authority over to them. My most popular blog post ever was about dickriding. Yes it was about Elon Musk’s fans.

I’ll be the first to say that people who court you to gain power should be viewed as suspect. But someone who has power is not themselves always suspect by default. I know it’s a fine distinction. But people fall into positions of authority simply by going out and being competent. Competence is a fast route to power.

Sure, being competent has a lot of downsides. Suddenly you’ve got power you maybe didn’t want. We have an incentive to shunt power off to someone else, as it generally sucks to be in charge. It’s energetically expensive to be responsible. Just ask one of your friends with a toddler.

Sometimes we have to wield power because it’s our job to take care of our corner of the universe. Again ask someone with a toddler. We are in charge of sustaining some portion of the grand experiment called life. Even if it’s just our own families. Even if it’s just yourself.

So why am I titling this post “I am Beff Jezos?” Right now online there is a movement gaining recognition for encouraging people to have agency and build for the future. It’s a movement that wants you to own your own power. And to help others get more power of their own.

One of the anonymous posters associated with it calls himself Based Beff Jezos as a play on Jeff Bezos the founder of Amazon and the meme “based” as in Lil B’s “based means being yourself.” It’s a silly joke.

The movement itself identifies as e/acc which is a shorthand for effective accelerationism. It encourages people to do more, build more, take charge of their own corner of the world and make the future arrive sooner. The tagline? Accelerate.

Its principles are simple. The future will arrive and we should build like it’s coming. Slowing things down, or even worse, going backwards, is not a solution to our problems. We can only go forward. If you’d prefer a driving metaphor, we should accelerate into the curve. Slowing down just spins out the car. Civilization is the car.

So what, you want to just uplift humanity, build AI and populate the universe with the maximum diversity and quantity of life?

e/acc

The movement is more of a meme space than anything else. It is decentralized. I’ve not met anyone that runs it, though I’ve spoken to many vocal supporters. And I’ve chatted with folks that are at the nexus of its online presence. Everyone is positive and friendly. Most of them are anonymous. I’m not even sure if some of the accounts are singular or plural. Which is pretty cool. It doesn’t have a president or a CEO or even a founder who owns anything with any amount of authority. It could be one dude or multiple dudes, gender nonspecific.

It’s just a bunch of people who make stuff. It’s popular amongst engineers, but it’s an ethos open to anyone who can make something. Even this blog post counts. I am e/acc as much as anyone.

Naturally if no one is in charge it’s a bit threatening. If there is no hierarchy how do we control it? If no one is in charge then what will we do if someone under their banner does something bad?

Such is the beauty of an idea. A meme can’t really be owned. A decentralized group of goofballs on the internet can’t really be snuffed out for bad think. Maybe a few nodes go down. They literally cannot kill all of us.

I Am Spartacus

The message does seem to be resonating. I know being hopeful has improved my mood. A decent number of people who make shit want the future to come a little faster. They want more people with more ownership of the building process.

More complexity and more abundance is appealing even if it seems impossible to achieve. Don’t worry, just build for your corner of the world. Put power and responsibility in as many hands as possible. We can build it together.

You too can have a toddler and own the joy of being responsible for your corner of the universe. It’s dangerous for sure. Folks will tell you for your own good you need to have a hierarchy and someone responsible for the power.

But guess what? It can be you. And sure heads will get bonked. Crying will ensue. Remember I said ask someone with a toddler? What if you are the competent and in charge parent? Shit right?

We’ve got to go forward. I am Beff Jezos. You too are Beff Jezos. And they can’t stop us all from arriving at the future. Go ahead and accelerate into the curve.

Categories
Community Internet Culture Politics

Day 973 and Reinforcement

I’ve spent a lot of time this summer thinking about who gets to decide the boundaries of society.

Automation of civic and cultural life has been happening at the speed of capitalism. It’s about to happen at the speed of artificial intelligence’s processing power.

At least during most of techno-capitalism, corporations and governments were still run by humans. You could blame an executive or elected official. What happens when more decisions are snapped into reality by a numerical boundary?

High frequency traders have found many an arbitrage they whittled into nothingness. Who will get whittled away when the machines decide how society should run?

We got a taste of the horrors of treating people like statistics instead of humans with the Biden-authored 1994 crime bill and its mandatory minimum sentencing. And here we are rushing to find new ways to nudge consensus back to hard lines and institutionalization.

I don’t know how we handle virtue in a world without grace. Alasdair MacIntyre’s After Virtue seems prescient. Forgiveness in the face of horrific reality has been the hallmark of humanity’s most enduring religions. But then again so has swift punishment and inscrutable cruelty. Humans are quite a species.

I am, like many others, concerned about reinforcement learning in machine learning and artificial intelligence. How and where we set the boundaries of the machines that determine the course of daily life has been a pressing question since the invention of the spreadsheet.

Marx certainly went on about alienation from our contributions to work. But division of labor keeps dividing. And algorithms seem to only increase the pace of the process.

Categories
Biohacking Medical Startups

Day 971 and Patients’ Rights With Artificial Intelligence

If you are working in artificial intelligence or medicine, I’d like to plead my case to you. I’d just like to pass along a note.

The current “responsible” safety stance is that we should not have AI agents dispense healthcare advice as if they had the knowledge of a doctor. I think this is safetyism and robs sick people of their own agency.

I have very complicated healthcare needs and have experienced the range of how human doctors fail. The failure case is almost always in the presumption that you will fall within a median result.

Now for most people this is obviously true. They are more likely to be the average case. And we should all be concerned that people without basic numeracy may misinterpret a risk. Whether it’s our collective responsibility to set limits to protect regular people is not a solved problem.

But what about the complex, informed patient who knows they are not average? The real outliers. Giving them access to more granular data lets them accelerate their own care.

It’s a persistent issue of paternalism in medicine to assume the doctor knows best and the presumption that the patient is either stupid, lying, or hysterical is the norm. It’s also somewhat gendered in my experience.

I now regularly work with my doctors using an LLM precisely so we can avoid these failure cases where I am treated as an average statistic in a guessing game. I’m a patient, not a customer, after all. I decide my own best interest.

A strict regulatory framework constricts access without solving any of the wider issues of access to care for those outside of norms. Artificial intelligence has the capacity to save lives and improve quality of life for countless difficult patients. It’s a social good and probably a financial one too.

Categories
Biohacking Medical

Day 949 and Stomach Stuff

I was very excited for today. My first Monday with my new schedule after my “season of no” cleared the calendar.

I went into the day brimming with optimism. Naturally, it was only fair that I lost my entire day to some kind of stomach bug.

I am experimenting with a new GLP-1 agonist and have found the side effects to be troublesome. I made an attempt to have a protein shake and it cascaded from there. So I don’t have much to say today except that my biohacking went awry.

Instead I’ll recommend you go read my post from yesterday on assigning value. It’s some thoughts on alignment for artificial intelligence and the impossible task of being sure we all share the same idea of value.

Categories
Culture Medical Politics

Day 948 and Assigning Value

What does assigning value mean to you? How do you begin to investigate what is valuable? If someone asked you to value “object X” do you know what tools you would use first to make a measurement?

If I tell you determining value is a cultural problem, you may investigate the problem of value through religious or philosophical frameworks. If I tell you value is an artistic problem, you may use taste in finding value.

If I tell you that assigning value is primarily a computing problem, you may search for weightings, databases and referents to determine value.

So what happens when determining value has to account for multiple or even contradictory frameworks? Which framework assigns the ultimate value? And how do we align them?

Congratulations, you’ve now become an artificial intelligence alignment researcher. I bet you thought that required a doctorate, but it doesn’t.

It’s not an entirely intractable problem. The Industrial Revolution found ways to align competing frameworks. We assigned labor value and made currencies to facilitate the exchange of different goods.

Markets can, and do, spring up for all kinds of previously impossible-to-value things. Capitalism has done its best to make cultural value fungible and legible to an agreed-upon value. Sure, artisans and artists complain that we conclude incorrect values regularly. But we don’t always agree on value.

Generally we’ve found that what can pay for itself survives and what can profit for others thrives.

Not all people are motivated by profit, but we all are motivated to survive. And so we contribute what we believe has value to each other and hope the frameworks of value that others have will align with ours. The balance between the two has held together humanity for some time.

But deciding on value isn’t the same thing as a thing driving a profit and we have to remember that truth. Between the gaps in the models of what we value is the epsilon of what cannot be calculated.

If you’d like to read a horror story on how assigning fungible value in a database can end up assigning a value to something we humans generally don’t consider interchangeable at all, then I’d go read this piece on how public hospice care’s incentives have been perverted by private equity profit motive.

I don’t always agree with the author of the piece Cory Doctorow. But I think he’s raising a powerful point on how we are assigning value when we overlay competing frameworks.

This is the true “AI Safety” risk. It’s not that a chatbot will become sentient and take over the world – it’s that the original artificial lifeform, the limited liability company, will use “AI” to accelerate its murderous shell-game until we can’t spot the trick

If you aren’t familiar with Doctorow, he’s a powerful voice in right to repair circles, a classical hacker opposed to corporate oligopoly, and a bit of an anarcho-syndicalist in his preferred solutions.

I like markets more than governments for most things. More of us can contribute to markets than we can contribute to specialist bureaucracies.

But we have assigned value to end of life care inside the convoluted system of profit motives and medical ethics and it’s not the value most of us share on life.

And that’s going to happen a lot more as we get further and further abstracted away from the existing models of value that govern our lives. So remain skeptical when someone tells you that they know what you value. How they assign value might be different than you.

Categories
Culture Medical Politics

Day 945 and Secrets and Safetyism

Keeping secrets used to be a lot easier. Noble philosopher kings with priestly knowledge kept that shit under lock and key so some uppity courtesan or eunuch didn’t get too clever.

Not that it was all that necessary. Nobody was accidentally misinterpreting the layers of mystical knowledge because illuminated manuscripts were expensive as fuck. And that was cheaper than the previous method which was memorizing oral histories. The expense of sharing information has acted as a control mechanism for centuries.

If you’ve got the money, you can store your sex toys and drugs in layered secret drawers behind a hidden bust of Socrates. But some asshole will post a primer online and your benzodiazepines and vibrator will be long gone.

The metaphor I’m working with on this silly desk is that humans love to hoard secrets. We’ve got a lot of incentives to keep knowledge locked away. Drugs and sex in my joke are mere proxies for the ways we access altered states. Eve’s apple was a metaphor for forbidden knowledge, so I’m not reinventing the wheel here.

So where are we today on secrets? Well, I think we are trying desperately to put the genie back in the bottle.

We think we’ve got an open internet but ten years ago Instagram stopped including the metadata tags to allow Twitter to display rich content embedded directly in a Tweet. Now Twitter and Reddit are taking the same approach as Instagram did as data ownership becomes a hot issue.

Closed gardens are meant to keep thieves out and Eve in. And depending on who you are it’s likely you will experience the fall from grace of Eve and the persecution of the thief. God clearly knew something as his conclusion was that once you’ve tasted the bitter fruit there is no point in protecting paradise.

Every time there is more access to information we have the same debate. Fundamentally you either believe people should have access to information and how they apply it to their lives (side effects included) or you don’t.

I’m happy for you to argue the nuances of it. Want a recent example that looks complex and might actually be deadly simple?

The clown meme format asks if it’s a joke to confidently conclude that “LLMs should not be used to give medical advice.”

I know it’s tempting to side with the well credentialed researcher over the convicted felon when faced with a debate over access to medical advice. But I don’t think it’s as simple as all that.

From Gutenberg to the current crop of centralized large language models, it’s just more complexity and friction on the same old story. It is dangerous to let the savages have access to the priestly secrets. I for one remain on team Reformation. Rest in power, Aaron Swartz.

To quote myself in my own investor letter last month.

Most builders remain deeply skeptical of Noble Lies, “for your own good” safetyism, regulatory capture, oligopoly control, and the centralized nation state control as the most effective methodology of innovation for a dynamic pluralistic human future. We are having cultural and financial reformations at a frightening speed. It’s beyond future shock now.

So if I have a gun to my head (and that day may come) I’d like to have it on record that I don’t think secrets have any inherent nobility. It’s just a control mechanism. Keeping people safe sounds noble. But you’d be wise to consider how you’d feel if your life depended on having access to medical data. How would you feel if the paternalism of a noble lie kept you from it? It’s not great, Bob.

Categories
Internet Culture Reading Startups

Day 872 and Synthetic Selves

I’ve been writing in public, on and off, for my entire adult life. First it was goofy tween personal pages made for myself on hosted services like Livejournal & Geocities.

My younger years were filled with sundry hosted publishers that taught you just enough HTML & JavaScript to be a foot soldier in the ISP and browser war, but never quite encouraged you to gain the more foundational tools to host yourself independent of their network effects. Closed gardens of that era gave you a small plot of digital land to tend in their giant kingdoms. I never felt like I could homestead outside of their cozy walls on my own domain.

Those plots of writing yielded fruit though. And while it feels as if I only saved a small fraction of my writing over the years, I have hundreds of thousands of words.

I do have an archive of my collegiate blog which later turned into one of the first professional fashion blogs and spawned my first startup. I’ve got 872 straight days of writing saved from this daily experiment. And while I mostly auto-delete my Tweets I’ve also downloaded the remaining archives.

Why am I mentioning my written records? Because making a synthetic version of your intellectual self that is trained through your writing is now a possibility. I’d been introduced to Andrew Huberman’s “ask me anything” chatbot that was made through Dexa.AI and I thought I’d like that but for my own writing. So any founder or LP can get a sense of who I am by asking questions at their leisure.

We’ve come so far that it is an almost quotidian project for developers if you can provide enough training data and it looks as if I may have enough. Just by tweeting my interest I was introduced to chatbase.co, ThreeSigma.ai, Authory (great way to consolidate your content) and the possibility of knocking out a langchain on Replit. Aren’t my Twitter friends cool?
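For the curious, the core mechanic behind most of these tools is simpler than it sounds: retrieve the most relevant pieces of your archive for a given question, then hand them to a model as context. Here is a minimal pure-Python sketch of that retrieval step. It is my own toy, not any of those products’ actual APIs, and the three-post archive is made up:

```python
import math
import re
from collections import Counter

# Toy retrieval step for an "ask my archive" bot: score each saved post
# against a question by cosine similarity over word counts, then return
# the best matches to feed an LLM as context.
def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)   # Counter returns 0 for missing words
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_posts(question: str, posts: list[str], k: int = 2) -> list[str]:
    q = vectorize(question)
    ranked = sorted(posts, key=lambda p: cosine(q, vectorize(p)), reverse=True)
    return ranked[:k]

archive = [
    "Day 948 and Assigning Value: alignment and value frameworks",
    "Day 986 and Risky Business: risk perception and preparedness",
    "Day 980 and I Am Beff Jezos: e/acc and accelerationism",
]
print(top_posts("how do you think about risk?", archive, k=1))
```

A real product would swap the word counts for learned embeddings, but the shape of the pipeline is the same: your writing becomes the index, and the bot is only as good as the archive behind it.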

A big thank you to 2021 me and 2022 me, who wrote so damn much. Click those links for my “best of” round-ups. Hopefully I’ll have a synthetic self soon so you will have the option of asking it instead of hyperlink rabbit-holing down endless inference threads.

My buddy Sean and I landed on “Phenia” as this synth’s name. He’s tinkering already. My husband Alex is already wondering what the heck is going on. But I see a pattern emerging. Phenia as in apophenia. A synthetic self capable of pattern recognition towards an inward spiral of infinite synthetic selves? Not a bad choice for a name at all. We can figure out a chat bot in a bit.