Categories
Community Internet Culture

Day 980 and I Am Beff Jezos

Humans are horny for hierarchy. We are eager to give our power away as a species. Please will someone else just be responsible for making our decisions for us? Can someone point me to the person in charge? “Take me to your leader!”

If someone seems smarter, richer, more capable, more aggressive, heck even if they have better taste than us, they become an instant candidate for us delegating our authority over to them. My most popular blog post ever was about dickriding. Yes it was about Elon Musk’s fans.

I’ll be the first to say that people who court you to gain power should be viewed as suspect. But someone who has power is not themselves always suspect by default. I know it’s a fine distinction. But people fall into positions of authority simply by going out and being competent. Competence is a fast route to power.

Sure, being competent has a lot of downsides. Suddenly you’ve got power you maybe didn’t want. We have an incentive to shunt power off to someone else, as it generally sucks to be in charge. It’s energetically expensive to be responsible. Just ask one of your friends with a toddler.

Sometimes we have to wield power because it’s our job to take care of our corner of the universe. Again ask someone with a toddler. We are in charge of sustaining some portion of the grand experiment called life. Even if it’s just our own families. Even if it’s just yourself.

So why am I titling this post “I am Beff Jezos?” Right now online there is a movement gaining recognition for encouraging people to have agency and build for the future. It’s a movement that wants you to own your own power. And to help others get more power of their own.

One of the anonymous posters associated with it calls himself Based Beff Jezos as a play on Jeff Bezos the founder of Amazon and the meme “based” as in Lil B’s “based means being yourself.” It’s a silly joke.

The movement itself identifies as e/acc which is a shorthand for effective accelerationism. It encourages people to do more, build more, take charge of their own corner of the world and make the future arrive sooner. The tagline? Accelerate.

Its principles are simple. The future will arrive and we should build like it’s coming. Slowing things down, or even worse, going backwards, is not a solution to our problems. We can only go forward. If you’d prefer a driving metaphor, we should accelerate into the curve. Slowing down just spins out the car. Civilization is the car.

So what, you want to just uplift humanity, build AI and populate the universe with the maximum diversity and quantity of life?

e/acc

The movement is more of a meme space than anything else. It is decentralized. I’ve not met anyone that runs it, though I’ve spoken to many vocal supporters. And I’ve chatted with folks that are at the nexus of its online presence. Everyone is positive and friendly. Most of them are anonymous. I’m not even sure if some of the accounts are singular or plural. Which is pretty cool. It doesn’t have a president or a CEO or even a founder who owns anything with any amount of authority. It could be one dude or multiple dudes, gender non-specific.

It’s just a bunch of people who make stuff. It’s popular amongst engineers but it’s an ethos open to anyone who can make something. Even this blog post counts. I am e/acc as much as anyone.

Naturally if no one is in charge it’s a bit threatening. If there is no hierarchy how do we control it? If no one is in charge then what will we do if someone under their banner does something bad?

Such is the beauty of an idea. A meme can’t really be owned. A decentralized group of goofballs on the internet can’t really be snuffed out for bad think. Maybe a few nodes go down. They literally cannot kill all of us.

I Am Spartacus

The message does seem to be resonating. I know being hopeful has improved my mood. A decent number of people who make shit want the future to come a little faster. They want more people with more ownership of the building process.

More complexity and more abundance is appealing even if it seems impossible to achieve. Don’t worry, just build for your corner of the world. Put power and responsibility in as many hands as possible. We can build it together.

You too can have a toddler and own the joy of being responsible for your corner of the universe. It’s dangerous for sure. Folks will tell you for your own good you need to have a hierarchy and someone responsible for the power.

But guess what? It can be you. And sure heads will get bonked. Crying will ensue. Remember I said ask someone with a toddler? What if you are the competent and in charge parent? Shit right?

We’ve got to go forward. I am Beff Jezos. You too are Beff Jezos. And they can’t stop us all from arriving at the future. Go ahead and accelerate into the curve.

Categories
Community Internet Culture Politics

Day 973 and Reinforcement

I’ve spent a lot of time this summer thinking about who gets to decide the boundaries of society.

Automation of civic and cultural life has been happening at the speed of capitalism. It’s about to happen at the speed of artificial intelligence’s processing power.

At least during most of techno-capitalism, corporations and governments were still run by humans. You could blame an executive or elected official. What happens when more decisions are snapped into reality by a numerical boundary?

High frequency traders have found many an arbitrage they whittled into nothingness. Who will get whittled away when the machines decide how society should run?

We got a taste of the horrors of treating people like statistics instead of humans with the Biden-authored crime bill and its mandatory minimum sentencing. And here we are rushing to find new ways to nudge consensus back to hard lines and institutionalization.

I don’t know how we handle virtue in a world without grace. Alasdair MacIntyre’s After Virtue seems prescient. Forgiveness in the face of horrific reality has been the hallmark of humanity’s most enduring religions. But then again so has swift punishment and inscrutable cruelty. Humans are quite a species.

I am, like many others, concerned about reinforcement learning in machine learning and artificial intelligence. How and where we set the boundaries of the machines that determine the course of daily life has been a pressing question since the invention of the spreadsheet.

Marx certainly went on about alienation from our contributions to work. But division of labor keeps dividing. And algorithms seem to only increase the pace of the process.

Categories
Biohacking Medical Startups

Day 971 and Patients Rights With Artificial Intelligence

If you are working in artificial intelligence or medicine, I’d like to plead my case to you. I’d just like to pass along a note.

The current “responsible” safety stance is that we should not have AI agents dispense healthcare advice as if they had the knowledge of a doctor. I think this is safetyism, and it robs sick people of their own agency.

I have very complicated healthcare needs and have experienced the range of how human doctors fail. The failure case is almost always in the presumption that you will fall within a median result.

Now for most people this is obviously true. They are more likely to be the average case. And we should all be concerned that people without basic numeracy skills may misinterpret a risk. Whether it’s our collective responsibility to set limits to protect regular people is not a solved problem.

But what about the complex, informed patient who knows they are not average? The real outliers. Giving them access to more granular data lets them accelerate their own care.

It’s a persistent issue of paternalism in medicine to assume the doctor knows best and the presumption that the patient is either stupid, lying, or hysterical is the norm. It’s also somewhat gendered in my experience.

I now regularly work with my doctors using an LLM precisely so we can avoid these failure cases where I am treated as an average statistic in a guessing game. I’m a patient, not a customer, after all. I decide my own best interest.

A strict regulatory framework constricts access without solving any of the wider issues of access to care for those outside of norms. Artificial intelligence has the capacity to save lives and improve quality of life for countless difficult patients. It’s a social good and probably a financial one too.

Categories
Biohacking Medical

Day 949 and Stomach Stuff

I was very excited for today. My first Monday with my new schedule after my “season of no” cleared the calendar.

I came into the day brimming with optimism. Naturally, it was only fair that I lost my entire day to some kind of stomach bug.

I am experimenting with a new GLP-1 agonist and have found the side effects to be troublesome. I made an attempt to have a protein shake and it cascaded from there. So I don’t have much to say today except that my biohacking went awry.

Instead I’ll recommend you go read my post from yesterday on assigning value. It’s some thoughts on alignment for artificial intelligence and the impossible task of being sure we all share the same idea of value.

Categories
Culture Medical Politics

Day 948 and Assigning Value

What does assigning value mean to you? How do you begin to investigate what is valuable? If someone asked you to value “object X” do you know what tools you would use first to make a measurement?

If I tell you determining value is a cultural problem, you may investigate the problem of value through religious or philosophical frameworks. If I tell you value is an artistic problem, you may use taste in finding value.

If I tell you that assigning value is primarily a computing problem, you may search for weightings, databases and referents to determine value.

So what happens when determining value has to account for multiple or even contradictory frameworks? Which framework assigns the ultimate value? And how do we align them?

Congratulations, you’ve now become an artificial intelligence alignment researcher. I bet you thought that required a doctorate, but it doesn’t.

It’s not an entirely intractable problem. The Industrial Revolution found ways to align competing frameworks. We assigned labor value and made currencies to facilitate the exchange of different goods.

Markets can, and do, spring up for all kinds of previously impossible-to-value things. Capitalism has done its best to make cultural value fungible and legible at an agreed-upon value. Sure, artisans and artists complain that we conclude incorrect values regularly. But we don’t always agree on value.

Generally we’ve found that what can pay for itself survives and what can profit for others thrives.

Not all people are motivated by profit, but we all are motivated to survive. And so we contribute what we believe has value to each other and hope the frameworks of value that others have will align with ours. The balance between the two has held humanity together for some time.

But deciding on value isn’t the same thing as driving a profit, and we have to remember that truth. Between the gaps in the models of what we value is the epsilon of what cannot be calculated.

If you’d like to read a horror story on how assigning fungible value in a database can end up assigning a value to something we humans generally don’t consider interchangeable at all, then I’d go read this piece on how public hospice care’s incentives have been perverted by private equity profit motive.

I don’t always agree with the author of the piece Cory Doctorow. But I think he’s raising a powerful point on how we are assigning value when we overlay competing frameworks.

This is the true “AI Safety” risk. It’s not that a chatbot will become sentient and take over the world – it’s that the original artificial lifeform, the limited liability company, will use “AI” to accelerate its murderous shell-game until we can’t spot the trick

If you aren’t familiar with Doctorow, he’s a powerful voice in right to repair circles, a classical hacker opposed to corporate oligopoly, and a bit of an anarcho-syndicalist in his preferred solutions.

I like markets more than governments for most things. More of us can contribute to markets than can contribute to specialist bureaucracies.

But we have assigned value to end of life care inside the convoluted system of profit motives and medical ethics and it’s not the value most of us share on life.

And that’s going to happen a lot more as we get further and further abstracted away from the existing models of value that govern our lives. So remain skeptical when someone tells you that they know what you value. How they assign value might be different than how you do.

Categories
Culture Medical Politics

Day 945 and Secrets and Safetyism

Keeping secrets used to be a lot easier. Noble philosopher kings with priestly knowledge kept that shit under lock and key so some uppity courtesan or eunuch didn’t get too clever.

Not that it was all that necessary. Nobody was accidentally misinterpreting the layers of mystical knowledge because illuminated manuscripts were expensive as fuck. And that was cheaper than the previous method which was memorizing oral histories. The expense of sharing information has acted as a control mechanism for centuries.

If you’ve got the money, you can store your sex toys and drugs in layered secret drawers behind a hidden bust of Socrates. But some asshole will post a primer online and your benzodiazepines and vibrator will be long gone.

The metaphor I’m working with on this silly desk is that humans love to hoard secrets. We’ve got a lot of incentives to keep knowledge locked away. Drugs and sex in my joke are mere proxies for the ways we access altered states. Eve’s apple was a metaphor for forbidden knowledge, so I’m not reinventing the wheel here.

So where are we today on secrets? Well, I think we are trying desperately to put the genie back in the bottle.

We think we’ve got an open internet but ten years ago Instagram stopped including the metadata tags to allow Twitter to display rich content embedded directly in a Tweet. Now Twitter and Reddit are taking the same approach as Instagram did as data ownership becomes a hot issue.

Closed gardens are meant to keep thieves out and Eve in. And depending on who you are it’s likely you will experience the fall from grace of Eve and the persecution of the thief. God clearly knew something as his conclusion was that once you’ve tasted the bitter fruit there is no point in protecting paradise.

Every time there is more access to information we have the same debate. Fundamentally you either believe people should have access to information and how they apply it to their lives (side effects included) or you don’t.

I’m happy for you to argue the nuances of it. Want a recent example that looks complex and might actually be deadly simple?

The clown meme format asks if it’s a joke to confidently conclude that “LLMs should not be used to give medical advice.”

I know it’s tempting to side with the well credentialed researcher over the convicted felon when faced with a debate over access to medical advice. But I don’t think it’s as simple as all that.

From Gutenberg to the current crop of centralized large language models, it’s just more complexity and friction on the same old story. It is dangerous to let the savages have access to the priestly secrets. I for one remain on team Reformation. Rest in power, Aaron Swartz.

To quote myself in my own investor letter last month.

Most builders remain deeply skeptical of Noble Lies, “for your own good” safetyism, regulatory capture, oligopoly control, and the centralized nation state control as the most effective methodology of innovation for a dynamic pluralistic human future. We are having cultural and financial reformations at a frightening speed. It’s beyond future shock now.

So if I have a gun to my head (and that day may come) I’d like to have it on record that I don’t think secrets have any inherent nobility. They are just a control mechanism. Keeping people safe sounds noble. But you’d be wise to consider how you’d feel if your life depended on having access to medical data. How would you feel if the paternalism of a noble lie kept you from it? It’s not great, Bob.

Categories
Internet Culture Reading Startups

Day 872 and Synthetic Selves

I’ve been writing in public, on and off, for my entire adult life. First it was goofy tween personal pages made for myself on hosted social media like Livejournal & Geocities.

My younger years were filled with sundry hosted publishers that taught you just enough HTML & JavaScript to be a foot soldier in the ISP and browser war, but never quite encouraged you to gain the more foundational tools to host yourself independent of their network effects. Closed gardens of that era gave you a small plot of digital land to tend in their giant kingdoms. I never felt like I could homestead outside of their cozy walls on my own domain.

Those plots of writing yielded fruit though. And while it feels as if I only saved a small fraction of my writing over the years, I have hundreds of thousands of words.

I do have an archive of my collegiate blog which later turned into one of the first professional fashion blogs and spawned my first startup. I’ve got 872 straight days of writing saved from this daily experiment. And while I mostly auto-delete my Tweets I’ve also downloaded the remaining archives.

Why am I mentioning my written records? Because making a synthetic version of your intellectual self that is trained through your writing is now a possibility. I’d been introduced to Andrew Huberman’s “ask me anything” chatbot that was made through Dexa.AI and I thought I’d like that but for my own writing. So any founder or LP can get a sense of who I am by asking questions at their leisure.

We’ve come so far that it is an almost quotidian project for developers if you can provide enough training data and it looks as if I may have enough. Just by tweeting my interest I was introduced to chatbase.co, ThreeSigma.ai, Authory (great way to consolidate your content) and the possibility of knocking out a langchain on Replit. Aren’t my Twitter friends cool?
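The tools mentioned above wrap a large language model around your archive, but the core trick is retrieval: find the snippets of your own writing most relevant to a question and feed them to the model. A minimal sketch of just that retrieval step in plain Python, using a bag-of-words cosine similarity and a toy archive I made up for illustration (the function names are mine, not any product’s API):

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase and strip basic punctuation to get comparable word tokens."""
    return [w.strip(".,!?\u201c\u201d").lower() for w in text.split() if w.strip(".,!?")]

def cosine(a, b):
    """Cosine similarity between two word-count vectors (Counters)."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    denom = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / denom if denom else 0.0

# Toy stand-in for a real archive of blog posts, chunked into snippets.
archive = [
    "Competence is a fast route to power.",
    "Markets can spring up for all kinds of previously impossible to value things.",
    "Keeping people safe sounds noble, but secrets are just a control mechanism.",
]

def retrieve(question, snippets, k=1):
    """Return the k snippets most similar to the question."""
    q = Counter(tokenize(question))
    ranked = sorted(snippets, key=lambda s: cosine(q, Counter(tokenize(s))), reverse=True)
    return ranked[:k]

print(retrieve("How do people gain power?", archive))
```

A production chatbot would swap the word counts for learned embeddings and pass the retrieved snippets into an LLM prompt, but the shape of the pipeline is the same.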

A big thank you to 2021 me and 2022 me, who wrote so damn much. Click those links for my “best of” round ups. Hopefully I’ll have a synthetic self soon so you will have the option of asking it instead of hyperlink rabbit holing down endless inference threads.

My buddy Sean and I landed on “Phenia” as this synth’s name. He’s tinkering already. My husband Alex is already wondering what the heck is going on. But I see a pattern emerging. Phenia as in apophenia. A synthetic self capable of pattern recognition towards an inward spiral of infinite synthetic selves? Not a bad choice for a name at all. We can figure out a chat bot in a bit.

Categories
Startups

Day 868 and Chunks

Amid all the panic about how artificial intelligence is rapidly replacing human work, we are hiding a dirty little secret. Humans are awful at breaking down goals into component parts. Anyone who has tried to use any type of project management software intuits this. Articulating clear, specific and manageable tasks is very hard.

Humans are inference driven, always integrating little bits of context and nuance. If your boss gives you a goal, the best path forward on it will depend on hundreds if not thousands of factors.

What’s your budget? What is your timeline? Who is on your team? Do you dislike someone? Want to impress another person? Is the goal to be fast? Is the goal to be good? Is the goal to be as good as possible as fast as possible with as little budget as possible? Trick question, quit your job if that last one is true.

Knowing what we want, knowing the best path to achieving it, and knowing how the pursuit of those goals affects your family, friends, neighbors, enemies and adversaries creates layers of decisions in a complex matrix of possibilities. This is easy for a machine to navigate if we’ve given it the right inputs, outputs, and parameters. Alignment isn’t all that easy.

I personally don’t believe most of us know what we want. Beyond a Girardian imitation of what our current culture deems valuable, and thus worthy of admiration, genuine desire is hard to pin down. And that makes it hard to have goals. Goals are required to have specific tasks. Specific preferences on how a task gets achieved narrow it down even further.

If I could simplify down every detail and desire and nuance and preference set and also align them with my wider goals, ambitions, and critical paths to achieving them you know I’d have it all organized on some kanban board. And if the AI can extract that from me and turn all my goals into discrete assignments and task chunks I’d happily go full Culture and let them run my entire life.
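The mechanical part of that kanban dream, turning a goal into ordered task chunks once dependencies are known, is genuinely easy for a machine; the hard part is extracting the dependencies from us in the first place. A small sketch of the easy half, with a made-up example goal (the `Task` type and `schedule` function are my illustration, not any tool’s API):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    depends_on: list = field(default_factory=list)

def schedule(tasks):
    """Order tasks so every task comes after its dependencies (Kahn-style topological sort)."""
    done, order = set(), []
    pending = list(tasks)
    while pending:
        # A task is ready once all of its dependencies are done.
        ready = [t for t in pending if all(d in done for d in t.depends_on)]
        if not ready:
            raise ValueError("circular dependencies: no task can proceed")
        for t in ready:
            order.append(t.name)
            done.add(t.name)
            pending.remove(t)
    return order

goal = [
    Task("ship feature", ["write code", "review"]),
    Task("review", ["write code"]),
    Task("write code", ["spec"]),
    Task("spec"),
]
print(schedule(goal))  # → ['spec', 'write code', 'review', 'ship feature']
```

The sort is trivial; the schema is the bottleneck. Every unstated preference in the paragraph above is a missing edge or field in this structure, which is exactly what an AI would have to extract from you.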

Categories
Internet Culture

Day 831 and Apocalypse Meow

I’m starting to enjoy the AI doomers. It’s a relief to have someone else play Chicken Little. It’s usually my job to be a Cassandra, but for once I am not aligned with an apocalypse. I don’t think we can stop the future from arriving. And I am a fuck around and find out type. It’s just my nature. I think we need to build for optimistic futures. But that doesn’t mean bad shit won’t happen even if we halt all progress. I wish.

When people say “apocalypse” you get the sense that it’s a one-time event for most people. That bad things happen all at once and life is in an instant forever changed. Like it does in the movies. But I’m not sure the future changes like a bankruptcy. Slowly and then all at once. I think the future is what we make of it and it takes an enormous effort to make things better.

Maybe your people already survived an apocalypse. Maybe your ancestors wiped someone else out. Who knows what apocalypses your people lived through that others didn’t. I’m an American.

I bet if you could talk to your great grandmother you might find that real life is complex and she lived through hell. So why would you assume you’d even know if you were in an apocalypse right this moment?

To assume we can make things better is an ambition humanity shares. It’s kind of a wild leap into the unknown and yet we have to do it all the time. Maybe it’s not the end of the world.

But what I do know is humanity comes from a long line of survivors and we often figure shit out and leave behind history. And even if this time we don’t, well, I’m sure some bit of humanity survives in one form or another.

Maybe I’ll be better adapted to this future. Maybe I’ll be dead. Either way I’m ready to get on with living my life even if the apocalypse is right meow.

Categories
Internet Culture

Day 804 and Organic All Natural Human Brain Brewed Montana Content

As a sign of just how fast the world is changing, my goal of making it to a thousand straight days of writing is officially pointless. ChatGPT-4 is pretty good at mimicking all styles of writing. AI has come to rescue us from the drudgery of being a wordcel just as it once rescued my brethren shape rotators from doing math. Rejoice!

Fortunately I started this whole blog for personal amusement and feel entirely unthreatened by the looming reality that writing done entirely by humans is officially over. If anything I feel modestly relieved.

I’d be happy to jump cut to the Culture and move aboard a psychotic spaceship with a name like Weeping Somnambulist. I just combined Iain M. Banks with the Expanse, but I’ve got omnivorous tastes in my science fiction. The point is it’s fucking cool that the machines are coming. And if it’s not cool I won’t know the damn difference. I’ll be dead or in some other simulation, and I’m not worse off than I am currently.

Two men on a bus meme. “The AI Took My Job”

I’ve been repeating as something of a mantra the Hunter S Thompson quote recently.

Buy the ticket, take the ride.

I’ve never actually read Fear and Loathing in Las Vegas, nor have I seen the movie. But I’ve absorbed enough of the basic essence of the “all gas no brakes” mentality from across multiple generations that I think I get the gist of it. Though it’s a reminder to actually go to the source material.

The point is that none of us has a fucking clue and sometimes you’ve got to go fuck around and find out. So I’ll be doing that. If it doesn’t work out I’ll just chalk it up to consciousness expansion. And if I’m lucky maybe I get some new machine friends too. But I’m still going to write just for me, using my mushy brain synaptic firings that I generated right here in Montana.