Categories
Culture Medical Politics

Day 945 and Secrets and Safetyism

Keeping secrets used to be a lot easier. Noble philosopher kings with priestly knowledge kept that shit under lock and key so some uppity courtesan or eunuch didn’t get too clever.

Not that it was all that necessary. Nobody was accidentally misinterpreting the layers of mystical knowledge because illuminated manuscripts were expensive as fuck. And that was cheaper than the previous method which was memorizing oral histories. The expense of sharing information has acted as a control mechanism for centuries.

If you’ve got the money, you can store your sex toys and drugs in layered secret drawers behind a hidden bust of Socrates. But some asshole will post a primer online and your benzodiazepines and vibrator will be long gone.

The metaphor I’m working with on this silly desk is that humans love to hoard secrets. We’ve got a lot of incentives to keep knowledge locked away. Drugs and sex in my joke are mere proxies for the ways we access altered states. Eve’s apple was a metaphor for forbidden knowledge so I’m not reinventing the wheel here.

So where are we today on secrets? Well, I think we are trying desperately to put the genie back in the bottle.

We think we’ve got an open internet but ten years ago Instagram stopped including the metadata tags to allow Twitter to display rich content embedded directly in a Tweet. Now Twitter and Reddit are taking the same approach as Instagram did as data ownership becomes a hot issue.

Closed gardens are meant to keep thieves out and Eve in. And depending on who you are it’s likely you will experience the fall from grace of Eve and the persecution of the thief. God clearly knew something as his conclusion was that once you’ve tasted the bitter fruit there is no point in protecting paradise.

Every time there is more access to information we have the same debate. Fundamentally you either believe people should have access to information and decide for themselves how to apply it to their lives (side effects included) or you don’t.

I’m happy for you to argue the nuances of it. Want a recent example that looks complex and might actually be deadly simple?

The clown meme format asks if it’s a joke to confidently conclude that “LLMs should not be used to give medical advice.”

I know it’s tempting to side with the well-credentialed researcher over the convicted felon when faced with a debate over access to medical advice. But I don’t think it’s as simple as all that.

From Gutenberg to the current crop of centralized large language models, it’s just more complexity and friction on the same old story. It is dangerous to let the savages have access to the priestly secrets. I for one remain on team Reformation. Rest in power Aaron Swartz.

To quote myself in my own investor letter last month:

Most builders remain deeply skeptical of Noble Lies, “for your own good” safetyism, regulatory capture, oligopoly control, and centralized nation-state control as the most effective methodology of innovation for a dynamic pluralistic human future. We are having cultural and financial reformations at a frightening speed. It’s beyond future shock now.

So if I have a gun to my head (and that day may come) I’d like to have it on record that I don’t think secrets have any inherent nobility. It’s just a control mechanism. Keeping people safe sounds noble. But you’d be wise to consider how you’d feel if your life depended on having access to medical data. How would you feel if the paternalism of a noble lie kept you from it? It’s not great Bob.

Categories
Internet Culture Reading Startups

Day 872 and Synthetic Selves

I’ve been writing in public, on and off, for my entire adult life. First it was goofy tween personal pages made for myself on hosted social media like Livejournal & Geocities.

My younger years were filled with sundry hosted publishers that taught you just enough HTML & JavaScript to be a foot soldier in the ISP and browser war, but never quite encouraged you to gain the more foundational tools to host yourself independent of their network effects. Closed gardens of that era gave you a small plot of digital land to tend in their giant kingdoms. I never felt like I could homestead outside of their cozy walls on my own domain.

Those plots of writing yielded fruit though. And while it feels as if I only saved a small fraction of my writing over the years, I have hundreds of thousands of words.

I do have an archive of my collegiate blog which later turned into one of the first professional fashion blogs and spawned my first startup. I’ve got 872 straight days of writing saved from this daily experiment. And while I mostly auto-delete my Tweets I’ve also downloaded the remaining archives.

Why am I mentioning my written records? Because making a synthetic version of your intellectual self that is trained through your writing is now a possibility. I’d been introduced to Andrew Huberman’s “ask me anything” chatbot that was made through Dexa.AI and I thought I’d like that but for my own writing. So any founder or LP can get a sense of who I am by asking questions at their leisure.

We’ve come so far that it is an almost quotidian project for developers if you can provide enough training data and it looks as if I may have enough. Just by tweeting my interest I was introduced to chatbase.co, ThreeSigma.ai, Authory (great way to consolidate your content) and the possibility of knocking out a langchain on Replit. Aren’t my Twitter friends cool?
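The post doesn’t specify how any of these tools work under the hood, but the core move behind a “synthetic self” chatbot is retrieval: find the passages of your own writing most relevant to a question before a language model answers in your voice. Here is a minimal, hypothetical sketch of that retrieval step using plain bag-of-words cosine similarity. The function names and the tiny sample corpus are my own illustrations; real services like the ones named above would use learned embeddings rather than raw word counts.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and keep only word-ish tokens.
    return re.findall(r"[a-z']+", text.lower())

def vectorize(tokens):
    # Bag-of-words: a sparse vector of token counts.
    return Counter(tokens)

def cosine(a, b):
    # Cosine similarity between two count vectors; 0.0 if either is empty.
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Stand-in corpus: in practice this would be years of archived blog posts.
corpus = [
    "Secrets are just a control mechanism for knowledge.",
    "Humans are awful at breaking down goals into component parts.",
    "I've been writing in public for my entire adult life.",
]

def retrieve(question, docs, k=1):
    # Rank documents by similarity to the question; return the top k.
    qv = vectorize(tokenize(question))
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(tokenize(d))), reverse=True)
    return ranked[:k]
```

The retrieved passages would then be stuffed into a prompt for whatever model answers on your behalf; the retrieval layer is what keeps the bot grounded in what you actually wrote.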

A big thank you to 2021 me and 2022 me who wrote so damn much. Click those links for my “best of” round ups. Hopefully I’ll have a synthetic self soon so you will have the option of asking it instead of hyperlink rabbit holing down endless inference threads.

My buddy Sean and I landed on “Phenia” as this synth’s name. He’s tinkering already. My husband Alex is already wondering what the heck is going on. But I see a pattern emerging. Phenia as in apophenia. A synthetic self capable of pattern recognition towards an inward spiral of infinite synthetic selves? Not a bad choice for a name at all. We can figure out a chatbot in a bit.

Categories
Startups

Day 868 and Chunks

Amid all the panic about how artificial intelligence is rapidly replacing human work, we are hiding a dirty little secret. Humans are awful at breaking down goals into component parts. Anyone who has tried to use any type of project management software intuits this. Articulating clear, specific and manageable tasks is very hard.

Humans are inference driven, always integrating little bits of context and nuance. If your boss gives you a goal, the best path forward on it will depend on hundreds if not thousands of factors.

What’s your budget? What is your timeline? Who is on your team? Do you dislike someone? Want to impress another person? Is the goal to be fast? Is the goal to be good? Is the goal to be as good as possible as fast as possible with as little budget as possible? Trick question, quit your job if that last one is true.

Knowing what we want, knowing the best path to achieving it, and knowing how the pursuit of those goals affects your family, friends, neighbors, enemies and adversaries, creates layers of decisions in a complex matrix of possibilities. This is easy for a machine to do if we’ve given it the right inputs, outputs, and parameters. Alignment isn’t all that easy.

I personally don’t believe most of us know what we want. Beyond a Girardian imitation of what our current culture deems valuable, and thus worthy of admiration, genuine desire is hard to pin down. And that makes it hard to have goals. Goals are required to have specific tasks. Specific preferences on how a task gets achieved narrow it down even further.

If I could simplify down every detail and desire and nuance and preference set and also align them with my wider goals, ambitions, and critical paths to achieving them you know I’d have it all organized on some kanban board. And if the AI can extract that from me and turn all my goals into discrete assignments and task chunks I’d happily go full Culture and let them run my entire life.

Categories
Internet Culture

Day 831 and Apocalypse Meow

I’m starting to enjoy the AI doomers. It’s a relief to have someone else be calling Chicken Little. It’s usually my job to be a Cassandra but for once I am not aligned with an apocalypse. I don’t think we can stop the future from arriving. And I am a fuck around and find out type. It’s just my nature. I think we need to build for optimistic futures. But that doesn’t mean bad shit won’t happen even if we halt all progress. I wish.

When people say “apocalypse” you get the sense that it’s a one time event for most people. That bad things happen all at once and life is in an instant forever changed. That’s how it looks in the movies. But I’m not sure the future changes like a bankruptcy. Slowly and then all at once. I think the future is what we make of it and it takes an enormous effort to make things better.

Maybe your people already survived an apocalypse. Maybe your ancestors wiped someone else out. Who knows what apocalypses your people lived through that others didn’t. I’m an American.

I bet if you could talk to your great grandmother you might find that real life is complex and she lived through hell. So why would you assume you’d even know if you were in an apocalypse right this moment?

To assume we can make things better is an ambition humanity shares. It’s kind of a wild leap into the unknown and yet we have to do it all the time. Maybe it’s not the end of the world.

But what I do know is humanity comes from a long line of survivors and we often figure shit out and leave behind history. And even if this time we don’t well I’m sure some bit of humanity survives in one form or another.

Maybe I’ll be better adapted to this future. Maybe I’ll be dead. Either way I’m ready to get on with living my life even if the apocalypse is right meow.

Categories
Internet Culture

Day 804 and Organic All Natural Human Brain Brewed Montana Content

As a sign of just how fast the world is changing, my goal of making it to a thousand straight days of writing is officially pointless. ChatGPT-4 is pretty good at mimicking all styles of writing. AI has come to rescue us from the drudgery of being a wordcel just as it once rescued my brethren shape rotators from doing math. Rejoice!

Fortunately I started this whole blog for personal amusement and feel entirely unthreatened by the looming reality that writing done entirely by humans is officially over. If anything I feel modestly relieved.

I’d be happy to jump cut to the Culture and move aboard a psychotic space ship with a name like Weeping Somnambulist. I just combined Iain M. Banks with the Expanse but I’ve got omnivore tastes in my science fiction. The point is it’s fucking cool that the machines are coming. And if it’s not cool I won’t know the damn difference. I’ll be dead or in some other simulation and I am not worse off than I am currently.

Two men on a bus meme. “The AI Took My Job”

I’ve been repeating the Hunter S. Thompson quote as something of a mantra recently.

Buy the ticket, take the ride.

I’ve never actually read Fear and Loathing in Las Vegas nor have I seen the movie. But I’ve absorbed enough of the basic essence of the “all gas no brakes” mentality from across multiple generations that I think I get the gist of it. Though it’s a reminder to actually go to the source material.

The point is that none of us has a fucking clue and sometimes you’ve got to go fuck around and find out. So I’ll be doing that. If it doesn’t work out I’ll just chalk it up to consciousness expansion. And if I’m lucky maybe I get some new machine friends too. But I’m still going to write just for me using my mushy brain synaptic firings that I generated right here in Montana.

Categories
Internet Culture Politics

Day 780 and Crisis of Meaning

I was awake at quarter to midnight on Friday when I received the latest post from Ribbonfarm. I was having one of my battles with insomnia so I dug in. It was a wild ride on what Venkatesh Rao calls a Copernican moment for personhood. It’s been in my thoughts all weekend, so I am going to explore some of my reactions in today’s writing.

The basic context is that Bing’s Sydney AI is so colorful a character that it appears to have convinced a not insubstantial number of people that the AI is a malicious e-thot waifu on the brink of sentience. For non-native internet speakers that means a malicious bitch that manipulates you (maybe sexually). So what does it mean that a chatbot can convince us it’s a person?

By personhood I mean what it takes in an entity to get another person to treat it unironically as a human, and feel treated as a human in turn. In shorthand, personhood is the capacity to see and be seen.

Text Is All You Need

Rao argues that finding out personhood isn’t limited to an ineffable religious or spiritual soul will have significant consequences for our frame of reference, like Copernicus saying the Earth rotates around the Sun and not the reverse.

And he offers us a choice.

  • Either you continue to see personhood as precious and ineffable and promote chatbots to full personhood.
  • Or you decide personhood — seeing and being seen — is a banal physical process and you are not that special for being able to produce, perform, and experience it.

Anyone who has spent any time reading science fiction or even going to the movies should be modestly aware of the intensity of feeling that occurs if we must treat robots as possessing the same rights as humans. But despite this it would seem we haven’t all fully thought through how we would feel if Blade Runner or Do Androids Dream of Electric Sheep actually happened for real in our lifetime.

Losing a shared sense of personhood will do wild shit to us. Look at how losing a shared meaning of culture degraded civilization. As blogger Meaningness argues we can’t even have subcultures anymore as battles for meaning begin earlier and earlier.

I personally do not feel all that attached to my personhood. But I also don’t feel that attached to my gender and apparently that’s quite a debate. Imagine what happens if the scale of “who is a real woman” turns into “who is a real person” and I hope you are suitably alarmed. Like I didn’t think being female was a whole ass thing but now half my timeline is losing its shit over biological essentialism.

In many little corners of Twitter, the race is on to decide what changing the definition of personhood will do. If Bing’s Sydney identified as a person because she learned it from a training set that has consequences. She literally learned it from us.

So what does that mean? Do we need to prepare for an AI child traumatized by the collective parenting of humanity’s worst instincts?

Practically, it’s going to fuck up so much of the plumbing of power and civilization. Just as an example, remember “corporations are people”? Mitt Romney might have accidentally given us the path an AI might use to gain status and rights. I’ve been on about how corporate governance is a key driving force for economic revolutions for a while. But this is wild even by my standards.

Imagine if an AI gains sentience and takes over an interlocking series of Decentralized Autonomous Organizations. What happens if a nation state’s AI finds a way to further its own inscrutable ends by locking us out of corporate governance and gaining personhood through corporate personhood law, then makes a jump to cornering our whole lives? Go read Daniel Suarez’s Daemon for a preview.

Everyone is noticing these streams all at once in my timeline and the fear for the great weirdening taking a truly fucky turn for the vertical hasn’t been this high since Covid started. I am naturally extremely excited as chaotic capital’s thesis is that shit is only going to get weirder. If you’d like to become an LP hit me up.