Categories
Internet Culture Politics

Day 1369 and California SB-1047 Vetoed

Last night I received a push alert and then a flurry of excited text messages and phone calls. California Governor Gavin Newsom vetoed the controversial SB-1047 artificial intelligence bill.

Gavin Newsom vetoes California’s contentious AI “safety” bill SB-1047

Twitter lit up with joyful streams of relief and praise for this decision, from politicians, economists, researchers, academic luminaries, and open source collectives to founders and venture capitalists.

It was a bad bill that lacked the necessary clarity and focus to even begin the task of regulating the nascent field of artificial intelligence.

We can and will do better in finding regulatory frameworks for safety and competitiveness, but this bill wasn’t it. It was especially concerning because, as they say, so goes California, so goes the world.

I have been banging on about #FreedomToCompute and math’s crucial role in our constitutional right to free speech in America. This must be considered in all future attempts at regulation.

Math and computing power are as essential as speech. In today’s world, they ARE speech. We may speak in natural language, but the way we extend ourselves, build things, and grow as a species is through our tools. Computation is a tool.

These tools are extensions of the human mind. Consider that the first computers were just regular humans counting. We may have started with our fingers and toes as our first tools. And it wasn’t quick progress; the evolution from the abacus to modern computing took us nearly 4,000 years.

We’ve made an astonishing amount of progress in the last hundred years. We’ve gone from thousands of computations per second in the 1940s to 200 quadrillion calculations per second with modern supercomputers.

Consumer devices are better too. The computer I’m using to write this post has more power than the computers we used to send man to the moon. It’s 100,000 times faster with seven million times more memory.
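For a rough sense of scale, here’s a back-of-the-envelope check in Python using the ballpark figures above (the numbers are illustrative, not precise benchmarks):

```python
# Back-of-the-envelope scale check using the ballpark figures above.
early_1940s_ops_per_sec = 1e3    # "thousands of computations per second"
modern_super_ops_per_sec = 2e17  # 200 quadrillion calculations per second

fold_improvement = modern_super_ops_per_sec / early_1940s_ops_per_sec
print(f"{fold_improvement:.0e}x")  # -> 2e+14x, fourteen orders of magnitude
```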

Alas, as tools get more powerful the powerful get nervous. This isn’t the first bad artificial intelligence bill we’ve seen. We have Europe to thank for that. And it likely won’t be the last.

But defeating SB-1047 was a rare moment of bipartisan cooperation, not only in California but across the world, as the entire compute space came together to make its voice heard. And Gavin Newsom listened.

We should celebrate this rare consensus as we look towards better policy in our future.

Categories
Internet Culture Startups

Day 1245 and AI: Tool Versus Master

A bad workman blames his tools

Idiom

As we rapidly accelerate the power of our computing tools, machine learning has blossomed into the most heated topic in government policy, business strategy, and popular culture, as artificial intelligence begins to affect everyday life.

The focus on harms, and in particular singularity doomerism, has (ironically) pulled focus from the implications of the seminal “Attention Is All You Need” paper.

Astonishing as it may seem at times, intelligence does not top out at “median human”, but can, and possibly will, go much further.

What a triumph this represents. We will all have access to tools that will enable our entire species to build. More intelligence applied to more problems means more solutions to very real human problems.

I am in Austin, Texas for Consensus 2024. I’ll be participating in a town hall to discuss how crypto’s mindset of open-source, decentralized computation might help us grapple with who builds, maintains, & owns AI tools.

Policymakers and Silicon Valley executives are both calling for regulation of artificial intelligence as fears grow over its potential harms. But others warn of entrenching a dominant, opaque, centralized Big Tech model and instead advocate for open-source code, decentralized computation and distributed data sourcing. Whom should policymakers listen to? What, if anything, can governments do to help this vital technology evolve in a pro-human way?

Thankfully, we aren’t starting from scratch in building a regulatory framework for artificial intelligence. Code has been treated as speech in Bernstein v. United States Department of State. It would seem like a reasonable precedent to consider algorithms speech as well.

And let us be clear, math and computing power are as essential as speech. In today’s world, they ARE speech. Humans may speak in natural language, but the way we extend ourselves, build things, and grow as a species is through our tools. Computation is a tool. To presume that these tools do harm is to make us bad workmen.

Now of course incumbent powers may try to keep the disruptive and democratizing power of these tools out of the hands of the populace “for their own safety,” but imagine if the First Amendment had been frozen in time at the printing press and didn’t protect the internet. We cannot accept permanently lowered standards of fundamental rights.

The 90s era fight for strong encryption enabled a flourishing of digital businesses from finance to e-commerce. We must insist that the freedom to innovate remains the default for U.S. digital policy.

Ultimately, I agree with R Street’s Adam Thierer. He says “fear based narratives that prompt calls for preemptive regulation of computational processes and treat AI innovations ‘as guilty until proven innocent’ are no way to make good policy.”

America does not have a Federal Computer Commission for computing or the internet but instead relies on the wide variety of laws, regulations, and agencies that existed long before digital technologies came along.

R Street Comments on NTIA

If we are to regulate sensibly, let us treat artificial intelligence as we would any other tool and let us do so within the existing framework of our constitutional rights and their interpretations within past precedents.

Categories
Culture

Day 1234 and Intelligence

I’m reasonably intelligent as far as humans go. I’m probably in the top quartile or so of reasoning, processing & other measures of cognition. Not being insecure about my intelligence, I feel perfectly comfortable admitting that I’m an idiot. I’m only human.

Humans just aren’t a terribly bright species. But we are a curious one. We’ve built tools that extend our capabilities significantly. And each new upgrade in our tools helps us achieve more with our meager intelligence.

We can quibble over whether intelligence is different from achievement, but analytical, creative, and practical capabilities are things you want to cultivate. We want to cultivate them in ourselves, and ideally we will want to cultivate them in the things we build.

Two men are on a bus on a mountainside. A sad, anxious-looking man stares at the rock face on the dark side of the bus with a thought bubble reading “the AI took my job,” while a smiling, happy man on the bright side overlooks scenic views with a thought bubble reading “the AI took my job.”

Sure, we as a species have fought these advances, but eventually the benefits of developing ways to pass on and improve intelligence outweigh other fears. Material progress is good.

If you are afraid of intelligence greater than your own, I realize I have no way to talk you out of that fear. I can argue impartially about the benefits that intelligence has brought us in the past, but humans are feeling animals, not reasoning animals. The best I can hope for is to coax you to consider the bright side of the bus. Imagine feeling awesome about an AI taking your job. Go ahead and see if your curiosity can consider it.

Categories
Community Internet Culture Politics

Day 973 and Reinforcement

I’ve spent a lot of time this summer thinking about who gets to decide the boundaries of society.

Automation of civic and cultural life has been happening at the speed of capitalism. It’s about to happen at the speed of artificial intelligence’s processing power.

At least during most of techno-capitalism, corporations and governments were still run by humans. You could blame an executive or elected official. What happens when more decisions are snapped into reality by a numerical boundary?

High frequency traders have found many an arbitrage they whittled into nothingness. Who will get whittled away when the machines decide how society should run?

We got a taste of the horrors of treating people like statistics instead of humans with the Biden-authored 1994 crime bill and its mandatory minimum sentencing. And here we are rushing to find new ways to nudge consensus back to hard lines and institutionalization.

I don’t know how we handle virtue in a world without grace. Alasdair MacIntyre’s After Virtue seems prescient. Forgiveness in the face of horrific reality has been the hallmark of humanity’s most enduring religions. But then again so has swift punishment and inscrutable cruelty. Humans are quite a species.

I am, like many others, concerned about reinforcement learning in machine learning and artificial intelligence. How and where we set the boundaries of the machines that determine the course of daily life has been a pressing question since the invention of the spreadsheet.

Marx certainly went on about alienation from our contributions to work. But division of labor keeps dividing. And algorithms seem to only increase the pace of the process.

Categories
Biohacking Medical Startups

Day 971 and Patients Rights With Artificial Intelligence

If you are working in artificial intelligence or medicine, I’d like to plead my case to you. I’d just like to pass along a note.

The current “responsible” safety stance is that we should not have AI agents dispense healthcare advice as if they had the knowledge of a doctor. I think this is safetyism, and it robs sick people of their own agency.

I have very complicated healthcare needs and have experienced the range of how human doctors fail. The failure case is almost always in the presumption that you will fall within a median result.

Now for most people this is obviously true. They are more likely to be the average case. And we should all be concerned that people without basic numeracy skills may misinterpret a risk. Whether it’s our collective responsibility to set limits to protect regular people is not a solved problem.

But what about the complex, informed patient who knows they are not average? The real outliers. Giving them access to more granular data lets them accelerate their own care.

Paternalism is a persistent issue in medicine: the assumption is that the doctor knows best, and the presumption that the patient is either stupid, lying, or hysterical is the norm. It’s also somewhat gendered in my experience.

I now regularly work with my doctors using an LLM precisely so we can avoid these failure cases where I am treated as an average statistic in a guessing game. I’m a patient, not a customer, after all. I decide my own best interest.

A strict regulatory framework constricts access without solving any of the wider issues of access to care for those outside of norms. Artificial intelligence has the capacity to save lives and improve quality of life for countless difficult patients. It’s a social good and probably a financial one too.

Categories
Culture Medical Politics

Day 948 and Assigning Value

What does assigning value mean to you? How do you begin to investigate what is valuable? If someone asked you to value “object X” do you know what tools you would use first to make a measurement?

If I tell you determining value is a cultural problem, you may investigate the problem of value through religious or philosophical frameworks. If I tell you value is an artistic problem, you may use taste in finding value.

If I tell you that assigning value is primarily a computing problem, you may search for weightings, databases and referents to determine value.

So what happens when determining value has to account for multiple or even contradictory frameworks? Which framework assigns the ultimate value? And how do we align them?

Congratulations, you’ve now become an artificial intelligence alignment researcher. I bet you thought that required a doctorate, but it doesn’t.

It’s not an entirely intractable problem. The Industrial Revolution found ways to align competing frameworks. We assigned labor value and made currencies to facilitate the exchange of different goods.

Markets can, and do, spring up for all kinds of previously impossible-to-value things. Capitalism has done its best to make cultural value fungible and legible to an agreed-upon value. Sure, artisans and artists complain that we conclude incorrect values regularly. But we don’t always agree on value.

Generally we’ve found that what can pay for itself survives and what can profit for others thrives.

Not all people are motivated by profit, but we all are motivated to survive. And so we contribute what we believe has value to each other and hope the frameworks of value that others have will align with ours. The balance between the two has held humanity together for some time.

But deciding on value isn’t the same thing as driving a profit, and we have to remember that truth. Between the gaps in the models of what we value is the epsilon of what cannot be calculated.

If you’d like to read a horror story on how assigning fungible value in a database can end up assigning a value to something we humans generally don’t consider interchangeable at all, then I’d go read this piece on how public hospice care’s incentives have been perverted by private equity profit motive.

I don’t always agree with the author of the piece Cory Doctorow. But I think he’s raising a powerful point on how we are assigning value when we overlay competing frameworks.

This is the true “AI Safety” risk. It’s not that a chatbot will become sentient and take over the world – it’s that the original artificial lifeform, the limited liability company, will use “AI” to accelerate its murderous shell-game until we can’t spot the trick

If you aren’t familiar with Doctorow, he’s a powerful voice in right to repair circles, a classical hacker opposed to corporate oligopoly, and a bit of an anarcho-syndicalist in his preferred solutions.

I like markets more than governments for most things. More of us can contribute to markets than can contribute to specialist bureaucracies.

But we have assigned value to end-of-life care inside the convoluted system of profit motives and medical ethics, and it’s not the value most of us place on life.

And that’s going to happen a lot more as we get further and further abstracted away from the existing models of value that govern our lives. So remain skeptical when someone tells you that they know what you value. How they assign value might be different than you.

Categories
Culture Medical Politics

Day 945 and Secrets and Safetyism

Keeping secrets used to be a lot easier. Noble philosopher kings with priestly knowledge kept that shit under lock and key so some uppity courtesan or eunuch didn’t get too clever.

Not that it was all that necessary. Nobody was accidentally misinterpreting the layers of mystical knowledge because illuminated manuscripts were expensive as fuck. And that was cheaper than the previous method which was memorizing oral histories. The expense of sharing information has acted as a control mechanism for centuries.

If you’ve got the money, you can store your sex toys and drugs in layered secret drawers behind a hidden bust of Socrates. But some asshole will post a primer online and your benzodiazepines and vibrator will be long gone.

The metaphor I’m working with on this silly desk is that humans love to hoard secrets. We’ve got a lot of incentives to keep knowledge locked away. Drugs and sex in my joke are mere proxies for ways we access altered states. Eve’s apple was a metaphor for forbidden knowledge, so I’m not reinventing the wheel here.

So where are we today on secrets? Well, I think we are trying desperately to put the genie back in the bottle.

We think we’ve got an open internet, but ten years ago Instagram stopped including the metadata tags that allowed Twitter to display rich content embedded directly in a Tweet. Now Twitter and Reddit are taking the same approach as Instagram did as data ownership becomes a hot issue.

Closed gardens are meant to keep thieves out and Eve in. And depending on who you are it’s likely you will experience the fall from grace of Eve and the persecution of the thief. God clearly knew something as his conclusion was that once you’ve tasted the bitter fruit there is no point in protecting paradise.

Every time there is more access to information we have the same debate. Fundamentally you either believe people should have access to information and how they apply it to their lives (side effects included) or you don’t.

I’m happy for you to argue the nuances of it. Want a recent example that looks complex and might actually be deadly simple?

The clown meme format asks if it’s a joke to confidently conclude that “LLMs should not be used to give medical advice.”

I know it’s tempting to side with the well credentialed researcher over the convicted felon when faced with a debate over access to medical advice. But I don’t think it’s as simple as all that.

From Gutenberg to the current crop of centralized large language models, it’s just more complexity and friction on the same old story: it is dangerous to let the savages have access to the priestly secrets. I for one remain on team Reformation. Rest in power, Aaron Swartz.

To quote myself in my own investor letter last month:

Most builders remain deeply skeptical of Noble Lies, “for your own good” safetyism, regulatory capture, oligopoly control, and centralized nation-state control as the most effective methodology of innovation for a dynamic pluralistic human future. We are having cultural and financial reformations at a frightening speed. It’s beyond future shock now.

So if I have a gun to my head (and that day may come) I’d like to have it on record that I don’t think secrets have any inherent nobility. They’re just a control mechanism. Keeping people safe sounds noble. But you’d be wise to consider how you’d feel if your life depended on having access to medical data. How would you feel if the paternalism of a noble lie kept you from it? It’s not great, Bob.

Categories
Internet Culture Reading Startups

Day 872 and Synthetic Selves

I’ve been writing in public, on and off, for my entire adult life. First it was goofy tween personal pages made for myself on hosted social media like Livejournal & Geocities.

My younger years were filled with sundry hosted publishers that taught you just enough HTML & JavaScript to be a foot soldier in the ISP and browser war, but never quite encouraged you to gain the more foundational tools to host yourself independent of their network effects. Closed gardens of that era gave you a small plot of digital land to tend in their giant kingdoms. I never felt like I could homestead outside of their cozy walls on my own domain.

Those plots of writing yielded fruit though. And while it feels as if I only saved a small fraction of my writing over the years, I have hundreds of thousands of words.

I do have an archive of my collegiate blog which later turned into one of the first professional fashion blogs and spawned my first startup. I’ve got 872 straight days of writing saved from this daily experiment. And while I mostly auto-delete my Tweets I’ve also downloaded the remaining archives.

Why am I mentioning my written records? Because making a synthetic version of your intellectual self that is trained through your writing is now a possibility. I’d been introduced to Andrew Huberman’s “ask me anything” chatbot that was made through Dexa.AI and I thought I’d like that but for my own writing. So any founder or LP can get a sense of who I am by asking questions at their leisure.

We’ve come so far that it is an almost quotidian project for developers if you can provide enough training data and it looks as if I may have enough. Just by tweeting my interest I was introduced to chatbase.co, ThreeSigma.ai, Authory (great way to consolidate your content) and the possibility of knocking out a langchain on Replit. Aren’t my Twitter friends cool?
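For the curious, the core of such a project is retrieval: index the archive, pull the passages most relevant to a question, and hand them to a language model as context. Here is a minimal sketch of just the retrieval step in Python, using TF-IDF from scikit-learn rather than any particular vendor’s embeddings; the archive folder and function names are hypothetical, not taken from any of the tools mentioned above.

```python
# Minimal retrieval sketch for a "synthetic self" chatbot.
# Assumes your writing archive is a folder of plain-text posts
# (the folder name and function names here are hypothetical).
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def load_archive(folder: str) -> list[str]:
    """Read every .txt post in the archive folder."""
    return [p.read_text(encoding="utf-8") for p in sorted(Path(folder).glob("*.txt"))]


def top_passages(question: str, posts: list[str], k: int = 3) -> list[str]:
    """Return the k posts most similar to the question."""
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(posts)   # one row per post
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    best = scores.argsort()[::-1][:k]              # indices of the top-k posts
    return [posts[i] for i in best]


if __name__ == "__main__":
    posts = load_archive("archive/")  # hypothetical export of daily posts
    for passage in top_passages("What do you look for in founders?", posts):
        print(passage[:200], "...")
```

A production version would likely swap TF-IDF for embedding vectors and feed the retrieved passages into the chat model’s prompt, but the shape of the pipeline stays the same.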

A big thank you to 2021 me and 2022 me, who wrote so damn much. Click those links for my “best of” round ups. Hopefully I’ll have a synthetic self soon so you will have the option of asking it instead of hyperlink rabbit holing down endless inference threads.

My buddy Sean and I landed on “Phenia” as this synth’s name. He’s tinkering already. My husband Alex is already wondering what the heck is going on. But I see a pattern emerging. Phenia as in apophenia. A synthetic self capable of pattern recognition towards an inward spiral of infinite synthetic selves? Not a bad choice for a name at all. We can figure out a chat bot in a bit.

Categories
Startups

Day 868 and Chunks

Amid all the panic about how artificial intelligence is rapidly replacing human work, we are hiding a dirty little secret. Humans are awful at breaking down goals into component parts. Anyone who has tried to use any type of project management software intuits this. Articulating clear, specific and manageable tasks is very hard.

Humans are inference driven, always integrating little bits of context and nuance. If your boss gives you a goal, the best path forward on it will depend on hundreds if not thousands of factors.

What’s your budget? What is your timeline? Who is on your team? Do you dislike someone? Want to impress another person? Is the goal to be fast? Is the goal to be good? Is the goal to be as good as possible as fast as possible with as little budget as possible? Trick question: quit your job if that last one is true.

Knowing what we want, knowing the best path to achieving it, and knowing how the pursuit of those goals affects your family, friends, neighbors, enemies and adversaries creates layers of decisions in a complex matrix of possibilities. This is easy for a machine to do if we’ve given it the right inputs, outputs, and parameters. Alignment isn’t all that easy.
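To make “inputs, outputs, and parameters” concrete, here’s a toy sketch of a goal decomposed into a dependency graph of tasks. The field names are illustrative, not any real project-management schema:

```python
# Toy sketch: one goal decomposed into tasks with explicit constraints.
# Field names are illustrative, not a real project-management schema.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    budget: float                  # dollars available
    deadline_days: int             # timeline constraint
    depends_on: list[str] = field(default_factory=list)


def runnable_now(tasks: list[Task], done: set[str]) -> list[Task]:
    """Tasks whose dependencies have all been completed."""
    return [t for t in tasks if all(d in done for d in t.depends_on)]


goal = [
    Task("draft landing page copy", budget=0, deadline_days=2),
    Task("design landing page", budget=500, deadline_days=5,
         depends_on=["draft landing page copy"]),
    Task("launch ad campaign", budget=2000, deadline_days=7,
         depends_on=["design landing page"]),
]

print([t.name for t in runnable_now(goal, done=set())])
# -> ['draft landing page copy']
```

Once a goal is structured this way, a machine can answer “what can I start right now?” trivially. The hard part, as the rest of this post argues, is getting honest desires and preferences into those fields in the first place.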

I personally don’t believe most of us know what we want. Beyond a Girardian imitation of what our current culture deems valuable, and thus worthy of admiration, genuine desire is hard to pin down. And that makes it hard to have goals. Goals are required to have specific tasks. Specific preferences on how a task gets achieved narrow it down even further.

If I could simplify down every detail and desire and nuance and preference set, and also align them with my wider goals, ambitions, and critical paths to achieving them, you know I’d have it all organized on some kanban board. And if the AI can extract that from me and turn all my goals into discrete assignments and task chunks, I’d happily go full Culture and let them run my entire life.