Categories
Internet Culture Preparedness

Day 1868 and Educating An Entire Species or Start With Your Family

A viral essay was posted a few days ago by Matt Schumer, meant to help introduce the current state of artificial intelligence tools to people who do not work in technology.

It’s a very compelling piece of writing (or maybe it’s just compelling reading), which I believe will be well received by normal people, especially older family members or technology skeptics. They are often the hardest to reach because of age and experience gaps, and a smooth essay goes down well.

The author is the founder of HyperWrite. His company offers a suite of AI writing and research tools. So yes, his excellent writing and wide reach (over 40 million views so far) were achieved thanks to his fluent use of AI for both writing and promotion.

The end result of using the tools is an excellent essay distributed far and wide. Or, if you prefer, the end product was a tool-shaped object which gave people a sense of understanding. That’s valuable.

Don’t let his use of AI in writing and publishing the piece stop you from taking his points seriously. In fact, it should encourage you to read it and consider whether you want to share it.

You too will soon be competing in a world where regular people like Matt are capable of superhuman feats. Perhaps you’d like the same leverage for yourself and your family.

All of us can, with practice and usage, learn to work with the amplifying effects of networks and artificial intelligence algorithms, giving us global reach and maximizing the potential of our insights and points of view. That should make us feel better about where we are headed, not worse.

I feel it is useful to share the essay with your skeptical family and friends who are scared, confused, angry, or indifferent about the rapid changes, because it is the current reality we all live in.

I know it’s hard as a middle-aged professional to learn new tricks. I’m in the middle of it too. But we have to educate everyone, and it’s going to take some time. I’d rather we get started on it. And on that note, my lunch break from Montana’s digital innovation committee is only an hour, so I’ll get back to it.

Categories
Internet Culture Startups

Day 1805 and Dark Leisure, Time Violence & Outputting Value

Any other software developers out there remember the mythical man-month? It comes from Fred Brooks’s classic book The Mythical Man-Month, which argues that adding more people to a late software project often makes it even later. This is also known as Brooks’s Law.

The man-month is “mythical” because tasks are not perfectly partitionable and require significant communication, shared context, and integration.
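Brooks’s point can be sketched in a toy model: pairwise communication channels grow quadratically with head count, so past some team size the coordination cost swamps the gain from dividing the work. The overhead parameter below is an illustrative assumption of mine, not a figure from the book.

```python
# Toy sketch of Brooks's Law. The n(n-1)/2 channel count is from the
# book; the per-channel overhead cost is an assumed illustrative number.

def communication_paths(n: int) -> int:
    """Pairwise communication channels among n people: n(n-1)/2."""
    return n * (n - 1) // 2

def schedule_months(total_person_months: float, n: int,
                    overhead_per_path: float = 0.15) -> float:
    """Naive schedule estimate: ideal partitioned time plus a fixed
    coordination cost per communication channel (toy assumption)."""
    ideal = total_person_months / n
    return ideal + overhead_per_path * communication_paths(n)

if __name__ == "__main__":
    # For a notional 100 person-month project, the estimate improves
    # with a few more people, then gets worse as channels multiply.
    for n in (2, 5, 10, 20):
        print(n, communication_paths(n), schedule_months(100, n))
```

Under these toy numbers, going from ten people to twenty actually lengthens the schedule, which is the whole joke of the “mythical” man-month.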

I think in the age of artificial intelligence we need to revisit this classic insight about complexity, because we now understand even less about how the time put into work drives its notional value.

Measuring productivity in hours is a relic of a past labor era. And most workers have little incentive to improve their output when they aren’t paid for the improvement.

If we had quiet quitting during the pandemic, where jobs could be done in minimal ways without getting fired, then in this new artificial intelligence rollout we are seeing another type of value-capture mismatch between labor and firm.

We are seeing what Fabian Steltzer calls “dark leisure.” Others call it shadow user innovation.

Innovation that happens through employees adopting new technologies is opaque to management: it doesn’t get counted, and workers are reluctant to be transparent about it.

the reason ppl hide their AI use isn’t that they’re being shamed, it’s that the time-based labor compensation model does not provide economic incentives to pass on productivity gains to the wider org

so productivity gains instead get transformed to “dark leisure”

Fabian Steltzer

Anthropic released a study on the supposed stigma attached to using artificial intelligence at work. Humans are already reacting to artificial intelligence as if it were an existential threat.

Except it’s been generally existentially freeing up to this point. Anyone who has used a commercial large language model for a healthcare question can attest to that. So why are we hiding its use?

Even coders are doing it. And who can blame them? It’s a lot less fun for some folks to coordinate a swarm of agents than it is to write code for a living. If you wanted to be a product manager, well, you’d already be one.

The boss makes a dollar and I make a dime so that’s why I prompt on the company dime!

We are watching the early artificial intelligence takeoff collide with industrial-era systems of management that are no longer relevant in an age of increasing complexity.

We’re putting intelligence into systems designed to measure hours and are surprised when there is a misalignment. A Twitter mutual has a theory of consciousness and systems that they believe makes this a form of time violence.

Human beings can tolerate NP hard moments of complexity, but cannot survive continuous low-grade complexity

The gap between human adaptability and systemic inertia is now wide enough to generate an entirely new form of harm: time violence

Idea Nexus Ventures

We just cannot keep up with the variety and types of complexity that are arising, so any advantage that can be used is being used. And you’d want to hide that advantage as long as you can; sharing it has no rational basis. I find that disappointing.

I’d rather we not vice signal about artificial intelligence, as it only harms us. The value capture won’t always match up, but the gains to be made are worth having, so keep using it where it works for you.