Highlights and Threads for May 2, 2026 — Agency, Good Faith, and Bad Faith

Photo by Javier Allegue Barros / Unsplash

This is a first: a round-up of what I've been reading and watching this week. Since it's the first one, I'm stretching back two weeks.

I'll be honest. I intend to prefix these highlights and threads with some wit and observation. But I spent the past two hours coding the system that grabs my notes and observations from Readwise, and it's time to go outside and mow the yard. Still, this was a fun Saturday morning project, one I've been itching to complete for a while. I LOVE Readwise; it solves a particular need I've had for ages (I tried to get Notion to deliver on it, but it kept breaking).
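
If you're curious what "grab my notes from Readwise" looks like in practice, here's a minimal sketch of the pull step. To be clear, this isn't my actual script: it assumes Readwise's v2 export endpoint and token auth as described in their public API docs, and the READWISE_TOKEN variable and output filename are placeholders.

```python
# Minimal sketch: pull all highlights from the Readwise export API.
# Assumes the v2 export endpoint and token auth from Readwise's public docs;
# READWISE_TOKEN and the output filename are placeholders, not my real setup.
import json
import os

import requests

API_URL = "https://readwise.io/api/v2/export/"
TOKEN = os.environ["READWISE_TOKEN"]  # your Readwise access token


def fetch_all_highlights():
    """Page through the export endpoint and collect every source with its highlights."""
    results = []
    cursor = None
    while True:
        params = {"pageCursor": cursor} if cursor else {}
        resp = requests.get(
            API_URL,
            headers={"Authorization": f"Token {TOKEN}"},
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        results.extend(data.get("results", []))
        cursor = data.get("nextPageCursor")
        if not cursor:
            return results


if __name__ == "__main__":
    books = fetch_all_highlights()
    with open("readwise_export.json", "w") as f:
        json.dump(books, f, indent=2)
    print(f"Exported highlights from {len(books)} sources.")
```

From there it's mostly formatting: grouping highlights by source and turning my annotations into the digest below.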

Next week will be better.


Weekly Video Digest

Not a lot of video watching this week. But I did get my face metaphorically punched in by Yung Lean.

Also did some memory tripping with LCD Soundsystem, via this great performance.


Weekly Reading Digest — May 2, 2026

Themes & Synthesis

The Agency Problem

The most insistent thread running through this week's reading is the question of whether people retain meaningful agency in a world being restructured by technology and capital. My own annotation on the surveillance pricing piece puts it most directly: "Do not create and live in a world where we don't have agency. That is not good."

This isn't just a personal reminder — it connects to Nilay Patel's "software brain" argument (the database is being made to fit reality, and when it doesn't, they change reality instead of the database), to the New Yorker's AI-in-schools piece (children offloading cognition before they've built it), and to Sutton's observation that the cruelest thing about bad institutional decisions is making people feel helpless. Agency — or its erosion — is the common wound across technology, education, and organizational life right now.

Cognitive Offloading as the Real Stakes of the AI Debate

The AI-in-schools piece makes the sharpest argument: LLMs encourage cognitive offloading before kids have done cognitive onloading. But this concern isn't limited to children. Patel's piece warns the same thing is happening to businesses — conforming their operations to AI's database logic rather than demanding AI serve them. The MIT "cognitive atrophy" finding echoes in the adult world too.

The deeper question isn't whether AI is useful (it clearly is for already-structured business loops), but whether habitual reliance on it degrades the very capacities — curiosity, inefficient exploration, unexpected connection — that produce good thinking. The annotation that sticks: "Learning isn't really meant to be efficient. That's not the goal. Inefficiencies create learning."

The Infrastructure Layer Grab

Salesforce's Headless 360 launch and the discussion of agentic AI architecture reveal something important: the real competition right now isn't for users, it's for infrastructure position. Salesforce is trying to become the invisible plumbing that AI agents run on — separating context, work, agency, and engagement into discrete programmable layers.

Meanwhile, the "Why Are We Still Doing This?" piece and the AI bubble article sit in productive tension with this — massive capex, unclear unit economics, but genuine enterprise adoption accelerating anyway. The infrastructure grab is happening whether the bubble pops or not.

Institutional Bad Faith as the Default Setting

Two distinct threads — tech regulation and organizational layoffs — converge on the same diagnosis: institutions frequently know what the right thing to do is, and choose not to do it. Maryland's surveillance pricing law has deliberate loopholes. Cory Doctorow's point isn't that regulators are slow — it's that they're often captured.

Ben Horowitz and Sutton's HBR piece both argue that the right way to do layoffs is well-understood (transparency, speed, compassion, honest accounting of company failure) and yet companies routinely fail to do it. The darkest version: some leaders use cuts performatively, to feel powerful. The Malus.sh story fits here too — someone building a tool specifically to strip legal obligations from open-source work. The playbook for bad faith is often just as visible as the playbook for good faith.

The Loneliness / Helplessness Nexus

Patel's warning that nihilism and helplessness produce social violence, the New Yorker's concern that AI warps how children forge selfhood and relationships, lonely Chinese youth in crowded cities, and Guy Gavriel Kay's quiet line about the danger of loving something too much — these don't fit neatly together, but they rhyme.

Something is happening to the social fabric of people living inside systems optimized for throughput. Kay's character holds back from love as protection against loss. Young Chinese people are surrounded by people and feel no one. Children get a chatbot if they want to talk to "no one." These are not the same thing, but they are pointing at the same hollow.


Sources

Pluralistic: How Not to Ban Surveillance Pricing
Doctorow argues against the lazy narrative that tech regulation fails because government moves slowly — the more common truth is that obvious solutions exist, warnings are issued, and lawmakers ignore them anyway. Maryland's surveillance pricing law is the case study: nominally a response to algorithmic price discrimination, it's riddled with loopholes that leave most of the problem untouched. A useful corrective to technological fatalism.


The (other) problem with automatic conversion of free software to proprietary software
Malus.sh sells a service that uses LLMs to refactor open-source code into "clean room" versions stripped of license obligations — essentially laundering software freedom. The twist Doctorow identifies is that the AI-generated output can't actually be owned (it's public domain), which means the attack partially backfires and may even expand the commons unintentionally. A sharp example of bad-faith technical maneuvering producing unintended consequences.


Written on the Dark — Guy Gavriel Kay
A single highlight from Kay's fiction: a character explains why he holds back from loving things too much — because losing what you love too deeply can destroy you. A small moment that carries real weight as a counterpoint to the week's other themes of helplessness and loss.


BEWARE SOFTWARE BRAIN
Nilay Patel defines "software brain" as seeing the world as a set of databases to be controlled through code — and argues this worldview is driving both AI hype and public backlash against it. The key insight: real databases always stop matching reality at some point; the software brain response is to reshape reality to fit the database rather than the reverse. Politicians and tech executives who make people feel helpless — especially while announcing AI will eliminate jobs — are generating real social harm.


What Will It Take to Get A.I. Out of Schools?
Jessica Winter's New Yorker piece examines the rapid, largely unconsented rollout of AI tools in K–12 schools, focusing on cognitive and developmental risks. The central concern: LLMs promote cognitive offloading before children have built cognitive capacity — and optimizing for efficient, impressive outputs confuses means with ends in education. Parents, educators, and researchers are raising alarms that are being systematically ignored in favor of industry-driven deployment.


Salesforce Launches Headless 360 to Turn Its Entire Platform Into Infrastructure for AI Agents
Salesforce is decomposing its platform into four programmable layers — context, work, agency, and engagement — and opening all of them via APIs so AI agents can operate without any graphical interface. The strategic bet: enterprise platforms survive the agentic era not by competing with AI, but by becoming the infrastructure AI runs on.

Zach Vander Veen


Zach Vander Veen is the cofounder and Chief Innovation Officer at Abre Inc., an education data platform. He has worn many hats in education and as an entrepreneur. He loves learning, teaching, traveling, and wandering with his family.
Cincinnati