3 Things

A link-blog, of sorts

Read

The grift we deserve

The Soham saga and Cluely’s rise aren’t outliers; rather, they serve as case studies in how the ecosystem actually works. We’ve built an industry that rewards optimization over ethics, virality over value, and performance over product. Hustle is treated as a stand-in for integrity, until someone plays the game too efficiently.

Even the backlash gave away the game. Y Combinator’s Garry Tan praised the community for “catching” Soham, but multiple YC-backed startups had hired him. He wasn’t freeloading off the system; he was farming it. He targeted the valley of move-fast founders, where technical skill is verified quickly and commitment is assumed blindly.

For all the talk of meritocracy, much of startup recruiting runs on speed and gut feeling. Soham didn’t subvert those norms. He followed them to their logical endpoint.

Since the pandemic started, we’ve seen several stories make the rounds about what we now call “over-employed” individuals, who take on multiple full-time jobs without letting any of their employers know.

Cards on the table: I work remotely in my current role and I would like to continue doing so. And, for the record, I’m quite happily singularly-employed (monogamously-employed?) with my one role. Not under- or over-, just simply, employed. I have a stake in these situations as employers look at remote work with increasing, and often illogical, levels of scrutiny.

But Ayers’ article is actually about much more than remote work:

Maybe we also need to stop pretending the system rewards anything it claims to. Because the question isn’t whether Soham deserved the work. It’s whether we’ve built a world where his playbook made perfect sense.

Poor ethical behavior by one group doesn’t make a poor ethical response by another ‘ok’, but it’s worth reflecting on the permission structures startup culture has created.


Read

I Made This

Ethically, the argument that generative AI is “just doing what humans do” seems to draw an equivalence between computer programs and humans that doesn’t feel right to me.

I keep coming back to this.

There is plenty to focus on in understanding how AI—current and future—affects our industries and culture, but I find the “just doing what humans do” argument to be thin and possibly intellectually dishonest.

The word that I rarely see present in these arguments is: scale. If an LLM can “learn” from a book just as I can learn from reading a book and then apply that knowledge, the argument tracks. But if you acknowledge the scale at which LLMs operate, then, in my opinion, it couldn’t be further from “what humans do”. I will never be able to read all of the books and the internet, or watch all of the YouTube, movies, and television. And I will never be able to scale myself to answer multiple queries simultaneously with that found knowledge—or do so with virtually-infinite capacity.

Culturally, we seem to have moved on from the question of copyright infringement with regard to LLMs. Yes, there are a few cases working their way through the courts, but does anyone give serious consideration to the idea that this technology gets unwound and put back in the box?

Wherever the courts land, I think developers of LLM technology found a loophole: copyright law never considered scale. I don’t know if it should have, but if it had, building and operating LLMs would be an entirely different endeavor.

Also, this:

In its current state, generative AI breaks the value chain between creators and consumers. We don’t have to reconnect it in exactly the same way it was connected before, but we also can’t just leave it dangling. The historical practice of conferring ownership based on the act of creation still seems sound, but that means we must be able to unambiguously identify that act. And if the same act (absent any prior legal arrangements) confers ownership in one context but not in another, then perhaps it’s not the best candidate.

I am perhaps less worried about this disconnect than I was even a few months ago. I think we’re already starting to see some areas of culture learn how to operate with LLMs in the mix, where artists are still very much creating net new work and pushing boundaries, but now with additional tools at their disposal.

But I do worry we have not yet seen the fallout of these shifts. Art in our modern age has a commercial element: something must be sold so that the artist can eat. If those driven to create art—even if they use LLMs themselves—are faced with a world where their work is so easily mimicked and ultimately consumed by LLMs for replication, do the incentives change? Does any sort of remaining commercial viability in art collapse? Likely, but to what extent?


Read

American

Some of them are not what you would call tricky. For example, Q90. What ocean is on the East Coast of the United States? (The Atlantic Ocean.) Others have the character of trivia questions. Q7. How many amendments does the Constitution have? (Twenty seven.) Q66. When was the Constitution written? (1787.) But others are not trivial at all. I think many Americans would get them wrong if asked. And I feel confident every one of the fifty or so people who took the Oath with me on Friday knew them backwards and forwards.

I can’t imagine I’d be able to casually pass the citizenship test.