Bob Nystrom

@munificent@mastodon.social

Let's say that AI really is a thing and before too long, every white collar employee uses it every day to be productive enough to keep their job.

It's going to be a real bumpy road at that point when the AI companies start enshittifying and cranking up the token cost and all of a sudden you have to pay more to do your job than you get paid in return.

The whole economic arc of ride shares is going to be repeated for information jobs.

July 24, 2025 at 2:22:09 PM

Unless it becomes practical to self-host, be it directly on your workstation or a team-wide server?

@zoul @munificent Self-hosting solves part of the problem, but where do you get the models, from whom, and are they free?

@higgins @munificent Hopefully we will see models like this and they will turn out to be good: theaiinsider.tech/2025/07/15/s. Who knows, eventually we may even get models that were trained without breaking the copyright 😬

it’s absolutely wild to me that this arc is such an obvious ploy to rent back people’s ability to do their job, yet folks still jump in enthusiastically.

I assume that it's the employers they'd be soaking rather than the individual employees. That seems to be the way it is now. But if you're a consultant or otherwise self-employed...

I don't think this is going to be an issue.

As far as we know, AI companies could be profitable (or at least close to profitable) with their current cost structure, as long as they didn't spend all the money on training better models and building new datacenters.

They can't do that right now because the competition is extremely cutthroat, and if your model isn't the best in its weight class for at least some use cases, you have no reason to exist. However, if it were obvious that there's no longer a path to further LLM scaling, I think many companies could shift to profitability pretty quickly.

This is true about most companies, but especially true about Google. If everybody else dies, Google, with its excellent (and much cheaper) infrastructure, is definitely going to survive.

I can't cite sources here, as nobody releases numbers like that publicly, but all the credible people in this industry are saying roughly the same thing. From the numbers we do have, it seems the OSS world is simply underinvesting in model inference optimizations, which is likely where those pessimistic cost predictions come from. According to an actual vLLM dev (vLLM being among the fastest open-source LLM inference frameworks available), DeepSeek's stack is still about an order of magnitude faster in some cases[1], and I think other labs are likely to have similar optimizations.

With that said, I think a much more likely scenario is Chinese companies releasing SOTA open source models and pricing everybody else out of the market. Many users don't care about CCP propaganda or western values, they just want a good model that solves their use case and does well on their specific evals. The users who care are a much smaller pie to fight over, and potentially one that doesn't justify the costs that frontier model training demands.

I can definitely see a world where western companies can't justify the cost of further model training and implode, while Chinese labs press ahead at full steam. That would make Chinese models with Chinese values the only ones worth using, and that's going to be a scary world to live in. In some ways we're further from that scenario than I expected: we're passing less boneheaded, protectionist copyright legislation than I feared. But in others we're much closer than I anticipated, because the Chinese models are just really damn good.

[1] news.ycombinator.com/item?id=4

While there are some barriers to entry, I still think it's too easy to compete for that to happen. There are multiple competitors already, including new entrants from China.
