The New Tax on Engineering Against the Grain

May 21, 2023
What was previously Google's biggest strength might prevent it from innovating in the future — Diseconomies of Scale at Google

With AI-assisted code writing on the horizon, bespoke architecture becomes even more costly. The idea is simple: best-of-breed engineering teams (like those at Google) built bespoke technology stacks years ahead of the industry. As time passed, open source caught up, albeit with an incompatible, if only slightly different, API.

But now, it’s not just open-source that’s catching up. Models are trained on publicly available data: open-source libraries, common application patterns, and public cloud infrastructure. As a result, these AI models will best assist developers in writing code — especially if those developers are working on well-trodden ground.
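To make the mismatch concrete, here's a minimal sketch. The `InternalSerde` class and its `schema` parameter are entirely hypothetical stand-ins for a bespoke internal library; the point is that a model trained on public code has seen millions of `json.loads` calls and zero of the internal API.

```python
import json

# The well-trodden path: a model trained on public code
# completes this pattern effortlessly.
def parse_config_public(raw: str) -> dict:
    return json.loads(raw)

# A hypothetical bespoke equivalent, stubbed out here. In a real
# company stack this might validate against a proprietary schema;
# the stub just delegates to the standard library.
class InternalSerde:
    @staticmethod
    def loads(raw: str, schema: str = "config") -> dict:
        return json.loads(raw)

def parse_config_bespoke(raw: str) -> dict:
    # The assistant has never seen InternalSerde in its training
    # data, so completions here are guesses at best.
    return InternalSerde.loads(raw, schema="config")
```

Both functions do the same thing, but only one of them lives on ground the model has been trained on.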

Current AI models can indeed generalize well outside of cases they’ve seen. Still, many companies must maintain fine-tuned models specifically trained on their proprietary data, APIs, and patterns. Meta is already doing this. Unfortunately, every time you engineer against the grain, you give the model another chance to struggle with generating autocompletions, reviewing code, or testing pull requests (if your company even uses git).

On the other hand, it’s possible that these models, once fine-tuned, could provide the developer-velocity boost a company needs to maintain its bespoke stacks. Or a model fine-tuned on a specific company’s code could produce code of much higher than average quality (or much lower!).

Either way, there’s a new tax on engineering against the grain — accept worse completions, fine-tune, or change the stack.