AI and The Eternal Twilight of Code Freeze

This was mostly written as a result of this blog post: https://fly.io/blog/youre-all-nuts/

Some Initial Preamble 

I like technology, and I mostly like technology because of the impact it has on humans. I believe the purpose of a system is what it does, and the drive to understand the impact of technology on people, companies, and society as it changes and grows has been the root of most of the things I've enjoyed working on most in my career.

I don't write much code these days; most of the work I get paid for is getting people to build the abstractions that drive what code gets written, to make sure that amorphous vision thing becomes something resembling "strategy", which is then further decomposed into things that people actually do.

Caveat: there are a lot of words I could write on the broader scope of AI and society, but this is mostly about AI for software development, and somewhat about knowledge-centric work more broadly.

Where I'm at on AI 

The first thing is that "AI" is like saying "tech" in the late 90s or early 2000s: the term is constantly retconned to describe the capabilities as they are now, versus what they were when you last used them, regardless of whether you actually have that capability available. We'll touch on this sort of "immaculate arrival" again later.

The second thing is that the capabilities roughly lumped under "AI" are in many cases genuinely useful. What I have discovered is that they're most useful when you can hold them in a way that respects the sharp edges of the system, the same as with any other system. Lead may make your wine taste sweet, but it also has downstream consequences.

What we appear to be seeing now is a shift where AI can credibly generate useful code at a rate that accelerates a developer's operations. In the context of code, used as Thomas describes in his blog post, it seems to be something on the order of an order-of-magnitude efficiency improvement, if you have optimized your skill set to review code instead of writing it.

I'm mostly concerned about the broader implications of AI usage, in the same way that I'm not actually concerned about the specific code an individual person writes: I'm most interested in the outcomes of that code and how those are expressed against the strategy of a business. The best code isn't "elegant"; it's easily parseable by any audience, regardless of context. I want straightforward, simple tools that are obvious and powerful in how they are applied.

The end state I see from increased AI adoption is that companies accelerate towards a state I tend to call "The Eternal Twilight of Code Freeze". We're in the early stages of that right now as an industry, but AI feels like the mechanism by which this problem calcifies massively.

The Eternal Twilight of Code Freeze 

The Eternal Twilight of Code Freeze occurs when the conflicting or missing assumptions across systems reach the tipping point into operational stasis, because no one can implement a change without catastrophic, unintended consequences clearly associated with it.

One of the more common patterns here is the duplication of similar service functionality within a company, without the benefit of the implementation lessons learned that are unique to that company.

This problem is very abstract until the consequences become real, and they usually become real in a way that is extremely high risk to solve. 

A good filter to apply to this problem space is the question of a "source of truth". Can you reasonably abstract storage of a critical piece of information to one responsible person, technical stack, system, or service? Is the abstraction you've chosen as the source of truth actually useful when your teams make technical decisions?

An example of this would be if you have a single database that tracks location and location-related assumptions. You might store a billing address, a customer location, and an IP address, each related to a user account or perhaps a session. Can you reason about how you should use each piece of information? Does the reasoning your technical teams do about a piece of location information scale appropriately with how much you've invested in the technical infrastructure that stores it? Can you reconcile conflicts in what each piece of information implies? Did you pull that information from a trustworthy source? Who controls it?
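A minimal sketch of what that question looks like in practice, assuming some hypothetical stores and a made-up precedence rule (none of the field names, team assignments, or the ordering here come from a real system): three different places each imply "where the user is", and someone has to decide which one wins and why.

```python
from dataclasses import dataclass
from typing import Optional

# Three hypothetical stores that each imply "where the user is".
# In practice each is owned by a different team, even if no one says so.

@dataclass
class BillingRecord:
    user_id: str
    billing_country: str             # maintained by the payments team

@dataclass
class ProfileRecord:
    user_id: str
    declared_country: Optional[str]  # maintained by the accounts team, user-editable

@dataclass
class SessionRecord:
    user_id: str
    ip_country: Optional[str]        # derived from GeoIP by the edge/infra team

def effective_country(billing: BillingRecord,
                      profile: ProfileRecord,
                      session: SessionRecord) -> str:
    """One possible precedence rule: billing > declared > IP.

    The point is not that this ordering is correct; the point is that some
    team has to own this decision, and every consumer of "country" depends
    on it whether they know it or not.
    """
    if billing.billing_country:
        return billing.billing_country
    if profile.declared_country:
        return profile.declared_country
    return session.ip_country or "UNKNOWN"
```

If that precedence rule lives in five slightly different forms in five services, you no longer have a source of truth; you have five opinions.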

This becomes impactful to actual service delivery because a statement like "I would like to update the user's address" goes from having a direct owner responsible for reconciling the service dependencies of a single data field, to a complex and undefined ownership question scattered across separate teams who may not understand that they have become de facto owners of a piece of technical infrastructure that supports the concept of an address. Do you have a method to reason about how changes to that information echo through your product? Does anyone at your company have a reason to care about that problem space?
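To make that fan-out concrete, here is a hedged sketch (the handler names and the registry pattern are hypothetical, chosen for illustration rather than as a recommendation): every consumer of "address" has to register itself somewhere, and anything that never registers just silently keeps the old value.

```python
from typing import Callable, Dict, List

# Hypothetical: each team registers a handler that must run when an address
# changes. The registry makes the hidden fan-out explicit, which is exactly
# the bookkeeping that disappears when ownership is undefined.

AddressHandler = Callable[[str, Dict[str, str]], None]
_address_consumers: List[AddressHandler] = []

def consumes_address(handler: AddressHandler) -> AddressHandler:
    _address_consumers.append(handler)
    return handler

@consumes_address
def update_billing(user_id: str, address: Dict[str, str]) -> None:
    print(f"billing: re-validating payment method for {user_id}")

@consumes_address
def update_tax_jurisdiction(user_id: str, address: Dict[str, str]) -> None:
    print(f"tax: recomputing jurisdiction for {user_id}")

def update_user_address(user_id: str, address: Dict[str, str]) -> None:
    # If a consumer never registers itself, nothing here fails --
    # it just quietly keeps serving the old address.
    for handler in _address_consumers:
        handler(user_id, address)

update_user_address("user-42", {"country": "DE", "city": "Berlin"})
```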

This occurs pretty organically in technical systems, and usually ends up falling under the category of "technical debt", due to the delta between requirements, both explicit and implicit, and implementation.

How AI intersects with this 

The core risk I see with AI in the current mode of "agentic implementation" may not apply to Thomas, who I believe undersells his own skills: he has the critical evaluation skills, built from a career of coding work, to spend the majority of his time on architectural evaluation, which appears to be the part of the work he enjoys - making the pieces fit together.

But that skill set of critically evaluating "how the pieces fit together" exists primarily in a space where he can conceptualize himself as the primary owner of the totality of the project. If you have 30 or 300 developers at his level doing that work, they can spend half of that time debating which abstraction is most appropriate, and you can probably build something pretty effective.

But if you don't have a team made up entirely of senior employees, what you now have is no coherent architectural design control, at a massive rate of change.

Critically, you don't fix this by adding code. You fix it by refactoring code, deciding on logical compromises between technical reality and product design, and collapsing context where needed, in order to make sure the technical reality and the product functionality don't drift so far apart that the assumptions your product operates on live entirely in the heads of your users, with no technical structure behind them.

At some point, this becomes a pure scaling-math problem. Let's take the AI-optimist view and say it's a 10x increase in LoC output, setting aside that no one who takes software development seriously thinks LoC is a meaningful productivity metric. Where is the corresponding architectural evaluation process that assesses that 10x increase in volume and makes sure the system you have built is actually achieving product goals?
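As a hedged back-of-envelope sketch, with every number assumed purely for illustration, the gap looks something like this:

```python
# Back-of-envelope sketch of the review bottleneck. Every number here is an
# assumption chosen for illustration, not a measurement.

generated_loc_per_dev_per_week = 500          # assumed pre-AI baseline
ai_generation_multiplier = 10                 # the optimist's 10x claim
review_capacity_loc_per_dev_per_week = 2_000  # assumed careful-review throughput

generated = generated_loc_per_dev_per_week * ai_generation_multiplier
reviewed = review_capacity_loc_per_dev_per_week
unreviewed = max(0, generated - reviewed)

print(f"generated: {generated} LoC/week, carefully reviewed: {reviewed} LoC/week")
print(f"unreviewed or rubber-stamped: {unreviewed} LoC/week")
# With these assumptions, 60% of each week's new code gets architectural
# evaluation in name only, and that gap compounds week over week.
```

Swap in whatever numbers you like; unless review capacity scales with generation, the unreviewed remainder is where the Eternal Twilight accumulates.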

Furthermore, what happens when you do need to develop novel functionality, or when you need to revise your architecture? How do you tell the system that you need to refactor to take into account that the product has pivoted? Surely senior technical leadership intends that the architectural decisions that they make are meaningful, but how do you make those changes meaningful when the semantic weight of the codebase pushes all changes towards your historical patterns?

For people who do not yet have that experience, the acceleration of AI development means there is basically zero time for that feedback loop to occur and for knowledge to build. We already have a fairly major crisis of no one wanting to hire junior people, and this is going to make it significantly worse. We see the reality of this now in the education system, where students get 100% on the homework and fail the tests because they don't understand the concepts beyond plugging the text into the LLM.

Concerns about the Future 

I think there is a world where we could use generative capabilities to rapidly accelerate operational work and counterbalance that with design work that improves systems, but I also don't think most executive leadership understands how dramatically different their approach needs to be to make that happen.

Instead, what I predict will happen is that some high-skill developers will rapidly accelerate their code output, and then either get crushed under the weight of their self-generated tech debt as companies continue to lay off employees, or become accountable for the meta-process overhead of all the ownership that used to be distributed among the people who were laid off, at a 10x pace of development, and eventually crumble under that overhead.

Those negative externalities will get pushed onto customers, who may or may not be able to do anything about it. There is a chance for exceptional companies to avoid this, by being much more cautious about adopting AI systems and by recognizing that many companies are going to end up unable to execute on basic tasks, as AI-accelerated development pushes them into the Eternal Twilight of Code Freeze that much faster.

The problem has never been velocity; the problem has always been the direction you're going. You can spend a lot of effort jumping up and down rather than moving forward, and if you sample at the right (or wrong) times you'll see a lot of movement, but your location doesn't actually change.

For those who can hold the complete context of the system they're developing in their head, the new generative AI capabilities are amazing. For everyone else, I think they're going to make business technical infrastructure so unimaginably complex that it may actually render companies unprofitable and non-functional. However, if everything backslides at roughly the same rate, the levels of regulatory capture that exist mean consumers of those services have no choice but to tolerate the degraded quality. It's gonna mean people who have the ability to effectively introspect these systems will probably have work forever, I guess, but also, good luck evaluating for that skill versus finding convincing charlatans.

So we're back to the immaculate arrival: don't look too closely, these are all problems for the future. Right now, the AI future is here, and if it's not here now, it'll be here tomorrow! Just gotta keep believing!