5 Jan 2025
2 min read

Refactoring with LLMs

2025 refactors learning

I was reflecting on an old blog post that discussed using large language models to act as an interface between an app and a human user. This lets even non-expert users interact with an application by asking it to perform certain tasks. The LLM interprets these prose requests and returns a list of actions for the app to execute, somewhat like driving a state machine. It lowers the barrier to entry, since users no longer need to know where the various options and controls in the app live.
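A minimal sketch of that idea is below. The action names, the `call_llm` helper and the prompt format are all hypothetical stand-ins; a real implementation would use whichever LLM API and action set the app actually exposes.

```python
import json

# Actions the app exposes to the LLM (illustrative only).
APP_ACTIONS = ["open_file", "set_font_size", "export_pdf"]

SYSTEM_PROMPT = (
    "You translate user requests into app actions. "
    f"Allowed actions: {APP_ACTIONS}. "
    'Reply only with JSON: [{"action": ..., "args": {...}}, ...]'
)

def call_llm(system: str, user: str) -> str:
    """Placeholder for whatever LLM client the app uses."""
    raise NotImplementedError

def handle_request(user_request: str) -> list[dict]:
    reply = call_llm(SYSTEM_PROMPT, user_request)
    actions = json.loads(reply)
    # Keep only actions the app actually supports, much like a state
    # machine rejecting invalid transitions.
    return [a for a in actions if a.get("action") in APP_ACTIONS]

# e.g. handle_request("make the text bigger and save it as a PDF")
# might return:
#   [{"action": "set_font_size", "args": {"size": 14}},
#    {"action": "export_pdf", "args": {}}]
```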

Having recently spent some time learning about code structure, refactors and abstractions, it dawned on me that what this method effectively does is create an abstraction of the app for the LLM. I touched on this in that post, but taking it further, the same idea could shape how LLMs interact with large codebases in future. Instead of handing a model the whole codebase, each model would work on only a small piece, with the interfaces of the other modules made available to it.
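A rough sketch of that context-building step, assuming a Python codebase: the LLM receives the full source of the module it is working on, plus only the top-level signatures (the "interfaces") of the other modules. The module names and the signature extraction here are illustrative, not a real tool.

```python
import ast
from pathlib import Path

def public_interface(source: str) -> str:
    """Return just the top-level function/class signatures of a module."""
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
    return "\n".join(lines)

def build_context(target: Path, others: list[Path]) -> str:
    """Full source for the target module, signatures only for the rest."""
    parts = [f"# Module under edit: {target.name}\n{target.read_text()}"]
    for path in others:
        parts.append(f"# Interface of {path.name}\n"
                     f"{public_interface(path.read_text())}")
    return "\n\n".join(parts)

# build_context(Path("billing.py"), [Path("users.py"), Path("orders.py")])
# would produce a prompt far smaller than the whole codebase.
```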

There would be a trade-off, however. This strategy would give an LLM good knowledge of the code within its own function or module, but it would make it harder for the model to notice opportunities for, and suggest, large-scale refactors, since cross-module duplication or structural problems would sit outside the slice of code it can see. Unfortunately, the majority of widely available code is not of high quality, so any model trained on it will reflect that low quality. This is a problem LLMs have had to contend with more broadly, and in some cases it has prompted the UK Government to support initiatives providing trusted, high-quality sources, such as the Content Store for Education content.

Nevertheless, it would be interesting to see what such a model would come up with in comparison to existing ones.