TLDR: The best solutions are often combinations of textbook and contextual learning.
I was working on a problem recently that saw an interesting mix of textbook and contextual learning. The problem essentially boiled down to having a list of boxes that I want to draw, and removing every entry from that list that overlapped with another entry. The current implementation suffered from O(n²) complexity (the work required scales with the size of the data times itself). This is a problem because, while for 2 values you're only doing 4 iterations, it quickly gets out of hand: for 10,000 values you're doing 100 million iterations, and so on. After doing some digging into the theory of reducing this down to the theoretical best of scaling only linearly with the size of the data, I drew a blank.
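As a rough sketch of that quadratic version (the `Box` and `overlaps` names here are my own illustration, not the original code), every box is compared against every other box:

```python
from typing import NamedTuple

class Box(NamedTuple):
    x: float
    y: float
    w: float
    h: float

def overlaps(a: Box, b: Box) -> bool:
    # Axis-aligned rectangle intersection test
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def remove_overlapping_naive(boxes: list[Box]) -> list[Box]:
    # O(n^2): each box is checked against every other box in the list
    return [a for i, a in enumerate(boxes)
            if not any(overlaps(a, b)
                       for j, b in enumerate(boxes) if j != i)]
```

With n boxes, the inner `any` walks the whole list again for each outer box, which is exactly the n × n blow-up described above.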
There are some optimisations that are possible with ordered data, but my data is not ordered, and ordering it is itself an expensive operation.
However, here’s where context comes back in. A screen is only ever a fixed size, and so are the boxes, so there’s a maximum number of boxes I can draw to the screen, which is always finite. If I only store non-overlapping boxes in a new list, and I check and update that list as I continue iterating through the original list, I only need to check each box’s collisions against that much smaller list of accepted boxes. In fact, since the size of the accepted list is bounded by the fixed screen size, it holds at most a constant number of entries, so iterating through it is O(1) and doesn’t pose any impact to scalability at all.
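A minimal sketch of that accepted-list approach (illustrative names again; the `Box` and `overlaps` helpers are redefined so the snippet runs on its own, and note this interpretation keeps the first of any overlapping pair rather than discarding both):

```python
from typing import NamedTuple

class Box(NamedTuple):
    x: float
    y: float
    w: float
    h: float

def overlaps(a: Box, b: Box) -> bool:
    # Axis-aligned rectangle intersection test
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def filter_boxes(boxes: list[Box]) -> list[Box]:
    # Bounded by how many boxes fit on a fixed-size screen,
    # so its length is O(1) with respect to the input size
    accepted: list[Box] = []
    for box in boxes:
        # Only compare against already-accepted boxes, not the whole input
        if not any(overlaps(box, kept) for kept in accepted):
            accepted.append(box)
    return accepted
```

Each of the n input boxes is checked against a constant-size list, giving O(n) overall instead of O(n²).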
So while the textbooks, talking about abstract problems, suggested the best solutions only ever reduce to O(n²), by analysing the applied context it was possible to get down to just O(n). It was a neat example of how both aspects are useful for solving problems; how you combine them is what’s key.