This is getting a bit fancy and veers towards micro-optimizations (though not necessarily with microscopic effects; I've gotten "mega" effects from it), but you can implement linked lists (and all other linked structures, including trees) using contiguous arrays as the memory backing, largely mitigating the memory inefficiencies (the need to dynamically allocate each node separately) and the loss of spatial locality, all without reaching for custom memory allocators like free lists. We can actually do this sort of thing:
I'm still rather competitive performance-wise in my field, and sometimes I feel like my ability to outperform some of my competitors comes not from algorithmic magic so much as from just using linked lists, leaning heavily on my profilers, and little things like that, while many of my peers have become allergic to them. The trick is to use linked lists in a way that doesn't degrade locality of reference too much or involve a dynamic allocation per node. Of course, if I implemented them the same way I did when I started out, when CPU caches were almost non-existent and pointers were a quarter of the size, they'd perform very poorly nowadays. But I've turned my old implementations into "array linked lists" to great benefit, and in ways that still outperform turning them into plain random-access sequences backed by arrays with linear-time insertions and removals to/from the sequence.
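One concrete case where the array-linked form beats a plain random-access sequence is removing elements mid-pass: each unlink is O(1), whereas erasing from a contiguous sequence shifts every later element, making a filter pass O(n^2) in the worst case. A hedged sketch of that, assuming C++ (the names `IndexList` and `remove_evens` are mine, not from the original):

```cpp
#include <cstdint>
#include <vector>

// Sketch: index-linked list over a contiguous vector, used to remove
// elements during a single traversal with O(1) work per removal.
struct IndexList {
    static constexpr int32_t NIL = -1;
    struct Node {
        int value;
        int32_t next;  // index of the next node, or NIL
    };

    std::vector<Node> nodes;  // contiguous backing storage
    int32_t head = NIL;

    void push_front(int v) {
        nodes.push_back({v, head});
        head = static_cast<int32_t>(nodes.size()) - 1;
    }

    // Remove every even value in one pass. Each removal rewrites a single
    // link; nothing is shifted and no per-node memory is freed, unlike
    // repeated erasure from a contiguous random-access sequence.
    void remove_evens() {
        int32_t* link = &head;  // pointer to the link we may rewrite
        while (*link != NIL) {
            if (nodes[*link].value % 2 == 0)
                *link = nodes[*link].next;  // unlink; slot stays in vector
            else
                link = &nodes[*link].next;  // advance to the next link
        }
    }
};
```

The backing vector only ever grows during the pass, so holding a pointer into it while unlinking is safe here; the abandoned slots could later be recycled or compacted in bulk.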