Here's a list of operations that can be performed on lists and arrays, compared with the relative operation cost (n = list/array length):

Adding an element:
- on lists you just need to allocate memory for the new element and redirect pointers: O(1)
- on arrays you must relocate (reallocate and copy) the array: O(n)

Inserting into a known location:
- on lists you update a fixed number of pointers: O(1)
- on arrays you must copy any later items up one position to make room for the insertion (possibly reallocating first): O(n)

Removing an element:
- on lists you just redirect pointers: O(1)
- on arrays you must copy any later items down one position: O(n)

Getting an element in a known position:
- on lists you must walk the list from the first element to that position. Worst case: O(n)
- on arrays you can access the element immediately: O(1)

This is a very low-level comparison of these two popular and basic data structures. You can see that lists perform better in situations where you have to make a lot of modifications to the list itself (removing or adding elements), while arrays perform better when you have to access elements directly. In this answer I've taken into account only the basic data structures every programmer should know about.

To put the array drawbacks more concretely: adding a new item means the array must be reallocated (or you must allocate more space than you need, to allow for future growth and reduce the number of reallocations); removing items leaves wasted space or requires a reallocation; and inserting items anywhere except the end involves (possibly reallocating and) copying lots of the data up one position. On lists you just allocate memory for the new element and redirect pointers.

What do you think about adding to the end?

Because you always immediately know where you are in the list when delete is called, you can easily give up memory in O(1).

From my experience: implementing sparse matrices and Fibonacci heaps. Though I'm not sure if sparse matrices are best implemented using linked lists - probably there is a better way - but it really helped learning the ins and outs of sparse matrices using linked lists in undergrad CS :)

One example of good usage for a linked list is where the list elements are very large, i.e. large enough that only one or two can fit in the CPU cache at the same time. Locality of reference is no additional problem compared with a vector or deque of pointers, since you'd have to pull each object into memory either way. Splitting and joining (bidirectionally-linked) lists is also very efficient.

Sometimes you only need a stack, so a singly-linked list is sufficient. (The other main supporting structure being ConcurrentQueue.)

Linked lists are one of the natural choices when you cannot control where your data is stored, but you still need to somehow get from one object to the next. There is no allocation overhead for an intrusive list node, provided the cells are large enough to contain a pointer. An intrusive linked list is still a linked list - though note that "intrusive" normally has a slightly different meaning: that each possible list element contains a pointer separate from the data.

That total overhead for GC is low? Last time I tried measuring it on a real app, the key point was that Java was doing all the work when the processor was otherwise idle anyway, so naturally it didn't affect visible performance much.

When we want to store, say, a million variable-length sub-sequences averaging, say, 4 elements each (but sometimes with elements being removed and added to one of these sub-sequences), a linked list allows us to store 4 million list nodes contiguously instead of 1 million containers which are each individually heap-allocated: one giant vector, i.e., not a million small ones. A free list of reusable nodes is "interwoven" between them (if there were deletes). In effect you use arrays instead of "real" memory, and if records are not supported either, parallel arrays can often be used instead. Initial loading of a hash table stored this way is pretty fast, because the array is filled sequentially (which plays very nicely with the CPU cache).

You can also combine linked lists with other structures. Hash maps can obviously do insertion and deletion in O(1), but then you cannot iterate over the elements in order. Given that fact, a hash map can be combined with a linked list to create a nifty LRU cache: a map that stores a fixed number of key-value pairs and drops the least recently accessed key to make room for new ones. You do not want to perform a delete-from-middle-and-add-to-the-end on a vector or deque at every read access, but moving a linked-list node to the tail is typically fine. Walking the list is very rarely needed in this case, so the O(n) cost is not an issue here (walking a structure is O(n) anyway). Both of these ideas are sketched below.
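Here's a minimal sketch of that LRU cache in TypeScript, assuming a hand-rolled doubly-linked list whose nodes are tracked by a Map; the names (LruCache, LruNode, get, put) are illustrative, not from any particular library:

```typescript
// LRU cache: a Map gives O(1) lookup by key; a doubly-linked list,
// ordered from least to most recently used, gives O(1) eviction and
// an O(1) "move to tail" on every access. Assumes capacity >= 1.
class LruNode<K, V> {
  prev: LruNode<K, V> | null = null;
  next: LruNode<K, V> | null = null;
  constructor(public key: K, public value: V) {}
}

class LruCache<K, V> {
  private map = new Map<K, LruNode<K, V>>();
  private head: LruNode<K, V> | null = null; // least recently used
  private tail: LruNode<K, V> | null = null; // most recently used

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const node = this.map.get(key);
    if (!node) return undefined;
    this.moveToTail(node); // O(1): pointer updates only, no copying
    return node.value;
  }

  put(key: K, value: V): void {
    const existing = this.map.get(key);
    if (existing) {
      existing.value = value;
      this.moveToTail(existing);
      return;
    }
    if (this.map.size >= this.capacity) {
      const lru = this.head!; // drop the least recently used entry
      this.unlink(lru);
      this.map.delete(lru.key);
    }
    const node = new LruNode(key, value);
    this.append(node);
    this.map.set(key, node);
  }

  // Detach a node from the list, fixing up head/tail as needed.
  private unlink(node: LruNode<K, V>): void {
    if (node.prev) node.prev.next = node.next; else this.head = node.next;
    if (node.next) node.next.prev = node.prev; else this.tail = node.prev;
    node.prev = node.next = null;
  }

  // Attach a node at the tail (most recently used end).
  private append(node: LruNode<K, V>): void {
    node.prev = this.tail;
    if (this.tail) this.tail.next = node; else this.head = node;
    this.tail = node;
  }

  private moveToTail(node: LruNode<K, V>): void {
    if (node === this.tail) return;
    this.unlink(node);
    this.append(node);
  }
}
```

Every read access is exactly the "move a node to the tail" operation described above: a handful of pointer updates, with none of the middle-deletion shuffling a vector or deque would require.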
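And a sketch of the contiguous-storage idea: nodes held in parallel arrays with integer "next" indices rather than individually heap-allocated, and a free list interwoven through deleted slots. NodePool, alloc and free are invented names for illustration:

```typescript
const NIL = -1; // sentinel index standing in for a null pointer

// List nodes live in two parallel arrays: `values` holds the data,
// `next` holds the array index of the following node. Freed slots are
// chained into a free list and reused by later allocations.
class NodePool {
  private values: number[] = [];
  private next: number[] = [];
  private freeHead = NIL; // head of the interwoven free list

  alloc(value: number, nextIndex: number = NIL): number {
    if (this.freeHead !== NIL) {
      const i = this.freeHead; // reuse a previously freed slot
      this.freeHead = this.next[i];
      this.values[i] = value;
      this.next[i] = nextIndex;
      return i;
    }
    this.values.push(value); // otherwise grow the arrays sequentially
    this.next.push(nextIndex);
    return this.values.length - 1;
  }

  free(i: number): void {
    this.next[i] = this.freeHead; // O(1): push the slot onto the free list
    this.freeHead = i;
  }

  value(i: number): number { return this.values[i]; }
  nextOf(i: number): number { return this.next[i]; }
}

// Many independent sub-sequences can share one pool; here is a single
// list 3 -> 2 -> 1 built by repeated head insertion.
const pool = new NodePool();
let head = pool.alloc(1);
head = pool.alloc(2, head);
head = pool.alloc(3, head);
for (let i = head; i !== NIL; i = pool.nextOf(i)) {
  console.log(pool.value(i));
}
```

A million sub-sequences stored this way share two flat arrays instead of causing a million separate heap allocations, and fresh allocations fill the arrays sequentially, which is the cache-friendly loading behaviour described above.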
Was Jerry's question "give good uses of lists", or "give good uses of lists which every programmer uses on a daily basis", or something in between? @Neil: I guess kind of a survey.

More than 0, less than a million ;-)

If you are suggesting that members of the C++ Committee somehow sabotaged linked lists (which must logically be slow for many operations), then name the guilty men! My point is that a large part of the cost of reallocating the array is not.

Of course, even though it was posted only as a comment, not an answer, I think Neil's blog entry is well worth reading -- not only informative, but quite entertaining as well.

Why linked lists? There are the obvious built-in data structures like objects and arrays, and there are the modified data structures like linked lists. These lists are just POJOs in JavaScript or POROs in Ruby (yes, I know everything in Ruby is an object). So then why are linked lists useful in software? Our Storage Unit has data, and it also has a map to other storage units; traditionally this is called the "next" node. Our linked list has fundamental functionality where we can add to the head of the list: point the new node at the old head and call it the head - nothing else has to move, and we just inserted a new head. A chain is a similar picture: the implementation would include a link asking its next link for its apparent weight, then adding its own weight to the result. When you run into problems, remember this language is so dumb that one little character will cause it to break. Sketches of the head insertion and of the chain follow below.
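Here's a minimal sketch of that Storage Unit idea in TypeScript; the names StorageUnit and addHead are mine, purely for illustration:

```typescript
// A "storage unit": some data plus a reference to the next unit.
interface StorageUnit<T> {
  data: T;
  next: StorageUnit<T> | null;
}

// Adding to the head is O(1): the new node points at the old head,
// and no existing node is touched.
function addHead<T>(head: StorageUnit<T> | null, data: T): StorageUnit<T> {
  return { data, next: head };
}

let list: StorageUnit<string> | null = null;
list = addHead(list, "c");
list = addHead(list, "b");
list = addHead(list, "a"); // we just inserted a new head: a -> b -> c
```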
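And the chain example, sketched under the assumption that "apparent weight" means the total weight supported at a given link - each link asks the next link for its apparent weight and adds its own:

```typescript
// One link in a chain: its own weight plus a reference to the link below.
interface ChainLink {
  ownWeight: number;
  next: ChainLink | null;
}

// A link's apparent weight is its own weight plus everything hanging
// below it - a recursive walk along the `next` references.
function apparentWeight(link: ChainLink | null): number {
  if (link === null) return 0;
  return link.ownWeight + apparentWeight(link.next);
}

const chain: ChainLink = {
  ownWeight: 2,
  next: { ownWeight: 3, next: { ownWeight: 5, next: null } },
};
console.log(apparentWeight(chain)); // 10
```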