It's been what... forty years since personal computers, and such an elegant solution JUST appeared?
@balrogboogie no idea. It would be nice to measure!
Also, I suppose it could really help with everything-on-the-heap languages...
@federicomena Interesting! Well worth a read!
Strikes me that GNOME partitions their heap quite similarly with their "Slice" allocator, so I wonder how quickly they would adopt this?
@alcinnz I'm afraid glib's slice allocator is pretty obsolete these days 😅
I think it was nice back when system malloc() was slow, but system allocators have improved a lot since. Also, unfortunately GLib's slice allocator could never really use the recycle-to-default-values scheme from Solaris's original slab allocator.
I don't remember if the GTK team had plans to remove GSlice and just use malloc... maybe @firstname.lastname@example.org remembers?
glibc malloc was always on par with GSlice in performance. GSlice was faster than non-glibc malloc() impls, and more memory-efficient, because it doesn't have to store boundary tags (2*size_t fields before each memory block that record the size free() needs to know about).
True to the original slab magazine paper, it only recycles memory back to the kernel after a timeout of several seconds, a fact the Gitlab bug benchmarks don't reflect.
The GSlice allocator doesn't suffer from the catastrophic fragmentation described in the paper's intro. Basically, it has per-thread caches for objects of the same size, and it allocates and recycles same-sized objects from a single page, which prevents heavy fragmentation due to widely varying object sizes.
@federicomena whoa, I haven't read the whole paper yet and already can't wait to see this used in the glibc allocator
@federicomena Ooh, that's clever. I was ready to ramble about how that can already work at the OS level but they addressed that concern right away and added another layer to it that cleverly fixes the problem with that.
@fluffy I'm boggling at the part where it installs a segfault handler to catch writes to pages that it is in the process of compacting. Truly a lovely, scary hack.
* sigsegv as "game over, program is buggy"
* sigsegv as "perfectly legitimate way to catch writes to a page, who's your VMM now"
And of course I'm biased, but using this allocator on a memory safe language seems like a wonderful opportunity.
@federicomena @zwol chained handlers are a thing and segfaults are already a natural part of how mmap() et al work. Not to mention virtual memory in general. There’s definitely some care you need to take in chaining sigsegv due to the prevalence of crash reporters and such though. I don’t see this as something you’d want to LD_PRELOAD willy-nilly
@fluffy @zwol yeah, the paper mentions that they do their wait-until-remapped thing in the sigsegv handler only if the pointer in question is known to the Mesh allocator. I haven't read the code to see if it chains to other handlers when it isn't, but:
a) that sounds like the right thing to do, anyway;
b) I have absolutely no clue how they deal with the order of initialization of the handlers :)
@federicomena @zwol I guess my big question about this is whether it’s actually beneficial in actual usage - intuitively I feel like the chance of two pages having non-overlapping allocation offsets isn’t going to be much higher than the chance of a page having no allocations at all, and the OS VMM can already defragment physical allocations in the latter case.
@federicomena @fluffy I was thinking about this with my “occasional contributor to glibc” hat on, and the issue there is that signal handlers are process globals, which means libraries mustn’t touch them. Also, come to think of it, nothing stops you from using sigprocmask to block delivery of SIGSEGV and (hurriedly writes test program) this causes both Linux and BSD kernels to kill the process instead of invoking a handler.
@zwol @federicomena yeah that’s absolutely fair and I would expect anyone who’s using this allocator to specifically know they’re doing it and only use LD_PRELOAD as a reliable means of overriding the libc one. Because overriding a default allocator in C++ at the language level is a gigantic pain in the butt.
@federicomena @fluffy as another thing https://sourceware.org/git/?p=glibc.git;a=blob;f=nptl/allocatestack.c;hb=HEAD#l1120 can do. (yes, it's horrible.)
> For example, on low-end Android devices, Google reports that more than 99 percent of Chrome crashes are due to running out of memory when attempting to display a web page
surely the solution is to write some allocator trickery. yup.
@grainloom heap fragmentation is a real problem; I don't think they mention the Chrome factoid other than for illustrative purposes.
@federicomena I assume they're updating pointers by making pages inaccessible and using a segfault handler? That's a lot more practical now that everything is 64 bit. I guess the need to keep the mappings around isn't that big a deal if you don't move objects gratuitously.
I'd been thinking about a similar approach for object-level virtual memory/persistence, inspired by "Pointer Swizzling At Page Fault Time" and the implementation of libgc.
@federicomena Another idea I had is to achieve some level of memory safety by never reusing addresses. This would work really well with a compacting allocator.
@freakazoid they don't update pointers. They move an allocated block to another page, at the same page offset it had before, and remap the old page to point to the new one.