Is deallocating multiple large blocks of memory at exit worth it?


Say, for instance, I write a program that allocates a number of large objects when it is initialized. The program runs for a while, perhaps indefinitely, and when it's time to terminate, each of the large initialized objects is freed.

So the question is: will it take longer to manually deallocate each block of memory separately at the end of the program's life, or is it better to let the system unload the program and deallocate all of the virtual memory it gave the program in one go?

Would that be safe and/or faster? Also, if it is safe, does the compiler do this anyway when set to optimise?

1) Not all systems free a process's memory when the application terminates. Of course, all modern desktop systems do, so if you are only going to run your program on Linux, Mac, or Windows, you can leave deallocation to the system.

2) You may need to perform other operations on the data at termination, not just free the memory. If you develop your program with a design that makes it hard to deallocate objects manually at the end, you may later need to run some code before exiting and find yourself facing a hard problem.

2') Even if you think your program will need the objects all the way until it dies, you may later want to turn the program into a library, or change the project to load and unload the big objects on demand, and the poor design of the program will make that hard or impossible.

3) Moreover, the program's deallocation performance depends on the implementation of the allocator it uses, while system deallocation depends on the system's memory management, and even on a single system there can be several implementations. If you face allocation/deallocation performance problems, develop a better allocator rather than pinning your hopes on the system.

4) So my opinion is: when you deallocate memory manually at the end, you are on the right path. When you don't, you may gain dubious benefits in a few cases, but you will face problems sooner or later.

