It depends on how you define "leak". According to the most obvious
definition, and the only useful one, it is not a leak, at least at the
application level. A bucket doesn't leak because you intentionally
allow a finite quantity of water to escape. And practically speaking,
an application doesn't fail because you intentionally allow a bounded
set of objects to persist until the end of the program.
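For instance, a very common pattern is a singleton which is created on first use and intentionally never deleted (a sketch; the class name is invented for the example):

    class Configuration
    {
    public:
        // The single instance is created on first use and intentionally
        // never deleted: it must remain usable from the destructors of
        // other objects with static lifetime, and the OS reclaims the
        // memory at process exit anyway.
        static Configuration& instance()
        {
            static Configuration* theInstance = new Configuration;
            return *theInstance;
        }
    private:
        Configuration() {}
    };

A leak checker will see that one Configuration at exit, but the set of such objects is bounded and doesn't grow with how long the program runs, so nothing ever fails because of it.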
With regard to memory leaks, our perception of the word has been
colored by "leak checkers"---programs like Purify or Valgrind. Their
role is to find leaks (amongst other things), but they have no way of
knowing what is intentional, and what isn't, and what is bounded, and what
isn't. So they invent other definitions: an object which is unreachable
has "leaked" (and there's a good probability in real code that that's
true), or an object which hasn't been deleted after all of the
destructors of static objects have been executed has "leaked". In
this latter case, the definition is obviously wrong, and sort of
useless. But there are enough cases where such things are leaks that it
is reasonable to at least warn about them ("possible leaks"), provided
there is a way of filtering out specific cases. (Both Purify and
Valgrind recognize that not all of these cases are really leaks, and
provide various filtering mechanisms for their detection.) All of which
is well and good—I'm very happy that we have such tools—but
we shouldn't allow them to pervert the language.
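To see the distinction the checkers are trying to make, consider something like the following (the names are invented for the example; the comments describe how a checker like Valgrind would typically classify each block):

    char* globalTable = nullptr;        // intentionally kept until exit

    void processRequest()
    {
        char* scratch = new char[1024]; // the pointer is gone once the
        scratch[0] = '\0';              // function returns: unreachable,
    }                                   // and almost certainly a real leak
                                        // ("definitely lost" in Valgrind's
                                        // terms)

    int main()
    {
        globalTable = new char[4096];   // still pointed to at exit, so
        globalTable[0] = '\0';          // typically reported as "still
                                        // reachable" rather than as a leak
        for (int i = 0; i != 100; ++i) {
            processRequest();           // loses another 1024 bytes per call
        }
        return 0;
    }

The first allocation is a leak in the useful sense: do it often enough and the application eventually fails. The second is just an object you have chosen not to clean up.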
And one final reminder: the standard says that the standard iostream
objects (std::cout, etc.) will never be destructed. So any buffers
they allocate will (probably) never be freed. Certainly no one in their
right mind would consider these "leaks".
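So even something this trivial, run under a leak checker, may show memory still allocated at exit, depending on the library implementation:

    #include <iostream>

    int main()
    {
        std::cout << "hello, world\n";
        // std::cout itself is never destructed (the standard guarantees
        // that), so any buffer the library allocated for it is typically
        // still allocated when the process exits, and a leak checker may
        // report that memory as reachable. It isn't a leak.
        return 0;
    }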
Fight with dragons for too long, and you become a dragon yourself; gaze too long into the abyss, and the abyss gazes back into you…