The point of having limits on a pool is to let the implementation pre-allocate the resources needed up front or, failing that, at least have a fixed bound on the number of allocations a user can make from it. The goal is that, at some point, the system no longer has to allocate resources at runtime when you request them from the pool.
Individual descriptors take up some form of resource, whether CPU, GPU, or both. But bundling them into descriptor sets can also take up resources, depending on the implementation. As such, if an implementation can pre-allocate some number of set resources, that helps minimize runtime allocations.
Now, it might have made more sense to define a descriptor pool by providing a list of descriptor set layouts and declaring that you will only allocate some number of sets of each particular layout. But that would be a very constrained descriptor pool model.
So the current interface is a compromise between such a strict per-layout model and a model that only counts descriptors rather than sets: you declare a cap on the total number of sets plus a cap on descriptors of each type, without committing to specific layouts.
He who fights with dragons for too long becomes a dragon himself; gaze long into an abyss, and the abyss will gaze back into you…