As Wikipedia puts it in its Context switch article, "a context switch is the process of storing and restoring the state (context) of a process so that execution can be resumed from the same point at a later time." I'll assume a context switch between two processes of the same OS, not the user/kernel mode transition (syscall), which is much faster and needs no TLB flush.
So the OS kernel needs quite a bit of time to save the execution state (all registers, really all of them, plus many special control structures) of the currently running process to memory, and then to load the execution state of the other process (reading it in from memory). A TLB flush, if needed, adds some time to the switch, but it is only a small part of the total overhead.
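As a side note, if you just want to see how many context switches a process goes through (not how long each one takes), Linux/POSIX exposes per-process counters via getrusage(2). A minimal sketch of my own (not taken from any of the tools mentioned below, and assuming a Linux/POSIX system):

    /* Minimal sketch (Linux/POSIX): print how many context switches the
     * current process has accumulated. This only counts switches, it does
     * not time them. */
    #include <stdio.h>
    #include <sched.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rusage ru;

        /* Try to provoke a few voluntary switches; whether each yield really
         * results in a counted switch depends on what else is runnable. */
        for (int i = 0; i < 1000; i++)
            sched_yield();

        if (getrusage(RUSAGE_SELF, &ru) != 0) {
            perror("getrusage");
            return 1;
        }

        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
        return 0;
    }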
If you want to measure context switch latency, there is the lmbench benchmark suite http://www.bitmover.com/lmbench/ with its lat_ctx test http://www.bitmover.com/lmbench/lat_ctx.8.html
I can't find results for Nehalem (is lmbench in the Phoronix suite?), but for Core 2 and a modern Linux kernel a context switch may cost 5-7 microseconds.
There are also results from a rougher test, http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html, which gives 1-3 microseconds per context switch. I can't extract the exact effect of not flushing the TLB from his results.
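Both lat_ctx and that blog post force context switches by making processes block on pipes and pass a token back and forth. A stripped-down sketch of the same idea, two processes ping-ponging one byte over a pair of pipes (my own simplification, not the actual lmbench or blog code):

    /* Rough sketch of a pipe ping-pong context-switch benchmark
     * (the simplified idea behind lat_ctx and similar tests, not their code).
     * Each round trip costs roughly two context switches plus two pipe
     * read/write pairs, so the printed number is an upper bound. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define ROUNDS 100000

    static double now_us(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
    }

    int main(void)
    {
        int p2c[2], c2p[2];          /* parent->child and child->parent pipes */
        char token = 'x';

        if (pipe(p2c) != 0 || pipe(c2p) != 0) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {           /* child: echo the token back */
            while (read(p2c[0], &token, 1) == 1)
                write(c2p[1], &token, 1);
            _exit(0);
        }

        double start = now_us();
        for (int i = 0; i < ROUNDS; i++) {   /* parent: ping-pong the token */
            write(p2c[1], &token, 1);
            read(c2p[0], &token, 1);
        }
        double elapsed = now_us() - start;

        /* Two switches per round trip (parent->child and child->parent). */
        printf("~%.2f us per switch (incl. pipe overhead)\n",
               elapsed / (2.0 * ROUNDS));

        close(p2c[1]);               /* child sees EOF and exits */
        return 0;
    }

Pin both processes to a single core when running something like this (e.g. with taskset -c 0), otherwise you mostly measure cross-CPU wakeup latency rather than context switches.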
UPDATE - Your question is really about virtualization, not about process context switches.
RWT says in its Nehalem article, "Inside Nehalem: Intel's Future Processor and System. TLBs, Page Tables and Synchronization" (April 2, 2008, by David Kanter), that Nehalem added a VPID to the TLB to make virtual machine/host switches (vmentry/vmexit) faster:
Nehalem’s TLB entries have also changed subtly by introducing a “Virtual Processor ID” or VPID. Every TLB entry caches a virtual to physical address translation ... that translation is specific to a given process and virtual machine. Intel’s older CPUs would flush the TLBs whenever the processor switched between the virtualized guest and the host instance, to ensure that processes only accessed memory they were allowed to touch. The VPID tracks which VM a given translation entry in the TLB is associated with, so that when a VM exit and re-entry occurs, the TLBs do not have to be flushed for safety. .... The VPID is helpful for virtualization performance by lowering the overhead of VM transitions; Intel estimates that the latency of a round trip VM transition in Nehalem is 40% compared to Merom (i.e. the 65nm Core 2) and about a third lower than the 45nm Penryn.
Also, you should know that in the fragment you cited in the question, the "[18]" reference points to "G. Neiger, A. Santoni, F. Leung, D. Rodgers, and R. Uhlig. Intel Virtualization Technology: Hardware Support for Efficient Processor Virtualization. Intel Technology Journal, 10(3)", so this is a feature for efficient virtualization (fast guest/host switches).