A post I made a couple days ago about the side-effect of concurrency (the concurrent collections in the .Net 4.0 Parallel Extensions) allowing modifications to collections while enumerating them has been quite popular, with a lot of attention coming from www.dotnetguru.org, what appears to be a French-language news/aggregation site (I only know enough French to get my face slapped, so it’s hard for me to tell). I’m not sure why the post would be so popular in France, but the Internet’s weird that way … things take off for unexpected reasons. Regardless, it occurred to me that some further research might be in order, before folks get all hot for .Net 4.0 and want to change their collections so they can be modified while enumerating. The question is: what’s the performance penalty for these threadsafe collections? If I use one in a single-threaded environment, just to get that modification capability, is the performance-price something I’m willing to pay? So I set up a simple test to satisfy my curiosity – but first the test platform (your mileage may vary):
It’s also important to note that the Dictionary class has been around for a while and my guess is it’s been optimized once or twice, while the ConcurrentDictionary is part of a CTP. The test is set up as two loops – the first a for that adds a million items to a Dictionary; the second a foreach that enumerates them:

static void Main(string[] args)
{
    var dictionary = new Dictionary<int, DateTime>();

    var watch = Stopwatch.StartNew();

    for (int i = 0; i < 1000000; i++)
    {
        dictionary.Add(i, DateTime.Now);
    }

    watch.Stop();
    Console.WriteLine("Adding: {0}", watch.ElapsedMilliseconds);

    int count = 0;
    watch.Reset();
    watch.Start();

    foreach (var item in dictionary)
    {
        count += item.Key;
    }

    watch.Stop();
    Console.WriteLine("Enumerating: {0}", watch.ElapsedMilliseconds);
    Console.ReadLine();
}
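As a quick aside – a sketch I didn’t time, just to illustrate the behavior that started all this – if you try the same kind of modification mid-enumeration on the plain Dictionary, the enumerator throws:

var sketch = new Dictionary<int, DateTime>();
for (int i = 0; i < 10; i++)
{
    sketch.Add(i, DateTime.Now);
}

try
{
    foreach (var item in sketch)
    {
        // Mutating the collection invalidates the enumerator...
        sketch.Remove(item.Key);
    }
}
catch (InvalidOperationException ex)
{
    // ...so the next MoveNext() throws:
    // "Collection was modified; enumeration operation may not execute."
    Console.WriteLine(ex.Message);
}

The concurrent collections exist, in part, to make that exception go away.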
Not the most scientific of tests, nor the most comprehensive, but enough to sate my curious-bone until I have time to do a more thorough analysis. Running this nine times, I got the following results:
Then I changed to a ConcurrentDictionary (also note the change from Add() to TryAdd() in the loop body):

static void Main(string[] args)
{
    var dictionary = new ConcurrentDictionary<int, DateTime>();

    var watch = Stopwatch.StartNew();

    for (int i = 0; i < 1000000; i++)
    {
        dictionary.TryAdd(i, DateTime.Now);
    }

    watch.Stop();
    Console.WriteLine("Adding: {0}", watch.ElapsedMilliseconds);

    int count = 0;
    watch.Reset();
    watch.Start();

    foreach (var item in dictionary)
    {
        count += item.Key;
    }

    watch.Stop();
    Console.WriteLine("Enumerating: {0}", watch.ElapsedMilliseconds);
    Console.ReadLine();
}
This change resulted in the following times:
So there’s clearly a performance difference, with the ConcurrentDictionary being slower, but keep in mind a few key facts:
The time necessary to do the adding is more troublesome to me, but in dealing with a million-item set, is it really that unreasonable? That’s a design decision you’d have to make for your application. Having satisfied the curiosity-beast to a certain extent, yet another question arose (curiosity is like that): since this post came about from the ability to alter a collection while enumerating it, what effect would that have on the numbers? So I changed the code to remove each item from the collection as it enumerates:

var dictionary = new ConcurrentDictionary<int, DateTime>();

var watch = Stopwatch.StartNew();

for (int i = 0; i < 1000000; i++)
{
    dictionary.TryAdd(i, DateTime.Now);
}

watch.Stop();
Console.WriteLine("Adding: {0}", watch.ElapsedMilliseconds);

int count = 0;
watch.Reset();
watch.Start();

foreach (var item in dictionary)
{
    count += item.Key;
    DateTime temp;
    dictionary.TryRemove(item.Key, out temp);
}

watch.Stop();
Console.WriteLine("Enumerating: {0}", watch.ElapsedMilliseconds);
Console.WriteLine("Items in Dictionary: {0}", dictionary.Count);
Console.ReadLine();
Which added significantly to the enumeration time:
Removing the current item from the collection during enumeration triples the time spent in the foreach loop – a disturbing development, but we’re still talking about a total of a quarter-second to process a million items, so maybe not worrisome? It depends on your application, how many items you actually have to process, and whatever other processing you have to do. Now, with the whole purpose of the concurrent collections being parallel development, you have to know that I couldn’t leave it without doing one more test. After all, those two loops have been sitting there this entire post fairly screaming to be parallelized with Parallel.For and Parallel.ForEach:

var dictionary = new ConcurrentDictionary<int, DateTime>();

var watch = Stopwatch.StartNew();

Parallel.For(0, 1000000, (i) =>
{
    dictionary.TryAdd(i, DateTime.Now);
});

watch.Stop();
Console.WriteLine("Adding: {0}", watch.ElapsedMilliseconds);

int count = 0;
watch.Reset();
watch.Start();

Parallel.ForEach(dictionary, (item) =>
{
    // count += item.Key; -- commented out: += isn't atomic, so sharing
    // the counter across threads wouldn't be safe anyway
    DateTime temp;
    dictionary.TryRemove(item.Key, out temp);
});

watch.Stop();
Console.WriteLine("Enumerating: {0}", watch.ElapsedMilliseconds);
Console.WriteLine("Items in Dictionary: {0}", dictionary.Count);
Console.ReadLine();
Not good numbers at all, but not unexpected when you think about it. When the loops are parallelized, the loop bodies become delegates handed off to worker Tasks, and every one of the two million iterations pays for a delegate invocation plus its share of scheduling and synchronization overhead – yet each iteration consists of very little code; code that doesn’t take long to begin with, so any performance improvement we gain by executing in parallel is offset (and more) by the overhead of managing the parallelism. Something to keep in mind as you’re looking for parallelization candidates in a real application (one common mitigation, batching the iterations into ranges, is sketched after the next example). So what about the more traditional way of handling this – the situation where we make the decision to remove an item from a collection while enumerating over it? Typically we’d make a list of the items to be removed, then remove them after the first enumeration was complete:

var dictionary = new Dictionary<int, DateTime>();

var watch = Stopwatch.StartNew();

for (int i = 0; i < 1000000; i++)
{
    dictionary.Add(i, DateTime.Now);
}

watch.Stop();
Console.WriteLine("Adding: {0}", watch.ElapsedMilliseconds);

watch.Reset();
watch.Start();

var toRemove = new List<int>();

foreach (var item in dictionary)
{
    toRemove.Add(item.Key);
}
foreach (var item in toRemove)
{
    dictionary.Remove(item);
}

watch.Stop();
Console.WriteLine("Enumerating: {0}", watch.ElapsedMilliseconds);
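And the promised mitigation for the parallel overhead: instead of paying delegate and scheduling costs for every single iteration, you can hand Parallel.ForEach whole index ranges to chew on. A minimal sketch – untimed here, assuming the range-partitioning Partitioner.Create overload that ships with the .Net 4 Parallel Extensions, and reusing the ConcurrentDictionary from the parallel example:

// Partition the index range into chunks so each worker processes a
// batch of adds per delegate invocation instead of just one.
Parallel.ForEach(Partitioner.Create(0, 1000000), range =>
{
    for (int i = range.Item1; i < range.Item2; i++)
    {
        dictionary.TryAdd(i, DateTime.Now);
    }
});

With a loop body this small, batching like this is what gives the parallel version a fighting chance against the sequential one.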
Based on this limited test, the traditional method of waiting until the first enumeration of a collection is complete before removing items from it appears to still be the most efficient. Adding to a Dictionary is faster than adding to a ConcurrentDictionary, even if the adding is parallelized … provided the parallelized code is so brief that the overhead of parallelization outweighs the benefits. That last bit is important, because if the parallelized example had done significantly more than just add an item to a Dictionary, the results would likely be different.

When enumerating the items in a collection, the simple Dictionary again proves faster than ConcurrentDictionary; and when actually modifying the collection by removing items, the traditional method of building a list of items to remove and then doing so after the foreach is complete proves to be fastest.

Does this mean that you should never use one of the new concurrent collections in this way? That’s a design decision that you’ll have to make based on your particular application. Keep in mind that the concurrent collections are still in CTP and will likely improve dramatically in performance by the time .Net 4 is released – but also that the very nature of making them threadsafe and, consequently, able to be modified while enumerating will likely mean that they’re always going to be somewhat slower than their counterparts.

There may be instances, though, where sacrificing performance for this capability is the best solution. For instance, what if processing one item in the collection results in a need to remove an item (or items) that haven’t been processed yet? In that case, simply removing the item at the point the decision’s made, rather than maintaining a list of items not to be processed, might be the simplest, most maintainable solution, and sacrificing a bit of performance might be worth it. Like so many things in software development, the answer is simple … it depends.
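To make that last scenario concrete, here’s a minimal sketch. Process and KeysMadeObsoleteBy are hypothetical stand-ins for whatever per-item work and dependency logic your application actually has:

foreach (var item in concurrentDictionary)
{
    Process(item.Value);

    // Hypothetical: handling this item tells us some other,
    // not-yet-processed keys are no longer needed.
    foreach (int obsoleteKey in KeysMadeObsoleteBy(item.Key))
    {
        DateTime ignored;
        // Safe to call mid-enumeration on a ConcurrentDictionary; the
        // enumerator won't throw, though it makes no snapshot guarantee
        // about whether a just-removed item still gets yielded.
        concurrentDictionary.TryRemove(obsoleteKey, out ignored);
    }
}

With a plain Dictionary you’d be maintaining a separate “skip” list and checking it on every iteration instead.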