As always, the only correct way to answer a question about performance is to actually measure the code.
Here's a sample LINQPad program that tests:
- Activator.CreateInstance
- new T()
- calling a delegate that calls new T()
As always, take a performance test program like this with a grain of salt; there might be bugs here that skew the results.
The output (timing values are in milliseconds):
Test1 - Activator.CreateInstance<T>()
12342
Test2 - new T()
1119
Test3 - Delegate
1530
Baseline
578
Note that the above timings are for 100,000,000 (100 million) constructions of the object. The overhead might not be a real problem for your program.
A cautious conclusion would be that Activator.CreateInstance<T>() takes roughly 11 times as long to do the same job as new T() does, and the delegate takes roughly 1.5 times as long. Note that the constructor here does nothing, so I only tried to measure the overhead of the different construction methods.
Edit: I added a baseline call that does everything except construct the object, and timed that as well. Subtracting that baseline, it looks like the delegate takes roughly 75% more time than a simple new(), and Activator.CreateInstance takes roughly 2000% more (around 20 times as long).
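For clarity, those figures come from subtracting the baseline from each of the timings above:

    new T():                        1119 - 578 =   541 ms
    Delegate:                       1530 - 578 =   952 ms   (952 / 541   ≈ 1.76, about 75% more)
    Activator.CreateInstance<T>():  12342 - 578 = 11764 ms  (11764 / 541 ≈ 21.7, about 2000% more)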
However, this is micro-optimization. If you really need to do this and eke out the last ounce of performance from some time-critical code, I would either hand-code a delegate to use instead or, if that is not possible (i.e. you need to provide the type at runtime), use Reflection.Emit to produce that delegate dynamically, as sketched below.
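Just to illustrate what I mean, here is a minimal sketch of such a dynamically built delegate. This is an illustration only, not part of the timed program below; the helper name BuildCreator is made up, it needs System.Reflection.Emit, and it assumes the type has a public parameterless constructor:

    static Func<object> BuildCreator(Type type)
    {
        // Assumes a public parameterless constructor.
        var ctor = type.GetConstructor(Type.EmptyTypes);
        if (ctor == null)
            throw new InvalidOperationException(type + " has no public parameterless constructor.");

        var method = new DynamicMethod("Create_" + type.Name, typeof(object), Type.EmptyTypes, type.Module, true);
        var il = method.GetILGenerator();
        il.Emit(OpCodes.Newobj, ctor);   // call the parameterless constructor
        if (type.IsValueType)
            il.Emit(OpCodes.Box, type);  // box value types so they can be returned as object
        il.Emit(OpCodes.Ret);            // return the new instance

        return (Func<object>)method.CreateDelegate(typeof(Func<object>));
    }

Build the delegate once per type, cache it (for example in a Dictionary<Type, Func<object>>), and after that each construction is just a delegate call. If the type is known at compile time, the hand-coded delegate used in Test3 below is simpler still.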
In any case, here is my real answer:
If you have a performance problem, first measure to see where your bottleneck is. Yes, the above timings might indicate that Activator.CreateInstance has more overhead than a dynamically built delegate, but there might be much bigger fish to fry in your codebase before you get (or even have to get) to this level of optimization.
And just to make sure I actually answer your concrete question: No, I would not discourage use of Activator.CreateInstance. You should be aware that it uses reflection, so that if it tops your profiling list of bottlenecks you know you might be able to do something about it; but the fact that it uses reflection does not mean it is the bottleneck.
The program:
void Main()
{
    const int IterationCount = 100000000;

    // warmup
    Test1();
    Test2();
    Test3();
    Test4();

    // profile Activator.CreateInstance<T>()
    Stopwatch sw = Stopwatch.StartNew();
    for (int index = 0; index < IterationCount; index++)
        Test1();
    sw.Stop();
    sw.ElapsedMilliseconds.Dump("Test1 - Activator.CreateInstance<T>()");

    // profile new T()
    sw.Restart();
    for (int index = 0; index < IterationCount; index++)
        Test2();
    sw.Stop();
    sw.ElapsedMilliseconds.Dump("Test2 - new T()");

    // profile Delegate
    sw.Restart();
    for (int index = 0; index < IterationCount; index++)
        Test3();
    sw.Stop();
    sw.ElapsedMilliseconds.Dump("Test3 - Delegate");

    // profile Baseline
    sw.Restart();
    for (int index = 0; index < IterationCount; index++)
        Test4();
    sw.Stop();
    sw.ElapsedMilliseconds.Dump("Baseline");
}

public void Test1()
{
    var obj = Activator.CreateInstance<TestClass>();
    GC.KeepAlive(obj);
}

public void Test2()
{
    var obj = new TestClass();
    GC.KeepAlive(obj);
}

static Func<TestClass> Create = delegate
{
    return new TestClass();
};

public void Test3()
{
    var obj = Create();
    GC.KeepAlive(obj);
}

TestClass x = new TestClass();

public void Test4()
{
    GC.KeepAlive(x);
}

public class TestClass
{
}