• 0

C# vs C++ performance: sorting an array


Question

I'm trying to compare the performance of the standard sorting facilities of .NET and C++, just for fun mainly. I devised the following test in C++/CLI (to run it, create a "CLR Console Application"): I fill a large C++ array and an equally large .NET array with the same random numbers, then sort each with its standard sorting function, std::sort and Array.Sort respectively. I repeat the test a few times and compute the average for each.

#include "stdafx.h"
#include <array>
#include <algorithm>
using namespace System;
using namespace System::Diagnostics;

const int ARRAY_SIZE = 1000000;
const int NUM_LOOPS = 20;

// Testing .Net Array.Sort() vs C++'s std::sort on both languages standard arrays.
int main(array<System::String ^> ^args)
{
auto cppArray = new int[ARRAY_SIZE];
auto netArray = gcnew array<int>(ARRAY_SIZE);
double totalTimeCpp = 0.0;
double totalTimeNet = 0.0;

auto randGen = gcnew Random();

for (int i = 0; i < NUM_LOOPS; ++i) {
for (int i = 0; i < ARRAY_SIZE; ++i) {
int randNum = randGen->Next();
cppArray[i] = randNum;
netArray[i] = randNum;
}

auto stopWatch = Stopwatch::StartNew();
std::sort(cppArray, cppArray + ARRAY_SIZE);
stopWatch->Stop();
totalTimeCpp += (double)stopWatch->ElapsedMilliseconds;

stopWatch = Stopwatch::StartNew();
Array::Sort(netArray);
stopWatch->Stop();
totalTimeNet += (double)stopWatch->ElapsedMilliseconds;
}

Console::WriteLine(L"Average time C++: {0} milliseconds.", totalTimeCpp / (double)NUM_LOOPS);
Console::WriteLine(L"Average time .NET: {0} milliseconds.", totalTimeNet / (double)NUM_LOOPS);
Console::WriteLine(L"Array.Sort time / std::sort time: {0}.", totalTimeNet / totalTimeCpp);
Console::ReadKey(false);
return 0;
}[/CODE]

I'm posting this because I can't believe the results. In release mode, optimizing for speed, Array.Sort() takes 89.6% of the time std::sort() takes on my machine. std::sort is supposed to be faster, if only because C++ doesn't perform a bounds check on each random access into an array and .NET does. At this point I suspect there's something wrong with my testing methodology, perhaps due to compiling with the /clr switch? I don't know, so if you have a better idea let me know.


17 answers to this question


  • 0

I don't know for a fact, but I would guess it's not doing bounds checking in release mode. I would also guess that Microsoft would add a faster sort to .NET compared to what's in the standard C++ library.

  • 0

I don't know for a fact, but I would guess it's not doing bounds checking in release mode.

It is, apparently:


[CODE]
cppArray[i] = randNum;
000000dd mov eax,dword ptr [ebp-8]
000000e0 mov edx,dword ptr [ebp-1Ch]
000000e3 mov ecx,dword ptr [ebp-24h]
000000e6 mov dword ptr [edx+eax*4],ecx
netArray[i] = randNum;
000000e9 mov eax,dword ptr [ebp-8]
000000ec mov edx,dword ptr [ebp-5Ch]
000000ef cmp eax,dword ptr [edx+4]
000000f2 jb 000000F9
000000f4 call 711471E8
000000f9 mov ecx,dword ptr [ebp-24h]
000000fc mov dword ptr [edx+eax*4+8],ecx
[/CODE]

Pretty sure the cmp/jb/call there is a bounds check and a call to raise an exception if it fails.
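
Conceptually the managed store expands to something like this; a hand-written sketch of what the JIT is doing, not actual runtime code, and the function names are made up:

[CODE]
// Hand-written sketch, assuming a C++/CLI translation unit with
// "using namespace System;" in scope. Function names are illustrative.
void nativeStore(int* cppArray, int i, int randNum) {
    cppArray[i] = randNum;   // plain store, no check
}

void managedStore(array<int>^ netArray, int i, int randNum) {
    // The cmp/jb pair compares i against the Length field stored in the
    // array object; the call raises IndexOutOfRangeException on failure.
    if ((unsigned)i >= (unsigned)netArray->Length)
        throw gcnew IndexOutOfRangeException();
    netArray[i] = randNum;
}
[/CODE]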

  • 0

I'm trying to compare the performance of the standard sorting facilities of .NET and C++, just for fun mainly. [...] I'm posting this because I can't believe the results. In release mode, optimizing for speed, Array.Sort() takes 89.6% of the time std::sort() takes on my machine. std::sort is supposed to be faster, if only because C++ doesn't perform a bounds check on each random access into an array and .NET does. At this point I suspect there's something wrong with my testing methodology, perhaps due to compiling with the /clr switch? I don't know, so if you have a better idea let me know.

The .NET code might be optimised for better CPU register usage, or it might have an optimised algorithm. It might also make more efficient use of memory, such as consecutive memory accesses or memory paragraphs.

What about checking that the total number of bytes in each array is the same? On some compilers and CPUs an int is 32 bits, on others 64 bits; checking the array sizes in .NET and C++ will tell you whether they are using the same size of int.
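
Something like the following, dropped into the benchmark, would check it (just a sketch, reusing the names from the code above):

[CODE]
// Sanity check that the native and managed element types are the same size,
// so both arrays hold the same number of bytes. Uses the names from the
// benchmark above (ARRAY_SIZE, netArray).
static_assert(sizeof(int) == sizeof(System::Int32), "element sizes differ");

Console::WriteLine(L"C++ array: {0} bytes, .NET array: {1} bytes",
    (System::Int64)(ARRAY_SIZE * sizeof(int)),
    (System::Int64)(netArray->Length * sizeof(System::Int32)));
[/CODE]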

.NET programs might use Windows hooks to allow more efficient processing. The C++ program might have to do everything itself.

  • 0

Uh... std::sort is supposed to do a merge sort (of some type), I think, according to the standards, while Array.Sort uses quicksort. std::sort is supposed to handle the worst case much faster, while quicksort is faster on average. The worst-case thing is to prevent attacks where an attacker hangs a system by sending a job that will take forever to sort. Use a real C++ quicksort if you want to compare further... I don't think qsort is always an actual quicksort the way Array.Sort is.

sauce: http://msdn.microsof...y/6tf1f0bc.aspx

C++/CLI is really nice for interop, isn't it? It's amazing. But for the highest performance you need a vectorizing compiler like ICC.

  • 0
> Or it might have an optimised algorithm.

They both use QuickSort.

> What about checking that the total number of bytes in each array is the same? On some compilers and CPUs an int is 32 bits, on others 64 bits; checking the array sizes in .NET and C++ will tell you whether they are using the same size of int.

They are both using System.Int32, which is guaranteed to be 4 bytes.

I did some tests with various integral and floating-point types, and here are the results:

Type     C++      .NET
Byte     29 ms    43 ms
Int32    87 ms    78 ms
Int64    132 ms   99 ms
Single   99 ms    193 ms
Double   101 ms   203 ms

Very interesting! C++ seems strangely slow on Int32 and Int64, but is roughly twice as fast as .NET on the floating-point types, and clearly faster on Byte as well. I took a quick look at VS2010's <algorithm> and it apparently uses the same sort for all types, no specialization for float or anything of the sort.

> .NET programs might use Windows hooks to allow more efficient processing.
Such as? I don't see what you could be referring to.
  • 0

Uh... std::sort is supposed to do a merge sort (of some type) I think according to the standards while Array.Sort uses quicksort.

VS2010's <algorithm>:

[CODE]
template<class _RanIt,
    class _Diff> inline
    void _Sort(_RanIt _First, _RanIt _Last, _Diff _Ideal)
    {   // order [_First, _Last), using operator<
    _Diff _Count;
    for (; _ISORT_MAX < (_Count = _Last - _First) && 0 < _Ideal; )
        {   // divide and conquer by quicksort
        _STD pair<_RanIt, _RanIt> _Mid =
            _Unguarded_partition(_First, _Last);
        _Ideal /= 2, _Ideal += _Ideal / 2;  // allow 1.5 log2(N) divisions
        if (_Mid.first - _First < _Last - _Mid.second)
            {   // loop on second half
            _Sort(_First, _Mid.first, _Ideal);
            _First = _Mid.second;
            }
        else
            {   // loop on first half
            _Sort(_Mid.second, _Last, _Ideal);
            _Last = _Mid.first;
            }
        }
    if (_ISORT_MAX < _Count)
        {   // heap sort if too many divisions
        _STD make_heap(_First, _Last);
        _STD sort_heap(_First, _Last);
        }
    else if (1 < _Count)
        _Insertion_sort(_First, _Last); // small
    }
[/CODE]

Lovely, isn't it? :laugh: Anyway, it's basically quicksort, but it resorts to heap sort or insertion sort when it detects that quicksort would be non-optimal (quicksort's worst case is O(n^2)).

The C++ standard doesn't specify what algorithm std::sort should use, only that it should be O(n log n) on average.
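
If you strip away the _Ugly _Names, the shape of it is roughly this; a simplified sketch of the idea, not the actual MSVC code:

[CODE]
#include <algorithm>

// Simplified sketch of the introsort idea: quicksort, but give up and switch
// to heap sort once the recursion budget runs out, and let small ranges be
// handled by a simple sort. This is an illustration, not the MSVC code.
const int SMALL_RANGE = 32;

void introsortSketch(int* first, int* last, int depthBudget) {
    while (last - first > SMALL_RANGE) {
        if (depthBudget == 0) {
            // Too many partitions: heap sort guarantees O(n log n) worst case.
            std::make_heap(first, last);
            std::sort_heap(first, last);
            return;
        }
        --depthBudget;
        int pivot = first[(last - first) / 2];
        int* mid = std::partition(first, last,
            [pivot](int x) { return x < pivot; });
        introsortSketch(mid, last, depthBudget); // recurse into one half...
        last = mid;                              // ...and loop on the other
    }
    // Small range left over: the real code uses insertion sort here.
    std::sort(first, last);
}

// Usage, with a depth budget of roughly a couple of log2(N) levels:
//   introsortSketch(data, data + n, 2 * 20); // e.g. for n around 1,000,000
[/CODE]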

  • 0

So .NET uses intro sort. If C++ does use merge sort, then it would do fewer comparisons on average. Comparing floats is more expensive, but that alone shouldn't make std::sort twice as fast. However, despite heap sort having an average runtime of O(n log n), it's much slower on average compared to merge sort or quick sort because it doesn't make efficient use of the CPU cache. So if Array.Sort() is falling back on heap sort, that could explain the slower benchmark.

  • 0

Uh... std::sort is supposed to do a merge sort (of some type), I think, according to the standards, while Array.Sort uses quicksort. std::sort is supposed to handle the worst case much faster, while quicksort is faster on average. The worst-case thing is to prevent attacks where an attacker hangs a system by sending a job that will take forever to sort. Use a real C++ quicksort if you want to compare further... I don't think qsort is always an actual quicksort the way Array.Sort is.

sauce: http://msdn.microsof...y/6tf1f0bc.aspx

C++98 doesn't specify an algorithm. All it specifies is the complexity: "Approximately N log N (where N == last - first) comparisons on the average" (section 25.3.1.1, ISO/IEC 14882:1998). Most C++ implementations of std::sort are quicksort-based, partially because it can be done in-place.

C++/CLI is really nice for interop, isn't it? It's amazing. But for the highest performance you need a vectorizing compiler like ICC.

Auto-vectorising won't help with sort; in general, it won't help with highly branchy, non-linear code. On the other hand, a compiler like ICC will do interprocedural optimisation (IPO) and profile-guided optimisation, which are generally helpful.

  • 0

So .NET uses intro sort. If C++ does use merge sort, then it would do fewer comparisons on average. Comparing floats is more expensive, but that alone shouldn't make std::sort twice as fast. However, despite heap sort having an average runtime of O(n log n), it's much slower on average compared to merge sort or quick sort because it doesn't make efficient use of the CPU cache. So if Array.Sort() is falling back on heap sort, that could explain the slower benchmark.

The code I posted was std::sort, so it's std::sort that uses introsort; all Microsoft says about Array.Sort is that it uses quicksort. It's not clear to me, though, in which cases std::sort falls back to heap sort:

[CODE]
if (_ISORT_MAX < _Count)
    {   // heap sort if too many divisions
[/CODE]

That's the relevant code but I don't really understand the condition?

By the way I've revised the code so it benches a few different numerical types one after the other:

[CODE]
// Testing .Net Array.Sort() vs C++'s std::sort on both languages' standard arrays.
#include "stdafx.h"
#include <array>
#include <algorithm>
using namespace System;
using namespace System::Diagnostics;

const int ARRAY_SIZE = 1000000;
const int NUM_LOOPS = 20;

void generateArrays(Byte* cppArray, array<Byte>^ netArray) {
    auto randGen = gcnew Random();
    randGen->NextBytes(netArray);
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        cppArray[i] = netArray[i];
    }
}

void generateArrays(Single* cppArray, array<Single>^ netArray) {
    auto randGen = gcnew Random();
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        auto randNum = (Single)randGen->NextDouble();
        cppArray[i] = netArray[i] = randNum;
    }
}

void generateArrays(Double* cppArray, array<Double>^ netArray) {
    auto randGen = gcnew Random();
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        auto randNum = randGen->NextDouble();
        cppArray[i] = netArray[i] = randNum;
    }
}

void generateArrays(Int32* cppArray, array<Int32>^ netArray) {
    auto randGen = gcnew Random();
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        auto randNum = randGen->Next();
        cppArray[i] = netArray[i] = randNum;
    }
}

void generateArrays(Int64* cppArray, array<Int64>^ netArray) {
    auto randGen = gcnew Random();
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        auto randNum = (Int64)randGen->Next();
        cppArray[i] = netArray[i] = randNum;
    }
}

template<typename T>
void benchmark() {
    auto cppArray = new T[ARRAY_SIZE];
    auto netArray = gcnew array<T>(ARRAY_SIZE);
    Double totalTimeCpp = 0.0;
    Double totalTimeNet = 0.0;

    Console::Write("Testing {0}", T::typeid);

    for (int i = 0; i < NUM_LOOPS; ++i) {
        generateArrays(cppArray, netArray);

        auto stopWatch = Stopwatch::StartNew();
        std::sort(cppArray, cppArray + ARRAY_SIZE);
        stopWatch->Stop();
        totalTimeCpp += (Double)stopWatch->ElapsedMilliseconds;

        stopWatch = Stopwatch::StartNew();
        Array::Sort(netArray);
        stopWatch->Stop();
        totalTimeNet += (Double)stopWatch->ElapsedMilliseconds;

        // progress indicator and sanity check
        Console::Write(cppArray[0] < netArray[ARRAY_SIZE - 1] ?
            "." :
            "wuuuuuuut?!");
    }

    Console::WriteLine(L"\nAverage time C++: {0} milliseconds.", totalTimeCpp / (double)NUM_LOOPS);
    Console::WriteLine(L"Average time .NET: {0} milliseconds.\n", totalTimeNet / (double)NUM_LOOPS);
}

int main() {
    benchmark<Byte>();
    benchmark<Int32>();
    benchmark<Int64>();
    benchmark<Single>();
    benchmark<Double>();

    Console::WriteLine("All done.");
    Console::ReadKey(false);
    return 0;
}
[/CODE]

  • 0

The code I posted was std::sort, so it's std::sort that uses introsort; all Microsoft says about Array.Sort is that it uses quicksort. It's not clear to me, though, in which cases std::sort falls back to heap sort:

[CODE]
if (_ISORT_MAX < _Count)
    {   // heap sort if too many divisions
[/CODE]

That's the relevant code but I don't really understand the condition?

That condition chooses between heap sort and insertion sort. _ISORT_MAX is a constant that ensures insertion sort is only used on small sublists.

The relevant condition is in the loop, specifically the _Ideal variable:

[CODE]
for (; _ISORT_MAX < (_Count = _Last - _First) && 0 < _Ideal; )
    ...
    _Ideal /= 2, _Ideal += _Ideal / 2;  // allow 1.5 log2(N) divisions
[/CODE]
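
To get a feel for how many partitioning levels that allows before the fallback, here's a tiny standalone program that just mimics the bookkeeping (assuming _Ideal starts at N, which is what std::sort appears to pass in; this is not the real header):

[CODE]
#include <iostream>

// Mimics the _Ideal bookkeeping from the loop quoted above: start at N and
// apply "_Ideal /= 2, _Ideal += _Ideal / 2" (roughly *= 3/4) once per
// partitioning level, counting how many levels pass before it hits 0 and
// the heap-sort branch would be taken (assuming the range is still bigger
// than _ISORT_MAX at that point).
int main() {
    long long ideal = 1000000; // same as ARRAY_SIZE in the benchmark
    int levels = 0;
    while (ideal > 0) {
        ideal /= 2;
        ideal += ideal / 2;
        ++levels;
    }
    std::cout << "heap sort fallback allowed after " << levels
              << " partitioning levels" << std::endl;
    return 0;
}
[/CODE]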

But this is all irrelevant anyway; I didn't realize that what you posted was the C++ code.

  • 0

> .NET programs might use Windows hooks to allow more efficient processing.

> Such as? I don't see what you could be referring to.

It can depend on whether the compiler uses heap, stack or virtual (disk) memory, and on whether the algorithm uses a loop or recursive function calls.

Linux has mmap() but Windows doesn't. Some programs might use HeapAlloc() (Windows), while others might use plain malloc(). I don't have any good examples, but here's a page which talks about the different types of memory calls in Windows.

http://www.mofeel.net/1147-comp-os-ms-windows-programmer-win32/1326.aspx

  • 0

That shouldn't have any significant influence, because the algorithm needs to make at most one big memory allocation, and only if it's not in-place. It's not like it's constantly allocating and freeing memory; if it were, it would be very slow.

Thanks for the info though.

  • 0

11 February 2012 - 01:52

No offense Dr_Asik, but come on: it was Saturday when you posted this... I think this would be better suited to a workday, and done for some actual purpose, not just for fun.

Example: I'm currently watching videos that are supposed to prepare me for the CCENT exam to get my CCNA certification. I don't do that "just for fun".

I mean NO OFFENSE at all by this post, Dr_Asik. Everyone is free to do whatever they want with their time and life. Just my opinion :)

On topic: it is surprising (if I read it correctly) that the C# implementation is faster than the C++ one. If I read it incorrectly, ignore me.

  • 0

Don't you think you might be overdoing it with the C++11 type inference?

On the sort itself, a few notes (a sketch of the first two is below):

1. Have you tried providing your own comparison function? return x < y;

2. It would be interesting to compare the results against C's qsort().

3. Have you tried using another compiler/STL?
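
Here's roughly what points 1 and 2 could look like; just a sketch meant to sit next to the existing sorts, and the names are made up:

[CODE]
#include <algorithm>
#include <cstdlib>

// Sketch for points 1 and 2: std::sort with an explicit comparison function,
// and the equivalent qsort() call. Meant to be timed next to the existing
// std::sort/Array::Sort calls; regenerate the data before each sort so they
// all work on unsorted input. Names are illustrative.
inline bool lessThan(int x, int y) { return x < y; }

int compareInts(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y); // negative, zero or positive, as qsort expects
}

void sortVariants(int* data, int n) {
    std::sort(data, data + n, lessThan);           // 1. user-supplied comparison
    std::qsort(data, n, sizeof(int), compareInts); // 2. C's qsort
}
[/CODE]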

  • 0

> No offense Dr_Asik, but come on: it was Saturday when you posted this... I think this would be better suited to a workday, and done for some actual purpose, not just for fun.

Most code I'm proud of having written was written for fun.
> Don't you think you might be overdoing it with the C++11 type inference?

No... it's a compile-time feature and has no bearing on execution. It makes the code cleaner by eliminating redundancies. C# (a similar language) has had type inference since VS2008, and I believe it should be used whenever possible, with a few exceptions. I don't want to turn this into a debate on type inference.

> 1. Have you tried providing your own comparison function? return x < y;
>
> 2. It would be interesting to compare the results against C's qsort()

All the code's there; just create a C++/CLI project, copy-paste and modify at will.

> 3. Have you tried using another compiler/STL?

I'd have to change the approach, because only MSVC supports C++/CLI.

> Also try a different compiler than Visual Studio (MinGW using Code::Blocks for example).

Same remark as above.

> Test with 100 million at least. 1 million is nothing for computers these days. (No, not kidding.)

Results are consistent with what I presented here even with much larger array sizes. Besides, 100-200 ms is not an insignificant amount of time for a CPU-bound operation.
This topic is now closed to further replies.