The array size is limited to a total of 4 billion elements, and to a maximum index of 0x7FEFFFFF in any given dimension (0x7FFFFFC7 for byte arrays and arrays of single-byte structures).
Why these numbers, and not others?
I mean, they are not round numbers, not int.MaxValue or anything like that. What is the purpose of the "cut" piece? (0x7F_E_FFFFF = 0x7FFFFFFF - 0x100000, or, for byte arrays, 0x7FFFFF_C7 = 0x7FFFFF_FF - 0x38.)
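The arithmetic behind the "cut" pieces can be checked directly. A quick sketch (Python used here purely for the hex math, not as part of the .NET question):

```python
MAX_INT32 = 0x7FFFFFFF  # int.MaxValue = 2**31 - 1

max_array_length = 0x7FEFFFFF       # limit for most element types
max_byte_array_length = 0x7FFFFFC7  # limit for 1-byte elements

# The "cut" pieces relative to int.MaxValue:
print(hex(MAX_INT32 - max_array_length))       # 0x100000 (1 MiB worth of indices)
print(hex(MAX_INT32 - max_byte_array_length))  # 0x38 (decimal 56)
```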
--- edited ---
In the related question (Why is the max size of a byte array 2 GB - 57 B?), the answers talk about overhead. But an overhead of 0x100000 does not look like type or index information, does it? So I cannot accept "because there is runtime overhead" as an answer. It's like explaining that salt tastes salty because it contains a salty substance.
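For what it's worth, the "2 GB - 57 B" figure from that related question lines up exactly with the byte-array constant: the limit is precisely 57 below 2^31. A quick check (Python again, just for the arithmetic):

```python
max_byte_array_length = 0x7FFFFFC7

# 2 GB = 2**31 bytes; the constant sits exactly 57 below that,
# which is where the "2 GB - 57 B" phrasing comes from.
print(2**31 - max_byte_array_length)  # 57
```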
I am guessing one must know the reasoning behind the .NET implementation details to actually explain these numbers?
Out of curiosity, I looked at the source of System.Array and found this:

    // We impose limits on maximum array lenght in each dimension to allow efficient
    // implementation of advanced range check elimination in future.
    // Keep in sync with vm\gcscan.cpp and HashHelpers.MaxPrimeArrayLength.
    // The constants are defined in this method: inline SIZE_T MaxArrayLength(SIZE_T componentSize) from gcscan
    // We have different max sizes for arrays with elements of size 1 for backwards compatibility
    internal const int MaxArrayLength = 0X7FEFFFFF;
    internal const int MaxByteArrayLength = 0x7FFFFFC7;
That was from .NET Core 3.1; I wouldn't be surprised if it is the same in other versions. Apparently the intent is to enable some sort of optimization. Perhaps someone could add more authoritative reasons to this answer or post another.