int.MinValue / -1 results in implementation-defined behavior according to the C# specification:
7.8.2 Division operator
If the left operand is the smallest representable int or long value and the right operand is –1, an overflow occurs. In a checked context, this causes a System.ArithmeticException (or a subclass thereof) to be thrown. In an unchecked context, it is implementation-defined as to whether a System.ArithmeticException (or a subclass thereof) is thrown or the overflow goes unreported with the resulting value being that of the left operand.
var x = int.MinValue;
var y = -1;
Console.WriteLine(unchecked(x / y));
This throws an OverflowException on .NET 4.5 (32-bit), but it does not have to.
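To make the snippet self-contained, the two contexts the quoted spec text distinguishes can be compared directly. This is only a sketch: catching ArithmeticException covers any spec-conforming subclass, and on most .NET runtimes both divisions happen to throw OverflowException:

```csharp
using System;

int x = int.MinValue;
int y = -1;

// Checked context: the spec requires a System.ArithmeticException
// (or a subclass, which in .NET is OverflowException).
try { Console.WriteLine(checked(x / y)); }
catch (ArithmeticException e) { Console.WriteLine($"checked: {e.GetType().Name}"); }

// Unchecked context: implementation-defined; the division may throw the
// same exception or print -2147483648 (the left operand).
try { Console.WriteLine(unchecked(x / y)); }
catch (ArithmeticException e) { Console.WriteLine($"unchecked: {e.GetType().Name}"); }
```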
Why does the specification leave the outcome implementation-defined? Here's the case against doing that: on x86, the idiv instruction always results in an exception in this case.
Also interesting is the fact that if x / y is a compile-time constant, we indeed get unchecked(int.MinValue / -1) == int.MinValue:
Console.WriteLine(unchecked(int.MinValue / -1)); //-2147483648
This means that x / y can have different behaviors depending on the syntactic form being used (and not only depending on the values of x and y). This is allowed by the specification, but it seems like an unwise choice. Why was C# designed like this?
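A sketch contrasting the two syntactic forms with the same values (the names cx and cy for the constant variant are illustrative):

```csharp
using System;

// Constant operands: the C# compiler folds the division itself, taking the
// "overflow goes unreported" option, so no division runs at runtime.
const int cx = int.MinValue;
const int cy = -1;
Console.WriteLine(unchecked(cx / cy)); // -2147483648

// Variable operands: an actual division is emitted, and whether it throws
// in an unchecked context is implementation-defined.
int x = int.MinValue;
int y = -1;
try { Console.WriteLine(unchecked(x / y)); }
catch (OverflowException) { Console.WriteLine("OverflowException"); }
```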
A similar question points out where in the specification this exact behavior is prescribed, but it does not (sufficiently) answer why the language was designed this way. Alternative choices are not discussed.
This is a side-effect of the C# Language Specification's bigger brother, Ecma-335, the Common Language Infrastructure specification. Partition III, section 3.31 describes what the div opcode does. A spec that the C# spec very often has to defer to, pretty inevitable. It specifies that div may throw but does not demand it.
Otherwise a realistic assessment of what real processors do. And the one that everybody uses is the weird one. Intel processors are excessively quirky about overflow behavior; they were designed back in the 1970s with the assumption that everybody would use the INTO instruction. Nobody does, a story for another day. The processor doesn't ignore overflow on an IDIV, however, and raises the #DE trap; can't ignore that loud bang.
Pretty tough to write a language spec on top of a woolly runtime spec on top of inconsistent processor behavior. Little that the C# team could do with that but forward the imprecise language. They already went beyond the spec by documenting OverflowException instead of ArithmeticException. Very naughty. They had a peek.
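The "beyond the spec" part is narrower than it sounds: OverflowException derives from ArithmeticException, so documenting the subclass still satisfies the spec's "(or a subclass thereof)" wording. This is easy to verify:

```csharp
using System;

// OverflowException (and DivideByZeroException) are subclasses of
// ArithmeticException, so throwing them conforms to the spec text.
Console.WriteLine(typeof(OverflowException).IsSubclassOf(typeof(ArithmeticException)));     // True
Console.WriteLine(typeof(DivideByZeroException).IsSubclassOf(typeof(ArithmeticException))); // True
```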
A peek that revealed the practice. It is very unlikely to be a problem; the jitter decides whether or not to inline. And the non-inlined version throws, so the expectation is that the inlined version does as well. Nobody has been disappointed yet.
A principal design goal of C# is reputedly the "Law of Minimum Surprise". According to this guideline the compiler should not attempt to guess the programmer's intent, but rather should signal to the programmer that additional guidance is needed to properly specify intent. This applies to the case of interest because, within the limitations of two's-complement arithmetic, the operation produces a very surprising result: Int32.MinValue / -1 evaluates to Int32.MinValue. An overflow has occurred, and an unavailable 33rd bit, of 0, would be required to properly represent the correct value of Int32.MaxValue + 1.
As expected, and noted in your quote, in a checked context an exception is raised to alert the programmer to the failure to properly specify intent. In an unchecked context the implementation is allowed to either behave as in the checked context, or to allow the overflow and return the surprising result. There are certain contexts, such as bit-twiddling, in which it is convenient to work with signed ints but where the overflow behaviour is actually expected and desired. By checking the implementation notes, the programmer can determine whether this behaviour is actually as expected.
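A minimal sketch of such a context (the Combine helper is illustrative, not a framework API): a hash combiner wants wrap-around arithmetic, and an explicit unchecked block keeps it working even if the project is compiled with /checked:

```csharp
using System;

// Illustrative hash combiner: overflow is expected and desired here, so the
// explicit unchecked block makes the multiplication wrap instead of throwing,
// even when the whole project is compiled with /checked.
static int Combine(int h1, int h2)
{
    unchecked { return h1 * 31 + h2; }
}

Console.WriteLine(Combine(1, 2));               // 33
Console.WriteLine(unchecked(int.MaxValue + 1)); // -2147483648: wraps to int.MinValue
```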