Consider the following code:

#include <stdio.h>

void f(void) {
    unsigned char x[1]; /* intentionally uninitialized */
    x[0] ^= x[0];
    printf("%d\n", x[0]);
    printf("%d\n", x[0]);
    return;
}

In this example, the unsigned char array x is intentionally uninitialized but cannot contain a trap representation because it has a character type. Consequently, the value is both indeterminate and an unspecified value. The bitwise exclusive OR operation, which would produce a zero on an initialized value, will produce an indeterminate result, which may or may not be zero. An optimizing compiler has the license to remove this code because it has undefined behavior. The two printf calls exhibit undefined behavior and, consequently, might do anything, including printing two different values for x[0].

The programmer has clearly attempted to set x[0] = 0. The compiler here is claimed to be “optimizing” by removing the xor. So what exactly is made faster or more efficient by this “optimization”? (Note that there is no “volatile” on the variable, so the compiler is permitted to optimize using the result of the first read.) Let me ask an easier question: is there an example of a program in which such an “optimization” results in a better executable by some sensible measure? If all we care about is execution speed, without regard to program correctness, compilers could optimize by generating nothing at all. Wow, that null program sure runs fast!
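For comparison, the volatile-qualified variant the parenthetical alludes to would look something like the sketch below (the function name is mine). The qualifier forces the compiler to actually perform every read and write of x[0], although it does not change the “indeterminate value” argument itself.

#include <stdio.h>

void f_volatile(void) {
    volatile unsigned char x[1]; /* still intentionally uninitialized */
    x[0] ^= x[0];                /* both the read and the write must be performed */
    printf("%d\n", x[0]);        /* each printf argument requires a fresh read of x[0] */
    printf("%d\n", x[0]);
}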

Some compiler writers would prefer to eliminate trap representations altogether and simply make any uninitialized read undefined behavior—the theory being, why prevent compiler optimizations because of obviously broken code? The counterargument is, why optimize obviously broken code and not simply issue a fatal diagnostic?

Why optimize obviously broken code? Why declare a working and common usage in C to be broken in the first place?

According to the current WG14 Convener, David Keaton, reading an indeterminate value of any storage duration is implicit undefined behavior in C, and the description in Annex J.2 (which is non-normative) is incomplete. This revised definition of the undefined behavior might be stated as “The value of an object is read while it is indeterminate.”
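In code, that wording covers a read like the one in this minimal sketch (the function and names are hypothetical):

void g(void) {
    int i;     /* automatic storage duration, never initialized */
    int j = i; /* "the value of an object is read while it is indeterminate" */
    (void)j;
}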

And yet:

Uninitialized memory has been used as a source of entropy to seed random number generators in OpenSSL, DragonFly BSD, OpenBSD, and elsewhere [10]. If accessing an indeterminate value is undefined behavior, however, compilers may optimize out these expressions, resulting in predictable values [1].
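The idiom being described is roughly the following hypothetical sketch; the names are invented for illustration, and none of this is the actual OpenSSL, DragonFly BSD, or OpenBSD code.

#include <stddef.h>

unsigned long mix_in_stack_garbage(unsigned long seed) {
    unsigned char junk[64]; /* deliberately left uninitialized */
    for (size_t i = 0; i < sizeof junk; i++)
        seed = seed * 131 + junk[i]; /* each iteration reads an indeterminate value */
    /* If that read is undefined behavior, a compiler may assume it never happens
       and delete the loop, returning a seed with no added entropy at all. */
    return seed;
}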

So the intent of at least some members of the C standard committee is to make production C code fail in some unpredictable manner as an “optimization”.  Despite the best efforts of developers of rival programming languages, C’s advantages have preserved it as an indispensable systems and applications programming language. Whether it can survive the C standards process is a different question.

As a final note: this type of design failure is not all that unusual, but it’s the job of engineering management to block it. The proposal to silently make working crypto code fail in order to enable some type of efficiency for the compiler should be a non-starter.

 

Thanks to John Regehr for bringing this report from the weird underworld of C standard development to my attention. Also this


13 thoughts on “The C standard committee effort to kill C continues”

  • June 17, 2017 at 1:54 am

    Using undefined behaviour as a source of entropy is the most moronic thing imaginable.
    — Yuen Long Kau Hui

    In reply: But it is not undefined in the current standard – only indeterminate.

  • June 17, 2017 at 2:15 am

    Unfortunately, whether or not this misbehaviour becomes part of the standard, it’s something that compilers are already doing. I first came across a discussion of accesses with undefined behaviour resulting in code being optimised out not long ago, in a blog post (https://www.viva64.com/en/b/0306/); the access with undefined behaviour there is calculating a member offset by taking a member address on a null pointer, but it’s essentially the same sort of issue. The example they cite from the Linux kernel doesn’t lead to actually hitting this problem, because the kernel is built with GCC options that turn this optimisation off, but if I understand correctly the optimisation is on by default. My take on this is that, whatever the merit of specifying these sorts of things as undefined behaviour and of reasoning from the non-existence of undefined behaviour for optimisations generally, when it comes to these intersections of them no amount of scolding people for insufficient language lawyering is going to prevent developers from running into them, and I don’t understand what value is being gained in exchange for this.
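    The idiom discussed in that post is roughly the following sketch; the macro name is invented, and offsetof from <stddef.h> is the standard-defined way to obtain the same number.

    #include <stddef.h>
    #include <stdio.h>

    struct s { char a; int b; };

    /* Hand-rolled offset calculation: takes a member's address through a
       null pointer, which is formally undefined behaviour even though it
       usually produces the expected value. */
    #define MY_OFFSETOF(type, member) ((size_t)&(((type *)0)->member))

    int main(void) {
        printf("%zu\n", MY_OFFSETOF(struct s, b)); /* the UB-based idiom */
        printf("%zu\n", offsetof(struct s, b));    /* the standard-defined way */
        return 0;
    }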

  • June 17, 2017 at 3:40 am

    “The programmer has clearly attempted to set x[0]=0”? No, the way a programmer does that is with “x[0] = 0”, not with “x[0] ^= x[0]”. What you are doing is using an indeterminate value, something that’s rarely a good idea. Clean up your code and you won’t have to worry about the dark corners of the standard.
    — Yono

    In reply: According to the definition of C, x ^= x should zero x.

  • June 17, 2017 at 4:40 am

    Don’t optimize errors; report an error and exit.

  • June 17, 2017 at 12:02 pm

    Just compile with -std=c89, -std=c99, or -std=c11 and you should be free from this bullshit.

  • June 17, 2017 at 12:55 pm

    Err, if you can optimize out that code, why not raise an error during compilation?

  • June 17, 2017 at 3:13 pm

    x[0] = 0; is correct, and my opinion is that it is what the programmer should have written. However, the optimizing compiler should be smart enough to know that (at least on Intel architecture) it is faster and considered better practice to initialize it by emitting x[0] ^= x[0]; instead. However, this might not be the case on all architectures, which is why I think the compiler, not the programmer, is in a better position to decide how best to emit it.

  • June 18, 2017 at 4:38 am

    “But it is not undefined in the current standard – only indeterminate.”

    Using an uninitialised value is indeterminate, but that’s not the only thing you have to be aware of. You also need to be sure that the value isn’t a trap representation for the type and that it wasn’t declared “register”. Both of those invoke undefined behaviour and it’s hard to be sure you’re not doing so across many different architectures.

    Also, the quality of the “entropy” is seriously questionable…

    In short, it’s still a foolish thing to do and changing the spec to discourage it even more is a good thing.

  • June 18, 2017 at 7:51 am

    Your description shows exactly the problem with the Standard Committee approach. Instead of focusing on the application, it asks the programmer to navigate an increasingly complex set of unintuitive rules that have no mathematical or engineering coherence. And the payoff is that the compiler can “optimize” incorrect code. Completely pointless. The original rule is perfectly reasonable: automatically allocated variables have indeterminate value. End of story. The goal of the standards developers should be to reduce UB so that the language is not littered with land mines.

  • Pingback: Links 18/6/2017: New Debian Release, Catchup With a Lot of News | Techrights

  • June 19, 2017 at 8:31 am

    I think you’ll only ever encounter these “land mines” if you insist on crossing the demilitarized zone, to extend the analogy. It baffles me that programmers would use uninitialized memory as a source of entropy unless they are in complete control of the compiler and run-time environment. x[0] ^ x[0] of indeterminate x[0] is neat, but while it demonstrates how counterintuitive the standard can be in these fringe cases, it doesn’t demonstrate a likely actual use case. If anyone relies on any particular behavior from reading uninitialized memory, I sure hope that they only upgrade their compiler and run-time environment, not to mention the standard, after very careful consideration and testing.

  • June 21, 2017 at 7:37 am

    You’re incorrect. It is very difficult to avoid the land mines. The most ridiculous example is that copying a structure using unsigned character pointers is (currently) safe, but the same code using signed characters is undefined. The compiler is free to add padding between elements of a structure to improve alignment – but the result is that the padding bytes are “indeterminate”, and reading indeterminate values is UB – unless you have magic unsigned char pointers and maybe also you walk counterclockwise under a full moon. The counterintuitive nature of the standard is a real problem and I have yet to see a compelling rationale.
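    A sketch of the contrast being described, with an illustrative struct layout and invented names:

    #include <stddef.h>

    struct s { char c; int i; }; /* typically has padding bytes after c */

    /* Byte-wise copy through unsigned char: reading the indeterminate padding
       bytes is tolerated because unsigned char has no trap representations. */
    void copy_bytes(struct s *dst, const struct s *src) {
        const unsigned char *from = (const unsigned char *)src;
        unsigned char *to = (unsigned char *)dst;
        for (size_t i = 0; i < sizeof *src; i++)
            to[i] = from[i];
    }

    /* The same loop written with signed char pointers is the case the comment
       calls undefined: the padding is still indeterminate, and signed char is
       not given the unsigned char exemption. */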

  • Pingback: The C standard versus C and the mother of all hacks. – keeping simple
