C ????: allow 16-bit ptrdiff_t again

Submitter: Philipp Klaus Krause
Submission Date: ????-??-??

Summary:

Allow 16-bit ptrdiff_t again.

This allows ptrdiff_t to be 16 bits, partially reverting a change made by C99.

Background:

In C90, it was common to have a 16-bit ptrdiff_t on 16-bit systems (and the standard allowed even smaller ptrdiff_t). C99 raised implementation limits; in particular, one translation limit was raised from "32767 bytes in an object (in a hosted environment only)" to "65535 bytes in an object (in a hosted environment only)". From the C99 rationale it is clear that WG14 intentionally banned hosted implementations on systems with less than 512 KB of RAM. The increase in the implementation limit led to a correspondingly larger ptrdiff_t (minimum 17 bits, required for both freestanding and hosted implementations). The concern that this is inefficient for 16-bit systems was raised, but WG14 chose to go ahead anyway (N849, response to comment 16): "The committee recognizes your concern, but chose to leave the new minimum limits as they are."

WG14 made a bad choice, making the C standard less relevant for embedded systems. C implementations targeting small systems have ignored this aspect of the standard, and still use a 16-bit ptrdiff_t.

A typical example is the STM8, an architecture common in 8-bit µC used in household and automotive applications. It has a 16-bit address space for objects, with the reset vector at address 0x8000 (all known systems have 0x8000 as the lowest ROM address, while RAM and memory-mapped I/O are below). All four mainstream C implementations targeting this system (Raisonance, Cosmic, SDCC, IAR) use a 16-bit ptrdiff_t (Raisonance only aims for C90 conformance, but the others all support newer standards). The situation is similar for other small architectures, e.g. SDCC targeting the Z80, GCC targeting the AVR.

When WG14 raised the implementation limits, "The goal was to allow reasonably large portable programs to be written, […]" (C99 rationale). However, from today's perspective, this seems pointless: users simply use as much memory, and objects as large, as seems reasonable for their tasks, and do not consider translation limits when aiming to write portable code.

On some small systems, OSes are used that provide a filesystem and I/O. On such systems, implementation limits can be all that prevents an implementation from being considered "hosted". It might thus even make sense to just drop some implementation limits back to C90 levels or below.

Do we want to lower the implementation limit for object size for hosted implementations back to the C90 level, and allow a 16-bit ptrdiff_t for both hosted and freestanding implementations?

Proposed changes (against N2596):
§5.2.4.1: Change "65535 bytes in an object" to "32767 bytes in an object".
§7.20.3: Change "PTRDIFF_WIDTH 17" to "PTRDIFF_WIDTH 16".

In case we decide "no" on the above question: Do we want to allow a 16-bit ptrdiff_t for freestanding implementations only?

Proposed change (against N2596): §7.20.3: Change "PTRDIFF_WIDTH 17" to "PTRDIFF_WIDTH 17 // (minimum of 17 applies in hosted environment only)".