Having 19580x12600 pixels in a 29.2x20.2mm sensor, great! Considering the wavelength of visible light, that's roughly twice the minimum theoretical size of a photosite, assuming perfect optics, of course. It might have applications for research, where electron microscopes are sometimes used to "see" things too small for regular light waves.
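A quick back-of-the-envelope check of those numbers (using only the quoted figures; how exactly one defines the "minimum theoretical size" of a photosite is debatable):

```python
# Pixel pitch vs. the wavelength of visible light, from the quoted figures.
sensor_w_mm, sensor_h_mm = 29.2, 20.2     # quoted sensor dimensions
px_w, px_h = 19580, 12600                 # quoted pixel counts

pitch_w_um = sensor_w_mm * 1000 / px_w    # ~1.49 um
pitch_h_um = sensor_h_mm * 1000 / px_h    # ~1.60 um
visible_um = (0.40, 0.70)                 # rough visible-light wavelength range

print(f"pitch: {pitch_w_um:.2f} x {pitch_h_um:.2f} um")
print(f"pitch / wavelength: {pitch_w_um / visible_um[1]:.1f} to {pitch_w_um / visible_um[0]:.1f}")
```

So the pitch comes out around 1.5 um, i.e. roughly 2x the wavelength at the red end of the visible range.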
Sorry if I joined this interesting thread too late :-)
Personally, I consider this a good step in the right direction, but there is still a long way to go before it becomes of transformative value to photography.
Just 3 quick thoughts:
1) Current sensor technology is still largely analog (until the A/D conversion kicks in), with the known broad set of noise sources. Imagine that over time the sensels get so small that each sensel can detect/register a single photon hitting the sensor surface, effectively moving the digital domain much further upstream. No need to check for full well capacity, no need for perfect calibration among the many A/D converters, etc, etc. That removes whole classes of noise sources (a toy illustration follows below).
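To illustrate the point with made-up numbers (not any real sensor): a conventional pipeline stacks read noise and ADC quantization on top of the unavoidable photon shot noise, while an ideal photon-counting sensel leaves only the shot noise.

```python
import numpy as np

rng = np.random.default_rng(0)
photons = 10          # mean photons per sensel during the exposure (made-up number)
n = 1_000_000         # number of sensels simulated
read_noise_e = 3.0    # made-up read noise in electrons for the "analog" pipeline

signal = rng.poisson(photons, n)                    # photon shot noise (unavoidable)

analog = signal + rng.normal(0, read_noise_e, n)    # add analog read noise ...
analog = np.round(analog)                           # ... plus coarse quantization by the ADC

counting = signal                                   # ideal photon counter: the count *is* the signal

print("analog pipeline  std:", analog.std())        # shot + read + quantization noise
print("photon counting  std:", counting.std())      # shot noise only
```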
2) It allows software to address current physical limitations. Take diffraction limits: well known, often an issue, etc. Imagine you have a sensor with such a high resolution that you have 10,000 sensels per Airy disc, allowing the camera to measure the shape, amplitude and size of the Airy disc. If there were only one Airy disc, processing would be simple. Due to the overlapping nature of all the Airy discs in a photographic image, the result is an interference pattern of overlapping Airy discs. With sufficient specialized and parallel CPU resources, the image degradation from diffraction could potentially be compensated computationally, partially or maybe even fully (a toy deconvolution sketch follows after the example below).
A fictitious example:
Internal resolution: 10 Gigapixel on DX
External resolution: 100 MP for the RAW file
Wouldn't it be nice to use your (from a user's perspective) 100 MP camera even at f/16 without being impacted by diffraction?
The current technology approach wouldn't get you there.
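In the simplest reading, the "computational compensation" part is deconvolution with a measured PSF. A rough sketch of the idea (my own toy example, not anything a real camera does: the PSF is a made-up soft-disc stand-in for an Airy pattern, the scene is a few point sources, and real Bayer data, noise and the distance dependence of the blur make this far harder):

```python
import numpy as np
from scipy.signal import fftconvolve

def disc_psf(size=25, radius=4.0):
    """Crude stand-in for an Airy disc: a normalized soft circular blur kernel."""
    y, x = np.mgrid[-size//2 + 1:size//2 + 1, -size//2 + 1:size//2 + 1]
    r = np.hypot(x, y)
    psf = np.clip(1.0 - (r / radius) ** 2, 0, None)   # not a real Airy pattern, just a soft disc
    return psf / psf.sum()

def richardson_lucy(blurred, psf, iters=30):
    """Textbook Richardson-Lucy deconvolution iteration."""
    estimate = blurred.copy()
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iters):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.clip(reblurred, 1e-12, None)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy scene: a few bright points on a dark background.
scene = np.zeros((256, 256))
scene[64, 64] = scene[128, 200] = scene[200, 80] = 1.0

psf = disc_psf()
blurred = np.clip(fftconvolve(scene, psf, mode="same"), 0, None)  # "diffraction" applied
restored = richardson_lucy(blurred, psf)                          # partial computational compensation

print("peak in blurred  image:", blurred.max())   # smeared-out points
print("peak in restored image:", restored.max())  # noticeably higher: the points get sharper again
```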
3) I assume that, sooner or later, the notion of RAW or sensor resolution will split into two resolutions: one internal to the camera and one "visible" to the photographer when selecting the camera's RAW size option. The native chip resolution will no longer have a 1:1 relation to the storage resolution like today. A kind of oversampling. BTW, Nikon's D journey started along those lines: the original D1 sensor used 4-fold binning internally, and the only option for storing the RAW was the 2.7 MP resolution (not the "native" 10.8 MP of the sensor).
This decoupling might open up another option for camera manufacturers. It allows them to have different speeds of progress for the "native & internal" resolution increases that HW technology will provide and the "storage" resolution they will be able to offer to the user (depending on the speed of progress in SW, image processing, etc, etc, ...).
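In code terms, the simplest form of that internal/storage decoupling is plain block binning of the native grid down to the stored grid; the D1's 4-fold binning corresponds to 2x2 blocks, and the fictitious 10 Gigapixel -> 100 MP case above to 10x10 blocks. A minimal sketch (illustrative only; real binning works on Bayer data and often happens on-chip):

```python
import numpy as np

def bin_down(native: np.ndarray, factor: int) -> np.ndarray:
    """Average factor x factor blocks of the native grid into one stored pixel."""
    h, w = native.shape
    h, w = h - h % factor, w - w % factor          # drop edge rows/cols that don't fill a block
    blocks = native[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Stand-in "native" readout, sized like the D1's ~10.8M photosites.
native = np.random.default_rng(1).normal(100, 10, size=(4000, 2700))
stored = bin_down(native, 2)                       # 2x2 binning = 4-fold, D1-style

print(native.shape, "->", stored.shape)            # (4000, 2700) -> (2000, 1350), 10.8 MP -> 2.7 MP
print(native.std(), "->", stored.std())            # per-pixel noise drops, here roughly by a factor of 2
```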
Just thoughts,
Andy