I have seen the video. It's a gadget.
Apart from the points Frank mentions, it's about photos stitched together from different lenses, beyond any user control and dependent on the software from the camera's makers.
It has a fixed aperture, and the problems with depth of field are not answered. And looking at the size of the 'prototype', it's not small either.
Computational imaging has been around for quite some time, but until recently, doing the sort of computations this camera requires would have taken a massive computer and lots of time. Bringing these techniques to a commercial portable device is the main challenge they are facing. We will see next fall how it pans out.
The lack of control over the stitching is not a minus but a plus; you don't want to do this by hand, trust me. Instead, you get control at a higher level over things like depth of field, bokeh characteristics, and focus distance after the picture has been taken, as Frank and the co-founder in the video nicely explained.
When stitching two shots that have parallax (maybe you have tried doing this), you can make only objects in one plane match. That is your virtual focus plane. Objects in front of or behind that plane will have a double image. By mixing the two versions in a smart way, you can control the amount and character of the blur, and having more than two shots makes it smoother. An ordinary lens performs a similar computation, but in an analog way, by combining the electromagnetic waves that make up the light, whereas this camera first digitizes the waves and then combines the resulting data. Phase information is retrieved from the data thanks to the different viewpoints: they can reconstruct a depth map, but don't need a fancy sensor as the LightField camera does, only ordinary cellphone cameras.
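To make that concrete, here is a minimal sketch of the refocusing step in Python/NumPy. It assumes a purely horizontal baseline between the two views and a single global shift; the image arrays and the disparity value are hypothetical, and a real pipeline would also rectify the views and estimate per-pixel disparity rather than picking one shift for the whole frame.

```python
# A minimal sketch of synthetic refocusing from a two-camera pair.
# Assumption: the two views differ only by a horizontal parallax shift.
import numpy as np

def refocus(left: np.ndarray, right: np.ndarray, disparity: int) -> np.ndarray:
    """Shift the right view by `disparity` pixels and average it with the left.

    Objects whose parallax equals `disparity` land on the virtual focus
    plane and align sharply; everything in front of or behind that plane
    shows up doubled, which reads as blur once many views are averaged.
    """
    shifted = np.roll(right, disparity, axis=1)  # horizontal shift only
    return (left.astype(np.float32) + shifted.astype(np.float32)) / 2.0

# Usage: sweeping `disparity` re-focuses the same capture after the fact.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, size=(240, 320, 3)).astype(np.uint8)
    right = np.roll(left, 4, axis=1)        # fake a 4-pixel parallax
    in_focus = refocus(left, right, 4)      # plane at disparity 4: aligned
    out_of_focus = refocus(left, right, 0)  # plane at disparity 0: doubled
```

With real captures, averaging more than two shifted views smooths the doubled edges into something closer to an ordinary lens's bokeh, which is the effect described above.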