As Glenn might say, “Faster, please.” From Tech Times:

UC Berkeley computer scientists and vision scientists worked together to create vision-correcting displays: screens that compensate for the user's visual impairment. The team used algorithms to adjust the intensity of light from each pixel in an image and then, through a process called deconvolution, passed the light through a pinhole array to produce a sharp image.

“Our technique distorts the image such that, when the intended user looks at the screen, the image will appear sharp to that particular viewer,” said Brian Barsky, UC Berkeley professor of computer science and vision science.

More at the link.

I should point out that my Sony NEX-6 camera already lets you adjust the electronic viewfinder, for those of us who don’t want to wear reading glasses while using it.