Remember old movies, where the hero has a blurry photo of a super villain, taps a few keys, and his computer sharpens the pixels until the culprit's face is clearly identifiable? For me, and for many of us, it was hard to imagine that in the real world we would ever get our hands on software that could actually pull real detail out of a badly degraded picture. But it's the 21st century, and Google has made its mark here too. This week Google unveiled a "prototype" machine learning system named RAISR, which stands for Rapid and Accurate Image Super-Resolution, and whose main job is to "fill in pixels" in blurry, low-resolution pictures so they come out visibly sharper and enhanced compared to the originals.
RAISR sits close to the current practice of "upsampling," which enlarges an image by inserting new pixels in between the ones the camera actually captured. Since those inserted pixels are guessed rather than captured, naive upsampling tends to leave the enlarged picture looking degraded or blurred.
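To make "fixed rule" upsampling concrete, here is a minimal sketch (my own illustration, not Google's code): a 2x nearest-neighbour enlargement of a tiny grayscale image, where every new pixel is filled in the same way regardless of what the image contains.

```python
# Fixed-rule upsampling sketch: enlarge a grayscale image (a list of rows
# of 0-255 brightness values) by simply repeating each pixel. The rule
# never looks at the image content, which is why results look blocky/blurry.

def upsample_2x(image):
    """Double width and height by repeating each pixel (a fixed rule)."""
    result = []
    for row in image:
        stretched = [value for value in row for _ in (0, 1)]  # repeat each column
        result.append(stretched)
        result.append(list(stretched))  # repeat the whole row as well
    return result

tiny = [
    [0, 255],
    [255, 0],
]
for row in upsample_2x(tiny):
    print(row)
# → [0, 0, 255, 255]
#   [0, 0, 255, 255]
#   [255, 255, 0, 0]
#   [255, 255, 0, 0]
```

Real upsamplers use smoother fixed rules (bilinear or bicubic interpolation), but the key point is the same: the rule is identical for every pixel.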
But plain upsampling makes images bigger by filling in new pixel values using fixed rules, whereas RAISR analyzes what it is working on and adapts accordingly. It can actually recognize "edges," the parts of a picture where colors change sharply and which make up the corners and outlines of the subject.
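The idea of detecting an "edge" can be sketched simply: an edge is a place where neighbouring pixel values differ sharply. This toy detector (an assumption for illustration, not RAISR's actual learned filters) shows the kind of signal an edge-adaptive upscaler can use to treat edge pixels differently from smooth regions.

```python
# Toy edge detector: scan one row of grayscale pixel values and report
# where the brightness jump between neighbours exceeds a threshold.
# An edge-aware upscaler can sharpen along such jumps instead of
# applying one fixed interpolation rule everywhere.

def horizontal_edges(row, threshold=100):
    """Return indices i where |row[i+1] - row[i]| exceeds the threshold."""
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) > threshold]

scanline = [10, 12, 11, 200, 205, 203]  # smooth region, then a sharp jump
print(horizontal_edges(scanline))  # → [2]  (the jump is between indices 2 and 3)
```

RAISR itself learns many small filters and picks one per pixel based on local edge properties; this sketch only shows the "look at the content first" step that separates it from fixed-rule upsampling.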
All this leads to pictures that are more vivid and less blurred than traditional upsampling produces. Looking ahead, all I can promise is that there are going to be fewer low-resolution pictures; I can't say the same about videos, though.