Not quite, although that was the promise in the title of an article in the March 2010 issue of “Wired” magazine about a recently developed technique called “Compressed Sensing,” or CS. That isn’t to say CS isn’t a really cool algorithm for certain applications – it is, and you should know what it can do – but it can’t make something out of nothing.

To understand the concept behind CS, think of a digital image. Different compression technologies (JPEG, GIF, PNG, etc.) are used to shrink these images so they take less memory to store and process. Basically, these technologies work by finding clever ways to represent redundant or repetitive data points with a much smaller number of data points. For example, a large area of a single color can be saved without recording each pixel individually, since the color is identical throughout. But even though compression can reduce an image’s size significantly, the compressed file still holds all of the essential information to display the image at its original detail and resolution (or, for lossy formats like JPEG, close enough that the eye can’t tell the difference).
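To make the redundancy idea concrete, here is a minimal sketch in Python of run-length encoding, one of the simplest lossless compression schemes. Real formats like PNG and GIF use more sophisticated variations, but the principle is the same:

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1][1] += 1        # extend the current run
        else:
            encoded.append([p, 1])     # start a new run
    return encoded

def rle_decode(encoded):
    """Expand (value, count) pairs back into the original sequence."""
    return [p for p, count in encoded for _ in range(count)]

# A scanline with a large single-color area compresses dramatically:
scanline = ["blue"] * 95 + ["white"] * 5
packed = rle_encode(scanline)          # [['blue', 95], ['white', 5]]
assert rle_decode(packed) == scanline  # nothing essential is lost
```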

Therefore, a digital image that is compressed to, say, 10% of its original size has essentially discarded 90% of its original data as unnecessary. So if 90% of the data that was originally collected by the image sensors is unnecessary, why collect it in the first place? Why not just collect the essential 10%?

The idea behind CS is that you can collect digital image data using far fewer physical sensors than would normally be required and then use the CS algorithm to reconstruct the digital image as if you had used a conventional number of sensors. You avoid the expensive overhead of collecting all the data, analyzing it, and then discarding most of it; instead, you collect only a small fraction of it and let CS reconstruct the rest. And CS can do a remarkable job of reconstructing an image from very little data.
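Here is a toy sketch of the sampling side in Python; the signal length, sparsity, and measurement count are made-up illustrative values. Each “sensor” records one random linear combination of the scene, so only a tenth of the usual number of values is ever collected:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000   # length of the "full" signal we never sample completely
k = 10     # the signal is sparse: only k of its n entries are nonzero
m = 100    # we collect only m measurements (10% of n)

# A sparse test signal standing in for the scene being imaged.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Each "sensor" records one random linear combination of the scene,
# so the measurement matrix A has one row per physical measurement.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true   # the only data we actually collect: 100 numbers, not 1000
```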

The CS algorithm isn’t just applicable to digital images. It can be applied to all kinds of digital signals, from music to interstellar radio waves to scrambled radio communications.

CS works based on a concept called “sparsity,” which describes how little significant data is present relative to the space it occupies. Conceptually, a floor that has a few balls spread out over it would be considered sparse, whereas a floor covered with many balls of different colors all touching each other would not. It turns out that reconstructing an image using CS means finding the sparsest image that is consistent with the measurements that were actually collected.
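Continuing the sketch above, here is one standard textbook way to carry out that search for the sparsest consistent answer: iterative soft-thresholding (ISTA), a simple solver for the L1-regularized least-squares problem that CS reconstruction is typically posed as. This is a generic method chosen for illustration, not the specific algorithm the Wired article describes:

```python
import numpy as np

def ista(A, y, lam=0.01, steps=5000):
    """Iterative soft-thresholding (ISTA): a simple solver for
        min_x  0.5 * ||A @ x - y||^2 + lam * ||x||_1,
    whose L1 penalty pushes the answer toward the sparsest signal
    consistent with the measurements y."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2     # step-size bound (Lipschitz constant)
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink toward sparsity
    return x

# Using A, y, and x_true from the sampling sketch above:
x_hat = ista(A, y)
print("worst-case entry error:", np.max(np.abs(x_hat - x_true)))
```

With only 100 measurements of a 10-sparse, 1000-sample signal, the recovered x_hat lands very close to x_true, which is the whole point: the L1 penalty singles out the sparse answer among the infinitely many signals that match the undersampled data.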

However, there is one key point that needs to be stressed: CS cannot reconstruct data that isn’t there – you can’t make something out of nothing. In other words, if you take a digital image using far fewer sensors than normal and a critical detail is missed entirely by the sensors, it cannot be recovered using CS. Conventional image compression, by contrast, works well because it looks at all the detail first and throws away only the data it doesn’t need.

Nevertheless, the promise of CS is exciting, particularly in areas where full data collection is difficult or impossible because of the sheer volume of data or physical constraints. Such data sets can be sampled using far fewer sensors than would otherwise be required and then reconstructed using CS at a resolution adequate for extracting the needed information. One of the major challenges in applying CS is determining the minimum number of measurements required to sample a given data set.
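As a rough guide, CS theory says that on the order of k·log(n/k) random measurements suffice to recover a signal with k significant components out of n. The leading constant below is a placeholder assumption of mine; in practice it is calibrated empirically for each application:

```python
import math

def measurements_needed(n, k, C=4.0):
    """Rule-of-thumb from CS theory: roughly m >= C * k * log(n / k)
    random measurements suffice to recover a k-sparse signal of
    length n. The constant C is a placeholder assumption."""
    return math.ceil(C * k * math.log(n / k))

# For the 1000-sample, 10-sparse example above:
print(measurements_needed(n=1000, k=10))  # 185 with C=4
```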

I have been thinking about how CS technology might be applied to business, process development, and manufacturing data. Can you think of any potential applications in your business?

Postscript: The original title in Wired magazine was “F_ll _n T_e Bl__ks: A revolutionary algorithm can make something out of nothing.” The online version’s title was changed to “Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples.” Much better.


