I'm a math student, by the way, and my project is pretty interesting, and the basics of it are simple to understand. It's also very applicable, which is nice, so I thought maybe some people on TL would like to read a little about it, and this way I'm at least thinking about the material.
The topic is called compressed sensing (or compressive sensing) and it is a fairly new field (less than 10 years old). You can just google it and find lots of information, of course, but who doesn't want to read what I have to say? Right?!
I'll talk a bit about the application and why you might care about this. Say you have to get an MRI scan or something of that sort. What they do (more or less) is fire some waves at you and detect what bounces back and what passes through. Then, using the information about what they sent and what they received, a picture is constructed. The hope is that this picture resembles what you look like inside, in some sense, so the doctors can get some useful information. Now, these scans can often take a long time because a lot of measurements are required to get even a decent picture. So wouldn't it be nice if we could take fewer measurements and/or get a better picture? This would mean less time in a crazy magnetic tube for you (the patient) and better information for the doctors, hopefully leading to better diagnoses. This is one of the things that compressed sensing can lead to.
Things will get a little more mathematical as I go on, but some basic linear algebra should be sufficient to get something out of it, and if you know a little more you might get a little more. So the mathematical problem is as follows. There is a fixed unknown vector (the picture of you inside) that we want to discover. The way we gather information about this vector is by applying a linear transformation to it, i.e. applying a matrix to it (this is the waves they fire at you). Now, we could just pick a matrix like the identity matrix, and then we would know our vector, sure. But this would mean measuring every single entry in the vector. If our vector has, say, 1024x768 entries (maybe it encodes a picture with that many pixels), then this is going to take a long time. So we would like to pick a matrix that doesn't measure everything, but in such a way that we can still figure out what the vector was through some mathematics and computation. This gives us an underdetermined system of equations to solve. That is, we will have more unknowns than we have equations! (You know how you can uniquely solve 3 equations with 3 unknowns, 2 equations with 2 unknowns, etc., but you can't do that if you have 2 equations and 3 unknowns - you will have infinitely many solutions and we just want one.) So what do we do? Classic linear algebra, as you will learn in any course, says that this is impossible, and that is true. However, we can impose an extra condition that will give us hope. That condition is sparsity.
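To make the underdetermined situation concrete, here is a tiny made-up example in Python (the matrix and vectors are just illustrative numbers I picked, not a real measurement setup):

```python
import numpy as np

# Toy example (made-up numbers): 2 equations, 3 unknowns.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
x = np.array([3.0, 0.0, 0.0])   # the "true" vector we wish we knew
b = A @ x                        # the measurements we actually see

# Two different vectors both satisfy Ay = b, so the measurements
# alone cannot tell them apart:
y1 = np.array([3.0, 0.0, 0.0])
y2 = np.array([2.0, -1.0, 1.0])
print(np.allclose(A @ y1, b), np.allclose(A @ y2, b))  # True True
```

Note that y1 is much sparser than y2, which is exactly the kind of tie-breaker the next paragraph introduces.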
Sparsity, for a vector, means for us that most of the entries are just zero. So if there are 1 million entries, all but maybe 10000 are zero, say. We would say that such a vector is 10000-sparse. It turns out that a lot of the things we want to deal with in real life are sparse. Look at x-rays, for example. Mostly black... All that black is our zeroes - our sparsity. I'm going to introduce just a little notation - hopefully I don't use it much. Let's call our unknown vector x and our measurement matrix A. We don't know x, but we do get to know Ax (Ax is what we measure bouncing off of you and passing through you). We want to find the sparsest vector we can such that when we hit it with A, we get Ax. That is, the sparsest vector y such that Ay=Ax. Great! Now, I don't really want to talk about how A is chosen. Randomness is used; it's not too complicated, but let's not go there. The point is that for a good choice of A, if we solve this problem, the y that we get will be exactly the same as the unknown x that we wanted to find. So we have - with relatively few measurements - recovered the picture of your brain, bones, whatever, exactly. But there is another problem... In practice, solving for this y is extremely inefficient computationally. The best algorithms basically just try every possibility. So while this is nice in theory, it is largely useless in practice. Fortunately, there is an amazing game that we can play.
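Here is a sketch of that "try every possibility" approach, just to show why it hurts: it searches over every possible set of nonzero positions, and the number of such sets explodes combinatorially as the vector grows. (The function name and the toy sizes are my own illustration.)

```python
import itertools
import numpy as np

# Brute-force search for the sparsest y with Ay = b: try every
# possible support (set of nonzero positions), smallest first.
# The number of supports grows combinatorially, which is why this
# is hopeless for real-sized problems.
def sparsest_solution(A, b, tol=1e-8):
    n = A.shape[1]
    for k in range(n + 1):                       # sparsity levels 0, 1, 2, ...
        for support in itertools.combinations(range(n), k):
            if k == 0:
                if np.allclose(b, 0, atol=tol):
                    return np.zeros(n)
                continue
            sub = A[:, list(support)]            # columns on this support
            coef, *_ = np.linalg.lstsq(sub, b, rcond=None)
            y = np.zeros(n)
            y[list(support)] = coef
            if np.allclose(A @ y, b, atol=tol):  # does it reproduce b exactly?
                return y
    return None

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))   # 4 measurements, 8 unknowns
x = np.zeros(8)
x[2] = 1.5                        # a 1-sparse "picture"
y = sparsest_solution(A, A @ x)
print(np.allclose(y, x))          # recovers x exactly in this toy run
```

Even at 8 unknowns this checks up to 256 supports; at the 1024x768-pixel scale mentioned above, the count is astronomically large.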
Instead of looking for the sparsest vector y, we will pose a new problem and look for the "smallest" vector y. I say "smallest" because I have to tell you what I mean by small in this setting. I want the vector that is smallest in the little L-1 norm, for those of you who know what that is. For those of you who do not, don't be scared, for it is simple. The little L-1 size (or norm) of a vector is just the sum of the absolute values of its components. (For example, the L-1 size of (1,-2,3) is |1|+|-2|+|3|=1+2+3=6.) Ok, so now we have a new problem. However, the amazing, amazing thing is that these two problems are the same! Instead of finding the sparsest y such that Ay=Ax, we can find the "smallest" y such that Ay=Ax and we will get the same answer. (Technically this will not always happen, but it will happen almost all the time - the probability of it not happening is similar to me teleporting across the world due to quantum fluctuation - or perhaps winning a starleague.) And the beauty of this observation is that our new problem can be solved efficiently by a computer! (Since it is now a convex problem, we can apply convex optimization techniques, blah blah blah.)
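To see this in action, the L-1 problem can be written as a linear program by splitting y into its positive and negative parts, and then handed to an off-the-shelf solver. Here is a small sketch using scipy (the function name and the problem sizes are my own; a random Gaussian A stands in for a real measurement matrix):

```python
import numpy as np
from scipy.optimize import linprog

# Minimize ||y||_1 subject to Ay = b, rewritten as a linear program:
# write y = u - v with u, v >= 0, so ||y||_1 = sum(u) + sum(v).
def l1_minimize(A, b):
    m, n = A.shape
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])           # constraint: A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))      # only 30 measurements, 100 unknowns
x = np.zeros(100)
x[[5, 40, 77]] = [2.0, -1.0, 3.0]       # a 3-sparse vector
y = l1_minimize(A, A @ x)
print(np.allclose(y, x, atol=1e-4))     # True: exact recovery, up to solver tolerance
```

Notice we never told the solver anything about sparsity: minimizing the L-1 norm found the 3-sparse x on its own, from less than a third as many measurements as unknowns.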
These are the basic ideas. There are of course many more details that I did not discuss, nor did I mention any proofs, but I don't think the fundamentals are too complex to understand.
I should mention that I only discussed an ideal situation where you gain perfect measurements and there is no corruption due to noise or measurement error. In the real world this will never happen. However, the theory is robust in the sense that under small errors the result will be a good approximation to the original data, though not exactly the same. Hopefully this mathematics will help to reduce hospital wait times and expenses and things like that, but I think there will need to be updates to some equipment - I don't really know that side of it.
The paper that I am studying is called A Probabilistic and RIPless Theory of Compressed Sensing - by Candes and Plan in case anyone cares to check it out.
This vein of study leads to a topic called low-rank matrix recovery, which is at the heart of the Netflix problem (yep, Netflix) and has applications to facial reconstruction from a series of partial photographs and such. Very cool.
Hopefully you got something from this, or at least killed a little bit of time as I have.
TL;DR: Hmm, don't really like these - but I guess they're useful. Compressed Sensing is some math that has applications in medical imaging. I'm studying it.