Successive approximation coded matrix multiplication
In this talk I describe some of our work on coded matrix multiplication when an approximate result will suffice. We are motivated by potential applications in optimization and learning where an exact matrix product is not required and one would prefer a lower-fidelity result delivered faster. We are also motivated by the development of rate-distortion analogs for coded computing, and in particular by a recent JSAIT paper by Jeong et al. on epsilon-coded computing, in which the authors show that an intermediate, approximate result can be recovered halfway to exact recovery. In this talk I build on that prior work to show how to realize schemes with multiple stages of recovery en route to exact recovery. In analogy to successive refinement in rate distortion, we term this successive approximation coding. [Slides]
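To make the notion of successive approximation of a matrix product concrete, the following is a minimal illustrative sketch, not the coding scheme from the talk or the Jeong et al. paper: the product A @ B is built up one rank-one term at a time from the SVD of A, so each additional term refines the approximation, and the final stage recovers the exact product. All variable names here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30))
B = rng.standard_normal((30, 20))

# SVD of A: A = U @ diag(s) @ Vt, so A @ B = sum_i s[i] * u_i (v_i^T B).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
exact = A @ B

# Accumulate rank-one terms: each stage is a successively better approximation.
approx = np.zeros_like(exact)
errors = []
for i in range(len(s)):
    approx += s[i] * np.outer(U[:, i], Vt[i] @ B)
    errors.append(np.linalg.norm(exact - approx))

# Because the u_i are orthonormal, the squared Frobenius error after stage k is
# sum_{i>k} s_i^2 ||v_i^T B||^2, so the error is monotone nonincreasing,
# and after the last stage the product is recovered exactly.
```

Here each intermediate `approx` plays the role of a lower-fidelity result available before exact recovery; in an actual coded-computing scheme the stages would instead correspond to decoding from successively larger subsets of worker responses.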