Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Published in Journal 1, 2009
This paper is about the number 1. The number 2 is left for future work.
Recommended citation: Your Name, You. (2009). "Paper Title Number 1." Journal 1. 1(1). http://academicpages.github.io/files/paper1.pdf
Published in Journal 1, 2010
This paper is about the number 2. The number 3 is left for future work.
Recommended citation: Your Name, You. (2010). "Paper Title Number 2." Journal 1. 1(2). http://academicpages.github.io/files/paper2.pdf
Published in Journal 1, 2015
This paper is about the number 3. The number 4 is left for future work.
Recommended citation: Your Name, You. (2015). "Paper Title Number 3." Journal 1. 1(3). http://academicpages.github.io/files/paper3.pdf
Multiplying two matrices is one of the most common, yet most expensive, operations in linear algebra. With the rapid development of machine learning and the growth in data volumes, performing matrix multiplications quickly has become a major challenge. Two approaches to this problem are (1) approximating the product and (2) performing the multiplication distributively. In this talk, I focus on the first approach and summarize some random-sampling-based techniques for approximating matrix-matrix multiplication, such as the work of Drineas, Kannan, and Mahoney (SIAM J. Comput., 2006). [Link] [Slide]
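For concreteness, here is a minimal NumPy sketch of the basic sampling estimator from that line of work: sample c column/row index pairs with probabilities proportional to the product of the column and row norms, then average the rescaled outer products. The function name, the choice of c, and the test matrices are illustrative, not taken from the talk.

```python
import numpy as np

def approx_matmul(A, B, c, rng=None):
    """Monte Carlo estimate of A @ B from c sampled column/row outer products."""
    rng = rng or np.random.default_rng(0)
    # Probabilities proportional to ||A[:, i]|| * ||B[i, :]|| minimize the
    # variance of the estimator.
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p /= p.sum()
    idx = rng.choice(A.shape[1], size=c, p=p)
    # Rescaling each outer product by 1 / (c * p[i]) makes the estimate unbiased.
    return sum(np.outer(A[:, i], B[i, :]) / (c * p[i]) for i in idx)

rng = np.random.default_rng(1)
A, B = rng.standard_normal((100, 500)), rng.standard_normal((500, 80))
est = approx_matmul(A, B, c=100)
print(np.linalg.norm(est - A @ B) / np.linalg.norm(A @ B))  # error shrinks as c grows
```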
I presented this work as a poster at the 2022 MWCC. [Link] [Poster]
In this talk I describe some of our work on coded matrix multiplication when an approximate result suffices. We are motivated by potential applications in optimization and learning where an exact matrix product is not required and one would prefer to get a lower-fidelity result faster. We are also motivated by the goal of developing rate-distortion analogs for coded computing, and particularly by a recent JSAIT paper on epsilon-coded computing by Jeong et al., in which the authors show that they can recover an intermediate, approximate result halfway to exact recovery. In this talk I build on that prior work to show how to realize schemes with multiple stages of recovery en route to exact recovery. In analogy with successive refinement in rate-distortion theory, we term this successive approximation coding. [Slides]
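As a toy illustration of the successive-approximation idea (a hedged sketch, not the scheme from the talk or from Jeong et al.), assume the rows of A are split into k blocks and encoded across n workers with a random linear code; the decoder forms a least-squares estimate from however many worker results have arrived, so the estimate sharpens with each arrival and becomes exact once k results are in:

```python
import numpy as np

# Toy sketch (assumed setup): k row-blocks of A, n workers, random linear code.
rng = np.random.default_rng(0)
k, n = 4, 6
A = rng.standard_normal((8, 5))              # 8 rows split into k = 4 blocks
B = rng.standard_normal((5, 3))

blocks = np.split(A, k, axis=0)              # A = [A_1; ...; A_k]
G = rng.standard_normal((n, k))              # random generator matrix
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
results = [C @ B for C in coded]             # worker i returns coded[i] @ B

# Decoder: after r results arrive, form a least-squares estimate of the block
# products [A_1 B; ...; A_k B]; crude for small r, exact once r >= k.
for r in range(1, n + 1):
    Y = np.stack([results[i].ravel() for i in range(r)])
    X, *_ = np.linalg.lstsq(G[:r], Y, rcond=None)
    est = np.vstack([x.reshape(-1, B.shape[1]) for x in X])
    rel_err = np.linalg.norm(est - A @ B) / np.linalg.norm(A @ B)
    print(f"workers heard from: {r}, relative error: {rel_err:.2e}")
```

Because G is random, any k of its rows are full rank with probability one, so the final stages recover the exact product; the intermediate least-squares estimates play the role of the successive approximations.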
In this talk, I described our framework for learning with group identities, in which individuals may share data selectively within specific groups, such as contributing business data in their company group or personal genomic data in their family group. We modeled and controlled how privacy leakage propagates across potentially overlapping group structures.
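As a toy illustration of leakage across overlapping groups (assumed budgets and basic composition, not the framework's actual accounting), an individual's total privacy leakage adds up over every group that includes their data:

```python
# Hypothetical sketch: per-group privacy budgets under basic composition.
# Group memberships overlap: "alice" contributes data in two groups.
groups = {"company": {"alice", "bob"}, "family": {"alice", "carol"}}
eps = {"company": 0.5, "family": 0.3}  # assumed per-group DP budgets

def total_leakage(person):
    # Basic composition: leakage accumulates over every group holding the person's data.
    return sum(eps[g] for g, members in groups.items() if person in members)

for p in ["alice", "bob", "carol"]:
    print(p, total_leakage(p))  # alice: 0.8 (both groups); bob: 0.5; carol: 0.3
```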
In this talk, I presented my two Ph.D. projects that explore privacy-preserving machine learning mechanisms to promote more equitable and private collaborative learning. In the first project, we introduced a framework for learning with group identities, allowing individuals to share data within specific groups, such as business data within a company group or personal genomic data within their family group. We modeled and controlled privacy leakage propagation across potentially overlapping group structures. In the second project, we proposed a novel time-adaptive privacy spending mechanism, enabling participants to preserve more privacy during certain training rounds. Together, these works offer new perspectives on how trust and privacy can be formalized and quantified in federated learning systems.
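A minimal sketch of the time-adaptive spending idea, under assumed simplifications (basic composition and Laplace-style noise calibration; the schedule and names are illustrative, not the proposed mechanism): rounds where a participant wants to preserve more privacy get a smaller per-round budget, and hence more noise, while the per-round budgets still sum to the overall budget.

```python
import numpy as np

# Illustrative schedule only: per-round budgets under basic composition,
# with Laplace-style noise calibrated to each round's budget.
T, total_eps = 10, 1.0                        # assumed rounds and overall budget
weights = np.array([0.5] * 5 + [1.0] * 5)     # preserve more privacy early on
eps_t = total_eps * weights / weights.sum()   # per-round budgets sum to total_eps

clip_norm = 1.0                               # assumed L2 sensitivity per update
for t, e in enumerate(eps_t):
    scale = clip_norm / e                     # smaller eps_t => more noise that round
    print(f"round {t}: eps = {e:.3f}, Laplace noise scale = {scale:.2f}")
```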
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.