I am a PhD candidate in Electrical Engineering and Computer Science at the University of California, Irvine, where I am advised by Hamid Jafarkhani. My research focuses on making distributed algorithms work in communication-constrained networks, with an emphasis on privacy-preserving Machine Learning. I derive theoretical bounds and demonstrate my results with practical implementations. This includes algorithms for Federated Learning and Decentralized Control.
More broadly, I am interested in optimization, information theory and AI.
Before joining UCI, I graduated from Universitat Politècnica de Catalunya with a double degree in Mathematics and Electrical Engineering as part of the CFIS program, and an MS in Advanced Mathematics and Mathematical Engineering, focused on discrete mathematics and information theory.
For my undergrad thesis I worked with the Communications Architectures and Research Section at NASA's Jet Propulsion Laboratory, helping build the next generation of space radios.
My projects
Privacy-preserving Error Feedback for Distributed Learning
Practical distributed learning often uses aggressive, biased compression for the communication from the clients to the server. However, to guarantee convergence, error feedback with client-specific control variates is typically required.
Individual control variates kill privacy guarantees and do not scale with the number of clients. To fix error feedback, we proposed a framework that leverages previously aggregated client updates for the feedback step. This allows highly aggressive compression without the privacy and scalability issues that come with client-specific control variates.
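As background, the classic error-feedback mechanism (the one that needs exactly the per-client memory our framework avoids) can be sketched in a few lines. The `top_k` compressor and the function names here are illustrative, not our actual setup:

```python
import numpy as np

def top_k(v, k):
    """Biased top-k compressor: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_step(grad, memory, k):
    """One classic error-feedback step on a single client.

    The client compresses grad + memory, transmits the compressed
    message, and keeps the compression error as its new memory, so
    no gradient information is permanently lost.
    """
    msg = top_k(grad + memory, k)
    new_memory = grad + memory - msg
    return msg, new_memory
```

The per-client `memory` vector is the client-specific state that hurts privacy and scalability; in our framework the feedback is instead derived from previously aggregated updates.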
Truly decentralized learning on directed graphs
Decentralized optimization algorithms typically require communication between nodes to be bi-directional. In the directed case, existing algorithms required nodes to know how many listeners they have (knowledge of their out-degree). We proposed a series of works that circumvent this requirement.
A nice property of this framework is that it naturally accommodates networks with delays, as one can add imaginary nodes to the network to model the delays and use the same analysis to obtain convergence guarantees.
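The delay-to-imaginary-node reduction can be made concrete with a toy helper (the representation and names are mine, for illustration): each edge with delay d is replaced by a chain of d − 1 imaginary relay nodes, so the augmented network is delay-free.

```python
import numpy as np

def augment_with_delays(adj, delays):
    """Model communication delays with imaginary relay nodes.

    adj:    n x n 0/1 matrix, adj[i, j] = 1 means a link i -> j.
    delays: dict {(i, j): d}; a message on edge (i, j) takes d steps.
    """
    n = adj.shape[0]
    extra = sum(max(d - 1, 0) for d in delays.values())
    A = np.zeros((n + extra, n + extra), dtype=int)
    nxt = n  # index of the next imaginary node to create
    for i in range(n):
        for j in range(n):
            if not adj[i, j]:
                continue
            d = delays.get((i, j), 1)
            prev = i
            for _ in range(d - 1):  # insert the relay chain
                A[prev, nxt] = 1
                prev, nxt = nxt, nxt + 1
            A[prev, j] = 1
    return A
```

Running a delay-free algorithm on the augmented graph then yields convergence guarantees for the original delayed network.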
Currently, I'm interested in incorporating more aspects of real networks to bridge the gap between the practice and theory of decentralized learning.
Proving stuff with Lean
On the side, I'm learning about theorem proving with Lean. In the future I would like to formalize more of my proofs using Lean; so far I have only a small auxiliary lemma and some elementary results about Sidon sets.
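For a taste, here is the kind of tiny auxiliary result one formalizes when starting out (Lean 4; this particular statement is an illustrative toy, not one of my lemmas):

```lean
-- Doubling a natural number is the same as adding it to itself;
-- the omega decision procedure for linear arithmetic closes the goal.
theorem two_mul_eq_add (n : Nat) : 2 * n = n + n := by
  omega
```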
Error correcting codes from Generalized Quadrangles
Together with Simeon Ball, I developed a method to construct point-line incidence matrices of Generalized Quadrangles in polynomial time (polynomial of degree 4, 6 and 11 for different GQs, but hey, still better than the existing exponential methods).
This allowed us to construct the largest repository of point-line GQ incidence matrices that currently exists. Better yet, these point-line incidence matrices are quasi-cyclic! This is a really desirable property if you want to construct error correcting codes from GQs, which achieve very good error rates per round of belief-propagation decoding. The repository also has .alist matrices to easily run LDPC simulations.
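To illustrate what quasi-cyclic means here: the matrix is a grid of circulant blocks, each fully determined by its first row. A minimal sketch (my own toy representation, not the repository's format):

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix: each row is the previous one cyclically shifted."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)])

def quasi_cyclic(first_rows):
    """Assemble a quasi-cyclic matrix from a grid of circulant blocks,
    given only the first row of each block."""
    return np.block([[circulant(r) for r in row] for row in first_rows])
```

Storing only one row per block makes large incidence matrices cheap to store, and the circulant structure is what enables efficient encoders for the resulting quasi-cyclic LDPC codes.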
Maintaining communication while entering the Martian atmosphere
During the Entry, Descent and Landing (EDL) phase of rover missions to Mars, communications are lost due to the large Doppler shift caused by the spacecraft dramatically decelerating as it hits the Martian atmosphere. During my time at JPL, we proposed a system to track this Doppler shift and maintain communications throughout.
This system will be implemented in the next generation of NASA's spacecraft radios.
As part of this line of work, we derived an analytical approximation for the standard deviation of the frequency error of Phase-Locked Loops, which is of independent interest.
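For flavor, here is a textbook second-order digital PLL sketch of the kind whose frequency error such an analysis characterizes (the loop gains and structure are illustrative, not the flight design):

```python
import numpy as np

def pll_track(samples, kp, ki):
    """Track the phase and frequency of a complex baseband tone.

    A proportional-integral loop: the integrator accumulates the
    frequency estimate, so a constant frequency offset (Doppler)
    is tracked with zero steady-state phase error.
    """
    phase, freq = 0.0, 0.0
    freqs = np.zeros(len(samples))
    for n, s in enumerate(samples):
        err = np.angle(s * np.exp(-1j * phase))  # phase detector
        freq += ki * err                         # integral branch
        phase += freq + kp * err                 # NCO advance
        freqs[n] = freq                          # estimate, rad/sample
    return freqs
```

During EDL the Doppler is not a constant offset but a steep ramp, which is exactly what stresses loops like this one.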
If you made it this far...
Here is a Snake game I coded in Java.