Machine Learning Research

Here's a summary of the work I do with Dr. Jake Abernethy.

I started working with Dr. Abernethy in the Spring 2020 semester, and so far it's been an outstanding experience. I work with a group of four undergrads, and we meet weekly with Dr. Abernethy for research discussions and paper readings.

In Spring 2020 I explored a topic based on a paper Dr. Abernethy had written with some Google researchers. At a high level, I sought to learn about the relationship between boosting and adversarial robustness, which I know sounds confusing, but don't worry, it's not! Boosting is a machine learning technique that combines many basic models into one really strong model (there's a minimal sketch of the idea at the bottom of this post). Adversarial robustness is, loosely, how easily your model can be fooled by small, deliberate perturbations of its inputs. Over the course of the semester I developed an algorithm that uses boosting to improve adversarial robustness. To do so, I had to implement my own decision tree that allows for very specific weighting schemes.

You can see the algorithm in action in the second and third pictures. All the (+) points are adversarial examples; notice how between the second and third pictures most of the (+) points are eliminated! The final picture shows how well the algorithm performs under perturbations of various sizes (the lines at the bottom correspond to fairly large perturbations). If you're interested in the technical details, or if you want to keep up with all my research updates, check out the link below for our lab's blog!
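
If you're curious what "combining many basic models" looks like in code, here's a minimal sketch of textbook AdaBoost using decision stumps. To be clear, this is the vanilla algorithm, not the robust variant from my research, and the helper names (`adaboost`, `predict`) are just for illustration; the actual project relied on a custom decision tree with a very different weighting scheme.

```python
# Minimal sketch of vanilla AdaBoost with decision stumps, only to
# illustrate the "many weak models -> one strong model" idea.
# NOT the adversarially robust algorithm from the research; that one
# uses a custom decision tree and a different weighting scheme.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, n_rounds=50):
    """Train n_rounds depth-1 trees (stumps) on reweighted data.

    X: (n_samples, n_features) array; y: numpy array of labels in {-1, +1}.
    Returns (stumps, alphas) defining the boosted ensemble.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)            # start with uniform sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))  # weighted training error
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # upweight the points this stump got wrong, downweight the rest,
        # so the next stump focuses on the hard examples
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    """Weighted-majority vote of all the stumps."""
    scores = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(scores)
```

The key idea is the reweighting step: points the current stump misclassifies get more weight, so the next stump concentrates on them. The algorithm I worked on replaces this weighting scheme with one designed for robustness, which is why the custom decision tree had to support arbitrary sample weights.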