When I was interviewing in January 2022, I received offers from a mix of public and private companies at different stages of funding. I found it incredibly difficult to compare offers from public companies granting RSUs (restricted stock units) with offers from private companies granting stock options, so I built this app to help me understand and compare them. The app generates a breakdown of salary and taxes by year given your projected growth of the company. Feel free to play around with it and see how much you would make if your company quintuples in value!
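To give a flavor of the calculation, here is a toy sketch of a year-by-year comparison; the numbers, the flat tax rate, and the even four-year vesting schedule are all made up and much simpler than what the app actually models:

```python
# Toy year-by-year comparison of an RSU grant vs. an option grant under an
# assumed growth multiple. All inputs and the flat tax rate are hypothetical.

def yearly_breakdown(grant_value, total_strike_cost, growth_multiple, years=4, tax_rate=0.35):
    """Return (year, after-tax RSU value, after-tax option gain) per vesting year."""
    rows = []
    for year in range(1, years + 1):
        multiple = growth_multiple ** (year / years)          # growth interpolated across vesting
        vested_value = (grant_value / years) * multiple
        rsu_after_tax = vested_value * (1 - tax_rate)
        option_gain = max(vested_value - total_strike_cost / years, 0)
        option_after_tax = option_gain * (1 - tax_rate)
        rows.append((year, round(rsu_after_tax), round(option_after_tax)))
    return rows

for year, rsu, option in yearly_breakdown(200_000, 50_000, growth_multiple=5):
    print(f"year {year}: RSUs ~ ${rsu:,}, options ~ ${option:,}")
```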
Frustrated with the bias, data inaccessibility, and pedantic nature of current political statistical analyses, Kodi Obika and I worked together to create a website that provides a repository of collated open-source political datasets, along with straightforward statistical analyses of current events and politics built on that data.
Whim is a note-taking web app designed to support the way our brains organize information and make connections between ideas. I am currently developing Whim with Tanishq Kancharla, and we have an early read-only version of the web app published at whim.so.
Notion Slides is a Google Chrome extension that allows users to navigate Notion databases as a slide show. Tanishq Kancharla and I were frustrated by the clunkiness of popular slide show creation tools like PowerPoint and Google Slides and wanted to work around some of the limitations these tools impose. We adapted Notion's clean and minimalistic UI to support creating slide shows in a way that was consistent with our design principles: slides are not limited in the amount of content they can show, content like YouTube videos and websites can be embedded within a slide, and formatting is automatic with Markdown. The way the extension operates is not ideal, since Notion was not designed to support slide shows; nonetheless, I think it is a step forward in how slide shows should be built and designed.
I worked with Maxwell Wang, Wuwei Lin, and Chang Shi to adapt NOTEARS, an existing DAG (directed acyclic graph) structure estimation algorithm, to time series data.
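For context, the heart of NOTEARS is a smooth acyclicity penalty h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W has no cycles. Here is a small illustrative snippet of that penalty (not our time-series code):

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W: np.ndarray) -> float:
    """h(W) = tr(exp(W * W)) - d, where * is the elementwise product."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

acyclic = np.array([[0.0, 1.5], [0.0, 0.0]])   # single edge 0 -> 1
cyclic  = np.array([[0.0, 1.5], [0.8, 0.0]])   # edges 0 -> 1 and 1 -> 0 form a cycle
print(notears_acyclicity(acyclic))  # ~0.0, so W is a DAG
print(notears_acyclicity(cyclic))   # > 0, so W contains a cycle
```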
I wrote a report reviewing recent literature on optimal treatment regimes in the causal inference community.
I worked with Tanishq Kancharla to build a regression model that predicts the year a song was released based only on some of the song's musical features and some of its lyrics. We found that the lyrics had much weaker predictive ability than the musical features. Our final paper is shown below.
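As a rough illustration of the setup (with made-up toy data and hypothetical feature names, not our actual dataset or models), the comparison looked something like this:

```python
# Compare a regression on musical features against one on a bag-of-words of
# lyrics, scored by cross-validated error. All data below is toy data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

years = np.array([1972, 1985, 1999, 2004, 2011, 2016, 1968, 1991])
musical = np.array([[0.4, 118], [0.6, 124], [0.7, 96], [0.8, 105],
                    [0.9, 128], [0.85, 100], [0.3, 90], [0.65, 110]])  # e.g. danceability, tempo
lyrics = ["love tonight baby", "dance all night", "money cars shine", "club party tonight",
          "feel the beat drop", "ride wave vibe", "peace flowers sun", "heart break tears"]

audio_model = Ridge()
lyric_model = make_pipeline(TfidfVectorizer(), Ridge())

print(cross_val_score(audio_model, musical, years, cv=4, scoring="neg_mean_absolute_error").mean())
print(cross_val_score(lyric_model, lyrics, years, cv=4, scoring="neg_mean_absolute_error").mean())
```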
I worked with Bradley Zhou, Candia Gu, and Aditi Hebbar to build an app that automatically creates HTML from drawings on a sheet of paper. The user would place their phone on top of an acrylic mount and load our mobile app, which we built with React Native. The app would continuously take snapshots of a sheet of paper where the user drew the website layout in real time and translate these images into HTML. We used OpenCV to extract meaningful components (such as text and images) from the drawing, differentiate between them, and turn them into HTML. We then sent the HTML through a Flask API to be rendered and displayed on a website.
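Here is a rough sketch of the component-extraction idea, using a synthetic drawing rather than a real photo; the text-vs-image heuristic below is far simpler than what we actually did:

```python
# Find dark contours in a (here synthetic) drawing with OpenCV and emit a naive
# HTML block per detected region.
import cv2
import numpy as np

canvas = np.full((400, 600), 255, np.uint8)          # stand-in for a photo of the sketch
cv2.rectangle(canvas, (40, 40), (560, 100), 0, 3)    # a wide box (treated as a text block)
cv2.rectangle(canvas, (40, 140), (280, 360), 0, 3)   # a tall box (treated as an image)

_, binary = cv2.threshold(canvas, 128, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

html = ["<body>"]
for contour in sorted(contours, key=lambda c: cv2.boundingRect(c)[1]):  # top-to-bottom order
    x, y, w, h = cv2.boundingRect(contour)
    if w > 3 * h:
        html.append(f'  <p style="width:{w}px">Lorem ipsum</p>')
    else:
        html.append(f'  <img width="{w}" height="{h}" src="placeholder.png">')
html.append("</body>")
print("\n".join(html))
```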
I worked with Evann Wu, Ka Chun Cheung, and Tanishq Kancharla to build an app that would take in an image from a user, as well as certain visual preferences, and automatically output a logo based on their input. We first used Microsoft Azure to extract interesting features from the input image (such as color, items in the background, etc.) that would influence the generated logo design. We then used Adobe Illustrator's JavaScript API to manipulate the user's input image based on their preferences and the data extracted by Azure. We also used this data to generate a brand name for the logo. Once we had the manipulated image and brand name, our code determined a pleasing way to arrange them and output a logo.
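As a simplified illustration of just the final layout step, here is a sketch using Pillow so it is self-contained (the project did this through Illustrator's scripting API, and the brand name below is a placeholder):

```python
# Place the brand name below a tall mark or to the right of a wide one.
from PIL import Image, ImageDraw

def compose_logo(mark: Image.Image, brand: str) -> Image.Image:
    pad = 20
    if mark.height > mark.width:                       # tall mark: text goes underneath
        canvas = Image.new("RGB", (mark.width, mark.height + 60), "white")
        canvas.paste(mark, (0, 0))
        text_xy = (pad, mark.height + pad)
    else:                                              # wide mark: text goes to the right
        canvas = Image.new("RGB", (mark.width + 200, mark.height), "white")
        canvas.paste(mark, (0, 0))
        text_xy = (mark.width + pad, mark.height // 2)
    ImageDraw.Draw(canvas).text(text_xy, brand, fill="black")
    return canvas

compose_logo(Image.new("RGB", (120, 240), "steelblue"), "Northwind").save("logo.png")
```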
I worked with Evann Wu and Tanishq Kancharla to create a website using Django that would take in an image URL and match it with the CMU professor who looked the most similar to the individual in the given picture. We used the Microsoft Azure API to extract traits from the input image and then matched the image with the person who had the most similar traits in CMU's faculty database.
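The matching step itself was simple. Here is a toy sketch with made-up trait vectors and placeholder names standing in for the attributes Azure returned:

```python
# Nearest-neighbor match over per-person trait vectors. All values are made up.
import numpy as np

faculty = {
    "Prof. A": np.array([34, 0.2, 0.9]),   # e.g. estimated age, smile score, glasses score
    "Prof. B": np.array([58, 0.7, 0.1]),
    "Prof. C": np.array([45, 0.1, 0.8]),
}

def closest_match(query: np.ndarray) -> str:
    return min(faculty, key=lambda name: np.linalg.norm(faculty[name] - query))

print(closest_match(np.array([40, 0.15, 0.85])))   # -> "Prof. C"
```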
This was the final project for my introductory programming class at CMU, 15-112. This project introduced me to machine learning, and it was one of the major reasons why I switched majors from business to statistics and machine learning. For this project, I developed a fully functioning chess game in Python. The game's two main modes were competitive and multiplayer. For the competitive mode, I developed a chess AI that used a minimax search to iterate through possible game states and a neural network to score each board encountered during the search. I trained the neural network to match the board evaluations given by the famous chess engine Stockfish, using a database of grandmaster games in .pgn format. I built the neural network entirely by myself (without a library), so it was not very skilled at playing chess and was pretty slow to make a decision. I used sockets to implement multiplayer in the game.
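Here is a condensed sketch of the search loop, written against the python-chess library with a simple material count standing in for my neural network's evaluation (my original code used its own board representation and a from-scratch network):

```python
import chess  # python-chess, used here only so the sketch is runnable

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> float:
    """Stand-in for the neural network: material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int, maximizing: bool) -> float:
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:
        board.push(move)
        scores.append(minimax(board, depth - 1, not maximizing))
        board.pop()
    return max(scores) if maximizing else min(scores)

def best_move_for_white(board: chess.Board, depth: int = 2) -> chess.Move:
    """Pick the move leading to the best minimax score for White."""
    best, best_score = None, float("-inf")
    for move in board.legal_moves:
        board.push(move)
        score = minimax(board, depth - 1, maximizing=False)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best

print(best_move_for_white(chess.Board()))
```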
I worked with Alex Li, Evann Wu, and Tanishq Kancharla to develop GunAR for CMU's Build18 competition. The goal of the project was to develop a laser tag game that used computer vision to identify when you had been shot. The user would mount their phone on top of our toy gun, and when the trigger was pressed, a Bluetooth signal would be sent to the user's phone through our mobile app. If that person's target was displayed within a certain area of the phone's camera frame, the shot would register as a hit. We had a competitive mode where players could play against each other and a single player mode where virtual reality targets were spawned for the user to shoot at. Unfortunately we ran out of time to complete the project and put together all of its different components, but the experience was quite fun, and it inspired me to take Introduction to Electrical and Computer Engineering.
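The hit test boiled down to checking whether the detected target overlapped a reticle region at the center of the camera frame. Here is a toy sketch of that check, with made-up frame, reticle, and bounding-box sizes (not the app's code):

```python
def is_hit(target_box, frame_w=1280, frame_h=720, reticle=100):
    """target_box is (x, y, w, h) from the target detector, in pixel coordinates."""
    rx1, ry1 = frame_w // 2 - reticle // 2, frame_h // 2 - reticle // 2
    rx2, ry2 = rx1 + reticle, ry1 + reticle
    x, y, w, h = target_box
    # the boxes overlap unless one lies entirely beside or above/below the other
    return not (x + w < rx1 or x > rx2 or y + h < ry1 or y > ry2)

print(is_hit((600, 320, 80, 80)))   # near the center -> True
print(is_hit((50, 50, 80, 80)))     # far corner -> False
```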
I worked with Trevor Daino and Arpad Voros to design a new way to detect the trajectory of high energy muon particles for the purposes of muon tomography. We had read a few papers about how vacuum tubes were being used to measure the trajectory of muons to capture images of materials. We were fascinated by the new technology and reached out to a professor to learn more about building our own muon tomography device. We crafted a design, and he told us it would cost over a million dollars! So we scrapped the design and set out to build a new device for muon imaging that would be within our price range of a few hundred dollars. Our idea was to have 4 layers of plastic scintillators, each paired with a silicon photomultiplier array, to triangulate the muon's position at each layer. We could then use the change in positions from layer to layer to find the muon's trajectory. We planned to put an object in the middle of the device (between the two top layers and the two bottom layers) so that we could measure the deflection of the muon after it penetrated the object (by comparing the muon's trajectory before and after hitting the object). We worked every day after school for months designing, soldering, and sawing in my friend's garage to create a working model. We tested our prototype using UV light and built a muon scattering simulation for a scaled-up version of the device in Java and MATLAB. Eventually we presented our findings at the Intel International Science and Engineering Fair (ISEF), where we placed 3rd in the Physics and Astronomy category! You can learn more about what we built from this article in the local paper. Also, the poster we displayed at ISEF is shown below!
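The trajectory arithmetic is simple to sketch (here in Python rather than the Java and MATLAB we used, with made-up layer spacing and hit positions): the hits in the two layers above the object give the incoming direction, the hits in the two layers below give the outgoing direction, and the angle between them is the scattering angle.

```python
import numpy as np

def direction(hit_upper, hit_lower, layer_gap_cm):
    """Unit vector pointing from an upper-layer hit down to a lower-layer hit."""
    v = np.array([hit_lower[0] - hit_upper[0], hit_lower[1] - hit_upper[1], -layer_gap_cm])
    return v / np.linalg.norm(v)

def scattering_angle_deg(hits, layer_gap_cm=10.0):
    """hits: (x, y) positions at layers 1-4, ordered top to bottom."""
    incoming = direction(hits[0], hits[1], layer_gap_cm)
    outgoing = direction(hits[2], hits[3], layer_gap_cm)
    return np.degrees(np.arccos(np.clip(np.dot(incoming, outgoing), -1.0, 1.0)))

hits = [(0.0, 0.0), (0.5, 0.2), (1.3, 0.5), (2.4, 0.9)]   # cm; the muon deflects after the object
print(f"scattering angle ~ {scattering_angle_deg(hits):.1f} degrees")
```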