Projects
MotiSpectra

- Developed real-time emotion analysis software using Next.js and Python that integrates seamlessly with virtual call platforms (Zoom, Google Meet, MS Teams).
- Implemented intuitive user interfaces with dynamic radar graphs and rolling averages for visualizing emotion and attentiveness data.
- Built and trained ML models from scratch using the FER-2013 dataset for emotion and attentiveness recognition.
- Fine-tuned the YuNet face detection model, achieving 97% detection accuracy (detection pipeline sketched below).
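
A minimal sketch of the detection-plus-classification loop, assuming OpenCV's `cv2.FaceDetectorYN` wrapper for YuNet; the weights path, the `emotion_model` interface, and the 48x48 grayscale FER-2013 input shape are assumptions:

```python
import cv2

# YuNet ONNX weights from the OpenCV model zoo (path is an assumption)
detector = cv2.FaceDetectorYN.create(
    "face_detection_yunet_2023mar.onnx", "", (320, 320)
)

def detect_emotions(frame, emotion_model):
    h, w = frame.shape[:2]
    detector.setInputSize((w, h))
    _, faces = detector.detect(frame)  # faces: N x 15 rows (box, landmarks, score)
    results = []
    for face in (faces if faces is not None else []):
        x, y, fw, fh = face[:4].astype(int)
        crop = frame[max(y, 0):y + fh, max(x, 0):x + fw]
        # FER-2013-style models typically take 48x48 grayscale input
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        patch = cv2.resize(gray, (48, 48)) / 255.0
        results.append(emotion_model.predict(patch[None, ..., None]))
    return results
```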

- Developed a mobile app for vision-impaired users, built with React Native, Tesseract OCR (optical character recognition), and Django, that recognizes text in images and narrates it aloud through Expo's speech synthesis service.
- Verified text legibility and distilled recognized text into keywords using custom-trained Cohere NLP models, applying text pre-processing strategies to increase model effectiveness.
- Applied the Cohere API to implement advanced features such as text language detection and summarization (OCR-to-summary flow sketched below).
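
A minimal sketch of the server-side flow, assuming `pytesseract` for the OCR step and the Cohere Python SDK's summarize endpoint; the Django wiring, the API key placeholder, and the response shape are assumptions:

```python
import cohere
import pytesseract
from PIL import Image

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def narrate_payload(image_path: str) -> dict:
    # 1. OCR the uploaded image with Tesseract
    text = pytesseract.image_to_string(Image.open(image_path))
    # 2. Summarize with Cohere (call shape per the v4 SDK; an assumption)
    summary = co.summarize(text=text).summary
    # The client reads this payload aloud with Expo's speech synthesis
    return {"text": text, "summary": summary}
```
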
- Built a personalized physical therapy platform using React, TypeScript, and FastAPI that provides real-time posture correction and hands-free feedback through text-to-speech technology.
- Trained a custom deep neural network in TensorFlow from scratch in 24 hours to generate individualized exercise routines based on user data, including weight, height, age, and fitness goals.
- Integrated the YOLOv11 pose model in PyTorch for real-time pose estimation and movement analysis, ensuring proper form during exercises with immediate, actionable feedback (pose check sketched below).
- Leveraged the Cohere API to generate clear, concise exercise descriptions and instructions for nearly 20 different exercises, creating a dynamic and engaging therapy experience.
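
A minimal sketch of the per-frame form check, assuming the Ultralytics YOLO11 pose checkpoint and COCO keypoint indices; the elbow-angle rule and its threshold are illustrative assumptions:

```python
import numpy as np
from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")  # pretrained pose weights

def joint_angle(a, b, c) -> float:
    """Angle at keypoint b (in degrees) formed by segments b->a and b->c."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def check_form(frame):
    result = model(frame)[0]
    if result.keypoints is None or len(result.keypoints.xy) == 0:
        return None
    kp = result.keypoints.xy[0].cpu().numpy()  # 17 COCO keypoints, first person
    # COCO indices: 5 = left shoulder, 7 = left elbow, 9 = left wrist
    angle = joint_angle(kp[5], kp[7], kp[9])
    # Hypothetical cue for a curl-style exercise
    return "Extend your arm fully" if angle < 150 else None
```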

- Developed a web application with Python and OpenAI APIs that generates multiple-choice questions from a variety of content formats, such as PDFs, websites, Markdown files, and YouTube videos.
- Integrated text-to-speech and speech-to-text capabilities with Whisper and Microsoft Azure to provide greater accessibility for visually impaired users.
- Trained a naïve Bayes machine learning model on 7.8 million Wikipedia sentences to format text.
- Implemented caching by storing generated quiz questions in a MongoDB database, keying each with a unique UUID to allow for replayability and sharing (see the sketch below).
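
A minimal sketch of the quiz cache, assuming `pymongo` against a local MongoDB; the database name, collection name, and quiz schema are illustrative assumptions:

```python
import uuid
from pymongo import MongoClient

quizzes = MongoClient("mongodb://localhost:27017")["quizapp"]["quizzes"]

def save_quiz(questions: list[dict]) -> str:
    """Cache a generated quiz under a fresh UUID and return the shareable ID."""
    quiz_id = str(uuid.uuid4())
    quizzes.insert_one({"_id": quiz_id, "questions": questions})
    return quiz_id

def load_quiz(quiz_id: str):
    """Serve a cached quiz for replay or sharing instead of regenerating it."""
    doc = quizzes.find_one({"_id": quiz_id})
    return doc["questions"] if doc else None
```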

- Developed a personalized AI voice assistant powered by OpenAI APIs with long-term memory and context features, setting it apart from commercial voice assistants like Alexa and Siri (memory loop sketched below).
- Implemented voice recognition and transcription using Microsoft Azure and Whisper, and combined GPT-4 with Hume AI's emotion detection model so the assistant can detect and adapt to the user's emotions for a more natural conversational experience.
- Integrated support for 10+ services, including Wolfram Alpha, Google Maps, News, and Spotify, letting users perform a wide range of tasks effortlessly.
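
A minimal sketch of the long-term memory loop, assuming the OpenAI v1 Python SDK; the JSON file store, the 20-turn context window, and the model name are illustrative assumptions:

```python
import json
import pathlib
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MEMORY = pathlib.Path("memory.json")

def reply(user_text: str) -> str:
    history = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    messages = (
        [{"role": "system", "content": "You are a helpful voice assistant."}]
        + history[-20:]  # replay recent turns so context persists across sessions
        + [{"role": "user", "content": user_text}]
    )
    answer = client.chat.completions.create(
        model="gpt-4", messages=messages
    ).choices[0].message.content
    # Persist both turns so context survives future sessions
    history += [{"role": "user", "content": user_text},
                {"role": "assistant", "content": answer}]
    MEMORY.write_text(json.dumps(history))
    return answer
```
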
Review Recap
- Built a Chrome extension that parses and analyzes thousands of Amazon product reviews and sorts them by rating, helping users make purchasing decisions more easily and efficiently.
- Used Cohere's NLP models and Beautiful Soup to scrape and extract keywords from 5,000+ reviews in seconds (parsing step sketched below).
- Stored and cached results in a RESTful Django backend.
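
A minimal sketch of the review-parsing step, assuming Beautiful Soup; the CSS selectors stand in for Amazon's actual review markup and are assumptions:

```python
from bs4 import BeautifulSoup

def parse_reviews(html: str) -> list[dict]:
    soup = BeautifulSoup(html, "html.parser")
    reviews = []
    for node in soup.select("div.review"):            # hypothetical selector
        stars = node.select_one("span.rating")        # hypothetical selector
        body = node.select_one("span.review-text")    # hypothetical selector
        if stars and body:
            reviews.append({
                "rating": float(stars.get_text(strip=True).split()[0]),
                "text": body.get_text(strip=True),
            })
    # Sort by rating so the extension can group reviews for the user
    return sorted(reviews, key=lambda r: r["rating"], reverse=True)
```
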
WLP4 Compiler
- Developed a compiler for a C++-like language as part of a compilers course.
- Used context-free grammars (CFGs) and bottom-up parsing to define the syntax and generate MIPS assembly code (codegen sketched below).
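
A minimal sketch of expression code generation in the style of such a compiler, assuming a parse tree of `("num", n)` leaves and `("+", left, right)` / `("-", left, right)` nodes; the register conventions (result in `$3`, stack pointer in `$30`, constant 4 preloaded in `$4`) are common MIPS teaching conventions, stated here as assumptions:

```python
def gen(node) -> list[str]:
    """Emit MIPS that leaves the expression's value in $3."""
    kind = node[0]
    if kind == "num":
        return ["lis $3", f".word {node[1]}"]
    _, left, right = node
    code = gen(left)
    code += ["sw $3, -4($30)", "sub $30, $30, $4"]  # push left operand
    code += gen(right)
    code += ["add $30, $30, $4", "lw $5, -4($30)"]  # pop it into $5
    op = "add" if kind == "+" else "sub"
    code += [f"{op} $3, $5, $3"]                    # combine; result in $3
    return code

# Example: (7 - 2) + 4
print("\n".join(gen(("+", ("-", ("num", 7), ("num", 2)), ("num", 4)))))
```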