RealTime Context Engine
Interview Assistant (aka Job Bandit) is an end-to-end orchestration and experience layer designed for the high-stakes environment of technical interviews. Engineered with a data-engineering-first mindset, the application manages the entire lifecycle of interview data, from real-time screen/audio ingestion and high-fidelity Retrieval-Augmented Generation (RAG) context enrichment to the delivery of optimized, low-latency streaming responses.
Orchestration: A seamless bridge between a versatile Electron desktop shell
and a high-performance FastAPI backend, ensuring local compute efficiency and secure
model interaction.
Advanced RAG Pipeline: Dynamically grounds AI responses in user-specific
professional contexts by injecting resume data and real-time job requirements into
the LLM context window via automated data-layer mapping.
Unified Data Persistence: Features a resilient session management system,
automated export pipelines for post-interview synthesis, and searchable historical
logs with MDX support for complex system design review.
Secure Licensing Layer: Includes an integrated RSA-backed licensing system that enforces professional-grade access control and secure software deployment.
The "Convo Tab" Experience: A floating glassmorphism window provides
live-streaming AI insights, comprehensive performance telemetry (TTFT, token usage),
and cumulative cost tracking for total transparency.
Seamless Human-AI Interaction: Adopts a minimalist, dark-mode-first aesthetic
with an intuitive "Snip-and-Solve" interaction model (Ctrl+K/S), enabling
instantaneous screen-based problem solving without breaking the user's flow.
High-Fidelity Feedback Loops: Tracks live metrics such as Time-To-First-Token
(TTFT) to ensure the interface feels alive and responsive, aligning with the
precision required for complex engineering and product design interviews.
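The TTFT and token-usage telemetry described above can be sketched as a thin wrapper around any streaming token iterator. This is a minimal illustration, not the app's actual API; `stream_with_telemetry` and its metric names are assumed for the example.

```python
import time
from typing import Iterable, List, Tuple

def stream_with_telemetry(tokens: Iterable[str]) -> Tuple[List[str], dict]:
    """Consume a token stream while recording time-to-first-token (TTFT),
    total wall time, and token count."""
    start = time.perf_counter()
    ttft = None
    collected: List[str] = []
    for tok in tokens:
        if ttft is None:
            # First token arrived: this gap is what the UI surfaces as TTFT.
            ttft = time.perf_counter() - start
        collected.append(tok)
    total = time.perf_counter() - start
    return collected, {"ttft_s": ttft, "total_s": total, "tokens": len(collected)}
```

In the real app the iterator would be the FastAPI/LLM streaming response; token counts can then be multiplied by per-token pricing to drive the cumulative cost tracker.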
Cross-Platform Music Synchronization
The “Monthly Music Sync” project synchronizes a user's liked songs between YouTube Music and Spotify using Python and the platforms' respective APIs.
The ELT pipeline is built with Python, AWS S3, the YouTube Data API, and the Spotify API, and integrates third-party libraries such as spotipy and youtube_dl. It retrieves music data from YouTube Music, extracts track metadata, and searches for the corresponding tracks on Spotify. The processed data, specifically the YouTube video IDs, is stored in an AWS S3 bucket to track already-synchronized songs.
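The metadata-extraction and matching step can be sketched as a small normalization helper that turns a raw YouTube title into a fielded Spotify search query. `build_spotify_query` is an illustrative name; the real pipeline would pass the result to spotipy's `Spotify.search(q=..., type="track")`.

```python
import re

def build_spotify_query(video_title: str) -> str:
    """Turn a raw YouTube Music video title into a Spotify search query.

    Strips bracketed noise like "(Official Video)" or "[HD]" and splits
    "Artist - Track" titles into Spotify's fielded query syntax.
    """
    # Drop any parenthesized/bracketed suffixes, e.g. "(Official Video)".
    cleaned = re.sub(r"[\(\[][^)\]]*[\)\]]", "", video_title)
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    if " - " in cleaned:
        artist, track = cleaned.split(" - ", 1)
        return f"track:{track.strip()} artist:{artist.strip()}"
    return cleaned
```

A stricter matcher might also compare track durations before writing the video ID to S3, to avoid pairing a song with a remix or live version.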
The pipeline is automated to run monthly via Apache Airflow, ensuring periodic synchronization of liked songs.
Future enhancements include deeper Apache Airflow integration for automation, a React app providing a user-friendly interface, and more robust error-handling mechanisms.
Spotify User's "Liked Songs" Data Exploration & Visualization
This ELT pipeline retrieves the user's music data from the Spotify REST API in Python and stages it in a Postgres server, combining Python for extraction, Dagster for automation, Docker for containerization, and Power BI for visualization. The data is transformed in a number of ways with pandas to produce insights that feed the Power BI dashboards and visualizations. Dagster automates the pipeline to run monthly. The project's ultimate objective is to discover the user's monthly music preferences by analyzing particular track characteristics, such as Spotify's "audio features".
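The monthly-preference analysis can be illustrated with a small aggregation: group liked tracks by the month they were added and average their audio features. The project uses pandas for this; the dependency-free sketch below shows the same idea in plain Python, and `monthly_feature_means` plus the field names are assumptions for the example.

```python
from collections import defaultdict
from statistics import mean

def monthly_feature_means(tracks):
    """Average Spotify audio features (danceability, energy, ...) per month.

    `tracks` is a list of dicts like
      {"added_at": "2024-03-15", "danceability": 0.8, "energy": 0.6}
    where "added_at" is the ISO date the song was liked.
    Returns {"YYYY-MM": {feature: mean value}}.
    """
    buckets = defaultdict(lambda: defaultdict(list))
    for t in tracks:
        month = t["added_at"][:7]  # "YYYY-MM" prefix of the ISO date
        for feature, value in t.items():
            if feature != "added_at":
                buckets[month][feature].append(value)
    return {
        month: {f: round(mean(vals), 3) for f, vals in feats.items()}
        for month, feats in buckets.items()
    }
```

In the actual pipeline the equivalent would be a pandas `groupby` on the month, with the results written back to Postgres for Power BI to chart.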