Staff Engineer at Stryker bridging R&D and Manufacturing. Northwestern mpd² candidate building fluency in the fuzzy front end of innovation.
I serve as the bridge between Research and Development and Manufacturing — evaluating global supplier capabilities, driving supply chain readiness, and guiding contract manufacturers through production approval while balancing cost, quality, and schedule. Six years of launching connected medical devices, from first-generation Wi-Fi hospital stretchers to global manufacturing transfers across Turkey, China, and Taiwan, have taught me that the best product decisions live at the intersection of technical depth and business clarity.
I traded capital goods for disposables, Kalamazoo for Chicago, and a familiar business unit for a new portfolio — because growth requires the right stimulus. I am pursuing Northwestern's Master of Product Design and Development Management to strengthen my approach to the fuzzy front end of innovation: defining user needs, building financial models that inform strategy, and developing the leadership vocabulary to move from executing launches to shaping product roadmaps.
During a recent new product introduction, I built this demand scenario modeler to help leadership visualize inventory and cost implications of demand uncertainty across a 26-month planning horizon. Scroll to adjust the demand scenario from 50–100% of baseline — or drag the slider directly.
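The modeler itself runs interactively on this page; its actual numbers and logic are not reproduced here, but the core math it animates can be sketched in a few lines of Python. Everything below — the function name, the illustrative demand and order figures, and the six-month slice — is hypothetical, standing in for a 26-month horizon:

```python
def project_inventory(baseline_demand, scenario_pct, opening_inventory, monthly_orders):
    """Scale baseline demand by a scenario factor (e.g. 0.5-1.0) and walk
    the inventory position forward month by month over the horizon."""
    inventory = opening_inventory
    positions = []
    for demand, order in zip(baseline_demand, monthly_orders):
        # Each month: receive the planned order, consume scenario-adjusted demand
        inventory += order - demand * scenario_pct
        positions.append(round(inventory, 1))
    return positions

# Illustrative 6-month slice of a longer planning horizon
baseline = [100, 110, 120, 115, 105, 100]
orders = [100, 100, 100, 100, 100, 100]

print(project_inventory(baseline, 1.0, 50, orders))  # full baseline demand
print(project_inventory(baseline, 0.5, 50, orders))  # 50% demand scenario
```

The gap between the two runs is the point of the exercise: the same fixed order plan that runs inventory to zero at full demand leaves months of excess stock at the 50% scenario.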
NPI Case Study — Contract Manufacturer Supply Planning
Leading surgical positioning device launch with 50% COGS reduction through China manufacturing transfer. Guiding contract manufacturer through first PPAP while managing 26-month supply plan across committed, negotiation, and auto-adjusted order horizons.
Led Prime Connect launch — first Wi-Fi-connected hospital stretcher generating $21M in first-year sales. Recovered critical supplier timeline through on-site Shenzhen visits, proving that technical communication can transcend language barriers.
Managed $0.75M CapEx budget. Developed internal torque specification capability across 60+ tools. Built COVID Emergency Relief Bed production cells under rapidly changing conditions, deepening my empathy for operators in a way that now shapes every process I design.

Part-time program alongside full-time role. Coursework in sustainable design, materials selection, life cycle assessment, and product strategy. Bringing a Voice of Operations perspective to a cohort of seasoned product leaders from diverse industries.
Biomedical Engineering concentration. Alumni Distinguished Scholar, Honors College. GPA: 3.95/4.0.
DFMA, Process Validation (IQ/OQ/PQ), PPAP, SCADA, SPC, DMAIC, Injection Molding, PCBA Integration, Python, Data Visualization
Make vs. Buy Analysis, NPV/IRR Modeling, CapEx Management, Supplier Negotiation, Tariff Strategy, Product Roadmapping
Cross-Functional Teams, Global Coordination (China, Turkey, Taiwan), Operator Training, Stakeholder Management, Crisis Recovery
The control plane behind Token Jockey
A Python-based remote control server for managing local llama.cpp models with API key authentication, real-time GPU and VRAM monitoring, health checks, and live log tailing — all accessible over Tailscale from anywhere on the mesh. LLM Launcher is the server-side foundation that Token Jockey connects to: it handles model lifecycle (start, stop, swap), exposes an OpenAI-compatible chat endpoint, and serves an embedded web UI for configuration. This was the first piece of the stack — built to solve the practical problem of running inference on a home GPU while working from a laptop across the house or on the road.
External JSON config for model management. Auto-detects Tailscale IP. Binds llama-server to all interfaces for network access. CORS-aware with API key authentication on all endpoints.
Real-time GPU stats (VRAM, utilization, temperature). Health checks that confirm model readiness for inference, not just process liveness. Live log tailing piped from llama-server stdout. Full REST API for external tool integration.
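Because the server exposes an OpenAI-compatible chat endpoint behind API key authentication, external tools can talk to it with a generic client. The sketch below builds (but does not send) such a request; the base URL, port, route, and key are assumptions following the OpenAI convention, not LLM Launcher's documented API:

```python
import json
import urllib.request

class LauncherClient:
    """Hypothetical client for an LLM Launcher-style server."""

    def __init__(self, base_url, api_key):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def chat_request(self, model, messages):
        # OpenAI-compatible chat completion payload with bearer-token auth
        payload = {"model": model, "messages": messages}
        return urllib.request.Request(
            self.base_url + "/v1/chat/completions",
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )

# Illustrative values: a Tailscale-reachable host and a placeholder key
client = LauncherClient("http://launcher.tailnet.example:8000", "secret-key")
req = client.chat_request("llama-3", [{"role": "user", "content": "ping"}])
print(req.full_url)
print(req.get_header("Authorization"))
```

Any tool that speaks the OpenAI chat format — including Token Jockey below — can reuse this shape, which is what makes the server a drop-in backend.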
Native mobile client for the LLM Launcher stack
A lightweight SwiftUI app for managing and chatting with LLMs — local llama.cpp over Tailscale, plus MiniMax and GLM/ZhipuAI cloud providers. Token Jockey connects to the LLM Launcher server for full model control, and adds a native chat experience that works across local and cloud backends — streaming responses via SSE, storing conversations locally, and supporting per-model system prompts.
Switch between Local llama.cpp, MiniMax, and GLM/ZhipuAI from Settings. All providers use the same OpenAI-compatible SSE streaming path. Native SwiftUI chat with stop generation, markdown rendering, double-tap message navigation, and long-press send to select system prompt (None, Global, Model, or Global + Model).
Local JSON-backed conversation history with swipe-to-delete. Per-provider API keys and model fields stored in iOS Keychain. Chat Provider picker in Settings. Appearance mode and accent color customization. Global and per-model system prompts with active prompt indicators (* model, + global).
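Token Jockey itself is SwiftUI, but the shared streaming path all three providers use is the OpenAI-style SSE format, which is easy to sketch language-agnostically. The Python below parses hypothetical sample chunks — each event is a `data: {...}` line carrying a content delta, and the stream ends with a `data: [DONE]` sentinel:

```python
import json

def extract_deltas(sse_lines):
    """Collect content deltas from OpenAI-style SSE chat chunks."""
    deltas = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments and keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        # The first chunk typically carries only the role, no content
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            deltas.append(delta)
    return deltas

# Illustrative stream, not captured from a real provider
stream = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(extract_deltas(stream)))
```

Sharing one parsing path like this is what lets a single chat view switch between local llama.cpp and the cloud providers without per-backend stream handling.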
Open to conversations about product development, manufacturing engineering, and the mpd² experience.