Marc M Veihl
Product Development & Manufacturing

Staff Engineer at Stryker bridging R&D and Manufacturing. Northwestern mpd² candidate building fluency in the fuzzy front end of innovation.

NPI Launch · Supply Chain · PPAP · Global Mfg · DFMA · Product Strategy
What I Do

I serve as the bridge between Research and Development and Manufacturing — evaluating global supplier capabilities, driving supply chain readiness, and guiding contract manufacturers through production approval while balancing cost, quality, and schedule. Six years of launching connected medical devices, from first-generation Wi-Fi hospital stretchers to global manufacturing transfers across Turkey, China, and Taiwan, have taught me that the best product decisions live at the intersection of technical depth and business clarity.

Why Northwestern

I traded capital goods for disposables, Kalamazoo for Chicago, and a familiar business unit for a new portfolio — because growth requires the right stimulus. I am pursuing Northwestern's Master of Product Design and Development Management to strengthen my approach to the fuzzy front end of innovation: defining user needs, building financial models that inform strategy, and developing the leadership vocabulary to move from executing launches to shaping product roadmaps.

Interactive Showcase

During a recent new product introduction, I built this demand scenario modeler to help leadership visualize inventory and cost implications of demand uncertainty across a 26-month planning horizon. Scroll to adjust the demand scenario from 50–100% of baseline — or drag the slider directly.

Supply Chain Demand Scenario Modeler
NPI Case Study — Contract Manufacturer Supply Planning

[Interactive widget — a slider sweeps scenario demand from 50% to 100% of baseline and compares the original supply plan (100% demand baseline) against the scenario model. Supply series: Committed PO, Negotiation Window, and Auto-Adjusted (MOQ); overlays: Projected Inventory, Dependent Demand, and Safety Stock. KPIs: month safety stock is first achieved, peak excess vs. safety stock, cost of excess inventory at estimated landed COGS, and excess pallet positions at cases per pallet.]
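The core projection behind a modeler like this can be sketched in a few lines of Python. Everything below is illustrative, not the actual model: the horizon rules, MOQ value, and function names are assumptions chosen to show the idea of scaling demand while respecting committed, negotiable, and auto-adjusted supply.

```python
# Illustrative sketch: scale baseline demand by a scenario multiplier, adjust
# receipts by order horizon (committed POs stay fixed, negotiable ones scale,
# auto-adjusted ones round up to MOQ), and roll projected inventory forward.
import math


def project_inventory(baseline_demand, supply_plan, horizons, scenario=0.5,
                      moq=500, opening_inventory=0):
    """Project month-end inventory for a demand scenario.

    baseline_demand: per-month demand at 100% of baseline
    supply_plan: per-month planned receipts sized for 100% demand
    horizons: per-month flag: 'committed', 'negotiation', or 'auto'
    scenario: demand multiplier (e.g. 0.5 for the 50% case)
    """
    inventory = opening_inventory
    projection = []
    for demand, supply, horizon in zip(baseline_demand, supply_plan, horizons):
        scenario_demand = demand * scenario
        if horizon == "committed":
            receipts = supply                    # POs already placed: unchanged
        elif horizon == "negotiation":
            receipts = supply * scenario         # window still open: scale down
        else:                                    # auto-adjusted: MOQ multiples
            scaled = supply * scenario
            receipts = math.ceil(scaled / moq) * moq if scaled > 0 else 0
        inventory += receipts - scenario_demand
        projection.append(inventory)
    return projection
```

At 50% demand, the committed month overshoots (excess inventory builds), while later horizons track the scenario — which is exactly the excess-vs.-safety-stock tension the widget visualizes.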

Resume

Experience

Staff New Product & Process Development Engineer
Stryker SAGE: Advanced Operations
Apr 2024 – Present

Leading surgical positioning device launch with 50% COGS reduction through China manufacturing transfer. Guiding contract manufacturer through first PPAP while managing 26-month supply plan across committed, negotiation, and auto-adjusted order horizons.

Staff Process Development Engineer
Stryker Medical Acute Care
Dec 2023 – Apr 2024

Led Prime Connect launch — first Wi-Fi-connected hospital stretcher generating $21M in first-year sales. Recovered critical supplier timeline through on-site Shenzhen visits, proving that technical communication can transcend language barriers.

Senior / Process Development Engineer
Stryker Medical Acute Care
Jun 2019 – Dec 2023

Managed $0.75M CapEx budget. Developed internal torque specification capability across 60+ tools. Built COVID Emergency Relief Bed production cells under rapidly changing conditions, deepening an empathy for operators that now shapes every process I design.

Education

Master of Product Design & Development Management
Northwestern University — mpd²
Expected June 2027

Part-time program alongside full-time role. Coursework in sustainable design, materials selection, life cycle assessment, and product strategy. Bringing a Voice of Operations perspective to a cohort of seasoned product leaders from diverse industries.

BS Mechanical Engineering
Michigan State University
May 2019

Biomedical Engineering concentration. Alumni Distinguished Scholar, Honors College. GPA: 3.95/4.0.

Skills

Technical

DFMA, Process Validation (IQ/OQ/PQ), PPAP, SCADA, SPC, DMAIC, Injection Molding, PCBA Integration, Python, Data Visualization

Business

Make vs. Buy Analysis, NPV/IRR Modeling, CapEx Management, Supplier Negotiation, Tariff Strategy, Product Roadmapping

Leadership

Cross-Functional Teams, Global Coordination (China, Turkey, Taiwan), Operator Training, Stakeholder Management, Crisis Recovery

Portfolio

Side Project · Local LLM Infrastructure — Server

LLM Launcher

The control plane behind Token Jockey

A Python-based remote control server for managing local llama.cpp models with API key authentication, real-time GPU and VRAM monitoring, health checks, and live log tailing — all accessible over Tailscale from anywhere on the mesh. LLM Launcher is the server-side foundation that Token Jockey connects to: it handles model lifecycle (start, stop, swap), exposes an OpenAI-compatible chat endpoint, and serves an embedded web UI for configuration. This was the first piece of the stack — built to solve the practical problem of running inference on a home GPU while working from a laptop across the house or on the road.

Infrastructure

External JSON config for model management. Auto-detects Tailscale IP. Binds llama-server to all interfaces for network access. CORS-aware with API key authentication on all endpoints.
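The per-request key check can be sketched as follows. This is a hypothetical stand-in, not the project's actual code: the header format and key source are assumptions, and `hmac.compare_digest` is used so the comparison runs in constant time.

```python
# Hypothetical sketch of API key authentication on an endpoint (illustrative
# only): validate an "Authorization: Bearer <key>" header against the server's
# configured key using a timing-safe comparison.
import hmac
from typing import Optional

SERVER_API_KEY = "example-key"  # illustrative; a real server loads this from config


def is_authorized(auth_header: Optional[str]) -> bool:
    """Return True only for a well-formed header carrying the correct key."""
    if not auth_header or not auth_header.startswith("Bearer "):
        return False
    presented = auth_header[len("Bearer "):]
    return hmac.compare_digest(presented, SERVER_API_KEY)
```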

Monitoring & Control

Real-time GPU stats (VRAM, utilization, temperature). Health checks that confirm model readiness for inference, not just process liveness. Live log tailing piped from llama-server stdout. Full REST API for external tool integration.
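One common way to collect stats like these is to shell out to `nvidia-smi` in CSV mode and parse the result. A minimal sketch, with the query fields and unit names assumed rather than taken from the project:

```python
# Sketch: read GPU stats via `nvidia-smi --query-gpu=... --format=csv,noheader,
# nounits` and parse the comma-separated line into a dict. Field choices are
# assumptions for illustration, not the project's actual query.
import subprocess

QUERY = "memory.used,memory.total,utilization.gpu,temperature.gpu"


def parse_gpu_stats(csv_line: str) -> dict:
    """Parse one CSV line of nvidia-smi output (MiB, percent, Celsius)."""
    used, total, util, temp = [v.strip() for v in csv_line.split(",")]
    return {
        "vram_used_mib": int(used),
        "vram_total_mib": int(total),
        "gpu_util_pct": int(util),
        "temp_c": int(temp),
    }


def read_gpu_stats() -> dict:
    """Query the first GPU; requires the NVIDIA driver's nvidia-smi CLI."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_stats(out.splitlines()[0])
```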

[Scrollable demo — saved snapshot of the LLM Launcher control panel at gaming-pc.tailnet.ts.net:8081]
Python · llama.cpp · CUDA · Tailscale · NVIDIA SMI · OpenAI-Compatible API · GGUF Models
Early Prototype · Python 3.8+ · View on GitHub →
Side Project · Local LLM Infrastructure — iOS Client

Token Jockey iOS

Native mobile client for the LLM Launcher stack

A lightweight SwiftUI app for managing and chatting with LLM models — local llama.cpp over Tailscale, plus MiniMax and GLM/ZhipuAI cloud providers. Token Jockey connects to the LLM Launcher server for full model control, and adds a native chat experience that works across local and cloud backends — streaming responses via SSE, storing conversations locally, and supporting per-model system prompts.

Multi-Provider Chat

Switch between Local llama.cpp, MiniMax, and GLM/ZhipuAI from Settings. All providers use the same OpenAI-compatible SSE streaming path. Native SwiftUI chat with stop generation, markdown rendering, double-tap message navigation, and long-press send to select system prompt (None, Global, Model, or Global + Model).
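The shared streaming path is the OpenAI-style server-sent-events format: each event is a `data: {...}` line carrying a JSON chunk with a `delta`, and `data: [DONE]` ends the stream. A minimal Python sketch of the parsing (shown in Python for illustration; the app itself is SwiftUI):

```python
# Sketch of consuming an OpenAI-compatible SSE chat stream: extract incremental
# text from each "data:" event and stop at the "[DONE]" sentinel.
import json
from typing import Iterable, Iterator


def stream_deltas(sse_lines: Iterable[str]) -> Iterator[str]:
    """Yield incremental text chunks from an OpenAI-style SSE stream."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue                    # skip blank keep-alives and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return                      # server signals end of generation
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:          # role-only deltas carry no text
            yield delta["content"]
```

Because all three providers emit this same shape, one parser serves local llama.cpp and both cloud backends.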

History & Settings

Local JSON-backed conversation history with swipe-to-delete. Per-provider API keys and model fields stored in iOS Keychain. Chat Provider picker in Settings. Appearance mode and accent color customization. Global and per-model system prompts with active prompt indicators (* model, + global).

[Screenshots: Token Jockey iOS — control panel, chat interface, conversation history]
SwiftUI · WKWebView · SSE Streaming · llama.cpp · MiniMax · GLM/ZhipuAI · Tailscale · iOS Keychain · MarkdownUI
Active Development · iOS 16+ · Xcode 15+ · View on GitHub →
Northwestern mpd² · Team Purple

Design Through Understanding

WellBean App Portfolio

Let's Connect

Open to conversations about product development, manufacturing engineering, and the mpd² experience.

Location: Chicago, IL