Alexios Bluff Mara × Illinois State University
Research Collaboration · Cardinal & Code
Project · Cortex Gemma 4 Good · Health & Sciences v0.1.0 · Apache-2.0

Watch your brain
respond to any video.

A multimodal brain-response analysis system, built on Meta's TRIBE v2 brain foundation model and Google's Gemma 4. Upload a clip — get a 3D cortical-activation map plus four parallel narrations from four very different readers (an ISU freshman, a WBEZ science reporter, a Northwestern neurologist, and a Google ML scientist).

The Brain Cinema — in one paragraph

Picture a movie theatre. Your brain is the audience: 20,484 people in 20,484 assigned seats, each responsible for a specific job — seeing faces, recognising voices, feeling suspense, processing language. The movie is whatever you upload. TRIBE v2, Meta's brain foundation model, is the high-speed sensor system in every seat — twice per second, it predicts how excited each audience member is going to get, three to five seconds before their reaction visibly peaks. Gemma 4 is the panel of four critics in the back booth: after the screening, all four read the same audience-reaction printout and write their own takes — a chatty freshman, a WBEZ reporter, a Northwestern neurologist, and a Google ML scientist. You see all four side-by-side and pick the voice that sounds like your brain.

"Twenty thousand seats. One movie. Four critics. About three minutes."

Architecture — how it actually runs right now

   ┌──────────────────────────────────────────────────────────────────┐
   │ Browser / phone                                                  │
   │   ↓ https://big-apple.scylla-betta.ts.net  (Tailscale Funnel)    │
   ├──────────────────────────────────────────────────────────────────┤
   │ Big Apple — M4 Max MacBook Pro · 48 GB unified memory            │
   │                                                                  │
   │   FastAPI backend  (port 8773) — serves HTML + API directly      │
   │     ├─ TRIBE v2 (PyTorch on MPS, ~6 GB unified)                  │
   │     │   → 20,484-vertex BOLD prediction at 2 Hz                  │
   │     │                                                            │
   │     └─ 4× narrate (parallel, in queue)                           │
   │         ↓                                                        │
   │   Inference router (port 8766)                                   │
   │     ├─→ Local Ollama (Gemma 4 E4B / 26B / 31B on Metal)          │
   │     └─→ OpenRouter free tier (cloud failover, $0/token)          │
   ├──────────────────────────────────────────────────────────────────┤
   │ Seratonin (Windows · RTX 5090 · Chicago) — STANDBY               │
   │   ↳ joins router pool when not gaming, runs same code via CUDA   │
   └──────────────────────────────────────────────────────────────────┘
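The router box in the diagram boils down to a priority list: local Ollama nodes first, the OpenRouter free tier last. A minimal sketch of that failover order; the backend names and health-check shape here are illustrative, not the actual cortex router API:

```python
from typing import Callable, Sequence

def pick_backend(backends: Sequence[tuple[str, Callable[[], bool]]]) -> str:
    """Return the name of the first healthy backend.

    `backends` is ordered by priority: local nodes first, the
    OpenRouter free tier last, so the demo URL never returns a 502.
    """
    for name, healthy in backends:
        try:
            if healthy():
                return name
        except Exception:
            continue  # a crashed node counts as unhealthy
    raise RuntimeError("no narration backend available")

# Priority order mirroring the diagram (health checks stubbed out):
pool = [
    ("ollama-big-apple", lambda: False),  # pretend the local node is down
    ("ollama-seratonin", lambda: False),  # standby node also offline
    ("openrouter-free",  lambda: True),   # cloud endpoint answers
]
print(pick_backend(pool))  # → openrouter-free
```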

The whole demo currently runs on a single MacBook Pro (M4 Max, 48 GB unified). The same Python source runs on the 5090: the cortex.device abstraction picks MPS on Apple Silicon and CUDA on NVIDIA. When Seratonin is back in the pool, narration jobs round-robin across both nodes; if both fall over, the router fails over to OpenRouter's free Gemma-4-26B endpoint so the demo URL never returns a 502.
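The device pick is the only platform-specific branch. A minimal sketch of what a cortex.device-style selector might do, written as a pure function so the preference order is explicit; in practice the availability flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, and the real module's API may differ:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA (Seratonin's RTX 5090), then MPS (Big Apple's
    M4 Max), falling back to CPU on anything else."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# On Big Apple: no CUDA, Metal/MPS present.
print(pick_device(cuda_available=False, mps_available=True))  # → mps
```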

The viewer

A WebGL/Three.js scene with per-vertex animation, written by Kimi K2.6 via the Nous Portal during the Mercury sprint. Don't read about it — open the live demo and click around.

Open the live demo →   Or browse the gallery

Where to go next

  • Live demo: big-apple.scylla-betta.ts.net — running on Big Apple (M4 Max, Chicago) via Tailscale Funnel.
  • Gallery of past scans: /gallery — every completed scan with all four persona narrations.
  • Source: github.com/AlexiosBluffMara/cortex — Apache-2.0; TRIBE v2 weights ship under CC-BY-NC 4.0 (Meta) and install separately.
  • Not a diagnostic tool. Predictions are population-averaged across 25 NeuroMod subjects, not tuned to any individual's brain. Cortex does not replace fMRI.
Research conducted in collaboration with Illinois State University · Bloomington–Normal, IL · ABM in Chicago, IL.
Cortex v0.1.0 · Apache-2.0