Brain-Computer
Interfaces, simplified.
A unified Python toolkit for building, testing, and deploying BCI systems — from stimulus delivery to live streaming, signal processing, and model training, all in one package.
Everything a BCI researcher needs,
in one place.
pyBCI handles the complexity so you can focus on the science. From hardware abstraction to trained models — the entire stack, open source.
Stimuli Engine
Precision-timed visual and auditory stimulus delivery for ERPs, SSVEP, P300, and custom paradigms.
Data Processing
Real-time filtering, artifact rejection, epoching, and feature extraction — all in a unified pipeline.
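To make the pipeline concrete without reproducing pyBCI's actual API (not shown on this page), here is a self-contained NumPy sketch of two of the steps this card names: event-locked epoching and band-power feature extraction. The function name `epoch_and_bandpower` and all parameters are illustrative, not pyBCI identifiers.

```python
import numpy as np

def epoch_and_bandpower(signal, events, fs, tmin=-0.1, tmax=0.4, band=(8.0, 12.0)):
    """Slice a continuous 1-D signal into event-locked epochs and
    compute per-epoch band power (a common BCI feature)."""
    n0, n1 = int(tmin * fs), int(tmax * fs)
    epochs = np.stack([signal[e + n0 : e + n1] for e in events])
    # Baseline correction: remove each epoch's DC offset
    epochs -= epochs.mean(axis=1, keepdims=True)
    # Band power via FFT: mean squared magnitude within the band
    freqs = np.fft.rfftfreq(epochs.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return epochs, spectrum[:, mask].mean(axis=1)

fs = 250  # sampling rate in Hz
t = np.arange(0, 10, 1.0 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
events = np.array([500, 1000, 1500])  # sample indices of stimulus onsets
epochs, power = epoch_and_bandpower(signal, events, fs)
print(epochs.shape)  # (3, 125): 3 epochs of 0.5 s each
```

In a unified pipeline these steps would run on streamed data rather than a pre-recorded array, but the epoch slicing and feature math are the same.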
Live Streaming
LSL-compatible streaming with sub-millisecond latency. Plug in any EEG, ECoG, or fNIRS device instantly.
Model Training
Built-in FBCSP, EEGNet, and custom model support. Train, validate, and deploy without leaving Python.
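FBCSP builds on Common Spatial Patterns (CSP). As a hedged illustration of the core idea, not pyBCI's implementation, the sketch below derives CSP filters from two classes of synthetic multichannel trials using plain NumPy; `csp_filters` and its signature are hypothetical names for this example only.

```python
import numpy as np

def csp_filters(class_a, class_b, n_filters=2):
    """Common Spatial Patterns: spatial filters that maximize the
    variance ratio between two classes of (trials, channels, samples) data."""
    def mean_cov(trials):
        return np.mean([np.cov(tr) for tr in trials], axis=0)

    Ca, Cb = mean_cov(class_a), mean_cov(class_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w,
    # solved via eig on (Ca + Cb)^-1 Ca
    evals, evecs = np.linalg.eig(np.linalg.solve(Ca + Cb, Ca))
    order = np.argsort(evals.real)
    # Take filters from both ends of the spectrum (most discriminative)
    picks = np.concatenate([order[: n_filters // 2],
                            order[-(n_filters - n_filters // 2):]])
    return evecs.real[:, picks].T  # shape: (n_filters, channels)

rng = np.random.default_rng(0)
# Synthetic data: class A has extra variance on channel 0, class B on channel 3
a = rng.standard_normal((20, 4, 200)); a[:, 0] *= 3.0
b = rng.standard_normal((20, 4, 200)); b[:, 3] *= 3.0
W = csp_filters(a, b, n_filters=2)
features = np.log(np.var(W @ a[0], axis=1))  # log-variance features per trial
print(W.shape, features.shape)  # (2, 4) (2,)
```

The filter-bank variant (FBCSP) repeats this per frequency band and concatenates the log-variance features before classification.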
Agentic Helper
An AI co-pilot that suggests paradigms, debugs pipelines, and auto-tunes your processing chain.
Unified SDK
One import. Every tool. Consistent APIs across stimuli, data, and models — designed for reproducible science.
From raw signal to prediction
in under 10 lines.
pyBCI connects your hardware, runs your pipeline, and returns labelled epochs — ready for your model.
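pyBCI's own call signatures are not reproduced here; instead, the following self-contained NumPy sketch shows the same raw-signal-to-prediction flow end to end, with synthetic data standing in for hardware. Every name and number in it is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, win = 250, 125                       # 250 Hz sampling, 0.5 s epochs
signal = 0.2 * rng.standard_normal(fs * 20)
events = np.arange(fs, fs * 19, fs)      # one stimulus marker per second
labels = np.arange(events.size) % 2      # ground-truth class per event
t = np.arange(win) / fs
for e, y in zip(events, labels):         # class 1 adds a 10 Hz burst
    signal[e : e + win] += y * np.sin(2 * np.pi * 10 * t)

epochs = np.stack([signal[e : e + win] for e in events])         # labelled epochs
power = (np.abs(np.fft.rfft(epochs)) ** 2)[:, 4:7].mean(axis=1)  # ~8-12 Hz band
preds = (power > power.mean()).astype(int)                       # threshold "model"
print((preds == labels).mean())  # 1.0 on this toy signal
```

In practice the epoching happens on a live stream and the threshold is replaced by a trained classifier, but the shape of the pipeline, signal in, labelled epochs out, prediction at the end, is the same.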
News & releases.
Updates from the pyBCI project — new features, research highlights, and community milestones.
pyBCI v0.4 — Unified Streaming API
Major refactor of the LSL backend brings sub-millisecond sync across all supported devices.
Closed-loop P300 speller achieves 98% accuracy in pilot study
Using pyBCI's real-time EEGNet classifier, our team hit record accuracy on a 36-symbol P300 matrix.
pyBCI v0.3 — Agentic Pipeline Assistant
Introducing the built-in AI co-pilot that suggests preprocessing parameters and flags artifacts automatically.
Get updates when we ship.
Early access, release notes, and research highlights — no spam, unsubscribe anytime.
Meet the team.
Researchers and engineers building the future of open-source BCI tooling.