
Rapid Prototyping with Muse & Max

As a participant in both Hack the Brain 2014 and this year’s Education edition, I’d like to share my team’s (technical) experiences with coding rapid prototypes using EEG data from the Muse headset.

More user-friendly 
Compared to Hack the Brain 2014, the Muse has become a lot more developer-friendly. Last year, we spent almost a day just getting to the point of being able to draw EEG signals onscreen, partly because of our unfamiliarity with the subject matter and the device.

But this year, it was nearly a ‘plug and play’ experience. The Muse’s battery also seemed to last longer. And pairing the headset with the Mac now works every time. 

Furthermore, the muse.io software has improved significantly. Last year we could only use the raw EEG signals, but muse.io now provides common calculations that are convenient for building brainwave applications.

So there is no need to write custom frequency analysis to determine the frequency components of the signal; the intermediate transformations applied to each sensor’s signal are exposed via OSC paths such as:

/muse/eeg
/muse/elements/raw_fft0
/muse/elements/alpha_absolute

But higher-level calculations are also provided. For example:

/muse/elements/blink
/muse/elements/jaw_clench
/muse/elements/experimental/concentration
/muse/elements/experimental/mellow
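
To give a feel for how little code this takes, here is a minimal listener sketch in Python using the python-osc package. The paths are the ones listed above; the port (5000) and the choice of library are our own assumptions, since muse.io simply streams OSC over UDP to whatever address you give it.

    # Minimal muse.io OSC listener (Python, python-osc package).
    # Assumes muse.io streams OSC over UDP to localhost:5000.
    from pythonosc import dispatcher, osc_server

    def on_eeg(address, *channels):
        # /muse/eeg delivers one float per EEG sensor.
        print(address, channels)

    def on_blink(address, blink):
        # /muse/elements/blink delivers 1 when a blink is detected.
        if blink:
            print("blink!")

    d = dispatcher.Dispatcher()
    d.map("/muse/eeg", on_eeg)
    d.map("/muse/elements/blink", on_blink)

    osc_server.BlockingOSCUDPServer(("localhost", 5000), d).serve_forever()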

Experimental coding
Some of these elements are experimental, so their output should be taken with a grain of salt. But for quickly whipping up a proof of concept, they are very helpful!
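
Because these experimental values can fluctuate quite a bit from sample to sample, smoothing them before acting on them makes a prototype feel much less jittery. Here is a minimal sketch of one simple way to do that; the smoothing factor is something you would tune per signal, not anything prescribed by Muse.

    # Exponential moving average to steady a jumpy signal such as
    # /muse/elements/experimental/concentration (roughly 0..1 floats).
    class Smoother:
        def __init__(self, alpha=0.1):
            self.alpha = alpha   # lower alpha = smoother but slower to react
            self.value = None

        def update(self, sample):
            if self.value is None:
                self.value = sample          # seed with the first sample
            else:
                self.value += self.alpha * (sample - self.value)
            return self.value

    # Usage: smooth = Smoother(); level = smooth.update(raw_concentration)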

This year, we tried a setup that connected two Muses through OSC to Max/MSP, which in turn controlled two iPhone apps. Within the span of a morning, we were up and running, trying out different mechanics and experiences.

Max/MSP is designed for music programming. But to a computer there is no difference between an audio signal and an EEG signal, so it’s a great fit. The defining feature of Max is that you can make changes to the program while it’s running (no compiling, no starting or stopping servers, etc.). Could this be the ideal setup for the kind of rapid prototyping needed during a hackathon?
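
In our setup, Max did the routing between the headsets and the phones. Outside Max, the same relay idea can be expressed in a few lines of Python; the phone addresses, port, and app-side path below are made up for illustration.

    # Hypothetical relay: take one Muse value from muse.io and forward it
    # to two phones running an OSC-capable app. All addresses are invented.
    from pythonosc import dispatcher, osc_server, udp_client

    phones = [udp_client.SimpleUDPClient("192.168.1.21", 9000),
              udp_client.SimpleUDPClient("192.168.1.22", 9000)]

    def forward(address, value):
        for phone in phones:
            phone.send_message("/game/level", value)   # app-side path, invented

    d = dispatcher.Dispatcher()
    d.map("/muse/elements/experimental/concentration", forward)
    osc_server.BlockingOSCUDPServer(("localhost", 5000), d).serve_forever()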

Open source tools 
Also new at this edition was OpenBCI. It’s great to see an open source alternative to the Muse and the Emotiv. There is a lot of potential in the platform, but the project is still young, so it is still missing the convenience of more mature, commercial brain-computer interfaces.

I think it’s a nice challenge for the OpenBCI community to follow InteraXon’s lead and come up with an open source version of muse.io. That way, it would be possible to send OpenBCI EEG data and higher-level transformations through OSC. Maybe the OSC path scheme could even be based on the one already defined by Muse, so that it becomes trivial to build applications that work with both interfaces.
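
To make the idea concrete, such a compatibility layer could be as small as a shim that re-publishes data under Muse-style paths. In this sketch the OpenBCI-side path and the ports are entirely hypothetical, since that OSC bridge doesn’t exist yet.

    # Sketch of the path-compatibility idea: subscribe to a hypothetical
    # OpenBCI OSC source and re-publish its EEG frames under the Muse path,
    # so an application written against muse.io needs no changes.
    from pythonosc import dispatcher, osc_server, udp_client

    muse_app = udp_client.SimpleUDPClient("localhost", 5000)  # where the app listens

    def republish(address, *samples):
        muse_app.send_message("/muse/eeg", list(samples))     # Muse-style path

    d = dispatcher.Dispatcher()
    d.map("/openbci/eeg", republish)  # hypothetical OpenBCI path
    osc_server.BlockingOSCUDPServer(("localhost", 6000), d).serve_forever()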

And, to pressure Muse to keep up their innovative work, hopefully at next year’s edition we can experiment with a full open source stack of OpenBCI, OSC, and Pd (an open alternative to Max) that’s as pleasant to use as the Muse & Max setup was this year!