A neuroscientist’s thoughts on Kernel's announcement

Kernel, the neurotech company founded by Bryan Johnson, just released a wave of new information about the technology they have been building over the past half-decade. With this announcement, we finally get a glimpse into the secretive company’s plans. We knew that they had ditched their intentions of pursuing invasive brain recording techniques, leaving all that to Neuralink, a similarly minded effort launched by serial entrepreneur Elon Musk. There were rumours of a NIRS (near-infrared spectroscopy) based system, but nothing concrete. Well, here we are, so let’s get to it and unpack some of the information revealed today.

The system(s)
The vision Kernel outlined in their post is that of a multimodal recording system based on MEG (magnetoencephalography, a mouthful) and NIRS (near-infrared spectroscopy). Both of these technologies are already available and widely used in research settings (and clinical settings too in the case of MEG). What’s new here is the package, miniaturization and ease of use. Far from trivial, these advances could allow both recording modalities to move out of the lab and into more “ecological” settings (e.g. use during movement, daily activities, etc). If Kernel has developed a robust system that successfully deals with the many notoriously tricky drawbacks of both NIRS and MEG, this would be an important contribution to the field, helping democratize these two underused techniques. The general thinking is that MEG offers something similar to EEG (high temporal resolution, limited ability to localize signal), only with better overall signal quality (at least theoretically, we haven’t seen any raw data), while NIRS offers a portable equivalent of fMRI (functional magnetic resonance imaging), at the price of reduced access to deeper brain structures and lower spatial resolution.

Although both modalities were already in use in research laboratories, Kernel’s contribution here is twofold: they tackled some of the key limitations of these techniques, and they produced a more polished and miniaturized version of each. The most exciting advance is the active magnetic shielding in their MEG system which, according to their post, allows them to acquire signals outside of a shielded room (active shielding has been proposed in the past, but not to the point of bringing the system outside shielded rooms, such as here and here). However, the data they report (more on that later) were acquired inside a shielded room ("All experimentation took place inside a magnetically-shielded room to attenuate environmental noise."), which raises the question of how well the system works in practice, and what the hit to signal quality might be in more realistic everyday environments. Still, this would be a big advance, dramatically increasing the scenarios in which MEG could be applied. Another important point is that the MEG system developed by Kernel has a (comparatively) small size. This is possible because they use optically pumped magnetometer (OPM) sensors, which allow for much smaller footprints than traditional MEG devices, which require extremely low temperatures for their superconducting sensors to work and are therefore very bulky and immobile. OPM sensors have been used to build small systems in research settings (see here and here for examples), but nothing quite as polished and with such a high channel count as what Kernel unveiled.

Both the MEG and NIRS systems (which for now are two separate helmets) weigh in at roughly 1.5 kg each, which is a lot to be carrying on the head. Having worked at a company intent on placing computers on people’s heads (Magic Leap, which produces glasses that weigh 260g), I can safely say that those weights would not work for any kind of prolonged use and would likely cause musculoskeletal issues in the long run (even the “heavy” Microsoft Hololens comes in at only 566g). Which brings us to the next point: these are obviously not consumer devices meant to be worn for hours on end, or even assistive devices for people with sensorimotor disorders (and to be clear, they don’t claim to be). These are research devices, to be used for neuroscience studies. This raises an important question about a company like Kernel: if their ultimate goal is to pursue general consumer applications, which are far on the horizon, what will sustain them in the shorter term? Kernel may have an answer in “neuroscience as a service.”

The compelling (albeit tricky) vision of neuroscience as a service
Neuroscience as a service (coined NaaS by Kernel) is the idea of offering companies the ability to run state-of-the-art neuroscience studies by leveraging Kernel’s in-house expertise and technology. The idea is simple and compelling, and one can easily imagine this creating a lot of interest from companies working in UX, learning, rehabilitation and more. Many (if not most) companies do not have the capacity and know-how to perform neuroscience studies to guide their product design and development. Yet many could benefit from these types of studies. Our neuroscientific knowledge may be too limited to learn anything of value in most industries, but it certainly seems plausible that this could work for a subset of companies and use-cases.

Nonetheless, a proposal like this ought to (and certainly will) raise a number of red flags in any neuroscientist’s mind. With our limited knowledge about the brain, and being able to access but a small percentage of ongoing activity with non-invasive sensors, things can quickly veer into pseudoscience. In my view, this is one of the biggest risks Kernel faces with NaaS. It’s easy to envision how a number of companies with little to no interest in rigorous neuroscience may try to add some trendy neurotech spice to their marketing campaigns (I can already hear the taglines: “this is your brain on X”). Luckily, offering this service on their own terms allows Kernel to tackle this issue however they please. But market forces may conspire against them, pressuring them to strike a difficult balance between scientific rigour and profits (e.g. smaller studies will have less power, but will also be cheaper).
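The power-versus-cost tension is easy to make concrete with a standard back-of-the-envelope sample-size calculation. The sketch below uses the textbook normal-approximation formula for a two-group comparison; it has nothing to do with Kernel's actual methods, and the effect sizes are illustrative:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group for a two-sample
    comparison (normal approximation). Illustrates why smaller,
    cheaper studies trade away statistical power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (Cohen's d = 0.5) needs ~63 participants per group;
# halving the effect size roughly quadruples the required sample.
print(n_per_group(0.5), n_per_group(0.25))  # → 63 252
```

Noisy non-invasive recordings tend to mean small effect sizes, which is exactly where the required sample (and the bill) grows fastest.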

A few words about the two demo applications
Together with a description of their system, Kernel also released two demo “studies” serving to establish the company’s scientific and technical prowess. Although somewhat expected of a company trying to generate hype, the wording of their announcements significantly oversells what they achieved in these experiments and how it relates to previous work. Both experiments show nice results and support the quality of their system, but neither is groundbreaking. To their credit, the full-length article is more transparent than the headlines about the significance of these two studies.

In one experiment, which they call Sound ID, Kernel scientists extended a previously published study. Their headline reads: “Kernel Sound ID decodes your brain activity and within seconds identifies what speech or song you are hearing.” In practice, the experiment works as follows: a person listens to one of ten predefined and preprocessed song or speech excerpts, and the system guesses which of the ten auditory fragments the person is hearing based on the MEG signal (after tens of seconds). Importantly, this requires prior knowledge of the piece being listened to. Additionally, this approach would most likely degrade very quickly as more snippets are added (e.g. recognizing one out of a hundred songs the user might be listening to). This is certainly an interesting illustrative example but is unlikely to be practically useful in this form. Although not from Kernel directly, suggesting this is a “Shazam for the mind” is definitely deep in hype territory. On the other hand, an interesting application of this technique which came up in my research is the ability to detect which auditory stream a person is attending to, for example when multiple people are talking at the same time. It’s easy to envision how this could be helpful in studies on attention as part of Kernel’s NaaS offering.
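The closed-set nature of the task can be sketched as simple template matching: compare the recorded response against each known candidate stimulus and pick the best match. Kernel has not published their decoding pipeline, so everything below — the names, the envelope-correlation signal model, the toy data — is my own illustrative assumption:

```python
import numpy as np

def identify_excerpt(neural_envelope, stimulus_envelopes):
    """Hypothetical closed-set decoder: pick the candidate stimulus
    whose amplitude envelope correlates best with the recorded
    neural response envelope. Only works because the ten candidate
    excerpts are known in advance."""
    scores = [np.corrcoef(neural_envelope, env)[0, 1]
              for env in stimulus_envelopes]
    return int(np.argmax(scores))

# Toy demonstration: the "neural" trace is a noisy copy of template 3.
rng = np.random.default_rng(0)
templates = [rng.standard_normal(500) for _ in range(10)]
neural = templates[3] + 0.5 * rng.standard_normal(500)
print(identify_excerpt(neural, templates))  # → 3
```

This also makes the scaling problem visible: with a hundred or a thousand candidates, the chance of a spurious near-best correlation grows, and accuracy degrades.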

The second experiment was a more standard speller scenario, where a person can (slowly) type by visually attending to letters on a virtual keyboard. This relies on a smart (and widely used) trick: each virtual key on the screen flickers with a specific pattern, which is then reflected in the brain when a person looks at it. This way, it’s possible for the system to predict which key the person is looking at by recognizing the flickering pattern. This type of approach works well, and can allow people with motor impairments to communicate (e.g. someone with paralysis after spinal cord injury). What’s intriguing about this choice of experiment is that it highlights one of the main advantages of invasive neural interfaces (of the type Neuralink is pursuing). Specifically, the speller trick requires the person to “type” with their eyes, which is not a very fast and efficient way to enter text (the information transfer rate Kernel found was around 0.9 bit/s). On the other hand, invasive interfaces have been shown to allow speech synthesis directly from neural recordings (see here for another example), potentially enabling a “direct brain Siri” type of interface. Neither of these papers specifically looks at information transfer rates, but they should be close to that of natural speech, which some estimates put at 40 bit/s. This highlights the dramatic difference between the two approaches. So while there is nothing wrong with this “speller” demonstration, it’s an odd choice, because it happens to be one of the areas where invasive interfaces have an objective, well-quantified performance advantage over non-invasive alternatives.

Final thoughts
It’s exciting to live in a time where neurotechnology receives so much attention and funding. One of the things extremely well-funded companies can achieve that researchers cannot (and have little incentive to) is building integrated and polished systems. Both the Neuralink and Kernel announcements have that in common: they take state-of-the-art research technologies and turn them into actual products. That’s no small feat and requires a lot of resources. In both cases, the resulting systems are many researchers’ dreams come true, offering signal quality, reliability and ease-of-use well beyond what clunky, semi-custom lab rigs can offer. Although researchers certainly cannot complain, one wonders whether such monumental efforts make financial sense. These companies share a common vision of a future where we interact with our digital tools in more efficient and seamless ways, using our brains directly. But there remains a long and winding road before this vision can become a reality for everyday consumers, and how easily these companies can sustain themselves until that day is still to be seen. Getting there will require more than streamlined and optimized versions of tools researchers already have. The science isn’t quite there yet, and some fundamental breakthroughs will need to happen before many of the promised sci-fi-sounding applications can see the light of day. The fancy tools Kernel and Neuralink are building will help get us there faster. Here’s to hoping they’ll have the patience to stick around and continue building them while Science meanders along its slow and steady path to the future.
