In the beginning, we had the keyboard, the mouse, and the joystick. Then came gyroscopes, voice recognition, and more. There are many ways to interact with a machine, but could we someday capture the frequencies emitted by our brain, process them, and expose them as an API? Deploy a pod on a cluster just by thinking about it? A California startup has released a brain helmet that captures the brain's electrical activity, along with an SDK for building applications and services on top of this new form of interface: the brain. After a quick explanation of how our brain works, we will see through short demos how the SDK works and the perspectives it opens up, for accessibility in particular.