{"id":768,"date":"2015-09-25T16:34:09","date_gmt":"2015-09-25T20:34:09","guid":{"rendered":"https:\/\/sites.bu.edu\/msl\/?page_id=768"},"modified":"2015-09-25T16:36:34","modified_gmt":"2015-09-25T20:36:34","slug":"brain-swarm-interface-bsi","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/msl\/research\/brain-swarm-interface-bsi\/","title":{"rendered":"Brain Swarm Interface (BSI)."},"content":{"rendered":"<p>This work presents a novel marriage of Swarm\u00a0Robotics and Brain Computer Interface technology to produce\u00a0an interface which connects a user to a swarm of robots. The\u00a0proposed interface enables the user to control the swarm\u2019s size\u00a0and motion employing just thoughts and eye movements. The\u00a0thoughts and eye movements are recorded as electrical signals\u00a0from the scalp by an off-the-shelf Electroencephalogram (EEG)\u00a0headset. Signal processing techniques are used to filter out noise\u00a0and decode the user\u2019s eye movements from raw signals, while a\u00a0Hidden Markov Model \u00a0(HMM) technique is employed to decipher the\u00a0user\u2019s thoughts from filtered signals. The dynamics of the robots\u00a0are controlled using a swarm controller based on potential\u00a0fields. The shape and motion parameters of the potential fields\u00a0are modulated by the human user through the brain-swarm\u00a0interface to move the robots. The method is demonstrated\u00a0experimentally with a human controlling a swarm of three\u00a0m3pi robots in a laboratory environment, as well as controlling\u00a0a swarm of 128 robots in a computer simulation.<\/p>\n<p>Brain Computer Interfaces hold great promise for enabling\u00a0people with various forms of disabilities, from restricted\u00a0motion due to injury or old age, to severe disabilities like\u00a0the ALS , Locked-in syndrome, Tetraplegia and paralysis. There have been several works which have investigated using BCIs for controlling\u00a0prosthetics and for medical rehabilitation.Whereas the motivation for a BCI operated prosthetic or\u00a0wheelchair is evident, the applications for a brain swarm\u00a0interface may be less obvious. We envision several applications\u00a0for this technology. Firstly, people who are mobility impaired\u00a0may use a swarm of robots to manipulate their\u00a0environment using a brain-swarm interface. Indeed, a swarm\u00a0of robots may offer a greater range of possibilites for\u00a0manipulation than what is afforded by a single mobile robot\u00a0or manipulator. For example a swarm can reconfigure to suit\u00a0different sizes or shapes of objects, or to split up and deal\u00a0with multiple manipulation tasks at once. Another motivation\u00a0for our work is that using a brain interface may unlock a new,\u00a0a more flexible, way for people to interact with swarms.<br \/>\nCurrently human swarm interfaces are largely restricted to\u00a0gaming joysticks with limited degrees of freedom. However,\u00a0swarms typically have many degrees of freedom, and the\u00a0brain has an enormous potential to influence those degrees\u00a0of freedom beyond the confines of a traditional joystick.\u00a0We envision that the brain can eventually craft shapes and\u00a0sizes for swarm, split a swarm into sub swarms, aggregate or\u00a0disperse the swarm, and perhaps much more. In this work,\u00a0we take a small step toward this vision.<\/p>\n<p><strong>The Training Phase<\/strong><\/p>\n<p>We train the HMM model using the Baum-Welch algorithm to obtain the model parameters from a set of observations. 
The Simulation and Experiment Phase

After the training phase, the learned model parameters are used in the forward algorithm to estimate the user's current thought state. The estimated thoughts modulate the size of the swarm (aggregate and disperse), while eye movements move the swarm: the four eye movements (up, down, left, and right) correspond to the respective movement of the swarm. For both the simulation and the experiment we chose a simple rectangular path to navigate as a proof of concept. We labelled the edges of the rectangle as sections 1, 2, 3, and 4 and navigated them in a clockwise manner. In sections 1, 2, and 4 the user thinks the thought that makes the robots disperse, while in section 3 (the purple path in the simulation and experiment) the robots aggregate in response to the other thought. A sketch of the online thought estimation is given below.
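The following is a minimal, hedged sketch of that online decoding step: the forward pass of the trained HMM is run over a sliding window of recent metric samples, and the most probable current state is mapped to an aggregate/disperse command. The window length, the state-to-command mapping, and the reuse of the hmmlearn model from the previous sketch are illustrative assumptions.

```python
# Minimal sketch: online thought estimation and mapping to a swarm-size command.
# Assumes `model` is the 2-state GaussianHMM fitted in the training sketch and
# that new samples arrive as 3-vectors [engagement, meditation, excitement].
import numpy as np
from collections import deque

WINDOW = 32                          # hypothetical sliding-window length (samples)
buffer = deque(maxlen=WINDOW)

def thought_command(sample, model):
    """Return 'disperse' or 'aggregate' from the current filtered state estimate."""
    buffer.append(sample)
    obs = np.asarray(buffer)                         # shape (t, 3)
    # score_samples runs the forward-backward recursion; for the final sample
    # the posterior reduces to the forward (filtered) state probability.
    _, posteriors = model.score_samples(obs)
    state = int(np.argmax(posteriors[-1]))
    # Which learned state corresponds to which thought must be identified after
    # training; state 0 -> disperse is an arbitrary placeholder here.
    return "disperse" if state == 0 else "aggregate"
```

Eye movements are decoded separately from the filtered EEG signals and are not modelled in this sketch.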
We simulated the control of 128 robots in a virtual Matlab environment consisting of a rectangular path on the BU campus, using real online EEG data. The results are shown in Fig. 2, where the red line indicates the path of the centroid of the swarm and the blue lines indicate the size of the swarm.

For the experiment we used three m3pi robots with an OptiTrack system for localisation and feedback. We used a point-offset control method to drive the individual robots and a potential-field approach to drive the swarm. Fig. 3 shows the control flow during the experiment, and Fig. 4 shows the path travelled by the individual robots during the experiment. A sketch of the swarm controller is given after the figures. The video at the bottom of the page shows the simulation and the experiment in more detail.

Fig. 2  Path of the centroid of the swarm and its size during the course of the simulation.

Fig. 3  Control flow during the experiment.

Fig. 4  Path travelled by the individual robots and the swarm centroid during the experiment.

Video: https://www.youtube.com/embed/XZVEWn4jQlQ
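To make the swarm-control side concrete, here is a minimal sketch of a potential-field controller of the kind described above, together with a point-offset transformation from a desired planar velocity to differential-drive commands. Each robot is attracted toward a goal point steered by the decoded eye movements and repelled by its neighbours, with the repulsion gain switched by the decoded thought (disperse vs. aggregate). All gains, the offset distance, and the function names are illustrative assumptions rather than values from the original experiments.

```python
# Minimal sketch: potential-field swarm controller plus point-offset conversion
# to differential-drive (m3pi-style) commands. Gains and the offset distance
# are illustrative placeholders, not values from the original system.
import numpy as np

K_GOAL = 0.8        # attraction toward the goal set by eye movements
D_OFFSET = 0.05     # point-offset distance ahead of the wheel axle [m]

def swarm_velocities(positions, goal, disperse):
    """Desired planar velocity for each robot from a simple potential field.

    positions: (N, 2) float array of robot positions; goal: (2,) goal point.
    """
    k_rep = 0.15 if disperse else 0.03       # thought state modulates swarm size
    vels = np.zeros_like(positions, dtype=float)
    for i, p in enumerate(positions):
        v = K_GOAL * (goal - p)              # attractive term toward the goal
        for j, q in enumerate(positions):
            if i == j:
                continue
            diff = p - q
            dist = np.linalg.norm(diff) + 1e-6
            v += k_rep * diff / dist**3      # pairwise repulsion between robots
        vels[i] = v
    return vels

def point_offset_control(v_des, theta):
    """Map a desired velocity of the offset point to (linear, angular) commands."""
    c, s = np.cos(theta), np.sin(theta)
    # Offset-point kinematics: v_des = R(theta) @ [v, d * omega]; invert with R^T.
    v = c * v_des[0] + s * v_des[1]
    omega = (-s * v_des[0] + c * v_des[1]) / D_OFFSET
    return v, omega
```

In such a loop, the goal point would be advanced along the rectangular path by the decoded eye movements, while the disperse flag follows the estimated thought state.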