Gesture control of laboratory instrumentation

Stephen Guy, chief technical officer at the Plextek and Design Momentum Life Science Partnership, explores how to communicate with collaborative robots in a lab environment.

The recent rush to develop effective tests for COVID-19 and the successful roll-out of the vaccination programme have put unprecedented pressure on laboratory staff all over the world. This has highlighted the need for increased laboratory automation, but laboratory environments differ markedly from factory floors: robots often need to work in confined spaces and integrate more closely with humans.

At the Plextek and Design Momentum Life Science Partnership, we have been exploring how laboratory staff will optimise their interaction with automation and instrumentation in the future. In particular, we are investigating how gesture controls may be used with collaborative robotic systems to ensure a safe and confident working environment.

Research suggests that voice and physical gestures are key means of communicating with collaborative robots. But the modern laboratory often has high background acoustic noise from environmental control systems and general benchtop equipment, such as centrifuges and shakers. Laboratories are also busy places, with many staff not only performing experiments but using the dynamic environment to share knowledge and exchange ideas.

The idea of sharing this space with robots that are instructed and controlled solely by voice commands is therefore unattractive, except in very specific use cases.

For this reason, isolated physical gestures that do not require additional voice commands should be considered as a key area of investigation.

We are currently investigating how collaborative robots may be controlled by physical gestures, while also using a specific gesture vocabulary of their own to indicate their status to the user.

As an illustration, collaborative robots may be taught to recognise staff gestures such as ‘Halt’ and ‘Start’, while staff learn to recognise robot gestures such as ‘Sleep’, ‘System Standby’ and ‘Error’.
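To make the two-way vocabulary concrete, the gesture sets above could be modelled as two enumerations with a mapping from each recognised staff gesture to a robot action. This is a minimal sketch; the gesture names come from the article, but the action strings and function names are hypothetical.

```python
from enum import Enum, auto

class StaffGesture(Enum):
    """Gestures the robot is trained to recognise from staff."""
    HALT = auto()
    START = auto()

class RobotGesture(Enum):
    """Gestures the robot performs to signal its own status to staff."""
    SLEEP = auto()
    SYSTEM_STANDBY = auto()
    ERROR = auto()

# Hypothetical mapping from a recognised staff gesture to a controller action.
GESTURE_ACTIONS = {
    StaffGesture.HALT: "stop_all_motion",
    StaffGesture.START: "resume_task",
}

def respond_to(gesture: StaffGesture) -> str:
    """Return the action name the robot controller should execute."""
    return GESTURE_ACTIONS[gesture]
```

Keeping the two vocabularies as separate enumerations makes it explicit which direction each gesture travels in, and new gestures can be added without touching the dispatch logic.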

The recognition of physical gestures also opens up the fascinating possibility of collaborative robots communicating together in groups to streamline processes and improve efficiency.

Recent research on this topic has focussed on the use of data gloves to determine gestures, but these are not practical in laboratories, where staff must remain ‘hands-free’ and often wear specialised protective gloves. For this reason, we have taken a multi-sensor approach to detecting physical gestures, including vision sensors.

A helpful factor when trying to monitor gestures in a laboratory environment is that the lighting is generally bright and consistent, allowing high-quality visual images. Also, the short physical distance between robots and staff means that the physical effort required by staff to generate a gesture is low and the field of view of the sensors can be well focussed.

To enable a multi-sensor array to be retrofitted to existing robotics, our research is investigating how sensors may be configured in the form of a ‘wearable’ sleeve. We believe that this is an effective approach as it is not robot specific, allowing implementation on a wide range of robot types. It is also driving innovation in how we think about the re-design of the sensors themselves.

An important consideration is that the sleeve must be cleanable, using standard ethanol/water mixtures and laboratory disinfectants, and sterilisable using UV or hydrogen peroxide.

To provide feedback to the user that the collaborative robot has responded to a gesture, lab staff could themselves be provided with a wearable device. This approach will potentially be helped by the proliferation of existing consumer digital wearables, such as smart watches.

Data from the multiple sensors can be overlaid to create a map of the scene, from which the region of interest is extracted. The gesture is determined by matching patterns with those stored in a gesture database. The appropriate response for the specific gesture is then determined and instructions sent to the robot controller.
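The stages described here can be sketched as a simple pipeline: fuse per-sensor readings, extract the region of interest, and match against stored patterns. All names and the point/centroid representation are illustrative assumptions; a real matcher would use far richer descriptors than the toy centroid distance used below.

```python
def overlay(sensor_maps):
    """Fuse point readings from multiple sensors into one scene map."""
    scene = []
    for points in sensor_maps:
        scene.extend(points)  # each point is a hypothetical (x, y, intensity)
    return scene

def extract_roi(scene, threshold=0.5):
    """Keep points whose intensity suggests a moving hand (toy criterion)."""
    return [(x, y) for (x, y, v) in scene if v >= threshold]

def describe(roi):
    """Coarse descriptor of the region of interest: its centroid."""
    n = len(roi)
    return (sum(x for x, _ in roi) / n, sum(y for _, y in roi) / n)

def match_gesture(roi, database):
    """Return the stored gesture whose descriptor is nearest (toy matcher)."""
    cx, cy = describe(roi)

    def dist(entry):
        px, py = entry[1]
        return (px - cx) ** 2 + (py - cy) ** 2

    name, _ = min(database.items(), key=dist)
    return name
```

For example, two sensor maps covering the same hand position would fuse into one scene, and the matcher would return whichever stored pattern lies closest to the fused centroid.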

Generalised features characterising the gesture include size and arc, plane, speed and abruptness of the movement. The larger the feature set, the greater the filtering or elimination of irrelevant gestures, but also the longer the processing required. The richness of the information contained in the gesture database is key, and research is presently considering the benefits of a ‘machine learning’ approach. It is interesting to consider whether this should be built upon data collected from a large number of random staff, or only from those staff permitted to use the system.
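The trade-off between feature-set size and filtering power can be illustrated with a simple tolerance-based filter: each additional feature compared can only shrink the set of surviving candidates, at the cost of more comparisons per gesture. The feature names follow the article; the tolerance value and database entries are hypothetical.

```python
def within_tolerance(candidate, template, features, tol=0.2):
    """A candidate matches a template if every compared feature agrees within tol."""
    return all(abs(candidate[f] - template[f]) <= tol for f in features)

def filter_candidates(candidate, database, features):
    """Return gesture names still consistent with the candidate's features."""
    return [name for name, tmpl in database.items()
            if within_tolerance(candidate, tmpl, features)]
```

With only ‘size’ and ‘speed’ compared, two stored gestures may both survive; adding ‘abruptness’ to the comparison can eliminate one of them, illustrating why richer feature sets filter better but cost more processing.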

From the set of gestures, staff can communicate commands to the collaborative robot and the collaborative robot can communicate its own status to staff.

Early adoption of collaborative robotics is well suited to laboratory applications as processes are generally well defined and structured, especially if they are covered by regulatory requirements. There is no great need to communicate subtly, as might be the case for collaborative robotics in applications where there is less formal structure such as personalised care or education. In these applications one could imagine that the range of gestures would need to be greatly expanded.

It would perhaps be interesting to explore the scalability of our approach with additional peripherals such as data gloves, voice commands and augmented reality applications.

As the population grows, we will enter the era of ‘big health’ requiring complex analysis of millions of samples drawn from across the population. This will create a demand for extensive laboratory automation to remove the burden of tedious and repetitive tasks from human operators.

There will be increasing pressure to bring robotics out from behind large and expensive enclosures to utilise laboratory space more efficiently and streamline processes, and this is where collaborative robotics can help.  

Key success factors will be ensuring that staff perceive the system to be both safe and likeable. Perceived safety may have more to do with the robot's response to a gesture, such as the speed, abruptness and displacement of the motion, than with the nature of the gestures themselves.

The challenges of realising collaborative robotics in the laboratory are multiple, but the potential rewards in terms of reduced costs and enhanced efficiency justify further investigation. It is hard to imagine one organisation meeting all the challenges involved, so forming meaningful strategic partnerships with interested parties, including potential end-users and equipment suppliers, is key.

The COVID pandemic has shown that governments and other agencies need to be fully prepared for such eventualities, which will require investment in technologies such as automation to support the life science supply chain.

By bringing together end-users, suppliers and funding bodies to address the technology challenges facing the development of collaborative robotics in laboratories, we can create a meaningful impact in the development of new vaccines and therapeutics.