A rising shortfall of trained diagnostic imagers in the NHS, the growing reliability and abundance of commoditised AI algorithms, and regulatory changes designed to encourage innovation in medtech – France's announcement of a national health data hub, for example – make now the perfect time for computer scientists to apply image recognition and artificial intelligence technologies to the healthcare sector.
The mission of the CW Healthcare SIG is to connect organisations that have specific technology challenges in healthcare with potential solution providers from the wider CW community. That is why, on 23 October, it put on a half-day event bringing together a Radiation Oncology Consultant and a Radiologist from the University of Cambridge with two brilliant start-ups already making waves in deep learning for diagnostics: Kheiron Medical Technologies and Granta Innovation.
A common concern amidst this rise of new technology is that radiologists will shortly be replaced by machines. This is, of course, a fallacy. There already aren't enough radiologists; you can't replace what doesn't exist. As Hugh Harvey of Kheiron pointed out, there are more tigers in the wild than there are trained radiologists in the UK! Furthermore, demand on the NHS is high thanks to a growing and ageing population, and advanced diagnostic imaging is required for multiple treatments. Finally, AI solutions require a trained human expert to check and validate their output – and this is unlikely to change, given the legal implications of a machine making a mistake. In this field, there really are lives at stake.
Segmentation for Radiotherapy Treatment
What AI technology promises to do is improve the cost and time efficiency – and the effectiveness – of delivering treatments to patients. Raj, a Radiation Oncology Consultant for the University of Cambridge and a consultant for Microsoft's InnerEye programme, is working on a new system for segmenting tissue prior to radiotherapy treatment.
The first step of a radiotherapy pathway is segmentation – outlining regions of an image to distinguish cancerous tissue from healthy tissue so that the cancer can be accurately pinpointed in treatment. This is extremely time-consuming for a professional to do – 2.5 hours per patient can be typical – and the accuracy of the segmentation worsens as treatment progresses (over weeks) and the patient's body changes, meaning this lengthy process needs to be repeated. The accuracy of segmentation has a clear impact on patient outcomes, as the TROG 02.02 trial suggested.
[Figure: Time to locoregional failure by deviation status. The four cohorts are (1) compliant from the outset (n = 502), (2) made compliant following a review by the Quality Assurance Review Center (n = 86), (3) non-compliant but without predicted major adverse impact on tumour control (n = 105), and (4) non-compliant with predicted major adverse impact on tumour control (n = 87).]
Using CT images as the input, Raj and the Microsoft InnerEye teams have developed a segmentation tool that delivers accurate results in 5 minutes, compared to the 60–70 minutes that human segmentation normally takes. Those 5 minutes comprise around 1 minute of computerised segmentation and 3–4 minutes of human fix-up, but such a technology has the potential to massively increase the throughput of cases in hospitals. Raj's vision is to integrate image recognition technology into the scanning hardware so that the entire 5-minute process can occur while the patient is still in the room – just in case further scans or tests are needed. A long-term goal is to combine the data from scans with all known patient history and see whether it can be used to predict patient outcomes.
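The InnerEye models themselves are deep neural networks trained on clinical data. As a toy illustration of the segment-then-review loop described above – an automatic first pass followed by a short human fix-up – here is a minimal sketch that stands in a learned model with simple intensity thresholding on a synthetic 2D "scan". All function names, the thresholding approach, and the data are illustrative assumptions, not InnerEye's actual method.

```python
import numpy as np

def auto_segment(scan: np.ndarray, threshold: float) -> np.ndarray:
    """Toy stand-in for a learned segmentation model: label
    pixels above an intensity threshold as 'tumour' (1)."""
    return (scan > threshold).astype(np.uint8)

def human_fixup(mask: np.ndarray, corrections: dict) -> np.ndarray:
    """The clinician review step (the 3-4 minute 'fix-up'):
    flip individual pixels the automatic pass got wrong."""
    fixed = mask.copy()
    for (row, col), label in corrections.items():
        fixed[row, col] = label
    return fixed

# Synthetic 2D "CT slice": low-intensity background with one
# bright region simulating a lesion.
rng = np.random.default_rng(0)
scan = rng.normal(0.2, 0.05, size=(64, 64))
scan[20:30, 20:30] += 0.6  # simulated lesion

mask = auto_segment(scan, threshold=0.5)          # ~1 minute in practice
mask = human_fixup(mask, {(25, 40): 1})           # clinician adds a missed pixel
```

The point of the sketch is the workflow shape, not the model: the machine produces a candidate mask quickly, and the expensive human time is spent only on corrections.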
Artificial Intelligence in Mammography
The UK has high quality standards for its mammographers, with double reading the norm and regular performance reviews to ensure that their work remains accurate. The test for any deep learning technology entering this field is whether it can be more accurate and quicker than two highly trained professionals.
There is potential for computer aided detection to reduce oversight and detect anomalous cells earlier, enabling treatment at an earlier stage of the cancer, and therefore better patient outcomes. However, early trials suggested that diagnostic accuracy decreased when one of the readers was replaced with a computer; clinicians became overly reliant on the machine and did not perform adequate checks (Lehman C et al JAMA 2015).
Tools such as Transpara are changing this. They have been extensively tested and are proving themselves more accurate than radiologists. In the words of Fiona Gilbert, Head of Cambridge University's Department of Radiology, ten minutes is simply too long for a human to spend reading an ultrasound. Using image analysis and deep learning, the Transpara DBT application provides information that significantly improves the reading workflow for DBT on breast reading workstations. By simply clicking on a suspicious region in a synthetic mammogram, the reader can jump automatically to the relevant DBT slice in both the CC and MLO 3D data, hugely decreasing the time spent reading images and increasing throughput in that department.
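The click-to-slice behaviour can be thought of as a lookup: when the synthetic 2D mammogram is built from the 3D tomosynthesis stack, each pixel can record which slice it came from, so a click maps straight to a slice index. The sketch below illustrates that idea on random data; it is a hypothetical mechanism for exposition, not Transpara's actual implementation, and all names in it are made up.

```python
import numpy as np

def slice_for_click(depth_map: np.ndarray, click_row: int, click_col: int) -> int:
    """Map a click on the synthetic 2D image to the DBT slice
    recorded for that pixel when the synthetic image was built."""
    return int(depth_map[click_row, click_col])

# Toy volume: 40 DBT slices of a 64x64 image. The "synthetic
# mammogram" here records, per pixel, the slice of maximum
# intensity (one plausible way to build such a depth map).
rng = np.random.default_rng(1)
volume = rng.random((40, 64, 64))
depth_map = volume.argmax(axis=0)

# A reader clicks on a suspicious region at (10, 12):
slice_idx = slice_for_click(depth_map, 10, 12)
```

The design win is that the expensive search through dozens of slices is precomputed once, so the reader's interaction is a constant-time jump.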
Also out there
There are companies who are using natural language processing software to review the entire corpus of medical literature and distil the information into consumable chunks for clinicians looking to solve a particular problem. How cool is that?
Granta Innovation and Kheiron Medical Technologies are two organisations developing solutions for the healthcare market. Granta Innovation is a start-up (and recent winner of CW's Discovering Start-Ups Competition) consisting of an experienced team of entrepreneurs and technologists applying image recognition techniques to prostate cancer. Kheiron Medical Technologies are focused on breast cancer; their solution, trained to date on over 1 million images, provides clinicians with a proposed decision – based on the scans, should a patient be called back or not. They are the first to receive a CE marking for deep learning in medical imaging and have featured in The Times, Forbes, The Economist and more.
The first key takeaway from these presentations is that, for start-ups in this field, the business case needs to be clear in order to encourage technological innovation. Developing image recognition software requires thousands of man-hours and millions of pounds; few organisations will dedicate such resources out of pure altruism. Furthermore, it would be wise for this country to encourage a healthy marketplace where innovation can thrive without either (a) start-ups being pushed out by the global technology giants or (b) over-competition making the marketplace unattractive.
The second key takeaway, from the second half of the morning, concerned the classification of medical data. In this nascent market, there is an opportunity for organisations to adopt standard practices and terminology. High-quality input data is essential to producing an accurate image recognition tool, but the use of medical records clearly comes with its own challenges. Hugh Harvey of Kheiron is encouraging the market to adopt the MIDaR (Medical Imaging Data Readiness) scale, which classifies medical imaging data into four levels according to how ready it is for use in machine learning. While each start-up may be working with its own unique database of images, building consistency in classification now makes sense, enabling any collaboration, data-sharing and transparency that may be needed in the future. It can take months to convert level D data (live, un-anonymised clinical data inside hospitals) to level A data (de-identified, labelled, clean data ready for training).
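To make the scale concrete, here is a minimal sketch of how a team might represent MIDaR levels in a data pipeline. Only levels A and D were described at the event, so the descriptions for B and C are deliberately left vague; the class and function names are illustrative assumptions, not part of any published MIDaR tooling.

```python
from enum import Enum

class MIDaRLevel(Enum):
    """MIDaR readiness levels, from raw (D) to training-ready (A).
    Level descriptions for A and D paraphrase the talk; B and C
    are intermediate stages not detailed in it."""
    D = "live, un-anonymised clinical data inside hospitals"
    C = "intermediate readiness between D and B (not detailed in the talk)"
    B = "intermediate readiness between C and A (not detailed in the talk)"
    A = "de-identified, labelled, clean data ready for training"

def ready_for_training(level: MIDaRLevel) -> bool:
    """Gate a model-training pipeline: only level A data passes."""
    return level is MIDaRLevel.A
```

A gate like this makes the months-long D-to-A conversion an explicit, checkable precondition rather than an assumption buried in the pipeline.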
Is France's approach of opening up data access really going to generate the best solutions? Or will providing open access to data stifle the marketplace by encouraging too much competition? How will the technology and healthcare sectors work together to build a positive public reception of AI-enhanced treatments? This already seems to be taking place in the field of robotic surgery, but it is an essential next step in moving AI-based image recognition from studies to widespread use. These are some of the questions which were left open after an informative morning that explored a medical technology with a lot of potential, but a lot of growing to do.
For those looking to innovate in this sector, take a look at the Innovate UK website for funding opportunities. Under the Government’s Industrial Challenge Fund there are grants and loans available for organisations working in “AI and Data Economy” and “Healthy Ageing”.
To find out more about upcoming CW events, visit our event pages.