To borrow Professor Dan Ariely's quote about Big Data, AI is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.
However, this may be too harsh a commentary on the state of AI affairs. Notwithstanding the overstatements by some about the capabilities of AI, there is substance behind the hype, as the demonstrated benefits of deep learning, natural language processing and robotics in various aspects of our lives show. In medical informatics, the area on which I can credibly comment, neural networks and data mining have been employed to enhance the ability of human clinicians to diagnose and predict medical conditions, and in some instances, such as medical imaging and histopathological diagnosis, AI applications have met or exceeded the accuracy of human clinicians. In economic terms, AI deserves the attention it is getting. The top 100 AI companies have raised more than US$11.7 billion, and even back in 2015 about US$49 billion in revenue was generated in the North American market alone. Countries like China have invested massively in AI research and start-ups, with some forecasting that the Chinese AI industry will exceed US$150 billion by 2030. AI has also received high-profile political recognition, with the U.A.E. appointing the world's first AI Minister.
However, one has to be aware of the limitations of AI and the amount of research and analysis yet to be undertaken before we can confidently accept ubiquitous AI systems in our lives. I state this as a vocal proponent of the application of AI techniques, especially in medicine (as my earlier articles and book chapter indicate), but also as one who is aware of the 'AI Winter' and the 'IBM Watson/MD Anderson' episodes of the past. Then there is yesterday's incident in Arizona, where a driverless Uber car was involved in a fatality. So in this article I list, from a healthcare perspective, three main limitations to the adoption of AI technologies. This analysis is based on current circumstances and, given the rapid developments occurring in AI research, may not apply in the future.
1) Machine Learning Limitations: The three main limitations I see with machine learning are data feeds, model complexity and computing times.
In machine learning, the iterative aspect of learning is important. Good machine learning models rely on data preparation and the ongoing availability of good data. If you don't have good data to train the machine learning model (in supervised learning) and no new data arrives, the pattern-recognition ability of the model is moot. For example, in radiology, if the images fed into the deep learning algorithms carry underlying biases (such as images from a particular ethnic group or a particular region), the diagnostic abilities and accuracy rates of the model will be limited. Also, reliance on historical data to train algorithms may not be particularly useful for forecasting novel instances of drug side effects or treatment resistance. Further, capturing and cleaning the data these models need to function poses a logistical challenge. Think of the effort required to digitize handwritten patient records.
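The data-bias point can be made concrete with a toy sketch. The tiny Python example below is entirely hypothetical (the groups, feature values and labels are invented for illustration): a nearest-centroid classifier trained only on scans from one population systematically mislabels scans from a second population whose feature distribution is shifted, say because of a different scanner or demographic.

```python
# Toy illustration of dataset bias — not a real clinical model.
# All numbers below are hypothetical.

def centroid(points):
    # component-wise mean of a list of feature vectors
    return [sum(p[i] for p in points) / len(points) for i in range(len(points[0]))]

def classify(x, centroids):
    # pick the label whose centroid is closest (squared Euclidean distance)
    return min(centroids, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Training data drawn from Group A only:
# "healthy" scans cluster near 1.0, "disease" scans near 3.0 on some image statistic.
train = {"healthy": [[0.9], [1.0], [1.1]], "disease": [[2.9], [3.0], [3.1]]}
cents = {lbl: centroid(pts) for lbl, pts in train.items()}

# Group B's healthy scans are systematically shifted upward (different scanner/population):
group_b_healthy = [[2.2], [2.3], [2.4]]
preds = [classify(x, cents) for x in group_b_healthy]
print(preds)  # the biased model labels every healthy Group B scan "disease"
```

The model is "accurate" on the population it was trained on and consistently wrong on the one it never saw, which is exactly the failure mode a biased radiology training set would produce.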
With regard to model complexity, it is pertinent to describe deep learning (a form of machine learning) here. Deep learning is in essence a mathematical model in which software programs learn to classify patterns using neural networks. One of the methods used for this learning is backpropagation, or backprop, which adjusts the mathematical weights between nodes so that an input leads to the right outputs. By scaling up the layers and adding more data, deep learning algorithms are able to solve complex problems. The idea is to match the cognitive processes a human brain employs. In reality, however, pattern recognition alone cannot resolve all problems, and certainly not all medical problems. Consider, for example, the decision, made in consultation with the family, to withdraw mechanical ventilation from a comatose patient with an inoperable intracerebral hemorrhage. Decision-making in this instance is beyond the capability of a deep-learning-based program.
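As a minimal sketch of what backprop does (a toy, nothing like a medical-scale network), the pure-Python example below trains a tiny two-layer network on the logical OR function: each pass propagates the output error backwards and nudges every weight slightly in the direction that reduces it.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the logical OR function (inputs -> target output).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# A 2-2-1 network with randomly initialized weights.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0
lr = 0.5  # learning rate

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j]) for j in range(2)]
    o = sigmoid(sum(w * hj for w, hj in zip(w_o, h)) + b_o)
    return h, o

def total_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

before = total_error()
for _ in range(2000):
    for x, y in data:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)  # error signal at the output node
        for j in range(2):
            # propagate the error backwards through the output weight...
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])
            # ...then nudge each weight against its error gradient
            w_o[j] -= lr * d_o * h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h * x[i]
            b_h[j] -= lr * d_h
        b_o -= lr * d_o
after = total_error()
print(f"squared error: {before:.3f} -> {after:.3f}")
```

Scaling this same weight-adjustment idea to millions of weights and images is what a deep learning radiology model does; the point of the sketch is only that the "learning" is iterative numerical error reduction, not human-style reasoning.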
The third limitation of machine learning is the current capability of computational resources. With current resources such as GPU cycles and RAM configurations, there are limits to how far training errors can be brought down to reasonable upper bounds, which in turn limits the accuracy of model predictions. This has been particularly pertinent for medical prediction and diagnostic applications, where matching the accuracy of human clinicians in some medical fields has been challenging. However, with the emergence of quantum computing and the developments predicted in this area, some of these limitations may be overcome.
2) Ethico-legal Challenges: The ethico-legal challenges can be summarized as 'Explainability', 'Responsibility' and 'Empathy'.
A particular anxiety about artificial intelligence is that decisions made by complex, opaque algorithms cannot be explained even by their designers (the black-box issue). This becomes critical in the medical field, where decisions made directly or indirectly through artificial intelligence applications can impact patient lives. If an intelligent agent responsible for monitoring a critical patient incorrectly interprets a medical event and administers a wrong drug dosage, it will be important for stakeholders to understand what led to the decision. If the underlying complexity of the neural network means the decision-making path cannot be understood, that presents a serious problem. The challenge of explaining opaque algorithms is termed the interpretability problem. It is therefore important that explainable or transparent AI applications be employed for medical purposes. A medical algorithm should be fully auditable when and where required (in real time and after the fact). To ensure the acceptability of AI applications in the healthcare system, researchers and developers need to work on accountable and transparent mathematical structures when devising AI applications.
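What an auditable decision path could look like can be sketched in a few lines of Python. The example below is entirely hypothetical (the baseline rule, the creatinine threshold and the adjustment are invented for illustration, not clinical guidance): the point is that a transparent, rule-based recommendation carries a human-readable trail of every rule applied, which a black-box model does not provide by default.

```python
# Hypothetical, illustrative dosing rules — NOT clinical guidance.
def recommend_dose(weight_kg, creatinine_mg_dl):
    audit = []  # human-readable trail of every rule applied
    dose = 10.0 * weight_kg  # hypothetical baseline: 10 mg per kg
    audit.append(f"baseline dose = 10 mg/kg x {weight_kg} kg = {dose} mg")
    if creatinine_mg_dl > 1.5:
        dose *= 0.5  # hypothetical renal-impairment adjustment
        audit.append(f"creatinine {creatinine_mg_dl} > 1.5 mg/dL: dose halved to {dose} mg")
    return dose, audit

dose, trail = recommend_dose(weight_kg=70, creatinine_mg_dl=2.0)
for line in trail:
    print(line)
print(f"recommended dose: {dose} mg")
```

An auditor (or a clinician at the bedside) can read exactly why the number came out as it did, in real time or after the fact; the research challenge is obtaining comparable trails from systems whose "rules" are millions of learned weights.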
When a robotic radical prostatectomy goes wrong, or when a small-cell pulmonary tumor is missed by an automated radiology service, who becomes responsible for the error? The developer? The hospital? The regulatory authority (which approved the use of the device or program)? As AI applications are incorporated into medical decision-making and interventions, regulatory and legal bodies need to work with AI providers to set up appropriate regulatory and legal frameworks to guide deployment and accountability. A thorough process for evaluating new AI medical applications before they can be used in practice will also need to be established, especially if autonomous operation is the goal. Further, authorities should work with clinical bodies to establish clinical guidelines and protocols to govern the application of AI programs in medical interventions.
One of the important facets of medical care is the patient-clinician interaction. Even with current advances in robotics and intelligent-agent programming, human empathetic capabilities far exceed those of AI applications. AI depends on a statistically sound logical process that aims to minimize or eliminate errors; some may call this cold or cut-and-dried, unlike the variable emotions and risk-taking approach humans employ. In medical care, clinicians need to develop a certain level of connection and trust with the patients they are treating, and it is hard to foresee AI-driven applications or robots replacing humans in this respect in the near future. However, researchers are already working on classifying and coding emotions (see: https://www2.ucsc.edu/dreams/Coding/emotions.html) and robots are being developed with eerily realistic facial expressions (http://www.hansonrobotics.com/robot/sophia/), so maybe the cynicism about AI's usefulness in this area is not all that deserved?
3) Acceptability and Adoptability: While I think the current technological limitations of machine learning and robotics can be addressed in the near future, it will be a harder challenge for AI providers to convince the general public to accept autonomous AI applications, especially those that make decisions impacting their lives. AI is already pervasive to an extent, with voice-driven personal assistants, chatbots, driverless cars, learning home devices, predictive streaming and so on, and we have had little trouble accepting these applications in our lives. However, when AI agents replace critical positions previously held by humans, it can be confronting, especially so in medical care. There is thus a challenge for AI developers and companies to ease the public's anxiety about accepting autonomous AI systems. Here, I think, pushing explainable or transparent applications can make it easier for the public to accept AI agents.
The other challenge, from a medical perspective, is the adoption of AI applications by clinicians and healthcare organizations. I don't think the concern for clinicians is that AI agents will replace them; the issues are rather clinicians' limited understanding of AI techniques (what goes on behind the development), apprehension about the accuracy of these applications, especially in a litigious environment, and skepticism as to whether the technologies can relieve clinicians' stretched schedules. For healthcare organizations, the concerns are whether investment in AI technologies brings cost efficiencies and benefits, whether their workforce will adopt the technology, and how the clients of their services will perceive that adoption. To overcome these challenges, AI developers need to co-design algorithms with clinicians and proactively undertake clinical trials to test the efficacy of their applications. AI companies and healthcare organizations also need an education and marketing strategy to inform the public and patients about the benefits of adopting AI technologies.
Don't rule out AI
I outline the above concerns largely to respond to the misconceptions and overhyping of AI by the media and by those not completely conversant with the mechanics behind AI applications. Overhyping AI affects its acceptability, especially if it leads to the adoption of immature or untested AI technologies. However, there is much to lose if healthcare organizations rule out adopting AI technologies altogether, as they can be of immense help in healthcare delivery.
Health System Academic