As medical care evolves, clinicians and researchers are exploring how technology can improve the quality and effectiveness of care. One prominent application is precision medicine: a newer approach that uses genomic, environmental and personal data to customize and deliver a precise form of medical treatment, hence the name 'precision medicine'. One of the most influential enablers of precision medicine in recent years has been Artificial Intelligence (AI), in particular its subfield Machine Learning (ML). ML, which uses computation to analyze and interpret various forms of medical data to identify patterns and predict outcomes, has shown increasing success across many areas of healthcare delivery. In this article, I discuss how two ML-based approaches, computer vision and natural language processing, can be used to deliver precision medicine. I also discuss the technical and ethical challenges associated with these approaches, and what the future holds if those challenges are addressed.
Clinicians use various medical imaging techniques, such as X-rays, CT, MRI and nuclear imaging, to assist the diagnosis and treatment of conditions ranging from cancers to simple fractures. These techniques have become critical to devising specific treatments in recent years. However, dependency on a limited pool of trained specialists (radiologists) to interpret and confirm the images has in many instances increased diagnosis and treatment times. Classifying and segmenting medical images is not only tedious but also time-consuming. Computer Vision (CV), a form of AI that enables computers to interpret images and describe what they contain, has shown considerable promise and success in recent years. CV is now being applied in medicine to interpret radiological, fundoscopic and histopathological images. The most publicized recent success has been the interpretation of retinal images to diagnose diabetic and hypertensive retinopathy. CV, powered by neural networks (an advanced form of ML), is expected to take over the tedious task of segmenting and classifying medical images and to enable preliminary or differential diagnoses. This approach is said not only to accelerate diagnosis and treatment but also to free radiologists to focus on complex image interpretations.
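To make the idea of a machine "seeing" an image concrete, here is a minimal, purely illustrative sketch of the convolution operation at the heart of the neural networks used for medical image analysis. The tiny "image" and the edge filter below are invented for demonstration and have no clinical meaning; real systems learn thousands of such filters from labelled scans.

```python
# Illustrative sketch (not a clinical tool): the core operation behind
# convolutional neural networks is the 2D convolution, which slides a
# small filter over an image to detect local features such as edges.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A synthetic 5x5 "image" with a vertical bright edge in the middle.
image = [
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]

# A simple vertical-edge filter: responds where intensity rises
# from left to right.
vertical_edge = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

response = convolve2d(image, vertical_edge)
print(response[0])  # the filter fires at the columns spanning the edge
```

A trained network stacks many such filters, learned rather than hand-designed, and feeds their responses into further layers that classify or segment the image.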
Natural Language Processing
As with CV, Natural Language Processing (NLP) has had a great impact on society in the form of voice assistants, spam filters and chatbots. NLP applications are also used in healthcare as virtual health assistants, and in recent years they have shown potential in analyzing clinical notes and spoken instructions from clinicians. This ability can lessen the burden on busy clinicians who are encumbered by the need to document all their patient care in electronic health records (EHRs). By reducing the time spent writing copious notes, NLP applications can enable clinicians to spend more of their time with patients. Recently, NLP techniques have also been used to analyze unstructured data (free-form and handwritten notes), which makes them useful where data is not available in structured digital form. By integrating NLP applications into EHRs, healthcare workflows and delivery can be accelerated.
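As a toy sketch of what "analyzing unstructured clinical notes" can mean at its simplest, here is a rule-based extraction of conditions and allergies from a free-text note. The note text, vocabulary and negation check are all invented for illustration; real clinical NLP pipelines use medical ontologies, trained models and much more robust negation detection.

```python
# Toy, rule-based extraction of structured facts from a free-text
# clinical note. The note and the condition list are invented.
import re

NOTE = ("Pt presents with chest pain. History of type 2 diabetes "
        "and hypertension. Allergic to penicillin. Denies fever.")

CONDITIONS = ["type 2 diabetes", "hypertension", "chest pain", "fever"]

def extract_conditions(note):
    """Return conditions mentioned in the note, skipping negated ones."""
    found = []
    lowered = note.lower()
    for cond in CONDITIONS:
        idx = lowered.find(cond)
        if idx == -1:
            continue
        # Crude negation check: look for 'denies'/'no' just before the term.
        window = lowered[max(0, idx - 20):idx]
        if re.search(r"\b(denies|no)\b", window):
            continue
        found.append(cond)
    return found

def extract_allergies(note):
    """Pull out 'allergic to X' phrases."""
    return re.findall(r"allergic to (\w+)", note.lower())

print(extract_conditions(NOTE))
print(extract_allergies(NOTE))
```

Even this crude sketch shows why negation handling matters: without the `denies` check, the patient would be flagged as febrile.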
Combination of Approaches
Precision medicine is premised on customizing medical care to the individual profile of each patient. By combining NLP and CV techniques, the ability to deliver precision medicine is greatly increased. For example, NLP techniques can scan past medical notes to identify previously diagnosed conditions and treatments and present a summary to doctors even as the patient presents to the clinic or emergency department. Once the patient is there, NLP voice-recognition applications can analyze the conversation between patient and clinicians and document it as patient notes for the doctor to review and confirm, freeing up the doctor's time and ensuring the accuracy of the notes. As the doctor identifies the condition affecting the patient and seeks confirmation through relevant medical imaging, automated or semi-automated CV techniques can accelerate that confirmation. The result is a cohesive process that shortens the time in which the patient receives necessary medical treatment.
Let us see how this works in a fictitious example. Mr Carlyle, an avid cyclist, meets with an accident on his way to work when an automobile swerves into the bike lane and flings him from his bicycle. The driver calls an ambulance when he notices Mr Carlyle seated and grimacing with pain. On arrival, the ambulance crew enters his unique patient identifier number, accessed from his smartwatch, and rushes him to the nearest emergency department. The AI agent embedded in the hospital's patient information system identifies Mr Carlyle through his patient identifier number and retrieves his medical details, including his drug allergies. This information is available for the emergency department clinicians to review even as Mr Carlyle arrives. After he is placed in an emergency department bay, the treating doctor uses an NLP application to record, analyze and document the conversation between her and Mr Carlyle, allowing her to focus most of her time on him. The doctor suspects a fracture of the clavicle and has Mr Carlyle undergo an X-ray. The CV application embedded in the imaging information system detects a mid-shaft clavicular fracture and relays the diagnosis back to the doctor. Prompted by an AI clinical decision support application embedded in the patient information system, the doctor recommends immobilization and a sling for Mr Carlyle, along with painkillers. The painkillers exclude NSAIDs, as the AI agent has identified that he is allergic to aspirin.
The above scenario, while presenting a clear example of how AI, specifically CV and NLP applications, can be harnessed to deliver prompt and personalized medical care, is contingent on the technologies being able to deliver such outcomes. Currently, CV techniques have not earned the confidence of regulatory authorities or clinicians to allow automated medical imaging diagnosis (except in limited instances such as diabetic retinopathy interpretation), nor are NLP applications embedded in EHRs to allow automatic recording and analysis of patient conversations. While some applications have been released to the market to analyze unstructured data, external validation and wide acceptance of these types of applications are some years away. Coupled with these technical and regulatory challenges are the ethical challenges of granting autonomy to non-human agents to guide and deliver clinical care. Further issues may arise from using patient identifiers to extract historical details, even for medical treatment, if the patient has not consented to such use. Yet these challenges can be overcome as AI technology improves and governance structures to protect patient privacy, confidentiality and safety are established. As attention to the ethics of AI in healthcare increases and the technological limitations of AI applications are resolved, the fictitious scenario may become a reality not too far into the future.
There is a natural alignment between AI and precision medicine, as the power of AI methods such as NLP and CV can be leveraged to analyze biometric data and deliver personalized medical treatment. With appropriate safeguards, the use of AI in delivering precision medicine can only benefit both the patient and clinician communities. Given how rapidly AI technology is evolving, one can predict that the coming years will see wider adoption of precision care models in medicine, and with them AI techniques.
With recent developments regarding AI in healthcare, one could be forgiven for thinking the entry of AI into healthcare is inevitable. These developments include two major studies: one in which machine learning classifiers used for hypothetico-deductive reasoning were found to be as accurate as paediatricians, and another in which a deep-learning-based automated algorithm outperformed thoracic radiologists in accuracy and was externally validated at multiple sites. The first study is significant in that machine learning classifiers are now proven useful not only for medical imaging interpretation but also for extracting clinically relevant information from electronic patient records. The second study is significant in that the algorithm could detect multiple abnormalities in chest X-rays (useful in real-world settings) and was validated multiple times with external data-sets. Alongside these developments, the FDA is gearing up for the use of AI software in clinical practice by developing a draft framework anticipating modifications to AI medical software, and medical professional bodies across the world are welcoming the entry of AI into medicine, albeit cautiously and by issuing guidelines. Compared to even a year ago, AI has definitely had a resounding impact on healthcare; even the venerable IEEE is keeping track of where AI exceeds the performance of clinicians. However, I think we have yet to see the proper entry of AI into healthcare. Let me explain why, and what needs to be done to enable it.
While strong evidence is emerging about the usefulness of machine learning, especially neural networks, in interpreting multiple medical modalities, the generalization of such successes is relatively uncommon. While there has been progress in minimizing generalization error (through avoidance of over-fitting) and in understanding how generalization and optimization of neural networks work, it remains the case that prediction of class labels outside the training data is not certain. In medicine, this means deep learning algorithms that have shown success in certain contexts are not guaranteed to deliver the same success, even with similar data, in a different context. There is also the probabilistic approach of current machine learning algorithms, which does not necessarily align with the causal, deterministic model of diagnostic medicine; I have covered this issue previously. Even if we accept that machine learning/deep learning models with their current limitations are useful in healthcare, there is limited readiness among hospitals and health services to deploy these models in clinical practice. The lack of readiness spans infrastructure, policies/guidelines and education. Also, governments and regulatory bodies in many countries don't have specific policies and regulatory frameworks to guide the application of AI in healthcare. So, what has to be done?
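The point about generalization error can be made concrete with a deliberately extreme sketch: a "model" that merely memorizes its training examples achieves a perfect in-sample score yet falls back to guessing on unseen inputs. All data and labels below are invented.

```python
# Minimal illustration of over-fitting / failure to generalize: a
# lookup-table "model" memorizes its training set, so it is perfect
# in-sample but reduces to a majority-label guess out-of-sample.

def train_memorizer(examples):
    """Return a classifier that looks up exact inputs it has seen."""
    table = dict(examples)
    # Fall back to the majority training label for unseen inputs.
    labels = [y for _, y in examples]
    default = max(set(labels), key=labels.count)
    return lambda x: table.get(x, default)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# Synthetic feature vectors and labels.
train = [((1, 0), "disease"), ((0, 1), "healthy"),
         ((0, 2), "healthy"), ((0, 0), "healthy")]
test = [((2, 0), "disease"), ((3, 0), "disease"),
        ((0, 3), "healthy"), ((4, 0), "disease")]

model = train_memorizer(train)
print(accuracy(model, train))  # perfect fit on the data it has seen
print(accuracy(model, test))   # poor on unseen cases: no generalization
```

Real neural networks do far better than a lookup table, but the same gap between training performance and performance on a shifted, unseen population is exactly what external validation is meant to expose.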
As illustrated below, the following steps have to be adopted for us to see AI bloom in the healthcare context.
The first step is the development and use of appropriate AI technology in medicine. This means ensuring the validity and relevance of the algorithms being used to address healthcare issues. For example, if a convolutional neural network model has shown success in screening for pulmonary tuberculosis through chest X-ray interpretation, it is not necessarily equipped to identify other chest X-ray abnormalities, say atelectasis or pneumothorax. A model should be used for the exact purpose for which it was trained. Also, a model trained with a labelled X-ray data-set from a particular region has to be validated with a data-set from another region and context. Another technology issue is the type of machine learning model being used. While deep learning is in vogue, it is not necessarily appropriate in all medical contexts. Because of the limitations deep learning poses for explainability, other machine learning models, such as Support Vector Machines, which lend themselves to interpretability, should be considered.
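The need for external validation can be sketched in a few lines. In this invented example, a screening threshold "tuned" on one site's score distribution is perfect internally but degrades on a second site whose scanner shifts every measurement; the numbers are synthetic and chosen only to illustrate distribution shift.

```python
# Toy illustration of external validation: a threshold tuned on one
# site's (synthetic) data degrades at a second site whose measurement
# distribution is shifted. All numbers are invented.

def classify(score, threshold):
    return "abnormal" if score >= threshold else "normal"

def accuracy(data, threshold):
    return sum(classify(s, threshold) == y for s, y in data) / len(data)

# Site A: normal cases score near 2, abnormal cases near 8.
site_a = [(1, "normal"), (2, "normal"), (3, "normal"),
          (7, "abnormal"), (8, "abnormal"), (9, "abnormal")]
# Site B: a different scanner shifts every score upward by 4.
site_b = [(s + 4, y) for s, y in site_a]

threshold = 5  # tuned on Site A only
print(accuracy(site_a, threshold))  # looks perfect internally
print(accuracy(site_b, threshold))  # collapses externally
```

The same failure mode, in a far subtler form, is why a chest X-ray model validated in one region cannot simply be assumed to work in another.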
The second step in facilitating the entry and establishment of AI in healthcare is infrastructure. What do I mean by infrastructure? At this stage, even in developed countries, hospitals do not necessarily have the digital platforms and data warehouses needed for machine learning models to operate successfully. Many hospitals are still grappling with the roll-out of electronic health records, a platform that will be essential for machine learning algorithms to mine and query patient data. Also, training many machine learning models requires structured data (though some models can work on unstructured data). This data structuring process includes data labelling and creating data warehouses, and not all hospitals, facing budget crunches, have this capability. Further, the clinical and administrative workforce and the patient community must be educated about AI if AI applications are to be used in clinical practice and healthcare delivery. How many healthcare organizations have this infrastructure readiness? I doubt many. So infrastructural issues most certainly need to be addressed before one can contemplate the use of AI in the healthcare context.
The next step, policy, is also critical. Policy covers both governmental and institutional strategies to guide the deployment of AI for healthcare delivery, and regulatory frameworks to facilitate the entry of, and regulate, AI medical software in the market. There is definite progress here, with many governments, national regulatory bodies, medical professional bodies and think tanks issuing guidance on the matter. Yet there are gaps: many of these guidance documents are theoretical or cursory in nature, or not linked to existing infrastructure. Worse yet are the countries where such policies and guidance don't exist at all. A further issue is limited funding mechanisms to support AI research and commercialization, which has significantly hampered innovation and indigenous development of AI medical applications.
The final step that needs to be considered is governance. This covers not only regulatory frameworks at the national level (necessary to scrutinize and validate AI applications) but also monitoring and evaluation frameworks at the institutional level. It also covers the requirements to mitigate the risks involved in applying AI to clinical care and to create patient-centric AI models. These latter two requirements are vital for clinical governance and continuous quality improvement. Many institutions have issued ethical guidelines for the application of AI in healthcare, but I have yet to see clinical governance models for the use of AI in clinical care. It is critical that such models are developed.
Appropriately addressing the steps I list above (Technology, Infrastructure, Policy and Governance) will most certainly facilitate the entry and establishment of AI in healthcare. And with the accelerating developments in AI technology and the increasing interest in AI among policy makers, clinical bodies and healthcare institutions, perhaps we are not that far from seeing this occur.
In recent years, there has been a great deal of coverage about the dearth of PhD-qualified AI data scientists and the salaries qualified candidates can command (see, for example, the NYTimes article on the topic). Universities complain that their PhD-qualified AI scientists are being poached by industry, demonstrating the demand (Guardian article), and many universities are opening numerous funded AI PhD positions (Leeds University, for example). Isn't it then obvious that a PhD in AI should be on every data scientist's to-do list? Well, as one who briefly contemplated doing a second PhD (focusing on swarm intelligence and multi-agent systems in healthcare), and who spent some time researching the necessity of completing a PhD to be across AI, I concluded it would be detrimental to undertake a PhD focused on a specific AI algorithmic approach. Let me explain why.
On 11th February, the US administration formalized, through an executive order, the proposals made during last year's White House summit on AI, but there was no mention of the amount to be set aside for investment in AI (beyond statements about prioritizing such investment). Compare this to official commitments by other countries/regions:
With the tendency nowadays for many 'software as a service' providers to label their software as 'AI', or to add a component labelled AI, it is hard to distinguish what is genuinely AI-based software and what is not. Some vendors state that only machine learning (including deep learning) based software is real AI. However, AI is much more than machine learning and involves other forms such as reinforcement learning, swarm intelligence, genetic algorithms and even some forms of robotics. To make it simple for clients, purchasers and organizations, I have developed this simple chart to identify AI and machine learning based software. The chart does not cover all forms of AI software, as most AI software currently offered in the market is machine learning based (I am creating a more comprehensive chart encompassing all the schools and tools of AI, including symbolist, connectionist, analogist, Bayesian and evolutionary approaches). I also wanted to keep the chart simple so it is easy for buyers to use (so, you data scientists and mathematicians, feel free to comment, but be forgiving). Anyway, here you are:
Moving from probabilistic inference to causal inference and hybrid approaches in machine learning: why is this pertinent to the application of AI in medicine?
Recently a Twitter war (OK, maybe a vigorous debate) erupted between proponents of deep learning models and long-standing critics of deep learning approaches in AI (I won't name the lead debaters here, but a simple Google search will identify them). After following the conversation, and from my own study of the mathematics of probabilistic deep learning approaches, I have been thinking about alternatives to current Bayesian/back-propagation based deep learning models.
The versatility and deep impact (pun intended) of neural network models in pattern recognition, including in medicine, is well documented. Yet the statistical underpinning of deep learning is largely based on probabilistic inference and Bayesian approaches. The neural networks of deep learning learn classes of relevant data and then recognize and classify the same or similar data the next time around, which is roughly what we term 'generalisation' in research lingo. Deep learning approaches have received acclaim because they better capture invariant properties of the data and thus achieve higher accuracies than their shallow cousins (read: classical supervised machine learning algorithms). As a high-level summary, deep learning algorithms adopt a non-linear, distributed representation approach to yield parametric models that perform sequential operations on the data fed into them. This ability has had profound implications for pattern recognition, leading to applications such as voice assistants, driverless cars, facial recognition and image interpretation, not to mention the incredible AlphaGo, which has since evolved into AlphaGo Zero. The potential of deep learning in medicine, where big data abounds, is vast. Recent medical applications include EHR data mining and analysis, medical imaging interpretation (most popularly, diabetic retinopathy diagnosis), medical voice assistants, and non-knowledge-based clinical decision support systems. Better yet, the realisation of this potential is just beginning, as newer deep learning algorithms are developed in Big Tech and academic labs. Then what is the issue?
Aside from the often-cited 'interpretability/black-box' issue associated with neural networks (I have previously written that this may not be a big issue, with solutions like attention mechanisms, knowledge injection and knowledge distillation now available) and their limitations in dealing with hierarchical structures and global generalisation, there is the elephant in the room: no inherent representation of causality. In medical diagnosis, probability is important, but causality is more so. In fact, the whole science of medical treatment is based on causality. You don't want to use doxycycline instead of doxorubicin to treat Hodgkin's lymphoma, or vice versa for Lyme disease. How do you know whether the underlying condition is Hodgkin's lymphoma or Lyme disease? Different diseases and different presentations, of course. But the point is that the diagnosis is based on a combination of clinical examination, physical findings, sero-pathological tests and medical imaging, all premised on a causal sequence: Borrelia leads to Lyme disease; EB virus and family history lead to Hodgkin's. This is why we adopt randomised controlled trials, even with their inherent faults, as the gold standard for incorporating evidence-based treatment approaches. If understanding causal mechanisms is the basis of clinical medicine and practice, then there is only so much current deep learning approaches can do for medicine. This is why I believe there needs to be a serious conversation among AI academics and the developer community about adopting causal discovery algorithms, or better yet hybrid approaches (a combination of probabilistic and causal discovery approaches).
We know 'correlation is not causation', yet under appropriate conditions we can make inferences from correlation to causation. We now have algorithms that search for causal structure in data, represented by causal graphical models as illustrated below:
Figure I. Acyclic Graph Models (Source: Malinsky and Danks, 2017)
While causal search models do not provide a comprehensive causal account, they provide enough information to infer a diagnosis and treatment approach (in medicine). Numerous successes with this approach have been documented, and different causal search algorithms have been developed over the past several decades. Professor Judea Pearl, one of the godfathers of causal inference and an early proponent of AI, has outlined structural causal models, a coherent mathematical basis for the analysis of causes and counterfactuals. You can read an introductory version here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2836213/
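A core primitive behind constraint-based causal search algorithms (such as the PC algorithm discussed in the causal discovery literature) is the conditional independence test. The sketch below, on synthetic data, shows the idea via partial correlation: when Z is a common cause of X and Y, the strong X–Y correlation vanishes once Z is conditioned on. This is an illustrative toy, not a full discovery algorithm.

```python
# Sketch of a conditional independence test via partial correlation,
# on synthetic data with a common-cause structure: Z -> X and Z -> Y.
import random

random.seed(0)

def mean(v):
    return sum(v) / len(v)

def corr(x, y):
    """Pearson correlation."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def residuals(y, z):
    """Residuals of least-squares regression of y on z (one predictor)."""
    beta = corr(y, z) * ((sum((a - mean(y)) ** 2 for a in y) /
                          sum((b - mean(z)) ** 2 for b in z)) ** 0.5)
    alpha = mean(y) - beta * mean(z)
    return [a - (alpha + beta * b) for a, b in zip(y, z)]

# Z causes both X and Y, with independent noise.
z = [random.gauss(0, 1) for _ in range(2000)]
x = [a + random.gauss(0, 0.5) for a in z]
y = [a + random.gauss(0, 0.5) for a in z]

marginal = corr(x, y)                               # strong correlation
partial = corr(residuals(x, z), residuals(y, z))    # near zero given Z

print(marginal > 0.7)      # X and Y look strongly related...
print(abs(partial) < 0.1)  # ...but are independent once Z is known
```

Tests of this kind, applied systematically over sets of variables, are what let causal search algorithms distinguish a common cause from a direct causal link.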
I think it is probably time now to pause and consider, especially in the context of medicine, whether a causal approach is better in some contexts than a gradient-informed deep learning approach, and better still whether we can devise a syncretic approach. Believe me, I am an avid proponent of the application of deep learning in medicine and healthcare. There are contexts where back-propagation is better than hypothetico-deductive inference, especially when terabytes or petabytes of unstructured data accumulate in the healthcare system. There are also limitations to causal approaches, and probabilistic approaches are sometimes not as bad as critics make out. You can read a defence of probabilistic approaches and how to overcome their limitations here: http://www.unofficialgoogledatascience.com/2017/01/causality-in-machine-learning.html
However, close-mindedness to alternatives to probabilistic deep learning models, and over-reliance on that single approach, is a sure bet for failing to realise the full potential of AI in medicine, and worse, for failing to achieve its full adoption. To address this, some critics of prevalent deep learning approaches advocate a hybrid approach, in which a combination of deep learning and causal methods is utilised. I strongly believe there is merit in this recommendation, especially in the context of applying AI in medicine and healthcare.
A development in this direction is the Causal Generative Neural Network (CGNN). This framework is trained using back-propagation but, unlike previous approaches, capitalises on both conditional independences and distributional asymmetries to identify bivariate and multivariate causal structures. The model not only estimates a causal structure but also a generative model of the data. I will certainly be following developments with this framework, and with hybrid approaches more generally, especially in the context of their application in medicine, and I think all those interested in the application of AI in medicine should be having a conversation about this.
Malinsky, D. & Danks, D. (2017). Causal discovery algorithms: A practical guide. Philosophy Compass, 13: e12470. John Wiley & Sons Ltd.
Earlier this year, American researchers Xiao, Choi and Sun published a systematic review in the prestigious journal JAMIA on the application of deep learning methods to extract and analyse EHR data. The five categories of analytic task in which they found deep learning being used were: 1) disease detection/classification; 2) sequential prediction of clinical events; 3) concept embedding, where feature representations are derived algorithmically from the EHR; 4) data augmentation, whereby realistic data elements or patient records are created; and 5) EHR data privacy, where deep learning is used to protect patient privacy. More importantly, they identified challenges in these applications and corresponding solutions, which I have summarised in this table:
Around the late 1970s, the first 'AI winter' emerged because of technical limitations (the failure of machine translation and connectionism to achieve their objectives). Then, in the late 1980s, the second AI winter occurred for financial and economic reasons (DARPA cutbacks, restrictions on new spending in academic environments, and the collapse of the LISP machine market). With the current re-emergence of AI and the increasing attention being paid to it by governments and corporates, some speculate that another AI winter may occur. Here is a lay commentary on why this may happen: https://www.ft.com/content/47111fce-d0a4-11e8-9a3c-5d5eac8f1ab4 and here is a more technical one: https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/
What these commentators don't take into account is that the conditions that prevailed in the late 70s and 80s are not the conditions prevalent now. The current demand for AI has emerged from actual needs (the need to compute massive amounts of data). With the rapid rise in computational power, the increase in data acquisition from all quarters of life, increased investment by governments and corporates, dedicated research units in both academia and industry, and integration with other innovative IT areas like IoT, robotics, AR, VR and brain-machine interfaces, there has never been a better time for AI to flourish. What sceptics seem to mistakenly assume is that developers are aiming for Artificial General Intelligence. Maybe that day will come, but nearly all developers are focused on narrow AI applications. The rediscovery of the usefulness of early machine learning algorithms and the introduction of newer forms such as convolutional neural networks, GANs and deep Q-networks, coupled with advances in the understanding of symbolism, neurobiology, causation and correlational theories, have advanced the progress of AI applications. If commentators stop expecting AI systems to replace all human activity and understand that AI is best suited to augment, enhance and support it, there would be less pessimism about the prospects of AI. Of course, there will be failures as AI is applied across industries, followed by the inevitable 'I told you so' from armchair commentators and sardonic academics, but this will not stop the progress of AI and related systems. If, on the other hand, AI researchers and developers aim for realistic objectives and invite scrutiny of their completed applications, the waffle from the cynics can be put to rest.
So where does this leave health services and the application of AI in healthcare? Traditionally, the clinical workforce has been slow to adopt technological innovations. This is expected, as errors and risks in clinical care, unlike in other industries, are less tolerated and often unacceptable. However, the greatest promise for AI is in its application to healthcare. Health systems across the world have long teetered on financial bankruptcy, with governments or other entities bailing them out (I can't think of a health system that runs at a substantial profit). A main component of the cost of running health services is recurrent cost, including recurrent administrative and workforce costs. Governments and policy wonks have never been able to come up with a solution to address this, except to reiterate the tired mantra of early prevention, advocate deficit plugging, or suggest new workforce models that are hard to implement. Coupled with this is the humongous growth in medical information, which no ordinary clinician can retain, pass on to their patients or incorporate into treatment; the rapid introduction of newer drugs and treatments; and the unfortunate increase in medical errors (in the US, approximately 251,454 deaths per annum are attributed to medical errors).
AI technology provides an appropriate solution to these issues, considering the potential of AI-enabled clinical decision support systems, digital scribes, medical chatbots, electronic health records, medical image analysers, surgical robots and surveillance systems, all of which can be developed and delivered at economical prices. Of course, a fully automated health service is many years away, and AI regulatory and assessment frameworks are yet to be properly instituted, so we won't realise the full potential of AI in healthcare in the immediate future. However, health services and clinicians who think AI is another fad like Betamax, Palm Pilots, urine therapy, bloodletting or lobotomies are very mistaken. The number of successful applications of AI in healthcare delivery is increasing too rapidly for it to be a flash in the pan. Many governments, including those of China, the UAE, the UK and France, have prioritised the application of AI in healthcare delivery and continue to invest in its growth. While AI technology will never replace human clinicians, it will most certainly displace clinicians and health service providers who do not learn about it or engage with it. It is therefore imperative for health service providers, medical professional bodies, medical schools and health departments to actively incorporate AI and related technology (machine learning, robotics and expert systems) into their policies and strategies. If not, it will be a scenario of too little, too late, depriving patients of the immense benefits of the personalised and cost-efficient care that AI-enabled health systems can deliver.
The most common AI systems currently emerging are machine learning, natural language processing, expert systems, computer vision and robots. Machine learning refers to computer programs designed to learn from data and improve with experience. Unlike conventional programs, machine learning algorithms are not explicitly coded and can interpret situations, answer questions and predict the outcomes of actions based on previous cases. These processes typically run independently in the background and incorporate existing data, but also "learn" from the processing they are doing. There are numerous machine learning algorithms, but they can be divided broadly into two categories: supervised and unsupervised. In supervised learning, algorithms are trained with labelled data: for every example in the training data there is an input object and an output object, and the learning algorithm discovers the predictive rule. In unsupervised learning, the algorithm is required to find patterns in the training data without being provided with such labels. A leading form of machine learning, termed deep learning, is considered particularly promising.
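The supervised/unsupervised distinction can be made concrete with a minimal sketch in plain Python (no ML library): a nearest-centroid classifier learns from labelled examples, while a simple k-means routine finds clusters in the same points without ever seeing the labels. The data points and class names are invented for illustration.

```python
# Supervised vs unsupervised learning on tiny synthetic data.

def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_centroid_fit(points, labels):
    """Supervised: labelled examples yield one centroid per class."""
    centroids = {}
    for lab in set(labels):
        members = [p for p, l in zip(points, labels) if l == lab]
        centroids[lab] = tuple(sum(c) / len(members)
                               for c in zip(*members))
    return centroids

def nearest_centroid_predict(centroids, p):
    return min(centroids, key=lambda lab: dist(p, centroids[lab]))

def kmeans(points, k, iters=10):
    """Unsupervised: find k clusters with no labels provided."""
    centroids = points[:k]
    for _ in range(iters):
        groups = {i: [] for i in range(k)}
        for p in points:
            groups[min(range(k),
                       key=lambda i: dist(p, centroids[i]))].append(p)
        centroids = [tuple(sum(c) / len(g) for c in zip(*g))
                     for g in (groups[i] for i in range(k))]
    return centroids

points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
labels = ["benign", "benign", "benign",
          "malignant", "malignant", "malignant"]

model = nearest_centroid_fit(points, labels)
print(nearest_centroid_predict(model, (1, 1)))  # lands in the benign cluster
print(sorted(kmeans(points, 2)))                # two clusters recovered, unnamed
```

Note the asymmetry: the supervised model can name its answer ("benign"), while the unsupervised one can only report that two groups exist; attaching meaning to them is left to a human.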
Natural Language Processing (NLP) is a set of technologies for human-like processing of natural language, spoken or written, and includes both the interpretation and production of text, speech and dialogue. NLP techniques include symbolic, statistical and connectionist approaches and have been applied to machine translation, speech recognition, cross-language information retrieval, human-computer interaction and so forth. We see some of these technologies, and certainly their effects, in everyday IT products, such as email scanning for advertisements or auto-completed text in searches.
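As a glimpse of the statistical approach, here is a tiny bag-of-words spam score of the kind behind early email filters. The training messages are invented, and real filters use far larger corpora and proper probabilistic models (for example, naive Bayes); this sketch only illustrates scoring words by how much more often they occur in spam than in legitimate mail.

```python
# Toy statistical-NLP sketch: per-word log-odds-like spam scoring
# with add-one smoothing. All messages are invented.
import math
from collections import Counter

spam = ["win a free prize now", "free money win big"]
ham = ["meeting agenda for monday", "lab results for review"]

def word_counts(messages):
    c = Counter()
    for m in messages:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)

def spam_score(message):
    """Positive score suggests spam, negative suggests legitimate mail."""
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + 2)
        p_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + 2)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("win free money") > 0)     # spam-like words score high
print(spam_score("monday lab meeting") < 0) # work words score low
```

The add-one smoothing matters: without it, any word never seen in one corpus would produce a division by zero or an infinite score.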
”Expert systems” is another established field of AI, in which the aim is to design systems that carry out significant tasks at the level of a human expert. Expert systems do not demonstrate general-purpose intelligence, but in narrow domains they have demonstrated reasoning and decision-making equal to, and sometimes better than, that of humans, while conducting these tasks similarly to how a human would. To achieve this, an expert system is provided with a computer representation of knowledge about a particular topic, which it applies to give advice to human users. This concept was pioneered in medicine in the 1970s by MYCIN, a system used to diagnose infections, and INTERNIST, an early diagnostic package. Recent knowledge-based expert systems combine more versatile knowledge representations with more rigorous engineering methods. These applications typically take a long time to develop and tend to have a narrow domain of expertise, although their scope is expanding. Outside healthcare, such systems are used for many other functions, such as trading stocks.
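The "computer representation of knowledge" at the core of an expert system can be sketched as a set of if-then rules in the MYCIN spirit. The rules and findings below are invented purely for illustration and are in no way clinical guidance.

```python
# Toy rule-based expert system: if all of a rule's conditions are
# among the reported findings, its advice fires. Rules are invented.

RULES = [
    ({"fever", "productive cough", "focal crackles"},
     "consider pneumonia; recommend chest x-ray"),
    ({"fever", "dysuria"},
     "consider urinary tract infection; recommend urinalysis"),
    ({"erythema migrans", "tick exposure"},
     "consider Lyme disease; recommend serology"),
]

def advise(findings):
    """Fire every rule whose conditions are all present."""
    findings = set(findings)
    return [advice for conditions, advice in RULES
            if conditions <= findings]

print(advise(["fever", "dysuria"]))
print(advise(["tick exposure", "erythema migrans"]))
```

Real systems of this kind add certainty factors, rule chaining and explanation facilities, but the basic pattern of matching encoded conditions against reported findings is the same.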
Another area is Computer Vision, in which systems capture images (still or moving) from a camera and transform them or extract meaning from them to support understanding and interpretation. Replicating the power of human vision in a computer program is no easy task, but computer vision attempts to do so through a combination of mathematical methods, massive computing power to process real-world images, and physical sensors. While great advances have been made in applications such as face recognition, scene analysis, medical imaging and industrial inspection, the ability to replicate the versatility of human visual processing remains elusive.
The final technique we cover here is robotics. Robots have been defined as “physical agents that perform tasks by manipulating the physical world”, for which they need a combination of sensors (to perceive the environment) and effectors (to achieve physical effects in the environment). Many organisations have had increasing success with limited robots, which can be fixed or mobile. Mobile “autonomous” robots, which use machine learning to extract sensor and motion models from data and can make decisions without relying on an operator, are most relevant to this commentary; “self-driving” cars are well-known examples.