Founded in March 2018 with headquarters in Cotonou, Benin, Atlantic Artificial Intelligence Laboratories (Atlantic AI Labs™) is fostering research, education, and implementation of AI and related technologies in Africa (see press release). Unraveling the secrets of human intelligence is one of the grand challenges of science in the 21st century. It will require an interdisciplinary approach and may hold the key to building intelligent machines that can interact safely and efficiently with humans to help solve a wide range of complex problems.
We are interested in unleashing the potential of AI in Africa through self-reliance. At present, Atlantic AI Labs™ comprises the AI for Agriculture (A4A), the AI for Biomedicine (A4B), and the AI for Aerospace Control (A4C) Labs. This is part of our "Africa 2060" initiative of transforming Africa from an exporter of raw materials to a unified, inclusive, and sustainable center of high-tech research, development, and production by the year 2060. We see AI as the catalyst for leapfrogging into this 4th Industrial Revolution.
We are Africans solving African problems with AI. Our guiding principles are: Innovation, Collaboration, and Excellence.
In addition to building a team of talented AI researchers and engineers, Atlantic AI Labs™ will also promote early mathematics and science education and offer AI Research Fellowship and Residency Programs in collaboration with African universities.
About us
What we do.
A passionate team of creative & innovative minds.
We are a multidisciplinary team passionate about researching and working toward the long-term goal of Artificial General Intelligence (AGI).
Our goal is to use AI to improve people's lives and health. Our current focus is on AI, Cognitive Robotics, and Autonomy with applications in aerospace, biomedicine, and agriculture.
Slowly but surely, the chameleon reaches the top of the bombax tree.
As Africans solving African problems with AI, our approach is influenced by the diverse philosophies, traditions, indigenous knowledge, languages, arts, and cultures of African people.
One of the major themes at AI conferences in the Western world these days is diversity and inclusion. The Dahomean Amazons — a regiment of all-female warriors respectfully and affectionately called the Mi Non in the Fongbe language, which means "Our Mothers" — fearlessly and ferociously fought the colonial French army in hand-to-hand combat in the First and Second Franco-Dahomean Wars (1890 and 1892-1894 respectively). We honor their sacrifice to preserve the independence, sovereignty, and freedom of Dahomey. Unfortunately, the Corps of the Mi Non was dismantled under French colonial rule — one reason why we are reluctant to take gender equality lessons from the West.
Today, hardworking women remain the social and economic pillars of Africa. In seeking to build a modern, fair, compassionate, and peaceful Africa, free of neo-colonial collusion, we value and call on the hard work, contribution, and ingenuity of African Women in AI. The entrepreneurship, work ethic, and integrity of ordinary African women stand in sharp contrast to the corruption and mediocrity of certain members of the so-called "ruling elite".
Harry S. Truman once remarked that "There is nothing new in the world except the history you do not know." In 1935, Colonel John C. Robinson (1905-1954) — an African American aviator from Gulfport, Mississippi — volunteered to travel to Ethiopia to help establish the Ethiopian Air Force, which he commanded during the war against the colonial and fascist Italian army. The Ethiopian army had previously defeated the Italian army at The Battle of Adwa on 1 March 1896 during the First Italo-Ethiopian War. More than a century before The Battle of Adwa, General Toussaint Louverture (1743-1803), whose grandfather was from Allada (part of the Kingdom of Dahomey in present-day Benin), launched an armed slave rebellion in Saint-Domingue (present-day Haiti). At The Battle of Vertières on 18 November 1803, Napoleon's French expeditionary forces were defeated by the rebellion under the command of General Jean-Jacques Dessalines, who succeeded General Toussaint Louverture after the latter was captured by French forces. The historic victory of the rebellion paved the way to the proclamation of the independence of Haiti and the abolition of slavery on 1 January 1804. We honor the memory of Kaptein Hendrik Witbooi (1830-1905) and Chief Samuel Maharero (1856-1923), who led the resistance against the genocidal German colonial rule in Namibia. The genocide of the Herero and Namaqua people of Namibia by the German Empire between 1904 and 1908 is widely recognized as the first genocide of the 20th century. The atrocities committed in the Congo Basin from 1885 to 1908 under the rule of King Leopold II of the Belgians have been well documented. We honor the service and sacrifice of the men and women who fought the long and hard wars for independence in Algeria, Mozambique, Angola, Namibia, Guinea-Bissau, Zimbabwe, and South Africa. We Will Never Forget.
In keeping with this long tradition of transatlantic solidarity — the principle of Umoja in the Swahili language which means "Unity" — we wish to foster a bridge for scientific and technical collaboration between the African Motherland and the African Diaspora and allies in the Americas, in Europe, and in the rest of the world. To paraphrase U.S. President John F. Kennedy, "Ask not what Africa can do for you — ask what you can do for Africa."
We are working toward the long-term goal of Artificial General Intelligence (AGI) with an emphasis on improving Global Health. As Dr. Martin Luther King, Jr. once said: "Of all the forms of inequality, injustice in health care is the most shocking and inhumane." We believe that quality research requires upholding high ethical standards and have a zero-tolerance policy toward "ethics dumping" in Africa. The ongoing COVID-19 pandemic has highlighted the urgency of building scientific and technical capacity in Africa to reduce dependence on foreign aid, which is leveraged by foreign entities to gain soft power — when not disguising a debt trap or the financing of their own exports to Africa as development aid. In July 1825, under the threat of imminent war by the French armada, Jean-Pierre Boyer, the second President of Haiti, was bullied into signing a treaty to pay 150 million francs to France — the equivalent of $28 billion in today's US dollars — in five equal installments to indemnify the former French colonists and slave owners. Haiti was forced to borrow 30 million francs from French banks to pay the first installment. It took Haiti 122 years to fully repay all the subsequent loans and interest. Today, we are already witnessing debt-strapped nations go bankrupt, followed by economic, political, and social collapse. These nations are then forced to make concessions to foreign creditors on national assets like ports, media, mineral resources, and energy production.
AI can certainly play a role not only in healthcare delivery but also in drug and vaccine discovery. Since the next pandemic will not give us advance warning, we will also need an African network and digital infrastructure for pandemic surveillance and data sharing. While affirming the need for African solutions to African problems, we acknowledge our interconnectedness with the rest of the world — the principle of Umuntu Ngumuntu Ngabantu in the Zulu language which means "I am because we are."
Although we work on interventions that leverage AI and computational health informatics, we approach healthcare as what it truly is — a complex system. For example, there is a clear interdependence between the following: the environmental and social impact of meat production (violence between farmers and herders, greenhouse gas emissions, waste disposal, antibiotic resistance, and the use of growth enhancers like clenbuterol); wildlife extinction and loss of biodiversity due to livestock encroachment; wildlife poaching due to economic deprivation; the global wildlife trade and the sale of wildlife meat in open wet markets; chronic diseases like hypertension, heart disease, diabetes, and cancer; and zoonotic diseases like COVID-19 which are caused by viruses jumping from animals to humans. We fully embrace the vision of One Health which promotes a unified approach to the health of people, animals, and the environment.
We need to urgently tackle the challenge of food security as well; reduce the importation of food and fertilizers; and give serious thought to sustainable and regenerative agricultural production in Africa. African Indigenous food not only tastes good but is also nutritious and can contribute to good population health. As Hippocrates once remarked, "Let thy food be thy medicine and medicine be thy food." We look forward to exploring how AI can contribute to a transition to precision agriculture in Africa.
Of all the challenges ahead of us in Africa, climate change is already having the most devastating health, social, economic, security, and political consequences, particularly in the Sahel region. Environmental protection and the responsible use of clean energy for AI computational resources should be our top priorities.
Our primary objective is to improve people's lives and health — not publication and citation counts. We do plan to share knowledge by publishing reproducible research and novel ideas that advance the field. In October 2020, a group of 31 international researchers wrote a paper titled Transparency and reproducibility in artificial intelligence after Google Health refused to share the methods and code of a paper published in January 2020 and titled International evaluation of an AI system for breast cancer screening, which claimed that its Deep Learning algorithm was capable of "surpassing human experts in breast cancer prediction." In another paper titled Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal published in the BMJ in January 2021, Wynants et al. reviewed 169 prediction models and concluded: "The COVIDPRECISE group does not recommend any of the current prediction models to be used in practice, but one diagnostic and one prognostic model originated from higher quality studies and should be (independently) validated in other datasets." As the British statistician Doug Altman put it: "To maximize the benefit to society, you need to not just do research, but do it well."
The US National Institutes of Health (NIH) has a budget of $47.5 billion for fiscal year 2023, while the US Department of Defense 2023 research, development, test, and evaluation (RDT&E) budget stands at $130.1 billion. Other agencies provide additional funding, including the National Science Foundation ($9.87 billion), the Department of Energy Office of Science ($8.1 billion), and NASA Science ($7.8 billion). In the US private sector, companies like Google, Facebook, and Microsoft also inject billions of dollars into AI R&D, allowing them to poach top AI talent from academia, although their massive data and compute resources have not delivered on the promise of robust Artificial General Intelligence (AGI) so far. In comparison, the total 2023 budget (not just R&D) of the government of Nigeria (the most populous country in Africa) is $49 billion. Clearly, Africa needs a smart approach to self-funding and conducting research that is adapted to its context, including a focus on research with high societal impact and the elimination of research waste.
We are opposed to the development, importation, and unethical use of facial recognition and other AI-enabled surveillance tech. During the last three decades, the people of Africa have made significant strides toward democracy and the rule of law and are not interested in copying the governance model of foreign surveillance states. On the other hand, the neurobiology of face recognition in humans is a subject of scientific interest for both neuroscience and AI. Health data privacy in the age of surveillance capitalism is also a concern, and we look forward to contributing to and participating in an international legal framework for data privacy.
An African AI policy is a matter of national sovereignty, and its formulation and implementation must proceed through self-reliance and without neo-colonial collusion. Therefore, it cannot be outsourced to Google, Facebook, Twitter, IBM, the International Monetary Fund (IMF), the World Bank (WB), the French Development Agency (AFD), China's Belt and Road Initiative, and certainly NOT to Russia's Wagner Group. As an example, the neo-colonial monetary policy of France based on the CFA franc currency — which is still in use by fourteen African countries more than 60 years after they declared independence from France — has not served the economic interests of the African people. One wonders what all the Africans with a PhD in economics, trained in elite Western universities, have been doing. Visionary Cameroonian economist Tchundjang Pouemi did publish a book in 1980 titled Monnaie, servitude et liberté. La répression monétaire de l'Afrique (Money, servitude and freedom: Africa's monetary repression), which has recently gained renewed interest. Political, economic, and technological sovereignty is the sine qua non for eradicating extreme poverty in Africa.
Western deference to France for security in the Sahel has not delivered security for the populations affected by jihadist violence. Inept African governments — some of which came to power through unconstitutional military coups — are now inviting Russia's mercenary Wagner Group to provide security in return for exploitative mining contracts. The Wagner Group is also involved in Russia's disastrous colonial war against Ukraine. The presence of the Wagner Group will only worsen the security, political, and economic situation in Africa. Africa should take responsibility for its own defense and security and seek direct security partnerships with like-minded democracies around the world that value and uphold international laws, norms, and regulations for global peace and prosperity.
We would like to emphasize that we do like French culture, language, and cuisine and find ordinary French people welcoming and tolerant (les Français sont sympas, "the French are friendly"). However, historically, certain policies and actions of successive French governments (in particular, the so-called Françafrique architected by French politician Jacques Foccart) have been detrimental to peace and prosperity in Africa. So, we certainly look forward to a new era of shared peace and prosperity in France-Africa relations.
Countries like the US, Canada, and members of the European Union have published national AI policies and roadmaps. These national AI policies do not preclude international cooperation between like-minded democracies on issues like AI ethics. An example is the emerging Global Partnership on Artificial Intelligence (GPAI). AI in Africa can create real value and support sustainable development without displacing workers. In fact, AI can serve as a catalyst for investing in critical communication and computing infrastructure, cybersecurity, and promoting early math and science education to ensure a pipeline of domestic talent and entrepreneurs. We will promote and sponsor national competitions in math and science to support and encourage exceptionally talented African youth.
While holding governments accountable through a non-violent democratic process (military coups are NOT the solution, particularly when driven by foreign interference), we trust that they and institutions like the African Union and the African Continental Free Trade Area (AfCFTA), which aims to create a common marketplace of 1.3 billion people, will contribute to boosting intra-African cooperation and trade — the principle of Ujamaa in the Swahili language which means "cooperative economics". We must not allow the 500-year-old looting of Africa to continue in cyberspace in the age of intelligent machines. Let us rise to the challenge.
In healthcare for example, the responsible use of AI can alleviate the severe shortage of radiology and oncology specialists by extending the capabilities of generalist physicians and nurses. Using sensors, cameras, and AI algorithms can help farmers increase crop yields. Examples of applications in precision agriculture include early detection of plant diseases and irrigation. AI-enabled drones represent an effective solution for the delivery of time-sensitive packages like vaccines, medications, and medical supplies to rural and remote areas given the poor state of our transportation infrastructure. They have also been proven effective in wildlife conservation efforts including the real-time detection of poachers. At the current rate of poaching, all the lions, rhinos, and elephants will be gone soon. These applications create high tech jobs for a young population.
Peace is the foundation of development, and the deterioration of the security situation in parts of Africa is a cause for concern. First and foremost, we believe that the eradication of extreme poverty through political, economic, social, health, and educational means must be the top priority toward ensuring peace and stability. Secondly, Africa's military posture must be driven by the interests of the African people. It should be based on the principles of unity, solidarity, self-reliance, intra-African cooperation, and preserving our sovereignty, democracy, and the rule of law including international humanitarian law. Africa does not want to be the theater for proxy wars and geo-political and economic rivalry between foreign powers and other entities which aim to loot our natural resources that are in particularly high demand in rich countries' renewable energy sectors including electric vehicles, energy storage, solar panels, wind turbines, and electronic devices. Africa's industrial policy should aim for the development of a manufacturing base for transforming these raw mineral resources into value-added clean energy products.
In an interconnected world, willing allies can provide military equipment, logistics support, and training. But there should be a preference for African boots on the ground doing the actual fighting, with an urgent focus on defeating the jihadist insurgency which represents a clear and present danger for peace and prosperity in Africa. We must learn to defend our homeland like our ancestors did with so much dignity when faced with the threat of colonialism. In addition, we also need to develop our scientific, technical, and industrial capacity for self-defense and peacekeeping with an emphasis on highly maneuverable and intelligent multi-domain operations including land, maritime, air, space, and cyber. In the maritime domain, we must protect the livelihood of impoverished and economically deprived coastal communities against Illegal, Unreported, and Unregulated (IUU) fishing by Chinese fishing fleets. In the cyber domain, we must vigorously counter the growing threat of online disinformation campaigns on social media including Facebook, Twitter, YouTube, Instagram, TikTok, and others. A worrying trend is the use of so-called African "social media influencers" by foreign actors like Russia to spread disinformation and manipulate public opinion.
Some researchers at Facebook have proclaimed that "there is no such thing as AGI" (they even came up with the #noAGI hashtag on Twitter) on the grounds that human intelligence is not general. While these researchers are entitled to their opinion, we do not take AI directions from the people who are leading the ultimate platform for online disinformation campaigns which represent a clear and present danger to social cohesion, democracy, and peace in Africa. According to a Wall Street Journal article published in May 2020 and titled Facebook Executives Shut Down Efforts to Make the Site Less Divisive, an internal presentation to Facebook executives in 2018 stated: "Our algorithms exploit the human brain's attraction to divisiveness." Recent developments in the use of AI-enabled Deepfake in disinformation campaigns are particularly troublesome.
In the preface to the Proceedings of the Third Conference on Artificial General Intelligence in 2010, Eric Baum, Marcus Hutter, and Emanuel Kitzelmann wrote: "Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI — to create broad human-like and transhuman intelligence, by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies." Marcus Hutter had previously proposed a foundational mathematical theory of Universal Artificial Intelligence (UAI) in his 2005 book titled Universal artificial intelligence: Sequential decisions based on algorithmic probability, and a universal algorithmic agent called AIXI.
At Atlantic AI Labs™, our goal is responsible and trustworthy AI in mission-critical domains such as agriculture, transportation, and healthcare. To account for the cognitive phenomena that underlie human performance in these domains, we take inspiration from what Edwin Hutchins called "cognition in the wild". The embodied mind — which gives rise to cognition — is a complex system. Our methodology is therefore interdisciplinary and informed by complexity science. In what follows, we will discuss generalization. In particular, we will review how learning abstract concepts allows humans to generalize across domains and tasks; compositionality or the human ability to compose new arbitrary concepts from existing ones (e.g., the concept of a hybrid creature of human, bird, and lion); how conceptualization and meaning are grounded in embodiment and sensorimotor systems; the human capacity for imagination and mental simulation (as when dreaming during sleep); and why conceptual metaphorical mapping across domains is pervasive in human thought, language, discourse, and even commonsense reasoning.
Our founder studied aviation flight operations and worked in the aviation and healthcare industries. We therefore have a bias in favor of building intelligent systems that are safe and reliable and operate in highly regulated environments. The Swiss cheese model of accident causation is of particular relevance to both the aviation and healthcare domains and can inform a systems approach to AI safety. Developed by J. Reason, the theory suggests that an accident occurs when the holes in the cheese slices align, allowing "a trajectory of accident opportunity". Each slice represents a layer of defense against failure, including factors such as human-centered design following cognitive ergonomics and human factors engineering principles, systems redundancy and fault tolerance, formal verification, regulations, certification, full simulator training of abnormal and emergency procedures, maintenance, and organizational leadership and culture.
Mathematically, we already know that Neural Networks (NNs) are universal function approximators. So, it is not surprising that NNs can do a lot of impressive things (including some level of reasoning) given enough data and compute. The question therefore is not whether NNs can do this or that, but rather whether they can perform these tasks in a reliable, safe, and cost effective manner when operationalized. Can we provide provable guarantees? Can they pass formal verification as required by regulation for certification in safety-critical domains like aviation (e.g., DO-333, Formal Methods Supplement to DO-178C for avionics software)? Explainability is also a fundamental requirement for these domains — imagine not being able to explain an aircraft accident due to the failure of an AI-based component. OpenAI's GPT-3 is definitively not ready to be used in clinical care: during an experiment conducted by AI researchers at a France-based company called Nabla, GPT-3 reportedly recommended to a mock patient that they should kill themselves. Reliable Clinical Question Answering (CQA) does have the potential to be very useful in clinical care given the ever-increasing amount of medical literature that physicians must keep up with. Compute cost is another important consideration in the real world because most organizations do not have Google-scale data and compute infrastructure and many AI-enabled systems and devices will actually operate at the edge of the network. Ideas about the role of scale in AI progress ("scaling laws," "scale is all you need," "scaling maximalism," or the "bitter lesson") must consider the energetics of computation. So, the bottom line is this: can it fly (airworthiness is the technical term in aviation)? And if not, it must be grounded until certified as safe and reliable.
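To make the universal-approximation point concrete, here is a minimal sketch in plain NumPy: a one-hidden-layer network fit to sin(x) with hand-coded backpropagation (all hyperparameters are our own arbitrary choices). It drives the empirical error down but offers none of the provable guarantees that certification demands:

```python
import numpy as np

# Minimal universal-approximation demo: a one-hidden-layer tanh network
# fit to sin(x) by hand-coded gradient descent. Hyperparameters are
# arbitrary choices for illustration only.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

H = 32                                    # hidden units
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

for step in range(5000):
    h = np.tanh(x @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2
    err = pred - y                        # gradient of 0.5 * squared error
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * g                      # plain SGD update

# The fit is empirically good, but nothing here is provably bounded.
print("max |error|:", float(np.abs(pred - y).max()))
```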
Our interdisciplinary approach for solving complex problems in AI allows us to draw on diverse bodies of knowledge which include: biology, psychology, philosophy, cognitive science, neuroscience, mathematics, physics, biochemistry, medicine, statistics, computer science, and aviation. As the physicist Richard Feynman explained: "If our small minds, for some convenience, divide this universe, into parts — physics, biology, geology, astronomy, psychology, and so on — remember that nature does not know it!"
Some are wondering about the possibility of another AI winter. There could be an AI winter for those who overhype Deep Learning by claiming that we no longer need radiologists or that level 5 autonomous vehicles are ready to freely roam our streets. AI will be an important and essential subject of scientific inquiry for centuries to come. Unraveling the mysteries of the mind is one of the grand challenges of science in the 21st century. Historically, there has been a mutually enriching relationship between the cognitive sciences, neuroscience, and AI. This will also be the key to building intelligent machines that can interact safely and efficiently with humans to help solve a wide range of complex problems. Furthermore, comparisons between machine and human cognition are fraught with issues resulting from human cognitive bias. An example of cognitive bias is the human tendency to attribute anthropomorphic competencies to an AI agent that is only learning image surface statistics as opposed to conceptual abstractions. This inevitably leads to finding spurious correlations in the data set. In Adversarial Examples Are Not Bugs, They Are Features, Ilyas et al. called adversarial vulnerabilities "a fundamentally human phenomenon." Concept learning starts early in infancy — a research topic in the field of developmental psychology — and continues throughout the adult lifespan. It is also related to embodiment and sensorimotor experiences and memories which in turn enable common sense reasoning and counterfactual thinking (causality). During a recent public AI debate, Judea Pearl said: "I am very much opposed to the culture of data only....I believe that we should build systems which have a combination of knowledge of the world together with data." Not heeding Pearl's advice will impede the progress and trustworthiness of AI.
In solving problems in healthcare, the Machine Learning community has a lot to learn from the field of biostatistics which has developed many techniques, tools, and guidelines (such as the TRIPOD Statement) for developing and validating clinical prediction models. Deep Learning practitioners are starting to pay attention to the challenges of identifiability and Causal Inference (for decision making and policy) which have been well studied in statistics and econometrics. On the other hand, Machine Learning has come of age in the era of Big Data and has proven effective at handling high-dimensional data sets. We do hope that a mutually enriching relationship will exist between these two fields as well.
Our methodology is based on the neuroscientist David Marr's three levels of analysis: computational, algorithmic, and implementational. By providing a higher level of abstraction, Marr's framework allows us to avoid premature algorithmic commitments in analyzing and developing AI applications. While Deep Learning is currently the leading approach to AI in vision, language, speech recognition, and control in game play, we leave open the possibility that better approaches will emerge in the future.
For example, flight cognition — the study of the cognitive and psychological processes that underlie pilot performance, decision making, and human errors — can inform the design of Cognitive Architectures for safe and autonomous agents. These autonomous agents will be very helpful to humans during future airline single-pilot operations, crewed spaceflight missions into deep space, and the exploration of Mars. A Cognitive Architecture implements a computational model of various mechanisms and processes involved in cognition such as: perception, memory, attention, learning, causality, reasoning, decision making, planning, action, motor control, language, emotions, drives (such as food, water, and sex), imagination, social interaction, adaptation, self-awareness, Theory of Mind (ToM), metacognition, and consciousness (the "c-word" which we believe should be brought into the realm of scientific inquiry). These Cognitive Architectures will enable the design of autonomous agents that can interact safely and effectively with humans (human-like AI).
Similarly, from biology, neuroscience, and perhaps also physics, we can learn how hummingbirds develop fascinating learning and cognitive abilities (e.g., spatial memory, episodic-like memory, vision, motor control enabling sophisticated flight maneuvers, and vocal learning) with tiny brains. This approach called Nature-Inspired Computing (NIC) can inform the development of more efficient intelligent machines. We are particularly intrigued by the thermodynamic efficiency of biological computation.
Deep neural networks (DNNs) have recently achieved impressive levels of performance in tasks such as object recognition, speech recognition, language translation, and control in game play. DNNs have proven to be effective at perception and pattern recognition tasks with high dimensional input spaces — a challenge for previous approaches to AI. However, they tend to overfit in low data regimes (most organizations don't have Google-scale data and computing infrastructure) and more work is needed to fully incorporate cognitive mechanisms and processes like memory, attention, commonsense reasoning, and causality.
Returning to our aviation example, we know that good pilots "stay ahead of the airplane". Through rigorous learning, simulation training, and planning, the pilot has acquired a mental model for reasoning about the flight. This mental model includes the aerodynamic, propulsion, and weather models. It allows the pilot to "stay ahead of the airplane" by maintaining situational awareness and by asking herself questions like: "What can happen next?" (prediction), "What if an unplanned situation arises?" (counterfactual causal reasoning), and "What will I do?" (procedural knowledge). For example, thunderstorm activity at the destination airport could force the pilot to divert the plane to an alternate airport or execute a go-around procedure during the approach to landing due to the presence of windshear. Flight instructors encourage their students to practice a technique called "chair flying" which consists of mentally visualizing a flight while sitting in a chair or cockpit before the flight. In a NASA Report titled Human Performance Contributions to Safety in Commercial Aviation, Jon B. Holbrook et al. write: "Experience in the aircraft and the ability to mentally simulate its future state was needed to anticipate a required action, choose an appropriate action, and choose the implementation timeframe for the action." Understanding the contribution of human performance to aviation safety becomes even more important in the context of increasing flight automation and autonomy. In addition to training Boeing 737 MAX pilots on the Maneuvering Characteristics Augmentation System (MCAS) flight control, the US Federal Aviation Administration (FAA) now also requires flight training on a Boeing 737 MAX Level C or D full flight simulator (FFS) including the failure modes that occurred during the two crashes in Indonesia and Ethiopia in 2018 and 2019.
Situational awareness is especially important during spatial disorientation in flight, when the pilot's perception of the aircraft's attitude and spatial position turns into misperception. The pilot's awareness of her illusory perceptions allows her to rely on flight instruments to ensure flight safety. According to the FAA, "between 5 to 10% of all general aviation accidents can be attributed to spatial disorientation, 90% of which are fatal". A NATO report published in 2008 and titled Spatial Disorientation Training — Demonstration and Avoidance revealed that 25% of military aircraft accidents in the UK between 1983 and 1992, and 33% between 1993 and 2002, can be attributed to spatial disorientation. The air space is not the natural habitat in which the human body and mind evolved. The study of spatial disorientation in flight allows us to better disentangle the interactions between our bodily sensations, perception, and action.
In the Allegory of the Cave, Greek philosopher Plato offered insights into the nature of perception vs. reality. More recently, in his book Reality Is Not What It Seems, theoretical physicist Carlo Rovelli (founder of loop quantum gravity theory) wrote: "It is only in interactions that nature draws the world....The world of quantum mechanics is not a world of objects: it is a world of events."
The framework of Active Inference — introduced by the neuroscientist Karl Friston — views veridical and illusory perceptions as Bayesian inferences combining bodily sensations and prior beliefs based on the agent's generative model of the environment. The agent optimizes these inferences and reduces uncertainty through action (minimization of surprise and variational free energy). Vision, vestibular organs, and proprioception play a role in maintaining spatial orientation and also impact human cognition (Embodied and Enactive Cognition). Interoception on the other hand has been found to play a role in self-awareness, abstract concept learning, and emotional feelings like fear. The pilot uses her metacognitive abilities to monitor the accuracy and uncertainty of her perception of the environment and to assess and regulate her own reasoning and decision-making processes.
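For readers who want the formal statement, the variational free energy of Active Inference is standardly written as

$$F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o)$$

where o denotes observations, s hidden states, p the agent's generative model, and q its approximate posterior: perception minimizes F by updating q, while action minimizes it by changing o.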
John O'Keefe, May-Britt Moser, and Edvard I. Moser won The 2014 Nobel Prize in Physiology or Medicine for discovering the grid cells and place cells that constitute the so called "inner GPS" in the brain. In a paper titled Navigating in a three-dimensional world, Kathryn J. Jeffery et al. review the role of place cells and grid cells in the hippocampal-entorhinal system and report that "the absence of periodic grid firing in the vertical dimension suggests that grid cell odometry was not operating in the same way as it does on the horizontal plane." In Navigating cognition: Spatial codes for human thinking, Jacob L. S. Bellmund et al. build on research on "Conceptual Spaces" by Peter Gärdenfors to introduce "Cognitive Spaces" whose dimensions of experience are mapped by place and grid cells to support general cognitive functions beyond spatial navigation. In Vector-based navigation using grid-like representations in artificial agents, Andrea Banino et al. develop a deep reinforcement learning agent with emergent grid-like representations whose performance "surpassed that of an expert human and comparison agents."
In aircraft equipped with an automated flight control system (fly-by-wire) and a glass cockpit, human-machine interaction must be carefully designed to avoid potentially catastrophic out-of-the-loop performance problems which can result from the loss of situational awareness when the pilot must regain manual control of the aircraft. Out-of-the-loop performance problems resulting from ill-conceived human-machine interaction should not be confused with human errors, hence the important concept of "human-centered design".
The human mind is also a very efficient learner. The FAA requires airline first officers (second in command) to hold an Airline Transport Pilot (ATP) certificate, which requires knowledge and practical tests and 1,500 hours of total flying experience. Up to 100 hours of the required flying experience can be accumulated in a full flight simulator. In contrast, Google's AlphaGo — designed using an approach to AI known as Deep Reinforcement Learning (DRL) — played more than 100 million game simulations. The latest incarnation of AlphaGo, called AlphaZero, used 5,000 tensor processing units (TPUs) and required significantly fewer game simulations to achieve superhuman performance at the games of chess, shogi, and Go. A previous incarnation called AlphaGo Zero used graphics processing units (GPUs) to train the deep neural networks through self-play with no human knowledge except the rules of the game.
How applicable is AlphaGo's approach to real world decision problems? In Go, the states of the game are fully observable, which enables learning through self-play with Monte-Carlo Tree Search (MCTS). On the other hand, partial observability is typical of real world environments. Also, it is hard to imagine how an AI system could learn tabula rasa — with no human knowledge — through self-play in the domain of aerospace. Such an AI system would have to rediscover the 300-year-old laws of Newton and Euler and the Navier-Stokes equations — the foundations of modern aerodynamics. Isaac Newton himself once famously remarked: "If I have seen further than others, it is by standing upon the shoulders of giants." Therefore, we explore physics-aware machine learning approaches.
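As a toy illustration of the physics-aware idea (a sketch under assumptions of our own choosing, not a production method), the snippet below fits a polynomial to five noisy samples of sin(x) while softly enforcing the known physics dy/dx = cos(x) at collocation points, mirroring the trick physics-informed learning plays with PDE residuals:

```python
import numpy as np

# Toy "physics-aware" fit: recover y(x) = sin(x) from 5 noisy samples by
# softly enforcing the known physics dy/dx = cos(x) at collocation points.
# The polynomial ansatz, degree, and physics weight are all assumptions.
rng = np.random.default_rng(1)
deg = 9
x_data = np.linspace(-2, 2, 5)
y_data = np.sin(x_data) + 0.05 * rng.normal(size=5)
x_col = np.linspace(-3, 3, 50)            # where the ODE residual is enforced

def basis(x):                             # columns: 1, x, x^2, ..., x^deg
    return np.vander(x, deg + 1, increasing=True)

def dbasis(x):                            # derivatives: 0, 1, 2x, ..., deg*x^(deg-1)
    V = np.vander(x, deg, increasing=True)
    return np.hstack([np.zeros((len(x), 1)), V * np.arange(1, deg + 1)])

lam = 1.0                                 # physics weight (assumption)
A = np.vstack([basis(x_data), lam * dbasis(x_col)])
b = np.concatenate([y_data, lam * np.cos(x_col)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

x_test = np.linspace(-3, 3, 7)
print(np.round(basis(x_test) @ coef - np.sin(x_test), 3))   # small residuals
```

The data term alone would overfit five points badly; the physics residual regularizes the fit far beyond the observed samples.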
Furthermore, the lack of explainability of the policies learned by DRL agents remains an impediment to their use in safety-critical applications like aviation. Nonetheless, DRL (preferably the model-based variant) can be helpful in teaching complex tasks like autonomous aircraft piloting to a robot, although we believe that DRL alone does not account for all the cognitive phenomena that underlie the performance of human pilots (more on that later). According to the International Air Transport Association (IATA), the 2019 fatality risk per million flights was 0.09. Beyond automation with current autopilot systems, the increasing demand for air travel worldwide will create a need for machine autonomy. The Canadian Council for Aviation and Aerospace (CCAA) predicts a shortage of 6,000 pilots in Canada by 2036.
We subscribe to the No Free Lunch Theorem (introduced by David Wolpert and William Macready) and have experience in various state-of-the-art approaches to AI including: symbolic, connectionist, Bayesian, frequentist, and evolutionary. In building cognitive systems, we seek synergies between these approaches. For example, Bayesian Deep Learning can help represent uncertainty in deep neural networks in a principled manner — a requirement for domains such as healthcare. Bayesian Decision Theory is also a principled methodology for solving decision making problems under uncertainty. We see Deep Generative Models combining Deep Learning and probabilistic reasoning as a promising avenue for unsupervised and human-like learning including concept learning, one-shot or few-shot generalization, and commonsense reasoning. Reminiscent of human metacognition, Meta Learning (or learning to learn) for Reinforcement Learning and Imitation Learning has generated a lot of interest at the latest NeurIPS conference and holds the promise of learning algorithms that can generate algorithms tailored to specific domains and tasks.
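As one concrete (and deliberately simplified) example of representing uncertainty, the sketch below uses Monte Carlo dropout (in the spirit of Gal and Ghahramani's Bayesian approximation) to return a predictive mean and standard deviation instead of a bare point estimate; the weights here are random placeholders, not a trained clinical model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder weights standing in for a trained two-layer regression
# network; random here only to keep the sketch self-contained.
W1, b1 = rng.normal(0, 0.5, (4, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

def mc_dropout_predict(x, T=100, p_keep=0.9):
    """Approximate the Bayesian predictive distribution by averaging
    T stochastic forward passes with dropout left on at test time."""
    preds = []
    for _ in range(T):
        h = np.maximum(0, x @ W1 + b1)                    # ReLU hidden layer
        mask = rng.binomial(1, p_keep, h.shape) / p_keep  # inverted dropout
        preds.append((h * mask) @ W2 + b2)
    preds = np.stack(preds)
    return preds.mean(0), preds.std(0)    # predictive mean and uncertainty

x = rng.normal(size=(3, 4))               # three hypothetical input vectors
mean, std = mc_dropout_predict(x)
print(mean.ravel(), std.ravel())          # report both, not a point estimate
```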
Lately, there has been a resurgence of evolutionary algorithms proposed as an alternative to established Reinforcement Learning algorithms (like Q-learning and Policy Gradients) or as an efficient mechanism for training deep neural networks (neuroevolution). Evolutionary algorithms are also amenable to embarrassingly parallel computations on commodity hardware.
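A minimal sketch of this idea, loosely following the OpenAI-style evolution strategies estimator on a toy objective of our own invention (all hyperparameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(w):
    # Toy objective standing in for an RL episode return:
    # maximize the negative squared distance to a hidden optimum.
    return -np.sum((w - 3.0) ** 2)

w = np.zeros(10)                          # parameters being evolved
sigma, alpha, n_pop = 0.1, 0.02, 50

for gen in range(300):
    eps = rng.normal(size=(n_pop, w.size))        # one perturbation per worker
    # Each fitness evaluation is independent -- embarrassingly parallel,
    # so they can be farmed out to separate processes or machines.
    f = np.array([fitness(w + sigma * e) for e in eps])
    f = (f - f.mean()) / (f.std() + 1e-8)         # normalize returns
    w += alpha / (n_pop * sigma) * eps.T @ f      # ES gradient estimate

print(np.round(w, 2))                     # approaches the optimum at 3.0
```

Note that only scalar fitness values cross process boundaries, which is what makes the approach so easy to distribute on commodity hardware.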
In addition, rumors of the demise of Symbolic AI in favor of statistical learning methods (in the era of Machine Learning and Deep Learning hype) have been greatly exaggerated. The fact is that mathematics and language — the foundations of human civilization — are the ultimate symbol systems. Since the seminal Dartmouth AI workshop of 1956, decades of research in logic-based methods (e.g., classical, nonmonotonic, probabilistic, description, modal, and temporal logics) have produced useful commonsense reasoning capabilities that are lacking in today's Deep Learning and Reinforcement Learning systems which are essentially based on pattern recognition. This lack of reasoning abilities in AI systems can potentially lead to sample inefficiency or difficulties in providing formal guarantees of system behavior — a concern that is exacerbated by known vulnerabilities such as adversarial attacks against Deep Neural Networks and reward hacking in Reinforcement Learning.
Real world safety-critical systems like aircraft are indeed required by regulation to go through a formal verification process for certification. Consider the following rule in the US federal aviation regulations: "When aircraft are approaching each other head-on, or nearly so, each pilot of each aircraft shall alter course to the right." (14 CFR part 91.113(e)). This rule can be easily specified in a logic-based formalism such as probabilistic temporal logic to account for sensor and perception uncertainty. We can then formally verify that an autonomous robotic pilot complies with the rule. A hybrid approach consists in generating DRL policies that satisfy probabilistic temporal logic constraints. Autonomous agents like robotic pilots must comply with the laws, regulations, and ethical norms of the country in which they operate — a concept related to algorithmic accountability. We see the need for innovation in aircraft Guidance, Navigation, and Control (GNC). Next-generation aircraft should have robust, adaptive, and certifiable flight control with enhanced upset prevention and recovery capabilities. Loss of control in flight (LOC-I) still accounts for a disproportionate number of accidents. With all the little drones and eVTOLs flying around (Urban Air Mobility), we will also need good Airborne Collision Avoidance Systems (ACAS). The FAA and RTCA are already working on the ACAS-* family of specs: ACAS-X, ACAS-Xa, ACAS-Xu, ACAS-Xr, and ACAS-sXu.
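To make this concrete, here is a deliberately simplified runtime-monitor sketch of the 14 CFR 91.113(e) rule; the probability threshold, class, and action names are our own assumptions, standing in for a genuine probabilistic temporal logic specification and model checker:

```python
from dataclasses import dataclass

# Toy runtime monitor for 14 CFR 91.113(e): "When aircraft are approaching
# each other head-on, or nearly so, each pilot of each aircraft shall alter
# course to the right." A real system would state this in probabilistic
# temporal logic and model-check the policy; this only shows the shape.

@dataclass
class PerceptionEstimate:
    p_head_on: float        # perception module's probability of head-on geometry

def compliant(estimate: PerceptionEstimate, action: str,
              threshold: float = 0.05) -> bool:
    """If the head-on hypothesis is credible, the only compliant maneuver
    is a right turn (the threshold is an assumption, not a spec value)."""
    if estimate.p_head_on >= threshold:
        return action == "ALTER_COURSE_RIGHT"
    return True             # the rule imposes no constraint otherwise

assert compliant(PerceptionEstimate(0.40), "ALTER_COURSE_RIGHT")
assert not compliant(PerceptionEstimate(0.40), "MAINTAIN_HEADING")
```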
Humans do in fact accumulate knowledge and experience through lifelong learning and intrinsic motivation. Another important aspect of human cognition that is missing in traditional Reinforcement Learning is the role that abstract conceptual knowledge and memory play in decision making by enabling the mental simulation of alternative courses of action and their potential outcomes. In addition to conceptual knowledge, memory, and mental simulation, humans also use their metacognitive abilities to monitor the accuracy and uncertainty of their perception of the environment and to control and regulate their reasoning and decision-making processes. An example of a mutually enriching relationship and cross-fertilization between neuroscience and AI is the study of the role of metacognition in Reinforcement Learning with implications for decision making in machines and neuropsychiatric disorders in humans.
Another issue is that modern machine learning algorithms like Deep Neural Networks (DNNs) and Random Forests are data-hungry. Organizations with low data volumes can jumpstart their adoption of AI by modeling and automating their business processes and operational decisions with logic-based methods. For example, prior to 2009, less than 10% of US hospitals had an Electronic Medical Record (EMR) system; even so, logic-based Clinical Decision Support (CDS) systems for medical Knowledge Representation and Reasoning (KRR) have been successfully deployed for the automatic execution of Clinical Practice Guidelines (CPGs) and care pathways at the point of care. Description Logic (DL) is the foundation of the Systematized Nomenclature of Medicine (SNOMED) — an ontology which contains more than 300,000 carefully curated medical concepts organized into a class hierarchy, enabling automated reasoning capabilities based on subsumption and attribute relationships between medical concepts.
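A toy illustration of subsumption reasoning (a tiny hand-written is-a hierarchy of our own, not SNOMED itself or a real DL reasoner):

```python
# Toy is-a hierarchy in the spirit of SNOMED subsumption (the real ontology
# has 300,000+ concepts and uses a Description Logic reasoner).
IS_A = {
    "bacterial pneumonia": "pneumonia",
    "pneumonia": "lung disease",
    "lung disease": "disease",
}

def subsumes(ancestor: str, concept: str) -> bool:
    """True if `concept` is-a (transitively) `ancestor`."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False

# A rule written against "lung disease" automatically covers the subtype:
assert subsumes("lung disease", "bacterial pneumonia")
assert not subsumes("lung disease", "disease")
```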
The clinical algorithms in CPGs often require the automated execution of highly accurate and precise calculations (over multiple clinical concept codes and numeric values) which are better performed with a logic-based formalism. An example is a clinical recommendation based on multiple diagnoses or co-morbidities, the patient's age and gender, physiological measurements like vital signs, and laboratory result values. Consider the following rule from the 2013 American College of Cardiology Foundation/American Heart Association (ACCF/AHA) Guideline for the Management of Heart Failure: "Aldosterone receptor antagonists (or mineralocorticoid receptor antagonists) are recommended in patients with NYHA [New York Heart Association] class II-IV HF [Heart Failure] and who have LVEF [left ventricular ejection fraction] of 35% or less, unless contraindicated, to reduce morbidity and mortality. Patients with NYHA class II HF should have a history of prior cardiovascular hospitalization or elevated plasma natriuretic peptide levels to be considered for aldosterone receptor antagonists. Creatinine should be 2.5 mg/dL or less in men or 2.0 mg/dL or less in women (or estimated glomerular filtration rate > 30 mL/min/1.73 ㎡), and potassium should be less than 5.0 mEq/L. Careful monitoring of potassium, renal function, and diuretic dosing should be performed at initiation and closely followed thereafter to minimize risk of hyperkalemia and renal insufficiency". Healthcare payers have established strict quality measures to ensure physicians' concordance with clinical practice guidelines.
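Such a rule translates naturally into executable logic. The sketch below encodes the quoted criteria as a boolean function; the thresholds and units come straight from the guideline text, while the function and parameter names are our own:

```python
def aldosterone_antagonist_recommended(nyha_class: int, lvef: float,
                                       sex: str, creatinine: float,
                                       egfr: float, potassium: float,
                                       prior_cv_hosp: bool,
                                       elevated_bnp: bool) -> bool:
    """Direct encoding of the quoted 2013 ACCF/AHA criteria. Units follow
    the guideline: creatinine in mg/dL, eGFR in mL/min/1.73 m^2,
    potassium in mEq/L; sex is "M" or "F"."""
    if not (2 <= nyha_class <= 4 and lvef <= 35):
        return False
    if nyha_class == 2 and not (prior_cv_hosp or elevated_bnp):
        return False
    creat_ok = creatinine <= (2.5 if sex == "M" else 2.0) or egfr > 30
    return creat_ok and potassium < 5.0

# Example: NYHA class III, LVEF 30%, acceptable renal function and potassium.
print(aldosterone_antagonist_recommended(3, 30, "F", 1.1, 75, 4.2,
                                         prior_cv_hosp=False,
                                         elevated_bnp=False))  # True
```

Because every threshold is explicit, such a rule can be audited line by line against the guideline text — exactly the property that quality measures demand.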
Machine autonomy in the care management of patients runs counter to the principle of shared decision making in medicine. Legal scholars and lawyers should decide whether existing doctrines of informed consent are still relevant or should be updated. In the meantime, the use of AI should be disclosed to patients in routine care. This can be done as part of the well-established principle of shared decision-making which considers the values, goals, and preferences of the patient during care planning. Argumentation Theory is a long-standing branch of AI that can help reconcile AI recommendations, uncertainty, risks and benefits, patient preferences, clinical practice guidelines, and other scientific evidence. As a guide to rational clinical decision making (by evaluating and communicating the pros and cons of various courses of action), the implementation of Argumentation Theory may also reduce physicians' exposure to liability by generating arguments for potential jurors. This approach empowers both the patient and the clinician to reason, given that modern AI algorithms like Deep Learning are based on pattern recognition and lack logical and causal reasoning abilities. In their paper titled Why do humans reason? Arguments for an argumentative theory, Hugo Mercier and Dan Sperber wrote: "Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade."
Previous efforts like Statistical Relational Learning (SRL) and Neural-Symbolic integration (NeSy) effectively combined logic and statistical learning to achieve sophisticated cognitive reasoning. Progress in AI should build on these previous efforts and we believe that the symbol grounding problem (SGP) can be addressed effectively in hybrid AI architectures comprising both symbolic and sub-symbolic representations through the autonomous agent's embodiment and sensorimotor interaction with the environment. German mathematician, theoretical physicist, and philosopher Hermann Weyl (1885-1955) once remarked that "logic is the hygiene the mathematician practices to keep his ideas healthy and strong." French mathematician Jacques Hadamard (1865-1963) stated that "logic merely sanctions the conquests of the intuition." Given the current shaky theoretical foundations of Deep Learning, it would be wise to heed the wisdom of these giants.
We believe that a unified theory of machine intelligence is needed, and we are approaching that challenge from a complex systems theory perspective. Networks (of arbitrary computational nodes, not just neural networks), emergence, and adaptation are key relevant concepts in the theory of complex systems. In particular, intelligent behavior can emerge from the interaction between diverse learning algorithms. In their drive to survive, predators and prey co-evolve through their interaction in nature. Our current biosphere is the product of 3.8 billion years of evolution and adaptation. The body of a human contains on average between 30 and 40 trillion cells. The ratio of the number of bacteria in the microbiome to the number of human cells is commonly estimated to be 10 to 1. A paper published in 2016 by Ron Sender, Shai Fuchs, and Ron Milo and titled Revised Estimates for the Number of Human and Bacteria Cells in the Body puts the ratio at 1.3 to 1. The complex interaction networks between hosts (human, animal, or plant) and their microbiomes — including their role in health and disease — are forcing evolutionary biologists to reconsider our notion of the individual or "self". This includes the interaction between the gut microbiome and the brain in humans. In a paper titled A Symbiotic View of Life: We Have Never Been Individuals, Scott F. Gilbert, Jan Sapp, and Alfred I. Tauber wrote: "Thus, animals can no longer be considered individuals in any sense of classical biology: anatomical, developmental, physiological, immunological, genetic, or evolutionary. Our bodies must be understood as holobionts whose anatomical, physiological, immunological, and developmental functions evolved in shared relationships of different species." Culture also plays an important role in both cognition and evolution. The ubiquitous nature of networks (e.g., social and biological networks) will drive the implementation of graph neural networks (GNNs).
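As a minimal sketch of the message-passing idea behind GNNs (a toy adjacency matrix and random weights, nothing more):

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal message-passing layer: each node averages its neighbors'
# features (plus its own), then applies a shared linear map and ReLU.
A = np.array([[0, 1, 1, 0],       # toy 4-node interaction network
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))       # node features (e.g., species abundances)
W = rng.normal(0, 0.5, (8, 8))    # shared weights (random placeholders)

A_hat = A + np.eye(4)                        # add self-loops
A_hat /= A_hat.sum(axis=1, keepdims=True)    # row-normalize (mean aggregation)
H = np.maximum(0, A_hat @ X @ W)             # one round of message passing
print(H.shape)                               # (4, 8): updated node embeddings
```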
An important reason why humans are efficient learners is that we are able to learn concepts and can represent and reason over those concepts efficiently. Pilots take a knowledge test to prove understanding of key concepts and the logical and causal connections between these concepts. Exam topics include: aerodynamics, propulsion, flight control, aircraft systems, the weather, aviation regulations, and human factors (aviation physiology and psychology). This knowledge and a lifetime of accumulated experience came in handy when on January 15, 2009, Captain Chesley "Sully" Sullenberger made the quick decision to ditch the Airbus A320 of US Airways flight 1549 on the Hudson River after the airplane experienced a complete loss of thrust in both engines due to the ingestion of large birds. All 150 passengers and 5 crewmembers survived. Simulation flights were conducted as part of the US National Transportation Safety Board (NTSB) investigation. In the final accident report, the NTSB concluded that "the captain's decision to ditch on the Hudson River rather than attempting to land at an airport provided the highest probability that the accident would be survivable." This event provides a good case study for AI research about decision making under not only uncertainty but also time pressure and high workload and stress levels.
In addition, humans are able to compose new concepts from existing ones — a thought process that Albert Einstein referred to as "combinatorial play". Learning abstract concepts also allows humans to generalize across domains and tasks — a requirement for continuous (life-long) learning in AI systems. For example, concepts learned in the aviation domain — simulation, checklists, Standard Operating Procedures (SOP), Crew Resource Management (CRM), and debriefings — have been successfully applied to medicine. This ability to learn, compose, reason over, generalize, and contextualize abstract concepts is related to language as well. We are particularly intrigued by the pervasive use of argumentation and conceptual metaphors in human thought, language, and discourse. Current Deep Learning architectures fail to represent these abstract concepts which are the basis of human thought, imagination, and ingenuity. Therefore, we explore novel approaches to concept representation, commonsense reasoning, and language understanding. Effective and safe machine autonomy will also require the implementation of important cognitive mechanisms such as intrinsic motivation, attention, episodic and counterfactual thinking, metacognition, and understanding the physics and causal structure of the world (causality).
Human and animal cognition evolved under bounded computational resources. The average power consumption of the human brain is about 20 watts. We believe that the way forward is energy-efficient AI. Some would argue that as long as the energy consumption is 100% renewable, the current approach of data-hungry and energy-hungry brute force Deep Learning is sustainable. It is an approach to AI that favors large corporations with deep pockets but has not led to major breakthroughs in Artificial General Intelligence (AGI). It has led instead to a troubling brain drain from academia. Despite impressive results on certain tasks, recent transformer architectures like BERT still rely on spurious statistical regularities in humongous data sets.
In a paper titled Energy and Policy Considerations for Deep Learning in NLP, Emma Strubell, Ananya Ganesh, and Andrew McCallum estimated the carbon footprint from training a single Deep Learning model with 213M parameters using a Transformer with neural architecture search at 626,155 pounds of carbon dioxide equivalent, compared to 1,984 pounds for a passenger flying round-trip in an airliner from New York to San Francisco. According to an article published in Bloomberg Green in April 2020 and titled Google Data Centers' Secret Cost: Billions of Gallons of Water, the "internet giant taps public water supplies that are already straining from overuse." In contrast, the United States Geological Survey (USGS) Water Science School estimates that each person uses about 80-100 gallons of water per day. Energy consumption and heat dissipation are also important challenges for edge devices like smartphones, virtual reality devices, and drones. We believe that progress toward AGI will accelerate when we accept that cognition (biological or artificial) is fundamentally resource-bounded.
The ability of AI agents to acquire meaning is a complicated subject that goes beyond the Turing test and shouldn't be conflated with virtual assistants like Siri or Alexa executing voice commands. Cognitive scientists have studied and proposed different theories to explain the emergence of meaning. One school of thought suggests that meaning is rooted in the agent's embodiment and sensorimotor interaction with its environment. How does the framework of Active Inference relate to meaning? The answer is human imagination, or the ability for mental simulation. Evidence suggests that the execution of an action and the off-line mental simulation of that action recruit the same neural substrate. Conceptualization and meaning are grounded in our sensorimotor experiences and memories. This also explains why conceptual metaphors are pervasive in human thought, language, discourse, and even commonsense reasoning. Conceptual Metaphor Theory was introduced by George Lakoff and Mark Johnson in their book Metaphors We Live By. A good example of a metaphor is when Michelle Obama famously said: "When they go low, we go high". What is needed are AI agents that can move around and interact with the world and people the way human infants do. This is a lot harder than throwing humongous datasets at Deep Learning (DL) algorithms and seeing what sticks. Although Deep Learning algorithms like GPT-3 and BERT can be very useful and effective for certain tasks (for example, BERT has been used to improve Google Search, with advertising contributing at least 80% of Alphabet's total revenue in 2019), they have no understanding of the data they process. Therefore, we take an embodied and enactive view of cognition in our research in Cognitive Robotics.
The principle primum non nocere, or "first, do no harm," traditionally associated with the Hippocratic Oath, applies to AI in healthcare as well. The introduction of AI algorithms into the real world requires validation, especially for applications that directly impact people's lives and health. For example, predictive models used for diagnosis and prognosis in clinical care should undergo rigorous validation. In the context of supervised Machine Learning, dataset and covariate shifts can produce incorrect and unreliable predictions when the model training and deployment environments differ due to population, policy, or practice variations. We follow existing consensus guidelines such as the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Statement. Internal validation using methods like cross-validation or, preferably, the bootstrap should provide clear performance measures such as discrimination (e.g., the C-statistic or D-statistic) and calibration (e.g., calibration-in-the-large and the calibration slope). In addition to internal validation, external validation should be performed to determine the generalizability of the model to other patient populations. External validation can be performed with data collected at a different time (temporal validation) or at different locations, countries, or clinical settings (geographic validation). The clinical usefulness of the prediction model (net benefit) can be evaluated using decision curve analysis. We look forward to the release of the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis-Machine Learning (TRIPOD-ML) and the Standards for Reporting Diagnostic Accuracy Studies-Artificial Intelligence (STARD-AI) guidelines.
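To make the internal-validation step concrete, here is a minimal sketch of the optimism-corrected bootstrap for the C-statistic and the calibration slope. It assumes Python with scikit-learn, a binary outcome y, a feature matrix X, a logistic regression model, and B = 200 bootstrap replicates; all of these are illustrative choices, not a prescribed recipe.

```python
# A minimal sketch of internal validation with the optimism-corrected bootstrap.
# Assumptions: binary outcome y, feature matrix X, logistic regression model,
# and B = 200 bootstrap replicates (all illustrative choices).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def c_statistic(model, X, y):
    # Discrimination: the C-statistic equals the area under the ROC curve.
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

def calibration_slope(model, X, y):
    # Calibration: slope of a logistic refit of y on the linear predictor.
    p = np.clip(model.predict_proba(X)[:, 1], 1e-6, 1 - 1e-6)
    lp = np.log(p / (1 - p)).reshape(-1, 1)
    return LogisticRegression().fit(lp, y).coef_[0, 0]

def bootstrap_validate(X, y, B=200):
    apparent = c_statistic(LogisticRegression(max_iter=1000).fit(X, y), X, y)
    optimism = []
    n = len(y)
    for _ in range(B):
        idx = rng.integers(0, n, n)  # resample patients with replacement
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        # Optimism: performance on the bootstrap sample minus performance
        # of the same model evaluated on the original sample.
        optimism.append(c_statistic(m, X[idx], y[idx]) - c_statistic(m, X, y))
    return apparent - np.mean(optimism)  # optimism-corrected C-statistic
```

The same resampling loop can report an optimism-corrected calibration slope; external validation then repeats these measures on data the model has never seen.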
Principled approaches should be used for handling missing data and for representing the uncertainty inherent in clinical data, including measurement error and misclassification. We see the Bayesian approach as a promising alternative to null hypothesis significance testing (with its reliance on statistical significance thresholds like the p-value), which has contributed to the current replication crisis in biomedicine. Furthermore, clinicians sometimes need answers to counterfactual questions at the point of care (for example, when estimating the causal effect of a clinical intervention). We believe that these questions are best answered within the framework of Causal Inference as opposed to prediction with Machine Learning; it is a well-known adage that correlation does not imply causation. Increasingly, observational studies using Causal Inference over real-world clinical data are being recognized as complementary to randomized controlled trials (RCTs), the gold standard for Evidence-Based Practice (EBP). These observational studies provide Practice-Based Evidence (PBE), which is necessary for closing the evidence loop.
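As a concrete illustration of the contrast with pure prediction, below is a minimal sketch of one standard Causal Inference estimator: the average treatment effect estimated from observational data with inverse probability weighting. The confounder matrix X, treatment indicator t, outcome y, and the logistic propensity model are illustrative assumptions.

```python
# A minimal sketch of the average treatment effect (ATE) estimated via
# inverse probability weighting (IPW); names and model are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, t, y):
    # Propensity score: estimated probability of treatment given confounders.
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)    # guard against extreme weights
    w_treated = t / ps              # reweight the treated group
    w_control = (1 - t) / (1 - ps)  # reweight the control group
    return (np.sum(w_treated * y) / np.sum(w_treated)
            - np.sum(w_control * y) / np.sum(w_control))
```

Such an estimate only adjusts for measured confounders; unmeasured confounding remains the key threat to validity, which is why observational analyses complement rather than replace RCTs.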
The ongoing COVID-19 pandemic has revealed the cost of lacking timely access to the trustworthy data needed to develop algorithms for evidence-based clinical decisions. We believe that infectious disease registries should be implemented at the healthcare provider, national, and global levels. Access to the data can be provided by exposing these registries and other clinical data sources through web application programming interfaces (APIs) or through privacy-preserving federated machine learning architectures. However, this will necessitate the global adoption of foundational standards for health data and semantic interoperability, security, and privacy. These standards already exist and include HL7 FHIR, SNOMED CT, and OpenID Connect.
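As a sketch of what exposing a registry through a standard web API could look like, the snippet below queries a hypothetical HL7 FHIR R4 server for one patient's SARS-CoV-2 PCR observations. The base URL is an assumption; LOINC code 94500-6 identifies SARS-CoV-2 RNA detection by nucleic acid amplification.

```python
# A minimal sketch of a FHIR R4 Observation search against a hypothetical
# server; the endpoint is an assumption, the search parameters are standard.
import requests

FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical endpoint

def fetch_covid_observations(patient_id, loinc_code="94500-6"):
    # A FHIR search on the Observation resource, filtered by patient and by
    # code; the server responds with a Bundle resource.
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": f"http://loinc.org|{loinc_code}"},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return [entry["resource"] for entry in resp.json().get("entry", [])]
```

In production, such a request would carry an OAuth 2.0 access token obtained through OpenID Connect, which is where the security and privacy standards mentioned above come in.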
The American engineer, statistician, and quality guru W. Edwards Deming is famously credited with the remark: "In God we trust; all others must bring data." We strongly believe in data-driven evidence and decision making in medicine. However, AI in Healthcare is not a mere rebranding of biostatistics. It offers an opportunity to research and better understand the cognitive processes that underlie diagnostic reasoning, clinical decision making, cognitive biases, and medical errors. In a paper titled Medical error — the third leading cause of death in the US, published in the BMJ in May 2016, Martin Makary and Michael Daniel (patient safety researchers at Johns Hopkins University School of Medicine) estimated that medical error causes more than 250,000 deaths a year. With the discovery of new biomarkers from imaging, genomics, proteomics, and microbiomics research, the number of data types that should be considered in clinical decision making will quickly surpass the information processing capacity of the human brain. In addition, there is an increasing awareness of the social, economic, and environmental determinants of human health. We believe that the safe use of AI will translate into a reduced incidence of iatrogenic errors, improved health outcomes, and better quality of life for patients. It is estimated that physicians' decisions drive about 80% of healthcare expenditures, hence the opportunity to reduce costs as well.
To understand the current limitations of Deep Learning in medicine, one should start with a general theory of clinical decision making. One such theory, Dual Process Theory, was popularized by Daniel Kahneman in his book Thinking, Fast and Slow, and Pat Croskerry has written extensively about its application to clinical cognition. According to Dual Process Theory, human reasoning consists of two different systems: System 1 is fast, emotional, intuitive, stereotypical, unconscious, and automatic; System 2 is slow, conscious, rational, deliberate, analytical, and logical. Deep Learning, which is based on pattern recognition, is a System 1 process, and System 1 is where cognitive biases and medical errors are more likely to occur. As in any complex system, System 1 and System 2 are not isolated processes but interact significantly to produce rational, safe, and ethical clinical decisions. This is why we are generally cautious about comparisons, in research papers and the press, of the performance of clinicians with that of AI algorithms. For example, in addition to perceptual processing, radiologists also recruit other cognitive abilities such as attention and semantic processing (including knowledge of human anatomy and disease) during medical image interpretation. We use decision and process modeling to determine how to safely embed AI algorithms in clinical decision making, treatment planning, and care pathways, with the goal of achieving seamless clinical workflows. The automated execution of clinical practice guidelines and standardized care pathways will enable greater accountability through the use of audit logs and process mining for diagnostic feedback and the avoidance of unwarranted clinical variations.
We believe that medicine can borrow proven practices from safety-critical domains like aviation. Examples include an emphasis on human factors, cognitive ergonomics, simulation, checklists, situational awareness, standard operating procedures, crew resource management, flight data recording and analysis, mandatory flight duty time limitations and rest requirements, debriefings, and safety reporting (a prerequisite for a learning health care system). Over the last ten years, there has been only one passenger fatality out of the several billion passengers carried on over 100 million U.S.-certified scheduled commercial airline flights. In addition, price transparency in healthcare should allow patients to purchase healthcare services at a competitive price, just as they are able to comparison-shop online for travel packages including flights, hotels, cars, tours, entertainment, and other activities.
We can improve the health and save the lives of millions of people worldwide with the medical knowledge that is already available in clinical practice guidelines (CPGs) and the biomedical literature. To harness that knowledge at the point of care, we explore novel approaches to medical Knowledge Representation and Reasoning (KRR) as well as Natural Language Understanding (NLU) and Clinical Question Answering (CQA).
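As a toy illustration of guideline knowledge captured in a KRR formalism, here is a minimal forward-chaining rule engine in Python. The facts and rules are invented for illustration only and are not clinical guidance.

```python
# A toy sketch of guideline knowledge as if-then rules with forward chaining.
# The facts and rules are invented examples, not clinical advice.
def forward_chain(facts, rules):
    # Repeatedly fire any rule whose premises all hold, until a fixed point.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    (frozenset({"fever", "cough"}), "suspected_respiratory_infection"),
    (frozenset({"suspected_respiratory_infection", "positive_pcr"}),
     "recommend_isolation_protocol"),
]
print(forward_chain({"fever", "cough", "positive_pcr"}, rules))
```

Real guideline formalisms add temporal constraints, uncertainty, and patient context, but the core idea of machine-executable medical knowledge is the same.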
Based on research on human clinical decision making from the fields of neuroscience and the cognitive sciences, our approach to AI in healthcare emphasizes patient safety, effective human-machine interaction, integrated care pathways, and the importance of the clinician-patient relationship. Our approach supports a shared decision making process that takes into account the values, goals, and wishes of the patient. One lesson we have learned from studying the introduction of AI in medicine during the last decade is that the responsible use of AI requires not only validation and verification but also prospective studies to evaluate the efficacy of AI on patient-centered outcomes, which include essential measures such as survival, time to recovery, severity of side effects, quality of life, functional status, remission (e.g., depression remission at six and twelve months), and health resource utilization. We follow the guidelines for clinical trial protocols for interventions involving artificial intelligence (SPIRIT-AI extension) and the reporting guidelines for clinical trial reports for interventions involving artificial intelligence (CONSORT-AI extension).
In a paper titled Physician Burnout, Interrupted published in the NEJM, Pamela Hartzband, M.D. and Jerome Groopman, M.D. discuss EHR-induced burnout and physician intrinsic motivation through the lens of self-determination theory and write: "They [doctors, nurses, and other health care professionals] tend to enter their field with a high level of altruism coupled with a strong interest in human biology, focused on caring for the ill. These traits and goals lead to considerable intrinsic motivation." Solutions like Automated Speech Recognition (ASR) using Machine Learning can facilitate more patient-clinician interaction, reduce physician burnout, and improve physician professional satisfaction (Human-Centered AI). As Golden Krishna said, "The best interface is no interface."
At the implementational level, we focus on system requirements such as high throughput, low latency, fault tolerance, security, privacy, and compliance. We meet these requirements through a set of software architectural patterns (e.g., task parallelism and the Actor model), adequate testing, and specialized hardware. We also advise on patterns and pitfalls for avoiding Machine Learning technical debt.
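As an illustration of the Actor pattern, here is a minimal Python sketch using asyncio: an actor owns a mailbox and processes one message at a time, so its internal state needs no locks. The scoring logic is a placeholder and all names are illustrative.

```python
# A minimal sketch of the Actor pattern with asyncio; the actor serializes
# access to its state by draining its mailbox one message at a time.
import asyncio

async def scoring_actor(mailbox: asyncio.Queue):
    while True:
        msg = await mailbox.get()
        if msg is None:  # sentinel message: shut the actor down
            break
        # ... a real actor would run model inference here ...
        msg["reply"].set_result({"id": msg["id"], "score": 0.5})

async def main():
    mailbox = asyncio.Queue()
    actor = asyncio.create_task(scoring_actor(mailbox))
    reply = asyncio.get_running_loop().create_future()
    await mailbox.put({"id": 1, "reply": reply})  # send a request message
    print(await reply)                            # {'id': 1, 'score': 0.5}
    await mailbox.put(None)                       # request shutdown
    await actor

asyncio.run(main())
```

Because actors communicate only through messages, they scale naturally to many cores or machines, which is one way to reconcile high throughput with fault isolation.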
We make a distinction between the verification & validation (V&V) of cyber-physical systems with embedded AI algorithms and the auditing of various activities during the system's lifecycle. In regulated industries like aviation, verification is typically part of a certification process using formal methods such as probabilistic temporal logics. There is a growing literature on the use of formal methods based on probabilistic verification to provide provable guarantees of the robustness, safety, and fairness of Machine Learning algorithms. Formal methods differ from traditional Machine Learning testing approaches such as cross-validation and the bootstrap. The formal approach allows AI systems to fit nicely into existing regulatory frameworks for verification (e.g., DO-333, the Formal Methods Supplement to DO-178C for avionics software) and auditing (e.g., FAA Stage of Involvement audits), as opposed to creating new AI regulations.
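As a toy illustration of probabilistic verification, the sketch below checks a PCTL-style bounded reachability property, P<=0.01 [ F<=10 unsafe ], on a small discrete-time Markov chain. The transition matrix is invented for illustration and does not model any real system.

```python
# A toy sketch of checking a bounded reachability bound on a Markov chain.
# The transition matrix is an invented example, not a real system model.
import numpy as np

# States: 0 = nominal, 1 = degraded, 2 = unsafe (absorbing).
P = np.array([
    [0.97, 0.02, 0.01],
    [0.10, 0.85, 0.05],
    [0.00, 0.00, 1.00],
])

def bounded_reachability(P, unsafe, k):
    # Because the unsafe state is absorbing, the probability of being unsafe
    # at step k equals the probability of reaching it within k steps.
    prob = np.zeros(len(P))
    prob[unsafe] = 1.0
    for _ in range(k):
        prob = P @ prob
    return prob

p = bounded_reachability(P, unsafe=2, k=10)
print(p[0], p[0] <= 0.01)  # does the nominal state satisfy the bound?
```

Dedicated probabilistic model checkers such as PRISM automate this kind of computation at scale and over richer temporal properties.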
Our expertise includes state-of-the-art AI methods like eXtreme Gradient Boosting, Gaussian Processes, Bayesian Optimization, Probabilistic Programming, Variational Inference, Deep Generative Models, Deep Reinforcement Learning, Causal Inference, Probabilistic Graphical Models (PGMs), Statistical Relational Learning, Computational Logic, Neural-Symbolic integration, and Evolutionary algorithms. We pay special attention to algorithmic transparency, interpretability, and accountability. We use techniques like human-centered design, simulation, and Visual Analytics to help end users understand risk, uncertainty, and evidence.
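As one small example from this toolbox, Gaussian Process regression yields calibrated predictive uncertainty alongside point predictions. Here is a minimal sketch with scikit-learn; the toy data and the kernel choice are illustrative assumptions.

```python
# A minimal sketch of Gaussian Process regression with predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 25).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=25)  # noisy toy observations

# An RBF kernel captures smooth structure; a white-noise term models
# observation error. Hyperparameters are fit by marginal likelihood.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
mean, std = gp.predict(np.array([[5.0]]), return_std=True)
print(f"prediction: {mean[0]:.2f} +/- {2 * std[0]:.2f}")  # mean with 2-sigma band
```

Communicating that uncertainty band to end users, through visual analytics and human-centered design, is as important as the prediction itself.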
The need for good project leadership, management, and governance applies to AI projects as well.
Parts of this section originally appeared on mindnetworks.ai and algorithmichealth.ai which are wholly owned by Mr. Vidjinnagni Amoussou, the Founder & Director of Atlantic AI Labs™.
The Pendjari National Park in Benin is the last refuge for the region's largest remaining populations of elephants and the critically endangered West African lion.
- African Parks
Get in touch with us for business inquiries or to submit your resume!
By email: vidjinnagni.amoussou@atlanticailabs.com; on Twitter: @atlanticailabs