Founded in March 2018 with headquarters in Cotonou, Benin, Atlantic Artificial Intelligence Laboratories (Atlantic AI Labs™) is fostering research, education, and implementation of AI and related technologies in Africa (see press release). We are interested in unleashing the potential of AI for sustainable development including: healthcare, precision agriculture, education, unmanned aerial vehicles, clean energy, environmental protection, and wildlife conservation.
Our guiding principles are: Innovation, Collaboration, and Excellence.
In addition to building a team of talented AI researchers and engineers, Atlantic AI Labs™ will also promote mathematics education and offer AI Research Fellowship and Residency Programs in collaboration with African universities. We aim to publish high-quality, reproducible research and will organize events to share and disseminate knowledge.

About us
What we do.
A passionate team of creative & innovative minds.
We are a multidisciplinary team passionate about researching and implementing Artificial Intelligence (AI).
Our goal is to use AI to improve people's lives and health.
Parts of this section previously appeared on mindnetworks.ai and algorithmichealth.ai which are wholly owned by Joel Amoussou, Founder & Director of Atlantic AI Labs™.
Our interdisciplinary approach to solving complex problems in AI allows us to draw on diverse bodies of knowledge, including biology, psychology, philosophy, cognitive science, neuroscience, mathematics, physics, medicine, statistics, computer science, and aviation.
Our methodology is based on the neuroscientist David Marr's three levels of analysis: computational, algorithmic, and implementational.
For example, flight cognition — the study of the cognitive and psychological processes that underlie pilot performance, decision making, and human errors — can inform the design of Cognitive Architectures for safe and autonomous agents. These autonomous agents will be very helpful to humans during future airline single-pilot operations, crewed spaceflight missions into deep space, and the exploration of Mars. A Cognitive Architecture implements a computational model of various mechanisms and processes involved in cognition such as: perception, memory, attention, learning, causality, reasoning, decision making, planning, action, motor control, language, emotions, drives (such as food, water, and reproduction), imagination, social interaction, adaptation, self-awareness, and metacognition. These Cognitive Architectures will enable the design of autonomous agents that can interact safely and effectively with humans (human-like AI).
Similarly, from biology and neuroscience, we can learn how hummingbirds develop fascinating learning and cognitive abilities (e.g., spatial memory, episodic memory, vision, motor control, and vocal learning) with tiny brains. This approach, called Nature-Inspired Computing (NIC), can inform the development of more efficient intelligent machines. Deep neural networks (DNNs) have recently achieved impressive levels of performance in tasks such as object recognition, speech recognition, language translation, and control in game play. DNNs have proven to be effective at perception and pattern recognition tasks with high-dimensional input spaces — a challenge for previous approaches to AI. However, they tend to overfit in low-data regimes (most organizations don't have Google-scale data and computing infrastructure), and more work is needed to fully incorporate cognitive mechanisms and processes like memory, attention, commonsense reasoning, and causality.
Returning to our aviation example, we know that good pilots "stay ahead of the airplane". Through rigorous learning, simulation training, and planning, the pilot has acquired an internal model for reasoning about the flight. This internal model includes the aerodynamic, propulsion, and weather models. It allows the pilot to "stay ahead of the airplane" by maintaining situational awareness and by asking herself questions like: "What can happen next?" (prediction), "What if an unplanned situation arises?" (counterfactual causal reasoning), and "What will I do?" (procedural knowledge). Situational awareness is especially important during spatial disorientation in flight when the pilot's perception of the aircraft's attitude and spatial position turns into misperception and the pilot's awareness of her illusory sensations allows her to rely on flight instruments to ensure flight safety. The pilot uses her metacognitive abilities to monitor the accuracy and uncertainty of her perception of the environment and to assess and regulate her own reasoning and decision-making processes.
The human mind is also a very efficient learner. The US Federal Aviation Administration (FAA) requires airline first officers (second in command) to hold an Airline Transport Pilot (ATP) certificate, which requires knowledge and practical tests and 1,500 hours of total flying experience. Up to 100 hours of the required flying experience can be accumulated in a full flight simulator. In contrast, Google's AlphaGo — designed using an approach to AI known as Deep Reinforcement Learning (DRL) — played more than 100 million game simulations. The latest incarnation of AlphaGo, called AlphaZero, used 5,000 tensor processing units (TPUs) and required significantly fewer game simulations to achieve superhuman performance at the games of chess, shogi, and Go. A previous incarnation, called AlphaGo Zero, used graphics processing units (GPUs) to train the deep neural networks through self-play with no human knowledge except the rules of the game.
How applicable is AlphaGo's approach to real-world decision problems? In Go, the states of the game are fully observable, which enables learning through self-play with Monte Carlo Tree Search (MCTS). On the other hand, partial observability is typical of real-world environments. Also, it is hard to imagine how an AI system could learn tabula rasa — with no human knowledge — through self-play in the domain of aerospace. Such an AI system would have to rediscover the 300-year-old laws of Newton and Euler and the Navier-Stokes equations — the foundations of modern aerodynamics. Isaac Newton himself once famously remarked: "If I have seen further than others, it is by standing upon the shoulders of giants." Furthermore, the lack of explainability of the policies learned by DRL agents remains an impediment to their use in safety-critical applications like aviation. Nonetheless, DRL (preferably the model-based variant) can be helpful in teaching complex tasks like autonomous aircraft piloting to a robot, although we believe that DRL alone does not account for all the cognitive phenomena that underlie the performance of human pilots (more on that later). In 2016 and 2017, there were zero deaths on US-certified scheduled commercial airline flights. Beyond automation with current autopilot systems, the increasing demand for air travel worldwide will create a need for machine autonomy. The Canadian Council for Aviation and Aerospace (CCAA) predicts a shortage of 6,000 pilots in Canada by 2036.
We subscribe to the "No Free Lunch Theorem" and have experience in various state-of-the-art approaches to AI, including symbolic, connectionist, Bayesian, frequentist, and evolutionary methods. In building cognitive systems, we seek synergies between these approaches. For example, Bayesian Deep Learning can help represent uncertainty in deep neural networks in a principled manner — a requirement for domains such as healthcare. Bayesian Decision Theory is also a principled methodology for solving decision-making problems under uncertainty. We see Deep Generative Models combining Deep Learning and probabilistic reasoning as a promising avenue for unsupervised and human-like learning, including concept learning, one-shot or few-shot generalization, and commonsense reasoning. Reminiscent of human metacognition, Meta-Learning (or learning to learn) for Reinforcement Learning and Imitation Learning generated a lot of interest at the NIPS 2017 conference and holds the promise of learning algorithms that can generate algorithms tailored to specific domains and tasks.
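As a small, self-contained illustration of Bayesian Decision Theory (a sketch for this page, not code from any Atlantic AI Labs system), the snippet below maintains a Beta-Bernoulli posterior over an intervention's success probability and picks the action with the higher expected utility. The utility values and observation counts are made-up assumptions for illustration.

```python
# Beta-Bernoulli posterior: prior Beta(a, b), observe s successes and f failures.
def posterior_mean(s, f, a=1.0, b=1.0):
    return (s + a) / (s + f + a + b)

# Hypothetical utilities (assumptions): treating a responder helps (+10),
# treating a non-responder carries a small cost (-2); not treating scores 0.
def expected_utility_of_treating(p_success):
    return 10.0 * p_success + (-2.0) * (1.0 - p_success)

# Suppose we observed 7 successes and 3 failures for this intervention.
p = posterior_mean(7, 3)  # posterior mean = 8/12
decision = "treat" if expected_utility_of_treating(p) > 0.0 else "do not treat"
print(p, decision)
```

The point of the sketch is the separation of concerns: the posterior captures what we believe, the utilities capture what we value, and the decision falls out of combining the two.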
Lately, there has been a resurgence of evolutionary algorithms proposed as an alternative to established Reinforcement Learning algorithms (like Q-learning and Policy Gradients) or as an efficient mechanism for training deep neural networks (neuroevolution). Evolutionary algorithms are also amenable to embarrassingly parallel computations on commodity hardware.
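To make the idea concrete, here is a minimal (1+1) evolution strategy on a toy one-dimensional objective. The objective, mutation scale, and generation count are arbitrary choices for illustration; real neuroevolution operates on network weights and runs many such searches in parallel.

```python
import random

random.seed(0)

# Toy objective: maximize f(x) = -(x - 3)^2, whose optimum is x = 3.
def fitness(x):
    return -(x - 3.0) ** 2

# A minimal (1+1) evolution strategy: mutate the current solution with
# Gaussian noise and keep the child only if it is at least as fit.
def evolve(x=0.0, sigma=0.5, generations=200):
    for _ in range(generations):
        child = x + random.gauss(0.0, sigma)
        if fitness(child) >= fitness(x):
            x = child
    return x

best = evolve()
print(best)  # typically converges near 3
```

Because each candidate's fitness can be evaluated independently, populations of such candidates parallelize trivially across machines, which is the "embarrassingly parallel" property mentioned above.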
In addition, rumors of the demise of logic in AI in favor of statistical learning methods (in the era of Machine Learning and Deep Learning hype) have been greatly exaggerated. Since the seminal Dartmouth AI workshop of 1956, decades of research in logic-based methods (e.g., classical, nonmonotonic, probabilistic, description, modal, and temporal logics) have produced useful commonsense reasoning capabilities that are lacking in today's Deep Learning and Reinforcement Learning systems which are essentially based on pattern recognition. This lack of reasoning abilities in AI systems can potentially lead to sample inefficiency or difficulties in providing formal guarantees of system behavior — a concern that is exacerbated by known vulnerabilities such as adversarial attacks against Deep Neural Networks and reward hacking in Reinforcement Learning. Real world safety-critical systems like aircraft are indeed required by regulation to go through a formal verification process for certification. Consider the following rule in the US federal aviation regulations: "When aircraft are approaching each other head-on, or nearly so, each pilot of each aircraft shall alter course to the right." (14 CFR part 91.113(e)). This rule can be easily specified in a logic-based formalism such as probabilistic temporal logic to account for sensor and perception uncertainty. We can then formally verify that an autonomous robotic pilot complies with the rule. A Deep Reinforcement Learning approach would require trial-and-error using a very large number of flight simulations. Autonomous agents like robotic pilots must comply with the laws, regulations, and ethical norms of the country in which they operate — a concept related to algorithmic accountability.
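The right-of-way rule above can be sketched as an executable runtime monitor. This is a deliberately toy encoding (plain Python rather than a real probabilistic temporal logic tool, with an assumed probability threshold), meant only to show how a declarative rule yields a check we can verify an agent against.

```python
# Toy sketch of 14 CFR 91.113(e): when aircraft approach head-on,
# each pilot shall alter course to the right. Perception is uncertain,
# so we act on the estimated probability of a head-on geometry.

def required_action(p_head_on, threshold=0.2):
    """Maneuver the rule mandates given perceptual uncertainty.

    p_head_on: estimated probability that the geometry is head-on
    threshold: conservative trigger level (an assumption of this sketch)
    """
    return "alter_course_right" if p_head_on >= threshold else "maintain_course"

def complies(p_head_on, action_taken):
    # Runtime-monitor-style check: did the agent's action match the rule?
    return action_taken == required_action(p_head_on)

print(required_action(0.85))              # alter_course_right
print(complies(0.85, "maintain_course"))  # False: a violation the monitor flags
```

Contrast this with a learned policy: here the rule is explicit, auditable, and checkable against every action the agent takes, with no flight simulations required.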
Another issue is that modern machine learning algorithms like Deep Neural Networks (DNNs) and Random Forests are data-hungry. Organizations with low data volumes can jumpstart their adoption of AI by modeling and automating their business processes and operational decisions with logic-based methods. For example, prior to 2009, less than 10% of US hospitals had an Electronic Medical Record (EMR) system. Logic-based Clinical Decision Support (CDS) systems for medical Knowledge Representation and Reasoning (KRR) have been successfully deployed for the automatic execution of Clinical Practice Guidelines (CPGs) at the point of care. Description Logic (DL) is also the foundation of the Systematized Nomenclature of Medicine (SNOMED) — an ontology which contains more than 300,000 carefully curated medical concepts organized into a class hierarchy and enabling automated reasoning capabilities by exploiting subsumption and attribute relationships between medical concepts.
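Subsumption reasoning of the kind SNOMED enables can be illustrated with a few lines of code. The concepts and is-a links below are simplified inventions for this sketch, not actual SNOMED content, and a real reasoner handles far richer relationships than a single-parent chain.

```python
# Toy SNOMED-like hierarchy: each concept maps to its single parent concept.
IS_A = {
    "bacterial pneumonia": "pneumonia",
    "pneumonia": "lung disease",
    "asthma": "lung disease",
    "lung disease": "disease",
}

def subsumes(general, specific):
    """True if `general` subsumes `specific` via the transitive is-a chain."""
    while specific is not None:
        if specific == general:
            return True
        specific = IS_A.get(specific)
    return False

print(subsumes("lung disease", "bacterial pneumonia"))  # True
print(subsumes("pneumonia", "asthma"))                  # False
```

A query like "find all patients with a lung disease" can then match records coded with any subsumed concept, which is exactly the kind of automated reasoning a flat code list cannot provide.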
Previous efforts like Statistical Relational Learning (SRL) and Neural-Symbolic integration (NeSy) effectively combined logic and probability to achieve sophisticated cognitive reasoning. Progress in AI should build on these previous efforts and we believe that the symbol grounding problem (SGP) can be addressed effectively in hybrid AI architectures comprising both symbolic and sub-symbolic representations through the autonomous agent's embodiment and sensorimotor interaction with the environment.
We believe that a unified theory of machine intelligence is needed, and we are approaching that challenge from a complex systems theory perspective. Networks (of arbitrary computational nodes, not just neural networks), emergence, and adaptation are key relevant concepts in the theory of complex systems. In particular, intelligent behavior can emerge from the interaction between diverse learning algorithms. In their drive to survive, predators and prey co-evolve through their interaction in nature.
An important reason why humans are efficient learners is that we are able to learn concepts and can represent and reason over those concepts efficiently. Pilots take a knowledge test to prove understanding of key concepts and the logical and causal connections between these concepts. Exam topics include: aerodynamics, propulsion, flight control, aircraft systems, the weather, aviation regulations, and human factors (aviation physiology and psychology). This knowledge and a lifetime of accumulated experience came in handy when on January 15, 2009, Captain Chesley "Sully" Sullenberger made the quick decision to ditch the Airbus A320 of US Airways flight 1549 on the Hudson River after the airplane experienced a complete loss of thrust in both engines due to the ingestion of large birds. All 150 passengers and 5 crewmembers survived. Simulation flights were conducted as part of the US National Transportation Safety Board (NTSB) investigation. In the final accident report, the NTSB concluded that "the captain's decision to ditch on the Hudson River rather than attempting to land at an airport provided the highest probability that the accident would be survivable." This event provides a good case study for AI research about decision making under not only uncertainty but also time pressure and high workload and stress levels.
In addition, humans are able to compose new concepts from existing ones — a thought process that Albert Einstein referred to as "combinatorial play". Learning abstract concepts also allows humans to generalize across domains and tasks — a requirement for continuous (life-long) learning in AI systems. For example, concepts learned in the aviation domain — simulation, checklists, Standard Operating Procedures (SOP), Crew Resource Management (CRM), and debriefings — have been successfully applied to medicine. This ability to learn, compose, reason over, and generalize abstract concepts is related to language as well. Current Deep Learning architectures fail to represent these abstract concepts which are the basis of human thought, imagination, and ingenuity. Therefore, we explore novel approaches to concept representation, commonsense reasoning, and language understanding. Effective and safe machine autonomy will also require the implementation of important cognitive mechanisms such as motivation, attention, metacognition, and understanding the causal structure of the world (causality).
Our expertise includes state-of-the-art AI methods like eXtreme Gradient Boosting, Gaussian Processes, Bayesian Optimization, Probabilistic Programming, Variational Inference, Deep Generative Models, Deep Reinforcement Learning, Causal Inference, Probabilistic Graphical Models (PGMs), Statistical Relational Learning, Computational Logic, Neural-Symbolic integration, and Evolutionary algorithms. We pay special attention to algorithmic transparency, interpretability, and accountability. We use techniques like human-centered design, simulation, and Visual Analytics to help end users understand the data, the model, and the associated uncertainty.
The introduction of AI algorithms into the real world requires validation, especially for applications that directly impact people's lives and health. For example, predictive models used for diagnosis and prognosis in clinical care should undergo rigorous validation. Internal validation using methods like cross-validation or preferably the bootstrap should provide clear performance measures such as discrimination (e.g., C-statistic or D-statistic) and calibration (e.g., calibration-in-the-large and calibration slope). In addition to internal validation, external validation should be performed as well to determine the generalizability of the model to other patient populations. External validation can be performed with data collected at a different time (temporal validation) or at different locations, countries, or clinical settings (geographic validation). The clinical usefulness of the prediction model (net benefit) can be evaluated using decision curve analysis.
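Two of the validation measures above can be computed in a few lines. The predictions and outcomes below are fabricated for illustration; in practice these measures would be computed within a bootstrap or external-validation protocol.

```python
# Toy illustration of discrimination (C-statistic, via pairwise concordance)
# and calibration-in-the-large (observed event rate vs. mean predicted risk).
preds    = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
outcomes = [1,   1,   0,   1,   0,   0,   1,   0]

def c_statistic(p, y):
    # Fraction of (event, non-event) pairs where the event got the higher risk.
    pairs = [(pi, pj) for pi, yi in zip(p, y) if yi == 1
                      for pj, yj in zip(p, y) if yj == 0]
    concordant = sum(1.0 if pi > pj else 0.5 if pi == pj else 0.0
                     for pi, pj in pairs)
    return concordant / len(pairs)

def calibration_in_the_large(p, y):
    # Near zero means predictions are well calibrated on average.
    return sum(y) / len(y) - sum(p) / len(p)

print(c_statistic(preds, outcomes))            # 0.75
print(calibration_in_the_large(preds, outcomes))  # 0.0
```

Note that the two measures are complementary: a model can rank patients well (high C-statistic) while systematically over- or under-estimating absolute risk, which only calibration reveals.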
Principled approaches should be used for handling missing data and for representing the uncertainty inherent in clinical data. We see the Bayesian approach as a promising alternative to null hypothesis significance testing (using statistical significance thresholds like the p-value) which has contributed to the current replication crisis in biomedicine. Furthermore, clinicians sometimes need answers to counterfactual questions at the point of care (for example, when estimating the causal effect of a clinical intervention). We believe that these questions are best answered within the framework of Causal Inference as opposed to prediction with Machine Learning. It is a well-known adage that correlation does not imply causation. Increasingly, observational studies using Causal Inference over real world clinical data are being recognized as complementary to randomized control trials (RCTs) — the gold standard for Evidence-Based Practice (EBP). These observational studies provide Practice-Based Evidence (PBE) which is necessary for closing the evidence loop.
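The difference between raw correlation and an adjusted causal estimate can be sketched with confounder stratification (a simple backdoor adjustment). The records below are fabricated and the single confounder is an assumption of the sketch; real observational analyses require far more care (unmeasured confounding, positivity, model checking).

```python
# Each record: (confounder stratum, treated flag, binary outcome).
records = [
    ("severe", 1, 1), ("severe", 1, 0), ("severe", 0, 0), ("severe", 0, 0),
    ("mild",   1, 1), ("mild",   1, 1), ("mild",   0, 1), ("mild",   0, 0),
]

def adjusted_effect(data):
    """Treated-vs-untreated outcome difference within each confounder
    stratum, averaged over the strata's sizes (standardization)."""
    strata = {}
    for z, t, y in data:
        strata.setdefault(z, {"t": [], "c": []})["t" if t else "c"].append(y)
    total, effect = 0, 0.0
    for g in strata.values():
        n = len(g["t"]) + len(g["c"])
        diff = sum(g["t"]) / len(g["t"]) - sum(g["c"]) / len(g["c"])
        effect += n * diff
        total += n
    return effect / total

print(adjusted_effect(records))  # 0.5
```

The same arithmetic applied without stratification can give a very different answer when treatment assignment depends on severity, which is precisely why correlation alone can mislead at the point of care.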
The American engineer, statistician, and quality guru W. Edwards Deming once famously remarked: "In God we trust; all others must bring data." We strongly believe in data-driven evidence and decision making in medicine. However, AI in Healthcare is not a mere rebranding of biostatistics. It offers an opportunity to research and better understand the cognitive processes that underlie diagnostic reasoning, clinical decision making, cognitive biases, and medical errors. According to a study published in the BMJ in 2016 by patient safety researchers from Johns Hopkins University School of Medicine, medical error is the third leading cause of death in the US, with an incidence of more than 250,000 deaths a year. With the discovery of new biomarkers from imaging, genomics, proteomics, and microbiomics research, the number of data types that should be considered in clinical decision making will quickly surpass the information processing capacity of the human brain. In addition, there is an increasing awareness of the social, economic, and environmental determinants of human health. We believe that the safe use of AI will translate into a reduced incidence of iatrogenic errors, improved health outcomes, and better quality of life for patients. It is estimated that physicians' decisions contribute to 80% of healthcare expenditures, hence the opportunity to reduce costs as well.
At the implementational level, we focus on system requirements such as high-throughput, low-latency, fault tolerance, security, privacy, and compliance. We meet these requirements through a set of software architectural patterns (e.g., task parallelism and the Actor model), adequate testing, and specialized hardware. We also advise on patterns and pitfalls for avoiding Machine Learning technical debt.
Obviously, the need for good project leadership, management, and governance applies to AI projects as well.
Experience and wisdom.
Founder & Director
Chief Operating Officer
Our AI Implementation Methodology