Understandable AI

With the proliferation of inscrutable black-box AI tools, there has never been a more critical time to support the ethical use of technology and data. That’s why we started Diveplane LLC, the Understandable AI company, and that’s why we set up this Substack.

Diveplane offers AI-powered business solutions across multiple industries. With dozens of patents approved and many more pending, our groundbreaking next-generation AI gives users full understanding and decision transparency in support of their ethical AI policies and data privacy strategies. We designed our technology to put machines and people in harmony to produce verifiable data intelligence in support of leading-edge competitive business strategies.

Our platform makes Understandable AI a reality. Unlike neural networks, Diveplane Reactor generates the model from the data itself, making for smooth, real-time deployment and lightning-fast optimization, even at high data loads.
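Generating "the model from the data itself" resembles instance-based learning (for example, k-nearest neighbors), where the stored cases *are* the model and no separate network has to be trained. The sketch below is purely illustrative of that general idea; the function, data, and parameters are hypothetical assumptions, not Diveplane's actual API or method.

```python
# Illustrative sketch of instance-based ("the model is the data") inference.
# All names, data, and parameters here are hypothetical.

def predict(cases, query, k=3):
    """Predict a label for `query` by majority vote among the k stored
    cases nearest to it -- the training data itself serves as the model."""
    # Rank stored cases by squared Euclidean distance to the query.
    ranked = sorted(
        cases,
        key=lambda case: sum((a - b) ** 2 for a, b in zip(case[0], query)),
    )
    # Vote among the k closest cases.
    votes = {}
    for features, label in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# "Training" is just storing (features, label) cases -- no fitting step.
cases = [
    ((1.0, 1.0), "low"),
    ((1.2, 0.9), "low"),
    ((8.0, 8.5), "high"),
    ((7.5, 9.0), "high"),
]

print(predict(cases, (1.1, 1.0)))  # -> low
print(predict(cases, (8.2, 8.7)))  # -> high
```

One reason this family of approaches supports decision transparency: every prediction can be traced back to the specific stored cases that produced it, rather than to opaque learned weights.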

Dr. Michael Capps

Co-founder and Chief Executive Officer

Dr. Capps spent most of his career building Epic Games from 20 employees into a global gaming powerhouse, leading the creation of the Unreal Engine, the Gears of War franchise, and Fortnite. He retired as president of Epic Games in 2012 to focus on family. In 2017, Capps came out of retirement to take on one of the largest problems we face today: black-box AI making decisions that affect people’s lives and welfare. The founders of Diveplane agreed that the company’s top goal was the replacement of black-box AI with transparent, ethical methods, even above profits, and Diveplane’s investors share the goal of social impact alongside entrepreneurial return.

Capps is an industry thought leader in the development of responsible data and AI practices. In 2020, he co-founded the Data & Trust Alliance, a CEO-level non-profit focused on creating and sharing best practices for safe data and AI. Alliance members include the CEOs of Walmart, IBM, American Express, Johnson & Johnson, Pfizer, and two dozen similar companies. He personally led the Algorithmic Bias Safeguards for Workforce initiative, whose work has been adopted by global enterprises as part of vendor selection.

In addition to running Diveplane, Capps has led the Competitiveness Council for Business Executives for National Security, and regularly advises US lawmakers on issues of national resilience, modernization, and humanity’s existential need for responsible AI.

Dr. Chris Hazard

Co-founder and Chief Technology Officer

Dr. Hazard is a leading expert in interpretable AI and machine learning technologies. As CTO of Diveplane, he oversees all technical R&D and is responsible for both the long-term vision of our platform and the near-term delivery and operation of all technical assets. He holds 63 patent assets and has authored dozens of publications on privacy risks, trust, and reputation in AI and machine learning systems. His most prominent publications include “The game of scale: Decision making with economies of scale,” for which he received a Best Paper Award from the International Conference on Electronic Commerce; “Privacy Risks in Intelligent User Interfaces,” which he presented in IEEE Internet Computing on Natural Web Interfaces; and “Natively Interpretable Machine Learning and Artificial Intelligence: Preliminary Results and Future Directions,” published by Cornell University.

Dr. Hazard is a renowned speaker, having delivered hundreds of talks, including the keynote address on AI at the NATO International What-If? Workshop and the International Association of Privacy Professionals Global Privacy Summit. He is an active member of the CompTIA AI Advisory Council and has served as an advisor to NATO on cybersecurity policy.

Prior to co-founding Diveplane, Chris founded Hazardous Software, which focuses on decision support, visualization, and simulation for hard strategy problems in large organizations, the DoD, and government. He holds a PhD in computer science from N.C. State, with a focus on AI for trust and reputation. Earlier in his career, he served as a software architect for CDMA infrastructure at Motorola and worked on robot coordination and logistics at Kiva Systems (now Amazon Robotics).
