At the London Blockchain Conference 2024, Dr. Scott Zoldi, Chief Analytics Officer at FICO, talked to CoinGeek Backstage reporter Jon Southurst about artificial intelligence (AI), blockchain and how the two can work together to create better accountability. Check out the conversation via the video link below.
Defining responsible AI
Kicking things off, Southurst asks Zoldi to define responsible AI. Zoldi says these are models that can be trusted to make high-risk, critical decisions. To trust them, we must be able to audit and explain how they make decisions, and we must be able to share that with regulators and auditors.
Fundamentally, Zoldi and FICO believe in a concept called “humans in the loop.” It means data scientists and AI engineers can inspect data points and the relationships between them, spotting problems such as bias so they can overrule or remove them.
Along with humans in the loop, blockchain technology will play a key role in creating the trust we need in AI models. Immutable records of decisions made by data scientists, auditors, and others create a strong incentive to do things right.
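To make that idea concrete, here is a minimal sketch, in Python with invented names, of an append-only, hash-chained log of model-development decisions. It is not FICO’s implementation or Hyperledger Fabric code; it only illustrates why an immutable record makes tampering with past decisions detectable.

```python
import hashlib
import json
import time

# Hypothetical illustration: an append-only, hash-chained log of model
# governance decisions. Each entry commits to the previous one, so any
# later change to a recorded decision breaks the chain.
class DecisionLedger:
    def __init__(self):
        self.entries = []

    def record(self, author, decision, details):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {
            "author": author,          # e.g. a data scientist or auditor
            "decision": decision,      # e.g. "feature approved"
            "details": details,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(payload)
        return payload["hash"]

    def verify(self):
        # Recompute every hash and check the chain is unbroken.
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


ledger = DecisionLedger()
ledger.record("data_scientist_a", "feature approved", "income-to-debt ratio")
ledger.record("auditor_b", "bias check passed", "no disparate impact found")
print(ledger.verify())  # True; editing any past entry makes this False
```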
Dealing with bias
Expanding on the topic of data bias, Zoldi recognizes that both data and society at large are biased. AI models can surface all sorts of deeper relationships between data points, such as correlations between types of housing and certain races.
How do we deal with the biases of data scientists and AI engineers themselves? Zoldi says model development standards are set by a roundtable and enforced by blockchain technology.
All FICO models follow this responsible AI approach. Furthermore, because FICO sees it as so important, the framework is available to the industry as a whole, Zoldi told CoinGeek Backstage.
Auditing models and the necessity of humans
FICO uses Hyperledger Fabric, a private, permissioned blockchain. Southurst asks how someone outside the organization can audit it, given that access requires permission.
Zoldi explains that people are authenticated and added as needed. For example, regulators can be added to check that the rules around responsible AI models are being followed. This is the happy medium that allows for appropriate checks and balances while protecting FICO’s intellectual property (IP).
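As a rough illustration of that permissioning model, the hypothetical sketch below (not Hyperledger Fabric’s actual API) shows read-only roles being granted to authenticated regulators and auditors while write access stays internal.

```python
from dataclasses import dataclass

# Hypothetical sketch: outside parties such as regulators are authenticated
# and given read-only roles; only internal members append decision records,
# which keeps intellectual property inside the organization.
@dataclass
class Participant:
    identity: str
    role: str  # "internal", "regulator", or "auditor"

def can_read(p: Participant) -> bool:
    # Regulators and auditors are added as needed with read access.
    return p.role in {"internal", "regulator", "auditor"}

def can_write(p: Participant) -> bool:
    # Only internal members may append new records.
    return p.role == "internal"

regulator = Participant("regulator_x", "regulator")
print(can_read(regulator), can_write(regulator))  # True False
```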
Will we ever get to the point where AI builds its own models with no humans involved? “Not on my watch,” Zoldi replies unequivocally. The “humans in the loop” concept is central at FICO. Since trust in AI is low, we need to keep humans involved to mitigate bad decisions and increase that trust.
How can we build a world in which humans trust AI? Zoldi emphasizes safeguards, regulations and companies seeing AI as a tool and building models responsibly. AI and blockchain paired together is the killer app, he says, and FICO already has it.
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing that data is immutable. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch AI Forge masterclass: Why AI & blockchain are powerhouses of technology