On June 5, 2025, Altares Dun & Bradstreet will launch a brand-new Belgian scorecard powered by AI. Promising? Absolutely. But it’s fair to ask: can you really trust AI when it comes to something as crucial as a credit decision? In this blog, we take you behind the scenes of our scoring model. We’ll show you how we make AI not only powerful and accurate, but also fair, transparent, ethical, and explainable – so you not only know what the score is, but also why.

AI and credit risks: Trust through explainability
In recent years, AI has evolved into a true game changer for the business world. Still, things get more sensitive when AI is used to assess credit risks: how can you trust a decision made by a ‘black box’ you don’t understand? Without insight into why an AI model reaches a certain conclusion, there remains a risk of incorrect or inexplicable decisions.
Many AI systems use neural networks—algorithms inspired by the human brain. They consist of layers of digital ‘neurons’ that work together to recognize patterns and make predictions. Powerful, no doubt, but also complex. It’s often difficult to trace exactly how these networks arrive at a decision.
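To make that concrete, here is a minimal sketch of such a network in Python. Everything in it, the layer sizes, the random weights, and the input values, is invented for illustration; it is not the scorecard model, only a picture of how layered ‘neurons’ turn inputs into a prediction.

```python
import numpy as np

# A tiny, illustrative feed-forward network with one hidden layer.
# Layer sizes, weights, and inputs are invented for this sketch and
# have nothing to do with the actual scorecard model.

rng = np.random.default_rng(seed=42)

# Three input features, five hidden 'neurons', one output score.
W1 = rng.normal(size=(3, 5))   # input -> hidden weights
b1 = np.zeros(5)
W2 = rng.normal(size=(5, 1))   # hidden -> output weights
b2 = np.zeros(1)

def predict(x: np.ndarray) -> float:
    """Pass an input through both layers to produce a single score."""
    hidden = np.tanh(x @ W1 + b1)                    # each hidden neuron mixes all inputs
    output = 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # squash to a 0-1 score
    return float(output[0])

print(predict(np.array([0.2, -1.3, 0.7])))  # one fictitious input profile
```

Even in this three-feature toy, every input feeds every hidden neuron and every hidden neuron feeds the output, so the influence of a single feature is already smeared across the whole network.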
We understand that working with such advanced models is a significant step, and blindly trusting AI is not an option. That’s why our new scorecard combines the power of neural networks with an absolute focus on explainability, so that every decision is not only well-founded, but also fully understandable and accountable.
Interesting read: xAI for tomorrow's credit scoring
From ‘black box’ to open book
Our new Belgian Scorecard uses seven neural network segments, with no fewer than three million times more model parameters than before. With this, we’re setting a new industry standard and offering insights that were previously unthinkable.
But it’s not just about scale; it’s about intelligence. The model adapts to financial shifts and delivers accurate, data-driven scores in real time. Even within the boundaries of GDPR and in times of economic uncertainty, it remains robust and performs beyond expectations.
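As a purely hypothetical illustration of what a segmented setup can look like, here is a sketch in Python: one model per segment plus a routing rule. The segment names, the routing condition, and the placeholder models are all our own invention, not a description of the actual seven segments.

```python
from typing import Callable

# Hypothetical sketch of a segmented scorecard: one model per segment,
# plus a rule that routes each company to its segment. Segment names,
# the routing condition, and the placeholder models are all invented.

def score_small(company: dict) -> float:
    return 0.82   # placeholder model for the 'small' segment

def score_large(company: dict) -> float:
    return 0.41   # placeholder model for the 'large' segment

SEGMENTS: dict[str, Callable[[dict], float]] = {
    "small": score_small,
    "large": score_large,
    # a real segmented scorecard would hold one trained model per segment
}

def route(company: dict) -> str:
    """Pick a segment from company characteristics (invented rule)."""
    return "large" if company.get("employees", 0) > 50 else "small"

def score(company: dict) -> float:
    return SEGMENTS[route(company)](company)

print(score({"employees": 120}))  # routed to the 'large' segment model
```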
Interesting read: Coming soon: New bankruptcy score for businesses in Belgium
With models this complex, it’s crucial that their inner workings remain fully transparent. That’s why we’ve invested not only in performance, but also in explainability. We explored several existing methods, such as Shapley values—a well-known approach in the explainable AI (xAI) domain—but quickly realized these techniques weren’t sufficient for our application. The potentially significant impact of a negative credit score demanded more than standard tools.
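For context on what such a standard explanation computes: a feature’s Shapley value is its average marginal contribution to the score over all subsets of the other features. The toy model, feature names, and baseline below are invented for illustration, and the code uses the exact definition; practical tools such as the shap library approximate it for larger models.

```python
import itertools
import math

# Minimal sketch of exact Shapley values for a toy model with three
# features. Model, feature names, and baseline are invented; they are
# not the scorecard's real inputs.

def model(x: dict[str, float]) -> float:
    """Toy scoring model: a weighted sum of three fictitious features."""
    return 2.0 * x["liquidity"] - 1.5 * x["debt_ratio"] + 0.5 * x["age"]

BASELINE = {"liquidity": 0.0, "debt_ratio": 0.0, "age": 0.0}

def value(subset: frozenset, x: dict[str, float]) -> float:
    """Model output when only the features in `subset` take their real
    values; the rest are held at the baseline."""
    merged = {f: (x[f] if f in subset else BASELINE[f]) for f in x}
    return model(merged)

def shapley(feature: str, x: dict[str, float]) -> float:
    """Average marginal contribution of `feature` over all subsets."""
    others = [f for f in x if f != feature]
    n = len(x)
    total = 0.0
    for r in range(len(others) + 1):
        for combo in itertools.combinations(others, r):
            s = frozenset(combo)
            weight = (math.factorial(len(s))
                      * math.factorial(n - len(s) - 1) / math.factorial(n))
            total += weight * (value(s | {feature}, x) - value(s, x))
    return total

x = {"liquidity": 0.6, "debt_ratio": 0.9, "age": 12.0}
for f in x:
    print(f, round(shapley(f, x), 3))
```

Because the number of feature subsets doubles with every extra feature, real-world implementations rely on approximations rather than this exact computation.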
Because standard tools fell short, we developed our own methods to make the workings of the network more transparent and easier to explain. What matters is not just the model’s outcome, but above all the ‘why’ behind it. By opening up the ‘black box’, we gain insight into the logic behind AI decisions. Transparency and explainability are not only key to maintaining control over AI; they also form the foundation for responsible use, for trust among clients and regulators, and for alignment with the new European regulations.
The future is promising, the future is AI
Last year, the European Council gave the green light to the EU AI Act, a milestone in the pursuit of safe, reliable, and transparent AI within Europe. As one of the few experts in the field, we welcomed this legislation, and drawing on our expertise in data and AI, we actively contributed to the debate. We believe AI plays a key role in addressing major societal challenges such as aging populations and shifts in the labor market.
We believe in responsible, explainable AI. Our new scoring system proves that it’s possible: transparent, explainable, and based on reliable data. This helps organizations not only predict risks, but also make responsible decisions. In doing so, we’re building a future together in which AI is not only powerful, but also understandable and manageable.