Modern supply chains are highly integrated, with significant information flowing among participants and their systems. Yet when it comes to financing and risk decisions, most of this information is disregarded, or at best used intuitively, because it is too complex for an analyst to process cognitively. This creates a major gap between enterprises (which know their ecosystem) and external financiers, who rely on out-of-date financials or “back-of-the-envelope” estimates to decide on risk, pricing, and other core parameters. The result is inefficient markets and lost business for both suppliers and financial institutions.
The decision-making and risk assessment models currently used for customer and supply chain financing tend to be analyst-centric, based primarily on rules and textbook methods, and built with standard and often outdated data modeling tools (spreadsheets, BA and BI tools). Because they rely on subject matter experts’ opinions rather than on objective, data-driven models, they can easily lose touch with enterprise business data at large. Analyst-centric models are human-driven and thus incapable of dealing with large data volumes and high velocities. They use only easily accessible data, lack integration among siloed data sources, and cannot handle data uncertainty. Static models based on historical data cannot account for rapid changes both within and outside the enterprise.
Current credit risk models cannot produce a specific risk assessment for a given financial structure: they are based on the buyer’s financials and other public or quasi-public information, and are often qualitative rather than quantitative (e.g., a credit manager’s free-form assessment). Only a very limited quantitative tool set is available for performance risk analysis, which often rests on expert opinions and personal judgement, while the complex interplay between credit and operational risk remains under-researched. As a result, most financing structures lump performance and credit risk together, which leads to low advance rates and limited availability. Market risk factors, such as the price of inventory (collateral, end-of-life, etc.), are key to many financing structures. While well-established statistical methods exist for market risk analysis of commodities and commodity-like products (much less so for other product types), these models remain relatively simple and overlook many important factors and complex interrelationships.
Applying state-of-the-art data mining, predictive modeling, and machine learning technologies to discover critical insights in rich, multi-faceted enterprise and external data, and then using these insights to assess risk and make data-driven financial decisions, can create a significant competitive advantage for organizations.
We build highly customized intelligent decision-making engines designed to generate credit, risk, and pricing decisions in near real time, as well as recommend actions that return the highest possible value over both the short and long term. The inherent flexibility of our suite of self-learning AI tools allows us to fine-tune our products to the needs of a broad spectrum of corporate users across the entire supply chain finance ecosystem, including buyers, financiers, and suppliers.
Our products are deployed to one or more (or potentially all) supply chain participants and operate by continually learning a self-improving model of each client’s business from:
· ERP and CRM data;
· Unstructured and free text data such as emails and meeting minutes;
· Low-level technical data, such as system execution logs, sensor signals, etc.;
· Relevant external data, e.g., technology and business trends.
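As a concrete illustration of combining these sources, the sketch below merges per-source feature dictionaries into a single supplier record. All field names and values are hypothetical stand-ins for illustration, not TenzorAI’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified feature record unifying ERP, CRM, log, and
# external data. Every field name here is an illustrative assumption.
@dataclass
class SupplierFeatures:
    supplier_id: str
    erp: dict = field(default_factory=dict)       # e.g., invoice aging from ERP
    crm: dict = field(default_factory=dict)       # e.g., sentiment from CRM notes
    logs: dict = field(default_factory=dict)      # e.g., on-time rate from system logs
    external: dict = field(default_factory=dict)  # e.g., sector demand index

def merge_sources(supplier_id, *sources):
    """Merge (source_name, features) pairs into one record plus a flat,
    source-prefixed feature dict suitable for downstream models."""
    record = SupplierFeatures(supplier_id)
    flat = {}
    for name, feats in sources:
        getattr(record, name).update(feats)
        flat.update({f"{name}.{k}": v for k, v in feats.items()})
    return record, flat

record, flat = merge_sources(
    "SUP-001",
    ("erp", {"days_payable_outstanding": 47}),
    ("crm", {"note_sentiment": 0.62}),
    ("logs", {"on_time_rate": 0.93}),
    ("external", {"sector_demand_index": 1.08}),
)
```

Prefixing feature names by source keeps provenance visible after integration, which matters when a model’s decision later has to be explained or audited.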
TenzorAI products generate decisions that account for multiple, and often hidden, risk factors. Additionally, they help decision makers better understand complex factors and causal relationships unique to their business.
Due to inherent mistrust among supply chain members and existing commercial and regulatory pressures, organizations are often unwilling to share their internal data. To address the need for trust, privacy, and auditability within supply chains, as well as to improve our engines’ decision-making capabilities, we utilize distributed ledger technology (blockchain).
How We Do It
At the core of our products is a suite of self-learning predictive models that are continually updated and enriched with new data, using reinforcement feedback wherever possible. These models are trained to maximize the cumulative value of the decisions and actions they output. Our approaches to data transformation and integration allow the inclusion of all available and relevant data sources, both structured (from enterprise systems) and unstructured, so the resulting models can account for complex internal, cross-organizational, and external factors.
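The idea of maximizing cumulative value through reinforcement feedback can be sketched as a simple epsilon-greedy value learner. The actions and the reward function below are hypothetical stand-ins for real financing decisions, chosen only to make the mechanism concrete:

```python
import random

# Minimal epsilon-greedy sketch of "maximize cumulative value of decisions".
# ACTIONS and the rewards are illustrative assumptions, not real policy.
ACTIONS = ["approve_low_rate", "approve_high_rate", "decline"]

class ValueLearner:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in ACTIONS}   # running value estimate per action
        self.count = {a: 0 for a in ACTIONS}

    def decide(self):
        if random.random() < self.epsilon:        # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=self.value.get)   # otherwise exploit best estimate

    def feedback(self, action, realized_value):
        """Reinforcement feedback: incremental update of the running mean."""
        self.count[action] += 1
        self.value[action] += (realized_value - self.value[action]) / self.count[action]

random.seed(0)
learner = ValueLearner()
for _ in range(500):
    a = learner.decide()
    # Hypothetical realized values: low-rate approvals pay off best here.
    reward = {"approve_low_rate": 1.0, "approve_high_rate": 0.4, "decline": 0.1}[a]
    learner.feedback(a, reward + random.gauss(0, 0.05))
```

Production engines would condition decisions on the feature data described above (a contextual rather than context-free learner), but the feedback loop is the same: realized value flows back into the estimates that drive the next decision.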
We use multiple AI engines deployed to supply chain participants and train them on the data available within those organizations, without the need to centrally process their sensitive internal data. To benefit from all the data available across the supply chain, we then integrate locally trained machine learning models.
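One simple way to integrate locally trained models without centralizing sensitive data is weighted coefficient averaging (in the spirit of federated averaging). The linear model and the toy data below are illustrative assumptions, not our production architecture:

```python
# Each participant fits a model on its own data; only coefficients are shared.

def local_train(X, y, lr=0.05, epochs=200):
    """Fit w for y ≈ w·x by plain stochastic gradient descent on local data."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w

def federated_average(local_models, weights):
    """Weighted average of coefficient vectors, e.g., by local sample count."""
    total = sum(weights)
    return [sum(w[i] * wt for w, wt in zip(local_models, weights)) / total
            for i in range(len(local_models[0]))]

# Two participants hold disjoint data generated by the same underlying
# relation y = 2*x1 + 1*x2; neither ever reveals its raw records.
X1, y1 = [[1, 0], [0, 1], [1, 1]], [2, 1, 3]
X2, y2 = [[2, 1], [1, 2]], [5, 4]
w_global = federated_average([local_train(X1, y1), local_train(X2, y2)],
                             weights=[len(X1), len(X2)])
```

The integrated model recovers the shared relationship even though each participant only ever sees its own records, which is exactly the property that makes cross-organizational training acceptable to reluctant data owners.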
The variety of input data and the overall complexity of the problem require a broad spectrum of modeling approaches. Natural language processing (NLP) algorithms are used to quantify and standardize free text (e.g., notes, email, meeting minutes), scan for usable text data, and convert text into relational tables. This is especially important for performance risk modeling based on insights discovered in CRM data. Identity resolution algorithms ensure consistency among organization, department, and individual names. Additionally, we use neural nets and deep learning methods for complex diagnostics and automated rule extraction. We can always dig deeper: for example, instead of using standardized reports, we can train our models on the raw data (primary records) from enterprise systems, eliminating possible biases and errors in high-level aggregated parameters.
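The identity-resolution step can be illustrated with a small normalization-plus-fuzzy-matching sketch using the standard library. The canonical list, suffix set, and threshold are all illustrative assumptions; real resolvers use much richer features:

```python
from difflib import SequenceMatcher

# Hypothetical identity resolution: map free-text organization names to
# canonical entities. CANONICAL, LEGAL_SUFFIXES, and the 0.8 threshold
# are illustrative assumptions, not production values.
CANONICAL = ["Acme Industries Inc", "Globex Corporation", "Initech LLC"]
LEGAL_SUFFIXES = {"inc", "llc", "corp", "corporation", "ltd", "co"}

def normalize(name):
    """Lowercase, strip punctuation and legal suffixes before comparison."""
    tokens = name.lower().replace(".", "").replace(",", "").split()
    return " ".join(t for t in tokens if t not in LEGAL_SUFFIXES)

def resolve(raw_name, threshold=0.8):
    """Return the best-matching canonical name, or None below the threshold."""
    scored = [(SequenceMatcher(None, normalize(raw_name), normalize(c)).ratio(), c)
              for c in CANONICAL]
    score, best = max(scored)
    return best if score >= threshold else None
```

So `resolve("ACME Industries, Inc.")` maps to the canonical Acme entry, while an unknown name falls below the threshold and is flagged rather than silently mismatched, which is the behavior that keeps entity references consistent across ERP, CRM, and free-text sources.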
Discriminative supervised learning algorithms are very useful for decision-making because they produce mutually exclusive output (i.e., “this” vs “that”). Examples include regression (generalized linear models), SVMs, and Random Forests. Real-life data sets that are unsuitable for supervised learning, due to uncertainties such as the absence of a reliable “ground-truth” label or the presence of hidden variables and processes, are modeled with generative probabilistic models such as Bayesian nets. When reliably predicting Y as a function of X is impossible, we can instead predict the states of the entire system.
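A minimal discriminative example is logistic regression producing a mutually exclusive approve/decline output. The features, labels, and decision threshold below are synthetic illustrations, not real credit data:

```python
import math

# Toy discriminative classifier: logistic regression trained by stochastic
# gradient descent. Features [on_time_rate, debt_ratio] and labels are
# fabricated for illustration only.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=1000):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            grad = p - yi                          # gradient of the log-loss
            w = [wj - lr * grad * xj for wj, xj in zip(w, xi)]
            b -= lr * grad
    return w, b

# Synthetic training data: 1 = approve, 0 = decline.
X = [[0.95, 0.2], [0.90, 0.3], [0.50, 0.8], [0.40, 0.9], [0.85, 0.4], [0.30, 0.7]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logistic(X, y)

def classify(x):
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return "approve" if p > 0.5 else "decline"   # mutually exclusive output

decision = classify([0.92, 0.25])
```

A generative counterpart (e.g., a Bayesian net) would instead model the joint distribution of the variables, which is what allows inference to proceed when labels are missing or variables are hidden.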
Our machine learning tools allow us to discover hidden processes, variables, and causal relationships that subject matter experts cannot access through direct observation, heuristics, or intuition. The decisions and actions generated from these insights are communicated to the decision maker through a human-machine interface (UI) or executed via full, closed-loop process automation.
We use a private/permissioned blockchain built on top of an existing enterprise-grade distributed ledger platform to serve as a data governance and audit layer. We learn integrated master models and perform model cross-training on local AI engines within particular supply chains, significantly improving the quality of predictions while limiting the exchange of sensitive underlying data.
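The audit-layer role of the ledger can be illustrated with a toy hash-chained log of model events: each entry commits to its predecessor’s hash, so tampering is detectable. Field names are illustrative assumptions; a production deployment would use a permissioned ledger platform rather than this sketch:

```python
import hashlib
import json

# Toy hash-chained audit log. Record fields ("event", "participant", etc.)
# are illustrative, not a real ledger schema.
def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Append a record that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})
    return chain

def verify(chain):
    """True iff every block's 'prev' matches the hash of its predecessor."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"event": "model_update", "participant": "supplier_A",
                     "model_digest": "abc123"})
append_block(chain, {"event": "cross_train",
                     "participants": ["supplier_A", "buyer_B"]})
```

Because only digests of model updates need to go on-chain, participants gain auditability of the cross-training process without exposing the underlying data or model internals.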
We have a highly skilled team with subject matter expertise in banking, insurance, supply chain finance, and other industries, as well as experience in data science, machine learning, business analysis, software engineering, product management, and related academic research, gained both at large corporations and in various consulting engagements.