While 86% of companies surveyed by PwC named AI as the technology they will invest in most heavily in 2021, there are still few success stories of AI-based solutions integrated into real products, services, and processes.
In conversations with IT departments at large companies, I had the opportunity to investigate the causes that most strongly limit the adoption of artificial intelligence algorithms in products, services, and business processes:
Identification of use cases → Despite the hype surrounding AI, fewer than 30% of the Global 500 companies are adopting the technology in any form, and according to a KPMG survey, only 17% have brought it into production processes. Many business departments still struggle to understand the benefits of AI in 2021, while IT departments, focused on techniques that solve a specific modeling task (e.g., classifying text or recognizing an object in an image), fail to convey to the business the information needed to make AI applicable in real use cases.
Lack of appropriate technical skills → Creating a production-ready AI model requires a team with diverse skills that are unlikely to reside in a single job profile; they are split across Data Scientist, Data Analyst, DevOps, and Backend Developer roles. "There was a time in history when organizations thought of AI as a proof of concept," said Mark Esposito, a Harvard business professor. "The problem now is that most companies are having difficulty deploying the solution, which is frequently relegated to Proof of Concept status." Among the Global 500 companies polled by KPMG, the five with the most advanced AI capabilities employ, on average, 375 full-time staff dedicated to AI, at a cost of roughly $75 million each. Needless to say, most businesses cannot afford this level of investment, particularly in markets such as Italy, where the systemic shortage of IT professionals is far more pronounced than in the United States.
Data availability → Data has been nicknamed the "gold of the twenty-first century": it is the foundation of artificial intelligence models and frequently determines whether an AI project succeeds. However, the vast majority of businesses lack properly structured data or have too little data to train AI models. Before even beginning to study and solve the problem, most data scientists spend large amounts of time collecting data, consolidating it from disparate sources, formatting it, and cleaning it. Finding, understanding, organizing, and cleaning data is the hardest part of most AI projects, and it is one of the primary reasons why AI models never reach production. Many businesses have been collecting data for years, only to discover when they start AI projects that critical columns of information are missing. The data collected may not be the right data to train an AI model, regardless of how large the dataset is or how long it has been accumulating; companies must ensure not only that they have the right data and enough of it, but also that the data is properly structured. Indeed, before training can begin, it is frequently necessary to integrate the training database, harmonize external databases with internal data, and manually label the data (one of the most time-consuming activities in the entire process of training an AI algorithm).
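The consolidation and cleaning steps described above can be sketched with a small, entirely hypothetical example (all table names, columns, and values are invented for illustration): two internal sources with inconsistent identifiers, mixed date formats, and missing values that must be harmonized before any training can start.

```python
import pandas as pd

# Hypothetical "disparate sources": a CRM export and a support-ticket log.
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "signup_date": ["2021-01-05", "05/02/2021", "2021-03-10"],  # mixed formats
    "revenue": ["1,200", "850", None],                          # strings, one missing
})
support = pd.DataFrame({
    "cust_id": [1, 2, 4],   # same entity, different column name
    "tickets": [3, 0, 7],
})

# Harmonize column names and types across sources.
support = support.rename(columns={"cust_id": "customer_id"})
crm["revenue"] = crm["revenue"].str.replace(",", "", regex=False).astype(float)
crm["signup_date"] = crm["signup_date"].apply(pd.to_datetime)  # per-row parsing

# Consolidate: a left join keeps every CRM customer and exposes the gaps.
dataset = crm.merge(support, on="customer_id", how="left")
dataset["tickets"] = dataset["tickets"].fillna(0).astype(int)
dataset["revenue"] = dataset["revenue"].fillna(dataset["revenue"].median())
print(dataset)
```

Even in this toy case, most of the code is plumbing (renaming, reformatting, imputing) rather than modeling, which mirrors where real projects spend their time.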
Scalability → There are several reasons why AI-based solutions fail to reach production, and the first is performance. Very often it is only in production that one realizes that the time an algorithm needs (for both training and inference) is incompatible with the data volumes and the speed of the product or production process: for example, an AI system meant to flag possibly fraudulent transactions in real time would be unusable if it took minutes to make each prediction. AI models often require adapting the data structures and infrastructure on which they run to more modern technologies (e.g., NoSQL databases, code optimization for GPUs and parallelization, etc.).
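One way to catch the latency problem described above before it surfaces in production is to measure inference time against an explicit budget. A minimal sketch, assuming a placeholder scoring function and a hypothetical 50 ms budget (the function name, rule, and threshold are all invented for illustration):

```python
import time

LATENCY_BUDGET_MS = 50  # hypothetical: the product must flag a transaction within 50 ms

def predict(transaction: dict) -> bool:
    """Stand-in for a trained fraud model's inference call."""
    return transaction["amount"] > 10_000  # placeholder rule, not a real model

def p95_latency_ms(n_samples: int = 1000) -> float:
    """Time repeated inference calls and return the 95th-percentile latency."""
    tx = {"amount": 12_500, "currency": "EUR"}
    timings = []
    for _ in range(n_samples):
        start = time.perf_counter()
        predict(tx)
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[int(0.95 * len(timings))]

latency = p95_latency_ms()
print(f"p95 inference latency: {latency:.3f} ms "
      f"({'within' if latency <= LATENCY_BUDGET_MS else 'over'} budget)")
```

Running this kind of check with the real model, on production-sized inputs, turns "the model is too slow" from a launch-day surprise into a measurable requirement.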
Investment and payback period → All good things take time, but investors, whether external or internal, are always looking for quick returns. While AI promises increased revenue through better decision making, it makes no promises about how long those gains will take to materialize. To avoid discouraging investors or other beneficiaries of AI models, it is critical to break the problem into pieces and start with the one that will produce the best results in the short term. This is how organizations build AI literacy: they start to understand what AI can and cannot do, and they see some of its benefits materialize quickly. Aiming for an overly ambitious goal right out of the gate, by contrast, can discourage investors and internal beneficiaries and slow, if not halt, the deployment of AI-based solutions entirely.
Roboicly, too, had to deal with the problems described above when developing its NLP algorithms, which are now applied daily to over 50,000 news items in English, French, German, Spanish, and Italian; in the process, it built its own platform for optimizing the labeling, training, and deployment of an AI model. These issues proved so common that we decided to rationalize what we had created for internal use and develop a proprietary platform that, with some standardization and optimization, can help non-tech companies adopt AI-based solutions profitably.