David Cosgrave, Director for Business Operations and Customer Advisory, SAS.

As technology evolves, new developments such as artificial intelligence (AI) bring with them the promise of innovation and significant business impact. However, with something like AI, this promise is often overshadowed by a lack of trust in the data and the algorithms.

There are numerous issues relating to the data that AI would use to enable innovation and business improvements, says David Cosgrave, Director for Business Operations and Customer Advisory at SAS. These range from trust in how the data has been collected to concerns around how to protect it once it is held.

"These issues have somewhat put the brakes on using AI to drive innovation, as company boards are anxious about their value to the business, while consumers, too, are less trusting. And don't forget that AI projects tend to be costly for the organisations implementing them - after all, data scientists, innovative plans and big laboratories are expensive," he says.

"It is for this reason that many enterprises have turned to open source (OS) options. After all, the technology is cheap and the market is awash with graduates who have OS skills. However, where OS solutions often fail is that putting together a complete AI project using only OS may end up demanding a dozen or more different pieces of technology. As with anything, the more potential points of failure, the more complex it becomes to get it operational."

He adds that most OS models are also effectively ungoverned, something that clearly reduces their ability to interpret incoming data accurately. Moreover, models that are not properly governed are difficult to keep current. Don't forget that a model will inevitably degrade over time, as the data, people and their behaviours, and the technology itself slowly change.

"This means that an AI model would need to be continually retrained - which, in an OS environment, means having to continuously change and redevelop the code - something that would slow things down significantly."

On the other hand, he says, proprietary solutions not only undertake the necessary changes automatically, they are also designed to build greater levels of trust in the accuracy of the model. After all, a key piece of functionality they offer is enabling the user to interrogate carefully both the data and how a particular model arrived at a given result.

"Saying this is not to suggest that there is no place for OS in an advancing AI world. Far from it, in fact, as the two models complement one another well and the reality is that a well-planned AI implementation should encompass elements of both."

"Ultimately, such a project should unify the OS programmer skills with a proprietary platform, as this will deliver a ‘best of both worlds' scenario, where plentiful OS skills can be accessed and cost efficiencies gained, while the enterprise can still build control and confidence in the accuracy of its models and speed up its time to market," he says.

Hybrid models are in vogue in other parts of the IT space already, he continues, and a successful AI implementation should definitely combine OS and proprietary solutions, rather than being viewed as an either/or decision.

"In this way, the organisation will be able to enjoy the cost and skills benefits offered by OS, while the data integrity ensured by the proprietary side will build trust in the minds of both the customers and the board. I believe such a hybrid platform will provide both the flexibility required, along with the governance necessary, to ensure compliance," he concludes.
