The State of Development and Operations of AI Applications: Report

October 11, 2019 | Article, Reports

DevOps transformed the way software engineers deliver applications by making it possible to collaborate, test and deliver software continuously. Data science and machine learning (ML) teams today are stuck where software development was in the late 1990s before DevOps, mired in challenges ranging from difficulties collaborating on artificial intelligence (AI) model deployments to work being split and isolated across silos.

In the 2019 State of Development and Operations of AI Applications study conducted by Dotscience, 500 industry professionals were surveyed to:

• Examine the state of development and operations of AI applications
• Explore the practical business applications for ML
• Determine what tools, processes and techniques are required to implement AI successfully and safely

Here are the report’s key findings:

1. Over 88% of IT professionals at organizations where AI is in production indicated that AI has either been impactful or highly impactful on their company’s competitive advantage, while only 0.7% stated that it had no impact.

2. While 63.2% of businesses reported they are spending between $500,000 and $10 million on their AI efforts, 60.6% of respondents are continuing to experience a variety of operational challenges.

3. The top three challenges respondents experienced with AI workloads are duplicating work (33%), rewriting a model after a team member leaves (27.6%) and difficulty justifying value (27%).

4. 64.4% of respondents take between 7 and 18 months to move ML and AI models from idea to production.

5. 44.5% of ML engineers and data scientists collaborate with each other using a shared spreadsheet for metrics which they update manually.

6. Nearly 90% of respondents either manually track the model provenance (i.e., a complete record of all the steps taken to create a model) of their AI models, data and code or do not track provenance at all.

7. For 43.4% of respondents, reviewing ML experiments as a team is done by sharing manually created documentation.

8. 28.4% noted that they rebuild their models every time they deploy them.

9. 36.4% of respondents reported that data scientists and ML engineers provision and access their model development environments on a local laptop or desktop.

10. Nearly 40% of respondents’ teams send models to another team for deployment into production.

The report’s infographic summary and the full report are available from Dotscience.
