Making AI work for databases

In The Sorcerer’s Apprentice, Mickey Mouse uses a magic spell to do his chores. The spell animates a broom tasked with carrying water from the well. While Mickey supervises the broom, it gets the job done; but when he falls asleep, the broom carries on fetching water and floods the workshop. Unable to stop it, Mickey chops the broom to bits with an axe, only for every piece to re-animate and carry on as before. Finally the Sorcerer intervenes to stop the brooms and clean up the mess.

Similarly, AI promises to lighten the burden of operating databases. Using AI to write SQL queries or to optimize performance, for example, is an obvious application of the technology. There is a huge amount of SQL on the internet that can be used to train models on what good queries look like, and transforming natural language into accurate SQL holds a lot of promise.
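A minimal sketch of that text-to-SQL pattern, with the safety check that makes it practical: never run model-generated SQL blindly. Here `generate_sql` is a hypothetical stand-in for an LLM call (hard-coded so the sketch is self-contained), and SQLite's `EXPLAIN` is used to confirm the statement at least parses before it touches data.

```python
import sqlite3

def generate_sql(question: str) -> str:
    # Hypothetical model call -- in practice this would hit an LLM API.
    # Hard-coded here so the sketch is self-contained and runnable.
    return "SELECT name FROM users WHERE active = 1"

def validate_sql(conn: sqlite3.Connection, sql: str) -> bool:
    """Check that generated SQL is read-only and parses cleanly
    before ever executing it against real data."""
    if not sql.lstrip().upper().startswith("SELECT"):
        return False  # reject anything that could modify data
    try:
        # EXPLAIN compiles the statement without running the query itself
        conn.execute(f"EXPLAIN {sql}")
        return True
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
conn.execute("INSERT INTO users VALUES ('ada', 1), ('bob', 0)")

sql = generate_sql("Which users are active?")
if validate_sql(conn, sql):
    print(conn.execute(sql).fetchall())  # [('ada',)]
```

The gate is deliberately conservative: a wrong-but-valid query still returns wrong results, which is exactly why the accuracy benchmarks discussed below matter.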

Further, using AI to handle database management tasks should deliver faster performance, more reliable systems, and more efficient use of resources. Customers demand more help with those pain points, and they expect any supplier to respond to those issues faster with AI. For problems that companies view as “low-hanging fruit,” they expect self-service AI to deliver fixes on demand rather than after a wait.

AI promise meets real-world challenge

Already, we have seen AI deployed around SQL and database management. BIRD (BIg bench for laRge-scale Database grounded text-to-SQL evaluation) publishes a benchmark of how models perform: the current top AI achieves nearly 82% execution accuracy, and BIRD also reports a Valid Efficiency Score (VES) that weights correct queries by how efficiently they run. (See the BIRD paper for details.) How good is 82%? Human database engineers currently score nearly 93%.
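To make the efficiency metric concrete, here is a simplified sketch of how a VES-style score can be computed, following the idea in the BIRD paper: a query contributes nothing if its result is wrong, and otherwise contributes the square root of the runtime ratio between the ground-truth and predicted SQL (so a correct-but-slow query is penalized, a correct-and-faster one is rewarded). The data and exact weighting here are illustrative, not BIRD's official harness.

```python
import math

def ves(results):
    """Simplified Valid Efficiency Score: each (correct, gt_time, pred_time)
    example contributes sqrt(gt_time / pred_time) if the predicted SQL
    returned the correct result, and 0 otherwise."""
    total = 0.0
    for correct, gt_time, pred_time in results:
        if correct:
            total += math.sqrt(gt_time / pred_time)
    return total / len(results)

# Illustrative runtimes in seconds: three correct queries, one wrong.
examples = [
    (True, 0.10, 0.10),   # same speed as ground truth -> contributes 1.0
    (True, 0.10, 0.40),   # correct but 4x slower      -> contributes 0.5
    (True, 0.40, 0.10),   # correct and 4x faster      -> contributes 2.0
    (False, 0.10, 0.10),  # wrong result               -> contributes 0.0
]
print(f"VES: {ves(examples):.3f}")  # VES: 0.875
```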

The gap between human and AI performance will shrink over time. But right now it is a great example of the Pareto principle at work: around 20% of your effort gets you 80% of your results, while the remaining 20% of results takes 80% of your effort. With AI, the simpler issues are where you can achieve the best results; the harder problems still need a human in the loop to solve the problem or reach the intended goal.

For database management, this is something that we have seen at Percona. Using previous consulting engagements and service delivery projects as a base, we looked at how to automate database management steps so customers could use AI to solve problems. Once the model was developed, we tested it internally on database installations. We found that AI did help our team deliver more efficiently on those simple problems, speeding up their response times.

At the same time, while these AI systems could make progress on more complex requests, they initially could not complete the “last mile” on their own. To overcome this, we looked at how the models used data to formulate responses and which sources they called on most often. That led to further refinement and improvement of the systems, alongside a human decision-maker who could understand what the AI was recommending, why it was suitable, and where it could be improved.
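One common way to structure that human-in-the-loop pattern is a simple routing gate: recommendations the system is confident about and that carry low risk are applied automatically, while everything else is queued for a reviewer. The sketch below is hypothetical — the field names and the 0.9 threshold are illustrative, not a description of Percona's implementation.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # e.g. "ANALYZE TABLE orders"
    risk: str            # "low" for read-only/statistics work, "high" for schema or data changes
    confidence: float    # model's self-reported confidence, 0.0 to 1.0

def route(rec: Recommendation, threshold: float = 0.9) -> str:
    """Auto-apply only low-risk, high-confidence recommendations;
    everything else goes to a human reviewer."""
    if rec.risk == "low" and rec.confidence >= threshold:
        return "auto-apply"
    return "human-review"

print(route(Recommendation("ANALYZE TABLE orders", "low", 0.97)))       # auto-apply
print(route(Recommendation("DROP INDEX idx_orders_ts", "high", 0.97)))  # human-review
```

The key design choice is that risk, not just confidence, gates automation: a model can be highly confident about a schema change and still be wrong, and with databases the cost of a wrong change is a broken application.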

Databases are essential components in the technology stack. As systems of record and sources for data analysis, they have to be reliable, available, and secure. Any decision around databases — from which database you choose for the job through to choices on management or optimization — can have a big impact. Any change has to be managed, or the result can be a broken application.

AI and the future of databases

Database management needs AI. The demand from customers for faster fixes and better performance is not going away, and those customers expect their suppliers to use AI in the same way they might use it internally. For companies involved in service and support around IT, including databases, applying AI to solve problems faster isn’t something you can avoid. However, the human-in-the-loop model will be essential for these service and support requirements for the foreseeable future. With databases so critical to how applications function and support the business, fully automating service with AI is not yet reliable for 100% of requests. As AI improves, its speed will benefit the majority of potential issues. However, the more complex problems will still require human expertise and control.

The demands of database customers will force teams to use AI, whether that means internal teams adopting AI to help manage database deployments within internal developer platforms, or external service providers using it to support customers. Customers will move to alternatives if they can’t get the speed of response they expect, whether by adopting another service provider for a database like PostgreSQL or by moving to a cloud or managed service provider that can offer better response times.

Mickey used magic to try to solve a problem, but he did not foresee all of the potential consequences. For those who are not database specialists, AI can help them write SQL, manage common tasks, or solve some of the simple problems, but there will always be edge cases where human skills and understanding are needed. Arthur C. Clarke’s Third Law states that any sufficiently advanced technology is indistinguishable from magic, but the combination of AI and human skill around databases will have the greatest long-term impact without resorting to sorcery.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.
