The field of AI is witnessing a surge of interest in what's referred to as "tool use." At the heart of this concept is the idea that large language models or multimodal models, despite their competence in language and other domains such as mathematics and programming, can delegate more specialised tasks to dedicated "tools." These tools can be separate AI systems that the central model calls upon when confronted with particular problems, such as protein folding or playing chess.

The user might not be aware of this intricate process, as the interaction appears seamless, creating the illusion of a single, multifaceted AI system. Yet, behind the scenes, the complex tasks are often tackled by smaller, specialised AIs working under the umbrella of the large central model.

It's anticipated that the next generation of AI will capitalise more on this structure. The central model could function akin to a switchboard operator, routing queries to the appropriate tools, then communicating the solution back to the user in a digestible manner—typically through natural language.
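The switchboard pattern described above can be sketched in a few lines of code. The sketch below is purely illustrative: the function names, the keyword-based routing, and the stand-in tools are all assumptions for the sake of example, not the interface of any real system. In practice, the central model itself would decide which tool to invoke rather than relying on keyword matching.

```python
def fold_protein(query: str) -> str:
    # Stand-in for a specialised protein-structure model.
    return f"[protein tool] handled: {query}"

def play_chess(query: str) -> str:
    # Stand-in for a specialised chess engine.
    return f"[chess tool] handled: {query}"

def answer_directly(query: str) -> str:
    # Fallback: the central model answers in natural language itself.
    return f"[central model] answered: {query}"

# Registry mapping trigger keywords to specialised tools (hypothetical).
TOOLS = {
    "protein": fold_protein,
    "chess": play_chess,
}

def route_query(query: str) -> str:
    """Route a query to the first matching tool, else answer directly."""
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool(query)
    return answer_directly(query)
```

From the user's side, a single call such as `route_query("Predict the structure of this protein")` returns one natural-language answer, preserving the illusion of a single, multifaceted system even though a specialised tool did the work.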

Current AI models and their applications, as versatile and advanced as they may seem, are merely stepping stones in this vast technological landscape. Take chatbots, for instance. While they currently offer valuable services, their evolution will lead them to become comprehensive personal assistants used many times a day for an array of tasks, from recommending books and events to scheduling travel and assisting with daily work.

That said, it's crucial to recognise that there's still a long way to go to reach this vision. Key capabilities like planning, reasoning, and memory are underdeveloped in today's chatbots, although significant work is being done in all three areas.

It's reasonable to predict that in a few years, the chatbots of today will pale in comparison to the vastly superior versions that are yet to come.