2025 is expected to be a pivotal year for enterprise AI. Last year saw rapid innovation, and this year will see the same. That makes it more important than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.
1. Agents: the next generation of automation
AI agents are no longer theoretical. In 2025, they will be indispensable tools for organizations looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make precise decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs.
At the beginning of 2024, agents weren’t ready for prime time, producing frustrating errors like hallucinated URLs. They started to improve as the frontier large language models themselves improved.
“Let me put it this way,” said Sam Witteveen, co-founder of Red Dragon, a company that develops agents for businesses and recently reviewed the 48 agents it built over the past year. “Interestingly, the ones that we built at the beginning of the year, a lot of them worked better at the end of the year just because the models got better.” Witteveen shared this in the video we shot to discuss these five big trends in detail.
As the models improve, hallucinations decrease, and the models are increasingly trained to carry out agentic tasks. Another technique gaining traction is using an LLM as a judge: as the cost of models falls (something we will cover below), companies can generate outputs from three or more models and have a judge model pick the best result to act on.
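To make the judge idea concrete, here is a minimal sketch in Python. The `call_model` helper is a hypothetical placeholder for whatever provider SDK you use; the pattern is simply to generate candidates from several (often cheaper) models and let a judge model select the strongest one.

```python
# Minimal LLM-as-judge sketch. call_model() is a hypothetical stand-in
# for whatever provider SDK you use; wire it up to real API calls.
from typing import Callable

def call_model(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its text response."""
    raise NotImplementedError("connect this to your provider's SDK")

def best_of_n(task: str, candidate_models: list[str], judge_model: str,
              call: Callable[[str, str], str] = call_model) -> str:
    # 1. Generate one candidate answer per (cheaper) model.
    candidates = [call(m, task) for m in candidate_models]

    # 2. Ask a judge model to pick the strongest candidate.
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    verdict = call(
        judge_model,
        f"Task:\n{task}\n\nCandidate answers:\n{numbered}\n\n"
        "Reply with only the number of the best answer."
    )
    return candidates[int(verdict.strip())]
```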
Another part of the secret sauce? Retrieval-augmented generation (RAG), which allows agents to store and reuse knowledge efficiently, is also improving. Imagine a travel-agent bot that not only plans trips, but books flights and hotels in real time based on up-to-date preferences and budgets.
Takeaway: Companies need to identify use cases where agents can provide high ROI, whether in customer service, sales, or internal workflows. Tool use and advanced reasoning capabilities will define the winners in this space.
2. Evaluations: the foundation of reliable AI
Evaluations, or “evals,” are the backbone of any robust AI deployment. This is the process of choosing which LLM, from the hundreds now available, to use for your task. It matters for accuracy, but also for aligning AI outputs with the organization’s goals. A good eval ensures that a chatbot matches your tone, a recommendation system surfaces relevant options, and a predictive model avoids costly mistakes.
For example, a company’s eval for a customer support chatbot might include metrics for average resolution time, response accuracy, and customer satisfaction scores.
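As a rough illustration of what that looks like in practice, here is a toy eval harness. The test cases, the accuracy threshold, and the `chatbot` function are all invented for the example; a production eval would use a much larger, curated test set.

```python
# Toy eval harness: run a chatbot over fixed test cases and report metrics.
# The test cases, threshold, and `chatbot` callable are illustrative only.
import time

def run_eval(chatbot, test_cases, accuracy_threshold=0.9):
    correct, latencies = 0, []
    for case in test_cases:
        start = time.perf_counter()
        answer = chatbot(case["question"])
        latencies.append(time.perf_counter() - start)
        # Crude accuracy check: does the answer mention the expected fact?
        if case["expected"].lower() in answer.lower():
            correct += 1
    accuracy = correct / len(test_cases)
    return {
        "accuracy": accuracy,
        "avg_latency_s": sum(latencies) / len(latencies),
        "passed": accuracy >= accuracy_threshold,
    }

test_cases = [
    {"question": "What is your refund window?", "expected": "30 days"},
    {"question": "Do you ship internationally?", "expected": "yes"},
]
```

Swapping a different model behind `chatbot` and re-running the same harness is one simple way to compare candidate LLMs on your own task rather than on public benchmarks.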
Many companies have invested significant time in processing inputs and outputs so they match company expectations and workflows, but this can consume a lot of time and resources. As the models themselves improve, many companies are saving that effort by relying more on the models to do the work, which makes choosing the right model all the more important.
This process forces clear communication and better decision making. “When you become more aware of how to evaluate the output of something and what you actually want, it not only makes you better with LLMs and AI, but it also makes you better with humans,” Witteveen said. “When you can clearly express to a human: This is what I want, this is what I want it to look like, and this is what I’m going to expect from it. When you get really specific about that, humans suddenly perform a lot better.
“Oh, you know, I’ve gotten much better at giving direction to my team just by being able to do prompt engineering, or just by being able to look at writing good evals for models,” Witteveen noted.
By writing clear evals, companies force themselves to clarify their goals, which is a win for both humans and machines.
Takeaway: Crafting high-quality assessments is essential. Start with clear criteria: accuracy of response, time to resolution, and alignment with business goals. This ensures that your AI not only works but is aligned with your brand values.
3. Cost efficiency: scaling AI without breaking the bank
AI is becoming cheaper, but strategic deployment remains key. Improvements at every layer of the LLM stack are driving costs down significantly. Intense competition among LLM providers, and from open-source alternatives, is leading to regular price cuts.
Meanwhile, post-training software techniques are making LLMs more efficient.
Competition from new hardware vendors such as Groq, with its LPUs, along with improvements from incumbent GPU provider Nvidia, is dramatically reducing inference costs, making AI viable for more use cases.
The real breakthroughs come from improving how models run in applications, at inference time, rather than at training time, when models are first built from data. Other techniques such as model distillation, along with hardware innovations, mean companies can achieve more with less. It’s no longer a question of whether you can afford AI (most projects can be run at a far lower cost this year than even six months ago), but how you scale it.
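The arithmetic behind that cost-effectiveness analysis is straightforward. The sketch below compares a frontier model against a distilled one; the per-token prices and traffic numbers are placeholder assumptions, not quotes from any provider.

```python
# Illustrative cost comparison between a large model and a distilled one.
# Prices and traffic volumes are placeholder numbers, not real provider rates.
def monthly_cost(requests_per_day, tokens_per_request, price_per_million_tokens):
    tokens_per_month = requests_per_day * 30 * tokens_per_request
    return tokens_per_month / 1_000_000 * price_per_million_tokens

frontier = monthly_cost(requests_per_day=50_000, tokens_per_request=1_500,
                        price_per_million_tokens=10.00)   # hypothetical rate
distilled = monthly_cost(requests_per_day=50_000, tokens_per_request=1_500,
                         price_per_million_tokens=0.50)    # hypothetical rate

print(f"Frontier model:  ${frontier:,.0f}/month")   # $22,500/month
print(f"Distilled model: ${distilled:,.0f}/month")  # $1,125/month
```

Pairing this kind of estimate with the evals from the previous section shows whether a cheaper model actually holds up on your task before you commit to it.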
Takeaway: Conduct a cost-effectiveness analysis of your AI projects. Compare hardware options and explore techniques such as model distillation to reduce costs without compromising performance.
4. Memory and personalization: AI that adapts to its users
Personalization is no longer optional, but expected. In 2025, memory-powered AI systems will make this a reality. By remembering a user’s preferences and past interactions, AI can deliver more personalized and effective experiences.
Memory-based personalization is not widely or openly discussed, because users often feel uncomfortable with AI applications storing personal information to improve service. There are privacy concerns, and an ick factor when a model’s answers reveal how much it knows about you: how many kids you have, what you do for a living, and what your personal tastes are. OpenAI, for example, stores information about ChatGPT users in its memory feature, which can be turned off and deleted, although it is on by default.
While companies building on OpenAI and other providers’ models cannot access that same stored information, what they can do is build their own memory systems using RAG, ensuring the data stays secure and traceable. Even so, companies must tread carefully, balancing personalization with privacy.
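A minimal sketch of such an in-house memory layer is below. It is not any provider’s API: the class, the example user ID, and the facts are invented, and naive keyword overlap stands in for the embedding-based vector search a real RAG system would use.

```python
# Toy user-memory store for personalization. Keyword overlap stands in for
# the vector search a production RAG system would use.
from collections import defaultdict

class UserMemory:
    def __init__(self):
        self._facts = defaultdict(list)  # user_id -> remembered facts

    def remember(self, user_id: str, fact: str) -> None:
        self._facts[user_id].append(fact)

    def recall(self, user_id: str, query: str, k: int = 3) -> list[str]:
        # Rank stored facts by naive word overlap with the query.
        q_words = set(query.lower().split())
        scored = sorted(
            self._facts[user_id],
            key=lambda fact: len(q_words & set(fact.lower().split())),
            reverse=True,
        )
        return scored[:k]

memory = UserMemory()
memory.remember("u42", "Prefers window seats and vegetarian meals")
memory.remember("u42", "Usually travels with two children")
context = memory.recall("u42", "window or aisle seats for the next flight")
# `context` is prepended to the prompt so the model can personalize its answer.
```

Because the facts live in your own store rather than a provider’s memory feature, they can be audited, exported, or deleted on request, which is what makes the data traceable.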
Takeaway: Develop a clear strategy for memory and personalization. Opt-in systems and transparent policies can build trust while delivering value.
5. Reasoning and test-time compute: the new frontier of efficiency
Inference is where artificial intelligence meets the real world. In 2025, the focus will be on making this process faster, cheaper and more robust. Chain-of-thought reasoning, where models break tasks down into logical steps, is changing how organizations approach complex problems. Tasks that require deeper reasoning, such as strategic planning, can now be handled effectively by AI.
For example, OpenAI’s o3-mini model is expected to be released later this month, followed by the full o3 model. These models offer advanced reasoning capabilities that break complex problems into manageable parts, reducing hallucinations and improving decision-making accuracy. The reasoning gains show up in areas such as mathematics, coding and scientific applications, where extra deliberation helps, although progress in other areas, such as language structure, may be more limited.
However, these improvements also come with increased compute requirements and therefore higher operating costs. o3-mini is meant to offer a middle ground, containing costs while maintaining strong performance.
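For teams that want to approximate this step-by-step behavior at the application level, rather than relying solely on a specific reasoning model, a rough sketch of the “plan, then solve” pattern is below. As in the earlier example, `call_model` is a hypothetical stand-in for your provider’s SDK, and the model name is a placeholder.

```python
# Sketch of sequential ("plan, then solve") prompting at the application level.
# call_model() is a hypothetical stand-in for your provider's SDK.
def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("connect this to your provider's SDK")

def solve_step_by_step(task: str, model: str = "reasoning-model") -> str:
    # 1. Ask the model to break the task into numbered steps.
    plan = call_model(model, f"Break this task into numbered steps:\n{task}")
    steps = [line for line in plan.splitlines() if line.strip()]

    # 2. Work through each step, carrying earlier results forward as context.
    notes = []
    for step in steps:
        result = call_model(
            model,
            f"Task: {task}\nProgress so far:\n" + "\n".join(notes) +
            f"\nNow do this step: {step}"
        )
        notes.append(f"{step} -> {result}")

    # 3. Ask for a final answer grounded in the step-by-step work.
    return call_model(model, f"Task: {task}\nWork:\n" + "\n".join(notes) +
                      "\nGive the final answer.")
```

Note that each extra step is another model call, which is exactly the trade-off described above: more test-time compute for better answers.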
Takeaway: Identify workflows that can benefit from advanced reasoning techniques. Implementing chain-of-thought reasoning steps tailored to your company, and choosing optimized models, can give you an edge here.
Conclusion: turning ideas into actions
AI in 2025 is not just about adopting new tools; it’s about making strategic choices. Whether it’s deploying agents, refining evaluations, or scaling cost-effectively, the path to success lies in thoughtful execution. Companies should embrace these trends with a clear, focused strategy.
For more details on these trends, watch the full video podcast between me and Sam Witteveen here: