When legal research company LexisNexis created its AI assistant Protégé, it wanted to figure out the best way to leverage its expertise without deploying a massive model.
Protégé aims to help lawyers, associates and paralegals write and proofread legal documents and ensure that anything they cite in complaints and briefs is accurate. However, LexisNexis didn't want a general-purpose legal assistant; it wanted to build one that learns a firm's workflows and is more customizable.
LexisNexis saw the opportunity to harness the power of large language models from providers such as Anthropic and Mistral and find the models that best answer each user's questions, Jeff Reihl, CTO of LexisNexis Legal & Professional, told VentureBeat.
"We use the best model for the specific use case as part of our multi-model approach. We use the model that provides the best result with the fastest response time," said Reihl. "For some use cases, that will be a small language model like Mistral, or we perform distillation to improve performance and reduce cost."
While LLMs still provide value in building AI applications, some organizations are turning to small language models (SLMs) or distilling LLMs into smaller versions of the same model.

Distillation, in which an LLM "teaches" a smaller model, has become a popular option for many organizations.

Small models often work best for applications like chatbots or simple code completion, which is what LexisNexis wanted to use them for in Protégé.
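The article doesn't detail how LexisNexis runs its distillation, but the core mechanism behind the technique it names is standard: train a small "student" model to match the soft output distribution of a large "teacher." A minimal NumPy sketch of the distillation loss (all names and numbers here are illustrative, not from LexisNexis):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened outputs.

    Minimizing this pushes the small student model to mimic the large
    teacher's full output distribution, not just its top answer --
    the core idea behind distilling an LLM into a smaller model.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Identical logits give zero loss; a diverging student gives positive loss.
teacher = np.array([2.0, 1.0, 0.1])
aligned = distillation_loss(teacher, teacher)
off = distillation_loss(teacher, np.array([0.1, 1.0, 2.0]))
```

In a real training loop this loss (often mixed with the ordinary hard-label loss) is backpropagated through the student only; the teacher's weights stay frozen.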
This is not the first time LexisNexis has built AI applications, even before launching its LexisNexis + AI platform in July 2024.
"We have used a lot of AI in the past, which was more around natural language processing, some deep learning and machine learning," said Reihl. "That really changed in November 2022 when ChatGPT was launched, because before that, a lot of the AI capabilities were behind the scenes. But once ChatGPT came out, the generative capabilities and the conversational capabilities were very intriguing to us."
Small, lightweight and fine-tuned models
Reihl said LexisNexis uses an assortment of models from most of the major model providers when building its AI platforms. LexisNexis + AI used Claude models from Anthropic, OpenAI's GPT models and a model from Mistral.
This multi-model approach helped break down each task users wanted to perform on the platform. To do this, LexisNexis had to architect its platform to switch between models.

"We will decompose whatever task is being performed into individual components, and then we identify the best large language model to support each component. One example is that we will use Mistral to assess the query the user entered," Reihl said.
For Protégé, the company wanted faster response times and models more fine-tuned for legal use cases. So it turned to what Reihl calls "fine-tuned" versions of models, essentially lower-weight versions of LLMs, or distilled models.

"You don't need GPT-4o to do a query assessment, so we use it for more sophisticated work, and we distill the models," he said.
When a Protégé user asks a question about a specific case, the first model it pings is a fine-tuned Mistral "to assess the query, then determine the purpose and intent of that query" before switching to the model best suited to complete the task. Reihl said the next model could be an LLM that generates new search queries or another model that summarizes results.
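The routing pattern described above can be sketched in a few lines: a cheap "query assessment" step labels the user's intent, and a dispatcher hands the request to whichever model handles that task. The intent labels, keyword classifier and handler names below are illustrative stand-ins, not LexisNexis's actual pipeline:

```python
def assess_query(query: str) -> str:
    """Stand-in for a small fine-tuned model that labels query intent.

    A real system would call a lightweight classifier model here; this
    hypothetical keyword check just makes the sketch self-contained.
    """
    q = query.lower()
    if "summarize" in q or "summary" in q:
        return "summarize"
    if "draft" in q or "write" in q:
        return "draft"
    return "search"  # default: treat it as a research query

def route(query: str) -> str:
    """Dispatch the query to the handler (model) suited to its intent."""
    handlers = {
        "search": lambda q: f"[search-model] new queries for: {q}",
        "summarize": lambda q: f"[summary-model] summary of: {q}",
        "draft": lambda q: f"[drafting-model] brief based on: {q}",
    }
    return handlers[assess_query(query)](query)
```

The design point is that the expensive, capable models only see requests the cheap classifier has already triaged, which is how a multi-model setup trades cost for latency per task.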
Right now, LexisNexis mostly relies on a fine-tuned Mistral model, though Reihl said it used a fine-tuned version of Claude "when it first came out; we don't use it in the product today, but we use it in other ways." LexisNexis is also interested in using other OpenAI models, especially since the company released new fine-tuning capabilities last year. LexisNexis is evaluating OpenAI's reasoning models, including o3, for its platforms.

Reihl added that it may also look at using Google's Gemini models.
LexisNexis grounds all of its AI platforms with its own knowledge graph to power retrieval-augmented generation (RAG) capabilities, especially since Protégé could help launch agentic processes later on.
The legal AI assistant race
Even before the advent of generative AI, LexisNexis was testing the viability of putting chatbots to work in the legal industry. In 2017, the company tested an AI assistant that would compete with IBM Watson-powered Ross. Protégé lives in the company's LexisNexis + AI platform, which brings together LexisNexis's AI services.
Protégé assists law firms with tasks that paralegals or associates tend to do. It helps write legal briefs and complaints grounded in the firm's documents and data, suggests next steps in a workflow, proposes new prompts to refine searches, drafts questions for depositions and discovery, links citations in filings for accuracy, generates timelines and, of course, summarizes complex legal documents.
"We see Protégé as the initial step in personalization and agentic capability," Reihl said. "Think about the different types of lawyers: mergers and acquisitions, litigators, real estate. It's going to continue to get more and more personalized based on the specific task you do. Our vision is that every legal professional will have a personal assistant to help them do their job based on what they do, not what other lawyers do."
Protégé now competes against other legal research and technology platforms. Thomson Reuters fine-tuned OpenAI's o1-mini model for its CoCounsel legal assistant. Harvey, which raised $300 million from investors including LexisNexis, also has a legal AI assistant.