At its GTC AI conference in San Jose, California, earlier this month, chipmaker Nvidia revealed a slew of partnerships, product announcements and AI platforms. Meanwhile, in San Francisco, Nvidia held quieter, closed-door sessions alongside the Game Developers Conference to show game and media makers how generative AI technology could power future video games.
Last year, Nvidia's GDC 2024 offerings included hands-on demos in which I could speak with AI-powered nonplayer characters, or NPCs, in mock conversations. They answered the things I typed with reasonable contextual responses (albeit somewhat bland ones). AI was also radically updating old games to give them a contemporary graphical look.
This year, at GDC 2025, Nvidia once again invited industry folks and press to a hotel near the Moscone Center, where the conference was held. In a large room ringed with computer rigs packed with its latest GeForce RTX 5070, 5080 and 5090 graphics processing units, the company showed off ways players could see AI make old games look new, offer new animation options and deepen NPC interactions.
Nvidia also showed how DLSS 4, the latest version of its GPU rendering technology, works to improve image quality, lighting and frame rates in modern games, features that affect players every day, even if these efforts are more conventional than Nvidia's other experiments. While some of these developments rely on studios implementing the new technology in their games, others are available now for players to try.
Making animations from text prompts
Nvidia detailed a new tool that creates character animations from text prompts, a bit like using ChatGPT inside iMovie to make your game's characters act out scripted movements. The goal? Saving developers time. Using the tool could turn an animation sequence that takes hours into a job of a few minutes.
The tool, called Body Motion, can be plugged into many digital content creation platforms; Nvidia product manager John Malaska, who ran my demo, used Autodesk Maya. To start the demonstration, Malaska set up a sample sequence in which he wanted one character to hop over a box, land and move forward. On the scene's timeline, he selected the moment for each of those three actions and wrote text prompts for the software to generate the animation. Then it was time to fiddle.
To refine his animation, he used Body Motion to generate four different variations of the character jumping and chose the one he wanted. (All the animations are generated from licensed motion capture data, Malaska said.) Then he specified exactly where he wanted the character to land, and then selected where it should end up. Body Motion simulated all the frames between those carefully chosen pivotal points, and boom: the animation segment was done.
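Body Motion's actual interface isn't public, so here is a purely hypothetical sketch of the workflow Malaska demonstrated: text prompts pinned to timeline markers, a pick from several generated takes, and the in-between frames filled in automatically. Every name in it is made up, and the linear in-betweening is a toy stand-in for what the tool reportedly synthesizes from licensed mocap data.

```python
# Hypothetical sketch of a Body Motion-style workflow; not Nvidia's API.
from dataclasses import dataclass

@dataclass
class Marker:
    frame: int        # where on the timeline this action happens
    prompt: str       # the text prompt written for that moment
    position: tuple   # chosen end pose, reduced here to a root position

def pick_take(marker: Marker, num_takes: int = 4) -> Marker:
    """Stand-in for generating four variations and letting the artist choose;
    a real tool would return full animation clips, not a single position."""
    takes = [marker] * num_takes   # pretend these are four distinct takes
    return takes[0]                # pretend the artist picked take #1

def fill_inbetweens(a: Marker, b: Marker):
    """Toy in-betweening: linear interpolation between two chosen keyframes."""
    span = b.frame - a.frame
    for f in range(a.frame, b.frame + 1):
        t = (f - a.frame) / span
        pos = tuple(pa + (pb - pa) * t for pa, pb in zip(a.position, b.position))
        yield f, pos

timeline = [
    Marker(0,  "hop over the box", (0.0, 0.0, 0.0)),
    Marker(24, "land",             (1.5, 0.0, 0.0)),
    Marker(48, "walk forward",     (4.0, 0.0, 0.0)),
]
chosen = [pick_take(m) for m in timeline]
for a, b in zip(chosen, chosen[1:]):
    for frame, pos in fill_inbetweens(a, b):
        print(frame, pos)
```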
In the next part of the demo, Malaska had the same character walk through a fountain courtyard to reach a set of stairs. By editing the text prompts and timeline markers, he could make the character sneak around the courtyard instead.
"We're excited about this," Malaska said. "It's really going to help people speed up and accelerate their workflows."
He pointed to situations in which a developer might receive an animation but want it to work a little differently, and would send it back to the animators for adjustments. That scenario is far more time-consuming if the animation is based on actual motion capture; if the game requires that kind of fidelity, getting mocap actors back in to record could take days, weeks or months. Tweaking the animation with Body Motion, which draws on a library of motion capture data, could sidestep all of that.
I'd be remiss not to worry on behalf of motion capture artists, and about whether Body Motion could be used to circumvent their work in part or in full. Charitably, the tool could be used for good, roughing out animations and storyboard-like sequences before professional artists are brought in to shoot the capture scenes. But like any tool, it all depends on who's using it.
Body Motion is scheduled to be released later in 2025 under an Nvidia Enterprise license.
Another stab at remastering Half-Life 2 with RTX Remix
At last year's GDC, I saw a chunk of Half-Life 2 remastered with RTX Remix, Nvidia's platform for modders that aims to breathe new life into old games. Nvidia's latest stab at reviving Valve's classic has been released as a free demo, which players can download on Steam to check out for themselves. What I saw in Nvidia's press room was ultimately a technology showcase (not the full game), but it still shows what RTX Remix can do to bring old games up to modern graphics expectations.
Last year's RTX Remix Half-Life 2 demo was about seeing how flat wall textures could be updated with depth effects, making them look like cobbled pavement, for example, and that's here too. Looking at a wall, "it seems like the bricks are jutting out because they use parallax occlusion maps," said Nyle Usmani, senior product manager for RTX Remix, who led the demo. But this year's showcase was more about the interplay of lighting, even down to simulating the shadow cast through the glass covering the dial of a gas meter.
Usmani walked me through all the lighting and fire effects, which spruced up some of the most harrowing parts of Half-Life 2's fallen Ravenholm area. But the most striking application was in a zone swarmed by the game's iconic headcrab enemies, where Usmani paused and pointed out how the backlighting filtered through the fleshy parts of the pallid, slug-like monsters, making them glow a translucent red, much like what happens when you hold a finger in front of a flashlight. In conjunction with GDC, Nvidia released this effect, called subsurface scattering, in a software development kit so game developers can start using it.
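For developers curious what such an effect boils down to, here is a minimal sketch of a cheap translucency term in the spirit of that glow, loosely modeled on a well-known real-time approximation presented at GDC 2011 (bend the light vector by the surface normal, compare it against the view direction, scale by thickness). It's my illustration, not the shader Nvidia shipped in its SDK.

```python
# Toy translucency term for back-lit thin surfaces; my sketch, not Nvidia's SDK.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v)) or 1.0
    return tuple(x / n for x in v)

def translucency(light_dir, view_dir, normal,
                 distortion=0.3, power=4.0, thickness=0.5):
    """How strongly light hitting the BACK of a surface bleeds through.

    `thickness` in [0, 1]: 0 = thin flesh (strong glow), 1 = opaque.
    All vectors point away from the surface (toward the light / the eye).
    """
    # Bend the light vector by the normal, then see how directly the viewer
    # is looking into the transmitted light.
    bent = normalize(tuple(l + n * distortion for l, n in zip(light_dir, normal)))
    back = max(0.0, dot(view_dir, tuple(-x for x in bent)))
    return (back ** power) * (1.0 - thickness)

# Camera looking straight at a headcrab-like blob lit from directly behind:
print(translucency(light_dir=(0, 0, -1), view_dir=(0, 0, 1),
                   normal=(0, 0, 1), thickness=0.1))   # ~0.9, a strong glow
```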
RTX Remix has other tricks, which Usmani pointed out, such as the new neural shading in the latest version of the platform that's featured in the Half-Life 2 demo. In one example, he swapped between the old and new RTX Remix versions, showing how, in the new one, light properly filtered through the broken rafters of a garage. Better still, the frame rate jumped to 100 frames per second, up from 87.
"Traditionally, we would trace a ray and bounce it many times to illuminate a room," Usmani said. "Now we trace a ray and bounce it only two to three times and then we terminate it, and the AI infers the many bounces after that. Over enough frames, it looks as if it's calculating an infinite amount of bounces, so we're able to get more accuracy, because we're tracing fewer rays (and we get more performance)."
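The numbers in that quote suggest a simple way to see why the trick works. Below is a toy sketch (mine, not Nvidia's code) where the "scene" is a single diffuse surface with constant albedo, so the infinite-bounce answer is a geometric series: tracing three bounces and letting a stand-in "cache" supply the untraced tail reproduces the reference exactly.

```python
# Toy model of truncated path tracing plus a learned tail estimate.
ALBEDO = 0.6       # fraction of light a surface re-emits at each bounce
EMITTED = 1.0      # light contributed toward the camera at each hit
REAL_BOUNCES = 3   # bounces actually traced, per Usmani's "two to three"

def traced_radiance(num_bounces: int) -> float:
    """Radiance from explicitly tracing `num_bounces` bounces."""
    return sum(EMITTED * ALBEDO**k for k in range(num_bounces))

def cache_estimate(after_bounces: int) -> float:
    """Stand-in for the AI radiance cache: the light the untraced tail of
    the bounce chain would contribute (known analytically in this toy)."""
    return EMITTED * ALBEDO**after_bounces / (1.0 - ALBEDO)

truncated = traced_radiance(REAL_BOUNCES) + cache_estimate(REAL_BOUNCES)
reference = EMITTED / (1.0 - ALBEDO)   # the true infinite-bounce sum

print(f"3 traced bounces alone:    {traced_radiance(REAL_BOUNCES):.4f}")  # 1.9600
print(f"3 bounces + cache tail:    {truncated:.4f}")                      # 2.5000
print(f"infinite-bounce reference: {reference:.4f}")                      # 2.5000
```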
Mind you, I was watching the demo on a rig with an RTX 5070 GPU, which sells for $550, and the demo requires at least an RTX 3060 Ti, so owners of older graphics cards are out of luck. "It's because path tracing is very expensive; I mean, it's the future, basically bleeding edge, and it's the most advanced path tracing," Usmani said.
Nvidia ACE uses AI to help NPCs think
Last year's NPC AI demo showed how nonplayer characters can respond uniquely to a player, but this year's Nvidia ACE tech demo showed how players can suggest new thoughts to NPCs that will change their behavior and the lives of those around them.
The GPU maker showed off the technology in InZOI, a Sims-like game in which players look after NPCs with their own behaviors. In an upcoming update, players will be able to switch on Smart Zoi, which uses Nvidia ACE to plant thoughts directly in the minds of the Zois (the game's characters) they oversee, and then watch them react accordingly. Those thoughts can't conflict with the characters' own traits, explained Wynne Riawan, a GeForce tech marketing analyst at Nvidia, so they'll send the Zoi in logical directions.
"So, by encouraging them with, for example, 'I want to make people feel better,' it'll encourage them to speak to more Zois around them," Riawan said. "The key word is try: They can still fail. They're just like humans."
Riawan popped a thought into a Zoi's head: "What if I'm just an AI in a simulation?" The poor Zoi freaked out, but still ran to the public bathroom to brush her teeth, which apparently fit her trait of caring about dental hygiene.
These NPC actions are driven from the players' entered thoughts by a small language model with half a billion parameters (large language models can range from 1 billion to more than 30 billion parameters, with more parameters giving a greater chance of nuanced responses). The one used in the game is based on the 8-billion-parameter Mistral NeMo Minitron model, shrunk down so it can be used by older and less powerful GPUs.
"We purposely distill the model into a smaller model so that it's accessible to more people," Riawan said.
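For a sense of what driving NPC thoughts with a small on-device model might look like, here is a rough sketch using the open-source Hugging Face Transformers library. The checkpoint name is a placeholder (the roughly 500-million-parameter distill InZOI uses isn't publicly named here), and the prompt scheme is my guess at how an injected idea could be blended with fixed traits so it can't contradict them, per Riawan's description.

```python
# Sketch only: placeholder model name, not Krafton's or Nvidia's actual code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "placeholder/minitron-0.5b-distilled"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)

def npc_thought(traits: str, injected_idea: str) -> str:
    """Blend a player-injected idea with the character's fixed traits, so the
    result stays consistent with who the NPC is."""
    prompt = (
        f"You are a character whose traits are: {traits}. "
        f"A new thought enters your mind: \"{injected_idea}\". "
        "In one sentence, describe what you do next, staying true to your traits."
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=40)
    # Strip the prompt tokens and return only the newly generated text.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

print(npc_thought("tidy, anxious, cares about dental hygiene",
                  "What if I'm just an AI in a simulation?"))
```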
Riawan said the Nvidia ACE tech runs on-device, using players' GPUs; Krafton, InZOI's publisher, recommends a minimum GPU spec of an Nvidia RTX 3060 with 8GB of video memory to use the feature. Krafton gave Nvidia a "budget" of one gigabyte of VRAM to ensure the graphics card has enough resources left to render, well, the graphics. Hence the need to shrink down the parameters.
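That one-gigabyte budget lines up neatly with the half-billion-parameter figure: at 16-bit precision, the weights alone land just under a gigabyte, before counting activations or the model's working memory. A quick back-of-the-envelope check (my arithmetic, not Nvidia's):

```python
# Why a 1GB VRAM budget points at a roughly half-billion-parameter model.
params = 500_000_000        # ~0.5B parameters, per the article
bytes_per_param_fp16 = 2    # 16-bit weights cost two bytes each
weights_gib = params * bytes_per_param_fp16 / 1024**3
print(f"FP16 weights alone: {weights_gib:.2f} GiB")  # ~0.93 GiB, near the budget
```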
Nvidia is still discussing internally how, or whether, to unlock the ability to use larger language models if players have more powerful GPUs. Players might be able to tell the difference, as NPCs "do react more dynamically, since they respond better to your surroundings with a bigger model," Riawan said. "Right now, with this one, the focus is mostly on their thoughts and feelings."
An early access version of the Smart Zoi feature will roll out to all users for free, starting March 28. Nvidia sees it, and Nvidia ACE broadly, as a stepping stone that could one day lead to truly dynamic NPCs.
"If you have an MMORPG with Nvidia ACE in it, NPCs won't be stagnant and just keep repeating the same dialogue; they could be more dynamic and generate their own responses based on your reputation or something like that," Riawan said. "Like, hey, you're a bad person, I don't want to sell my goods to you."