A follow-up question was answered correctly, but Gemini got the name of the game's first scorer wrong: the AI suggested it was Jahan Dotson. Dotson was shown getting a touchdown in the highlights with the score at 0-0, but the play was ruled out, an example of the kind of nuance the AI doesn't necessarily pick up on.
Gemini did successfully work out when the Kansas City Chiefs got their first points on the board, and even included a timestamp linking directly to that moment in the YouTube clip. It got the name of the scorer right too. Gemini appears to lean heavily on the commentary for sports clips, which isn't surprising.
Summarizing Video Content
Next, we pitted Gemini against a behind-the-scenes feature for The Grand Budapest Hotel, directed by Wes Anderson. The clip runs to four and a half minutes, and Gemini returned some responses almost instantly: the name of the movie being discussed, and a recap of the clip's main narrative.
However, everything again seems to be based on the audio (or the transcript); there doesn't appear to be any analysis of the actual video content. The AI couldn't identify who was speaking in the video, even though their names were shown on screen, and it wasn't able to say who the director was (even though this was also mentioned in the video description).
On the plus side, Gemini did produce an impressive summary of the video. It correctly identified some of the filmmaking challenges mentioned throughout, and laid out their timestamps, from the search for a location to stand in for the Grand Budapest to filling it with extras.
Summarizing Interviews
Finally, we tried Google Gemini with an interview: the UK's Channel 4 talking to Charlie Brooker and Siena Kelly about the latest series of Black Mirror (perhaps fitting for an article about AI). Gemini proved very capable at picking out talking points and adding timestamps, though the entire video is mostly people talking.
Again, there's no context about anything beyond the audio or text. Gemini couldn't tell where the interview took place, how the participants were behaving, or anything else about the video's visuals, which is worth bearing in mind if you use it yourself.
For videos where the answers you want are in the audio of a YouTube video, and in the accompanying transcript, Gemini does a good job of summarizing and providing accurate answers (provided the commentators mention when a touchdown is ruled out, as well as when one is scored). For any kind of visual information, you still have to watch the video yourself.