
Google’s AI model is now making descriptions for YouTube Shorts

YouTube Shorts are currently viewed more than 50 billion times per day.


Google recently merged its DeepMind and Google Brain teams into a single AI unit, Google DeepMind. The team has now revealed its first collaborative project: using its visual language model (VLM), Flamingo, to generate descriptions for YouTube Shorts. The goal is to address the limited metadata and descriptive information that accompanies Shorts, which makes them harder to surface through search.

Flamingo analyzes the initial frames of a video and generates a text description of the content, such as "a video of a man on a beach." These descriptions are stored as metadata, allowing YouTube to categorize videos more precisely and match search results to viewer queries.
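Google has not published how Flamingo is wired into YouTube's pipeline, so the sketch below is only an illustration of the general idea: sample an early frame, run it through a captioning model, and attach the result as metadata. It substitutes the open-source BLIP model from Hugging Face for Flamingo, and the file name and output format are assumptions, not anything YouTube has described.

```python
# Illustrative sketch only: Flamingo and YouTube's internal pipeline are not public.
# BLIP (an open-source captioning model) stands in to show the general flow:
# sample an early frame -> generate a caption -> store it as metadata.
import cv2  # pip install opencv-python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

def describe_short(video_path: str) -> dict:
    # Grab the first frame of the clip (the article says Flamingo looks at initial frames).
    capture = cv2.VideoCapture(video_path)
    ok, frame = capture.read()
    capture.release()
    if not ok:
        raise ValueError(f"Could not read a frame from {video_path}")

    # Convert OpenCV's BGR frame to an RGB PIL image for the model.
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    # Stand-in captioning model; the real system uses Google DeepMind's Flamingo VLM.
    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    caption = processor.decode(output_ids[0], skip_special_tokens=True)

    # Per the article, the description stays behind-the-scenes metadata, not user-facing text.
    return {"video": video_path, "generated_description": caption}

if __name__ == "__main__":
    print(describe_short("my_short.mp4"))  # hypothetical local file
```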

The introduction of Flamingo benefits both viewers and creators. Better discoverability means users can find relevant videos on YouTube more quickly, while creators gain visibility for their Shorts without having to write metadata and descriptions themselves.

YouTube Shorts, which are rapidly gaining popularity, are currently viewed more than 50 billion times per day. However, because of their shorter format and different viewing patterns, creators often omit metadata. Flamingo's generated descriptions help YouTube's systems understand what a Short contains, making it easier to match against user searches.

Google DeepMind says the generated descriptions will not be user-facing; they will remain behind-the-scenes metadata. The company emphasizes its commitment to accuracy and responsible AI practices, with safeguards in place to ensure that the generated text meets responsible standards and does not frame videos in a negative light.

Flamingo is already being applied to newly uploaded Shorts as well as a large corpus of existing videos, including the most-viewed ones. The current focus is on Shorts, though extending Flamingo to longer-form YouTube videos is conceivable. The need there is less pressing, since creators of longer videos typically add more comprehensive metadata and viewers often rely on titles and thumbnails for discovery.

As Google continues to integrate AI into its products, applying Flamingo to longer-form YouTube videos could significantly change search and discoverability on the platform. Ensuring the accuracy and reliability of Flamingo's generated descriptions will be crucial to avoiding unintended harm or misrepresentation.
