    Unlocking Multimodal Video Transcription with Gemini

    By Editor Times Featured, September 1, 2025



    A quick heads-up before we begin:

    • I'm a developer at Google Cloud. I'm happy to share this article and hope you'll learn a few things. Thoughts and opinions are entirely my own.
    • The source code for this article (and future updates) is available in this notebook (Apache License version 2.0).
    • You can experiment for free with Gemini in Google AI Studio and get an API key to call Gemini programmatically.
    • All images, unless otherwise noted, are by the author.

    ✨ Overview

    Traditional machine learning (ML) perception models typically focus on specific features and single modalities, deriving insights solely from natural language, speech, or vision analysis. Historically, extracting and consolidating information from multiple modalities has been challenging due to siloed processing, complex architectures, and the risk of information being "lost in translation." However, multimodal and long-context large language models (LLMs) like Gemini can overcome these issues by processing all modalities within the same context, opening up new possibilities.

    Moving beyond speech-to-text, this notebook explores how to achieve comprehensive video transcriptions by leveraging all available modalities. It covers the following topics:

    • A method for addressing new or complex problems with a multimodal LLM
    • A prompting technique for decoupling data and preserving attention: tabular extraction
    • Techniques for taking advantage of Gemini's 1M-token context in a single request
    • Practical examples of multimodal video transcriptions
    • Tips & optimizations

    🔥 Challenge

    To fully transcribe a video, we're looking to answer the following questions:

    • 1️⃣ What was said and when?
    • 2️⃣ Who are the speakers?
    • 3️⃣ Who said what?

    Can we solve this problem in a simple and efficient way?


    🌟 State of the art

    1️⃣ What was said and when?

    This is a known problem with an existing solution:

    • Speech-to-Text (STT) is a process that takes an audio input and transforms speech into text. STT can provide timestamps at the word level. It is also known as automatic speech recognition (ASR).

    Over the last decade, task-specific ML models have addressed this most effectively.


    2️⃣ Who are the speakers?

    We can retrieve speaker names in a video from two sources:

    • What's written (e.g., speakers may be introduced with on-screen information when they first speak)
    • What's spoken (e.g., "Hello Bob!", "Alice! How are you doing?")

    Vision and Natural Language Processing (NLP) models can help with the following features:

    • Vision: Optical Character Recognition (OCR), also called text detection, extracts the text visible in images.
    • Vision: Person Detection identifies if and where people are in an image.
    • NLP: Entity Extraction can identify named entities in text.

    3️⃣ Who said what?

    This is another known problem with a partial solution (complementary to Speech-to-Text):

    • Speaker Diarization (also known as speaker turn segmentation) is a process that splits an audio stream into segments for the different detected speakers ("Speaker A", "Speaker B", and so on).

    Researchers have made significant progress in this field for decades, notably with ML models in recent years, but it is still an active area of research. Current solutions have shortcomings, such as requiring human supervision and hints (e.g., the minimum and maximum number of speakers, the language spoken), and supporting a limited set of languages.


    🏺 Traditional ML pipeline

    Solving all of 1️⃣, 2️⃣, and 3️⃣ isn't easy. It would likely involve setting up an elaborate supervised processing pipeline, based on several state-of-the-art ML models, such as the following:

    We would need days or even weeks to design and set up such a pipeline. Moreover, at the time of writing, our multimodal-video-transcription challenge is not a solved problem, so there is absolutely no certainty of reaching a viable solution.


    Gemini allows for rapid prompt-based problem solving. With just text instructions, we can extract information and transform it into new insights, through a simple and automated workflow.

    🎬 Multimodal

    Gemini is natively multimodal, which means it can process different types of inputs:

    • text
    • image
    • audio
    • video
    • document

    🌐 Multilingual

    Gemini is also multilingual:

    • It can process inputs and generate outputs in 100+ languages
    • If we can solve the video challenge for one language, the solution should naturally extend to all other languages

    🧰 A natural-language toolbox

    Multimodal and multilingual understanding in a single model lets us shift from relying on task-specific ML models to using a single versatile LLM.

    Our challenge now looks a lot simpler:

    natural-language toolbox with gemini (L. Picard)

    In other words, let's rephrase our challenge: Can we fully transcribe a video with just the following?

    • 1 video
    • 1 prompt
    • 1 request

    Let's try with Gemini…


    🏁 Setup

    🐍 Python packages

    We'll use the following packages:

    • google-genai: the Google Gen AI Python SDK lets us call Gemini with a few lines of code
    • pandas for data visualization

    We'll also use these packages (dependencies of google-genai):

    • pydantic for data management
    • tenacity for request management
    pip install --quiet "google-genai>=1.31.0" "pandas[output-formatting]"

    🔗 Gemini API

    We have two main options for sending requests to Gemini:

    • Vertex AI: Build enterprise-ready projects on Google Cloud
    • Google AI Studio: Experiment, prototype, and deploy small projects

    The Google Gen AI SDK provides a unified interface to these APIs, and we can use environment variables for the configuration.

    Option A – Gemini API via Vertex AI 🔽

    Requirements:

    • A Google Cloud project
    • The Vertex AI API must be enabled for this project

    Gen AI SDK environment variables:
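
    • GOOGLE_GENAI_USE_VERTEXAI="True"
    • GOOGLE_CLOUD_PROJECT="" (your Google Cloud project ID)
    • GOOGLE_CLOUD_LOCATION="global" (or a specific region)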

    Learn more about setting up a project and a development environment.

    Option B – Gemini API via Google AI Studio 🔽

    Requirement:
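
    • A Gemini API key (created in Google AI Studio)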

    Gen AI SDK environment variables:

    • GOOGLE_GENAI_USE_VERTEXAI="False"
    • GOOGLE_API_KEY=""

    Learn more about getting a Gemini API key from Google AI Studio.

    💡 You can store your environment configuration outside of the source code:

    | Environment | Method |
    |---|---|
    | IDE | .env file (or equivalent) |
    | Colab | Colab Secrets (🗝️ icon in left panel, see code below) |
    | Colab Enterprise | Google Cloud project and location are automatically defined |
    | Vertex AI Workbench | Google Cloud project and location are automatically defined |
    Define the following environment detection functions. You can also define your configuration manually if needed. 🔽
    import os
    import sys
    from collections.abc import Callable
    
    from google import genai
    
    # Manual setup (leave unchanged if setup is environment-defined)

    # @markdown **Which API: Vertex AI or Google AI Studio?**
    GOOGLE_GENAI_USE_VERTEXAI = True  # @param {type: "boolean"}

    # @markdown **Option A - Google Cloud project [+location]**
    GOOGLE_CLOUD_PROJECT = ""  # @param {type: "string"}
    GOOGLE_CLOUD_LOCATION = "global"  # @param {type: "string"}

    # @markdown **Option B - Google AI Studio API key**
    GOOGLE_API_KEY = ""  # @param {type: "string"}
    
    
    def check_environment() -> bool:
        check_colab_user_authentication()
        return check_manual_setup() or check_vertex_ai() or check_colab() or check_local()
    
    
    def check_manual_setup() -> bool:
        return check_define_env_vars(
            GOOGLE_GENAI_USE_VERTEXAI,
            GOOGLE_CLOUD_PROJECT.strip(),  # Might have been pasted with a line return
            GOOGLE_CLOUD_LOCATION,
            GOOGLE_API_KEY,
        )
    
    
    def check_vertex_ai() -> bool:
        # Workbench and Colab Enterprise
        match os.getenv("VERTEX_PRODUCT", ""):
            case "WORKBENCH_INSTANCE":
                pass
            case "COLAB_ENTERPRISE":
                if not running_in_colab_env():
                    return False
            case _:
                return False
    
        return check_define_env_vars(
            True,
            os.getenv("GOOGLE_CLOUD_PROJECT", ""),
            os.getenv("GOOGLE_CLOUD_REGION", ""),
            "",
        )
    
    
    def check_colab() -> bool:
        if not running_in_colab_env():
            return False
    
        # Colab Enterprise was checked before, so this is Colab only
        from google.colab import auth as colab_auth  # type: ignore
    
        colab_auth.authenticate_user()
    
        # Use Colab Secrets (🗝️ icon in left panel) to store the environment variables
        # Secrets are private, visible only to you and the notebooks that you select
        # - Vertex AI: Store your settings as secrets
        # - Google AI: Directly import your Gemini API key from the UI
        vertexai, project, location, api_key = get_vars(get_colab_secret)

        return check_define_env_vars(vertexai, project, location, api_key)
    
    
    def check_local() -> bool:
        vertexai, project, location, api_key = get_vars(os.getenv)

        return check_define_env_vars(vertexai, project, location, api_key)
    
    
    def running_in_colab_env() -> bool:
        # Colab or Colab Enterprise
        return "google.colab" in sys.modules
    
    
    def check_colab_user_authentication() -> None:
        if running_in_colab_env():
            from google.colab import auth as colab_auth  # type: ignore
    
            colab_auth.authenticate_user()
    
    
    def get_colab_secret(secret_name: str, default: str) -> str:
        from google.colab import userdata  # type: ignore

        try:
            return userdata.get(secret_name)
        except Exception:
            return default
    
    
    def get_vars(getenv: Callable[[str, str], str]) -> tuple[bool, str, str, str]:
        # Limit getenv calls to the minimum (may trigger UI confirmation for secret access)
        vertexai_str = getenv("GOOGLE_GENAI_USE_VERTEXAI", "")
        if vertexai_str:
            vertexai = vertexai_str.lower() in ["true", "1"]
        else:
            vertexai = bool(getenv("GOOGLE_CLOUD_PROJECT", ""))

        project = getenv("GOOGLE_CLOUD_PROJECT", "") if vertexai else ""
        location = getenv("GOOGLE_CLOUD_LOCATION", "") if project else ""
        api_key = getenv("GOOGLE_API_KEY", "") if not project else ""

        return vertexai, project, location, api_key
    
    
    def check_define_env_vars(
        vertexai: bool,
        project: str,
        location: str,
        api_key: str,
    ) -> bool:
        match (vertexai, bool(project), bool(location), bool(api_key)):
            case (True, True, _, _):
                # Vertex AI - Google Cloud project [+location]
                location = location or "global"
                define_env_vars(vertexai, project, location, "")
            case (True, False, _, True):
                # Vertex AI - API key
                define_env_vars(vertexai, "", "", api_key)
            case (False, _, _, True):
                # Google AI Studio - API key
                define_env_vars(vertexai, "", "", api_key)
            case _:
                return False
    
        return True
    
    
    def define_env_vars(vertexai: bool, project: str, location: str, api_key: str) -> None:
        os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = str(vertexai)
        os.environ["GOOGLE_CLOUD_PROJECT"] = project
        os.environ["GOOGLE_CLOUD_LOCATION"] = location
        os.environ["GOOGLE_API_KEY"] = api_key
    
    
    def check_configuration(client: genai.Client) -> None:
        service = "Vertex AI" if client.vertexai else "Google AI Studio"
        print(f"Using the {service} API", end="")

        if client._api_client.project:
            print(f' with project "{client._api_client.project[:7]}…"', end="")
            print(f' in location "{client._api_client.location}"')
        elif client._api_client.api_key:
            api_key = client._api_client.api_key
            print(f' with API key "{api_key[:5]}…{api_key[-5:]}"', end="")
            print(f" (in case of error, make sure it was created for {service})")

    🤖 Gen AI SDK

    To send Gemini requests, create a google.genai client:

    from google import genai
    
    check_environment()
    
    client = genai.Client()

    Verify your configuration:

    check_configuration(client)
    Using the Vertex AI API with project "lpdemo-…" in location "europe-west9"

    🧠 Gemini model

    Gemini comes in different versions.

    Let's get started with Gemini 2.0 Flash, as it offers both high performance and low latency:

    • GEMINI_2_0_FLASH = "gemini-2.0-flash"

    💡 We pick Gemini 2.0 Flash deliberately. The Gemini 2.5 model family is generally available and even more capable, but we want to experiment and understand Gemini's core multimodal behavior. If we complete our challenge with 2.0, it should also work with newer models.


    ⚙️ Gemini configuration

    Gemini can be used in various ways, ranging from factual to creative mode. The problem we're trying to solve is a data extraction use case, so we want results that are as factual and deterministic as possible. For this, we can adjust the content generation parameters.

    We'll set the temperature, top_p, and seed parameters to minimize randomness:

    • temperature=0.0
    • top_p=0.0
    • seed=42 (arbitrary fixed value)
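
    In code, this is a minimal sketch (the GenerateContentConfig type and the values shown here match the DEFAULT_CONFIG defined in the helpers further below):

    from google.genai.types import GenerateContentConfig

    # More deterministic outputs for data extraction
    config = GenerateContentConfig(
        temperature=0.0,
        top_p=0.0,
        seed=42,  # arbitrary fixed value
    )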

    🎞️ Video sources

    Here are the main video sources that Gemini can analyze:

    | source | URI | Vertex AI | Google AI Studio |
    |---|---|---|---|
    | Google Cloud Storage | gs://bucket/path/to/video.* | ✅ | |
    | Web URL | https://path/to/video.* | ✅ | |
    | YouTube | https://www.youtube.com/watch?v=YOUTUBE_ID | ✅ | ✅ |

    ⚠️ Important notes

    • Our video test suite primarily uses public YouTube videos, for simplicity.
    • When analyzing YouTube sources, Gemini receives raw audio/video streams without any additional metadata, exactly as if it were processing the corresponding video files from Cloud Storage.
    • YouTube does offer caption/subtitle/transcript features (user-provided or auto-generated). However, these features focus on word-level speech-to-text and are limited to 40+ languages. Gemini does not receive any of this data, and you'll see that a multimodal transcription with Gemini provides additional benefits.
    • Moreover, our challenge also involves identifying speakers and extracting speaker data, a new capability.

    🛠️ Helpers

    Define our helper functions and data 🔽
    import enum
    from dataclasses import dataclass
    from datetime import timedelta
    
    import IPython.display
    import tenacity
    from google.genai.errors import ClientError
    from google.genai.types import (
        FileData,
        FinishReason,
        GenerateContentConfig,
        GenerateContentResponse,
        Part,
        VideoMetadata,
    )
    
    
    class Model(enum.Enum):
        # Generally Available (GA)
        GEMINI_2_0_FLASH = "gemini-2.0-flash"
        GEMINI_2_5_FLASH = "gemini-2.5-flash"
        GEMINI_2_5_PRO = "gemini-2.5-pro"
        # Default model
        DEFAULT = GEMINI_2_0_FLASH


    # Default configuration for more deterministic outputs
    DEFAULT_CONFIG = GenerateContentConfig(
        temperature=0.0,
        top_p=0.0,
        seed=42,  # Arbitrary fixed value
    )
    
    YOUTUBE_URL_PREFIX = "https://www.youtube.com/watch?v="
    CLOUD_STORAGE_URI_PREFIX = "gs://"
    
    
    def url_for_youtube_id(youtube_id: str) -> str:
        return f"{YOUTUBE_URL_PREFIX}{youtube_id}"
    
    
    class Video(enum.Enum):
        pass
    
    
    class TestVideo(Video):
        # For testing purposes, the video duration is statically specified in the enum name
        # Suffix (ISO 8601 based): _PT[H][M][S]
    
        # Google DeepMind | The Podcast | Season 3 Trailer | 59s
        GDM_PODCAST_TRAILER_PT59S = url_for_youtube_id("0pJn3g8dfwk")
        # Google Maps | Walk in the footsteps of Jane Goodall | 2min 42s
        JANE_GOODALL_PT2M42S = "gs://cloud-samples-data/video/JaneGoodall.mp4"
        # Google DeepMind | AlphaFold | The making of a scientific breakthrough | 7min 54s
        GDM_ALPHAFOLD_PT7M54S = url_for_youtube_id("gg7WjuFs8F4")
        # Brut | French reportage | 8min 28s
        BRUT_FR_DOGS_WATER_LEAK_PT8M28S = url_for_youtube_id("U_yYkb-ureI")
        # Google DeepMind | The Podcast | AI for science | 54min 23s
        GDM_AI_FOR_SCIENCE_FRONTIER_PT54M23S = url_for_youtube_id("nQKmVhLIGcs")
        # Google I/O 2025 | Developer Keynote | 1h 10min 03s
        GOOGLE_IO_DEV_KEYNOTE_PT1H10M03S = url_for_youtube_id("GjvgtwSOCao")
        # Google Cloud | Next 2025 | Opening Keynote | 1h 40min 03s
        GOOGLE_CLOUD_NEXT_PT1H40M03S = url_for_youtube_id("Md4Fs-Zc3tg")
        # Google I/O 2025 | Keynote | 1h 56min 35s
        GOOGLE_IO_KEYNOTE_PT1H56M35S = url_for_youtube_id("o8NiE3XMPrM")
    
    
    class ShowAs(enum.Enum):
        DONT_SHOW = enum.auto()
        TEXT = enum.auto()
        MARKDOWN = enum.auto()
    
    
    @dataclass
    class VideoSegment:
        start: timedelta
        end: timedelta
    
    
    def generate_content(
        prompt: str,
        video: Video | None = None,
        video_segment: VideoSegment | None = None,
        model: Model | None = None,
        config: GenerateContentConfig | None = None,
        show_as: ShowAs = ShowAs.TEXT,
    ) -> None:
        prompt = prompt.strip()
        model = model or Model.DEFAULT
        config = config or DEFAULT_CONFIG

        model_id = model.value
        if video:
            if not (video_part := get_video_part(video, video_segment)):
                return
            contents = [video_part, prompt]
            caption = f"{video.name} / {model_id}"
        else:
            contents = prompt
            caption = f"{model_id}"
        print(f" {caption} ".center(80, "-"))

        for attempt in get_retrier():
            with attempt:
                response = client.models.generate_content(
                    model=model_id,
                    contents=contents,
                    config=config,
                )
                display_response_info(response)
                display_response(response, show_as)
    
    
    def get_video_part(
        video: Video,
        video_segment: VideoSegment | None = None,
        fps: float | None = None,
    ) -> Part | None:
        video_uri: str = video.value

        if not client.vertexai:
            video_uri = convert_to_https_url_if_cloud_storage_uri(video_uri)
            if not video_uri.startswith(YOUTUBE_URL_PREFIX):
                print("Google AI Studio API: Only YouTube URLs are currently supported")
                return None

        file_data = FileData(file_uri=video_uri, mime_type="video/*")
        video_metadata = get_video_part_metadata(video_segment, fps)

        return Part(file_data=file_data, video_metadata=video_metadata)
    
    
    def get_video_part_metadata(
        video_segment: VideoSegment | None = None,
        fps: float | None = None,
    ) -> VideoMetadata:
        def offset_as_str(offset: timedelta) -> str:
            return f"{offset.total_seconds()}s"
    
        if video_segment:
            start_offset = offset_as_str(video_segment.start)
            end_offset = offset_as_str(video_segment.end)
        else:
            start_offset = None
            end_offset = None
    
        return VideoMetadata(start_offset=start_offset, end_offset=end_offset, fps=fps)
    
    
    def convert_to_https_url_if_cloud_storage_uri(uri: str) -> str:
        if uri.startswith(CLOUD_STORAGE_URI_PREFIX):
            return f"https://storage.googleapis.com/{uri.removeprefix(CLOUD_STORAGE_URI_PREFIX)}"
        else:
            return uri
    
    
    def get_retrier() -> tenacity.Retrying:
        return tenacity.Retrying(
            stop=tenacity.stop_after_attempt(7),
            wait=tenacity.wait_incrementing(start=10, increment=1),
            retry=should_retry_request,
            reraise=True,
        )
    
    
    def should_retry_request(retry_state: tenacity.RetryCallState) -> bool:
        if not retry_state.outcome:
            return False
        err = retry_state.outcome.exception()
        if not isinstance(err, ClientError):
            return False
        print(f"❌ ClientError {err.code}: {err.message}")
    
        retry = False
        match err.code:
            case 400 if err.message is not None and " try again " in err.message:
                # Workshop: project accessing Cloud Storage for the first time (service agent provisioning)
                retry = True
            case 429:
                # Workshop: temporary project with 1 QPM quota
                retry = True
        print(f"🔄 Retry: {retry}")
    
        return retry
    
    
    def display_response_info(response: GenerateContentResponse) -> None:
        if usage_metadata := response.usage_metadata:
            if usage_metadata.prompt_token_count:
                print(f"Enter tokens   : {usage_metadata.prompt_token_count:9,d}")
            if usage_metadata.candidates_token_count:
                print(f"Output tokens  : {usage_metadata.candidates_token_count:9,d}")
            if usage_metadata.thoughts_token_count:
                print(f"Ideas tokens: {usage_metadata.thoughts_token_count:9,d}")
        if not response.candidates:
            print("❌ No `response.candidates`")
            return
        if (finish_reason := response.candidates[0].finish_reason) != FinishReason.STOP:
            print(f"❌ {finish_reason = }")
        if not response.text:
            print("❌ No `response.text`")
            return
    
    
    def display_response(
        response: GenerateContentResponse,
        show_as: ShowAs,
    ) -> None:
        if show_as == ShowAs.DONT_SHOW:
            return
        if not (response_text := response.text):
            return
        response_text = response_text.strip()

        print(" start of response ".center(80, "-"))
        match show_as:
            case ShowAs.TEXT:
                print(response_text)
            case ShowAs.MARKDOWN:
                display_markdown(response_text)
        print(" end of response ".center(80, "-"))
    
    
    def display_markdown(markdown: str) -> None:
        IPython.display.display(IPython.display.Markdown(markdown))
    
    
    def display_video(video: Video) -> None:
        video_url = convert_to_https_url_if_cloud_storage_uri(video.value)
        assert video_url.startswith("https://")
    
        video_width = 600
        if video_url.startswith(YOUTUBE_URL_PREFIX):
            youtube_id = video_url.removeprefix(YOUTUBE_URL_PREFIX)
            ipython_video = IPython.display.YouTubeVideo(youtube_id, width=video_width)
        else:
            ipython_video = IPython.display.Video(video_url, width=video_width)
    
        display_markdown(f"### Video ([source]({video_url}))")
        IPython.display.display(ipython_video)

    🧪 Prototyping

    🌱 Natural behavior

    Before diving any deeper, it's interesting to see how Gemini responds to simple instructions, to develop some intuition about its natural behavior.

    Let's first see what we get with minimalistic prompts and a short English video.

    video = TestVideo.GDM_PODCAST_TRAILER_PT59S
    display_video(video)
    
    immediate = "Transcribe the video's audio with time info."
    generate_content(immediate, video)
    

    Video (source)

    ----------------- GDM_PODCAST_TRAILER_PT59S / gemini-2.0-flash -----------------
    Input tokens   :    16,708
    Output tokens  :       421
    ------------------------------ start of response -------------------------------
    [00:00:00] Do I have to call you Sir Demis now?
    [00:00:01] Oh, you don't.
    [00:00:02] Absolutely not.
    [00:00:04] Welcome to Google DeepMind the podcast, with me, your host Professor Hannah Fry.
    [00:00:06] We want to take you to the heart of where these ideas are coming from.
    [00:00:12] We want to introduce you to the people who are leading the design of our collective future.
    [00:00:19] Getting the safety right is probably, I would say, one of the most important challenges of our time.
    [00:00:25] I want safe and capable.
    [00:00:27] I want a bridge that will not collapse.
    [00:00:30] just give these scientists a superpower that they had not imagined before.
    [00:00:34] autonomous vehicles.
    [00:00:35] It's hard to fathom that when you're working on a search engine.
    [00:00:38] We may see entirely new genres or entirely new forms of art arise.
    [00:00:42] There may be a new word that isn't music, painting, photography, film making, and that AI will have helped us create it.
    [00:00:48] You actually need AGI to be able to peer into the mysteries of the universe.
    [00:00:51] Yes, quantum mechanics, string theory, well, and the nature of reality.
    [00:00:55] Ow.
    [00:00:57] the magic of AI.
    ------------------------------- end of response --------------------------------

    Results:

    • Gemini naturally outputs a list of [time] transcript lines.
    • That's Speech-to-Text in one line!
    • It looks like we can answer "1️⃣ What was said and when?".

    Now, what about "2️⃣ Who are the speakers?"

    immediate = "Checklist the audio system identifiable within the video."
    generate_content(immediate, video)
    ----------------- GDM_PODCAST_TRAILER_PT59S / gemini-2.0-flash -----------------
    Input tokens   :    16,705
    Output tokens  :        46
    ------------------------------ start of response -------------------------------
    Here are the speakers identifiable in the video:
    
    *   Professor Hannah Fry
    *   Demis Hassabis
    *   Anca Dragan
    *   Pushmeet Kohli
    *   Jeff Dean
    *   Douglas Eck
    ------------------------------- end of response --------------------------------

    Results:

    • Gemini can consolidate the names shown on title cards during the video.
    • That's OCR + entity extraction in one line!
    • "2️⃣ Who are the speakers?" looks solved too!

    ⏩ Not so fast!

    The natural next step is to jump to the final instructions, to solve our problem once and for all.

    immediate = """
    Transcribe the video's audio together with speaker names (use "?" if not discovered).
    
    Format instance:
    [00:02] John Doe - Hi there Alice!
    """
    generate_content(immediate, video)
    ----------------- GDM_PODCAST_TRAILER_PT59S / gemini-2.0-flash -----------------
    Input tokens   :    16,732
    Output tokens  :       378
    ------------------------------ start of response -------------------------------
    Here is the audio transcription of the video:
    
    [00:00] ? - Do I have to call you Sir Demis now?
    [00:01] Demis Hassabis - Oh, you don't. Absolutely not.
    [00:04] Professor Hannah Fry - Welcome to Google DeepMind the podcast, with me, your host, Professor Hannah Fry.
    [00:06] Professor Hannah Fry - We want to take you to the heart of where these ideas are coming from. We want to introduce you to the people who are leading the design of our collective future.
    [00:19] Anca Dragan - Getting the safety right is probably, I would say, one of the most important challenges of our time. I want safe and capable. I want a bridge that will not collapse.
    [00:29] Pushmeet Kohli - Just give these scientists a superpower that they had not imagined before.
    [00:34] Jeff Dean - Autonomous vehicles. It's hard to fathom that when you're working on a search engine.
    [00:38] Douglas Eck - We may see entirely new genres or entirely new forms of art arise. There may be a new word that isn't music, painting, photography, film making, and that AI will have helped us create it.
    [00:48] Professor Hannah Fry - You actually need AGI to be able to peer into the mysteries of the universe.
    [00:51] Demis Hassabis - Yes, quantum mechanics, string theory, well, and the nature of reality.
    [00:55] Professor Hannah Fry - Ow!
    [00:57] Douglas Eck - The magic of AI.
    ------------------------------- end of response --------------------------------

    This is almost correct. The first segment isn't attributed to the host (who is only introduced a bit later), but everything else looks right.

    However, these aren't real-world conditions:

    • The video is very short (less than a minute)
    • The video is also rather simple (speakers are clearly introduced with on-screen title cards)

    Let's try with this 8-minute (and more complex) video:

    generate_content(prompt, TestVideo.GDM_ALPHAFOLD_PT7M54S)
    Output 🔽
    ------------------- GDM_ALPHAFOLD_PT7M54S / gemini-2.0-flash -------------------
    Input tokens   :   134,177
    Output tokens  :     2,689
    ------------------------------ start of response -------------------------------
    [00:02] ? - We've discovered more about the world than any other civilization before us.
    [00:08] ? - But we have been stuck on this one problem.
    [00:11] ? - How do proteins fold up?
    [00:13] ? - How do proteins go from a string of amino acids to a compact shape that acts as a machine and drives life?
    [00:22] ? - When you find out about proteins, it's very exciting.
    [00:25] ? - You can think of them as little biological nano machines.
    [00:28] ? - They are essentially the fundamental building blocks that power everything living on this planet.
    [00:34] ? - If we can reliably predict protein structures using AI, that could change the way we understand the natural world.
    [00:46] ? - Protein folding is one of these holy grail type problems in biology.
    [00:50] Demis Hassabis - We've always hypothesized that AI should be helpful to make these kinds of big scientific breakthroughs more quickly.
    [00:58] ? - And then there will probably be little tunings that will make a difference.
    [01:02] ? - It should be making a histogram on and a background skill.
    [01:04] ? - We've been working on our system AlphaFold really hard now for over two years.
    [01:08] ? - Rather than having to do painstaking experiments, in the future biologists might be able to instead rely on AI methods to directly predict structures quickly and efficiently.
    [01:17] Kathryn Tunyasuvunakool - Generally speaking, biologists tend to be quite skeptical of computational work, and I think that skepticism is healthy and I respect it, but I feel very excited about what AlphaFold can achieve.
    [01:28] Andrew Senior - CASP is when we, we say, look, DeepMind is doing protein folding.
    [01:31] Andrew Senior - This is how good we are, and maybe it's better than everybody else, maybe it's not.
    [01:37] ? - We decided to enter the CASP competition because it represented the Olympics of protein folding.
    [01:44] John Moult - CASP, we started to try to speed up the solution to the protein folding problem.
    [01:50] John Moult - When we started CASP in 1994, I certainly was naive about how hard this was going to be.
    [01:58] ? - It was very cumbersome to do that because it took a long time.
    [02:01] ? - Let's see what, what, what are we doing still to improve?
    [02:03] ? - Typically 100 different groups from around the world participate in CASP, and we take a set of 100 proteins and we ask the groups to send us what they think the structures look like.
    [02:15] ? - We can reach 57.9 GDT on CASP 12 ground truth.
    [02:19] John Jumper - CASP has a metric on which you can be scored, which is this GDT metric.
    [02:25] John Jumper - On a scale of zero to 100, you would expect a GDT over 90 to be a solution to the problem.
    [02:33] ? - If we do achieve this, this has incredible medical relevance.
    [02:37] Pushmeet Kohli - The implications are immense, from how diseases progress, how you can discover new drugs.
    [02:45] Pushmeet Kohli - It's limitless.
    [02:46] ? - I wanted to make a, a really simple system and the results were surprisingly good.
    [02:50] ? - The team got some results with a new method, not only is it more accurate, but it's much faster than the old system.
    [02:56] ? - I think we'll significantly exceed what we're doing right now.
    [02:59] ? - This is a game, game changer, I think.
    [03:01] John Moult - In CASP 13, something very significant had happened.
    [03:06] John Moult - For the first time, we saw the effective application of artificial intelligence.
    [03:11] ? - We've advanced the state of the art in the field, so that's incredible, but we still have a long way to go before we've solved it.
    [03:18] ? - The shapes were now roughly correct for a lot of the proteins, but the details, exactly where each atom sits, which is really what we would call a solution, we're not yet there.
    [03:29] ? - It doesn't help if you have the tallest ladder when you're going to the moon.
    [03:33] ? - We hit a little bit of a brick wall, um, since we won CASP, then it was back to the drawing board and like what are our new ideas?
    [03:41] ? - Um, and then it's taken a little while, I'd say, for them to get back to where they were, but with the new ideas.
    [03:51] ? - They can go further, right?
    [03:52] ? - So, um, so that's a really important moment.
    [03:55] ? - I've seen that moment so many times now, but I know what that means now, and I know this is the time now to press.
    [04:02] ? - We need to double down and go as fast as possible from here.
    [04:05] ? - I think we've got no time to lose.
    [04:07] ? - So the intention is to enter CASP again.
    [04:09] ? - CASP is deeply worrying.
    [04:12] ? - There's something weird going on with, um, the learning because it's learning something that is correlated with GDT, but it's not calibrated.
    [04:18] ? - I feel slightly uncomfortable.
    [04:20] ? - We should be learning this, you know, in the blink of an eye.
    [04:23] ? - The technology advancing outside DeepMind is also doing incredible work.
    [04:27] Richard Evans - And there's always the possibility another team has come somewhere out there in the field that we don't even know about.
    [04:32] ? - Someone asked me, well, should we panic now?
    [04:33] ? - Of course, we should have been panicking before.
    [04:35] ? - It does seem to do better, but still doesn't do quite as well as the best model.
    [04:39] ? - Um, so it looks like there's room for improvement.
    [04:42] ? - There's always a risk that you've missed something, and that's why blind assessments like CASP are so important to validate whether our results are real.
    [04:49] ? - Obviously, I'm excited to see how CASP 14 goes.
    [04:51] ? - My expectation is we get our heads down, we focus on the full goal, which is to solve the whole problem.
    [05:14] ? - We've been waiting for CASP to start on April 15th because that's when it was originally scheduled to start, and it has been delayed by a month because of coronavirus.
    [05:24] ? - I really miss everyone.
    [05:25] ? - No, I struggled a little bit just kind of getting into a routine, especially, uh, my wife, she came down with the, the virus.
    [05:32] ? - I mean, luckily it didn't turn out too serious.
    [05:34] ? - CASP started on Monday.
    [05:37] Demis Hassabis - Can I just check this diagram you've got here, John, this one where we ask ground truth.
    [05:40] Demis Hassabis - Is this one we've done badly on?
    [05:42] ? - We're actually quite good in this region.
    [05:43] ? - If you imagine that we hadn't have said it came around this way, but had put it in.
    [05:47] ? - Yeah, and that instead.
    [05:48] ? - Yeah.
    [05:49] ? - One of the hardest proteins we've gotten in CASP so far is a SARS-CoV-2 protein, uh, called Orf8.
    [05:55] ? - Orf8 is a coronavirus protein.
    [05:57] ? - We tried really hard to improve our prediction, like really, really hard, probably the most time that we have ever spent on a single target.
    [06:05] ? - So we're about two-thirds of the way through CASP, and we've gotten three answers back.
    [06:11] ? - We now have a ground truth for Orf8, which is one of the coronavirus proteins.
    [06:17] ? - And it turns out we did really well in predicting that.
    [06:20] Demis Hassabis - Amazing job, everyone, the whole team.
    [06:23] Demis Hassabis - It's been an incredible effort.
    [06:24] John Moult - Here what we saw in CASP 14 was a group delivering atomic accuracy off the bat, essentially solving what in our world is two problems.
    [06:34] John Moult - How do you look to find the right solution, and then how do you recognize you've got the right solution when you're there?
    [06:41] ? - All right, are we, are we basically here?
    [06:46] ? - I'm going to read an email.
    [06:48] ? - Uh, I got this from John Moult.
    [06:50] ? - Now I'm just going to read it.
    [06:51] ? - It says, John, as I expect you know, your group has performed amazingly well in CASP 14, both relative to other groups and in absolute model accuracy.
    [07:02] ? - Congratulations on this work.
    [07:05] ? - It is really outstanding.
    [07:07] Demis Hassabis - AlphaFold represents a huge leap forward that I hope will really accelerate drug discovery and help us to better understand disease.
    [07:13] John Moult - It's pretty mind-blowing.
    [07:16] John Moult - You know, these results were, for me, having worked on this problem so long, after many, many stops and starts and will this ever get there, suddenly this is a solution.
    [07:28] John Moult - We've solved the problem.
    [07:29] John Moult - This gives you such joy about the way science works, about how you can never see exactly or even roughly what is going to happen next.
    [07:37] John Moult - There are always these surprises, and that really, as a scientist, is what keeps you going.
    [07:41] John Moult - What's going to be the next surprise?
    ------------------------------- end of response --------------------------------

    This falls apart: most segments have no identified speaker!

    We are trying to solve a new, complex problem, and LLMs haven't been trained on any known solution to it. This is likely why direct instructions don't yield the expected answer.

    At this stage:

    • We might conclude that we can't solve the problem for real-world videos.
    • Persevering with more and more elaborate prompts for this unsolved problem could well be a waste of time.

    Let's take a step back and think about what happens under the hood…


    ⚛️ Under the hood

    Modern LLMs are mostly built upon the Transformer architecture, a neural network design detailed in a 2017 paper by Google researchers titled Attention Is All You Need. The paper introduced the self-attention mechanism, a key innovation that fundamentally changed the way machines process language.

    🪙 Tokens

    Tokens are the LLM building blocks. We can think of a token as representing a piece of information.

    Examples of Gemini multimodal tokens (with default parameters):

    | content | tokens | details |
    |---|---|---|
    | hey | 1 | 1 token for common words/sequences |
    | passionately | 2 | passion•ately |
    | passionnément | 3 | passion•né•ment (same adverb in French) |
    | image | 258 | per image (or per tile, depending on image resolution) |
    | audio without timecodes | 25 / second | handled by the audio tokenizer |
    | video without audio | 258 / frame | handled by the video tokenizer at 1 frame per second |
    | MM:SS timecode | 5 | audio chunk or video frame temporal reference |
    | H:MM:SS timecode | 7 | same, for content longer than 1 hour |
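
    As a rough back-of-the-envelope example (an estimate, not an official formula): a 59-second video sampled at 1 FPS costs about 59 × 258 ≈ 15,200 video tokens plus 59 × 25 ≈ 1,500 audio tokens, i.e. roughly 16,700 tokens before counting the prompt itself, which is consistent with the ~16.7K input token counts reported for the 59-second trailer in the prototyping section above.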

    🎞️ Sampling frame rate

    By default, video frames are sampled at 1 frame per second (1 FPS). These frames are included in the context with their corresponding timecodes.

    You can use a custom sampling frame rate with the Part.video_metadata.fps parameter:

    | video type | change | fps range |
    |---|---|---|
    | static, slow | decrease the frame rate | 0.0 < fps < 1.0 |
    | dynamic, fast | increase the frame rate | 1.0 < fps <= 24.0 |

    💡 For 1.0 < fps, Gemini was trained to understand MM:SS.sss and H:MM:SS.sss timecodes.


    🔍 Media resolution

    By default, each sampled frame is represented with 258 tokens.

    You can specify a medium or low media resolution with the GenerateContentConfig.media_resolution parameter:

    | media_resolution for video inputs | tokens/frame | benefit |
    |---|---|---|
    | MEDIA_RESOLUTION_MEDIUM (default) | 258 | higher precision, allows more detailed understanding |
    | MEDIA_RESOLUTION_LOW | 66 | faster and cheaper inference, allows longer videos |

    💡 The "media resolution" can be seen as the "image token resolution": the number of tokens used to represent an image.
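
    A minimal sketch of requesting the low media resolution (assuming the MediaResolution enum exposed by google.genai.types, which is not otherwise used in this article):

    from google.genai.types import GenerateContentConfig, MediaResolution

    # Trade frame detail for cost and context: 66 tokens per frame instead of 258
    low_res_config = GenerateContentConfig(
        temperature=0.0,
        top_p=0.0,
        seed=42,
        media_resolution=MediaResolution.MEDIA_RESOLUTION_LOW,
    )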


    🧮 Probabilities all the way down

    The ability of LLMs to communicate in flawless natural language is very impressive, but it's easy to get carried away and make incorrect assumptions.

    Consider how LLMs work:

    • An LLM is trained on a massive tokenized dataset, which represents its knowledge (its long-term memory)
    • During training, its neural network learns token patterns
    • When you send a request to an LLM, your inputs are transformed into tokens (tokenization)
    • To answer your request, the LLM predicts, token by token, the most likely next tokens
    • Overall, LLMs are exceptional statistical token-prediction machines that seem to mimic how some parts of our brain work

    This has a few consequences:

    • LLM outputs are just statistically likely follow-ups to your inputs
    • LLMs show some forms of reasoning: they can match complex patterns but have no actual deep understanding
    • LLMs have no consciousness: they are designed to generate tokens and will do so based on your instructions
    • Order matters: tokens generated first influence the tokens generated next

    For the next step, some methodical prompt crafting should help…


    🏗️ Prompt crafting

    🪜 Method

    Prompt crafting, also called prompt engineering, is a relatively new discipline. It involves designing and refining text instructions to guide LLMs toward producing the desired outputs. Like writing, it's both an art and a science, a skill that everyone can develop with practice.

    There is plenty of reference material about prompt crafting, and some prompts can be very long, complex, or even scary. Crafting prompts for a high-performing LLM like Gemini is much simpler. Here are three key adjectives to keep in mind:

    • iterative
    • precise
    • concise

    Iterative

    Prompt crafting is usually an iterative process. Here are some tips:

    • Craft your prompt step by step
    • Keep track of your successive iterations
    • At each iteration, make sure to measure what's working versus what's not
    • If you hit a regression, backtrack to a successful iteration

    Precise

    Precision is key:

    • Use words that are as specific as possible
    • Words with multiple meanings can introduce variability, so use precise expressions
    • Precision tilts the probabilities in your favor

    Concise

    Concision has additional advantages:

    • A short prompt is easier for us developers to understand (and maintain!)
    • The longer your prompt, the more likely you are to introduce inconsistencies or even contradictions, which lead to variable interpretations of your instructions
    • Test and trust the LLM's knowledge: it acts as an implicit context and can make your prompt shorter and clearer

    Overall, though it may seem contradictory, if you take the time to be iterative, precise, and concise, you are likely to save a lot of time.

    💡 If you want to explore this topic further, check out Prompting strategies (Google Cloud reference) and Prompt engineering (a 68-page PDF by Lee Boonstra).


    📚 Terminology

    We're not experts in video transcription (yet!), but we want Gemini to behave like one. Consequently, we need to write prompts that are as specific as possible for this use case. While LLMs process instructions based on their training data, they can also share that knowledge with us.

    We can learn a lot by asking Gemini directly:

    prompt = """
    What is the terminology used for video transcriptions?
    Please provide a typical output example.
    """
    generate_content(prompt, show_as=ShowAs.MARKDOWN)
    Output 🔽
    ------------------------------- gemini-2.0-flash -------------------------------
    Input tokens   :        18
    Output tokens  :     1,814
    ------------------------------ start of response -------------------------------
    ## Terminology Used for Video Transcriptions
    
    Here's a breakdown of common terminology used when discussing video transcriptions:
    
    *   **Transcription:** The general term for converting audio or video content into text.
    *   **Video Transcription:** Specifically refers to transcribing the audio portion of a video.
    *   **Verbatim Transcription:** A transcription that includes every spoken word, including filler words ("um," "ah"), false starts, stutters, and background noises (if relevant).  It aims for a highly accurate representation of the spoken word.
    *   **Clean Verbatim Transcription (or Edited Transcription):** A transcription that removes filler words, false starts, and stutters to create a more readable and polished text. It focuses on conveying the meaning clearly.
    *   **Intelligent Verbatim Transcription:** A middle ground between verbatim and clean verbatim. It removes some filler words and stutters but retains elements that contribute to the speaker's style or the context of the conversation.
    *   **Summary Transcription:** A brief overview of the video's content, highlighting the main points.
    *   **Timecoding (or Timestamping):** Adding timestamps to the transcription to indicate when each section of text was spoken in the video.  This is crucial for accessibility and navigation.
    *   **Speaker Identification:** Identifying and labeling different speakers in the video.
    *   **Captioning (or Subtitling):** Creating text versions of the video's audio that are displayed on the screen, usually synchronized with the video.  Captions are often used for accessibility (for deaf or hard-of-hearing viewers) and for viewers who prefer to read along.
    *   **Closed Captions:** Captions that can be turned on or off by the viewer.
    *   **Open Captions:** Captions that are permanently embedded in the video and cannot be turned off.
    *   **SRT (SubRip Subtitle):** A common file format for storing captions and subtitles.
    *   **VTT (Video Text Tracks):** A more modern file format for captions and subtitles, often used with HTML5 video.
    *   **Burned-in Captions:** Captions that are permanently embedded into the video file.
    *   **Transcriber:** The person or software that performs the transcription.
    *   **Accuracy Rate:** The percentage of words transcribed correctly.
    *   **Turnaround Time:** The time it takes to complete the transcription.
    
    ## Typical Output Example
    
    Let's say we have a short video clip of an interview. Here's how different types of transcriptions might look:
    
    **Scenario:** A short clip of an interview with a chef discussing their new restaurant.
    
    **1. Verbatim Transcription (with Timecoding):**
    
    ```
    00:00:00 --> 00:00:03 Interviewer: So, Chef, welcome to the show!
    00:00:03 --> 00:00:06 Chef: Thanks for having me. It's, uh, great to be here.
    00:00:06 --> 00:00:12 Interviewer: So, tell us a little bit about your new restaurant, "The Golden Spoon."
    00:00:12 --> 00:00:20 Chef: Well, um, it's, it's been a dream of mine for, like, years, you know?  To, like, open a place that really focuses on, um, seasonal ingredients.
    00:00:20 --> 00:00:25 Interviewer: And what kind of cuisine can people expect?
    00:00:25 --> 00:00:32 Chef: We're doing, uh, modern American, but with, like, a real emphasis on, uh, local farms and, you know, sustainable practices.
    ```
    
    **2. Clean Verbatim Transcription (with Timecoding):**
    
    ```
    00:00:00 --> 00:00:03 Interviewer: So, Chef, welcome to the show!
    00:00:03 --> 00:00:06 Chef: Thanks for having me. It's great to be here.
    00:00:06 --> 00:00:12 Interviewer: So, tell us a little bit about your new restaurant, "The Golden Spoon."
    00:00:12 --> 00:00:20 Chef: Well, it's been a dream of mine for years to open a place that really focuses on seasonal ingredients.
    00:00:20 --> 00:00:25 Interviewer: And what kind of cuisine can people expect?
    00:00:25 --> 00:00:32 Chef: We're doing modern American, but with a real emphasis on local farms and sustainable practices.
    ```
    
    **3. Intelligent Verbatim Transcription (with Timecoding):**
    
    ```
    00:00:00 --> 00:00:03 Interviewer: So, Chef, welcome to the show!
    00:00:03 --> 00:00:06 Chef: Thanks for having me. It's great to be here.
    00:00:06 --> 00:00:12 Interviewer: So, tell us a little bit about your new restaurant, "The Golden Spoon."
    00:00:12 --> 00:00:20 Chef: Well, it's been a dream of mine for, like, years, you know? To open a place that really focuses on seasonal ingredients.
    00:00:20 --> 00:00:25 Interviewer: And what kind of cuisine can people expect?
    00:00:25 --> 00:00:32 Chef: We're doing modern American, but with, like, a real emphasis on local farms and, you know, sustainable practices.
    ```
    
    **4. Summary Transcription:**
    
    ```
    Chef discusses their new restaurant, "The Golden Spoon," which focuses on seasonal ingredients and modern American cuisine with an emphasis on local farms and sustainable practices.
    ```
    
    **5. SRT File (for Captions):**
    
    ```
    1
    00:00:00,000 --> 00:00:03,000
    Interviewer: So, Chef, welcome to the show!
    
    2
    00:00:03,000 --> 00:00:06,000
    Chef: Thanks for having me. It's great to be here.
    
    3
    00:00:06,000 --> 00:00:12,000
    Interviewer: So, tell us a little bit about your new restaurant, "The Golden Spoon."
    
    4
    00:00:12,000 --> 00:00:20,000
    Chef: Well, it's been a dream of mine for years to open a place that really focuses on seasonal ingredients.
    
    5
    00:00:20,000 --> 00:00:25,000
    Interviewer: And what kind of cuisine can people expect?
    
    6
    00:00:25,000 --> 00:00:32,000
    Chef: We're doing modern American, but with a real emphasis on local farms and sustainable practices.
    ```
    
    **Key Differences:**
    
    *   **Verbatim:** Captures every detail, including imperfections in speech.
    *   **Clean Verbatim:**  Focuses on clarity and readability by removing unnecessary elements.
    *   **Intelligent Verbatim:** A balance between accuracy and readability.
    *   **Summary:** Provides a concise overview.
    *   **SRT:**  A structured file format for displaying captions synchronized with the video.
    
    The best type of transcription depends on the specific needs of the project.  For legal or research purposes, a verbatim transcription might be necessary. For general accessibility or marketing purposes, a clean verbatim or intelligent verbatim transcription is often preferred.  Summary transcriptions are useful for quickly understanding the content of a video.  SRT files are essential for adding captions to videos.
    ------------------------------- end of response --------------------------------

So far, we've seen the following:

• We didn't manage to get the full transcription with identified speakers
• Order matters (because a generated token influences the probabilities for subsequent tokens)

To address our challenge, we need Gemini to infer from the following multimodal information:

• text (our instructions + what may be written in the video)
• audio cues (everything said or audible in the video's audio)
• visual cues (everything visible in the video)
• time (when things happen)

That's quite a mix of information types!

As video transcription is a data extraction use case, if we think of the final result as a database, our end goal can be seen as the generation of two related tables (transcripts and speakers). If we write it down, our initial three sub-problems now look decoupled:

    transcripts and speakers tables (L. Picard)

💡 In computer science, data decoupling enhances data locality, often yielding improved performance across areas such as cache utilization, data access, semantic understanding, or system maintenance. Within the LLM Transformer architecture, core performance relies heavily on the attention mechanism. However, the attention pool is finite and tokens compete for attention. Researchers sometimes refer to "attention dilution" for long-context, million-token-scale benchmarks. While we can't directly debug LLMs as users, intuitively, data decoupling may improve the model's focus, leading to a better attention span.

Since Gemini is extremely good with patterns, it can automatically generate identifiers to link our tables. In addition, since we ultimately want an automated workflow, we can start reasoning in terms of records and fields:

    transcripts and speakers tables with id (L. Picard)

Let's call this approach "tabular extraction", split our instructions into two tasks (tables), still in a single request, and arrange them in a meaningful order…


    💬 Transcripts

First of all, let's focus on getting the audio transcripts:

• Gemini has proven to be natively good at audio transcription
• This requires less inference than image analysis
• It's central and independent information

💡 Generating an output that starts with correct answers should help achieve an overall correct output.

We've also seen what a typical transcription entry can look like:

    00:02 speaker_1: Welcome!

However, immediately, there are some ambiguities in our multimodal use case:

• What's a speaker?
• Is it someone we see/hear?
• What if the person visible in the video is not the one speaking?
• What if the person speaking is never visible in the video?

How do we unconsciously determine who's speaking in a video?

• First, probably by identifying the different voices on the fly?
• Then, probably by consolidating additional audio and visual cues?

Can Gemini understand voice characteristics?

prompt = """
Using only the video's audio, list the following audible characteristics:
- Voice tones
- Voice pitches
- Languages
- Accents
- Speaking styles
"""
video = TestVideo.GDM_PODCAST_TRAILER_PT59S

generate_content(prompt, video, show_as=ShowAs.MARKDOWN)
----------------- GDM_PODCAST_TRAILER_PT59S / gemini-2.0-flash -----------------
Input tokens   :    16,730
Output tokens  :       168
------------------------------ start of response -------------------------------
Okay, here is a breakdown of the audible characteristics in the video's audio:

- **Voice Tones:** The tones range from conversational and friendly to more serious and thoughtful. There are also moments of excitement and humor.
- **Voice Pitches:** There is a mix of high and low pitches, depending on the speaker. The female speakers tend to have higher pitches, while the male speakers have lower pitches.
- **Languages:** The primary language is English.
- **Accents:** There are several accents, including British, American, and possibly others that are harder to pinpoint without more context.
- **Speaking Styles:** The speaking styles vary from formal and professional (like in an interview setting) to more casual and conversational. Some speakers are more articulate and precise, while others are more relaxed.
------------------------------- end of response --------------------------------

What about a French video?

video = TestVideo.BRUT_FR_DOGS_WATER_LEAK_PT8M28S

generate_content(prompt, video, show_as=ShowAs.MARKDOWN)
-------------- BRUT_FR_DOGS_WATER_LEAK_PT8M28S / gemini-2.0-flash --------------
Input tokens   :   144,055
Output tokens  :       147
------------------------------ start of response -------------------------------
Here's a breakdown of the audible characteristics in the video, based on the audio:

*   **Languages:** Primarily French.
*   **Accents:** French accents are present, with some variations depending on the speaker.
*   **Voice Tones:** The voice tones vary depending on the speaker and the context. Some are conversational and informative, while others are more enthusiastic and encouraging, especially when interacting with the dogs.
*   **Voice Pitches:** The voice pitches vary depending on the speaker and the context.
*   **Speaking Styles:** The speaking styles vary depending on the speaker and the context. Some are conversational and informative, while others are more enthusiastic and encouraging, especially when interacting with the dogs.
------------------------------- end of response --------------------------------

⚠️ We have to be careful here: responses can consolidate multimodal information and even general knowledge. For example, if a person is famous, their name is most likely part of the LLM's knowledge. If they're known to be from the UK, a possible inference is that they have a British accent. This is why we made our prompt more specific by including "using only the video's audio".

💡 If you conduct more tests, for example on private audio files (i.e., not part of common knowledge and with no additional visual cues), you'll see that Gemini's audio tokenizer performs exceptionally well and extracts semantic speech information!

After a few iterations, we can arrive at a transcription prompt focusing on the audio and voices:

prompt = """
Task:
- Watch the video and listen carefully to the audio.
- Identify each unique voice using a `voice` ID (1, 2, 3, etc.).
- Transcribe the video's audio verbatim with voice diarization.
- Include the `start` timecode (MM:SS) for each speech segment.
- Output a JSON array where each object has the following fields:
  - `start`
  - `text`
  - `voice`
"""
video = TestVideo.GDM_PODCAST_TRAILER_PT59S

generate_content(prompt, video, show_as=ShowAs.MARKDOWN)
Output 🔽
----------------- GDM_PODCAST_TRAILER_PT59S / gemini-2.0-flash -----------------
Input tokens   :    16,800
Output tokens  :       635
------------------------------ start of response -------------------------------
    [
      {
        "start": "00:00",
        "text": "Do I have to call you Sir Demis now?",
        "voice": 1
      },
      {
        "start": "00:01",
        "text": "Oh, you don't. Absolutely not.",
        "voice": 2
      },
      {
        "start": "00:03",
        "text": "Welcome to Google Deep Mind the podcast with me, your host Professor Hannah Fry.",
        "voice": 1
      },
      {
        "start": "00:06",
        "text": "We want to take you to the heart of where these ideas are coming from. We want to introduce you to the people who are leading the design of our collective future.",
        "voice": 1
      },
      {
        "start": "00:19",
        "text": "Getting the safety right is probably, I'd say, one of the most important challenges of our time. I want safe and capable.",
        "voice": 3
      },
      {
        "start": "00:26",
        "text": "I want a bridge that will not collapse.",
        "voice": 3
      },
      {
        "start": "00:30",
        "text": "just give these scientists a superpower that they had not imagined earlier.",
        "voice": 4
      },
      {
        "start": "00:34",
        "text": "autonomous vehicles. It's hard to fathom that when you're working on a search engine.",
        "voice": 5
      },
      {
        "start": "00:38",
        "text": "We may see entirely new genre or entirely new forms of art come up. There may be a new word that is not music, painting, photography, movie making, and that AI will have helped us create it.",
        "voice": 6
      },
      {
        "start": "00:48",
        "text": "You really want AGI to be able to peer into the mysteries of the universe.",
        "voice": 1
      },
      {
        "start": "00:51",
        "text": "Yes, quantum mechanics, string theory, well, and the nature of reality.",
        "voice": 2
      },
      {
        "start": "00:55",
        "text": "Ow.",
        "voice": 1
      },
      {
        "start": "00:56",
        "text": "the magic of AI.",
        "voice": 6
      }
    ]
------------------------------- end of response --------------------------------

This is looking good! And if you test these instructions on more complex videos, you'll get similarly promising results.

Notice how the prompt reuses cherry-picked terms from the terminology previously provided by Gemini, while aiming for precision and concision:

• verbatim is unambiguous (unlike "spoken words")
• 1, 2, 3, etc. is an ellipsis (Gemini can infer the pattern)
• timecode is specific (timestamp has more meanings)
• MM:SS clarifies the timecode format

💡 Gemini 2.0 was trained to understand the specific MM:SS timecode format. Gemini 2.5 also supports the H:MM:SS format for longer videos. For the latest updates, refer to the video understanding documentation.

We're halfway there. Let's complete our database generation with a second task…


🧑 Speakers

The second task is fairly straightforward: we want to extract speaker information into a second table. The two tables are logically linked by the voice ID.

After a few iterations, we can reach a two-task prompt like the following:

prompt = """
Generate a JSON object with keys `task1_transcripts` and `task2_speakers` for the following tasks.

**Task 1 - Transcripts**

- Watch the video and listen carefully to the audio.
- Identify each unique voice using a `voice` ID (1, 2, 3, etc.).
- Transcribe the video's audio verbatim with voice diarization.
- Include the `start` timecode (MM:SS) for each speech segment.
- Output a JSON array where each object has the following fields:
  - `start`
  - `text`
  - `voice`

**Task 2 - Speakers**

- For each `voice` ID from Task 1, extract information about the corresponding speaker.
- Use visual and audio cues.
- If a speaker's name cannot be found, use a question mark (`?`) as the value.
- Output a JSON array where each object has the following fields:
  - `voice`
  - `name`

JSON:
"""
video = TestVideo.GDM_PODCAST_TRAILER_PT59S

generate_content(prompt, video, show_as=ShowAs.MARKDOWN)
Output 🔽
----------------- GDM_PODCAST_TRAILER_PT59S / gemini-2.0-flash -----------------
Input tokens   :    16,920
Output tokens  :       806
------------------------------ start of response -------------------------------
    {
      "task1_transcripts": [
        {
          "start": "00:00",
          "text": "Do I have to call you Sir Demis now?",
          "voice": 1
        },
        {
          "start": "00:01",
          "text": "Oh, you don't. Absolutely not.",
          "voice": 2
        },
        {
          "start": "00:04",
          "text": "Welcome to Google Deep Mind the podcast with me, your host Professor Hannah Fry.",
          "voice": 1
        },
        {
          "start": "00:06",
          "text": "We want to take you to the heart of where these ideas are coming from. We want to introduce you to the people who are leading the design of our collective future.",
          "voice": 1
        },
        {
          "start": "00:19",
          "text": "Getting the safety right is probably, I'd say, one of the most important challenges of our time. I want safe and capable.",
          "voice": 3
        },
        {
          "start": "00:26",
          "text": "I want a bridge that will not collapse.",
          "voice": 3
        },
        {
          "start": "00:30",
          "text": "That just give these scientists a superpower that they had not imagined earlier.",
          "voice": 4
        },
        {
          "start": "00:34",
          "text": "autonomous vehicles. It's hard to fathom that when you're working on a search engine.",
          "voice": 5
        },
        {
          "start": "00:38",
          "text": "We may see entirely new genre or entirely new forms of art come up. There may be a new word that is not music, painting, photography, movie making, and that AI will have helped us create it.",
          "voice": 6
        },
        {
          "start": "00:48",
          "text": "You really want AGI to be able to peer into the mysteries of the universe.",
          "voice": 1
        },
        {
          "start": "00:51",
          "text": "Yes, quantum mechanics, string theory, well, and the nature of reality.",
          "voice": 2
        },
        {
          "start": "00:55",
          "text": "Ow.",
          "voice": 1
        },
        {
          "start": "00:56",
          "text": "the magic of AI.",
          "voice": 6
        }
      ],
      "task2_speakers": [
        {
          "voice": 1,
          "name": "Professor Hannah Fry"
        },
        {
          "voice": 2,
          "name": "Demis Hassabis"
        },
        {
          "voice": 3,
          "name": "Anca Dragan"
        },
        {
          "voice": 4,
          "name": "Pushmeet Kohli"
        },
        {
          "voice": 5,
          "name": "Jeff Dean"
        },
        {
          "voice": 6,
          "name": "Douglas Eck"
        }
      ]
    }
------------------------------- end of response --------------------------------

Test this prompt on more complex videos: it's still looking good!


    🚀 Finalization

    🧩 Structured output

We've iterated towards a precise and concise prompt. Now, we can focus on Gemini's response:

• The response is plain text containing fenced code blocks
• Instead, we'd like a structured output, to receive consistently formatted responses
• Ideally, we'd also like to avoid having to parse the response, which can be a maintenance burden

Getting structured outputs is an LLM feature also referred to as "controlled generation". Since we've already crafted our prompt in terms of data tables and JSON fields, this is now a formality.

In our request, we can add the following parameters:

• response_mime_type="application/json"
• response_schema="YOUR_JSON_SCHEMA" (docs)

In Python, this gets even easier:

• Use the pydantic library
• Mirror your output structure with classes derived from pydantic.BaseModel

We can simplify the prompt by removing the output specification parts:

Generate a JSON object with keys `task1_transcripts` and `task2_speakers` for the following tasks.
…
- Output a JSON array where each object has the following fields:
  - `start`
  - `text`
  - `voice`
…
- Output a JSON array where each object has the following fields:
  - `voice`
  - `name`

… and move them to matching Python classes instead:

import pydantic

class Transcript(pydantic.BaseModel):
    start: str
    text: str
    voice: int

class Speaker(pydantic.BaseModel):
    voice: int
    name: str

class VideoTranscription(pydantic.BaseModel):
    task1_transcripts: list[Transcript] = pydantic.Field(default_factory=list)
    task2_speakers: list[Speaker] = pydantic.Field(default_factory=list)

    … and request a structured response:

response = client.models.generate_content(
    # …
    config=GenerateContentConfig(
        # …
        response_mime_type="application/json",
        response_schema=VideoTranscription,
        # …
    ),
)

Finally, retrieving the objects from the response is also straightforward:

    if isinstance(response.parsed, VideoTranscription):
        video_transcription = response.parsed
    else:
        video_transcription = VideoTranscription()  # Empty transcription

The interesting aspects of this approach are the following:

• The prompt focuses on the logic and the classes focus on the output
• It's easier to update and maintain typed classes
• The JSON schema is automatically generated by the Gen AI SDK from the class provided in response_schema and dispatched to Gemini
• The response is automatically parsed by the Gen AI SDK and deserialized into the corresponding Python objects

⚠️ If you keep output specifications in your prompt, ensure there are no contradictions between the prompt and the schema (e.g., same field names and order), as this can negatively impact the quality of the responses.

💡 It's possible to have more structural information directly in the schema (e.g., detailed field definitions). See Controlled generation.
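
For instance, here's a minimal sketch of how the same classes could carry extra schema information through `pydantic.Field` descriptions (the descriptions below are illustrative, not part of the prompt used in this article), which should end up in the JSON schema dispatched to Gemini:

import pydantic

# Minimal sketch: illustrative field descriptions, assuming the SDK propagates them into the schema
class Transcript(pydantic.BaseModel):
    start: str = pydantic.Field(description="Timecode of the speech segment (MM:SS)")
    text: str = pydantic.Field(description="Verbatim transcript of the speech segment")
    voice: int = pydantic.Field(description="Unique voice ID (1, 2, 3, etc.)")

class Speaker(pydantic.BaseModel):
    voice: int = pydantic.Field(description="Voice ID from Task 1")
    name: str = pydantic.Field(description="Speaker's name, or `?` if unknown")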


    ✨ Implementation

Let's finalize our code. In addition, now that we have a stable prompt, we can even enrich our solution to extract each speaker's company, position, and role_in_video:

Final code 🔽
import re
from datetime import timedelta

import pydantic
from google.genai.types import MediaResolution, ThinkingConfig

SamplingFrameRate = float

VIDEO_TRANSCRIPTION_PROMPT = """
**Task 1 - Transcripts**

- Watch the video and listen carefully to the audio.
- Identify each unique voice using a `voice` ID (1, 2, 3, etc.).
- Transcribe the video's audio verbatim with voice diarization.
- Include the `start` timecode ({timecode_spec}) for each speech segment.

**Task 2 - Speakers**

- For each `voice` ID from Task 1, extract information about the corresponding speaker.
- Use visual and audio cues.
- If a piece of information cannot be found, use a question mark (`?`) as the value.
"""
NOT_FOUND = "?"


class Transcript(pydantic.BaseModel):
    start: str
    text: str
    voice: int


class Speaker(pydantic.BaseModel):
    voice: int
    name: str
    company: str
    position: str
    role_in_video: str


class VideoTranscription(pydantic.BaseModel):
    task1_transcripts: list[Transcript] = pydantic.Field(default_factory=list)
    task2_speakers: list[Speaker] = pydantic.Field(default_factory=list)


def get_generate_content_config(model: Model, video: Video) -> GenerateContentConfig:
    media_resolution = get_media_resolution_for_video(video)
    thinking_config = get_thinking_config(model)

    return GenerateContentConfig(
        temperature=DEFAULT_CONFIG.temperature,
        top_p=DEFAULT_CONFIG.top_p,
        seed=DEFAULT_CONFIG.seed,
        response_mime_type="application/json",
        response_schema=VideoTranscription,
        media_resolution=media_resolution,
        thinking_config=thinking_config,
    )


def get_video_duration(video: Video) -> timedelta | None:
    # For testing purposes, the video duration is statically specified in the enum name
    # Suffix (ISO 8601 based): _PT[H][M][S]
    # For production,
    # - fetch durations dynamically or store them separately
    # - take into account VideoMetadata.start_offset & VideoMetadata.end_offset
    regex = r"_PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$"
    if not (match := re.search(regex, video.name)):
        print(f"⚠️ No duration info in {video.name}. Will use defaults.")
        return None

    h_str, m_str, s_str = match.groups()
    return timedelta(
        hours=int(h_str) if h_str is not None else 0,
        minutes=int(m_str) if m_str is not None else 0,
        seconds=int(s_str) if s_str is not None else 0,
    )


def get_media_resolution_for_video(video: Video) -> MediaResolution | None:
    if not (video_duration := get_video_duration(video)):
        return None  # Default

    # For testing purposes, this is based on video duration, as our short videos tend to be more detailed
    less_than_five_minutes = video_duration < timedelta(minutes=5)
    if less_than_five_minutes:
        media_resolution = MediaResolution.MEDIA_RESOLUTION_MEDIUM
    else:
        media_resolution = MediaResolution.MEDIA_RESOLUTION_LOW

    return media_resolution


def get_sampling_frame_rate_for_video(video: Video) -> SamplingFrameRate | None:
    sampling_frame_rate = None  # Default (1 FPS for current models)

    # [Optional] Define a custom FPS: 0.0 < sampling_frame_rate <= 24.0

    return sampling_frame_rate


def get_timecode_spec_for_model_and_video(model: Model, video: Video) -> str:
    timecode_spec = "MM:SS"  # Default

    match model:
        case Model.GEMINI_2_0_FLASH:  # Supports MM:SS
            pass
        case Model.GEMINI_2_5_FLASH | Model.GEMINI_2_5_PRO:  # Support MM:SS and H:MM:SS
            duration = get_video_duration(video)
            one_hour_or_more = duration is not None and timedelta(hours=1) <= duration
            if one_hour_or_more:
                timecode_spec = "MM:SS or H:MM:SS"
        case _:
            assert False, "Add timecode format for new model"

    return timecode_spec


def get_thinking_config(model: Model) -> ThinkingConfig | None:
    # Examples of thinking configurations (Gemini 2.5 models)
    match model:
        case Model.GEMINI_2_5_FLASH:  # Thinking disabled
            return ThinkingConfig(thinking_budget=0, include_thoughts=False)
        case Model.GEMINI_2_5_PRO:  # Minimal thinking budget and no summarized thoughts
            return ThinkingConfig(thinking_budget=128, include_thoughts=False)
        case _:
            return None  # Default


def get_video_transcription_from_response(
    response: GenerateContentResponse,
) -> VideoTranscription:
    if not isinstance(response.parsed, VideoTranscription):
        print("❌ Could not parse the JSON response")
        return VideoTranscription()  # Empty transcription

    return response.parsed


def get_video_transcription(
    video: Video,
    video_segment: VideoSegment | None = None,
    fps: float | None = None,
    prompt: str | None = None,
    model: Model | None = None,
) -> VideoTranscription:
    model = model or Model.DEFAULT
    model_id = model.value

    fps = fps or get_sampling_frame_rate_for_video(video)
    video_part = get_video_part(video, video_segment, fps)
    if not video_part:  # Unsupported source, return an empty transcription
        return VideoTranscription()
    if prompt is None:
        timecode_spec = get_timecode_spec_for_model_and_video(model, video)
        prompt = VIDEO_TRANSCRIPTION_PROMPT.format(timecode_spec=timecode_spec)
    contents = [video_part, prompt.strip()]

    config = get_generate_content_config(model, video)

    print(f" {video.name} / {model_id} ".center(80, "-"))
    response = None
    for attempt in get_retrier():
        with attempt:
            response = client.models.generate_content(
                model=model_id,
                contents=contents,
                config=config,
            )
            display_response_info(response)

    assert isinstance(response, GenerateContentResponse)
    return get_video_transcription_from_response(response)

Test it:

def test_structured_video_transcription(video: Video) -> None:
    transcription = get_video_transcription(video)

    print("-" * 80)
    print(f"Transcripts : {len(transcription.task1_transcripts):3d}")
    print(f"Speakers    : {len(transcription.task2_speakers):3d}")
    for speaker in transcription.task2_speakers:
        print(f"- {speaker}")


test_structured_video_transcription(TestVideo.GDM_PODCAST_TRAILER_PT59S)
----------------- GDM_PODCAST_TRAILER_PT59S / gemini-2.0-flash -----------------
Input tokens   :    16,917
Output tokens  :       989
--------------------------------------------------------------------------------
Transcripts :  13
Speakers    :   6
- voice=1 name='Professor Hannah Fry' company='Google DeepMind' position='Host' role_in_video='Host'
- voice=2 name='Demis Hassabis' company='Google DeepMind' position='Co-Founder & CEO' role_in_video='Interviewee'
- voice=3 name='Anca Dragan' company='?' position='Director, AI Safety & Alignment' role_in_video='Interviewee'
- voice=4 name='Pushmeet Kohli' company='?' position='VP Science & Strategic Initiatives' role_in_video='Interviewee'
- voice=5 name='Jeff Dean' company='?' position='Chief Scientist' role_in_video='Interviewee'
- voice=6 name='Douglas Eck' company='?' position='Senior Research Director' role_in_video='Interviewee'

📊 Data visualization

We started prototyping in natural language, crafted a prompt, and generated a structured output. Since reading raw data can be cumbersome, we can now present video transcriptions in a more visually appealing way.

Here's a possible orchestrator function:

    def transcribe_video(video: Video, …) -> None:
        display_video(video)
        transcription = get_video_transcription(video, …)
        display_speakers(transcription)
        display_transcripts(transcription)
Let's add some data visualization functions 🔽
import itertools
from collections.abc import Callable, Iterator

import IPython.display
from pandas import DataFrame, Series
from pandas.io.formats.style import Styler
from pandas.io.formats.style_render import CSSDict

BGCOLOR_COLUMN = "bg_color"  # Hidden column to store row background colors


def yield_known_speaker_color() -> Iterator[str]:
    PAL_40 = ("#669DF6", "#EE675C", "#FCC934", "#5BB974")
    PAL_30 = ("#8AB4F8", "#F28B82", "#FDD663", "#81C995")
    PAL_20 = ("#AECBFA", "#F6AEA9", "#FDE293", "#A8DAB5")
    PAL_10 = ("#D2E3FC", "#FAD2CF", "#FEEFC3", "#CEEAD6")
    PAL_05 = ("#E8F0FE", "#FCE8E6", "#FEF7E0", "#E6F4EA")
    return itertools.cycle([*PAL_40, *PAL_30, *PAL_20, *PAL_10, *PAL_05])


def yield_unknown_speaker_color() -> Iterator[str]:
    GRAYS = ["#80868B", "#9AA0A6", "#BDC1C6", "#DADCE0", "#E8EAED", "#F1F3F4"]
    return itertools.cycle(GRAYS)


def get_color_for_voice_mapping(speakers: list[Speaker]) -> dict[int, str]:
    known_speaker_color = yield_known_speaker_color()
    unknown_speaker_color = yield_unknown_speaker_color()

    mapping: dict[int, str] = {}
    for speaker in speakers:
        if speaker.name != NOT_FOUND:
            color = next(known_speaker_color)
        else:
            color = next(unknown_speaker_color)
        mapping[speaker.voice] = color

    return mapping


def get_table_styler(df: DataFrame) -> Styler:
    def join_styles(styles: list[str]) -> str:
        return ";".join(styles)

    table_css = [
        "color: #202124",
        "background-color: #BDC1C6",
        "border: 0",
        "border-radius: 0.5rem",
        "border-spacing: 0px",
        "outline: 0.5rem solid #BDC1C6",
        "margin: 1rem 0.5rem",
    ]
    th_css = ["background-color: #E8EAED"]
    th_td_css = ["text-align:left", "padding: 0.25rem 1rem"]
    table_styles = [
        CSSDict(selector="", props=join_styles(table_css)),
        CSSDict(selector="th", props=join_styles(th_css)),
        CSSDict(selector="th,td", props=join_styles(th_td_css)),
    ]

    return df.style.set_table_styles(table_styles).hide()


def change_row_bgcolor(row: Series) -> list[str]:
    style = f"background-color:{row[BGCOLOR_COLUMN]}"
    return [style] * len(row)


def display_table(yield_rows: Callable[[], Iterator[list[str]]]) -> None:
    data = yield_rows()
    df = DataFrame(columns=next(data), data=data)
    styler = get_table_styler(df)
    styler.apply(change_row_bgcolor, axis=1)
    styler.hide([BGCOLOR_COLUMN], axis="columns")

    html = styler.to_html()
    IPython.display.display(IPython.display.HTML(html))


def display_speakers(transcription: VideoTranscription) -> None:
    def sanitize_field(s: str, symbol_if_unknown: str) -> str:
        return symbol_if_unknown if s == NOT_FOUND else s

    def yield_rows() -> Iterator[list[str]]:
        yield ["voice", "name", "company", "position", "role_in_video", BGCOLOR_COLUMN]

        color_for_voice = get_color_for_voice_mapping(transcription.task2_speakers)
        for speaker in transcription.task2_speakers:
            yield [
                str(speaker.voice),
                sanitize_field(speaker.name, NOT_FOUND),
                sanitize_field(speaker.company, NOT_FOUND),
                sanitize_field(speaker.position, NOT_FOUND),
                sanitize_field(speaker.role_in_video, NOT_FOUND),
                color_for_voice.get(speaker.voice, "red"),
            ]

    display_markdown(f"### Speakers ({len(transcription.task2_speakers)})")
    display_table(yield_rows)


def display_transcripts(transcription: VideoTranscription) -> None:
    def yield_rows() -> Iterator[list[str]]:
        yield ["start", "speaker", "transcript", BGCOLOR_COLUMN]

        color_for_voice = get_color_for_voice_mapping(transcription.task2_speakers)
        speaker_for_voice = {
            speaker.voice: speaker for speaker in transcription.task2_speakers
        }
        previous_voice = None
        for transcript in transcription.task1_transcripts:
            current_voice = transcript.voice
            speaker_label = ""
            if speaker := speaker_for_voice.get(current_voice, None):
                if speaker.name != NOT_FOUND:
                    speaker_label = speaker.name
                elif speaker.position != NOT_FOUND:
                    speaker_label = f"[voice {current_voice}][{speaker.position}]"
                elif speaker.role_in_video != NOT_FOUND:
                    speaker_label = f"[voice {current_voice}][{speaker.role_in_video}]"
            if not speaker_label:
                speaker_label = f"[voice {current_voice}]"
            yield [
                transcript.start,
                speaker_label if current_voice != previous_voice else '"',
                transcript.text,
                color_for_voice.get(current_voice, "red"),
            ]
            previous_voice = current_voice

    display_markdown(f"### Transcripts ({len(transcription.task1_transcripts)})")
    display_table(yield_rows)


def transcribe_video(
    video: Video,
    video_segment: VideoSegment | None = None,
    fps: float | None = None,
    prompt: str | None = None,
    model: Model | None = None,
) -> None:
    display_video(video)
    transcription = get_video_transcription(video, video_segment, fps, prompt, model)
    display_speakers(transcription)
    display_transcripts(transcription)

✅ Challenge completed

🎬 Short video

This video is a trailer for the Google DeepMind podcast. It features a fast-paced montage of 6 interviews. The multimodal transcription is excellent:

    transcribe_video(TestVideo.GDM_PODCAST_TRAILER_PT59S)

    Video (source)

    ----------------- GDM_PODCAST_TRAILER_PT59S / gemini-2.0-flash -----------------
Input tokens   :    16,917
    Output tokens  :       989

Speakers (6)

    Transcripts (13)


    🎬 Narrator-only video

This video is a documentary that takes viewers on a virtual tour of Gombe National Park in Tanzania. There's no visible speaker. Jane Goodall is correctly detected as the narrator; her name is extracted from the credits:

    transcribe_video(TestVideo.JANE_GOODALL_PT2M42S)

    Video (source)

    ------------------- JANE_GOODALL_PT2M42S / gemini-2.0-flash --------------------
Input tokens   :    46,324
    Output tokens  :       717

Speakers (1)

    Transcripts (14)

💡 Over the past few years, I've regularly used this video to test specialized ML models and it consistently resulted in various kinds of errors. Gemini's transcription, including punctuation, is perfect.


    🎬 French video

This French reportage combines on-the-ground footage of a specialized crew that uses trained dogs to detect leaks in underground drinking water pipes. The recording takes place entirely outdoors in a rural setting. The interviewed workers are introduced with on-screen text overlays. The audio, captured live on location, includes ambient noise. There are also some off-screen or unidentified speakers. This video is rather complex. The multimodal transcription gives excellent results with no false positives:

    transcribe_video(TestVideo.BRUT_FR_DOGS_WATER_LEAK_PT8M28S)

    Video (source)

    -------------- BRUT_FR_DOGS_WATER_LEAK_PT8M28S / gemini-2.0-flash --------------
Input tokens   :    46,514
    Output tokens  :     4,924

Speakers (14)

    Transcripts (61)

💡 Our prompt was crafted and tested with English videos, but works without modification with this French video. It should also work for videos in these 100+ other languages.

💡 In a multilingual solution, we could ask to translate our transcriptions into any of these 100+ languages or even perform text cleanup. This can be done in a second request, as the multimodal transcription is complex enough on its own.

💡 Gemini's audio tokenizer detects more than speech. If you try to list non-speech sounds on audio tracks only (to ensure the response doesn't benefit from any visual cues), you'll see it can detect sounds such as "dog bark", "music", "sound effect", "footsteps", "laughter", "applause"…

💡 In our data visualization tables, colored rows are inference positives (speakers identified by the model), while gray rows correspond to negatives (unidentified speakers). This makes it easier to interpret the results. As the prompt we crafted favors precision over recall, colored rows are generally correct, and gray rows correspond either to unnamed/unidentifiable speakers (true negatives) or to speakers that should have been identified (false negatives).


🎬 Complex video

This Google DeepMind video is quite complex:

• It's highly edited and very dynamic
• Speakers are often off-screen and other people can be seen instead
• The researchers are often in groups and it's not always obvious who's speaking
• Some video shots were taken 2 years apart: the same speakers can sound and look different!

Gemini 2.0 Flash generates a good transcription despite the complexity. However, it's likely to list duplicate speakers due to the video type. Gemini 2.5 Pro provides a deeper inference and manages to consolidate the speakers:

transcribe_video(
    TestVideo.GDM_ALPHAFOLD_PT7M54S,
    model=Model.GEMINI_2_5_PRO,
)

    Video (source)

    -------------------- GDM_ALPHAFOLD_PT7M54S / gemini-2.5-pro --------------------
Input tokens   :    43,354
Output tokens  :     4,861
Thoughts tokens:        80

Speakers (11)

    Transcripts (81)


🎬 Long transcription

The total length of the transcribed text can quickly reach the maximum number of output tokens. With our current JSON response schema, we can reach 8,192 output tokens (supported by Gemini 2.0) with transcriptions of ~25min videos. Gemini 2.5 models support up to 65,536 output tokens (8x more) and let us transcribe longer videos.

For this 54-minute panel discussion, Gemini 2.5 Pro uses only ~30-35% of the input/output token limits:

transcribe_video(
    TestVideo.GDM_AI_FOR_SCIENCE_FRONTIER_PT54M23S,
    model=Model.GEMINI_2_5_PRO,
)

    Video (source)

    ------------ GDM_AI_FOR_SCIENCE_FRONTIER_PT54M23S / gemini-2.5-pro -------------
Input tokens   :   297,153
Output tokens  :    22,896
Thoughts tokens:        65

Speakers (14)

    Transcripts (593)

💡 In this long video, the 5 panelists are correctly transcribed, diarized, and identified. In the second half of the video, unseen attendees ask questions to the panel. They're correctly identified as audience members and, although their names and companies are never written on screen, Gemini correctly extracts and even consolidates the information from the audio cues.


    🎬 1h+ video

In the latest Google I/O keynote video (1h 10min):

• ~30-35% of the token limit is used (383k/1M in, 20k/64k out)
• The dozen speakers are properly identified, including the demo "AI Voices" ("Gemini" and "Casey")
• Speaker names are extracted from slanted text on the background screen for the live keynote speakers (e.g., Josh Woodward at 0:07) and from lower-third on-screen text in the DolphinGemma reportage (e.g., Dr. Denise Herzing at 1:05:28)
transcribe_video(
    TestVideo.GOOGLE_IO_DEV_KEYNOTE_PT1H10M03S,
    model=Model.GEMINI_2_5_PRO,
)

    Video (source)

    -------------- GOOGLE_IO_DEV_KEYNOTE_PT1H10M03S / gemini-2.5-pro ---------------
Input tokens   :   382,699
Output tokens  :    19,772
Thoughts tokens:        75

Speakers (14)

    Transcripts (201)


🎬 40-speaker video

In this 1h 40min Google Cloud Next keynote video:

• ~50-70% of the token limit is used (547k/1M in, 45k/64k out)
• 40 distinct voices are diarized
• 29 speakers are identified, linked to their 21 respective companies or divisions
• The transcription takes up to 8 minutes (roughly 4 minutes with video tokens cached), which is 13 to 23 times faster than watching the entire video without pauses.
transcribe_video(
    TestVideo.GOOGLE_CLOUD_NEXT_PT1H40M03S,
    model=Model.GEMINI_2_5_PRO,
)

    Video (source)

    ---------------- GOOGLE_CLOUD_NEXT_PT1H40M03S / gemini-2.5-pro -----------------
Input tokens   :   546,590
Output tokens  :    45,398
Thoughts tokens:        74

Speakers (40)

    Transcripts (853)


    ⚖️ Strengths & weaknesses

    👍 Strengths

Overall, Gemini is capable of generating excellent transcriptions that surpass human-generated ones in these aspects:

• Consistency of the transcription
• Good grammar and punctuation
• Impressive semantic understanding
• No typos or transcription system errors
• Exhaustivity (every audible word is transcribed)

💡 As you know, a single incorrect/missing word (or even letter) can completely change the meaning. These strengths help ensure high-quality transcriptions and reduce the risk of misunderstandings.

If we compare YouTube's user-provided transcriptions (sometimes by professional caption vendors) to our auto-generated ones, we can observe some significant differences. Here are some examples from the last test:

| timecode | ❌ user-provided | ✅ our transcription |
|---|---|---|
| 9:47 | research and models | research and model |
| 13:32 | used by 100,000 companies | used by over 100,000 companies |
| 18:19 | infrastructure core layer | infrastructure core for AI |
| 20:21 | hardware system | hardware generation |
| 23:42 | I do deployed ML models | Toyota deployed ML models |
| 34:17 | Vertex video | Vertex Media |
| 41:11 | speed up app development | speed up application coding and development |
| 42:15 | performance and proven insights | performance improvement insights |
| 50:20 | across the milt agent ecosystem | across the multi-agent ecosystem |
| 52:50 | Salesforce, and Dun | Salesforce, or Dun |
| 1:22:28 | please nearly | Please welcome |
| 1:31:07 | organizations, like I say Charles | organizations like Charles |
| 1:33:23 | multiple public LOMs | multiple public LLMs |
| 1:33:54 | Gemini's Agent tech AI | Gemini's agentic AI |
| 1:34:24 | mitigated outsider risk | mitigated insider risk |
| 1:35:58 | from end point, viral, networks | from endpoint, firewall, networks |
| 1:38:45 | We at Google are | We at Google Cloud are |

    👎 Weaknesses

The current prompt is not perfect though. It focuses first on the audio for transcription and then on all cues for speaker data extraction. Although Gemini natively ensures a very high consolidation from the context, the prompt can exhibit these side effects:

• Sensitivity to speakers' pronunciation or accent
• Misspellings of proper nouns
• Inconsistencies between the transcription and a perfectly identified speaker name

Here are examples from the same test:

| timecode | ✅ user-provided | ❌ our transcription |
|---|---|---|
| 3:31 | Bosun | Boson |
| 3:52 | Imagen | Imagine |
| 3:52 | Veo | VO |
| 11:15 | Berman | Burman |
| 25:06 | Huang | Wang |
| 38:58 | Allegiant Stadium | Allegiance Stadium |
| 1:29:07 | Snyk | Sneak |

We'll stop our exploration here and leave it as an exercise, but here are possible ways to fix these errors, in order of simplicity/cost:

• Update the prompt to use visual cues for proper nouns, such as "Ensure all proper nouns (people, companies, products, etc.) are spelled correctly and consistently. Prioritize on-screen text for reference." (see the sketch after this list)
• Enrich the prompt with an additional preliminary table to extract the proper nouns and use them explicitly in the context
• Add available video context metadata to the prompt
• Split the prompt into two successive requests
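
As an illustration of the first option, here's a minimal sketch of an extra instruction block appended to the existing prompt (the wording is an assumption, to be iterated on with your own videos):

# Hypothetical prompt addition, for illustration only
PROPER_NOUN_INSTRUCTIONS = """
**Additional instructions**

- Ensure all proper nouns (people, companies, products, etc.) are spelled correctly and consistently.
- Prioritize on-screen text (titles, lower thirds, slides, badges) over audio when spelling proper nouns.
"""

prompt = VIDEO_TRANSCRIPTION_PROMPT.format(timecode_spec="MM:SS") + PROPER_NOUN_INSTRUCTIONS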

📈 Tips & optimizations

🔧 Model selection

Each model can differ in terms of performance, speed, and cost.

Here's a practical summary based on the model specs, our video test suite, and the current prompt:

| Model | Performance | Speed | Cost | Max. input tokens | Max. output tokens | Video type |
|---|---|---|---|---|---|---|
| Gemini 2.0 Flash | ⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ | 1,048,576 = 1M | 8,192 = 8k | Standard video, up to 25min |
| Gemini 2.5 Flash | ⭐⭐ | ⭐⭐ | ⭐⭐ | 1,048,576 = 1M | 65,536 = 64k | Standard video, 25min+ |
| Gemini 2.5 Pro | ⭐⭐⭐ | ⭐ | ⭐ | 1,048,576 = 1M | 65,536 = 64k | Complex video or 1h+ video |

🔧 Video segment

You don't always need to analyze videos from start to finish. You can indicate a video segment with start and/or end offsets in the VideoMetadata structure.

In this example, Gemini will only analyze the 30:00-50:00 segment of the video:

    video_metadata = VideoMetadata(
        start_offset="1800.0s",
        end_offset="3000.0s",
        …
    )
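
For reference, here's a minimal sketch of how such a segment could be attached to the video part, assuming the Gen AI SDK's `Part.video_metadata` field (the bucket URI is hypothetical):

from google.genai.types import FileData, Part, VideoMetadata

# Hypothetical video URI, for illustration only
video_part = Part(
    file_data=FileData(file_uri="gs://bucket/path/to/my-video.mp4", mime_type="video/mp4"),
    video_metadata=VideoMetadata(
        start_offset="1800.0s",  # 30:00
        end_offset="3000.0s",    # 50:00
    ),
)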

🔧 Media resolution

In our test suite, the videos are fairly standard. We got excellent results by using a "low" media resolution ("medium" being the default), specified with the GenerateContentConfig.media_resolution parameter.

💡 This gives faster and cheaper inferences, while also enabling the analysis of 3x longer videos.

We used a simple heuristic based on video duration, but you may want to make it dynamic on a per-video basis:

def get_media_resolution_for_video(video: Video) -> MediaResolution | None:
    if not (video_duration := get_video_duration(video)):
        return None  # Default

    # For testing purposes, this is based on video duration, as our short videos tend to be more detailed
    less_than_five_minutes = video_duration < timedelta(minutes=5)
    if less_than_five_minutes:
        media_resolution = MediaResolution.MEDIA_RESOLUTION_MEDIUM
    else:
        media_resolution = MediaResolution.MEDIA_RESOLUTION_LOW

    return media_resolution

⚠️ If you select a "low" media resolution and experience an apparent lack of understanding, you may be losing important details in the sampled video frames. This is easy to fix: switch back to the default media resolution.


🔧 Sampling frame rate

The default sampling frame rate of 1 FPS worked fine in our tests. You may want to customize it for each video:

SamplingFrameRate = float

def get_sampling_frame_rate_for_video(video: Video) -> SamplingFrameRate | None:
    sampling_frame_rate = None  # Default (1 FPS for current models)

    # [Optional] Define a custom FPS: 0.0 < sampling_frame_rate <= 24.0

    return sampling_frame_rate

💡 You can combine the parameters. In this extreme example, assuming the input video has a 24fps frame rate, all frames will be sampled for a 10s segment:

    video_metadata = VideoMetadata(
        start_offset="42.0s",
        end_offset="52.0s",
        fps=24.0,
    )

⚠️ If you use a higher sampling rate, this multiplies the number of frames (and tokens) accordingly, increasing latency and cost. As 10s × 24fps = 240 frames = 4×60s × 1fps, this 10-second analysis at 24 FPS is equivalent to a 4-minute default analysis at 1 FPS.


    🎯 Precision vs recall

The prompt can influence the precision and recall of our data extractions, especially when using explicit versus implicit wording. If you want more qualitative results, favor precision using explicit wording; if you want more quantitative results, favor recall using implicit wording:

| wording | favors | generates fewer | LLM behavior |
|---|---|---|---|
| explicit | precision | false positives | relies more (or only) on the provided context |
| implicit | recall | false negatives | relies on the overall context, infers more, and can use its training data |

Here are examples that can lead to subtly different results (a short sketch follows the table):

| wording | verbs | qualifiers |
|---|---|---|
| explicit | "extract", "quote" | "stated", "direct", "exact", "verbatim" |
| implicit | "identify", "deduce" | "found", "indirect", "possible", "potential" |
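
To make this concrete, here are two hedged prompt variants (illustrative wording, not the exact prompts tested in this article) that should nudge the model toward precision or recall respectively:

# Explicit wording: favors precision (extract only what is stated or shown in the video)
explicit_prompt = "For each voice, extract the speaker's name exactly as stated or written in the video."

# Implicit wording: favors recall (the model may infer from indirect cues or its own knowledge)
implicit_prompt = "For each voice, identify the speaker's possible name from any available cues."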

💡 Different models can also behave differently for the same prompt. In particular, more performant models may seem more "confident" and make more implicit inferences or consolidations.

💡 For example, in this AlphaFold video, at the 04:57 timecode, "Spring 2020" is first displayed as context. Then, a short declaration from "The Prime Minister" is heard in the background ("You must stay at home") without any other hints. When asked to "identify" (rather than "extract") the speaker, Gemini is likely to infer more and attribute the voice to "Boris Johnson". There's absolutely no explicit mention of Boris Johnson; his identity is correctly inferred from the context ("UK", "Spring 2020", and "The Prime Minister").


    🏷️ Metadata

In our current tests, Gemini only uses audio and frame tokens, tokenized from sources on Google Cloud Storage or YouTube. If you have additional video metadata, this can be a goldmine; try to add it to your prompt and enrich the video context for better results upfront.

Potentially helpful metadata:

• Video description: This can provide a better understanding of where and when the video was shot.
• Speaker data: This can help auto-correct names that are only heard and not obvious to spell.
• Entity data: Overall, this can help get better transcriptions for custom or private data.

💡 For YouTube videos, no additional metadata or transcript is fetched. Gemini only receives the raw audio and video streams. You can verify this yourself by comparing your results with YouTube's automatic captioning (no punctuation, audio only) or user-provided transcripts (cleaned up), when available.

💡 If you know your video concerns a team or a company, adding internal knowledge in the context can help correct or complete the requested speaker names (provided there are no homonyms in the same context), companies, and job titles.

💡 In this French reportage, in the 06:16-06:31 video shot, there are two dogs: Arnold and Rio. "Arnold" is clearly audible, repeated three times, and correctly transcribed. "Rio" is called only once, audible for a fraction of a second in a noisy environment, and the audio transcription can vary. Providing the names of the whole team (owners & dogs, even if they aren't all in the video) can help in transcribing this short name consistently.
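
Here's a minimal sketch of how such metadata could be added to the request (the context text and team names are hypothetical placeholders):

# Hypothetical metadata, for illustration only
video_context = """
Video context (provided by the editor):
- Location: rural France
- Team members: Arnold (dog), Rio (dog), and their handlers
"""

# Static data first (video), then context, then instructions
contents = [video_part, video_context.strip(), prompt.strip()]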

💡 It should also be possible to ground the results with Google Search, Google Maps, or your own RAG system. See Grounding overview.


🔬 Debugging & evidence

Iterating through successive prompts and debugging LLM outputs can be challenging, especially when trying to understand the reasons for the results.

It's possible to ask Gemini to provide evidence in the response. In our video transcription solution, we could request a timecoded "evidence" for each speaker's identified name, company, or role. This enables linking results to their sources, discovering and understanding unexpected insights, checking potential false positives…
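
For example, here's a minimal sketch (field names and comments are illustrative, not the schema used in this article) of how the Speaker class could be extended to request such evidence:

import pydantic

# Hypothetical evidence fields, for illustration only
class SpeakerWithEvidence(pydantic.BaseModel):
    voice: int
    name: str
    name_evidence: str  # e.g., "05:12 - name shown on a lower-third overlay"
    company: str
    company_evidence: str  # e.g., "02:45 - logo visible on the speaker's jacket"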

💡 In the tested videos, when trying to understand where the insights came from, requesting evidence yielded very insightful explanations, for example:

• Person names could be extracted from various sources (video conference captions, badges, unseen participants introducing themselves when asking questions in a conference panel…)
• Company names could be found from text on uniforms, backpacks, vehicles…

💡 In a document data extraction solution, we could request an "excerpt" as evidence, along with the page number, chapter number, or any other relevant location information.


    🐘 Verbose JSON

The JSON format is currently the most common way to generate structured outputs with LLMs. However, JSON is a rather verbose data format, as field names are repeated for each object. For example, an output can look like the following, with many repeated underlying tokens:

    {
      "task1_transcripts": [
        { "start": "00:02", "text": "We've…", "voice": 1 },
        { "start": "00:07", "text": "But we…", "voice": 1 }
        // …
      ],
      "task2_speakers": [
        {
          "voice": 1,
          "name": "John Moult",
          "company": "University of Maryland",
          "position": "Co-Founder, CASP",
          "role_in_video": "Expert"
        },
        // …
        {
          "voice": 3,
          "name": "Demis Hassabis",
          "company": "DeepMind",
          "position": "Founder and CEO",
          "role_in_video": "Team Leader"
        }
        // …
      ]
    }

To optimize output size, an interesting possibility is to ask Gemini to generate an XML block containing a CSV for each of your tabular extractions. The field names are specified once in the header, and by using tab separators, for example, we can achieve more compact outputs like the following:

    
<task1_transcripts>
start  text      voice
00:02  We've…    1
00:07  But we…   1
…
</task1_transcripts>

<task2_speakers>
voice  name            company                 position          role_in_video
1      John Moult      University of Maryland  Co-Founder, CASP  Expert
…
3      Demis Hassabis  DeepMind                Founder and CEO   Team Leader
…
</task2_speakers>

💡 Gemini excels at patterns and formats. Depending on your needs, feel free to experiment with JSON, XML, CSV, YAML, and any custom structured formats. It's likely that the industry will evolve to allow even more elaborate structured outputs.
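
As a sketch, the output specification of the prompt could be adapted along these lines (illustrative wording and column names, to be tested for your own use case):

# Hypothetical output specification, for illustration only
CSV_OUTPUT_SPEC = """
Output format:
- Return one XML block per task: <task1_transcripts>…</task1_transcripts> and <task2_speakers>…</task2_speakers>.
- Inside each block, output a tab-separated table with a single header row.
- Task 1 columns: start, text, voice
- Task 2 columns: voice, name, company, position, role_in_video
"""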


    🐿️ Context caching

Context caching optimizes the cost and the latency of repeated requests using the same base inputs.

There are two ways requests can benefit from context caching:

• Implicit caching: By default, upon the first request, input tokens are cached, to accelerate responses for subsequent requests with the same base inputs. This is fully automatic and no code change is required.
• Explicit caching: You place specific inputs into the cache and reuse this cached content as a base for your requests. This gives full control but requires managing the cache manually.
Example of implicit caching 🔽
model_id = "gemini-2.0-flash"
video_file_data = FileData(
    file_uri="gs://bucket/path/to/my-video.mp4",
    mime_type="video/mp4",
)
video = Part(file_data=video_file_data)
prompt_1 = "List the people seen in the video."
prompt_2 = "Summarize what happens to John Smith."

# ✅ Request A1: static data (video) placed first
response = client.models.generate_content(
    model=model_id,
    contents=[video, prompt_1],
)

# ✅ Request A2: likely cache hit for the video tokens
response = client.models.generate_content(
    model=model_id,
    contents=[video, prompt_2],
)

💡 Implicit caching can be disabled at the project level (see data governance).

Implicit caching is prefix-based, so it only works if you put static data first and variable data last.

Example of requests preventing implicit caching 🔽
# ❌ Request B1: variable input placed first
response = client.models.generate_content(
    model=model_id,
    contents=[prompt_1, video],
)

# ❌ Request B2: no cache hit
response = client.models.generate_content(
    model=model_id,
    contents=[prompt_2, video],
)

💡 This explains why the data-plus-instructions input order is preferred, for performance (not LLM-related) reasons.

Cost-wise, the input tokens retrieved with a cache hit benefit from a 75% discount in the following cases:

• Implicit caching: With all Gemini models, cache hits are automatically discounted (without any control over the cache).
• Explicit caching: With all Gemini models and supported models in Model Garden, you control your cached inputs and their lifespans to ensure cache hits.
Example of explicit caching 🔽
    from google.genai.types import (
        Content,
        CreateCachedContentConfig,
        FileData,
        GenerateContentConfig,
        Part,
    )
    
    model_id = "gemini-2.0-flash-001"
    
    # Input video
    video_file_data = FileData(
        file_uri="gs://cloud-samples-data/video/JaneGoodall.mp4",
        mime_type="video/mp4",
    )
    video_part = Part(file_data=video_file_data)
    video_contents = [Content(role="user", parts=[video_part])]
    
    # Video explicitly put in cache, with time-to-live (TTL) before automatic deletion
    cached_content = client.caches.create(
        model=model_id,
        config=CreateCachedContentConfig(
            ttl="1800s",
            display_name="video-cache",
            contents=video_contents,
        ),
    )
    if cached_content.usage_metadata:
        print(f"Cached tokens: {cached_content.usage_metadata.total_token_count or 0:,}")
        # Cached tokens: 46,171
        # ✅ Video tokens are cached (standard tokenization cost + storage cost for the TTL duration)
    
    cache_config = GenerateContentConfig(cached_content=cached_content.name)
    
    # Request #1
    response = client.models.generate_content(
        model=model_id,
        contents="List the people mentioned in the video.",
        config=cache_config,
    )
    if response.usage_metadata:
        print(f"Input tokens : {response.usage_metadata.prompt_token_count or 0:,}")
        print(f"Cached tokens: {response.usage_metadata.cached_content_token_count or 0:,}")
        # Input tokens : 46,178
        # Cached tokens: 46,171
        # ✅ Cache hit (75% discount)
    
    # Request #i (within the TTL period)
    # …
    
    # Request #n (within the TTL period)
    response = client.models.generate_content(
        model=model_id,
        contents="List all the timecodes when Jane Goodall is mentioned.",
        config=cache_config,
    )
    if response.usage_metadata:
        print(f"Input tokens : {response.usage_metadata.prompt_token_count or 0:,}")
        print(f"Cached tokens: {response.usage_metadata.cached_content_token_count or 0:,}")
        # Input tokens : 46,182
        # Cached tokens: 46,171
        # ✅ Cache hit (75% discount)
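    Since explicit caching means managing the cache yourself, here is a minimal cleanup sketch (reusing the `cached_content` object created above) to delete a cache before its TTL expires, or to audit existing caches:
    
    # Delete the cache once it's no longer needed (instead of waiting for the TTL)
    client.caches.delete(name=cached_content.name)
    
    # List the remaining caches and their expiration times
    for cache in client.caches.list():
        print(cache.name, cache.expire_time)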

    💡 Explicit caching needs a specific model version (like …-001 in this example) to ensure the cache stays valid and is not affected by a model update.

    ℹ️ Learn more about Context caching.


    ⏳ Batch prediction

    If you need to process a large volume of videos and don’t need synchronous responses, you can use a single batch request and reduce your cost.

    💡 Batch requests for Gemini models get a 50% discount compared to standard requests.
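    As an illustration, here is a minimal batch sketch (the bucket paths are hypothetical, and the JSONL file is assumed to already contain one prepared request per video):
    
    import time
    
    from google.genai.types import CreateBatchJobConfig, JobState
    
    # Submit a batch job: requests are read from Cloud Storage, results are written back to it
    job = client.batches.create(
        model="gemini-2.0-flash-001",
        src="gs://my-bucket/batch/video_requests.jsonl",
        config=CreateBatchJobConfig(dest="gs://my-bucket/batch/output/"),
    )
    
    # Poll until the job reaches a terminal state
    completed_states = {
        JobState.JOB_STATE_SUCCEEDED,
        JobState.JOB_STATE_FAILED,
        JobState.JOB_STATE_CANCELLED,
    }
    while job.state not in completed_states:
        time.sleep(30)
        job = client.batches.get(name=job.name)
    print(job.state)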

    ℹ️ Learn more about Batch prediction.


    ♾️ To production… and beyond

    A few additional notes:

    • The current prompt is not perfect and can be improved. It has been preserved in its current state to illustrate its development, starting with Gemini 2.0 Flash and a simple video test suite.
    • The Gemini 2.5 models are more capable and intrinsically provide a better video understanding. However, the current prompt has not been optimized for them. Writing optimal prompts for different models is another challenge.
    • If you try transcribing your own videos, especially different types of videos, you may run into new or specific issues. They can probably be addressed by enriching the prompt.
    • Future models will likely support more output features. This should allow for richer structured outputs and simpler prompts.
    • As models keep learning, it’s also possible that multimodal video transcription will become a one-liner prompt.
    • Gemini’s image and audio tokenizers are truly impressive and enable many other use cases. To fully grasp the extent of the possibilities, you can run unit tests on images or audio files.
    • We constrained our challenge to using a single request, which can dilute the LLM’s attention in such rich multimodal contexts. For optimal results in a large-scale solution, splitting the processing into two steps (i.e., requests) should help Gemini’s attention focus even further (see the sketch after this list). In the first step, we would extract and diarize the audio stream only, which should result in the most precise speech-to-text transcription (possibly with more voice identifiers than actual speakers, but with a minimal number of false positives). In the second step, we would reinject the transcription to focus on extracting and consolidating speaker data from the video frames. This would also be a solution for processing very long videos, even those several hours in duration.
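    As a rough illustration of this two-step approach, here is a minimal sketch (reusing the `client`, `model_id`, and `video` objects defined earlier; the prompts are simplified placeholders, not the full prompts developed in this article):
    
    # Step 1: audio-focused transcription and diarization
    step_1 = client.models.generate_content(
        model=model_id,
        contents=[video, "Transcribe the speech and diarize it with voice identifiers."],
    )
    transcript = step_1.text
    
    # Step 2: reinject the transcript and consolidate speaker data from the video frames
    step_2 = client.models.generate_content(
        model=model_id,
        contents=[
            video,
            f"Speech transcription:\n{transcript}",
            "Identify the speakers visible in the frames and map each voice identifier to a speaker.",
        ],
    )
    print(step_2.text)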

    🏁 Conclusion

    Multimodal video transcription, which requires the complex synthesis of audio and visual data, is a real challenge for ML practitioners, without mainstream solutions. A traditional approach, involving an elaborate pipeline of specialized models, would be engineering-intensive without any guarantee of success. In contrast, Gemini proved to be a versatile toolbox for achieving a robust and straightforward solution based on a single prompt:

    multimodal video transcription solution (L. Picard)

    We managed to tackle this complex problem with the following techniques:

    • Prototyping with open prompts to develop intuition about Gemini’s natural strengths
    • Taking into account how LLMs work under the hood
    • Crafting increasingly specific prompts using a tabular extraction strategy
    • Generating structured outputs to move towards production-ready code
    • Adding data visualization for easier interpretation of responses and smoother iterations
    • Adapting default parameters to optimize the results
    • Conducting more tests, iterating, and even enriching the extracted data

    These principles should apply to many other data extraction domains and help you solve your own complex problems. Have fun and happy solving!


    ➕ More!


