    Preparing Video Data for Deep Learning: Introducing Vid Prepper

    By Editor Times Featured, October 4, 2025


    This article is a guide to preparing videos for machine learning and deep learning. Because of the size and computational cost of video data, it is important that it is processed as efficiently as possible for your use case. This includes things like metadata analysis, standardization, augmentation, shot and object detection, and tensor loading. This article explores some ways these can be done and why we might do them. I have also built an open source Python package called vid-prepper, with the aim of providing a fast and efficient way to apply different preprocessing techniques to your video data. The package builds on some giants of the machine learning and deep learning world, so while it is useful in bringing them together in a common and easy-to-use framework, the real work is most definitely theirs!

    Video has been an important part of my career. I started my data career at a company that built a SaaS platform for video analytics for major video companies (called NPAW), and I currently work for the BBC. Video dominates the online landscape today, but its use with AI is still quite limited, although growing fast. I wanted to create something that helps speed up people's ability to try things out and contribute to this really fascinating area. This article discusses what the different package modules do and how to use them, starting with metadata analysis.

    Metadata Analysis

    from vid_prepper import metadata

    At the BBC, I am quite lucky to work at a professional organisation with hugely talented people creating broadcast-quality videos. However, I know that most video data is not like this. Often files will be mixed codecs, colors, and sizes, or they may be corrupted or have parts missing; they may even have quirks from older videos, like interlacing. It is important to be aware of any of this before processing videos for machine learning.

    We will be training our models on GPUs, which are fantastic for tensor calculations at scale but expensive to run. When training large models on GPUs, we want to be as efficient as possible to avoid high costs. If we have corrupted videos, or videos in unexpected or unsupported formats, it will waste time and resources, may make your models less accurate, and can even break the training pipeline. Checking and filtering your files beforehand is therefore a necessity.

    Metadata analysis is almost always an important first step in preparing video data (image source – Pexels)

    I have built the metadata analysis module on the ffprobe tool, part of the FFmpeg project written in C and assembly. This is a hugely powerful and efficient library used extensively in the industry, and the module can be used to analyse a single video file or a batch of them, as shown in the code below.

    # Extract metadata
    video_path = ["sample.mp4"]
    video_info = metadata.Metadata.validate_videos(video_path)
    
    # Extract metadata for a batch
    video_paths = ["sample1.mp4", "sample2.mp4", "sample3.mp4"]
    video_info = metadata.Metadata.validate_videos(video_paths)

    This provides a dictionary output of the video metadata including codecs, sizes, frame rates, duration, pixel formats, audio metadata and more. This is really useful both for finding video data with issues or odd quirks, and for selecting specific video data or choosing the formats and codec to standardize to, based on the most commonly used ones.
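    Under the hood, ffprobe can emit this kind of metadata as JSON. As a rough sketch of what such a module does (the function names below are illustrative, not vid-prepper's actual API), you can call ffprobe yourself and pull out the fields that matter most for filtering:

```python
import json
import subprocess

def ffprobe_json(path):
    """Run ffprobe and return its JSON output as a dict (requires FFmpeg installed)."""
    cmd = [
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_format", "-show_streams", path,
    ]
    return json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)

def summarize(probe):
    """Pull the fields most useful for dataset filtering out of ffprobe-style output."""
    video = next(s for s in probe["streams"] if s["codec_type"] == "video")
    num, den = (int(x) for x in video["avg_frame_rate"].split("/"))
    return {
        "codec": video["codec_name"],
        "width": video["width"],
        "height": video["height"],
        "pix_fmt": video["pix_fmt"],
        "fps": num / den,
        "duration": float(probe["format"]["duration"]),
    }
```

    `ffprobe_json` needs FFmpeg on the system; `summarize` works on any ffprobe-style dict, which is handy for testing filter logic without touching real files.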

    Filtering Based on Metadata Issues

    Given this seemed to be a fairly common use case, I built in the ability to filter the list of videos based on a set of checks. For example, if video or audio is missing, formats or codecs are not as specified, or frame rates or durations differ from those specified, then those videos can be identified by setting the filters and only_errors parameters, as shown below.

    # Run checks on videos
    videos = ["video1.mp4", "video2.mkv", "video3.mov"]
    
    all_filters_with_params = {
        "filter_missing_video": {},
        "filter_missing_audio": {},
        "filter_variable_framerate": {},
        "filter_resolution": {"min_width": 1280, "min_height": 720},
        "filter_duration": {"min_seconds": 5.0},
        "filter_pixel_format": {"allowed": ["yuv420p", "yuv422p"]},
        "filter_codecs": {"allowed": ["h264", "hevc", "vp9", "prores"]}
    }
    
    errors = metadata.Metadata.validate_videos(
        videos,
        filters=all_filters_with_params,
        only_errors=True
    )

    Removing or identifying issues with the data before we get to the really intensive work of model training means we avoid wasting time and money, making it an essential first step.

    Standardization

    from vid_prepper import standardize

    Standardization is usually quite important in preprocessing for video machine learning. It can help make things much more efficient and consistent, and deep learning models often require specific sizes (e.g. 224 x 224). If you have a lot of video data, any time spent at this stage is often repaid many times over in the training stage later on.

    Standardizing video data can make processing much, much more efficient and give better results (image source – Pexels)

    Codecs

    Videos are usually structured for efficient storage and distribution over the internet so that they can be broadcast cheaply and quickly. This usually involves heavy compression to make videos as small as possible. Unfortunately, this is pretty much diametrically opposed to what is good for deep learning.

    The bottleneck for deep learning is almost always decoding videos and loading them into tensors, so the more compressed a video file is, the longer that takes. This typically means avoiding highly compressed codecs like H.265 and VVC and going for more lightly compressed alternatives with hardware acceleration, like H.264 or VP9, or, as long as you can avoid I/O bottlenecks, using something like uncompressed MJPEG, which tends to be used in production as it is the fastest way of loading frames into tensors.

    Frame Rate

    The standard frame rates (FPS) for video are 24 for cinema, 30 for TV and online content, and 60 for fast-motion content. These frame rates are determined by the number of images that need to be shown per second so that our eyes see one smooth motion. However, deep learning models do not necessarily need as high a frame rate in the training videos to create numeric representations of motion and generate smooth-looking videos. As every frame is an additional tensor to compute, we want to lower the frame rate to the smallest we can get away with.

    Different types of videos, and the use case of our models, will determine how low we can go. The less motion in a video, the lower we can set the input frame rate without compromising the results. For example, an input dataset of studio news clips or talk shows is going to require a lower frame rate than a dataset made up of ice hockey matches. Also, if we are working on a video understanding or video-to-text model, rather than generating video for human consumption, it might be possible to set the frame rate even lower.

    Calculating Minimum Frame Rate

    It is actually possible to mathematically determine a fairly good minimum frame rate for your video dataset based on motion statistics. Using a RAFT or Farneback algorithm on a sample of your dataset, you can calculate the optical flow per pixel for each frame change. This provides the horizontal and vertical displacement for each pixel, from which you can calculate the magnitude of the change (the square root of the sum of the squared values).

    Averaging this value over the frame gives the frame momentum, and taking the median and 95th percentile over all the frames gives values that you can plug into the equations below to get a range of likely optimal minimum frame rates for your training data.

    Optimal FPS (lower) = Current FPS × Max model interpolation rate / Median momentum
    
    Optimal FPS (higher) = Current FPS × Max model interpolation rate / 95th percentile momentum

    Where max model interpolation rate is the maximum per-frame momentum the model can handle, usually provided in the model card.

    Working out momentum is nothing more than a bit of Pythagoras. No PhD maths here! (image source – Pexels)

    You can then run small-scale tests of your training pipeline to determine the lowest frame rate you can achieve for optimal performance.
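    The momentum statistics above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions: the helper names are mine, and in practice the flow field would come from something like OpenCV's `calcOpticalFlowFarneback` or a RAFT model rather than being passed in ready-made.

```python
import numpy as np

def frame_momentum(flow):
    """Mean per-pixel motion magnitude for one frame pair.

    flow: array of shape (H, W, 2) holding per-pixel (dx, dy) displacement,
    e.g. the output of cv2.calcOpticalFlowFarneback for two frames.
    """
    # Pythagoras per pixel: magnitude = sqrt(dx^2 + dy^2)
    magnitude = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    return float(magnitude.mean())

def optimal_fps_range(momenta, current_fps, max_interpolation):
    """Plug the dataset's momentum statistics into the formulas above.

    momenta: frame_momentum values for a sample of frame pairs.
    max_interpolation: max per-frame momentum the model handles (model card).
    """
    median = float(np.median(momenta))
    p95 = float(np.percentile(momenta, 95))
    return {
        "from_median": current_fps * max_interpolation / median,
        "from_p95": current_fps * max_interpolation / p95,
    }
```

    Running `optimal_fps_range` on a few hundred sampled frame pairs gives the range to sweep in your small-scale training tests.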

    Vid Prepper

    The standardize module in vid-prepper can standardize the size, codec, color format and frame rate of a single video or a batch of videos.

    Again, it is built on FFmpeg and has the ability to accelerate things on a GPU if that is available to you. To standardize videos, you can simply run the code below.

    # Standardize a batch of videos
    video_file_paths = ["sample1.mp4", "sample2.mp4", "sample3.mp4"]
    standardizer = standardize.VideoStandardizer(
        size="224x224",
        fps=16,
        codec="h264",
        color="rgb",
        use_gpu=False  # Set to True if you have CUDA
    )
    
    standardizer.batch_standardize(videos=video_file_paths, output_dir="videos/")

    To make things more efficient, especially if you are using expensive GPUs and don't want an I/O bottleneck from loading videos, the module also accepts WebDatasets. These can be loaded similarly to the following code:

    # Standardize a WebDataset
    standardizer = standardize.VideoStandardizer(
        size="224x224",
        fps=16,
        codec="h264",
        color="rgb",
        use_gpu=False  # Set to True if you have CUDA
    )
    
    standardizer.standardize_wds("dataset.tar", key="mp4", label="cls")

    Tensor Loader

    from vid_prepper import loader

    A video tensor usually has 4 or 5 dimensions, consisting of the pixel color (usually RGB), height and width of the frame, time, and batch (optional) components. As mentioned above, decoding videos into tensors is often the biggest bottleneck in the preprocessing pipeline, so the steps taken up to this point make a big difference in how efficiently we can load our tensors.
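    To make those dimensions concrete, here is a small NumPy illustration (layouts vary: some models expect (batch, channels, time, height, width) instead of the ordering shown here):

```python
import numpy as np

# A batch of 2 clips, 16 sampled frames each, RGB, 224 x 224 pixels,
# in a (batch, time, channels, height, width) layout.
batch = np.zeros((2, 16, 3, 224, 224), dtype=np.float32)

# The memory cost shows why decode/load is the bottleneck: even this small
# batch is ~19 MB of float32 before any model computation happens.
print(batch.nbytes / 1e6)
```

    Every extra sampled frame adds another (3, 224, 224) slice per clip, which is why the frame rate and stride choices earlier matter so much.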

    This module converts videos into PyTorch tensors, using FFmpeg for frame sampling and NVDEC to allow for GPU acceleration. You can adjust the size of the tensors to fit your model, as well as selecting the number of frames to sample per clip and the frame stride (spacing between the frames). As with standardization, the option to use WebDatasets is also available. The code below gives an example of how this is done.

    # Load clips into tensors
    video_loader = loader.VideoLoader(num_frames=16, frame_stride=2, size=(224, 224), device="cuda")
    video_paths = ["video1.mp4", "video2.mp4", "video3.mp4"]
    batch_tensor = video_loader.load_files(video_paths)
    
    # Load a WebDataset into tensors
    wds_path = "data/shards/{00000..00009}.tar"
    dataset = video_loader.load_wds(wds_path, key="mp4", label="cls")

    Detector

    from vid_prepper import detector

    Detecting things within the video content is often a crucial part of video preprocessing. These might be particular objects, shots or transitions. This module brings together powerful processes and models from PySceneDetect, Hugging Face, IDEA Research and PyTorch to provide efficient detection.

    Video detection is often a useful way of splitting videos into clips and getting only the clips you need for your model (image source – Pexels)

    Shot Detection

    In many video machine learning use cases (e.g. semantic search, seq2seq trailer generation and many more), splitting videos into individual shots is an important step. There are a few ways of doing this, but PySceneDetect is one of the more accurate and reliable. This library provides a wrapper for PySceneDetect's content detection, called as shown below. It outputs the start and end frames for each shot.

    # Detect shots in videos
    video_path = "video.mp4"
    video_detector = detector.VideoDetector(device="cuda")
    shot_frames = video_detector.detect_shots(video_path)

    Transition Detection

    While PySceneDetect is a strong tool for splitting videos into individual scenes, it is not always 100% accurate. There are times where you can take advantage of repeated content (e.g. transitions) that breaks up shots. For example, BBC News has an upwards red and white wipe transition between segments that can easily be detected using something like PyTorch.

    Transition detection works directly on tensors by detecting pixel changes in blocks of pixels that exceed a certain threshold change that you can set. The example code below shows how it works.

    # Detect gradual transitions/wipes
    video_path = "video.mp4"
    video_loader = loader.VideoLoader(num_frames=16,
                                      frame_stride=2,
                                      size=(224, 224),
                                      device="cpu",  # Use "cuda" if available
                                      use_nvdec=False)
    video_tensor = video_loader.load_file(video_path)
    
    video_detector = detector.VideoDetector(device="cpu")  # or "cuda"
    wipe_frames = video_detector.detect_wipes(video_tensor,
                                              block_grid=(8, 8),
                                              threshold=0.3)
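    The block-change idea itself is simple enough to sketch in a few lines of NumPy. This is an illustration of the technique, not vid-prepper's actual implementation, and the thresholds here are made up for the example:

```python
import numpy as np

def changed_block_fraction(prev, curr, grid=(8, 8), block_threshold=0.1):
    """Fraction of grid blocks whose mean absolute pixel change between two
    frames exceeds block_threshold. prev/curr: (H, W) grayscale in [0, 1]."""
    gh, gw = grid
    h, w = prev.shape
    bh, bw = h // gh, w // gw
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    # Average the per-pixel change inside each block of the grid.
    blocks = diff[: gh * bh, : gw * bw].reshape(gh, bh, gw, bw).mean(axis=(1, 3))
    return float((blocks > block_threshold).mean())

# A frame pair would be flagged as part of a wipe when enough of the grid is
# changing at once, e.g. changed_block_fraction(prev, curr) > 0.3.
```

    Sweeping this over consecutive frame pairs produces a per-frame score; runs of high scores mark gradual transitions like the BBC News wipe.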

    Object Detection

    Object detection is often a requirement for finding the clips you need in your video data. For example, you may require clips with people or animals in them. This method uses an open source Grounding DINO model against a small set of objects from the standard COCO dataset labels for detecting objects. Both the model choice and the list of objects are completely customisable and can be set by you. The model loader is the Hugging Face transformers package, so the model you use will need to be available there. For custom labels, the default model takes a string with the following structure in the text_queries parameter – "dog. cat. ambulance."

    # Detect objects in videos
    video_path = "video.mp4"
    video_loader = loader.VideoLoader(num_frames=16,
                                      frame_stride=2,
                                      size=(224, 224),
                                      device="cpu",  # Use "cuda" if available
                                      use_nvdec=False)
    video_tensor = video_loader.load_file(video_path)
    
    video_detector = detector.VideoDetector(device="cpu")  # or "cuda"
    results = video_detector.detect_objects(video_tensor,
                                            text_queries=None,  # None defaults to the COCO list
                                            text_threshold=0.3,
                                            model_id="IDEA-Research/grounding-dino-tiny")

    Data Augmentation
    
    from vid_prepper import augmentor

    Things like video transformers are incredibly powerful and can be used to create great new models. However, they usually require an enormous amount of data, which is not necessarily easily available for something like video. In these cases, we need a way to generate alternative data that stops our models overfitting. Data augmentation is one such solution to help expand limited data availability.

    For video, there are a number of standard techniques for augmenting the data, and most of these are supported by the major frameworks. Vid-prepper brings together two of the best – Kornia and Torchvision. With vid-prepper, you can perform individual augmentations like cropping, flipping, mirroring, padding, Gaussian blurring, adjusting brightness, color, saturation and contrast, and coarse dropout (where parts of the video frame are masked). You can also chain them together for greater efficiency.

    Augmentations all work on the video tensors rather than directly on the videos, and support GPU acceleration if you have it. The example code below shows how to call the methods individually and how to chain them.

    # Individual augmentation example
    video_path = "video.mp4"
    video_loader = loader.VideoLoader(num_frames=16,
                                      frame_stride=2,
                                      size=(224, 224),
                                      device="cpu",  # Use "cuda" if available
                                      use_nvdec=False)
    video_tensor = video_loader.load_file(video_path)
    
    video_augmentor = augmentor.VideoAugmentor(device="cpu", use_gpu=False)
    cropped = video_augmentor.crop(video_tensor, type="center", size=(200, 200))
    flipped = video_augmentor.flip(video_tensor, type="horizontal")
    brightened = video_augmentor.brightness(video_tensor, amount=0.2)
    
    
    # Chained augmentations
    augmentations = [
        ('crop', {'type': 'random', 'size': (180, 180)}),
        ('flip', {'type': 'horizontal'}),
        ('brightness', {'amount': 0.1}),
        ('contrast', {'amount': 0.1})
    ]
    
    chained_result = video_augmentor.chain(video_tensor, augmentations)
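    To show what chaining amounts to conceptually, here is a toy NumPy version. It is illustrative only: the real VideoAugmentor sits on Kornia and Torchvision and runs on (GPU) tensors, and these helper functions are mine.

```python
import numpy as np

def center_crop(video, size):
    """video: (T, C, H, W). Crop every frame to size=(h, w) around the center."""
    h, w = size
    _, _, H, W = video.shape
    top, left = (H - h) // 2, (W - w) // 2
    return video[:, :, top : top + h, left : left + w]

def hflip(video):
    # Reverse the width axis to mirror every frame horizontally.
    return video[..., ::-1]

def brightness(video, amount):
    # Shift pixel values, keeping them in the valid [0, 1] range.
    return np.clip(video + amount, 0.0, 1.0)

def chain(video, ops):
    """Apply (function, kwargs) pairs in order, mirroring the chain() idea."""
    for fn, kwargs in ops:
        video = fn(video, **kwargs)
    return video

clip = np.random.rand(16, 3, 224, 224).astype(np.float32)
out = chain(clip, [
    (center_crop, {"size": (180, 180)}),
    (hflip, {}),
    (brightness, {"amount": 0.1}),
])
```

    Chaining matters for efficiency because the intermediate tensors stay in memory (or on the GPU) between steps instead of being materialised per augmentation.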

    Summing Up

    Video preprocessing is hugely important in deep learning because of the relatively huge size of the data compared to text. Transformer models' requirements for oceans of data compound this even further. Three key components make up the deep learning process – time, money and performance. By optimizing our input video data, we can minimize how much of the first two components we need to get the best out of the final one.

    There are some amazing open source tools available for video machine learning, with more coming along every day. Vid-prepper stands on the shoulders of some of the best and most widely used, in an attempt to bring them together in an easy-to-use package. Hopefully you find some value in it and it helps you create the next generation of video models, which is extremely exciting!


