For years, video has been the creative format that many advertisers acknowledge in theory but quietly skip in practice. The barrier is understandable: video takes time, budget, and production resources that not every team has.
However, that reluctance is getting harder to justify. Two recent developments in Google Ads are removing the two biggest barriers that have kept advertisers away: limited reporting transparency and the resources needed to create video content in the first place.
New “Ads using video” reporting
One of the frustrations with Performance Max (PMax) has always been its opacity. When using PMax campaigns, advertisers hand significant control over to Google’s automation, which makes the reporting question even more important: if you can’t see what’s working, how do you know what to invest in?
For video specifically, this has been a real sticking point. Advertisers running PMax with video assets in the mix had limited ability to isolate whether those videos were contributing to performance.
Google has now addressed this. A new “Ads using video” segment has been added to Performance Max channel performance reporting, allowing advertisers to break down results based on whether video assets were included in a given placement. That means you can now compare performance across placements that used video versus those that didn’t, much like the existing “Ads using product data” segment when running Performance Max with a shopping feed.
Within the Channel report, you can now choose to see results for ‘All ads’ or separate the data to see the results for ‘Ads using video’ and ‘Ads not using video’.
New AI-generated animated video clips
Google is currently testing an AI-powered feature inside PMax asset groups that generates animated video clips directly from a single source image. Advertisers can upload a product photo or a logo, and the tool creates several enhanced versions of that image, each of which produces two animated clips. You can select up to five animated clips per asset group.
Early results from testing elsewhere suggest the output quality is genuinely usable for display advertising. However, the feature has not yet appeared in any of our client accounts, so we have been unable to test it firsthand.
For advertisers who have been running PMax campaigns on static images alone, this is worth checking out. If the feature is available in your account, adding animated clips to your asset groups could expand where and how your ads appear, and the new reporting segment means you can now see whether those videos actually perform.
However, it is important to note that faces cannot be used in source images, though the AI may generate people in the enhanced versions it creates. And since this feature hasn’t been formally confirmed or documented by Google, placement specifics are still emerging from early testing.
What does this mean for advertisers?
Put these two developments together and Google’s intentions are clear: it is simultaneously making video easier to produce and easier to evaluate. That signals how important video has become to Google, but it doesn’t necessarily mean these changes will make an enormous difference to every ad campaign.
The practical implications for advertisers right now:
If you’re running Performance Max campaigns, check your asset groups for the AI animated video feature. If it’s available, why not test it? All you need is a single image, so it’s worth seeing what Google can come up with. If you like the finished video content, add it to your asset group, as you’ll be able to measure its contribution through the new video segment.
If you’ve dabbled with video content in the past, the reporting update gives you a reason to revisit it. Run a test: add video assets to a PMax campaign and use the new segment to measure what happens. You may find the ROI case for video investment is stronger than you expected, or you’ll have concrete data to justify spending elsewhere.
These updates are part of a longer pattern. Google has been very gradually improving Performance Max’s transparency – adding asset-level reporting, search term insights, and now video-specific segmentation. But will it ever give advertisers full control?