With the arrival of AI video creator Sora 2, the internet has been flooded with AI slop videos. These videos are usually uploaded to TikTok, Instagram, and YouTube Shorts, but are often also shared on platforms like X, Bluesky, and LinkedIn. Although AI video generation has improved considerably (people now usually have five fingers per hand and images are more stable), there are still ways to identify AI video content. Here are some tips to tell a fake from the real thing. Not every tip applies to every video, and AI videos may overcome these red flags in the future, but they will still help you understand what to look for today.
short, single shot
Generative AI still struggles with longer single-shot videos (more than a minute). Because generative AI predicts the next pixel, sustaining those predictions across so many pixels over time remains difficult. AI video content is therefore usually short.
When a video does contain multiple shots, it usually features animals or animations rather than people. Without a consistent model of a person, it is difficult for the AI to generate the exact same person in the next shot. With animals we often don’t notice the differences between shots, but we definitely have an eye for people.
watermark
Most AI video generators add a watermark to their videos, which is, of course, the easiest way to spot a fake. However, when swiping through your shorts or reels, you might miss it.
There are websites that remove these watermarks with AI, but in their place you get a blurry patch: harder to see, but still not impossible. Look at the corners and borders of the video and see if you can spot one.

focused on one subject
More background means more details, and more details mean more predictions. AI videos like to focus on a single subject, which is another reason they are mostly posted on TikTok and Instagram, where a similar focus is the norm. Mistakes in the video are usually found in the background, not in the subject.
You will find strange objects in the background that either seem out of place or don’t exist at all. This is because such background elements are less common in the model’s training data. AI confabulates: it makes things up to complete the picture, regardless of whether that filler makes sense.
blurriness
Every frame in an AI video is generated, which means there are minor differences between frames. Making the video blurrier hides these changes. That’s why many AI-generated videos are low quality, even though most mobile phone cameras film in high resolution. In a high-resolution AI video, the differences between frames would be easier to spot.
text and numbers
Although AI has improved at rendering text and numbers in generated images and videos, you can still find illegible letters or incorrect dates and times. This is because generative AI does not understand text, numbers, or time; it mimics patterns from its dataset. Analogue clocks are common enough in that data, but digital time has too many variables. This red flag is especially helpful when watching ‘security camera footage’.
audio
Just like the image resolution, the audio tends to be of lower quality, with occasional glitches. The soundscape is also very ‘crowded’: there is a lot of dialogue and busyness. This is because the creator wanted to pack as much information into the video as possible, and as we know, AI videos can’t be too long.
physics
Generative AI does not understand physics. Waves move away from the beach, a video of somebody filming somebody else does not show that person on the phone’s screen, arms appear out of nowhere, stairs lead nowhere, cars disappear, and the world is not reflected properly in mirrors, glasses, or eyes.
Anatomy is a real pain in the neck for AI companies. Skin looks too smooth, teeth are too perfect, joints move awkwardly or even impossibly, and limbs melt into each other. Eyes seem dead because they lack the subtle movements of a real person’s.
situation
AI does not understand situations. The reaction of bystanders is often telling: they don’t respond the way people would in real life. Social conventions are also missed. Sure, somebody can laugh at a joke or run away in fear, but those are common ‘tropes’ found in the model’s dataset. When you pay attention to bystanders, you usually spot strange responses (or no response at all).
context
Strange things happen in the world, but some things are just too strange to have happened, especially when multiple videos start appearing on the same topic. We suddenly saw a lot of security camera footage of animals jumping on trampolines at night, or teachers yelling at students about AI in remarkably similar fashion (see below). The more you know and understand about the world, the more an internal warning will trigger that the content might not be true.
narrative
The problem with AI videos today is not so much that we can’t spot them. We can. The problem is that images stick in the mind. Even when we know they are false, they influence our thinking. So when you see fake videos of teachers ranting about AI, they shape your (unconscious) perception of teachers and their views on AI. It is about creating and supporting a narrative, and in a post-truth world, fake does the job just as well as truth (perhaps even better). AI videos therefore also have a political component: a political party can post or share an AI video, apologise, and still achieve its goal of influencing thought.
AI videos will improve, and not every red flag applies to every video. At the end of the day, it is your gut feeling that should tell you something is off. And that gut feeling can only exist with a wide and deep knowledge of the world. Education is key, but there also needs to be genuine curiosity and healthy scepticism to fend off synthetic slop.
