Wyze’s AI capabilities are… awful. And when users submit feedback on what’s actually in a video recording, the input is only partial, because Wyze doesn’t track the difference between what the camera thought it saw and what was actually recorded.
If Wyze claims it recorded a vehicle when it actually captured a person walking a pet, it should know how wrong its classification was. When it records the shadow of moving leaves but also calls that a vehicle, that should be part of the report.
There needs to be a column for what Wyze believed it captured and a separate column for what was actually recorded. That’s the only way to know what to improve.
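To make the idea concrete, here is a minimal sketch of that two-column comparison. All field names and tag values are my own illustrative assumptions, not Wyze's actual schema:

```python
from collections import Counter

# Hypothetical feedback records: what the AI tagged vs. what was really there.
records = [
    {"ai_tag": "vehicle", "actual": "person walking a pet"},
    {"ai_tag": "vehicle", "actual": "moving shadows of leaves"},
    {"ai_tag": "person",  "actual": "person"},
]

# Tally believed-vs-actual pairs so the worst confusions surface first.
confusions = Counter(
    (r["ai_tag"], r["actual"]) for r in records if r["ai_tag"] != r["actual"]
)
for (believed, actual), count in confusions.most_common():
    print(f"AI said {believed!r}, video showed {actual!r}: {count}x")
```

With both columns stored, a report like this would tell Wyze exactly which misclassifications happen most often.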
The video submissions go straight to the AI, so it doesn’t matter what it thought it saw; all the AI cares about is correct input data. Additionally, the tags the AI assigned are stored by Wyze, so they can be retrieved if needed.
The videos are actually tagged on the server with what the AI thought it saw. The server has the video and the original tag to cross-reference with your correction submission. That tag is the first bubble you see when opening the training feedback tray. Wyze already has that metadata on their end; you don’t need to provide it again, since they are the ones who told you what the AI thought it was.
When you change that tag, deselecting the original tag and choosing another, you are telling the AI, “no, what you have already tagged is wrong; it is actually this.” The AI training protocols can then compare the image in the context of the old mistaken tag and the new corrected tag.
If you leave the original tag selected and add a second or third tag, you are telling the AI training “you got one right, but you missed this and that”.
If you choose to submit feedback on an untagged motion video, you are telling the AI training “you missed it all, this and that were in the video”.
There are other situations (besides animals that aren’t pets) that are not covered by the feedback form, such as the movement of shadows (from tree branches, birds and airplanes flying overhead, etc.). Windy days can result in more than 100 “junk” recordings of nothing but moving shadows.
The AI also needs to ignore objects that aren’t moving. The number of times it reports “Vehicle” for a parked car is another useless notification. Not even dogs chase parked cars. If it’s not moving, it shouldn’t be identified by a motion detector like the WyzeCam.
Meanwhile, as we wait for better choices for AI feedback, Wyze has managed to add a new one… that no one could possibly need (or have asked for): Santa.
The developers obviously don’t have time to get around to legitimate requests, but they do have time to fool around with things that don’t matter.
Looks like they removed that Santa label. But I wish that time had been spent fixing bugs, not adding stupid things we don’t need.
I just had an issue on my V3 camera where a bush moved by the wind got the detection box on it, yet the AI labeled the video as a vehicle, even though the car was blocked out of the detection grid. I thought the AI was only supposed to identify things within the assigned grid? This stuff just doesn’t work!
Yes, Wyze’s AI is artificially intelligent, not close to actually intelligent. With people and pets walking through the frame, it cites parked cars as “Vehicle.” Dogs are smart enough not to chase parked cars. Not Wyze.
Once again my V3 camera had a motion detection event in my driveway, triggered by a bird this time, yet the AI labeled the video as a vehicle. My vehicle was not even within the motion detection grid. The AI doesn’t work.