Vehicle events being falsely labeled as people

I am aware of this. I am not referring to the green motion tagging box, since that is a user-defined feature that can be turned off. Even if it is turned off, the thumbnail will still show a snapshot of the first object that was AI tagged, regardless of the motion tracking feature, even if that AI-tagged object doesn't appear until the end of the video. The green motion tracking box has nothing to do with this; I just happen to have mine on.

When you play events and scroll through them from side to side in the Event player, the first snapshot is that of the AI-tagged object. I have had videos whose thumbnail preview shows an AI object that doesn't appear until minutes into the event video, where it is the only AI object but not the only motion object. This happens regardless of whether the green motion tracking is on or off. The thumbnail is a snapshot preview of whatever was AI tagged.

I don't believe that is a statement that can be made with any degree of certainty on either side of this discussion unless every event video could actually tag the very object that was AI identified. Otherwise we are all just guessing, pointing around the screen saying maybe this or maybe that. But how many people did you see in those videos?

Bottom line, the Person Detection AI is highly susceptible to false positives. I can post dozens of videos every single day to prove this. And it isn’t getting any better.

The testing you described is testing Wyze should already be doing in practical applications. It is going to take some effort, but I will restrict the majority of the zone on each of these cams and slowly open it up. That should reveal what is being AI tagged. But again… nothing in the videos even resembled people.


He is actually correct on this. A very recent change to the V2, V3, and Pan firmware (the current update) has restricted the identification of AI objects to only what is visible inside the detection zone. Previously, any object in the field of view, even outside the DZ, could be AI tagged. In my case, none of these cams has a DZ.

I was aware of the update. My point was that I'm looking for more explanation / a definition of "most" in the sentence:

“The AI has recently been updated to respect most of the detection zones and ignore areas that are blocked out.”


I will provide an example. Say the following is a detection zone:

Note that in these examples, only the person's legs are within the detection zone, not their upper body. If the AI analyzed ONLY the detection zone, it would say there was no person in view, even though part of a person was inside the detection zone in every case. If it were 100% strict about analyzing only the detection zone, this person would never be notified of a person at their house when they clearly want those notifications.

While I haven't yet confirmed the exact details of how it works, I infer that these would all still detect a person because the person's object area and the motion detection area overlap, which seems to be similar to what the AI team told me it is supposed to do now:

I think it was originally being strict about the detection zones, because one of my cameras suddenly stopped giving me detections for a while. That has mostly been resolved for me now, but it made me realize there was a big change. I will have to ask them some more questions about this to be sure, but I am thinking it has to have some flexibility to analyze areas when an object is partially within and partially outside the detection zone, as in the examples above. That is why I said it "mostly respects" the zones: I am not confident what the boundaries or considerations are for overlapping objects in situations like these, and I won't know until I can question them again. I didn't want to be definitive when I can't accurately say so. I hope that makes sense. :+1:
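
To make the overlap idea concrete, here is a minimal Python sketch of one way such a rule could work. To be clear, the overlap-ratio test and the 0.3 threshold are my own assumptions for illustration only; Wyze hasn't confirmed how their AI actually weighs partial overlap:

```python
# Hypothetical overlap rule: an AI-tagged object "counts" if enough of
# its bounding box lies inside the detection zone. The ratio rule and
# the 0.3 threshold are illustrative guesses, not confirmed Wyze logic.
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float

    def area(self) -> float:
        return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)

def overlap_ratio(obj: Box, zone: Box) -> float:
    """Fraction of the object's box that falls inside the zone."""
    inter = Box(max(obj.x1, zone.x1), max(obj.y1, zone.y1),
                min(obj.x2, zone.x2), min(obj.y2, zone.y2))
    return inter.area() / obj.area() if obj.area() else 0.0

def counts_as_detection(obj: Box, zone: Box, threshold: float = 0.3) -> bool:
    # Fully outside -> ratio 0 -> ignored; legs dipping into the zone
    # can still push the ratio over the threshold.
    return overlap_ratio(obj, zone) >= threshold

zone = Box(0, 60, 100, 100)      # detection zone: bottom strip of the frame
person = Box(40, 20, 55, 90)     # upper body above the zone, legs inside it
print(counts_as_detection(person, zone))             # True  -> still tagged
print(counts_as_detection(Box(0, 0, 10, 10), zone))  # False -> fully outside
```

Under a rule like this, the legs-only examples above would still fire, while anything with zero overlap would be ignored entirely.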


I missed that distinction on my first read-through. That actually makes a lot more sense for ensuring positive AI tagging of objects that only partially fill the DZ.


So I get what you are saying 100%, but it also raises questions about detecting things in "blocked out areas".

I will block out the small area in my view that has a new brightly colored rectangular sprinkler, which, based on your explanation, could be the source of my issues. It’s simply a rectangular blue lawn sprinkler. From an AI perspective, it in no way resembles a person.


I just now did a series of experiments to test this hypothesis and I can fairly confidently say that this is not the case. You can do similar tests to verify this yourself.

What I did: I put a camera in a room with no movement and made sure the doorway was in view. I then did several things to make sure it saw motion for a while without identifying a person, for varying lengths of time, from just a moment up to more than 40 seconds. I also tried rushing in as fast as possible to see where it would take the thumbnail. Anyway, the overall point is that I can 100% reliably get the thumbnail to display something other than the person identified later in the video. Here are just a few examples where it takes a thumbnail with no person in view, but a person comes in later and it still tags the event as a person even though they aren't in the thumbnail:

Another example where the thumbnail is not of the person who is detected since the person shows up a few seconds later:


Also, in some of your examples, it should have either shown the car on the far right side when it first entered, or else it wasn't the car that was identified as a person:

I can definitely see how you developed your hypothesis on this, though. It was very insightful and logical. The correlation is certainly extremely high and suggestive of your conclusions, but more extensive testing shows the thumbnail is actually unrelated to the timeline of the AI object detection.

Still, your hypothesis makes for an EXCELLENT question the next time the AI team does an AMA! You should remind me to ask them when they host one. I would LOVE to ask the devs more about the thumbnail choice, because it isn't always the FIRST frame with motion or an object, and while it is usually toward the beginning of the event, that is not always the case. Interesting oddity you discovered, whatever the answer is. Sadly, I suspect we may not get the chance to ask them for clarification before they implement the new thumbnail feature they talked about launching soon, but the question will still be relevant in general.

In the end, the testing demonstrates that we are back to concluding that the thumbnail and the AI object are not directly related. The "person" detected could be nearly anywhere in your frame, and in any frame anywhere in the video, not necessarily just the thumbnail image.
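
If it helps to picture that decoupling, here is a toy Python model of the behavior the tests suggest. The frame structure, thumbnail rule, and labeling rule are all my own assumptions for demonstration, not Wyze internals:

```python
# Toy model: the thumbnail is chosen near the start of the event (here,
# the first frame with motion), while the event's AI label can come from
# ANY frame in the clip. All rules here are illustrative assumptions.
from typing import NamedTuple, Optional

class Frame(NamedTuple):
    t: float                 # seconds into the event
    has_motion: bool
    ai_label: Optional[str]  # e.g. "person", or None

def pick_thumbnail(frames: list[Frame]) -> Frame:
    # Assumed rule: first frame with motion, whatever the AI saw there.
    return next(f for f in frames if f.has_motion)

def event_labels(frames: list[Frame]) -> set[str]:
    # The event is tagged with every label seen anywhere in the clip.
    return {f.ai_label for f in frames if f.ai_label}

event = [
    Frame(0.0,  True, None),      # waving an object: motion, no person
    Frame(5.0,  True, None),
    Frame(42.0, True, "person"),  # a person finally walks in
]

print(pick_thumbnail(event).t)  # 0.0 -> thumbnail contains no person
print(event_labels(event))      # {'person'} -> event still tagged "person"
```

That reproduces exactly what the room test showed: a person-tagged event whose thumbnail has no person in it.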

Great thinking though buddy! I enjoyed testing out your hypothesis on several of my cams and learning a bit more. :+1:


Yeah, my AMA summaries in the forum often paraphrase the posed question, but include word for word what the employee responded (which is mostly what matters). In this case, the context of the full question helps show everything he is talking about when he confirms it is already launched and in place. See the difference:

Forum paraphrasing:

Original Question:

I used a screenshot of the original in full since the question context helps a lot with the explanation. In the forum I'm mostly just focusing on the answer, for people who really only care to read what the employee said and get a brief understanding of what it was about.

But it is certainly cool that they are implementing this now! It will be very helpful!


I do agree with your overall point regardless of all of the above, though. Person detection needs some improvements to reduce false positives.

Wyze is working on it, especially to make sure Cam Plus Pro is as accurate as possible since it is security related. Standard Cam Plus will be able to benefit from this over time too. It will get there. By the end of the year we'll have better tools to know what's going on. I look forward to that so much!


Thanks for testing that out. It certainly does look like there is a strong correlation; perhaps that is what they were trying to do, but as you pointed out, it isn't the same every time, and there is no way to tell without direct tagging embedded into the video.

Not sure I picked that one up either. What new feature is this?

I asked about this in the last two AMA events Wyze did recently. I already copied the answer for one of them earlier in this thread, but here is the new clarification they just gave us in response to my follow-up question:

So it does MOSTLY respect the detection zone, in the sense that if an object is partially in and partially out, it will still be included in the detection if enough of it is in the detection zone. On the other hand, if an object is 100% outside the detection zone, then it won't be considered [anymore]. A pretty cool way to do it to make sure things are not missed.


I just bought a V3 and it is apparent the issue still is not resolved. Every car is identified as a car. The AI is pretty useless. Shame on Wyze. They advertise a lot of great abilities in their products that are downright lies, then stop supporting them. A perfect example is the outdoor cam.

Welcome to the Wyze User Community Forum @kebrown1980!

Thank you for posting confirmation that the Wyze AI is working properly! They update it regularly and your feedback showing that it does what it is supposed to do is great news!
