Vehicle events being falsely labeled as people

FYI, I just got a notice that the monthly AI update dropped. While not drastic, the following are supposed to be the measurable improvements this month:

Even if the improvement is just 1.5%-5% every month, after a few months of compounding it adds up to drastic progress…maybe not noticeable in a single month, but usually noticeable after a few months in a row.
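As a quick back-of-the-envelope sketch of that compounding (the 1.5% and 5% monthly figures are just the hypothetical numbers from above, not published Wyze metrics):

```python
# Rough illustration: even small monthly accuracy gains compound.
# Rates below are the hypothetical 1.5%-5% monthly improvements mentioned above.

def compounded_gain(monthly_rate, months):
    """Total relative improvement after compounding a monthly rate."""
    return (1 + monthly_rate) ** months - 1

print(f"{compounded_gain(0.015, 6):.1%}")  # 1.5%/month for 6 months → 9.3%
print(f"{compounded_gain(0.05, 6):.1%}")   # 5%/month for 6 months → 34.0%
```

So even the low end of that range roughly doubles every eight months or so of steady improvement.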

2 Likes

That it did, hopefully some will see a bit of improvement.

4 Likes

I’ll go find some video examples tomorrow, but here’s the problem I see with your response. I ordered my V3s via the waitlist when they first came out, and I haven’t moved them since I put them up. So your approach of “we’ll help you figure out what’s wrong with your placement” isn’t what I need. They were working fine until recently; then I started getting all these bad notifications, beginning about a month ago (maybe a little more). So it’s not a placement problem. It’s an AI issue.

And to your question about removing the street with detection zone exclusions, the answer is no, I can’t do that. Part of the reason I have these cams is to report when there are people near my car in front of my house. Besides, that’s just avoiding the real issue of inaccurate AI detection.

I wish I could report that my issues were happening at night, but they all occur during the day in full bright daylight, so it sounds like these updates won’t help my issues. Thanks for posting this though.

You are not alone. I recently kept a 10-day log of AI-tagged events. The results did not instill confidence. My worst three V3 cams were the three pointed at the street with Person Detection (PD) only. Cars are constantly being PD tagged, mostly in daylight. This has been an ongoing shortcoming of the AI for some time.

One of the features of the Events page is that it will show you the snapshot thumbnail of the actual object that was tagged in the video:

Then, when you load that video to view, it will again show you that snapshot in the larger viewer before starting the video. It is there that you will be able to see what object caught the tag:

For example this vehicle being tagged as a person:

If you watch as I click the video, that same thumbnail appears prior to the full video loading:

Here are 10 videos of cars being tagged as people just from the last two days.
Perfect examples! Thanks for going to the effort to post these. I know it’s a hassle.

2 Likes

That is a misunderstanding. The green tag ONLY identifies motion (something that changed between one frame and another), not what the AI actually detected. The AI currently considers each frame individually, and the green tag is meaningless to it.

The green tag is 100% decided by the camera’s local motion processing. The AI does not take this into account at all. AI detection is totally separate.
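To illustrate the separation with a toy sketch (this is not Wyze’s actual firmware logic, just the general idea of frame differencing): the local motion trigger only compares pixel values between frames and knows nothing about object classes, which is why the green box can sit on a car while the AI labels something else entirely.

```python
# Toy sketch of local motion detection by frame differencing.
# It finds pixels that changed between frames; it has no concept of
# "person" or "car" - classification is a completely separate step.

def motion_pixels(prev_frame, curr_frame, threshold=25):
    """Pixels whose brightness changed more than `threshold` between frames."""
    return [(i, j)
            for i, (row_a, row_b) in enumerate(zip(prev_frame, curr_frame))
            for j, (a, b) in enumerate(zip(row_a, row_b))
            if abs(a - b) > threshold]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 90, 10],   # one pixel changed a lot: "motion" happened here
        [10, 10, 10]]

print(motion_pixels(prev, curr))  # → [(0, 1)]  (where a green tag would go)
```

Anything could then be classified anywhere else in the frame; the changed pixels only decide that an event gets recorded at all.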

In your case above, the car is the trigger for the video, but more than likely it is another unmoving object being identified as a person. We can reasonably conclude it is not shadows (at least not most of the time), since many videos were not shadow related. My first guess would be one or more of the mailboxes having a similar shape to a person at a distance. I have seen fire hydrants, house lamps, and the like be labeled as a person before. Possibly some other weird things like this:

image
image
image

All of which have a similar shape/outline to a human at a distance, or sometimes when there is over-exposure.

Don’t get me wrong. I am very supportive of the AI progressing to resolve that and come to learn the difference.

However, if the AI were actually identifying cars as people, then I would also always have cars identified as people, and I do not; neither do the majority of users. Cars are rarely identified as people, so we can reasonably conclude that it is not the cars themselves being identified as a person in these cases, and that something else in your particular environment is triggering this mis-detection. I found countless potential suspects from even a brief inspection of the environment.

There are some things you can do to figure it out, though. For example, just for testing’s sake (you can undo this later), try using a detection zone and blocking out a couple of specific areas that have a vague human-esque shape…anything that appears to have a head on top of something else, a head with arms or legs, or a seated body with a head, round or protruding in some way, with or without an apparent neck, even allowing for head differences due to hats or whatever. Block out mailboxes, stumps, anything with a roundish top, most of the tree branches, etc. Try lots of different things, maybe all together to start with and then just one or a few at a time. THEN see the difference.

The AI has recently been updated to respect most of the detection zones and ignore areas that are blocked out. Thus, if you suddenly have fewer person detections when events trigger (from cars), you will know that something (or multiple things) in those blocked-out zones is the primary offender giving you false person detections.

Maybe try blocking out nearly everything except a piece of the road where the camera will still see the full size of a car, but not much else (make sure the mailboxes are blocked out, etc.). If your person notifications then decrease drastically, it is because all the AI ever checks is the cars, and the cars aren’t what it thinks is a person…the AI is still seeing the cars, but it won’t be saying that they are persons. Then you can basically rule out the cars as the main culprit and expand your detection zone until you start getting false identifications again. At that point you’ll know something in the newly exposed area is the actual culprit and that the cars are just the event trigger.
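That elimination procedure can be sketched as a simple loop over candidate regions. Everything here is hypothetical (the region names, the idea of counting false positives per zone configuration); it just shows the logic of blocking one suspect at a time and watching whether the false-positive count drops:

```python
# Hypothetical sketch (not Wyze code) of the process-of-elimination test:
# block one candidate region at a time and see if false "person"
# detections drop compared to the unblocked baseline.

def find_culprits(candidates, observe_false_positives):
    """candidates: region names to try blocking, one at a time.
    observe_false_positives(excluded_set) -> FP count with those regions blocked."""
    baseline = observe_false_positives(set())       # nothing blocked
    culprits = []
    for region in candidates:
        with_block = observe_false_positives({region})
        if with_block < baseline:                   # FPs dropped: region contributed
            culprits.append(region)
    return culprits

# Stand-in for days of real observation, with assumed per-region FP counts:
def simulated_observations(excluded):
    fp_sources = {"mailbox": 6, "stump": 2}
    return sum(n for region, n in fp_sources.items() if region not in excluded)

print(find_culprits(["mailbox", "stump", "road"], simulated_observations))
# → ['mailbox', 'stump']
```

In practice each "observation" is a day or two of watching notifications with one zone configuration, so batching suspects first and then splitting, as described above, saves a lot of time.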

Having said all that, if you don’t want to go through all that effort, you can just wait. Either the AI will improve organically soon enough as it evolves with enough data, or, by the end of the year, Wyze says they are going to release new features that will show us exactly what is being identified as what by the AI. Just today, Jimmy confirmed that the AI team is working on having AI tagging in the video (just like you thought these events were showing with the green tag, which was only tagging motion, not what the AI actually identified), AND he said they are ALSO working on having the notifications show a thumbnail of the identified detection instead of the entire frame. So by the end of the year we should be able to see exactly what your false identifications are, but I can nearly guarantee that the majority of them have nothing to do with the cars/vehicles…those are just the motion triggers that initiate the event.

Either way, Wyze is working on a solution to help you better identify what the real issue is.

image

1 Like

This is helpful. Thanks for the lengthy post. I have to question your comment that the AI respects “most” detection zones…

You need to explain that in more detail.

I am aware of this. I am not referring to the green motion tagging box, as it is a user-defined feature that can be turned off. Even if it is turned off, the thumbnail will still return the snapshot of the first object that was AI tagged, regardless of the motion tracking feature, even if that AI-tagged object appears at the end of the video. The green motion tracking box has nothing to do with this; I just happen to have mine on.

When you play events and scroll through them from side to side in the Event player, the first snapshot is that of the AI-tagged object. I have had videos whose thumbnail preview shows an AI object that doesn’t appear until minutes into the event video, where it is the only AI object but not the only motion object. This happens regardless of whether the green motion tracking is on or off. The thumbnail is showing you a snapshot preview of what was AI tagged.

I don’t believe that is a statement that can be made with any degree of certainty, on either side of this discussion, without every event video being able to actually tag the very object that was AI identified. Otherwise we are all guessing, pointing around the screen saying maybe this or maybe that. But how many people did you see in those videos?

Bottom line, the Person Detection AI is highly susceptible to false positives. I can post dozens of videos every single day to prove this. And it isn’t getting any better.

The testing you described is testing Wyze should already be doing in practical applications. It is going to take some effort, but I will restrict a majority of the zone on each of these and slowly open it up. That should reveal what is being AI tagged. But again… Nothing in the videos even resembled people.

1 Like

He is actually correct on this. A very recent change to the V2, V3, and Pan Firmware (current update) has restricted the identification of AI objects to only that which is viewed inside the detection zone. Previously, any object in the field, even outside the DZ, could be AI tagged. In my case, none of these cams has a DZ.

I was aware of the update. My point was that I’m looking for more explanation/definition of “most” in the sentence:

“The AI has recently been updated to respect most of the detection zones and ignore areas that are blocked out.”

1 Like

I will provide an example. Say the following is a detection zone:

Note that in these examples, only the person’s legs are within the detection zone, not their upper body. If the AI ONLY analyzed the detection zone, it would say there was no person in view, even though part of a person was inside the detection zone in every case. If it were 100% strict about only analyzing the detection zone, this user would never be notified of a person at their house, even though they clearly want those notifications.

While I haven’t yet confirmed the exact details of how it works, I infer these would all still detect a person because the person’s object area and the motion detection area overlap, which seems to be similar to what the AI team told me it is supposed to do now:

I think it was originally strict on the detection zones, because I did have one camera that suddenly stopped giving me detections for a while. That’s mostly been resolved for me now, but it made me realize there was a big change. I will have to ask them some more questions about this to be sure, but I am thinking it has to have some flexibility to analyze areas when an object is partially within and partially outside the detection zone, as in the examples above. That is why I said “mostly respects”: I am not confident what the boundaries or considerations are for overlapping objects in these situations, and I won’t know until I can question them again. I didn’t want to be definitive when I can’t accurately say so. I hope that makes sense. :+1:
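For what it’s worth, a partial-overlap rule like the one described could look something like this. The box format, the example coordinates, and the overlap threshold are purely my assumptions for illustration, not confirmed Wyze behavior:

```python
# Hypothetical sketch of a "mostly respects the detection zone" rule:
# count a detected object if enough of its bounding box overlaps the zone.
# Boxes are (left, top, right, bottom) in pixels; the 0.3 threshold is assumed.

def overlap_area(a, b):
    """Area of intersection between two boxes, 0 if they don't overlap."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def in_zone(obj_box, zone_box, min_fraction=0.3):
    """True if at least min_fraction of the object lies inside the zone."""
    obj_area = (obj_box[2] - obj_box[0]) * (obj_box[3] - obj_box[1])
    return overlap_area(obj_box, zone_box) / obj_area >= min_fraction

zone = (0, 300, 640, 480)        # bottom strip of the frame
legs_in = (100, 200, 160, 400)   # person: upper body out, legs in the zone
fully_out = (500, 0, 560, 250)   # mailbox entirely outside the zone

print(in_zone(legs_in, zone))    # → True  (partially in, still counted)
print(in_zone(fully_out, zone))  # → False (100% outside, ignored)
```

That would reproduce both behaviors described in this thread: a person whose legs cross into the zone still triggers, while an object completely outside the zone is no longer considered.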

2 Likes

I missed that distinction on my first read-through. That actually makes a lot more sense, to ensure positive AI tagging of objects that only partially fill the DZ.

1 Like

So I get what you are saying 100%, but it also raises questions about detecting things in “blocked-out areas”.

I will block out the small area in my view that has a new brightly colored rectangular sprinkler, which, based on your explanation, could be the source of my issues. It’s simply a rectangular blue lawn sprinkler. From an AI perspective, it in no way resembles a person.

2 Likes

I just now did a series of experiments to test this hypothesis and I can fairly confidently say that this is not the case. You can do similar tests to verify this yourself.

What I did: I put a camera in a room that has no movement and made sure the doorway was in view. I then did several things to make sure it saw motion for a while without identifying a person. I did this for varying lengths of time, from just a moment up to more than 40 seconds. I also tried rushing in as fast as possible to see where it would take the thumbnail. Anyway, the overall point is that I can 100% reliably get the thumbnail to display something other than the person identified later in the video. Here are just a few examples where it takes a thumbnail with no person in view, but a person comes in later and it still tags the event as a person even though they aren’t in the thumbnail:

Another example where the thumbnail is not of the person who is detected since the person shows up a few seconds later:

image

Also, in some of your examples, it should have either shown the car on the far right side when it first entered, or else it wasn’t the car that was identified as a person:

I can definitely see how you developed your hypothesis on this, though. It was very insightful and logical. The correlation is certainly extremely high and suggestive of your conclusions, but some more extensive testing shows the thumbnail is actually unrelated to the timeline of the AI object detection.

Still, your hypothesis makes for an EXCELLENT question the next time the AI team does an AMA! You should remind me to ask them the next time they host one. I would LOVE to ask the devs more about the thumbnail choice, because it isn’t always the FIRST frame with motion or an object, and while it is usually toward the beginning of the event, that is not always the case. Interesting oddity you discovered, whatever the answer is. Sadly, I suspect we may not get the chance to ask them for clarification before they implement the new thumbnail feature they talked about launching soon, but the question will still be relevant in general.

In the end, the testing demonstrates that we are back to concluding that the thumbnail and the AI object are not directly related. The “person” detected could be nearly anywhere in your frame and could appear in any frame anywhere in the video, not necessarily just the thumbnail images.

Great thinking though buddy! I enjoyed testing out your hypothesis on several of my cams and learning a bit more. :+1:

1 Like

Yeah, my AMA summary in the forum often paraphrases the posed question, but includes word for word what the employee responded (which is what mostly matters). In this case, the context of the full question helps to show everything he is talking about when he confirms it is already launched and in place. See the difference:

Forum paraphrasing:

Original Question:

I used a screenshot of the original in full, since the question context helps a lot with the explanation. In the forum I’m mostly just focusing on the answer, for people who really only care to read what the employee said and get a brief understanding of what it was about.

But it is certainly cool that they are implementing this now! It will be very helpful!


I do agree with your overall point regardless of all of the above, though. Person detection needs some improvements to reduce false positives.

Wyze is working on it, especially to make sure Cam Plus Pro is as accurate as possible since it is security related. Standard Cam Plus will be able to benefit from this over time too. It will get there. By the end of the year we’ll have better tools to know what’s going on. I look forward to that so much!

1 Like

Thanks for testing that out. It certainly does look like there is a strong correlation; perhaps that is what they were trying to do, but as you pointed out, it isn’t the same every time, and there is no way to tell without direct tagging embedded into the video.

Not sure I picked that one up either. What new feature is this?

I asked about this in the last 2 AMA events Wyze did recently. I already copied the answer for one of them earlier in this thread, but here is the new clarification they just gave us based on my clarification question:

So, it does MOSTLY respect the detection zone, in the sense that if an object is partially in and partially out, it will still be included in the detection if enough of it is in the detection zone. On the other hand, if an object is 100% outside the detection zone, then it won’t be considered [anymore]. Pretty cool way to do it to make sure things are not missed.

1 Like

I just bought a V3 and it is apparent the issue still is not resolved. Every car is identified as a car. AI is pretty useless. Shame on Wyze. They advertise a lot of great abilities in their products that are downright lies. Then they stop supporting them. A perfect example is the outdoor cam.

Welcome to the Wyze User Community Forum @kebrown1980!

Thank you for posting confirmation that the Wyze AI is working properly! They update it regularly and your feedback showing that it does what it is supposed to do is great news!

1 Like