Here you go, 2 photos. If you look at the 12:43 am entry in the events, it clearly states "vehicle." The second photo is the so-called vehicle. Tell me where you see a real vehicle. And yes, I know this was over a month ago, but I'm still getting these.
I can't say that I see anything that looks like it could clearly be mistaken for a vehicle. Perhaps rearranging things might help, like separating the pet carriers in the center.
I think you’re forgetting this new AI is still in “pilot” aka testing. There will be false detections.
That does kind of resemble a semi-truck to a new and learning AI (which will start by identifying lots of boxy-shaped things as vehicles… then later recognize that wheels make a difference, then look for windows and mirrors to help out, maybe an antenna, and keep learning other things that help distinguish a vehicle from non-vehicles… but it is still at the beginning stages; it's a newborn AI right now). That's why this is in BETA (the AI is just a toddler) and they are asking for submissions to help train it when it gets something wrong, so it can further narrow down what's appropriate. It starts by identifying general outlines, and as it gets more examples, it will add more details to figure out what the differences are.
True but look at my photo again. The tag was around the raccoon. Is it only tagging one thing now?
@carverofchoice I was actually thinking the same thing about the semi but thought it was a bit of a stretch
@FriendofFeralCats I believe the green tag does not matter. The green tag is just tracking what is perceived as motion. Once the camera triggers from motion, the whole scene is analyzed for the AI categories and it flags what it sees. E.g., if a dog runs past and triggers the camera and there is a car in view, it should flag both.
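To make that trigger-then-classify behavior concrete, here is a minimal sketch of the flow described above. All the names (`detect_motion`, `classify_frame`, `CATEGORIES`) are invented for illustration; this is not Wyze's actual code, just an assumption about the general shape of the pipeline.

```python
# Hypothetical sketch: any motion triggers the event, then EVERY frame is
# scanned for ALL categories, not just whatever caused the trigger.

CATEGORIES = {"person", "pet", "vehicle", "package"}

def detect_motion(prev_frame, frame, threshold=25):
    """Local, on-camera motion check (simple pixel-difference stand-in)."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) > threshold

def classify_frame(frame_labels):
    """Stand-in for the cloud AI: returns whichever categories it 'sees'."""
    return {label for label in frame_labels if label in CATEGORIES}

def analyze_event(frames):
    """Once any motion triggers an upload, union the detections from every
    frame, so a dog running past a parked car flags both pet and vehicle."""
    detected = set()
    for frame in frames:
        detected |= classify_frame(frame)
    return detected

# Example: a dog triggers the camera, but a car is also in view.
event = analyze_event([{"pet"}, {"pet", "vehicle"}])
# event == {"pet", "vehicle"} -- both are flagged, not just the trigger
```

The key design point, if this matches the real system, is that the trigger and the labeling are separate stages: the trigger only decides *whether* to analyze, never *what* gets labeled.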
I have the pay-what-you-want PD (Person Detection), not Cam Plus, and have been giving feedback on false detects for person, no motion, and motion in blocked-out areas. I also see in the feedback menu there are all these new options: face, package, vehicle, etc. Today I also noticed there are different buttons depending on whether the phone is in portrait or landscape. This seems like a bug? Since I don't have Cam Plus, I'm wondering why I have these options. I give feedback because I hope it becomes more accurate, so I will eventually sign up for the subscription.
I tried to upload these through the app but kept getting a message that the upload failed.
If you are giving feedback about motion in blocked-out areas, please stop doing that; you are probably confusing and screwing up the AI. The AI has absolutely no access to your detection zone or blocked-out areas. It reviews absolutely everything on screen, and if it sees a person (even outside your detection area) and you tell it there was no person [in your detection area, which it doesn't even know exists], then it will be confused about why you are saying there is no person when it is clearly seeing a person somewhere in the video. Telling it that was a bad detection or not a person will make it think that person is not a person, even though everyone else is telling it that someone exactly like that is a person.
The detection zone is 100% local only (only on the camera). If there is ANY movement at all in this area (a bug, a plant, a leaf, a piece of paper, wind, ANYTHING at all), the entire video is uploaded for the AI to analyze. It scans the ENTIRE video (not just the detection zone or non-blocked-out areas, EVERYTHING) and doesn't care about any blocked-out areas. If it finds a person anywhere in the video at all, even in the blocked-out areas, it will notify/report that there was a person in the video. That's unfortunately just the way it works: because the detection zone is local only, the AI knows nothing about it and can't limit its reports to the small selected area you made. So event analysis is triggered by ANY motion at all in the detection area (even small things, or a change in light brightness or shadows, or clouds, or nearly anything), and then the whole video is analyzed, ignoring any zones you set up. Keep that in mind.
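The split described above (zone checked locally, AI sees everything) can be sketched in a few lines. Again, all names here are hypothetical stand-ins, not Wyze's real API; this is only an assumption about the division of responsibility.

```python
# Hypothetical sketch: the detection zone gates only the local upload
# decision; the cloud AI never receives the zone, so it labels everything.

def should_upload(motion_pixels, detection_zone):
    """Runs ON the camera: any motion inside the zone triggers an upload."""
    return any(p in detection_zone for p in motion_pixels)

def cloud_analyze(all_objects_in_frame):
    """Runs in the cloud: the zone is never sent, so EVERYTHING in the frame
    gets labeled, including objects inside your blocked-out areas."""
    return set(all_objects_in_frame)

zone = {(1, 1), (1, 2)}                      # tiny detection zone
motion = [(1, 2)]                            # a leaf moves inside the zone
objects = {"person-in-blocked-area", "car"}  # person is OUTSIDE the zone

labels = set()
if should_upload(motion, zone):
    labels = cloud_analyze(objects)
# labels still includes "person-in-blocked-area" -- the AI reports it anyway
```

If this matches the real design, the consequence is exactly what the post says: marking that person as a "bad detection" would be training the classifier on a correct answer labeled wrong.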
So when you submit that the AI made a bad detection because it told you about something/someone in your “blocked area” you will just be confusing and screwing up the AI, not making things better. They may someday be able to relay a blocked out area back to the AI to ignore, but that is not the case currently.
I hope that helps clarify why you should please not submit any videos based on your detection zone. At best that has no effect; at worst those submissions will actually make the AI worse and worse over time, and nobody wants that. I know you said your intent was to make it MORE accurate, so please continue submitting videos as you have been, but do so based on everything that shows on the screen during the event, not just things in your preferred detection area, because the AI does not know anything about that area.
As for the rest of the submission options you see, I'd recommend ignoring them. Since you don't have Cam Plus, the AI will not label any of your videos for those things anyway, so it's not really possible for you to tell it whether it analyzed your video correctly for any of those options; you will never see those labels without Cam Plus. Using them could potentially confuse the AI and make things worse again, instead of more accurate.
Thanks for donating clips to help improve things! Keep up the good work.
That is a great summary, and yet I hope you see that it also explains why it is impossible for a normal (non-geek) human to do that properly. The tagging thing looks like a crapshoot that hopes for the best in aggregate.
Thanks for the reply. Now I’m sure some of my feedback has not been appropriate to help the algorithm. That should have been explained when this rolled out.
That doesn’t explain why there are different options when the phone is portrait vs landscape.
So where/how do we report problems with the detection zone? I have a bush blocked out, with even an extra block or two around it, that still triggers as motion. That causes more events to be uploaded for analysis that shouldn't be flagged at all.
After I activated my new V3 Cam, I lost my pet detection on all my Cam Plus Cams even though I am part of the pilot! Any help?
Now I see the “Pet Notifications” but cannot FILTER on Pets. Any ideas?
In the event tab, click the funnel looking thing in the top right. Scroll all the way down to the bottom of the choices and you should see “pet”.
The pet filter option has been removed from there (and everywhere else it used to be) on my app too.
Android Beta v2.19.12
It might be a Beta issue and still working on other versions, I don’t know. I can only verify it is definitely missing on mine, so I assume that is what is happening for the above user.
Nice catch, thanks for the note. I should have said that if you are a part of the pilot that includes pet detection, that's where the filter toggle is.
I sometimes wish all I had was the vanilla production app, hard to keep track of what’s what and what’s beta sometimes.
Yeah, I totally understand. It can be hard for you mavens/mods to know if a difference being posted about is a lack of user knowledge, a bug, or a difference in app versions (since there are so many). I mean, it's hard enough for me to keep track just between beta and production versions, let alone product tester and Alpha tester version stuff! I figured you must be in the testing group for it and didn't realize this change disappeared for some of us. No biggy… it'll be back after some testing, I'm sure. There's no real way for you to have known before I showed you, though.
Yep, and being that this thread is about a closed pilot test group, I assumed the option should probably be there for the users in the group.
Thank-You both for the responses. From the very start, I have been part of the pilot group that included Pet and Facial recognition. I was one of the first to sign up and was enjoying testing these features for months now.
Unfortunately, after I activated my new V3, I no longer saw the option to FILTER on Pet and Face. It was suddenly removed from the funnel menu as a FILTER option. The strange thing is that in the notifications I get on my phone, I see PET notifications, but I still cannot FILTER on it.
@carverofchoice let me know if you find a FIX. I contacted WYZE 2 times on this subject, and they just responded by saying, become a beta tester!
Like you, I too was accepted into Phase 1 testing when it came out.
WyzeShawn told us that Wyze is temporarily removing some things to do closed in-house testing of some of the AI features, as part of a phase-2-style new generation of detection for all of these things. I had hoped it would only be facial recognition being removed, but it looks like they decided to do the closed testing with pet detection too (which he said they also upgraded).
Regarding us being able to help with phase 2 testing of Pet and Facial recognition (both of which are what we’re missing), he said:
Q: How can I enroll in the AI feature testing like with Pet detection and Face Recognition?
A: The test crew is currently closed for now and we’re working on the next generation of the services. If you would like to help, you can submit videos with clear images of faces or pets inside the home to us. Thank you for your assistance!
Don’t worry about it too much for now, after they’re done with some in-house testing on the 2nd gen AI detections for faces/pets I am confident we’ll both have them re-enabled again. If I do find a way to get it earlier, I’ll try to remember to tell you. It’s just a coincidence that it coincided with your new V3. It was sadly a planned removal…but it should be a planned update back in not too long.
Awesome. Thanks so much for the information; I did not know all that. You are right, the day I activated my V3 was the day I lost it, so I attributed it to that.
Thanks again for the info! Pet detection was working great, so I'm looking forward to it being available again.