Come join us in shaping the future of Wyze AI

@WyzeShawn @WyzeGwendolyn I’ve been thinking more about AI on the cameras, and I’m curious what kind of plans you guys might have for the future (to the extent that you’re able to share, anyway).

I thought it was kind of fascinating to see how people were reporting occasional false positives on pets and vehicles – and then to stumble across a guy who mentioned that he was tagging EVERY pet and vehicle motion event as a “person,” because he didn’t want to risk missing anything (which would obviously skew the effectiveness of the person detection).

Specifically, that got me thinking about the idea of teachable AI. To a certain extent, I guess that’s what the person detection feature already is, except that I don’t think it’s using my personal data to teach itself, right? (Aside from the extent to which the videos I share through the beta app wind up getting factored into the global algorithm.)

Anyway, I was just thinking about some of the amazing things we could do if we were able to teach our cameras custom AI. Obviously, this kind of feature would be advanced, and probably not for the faint of heart. I’m sure it would require the user to do a lot of tinkering in order to get things just right, and to build the dataset in the first place. But that doesn’t really scare me. I’m a web developer, so I’m sort of a hacker/tinkerer by nature. I think it could open the door for some pretty cool things – some features that are light years beyond what any other camera companies are providing.

For example, with custom AI specific to the camera, it might be possible to cut down on the false positives from headlights at night. I get why that’s a difficult problem, because each camera is different, and there are some headlights that ARE real motion (for example, if someone is coming up the driveway and the camera is blinded by the light). But if it were possible to feed the camera real-world examples from the specific camera in question (i.e. “THIS is important motion, THIS is unimportant motion”), it would be possible to get more meaningful alerts. The same idea could be applied to trees swaying when there’s too much wind during the daytime – if it’s specific to the camera, and the camera’s position, it’s much easier for the camera to be smart about it.
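Just to sketch what I mean by “teaching” a specific camera: imagine each motion event gets boiled down to a small feature vector (say, brightness and motion area – those features, and all the numbers, are made up for illustration), and the user labels a few events. Then even something as simple as a nearest-neighbor vote could learn that camera’s quirks. This is purely a sketch of the idea, not how Wyze actually does anything:

```python
# Hypothetical per-camera "teachable" filter. Each motion event is assumed
# to arrive as a small numeric feature vector from some upstream step
# (e.g. (brightness, motion_area)); the features here are illustrative.
from math import dist  # Euclidean distance (Python 3.8+)

class TeachableFilter:
    def __init__(self):
        self.examples = []  # list of (feature_vector, is_important) pairs

    def teach(self, features, important):
        """User labels one event: 'THIS is important / unimportant motion'."""
        self.examples.append((tuple(features), important))

    def is_important(self, features, k=3):
        """Majority vote among the k nearest labeled examples."""
        if not self.examples:
            return True  # nothing taught yet: alert on everything
        nearest = sorted(self.examples, key=lambda e: dist(e[0], features))[:k]
        votes = sum(1 if label else -1 for _, label in nearest)
        return votes >= 0  # ties count as important (safer to alert)
```

So after labeling a few “headlight glare at night” events as unimportant and a few “person coming up the driveway” events as important, similar future events get sorted accordingly – all specific to that one camera’s view.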

It might also be possible to get information about specific people on camera (I know people have already brought up facial recognition in the forum) or specific vehicles on camera. Obviously, one red Mustang looks pretty much like another, but if the camera knows that a red Mustang tends to be in front of the house regularly, it could send an alert like “Jane is home!” with reasonable accuracy, assuming someone else with a red Mustang didn’t park in your driveway. It could also learn to recognize the mail truck, UPS truck, FedEx truck, etc.
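For the vehicle idea, here’s the kind of matching I’m picturing, as a rough sketch: assume some upstream step reduces each detected vehicle to a normalized color histogram, and we compare it against known “profiles.” The vehicle names, histogram format, and threshold are all invented for the example:

```python
# Illustrative sketch: tag a detected vehicle against known profiles.
# Assumes each detection arrives as a normalized color histogram
# (values summing to 1.0); the profiles and threshold are made up.

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def tag_vehicle(detection_hist, known_profiles, threshold=0.8):
    """Return the best-matching known vehicle name, or None if no
    profile is similar enough to be worth an alert."""
    best_name, best_score = None, threshold
    for name, hist in known_profiles.items():
        score = similarity(detection_hist, hist)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

A real system would need much richer features than a color histogram (hence the red-Mustang ambiguity I mentioned), but the “learn this camera’s regulars” idea is the same.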

Anyway, I know some of this is probably shooting for the moon, but I love tinkering with stuff like that. If you guys are even remotely considering doing advanced stuff like that, I’d be thrilled!


That does sound awesome! Though I’m not involved enough on the technical side to comment further. :slight_smile:



2022 is a big milestone :thinking:

The idea is awesome. Though it may happen within 3 months, this is definitely thrilling to the Wyze users if we can make it happen!


I assume you meant “may not”, right? Yeah, I’m aware it’s not as simple as “Hey, throw this in with the next firmware update.” But based on your post and quiz, I figured if I could get anyone at Wyze excited about it, it would be you. Haha. Worth a shot. :slight_smile:


Nice catch :slight_smile:

Hey Wyze! Please make A.I. software to detect cars when you release the Outdoor Cam.


The only issue with detecting cars is that there are always cars in my driveway, so anytime it detected motion it would see a car and tag it that way. It would also have to find a way to determine whether the vehicle is what actually caused the motion. That is a more difficult task.

It really shouldn’t be that difficult, unless the AI is literally only detecting a SINGLE frame of the video. Even if it is, just increase that to two frames, one second apart, and you’re golden. If a car appears in one frame, and either doesn’t appear or appears in a different location in the second frame – voila! Vehicle detection!
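To sketch out the two-frame idea (purely illustrative Python – the bounding-box format and the overlap threshold are my assumptions, and the per-frame vehicle detector itself is whatever the camera already runs):

```python
# Hypothetical two-frame check: did a detected vehicle actually move?
# Boxes are assumed to be (x, y, width, height) tuples from a per-frame
# vehicle detector; the IoU threshold is an invented tuning parameter.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def vehicle_motion(frame1_boxes, frame2_boxes, threshold=0.7):
    """True if a vehicle appeared, disappeared, or shifted between the
    two frames; a frame-2 box overlapping some frame-1 box with
    IoU >= threshold is treated as a parked (stationary) vehicle."""
    if len(frame1_boxes) != len(frame2_boxes):
        return True  # a vehicle entered or left the scene
    for b2 in frame2_boxes:
        if not any(iou(b1, b2) >= threshold for b1 in frame1_boxes):
            return True  # this vehicle isn't where any vehicle was before
    return False
```

With something like that, the parked cars in your driveway produce near-identical boxes in both frames and get ignored, while a car pulling in trips the alert.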

I am not a programmer, so I don’t know how difficult or easy it is; I just know it would take more than it does now. I know that with person detection it scans the whole 12-second video, but if I set up a detection zone and stand outside of it, and a tree inside the detection zone moves, it will tag the event as a person even though I am out of the zone and not moving. So currently it seems to ignore the zones and just look for what is in the video, not whether it’s moving.
The only other issue would be how much bigger it would make the program. Since detection is done on the cam itself, it would have to fit within the memory constraints of the cam.
It might be super simple to do; I just know it would require some changes to work effectively.

Isn’t that why you use AI? To simplify difficult tasks?

I’m not totally sure. I haven’t done that KIND of programming (firmware for hardware with specific limitations, like the camera), but if it’s capable of checking the whole video and running the AI, I don’t think it would be much more difficult to compare two frames and determine whether the detected vehicle exists in a different location between frames. That seems like a much easier task than detecting whether the vehicle is in the frame at all, which we already know it IS capable of doing.


I was referring to the programming of the AI and the ability to fit it on the camera.