I haven’t written CV software myself yet, and I understand that the constraints on the WyzeCam are not just tight, but crazy tight. I wonder, though, whether motion sensitivity could be made to exclude areas of motion that are either lighter than 95% of the rest of the frame or darker than 95% of it. Perhaps the thresholds could be user-adjustable. That should help reduce false positives from shadows and reflections without having to get into image pattern recognition on the device itself.
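For what it’s worth, here’s a rough sketch of what I mean by the percentile idea, in Python/NumPy. The function name, the default 5/95 thresholds, and the input shapes are just my own illustration, not anything Wyze actually does:

```python
import numpy as np

def filter_extreme_motion(frame_gray, motion_mask, low_pct=5, high_pct=95):
    """Drop motion pixels whose brightness sits in the extreme tails of the
    frame's luminance distribution -- a crude proxy for shadows (very dark)
    and reflections/glare (very bright). Percentiles are user-adjustable."""
    lo = np.percentile(frame_gray, low_pct)
    hi = np.percentile(frame_gray, high_pct)
    keep = (frame_gray >= lo) & (frame_gray <= hi)
    return motion_mask & keep

# Tiny demo: an 8-bit "frame" with one glare pixel and one deep-shadow pixel.
frame = np.full((4, 4), 128, dtype=np.uint8)
frame[0, 0] = 255   # glare / reflection
frame[3, 3] = 0     # deep shadow
motion = np.ones((4, 4), dtype=bool)  # pretend every pixel triggered motion
filtered = filter_extreme_motion(frame, motion)
# The glare and shadow pixels get excluded; mid-brightness motion survives.
```

Obviously a real on-camera version would need to be far cheaper than computing percentiles per frame (a running histogram, say), but the filtering logic itself is only a few comparisons per pixel.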
In one of my projects I have a Nest camera pointed at railroad tracks, and I want motion detected when a train goes across them. You’d think it’d be easy to spot a giant train without a bunch of false positives. But at night, particularly after rain, the tracks reflect light from cars in the distance, and the reflections ‘move’ with the cars. This wouldn’t trip up PIR motion sensing, but video-based detection shits the bed. Similarly, in the early morning, cars cast shadows that extend onto the tracks and cause motion alerts. I can’t go on RR property. But I am very close to installing a pair of lasers and receivers on one side, and a mirror on the other, to calculate the direction and speed and do away with motion detection on the Nest.
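The beam-break math for that setup is trivially simple, which is part of the appeal. A sketch (the beam spacing, timestamps, and the “A is north” convention are all hypothetical, just for illustration):

```python
def train_from_beam_breaks(t_a, t_b, beam_spacing_m):
    """Given the timestamps (in seconds) at which beams A and B were first
    broken, and the spacing between the beams in meters, return the train's
    direction of travel and speed. 'A->B' vs 'B->A' is an arbitrary
    convention for which beam sits on which end of the run."""
    dt = t_b - t_a
    if dt == 0:
        raise ValueError("beams broken simultaneously; spacing too small?")
    direction = "A->B" if dt > 0 else "B->A"
    speed_mps = beam_spacing_m / abs(dt)
    return direction, speed_mps

# Train breaks beam A, then beam B half a second later, beams 10 m apart:
result = train_from_beam_breaks(0.0, 0.5, 10.0)
# -> ("A->B", 20.0)  i.e. 20 m/s, about 72 km/h
```

No ambient light, no shadows, no reflections: the only thing that can break both beams in sequence is something physically crossing them.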
A really thorough shadow-ignoring algorithm could take into account the camera’s GPS location (I doubt Wyze includes a GPS chip, so it would need to be recorded by the phone during setup), its angle, and the time of day to know along what axis to expect shadows to fall. Then it could be really certain that a dark object in the picture was a shadow and not a black cat.
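The sun-position part of that is well-trodden astronomy. A back-of-the-envelope sketch of the “which way do shadows point” calculation, using a deliberately crude solar model (good to a few degrees, which is plenty for picking an axis; the function and its arguments are my own illustration):

```python
import math

def shadow_azimuth_deg(lat_deg, day_of_year, solar_hour):
    """Approximate compass direction that shadows point (0 = north,
    90 = east), from latitude, day of year, and local *solar* time.
    Uses a simplified solar-position model, fine for axis prediction."""
    lat = math.radians(lat_deg)
    # Approximate solar declination for this day of year
    decl = math.radians(-23.44) * math.cos(
        math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    # Solar elevation
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    el = math.asin(sin_el)
    # Solar azimuth via the standard spherical-trig formula
    cos_az = ((math.sin(decl) - sin_el * math.sin(lat))
              / (math.cos(el) * math.cos(lat)))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:       # afternoon: sun is in the western half of the sky
        az = 360.0 - az
    # Shadows point directly away from the sun
    return (az + 180.0) % 360.0

noon_shadow = shadow_azimuth_deg(40.0, 172, 12.0)   # summer solstice, 40°N
morning_shadow = shadow_azimuth_deg(40.0, 172, 8.0)
```

At 40°N on the solstice at solar noon the sun is due south, so shadows point north (~0°); in the morning the sun is in the east, so shadows point roughly west. Combine that with the camera’s compass heading and you know which image axis moving dark blobs would have to align with to plausibly be shadows.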
Though dealing with shadows and reflections that change throughout the day could rightly be put off until the day comes when a version is actually intended for use outdoors… :-p