Sorry, I shouldn’t have worded it so softly. That’s the way it works. I think that info either came out in the introductory video or in my conversations with employees, so I don’t have anything really solid to point at.
There is one sentence in the “Person Detection with Edge AI” document that at least implies it, though: “Wyze’s Edge AI is powered by Xnor.ai, and utilizes a machine-learning algorithm to determine whether frames within a motion event video clip contain a person.” So it refers to frames, not the detection zone.
However, it is easy to test:
Set a Motion Detection Zone that is small or low. (Using a tablet gives you greater control for setting a small zone.) Then walk through it to trigger motion detection. By placing the zone, you choose which portion of your body crosses it. It’s doubtful the AI could identify a person from just a portion of a body, so if the event is still tagged as a person, that should indicate it is looking at the whole frame.
Obviously, the standard person detection rules would still apply – camera not mounted too low, person within 2–20 feet, etc.