This may overlap some other related topic areas, but we all know that all too often we are running “blind” during the 4-minute cooldown between trigger events, part of Wyze’s strategy to minimize cloud spam and push-notification floods. For security situational awareness, though, it would be very useful to assess event context at any time, especially in those blind periods when we get no push processing. And oftentimes the events that are pushed don’t start on crisp video boundaries relative to the actual trigger; the motion typically begins a few seconds before the video capture/copy.
So, how about giving us an SD “analysis mode”: we enter a specific date/time (date:hh:mm:ss), the cam runs its motion-detection algorithm over the SD footage (faster than real time), and it generates local events to SD memory so we can review those video clips (perhaps capped at 5-minute segments).

In keeping with the concept design already in place, it would also be a consistent design principle to simply put a “link” to the SD card’s date/time tag in the event notice itself. Tapping it would jump directly to that time in SD card playback (plus or minus a user-optional offset, e.g. event time minus 5 seconds) and bring up the video directly, rather than using that graphical slider bar, which is at best Kentucky-windage over/under swipe-and-hunt to get close to what we want to see for events many hours earlier in the day.

That GUI time slider generates a lot of unneeded user overhead just to set a simple date/time offset, and it takes forever to sync up with the actual video’s time watermark in the view screen. Worse, the more fat-finger swipes we make to home in on the desired time, the more queued-up video seek requests are transmitted to the server and cam, confounding the display sync and piling on network overhead; that often crashes the app.
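To make the proposal concrete, here is a minimal sketch of the two ideas above: grouping motion detections into local events capped at 5-minute clips, and building a playback link pre-rolled by a user-set offset. All names here (`scan_for_events`, `playback_link`, the `wyze://` scheme) are made up for illustration; the motion scores stand in for whatever the cam's real detector produces while scanning SD footage faster than real time.

```python
from dataclasses import dataclass

MAX_SEGMENT_SECS = 5 * 60  # the 5-minute cap per generated clip proposed above

@dataclass
class Event:
    start: float  # seconds from the start of the scan window
    end: float

def scan_for_events(samples, threshold=0.5, merge_gap=4.0):
    """Group (timestamp, motion_score) samples into local events.

    `samples` stands in for frames decoded from the SD card; a real
    analysis mode would run the cam's own motion detector over the
    footage.  Detections closer together than `merge_gap` seconds
    merge into a single event.
    """
    events, cur = [], None
    for t, score in samples:
        if score < threshold:
            continue
        if cur is not None and t - cur.end <= merge_gap:
            cur.end = t            # extend the current event
        else:
            if cur is not None:
                events.append(cur)
            cur = Event(t, t)      # start a new event
    if cur is not None:
        events.append(cur)
    # Enforce the 5-minute cap by splitting over-long events into clips.
    clips = []
    for ev in events:
        s = ev.start
        while ev.end - s > MAX_SEGMENT_SECS:
            clips.append(Event(s, s + MAX_SEGMENT_SECS))
            s += MAX_SEGMENT_SECS
        clips.append(Event(s, ev.end))
    return clips

def playback_link(event_start, offset_secs=5.0):
    """Deep link into SD playback, pre-rolled by the user's offset.
    The `wyze://` scheme is hypothetical, purely for illustration."""
    return f"wyze://sd-playback?t={event_start - offset_secs:.0f}"
```

The point of the link helper is that the event notice would carry an absolute time target, so the app seeks once, directly, instead of issuing a queue of slider-driven seek requests.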