I’m sorry, maybe I’m being a bit confusing. I want to be a regular beta tester, not the AI thingy. Does that make sense? I signed up to test the AI, but now I can’t get it off my production app. It keeps re-enabling all the AI features even after I go in and disable them. The regular beta testing (forget the AI for a moment) resolved the stupid missing-fragment and failed-to-upload errors. Even Shawn Niu (developer of the AI and beta) mentioned that the only way to get rid of those errors is to use the beta version. But, stupid as I am, I wanted to try the AI, which is where the trouble started, because I had to go back to the public release of the app and cam firmware.
Vehicle detection seems to be working on my end. But about 90% of the time when it says vehicle, it also says person.
Probably a big ask, but I think a useful tool would be to tag whatever motion was picked up by the AI with different colors. Keep green as the general motion, but overlay a red box to indicate what the AI determined was a person, blue for cars, etc. As of right now, I have no idea what is causing the false person alerts. Maybe it’s a tree that looks like a person (to the AI anyway), and I could fine-tune the detection zone.
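Just to illustrate the idea, here is a minimal sketch of the kind of label-to-color mapping I mean. The class names, colors, and detection format are all made up for the example; this is not Wyze’s actual API.

```python
# Hypothetical per-class overlay colors for AI detections.
# "motion" stays green (the current behavior); AI classes get their own colors.
OVERLAY_COLORS = {
    "motion": "green",
    "person": "red",
    "vehicle": "blue",
    "pet": "yellow",
}

def box_color(label: str) -> str:
    """Pick an overlay color for a detection label, falling back to generic green."""
    return OVERLAY_COLORS.get(label, OVERLAY_COLORS["motion"])

# Example: assign a color to each (label, bounding box) pair in an event clip.
detections = [("person", (120, 40, 60, 180)), ("motion", (0, 0, 320, 240))]
colored = [(label, box, box_color(label)) for label, box in detections]
```

With something like this in the event viewer, a false person alert would show a red box drawn around whatever the AI actually latched onto, so you could tell at a glance whether it was a tree, a shadow, or something else.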
And before anyone says it, I know vehicles have people in them lol
So I ran a WiFi analyzer, and this is what I found. Where the cam is situated, I should not be having an issue. Ping is 17 ms, download is 442.01 Mbps (99.0% stability, 585 MB transmitted), and upload is 20.59 Mbps (93.6% stability, 16 MB transmitted). So why is there always this issue? And again, it only started once I stopped using the beta app and firmware and went back to the production app and firmware. Now the cams are totally bricked no matter what I try.
I wonder if the new app released today is supposed to work with the AI features.
I am still on the beta app until the new update arrives, hopefully next week for beta. The new AI is working well on this update; it actually loads now on the Faces tab, and it’s integrated better in the Events tab too.
Maybe the new production app pushed out today will help fix your issues
Issues persist in the production app. (I’ve already noted it in that thread and sent a new log.)
EDIT: my issues, anyway. No idea why the forum notified me when you replied to someone else…
I have it all disabled. I’m through with it. It bricked 2 of my cams and I can’t have it brick any more. Seems nothing is being done about it. This time, if I disable Cam Plus, that will be it. There will be no going back to that service. It’s messed up my cams so badly that the Events tab is useless.
Apple has yet to release the updated app. Typical for Apple, though.
So I have had this working for over a week now, and the facial recognition works great, but I think they should make it like Nest and Eufy, where they zoom in on the face in the snapshot. #wishlist
Does anyone know if, and it’s a long shot I know, they will allow user training for object ID? (Training the models, not the users.)
Specifically, the ability to improve the Face ID ability?
I have two RPis running variants of AI/ML image recognition based on Google’s Coral project, and the ability to tune the model is key to effective and accurate identification.
Just bought 5 Cam Plus subscriptions today with the Black Friday deal, and then applied to be added to this (though it said my official licenses may take up to a week to be fully active, my Cam Plus trial is running in the meantime). Really hoping they’ll let me in. I am really excited about this, especially once it works with my Video Doorbell!
I hope Wyze will still let me in now that I bought 5 licenses for it! I know I’m running a bit late on applying for it, but I’d certainly love contributing to this stuff and submitting videos to help you train it.
Got added to this beta. It seems there are too many tagging buttons?
Also, I’ve yet to have it detect a vehicle, despite a few of my cams being pointed at the street.
Are you running the production app or a beta app?
Beta, but it was my understanding this was running separately from that.
We were instructed to use the production app for testing the AI.
Aside from that, beta apps normally do have bugs, so there’s a good chance the beta app has the issue.
I don’t have duplicate tags in the production app, and I also run Android.
I don’t currently have any beta apps running on any devices so I can’t check that.
It might be best to post the issue in the beta section
Thanks, I’ll run it up the chain that way. At least we’ve narrowed down where it’s happening.
There’s a post somewhere (I think Facebook) where they said they had added the AI testing into the beta app and it was OK to use that. I actually didn’t sign up for this beta until that point, because I didn’t want to try to manage different software on different devices.
That might be the case; I didn’t see that.
Is there a non-Facebook (I don’t have Facebook) explanation or instructions anywhere for how to use the beta testing tags to help you out with this? (I don’t mean how to enroll in it, I’m already enrolled.) For example, with this video:
It tagged it as a person, but there is no person; it is my black cat going through his kitty door. So which of the following is the appropriate submission:
- Tag it as “pet” and submit it to tell it there was just a pet…and the AI will presumably figure out it shouldn’t have marked it as a person.
- Tag it as Missing Detection and Pet (to tell it that it missed marking it as a pet)
- Tag it as False Detection and Person (to say that it falsely identified it as a person)
- Tag it as both False detection (it said person when it shouldn’t) and Missing detection (it didn’t say pet, and it should have), and then mark both Person (the false detection) and Pet (the missing detection) and let the AI figure out which is which…or will that make the AI think it should’ve marked it as both Person and as Pet and screw it all up?
- Do I just choose one or the other, but not both (pick whether to mark it Missing Detection and Pet OR False Detection and Person)? I can’t submit one and then the other, since it only allows one submission, but if I try to enter both the false and the missing at the same time, it could confuse the AI.
- Also, what should I do about packages? Do I mark it as Package, or as Package and Person and Face? What if the package was delivered an hour ago and a new event shows the package still on my porch? Do I still mark it as Package?
There are a lot of other potentially acceptable-seeming variations on this that would make sense, all depending on how Wyze intends to use the feedback… especially when there are multiple variations in play (a person delivers a package, may or may not show his face, may or may not have my pet in the video, and may or may not have vehicles in the street in the background).
I am very happy to help with all this and submitting my videos to improve the AI, I would just like clarification on how I can best help Wyze, rather than randomly deciding on one of the above actions myself which might be the opposite of “helping” to improve things if it turns out the opposite was wanted and planned for.
I didn’t see any other thread on this topic, so it seemed most appropriate to ask here (it’s fine if Wyze feels it would be better in another section or its own beta thread or something). It just seemed okay here since Wyze officially posted about it here in News and didn’t give any instructions on how they’d like us to help.
I look forward to an official answer and I will provide a lot of help with this to train your AI the way you want it trained.