So say my app needs certain ML capabilities.
At scale, say 100 devices that each take a picture to be processed by ML every minute for a few hours a day, would it be wiser to use Apple's ML frameworks and do everything locally, sending only the output to the cloud? Or to send the image right off the bat and use Google's ML products (Cloud Vision) to do the processing?
I'm leaning local, partly so I can ping the user right away when there's a problem with the image being classified or whatever. Also, at scale it should be far cheaper to do it locally, since using Apple's ML is essentially "free" (no per-request API fees).
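To put a rough number on the "cheaper at scale" point, here's a back-of-envelope sketch. The hours-per-day figure and the per-1,000-image rate are assumptions for illustration (loosely modeled on Cloud Vision's tiered pricing), not quoted prices, so check Google's current price list before relying on this:

```python
# Back-of-envelope comparison: on-device vs. cloud image classification cost.
# All rates below are placeholder assumptions, not quoted pricing.

DEVICES = 100              # units taking pictures
IMAGES_PER_MINUTE = 1      # one picture per minute per device
HOURS_PER_DAY = 3          # "a few hours a day" (assumed)
DAYS_PER_MONTH = 30

# Hypothetical cloud rate: dollars per 1,000 feature requests,
# with the first 1,000 requests per month free.
COST_PER_1000 = 1.50
FREE_UNITS = 1000

images_per_month = DEVICES * IMAGES_PER_MINUTE * 60 * HOURS_PER_DAY * DAYS_PER_MONTH
billable = max(0, images_per_month - FREE_UNITS)
cloud_cost = billable / 1000 * COST_PER_1000

print(f"{images_per_month:,} images/month")                       # 540,000 images/month
print(f"cloud: ~${cloud_cost:,.2f}/month vs. $0 in API fees on-device")
```

Even with modest assumptions you land in the hundreds of dollars per month for cloud inference, versus zero marginal API cost on-device (you pay instead in battery, app size, and keeping models updated).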
Any thoughts? Perhaps some more pros and cons to look at? Let me know if you need clarification on anything; this was written quickly and poorly, so my apologies.
submitted by /u/vinetheme