Hey folks,
For those of you interested in machine learning with TensorFlow, or in applications at the intersection of Augmented Reality and Artificial Intelligence, I’ll be giving a talk next Tuesday 1/29 at 7:30 PM in the Lecture Hall. I’ll demonstrate native Android code that uses TensorFlow Lite (running a stock Inception V3 model) to detect objects in a scene. When we see an object whose class/category we want to react to, we’ll use Sceneform (a nice abstraction layer on top of ARCore, so you don’t need to know OpenGL) to attach a 3D model to a designated point in space nearby in the Augmented Reality view. The general idea: you can attach AR objects to any detected item(s).
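To give a taste of the glue between the two halves, here’s a minimal sketch of the decision step: TFLite’s Inception V3 emits one probability per class, and we only react (i.e., attach an AR object) when the top label clears a confidence threshold. This is illustrative only, with hypothetical names, not the actual code from the talk:

```java
import java.util.List;

public class TopDetection {
    // Hypothetical helper: returns the top-scoring label if its probability
    // clears the threshold, otherwise null (attach nothing this frame).
    static String topLabel(float[] probabilities, List<String> labels, float threshold) {
        if (probabilities.length != labels.size()) {
            throw new IllegalArgumentException("one score per label expected");
        }
        int best = 0;
        for (int i = 1; i < probabilities.length; i++) {
            if (probabilities[i] > probabilities[best]) best = i;
        }
        return probabilities[best] >= threshold ? labels.get(best) : null;
    }

    public static void main(String[] args) {
        List<String> labels = List.of("cup", "keyboard", "plant");
        float[] scores = {0.05f, 0.85f, 0.10f};
        System.out.println(topLabel(scores, labels, 0.6f)); // keyboard
        System.out.println(topLabel(scores, labels, 0.9f)); // null
    }
}
```

In the real app, a non-null result is where you’d create a Sceneform anchor near the detected item; thresholding keeps low-confidence frames from producing jittery AR placements.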
I’ll also walk through how to put the code together, cover common gotchas/pitfalls you might encounter along the way, and touch on some special cases of items you can detect that involve slightly different APIs/models and enable even more features (such as face detection and finding facial features). This is my “dry run” of the talk before I give it at Windy City DevFest in Chicago next Friday, plus an abridged version at DevFest DFW on 2/16.
Come join me and see how easy it is to get started building such an app. You might be surprised! (It’s actually a lot less code than Google’s own TFLite example without AR.)