Poly AR
AR authoring tool for browsing, downloading, and placing 3D models in augmented reality
Tech Stack
- Unity (C#)
- ARCore
- Google Poly API
- Android
Overview
Prototyping AR experiences typically requires pre-loading 3D assets into a project, rebuilding, and deploying — a slow cycle for exploring spatial layouts and visual ideas. Poly AR streamlined this by connecting to Google's Poly API (now deprecated) to let users search, download, and place 3D models directly into an AR scene at runtime using their Android phone. It functioned as a rapid AR authoring and prototyping tool.
Process & Approach
The application was built in Unity targeting ARCore-compatible Android devices. The core workflow: the user points their phone at a surface, the app detects the plane, and the user can then search Google Poly's library for 3D models, download them on the fly, and place them into the AR scene. Placed models can be repositioned, scaled, and rotated with touch gestures. The architecture separated the network/API layer from the AR rendering pipeline to keep the placement experience responsive even during model downloads.
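The separation described above follows a common producer/consumer pattern: downloads run on worker threads, while the render loop (Unity's main thread in the real app) only drains a thread-safe queue of finished models each frame. The following is a platform-neutral Python sketch of that pattern; all names and the simulated download are illustrative, not from the actual project.

```python
import queue
import threading
import time

completed = queue.Queue()  # finished downloads, drained on the "main thread"

def download_model(asset_id):
    """Simulate fetching a 3D model off the main thread."""
    time.sleep(0.01)  # stand-in for network latency
    completed.put((asset_id, f"mesh-data-for-{asset_id}"))

def render_frame(scene):
    """One 'frame': integrate any finished downloads without blocking."""
    while True:
        try:
            asset_id, mesh = completed.get_nowait()
        except queue.Empty:
            break
        scene[asset_id] = mesh  # spawn the model into the scene

scene = {}
workers = [threading.Thread(target=download_model, args=(aid,))
           for aid in ("duck", "chair")]
for w in workers:
    w.start()
for w in workers:
    w.join()
render_frame(scene)      # render loop picks up the results
print(sorted(scene))     # ['chair', 'duck']
```

Because the render loop never blocks on `get()`, a frame with no finished downloads costs almost nothing, which is what keeps placement responsive while models stream in.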
Key Features
- Runtime 3D model search and download via Google Poly API
- AR surface detection and model placement
- Touch-based manipulation: move, scale, rotate
- Asynchronous model loading with visual feedback
- Lightweight Android AR application
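To make the runtime search step concrete, here is a sketch of building a request against the now-deprecated Poly REST API's `assets.list` endpoint. The endpoint and parameter names follow the API as it was documented at the time; `API_KEY` is a placeholder, and the helper function is illustrative rather than code from the project.

```python
from urllib.parse import urlencode

POLY_ASSETS_URL = "https://poly.googleapis.com/v1/assets"

def build_search_url(keywords, api_key, fmt="GLTF2", max_complexity="MEDIUM"):
    """Build an assets.list URL for a keyword search, limited to one
    transfer format and a complexity cap so downloads stay AR-friendly."""
    params = {
        "keywords": keywords,
        "format": fmt,
        "maxComplexity": max_complexity,
        "key": api_key,
    }
    return f"{POLY_ASSETS_URL}?{urlencode(params)}"

url = build_search_url("duck", "API_KEY")
```

Filtering by `format` and `maxComplexity` at the search stage meant the app could favor assets likely to load quickly and render well on a phone.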
Technical Challenges
Handling arbitrary 3D models of varying complexity at runtime — without knowing polygon count, texture size, or material setup in advance — required defensive loading and automatic LOD generation. Network latency for model downloads had to be masked with appropriate loading UI to maintain the feeling of a responsive authoring tool.
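The "defensive loading" idea can be sketched as a gate that buckets each downloaded asset into an LOD strategy, or rejects it, before it is allowed into the scene. The tier names and triangle budgets below are invented for illustration; the real thresholds would be tuned per device.

```python
# (triangle budget, strategy), checked in ascending order
LOD_TIERS = [
    (10_000, "full"),        # render as-is
    (100_000, "decimated"),  # simplify the mesh before placement
    (500_000, "proxy"),      # show a placeholder bounding box
]

def classify_model(triangle_count):
    """Pick a loading strategy for a model of unknown complexity."""
    for budget, strategy in LOD_TIERS:
        if triangle_count <= budget:
            return strategy
    return "rejected"  # too heavy for a mobile AR session
```

Deciding the strategy before instantiation is the key point: the renderer never commits to an asset whose cost it has not yet measured.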
Impact & Learnings
Poly AR demonstrated that AR authoring could be made accessible without requiring development tools or 3D modeling expertise. While Google Poly's deprecation limited the tool's longevity, the architecture patterns for runtime asset loading in AR remain relevant to current AR content creation workflows.