Our Core Research Areas

Our research spans the full stack of AI-powered 3D generation, from foundational representations to production-ready output.

3D Representation Learning

Developing novel ways to encode 3D geometry, topology, and material properties into formats that neural networks can efficiently learn from and generate.

Generative Architecture

Designing model architectures specifically optimized for 3D output, combining insights from diffusion models, transformers, and geometric deep learning.

Quality & Fidelity

Targeting the key quality gaps in current AI 3D generation: clean topology, proper UV mapping, and physically based materials — the requirements for professional use.

How We Build

Data-First Methodology

The quality of any AI model depends on the quality and structure of its training data. We have invested years in understanding how to curate, process, and represent 3D data in ways that maximize learning efficiency.

  • Custom 3D data processing pipelines
  • Multi-format mesh normalization
  • Topology-aware data augmentation
  • Scalable dataset architecture
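As a concrete illustration of one step in such a pipeline, here is a minimal sketch of mesh normalization: centering a mesh at the origin and scaling it to fit a unit cube, so meshes imported from different formats and unit conventions share a common coordinate range. The function name and interface are hypothetical, not a description of our actual pipeline.

```python
import numpy as np

def normalize_mesh(vertices: np.ndarray) -> np.ndarray:
    """Center vertices at the origin and uniformly scale so the
    longest bounding-box side equals 1 (a unit-cube fit).

    vertices: (N, 3) array of vertex positions.
    """
    # Center on the bounding-box midpoint, not the vertex mean,
    # so the result is independent of vertex density.
    center = (vertices.min(axis=0) + vertices.max(axis=0)) / 2.0
    centered = vertices - center
    extent = centered.max(axis=0) - centered.min(axis=0)
    scale = extent.max()
    if scale == 0:  # degenerate mesh: all vertices coincide
        return centered
    return centered / scale

# Example: an axis-aligned 2 x 4 x 1 box.
verts = np.array([[0, 0, 0], [2, 0, 0], [0, 4, 0],
                  [0, 0, 1], [2, 4, 1]], dtype=float)
norm = normalize_mesh(verts)
```

Uniform (rather than per-axis) scaling preserves the mesh's proportions, which matters when shape aspect ratio is a learnable signal.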

Architecture Innovation

Standard image-generation architectures do not directly translate to 3D. We are exploring neural network architectures that natively operate on three-dimensional space, building on research in graph neural networks, geometric transformers, and 3D-aware diffusion models.

  • Geometry-aware attention mechanisms
  • Hierarchical mesh generation
  • Multi-resolution output control
  • Conditional generation from text, images, or sketches
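To make "geometry-aware attention" concrete, one common idea in the geometric deep learning literature is to bias attention logits by pairwise spatial distance, so nearby points attend to each other more strongly. The sketch below is a simplified single-head illustration of that idea in NumPy; the function and its parameters are illustrative assumptions, not our production architecture.

```python
import numpy as np

def distance_biased_attention(feats: np.ndarray,
                              coords: np.ndarray,
                              tau: float = 1.0) -> np.ndarray:
    """Single-head self-attention over a point set where attention
    logits are penalized by Euclidean distance between points.

    feats:  (N, D) per-point feature vectors (used as both Q and K here).
    coords: (N, 3) per-point positions in 3D space.
    tau:    distance temperature; larger tau weakens the spatial bias.
    """
    d = feats.shape[-1]
    logits = feats @ feats.T / np.sqrt(d)            # (N, N) feature similarity
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    logits = logits - dist / tau                     # geometric bias term
    logits -= logits.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)   # rows sum to 1
    return weights @ feats

rng = np.random.default_rng(0)
out = distance_biased_attention(rng.normal(size=(5, 8)),
                                rng.normal(size=(5, 3)))
```

Because the bias depends only on relative distances, the mechanism is invariant to global translation of the point set — a property plain image-style attention lacks.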

Production Integration

We design our models with the end user in mind. Our goal is to minimize or eliminate the manual cleanup typically required before AI-generated assets can enter production workflows.

  • Industry-standard format output (FBX, OBJ, glTF)
  • Game-engine-ready asset generation
  • LOD-aware output for performance optimization
  • PBR material generation
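As a small example of what "industry-standard format output" involves at the simplest end, here is a sketch of writing a triangle mesh to Wavefront OBJ text, one of the formats listed above. This is a generic illustration of the format, not our exporter; note that OBJ indexes faces from 1, a common source of off-by-one bugs.

```python
import os
import tempfile

def write_obj(path, vertices, faces):
    """Write a triangle mesh as Wavefront OBJ text.

    vertices: iterable of (x, y, z) positions.
    faces:    iterable of 0-based vertex-index triples; OBJ itself is
              1-indexed, so indices are offset on write.
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# Example: a single triangle in the XY plane.
path = os.path.join(tempfile.gettempdir(), "triangle.obj")
write_obj(path, [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Real exporters also carry UVs, normals, and material references (and binary formats like FBX or glTF's binary flavor need dedicated libraries), but the vertex-then-face structure above is the core of the format.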

Research Directions

The challenges we are tackling sit at the intersection of computer graphics, machine learning, and computational geometry.

Text-to-3D

Generating 3D models from natural language descriptions. The open challenge: producing not just approximate shapes, but assets with clean topology and usable materials.

Image-to-3D

Reconstructing full 3D models from a single image or a few images, inferring depth, occluded geometry, and material properties from visual cues.

3D-to-3D Refinement

AI-powered tools that can take rough 3D sketches or low-quality models and refine them into production-ready assets automatically.

Interested in Our Research?

We welcome conversations with researchers, investors, and potential partners interested in the future of AI-driven 3D content.