Abstract
This project develops advanced algorithms that transform raw 3D point clouds into structured geometric models enriched with semantic annotations. We systematically evaluate multiple methodological paradigms with respect to generalization across data sources and shape types, robustness to noise and irregular sampling, and scalability to large datasets. The project establishes the algorithmic and conceptual basis for subsequent developments within the CARE Data Model, fostering close collaboration across research areas. The investigated approaches span geometric primitive regression for local segmentation, generalized symmetry detection for identifying repeating structures, hierarchical clustering across multiple levels of detail, and data-driven techniques that learn hierarchical segmentations from ground-truth reference models. By integrating expertise in 3D image analysis, reconstruction, geometry processing, and data integration, the project aims to deliver robust, versatile, and scalable solutions for semantic 3D modeling.