Nº613/1300 - Cryptic Beauty Revealed: Quantifying Phenotypic Variation in Two Vicariant Ophrys aveyronensis Populations via Image Processing
Format: ORAL
Authors
Anais Gibert1, Florian Louty1, Roselyne Buccal2, Michel Baguette3,4, Bertrand Schatz5 and Joris A. M. Bertrand1
Affiliations
1 Laboratoire Génome et Développement des Plantes (LGDP), UMR5096, Université de Perpignan Via Domitia—CNRS, F-66860 Perpignan, France
2 Centre de Formation et de Recherche sur les Environnements Méditerranéens (CEFREM), UMR5110, Université de Perpignan Via Domitia—CNRS, F-66860 Perpignan, France
3 Institut de Systématique, Évolution, Biodiversité (ISYEB), UMR 7205, Muséum National d’Histoire Naturelle, CNRS, Sorbonne Université, EPHE, Université des Antilles, F-75005 Paris, France
4 Station d’Écologie Théorique et Expérimentale (SETE), UMR 5321, CNRS, Université Toulouse III, F-09200 Moulis, France
5 CEFE, CNRS, Université Montpellier, EPHE, IRD, F-34293 Montpellier, France
Abstract
Studying phenotypic differentiation is essential to understanding population divergence and the traits involved in the speciation process. Subtle phenotypic variation is difficult to detect at first glance, especially when species have a disjunct spatial distribution or conceal cryptic taxa. To address this, we combined image-based analyses with a simple machine learning algorithm to distinguish two vicariant population groups within an orchid species complex known to be difficult to differentiate using conventional morphological criteria: Ophrys aveyronensis.
Our approach involved three main steps: (i) photographing and measuring 109 individuals in their natural habitat, (ii) extracting morphometric, colour and colour-pattern information from the field photographs, and (iii) using random forest algorithms for classification, as sketched below. Combining field measurements with image-derived data yielded an identification accuracy of 95%, highlighting the effectiveness of our multidimensional approach.
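The abstract itself contains no code; the following is a minimal sketch of how image-derived colour descriptors might be combined with field morphometrics and classified with a random forest. The file names, column names (photo_path, plant_height_cm, labellum_length_mm, population_group) and the choice of colour features are hypothetical placeholders for illustration, not the authors' actual pipeline.

```python
# Minimal sketch (hypothetical data layout, not the authors' pipeline):
# extract simple colour descriptors from field photographs, join them with
# field morphometrics, and classify the two population groups.
import pandas as pd
from skimage import io, color
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def colour_features(image_path):
    """Mean and standard deviation of each CIELAB channel over the image."""
    rgb = io.imread(image_path)[..., :3] / 255.0   # drop alpha channel if present
    lab = color.rgb2lab(rgb)
    feats = {}
    for i, ch in enumerate(("L", "a", "b")):
        feats[f"{ch}_mean"] = lab[..., i].mean()
        feats[f"{ch}_sd"] = lab[..., i].std()
    return feats

# Hypothetical metadata table: one row per photographed individual,
# with field measurements and the population group label.
meta = pd.read_csv("individuals.csv")
image_feats = pd.DataFrame([colour_features(p) for p in meta["photo_path"]])
X = pd.concat([meta[["plant_height_cm", "labellum_length_mm"]], image_feats], axis=1)
y = meta["population_group"]

# Random forest classification with stratified cross-validation.
rf = RandomForestClassifier(n_estimators=500, random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(rf, X, y, cv=cv)
print(f"Cross-validated accuracy: {scores.mean():.2%}")
```

In practice the image-derived variables would come from a dedicated morphometric and colour-pattern extraction step rather than whole-image summaries, but the classification stage would follow the same pattern.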
Notably, the variables identified by the random forest algorithm as discriminating between the two population groups differed from those typically proposed in the literature. This finding underscores the need for approaches that go beyond traditional criteria in studies of phenotypic differentiation.
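One standard way to identify which variables drive such a classification is to rank the fitted forest's impurity-based feature importances. The short sketch below continues the hypothetical X, y and rf objects defined above and is an assumption about the workflow, not the authors' reported method.

```python
# Rank candidate discriminating traits by impurity-based importance.
# X, y and rf are the hypothetical objects from the sketch above.
import pandas as pd

rf.fit(X, y)
importances = (pd.Series(rf.feature_importances_, index=X.columns)
               .sort_values(ascending=False))
print(importances.head(10))  # top candidate traits for follow-up study
```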
Our results demonstrate that taxon identification can be improved by combining field-captured images with machine learning classification methods. They also highlight candidate traits for future eco-evolutionary investigations, offering valuable insights into the interplay of phenotypic variation within species complexes. This study not only contributes to improving taxonomic practice but also underscores the value of advanced analytical tools for understanding species differentiation and evolutionary mechanisms.