Arnauld Lamorlette has a diverse work history spanning several companies and roles. They started their career at PDI/DreamWorks in 2001, serving as Head of FX for Shrek 2 and Shrek 3 and later as a Visual FX Supervisor. In 2007, they became CTO and CEO of The Bakery, before working as a 3D production consultant at Busy Dragon and as a Consulting Senior TD at Weta Digital. In 2012, they returned to DreamWorks Animation as a Visual FX Supervisor and later resumed consulting for Busy Dragon, then served again as a Consulting Senior TD at Weta Digital from 2013 to 2014. In 2015, they worked as a Visual FX Supervisor at Original Force, and in 2016 they became a Visiting Professor in the 3D film industry. Since 2019, Arnauld has been Director of Development at Dwarf Animation Studio, and in 2020 they assumed the role of Chief Technology Officer at AI Verse, where they lead the development of new technologies for training deep learning networks.
Arnauld Lamorlette attended the École spéciale des Travaux publics, du Bâtiment et de l'Industrie from 1986 to 1989, pursuing an engineering degree with a specialization in Mechanical & Electrical.
AI Verse
AI Verse offers a self-service image factory that produces high-quality annotated synthetic datasets for the needs of computer vision engineers. An entirely novel process enables the user to describe their ideal dataset and launch its generation on AI Verse's render farm, a scalable cloud-hosted cluster of GPU machines, each able to procedurally build a 3D scene and render photorealistic images in a few seconds.

The dataset builder offers simple but powerful inputs for scene description, lighting, and camera placement, emulating the work of a 3D artist in just a few clicks. The user specifies a desired type of environment (e.g. living room, bedroom, office) and chooses object classes of interest from a catalog of 1,000+ assets in ongoing expansion. These inputs are enough to procedurally generate any desired number of 3D scenes, respecting user-defined constraints while offering the variation in appearance and content necessary for AI robustness on tasks such as object detection and semantic segmentation. The user can also specify activities to be performed by human agents in the scenes, ranging from simple postures to leisure or work-related activities (e.g. typing on a keyboard, watching TV).

A wide range of lighting scenarios can be applied in order to simulate varying weather and times of day. Camera placement is also randomized according to simple, effective constraints that can be tailored to the engineer's use case. Sensor parameters (e.g. lens parameters, depth of field, exposure) can be adjusted as well.

The process of setting and adjusting the scene and image parameters is made interactive and engaging through the use of live previews, allowing the user to visualize within a few seconds an image rendered on the fly on one of our GPU machines from the current input parameters. Availability of GPU machines for previews is guaranteed through a session-booking system.

Once satisfied with their inputs, the user can launch the generation of any desired number of 3D scenes and images captured from those scenes, along with their choice of automatically generated labels (e.g. object 2D/3D boxes, instance masks, depth images). Parallelized rendering on the cloud enables delivery of thousands of images in just a couple of hours. The dataset management UI lets the engineer track progress on any of their dataset orders and visualize a sample of images from a given dataset as soon as it is available. Once the dataset is complete, the user can download it from the cloud by generating an expiring link whenever needed, which enhances data privacy. The downloadable dataset is delivered in a format that makes it ready for AI training and easy to combine with other datasets.
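To make the workflow above concrete, here is a minimal sketch of what a dataset specification could look like in code. Every name in it (DatasetSpec, CameraConstraints, SensorParams, and all field names) is an illustrative assumption, not AI Verse's actual API; the sketch simply mirrors the inputs described above: environment type, object classes, human activities, lighting scenarios, camera constraints, sensor parameters, label types, and dataset size.

    # Minimal sketch of a hypothetical client-side dataset specification.
    # All class and field names are illustrative assumptions, not AI Verse's
    # actual API.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class CameraConstraints:
        # Randomization ranges for camera placement, as (min, max).
        height_m: Tuple[float, float] = (1.0, 2.0)
        pitch_deg: Tuple[float, float] = (-15.0, 15.0)

    @dataclass
    class SensorParams:
        # Adjustable sensor parameters mentioned in the description.
        focal_length_mm: float = 24.0
        depth_of_field: bool = True
        exposure_ev: float = 0.0

    @dataclass
    class DatasetSpec:
        environment: str = "living_room"  # e.g. living_room, bedroom, office
        object_classes: List[str] = field(
            default_factory=lambda: ["person", "chair", "tv"])
        activities: List[str] = field(
            default_factory=lambda: ["typing_on_keyboard", "watching_tv"])
        lighting: List[str] = field(
            default_factory=lambda: ["daylight", "dusk", "artificial"])
        camera: CameraConstraints = field(default_factory=CameraConstraints)
        sensor: SensorParams = field(default_factory=SensorParams)
        labels: List[str] = field(
            default_factory=lambda: ["bbox_2d", "bbox_3d", "instance_mask", "depth"])
        num_images: int = 5000

    spec = DatasetSpec()
    # In a real workflow, this spec would be serialized and submitted to the
    # render farm; the resulting order would then be tracked in the dataset
    # management UI and downloaded via an expiring link once complete.

On the consumption side, a dataset "ready for AI training and easy to combine with other datasets" suggests a standard annotation layout. Assuming a COCO-style JSON file, which is one common convention (the actual delivery format is not specified in this description), loading the downloaded dataset could be as simple as:

    # Illustrative only: assumes COCO-style annotations, a common interchange
    # convention; AI Verse's actual file layout is not documented here.
    import json

    with open("dataset/annotations.json") as f:
        coco = json.load(f)

    print(f"{len(coco['images'])} images, {len(coco['annotations'])} annotations")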