Digital activity often means huge volumes of data, and that data has become the language of everything we do. To make it speak, we must collect it, format it, analyse it, consolidate it, store it, and disseminate it: so many operations grouped under the name Big Data.

Today, Hadoop is the leading Big Data platform. Used to store and process huge volumes of data, this software framework and its many components power the big data projects of a great number of companies. Hadoop is an open source framework for storing data and running applications on clusters of commodity machines. It offers massive storage for all types of data, enormous processing power, and the ability to handle a virtually unlimited number of concurrent tasks. Written in Java, the framework is an Apache project, sponsored by the Apache Software Foundation (a minimal MapReduce example is sketched below).

Our initial R&D work showed that three main families of big data architecture exist:

- Data Lake architecture
- Lambda architecture
- Kappa architecture

Each of these architectures answers specific business use cases and raises its own technical hurdles. At Soyhuce, we work extensively on data lake architectures. We face strong demand around real-time requirements, which we currently meet through query optimisation and over-provisioned infrastructure. In order to master new technologies, and to reduce the carbon footprint of our work, we want to study lambda architectures, which are well suited to this kind of problem (see the second sketch below).
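To make Hadoop's storage-and-processing model concrete, here is the classic word-count job, essentially the stock Apache example: a minimal sketch assuming a standard Hadoop installation with the MapReduce client libraries on the classpath, with input and output paths passed on the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts shuffled to each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Submitted with `hadoop jar`, the framework splits the input across the cluster, runs the mapper on each split in parallel, shuffles the intermediate pairs by key, and reduces them: this is the "immense processing power on commodity machines" described above.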
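And since the lambda architecture is what we intend to study, here is a deliberately simplified, library-free sketch of its query path. All names here (`LambdaQuery`, `onEvent`, `onBatchRecompute`) are hypothetical, not from any real system; a production deployment would pair a batch engine such as Hadoop with a streaming layer, but the core idea is the same: a periodically recomputed batch view, an incremental speed view, and a merge at read time.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LambdaQuery {

  // Batch view: recomputed from the full master dataset on a schedule
  // (e.g. a nightly MapReduce job); unchanged between recomputations.
  static final Map<String, Long> batchView = new HashMap<>();

  // Speed view: updated incrementally as events stream in; only holds
  // counts for events that arrived after the last batch run.
  static final Map<String, Long> speedView = new ConcurrentHashMap<>();

  // Speed layer: apply one incoming event immediately.
  static void onEvent(String key) {
    speedView.merge(key, 1L, Long::sum);
  }

  // Batch layer: replace the batch view and discard the speed view's
  // now-covered increments (a real system would swap views atomically).
  static void onBatchRecompute(Map<String, Long> freshBatchView) {
    batchView.clear();
    batchView.putAll(freshBatchView);
    speedView.clear();
  }

  // Serving layer: a query merges both views at read time, so results
  // are both complete (batch) and fresh (speed).
  static long query(String key) {
    return batchView.getOrDefault(key, 0L) + speedView.getOrDefault(key, 0L);
  }

  public static void main(String[] args) {
    onBatchRecompute(Map.of("page:/home", 100L));
    onEvent("page:/home"); // real-time event not yet in the batch view
    System.out.println(query("page:/home")); // prints 101
  }
}
```

The appeal for the real-time problems mentioned above is that freshness comes from the small speed view rather than from over-provisioned infrastructure: the expensive, exact computation runs in batch, while the speed layer only has to keep up with recent events.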