"Waterfall" (shudder).
Before 'big data', projects using the waterfall methodology weren't always that bad; it depends on the project. For a small project, sure, waterfall can work. For monoliths, not so much. Modern data platform projects are intended to handle a wide variety of data sources and workloads, which means a correspondingly wide array of analysis, planning, development, testing, and delivery. Often that is simply too much to plan up front in a waterfall model.
A modern data platform like a Lakehouse lends itself more to the Agile approach. Modern architecture uses many different products requiring a lot of different skills and knowledge; it's almost a microservice approach. Tasks can be split by product or skill area. For example, data engineers handle Data Factory, Databricks, event streaming, and database work; DevOps engineers sort out the CI/CD that delivers it all; and data analysts look after identifying data sources and building the reports and dashboards for end users.
Source control, along with DevOps CI/CD processes using Azure Pipelines, is a big enabler for Agile projects: it tackles the challenges of concurrent development and removes time-consuming manual testing and deployment. Automation has really moved Agile to the next level. A larger initial investment to set up infrastructure as code, builds, unit tests, and approval release gates pays dividends as each sprint goes on.
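To make that shape concrete, here is a minimal Azure Pipelines sketch: a build stage that runs unit tests, followed by a deployment stage whose approval gate is configured on an environment. This is an illustrative example only; the repo layout, the 'dev' environment name, and the test command are assumptions, not details from a specific project.

```yaml
# Minimal build -> test -> gated deploy sketch (assumed names throughout).
trigger:
  branches:
    include:
      - main

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        pool:
          vmImage: ubuntu-latest
        steps:
          # Install dependencies and run the unit tests (assumed Python repo layout).
          - script: |
              pip install -r requirements.txt
              pytest tests --junitxml=test-results.xml
            displayName: Run unit tests
          # Publish results so failures are visible in the pipeline run.
          - task: PublishTestResults@2
            inputs:
              testResultsFiles: test-results.xml

  - stage: DeployDev
    dependsOn: Build
    jobs:
      - deployment: DeployToDev
        # Approvals and checks (the "release gates") are configured on this environment.
        environment: dev
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploy infrastructure-as-code templates and build artifacts here"
                  displayName: Deploy to dev
```

Because the approval sits on the environment rather than in the YAML, the same pipeline definition can be promoted through dev, test, and production with different gates on each.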
The Agile methodology tends to be the norm: two-week sprints with planning and retrospective ceremonies, plus daily stand-ups. It's the default mode when starting projects. Pragmatic leads for planning and development, combined with a robust CI/CD process from the start, make a strong foundation for a data project to succeed.