Data integration and routing is a constantly evolving problem, fraught with edge cases and complicated requirements. The Apache NiFi project models this problem as a collection of data flows that are created through a self-service graphical interface. This framework provides a flexible platform for building a wide variety of integrations that can be managed and scaled easily to fit your particular needs. In this episode project members Kevin Doran and Andy LoPresto discuss the ways that NiFi can be used, how to start using it in your environment, and plans for future development. They also explain how it fits in the broad landscape of data tools, the interesting and challenging aspects of the project, and how to build new extensions.
DataKitchen offers the first end-to-end DataOps Platform that empowers teams to reclaim control of their data pipelines and deliver business value instantly, without errors. The platform automates and coordinates all the people, tools, and environments in your entire data analytics organization – everything from orchestration, testing and monitoring to development and deployment. It’s DataOps Delivered.
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $60 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. After that, learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Your host is Tobias Macey and today I’m interviewing Kevin Doran and Andy LoPresto about Apache NiFi
- How did you get involved in the area of data management?
- Can you start by explaining what NiFi is?
- What is the motivation for building a GUI as the primary interface for the tool when the current trend is to represent everything as code?
- How did you get involved with the project?
- Where does it sit in the broader landscape of data tools?
- Does the data that is processed by NiFi flow through the servers that it is running on (à la Spark/Flink/Kafka), or does it orchestrate actions on other systems (à la Airflow/Oozie)?
- How do you manage versioning and backup of data flows, as well as promoting them between environments?
- One of the advertised features is tracking provenance for data flows that are managed by NiFi. How is that data collected and managed?
- What types of reporting are available across this information?
- What are some of the use cases or requirements that lend themselves well to being solved by NiFi?
- When is NiFi the wrong choice?
- What is involved in deploying and scaling a NiFi installation?
- What are some of the system/network parameters that should be considered?
- What are the scaling limitations?
- What have you found to be some of the most interesting, unexpected, and/or challenging aspects of building and maintaining the NiFi project and community?
- What do you have planned for the future of NiFi?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Hortonworks DataFlow
- Apache Software Foundation
- Internet Scale
- Asset Management
- NSA (National Security Agency)
- 24 (TV Show)
- Technology Transfer Program
- Agile Software Development
- ETL (Extract, Transform, and Load)
- ESB (Enterprise Service Bus)
- Apache Atlas
- Data Governance
- K-Nearest Neighbors
- DSL (Domain Specific Language)
- NiFi Registry
- Artifact Repository
- NiFi CLI
- Maven Archetype
- NiFi Wiki
- TLS (Transport Layer Security)
- Mozilla TLS Observatory
- NiFi Flow Design System
- Data Lineage
- GDPR (General Data Protection Regulation)