The first stage of every good pipeline is data integration. With the increasing pace of change and the need for up-to-date analytics, the demand to integrate that data in near real time is growing. Thanks to the expanding variety of streaming data engines and improved tools for change data capture, it is now possible for data teams to make that goal a reality. Despite all of the tools and managed distributions of those streaming engines, however, it is still a challenge to build a robust and reliable pipeline for streaming data integration, especially if you need to expose those capabilities to non-engineers. In this episode Ido Friedman, CTO of Equalum, explains how they have built a no-code platform to make integration of streaming data and change data capture feeds easier to manage. He discusses the challenges inherent in the current state of CDC technologies, how they have architected their system to integrate well with existing data platforms, and how to build an appropriate level of abstraction for such a complex problem domain. If you are struggling with streaming data integration and change data capture, then this interview is definitely worth a listen.
Managing data in your warehouse today is hard. Often you’ll find yourself relying on manual work and hacks to get something done. Data knowledge is fragmented across the team and the data is often unreliable. So you hire a data engineer, and they end up spending all their time managing this custom infrastructure when what they really want to do is write code.
Dataform enables data teams to own the end-to-end data transformation process by giving them a collaborative web-based platform where they can develop SQL code with real-time error checking, run schedules, write data quality tests, and use a data catalog to document their data.
This enables analysts to publish tables and maintain complex SQL workflows without requiring the help of engineers, and lets data engineers focus their time on transformation code instead of having to maintain custom infrastructure.
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $100 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
Businesses are increasingly faced with the challenge of satisfying several, often conflicting, demands regarding sensitive data. From sharing and using sensitive data to complying with regulations and navigating new cloud-based platforms, Immuta helps solve these needs and more.
With automated, scalable data access and privacy controls, and enhanced collaboration between data and compliance teams, Immuta empowers data teams to easily access the data they need, when they need it – all while protecting sensitive data and ensuring their customers’ privacy. Immuta integrates with leading technology and solutions providers so you can govern your data on your desired analytic system.
Start a free trial of Immuta to see the power of automated data governance for yourself.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
- Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
- Your host is Tobias Macey and today I’m interviewing Ido Friedman about Equalum, a no-code platform for streaming data integration
- How did you get involved in the area of data management?
- Can you start by giving an overview of what you are building at Equalum and how it got started?
- There are a number of projects and platforms on the market that target data integration. Can you give some context of how Equalum fits in that market and the differentiating factors that engineers should consider?
- What components of the data ecosystem might Equalum replace, and which are you designed to integrate with?
- Can you walk through the workflow for someone who is using Equalum for a simple data integration use case?
- What options are available for doing in-flight transformations of data or creating customized routing rules?
- How do you handle versioning and staged rollouts of changes to pipelines?
- How is the Equalum platform implemented?
- How has the design and architecture of Equalum evolved since it was first created?
- What have you found to be the most complex or challenging aspects of building the platform?
- Change data capture is a growing area of interest, but it is notoriously difficult to implement well. How do you handle support for the variety of different sources that customers are working with?
- What are the edge cases that you typically run into when working with changes in databases?
- How do you approach the user experience of the platform given its focus as a low code/no code system?
- What options exist for sophisticated users to create custom operations?
- How much of the underlying concerns do you surface to end users, and how much are you able to hide?
- What is the process for a customer to integrate Equalum into their existing infrastructure and data systems?
- What are some of the most interesting, unexpected, or innovative ways that you have seen Equalum used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while building and growing the Equalum platform?
- When is Equalum the wrong choice?
- What do you have planned for the future of Equalum?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email firstname.lastname@example.org with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Change Data Capture
- SQL Server
- DBA == Database Administrator
- OBLP == Oracle Binary Log Parser
- Jupyter Notebooks
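For listeners new to the topic, the change data capture pattern referenced in the links above can be illustrated with a minimal sketch. The event shape and the `apply_change` helper here are hypothetical illustrations, not Equalum's actual API: a CDC feed emits insert/update/delete events parsed from a database's transaction log, and a consumer applies them in commit order to keep a downstream replica in sync.

```python
# Minimal illustration of the change data capture (CDC) pattern:
# a stream of change events, as a transaction-log parser might emit
# them, is applied in order to keep a downstream replica in sync.

replica = {}  # downstream copy of the table, keyed by primary key

def apply_change(event):
    """Apply a single CDC event to the replica."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["row"]
    elif op == "delete":
        replica.pop(key, None)

# Events in commit order, as captured from the source's transaction log
events = [
    {"op": "insert", "key": 1, "row": {"name": "alice", "plan": "free"}},
    {"op": "update", "key": 1, "row": {"name": "alice", "plan": "pro"}},
    {"op": "insert", "key": 2, "row": {"name": "bob", "plan": "free"}},
    {"op": "delete", "key": 2},
]

for event in events:
    apply_change(event)

print(replica)  # {1: {'name': 'alice', 'plan': 'pro'}}
```

Real CDC systems add the hard parts discussed in the episode on top of this core loop: parsing each database's log format, handling schema changes, and guaranteeing ordering and delivery across restarts.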