Data engineering is a constantly growing and evolving discipline. There are always new tools, systems, and design patterns to learn, which leads to a great deal of confusion for newcomers. Daniel Molnar has dedicated his time to helping data professionals get back to basics through presentations at conferences and meetups, and with his most recent endeavor of building the Pipeline Data Engineering Academy. In this episode he shares advice on how to cut through the noise, which principles are foundational to building a successful career as a data engineer, and his approach to educating the next generation of data practitioners. This was a useful conversation for anyone working with data who has found themselves spending too much time chasing the latest trends and wishes to develop a more focused approach to their work.
Your data platform needs to be scalable, fault tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $100 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
Businesses are increasingly faced with the challenge of satisfying several, often conflicting, demands regarding sensitive data. From sharing and using sensitive data to complying with regulations and navigating new cloud-based platforms, Immuta helps meet these needs and more.
With automated, scalable data access and privacy controls, and enhanced collaboration between data and compliance teams, Immuta empowers data teams to easily access the data they need, when they need it – all while protecting sensitive data and ensuring their customers’ privacy. Immuta integrates with leading technology and solutions providers so you can govern your data on your desired analytic system.
Start a free trial of Immuta to see the power of automated data governance for yourself.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing, and de-identification features eliminate the need for time-consuming manual processes, and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
- You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For more opportunities to stay up to date, gain new skills, and learn from your peers there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to dataengineeringpodcast.com/conferences to check out the upcoming events being offered by our partners and get registered today!
- Your host is Tobias Macey and today I’m interviewing Daniel Molnar about being a data janitor and how to cut through the hype to understand what to learn for the long run
- How did you get involved in the area of data management?
- Can you start by describing your thoughts on the current state of the data management industry?
- What is your strategy for being effective in the face of so much complexity and conflicting needs for data?
- What are some of the common difficulties that you see data engineers contend with, whether technical or social/organizational?
- What are the core fundamentals that you think are necessary for data engineers to be effective?
- What are the gaps in knowledge or experience that you have seen data engineers contend with?
- You recently started down the path of building a bootcamp for training data engineers. What was your motivation for embarking on that journey?
- How would you characterize your particular approach?
- What are some of the reasons that your applicants have for wanting to become versed in data engineering?
- What is the baseline of capabilities that you expect of your target audience?
- What level of proficiency do you aim for when someone has completed your training program?
- Who do you think would not be a good fit for your academy?
- As a hiring manager, what are the core capabilities that you look for in a data engineering candidate?
- What are some of the methods that you use to assess competence?
- What are the overall trends in the data management space that you are worried by?
- Which ones are you happy about?
- What are your plans and overall goals for the pipeline academy?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email email@example.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Pipeline Data Engineering Academy
- Data Janitor 101
- The Data Janitor Returns
- Berlin, Germany
- Urchin (Google Analytics precursor)
- AWS Redshift
- Nassim Nicholas Taleb
- Black Swans (affiliate link)
- KISS == Keep It Simple Stupid
- Dan McKinley
- Ralph Kimball Data Warehousing design
- Falsehoods Programmers Believe
- Apache Kafka
- AWS Kinesis
- Depeche Mode
- Designing Data Intensive Applications (affiliate link)
- Stop Hiring DevOps Engineers and Start Growing Them
- T-Shaped Engineer
- Pipeline Data Engineering Academy Curriculum
- MPP == Massively Parallel Processing
- Apache Flink
- Flask web framework
- YAGNI == You Aren’t Gonna Need It
- Pair Programming