The rate of change in the data engineering industry is alternately exciting and exhausting. Joe Crobak found his way into the work of data management by accident, as so many of us do. After becoming engrossed in researching the details of distributed systems and big data management for his work, he began sharing his findings with friends. This led him to create the Hadoop Weekly newsletter, which he recently rebranded as the Data Engineering Weekly newsletter. In this episode he discusses his experiences working as a data engineer in industry and at the USDS, his motivations and methods for creating a newsletter, and the insights that he has gleaned from it.
Do you want to try out some of the tools and applications that you heard about on the Data Engineering Podcast? Do you have some ETL jobs that need somewhere to run? Check out Linode at promo.linode.com/dataengineeringpodcast or use the code dataengineering2018 and get a $20 credit (that’s 4 months free!) to try out their fast and reliable Linux virtual servers. They’ve got lightning fast networking and SSD servers with plenty of power and storage to run whatever you want to experiment on.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
- Your host is Tobias Macey and today I’m interviewing Joe Crobak about his work maintaining the Data Engineering Weekly newsletter, and the challenges of keeping up with the data engineering industry.
- How did you get involved in the area of data management?
- What are some of the projects that you have been involved in that were most personally fulfilling?
- As an engineer at the USDS working on the healthcare.gov and Medicare systems, what were some of the approaches that you used to manage sensitive data?
- Healthcare.gov has a storied history. How were the systems for processing and managing the data architected to handle the amount of load they were subjected to?
- What was your motivation for starting a newsletter about the Hadoop space?
- Can you speak to your reasoning for the recent rebranding of the newsletter?
- How much of the content that you surface in your newsletter is found during your day-to-day work, versus explicitly searching for it?
- After more than five years of following the trends in data analytics and data infrastructure, what are some of the most interesting or surprising developments?
- What have you found to be the fundamental skills or areas of experience that have maintained relevance as new technologies in data engineering have emerged?
- What is your workflow for finding and curating the content that goes into your newsletter?
- What is your personal algorithm for filtering which articles, tools, or commentary gets added to the final newsletter?
- How has your experience managing the newsletter influenced your areas of focus in your work, and vice versa?
- What are your plans going forward?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- National Labs
- Amazon EMR (Elastic Map-Reduce)
- Recommendation Engine
- Netflix Prize
- Quality Payment Program
- NIST (National Institute of Standards and Technology)
- PII (Personally Identifiable Information)
- Threat Modeling
- JBoss
- Apache HTTP Server
- JMS (Java Message Service)
- Load Balancer
- Hadoop Weekly
- Data Engineering Weekly
- Stream Processing
- The Flavors of Data Science and Engineering
- Change Data Capture
- Jay Kreps