The first stage in every data project is collecting information and routing it to a storage system for later analysis. For operational data this typically means collecting log messages and system metrics. Often a different tool is used for each class of data, increasing the overall complexity and number of moving parts. The engineers at Timber.io decided to build Vector, a new tool that can process both of these data types in a single framework that is reliable and performant. In this episode Ben Johnson and Luke Steensen explain how the project got started, how it compares to other tools in this space, and how you can get involved in making it even better.
Do you want to try out some of the tools and applications that you heard about on the Data Engineering Podcast? Do you have some ETL jobs that need somewhere to run? Check out Linode at linode.com/dataengineeringpodcast or use the code dataengineering2019 and get a $20 credit (that’s 4 months free!) to try out their fast and reliable Linux virtual servers. They’ve got lightning fast networking and SSD servers with plenty of power and storage to run whatever you want to experiment on.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, Corinium Global Intelligence, and Data Council. Upcoming events include the O’Reilly AI conference, the Strata Data conference, the combined events of the Data Architecture Summit and Graphorum, and Data Council in Barcelona. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
- Your host is Tobias Macey and today I’m interviewing Ben Johnson and Luke Steensen about Vector, a high-performance, open-source observability data router
- How did you get involved in the area of data management?
- Can you start by explaining what the Vector project is and your reason for creating it?
- What are some of the comparable tools that are available and what were they lacking that prompted you to start a new project?
- What strategy are you using for project governance and sustainability?
- What are the main use cases that Vector enables?
- Can you explain how Vector is implemented and how the system design has evolved since you began working on it?
- How did your experience building the business and products for Timber influence and inform your work on Vector?
- When you were planning the implementation, what were your criteria for the runtime implementation and why did you decide to use Rust?
- What led you to choose Lua as the embedded scripting environment?
- What data format does Vector use internally?
- Is there any support for defining and enforcing schemas?
- In the event of a malformed message is there any capacity for a dead letter queue?
- What are some strategies for formatting source data to improve the effectiveness of the information that is gathered and the ability of Vector to parse it into useful data?
- When designing an event flow in Vector what are the available mechanisms for testing the overall delivery and any transformations?
- What options are available to operators to support visibility into the running system?
- In terms of deployment topologies, what capabilities does Vector have to support high availability and/or data redundancy?
- What are some of the other considerations that operators and administrators of Vector should be considering?
- You have a fairly well-defined roadmap for the different point versions of Vector. How did you determine the priority ordering, and how quickly are you progressing on your roadmap?
- What is the available interface for adding and extending the capabilities of Vector? (source/transform/sink)
- What are some of the most interesting/innovative/unexpected ways that you have seen Vector used?
- What are some of the challenges that you have faced in building/publicizing Vector?
- For someone who is interested in using Vector, how would you characterize the overall maturity of the project currently?
- What is missing that you would consider necessary for production readiness?
- When is Vector the wrong choice?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
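- Several of the questions above touch on Vector’s source/transform/sink model and its embedded Lua scripting. As a rough illustration for listeners (not taken from the episode, and with option names that are assumptions based on Vector’s documented TOML configuration format), a minimal pipeline might look something like this:

```toml
# Hypothetical minimal Vector topology: tail application log files,
# enrich each event with an embedded Lua transform, and write the
# results to the console as JSON.

[sources.app_logs]
  type = "file"
  include = ["/var/log/app/*.log"]

[transforms.tag_env]
  type = "lua"
  inputs = ["app_logs"]
  # Inline Lua that mutates each event as it flows through.
  source = """
    event["environment"] = "production"
  """

[sinks.console_out]
  type = "console"
  inputs = ["tag_env"]
  encoding = "json"
```

  Each named component declares its `inputs`, so the topology is an explicit directed graph from sources through transforms to sinks; consult the Vector documentation for the exact options supported by your version.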
- Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Apache Kafka
- Fluent Bit
- Tokio Rust library
- Web Assembly (WASM)
- Protocol Buffers