One of the longest-running and most popular open source database projects is PostgreSQL. Thanks to its extensibility and a community focus on stability, it has stayed relevant as development environments and data requirements have changed and evolved over its lifetime. It is difficult to do justice to any single facet of this database in one conversation, let alone its entire surface area, but in this episode Jonathan Katz does an admirable job of it. He explains how Postgres started and how it has grown over the years, highlights the fundamental features that make it such a popular choice for application developers, and describes the ongoing efforts to add the complex features needed by the demanding workloads of today’s data layer. To cap it off he reviews some of the exciting features that the community is working on building into future releases.
DataKitchen offers the first end-to-end DataOps Platform that empowers teams to reclaim control of their data pipelines and deliver business value instantly, without errors. The platform automates and coordinates all the people, tools, and environments in your entire data analytics organization: everything from orchestration, testing, and monitoring to development and deployment. It’s DataOps Delivered.
Your data platform needs to be scalable, fault-tolerant, and performant, which means that you need the same from your cloud provider. Linode has been powering production systems for over 17 years, and now they’ve launched a fully managed Kubernetes platform. With the combined power of the Kubernetes engine for flexible and scalable deployments, and features like dedicated CPU instances, GPU instances, and object storage, you’ve got everything you need to build a bulletproof data pipeline. If you go to dataengineeringpodcast.com/linode today you’ll even get a $60 credit to use on building your own cluster, or object storage, or reliable backups, or… And while you’re there don’t forget to thank them for being a long-time supporter of the Data Engineering Podcast!
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. After that, learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Jonathan Katz about a high level view of PostgreSQL and the unique capabilities that it offers
- How did you get involved in the area of data management?
- How did you get involved in the Postgres project?
- For anyone who hasn’t used it, can you describe what PostgreSQL is?
- Where did Postgres get started and how has it evolved over the intervening years?
- What are some of the primary characteristics of Postgres that would lead someone to choose it for a given project?
- What are some cases where Postgres is the wrong choice?
- What are some of the common points of confusion for new users of PostgreSQL? (particularly if they have prior database experience)
- The recent releases of Postgres have had some fairly substantial improvements and new features. How does the community manage to balance stability and reliability against the need to add new capabilities?
- What are the aspects of Postgres that allow it to remain relevant in the current landscape of rapid evolution at the data layer?
- Are there any plans to incorporate a distributed transaction layer into the core of the project along the lines of what has been done with Citus or CockroachDB?
- What is in store for the future of Postgres?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Crunchy Data
- Paperless Post
- LAMP Stack
- Edgar Codd
- A Relational Model of Data for Large Shared Data Banks
- Relational Algebra
- Oracle DB
- UC Berkeley
- Dr. Michael Stonebraker
- ANSI C
- BSD License
- B-Tree Index
- GIN Index
- GiST Index
- KNN GiST
- Full Text Search
- BRIN Index (see the index sketch after this list)
- WAL (Write-Ahead Log)
- OLAP (Online Analytical Processing)
- Postgres IRC
- Postgres Slack
- Postgres Conferences
- Postgres Roadmap
- Citus Data
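To make the index links above concrete, here is a minimal sketch in Python using the psycopg2 driver. The connection string, the `events` table, and the index names are hypothetical examples chosen for illustration, not anything prescribed in the episode; the sketch just shows the general shape of creating B-Tree, GIN, GiST, and BRIN indexes in Postgres.

```python
# Minimal sketch: creating the PostgreSQL index types linked above.
# Assumes a reachable PostgreSQL server (9.5+ for IF NOT EXISTS) and the
# psycopg2 driver; the `events` table and all names are hypothetical.
import psycopg2

STATEMENTS = [
    """CREATE TABLE IF NOT EXISTS events (
           id         bigserial PRIMARY KEY,
           created_at timestamptz NOT NULL,
           tags       text[] NOT NULL DEFAULT '{}',
           body       text NOT NULL
       )""",
    # B-Tree: the default index type, good for equality and range scans.
    "CREATE INDEX IF NOT EXISTS events_created_at_btree ON events (created_at)",
    # GIN: inverted index, well suited to arrays and full text search.
    "CREATE INDEX IF NOT EXISTS events_tags_gin ON events USING gin (tags)",
    # GiST: generalized search tree; usable for full text search, and with
    # suitable operator classes it supports KNN ordering via distance operators.
    "CREATE INDEX IF NOT EXISTS events_body_gist ON events "
    "USING gist (to_tsvector('english', body))",
    # BRIN: block range index; compact summaries for naturally ordered data.
    "CREATE INDEX IF NOT EXISTS events_created_at_brin ON events USING brin (created_at)",
]

# psycopg2's connection context manager wraps the block in a transaction.
with psycopg2.connect("dbname=demo") as conn:
    with conn.cursor() as cur:
        for stmt in STATEMENTS:
            cur.execute(stmt)
```

BRIN fits the `created_at` column in this hypothetical table because timestamps in an append-only table tend to correlate with physical row order, which is exactly the case where BRIN’s compact block-range summaries pay off.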