Auto-archiving and Data Retention Management in Postgres with pg_partman
Crunchy Data | PostgreSQL Blog
by Keith Fiske
1d ago
You could be saving money every month on database costs with a smarter data retention policy. One of the primary reasons for partitioning, and a huge benefit of it, is using it to automatically archive your data. For example, you might have a huge log table that, for business purposes, you need to keep for 30 days. This table grows continually over time, and keeping all the data makes database maintenance challenging. With time-based partitioning, you can simply archive off data older than 30 days. The nature of most relational databases means that deleting large volumes of data can be very inefficient …
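As a sketch of the pattern described here (the table and column names are hypothetical, and the call assumes the pg_partman 5.x `create_parent()` signature), pg_partman can manage daily partitions and enforce a 30-day retention policy:

```sql
-- Create daily range partitions on a hypothetical log table.
SELECT partman.create_parent(
    p_parent_table := 'public.event_log',
    p_control      := 'created_at',
    p_interval     := '1 day'
);

-- Detach (rather than drop) partitions once their data is older
-- than 30 days, so they can be archived elsewhere first.
UPDATE partman.part_config
   SET retention = '30 days',
       retention_keep_table = true
 WHERE parent_table = 'public.event_log';
```

pg_partman's maintenance routine (typically scheduled via pg_cron) then creates upcoming partitions and applies the retention policy on each run. Setting `retention_keep_table = false` instead would drop old partitions outright.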
Building PostgreSQL Extensions: Dropping Extensions and Cleanup
by David Christensen
1w ago
I recently created a Postgres extension which utilizes the pg_cron extension to schedule recurring activities using cron.schedule(). Everything worked great. The only problem was that when I dropped my extension, it left the cron job scheduled, which resulted in regular errors:

2024-04-06 16:00:00.026 EST [1548187] LOG: cron job 2 starting: SELECT bridge_stats.update_stats('55 minutes', false)
2024-04-06 16:00:00.047 EST [1580698] ERROR: schema "bridge_stats" does not exist at character 8
2024-04-06 16:00:00.047 EST [1580698] STATEMENT: SELECT bridge_stats.update_stats('55 minutes', false) …
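One way to avoid orphaned jobs like this (not necessarily the approach the post lands on) is to schedule under a fixed job name and unschedule it before the extension is dropped, since pg_cron jobs are not owned by the extension that created them. A minimal sketch, with a hypothetical job name and schedule:

```sql
-- In the extension's install script: schedule under a fixed job
-- name so it can be found again later.
SELECT cron.schedule('bridge_stats_refresh', '0 * * * *',
    $$SELECT bridge_stats.update_stats('55 minutes', false)$$);

-- Cleanup before removing the extension: unschedule by name,
-- then drop. Otherwise the job keeps firing against a schema
-- that no longer exists.
SELECT cron.unschedule('bridge_stats_refresh');
DROP EXTENSION bridge_stats;
```

The manual unschedule step is exactly what's easy to forget, which is why the post explores doing this cleanup automatically when the extension is dropped.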
Row Level Security for Tenants in Postgres
by Craig Kerstiens
2w ago
Row-level security (RLS) in Postgres is a feature that allows you to control which rows a user is allowed to access in a particular table. It enables you to define security policies at the row level based on certain conditions, such as user roles or specific attributes in the data. Most commonly this is used to limit access based on the database user connecting, but it can also be handy for ensuring data safety in multi-tenant applications.

Creating tables with row level security

We're going to assume our tenants in this case are part of an organization, and we have an events table with events t…
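A minimal sketch of this kind of setup (the table shape and the `app.current_tenant` setting name are assumptions, not necessarily what the post uses): enable RLS on the table and key a policy to a per-connection tenant setting:

```sql
CREATE TABLE events (
    id        bigserial PRIMARY KEY,
    tenant_id uuid NOT NULL,
    payload   jsonb
);

ALTER TABLE events ENABLE ROW LEVEL SECURITY;

-- Each connection identifies its tenant first, e.g.:
--   SET app.current_tenant = '5f2c...-uuid';
-- The policy then filters every read and write to that tenant's rows.
CREATE POLICY tenant_isolation ON events
    USING (tenant_id = current_setting('app.current_tenant')::uuid);
```

Note that table owners and superusers bypass policies by default; `ALTER TABLE events FORCE ROW LEVEL SECURITY` applies them to the owner as well.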
Contributing to Postgres 101: A Beginner's Experience
by Elizabeth Christensen
3w ago
I recently got my very first patch into PostgreSQL! To be clear, I'm not a C developer and didn't contribute some fancy new feature. However, I do love Postgres and wanted to contribute. Here's my journey and what I learned along the way.

Oh, something's missing from the docs! A patch idea?

I had an idea for a docs patch while I was talking to Stephen Frost about some research and writing I was doing on HOT updates and fill factor. A recent update to HOT updates meant HOT could be compatible with BRIN. And while the HOT readme was up to date, the main PostgreSQL docs were missing a reference to …
Inside PostGIS: Calculating Distance
by Paul Ramsey
1M ago
Calculating distance is a core feature of a spatial database, and the central function in many analytical queries. "How many houses are within the evacuation radius?" "Which responder is closest to the call?" "How many more miles until the school bus needs routine maintenance?" PostGIS and any other spatial database let you answer these kinds of questions in SQL, using ST_Distance(geom1, geom2) to return a distance, or ST_DWithin(geom1, geom2, radius) to return a true/false result within a tolerance.

SELECT ST_Distance(
    'LINESTRING (150 300, 226 274, 320 280, 370 320, 390 370)'::geometry …
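The excerpt's query is cut off; a complete pair of calls in the same spirit (the point geometry here is an assumption, not taken from the post) would look like:

```sql
-- Distance from a linestring to a point, in the units of the
-- geometry's coordinate system (planar for plain geometry).
SELECT ST_Distance(
    'LINESTRING (150 300, 226 274, 320 280, 370 320, 390 370)'::geometry,
    'POINT (200 200)'::geometry
);

-- Index-friendly radius test: true if the two inputs are
-- within 100 units of each other.
SELECT ST_DWithin(
    'LINESTRING (150 300, 226 274, 320 280, 370 320, 390 370)'::geometry,
    'POINT (200 200)'::geometry,
    100
);
```

ST_DWithin is usually preferred for filtering because it can use a spatial index, whereas `ST_Distance(a, b) < r` in a WHERE clause cannot.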
Examining Postgres Upgrades with pg_upgrade
by Greg Sabino Mullane
1M ago
Postgres is an amazing database system, but it does come with a five-year life cycle. This means you need to perform a major upgrade of it at least every five years. Luckily, Postgres ships with the pg_upgrade program, which enables a quick and easy migration from one major version of Postgres to another. Let's work through an example of how to upgrade - in this case, we will go from Postgres 12 to Postgres 16. You should always aim for the highest version possible; check postgresql.org to see what the current version is. If you get stuck, the official documentation has a lot of details …
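The usual pg_upgrade flow for a 12-to-16 jump, sketched with hypothetical data directories and binary paths (run the `--check` dry run before the real thing):

```shell
# Initialize a fresh 16 cluster alongside the old one.
initdb -D /data/pg16

# Dry run: validates the clusters without changing anything.
pg_upgrade \
    --old-datadir /data/pg12 \
    --new-datadir /data/pg16 \
    --old-bindir  /usr/lib/postgresql/12/bin \
    --new-bindir  /usr/lib/postgresql/16/bin \
    --check

# If the check passes, rerun without --check to perform the
# upgrade (adding --link uses hard links instead of copying files,
# which is much faster but ties the new cluster to the old data).
```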
Migrate from Heroku Postgres to Crunchy Bridge
by Craig Kerstiens
1M ago
While database migrations are not an everyday thing for you, they are for us. Migrating to a new database provider isn't something you ever take lightly. Once you've come to the decision that you may want to migrate, you look at the time and effort cost of switching and wonder if it's really worth it. You decide it is, and still you're left with the uncertainty of what-ifs: What about Postgres versions? What about Postgres extensions? What about collations? How do you minimize cutover time without spending 6 months building some custom application double-writing? What about performance? How do …
The Rest is History: Investigations of WAL History Files
by Brian Pace
2M ago
PostgreSQL uses the concept of a timeline to identify a series of WAL records in space and time. Each timeline is identified by a number, shown as decimal in some places and hexadecimal in others. Each time a database is recovered using point-in-time recovery, and sometimes during standby/replica promotion, a new timeline is generated. A common mistake is to assume that a higher timeline number is synonymous with the most recent data. While the highest timeline points to the latest incarnation of the database, it doesn't guarantee that the database indeed holds the most useful data from an application st…
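You can see the current timeline of a running cluster with a quick query; the value is reported here in decimal, while WAL segment and `.history` file names encode the same number in hexadecimal:

```sql
-- Current and previous timeline of this cluster, from the
-- control file's last checkpoint record.
SELECT timeline_id, prev_timeline_id
FROM pg_control_checkpoint();
```

A timeline of 3, for example, corresponds to WAL file names and a history file prefixed `00000003`.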
PostGIS Clustering with K-Means
by Paul Ramsey
2M ago
Clustering points is a common task for geospatial data analysis, and PostGIS provides several functions for clustering:

ST_ClusterDBSCAN
ST_ClusterKMeans
ST_ClusterIntersectingWin
ST_ClusterWithinWin

We previously looked at the popular DBSCAN spatial clustering algorithm, which builds clusters off of spatial density. This post explores the features of the PostGIS ST_ClusterKMeans function. K-means clustering is having a moment as a popular way of grouping very high-dimensional LLM embeddings, but it is also useful in lower dimensions for spatial clustering. ST_ClusterKMeans will cluster 2-dimensional …
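ST_ClusterKMeans is a window function: rather than returning grouped geometries, it assigns each input row a cluster number. A minimal sketch, with a hypothetical table and column:

```sql
-- Assign each point to one of 5 clusters; cluster_id is an
-- integer from 0 to 4.
SELECT id,
       ST_ClusterKMeans(geom, 5) OVER () AS cluster_id
FROM   sites;
```

Because it's a window function, the cluster assignment can be combined with an outer `GROUP BY cluster_id` to compute per-cluster summaries or hulls.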