Hacks o’Clock
Friday, January 11, 2019
In the Bay Area? This Tuesday, 2019-01-15, SFPUG features David Fetter's talk on ASSERTIONs. RSVPs close Monday at noon, so don't hesitate! Thanks to Heap for hosting at their FiDi office.
Thursday, December 6, 2018
BDR talk by Mark Wong of 2nd Quadrant
In the Bay Area? This Wednesday, 2018-12-12, Mark Wong from 2nd Quadrant will be talking about BDR (Bi-Directional Replication, a form of multi-master) for PostgreSQL. This is a great chance to get inside, real-time info on BDR.
Multi-master is one of those features where when you need it, you really, really need it, and if you're in that category, this talk is for you. It's also of interest to anyone trying to figure out the best solution for scaling and redundancy beyond one machine and one data center.
To attend, you must RSVP at Meetup with your full name (for building security's guest list).
Tuesday, October 30, 2018
Remember your history
PostgreSQL keeps track of which WAL files go with which timelines in small history files. Each time you make a base backup, a history file is born. The file is written once and never updated. It's a simple system, and it works well and silently.
In fact, sometimes it works a little too silently.
At PostgreSQL Experts we've run into the problem where a client's history files disappear because they are stored in S3, and there's a lifecycle configuration in place that says to move everything over a certain age to Glacier. That's a good policy for WAL files!
Unfortunately, it's not a good policy for history files: without the latest history file you can't restore the latest backup, and without past history files, you are unable to do PITR to certain points in time.
The solution we used was to move the whole WAL archive to S3 Standard-Infrequent Access storage, dissolving the problem with lifecycle configurations while controlling costs. But you could also fix this by editing the lifecycle configuration.
The important thing is this: hold on to all history files. They're tiny text files, and when you need them, you really need them. This is also a good reason to test restores, not just of the latest backup, but of database states at arbitrary points in time.
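(An aside of mine, not from the original post: if you're wondering which timeline a running cluster is currently on, and therefore which .history files its future backups will depend on, you can ask PostgreSQL directly; pg_control_checkpoint() is available in 9.6 and later.)
select timeline_id, prev_timeline_id from pg_control_checkpoint();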
* * *
Addendum: another very common problem we see is WAL archives that become corrupted because a client accidentally pointed two primaries at the same WAL archive (for instance, they might have copied a postgresql.conf file by hand, or via a DevOps tool like Puppet). In this case, the whole archive is corrupted, and you're best off starting with a fresh S3 bucket or an empty directory and doing a new base backup immediately.
One of the many nice features of pgBackRest is that it will notice this and prevent you from doing it. Fewer footguns → better backups.
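For context (my addition, not from the original post): every cluster initialized by initdb gets its own system identifier, which is one way a tool like pgBackRest can notice that a different primary has started writing to the same archive. You can inspect it from psql (9.6 and later):
select system_identifier from pg_control_system();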
Monday, October 29, 2018
It's just an expression
PostgreSQL has lots of great features for text search.
In particular, there are a bunch of ways to do case-insensitive searches for text:
- There's standard SQL ILIKE… but that can be expensive — especially if you put %'s at both start and end — and it's overkill if you want an exact string match.
- The same goes for case-insensitive regexp matching: overkill for simple case-insensitive matches. It does work with indexes, though!
- Then there's the citext extension, which is pretty much the perfect answer. It lets you use indexes and still get case-insensitive matching, which is really cool. It Just Works.
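To make the citext option concrete, here's a minimal sketch (my example, a variant of the toy table used later in this post; it assumes the citext extension from contrib is installed):
create extension if not exists citext;
create table email_addresses (addy citext primary key);
insert into email_addresses values ('Quinn@Example.com');
-- citext compares case-insensitively, and the primary-key index is still usable
select addy from email_addresses where addy = 'quinn@example.com';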
OK, but what if you didn't have the foresight to use citext? And what if you don't want to go through the pain of changing the data type of that column? In a case like this, an expression index can be really handy.
Without an index, a case-insensitive match like this…
sandbox# select addy from email_addresses where lower(addy) = 'quinn@example.com';
… is forced to use a sequential scan, lowercasing the addy column of each row before comparing it to the desired address:
QUERY PLAN
----------------------------------------------------------------------------------------------------------
Seq Scan on email_addresses (cost=0.00..1.04 rows=1 width=32) (actual time=0.031..0.032 rows=1 loops=1)
Filter: (lower(addy) = 'quinn@example.com'::text)
Rows Removed by Filter: 3
Planning time: 0.087 ms
Execution time: 0.051 ms
(5 rows)
For my toy example, a four-row table, that's not too expensive, but for a real table with thousands or millions of rows, it can become quite a pain point. And a regular index on email_addresses(addy) won't help; the lower() operation forces a sequential scan, regardless of whether an index is present.
But an expression index will do the trick. An astute commenter noted that, in my toy example, walking the index is actually slower than a sequential scan. But for a table with many rows (most of which are not the one email address I'm looking for!) an index scan will be dramatically faster.
sandbox# create index email_addresses__lower__addy on email_addresses (lower(addy));
CREATE INDEX
sandbox# explain analyze select addy from email_addresses where lower(addy) = 'quinn@example.com';
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
Index Scan using email_addresses__lower__addy on email_addresses (cost=0.13..8.15 rows=1 width=32) (actual time=0.093..0.095 rows=1 loops=1)
Index Cond: (lower(addy) = 'quinn@example.com'::text)
Planning time: 0.105 ms
Execution time: 0.115 ms
(4 rows)
In short, expression indexes are one of those neat tricks that can save you a lot of pain.
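A related trick worth knowing (my addition, not part of the original post): the same expression index can be made unique, so that 'Quinn@example.com' and 'quinn@example.com' can't both sneak into the table:
create unique index email_addresses__lower__addy__uniq on email_addresses (lower(addy));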
Tuesday, October 23, 2018
How to set up scram-sha-256 authentication in PostgreSQL
md5: everyone uses it. But it's insecure — unsuitable for authentication or password storage. PostgreSQL 10 adds support for SCRAM SHA-256 authentication, which is far better1… but how do you use it? Documentation is scarce.
Below I give the complete steps, for two different cases:
- The case where you migrate every user.
- The case where you migrate only a subset of users. Step 1 explains why you might need to do this. In this case, you can save time and complexity by creating a containing role, adding all the users you want to migrate to that role, and then configuring that role to work with scram-sha-256. That way you don't have to do it once for each user.
1. First, decide which users to migrate to scram-sha-256. The ideal is to migrate everyone, all at once. In practice you may be unable to do that, for a few reasons:
- Older client tools don't support scram-sha-256. You need at least psql 10, or pgAdmin 4.
- Your app's client libraries may not support scram-sha-256. At a minimum, you will probably have to upgrade your client libraries2.
- And it's not enough just to upgrade your libraries; you also have to change the password for each user who's migrating, because PostgreSQL needs to hash this password in scram-sha-256 format. There's no automatic rehash (there can't be, since PostgreSQL doesn't store the cleartext of the old password).
- Finally, besides your own code, you may have third-party apps connecting to your database, whose library versioning policy and password reset policy you don't control.
2. Having chosen the users to migrate, upgrade your relevant client libraries, psql/postgresql-client packages, and/or pgAdmin versions.
3. If you're migrating all users, skip to the next step. If, instead, you're migrating a subset of users, then create a new role, and put all those users in that role. This step saves a bit of repetitive config work.
$ sudo su - postgres
psql
postgres=# create role scram_sha_256_users nologin ROLE alice, bob, eve;
… where alice, bob, and eve are the users you're moving to scram-sha-256. We'll set their passwords in a minute.
4. Edit pg_hba.conf (make a backup copy first, and don't do a reload yet! We'll do that in a future step). If you're migrating all users, you just change 'md5' to 'scram-sha-256' wherever it appears in the file. In that case, also edit your postgresql.conf to set
password_encryption = 'scram-sha-256'
If, instead, you're migrating a subset of users, the pg_hba.conf edits depend on which users you're migrating. As appropriate for your setup, alter or remove any lines that tell PostgreSQL to use md5 authentication for your affected users; then add equivalent lines for scram-sha-256 authentication. For instance, if those users were formerly allowed to log in to all DBs with an md5 password, either locally or from any IPv4 host, then you would add the following. Note that this is only an example, and the particulars depend on your existing setup!
# Local (Unix domain socket) connections:
local   all   +scram_sha_256_users                  scram-sha-256
# IPv4 connections, from any IP address:
host    all   +scram_sha_256_users   0.0.0.0/0      scram-sha-256
5. Coordinate with stakeholders to schedule a downtime. During the downtime, do the following:
- Stop your app, cron jobs, and any other code that will touch the database (or just be prepared for these to fail when they try to connect).
- Do a pg_ctl reload to load the new pg_hba.conf. Note that this will break logins for all the users you're migrating!
- Now it's time to change those passwords. Notably, you must pass the cleartext password to your alter user command, so that PostgreSQL itself can salt and (iteratively) hash it; unlike with md5, you can't do the hashing yourself. This is required by the SCRAM protocol.
- So first alter the postgres user's ~/.psqlrc, to make sure it won't save cleartext of password-related commands in its history file. Add this line at the bottom:
\set HISTFILE /dev/null
- Likewise, make sure the cleartext won't go to the log. Do psql -U postgres, and set these settings (just for this one session):
set log_min_duration_statement = -1;
set log_statement = 'none';
set password_encryption = 'scram-sha-256';
- In the same psql session, change passwords for the relevant users. If your users are going to connect using psql, keep in mind that it won't work with passwords longer than 99 characters. Longer passwords won't crash psql, but they will cause logins to fail.
alter role scram_sha_256_users set password_encryption = 'scram-sha-256';
alter role bob with password 'bobsecret';
alter role alice with password 'alicesecret';
…
- Securely communicate the new passwords to human users.
- Update ~/.pgpass entries with the new passwords, for both human users and non-human runners of cron jobs.
- Change other application password settings (config files, Vault, et cetera).
- Test, test, test! Make sure your apps, human users, and cron jobs can all connect.
- End the downtime.
6. Remove this line from your ~/.psqlrc:
\set HISTFILE /dev/null
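Once the passwords are changed, one quick sanity check (my addition, not part of the original steps): as a superuser, look at how the new verifiers are stored. Roles migrated to SCRAM have a rolpassword that begins with SCRAM-SHA-256$, while md5 verifiers begin with md5.
select rolname, left(rolpassword, 14) as verifier_prefix
from pg_authid
where rolname in ('alice', 'bob', 'eve');  -- the example roles from step 3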
* * *
1. Because SCRAM passwords are stored as a salted, iterated hash, even if they're leaked, they're harder to crack. Also, the SCRAM authentication protocol is immune to MITM attacks using certs from compromised CAs.
2. Support varies among libraries, though most wrap libpq, and hence should Just Work if compiled with libpq 10 (JDBC is a notable exception; it doesn't use libpq, so you need to get the right JDBC version).
Thursday, October 11, 2018
HyperLogLog at SFPUG
HyperLogLog is one of those simple, useful, and ingenious algorithms that everyone should know. In SQL terms, it's a way to do COUNT(DISTINCT …) estimates with guaranteed accuracy bounds.
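To make that concrete, here is a minimal sketch using the postgresql-hll extension (my example, not from the post; the events table and its user_id column are made up):
create extension hll;
-- single-pass approximate count of distinct user_ids
select hll_cardinality(hll_add_agg(hll_hash_text(user_id))) from events;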
At this month's San Francisco PostgreSQL Users' Group, Sai Srirampur will explain HLL's workings, the postgresql-hll extension, and its applications in distributed PostgreSQL à la Citus.
Check it out!
Wednesday, September 5, 2018
Locks talk this Friday, at PostgresOpen! (2018-09-07)
UPDATE: my slides are available; the video has yet to be published.
Attending PostgresOpen?
Come join me Friday for a gentle introduction to locks in PostgreSQL. My example-driven talk covers basic lock theory, tools for lock debugging, and common pitfalls and solutions. I hope to see you there!
Time and place info is on the PostgresOpen SV website.