I'm the author of node-postgres. First, I apologize that the documentation has failed to make the right option clear: that's my fault. I'll try to improve it. I wrote this Gist just now to explain, because the conversation grew too long for Twitter.
Using pg.connect is the way to go in a web environment.
The PostgreSQL server can only handle one query at a time per connection. That means if you have one global new pg.Client() connected to your backend, your entire app is bottlenecked by how fast Postgres can respond to queries. It literally will line everything up, queuing each query. Yeah, it's async and so that's alright... but wouldn't you rather multiply your throughput by 10x? Use pg.connect and set pg.defaults.poolSize to something sane (we do 25-100; not sure what the right number is yet).
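A minimal sketch of the pooled approach, using pg.connect and the done callback; the connection string and query are just examples:

```js
var pg = require('pg');

// bump the pool size before the first call to pg.connect
pg.defaults.poolSize = 25;

var conString = 'postgres://user:password@localhost/mydb'; // example only

pg.connect(conString, function(err, client, done) {
  if (err) return console.error('could not check out a client', err);
  client.query('SELECT NOW() AS when', function(err, result) {
    done(); // release the client back to the pool
    if (err) return console.error('query failed', err);
    console.log(result.rows[0].when);
  });
});
```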
new pg.Client is for when you know what you're doing: when you need a single long-lived client for some reason, or need to very carefully control the life-cycle. A good example of this is when using LISTEN/NOTIFY. The listening client needs to be around and connected and not shared so it can properly handle NOTIFY messages. Another example would be opening a one-off client to kill some hung stuff, or in command-line scripts.
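Here's a rough sketch of the LISTEN/NOTIFY case with one dedicated client (the connection string and channel name are made up):

```js
var pg = require('pg');

// one dedicated, long-lived client -- not shared with the pool
var client = new pg.Client('postgres://user:password@localhost/mydb');

client.connect(function(err) {
  if (err) return console.error('could not connect', err);
  // node-postgres emits a 'notification' event for each NOTIFY it receives
  client.on('notification', function(msg) {
    console.log('channel:', msg.channel, 'payload:', msg.payload);
  });
  client.query('LISTEN jobs');
});

// any other connection can then fire: NOTIFY jobs, 'hello'
```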
One very helpful thing is to centralize all database access in your app into one file. Don't litter pg.connect calls or new clients throughout. Have a file like db.js that looks something like this:
```js
var pg = require('pg');

module.exports = {
  query: function(text, values, cb) {
    pg.connect(function(err, client, done) {
      if (err) return cb(err); // could not check out a client from the pool
      client.query(text, values, function(err, result) {
        done(); // release the client back to the pool
        cb(err, result);
      });
    });
  }
};
```
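The rest of the app then only ever requires that file. A rough usage sketch, assuming an Express-style handler (req, res, next, and the table are just examples):

```js
var db = require('./db');

// somewhere in a route handler
db.query('SELECT * FROM users WHERE id = $1', [req.params.id], function(err, result) {
  if (err) return next(err);
  res.json(result.rows);
});
```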
This way you can change out your implementation from pg.connect to a custom pool of clients or whatever, and only have to change things in one place.
Have a look at the node-pg-query module that does just this.