The second way is a tad more efficient, but a much better way is to execute them in batches:
public void executeBatch(List<Entity> entities) throws SQLException {
    try (
        Connection connection = dataSource.getConnection();
        PreparedStatement statement = connection.prepareStatement(SQL);
    ) {
        for (Entity entity : entities) {
            statement.setObject(1, entity.getSomeProperty());
            // ...
            statement.addBatch();
        }
        statement.executeBatch();
    }
}
You're however dependent on the JDBC driver implementation as to how many items you can add to a single batch before executing it. You may, for example, want to execute the batch every 1000 items:
public void executeBatch(List<Entity> entities) throws SQLException {
    try (
        Connection connection = dataSource.getConnection();
        PreparedStatement statement = connection.prepareStatement(SQL);
    ) {
        int i = 0;
        for (Entity entity : entities) {
            statement.setObject(1, entity.getSomeProperty());
            // ...
            statement.addBatch();
            i++;
            if (i % 1000 == 0 || i == entities.size()) {
                statement.executeBatch(); // Execute every 1000 items.
            }
        }
    }
}
As to multithreaded environments, you don't need to worry about this as long as you acquire and close the connection and the statement in the shortest possible scope inside the same method block, according to the normal JDBC idiom of the try-with-resources statement, as shown in the snippets above.
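To illustrate where dataSource comes from, the surrounding DAO could look like the sketch below (the EntityDAO class name, the SQL string and the constructor injection are assumptions for illustration, not part of the snippets above). The point is that only the thread-safe DataSource is kept as an instance field, while the Connection and PreparedStatement remain local to the method, so concurrent callers never share them:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

public class EntityDAO {

    // Hypothetical SQL; adjust to your own table and columns.
    private static final String SQL = "INSERT INTO entity (some_property) VALUES (?)";

    // A DataSource (usually a connection pool) is safe to share across threads.
    private final DataSource dataSource;

    public EntityDAO(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void executeBatch(List<Entity> entities) throws SQLException {
        // The connection and statement are local variables, so every call
        // (and thus every thread) works on its own instances.
        try (
            Connection connection = dataSource.getConnection();
            PreparedStatement statement = connection.prepareStatement(SQL);
        ) {
            for (Entity entity : entities) {
                statement.setObject(1, entity.getSomeProperty());
                // ...
                statement.addBatch();
            }
            statement.executeBatch();
        }
    }
}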
If those batches are transactional, then you'd like to turn off auto-commit on the connection and only commit the transaction once all batches are finished. Otherwise you may end up with a dirty database where the first bunch of batches succeeded and the later ones didn't.
public void executeBatch(List<Entity> entities) throws SQLException {
    try (Connection connection = dataSource.getConnection()) {
        connection.setAutoCommit(false);

        try (PreparedStatement statement = connection.prepareStatement(SQL)) {
            // ...

            try {
                connection.commit();
            } catch (SQLException e) {
                connection.rollback();
                throw e;
            }
        }
    }
}
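For completeness, here's a sketch of the transactional variant with the 1000-item chunks filled in (same dataSource, SQL and Entity as before). As in the snippet above, the explicit rollback only covers a failing commit; if you also want to roll back when adding or executing a chunk fails, widen the try block accordingly, since the behavior of closing a connection with an uncommitted transaction is driver-specific:

public void executeBatch(List<Entity> entities) throws SQLException {
    try (Connection connection = dataSource.getConnection()) {
        connection.setAutoCommit(false); // One transaction for all batches.

        try (PreparedStatement statement = connection.prepareStatement(SQL)) {
            int i = 0;
            for (Entity entity : entities) {
                statement.setObject(1, entity.getSomeProperty());
                // ...
                statement.addBatch();
                i++;
                if (i % 1000 == 0 || i == entities.size()) {
                    statement.executeBatch(); // Execute every 1000 items.
                }
            }

            try {
                connection.commit(); // Persist all batches at once.
            } catch (SQLException e) {
                connection.rollback(); // Undo everything on a failed commit.
                throw e;
            }
        }
    }
}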