Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

sql - INSERT of 10 million queries under 10 minutes in Oracle?

I am working on a file loader program.

The purpose of this program is to take an input file, apply some conversions to its data and then upload the data into an Oracle database.

The problem that I am facing is that I need to optimize the insertion of very large input data into Oracle.

I am uploading data into a table, let's say ABC.

I am using the OCI library provided by Oracle in my C++ program. Specifically, I am using the OCI connection pool for multi-threaded loading into Oracle (http://docs.oracle.com/cd/B28359_01/appdev.111/b28395/oci09adv.htm).

The following are the DDL statements that have been used to create the table ABC –

CREATE TABLE ABC(
   seq_no         NUMBER NOT NULL,
   ssm_id         VARCHAR2(9)  NOT NULL,
   invocation_id  VARCHAR2(100)  NOT NULL,
   calc_id        VARCHAR2(100),
   analytic_id    VARCHAR2(100) NOT NULL,
   analytic_value NUMBER NOT NULL,
   override       VARCHAR2(1)  DEFAULT  'N'   NOT NULL,
   update_source  VARCHAR2(255) NOT NULL,
   last_chg_user  CHAR(10)  DEFAULT  USER NOT NULL,
   last_chg_date  TIMESTAMP(3) DEFAULT  SYSTIMESTAMP NOT NULL
);

CREATE UNIQUE INDEX ABC_indx ON ABC(seq_no, ssm_id, invocation_id, analytic_id);
/
CREATE SEQUENCE ABC_seq;
/

CREATE OR REPLACE TRIGGER ABC_insert
BEFORE INSERT ON ABC
FOR EACH ROW
BEGIN
SELECT ABC_seq.nextval INTO :new.seq_no FROM DUAL;
END;
/

I am currently using the following query pattern to upload the data into the database. I am sending the data in batches of 500 rows (combined into a single multi-row INSERT per batch) via the various threads of the OCI connection pool.
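As a minimal, Oracle-free sketch of this batching step (pure standard C++; `makeBatches` is a hypothetical helper shown only for illustration, not part of my loader):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Split the full list of per-row statements into batches of at most
// batchSize, mirroring the startOffset/remaining loop used in the
// loader's thread function.
std::vector<std::vector<std::string>> makeBatches(
        const std::vector<std::string>& rows, std::size_t batchSize)
{
    std::vector<std::vector<std::string>> batches;
    for (std::size_t start = 0; start < rows.size(); start += batchSize) {
        const std::size_t end = std::min(start + batchSize, rows.size());
        batches.emplace_back(rows.begin() + start, rows.begin() + end);
    }
    return batches;
}
```

Each inner vector is then turned into one multi-row INSERT statement by a worker thread.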

Sample of SQL insert query used -

insert into ABC (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value,
override, update_source)
select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual

EXECUTION PLAN by Oracle for the above query -

-----------------------------------------------------------------------------
| Id  | Operation                | Name|Rows| Cost (%CPU) | Time     |
-----------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |     | 4  |     8   (0) | 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | ABC |    |             |          |
|   2 |   UNION-ALL              |     |    |             |          |
|   3 |    FAST DUAL             |     | 1  |     2   (0) | 00:00:01 |
|   4 |    FAST DUAL             |     | 1  |     2   (0) | 00:00:01 |
|   5 |    FAST DUAL             |     | 1  |     2   (0) | 00:00:01 |
|   6 |    FAST DUAL             |     | 1  |     2   (0) | 00:00:01 |
-----------------------------------------------------------------------------

The run times of the program loading 1 million rows -

Batch Size = 500
Number of threads - Execution Time -
10                  4:19
20                  1:58
30                  1:17
40                  1:34
45                  2:06
50                  1:21
60                  1:24
70                  1:41
80                  1:43
90                  2:17
100                 2:06


Average Run Time = 1:57    (Roughly 2 minutes)

I need to optimize and reduce this time further. The problem I am facing arises when I load 10 million rows.

The average run time for 10 million rows came out to be 21 minutes.

(My target is to reduce this time to below 10 minutes.)

So I tried the following steps as well -

[1] Partitioned the table ABC on the basis of seq_no, using 30 partitions. Tested with 1 million rows - the performance was very poor, almost 4 times worse than with the unpartitioned table.

[2] Partitioned the table ABC on the basis of last_chg_date, using 30 partitions.

2.a) Tested with 1 million rows - the performance was almost equal to that of the unpartitioned table. The difference was very small, so this was not considered further.

2.b) Tested the same again with 10 million rows - the performance was almost equal to that of the unpartitioned table. No noticeable difference.

The following DDL commands were used to achieve the partitioning -

CREATE TABLESPACE ts1 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts2 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts3 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts4 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts5 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts6 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts7 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts8 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts9 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts10 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts11 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts12 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts13 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts14 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts15 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts16 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts17 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts18 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts19 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts20 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts21 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts22 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts23 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts24 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts25 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts26 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts27 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts28 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts29 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts30 DATAFILE AUTOEXTEND ON;

CREATE TABLE ABC(
   seq_no           NUMBER NOT NULL,
   ssm_id           VARCHAR2(9)  NOT NULL,
   invocation_id    VARCHAR2(100)  NOT NULL,
   calc_id          VARCHAR2(100) NULL,
   analytic_id      VARCHAR2(100) NOT NULL,
   ANALYTIC_VALUE   NUMBER NOT NULL,
   override         VARCHAR2(1)  DEFAULT  'N'   NOT NULL,
   update_source    VARCHAR2(255) NOT NULL,
   last_chg_user    CHAR(10)  DEFAULT  USER NOT NULL,
   last_chg_date    TIMESTAMP(3) DEFAULT  SYSTIMESTAMP NOT NULL
)
PARTITION BY HASH(last_chg_date)
PARTITIONS 30
STORE IN (ts1, ts2, ts3, ts4, ts5, ts6, ts7, ts8, ts9, ts10, ts11, ts12, ts13,
ts14, ts15, ts16, ts17, ts18, ts19, ts20, ts21, ts22, ts23, ts24, ts25, ts26,
ts27, ts28, ts29, ts30);

CODE that I am using in the thread function (written in C++), using OCI -

void OracleLoader::bulkInsertThread(std::vector<std::string> const & statements)
{

    try
    {
        INFO("ORACLE_LOADER_THREAD","Entered Thread = %1%", m_env);
        string useOraUsr = "some_user";
        string useOraPwd = "some_password";

        int user_name_len   = useOraUsr.length();
        int passwd_name_len = useOraPwd.length();

        text* username((text*)useOraUsr.c_str());
        text* password((text*)useOraPwd.c_str());


        if(! m_env)
        {
            CreateOraEnvAndConnect();
        }
        OCISvcCtx *m_svc = (OCISvcCtx *) 0;
        OCIStmt *m_stm = (OCIStmt *)0;

        checkerr(m_err,OCILogon2(m_env,
                                 m_err,
                                 &m_svc,
                                 (CONST OraText *)username,
                                 user_name_len,
                                 (CONST OraText *)password,
                                 passwd_name_len,
                                 (CONST OraText *)poolName,
                                 poolNameLen,
                                 OCI_CPOOL));

        OCIHandleAlloc(m_env, (dvoid **)&m_stm, OCI_HTYPE_STMT, (size_t)0, (dvoid **)0);

////////// Execution Queries in the format of - /////////////////
//        insert into pm_own.sec_analytics (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value, override, update_source)
//        select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
//        union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
//        union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
//        union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual
//////////////////////////////////////////////////////////////////

        size_t startOffset = 0;
        const int batch_size = PCSecAnalyticsContext::instance().getBatchCount();
        while (startOffset < statements.size())
        {
            int remaining = (startOffset + batch_size < statements.size() ) ? batch_size : (statements.size() - startOffset );
            // Break the query vector to meet the batch size
            std::vector<std::string> items(statements.begin() + startOffset,
                                           statements.begin() + startOffset + remaining);

            //! Preparing the Query
            std::string insert_query = "insert into ";
            insert_query += Context::instance().getUpdateTable();
            insert_query += " (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value, override, update_source)\n";

            std::vector<std::string>::const_iterator i3 = items.begin();
            insert_query += *i3 ;

            for( i3 = items.begin() + 1; i3 != items.end(); ++i3)
                insert_query += " union all " + *i3; // matches the "union all select ..." pattern shown above
            // Preparing the Statement and Then Executing it in the next step
            text *txtQuery((text *)(insert_query).c_str());
            checkerr(m_err, OCIStmtPrepare (m_stm, m_err, txtQuery, strlen((char *)txtQuery), OCI_NTV_SYNTAX, OCI_DEFAULT));
            checkerr(m_err, OCIStmtExecute (m_svc, m_stm, m_err, (ub4)1, (ub4)0, (OCISnapshot *)0, (OCISnapshot *)0, OCI_DEFAULT ));

            startOffset += batch_size;
        }

        // Here is the commit statement. I am committing at the end of each thread.
        checkerr(m_err, OCITransCommit(m_svc,m_err,(ub4)0));

        checkerr(m_err, OCIHandleFree((dvoid *) m_stm, OCI_HTYPE_STMT));
        checkerr(m_err, OCILogoff(m_svc, m_err));

        INFO("ORACLE_LOADER_THREAD","Thread Complete. Leaving Thread.");
    }

    catch(AnException &ex)
    {
        ERROR("ORACLE_LOADER_THREAD", "Oracle query failed with : %1%", std::string(ex.what()));
        throw AnException(string("Oracle query failed with : ") + ex.what());
    }
}
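In isolation, the batch-to-statement assembly performed inside the loop above can be sketched as follows (pure standard C++, no OCI; `buildInsertQuery` is a hypothetical helper, and the column list is the one from my table):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Combine one batch of "select ... from dual" row statements into a single
// multi-row INSERT using the UNION ALL pattern shown earlier.
std::string buildInsertQuery(const std::string& table,
                             const std::vector<std::string>& rows)
{
    std::string q = "insert into " + table +
        " (SSM_ID, invocation_id, calc_id, analytic_id,"
        " analytic_value, override, update_source)\n";
    for (std::size_t i = 0; i < rows.size(); ++i) {
        if (i > 0)
            q += " union all ";  // UNION ALL, not UNION: avoids a duplicate-eliminating sort
        q += rows[i];
    }
    return q;
}
```

The resulting string is what gets handed to OCIStmtPrepare/OCIStmtExecute for each batch.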

While the post was being answered, several methods to optimize my INSERT query were suggested to me. I chose and used QUERY I in my program, for reasons I discovered while testing the various suggested INSERT queries. QUERY I -

insert into ABC (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value,
override, update_source)
select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual

EXECUTION PLAN by Oracle for Query I (identical to the plan shown above, as it is the same query) -

-----------------------------------------------------------------------------
| Id  | Operation                | Name|Rows| Cost (%CPU) | Time     |
-----------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |     | 4  |     8   (0) | 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | ABC |    |             |          |
|   2 |   UNION-ALL              |     |    |             |          |
|   3 |    FAST DUAL             |     | 1  |     2   (0) | 00:00:01 |
|   4 |    FAST DUAL             |     | 1  |     2   (0) | 00:00:01 |
|   5 |    FAST DUAL             |     | 1  |     2   (0) | 00:00:01 |
|   6 |    FAST DUAL             |     | 1  |     2   (0) | 00:00:01 |
-----------------------------------------------------------------------------

1 Answer

I know others have mentioned this and you don't want to hear it but use SQL*Loader or external tables. My average load time for tables of approximately the same width is 12.57 seconds for just over 10m rows. These utilities have been explicitly designed to load data into the database quickly and are pretty good at it. This may incur some additional time penalties depending on the format of your input file, but there are quite a few options and I've rarely had to change files prior to loading.

If you're unwilling to do that, you don't need to upgrade your hardware yet; you need to remove every possible impediment to loading this data quickly. To enumerate them, remove:

  1. The index
  2. The trigger
  3. The sequence
  4. The partition

With all of these you're obliging the database to perform more work and because you're doing this transactionally, you're not using the database to its full potential.

Load the data into a separate table, say ABC_LOAD. After the data has been completely loaded perform a single INSERT statement into ABC.

insert into abc
select abc_seq.nextval, a.*
  from abc_load a

When you do this (and even if you don't) ensure that the sequence cache size is correct; to quote:

When an application accesses a sequence in the sequence cache, the sequence numbers are read quickly. However, if an application accesses a sequence that is not in the cache, then the sequence must be read from disk to the cache before the sequence numbers are used.

If your applications use many sequences concurrently, then your sequence cache might not be large enough to hold all the sequences. In this case, access to sequence numbers might often require disk reads. For fast access to all sequences, be sure your cache has enough entries to hold all the sequences used concurrently by your applications.

This means that if you have 10 threads concurrently writing 500 records each using this sequence then you need a cache size of 5,000. The ALTER SEQUENCE document states how to change this:

alter sequence abc_seq cache 5000

If you follow my suggestion I'd up the cache size to something around 10.5m.
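The cache-size arithmetic here can be sketched as a toy helper (the numbers are the ones from this answer; `requiredSequenceCache` is purely illustrative):

```cpp
#include <cstddef>

// One cached sequence value is consumed per inserted row, so size the cache
// for the number of rows all threads may be inserting concurrently.
std::size_t requiredSequenceCache(std::size_t threads, std::size_t rowsPerBatch)
{
    return threads * rowsPerBatch;  // e.g. 10 threads x 500 rows = 5,000
}
```

With the single INSERT from ABC_LOAD, the "batch" is the entire 10m-plus row set, hence the suggestion of a cache around 10.5m.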

Look into using the APPEND hint (see also Oracle Base); this instructs Oracle to use a direct-path insert, which appends data directly to the end of the table rather than searching for free space to put it in. You won't be able to use this if your table has indexes, but you could use it on ABC_LOAD:

insert /*+ append */ into ABC (SSM_ID, invocation_id , calc_id, ... )
select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual

If you use the APPEND hint, I'd add a TRUNCATE of ABC_LOAD after you've inserted into ABC; otherwise this table will grow indefinitely. This should be safe, as you will have finished using the table by then.

You don't mention what version or edition of Oracle you're using. There are a number of extra little tricks you can use:

  • Oracle 12c

    This version supports identity columns; you could get rid of the sequence completely.

    CREATE TABLE ABC(
       seq_no         NUMBER GENERATED AS IDENTITY (increment by 5000)
    
  • Oracle 11g r2

    If you keep the trigger; you can assign the sequence value directly.

    :new.seq_no := ABC_seq.nextval;
    
  • Oracle Enterprise Edition

    If you're using Oracle Enterprise you can speed up the INSERT from ABC_LOAD by using the PARALLEL hint:

    insert /*+ parallel */ into abc
    select abc_seq.nextval, a.*
      from abc_load a
    

    This can cause its own problems (too many parallel processes etc.), so test. It might help for the smaller batch inserts, but it's less likely, as you'll lose time computing which thread should process what.


tl;dr

Use the utilities that come with the database.

If you can't use them then get rid of everything that might slow the insert down and do it in bulk, 'cause that's what the database is good at.

