
Oracle SQL Profile Tuning Command

  • October 23, 2010

    An Oracle SQL Profile helps the optimizer generate a better execution plan than normal optimization would produce. Since Oracle 10g, SQL execution metrics are tracked in the dba_hist tables (most notably dba_hist_sqlstat and dba_hist_sql_plan), so it is now possible to track SQL behavior over time and verify that every SQL statement is using an optimal execution plan.
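    For example, a quick check of whether a statement's plan has changed across AWR snapshots can be run against dba_hist_sqlstat (the sql_id below is a placeholder, not one from this article):

```sql
-- List the plans recorded for one SQL statement across AWR snapshots.
SELECT s.begin_interval_time,
       q.plan_hash_value,
       q.executions_delta,
       q.elapsed_time_delta / GREATEST(q.executions_delta, 1) AS avg_elapsed_us
FROM   dba_hist_sqlstat  q,
       dba_hist_snapshot s
WHERE  q.snap_id         = s.snap_id
AND    q.dbid            = s.dbid
AND    q.instance_number = s.instance_number
AND    q.sql_id          = 'f9u2k3vw4x5yz'   -- placeholder sql_id
ORDER  BY s.begin_interval_time;
```

    A change in plan_hash_value between snapshots signals a plan change worth investigating.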

    When SQL statements are executed by the Oracle database, the query optimizer is used to generate the execution plans of the SQL statements. The query optimizer operates in two modes: a normal mode and a tuning mode.

    The query optimizer can sometimes produce inaccurate estimates for attributes of a statement due to a lack of information, leading to poor execution plans.

    Traditionally, users have corrected this problem by manually adding hints to the application code to guide the optimizer into making correct decisions.

    For packaged applications, changing application code is not an option and the only alternative available is to log a bug with the application vendor and wait for a fix.

    Automatic SQL tuning deals with this problem with its SQL profiling capability. The Automatic Tuning Optimizer creates a profile of the SQL statement called a SQL Profile, consisting of auxiliary statistics specific to that statement.

    The query optimizer in normal mode makes estimates about cardinality, selectivity, and cost that can sometimes be off by a significant amount, resulting in poor execution plans.

    A SQL Profile addresses this problem by collecting additional information, using sampling and partial-execution techniques to verify and, if necessary, adjust these estimates.

    During SQL Profiling, the Automatic Tuning Optimizer also uses execution history information of the SQL statement to appropriately set optimizer parameter settings, such as changing the OPTIMIZER_MODE initialization parameter setting from ALL_ROWS to FIRST_ROWS for that SQL statement.
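    The tuning mode described above can also be invoked manually through the DBMS_SQLTUNE package. A minimal sketch follows; the sql_id and task name are placeholders, and ACCEPT_SQL_PROFILE should only be run after reviewing the report:

```sql
-- Create and execute a tuning task for one statement in the shared pool.
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id    => 'f9u2k3vw4x5yz',  -- placeholder sql_id
              task_name => 'TUNE_S3407');    -- placeholder task name
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
-- Review the findings, including any SQL Profile recommendation:
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TUNE_S3407') FROM dual;
-- Only after reviewing the report, accept the recommended SQL Profile:
EXEC DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'TUNE_S3407', replace => TRUE);
```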

    REPRODUCE AN EXECUTION PLAN FROM ONE SYSTEM INTO ANOTHER

    You need two similar systems: SOURCE and TARGET. SQLT must be installed in both.
    SOURCE and TARGET must have the same schema objects (for example, PROD, TEST, DEV, QA).
    Required files are generated in SOURCE when SQLT is executed.

    Steps:
    1. Import the CBO Stats generated by SQLT in SOURCE into the staging table in TARGET, connecting as SQLTXPLAIN.

    UNIX> imp SQLTXPLAIN/<pwd> tables='sqlt$_stattab' file=sqlt_s3407.dmp ignore=y

    2. Restore the CBO Stats from staging table into the data dictionary, connecting as SQLTXPLAIN, SYSTEM, SYSDBA or the application user.

    SQL> START sqlt/utl/sqltimp.sql s3407_prd1_db NULL

    3. Review and set the optimizer environment, connected as the application user.

    SQL> START sqlt_s3407_prd1_db_setenv.sql

    4. Use SQLT XECUTE or XPLAIN methods, connected as the application user.

    Notes:
    1. SOURCE and TARGET should be similar and contain the same schema objects.
    2. The RDBMS release of TARGET should be equal to or greater than that of SOURCE.
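    Once the stats are restored, a quick sanity check confirms the data dictionary was updated (the schema name is a placeholder):

```sql
-- LAST_ANALYZED should reflect the gathering time in SOURCE,
-- not the time of the import into TARGET.
SELECT owner, table_name, num_rows, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = '<application_schema>'
ORDER  BY table_name;
```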

    CREATE A SQLT TEST CASE

    You need two similar systems: SOURCE and TARGET. SQLT must be installed in both.
    SOURCE and TARGET could even be on same server and database, under different schemas.
    Required files are generated in SOURCE when SQLT is executed.

    Steps:
    1. Review the “metadata” script generated in SOURCE and execute it in TARGET, connected as SYSTEM or SYSDBA.

    SQL> START sqlt_s3407_prd1_db_metadata.sql

    In most cases you want to consolidate all schema objects into one test case user (for example, TC3407).

    2. Import CBO Stats generated in SOURCE into staging table in TARGET, connecting as SQLTXPLAIN.

    UNIX> imp SQLTXPLAIN/<pwd> tables='sqlt$_stattab' file=sqlt_s3407.dmp ignore=y

    3. Restore the CBO Stats from staging table into the data dictionary, connecting as SQLTXPLAIN, SYSTEM, SYSDBA or the application user (for example TC3407):

    SQL> START sqlt/utl/sqltimp.sql s3407_prd1_db TC3407

    If you decided in metadata step 1 to create the schema objects under their original owner(s), use the syntax below instead. Notice the “NULL” as the 2nd parameter.

    SQL> START sqlt/utl/sqltimp.sql s3407_prd1_db NULL

    4. Review and set the optimizer environment, connected as the application user.

    SQL> START sqlt_s3407_prd1_db_setenv.sql

    5. Use SQLT XECUTE or XPLAIN methods to reproduce desired plan, connected as the application user.

    Notes:
    1. If SOURCE and TARGET are on same system (different schemas), then step 2 is redundant.
    2. SOURCE and TARGET should be similar in all senses.
    3. The RDBMS release of TARGET should be equal to or greater than that of SOURCE.

    CREATE A STAND-ALONE TEST CASE BASED ON A SQLT TEST CASE

    You first need to create a SQLT Test Case following the instructions above.
    The instructions below apply when schema objects were consolidated into one TC user (TC3407).
    If the method used in the SQLT Test Case was XPLAIN, you will need to modify the script
    containing the one SQL so it can be executed stand-alone in step 4 (you may need to replace bind variables).

    Steps:
    1. Export CBO Stats captured automatically during step 5 of SQLT Test Case, connecting as TC3407.

    UNIX> exp TC3407/TC3407 tables=CBO_STAT_TAB_4TC file=STATTAB.dmp \
    statistics=none log=STATTAB.log

    2. Write stand-alone Test Case instructions into a readme.txt file. Suggested content follows:

    -- create test case user TC3407 and schema objects:
    UNIX> sqlplus / AS sysdba
    SQL> START sqlt_s3407_prd1_db_metadata.sql;
    -- import and restore cbo stats:
    UNIX> imp TC3407/TC3407 TABLES=CBO_STAT_TAB_4TC file=STATTAB.dmp IGNORE=y
    UNIX> sqlplus TC3407/TC3407
    SQL> EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('TC3407', 'CBO_STAT_TAB_4TC');
    -- set cbo environment and generate 10053
    UNIX> sqlplus TC3407/TC3407
    SQL> START sqlt_s3407_prd1_db_setenv.sql;
    SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER = "TC3407_10053";
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SQL> DEF unique_id = TC3407
    SQL> START sqlt_s3407_prd1_db_tc_script.sql;

    3. Create a TC3407 directory and place into it the following files:

       o CBO Stats dump file FROM step 1:         STATTAB.dmp
       o Instructions file FROM step 2:           readme.txt
       o Metadata script used IN SQLT TC:         sqlt_s3407_prd1_db_metadata.sql
       o CBO SET Environment script FROM SQLT TC: sqlt_s3407_prd1_db_setenv.sql
       o Script WITH one SQL FROM SQLT TC:        sqlt_s3407_prd1_db_tc_script.sql

    sqlt_s3407_prd1_db_tc_script.sql must be executable stand-alone (bind variables declared and assigned, or replaced by literals).

    4. Fully test your stand-alone TC following your own readme.txt created in step 2.

    5. ZIP stand-alone TC directory as TC3407.zip.

    Notes:
    1. Use the readme.txt file that you followed when you created the SQLT Test Case.

    GATHER A CBO STATISTICS BASELINE

    If you suspect poor schema-object CBO statistics, use the commands below
    to generate a new baseline. Then test SQLT (any method) on your SQL.
    Connect as SYSTEM, SYSDBA, or the application user.

    Baseline using FND_STATS:

    SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', 
    tabname => '"GL_JE_HEADERS"', percent => 100, cascade => TRUE);
    SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', 
    tabname => '"GL_JE_LINES"', percent => 10, cascade => TRUE);
    SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', 
    tabname => '"GL_JE_SOURCES_TL"', percent => 100, cascade => TRUE);
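    FND_STATS ships with Oracle E-Business Suite; on a database without it, an equivalent baseline can be gathered with the standard DBMS_STATS package (shown here as the assumed equivalent of the first call above):

```sql
-- Standard-package equivalent of the FND_STATS call for GL_JE_HEADERS.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'GL',
                                tabname          => 'GL_JE_HEADERS',
                                estimate_percent => 100,
                                cascade          => TRUE);
END;
/
```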

    TRANSFER A STORED OUTLINE

    If your SQL uses a Stored Outline, you can export the outline from SOURCE
    and import it into TARGET.

    Steps:
    1. Export Stored Outline from SOURCE connecting as OUTLN, SYSTEM or SYSDBA.

    UNIX> exp system/<pwd> tables=outln.ol% file=sqlt_s3407_outln.dmp \
    statistics=none query=\"WHERE ol_name = \'<stored_outline_name>\'\" log=sqlt_exp_outln.log

    2. Import Stored Outline into TARGET connecting as OUTLN, SYSTEM or SYSDBA.

    UNIX> imp system/<pwd> file=sqlt_s3407_outln.dmp fromuser=outln touser=outln IGNORE=y

    Notes:
    1. If TARGET already contains a Stored Outline for your SQL, find its name and drop it before import step. Connect as OUTLN, SYSTEM or SYSDBA to drop an outline.

    SQL> DROP OUTLINE <stored_outline_name>;

    TRANSFER A SQL PROFILE

    To transfer a SQL Profile you need to pack and export it from SOURCE, then import and unpack it into TARGET. Connect as SQLTXPLAIN, SYSTEM or SYSDBA.

    Steps
    1. Create staging table in SOURCE

    SQL> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (TABLE_NAME => 'STGTAB_SQLPROF', 
    schema_name => USER);

    2. Pack SQL Profile into staging table in SOURCE

    SQL> EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (profile_name => '<sql_profile_name>', 
    profile_category => 'DEFAULT', staging_table_name => 'STGTAB_SQLPROF', 
    staging_schema_owner => USER);

    3. Export staging table from SOURCE

    UNIX> exp <usr>/<pwd> tables=stgtab_sqlprof file=sqlprof.dmp \
    statistics=none log=sqlprof_exp.log

    4. Import staging table into TARGET

    UNIX> imp <usr>/<pwd> tables=stgtab_sqlprof file=sqlprof.dmp ignore=y log=sqlprof_imp.log

    5. Unpack SQL Profile from staging table in TARGET

    SQL> EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF (profile_name => '<sql_profile_name>',
    profile_category => 'DEFAULT', REPLACE => TRUE, staging_table_name => 'STGTAB_SQLPROF', 
    staging_schema_owner => USER);

    Notes:
    1. Connect with same user in both SOURCE and TARGET.
    2. User must have CREATE ANY SQL PROFILE privilege and the SELECT privilege on staging table.
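    After unpacking, you can confirm the profile arrived in TARGET:

```sql
-- The transferred profile should be listed with STATUS = 'ENABLED'.
SELECT name, category, status, created
FROM   dba_sql_profiles
WHERE  name = '<sql_profile_name>';
```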

    CREATE A STORED OUTLINE

    If you want to create a Stored Outline for your SQL, execute these commands connected as the application user:

    SQL> ALTER SESSION SET create_stored_outlines = TRUE;
    SQL> EXEC DBMS_OUTLN.CREATE_OUTLINE (hash_value => 644832611, child_number => 0);
    SQL> ALTER SESSION SET create_stored_outlines = FALSE;
    SQL> SELECT * FROM DBA_OUTLINES WHERE signature = '914E567776565E496F27F2C5B3C0F9D2';

    Notes:
    1. User must have CREATE ANY OUTLINE grant or DBA role.
    2. Set your optimizer environment first (you may want to use the setenv script).
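    For the optimizer to actually use the new outline, outline usage must be enabled for the session; you can then confirm the cursor picked it up (the hash_value repeats the example above):

```sql
-- Enable outline usage for the session, re-run your SQL,
-- then check that the new cursor was built with the outline.
ALTER SESSION SET use_stored_outlines = TRUE;
-- ... execute your SQL here ...
SELECT sql_id, outline_category
FROM   v$sql
WHERE  hash_value = 644832611;
```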

    EXTRACT A PLAN FROM MEMORY OR AWR AND PIN IT TO A SQL IN SAME OR DIFFERENT SYSTEM

    SQLT XTRACT and XECUTE record all known plans for one SQL into the SQLT repository.
    Any of these plans can be extracted and then associated with that SQL in the same SOURCE
    system or a similar TARGET system by using a manual custom SQL Profile.
    Connect as SQLTXPLAIN, SYSDBA, or the application user.

    Steps
    1. Execute the sqltprofile utility in SOURCE, connecting as SQLTXPLAIN, SYSDBA, or the application user.

    SQL> START sqlt/utl/sqltprofile.sql s3407_prd1_db <plan_hash_value>

    Both parameters can be entered inline; otherwise the script will list the valid values.

    2. Execute generated script in SOURCE or TARGET system, connecting as SQLTXPLAIN, SYSTEM or SYSDBA

    SQL> START sqlt_s3407_prd1_dbp<plan_hash_value>_sqlprof.sql;

    Notes:
    1. Generated script calls DBMS_SQLTUNE.IMPORT_SQL_PROFILE, which generates a manual custom SQL Profile based on the hints from plan_table.other_xml.
    2. If SQLT is not installed, use sqlt/utl/coe_xfr_sql_profile.sql instead.
