Preamble
One of the top 12.1.0.2 features is the In-Memory column store, which stores table data in memory in a (compressed) column format for faster access. The important point is that it does not change the default row store on disk (in tablespaces): it adds a memory column store for fast ad hoc queries with less impact on the current workload. This is a non-free Enterprise Edition option.
According to the official Oracle documentation, the In-Memory column store works great for the following operations:
- A query that scans a large number of rows and applies filters that use operators such as the following: =, <, >, and IN
- A query that selects a small number of columns from a table or materialized view with a large number of columns, such as a query that selects five columns from a table with 100 columns
- A query that joins a small table to a large table
- A query that aggregates data
On the other hand, again according to the official Oracle documentation, it does not improve performance for:
- Queries with complex predicates
- Queries that select a large number of columns
- Queries that return a large number of rows
- Queries with multiple large table joins
The example I had in mind was a small dimension table (a few cities) joined to a big fact table (city sales), so it should perfectly fit in the “A query that joins a small table to a large table” category…
My test system is Oracle 12.1.0.2.0 Enterprise Edition running on Red Hat Enterprise Linux Server release 6.3 (Santiago).
The Oracle Database In-Memory Base Level feature has been made free starting with the 19.8 Release Update. To make sure you stick with the free option, set the following (see the sketch right after this list):
- INMEMORY_FORCE = BASE_LEVEL
- INMEMORY_SIZE = 16GB
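A minimal sketch of setting these two parameters, assuming a 19.8 (or later) instance using an spfile; both are static, so the instance has to be restarted for them to take effect:

ALTER SYSTEM SET inmemory_force = BASE_LEVEL SCOPE = SPFILE;
ALTER SYSTEM SET inmemory_size = 16G SCOPE = SPFILE;
-- restart the instance, then check:
SHOW PARAMETER inmemory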
In-memory test case preparation
A small dimension table containing a few French cities:
DROP TABLE city;

CREATE TABLE city (
  code NUMBER NOT NULL,
  descr VARCHAR2(50),
  CONSTRAINT city_pk PRIMARY KEY (code)
    USING INDEX (CREATE UNIQUE INDEX city_pk ON city (code ASC) TABLESPACE move_data) ENABLE
)
NOLOGGING
TABLESPACE move_data;

INSERT INTO city VALUES (1,'Paris');
INSERT INTO city VALUES (2,'Lyon');
INSERT INTO city VALUES (3,'Marseille');
INSERT INTO city VALUES (4,'Bordeaux');
INSERT INTO city VALUES (5,'Lille');
INSERT INTO city VALUES (6,'Toulouse');
INSERT INTO city VALUES (7,'Nantes');
INSERT INTO city VALUES (8,'Montpellier');
INSERT INTO city VALUES (9,'Dijon');
INSERT INTO city VALUES (10,'Arbois');
INSERT INTO city VALUES (11,'Nancy');
INSERT INTO city VALUES (12,'Troyes');
INSERT INTO city VALUES (13,'Bourges');
INSERT INTO city VALUES (14,'Tours');
INSERT INTO city VALUES (15,'Rennes');
INSERT INTO city VALUES (16,'Grenoble');
INSERT INTO city VALUES (17,'Annecy');
INSERT INTO city VALUES (18,'Rouen');
INSERT INTO city VALUES (19,'Brest');
INSERT INTO city VALUES (20,'Poitiers');
INSERT INTO city VALUES (21,'Metz');
INSERT INTO city VALUES (22,'Reims');
INSERT INTO city VALUES (23,'Amiens');
INSERT INTO city VALUES (24,'Caen');
INSERT INTO city VALUES (25,'Le Havre');
INSERT INTO city VALUES (26,'Quimper');
INSERT INTO city VALUES (27,'Saint-Nazaire');
INSERT INTO city VALUES (28,'Carcassonne');
INSERT INTO city VALUES (29,'Nimes');
INSERT INTO city VALUES (30,'Cannes');
COMMIT;

EXEC dbms_stats.gather_table_stats('yjaquier', 'city');
And a big fact table:
DROP TABLE sales;

CREATE TABLE sales (
  id NUMBER,
  city__code NUMBER,
  qty NUMBER,
  val VARCHAR2(100)
)
NOLOGGING
TABLESPACE move_data;

DECLARE
  i NUMBER;
  nbrows NUMBER;
BEGIN
  i:=1;
  nbrows:=50000000;
  LOOP
    EXIT WHEN i>nbrows;
    INSERT INTO sales VALUES(i,ROUND(dbms_random.VALUE(1,30)),ROUND(dbms_random.VALUE(1,10000)),dbms_random.VALUE(1,10000));
    IF (MOD(i,100000)=0) THEN
      COMMIT;
    END IF;
    i:=i+1;
  END LOOP;
  COMMIT;
END;
/

EXEC dbms_stats.gather_table_stats('yjaquier', 'sales');
I have been able to generate a 3GB fact table with 50,000,000 rows:
SQL> SELECT SUM(bytes)/(1024*1024) AS "Size MB" FROM dba_extents WHERE owner='YJAQUIER' AND segment_name='SALES';

   Size MB
----------
      3200
In-Memory testing
For testing I set the following in my session:
SQL> SET lines 200 pages 200 TIMING ON
SQL> col sql_text FOR a90
SQL> col plan_table_output FOR a180
SQL> SET autotrace traceonly EXPLAIN STATISTICS
SQL> ALTER SESSION SET statistics_level=ALL;

Session altered.
I have used the SQL statement below, all in lowercase, aiming at getting sales in quantity and value per city. The /* Yannick */ comment helps to retrieve the cursor in the library cache:
select /* Yannick */ b.descr,sum(qty) as qty, sum(val) as val from sales a, city b where a.city__code=b.code group by b.descr order by b.descr;
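If you want to pull that cursor back from the library cache after running it, a query along these lines does the job (a sketch; the SQL_ID will obviously differ on your system):

SELECT sql_id, child_number, executions, ROUND(elapsed_time/1e6,2) AS elapsed_s, sql_text
FROM v$sql
WHERE sql_text LIKE '%Yannick%'
AND sql_text NOT LIKE '%v$sql%';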
SQL> SELECT /* Yannick */ b.descr,SUM(qty) AS qty, SUM(val) AS val FROM sales a, city b WHERE a.city__code=b.code GROUP BY b.descr ORDER BY b.descr;

30 rows selected.

Elapsed: 00:00:11.52

Execution Plan
----------------------------------------------------------
Plan hash value: 518732835

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |    30 |  1770 |  8354   (2)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR            |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (ORDER)       | :TQ10001 |    30 |  1770 |  8354   (2)| 00:00:01 |  Q1,01 | P->S | QC (ORDER) |
|   3 |    SORT GROUP BY           |          |    30 |  1770 |  8354   (2)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE             |          |    30 |  1770 |  8354   (2)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND RANGE         | :TQ10000 |    30 |  1770 |  8354   (2)| 00:00:01 |  Q1,00 | P->P | RANGE      |
|   6 |       HASH GROUP BY        |          |    30 |  1770 |  8354   (2)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN           |          |    50M|  2813M|  8238   (1)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         TABLE ACCESS FULL  | CITY     |    30 |   330 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|   9 |         PX BLOCK ITERATOR  |          |    50M|  2288M|  8225   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|  10 |          TABLE ACCESS FULL | SALES    |    50M|  2288M|  8225   (1)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"."CITY__CODE"="B"."CODE")

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - automatic DOP: Computed Degree of Parallelism is 15

Statistics
----------------------------------------------------------
         90  recursive calls
          0  db block gets
     410173  consistent gets
     406420  physical reads
          0  redo size
       2092  bytes sent via SQL*Net to client
        563  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         16  sorts (memory)
          0  sorts (disk)
         30  rows processed
Then I set the In-Memory pool (a static parameter):
inmemory_size=2G
parallel_degree_policy=auto
Remark:
The parallel_degree_policy=auto tuning is a good piece of advice from a DBA colleague.
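For reference, this is roughly how both parameters are set (a sketch assuming an spfile is in use; inmemory_size is static and needs an instance bounce, parallel_degree_policy is dynamic):

ALTER SYSTEM SET inmemory_size = 2G SCOPE = SPFILE;
ALTER SYSTEM SET parallel_degree_policy = AUTO SCOPE = BOTH;
-- then restart the instance for inmemory_size to be taken into account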
I started with the default compression method but my In-Memory pool was too small to store the sales table entirely:
SQL> ALTER TABLE sales inmemory priority critical;

Table altered.

SQL> col segment_name FOR a15
SQL> SELECT segment_name,inmemory_size,bytes,bytes_not_populated,populate_status,inmemory_priority,inmemory_distribute,inmemory_duplicate,inmemory_compression FROM V$IM_SEGMENTS WHERE owner='YJAQUIER' AND segment_name IN ('CITY','SALES') ORDER BY segment_name;

SEGMENT_NAME    INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED POPULATE_ INMEMORY INMEMORY_DISTRI INMEMORY_DUPL INMEMORY_COMPRESS
--------------- ------------- ---------- ------------------- --------- -------- --------------- ------------- -----------------
SALES              1590755328 3355443200          1757536256 COMPLETED CRITICAL AUTO            NO DUPLICATE  FOR QUERY LOW
As it was difficult at that time to bounce the database (and also because my database server was running low on free memory), I decided to choose a different compression method (the best one, to be clear), and this time my table fit in the In-Memory pool. According to the In-Memory blog, the Advanced Compression Enterprise Edition option is not needed for those compression algorithms:
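The listing below only shows the command issued on the city table, but the V$IM_SEGMENTS output makes it clear the sales table was re-altered with the same compression level; presumably something along these lines (a sketch, not the exact command from my history):

ALTER TABLE sales INMEMORY MEMCOMPRESS FOR CAPACITY HIGH PRIORITY CRITICAL;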
SQL> ALTER TABLE city inmemory priority critical memcompress FOR capacity high;

Table altered.

SQL> SELECT segment_name,inmemory_size,bytes,bytes_not_populated,populate_status,inmemory_priority,inmemory_distribute,inmemory_duplicate,inmemory_compression FROM V$IM_SEGMENTS WHERE owner='YJAQUIER' AND segment_name IN ('CITY','SALES') ORDER BY segment_name;

SEGMENT_NAME    INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED POPULATE_ INMEMORY INMEMORY_DISTRI INMEMORY_DUPL INMEMORY_COMPRESS
--------------- ------------- ---------- ------------------- --------- -------- --------------- ------------- -----------------
SALES              1337262080 3355443200                   0 COMPLETED CRITICAL AUTO            NO DUPLICATE  FOR CAPACITY HIGH
I have also tried to put the small city table in memory but, as you can see above, there is no in-memory segment for this table even though the property is correctly changed in DBA_TABLES:
SQL> col TABLE_NAME FOR a15
SQL> SELECT table_name,inmemory FROM dba_tables WHERE owner='YJAQUIER' AND table_name IN ('CITY','SALES');

TABLE_NAME      INMEMORY
--------------- --------
CITY            ENABLED
SALES           ENABLED
I finally found why in MOS note 1950831.1:
In the above example the in memory segment is not populated because the object is smaller than the “_inmemory_small_segment_threshold”, though the same would be true if the column/table had not yet completed loading into the in-memory segment
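If you want to check that hidden threshold on your own instance, a query like this against the X$ views, connected as SYS, displays it (a sketch using the usual x$ksppi/x$ksppcv pair; underscore parameters are unsupported, so look but do not touch):

SELECT i.ksppinm AS parameter, v.ksppstvl AS value, i.ksppdesc AS description
FROM x$ksppi i, x$ksppcv v
WHERE i.indx = v.indx
AND i.ksppinm = '_inmemory_small_segment_threshold';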
I started by executing the exact same query: as Oracle claims, we are not supposed to change anything. But I have been disappointed:
SQL> SELECT /* Yannick */ b.descr,SUM(qty) AS qty, SUM(val) AS val FROM sales a, city b WHERE a.city__code=b.code GROUP BY b.descr ORDER BY b.descr;

30 rows selected.

Elapsed: 00:00:38.05

Execution Plan
----------------------------------------------------------
Plan hash value: 6666487

------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |          |    30 |  1200 | 10798  (62)| 00:00:01 |
|   1 |  SORT GROUP BY                  |          |    30 |  1200 | 10798  (62)| 00:00:01 |
|*  2 |   HASH JOIN                     |          |    30 |  1200 | 10797  (62)| 00:00:01 |
|   3 |    VIEW                         | VW_GBC_5 |    30 |   870 | 10796  (62)| 00:00:01 |
|   4 |     HASH GROUP BY               |          |    30 |  1440 | 10796  (62)| 00:00:01 |
|   5 |      TABLE ACCESS INMEMORY FULL | SALES    |    50M|  2288M|  8939  (54)| 00:00:01 |
|   6 |    TABLE ACCESS INMEMORY FULL   | CITY     |    30 |   330 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ITEM_1"="B"."CODE")

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

Statistics
----------------------------------------------------------
        343  recursive calls
          0  db block gets
        532  consistent gets
          0  physical reads
          0  redo size
       2089  bytes sent via SQL*Net to client
        563  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         70  sorts (memory)
          0  sorts (disk)
         30  rows processed
More than 38 seconds, mainly because the degree of parallelism is not 15 as in the initial statement. The number of physical reads is obviously quite different, as here we use data that is fully stored in memory. I initially tried to set the degree of parallelism of the city and sales tables to 15 but (strangely) it did not change anything…
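For the record, what I mean by setting the degree of parallelism to 15 on both tables is simply (a sketch):

ALTER TABLE city PARALLEL 15;
ALTER TABLE sales PARALLEL 15;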
To try to trigger Auto DOP (Degree Of Parallelism) I played with the parallel_min_time_threshold initialization parameter at session level. This gave an intermediate result, mainly because Auto DOP settled on a degree of 5 and not 15: the execution time, at around 20 seconds, was roughly twice the original one:
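I have not kept the exact value I used, but lowering the threshold at session level looks like this (a sketch; the default is AUTO, and any value low enough to make the statement a candidate for Auto DOP will do):

ALTER SESSION SET parallel_min_time_threshold = 1;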
SELECT /* Yannick */ b.descr,SUM(qty) AS qty, SUM(val) AS val FROM sales a, city b WHERE a.city__code=b.code GROUP BY b.descr ORDER BY b.descr;

30 rows selected.

Elapsed: 00:00:19.95

Execution Plan
----------------------------------------------------------
Plan hash value: 518732835

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |    30 |  1770 |  2391  (62)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                     |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (ORDER)                | :TQ10001 |    30 |  1770 |  2391  (62)| 00:00:01 |  Q1,01 | P->S | QC (ORDER) |
|   3 |    SORT GROUP BY                    |          |    30 |  1770 |  2391  (62)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                      |          |    30 |  1770 |  2391  (62)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND RANGE                  | :TQ10000 |    30 |  1770 |  2391  (62)| 00:00:01 |  Q1,00 | P->P | RANGE      |
|   6 |       HASH GROUP BY                 |          |    30 |  1770 |  2391  (62)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN                    |          |    50M|  2813M|  2021  (55)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         TABLE ACCESS INMEMORY FULL  | CITY     |    30 |   330 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|   9 |         PX BLOCK ITERATOR           |          |    50M|  2288M|  1986  (54)| 00:00:01 |  Q1,00 | PCWC |            |
|  10 |          TABLE ACCESS INMEMORY FULL | SALES    |    50M|  2288M|  1986  (54)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"."CITY__CODE"="B"."CODE")

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - automatic DOP: Computed Degree of Parallelism is 5
   - parallel scans affinitized for buffer cache
   - parallel scans affinitized for inmemory

Statistics
----------------------------------------------------------
        308  recursive calls
         24  db block gets
       5219  consistent gets
          0  physical reads
          0  redo size
       2091  bytes sent via SQL*Net to client
        563  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
         30  rows processed
So I re-tried with a hint to force the DOP. Please note this is a bit in contradiction with what Oracle claims, as we are not supposed to change anything when implementing In-Memory:
SELECT /* Yannick */ /*+ parallel (15) */ b.descr,SUM(qty) AS qty, SUM(val) AS val FROM sales a, city b WHERE a.city__code=b.code GROUP BY b.descr ORDER BY b.descr;

30 rows selected.

Elapsed: 00:00:06.66

Execution Plan
----------------------------------------------------------
Plan hash value: 518732835

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |    30 |  1770 |   791  (62)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                     |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (ORDER)                | :TQ10001 |    30 |  1770 |   791  (62)| 00:00:01 |  Q1,01 | P->S | QC (ORDER) |
|   3 |    SORT GROUP BY                    |          |    30 |  1770 |   791  (62)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                      |          |    30 |  1770 |   791  (62)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND RANGE                  | :TQ10000 |    30 |  1770 |   791  (62)| 00:00:01 |  Q1,00 | P->P | RANGE      |
|   6 |       HASH GROUP BY                 |          |    30 |  1770 |   791  (62)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN                    |          |    50M|  2813M|   675  (55)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         TABLE ACCESS INMEMORY FULL  | CITY     |    30 |   330 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|   9 |         PX BLOCK ITERATOR           |          |    50M|  2288M|   662  (54)| 00:00:01 |  Q1,00 | PCWC |            |
|  10 |          TABLE ACCESS INMEMORY FULL | SALES    |    50M|  2288M|   662  (54)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"."CITY__CODE"="B"."CODE")

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 15 because of hint

Statistics
----------------------------------------------------------
         90  recursive calls
          0  db block gets
       1784  consistent gets
          0  physical reads
          0  redo size
       2092  bytes sent via SQL*Net to client
        563  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         16  sorts (memory)
          0  sorts (disk)
         30  rows processed
OK, it is better: the response time is roughly divided by two, and I even saw execution times of four and a half seconds…
Conclusion
As we have seen, the implementation in itself is really straightforward. Then, in real life, my simple example did not benefit automatically from In-Memory, because the Auto DOP feature did not fire exactly as it did without the option. But in the end I have been able to drastically improve the overall response time.
You might ask: what if I want the sales for only one city and create an index on the sales.city__code column? It was stated at Oracle OpenWorld 2014 that an index access should be about as fast as an In-Memory access. If you create this index (index and histogram on the sales.city__code column) you will notice that the CBO does not use it, and forcing its usage does not speed up the query, not to say it slows it down:
SQL> CREATE INDEX sales_idx_city__code ON sales (city__code) TABLESPACE move_data;

Index created.

SQL> EXEC dbms_stats.gather_table_stats('yjaquier', 'sales');

PL/SQL procedure successfully completed.

SQL> col column_name FOR a15
SQL> SELECT column_name,histogram FROM dba_tab_col_statistics WHERE owner='YJAQUIER' AND table_name='SALES' ORDER BY column_name;

COLUMN_NAME     HISTOGRAM
--------------- ---------------
CITY__CODE      FREQUENCY
ID              NONE
QTY             NONE
VAL             NONE
Then the default CBO choice is to not use the index (the In-Memory options have been removed):
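For the record, taking the two tables back out of the column store before this test is a one-liner per table (a sketch of what I mean by “removed”):

ALTER TABLE sales NO INMEMORY;
ALTER TABLE city NO INMEMORY;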
SELECT /* Yannick */ b.descr,SUM(qty) AS qty, SUM(val) AS val FROM sales a, city b WHERE a.city__code=b.code AND b.descr='Arbois' GROUP BY b.descr ORDER BY b.descr;

Elapsed: 00:00:09.86

Execution Plan
----------------------------------------------------------
Plan hash value: 3852374841

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |     1 |    59 | 12355   (1)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR              |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)        | :TQ10001 |     1 |    59 | 12355   (1)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY             |          |     1 |    59 | 12355   (1)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE               |          |     1 |    59 | 12355   (1)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH            | :TQ10000 |     1 |    59 | 12355   (1)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY          |          |     1 |    59 | 12355   (1)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN             |          |  1435K|    80M| 12355   (1)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         JOIN FILTER CREATE   | :BF0000  |     1 |    11 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|*  9 |          TABLE ACCESS FULL   | CITY     |     1 |    11 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  10 |         JOIN FILTER USE      | :BF0000  |    50M|  2288M| 12337   (1)| 00:00:01 |  Q1,00 | PCWP |            |
|  11 |          PX BLOCK ITERATOR   |          |    50M|  2288M| 12337   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|* 12 |           TABLE ACCESS FULL  | SALES    |    50M|  2288M| 12337   (1)| 00:00:01 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"."CITY__CODE"="B"."CODE")
   9 - filter("B"."DESCR"='Arbois')
  12 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"."CITY__CODE"))

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - automatic DOP: Computed Degree of Parallelism is 10

Statistics
----------------------------------------------------------
         60  recursive calls
          0  db block gets
     409033  consistent gets
     406420  physical reads
          0  redo size
        702  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
And here is the explain plan with a NESTED LOOPS (NL) join, fixing the parallelism to 10 for an apples-to-apples comparison:
SELECT /* Yannick */ /*+ index (a sales_idx_city__code) parallel (10) */ b.descr,SUM(qty) AS qty, SUM(val) AS val FROM sales a, city b WHERE a.city__code=b.code AND b.descr='Arbois' GROUP BY b.descr ORDER BY b.descr;

Elapsed: 00:00:09.41

Execution Plan
----------------------------------------------------------
Plan hash value: 2685725527

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                      |     1 |    59 | 45750   (1)| 00:00:02 |        |      |            |
|   1 |  PX COORDINATOR                      |                      |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)                | :TQ10001             |     1 |    59 | 45750   (1)| 00:00:02 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                     |                      |     1 |    59 | 45750   (1)| 00:00:02 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                       |                      |     1 |    59 | 45750   (1)| 00:00:02 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                    | :TQ10000             |     1 |    59 | 45750   (1)| 00:00:02 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY                  |                      |     1 |    59 | 45750   (1)| 00:00:02 |  Q1,00 | PCWP |            |
|   7 |        NESTED LOOPS                  |                      |  1435K|    80M| 45750   (1)| 00:00:02 |  Q1,00 | PCWP |            |
|   8 |         NESTED LOOPS                 |                      |  1704K|    80M| 45750   (1)| 00:00:02 |  Q1,00 | PCWP |            |
|   9 |          PX BLOCK ITERATOR           |                      |       |       |            |          |  Q1,00 | PCWC |            |
|* 10 |           TABLE ACCESS FULL          | CITY                 |     1 |    11 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|* 11 |          INDEX RANGE SCAN            | SALES_IDX_CITY__CODE |  1704K|       |   371   (1)| 00:00:01 |  Q1,00 | PCWP |            |
|  12 |         TABLE ACCESS BY INDEX ROWID  | SALES                |  1435K|    65M| 45748   (1)| 00:00:02 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  10 - filter("B"."DESCR"='Arbois')
  11 - access("A"."CITY__CODE"="B"."CODE")

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 10 because of hint

Statistics
----------------------------------------------------------
         60  recursive calls
          0  db block gets
     404388  consistent gets
     382963  physical reads
          0  redo size
        702  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
The execution time is almost the same and the number of consistent gets is similar, yet NL access is normally good when the number of rows retrieved through the index is small, while here the detailed explain plan reports 3,446,000 rows retrieved by index rowid…
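The actual (and not only estimated) row counts come from the runtime plan: since statistics_level=ALL is set in the session, DBMS_XPLAN can display the A-Rows column. A sketch, re-using the SQL_ID found through the /* Yannick */ comment as shown earlier (replace the placeholders with your own values):

SELECT * FROM TABLE(dbms_xplan.display_cursor('<sql_id>', <child_number>, 'ALLSTATS LAST'));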
And this is really where the Oracle In-Memory option has a clear added value, because:
- You cannot create an index on every column of your fact table. Think of a BI environment with a fact table having tens of columns; creating so many indexes would also consume disk space and add maintenance overhead.
- Even if an index is there, Oracle might not use it and might favor a HASH JOIN (HJ) with a Full Table Scan (FTS) instead of a NL.
References
- Execution Plan Shows In-Memory Option Used When It Is Not Being Used (Doc ID 1950831.1)
- How to Analyze Why PX Queries Are Slow with AutoDOP Feature (Doc ID 1380736.1)
- Oracle Database In-Memory
- Automating Parallelism