Yaping's Weblog

September 19, 2008

MATERIALIZED VIEW

Filed under: Oracle — Yaping @ 6:50 am
Tags:

Materialized view basic information

test@TEST>create table test.t1(id number,a char(1),b varchar2(1));
Table created.

test@TEST>create MATERIALIZED VIEW log on test.t1;
create MATERIALIZED VIEW log on test.t1
*
ERROR at line 1:
ORA-12014: table 'T1' does not contain a primary key constraint

test@TEST>alter table test.t1 add constraint t1_pk primary key(id);
Table altered.

test@TEST>create MATERIALIZED VIEW log on test.t1;
Materialized view log created.

test@TEST>desc MLOG$_T1
 Name                                                  Null?    Type
 —————————————————– ——– ————————————
 ID                                                             NUMBER
 SNAPTIME$$                                                     DATE
 DMLTYPE$$                                                      VARCHAR2(1)
 OLD_NEW$$                                                      VARCHAR2(1)
 CHANGE_VECTOR$$                                                RAW(255)

test@TEST>desc RUPD$_T1
 Name                                                  Null?    Type
 —————————————————– ——– ————————————
 ID                                                             NUMBER
 DMLTYPE$$                                                      VARCHAR2(1)
 SNAPID                                                         NUMBER(38)
 CHANGE_VECTOR$$                                                RAW(255)

test@TEST>insert into test.t1 values(1,'A','B');
1 row created.

test@TEST>delete from test.t1;
1 row deleted.

test@TEST>select ID,DMLTYPE$$,OLD_NEW$$ from MLOG$_T1;
        ID D O
———- – –
         1 I N
         1 D O

test@TEST>select * from RUPD$_T1;
no rows selected

 
The default clause for CREATE MATERIALIZED VIEW LOG is WITH PRIMARY KEY, so if no clause is specified, the base table must have a primary key.
The structure of the materialized view log table (MLOG$_) depends on the clause specified.
The RUPD$_ table exists only when the materialized view log uses the primary key clause; it is a special log for updatable snapshots, which makes sense in the context of two-way replication.
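
The options recorded for a log can be checked in the data dictionary; a minimal sketch, assuming the USER_MVIEW_LOGS view:

test@TEST>select log_table, rowids, primary_key, "SEQUENCE", including_new_values
  2  from user_mview_logs where master='T1';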

 
test@TEST>drop  MATERIALIZED VIEW log on test.t1;
Materialized view log dropped.

test@TEST>create MATERIALIZED VIEW log on test.t1 with rowid, SEQUENCE;
Materialized view log created.

test@TEST>desc MLOG$_T1
 Name                                                  Null?    Type
 —————————————————– ——– ————————————
 M_ROW$$                                                        VARCHAR2(255)
 SEQUENCE$$                                                     NUMBER
 SNAPTIME$$                                                     DATE
 DMLTYPE$$                                                      VARCHAR2(1)
 OLD_NEW$$                                                      VARCHAR2(1)
 CHANGE_VECTOR$$                                                RAW(255)

test@TEST>desc RUPD$_T1
ERROR:
ORA-04043: object RUPD$_T1 does not exist

test@TEST>create table test.t2(id number,a char(1), b varchar2(1));
Table created.

test@TEST>create MATERIALIZED VIEW log on test.t2 with rowid, SEQUENCE;
Materialized view log created.

test@TEST>insert into test.t1 values(2,'A','B');
1 row created.

test@TEST>insert into test.t2 values(1,'A','B');
1 row created.

test@TEST>update test.t1 set a='C' where id=2;
1 row updated.

test@TEST>select M_ROW$$,SEQUENCE$$,DMLTYPE$$,OLD_NEW$$ from MLOG$_T1;
M_ROW$$              SEQUENCE$$ D O
——————– ———- – –
AAAOCJAAFAAAAAyAAB           23 I N
AAAOCJAAFAAAAAyAAB           25 U U
test@TEST>select M_ROW$$,SEQUENCE$$,DMLTYPE$$,OLD_NEW$$ from MLOG$_T2;
M_ROW$$              SEQUENCE$$ D O
——————– ———- – –
AAAOCOAAFAAAAAiAAA           24 I N
The SEQUENCE$$ column in MLOG$_ records the order of changes to the base tables; all materialized view logs draw their values from the same sequence. It helps Oracle apply the changes to materialized views in the correct order when a mix of DML commands, e.g. insert, update and delete, is performed on multiple base tables in a single transaction.
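
For example, a sketch that reads the interleaved changes of both logs back in order by sorting on SEQUENCE$$:

test@TEST>select 'T1' tab, sequence$$, dmltype$$, old_new$$ from mlog$_t1
  2  union all
  3  select 'T2', sequence$$, dmltype$$, old_new$$ from mlog$_t2
  4  order by sequence$$;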

test@TEST>drop  MATERIALIZED VIEW log on test.t1;
Materialized view log dropped.

test@TEST>create MATERIALIZED VIEW log on test.t1 with PRIMARY KEY, rowid, SEQUENCE(a,b) including new values;
Materialized view log created.

test@TEST>desc MLOG$_T1
 Name                                                  Null?    Type
 —————————————————– ——– ————————————
 ID                                                             NUMBER
 A                                                              CHAR(1)
 B                                                              VARCHAR2(1)
 M_ROW$$                                                        VARCHAR2(255)
 SEQUENCE$$                                                     NUMBER
 SNAPTIME$$                                                     DATE
 DMLTYPE$$                                                      VARCHAR2(1)
 OLD_NEW$$                                                      VARCHAR2(1)
 CHANGE_VECTOR$$                                                RAW(255)

test@TEST>desc RUPD$_T1
 Name                                                  Null?    Type
 —————————————————– ——– ————————————
 ID                                                             NUMBER
 DMLTYPE$$                                                      VARCHAR2(1)
 SNAPID                                                         NUMBER(38)
 CHANGE_VECTOR$$                                                RAW(255)

test@TEST>CREATE MATERIALIZED VIEW test.mv_t1 build immediate refresh force on demand
  2  as select * from test.t1;
Materialized view created.

test@TEST>select object_id,object_name,object_type from user_objects where OBJECT_TYPE not in ('PROCEDURE','FUNCTION','DATABASE LINK');
 OBJECT_ID OBJECT_NAME                    OBJECT_TYPE
———- —————————— ——————-
     57482 T1_PK                          INDEX
     57505 RUPD$_T1                       TABLE
     57506 MV_T1                          TABLE
     57507 T1_PK1                         INDEX
     57508 MV_T1                          MATERIALIZED VIEW
     57504 MLOG$_T1                       TABLE
     57481 T1                             TABLE
7 rows selected.
After a materialized view is created, three objects appear automatically: the materialized view itself, a container table with the same name as the materialized view, and an index on that table.
Materialized view refresh

test@TEST>insert into test.t1 values(1,'A','B');
1 row created.

test@TEST>update test.t1 set a='a',b='b' where id=2;
1 row updated.

test@TEST>commit;
Commit complete.

test@TEST>insert into test.t1 values(3,'A','B');
1 row created.

test@TEST>select ID,A,B,M_ROW$$,SEQUENCE$$,DMLTYPE$$,OLD_NEW$$ from MLOG$_T1;
        ID A B M_ROW$$              SEQUENCE$$ D O
———- – – ——————– ———- – –
         1 A B AAAOCJAAFAAAAAyAAC           26 I N
         2 C B AAAOCJAAFAAAAAyAAB           27 U O
         2 a b AAAOCJAAFAAAAAyAAB           28 U N
         3 A B AAAOCJAAFAAAAAyAAA           29 I N

test@TEST>alter session set events '10046 trace name context forever,level 12';
Session altered.

test@TEST>exec dbms_mview.refresh('MV_T1','F');
PL/SQL procedure successfully completed.

test@TEST>select * from mv_t1;
        ID A B
———- – –
         2 a b
         1 A B
         3 A B

sys@TEST>select * from test.t1;
        ID A B
———- – –
         3 A B
         2 a b
         1 A B

PARSING IN CURSOR #5 len=45 dep=0 uid=66 oct=47 lid=66 tim=1177042417020396 hv=3981352777 ad='288fbcc8'
BEGIN dbms_mview.refresh('MV_T1','F'); END;
END OF STMT
PARSE #5:c=999,e=1439,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1177042417020386
BINDS #5:
=====================
PARSING IN CURSOR #10 len=11 dep=1 uid=0 oct=44 lid=0 tim=1177042417021593 hv=1180858989 ad='0'
COMMIT WORK
END OF STMT
PARSE #10:c=0,e=126,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=1177042417021584
XCTEND rlbk=0, rd_only=0
EXEC #10:c=3000,e=3190,p=0,cr=0,cu=1,mis=0,r=0,dep=1,og=0,tim=1177042417024940

PARSING IN CURSOR #23 len=241 dep=1 uid=66 oct=3 lid=66 tim=1177042417243005 hv=1545147654 ad='289d935c'
SELECT DISTINCT LOG$."ID" FROM (SELECT MLOG$."ID" FROM "TEST"."MLOG$_T1" MLOG$ WHERE "SNAPTIME$$" > :1 AND ("DMLTYPE$$" != 'I')) LOG$ WHERE (LOG$."ID") NOT IN (SELECT MAS_TAB$."ID" FROM "TEST"."T1" "MAS_TAB$" WHERE LOG$."ID" = MAS_TAB$."ID")
END OF STMT
PARSE #23:c=1000,e=649,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,tim=1177042417243001

PARSING IN CURSOR #23 len=277 dep=1 uid=66 oct=3 lid=66 tim=1177042417256460 hv=354087628 ad='27e4eba0'
SELECT CURRENT$."ID",CURRENT$."A",CURRENT$."B" FROM (SELECT "T1"."ID" "ID","T1"."A" "A","T1"."B" "B" FROM "TEST"."T1" "T1") CURRENT$, (SELECT DISTINCT MLOG$."ID" FROM "TEST"."MLOG$_T1" MLOG$ WHERE "SNAPTIME$$" > :1 AND ("DMLTYPE$$" != 'D')) LOG$ WHERE CURRENT$."ID" = LOG$."ID"
END OF STMT
PARSE #23:c=999,e=554,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,tim=1177042417256456

PARSING IN CURSOR #14 len=69 dep=2 uid=66 oct=6 lid=66 tim=1177042417280452 hv=1853616947 ad='27d0c5f0'
UPDATE "TEST"."MV_T1" SET "ID" = :1,"A" = :2,"B" = :3 WHERE "ID" = :1
END OF STMT
PARSE #14:c=0,e=226,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=1,tim=1177042417280445
BINDS #14:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=b70f5484  bln=22  avl=02  flg=09
  value=1
 Bind#1
  oacdty=96 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f549c  bln=32  avl=01  flg=09
  value="A"
 Bind#2
  oacdty=01 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f54a0  bln=32  avl=01  flg=09
  value="B"
 Bind#3
  No oacdef for this bind.
EXEC #14:c=3999,e=5391,p=0,cr=2,cu=0,mis=1,r=0,dep=2,og=1,tim=1177042417285916

PARSING IN CURSOR #22 len=60 dep=2 uid=66 oct=2 lid=66 tim=1177042417286092 hv=1178564284 ad='27e1d10c'
INSERT INTO "TEST"."MV_T1"  ("ID","A","B") VALUES (:1,:2,:3)
END OF STMT
PARSE #22:c=0,e=130,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=1,tim=1177042417286087
BINDS #22:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=b70f5484  bln=22  avl=02  flg=09
  value=1
 Bind#1
  oacdty=96 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f549c  bln=32  avl=01  flg=09
  value="A"
 Bind#2
  oacdty=01 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f54a0  bln=32  avl=01  flg=09
  value="B"
EXEC #22:c=1000,e=678,p=0,cr=1,cu=8,mis=1,r=1,dep=2,og=1,tim=1177042417286879
BINDS #14:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=b70f5484  bln=22  avl=02  flg=09
  value=2
 Bind#1
  oacdty=96 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f549c  bln=32  avl=01  flg=09
  value="a"
 Bind#2
  oacdty=01 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f54a0  bln=32  avl=01  flg=09
  value="b"
 Bind#3
  No oacdef for this bind.
EXEC #14:c=0,e=313,p=0,cr=1,cu=3,mis=0,r=1,dep=2,og=1,tim=1177042417287291
BINDS #14:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=b70f5484  bln=22  avl=02  flg=09
  value=3
 Bind#1
  oacdty=96 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f549c  bln=32  avl=01  flg=09
  value="A"
 Bind#2
  oacdty=01 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f54a0  bln=32  avl=01  flg=09
  value="B"
 Bind#3
  No oacdef for this bind.
EXEC #14:c=1000,e=1185,p=0,cr=1,cu=0,mis=0,r=0,dep=2,og=1,tim=1177042417288518
BINDS #22:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=b70f5484  bln=22  avl=02  flg=09
  value=3
 Bind#1
  oacdty=96 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f549c  bln=32  avl=01  flg=09
  value="A"
 Bind#2
  oacdty=01 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=0001 frm=01 csi=852 siz=32 off=0
  kxsbbbfp=b70f54a0  bln=32  avl=01  flg=09
  value="B"
EXEC #22:c=0,e=180,p=0,cr=0,cu=3,mis=0,r=1,dep=2,og=1,tim=1177042417288728

PARSING IN CURSOR #14 len=52 dep=1 uid=0 oct=7 lid=0 tim=1177042417319986 hv=271018863 ad='28993e00'
delete from "TEST"."MLOG$_T1" where snaptime$$ <= :1
END OF STMT
PARSE #14:c=999,e=1093,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=1177042417319982
BINDS #14:
kkscoacd
 Bind#0
  oacdty=12 mxl=07(07) mxlc=00 mal=00 scl=00 pre=00
  oacflg=18 fl2=0001 frm=00 csi=00 siz=8 off=0
  kxsbbbfp=bfff9cf0  bln=07  avl=07  flg=09
  value="3/12/2008 11:10:35"
EXEC #14:c=1000,e=1636,p=0,cr=3,cu=4,mis=1,r=4,dep=1,og=4,tim=1177042417321698

 
Even though the MLOG$_ table contains the old and new values, the materialized view obtains the current values by accessing the base table during fast refresh.
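
Whether a materialized view supports fast refresh at all can be checked with DBMS_MVIEW.EXPLAIN_MVIEW; a sketch, assuming MV_CAPABILITIES_TABLE has been created by ?/rdbms/admin/utlxmv.sql:

test@TEST>@?/rdbms/admin/utlxmv.sql
test@TEST>exec dbms_mview.explain_mview('TEST.MV_T1');
test@TEST>select capability_name, possible, msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%';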

 

Change base table's definition

sys@TEST>exec DBMS_REDEFINITION.CAN_REDEF_TABLE('TEST','T1');
BEGIN DBMS_REDEFINITION.CAN_REDEF_TABLE('TEST','T1'); END;
*
ERROR at line 1:
ORA-12091: cannot online redefine table "TEST"."T1" with materialized views
ORA-06512: at "SYS.DBMS_REDEFINITION", line 137
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1478
ORA-06512: at line 1

test@TEST>insert into test.t1 values(4,'A','B');
1 row created.

test@TEST>alter table test.t1 modify b varchar2(2);
Table altered.

test@TEST>insert into test.t1 values(5,'A','BB');
insert into test.t1 values(5,'A','BB')
                 *
ERROR at line 1:
ORA-12096: error in materialized view log on "TEST"."T1"
ORA-12899: value too large for column "TEST"."MLOG$_T1"."B" (actual: 2, maximum: 1)

test@TEST>insert into test.t1 values(5,'A','B');
1 row created.

test@TEST>exec dbms_mview.refresh('MV_T1','F');
PL/SQL procedure successfully completed.

test@TEST>alter  MATERIALIZED VIEW log on test.t1 modify b varchar2(2);
Materialized view log altered.

test@TEST>insert into test.t1 values(6,'A','BB');
1 row created.

test@TEST>exec dbms_mview.refresh('MV_T1','F');
BEGIN dbms_mview.refresh('MV_T1','F'); END;
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-12899: value too large for column "TEST"."MV_T1"."B" (actual: 2, maximum: 1)
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2254
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2460
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2429
ORA-06512: at line 1

test@TEST>desc test.MV_T1
 Name                                                  Null?    Type
 —————————————————– ——– ————————————
 ID                                                    NOT NULL NUMBER
 A                                                              CHAR(1)
 B                                                              VARCHAR2(1)
 
test@TEST>alter table test.mv_t1 modify b varchar2(2);
Table altered.

test@TEST>exec dbms_mview.refresh(’MV_T1′,’F’);
PL/SQL procedure successfully completed.

test@TEST>alter table test.t1 modify a char(2);
alter table test.t1 modify a char(2)
*
ERROR at line 1:
ORA-12096: error in materialized view log on "TEST"."T1"
ORA-12899: value too large for column "TEST"."MLOG$_T1"."A" (actual: 2, maximum: 1)

test@TEST>alter  MATERIALIZED VIEW log on test.t1 modify a char(2);
Materialized view log altered.

test@TEST>alter table test.t1 modify a char(2);
Table altered.

test@TEST>alter table test.mv_t1 modify a char(2);
alter table test.mv_t1 modify a char(2)
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

test@TEST>drop MATERIALIZED VIEW mv_t1;
Materialized view dropped.

test@TEST>drop MATERIALIZED VIEW log on test.t1;
Materialized view log dropped.

test@TEST>create MATERIALIZED VIEW log on test.t1 with PRIMARY KEY, rowid, SEQUENCE(a,b) including new values;
Materialized view log created.

test@TEST>CREATE MATERIALIZED VIEW test.mv_t1 build immediate refresh force on demand as select * from test.t1;
Materialized view created.

test@TEST>select * from test.mv_t1;
        ID A  B
———- — –
         3 A  B
         2 a  b
         1 A  B
         4 A  B
         5 A  B
         6 A  BB
6 rows selected.

test@TEST>insert into test.t1(id) values(7);
1 row created.

test@TEST>alter table test.t1 modify b default 'NA';
Table altered.
 
test@TEST>exec dbms_mview.refresh('MV_T1','F');
PL/SQL procedure successfully completed.

test@TEST>insert into test.t1(id) values(8);
1 row created.

test@TEST>alter table test.t1 modify a default 'NA';
Table altered.

test@TEST>exec dbms_mview.refresh('MV_T1','F');
PL/SQL procedure successfully completed.

test@TEST>select * from test.mv_t1;
        ID A  B
———- — –
         3 A  B
         2 a  b
         1 A  B
         4 A  B
         5 A  B
         6 A  BB
         7
         8    NA
8 rows selected.

test@TEST>alter table test.t1 drop column a;
Table altered.

test@TEST>exec dbms_mview.refresh('MV_T1','F');
PL/SQL procedure successfully completed.

test@TEST>select * from mv_t1;
        ID A  B
———- — –
         3 A  B
         2 a  b
         1 A  B
         4 A  B
         5 A  B
         6 A  BB
         9 NA NA
         7
         8    NA
9 rows selected.

test@TEST>exec dbms_mview.refresh('MV_T1','C');
BEGIN dbms_mview.refresh('MV_T1','C'); END;
*
ERROR at line 1:
ORA-12018: following error encountered during code generation for "TEST"."MV_T1"
ORA-00904: "T1"."A": invalid identifier
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2254
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2460
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2429
ORA-06512: at line 1

test@TEST>alter table test.MLOG$_T1 drop column a;
Table altered.

test@TEST>alter table test.mv_t1 drop column a;
Table altered.

test@TEST>insert into test.t1(id) values(10);
1 row created.

test@TEST>exec dbms_mview.UNREGISTER_MVIEW('TEST','MV_T1','TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM');
PL/SQL procedure successfully completed.

test@TEST>exec dbms_mview.REGISTER_MVIEW('TEST','MV_T1','TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM',100,dbms_mview.reg_primary_key_mview,'select id,b from test.t1',dbms_mview.reg_v8_snapshot);
PL/SQL procedure successfully completed.

sys@TEST>update sys.snap$ set QUERY_TXT='SELECT "T1"."ID" "ID","T1"."B" "B" FROM "TEST"."T1" "T1"',QUERY_LEN=56 where VNAME='MV_T1';
1 row updated.

sys@TEST>commit;
Commit complete.

sys@TEST>exec dbms_mview.refresh('TEST.MV_T1','F');
PL/SQL procedure successfully completed.

sys@TEST>select * from test.mv_t1;
        ID B
———- –
         3 B
         2 b
         1 B
         4 B
         5 B
         6 BB
         9 NA
         7
         8 NA
        10 NA
10 rows selected.
From the above results: the length of a base table column can be changed for some data types, such as VARCHAR2, but for CHAR the materialized view log must be altered first. Specifying a default value on an existing column also behaves differently between VARCHAR2 and CHAR. Dropping base table columns that are referenced by the materialized view may be possible (don't do that in production), and adding base table columns that the materialized view does not contain is possible as well. If the materialized view joins multiple tables or contains aggregate functions, there are more restrictions on changing the base table's definition.
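
Putting the above together, a sketch of a working order for widening a column while the materialized view exists: alter the log first, then the base table, then the container table, and refresh:

test@TEST>alter materialized view log on test.t1 modify b varchar2(2);
test@TEST>alter table test.t1 modify b varchar2(2);
test@TEST>alter table test.mv_t1 modify b varchar2(2);
test@TEST>exec dbms_mview.refresh('MV_T1','F');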

 
Move materialized view to a different tablespace

test@TEST>alter table test.mv_t1 move tablespace users;
Table altered.

test@TEST>alter index t1_pk1 rebuild;
Index altered.

 
Complete refresh changed from 9i to 10g
Materialized view complete refresh behavior changed slightly from Oracle 9i to 10g. The atomic parameter has the same default value, but the behavior differs when completely refreshing a single materialized view. After an upgrade from Oracle 9i to 10g, a complete refresh may take longer and generate more redo.
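
As demonstrated at the end of this section, the 9i-style behavior can be restored in 10g by disabling atomic refresh; a minimal sketch:

SQL> exec dbms_mview.refresh('MV_T1',method=>'c',atomic_refresh=>false);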

SQL> select * from v$version;
BANNER
—————————————————————-
Oracle9i Enterprise Edition Release 9.2.0.4.0 – Production
PL/SQL Release 9.2.0.4.0 – Production
CORE 9.2.0.3.0 Production
TNS for Linux: Version 9.2.0.4.0 – Production
NLSRTL Version 9.2.0.4.0 – Production
 
SQL> create table t1 as select rownum id from all_objects where rownum<6;
Table created.
 
SQL> alter table t1 add constraint t1_pk primary key (id);
Table altered.

SQL> create materialized view mv_t1 as select * from t1;
Materialized view created.

SQL> select * from mv_t1;
ID
———-
1
2
3
4
5

SQL> alter session set events '10046 trace name context forever,level 12';
Session altered.

SQL> exec dbms_mview.refresh('MV_T1',method=>'c');
PL/SQL procedure successfully completed.

SQL> alter session set events '10046 trace name context off';
Session altered.
Extracted from the trace file:

PARSING IN CURSOR #31 len=48 dep=1 uid=22 oct=85 lid=22 tim=1173899130266695 hv=1203315168 ad='5939c468'
truncate table "TEST"."MV_T1" purge snapshot log
END OF STMT
PARSE #31:c=0,e=11549,p=0,cr=0,cu=1,mis=1,r=0,dep=1,og=4,tim=1173899130266689

PARSING IN CURSOR #31 len=78 dep=1 uid=22 oct=2 lid=22 tim=1173899130361469 hv=2194498332 ad='59373ac4'
INSERT /*+ APPEND */ INTO "TEST"."MV_T1"("ID") SELECT "T1"."ID" FROM "T1" "T1"
END OF STMT

From the above trace, the materialized view is truncated first and the data is then inserted in append mode; if the materialized view is NOLOGGING, this may cause recovery issues for that period.

SQL> select NAME,UNRECOVERABLE_CHANGE#,UNRECOVERABLE_TIME from v$datafile;
NAME UNRECOVERABLE_CHANGE# UNRECOVER
————————————————– ——————— ———
/ddms01/dms/o02dms0suprt1/system_1.O02DMS0 0
/ddms01/dms/o02dms0data1/undo_tbs_01.O02DMS0 0
/ddms01/dms/o02dms0data1/users_1.O02DMS0 0
/ddms01/dms/o02dms0data1/cqowner_ts_01.O02DMS0 0
/ddms01/dms/o02dms0data1/tools01.dbf 0
/ddms01/dms/o02dms0data1/test_2k_01.dbf 0
/ddms01/dms/o02dms0data1/test01.dbf 0
/ddms01/dms/oracle/9.2.0/dbs/MISSING00008 0
8 rows selected.

SQL> drop materialized view mv_t1;
Materialized view dropped.

SQL> create materialized view mv_t1 nologging tablespace users as select * from t1;
Materialized view created.

SQL> select NAME,UNRECOVERABLE_CHANGE#,UNRECOVERABLE_TIME from v$datafile;
NAME UNRECOVERABLE_CHANGE# UNRECOVERABLE_TIME
————————————————– ——————— ——————-

/ddms01/dms/o02dms0data1/users_1.O02DMS0 1.1329E+10 2008-02-03 13:16:57

SQL> exec dbms_mview.refresh('MV_T1',method=>'c');
PL/SQL procedure successfully completed.

SQL> select NAME,UNRECOVERABLE_CHANGE#,UNRECOVERABLE_TIME from v$datafile;
NAME UNRECOVERABLE_CHANGE# UNRECOVERABLE_TIME
————————————————– ——————— ——————-

/ddms01/dms/o02dms0data1/users_1.O02DMS0 1.1329E+10 2008-02-03 13:19:09
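
If the unrecoverable load is a concern, one option (an assumption about the environment, not shown in the trace above) is to force logging at the database level, which overrides NOLOGGING on individual segments:

SQL> alter database force logging;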

SQL> create materialized view mv2_t1 as select * from t1;
Materialized view created.

SQL> alter session set events '10046 trace name context forever,level 12';
Session altered.

SQL> exec dbms_mview.refresh('MV_T1,MV2_t1',method=>'c');
PL/SQL procedure successfully completed.


PARSING IN CURSOR #18 len=26 dep=1 uid=22 oct=7 lid=22 tim=1173901762258356 hv=2659115118 ad='5af5f0e0'
delete from "TEST"."MV_T1"
END OF STMT

PARSING IN CURSOR #18 len=64 dep=1 uid=22 oct=2 lid=22 tim=1173901762261353 hv=3235319373 ad='593e7a60'
INSERT INTO "TEST"."MV_T1"("ID") SELECT "T1"."ID" FROM "T1" "T1"
END OF STMT

PARSING IN CURSOR #18 len=27 dep=1 uid=22 oct=7 lid=22 tim=1173901762287626 hv=2749552578 ad='5aee50f0'
delete from "TEST"."MV2_T1"
END OF STMT

PARSING IN CURSOR #20 len=65 dep=1 uid=22 oct=2 lid=22 tim=1173901762283788 hv=1756284514 ad='5aee9ec4'
INSERT INTO "TEST"."MV2_T1"("ID") SELECT "T1"."ID" FROM "T1" "T1"
END OF STMT
 

When a group of materialized views is completely refreshed together, Oracle deletes the data instead of truncating, so that all actions happen in one transaction.

SQL> select * from v$version;
BANNER
—————————————————————-
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Prod
PL/SQL Release 10.2.0.1.0 – Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 – Production
NLSRTL Version 10.2.0.1.0 – Production

SQL> create table t1 as select rownum id from all_objects where rownum<6;
Table created.

SQL> alter table t1 add constraint t1_pk primary key (id);
Table altered.
 
SQL> create materialized view mv_t1 as select * from t1;
Materialized view created.

SQL> alter session set events '10046 trace name context forever,level 12';
Session altered.

SQL> exec dbms_mview.refresh('MV_T1',method=>'c');
PL/SQL procedure successfully completed.

SQL> alter session set events '10046 trace name context off';
Session altered.


PARSING IN CURSOR #21 len=27 dep=1 uid=32 oct=7 lid=32 tim=1173900388085768 hv=2156255936 ad='2f8da41c'
delete from "TEST"."MV_T1"
END OF STMT

PARSING IN CURSOR #19 len=64 dep=1 uid=32 oct=2 lid=32 tim=1173900387983740 hv=1185993576 ad='2f960aac'
INSERT INTO "TEST"."MV_T1"("ID") SELECT "T1"."ID" FROM "T1" "T1"
END OF STMT

From the above result, we can see that the default behavior differs between 9i and 10g when completely refreshing one materialized view.

SQL> alter session set events '10046 trace name context forever,level 12';
Session altered.

SQL> exec dbms_mview.refresh('MV_T1',method=>'c',atomic_refresh=>false);
PL/SQL procedure successfully completed.

PARSING IN CURSOR #21 len=49 dep=1 uid=32 oct=85 lid=32 tim=1173903443911624 hv=1821482286 ad='2c538e0c'
truncate table "TEST"."MV_T1" purge snapshot log
END OF STMT

PARSING IN CURSOR #21 len=123 dep=1 uid=32 oct=2 lid=32 tim=1173903444814792 hv=2267269472 ad='2f87a8cc'
INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */ INTO "TEST"."MV_T1"("ID") SELECT "T1"."ID" FROM "T1" "T1"
END OF STMT


September 2, 2008

Password

Filed under: Oracle — Yaping @ 2:25 am
Tags: ,

The main password protection principles are:
1.       Enforcement of complex passwords
2.       Use of salted passwords
3.       Slow one-way algorithm

We should consider enforcing complex passwords and a password lifetime; this policy can be defined within the database. At the same time, we should also consider adopting password management tools, such as Password2000, to manage passwords conveniently.
It is much harder to crack a password if a salt is used, even when the salt value is stored in clear text or otherwise known.
A slow one-way algorithm makes encryption deliberately slow, so more time is needed to compute the final cipher; and if the algorithm can be changed midway, it becomes even more difficult to crack.
Assume an attacker can compute 1 million final ciphers per second when the password is encrypted in a single pass. If the calculated value is re-encrypted 100 times, only 10 thousand final ciphers per second are possible. A password that previously needed one day to crack now needs 100 days; if the password lifetime is 30 days, the password has already been changed before the old one is cracked.
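
A minimal sketch of such an iterated (slow) one-way hash in PL/SQL, assuming the DBMS_CRYPTO package (10g and later; EXECUTE privilege required). The algorithm, salt and iteration count here are illustrative, not Oracle's own scheme:

declare
  h raw(20);
begin
  -- one pass over password||salt ...
  h := dbms_crypto.hash(utl_raw.cast_to_raw('p1'||'somesalt'), dbms_crypto.hash_sh1);
  -- ... then re-hash the result 100 times to slow brute force down by the same factor
  for i in 1..100 loop
    h := dbms_crypto.hash(h, dbms_crypto.hash_sh1);
  end loop;
  dbms_output.put_line(rawtohex(h));
end;
/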

Oracle Password Mechanism:
1.       Concatenate the username and the password to produce a plaintext string;
2.       Convert the plaintext string to uppercase characters;
3.       Convert the plaintext string to multi-byte storage format; ASCII characters have the high byte set to 0x00;
4.       Encrypt the plaintext string (padded with 0s if necessary to the next even block length) using the DES algorithm in cipher block chaining (CBC) mode with a fixed key value of 0x0123456789ABCDEF;
5.       Encrypt the plaintext string again with DES-CBC, using the last block of the output of the previous step (ignoring parity bits) as the encryption key. The last block of this second output, converted into a printable string, is the password hash value.

Weak Salt Selection
Oracle password hashes use a non-conventional technique for salt selection by prepending the username to the password before calculating the hash.

Firstly, it is quite possible to obtain information about a user password based solely on its hash value and the known credentials of another user.

A second weakness is the use of non-random salt values. Although the salt used can still reduce the effectiveness of a precomputed dictionary attack against a large password hash table, an attacker could still precompute a table of possible passwords using a common username (e.g. SYSTEM), and use it to attempt to recover the password for this particular user in many different systems.

@>alter user system identified by p1;
User altered.
@>alter user sys identified by temp1;
User altered.
@>select username, password from dba_users where username like 'SYS%';
USERNAME                       PASSWORD
—————————— ——————————
SYS                            2E1168309B5B9B7A
SYSTEM                         2E1168309B5B9B7A

Lack of Case Preservation
Another weakness in the Oracle password hashing mechanism is the lack of alphabetic case preservation. Before the password hash is calculated, the user’s password is converted to all uppercase characters, regardless of the input case selection.
This behaviour represents a significant weakness in the password hashing algorithm, as it greatly reduces the number of possible passwords.

@>alter user system identified by P1;
User altered.
@>select username, password from dba_users where username = 'SYSTEM';
USERNAME                       PASSWORD
—————————— ——————————
SYSTEM                         2E1168309B5B9B7A

Recommendations
1.       Force complex passwords & length
Enforce complex passwords; refer to ?/rdbms/admin/utlpwdmg.sql.
The verification routine must be a standalone function owned by SYS, but it may call subprograms, functions and packages.
Double quotation marks can be used to include special symbols in a password, increasing the choice of characters, but some logon tools may run into issues with such passwords.

2.       Force a password lifetime
Specify profile attributes to limit the password lifetime. Once this policy is introduced, password changes are recorded in the user_history$ table.

Since Oracle 10g R2, the default profile's PASSWORD_LIFE_TIME changed from unlimited to 10 days. If your database was upgraded from an earlier release, uses the default profile and relies on PASSWORD_LIFE_TIME being unlimited, keep an eye on it.
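
To check and adjust the current setting, a sketch:

sys@DMS>select profile, limit from dba_profiles where resource_name='PASSWORD_LIFE_TIME';
sys@DMS>alter profile default limit password_life_time unlimited;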

3.       Lock unused accounts, change default password

4.       Restrict access to password hashes & Audit

@>audit SELECT on dba_users;
Audit succeeded.

@>audit select on sys.user$;
audit select on sys.user$
                    *
ERROR at line 1:
ORA-00701: object necessary for warmstarting database cannot be altered

USER$ is a special table used while warm-starting the database, so you can't audit it.

When the password is changed through the PASSWORD command, it is encrypted before being sent over the network, while an ALTER USER ... IDENTIFIED BY statement is sent in clear text.

@>password
Changing password for TEST
Old password: ****
New password: ****
Retype new password: ****
Password changed

 
Changed in 11g

@>alter user system identified by p1;
User altered.
@>select USERNAME,PASSWORD from dba_users where USERNAME='SYSTEM';
USERNAME                       PASSWORD
—————————— ——————————
SYSTEM
@>select NAME,PASSWORD,SPARE4 from user$ where NAME='SYSTEM';
NAME                 PASSWORD                       SPARE4
——————– —————————— ——————————————————————————–
SYSTEM               2E1168309B5B9B7A               S:8388CBF57687E6E8CF97BB672C2EDE394140FADE024E16329E8A2BF2E9BF

@>alter user system identified by p1;
User altered.
@>select NAME,PASSWORD,SPARE4 from user$ where NAME='SYSTEM';
NAME                 PASSWORD                       SPARE4
——————– —————————— ——————————————————————————–
SYSTEM               2E1168309B5B9B7A               S:09043B9ABFA366DF41DD16DE6768FDC04C57EF1374E0B04DAC8616716074

[oracle@chen ~]$ echo -ne "p1\x74\xE0\xB0\x4D\xAC\x86\x16\x71\x60\x74"|sha1sum
09043b9abfa366df41dd16de6768fdc04c57ef13  -

sys@DMS>@getPar
Enter value for parameter: case
old   6: ksppinm like lower('%&parameter%')
new   6: ksppinm like lower('%case%')
NAME                                     VALUE                                    DESCRIPTION
—————————————- —————————————- —————————————-
_case_sensitive_logon                    TRUE                                     case sensitive logon enabled
sec_case_sensitive_logon                 TRUE                                     case sensitive password enabled for logon

sys@DMS>alter user test identified by test;
sys@DMS>conn test/test
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.

test@DMS>conn /as sysdba
Connected.
sys@DMS>show sqlcase
sqlcase UPPER
sys@DMS>conn test/TEST
Connected.
test@DMS>conn /as sysdba
Connected.
sys@DMS>set sqlcase mixed
sys@DMS>alter user test identified by Test;
sys@DMS>conn test/TEST
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
sys@DMS>conn test/Test
Connected.

You can see that the password hash value no longer appears in dba_users. There are two password values in user$: one is the same as before, which is still a weakness; the other is encrypted with SHA-1 and changes every time the password is changed, even if the password itself is the same (is the time used to generate the salt?). An initialization parameter controls password case sensitivity.
The new encryption algorithm is simple: Oracle generates a 10-byte salt, concatenates the password and the salt, and generates a SHA-1 hash of the concatenated value. The result is stored in the spare4 column of the sys.user$ table.
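
Since SPARE4 is 'S:' followed by the 40-hex-digit SHA-1 hash and then the 20-hex-digit (10-byte) salt, the sha1sum check above simply re-hashes password||salt. Repeating it with the password in a different case gives a different hash, which confirms the new verifier is case sensitive (a sketch; output omitted):

[oracle@chen ~]$ echo -ne "P1\x74\xE0\xB0\x4D\xAC\x86\x16\x71\x60\x74"|sha1sum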

Rollback Segments Corruption

Filed under: Oracle — Yaping @ 2:15 am
Tags: ,

If rollback segments are corrupted, we should recover them from a backup. Sometimes there is no backup, or recovery is not necessary, and we may consider using undocumented parameters to avoid accessing these rollback segments. Oracle provides _offline_rollback_segments and _corrupted_rollback_segments; before using them, we should be clear about the difference between them.

 

When opening a database, any rollback segment listed in the _offline_rollback_segments or _corrupted_rollback_segments parameter:

- is not scanned, and any active transactions in it are neither marked as dead nor rolled back;
- appears offline in undo$;
- cannot be acquired by the instance for new transactions.

Note that even though a rollback segment is listed in _offline_rollback_segments, Oracle actually reads the segment to find transaction status and gathers undo records to perform rollback. If a corrupt block is found, Oracle will still fail.

If an open ITL is found to be associated with a segment listed in _corrupted_rollback_segments, the segment is not read to find the transaction status: it is as though the rollback segment had been dropped, so the transaction is assumed to be committed and delayed block cleanout is performed.
If the transaction was in fact not committed, logical corruption will occur.


Session 1

test@CHEN>create table t1 as select rownum id from dual connect by level<6;

Table created.

 

test@CHEN>delete t1 where id=5;

1 row deleted.

 

test@CHEN>select * from t1;

 

        ID

———-

         1

         2

         3

         4

 

test@CHEN>@myid

Wrote file /tmp/myvar.sql

sid:11 serial:4 pid:12 spid:12961

 

Session 2

test@CHEN>insert into t1 values(6);

1 row created.

 

test@CHEN>select * from t1;

        ID

———-

         1

         2

         3

         4

         5

         6

6 rows selected.

 

test@CHEN>@myid

Wrote file /tmp/myvar.sql

sid:15 serial:17 pid:13 spid:12981

 

Session 3

sys@CHEN>@tran1

       SID    SERIAL# USERNAME                 XIDUSN    XIDSLOT     XIDSQN SEGMENT_NAME

———- ———- ——————– ———- ———- ———- ——————————

        15         17 TEST                          8          7        368 _SYSSMU8$

        11          4 TEST                          2         40        401 _SYSSMU2$

 

sys@CHEN>@getpar _offline_rollback_segments

NAME                                     VALUE                                    DESCRIPTION

—————————————- —————————————- —————————————-

_offline_rollback_segments                                                        offline undo segment list

 

sys@CHEN>@getpar _corrupted_rollback_segments

NAME                                     VALUE                                    DESCRIPTION

—————————————- —————————————- —————————————-

_corrupted_rollback_segments                                                      corrupted undo segment list

 

sys@CHEN>alter system set "_offline_rollback_segments"='_SYSSMU8$' scope=spfile;

 

sys@CHEN>alter system set "_corrupted_rollback_segments"='_SYSSMU2$' scope=spfile;

 

sys@CHEN>alter system checkpoint;

 

sys@CHEN>shutdown abort

ORACLE instance shut down.

 

sys@CHEN>startup mount

ORACLE instance started.

Total System Global Area  487658188 bytes

Fixed Size                   451276 bytes

Variable Size             369098752 bytes

Database Buffers          117440512 bytes

Redo Buffers                 667648 bytes

Database mounted.

 

sys@CHEN>alter system set events '10226 trace name context forever';

 

sys@CHEN>alter database open;

 

sys@CHEN>select NAME,STATUS$ from undo$;

NAME                                        STATUS$

—————————————- ———-

SYSTEM                                            3

_SYSSMU1$                                         3

_SYSSMU2$                                         5

_SYSSMU3$                                         3

_SYSSMU4$                                         3

_SYSSMU5$                                         3

_SYSSMU6$                                         3

_SYSSMU7$                                         3

_SYSSMU8$                                         5

_SYSSMU9$                                         3

_SYSSMU10$                                        3

 

Session 4

test@CHEN>select * from t1;

        ID

———-

         1

         2

         3

         4

 

The trace file shows that Oracle uses the segments listed in _offline_rollback_segments to perform consistent reads.

applied to 0x519bc014:

KTB Redo

op: 0x03  ver: 0x01

op: Z

KDO Op code: DRP row dependencies Disabled

  xtype: CR  bdba: 0x00c00013  hdba: 0x00c00011

itli: 1  ispac: 0  maxfr: 4863

tabn: 0 slot: 0(0x0)

 

August 30, 2008

exp/imp

Filed under: Oracle — Yaping @ 4:48 am
Tags: ,

- Order of exp/imp tables

We create five tables, named d1, c2, a3, b4, e5, created in that order.

 

SQL> conn test/test

Connected.

SQL> create table d1(id number);

Table created.

SQL> create table c2(id number);

Table created.

SQL> create table a3(id number);

Table created.

SQL> create table b4(id number);

Table created.

SQL> create table e5(id number);

Table created.

 

Then we export tables by owner mode.

 

$exp test/test file=test.dmp

Export: Release 9.2.0.4.0 – Production on Mon Jun 11 00:40:03 2007

……

. about to export TEST's tables via Conventional Path …

. . exporting table                             A3          0 rows exported

. . exporting table                             B4          0 rows exported

. . exporting table                             C2          0 rows exported

. . exporting table                             D1          0 rows exported

. . exporting table                             E5          0 rows exported

……

 

We can see that the tables are exported in alphabetical order of table name, not in creation order or any other order.

Then we export tables by table mode.

 

$exp test/test tables=(d1,c2,b4,e5,a3) file=test.dmp

……

About to export specified tables via Conventional Path …

. . exporting table                             D1          0 rows exported

. . exporting table                             C2          0 rows exported

. . exporting table                             B4          0 rows exported

. . exporting table                             E5          0 rows exported

. . exporting table                             A3          0 rows exported

……

 

We can see that the tables are exported in the order in which they were specified.

Then we import tables by table mode.

 

$imp test/test tables=(a3,c2) file=test.dmp ignore=y

……

. importing TEST's objects into TEST

. . importing table                           "C2"          0 rows imported

. . importing table                           "A3"          0 rows imported

……

 

We find that the import order follows the order in which the tables are stored in the dump file, not the order specified. That is expected.

 

Accordingly, when we archive data by exporting tables, the tables most likely to be restored should be specified first, especially when the dump file is very large.

 

 

- Rearrange data order

When we reorganize data or move data between databases via the exp/imp utilities, we may need to rearrange the data order according to the characteristics of the application, in order to improve performance.

For example, suppose index range scans are very frequent and the indexed column values are widely scattered across the base table; such queries may be inefficient, and identical or adjacent key values should be arranged together.

Conversely, in some applications the accessed data is so densely packed that it causes hot block contention, so the data should be dispersed.

 

Here I use the QUERY parameter to achieve this.

 

SQL> create table t1 as select round(dbms_random.value*10000) id,owner||'.'||object_name name,rownum seq from all_objects where rownum<1000;

 

SQL> l

  1* select * from t1 where rownum<10

SQL> /

 

        ID NAME                                                      SEQ

———- ————————————————– ———-

      5326 SYS.AGGXMLIMP                                               1

      7180 SYS.AGGXMLINPUTTYPE                                         2

      1511 SYS.ALL_ALL_TABLES                                          3

      9381 SYS.ALL_APPLY                                               4

      1090 SYS.ALL_APPLY_CONFLICT_COLUMNS                              5

       216 SYS.ALL_APPLY_DML_HANDLERS                                  6

      5032 SYS.ALL_APPLY_ERROR                                         7

      3314 SYS.ALL_APPLY_KEY_COLUMNS                                   8

      2363 SYS.ALL_APPLY_PARAMETERS                                    9

9 rows selected.

 

$exp test/test tables=(t1) query=(\"where 1=1 order by id\") file=t1_o.dmp

 

SQL> truncate table t1;

Table truncated.

 

$imp test/test file=t1_o.dmp ignore=y

SQL> select * from t1 where rownum<10;

 

        ID NAME                                                      SEQ

———- ————————————————– ———-

        33 SYS.GV_$AW_OLAP                                           510

        48 SYS.EXU8IOVU                                              396

        79 SYS.EXU8JBQU                                              398

        84 SYS.KU$_M_VIEW_SRT_LIST_T                                 664

        88 SYS.EXU9FGA                                               460

        91 SYS.DICTIONARY                                            318

        94 SYS.AQ$_REG_INFO_LIST                                     219

       127 SYS.EXU9COOU                                              455

       128 SYS.SYSSEGOBJ                                             932

9 rows selected.

 

 

- DDL text in the dump file is visible and can be edited

In some situations we can edit the dump file to solve special problems. Here is one demo.

Some systems have many user-defined types. Assume the source schema and the target schema have structurally identical user-defined types, and the types are in use. Importing data of these types fails because the types already exist, yet we cannot drop the types either.

We may be able to modify the dump file directly to work around this.

 

SQL> show user

USER is "TEST"

 

SQL> create or replace type t_mytest as varray(10) of number;

  2  /

Type created.

SQL> create table t2(name varchar2(30),member t_mytest);

Table created.

SQL> insert into t2 select object_name,t_mytest(rownum,rownum+1,rownum+2) from all_objects where rownum<100;

99 rows created.

SQL> commit;

Commit complete.

SQL> select * from t2 where rownum<10;

 

NAME                           MEMBER

—————————— ————————————————–

ACCESS$                        T_MYTEST(1, 2, 3)

AGGXMLIMP                      T_MYTEST(2, 3, 4)

AGGXMLINPUTTYPE                T_MYTEST(3, 4, 5)

ALL_ALL_TABLES                 T_MYTEST(4, 5, 6)

ALL_APPLY                      T_MYTEST(5, 6, 7)

ALL_APPLY_CONFLICT_COLUMNS     T_MYTEST(6, 7, 8)

ALL_APPLY_DML_HANDLERS         T_MYTEST(7, 8, 9)

ALL_APPLY_ERROR                T_MYTEST(8, 9, 10)

ALL_APPLY_KEY_COLUMNS          T_MYTEST(9, 10, 11)

 

SQL> conn yp/yaping

Connected.

SQL> create or replace type t_mytest as varray(10) of number;

  2  /

Type created.

SQL> create table t2(name varchar2(30),member t_mytest);

Table created.

 

$exp test/test tables=(t2) file=t2.dmp

 

Before modify the dump file.

$imp yp/yaping fromuser=test file=t2.dmp ignore=y

 

Import: Release 9.2.0.4.0 – Production on Mon Jun 11 04:00:19 2007

 

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

 

 

Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 – Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.4.0 – Production

 

Export file created by EXPORT:V09.02.00 via conventional path

 

Warning: the objects were exported by TEST, not by you

 

import done in US7ASCII character set and AL16UTF16 NCHAR character set

. importing TEST's objects into YP

IMP-00061: Warning: Object type "YP"."T_MYTEST" already exists with a different identifier

 "CREATE TYPE "T_MYTEST" TIMESTAMP '2007-06-11:02:43:07' OID '3291809FE886C60"

 "9E040007F01002BAC'   as varray(10) of number;"

IMP-00063: Warning: Skipping table "YP"."T2" because object type "YP"."T_MYTEST" cannot be created or has different identifier

Import terminated successfully with warnings.

 

Then we modify the t2.dmp file with vi: find the following contents and remove the type-creation portion (highlighted in red in the original post).

 

TABLE "T2"

BEGINTYPE "T_MYTEST" "3291809FE886C609E040007F01002BAC" "2186F75D29F3ED2CB659AE4D0AEE92E6C7"

"^@^@^@^@^^&^A^@^A^@^A)^@^@^@^@^@^S^\^@^@^@^]^@^@^@

^C*^F^@<81>^@^G

CREATE TYPE

媒每^^@CREATE TYPE "T_MYTEST" TIMESTAMP '2007-06-11:02:43:07' OID '3291809FE886C609E040007F01002BAC' ^A^@ ^Y^@ as varray(10) of numb;^@^@

GRANT EXECUTE ON "T_MYTEST" TO "YP"

ENDTYPE

CREATE TABLE "T2" ("NAME" VARCHAR2(30), "MEMBER" "T_MYTEST")  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "USERS" LOGGING NOCOMPRESS

 

 

After modifying the file and saving it, we import it again.

 

$ imp yp/yaping fromuser=test file=t2.dmp ignore=y

 

Import: Release 9.2.0.4.0 – Production on Mon Jun 11 04:02:39 2007

 

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

 

 

Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 – Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.4.0 – Production

 

Export file created by EXPORT:V09.02.00 via conventional path

 

Warning: the objects were exported by TEST, not by you

 

import done in US7ASCII character set and AL16UTF16 NCHAR character set

. importing TEST's objects into YP

. . importing table                           "T2"         99 rows imported

Import terminated successfully without warnings.

 

 

This problem can also be resolved by setting the TOID_NOVALIDATE parameter.

 

$ imp yp/yaping fromuser=test touser=yp file=t2.dmp ignore=y TOID_NOVALIDATE=(t_mytest)

 

Import: Release 9.2.0.4.0 – Production on Mon Jun 11 04:04:16 2007

 

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

 

 

Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 – Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.4.0 – Production

 

Export file created by EXPORT:V09.02.00 via conventional path

 

Warning: the objects were exported by TEST, not by you

 

import done in US7ASCII character set and AL16UTF16 NCHAR character set

. . skipping TOID validation on type YP.T_MYTEST

. . importing table                           "T2"         99 rows imported

Import terminated successfully without warnings.

 

 

 

- Buffer, commit and consistent parameters

The default buffer size is too small; we need to increase it for efficiency. buffer and commit are usually used together, but we should bear in mind some side effects.

When we set commit=y, if the import process aborts, the table's data may be left in an unknown state, with no clear point to restart from. Before using a feature that can leave things in doubt, we must first be familiar with the application.

The commit parameter does not take effect for some types, such as LOB, LONG and REF; that data is committed every row.

Before setting consistent=y, consider whether it makes sense to export the tables in several runs: put the tables that need to be mutually consistent into the same export, separated from the other tables.
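
For instance, a sketch that exports the two related tables in one consistent run, separately from the rest, with a larger buffer:

$exp test/test tables=(t1,t2) consistent=y buffer=20971520 file=t1_t2.dmp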

 

 

- Trace parameter

If you want to know which SQL statements are executed when exp/imp runs, there is an easy method: specify trace=y, which is equivalent to setting sql_trace and timed_statistics within the session. Old exp/imp releases may not have this parameter.
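
For example, a sketch (the trace file appears in user_dump_dest as usual):

$imp test/test file=t1.dmp ignore=y trace=y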

 

 

- Buffer parameter

If the buffer parameter is specified, where is that memory allocated from, the database or the OS?

 

We use a logoff trigger to record each session's statistics about memory allocation (sys.sesstat and procmem are our own logging tables):

create or replace trigger trigger_logoff
before logoff on database
--before logoff on test.schema
declare
  v_program varchar2(48);
  v_sid number;
  v_paddr raw(4);
begin
  if user='TEST' then
    select sid,program,paddr into v_sid,v_program,v_paddr
      from v$session where audsid=sys_context('userenv','sessionid');
    -- session-level UGA/PGA statistics
    insert into sys.sesstat
      select sid,name,value,sysdate,v_program
        from v$mystat a,v$statname b
       where a.statistic#=b.statistic#
         and (name like '%uga%' or name like '%pga%' or name='user commits');
    -- process-level PGA figures
    insert into procmem
      select v_sid,spid,program,pga_used_mem,pga_alloc_mem,pga_freeable_mem,pga_max_mem,sysdate
        from v$process where addr=v_paddr;
  end if;
exception
  when others then
    dbms_system.ksdwrt(2,sqlerrm);
end;
/

 

We use one demo table, its size is a little more than 250M.

SQL> analyze table test.t1 compute statistics;

 Table analyzed.

 

SQL> select OWNER,AVG_ROW_LEN,NUM_ROWS from dba_tables where table_name='T1';

 OWNER                          AVG_ROW_LEN   NUM_ROWS

—————————— ———– ———-

TEST                                  6016      44352

 

SQL> select 6016*44352/1024/1024 from dual;

 6016*44352/1024/1024

——————–

          254.460938

 

The following results show the session memory allocation with different buffer values: the default buffer, and buffer=20971520 (20 MB).

       SID NAME                          VALUE DT                  PROGRAM

———- ———————— ———- ——————- ————————————————       

         9 session uga memory           158356 2007-12-29 15:03:48 imp@TCLUX3245 (TNS V1-V3)  –imp test/test  file=t1.dmp trace=y log=3.log ignore=y

         9 session uga memory max       162756 2007-12-29 15:03:48 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory           561736 2007-12-29 15:03:48 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory max       561736 2007-12-29 15:03:48 imp@TCLUX3245 (TNS V1-V3)

        

         9 session uga memory           132932 2007-12-29 15:06:22 imp@TCLUX3245 (TNS V1-V3)  –imp test/test  file=t1.dmp trace=y log=4.log ignore=y buffer=20971520

         9 session uga memory max       141108 2007-12-29 15:06:22 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory           408836 2007-12-29 15:06:22 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory max       408836 2007-12-29 15:06:22 imp@TCLUX3245 (TNS V1-V3)

        

 

According to the following result we can confirm that the buffer value took effect.

       SID NAME                                VALUE DT                  PROGRAM

———- —————————— ———- ——————- ——————————

        14 user commits                            1 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3) –imp test/test  file=t1.dmp trace=y log=p1.log ignore=y buffer=20971520

        14 session uga memory                 134040 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3)

        14 session uga memory max             138440 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3)

        14 session pga memory                 425860 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3)

        14 session pga memory max             425860 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3)

 

         9 user commits                           13 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3) –imp test/test  file=t1.dmp trace=y log=p1.log ignore=y buffer=20971520 commit=y

         9 session uga memory                 165040 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3)

         9 session uga memory max             186384 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory                 572832 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory max             572832 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3)

 

 

 SID       SPID PROGRAM                        PGA_USED_MEM PGA_ALLOC_MEM PGA_FREEABLE_MEM PGA_MAX_MEM DT

—- ———- —————————— ———— ————- —————- ———– ——————-

  14       4500 oracle@TCLUX3245 (TNS V1-V3)         435321        480213                0      480213 2007-12-29 14:21:29

   9       4884 oracle@TCLUX3245 (TNS V1-V3)         562997        627185                0      627185 2007-12-29 14:52:04

 

From the above result, we can conclude that the buffer memory is not allocated from the database. Now we observe the processes through the ps command.

The parent-child process relationship shows clearly that the memory for the buffer parameter comes from the imp/exp process itself.

———————————————————————

USER       PID %CPU %MEM   VSZ  RSS TTY      STAT START   TIME COMMAND

 

root     31739  0.0  0.1  6860 1080 ?        S    Dec27   0:00  \_ sshd: oracle [priv]

oracle   31741  0.0  0.1  7020 1204 ?        S    Dec27   0:01  |   \_ sshd: oracle@pts/7

oracle   31744  0.0  0.1  4404 1196 pts/7    S    Dec27   0:00  |       \_ -bash

oracle    5049  7.7  2.5 33508 25492 pts/7   S    15:32   0:02  |           \_ imp           file=t1.dmp trace=y log=p1.log ig

oracle    5050 12.3  6.4 269728 65472 ?      S    15:32   0:04  |               \_ oracleO02DMS0 (DESCRIPTION=(LOCAL=YES)(ADDR

 

–imp test/test  file=t1.dmp trace=y log=p1.log ignore=y buffer=20971520 commit=y

–buffer=20971520 =20M

–25492 KB

 

root     31739  0.0  0.1  6860 1080 ?        S    Dec27   0:00  \_ sshd: oracle [priv]

oracle   31741  0.0  0.1  7020 1204 ?        S    Dec27   0:01  |   \_ sshd: oracle@pts/7

oracle   31744  0.0  0.1  4404 1200 pts/7    S    Dec27   0:00  |       \_ -bash

oracle    5125  1.3  0.4 13028 4988 pts/7    S    16:08   0:06  |           \_ imp           file=t1.dmp trace=y log=p1.log ig

oracle    5126  2.1  6.4 269712 65772 ?      S    16:08   0:09  |               \_ oracleO02DMS0 (DESCRIPTION=(LOCAL=YES)(ADDR

 

–imp test/test  file=t1.dmp trace=y log=p1.log ignore=y

 

———————————————————————

 

 

- Unique constraint violated

A unique constraint may be enforced by a unique index or a non-unique index, but the two index types validate the constraint differently.

With a unique index: first redo and undo are generated, then the data blocks containing the DML rows are modified and the constraint is validated; if it passes, redo and undo are generated for the index entries and the entries are modified; if it fails, the redo was owed to the validation failure and an implicit rollback is performed against the table.

With a non-unique index: first redo and undo are generated, then the data blocks containing the DML rows are modified, redo and undo are generated for the index entries, the entries are modified, and only then is the constraint validated; if it fails, the redo was owed to the validation failure and an implicit rollback is performed against the index, then against the table.

The import utility is extremely costly if the imported data fails a unique constraint. As with standard insert operations, the redo cost for failed rows via imp is influenced by the type of index used to enforce the unique constraint. Aside from the index type, the number of failed rows per batch violating a unique constraint greatly impacts the redo generated; the buffer parameter controls the batch size.

 

I illustrate this with the following experiment.

test@CHEN>create table t1(id number);

Table created.

 

test@CHEN>create table t2(id number);

Table created.

 

test@CHEN>alter table t1 add constraint t1_pk primary key(id);

Table altered.

 

test@CHEN>create index idx_t2_id on t2(id);

Index created.

 

test@CHEN>alter table t2 add constraint t2_pk primary key(id) using index;

Table altered.

 

test@CHEN>insert into t1 select rownum from dual connect by level<16;

15 rows created.

 

test@CHEN>insert into t2 select rownum from dual connect by level<16;

15 rows created.

 

test@CHEN>commit;

Commit complete.

 

test@CHEN>!exp test/test tables=(t1,t2) file=t1_t2.dmp silent=y

About to export specified tables via Conventional Path …

. . exporting table                             T1         15 rows exported

. . exporting table                             T2         15 rows exported

Export terminated successfully without warnings.

 

test@CHEN>delete from t2 where id<6 or id>10;

10 rows deleted.

 

test@CHEN>delete from t1 where id<6 or id>10;

10 rows deleted.

 

test@CHEN>commit;

Commit complete.

 

test@CHEN>!imp test/test file=t1_t2.dmp ignore=y silent=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST’s objects into TEST

. . importing table                           “T1”

IMP-00019: row rejected due to ORACLE error 1

IMP-00003: ORACLE error 1 encountered

. . importing table                           “T2”

IMP-00019: row rejected due to ORACLE error 1

IMP-00003: ORACLE error 1 encountered

ORA-00001: unique constraint (TEST.T2_PK) violated

Column 1 6

 

Checking the redo log shows that Oracle performed the following actions.

T1

STEP 1

Multiple rows insert (1..15), batch size controlled by the buffer parameter

 

STEP 2

Index leaf rows inserted as an array, stopping when value=6 fails validation, then cleaned up

 

STEP 3

Multiple rows delete (1..15), i.e. all the rows just inserted

 

STEP 4

Single row insert of value 1, then insert of its index leaf row

Repeat the above actions for row values 2 to 5

 

STEP 5

Multiple rows insert (7..15), skipping the value that failed last time and restarting from there

 

STEP 6

Multiple rows delete(7..15)

 

STEP 7

Repeat STEP 5 & 6, until value is 10

 

STEP 8

Multiple rows insert(11..15)

 

STEP 9

Index leaf rows insert through array

 

STEP 10

Mark this transaction committed

 

T2

STEP 1

Multiple rows insert (1..15), batch size controlled by the buffer parameter

 

STEP 2

Index leaf rows inserted one by one, for values 1 to 5

 

STEP 3

Index leaf row insert for value=6 fails validation, then all leaf rows are cleaned up

 

STEP 4

Multiple rows delete (1..15), i.e. all the rows just inserted

 

STEP 5

Single row insert of value 1, then insert of its index leaf row

Repeat the above actions for row values 2 to 5

 

STEP 6

Multiple rows insert (7..15), skipping the value that failed last time and restarting from there

Index leaf row insert for value=7 fails validation, then cleanup

 

STEP 7

Repeat STEP 6 until value is 10

 

STEP 8

Perform STEP 1 & 2

 

STEP 9

Mark this transaction committed

 

From the above we can see that import generates a mass of redo when it encounters unique constraint violations.
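A simple way to quantify this is to snapshot the system-wide redo statistic before and after the imp run; on an otherwise idle test instance the delta approximates the import's redo (a sketch, not exact accounting).

select name, value from v$sysstat where name = 'redo size';
-- run the imp, then repeat the query and compare the two values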

 

 

•         Can we export a table and import into another table?

 

Maybe.

 

[oracle@chen ~]$ exp test/test tables=t2 file=t2.dmp silent=y

About to export specified tables via Conventional Path …

. . exporting table                             T2         10 rows exported

Export terminated successfully without warnings.

 

Edit the dump file t2.dmp with vi, change every occurrence of T2 in the file to T8 (in vi: :%s/T2/T8/g), then save it. Note the substitution also renames IDX_T2_ID to IDX_T8_ID.

 

[oracle@chen ~]$ vi t2.dmp

TABLE “T2”

CREATE TABLE “T2” (“ID” NUMBER)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE “TEST” LOGGING NOCOMPRESS

INSERT INTO “T2” (“ID”) VALUES (:1)

^A^@^B^@^V^@^@^@^@^@^B^@Ã^B^@^@^B^@Ã^C^@^@^B^@Ã^D^@^@^B^@Ã^E^@^@^B^@Ã^F^@^@^B^@Ã^G^@^@^B^@Ã^H^@^@^B^@à ^@^@^B^@Ã

^@^@^B^@Ã^K^@^@ÿÿ

CREATE INDEX “IDX_T2_ID” ON “T2” (“ID” )  PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE “TEST” LOGGING

 

[oracle@chen ~]$ strings t2.dmp

EXPORT:V09.02.00

UTEST

RTABLES

8192

                                        Fri Jun 27 0:34:44 2008t2.dmp

#C##

#C##

+08:00

BYTE

INTERPRETED

TABLE “T8”

CREATE TABLE “T8” (“ID” NUMBER)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE “TEST” LOGGING NOCOMPRESS

INSERT INTO “T8” (“ID”) VALUES (:1)

CREATE INDEX “IDX_T8_ID” ON “T8” (“ID” )  PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE “TEST” LOGGING

EXIT

EXIT

 

[oracle@chen ~]$ imp test/test file=t2.dmp silent=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST’s objects into TEST

. . importing table                           “T8”         10 rows imported

Import terminated successfully without warnings.

 

The above method is very dangerous; there is an easier approach. Just create table T8 with the same structure first, then create a synonym named T2 that refers to T8, and import the data.
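A minimal sketch of the synonym approach, assuming the importing schema has no object of its own named T2:

create table t8 (id number);      -- same structure as the exported T2
create synonym t2 for t8;         -- imp's INSERT INTO "T2" now lands in T8
-- then from the shell: imp test/test file=t2.dmp ignore=y
-- (ignore=y skips the failed CREATE TABLE "T2"; the inserts follow the synonym)
drop synonym t2;                  -- clean up afterwards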

 

 

•         Consistent parameter

 

Prepare the test data.

Session 1

test@CHEN>create table t1 as select rownum id from dual connect by level<5;

Table created.

 

test@CHEN>create table t2 as select rownum id from dual connect by level<5;

Table created.

 

test@CHEN>create table t3 as select rownum id from dual connect by level<5;

Table created.

 

test@CHEN>create table t4 as select rownum id from dual connect by level<5;

Table created.

 

test@CHEN>create or replace function f_test(p_str varchar2)

  2    return varchar2 as

  3  begin

  4    return p_str;

  5    exception

  6    when others then

  7         return ‘ERROR’;

  8  end;

  9  /

Function created.

 

Next we need a way to slow the export down: auditing SELECT on test.t1 plus a trigger on the audit table means each audited SELECT sleeps 60 seconds, holding the export at that table.

Session 2

sys@CHEN>AUDIT SELECT ON test.t1 WHENEVER SUCCESSFUL;

Audit succeeded.

 

sys@CHEN>create or replace trigger after_audit

  2  after insert or update on sys.aud$

  3  begin

  4    dbms_lock.sleep(60);

  5  end;

  6  /

create or replace trigger after_audit

                          *

ERROR at line 1:

ORA-04089: cannot create triggers on objects owned by SYS

 

sys@CHEN>create table system.aud$ as select * from sys.aud$ where 1=2;

Table created.

 

sys@TEST>drop table sys.aud$;

Table dropped.

 

sys@CHEN>create synonym aud$ for system.aud$;

Synonym created.

 

sys@CHEN>create or replace trigger after_audit

  2    after insert or update on system.aud$

  3    begin

  4      dbms_lock.sleep(60);

  5    end;

  6  /

Trigger created.

 

Export with consistent=y parameter

Session 2

sys@CHEN>!exp test/test file=test.dmp CONSISTENT=y trace=y silent=y

About to export specified users …

. exporting pre-schema procedural objects and actions

. exporting foreign function library names for user TEST

. exporting PUBLIC type synonyms

. exporting private type synonyms

. exporting object type definitions for user TEST

About to export TEST’s objects …

. exporting database links

. exporting sequence numbers

. exporting cluster definitions

. about to export TEST’s tables via Conventional Path …

. . exporting table                             T1          4 rows exported –hold on 240 seconds

. . exporting table                             T2          4 rows exported

. . exporting table                             T3          4 rows exported

. . exporting table                             T4

EXP-00056: ORACLE error 942 encountered

ORA-00942: table or view does not exist

. exporting synonyms

. exporting views

. exporting stored procedures

. exporting operators

. exporting referential integrity constraints

. exporting triggers

. exporting indextypes

. exporting bitmap, functional and extensible indexes

. exporting posttables actions

. exporting materialized views

. exporting snapshot logs

. exporting job queues

. exporting refresh groups and children

. exporting dimensions

. exporting post-schema procedural objects and actions

. exporting statistics

Export terminated successfully with warnings.

 

After submitting the exp command in session 2, immediately run the following SQL in another session.

Session 1

test@CHEN>@case1

test@CHEN>insert into t1 values(-1);

1 row created.

 

test@CHEN>insert into t2 values(-1);

1 row created.

 

test@CHEN>alter table t3 add a varchar2(1) default ‘Y’;

Table altered.

 

test@CHEN>drop table t4;

Table dropped.

 

test@CHEN>create table t5 as select * from t2;

Table created.

 

test@CHEN>

test@CHEN>create or replace function f_test(p_str varchar2)

  2    return varchar2 as

  3  begin

  4    return ‘consistent=y:’||p_str;

  5    exception

  6    when others then

  7         return ‘ERROR’;

  8  end;

  9  /

Function created.

 

Check the exported data.

Session 2

sys@CHEN>!imp test/test file=test.dmp rows=n show=y silent=y full=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST’s objects into TEST

 “BEGIN 

 “CREATE TABLE “T3” (“ID” NUMBER)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS “

 “255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE “TEST” “

 “LOGGING NOCOMPRESS”

. . skipping table “T3”

 

 “CREATE TABLE “T4” (“ID” NUMBER)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS “

 “255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE “TEST” “

 “LOGGING NOCOMPRESS”

. . skipping table “T4”

 

“CREATE function f_test(p_str varchar2)”

   return varchar2 as”

 “begin”

   return p_str;”

   exception”

   when others then”

        return ‘ERROR’;”

 “end;”

Import terminated successfully without warnings.

 

From the above output we can see that the definition of the dropped table is still exported, although it contains no data. Even though the table's structure changed, the old definition is exported, and newly created objects are not exported.

 

Export with consistent=n parameter

 

Session 2

sys@CHEN>!exp test/test file=test.dmp consistent=n trace=y silent=y

About to export specified users …

. exporting pre-schema procedural objects and actions

. exporting foreign function library names for user TEST

. exporting PUBLIC type synonyms

. exporting private type synonyms

. exporting object type definitions for user TEST

About to export TEST’s objects …

. exporting database links

. exporting sequence numbers

. exporting cluster definitions

. about to export TEST’s tables via Conventional Path …

. . exporting table                             T1          4 rows exported — hold on

. . exporting table                             T2          5 rows exported

. . exporting table                             T3          4 rows exported

. . exporting table                             T4

EXP-00007: dictionary shows no columns for TEST.T4

. exporting synonyms

. exporting views

. exporting stored procedures

. exporting operators

. exporting referential integrity constraints

. exporting triggers

. exporting indextypes

. exporting bitmap, functional and extensible indexes

. exporting posttables actions

. exporting materialized views

. exporting snapshot logs

. exporting job queues

. exporting refresh groups and children

. exporting dimensions

. exporting post-schema procedural objects and actions

. exporting statistics

Export terminated successfully with warnings.

 

After submitting the exp command in session 2, immediately run the following SQL in another session.

Session 1

Tables t1, t2, t3 and t4 were recreated before running the above export.

test@CHEN>@case2

test@CHEN>insert into t1 values(-1);

1 row created.

 

test@CHEN>insert into t2 values(-1);

1 row created.

 

test@CHEN>alter table t3 add a varchar2(1) default ‘Y’;

Table altered.

 

test@CHEN>drop table t4;

Table dropped.

 

test@CHEN>

test@CHEN>create table t5 as select * from t2;

Table created.

 

test@CHEN>create or replace function f_test(p_str varchar2)

  2    return varchar2 as

  3  begin

  4    return ‘consistent=n:’||p_str;

  5    exception

  6    when others then

  7         return ‘ERROR’;

  8  end;

  9  /

Function created.

 

Check the exported data.

Session 2

sys@CHEN>!imp test/test file=test.dmp rows=n show=y silent=y full=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST’s objects into TEST

 “BEGIN 

 “CREATE TABLE “T3” (“ID” NUMBER, “A” VARCHAR2(1))  PCTFREE 10 PCTUSED 40 INI”

 “TRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) T”

 “ABLESPACE “TEST” LOGGING NOCOMPRESS”

 ” ALTER TABLE “T3” MODIFY (“A” DEFAULT ‘Y’)”

. . skipping table “T3”

“CREATE function f_test(p_str varchar2)”

   return varchar2 as”

 “begin”

   return ‘consistent=n:’||p_str;”

   exception”

   when others then”

        return ‘ERROR’;”

 “end;”

Import terminated successfully without warnings.

 

We can see that the dropped table is gone, and changed objects are exported with the definition and data current at the time they are read, but new objects are still not included.

 

Export with object_consistent=y parameter

 

Session 2

sys@CHEN>!exp test/test file=test.dmp OBJECT_CONSISTENT=y trace=y silent=y

About to export specified users …

. exporting pre-schema procedural objects and actions

. exporting foreign function library names for user TEST

. exporting PUBLIC type synonyms

. exporting private type synonyms

. exporting object type definitions for user TEST

About to export TEST’s objects …

. exporting database links

. exporting sequence numbers

. exporting cluster definitions

. about to export TEST’s tables via Conventional Path …

. . exporting table                             T1          4 rows exported –hold on

. . exporting table                             T2          5 rows exported

. . exporting table                             T3          4 rows exported

. . exporting table                             T4

EXP-00007: dictionary shows no columns for TEST.T4

. exporting synonyms

. exporting views

. exporting stored procedures

. exporting operators

. exporting referential integrity constraints

. exporting triggers

. exporting indextypes

. exporting bitmap, functional and extensible indexes

. exporting posttables actions

. exporting materialized views

. exporting snapshot logs

. exporting job queues

. exporting refresh groups and children

. exporting dimensions

. exporting post-schema procedural objects and actions

. exporting statistics

Export terminated successfully with warnings.

 

After submitting the exp command in session 2, immediately run the following SQL in another session.

 

Session 1

Tables t1, t2, t3 and t4 were recreated before running the above export.

test@CHEN>@case3

test@CHEN>insert into t1 values(-1);

1 row created.

 

test@CHEN>insert into t2 values(-1);

1 row created.

 

test@CHEN>alter table t3 add a varchar2(1) default ‘Y’;

Table altered.

 

test@CHEN>drop table t4;

Table dropped.

 

test@CHEN>

test@CHEN>create table t5 as select * from t2;

Table created.

 

test@CHEN>create or replace function f_test(p_str varchar2)

  2    return varchar2 as

  3  begin

  4    return ‘object_consistent=y:’||p_str;

  5    exception

  6    when others then

  7         return ‘ERROR’;

  8  end;

  9  /

Function created.

 

Check the exported data.

Session 2

sys@CHEN>!imp test/test file=test.dmp rows=n show=y silent=y full=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST’s objects into TEST

 “BEGIN 

 “CREATE TABLE “T3” (“ID” NUMBER, “A” VARCHAR2(1))  PCTFREE 10 PCTUSED 40 INI”

 “TRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) T”

 “ABLESPACE “TEST” LOGGING NOCOMPRESS”

 ” ALTER TABLE “T3” MODIFY (“A” DEFAULT ‘Y’)”

. . skipping table “T3”

…”

 “CREATE function f_test(p_str varchar2)”

   return varchar2 as”

 “begin”

   return ‘object_consistent=y:’||p_str;”

   exception”

   when others then”

        return ‘ERROR’;”

 “end;”

Import terminated successfully without warnings.

 

 

From the above results, it seems the set of tables to export is fixed at the beginning of the export; with object_consistent=y each table is then read consistently only as of the time it is exported.

 

 

 

•         Others

There is a tool that can extract DDL statements from a dump file; refer to http://www.ddlwizard.com/.

 

ARCH relevant

Filed under: Oracle — Yaping @ 4:29 am
Tags:

Prior to Oracle 10g, when the ARCH process needs to archive a redo log, it first builds an archive destination list. Once this list is complete, ARCH reads a 1 MB chunk of the redo log being archived. The chunk size is controlled by a hidden parameter called _log_archive_buffer_size; its default value is 2048 redo log blocks, and with a 512-byte redo log block size that is exactly 1 MB. The chunk is then sent to the first destination in the list; after that write completes, the same 1 MB chunk is sent to the second destination, and so on until the chunk has been written to all destinations. ARCH then reads the next 1 MB chunk and repeats until the whole log has been written everywhere. So archiving is only as fast as the slowest destination.
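The hidden parameter can be inspected with the usual x$ksppi/x$ksppcv join (requires SYSDBA; the getPar script used later does the same):

select i.ksppinm name, v.ksppstvl value, i.ksppdesc description
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm like '_log_archive_buffer%';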

 

So what happens if archiving is slow, or a destination is dead?

In some situations the network is slow or the server is under heavy load, making the archive process very slow. If a redo log has not been completely archived, the database can still switch to the next redo log file while other groups are available. But the log writer may recycle through all available online redo log groups and come back to a file that has not yet been completely archived; at that point the database is suspended.

If the ARCH process gets no response at all from an archive destination, perhaps because the network or the server is dead, the result differs from the slow case: the unavailable destination is simply closed and ARCH carries on archiving to the available destinations. The OPTIONAL and MANDATORY attributes only control whether the online redo log file may be reused when it has not been completely archived to that destination.
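For illustration, these attributes are set per destination; the paths and service names here are made up:

alter system set log_archive_dest_1 = 'LOCATION=/u03/oradata/arch MANDATORY';
alter system set log_archive_dest_2 = 'SERVICE=standby OPTIONAL REOPEN=60';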

 

We should pay extra attention to archive destinations reached over a network (NFS or a standby). If the data is transmitted over a slow network, no error is raised and the destination is not closed; transmission continues, only very slowly. Ultimately the database can be suspended for lack of available redo log groups.

 

Oracle 9.2.0.5 introduced a parameter called _LOG_ARCHIVE_CALLOUT that allows the DBA to change the default behavior:

_LOG_ARCHIVE_CALLOUT='LOCAL_FIRST=TRUE'

If this parameter is set and the standby uses the ARCH process to archive logs, ARCH archives to the local destination first. Once the redo log has been completely and successfully archived to at least one local destination, it is then transmitted to the remote destination. This has been the default behavior since Oracle 10g Release 1.
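As a sketch, the parameter would be set like any hidden parameter (double quotes required; change it only under Oracle Support's guidance):

alter system set "_log_archive_callout" = 'LOCAL_FIRST=TRUE' scope=spfile;
-- restart the instance for it to take effect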

 

 

@>@getPar

Enter value for parameter: _log_archive_buffer_size

old   6: ksppinm like lower(‘%&parameter%’)

new   6: ksppinm like lower(‘%_log_archive_buffer_size%’)

NAME                                     VALUE                                    DESCRIPTION

—————————————- —————————————- —————————————-

_log_archive_buffer_size                 2048                                     Size of each archival buffer in log file

                                                                                   Blocks

The maximum value is 2048 in Oracle 9i under Linux.

 

[oracle@chen ~]$ ps -ef|grep arc

oracle   12148     1  0 03:25 ?        00:00:00 ora_arc0_chen

oracle   12150     1  0 03:25 ?        00:00:00 ora_arc1_chen

oracle   12529 12495  1 04:34 pts/9    00:00:00 grep arc

 

[oracle@chen ~]$ strace -p 12148 -o ora_arc0_chen.log &

[1] 12532

 

[oracle@chen ~]$ strace -p 12150 -o ora_arc1_chen.log &

[2] 12559

 

@>alter system switch logfile;

System altered.          

 

We can find records similar to the following in the trace files.

… …

open(“/u03/oradata/9208/chen/redo02.log”, O_RDONLY|O_DIRECT|O_LARGEFILE) = 16

… …

open(“/u03/oradata/arch/1_329.dbf”, O_RDWR|O_SYNC|O_DIRECT|O_LARGEFILE) = 18

… …

pread(16, “I\1\1c\241\201&\270\326 \t \t\200″…, 1048576, 512) = 1048576

… …

pwrite(18, “I\1\1\10M\241\201&D\212\233                “…, 1048576, 1049088) = 1048576

… …

 

 

@>@getPar

Enter value for parameter: _log_archive_callout

old   6: ksppinm like lower(‘%&parameter%’)

new   6: ksppinm like lower(‘%_log_archive_callout%’)

NAME                                     VALUE                                    DESCRIPTION

—————————————- —————————————- —————————————-

_log_archive_callout          

 

 

Logs gap resolution

Since Oracle 9i Release 1, automatic gap resolution is implemented during log transport processing. As the LGWR or ARCH process begins to send redo to the standby, the sequence number of the log being archived is compared with the last sequence received by the RFS process on the standby. If the RFS process detects that the archived log being received is greater than the last sequence received plus one, it piggybacks a request to the primary to send the missing archived logs.

 

Starting in Oracle 9i Release 2, automatic gap resolution has been enhanced. In addition to the above, the ARCH process on the primary database polls all standby databases every minute to see if there is a gap in the sequence of archived redo logs. If a gap is detected, the ARCH process sends the missing archived redo log files to the standby databases that reported the gap. Once the gap is resolved, the LGWR process is notified that the site is up to date.

 

If the MRP process finds that an archived log is missing or corrupt on the standby, FAL is called to resolve the gap or obtain a new copy. Since MRP has no direct communications link with the primary, it must use the FAL_SERVER and FAL_CLIENT initialization parameters to resolve the gap. Both parameters must be set at the standby site. They are defined as:

 

FAL_SERVER: an Oracle Net service name that exists in the standby's tnsnames.ora file and points to the primary database listener.

FAL_CLIENT: an Oracle Net service name that exists in the primary's tnsnames.ora file and points to the standby database listener.

 

When MRP needs to resolve a gap, it uses the FAL_SERVER value to call the primary database. Once communication is established, MRP passes the FAL_CLIENT value to the primary ARCH process, which locates the remote archive destination with the corresponding service name and ships the missing archived redo logs.
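A sketch of the standby-side settings; the service names PRIM and STBY are illustrative:

alter system set fal_server = 'PRIM' scope=both;   -- resolves to the primary
alter system set fal_client = 'STBY' scope=both;   -- how the primary reaches this standby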

 

 

Archive process

When we take a database backup, we generally archive the current redo log first, and after backing up the datafiles we archive the current redo log again. We can do this with either alter system switch logfile or alter system archive log current, but they differ. The switch command triggers a background process to archive the log and returns to the command line immediately, so we can continue with the next task even though the archiving may not yet be complete. The archive command makes the user's own process do the archiving and does not return until archiving is complete. So we should prefer the archive command when writing backup scripts.
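In a backup script the difference looks like this:

alter system archive log current;   -- user process archives; returns only when done
alter system switch logfile;        -- returns at once; ARCn archives in the background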

 

We can use the following experiment to confirm it.

 

@>@myid

Wrote file /tmp/myvar.sql

sid:10 serial:7 pid:11 spid:12691

 

[oracle@chen ~]$ strace -p 12691 -o 12691.log &

[3] 12693

 

@>alter system archive log current;

System altered.

 

when freed blocks be reused

Filed under: Oracle — Yaping @ 4:12 am
Tags:

Tables and indexes differ in when blocks freed within a single (uncommitted) transaction can be reused.

Table blocks are allocated from the freelists in the following order (TFL = transaction freelist, PFL = process freelist, MFL = master freelist):
1.         allocate from the TFL (if the current transaction needs it);
2.         allocate from the PFL (if freelists is 1, the PFL and the MFL are the same);
3.         if the PFL is empty, move blocks from the MFL to the PFL;
4.         if the MFL is empty, move blocks from the TFL (if usable, i.e. committed);
5.         if the MFL is still empty (TFL unusable), advance the HWM;
6.         if the extent has no free space, allocate a new extent;
7.         if the tablespace has no space, the tablespace must extend;
8.         if the tablespace fails to extend, raise an error

I create a table in which every block can hold only one row, and an index on it in which every leaf block holds only one key.

SQL> create table t1(id number, a char(2000),b char(2000),c char(2000));
Table created.

SQL> create index idx_t1_id on t1(rpad(’0′,1400,’0′)) tablespace TEST_2K;
Index created.

SQL> begin
for i in 1..10 loop
insert into t1 values (i,’a’,’b’,’c’);
end loop;
commit;
end;
/  2    3    4    5    6    7
PL/SQL procedure successfully completed.

Load data into table t1. As expected, the table consumes 10 blocks for its 10 rows, and the index 10 leaf blocks.

SQL> analyze table t1 compute statistics for table for all indexes;
Table analyzed.

SQL> select table_name PCT_USED,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_ROW_LEN from user_tables;
PCT_USED                         NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
—————————— ———- ———- ———— ———–
T1                                     10         11            0        6015

SQL> select BLEVEL,LEAF_BLOCKS,AVG_LEAF_BLOCKS_PER_KEY,AVG_DATA_BLOCKS_PER_KEY,NUM_ROWS from user_indexes;
    BLEVEL LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY   NUM_ROWS
———- ———– ———————– ———————– ———-
         4          10                      10                      10         10

SQL> analyze index idx_t1_id validate structure;
Index analyzed.

SQL> select HEIGHT,BLOCKS,LF_BLKS,LF_BLK_LEN,DEL_LF_ROWS,PCT_USED,USED_SPACE from index_stats;
    HEIGHT     BLOCKS    LF_BLKS LF_BLK_LEN DEL_LF_ROWS   PCT_USED USED_SPACE
———- ———- ———- ———- ———– ———- ———-
         5         32         10       1852           0         69      26847

Then run a PL/SQL block that deletes and re-inserts rows within one transaction.

SQL> begin
for i in 1..20 loop
delete from t1 where id=i;
insert into t1 values (i,’a’,’b’,’c’);
end loop;
commit;
end;
/  2    3    4    5    6    7    8
PL/SQL procedure successfully completed.

SQL> analyze table t1 compute statistics for table for all indexes;
Table analyzed.

SQL> select table_name PCT_USED,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_ROW_LEN from user_tables;
PCT_USED                         NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
—————————— ———- ———- ———— ———–
T1                                     20         20            0        6015

SQL> select BLEVEL,LEAF_BLOCKS,AVG_LEAF_BLOCKS_PER_KEY,AVG_DATA_BLOCKS_PER_KEY,NUM_ROWS from user_indexes;
    BLEVEL LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY   NUM_ROWS
———- ———– ———————– ———————– ———-
         5          30                      30                      20         20

SQL> analyze index idx_t1_id validate structure;
Index analyzed.

SQL> select HEIGHT,BLOCKS,LF_BLKS,LF_BLK_LEN,DEL_LF_ROWS,PCT_USED,USED_SPACE from index_stats;
    HEIGHT     BLOCKS    LF_BLKS LF_BLK_LEN DEL_LF_ROWS   PCT_USED USED_SPACE
———- ———- ———- ———- ———– ———- ———-
         6         64         30       1852          10         75      83387

The results show:
1.      the freed table blocks were reused;
2.      the freed leaf blocks were not reused.

Notes:
The analyze … compute and analyze … validate structure commands count empty blocks, but dbms_stats does not. In some situations this can cause SQL statements to use a fast full index scan even though it is not the optimal plan.

SQL> exec dbms_stats.gather_table_stats(OWNNAME=>’TEST’,TABNAME=>’T1′,CASCADE=>true);
PL/SQL procedure successfully completed.

SQL> select BLEVEL,LEAF_BLOCKS,AVG_LEAF_BLOCKS_PER_KEY,AVG_DATA_BLOCKS_PER_KEY,NUM_ROWS from user_indexes;
    BLEVEL LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY   NUM_ROWS
———- ———– ———————– ———————– ———-
         5          20                      20                      20         20

Now I rerun the block, but this time I commit after every delete/insert pair.

SQL> drop table t1;
Table dropped.

SQL> create table t1(id number, a char(2000),b char(2000),c char(2000));
Table created.

SQL>create index idx_t1_id on t1(rpad(’0′,1400,’0′)) tablespace TEST_2K;
Index created.

SQL> begin
for i in 1..10 loop
insert into t1 values (i,’a’,’b’,’c’);
end loop;
commit;
end;
/  2    3    4    5    6    7
PL/SQL procedure successfully completed.

SQL> begin
for i in 1..20 loop
delete from t1 where id=i;
insert into t1 values (i,’a’,’b’,’c’);
commit;
end loop;
end;
/  2    3    4    5    6    7    8
PL/SQL procedure successfully completed.

SQL> analyze table t1 compute statistics for table for all indexes;
Table analyzed.

SQL> select table_name PCT_USED,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_ROW_LEN from user_tables;
PCT_USED                         NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
—————————— ———- ———- ———— ———–
T1                                     20         20            0        6015

SQL> select BLEVEL,LEAF_BLOCKS,AVG_LEAF_BLOCKS_PER_KEY,AVG_DATA_BLOCKS_PER_KEY,NUM_ROWS from user_indexes;
    BLEVEL LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY   NUM_ROWS
———- ———– ———————– ———————– ———-
         5          20                      20                      20         20

SQL> analyze index idx_t1_id validate structure;
Index analyzed.

SQL> select HEIGHT,BLOCKS,LF_BLKS,LF_BLK_LEN,DEL_LF_ROWS,PCT_USED,USED_SPACE from index_stats;
    HEIGHT     BLOCKS    LF_BLKS LF_BLK_LEN DEL_LF_ROWS   PCT_USED USED_SPACE
———- ———- ———- ———- ———– ———- ———-
         6         64         20       1852           0         59      55107

The result shows the leaf blocks were reused.
The reason index blocks freed within one uncommitted transaction cannot be reused is rollback: the deleted keys must stay in place until the transaction commits.

The following tests look at tables in more detail. There are two cases: in the first, rows are inserted after the deletion has been committed; in the second, deletion and insertion happen within one transaction.

The steps below must be executed in order.

Case 1
============================================
session 1

SQL> create table t1(id number, a char(2000),b char(2000),c char(2000));
Table created.

SQL> begin
for i in 1..9 loop
insert into t1 values (i,’a’,’b’,’c’);
end loop;
commit;
end;
/   2    3    4    5    6    7
PL/SQL procedure successfully completed.

Then open another session and run the following SQL.

 

session 2

SQL> select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where owner=’TEST’;
SEGMENT_NAME          EXTENT_ID    FILE_ID   BLOCK_ID     BLOCKS
——————– ———- ———- ———- ———-
T1                            0          7          9          8
T1                            1          7         17          8

SQL> alter system checkpoint;
System altered.

SQL> alter system dump datafile 7 block min 9 block max 24;
System altered.

Then check the trace file: there is no TFL yet, and the first block in the freelist is block 18 (0x12).

buffer tsn: 6 rdba: 0×01c00009 (7/9)
  nfl = 1, nfb = 1 typ = 1 nxf = 0 ccnt = 7
  SEG LST:: flg: USED   lhd: 0×01c00012 ltl: 0×01c00015

Then I return back to session 1, run following sql.

session 1
SQL> delete from t1 where rownum<5;
4 rows deleted.
SQL> commit;
Commit complete.

session 2
SQL> alter system checkpoint;
System altered.
SQL> alter system dump datafile 7 block min 9 block max 24;
System altered.

Check the trace file; now there is one TFL, and its start block is 13 (0x0d).

  nfl = 1, nfb = 1 typ = 1 nxf = 1 ccnt = 7
  SEG LST:: flg: USED   lhd: 0×01c00012 ltl: 0×01c00015
  XCT LST:: flg: USED   lhd: 0×01c0000d ltl: 0×01c0000a xid: 0×0009.028.00000285

We can also follow block 13 to find the next block in the chain:
1.         if flg is O, the block is on a freelist; if it is -, the block is not;
2.         fnx shows the next block in the freelist.

buffer tsn: 6 rdba: 0×01c0000d (7/13)
scn: 0×0000.004a040f seq: 0×01 flg: 0×06 tail: 0×040f0601
frmt: 0×02 chkval: 0×364b type: 0×06=trans data
Block header dump:  0×01c0000d
 Object id on Block? Y
 seg/obj: 0×17ab  csc: 0×00.4a040b  itc: 2  flg: O  typ: 1 – DATA
     fsl: 2  fnx: 0×1c0000c ver: 0×01

session 3
SQL> insert into t1 values (99999,’a’,’b’,’c’);
1 row created.
SQL> commit;
Commit complete.

session 2
I dump the data blocks again; the result shows that session 3 used a block from the MFL, not from the TFL.

  nfl = 1, nfb = 1 typ = 1 nxf = 1 ccnt = 7
  SEG LST:: flg: USED   lhd: 0×01c00013 ltl: 0×01c00015
  XCT LST:: flg: USED   lhd: 0×01c0000d ltl: 0×01c0000a xid: 0×0009.028.00000285

Case 2
============================================
session 1
SQL> drop table t1;
Table dropped.
SQL> create table t1(id number, a char(2000),b char(2000),c char(2000));
Table created.
SQL> begin
for i in 1..9 loop
insert into t1 values (i,’a’,’b’,’c’);
end loop;
commit;
end;
/  2    3    4    5    6    7
PL/SQL procedure successfully completed.
SQL> delete from t1 where rownum<5;
4 rows deleted.

session 2
SQL> select SEGMENT_NAME,EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where owner=’TEST’;
SEGMENT_NAME          EXTENT_ID    FILE_ID   BLOCK_ID     BLOCKS
——————– ———- ———- ———- ———-
T1                            0          7          9          8
T1                            1          7         17          8
SQL> alter system checkpoint;
System altered.
SQL> alter system dump datafile 7 block min 9 block max 24;
System altered.

Check the trace file. The TFL exists even though the transaction is uncommitted.

  nfl = 1, nfb = 1 typ = 1 nxf = 1 ccnt = 7
  SEG LST:: flg: USED   lhd: 0×01c00012 ltl: 0×01c00015
  XCT LST:: flg: USED   lhd: 0×01c0000d ltl: 0×01c0000a xid: 0×0001.019.000001da

session 1
SQL> insert into t1 values (99999,’a’,’b’,’c’);
1 row created.
SQL> insert into t1 values (9999,’a’,’b’,’c’);
1 row created.
SQL> insert into t1 values (999,’a’,’b’,’c’);
1 row created.
SQL> commit;
Commit complete.

session 2
SQL> alter system checkpoint;
System altered.
SQL> alter system dump datafile 7 block min 9 block max 24;
System altered.

This time we find the transaction uses a block from its own TFL, not from the MFL.

  nfl = 1, nfb = 1 typ = 1 nxf = 1 ccnt = 7
  SEG LST:: flg: USED   lhd: 0×01c00012 ltl: 0×01c00015
  XCT LST:: flg: USED   lhd: 0×01c0000b ltl: 0×01c0000a xid: 0×0001.019.000001da

If we use an ASSM tablespace, behavior is very different from manual freelist management.

Refer to http://www.ixora.com.au/q+a/datablock.htm

Low value/High value

Filed under: Oracle — Yaping @ 4:10 am
Tags:

How do we convert the internal format of low_value/high_value in *_tab_columns into readable values?
The dbms_stats package provides a procedure named convert_raw_value for this.

SQL> create table t1 as select object_id,object_name,created from all_objects;
Table created.

SQL> analyze table t1 compute statistics;
Table analyzed.

SQL> l
  1* select COLUMN_NAME,DATA_TYPE,LOW_VALUE,HIGH_VALUE from user_tab_columns
SQL> /
COLUMN_NAME                    DATA_TYPE            LOW_VALUE                      HIGH_VALUE
—————————— ——————– —————————— —————————————-
OBJECT_ID                      NUMBER               C20317                         C23D26
OBJECT_NAME                    VARCHAR2             414747584D4C494D50             5F414C4C5F52455053495445535F4E4557
CREATED                        DATE                 786B051B0F1505                 786B0719182E2D

SQL> create or replace function convert_raw_value(p_rawval in raw,p_type in varchar2 default ‘VARCHAR2’)
  2    return varchar2
  3  as
  4    v_type varchar2(20) := p_type;
  5    v_res_char varchar2(50);
  6    v_res_date date;
  7    v_res_number number;
  8    v_res_val varchar2(50);
  9  begin
10    if v_type = ‘VARCHAR2’ or v_type is null then
11       dbms_stats.convert_raw_value(p_rawval,v_res_char);
12       v_res_val := v_res_char;
13    end if;
14   
15    if v_type = ‘NUMBER’ then
16       dbms_stats.convert_raw_value(p_rawval,v_res_number);
17       v_res_val := to_char(v_res_number);
18    end if;
19   
20    if v_type = ‘DATE’ then
21       dbms_stats.convert_raw_value(p_rawval,v_res_date);
22       v_res_val := to_char(v_res_date,’yyyy/mm/dd hh24:mi:ss’);
23    end if;
24   
25    return v_res_val;
26 
27  exception
28    when others then
29         return ‘ERROR’;
30  end;
31  /
Function created.

SQL> select COLUMN_NAME,convert_raw_value(LOW_VALUE,DATA_TYPE) LOW_VALUE,convert_raw_value(HIGH_VALUE,DATA_TYPE) HIGH_VALUE from user_tab_columns;
COLUMN_NAME                    LOW_VALUE                      HIGH_VALUE
—————————— —————————— ——————————
OBJECT_ID                      222                            6037
OBJECT_NAME                    AGGXMLIMP                      _ALL_REPSITES_NEW
CREATED                        2007/05/27 14:20:04            2007/07/25 23:45:44

 

————————————

Manual convert

C20317 => 0xC2 0x03 0x17 => exponent byte 0xC2 = 194 = 193 + 1; digit bytes 3 and 23 are stored as digit + 1 => (3-1)*100 + (23-1) = 222

414747584D4C494D50 => 0x41 47 47 58 4D 4C 49 4D 50 => ASCII 65 71 71 88 77 76 73 77 80 => A G G X M L I M P

786B051B0F1505 => 0x78 6B 05 1B 0F 15 05 => 120 107 05 27 15 21 05 => (120-100) (107-100) 05 27 (15-1) (21-1) (05-1) => 2007/05/27 14:20:04
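The manual decoding can be double-checked against the documented API (needs set serveroutput on):

declare
  n number;
begin
  dbms_stats.convert_raw_value(hextoraw('C20317'), n);
  dbms_output.put_line(n);   -- prints 222
end;
/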

 

Index split

Filed under: Oracle — Yaping @ 3:00 am
Tags:

There are two methods to handle an index block split, called 50/50 and 90/10 splits. When the indexed value increases monotonically, as with a sequence, leaf block splits should use the 90/10 method, which saves time and space. But there is a bug in 90/10 splits.

 

Version 9204

 

Session 1

SQL> drop table t1;

Table dropped.

 

SQL> create table t1 (id number);

Table created.

 

SQL> create index idx_t1_id on t1(id);

Index created.

 

Session 2

SQL> l

  1* select name,value from v$sysstat where name in (‘leaf node splits’,’leaf node 90-10 splits’,’branch node splits’)

SQL> /

NAME                                                                  VALUE

—————————————————————- ———-

leaf node splits                                                        179

leaf node 90-10 splits                                                   36

branch node splits                                                        0

 

 

Session 1

SQL> begin

  2    for i in 1..10000 loop

  3        insert into t1 values (round(dbms_random.value(0,100000)));

  4    end loop;

  5    commit;

  6  end;

  7  /

PL/SQL procedure successfully completed.

 

SQL> analyze index idx_t1_id validate structure;

Index analyzed.

 

SQL> select HEIGHT,BLOCKS,LF_BLKS,LF_BLK_LEN,DEL_LF_ROWS,PCT_USED,USED_SPACE from index_stats;

    HEIGHT     BLOCKS    LF_BLKS LF_BLK_LEN DEL_LF_ROWS   PCT_USED USED_SPACE

———- ———- ———- ———- ———– ———- ———-

         2         40         32       7996           0         61     159251

 

 

Session 2

SQL> l

  1* select name,value from v$sysstat where name in (‘leaf node splits’,’leaf node 90-10 splits’,’branch node splits’)

SQL> /

 

NAME                                                                  VALUE

—————————————————————- ———-

leaf node splits                                                        210

leaf node 90-10 splits                                                   36

branch node splits                                                        0

 

 

 

session 1

SQL> truncate table t1;

Table truncated.

 

SQL> begin

  2    for i in 1..10000 loop

  3        insert into t1 values (i);

  4    end loop;

  5    commit;        

  6  end;

  7  /

PL/SQL procedure successfully completed.

 

SQL> analyze index idx_t1_id validate structure;

Index analyzed.

 

SQL> select HEIGHT,BLOCKS,LF_BLKS,LF_BLK_LEN,DEL_LF_ROWS,PCT_USED,USED_SPACE from index_stats;

    HEIGHT     BLOCKS    LF_BLKS LF_BLK_LEN DEL_LF_ROWS   PCT_USED USED_SPACE

———- ———- ———- ———- ———– ———- ———-

         2         24         19       7996           0         94     149999

 

 

Session 2

SQL> l

  1* select name,value from v$sysstat where name in (‘leaf node splits’,’leaf node 90-10 splits’,’branch node splits’)

SQL> /

 

NAME                                                                  VALUE

—————————————————————- ———-

leaf node splits                                                        228

leaf node 90-10 splits                                                   54

branch node splits                                                        0

 

 

session 1

SQL> truncate table t1; 

Table truncated.

 

SQL> begin

  2    for i in 1..10000 loop

  3        insert into t1 values (i);

  4        commit;

  5    end loop;       

  6  end;

  7  /

PL/SQL procedure successfully completed.

 

SQL> analyze index idx_t1_id validate structure;

Index analyzed.

 

SQL> select HEIGHT,BLOCKS,LF_BLKS,LF_BLK_LEN,DEL_LF_ROWS,PCT_USED,USED_SPACE from index_stats;

    HEIGHT     BLOCKS    LF_BLKS LF_BLK_LEN DEL_LF_ROWS   PCT_USED USED_SPACE

———- ———- ———- ———- ———– ———- ———-

         2         40         36       7996           0         51     150184

 

 

Session 2

SQL> l

  1* select name,value from v$sysstat where name in (‘leaf node splits’,’leaf node 90-10 splits’,’branch node splits’)

SQL> /

NAME                                                                  VALUE

—————————————————————- ———-

leaf node splits                                                        263

leaf node 90-10 splits                                                   54

branch node splits                                                        0

 

 

Oops: the last case split leaf blocks with the 50/50 method even though the key value increases monotonically, so the lower half of each split block will never receive inserts again. Some systems have sequence-based primary keys on every table; in this version that wastes a mass of space.

 

Index split behavior in 9.2.0.8 is similar.

 

 

Version 10203

 

Session 2

SQL> create tablespace test datafile ‘/oradata/ora10g/test/test01.dbf’ size 20m segment space management manual;

Tablespace created.

 

SQL> select name,value from v$sysstat where name in (‘leaf node splits’,’leaf node 90-10 splits’,’branch node splits’);

NAME                                                                  VALUE

—————————————————————- ———-

leaf node splits                                                          0

leaf node 90-10 splits                                                    0

branch node splits                                                        0

 

 

session 1

SQL> drop table t1;

Table dropped.

 

SQL> create table t1 (id number) tablespace test;

Table created.

 

SQL> create index idx_t1_id on t1(id) tablespace test;

Index created.

 

SQL> begin

  2    for i in 1..10000 loop

  3        insert into t1 values (i);

  4        commit;

  5    end loop;      

  6  end;

  7  /

PL/SQL procedure successfully completed.

 

SQL> analyze index idx_t1_id validate structure;

Index analyzed.

 

SQL> select HEIGHT,BLOCKS,LF_BLKS,LF_BLK_LEN,DEL_LF_ROWS,PCT_USED,USED_SPACE from index_stats;

    HEIGHT     BLOCKS    LF_BLKS LF_BLK_LEN DEL_LF_ROWS   PCT_USED USED_SPACE

———- ———- ———- ———- ———– ———- ———-

         2         24         19       7996           0         94     149999

 

 

 

Session 2

SQL> l

  1* select name,value from v$sysstat where name in (‘leaf node splits’,’leaf node 90-10 splits’,’branch node splits’)

SQL> /

NAME                                                                  VALUE

—————————————————————- ———-

leaf node splits                                                         18

leaf node 90-10 splits                                                   18

branch node splits                                                        0

 

 

 

session 1

SQL> truncate table t1;

Table truncated.

 

SQL> begin

  2    for i in 1..10000 loop

  3        insert into t1 values (i);

  4    end loop;

  5    commit;      

  6  end;

  7  /

PL/SQL procedure successfully completed.

 

SQL> analyze index idx_t1_id validate structure;

Index analyzed.

 

SQL> select HEIGHT,BLOCKS,LF_BLKS,LF_BLK_LEN,DEL_LF_ROWS,PCT_USED,USED_SPACE from index_stats;

    HEIGHT     BLOCKS    LF_BLKS LF_BLK_LEN DEL_LF_ROWS   PCT_USED USED_SPACE

———- ———- ———- ———- ———– ———- ———-

         2         24         19       7996           0         94     149999

 

 

Session 2

SQL> l

  1* select name,value from v$sysstat where name in (‘leaf node splits’,’leaf node 90-10 splits’,’branch node splits’)

SQL> /

NAME                                                                  VALUE

—————————————————————- ———-

leaf node splits                                                         36

leaf node 90-10 splits                                                   36

branch node splits                                                        0

 

It seems this issue is fixed in this version.

Undo/rollback

Filed under: Oracle — Yaping @ 2:51 am
Tags:

Overview

The undo/rollback segment is used for transaction rollback and for read-consistent or flashback queries.

 

There are two management methods, auto and manual, respectively called undo and rollback. The basic structure is the same.

 

Every database has two kinds of undo/rollback space: for system and for user data. The system undo/rollback is allocated from the system tablespace and is mainly used for dictionary data.

 

In a RAC/OPS environment, every instance has its own private undo/rollback space; it can be read by other instances but is written only by its owner. The system undo/rollback is shared by all instances.

 

 

Structure

Every segment header contains some control data; the best-known structure is the transaction table. The number of transaction table entries is limited by the block size: with an 8k block size it cannot exceed 98, which means no more than 98 active transactions can share one undo/rollback segment at the same time.

 

TRN TBL::

index state cflags wrap# uel scn dba parent-xid nub stmt_num

————————————————————————————————

0x00 9 0x00 0x00b4 0x002e 0x0000.00295c5e 0x00801210 0x0000.000.00000000 0x00000001 0x00000000

 

State

A state value of 9 indicates the slot holds an inactive (committed) transaction, and 10 means an active transaction; other values exist, for example for distributed transactions or 2PC forced rollback.

 

Wrap#

Sequence number indicates reuse of the slot. This number is incremented as each slot is reused.

 

Uel

Stores the starting extent of an active transaction, or the next pointer in the list of committed transactions if the transaction is inactive.

 

Nub

Holds the number of blocks in the undo chain.

 

You can query the v$resource_limit view to determine how many transactions the database reaches at peak time. Be careful with distributed queries: a distributed query starts a transaction even though it consumes no undo blocks, and every transaction needs one transaction table entry. If such queries are not committed or do not exit normally, they can use up all transaction table entries, after which no new transaction can start unless the segment extends.
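A minimal check of transaction usage at peak time:

select resource_name, current_utilization, max_utilization, limit_value
  from v$resource_limit
 where resource_name = 'transactions';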

 

 

TRN CTL:: seq: 0x0098 chd: 0x0006 ctl: 0x0013 inc: 0x00000000 nfb: 0x0005 mgc: 0x8201 xts: 0x0068 flg: 0x0001 opt: 2147483646 (0x7ffffffe) uba: 0x00800025.0098.01 scn: 0x0000.00293471

 

Scn

The committed SCN of the transaction most recently rolled out of the transaction table. Any transaction in this rollback segment associated with an SCN lower than this value is committed. It can serve as the commit SCN during delayed block cleanout when the slot has been reused.

 

CHD (commit head)

Head of the chain of committed transactions, ordered by increasing SCN. It indicates which slot will be reused first for a new transaction.

 

CTL (commit tail)

Tail of the chain of the list of committed transactions.

 

OPT

Value for the parameter optimal.

 

 

Retention Table

———————————————————–

Extent Number:0 Commit Time: 1202330459

 

There is also a structure called the retention table, introduced in Oracle 9i. The time format is a Julian date. The commit time is not updated at every transaction commit; it is related to undo entries crossing the segment highwater mark.

 

 

Itl Xid Uba Flag Lck Scn/Fsc

0x01 0x000a.026.000000ba 0x00800035.00d2.19 –U- 3 fsc 0x0000.00297f88

 

Fsc: free space credit, tracking the space freed in this block by the transaction.

 

 

Rollback segments

The number of rollback segments is derived from the initialization parameters transactions and transactions_per_rollback_segment.

 

System rbs + ceil(transactions/transactions_per_rollback_segment)

 

Every rollback segment must contain at least 2 extents; I suggest a minimum of 20 extents.

 

The instance first acquires the private rollback segments specified in the initialization parameter rollback_segments; if those are not sufficient, it attempts to acquire additional public rollback segments not already acquired by other instances.

 

Generally every rollback segment should be the same size, but in some environments differently sized rollback segments are worth considering. For example, if a database runs as an OLTP system by day and performs batch processing by night, we can create two rollback tablespaces with different rollback segment sizes and switch between them by day and night.

 

Generally I don't suggest setting an optimal value for rollback segments. If there is an occasional large transaction, you may set optimal to let the segment extend and shrink back, which saves some space. But if the database constantly needs rollback segments larger than the optimal value, don't set optimal; size the segments directly to the expected value.

 

Remember that transactions, not select statements, cause a rollback segment to extend. Even if there is enough space in the rollback tablespace, a long-running select can still raise ORA-01555.

 

Transactions are allocated to rollback segments by a least-recently-committed algorithm: as a transaction commits, its segment is placed in line behind the segments that have no current transactions.

When a transaction reaches the end of an extent, it has two choices: either wrap into the least recently used already-allocated extent, or allocate a new extent.

 

The choice hinges on whether that least recently used extent contains any active transactions. If any undo block in it belongs to a transaction that has not yet committed, Oracle cannot use the extent, and it cannot skip it to use a later extent even if that one holds no active transactions; instead a new extent is added to the segment.

 

v$rollstat.EXTENDS counts new extents added; WRAPS counts reuse of already-allocated extents. A large WRAPS value may mean transactions are large or the extent size is too small.

 

From the above, a few small uncommitted transactions can cause a rollback segment to extend continually and eventually exhaust the rollback tablespace. If all transactions commit promptly, even with hundreds or thousands active simultaneously, there is a very good chance no rollback segment ever extends.

 

 

Undo

One of the benefits of undo is the dynamic creation of undo segments as needed.

 

undo_retention

This parameter's functionality is available in either manual or auto mode since Oracle 9i. The key enhancement with respect to resolving ORA-01555 is the undo_retention parameter, rather than undo_management.

 

Undo retention is only a soft limit: if there is not enough space to keep the data, committed transaction data will be overwritten. Oracle 10g introduced a way to guarantee the retention time: specify the retention guarantee clause when creating or altering an undo tablespace.
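A sketch; the tablespace name and file path are illustrative:

create undo tablespace undotbs2
  datafile '/u03/oradata/undotbs2_01.dbf' size 500m
  retention guarantee;

alter tablespace undotbs2 retention noguarantee;   -- back to the default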

 

You can calculate how much undo space the instance needs for a given retention time by querying v$undostat.
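For example, a rough sizing query; the 900-second retention and 8k block size are assumptions to adjust:

select max(undoblks / ((end_time - begin_time) * 86400)) * 900 * 8192 bytes_needed
  from v$undostat;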

 

The following is the space allocation algorithm; the highlighted part changed in Oracle 9i.

 

1. As space is needed for transactions, a simple “next-undo-block” pointer advances from block to block within an extent in the body of a rollback segment.

 

Each transaction takes the next undo block as needed, and then advances the pointer to the next block.

 

2. At some point just before the last blocks in the current undo extent are reached, one of the transactions is selected to make a decision:

 

Looking ahead to the next extent of the rollback segment, are there any undo blocks currently in use by active (uncommitted) transactions?

 

    If so, then EXTEND the rollback segment by adding a new extent.

 

If the EXTEND operation is not possible because the tablespace is out of space, then raise the ORA-01562 (out of space) error message

 

      Else, are any of the “inactive” transactions in the next extent of the rollback segment committed less than UNDO_RETENTION seconds ago?

 

If so, then EXTEND the rollback segment by adding a new extent

 

Is there enough space in the tablespace to EXTEND the rollback segment by another extent?

 

If not, then WRAP into the next existing extent of the rollback segment.

 

      Else, then WRAP the rollback segment by directing the “next-undo-block pointer” into that next extent.

 

3. And so the advance of the “next-undo-block” pointer sweeps on into the extent chosen. If an EXTEND has occurred, then no overwrite of previous undo information is taking place. If a WRAP has occurred, then all of the blocks being written are overwriting undo information from previously concluded or inactive transactions.

 

 

Others

v$waitstat

save undo block

A save undo segment is generated when a tablespace is taken offline in an emergency; this statistic tells how many waits have occurred on save undo segment blocks.

 

save undo header

This tells how many waits have occurred on the save undo segment header.

 

In-Memory Undo

In-memory undo keeps undo information in memory areas rather than in undo segments, which reduces disk I/O. This functionality was introduced in Oracle 10g.

 

v$transaction

UBAFIL, UBABLK: current undo block address

 

START_UBAFIL, START_UBABLK: undo block address when transaction start

 

USED_UBLK: number of undo blocks consumed by the transaction. One undo block can hold only one active transaction, but a block may contain records of an active transaction and of inactive transactions simultaneously. The undo blocks of one transaction need not be consecutive. Whether this value is increasing or decreasing tells us whether the transaction is working or rolling back.

 

USED_UREC: this value counts records for the table and for the indexes affected by the update. The record count may differ slightly between bitmap and b*tree indexes, and bitmap index behavior also differs between Oracle 9i and Oracle 10g.

 

x$ktuxe

This fixed table stores all transaction table entries. Sometimes an aborted transaction cannot be seen in v$transaction, but we can query x$ktuxe with the condition KTUXESTA='ACTIVE' to obtain all active transactions.
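A sketch of such a query (x$ tables require a SYSDBA connection):

select ktuxeusn usn, ktuxeslt slot, ktuxesqn seq, ktuxesta status
  from x$ktuxe
 where ktuxesta = 'ACTIVE';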

 

Dump undo blocks associated with specific transaction

Alter system dump undo block ‘segment_name’ xid usn slot seq;

 

 

 

 

References

CATS, DOGS, AND ORA-01555 by Tim Gorman, SageLogix

Data Management by Oracle

 

August 29, 2008

Put datafiles online without recovery

Filed under: Oracle — Yaping @ 2:55 am
Tags: ,

Here I'll introduce a method to bring offline datafiles online without recovery.

We know Oracle compares controlfile information with the datafile header information to determine whether the DB is consistent and whether it needs crash recovery or media recovery.

When tablespaces/datafiles are taken offline normally, the DB checkpoints the datafiles and updates both the datafile headers and the controlfile. If they are taken offline immediate, only the controlfile information is updated. Files taken offline normally can be put online without recovery; otherwise they need recovery.
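For reference, the two offline modes at the tablespace level (offline immediate requires archivelog mode; in noarchivelog the datafile-level offline drop used below is the equivalent):

alter tablespace test offline normal;      -- checkpoints the files; online needs no recovery
alter tablespace test offline immediate;   -- no checkpoint; online requires recovery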

 

I run this experiment on a noarchivelog-mode DB. The main steps are:

1.       Take one datafile offline;

2.       Switch logfiles, so that the offline datafile misses redo logs necessary to bring it online;

3.       Modify the file header directly, advancing it past the missing redo;

4.       Re-create the controlfile, mainly to remove the influence of the old controlfile information

 

 

@>conn test/test

Connected.

 

@>drop table t1;

Table dropped.

 

@>create table t1 tablespace test as select rownum id from all_objects where rownum<6;

Table created.

 

@>conn /as sysdba

Connected.

 

@>archive log list

Database log mode              No Archive Mode

Automatic archival             Disabled

Archive destination            /opt/app/oracle/product/9.2.0/dbs/arch

Oldest online log sequence     37

Current log sequence           39

 

@>select tablespace_name ,file_name from dba_data_files;

TABLESPACE_NAME                FILE_NAME

—————————— ————————————————–

SYSTEM                         /u03/oradata/9204/chen/system01.dbf

UNDOTBS                        /u03/oradata/9204/chen/undotbs01.dbf

TEST                           /u03/oradata/9204/chen/test01.dbf

ASSM                           /u03/oradata/9204/chen/assm01.dbf

 

@>alter database datafile ‘/u03/oradata/9204/chen/test01.dbf’ offline drop;

Database altered.

 

@>desc test.t1

 Name                                      Null?    Type

 —————————————– ——– —————————-

 ID                                                 NUMBER

 

@>select * from test.t1;

select * from test.t1

                   *

ERROR at line 1:

ORA-00376: file 3 cannot be read at this time

ORA-01110: data file 3: ‘/u03/oradata/9204/chen/test01.dbf’

 

 

@>select * from v$recover_file;

     FILE# ONLINE  ONLINE_ ERROR                                                                CHANGE# TIME

———- ——- ——- —————————————————————– ———- ———

         3 OFFLINE OFFLINE                                                                       639738 13-FEB-08

 

 

@>alter system switch logfile;

System altered.

 

@>alter database datafile ‘/u03/oradata/9204/chen/test01.dbf’ online;

alter database datafile ‘/u03/oradata/9204/chen/test01.dbf’ online

*

ERROR at line 1:

ORA-01113: file 3 needs media recovery

ORA-01110: data file 3: ‘/u03/oradata/9204/chen/test01.dbf’

 

 

@>alter database recover datafile ‘/u03/oradata/9204/chen/test01.dbf’;

alter database recover datafile ‘/u03/oradata/9204/chen/test01.dbf’

*

ERROR at line 1:

ORA-00279: change 639738 generated at 02/13/2008 06:09:57 needed for thread 1

ORA-00289: suggestion : /opt/app/oracle/product/9.2.0/dbs/arch1_39.dbf

ORA-00280: change 639738 for thread 1 is in sequence #39

 

 

@>select name,STATUS,RECOVER,FUZZY,CHECKPOINT_CHANGE#,CHECKPOINT_COUNT from v$datafile_header;

NAME                                               STATUS  REC FUZ CHECKPOINT_CHANGE# CHECKPOINT_COUNT

————————————————– ——- — — —————— —————-

/u03/oradata/9204/chen/system01.dbf                ONLINE  NO  YES             654168               77

/u03/oradata/9204/chen/undotbs01.dbf               ONLINE  NO  YES             654168               77

/u03/oradata/9204/chen/test01.dbf                  OFFLINE YES YES             639738               37

/u03/oradata/9204/chen/assm01.dbf                  ONLINE  NO  YES             654341               45

 

@>select name,STATUS,CHECKPOINT_CHANGE#,LAST_CHANGE#,OFFLINE_CHANGE# from v$datafile;

NAME                                               STATUS  CHECKPOINT_CHANGE# LAST_CHANGE# OFFLINE_CHANGE#

————————————————– ——- —————— ———— —————

/u03/oradata/9204/chen/system01.dbf                SYSTEM              654168                            0

/u03/oradata/9204/chen/undotbs01.dbf               ONLINE              654168                            0

/u03/oradata/9204/chen/test01.dbf                  RECOVER             639738       639868          639725

/u03/oradata/9204/chen/assm01.dbf                  ONLINE              654341                       653877

 

 

Now I use BBED to modify the /u03/oradata/9204/chen/test01.dbf file header. Here I change four pieces of data: the checkpoint SCN, the checkpoint time, the checkpoint counts, and the checkpoint RBA. I refer to the system datafile header to obtain the target values.
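For reference, BBED is typically started with a parameter file along these lines (a sketch; on 9i the default password is blockedit, and the listfile maps file numbers to datafile names and sizes in bytes — the sizes below are hypothetical):

$ cat filelist.txt
1 /u03/oradata/9204/chen/system01.dbf 524288000
3 /u03/oradata/9204/chen/test01.dbf 104857600

$ cat bbed.par
blocksize=8192
listfile=filelist.txt
mode=edit

$ bbed parfile=bbed.par password=blockedit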

 

 

BBED> set dba 3,1

        DBA             0x00c00001 (12582913 3,1)

 

BBED>

BBED> print kcvfh

struct kcvfh, 360 bytes                     @0

   struct kcvfhckp, 36 bytes                @140

      struct kcvcpscn, 8 bytes              @140

         ub4 kscnbas                        @140      0x0009c2fa

         ub2 kscnwrp                        @144      0x0000

      ub4 kcvcptim                          @148      0x26899f35

      ub2 kcvcpthr                          @152      0x0001

      union u, 12 bytes                     @156

         struct kcvcprba, 12 bytes          @156

            ub4 kcrbaseq                    @156      0x00000027

            ub4 kcrbabno                    @160      0x000000ab

            ub2 kcrbabof                    @164      0x0010

         struct kcvcptr, 12 bytes           @156

            struct kcrtrscn, 8 bytes        @156

               ub4 kscnbas                  @156      0x00000027

               ub2 kscnwrp                  @160      0x00ab

            ub4 kcrtrtim                    @164      0x00000010

   ub4 kcvfhcpc                             @176      0x00000025

   ub4 kcvfhrts                             @180      0x2689a13d

   ub4 kcvfhccc                             @184      0x00000024

 

 

BBED> dump /v dba 3,1 offset 140 count 32

 File: /u03/oradata/9204/chen/test01.dbf (3)

 Block: 1       Offsets:  140 to  171  Dba:0x00c00001

——————————————————-

 fac20900 00000000 359f8926 01009162 l ú?……5..&…b

 27000000 ab000000 10000000 02000000 l ‘…?………..

 

 <16 bytes per line>

 

BBED> set dba 1,1

        DBA             0x00400001 (4194305 1,1)

 

BBED>

 

BBED> print kcvfh

struct kcvfh, 360 bytes                     @0

   struct kcvfhckp, 36 bytes                @140

      struct kcvcpscn, 8 bytes              @140

         ub4 kscnbas                        @140      0x0009fb58

         ub2 kscnwrp                        @144      0x0000

      ub4 kcvcptim                          @148      0x268a4e7e

      ub2 kcvcpthr                          @152      0x0001

      union u, 12 bytes                     @156

         struct kcvcprba, 12 bytes          @156

            ub4 kcrbaseq                    @156      0x00000031

            ub4 kcrbabno                    @160      0x0000000e

            ub2 kcrbabof                    @164      0x0010

         struct kcvcptr, 12 bytes           @156

            struct kcrtrscn, 8 bytes        @156

               ub4 kscnbas                  @156      0x00000031

               ub2 kscnwrp                  @160      0x000e

            ub4 kcrtrtim                    @164      0xbfff0010

   ub4 kcvfhcpc                             @176      0x0000004d

   ub4 kcvfhrts                             @180      0x26899cac

   ub4 kcvfhccc                             @184      0x0000004c

 

BBED> dump /v dba 1,1 offset 140 count 32

 File: /u03/oradata/9204/chen/system01.dbf (1)

 Block: 1       Offsets:  140 to  171  Dba:0x00400001

——————————————————-

 58fb0900 00000000 7e4e8a26 01003495 l X?……~N.&..4.

 31000000 0e000000 1000ffbf 02000000 l 1……….?….

 

 <16 bytes per line>

 

 

BBED> modify /x 58fb0900 dba 3,1 offset 140

Warning: contents of previous BIFILE will be lost. Proceed? (Y/N) y

 File: /u03/oradata/9204/chen/test01.dbf (3)

 Block: 1                Offsets:  140 to  171           Dba:0x00c00001

————————————————————————

 58fb0900 00000000 359f8926 01009162 27000000 ab000000 10000000 02000000

 

 <32 bytes per line>

 

BBED> modify /x 7e4e8a26 dba 3,1 offset 148

 File: /u03/oradata/9204/chen/test01.dbf (3)

 Block: 1                Offsets:  148 to  179           Dba:0x00c00001

————————————————————————

 7e4e8a26 01009162 27000000 ab000000 10000000 02000000 00000000 25000000

 

 <32 bytes per line>

 

BBED> modify /x 31 dba 3,1 offset 156

 File: /u03/oradata/9204/chen/test01.dbf (3)

 Block: 1                Offsets:  156 to  187           Dba:0x00c00001

————————————————————————

 31000000 ab000000 10000000 02000000 00000000 25000000 3da18926 24000000

 

 <32 bytes per line>

 

BBED> modify /x 0e dba 3,1 offset 160

 File: /u03/oradata/9204/chen/test01.dbf (3)

 Block: 1                Offsets:  160 to  191           Dba:0x00c00001

————————————————————————

 0e000000 10000000 02000000 00000000 25000000 3da18926 24000000 00000000

 

 <32 bytes per line>

 

BBED> modify /x 10 dba 3,1 offset 164

 File: /u03/oradata/9204/chen/test01.dbf (3)

 Block: 1                Offsets:  164 to  195           Dba:0x00c00001

————————————————————————

 10000000 02000000 00000000 25000000 3da18926 24000000 00000000 00000000

 

 <32 bytes per line>

 

The checkpoint counts are derived from the above information.

The log sequence in the offline file is 0x27 with a checkpoint count of 0x25; the current sequence is 0x31, so the checkpoint count (kcvfhcpc) can be set to 0x2f, with kcvfhccc one less at 0x2e. I suspect nothing breaks even if the checkpoint counts are left unchanged.

 

0x27 -> 0x25

0x31 -> 0x2f

 

BBED> modify /x 2f dba 3,1 offset 176

 File: /u03/oradata/9204/chen/test01.dbf (3)

 Block: 1                Offsets:  176 to  207           Dba:0x00c00001

————————————————————————

 2f000000 3da18926 24000000 00000000 00000000 00000000 00000000 00000000

 

 <32 bytes per line>

 

BBED> modify /x 2e dba 3,1 offset 184

 File: /u03/oradata/9204/chen/test01.dbf (3)

 Block: 1                Offsets:  184 to  215           Dba:0x00c00001

————————————————————————

 2e000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000

 

 <32 bytes per line>

 

BBED> sum dba 3,1 apply

Check value for File 3, Block 1:

current = 0x615a, required = 0x615a

 

 

Now I put this offline file online again by re-creating the controlfile, skipping recovery through the missing redo logs.

 

@>alter database backup controlfile to trace;

Database altered.

 

@>shutdown abort

ORACLE instance shut down.

 

 

@>STARTUP NOMOUNT

ORACLE instance started.

 

Total System Global Area  470881660 bytes

Fixed Size                   451964 bytes

Variable Size             369098752 bytes

Database Buffers          100663296 bytes

Redo Buffers                 667648 bytes

 

@>CREATE CONTROLFILE REUSE DATABASE “CHEN” NORESETLOGS  NOARCHIVELOG

  2    SET STANDBY TO MAXIMIZE PERFORMANCE

  3      MAXLOGFILES 5

  4      MAXLOGMEMBERS 5

  5      MAXDATAFILES 100

  6      MAXINSTANCES 1

  7      MAXLOGHISTORY 226

  8  LOGFILE

  9    GROUP 1 ‘/u03/oradata/9204/chen/redo01.log’  SIZE 10M,

 10    GROUP 2 ‘/u03/oradata/9204/chen/redo02.log’  SIZE 10M,

 11    GROUP 3 ‘/u03/oradata/9204/chen/redo03.log’  SIZE 10M

 12  — STANDBY LOGFILE

 13  DATAFILE

 14    ‘/u03/oradata/9204/chen/system01.dbf’,

 15    ‘/u03/oradata/9204/chen/undotbs01.dbf’,

 16    ‘/u03/oradata/9204/chen/test01.dbf’,

 17    ‘/u03/oradata/9204/chen/assm01.dbf’

 18  CHARACTER SET US7ASCII

 19  ;

Control file created.

 

@>RECOVER DATABASE

Media recovery complete.

 

@>ALTER DATABASE OPEN;

Database altered.

 

@>desc test.t1

 Name                                                  Null?    Type

 —————————————————– ——– ————————————

 ID                                                             NUMBER

 

@>select * from test.t1;

        ID

———-

         1

         2

         3

         4

         5

 

 

We can use this approach to get data back in some situations where it cannot be retrieved through normal methods.

But some data may be missing, and the dictionary may no longer match the actual data.
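If data is rescued this way, it is prudent to verify what came back; a minimal check (table name from this test):

-- validate the rescued table and its indexes against each other
analyze table test.t1 validate structure cascade;

Running DBVERIFY (dbv) against the modified datafile and exporting the rescued data into a cleanly created database are also reasonable precautions.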

 

 

References

Disassembling the Oracle Data Block

Advanced Backup, Restore, and Recover Techniques

Recovery Architecture Components
