Yaping's Weblog

September 2, 2008

Change Password with BBED

Filed under: Oracle — Yaping @ 2:33 am

First, we prepare the username and password we want to log on with; assume we want to log on to the database as system/system.

Then we obtain the hash value for that password; it can be generated in a test environment.

@>alter user system identified by system;
User altered.

@>select USERNAME,PASSWORD from dba_users where USERNAME='SYSTEM';
USERNAME                       PASSWORD
------------------------------ ------------------------------
SYSTEM                         970BAA5B81930A40

Alternatively, the value can be produced by an Oracle password hash generator; the legacy hash is derived only from the uppercased username and password (a DES-based scheme), so a hash computed in any test database is valid in the target.

Next, we need the current password hash in the target database and the location of the underlying table in the data file.

@>select USERNAME,PASSWORD from dba_users where USERNAME='SYSTEM';
USERNAME                       PASSWORD
------------------------------ ------------------------------
SYSTEM                         292EB6812EBA50C2

@>select CLUSTER_NAME from dba_tables where TABLE_NAME='USER$';
CLUSTER_NAME
------------------------------
C_USER#

@>select EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where SEGMENT_NAME='C_USER#';
 EXTENT_ID    FILE_ID   BLOCK_ID     BLOCKS
---------- ---------- ---------- ----------
         0          1         89          8

Next, we use bbed to modify the data file directly.
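On 9i, bbed usually has to be linked first (make -f ins_rdbms.mk bbed under $ORACLE_HOME/rdbms/lib). A typical invocation might look like the sketch below; the parameter file name, list file name, and file size are illustrative, the listfile format is file number, file name, and size in bytes, and the compiled-in password is widely reported to be blockedit.

$cat bbed.par
blocksize=8192
listfile=filelist.txt
mode=browse

$cat filelist.txt
1 /u03/oradata/9204/chen/system01.dbf 524288000

$bbed parfile=bbed.par
Password: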

BBED> set dba 1,90
        DBA             0x0040005a (4194394 1,90)
BBED> find /c 292EB6812EBA50C2 top
 File: /u03/oradata/9204/chen/system01.dbf (1)
 Block: 90               Offsets: 5895 to 6406           Dba:0x0040005a
------------------------------------------------------------------------
 32393245 42363831 32454241 35304332 018002c1 0307786b 0a0b0c06 3507786b
 0c17022f 1cffff01 80ff02c1 02ffff01 80018009 5359535f 47524f55 506c0011
 1113474c 4f42414c 5f41515f 55534552 5f524f4c 45018006 474c4f42 414c0180
 01800778 6b0a0b0c 0d18ffff ff0180ff 02c102ff ff018001 80164445 4641554c
 545f434f 4e53554d 45525f47 524f5550 ac000101 00010000 40005a00 12004000
 5a001202 c1136c00 11100c41 515f5553 45525f52 4f4c4501 80ff0180 01800778
 6b0a0b0c 0d18ffff ff0180ff 02c102ff ff018001 80164445 4641554c 545f434f
 4e53554d 45525f47 524f5550 ac000101 00010000 40005a00 11004000 5a001102
 c1126c00 110f1541 515f4144 4d494e49 53545241 544f525f 524f4c45 0180ff01
 80018007 786b0a0b 0c0d17ff ffff0180 ff02c102 ffff0180 01801644 45464155
 4c545f43 4f4e5355 4d45525f 47524f55 50ac0001 01000100 0040005a 00100040
 005a0010 02c1116c 00110e16 4c4f4753 54444259 5f41444d 494e4953 54524154
 4f520180 ff018001 8007786b 0a0b0c0c 3bffffff 0180ff02 c102ffff 01800180
 16444546 41554c54 5f434f4e 53554d45 525f4752 4f5550ac 00010100 01000040
 005a000f 0040005a 000f02c1 106c0011 0d184741 54484552 5f535953 54454d5f
 53544154 49535449 43530180 ff018001 8007786b 0a0b0c0c 3bffffff 0180ff02
 <32 bytes per line>

BBED>
BBED> dump /v dba 1,90 offset 5895 count 32
 File: /u03/oradata/9204/chen/system01.dbf (1)
 Block: 90      Offsets: 5895 to 5926  Dba:0x0040005a
-------------------------------------------------------
 32393245 42363831 32454241 35304332 l 292EB6812EBA50C2
 018002c1 0307786b 0a0b0c06 3507786b l ...á..xk....5.xk
 <16 bytes per line>

BBED> modify /c 970BAA5B81930A40 dba 1,90 offset 5895
BBED-00215: editing not allowed in BROWSE mode

BBED> set mode edit
        MODE            Edit
BBED> modify /c 970BAA5B81930A40 dba 1,90 offset 5895
 File: /u03/oradata/9204/chen/system01.dbf (1)
 Block: 90               Offsets: 5895 to 5926           Dba:0x0040005a
------------------------------------------------------------------------
 39373042 41413542 38313933 30413430 018002c1 0307786b 0a0b0c06 3507786c
 <32 bytes per line>

BBED> dump /v dba 1,90 offset 5895 count 32
 File: /u03/oradata/9204/chen/system01.dbf (1)
 Block: 90      Offsets: 5895 to 5926  Dba:0x0040005a
-------------------------------------------------------
 39373042 41413542 38313933 30413430 l 970BAA5B81930A40
 018002c1 0307786b 0a0b0c06 3507786c l ...á..xk....5.xl
 <16 bytes per line>

BBED> sum dba 1,90 apply
Check value for File 1, Block 90:
current = 0xc51f, required = 0xc51f
BBED> exit

[oracle@chen data]$ sql
SQL*Plus: Release 9.2.0.4.0 – Production on Thu Jan 10 06:39:56 2008
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
@>conn system/system
Connected.

August 30, 2008

exp/imp

Filed under: Oracle — Yaping @ 4:48 am

• Order of exp/imp tables

We create five tables, named d1, c2, a3, b4, and e5, created in that order.

 

SQL> conn test/test

Connected.

SQL> create table d1(id number);

Table created.

SQL> create table c2(id number);

Table created.

SQL> create table a3(id number);

Table created.

SQL> create table b4(id number);

Table created.

SQL> create table e5(id number);

Table created.

 

Then we export the tables in owner mode.

 

$exp test/test file=test.dmp

Export: Release 9.2.0.4.0 – Production on Mon Jun 11 00:40:03 2007

……

. about to export TEST’s tables via Conventional Path …

. . exporting table                             A3          0 rows exported

. . exporting table                             B4          0 rows exported

. . exporting table                             C2          0 rows exported

. . exporting table                             D1          0 rows exported

. . exporting table                             E5          0 rows exported

……

 

We can see that the tables are exported in the alphabetical order of their names, not in creation order or any other order.

Then we export tables in table mode.

 

$exp test/test tables=(d1,c2,b4,e5,a3) file=test.dmp

……

About to export specified tables via Conventional Path …

. . exporting table                             D1          0 rows exported

. . exporting table                             C2          0 rows exported

. . exporting table                             B4          0 rows exported

. . exporting table                             E5          0 rows exported

. . exporting table                             A3          0 rows exported

……

 

We can see that the tables are exported in the order in which they were specified.

Then we import tables in table mode.

 

$imp test/test tables=(a3,c2) file=test.dmp ignore=y

……

. importing TEST's objects into TEST

. . importing table                           "C2"          0 rows imported

. . importing table                           "A3"          0 rows imported

……

 

We can see that the import order follows the order in which the tables are stored in the dump file, not the order specified on the command line. This is expected.

 

Accordingly, when we archive data by exporting tables, the tables most likely to be restored should be specified first, especially when the dump file is very large.
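For instance, if a few tables are restored far more often than the rest, name them first (the table names here are hypothetical):

$exp test/test tables=(hot_orders,hot_order_lines,cold_history) file=archive.dmp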

 

 

• Rearrange data order

When we reorganize data or move it between databases with the exp/imp utilities, we may need to reorder the rows according to the application's access patterns, in order to optimize performance.

For example, some queries that use index range scans are very frequent, but the indexed column values are widely scattered across the table. Such queries can be inefficient, and identical or adjacent key values should be stored together.

Conversely, in some applications the accessed rows are packed so tightly that they cause hot-block contention; in that case the data may need to be dispersed.

 

Here I use the QUERY parameter to achieve this.

 

SQL> create table t1 as select round(dbms_random.value*10000) id,owner||'.'||object_name name,rownum seq from all_objects where rownum<1000;

 

SQL> l

  1* select * from t1 where rownum<10

SQL> /

 

        ID NAME                                                      SEQ

———- ————————————————– ———-

      5326 SYS.AGGXMLIMP                                               1

      7180 SYS.AGGXMLINPUTTYPE                                         2

      1511 SYS.ALL_ALL_TABLES                                          3

      9381 SYS.ALL_APPLY                                               4

      1090 SYS.ALL_APPLY_CONFLICT_COLUMNS                              5

       216 SYS.ALL_APPLY_DML_HANDLERS                                  6

      5032 SYS.ALL_APPLY_ERROR                                         7

      3314 SYS.ALL_APPLY_KEY_COLUMNS                                   8

      2363 SYS.ALL_APPLY_PARAMETERS                                    9

9 rows selected.

 

$exp test/test tables=(t1) query=(\"where 1=1 order by id\") file=t1_o.dmp

 

SQL> truncate table t1;

Table truncated.

 

$imp test/test file=t1_o.dmp ignore=y

SQL> select * from t1 where rownum<10;

 

        ID NAME                                                      SEQ

———- ————————————————– ———-

        33 SYS.GV_$AW_OLAP                                           510

        48 SYS.EXU8IOVU                                              396

        79 SYS.EXU8JBQU                                              398

        84 SYS.KU$_M_VIEW_SRT_LIST_T                                 664

        88 SYS.EXU9FGA                                               460

        91 SYS.DICTIONARY                                            318

        94 SYS.AQ$_REG_INFO_LIST                                     219

       127 SYS.EXU9COOU                                              455

       128 SYS.SYSSEGOBJ                                             932

9 rows selected.
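Conversely, to scatter clustered rows rather than group them, the same technique works with a randomized ordering (a sketch; it assumes the exporting user can execute dbms_random):

$exp test/test tables=(t1) query=(\"where 1=1 order by dbms_random.value\") file=t1_r.dmp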

 

 

• DDL text in a dump file is visible and can be edited

In some situations we can edit the dump file to work around special problems. Here is a demo.

Some systems have many user-defined types. Assume the source schema and the target schema have structurally identical user-defined types, and the types are in use in the target. Importing data that references such a type fails, because the type already exists (with a different OID), yet we cannot drop the type either.

We may be able to modify the dump file directly to resolve this problem.
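Before editing a binary dump, it helps to confirm that, and where, the type DDL appears in it, for example:

$strings t2.dmp | grep "T_MYTEST"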

 

SQL> show user

USER is “TEST”

 

SQL> create or replace type t_mytest as varray(10) of number;

  2  /

Type created.

SQL> create table t2(name varchar2(30),member t_mytest);

Table created.

SQL> insert into t2 select object_name,t_mytest(rownum,rownum+1,rownum+2) from all_objects where rownum<100;

99 rows created.

SQL> commit;

Commit complete.

SQL> select * from t2 where rownum<10;

 

NAME                           MEMBER

—————————— ————————————————–

ACCESS$                        T_MYTEST(1, 2, 3)

AGGXMLIMP                      T_MYTEST(2, 3, 4)

AGGXMLINPUTTYPE                T_MYTEST(3, 4, 5)

ALL_ALL_TABLES                 T_MYTEST(4, 5, 6)

ALL_APPLY                      T_MYTEST(5, 6, 7)

ALL_APPLY_CONFLICT_COLUMNS     T_MYTEST(6, 7, 8)

ALL_APPLY_DML_HANDLERS         T_MYTEST(7, 8, 9)

ALL_APPLY_ERROR                T_MYTEST(8, 9, 10)

ALL_APPLY_KEY_COLUMNS          T_MYTEST(9, 10, 11)

 

SQL> conn yp/yaping

Connected.

SQL> create or replace type t_mytest as varray(10) of number;

  2  /

Type created.

SQL> create table t2(name varchar2(30),member t_mytest);

Table created.

 

$exp test/test tables=(t2) file=t2.dmp

 

Before modifying the dump file:

$imp yp/yaping fromuser=test file=t2.dmp ignore=y

 

Import: Release 9.2.0.4.0 – Production on Mon Jun 11 04:00:19 2007

 

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

 

 

Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 – Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.4.0 – Production

 

Export file created by EXPORT:V09.02.00 via conventional path

 

Warning: the objects were exported by TEST, not by you

 

import done in US7ASCII character set and AL16UTF16 NCHAR character set

. importing TEST's objects into YP

IMP-00061: Warning: Object type "YP"."T_MYTEST" already exists with a different identifier

 "CREATE TYPE "T_MYTEST" TIMESTAMP '2007-06-11:02:43:07' OID '3291809FE886C60"

 "9E040007F01002BAC'   as varray(10) of number;"

IMP-00063: Warning: Skipping table "YP"."T2" because object type "YP"."T_MYTEST" cannot be created or has different identifier

Import terminated successfully with warnings.

 

Then we modify the t2.dmp file. Using vi, find the following contents and remove the portion that was highlighted in red in the original post (the highlighting does not survive here; the part removed is the type-creation DDL between the BEGINTYPE and ENDTYPE markers).

 

TABLE "T2"

BEGINTYPE "T_MYTEST" "3291809FE886C609E040007F01002BAC" "2186F75D29F3ED2CB659AE4D0AEE92E6C7"

"^@^@^@^@^^&^A^@^A^@^A)^@^@^@^@^@^S^\^@^@^@^]^@^@^@

^C*^F^@<81>^@^G

CREATE TYPE

媒每^^@CREATE TYPE "T_MYTEST" TIMESTAMP '2007-06-11:02:43:07' OID '3291809FE886C609E040007F01002BAC' ^A^@ ^Y^@ as varray(10) of numb;^@^@

GRANT EXECUTE ON "T_MYTEST" TO "YP"

ENDTYPE

CREATE TABLE "T2" ("NAME" VARCHAR2(30), "MEMBER" "T_MYTEST")  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "USERS" LOGGING NOCOMPRESS

 

 

After modifying this file and saving it, we import again.

 

$ imp yp/yaping fromuser=test file=t2.dmp ignore=y

 

Import: Release 9.2.0.4.0 – Production on Mon Jun 11 04:02:39 2007

 

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

 

 

Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 – Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.4.0 – Production

 

Export file created by EXPORT:V09.02.00 via conventional path

 

Warning: the objects were exported by TEST, not by you

 

import done in US7ASCII character set and AL16UTF16 NCHAR character set

. importing TEST's objects into YP

. . importing table                           "T2"         99 rows imported

Import terminated successfully without warnings.

 

 

This problem can also be resolved by setting the TOID_NOVALIDATE parameter.

 

$ imp yp/yaping fromuser=test touser=yp file=t2.dmp ignore=y TOID_NOVALIDATE=(t_mytest)

 

Import: Release 9.2.0.4.0 – Production on Mon Jun 11 04:04:16 2007

 

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

 

 

Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 – Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.4.0 – Production

 

Export file created by EXPORT:V09.02.00 via conventional path

 

Warning: the objects were exported by TEST, not by you

 

import done in US7ASCII character set and AL16UTF16 NCHAR character set

. . skipping TOID validation on type YP.T_MYTEST

. . importing table                           "T2"         99 rows imported

Import terminated successfully without warnings.

 

 

 

• Buffer, commit and consistent parameters

The default buffer size is too small; we usually need to increase it for efficiency. The buffer and commit parameters are often used together, but we should bear in mind some side effects.

With commit=y, if the import aborts, the table's data may be left in an intermediate state, and it can be hard to decide where to restart. Before using a feature with such ambiguous failure modes, we must first be familiar with the application.

The commit parameter does not batch some types, such as LOB, LONG, and REF; rows containing these types are committed one row at a time.

Before setting consistent=y, consider whether it makes sense to export the tables in several runs: put the tables that must be mutually consistent into the same export, separate from the other tables. A combined example of buffer and commit follows.
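A minimal sketch of the two parameters together (the dump file name is hypothetical; 20971520 bytes = 20 MB):

$imp test/test file=big_table.dmp ignore=y buffer=20971520 commit=y

With commit=y, imp commits after each buffer-sized array insert, so an abort leaves all earlier batches committed.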

 

 

• Trace parameter

If you want to know which SQL statements exp/imp run, there is an easy method: specify trace=y, which is equivalent to setting sql_trace and timed_statistics within the session. Very old exp/imp releases may not have this parameter.
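For example:

$exp test/test file=test.dmp trace=y

The trace file then appears under user_dump_dest, just as if the session had set sql_trace itself.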

 

 

• Buffer parameter

If the buffer parameter is specified, where is the memory allocated from: the database, or the OS process?

 

We use a logoff trigger (shown below) to record each session's memory-allocation statistics; it fills two helper tables.
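The DDL of the two logging tables is not shown in the original post; a minimal sketch matching the columns the trigger inserts (column shapes are assumptions) would be:

create table sys.sesstat(sid number, name varchar2(64), value number, dt date, program varchar2(48));

create table procmem(sid number, spid varchar2(12), program varchar2(48), pga_used_mem number, pga_alloc_mem number, pga_freeable_mem number, pga_max_mem number, dt date);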

create or replace trigger trigger_logoff
before logoff on database
--before logoff on test.schema
declare
  v_program varchar2(48);
  v_sid number;
  v_paddr raw(4);
begin
  if user='TEST' then
    select sid,program,paddr into v_sid,v_program,v_paddr
      from v$session where audsid=sys_context('userenv','sessionid');

    insert into sys.sesstat
      select sid,name,value,sysdate,v_program
        from v$mystat a,v$statname b
       where a.statistic#=b.statistic#
         and (name like '%uga%' or name like '%pga%' or name='user commits');

    insert into procmem
      select v_sid,spid,program,pga_used_mem,pga_alloc_mem,pga_freeable_mem,pga_max_mem,sysdate
        from v$process where addr=v_paddr;
  end if;
exception
  when others then
    dbms_system.ksdwrt(2,sqlerrm);
end;
/

 

We use a demo table whose size is a little more than 250 MB.

SQL> analyze table test.t1 compute statistics;

 Table analyzed.

 

SQL> select OWNER,AVG_ROW_LEN,NUM_ROWS from dba_tables where table_name='T1';

 OWNER                          AVG_ROW_LEN   NUM_ROWS

—————————— ———– ———-

TEST                                  6016      44352

 

SQL> select 6016*44352/1024/1024 from dual;

 6016*44352/1024/1024

——————–

          254.460938

 

The following results show the session memory allocation for two buffer settings: the default value and a 20 MB buffer.

       SID NAME                          VALUE DT                  PROGRAM

———- ———————— ———- ——————- ————————————————       

         9 session uga memory           158356 2007-12-29 15:03:48 imp@TCLUX3245 (TNS V1-V3)  --imp test/test  file=t1.dmp trace=y log=3.log ignore=y

         9 session uga memory max       162756 2007-12-29 15:03:48 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory           561736 2007-12-29 15:03:48 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory max       561736 2007-12-29 15:03:48 imp@TCLUX3245 (TNS V1-V3)

        

         9 session uga memory           132932 2007-12-29 15:06:22 imp@TCLUX3245 (TNS V1-V3)  --imp test/test  file=t1.dmp trace=y log=4.log ignore=y buffer=20971520

         9 session uga memory max       141108 2007-12-29 15:06:22 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory           408836 2007-12-29 15:06:22 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory max       408836 2007-12-29 15:06:22 imp@TCLUX3245 (TNS V1-V3)

        

 

The following result confirms that the buffer value took effect: with commit=y, importing the roughly 250 MB table produced 13 commits, about one per 20 MB batch.

       SID NAME                                VALUE DT                  PROGRAM

———- —————————— ———- ——————- ——————————

        14 user commits                            1 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3) --imp test/test  file=t1.dmp trace=y log=p1.log ignore=y buffer=20971520

        14 session uga memory                 134040 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3)

        14 session uga memory max             138440 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3)

        14 session pga memory                 425860 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3)

        14 session pga memory max             425860 2007-12-29 14:21:29 imp@TCLUX3245 (TNS V1-V3)

 

         9 user commits                           13 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3) --imp test/test  file=t1.dmp trace=y log=p1.log ignore=y buffer=20971520 commit=y

         9 session uga memory                 165040 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3)

         9 session uga memory max             186384 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory                 572832 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3)

         9 session pga memory max             572832 2007-12-29 14:52:04 imp@TCLUX3245 (TNS V1-V3)

 

 

 SID       SPID PROGRAM                        PGA_USED_MEM PGA_ALLOC_MEM PGA_FREEABLE_MEM PGA_MAX_MEM DT

—- ———- —————————— ———— ————- —————- ———– ——————-

  14       4500 oracle@TCLUX3245 (TNS V1-V3)         435321        480213                0      480213 2007-12-29 14:21:29

   9       4884 oracle@TCLUX3245 (TNS V1-V3)         562997        627185                0      627185 2007-12-29 14:52:04

 

From the above results, we can conclude that the buffer memory is not allocated from the database. Now we observe the processes with the ps command.

The parent-child process relationship shows clearly that the buffer comes from the imp/exp client process itself.

———————————————————————

USER       PID %CPU %MEM   VSZ  RSS TTY      STAT START   TIME COMMAND

 

root     31739  0.0  0.1  6860 1080 ?        S    Dec27   0:00  \_ sshd: oracle [priv]

oracle   31741  0.0  0.1  7020 1204 ?        S    Dec27   0:01  |   \_ sshd: oracle@pts/7

oracle   31744  0.0  0.1  4404 1196 pts/7    S    Dec27   0:00  |       \_ -bash

oracle    5049  7.7  2.5 33508 25492 pts/7   S    15:32   0:02  |           \_ imp           file=t1.dmp trace=y log=p1.log ig

oracle    5050 12.3  6.4 269728 65472 ?      S    15:32   0:04  |               \_ oracleO02DMS0 (DESCRIPTION=(LOCAL=YES)(ADDR

 

--imp test/test  file=t1.dmp trace=y log=p1.log ignore=y buffer=20971520 commit=y

--buffer=20971520 = 20M

--imp RSS is 25492 KB

 

root     31739  0.0  0.1  6860 1080 ?        S    Dec27   0:00  \_ sshd: oracle [priv]

oracle   31741  0.0  0.1  7020 1204 ?        S    Dec27   0:01  |   \_ sshd: oracle@pts/7

oracle   31744  0.0  0.1  4404 1200 pts/7    S    Dec27   0:00  |       \_ -bash

oracle    5125  1.3  0.4 13028 4988 pts/7    S    16:08   0:06  |           \_ imp           file=t1.dmp trace=y log=p1.log ig

oracle    5126  2.1  6.4 269712 65772 ?      S    16:08   0:09  |               \_ oracleO02DMS0 (DESCRIPTION=(LOCAL=YES)(ADDR

 

--imp test/test  file=t1.dmp trace=y log=p1.log ignore=y

 

———————————————————————

 

 

• Unique constraint violated

A unique constraint may be enforced by a unique index or by a non-unique index, but the two index types validate the constraint at different points.

With a unique index: redo and undo are generated first and the data blocks containing the affected rows are modified; the constraint is then validated; if it passes, redo and undo are generated for the index entries and the index is modified; if it fails, extra redo results from the validation failure and the implicit rollback of the table changes.

With a non-unique index: redo and undo are generated and the data blocks are modified; redo and undo are then generated for the index entries and the index entries are modified; the constraint is validated last; if it fails, extra redo results from the validation failure and the implicit rollback of the index entries, followed by the rollback of the table changes.

The import utility becomes extremely costly if the imported data violates a unique constraint. As with standard insert operations, the redo cost of failed rows is influenced by the type of index enforcing the constraint. Beyond the index type, the number of violating rows per batch greatly affects the redo generated; the buffer parameter controls the batch size.

 

I illustrate this with the following experiment.

test@CHEN>create table t1(id number);

Table created.

 

test@CHEN>create table t2(id number);

Table created.

 

test@CHEN>alter table t1 add constraint t1_pk primary key(id);

Table altered.

 

test@CHEN>create index idx_t2_id on t2(id);

Index created.

 

test@CHEN>alter table t2 add constraint t2_pk primary key(id) using index;

Table altered.

 

test@CHEN>insert into t1 select rownum from dual connect by level<16;

15 rows created.

 

test@CHEN>insert into t2 select rownum from dual connect by level<16;

15 rows created.

 

test@CHEN>commit;

Commit complete.

 

test@CHEN>!exp test/test tables=(t1,t2) file=t1_t2.dmp silent=y

About to export specified tables via Conventional Path …

. . exporting table                             T1         15 rows exported

. . exporting table                             T2         15 rows exported

Export terminated successfully without warnings.

 

test@CHEN>delete from t2 where id<6 or id>10;

10 rows deleted.

 

test@CHEN>delete from t1 where id<6 or id>10;

10 rows deleted.

 

test@CHEN>commit;

Commit complete.

 

test@CHEN>!imp test/test file=t1_t2.dmp ignore=y silent=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST's objects into TEST

. . importing table                           "T1"

IMP-00019: row rejected due to ORACLE error 1

IMP-00003: ORACLE error 1 encountered

. . importing table                           "T2"

IMP-00019: row rejected due to ORACLE error 1

IMP-00003: ORACLE error 1 encountered

ORA-00001: unique constraint (TEST.T2_PK) violated

Column 1 6

 

By checking the redo log, we can see that Oracle performed the following actions.

T1

STEP 1

Multiple rows insert (1..15), batch size controlled by the buffer parameter

 

STEP 2

Index leaf rows insert through array, stop when value=6, then clean

 

STEP 3

Multiple rows delete (1..15): removing all rows just inserted

 

STEP 4

Single row insert value 1, then insert index leaf row

Repeat above actions with row value from 2 to 5

 

STEP 5

Multiple rows insert (7..15): the batch restarts just after the value that failed

 

STEP 6

Multiple rows delete(7..15)

 

STEP 7

Repeat STEP 5 & 6, until value is 10

 

STEP 8

Multiple rows insert(11..15)

 

STEP 9

Index leaf rows insert through array

 

STEP 10

Mark this transaction committed

 

T2

STEP 1

Multiple rows insert (1..15), batch size controlled by the buffer parameter

 

STEP 2

Index leaf row insert, start from value 1 to 5

 

STEP 3

Index leaf row insert, value=6, validate failed, then clean all leaf rows

 

STEP 4

Multiple rows delete (1..15): removing all rows just inserted

 

STEP 5

Single row insert value 1, then insert index leaf row

Repeat above actions with row value from 2 to 5

 

STEP 6

Multiple rows insert (7..15): the batch restarts just after the value that failed

Index leaf row insert, value=7, validate failed, then clean

 

STEP 7

Repeat STEP 6 until value is 10

 

STEP 8

Perform STEP 1 & 2

 

STEP 9

Mark this transaction committed

 

From the above results, we can see that import generates a mass of redo when it runs into unique constraint violations.
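To quantify the redo, check the 'redo size' statistic for the imp session before and after the run, using the same v$statname join as the logoff trigger earlier in this post (:imp_sid stands for the imp session's SID):

select b.name, a.value
  from v$sesstat a, v$statname b
 where a.statistic#=b.statistic#
   and b.name='redo size'
   and a.sid=:imp_sid;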

 

 

• Can we export a table and import it into another table?

 

Maybe.

 

[oracle@chen ~]$ exp test/test tables=t2 file=t2.dmp silent=y

About to export specified tables via Conventional Path …

. . exporting table                             T2         10 rows exported

Export terminated successfully without warnings.

 

Edit the dump file t2.dmp with vi, change every occurrence of T2 to T8, and save it.

 

[oracle@chen ~]$ vi t2.dmp

TABLE "T2"

CREATE TABLE "T2" ("ID" NUMBER)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "TEST" LOGGING NOCOMPRESS

INSERT INTO "T2" ("ID") VALUES (:1)

^A^@^B^@^V^@^@^@^@^@^B^@Ã^B^@^@^B^@Ã^C^@^@^B^@Ã^D^@^@^B^@Ã^E^@^@^B^@Ã^F^@^@^B^@Ã^G^@^@^B^@Ã^H^@^@^B^@à ^@^@^B^@Ã

^@^@^B^@Ã^K^@^@ÿÿ

CREATE INDEX "IDX_T2_ID" ON "T2" ("ID" )  PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "TEST" LOGGING

 

[oracle@chen ~]$ strings t2.dmp

EXPORT:V09.02.00

UTEST

RTABLES

8192

                                        Fri Jun 27 0:34:44 2008t2.dmp

#C##

#C##

+08:00

BYTE

INTERPRETED

TABLE "T8"

CREATE TABLE "T8" ("ID" NUMBER)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "TEST" LOGGING NOCOMPRESS

INSERT INTO "T8" ("ID") VALUES (:1)

CREATE INDEX "IDX_T8_ID" ON "T8" ("ID" )  PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "TEST" LOGGING

EXIT

EXIT

 

[oracle@chen ~]$ imp test/test file=t2.dmp silent=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST's objects into TEST

. . importing table                           "T8"         10 rows imported

Import terminated successfully without warnings.

 

The above method is dangerous; there is an easier approach. We simply create a table T8 with the same structure first, then create a synonym named T2 that refers to T8, and import the data, as sketched below.
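A sketch of that approach (assumptions: the importing schema owns no object named T2, and imp resolves the insert through the synonym):

SQL> create table t8(id number);
Table created.

SQL> create synonym t2 for t8;
Synonym created.

$imp test/test tables=t2 file=t2.dmp ignore=y

The CREATE TABLE "T2" statement fails against the synonym and is skipped because of ignore=y, while INSERT INTO "T2" lands in T8.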

 

 

• Consistent parameter

 

Prepare the data.

Session 1

test@CHEN>create table t1 as select rownum id from dual connect by level<5;

Table created.

 

test@CHEN>create table t2 as select rownum id from dual connect by level<5;

Table created.

 

test@CHEN>create table t3 as select rownum id from dual connect by level<5;

Table created.

 

test@CHEN>create table t4 as select rownum id from dual connect by level<5;

Table created.

 

test@CHEN>create or replace function f_test(p_str varchar2)

  2    return varchar2 as

  3  begin

  4    return p_str;

  5    exception

  6    when others then

  7         return 'ERROR';

  8  end;

  9  /

Function created.

 

Next, consider how to slow down the export so that concurrent changes can occur while it runs.

Session 2

sys@CHEN>AUDIT SELECT ON test.t1 WHENEVER SUCCESSFUL;

Audit succeeded.

 

sys@CHEN>create or replace trigger after_audit

  2  after insert or update on sys.aud$

  3  begin

  4    dbms_lock.sleep(60);

  5  end;

  6  /

create or replace trigger after_audit

                          *

ERROR at line 1:

ORA-04089: cannot create triggers on objects owned by SYS

 

sys@CHEN>create table system.aud$ as select * from sys.aud$ where 1=2;

Table created.

 

sys@TEST>drop table sys.aud$;

Table dropped.

 

sys@CHEN>create synonym aud$ for system.aud$;

Synonym created.

 

sys@CHEN>create or replace trigger after_audit

  2    after insert or update on system.aud$

  3    begin

  4      dbms_lock.sleep(60);

  5    end;

  6  /

Trigger created.
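Relocating sys.aud$ is intrusive, so the audit trail should be put back once the test is over; a possible cleanup as SYS (a sketch, reversing the steps above):

sys@CHEN>drop trigger after_audit;
sys@CHEN>drop synonym aud$;
sys@CHEN>create table sys.aud$ as select * from system.aud$;
sys@CHEN>drop table system.aud$;
sys@CHEN>noaudit select on test.t1;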

 

Export with consistent=y parameter

Session 2

sys@CHEN>!exp test/test file=test.dmp CONSISTENT=y trace=y silent=y

About to export specified users …

. exporting pre-schema procedural objects and actions

. exporting foreign function library names for user TEST

. exporting PUBLIC type synonyms

. exporting private type synonyms

. exporting object type definitions for user TEST

About to export TEST’s objects …

. exporting database links

. exporting sequence numbers

. exporting cluster definitions

. about to export TEST’s tables via Conventional Path …

. . exporting table                             T1          4 rows exported -- held for 240 seconds

. . exporting table                             T2          4 rows exported

. . exporting table                             T3          4 rows exported

. . exporting table                             T4

EXP-00056: ORACLE error 942 encountered

ORA-00942: table or view does not exist

. exporting synonyms

. exporting views

. exporting stored procedures

. exporting operators

. exporting referential integrity constraints

. exporting triggers

. exporting indextypes

. exporting bitmap, functional and extensible indexes

. exporting posttables actions

. exporting materialized views

. exporting snapshot logs

. exporting job queues

. exporting refresh groups and children

. exporting dimensions

. exporting post-schema procedural objects and actions

. exporting statistics

Export terminated successfully with warnings.

 

After submitting the exp command in session 2, run the following SQL in another session immediately.

Session 1

test@CHEN>@case1

test@CHEN>insert into t1 values(-1);

1 row created.

 

test@CHEN>insert into t2 values(-1);

1 row created.

 

test@CHEN>alter table t3 add a varchar2(1) default 'Y';

Table altered.

 

test@CHEN>drop table t4;

Table dropped.

 

test@CHEN>create table t5 as select * from t2;

Table created.

 

test@CHEN>

test@CHEN>create or replace function f_test(p_str varchar2)

  2    return varchar2 as

  3  begin

  4    return 'consistent=y:'||p_str;

  5    exception

  6    when others then

  7         return 'ERROR';

  8  end;

  9  /

Function created.

 

Check the exported data.

Session 2

sys@CHEN>!imp test/test file=test.dmp rows=n show=y silent=y full=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST's objects into TEST

 "BEGIN 

 "CREATE TABLE "T3" ("ID" NUMBER)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS "

 "255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "TEST" "

 "LOGGING NOCOMPRESS"

. . skipping table "T3"

 

 "CREATE TABLE "T4" ("ID" NUMBER)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS "

 "255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "TEST" "

 "LOGGING NOCOMPRESS"

. . skipping table "T4"

 

"CREATE function f_test(p_str varchar2)"

   return varchar2 as"

 "begin"

   return p_str;"

   exception"

   when others then"

        return 'ERROR';"

 "end;"

Import terminated successfully without warnings.

 

From the above output, we can see that the dropped table's definition is still exported, but it contains no data. Even though a table's structure changed during the export, the previous definition is exported; newly created objects are not exported.

 

Export with consistent=n parameter

 

Session 2

sys@CHEN>!exp test/test file=test.dmp consistent=n trace=y silent=y

About to export specified users …

. exporting pre-schema procedural objects and actions

. exporting foreign function library names for user TEST

. exporting PUBLIC type synonyms

. exporting private type synonyms

. exporting object type definitions for user TEST

About to export TEST’s objects …

. exporting database links

. exporting sequence numbers

. exporting cluster definitions

. about to export TEST’s tables via Conventional Path …

. . exporting table                             T1          4 rows exported -- hold on

. . exporting table                             T2          5 rows exported

. . exporting table                             T3          4 rows exported

. . exporting table                             T4

EXP-00007: dictionary shows no columns for TEST.T4

. exporting synonyms

. exporting views

. exporting stored procedures

. exporting operators

. exporting referential integrity constraints

. exporting triggers

. exporting indextypes

. exporting bitmap, functional and extensible indexes

. exporting posttables actions

. exporting materialized views

. exporting snapshot logs

. exporting job queues

. exporting refresh groups and children

. exporting dimensions

. exporting post-schema procedural objects and actions

. exporting statistics
Export terminated successfully with warnings.

 

After submitting the exp command in session 2, run the following SQL in another session immediately.

Session 1

Tables t1, t2, t3, and t4 were recreated before running the above export.

test@CHEN>@case2

test@CHEN>insert into t1 values(-1);

1 row created.

 

test@CHEN>insert into t2 values(-1);

1 row created.

 

test@CHEN>alter table t3 add a varchar2(1) default 'Y';

Table altered.

 

test@CHEN>drop table t4;

Table dropped.

 

test@CHEN>

test@CHEN>create table t5 as select * from t2;

Table created.

 

test@CHEN>create or replace function f_test(p_str varchar2)

  2    return varchar2 as

  3  begin

  4    return 'consistent=n:'||p_str;

  5    exception

  6    when others then

  7         return 'ERROR';

  8  end;

  9  /

Function created.

 

Check the exported data.

Session 2

sys@CHEN>!imp test/test file=test.dmp rows=n show=y silent=y full=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST's objects into TEST

 "BEGIN 

 "CREATE TABLE "T3" ("ID" NUMBER, "A" VARCHAR2(1))  PCTFREE 10 PCTUSED 40 INI"

 "TRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) T"

 "ABLESPACE "TEST" LOGGING NOCOMPRESS"

 " ALTER TABLE "T3" MODIFY ("A" DEFAULT 'Y')"

. . skipping table "T3"

"CREATE function f_test(p_str varchar2)"

   return varchar2 as"

 "begin"

   return 'consistent=n:'||p_str;"

   exception"

   when others then"

        return 'ERROR';"

 "end;"

Import terminated successfully without warnings.

 

We can see that the dropped table is gone, and changed objects are exported with the definition and data as of the moment each object is read; new objects are still not included.

 

Export with object_consistent=y parameter

 

Session 2

sys@CHEN>!exp test/test file=test.dmp OBJECT_CONSISTENT=y trace=y silent=y

About to export specified users …

. exporting pre-schema procedural objects and actions

. exporting foreign function library names for user TEST

. exporting PUBLIC type synonyms

. exporting private type synonyms

. exporting object type definitions for user TEST

About to export TEST’s objects …

. exporting database links

. exporting sequence numbers

. exporting cluster definitions

. about to export TEST’s tables via Conventional Path …

. . exporting table                             T1          4 rows exported -- hold on

. . exporting table                             T2          5 rows exported

. . exporting table                             T3          4 rows exported

. . exporting table                             T4

EXP-00007: dictionary shows no columns for TEST.T4

. exporting synonyms

. exporting views

. exporting stored procedures

. exporting operators

. exporting referential integrity constraints

. exporting triggers

. exporting indextypes

. exporting bitmap, functional and extensible indexes

. exporting posttables actions

. exporting materialized views

. exporting snapshot logs

. exporting job queues

. exporting refresh groups and children

. exporting dimensions

. exporting post-schema procedural objects and actions

. exporting statistics

Export terminated successfully with warnings.

 

After submitting the exp command in session 2, run the following SQL in another session immediately.

 

Session 1

Tables t1, t2, t3, and t4 were recreated before running the above export.

test@CHEN>@case3

test@CHEN>insert into t1 values(-1);

1 row created.

 

test@CHEN>insert into t2 values(-1);

1 row created.

 

test@CHEN>alter table t3 add a varchar2(1) default 'Y';

Table altered.

 

test@CHEN>drop table t4;

Table dropped.

 

test@CHEN>

test@CHEN>create table t5 as select * from t2;

Table created.

 

test@CHEN>create or replace function f_test(p_str varchar2)

  2    return varchar2 as

  3  begin

  4    return 'object_consistent=y:'||p_str;

  5    exception

  6    when others then

  7         return 'ERROR';

  8  end;

  9  /

Function created.

 

Check the exported data.

Session 2

sys@CHEN>!imp test/test file=test.dmp rows=n show=y silent=y full=y

Export file created by EXPORT:V09.02.00 via conventional path

. importing TEST's objects into TEST

 "BEGIN 

 "CREATE TABLE "T3" ("ID" NUMBER, "A" VARCHAR2(1))  PCTFREE 10 PCTUSED 40 INI"

 "TRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) T"

 "ABLESPACE "TEST" LOGGING NOCOMPRESS"

 " ALTER TABLE "T3" MODIFY ("A" DEFAULT 'Y')"

. . skipping table "T3"

..."

 "CREATE function f_test(p_str varchar2)"

   return varchar2 as"

 "begin"

   return 'object_consistent=y:'||p_str;"

   exception"

   when others then"

        return 'ERROR';"

 "end;"

Import terminated successfully without warnings.

 

 

From the above results, it seems that the set of tables to export is determined at the beginning of the export.

 

 

 

• Others

There is a tool that can extract DDL statements from a dump file; see http://www.ddlwizard.com/.

 

A very cool tool — orasrp

Filed under: Oracle — Yaping @ 4:22 am

OraSRP is a very good tool for analyzing trace files, and it is free.

@>alter session set events '10046 trace name context forever, level 12';
Session altered.
@>alter index idx_t1_id rebuild online;
Index altered.
@>alter session set events '10046 trace name context off';
Session altered.
@>@tracefile
Wrote file /tmp/myvar.sql
/opt/app/oracle/admin/chen/udump/chen_ora_2604.trc
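The @tracefile script itself is not shown; a common implementation, and an assumption about what it does here, derives the current session's trace file name like this:

select c.value||'/'||lower(d.instance_name)||'_ora_'||a.spid||'.trc' trace_file
  from v$process a, v$session b, v$parameter c, v$instance d
 where a.addr=b.paddr
   and b.audsid=sys_context('userenv','sessionid')
   and c.name='user_dump_dest';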

[oracle@chen tmp]$ ./orasrp /opt/app/oracle/admin/chen/udump/chen_ora_2604.trc 1.html

Now we can view 1.html; the format is human-readable and the report contains a lot of analytical value.
Bind variables are listed with their values, which makes it very readable.

Refer to http://www.oracledba.ru/orasrp/
