Wednesday, December 2, 2009

Data Pump Enhancements in Oracle Database 11g Release 1

COMPRESSION
The COMPRESSION parameter allows you to decide what, if anything, you wish to compress in your export. The syntax is shown below.

COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}

The available options are:

ALL: Both metadata and data are compressed.
DATA_ONLY: Only data is compressed.
METADATA_ONLY: Only metadata is compressed. This is the default setting.
NONE: Nothing is compressed.
Here is an example of the COMPRESSION parameter being used.

expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
compression=all

The COMPATIBLE initialization parameter should be set to "11.0.0" or higher to use these options, except for the METADATA_ONLY option, which is available with a COMPATIBLE setting of "10.2".
Encryption Parameters
Data Pump encryption is an Enterprise Edition feature, so the parameters described below are only relevant for Enterprise Edition installations. In addition, the COMPATIBLE initialization parameter must be set to "11.0.0" or higher to use these features.

ENCRYPTION and ENCRYPTION_PASSWORD
The use of encryption is controlled by a combination of the ENCRYPTION and ENCRYPTION_PASSWORD parameters. The syntax for the ENCRYPTION parameter is shown below.
ENCRYPTION = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
The available options are:

ALL: Both metadata and data are encrypted.
DATA_ONLY: Only data is encrypted.
ENCRYPTED_COLUMNS_ONLY: Only encrypted columns are written to the dump file in an encrypted format.
METADATA_ONLY: Only metadata is encrypted.
NONE: Nothing is encrypted.
If neither the ENCRYPTION nor the ENCRYPTION_PASSWORD parameter is set, the required level of encryption is assumed to be NONE. If only the ENCRYPTION_PASSWORD parameter is specified, the required level of encryption is assumed to be ALL. Here is an example of these parameters being used.

expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
encryption=all encryption_password=password
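
A matching import must supply the same password. Here is a minimal sketch of the corresponding import, assuming the dump file created above:

impdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
encryption_password=password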

ENCRYPTION_ALGORITHM
The ENCRYPTION_ALGORITHM parameter specifies the encryption algorithm to be used during the export, with the default being "AES128". The syntax is shown below.

ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }

The ENCRYPTION_ALGORITHM parameter must be used in conjunction with the ENCRYPTION or ENCRYPTION_PASSWORD parameter, as shown below.

expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
encryption=all encryption_password=password encryption_algorithm=AES256

ENCRYPTION_MODE
The ENCRYPTION_MODE parameter specifies the type of security used during export and import operations. The syntax is shown below.

ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }

The allowable values and their default settings are explained below:

DUAL: This mode creates a dump file that can be imported using an Oracle Encryption Wallet or the ENCRYPTION_PASSWORD specified during the export operation. This is the default setting if the ENCRYPTION_PASSWORD parameter is set and there is an open wallet.
PASSWORD: This mode creates a dump file that can only be imported using the ENCRYPTION_PASSWORD specified during the export operation. This is the default setting if the ENCRYPTION_PASSWORD parameter is set and there isn't an open wallet.
TRANSPARENT: This mode creates an encrypted dump file using an open Oracle Encryption Wallet. If the ENCRYPTION_PASSWORD parameter is specified while using this mode, an error is produced. This is the default setting if only the ENCRYPTION parameter is set.
Wallet setup is described here.
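
In outline, the wallet setup looks something like the sketch below (the wallet directory is a placeholder, and the sqlnet.ora entry goes on the database server):

# sqlnet.ora
ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/DB11G/wallet)))

-- Create the wallet and set the master key (this also opens the wallet).
SQL> alter system set encryption key identified by "myPassword";

-- After an instance restart, re-open the wallet.
SQL> alter system set encryption wallet open identified by "myPassword";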

The ENCRYPTION_MODE parameter requires either the ENCRYPTION or ENCRYPTION_PASSWORD parameter to be specified.

expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
encryption=all encryption_password=password encryption_mode=password

TRANSPORTABLE
The TRANSPORTABLE parameter is similar to the previously available TRANSPORT_TABLESPACES parameter in that it only exports and imports metadata about a table, relying on you to manually transfer the relevant tablespace datafiles. The export operation lists the tablespaces that must be transferred. The syntax is shown below.

TRANSPORTABLE = {ALWAYS | NEVER}

The value ALWAYS turns on transportable mode, while the default value of NEVER indicates this is a regular export/import.

The following restrictions apply during exports using the TRANSPORTABLE parameter:

This parameter is only applicable during table-level exports.
The user performing the operation must have the EXP_FULL_DATABASE privilege.
Tablespaces containing the source objects must be read-only (see the sketch after this list).
The COMPATIBLE initialization parameter must be set to 11.0.0 or higher.
The default tablespace of the user performing the export must not be the same as any of the tablespaces being transported.
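
For example, to satisfy the read-only restriction (a sketch; "users" is a placeholder tablespace name):

SQL> alter tablespace users read only;
-- run the transportable export, then:
SQL> alter tablespace users read write;
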
Some extra restrictions apply during import operations:

The NETWORK_LINK parameter must be specified during the import operation. This parameter is set to a valid database link to the source schema.
The schema performing the import must have both EXP_FULL_DATABASE and IMP_FULL_DATABASE privileges.
The TRANSPORT_DATAFILES parameter is used to identify the datafiles holding the table data.
Examples of the export and import operations are shown below.

expdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
transportable=ALWAYS

impdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
transportable=ALWAYS network_link=DB11G transport_datafiles='/u01/oradata/DB11G/test01.dbf'

PARTITION_OPTIONS
The PARTITION_OPTIONS parameter determines how partitions will be handled during export and import operations. The syntax is shown below.

PARTITION_OPTIONS={none | departition | merge}

The allowable values are:

NONE: The partitions are created exactly as they were on the system the export was taken from.
DEPARTITION: Each partition and sub-partition is created as a separate table, named using a combination of the table and (sub-)partition name.
MERGE: Combines all partitions into a single table.
The NONE and MERGE options are not possible if the export was done using the TRANSPORTABLE parameter with a partition or subpartition filter. If there are any grants on objects being departitioned, an error message is generated and the objects are not loaded.

expdp test/test directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log tables=test.tab1
partition_options=merge

REUSE_DUMPFILES
The REUSE_DUMPFILES parameter can be used to prevent errors being issued if the export attempts to write to a dump file that already exists.

REUSE_DUMPFILES={Y | N}

When set to "Y", any existing dump files will be overwritten. When the default value of "N" is used, an error is issued if the dump file already exists.

expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
reuse_dumpfiles=y

REMAP_TABLE
This parameter allows a table to be renamed during import operations performed using the TRANSPORTABLE method. It can also be used to alter the base table name used during PARTITION_OPTIONS imports. The syntax is shown below.

REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename

An example is shown below.

impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
remap_table=TEST.TAB1:TAB2

Existing tables are not renamed, only tables created by the import.

DATA_OPTIONS
SKIP_CONSTRAINT_ERRORS
During import operations using the external table access method, setting the DATA_OPTIONS parameter to SKIP_CONSTRAINT_ERRORS allows load operations to continue through non-deferred constraint violations, with any violations logged for future reference. Without this, the default action would be to roll back the whole operation. The syntax is shown below.

DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS

An example is shown below.

impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
data_options=SKIP_CONSTRAINT_ERRORS

This parameter has no impact on deferred constraints, which still cause the operation to be rolled back once a violation is detected. If the object being loaded has existing unique indexes or constraints, the APPEND hint will not be used, which may adversely affect performance.

XML_CLOBS
During an export, if XMLTYPE columns are currently stored as CLOBs, they will automatically be exported as uncompressed CLOBs. If on the other hand they are currently stored as any combination of object-relational, binary or CLOB formats, they will be exported in compressed format by default. Setting the DATA_OPTIONS parameter to XML_CLOBS specifies that all XMLTYPE columns should be exported as uncompressed CLOBs, regardless of the default action. The syntax is shown below.

DATA_OPTIONS=XML_CLOBS

An example is shown below.

expdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
version=11.1 data_options=XML_CLOBS

Both the export and import must use the same XML schema and the job version must be set to 11.0.0 or higher.

REMAP_DATA
During export and import operations, the REMAP_DATA parameter allows you to associate a remapping package function that accepts the column value as a parameter and returns a modified version of the data. The syntax is shown below.

REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function

This can be used to mask sensitive data during export and import operations by replacing the original data with random alternatives. The mapping is done on a column-by-column basis, as shown below.

expdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
remap_data=tab1.col1:remap_pkg.remap_col1
remap_data=tab1.col2:remap_pkg.remap_col2

The remapping function must return the same datatype as the source column and it must not perform a commit or rollback.
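
A remapping package might look like the sketch below. This is a minimal example assuming COL1 and COL2 are VARCHAR2 columns; the names remap_pkg, remap_col1 and remap_col2 simply match the expdp command above.

CREATE OR REPLACE PACKAGE remap_pkg AS
  FUNCTION remap_col1 (p_value IN VARCHAR2) RETURN VARCHAR2;
  FUNCTION remap_col2 (p_value IN VARCHAR2) RETURN VARCHAR2;
END remap_pkg;
/

CREATE OR REPLACE PACKAGE BODY remap_pkg AS
  -- Replace the value with random uppercase characters of the same length.
  FUNCTION remap_col1 (p_value IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN DBMS_RANDOM.STRING('U', LENGTH(p_value));
  END remap_col1;

  -- Keep the first character and mask the rest.
  FUNCTION remap_col2 (p_value IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN SUBSTR(p_value, 1, 1) || RPAD('*', LENGTH(p_value) - 1, '*');
  END remap_col2;
END remap_pkg;
/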

How To: Configure Data Guard Broker

Configuring Data Guard Broker:
NOTE: You should have your physical standby already set up.


1.) Check parameter DG_BROKER_START on primary:
SQL> show parameter DG_BROKER_START

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
dg_broker_start boolean FALSE

2.) Set DG_BROKER_START to true on primary:
SQL> alter system set DG_BROKER_START=true scope=both;

System altered.

3.) Check DG_BROKER_START on the standby side:
SQL> show parameter DG_BROKER_START

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
dg_broker_start boolean FALSE

4.) Set DG_BROKER_START to true on standby:
SQL> alter system set DG_BROKER_START=true scope=both;

System altered.

5.) Define a service for the listener that includes the db_unique_name_DGMGRL.db_domain value for the GLOBAL_DBNAME:
Update the listener.ora file on both the primary and the standby.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = orcl.abc.com)
(ORACLE_HOME = /opt/app/oracle/10.2)
(SID_NAME = orcl)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl_DGMGRL.abc.com)
(ORACLE_HOME = /opt/app/oracle/10.2)
(SID_NAME = orcl)
)
)

6.) Now to create the data guard broker configurations:

[oracle@APP3 admin]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle
Connected.
DGMGRL> create configuration 'DBTEST'
> as primary database is 'orcl'
> connect identifier is orcl;
Configuration "DBTEST" created with primary database "orcl"
DGMGRL>

DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database

Current status for "DBTEST":
DISABLED

DGMGRL>

7.) Now add the standby DB to the Data Guard Broker configuration:

DGMGRL> add database 'orcl1' as
> connect identifier is orcl1
> maintained as physical;
Database "orcl1" added
DGMGRL>
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orcl1 - Physical standby database

Current status for "DBTEST":
DISABLED

8.) Now enable the configurations:
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orcl1 - Physical standby database

Current status for "DBTEST":
Warning: ORA-16607: one or more databases have failed


DGMGRL> show database verbose orcl1

Database
Name: orcl1
Role: PHYSICAL STANDBY
Enabled: YES
Intended State: ONLINE
Instance(s):
orcl

Properties:
InitialConnectIdentifier = 'orcl1'
ObserverConnectIdentifier = ''
LogXptMode = 'ARCH'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'MANUAL'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '3'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '/opt/app/oracle/oradata/orcl, /home/oracle/oradata/orcl'
LogFileNameConvert = '/opt/app/oracle/oradata/orcl, /home/oracle/oradata/orcl, /opt/app/oracle/flash_recovery_area/orcl/onlinelog, /home/oracle/oradata/flash_recovery_area/orcl'
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'wrpws'
SidName = 'orcl'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=wrpws)(PORT=1521))'
StandbyArchiveLocation = '/home/oracle/oradata/orcl/archive'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.arc'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'

Current status for "orcl1":
Error: ORA-12545: Connect failed because target host or object does not exist


DGMGRL>

Well, the problem in my case was that the hostname was not getting resolved. I added the hostname to the hosts file and it started working.

After fixing this, I got another error:
Current status for "orcl1":
Error: ORA-16664: unable to receive the result from a remote database

It was the same kind of problem: communication between the two databases. Make sure that the /etc/hosts files have entries for all the other servers, with name and IP address.
Always review the log_archive_dest_2 parameter to confirm it has the right entries and the service is accessible.
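
For example, a quick sanity check on each side:

SQL> show parameter log_archive_dest_2
[oracle@APP3 ~]$ tnsping orcl1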

DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration

Configuration
Name: DBTEST
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orcl1 - Physical standby database

Current status for "DBTEST":
SUCCESS

DGMGRL>

This completes the Data Guard Broker setup.

COMMANDS Available in DGMGRL:

DGMGRL> help

The following commands are available:

add Add a standby database to the broker configuration
connect Connect to an Oracle instance
create Create a broker configuration
disable Disable a configuration, a database, or Fast-Start Failover
edit Edit a configuration, database, or instance
enable Enable a configuration, a database, or Fast-Start Failover
exit Exit the program
failover Change a standby database to be the primary database
help Display description and syntax for a command
quit Exit the program
reinstate Change a disabled database into a viable standby database
rem Comment to be ignored by DGMGRL
remove Remove a configuration, database, or instance
show Display information about a configuration, database, or instance
shutdown Shutdown a currently running Oracle instance
start Start Fast-Start Failover observer
startup Start an Oracle database instance
stop Stop Fast-Start Failover observer
switchover Switch roles between the primary database and a standby database

Use "help " to see syntax for individual commands

Data Guard Setup for Oracle 10g

Steps:
1.) Make sure archive log mode is enabled on your database:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /opt/app/oracle/oradata/orcl/archive
Oldest online log sequence 108
Next log sequence to archive 109
Current log sequence 109

SQL> select name, log_mode from v$database;

NAME LOG_MODE
--------- ------------
ORCL ARCHIVELOG

If archive log mode is not enabled, please enable it using the following link.
How to enable archivelog mode in Oracle 11g database
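
In short, enabling archivelog mode looks like this (run as SYSDBA; it requires a short outage):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;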

2.) Enable force logging on the database, so that there are no problems with nologging operations in the future.

SQL> alter database force logging;

Database altered.

3.) Create password file, if you do not have one already.

[oracle@APP3 dbs]$ cd $ORACLE_HOME/dbs
[oracle@APP3 dbs]$ orapwd file=orapworcl password=oracle force=y
[oracle@APP3 dbs]$ ls -lrt orapworcl
-rw-r----- 1 oracle oinstall 1536 Sep 14 08:21 orapworcl

SQL> select * from v$pwfile_users;

USERNAME SYSDB SYSOP
------------------------------ ----- -----
SYS TRUE TRUE


4.) Create Standby Redo Logfiles on primary DB.

Current logfile:
SQL> col member format a40
SQL> select a.group#,a.status,a.member,b.bytes/1024/1024 from v$logfile a,v$log b
2 where a.group#=b.group#;

GROUP# STATUS MEMBER B.BYTES/1024/1024
---------- ------- ---------------------------------------- -----------------
1 /opt/app/oracle/oradata/orcl/redo01.log 50
2 /opt/app/oracle/oradata/orcl/redo02.log 50

Add standby redo log groups:

SQL> alter database add standby logfile group 3 size 50M;

Database altered.

SQL> alter database add standby logfile group 4 size 50M;

Database altered.

SQL> select * from v$logfile;

GROUP# STATUS TYPE MEMBER IS_
---------- ------- ------- --------------------------------------------------------------------------- ---
1 ONLINE /opt/app/oracle/oradata/ORCL/redo01.log NO
2 ONLINE /opt/app/oracle/oradata/ORCL/redo02.log NO
3 STANDBY /opt/app/oracle/flash_recovery_area/ORCL/onlinelog/o1_mf_3_5bvzkzgs_.log YES
4 STANDBY /opt/app/oracle/flash_recovery_area/ORCL/onlinelog/o1_mf_4_5bvzl8hf_.log YES

SQL> select * from v$standby_log;

GROUP# DBID THREAD# SEQUENCE# BYTES USED ARC STATUS FIRST_CHANGE# FIRST_TIM LAST_CHANGE# LAST_TIME
----------------------------------------------------------------------------------------------------------------------
3 UNASSIGNED 0 0 52428800 512 YES UNASSIGNED 0 0
4 UNASSIGNED 0 0 52428800 512 YES UNASSIGNED 0 0

5.) Check parameter db_unique_name
SQL> show parameters unique

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string orcl

6.) Add standby-related entries to the primary database's parameter file:
SQL> create pfile='/home/oracle/initprim.ora' from spfile;

Sample init.ora from Primary:
orcl.__db_cache_size=2097152000
orcl.__java_pool_size=16777216
orcl.__large_pool_size=16777216
orcl.__shared_pool_size=536870912
orcl.__streams_pool_size=0
*.audit_file_dest='/opt/app/oracle/admin/orcl/adump'
*.background_dump_dest='/opt/app/oracle/admin/orcl/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/opt/app/oracle/oradata/orcl/control01.ctl','/opt/app/oracle/oradata/orcl/control02.ctl','/opt/app/oracle/oradata/orcl/control03.ctl'
*.core_dump_dest='/opt/app/oracle/admin/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='/opt/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='orcl'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='location=/opt/app/oracle/oradata/orcl/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl'
*.open_cursors=300
*.pga_aggregate_target=823132160
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2684354560
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/opt/app/oracle/admin/orcl/udump'
db_unique_name=orcl
LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,orcl1)'
*.LOG_ARCHIVE_DEST_2='SERVICE=ORCL1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl1'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=3
DB_FILE_NAME_CONVERT='/u01/oradata/orcl','/opt/app/oracle/oradata/orcl'
LOG_FILE_NAME_CONVERT='/u01/oradata/orcl/archive','/opt/app/oracle/oradata/orcl/archive','/u01/oradata/flash_recovery_area/orcl','/opt/app/oracle/flash_recovery_area/orcl/onlinelog'
FAL_SERVER=orcl1
FAL_CLIENT=orcl

Copy the init.ora and make the necessary changes for use on the standby side: changes like the location of various files, FAL_SERVER, FAL_CLIENT etc.

Sample init.ora in Standby DB:
orcl.__db_cache_size=2097152000
orcl.__java_pool_size=16777216
orcl.__large_pool_size=16777216
orcl.__shared_pool_size=536870912
orcl.__streams_pool_size=0
*.audit_file_dest='/u01/oradata/orcl/adump'
*.background_dump_dest='/u01/oradata/orcl/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/u01/oradata/orcl/control01.ctl','/u01/oradata/orcl/control02.ctl','/u01/oradata/orcl/control03.ctl'
*.core_dump_dest='/u01/oradata/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='/u01/oradata/flash_recovery_area/orcl'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='orcl1'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='location=/u01/oradata/orcl/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl1'
*.open_cursors=300
*.pga_aggregate_target=823132160
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2684354560
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/oradata/orcl/udump'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,orcl1)'
*.LOG_ARCHIVE_DEST_2='SERVICE=ORCL LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=3
DB_FILE_NAME_CONVERT='/opt/app/oracle/oradata/orcl','/u01/oradata/orcl'
LOG_FILE_NAME_CONVERT='/opt/app/oracle/oradata/orcl/archive','/u01/oradata/orcl/archive','/opt/app/oracle/flash_recovery_area/orcl/onlinelog','/u01/oradata/flash_recovery_area/orcl'
FAL_SERVER=orcl
FAL_CLIENT=orcl1

7.) Shut down the primary database. Use the newly created pfile to startup nomount the database. Then create an spfile for the database. Mount the database and create a standby controlfile.
Shut down the database and take a cold backup of the database, all files including the redo log files. You can also create a standby DB from a hot backup.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup nomount pfile='/home/oracle/pfileorcl.ora'
ORACLE instance started.

Total System Global Area 2684354560 bytes
Fixed Size 2086352 bytes
Variable Size 570427952 bytes
Database Buffers 2097152000 bytes
Redo Buffers 14688256 bytes
SQL> create spfile from pfile='/home/oracle/pfileorcl.ora';

File created.
Meanwhile I also received the error:
create spfile from pfile='/home/oracle/pfileregdb.ora'
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kspsetpao1], [1753], [1700], [*],
[user_dump_dest], [33], [], []

Note: This error usually appears when the syntax of the pfile is wrong somewhere; fix the pfile and try again. It worked for me.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 2684354560 bytes
Fixed Size 2086352 bytes
Variable Size 570427952 bytes
Database Buffers 2097152000 bytes
Redo Buffers 14688256 bytes
Database mounted.
Database opened.

8.) Shutdown the database again and take a cold backup of all files.

9.) Create standby control file.
SQL> startup mount
ORACLE instance started.
Total System Global Area 2684354560 bytes
Fixed Size 2086352 bytes
Variable Size 570427952 bytes
Database Buffers 2097152000 bytes
Redo Buffers 14688256 bytes
Database mounted.

--Then mount and create a standby controlfile.
SQL> alter database create standby controlfile as 'standby.ctl';
Database altered.

-- Open the primary read write.
SQL> alter database open;
Database altered.

10.) Transfer all the files from the cold backup from the primary to the standby server. Also copy the password file from primary to standby.
Also copy the standby controlfile created in step 9 to the standby server, with the right name and location.
I use SFTP for transferring the files.

11.) Add TNS entries for the primary and standby databases on both the primary and standby servers, i.e. each server should have entries for both orcl and orcl1, as in the sketch below.
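
A minimal tnsnames.ora sketch (hostnames are placeholders; the service names depend on your SERVICE_NAMES/listener setup):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )

ORCL1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl1))
  )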

12.) Copy the pfile from step 6 for the standby DB.
Now try to start the standby database in nomount mode with the new pfile.

[oracle@dbtest dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 15 04:57:32 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/home/oracle/oracle/product/10.2.0/db_1/dbs/pfilestbregdb.ora';
ORACLE instance started.

Total System Global Area 1694498816 bytes
Fixed Size 1219784 bytes
Variable Size 402654008 bytes
Database Buffers 1275068416 bytes
Redo Buffers 15556608 bytes

13.) Create spfile from pfile.
SQL> create spfile from pfile='/home/oracle/oracle/product/10.2.0/db_1/dbs/pfilestbregdb.ora';

File created.

14.) Shut down the DB and do a startup mount.
SQL> startup mount;

15.) Start REDO apply process:
SQL> alter database recover managed standby database disconnect from session;
OR
SQL> alter database recover managed standby database nodelay disconnect parallel 8;
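
Alternatively, since standby redo logs were created in step 4, real-time apply can be used (a 10.2 option, shown as a sketch):

SQL> alter database recover managed standby database using current logfile disconnect from session;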

16.) Verification.
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

To check archive gap:
SELECT thread#, low_sequence#, high_sequence# from V$archive_gap;
For RAC:
SELECT thread#, low_sequence#, high_sequence# from gv$archive_gap;

To stop redo apply:
alter database recover managed standby database cancel;

17.) Check alert log files and verify that you did not receive any error.

18.) Switch some logfiles on the Primary and check if the same are getting applied to the standby.
on Primary:
SQL> alter system switch logfile;

on standby:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
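
You can also compare the latest archived sequence on the primary with what has been applied on the standby:

-- on primary
SQL> select max(sequence#) from v$archived_log;
-- on standby
SQL> select max(sequence#) from v$archived_log where applied='YES';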

That should be it; your physical standby DB should be working fine.

How To: Set up Oracle Streams Replication

Note: I have started using TOAD now, so you won't see the statement completion output.

Step 1: Create stream administration user in both databases.

create user streamadmin identified by streamadmin default tablespace users;

Step 2: Required grants to the user streamadmin.

grant dba,select_catalog_role to streamadmin;
exec dbms_streams_auth.grant_admin_privilege('streamadmin',true);

Step 3: We will use the default SCOTT schema for setting up this replication.

Step 4: Check the database parameters required for setting up stream replication.

For our example:
DB 1:
Name: TEST1
global_names should be TRUE
SQL> show parameter global_name
NAME TYPE VALUE
------------------------------------ ----------- --------------------------
global_names boolean TRUE

SQL> select * from global_name;
GLOBAL_NAME
-----------------------------------------------
TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM

SQL> show parameter job_queue

NAME TYPE VALUE
------------------------------------ ----------- ------
job_queue_processes integer 10

SQL> show parameter db_recovery_file_dest_size
NAME TYPE VALUE
------------------------------------ ----------- -----------
db_recovery_file_dest_size big integer 2G

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- -------------------------------------
db_recovery_file_dest string D:\oracle\product/flash_recovery_area

DB 2:
Name: TEST
global_names should be TRUE.
SQL> show parameter global_name
NAME TYPE VALUE
------------------------------------ ----------- -------
global_names boolean TRUE

SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------
TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM

SQL> show parameter job_queue
NAME TYPE VALUE
------------------------------------ ----------- ------
job_queue_processes integer 10

SQL> show parameter db_recovery_file_dest_size

NAME TYPE VALUE
------------------------------------ ----------- --------
db_recovery_file_dest_size big integer 2G

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string D:\oracle\product\10.2.0/flash_recovery_area

Step 5: Enable supplemental logging on the tables of the scott user in both the databases:

Oracle has two types of supplemental logging options:
1.) Unconditional Supplemental Logging: Logs the before images of specified columns any time the table is updated, regardless of whether the update affected any of the specified columns. This is sometimes referred to as an ALWAYS log group.
2.) Conditional Supplemental Log Groups: Logs the before images of all specified columns only if at least one of the columns in the log group is updated.
(From Oracle Documentation)

Login to the SCOTT schema to start conditional supplemental logging:
SQL> alter table emp add supplemental log group supp_log_emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO);
Table altered.
SQL> alter table dept add supplemental log group supp_log_dept (deptno,dname,loc);
Table altered.
SQL> alter table bonus add supplemental log group supp_log_bonus (ename,job,sal,comm);
Table altered.
SQL> alter table salgrade add supplemental log group supp_log_sal (grade,losal,hisal);
Table altered.

The same needs to be done on the other database also.

SQL> select log_group_name, table_name from dba_log_groups where owner='SCOTT';
LOG_GROUP_NAME TABLE_NAME
------------------------------ ------------------------------
SUPP_LOG_EMP EMP
SUPP_LOG_DEPT DEPT
SUPP_LOG_BONUS BONUS
SUPP_LOG_SAL SALGRADE

Step 6: Create database links between the stream administrator users in the two databases.
SQL> CREATE DATABASE LINK TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM CONNECT TO STREAMADMIN IDENTIFIED BY streamadmin USING 'TEST_REP2';
Database link created.

SQL> CREATE DATABASE LINK TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM CONNECT TO STREAMADMIN IDENTIFIED BY streamadmin USING 'TEST_REP1';
Database link created.

Step 7: Create Stream Queues under the streamadmin user to apply and capture the database changes to be replicated.
This also needs to be run on both databases as streamadmin.

SQL> show user
USER is "STREAMADMIN"
SQL> exec dbms_streams_adm.set_up_queue(queue_table => 'apply_q', queue_name => 'apply_q', queue_user => 'streamadmin');
PL/SQL procedure successfully completed.

SQL> exec dbms_streams_adm.set_up_queue(queue_table => 'capture_q',queue_name => 'capture_q',queue_user => 'streamadmin');
PL/SQL procedure successfully completed.

Step 8: Set up data capture on both databases:
DBMS_STREAMS_ADM is the administration package for setting up the capture process when using Streams, much like DBMS_REPCAT when using normal replication.
See http://www.psoug.org/reference/dbms_streams_adm.html for a detailed reference on this package.

Related internal views: sys.streams$_process_params, sys.streams$_capture_process.

SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_TYPE => 'CAPTURE',
STREAMS_NAME => 'CAPTURE_STREAM',
QUEUE_NAME => 'CAPTURE_Q',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
INCLUSION_RULE => TRUE);
END;
/
PL/SQL procedure successfully completed.

The above procedure needs to be run on both databases.

Step 9: Set up data apply on both databases:
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_TYPE => 'APPLY',
STREAMS_NAME => 'APPLY_STREAM',
QUEUE_NAME => 'APPLY_Q',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
PL/SQL procedure successfully completed.

SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_TYPE => 'APPLY',
STREAMS_NAME => 'APPLY_STREAM',
QUEUE_NAME => 'APPLY_Q',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
PL/SQL procedure successfully completed.

Step 10: Set up the propagation process on both databases:
This basically links the capture process on one database to the apply process on the other database.
These need to be run as the streamadmin user.
DB 1:

SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_NAME => 'TEST1_TO_TEST',
SOURCE_QUEUE_NAME => 'CAPTURE_Q',
DESTINATION_QUEUE_NAME => 'APPLY_Q@TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
PL/SQL procedure successfully completed.

DB 2:
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES (
SCHEMA_NAME => 'SCOTT',
STREAMS_NAME => 'TEST_TO_TEST1',
SOURCE_QUEUE_NAME => 'CAPTURE_Q',
DESTINATION_QUEUE_NAME => 'APPLY_Q@TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM',
INCLUDE_DML => TRUE,
INCLUDE_DDL => TRUE,
SOURCE_DATABASE => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
PL/SQL procedure successfully completed.
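
To confirm the propagation jobs are enabled, a quick check on each database:

SQL> select propagation_name, status from dba_propagation;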

Step 11: Set up the schema instantiation SCN on DB 2 (TEST) and DB 1 (TEST1):

SQL> DECLARE
ISCN NUMBER;
BEGIN
ISCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM (
SOURCE_SCHEMA_NAME => 'SCOTT',
SOURCE_DATABASE_NAME => 'TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM',
INSTANTIATION_SCN => ISCN,
RECURSIVE => TRUE);
END;
/
PL/SQL procedure successfully completed.

DECLARE
ISCN NUMBER;
BEGIN
ISCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM (
SOURCE_SCHEMA_NAME => 'SCOTT',
SOURCE_DATABASE_NAME => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM',
INSTANTIATION_SCN => ISCN,
RECURSIVE => TRUE);
END;
/
PL/SQL procedure successfully completed.

Step 12: Start the capture and apply processes:
Setting the disable_on_error parameter to 'N' allows the apply process to continue applying row LCRs even when it encounters errors. The default value is 'Y', which disables the apply process automatically on the first error encountered.
DB 1 (TEST1):

SQL> EXEC DBMS_APPLY_ADM.SET_PARAMETER (APPLY_NAME => 'APPLY_STREAM', PARAMETER => 'DISABLE_ON_ERROR', VALUE => 'N');
PL/SQL procedure successfully completed.

SQL> EXEC DBMS_APPLY_ADM.START_APPLY (APPLY_NAME => 'APPLY_STREAM');
PL/SQL procedure successfully completed.

SQL> EXEC DBMS_CAPTURE_ADM.START_CAPTURE (CAPTURE_NAME => 'CAPTURE_STREAM');
PL/SQL procedure successfully completed.

Run the same steps on the other DB as well.
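
Once both sides are started, confirm the processes are enabled:

SQL> select capture_name, status from dba_capture;
SQL> select apply_name, status from dba_apply;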

Step 13: Test the setup. I just realized that it's "Step 13" for testing, not good, not good, not number 13.
Actually I faced a few problems with the test.
1.) The AQ_TM_PROCESSES parameter in my case was 0, but the data was not getting transferred.
2.) I had not set the instantiation SCN for the second DB.

I found the above errors using the query below:
SELECT APPLY_NAME,
SOURCE_DATABASE,
LOCAL_TRANSACTION_ID,
ERROR_NUMBER,
ERROR_MESSAGE,
MESSAGE_COUNT
FROM DBA_APPLY_ERROR;

After rectifying the error, to apply the failed transaction:
BEGIN
DBMS_APPLY_ADM.EXECUTE_ERROR(
local_transaction_id => '4.30.434',
execute_as_user => false);
END;
/
You get the local_transaction_id from the above query.

Anyway, let's test the setup once again:
Insert test:
------------
SQL> connect scott/tiger@test_rep1
Connected.
SQL> insert into dept values (50,'RESEARCH','MEMPHIS');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from dept@TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM;

DEPTNO DNAME LOC
---------- -------------- -------------
50 RESEARCH MEMPHIS
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

Update Test:
------------
SQL> connect scott/tiger@test_rep2
Connected.
SQL> update dept set loc='DELHI' where deptno=50;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from dept@TEST1.REGRESS.RDBMS.DEV.US.ORACLE.COM;

DEPTNO DNAME LOC
---------- -------------- -------------
50 RESEARCH DELHI
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

Finally, you can see that the basic replication setup is working.
