Monday, April 20, 2015

How to import statistics into a new database:



Step 1: Create the stats_table:


SQL> exec dbms_stats.create_stat_table(ownname =>'ULTIMUS',stattab => 'prod_stats', tblspace => 'SYSTEM');
PL/SQL procedure successfully completed.


---
SQL> show user
USER is "ULTIMUS"
SQL> select tname from tab where tname='PROD_STATS';

TNAME
------------------------------
PROD_STATS
---


Step 2: Gather the statistics with gather_schema_stats.  In this dbms_stats example, cascade=>TRUE also collects index statistics and method_opt lets Oracle decide which columns need histograms:

BEGIN
  DBMS_STATS.gather_schema_stats(
     ownname          => 'ULTIMUS',
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
     cascade          => TRUE,
     method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
END;
/

Gathering schema statistics like this can take a long time on a large schema; the degree parameter of gather_schema_stats can be used to run the gather in parallel.




Step 3: Export the stats to the prod_stats table using export_system_stats::

exec dbms_stats.export_schema_stats(ownname => 'ULTIMUS',stattab => 'prod_stats');

---------
SQL> exec dbms_stats.export_schema_stats(ownname => 'ULTIMUS',stattab => 'prod_stats');

PL/SQL procedure successfully completed.

SQL> select count(*) from PROD_STATS;

  COUNT(*)
----------
    118464

-------------


Step 4: Export the prod_stats table using expdp:

   expdp ultimus/ultimus directory=test_dir dumpfile=prod_stats.dmp tables=prod_stats



Step 5: Copy the export file (e.g. via FTP or scp) over to the target (TEST) server:
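Before the statistics can be loaded into the dictionary on the target, the PROD_STATS table itself must be imported there. A minimal sketch, assuming the same test_dir directory object exists on the target server:

   impdp ultimus/ultimus directory=test_dir dumpfile=prod_stats.dmp tables=prod_stats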


Step 6: Import the stats into the data dictionary on TEST:

 SQL> exec dbms_stats.import_schema_stats(ownname => 'ULTIMUS', stattab => 'prod_stats');

PL/SQL procedure successfully completed.




Step 7: We can now use the import_schema_stats procedure in dbms_stats to overlay the existing CBO statistics on the smaller TEST instance:


exec dbms_stats.import_schema_stats(ownname => 'ULTIMUS', stattab => 'prod_stats');
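As a quick sanity check (an extra step, not part of the original recipe), confirm that the dictionary now shows the imported figures:

SQL> select table_name, num_rows, last_analyzed from dba_tables where owner = 'ULTIMUS' order by table_name;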

Wednesday, April 8, 2015

ASM Voting Disk!


Voting files manage information about node membership. OCR is a system that manages cluster and Oracle Real Application Clusters (Oracle RAC) database configuration information. A quorum failure group is a special type of failure group and disks in these failure groups do not contain user data. A quorum failure group is not considered when determining redundancy requirements in respect to storing user data. However, a quorum failure group counts when mounting a disk group.

11gR2 clusterware voting disk files can be stored in ASM through the ASM diskgroup.  To ensure the high availability of the clusterware, it is highly recommended to store multiple voting disk files. When storing the voting disk files in ASM, the redundancy setting of the ASM diskgroup determines the number of voting disk files. The following settings are recommended for planning the proper configuration of ASM diskgroup that is used to store the voting disk files.
  • External Redundancy: The ASM diskgroup only has one failure group without mirroring and stores one voting disk file. It is recommended the disks have external redundancy RAID configuration to provide the storage high availability.
  • Normal Redundancy: The ASM diskgroup requires three failure groups and it will provide three voting disk files.
  • High Redundancy: The ASM diskgroup requires five failure groups and it has five voting disk files.
Introduction
In RAC, CSSD processes (Cluster Services Synchronization Daemon) monitor the health of RAC nodes employing two distinct heart beats: Network heart beat and Disk heart beat. Healthy nodes will have continuous network and disk heartbeats exchanged between the nodes. Break in heart beat indicates a possible error scenario. There are few different scenarios possible with missing heart beats:
  1. Network heart beat is successful, but disk heart beat is missed.
  2. Disk heart beat is successful, but network heart beat is missed.
  3. Both heart beats failed.
In addition, with numerous nodes, there are other possible scenarios too. Few possible scenarios:
  1. Nodes have split in to N sets of nodes, communicating within the set, but not with members in other set.
  2. Just one node is unhealthy.
Nodes with quorum will maintain active membership of the cluster, and the other node(s) will be fenced/rebooted. I can't discuss all possible scenarios in a blog entry, so we will stick to a simple 2-node, single-voting-disk configuration here.
Voting disks are used to monitor the disk heartbeats. It is preferable to have an odd number of voting disks, with at least 3.
CSSD is a multi-threaded process
Voting disks are shared between the nodes and should be visible from all nodes, stating the obvious. CSSD process is a multi-threaded process and a thread of the CSSD process monitors the disk heart beat. The disk HB (Heart Beat) thread is scheduled approximately every second and that thread verifies the disk heart beat from all active nodes in the cluster. Also, another thread of CSSD monitors the network heart beat. Pstack (Solaris) of CSSD process would show the threads of CSSD process.
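A quick way to look at those threads on a running node (a rough sketch; process names and tools vary by platform and version):

# find the CSSD process id, then dump its thread stacks (Solaris)
ps -ef | grep [o]cssd.bin
pstack <pid_of_ocssd.bin>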

Voting disk in 11g

In this post, I will write about voting disk – what does it contain, who updates it, how is it used, where is it stored and so on…
The voting disk is a key component of the clusterware, and its failure can render the cluster inoperable.
In RAC, at any point in time the clusterware must know which nodes are members of the cluster so that
- it can perform load balancing
- In case a node fails, it can perform failover of resources as defined in the resource profiles
- If a node joins, it can start resources on it as defined in OCR/OLR
- If a node joins, it can assign VIP to it in case GNS is in use
- If a node fails, it can execute callouts if defined
and so on

Hence, there must be a way by which clusterware can find out about the node membership.
  That is where the voting disk comes into the picture. It is the place where nodes mark their attendance. Consider an analogy where a manager wants to find out which of his subordinates are present: he can just check the attendance register and assign them their tasks accordingly. Similarly, the CSSD process on every node makes entries in the voting disk to ascertain the membership of that node. The voting disk records node membership information. If it ever fails, the entire clustered environment for Oracle 11g RAC will be adversely affected and an outage may result if the voting disks are lost.
Also, in a cluster, communication between the various nodes is of paramount importance. Nodes which can't communicate with other nodes should be evicted from the cluster. While marking their own presence, all the nodes also register information about their connectivity with the other nodes in the voting disk; this is called the network heartbeat. The CSSD process on each RAC node maintains its heartbeat in a block of one OS block in size, at a specific offset in the voting disk. The written block has a header area with the node name, and the heartbeat counter increments on every write call, once per second. Thus the heartbeat of each node is recorded at a different offset in the voting disk. In addition to maintaining its own disk block, CSSD also monitors the disk blocks maintained by the CSSD processes running on the other cluster nodes. Healthy nodes will have continuous network and disk heartbeats exchanged between the nodes; a break in the heartbeat indicates a possible error scenario. If a node's disk block is not updated within a short timeout period, that node is considered unhealthy and may be rebooted to protect the database information. In this case, a message to this effect is written to the kill block of the node. Each node reads its kill block once per second; if the kill block has been overwritten, the node commits suicide (reboots itself).
During a reconfiguration (node join or leave), CSSD monitors all nodes and determines whether a node has a disk heartbeat, including those with no network heartbeat. If no disk heartbeat is detected, the node is declared dead.
What is stored in voting disk?
——————————
Voting disks contain static and dynamic data.
Static data : Info about nodes in the cluster
Dynamic data : Disk heartbeat logging
It holds important details about cluster node membership, such as
- which node is part of the cluster,
- who (node) is joining the cluster, and
- who (node) is leaving the cluster.

Why is voting disk needed ?
—————————
The voting disk files are used by Oracle Clusterware as a health check:
- by CSS to determine which nodes are currently members of the cluster
- in concert with other Cluster components such as CRS to shut down, fence, or reboot either single or multiple nodes whenever network communication is lost between any node within the cluster, in order to prevent the dreaded split-brain condition in which two or more instances attempt to control the RAC database. It  thus protects the database information.
- It is used by the CSS daemon to arbitrate with peers that it cannot see over the private interconnect in the event of an outage, allowing it to salvage the largest fully connected subcluster for further operation.  It checks the voting disk to determine if there is a failure on any other node in the cluster. During this operation, NM (Node Monitor) makes an entry in the voting disk to record its vote on availability; similar operations are performed by the other instances in the cluster. The three configured voting disks also provide a method to determine who in the cluster should survive. For example, if one of the nodes must be evicted because it has become unresponsive, the node that can access two of the voting disks will evict the other node. NM alternates between the heartbeat and the voting disk to determine the availability of other nodes in the cluster.
The Voting disk is the key communication mechanism within the Oracle Clusterware where all nodes in the cluster read and write heartbeat information. CSSD processes (Cluster Services Synchronization Daemon) monitor the health of  RAC nodes employing two distinct heart beats: Network heart beat and Disk heart beat. Healthy nodes will have continuous network and disk heartbeats exchanged between the  nodes. Break in heart beat indicates a possible error scenario. There are few different scenarios possible with missing heart beats:
1. Network heart beat is successful, but disk heart beat is missed.
2. Disk heart beat is successful, but network heart beat is missed.
3. Both heart beats failed.
In addition, with numerous nodes, there are other possible scenarios too. Few possible scenarios:
1. Nodes have split in to N sets of nodes, communicating within the set, but not with members in other set.
2. Just one node is unhealthy.
Nodes with quorum will maintain active membership of the cluster and other node(s) will be fenced/rebooted.

Why should we have an odd number of voting disks?
————————————————-
The odd number of voting disks configured provide a method to determine who in the cluster should survive.
A node must be able to access more than half of the voting disks at any time. For example, take a two-node cluster with an even number of voting disks, say 2. Suppose Node1 can access only voting disk 1 and Node2 can access only voting disk 2. Then there is no common file where the clusterware can check the heartbeat of both nodes. If we have 3 voting disks and both nodes are able to access more than half of them, i.e. 2 voting disks, there will be at least one disk accessible by both nodes, and the clusterware can use that disk to check the heartbeat of both nodes. Hence, each node must be able to access more than half the number of voting disks. A node that cannot do so has to be evicted from the cluster by a node that does have access to more than half the voting disks, to maintain the integrity of the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.
   Loss of more than half your voting disks will cause the entire cluster to fail !!
Where is voting disk stored?
—————————-
 The Voting Disk is a shared disk that will be accessed by all member nodes in the cluster during an operation. Hence, the voting disks must be on shared accessible storage .
- You should plan on allocating 280MB for each voting disk file.
- Prior to 11g R2 RAC, it could be placed on
     . a raw device
   . a clustered filesystem supported by Oracle RAC such as OCFS, Sun Cluster, or Veritas Cluster filesystem
- As of 11g R2 RAC, it can be placed on ASM disks. This simplifies management and improves performance, but it also posed a puzzle: for a node to join the cluster it must be able to access the voting disk, yet the voting disk is in ASM and ASM can't be up until the node is up. To resolve this, Oracle ASM reserves several blocks at a fixed location on every ASM disk used for storing a voting disk. As a result, Oracle Clusterware can access the voting disks in ASM even if the ASM instance is down, and CSS can continue to maintain the cluster even if the ASM instance has failed. The physical location of the voting files on the ASM disks is fixed, i.e. the cluster stack does not rely on a running ASM instance to access the files. The location of the file is visible in the ASM disk header (dumping the file out of ASM with dd is quite easy):
oracle@rac1:~/ [+ASM1] kfed read /dev/sdf | grep -E 'vfstart|vfend'
kfdhdb.vfstart:                   96 ; 0x0ec: 0x00000060
kfdhdb.vfend:                    128 ; 0x0f0: 0x00000080
 – The voting disk is not striped but put as a whole on ASM Disks
 – In the event that the disk containing the voting disk fails, Oracle ASM will choose another disk on which to store this data.
 – It eliminates the need for using a third-party cluster volume manager.
 – you can reduce the complexity of managing disk partitions for voting disks during Oracle Clusterware installations.
 –  The voting disk needs to be mirrored; should it become unavailable, the cluster will come down. Hence, you should maintain multiple copies of the voting disks on separate disk LUNs so that you eliminate a single point of failure (SPOF) in your Oracle 11g RAC configuration.
– If voting disk is stored on ASM, multiplexing level of voting disk is decided by the redundancy of the diskgroup.
Redundancy of the diskgroup    # of copies of voting disk    Minimum # of disks in the diskgroup
External                       1                             1
Normal                         3                             3
High                           5                             5

– If the voting disk is on a diskgroup with external redundancy, one copy of the voting file is stored on one disk in the diskgroup.
– If we store the voting disk on a diskgroup with normal redundancy, we should be able to tolerate the loss of one disk, i.e. even after losing one disk we should still have enough voting disks for the clusterware to continue. If the diskgroup has 2 disks (the minimum required for normal redundancy), we can store only 2 copies of the voting disk on it. If we lose one disk, only one copy of the voting disk is left and the clusterware cannot continue, because to continue it must be able to access more than half the number of voting disks, i.e. more than 2*1/2 = 1, which means at least 2.
Hence, to be able to tolerate the loss of one disk, we should have 3 copies of the voting disk on a diskgroup with normal redundancy . So, a normal redundancy diskgroup having voting disk should have minimum 3 disks in it.
– Similarly, if we store the voting disk on a diskgroup with high redundancy, 5 voting files are placed, each on one ASM disk, i.e. a high redundancy diskgroup should have at least 5 disks so that even if we lose 2 disks, the clusterware can continue.
 – Ensure that all the nodes participating in the cluster have read/write permissions on disks.
 – You can have up to a maximum of 15 voting disks. However, Oracle recommends that you do not go beyond five voting disks.

Backing up voting disk
———————–
In previous versions of Oracle Clusterware you needed to backup the voting disks with the dd command. Starting with Oracle Clusterware 11g Release 2 you no longer need to backup the voting disks. The voting disks are automatically backed up as a part of the OCR. In fact, Oracle explicitly indicates that you should not use a backup tool like dd to backup or restore voting disks. Doing so can lead to the loss of the voting disk.
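Because the voting disks ride along with the OCR backups, the standard ocrconfig commands can be used to list (or take) those backups. A sketch for reference:

# as root on a cluster node
ocrconfig -showbackup
ocrconfig -manualbackup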
Although the voting disk contents do not change frequently, in releases prior to 11.2 you needed to back up the voting disk file every time
- you added or removed a node from the cluster, or
- immediately after you configured or upgraded a cluster.
  A node in the cluster must be able to access more than half of the voting disks at any time in order to be able to tolerate a failure of n voting disks. Therefore, it is strongly recommended that you configure an odd number of voting disks such as 3, 5, and so on.

 Check the location of voting disk
grid@host01$crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   243ec3b2a3cf4fbbbfed6f20a1ef4319 (ORCL:ASMDISK01) [DATA]
Located 1 voting disk(s).
– We can see that there is only one copy of the voting disk, on the DATA diskgroup, which has external redundancy.
As I mentioned earlier, Oracle writes the voting devices to the underlying disks at pre-designated locations so that it can get the contents of these files when the cluster starts up.
Let's see that with an actual example, using the logs from CSS. They are located at $ORACLE_HOME/log/<hostname>/cssd. Here is an excerpt from one of the logs. The line says that it found a "potential" voting file on one of the disks – 243ec3b2-a3cf4fbb-bfed6f20-a1ef4319
.
grid@host01$ vi /u01/app/11.2.0/grid/log/host01/cssd/ocssd.log
search for string potential or File Universal ID – 243ec3……
2012-10-09 03:54:28.423: [    CSSD][986175376]clssnmvDiskVerify: Successful discovery for disk ORCL:ASMDISK01, UID 243ec3b2-a3cf4fbb-bfed6f20-a1ef4319,
– Create another diskgroup, test, with normal redundancy and 2 disks (a sketch of the CREATE DISKGROUP statement is shown after this list).
– Try to move the voting disk from the data diskgroup to the test diskgroup.
– It fails, as we should have at least 3 disks in the test diskgroup.
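The test diskgroup used in this walkthrough could be created along these lines (a sketch run from the ASM instance; the disk paths are assumptions for illustration):

SQL> CREATE DISKGROUP test NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/asm-test-disk1'
       FAILGROUP fg2 DISK '/dev/asm-test-disk2';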

[grid@host01 cssd]$ crsctl replace votedisk +test
Failed to create voting files on disk group test.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Replace failed, or completed with errors.
– Add another disk to the test diskgroup and mark it as a quorum disk (a sketch of this step is shown after the crsctl output below). The quorum disk can be one small disk (300 MB should be on the safe side here, since the voting file is only about 280 MB in size) that keeps one mirror of the voting file. The other two disks will each contain one voting file and all the other stripes of the database area, whereas the quorum disk gets only that one voting file.
– Now try to move the voting disk from the data diskgroup to the test diskgroup.
– Now the operation is successful
[grid@host01 cssd]$ crsctl replace votedisk +test
Successful addition of voting disk 00ce3c95c6534f44bfffa645a3430bc3.
Successful addition of voting disk a3751063aec14f8ebfe8fb89fccf45ff.
Successful addition of voting disk 0fce89ac35834f99bff7b04ccaaa8006.
Successful deletion of voting disk 243ec3b2a3cf4fbbbfed6f20a1ef4319.
Successfully replaced voting disk group with +test.
CRS-4266: Voting file(s) successfully replaced
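For reference, the quorum disk mentioned above could be added along these lines (a sketch run from the ASM instance; the disk path and failgroup name are assumptions):

SQL> ALTER DISKGROUP test ADD QUORUM FAILGROUP fgq DISK '/dev/asm-quorum-disk1';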
– Check the ocssd.log – search for 00ce3c9……
grid@host01$vi $ORACLE_HOME/log/host01/cssd/ocssd.log
2012-10-09 05:08:19.484: [    CSSD][997631888]  Listing unique IDs for 3 voting files:
2012-10-09 05:08:19.484: [    CSSD][997631888]    voting file 1: 00ce3c95-c6534f44-bfffa645-a3430bc3
2012-10-09 05:08:19.484: [    CSSD][997631888]    voting file 2: a3751063-aec14f8e-bfe8fb89-fccf45ff
2012-10-09 05:08:19.484: [    CSSD][997631888]    voting file 3: 0fce89ac35834f99bff7b04ccaaa8006
I hope this information was useful.
Keep visiting the blog. Thanks for your time!

ASM Disk High Redundancy with Ten Disks!

My last question is related to ASM disk high redundancy; I have some confusion in this area.

Let's say I have 10 disks in a diskgroup '+DATA' with redundancy level 'HIGH', meaning 3-way redundancy. I know 3-way redundancy means the data is written to 3 disks at a time, but what about the other 7 disks? I actually need to understand the write mechanism.
- Is each disk identical to the other disks?
- With HIGH redundancy, how many disk failures can be tolerated without data loss?



ASM diskgroups with NORMAL or HIGH redundancy are further divided into failure groups. These failure groups are logical groups of disks that each mirror ASM file extents on other groups. The mirroring is done at the ASM file extent level, not the disk level. So a disk will contain ASM file extents which are primary copies, and also copies that are secondary mirrors of extents on other disks. A nice visual explanation of this concept is found here: Brief introduction to ASM mirroring | The Oracle Instructor


Oracle doesn't mirror entire disks.  As Alvaro has explained, Oracle mirrors extents.  With High Redundancy, an extent in 1 disk is mirrored to two different failure groups.  If you have defined your disks as 7 distinct failure groups, an extent from disk 1 may be mirrored in any two of the other six disks -- say disks 3 and 6.  The next extent in disk 1 is, again, mirrored to any two of the other six disks, possibly disks 4 and 7. And so on .  So no one disk is entirely mirrored to another disk.  It is a small portion (the extent) that is mirrored. Two extents from the same disk may be mirrored to two different other disks.
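One way to relate this to your own diskgroup is to check how many failure groups it actually spans (a sketch against the standard V$ASM views; the diskgroup name DATA is an assumption):

SQL> select dg.name, dg.type redundancy, count(distinct d.failgroup) failgroups
       from v$asm_diskgroup dg join v$asm_disk d on d.group_number = dg.group_number
      where dg.name = 'DATA'
      group by dg.name, dg.type;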

What happens in Standby after a nologging operation on Primary DB ?

https://community.oracle.com/thread/666455

1] On the primary, if a table "A" is in NOLOGGING mode, redo is not generated (except the dictionary stuff) if an INSERT is performed with the "APPEND" hint. Isn't it?
(900 rows as inserted with append hint into A)

2]So if there is a standby, while the archive logs are applied the table "A" isn't populated with the data. (isn't it ?)

3]On Primary, if i "DELETE from A where id=3",
what would happen when that particular archive log is applied on the standby (which has the table but no data because of nologging and append)?

What would happen if after step2, i open the standby in read only and perform
 select * from A; 
Will it throw an error, or give "no rows selected"?

TIA,
JJ
    • 1. Re: What happens in Standby after a nologging operation on Primary DB ?
      mbobak (Grand Master)
      JJ,

      I don't have standby db setup to test it, but I'm pretty sure you'll see ORA-26040 'data block was loaded with NOLOGGING option'. This is the same error you'd get on the primary if you did media recovery using a backup that was taken before the nologging operation.

      To avoid this, set force_logging to TRUE on your primary database.

      -Mark
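
      A minimal sketch of the change Mark is suggesting (standard commands, shown here for reference):

      SQL> ALTER DATABASE FORCE LOGGING;
      SQL> SELECT force_logging FROM v$database;   -- should now return YES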
      • 2. Re: What happens in Standby after a nologging operation on Primary DB ?
        639907 (Whiz)
        Hi J.J,

        This is part of a material I'm preparing about this topic:

        Redo generation is a vital part of the Oracle recovery mechanism. Without it crashed instances will not recover and will not start in a consistent state. Excessive LOGGING is the result of excessive work on the database.

        The Oracle® Database Administrator's Guide 10g Release 2 says the following regarding the main benefits of the NOLOGGING option:

        •     Space is saved in the redo log files
        •     The time it takes to create the table is decreased
        •     Performance improves for parallel creation of large tables

        “A very important rule with respect to data is to never put yourself into an unrecoverable situation. The importance of this guideline cannot be stressed enough, but it does not mean that you can never use time-saving or performance-enhancing options.”

        Oracle gave the user the ability to limit redo generation on tables and indexes by setting them to NOLOGGING mode. NOLOGGING affects recoverability, and before going into how to limit redo generation it is important to clear up the misunderstanding that NOLOGGING is a way out of redo generation altogether. Here are some points regarding it:

        •     NOLOGGING is designed to handle bulk inserts for data which can be re-produced.
        •     Writing to undo blocks causes generation of redo regardless of LOGGING status.
        •     LOGGING should not be disabled on a primary database if it has one or more standby databases. For this reason Oracle introduced the ALTER DATABASE FORCE LOGGING command in Oracle 9i R2. This means that the NOLOGGING attribute will not have any effect on segments if the database is in FORCE LOGGING mode. NOLOGGING can also be overridden at the tablespace level using ALTER TABLESPACE … FORCE LOGGING.
        •     Any change to the data dictionary will cause LOGGING; this happens to protect the data dictionary. One example: if we allocate space above the HWM for a table and the system fails in the middle of an INSERT /*+ APPEND */, Oracle will need to roll back that data dictionary update. There will be redo generated, but it is to protect the data dictionary, not your newly inserted data (Oracle just undoes the space allocation if it fails; your data will disappear).
        •     Data that is not logged cannot be recovered. The data should be backed up after the modification.
        •     Tables and indexes should be set back to LOGGING mode when the NOLOGGING is no longer needed.
        •     NOLOGGING is not needed for direct path insert if the database is in NOARCHIVELOG mode.
        •     NOLOGGING should not be used for the data which can not be reproduced. If data which can not be reloaded was loaded using NOLOGGING and the database crashes before backing this data up, the data can not be recovered.
        •     NOLOGGING does not apply to UPDATE and DELETE.
        •     NOLOGGING will work during certain situations but subsequent DML will generate redo. Some of these situations are: direct load INSERT (using APPEND hint), CREATE TABLE ... AS SELECT, CREATE INDEX.
        •     If the LOGGING or NOLOGGING clause is not specified when creating a table, partition, or index the default to the LOGGING attribute will be the LOGGING attribute of the tablespace in which it resides.
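
        A small sketch of how the effective LOGGING attribute can be checked at the tablespace and table level (the SCOTT owner is just an example):

        SQL> SELECT tablespace_name, logging FROM dba_tablespaces;
        SQL> SELECT table_name, logging FROM dba_tables WHERE owner = 'SCOTT';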


        Disabling Logging (NOLOGGING)


        Logging can be disabled at the table level or the tablespace level. If it is done at the tablespace level, then every newly created index or table in that tablespace will be in NOLOGGING mode (you can still have logging tables inside a NOLOGGING tablespace). A table or an index can be created with NOLOGGING mode, or it can be altered using ALTER TABLE/INDEX NOLOGGING. It is important to note that just because an index or a table was created with NOLOGGING does not mean that redo generation has been stopped for that table or index. NOLOGGING is active in the following situations, and only while running one of the following commands, not after that. This is not a full list:

        •     DIRECT LOAD (SQL*Loader)
        •     DIRECT LOAD INSERT (using APPEND hint)
        •     CREATE TABLE ... AS SELECT
        •     CREATE INDEX
        •     ALTER TABLE MOVE
        •     ALTER TABLE ... MOVE PARTITION
        •     ALTER TABLE ... SPLIT PARTITION
        •     ALTER TABLE … ADD PARTITION (if HASH partition)
        •     ALTER TABLE … MERGE PARTITION
        •     ALTER TABLE … MODIFY PARTITION
        o ADD SUBPARTITION
        o COALESCE SUBPARTITION
        o REBUILD UNUSABLE INDEXES
        •     ALTER INDEX ... SPLIT PARTITION
        •     ALTER INDEX ... REBUILD
        •     ALTER INDEX ... REBUILD PARTITION


        Logging is stopped only while one of the commands above is running, so if a user runs this:

        •     ALTER INDEX new_index REBUILD NOLOGGING;

        The actual rebuild of the index does not generate redo (though all data dictionary changes associated with the rebuild will), but after that any DML on the index will generate redo; this includes direct-path inserts on the table to which the index belongs.

        Here is another example to make this point more clear:

        •     CREATE TABLE new_table_nolog_test (….) NOLOGGING;

        All the following statements will generate redo despite the fact the table is in NOLOGGING mode:

        •     INSERT INTO new_table_nolog_test ...,
        •     UPDATE new_table_nolog_test SET …,
        •     DELETE FROM new_table_nolog_test ..

        The following will not generate redo (except from dictionary changes and indexes):

        •     INSERT /*+ APPEND */ …
        •     ALTER TABLE new_table_nolog_test MOVE …
        •     ALTER TABLE new_table_nolog_test MOVE PARTITION …

        To activate the NOLOGGING for one of the ALTER commands above add the NOLOGGING clause after the end of the ALTER command.

        For example:

        •     ALTER TABLE new_table_nolog_test MOVE PARTITION parti_001 TABLESPACE new_ts_001 NOLOGGING;

        The same applies for CREATE INDEX but for CREATE TABLE the NOLOGGING should come after the table name.

        Example:

        •     CREATE TABLE new_table_nolog_test NOLOGGING AS SELECT * FROM big_table;

        "It is a common mistake to add the NOLOGGING option at the end of the SQL (Because oracle will consider it an alias and the table will generate a lot of logging)."

        To use direct path load in SQL*Loader you must run the $ORACLE_HOME/rdbms/admin/catldr.sql script before your first sqlldr run in direct path mode. To run sqlldr in direct path mode, use direct=true.
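
        A sketch of a direct-path SQL*Loader run (credentials and control file name are assumptions for illustration):

        sqlldr userid=scott/tiger control=emp.ctl log=emp.log direct=true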

        Note: Even though direct path load reduces the generation of redo, it is not totally eliminated. That's because those inserts still generate undo which in turn generates redo.

        If there is an index on the table and an INSERT /*+ APPEND */ is made on the table, the index will still produce redo. This can be circumvented by setting the index to UNUSABLE and setting skip_unusable_indexes to TRUE (before 10g this could only be set at the session level; from 10g on it can also be set at the instance level). This does not apply to UNIQUE indexes.
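
        A sketch of that index trick (object names are assumptions; as noted, it does not apply to unique indexes):

        SQL> ALTER INDEX new_table_nolog_test_idx UNUSABLE;
        SQL> ALTER SESSION SET skip_unusable_indexes = TRUE;
        SQL> INSERT /*+ APPEND */ INTO new_table_nolog_test SELECT * FROM big_table;
        SQL> COMMIT;
        SQL> ALTER INDEX new_table_nolog_test_idx REBUILD NOLOGGING;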

        Note: Please feel free to comment on or correct this.
        Regards,

        Francisco Munoz Alvarez
        • 3. Re: What happens in Standby after a nologging operation on Primary DB ?
          rayb12617 (Newbie)
          I understand the process of nologging on a single operation on a primary database.

          I would like to know what happens when applying redo to a standby database after having performed such an operation on the primary (e.g. ALTER INDEX ... NOLOGGING) and then performing normal updates.

          Will application of redo logs fail altogether? What would the error code be?
          Will application of redo logs fail only for the affected object? or tablespace?
          or will the logs get applied anyway and actually corrupt the standby?

          Thanks

          RayB
          • 4. Re: What happens in Standby after a nologging operation on Primary DB ?
            mbobak (Grand Master)
            Hi Ray,

            Hmm...this thread is almost a year old....

            Anyhow, to answer your questions:
            RayB wrote:
            I understand the process of nologging on a single operation on a primary database.
            Hmm...I'm not sure you do....let me explain why I think that.
            I would like to know what happens when applying redo to a standby database after having performed such an operation on the primary (e.g. ALTER INDEX ... NOLOGGING) and then performing normal updates.
            This isn't a valid example. Altering an index to nologging has no effect on normal DML on that index. All index updates will log. It would affect a rebuild, in that the rebuild would not log. But normal DML would be unaffected.

            But, suppose you altered a heap table to be nologging. What then? Well, still, normal DML would log, and there'd be no effect. If however, after setting nologging, you did a direct load insert, i.e., insert /*+ append */, then those changes would not be logged. However, block invalidation records are generated for those new blocks.
            Will application of redo logs fail altogether? What would the error code be?
            No, at redo application time, there would be no error.
            Will application of redo logs fail only for the affected object? or tablespace?
            Redo application will not fail.
            or will the logs get applied anyway and actually corrupt the stanby?
            Essentially, yes. The redo gets applied, but that redo, instead of containing the data changes that occurred on the primary, contains block invalidation records. So the blocks are allocated to the table and marked as invalid. If/when you try to read them on the standby, you'll get ORA-26040.

            If you have concerns about this, you should consider enabling force logging.

            Hope that helps,

            -Mark
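
             A quick way to check whether nologging operations have touched any datafile on the primary is the unrecoverable columns of V$DATAFILE (a sketch, shown here for reference):

             SQL> SELECT file#, unrecoverable_change#, unrecoverable_time
                    FROM v$datafile
                   WHERE unrecoverable_change# > 0;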
            • 5. Re: What happens in Standby after a nologging operation on Primary DB ?
              ajallen (Whiz)
              RayB,

              Since it appears you are using dataguard, please execute the following query and post the results here.

              SELECT FORCE_LOGGING FROM V$DATABASE;
              • 6. Re: What happens in Standby after a nologging operation on Primary DB ?
                rayb12617 (Newbie)
                The answer is "NO".
                Thanks for the answer, I keep forgetting about the v$database object.

                RayB
                • 7. Re: What happens in Standby after a nologging operation on Primary DB ?
                  rayb12617 (Newbie)
                  I meant an operation like "alter index [rebuild | move] nologging;" which I suspect will not log the rebuild operation but continue logging after the operation completes.

                  Otherwise, thanks for confirming that corruption will occur on the standby.

                  RayB
                  • 8. Re: What happens in Standby after a nologging operation on Primary DB ?
                    ajallen (Whiz)
                    You need to run ALTER DATABASE FORCE LOGGING; So that NOLOGGING will be ignored. Else your standby database can get corrupted by not seeing all changes made in the primary. See the entry by F.Munoz Alvarez, above.
                    • 9. Re: What happens in Standby after a nologging operation on Primary DB ?
                      user11204758 (Newbie)
                      So what's the process if I need to fail over to the standby database when nologging has been used in the primary database?
                      • 10. Re: What happens in Standby after a nologging operation on Primary DB ?
                        jgarry (Legend)
                        Buy more personal lubricant: http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#i1015738
                        • 11. Re: What happens in Standby after a nologging operation on Primary DB ?
                          Hemant K Chitale (Titan)
                          Joel,

                          I almost flagged your post as s_p_a_m until I looked at what the URL actually was !


                          Hemant K Chitale