ORA-12906 cannot drop default temporary tablespace

If you try to take the default temporary tablespace of the database offline or drop it, you will run into errors like these:

SQL>SELECT TABLESPACE_NAME FROM DBA_TEMP_FILES;
TABLESPACE_NAME
---------------
TEMP

SQL>alter tablespace temp offline;
alter tablespace temp offline
*
ERROR at line 1:
ORA-12905: default temporary tablespace cannot be brought OFFLINE

SQL>drop tablespace temp including contents and datafiles;
drop tablespace TEMP including contents and datafiles
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace

In order to solve this problem you must assign a new default temporary tablespace to the database. To do this, create a new temporary tablespace (temp1 here) and then make it the database default.

To create a new one,

SQL>create temporary tablespace temp1 tempfile '/opt/nit/temp01.dbf' size 10M;
 Tablespace created.

Make this tablespace the database default temporary tablespace:
SQL>alter database default temporary tablespace temp1;
Database altered.

SQL>drop tablespace temp including contents and datafiles;
Tablespace dropped. 

Recover or Recreate TEMPORARY Tablespace in Oracle

You may discover that your temporary tablespace has been deleted at the OS level, or that it has become corrupted. To get it back you might think about recovering it. The recovery process is simply to restore the temporary file from a backup and roll the file forward using archived log files if you are in archivelog mode.

Another solution is simply to drop the temporary tablespace, create a new one, and assign the new one to the database users.

In order to do that follow the steps here:

1)Find out the temporary datafiles.
 
SQL> select file_name,file_id, tablespace_name from dba_temp_files;
 
FILE_NAME            FILE_ID TABLESPACE_NAME
-------------------- ------- ---------------
/oradata2/temp2.dbf        1 TEMP01
/oradata2/temp.dbf         2 TEMP02
/oradata2/temp3.dbf        4 TEMP03

2)Take the affected temporary files offline.
SQL> Alter database tempfile 1,2,4 offline;
Database altered.

3)Create a new temporary tablespace and make it database default tablespace.
SQL> create temporary tablespace temp tempfile '/opt/nit/temp.dbf' size 10M;
Tablespace created.

SQL> alter database default temporary tablespace temp;
Database altered.

4)Check for users who do not point to this new default temporary tablespace, and make it the default for those users as well.
 
SQL>select temporary_tablespace,username from dba_users
          where temporary_tablespace<>'TEMP';

TEMPORARY_TABLESPACE           USERNAME
------------------------------ ------------------------------
TEMP01                         US1
TEMP02                         US2

5)Explicitly assign temporary tablespace for users US1 and US2.
SQL> alter user us1 temporary tablespace temp;
User altered.

SQL> alter user us2 temporary tablespace temp;
User altered.
6)Drop the old temporary tablespaces.

SQL> drop tablespace temp01;
Tablespace dropped.

SQL> drop tablespace temp02;
Tablespace dropped.

SQL> drop tablespace temp03;
Tablespace dropped.

Default Temporary Tablespace

A default temporary tablespace can be assigned at database creation time, or later by issuing an ALTER DATABASE statement:
 SQL>ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
Each database can be assigned one and only one default temporary tablespace. Using this feature, a temporary tablespace is automatically assigned to users. To see the default temporary tablespace for a database:
SQL>SELECT * FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME='DEFAULT_TEMP_TABLESPACE'; 


Space Allocation in a TEMPORARY Tablespace

You can view the allocation and deallocation of space in a temporary tablespace sort segment using the V$SORT_SEGMENT view. The V$SORT_USAGE view identifies the current sort users in those segments.
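For example, queries along these lines show sort segment allocation and the sessions currently using temporary space (the column lists here are abbreviated and may vary slightly by Oracle version):

SQL> SELECT tablespace_name, current_users, total_extents, used_extents, free_extents
     FROM v$sort_segment;

SQL> SELECT username, session_num, tablespace, extents, blocks
     FROM v$sort_usage;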

When a sort operation that uses temporary space completes, allocated extents in the sort segment are not deallocated; they are just marked as free and available for reuse. The DBA_TEMP_FREE_SPACE view displays the total allocated and free space in each temporary tablespace.

The DBA_TEMP_FREE_SPACE dictionary view contains information about space usage for each temporary tablespace. The information includes the space allocated and the free space. You can query this view for these statistics using the following command.

SELECT * from DBA_TEMP_FREE_SPACE;
TABLESPACE_NAME TABLESPACE_SIZE ALLOCATED_SPACE FREE_SPACE
--------------- --------------- --------------- ----------
TEMP                  250609664       250609664  249561088

Benefits Of TEMPORARY Tablespace Groups:

A temporary tablespace group has the following benefits:
  • It allows multiple default temporary tablespaces to be specified at the database level.
  • It allows a user to use multiple temporary tablespaces in different sessions at the same time.
  • It allows a single SQL operation to use multiple temporary tablespaces for sorting.
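As a rough sketch (the names temp_a and temp_grp are illustrative; a group comes into existence the first time a temporary tablespace is assigned to it):

SQL> CREATE TEMPORARY TABLESPACE temp_a TEMPFILE '/opt/nit/temp_a.dbf' SIZE 10M
     TABLESPACE GROUP temp_grp;
SQL> ALTER TABLESPACE temp TABLESPACE GROUP temp_grp;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;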

TEMPORARY Tablespace And TEMPFILE

TEMPORARY TABLESPACE
 
Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. A temporary tablespace contains transient data that persists only for the duration of the session. Temporary tablespaces can improve the concurrency of multiple sort operations that do not fit in memory and can improve the efficiency of space management operations during sorts. If you join two large tables and Oracle cannot do the sort in memory, space is allocated in a temporary tablespace for the sort operation. Other SQL operations that might require disk sorting are CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, and so on.
Temporary tablespaces are used to store the following:
  • Intermediate sort results
  • Temporary tables and temporary indexes
  • Temporary LOBs
  • Temporary B-trees
By default, a single temporary tablespace named TEMP is created for each new Oracle Database installation. You can create additional temporary tablespaces with the CREATE TABLESPACE statement. You can assign a temporary tablespace to each database user with the CREATE USER or ALTER USER statement.

SQL>CREATE USER NITIN DEFAULT TABLESPACE tbs TEMPORARY TABLESPACE temp;
SQL>ALTER USER NITIN TEMPORARY TABLESPACE TEMP;

Note that a temporary tablespace cannot contain permanent objects and therefore doesn't need to be backed up.

TEMPFILE

Unlike normal data files, TEMPFILEs are not fully initialised (sparse). When you create a TEMPFILE, Oracle only writes to the header and last block of the file. This is why it is much quicker to create a TEMPFILE than to create a normal database file.

TEMPFILEs are not recorded in the database's control file. This implies that one can just recreate them whenever you restore the database, or after deleting them by accident. This opens interesting possibilities like having different TEMPFILE configurations between permanent and standby databases, or configure TEMPFILEs to be local instead of shared in a RAC environment.

One cannot remove datafiles from a tablespace until the entire tablespace is dropped. A TEMPFILE, however, can be removed from the database directly. Look at this example, which first adds a tempfile:

SQL>ALTER TABLESPACE temp ADD TEMPFILE '/opt/nitin/temp.dbf' SIZE 100M;
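To remove it again (in Oracle 10g and later), the tempfile can be dropped directly, using the same file added above:

SQL>ALTER DATABASE TEMPFILE '/opt/nitin/temp.dbf' DROP INCLUDING DATAFILES;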

Refresh Materialized View

Oracle maintains the data in materialized views by refreshing them after changes are made to their master tables. The refresh method can be incremental (fast refresh) or complete. For materialized views that use the fast refresh method, a materialized view log or direct loader log keeps a record of changes to the master tables.

Materialized views can be refreshed either on demand or at regular time intervals. Alternatively, materialized views in the same database as their master tables can be refreshed whenever a transaction commits its changes to the master tables.
Materialized View Logs

A materialized view log is a schema object that records changes to a master table's data so that a materialized view defined on the master table can be refreshed incrementally.

Each materialized view log is associated with a single master table. The materialized view log resides in the same database and schema as its master table.
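A minimal sketch of creating such a log, assuming the master table TAB has a primary key; a fast-refreshable materialized view like the one below requires it:

SQL>CREATE MATERIALIZED VIEW LOG ON TAB;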

CREATE MATERIALIZED VIEW V1 REFRESH FAST FOR UPDATE AS
SELECT * FROM TAB; 

Materialized Views

Materialized views are schema objects that can be used to summarize, compute, replicate, and distribute data. They are suitable in various computing environments such as data warehousing, decision support, and distributed or mobile computing:
 
In data warehouses, materialized views are used to compute and store aggregated data such as sums and averages. Materialized views in these environments are typically referred to as summaries because they store summarized data. They can also be used to compute joins with or without aggregations. If compatibility is set to Oracle9i or higher, then materialized views can be used for queries that include filter selections.
 
Cost-based optimization can use materialized views to improve query performance by automatically recognizing when a materialized view can and should be used to satisfy a request. The optimizer transparently rewrites the request to use the materialized view. Queries are then directed to the materialized view and not to the underlying detail tables or views.
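As a sketch of enabling query rewrite (the sales_mv, sales, prod_id, and amount names are illustrative):

SQL>ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
SQL>CREATE MATERIALIZED VIEW sales_mv ENABLE QUERY REWRITE AS
    SELECT prod_id, SUM(amount) total_amount FROM sales GROUP BY prod_id;

A query such as SELECT prod_id, SUM(amount) FROM sales GROUP BY prod_id can then be transparently rewritten against SALES_MV by the optimizer.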
 
In distributed environments, materialized views are used to replicate data at distributed sites and synchronize updates done at several sites with conflict resolution methods. The materialized views as replicas provide local access to data that otherwise has to be accessed from remote sites.
In mobile computing environments, materialized views are used to download a subset of data from central servers to mobile clients, with periodic refreshes from the central servers and propagation of updates by clients back to the central servers.

Materialized views are similar to indexes in several ways:
  • They consume storage space.
  • They must be refreshed when the data in their master tables changes.
  • They improve the performance of SQL execution when they are used for query rewrites.
  • Their existence is transparent to SQL applications and users.
Unlike indexes, materialized views can be accessed directly using a SELECT statement. Depending on the types of refresh that are required, they can also be accessed directly in an INSERT, UPDATE, or DELETE statement.

A materialized view can be partitioned. You can define a materialized view on a partitioned table and one or more indexes on the materialized view.

Extents Overview

An extent is a contiguous (side-by-side) set of Oracle data blocks allocated to a segment within a tablespace. The size of an extent is controlled by storage parameters used when you CREATE or ALTER the segment (INITIAL, NEXT, and PCTINCREASE) and by tablespace parameters.
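For instance, a sketch of setting those storage parameters at segment creation (the table name is illustrative; in a locally managed tablespace these values serve only as an initial sizing hint):

SQL> CREATE TABLE emp_hist (empno NUMBER, hire_date DATE)
     STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0);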

When Extents Are Allocated
 
When you create a table or index, Oracle automatically creates an initial extent with a specified number of data blocks under the table's segment.
If the data blocks of a segment's initial extent become full and more space is required to hold new data, Oracle automatically allocates an incremental extent for that segment. An incremental extent is a subsequent extent of the same or greater size than the previously allocated extent in that segment.

 Extent Allocation:-

SQL> SELECT tablespace_name, extent_id, blocks, bytes
  2    FROM user_extents
  3   WHERE segment_name = 'EMP';

TABLESPACE_NAME  EXTENT_ID     BLOCKS      BYTES
--------------- ---------- ---------- ----------
USERS                    0          8      65536
USERS                    1          8      65536


When Extents Are Deallocated

In general, the extents of a segment do not return to the tablespace until you drop the schema object whose data is stored in the segment. Exceptions to this include the following:
 
  • The owner of a table or cluster, or a user with the DELETE ANY privilege, can truncate the table or cluster with a TRUNCATE...DROP STORAGE statement.
  • A database administrator (DBA) can deallocate unused extents using the following SQL syntax:  ALTER TABLE table_name DEALLOCATE UNUSED; 
  • Periodically, Oracle deallocates one or more extents of a rollback segment if it has the OPTIMAL size specified.
When extents are freed, Oracle modifies the bitmap in the datafile (for locally managed tablespaces) or updates the data dictionary (for dictionary managed tablespaces) to reflect the regained extents as available space. Any data in the blocks of freed extents becomes inaccessible.

Row Chaining, Row Migrating, PCTFREE, PCTUSED

Row Chaining  

A row that is too large to fit into an empty data block is known as a chained row. Oracle stores the data for such a row in a chain of data blocks (one or more) reserved for that segment. For example, if you have a 2K block size but your row size is 4K, Oracle will use 3 blocks to store that row in pieces. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG or LONG RAW; any table with a LONG/LONG RAW column will have chained rows, and chaining in these cases cannot be avoided. Any table whose row size exceeds the block size will have chained rows, as will any table with more than 255 columns. You can see whether you have chained rows by looking at the system-wide statistic:

select * from v$sysstat where name like 'table fetch cont%';

Row Migrating

A row that is moved to another block, because an update made it too large to fit in its original block with the other rows there, is known as a migrated row. Oracle tries to find another block with enough free space to hold the entire row. If such a block is available, Oracle moves the entire row to the new block, and keeps the original row piece of the migrated row to point to the new block containing the actual row. The ROWID of a migrated row does not change, and indexes are not updated; they still point to the original row location.

Migration and chaining have negative effects on performance. INSERT and UPDATE statements that cause migration and chaining perform poorly due to the additional processing, and queries that use an index to select migrated or chained rows must perform additional I/Os.
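One common way to find such rows is ANALYZE ... LIST CHAINED ROWS (a sketch; the EMP table name is illustrative, and the CHAINED_ROWS table must first be created with the utlchain.sql script):

SQL> @?/rdbms/admin/utlchain.sql
SQL> ANALYZE TABLE emp LIST CHAINED ROWS;
SQL> SELECT head_rowid FROM chained_rows WHERE table_name = 'EMP';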
 
PCTFREE Parameter

The PCTFREE parameter specifies the minimum percentage of a data block to be reserved as free space. If you use PCTFREE 20, then at least 20% of each data block is reserved as free space. For example, with an 8K data block size (DB_BLOCK_SIZE=8192), PCTFREE 20 reserves 1638 bytes of free space in each data block. This reserved space is used for updates to rows that already exist in the block. You can specify this parameter when issuing a CREATE TABLE statement.

PCTUSED Parameter

This parameter specifies the minimum percentage of a block that can be used for row data plus overhead(data block header, table directory, and row directory) before new rows are added to the block.

After a data block is filled to the limit determined by PCTFREE, Oracle considers the block unavailable for the insertion of new rows until the percentage of that block falls below the parameter PCTUSED.

If we set PCTUSED 40 in the CREATE TABLE statement, then a data block in this table's data segment is considered unavailable for the insertion of any new rows until the amount of used space in the block falls to 39% or less (assuming the block's used space has previously reached the PCTFREE limit).
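A minimal sketch showing both parameters specified together (the table and column names are illustrative; note that PCTUSED is ignored in tablespaces using Automatic Segment Space Management):

SQL> CREATE TABLE test_pct (id NUMBER, val VARCHAR2(100)) PCTFREE 20 PCTUSED 40;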
Related Post:- Row Format

Row Format In DB

Oracle stores each row of a database table containing data for less than 256 columns as one or more row pieces. If an entire row can be inserted into a single data block, then Oracle stores the row as one row piece. If all of a row's data cannot be inserted into a single data block, or an update to an existing row causes the row to outgrow its data block, Oracle stores the row using multiple row pieces. A data block usually contains only one row piece for each row. When Oracle must store a row in more than one row piece, it is chained across multiple blocks.

When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. Each row piece, chained or unchained, contains a row header and data for all or some of the row's columns.
The row header contains information about:
  •     Row pieces
  •     Chaining (for chained row pieces only)
  •     Columns in the row piece
  •     Cluster keys

A row fully contained in one block has at least 3 bytes of row header. After the row header information, each row contains column length and data. The column length requires 1 byte for columns that store 250 bytes or less, or 3 bytes for columns that store more than 250 bytes, and precedes the column data. Space required for column data depends on the datatype. If the datatype of a column is variable length, then the space required to hold a value can grow and shrink with updates to the data.

Oracle Data Block Format

The Format of Oracle Data Block is as follows:
  1. Header 
  2. Table Directory 
  3. Row Directory 
  4. Row Data
  5.  Free Space 

 
Header  
The header contains general block information, such as the block address and the type of segment (for example, data or index).
 
Table Directory

This portion of the data block contains information about the table having rows in this block.
 
Row Directory

This portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area).

After the space has been allocated in the row directory of a data block's overhead, this space is not reclaimed when the row is deleted. Therefore, a block that is currently empty but held up to 50 rows at one time continues to have 100 bytes allocated in the header for the row directory. Oracle reuses this space only when new rows are inserted in the block.
 
Row Data

This portion of the data block contains table or index data. Rows can span blocks.
Related Post:- Row Format
 
Free Space

- Free space is allocated for insertion of new rows and for updates to rows that require additional space.
- In data blocks allocated for the data segment of a table or cluster, or for the index segment of an index, free space can also hold transaction entries.
- A transaction entry is required in a block for each INSERT, UPDATE, DELETE, and SELECT...FOR UPDATE statement accessing one or more rows in the block. 
- In most operating systems, a transaction entry requires approximately 23 bytes.

Oracle Data Block Overview

  • Oracle manages the storage space in the datafiles of a database in units called data blocks.  
  • Data block is the smallest unit of data used by a database.  
  • At the physical, operating system level, all data is stored in bytes. Each operating system has a block size. Oracle requests data in multiples of Oracle data blocks, not operating system blocks.  
  • The standard block size is specified by the initialization parameter DB_BLOCK_SIZE. 
  • You can specify up to four nonstandard block sizes (see the sketch after this list). 
  • Data block sizes should be a multiple of the operating system's block size, within the maximum limit, to avoid unnecessary I/O. 
  • Oracle data blocks are the smallest units of storage that Oracle can use or allocate.
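As a sketch of using a nonstandard block size (names and sizes illustrative; the matching DB_nK_CACHE_SIZE buffer cache must be configured before such a tablespace can be created):

SQL> ALTER SYSTEM SET DB_16K_CACHE_SIZE=16M;
SQL> CREATE TABLESPACE ts_16k DATAFILE '/oradata2/ts_16k01.dbf' SIZE 100M BLOCKSIZE 16K;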
Related Topic: Data Block Format

Rules To Specify Values In An Initialization Parameter

•Parameter within initialization file can be specified in any order.

•If a line starts with a pound sign (#), the rest of the line is ignored; # indicates a comment.

•If the OS is case sensitive, then the value of a filename within an initialization parameter is case sensitive.

•To enter several parameters on one line, use spaces between the parameters. Like,
db_name=dbase sga_target=300M sga_max_size=300M

•For parameters that take multiple values, enter the values enclosed in parentheses and separated by commas. Like,
control_files=('/oradata1/arju/control01.ctl','/oradata1/arju/control02.ctl')

Alternatively you can enter multiple values without parentheses and commas. Like,
control_files='/oradata1/arju/control01.ctl' '/oradata1/arju/control02.ctl'

•You can use the IFILE initialization parameter to embed the contents of another initialization parameter file into the current initialization parameter file.
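For example (path illustrative), a pfile line like the following pulls in a shared parameter file:

ifile=/oradata1/arju/common_params.ora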

•If you specify a parameter value in multiple locations, then the last value read overrides the prior one, since the file is processed sequentially. For example, if my pfile contains values in the following order
sga_target=300M
open_cursors=100
sga_target=400M
then sga_target=400M is in effect.

•Enclose in quotation marks any parameter value that contains a special character.
For example you can specify DB_DOMAIN in any of following ways,

DB_DOMAIN = 'ARJU.ORACLE#.COM'
DB_DOMAIN = ARJU.ORACLE\#.COM


Solution Of Startup Files With An Error ORA-00119 and ORA-00132

Error Description: Database Startup fails with oracle error ORA-00119, ORA-00132
SQL> startup mount
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_DBASE'


Cause of The problem: The tnsnames.ora entry was used for the value of LOCAL_LISTENER, and either the LOCAL_LISTENER entry inside tnsnames.ora was changed or the tnsnames.ora file was moved, so the TNS alias to which the LOCAL_LISTENER parameter points is no longer valid. This causes the entry inside the spfile or pfile to be unresolvable, and the database will not start.

PMON must be able to resolve whatever value the LOCAL_LISTENER or REMOTE_LISTENER parameter is set to. Here LISTENER_DBASE is the name of the local listener.

Remember that LISTENER_DBASE is NOT the listener name reflected in the listener.ora file, but rather an alias stored in the tnsnames.ora file.

Solution of The problem 
A)Correct the tnsnames.ora

i)Determine if the tns alias is good by using tnsping.

SQL> !tnsping LISTENER_DBASE
TNS Ping Utility for Solaris: Version 10.2.0.1.0 - Production on 15-MAY-2008 11:57:06
Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
/oracle/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora
TNS-03505: Failed to resolve name

So it could not find the name inside tnsnames.ora.
ii)Add the LISTENER_DBASE entry in the tnsnames.ora file.
SQL>!vi /oracle/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
LISTENER_DBASE=(DESCRIPTION=(address=(protocol=tcp)(host=neptune)(port=1522)) (sid='dbase')

and run tnsping utility,

SQL> !tnsping LISTENER_DBASE
TNS Ping Utility for Solaris: Version 10.2.0.1.0 - Production on 15-MAY-2008 12:11:40
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/oracle/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION=(address=(protocol=tcp)(host=neptune)(port=1522)) (sid='dbase')
TNS-12533: TNS:illegal ADDRESS parameters

So there is a syntax error in the LISTENER_DBASE alias: a parenthesis is missing. Correct it as follows.

SQL>!vi /oracle/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
LISTENER_DBASE=(DESCRIPTION=(address=(protocol=tcp)(host=neptune)(port=1522))(sid='dbase'))

iii)After correction run tnsping and start the database.
SQL> !tnsping LISTENER_DBASE
TNS Ping Utility for Solaris: Version 10.2.0.1.0 - Production on 15-MAY-2008 12:14:25
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/oracle/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION=(address=(protocol=tcp)(host=neptune)(port=1522))(sid='dbase'))
OK (0 msec)

SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 2019288 bytes
Variable Size 83886120 bytes
Database Buffers 75497472 bytes
Redo Buffers 6369280 bytes
Database mounted.
Database opened.


B)An alternative solution is to remove the LOCAL_LISTENER entry from the parameter file.
1)Create a pfile from the spfile if you don't have an updated pfile.
SQL>CREATE PFILE FROM SPFILE;

2)Open the pfile and remove the LOCAL_LISTENER entry.

$vi /oracle/app/oracle/product/10.2.0/db_1/dbs/initdbase.ora

3)Create the spfile from the pfile.
SQL>CREATE SPFILE FROM PFILE;

4)Start the database.

SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 2019288 bytes
Variable Size 83886120 bytes
Database Buffers 75497472 bytes
Redo Buffers 6369280 bytes
Database mounted.
Database opened.

Recommendation:
------------------
Though you can set the value of LOCAL_LISTENER (or REMOTE_LISTENER, if used) to an alias defined in tnsnames.ora, as in this example, it is not a recommended setting. Instead, use a full address descriptor inside the pfile, or set it dynamically inside the spfile.

To set it dynamically inside the spfile and in memory, log in with DBA privileges and issue:
ALTER SYSTEM set LOCAL_LISTENER="(address=(protocol=tcp)(host=yourhost)(port=yourport))" scope=both sid='instancename';

In my system,
SQL>ALTER SYSTEM set LOCAL_LISTENER="(address=(protocol=tcp)(host=neptune)(port=1522))" scope=both sid='dbase';

SQL> show parameter local

NAME            TYPE    VALUE
--------------- ------- --------------------------------------------------
local_listener  string  (address=(protocol=tcp)(host=neptune)(port=1522))

Solution OF An Error ORA-27102

Scenario of The problem: 
On my computer I have two databases. One database runs smoothly, but the other fails to start whenever I invoke startup. It fails with the error,

SQL> startup
ORA-27102: out of memory
Solaris-AMD64 Error: 22: Invalid argument

Reason of The problem:
The database could not start either because of low free memory on the system or because SGA_MAX_SIZE is set to a high value, so the system could not allocate that much memory. There may be other reasons, such as OS limits on memory usage. As in this case one database is fine and the other fails (both databases run under the same user), I suspect either low free memory on the system or a high SGA_MAX_SIZE value inside the spfile.

Solution of The problem:
1)Set a lower amount of memory in the first database.
You can do it as follows on dbase1,

SQL> alter system set sga_max_size=1600M scope=spfile;
System altered.

SQL> alter system set sga_target=1600M;
System altered.

SQL>shutdown

Now set ORACLE_SID and start the instance.

bash-3.00$ export ORACLE_SID=dupbase
bash-3.00$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Thu May 15 01:34:15 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 171966464 bytes
Fixed Size 2019320 bytes
Variable Size 113246216 bytes
Database Buffers 50331648 bytes
Redo Buffers 6369280 bytes
Database mounted.
Database opened.

Or,
2)Lower the settings of SGA_MAX_SIZE and SGA_TARGET on the 2nd database. To do this, create a pfile from the spfile.
SQL>create pfile from spfile;

And then edit the SGA_MAX_SIZE and SGA_TARGET parameters in the pfile.

And start the database with the pfile.

SQL>STARTUP PFILE='pfile_name';

Later, create the spfile from the pfile,
SQL>CREATE SPFILE from PFILE;

Oracle Initialization Parameter Types

There are different types of initialization parameters in Oracle:

1)Derived Parameters
2)Operating System-Dependent Parameters
3)Variable Parameters

1)Derived Parameters:
As the name indicates these parameter values are calculated from the values of other parameters. Normally, you should not alter values for derived parameters, but if you do, then the value you specify will override the calculated value.

For example, the default value of the SESSIONS parameter is derived from the value of the PROCESSES parameter. If the value of PROCESSES changes, then the default value of SESSIONS changes as well, unless you override it with a specified value.
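As an illustration (values assumed; in Oracle 10g the derived default is SESSIONS = (1.1 * PROCESSES) + 5):

SQL> show parameter processes     -- suppose PROCESSES = 150
SQL> show parameter sessions      -- default SESSIONS = (1.1 * 150) + 5 = 170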

2)Operating System-Dependent Parameters 
The valid values or value ranges of these parameters depend on the host operating system. For example, the parameter DB_BLOCK_SIZE has an operating system-dependent default value.

3)Variable Parameters 
The variable initialization parameters offer the most potential for improving system performance.
Some variable parameters set capacity limits but do not affect performance. For example, when the value of OPEN_CURSORS is 10, a user process attempting to open its eleventh cursor receives an error. Other variable parameters affect performance but do not impose absolute limits.

Oracle Database Configuration Assistant

The Database Configuration Assistant (DBCA) is an Oracle-supplied tool that enables you to create an Oracle database, configure database options for an existing Oracle database, delete an Oracle database, or manage database templates. DBCA is launched automatically by the Oracle Universal Installer, but it can be invoked standalone from the Windows operating system Start menu (under Configuration Assistants) or by entering the following on the UNIX command line:

dbca

The Database Configuration Assistant (DBCA) is a graphical user interface that interacts with the Oracle Universal Installer when you choose to create a database automatically when you install the Oracle 9i Server. The DBCA is a Java-based interface that can be launched from any platform that has access to a Java engine.

During installation of the server software, DBCA is launched by the Universal Installer and can automatically create a starter database if you want it to. You can choose at this point whether to use the DBCA at this time, use it later to assist you with database creation, or create your database manually.
With the DBCA, you can not only create a database, you can also configure database options, delete existing databases, and manage templates.
To launch the DBCA on Unix, type dbca at the Unix prompt. In Windows, you can launch the assistant from the Windows operating system Start menu where it is found under Configuration Assistants.

On a Unix platform, you need to be running an XWindows session either on the terminal or on your PC to run the DBCA product.

When you start the DBCA, you are presented with a screen with four options:

  • Create a Database 
  • Configure Database Options in a Database 
  • Delete a Database 
  • Manage Templates    


Create a Database
This option allows you to create a new database or a database template. The default template allows you to choose to create a database either with or without data files.

Without datafiles: the template contains only the structure of the database, but allows you to specify or change all the database parameters.

With datafiles: the template contains both the structure and the physical datafiles of the database. All the log files and control files are automatically created for the database, but you can add or remove control files and/or log groups. You can also change the destination and name of any or all of the data files. You cannot add or remove datafiles, tablespaces, or rollback segments, and you cannot change initialization parameters.

Whenever you create a database with the DBCA, you need to specify the global database name as well as the unique SID. What's more, you need to decide what added features you want in your resulting database. These added features that you can choose to add to your installation include the following:
  • Oracle Spatial 
  • Oracle Intermedia (or Oracle Text) 
  • Oracle OLAP Services 
  • Example Schemas 
  • Human Resources 
  • Order Entry 
  • Product Media 
  • Sales History 
  • Shipping 
When you are creating a database with the DBCA, you have the option of creating either a typical or a custom database.
A custom database allows you to customize the creation of your database. This option is only for experienced database administrators with advanced knowledge of the database creation process. This advanced knowledge includes experience with the data, control, and redo settings; tablespace and extent sizing; database memory parameters and other initialization parameters; archive log formats and destinations; trace file destinations; and character set values.
A typical database creation with minimal user input allows you to choose the following different types of databases to mimic.

The steps involved in creating a database with DBCA include the following:
  • Create Database Create the database immediately. 
  • Save as a Database Template Save creation parameters as a template and that template is added to the list of available templates. 
  • Generate Database Creation Scripts Save the database creation parameters as a script file for later use. 
  • Determine the type of database that you want to create: 
  1. Online Transaction Processing (OLTP) An OLTP database typically must be capable of processing many thousands of transactions (read, write, update, delete) from many concurrent users every day. Performance, in an OLTP, is in terms of throughput of transactions and availability of data. 
  2. Data Warehouse These databases must be capable of processing a wide variety of queries, which are typically read-only. These range from simple queries that fetch a few records to long-running complex queries that take hours to process. Data Warehouse database performance is measured in terms of response time. 
  3. Multipurpose Many environments require a combination of the two different kinds of processing. A mixed database supports both OLTP and Data Warehousing environments. This is the default database that will be installed using DBCA.
 
Database Configuration Options
The Configure Database option lets you add options to your database that had not previously been configured. These include (if they are available for your installation) JVM, InterMedia, Visual Information Retrieval, Spatial, Oracle OLAP services, advanced replication, and SQL*Plus help.
The Configure Database option is not available in the 9i Real Application Clusters installation.
Additional database configuration options, some shown earlier in Figure 3.2, include the following:
  • Delete a Database Allows you to delete a database. 
  • Manage Templates Allows you to create a template though one of three methods:

1.  From predefined template settings Create a new template from the predefined template settings; add or change any settings, parameters, storage characteristics; or use custom scripts.
2.  From an existing database, structure only Contains the structural information similar to an existing database. The structure, in this case, means all the database options as well (the tablespaces, datafiles, and initialization parameters) specified in the source database. No schemas or data will be a part of the created database.
3.  From an existing database, structure and data Create a database that copies the structure as well as the data of the existing database. User-defined schemas and their data will be a part of the template. You are not allowed to add or remove any data files, tablespaces, or rollback segments.
  • Required Parameters Besides the information already covered, the creation of an Oracle Database and Oracle instance requires some further decisions to be made, particularly if you are choosing the custom database creation option. Figure 3.4 shows you a tabbed page on which you can choose various SGA settings, character sets to define for your database, data block sizing decisions, file locations, and archive log locations for the database.


Choosing database parameters.

  • Archive Parameters Archive redo logs are used for database recovery and often for implementation of a standby database. Running your database in archive log mode enables the archiving of redo log files before those redo files are reused. This is done for the purpose of recovery facilitation.
 It is often not necessary, or wise, to put a Data Warehouse database into archive log mode. Because Data Warehouses are updated infrequently, and typically by means of a bulk SQL*Ldr job, creating logs for restoration purposes in this case is often simply a waste of resources.

  • Data Block Sizing DB_BLOCK_SIZE defines the default database block size, and SORT_AREA_SIZE is the amount of memory used for sorting operations. DB_BLOCK_SIZE can be specified only at database creation time. 
Remember that the only time you can set the DB_BLOCK_SIZE of the database, regardless of the method used to create the database, is at database creation time. If for any reason you determine later that you have chosen an incorrect value for this parameter, the database has to be rebuilt.
  •  File Locations By using the USER_DUMP_DEST, BACKGROUND_DUMP_DEST, and CORE_DUMP_DEST parameters (the udump, bdump, and cdump destinations), you can specify the locations for trace files.
 More information about the Initialization Parameter file can be found in Chapter 4, "Managing an Instance."

  • Database Storage This helps you to specify the storage parameters for the various files in your database creation. You can make custom alterations to the control files (do you want to multiplex?), tablespaces (do you want to have dictionary managed or locally managed?), datafiles, undo segments (do you want the new, and Oracle suggested, Automatic Undo Management, or do you want to manage rollback segments manually?), and redo log groups (how many log groups do you want, where do you want them, and do you want to multiplex them?).

Logminer Fails With ORA-01284, ORA-00308

Problem Description:
Whenever you try to run LogMiner in the mining database (as distinct from the source database: the source database is the one in which the redo or archived redo logs are generated, and the mining database is the one in which you analyze those logs) and add a logfile using DBMS_LOGMNR.ADD_LOGFILE, it generates errors ORA-01284, ORA-00308, and ORA-27047.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/export/home/oracle/o1_mf_1_61856_48637xkh_.arc', -
OPTIONS => DBMS_LOGMNR.NEW);

BEGIN DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/export/home/oracle/o1_mf_1_61856_48637xkh_.arc', OPTIONS => DBMS_LOGMNR.NEW); END;

*
ERROR at line 1:
ORA-01284: file /export/home/oracle/o1_mf_1_61856_48637xkh_.arc cannot be
opened
ORA-00308: cannot open archived log
'/export/home/oracle/o1_mf_1_61856_48637xkh_.arc'
ORA-27047: unable to read the header block of file
Additional information: 2
ORA-06512: at "SYS.DBMS_LOGMNR", line 68
ORA-06512: at line 1

But if you look at the mining database's hard disk, the archived redo log file actually exists.
SQL> !ls -l /export/home/oracle/o1_mf_1_61856_48637xkh_.arc
-rwxrwxrwx 1 oracle oinstall 24671232 Jul 21 00:38 /export/home/oracle/o1_mf_1_61856_48637xkh_.arc

Cause of The Problem:
To be able to run Oracle LogMiner, several requirements must be met; if they are not satisfied, LogMiner will not run successfully. One requirement is that both the source database and the mining database must be running on the same hardware platform.

The database block sizes of the analyzing instance and the log source database must also be the same.

If they are different, LogMiner will not work, and using it will produce errors ORA-01284, ORA-00308, and ORA-27047.

Solution of The problem:
Run the mining database on the same hardware platform as the source database. If you don't have the same platform, mining there is not possible; in that case, use the source database itself for the mining.


How Oracle Logminer Is Used To Analyze Logfiles

Any change to the database is recorded in the online redo logfiles. If your database has archival mode on, the online redo log files are archived, and the archives also contain the database changes. With these logfiles we can analyze any changes in the database.
 
We can say an undo entry provides the values of data stored before a change, and the redo entry provides the values of data stored after a change. Because changes to undo segments are themselves database changes, they also generate redo entries. So we can get the undo information from the online redo logs, and later from the archived logs.

So from the online redo logs and from the archived redo logs we can get database redo and undo information. But online and archived logs have an unpublished format and are not human-readable. With the DBMS_LOGMNR package we can analyze redo log files and get back the undo and redo information in a human-readable format.

Another use of LogMiner is to investigate the database's past. With Flashback Query we can get prior values of records in a table at some point in the past, but this is limited by the UNDO_RETENTION parameter (which is often as short as 30 minutes for an OLTP database). So, to analyze past activity on the database, LogMiner is a good choice.
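For comparison, a Flashback Query sketch (the table name TEST3 is taken from the example later in this post; the 30-minute window assumes sufficient undo retention):

SQL> SELECT * FROM test3 AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);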

In this post I will show you the step-by-step procedure for using LogMiner.

1)Ensure that supplemental logging is enabled at least at the minimal level.
To work with LogMiner, supplemental logging must be on in the source database at a minimum level. By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable; you need to enable it first.

You can check whether supplemental logging is on or off with the following command,
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
--------
NO

To enable it at the minimal level,
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

2)Install the DBMS_LOGMNR package.
The next step is to install the DBMS_LOGMNR package. If it is already installed, skip this step. You can install the package by running the $ORACLE_HOME/rdbms/admin/dbmslm.sql script. If you created your database with DBCA, this script was run automatically, so you can ignore this step. However, if you created the database manually with a CREATE DATABASE ... command, you must run the script before using LogMiner. That is,
SQL>@$ORACLE_HOME/rdbms/admin/dbmslm.sql

3)Grant the EXECUTE_CATALOG_ROLE role.

Grant EXECUTE_CATALOG_ROLE to the user who will perform the mining task. Here the user is ARJU.

SQL>GRANT EXECUTE_CATALOG_ROLE TO ARJU;

4)Create the synonym. ARJU creates a public synonym:

CREATE PUBLIC SYNONYM DBMS_LOGMNR FOR SYS.DBMS_LOGMNR;

All four steps above are needed just once.

5)Specify the scope of the mining.
Now decide which files you will analyze. You may be interested in archived redo log files or online redo log files, based on your scenario. In this post, suppose you have a recent problem in your database, so you are interested in the online redo log files. You can see the current online redo logs by,
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;

LOGFILENAME
--------------------------------------------------------------------------------
/oradata2/data1/dbase/redo01.log
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo3.log

Sometimes, you want to mine the redo log file that was most recently archived.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
--------------------------------------------------------------------------------
/oradata2/flash_recovery_area/DBASE/archivelog/2008_08_03/o1_mf_1_839_49bodr0k_.
arc
With DBMS_LOGMNR.ADD_LOGFILE, specify the files that you want to mine.
Here I have given the online redo log files redo01.log and redo03.log.
SQL>BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo01.log');
DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo03.log');
END;
/

6)Start the LogMiner session and specify a dictionary.
To start the LogMiner session:

BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
/

Using the OPTIONS parameter, we specify that Oracle Database should read the dictionary information from the online catalog to convert object names while starting LogMiner.

7)Check contents from V$LOGMNR_CONTENTS view.
To get information about what DML or DDL happened to table TEST3, and when, we can query V$LOGMNR_CONTENTS:
SQL>select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'TEST3';

USERNAME TIMESTAMP         SEG_TY SEG_NAME   TABLE_SPAC        SID    SERIAL# OPERATION
-------- ----------------- ------ ---------- ---------- ---------- ---------- ---------
         08/03/08 02:57:35 TABLE  TEST3                        153       3272 DDL
         08/03/08 02:57:41 TABLE  TEST3      SYSTEM            153       3272 INSERT
         08/03/08 02:57:55 TABLE  TEST3      SYSTEM            153       3272 UPDATE
         08/03/08 03:01:35 TABLE  TEST3      SYSTEM            153       3272 UPDATE

We can get SQL_UNDO and SQL_REDO information by,

SQL> select sql_undo, sql_redo
from v$logmnr_contents
where table_name = 'TEST3' and OPERATION='UPDATE';

SQL_UNDO                                            SQL_REDO
--------------------------------------------------- ---------------------------------------------------
update "SYS"."TEST3" set "A" = '3' where "A" = '9'  update "SYS"."TEST3" set "A" = '9' where "A" = '3'
  and ROWID = 'AAAOKVAABAAAP8qAAA';                   and ROWID = 'AAAOKVAABAAAP8qAAA';

update "SYS"."TEST3" set "A" = '9' where "A" = '10' update "SYS"."TEST3" set "A" = '10' where "A" = '9'
  and ROWID = 'AAAOKVAABAAAP8qAAA';                   and ROWID = 'AAAOKVAABAAAP8qAAA';

8)End the LogMiner session.

Use the DBMS_LOGMNR.END_LOGMNR procedure.
SQL> BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/

PL/SQL procedure successfully completed.

