1. What is an Oracle Instance?
An Oracle database
server consists of an Oracle database and an Oracle instance. Every time a
database is started, a system global area (SGA) is allocated and Oracle
background processes are started. The combination of the background
processes and memory buffers is called an Oracle instance. We can run
multiple instances on the same Oracle Database Server, where each instance
connects to its database.
An Oracle instance includes:
SGA - System or Shared Global Area
Components of SGA:
· DBBC - Database Buffer Cache
· SP - Shared Pool; divided into Library Cache (LC) and Data Dictionary Cache (DDC) or Row Cache
· RLB - Redo Log Buffer
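As a quick illustration, you can check the current size of these SGA components from the v$sgainfo view (a standard dynamic performance view in 10g/11g; this is just a sketch and the component names can vary slightly by version):
SQL> SELECT name, bytes/1024/1024 AS size_mb
     FROM v$sgainfo
     WHERE name IN ('Buffer Cache Size', 'Shared Pool Size', 'Redo Buffers');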
Background Processes (10g/11g database):
Mandatory Processes:
· SMON - System Monitor
· PMON - Process Monitor
· DBWR - Database Writer
· LGWR - Log Writer
· CKPT - Checkpoint
· RECO - Recoverer
· DIAG - Diagnosability (new in 11g)
· VKTM - Virtual Keeper of Time (keeps the "SGA Time" variable current; new in 11g)
Optional Processes:
· ARCn - Archiver
· MMAN - Memory Manager (ASMM)
· MMON - Memory Monitor
· MMNL - Memory Monitor Light (AWR)
and a few more...
TIP: For a complete overview of Database 11g Architecture
check out this poster: Database 11g Architecture Poster [2.74 MB]
List of running processes of a single instance (11g) on Linux:
[oracle@hostname ~]$ top -n 1 -U oracle -c
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
 9181 oracle    15   0  745m 111m 109m S  6.9  7.5  1:11.15 ora_j000_DB1_SID
 9163 oracle    16   0  745m  68m  65m S  5.3  4.6  0:11.95 ora_j001_DB1_SID
10420 oracle    18   0  744m  12m  11m R  3.0  0.8  0:00.09 ora_j002_DB1_SID
 6773 oracle    16   0  748m  40m  34m S  0.7  2.7  0:03.16 ora_dbw0_DB1_SID
 6775 oracle    16   0  759m  34m  33m S  0.7  2.4  0:10.74 ora_lgwr_DB1_SID
 6767 oracle    15   0  744m  13m  11m S  0.3  0.9  0:02.17 ora_psp0_DB1_SID
 6785 oracle    15   0  744m  19m  18m S  0.3  1.3  0:02.66 ora_mmnl_DB1_SID
 6560 oracle    18   0 42048 9348 6788 S  0.0  0.6  0:00.86 tnslsnr LISTENER -inherit
 6755 oracle    15   0  744m  16m  14m S  0.0  1.1  0:02.50 ora_pmon_DB1_SID
 6757 oracle    -2   0  744m  13m  11m S  0.0  0.9  0:04.31 ora_vktm_DB1_SID
 6761 oracle    15   0  744m  13m  11m S  0.0  0.9  0:00.34 ora_gen0_DB1_SID
 6763 oracle    18   0  744m  12m  11m S  0.0  0.9  0:00.53 ora_diag_DB1_SID
 6765 oracle    15   0  744m  19m  18m S  0.0  1.3  0:00.59 ora_dbrm_DB1_SID
 6769 oracle    18   0  744m  16m  14m S  0.0  1.1  0:07.11 ora_dia0_DB1_SID
 6771 oracle    18   0  744m  17m  16m S  0.0  1.2  0:11.13 ora_mman_DB1_SID
 6777 oracle    16   0  744m  16m  14m S  0.0  1.1  0:08.51 ora_ckpt_DB1_SID
 6779 oracle    15   0  748m  87m  84m S  0.0  5.9  0:04.61 ora_smon_DB1_SID
 6781 oracle    18   0  744m  18m  17m R  0.0  1.3  0:00.52 ora_reco_DB1_SID
 6783 oracle    15   0  748m  56m  51m S  0.0  3.8  0:06.01 ora_mmon_DB1_SID
 6787 oracle    15   0  744m  13m  11m S  0.0  0.9  0:00.35 ora_d000_DB1_SID
 6789 oracle    15   0  744m  12m  11m S  0.0  0.8  0:00.31 ora_s000_DB1_SID
 6852 oracle    18   0  744m  14m  13m S  0.0  1.0  0:00.40 ora_qmnc_DB1_SID
 6859 oracle    15   0  744m  25m  23m S  0.0  1.7  0:00.53 ora_q000_DB1_SID
 6864 oracle    18   0  744m  15m  14m S  0.0  1.0  0:00.21 ora_q001_DB1_SID
 6983 oracle    15   0  748m  54m  48m S  0.0  3.7  0:05.40 ora_cjq0_DB1_SID
 7141 oracle    15   0  744m  13m  12m S  0.0  0.9  0:00.26 ora_smco_DB1_SID
 7722 oracle    16   0  753m  58m  55m S  0.0  4.0  0:07.64 oracleDB1_SID (LOCAL=NO)
10254 oracle    15   0  744m  14m  12m S  0.0  1.0  0:00.10 ora_w000_DB1_SID
2. What information is stored in the Control File?
An Oracle database must have at least one control file.
It is a binary file that contains the following information:
· The database name and unique ID
· The timestamp of database creation
· The names and locations of associated datafiles and redo log files
· Tablespace information
· Datafile offline ranges
· Archived log information and history
· Backup set and backup piece information
· Backup datafile and redo log information
· Datafile copy information
· Log records: sequence numbers, SCN range in each log
· RMAN catalog
· Database block corruption information
The location of the control files is specified through the control_files init param:
SYS@DB1_SID SQL> show parameter control_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time        integer     7
control_files                        string      /u01/app/oracle/oradata/DB1_SID/control01.ctl,
                                                 /u01/app/oracle/flash_recovery_area/DB1_SID/control02.ctl
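To confirm what your own control file actually tracks, you can query the v$controlfile_record_section view (a standard dynamic view; shown here only as an illustrative sketch):
SQL> SELECT type, records_total, records_used
     FROM v$controlfile_record_section
     ORDER BY type;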
3. When you start an Oracle DB, which file is accessed first?
Oracle first opens and reads the initialization parameter file - the server parameter file (spfile<SID>.ora), if one exists, otherwise a text PFILE such as init<SID>.ora:
[oracle@hostname ~]$ ls -la $ORACLE_HOME/dbs/initDB1_SID.ora
-rw-r--r-- 1 oracle
oinstall 1023 May 10 19:27 /u01/app/oracle/product/11.2.0/dbs/initDB1_SID.ora
4. What is the job of SMON and PMON processes?
SMON - System Monitor Process - performs recovery after instance failure, monitors temporary segments and extents, cleans up temporary segments, and coalesces free space (a mandatory process that starts by default).
PMON - Process Monitor - recovers the resources of failed processes. In a Shared Server architecture, it monitors and restarts any failed dispatcher or server processes (a mandatory process that starts by default).
[oracle@hostname ~]$ ps -ef | grep -e pmon -e smon | grep -v grep
oracle    6755     1  0 12:59 ?        00:00:05 ora_pmon_DB1_SID
oracle    6779     1  0 12:59 ?        00:00:06 ora_smon_DB1_SID
5. What is Instance Recovery?
When an Oracle instance fails, Oracle performs instance recovery when the associated database is restarted.
Instance recovery occurs
in two steps:
Cache recovery:
Changes being made to a
database are recorded in the database buffer cache.
These changes are also recorded in the online redo log files simultaneously. When enough data has accumulated in the database buffer cache, it is written to the data files.
If an Oracle instance
fails before the data in the database buffer cache are written to data files,
Oracle uses the data recorded in the online redo log files to recover the lost
data when the associated database is re-started.
This process is called
cache recovery.
Transaction recovery:
When a transaction
modifies data in a database, the before image of the modified data is stored in
an undo segment.
The data stored in the
undo segment is used to restore the original values in case a transaction is
rolled back.
At the time of an instance failure, the database may have uncommitted transactions. It is possible that changes made by these uncommitted transactions have already been saved to the data files.
To maintain read
consistency, Oracle rolls back all uncommitted transactions when the associated
database is re-started.
Oracle uses the undo
data stored in undo segments to accomplish this.
This process is called
transaction recovery.
6. What is being written into the Redo Log Files?
Redo log records all
changes made in datafiles.
In the Oracle database,
redo logs comprise files in a proprietary format which log a history of all
changes made to the database. Each redo log file consists of redo records.
A redo record, also called a redo entry, holds a group of
change-vectors, each of which describes or represents a change made to a
single block in the database.
Let's get into this topic a little bit deeper:
Log Writer (LGWR) writes the redo log buffer contents into the redo log files. LGWR does this every three seconds, when the redo log buffer is 1/3 full, and immediately before the Database Writer (DBWn) writes its changed buffers into the datafiles. The
redo log of a database consists of two or more redo log files. The database
requires a minimum of two files to guarantee that one is always available for
writing while the
other is being archived
(if the DB is in ARCHIVELOG mode). LGWR writes to redo log files in a circular
fashion. When the current redo log file fills, LGWR begins writing to the
next available redo log file. When the last available redo log file is
filled, LGWR returns to the first redo log file and writes to it, starting the
cycle again.
Filled redo log files
are available to LGWR for reuse depending on whether archiving is enabled.
If archiving is disabled
(the database is in NOARCHIVELOG mode), a filled redo log file is available
after the changes recorded in it have been written to the datafiles.
If archiving is enabled (the database is in ARCHIVELOG mode), a filled redo log file is available to LGWR after the changes recorded in it have been written to the datafiles and the file has been archived.
Oracle Database uses only one redo log file at a time to store redo records written from the redo log buffer. The redo log file that LGWR is actively writing to is called the current redo log file. Redo log files that are required for instance recovery are called active redo log files. Redo log files that are no longer required for instance recovery are called inactive redo log files.
If the database is in
ARCHIVELOG mode it cannot reuse or overwrite an active online log file until
one of the archiver background processes (ARCn) has archived its contents.
If archiving is disabled (the DB is in NOARCHIVELOG mode), then when the last redo log file is full, LGWR continues by overwriting the first available (inactive) file.
A log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file. However, you can configure log switches to occur at regular intervals, regardless of whether the current redo log file is completely filled. You can also force log switches manually.
Oracle Database assigns
each redo log file a new log sequence number every time a log switch occurs and
LGWR begins writing to it.
When the database
archives redo log files, the archived log retains its log sequence number.
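As a small hedged illustration, the current state of the redo log groups and a manual log switch look like this (the view and commands are standard; the output depends on your configuration):
SQL> SELECT group#, sequence#, status, archived FROM v$log;
SQL> ALTER SYSTEM SWITCH LOGFILE;   -- force a log switch manually
SQL> ALTER SYSTEM CHECKPOINT;       -- helps filled (active) logs become inactive sooner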
7. How do you control number of Datafiles one can have in an Oracle database?
The db_files parameter is a "soft limit" parameter that controls the maximum number of physical OS files that can map to an Oracle instance.
The maxdatafiles parameter is a different, "hard limit" parameter.
When issuing a "create database" command, the value specified for maxdatafiles is stored in the Oracle control files; its default value is 32.
The maximum number of
database files can be set with the init parameter db_files.
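For example (an illustrative sketch; the value 400 is made up), you can check and raise the soft limit - note that db_files is a static parameter, so the change only takes effect after a restart:
SQL> show parameter db_files
SQL> ALTER SYSTEM SET db_files = 400 SCOPE=SPFILE;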
8. How many Maximum Datafiles can there be in Oracle Database?
Regardless of the setting of this parameter, the maximum per database is 65533 (may be less on some operating systems).
Maximum number of
datafiles per tablespace: OS dependent = usually 1022
Limited also by size of
database blocks and by the DB_FILES initialization parameter for a particular
instance
Bigfile tablespaces can
contain only one file, but that file can have up to 4G blocks.
9. What is a Tablespace?
A tablespace is a logical storage unit within an Oracle database.
Tablespace is not
visible in the file system of the machine on which the database resides.
A tablespace, in turn, consists of at least one datafile which, in turn, is physically located in the file system of the server.
A datafile belongs to
exactly one tablespace. Each table, index and so on that is stored in an Oracle
database belongs to a tablespace.
The tablespace builds the bridge between the Oracle database and the filesystem in which the table's or index's data is stored.
There are three types of
tablespaces in Oracle:
· Permanent tablespaces
· Undo tablespaces
· Temporary tablespaces
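A simple illustrative query against the standard dictionary views shows the tablespaces and the datafiles behind them (temporary tablespaces keep their files in dba_temp_files instead):
SQL> SELECT t.tablespace_name, t.contents, f.file_name
     FROM dba_tablespaces t, dba_data_files f
     WHERE f.tablespace_name (+) = t.tablespace_name;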
10. What is the purpose of Redo Log files?
Before Oracle changes
data in a datafile it writes these changes to the redo log.
If something happens to one of the datafiles, a backed up datafile can be restored and the redo that was written since can be applied, which brings the datafile back to the state it had before it became unavailable.
11. Which default Database roles are created when you create a Database?
CONNECT, RESOURCE and DBA are the three default roles. The DBA_ROLES data dictionary view can be used to list all roles of a database and the authentication used for each role.
The following query lists all the roles in the database:
SELECT * FROM DBA_ROLES;
ROLE             PASSWORD
---------------- --------
CONNECT          NO
RESOURCE         NO
DBA              NO
SECURITY_ADMIN   YES
12. What is a Checkpoint?
A checkpoint occurs when
the DBWR (database writer) process writes all modified buffers in the SGA
buffer cache to the database data files.
Data file headers are
also updated with the latest checkpoint SCN, even if the file had no changed
blocks. Checkpoints occur AFTER (not during) every redo log switch and
also at intervals specified by initialization parameters.
Set parameter
LOG_CHECKPOINTS_TO_ALERT=TRUE to observe checkpoint start and end times in the
database alert log.
Checkpoints can be
forced with the ALTER SYSTEM CHECKPOINT; command.
SCN can refer to:
System Change Number - A
number, internal to Oracle that is incremented over time as change vectors are
generated, applied, and written to the Redo log.
System Commit Number - A
number, internal to Oracle that is incremented with each database COMMIT.
Note: System Commit
Numbers and System Change Numbers share the same internal sequence generator.
13. Which Process reads data from Datafiles?
Server Process - there is no background process that reads data from the datafiles or the database buffer cache.
Oracle creates server
processes to handle requests from connected user processes. A server
process communicates with the user process and interacts with Oracle to carry
out requests from the associated user process. For example, if a user
queries some data not already in the database buffers of the SGA, then the
associated server process reads the proper data blocks from the datafiles into
the SGA.
Oracle can be configured
to vary the number of user processes for each server process.
In a dedicated server
configuration, a server process handles requests for a single user process.
A shared server
configuration lets many user processes share a small number of server
processes, minimizing the number of server processes and maximizing the use of
available system resources.
14. Which Process writes data in Datafiles?
Database Writer
background process DBWn (20 possible) writes dirty buffers from the buffer
cache to the data files.
In other words, this
process writes modified blocks permanently to disk.
15. Can you make a Datafile auto extendible. If yes, how?
YES. A Datafile can be
auto extendible.
Here's how to enable
auto extend on a Datafile:
SQL>alter database
datafile '/u01/app/oracle/product/10.2.0/oradata/DBSID/EXAMPLE01.DBF'
autoextend on;
Note: For tablespaces defined with multiple data files (and partitioned table files), only the "last" data file needs the autoextend option.
SQL>spool runts.sql
SQL>select 'alter
database datafile '|| file_name|| ' '|| ' autoextend on;' from dba_data_files;
SQL>@runts
16. What is a Shared Pool?
The shared pool portion of the SGA contains the library cache, the dictionary cache, buffers for parallel execution messages, and control structures. The total size of the shared pool is determined by the initialization parameter SHARED_POOL_SIZE.
The default value of
this parameter is 8MB on 32-bit platforms and 64MB on 64-bit platforms.
Increasing the value of
this parameter increases the amount of memory reserved for the shared pool.
17. What is kept in the Database Buffer Cache?
The database buffer
cache is the portion of the SGA that holds copies of data blocks read from
datafiles.
All user processes
concurrently connected to the instance share access to the database buffer
cache.
18. How many maximum Redo Logfiles one can have in a Database?
Maximum number of logfiles is limited by value of MAXLOGFILES parameter in the CREATE DATABASE statement. Control file can be resized to allow more entries; ultimately an operating system limit. Maximum number of logfiles per group - Unlimited
Consider the parameters
that can limit the number of redo log files before setting up or altering the
configuration of an instance redo log.
The following parameters
limit the number of redo log files that you can add to a database: MAXLOGFILES
& MAXLOGMEMBERS.
The MAXLOGFILES
parameter used in the CREATE DATABASE statement determines the maximum number
of groups of redo log files for each database. Group values can range from
1 to MAXLOGFILES.
When the compatibility
level is set earlier than 10.2.0, the only way to override this upper limit is
to re-create the database or its control file. Therefore, it is important
to consider this limit before creating a database.
When compatibility is
set to 10.2.0 or later, you can exceed the MAXLOGFILES limit, and the control
files expand as needed.
If MAXLOGFILES is not
specified for the CREATE DATABASE statement, then the database uses an
operating system specific default value.
The MAXLOGMEMBERS
parameter used in the CREATE DATABASE statement determines the maximum number
of members for each group. As with MAXLOGFILES, the only way to override this
upper limit is to re-create the database or control file. Therefore, it is
important to consider this limit before creating a database.
If no MAXLOGMEMBERS
parameter is specified for the CREATE DATABASE statement, then the database
uses an operating system default value.
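For illustration, adding another redo log group (within the MAXLOGFILES/MAXLOGMEMBERS limits) uses standard DDL; the group number, paths and size below are made up:
SQL> ALTER DATABASE ADD LOGFILE GROUP 4
       ('/u01/app/oracle/oradata/DB1_SID/redo04a.log',
        '/u02/app/oracle/oradata/DB1_SID/redo04b.log') SIZE 50M;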
19. What is difference between PFile and SPFile?
A PFILE is
a static, text file located in $ORACLE_HOME/dbs - UNIX
An SPFILE (Server
Parameter File) is a persistent server-side binary file that can only be
modified with the "ALTER SYSTEM SET" command.
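A short sketch of moving between the two (file locations assume the default $ORACLE_HOME/dbs; the parameter change is just an example):
SQL> CREATE SPFILE FROM PFILE;                          -- build spfile<SID>.ora from init<SID>.ora
SQL> CREATE PFILE='/tmp/initDB1_SID.ora' FROM SPFILE;   -- dump the binary SPFILE to readable text
SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE;       -- SPFILE-only change, applied at next startup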
20. What is PGA_AGGREGATE_TARGET parameter?
PGA_AGGREGATE_TARGET: specifies the target aggregate PGA memory available to all server processes attached to the instance.
21. Large Pool is used for what?
The large pool is an optional memory area and provides large memory allocations for:
· Session memory for the shared server and the Oracle XA interface (used where transactions interact with more than one database)
· I/O server processes, buffer area
· Oracle backup and restore operations (RMAN)
· User Global Area (UGA) for shared servers
22. What is PCT Increase setting?
PCTINCREASE refers to the percentage by which each next extent (beginning with the third extent) will grow.
The size of each subsequent
extent is equal to the size of the previous extent plus this percentage
increase.
Preventing tablespace
fragmentation
Try to set PCTINCREASE
to 0 or 100. Bizarre values for PCTINCREASE will contribute to fragmentation.
For example if you set
PCTINCREASE to 1 you will see that your extents are going to have weird and
wacky sizes: 100K, 100K, 101K, 102K, etc. Such extents of bizarre size are
rarely re-used in their entirety.
PCTINCREASE of 0 or 100
gives you nice round extent sizes that can easily be reused. Eg. 100K, 100K,
200K, 400K, etc.
Locally Managed tablespaces (available from Oracle 8i onwards) with uniform extent sizes virtually eliminate any tablespace fragmentation.
Note that the number of
extents per segment does not cause any performance issue anymore, unless
they run into thousands and thousands where additional I/O may be required to
fetch the additional blocks where extent maps of the segment are stored.
23. What is PCTFREE and PCTUSED Setting?
PCTFREE is a block storage parameter used to specify how
much space should be left in a database block for future updates.
For example, for
PCTFREE=10, Oracle will keep on adding new rows to a block until it is 90%
full. This leaves 10% for future updates (row expansion).
When using Oracle Advanced
Compression, Oracle will trigger block compression when the PCTFREE is
reached. This eliminates holes created by row deletions and maximizes
contiguous free space in blocks.
See the PCTFREE setting for a table:
SQL> SELECT pct_free FROM user_tables WHERE table_name = 'EMP';
PCT_FREE
----------
10
PCTUSED is a block storage parameter used to specify when Oracle should consider a database block to be empty enough to be added to the freelist. Oracle will only insert new rows in blocks that are enqueued on the freelist.
For example, if PCTUSED=40, Oracle will not add new rows to the block unless enough rows are deleted from the block so that its used space falls below 40%.
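Both settings are given in the segment's storage clause. An illustrative sketch (the table and column names are made up; note that PCTUSED is ignored in ASSM tablespaces, as mentioned in question 49):
SQL> CREATE TABLE emp_hist (
       empno    NUMBER,
       ename    VARCHAR2(30),
       comments VARCHAR2(2000))
     PCTFREE 20    -- keep 20% of each block free for future row growth
     PCTUSED 40;   -- put the block back on the freelist once usage drops below 40%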
24. What is Row Migration and Row Chaining?
Row Migration refers to rows that were moved to other blocks due to an update making them too large to fit into their original blocks.
Oracle will leave a
forwarding pointer in the original block so indexes will still be able to
"find" the row. Note that Oracle does not discriminate between
chained and migrated rows, even though they have different causes. A
chained row is a row that is too large to fit into a single database data
block.
For example, if you use
a 4KB blocksize for your database, and you need to insert a row of 8KB into it,
Oracle will use 3 blocks and store the row in pieces.
Some conditions that will cause row chaining are:
·
Tables whose row size
exceeds the blocksize
·
Tables with long and
long raw columns are prone to having chained rows
·
Tables with more than 255 columns will have chained rows, as Oracle breaks wide tables up into pieces.
Detecting row chaining:
This query will show how many chained (and migrated) rows each table has:
SQL>SELECT owner,
table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;
To see which rows are chained:
SQL>ANALYZE TABLE
tablename LIST CHAINED ROWS;
This will put the rows into the CHAINED_ROWS table, which is created by the utlchain.sql script (located in $ORACLE_HOME/rdbms/admin).
25. What is ORA-01555 - Snapshot Too Old error and how do you avoid it?
The ORA-01555 is
caused by Oracle read consistency mechanism. If you have a long running SQL
that starts at 11:30 AM, Oracle ensures that all rows are as they appeared at
11:30 AM, even if the query runs until noon!
Oracle does this by
reading the "before image" of changed rows from the online undo
segments. If you have lots of updates, long running SQL and too small
UNDO, the ORA-01555 error will appear. ORA-01555 error relates to
insufficient undo storage or a too small value for the undo_retention
parameter:
ORA-01555: snapshot too old: rollback segment number string with name "string" too small
Cause: Rollback records needed by a reader for
consistent read are overwritten by other writers.
Action: If in Automatic Undo Management mode, increase
the setting of UNDO_RETENTION. Otherwise, use larger rollback segments.
You can get an ORA-01555 error with a too-small undo_retention, even with a large undo tablespace.
However, you can set a
super-high value for undo_retention and still get an ORA-01555 error.
The ORA-01555 snapshot
too old error can be addressed by several remedies:
· Re-schedule long-running queries when the system has less DML load
· Increase the size of your rollback segments (undo)
· The ORA-01555 snapshot too old also relates to your setting for automatic undo retention
· Don't fetch between commits
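In Automatic Undo Management the relevant settings can be checked and adjusted like this (the values are only illustrative; UNDOTBS1 is the undo tablespace name used in the sample output later in this post):
SQL> show parameter undo
SQL> ALTER SYSTEM SET undo_retention = 3600 SCOPE=BOTH;   -- keep committed undo for ~1 hour
SQL> SELECT tablespace_name, SUM(bytes)/1024/1024 AS mb
     FROM dba_data_files
     WHERE tablespace_name = 'UNDOTBS1'
     GROUP BY tablespace_name;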
26. What is a Locally Managed Tablespace?
A Locally Managed Tablespace is a tablespace that records extent allocation in the tablespace header.
Each tablespace manages its own free and used space within a bitmap structure stored in one of the tablespace's data files.
Advantages of Locally Managed Tablespaces:
· Eliminates the need for recursive SQL operations against the data dictionary (UET$ and FET$ tables)
· Reduces contention on data dictionary tables (single ST enqueue)
· Locally managed tablespaces eliminate the need to periodically coalesce free space (adjacent free space is tracked automatically)
· Changes to the extent bitmaps do not generate rollback information
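A locally managed tablespace is created with the EXTENT MANAGEMENT LOCAL clause; a hedged example (the path, size and name are made up):
SQL> CREATE TABLESPACE data_lmt
       DATAFILE '/u01/app/oracle/oradata/DB1_SID/data_lmt01.dbf' SIZE 100M
       EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;   -- bitmap-managed, all extents 1M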
27. Can you audit SELECT statements?
YES. But beware: you will need a storage mechanism to hold your SQL SELECT audit trail, and the data volume can exceed the size of your whole database every day.
SQL SELECT auditing can
be accomplished in several ways:
· Oracle audit table command: audit SELECT table by FRED by access;
· Oracle Fine-grained Auditing
In a busy database, the volume of the SELECT audit trail could easily exceed the size of the database every day.
Plus, all data in the
audit trail must also be audited to see who has selected data from the audit
trail.
28. What does DBMS_FGA package do?
The DBMS_FGA package
provides fine-grained security functions. DBMS_FGA is a PL/SQL package
used to define Fine Grain Auditing on objects.
DBMS_FGA Package
Subprograms:
· ADD_POLICY Procedure - Creates an audit policy using the supplied predicate as the audit condition
· DISABLE_POLICY Procedure - Disables an audit policy
· DROP_POLICY Procedure - Drops an audit policy
· ENABLE_POLICY Procedure - Enables an audit policy
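A minimal sketch of defining an FGA policy and reading its audit trail (the schema, table, policy and column names are hypothetical):
SQL> BEGIN
       DBMS_FGA.ADD_POLICY(
         object_schema   => 'SCOTT',
         object_name     => 'EMP',
         policy_name     => 'AUDIT_HIGH_SAL',
         audit_condition => 'SAL > 10000',
         audit_column    => 'SAL',
         statement_types => 'SELECT');
     END;
     /
SQL> SELECT db_user, object_name, sql_text FROM dba_fga_audit_trail;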
29. What is Cost Based Optimization?
The Oracle Cost Based
Optimizer (CBO) is a SQL Query optimizer that uses data statistics to identify
the query plan with lowest cost before execution. The cost is based on the
number of rows in a table, index efficiency, etc.
All applications should be converted to use the Cost Based Optimizer, as the Rule Based Optimizer is no longer supported in Oracle 10g and later releases.
30. How often you should collect statistics for a table?
Analyse if it's necessary!
- Refresh STALE
statistics before the batch processes run but only for tables involved in batch
run,
- Don't do it if you
don't have to.
- The Oracle database has a default scheduled job, "gather_stats_job", that analyzes stats on a daily basis during the maintenance window.
31. How do you collect statistics for a table, schema and Database?
Use the DBMS_STATS package to gather optimizer statistics for a table, a schema, or the whole database.
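Typical calls look like this (a sketch; the schema and table names are hypothetical and the options shown are only examples):
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);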
32. Can you make collection of Statistics for tables automatically?
YES. The Oracle database has a default scheduled job, "gather_stats_job", that analyzes stats on a daily basis during the maintenance window.
There are two scheduled
activities related to the collection of Oracle "statistics":
· AWR statistics: Oracle has an automatic method to collect AWR "snapshots" of data that is used to create elapsed-time performance reports.
· Optimizer statistics: Oracle has an automatic job to collect statistics to help the optimizer make intelligent decisions about the best access method to fetch the desired rows.
This job can be disabled with this command: exec dbms_scheduler.disable('SYS.GATHER_STATS_JOB');
Oracle collects
optimizer statistics for SQL via the default of autostats_target = auto.
33. On which columns you should create Indexes?
In general, you should create an index on a column in any of the following situations:
· The column is queried frequently
· A referential integrity constraint exists on the column
· A UNIQUE key integrity constraint exists on the column
The following list gives
guidelines in choosing columns to index:
· Columns that are used frequently in WHERE clauses
· Columns that are used frequently to join tables
· Columns that are used frequently in ORDER BY clauses
· Columns that have few repeated values, or unique values, in the table
34. What type of Indexes are available in Oracle?
There are many index
types within Oracle:
B*Tree Indexes - the most common indexes in Oracle. They are similar in construct to a binary tree; they provide fast access, by key, to an individual row or range of rows, normally requiring very few reads to find the correct row.
The B*Tree index has
several subtypes:
· Index Organised Tables - A table stored in a B*Tree structure
· B*Tree Cluster Indexes - They are used to index the cluster keys
· Reverse Key Indexes - The bytes in the key are reversed. This is used to stop sequential keys being on the same block, like 999001, 999002, 999003 would be reversed to 100999, 200999, 300999, thus these would be located on different blocks.
· Descending Indexes - They allow data to be sorted from big to small (descending) instead of small to big (ascending).
Bitmap Indexes - With a bitmap index, a single index entry uses a bitmap to point to many rows simultaneously. They are used with low-cardinality data that is mostly read-only. Should be avoided in OLTP systems.
Function Based Indexes - These are B*Tree or bitmap indexes that store the computed result of a function on a row (for example sorted results) - not the column data itself.
Application Domain Indexes - These are indexes you build and store yourself, either in Oracle or outside of Oracle.
interMedia Text Indexes
- This is a specialised index built into Oracle to allow for keyword searching
of large bodies of text.
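For illustration, the main types above are created as follows (the table, column and index names are hypothetical):
SQL> CREATE INDEX emp_ename_ix ON emp(ename);                    -- regular B*Tree
SQL> CREATE BITMAP INDEX veh_make_bix ON vehicles(car_make);     -- bitmap, low cardinality
SQL> CREATE INDEX emp_empno_rev_ix ON emp(empno) REVERSE;        -- reverse key
SQL> CREATE INDEX emp_upper_ename_ix ON emp(UPPER(ename));       -- function-based
SQL> CREATE INDEX emp_hiredate_desc_ix ON emp(hiredate DESC);    -- descending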
35. What is B-Tree Index?
A B-Tree index is a data
structure in the form of a tree, but it is a tree of database blocks, not rows.
Note: "B" is not for binary; it's
balanced.
36. A table has only a few rows; should you create indexes on this table?
Small tables do not require indexes; if a query is taking too long, then the table might have grown from small to large.
You can create an index
on any column; however, if the column is not used in any of these
situations, creating an index on the column does not increase performance
and the index takes up resources unnecessarily.
37. A column has many repeated values. Which type of index should you create on this column, if you have to?
For example, assume
there is a motor vehicle database with numerous low-cardinality columns such as
car_color, car_make, car_model, and car_year. Each column contains less
than 100 distinct values by themselves, and a b-tree index would be fairly
useless in a database of 20 million vehicles.
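In a mostly read-only reporting environment a bitmap index is the usual answer for such low-cardinality columns (consistent with the bitmap index description in question 34); an illustrative sketch on the hypothetical vehicles table above - the optimizer can combine several such bitmaps with AND/OR:
SQL> CREATE BITMAP INDEX veh_color_bix ON vehicles(car_color);
SQL> CREATE BITMAP INDEX veh_year_bix  ON vehicles(car_year);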
38. When should you rebuild indexes?
In 90% of cases - NEVER.
When the data in index
is sparse (lots of holes in index, due to deletes or updates) and your query is
usually range based.
Also, the index blevel is one of the key indicators of performance for SQL queries doing index range scans.
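A quick hedged check before deciding on a rebuild (the index name is hypothetical and the blevel threshold is only a rule of thumb):
SQL> SELECT owner, index_name, blevel FROM dba_indexes WHERE blevel > 3;   -- unusually deep B*Tree
SQL> ANALYZE INDEX emp_ename_ix VALIDATE STRUCTURE;                        -- populates INDEX_STATS
SQL> SELECT name, height, lf_rows, del_lf_rows FROM index_stats;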
39. Can you built indexes online?
YES. You can create and
rebuild indexes online.
This enables you to
update base tables at the same time you are building or rebuilding indexes on
that table.
You can perform DML
operations while the index build is taking place, but DDL operations are not
allowed.
Parallel execution is
not supported when creating or rebuilding an index online.
The following statements
illustrate online index build operations:
CREATE INDEX emp_name ON
emp (mgr, emp1, emp2, emp3) ONLINE;
40. Can you see Execution Plan of a statement?
YES. In many ways, for
example from GUI based tools like TOAD, Oracle SQL Developer.
Configuring AUTOTRACE, a
SQL*Plus facility
AUTOTRACE is a facility
within SQL*Plus to show us the explain plan of the queries we've executed, and
the resources they used.
Once the PLAN_TABLE has been installed in the database, you can control the report by setting the AUTOTRACE system variable.
· SET AUTOTRACE OFF - No AUTOTRACE report is generated. This is the default.
· SET AUTOTRACE ON EXPLAIN - The AUTOTRACE report shows only the optimizer execution path.
· SET AUTOTRACE ON STATISTICS - The AUTOTRACE report shows only the SQL statement execution statistics.
· SET AUTOTRACE ON - The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics.
· SET AUTOTRACE TRACEONLY - Like SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any.
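A minimal usage sketch (the table is hypothetical; the plan and statistics printed will depend on your data):
SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> SELECT * FROM emp WHERE deptno = 10;
SQL> SET AUTOTRACE OFF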
41. A table has been created with the below settings. What will be the size of the 4th extent?
storage (initial 200k
next 200k
minextents 2
maxextents 100
pctincrease 40)
What will be the size of the 4th extent?
"NEXT" specifies, in bytes, the size of the next extent to be allocated to the object.
Percent Increase allows
your segment to grow at an increasing rate.
The first two extents
will be of a size determined by the Initial and Next parameter (200k)
The third extent will be 1 + PCTINCREASE/100 times the second extent (1.4 * 200k = 280k).
AND the fourth extent will be 1 + PCTINCREASE/100 times the third extent (1.4 * 280k = 392k!!!), and so on...
42. What is DB Buffer Cache Advisor?
The Buffer Cache Advisor
provides advice on how to size the Database Buffer Cache to obtain optimal
cache hit ratios.
Member of Performance
Advisors --> Memory Advisor pack.
43. What is STATSPACK tool?
STATSPACK is a performance diagnosis tool provided
by Oracle starting from Oracle 8i and above.
STATSPACK is a diagnosis
tool for instance-wide performance problems; it also supports application
tuning activities by providing data which identifies high-load SQL statements.
Although AWR and ADDM (introduced
in Oracle 10g) provide better statistics than STATSPACK,
users that are not
licensed to use the Enterprise Manager Diagnostic Pack should continue to use
statspack.
More information about
STATSPACK, can be found in file $ORACLE_HOME/rdbms/admin/spdoc.txt.
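Typical STATSPACK usage looks like this (a sketch; the scripts live under $ORACLE_HOME/rdbms/admin and the PERFSTAT schema is created during the install):
SQL> @?/rdbms/admin/spcreate.sql     -- install STATSPACK (creates the PERFSTAT schema)
SQL> EXEC statspack.snap;            -- take a snapshot (run as PERFSTAT)
SQL> EXEC statspack.snap;            -- take another one after some workload
SQL> @?/rdbms/admin/spreport.sql     -- report between two snapshot IDs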
44. Can you change SHARED_POOL_SIZE online?
YES. That's possible.
SQL>alter system set
shared_pool_size=500M scope=both;
System altered.
It's a lot quicker to
bounce the instance when changing this.
45. Can you Redefine a table Online?
Yes you can. In any database system, it is occasionally necessary to modify the logical or physical structure of a table to:
·
Improve the performance
of queries or DML
·
Accommodate application
changes
·
Manage storage
Oracle Database provides
a mechanism to make table structure modifications without significantly
affecting the availability of the table.
The mechanism is called
online table redefinition.
When a table is
redefined online, it is accessible to both queries and DML during much of the
redefinition process.
The table is locked in
the exclusive mode only during a very small window that is independent of the
size of the table and complexity of the redefinition, and that is
completely transparent to users.
Online table
redefinition requires an amount of free space that is approximately equivalent
to the space used by the table being redefined. More space may be required if
new columns are added.
You can perform online
table redefinition with the Enterprise Manager Reorganize Objects wizard or
with the DBMS_REDEFINITION package.
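A condensed sketch of the DBMS_REDEFINITION flow (the schema, table and interim table names are hypothetical; a real redefinition usually also copies dependent objects, e.g. with COPY_TABLE_DEPENDENTS):
SQL> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP');
SQL> -- create SCOTT.EMP_INTERIM with the desired new structure first
SQL> EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');
SQL> EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');
SQL> EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');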
46. Can you assign Priority to users?
YES. This is achievable
with Oracle Resource Manager.
DBMS_RESOURCE_MANAGER is the package to administer the Database Resource Manager.
The
DBMS_RESOURCE_MANAGER package maintains plans, consumer groups, and plan
directives. It also provides semantics so that you may group together changes
to the plan schema.
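A heavily condensed sketch of how this can look (the plan, consumer group and percentage values are hypothetical; a real setup has more directives and also maps users to the groups):
SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
       DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('BATCH_GROUP', 'low priority batch work');
       DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME_PLAN', 'favour interactive work');
       DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
         plan => 'DAYTIME_PLAN', group_or_subplan => 'BATCH_GROUP',
         comment => 'cap batch work', mgmt_p1 => 20);
       DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
         plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
         comment => 'everything else', mgmt_p1 => 80);
       DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
     END;
     /
The plan is then activated with ALTER SYSTEM SET resource_manager_plan = 'DAYTIME_PLAN';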
47. You want users to change their passwords every 2 months. How do you enforce this?
Oracle password security
is implemented via Oracle "profiles" which are assigned to users.
PASSWORD_LIFE_TIME - limits the number of days the same password
can be used for authentication
First, start by creating
security "profile" in Oracle database and then alter the user to
belong to the profile group.
1) creating a profile:
create profile all_users limit
  PASSWORD_LIFE_TIME 60
  PASSWORD_GRACE_TIME 10
  PASSWORD_REUSE_TIME UNLIMITED
  PASSWORD_REUSE_MAX 0
  FAILED_LOGIN_ATTEMPTS 3
  PASSWORD_LOCK_TIME UNLIMITED;
2) Create user and assign user to the all_users profile
SQL>create user chuck
identified by norris profile all_users;
3) To alter a profile parameter later, say to change the password lifetime to three months:
SQL>alter profile all_users limit PASSWORD_LIFE_TIME 90;
48. How do you delete duplicate rows in a table?
There are a few ways to achieve that:
· Using a subquery to delete duplicate rows:
DELETE FROM table_name WHERE rowid NOT IN (SELECT max(rowid) FROM table_name GROUP BY id);
More ways:
· Use RANK to find and remove duplicate table rows
· Use self-join to remove duplicate rows
· Use analytics to detect and remove duplicate rows
· Delete duplicate table rows that contain NULL values
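For example, the analytic approach can be sketched like this (the table and key column names are hypothetical); it keeps one row per id and removes the rest:
SQL> DELETE FROM table_name
     WHERE rowid IN (
       SELECT rid
       FROM (SELECT rowid rid,
                    ROW_NUMBER() OVER (PARTITION BY id ORDER BY rowid) rn
             FROM table_name)
       WHERE rn > 1);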
49. What is Automatic Management of Segment Space setting?
Oracle9i New Feature
Series: Automatic Segment Space Management
Automatic Segment Space
Management (ASSM) introduced in Oracle9i is an easier way of managing space in
a segment using bitmaps.
It eliminates the need for the DBA to set the parameters pctused, freelists, and freelist groups.
ASSM can be specified
only with the locally managed tablespaces (LMT).
Oracle uses bitmaps to
manage the free space. Bitmaps allow Oracle to manage free space more
automatically.
Here is an example:
CREATE TABLESPACE example
  DATAFILE '/oradata/ORA_SID/example01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M
  SEGMENT SPACE MANAGEMENT AUTO;
The storage parameters PCTUSED, FREELISTS and FREELIST GROUPS specified while creating a table are ignored by Oracle on a LMT ASSM tablespace. Oracle does not produce an error.
One huge benefit of
having ASSM is to reduce the “Buffer Busy Waits” you see on segments.
Beware:
Using ASSM can hinder
database DML performance, and most Oracle experts will use manual freelists and
freelist groups.
50. What is the difference between DELETE and TRUNCATE statements?
The DELETE command
is used to remove rows from a table. A WHERE clause can be used to only remove
some rows.
If no WHERE condition is
specified, all rows will be removed. After performing a DELETE operation you
need to COMMIT or ROLLBACK the transaction to make the change permanent or to
undo it.
DELETE will cause all
DELETE triggers on the table to fire.
TRUNCATE removes all rows from a table. A WHERE clause is not permitted. The operation cannot be rolled back and no triggers will be fired.
As such, TRUNCATE is faster and doesn't use as much undo space as a DELETE.
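Side by side (the table name is hypothetical):
SQL> DELETE FROM emp WHERE deptno = 10;   -- removes some rows, generates undo, fires DELETE triggers
SQL> ROLLBACK;                            -- the delete can still be undone
SQL> TRUNCATE TABLE emp;                  -- DDL: removes all rows, implicit commit, no rollback, no triggers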
51. What is COMPRESS and CONSISTENT setting in EXPORT utility?
COMPRESS
Simply: COMPRESS=n - the space allocated in the database for the imported table will be exactly the space required to hold the data.
COMPRESS=y - The INITIAL
extent of the table would be as large as the sum of all the extents allocated
to the table in the original database.
In other words:
The default, COMPRESS=y,
causes Export to flag table data for consolidation into one initial extent upon
import.
If extent sizes are
large (for example, because of the PCTINCREASE parameter), the allocated space
will be larger than the space required to hold the data.
If you specify
COMPRESS=n, Export uses the current storage parameters, including the values of
initial extent size and next extent size.
If you are using locally
managed tablespaces you should always export with COMPRESS=n
CONSISTENT
Default: n. Specifies
whether or not Export uses the SET TRANSACTION READ ONLY statement to ensure
that the data seen by Export is consistent to a single point in time and does
not change during the execution of the exp command.
You should specify
CONSISTENT=y when you anticipate that other applications will be updating the
target data after an export has started.
If you use CONSISTENT=n,
each table is usually exported in a single transaction. However, if a table
contains nested tables, the outer table and each inner table are exported as
separate transactions.
If a table is
partitioned, each partition is exported as a separate transaction.
Therefore, if nested
tables and partitioned tables are being updated by other applications, the data
that is exported could be inconsistent. To minimize this
possibility, export those tables at a time when updates are not being
done.
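On the exp command line these options look like this (a sketch; the username/password, table and file names are made up):
[oracle@hostname ~]$ exp scott/tiger tables=emp file=emp.dmp compress=n consistent=y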
52. What is the difference between Direct Path and Conventional Path loading?
A conventional path load
executes SQL INSERT statements to populate tables in an Oracle database.
A direct path load
eliminates much of the Oracle database overhead by formatting Oracle data
blocks and writing the data blocks directly to the database files.
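With SQL*Loader the choice is made by the DIRECT parameter (the control file name and credentials are hypothetical):
[oracle@hostname ~]$ sqlldr scott/tiger control=emp.ctl direct=false   # conventional path - SQL INSERTs
[oracle@hostname ~]$ sqlldr scott/tiger control=emp.ctl direct=true    # direct path - formats data blocks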
53. Can you disable and enable Primary key?
You can use the ALTER TABLE statement to enable, disable, modify, or drop a constraint.
When the database is
using a UNIQUE or PRIMARY KEY index to enforce a constraint, and constraints
associated with that index are dropped or disabled, the index is dropped,
unless you specify otherwise.
While enabled foreign
keys reference a PRIMARY or UNIQUE key, you cannot disable or drop the PRIMARY
or UNIQUE key constraint or the index.
Disabling Enabled Constraints
The following statements
disable integrity constraints. The second statement specifies that the
associated indexes are to be kept.
ALTER TABLE dept DISABLE CONSTRAINT dname_ukey;
ALTER TABLE dept DISABLE PRIMARY KEY KEEP INDEX, DISABLE UNIQUE (dname, loc) KEEP INDEX;
The following statements enable novalidate disabled integrity constraints:
ALTER TABLE dept ENABLE NOVALIDATE CONSTRAINT dname_ukey;
ALTER TABLE dept ENABLE NOVALIDATE PRIMARY KEY, ENABLE NOVALIDATE UNIQUE (dname, loc);
The following statements enable or validate disabled integrity constraints:
ALTER TABLE dept MODIFY CONSTRAINT dname_key VALIDATE;
ALTER TABLE dept MODIFY PRIMARY KEY ENABLE NOVALIDATE;
The following statements enable disabled integrity constraints:
ALTER TABLE dept ENABLE CONSTRAINT dname_ukey;
ALTER TABLE dept ENABLE PRIMARY KEY, ENABLE UNIQUE (dname, loc);
To disable or drop a UNIQUE key or PRIMARY KEY constraint and all dependent FOREIGN KEY constraints in a single step, use the CASCADE option of the DISABLE or DROP clauses.
For example, the following statement disables a PRIMARY KEY constraint and any FOREIGN KEY constraints that depend on it:
ALTER TABLE dept DISABLE PRIMARY KEY CASCADE;
54. What is an Index Organized Table?
An index-organized table
(IOT) is a type of table that stores data in a B*Tree index
structure. Normal relational tables, called heap-organized tables, store
rows in any order (unsorted). In contrast to this, index-organized tables
store rows in a B-tree index structure that is logically sorted in primary key
order. Unlike normal primary key indexes, which store only the columns included in their definition, IOT indexes store all the columns of the table (the exception to this rule is the overflow area).
Properties and restrictions:
· An IOT must contain a primary key
· Rows are accessed via a logical rowid and not a physical rowid like in heap-organized tables
· An IOT cannot be in a cluster
· An IOT cannot contain a column of LONG data type
· You cannot modify an IOT index property using ALTER INDEX (error ORA-25176); you must use an ALTER TABLE instead.
Advantages of an IOT:
· As an IOT has the structure of an index and stores all the columns of the row, accesses via primary key conditions are faster as they don't need to access the table to get additional column values.
· As an IOT has the structure of an index and is thus sorted in the order of the primary key, accesses of a range of primary key values are also faster.
· As the index and the table are in the same segment, less storage space is needed.
· In addition, as rows are stored in the primary key order, you can further reduce space with key compression.
· As all indexes on an IOT use logical rowids, they will not become unusable if the table is reorganized.
Row overflow area
If some columns of the table are infrequently accessed, it is possible to offload them into another segment named the overflow area. An overflow segment will decrease the size of the main (or top) segment and will increase the performance of statements that do not need to access the columns in the overflow area.
Notes:
The overflow area can contain only columns that are not part of the primary key.
If a row cannot fit in a
block, you must define an overflow area.
Consequently, the
primary key values of an IOT must fit in a single block.
The columns of the table
that are recorded in the overflow segment are defined using the PCTHRESHOLD
and/or INCLUDING options of the OVERFLOW clause (examples on source website).
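An illustrative IOT definition with an overflow segment (the names, threshold and tablespace are made up):
SQL> CREATE TABLE order_items (
       order_id NUMBER,
       line_no  NUMBER,
       notes    VARCHAR2(4000),
       CONSTRAINT order_items_pk PRIMARY KEY (order_id, line_no))
     ORGANIZATION INDEX
     PCTTHRESHOLD 20
     INCLUDING line_no
     OVERFLOW TABLESPACE users;   -- infrequently used columns (notes) spill into the overflow segment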
55. What is a Global Index and Local Index?
Local Index - each partition of a local index is
associated with exactly one partition of the table.
Global Index - global index is associated with multiple
partitions of the table.
Oracle offers two types of global partitioned index:
- Global Range Partitioned
Indexes
- Global Hash Partitioned
Indexes
Global Nonpartitioned Indexes - behave just like a nonpartitioned index.
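A sketch of both on a hypothetical partitioned sales table:
SQL> CREATE INDEX sales_date_lix ON sales(sale_date) LOCAL;   -- one index partition per table partition
SQL> CREATE INDEX sales_cust_gix ON sales(customer_id)
       GLOBAL PARTITION BY RANGE (customer_id)
       (PARTITION p1   VALUES LESS THAN (100000),
        PARTITION pmax VALUES LESS THAN (MAXVALUE));           -- global range-partitioned index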
56. What is the difference between Range Partitioning and Hash Partitioning?
Range Partitioning maps data to partitions based on a range of column values (e.g. a date column)
Hash Partitioning maps
data to partitions based on a hashing algorithm, evenly distributing data
between the partitions.
This is typically used
where ranges aren't appropriate, i.e. customer number, product ID
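For example (the table, column and partition definitions are hypothetical):
SQL> CREATE TABLE sales_range (sale_id NUMBER, sale_date DATE)
       PARTITION BY RANGE (sale_date)
       (PARTITION p2010 VALUES LESS THAN (TO_DATE('01-01-2011','DD-MM-YYYY')),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE));
SQL> CREATE TABLE customers_hash (customer_id NUMBER, name VARCHAR2(100))
       PARTITION BY HASH (customer_id) PARTITIONS 4;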
57. What is difference between Multithreaded/Shared Server and Dedicated Server?
Oracle Database creates server processes to handle the requests of user processes connected to an instance.
A server process can be either of the following:
- A dedicated server
process, which services only one user process
- A shared server
process, which can service multiple user processes
Your database is always enabled to allow dedicated server processes, but you must specifically configure and enable shared server by setting one or more initialization parameters.
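Shared server is switched on with initialization parameters such as these (the values are only illustrative):
SQL> ALTER SYSTEM SET shared_servers = 5;                        -- number of shared server processes
SQL> ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=3)';
SQL> SELECT username, server FROM v$session;                     -- shows DEDICATED, SHARED or NONE per session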
58. Can you import objects from Oracle ver. 7.3 to 9i?
Different versions of the import utility are upwards compatible. This means that one can take an export file created from an old export version, and import it using a later version of the import utility.
Oracle also ships some previous catexpX.sql scripts that can be executed as user SYS enabling older imp/exp versions to work (for backwards compatibility).
For example, one can run $ORACLE_HOME/rdbms/admin/catexp7.sql on an Oracle 8 database to allow the Oracle 7.3 exp/imp utilities to run against an Oracle 8 database.
59. How do you move tables from one tablespace to another tablespace?
There are several methods to do this;
1) export the table,
drop the table, create the table definition in the new
tablespace, and then
import the data (imp ignore=y).
2) Create a new table in the new tablespace with the CREATE TABLE statement AS SELECT all from source table
command:
CREATE TABLE temp_name
TABLESPACE new_tablespace AS SELECT * FROM source_table;
Then drop the original table and rename the temporary table as the original:
DROP TABLE real_table;
RENAME temp_name TO
real_table;
Note: don't forget to rebuild any indexes.
60. How to display how much space is used and free in a tablespace?
Example query to check free and used space per tablespace:
SELECT /* + RULE */
       df.tablespace_name "Tablespace",
       df.bytes / (1024 * 1024) "Size (MB)",
       SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
       NVL(ROUND(SUM(fs.bytes) * 100 / df.bytes), 1) "% Free",
       ROUND((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
  FROM dba_free_space fs,
       (SELECT tablespace_name, SUM(bytes) bytes
          FROM dba_data_files
         GROUP BY tablespace_name) df
 WHERE fs.tablespace_name (+) = df.tablespace_name
 GROUP BY df.tablespace_name, df.bytes
UNION ALL
SELECT /* + RULE */
       df.tablespace_name tspace,
       fs.bytes / (1024 * 1024),
       SUM(df.bytes_free) / (1024 * 1024),
       NVL(ROUND((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
       ROUND((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
  FROM dba_temp_files fs,
       (SELECT tablespace_name, bytes_free, bytes_used
          FROM v$temp_space_header
         GROUP BY tablespace_name, bytes_free, bytes_used) df
 WHERE fs.tablespace_name (+) = df.tablespace_name
 GROUP BY df.tablespace_name, fs.bytes, df.bytes_free, df.bytes_used;
Sample output:
Tablespace                      Size (MB)  Free (MB)     % Free     % Used
------------------------------ ---------- ---------- ---------- ----------
UNDOTBS1 65 17.8125 27 73
EXAMPLE 100 22.625 23 77
USERS 5 1.0625 21 79
TEMP 20 2 10 90
SYSAUX 625.125 54.5 9 91
SYSTEM 700 9.0625 1 99