
Oracle Unleashed




19

Transaction Processing

Understanding how a transaction begins, executes, and ends, and knowing what happens along each step of the way, are vital parts of making Oracle work for you. This knowledge is helpful not only to system and database administrators, but to Oracle developers as well. Knowing when a transaction is assigned a rollback segment, or how locking occurs in the database, can drastically change the strategy of creating applications or nightly processes.

This chapter covers the life of a transaction, from start to finish.

Sessions Versus Transactions

A transaction is directly related to a session, but it is still considered a separate entity. A session, simply stated, is a single connection to a database instance based upon a username and, optionally, a password. All sessions in a database instance are unique, which means that they have a unique identifier setting them apart from the other users and processes accessing the database. This unique identifier, called a SID, is assigned by the instance and can be reused after a session is ended. The combination of the SID and a session serial number guarantees that no two sessions, even if the SID is reused, are identical.


The serial number is used to uniquely identify the objects being manipulated in a given session, and the combination of the SID and serial number guarantees uniqueness. The serial number is used to ensure that any session-level commands are applied to the correct object in the event that a session is terminated and the SID is reassigned.

A transaction, also in simplified terms, is a specific task, or set of tasks, to be executed against the database. Transactions start with an executable DML statement and end when the statement or multiple statements are all either rolled back or committed to the database, or when a DDL (Data Definition Language) statement is issued during the transaction.

If COMMIT or ROLLBACK statements are issued from the command line, the transaction is said to have been explicitly ended. However, if you issue a DDL command (DROP TABLE, ALTER TABLE, and so on), the previous statements in your transaction will be committed (or rolled back if unable to commit), the transaction will be implicitly ended, and a new transaction will begin and then end.


A DDL statement constitutes an entire transaction, due to the nature of the statements. When a DDL statement is issued, your previous transaction is implicitly ended, a new transaction containing the DDL statement begins, and that transaction is immediately ended as well.
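The implicit commit caused by DDL can be demonstrated from SQL*Plus. This is a minimal sketch; the table and column names are hypothetical:

```sql
-- Assume a hypothetical CUSTOMER table already exists.
INSERT INTO customer (cust_id, name)
VALUES (101, 'Smith');                 -- begins a transaction

-- The DDL below implicitly commits the INSERT above, then runs
-- as its own transaction that begins and ends immediately.
CREATE TABLE temp_customer (
   cust_id NUMBER,
   name    VARCHAR2(30)
);

ROLLBACK;   -- too late: the INSERT was already committed by the CREATE TABLE
```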

To illustrate these rules, assume that you log into your database to modify your customer tables. What you would like to do is enter 100 new customers into your database. You do this by creating a temporary table to hold the new customer information, searching your customer table for duplicates, and updating records only where no duplicates exist. Though this is unlikely, assume that you must update the customer table before checking for duplicate entries. The sequence would look like the steps listed in the following sequence of session and transaction begins and ends, without savepoints.

  1. Connect to SQL*Plus (begin session #1).

  2. Create temporary customer table (begin and end transaction #1).

  3. Insert new customer information into temporary table (begin transaction #2).

  4. Step through entries in temporary table (continue transaction #2).

  5. Update customer table (continue transaction #2).

  6. Check for duplicate entries (continue transaction #2).

    If duplicates exist, roll back entire transaction (end transaction #2).

  7. Repeat steps 4-6 until complete or duplicates are found.

  8. Drop temporary table (end transaction #2, begin and end transaction #3).

  9. Exit from SQL*Plus (end session #1).

Notice how the create-table and drop-table steps (2 and 8) each begin and end a transaction of their own. If you found duplicate entries in your tables, the rollback in step 6 would end transaction #2; the updates made on the next pass through steps 4-6 would begin transaction #3, and step 8 would then end transaction #3 and begin and end transaction #4. Also note that the DROP TABLE in step 8 is a DDL command, so it implicitly ends the current transaction by committing any changes made before beginning its own transaction. Finally, it is important to realize that if you had done another update between steps 8 and 9, the exit from SQL*Plus would have implicitly ended that transaction by issuing a commit before exiting.


The relationship between tables is irrelevant when discussing transaction begins and ends. For example, if you update a set of related tables and then attempt to issue a DDL statement against another set of tables from within the same session, the DDL statement attempts to commit the previous set of changes (your previous transaction), if they have not already been committed or rolled back, and then executes the DDL statement as yet another transaction.

One other way a transaction is implicitly ended is the termination of a session, whether normal or abnormal. When a session ends normally, the instance automatically attempts to commit the current transaction. If that is not possible, or if the session terminates abnormally, the transaction is rolled back.

Commits, Rollbacks, and Savepoints

Although these topics are discussed elsewhere in this book, it is important to note how they affect a given transaction. As mentioned earlier, commits and rollbacks both end a transaction. Commit makes all changes made to the data permanent. Rollback reverses all changes made during the transaction by restoring the previous state of all modified data. With the use of savepoints, the ROLLBACK command can also be used to roll back only a portion of a transaction.

Savepoints were designed to be used as logical stopping points from within a single transaction. They are helpful in splitting up extremely long transactions into smaller portions, and they provide points of recovery along the way. Using savepoints within a transaction enables you to roll back the transaction to any given savepoint as long as a commit has not been issued (which immediately commits all data, erases all savepoints, and ends the transaction). Refer to Chapter 6, "SQL*Plus," to learn more about the SAVEPOINT command as well as how to use ROLLBACK to roll the current transaction back to a specified savepoint.

The following list updates the previously shown sequence with the addition of savepoints. Refer to this example to see how savepoints affect the transaction.

  1. Connect to SQL*Plus (begin session #1).

  2. Create temporary customer table (begin and end transaction #1).

  3. Insert new customer information into temporary table (begin transaction #2).

  4. Step through each entry in the temporary table (continue transaction #2).

  5. Create unique savepoint (continue transaction #2).

  6. Update customer table with information from temporary table (continue transaction #2).

  7. If duplicate customer is found, roll back to savepoint (continue transaction #2).

  8. Repeat steps 4-7 until finished.

  9. Issue commit (end transaction #2).

  10. Drop temporary table (begin and end transaction #3).

  11. Exit from SQL*Plus (end session #1).

Notice how the savepoint enables you to roll back to a point within your current transaction without affecting the previous updates before the savepoint. Anywhere within your procedure, you can roll back to any savepoint or you can roll back the entire transaction. By using the savepoints, you are providing a collection of recovery points that are available to you until you end that transaction. Once the transaction is ended, all savepoints are erased.


Use savepoints to logically break up large commits. For example, if you have a situation in which you must update a large history table before checking for inaccurate data, you could:

1. Complete previous processing.

2. Issue savepoint.

3. Update the history table.

4. Check for inaccurate data.

5. Roll back to the savepoint (if required).

6. Commit and continue with processing if possible.

Multiple savepoints give you even greater flexibility in rolling back portions of long or complex transactions if an error occurs before the completion of the process.

There is no limit to the number of savepoints you can create during any given transaction, but be careful that the ones you do create are logically named in case you must roll back to them.


Once a transaction is ended (that is, committed or rolled back), it is impossible to roll back to a savepoint. At that point, all savepoints are erased and their names can be reused. If you specify a duplicate name for a savepoint within a transaction, the previous savepoint is deleted and a new one is set at the current point in the transaction.
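A minimal SQL*Plus sketch of savepoint behavior; the table, column, and savepoint names are hypothetical:

```sql
UPDATE customer
   SET phone = '555-1212'
 WHERE cust_id = 101;

SAVEPOINT before_history;              -- named recovery point

UPDATE customer_history
   SET phone = '555-1212'
 WHERE cust_id = 101;

-- Undo only the history update; the first UPDATE is untouched
ROLLBACK TO SAVEPOINT before_history;

COMMIT;   -- ends the transaction and erases all savepoints
```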

Transaction Control Statements

Transaction control statements are statements that affect the execution or properties of a transaction, whether it is the management of data or the characteristics of how the transaction executes. The family of transaction control statements includes COMMIT, ROLLBACK, SAVEPOINT, and SET TRANSACTION.

Types of Transactions

Several names are used to identify transactions and their states. Knowing these terms is helpful in reading the Oracle documentation and in interpreting Oracle errors returned during a transaction. These terms cover types of transactions as well as other terms used in identifying them.

Concurrent Transactions

Concurrent transactions are transactions that execute at roughly the same time. Because they started so close to each other, these transactions generally do not see the changes made by one another. Any data that has been updated by one concurrent transaction and requested by another must be read from rollback segments until the transaction requesting the data has completed. This has the potential of leading to the error message Snapshot too old, which is discussed in more detail under the "Assigning Rollback Segments" section of this chapter.

Discrete Transactions

A discrete transaction is used to improve the performance of short transactions. For developers creating custom applications, the procedure BEGIN_DISCRETE_TRANSACTION() changes the steps followed for the duration of the transaction in order to speed its processing. The main differences are as follows:

Because change information is stored directly in the SGA rather than in redo and rollback segments, these transactions bypass a good deal of processing overhead. At first, holding changes only in memory may sound more like a flaw than a feature, but because discrete transactions are short by nature, most changes are held there for only a very brief time.


Discrete transactions will not work as distributed transactions, due to the changes in transaction processing.
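In releases that supply the DBMS_TRANSACTION package, a discrete transaction might be coded as in the following sketch; the table and column names are hypothetical:

```sql
BEGIN
   DBMS_TRANSACTION.BEGIN_DISCRETE_TRANSACTION;

   -- Keep the work short: changes are held in the SGA until commit
   UPDATE account_balance
      SET balance = balance + 100
    WHERE account_id = 1001;

   COMMIT;   -- the discrete transaction must end with a commit
END;
/
```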

Distributed Transactions

Distributed transactions are transactions in which one or more statements manipulate data on two or more nodes, or remote systems, of a distributed database. If a transaction manipulates data on only one node, it is considered a remote transaction. As in a remote transaction, none of the redo information is stored locally.

In-Doubt Transactions

An in-doubt transaction is actually a state of a transaction rather than a type, and it refers to transactions within a distributed database environment. One situation that causes this state arises when an instance involved in a currently running transaction fails: the transaction must be either rolled back or committed, but it is difficult to do either without knowing the state of the transaction in the affected database. In this case, all other instances in the distributed environment mark the transaction as in-doubt. Once the failed instance is restarted, the transaction can be analyzed, and all instances can either commit it or roll it back.

It is possible to force the commit or rollback of a distributed transaction by using either SQL*DBA and doing a Recover In-Doubt Transaction, or the command COMMIT WORK ... FORCE with the local or global transaction ID of the in-doubt transaction. Refer to Chapter 7, "SQL*DBA," or Chapter 6, "SQL*Plus," for further information on how to roll back or commit this transaction.
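A brief sketch of forcing an in-doubt transaction from SQL*Plus; the transaction ID shown is purely illustrative, and real IDs can be found in the DBA_2PC_PENDING view:

```sql
-- List in-doubt transactions and their local transaction IDs
SELECT local_tran_id, state
  FROM dba_2pc_pending;

-- Force the commit of one in-doubt transaction
-- (the ID '1.21.17' is a hypothetical example)
COMMIT WORK FORCE '1.21.17';
```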

Normal Transactions

Normal transaction is a term used to refer to a local (non-remote) transaction. All redo information is stored in the local database, and all data manipulation is done to the same database. This type of transaction is the focus for the discussion on transaction processing.

Read-Only Transactions

Read-only refers to the type of read consistency that is set, or defaulted to, for a given transaction. By default, the level of read consistency is statement level, which is also known as read-write. This means that each consecutive statement in your transaction sees the changes committed to the database before that statement began, regardless of whose transaction committed them, as well as your own uncommitted changes.

By changing the read consistency from statement level to transaction level, you force the current transaction to ignore any changes made to the database by other transactions while it runs, and to view the data as it existed immediately before the transaction started. This mode is helpful if you are executing long-running reports against tables that might change during the duration of the report.

When you create a read-only transaction, two major changes take effect. First, the number of commands available to the read-only transaction is limited. Second, because the process is literally read-only, it does not require additional locks against tables and does not acquire a rollback segment or generate redo entries. This is helpful because it limits the processing overhead associated with normal transactions.

Please refer to the command reference at the end of this chapter for a list of commands to which read-only transactions are limited.
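A minimal sketch of a read-only transaction; the table names are hypothetical:

```sql
-- Must be the first statement of the transaction
SET TRANSACTION READ ONLY;

-- Every query now sees the data as of the moment the
-- transaction began, no matter what other sessions commit
SELECT COUNT(*)    FROM orders;
SELECT SUM(amount) FROM orders;

COMMIT;   -- ends the read-only transaction
```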

Remote Transactions

Remote transactions are transactions containing one or more statements to be executed against a non-local database. These statements must all reference the same node; if they do not, they are considered separate remote transactions, and the instance will split them up. One of the major differences between remote and normal transactions is that redo and rollback information for a remote transaction is stored on the remote database. None of this information is transferred to your local database to be used for recovery.

Read-Consistency

Read-consistency is not a difficult concept to grasp. In short, read-consistency guarantees that the data you are viewing while executing a transaction does not change during that transaction. With read-consistency, if two users are updating the same table at the same time, user1 will not see the changes made by the other user during their transaction. User2, likewise, cannot see any changes committed by user1 until both transactions are complete. If they happen to be working on the same row in the table, this becomes a locking issue instead of read-consistency. A later section discusses locking.

Read-consistency is the major building block that enables multiple users to update, select, and delete from the same tables without having to keep a private copy for each user. When combined with locking techniques, read-consistency provides the foundation for a multi-user database in which users can do similar or identical operations without difficulty.

Take a look at an example of the way read-consistency works in a theoretical telemarketing department. user1 is entering an order for a customer, while user2 is changing customer information. Both users have concurrent transactions (they are executing at the same time), but user1 began their transaction first. Suppose that user2 makes a mistake and changes the phone number for the same customer whose order is being entered. Because user1 began their transaction first, they will always be looking at the "before picture" of the customer data and will see the customer's previous phone number when querying the customer's data. This is true even if user2 commits their changes. Why? Because it is possible that user1's transaction is solely dependent on the data that existed when their transaction began. Imagine the confusion that would result if data could change underneath a transaction that was already making decisions based on that data! It would be nearly impossible to guarantee the coordination and functioning of all processing within the application.

Read-consistency is also a secondary function of rollback segments. Aside from being able to undo changes from a transaction, they also provide other users with a "before picture" of the data being modified by any process. If a transaction must review data that has been modified by a concurrent uncommitted transaction, it must look in the rollback segments to find that data. You can find more information in the section on rollback segments within this chapter.

Steps to Processing a Transaction

Understanding the steps followed during the execution of a transaction can be quite helpful in planning and implementing custom applications. It is also important for the database administrator to know these steps, because they can help in understanding and tuning the database parameters and processes. This discussion covers normal transactions. Other transactions, such as distributed, remote, and discrete transactions, are treated a bit differently, and those differences are documented throughout this chapter. The processing steps follow.

  1. DML/DDL statement is entered.

  2. Rollback segment is assigned or requested.

  3. Statement is optimized.

  4. Optimizer generates an execution plan.

  5. The execution plan is followed to manipulate/return data.

  6. Transaction loops through steps 1-5 until commit, rollback or session termination.

The following sections examine each step individually.

Entering DML/DDL Statements

The issuing of DML or DDL statements can take place in a number of ways, including through SQL*Forms, SQL*Plus, and custom C programs. The rules governing the start and end of transactions are the same no matter how a SQL statement is issued.

Assigning Rollback Segments

Rollback segments are assigned randomly by Oracle at the beginning of each transaction (not session), when the first DML statement is issued, and they are used to roll back the transaction to a specified savepoint or to the beginning of the transaction. The selection of a rollback segment is based on which rollback segments are currently available and how busy each segment is. By default, DDL statements are not assigned rollback segments, due to the nature of DDL commands. They do, however, generate redo entries so that the modifications can be reapplied if the database must be recovered.


In an Oracle7 database, you can examine the number of times a particular rollback segment was requested and was waited for. Check the view STATS$ROLL for the columns TRANS_TBL_WAITS and TRANS_TBL_GETS. If the ratio of WAITS to GETS is high, you may need to increase the number of rollback segments in your database.
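Similar figures are available at any time from the dynamic performance views. This sketch joins V$ROLLNAME to V$ROLLSTAT to show gets and waits per rollback segment:

```sql
SELECT n.name,
       s.gets,
       s.waits,
       ROUND(s.waits / s.gets * 100, 2) AS pct_waits
  FROM v$rollname n,
       v$rollstat s
 WHERE n.usn = s.usn;
```

A consistently high percentage of waits suggests that transactions are contending for rollback segment transaction tables, and that adding segments may help.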

Two types of transactions do not acquire local rollback segments: read-only transactions and remote transactions. Read-only transactions, by their nature, do not modify data, so they do not require a rollback segment at all. Remote transactions do acquire rollback segments, but those segments are allocated on the remote database where the transaction executes. Distributed transactions are really a form of remote transaction and follow the same rule.

There is no limit to the number of rollback segments a user can access throughout a given session, but only one rollback segment will be used at any given time for any transaction. In other words, a transaction will acquire one and only one rollback segment to be used for the duration of the transaction. Once the transaction is complete, the rollback segment is released. Space used in that rollback segment is dictated by the amount of data that is modified.

Long-Running Transactions and Rollback Segment Allocation

Transactions that modify large amounts of data require larger rollback segments. By using the SET TRANSACTION command, you can specify a particular rollback segment to be used by a given transaction. Refer to the section on SET TRANSACTION for a further explanation of how to do this. It is important to note, however, that the SET TRANSACTION command must be the very first statement issued in a transaction. If it is not, an error message is returned.

Once a transaction is completed, the rollback segment is released. This does not mean that the segment's data is overwritten immediately, though. Sometimes other transactions that started before this transaction finished need access to the unmodified data for read-consistency. In this case, the rollback segment entries containing the "before picture" are used. Unfortunately, Oracle does not lock this data into the rollback segment, so the data blocks can be reused if needed. If your rollback segments are too small, you may encounter the error Snapshot too old. If this error occurs, the transaction that received it is forced to roll back. This error implies one of two things: either your rollback segments are too small for the amount of work being done, or a long-running transaction is reading data that concurrent transactions are modifying and committing.

In either situation, a transaction was accessing the before picture of some data that was still in a rollback segment when the system was forced to reclaim that extent to use for a currently executing transaction. Because this before picture is no longer available, the executing transaction cannot continue. You can use three steps (separately or in conjunction with each other) to alleviate this problem:

  1. Increase the size of your rollback segments.

  2. Increase the OPTIMAL size of your rollback segments.

  3. Reschedule your processing so that no process reads from tables that another concurrently running process is updating.


Make sure that your rollback segments are sized according to the largest common transaction that takes place. Create a large rollback segment to accommodate unusually large updates, and use the SET TRANSACTION command to force that transaction to use the larger rollback segment.
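The advice above can be sketched in SQL; the rollback segment name and table are hypothetical:

```sql
-- Direct a large batch update to a large rollback segment
-- (RBS_LARGE is a hypothetical segment created by the DBA).
-- This must be the first statement of the transaction.
SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;

UPDATE order_history
   SET archived = 'Y'
 WHERE order_date < '01-JAN-95';

COMMIT;   -- releases rbs_large for other transactions
```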

Using the Optimizer

Oracle's optimizer is a critical part of the execution of a transaction. There is a high likelihood that a SQL statement can be executed in more than one way. The optimizer is responsible for taking a SQL statement, identifying the most efficient way of executing it, and then returning the data requested.

Optimization can take many steps to complete, depending on the SQL statement. The steps used to execute the SQL statement are called an execution plan. Once the execution plan is completed, it is then followed to provide the desired results (updated or returned data).

Many factors govern how the optimizer creates an execution plan. These factors are based on the type of optimization method the database uses. At the database level, you have two types of optimization: cost-based and rule-based. The database parameter OPTIMIZER_MODE, located in the init.ora parameter file, determines which type of optimization mode your instance will use. The parameter has two possible values:


Two words of wisdom: First, with each version of Oracle that is released, there are more modifications to the way the optimizer makes a statement more efficient. With this in mind, it is important to note that the optimizer in each version may function differently, and each statement passed to the optimizer may perform differently.

Second, although the optimizer makes every attempt to create an execution plan that is optimal in performance, the developer has the true knowledge of the data and its purpose. In some situations, the developer may be able to choose a more efficient means of executing the statement than the optimizer can. Whenever possible, the developer should use the EXPLAIN PLAN option of the database to examine the execution plan provided by the optimizer.
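A sketch of using EXPLAIN PLAN, assuming the PLAN_TABLE has been created with the utlxplan.sql script; the statement ID and query are hypothetical:

```sql
EXPLAIN PLAN
   SET STATEMENT_ID = 'cust_query'
   FOR SELECT * FROM customer WHERE cust_id = 101;

-- Display the plan as an indented tree of steps
SELECT LPAD(' ', 2 * level) || operation || ' ' || options AS step,
       object_name
  FROM plan_table
 WHERE statement_id = 'cust_query'
 START WITH id = 0
CONNECT BY PRIOR id = parent_id;
```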

Cost-Based Analysis

Cost-based analysis is a mode of analyzing SQL statements to find the most efficient way of executing them. When the optimizer is running in cost-based mode, it follows these steps to decide which plan is the best way to execute the statement, unless the developer has provided a hint to use in the execution.

  1. Generate a set of execution plans based on available access paths.

  2. Rank each plan based on estimated elapsed time to complete.

  3. Choose the plan with the lowest ranking (shortest elapsed time).

Cost-based analysis uses statistics generated by the ANALYZE command for tables, indexes, and clusters to estimate the total I/O, CPU, and memory requirements required to run each execution plan. Because the goal of the cost-based approach is to provide maximum throughput, the execution plan with the lowest ranking or lowest estimated I/O, CPU, and memory requirements will be used.
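The statistics the cost-based optimizer relies on are gathered with the ANALYZE command, as in this sketch; the table names are hypothetical:

```sql
-- Gather exact statistics (reads the entire table)
ANALYZE TABLE customer COMPUTE STATISTICS;

-- Or estimate from a sample, which is faster on very large tables
ANALYZE TABLE order_history ESTIMATE STATISTICS SAMPLE 10 PERCENT;
```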

The analysis used to provide the final cost of an execution plan is based on the following data dictionary views:

Rule-Based Analysis

Rule-based analysis rates the execution plans according to the access paths available and the information in Table 19.1. The rule-based approach uses those rankings to provide an overall rating on the execution plan and uses the plan with the lowest ranking. Generally speaking, the lower the rating, the shorter the execution time, though this is not always the case.

Table 19.1. Access path rankings used by the rule-based optimizer.

Ranking   Type of Access
-------   --------------------------------------------------------
1         Single row by ROWID
2         Single row by cluster join
3         Single row by hash cluster key with unique or primary key
4         Single row by unique or primary key
5         Cluster join
6         Hash cluster key
7         Indexed cluster key
8         Composite index
9         Single-column index
10        Bounded range search on indexed columns
11        Unbounded range search on indexed columns
12        Sort-merge join
13        MAX() or MIN() of indexed column
14        ORDER BY on indexed columns
15        Full table scan

Overriding the OPTIMIZER_MODE Parameter

Because the developer can sometimes optimize code more efficiently than the optimizer can, various directives, called hints, can be issued from within the SQL statement to force the optimizer to choose a different method of optimization. This method works at the statement level from within the transaction and affects only the current statement.
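A sketch of a hint embedded in a statement; the table and columns are hypothetical:

```sql
-- The FULL hint forces a full table scan of CUSTOMER,
-- overriding whatever access path the optimizer would choose
SELECT /*+ FULL(c) */ name, phone
  FROM customer c
 WHERE UPPER(name) LIKE 'SM%';
```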

To affect all statements at the session level, the SQL command ALTER SESSION SET OPTIMIZER_GOAL can be used. This command overrides the OPTIMIZER_MODE initialization parameter and forces all statements within the current session to be optimized according to its value. The parameter has four possible values: CHOOSE, ALL_ROWS, FIRST_ROWS, and RULE.

This parameter affects all SQL statements issued from within the session, including functions and stored procedures that are called. OPTIMIZER_MODE is still used for any recursive SQL calls issued by Oracle on behalf of the session, though.
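Switching optimization modes for a session is a one-line command, as in this sketch:

```sql
-- Use rule-based optimization for everything in this session
ALTER SESSION SET OPTIMIZER_GOAL = RULE;

-- Return to letting the instance choose based on available statistics
ALTER SESSION SET OPTIMIZER_GOAL = CHOOSE;
```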

Parsing Statements

A parsed statement is not to be confused with an execution plan of a statement. Whereas an execution plan describes the most efficient way to execute a statement, parsing the statement creates the actual executable form used in retrieving the data. When parsing a statement, the optimizer checks the statement's syntax and semantics, verifies the user's privileges, allocates space in the private SQL area, and searches the shared SQL area for a matching statement.

When checking the syntax and semantics, the instance is verifying that no key words or necessary parameters are missing. If the statement is in correct form, the instance then verifies that the user has the correct privileges required to carry out the execution of the statement. Once these have been verified, space is allocated in the private SQL area for the user's statement. This statement is saved until either it is needed again or the memory space is required to store another parsed statement.

After allocating space in the private SQL area, the instance searches through the shared SQL area for any duplicate statements. If a duplicate statement is found, the executable version of the statement is retrieved from memory and executed by the process, and the private SQL area is pointed to the statement in the shared area. If it is not found, an executable version is created and stored in the private SQL area only.


When parsing a SQL statement, not all characters are treated equally. Uppercase and lowercase letters are considered different characters. Two SQL statements, identical in nature but differing in case (even if by only one character), are treated as separate statements. The "different" statement is parsed and placed in the private SQL area just as a completely different SQL statement would be.

The following two statements are completely different statements, according to the parser:

SELECT * FROM V$LOCK
SELECT * FROM v$lock

This is a major reason behind using packaged procedures: there is only one copy of the procedure to maintain, and all users execute that same code.

Handling Locks

The locking of data rows and/or tables is completely automated and transparent to the user. Once the executable version of the SQL statement is run, Oracle automatically attempts to lock data at the lowest level required. This means that if possible, a row will be locked instead of the entire table. This is dependent solely on how the SQL statement was written and what types of access are required (full table scan versus single rows).

A form of manual, or explicit, locking can take place by using the LOCK TABLE command. By default, this command is not necessary in day-to-day processing. Oracle recommends that you allow the database to handle all locking of data whenever possible.
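A sketch of explicit locking; the table name is hypothetical:

```sql
-- Prevent all other DML on the table until this transaction ends
LOCK TABLE customer IN EXCLUSIVE MODE;

-- Or acquire a lock only if it is immediately available,
-- returning an error instead of waiting
LOCK TABLE customer IN SHARE MODE NOWAIT;

COMMIT;   -- a commit or rollback releases the locks
```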


One of the most overlooked causes of locking problems in a database is failing to create indexes on the columns you have used as foreign keys. If the foreign keys are unindexed and you attempt to modify the child table (the table where the foreign keys reside), the database may require a shared lock on the parent table (the table the keys reference) in order to modify the child table. When the foreign keys have corresponding indexes, the database can do row-level locking on the parent table instead.
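A sketch of indexing a foreign key; the parent/child tables are hypothetical:

```sql
-- Hypothetical child table: ORDERS references CUSTOMER
CREATE TABLE orders (
   order_id NUMBER PRIMARY KEY,
   cust_id  NUMBER REFERENCES customer (cust_id)
);

-- Index the foreign key column so DML against ORDERS does not
-- require a shared lock on the whole CUSTOMER table
CREATE INDEX orders_cust_fk_idx ON orders (cust_id);
```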


One other overlooked cause of locking problems is the PCTFREE parameter on the table. If the value of PCTFREE is set too low, many concurrent transactions are executing DML statements against the table, and the data blocks are filled, a transaction may appear to be blocked. Such a transaction is not waiting for a lock; instead, it is waiting either for free space or for the release of an INITRANS entry within the transaction layer of the data block.

Generating Redo Logs

Each transaction that is committed has a corresponding redo log entry generated. This entry records the changes applied to the database files, as well as rollback segment information. These entries in the redo logs are not traceable to a user process. Should the database be brought down by a system or database failure, you can use these redo logs to reconstruct the database files to a usable state.

Redo log entries are written to the redo log buffer in the SGA. These entries are then written to the online redo logs by the LGWR process. If the instance is running in archive log mode, the redo log files, once filled, are then written to corresponding archived redo log files, which are separate from the data files that hold tables and data. These archived redo logs are the primary recovery method when the online redo logs have been cycled through or corrupted and no longer hold the data needed for recovery.


Although the results of a DDL statement, DROP TABLE for example, have associated redo entries generated for them, they have no rollback information. Therefore, it is not easy to recover from an accidentally dropped table. To recover a dropped table or tablespace that contained data, the tablespace in question must be taken offline, an older version of the data file(s) must be restored from a previous backup, and then archived redo logs must be used to recover the tablespace to a point in time just before the drop command was issued.

Stepping Through the Transaction

From this point, a transaction can take several paths to completion. Most commonly, the transaction is committed, but transactions that are rolled back must be handled as well. The following steps are taken during a commit.

  1. Instance's transaction table marks transaction as complete.

  2. A unique SCN (system change number) is generated.

  3. Redo log entries are written to disk.

  4. Any acquired locks are released.

  5. Transaction is marked as having completed.

Should any of these steps fail, the transaction cannot be committed. Depending on the nature of the error, the transaction will either wait for the problem to be fixed so it can complete the transaction, or it will be rolled back.


The DBWR process is responsible for writing updated data from the buffer cache in the SGA back to the database files. It is possible for a transaction's data to be written to disk even though the transaction has not been committed; uncommitted data is written to keep the buffers in the SGA from filling up, because once they fill, the database cannot continue modifying data until buffers are freed. Writing uncommitted data is not a problem, however. Should a transaction be rolled back, the original data is read from its rollback segment and reapplied to the modified table(s).

The following steps illustrate what must take place if a transaction is rolled back.

  1. All changes are rolled back to the most recent savepoint, which is preserved (or to the beginning of the transaction if no savepoints were specified).

  2. If the rollback targets an earlier savepoint rather than the most recent one, all savepoints created after it are erased.

  3. Locks acquired since the savepoint (or since the start of the transaction) are released.

  4. The transaction continues (if no savepoints were specified, the transaction is ended).

  5. If the transaction is ended, its rollback segment entries are released as well, though no SCN is recorded.
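The rollback steps above can be sketched with a savepoint (the table and savepoint names are hypothetical):

```sql
UPDATE emp SET sal = sal * 1.10 WHERE deptno = 10;
SAVEPOINT raises_applied;

DELETE FROM emp WHERE deptno = 40;

-- Undo only the delete; the salary update and the savepoint survive.
ROLLBACK TO SAVEPOINT raises_applied;

-- A bare ROLLBACK here would instead end the transaction entirely.
COMMIT;
```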

Processing a Remote or Distributed Transaction

The steps required to process remote and distributed transactions are nearly identical to the way normal transactions are processed. The biggest difference is where the statement is parsed, and the instance whose resources are used for processing. The following steps add the logic required for remote and distributed transaction processing.


In order for a remote transaction to work, a networking package, such as SQL*Net Version 1 or 2, must be installed on every system holding a database you want to query or update. Even if all the databases reside on the same physical system, that system must have SQL*Net installed.

Distributed transactions require SQL*Net as well as Oracle's distributed option installed and working.

Once all of these requirements are met, standard SQL*Net connection strings can be used to access the remote database(s). Refer to Chapter 6, "SQL*Plus," for further information regarding SQL*Net connection strings.
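As a sketch, assuming a SQL*Net V2 service alias named sales is defined on the client (the alias, link, and account names here are hypothetical), a remote database can be reached either directly or through a database link:

```sql
-- Connect straight to the remote database:
CONNECT scott/tiger@sales

-- Or create a database link so local statements can reference it:
CREATE DATABASE LINK sales_link
    CONNECT TO scott IDENTIFIED BY tiger
    USING 'sales';

SELECT * FROM customers@sales_link;
```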

  1. DML/DDL statement is entered.

  2. If transaction modifies data in current instance, a rollback segment is assigned.

  3. Statement is broken down into separate statements by references to remote database(s).

  4. Any statements that modify local data are optimized.

  5. Statements referencing remote databases are forwarded to the remote database to be optimized.

  6. On the remote databases, statements that require rollback segments or redo information allocate those resources locally.

  7. The optimizer on each remote database generates an execution plan for received statements.

  8. The execution plans are followed to return data to the database that originally started the remote or distributed transaction.

  9. The local database collects all returned data and completes the remaining processing, such as joins.

  10. Transaction loops through steps 1-9 until commit, rollback, or session termination.

Once again, look at each step individually in the following sections.

Entering DDL/DML Statements

All statements for remote and distributed transactions are entered on a local database, that is, a database where local data resides. You do not need to log directly in to the database whose data will be manipulated in order to issue queries against it; that capability is essentially what defines a remote or distributed transaction.

Assigning Rollback Segments

Just as in a normal transaction, if any part of the transaction's statements modify data on the local database, a rollback segment is assigned to track any changes made to the data.

Breaking Down Statements

Oracle must break down all statements that query or modify remote data in order to send them as a group to the remote database(s). Once the statements are grouped according to remote database, they are sent, via SQL*Net, to their intended destinations.
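To illustrate, consider a hypothetical query joining a local table to one on a remote database reached through a database link named hq. The local portion is optimized in place, while the dept@hq reference is shipped to the remote instance:

```sql
-- EMP is local; DEPT@HQ is resolved on the remote database.
SELECT e.ename, d.dname
FROM   emp     e,
       dept@hq d
WHERE  e.deptno = d.deptno;
```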

Optimizing Local Statements

Just as in a normal transaction, the local statements are optimized, based on either the database parameter OPTIMIZER_MODE or the transaction-level parameter OPTIMIZER_GOAL. Once the execution plan is created and executed, data is returned and held until all data from the remote statements has been received.

Forwarding Remote Commands

All remote commands are forwarded to the intended database before they are optimized or parsed. Once the remote database has received the statement, processing proceeds exactly as it would for a local statement: the statement is parsed, the shared SQL area is searched for an identical parsed representation, the statement is optimized if necessary, and then it is executed.

At this point, all data is returned to the local database user or application. If data is to be compared with other data from the local or another remote database, that action takes place on the local database. The local database is responsible for correlating all returned data to provide the final desired output.


DDL statements (such as DROP TABLE or ALTER TABLE) are not allowed against a remote database. To execute this family of commands, you must log in to the remote database directly or issue a CONNECT command to access it.

Assigning Remote Rollback Segments and Writing Redo Logs

All statements sent to remote databases to update or manipulate data are assigned a rollback segment on the remote database, just as they would be for local data manipulation. The remote database is then responsible for all recovery operations should it fail or should the transaction require a rollback. Remote transactions otherwise behave like normal transactions when a COMMIT or ROLLBACK statement is issued.

Optimizing Remote Statements

Statements sent to remote databases are not parsed by the local database; this allows the remote database's shared SQL area to be searched for an identical statement. Once the statement is parsed, it is either optimized, or the execution plan of the matching parsed statement is reused. Data is then returned to the local database.

Returning Data to the Local Database

As stated earlier, it is the responsibility of the local database, or the database from where the transaction was initiated, to receive data from all remote databases, correlate it, and return only the data that the original statement requested. This can include joins, WHERE and IN clauses, and GROUP BY statements.

Summarizing Remote and Distributed Transactions

Despite the differences in where the bulk of the transaction processing resides, the steps are much the same. When working in a distributed environment, though, complex updates and error handling involve a number of additional steps. Should a transaction be abnormally terminated, or should an instance in the distributed environment go down, extra work is needed to decide whether a distributed transaction should be committed or rolled back. Refer to more in-depth documentation on two-phase commit to learn exactly how Oracle deals with a downed instance or terminated session in a distributed environment.

The SET TRANSACTION Command Reference

SET TRANSACTION is used to alter the current transaction's properties. There are three options available, and the command must be the first command in your transaction. If you have already issued commands in the current transaction, you must either issue a COMMIT or ROLLBACK before the SET TRANSACTION command will be accepted.

SET TRANSACTION has the following options:

READ ONLY Option

READ ONLY starts a read-only transaction in which no updates can be performed on any tables in the database. It also sets the read-consistency level to transaction level, so that all data viewed is a snapshot of the data as it existed when the transaction first started. This option is helpful for sessions that will only query data, because the processing overhead for this type of transaction is smaller than that of a normal transaction. It is also helpful for transactions, such as reports, that require a snapshot in time of the current data. For long-running queries of this type, though, processing overhead may rise, because other transactions that modify the data force the reporting transaction to search through rollback segments for the original unmodified values.

In a read-only transaction, the command set is limited to five groups of commands:

  1. SELECT statements (without the FOR UPDATE clause)

  2. LOCK TABLE

  3. SET ROLE

  4. ALTER SESSION

  5. ALTER SYSTEM
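A minimal sketch of a read-only reporting transaction (the EMP table is only an example):

```sql
-- Must be the first statement of the transaction.
SET TRANSACTION READ ONLY;

-- Both queries see the data exactly as it was when the
-- transaction began, even if other sessions commit changes.
SELECT COUNT(*) FROM emp;
SELECT SUM(sal) FROM emp;

COMMIT;  -- ends the read-only transaction
```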

READ WRITE Option

This transaction type is the default: users can update and delete as well as query tables, given the appropriate database privileges, and read consistency is set to statement level. No SET TRANSACTION command is required for this option to be in effect, because it is the default.

USE ROLLBACK SEGMENT Option

This option assigns the transaction to a specific rollback segment and is useful for transactions that update large amounts of data and therefore generate larger than normal amounts of rollback information. Such a rollback segment may have been created with larger initial and next extents to prevent the maximum number of extents from being reached. Its OPTIMAL parameter may also have been set higher, or omitted, to prevent extents from being reclaimed and causing read-consistency errors for other concurrent transactions.
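A hedged sketch, assuming a large rollback segment named RBS_BIG has already been created for batch work (the segment and table names are hypothetical):

```sql
-- Direct this transaction's undo records to the large segment.
SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;

-- The bulk delete's rollback information now goes to RBS_BIG,
-- avoiding MAXEXTENTS errors in the smaller default segments.
DELETE FROM audit_history WHERE log_date < SYSDATE - 365;

COMMIT;
```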

Transaction Space

Three parameters affect how transactions work with objects such as snapshots, clusters, indexes, and tables: PCTFREE, INITRANS, and MAXTRANS.

The PCTFREE parameter sets aside a percentage of each data block as work space for the object. This space is used mainly for row growth, such as when columns of expandable data types (VARCHAR2, for example) are updated to larger values. If this value is set too high, the result is wasted space in your data file that cannot be reclaimed without extra work by the database administrator. If the value is too low, the result is migrated (relocated) or chained rows. Chained rows are rows that span more than one data block, which forces the database to perform an additional seek to read the rest of the row.

INITRANS specifies the number of transaction entries initially allocated in each of the object's data blocks; it should reflect the expected number of concurrent transactions per block. For each transaction that will concurrently update a given block, 23 bytes are set aside to track that transaction's changes to the row(s) within the block. If this value is set too low, the database must dynamically allocate additional entries from the free space in the block, which slows down the execution of a transaction. If no free space can be allocated, the transaction waits until space becomes available, and the process may time out if the wait is long enough.

MAXTRANS is the maximum number of concurrent transactions that can update a single data block of the object. If this value is reached, further transactions against that block must wait until the current transactions have completed.
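All three parameters are set when the object is created (or later with ALTER TABLE); the table and values below are purely illustrative, not recommendations:

```sql
CREATE TABLE orders (
    order_id  NUMBER,
    status    VARCHAR2(20)
)
PCTFREE  20      -- reserve 20% of each block for row growth
INITRANS 4       -- preallocate 4 transaction entries per block
MAXTRANS 16;     -- allow at most 16 concurrent transactions per block
```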

Summary

As you can see, knowing the steps that must be taken to process a transaction can greatly affect how a custom application is developed. If a transaction is querying data only, using a read-only transaction can greatly reduce the amount of processing overhead required to run an application. If many users are running the same read-only query, this savings on overhead can be a considerable amount.

Likewise, knowing more about how statements are optimized can save a great deal of time in meeting the goals set for a new application. Because optimization methods play a critical role in meeting those goals, it is imperative that you take them into account.

Overall, knowing transaction processing steps is just plain helpful for administrators and developers alike.
