Product Sun Fire 3800 Server Sun Fire 4800 Server Sun Fire 4810 Server Sun Fire 6800 Server Sun Fire V1280 Server Netra 1280 Server
Bug Id SUNBUG: 4876369
Date of Workaround Release 10-SEP-2003
Date of Resolved Release 04-NOV-2003
Impact
On very rare occasions, the Time of Day (TOD) on Sun Fire 3800/4800/4810/6800, V1280, and Netra 1280 domains may be susceptible to a clock drift or jump. As a result, any functionality that relies upon the System Controller (SC) timer may be inaccurate.
Contributing Factors
This issue can occur in the following releases:
SPARC Platform
* Sun Fire V1280 and Netra 1280 with firmware (ScApp) 5.13.0014 or earlier
* Sun Fire 3800/4800/4810/6800 with firmware (ScApp) 5.12.x
* Sun Fire 3800/4800/4810/6800 with firmware (ScApp) 5.13.x
* Sun Fire 3800/4800/4810/6800 with firmware (ScApp) 5.14.x
* Sun Fire 3800/4800/4810/6800 with firmware (ScApp) 5.15.0, 5.15.1, and 5.15.2
Note: Systems with firmware 5.11.x are not affected by this issue. Use the "showsc -v" command to display the firmware version of the SC.
Symptoms
This issue may occur after 528 days of continuous SC uptime, at which point the TOD within a domain in the system may become random and unstable. The reported intervals have varied, but the behavior is generally that the TOD jumps backwards by anywhere from approximately one hour to as much as one month. The TOD as seen by the SC itself does not jump.
There are no specific messages that would indicate this issue has occurred. It can only be discovered when a domain exhibits unexpected behavior due to the domain TOD changing.
Resolution
This issue is addressed in the following releases:
* Sun Fire V1280 and Netra 1280 with firmware (ScApp) 5.13.0015 (as delivered in patch 113751-05 or later)
* Sun Fire 3800/4800/4810/6800 with firmware (ScApp) 5.15.3 (as delivered in patch 112884-04 or later)
Note: The patch must be added to both system controllers to remedy this issue.
When running the addischtl.sh start script, you see the following message:
Registering Discoverer Preference Repository oadutil.reg:Failed to bind to OAD
Workaround
Set the TMPDIR environment variable to the Oracle home directory of the release 8.0.6 visibroker directory. The following is an example of commands required to make this change:
When attempting to apply Patch 6678700, the following error occurs:
ERROR
----------------------- sqlplus -s APPS/***** AD_TOP\12.0.0\patch\115\sql\adsstats.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
declare
*
ERROR at line 1:
ORA-20000: ORA-20001: invalid column name or duplicate columns/column
groups/expressions in method_opt Error while executing
FND_STATS.GATHER_SCHEMA_STATISTICS package.
ORA-06512: at line 91
ERROR: While running adsstats.sql :
object_name=FII.FII_FIN_ITEM_HIERARCHIES***ORA-20001: invalid column name or duplicate columns/column
groups/expressions in method_opt Error while executing
FND_STATS.GATHER_SCHEMA_STATISTICS package.
ORA-06512: at line 9
Solution
Please apply the following solution:
1. Make sure you have a valid backup of the FND_HISTOGRAM_COLS table (see the example after step 2).
2. Run the following SQL to delete the entry for column 'PARENT_FLEX_VALUE' from FND_HISTOGRAM_COLS:
delete from FND_HISTOGRAM_COLS
where table_name = 'FII_FIN_ITEM_HIERARCHIES'
and column_name = 'PARENT_FLEX_VALUE';
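For step 1, one simple way to take the backup is to copy the table before running the delete; the backup table name below is illustrative only:
-- Illustrative backup copy of FND_HISTOGRAM_COLS; drop it once the fix is verified
create table fnd_histogram_cols_bak as
select * from FND_HISTOGRAM_COLS;
After the delete in step 2, issue a commit and then retry the patch.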
After installing E-Business Suite R12, when trying to access the system via Forms, the following error is displayed:
FRM-92101: Forms Server
Java Exception:
oracle.forms.net.ConnectionException: Forms session <1> failed during startup: no response from runtime process
at oracle.forms.net.ConnectionException.createConnectionException(Unknown Source)
at oracle.forms.net.HTTPNStream.getResponse(Unknown Source)
at oracle.forms.net.HTTPNStream.doFlush(Unknown Source)
at oracle.forms.net.HTTPNStream.flush(Unknown Source)
at java.io.DataOutputStream.flush(Unknown Source)
at oracle.forms.net.HTTPConnection.connect(Unknown Source)
at oracle.forms.engine.FormsDispatcher.initConnection(Unknown Source)
at oracle.forms.engine.FormsDispatcher.init(Unknown Source)
at oracle.forms.engine.Runform.initConnection(Unknown Source)
at oracle.forms.engine.Runform.startRunform(Unknown Source)
at oracle.forms.engine.Main.createRunform(Unknown Source)
at oracle.forms.engine.Main.start(Unknown Source)
at sun.applet.AppletPanel.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
In $LOG_HOME/ora/10.1.3/j2ee//_/application.log, the following error is displayed:
yy/dd/mm
In formsstd.out, the following error is displayed:
$ORACLE_HOME/bin/frmweb: error while loading shared
libraries: libXm.so.2: cannot open shared object file: No such file or directory
Solution
To implement the solution, please execute the following steps:
1. Install the openmotif rpm package.
For example, for Oracle Enterprise Linux 5 or Red Hat Enterprise Linux 5 the missing package is:
openmotif21-2.1.30-11.EL5.i386.rpm.
2. Bounce the services.
3. Retest.
References
NOTE:402310.1 - Oracle Applications Installation and Upgrade Notes Release 12 (12.0.4) for Linux (32-bit)
In R12, when clicking the 'Monitor Requests' button after submitting a request, the screen takes a long time to open. This performance issue can be reproduced when invoking the standard concurrent request submission or View Request page from Oracle Application Framework (OAF) in various modules such as Quoting, Report Manager, etc.
Solution
To implement the solution, please execute the following steps:
1. Download and review the readme and pre-requisites for Patch 9747291.
2. Ensure that you have taken a backup, and apply the patch in a test environment.
3. Confirm the following file versions:
RequestSummarySearchVO.xml 120.14.12010000.3
You can use a command like the following:
strings -a $JAVA_TOP/oracle/apps/fnd/cp/viewreq/server/RequestSummarySearchVO.xml | grep Header
4. Retest the issue.
5. Migrate the solution as appropriate to other environments.
Cause
This can be caused by a failure of the OUI native clone process. If an ORACLE_HOME is already registered in the Global Inventory (whether the purpose is to refresh an existing Target instance, or the original instance was removed from the file system but not from the inventory), then OUI native cloning will assume the instance is already registered and that no further configuration is required. This results in an incomplete target system with files and links still referencing the source instance. The same situation can also arise from any failure of runInstaller during cloning.
Any problems with ouicli.pl will be detailed in the log files:
/appsutil/log/$CONTEXT_NAME/ohclone.log
$INST_TOP/admin/log/ohclone.log
/logs/cloneActions.log
/logs/oraInstall.log
/logs/silentInstall.log
The following extract from ohclone.log shows a typical example of this problem:
SEVERE: OUI-10197:Unable to create a new Oracle Home at .
Oracle Home already exists at this location. Select another location.
INFO: User Selected: Yes/OK
java.io.IOException: OUI-10197:Unable to create a new Oracle Home at
. Oracle Home already exists at this location. Select another
location.
Possible solutions to this are to rerun the native OUI cloning:
Solution 1:
Use this if the Global Inventory is shared amongst multiple instances:
- Run OUI and "De-install" the unwanted ORACLE_HOME
- Recopy the files from Source system
- Reconfigure the Target system using the "perl adcfgclone.pl" scripts.
Solution 2:
Use this if the Global Inventory is local to this ORACLE_HOME, i.e. oraInst.loc "inventory_loc" points to:
Database Tier clone fails with adlnkoh.sh on 64bit machines
When attempting to clone the database tier with 'perl adcfgclone.pl dbtier', adlnkoh.sh fails with the following error:
ERROR
Failed linking target libnmarl on ...
<$ORACLE_HOME>/sysman/lib
cc -o <$ORACLE_HOME>/sysman/lib/emagent
-L<$ORACLE_HOME>/lib32/
-L<$ORACLE_HOME>/sysman/lib32/
-L<$ORACLE_HOME>/lib32/stubs/ `cat
<$ORACLE_HOME>/lib32/sysliblist` -lm `cat
<$ORACLE_HOME>/lib32/sysliblist` -lm -lpthread -lrt -lpthread -Wl,-E
+DD64 <$ORACLE_HOME>/sysman/lib32/s0nmem.o -lclntsh
-L<$ORACLE_HOME>/lib32
-L<$ORACLE_HOME>/sysman/lib32/ -lnmemso -lcore11
-L/app/oracle/EBSCRP2A/db/tech_st/11.1.0/jdk/jre/lib/IA64W/hotspot
-L/app/oracle/EBSCRP2A/db/tech_st/11.1.0/jdk/jre/lib/IA64W -ljava -ljvm -lverify `cat
<$ORACLE_HOME>/lib32/sysliblist` -lm -lpthread -lrt -lpthread -lnsl -lm
`cat <$ORACLE_HOME>/lib32/sysliblist` `cat
<$ORACLE_HOME>/lib32/sysliblist` -lm -lpthread -lrt
cat: Cannot open <$ORACLE_HOME>/lib32/sysliblist: No such file or directory
cat: Cannot open <$ORACLE_HOME>/lib32/sysliblist: No such file or directory
cat: Cannot open <$ORACLE_HOME>/lib32/sysliblist: No such file or directory
cat: Cannot open <$ORACLE_HOME>/lib32/sysliblist: No such file or directory
cat: Cannot open <$ORACLE_HOME>/lib32/sysliblist: No such file or directory
(Bundled) cc: warning 1913: `<$ORACLE_HOME>/sysman/lib32/s0nmem.o' does
not exist or cannot be read
ld: Can't find library for -lclntsh
Fatal error
Cause
The problem is linked to Note 471476.1 "32 Bit Libraries Are Not Installed With Oracle11g (11.1.0.6) For RapidClone"; this was raised in Bug 7602049.
Solution
Apply Patch 7207440 (TXK - 12.0.4 Consolidated Patch 1) to the Source instance and redo the cloning steps.
This patch is already included in 12.0.6.
For application version 11.5.10 onwards, ensure ATG_PF.H is applied to the instance. Also ensure that you are using the latest version of the JDBC driver. You may run the following SQL to get the current JDBC driver version in the system:
Killing the blocking session
===============================
Normally, you should use the KILL USER SESSION menu found in TopSessions.
You can also issue ALTER SYSTEM KILL SESSION 'sid,serial#'; in a SQL*Plus session.
On UNIX or VMS, you can also kill the UNIX/VMS shadow process directly, but this is not recommended.
When killing the shadow process, be careful of shared servers in a multi-threaded environment.
e.g.
kill -9 6246 (shadow process on unix)
stop/id= (PROC SPID=SESS SPID on vms running single task)
To find process information, one can execute this query:
column "ORACLE USER" format a11
column SERIAL# format 9999999
column "OS USER" format a8
select distinct substr(s.username,1,11) "ORACLE USER", p.pid "PROCESS ID",
s.sid "SESSION ID", s.serial#, osuser "OS USER", p.spid "PROC SPID",
s.process "SESS SPID", s.lockwait "LOCK WAIT"
from v$process p, v$session s, v$access a
where a.sid=s.sid and
p.addr=s.paddr and
s.username != 'SYS'
and a.sid in (select l.sid from v$lock l
where (id1,id2) in
(select b.id1, b.id2 from v$lock b where b.id1=l.id1 and
b.id2=l.id2 and b.request>0));
ORACLE USER PROCESS ID SESSION ID SERIAL# OS USER PROC SPID SESS SPID LOCK WAI
----------- ---------- ---------- -------- -------- --------- --------- --------
SCOTT 17 11 3313 oracle8i 6247 6246 801113A4
SCOTT 20 19 3611 oracle8i 6258 6257
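In the sample output above, the row with an empty LOCK WAIT column (SID 19, SERIAL# 3611) is holding the lock, while the other row is waiting on it. A sketch of killing that blocking session from SQL*Plus, substituting the SID and SERIAL# reported in your own environment:
-- Kill the blocking session found by the query above (SID=19, SERIAL#=3611 in the sample)
ALTER SYSTEM KILL SESSION '19,3611';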
Previous releases of Internet Explorer have allowed users to logon to the same Oracle E-Business Suite environment concurrently from the same desktop by launching each application session through a new browser session.
By default, the 'Session Merging' feature within IE8 prevents this functionality from working correctly. This IE8 behavior causes the same session id to be used when accessing the same Oracle E-Business Suite environment, despite opening a new browser window to launch a second session.
This can result in unusual behavior with invalid sessions, causing errors such as "Your login session has become invalid", and failures to launch forms, causing errors such as "FRM-92050: Failed to connect to the server: /forms/formservlet: -1".
To retain the use of multiple sessions to the same Oracle E-Business Suite environment with IE8, use one of the workarounds below to ensure each browser session remains independent:
Open a new explorer window using its own session cookie by selecting 'File -> New Session' from an existing browser window.
Run from the command line adding the -nomerge parameter i.e. Start -> Run -> iexplore -nomerge
Create a new IE8 icon adding the -nomerge parameter i.e.
Start -> Program Files
Right click on the Internet Explorer icon and select Properties
Add -nomerge to the end of the link in the Target field i.e. "C:\Program Files\Internet Explorer\iexplore.exe" -nomerge
Click the OK button to close the window
This icon may then be copied to the desktop if desired
Error: ORA 603
Text: ORACLE server session terminated by fatal error
-------------------------------------------------------------------------------
Cause: An ORACLE server session is in an unrecoverable state.
Action: Log in to ORACLE again so that a new server session will be created.
*** Important: The notes below are for experienced users - See Note:22080.1
Explanation:
A severe error occurred so this session died.
This is a fairly generic error that can be raised in many places
so the error itself is of little use.
Diagnosis:
See the server side trace file and alert log. There is usually another
error on the error stack to show why the session died.
Eg: ORA 1092 - Instance terminated.
The stack trace should show what we were doing when we died, but this is often a side effect of some other session dying.
Symptoms
On Release 12.0, the "Import Standard Purchase Orders" concurrent program fails with the following error:
ERROR
ORA-01422: exact fetch returns more than requested number of rows in Package po.plsql.PO_PDOI_PVT.
Procedure init_sys_parameters.0
ORA-01422: exact fetch returns more than requested number of rows in Package
po.plsql.PO_PDOI_PVT. Procedure init_startup_values.10
ORA-01422: exact fetch returns more than requested number of rows in Package po.plsql.PO_PDOI_PVT.
Procedure start_process.50
User-Defined Exception in Package po.plsql.PO_PDOI_Concurrent. Procedure POXPDOI.30
Steps To Reproduce:
1. Populate the interface table with the PO details.
2. Navigate to Requests -> Run -> Single request.
3. Select 'Import Standard Purchase Orders'.
Solution
To implement the solution, please execute the following steps:
1. Ensure that you have taken a backup of your system before applying the recommended solution.
2. Run the following scripts in a TEST environment first:
Update fnd_concurrent_programs
set multi_org_category = 'S'
where concurrent_program_name='POXPOPDOI';
3. Commit the transaction using 'commit' command.
4. Once the scripts complete, confirm that the data is corrected.
You can use the following SQL to confirm:
Select multi_org_category from fnd_concurrent_programs where concurrent_program_name='POXPOPDOI';
-- should return a value 'S'.
5. After confirming that the data is corrected, run the "Import Standard Purchase Orders" concurrent program.
6. If you are satisfied that the issue is resolved, migrate the solution as appropriate to other environments.
Symptoms
When running Create Accounting using SLA with the profile option FA: Use Workflow Account Generation set to Yes at the Site level, the following error occurs:
ERROR
Error Number Error Message
95353 Subledger Accounting was unable to derive an accounting code combination using the account derivation rule Assets Book Default Account for Offset owned by Oracle. Please review the account derivation rule and make sure it derives a valid accounting flexfield combination for the source values passed for the transaction.
FND logging of the accounting program reveals: 'ERROR: XLA_AP_CCID_NULL'
Solution
1. Alter the XXX Generate Default Account process, including the sub-processes XXX: Generate Book Level Account and XXX: Generate Category Level Account, replacing all NLS Accounting Flexfield names used in them with the English names used in the XXX Accounting Flexfield.
2. Save a copy of the altered process on the desktop (FA_ACCOUNTS.wft) and save it also in the database. In the Account Generation Processes window, choose the XXX Generate Default Account process for the FA Account Generator and save it.
3. Run Generate Accounts.
4. Verify that the default AAD (Application Accounting Definition) is still valid.
5. Run Create Accounting.
6. To apply the solution on other instances, it will only be necessary to load FA_ACCOUNTS.wft into the database and perform steps 2 to 5.
Some services (for example, the Output Post Processor) are listed below:
C_AQCT_SVC - C AQCART Service
Debug_Service - Debug Service
FNDCPOPP - Output Post Processor
WFALSNRSVC - Workflow Agent Listener Service
WFMLRSVC - Workflow Mailer Service
WFWSSVC - Workflow Document Web Services Service
XDP_APPL_SVC - SFM Application Monitoring Service
XDP_CTRL_SVC - SFM Controller Service
XDP_Q_EVENT_SVC - SFM Event Manager Queue Service
XDP_Q_FA_SVC - SFM Fulfillment Actions Queue Service
XDP_Q_FE_READY_SVC - SFM Fulfillment Element Ready Queue Service
XDP_Q_IN_MSG_SVC - SFM Inbound Messages Queue Service
XDP_Q_ORDER_SVC - SFM Order Queue Service
XDP_Q_TIMER_SVC - SFM Timer Queue Service
XDP_Q_WI_SVC - SFM Work Item Queue Service
XDP_SMIT_SVC - SFM SM Interface Test Service
The following symptoms have been identified in regard to this issue.
After a fresh installation of Oracle E-Business Suite Release 12, the Tools -> Copy functionality fails with the following error message:
An error occurred while attempting to establish an Applications File Server connection with the node FNDFS_. There may be a network configuration problem, or the TNS listener on node FNDFS_ may not be running. Please contact your system administrator.
The following aspects have been verified:
a) All the entries in tnsnames.ora are correct
b) Tnsping to all the entries in tnsnames.ora returns successful results
Solution
Ensure that the following profile options have the desired values:
a) "RRA: Enabled"/FS_ENABLED should be set to "Yes"
b) "RRA: Service Prefix"/FS_SVC_PREFIX should be nulled out; no spaces are allowed
After making the changes, log out, log back in, and retest the issue.
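As a quick sanity check from SQL*Plus (a sketch only; the options themselves are normally reviewed and changed through the System Profile Values form), the currently resolved values can be read with the standard FND_PROFILE API:
-- Returns NULL if the profile option is not set in the current context
select fnd_profile.value('FS_ENABLED') "RRA: Enabled",
       fnd_profile.value('FS_SVC_PREFIX') "RRA: Service Prefix"
from dual;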
Oracle Apps Version: 11.5.10.2
Platform: Solaris
This is the product version and platform for our installation of Oracle Apps. This document is also applicable to upgrades to 11.5.10.2 from lower versions. All the information given in this note applies to the Solaris SPARC platform.
Operating System Details:
The Solaris SPARC releases to follow are Solaris 6 (5.6) with its maintenance pack, or Solaris 8 (5.8), 9 (5.9), or 10 (5.10) with the relevant installation and maintenance packs.
Other Required Applications:
1. Oracle Server: 9.2.0
2. Sun swap: match RAM
3. Sun tmp: 10 GB
4. Sun JDK, JRE: JDK 1.3.1_02 & 1.1.8_12 (32bit) 1.1.8_15 (64bit)
5. Sun Patches/ Packages: 32bit/64bit SUNWarc, SUNWbtool, SUNWhea, SUNWlibm,
SUNWlibms,
SUNWmfrun,
SUNWsprot,
SUNWtoo
SUNWi1of and SUNWxwfnt (for Java)
SUNWsprox (for 64bit Oracle) (Not applicable to Solaris 10)
SUNWuiu8,SUNWulcf (if Bug 4025250 occurs)
SUNWxcu4
6. Sun Kernel Settings: SEMMNI 100 ***
SEMMSL 256 ***
SHMMAX 4294967295
SHMMNI 100***
NOEXEC_USER_STACK 1
*** default value in Solaris 10 is already larger
7. Maintenance Tools:
ar
gcc
g++
ld
ksh
make
X Display Server
Main enhancements to VPD in Oracle Database 10g include: column-level privacy, customization, and shared policies. These enhancements are explained in the following three paragraphs.
Column-Level Privacy
VPD provides row-level security (RLS). Oracle Database 10g enforces row-level access control only when a command accesses or references security-relevant columns. You can specify more than one relevant column in a policy. If you do not specify any relevant columns, the database applies VPD rewrites to all commands that access or reference the object. This behavior is consistent with previous releases.
You use row-level access control when you want to limit the rows returned. For example, queries on nonsensitive data (such as LAST_NAME or DEPARTMENT_ID) should not be limited, because there is no harm in returning a list of all departments, but queries that reference sensitive or private columns (such as national identification) should be limited by VPD.
The enforcement of relevant-column VPD occurs whenever a command references the column:
Explicitly: When the column name is included in the query
Implicitly: In a SELECT * command, because all columns are returned
The ability to specify relevant columns in VPD policies lends itself to privacy. Often privacy-relevant information, or personally identifiable information (PII), is stored in columns. For example: national identifier, credit card number, address, and account number are all PII. Only queries that access the relevant columns should have VPD's granular access control applied to them. A VPD policy need not restrict queries that do not reference relevant columns.
The column-level VPD increases performance by limiting the number of queries that the database rewrites. Rewrites occur only when the statement references relevant columns.
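As an illustration only (the HR schema, EMPLOYEES table, SEC_ADMIN owner, and function name below are hypothetical), a column-level policy is registered by listing the security-relevant columns in the SEC_RELEVANT_COLS parameter of DBMS_RLS.ADD_POLICY:
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema     => 'HR',                  -- hypothetical schema
    object_name       => 'EMPLOYEES',           -- hypothetical table
    policy_name       => 'emp_priv_policy',
    function_schema   => 'SEC_ADMIN',
    policy_function   => 'emp_dept_predicate',  -- returns the WHERE condition
    statement_types   => 'SELECT',
    sec_relevant_cols => 'NATIONAL_ID,SALARY'); -- rewrite only when these columns are referenced
END;
/
Queries that never touch NATIONAL_ID or SALARY are left unmodified, which is the performance benefit described above.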
Customization
You use customization to base the VPD on the individual requirements of a customer's deployment. You can customize VPD to always enforce the same predicate with a static policy, or you can have VPD predicates that change dynamically with a nonstatic policy.
Shared Policies
In Oracle Database 10g, you apply a single VPD policy to multiple objects. This feature reduces the administration costs.
Defining a Virtual Private Database (VPD)
VPD enables the database to perform query modification based on a security policy you have defined in a package and associated with a table or view. VPD provides fine-grained access control that is data driven, context dependent, and row based. VPD is a key enabling technology in building three-tier systems that expose mission-critical resources to customers and partners.
When a user directly or indirectly accesses a table or view associated with a VPD security policy, the server dynamically modifies the user's SQL statement. The modification is based on a WHERE condition (also known as a predicate) returned by a function that implements the security policy. The database modifies the statement dynamically and transparently to the user, using any condition that can be expressed in, or returned by, a function.
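A minimal sketch of such a policy function follows; the names and the application context are hypothetical, and the string it returns is simply appended to the user's statement as the WHERE condition:
CREATE OR REPLACE FUNCTION emp_dept_predicate (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2)
RETURN VARCHAR2
IS
BEGIN
  -- Restrict rows to the department stored in a (hypothetical) application context
  RETURN 'department_id = SYS_CONTEXT(''hr_ctx'', ''dept_id'')';
END;
/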
Oracle Database 10g enhances the VPD to further increase effectiveness, security, flexibility, and performance. You can use VPD to assist with privacy initiatives.
Enhancements in Oracle Database 10g are applicable to private information. For example, VPD now includes the notion of security-relevant columns. Within a VPD policy, you can reference "relevant columns" so that the database appends the VPD predicate to queries that reference these relevant columns.
In Oracle Database 10g, to provide flexibility for all types of implementations, policies can be:
Static: These policies strictly enforce the same predicate at all times.
Non-static: These policies can change dynamically.
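For example (again with hypothetical names), the choice is made through the POLICY_TYPE parameter when the policy is registered; DBMS_RLS.STATIC caches a single predicate, while DBMS_RLS.SHARED_STATIC additionally lets several objects share one policy:
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'emp_static_policy',
    function_schema => 'SEC_ADMIN',
    policy_function => 'emp_dept_predicate',
    statement_types => 'SELECT,UPDATE,DELETE',
    policy_type     => DBMS_RLS.STATIC);  -- predicate is evaluated once and reused
END;
/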
How To Find the FND_PATCHSET_LEVEL While Applying India Localization Patch
Solution
To find the FND patchset level, use the following query:
select patch_level from fnd_product_installations where patch_level like '%FND%';
The above query will return values like 11i.FND.C or 11i.FND.D.
Based on this, pass the parameter value as FND_PATCHSET_LEVEL = 'C' if the query returns 11i.FND.C, or 'D' if it returns 11i.FND.D.
Removing Your Oracle Database The task of removing an Oracle database differs based on what OS you are running on. For example, if you are running on UNIX, generally you must remove the related database files (database data files, online redo logs, and control files), files in the admin directories related to the database (database parameter files, trace files, etc.), and entries in ancillary database files such as the oratab, tnsnames.ora, and listener.ora files. If you are running on Windows, you need to make the same changes required on UNIX systems, but you also need to remove the database services for the database that you are removing. This will prevent errors from occurring when you start your Oracle database.
You can also use the DBCA to remove a database. The DBCA can be used to drop a database that you created with the DBCA or one that you manually created.
Oracle Database 10g offers a new command, drop database, that takes care of dropping your database for you. You still need to perform ancillary tasks, such as removing the database service and changing database files such as oratab, tnsnames.ora, and listener.ora.
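From SQL*Plus the sequence is roughly as follows (a sketch only; the database must be mounted in exclusive, restricted mode before DROP DATABASE will run, so verify the prerequisites for your release first):
-- Connect AS SYSDBA, then:
SHUTDOWN IMMEDIATE
STARTUP RESTRICT MOUNT
DROP DATABASE;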
Removing an Oracle database is quite different than removing the Oracle database software. If removing the software is your quest, you should first remove all Oracle databases on your system, and then use the Oracle Universal Installer to remove the Oracle RDBMS software. To completely remove the Oracle RDBMS software, you will probably have to manually remove files from the old ORACLE_HOME, as the installer is not always perfect in its removal process.
ORA-01079 ORACLE database was not properly created, operation aborted
Possible Causes and Solutions
An error occurred during the creation of the Oracle database. Generally, this is followed by other error messages. Reasons for this error message might include
• Errors in the database initialization file.
• Errors in the create database command.
• Oracle RDBMS bugs.
Several possible causes (generally followed by another error message):
• The parameter file could not be opened (followed by an LRM-00109 error message). Check that the parameter file exists and that it can be opened and read by the database executable. By default, the database expects the parameter file to be in $ORACLE_HOME/dbs (this is port-specific, however; for example, Windows expects it to be in %ORACLE_HOME%\database by default).
• An unknown parameter may be included in the parameter file (error LRM-00101 also appears). Correct the unknown parameter and restart the database.
Generally, some security violation has occurred. Check the following:
If the remote_login_passwordfile parameter is not set, or is set to none:
1. Make sure you are logged in as the Oracle user (or equivalent) if you are running in UNIX.
2. Make sure you are logged in to a Windows account that is a member of the ORA_DBA group.
If the remote_login_passwordfile parameter is set, then make sure you are logging in with the correct password. If you cannot remember the password for your database, you can:
1. Use the orapwd utility to create an Oracle database password file.
2. Add the Windows user you are signed in as to the ORA_DBA group.
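To see how the parameter is currently set, a quick check from an authenticated SQL*Plus session (a sketch; querying V$PARAMETER requires the appropriate privileges):
-- Shows the current remote_login_passwordfile setting (NONE, SHARED, or EXCLUSIVE)
select value from v$parameter where name = 'remote_login_passwordfile';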
An error has occurred when trying to create the control file, or on subsequent writes to the control file. You should make sure that the file system can be written to by the Oracle database login account. Also make sure that a control file does not already exist.
Generally, you are re-creating a database that previously existed, or a previous database creation failed. Remove the control files, and try to create the database again. Exercise caution, and make sure you remove the correct control file. This error may indicate that you are trying to overwrite the control files of another database.
Possible Causes and Solutions
You have used the startup pfile command and the pfile or ifile specified in the command could not be found. This is accompanied by the ORA-01078 error message.
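For reference, a minimal sketch of the command in question; the pfile path mirrors the example used later in this document, so substitute your own:
-- From SQL*Plus as SYSDBA; fails with ORA-01078/LRM-00109 if the file cannot be found
STARTUP PFILE='/opt/oracle/admin/mydb/pfile/initmybd.ora'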
One note of caution: if you don’t know what a specific keyword of a command does, don’t use it without checking out its purpose. This is a reference for those who understand what something like cascade constraints means when associated with a drop table command. So, without further delay, let’s get on with the examples!
alter cluster
ALTER CLUSTER pub_cluster SIZE 4K; ALTER CLUSTER pub_cluster DEALLOCATE UNUSED KEEP 1M; alter database: Alter a Data File
ALTER DATABASE DATAFILE 4 OFFLINE; ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf' OFFLINE; ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf' RESIZE 100m; ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE 1000M; ALTER DATABASE DATAFILE 4 END BACKUP; alter database: Alter a Tempfile
ALTER DATABASE TEMPFILE 4 RESIZE 100M; ALTER DATABASE TEMPFILE 4 AUTOEXTEND ON NEXT 100M MAXSIZE 1000M; ALTER DATABASE TEMPFILE 4 DROP INCLUDING DATAFILES; ALTER DATABASE TEMPFILE 4 OFFLINE; alter database: ARCHIVELOG Mode Commands
ALTER DATABASE ARCHIVELOG; ALTER DATABASE NOARCHIVELOG; ALTER DATABASE FORCE LOGGING; ALTER DATABASE CLEAR LOGFILE '/opt/oracle/logfiles/redo01.rdo'; ALTER DATABASE CLEAR UNARCHIVED LOGFILE '/opt/oracle/logfiles/redo01.rdo'; ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE); ALTER DATABASE DROP SUPPLEMENTAL LOG DATA; alter database: Control File Operations
ALTER DATABASE BACKUP CONTROLFILE TO TRACE; ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/opt/oracle/logfile_backup/backup_logfile.trc' REUSE RESETLOGS; ALTER DATABASE BACKUP CONTROLFILE TO '/opt/oracle/logfile_backup/backup_logfile.ctl'; alter database: Create a Data File
ALTER DATABASE CREATE DATAFILE '/opt/oracle/datafile/users01.dbf' AS '/opt/oracle/datafile/users01.dbf'; ALTER DATABASE CREATE DATAFILE 4 AS '/opt/oracle/datafile/users01.dbf'; ALTER DATABASE CREATE DATAFILE '/opt/oracle/datafile/users01.dbf' AS NEW; alter database: Datafile Offline/Online See alter database: Alter a Data File
alter database: Logfile Commands
ALTER DATABASE ADD LOGFILE GROUP 2 ('/opt/oracle/logfiles/redo02a.rdo', '/opt/oracle/logfiles/redo02b.rdo') SIZE 300M REUSE; ALTER DATABASE ADD LOGFILE MEMBER '/opt/oracle/logfiles/redo02c.rdo' to GROUP 2; ALTER DATABASE ADD LOGFILE thread 3 GROUP 2 ('/opt/oracle/logfiles/redo02a.rdo', '/opt/oracle/logfiles/redo02b.rdo') SIZE 300M REUSE; ALTER DATABASE DROP LOGFILE GROUP 3; ALTER DATABASE DROP LOGFILE MEMBER '/opt/oracle/logfiles/redo02b.rdo'; alter database: Mount and Open the Database
ALTER DATABASE MOUNT; ALTER DATABASE OPEN; alter database: Move or Rename a Database File or Online Redo Log Note Note The database must be mounted to rename or move online redo logs. The database must be mounted or the data files taken offline to move database data files.
ALTER DATABASE RENAME FILE '/ora/datafile/oldfile.dbf' TO '/ora/datafile/newfile.dbf';
alter database: Open the Database Read-Only
ALTER DATABASE OPEN READ ONLY; alter database: Open the Database with resetlogs
ALTER DATABASE OPEN RESETLOGS; alter database: Recover the Database For database recovery, I recommend the use of the recover command instead. See the “recover” section, later in the chapter.
alter function: Recompile a Function
ALTER FUNCTION my_function COMPILE; alter index: Allocate and Deallocate Extents
ALTER INDEX ix_my_tab ALLOCATE EXTENT; ALTER INDEX ix_my_tab ALLOCATE EXTENT DATAFILE '/ora/datafile/newidx.dbf'; ALTER INDEX ix_my_tab DEALLOCATE UNUSED; ALTER INDEX ix_my_tab DEALLOCATE UNUSED KEEP 100M; alter index: Miscellaneous Maintenance
ALTER INDEX ix_my_tab PARALLEL 3; ALTER INDEX ix_my_tab NOPARALLEL; ALTER INDEX ix_my_tab NOCOMPRESS; ALTER INDEX ix_my_tab COMPRESS; alter index: Modify Logging Attributes
ALTER INDEX ix_my_tab LOGGING; ALTER INDEX ix_my_tab NOLOGGING; alter index: Modify Storage and Physical Attributes
ALTER INDEX ix_my_tab PCTFREE 10 PCTUSED 40 INITRANS 5 STORAGE (NEXT 100k MAXEXTENTS UNLIMITED FREELISTS 10 BUFFER_POOL KEEP); alter index: Partition – Add Hash Index Partition
ALTER INDEX ix_my_tab ADD PARTITION TABLESPACE NEWIDXTBS; alter index: Partition – Coalesce Partition
ALTER INDEX ix_my_tab COALESCE PARTITION; alter index: Partition – Drop Partition
ALTER INDEX ix_my_tab DROP PARTITION ix_my_tab_jan_04; alter index: Partition – Modify Default Attributes
ALTER INDEX ix_my_tab MODIFY DEFAULT ATTRIBUTES FOR PARTITION ix_my_tab_jan_04 PCTFREE 10 PCTUSED 40 TABLESPACE newidxtbs NOLOGGING COMPRESS; alter index: Partition – Modify Partition
ALTER INDEX ix_my_tab MODIFY PARTITION ix_my_tab_jan_04 DEALLOCATE UNUSED KEEP 100M; ALTER INDEX ix_my_tab MODIFY PARTITION ix_my_tab_jan_04 ALLOCATE EXTENT SIZE 100m; ALTER INDEX ix_my_tab MODIFY PARTITION ix_my_tab_jan_04 PCTUSED 40 STORAGE(NEXT 50m) NOLOGGING; alter index: Partition – Modify Subpartition
ALTER INDEX ix_my_tab MODIFY SUBPARTITION ix_my_tab_jan_04 DEALLOCATE UNUSED KEEP 100M; ALTER INDEX ix_my_tab MODIFY SUBPARTITION ix_my_tab_jan_04 ALLOCATE EXTENT SIZE 100m; ALTER INDEX ix_my_tab MODIFY SUBPARTITION ix_my_tab_jan_04 PCTUSED 40 STORAGE(NEXT 50m) NOLOGGING; alter index: Partition – Rename
ALTER INDEX ix_my_tab RENAME PARTITION ix_my_tab_jan_04 TO ix_my_tab_jan_05; ALTER INDEX ix_my_tab RENAME SUBPARTITION ix_my_tab_jan_04 TO ix_my_tab_jan_05; alter index: Partition – Split
ALTER INDEX ix_my_tab SPLIT PARTITION ix_my_tab_jan_05 AT ('15-JAN-05') INTO PARTITION ix_my_tab_jan_05a TABLESPACE myidxtbs STORAGE (INITIAL 100m NEXT 50M FREELISTS 5); alter index: Rebuild Nonpartitioned Indexes
ALTER INDEX ix_my_tab REBUILD ONLINE; ALTER INDEX ix_my_tab REBUILD ONLINE TABLESPACE idx_tbs_new PCTFREE 1 STORAGE (INITIAL 50M NEXT 50m FREELISTS 5) COMPUTE STATISTICS PARALLEL 0; alter index: Rebuild Partitions
ALTER INDEX ix_my_tab REBUILD PARTITION ix_my_tab_jan_04 ONLINE; ALTER INDEX ix_my_tab REBUILD SUBPARTITION ix_my_tab_jan_04 ONLINE PCTFREE 1 STORAGE (INITIAL 50M NEXT 50m FREELISTS 5) COMPUTE STATISTICS PARALLEL 0; alter index: Rename
ALTER INDEX ix_my_tab RENAME TO ix_my_tab_01; alter index: Shrink
ALTER INDEX ix_my_tab SHRINK SPACE; ALTER INDEX ix_my_tab SHRINK SPACE COMPACT CASCADE; alter materialized view: Allocate and Deallocate Extents
ALTER MATERIALIZED VIEW mv_my_tab ALLOCATE EXTENT; ALTER MATERIALIZED VIEW mv_my_tab DEALLOCATE UNUSED; alter materialized view: Miscellaneous
ALTER MATERIALIZED VIEW mv_my_tab COMPRESS; ALTER MATERIALIZED VIEW mv_my_tab PARALLEL 3; ALTER MATERIALIZED VIEW mv_my_tab NOLOGGING; ALTER MATERIALIZED VIEW mv_my_tab LOGGING; ALTER MATERIALIZED VIEW mv_my_tab CONSIDER FRESH; ALTER MATERIALIZED VIEW mv_my_tab ENABLE QUERY REWRITE; alter materialized view: Physical Attributes and Storage
ALTER MATERIALIZED VIEW mv_my_tab PCTFREE 5 PCTUSED 60 STORAGE (NEXT 100m FREELISTS 5); alter materialized view: Refresh
ALTER MATERIALIZED VIEW mv_my_tab REFRESH FAST; ALTER MATERIALIZED VIEW mv_my_tab REFRESH COMPLETE; ALTER MATERIALIZED VIEW mv_my_tab REFRESH FAST ON DEMAND; ALTER MATERIALIZED VIEW mv_my_tab REFRESH FAST ON COMMIT; ALTER MATERIALIZED VIEW mv_my_tab REFRESH COMPLETE START WITH sysdate; ALTER MATERIALIZED VIEW mv_my_tab REFRESH COMPLETE START WITH sysdate NEXT sysdate+1/24; alter materialized view: Shrink Space
ALTER MATERIALIZED VIEW mv_my_tab SHRINK SPACE; ALTER MATERIALIZED VIEW mv_my_tab SHRINK SPACE COMPACT CASCADE; alter materialized view log: Add Components
ALTER MATERIALIZED VIEW LOG ON my_tab ADD PRIMARY KEY; ALTER MATERIALIZED VIEW LOG ON my_tab ADD (col1, col2) INCLUDING NEW VALUES; ALTER MATERIALIZED VIEW LOG ON my_tab ADD (col1, col2), ROWID, SEQUENCE INCLUDING NEW VALUES; alter materialized view log: Allocate and Deallocate Extents
ALTER MATERIALIZED VIEW LOG ON my_tab ALLOCATE EXTENT; ALTER MATERIALIZED VIEW LOG ON my_tab DEALLOCATE UNUSED; alter materialized view log: Miscellaneous
ALTER MATERIALIZED VIEW LOG ON my_tab PARALLEL 3; ALTER MATERIALIZED VIEW LOG ON my_tab NOLOGGING; ALTER MATERIALIZED VIEW LOG ON my_tab SHRINK SPACE; alter materialized view log: Physical Attributes and Storage
ALTER MATERIALIZED VIEW LOG ON my_tab PCTFREE 5 PCTUSED 60 STORAGE (NEXT 100m FREELISTS 5); alter package: Compile
ALTER PACKAGE pk_my_package COMPILE; ALTER PACKAGE pk_my_package COMPILE SPECIFICATION; ALTER PACKAGE pk_my_package COMPILE BODY; alter procedure: Compile
ALTER PROCEDURE pk_my_package COMPILE; alter role
ALTER ROLE my_role IDENTIFIED BY password; ALTER ROLE my_role NOT IDENTIFIED; alter profile: Modify Limits (Password)
ALTER PROFILE my_profile LIMIT FAILED_LOGIN_ATTEMPTS=3; ALTER PROFILE my_profile LIMIT PASSWORD_LOCK_TIME=2/24; ALTER PROFILE my_profile LIMIT PASSWORD_GRACE_TIME=5; ALTER PROFILE my_profile LIMIT PASSWORD_LIFETIME=60; ALTER PROFILE my_profile LIMIT PASSWORD_REUSE_TIME=365 PASSWORD_REUSE_MAX=3; alter profile: Modify Limits (Resource)
ALTER PROFILE my_profile LIMIT SESSIONS_PER_CPU=10; ALTER PROFILE my_profile LIMIT CONNECT_TIME=1000; ALTER PROFILE my_profile LIMIT IDLE_TIME=60; ALTER PROFILE my_profile LIMIT PRIVATE_SGA=1000000; alter rollback segment: Online/Offline
ALTER ROLLBACK SEGMENT rbs01 OFFLINE; ALTER ROLLBACK SEGMENT rbs01 ONLINE; alter rollback segment: Shrink
ALTER ROLLBACK SEGMENT rbs01 SHRINK; ALTER ROLLBACK SEGMENT rbs01 SHRINK TO 100M; alter rollback segment: storage Clause
ALTER ROLLBACK SEGMENT rbs01 STORAGE(NEXT 50M OPTIMAL 100M); alter sequence: Miscellaneous
ALTER SEQUENCE my_seq INCREMENT BY -5; ALTER SEQUENCE my_seq INCREMENT BY 1 MAXVALUE 50000 CYCLE; ALTER SEQUENCE my_seq NOMAXVALUE; ALTER SEQUENCE my_seq CACHE ORDER; ALTER SEQUENCE my_seq INCREMENT BY 1 MINVALUE 1 MAXVALUE 500 CYCLE; alter session: Enable and Disable Parallel Operations
ALTER SESSION ENABLE PARALLEL DML PARALLEL 3; ALTER SESSION ENABLE PARALLEL DDL; ALTER SESSION DISABLE PARALLEL QUERY; alter session: Resumable Space Management
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600; ALTER SESSION DISABLE RESUMABLE; alter session: Set Session Parameters
ALTER SESSION SET nls_date_format='MM/DD/YYYY HH24:MI:SS'; ALTER SESSION SET sort_area_size=10000000; ALTER SESSION SET query_rewrite_enabled=TRUE; ALTER SESSION SET resumable_timeout=3600; ALTER SESSION SET skip_unusable_indexes=TRUE; ALTER SESSION SET SQL_TRACE=TRUE; alter system: Logfile and Archive Logfile Management
ALTER SYSTEM SWITCH LOGFILE; ALTER SYSTEM ARCHIVE LOG START; ALTER SYSTEM ARCHIVE LOG STOP; ALTER SYSTEM ARCHIVE LOG ALL; ALTER SYSTEM ARCHIVE LOG THREAD 1 ALL; ALTER SYSTEM ARCHIVE LOG ALL TO 'C:\oracle\allarch'; alter system: Set System Parameters
ALTER SYSTEM SET db_cache_size=325M COMMENT='This change is to add more memory to the system' SCOPE=BOTH; ALTER SYSTEM SET COMPATIBLE=10.0.0 COMMENT='GOING TO 10G!' SCOPE=SPFILE; alter system: System Management
ALTER SYSTEM CHECKPOINT GLOBAL; ALTER SYSTEM KILL SESSION '145,334'; ALTER SYSTEM ENABLE RESTRICTED SESSION; ALTER SYSTEM DISABLE RESTRICTED SESSION; ALTER SYSTEM SUSPEND; ALTER SYSTEM QUIESCE RESTRICTED; ALTER SYSTEM UNQUIESCE; ALTER SYSTEM RESUME; ALTER SYSTEM FLUSH SHARED_POOL; ALTER SYSTEM FLUSH BUFFER_CACHE; alter table: External Table Operations
ALTER TABLE ext_parts REJECT LIMIT 500; ALTER TABLE ext_parts DEFAULT DIRECTORY ext_employee_dir; ALTER TABLE ext_parts ACCESS PARAMETERS (FIELDS TERMINATED BY ','); ALTER TABLE ext_parts LOCATION ('PARTS01.TXT','PARTS02.TXT'); ALTER TABLE ext_parts ADD COLUMN (SSN NUMBER); alter table: Move Table
ALTER TABLE parts move TABLESPACE parts_new_tbs PCTFREE 10 PCTUSED 60; alter table: Table Column – Add
ALTER TABLE PARTS ADD (part_location VARCHAR2(20) ); ALTER TABLE PARTS ADD (part_location VARCHAR2(20), part_bin VARCHAR2(30) ); ALTER TABLE parts ADD (photo BLOB) LOB (photo) STORE AS lob_parts_photo (TABLESPACE parts_lob_tbs); alter table: Table Column – Modify
ALTER TABLE PARTS MODIFY (part_location VARCHAR2(30) ); ALTER TABLE PARTS MODIFY (part_location VARCHAR2(30), part_bin VARCHAR2(20) ); ALTER TABLE parts modify (name NOT NULL); ALTER TABLE parts modify (name NULL); ALTER TABLE parts MODIFY LOB (photo) (STORAGE(FREELISTS 2)); ALTER TABLE parts MODIFY LOB (photo) (PCTVERSION 50); alter table: Table Column – Remove
ALTER TABLE parts DROP (part_location); ALTER TABLE parts DROP (part_location, part_bin); alter table: Table Column – Rename
ALTER TABLE parts RENAME COLUMN part_location TO part_loc; alter table: Table Constraints – Add Check Constraint
ALTER TABLE parts ADD (CONSTRAINT ck_parts_01 CHECK (id > 0) ); alter table: Table Constraints – Add Default Value
ALTER TABLE PARTS MODIFY (name DEFAULT 'Not Available'); ALTER TABLE PARTS ADD (vendor_code NUMBER DEFAULT 0); ALTER TABLE PARTS MODIFY (part_description DEFAULT NULL); alter table: Table Constraints – Add Foreign Key
ALTER TABLE parts ADD CONSTRAINT fk_part_bin FOREIGN KEY (bin_code) REFERENCES part_bin; alter table: Table Constraints – Add Primary and Unique Key
ALTER TABLE parts ADD CONSTRAINT pk_parts_part_id PRIMARY KEY (id) USING INDEX TABLESPACE parts_index STORAGE (INITIAL 100K NEXT 100K PCTINCREASE 0); ALTER TABLE parts ADD CONSTRAINT uk_parts_part_bin UNIQUE (part_bin)USING INDEX TABLESPACE parts_index STORAGE (INITIAL 100K NEXT 100K PCTINCREASE 0); alter table: Table Constraints – Modify
ALTER TABLE parts DISABLE UNIQUE (part_bin); ALTER TABLE parts DISABLE CONSTRAINT uk_parts_part_bin; ALTER TABLE parts DISABLE CONSTRAINT uk_parts_part_bin KEEP INDEX; ALTER TABLE parts DISABLE CONSTRAINT fk_part_bin; ALTER TABLE parts DISABLE CONSTRAINT fk_part_bin DISABLE PRIMARY KEY KEEP INDEX; ALTER TABLE parts ENABLE CONSTRAINT fk_part_bin; ALTER TABLE parts ENABLE PRIMARY KEY; ALTER TABLE parts ENABLE UNIQUE (part_bin); ALTER TABLE parts ENABLE NOVALIDATE CONSTRAINT fk_part_bin; ALTER TABLE parts ENABLE NOVALIDATE PRIMARY KEY; ALTER TABLE parts ENABLE NOVALIDATE UNIQUE (part_bin); ALTER TABLE parts ENABLE NOVALIDATE PRIMARY KEY ENABLE NOVALIDATE CONSTRAINT fk_part_bin; alter table: Table Constraints – Remove
ALTER TABLE parts DROP CONSTRAINT fk_part_bin; ALTER TABLE parts DROP PRIMARY KEY; ALTER TABLE parts DROP PRIMARY KEY CASCADE; ALTER TABLE parts DROP UNIQUE (part_bin); alter table: Table Partition – Add
ALTER TABLE store_sales ADD PARTITION sales_q1_04 VALUES LESS THAN (TO_DATE('01-APR-2004','DD-MON-YYYY')) TABLESPACE data_0104_tbs UPDATE GLOBAL INDEXES; ALTER TABLE daily_transactions ADD PARTITION; ALTER TABLE daily_transactions ADD PARTITION Alaska VALUES ('AK'); ALTER TABLE daily_transactions add PARTITION SALES_2004_Q1 VALUES LESS THAN (TO_DATE('01-APR-2004','DD-MON-YYYY')) SUBPARTITIONS 4; alter table: Table Partition – Merge
ALTER TABLE store_sales MERGE PARTITIONS Oklahoma, texas INTO PARTITION oktx; alter table: Table Partition – Move
ALTER TABLE store_sales MOVE PARTITION sales_overflow TABLESPACE new_sales_overflow STORAGE (INITIAL 100m NEXT 100m PCTINCREASE 0) UPDATE GLOBAL INDEXES; alter table: Table Partition – Remove
ALTER TABLE store_sales DROP PARTITION sales_q1_04 UPDATE GLOBAL INDEXES; alter table: Table Partition – Rename
ALTER TABLE store_sales RENAME PARTITION sales_q1 TO sales_first_quarter; alter table: Table Partition – Split
ALTER TABLE store_sales SPLIT PARTITION sales_overflow AT (TO_DATE('01-FEB-2004','DD-MON-YYYY') ) INTO (PARTITION sales_q4_2003, PARTITION sales_overflow) UPDATE GLOBAL INDEXES; ALTER TABLE composite_sales SPLIT PARTITION sales_q1 AT (TO_DATE('15-FEB-2003','DD-MON-YYYY')) INTO (PARTITION sales_q1_01 SUBPARTITIONS 4 STORE IN (q1_01_tab1, q1_01_tab2, q1_01_tab3, q1_01_tab4), PARTITION sales_q1_02 SUBPARTITIONS 4 STORE IN (q1_02_tab1, q1_02_tab2, q1_02_tab3, q1_02_tab4) ) UPDATE GLOBAL INDEXES; alter table: Table Partition – Truncate
ALTER TABLE store_sales TRUNCATE PARTITION sales_overflow UPDATE GLOBAL INDEXES; alter table: Table Properties
ALTER TABLE parts PCTFREE 10 PCTUSED 60; ALTER TABLE parts STORAGE (NEXT 1M); ALTER TABLE parts PARALLEL 4; alter table: Triggers – Modify Status
ALTER TABLE parts DISABLE ALL TRIGGERS; ALTER TABLE parts ENABLE ALL TRIGGERS; alter tablespace: Backups
ALTER TABLESPACE my_data_tbs BEGIN BACKUP; ALTER TABLESPACE my_data_tbs END BACKUP; alter tablespace: Data Files and Tempfiles
ALTER TABLESPACE mytbs ADD DATAFILE '/ora100/oracle/mydb/mydb_mytbs_01.dbf' SIZE 100M; ALTER TABLESPACE mytemp ADD TEMPFILE '/ora100/oracle/mydb/mydb_mytemp_01.dbf' SIZE 100M; ALTER TABLESPACE mytemp AUTOEXTEND OFF; ALTER TABLESPACE mytemp AUTOEXTEND ON NEXT 100m MAXSIZE 1G; alter tablespace: Rename
ALTER TABLESPACE my_data_tbs RENAME TO my_newdata_tbs; alter tablespace: Tablespace Management
ALTER TABLESPACE my_data_tbs DEFAULT STORAGE (INITIAL 100m NEXT 100m FREELISTS 3); ALTER TABLESPACE my_data_tbs MINIMUM EXTENT 500k; ALTER TABLESPACE my_data_tbs RESIZE 100m; ALTER TABLESPACE my_data_tbs COALESCE; ALTER TABLESPACE my_data_tbs OFFLINE; ALTER TABLESPACE my_data_tbs ONLINE; ALTER TABLESPACE mytbs READ ONLY; ALTER TABLESPACE mytbs READ WRITE; ALTER TABLESPACE mytbs FORCE LOGGING; ALTER TABLESPACE mytbs NOLOGGING; ALTER TABLESPACE mytbs FLASHBACK ON; ALTER TABLESPACE mytbs FLASHBACK OFF; ALTER TABLESPACE mytbs RETENTION GUARANTEE; ALTER TABLESPACE mytbs RETENTION NOGUARANTEE; alter trigger
ALTER TRIGGER tr_my_trigger DISABLE; ALTER TRIGGER tr_my_trigger ENABLE; ALTER TRIGGER tr_my_trigger RENAME TO tr_new_my_trigger; ALTER TRIGGER tr_my_trigger COMPILE; alter user: Change Password
ALTER USER olduser IDENTIFIED BY newpassword; ALTER USER olduser IDENTIFIED EXTERNALLY; alter user: Password and Account Management
ALTER USER olduser PASSWORD EXPIRE; ALTER USER olduser ACCOUNT LOCK; ALTER USER olduser ACCOUNT UNLOCK; alter user: Profile
ALTER USER olduser PROFILE admin_profile; alter user: Quotas
ALTER USER olduser QUOTA UNLIMITED ON users; ALTER USER olduser QUOTA 10000M ON USERS; alter user: Roles
ALTER USER olduser DEFAULT ROLE admin_role; ALTER USER olduser DEFAULT ROLE NONE; ALTER USER olduser DEFAULT ROLE ALL EXCEPT admin_role; alter user: Tablespace Assignments
ALTER USER olduser DEFAULT TABLESPACE users; ALTER USER olduser TEMPORARY TABLESPACE temp; alter view: Constraints
ALTER VIEW my_view ADD CONSTRAINT u_my_view_01 UNIQUE (empno) RELY DISABLE NOVALIDATE; ALTER VIEW my_view DROP CONSTRAINT u_my_view_01; ALTER VIEW my_view DROP PRIMARY KEY; ALTER VIEW my_view MODIFY CONSTRAINT u_my_view_01 NORELY; ALTER VIEW my_view MODIFY CONSTRAINT u_my_view_01 RELY; alter view: Recompile
ALTER VIEW my_view RECOMPILE; analyze: Analyze Cluster
ANALYZE CLUSTER my_cluster_tab COMPUTE STATISTICS FOR ALL ROWS; ANALYZE CLUSTER my_cluster_tab ESTIMATE STATISTICS SAMPLE 10000 ROWS FOR ALL ROWS; analyze: Analyze Index
ANALYZE INDEX ix_tab_01 COMPUTE STATISTICS FOR ALL ROWS; ANALYZE INDEX ix_tab_01 ESTIMATE STATISTICS SAMPLE 10000 ROWS FOR ALL ROWS; analyze: Analyze Table
ANALYZE TABLE mytab COMPUTE STATISTICS FOR ALL INDEXED COLUMNS SIZE 100; ANALYZE TABLE mytab COMPUTE STATISTICS FOR ALL INDEXES; audit
AUDIT ALL ON scott.emp; AUDIT UPDATE, DELETE ON scott.emp; AUDIT SELECT on scott.emp WHENEVER NOT SUCCESSFUL; AUDIT INSERT, UPDATE, DELETE ON DEFAULT; comment
COMMENT ON TABLE scott.mytab IS 'This is a comment on the mytab table'; COMMENT ON COLUMN scott.mytab.col1 IS 'This is a comment on the col1 column'; COMMENT ON MATERIALIZED VIEW scott.mview IS 'This is a comment on the materialized view mview'; create database
CREATE DATABASE prodb MAXINSTANCES 1 MAXLOGHISTORY 1 MAXLOGFILES 5 MAXLOGMEMBERS 3 MAXDATAFILES 100 DATAFILE 'C:\oracle\ora92010\prodb\system01.dbf' SIZE 250M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE 'C:\oracle\ora92010\prodb\temp01.dbf' SIZE 40M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED SYSAUX TABLESPACE DATAFILE 'C:\oracle\ora92010\prodb\sysauxtbs01.dbf' SIZE 300M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED UNDO TABLESPACE "UNDOTBS1" DATAFILE 'C:\oracle\ora92010\prodb\undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED CHARACTER SET WE8MSWIN1252 NATIONAL CHARACTER SET AL16UTF16 LOGFILE GROUP 1 ('C:\oracle\ora92010\prodb\redo01.log') SIZE 102400K, GROUP 2 ('C:\oracle\ora92010\prodb\redo02.log') SIZE 102400K, GROUP 3 ('C:\oracle\ora92010\prodb\redo03.log') SIZE 102400K; create database link
CREATE DATABASE LINK my_db_link CONNECT TO current_user USING 'my_db'; CREATE PUBLIC DATABASE LINK my_db_link CONNECT TO remote_user IDENTIFIED BY psicorp USING 'my_db'; create directory
CREATE OR REPLACE DIRECTORY mydir AS '/opt/oracle/admin/directories/mydir'; create function
CREATE OR REPLACE FUNCTION find_value_in_table (p_value IN NUMBER, p_table IN VARCHAR2, p_column IN VARCHAR2) RETURN NUMBER IS v_found NUMBER; v_sql VARCHAR2(2000); BEGIN v_sql:='SELECT 1 FROM '||p_table||' WHERE '||p_column|| ' = '||p_value; execute immediate v_sql into v_found; return v_found; END; / create index: Function-Based Index
CREATE INDEX fb_upper_last_name_emp ON emp_info (UPPER(last_name) ); create index: Global Partitioned Indexes
CREATE INDEX ix_part_my_tab_01 ON store_sales (invoice_number) GLOBAL PARTITION BY RANGE (invoice_number) (PARTITION part_001 VALUES LESS THAN (1000), PARTITION part_002 VALUES LESS THAN (10000), PARTITION part_003 VALUES LESS THAN (MAXVALUE) ); CREATE INDEX ix_part_my_tab_02 ON store_sales (store_id, time_id) GLOBAL PARTITION BY RANGE (store_id, time_id) (PARTITION PART_001 VALUES LESS THAN (1000, TO_DATE('04-01-2003','MM-DD-YYYY') ) TABLESPACE partition_001 STORAGE (INITIAL 100M NEXT 200M PCTINCREASE 0), PARTITION part_002 VALUES LESS THAN (1000, TO_DATE('07-01-2003','MM-DD-YYYY') ) TABLESPACE partition_002 STORAGE (INITIAL 200M NEXT 400M PCTINCREASE 0), PARTITION part_003 VALUES LESS THAN (maxvalue, maxvalue) TABLESPACE partition_003 ); create index: Local Partitioned Indexes
CREATE INDEX ix_part_my_tab_01 ON my_tab (col_one, col_two, col_three) LOCAL (PARTITION tbs_part_01 TABLESPACE part_tbs_01, PARTITION tbs_part_02 TABLESPACE part_tbs_02, PARTITION tbs_part_03 TABLESPACE part_tbs_03, PARTITION tbs_part_04 TABLESPACE part_tbs_04); CREATE INDEX ix_part_my_tab_01 ON my_tab (col_one, col_two, col_three) LOCAL STORE IN (part_tbs_01, part_tbs_02, part_tbs_03, part_tbs_04); CREATE INDEX ix_part_my_tab_01 ON my_tab (col_one, col_two, col_three) LOCAL STORE IN ( part_tbs_01 STORAGE (INITIAL 10M NEXT 10M MAXEXTENTS 200), part_tbs_02, part_tbs_03 STORAGE (INITIAL 100M NEXT 100M MAXEXTENTS 200), part_tbs_04 STORAGE (INITIAL 1000M NEXT 1000M MAXEXTENTS 200)); create index: Local Subpartitioned Indexes
CREATE INDEX ix_mytab_01 ON mytab(column_1); CREATE UNIQUE INDEX ix_mytab_01 ON mytab(column_1, column_2, column_3); CREATE INDEX ix_mytab_01 ON mytab(column_1, column_2, column_3) TABLESPACE my_indexes COMPRESS STORAGE (INITIAL 10K NEXT 10K PCTFREE 10) COMPUTE STATISTICS; CREATE BITMAP INDEX bit_mytab_01 ON my_tab(col_two) TABLESPACE my_tbs; create materialized view
CREATE MATERIALIZED VIEW emp_dept_mv1 TABLESPACE users BUILD IMMEDIATE REFRESH FAST ON COMMIT WITH ROWID ENABLE QUERY REWRITE AS SELECT d.rowid deptrowid, e.rowid emprowid, e.empno, e.ename, e.job, d.loc FROM dept d, emp e WHERE d.deptno = e.deptno; CREATE MATERIALIZED VIEW emp_dept_mv3 TABLESPACE users BUILD IMMEDIATE REFRESH FAST ON COMMIT WITH ROWID DISABLE QUERY REWRITE AS SELECT d.rowid deptrowid, e.rowid emprowid, d.dname, d.loc, e.ename, e.job FROM dept d, emp e WHERE d.deptno (+) = e.deptno; create materialized view: Partitioned Materialized View
CREATE MATERIALIZED VIEW part_emp_mv1 PARTITION BY RANGE (hiredate) (PARTITION month1 VALUES LESS THAN (TO_DATE('01-APR-1981', 'DD-MON-YYYY')) PCTFREE 0 PCTUSED 99 STORAGE (INITIAL 64k NEXT 16k PCTINCREASE 0) TABLESPACE users, PARTITION month2 VALUES LESS THAN (TO_DATE('01-DEC-1981', 'DD-MON-YYYY')) PCTFREE 0 PCTUSED 99 STORAGE (INITIAL 64k NEXT 16k PCTINCREASE 0) TABLESPACE users, PARTITION month3 VALUES LESS THAN (TO_DATE('01-APR-1988', 'DD-MON-YYYY')) PCTFREE 0 PCTUSED 99 STORAGE (INITIAL 64k NEXT 16k PCTINCREASE 0) TABLESPACE users) BUILD IMMEDIATE REFRESH FAST ENABLE QUERY REWRITE AS SELECT hiredate, count(*) as hires FROM emp GROUP BY hiredate; create materialized view log
CREATE MATERIALIZED VIEW LOG ON emp TABLESPACE users WITH PRIMARY KEY, SEQUENCE, (ename, job, mgr, hiredate, sal, comm, deptno) INCLUDING NEW VALUES; create package/create package body
CREATE OR REPLACE PACKAGE get_Tomdate_pkg IS FUNCTION GetTomdate RETURN DATE; PRAGMA RESTRICT_REFERENCES (GetTomdate, WNDS); PROCEDURE ResetSysDate; END get_Tomdate_pkg; /
CREATE OR REPLACE PACKAGE BODY get_Tomdate_pkg IS v_Sysdate DATE := TRUNC(SYSDATE); FUNCTION GetTomdate RETURN DATE IS BEGIN RETURN v_sysdate+1; END GetTomdate; PROCEDURE ResetSysdate IS BEGIN v_Sysdate := SYSDATE; END ResetSysdate; END get_Tomdate_pkg; / create pfile
CREATE PFILE FROM SPFILE; CREATE PFILE='/opt/oracle/admin/mydb/pfile/initmybd.ora' FROM SPFILE='/opt/oracle/admin/mydb/pfile/spfilemybd.ora'; create procedure
CREATE OR REPLACE PROCEDURE new_emp_salary (p_empid IN NUMBER, p_increase IN NUMBER) AS BEGIN UPDATE emp SET salary=salary*p_increase WHERE empid=p_empid; END; / create role
CREATE ROLE developer_role IDENTIFIED USING develop; create rollback segment
CREATE ROLLBACK SEGMENT r01 TABLESPACE RBS STORAGE (INITIAL 100m NEXT 100M MINEXTENTS 5 OPTIMAL 500M); create sequence
CREATE SEQUENCE my_seq START WITH 1 INCREMENT BY 1 MAXVALUE 1000000 CYCLE CACHE; create spfile
CREATE SPFILE FROM PFILE; CREATE SPFILE='/opt/oracle/admin/mydb/pfile/spfilemybd.ora' FROM PFILE='/opt/oracle/admin/mydb/pfile/initmybd.ora'; create synonym
CREATE SYNONYM scott_user.emp FOR scott.EMP; CREATE PUBLIC SYNONYM emp FOR scott.EMP; create table
CREATE TABLE my_tab (id NUMBER, current_value VARCHAR2(2000) ) COMPRESS; CREATE TABLE parts (id NUMBER, version NUMBER, name VARCHAR2(30), Bin_code NUMBER, upc NUMBER, active_code VARCHAR2(1) NOT NULL CONSTRAINT ck_parts_active_code_01 CHECK (UPPER(active_code)= 'Y' or UPPER(active_code)='N'), CONSTRAINT pk_parts PRIMARY KEY (id, version) USING INDEX TABLESPACE parts_index STORAGE (INITIAL 1m NEXT 1m) ) TABLESPACE parts_tablespace PCTFREE 20 PCTUSED 60 STORAGE ( INITIAL 10m NEXT 10m PCTINCREASE 0); create tablespace: Permanent Tablespace
CREATE TABLESPACE data_tbs DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf' SIZE 100m; CREATE TABLESPACE data_tbs DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf' SIZE 100m FORCE LOGGING BLOCKSIZE 8k; CREATE TABLESPACE data_tbs DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf' SIZE 100m NOLOGGING DEFAULT COMPRESS EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M; CREATE TABLESPACE data_tbs DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf' SIZE 100m NOLOGGING DEFAULT COMPRESS EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO; CREATE BIGFILE TABLESPACE data_tbs DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf' SIZE 10G; create trigger
CREATE OR REPLACE TRIGGER emp_comm_after_insert BEFORE INSERT ON emp FOR EACH ROW DECLARE v_sal number; v_comm number; BEGIN /* Set the commission to 10 percent of the new salary */ v_sal:=:new.salary; :new.comm:=v_sal*.10; END; / create user
CREATE USER Robert IDENTIFIED BY Freeman DEFAULT TABLESPACE users_tbs TEMPORARY TABLESPACE temp QUOTA 100M ON users_tbs QUOTA UNLIMITED ON data_tbs; create view
CREATE OR REPLACE VIEW vw_emp_dept_10 AS SELECT * FROM EMP WHERE dept=10; CREATE OR REPLACE VIEW vw_public_email AS SELECT ename_first, ename_last, email_address FROM EMP WHERE public='Y'; delete
DELETE FROM emp WHERE empid=100; DELETE FROM emp e WHERE e.rowid > (SELECT MIN (esub.ROWID) FROM emp esub WHERE e.empid=esub.empid); drop cluster
DROP CLUSTER scott.emp_cluster INCLUDING TABLES CASCADE CONSTRAINTS; drop database
DROP DATABASE; drop database link
DROP DATABASE LINK my_db_link; DROP PUBLIC DATABASE LINK my_db_link; drop directory
DROP DIRECTORY mydir; drop function
DROP FUNCTION find_value_in_table; drop index
DROP INDEX ix_my_tab; drop materialized view
DROP MATERIALIZED VIEW my_mview; DROP MATERIALIZED VIEW my_mview PRESERVE TABLE; drop materialized view log
DROP MATERIALIZED VIEW LOG ON mytab; drop package/drop package body
DROP PACKAGE scott.my_package; DROP PACKAGE BODY scott.my_package; drop procedure
DROP PROCEDURE my_proc; drop profile
DROP PROFILE my_profile CASCADE; drop role
DROP ROLE my_role; drop rollback segment
DROP ROLLBACK SEGMENT rbs01; drop sequence
DROP SEQUENCE my_seq; drop synonym
DROP SYNONYM my_synonym; DROP PUBLIC SYNONYM my_synonym; drop table
DROP TABLE my_tab; DROP TABLE my_tab CASCADE CONSTRAINTS; DROP TABLE my_tab CASCADE CONSTRAINTS PURGE; drop tablespace
DROP TABLESPACE my_tbs; DROP TABLESPACE my_tbs INCLUDING CONTENTS; DROP TABLESPACE my_tbs INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS; drop trigger
DROP TRIGGER my_trigger; drop user
DROP USER my_user CASCADE; drop view
DROP VIEW my_view CASCADE CONSTRAINTS; explain plan
EXPLAIN PLAN SET STATEMENT_ID='TEST' FOR SELECT * FROM emp WHERE EMPID=100; flashback database
FLASHBACK DATABASE TO SCN 10000; FLASHBACK DATABASE TO TIMESTAMP SYSDATE - 1/24; FLASHBACK DATABASE TO BEFORE TIMESTAMP SYSDATE - 1/24; flashback table
FLASHBACK TABLE my_tab TO SCN 10000; FLASHBACK TABLE my_tab TO TIMESTAMP SYSDATE - 1/24 ENABLE TRIGGERS; FLASHBACK TABLE my_tab TO BEFORE DROP; FLASHBACK TABLE my_tab TO BEFORE DROP RENAME TO rec_tab; grants: Object Grants
GRANT SELECT ON scott.my_tab TO my_user; GRANT INSERT, UPDATE, SELECT ON scott.my_tab TO my_user; GRANT SELECT ON scott.my_tab TO my_user WITH GRANT OPTION; GRANT SELECT ON scott.my_tab TO PUBLIC WITH GRANT OPTION; grants: System Grants
GRANT CREATE TABLE to my_user; GRANT CREATE ANY TABLE to my_user WITH ADMIN OPTION; GRANT ALL PRIVILEGES to my_user WITH ADMIN OPTION; insert
INSERT INTO dept VALUES (100, 'Marketing', 'Y'); INSERT INTO dept (deptid, dept_name, active) VALUES (100, 'Marketing', 'Y'); INSERT INTO emp_history SELECT * FROM emp a WHERE a.empid NOT IN (SELECT empid FROM emp_history); INSERT INTO emp_pay_summary SELECT empid, sum(gross_pay) FROM emp_pay_history GROUP BY empid; INSERT ALL INTO store_sales (store_id, sales_date, deptid, sales_amt) VALUES (store_id, start_date, deptid, mon_sales) INTO store_sales (store_id, sales_date, deptid, sales_amt) VALUES (store_id, start_date+1, deptid, tue_sales) INTO store_sales (store_id, sales_date, deptid, sales_amt) VALUES (store_id, start_date+2, deptid, wed_sales) INTO store_sales (store_id, sales_date, deptid, sales_amt) VALUES (store_id, start_date+3, deptid, thur_sales) INTO store_sales (store_id, sales_date, deptid, sales_amt) VALUES (store_id, start_date+4, deptid, fri_sales) INTO store_sales (store_id, sales_date, deptid, sales_amt) VALUES (store_id, start_date+5, deptid, sat_sales) INTO store_sales (store_id, sales_date, deptid, sales_amt) VALUES (store_id, start_date+6, deptid, sun_sales) SELECT store_id, start_date, deptid, mon_sales, tue_sales, wed_sales, thur_sales, fri_sales, sat_sales, sun_sales FROM store_sales_load; INSERT ALL WHEN store_id < 100 THEN INTO east_stores WHEN store_id >= 100 THEN INTO west_stores ELSE INTO misc_stores SELECT * FROM store_sales_load; INSERT /*+ APPEND */ INTO emp VALUES (100, 'Jacob','Freeman',1000,20, null, 10, sysdate, 100, sysdate+365); lock table
LOCK TABLE my_table IN EXCLUSIVE MODE NOWAIT; LOCK TABLE my_table IN ROW EXCLUSIVE MODE; merge
MERGE INTO emp_retire A USING (SELECT empid, ename_last, ename_first, salary, retire_cd FROM emp WHERE retire_cd='Y') B ON (a.empid=b.empid) WHEN MATCHED THEN UPDATE SET a.ename_last=b.ename_last, a.ename_first=b.ename_first, a.salary=b.salary DELETE WHERE (b.retire_cd='D') WHEN NOT MATCHED THEN INSERT (a.empid, a.ename_last, a.ename_first, a.salary) VALUES (b.empid, b.ename_last, b.ename_first, b.salary) WHERE (b.retire_cd!='D'); noaudit
NOAUDIT ALL ON scott.emp; NOAUDIT UPDATE, DELETE ON scott.emp; NOAUDIT SELECT on scott.emp WHENEVER NOT SUCCESSFUL; NOAUDIT INSERT, UPDATE, DELETE ON DEFAULT; purge
PURGE TABLE my_tab; PURGE INDEX ix_my_tab; PURGE RECYCLEBIN; PURGE DBA_RECYCLEBIN; PURGE TABLESPACE data_tbs USER scott; recover
RECOVER DATABASE; RECOVER TABLESPACE user_data, user_index; RECOVER DATAFILE '/opt/oracle/admin/mydb/datafile/mydb_users_01.dbf'; RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE; RECOVER DATABASE UNTIL CHANGE 94044; RECOVER DATABASE UNTIL TIME '2004-08-01:22:00:04'; rename
RENAME my_table to my_tab; revoke: Object Grants
REVOKE SELECT ON scott.my_tab FROM my_user; REVOKE INSERT, UPDATE, SELECT ON scott.my_tab FROM my_user; REVOKE SELECT ON scott.my_tab FROM my_user; REVOKE SELECT ON scott.my_tab FROM PUBLIC; revoke: System Grants
REVOKE CREATE TABLE FROM my_user; REVOKE CREATE ANY TABLE FROM my_user; REVOKE ALL PRIVILEGES FROM my_user; rollback
ROLLBACK; savepoint
SAVEPOINT alpha; select
SELECT ename_last, dname FROM emp a, dept b WHERE a.deptid=b.deptid; SELECT a.empid, b.dept_name FROM emp a, dept b WHERE a.deptid=b.deptid (+); SELECT a.empid, b.dept_name FROM emp a LEFT OUTER JOIN dept b ON a.deptid=b.deptid; SELECT * FROM dept WHERE EXISTS (SELECT * FROM emp WHERE emp.deptid=dept.deptid AND emp.salary > 100); SELECT ename_first, ename_last, CASE deptid WHEN 10 THEN 'Accounting' WHEN 20 THEN 'Sales' ELSE 'None' END FROM emp; SELECT empid, ename_last, salary, comm FROM emp a WHERE salary*.10 > (SELECT AVG(comm) FROM emp z WHERE a.deptid=z.deptid); WITH avg_dept_sales AS ( SELECT a.deptid, avg(b.sales_amt) avg_sales FROM emp a, dept_sales b WHERE a.deptid=b.deptid GROUP BY a.deptid), emp_salaries AS (SELECT empid, AVG(salary) avg_salary FROM emp GROUP BY empid) SELECT * FROM emp_salaries b WHERE avg_salary*.05 > (SELECT avg_sales FROM avg_dept_sales); SELECT /*+ INDEX (a, emp_last_name_ix) */ empid FROM emp a WHERE ename_last='Freeman'; SELECT empid, TO_CHAR(retire_date, 'MM/DD/YYYY') FROM emp WHERE retire_date IS NOT NULL ORDER BY retire_date; SELECT empid, COUNT(*) FROM emp GROUP BY empid HAVING COUNT(*) > 1; SELECT empid, salary FROM emp AS OF TIMESTAMP(SYSTIMESTAMP - INTERVAL '1' DAY) WHERE empid=20; SELECT empid, salary FROM emp VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '1' DAY AND SYSTIMESTAMP - INTERVAL '1' HOUR WHERE empid=20; set constraints
SET CONSTRAINTS ALL IMMEDIATE; SET CONSTRAINTS ALL DEFERRED; SET CONSTRAINT fk_my_tab DEFERRED; set transaction
SET TRANSACTION USE ROLLBACK SEGMENT rbs01; SET TRANSACTION READ ONLY; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; update
UPDATE emp SET salary=100 WHERE empid=100; UPDATE emp SET salary=NULL, retire_date=SYSDATE WHERE empid=100; UPDATE emp SET salary=salary*1.10 WHERE deptid IN (SELECT deptid FROM dept WHERE dept_name = 'Sales'); UPDATE emp a SET (salary, comm)= (SELECT salary*1.10, comm*1.10 FROM emp b WHERE a.empid=b.empid); UPDATE store_sales PARTITION (store_sales_jan_2004) sa SET sa.sales_amt=1.10 WHERE store_id=100;