Optimize Your Oracle Network Configuration
Oracle databases are often shared across geographical areas, so it's
imperative that the Oracle professional understand how database
performance is affected by network communications. The Transparent
Network Substrate (TNS), provided by Oracle, allows distributed
communications between databases.
The performance of distributed transactions can be improved using some
important settings that I'll illustrate below. These include parameters
within the sqlnet.ora, tnsnames.ora, and protocol.ora files. These
parameters can be used to change the configuration and size of TCP
packets, and adjusting these parameters can have a profound impact on
the underlying network transport layer to improve the throughput of all
Oracle transactions.
As I noted, Oracle Net does not allow the Oracle professional to tune the underlying network layer, and the majority of network traffic cannot be tuned from within the Oracle environment. Remember, Oracle Net is a layer in the OSI model that resides above the network-specific protocol stack.
The frequency and size of network packets, however, can be controlled
by the Oracle DBA. Oracle has a wealth of tools to change packet
frequency and size. A simple example involves changing the refresh
interval for a snapshot to ship larger amounts at less frequent
intervals. The frequency and size of packet shipping across the network
can be affected by using settings contained in the following parameter
files:
- The protocol.ora file—The tcp.nodelay parameter
- The sqlnet.ora server file—The automatic_ipc parameter
- The tnsnames.ora and listener.ora files—The SDU and TDU parameters
The tcp.nodelay parameter in the protocol.ora file
Oracle Net, by default, waits until the buffer is filled before
transmitting data. Therefore, requests aren't always sent immediately
to their destinations. This is most common when large amounts of data
are streamed from one end to another, and Oracle Net does not transmit
the packet until the buffer is full. Adding a protocol.ora file and specifying tcp.nodelay to stop buffer-flushing delays can remedy this problem.
A protocol.ora file can be used to specify no data buffering for all TCP/IP implementations. The parameter can be used on both the client and server. The protocol.ora statement is:
tcp.nodelay = yes
Specifying this parameter causes TCP buffering to be skipped so that
every request is sent immediately. Keep in mind, however, that network
traffic can increase due to smaller and more frequent packet
transmission, causing slowdowns in the network.
The tcp.nodelay parameter should be used only if TCP timeouts are
encountered. Setting tcp.nodelay can cause a huge improvement in
performance when there is high-volume traffic between database servers.
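As a minimal sketch, the whole protocol.ora file can consist of nothing more than the tcp.nodelay line; it normally sits alongside sqlnet.ora and tnsnames.ora (typically $ORACLE_HOME/network/admin or the directory pointed to by TNS_ADMIN, depending on your release):
# protocol.ora -- a minimal sketch; disables TCP buffering delays on this node
tcp.nodelay = yes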
The automatic_ipc parameter of the sqlnet.ora file
The automatic_ipc parameter bypasses the network layer, thereby
speeding local connections to the database. When automatic_ipc=on,
Oracle Net checks to see if a local database is defined by the same
alias. If so, network layers are bypassed as the connection is
translated directly to the local IPC connections. This is useful on
database servers, but it's absolutely useless for Oracle Net clients.
The automatic_ipc parameter should be used only on the database server
when an Oracle Net connection must be made to the local database. If
local connections are not needed or required, set this parameter to off; all Oracle Net clients see improved performance with this setting.
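As an illustrative sketch, the setting is a single sqlnet.ora line, with opposite values on the two sides (the comments are mine):
# sqlnet.ora on the database server -- translate matching local aliases to IPC
automatic_ipc = on
# sqlnet.ora on a pure Oracle Net client -- skip the local-alias check
automatic_ipc = off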
The SDU and TDU parameters in the tnsnames.ora file
The session data unit (SDU) and transport data unit (TDU) parameters
are located in the tnsnames.ora and listener.ora files.
The Session Data Unit (SDU) is what SQL*Net buffers data into before
sending it across the network. It sends the data stored in this buffer
when the buffer is full or when an application tries to read the data.
When large amounts of data are being retrieved, and when packet size is
consistently the same, it may speed retrieval to adjust the default SDU
size. Optimal SDU size depends on the normal packet size. Use a sniffer
to find out the frame size, or set tracing on to its highest level to
check the number of packets sent and received, and to see if they are
fragmented. Tune your system to limit the amount of fragmentation.
Doing so requires experimenting with various sized buffers, as there is
no formula. The default SDU size is 2048. It is adjustable from 512 to
32K.
SDU size should be set on both the client and server sides, and should
generally be the same; if different SDU sizes are requested on the
client and the server, the SDU size is negotiated down to the lower of
the two.
Ideally, SDU should not surpass the size of the maximum transmission
unit (MTU). MTU is a fixed value that depends on the actual network
implementation used. Oracle recommends that SDU be set equal to MTU.
The SDU size should be set as a multiple of the normal transport frame
size. Since the normal Ethernet frame size is 1024, the most efficient
SDU size over an Ethernet protocol should be a multiple of 1024, but
not more than four times the amount of 1024.
The TDU is the default packet size used within Oracle Net to group data
together. The TDU parameter should ideally be a multiple of the SDU
parameter. The default value for both SDU and TDU is 2,048, and the
maximum value is 32,767 bytes.
The following guidelines apply to SDU and TDU:
- The SDU should never be set greater than TDU because you'll waste
network resources by shipping wasted space in each packet.
- If your users are connecting via modem lines, you may want to set
SDU and TDU to smaller values because of the frequent resends that
occur over modem lines.
- On fast network connections (T1 or T3 lines), you should set SDU
and TDU equal to the MTU for your network. On standard Ethernet
networks, the default MTU size is set to 1,514 bytes. On standard token
ring networks, the default MTU size is 4,202.
- If the Multi-Threaded Server (MTS) is used, you must also set the mts_dispatchers with the proper SDU and TDU configuration.
The SDU and TDU settings are a direct function of the connection speed
between the hosts. For fast T1 lines, set SDU=TDU=MTU. For slower modem
lines, experiment with smaller values of SDU and TDU.
The following is an example of these parameters on a token ring network with an MTU of 4,202 bytes. (I have included (SERVER=DEDICATED) in the tnsnames.ora entry to try to ensure that your attempts at using this feature work.)
tnsnames.ora
DATABASE =
  (DESCRIPTION =
    (SDU = 4202)
    (TDU = 4202)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = CONCORD)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = DIE)
      (SERVER = DEDICATED)
    )
  )
listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 4202)
      (TDU = 4202)
      (SID_NAME = ORCL)
      (GLOBAL_DBNAME = DATABASE)
    )
  )
Modify session data unit size when:
- The data coming back from the server is fragmented into separate
packets
- You are on a wide area network (WAN) that has long delays
- Your packet size is consistently the same
- Large amounts of data are returned
Do not modify session data unit size when:
- Your application can be tuned to account for the delays
- You have a higher speed network where the effect of the data
transmission is negligible
- Your requests return small amounts of data from the server
Since Oracle8i, the database automatically registers its instances with the listener unless you implement one of the following actions:
- Disable automatic service registration. To do this, you must set the init.ora parameter local_listener to use a TCP port other than the one defined in your listener.ora file.
- Implement the MTS and define the mts_dispatchers in your init.ora
file, like this:
MTS_DISPATCHERS="(DESCRIPTION=(SDU=8192)(TDU=8192)\
(ADDRESS=(PARTIAL=TRUE)(PROTOCOL=TCP)(HOST=supsund3)))\
(DISPATCHERS=1)"
- Use service_name=global_dbname in the CONNECT_DATA section of the tnsnames.ora file, where global_dbname is configured in listener.ora.
Note that this setting will disable the use of Transparent Application Failover (TAF), which is not supported when global_dbname is used.
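For illustration only, a tnsnames.ora entry using this third approach might look like the following sketch; it reuses the DATABASE global database name, CONCORD host, and SDU/TDU values from the earlier example:
DATABASE =
  (DESCRIPTION =
    (SDU = 4202)
    (TDU = 4202)
    (ADDRESS = (PROTOCOL = TCP)(HOST = CONCORD)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = DATABASE)
    )
  )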
Use Listener Load Balancing
Listener load balancing is useful when there are many connection
requests made to a server. By having more than one listener to receive
those requests, the burden on a single listener is reduced and
connection time is faster. Listener load balancing is also useful in a
multiple server environment, in which there can be multiple listeners
that listen for replicated servers.
For more detailed information about listener load balancing, see "Listener
Load Balancing".
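As a hedged sketch of the client side of such a setup, a tnsnames.ora entry can spread connection requests across several listener addresses with connect-time load balancing; the SALES alias and the host names below are hypothetical:
SALES =
  (DESCRIPTION =
    (LOAD_BALANCE = on)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))
    )
    (CONNECT_DATA = (SID = ORCL))
  )
With LOAD_BALANCE set to on, Oracle Net picks an address from the list at random, so no single listener takes every connection request.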
Detecting the Problem
The query below shows a method of assessing network performance from the perspective of Oracle, where the average waiting time for the client or the server is recorded based on the time waited for a message from the server or the client, respectively.
select event, average_wait
from v$session_event
where event in ('SQL*Net message from client', 'SQL*Net message to client')
and average_wait > 0;
The query below shows the average waiting time for client and server
processes and the bytes actually transferred.
select v$sesstat.sid,
       substr(v$session_event.event,1,24) "Event",
       v$session_event.average_wait "Avg Wait",
       v$sesstat.value "Bytes",
       substr(v$session.program,1,20) "Program"
from   v$session_event, v$sesstat, v$session
where  v$session_event.event in ('SQL*Net message from client', 'SQL*Net message to client')
and    v$session_event.sid = v$sesstat.sid
and    v$sesstat.value > 0
and    v$session.sid = v$sesstat.sid
and    v$session_event.average_wait > 0;
       SID Event                      Avg Wait      Bytes Program
---------- ------------------------ ---------- ---------- --------------
        16 SQL*Net message from cli       1563         19 SQLNav4.exe
        16 SQL*Net message from cli       1563        150 SQLNav4.exe
        16 SQL*Net message from cli       1563          4 SQLNav4.exe
        16 SQL*Net message from cli       1563      11122 SQLNav4.exe
        16 SQL*Net message from cli       1563     274712 SQLNav4.exe
        16 SQL*Net message from cli       1563     143896 SQLNav4.exe
        16 SQL*Net message from cli       1563 1114450960 SQLNav4.exe
        16 SQL*Net message from cli       1563 1114450960 SQLNav4.exe
        16 SQL*Net message from cli       1563         76 SQLNav4.exe
        16 SQL*Net message from cli       1563         76 SQLNav4.exe
        13 SQL*Net message from cli       5510         31 service.exe
        13 SQL*Net message from cli       5510         77 service.exe
        13 SQL*Net message from cli       5510      22912 service.exe
        13 SQL*Net message from cli       5510     145648 service.exe
        13 SQL*Net message from cli       5510        111 service.exe
        13 SQL*Net message from cli       5510         77 service.exe
Tuning network usage by Oracle is all a matter of using the network as little as possible. Probably the most important factor is grouping SQL statements together into transactions. For instance, if ten rows are to be fetched, it is more efficient to send a single request for ten rows at once rather than ten requests for one row each. Since the network is much slower than the database server or a client machine, this should be obvious.
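For example, in SQL*Plus the array fetch size controls how many rows come back per network round trip; raising it from the default of 15 reduces the number of messages exchanged (a sketch; the customer table is hypothetical):
REM Fetch 100 rows per network round trip instead of the SQL*Plus default of 15
SET ARRAYSIZE 100
SELECT * FROM customer;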
Also, you can modify the sqlnet.ora file to set up high-level tracing (temporarily) so that you can see what pipeline sizes are currently in use:
trace_level_client=16
trace_directory_client=/u05/app/oracle/admin/network
trace_file_client=client.trc
trace_unique_client = true
trace_level_server=16
trace_directory_server=/u05/app/oracle/admin/network
trace_file_server=server.trc
This will result in vast quantities of data being churned out into trace files as you create and use a simple SQL*Plus session. (You might notice the OFA-like structure in the name of the directory I have used for the SQL*Net output; the default is under $ORACLE_HOME, but I prefer to put it closer to the udump/bdump directories used for other trace files.)
If you scan through a couple of trace files, you will find lines like the following, which come from a server trace with an explicit IPC connection and the multi-threaded server not running; the actual content of the packet will vary significantly:
nsprecv: 187 bytes from transport
nsprecv: tlen=187, plen=187, type=1
nsprecv:packet dump
nsprecv:00 BB 00 00 01 00 00 00 |........|
nsprecv:01 35 01 2C 08 01 7F 80 |.5.,....|
nsprecv:7F FF 73 08 00 00 00 01 |..s.....|
nsprecv:00 89 00 32 00 00 08 00 |...2....|
nsprecv:09 09 00 00 00 00 00 00 |........|
nsprecv:00 00 00 00 00 00 00 00 |........|
nsprecv:00 00 28 44 45 53 43 52 |..(DESCR|
nsprecv:49 50 54 49 4F 4E 3D 28 |IPTION=(|
nsprecv:73 64 75 3D 33 32 36 34 |sdu=3264|
nsprecv:30 29 28 43 4F 4E 4E 45 |0)(CONNE|
nsprecv:43 54 5F 44 41 54 41 3D |CT_DATA=|
nsprecv:28 53 49 44 3D 44 37 33 |(SID=D73|
nsprecv:34 29 28 43 49 44 3D 28 |4)(CID=(|
nsprecv:50 52 4F 47 52 41 4D 3D |PROGRAM=|
nsprecv:29 28 48 4F 53 54 3D 68 |)(HOST=h|
nsprecv:70 37 31 32 29 28 55 53 |p712)(US|
nsprecv:45 52 3D 6A 70 6C 29 29 |ER=jpl))|
nsprecv:29 28 41 44 44 52 45 53 |)(ADDRES|
nsprecv:53 5F 4C 49 53 54 3D 28 |S_LIST=(|
nsprecv:41 44 44 52 45 53 53 3D |ADDRESS=|
nsprecv:28 50 52 4F 54 4F 43 4F |(PROTOCO|
nsprecv:4C 3D 49 50 43 29 28 4B |L=IPC)(K|
nsprecv:45 59 3D 44 37 33 34 29 |EY=D734)|
nsprecv:29 29 29 00 00 00 00 00 |))).....|
nsprecv: normal exit
nscon: got NSPTCN packet
nsconneg: entry
nsconneg: vsn=309, lov=300, opt=0x801, sdu=32640, tdu=32767, ntc=0x7308
nsconneg: vsn=309, gbl=0x801, sdu=32640, tdu=32767
nsconneg: normal exit
This is part of the negotiation between the client side and the server side to determine the Session Data Unit to be used. In the first section you can see the connection data being passed in by the client (note especially the chunk sdu=32640), and in the last section (my guess is that nsconneg is short for Network Substrate CONnection NEGotiation) we see the lines where the server reports what it can do and what the negotiation has arrived at.
In a default setup, the sdu figures will probably be 2048 rather than 32640. You may also find that the two nsconneg lines show two different values for the sdu; the client and server both volunteer information about what they can do, and the final result is the lower of the two offers.
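To tie this back to the configuration files: the sdu=32640 visible in the packet dump would typically come from the SDU in the client's connect descriptor. A sketch of a tnsnames.ora entry that would produce this IPC connection (the SID, KEY, and sizes are taken from the trace itself) might be:
D734 =
  (DESCRIPTION =
    (SDU = 32640)
    (TDU = 32767)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = D734))
    )
    (CONNECT_DATA = (SID = D734))
  )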
Confirmation:
Apart from checking the sqlnet trace files to see that the sdu has been set as expected, an alternative for identifying sessions which are enhancing the size of their pipelines is the Unix ps command. With a little luck, and depending on the implementation of ps on your platform, you may see enough of the shadow process's command line to see the SDU setting:
ps -ef | grep oracleSID
oracle 2324 1 0 19:49:40 ? 0:01 oracleD734 (DESCRIPTION=(LOCAL=NO)(SDU=32640))
oracle 2327 2318 12 19:50:09 pts/2 0:00 grep oracleD