Saturday, March 30, 2013

TCP Chimney Offloads and SQL Server Implementation

Many of us occasionally see the following types of sporadic network-related messages even though everything appears to be running as usual. After an application or user receives one of these messages, the system misbehaves for a while and then returns to normal after each event.

Some of these messages are logged in the SQL Server Error Log and the Windows Event logs, some pop up inside the application, and others can be detected by querying SQL Server Extended Events.

Some observed Error Messages:
·        ERROR [08S01] [Microsoft][SQL Native Client]Communication link failure
·        System.Data.SqlClient.SqlException: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
·        Network error 0x40 occurred while sending data to the client on process ID XX batch ID 0. A common cause for this error is if the client disconnected without reading the entire response from the server. This connection will be terminated.
·        A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
·        A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: XXX, output error: XXX).
·        Network error code 0x2746 occurred while establishing a connection; the connection has been closed. 


Possible mysteries behind network latency:
There are a number of reasons why this happens. The following are some of the areas where network latency and overhead can occur, either sporadically or continuously.

A.      Implicit Transactions: Implicit transaction mode generates a continuous chain of transactions, each of which stays open until it is explicitly committed or rolled back. This significantly degrades performance and reduces concurrency in a high-throughput OLTP system.
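As a minimal illustration of the behavior described above (the table and column names here are hypothetical, for demonstration only):

```sql
-- With implicit transactions on, each DML statement silently opens a
-- transaction that stays open until an explicit COMMIT or ROLLBACK.
SET IMPLICIT_TRANSACTIONS ON;

UPDATE dbo.tblOrders            -- hypothetical table
SET    sStatus = 'Shipped'
WHERE  xOrderID = 1001;

-- The transaction (with its locks and log records) is still open here;
-- a missing COMMIT holds locks and delays the log-write chain.
COMMIT TRANSACTION;

SET IMPLICIT_TRANSACTIONS OFF;
```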

B.      Network-bandwidth-bound applications: Database Mirroring, AlwaysOn Availability Groups, and Transactional Replication are usually network-bound processes. Continuously reading and writing large amounts of data may increase latency and degrade network performance.

C.      TCP Chimney Offload: “TCP Chimney Offload transfers Transmission Control Protocol (TCP) traffic processing, such as packet segmentation and reassembly processing tasks, from a computer’s CPU to a network adapter that supports TCP Chimney Offload. Moving TCP/IP processing from the CPU to the network adapter can free the CPU to perform more application-level functions. TCP Chimney Offload can offload the processing for both TCP/IPv4 and TCP/IPv6 connections if supported by the network adapter”. Thus, if this setting is incorrect at both the server OS and NIC level, performance problems are almost guaranteed.

D.     NIC Teaming: "NIC teaming makes two or more physical NICs appear as a single virtual one to the application, which isolates the application from the failure of an individual NIC. There are two types of teaming:

1.    Switch independent (the switch does not know the NICs are teamed) and
2.    Switch dependent (the network switch participates in the teaming).

NIC teaming can also result in bandwidth aggregation such that, for example, four 1GB/Sec NICs can provide an aggregate 4GB/Sec throughput. In the event of problems, the teaming has to be disabled in order to verify it is not causing the problem."

E.      Jumbo Frames: Jumbo frames are only available on gigabit networks, and all devices in the network path must support them (switches, routers, NICs, and so on). If all of the networking hardware does not support jumbo frames end to end, they should not be used and should be disabled.

"Jumbo frames" extends Ethernet to 9000 bytes and is large enough to carry an 8 KB application datagram (e.g. NFS) plus packet header overhead. If Jumbo frames is supported on a Gigabit Network then “Network Packet Size” configuration in SQL Server can be increased to 8192 in a high throughput environment.

To check whether jumbo frames are supported by the target machine, execute the following command:

ping <IP or server name> -f -l 9000

Figure: Jumbo frames support check


F.       Maximum Transmission Unit (MTU): The Maximum Transmission Unit (MTU) is an advanced setting that determines the largest packet size permitted on a network connection. Generally, if the MTU is too large for the connection, the computer will experience packet loss or dropped connections.

To determine the correct MTU size for a network, we have to run a specific ping test against the destination (target) server. In a Windows environment, use the following command:

Syntax: ping [server or IP address] -f -l xxxx
Example: ping 192.168.0.50 -f -l 1472

Start the test at 1472 and adjust the value (decreasing or increasing by 10) until you find the largest packet size that does not produce the message "Packet needs to be fragmented but DF set."

Implicit Transactions and TCP Chimney Offload:
A DML-centric application that depends mostly on SQL Server's implicit transactions generates a lot of log I/O, which further delays the log-writing process and introduces network latency. Turning off TCP Chimney Offload is one of the "first things to do" that can be considered in order to eliminate any possibilities in this area.

“TCP Chimney Offload” functionality does not work with virtualized Windows OSes (VMware and Hyper-V), so in those environments it can be disabled permanently.

This is an area that many sysadmins overlook, and it needs further investigation if the received error message looks similar to the ones mentioned earlier in this article. Although it is not an official best practice, many network experts still recommend disabling the “TCP Chimney Offload” setting at both the Windows OS and NIC level to reduce latency in a SQL Server implementation, whether virtual or physical.

Windows OS and TCP Chimney offloads:
In Windows 2003 - TCP Chimney Offload is enabled by default.
In Windows 2008 - TCP Chimney Offload is disabled by default.
In Windows 2008 R2 - TCP Chimney Offload is automatic by default.
In Windows 2012 - TCP Chimney Offload is disabled by default.

How to check TCP Chimney Offloads:
·         In Windows 2003 use the following command:
netsh interface ip show offload
·         In Windows 2008 and above use the following command:
netsh int tcp show global
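To disable TCP Chimney Offload, the following commands can be used (a sketch; run from an elevated command prompt and verify the exact syntax against the documentation for your Windows version):

```shell
:: Windows 2003 (SP2) - disable TCP Chimney Offload
netsh int ip set chimney DISABLED

:: Windows 2008 and above - disable TCP Chimney Offload
netsh int tcp set global chimney=disabled

:: Verify the current setting
netsh int tcp show global
```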

Figure: Screenshot from Windows 2008 R2



Figure: Screenshot from Windows 2012


Extended Events (XE) Query:
/***************************************************************
** Errors reported along with the error counts
****************************************************************/

SET NOCOUNT ON

-- Store the XML data in a temporary table
SELECT  CAST(xet.target_data AS XML) AS XMLDATA
INTO    #xeTmpTbl
FROM    sys.dm_xe_session_targets xet
        JOIN sys.dm_xe_sessions xe ON xe.address = xet.event_session_address
WHERE   xe.name = 'system_health'

-- Get statistical information about all the errors reported
;WITH    myXEinfo ( EventXML )
          AS ( SELECT   C.query('.') AS EventXML
               FROM     #xeTmpTbl a
                        CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') AS T ( C )
             ),
        myXEErrorInfo ( EventTime, ErrorNum )
          AS ( SELECT   EventXML.value('(/event/@timestamp)[1]', 'datetime') AS EventTime ,
                        EventXML.value('(/event/data/value)[1]', 'int') AS ErrorNum
               FROM     myXEinfo
               WHERE    EventXML.value('(/event/@name)[1]', 'varchar(255)') = 'error_reported'
             )
    SELECT  ErrorNum ,
            MAX(EventTime) AS LastRecordedEvent ,
            MIN(EventTime) AS FirstRecordedEvent ,
            COUNT(*) AS Occurrences ,
            b.[text] AS ErrDescription
    FROM    myXEErrorInfo a
            INNER JOIN sys.messages b ON a.ErrorNum = b.message_id
    WHERE   b.language_id = SERVERPROPERTY('LCID')
    GROUP BY a.ErrorNum ,
            b.[text]

--  Get information about each of the errors reported
;WITH    myXEinfo ( EventXML )
          AS ( SELECT   C.query('.') AS EventXML
               FROM     #xeTmpTbl a
                        CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') AS T ( C )
               WHERE    C.query('.').value('(/event/@name)[1]', 'varchar(255)') = 'error_reported'
             )
    SELECT  EventXML.value('(/event/@timestamp)[1]', 'datetime') AS EventTime ,
            EventXML.value('(/event/data/value)[1]', 'int') AS ErrNum ,
            EventXML.value('(/event/data/value)[2]', 'int') AS ErrSeverity ,
            EventXML.value('(/event/data/value)[3]', 'int') AS ErrState ,
            EventXML.value('(/event/data/value)[5]', 'varchar(max)') AS ErrText
            --EventXML.value('(/event/action/value)[2]', 'varchar(10)') AS Session_ID
    FROM    myXEinfo
    ORDER BY EventTime DESC

--  Drop the temporary table
DROP TABLE #xeTmpTbl


/**************************************************************************
** Extract Ring Buffer Information for SQL Server 2008 instances and above
**************************************************************************/
SELECT  CONVERT(VARCHAR(30), GETDATE(), 121) AS run_time ,
        DATEADD(ms, ( a.[record_time] - sys.ms_ticks ), GETDATE()) AS [notification_time] ,
        a.* ,
        sys.ms_ticks AS [current_time]
FROM    ( SELECT    x.value('(//Record/Error/ErrorCode)[1]', 'varchar(30)') AS [error_code] ,
                    x.value('(//Record/Error/CallingAPIName)[1]', 'varchar(255)') AS [calling_API_name] ,
                    x.value('(//Record/Error/APIName)[1]', 'varchar(255)') AS [API_name] ,
                    x.value('(//Record/Error/SPID)[1]', 'int') AS [SPID] ,
                    x.value('(//Record/@id)[1]', 'bigint') AS [record_id] ,
                    x.value('(//Record/@type)[1]', 'varchar(30)') AS [type] ,
                    x.value('(//Record/@time)[1]', 'bigint') AS [record_time]
          FROM      ( SELECT    CAST(record AS XML)
                      FROM      sys.dm_os_ring_buffers
                      WHERE     ring_buffer_type = 'RING_BUFFER_SECURITY_ERROR'
                    ) AS R ( x )
        ) a
        CROSS JOIN sys.dm_os_sys_info sys
ORDER BY a.[record_time] ASC

Reference:
·         Using TCP Chimney Offload
·         TCP Offloading again?!
·         Network Changes Affect Windows Server 2012
·         Information about the TCP Chimney Offload, Receive Side Scaling, and Network Direct Memory Access features in Windows Server 2008
·         Error message when an application connects to SQL Server on a server that is running Windows Server 2003: "General Network error," "Communication link failure," or "A transport-level error"
·         How to Disable TCP Chimney, TCPIP Offload Engine (TOE) or TCP Segmentation Offload (TSO).
·         TCP Chimney Offload – Possible Performance and Concurrency Impacts to SQL Server Workloads


Friday, March 29, 2013

Covering Index – An “Adam and Eve” Style

We know that SQL Server has a limitation on index key size, which is 900 bytes. This means that the total length of an index key must not exceed 900 bytes. Up to SQL Server 2000, we used to create a covering index by concatenating a number of columns to support a query execution. Starting with SQL Server 2005, Microsoft added a new feature known as the “INCLUDE” column index.

In this article we will look at covering indexes and compare the performance of a “multiple-column index” (also known as a composite, or concatenated, index) against a “single-column index” with the “INCLUDE” option.

Covering index:
Put simply, a covering index is one that can satisfy all of the columns requested by a query without a further lookup into the base table (clustered index or heap). So we can say a covering index is an index that satisfies the JOIN and WHERE clauses as well as all the columns in the SELECT list.

Example of covering index:
Say we have the following SELECT query from a table “tblLarge”,

SELECT  xID ,              -- Primary and Clustered key
        sName1 ,           -- Key column for search
        sName2 ,           -- non-key column
        sName3 ,           -- non-key column
        sIdentifier        -- key column for search
FROM tblLarge

Based on the above select criteria, we may have the following covering indexes:

(a)    Multiple-column index without include option:
sIdentifier +  sName1 + sName2 + sName3
(b)   Multiple-column index with include option:
sIdentifier +  sName1  INCLUDE ( sName2 , sName3)

Please keep in mind that the clustered key does not need to be part of the index.

Index Advantages:
The biggest advantage of a covering index is that it completely offloads reads (and the associated locking overhead) from the table to the index. This reduces

1.       I/O operations
2.       Row lookups
3.       CPU usage

The overall benefit is a dramatic improvement in query response time.

Some Terms:
Let’s define what we mean by key and non-key columns. A key column is one that is used in a JOIN or WHERE clause. Columns that are not used for searching or joining are known as non-key columns.

A fact:
It is recommended that an index key be as narrow as possible; this means that the Query Optimizer of SQL Server will always try to utilize the smallest suitable index. The SQL Server Query Optimizer is very picky because it is a cost-based optimizer.

While creating an execution plan, the Query Optimizer mainly evaluates two things: selectivity and data retrieval cost. Based on the estimated cost, an index may not be selective enough to be used, or a sub-optimal index may be chosen, in which case query performance may be hindered rather than improved. In that case an index hint can be used to force SQL Server to use a particular index.
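An index hint looks like the following (a sketch against the sample table used in this article; the index name is one of the indexes we create in the test below):

```sql
-- Force the optimizer to use a specific index via a table hint.
SELECT  xID, sName1, sName2, sIdentifier
FROM    tblLarge WITH ( INDEX ( IX_sIdentifier_sName1 ) )
WHERE   sIdentifier = 'AB'
        AND sName1 LIKE 'ABC%';
```

Index hints should be a last resort; they override the cost-based choices the optimizer would otherwise make as data changes.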

Our Testing Approach:
1.       We will use a database called “TestDB”.
2.       A table “tblLarge” with 2,000,000 records.
3.       A clustered index on the Primary key “xID”.
4.       We will create two indexes.
5.       Buffer cache will be cleared before query execution.
6.       We will also collect STATISTICS IO output.

Sample Database, Table and Rows creation:
1.       Create a database “TestDB”

CREATE DATABASE TestDB
GO
USE TestDB
GO
2.        Create a “tblLarge” with default fill factor (which is 0 or 100)

SET NOCOUNT ON
IF OBJECT_ID('tblLarge') IS NOT NULL
    DROP TABLE tblLarge
GO

CREATE TABLE [tblLarge](
       [xID] [int] IDENTITY(1,1) NOT NULL,
       [sName1] [varchar](100) NULL,
       [sName2] [varchar](200) NULL,
       [sName3] [varchar](300) NULL,
       [sIdentifier] [char](10) NULL,
       [dDOB] [datetime] NULL,
       [nWage] [numeric](20, 2) NULL,
       [sLicense] [varchar](25) NULL,
 CONSTRAINT [PK_tblLarge] PRIMARY KEY CLUSTERED ([xID] ASC) )

3.       Populate 2,000,000 rows.

SET NOCOUNT ON
INSERT  INTO tblLarge
        ( sName1 ,
          sName2 ,
          sName3 ,
          sIdentifier ,
          dDOB ,
          nWage ,
          sLicense
        )
VALUES  ( LEFT(CAST(NEWID() AS VARCHAR(36)), RAND()*50) ,     -- sName1
          LEFT(CAST(NEWID() AS VARCHAR(36)), RAND()*60) ,     -- sName2
          LEFT(CAST(NEWID() AS VARCHAR(36)), RAND()*70) ,     -- sName3
          LEFT(CAST(NEWID() AS VARCHAR(36)), 2) ,             -- sIdentifier      
          DATEADD(dd, -RAND() * 20000, GETDATE()) ,           -- dDOB
          ( RAND() * 1000 ) ,                                 -- nWage
          SUBSTRING(CAST(NEWID() AS VARCHAR(36)), 6, 7)       -- sLicense
         )

GO 2000000

Sample SELECT query:
We will be using the following SELECT query to perform our test.

DBCC DROPCLEANBUFFERS
SET STATISTICS IO ON

SELECT  xID ,              -- Primary and Clustered key
        sName1 ,           -- Key column for search
        sName2 ,           -- non-key column
        sIdentifier        -- key column for search
FROM    tblLarge
WHERE   sIdentifier = 'AB'
        AND sName1 LIKE 'ABC%'

Any search string can be used to run the above query.

Covering Index with “INCLUDE” option:
In our first test, let’s create two covering indexes with the “INCLUDE” option as follows.

Ind#1
CREATE INDEX [IX_sIdentifier] ON [tblLarge] ([sIdentifier])
INCLUDE ([sName1], [sName2])

Once Ind#1 is created, we run the query above, and it produces the following execution plan.

Let’s create a variation of the above index, moving the “sName1” column into the index key as follows.

Ind#2
CREATE INDEX [IX_sIdentifier_sName1] ON [tblLarge] ([sIdentifier], [sName1])
INCLUDE ([sName2])

When we run the query, we get the following Execution Plan.


Covering Index without “INCLUDE” option:
In our second test, we will create a multiple-column index without the “INCLUDE” option as follows.

Ind#3
CREATE INDEX [IX_sIdent_sName1_sName2] ON [tblLarge] ([sIdentifier], [sName1], [sName2])

Now if we run our query, it results in the following execution plan.

STATISTICS IO Comparison:
Index    Scan Count    Read-Ahead Reads    Physical Reads    Logical Reads
Ind#1    1             72                  2                 75
Ind#2    1             0                   0                 3
Ind#3    1             0                   0                 4


Index Properties of each Index:

Figure: Index properties of Ind#1, Ind#2 and Ind#3
Review of Query execution:

1.       When we first created Ind#1, the query utilized this index even though the index did not completely cover the query. When the query was first executed there were no statistics for the column “sName1”, so SQL Server created the missing statistics.
2.       In the case of Ind#2, all of the predicates are found in the index key, and “sName2” is carried with the “INCLUDE” option, so this index totally covers the query.
3.       We created Ind#3 last, without deleting the previous two indexes. Although the second index is far better than the third, SQL Server started using this multiple-column index even though it is a little more expensive.

As an explanation, when we create a multiple-column index, a multi-column statistics object is also created along with the index, which provides better cardinality estimates for evaluating the query. However, if we examine the properties of all the indexes, we notice that the depth of Ind#3 is 4, which makes it more complex by adding one more level to the B-tree structure.
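Index depth can be inspected directly with sys.dm_db_index_physical_stats (a sketch against the sample database created above; running in DETAILED mode reads the whole index, so avoid it on busy production systems):

```sql
-- Report B-tree depth and page counts for each index on tblLarge.
SELECT  i.name AS index_name ,
        ps.index_depth ,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID('TestDB'), OBJECT_ID('tblLarge'), NULL, NULL, 'DETAILED') ps
        JOIN sys.indexes i ON i.object_id = ps.object_id
                           AND i.index_id = ps.index_id
WHERE   ps.index_level = 0;   -- one row per index (leaf level only)
```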

Summary:
There is a saying that goes, “Query writing is an art; indexing is a science.” Just because SQL Server uses an index does not mean the index is efficient. Reviewing the index depth is one of the critical factors for a performance boost when creating a covering index for a query.

While creating indexes, we should not forget that there is a cost and penalty to maintaining indexes where DML operations happen very frequently, which often reduces the benefit of having the indexes. Also, we should not have duplicate or overlapping indexes unless there is a very specific reason.

Read More:
SQL Server Optimization

Increase fan-out to reduce index depth

Clustered and Nonclustered Indexes Described

Introduction to Indexes