Monday 30 June 2014

SQL Server: MERGE Statement

The MERGE statement allows a DBA/Developer to combine inserts, updates, and deletes into a single statement that performs multiple DML actions. Using a MERGE statement instead of issuing multiple DML statements can improve performance. In a MERGE statement, the DBA/Developer specifies a source and a target and includes a join, then uses the MATCHED clauses to specify the actions to be performed. The basic syntax of the MERGE statement is as follows:

MERGE [INTO] target_table
USING source_table
ON join_condition
[WHEN MATCHED THEN matched_action]
[WHEN NOT MATCHED [BY TARGET] THEN notmatched_action]
[WHEN NOT MATCHED BY SOURCE THEN notmatchedsource_action];

The WHEN NOT MATCHED THEN clause specifies the action to take if records from the source table are not in the target table. The WHEN MATCHED THEN clause specifies the action to take if records from the source table are in the target table. In this scenario, the DBA/Developer specified an UPDATE statement in the WHEN MATCHED THEN clause. Therefore, if a row in the ProductStaging table has the same ProductID as a row in the ProductDetails table, the row will be updated with new values for the ModifiedDate, Color, Style, and ProductLine columns. The DBA/Developer also included an INSERT statement in the WHEN NOT MATCHED THEN clause. If a row in the ProductStaging table does not have a ProductID that matches a row in the ProductDetails table, the row will be inserted with the current date as the ModifiedDate column value and a ReorderPoint value of 0.

The DBA/Developer also included an OUTPUT clause, which retrieves and displays information about the rows affected by the MERGE statement. The OUTPUT clause can display this information to the user, insert the data into another permanent or temporary table or table variable using an INTO clause, or pass the data to a nested DML statement for processing. Within the OUTPUT clause, the DBA/Developer specifies the column values to retrieve by using the column names with the INSERTED and DELETED prefixes. The DELETED prefix returns the column value before the DML operation, and the INSERTED prefix returns the column value after the DML operation, but before any triggers execute. The DBA/Developer can also use $action to return a string indicating which type of DML operation affected the row.
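As a sketch of the scenario described above, the complete MERGE statement might look as follows. The ProductStaging and ProductDetails table names and the ModifiedDate, Color, Style, ProductLine, and ReorderPoint columns come from the text; the remaining column list is assumed for illustration.

```sql
MERGE INTO ProductDetails AS target
USING ProductStaging AS source
ON target.ProductID = source.ProductID
WHEN MATCHED THEN
    UPDATE SET target.ModifiedDate = GETDATE(),
               target.Color = source.Color,
               target.Style = source.Style,
               target.ProductLine = source.ProductLine
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductID, Color, Style, ProductLine, ModifiedDate, ReorderPoint)
    VALUES (source.ProductID, source.Color, source.Style,
            source.ProductLine, GETDATE(), 0)
OUTPUT $action AS DMLAction,
       INSERTED.ProductID,
       DELETED.ModifiedDate AS OldModifiedDate;
```

For inserted rows, the DELETED-prefixed column returns NULL because no prior row version exists.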

http://msdn.microsoft.com/en-us/library/bb510625.aspx

 

Sunday 29 June 2014

SQL Server: FOR XML clause

The FOR XML clause specifies that the result of the SELECT statement should be returned in XML format. DBA/Developer can specify one of the following modes with the FOR XML clause:
RAW: A single <row> element is generated for each row in the rowset. Each non-null column value generates an attribute with the name identical to the column's name or the column's alias.
AUTO: Nested elements are returned using the columns specified in the SELECT list. Each non-null column value generates an attribute that is named according to the column name or column alias. The element nesting is based on the order in which the columns are specified in the SELECT list.
EXPLICIT: Each row in the rowset can be defined in detail by specifying attributes and elements.
PATH: Each row in the rowset can be defined in detail, but column names and column aliases are specified as XPath expressions.
In this scenario, the DBA/Developer wanted to retrieve an element for each row that contains an attribute for each referenced column, and to include a custom root element. When AUTO mode is used, the elements and attributes are generated based on their order in the SELECT list, and the attribute names are created from the column aliases. Specifying the ROOT option adds a root element with the given name. The following query could be used to generate the correctly formatted XML:
 
SELECT ProductID AS ID, Name, ProductNumber AS ProdNum
FROM ProductDetails
FOR XML AUTO, ROOT('ProdData');

The DBA/Developer should not use FOR XML RAW, ELEMENTS, ROOT('ProdData'). RAW mode generates a single <row> element for each row. The ELEMENTS option indicates that the columns in the SELECT list should be returned as subelements, rather than as attributes. The ROOT option adds a root element with the specified name. Using this clause in the query would produce output in the following format:
 
<ProdData>
  <row><ID>1</ID><Name>Adjustable Race</Name><ProdNum>AR-5381</ProdNum></row>
  <row><ID>2</ID><Name>Bearing Ball</Name><ProdNum>BA-8327</ProdNum></row>
  <row><ID>3</ID><Name>BB Ball Bearing</Name><ProdNum>BE-2349</ProdNum></row>
</ProdData>

The DBA/Developer should not use FOR XML AUTO, ELEMENTS, ROOT('ProdData'). Using this clause in the query would produce XML output in the following format:

<ProdData>
  <ProductDetails><ID>1</ID><Name>Adjustable Race</Name><ProdNum>AR-5381</ProdNum></ProductDetails>
  <ProductDetails><ID>2</ID><Name>Bearing Ball</Name><ProdNum>BA-8327</ProdNum></ProductDetails>
  <ProductDetails><ID>3</ID><Name>BB Ball Bearing</Name><ProdNum>BE-2349</ProdNum></ProductDetails>
</ProdData>

The DBA/Developer should not use FOR XML PATH(''), ROOT('ProdData'). PATH mode allows the attributes and elements that should be generated in the XML output to be defined in detail. The ROOT option generates a root element with the specified name. Specifying PATH('') causes all subsequent elements to be subelements of the root element. Using this clause in the query would produce output in the following format:

<ProdData>
  <ID>1</ID><Name>Adjustable Race</Name><ProdNum>AR-5381</ProdNum>
  <ID>4</ID><Name>Headset Ball Bearings</Name><ProdNum>BE-2908</ProdNum>
</ProdData>

Saturday 28 June 2014

SQL Server: Non-Clustered composite index

A SQL developer has created a table of contacts using the following statement:

CREATE TABLE Contact
(ContactID int PRIMARY KEY,
 ContactType nvarchar(10),
 FirstName nvarchar(30) NOT NULL,
 LastName nvarchar(50) NOT NULL,
 Territory nvarchar(20),
 Region nvarchar(10),
 RepID int,
 InitContact datetime DEFAULT GETDATE());

The SQL developer wants to optimize a stored procedure that is frequently run against the table. To optimize the query in the stored procedure, the developer should create a composite index on the two columns. Each region contains many contacts, while each territory contains only a few. Because there are fewer Region values than Territory values, the Territory column should be specified first when creating the index. With the composite index, SQL Server will find the rows that match the territory and region criteria in the WHERE clause with a single index search. The search will be performed by reading only the index pages, and the data pages will be accessed only once to retrieve the result set. The index might be created using the following statement:

CREATE INDEX IX_Contact_TerritoryRegion
ON Contact (Territory, Region);

Using this index would generally offer better performance than if the index had been created with the Region column first, because the Territory column contains more unique values.

The DBA/Developer should not create a view on the Contact table and modify the stored procedure to use the view. Using a view would not improve performance; the stored procedure would still have to query the view, and the view would access the underlying table.

The DBA/Developer should not create two nonclustered indexes, one on the Territory column and one on the Region column. In this scenario, a single composite nonclustered index would provide better performance. Although SQL Server can use more than one index on a table, it is unlikely that the index on the Region column would be used in this scenario because of the low uniqueness of the column's values.

The DBA/Developer should not re-create the stored procedure with the WITH SCHEMABINDING clause, because that clause is not allowed for a stored procedure. In addition, it has no effect on performance. The WITH SCHEMABINDING clause can be included in a CREATE VIEW statement to ensure that no base tables on which the view is based are dropped or modified in a way that might affect the view's definition. To drop base tables or make such modifications, a user would first need to drop the view, alter the view to omit SCHEMABINDING, or alter the view to remove any unwanted dependencies. The WITH SCHEMABINDING clause can also be included when creating a function to ensure base tables are not changed in a way that would render the function unusable.
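To illustrate the SCHEMABINDING behavior described above, the following sketch creates a hypothetical schema-bound view over the Contact table (the view name is an assumption):

```sql
-- A schema-bound view must reference objects with two-part names,
-- and SELECT * is not allowed.
CREATE VIEW dbo.ContactsByTerritory
WITH SCHEMABINDING
AS
SELECT ContactID, FirstName, LastName, Territory, Region
FROM dbo.Contact;
```

While this view exists, attempting to drop dbo.Contact, or to alter the referenced columns, fails with an error.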

Thursday 26 June 2014

SQL Server: WITH CUBE Clause

The WITH CUBE clause is used with a GROUP BY clause to return rows that contain summary information. When WITH CUBE is specified, a summary row is included in the result set for each possible combination of the columns specified in the GROUP BY clause. For example, assume the following statement is executed:

SELECT TrxType, TrxDisposition, COUNT(TrxID) AS TrxCount
FROM dbo.[Transaction]
GROUP BY TrxType, TrxDisposition
WITH CUBE;

The summary rows that represent totals for each transaction type would have a transaction type in the TrxType column and a NULL value in the TrxDisposition column. The summary rows that represent totals for each disposition would have a disposition in the TrxDisposition column and a NULL value in the TrxType column. The grand total summary rows would have a NULL value for both the TrxType and TrxDisposition columns.

The DBA/Developer should not include only the TrxID column in the GROUP BY clause and specify a grouping set that includes only an empty set, nor include only the TrxType column in the GROUP BY clause and create a grouping set that includes the TrxType column. Neither of these approaches would generate the desired result.

Grouping sets can be used to explicitly specify the groups for which aggregate information should be displayed. Grouping sets are specified by including the GROUPING SETS clause with the GROUP BY clause, which allows more than one grouping to be used within a single query. The syntax of the GROUP BY clause with a GROUPING SETS clause is as follows:

GROUP BY GROUPING SETS (groupingset1 [,...groupingsetn])

Each grouping set specified can contain one or more columns or an empty set, specified as (). Aggregate rows are returned in the result set only for the specified groups. Specifying an empty set indicates that a grand total row should also be returned in the result set.

The DBA/Developer should not include the TrxType and TrxDisposition columns in the GROUP BY clause with the WITH ROLLUP option. Including WITH ROLLUP would display the total number of transactions of each type and grand totals, but not the number of transactions for each disposition.
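To show the relationship between the two clauses, WITH CUBE on two columns is equivalent to explicitly listing all four possible grouping sets. This sketch reuses the dbo.[Transaction] example from above:

```sql
-- Equivalent to GROUP BY TrxType, TrxDisposition WITH CUBE:
-- totals per (type, disposition) pair, per type, per disposition,
-- and a grand total (the empty set).
SELECT TrxType, TrxDisposition, COUNT(TrxID) AS TrxCount
FROM dbo.[Transaction]
GROUP BY GROUPING SETS ((TrxType, TrxDisposition),
                        (TrxType),
                        (TrxDisposition),
                        ());
```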

Wednesday 25 June 2014

SQL Server: Service Broker application

In a Service Broker conversation, a contract defines the types of messages allowed and the conversation participants that can send those messages. In this scenario, the DBA/Developer created a message type named //NuTex.com/Research/SurveyData. The CREATE CONTRACT statement can then be used to create the contract to be used in conversations. The syntax of the CREATE CONTRACT statement is as follows:
CREATE CONTRACT contract_name
[AUTHORIZATION owner_name]
({ {message_type | [DEFAULT]} SENT BY {INITIATOR | TARGET | ANY} } [,...n]);

In this scenario, the Research database is the initiator and receives messages from the Prod database. To accomplish this, the DBA/Developer must indicate that the conversation initiator can send messages by specifying either SENT BY INITIATOR or SENT BY ANY in the CREATE CONTRACT statement for the message type.

The DBA/Developer should not use the statement that includes the SENT BY TARGET clause, because this statement will generate an error. Specifying SENT BY TARGET indicates that only the target in the conversation can send messages of the specified message type, and each contract must contain at least one message type specifying SENT BY INITIATOR or SENT BY ANY, or an error occurs.

The DBA/Developer should not use the statement that specifies SurveyData as the message type, because this statement will generate a syntax error. In this scenario, the name of the message type is //NuTex.com/Research/SurveyData.

The DBA/Developer should not use the statement that specifies [DEFAULT] as the message type, because the [DEFAULT] message type has validation set to NONE, indicating the data is not validated. In this scenario, the data is in XML format and requires the validation specified in the message type that was created. When creating a message type, one of the following types of validation can be specified:

EMPTY: Indicates that the message body will be NULL.
NONE: Indicates that validation will not be performed. This type of validation is used for binary data.
WELL_FORMED_XML: Indicates that the message body will contain well-formed XML data.
VALID_XML WITH SCHEMA COLLECTION schema_collection_name: Indicates that the message body will contain XML data that will be validated against a predefined XML schema in the specified schema collection.
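As a hedged sketch of how the message type and contract might be created in this scenario (the contract name //NuTex.com/Research/SurveyContract is an assumption; the message type name is from the text):

```sql
-- The data is XML, so well-formed XML validation is appropriate here.
CREATE MESSAGE TYPE [//NuTex.com/Research/SurveyData]
VALIDATION = WELL_FORMED_XML;

-- SENT BY ANY allows both the initiator and the target to send
-- messages of this type, satisfying the initiator requirement.
CREATE CONTRACT [//NuTex.com/Research/SurveyContract]
([//NuTex.com/Research/SurveyData] SENT BY ANY);
```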
 

Tuesday 24 June 2014

SQL Server: Table Instance

A Developer creates the following table: 

CREATE TABLE EventDetails
(EventID int IDENTITY(1,1), EventName varchar(50),
Description varchar(400), FacilitatorID int, EventDt datetime,
CONSTRAINT PK_EventID
PRIMARY KEY CLUSTERED (EventID));

The DBA/Developer executes the following Transact-SQL statements:

CREATE FULLTEXT CATALOG EventFTCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.EventDetails (Description)
KEY INDEX PK_EventID
ON EventFTCatalog
WITH CHANGE_TRACKING AUTO;
 
The DBA/Developer wants to display the names of all events in the EventDetails table. To do so: modify the thesaurus file to include all words with the same meaning as introductory, reload the thesaurus file using the sys.sp_fulltext_load_thesaurus_file system stored procedure, and create a query that uses the CONTAINS predicate with a FORMSOF(THESAURUS, ...) term.

SQL Server 2008 has default thesaurus files that can be used with full-text searches to search for synonyms. These thesaurus files are XML files that can be modified to contain the desired synonyms or pattern replacements. Thesaurus files are used with queries that use the FREETEXT predicate and the FREETEXTTABLE function, and with queries that use the CONTAINS predicate and the CONTAINSTABLE function with a FORMSOF(THESAURUS, ...) term. A global thesaurus file, named tsGlobal.xml, and a thesaurus file for each language are located in the SQL_Server_install_path\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\FTDATA\ folder. These default thesaurus files initially have their XML elements commented out, but they can be modified as required.

In this scenario, the DBA/Developer wanted to display the names of all events in the EventDetails table with a description containing the word introductory or a word with the same meaning. To accomplish this, first modify the thesaurus file to include the desired synonyms for the word introductory. For example, the thesaurus file could be modified as follows to identify the word beginning as a synonym for the word introductory:

<XML ID="Microsoft Search Thesaurus">
  <thesaurus xmlns="x-schema:tsSchema.xml">
    <diacritics_sensitive>0</diacritics_sensitive>
    <expansion>
      <sub>InternetExplorer</sub>
      <sub>IE</sub>
      <sub>IE5</sub>
    </expansion>
    <replacement>
      <pat>NT5</pat>
      <pat>W2K</pat>
      <sub>Windows2000</sub>
    </replacement>
    <expansion>
      <sub>introductory</sub>
      <sub>beginning</sub>
    </expansion>
  </thesaurus>
</XML>

After modifying the thesaurus file, DBA/Developer should reload it using the sys.sp_fulltext_load_thesaurus_file system stored procedure for the changes to take effect.
For example, the following statement would reload the English thesaurus file:

EXEC sys.sp_fulltext_load_thesaurus_file 1033;

Finally, the DBA/Developer should create a query that uses the modified thesaurus file. The following query uses the CONTAINS predicate with a FORMSOF(THESAURUS, ...) term to return the desired results:

SELECT EventName FROM dbo.EventDetails
WHERE CONTAINS (Description, 'FORMSOF (THESAURUS, introductory)');

The DBA/Developer should not create a query that uses the LIKE operator. The LIKE operator identifies values that contain a specific character pattern, not synonyms.

The DBA/Developer should not rebuild the full-text index on the EventDetails table. In this scenario, the WITH CHANGE_TRACKING AUTO clause was specified in the CREATE FULLTEXT INDEX statement, which configures automatic population of the full-text index. Each time the Description column of the EventDetails table is updated, the changes are also made to the full-text index, so there is no need to rebuild it.

The DBA/Developer should not enable full-text support for the database. In previous versions of SQL Server, full-text support had to be enabled, but with SQL Server 2008, full-text search support is enabled for all user databases by default.

The DBA/Developer should not create a query that uses the CONTAINSTABLE function with the ISABOUT clause. The ISABOUT clause is used to search for words with different weightings, not for word synonyms. For example, the following query uses the CONTAINSTABLE function in the FROM clause to search the EventDetails table for the words novice, beginning, and introductory with a different weighting assigned to each word, and returns the rows with the highest rank first:

SELECT * FROM EventDetails AS e
INNER JOIN CONTAINSTABLE(EventDetails, Description,
    'ISABOUT (novice weight(.8), beginning weight(.4), introductory weight(.1))') AS k
ON e.EventID = k.[KEY]
ORDER BY k.[RANK] DESC;

Monday 23 June 2014

SQL Server: ROLLBACK Option

Transactions can be specified explicitly, implicitly, or automatically. In autocommit mode, which is the default, each Transact-SQL statement is treated as a separate transaction, and each statement is automatically committed when it executes successfully. To ensure that both modifications either succeed or fail together, the DBA/Developer should include them within an explicit transaction.

The XACT_ABORT option controls how SQL Server handles transactions when a run-time error occurs. When SET XACT_ABORT is ON and a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When XACT_ABORT is OFF, only the Transact-SQL statement that raised the error is rolled back, and the remaining statements in the transaction are executed. The default value of the XACT_ABORT option is OFF.

In this scenario, the DBA/Developer wanted to ensure that either both statements execute successfully or both are rolled back. Setting XACT_ABORT to ON before the transaction executes ensures that if either statement generates an error, both statements are rolled back. For example, in this scenario, the following code might be used:

SET XACT_ABORT ON;
BEGIN TRANSACTION;
UPDATE Inventory
SET UnitsInStock = 0
WHERE InvID = 4;
INSERT INTO InvHistory (InvID, ModifiedDate)
VALUES (4, GETDATE());
COMMIT TRANSACTION;

If either DML statement fails, the entire transaction would be rolled back. DBA/Developer can also use the ROLLBACK statement and other procedural constructs within a block of Transact-SQL code to control when transactions are committed or rolled back. DBA/Developer might want to return a custom error message or perform other actions when specific statements are unsuccessful. To accomplish this, DBA/Developer would need to include additional error-handling code. For example, the following code would accomplish the same result, but DBA/Developer could include custom error processing within each BEGIN...END block:

BEGIN TRANSACTION;
UPDATE Inventory
SET UnitsInStock = 0
WHERE InvID = 4;
IF @@ERROR <> 0
BEGIN
    ROLLBACK;
    RETURN;
END
INSERT INTO InvHistory (InvID, ModifiedDate)
VALUES (4, GETDATE());
IF @@ERROR <> 0
BEGIN
    ROLLBACK;
    RETURN;
END
ELSE
    COMMIT;

The DBA/Developer should not set the XACT_ABORT option to OFF and enclose the code within a single explicit transaction. With this setting, if one of the statements failed, only the failed statement would be rolled back, rather than the entire transaction.

The DBA/Developer should not issue the UPDATE and INSERT statements and then check @@ERROR once, rolling back the transaction if @@ERROR returns a non-zero value. The @@ERROR function returns a non-zero value only for the most recently executed statement. Therefore, the value of @@ERROR would have to be checked after each DML statement.

The DBA/Developer should not simply include the code for both DML operations in a TRY block and include a CATCH block. Although a TRY...CATCH construct can be used to customize error handling in a transaction, including the DML within a TRY block and adding a CATCH block is not sufficient by itself; the CATCH block would need to contain the appropriate code to roll back the transaction if an error occurred.
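As a sketch of the TRY...CATCH approach mentioned in the last paragraph, using the same Inventory and InvHistory statements from the examples above, the CATCH block carries the required roll-back logic:

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE Inventory
    SET UnitsInStock = 0
    WHERE InvID = 4;
    INSERT INTO InvHistory (InvID, ModifiedDate)
    VALUES (4, GETDATE());
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back only if a transaction is still open
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- Custom error processing (RAISERROR, logging, etc.) could go here
END CATCH;
```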

SQL Server: Stored Procedures for XML Documents

Untyped XML is well-formed XML that is represented by an xml data type. Untyped xml columns and variables are defined with the built-in xml data type. Untyped XML is not validated in any other way unless DBA/Developer includes special code to do so. For example, the following Transact-SQL code creates a table that contains a column, OrderDetail, which contains untyped XML:

CREATE TABLE MyXMLOrderTable
(OrderID int NOT NULL PRIMARY KEY, OrderDetail xml, LoadDate datetime);

Untyped XML is used when XML must be loaded that does not conform to its specified schema, when no schemas are defined, or when the XML has already been validated and no validation needs to be performed on the server.

The DBA/Developer should not create a schema collection before importing the XML documents. Schema collections are used to validate XML. In this scenario, the XML is validated by a client application, so this task does not need to be performed on the database server.

The DBA/Developer should not use FILESTREAM storage for the XML documents. In this scenario, the XML documents are to be stored in the database, whereas FILESTREAM storage stores data outside the database on the file system.

The DBA/Developer should not use typed XML when storing the XML documents, because in this scenario no validation is needed. Typed XML can be used if XML needs to be validated, such as when validating that the XML has valid data types and values for specific elements and attributes. To create a typed xml column or variable, the CREATE XML SCHEMA COLLECTION statement must first be used to create a schema collection for validating the XML. The schema collection can then be referenced when declaring an xml variable or creating an xml column in a table. An existing XSD can also be used when creating the schema collection, if one is available.
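For contrast, a minimal sketch of the typed-XML alternative described above (the schema collection name and XSD content are purely illustrative):

```sql
-- Hypothetical schema collection; a real XSD would describe
-- the actual order document structure.
CREATE XML SCHEMA COLLECTION OrderSchemaCollection
AS N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <xsd:element name="Order" type="xsd:string"/>
    </xsd:schema>';

-- A typed xml column is validated against the collection
-- on every insert and update.
CREATE TABLE MyTypedXMLOrderTable
(OrderID int NOT NULL PRIMARY KEY,
 OrderDetail xml(OrderSchemaCollection),
 LoadDate datetime);
```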

Saturday 21 June 2014

SQL Server: Separating Developer and Tester Database

Create a database schema named Special. Create a Contractor database role, and grant the role SELECT and VIEW DEFINITION permissions on the dbo schema and the CONTROL permission on the Special schema. Grant the desired CREATE permissions to the Contractor role. Create a user account for each contractor with the Special schema as the default schema, and make each contractor a member of the Contractor role.

Schemas contain database objects. Using schemas allows ownership and permissions of database objects to be managed more effectively. In this scenario, the DBA/Developer should create a separate schema to contain all database objects that contractors create. This keeps the contractors' database objects logically separated from the objects created by the development team. The schema name is displayed in Object Explorer to make objects easier to identify. Permissions can also be granted at the schema level to simplify the management of permissions: a permission granted at the schema level is implicitly granted for all database objects within the schema, including future objects.

The CREATE SCHEMA statement creates a schema, and an optional AUTHORIZATION clause specifies the schema's owner, which may be a user or a role. If no schema owner is specified, dbo is the default owner. In this scenario, the Special schema owned by dbo could be created using the following statement:

CREATE SCHEMA Special;

Next, the contractors must be given the ability to perform their necessary tasks. The best way to implement this is to create a database role, grant the role the needed permissions, and then make each contractor a member of the role. Permissions may be granted at the schema level instead of at the object level. Therefore, the following Transact-SQL could be used to create the Contractor role and grant it the necessary permissions:

-- Create the Contractor role
CREATE ROLE Contractor;
GO

-- Allow contractors to query and view the definitions of tables
-- in the dbo schema
GRANT SELECT, VIEW DEFINITION ON SCHEMA::[dbo] TO [Contractor];
GO

-- Allow contractors to control the Special schema
GRANT CONTROL ON SCHEMA::[Special] TO [Contractor];
GO

-- Allow contractors to create tables, views, and stored procedures
-- in the database
GRANT CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO [Contractor];
GO

After creating the Contractor role, the DBA/Developer should create the contractor accounts with the Special schema as the default schema and make each account a member of the Contractor role. When a member of the role creates a database object, it is created in the user's default schema. The following Transact-SQL creates the Contractor1 user with a default schema of Special and assigns the user to the Contractor role:

CREATE LOGIN [Contractor1]
WITH PASSWORD = N'12345';
GO

CREATE USER [Contractor1]
FOR LOGIN [Contractor1]
WITH DEFAULT_SCHEMA = [Special];
GO

EXEC sp_addrolemember N'Contractor', N'Contractor1';
GO

-- Allow the user to connect to the database
GRANT CONNECT TO [Contractor1];

A schema can be specified as a user's default schema before the schema has even been created. Note that if the user is authenticated via a Windows group, the user will have no default schema assigned. If such a user creates an object, SQL Server creates a new schema in which the object is created, with the same name as the user that created the object.

When a schema is no longer needed, such as when the project is complete, it can be dropped using the DROP SCHEMA statement. To be dropped, a schema must be empty, or the DROP SCHEMA statement will generate an error. Therefore, in this scenario, the objects could first be moved from the Special schema to the dbo schema. The TRANSFER clause of the ALTER SCHEMA statement transfers objects from one schema to another. After transferring all objects to another schema, the schema can be dropped. The following Transact-SQL moves the ContractorTbl table from the Special schema to the dbo schema and then drops the Special schema:

ALTER SCHEMA dbo
TRANSFER Special.ContractorTbl;
DROP SCHEMA Special;

SQL Server: Common Table Expression (CTE)

Transact-SQL successfully implements a Common Table Expression (CTE) with the following:

WITH LowPricedProducts (ProductID, ProductName, Price)
AS (SELECT p.ProductID, p.ProductName, MIN(c.Price)
    FROM Product p
    INNER JOIN ProductPriceHistory c
      ON c.ProductID = p.ProductID
    GROUP BY p.ProductID, p.ProductName
    HAVING MIN(c.Price) < 10)
SELECT * FROM LowPricedProducts
ORDER BY ProductName;

When defining a CTE, the WITH clause specifies the expression name that will be used in the subsequent statement. The WITH clause must contain a column list identifying the available columns, unless all columns in the expression's query have distinct names. The syntax for creating a CTE is as follows:

WITH expression_name [(column_name [,...n])]
AS (CTE_query_definition)

After the CTE is created, the statement immediately following the CTE definition can reference the CTE expression by name one or more times, as if it were a table or view. Only the columns defined in the CTE expression are accessible. Two CTEs can also be created in a single WITH clause by separating the expressions with a comma. Within the WITH clause, a CTE can reference itself or another CTE that was previously defined. Note that only one WITH clause is allowed, even if the query defining the CTE contains a subquery. In addition, a CTE query definition cannot contain an ORDER BY, COMPUTE, COMPUTE BY, INTO, FOR XML, or FOR BROWSE clause, or an OPTION clause that specifies query hints.

CTE expressions can be used as an alternative to a view, temporary table, or subquery. Using a CTE makes the Transact-SQL code more readable than using a subquery, especially if the query using the CTE needs to reference the same result set multiple times.

The Transact-SQL that includes two SELECT statements after the CTE definition will generate an error, because the statement that uses the CTE must be the first statement after the CTE definition. The Transact-SQL that uses an ORDER BY clause in the CTE definition will generate an error, because an ORDER BY clause is not allowed in a query that defines a CTE. The Transact-SQL that omits the WITH keyword and includes an OUTPUT clause in the CTE definition will generate an error: the WITH keyword must be included to define a CTE, and the OUTPUT clause is not allowed in a query that defines a CTE because the OUTPUT clause is not valid for SELECT statements. However, a DML statement that immediately follows the CTE definition and references the CTE can include an OUTPUT clause.

When performing inserts, updates, deletes, and merges, the OUTPUT clause can be used to obtain and display information about the rows affected by the DML operation. The OUTPUT clause can display this information to the user, insert the data into another permanent or temporary table or table variable using an INTO clause, or pass the data to a nested DML statement for processing. Within the OUTPUT clause, you specify the column values that should be captured by using the column names with the INSERTED and DELETED prefixes.
Fig. 1 Database Containing Product History and Product Price Tables
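The point above about multiple CTEs in a single WITH clause can be sketched as follows. The ListPrice and SellStartDate columns are assumptions for illustration; only the Product table name comes from the example:

```sql
WITH CheapProducts AS
    (SELECT ProductID, ProductName FROM Product WHERE ListPrice < 10),
RecentProducts AS
    (SELECT ProductID FROM Product WHERE SellStartDate > '20140101')
SELECT c.ProductID, c.ProductName
FROM CheapProducts c
INNER JOIN RecentProducts r
    ON r.ProductID = c.ProductID;
```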

Friday 20 June 2014

SQL Server: Page Level Compression

Page compression is implemented internally by SQL Server to minimize the storage required for duplicate data. SQL Server uses a column-prefix or page-dictionary compression technique to eliminate redundant data within each page. In this scenario, because there is a large amount of duplicate data, page compression should be used. When page compression is implemented, SQL Server also implements row compression. Row compression saves storage space for each row by internally compressing each fixed-length numeric, date/time, or character data type. When row compression is implemented for a table, columns in the table use only the actual space required, rather than the number of bytes required for each data type. Table compression can be implemented when the table is created by specifying the DATA_COMPRESSION option in the CREATE TABLE statement. Valid values for the DATA_COMPRESSION option are as follows:

NONE: Implements no compression for the table.
ROW: Implements only row compression for the table.
PAGE: Implements row and page compression for the table.

In this scenario, DBA/Developer could create the SalaryAudit table with page-level compression using the following statement:

CREATE TABLE SalaryAudit
(ID int PRIMARY KEY, EmpID int, LastName varchar(35),
FirstName varchar(25), DeptName varchar(30), OldAmt money,
NewAmt money, UpdDate datetime, UpdID int)
WITH (DATA_COMPRESSION = PAGE);

DBA/Developer can also alter an existing table to use data compression. For example, DBA/Developer could use the following statement to modify a previously created table, Table1, to use row compression:

ALTER TABLE Table1
REBUILD WITH (DATA_COMPRESSION=ROW);

DBA/Developer should note that compression requires additional overhead, especially if the table is frequently accessed or modified, because the data must be compressed and uncompressed when it is accessed or modified. Therefore, DBA/Developer should closely evaluate not only the storage requirements but also how the table is used before implementing compression. In this scenario, the data is written only once to the table when a salary update occurs, and is rarely queried. Therefore, the additional overhead would likely have no adverse effects.

DBA/Developer should not use FILESTREAM storage and implement NTFS compression. FILESTREAM storage is implemented to store large binary objects, such as image or video files, as files on the file system and to manage them using Transact-SQL. FILESTREAM data can also be accessed using Win32 APIs. FILESTREAM storage is tightly integrated with most database functionality, including backup and recovery. When DBA/Developer takes a database backup, FILESTREAM storage is also backed up unless DBA/Developer overrides this functionality by performing a partial backup. To create a table that can store FILESTREAM data, DBA/Developer creates a table that contains a column of the varbinary(max) data type and includes the FILESTREAM attribute.

DBA/Developer cannot implement column-level compression for the table because column-level compression is not a compression strategy supported by SQL Server. SQL Server can implement compression at the row or page level only.

DBA/Developer should not set the PERSISTED property for the EmpID, LastName, FirstName, and UpdID columns. The PERSISTED property is only applicable to computed columns, which are not used in this scenario. Computed columns are virtual columns that by default are not physically stored in the table. Each computed column uses the AS keyword, followed by an expression that evaluates to a value. The expression can contain constants, functions, operators, and references to other columns within the table. The value of a computed column is calculated each time a query that references it executes. Computed columns that specify the PERSISTED property are physically stored in the table and recalculated each time a column value referenced in the calculation expression changes.
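Before enabling compression on an existing table such as Table1 above, DBA/Developer can gauge the likely benefit with the sp_estimate_data_compression_savings system stored procedure. The schema and table names below are assumptions for illustration:

EXEC sp_estimate_data_compression_savings
@schema_name = 'dbo',
@object_name = 'Table1',
@index_id = NULL,
@partition_number = NULL,
@data_compression = 'PAGE';

The result set compares the current size of each index or heap with its estimated size under the requested compression setting, which helps DBA/Developer weigh the storage savings against the processing overhead.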

Thursday 19 June 2014

SQL Server: CREATE FUNCTION with a RETURNS Clause

The function, named GetSample, accepts a single input parameter and returns a result set containing a random sampling of data from several tables in the current database:

CREATE ASSEMBLY CLRAssembly
FROM 'C:\CLRTest.dll'
WITH PERMISSION_SET = SAFE;
GO
CREATE FUNCTION MyFunction (@parm int)
RETURNS varchar
AS EXTERNAL NAME
AssemblyName.NamespaceName.ClassName.GetSample;

When DBA/Developer issued the CREATE FUNCTION statement, DBA/Developer specified that the function returns a scalar value. DBA/Developer needs to re-create the function with a RETURNS clause that specifies a tabular result set instead of a scalar value. In that RETURNS clause, DBA/Developer would specify the details of the columns returned by the GetSample CLR UDF. DBA/Developer should not re-register the assembly with the EXTERNAL_ACCESS permission set and re-create the function as an inline table-valued function. In this scenario, there is no need for the assembly to be given the EXTERNAL_ACCESS permission set because it uses tables within the current database.
The EXTERNAL_ACCESS permission set allows access to the registry, the file system, environment variables, and unmanaged code. In addition, DBA/Developer would not re-create the function as an inline table-valued function. An inline table-valued function would not be able to provide the details of the table returned by the GetSample function. Inline table-valued functions are defined by including a single SELECT statement without specifically naming the columns of the result. For example, the following statement would create an inline table-valued function without specifying the details of the returned table value:

CREATE FUNCTION dbo.GetData (@parm int)
RETURNS table
AS RETURN (SELECT * FROM MyTable WHERE MyColumn = @parm);

DBA/Developer should not remove the EXTERNAL NAME clause from the CREATE FUNCTION statement and drop and re-create the function. The EXTERNAL NAME clause must be specified when creating a CLR UDF to identify the .NET function that will be referenced. DBA/Developer should not re-create the desired functionality using a Transact-SQL stored procedure. In this scenario, DBA/Developer can access the previously created CLR function without additional development effort.
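To correct the function in this scenario, DBA/Developer would drop it and re-create it with a RETURNS TABLE clause that names the result columns. The column list below is a hypothetical illustration; the actual columns depend on what the .NET GetSample method returns:

DROP FUNCTION MyFunction;
GO
CREATE FUNCTION MyFunction (@parm int)
RETURNS TABLE (SampleID int, SampleValue varchar(50))
AS EXTERNAL NAME
AssemblyName.NamespaceName.ClassName.GetSample;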

Wednesday 18 June 2014

SQL Server: Clustered Versus Non-Clustered Index

SELECT * FROM Sales
WITH (INDEX(PK_Sales))
WHERE UnitPrice > 100
AND UnitPrice < 300
AND CarrierTrackingNumber = '0123-567A-A1'

For the above query, the only index that exists is the non-clustered index on the primary key column. The given query uses the CarrierTrackingNumber and UnitPrice columns in WHERE clause conditions. Therefore, DBA/Developer should create a non-clustered index on these columns. This would improve query performance because the optimizer would use an Index Scan or an Index Seek operation instead of an RID Lookup. An Index Scan scans the entire non-clustered index, while an Index Seek does not. Therefore, to optimize query performance, DBA/Developer might also ensure that an Index Seek is performed. In this scenario, DBA/Developer could use the following statement to create a non-clustered index on the two columns:

CREATE INDEX IX_TrackingUnitPrice
ON Sales (CarrierTrackingNumber, UnitPrice);

Then, DBA/Developer could remove the INDEX hint from the query so that the optimizer would not be forced to use the PK_Sales non-clustered index. DBA/Developer should not create a plan guide including the OPTIMIZE FOR hint. Plan guides can be used to optimize queries without modifying the query directly. Plan guides are helpful when DBA/Developer needs to specify query hints but cannot directly access the query, such as when it is embedded within an application. In a plan guide, DBA/Developer includes the query to be optimized and the query hints or a query plan that should be used for optimization. The OPTIMIZE FOR query hint forces the optimizer to use a specific parameter value when optimizing the query. DBA/Developer should not include the MAXDOP query hint in the query. The MAXDOP query hint specifies an integer value that identifies the number of processors on which a single query may run. If the query is run on a computer with a single processor, the query hint is ignored. DBA/Developer can also accomplish this by setting the max degree of parallelism configuration option on the SQL Server instance, but this setting would affect all queries. DBA/Developer should not modify the WHERE clause to use the BETWEEN operator. The query would still access the same underlying data and would use a lookup operator to do so.

Tuesday 17 June 2014

SQL Server: XML value() Query

The value() method must return a single value, and the XML contains multiple events. Therefore, the correct query includes the nodes() XML method in the FROM clause of the query. The nodes() method accepts an XQuery string and returns all of the specified nodes as a result set. In this scenario, the nodes() method returns a result set that contains one row for each event. Each row contains the corresponding XML for the event. Then, in the SELECT list, the table alias defined is used with the value() method to return the details for each specific event. DBA/Developer could also use the nodes() method to separate XML. For example, the following statement would generate a result set containing a single xml column, Events, with each row containing the XML representing a specific event.

SELECT x.Event.query('.') AS Events
FROM @xmldoc.nodes('//EventData/Event') AS x (Event);

The following query will not display the desired results:

SELECT @xmldoc.query('/EventData/Event[@ID]') AS EventID,
@xmldoc.query('/EventData/Event[@EventDesc]') AS Description,
@xmldoc.query('/EventData/Event[@EventLoc]') AS Location,
@xmldoc.query('/EventData/Event/AvailSeats') AS TotalSeats,
@xmldoc.query('/EventData/Event/Enrolled') AS Enrolled,
@xmldoc.query('/EventData/Event/AvailSeats') - @xmldoc.query('/EventData/Event/Enrolled')
AS Remaining;
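By contrast, a query that combines nodes() and value() returns one row per event with scalar columns. The attribute and element names below are taken from the failing query above; the data types are assumptions:

SELECT x.Event.value('@ID', 'int') AS EventID,
x.Event.value('@EventDesc', 'varchar(50)') AS Description,
x.Event.value('@EventLoc', 'varchar(50)') AS Location,
x.Event.value('(AvailSeats)[1]', 'int') AS TotalSeats,
x.Event.value('(Enrolled)[1]', 'int') AS Enrolled,
x.Event.value('(AvailSeats)[1]', 'int') - x.Event.value('(Enrolled)[1]', 'int') AS Remaining
FROM @xmldoc.nodes('/EventData/Event') AS x (Event);

Because value() is called against each row produced by nodes(), every XQuery expression resolves to a single value, which satisfies the value() method's singleton requirement.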


Monday 16 June 2014

SQL Server: Computed Columns

Computed columns are virtual columns that are not physically stored in the table by default. Each computed column uses the AS keyword followed by an expression that evaluates to a value. The expression can contain constants, functions, operators, and references to other columns within the table. The value of the computed column is calculated each time a query referencing it executes. DBA/Developer can also include the optional PERSISTED keyword when creating a computed column. When a persisted computed column is created, it is physically stored in the table and is recalculated each time a column value referenced in the calculation expression is changed.

For an index to be created on a computed column, the column must be deterministic and precise. A computed column is considered deterministic if it produces the same value each time it is passed the same values. For example, the GETDATE() function returns a different value each time it is called; therefore, a computed column that referenced the GETDATE() function would be non-deterministic. A computed column is considered precise if it does not perform floating-point calculations using a float or real data type. For example, a computed column that returns an int data type but uses a float or real data type in its definition is imprecise, and a computed column that returns an int data type and does not use a float or real data type is precise.

In this scenario, the calculation for the Duration column is deterministic and precise. Therefore, DBA/Developer could use either of the following ALTER TABLE statements to create the Duration column:

ALTER TABLE dba.AssemblyTask ADD Duration AS EndTime - StartTime PERSISTED;
ALTER TABLE dba.AssemblyTask ADD Duration AS EndTime - StartTime;


Then, DBA/Developer could create an index on the Duration column to improve query performance. DBA/Developer can use the IsDeterministic and IsPrecise properties with the COLUMNPROPERTY function to determine if an expression used in a computed column is deterministic and precise, respectively.

DBA/Developer should not create a dedicated column to store each task's duration and use a DML trigger to populate the column. A DML trigger defines Transact-SQL code that executes when DML statements, such as INSERT, UPDATE, and DELETE statements, are issued on tables or views. Within the body of the trigger, DBA/Developer could include Transact-SQL code to prevent the original operation or perform additional or alternative actions. For the database in this scenario, DBA/Developer could use an INSTEAD OF trigger that fires for each insert or update and calculates the duration value. A DML trigger might be used if DBA/Developer wanted to log DML errors but continue processing, or to perform complex validation that rolls back the triggering operation or returns an error message to the caller. If DBA/Developer uses triggers to perform additional actions, DBA/Developer should note that AFTER triggers do not fire if the triggering SQL statement fails, for example if constraint checks are not successful. However, in this scenario, performance is a primary concern. Therefore, DBA/Developer should avoid using triggers because they require more overhead.

DBA/Developer should not create a stored procedure that calculates each task's duration. This would not improve query performance because the stored procedure call would have to be included in each query. DBA/Developer should not create a non-persisted computed column that includes a CLR user-defined function to store each task's duration and create an index on the computed column. To create an index on a computed column that references a CLR user-defined function, the computed column must be persisted.
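For example, DBA/Developer could verify the column's properties with COLUMNPROPERTY and then index it. The index name below is an assumption:

SELECT COLUMNPROPERTY(OBJECT_ID('dba.AssemblyTask'), 'Duration', 'IsDeterministic') AS IsDet,
COLUMNPROPERTY(OBJECT_ID('dba.AssemblyTask'), 'Duration', 'IsPrecise') AS IsPrec;

CREATE INDEX IX_AssemblyTask_Duration
ON dba.AssemblyTask (Duration);

Both property queries should return 1 for a deterministic, precise expression such as EndTime - StartTime, confirming that the column is indexable.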

                               Fig. 1 Table Structure of Production Database

SQL Server: STOPLIST Optimization Tool

A stoplist contains a list of words that are ignored in full-text searches. These words are also known as noise words. When a full-text index is created, DBA/Developer can include the WITH STOPLIST clause to specify the stoplist that should be used for the full-text index. These stopwords are not included in the full-text index. By default, SQL Server uses a system stoplist that contains meaningless words such as articles and prepositions. However, DBA/Developer can create a custom stoplist and add words to it. For example, the following statements create a new stoplist, named CustomStoplist, from the system stoplist and add the word unknown to the stoplist:

CREATE FULLTEXT STOPLIST CustomStoplist
FROM SYSTEM STOPLIST;
ALTER FULLTEXT STOPLIST CustomStoplist ADD 'unknown';

After creating the stoplist, DBA/Developer specifies the stoplist when creating the full-text index as follows:

CREATE FULLTEXT INDEX ON dba.Product(Details)
KEY INDEX PK_Product_ProductID
WITH STOPLIST = CustomStoplist;

Previous versions of SQL Server used noise-word files instead of stoplists. Any modifications to the default noise-word file will not be included in the system stoplist when the database is upgraded to SQL Server 2008. DBA/Developer must manually modify the system stoplist after upgrading to SQL Server 2008.

DBA/Developer should not use a full-text search filter. Full-text search filters are used when performing full-text searches on varbinary, varbinary(max), image, and xml data type columns. The filter processes the document to obtain the text information that should be used in the search.

DBA/Developer should not use a thesaurus. Thesaurus files identify words with similar meanings. Thesaurus files are used with queries that use the FREETEXT predicate and the FREETEXTTABLE function, and with queries that use the CONTAINS predicate and the CONTAINSTABLE function with the FORMSOF(THESAURUS, ...) clause. A global thesaurus file, named tsGlobal.xml, and a thesaurus file for each language are located in the SQL_Server_install_path\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\FTDATA\ folder. These default thesaurus files have their XML elements commented out initially, but DBA/Developer can modify them as required to add or remove synonyms.

DBA/Developer should not use a dictionary. A dictionary is not a valid full-text index component. SQL Server does use a thesaurus file to identify synonyms, but it does not use a dictionary.
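As an illustration of the thesaurus behavior described above, a CONTAINS query can expand synonyms with FORMSOF. The search term here is an assumption; the dba.Product table and Details column are reused from the full-text index example:

SELECT ProductID, Details
FROM dba.Product
WHERE CONTAINS(Details, 'FORMSOF(THESAURUS, car)');

This matches rows containing the word car or any synonym defined for it in the active thesaurus file.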

SQL Server: User Defined Function

DBA/Developer is creating a user-defined function (UDF) to identify the employee with the highest salary who meets other specific criteria. DBA/Developer uses the following statement to create the UDF:

CREATE FUNCTION dba.udf_find_emp
(@Commission money, @Status varchar(8) = 'FA')
RETURNS int
WITH SCHEMABINDING, ENCRYPTION
AS
BEGIN
DECLARE @v_emp int;
SELECT @v_emp = EmpID
FROM dba.Employee
WHERE Status = @Status
AND Commission > @Commission
AND Salary = (SELECT MAX(Salary)
FROM dba.Employee
WHERE Status = @Status
AND Commission > @Commission);
RETURN @v_emp;
END;

The basic syntax of the CREATE FUNCTION statement for scalar functions is as follows:

CREATE FUNCTION [schema_name.]function_name
([{@parm_name [AS] [parmtype_schema.]parm_data_type [= default] [READONLY]} [,...n]])
RETURNS return_type
[WITH function_opt [,...n]]
[AS]
BEGIN
function_body
RETURN scalar_expression
END;

When creating the function, DBA/Developer defined a default value for the @Status parameter. Therefore, to successfully call the function without an @Status value specified, DBA/Developer must also specify the DEFAULT keyword. When calling stored procedures, DBA/Developer can omit the DEFAULT keyword, but for user-defined functions, DBA/Developer cannot. For example, in this scenario, the following statement would execute successfully:

SELECT dba.udf_find_emp(1000, DEFAULT);

When DBA/Developer creates a scalar user-defined function, the function can be used anywhere that a scalar value could be used. In this scenario, DBA/Developer specified WITH SCHEMABINDING, ENCRYPTION. The SCHEMABINDING option is used to ensure that no objects on which the function depends are modified in a way that might make the function unusable. The ENCRYPTION option will encrypt the function's definition to ensure that it is not stored in plain text that is readable by others.

Sunday 15 June 2014

SQL Server: Defining Views for Database Tables

Usually, views are based on a SELECT statement and only provide query access to underlying data. However, DBA/Developer can create a view and then use INSTEAD OF triggers defined for the view to implement DML functionality. An INSTEAD OF trigger fires in place of the operation that fired the trigger. INSTEAD OF triggers can be used to prevent the original operation from taking place or to perform additional or alternative actions. DBA/Developer could create a view that made data in both tables available and then create the necessary INSTEAD OF triggers that would insert, update, or delete from both tables as needed. When DBA/Developer uses this approach, DBA/Developer must write code to specifically handle any constraints defined on the base tables' columns. Also, for IDENTITY columns, such as TaskID in the Task table, DBA/Developer would use the @@IDENTITY function to retrieve the last identity value before inserting or updating a record.

DBA/Developer should not create a partitioned view that allows users to update only their respective partitions. Partitioned views are used when DBA/Developer has similar data stored in multiple tables and wants to create a view to allow access to all of the data as if it were stored in a single table. Partitioned views are implemented using the UNION ALL operator. For example, if DBA/Developer had three separate tables with an identical structure, DBA/Developer might use the following statement to create a partitioned view that allows users to query data from all three tables:

CREATE VIEW PartView
AS SELECT * FROM Table1
UNION ALL
SELECT * FROM Table2
UNION ALL
SELECT * FROM Table3;
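Returning to the INSTEAD OF trigger approach described earlier, the following sketch shows the general shape of such a trigger. The view name, the TaskDetail table, and the column names are hypothetical; only the Task table and its TaskID IDENTITY column come from the scenario:

CREATE TRIGGER trg_TaskView_Insert
ON TaskView
INSTEAD OF INSERT
AS
BEGIN
-- Insert the Task portion; TaskID is generated by the IDENTITY property
INSERT INTO Task (TaskName)
SELECT TaskName FROM inserted;
-- Insert the related detail row using the generated identity value
-- (this simple sketch assumes a single-row insert)
INSERT INTO TaskDetail (TaskID, DetailText)
SELECT @@IDENTITY, DetailText FROM inserted;
END;

Because the trigger fires instead of the original INSERT against the view, it can distribute the inserted values across both base tables while honoring their constraints.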

DBA/Developer should not create a view that includes a CTE. A Common Table Expression (CTE) would not be useful in this scenario. Using a CTE makes the Transact-SQL code, such as the view's definition in this scenario, more readable than using a subquery. The syntax for creating a CTE is as follows:

WITH expression_name [(column_name [,...n])]
AS (CTE_query_definition)

When defining a CTE, the WITH clause specifies the expression name that will be used in the subsequent SELECT statement. The WITH clause must contain a column list identifying the available columns, unless all columns in the expression's query have distinct names. After DBA/Developer creates the CTE, the statements immediately following the CTE definition can reference the CTE expression by name one or more times as if it were a table or view. Only the columns defined in the CTE expression are accessible. DBA/Developer can also create two CTEs in a single WITH clause by separating the expressions with a comma. Within the WITH clause, a CTE can also reference itself or another CTE that DBA/Developer previously created. DBA/Developer should also note that only one WITH clause is allowed, even if the query defining the CTE contains a subquery. In addition, a CTE query definition cannot contain an ORDER BY, COMPUTE, COMPUTE BY, INTO, FOR XML, or FOR BROWSE clause, or an OPTION clause that specifies query hints.

DBA/Developer should not create a parameterized stored procedure and create a view on the returned result set. Although DBA/Developer might choose to implement parameterized stored procedures to implement DML functionality, DBA/Developer cannot create a view over a result set.
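For completeness, a minimal CTE following the syntax above might look like this. The Task table and its TaskID and TaskName columns are reused from the scenario; the Completed column in the filter is an assumption:

WITH OpenTasks (TaskID, TaskName)
AS (SELECT TaskID, TaskName FROM Task WHERE Completed = 0)
SELECT TaskID, TaskName FROM OpenTasks;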