
Wednesday 30 July 2014

SQL Server: XACT_ABORT Option

DBA/Developer authored the UpdInventory stored procedure as follows:


CREATE PROCEDURE UpdInventory(@InvID int, @qty int)
AS
BEGIN TRANSACTION;
UPDATE Inventory
SET UnitsInStock = UnitsInStock - @qty
WHERE InvID = @InvID;
INSERT INTO InvQtyHistory(InvID, Adj, Description)
VALUES (@InvID, @qty, 'Inventory adjustment for purchase order');
COMMIT TRANSACTION;

 
To ensure that if the UpdInventory stored procedure attempts to set the UnitsInStock column in the Inventory table to a negative value, no inserts or updates to any of the tables are committed, DBA/Developer sets the XACT_ABORT option to ON at the beginning of the AddPurchaseOrder stored procedure.

The XACT_ABORT option controls how SQL Server handles transactions when a run-time error occurs. When the XACT_ABORT option is set to ON, and a Transact-SQL statement raises an error at run time, the entire transaction is terminated and rolled back. When the XACT_ABORT option is set to OFF, only the Transact-SQL statement that raised the error is rolled back. The remaining statements in the transaction will be executed. The default value of the XACT_ABORT option is OFF.

When transactions are nested, the outer transaction controls whether or not both transactions are committed or rolled back. Therefore, to ensure that all modifications take place or are rolled back, DBA/Developer can set the XACT_ABORT option to ON before beginning the outer transaction, namely the transaction started in AddPurchaseOrder. If an error occurs in AddPurchaseOrder, including an invalid update in UpdInventory, all statements in both transactions will be rolled back.

DBA/Developer should not set the XACT_ABORT option to OFF at the beginning of the AddPurchaseOrder stored procedure. With this setting, each statement within the AddPurchaseOrder stored procedure would be considered individually. Some of the modifications might be committed, while others might not.

DBA/Developer should not set the XACT_ABORT option to ON at the beginning of the UpdInventory stored procedure. This would control how modifications are committed within the UpdInventory stored procedure, but not in the outer procedure.

DBA/Developer should not include the code for both stored procedures in a single stored procedure that includes a TRY...CATCH block. Although DBA/Developer can use a TRY...CATCH block to perform error processing, the UpdInventory stored procedure might be called from other Transact-SQL code. Therefore, this would not be the best choice, because it could affect other code.
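The outer procedure described above might be sketched as follows. The body of AddPurchaseOrder is not shown in the scenario, so the PurchaseOrders table and its columns here are assumptions; only the XACT_ABORT setting and the nested call to UpdInventory come from the discussion:

```sql
CREATE PROCEDURE AddPurchaseOrder(@InvID int, @qty int)
AS
SET XACT_ABORT ON;  -- any run-time error now aborts and rolls back the entire transaction
BEGIN TRANSACTION;
INSERT INTO PurchaseOrders(InvID, Qty)   -- hypothetical table for illustration
VALUES (@InvID, @qty);
EXEC UpdInventory @InvID, @qty;          -- nested transaction; the outer one controls the commit
COMMIT TRANSACTION;
```

With XACT_ABORT ON set before the outer BEGIN TRANSACTION, a CHECK constraint violation inside UpdInventory rolls back both the INSERT into PurchaseOrders and the statements in UpdInventory.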

SQL Server: Deterministic and Non-Deterministic Function

A function is deterministic if it returns the same value each time it is passed the same values. Nondeterministic user-defined functions have certain restrictions on how they can be used. In this scenario, the best approach would be to rewrite the function as a stored procedure because stored procedures do not have this restriction. If DBA/Developer are not sure whether a function is deterministic, DBA/Developer can return the IsDeterministic property using the OBJECTPROPERTY function. The IsDeterministic property is 1 if the function is deterministic, and 0 if it is not. For example, DBA/Developer could use the following query to determine whether the dbo.udf_get_days_emp function is deterministic or nondeterministic:

SELECT OBJECTPROPERTY(OBJECT_ID('dbo.udf_get_days_emp'), 'IsDeterministic');

DBA/Developer should not remove the reference to the GETDATE() built-in function in the function's definition. Although this would make the function deterministic, DBA/Developer would not be able to calculate the number of days the employee has been employed.

DBA/Developer should not create a view that includes only the function's result and create an index on the view, because DBA/Developer cannot create an index on a view that references a nondeterministic function. In this scenario, DBA/Developer would have to create a view based on each value returned from the function. Unlike a UDF or stored procedure, DBA/Developer cannot pass a parameter to a view.

DBA/Developer should not create a computed column in the Employee table that uses the function in its expression definition and create an index on the computed column. Computed columns are virtual columns that are not physically stored in the table by default. Each computed column uses the AS keyword followed by an expression that evaluates to a value. The expression can contain constants, functions, operators, and references to other columns within the table. The value of the computed column is calculated each time a query that references it executes. DBA/Developer can also include the optional PERSISTED keyword when creating a computed column. When a persisted computed column is created, it is physically stored in the table and is recalculated each time a column value referenced in the calculation expression is changed. To be able to create an index on a computed column, the column must be deterministic and precise. A computed column is considered deterministic if it produces the same value each time it is passed the same values. A computed column is considered precise if it does not perform floating-point calculations using either a float or real data type.
In this scenario, the function is non-deterministic, so using it in the computed column's expression would prevent DBA/Developer from creating an index on the computed column. DBA/Developer can query the IsDeterministic and IsPrecise properties using the COLUMNPROPERTY function to determine if a computed column is deterministic and precise, respectively.
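To illustrate the point above, here is a hypothetical table (the OrderDetail name and columns are assumptions, not from the scenario) with a deterministic, precise computed column, followed by the COLUMNPROPERTY checks:

```sql
-- LineTotal references only other columns and exact-numeric types,
-- so it is deterministic and precise, and can be indexed.
CREATE TABLE OrderDetail
(OrderID int,
 UnitPrice money,
 Qty int,
 LineTotal AS (UnitPrice * Qty) PERSISTED);

-- Verify the properties before attempting to create the index:
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.OrderDetail'), 'LineTotal', 'IsDeterministic') AS IsDeterministic,
       COLUMNPROPERTY(OBJECT_ID('dbo.OrderDetail'), 'LineTotal', 'IsPrecise') AS IsPrecise;
```

If both properties return 1, an index such as `CREATE INDEX IX_LineTotal ON OrderDetail(LineTotal);` would be allowed; a column whose expression called GETDATE() would return 0 for IsDeterministic and could not be indexed.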

Monday 28 July 2014

SQL Server: exist() and query()XML methods

DBA/Developer can use the exist() XML method in the WHERE clause of a SELECT statement. The exist() method accepts a string representing a specific node, and returns a Boolean value indicating whether the node exists in the XML. The exist() method returns a value of 1 if the specified node exists. In this scenario, DBA/Developer passed it a string to search for the <SKU> element with an ID attribute value of "XS-7". DBA/Developer also included CompletionDate IS NULL in the WHERE clause. Therefore, the query will return the ProjectID, Description, and TargetDate for all projects that do not have a completion date and have a material list that contains the XS-7 item. DBA/Developer should not use the following query:

SELECT ProjectID, Description, TargetDate
FROM dbo.Project
WHERE MaterialList.exist
('/Materials/Item/SKU[@ID="XS-7"]')
AND CompletionDate IS NULL;

This query will return an error because the exist() method returns a value of 0 or 1.
This query will generate the following error:

Msg 4145, Level 15, State 1, Line 7

An expression of non-boolean type specified in a context where a condition is expected, near 'AND'.

DBA/Developer should not use the following query:

SELECT *, MaterialList.query ('Materials/Item/SKU[@ID="XS-7"]')
FROM dbo.Project;

This query will return a row for all projects because no WHERE clause was specified. For products with a material list including the XS-7 item, it would return a single <SKU> element as follows:

<SKU ID="XS-7" ItemDesc="Wall Brackets" />

DBA/Developer should not use the following query:

SELECT ProjectID, Description, TargetDate
FROM dbo.Project
WHERE MaterialList.query
('/Materials/Item/SKU[.="XS-7"]')
AND CompletionDate IS NULL;

The query() method is used to query and retrieve XML elements and attributes from an XML instance. The method accepts a string XQuery expression that determines which elements and element attributes are extracted. It will return untyped XML, not a Boolean value that can be referenced in a WHERE clause. This query will return the following error:

Msg 4145, Level 15, State 1, Line 5
An expression of non-boolean type specified in a context where a condition is expected, near 'AND'.
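By contrast, comparing the exist() result to 1 explicitly produces a query that runs as intended. This is a reconstruction of the correct query described at the start of this post, using the same table and columns from the scenario:

```sql
SELECT ProjectID, Description, TargetDate
FROM dbo.Project
WHERE MaterialList.exist('/Materials/Item/SKU[@ID="XS-7"]') = 1
  AND CompletionDate IS NULL;
```

The `= 1` comparison turns the exist() result into a valid search condition, which is exactly what the Msg 4145 error was complaining about in the rejected variants.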

SQL Server: Data Modeling

DBA/Developer can create the data model using the following Transact-SQL:

CREATE TABLE Departments
(DeptID int PRIMARY KEY, DeptName varchar(25));

CREATE TABLE TrainingCourses
(CourseID int PRIMARY KEY, CourseName varchar(30));

CREATE TABLE Employees
(EmployeeID int PRIMARY KEY,
 FirstName varchar(25),
 LastName varchar(30),
 DepartmentID int FOREIGN KEY REFERENCES Departments(DeptID));

CREATE TABLE TrainingHistory
(CourseID int,
 EmployeeID int,
 TrainingDate datetime,
 CONSTRAINT PK_THistory PRIMARY KEY CLUSTERED (CourseID, EmployeeID),
 FOREIGN KEY (CourseID) REFERENCES TrainingCourses(CourseID),
 FOREIGN KEY (EmployeeID) REFERENCES Employees(EmployeeID));

The data model in this scenario will resemble the following:

 
A PRIMARY KEY constraint is used to create a primary key for a table that uniquely identifies each row in the table. A FOREIGN KEY constraint is used to establish a relationship between a primary key or a unique key in one table and another column in the same table or a different table.

Each row in the Departments table represents a single department identified by the DeptID primary key. Each row in the Employees table represents a single employee identified by the EmployeeID primary key. The FOREIGN KEY constraint in the Employees table that references the Departments table ensures that each employee is assigned to a single department, but allows departments to have multiple employees. Each row in the TrainingCourses table represents a single course offered to employees, identified by the CourseID primary key.

To record the fact that an employee has completed a course, DBA/Developer must create an additional table, referred to as a junction table, to represent the many-to-many relationship between Employees and TrainingCourses. The junction table contains a composite primary key consisting of the primary keys of the joined tables, and has FOREIGN KEY constraints on each of the primary key columns to reference the original tables. The junction table may also include any other applicable columns. Each row in the TrainingHistory table represents the fact that a specific employee has taken a specific training course. The composite primary key on the CourseID and EmployeeID columns in the TrainingHistory table ensures that an employee can take a specific course only once, and allows each training course to be taken by multiple employees. The foreign key that references the EmployeeID column in the Employees table ensures that only employees can take the offered training courses. The foreign key that references the CourseID column in the TrainingCourses table ensures that employees may only take courses that are offered.

All of the other options are incorrect because they will introduce redundant data into the data model or do not support the business requirements. DBA/Developer should not create only two tables, Employees and TrainingCourses. If DBA/Developer only created two tables, redundant data would be introduced. For example, DBA/Developer might store department data in the Employees table. If so, because departments have multiple employees, the DeptName would be redundant for employees within a given department. In a normalized data model, each table contains data for a single entity, such as an employee, department, or course. DBA/Developer should not create only three tables, Employees, Departments, and TrainingCourses. DBA/Developer must have a TrainingHistory table to represent the fact that a specific employee has taken a specific training course.
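The constraints described above can be exercised with a few sample rows. The data values here are invented for illustration; the tables and constraint names come from the model:

```sql
INSERT INTO Departments VALUES (1, 'Engineering');
INSERT INTO Employees VALUES (100, 'Ann', 'Smith', 1);
INSERT INTO TrainingCourses VALUES (10, 'SQL Fundamentals');

INSERT INTO TrainingHistory VALUES (10, 100, '2014-07-01');  -- succeeds
INSERT INTO TrainingHistory VALUES (10, 100, '2014-07-08');  -- fails: violates PK_THistory
                                                             -- (employee may take a course once)
INSERT INTO TrainingHistory VALUES (99, 100, '2014-07-01');  -- fails: FOREIGN KEY violation
                                                             -- (course 99 is not offered)
```

The second and third inserts are rejected by the composite primary key and the CourseID foreign key respectively, which is exactly the behavior the junction table is meant to enforce.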

Wednesday 23 July 2014

SQL Server: Partitioned View

Partitioned views are used when DBA/Developer have similar data stored in multiple tables and want to create a view to allow access to all of the data as if it were stored in a single table. The tables referenced in the view can reside on the same server or on different servers. Partitioned views are implemented using the UNION ALL operator. For example, if DBA/Developer had three separate tables with an identical structure on the same server, DBA/Developer might use the following statement to create a partitioned view that allows users to query data from all three tables:

CREATE VIEW PartView
AS SELECT * FROM Table1
UNION ALL
SELECT * FROM Table2
UNION ALL
SELECT * FROM Table3;

In this scenario, DBA/Developer have multiple TransactionHistory tables residing on separate servers, each of which contains a subset of transaction history data. DBA/Developer can create a view across servers, known as a distributed partitioned view, by first creating a linked server definition for each branch and then creating the partitioned view. The partitioned view is created using a fully-qualified name in each SELECT statement. Using a distributed partitioned view would allow branch offices to access their transactions and enforce the CHECK constraint defined for each branch's TransactionHistory table, but would also allow users at the corporate office to query all data as if it resided in a single table. In addition, if a corporate office user issued a query against one branch's transaction history, the query optimizer would use the individual CHECK constraints defined on the tables to optimize performance of the query and search only the required tables. Using partitioned views would also allow the base tables to be managed separately. This can improve availability and decentralize administration effort because each base table can be backed up, restored, reorganized, or managed individually as needed.

DBA/Developer should not implement a nested view that exposes the required data. A nested view is a view that references another view in its definition. A nested view would not be appropriate in this scenario. For optimum performance, DBA/Developer should avoid nested views when possible, and if DBA/Developer use nested views, DBA/Developer should limit the level of nesting where possible.

DBA/Developer should not create a view for each branch office that includes the WITH CHECK OPTION clause. If DBA/Developer created a view for each branch office, it would not maximize performance of corporate queries, and it would increase the complexity to create queries across all branches. A query that accessed transaction history across multiple branches would have to reference each branch's view. When creating a view, the WITH CHECK OPTION clause is used to ensure that no data modifications can be made through a view that would cause the underlying data to violate the view's definition.

DBA/Developer should not create a single partitioned table that includes a single CHECK constraint and transaction history from all branch offices. In this scenario, leaving the transaction history data in separate tables across multiple servers will allow transaction history at each branch to be managed independently. This will provide better data availability, while still providing optimum performance of corporate queries. DBA/Developer might choose to use a single partitioned table if all the base tables resided on the same server.
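A distributed partitioned view over the linked servers might look like the following sketch. The linked server names (Server1 through Server3) and the Branch database name are assumptions; only the TransactionHistory table name comes from the scenario:

```sql
-- Each SELECT uses the four-part fully-qualified name through a linked server.
CREATE VIEW AllTransactionHistory
AS SELECT * FROM Server1.Branch.dbo.TransactionHistory
UNION ALL
SELECT * FROM Server2.Branch.dbo.TransactionHistory
UNION ALL
SELECT * FROM Server3.Branch.dbo.TransactionHistory;
```

A corporate query that filters on the partitioning column lets the optimizer use each table's CHECK constraint to eliminate the remote tables that cannot contain qualifying rows.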

Saturday 19 July 2014

SQL Server: Database Mail

Database Mail sends e-mail messages using SMTP. To use Database Mail, DBA/Developer does not have to install an Extended MAPI client on the SQL server. Database Mail is an external process that runs outside of the Database Engine. Therefore, it has minimal impact on the SQL server.

Database Mail offers many advantages over using SQL Server Mail. Database Mail can use multiple SMTP accounts and profiles, limit the size and types of attachments, and log all mail events that occur. Mail messages sent by Database Mail use Service Broker to queue and deliver e-mail messages. Therefore, Service Broker must be active in the msdb database to successfully send e-mail messages using Database Mail.

Database Mail is disabled by default. DBA/Developer can enable Database Mail using the Database Mail Configuration Wizard in SQL Server Management Studio, the sp_configure system stored procedure, or the Surface Area Configuration Utility.

To launch the Database Mail Configuration Wizard, DBA/Developer should first expand the Management node in the Object Explorer in SQL Server Management Studio as shown:

 
After expanding the Management node, DBA/Developer should right-click Database Mail and select Configure Database Mail. This will launch the Database Mail Configuration Wizard. At the first screen, click Next to proceed past the welcome information. At the second screen, select the Setup Database Mail option as shown and click the Next button.

 


The Database Mail Configuration Wizard will walk DBA/Developer through the initial steps of configuring database mail and will enable Database Mail. To enable Database Mail using the sp_configure system stored procedure, DBA/Developer can enable the Database Mail XPs option as follows:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Database Mail XPs', 1;
GO
RECONFIGURE;
GO

To enable Database Mail using the Surface Area Configuration Utility, DBA/Developer should right click the server in Object Explorer and select the Facets option. In the View Facets window, DBA/Developer should select Server Configuration for the Facet option and set the DatabaseMailEnabled property to True as shown:

 
After enabling Database Mail, DBA/Developer can use Database Mail system stored procedures to manage mail accounts and profiles or send e-mail messages. For example, in this scenario, the following code might be used to send an e-mail to a sales representative using Database Mail:
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'VirtuArtMailProfile',
    @recipients = N'asmith@virtuart.com',
    @subject = 'Priority Customer Order',
    @body = 'An order > $1000 for a priority customer has shipped.',
    @query = N'SELECT *
               FROM Sales.SalesOrderHeader h
               INNER JOIN Sales.SalesOrderDetail d
                 ON h.SalesOrderID = d.SalesOrderID
               WHERE TotalDue > 1000
                 AND CustType = ''Priority''
                 AND h.SalesOrderID = ''43652''',
    @attach_query_result_as_file = 1,
    @query_attachment_filename = 'PriorityOrderDetails.txt';

The sp_send_dbmail system stored procedure is used to send an e-mail message using Database Mail. This procedure accepts parameters defining the details of the e-mail message, such as the recipients, message subject, message body, and importance, and sends the e-mail message.

DBA/Developer should not use SQL Server Mail. SQL Server Mail requires an Extended MAPI client be installed on the server, and runs as a server process. This would not allow DBA/Developer to send messages using SMTP and would have more impact on the server than using Database Mail.

DBA/Developer should not use Distributed Transaction Coordinator (DTC). Microsoft Distributed Transaction Coordinator (MS DTC) is used to provide for transactional processing across multiple SQL Server instances.

DBA/Developer should not use SQL Server alerts. SQL Server alerts provide automatic notification when specific errors or events occur on the SQL server. When a server error or event occurs, it is recorded in the Windows application log. SQL Server Agent reviews the application log entries and fires alerts defined for the recorded events.

Friday 18 July 2014

SQL Server: ALTER PARTITION FUNCTION statement

DBA/Developer can use the ALTER PARTITION FUNCTION statement to add a partition to an existing partitioned table, or to combine two partitions in a partitioned table. The complete syntax for the ALTER PARTITION FUNCTION statement is as follows:

ALTER PARTITION FUNCTION partition_function_name()
{SPLIT RANGE (boundary_value) | MERGE RANGE (boundary_value)};



The MERGE RANGE clause combines two partitions, and the SPLIT RANGE clause splits a partition into two partitions. Each of these clauses specifies the boundary value for which the split or merge should occur. In addition, before DBA/Developer split partitions, DBA/Developer must ensure that the partition scheme has enough filegroups to accommodate the new partition, or an error will be generated. DBA/Developer must also specify a value that is different from any other defined boundary value.

DBA/Developer should not consolidate the partitions using the MERGE RANGE clause and re-partition the table. The MERGE RANGE clause is used when combining two partitions.

DBA/Developer should not alter the partition scheme and rebuild the table partitions. The partition scheme maps partitions to filegroups. Modifying the partition scheme would not affect the number of partitions.

DBA/Developer should not create a temporary table to store the data, and then drop and re-create the table. There is no need to re-create the table. This would make the table unavailable to users and would not be the best choice.
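As a concrete sketch, splitting off a new partition takes two steps. The function, scheme, filegroup, and boundary value below are assumptions chosen for illustration:

```sql
-- Step 1: give the partition scheme a filegroup for the new partition,
-- otherwise the SPLIT will fail with an error.
ALTER PARTITION SCHEME PScheme1 NEXT USED fg4;

-- Step 2: introduce a new boundary value, splitting one partition into two.
-- The value must differ from every existing boundary value.
ALTER PARTITION FUNCTION PFunction1() SPLIT RANGE (4000);
```

Merging is the reverse: `ALTER PARTITION FUNCTION PFunction1() MERGE RANGE (4000);` removes that boundary and combines the two adjacent partitions.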
 
 

Wednesday 16 July 2014

SQL Server: Dropping the Priority column from a table

DBA/Developer authored the following Transact-SQL:

CREATE TABLE Blogging
(ID int IDENTITY(1,1) PRIMARY KEY,
 Description varchar(30) NOT NULL,
 MaxAttendance smallint DEFAULT 0,
 Type bit NULL,
 Priority tinyint CHECK (Priority BETWEEN 0 AND 10),
 Cost money NULL);

DBA/Developer created a view on the table using the following statement:

CREATE VIEW BloggingView
WITH SCHEMABINDING
AS SELECT ID, Description,
Priority FROM dbo.Blogging
WHERE Priority BETWEEN 1 AND 5;

To drop the Priority column from the table, DBA/Developer should remove the CHECK constraint on the Priority column and alter the view to remove all references to the Priority column. When DBA/Developer created the view, DBA/Developer included the WITH SCHEMABINDING clause. The WITH SCHEMABINDING clause ensures that base tables of a view cannot be dropped or modified in a way that affects the view's definition. This prevents users from dropping or modifying base tables in a way that makes the view unusable. To drop base tables or make such modifications, DBA/Developer would need to first drop the view, alter the view omitting SCHEMABINDING, or alter the view to remove any unwanted dependencies.

In this scenario, the Priority column also has a CHECK constraint, which must also be removed before dropping the column or the following error occurs:
 
 
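The removal sequence might look like the following sketch. Because the CHECK constraint in the CREATE TABLE statement was not named, its system-generated name must be looked up first; CK_Blogging_Priority below is a stand-in for whatever name that lookup returns:

```sql
-- Find the system-generated name of the CHECK constraint on dbo.Blogging:
SELECT name
FROM sys.check_constraints
WHERE parent_object_id = OBJECT_ID('dbo.Blogging');

-- Drop the constraint (substitute the actual name returned above):
ALTER TABLE dbo.Blogging DROP CONSTRAINT CK_Blogging_Priority;

-- Redefine the schema-bound view without any reference to Priority:
ALTER VIEW BloggingView
WITH SCHEMABINDING
AS SELECT ID, Description
FROM dbo.Blogging;

-- The column can now be dropped:
ALTER TABLE dbo.Blogging DROP COLUMN Priority;
```

Running the DROP COLUMN before the first two steps fails, because both the constraint and the schema-bound view still depend on the column.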

Tuesday 15 July 2014

SQL Server: READPAST Vs NOWAIT Table Hint

When DBA/Developer includes the READPAST table hint in a query, the query executes immediately, but does not return any locked rows. In this scenario, DBA/Developer could use the following statement to execute the query immediately without any errors:

SELECT ShipMethodID, SUM(Freight)
FROM PurchaseOrderHeader WITH (READPAST)
GROUP BY ShipMethodID;

DBA/Developer should note that if DBA/Developer includes the READPAST table hint, the Freight value of the row that is locked would not be included in the sum returned in the query's result set. In this scenario, if DBA/Developer wanted to execute the query and include the Freight value for the locked row, DBA/Developer could use the NOLOCK table hint. The NOLOCK table hint ignores all locking, but retrieves uncommitted data for locked rows. The NOLOCK table hint prevents blocking but allows dirty reads, and the READPAST table hint prevents blocking but does not allow dirty reads.
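The NOLOCK variant described above would be the same query with the hint swapped, using the scenario's table and columns:

```sql
-- Includes the locked row's Freight value, but as uncommitted (dirty) data.
SELECT ShipMethodID, SUM(Freight)
FROM PurchaseOrderHeader WITH (NOLOCK)
GROUP BY ShipMethodID;
```

The trade-off is that the sum may reflect an update that is later rolled back, whereas READPAST returns only committed data but silently omits the locked row.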
The NOWAIT table hint will cause the query to not wait for locks to be released, and will immediately return an error. Usually, queries use the LOCK_TIMEOUT setting value to determine how long the statement should wait for locks to be released before returning an error. Using the NOWAIT table hint would be the equivalent of setting LOCK_TIMEOUT to 0 before executing the query. DBA/Developer should not add the TABLOCK or UPDLOCK hint to DBA/Developer's query because both of these hints would cause the query to wait to obtain locks. The TABLOCK hint specifies that the query should acquire a table-level lock and keep the table locked until the statement completes. The UPDLOCK hint specifies that the query should acquire an update lock that remains in effect until the transaction ends.

Monday 14 July 2014

SQL Server: sp_estimate_data_compression_savings System Stored Procedure

The sp_estimate_data_compression_savings system stored procedure is used to estimate the space that would be saved by implementing row or page compression for a table. DBA/Developer can also use the procedure to estimate the result of disabling compression or implementing compression for an index or a single partition of a partitioned table. The procedure accepts the following input parameters:

@schema_name: Indicates the schema that contains the table. This parameter defaults to the current user's schema if a schema is not specified.
@object_name: Indicates the table name.
@index_id: Indicates the ID number of an index.
@partition_number: Indicates the partition number.
@data_compression: Indicates the type of compression. Valid values are NONE, ROW, and PAGE.

The procedure returns a column indicating the current size, size_with_current_compression_setting, and a column indicating the estimated size if the specified type of compression were implemented, size_with_requested_compression_setting. In this scenario, DBA/Developer would be able to determine the estimated savings if DBA/Developer implemented page-level compression for a table. DBA/Developer should note that when evaluating whether to implement compression, DBA/Developer should also consider how the data is accessed. Compression generates overhead, especially for tables that are frequently accessed. Therefore, both the storage savings and the performance impact should be considered.
The sp_spaceused system stored procedure returns current space usage information, not estimated storage savings by implementing compression.
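A call estimating page-level compression savings might look like the following. The dbo.SalesHistory table name is an assumption for illustration; the parameter names come from the procedure's signature described above:

```sql
-- NULL for @index_id and @partition_number estimates savings
-- across all indexes and partitions of the table.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'SalesHistory',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```

Comparing the size_with_current_compression_setting and size_with_requested_compression_setting columns in the result shows the estimated saving before any compression is actually applied.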

Sunday 13 July 2014

SQL Server: Creating a Service Broker Application

By default, messages are sent and received in order. Service Broker allows DBA/Developer to override the order in which messages are sent and received by defining conversation priorities. DBA/Developer can use the CREATE BROKER PRIORITY statement to assign different numeric priority levels to different conversations. The syntax of the CREATE BROKER PRIORITY statement is as follows:

CREATE BROKER PRIORITY
priorityname
FOR CONVERSATION
[SET
([CONTRACT_NAME = {contractname | ANY}]
[[,] LOCAL_SERVICE_NAME =
{localservicename | ANY}]
[[,] REMOTE_SERVICE_NAME = {'remoteservicename' | ANY}]
[[,] PRIORITY_LEVEL ={priorityvalue | DEFAULT}])];

The priorityname specifies the conversation priority's name. The priorityvalue specifies an integer between 1 and 10 representing the priority value of the conversation. Conversations with a higher priority value have higher priority. The default priorityvalue is 5. The SET clause specifies the information that Service Broker uses to decide if a conversation should be prioritized. The SET clause may contain one or more of the following criteria:

CONTRACT_NAME: The name of the contract used in the conversation that was specified in the ON CONTRACT clause of the BEGIN DIALOG statement. This criterion allows DBA/Developer to prioritize conversations that use different contracts.
LOCAL_SERVICE_NAME: The local service name criterion used to prioritize conversations.
REMOTE_SERVICE_NAME: The remote service name criterion used to prioritize conversations.

PRIORITY_LEVEL: The integer value representing the conversation priority.

Each criterion in the SET clause defaults to ANY if not specified. When an application sends or receives a message, Service Broker examines the criteria to determine the conversation priority. If Service Broker determines that the conversation meets more than one criterion, it will assign the conversation priority based on the best match.

In this scenario, the Service2 and Service3 services share the same queue. However, DBA/Developer want to prioritize the conversations so that when reading messages from ReceiveQueue, messages delivered to Service3 are read first. To accomplish this, DBA/Developer can create two different conversation priorities. For example, DBA/Developer could use the following Transact-SQL to assign a higher priority to Service3's conversations:

CREATE BROKER PRIORITY
LowPriority
FOR CONVERSATION
SET
(CONTRACT_NAME = Contract1,
LOCAL_SERVICE_NAME = Service2,
REMOTE_SERVICE_NAME = N'Service1',
PRIORITY_LEVEL = 1);
CREATE BROKER PRIORITY HighPriority
FOR CONVERSATION
SET
(CONTRACT_NAME = Contract1,
LOCAL_SERVICE_NAME = Service3,
REMOTE_SERVICE_NAME = N'Service1',
PRIORITY_LEVEL = 10);

An application could then send a message from Service1 to Service2 using Contract1 and another message from Service1 to Service3. Using the conversation priorities defined by these statements, Service Broker will receive the higher-priority message sent to Service3 first. Without conversation priorities, the messages would be retrieved from ReceiveQueue in the order they were originally sent.

DBA/Developer should note that if the conversation priority is bidirectional, separate priorities for each direction must be established. To determine the message with the highest priority without actually receiving it, DBA/Developer can use the GET CONVERSATION GROUP statement. DBA/Developer should also note that although conversations for received messages can implement priorities by default, the HONOR_BROKER_PRIORITY database option must be set to ON to implement conversation priorities for sent messages.

DBA/Developer should not create an additional message type and assign it a higher priority because Service Broker does not prioritize messages based on the message type.

DBA/Developer should not create separate physical queues for the Service2 and Service3 services. In this scenario, it is not necessary to create separate queues for the services. Both services can use the same physical message queue with the conversations for each prioritized differently.

DBA/Developer should not use MSMQ instead of Service Broker because the desired functionality is not supported. Service Broker does support the required functionality with conversation priorities.
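The two supporting statements mentioned above might be used as follows. The BrokerDB database name is an assumption; ReceiveQueue comes from the scenario:

```sql
-- Required for conversation priorities to apply to sent messages:
ALTER DATABASE BrokerDB SET HONOR_BROKER_PRIORITY ON;

-- Identify the highest-priority conversation group on the queue
-- without actually receiving its message:
DECLARE @group uniqueidentifier;
GET CONVERSATION GROUP @group FROM ReceiveQueue;
SELECT @group AS NextConversationGroup;
```

An application can then issue a RECEIVE scoped to that conversation group, processing the Service3 messages ahead of the Service2 messages waiting on the same queue.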

Friday 11 July 2014

SQL Server: LOCK_ESCALATION setting

DBA/Developer authored the following Transact-SQL statement:

CREATE PARTITION FUNCTION PFunction1 (int)
AS RANGE LEFT FOR VALUES (2000, 3000);
CREATE PARTITION SCHEME PScheme1
AS PARTITION PFunction1 TO (fg1, fg2, fg3);
CREATE TABLE TransDetails
(TransID int, Description varchar(50), Region varchar(20),
 Date datetime, TypeID int, EmpID int, Status bit)
ON PScheme1(TypeID);

DBA/Developer opens a session and executes the following Transact-SQL:

ALTER TABLE TransDetails
SET (LOCK_ESCALATION = TABLE);
GO
BEGIN TRANSACTION;
UPDATE TransDetails
SET TypeID = TypeID + 1000
WHERE TypeID > 4000;
GO

The ALTER TABLE statement sets LOCK_ESCALATION for the TransDetails table to TABLE, which is actually the default. Normally, table-level lock escalation is fine. However, with partitioned tables, DBA/Developer may not want to lock the entire table, but only a single partition, such as the partition containing the rows being updated in this scenario. This would allow queries against other partitions in the table to execute successfully without being blocked.

By default, SQL Server will escalate locks to the table level as needed. However, to override this default behavior, or to disable lock escalation altogether, DBA/Developer can set the LOCK_ESCALATION setting for a table. The LOCK_ESCALATION setting can have one of the following values:

TABLE: Lock escalation is performed to the table level, which is the default.

AUTO: Lock escalation is performed to the table level for non-partitioned tables, and to the partition level for partitioned tables.

DISABLE: Lock escalation is disabled in most cases. SQL Server escalates locks only in specific situations where it is absolutely required to ensure data integrity.

In this scenario, DBA/Developer could instruct SQL Server to escalate locks to the partition level only using the following statement:

ALTER TABLE TransDetails SET (LOCK_ESCALATION=AUTO);

Doing so would prevent lock escalation to the table level, and allow other queries to execute successfully on other unaffected partitions of the table. To identify the LOCK_ESCALATION setting for a table, DBA/Developer can query sys.tables. For example, the following statement would display the current LOCK_ESCALATION setting for the TransDetails table:

SELECT lock_escalation_desc
FROM sys.tables
WHERE name = 'TransDetails';

SQL Server: STOPATMARK and STOPBEFOREMARK Clauses

The STOPATMARK and STOPBEFOREMARK clauses of the RESTORE statement are used to restore the database to a marked transaction or to the point just before the marked transaction began, respectively. DBA/Developer could use the following statement to restore the log:

RESTORE LOG Research
FROM ResearchBackUp
WITH RECOVERY,
STOPATMARK='ToleranceUpdate';

When restoring a database, DBA/Developer must restore the latest full backup. If applicable, DBA/Developer must restore the latest differential backup made since the last full backup. DBA/Developer should then restore each transaction log backup made since the last full backup or, if differential backups were made, each transaction log backup made since the last differential backup. DBA/Developer should also make a copy of the current transaction log. This backup is called the tail-log backup because it contains the transactions that have not yet been backed up.

When restoring the latest full database backup and the latest differential backup, DBA/Developer should specify WITH NORECOVERY in the RESTORE statement. The NORECOVERY option is required in this scenario because transaction log backups will be applied after the database is restored. Then, DBA/Developer consecutively restores each of the transaction log backups made after the last full database backup or differential backup. Each transaction log backup must be restored with the NORECOVERY option so that all subsequent transaction log backups can be restored.

Finally, DBA/Developer should restore the transaction log backup that was made on the active transaction log. This is the latest log backup. When restoring this transaction log, DBA/Developer should use the STOPATMARK option along with the RECOVERY option in the RESTORE LOG statement. The STOPATMARK option specifies the marked transaction to which DBA/Developer wants to restore the database, and the WITH RECOVERY option brings the database to a consistent state. DBA/Developer can use marked transactions across databases to recover multiple databases to the same specific point, but doing so causes any transactions committed after the mark used as the recovery point to be lost.

DBA/Developer should not restore the log backup using WITH STOPBEFOREMARK = 'ToleranceUpdate'. The WITH STOPBEFOREMARK clause in the RESTORE LOG statement uses the log record before the marked transaction as the recovery point. In this scenario, DBA/Developer wanted to restore the Widget table to the ToleranceUpdate transaction, not to a point before it started.

DBA/Developer should not use a ROLLBACK statement to roll back to the ToleranceUpdate transaction. DBA/Developer can use a ROLLBACK statement to roll back to a specific point in a transaction, but to do so, DBA/Developer must first use the SAVE TRANSACTION statement to create a named savepoint. In this scenario, DBA/Developer did not create a savepoint; DBA/Developer created a marked transaction. For the same reason, DBA/Developer should not restore the transaction log and use a ROLLBACK statement to roll back to the ToleranceUpdate transaction, because a savepoint is required to roll back to a specific point in a transaction.
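The full restore sequence described above can be sketched as follows. This is a sketch only: the FILE positions and the single backup device ResearchBackUp are illustrative assumptions, not details from the scenario.

```sql
-- 1. Back up the tail of the log; NORECOVERY leaves the database in the restoring state
BACKUP LOG Research TO ResearchBackUp WITH NORECOVERY;

-- 2. Restore the latest full backup, deferring recovery
RESTORE DATABASE Research FROM ResearchBackUp
WITH FILE = 1, NORECOVERY;

-- 3. Restore each intermediate log backup WITH NORECOVERY
RESTORE LOG Research FROM ResearchBackUp
WITH FILE = 2, NORECOVERY;

-- 4. Restore the tail-log backup up to the marked transaction and recover
RESTORE LOG Research FROM ResearchBackUp
WITH FILE = 3, RECOVERY, STOPATMARK = 'ToleranceUpdate';
```

Only the final RESTORE LOG specifies RECOVERY, which brings the database online in a consistent state at the ToleranceUpdate mark.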

Thursday 10 July 2014

SQL Server: GROUPING SETS Clause

The GROUPING SETS clause allows DBA/Developer to explicitly specify the groups for which aggregate information should be displayed. This allows DBA/Developer to use more than one grouping within a single query. The syntax of the GROUP BY clause with a GROUPING SETS clause is as follows:

GROUP BY GROUPING SETS
(groupingset1 [,...groupingsetn])

Each grouping set can contain one or more columns or an empty set. Aggregate rows are returned in the result set only for the specified groups. Specifying an empty set, with (), indicates that a grand total row should also be returned in the result set. In this statement, DBA/Developer specified an empty set in the GROUPING SETS clause. However, DBA/Developer used the GROUPING_ID function in the HAVING clause. The GROUPING_ID function returns an integer bitmap that identifies the level of grouping; with a single column argument, it returns either 0 or 1. This HAVING clause suppresses the grand total rows.

DBA/Developer can also use the CUBE and ROLLUP operators to aggregate data. The ROLLUP operator groups the selected rows in the result set based on the values in the GROUP BY clause and returns one summary row for each group. The ROLLUP operator can be used to generate totals and subtotals: rows containing the subtotals and the grand total are also returned in the result set. When DBA/Developer specifies WITH CUBE, a summary row is included in the result set for each possible combination of the columns specified in the GROUP BY clause.

When using ROLLUP, CUBE, or GROUPING SETS, aggregate rows can be identified by their NULL values. Summary rows for grand totals contain NULL values for all grouping columns. If DBA/Developer grouped using two columns, such as ProductID and SpecialOfferID in this scenario, DBA/Developer could identify the aggregate rows as follows:

Summary rows that represent totals for each ProductID would have a value for ProductID and a NULL value for SpecialOfferID.

Summary rows that represent totals for each SpecialOfferID would have a value for SpecialOfferID and a NULL value for ProductID.

DBA/Developer should not remove the empty grouping set from the GROUPING SETS clause. To include grand totals in a query that uses the GROUPING SETS clause, DBA/Developer must include an empty set. DBA/Developer should not modify the HAVING clause to check for a value of 1 returned by the GROUPING_ID function. This would return only the grand total row, and filter out the aggregate rows for products and the aggregate rows for unique ProductID and SpecialOfferID combinations. DBA/Developer should not remove the GROUP BY and HAVING clauses and use GROUP BY SpecialOfferID WITH ROLLUP in the GROUP BY clause. All columns in the SELECT list must either use an aggregate function or be included in the query's GROUP BY clause.
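A query of the shape discussed above might look like the following sketch. The Sales.SalesOrderDetail table and OrderQty column are assumptions in the style of the AdventureWorks sample database, not details from the scenario:

```sql
SELECT ProductID, SpecialOfferID, SUM(OrderQty) AS TotalQty
FROM Sales.SalesOrderDetail
GROUP BY GROUPING SETS ((ProductID), (ProductID, SpecialOfferID), ())
-- GROUPING_ID returns bitmap 3 for the grand total row (both columns
-- aggregated away); filtering on < 3 suppresses the grand total
HAVING GROUPING_ID(ProductID, SpecialOfferID) < 3;
```

Removing the HAVING clause, or changing the comparison, would let the grand total row (with NULL in both grouping columns) appear in the result set.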

Wednesday 9 July 2014

SQL Server: FOR XML Clause

The FOR XML clause specifies that the result of the SELECT statement should be returned in XML format. DBA/Developer can specify one of the following modes with the FOR XML clause:

RAW: A single <row> element will be generated for each row in the rowset. Each non-null column value generates an attribute with the name identical to the column's name or the column's alias.

AUTO: Nested elements are returned using the columns specified in the SELECT list. Each non-null column value generates an attribute named according to the column name or column alias. The element nesting is based on the order in which the columns are specified in the SELECT list.

EXPLICIT: Each row in the rowset can be defined in detail, including attributes and elements.

PATH: Each row in the rowset can be defined in detail, but column names and column aliases are specified as XPath expressions.

DBA/Developer can also include the ELEMENTS option with the FOR XML clause. This returns the columns in the SELECT list as subelements, rather than as attributes. With AUTO mode, each table specified in the FROM clause is represented by a separate element, and each column from that table appears as a subelement of that element; tables specified first constitute higher-level elements of the hierarchy, and if specified, column aliases are used as element names. For example, in this scenario, DBA/Developer might use the following query to extract data from the Product, ProductVendor, and Vendor tables in XML format:

SELECT
p.Name AS Product,
v.Name AS Vendor,
p.ProductID
FROM Production.Product p
JOIN Purchasing.ProductVendor pv
ON p.ProductID = pv.ProductID
JOIN Purchasing.Vendor v
ON pv.VendorID = v.VendorID
FOR XML RAW,
ELEMENTS;

The query would display the XML in the following format:

<row>
  <Product>Product A</Product>
  <Vendor>VirtuArt, Inc.</Vendor>
  <ProductID>1</ProductID>
</row>
<row>
  <Product>Product B</Product>
  <Vendor>NuTex Corporation</Vendor>
  <ProductID>879</ProductID>
</row>
<row>
  <Product>Product C</Product>
  <Vendor>VirtuArt, Inc.</Vendor>
  <ProductID>712</ProductID>
</row>
<row>
  <Product>Product</Product>
  <Vendor>InterConn</Vendor>
  <ProductID>2</ProductID>
</row>

DBA/Developer should not implement a query that uses the OPENXML function in the FROM clause, or create a Transact-SQL script that uses the sp_xml_preparedocument system stored procedure and then inserts the XML into an xml data type column. OPENXML is a rowset provider function that creates a relational view of data contained in an XML document. This function can be used in SELECT statements where a table or view would be specified to extract data from an XML document. One of the function's required parameters is an integer used as a handle to the internal representation of the XML document. Therefore, before DBA/Developer can extract data from an XML document, DBA/Developer must call the sp_xml_preparedocument stored procedure to create and return the document handle. After the data has been extracted from the XML document by a SELECT query with the OPENXML function, DBA/Developer would call the sp_xml_removedocument stored procedure to remove the internal representation of the document and free resources.

DBA/Developer should not create a stored procedure that queries the tables and returns an xml data type. In this scenario, DBA/Developer wanted to display the data in XML format. Therefore, DBA/Developer would still have to display the result of the returned xml data type.
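The prepare/query/remove workflow described above can be sketched as follows. The sample XML document and the column list in the WITH clause are illustrative assumptions:

```sql
DECLARE @hDoc int;
DECLARE @xmlDoc nvarchar(max) = N'<Products>
  <Product ProductID="1" Name="Product A" />
  <Product ProductID="879" Name="Product B" />
</Products>';

-- Parse the document and return a handle to its internal representation
EXEC sp_xml_preparedocument @hDoc OUTPUT, @xmlDoc;

-- Create a relational rowset over the XML; flag 1 maps attributes to columns
SELECT ProductID, Name
FROM OPENXML(@hDoc, '/Products/Product', 1)
WITH (ProductID int, Name varchar(50));

-- Free the resources held by the internal representation
EXEC sp_xml_removedocument @hDoc;
```

Omitting the sp_xml_removedocument call leaves the parsed document cached in memory until the connection is reset.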

 

Tuesday 8 July 2014

SQL Server: TRY CATCH

A TRY...CATCH construct can be used to catch execution errors with a severity level higher than 10, as long as the error does not end the database connection. Transact-SQL code that might generate an error is included in the TRY block. If an error with a severity level greater than 10 that does not end the database connection occurs in the TRY block, control is transferred to the CATCH block. The CATCH block can contain Transact-SQL code to handle the error that occurred. Within a TRY...CATCH construct, DBA/Developer can also use functions to retrieve additional information about an error, such as ERROR_LINE, ERROR_MESSAGE, ERROR_PROCEDURE, and ERROR_SEVERITY.

The RAISERROR statement returns a user-defined error message and sets a flag to indicate that an error occurred. The full syntax of the RAISERROR statement is as follows:

RAISERROR ({msg_id | msg_str | @local_variable}
    {, severity, state}
    [, argument [,...n]])
    [WITH {LOG | NOWAIT | SETERROR} [,...n]];

The RAISERROR statement can return specific error message text or a custom error message that has been stored in the sysmessages table. To return an error message stored in sysmessages, DBA/Developer can pass the error's message_id to the RAISERROR statement as the first parameter. In this scenario, DBA/Developer could include each DML statement within the stored procedures in a TRY block, and use the RAISERROR statement within the associated CATCH block to return the desired information about the error. DBA/Developer could use the ERROR_PROCEDURE function in the CATCH block to return the name of the procedure that generated the error and the ERROR_SEVERITY function to return the error's severity level.

DBA/Developer should not implement an output parameter in each stored procedure to return the error message number. In this scenario, some of the stored procedures execute multiple DML statements. Therefore, returning information for a single error is not sufficient.

DBA/Developer should not check the value of @@ERROR after each stored procedure call because the @@ERROR function returns the error number of only the most recently executed Transact-SQL statement. The @@ERROR function returns a value of zero if the most recently executed Transact-SQL statement ran without errors. If an error occurred that corresponds to an error stored in the sysmessages table, the function returns the message_id for the error. The value of @@ERROR is reset each time a new Transact-SQL statement is executed. Therefore, DBA/Developer should call the function immediately following the Transact-SQL statement for which the error number is to be inspected.

DBA/Developer should not add a custom error message for each stored procedure to the sysmessages table using the sp_addmessage system stored procedure. The sp_addmessage system stored procedure creates a custom error message and stores the new message in the sysmessages table in the master database. After a user-defined error message is created with sp_addmessage, Transact-SQL constructs or applications can reference it using the RAISERROR statement. In this scenario, adding custom error messages with the sp_addmessage system stored procedure would define custom error messages, but would not ensure these messages were returned by the stored procedures. DBA/Developer would still need to add code within the stored procedures to implement error handling.
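The recommended pattern can be sketched as follows. The Inventory table and the literal values are illustrative assumptions; only the TRY...CATCH structure and the ERROR_PROCEDURE, ERROR_SEVERITY, and RAISERROR usage reflect the approach described above:

```sql
BEGIN TRY
    UPDATE Inventory
    SET UnitsInStock = UnitsInStock - 10
    WHERE InvID = 5;
END TRY
BEGIN CATCH
    DECLARE @ErrMsg nvarchar(2048), @Severity int;

    -- ERROR_PROCEDURE() is NULL when the error occurred outside a procedure
    SET @ErrMsg = 'Error in ' + ISNULL(ERROR_PROCEDURE(), 'ad hoc batch')
        + ': ' + ERROR_MESSAGE();
    SET @Severity = ERROR_SEVERITY();

    -- Re-raise with the original severity so callers can react to it
    RAISERROR (@ErrMsg, @Severity, 1);
END CATCH;
```

Because the CATCH block re-raises the error, a calling procedure or application still sees the failure, but with the procedure name and severity included in the message.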

Monday 7 July 2014

SQL Server: User Defined Scalar Function

DBA/Developer authored a statement to create the function:

CREATE FUNCTION dbo.udf_get_reorder_point(@v_prodid int)
RETURNS int
AS
BEGIN
    DECLARE @v_reorderpt int;
    SELECT @v_reorderpt = ReorderPoint
    FROM ProductDetails
    WHERE ProductID = @v_prodid;
    IF (@v_prodid BETWEEN 100 AND 500)
        RETURN @v_reorderpt;
    RETURN 0;
END;

In this scenario, DBA/Developer wanted to create a user-defined scalar function to return the ReorderPoint value for a specific product, but DBA/Developer wanted the function to return a zero value if the ProductID value was not between 100 and 500. A scalar function returns a single scalar value. DBA/Developer can create a user-defined function using the CREATE FUNCTION statement. The basic syntax of the CREATE FUNCTION statement when creating a scalar function is as follows:
 
CREATE FUNCTION [schema_name.]function_name
    ([{@parm_name [AS] [parmtype_schema.]parm_data_type [= default] [READONLY]} [,...n]])
RETURNS return_type
    [WITH function_opt [,...n]]
[AS]
BEGIN
    function_body
    RETURN scalar_expression
END;

DBA/Developer can pass no parameters or multiple parameters into the function using the parameter list. The RETURNS clause indicates the data type of the value that the function returns. Then, DBA/Developer includes Transact-SQL statements within the body of the function and uses the RETURN statement to return the value.

In this scenario, the statement accepts a parameter that identifies a product in the ProductDetails table, and declares and sets the value of a variable to hold the ReorderPoint value. The IF statement checks the ProductID value that was passed to the function. If the ProductID value is in the desired range, the function executes the first RETURN statement and returns the actual ReorderPoint value. However, if the ProductID is not in the desired range, the second RETURN statement executes and returns a zero value to the caller. After the function is created, DBA/Developer can call it from other Transact-SQL code. For example, DBA/Developer could use the following statement to return the ReorderPoint value for the product with a ProductID value of 100:

SELECT dbo.udf_get_reorder_point(100);

DBA/Developer should not use the CREATE FUNCTION statement that includes an ELSE clause, because that statement will return the following error:

Msg 455, Level 16, State 2, Procedure udf_get_reorder_point, Line 16
The last statement included within a function must be a return statement.

Even though the final RETURN statement may be the last statement executed, it is not the last statement in the function body, so the statement generates this error. DBA/Developer should not use the CREATE FUNCTION statement that omits the RETURNS clause, because a RETURNS clause is required to identify the data type returned. The following statement will generate a syntax error:

CREATE FUNCTION dbo.udf_get_reorder_point(@v_prodid int)
AS
BEGIN
    DECLARE @v_reorderpt int;
    SELECT @v_reorderpt = ReorderPoint
    FROM ProductDetails
    WHERE ProductID = @v_prodid;
    IF (@v_prodid BETWEEN 100 AND 500)
        RETURN @v_reorderpt;
    RETURN 0;
END;

DBA/Developer should not use the CREATE FUNCTION statement that does not include a RETURN statement. Each function must contain a RETURN statement to return the value, and the statement must be the last statement in the function. The following statement will generate an error as shown:

CREATE FUNCTION dbo.udf_get_reorder_point(@v_prodid int)
RETURNS int
AS
BEGIN
    DECLARE @v_reorderpt int;
    IF (@v_prodid BETWEEN 100 AND 500)
        SELECT @v_reorderpt = ReorderPoint
        FROM ProductDetails
        WHERE ProductID = @v_prodid
    ELSE
        SET @v_reorderpt = 0
END;

Msg 455, Level 16, State 2, Procedure udf_get_reorder_point, Line 7
The last statement included within a function must be a return statement.