
Friday 29 August 2014

SQL Server: Update Locks

First User starts a session and executes the following Transact-SQL:

BEGIN TRANSACTION
UPDATE ClassDetail WITH (TABLOCK)
SET ClassDetail.Details = ClassDetail.Details + '.This is a Freshman-level class.'
FROM ClassMaster
INNER JOIN ClassDetail
ON ClassMaster.ClassID = ClassDetail.ClassID
WHERE ClassMaster.Level = 'Freshman';

Then, Second User starts a session and executes the following Transact-SQL:

BEGIN TRANSACTION
INSERT INTO ClassMaster(Description, Seats, Level)
VALUES ('Calculus I', 25, 'Sophomore'), ('Accounting III', 80, 'Senior'), ('World History', 30, 'Freshman');
DELETE FROM dbo.ClassDetail
WHERE CDID = 2;
COMMIT TRANSACTION

First User begins a transaction and executes an UPDATE statement, but does not commit the changes. Then, Second User begins a transaction, but First User currently holds locks on the table that prevent Second User's actions from being performed. Second User's session hangs, waiting for First User's session to release the locks.

SQL Server uses locking to control which transactions have access to which resources at any given time. This prevents one transaction from overwriting another transaction's changes. SQL Server determines the locks that will be acquired based on the type of action being performed. However, you can override the default locking behavior if necessary using table hints.

The option that states First User's updates are applied first and then Second User's DML operations are performed is incorrect. First User does not commit the changes. Therefore, the changes are pending and Second User's operations are not performed. The option that states First User's updates are not performed but Second User's DML operations complete successfully is incorrect. First User's updates have not been committed and will remain pending, blocking Second User's operations from executing. The option that states Second User's session immediately returns an error is incorrect. Second User's session will wait to obtain the necessary locks to perform the required operations.
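While Second User's session is blocked, a third session can observe the blocking using the standard dynamic management views. A minimal sketch; the database name used here is an assumption:

-- Locks held or requested in the database
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('SchoolDB');

-- Which session is blocked, and by whom
SELECT session_id, blocking_session_id, wait_type
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;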

Sunday 24 August 2014

SQL Server: value() XML Method / nodes() XML Method

DBA/Developer authored the following T-SQL Query:

SELECT x.Event.value('@ID', 'varchar(10)') AS EventID,
x.Event.value('@EventDesc', 'varchar(35)') AS Description,
x.Event.value('AvailSeats[1]', 'int') AS TotalSeats,
x.Event.value('Enrolled[1]', 'int') AS Enrolled,
x.Event.value('AvailSeats[1]', 'int') - x.Event.value('Enrolled[1]', 'int') AS Remaining
FROM @xmldoc.nodes('//EventData/Event')
AS x(Event);

This query uses the value() XML method to extract values of specific elements and attributes. However, the value() method must return a single value, and the XML contains multiple events. Therefore, the correct query includes the nodes() XML method in the FROM clause of the query. The nodes() method accepts an XQuery string and returns all of the specified nodes as a result set. In this scenario, the nodes() method returns a result set that contains one row for each event. Each row contains the corresponding XML for the event. Then, in the SELECT list, the table alias defined is used with the value() method to return the details for each specific event.

DBA/Developer could also use the nodes() method to split the XML into separate rows. For example, the following statement would generate a result set containing a single xml column, Events, with each row containing the XML representing a specific event:

SELECT x.Event.query('.') AS Events
FROM @xmldoc.nodes('//EventData/Event')
AS x(Event);
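The XML document itself is not reproduced in this post. A document shaped like the following, with illustrative values, would satisfy both queries:

DECLARE @xmldoc xml;
SET @xmldoc = N'<EventData>
<Event ID="E1" EventDesc="Orientation">
<AvailSeats>100</AvailSeats>
<Enrolled>72</Enrolled>
</Event>
<Event ID="E2" EventDesc="Career Fair">
<AvailSeats>250</AvailSeats>
<Enrolled>198</Enrolled>
</Event>
</EventData>';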

Wednesday 20 August 2014

SQL Server: CONTAINS Predicate

The CONTAINS predicate can be used to search character-based columns for an inflectional form of a specific word, a specific word or phrase, a word or phrase that is near another word or phrase, or words or phrases with weighted values. DBA/Developer can combine several conditions using the AND, OR, and NOT keywords. The CONTAINS search condition is case insensitive. The syntax for the CONTAINS predicate is as follows:

CONTAINS ({column | *}, '<contains_search_condition>')

The search condition is enclosed in single quotes, but individual words or phrases inside the search condition can be offset with double quotes. A search condition of ' "weld" AND "grind" ' treats AND as a logical operator and returns all rows having both weld and grind in the column. In contrast, a search condition of ' "weld AND grind" ' treats the double-quoted text as a single phrase. Because stopwords, or noise words, such as AND are not indexed and match any word within a phrase, this returns all rows having weld and grind separated by a single word, and the search condition ' "weld OR grind" ' would produce an identical result.

The given statement does not return the ID and Description values for all rows containing the words weld and grind near each other in the full-text enabled columns. To perform such a search, DBA/Developer would use the NEAR keyword in the CONTAINS predicate. The given statement does not return the ID and Description values for all rows with a Name containing either the word weld or grind. Using an asterisk (*) in the first argument of the CONTAINS predicate specifies that all columns registered for full-text searching be used. The given statement does not return the ID and Description values for all rows containing the phrase "weld and grind" in the full-text enabled columns.
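Hedged examples of the variations discussed above, written against a hypothetical full-text indexed table named dbo.Equipment:

-- Both words anywhere in the column (logical AND)
SELECT ID, Description FROM dbo.Equipment
WHERE CONTAINS(Description, '"weld" AND "grind"');

-- The two words near each other
SELECT ID, Description FROM dbo.Equipment
WHERE CONTAINS(Description, 'weld NEAR grind');

-- The phrase, searched across all full-text indexed columns
SELECT ID, Description FROM dbo.Equipment
WHERE CONTAINS(*, '"weld and grind"');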

Tuesday 19 August 2014

SQL Server: IMPLICIT_TRANSACTIONS Option

The IMPLICIT_TRANSACTIONS option controls how transactions are handled. Transactions can be specified explicitly, implicitly, or automatically. In autocommit mode, which is the default, each Transact-SQL statement is treated as a separate transaction, and each statement is automatically committed when it successfully executes. In this scenario, if DBA/Developer sets the IMPLICIT_TRANSACTIONS option to ON, transactions will be implicitly created. This means that a transaction will be automatically started if there is no current transaction, and must be explicitly committed or rolled back. Any uncommitted implicit transactions are rolled back when the user disconnects.

DBA/Developer should not add the SET IMPLICIT_TRANSACTIONS OFF statement. This is the default setting, and all Transact-SQL would use autocommit mode. Each DML statement would be considered a separate transaction and automatically committed if it executed successfully. DBA/Developer should not set the transaction isolation level to READ COMMITTED. The transaction isolation level controls how a transaction behaves when there are other concurrent transactions. However, in this scenario, transactions are not used. Therefore, this would not be applicable. DBA/Developer should not include a custom error-handler for each DML statement because this would require more effort than necessary. DBA/Developer can check the value returned by the @@ERROR function for each DML statement and include blocks of code to perform the desired actions. However, in this scenario, DBA/Developer wants minimal development effort.
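A short sketch of the behavior; the table name is an assumption:

SET IMPLICIT_TRANSACTIONS ON;

-- This statement automatically starts a transaction because none is active.
UPDATE dbo.Inventory
SET UnitsInStock = UnitsInStock - 1
WHERE InvID = 5;

-- The change is not permanent until it is explicitly committed.
COMMIT TRANSACTION;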

Sunday 17 August 2014

SQL Server: EXECUTE AS Clause

The EXECUTE AS clause is used to specify the security context under which the stored procedure should execute. DBA/Developer can specify the following values in the EXECUTE AS clause:

SELF: The stored procedure executes under the security context of the user creating or altering the stored procedure.

OWNER: The stored procedure executes under the security context of the user that owns the procedure.

CALLER: The stored procedure executes under the security context of the user calling the procedure. To execute the stored procedure successfully, the caller would require permissions on the stored procedure and on any underlying database objects referenced by the stored procedure.

user_name: The stored procedure executes under the security context of the specified user, regardless of which user called the procedure.

The ENCRYPTION clause encrypts the CREATE PROCEDURE statement used to create the stored procedure. This ensures that the statement is not stored in plain text that is readable by others. When a stored procedure's definition is encrypted, users cannot use the sp_helptext system stored procedure to view the stored procedure's definition. With the given CREATE PROCEDURE statement, if DBA/Developer attempted to use the sp_helptext system stored procedure to access the definition of the stored procedure, DBA/Developer would receive the following output:

The text for object 'dbo.usp_UpdateSalesHistory' is encrypted.

In addition, users cannot view the definition of the stored procedure by querying the sys.sql_modules catalog view or using the Visual Designer in SQL Server Management Studio. In this scenario, if a user queried sys.sql_modules using the following SELECT statement, a definition value of NULL would be returned:

SELECT definition
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.usp_UpdateSalesHistory');

DBA/Developer should not use the CREATE PROCEDURE statement that omits the EXECUTE AS clause. If the EXECUTE AS clause is omitted when creating a stored procedure, then by default the stored procedure will execute under the security context of the caller. This statement also includes the RECOMPILE clause, which will cause the stored procedure to be recompiled each time it is called. DBA/Developer can use the RECOMPILE clause to force stored procedure compilation, but this was not required in this scenario.

DBA/Developer should not use the CREATE PROCEDURE statement that specifies a user name in the EXECUTE AS clause because this will cause the stored procedure to execute under the security context of DevUser. DBA/Developer should not use the CREATE PROCEDURE statement that includes the EXTERNAL NAME clause because this statement will generate a syntax error. The EXTERNAL NAME clause is only valid when creating a CLR stored procedure. A CLR stored procedure is a stored procedure written in a .NET Framework language. To create a CLR stored procedure, DBA/Developer uses a .NET language to define a static class method, compiles the class, registers the assembly in SQL Server with the CREATE ASSEMBLY statement, and uses the CREATE PROCEDURE statement in SQL Server to reference the assembly. The EXTERNAL NAME clause must be specified in the CREATE PROCEDURE statement to reference the appropriate method.
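The scenario's CREATE PROCEDURE statement is not reproduced in this copy of the post. A statement of the kind being discussed, with an illustrative body, would look like this:

CREATE PROCEDURE dbo.usp_UpdateSalesHistory
WITH ENCRYPTION, EXECUTE AS OWNER
AS
-- Illustrative body; the real procedure's logic is not shown in the post
UPDATE dbo.SalesHistory
SET LoadDate = GETDATE();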

Thursday 14 August 2014

SQL Server: MERGE WHEN MATCHED/WHEN NOT MATCHED Options

The MERGE statement allows DBA/Developer to combine inserts, updates, and deletes, and to use a single statement to perform multiple DML actions. Using a MERGE statement instead of issuing multiple DML statements can improve performance. In a MERGE statement, DBA/Developer must specify a source and a target and include a join. Then, DBA/Developer uses the MATCHED clauses to specify the actions to be performed. The basic syntax of the MERGE statement is as follows:

MERGE [INTO] target_table
USING source_table
ON join_condition
[WHEN MATCHED THEN matched_action]
[WHEN NOT MATCHED [BY TARGET]
THEN notmatched_action]
[WHEN NOT MATCHED BY SOURCE
THEN notmatchedsource_action];

The two WHEN NOT MATCHED THEN clauses specify the actions to take if the records from the source table are not in the target table, or vice versa. The WHEN MATCHED THEN clause specifies the action to take if the records from the source table are in the target table. When synchronizing tables, DBA/Developer can use BY TARGET or BY SOURCE to further control how the synchronization occurs when there are differences in the source data and the target data. In this scenario, DBA/Developer could use the following MERGE statement:

MERGE ARTrxMaster AS t
USING ARTrx AS s
ON (t.TrxID = s.TrxID)
WHEN MATCHED THEN
UPDATE SET t.TrxType = s.TrxType, t.Quantity = s.Quantity, t.UnitPrice = s.UnitPrice,
t.ExtAmt = s.ExtAmt, t.TaxAmt = s.TaxAmt, t.LoadDate = GETDATE()
WHEN NOT MATCHED BY TARGET AND s.TrxType = 'TX1' THEN
INSERT (TrxID, TrxType, Quantity, UnitPrice, ExtAmt, TaxAmt, LoadDate)
VALUES (s.TrxID, s.TrxType, s.Quantity, s.UnitPrice, s.ExtAmt, s.TaxAmt, GETDATE())
WHEN NOT MATCHED BY SOURCE AND t.TrxType = 'TX1' THEN
DELETE
OUTPUT $action, INSERTED.*, DELETED.*;

With this statement, the following results would occur under these conditions:

The WHEN MATCHED clause would execute and update the target table (ARTrxMaster) if a row existed in both tables. The UPDATE statement does not include a table name because the table to be updated is implicit as the target table of the merge. The WHEN NOT MATCHED BY TARGET clause would insert rows into the target table (ARTrxMaster) if a row exists in the source table but not in the target table and the additional condition specified in the WHEN clause is met. Only rows in the ARTrx table that have a TrxType of 'TX1' would be inserted into ARTrxMaster. The WHEN NOT MATCHED BY SOURCE clause would delete rows from the target table (ARTrxMaster) which do not exist in the source table (ARTrx) if the additional condition specified in the WHEN clause is met. Only rows in ARTrxMaster with a TrxType of 'TX1' would be deleted.

This statement also includes an OUTPUT clause. The OUTPUT clause allows DBA/Developer to retrieve and display information about the rows affected by the MERGE statement. The OUTPUT clause can display this information to the user, insert the data into another permanent or temporary table or table variable using an INTO clause, or pass the data to a nested DML statement for processing. Within the OUTPUT clause, DBA/Developer specifies the column values that should be retrieved by using the column names with the INSERTED and DELETED prefixes. The DELETED prefix returns the column value before the DML operation, and the INSERTED prefix returns the column value after the DML operation but before executing any triggers. DBA/Developer can also use $action to return a string indicating which type of DML operation affected the row. In this statement, DBA/Developer specified $action, INSERTED.*, DELETED.*. This statement would return a result set of the rows affected by the MERGE, with the action that was performed for each row and the before and after values for each action. In this scenario, DBA/Developer could automate this synchronization process by using the MERGE statement in a job that is scheduled to run weekly.
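To capture the OUTPUT rows instead of returning them to the caller, the INTO clause mentioned above can direct them to a table variable. An abbreviated sketch; the table variable and its columns are assumptions:

DECLARE @MergeLog TABLE (MergeAction nvarchar(10), TrxID int);

MERGE ARTrxMaster AS t
USING ARTrx AS s
ON (t.TrxID = s.TrxID)
WHEN MATCHED THEN
UPDATE SET t.LoadDate = GETDATE()
OUTPUT $action, INSERTED.TrxID INTO @MergeLog (MergeAction, TrxID);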

Saturday 9 August 2014

SQL Server: Table Partition

To create a partitioned table, DBA/Developer first creates a partition function using the CREATE PARTITION FUNCTION statement, which specifies the number of partitions and how partitioning will occur. A partition function maps the rows of a table or index into partitions based on the specified partition boundary values. The CREATE PARTITION FUNCTION statement in this scenario creates a partition function named MyPF with a partition column of the int data type. The FOR VALUES clause of the CREATE PARTITION FUNCTION statement specifies the boundary value of each partition. The RANGE RIGHT and RANGE LEFT clauses are used to specify how the actual boundary values are handled: RANGE LEFT indicates that each boundary value is stored in the partition on its left, with the boundary values sorted in ascending order from left to right. The CREATE PARTITION FUNCTION statement in this scenario defines five partitions as follows:

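The table showing the five partition ranges did not survive in this copy of the post. As an illustration only, with assumed boundary values, a partition function of this shape would be:

CREATE PARTITION FUNCTION MyPF (int)
AS RANGE LEFT FOR VALUES (7000, 27000, 57000, 87000);

Four boundary values produce five partitions; with RANGE LEFT, each boundary value falls in the partition to its left.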
Next, DBA/Developer must create a partition scheme based on the previously created partition function using the CREATE PARTITION SCHEME statement. The partition scheme maps the partitions created by the partition function to filegroups. One or more filegroups can be specified in the partition scheme. The AS PARTITION clause of the CREATE PARTITION SCHEME statement identifies the partition function, and the TO clause specifies the filegroups. The complete syntax for the CREATE PARTITION SCHEME statement is:

CREATE PARTITION SCHEME name_of_partition_scheme
AS PARTITION name_of_partition_function [ALL]
TO ({file_group | [PRIMARY]} [,...n]);

The partition scheme created in this scenario would map the partitions to filegroups as follows:

After DBA/Developer has created the partition function and partition scheme, DBA/Developer must create the SalesHistory table as a partitioned table using a CREATE TABLE statement that includes an ON clause. The ON clause identifies the partition scheme and the column on which the table will be partitioned. The specified partitioning scheme identifies the partition function that is used. The complete syntax of the CREATE TABLE statement to create a partitioned table is as follows:

CREATE TABLE table_name (column_def1, column_def2, ...)
ON name_of_partition_scheme (partition_column_name);

The arguments used in the statement syntax are as follows:

table_name: Specifies the name of the table to be created.

column_def: Specifies the details of the column(s) in the table.

name_of_partition_scheme: Specifies the name of the partition scheme that identifies the partition function and the filegroups to which the partitions of the table will be written.

partition_column_name: Specifies the name of the column in the table on which the table will be partitioned. The column specified must match the column definition specified in the associated partition function in terms of data type, length, and precision.

DBA/Developer should not use the Transact-SQL that includes RANGE LEFT FOR VALUES (6999, 26999, 56999, 86999) in the CREATE PARTITION FUNCTION statement because this will partition the SalesHistory table as follows:
 
 
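For reference, a hedged end-to-end sketch of the steps described in this section; the filegroup names, column definitions, and partition column are assumptions, since the scenario's table definition is not reproduced in the post:

CREATE PARTITION SCHEME MyPS
AS PARTITION MyPF
TO (FG1, FG2, FG3, FG4, FG5);

CREATE TABLE SalesHistory
(SaleID int, SaleDate datetime, TotalDue money, QuantitySold int)
ON MyPS (QuantitySold);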

SQL Server: WITH ENCRYPTION Clause

The WITH ENCRYPTION clause is used to encrypt the CREATE VIEW statement that is stored in the database. This ensures that the statement is not stored in plain text that is readable by others. When a view is encrypted, the view's definition cannot be accessed using the sp_helptext system stored procedure, directly queried from the sys.sql_modules catalog view, or accessed from the Visual Designer in SQL Server Management Studio.

You should not delete the view from sys.objects. You should never directly alter data in system objects, but rather allow SQL Server to manage them. You should not alter the view to remove schema binding. Schema binding of a view ensures that the base tables used by the view remain usable; it does not control whether the statement that defines the view is encrypted or stored as plain text. The WITH SCHEMABINDING clause of the CREATE VIEW statement ensures that base tables of a view cannot be dropped or modified in a way that affects the view's definition. This prevents users from dropping or modifying base tables in such a way that the view becomes unusable. To drop base tables or make such modifications, you would need to first drop the view, alter the view omitting SCHEMABINDING, or alter the view to remove any unwanted dependencies.

You should not implement Transparent Data Encryption (TDE) for the view. TDE is a special type of full database encryption that uses a symmetric key to encrypt the entire database. It is not used to encrypt the statements that define views, functions, stored procedures, or triggers.
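As a minimal sketch (the scenario's view and base table are not shown, so these names are assumptions), an encrypted view is created like this:

CREATE VIEW dbo.SalesSummaryView
WITH ENCRYPTION
AS
SELECT SalesOrderID, TotalDue
FROM dbo.SalesHeader;

-- sp_helptext now reports the definition as encrypted:
EXEC sp_helptext 'dbo.SalesSummaryView';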

Tuesday 5 August 2014

SQL Server: M-2-M Relationship

DBA/Developer could use the following Transact-SQL statements to create the Project and ProjectManager tables and a junction table, ProjectXPM, to implement the many-to-many relationship:

CREATE TABLE Project (
ProjectID int PRIMARY KEY,
Description varchar(25),
StartDate datetime);

CREATE TABLE ProjectManager (
PMID int PRIMARY KEY,
LastName varchar(30),
FirstName varchar(30));

CREATE TABLE ProjectXPM (
ProjectID int,
PMID int,
CONSTRAINT PK_Project_PM PRIMARY KEY CLUSTERED (ProjectID, PMID),
FOREIGN KEY (ProjectID) REFERENCES Project (ProjectID),
FOREIGN KEY (PMID) REFERENCES ProjectManager (PMID));

Each row in the Project table represents a single project within the organization and includes the project's start date. Each row in the ProjectManager table represents a project manager who may be assigned to projects within the organization. Each row in the ProjectXPM table represents a project manager assigned to a specific project. The ProjectXPM table contains a composite primary key consisting of the combination of ProjectID and PMID. This allows the table to include multiple project managers for each project and multiple projects for each project manager, while preventing duplicate assignments. The data model in this scenario would resemble the following:

 
DBA/Developer should not create a one-to-one relationship between the Project entity and the ProjectManager entity. This data model would allow each project manager to be assigned to only one project and each project to be assigned only one project manager, but would not allow projects to have multiple project managers or allow project managers to manage multiple projects. DBA/Developer should not create a one-to-many relationship from the ProjectManager entity to the Project entity. This data model would allow each project manager to be assigned to multiple projects, but would only allow each project to be assigned a single project manager. DBA/Developer should not create a one-to-many relationship from the Project entity to the ProjectManager entity. This data model would allow each project to have one or more project managers, but would allow a project manager to be assigned to only one project.

Monday 4 August 2014

SQL Server: DB Scalability

Scalable shared databases allow DBA/Developer to distribute multiple read-only databases to multiple instances for read-only reporting. DBA/Developer should not configure peer-to-peer replication. In this scenario, there was no requirement that reporting users have the most up-to-date information. DBA/Developer could use peer-to-peer replication to scale out a heavily accessed database. This would allow users to access consistent data and provide multiple database copies to be updated by different users. DBA/Developer should not create a database snapshot. A database snapshot is a copy of a database as of a single point in time. DBA/Developer should not configure log shipping. With log shipping, transaction log backups from a primary database are automatically transferred to another database and then applied.



Sunday 3 August 2014

SQL Server: Scalar Input Values

For a stored procedure to query multiple tables in the database using a complex join and return the query results to the caller, DBA/Developer should create a stored procedure that accepts scalar input values and returns a result set. The result set that is returned can then be accessed by the in-house applications. A stored procedure can return an entire result set by including a SELECT statement within the stored procedure. For example, DBA/Developer might use the following CREATE PROCEDURE statement to return a single result set to the caller:

CREATE PROCEDURE usp_GetDepositList (@DepositID int)
AS
SELECT DepositID, FirstName, LastName, Amount
FROM Deposit
WHERE DepositID = @DepositID
ORDER BY Amount DESC;
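Client code then simply executes the procedure to receive the result set; the parameter value here is illustrative:

EXEC usp_GetDepositList @DepositID = 1012;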

DBA/Developer should not create a CLR stored procedure that accepts multiple scalar input parameters. There is no need to use a CLR stored procedure in this scenario. A CLR stored procedure is a stored procedure created using a .NET Framework language and then registered with the database so it can be called from Transact-SQL. CLR stored procedures should be used to implement complex logic or functionality that is not inherently supported in the database, such as logic that requires the use of language constructs available in a .NET Framework language. Although a CLR stored procedure could work in this scenario, the option of creating a CLR stored procedure that accepts multiple scalar input parameters is incorrect because this option does not return a result set.

DBA/Developer should not create a stored procedure that accepts multiple scalar input parameters and returns data using an OUTPUT parameter. DBA/Developer would use an OUTPUT parameter to return only a few values rather than the entire query result.

Saturday 2 August 2014

SQL Server: Injection Attack

SQL injection attacks occur when a user maliciously provides input that is embedded into a dynamic SQL statement. The sp_executesql system stored procedure accepts parameters and constructs the dynamic SQL. This eliminates the need to dynamically construct SQL statements with string concatenation, and minimizes the likelihood of SQL injection attacks. This is also more likely to improve performance because, with parameters being used, cached execution plans are more likely to be reused. The sp_executesql system stored procedure accepts an @stmt parameter that contains one or more SQL statements, an optional @params parameter to identify parameters used in the SQL statements, and optional user-defined parameter values. The following Transact-SQL illustrates how DBA/Developer might use sp_executesql with the tables in this scenario:

DECLARE @sql nvarchar(max), @params nvarchar(50);

-- Selects the names of all shipping methods used for a given vendor
SELECT @sql = N'SELECT DISTINCT sm.Name ' +
N'FROM Purchasing.PurchaseOrderHeader p ' +
N'INNER JOIN Purchasing.ShipMethod sm ' +
N'ON p.ShipMethodID = sm.ShipMethodID ' +
N'WHERE p.VendorID = @v';
SELECT @params = N'@v int';
EXEC sp_executesql @sql, @params, 99;

DBA/Developer should not use the EXECUTE statement to execute dynamic SQL. Using the EXECUTE statement to execute dynamic SQL increases the likelihood of SQL injection attacks. DBA/Developer should not implement all dynamic SQL using CLR functions and procedures. CLR functions and procedures still introduce the possibility of SQL injection attacks if they pass dynamically constructed SQL to the database for execution.

DBA/Developer should not implement all dynamic SQL within Transact-SQL stored procedures. Although using parameterized stored procedures may decrease the likelihood of SQL injection attacks, SQL injection may still occur when dynamic SQL is executed from within a stored procedure. For example, suppose DBA/Developer has the following stored procedure defined:

CREATE PROCEDURE getpos (@sname varchar(50))
AS
DECLARE @sql nvarchar(max) = N'SELECT p.PurchaseOrderID, sm.Name ' +
N'FROM PurchaseOrderHeader p ' +
N'INNER JOIN ShipMethod sm ' +
N'ON p.ShipMethodID = sm.ShipMethodID ' +
N'WHERE sm.Name LIKE ''' + @sname + N'%'';';
-- Executing the concatenated string is what exposes the injection risk
EXEC (@sql);

With this stored procedure, a user could maliciously pass a parameter such as ';DROP TABLE ShipMethod;--' and introduce malicious SQL code.

Friday 1 August 2014

SQL Server: XML Indexes

XML indexes can be used to optimize queries on an xml column. The optimizer uses these indexes differently than regular indexes, but they can often improve performance for a given workload. XML indexes can be either primary or secondary. A primary index must be created first. Each xml column in a table can have only one primary XML index defined. Primary indexes can be created on xml columns containing either typed or untyped XML. The primary XML index uses all paths and values in the xml column to create an index with a B-tree structure. DBA/Developer creates a primary XML index using the CREATE PRIMARY XML INDEX statement. In this scenario, DBA/Developer could create a primary XML index on the TransItems column using the following statement:

CREATE PRIMARY XML INDEX PXML_TransItems
ON Transactions(TransItems);

DBA/Developer can also use secondary XML indexes to improve performance of specific queries. A secondary XML index can be created on an xml column only after a primary XML index has been created. When creating a secondary index, DBA/Developer omits the PRIMARY keyword and specifies the primary XML index in the USING XML INDEX clause. Secondary XML indexes can be one of three types: PATH, VALUE, or PROPERTY. The type of secondary index is specified by the FOR clause. VALUE secondary XML indexes can improve performance if queries often search for specific values without specified paths. PATH secondary XML indexes may improve performance when path expressions are often used in queries. PROPERTY secondary XML indexes can improve performance for queries that use the value() method to retrieve one or more values. For example, in this scenario, DBA/Developer might create a secondary XML index using the following statement:

CREATE XML INDEX SXML_TransItems
ON Transactions(TransItems)
USING XML INDEX PXML_TransItems
FOR VALUE;

DBA/Developer should not add code to the stored procedure to shred the data found in the TransItems column into columns in another table and create a composite index on the new table. In this scenario, the TransItems column is used by other applications, so it is best to leave it in XML format rather than shredding it into relational data. In addition, this solution would likely decrease performance of the GetItems stored procedure. DBA/Developer should not create a VALUE secondary index or a PATH secondary index on the TransItems column because to create a secondary XML index, a primary XML index must first be created. DBA/Developer must specify the previously created primary XML index in the USING XML INDEX clause. If DBA/Developer attempt to create a secondary index omitting the USING XML INDEX clause, the statement will generate a syntax error.

Wednesday 30 July 2014

SQL Server: XACT_ABORT Option

DBA/Developer authored the UpdInventory stored procedure as follows:


CREATE PROCEDURE UpdInventory (@InvID int, @qty int)
AS
BEGIN TRANSACTION
UPDATE Inventory
SET UnitsInStock = UnitsInStock - @qty
WHERE InvID = @InvID;
INSERT INTO InvQtyHistory (InvID, Adj, Description)
VALUES (@InvID, @qty, 'Inventory adjustment for purchase order');
COMMIT TRANSACTION

To ensure that if the UpdInventory stored procedure attempts to update the UnitsInStock column with a negative value, then no inserts or updates to any of the tables will be committed, DBA/Developer sets the XACT_ABORT option to ON at the beginning of the AddPurchaseOrder stored procedure.

The XACT_ABORT option controls how SQL Server handles transactions when a run-time error occurs. When the XACT_ABORT option is set to ON and a Transact-SQL statement raises an error at run time, the entire transaction is terminated and rolled back. When the XACT_ABORT option is set to OFF, only the Transact-SQL statement that raised the error is rolled back; the remaining statements in the transaction will be executed. The default value of the XACT_ABORT option is OFF.

When transactions are nested, the outer transaction controls whether both transactions are committed or rolled back. Therefore, to ensure that all modifications take place or are rolled back, DBA/Developer can set the XACT_ABORT option to ON before beginning the outer transaction, namely the transaction started in AddPurchaseOrder. If an error occurs in AddPurchaseOrder, including an invalid update in UpdInventory, all statements in both transactions will be rolled back.

DBA/Developer should not set the XACT_ABORT option to OFF at the beginning of the AddPurchaseOrder stored procedure. With this setting, each statement within the AddPurchaseOrder stored procedure would be considered individually. Some of the modifications might be committed, while others might not. DBA/Developer should not set the XACT_ABORT option to ON at the beginning of the UpdInventory stored procedure. This would control how modifications are committed within the UpdInventory stored procedure, but not in the outer procedure. DBA/Developer should not include the code for both stored procedures in a single stored procedure that includes a TRY...CATCH block. Although DBA/Developer can use a TRY...CATCH block to perform error processing, the UpdInventory stored procedure might be called from other Transact-SQL code. Therefore, this would not be the best choice, because it could affect other code.
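For illustration, a minimal sketch of what AddPurchaseOrder might look like with this fix applied; the procedure's actual definition and the PurchaseOrder table are not shown in the post, so those details are assumptions:

CREATE PROCEDURE AddPurchaseOrder (@InvID int, @qty int)
AS
SET XACT_ABORT ON;  -- set before the outer transaction starts
BEGIN TRANSACTION
-- Assumed insert; the real procedure's DML is not reproduced in the post
INSERT INTO PurchaseOrder (InvID, Qty) VALUES (@InvID, @qty);
EXEC UpdInventory @InvID, @qty;  -- starts and commits a nested transaction
COMMIT TRANSACTION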

SQL Server: Deterministic and Non-Deterministic Functions

A function is deterministic if it returns the same value each time it is passed the same values. Nondeterministic user-defined functions have certain restrictions on how they can be used. In this scenario, the best approach would be to rewrite the function as a stored procedure because stored procedures do not have this restriction. If DBA/Developer is not sure whether a function is deterministic, DBA/Developer can return the IsDeterministic property using the OBJECTPROPERTY function. The IsDeterministic property is 1 if the function is deterministic, and 0 if it is not. For example, DBA/Developer could use the following query to determine whether the dbo.udf_get_days_emp function is deterministic or nondeterministic:

SELECT OBJECTPROPERTY(OBJECT_ID('dbo.udf_get_days_emp'), 'IsDeterministic');

DBA/Developer should not remove the reference to the GETDATE() built-in function in the function's definition. Although this would make the function deterministic, DBA/Developer would not be able to calculate the number of days the employee has been employed. DBA/Developer should not create a view that includes only the function's result and create an index on the view, because an indexed view cannot reference a nondeterministic function. In this scenario, DBA/Developer would have to create a view based on each value returned from the function. Unlike a UDF or stored procedure, DBA/Developer cannot pass a parameter to a view.

DBA/Developer should not create a computed column in the Employee table that uses the function in its expression definition and create an index on the computed column. Computed columns are virtual columns that are not physically stored in the table by default. Each computed column uses the AS keyword followed by an expression that evaluates to a value. The expression can contain constants, functions, operators, and references to other columns within the table. The value of the computed column is calculated each time a query that references it executes. DBA/Developer can also include the optional PERSISTED keyword when creating a computed column. When a persisted computed column is created, it is physically stored in the table and is recalculated each time a column value referenced in the calculation expression changes.

To be able to create an index on a computed column, the column must be deterministic and precise. A computed column is considered deterministic if it produces the same value each time it is passed the same values. A computed column is considered precise if it does not perform floating-point calculations using either a float or real data type. In this scenario, the function is nondeterministic, so using it in the computed column's expression would prevent DBA/Developer from creating an index on the computed column. DBA/Developer can query the IsDeterministic and IsPrecise properties using the COLUMNPROPERTY function to determine if a computed column is deterministic and precise, respectively.
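For example, assuming a computed column named DaysEmployed on the Employee table (the column name is an assumption), the two properties could be checked as follows:

SELECT COLUMNPROPERTY(OBJECT_ID('dbo.Employee'), 'DaysEmployed', 'IsDeterministic') AS IsDeterministic,
COLUMNPROPERTY(OBJECT_ID('dbo.Employee'), 'DaysEmployed', 'IsPrecise') AS IsPrecise;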

Monday 28 July 2014

SQL Server: exist() and query() XML Methods

DBA/Developer can use the exist() XML method in the WHERE clause of a SELECT statement. The exist() method accepts a string representing a specific node, and returns a bit value indicating whether the node exists in the XML. The exist() method returns a value of 1 if the specified node exists. In this scenario, DBA/Developer passed it a string to search for the <SKU> element with an ID attribute value of "XS-7". DBA/Developer also included CompletionDate IS NULL in the WHERE clause. Therefore, the query will return the ProjectID, Description, and TargetDate for all projects that do not have a completion date and have a material list that contains the XS-7 item.
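Because exist() returns a bit rather than a Boolean condition, the correct query compares its result to 1. A reconstruction (the original post shows only the incorrect variants):

SELECT ProjectID, Description, TargetDate
FROM dbo.Project
WHERE MaterialList.exist('/Materials/Item/SKU[@ID="XS-7"]') = 1
AND CompletionDate IS NULL;

DBA/Developer should not use the following query: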

SELECT ProjectID, Description, TargetDate
FROM dbo.Project
WHERE MaterialList.exist
('/Materials/Item/SKU[@ID="XS-7"]')
AND CompletionDate IS NULL;

This query will return an error because the exist() method returns a bit value of 0 or 1, which must be compared to a value (for example, = 1) to form a valid condition. This query will generate the following error:

Msg 4145, Level 15, State 1, Line 7

An expression of non-boolean type specified in a context where a condition is expected, near 'AND'.

DBA/Developer should not use the following query:

SELECT *, MaterialList.query('Materials/Item/SKU[@ID="XS-7"]')
FROM dbo.Project;

This query will return a row for all projects because no WHERE clause was specified. For projects with a material list including the XS-7 item, it would return a single <SKU> element as follows:

<SKU ID="XS-7" ItemDesc="Wall Brackets" />

DBA/Developer should not use the following query:

SELECT ProjectID, Description, TargetDate
FROM dbo.Project
WHERE MaterialList.query
('/Materials/Item/SKU[.="XS-7"]')
AND CompletionDate IS NULL;

The query() method is used to query and retrieve XML elements and attributes from an XML instance. The method accepts a string XQuery expression that determines which elements and element attributes are extracted. It will return untyped XML, not a Boolean value that can be referenced in a WHERE clause. This query will return the following error:

Msg 4145, Level 15, State 1, Line 5

An expression of non-boolean type specified in a context where a condition is expected, near 'AND'.

SQL Server: Data Modeling

DBA/Developer can create the data model using the following Transact-SQL:

CREATE TABLE Departments
(DeptID int PRIMARY KEY,
DeptName varchar(25));

CREATE TABLE TrainingCourses
(CourseID int PRIMARY KEY,
CourseName varchar(30));

CREATE TABLE Employees
(EmployeeID int PRIMARY KEY,
FirstName varchar(25),
LastName varchar(30),
DepartmentID int FOREIGN KEY REFERENCES Departments(DeptID));

CREATE TABLE TrainingHistory
(CourseID int,
EmployeeID int,
TrainingDate datetime,
CONSTRAINT PK_THistory PRIMARY KEY CLUSTERED (CourseID, EmployeeID),
FOREIGN KEY (CourseID) REFERENCES TrainingCourses(CourseID),
FOREIGN KEY (EmployeeID) REFERENCES Employees(EmployeeID));

The data model in this scenario will resemble the following:

 
A PRIMARY KEY constraint is used to create a primary key for a table that uniquely identifies each row in the table. A FOREIGN KEY constraint is used to establish a relationship between a primary key or a unique key in one table and another column in the same table or a different table.

Each row in the Departments table represents a single department identified by the DeptID primary key. Each row in the Employees table represents a single employee identified by the EmployeeID primary key. The FOREIGN KEY constraint in the Employees table that references the Departments table ensures that each employee is assigned to a single department, but allows departments to have multiple employees. Each row in the TrainingCourses table represents a single course offered to employees, identified by the CourseID primary key.

To record the fact that an employee has completed a course, DBA/Developer must create an additional table, referred to as a junction table, to represent the many-to-many relationship between Employees and TrainingCourses. The junction table contains a composite primary key consisting of the primary keys of the joined tables, and has FOREIGN KEY constraints on each of the primary key columns to reference the original tables. The junction table may also include any other applicable columns. Each row in the TrainingHistory table represents the fact that a specific employee has taken a specific training course. The composite primary key on the CourseID and EmployeeID columns in the TrainingHistory table ensures that an employee can take a specific course only once, and allows each training course to be taken by multiple employees. The foreign key that references the EmployeeID column in the Employees table ensures that only employees can take the offered training courses. The foreign key that references the CourseID column in the TrainingCourses table ensures that employees may only take courses that are offered.

All of the other options are incorrect because they will introduce redundant data into the data model or do not support the business requirements. DBA/Developer should not create only two tables, Employees and TrainingCourses. If DBA/Developer only created two tables, redundant data would be introduced. For example, DBA/Developer might store department data in the Employees table. If so, because departments have multiple employees, the DeptName would be redundant for employees within a given department. In a normalized data model, each table contains data for a single entity, such as an employee, department, or course. DBA/Developer should not create only three tables, Employees, Departments, and TrainingCourses. DBA/Developer must have a TrainingHistory table to represent the fact that a specific employee has taken a specific training course.

Wednesday 23 July 2014

SQL Server: Partitioned View

Partitioned views are used when DBA/Developer has similar data stored in multiple tables and wants to create a view to allow access to all of the data as if it were stored in a single table. The tables referenced in the view can reside on the same server or on different servers. Partitioned views are implemented using the UNION ALL operator. For example, if DBA/Developer had three separate tables with an identical structure on the same server, DBA/Developer might use the following statement to create a partitioned view that allows users to query data from all three tables:

CREATE VIEW PartView
AS SELECT * FROM Table1
UNION ALL
SELECT * FROM Table2
UNION ALL
SELECT * FROM Table3;

In this scenario, DBA/Developer has multiple TransactionHistory tables residing on separate servers, each of which contains a subset of transaction history data. DBA/Developer can create a view across servers, known as a distributed partitioned view, by first creating a linked server definition for each branch and then creating the partitioned view. The partitioned view is created using a fully-qualified name in each SELECT statement. Using a distributed partitioned view would allow branch offices to access their transactions and enforce the CHECK constraint defined for each branch's TransactionHistory table, but would also allow users at the corporate office to query all data as if it resided in a single table. In addition, if a corporate office user issued a query against one branch's transaction history, the query optimizer would use the individual CHECK constraints defined on the tables to optimize performance of the query and search only the required tables. Using partitioned views would also allow the base tables to be managed separately. This can improve availability and decentralize administration effort because each base table can be backed up, restored, reorganized, or managed individually as needed.

DBA/Developer should not implement a nested view that exposes the required data. A nested view is a view that references another view in its definition. A nested view would not be appropriate in this scenario. For optimum performance, DBA/Developer should avoid nested views when possible, and when nested views are used, the level of nesting should be limited where possible. DBA/Developer should not create a view for each branch office that includes the WITH CHECK OPTION clause. If DBA/Developer created a view for each branch office, it would not maximize performance of corporate queries, and it would increase the complexity of creating queries across all branches. A query that accessed transaction history across multiple branches would have to reference each branch's view. When creating a view, the WITH CHECK OPTION clause is used to ensure that no data modifications can be made through a view that would cause the underlying data to violate the view's definition. DBA/Developer should not create a single partitioned table that includes a single CHECK constraint and transaction history from all branch offices. In this scenario, leaving the transaction history data in separate tables across multiple servers allows transaction history at each branch to be managed independently. This provides better data availability, while still providing optimum performance of corporate queries. DBA/Developer might choose to use a single partitioned table if all the base tables resided on the same server.
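A hedged sketch of such a distributed partitioned view, using assumed linked server, database, and table names:

CREATE VIEW dbo.AllTransactionHistory
AS
SELECT * FROM BranchA.SalesDB.dbo.TransactionHistory
UNION ALL
SELECT * FROM BranchB.SalesDB.dbo.TransactionHistory
UNION ALL
SELECT * FROM BranchC.SalesDB.dbo.TransactionHistory;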

Saturday 19 July 2014

SQL Server: Database Mail

Database Mail sends e-mail messages using SMTP. To use Database Mail, DBA/Developer does not have to install an Extended MAPI client on the SQL server. Database Mail is an external process that runs outside of the Database Engine; therefore, it has minimal impact on the SQL server. Database Mail offers many advantages over using SQL Server Mail. Database Mail can use multiple SMTP accounts and profiles, limit the size and types of attachments, and log all mail events that occur. Mail messages sent by Database Mail use Service Broker to queue and deliver e-mail messages. Therefore, Service Broker must be active in the msdb database to successfully send e-mail messages using Database Mail.

Database Mail is disabled by default. DBA/Developer can enable Database Mail using the Database Mail Configuration Wizard in SQL Server Management Studio, the sp_configure system stored procedure, or the Surface Area Configuration utility. To launch the Database Mail Configuration Wizard, DBA/Developer should first expand the Management node in the Object Explorer in SQL Server Management Studio as shown:

 
After expanding the Management node, DBA/Developer should right-click Database Mail and select Configure Database Mail. This will launch the Database Mail Configuration Wizard. At the first screen, click Next to proceed past the welcome information. At the second screen, select the Setup Database Mail option as shown and click the Next button.

 


The Database Mail Configuration Wizard will walk DBA/Developer through the initial steps of configuring Database Mail and will enable Database Mail. To enable Database Mail using the sp_configure system stored procedure, DBA/Developer can enable the Database Mail XPs option as follows:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Database Mail XPs', 1;
GO
RECONFIGURE;
GO

To enable Database Mail using the Surface Area Configuration utility, DBA/Developer should right-click the server in Object Explorer and select the Facets option. In the View Facets window, DBA/Developer should select Server Configuration for the Facet option and set the DatabaseMailEnabled property to True as shown:

 
After enabling Database Mail, DBA/Developer can use Database Mail system stored procedures to manage mail accounts and profiles or send e-mail messages. For example, in this scenario, the following code might be used to send an e-mail to a sales representative using Database Mail:

EXEC msdb.dbo.sp_send_dbmail
@recipients = N'asmith@virtuart.com',
@body = 'An order > $1000 for a priority customer has shipped.',
@subject = 'Priority Customer Order',
@profile_name = 'VirtuArtMailProfile',
@query = N'SELECT *
FROM Sales.SalesOrderHeader h
INNER JOIN Sales.SalesOrderDetail d
ON h.SalesOrderID = d.SalesOrderID
WHERE h.TotalDue > 1000
AND h.CustType = ''Priority''
AND h.SalesOrderID = ''43652''',
@attach_query_result_as_file = 1,
@query_attachment_filename = 'PriorityOrderDetails.txt';

The sp_send_dbmail system stored procedure is used to send an e-mail message using Database Mail. This procedure accepts parameters defining the details of the e-mail message, such as the recipients, message subject, message body, and importance, and sends the e-mail message.

DBA/Developer should not use SQL Server Mail. SQL Server Mail requires an Extended MAPI client to be installed on the server, and runs as a server process. This would not allow DBA/Developer to send messages using SMTP and would have more impact on the server than using Database Mail. DBA/Developer should not use Distributed Transaction Coordinator (DTC). Microsoft Distributed Transaction Coordinator (MS DTC) is used to provide for transactional processing across multiple SQL Server instances. DBA/Developer should not use SQL Server alerts. SQL Server alerts provide automatic notification when specific errors or events occur on the SQL server. When a server error or event occurs, it is recorded in the Windows application log. SQL Server Agent reviews the application log entries and fires alerts defined for the recorded events.

Friday 18 July 2014

SQL Server: ALTER PARTITION FUNCTION statement

DBA/Developer can use the ALTER PARTITION FUNCTION statement to add a partition to an existing partitioned table, or to combine two partitions in a partitioned table. The complete syntax for the ALTER PARTITION FUNCTION statement is as follows:

ALTER PARTITION FUNCTION partition_function_name()
{SPLIT RANGE (boundary_value) | MERGE RANGE (boundary_value)};



The MERGE RANGE clause combines two partitions, and the SPLIT RANGE clause splits a partition into two partitions. Each of these clauses specifies the boundary value at which the split or merge should occur. In addition, before DBA/Developer splits partitions, DBA/Developer must ensure that the partition scheme has enough filegroups to accommodate the new partition, or an error will be generated. DBA/Developer must also specify a boundary value that is different from any other defined boundary value.

DBA/Developer should not consolidate the partitions using the MERGE RANGE clause and re-partition the table. The MERGE RANGE clause is used when combining two partitions. DBA/Developer should not alter the partition scheme and rebuild the table partitions. The partition scheme maps partitions to filegroups; modifying the partition scheme would not affect the number of partitions. DBA/Developer should not create a temporary table to store the data, and then drop and re-create the table. There is no need to re-create the table. This would make the table unavailable to users and would not be the best choice.
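For example, to add a partition, DBA/Developer first designates the filegroup that will hold it and then splits at a new boundary value. The scheme, function, filegroup names, and boundary value below are assumptions:

ALTER PARTITION SCHEME MyPS
NEXT USED FG6;

ALTER PARTITION FUNCTION MyPF()
SPLIT RANGE (120000);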
 
 

Wednesday 16 July 2014

SQL Server: Dropping the Priority column from a table

DBA/Developer authored the following Transact-SQL:

CREATE TABLE Blogging (
ID int IDENTITY(1,1) PRIMARY KEY,
Description varchar(30) NOT NULL,
MaxAttendance smallint DEFAULT 0,
Type bit NULL,
Priority tinyint CHECK (Priority BETWEEN 0 AND 10),
Cost money NULL);

DBA/Developer created a view on the table using the following statement:

CREATE VIEW BloggingView
WITH SCHEMABINDING
AS SELECT ID, Description, Priority
FROM dbo.Blogging
WHERE Priority BETWEEN 1 AND 5;

To drop the Priority column from the table, DBA/Developer should remove the CHECK constraint on the Priority column and alter the view to remove all references to the Priority column. When DBA/Developer created the view, DBA/Developer included the WITH SCHEMABINDING clause. The WITH SCHEMABINDING clause ensures that base tables of a view cannot be dropped or modified in a way that affects the view's definition. This prevents users from dropping or modifying base tables in a way that makes the view unusable. To drop base tables or make such modifications, DBA/Developer would need to first drop the view, alter the view omitting SCHEMABINDING, or alter the view to remove any unwanted dependencies. In this scenario, the Priority column also has a CHECK constraint, which must be removed before dropping the column or the following error occurs: