Pages

Friday 29 August 2014

SQL Server: Update Locks

First user starts a session and executes the following Transact-SQL:

BEGIN TRANSACTION
UPDATE ClassDetail WITH (TABLOCK)
SET ClassDetail.Details = ClassDetail.Details + '.This is a Freshman-level class.'
FROM ClassMaster
INNER JOIN ClassDetail
ON ClassMaster.ClassID = ClassDetail.ClassID
WHERE ClassMaster.Level = 'Freshman';

Then, Second User starts a session and executes the following Transact-SQL:

BEGIN TRANSACTION
INSERT INTO ClassMaster(Description, Seats, Level)
VALUES ('Calculus I', 25, 'Sophomore'), ('Accounting III', 80, 'Senior'), ('World History', 30, 'Freshman');
DELETE FROM dbo.ClassDetail
WHERE CDID = 2;
COMMIT TRANSACTION

First User begins a transaction and executes an UPDATE statement, but does not commit the changes. Then, Second User begins a transaction, but First User currently holds locks on the table that prevent Second User's actions from being performed. Second User's session hangs, waiting for First User's session to release the locks.

SQL Server uses locking to control which transactions have access to which resources at any given time. This prevents one transaction from overwriting another transaction's changes. SQL Server determines the locks that will be acquired based on the type of action being performed. However, you can override the default locking behavior if necessary by using table hints.

The option that states First User's updates are applied first and then Second User's DML operations are performed is incorrect. First User does not commit the changes; therefore, the changes are pending and Second User's operations are not performed.

The option that states First User's updates are not performed but Second User's DML operations complete successfully is incorrect. First User's updates have not been committed and remain pending, blocking Second User's operations from executing.

The option that states Second User's session immediately returns an error is incorrect. Second User's session will wait to obtain the necessary locks to perform the required operations.
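While Second User's session is blocked, a third session can identify the blocking chain. This is a minimal sketch, assuming the observer has VIEW SERVER STATE permission:

```sql
-- Run from a third session to see who is blocking whom.
-- session_id is the blocked session; blocking_session_id holds the locks.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```

Once identified, First User can release the locks by issuing COMMIT TRANSACTION or ROLLBACK TRANSACTION, at which point Second User's session resumes.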

Sunday 24 August 2014

SQL Server: value() XML method/ nodes() XML method

DBA/Developer authored the following T-SQL Query:

SELECT x.Event.value('@ID', 'varchar(10)') AS EventID,
       x.Event.value('@EventDesc', 'varchar(35)') AS Description,
       x.Event.value('AvailSeats[1]', 'int') AS TotalSeats,
       x.Event.value('Enrolled[1]', 'int') AS Enrolled,
       x.Event.value('AvailSeats[1]', 'int') - x.Event.value('Enrolled[1]', 'int') AS Remaining
FROM @xmldoc.nodes('//EventData/Event') AS x(Event);

This query uses the value() XML method to extract values of specific elements and attributes. However, the value() method must return a single value, and the XML contains multiple events. Therefore, the correct query includes the nodes() XML method in the FROM clause of the query. The nodes() method accepts an XQuery string and returns all of the specified nodes as a result set. In this scenario, the nodes() method returns a result set that contains one row for each event. Each row contains the corresponding XML for the event. Then, in the SELECT list, the table alias defined is used with the value() method to return the details for each specific event. DBA/Developer could also use the nodes() method to separate XML. For example, the following statement would generate a result set containing a single xml column, Events, with each row containing the XML representing a specific event.

SELECT x.Event.query('.') AS Events
FROM @xmldoc.nodes('//EventData/Event') AS x(Event);
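As a sketch, the following hypothetical @xmldoc value illustrates the shape of XML these queries assume (the element and attribute names are taken from the queries above; the data values are invented):

```sql
DECLARE @xmldoc xml = N'
<EventData>
  <Event ID="E1" EventDesc="Orientation">
    <AvailSeats>100</AvailSeats>
    <Enrolled>75</Enrolled>
  </Event>
  <Event ID="E2" EventDesc="Workshop">
    <AvailSeats>40</AvailSeats>
    <Enrolled>12</Enrolled>
  </Event>
</EventData>';

-- nodes() returns one row per Event node; value() then extracts
-- scalars from the XML in each row.
SELECT x.Event.value('@ID', 'varchar(10)') AS EventID,
       x.Event.value('AvailSeats[1]', 'int')
         - x.Event.value('Enrolled[1]', 'int') AS Remaining
FROM @xmldoc.nodes('//EventData/Event') AS x(Event);
```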

Wednesday 20 August 2014

SQL Server: CONTAINS Predicate

The CONTAINS predicate can be used to search character-based columns for an inflectional form of a specific word, a specific word or phrase, a word or phrase that is near another word or phrase, or words or phrases with weighted values. DBA/Developer can combine several conditions using the AND, OR, and NOT keywords. The CONTAINS search condition is case insensitive. The syntax for the CONTAINS predicate is as follows:

CONTAINS ({column | *}, '<contains_search_condition>')

The search condition is enclosed in single quotes, and individual words or phrases inside the search condition can be offset with double quotes. A search condition of '"weld" AND "grind"' uses the AND keyword and returns all rows having both weld and grind in the column. In contrast, a phrase search condition of '"weld and grind"' returns all rows having weld and grind separated by a stopword, because stopwords, or noise words, such as and are not indexed. For the same reason, the phrase search condition '"weld or grind"' would produce an identical result.

The given statement does not return the ID and Description values for all rows containing the words weld and grind near each other in the full-text enabled columns. To perform such a search, DBA/Developer would use the NEAR keyword in the CONTAINS predicate. The given statement does not return the ID and Description values for all rows with a Name containing either the word weld or grind. Using an asterisk (*) in the first argument of the CONTAINS predicate specifies that all columns registered for full-text searching be used. The given statement does not return the ID and Description values for all rows containing the phrase "weld and grind" in the full-text enabled columns.
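For illustration, the variations discussed above can be sketched as follows, assuming a hypothetical full-text indexed table named Product with ID and Description columns:

```sql
-- Both words, anywhere in the Description column
SELECT ID, Description
FROM Product
WHERE CONTAINS(Description, '"weld" AND "grind"');

-- The two words near each other
SELECT ID, Description
FROM Product
WHERE CONTAINS(Description, 'weld NEAR grind');

-- Either word, searching all columns registered for full-text search
SELECT ID, Description
FROM Product
WHERE CONTAINS(*, '"weld" OR "grind"');
```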

Tuesday 19 August 2014

SQL Server: IMPLICIT_TRANSACTIONS Option

The IMPLICIT_TRANSACTIONS option controls how transactions are handled. Transactions can be specified explicitly, implicitly, or automatically. In autocommit mode, which is the default, each Transact-SQL statement is treated as a separate transaction, and each statement is automatically committed when it successfully executes.

In this scenario, if DBA/Developer sets the IMPLICIT_TRANSACTIONS option to ON, transactions will be created implicitly. This means that a transaction is automatically started if there is no current transaction, and must be explicitly committed or rolled back. Any uncommitted implicit transactions are rolled back when the user disconnects.

DBA/Developer should not add the SET IMPLICIT_TRANSACTIONS OFF statement. This is the default setting, and all Transact-SQL would use autocommit mode. Each DML statement would be considered a separate transaction and automatically committed if it executed successfully.

DBA/Developer should not set the transaction isolation level to READ COMMITTED. The transaction isolation level controls how a transaction behaves when there are other concurrent transactions. However, in this scenario, transactions are not used, so this would not be applicable.

DBA/Developer should not include a custom error handler for each DML statement because this would require more effort than necessary. DBA/Developer can check the value returned by the @@ERROR function for each DML statement and include blocks of code to perform the desired actions. However, in this scenario, DBA/Developer wants minimal development effort.
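A minimal sketch of the behavior, assuming a hypothetical Orders table:

```sql
SET IMPLICIT_TRANSACTIONS ON;

-- This DELETE implicitly starts a transaction because none is active.
DELETE FROM Orders WHERE OrderID = 1;

-- Nothing is permanent until an explicit COMMIT (or ROLLBACK).
SELECT @@TRANCOUNT;    -- an implicit transaction is now open

ROLLBACK TRANSACTION;  -- the DELETE is undone
```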

Sunday 17 August 2014

SQL Server: EXECUTE AS Clause

The EXECUTE AS clause is used to specify the security context under which the stored procedure should execute. DBA/Developer can specify the following values in the EXECUTE AS clause:

SELF: The stored procedure executes under the security context of the user creating or altering the stored procedure, that is, the user who specified the EXECUTE AS SELF clause.

OWNER: The stored procedure executes under the security context of the user that owns the procedure.

CALLER: The stored procedure executes under the security context of the user calling the procedure. To execute the stored procedure successfully, the calling user requires permissions on the stored procedure and on any underlying database objects referenced by the stored procedure.

user_name: The stored procedure executes under the security context of the specified user, regardless of which user called the stored procedure.
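As a hedged sketch, a procedure that always runs under its owner's security context might look like the following (the procedure body and table are illustrative, not the scenario's actual definition):

```sql
CREATE PROCEDURE dbo.usp_ArchiveSales
WITH EXECUTE AS OWNER
AS
BEGIN
    -- Runs with the owner's permissions, so callers need EXECUTE
    -- on the procedure but no direct permissions on the table.
    UPDATE dbo.SalesHistory
    SET Archived = 1
    WHERE SaleDate < DATEADD(year, -1, GETDATE());
END;
```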

The ENCRYPTION clause encrypts the CREATE PROCEDURE statement used to create the stored procedure. This ensures that the statement is not stored in plain text that is readable by others. When a stored procedure's definition is encrypted, users cannot use the sp_helptext system stored procedure to view the stored procedure's definition. With the given CREATE PROCEDURE statement, if DBA/Developer attempted to use sp_helptext to access the definition of the stored procedure, the following output would be returned:

The text for object 'dbo.usp_UpdateSalesHistory' is encrypted.

In addition, users cannot view the definition of the stored procedure by querying the sys.sql_modules catalog view or by using the Visual Designer in SQL Server Management Studio. In this scenario, if a user queried sys.sql_modules using the following SELECT statement, a definition value of NULL would be returned:

SELECT definition
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.usp_UpdateSalesHistory');

DBA/Developer should not use the CREATE PROCEDURE statement that omits the EXECUTE AS clause. If DBA/Developer omits the EXECUTE AS clause when creating a stored procedure, then by default the stored procedure executes under the security context of the caller. This statement also includes the RECOMPILE clause, which causes the stored procedure to be recompiled each time it is called. DBA/Developer can use the RECOMPILE clause to force stored procedure compilation, but this was not required in this scenario.

DBA/Developer should not use the CREATE PROCEDURE statement that specifies a user name in the EXECUTE AS clause because this will cause the stored procedure to execute under the security context of DevUser.

DBA/Developer should not use the CREATE PROCEDURE statement that includes the EXTERNAL NAME clause because this statement will generate a syntax error. The EXTERNAL NAME clause is only valid when creating a CLR stored procedure, which is a stored procedure written in a .NET Framework language. To create a CLR stored procedure, DBA/Developer uses a .NET language to define a static class method, compiles the class, registers the assembly in SQL Server with the CREATE ASSEMBLY statement, and uses the CREATE PROCEDURE statement with the EXTERNAL NAME clause to reference the appropriate method.
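The last two steps of the CLR workflow described above might be sketched as follows; the assembly path, assembly name, class, and method names are purely hypothetical:

```sql
-- Register the compiled .NET assembly with the database.
CREATE ASSEMBLY SalesProcs
FROM 'C:\assemblies\SalesProcs.dll'
WITH PERMISSION_SET = SAFE;

-- Map a T-SQL procedure to the static method inside the assembly.
CREATE PROCEDURE dbo.usp_ClrUpdateSales
AS EXTERNAL NAME SalesProcs.[SalesProcs.StoredProcedures].UpdateSales;
```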

Thursday 14 August 2014

SQL Server: MERGE WHEN MATCHED/WHEN NOT MATCHED Options

The MERGE statement allows DBA/Developer to combine inserts, updates, and deletes, and to use a single statement to perform multiple DML actions. Using a MERGE statement instead of issuing multiple DML statements can improve performance. In a MERGE statement, DBA/Developer must specify a source and a target and include a join. Then, DBA/Developer uses the MATCHED clauses to specify the actions to be performed. The basic syntax of the MERGE statement is as follows:

MERGE [INTO] target_table
USING source_table
ON join_condition
[WHEN MATCHED THEN matched_action]
[WHEN NOT MATCHED [BY TARGET]
THEN notmatched_action]
[WHEN NOT MATCHED BY SOURCE
THEN notmatchedsource_action];

The two WHEN NOT MATCHED THEN clauses specify the actions to take if the records from the source table are not in the target table, or vice versa. The WHEN MATCHED THEN clause specifies the action to take if the records from the source table are in the target table. When synchronizing tables, DBA/Developer can use BY TARGET or BY SOURCE to further control how the synchronization occurs when there are differences in the source data and the target data. In this scenario, DBA/Developer could use the following MERGE statement:

MERGE ARTrxMaster AS t
USING ARTrx AS s
ON (t.TrxID = s.TrxID)
WHEN MATCHED THEN
  UPDATE SET t.TrxType = s.TrxType, t.Quantity = s.Quantity, t.UnitPrice = s.UnitPrice,
             t.ExtAmt = s.ExtAmt, t.TaxAmt = s.TaxAmt, t.LoadDate = GETDATE()
WHEN NOT MATCHED BY TARGET AND s.TrxType = 'TX1' THEN
  INSERT (TrxID, TrxType, Quantity, UnitPrice, ExtAmt, TaxAmt, LoadDate)
  VALUES (s.TrxID, s.TrxType, s.Quantity, s.UnitPrice, s.ExtAmt, s.TaxAmt, GETDATE())
WHEN NOT MATCHED BY SOURCE AND t.TrxType = 'TX1' THEN
  DELETE
OUTPUT $action, INSERTED.*, DELETED.*;

With this statement, the following results would occur under these conditions:
The WHEN MATCHED clause would execute and update the target table (ARTrxMaster) if a row existed in both tables. The UPDATE statement does not include a table name because the table to be updated is implicitly the target table of the merge.

The WHEN NOT MATCHED BY TARGET clause would insert rows into the target table (ARTrxMaster) if the row exists in the source table but not in the target table and the additional condition specified in the WHEN clause is met. Only rows in the ARTrx table that have a TrxType of 'TX1' would be inserted into ARTrxMaster.

The WHEN NOT MATCHED BY SOURCE clause would delete rows from the target table (ARTrxMaster) that do not exist in the source table (ARTrx) if the additional condition specified in the WHEN clause is met. Only rows in ARTrxMaster with a TrxType of 'TX1' would be deleted.

This statement also includes an OUTPUT clause. The OUTPUT clause allows DBA/Developer to retrieve and display information about the rows affected by the MERGE statement. The OUTPUT clause can display this information to the user, insert the data into another permanent or temporary table or table variable using an INTO clause, or pass the data to a nested DML statement for processing. Within the OUTPUT clause, DBA/Developer specifies the column values to retrieve by using column names with the INSERTED and DELETED prefixes. The DELETED prefix returns the column value before the DML operation, and the INSERTED prefix returns the column value after the DML operation but before executing any triggers. DBA/Developer can also use $action to return a string indicating which type of DML operation affected each row. In this statement, DBA/Developer specified $action, INSERTED.*, DELETED.*, so the statement would return a result set of the rows affected by the MERGE, with the action performed for each row and the before and after values for each action.
In this scenario, DBA/Developer could automate this synchronization process by using the MERGE statement in a job that is scheduled to run weekly.
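If the weekly job should keep a persistent audit rather than return the rows to the caller, the OUTPUT clause can be directed INTO a table. The following simplified sketch assumes a hypothetical MergeAudit table and reduces the MERGE to two clauses for brevity:

```sql
-- Hypothetical audit table for the weekly synchronization job
CREATE TABLE MergeAudit (
    MergeAction nvarchar(10),
    TrxID       int,
    AuditDate   datetime DEFAULT GETDATE());

MERGE ARTrxMaster AS t
USING ARTrx AS s
ON (t.TrxID = s.TrxID)
WHEN MATCHED THEN
  UPDATE SET t.LoadDate = GETDATE()
WHEN NOT MATCHED BY TARGET THEN
  INSERT (TrxID) VALUES (s.TrxID)
-- Capture what the MERGE did into the audit table instead of a result set.
OUTPUT $action, COALESCE(INSERTED.TrxID, DELETED.TrxID)
INTO MergeAudit (MergeAction, TrxID);
```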

Saturday 9 August 2014

SQL Server: Table Partition

To create a partitioned table, DBA/Developer first creates a partition function using the CREATE PARTITION FUNCTION statement, which specifies the number of partitions and how partitioning will occur. A partition function maps the rows of a table or index into partitions based on the specified partition boundary values. The CREATE PARTITION FUNCTION statement in this scenario creates a partition function named MyPF with a partition column of the int data type. The FOR VALUES clause of the CREATE PARTITION FUNCTION statement specifies the boundary value of each partition. The RANGE RIGHT and RANGE LEFT clauses specify how the actual boundary values are handled. RANGE LEFT indicates that each boundary value is stored in the partition on its left side, with the boundary values sorted in ascending order from left to right. The CREATE PARTITION FUNCTION statement in this scenario defines five partitions as follows:

Next, DBA/Developer must create a partition scheme based on the previously created partition function using the CREATE PARTITION SCHEME statement. The partition scheme maps the partitions created by the partition function to filegroups. One or more filegroups can be specified in the partition scheme. The AS PARTITION clause of the CREATE PARTITION SCHEME statement identifies the partition function, and the TO clause specifies the filegroups. The complete syntax for the CREATE PARTITION SCHEME statement is:

CREATE PARTITION SCHEME name_of_partition_scheme
AS PARTITION name_of_partition_function [ALL]
TO ({file_group | [PRIMARY]} [,...n]);

The partition scheme created in this scenario would map the partitions to filegroups as follows:

After DBA/Developer has created the partition function and partition scheme, DBA/Developer must create the SalesHistory table as a partitioned table using a CREATE TABLE statement that includes an ON clause. The ON clause identifies the partition scheme and the column on which the table will be partitioned. The specified partitioning scheme identifies the partition function that is used. The complete syntax of the CREATE TABLE statement to create a partitioned table is as follows:

CREATE TABLE table_name (column_def1, column_def2, ...)
ON name_of_partition_scheme (partition_column_name);

The arguments used in the statement syntax are as follows:

table_name: Specifies the name of the table to be created.

column_def: Specifies the details of the column(s) in the table.

name_of_partition_scheme: Specifies the name of the partition scheme that identifies the partition function and the filegroups to which the partitions of the table will be written.

partition_column_name: Specifies the name of the column in the table on which the table will be partitioned. The column specified must match the column definition specified in the associated partition function in terms of data type, length, and precision.

DBA/Developer should not use the Transact-SQL that includes RANGE LEFT FOR VALUES (6999, 26999, 56999, 86999) in the CREATE PARTITION FUNCTION statement because this will partition the SalesHistory table as follows:
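Putting the three steps of the correct approach together, a hedged end-to-end sketch might look like the following; the boundary values, column definitions, and use of a single filegroup are illustrative, not the scenario's exact values:

```sql
-- 1. Partition function: int partition column, RANGE LEFT boundaries.
--    Four boundary values define five partitions.
CREATE PARTITION FUNCTION MyPF (int)
AS RANGE LEFT FOR VALUES (9999, 29999, 59999, 89999);

-- 2. Partition scheme: map all five partitions to filegroups.
--    ALL places every partition on the single filegroup named.
CREATE PARTITION SCHEME MyPS
AS PARTITION MyPF
ALL TO ([PRIMARY]);

-- 3. Partitioned table: the ON clause names the scheme and the
--    partition column, which must match the function's data type.
CREATE TABLE SalesHistory (
    SaleID   int NOT NULL,
    SaleDate datetime,
    Amount   money)
ON MyPS (SaleID);
```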
 
 

SQL Server: WITH ENCRYPTION Clause

The WITH ENCRYPTION clause is used to encrypt the CREATE VIEW statement that is stored in the database. This ensures that the statement is not stored in plain text that is readable by others. When a view is encrypted, the view's definition cannot be accessed using the sp_helptext system stored procedure, directly queried from the sys.sql_modules catalog view, or accessed from the Visual Designer in SQL Server Management Studio.

You should not delete the view from sys.objects. You should never directly alter data in system objects, but rather allow SQL Server to manage them.

You should not alter the view to remove schema binding. Schema binding of a view ensures that the base tables used by the view remain usable; it does not control whether the statement that defines the view is encrypted or stored as plain text. The WITH SCHEMABINDING clause of the CREATE VIEW statement ensures that base tables of a view cannot be dropped or modified in a way that affects the view's definition. This prevents users from dropping or modifying base tables in such a way that the view becomes unusable. To drop base tables or make such modifications, you would need to first drop the view, alter the view omitting SCHEMABINDING, or alter the view to remove any unwanted dependencies.

You should not implement Transparent Data Encryption (TDE) for the view. TDE is a special type of full database encryption that uses a symmetric key to encrypt the entire database. It is not used to encrypt the statements that define views, functions, stored procedures, or triggers.
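A minimal sketch combining both clauses, using a hypothetical view and base table:

```sql
-- The definition will not be visible via sp_helptext or sys.sql_modules,
-- and dbo.Orders cannot be dropped or altered in a way that breaks the view.
-- SCHEMABINDING requires two-part object names and an explicit column list.
CREATE VIEW dbo.vw_ActiveOrders
WITH ENCRYPTION, SCHEMABINDING
AS
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE Status = 'Active';
```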

Tuesday 5 August 2014

SQL Server: M-2-M Relationship

DBA/Developer could use the following Transact-SQL statements to create the Project and ProjectManager tables and a junction table, ProjectXPM, to implement the many-to-many relationship:

CREATE TABLE Project (
  ProjectID int PRIMARY KEY,
  Description varchar(25),
  StartDate datetime);

CREATE TABLE ProjectManager (
  PMID int PRIMARY KEY,
  LastName varchar(30),
  FirstName varchar(30));

CREATE TABLE ProjectXPM (
  ProjectID int,
  PMID int,
  CONSTRAINT PK_Project_PM PRIMARY KEY CLUSTERED (ProjectID, PMID),
  FOREIGN KEY (ProjectID) REFERENCES Project (ProjectID),
  FOREIGN KEY (PMID) REFERENCES ProjectManager (PMID));

Each row in the Project table represents a single project within the organization and includes the project's start date. Each row in the ProjectManager table represents a project manager who may be assigned to projects within the organization. Each row in the ProjectXPM table represents a project manager assigned to a specific project. The ProjectXPM table contains a composite primary key consisting of the combination of ProjectID and PMID. This ensures that the table may include multiple project managers for each project and multiple projects for each project manager. The data model in this scenario would resemble the following:

 
DBA/Developer should not create a one-to-one relationship between the Project entity and the ProjectManager entity. This data model would allow each project manager to be assigned to only one project and each project to be assigned only one project manager, but would not allow projects to have multiple project managers or allow project managers to manage multiple projects.

DBA/Developer should not create a one-to-many relationship from the ProjectManager entity to the Project entity. This data model would allow each project manager to be assigned to multiple projects, but would only allow each project to be assigned a single project manager.

DBA/Developer should not create a one-to-many relationship from the Project entity to the ProjectManager entity. This data model would allow each project to have one or more project managers, but would allow a project manager to be assigned to only one project.
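Once the junction table is populated, resolving the many-to-many relationship is a two-join query, as in this sketch:

```sql
-- All managers assigned to each project, resolved through ProjectXPM
SELECT p.Description, pm.LastName, pm.FirstName
FROM Project AS p
INNER JOIN ProjectXPM AS x
  ON p.ProjectID = x.ProjectID
INNER JOIN ProjectManager AS pm
  ON x.PMID = pm.PMID
ORDER BY p.Description;
```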

Monday 4 August 2014

SQL Server: DB Scalability

Scalable shared databases allow DBA/Developer to distribute multiple read-only databases to multiple instances for read-only reporting.

DBA/Developer should not configure peer-to-peer replication. In this scenario, there was no requirement that reporting users have the most up-to-date information. DBA/Developer could use peer-to-peer replication to scale out a heavily accessed database. This would allow users to access consistent data and provide multiple database copies to be updated by different users.

DBA/Developer should not create a database snapshot. A database snapshot is a copy of a database as of a single point in time.

DBA/Developer should not configure log shipping. With log shipping, transaction log backups from a primary database are automatically transferred to another database and then applied.



Sunday 3 August 2014

SQL Server: Scalar Input Values

For a stored procedure to query multiple tables in the database using a complex join and return the query results to the caller, DBA/Developer should create a stored procedure that accepts scalar input values and returns a result set.
The result set that is returned can then be accessed by the in-house applications. A stored procedure can return an entire result set by including a SELECT statement within the stored procedure. For example, DBA/Developer might use the following CREATE PROCEDURE statement to return a single result set to the caller:

CREATE PROCEDURE usp_GetDepositList (@DepositID int)
AS
SELECT DepositID, FirstName, LastName, Amount
FROM Deposit
WHERE DepositID = @DepositID
ORDER BY Amount DESC;

DBA/Developer should not create a CLR stored procedure that accepts multiple scalar input parameters. There is no need to use a CLR stored procedure in this scenario. A CLR stored procedure is a stored procedure created using a .NET Framework language and then registered with the database so it can be called from Transact-SQL. CLR stored procedures should be used to implement complex logic or functionality that is not inherently supported in the database, such as logic that requires language constructs available only in a .NET Framework language. Although a CLR stored procedure could work in this scenario, the option of creating a CLR stored procedure that accepts multiple scalar input parameters is incorrect because that option does not return a result set.

DBA/Developer should not create a stored procedure that accepts multiple scalar input parameters and returns data using an OUTPUT parameter. DBA/Developer would use an OUTPUT parameter to return only a few values rather than the entire query result.
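For contrast, a hedged sketch of the rejected OUTPUT-parameter approach, which returns a single value instead of a result set (the procedure name is illustrative):

```sql
CREATE PROCEDURE usp_GetDepositTotal (@DepositID int, @Total money OUTPUT)
AS
-- Returns one aggregate value through the parameter, not a result set.
SELECT @Total = SUM(Amount)
FROM Deposit
WHERE DepositID = @DepositID;
```

The caller must declare a variable and pass it with the OUTPUT keyword, for example: DECLARE @t money; EXEC usp_GetDepositTotal 1, @t OUTPUT;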

Saturday 2 August 2014

SQL Server: Injection Attack

SQL injection attacks occur when a user maliciously provides input that is embedded into a dynamic SQL statement. The sp_executesql system stored procedure accepts parameters and constructs the dynamic SQL. This eliminates the need to dynamically construct SQL statements with string concatenation and minimizes the likelihood of SQL injection attacks. It also tends to improve performance because, when parameters are used, cached execution plans are more likely to be reused. The sp_executesql system stored procedure accepts an @stmt parameter that contains one or more SQL statements, an optional @params parameter that defines the parameters used in the SQL statements, and optional user-defined parameter values. The following Transact-SQL illustrates how DBA/Developer might use sp_executesql with the tables in this scenario:

-- Selects the names of all shipping methods used for a vendor

DECLARE @sql nvarchar(max), @params nvarchar(50);

SELECT @sql = N'SELECT DISTINCT sm.Name ' +
              N'FROM Purchasing.PurchaseOrderHeader p ' +
              N'INNER JOIN Purchasing.ShipMethod sm ' +
              N'ON p.ShipMethodID = sm.ShipMethodID ' +
              N'WHERE p.VendorID = @v';
SELECT @params = N'@v int';
EXEC sp_executesql @sql, @params, 99;

DBA/Developer should not use the EXECUTE statement to execute dynamic SQL. Using the EXECUTE statement to execute dynamically constructed SQL increases the likelihood of SQL injection attacks.

DBA/Developer should not implement all dynamic SQL using CLR functions and procedures. CLR functions and procedures still introduce the possibility of SQL injection attacks if they pass dynamically constructed SQL to the database for execution.

DBA/Developer should not implement all dynamic SQL within Transact-SQL stored procedures. Although using parameterized stored procedures may decrease the likelihood of SQL injection attacks, SQL injection may still occur when dynamic SQL is executed from within a stored procedure. For example, suppose DBA/Developer has the following stored procedure defined:

CREATE PROCEDURE getpos (@sname varchar(50))
AS
DECLARE @sql nvarchar(max) = N'SELECT p.PurchaseOrderID, sm.Name ' +
                             N'FROM PurchaseOrderHeader p ' +
                             N'INNER JOIN ShipMethod sm ' +
                             N'ON p.ShipMethodID = sm.ShipMethodID ' +
                             N'WHERE sm.Name LIKE ''' + @sname + N'%'';';
EXEC (@sql);

With this stored procedure, a user could maliciously pass a parameter value such as '; DROP TABLE ShipMethod; -- to terminate the intended statement and introduce malicious SQL code.
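A safer version of getpos passes @sname as a parameter through sp_executesql instead of concatenating it into the statement text, as in this sketch (the _safe procedure name is illustrative):

```sql
CREATE PROCEDURE getpos_safe (@sname varchar(50))
AS
DECLARE @sql nvarchar(max) = N'SELECT p.PurchaseOrderID, sm.Name ' +
                             N'FROM PurchaseOrderHeader p ' +
                             N'INNER JOIN ShipMethod sm ' +
                             N'ON p.ShipMethodID = sm.ShipMethodID ' +
                             N'WHERE sm.Name LIKE @pattern';
DECLARE @pattern varchar(51) = @sname + '%';
-- @sname travels as data in a typed parameter, never as statement text,
-- so a value like '; DROP TABLE ShipMethod; -- cannot alter the query.
EXEC sp_executesql @sql, N'@pattern varchar(51)', @pattern = @pattern;
```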

Friday 1 August 2014

SQL Server: XML Indexes

XML indexes can be used to optimize queries on an xml column. The optimizer uses these indexes differently than regular indexes, but they can often improve performance based on the given workload. XML indexes can be either primary or secondary. A primary index must be created first, and each xml column in a table can have only one primary XML index defined. Primary indexes can be created on xml columns containing either typed or untyped XML. The primary XML index uses all paths and values in the xml column to create an index with a B-tree structure. DBA/Developer creates a primary XML index using the CREATE PRIMARY XML INDEX statement. In this scenario, DBA/Developer could create a primary XML index on the TransItems column using the following statement:

CREATE PRIMARY XML INDEX PXML_TransItems
ON Transactions(TransItems);

DBA/Developer can also use secondary XML indexes to improve performance of specific queries. A secondary XML index can be created on an xml column only after a primary XML index has been created. When creating a secondary index, DBA/Developer omits the PRIMARY keyword and specifies the primary XML index in the USING XML INDEX clause. Secondary XML indexes can be one of three types: PATH, VALUE, or PROPERTY. The type of secondary index is specified by the FOR clause. VALUE secondary XML indexes can improve performance if queries often search for specific values without specified paths. PATH secondary XML indexes may improve performance when path expressions are often used in queries. PROPERTY secondary XML indexes can improve performance for queries that use the value() method to retrieve one or more values. For example, in this scenario, DBA/Developer might create a secondary XML index using the following statement:

CREATE XML INDEX SXML_TransItems
ON Transactions(TransItems)
USING XML INDEX PXML_TransItems
FOR VALUE;

DBA/Developer should not add code to the stored procedure to shred the data found in the TransItems column into columns in another table and create a composite index on the new table. In this scenario, the TransItems column is used by other applications, so it is best to leave it in XML format rather than shredding it into relational data. In addition, this solution would likely decrease performance of the GetItems stored procedure.

DBA/Developer should not create a VALUE secondary index or a PATH secondary index on the TransItems column alone, because to create a secondary XML index, a primary XML index must first be created. DBA/Developer must specify the previously created primary XML index in the USING XML INDEX clause. If DBA/Developer attempts to create a secondary index omitting the USING XML INDEX clause, the statement will generate a syntax error.
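For illustration, a query like the following sketch, which searches for a specific value without a fully specified path, is the kind of workload a VALUE secondary index targets; the TransID column and the Item/@ItemID node names are hypothetical:

```sql
-- exist() looks for ItemID = 500 anywhere in the document; a VALUE
-- secondary index can help resolve this without scanning every path.
SELECT TransID
FROM Transactions
WHERE TransItems.exist('//Item[@ItemID = 500]') = 1;
```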