
Wednesday 22 June 2011

MS SQL Server 2008 Database Partitioning

You should use the horizontal partitioning method to improve database performance. This method divides a single large table into multiple smaller tables, each with fewer rows but the same columns, letting you reduce the amount of table data scanned to suit your requirements. For instance, if a table contains data from the last ten years but you frequently access only the current year's data,
you can horizontally partition the table into ten tables, each holding the data for a specific year. Performance improves because queries touch only the relevant year's table, so less data is scanned and response times are quicker.

When you apply horizontal partitioning to a table with a large number of rows, the rows are split across multiple tables while the number of columns remains the same. You gain a performance benefit because each query has fewer rows to work with. You can split the rows however your needs dictate; as an example, you can split the table into four tables where each table holds the data for one quarter of the year. In every case, the column set stays identical across the partitioned tables.

In vertical partitioning, by contrast, you keep the same number of rows but fewer columns: the columns are divided among multiple tables that share the same rows. Vertical partitioning can be of two types: normalization and row splitting. In normalization, redundant columns are removed from the primary table and moved into secondary tables, which are then linked back to the primary table through a primary key/foreign key relationship. In row splitting, each partitioned table retains a subset of the columns, so each row holds a portion of the original logical row.
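SQL Server 2008 can also implement horizontal partitioning natively through partition functions and schemes, keeping a single logical table while splitting its rows by year. A minimal sketch, with all object names hypothetical:

-- The partition function maps each SaleYear value to one of four ranges.
CREATE PARTITION FUNCTION pfSaleYear (int)
AS RANGE RIGHT FOR VALUES (2009, 2010, 2011);

-- The scheme places every partition on PRIMARY here; each partition could
-- instead be mapped to its own filegroup.
CREATE PARTITION SCHEME psSaleYear
AS PARTITION pfSaleYear ALL TO ([PRIMARY]);

-- Rows are routed to the right partition automatically by the SaleYear column.
CREATE TABLE dbo.Sales (
    SaleID   int   NOT NULL,
    SaleYear int   NOT NULL,
    Amount   money NOT NULL
) ON psSaleYear (SaleYear);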

Tuesday 21 June 2011

MS SQL Server 2008 Index Option

The sys.dm_db_index_usage_stats dynamic management view allows you to track which indexes are most frequently used for update and insert operations. Using this view, you can see when an index was last used and which kinds of operations it served. Every time an index is used for an operation such as an update, the corresponding counter in the view is incremented by one. The view is also useful for finding unused indexes, which typically cause maintenance overhead. If you have a performance-related concern, you can use the Database Engine Tuning Advisor to recommend indexes for one or more databases. The Database Engine Tuning Advisor works with different types of indexes, such as clustered and non-clustered, and can help resolve issues with unused or potentially useful indexes.
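As a sketch of how you might query the view for read and write counts per index (the join shape shown here is one common pattern, not the only one):

-- Reads come from seeks, scans, and lookups; writes come from user updates.
SELECT OBJECT_NAME(s.object_id) AS TableName,
       i.name AS IndexName,
       s.user_seeks + s.user_scans + s.user_lookups AS Reads,
       s.user_updates AS Writes,
       s.last_user_update
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id
   AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();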

The sys.dm_db_index_physical_stats function replaces the DBCC SHOWCONTIG statement from earlier versions of SQL Server. It returns fragmentation information for the data and indexes of a specified table or view. This function cannot track which indexes are most frequently used for update and insert operations, so it should not be used for that purpose.
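A sketch of calling the function for the whole current database (the NULL arguments mean all objects, indexes, and partitions; LIMITED is the fastest scan mode):

-- Fragmentation above roughly 30 percent is usually a candidate for a rebuild.
SELECT OBJECT_NAME(object_id) AS TableName,
       index_id,
       avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED');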

The sys.dm_db_index_operational_stats function can be used to obtain low-level details, such as I/O, locking, and latching activity, for a partition of a table or for an index. This function cannot track which indexes are most frequently used for update and insert operations, so it should not be used for that purpose.
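A sketch of querying it for one table (the table name dbo.Sales is hypothetical):

-- Leaf-level counters reflect the physical insert and update activity on each index.
SELECT index_id,
       partition_number,
       leaf_insert_count,
       leaf_update_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.Sales'), NULL, NULL);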

The sys.dm_fts_index_population view displays only the full-text index populations that are currently in progress; a full-text index that is not being populated does not appear in it. This view cannot track which indexes are most frequently used for update and insert operations, so it should not be used for that purpose.
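A sketch of inspecting the view while a population is running:

-- Shows which tables are being populated and the status of each population.
SELECT database_id,
       table_id,
       population_type_description,
       status_description
FROM sys.dm_fts_index_population;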

Sunday 19 June 2011

MS SQL Server 2008 Optimization

When data is written to a database, the transaction log file records the activity. In most SQL Server environments the transaction log is one of the most heavily used files, because it records every transaction that is either committed to the database or rolled back. Since this file writes every change that occurs in the database, it is subject to constant, intensive I/O.

Therefore, Microsoft recommends placing the transaction log on a separate drive, and the data files on their own drives as well. For better disk I/O, place data files and log files on different hard drives dedicated to those files; putting everything on the same drive creates a performance bottleneck. As a recommended practice, separate the database files and transaction log files onto different disks, preferably disks attached to different disk controllers.
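As an illustration of this layout, a sketch of creating a database whose data file and log file live on different drives (drive letters, paths, and names are hypothetical):

-- D: and E: are assumed to be separate physical drives, ideally on separate controllers.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_Data, FILENAME = 'D:\SQLData\SalesDB.mdf')
LOG ON
    (NAME = SalesDB_Log, FILENAME = 'E:\SQLLogs\SalesDB.ldf');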

The SQL Server Databases object contains counters that help you monitor backup/restore throughput, transaction log activities, and bulk copy operations. There are a number of counters used for monitoring log-related activities. Some of these counters are:

-Log Bytes Flushed/sec
-Log Cache Hit Ratio
-Log Cache Reads/sec
-Log File(s) Size (KB)
-Log Growths
-Percent Log Used

You should not use the SQL Server General Statistics object to monitor transaction log activity; that object instead captures server-wide information, such as the number of current connections to the SQL Server.
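If you prefer to read these counters from within SQL Server rather than Performance Monitor, the sys.dm_os_performance_counters view exposes the same data; a minimal sketch:

-- Log-related counters for each database under the Databases object.
SELECT counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name IN ('Log Bytes Flushed/sec', 'Log Growths', 'Percent Log Used');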

Wednesday 15 June 2011

MS SQL Server 2008 Storage Allocation

Requirement:

You need to make sure that volume F: will be able to store documents in a compressed form.

Solution:

You should execute the format F: /FS:NTFS /V:BLOBContainer /A:4096 /C command.

Explanation:

This command will format the volume with NTFS and label it BLOBContainer. It also sets the NTFS cluster size to 4096 bytes; it is important to note that the cluster size must be 4096 or smaller for compression to be enabled on a volume. Finally, the /C switch compresses the volume. FILESTREAM data can be stored on either a compressed or an uncompressed volume, because FILESTREAM data actually lives on the file system. Storing the data on the file system brings several benefits; for instance, you can compress it with the NTFS compression capabilities, so the data is stored in compressed form and decompressed when it is retrieved.

Requirement:

To change the default 4096-byte allocation unit size for a RAID level 5 volume.

Solution:

You should consider making use of the format F: /A:64K /V:Data command.

Explanation:

When you create a RAID 5 volume, you need to format it before it can store data. By default, the allocation unit for a logical unit number (LUN) on an NTFS partition is 4096 bytes (4 KB). When you are storing large files on a RAID 5 volume, you should change the allocation unit to 64 KB, which is 65536 bytes. Running the command format F: /A:64K /V:Data formats the RAID 5 disk with the new allocation unit. In this command, you provide the volume letter, which is F: in this case; the /A:size clause defines the allocation unit size, which is 64 KB in this scenario; and /V: sets the volume name.

Monday 13 June 2011

Partial Backup of MS SQL Server 2008 Database

Requirement:

The backup must be part of the regular backup routine while continuing to append each backup to the media.

Add the statement below to the script:

BACKUP DATABASE [TT_DATA] TO DISK = N'C:\Backup\TT_DATA.bak' WITH NOFORMAT, NOINIT, NAME = N'TT_DATA-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, DIFFERENTIAL, COPY_ONLY

This statement performs a differential backup, backing up the changes made in the database since the last full backup. It includes both the DIFFERENTIAL parameter and the COPY_ONLY parameter; when both appear in the statement, COPY_ONLY is ignored and DIFFERENTIAL is used. Performing a full backup resets the differential base, and a differential backup captures all data that has changed since the last full backup. If you performed another full backup in the middle of the current sequence of full, differential, and transaction log backups, you would disrupt the regular backup schedule. Copy-only backups are useful for making a backup to transport to another location, because they do not affect the current sequence of regular backups. You can use the BACKUP DATABASE command to back up a database.
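For contrast, a stand-alone copy-only full backup intended for transport might look like this sketch (the file path is hypothetical):

-- COPY_ONLY leaves the differential base of the regular backup sequence untouched.
BACKUP DATABASE [TT_DATA]
TO DISK = N'C:\Backup\TT_DATA_Copy.bak'
WITH COPY_ONLY, NOINIT, NAME = N'TT_DATA-Copy Only Backup';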

The following are options with the backup database command:

copy_only - performs a copy-only backup.
differential - backs up only the portions of the database or file that have changed since the last full backup. If the DIFFERENTIAL and COPY_ONLY parameters are used together, the COPY_ONLY parameter is ignored.
disk - specifies the destination of the backup.
noformat - does not format the media.
init - initializes the media and does not allow backups to be appended to the media.
noinit - does not initialize the media and allows backups to be appended to the media.
name - specifies the name of the backup.
skip - ignores the backup set expiration.
norewind - does not rewind the media.
nounload - does not unload the media after the backup operation is complete.

When performing a partial restore that requires a point-in-time restore, you must include one of the stopat, stopatmark, or stopbeforemark options. If a FILESTREAM filegroup is excluded from a point-in-time restore, the restore will fail. For the point-in-time restore to continue, you can specify the continue_after_error option together with stopat, stopatmark, or stopbeforemark. This allows the restore to continue, but the FILESTREAM filegroup becomes unrecoverable.
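A sketch of a point-in-time restore using stopat (file names and the timestamp are hypothetical):

-- Restore the full backup without recovering the database...
RESTORE DATABASE [TT_DATA]
FROM DISK = N'C:\Backup\TT_DATA.bak'
WITH NORECOVERY;

-- ...then replay the log, stopping at the chosen point in time.
RESTORE LOG [TT_DATA]
FROM DISK = N'C:\Backup\TT_DATA_Log.trn'
WITH STOPAT = '2011-06-13T10:00:00', RECOVERY;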

Sunday 12 June 2011

MS SQL Server 2008 Execution Mode

On Demand: When you choose this execution mode, the policy is evaluated only when a user runs it manually. All other policy execution modes are automated.

On Schedule: Choosing this execution mode allows you to define a schedule when the policy will be evaluated automatically. The automatic evaluation of the policy is carried out by a SQL Server Agent job. This mode is an automated mode and logs any violations of the policy in the Windows Event log.

On Change-Log Only: With this mode, the policy is evaluated only when a relevant change occurs. This mode is also automated and logs any violations of the policy.

On Change-Prevent: With this mode, the policy is evaluated when a relevant change occurs in the associated target entity. This mode also prevents policy violations by using DDL triggers.

Saturday 11 June 2011

Configuring a high availability solution for a SQL Server 2008 Database

Enable FILESTREAM on both the primary and the secondary servers. Each server must be running SQL Server 2008 with FILESTREAM enabled; if any of the required prerequisites are not met, the solution will not work (a sketch of enabling FILESTREAM follows this list).
Failover clustering: This high availability solution requires FILESTREAM data to be hosted on a shared disk so that it is accessible to all nodes in the cluster. Each node in the cluster must have FILESTREAM enabled for it to work successfully in a clustered environment.
Replication: In merge replication, a uniqueidentifier column is automatically added to any table that does not already contain one. A UNIQUE constraint must also be defined on that column, along with the ROWGUIDCOL property.
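A minimal sketch of enabling FILESTREAM at the instance level with sp_configure:

-- 0 = disabled, 1 = Transact-SQL access only, 2 = Transact-SQL plus Win32 streaming access.
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;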

When the witness server goes offline, full quorum is no longer available, and the principal and the mirror move into a partner-to-partner quorum. Without the witness server, automatic failover is not supported; a database administrator must perform a manual failover if the principal server becomes unavailable. Automatic failover becomes possible again only when the witness server is back online.

Tuesday 7 June 2011

P-2-P Replication Between SQL 2008 DB Server

Peer-to-peer transactional replication allows you to insert, update, or delete data at any node in a topology and have the changes propagated to the other nodes. Because you can change data at any node, changes made at different nodes can conflict with each other. In SQL Server 2008, peer-to-peer replication introduces an option to enable conflict detection across a peer-to-peer topology, which helps prevent the issues caused by undetected conflicts. When conflict detection is enabled, a conflicting change is treated as a critical error that causes the Distribution Agent to fail, and the topology remains in an inconsistent state until the conflict is resolved and the data is made consistent across the topology.

To use conflict detection, all nodes must be running SQL Server 2008 or a later version, and detection must be enabled on all nodes. You can enable and disable detection in Management Studio, either on the Subscription Options page of the Publication Properties dialog box or on the Configure Topology page of the Configure Peer-to-Peer Topology Wizard. You can also enable and disable conflict detection by using the sp_addpublication or sp_configure_peerconflictdetection stored procedures.
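A sketch of enabling detection with the stored procedure (the publication name is hypothetical):

-- Turns on conflict detection for the named peer-to-peer publication.
EXEC sp_configure_peerconflictdetection
    @publication = N'P2P_Sales',
    @action = N'enable';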

Monday 6 June 2011

SQL Server 2008 Admin

Disaster Recovery in SQL Server 2008


To ensure disaster readiness, you should maintain system logs in a secure manner and practice recovering a database onto another SQL Server. The full set of recommendations for disaster readiness is as follows:
-Ensure that your personnel are trained in restoring a database or server.
-Have your staff test the backups and recovery sets.
-Ensure that the regular database and transaction log backups are performed frequently.
-Maintain system logs in a secure manner.
-Keep detailed records of what service packs were installed and when.
-Keep records of the SQL server configuration.
-Perform and document the restoration of a server or database to another server.
-Maintain a base-functionality script that executes insert, update, and delete statements or a batch of stored procedures to determine whether everything is working correctly after a restore (see the sketch after this list).
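A minimal sketch of such a script (the database and table names are hypothetical, assuming a throwaway dbo.SmokeTest table exists):

-- Hypothetical post-restore smoke test: exercise insert, update, delete, and select.
USE TT_DATA;
GO
INSERT INTO dbo.SmokeTest (TestValue) VALUES ('restore check');
UPDATE dbo.SmokeTest SET TestValue = 'verified' WHERE TestValue = 'restore check';
DELETE FROM dbo.SmokeTest WHERE TestValue = 'verified';
SELECT 'Base functionality OK' AS Result;
GO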

How to make Databases available even when the operating system (OS) disk fails

Configure your SQL Server 2008 implementation in the following manner:

-SQL Server files: RAID 5
-Operating System files: RAID 1
-Transaction log files: RAID 1

To meet varying requirements for response time, data reliability, and performance, you typically use different RAID levels depending on how you are designing the SQL Server deployment. These RAID levels are:

-RAID 5: Known as Striping with Parity. Data and parity information are striped across all disks in the array, and you need a minimum of three disks to form a RAID 5 array. This RAID level offers good read/write performance as long as all disks in the array are present; if one disk is missing, read performance is degraded.

-RAID 0: Known as Disk Striping. This RAID level stripes the data across the disks in the array, offering better throughput on read/write operations. However, it offers no data protection: if one disk fails, the data stored on it is lost.

-RAID 1: Known as Disk Mirroring. You need a minimum of two disks to form a RAID 1 array. One primary disk is used for read/write operations, and the data is replicated to the second disk. This RAID level offers better read performance and fault tolerance, but slower write performance than RAID 0.

The scenario states that the database must remain functional even if a drive fails. To meet this goal, place the operating system files on RAID 1: the mirror ensures that the operating system keeps functioning even if one disk fails. The transaction logs should go on a separate RAID 1 array. SQL Server writes to the transaction logs a serial record of all the modifications that occur in a database, and the log files can be used for rollback and roll-forward operations. To enhance the performance of SQL databases, place the transaction log files on a dedicated RAID 1 array. Finally, the SQL Server files and filegroups should be placed on a RAID 5 array to achieve the best performance throughput.

Sunday 5 June 2011

Java Echo Server

Hi Guys,

I am trying to capture my imagination in a virtual toolbox that resides in my mind. The ideas are crude, and the world is not yet willing to recognize me. Get ready and fasten your belts, but wait: there is no high-speed travelling here, so remain calm. I am offering your mind some food for thought with a bit of freaky port-enabled coding that shows a server and client connection. Feel free to comment:

Server Connection:
import java.io.*;
import java.net.*;
import java.util.Date;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
public class EchoServer extends Thread {
static final String APP_NAME = "EchoServer";
static final int PORT = 6991;
static ServerSocket serverSocket;
static int port = PORT;
Socket clientSocket;
BufferedReader is = null;
BufferedWriter os = null;
public EchoServer(Socket cs) {
clientSocket = cs;
}
public static void main(String args[]) {
if (usageOnly(args)) {
System.exit(0);
}
initialize(args);
printMsg("EchoServer running on port: " + port + "...Ready to accept connections...");
while (true) {
try {
Socket clientSocket = serverSocket.accept();
EchoServer es = new EchoServer(clientSocket);
es.start();
} catch (IOException e) {
printMsg("Cannot accept client connection.");
}
}
}
public void run() {
processClientRequest();
}
private static boolean usageOnly(String args[]) {
if (args.length > 1 || (args.length == 1
&& (args[0].equalsIgnoreCase("-usage")
|| args[0].equalsIgnoreCase("-help")
|| args[0].equalsIgnoreCase("-h")))) {
System.out.println("Usage: java " + APP_NAME + " [<port>]");
System.out.println(" The default port is " + port + ".");
return true;
} else {
return false;
}
}
private static void initialize(String args[]) {
processCommandLine(args);
try {
serverSocket = new ServerSocket(port);
} catch (IOException e) {
printMsg("Cannot create server socket on port: " + port + ". Exiting...");
System.exit(0);
}
}
private void processClientRequest() {
try {
// Wrap the client socket's streams in buffered character readers/writers.
os = new BufferedWriter(
new OutputStreamWriter(clientSocket.getOutputStream()));
is = new BufferedReader(
new InputStreamReader(clientSocket.getInputStream()));
} catch (IOException e) {
printMsg("Cannot handle client connection.");
cleanup();
return;
}
try {
String input = is.readLine();
if (input != null) {
input = APP_NAME + ": " + input + " " + getDateTime();
os.write(input, 0, input.length());
os.flush();
}
} catch (IOException e) {
printMsg("I/O error while processing client's print file.");
}
cleanup();
}
private void cleanup() {
try {
if (is != null) {
is.close();
}
if (os != null) {
os.close();
}
if (clientSocket != null) {
clientSocket.close();
}
} catch (IOException e) {
printMsg("I/O error while closing connections.");
}
}
private static void processCommandLine(String args[]) {
if (args.length != 1) {
return;
}
port = Integer.parseInt(args[0]);
// Fall back to the default when the requested port is outside the valid TCP range.
if (port < 1 || port > 65535) {
port = PORT;
printMsg("Using port " + port + " instead.");
}
}
private static void printMsg(String msg) {
System.out.println(APP_NAME + ": " + msg);
}
private String getDateTime() {
DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
Date date = new Date();
return dateFormat.format(date);
}
}
Client Connection:

import java.net.*;
import java.io.*;
public class EchoClient
{
public static void main(String[] args) throws IOException {
if (args.length < 2) {
System.err.println("Usage: java EchoClient <99.253.76.32> <255.255.255.0>");
System.exit(0);}
BufferedReader in = null;
PrintWriter out = null;
BufferedReader fromUser = null;
Socket sock = null;
try {
sock = new Socket(args[0], Integer.parseInt(args[1]));
// set up the necessary communication channels
in = new BufferedReader(new InputStreamReader(sock.getInputStream()));
fromUser = new BufferedReader(new InputStreamReader(System.in));
out = new PrintWriter(sock.getOutputStream(),true);
while (true) {
String line = fromUser.readLine();
if (line == null || line.equals("bye"))
{
out.println("bye");
break;
}
out.println(line);
System.out.println(in.readLine());
}
}
catch (IOException ioe) {
System.err.println(ioe);
}
finally {
if (in != null)
in.close();
if (fromUser != null)
fromUser.close();
if (out != null)
out.close();
if (sock != null)
sock.close();
}
}
}

Output: screenshots of the server and client console sessions (not included).
Thank You.

Fayyaz