Thursday, December 16, 2010

FAQ

System Databases in SQL Server 2008

Master Database:
1. This database is the heart & soul of SQL Server.
2. It records all the system-level information for the instance.
3. It stores instance-level configuration settings, SQL Server logins, system-level stored procedures & system-level functions.
4. File Name: master.mdf (data) & mastlog.ldf (log file)
5. Location: Drive:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\
6. In SQL Server 2005 and later versions the system objects are stored in the Resource database rather than in the Master database.

Model Database:
1. The Model database is used as a template when creating databases in SQL Server.
2. SQL Server takes a copy of the Model database whenever a user creates a new database.
3. TempDB is also recreated from the Model database every time the SQL Server service is restarted.
4. File Name: model.mdf (data) and modellog.ldf (log file).

MSDB Database:
1. SQL Server Agent uses the MSDB database to store information related to SQL Server Agent jobs, job schedules, alerts, operators, etc.
2. It also stores information related to the configuration of Service Broker, log shipping, database backup & restore history, and Policy-Based Management information in SQL Server 2008.
3. File Name: MSDBData.mdf & MSDBLog.ldf

TempDB Database:
1. TempDB is the only database in SQL Server that is recreated every time SQL Server restarts.
2. It stores all the temporary objects such as temporary tables, global temporary tables, temporary stored procedures, cursors and table variables. It is also used when indexes are created.
3. It cannot be backed up or restored.
4. File Name: tempdb.mdf & templog.ldf.

Resource Database:
1. It is a read-only & hidden system database.
2. It contains all the SQL Server system objects, such as sys.objects.
3. It does not contain any user data or any user metadata.
4. File Name: mssqlsystemresource.mdf & mssqlsystemresource.ldf.

ReportServer:
1. It is created when a user installs SQL Server Reporting Services.
2. It stores information related to all the reports, linked reports, data sources, report models, folders, permissions, the properties & security-related settings used by all objects, report execution schedules, and the report execution log.
3. File Name: ReportServer.mdf(data file) & ReportServer.ldf(log file)

ReportServerTempDB:
1. It is used by the Report Server database to store session information, cached reports, working tables used by Reporting Services, etc.
2. File Name: ReportServerTempDB.mdf(data file) & ReportServerTempDB_log.ldf(log file)

Distribution:
1. It is created on the Distribution SQL Server when replication is configured.
2. This database stores all the metadata related to the configuration of replication, and it also stores replicated transactions from the publisher database that need to be delivered to the subscribers in the case of transactional replication.
3. File Name: distribution.mdf & distribution.ldf.


Keys in SQL Server:
1. Primary Key
2. Foreign Key
3. Candidate Key
4. Alternate Key
5. Composite Key

1. Primary Key: Used to uniquely identify a row; it does not accept NULL values.
2. Foreign Key: In a relationship between tables, the primary key of one table appears as a foreign key in the second table.
3. Candidate Key: Any attribute or set of attributes that uniquely identifies a row is a candidate key; one of the candidate keys is chosen as the primary key.
4. Alternate Key: A candidate key that is not the primary key is an alternate key.
5. Composite Key: A combination of two or more attributes used together to uniquely identify a row is a composite key.
Example: Assume a table having the columns below:
Account No, Transaction Date, Transaction Time, Amount, Description
Candidate Key: Account No, Transaction Date, Transaction Time
Primary Key: Account No
Alternate Key: Transaction Date, Transaction Time
Composite Key: The combination of (Account No + Transaction Time + Amount) used to uniquely identify a row is a composite key.
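A minimal sketch of these keys in DDL, using hypothetical table and column names:
Ex:
CREATE TABLE Accounts
(
    AccountNo INT NOT NULL PRIMARY KEY,        -- primary key: uniquely identifies a row, no NULLs
    AccountHolder VARCHAR(100) NOT NULL
);

CREATE TABLE Transactions
(
    AccountNo INT NOT NULL
        REFERENCES Accounts(AccountNo),        -- foreign key: primary key of Accounts used here
    TransactionDate DATE NOT NULL,
    TransactionTime TIME NOT NULL,
    Amount MONEY NOT NULL,
    Description VARCHAR(200) NULL,
    CONSTRAINT PK_Transactions
        PRIMARY KEY (AccountNo, TransactionDate, TransactionTime)   -- composite key
);
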
Select Statement:
SELECT [ALL | DISTINCT] select_column_list
[INTO new_table_name]
FROM table_name | view_name [, table_name1 | view_name1]
[WHERE clause]
[GROUP BY clause]
[HAVING clause]
[ORDER BY clause]
[COMPUTE clause]
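An example combining several of these clauses, using the Titles table that appears in the Group By examples below (ExpensiveTitles is a hypothetical name for the new table):
Ex:
SELECT DISTINCT Type, Price
INTO ExpensiveTitles      -- creates a new table from the result set
FROM Titles
WHERE Price > 10
ORDER BY Price DESC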
Arithmetic Operators:
+ (For Addition)
- (For Subtraction)
/ (For Division)
* (For Multiplication)
% (To obtain the remainder of dividing two numeric values): Cannot be used with columns of the money, smallmoney, float or real data types.
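A quick illustration of these operators on integer values:
Ex:
SELECT 17 + 5 AS Addition,        -- 22
       17 - 5 AS Subtraction,     -- 12
       17 / 5 AS Division,        -- 3 (integer division truncates)
       17 * 5 AS Multiplication,  -- 85
       17 % 5 AS Remainder        -- 2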

Logical Operators in SQL Server:
1. And
2. Or
3. Not

More on Operators:
1. Comparison Operators.
2. Range Operators.
3. List Operators.

1. Comparison Operators:
= (Equal to)
> (Greater than)
< (Less than)
>= (Greater than or equal to)
<= (Less than or equal to)
<>, != (Not equal to)
!> (Not greater than)
!< (Not less than)
() (Controls precedence)

2. Range Operators:
1. Between
2. Not Between
3. List Operator:
1. In
2. Not In
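A query combining these operators against the Titles table used in the examples below:
Ex:
SELECT Type, Price, Advance
FROM Titles
WHERE Price BETWEEN 10 AND 20              -- range operator
  AND Type IN ('business', 'mod_cook')     -- list operator
  AND Advance <> 0                         -- comparison operator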

String Operators (wildcards used with LIKE):
% Represents any string of zero or more characters.
_ (underscore) Represents a single character.
[] Represents any single character within the specified range.
[^] Represents any single character not within the specified range.
Example:
LIKE 'LO%' - All names that begin with 'LO'.
LIKE '%ion' - All names that end with 'ion'.
LIKE '%rt%' - All names that have the letters 'rt' in them.
LIKE '_rt' - All three-letter names ending with 'rt'.
LIKE '[DK]%' - All names that begin with 'D' or 'K'.
LIKE '[A-D]ear' - All four-letter names that end with 'ear' and begin with any letter from 'A' through 'D'.
LIKE 'D[^c]%' - All names beginning with 'D' and not having 'c' as the second letter.
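These wildcards are used with the LIKE keyword in a WHERE clause; a short sketch against a hypothetical Customers table:
Ex:
SELECT CustomerName
FROM Customers
WHERE CustomerName LIKE 'D[^c]%'   -- begins with 'D', second letter is not 'c'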
Aggregate Functions:
AVG
COUNT
MAX
MIN
SUM

IS NULL & IS NOT NULL keywords:
NULL is an unknown value or a value for which data is not available. NULL values can be retrieved from a table using the IS NULL keyword in the WHERE clause. Whenever a NULL value is compared with any other value (using a comparison operator), or any calculation is performed on a NULL value, the result is always NULL. No two NULL values are equal, so you cannot compare one NULL value with another.
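For example, to find rows whose Advance column (see the Titles examples below) has no value:
Ex:
SELECT Type, Advance
FROM Titles
WHERE Advance IS NULL        -- note: Advance = NULL would never match any row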

Group By & Group By ALL:
The Group By Clause: Group By summarizes the result set into groups as defined in the query, using aggregate functions. The Having clause further restricts the result set to produce the data based on a condition.
Ex:
Select Type, 'Average Price' = Avg(Price)
From Titles
Where Price > 10
Group By Type
Having AVG(Price) > 15

Group By All: It is the same as the Group By clause; the only difference is that it returns all groups, ignoring the Where condition (Where Price > 10). So it will also include groups whose rows do not satisfy the condition, returning NULL for their aggregate values.
Ex:
Select Type, Advance = Sum(Advance)
From Titles
Where Type In ('business', 'mod_cook', 'trad_cook')
Group By All Type

So the output will be:
Type         Advance
------------ -----------
business     25125
mod_cook     15000
popular_comp NULL
psychology   NULL
trad_cook    19000
UNDECIDED    NULL
As the example shows, Group By All returns every group, not just those matching the Where condition.

Having Clause: The Having clause, if used with the Group By clause, overrides the functionality of the ALL keyword and displays only those groups that meet the condition specified in the Having clause.

Creating a Non-clustered Index with Included Columns:
CREATE INDEX IX_Table ON Table (Key_Index_Column) INCLUDE (Column1, Column2, Column3)

VARCHAR & NVARCHAR:
VARCHAR is an abbreviation for variable-length character string. It's a string of text characters that can be as large as the page size for the database table holding the column in question. The size of a table page is 8,192 bytes, and no one row in a table can be more than 8,060 bytes. This in turn limits the maximum size of a VARCHAR to 8,000 bytes. The "N" in NVARCHAR means uNicode. Essentially, NVARCHAR is nothing more than a VARCHAR that supports two-byte characters. The most common use for this sort of thing is to store character data that is a mixture of English and non-English symbols.

SELECT @@IDENTITY
It returns the last IDENTITY value produced on a connection, regardless of the table that produced the value, and regardless of the scope of the statement that produced the value. @@IDENTITY will return the last identity value entered into a table in your current session. While @@IDENTITY is limited to the current session, it is not limited to the current scope. If you have a trigger on a table that causes an identity to be created in another table, you will get the identity that was created last, even if it was the trigger that created it.

SELECT SCOPE_IDENTITY()
It returns the last IDENTITY value produced on a connection and by a statement in the same scope, regardless of the table that produced the value. SCOPE_IDENTITY(), like @@IDENTITY, will return the last identity value created in the current session, but
it will also limit it to your current scope as well. In other words, it will return the last identity value that you explicitly created, rather than any identity that was created by a trigger or a user defined function.

SELECT IDENT_CURRENT('tablename')
It returns the last IDENTITY value produced in a table, regardless of the connection that created the value, and regardless of the scope of the statement that produced the value. IDENT_CURRENT is not limited by scope and session; it is limited to a specified table. IDENT_CURRENT returns the identity value generated for a specific table in any session and any scope.
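A small sketch comparing the three, using a hypothetical Orders table:
Ex:
CREATE TABLE Orders
(
    OrderId INT IDENTITY(1,1) PRIMARY KEY,
    OrderDate DATETIME NOT NULL
);

INSERT INTO Orders (OrderDate) VALUES (GETDATE());

SELECT @@IDENTITY AS LastIdentityInSession,            -- any table, any scope (triggers included)
       SCOPE_IDENTITY() AS LastIdentityInScope,        -- any table, current scope only
       IDENT_CURRENT('Orders') AS LastIdentityInTable; -- Orders table, any session and any scope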

New Features in SQL Server 2008
1. Merge Statement
This allows a single statement to UPDATE, INSERT, or DELETE rows depending on whether a match is found.
Ex:
MERGE INTO Sc_PublicationMaster PM
USING @Sc_PublicationMaster PMVAR ON PMVAR.IDGUID=PM.IDGUID
WHEN MATCHED THEN
UPDATE SET
PublicationNo=PMVAR.PublicationNo,
Volume=PMVAR.Volume,
PublicationName=PMVAR.PublicationName,
PublicationTypeId=PMVAR.PublicationTypeId,
DateOfPublish=PMVAR.DateOfPublish,
EditionDate=PMVAR.EditionDate,
EditionNo=PMVAR.EditionNo,
FolioId=PMVAR.FolioId,
HydrographicOfficeId=PMVAR.HydrographicOfficeId,
Status=PMVAR.Status,
WithdrawnDate=PMVAR.WithdrawnDate,
Comments=PMVAR.Comments

WHEN NOT MATCHED THEN
INSERT VALUES( PMVAR.IdGUID, PMVAR.PublicationNo, PMVAR.FolioId, PMVAR.HydrographicOfficeId, PMVAR.Volume, PMVAR.PublicationName, PMVAR.PublicationTypeId, PMVAR.EditionDate, PMVAR.DateOfPublish, PMVAR.EditionNo, PMVAR.Status, PMVAR.WithdrawnDate, PMVAR.Comments);

2. User Defined Types - (Table Valued Parameters) – Pass a table into a stored procedure as a parameter
Steps to Create a New Table Value Type
1. Expand Programmability -> Types
2. Right Click on Types and create New
3. Add columns with their data types and execute it.
Ex:
-- Create the data type
CREATE TYPE ChartType AS TABLE
(
    IDGUID uniqueidentifier,
    ChartNo int,
    FolioId int
)

How to use it in a Procedure:
When a user-defined table type is passed as a parameter, it must be declared as READONLY.
CREATE PROCEDURE Sp_Test
    @Charttype ChartType READONLY
AS
BEGIN
SET NOCOUNT ON;
Select * from @Charttype
END
GO
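Calling the procedure is then a matter of declaring a variable of the table type, filling it, and passing it in (sample values only):
Ex:
DECLARE @Charts ChartType;

INSERT INTO @Charts (IDGUID, ChartNo, FolioId)
VALUES (NEWID(), 101, 1),
       (NEWID(), 102, 2);

EXEC Sp_Test @Charttype = @Charts;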
Note: SQL Server supports a comprehensive set of standard data types for storing data, from the basic int and char to the more exotic uniqueidentifier. User-defined data types do not extend the kinds of data that may be stored; instead they allow an alias to be defined for a specific type (e.g. varchar(10)). During initial database development, if the underlying data type needs to be changed, all fields using the user-defined data type can be changed simply by changing the definition of the user-defined type and regenerating the database.
A further advantage of user-defined types is that default values and validation rules can be bound to them. This means that any field of this type in any table will use the default value and validation rule.
User-defined types do not add overhead to the database, nor do they improve its performance.

3. DATE / TIME Data Types: SQL Server 2008 introduces several new date and time based data types.
1. DATETIME2: an extension of the existing DATETIME type with a larger date range and greater accuracy; it can store date and time data to a precision of 100 nanoseconds.
2. The new DATE and TIME data types enable you to store date and time data separately.
3. The new DATETIMEOFFSET data type introduces time zone support by storing the date, time and an offset such as 'plus 5 hours'.
Overall, these types give greater flexibility for storing date and time data and greater accuracy and precision; the DATETIME and SMALLDATETIME data types are still supported.
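A quick sketch of the new types in use:
Ex:
DECLARE @d   DATE           = '2010-12-16';
DECLARE @t   TIME(7)        = '13:45:30.1234567';
DECLARE @dt2 DATETIME2(7)   = SYSDATETIME();
DECLARE @dto DATETIMEOFFSET = SYSDATETIMEOFFSET();   -- keeps the time zone offset, e.g. +05:30

SELECT @d AS [Date], @t AS [Time], @dt2 AS [DateTime2], @dto AS [DateTimeOffset];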
4. Reporting Services
Processing and performance have been improved in SSRS (SQL Server Reporting Services). Large reports will no longer consume all available memory. In addition, there is greater consistency between layout and render. Also, the TABLIX, a cross between a table and a matrix, is included in SSRS 2008. Application Embedding allows URLs in reports to point to a calling application.
5. Microsoft Office 2007
SQL Server 2008 integrates tightly with Microsoft Office 2007. For example, SQL Server Reporting Services reports can now be exported directly to Word. In addition, both Word and Excel can be used as templates for SSRS reports by using the Report Authoring tool. Excel SSAS performance has been improved and there is a data mining add-in.

6. Intellisense – Built in finally

Open XML
Reading an XML File:
OPENXML(iDoc, rowpattern [, flags]) [WITH (rowset_schema [col_patterns] | table_name)]
Parameters:
iDoc: Obtained by calling the stored procedure sp_xml_preparedocument.
rowpattern: Specifies which nodes we want OPENXML to process, using XPath.
flags: Specifies the format of our results. The following values can be used:
0 – Default value. Attribute centric mapping.
1 – Use Attribute centric mapping.
2 – Use element centric mapping.
8 – Only unconsumed data should be copied to the overflow property @mp:xmltext.
Ex:
DECLARE @idoc int
DECLARE @doc varchar(1000)
SET @doc =' '

--Create an internal representation of the XML document.
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc
-- SELECT stmt using OPENXML rowset provider

SELECT * FROM OPENXML (@idoc, '/ROOT/Customer/Order/OrderDetail', 2)
WITH
(OrderID int '../@OrderID',
CustomerID varchar(10) '../@CustomerID',
OrderDate datetime '../@OrderDate',
ProdID int '@ProductID',
Qty int '@Quantity')
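For reference, a minimal XML document that matches the XPath used in the query above (sample values only, assumed for illustration) could be assigned to @doc like this:
SET @doc ='
<ROOT>
  <Customer CustomerID="VINET">
    <Order OrderID="10248" CustomerID="VINET" OrderDate="1996-07-04T00:00:00">
      <OrderDetail ProductID="11" Quantity="12"/>
      <OrderDetail ProductID="42" Quantity="10"/>
    </Order>
  </Customer>
</ROOT>'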

Reading Meta data from an XML: The OpenXML function can be used to retrieve metadata from an XML document.
ex:
SELECT * FROM OPENXML (@index, 'DataSet/Person')
WITH (
[Id] Varchar(10) '@mp:id',
NodeName VARCHAR(20) '@mp:localname',
ParentNode VARCHAR(20) '@mp:parentlocalname')

The metadata parameters you can use are listed below:
@mp:id - Unique identifier for the specified node.
@mp:localname - The name of the element.
@mp:namespaceuri - The namespace for the specified node. The value of this property is NULL if no namespace is defined.
@mp:parentid - The ID of the parent node.
@mp:parentlocalname - The name of the parent node.
@mp:prefix - The prefix used for the specified node.
@mp:prev - The ID of the previous sibling node.
@mp:parentnamespaceuri - The namespace of the parent node.
@mp:parentprefix - The prefix of the parent node.
@mp:xmltext - The textual representation of the node and its attributes and subelements.

Writing an XML Document:
SET @FinalXML = '' +
    (SELECT * FROM
        (SELECT SCP.IdGUID,
                PortCode,
                REPLACE(PortName, '''', '''''') AS PortName,
                SCC.IdGUID AS CountryID,
                SCP.IsActive
         FROM SC_Port SCP WITH (NOLOCK)
         INNER JOIN SC_Countries SCC WITH (NOLOCK)
             ON SCP.CountryId = SCC.Id AND SCP.Id = @PortID
        ) SC_Port
     FOR XML AUTO) + ' ';
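A simpler sketch of FOR XML AUTO against the same SC_Port table:
Ex:
SELECT PortCode, PortName
FROM SC_Port
FOR XML AUTO, ROOT('Ports');
FOR XML AUTO returns one element per row, named after the table (or its alias), with each selected column rendered as an attribute; the ROOT directive wraps the result in a single root element.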

Select data into a new table using a single statement:
Select * Into destination_table_name From source_table_name

What is normalization?
Database normalization is a data design and organization process applied to data structures, based on rules that help build relational databases. In relational database design, it is the process of organizing data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.

What are the different normalization forms?
1NF: Eliminate Repeating Groups - Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data - If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3NF: Eliminate Columns Not Dependent On Key - If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.
BCNF: Boyce-Codd Normal Form - If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF: Isolate Independent Multiple Relationships - No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF: Isolate Semantically Related Multiple Relationships - There may be practical constraints on information that justify separating logically related many-to-many relationships.
ONF: Optimal Normal Form - A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF: Domain-Key Normal Form - A model free from all modification anomalies.
Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.
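As a small illustration of 1NF (hypothetical tables), a repeating group of phone number columns is moved into its own table:
Ex:
-- Before: repeating group of phone columns in one table
CREATE TABLE Customer_Unnormalized
(
    CustomerId   INT PRIMARY KEY,
    CustomerName VARCHAR(100),
    Phone1       VARCHAR(20),
    Phone2       VARCHAR(20)
);

-- After (1NF): the repeating group becomes a separate table with its own key
CREATE TABLE Customer
(
    CustomerId   INT PRIMARY KEY,
    CustomerName VARCHAR(100)
);

CREATE TABLE CustomerPhone
(
    CustomerId INT REFERENCES Customer(CustomerId),
    Phone      VARCHAR(20),
    CONSTRAINT PK_CustomerPhone PRIMARY KEY (CustomerId, Phone)
);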
