Category Archives: SQL Server Management

SQL Server: Script to Generate HTML Report/Mail for Databases Current Size, Growth Rate and Available Disk Space

Working with multiple databases on multiple instances is a tough job; you need to monitor all of these instances for everything. The best way to keep an eye on every instance's activity is SQL Server Agent jobs.
How fast are the databases on these instances growing, and does the target instance have the required space on its hard drives? These are basic questions every DBA keeps in mind while monitoring an instance.
Amna Asif has contributed a script that builds a proper report (sent by mail) of the actual database space, the space required for the next growth, and the space currently available on the hard drives.
 /*

  Script to send an alert through mail, with information about how much drive
  space is required for the next database growth on a specific instance and how
  much space is available.

  Script By: Amna Asif for ConnectSQL.blogspot.com

 */


 DECLARE @dbName VARCHAR(200),
         @Qry NVARCHAR(MAX)
 DECLARE @dbsize VARCHAR(50),
         @logsize VARCHAR(50),
         @reservedpages VARCHAR(50),
         @usedpages VARCHAR(50),
         @pages VARCHAR(50)

 SET @dbName = ''

 --Get LOG file space of all databases
 CREATE TABLE #LogSpaceStats
    (
      RowID INT IDENTITY PRIMARY KEY,
      dbName SYSNAME,
      Totallogspace DEC(20, 2),
      UsedLogSpace DEC(20, 2),
      Status CHAR(1)
    )

 INSERT #LogSpaceStats
        ( dbName, Totallogspace, UsedLogSpace, Status )
        EXEC ( 'DBCC sqlperf(logspace) WITH NO_INFOMSGS' )

 --Get info of all drives
 DECLARE @ServerDrives TABLE
    (
      RowID INT IDENTITY PRIMARY KEY,
      Drive CHAR,
      DriveSpace VARCHAR(100),
      Required_Space VARCHAR(100)
    )
 INSERT INTO @ServerDrives
        ( Drive, DriveSpace )
        EXEC master.sys.xp_fixeddrives
 --Temporary table to hold required data
 CREATE TABLE #ServerFileStats
    (
      RowID INT IDENTITY PRIMARY KEY,
      dbName SYSNAME,
      Database_DSize VARCHAR(100),
      Allocated_Space VARCHAR(100),
      Unallocated_Space VARCHAR(100),
      Unused VARCHAR(100),
      Database_LSize VARCHAR(100),
      UsedLogSpace DEC(20, 2),
      FreeLogSpace DEC(20, 2),
      FDataFileGrowth DEC(20, 2),
      FLogFileGrowth DEC(20, 2),
      DataFileDrive CHAR,
      LogFileDrive CHAR
    )

 --Cursor used to get each database size on the given instance
 DECLARE cur_dbName CURSOR
    FOR SELECT  name
        FROM    sys.databases
        WHERE   state_desc = 'ONLINE'
                AND is_read_only = 0
 OPEN cur_dbName
 FETCH NEXT FROM cur_dbName INTO @dbName
 WHILE @@FETCH_STATUS = 0
    BEGIN
        --Data file and log file size (in pages) from sysfiles of the current database
        SELECT  @Qry = ' SELECT @dbsizeOUT = sum(convert(bigint,
                              case when status & 64 = 0 then size
                              else 0 end))
                              ,@logsizeOUT = sum(convert(bigint,
                                    case when status & 64 <> 0 then size
                                    else 0 end))
                                       FROM ' + @dbName + '.dbo.sysfiles '

        EXEC sp_executesql @Qry,
            N'@dbsizeOUT nvarchar(50) OUTPUT, @logsizeOUT nvarchar(50) OUTPUT',
            @dbsizeOUT = @dbsize OUTPUT, @logsizeOUT = @logsize OUTPUT;

        --Reserved and used pages from the allocation units of the current database
        SELECT  @Qry = ' SELECT @reservedpagesOUT = sum(a.total_pages)
                                ,@usedpagesOUT = sum(a.used_pages)
                      FROM ' + @dbName + '.sys.partitions p join ' + @dbName
                + '.sys.allocation_units a on p.partition_id = a.container_id
                      LEFT JOIN ' + @dbName
                + '.sys.internal_tables it on p.object_id = it.object_id'

        EXEC sp_executesql @Qry,
            N'@reservedpagesOUT nvarchar(50) OUTPUT, @usedpagesOUT nvarchar(50) OUTPUT',
            @reservedpagesOUT = @reservedpages OUTPUT,
            @usedpagesOUT = @usedpages OUTPUT;

        --Collect size, space and growth figures for the current database
        SELECT  @Qry = ' INSERT INTO #ServerFileStats
                         SELECT DB_size.Database_Name
                         , DB_size.Database_DSize
                         , DB_size.Allocated_Space
                         , DB_size.Unallocated_Space
                         , DB_size.Unused
                         , DB_size.Database_LSize
                         , (lss.Totallogspace*(lss.UsedLogSpace/100)) UsedLogSpace
                         , (Totallogspace-(Totallogspace*(UsedLogSpace/100))) FreeLogSpace
                         , CASE mfD.is_percent_growth
                             WHEN 0 THEN CONVERT(DEC(15,2),(mfD.growth * 8192 / 1048576))
                             ELSE CONVERT(DEC(15,2),(CONVERT(DEC(15,2),REPLACE(DB_size.Database_DSize,'' MB'','''')) * mfD.growth/100))
                           END FDataFileGrowth
                         , CASE mfL.is_percent_growth
                             WHEN 0 THEN CONVERT(DEC(15,2),(mfL.growth * 8192 / 1048576))
                             ELSE CONVERT(DEC(15,2),(CONVERT(DEC(15,2),REPLACE(DB_size.Database_LSize,'' MB'','''')) * mfL.growth/100))
                           END FLogFileGrowth
                         , LEFT(mfD.physical_name,1) DataFileDrive
                         , LEFT(mfL.physical_name,1) LogFileDrive
                         FROM
                         (
                          SELECT Database_Name = ''' + @dbName + ''',
                                 Database_DSize = ltrim(str((convert(dec(15,2),' + @dbsize + ')) * 8192 / 1048576,15,2) + '' MB''),
                                 Allocated_Space = ltrim(str((CASE WHEN ' + @dbsize + ' >= ' + @reservedpages
                                     + ' THEN convert(DEC(15,2),' + @reservedpages + ') * 8192 / 1048576 ELSE 0 END),15,2) + '' MB''),
                                 Unallocated_Space = ltrim(str((CASE WHEN ' + @dbsize + ' >= ' + @reservedpages
                                     + ' THEN (convert(DEC(15,2),' + @dbsize + ') - convert(DEC(15,2),' + @reservedpages + ')) * 8192 / 1048576 ELSE 0 END),15,2) + '' MB''),
                                 Unused = ltrim(str(((' + @reservedpages + ' - ' + @usedpages + ') * 8192 / 1024.)/1024,15,2) + '' MB''),
                                 Database_LSize = ltrim(str((convert(dec(15,2),' + @logsize + ')) * 8192 / 1048576,15,2) + '' MB'')
                         ) DB_size
                         LEFT JOIN #LogSpaceStats AS lss ON lss.dbName = DB_size.Database_Name
                         INNER JOIN ' + @dbName + '.sys.databases db ON db.name = DB_size.Database_Name
                         INNER JOIN ' + @dbName + '.sys.master_files mfD ON mfD.database_id = db.database_id AND mfD.type_desc = ''ROWS''
                         INNER JOIN ' + @dbName + '.sys.master_files mfL ON mfL.database_id = db.database_id AND mfL.type_desc = ''LOG'''

        EXEC ( @Qry )

        FETCH NEXT FROM cur_dbName INTO @dbName
    END
 CLOSE cur_dbName
 DEALLOCATE cur_dbName

 --Sum the expected growth per drive and store it against each server drive
 UPDATE sd
 SET    Required_Space = SumDriveS.sumofdrivespace
 FROM   @ServerDrives sd
        INNER JOIN ( SELECT    SUM(CONVERT(DEC(20, 2), sumofdrivespace)) sumofdrivespace,
                               DRIVE AS DRIVE
                     FROM      ( SELECT    SUM(fss.FDataFileGrowth) sumofdrivespace,
                                           fss.DataFileDrive AS DRIVE
                                 FROM      #ServerFileStats fss
                                 GROUP BY  fss.DataFileDrive
                                 UNION
                                 SELECT    SUM(fss.FLogFileGrowth) sumofdrivespace,
                                           fss.LogFileDrive AS DRIVE
                                 FROM      #ServerFileStats fss
                                 GROUP BY  fss.LogFileDrive ) SumDrive
                     GROUP BY  SumDrive.DRIVE ) SumDriveS
            ON SumDriveS.Drive = sd.Drive

 -------------------------------------------------------------------------------
 --------------------------------- Report Mailing ------------------------------
 DECLARE @Loop INT
 DECLARE @Subject VARCHAR(100)
 DECLARE @strMsg VARCHAR(4000)

 SELECT @Subject = 'SQL Monitor Alert: ' + @@SERVERNAME + '        '
        + CONVERT(VARCHAR, GETDATE())

 DECLARE @Body VARCHAR(MAX),
         @TableHead VARCHAR(1000),
         @TableTail VARCHAR(1000),
         @TableHead2 VARCHAR(1000),
         @Body2 VARCHAR(3000)
 SET NOCOUNT ON;

 -- Create HTML mail body
 SET @TableTail = '</table></body></html>';
 SET @TableHead = '<html><head>' + '<style>'
    + 'td {border: solid black 1px;padding-left:3px;padding-right:3px;padding-top:2px;padding-bottom:2px;font-size:10pt;} '
    + '</style>' + '</head>'
    + '<body><table cellpadding=0 cellspacing=0 border=0>'
    + '<tr><td align=center bgcolor=#E6E6FA><b>Row ID</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b>Database Name</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b>File Group</b></td>'
    + '<td align=center bgcolor=#5F9EA0><b>DF Total Space</b></td>'
    + '<td align=center bgcolor=#5F9EA0><b>DF Allocated Space</b></td>'
    + '<td align=center bgcolor=#5F9EA0><b>DF Unallocated Space</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b>DF Unused</b></td>'
    + '<td align=center bgcolor=#5F9EA0><b>LF Total Space</b></td>'
    + '<td align=center bgcolor=#5F9EA0><b>LF Used Space</b></td>'
    + '<td align=center bgcolor=#5F9EA0><b>LF Unused Space</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b>DF FileGrowth</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b>LF FileGrowth</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b>DF Drive</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b> LF Drive </b></td></tr>';

 SELECT @Body = ( SELECT  td = CONVERT(VARCHAR, ROW_NUMBER() OVER ( ORDER BY dbName )) + CHAR(10),
                          td = ISNULL(dbName, 'Unknown') + CHAR(10),
                          td = ISNULL('Data/LOG', 'Unknown') + CHAR(10),
                          td = ISNULL(Database_DSize, '0.00') + CHAR(10),
                          td = ISNULL(Allocated_Space, '0.00') + CHAR(10),
                          td = ISNULL(Unallocated_Space, '0.00') + CHAR(10),
                          td = ISNULL(Unused, '0.00') + CHAR(10),
                          td = ISNULL(Database_LSize, '0.00') + CHAR(10),
                          td = ISNULL(CONVERT(VARCHAR, UsedLogSpace), '0.00') + ' MB' + CHAR(10),
                          td = ISNULL(CONVERT(VARCHAR, FreeLogSpace), '0.00') + ' MB' + CHAR(10),
                          td = ISNULL(CONVERT(VARCHAR, FDataFileGrowth), '0.00') + ' MB' + CHAR(10),
                          td = ISNULL(CONVERT(VARCHAR, FLogFileGrowth), '0.00') + ' MB' + CHAR(10),
                          td = ISNULL(DataFileDrive, '0') + CHAR(10),
                          td = ISNULL(LogFileDrive, '0') + CHAR(10)
                  FROM    #ServerFileStats
                  ORDER BY dbName
                FOR XML RAW('tr'), ELEMENTS )


 -- Replace the entity codes and row numbers
 SET @Body = REPLACE(@Body, '_x0020_', SPACE(1))
 SET @Body = REPLACE(@Body, '_x003D_', '=')
 SET @Body = REPLACE(@Body, '<tr><TRRow>1</TRRow>', '<tr bgcolor=#C6CFFF>')
 SET @Body = REPLACE(@Body, '<TRRow>0</TRRow>', '')

 -- Flag drives where free space is less than twice the space needed for the next growth
 DECLARE @flag BIT

 SELECT @flag = 1
 FROM   @ServerDrives
 WHERE  CONVERT(DEC(15, 2), DriveSpace) < CONVERT(DEC(15, 2), Required_Space) * 2
 SET @flag = ISNULL(@flag, 0)

 SET @TableHead2 = '<html><head>' + '<style>'
    + 'td {border: solid black 1px;padding-left:1px;padding-right:1px;padding-top:1px;padding-bottom:1px;font-size:8pt;} '
    + '</style>' + '</head>'
    + '<body><table cellpadding=0 cellspacing=0 border=0>'
    + '<tr><td align=center bgcolor=#E6E6FA><b>Row ID</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b>Drive</b></td>'
    + '<td align=center bgcolor=#E6E6FA><b>Drive Space</b></td> '

 IF ( @flag = 0 )
    SET @TableHead2 = @TableHead2
        + '<td align=center bgcolor=#E6E6FA><b>Required Drive Space</b></td></tr>';
 ELSE
    SET @TableHead2 = @TableHead2
        + '<td align=center bgcolor=#FF7F50><b>Required Drive Space</b></td></tr>';

 SELECT @Body2 = ( SELECT  td = ROW_NUMBER() OVER ( ORDER BY Drive ),
                           td = ISNULL(Drive, 'Unknown') + CHAR(10),
                           td = ISNULL(DriveSpace + ' MB', '0') + CHAR(10),
                           td = ISNULL(Required_Space + ' MB', '0')
                   FROM    @ServerDrives sd
                 FOR XML RAW('tr'), ELEMENTS )

 SELECT @Body = @TableHead2 + @Body2 + @TableTail + '<br/><br/><br/><br/>'
        + @TableHead + @Body + @TableTail

 -- Send mail (replace the recipient and profile name with your own)
 EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'abc@xyz.com',
    @subject = @Subject,
    @profile_name = 'MyMailProfileName',
    @body = @Body,
    @body_format = 'HTML';

 --Drop temporary tables when no longer required
 DROP TABLE #ServerFileStats
 DROP TABLE #LogSpaceStats
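
Before scheduling this script in a SQL Server Agent job, it is worth confirming that Database Mail is configured and working on the instance. A minimal check; the profile name and recipient below are just the placeholders used in the script above and should be replaced with your own:

-- List configured Database Mail profiles and check that Database Mail is started
EXEC msdb.dbo.sysmail_help_profile_sp ;
EXEC msdb.dbo.sysmail_help_status_sp ;
-- Send a simple test message (placeholder profile and address)
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MyMailProfileName',
    @recipients = 'abc@xyz.com',
    @subject = 'Database Mail test',
    @body = 'Test message before enabling the size/growth report job.' ;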

SQL Server: Query Result Showing Incomplete Text

Sometimes a simple, basic problem can trap even experts. This happened last week to one of my seniors, who was generating scripts and saving them to an output text file. On executing these scripts he found that for some queries the text was incomplete. He tried getting the result in text format instead, but the problem was the same.
File output
If you are facing this same basic problem in SQL Server Management Studio, there is no need to worry; you just need to change one SSMS option.
Go to TOOLS -> OPTIONS -> QUERY RESULTS -> Results to Text -> Maximum number of characters displayed in each column
The default value is 256, which is too small; update it to your desired length. You can extend it to a maximum of 8192 characters.
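
If the scripts are being generated from the command line rather than from SSMS, sqlcmd has a similar column-width limit that can be raised with its -y switch. A quick sketch; the server, database and file names here are only placeholders:

sqlcmd -S MyServer -d MyDatabase -i generate_scripts.sql -o output_scripts.sql -y 8000

Here -y raises the maximum display width for variable-length columns, so long generated statements are not cut off in the output file.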

SQL Server: Idera SQL Doctor

If you don't know much about SQL Server internals and performance tuning techniques, but still need your SQL Server performance right up to the mark, then you should try SQL Doctor, a really helpful tool by Idera.
SQL Doctor is a revolutionary technology that analyzes the performance of SQL Server and provides recommendations for improving it.

SQL Server: Automatic Query Execution at Every Instance Startup

Though production database servers are designed to stay up 24×7, whenever they do go down and restart, we sometimes need certain queries to execute automatically on every start-up, for example to clean some setup tables or to capture data that is only available at instance start-up.

For queries that need to be executed automatically at every start-up, we have to create a stored procedure that encapsulates them. Automatic execution of this stored procedure is then achieved with the sp_procoption system stored procedure.
(Note: such a stored procedure must be created in the master database.)
Let’s create a stored procedure to store instance start-up time in a log table.
USE MASTER
GO
--Create table to hold startup time
CREATE TABLE dbo.InstanceLog
(StartupTime DATETIME)
GO
--Create stored procedure to execute on startup automatically
CREATE PROCEDURE dbo.Proc_InsertStartupTime
AS
INSERT dbo.InstanceLog
SELECT GETDATE()
GO
Now we will use SP_PROCOPTION to tell SQL Server that we want to execute our stored procedure at every instance start-up. The syntax is as follows:
EXEC SP_PROCOPTION
@ProcName = 'Proc_InsertStartupTime',
@OptionName = 'STARTUP',
@OptionValue = 'TRUE'
After executing the above statement, whenever the SQL Server instance restarts, the stored procedure will be executed automatically and a new row will be inserted into our log table dbo.InstanceLog.
To revert this option and stop the stored procedure from executing automatically, use the following syntax.
EXEC sp_procoption
@ProcName = 'Proc_InsertStartupTime',
@OptionName = 'STARTUP',
@OptionValue = 'OFF'
(Applicable to SQL Server 2005 and later versions.)
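
To double-check which procedures are currently flagged for automatic execution, the ExecIsStartup object property can be queried; a quick sketch (note that the 'scan for startup procs' server option must be enabled, which sp_procoption normally turns on for you):

USE MASTER
GO
--List all stored procedures marked to run at instance startup
SELECT  name
FROM    sys.procedures
WHERE   OBJECTPROPERTY(object_id, 'ExecIsStartup') = 1
GO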

SQL Server: How SQL Diag Became Easy for Every DBA


SQL Diag is a useful diagnostic tool through which we can collect a lot of useful information, like:
  • Windows performance logs
  • Windows event logs
  • SQL Server Profiler traces
  • SQL Server blocking information
  • SQL Server configuration information 
The only problem with this tool is that you have to configure an XML file that tells SQL Diag what type of information you want. A default XML file is provided with the tool as an example, but it does not suit everyone's requirements, so at some point you have to change this XML file to get the right information.
Like me, many DBAs and developers are not good with XML, and that is why they are unable to use SQL Diag according to their needs. This problem has recently been solved by a simple SQL Diag configuration utility, through which you can build the input XML file according to your requirements using just your mouse, without knowing a single line of XML.
You can download it from here.
The SQL Diag Configuration Tool can be used with SQL Server 2005, SQL Server 2008 and SQL Server 2008 R2.

Provide your machine name, instance and valid user credentials. The user must be a member of the Windows Administrators group and of the SQL Server sysadmin fixed server role.

Select the counters of your choice and, at the end, save the file as XML at your desired path.
Next, at the command prompt, type the following line and press Enter to start the information collection process.
C:\> sqldiag /I "SD_Detailed.xml" /O "C:\SQLDiagOutPut" /N2
Here the first part, /I "SD_Detailed.xml", is the input XML file path, which we already generated through the SQL Diag Configuration Tool. The second part, /O "C:\SQLDiagOutPut", is the path and folder name where you want to place your output files. The third parameter, /N, controls output folder management.
There are two options for the /N parameter:
  • N1: Automatically overwrite the output folder data.
  • N2: Do not overwrite existing data; instead, keep a history of existing output by automatically creating folders such as SQLDiag_00000, SQLDiag_00001 and so on.
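
SQLDiag also accepts a few other switches that are handy on production servers; for example, an end time can be supplied so the collection stops on its own. A sketch based on the example above (the two-hour value is only an illustration; check sqldiag /? for the switches available on your build):

C:\> sqldiag /I "SD_Detailed.xml" /O "C:\SQLDiagOutPut" /N2 /E +02:00:00 /Q

Here /E +02:00:00 ends the collection two hours after it starts, and /Q runs it in quiet mode so it does not prompt for confirmation.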

SQL Server: How to Manage Error Log File on Production Server


In one of my earlier posts, “How to Avoid Big Single Error Log File on Production Servers”, I tried to explain how error log file growth can be controlled to avoid very large files on production servers.
Though the post was very short, the response was really great. From the comments received, today I would like to share the suggestions of SQL expert Martin C. on this topic.
If you are monitoring your log on a daily basis for specific errors, then the need to back up and keep old logs simply becomes dependent upon any compliance regulations. If you are not constrained by any compliance (e.g. SOX etc.), then once you have examined the logs for any useful messages they become less useful for anything else.
 

You could make use of tools to parse the log entries and store anything significant in a database table so you retain these for later analysis, but messages that regularly appear (such as a backup completed) are really of no further use once you have confirmed the backup was successful, especially as you also have that information stored in msdb.
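
As a rough sketch of that idea (the table and the search string below are just placeholders), the widely used but undocumented xp_readerrorlog procedure can pull filtered entries from the current error log into a table:

--Archive table for interesting error log entries
CREATE TABLE dbo.ErrorLogArchive
    (
      LogDate DATETIME,
      ProcessInfo NVARCHAR(100),
      LogText NVARCHAR(MAX)
    )
GO
--File 0 = current log, log type 1 = SQL Server error log, filter on the word 'Error'
INSERT INTO dbo.ErrorLogArchive ( LogDate, ProcessInfo, LogText )
        EXEC master.dbo.xp_readerrorlog 0, 1, N'Error'
GO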
 

I’d suggest you ensure you have sufficient monitoring and checking of the error logs and perhaps even manually issue sp_cycle_errorlog after you have checked them.

SQL Server: How to Read Trace File to Detect Database Detachment


One of our production servers contains more than 200 databases. A few of them are rarely used but still required. A few days back, someone from the DBA team accidentally detached one of the less-used databases. When we needed it, we got an error because the required database was no longer on the server.
To find out when and by whom this database was detached, we quickly executed a simple script.
First, get the current trace file name from the sys.traces catalog view:
SELECT * FROM sys.traces
GO
Then copy the trace file name, assign it to the @trace_file variable, and execute the following script.
DECLARE @trace_file NVARCHAR(500)
SELECT  @trace_file = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\log_88.trc'
SELECT  *
FROM    [fn_trace_gettable](@trace_file, DEFAULT)
WHERE   TextData LIKE '%DETACH%'
ORDER BY starttime DESC
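
A small variation on the same idea avoids hard-coding the file name by picking the default trace path straight out of sys.traces (a sketch; it reads the currently active trace file, so very old rollover files are not covered):

DECLARE @trace_file NVARCHAR(500)
--Pick up the currently active default trace file automatically
SELECT  @trace_file = path
FROM    sys.traces
WHERE   is_default = 1
SELECT  *
FROM    [fn_trace_gettable](@trace_file, DEFAULT)
WHERE   TextData LIKE '%DETACH%'
ORDER BY starttime DESC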


SQL Server: How to Avoid Big Single Error Log File on Production Servers


One strange thing I found on our production database servers was a single, very large error log file. It took a long time to open, and exploring it for the required information was even worse. This happened because production database servers almost never go down, and SQL Server keeps writing the error log to a single file. On every restart SQL Server starts a new error log file, but on production servers restarts occur only after a very long time. That is why the error log file grew to a very large size.
The solution to this problem is the system stored procedure sp_cycle_errorlog, which cycles the error log file without restarting SQL Server. Executing this stored procedure from a job, or manually, helps cycle the error log file periodically.
Exec master.dbo.sp_cycle_errorlog
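
The SQL Server Agent error log can be cycled in the same way, and the number of archived SQL Server error logs (6 by default) can be increased so that cycling does not throw history away too quickly. A short sketch; the retention value is only an example, and the same setting is exposed in SSMS under Management > SQL Server Logs > Configure:

--Cycle the SQL Server Agent error log as well
EXEC msdb.dbo.sp_cycle_agent_errorlog
GO
--Keep up to 12 archived error logs instead of the default 6 (example value)
EXEC master.dbo.xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 12
GO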

SQL Server: How to Get Physical Path of Tables and Indexes


When a database consists of multiple data files, objects (tables and indexes) are dispersed across those files. A common requirement is to get a list of objects (tables, indexes) along with their physical path. Here is a simple query to accomplish this task.
SELECT  'table_name' = OBJECT_NAME(i.id),
        i.indid,
        'index_name' = i.name,
        i.groupid,
        'filegroup' = f.name,
        'file_name' = d.physical_name,
        'dataspace' = s.name
FROM    sys.sysindexes i,
        sys.filegroups f,
        sys.database_files d,
        sys.data_spaces s
WHERE   OBJECTPROPERTY(i.id, 'IsUserTable') = 1
        AND f.data_space_id = i.groupid
        AND f.data_space_id = d.data_space_id
        AND f.data_space_id = s.data_space_id
ORDER BY f.name,
        OBJECT_NAME(i.id),
        i.groupid
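
On SQL Server 2005 and later, the same list can be built from the newer catalog views; a rough equivalent sketch (it covers non-partitioned objects only, because a partitioned index points to a partition scheme rather than a single filegroup):

SELECT  OBJECT_NAME(i.object_id) AS table_name,
        i.index_id,
        i.name AS index_name,
        fg.name AS filegroup,
        df.physical_name AS file_name
FROM    sys.indexes i
        JOIN sys.filegroups fg ON fg.data_space_id = i.data_space_id
        JOIN sys.database_files df ON df.data_space_id = fg.data_space_id
WHERE   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY fg.name,
        table_name,
        i.index_id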

SQL Server: How to Figure out Peak/Off-Peak Hours of Production Databases


In every performance tuning session we like to repeat one sentence, “DON’T EXECUTE THIS IN PEAK HOURS”, meaning you must wait for off-peak hours to execute a specific query. For example, we should never run an index REBUILD or a FULL database backup during the peak hours of a production database.
But is there any way to find out these peak and off-peak hours for a production database? The performance counter SQL Server: SQL Statistics\Batch Requests/sec can be of some help, but what if I want to create a graph report of the workload for a specific production database?
Follow the given steps to accomplish this.
1.  Create a table to store workload data for the next 24 hours, or for any other period of your choice.
CREATE TABLE dbo.LoadCounter
    (
      cntr_time DATETIME,
      cntr_value BIGINT
    )
2.  Create a job so the LoadCounter table is filled every 10 minutes (or at an interval of your own choice), and you are done.
Note: Don't forget to provide the SERVER NAME, DATABASE NAME and LOGIN NAME in the following statements at the marked places.
USE [msdb]
GO
--Add new job with name LoadCounter
DECLARE @jobId BINARY(16)
EXEC  msdb.dbo.sp_add_job @job_name=N'LoadCounter',
            @enabled=1,
            @notify_level_eventlog=0,
            @notify_level_email=2,
            @notify_level_netsend=2,
            @notify_level_page=2,
            @delete_level=0,
            @category_name=N'[Uncategorized (Local)]',
            @owner_login_name=N'YourLoginNameHere', --Provide your own login name here
            @job_id = @jobId OUTPUT
SELECT @jobId
GO
EXEC msdb.dbo.sp_add_jobserver @job_name=N'LoadCounter', @server_name = N'YOURserverNAMEhere' --Provide your database server name here
GO
-- Create job step to insert a counter record from sys.dm_os_performance_counters
USE [msdb]
GO
EXEC msdb.dbo.sp_add_jobstep @job_name=N'LoadCounter', @step_name=N'LoadCounter',
            @step_id=1,
            @cmdexec_success_code=0,
            @on_success_action=1,
            @on_fail_action=2,
            @retry_attempts=0,
            @retry_interval=0,
            @os_run_priority=0, @subsystem=N'TSQL',
            @command=N'INSERT  INTO dbo.LoadCounter ( cntr_time, cntr_value )
        SELECT  GETDATE() AS cntr_time,
                cntr_value
        FROM    sys.dm_os_performance_counters
        WHERE   counter_name = ''Batch Requests/sec''',
            @database_name=N'YourDatabaseNameHere', --Provide your database name here
            @flags=0
GO
-- Create schedule
USE [msdb]
GO
DECLARE @schedule_id INT
EXEC msdb.dbo.sp_add_jobschedule @job_name=N'LoadCounter', @name=N'LoadCounter',
            @enabled=1,
            @freq_type=4,
            @freq_interval=1,
            @freq_subday_type=4,
            @freq_subday_interval=10, -- execute every 10 minutes
            @freq_relative_interval=0,
            @freq_recurrence_factor=1,
            @active_start_date=20110114,
            @active_end_date=99991231,
            @active_start_time=0,
            @active_end_time=235959, @schedule_id = @schedule_id OUTPUT
SELECT @schedule_id
GO
3.  Execute the following query to see the results, or build a report (or a graph) on top of it, and easily find out during which hours your database end users are working actively and when they are just sleeping 😉. Since Batch Requests/sec is a cumulative counter, subtracting the previous sample from each reading gives the number of batches executed during that interval.
SELECT  cntr_time,
        cntr_value - ( SELECT TOP ( 1 )
                                cntr_value
                       FROM     dbo.LoadCounter
                       WHERE    cntr_time < OuterTable.cntr_time
                       ORDER BY cntr_time DESC
                     ) AS BatchPerTenMin
FROM    dbo.LoadCounter OuterTable
ORDER BY cntr_time

Note: The SQL Server Agent service must be running for the job to execute at the given intervals.
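
To turn the raw samples into an hourly workload profile, the per-interval deltas can be averaged by hour of day. A sketch; LAG requires SQL Server 2012 or later, and on older versions the correlated subquery from step 3 can be reused instead:

;WITH Deltas AS
(
    --Batches executed between two consecutive samples
    SELECT  cntr_time,
            cntr_value - LAG(cntr_value) OVER ( ORDER BY cntr_time ) AS batches
    FROM    dbo.LoadCounter
)
SELECT  DATEPART(HOUR, cntr_time) AS hour_of_day,
        AVG(batches) AS avg_batches_per_interval
FROM    Deltas
WHERE   batches IS NOT NULL
GROUP BY DATEPART(HOUR, cntr_time)
ORDER BY hour_of_day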