SQL Data Decimation by Date Range and Nth Record

added by Arroyocode
5/24/2012 10:02:28 AM


Data decimation (aka down sampling) of large SQL datasets allows an application to quickly visualize trends over long historical periods, greatly improving application performance. By combining a custom date range with SQL's modulo operator, long periods of data can be displayed quickly and evenly.
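The original query isn't reproduced here, so the following is a minimal T-SQL sketch of the idea: number the rows inside the requested date range, then use the modulo operator to keep only every Nth row. The table and column names (`dbo.SensorReadings`, `ReadingDate`, `Value`) are assumptions for illustration.

```sql
-- Hypothetical sketch: decimate a date range to every Nth row.
-- dbo.SensorReadings, ReadingDate, and Value are assumed names.
DECLARE @StartDate datetime = '2011-01-01',
        @EndDate   datetime = '2012-01-01',
        @Nth       int      = 100;  -- keep 1 of every 100 rows

WITH Ranged AS (
    SELECT ReadingDate,
           Value,
           ROW_NUMBER() OVER (ORDER BY ReadingDate) AS RowNum
    FROM dbo.SensorReadings
    WHERE ReadingDate >= @StartDate
      AND ReadingDate <  @EndDate
)
SELECT ReadingDate, Value
FROM Ranged
WHERE RowNum % @Nth = 0   -- the modulo filter does the decimation
ORDER BY ReadingDate;
```

Because the rows are numbered after the date filter is applied, the surviving points stay evenly spaced across whatever range the user selects, regardless of gaps in the raw data.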


5/24/2012 10:04:20 AM
Nice approach to solving this. I wonder how much extra performance, if any, could be squeezed out by using something like an indexed view. The query analyzer in SQL Server is pretty good these days, so sometimes it's hard to tell whether adding a view over the underlying table (even if it's then being queried in a stored procedure) will have any positive benefit over querying the table itself. If you seldom insert data into the table, a covering index could negate the benefits of a view as well.
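For reference, the covering index mentioned above might look something like this sketch (table and column names assumed, matching the earlier illustration): the date column drives the range seek, while `INCLUDE` stores the value column at the leaf level so the query never touches the base table.

```sql
-- Hypothetical covering index: seeks on ReadingDate, and INCLUDEs
-- Value so the decimation query is answered from the index alone.
CREATE NONCLUSTERED INDEX IX_SensorReadings_ReadingDate
ON dbo.SensorReadings (ReadingDate)
INCLUDE (Value);
```

The trade-off is the usual one: faster range scans at the cost of extra write overhead on insert, which is why it suits tables that are seldom inserted into.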

5/24/2012 11:02:11 AM
Yes sir, all good points. We opted for a separate schema per client due to the magnitude of statistics records in recent project work. We originally tested the indexed view, and even went as far as creating its index on a separate filegroup on a dedicated spindle. Due to the massive volume of data, it turned out that an isolated schema per client, accessed via stored procedure, plus a slightly modified version of this post's query solved the issue. Another benefit of the separate schemas was no additional disk storage for the indexed view's files (upwards of 400+ GB for us!).
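The schema-per-client layout described above could be sketched roughly as follows; the schema name, table, and procedure are all hypothetical, and the query body mirrors the decimation example from the post:

```sql
-- Hypothetical schema-per-client isolation; all names are assumptions.
CREATE SCHEMA ClientA AUTHORIZATION dbo;
GO
CREATE TABLE ClientA.SensorReadings (
    ReadingDate datetime NOT NULL,
    Value       float    NOT NULL
);
GO
-- Each client schema carries its own copy of the decimation procedure,
-- so a client's queries never touch another client's data.
CREATE PROCEDURE ClientA.GetDecimatedReadings
    @StartDate datetime,
    @EndDate   datetime,
    @Nth       int
AS
BEGIN
    SELECT ReadingDate, Value
    FROM (
        SELECT ReadingDate,
               Value,
               ROW_NUMBER() OVER (ORDER BY ReadingDate) AS RowNum
        FROM ClientA.SensorReadings
        WHERE ReadingDate >= @StartDate
          AND ReadingDate <  @EndDate
    ) AS Ranged
    WHERE RowNum % @Nth = 0
    ORDER BY ReadingDate;
END
```

Isolating each client this way also sidesteps the indexed-view storage cost mentioned above, since no materialized copy of the combined data is ever created.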

5/24/2012 12:30:26 PM
Wow, that's some serious disk space for the indexes! Thanks for the follow-up, it gives me something to think about in terms of managing some of our file growth issues relating to indexes.