Deep sampling
Deep sampling is a variation of statistical sampling that trades precision for insight: a small number of samples is taken, with each sample carrying a large amount of information. The samples are spread approximately uniformly over the resource of interest, such as time or storage space. The technique is useful for exposing large problems that would otherwise remain hidden.
Examples
- In software performance analysis, the call stack is sampled at random times during an execution interval. This can identify hot spots as well as extraneous function calls (see the first sketch after this list).
- In computer disk storage management, random bytes of storage under a directory are sampled, and at each sample the directory path to the file containing that byte is recorded. This can identify files, or types of files, that unnecessarily consume large amounts of storage, even when they are buried or widely distributed within the directory structure (see the second sketch below).
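To illustrate the first example, here is a minimal sketch of random call-stack sampling in Python. It is a toy, not a production profiler: it relies on CPython's `sys._current_frames()` introspection, and the `slow_helper` and `workload` functions are hypothetical stand-ins for a real program.

```python
import random
import sys
import threading
import time
import traceback
from collections import Counter

def sample_stacks(thread_id, stop, samples, lo=0.02, hi=0.1):
    """Snapshot the call stack of the given thread at random times
    until `stop` is set (CPython-specific introspection)."""
    while not stop.is_set():
        time.sleep(random.uniform(lo, hi))  # roughly uniform over the run
        if stop.is_set():
            break
        frame = sys._current_frames().get(thread_id)
        if frame is not None:
            samples.append(traceback.extract_stack(frame))

# Hypothetical workload: one deliberately expensive helper.
def slow_helper():
    return sum(i * i for i in range(500_000))

def workload():
    return [slow_helper() for _ in range(40)]

if __name__ == "__main__":
    samples, stop = [], threading.Event()
    sampler = threading.Thread(
        target=sample_stacks,
        args=(threading.main_thread().ident, stop, samples),
    )
    sampler.start()
    workload()
    stop.set()
    sampler.join()
    # Functions appearing on many sampled stacks are where the
    # wall-clock time is going, whether hot spots or needless calls.
    hits = Counter(f.name for stack in samples for f in stack)
    for name, count in hits.most_common(5):
        print(f"{name}: on the stack in {count} of {len(samples)} samples")
```

Even a few dozen samples are usually enough: a function responsible for a large fraction of the run time will appear on a correspondingly large fraction of the sampled stacks.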
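And a sketch of the second example, using only Python's standard library: choosing a file with probability proportional to its size is equivalent to choosing a uniformly random byte of storage, so heavily sampled paths are the likely storage hogs.

```python
import os
import random
from collections import Counter

def sample_storage(root, n_samples=100):
    """Sample n_samples bytes uniformly at random from the files under
    root, recording the path of the file that owns each sampled byte."""
    paths, sizes = [], []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip unreadable or vanished files
            if size > 0:
                paths.append(path)
                sizes.append(size)
    if not paths:
        return Counter()
    # Weighting by byte count makes "choose a file" equivalent to
    # "choose a uniformly random byte of storage under root".
    return Counter(random.choices(paths, weights=sizes, k=n_samples))

if __name__ == "__main__":
    for path, count in sample_storage(".").most_common(10):
        print(f"{count:3d} samples  {path}")
```

Aggregating the sampled paths by directory or file extension instead of by exact path would identify *types* of files that dominate storage, per the example above.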