Using Microsoft’s Performance Toolkit for Benchmarking – Part 3

This is the third and final blog in the series on Microsoft’s Performance Toolkit for Benchmarking.  By this point in the process, the Windows Assessment and Deployment Kit has been installed and the tests have been specified.  The next step is to analyze the data from the benchmark tests.

After running the tests, results can be viewed in the “Windows Performance Analyzer” application, as depicted in Figure 1.  Open “Windows Performance Analyzer” by double-clicking any benchmark test result (.etl) file.

The “Windows Performance Analyzer” window provides various metrics for each individual test.  However, when running the recommended tests (20 iterations or a 24-hour duration), it is more useful to view the results in aggregate than individually.

As such, create an XML file for each test, then use an automated process to combine the results into one complete results file. The complete results file can then be imported into Excel for detailed statistical analysis and graphing. Figure 2 provides an example of a graph created using the complete results file. Notice Figure 2 contains the five metrics identified in Part I of this series.
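The combining step described above can be automated with a short script.  The sketch below is one illustrative approach in Python, assuming a hypothetical per-test XML schema in which each file contains `<Metric Name="..." Value="..."/>` elements; the actual element names in the exported XML will differ, so the parsing logic should be adjusted to match.

```python
import csv
import glob
import xml.etree.ElementTree as ET

def combine_results(xml_pattern, out_csv):
    """Merge per-test XML result files into one CSV that Excel can open.

    Assumes a hypothetical schema where each file holds
    <Metric Name="..." Value="..."/> elements; adjust the element
    and attribute names to match the real exported XML.
    """
    rows = []
    for path in sorted(glob.glob(xml_pattern)):
        root = ET.parse(path).getroot()
        row = {"test": path}
        for metric in root.iter("Metric"):
            row[metric.get("Name")] = metric.get("Value")
        rows.append(row)
    if not rows:
        return 0
    # Take the union of metric names so columns stay consistent
    # even if a test is missing a measurement.
    fields = ["test"] + sorted({k for r in rows for k in r if k != "test"})
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

Running this against a folder of exported XML files produces a single CSV with one row per test, ready to import into Excel for statistics and graphing.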

Having the complete set of test results in Excel provides the flexibility to isolate specific measurements.  For example, Figure 3 isolates the bootDoneViaPost measurement.

Isolating a single measurement across multiple tests makes it easy to identify issues such as abnormal spikes or unusually high values.
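Spotting spikes by eye works for a chart like Figure 3, but the same check can be scripted.  The sketch below flags any iteration whose value exceeds the mean by more than a chosen number of standard deviations; the two-sigma threshold is an illustrative default, not part of the toolkit.

```python
from statistics import mean, stdev

def flag_spikes(values, sigma=2.0):
    """Return the indices of iterations whose value exceeds the mean
    by more than `sigma` sample standard deviations -- a simple screen
    for abnormal spikes in a single measurement across many tests."""
    avg = mean(values)
    sd = stdev(values)
    return [i for i, v in enumerate(values) if v > avg + sigma * sd]
```

Feeding in the bootDoneViaPost column from the combined results, for example, would return the iteration numbers that warrant a closer look with the techniques below.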

Further analysis is required to determine the actual root cause and how to address it.  Some techniques for more detailed investigation include:
- Event Viewer
- Detailed XML results comparison
- Network utilization monitoring
- Machine resource utilization monitoring

Once the issue has been investigated and resolved, the tests can be re-run to completion with valid results. The final results provide a comprehensive analysis of the startup performance of a company’s computers, as well as a baseline against which to compare future changes. This analysis is especially useful when comparing and optimizing the performance of different hardware and software combinations.

The three parts of this blog series covered a lot of information.  If you have any questions or want to talk more about benchmark testing or using Microsoft’s Benchmarking Toolkit, please reach out via Olenick’s website or my LinkedIn profile.

Author: Mike Willett, Lead Senior Consultant at Olenick
