##################
#
# Template for a Vdbench Test Script
# Re. SNIA Emerald Power Efficiency Measurement Specification
# History:
#Version 2013_08_21 Initial script.
#Version 2013_12_18 (draft) Changed interval from 5 seconds to 60 seconds, added 10 minute warmup section with different identifier.
#Version 2014_01_15 (draft) Changed the elapsed to 31 for final test phases to account for Vdbench not using the first minute in the average calculation.
#Version 2014_02_26 (draft) Fixed bug in the pre-fill rd section by removing the warmup=30 requirement. Fixed typo in the warm up RD for sequential write.
#Version 2014_04_02 (draft) Typo fixes. Allowed "Change" values to have separate values for each type.
#Version 2014_04_30 (draft) Deleted extraneous comments. Added Storage Designator Section. Added hints to equate thread count between fill and SW, conditioning and HB.
#Version 2014_05_14 Approved text edits.
#Version 2015_07_17 Draft version for WebEx. Added 4K support and example Linux and AIX SDs.
#Version 2015_08_05 (draft) TWG meeting: editorial work plus better identification of some allowed/not allowed changes.
#Version 2015_08_11 (draft) DWT added Carlos's emailed changes to the base provided by Steve Johnson Aug 6.
#Version 2015_09_16 Approved by Green Storage TWG for release.
#Version 2017_10_11 Approved by Green Storage TWG for release for use with V3.0.1.
#Version 2018_06_13 Approved by Green Storage TWG for release for use with V3.0.2.
#Version 2020_04_29 Approved by Green Storage TWG for release for use with V3.0.3, V4.0.0, and ISO/IEC 24091:2019, with no changes other than to comment lines.
#
# Copyright (c) 2014, 2015, 2017, 2018, 2020 Storage Networking Industry Association
#
# This template is provided to facilitate testing of Online and Near Online storage systems aligned to the current SNIA Emerald Power Efficiency Measurement Specification.
# Any resulting script is run with the Emerald_System_Configuration through this example command line:
#   vdbench -f Emerald_System_configuration Emerald_test_script
#
# The only recommended changes to this template are those that define the storage system under test and its operation for Vdbench.
# SNIA provides this template only as an example and does not guarantee its correctness.
# SNIA may change this template at any time.
# Responsibility for compliance with the SNIA Emerald Power Efficiency Measurement Specification resides with the test sponsor.
# The SNIA Emerald Power Efficiency Measurement Specification takes precedence over this template if there is a conflict.
#
# Needed changes to this template are identified as (Change_an, Change_ym, Change_xk).
# Change_an is an integer representing the number of streams to be used across the concatenated storage space.
#   There are two instances (n = 1,2) of Change_an in the template that need to be replaced. These can be the same value.
# Change_ym is an integer representing the number of threads for streaming workloads. This shall be a multiple of the value used for Change_an.
#   There are three instances (m = 1,2,3) of Change_ym in the template that need to be replaced. These can be the same value.
# Change_xk is an integer representing the number of threads used for the random workloads.
#   There are four instances (k = 1,2,3,4) of Change_xk in the template that need to be replaced. These can be the same value.
#
concatenate=yes
compratio=2.00

##################
# Begin Storage Designator Section
# Change sd's to match the storage configuration
##################

##################
# Example Storage Definition (sd) (Windows)
##################
#sd=sd1,lun=\\.\PhysicalDrive2
#sd=sd2,lun=\\.\PhysicalDrive3
# .
# .
# .
#sd=sdN,lun=\\.\PhysicalDriveN

##################
# Example Storage Definition (sd) (Linux)
##################
#sd=sd1,lun=/dev/sdb,openflags=o_direct
#sd=sd2,lun=/dev/sdc,openflags=o_direct
# .
# .
# .
#sd=sdN,lun=/dev/sdN,openflags=o_direct

##################
# Example Storage Definition (sd) (AIX)
##################
#sd=sd1,lun=/dev/rhdisk2
#sd=sd2,lun=/dev/rhdisk3
# .
# .
# .
#sd=sdN,lun=/dev/rhdiskN

# Required change: please read the next lines carefully and remove the comment character from the appropriate wd line.
# Before taking the action indicated, make sure your drives are of the type described below.
# Default transfer sizes for native 512 Byte block devices. For this type of drive, remove the comment from the next line.
#wd=default,xfersize=(8k,31,4K,27,64K,20,16K,5,32K,5,128K,2,1K,2,60K,2,512,2,256K,2,48K,1,56K,1),rdpct=70,th=1
# Default transfer sizes for native 4K Byte block devices. For this type of drive, remove the comment from the next line.
#wd=default,xfersize=(8k,31,4K,31,64K,20,16K,5,32K,5,128K,2,60K,2,256K,2,48K,1,56K,1),rdpct=70,th=1

##################
# Workload definitions used in the SNIA Emerald Power Efficiency Measurement Specification
# Begin block of no changes to script
##################
# Hot-band workload
wd=HOTwd_uniform,skew=6,sd=sd*,seekpct=rand,rdpct=50
wd=HOTwd_hot1,sd=sd*,skew=28,seekpct=rand,hotband=(10,18)
wd=HOTwd_99rseq1,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100
wd=HOTwd_99rseq2,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100
wd=HOTwd_99rseq3,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100
wd=HOTwd_99rseq4,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100
wd=HOTwd_99rseq5,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100
wd=HOTwd_hot2,sd=sd*,skew=14,seekpct=rand,hotband=(32,40)
wd=HOTwd_hot3,sd=sd*,skew=7,seekpct=rand,hotband=(55,63)
wd=HOTwd_hot4,sd=sd*,skew=5,seekpct=rand,hotband=(80,88)
wd=HOTwd_99wseq1,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=0
wd=HOTwd_99wseq2,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=0
wd=HOTwd_99wseq3,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=0

##################
# End block of no changes
##################

# Random 4 Corners workload
wd=wd_mixed,sd=sd*,seekpct=rand

# Sequential 4 Corners workload
# Replace Change_a2 with the number of streams across the concatenated storage space.
wd=wd_seq,sd=sd*,seekpct=0,streams=Change_a2

# Pre-fill storage workload
# Replace Change_a1 with the number of streams across the concatenated storage space.
# Hint: Normally, Change_a2 equates to Change_a1.
wd=wd_fill,sd=sd*,seekpct=eof,streams=Change_a1

##################
# Pre-fill and conditioning Run Definitions
##################

# Pre-fill Test
# Test that fills the storage.
# Replace Change_y1 with the optimal number of threads that the system can handle to fill the storage quickly.
# The number of threads (Change_y1) for the pre-fill workload shall be a multiple of Change_a1.
# Hint: After tuning Change_y2 below, equate Change_y1 to Change_y2.
# PREFILL IS NOT PART OF POWER TESTING
rd=rd_prefill,wd=wd_fill,iorate=max,rdpct=0,xfersize=256K,elapsed=5000m,interval=60,th=Change_y1

# START OF POWER TESTING

# Conditioning Test
# Test to condition and stabilize the storage system.
# Replace Change_x1 with the optimal number of threads for the system. Recommend ~8 per physical drive in the system.
# After tuning to determine Change_x2 below, Change_x1 shall equal Change_x2.
rd=rd_conditioning,wd=HOTwd*,iorate=MAX,warmup=10m,elapsed=12H,interval=60,th=Change_x1

##################
# Active Run Definitions
##################
# Default parameters used for all active run definitions.
# Runs (elapsed) must be a minimum of 31m. "31m" in the following line may be increased as needed for better stability (but not decreased).
rd=default,iorate=MAX,elapsed=31m,interval=60

# Hot Band test phase
# If stability cannot be achieved in 31m, the tester can append ",elapsed=xxm" to the end of the "rd final" line, where xx is a time in minutes larger than 31.
# Replace Change_x2 with the optimal number of threads for the system. Recommend ~8 per physical drive in the system.
rd=rd_hband_final,wd=HOTwd*,th=Change_x2

# Random write test phase
# Replace Change_x3 with the optimal number of threads for the system. Recommend ~4-8 per physical drive in the system.
rd=rd_rw_warm,wd=wd_mixed,rdpct=0,xfersize=8k,elapsed=10m,th=Change_x3
# Added section for warmup period of 10 minutes.
# If stability cannot be achieved in 31m, the tester can append ",elapsed=xxm" to the end of the "rd final" line, where xx is a time in minutes larger than 31.
rd=rd_rw_final,wd=wd_mixed,rdpct=0,xfersize=8k,th=Change_x3

# Random read test phase
# Replace Change_x4 with the optimal number of threads for the system. Recommend ~8 per physical drive in the system.
rd=rd_rr_warm,wd=wd_mixed,rdpct=100,xfersize=8k,elapsed=10m,th=Change_x4
# Added section for warmup period of 10 minutes.
# If stability cannot be achieved in 31m, the tester can append ",elapsed=xxm" to the end of the "rd final" line, where xx is a time in minutes larger than 31.
rd=rd_rr_final,wd=wd_mixed,rdpct=100,xfersize=8k,th=Change_x4

# Sequential write test phase
# Replace Change_y2 with the optimal number of threads for the system under test. Recommend 2-3 per physical drive in the system.
# The number of threads (Change_y2) for the sequential workload shall be a multiple of Change_a2.
rd=rd_sw_warm,wd=wd_seq,rdpct=0,xfersize=256K,elapsed=10m,th=Change_y2
# Added section for warmup period of 10 minutes.
# If stability cannot be achieved in 31m, the tester can append ",elapsed=xxm" to the end of the "rd final" line, where xx is a time in minutes larger than 31.
rd=rd_sw_final,wd=wd_seq,rdpct=0,xfersize=256K,th=Change_y2

# Sequential read test phase
# Replace Change_y3 with the optimal number of threads for the system under test. Recommend 2-3 per physical drive in the system.
# The number of threads (Change_y3) for the sequential workload shall be a multiple of Change_a2.
rd=rd_sr_warm,wd=wd_seq,rdpct=100,xfersize=256K,elapsed=10m,th=Change_y3
# Added section for warmup period of 10 minutes.
# If stability cannot be achieved in 31m, the tester can append ",elapsed=xxm" to the end of the "rd final" line, where xx is a time in minutes larger than 31.
rd=rd_sr_final,wd=wd_seq,rdpct=100,xfersize=256K,th=Change_y3

# For additional information see http://sniaemerald.com.

##################
# END
##################
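# The skew and xfersize weight lists above are designed to total 100 percent, and the
# streaming thread counts (Change_ym) must be multiples of the stream counts (Change_an).
# Both constraints are easy to break when hand-editing the template. The following
# standalone Python sketch (not part of the SNIA template; the example Change values
# in it are hypothetical) re-checks the distributions copied from the wd lines above.

```python
# Sanity checks for the distributions used in this Vdbench template.
# All weight lists are copied verbatim from the wd definitions above.

def weights(dist):
    """Return just the percentage weights from a Vdbench-style (size,pct,...) list."""
    return dist[1::2]

# xfersize distributions from the template.
xfersize_512b = ["8k", 31, "4K", 27, "64K", 20, "16K", 5, "32K", 5, "128K", 2,
                 "1K", 2, "60K", 2, "512", 2, "256K", 2, "48K", 1, "56K", 1]
xfersize_4k   = ["8k", 31, "4K", 31, "64K", 20, "16K", 5, "32K", 5, "128K", 2,
                 "60K", 2, "256K", 2, "48K", 1, "56K", 1]
xfersize_hot  = ["8k", 33, "4K", 29, "64K", 22, "16K", 6, "32K", 5, "128K", 3,
                 "256K", 2]

# Hot-band workload skews: uniform, hot1, five 99rseq, hot2-hot4, three 99wseq.
hotband_skews = [6, 28] + [5] * 5 + [14, 7, 5] + [5] * 3

for name, dist in [("512B default", xfersize_512b), ("4K default", xfersize_4k),
                   ("hot-band", xfersize_hot)]:
    total = sum(weights(dist))
    assert total == 100, f"{name} xfersize weights sum to {total}, not 100"

assert sum(hotband_skews) == 100, "hot-band skews must sum to 100"

# Change_ym shall be a multiple of Change_an; these example values are hypothetical.
change_a2, change_y2 = 8, 24
assert change_y2 % change_a2 == 0, "threads must be a multiple of streams"

print("all distribution checks passed")
```

Running the sketch before a measurement run catches a mistyped weight early, instead of discovering it when Vdbench rejects the parameter file.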