RandomQueryGeneratorTests
This page describes the tests currently available for the Random Query Generator. They reside on Launchpad; get them either from the download page: https://launchpad.net/randgen/+download or via a Bazaar checkout:
$ bzr checkout lp:randgen randgen
Note that for all tests, you should specify --duration and --queries to control how long the test will run. Concurrent tests also allow for --threads to be specified.
All of the optimizer grammars are suitable for:
- Comparison tests
- One version of the server vs. another (5.1 vs. 5.4)
--threads=1 \
--grammar=conf/<optimizer_grammar>.yy \
--basedir1=<path>/mysql-5.4 \
--basedir2=<path>/mysql-5.1 \
--validators=ResultsetComparatorSimplify
- The same version of the server with different parameters (ex: with semijoin on and off)
--threads=1 \
--grammar=conf/subquery_semijoin.yy \
--vardir1=/tmp/vardir-semijoin \
--vardir2=/tmp/vardir-nosemijoin \
--validators=ResultsetComparatorSimplify \
--mysqld2=--init-file=`pwd`./init/no_semijoin.sql
- Two different storage engines
--grammar=conf/subquery_semijoin.yy \
--mysqld=--default-storage-engine=MyISAM \
--mysqld2=--default-storage-engine=Innodb \
--vardir1=/tmp/vardir-myisam \
--vardir2=/tmp/vardir-innodb
- Single-server tests with a Validator (Transformer or SelectStability)
--threads=1 \
--grammar=conf/<optimizer_grammar>.yy \
--basedir=<path>/mysql-5.1 \
--validators=SelectStability | Transformer
- Valgrind runs (see RandomQueryGeneratorValgrind)
--valgrind \
--validators=MarkErrorLog
- The grammars marked as older are somewhat limited in coverage and maintenance but are still valuable for finding bugs, especially with the Transformer validator and server-server compares
- Be aware that the older grammars have the following limitations:
- False positives for things like STD*() and VAR() functions when comparing 5.1 and 5.4+ servers
- LIMIT transformations will also provide false positives
- Applying certain aggregate functions to fields (datetime, char columns) for which the results are not reliable for comparison purposes.
This grammar file is designed to hit a number of table access methods / optimizations without using subqueries.
Example set of (runall.pl) options suitable for MySQL 5.1 regression testing:
--threads=1 --queries=100K --grammar=conf/optimizer/optimizer_no_subquery.yy
The portable version has been tailored to be portable to other database systems. See RandomQueryGenerator3WayDBComparision for more information on this.
Example set of (runall.pl) options suitable for MySQL 5.1 vs. 5.0 comparison testing:
--basedir1=/mysql51/ --basedir2=/mysql50/ --threads=1 --queries=20K --grammar=conf/optimizer/optimizer_no_subquery_portable.yy --validator=ResultsetComparatorSimplify --views
This grammar file is designed to hit a number of table access methods / optimizations with subqueries in a variety of places:
- SELECT clause
- FROM clause
- WHERE clause (with nested subqueries possible)
Example set of (runall.pl) options suitable for MySQL 5.1 regression testing:
--threads=1 --queries=75K --grammar=conf/optimizer/optimizer_subquery.yy
This grammar focuses on generating interesting / complicated JOIN clauses.
- MUST be used with --gendata=conf/outer_join.zz as-is (the .zz file can be modified if needed)
- Designed for:
- longer stress / endurance runs
- Server-server comparison
- 3Way-comparisons with other DBMS's (the portable variant of the grammar is recommended for this)
- valgrind runs
--threads=1 \
--queries=30000 \
--grammar=conf/optimizer/outer_join.yy \
--gendata=conf/optimizer/outer_join.zz
(A larger --queries value will hit a documented crash.)
This grammar focuses largely on hitting the range optimization
We randomly create an index on a table, then try to generate queries that utilize the index
We run ~5 queries per index, then drop the index and pick a new table and index pattern
- MUST be used with --gendata=conf/range_access.zz
- Designed for:
- stress / endurance runs
- Server-server comparison
- 3 way compares
- valgrind runs
- Particularly interesting with the Transformer Validator, especially with the new DisableIndexes Transformer.
--threads=1 --queries=15K --grammar=conf/optimizer/range_access.yy --gendata=conf/optimizer/range_access.zz
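For illustration only, the per-index cycle described above corresponds roughly to the following kind of statement sequence (the table and column names here are invented, not taken from the grammar):
CREATE INDEX idx_int_key ON t1 (int_key);
SELECT * FROM t1 WHERE int_key BETWEEN 10 AND 20;
SELECT * FROM t1 WHERE int_key > 100 OR int_key IS NULL;
-- ... roughly five such range queries per index ...
DROP INDEX idx_int_key ON t1;
-- a new table and index pattern is then picked and the cycle repeats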
Experimental grammar designed to mix table access types more thoroughly than optimizer_no_subquery.yy. It produces interesting EXPLAIN / debug output and some exotic mixes of table access methods.
(older grammar) This grammar file creates SELECT queries that trigger the semijoin optimization, e.g. queries in the form:
SELECT ... FROM ... WHERE ... IN ( SELECT ... ) AND ...
(older grammar) This grammar file creates nested subselects.
(older grammar) Designed to trigger the materialization optimization
This Validator will take each generated query and retry it using one of several methods such as:
- Changing / Removing ORDER BY clauses
- Adding / Removing DISTINCT clauses
- Adding / Removing LIMIT clauses
- Adding / Removing other restrictive clauses (new result set should include at least the original result set)
- Expansion of subqueries
- ex: SELECT * FROM...WHERE ('a') IN (SELECT char_column...) vs. SELECT * FROM...WHERE ('a') IN ('b','c','x'...)
This Validator will take each generated query and retry it, replacing its ORDER BY with an ORDER BY on every field in the table separately. This should have no effect on the result set of the query, so if the result set is different, failure will be reported.
--grammar=conf/subquery_semijoin.yy --validators=OrderBy
This Validator will take each generated query and retry it several times, adding an ORDER BY `column` LIMIT 1024 * 1024 * 1024 clause at the end and checking that the result is identical. Adding the LIMIT will hopefully change the query plan and thus exercise the Optimizer.
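As a rough sketch (table and column names invented), a generated query such as:
SELECT f1, f2 FROM t1 WHERE f2 > 5;
would be retried as something like:
SELECT f1, f2 FROM t1 WHERE f2 > 5 ORDER BY f1 LIMIT 1073741824;
and the two result sets compared.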
The SelectStability Validator takes each generated query and re-executes it without modification several times, with a delay between 0 and 0.1 seconds. Any differences between the result sets caused by some transient anomaly will be reported.
--threads=1 --grammar=conf/subquery_materialization.yy --validators=SelectStability
This test is based on the classic textbook example of having a bank (a database) with individual accounts (table rows). Any movement of funds from one account into another must be atomic and the total sum of all accounts must remain constant over time.
perl runall.pl \
--basedir=/path/to/mysql/distro \
--engine=Innodb \
--grammar=conf/transactions.yy \
--gendata=conf/transactions.zz \
--validators=DatabaseConsistency
This test creates a table containing 10 integer records with a value of 100 each (ZZ file). It then proceeds to execute zero-sum transactions against this table (YY file). The DatabaseConsistency validator checks that the total sum of all integers in the table remains the same throughout the test. If it is not, then transactional consistency is broken and the test aborts.
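For illustration, a zero-sum transaction of the kind the grammar generates looks roughly like this (the table and column names are hypothetical):
START TRANSACTION;
UPDATE accounts SET amount = amount - 10 WHERE account_id = 3;
UPDATE accounts SET amount = amount + 10 WHERE account_id = 7;
COMMIT;
-- DatabaseConsistency effectively checks that SELECT SUM(amount) FROM accounts
-- always returns 10 * 100 = 1000 throughout the test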
This test can be applied against any workload. It uses kill -9 to kill the server 20 seconds before the end of the test, initiates recovery and then attempts to verify the integrity of the database regardless of the actual data and SQL grammar. The following checks are performed:
- whether the server can start up properly and begin accepting connections;
- whether SELECTS on each table using various table access methods (full table scan, index scan, Falcon LIMIT optimization) on all available indexes return the same data;
- whether CHECK/ANALYZE/REPAIR/OPTIMIZE/ALTER ENGINE on each table works. ALTER ENGINE will essentially rebuild the table and fail if there are duplicate primary keys.
--basedir=/path/to/mysql/distro \
--engine=Innodb \
--grammar=conf/example.yy \
--gendata=conf/example.zz \
--reporters=Recovery
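The checks listed above correspond roughly to statements like the following (table, index and engine names are placeholders, not the reporter's actual code):
SELECT * FROM t1;                                   -- full table scan
SELECT * FROM t1 FORCE INDEX (idx1) WHERE pk > 0;   -- index scan, repeated for each available index
CHECK TABLE t1 EXTENDED;
ANALYZE TABLE t1;
REPAIR TABLE t1;
OPTIMIZE TABLE t1;
ALTER TABLE t1 ENGINE = Innodb;                     -- rebuilds the table; fails on duplicate primary keys
The result sets of the SELECTs obtained via the different access paths are then compared against one another.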
This test executes the zero-sum workload from conf/transactions.yy and then kills the server with kill -9, initiates recovery and checks the consistency of the table after recovery.
perl runall.pl \
--basedir=/path/to/mysql/distro \
--engine=Innodb \
--grammar=conf/transactions.yy \
--gendata=conf/transactions.zz \
--reporters=RecoveryConsistency
This test executes a simple INSERT/UPDATE/DELETE/SELECT scenario under REPEATABLE READ transaction-isolation level. The RepeatableRead Validator executes each SELECT several times by adding different predicates that would cause the query plan and the table access mechanism to change without impacting the actual result from the query. The results from the different executions are compared and failure is reported in case they are different, which would mean that the reads in the transaction are not repeatable.
perl runall.pl \
--basedir=/path/to/mysql/distro \
--engine=Falcon \
--mysqld=--loose-falcon-consistent-read=1 \
--gendata=conf/transactions.zz \
--grammar=conf/repeatable_read.yy \
--validators=RepeatableRead \
--queries=100000 \
--threads=4
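As an illustration of the validator's approach (column names invented), a SELECT from the transaction might be re-executed within the same transaction with a rewritten predicate that changes the access path but not the result:
SELECT * FROM t1 WHERE int_key = 5;
-- re-executed as, for example:
SELECT * FROM t1 WHERE int_key + 0 = 5;   -- same rows, but the non-sargable predicate forces a different access path
If REPEATABLE READ holds, all variants must return the same rows.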
Any transactional test based on zero-sum transactions is prone to a situation where entire transactions can be lost and this will not be detected because the database will remain consistent.
A durability test needs to compare the database against a reference source, such as the InnoDB storage engine. The transaction_durability.yy grammar implements a workload that is both (pseudo)concurrent and fully serialized via advisory GET_LOCK()/RELEASE_LOCK().
$ runall.pl \
--basedir=/build/bzr/6.0-falcon-team \
--grammar=conf/transaction_durability.yy \
--vardir1=/tmp/vardir/vardir-falcon \
--vardir2=/tmp/vardir/vardir-innodb \
--mysqld=--default-storage-engine=falcon \
--mysqld2=--default-storage-engine=Innodb \
--validators=ResultsetComparator \
--reporters=Recovery \
--queries=100000
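For illustration, each transaction in such a workload is wrapped in an advisory lock so that, although several threads run at once, only one transaction executes at a time (the lock name and statements below are hypothetical):
SELECT GET_LOCK('txn_serializer', 3600);
START TRANSACTION;
UPDATE t1 SET int_field = int_field + 1 WHERE pk = 42;
COMMIT;
SELECT RELEASE_LOCK('txn_serializer');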
The conf/information_schema.yy grammar generates classical joins involving one or more tables from the INFORMATION_SCHEMA and mysql databases. It does not attempt to validate the accuracy of the data returned, but it can be used as a comparison test, provided that the contents of INFORMATION_SCHEMA (tables, fields and rows) are identical between the versions being compared.
In addition, it also performs various SHOW commands, as well as DDL which causes the contents of INFORMATION_SCHEMA to change.
--grammar=conf/information_schema.yy \
--duration=300 \
--threads=10 \
--log-output=file
--log-output=file is required in order to prevent the mysql.general_log table from growing too large and causing SELECTs against this table to become very slow.
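For illustration, the kind of classical join the grammar produces looks like this (an invented example, not taken from the grammar):
SELECT t.TABLE_NAME, c.COLUMN_NAME, c.DATA_TYPE
FROM INFORMATION_SCHEMA.TABLES AS t
JOIN INFORMATION_SCHEMA.COLUMNS AS c
  ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
WHERE t.TABLE_SCHEMA = 'mysql';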
The conf/performance_schema.yy grammar generates SELECTs involving performance_schema and standard tables. It also runs certain legitimate UPDATEs against the performance_schema SETUP_* tables, as well as SHOW ENGINE PERFORMANCE_SCHEMA STATUS. The background load is provided by standard DDL and DML statements.
The numbers in performance_schema are not repeatable, so this test can not provide any form of functional validation.
--grammar=conf/performance_schema.yy \ --log-output=file \ --mysqld=--enable-performance-schema
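For illustration, the "legitimate UPDATEs" against the SETUP_* tables are statements of this kind (the exact rows touched by the grammar, and table name case, may vary by server version):
UPDATE performance_schema.setup_instruments SET ENABLED = 'NO', TIMED = 'NO' WHERE NAME LIKE 'wait/synch/mutex/%';
UPDATE performance_schema.setup_consumers SET ENABLED = 'YES';
SHOW ENGINE PERFORMANCE_SCHEMA STATUS;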
The backup_simple grammar runs a random workload, making a BACKUP and RESTORE every second. The backups/restores are frequent, but not concurrent. RESTORE is either from the BACKUP taken a second ago or from a BACKUP taken at the start of the test.
The test only checks for crashes, "table corrupted" errors and certain RESTORE failures. It does not attempt to verify the integrity of the data.
--grammar=conf/backup_simple.yy --engine=MyISAM
The conf/WL5004_sql.yy file provides a workload that contains CREATE/ALTER/DROP of various database objects, and various other SQL constructs, which are all important in the context of metadata locking.
--gendata=conf/WL5004_data.zz --grammar=conf/WL5004_sql.yy --rpl_mode={row|mixed|statement} --threads={10|50}
The conf/metadata_stability.yy grammar can be used to test the stability of the metadata, that is, if it is possible to modify the database objects in the middle of the transaction and break transactional consistency. This test fails in 5.1 because MDL is not available there.
The SELECT queries in the grammar are such that they are expected to return consistent results, which is tested using the /* RESULTSET_SAME_DATA_IN_EVERY_ROW */ pragma. Also, the SelectStability Validator will issue the same query repeatedly within the context of the transaction, in order to determine that transaction consistency, that is, repeatable read, is not violated by any concurrent DDL statements.
--grammar=conf/metadata_stability.yy \
--gendata=conf/metadata_stability.zz \
--queries=1M \
--duration=600 \
--engine=Innodb \
--mem \
--validator=SelectStability,QueryProperties \
--mysqld=--transaction-isolation=SERIALIZABLE \
--mysqld=--innodb-flush-log-at-trx-commit=2 \
--mysqld=--innodb \
--mysqld=--default-storage-engine=Innodb \
--mysqld=--table-lock-wait-timeout=1 \
--mysqld=--innodb-lock-wait-timeout=1 \
--mysqld=--log-output=file
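For illustration, a query carrying the pragma looks roughly like this (the tables and columns are hypothetical); the comment tells the validation machinery that every row of the result set is expected to contain the same data, which would no longer hold if concurrent DDL broke consistency:
SELECT /* RESULTSET_SAME_DATA_IN_EVERY_ROW */ a.col_int, b.col_int
FROM tbl_a AS a JOIN tbl_b AS b ON a.pk = b.pk;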
The metadata stability test and metadata stress tests (above) are defined with very low lock wait timeouts in order to increase concurrency. However, with low timeouts we risk overlooking real deadlocks, since a timeout may fire before a deadlock has a chance to develop. This is especially relevant after the lock_wait_timeout was introduced (mysql-next-4284 Feb 19 2010, Bug#45225).
With higher (close to default) lock wait timeouts, we may be able to detect more real deadlocks, but we may also detect more false deadlocks. Having such a test is therefore useful as a complement to the other MDL tests. With "default" timeouts we are also testing the product with settings that are more likely to be used by users in the field.
Duration is set to 1200 (20 mins) as a compromise between "stress" (30 mins) and "pushbuild-regular" (5 mins).
--grammar=conf/runtime/WL5004_sql.yy --threads=10 --queries=1M --duration=1200 --mysqld=--innodb --mysqld=--innodb-lock-wait-timeout=50 --mysqld=--lock-wait-timeout=31536000 --mysqld=--log-output=file
Two grammars are available for replication, replication_simple.yy (simple insert/update/delete queries) and replication.yy (various binary log events, dynamic binlog switching, implicit commit, autocommit, flush, etc.). The replication_single_engine.zz file can be used to create tables with various field types, partitioning and with and without a PK.
--gendata=conf/replication_single_engine.zz --grammar=conf/replication.yy --engine=XXX --rpl_mode={row|mixed|statement}
Using --rpl_mode will automatically activate the ReplicationSlaveStatus Validator, which uses SHOW SLAVE STATUS to monitor the slave. At the end of the test, the master and the slave are dumped and compared.
The ReplicationSemiSync Reporter tests semisynchronous replication by periodically stopping the slave for various periods of time and making sure that the master blocks appropriately and does not commit transactions before the timeout has expired, and starts committing transactions asynchronously once the timeout has expired.
--grammar=conf/replication.yy \
--gendata=conf/replication_innodb_myisam.zz \
--rpl_mode=default \
--mysqld=--plugin-dir=/path/to/mysql/plugin/semisync/.libs \
--mysqld=--plugin-load=rpl_semi_sync_master=libsemisync_master.so:rpl_semi_sync_slave=libsemisync_slave.so \
--mysqld=--rpl_semi_sync_master_enabled=1 \
--mysqld=--rpl_semi_sync_slave_enabled=1 \
--reporters=ReplicationSemiSync,Deadlock,Backtrace,ErrorLog \
--validators= \
--threads=1 \
--duration=300 \
--queries=1M
--rpl_mode=default is used because the .YY file changes the binary log format dynamically. --threads=1 is used to avoid any replication failures due to concurrent DDL.
The --validators= option removes the default replication Validator, which would otherwise report a test failure when the slave I/O thread is stopped; stopping the slave is expected in the context of this particular test.
The conf/partitions-ddl.yy grammar provides various ALTER statements that relate to partitioning and the MyISAM key cache. With slight modification (disabling TRUNCATE PARTITION, etc.), it can be used as a comparison test against a server without partitions (e.g. 5.0).
--grammar=conf/partitions-ddl.yy \
--threads=1 \
--queries=100K
The files partition_pruning.yy and partition_pruning.zz are intended to be a good, basic test of partition pruning. These files are best used for comparing two servers (one with and one without partitioning) to look for data loss / bad query processing.
NOTE: This grammar was intended for testing WL#3352 and requires that LIST COLUMN_LIST and RANGE COLUMN_LIST partitioning be available. You can always edit the gendata file to suit your needs. Testing with up to 100K rows per table has been verified to work.
perl ./runall.pl \
--threads=1 \
--gendata=conf/partition_pruning.zz \
--basedir1=<path>/mysql-5.4 \
--basedir2=<path>/mysql-5.0 \
--validators=ResultsetComparatorSimplify \
--queries=10000 \
--grammar=conf/partition_pruning.yy \
--reporters=Shutdown
For testing against a single server (i.e. no comparison of with vs. without partitioning):
perl ./runall.pl \
--threads=1 \
--gendata=conf/partition_pruning.zz \
--basedir=<path>/mysql-5.4 \
--queries=100000 \
--grammar=conf/partition_pruning.yy \
--reporters=Shutdown
This stress test creates nested stored procedures and SIGNAL/DECLARE/HANDLER/RESIGNAL calls, without trying to validate the functional accuracy of the resulting error messages.
--threads=10 \
--queries=1M \
--duration=300 \
--grammar=conf/signal_resignal.yy \
--mysqld=--max-sp-recursion-depth=10
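For illustration, the kind of construct this grammar stresses looks roughly like the following (procedure names and messages are invented; client DELIMITER handling is omitted):
CREATE PROCEDURE p_inner()
BEGIN
  SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'failure in p_inner';
END;

CREATE PROCEDURE p_outer()
BEGIN
  DECLARE EXIT HANDLER FOR SQLSTATE '45000'
    RESIGNAL SET MESSAGE_TEXT = 'rethrown from p_outer';
  CALL p_inner();
END;

CALL p_outer();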
This test performs CREATE ... SELECT and INSERT ... SELECT statements in combination with DROP/TRUNCATE TABLE. Same as maria_bulk_insert, only with MyISAM as storage engine (--engine=myisam).
$ perl runall.pl \
--grammar=conf/maria_bulk_insert.yy \
--basedir=/path/to/mysql-install-dir \
--engine=myisam \
--queries=100000
This is a generic test that runs SELECT/INSERT/UPDATE/DELETE in even proportion on a set of simple tables. Same as maria_stress, except for using MyISAM as storage engine.
$ perl runall.pl \
--basedir=/path/to/mysql-install-dir \
--engine=myisam \
--grammar=conf/maria_stress.yy \
--queries=100000
Adapted from falcon_tiny_inserts. Inserting lots of BIT values.
$ perl runall.pl \
--basedir=/path/to/mysql-install-dir \
--engine=myisam \
--gendata=conf/falcon_tiny_inserts.zz \
--grammar=conf/falcon_tiny_inserts.yy \
--queries=10000000
This is a generic test that runs SELECT/INSERT/UPDATE/DELETE in even proportion on a set of simple tables.
$ perl runall.pl \
--basedir=/path/to/mysql-6.0-maria \
--engine=Maria \
--grammar=conf/maria_stress.yy \
--queries=100000 \
--duration=1200
This test runs mostly SELECTs and very occasionally INSERT/UPDATE/DELETE. This allows more queries to proceed concurrently.
$ perl runall.pl \
--basedir=/path/to/mysql-6.0-maria \
--engine=Maria \
--gendata=conf/maria.zz \
--grammar=conf/maria_mostly_selects.yy \
--queries=100000 \
--duration=1200
This test runs mostly SELECTs, with some INSERT/UPDATE/DELETE and ALTER ADD/DROP KEY. Alter statements are present in this test and not in the others because each ALTER causes a complete table rebuild, which would mask bugs that take longer to develop.
$ perl runall.pl \
--basedir=/path/to/mysql-6.0-maria \
--engine=Maria \
--gendata=conf/maria.zz \
--grammar=conf/maria_dml_alter.yy \
--queries=100000 \
--duration=1200
maria_mostly_selects.yy and maria_dml_alter.yy are run against data generated by the maria.zz configuration file. It creates both partitioned and non-partitioned tables with ROW_FORMAT FIXED|DYNAMIC|PAGE|COMPRESSED.
This test performs CREATE ... SELECT and INSERT ... SELECT in order to exercise Maria's mechanism for temporarily disabling the index during longer inserts.
$ perl runall.pl \
--basedir=/path/to/mysql-6.0-maria \
--engine=Maria \
--grammar=conf/maria_bulk_insert.yy \
--queries=100000 \
--duration=1200
The following tests are available for Falcon:
The falcon_online_alter.yy grammar tests the Falcon online ALTER capability. It covers the following:
- ALTER and ALTER ONLINE;
- Multi-part keys;
- UNIQUE and PRIMARY keys;
- Queries with and without LIMIT;
$ runall.pl \
--basedir=/path/to/6.0-falcon-team \
--engine=Falcon \
--grammar=conf/falcon_online_alter.yy \
--threads=10 \
--queries=100000
The falcon_chill_thaw.yy test executes a simple INSERT/UPDATE/SELECT/DELETE scenario, meant to be executed under low values of --falcon-index-chill-threshold and --falcon-record-chill-threshold.
$ runall.pl \
--basedir=/path/to/6.0-falcon-team \
--engine=Falcon \
--grammar=conf/falcon_chill_thaw.yy \
--threads=10 \
--queries=100000 \
--mysqld=--falcon-index-chill-threshold=4K \
--mysqld=--falcon-record-chill-threshold=4K
Currently (as of August 10th 2008), this will cause numerous crashes. Once those have been fixed, you can test the integrity of the data itself as follows:
$ perl runall.pl \
--basedir=/path/to/6.0-falcon/ \
--vardir1=/tmp/vardir-withchill \
--vardir2=/tmp/vardir-nochill \
--engine=falcon \
--grammar=conf/chill_thaw.yy \
--threads=1 \
--queries=100000 \
--mysqld1=--falcon_index_chill_threshold=4K \
--mysqld1=--falcon_record_chill_threshold=4K
This will start two servers, one with chill/thaw and one without. It will then proceed to execute queries on both servers and will finally use mysqldump to compare their contents and will report the result.
The falcon_recovery.yy test keeps one transaction open for 30 min while doing various operations that would be reflected in the serial log. Because of the open transaction, the serial log can not be rotated and truncated, which causes it to grow unboundedly. One minute before the test is over, the Recovery reporter kills the server, attempts recovery, and reports on the outcome.
Currently, it is not guaranteed that the test will generate all possible Falcon serial log events. This test uses a RandomDataGenerator configuration file named falcon_recovery.zz to create tables that contain a blob column and no indexes. Indexes are added and removed during the course of the test.
$ perl runall.pl \
--basedir=/path/to/6.0-falcon-team \
--grammar=conf/falcon_recovery.yy \
--gendata=conf/falcon_recovery.zz \
--duration=1800 \
--mysqld=--falcon_lock_wait_timeout=1 \
--engine=Falcon \
--queries=10000000 \
--reporters=Backtrace,Recovery
The falcon_limit.yy test executes a workload that is heavy on LIMIT queries. It also runs some ALTER ADD/DROP KEY.
$ runall.pl \
--basedir=/path/to/6.0-falcon-team \
--engine=Falcon \
--grammar=conf/falcon_limit.yy \
--threads=10 \
--queries=100000 \
--mysqld=--falcon-lock-wait-timeout=1
The falcon_ddl.yy test executes a workload that creates and drops various database objects:
- CREATE / DROP DATABASE ;
- CREATE / DROP TABLESPACE ;
- CREATE / RENAME / TRUNCATE / DROP TABLE ;
- ALTER TABLE ADD / DROP / REORGANIZE / REMOVE PARTITION ;
- DESCRIBE TABLE, SELECT * FROM INFORMATION_SCHEMA.SCHEMATA, SHOW TABLE STATUS ;
- Standard insert/update/delete/select and commit/rollback/savepoint;
$ runall.pl \
--basedir=/path/to/6.0-falcon-team \
--engine=Falcon \
--grammar=conf/falcon_ddl.yy \
--threads=10 \
--queries=10000
falcon_pagesize_2K and falcon_pagesize_32K test the two extremes allowed for --falcon-page-size.
$ runall.pl \
--basedir=/path/to/6.0-falcon-team \
--engine=Falcon \
--gendata=conf/falcon_pagesize2K.zz \
--grammar=conf/falcon_pagesize.yy \
--grammar=conf/falcon_ddl.yy \
--threads=10 \
--queries=10000
The .zz file creates tables with properly-sized columns. The grammar file does SELECT/INSERT/UPDATE/DELETE using oversized field and key sizes.
This test aims to activate and exercise the Falcon Backlog (WL#4155). The .zz file creates a table containing 64K records, each with 8 (var)char fields with an average length of 128 bytes.
The .yy file opens one long-lived REPEATABLE READ transaction which SELECTs the entire table, and nine threads which do transactions containing dozens of single-record UPDATEs.
$ perl runall.pl \
--basedir=/build/bzr/6.0-falcon-team/ \
--grammar=conf/falcon_backlog.yy \
--queries=1000000 \
--duration=1200 \
--gendata=conf/falcon_backlog.zz \
--engine=Falcon \
--mysqld=--transaction-isolation=REPEATABLE-READ \
--mysqld=--falcon_record_memory_max=10M \
--mysqld=--falcon-record-chill-threshold=1K
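For illustration, the access pattern described above amounts to the following (table and column names are placeholders):
-- thread 1: long-lived reader, keeps old record versions pinned
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT * FROM t1;
-- ... the transaction is left open for most of the run ...

-- threads 2-10: short transactions with dozens of single-record updates
START TRANSACTION;
UPDATE t1 SET vc1 = REPEAT('y', 128) WHERE pk = 4711;
UPDATE t1 SET vc2 = REPEAT('z', 128) WHERE pk = 314;
-- ... more single-record UPDATEs ...
COMMIT;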