This repository has been archived by the owner on Mar 30, 2021. It is now read-only.
Simple Jmeter Guide in Spark Testing
Ziming Wang edited this page Jul 14, 2016 · 4 revisions
Though Spark SQL requests share the same sampler format (JDBC Request) with queries against a SQL database, there is one small difference: a Spark SQL request must not end with a semicolon (;). Remove it, or the query won't work. As for the query type in the JDBC Request configuration, the Select Statement type handles most kinds of JDBC requests, even CREATE and SET statements.
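As a quick sketch of the semicolon rule (the helper name below is made up for illustration; it is not part of JMeter or Spark), you can normalize queries before pasting them into the JDBC Request sampler:

```python
def normalize_spark_sql(query: str) -> str:
    """Strip trailing whitespace and any trailing semicolons so the
    query is accepted by Spark SQL through a JMeter JDBC Request."""
    return query.strip().rstrip(";").rstrip()


# A query copied from a SQL client, with the trailing semicolon removed:
print(normalize_spark_sql("SELECT count(*) FROM lineitem;"))
# -> SELECT count(*) FROM lineitem
```

The same normalization is safe for CREATE and SET statements, since it only touches the trailing semicolon.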
In the JDBC Connection Configuration, set "Auto Commit" to True; Spark SQL does not support disabling it.
Here is a reference for solving this issue: https://github.com/prasanthj/jmeter-hiveserver2
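In the saved test plan (.jmx), the Auto Commit setting corresponds to the `autocommit` property of the JDBC Connection Configuration element. The trimmed fragment below is a sketch: the property and element names are from a stock JMeter plan (verify against your JMeter version), and the Hive driver class and URL are placeholders for your own ThriftServer:

```xml
<JDBCDataSource guiclass="TestBeanGUI" testclass="DataSourceElement"
                testname="JDBC Connection Configuration" enabled="true">
  <stringProp name="dataSource">sparkPool</stringProp>
  <stringProp name="dbUrl">jdbc:hive2://your-thriftserver:10000/default</stringProp>
  <stringProp name="driver">org.apache.hive.jdbc.HiveDriver</stringProp>
  <boolProp name="autocommit">true</boolProp>
</JDBCDataSource>
```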
Drag the right border of the GUI to reveal the hidden part of the toolbar. It shows the test duration, the console window button (click it to show or hide the console), and the number of currently running threads (users) out of the total threads (users).
Sometimes you want to execute only part of the thread groups in a test plan. Cut the parts you don't want executed and paste them into the WorkBench. Remember to move them back before exiting JMeter, or you will lose them.
If you execute the test plan several times, you will find that the history from previous runs is not cleared automatically. To clear it, use the clear buttons in the toolbar: one removes the results of the selected element, and the other clears everything.
If the test plan gets stuck and runs for hours (and you haven't set a run duration), check the JMeter logs or the command line: there may be a heap dump, which means you configured more threads than your machine can handle. In that case, even if a result appears at the end, it may reflect your machine's performance rather than your server cluster's.
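If a heap dump is the culprit, one option before reducing the thread count is to give JMeter more heap and run the plan headless. The command below is a sketch: `JVM_ARGS` is honored by the stock `jmeter` startup script and `-n -t -l` are the standard non-GUI flags, but the plan and result file names are placeholders, and the heap sizes should match your machine:

```shell
# Raise the JVM heap and run the plan in non-GUI mode (less overhead than the GUI).
JVM_ARGS="-Xms1g -Xmx4g" ./bin/jmeter -n -t spark-test-plan.jmx -l results.jtl
```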