BOSS Performance Test
===================== Note =====================================
* Don't run multiple test instances at the same time, since they all use
  the same AMQP configuration
===================== Test Method ==============================
* The basic method of this test project is to simulate how BOSS is used
  in the real world - running as a service for a long time and handling
  multiple requests from multiple users continually - and then observe the
  performance data to evaluate it
* Concept "load" and "iteration":
* load: how many requests(workflows) sending to engine at same time
* iteration: one iteration begins at sending specific number of
load workflows to engine, ends at engine finishing all received
workflows
* In each test case, specific load will be sent to engine iteratively
for specific iterations
* The following processes are created during each test case:
  * client: sends "load" workflows to the engine for each iteration and
    waits for the results; communicates with the engine over AMQP
  * engine: handles the workflows sent by the client
  * participants: participant processes; communicate with the engine over AMQP
  * atop: runs "atop" to record CPU/memory/disk data during testing
  * run.rb: launches and manages the above processes
* The test suite is made configurable with the help of two kinds of config
  files:
  * "global.config": contains detailed settings such as AMQP, participants,
    storage... See the following sections for when/how to modify it
  * "*.config" files under "test_cases": test cases are described in these
    config files. Normally one config file is a test suite with multiple
    test cases. See the following sections for when/how to modify it
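  As a rough illustration of how "load" and "iteration" appear in a test
  case config, a hypothetical entry might look like the lines below. The
  parameter names come from this README, but the exact syntax is defined
  by "test_cases/test_suite.template", so treat this as a sketch only:

    # hypothetical test case entry -- check test_suite.template for the
    # authoritative parameter list and syntax
    load      = 10    # 10 workflows are sent to the engine concurrently
    iteration = 5     # ...repeated 5 times, i.e. 50 workflows in total
    storage   = ...   # storage backend used by the engine (see template)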
===================== Code Structure ==========================
This project is structured as follows:
.
|-- scripts: directory containing the scripts used internally
|   |-- analyze_load.sh
|   |-- client.py
|   |-- engine.rb
|   |-- global.config
|   |-- participant_launcher.py
|   |-- participants: participant definitions
|   |   |-- error_handler.rb: default local participant
|   |   |-- resizer.py: default remote participant
|   |   |-- sizer.py: default remote participant
|   |-- persist_logger.rb
|   |-- run.rb
|   |-- workflow: workflow definitions
|   |   |-- workflow_simple.config: default workflow
|   |-- workman.rb
|-- test_cases: directory containing test case config files
|   |-- test_suite_0.config: example test case config file
|   |-- test_suite.template: test case config template
|   |-- workflow_simple.config: a simple workflow for testing
|-- test_spec.rb: entry point for RSpec
|-- spec_helper.rb: helper script for RSpec
|-- case_spec.rb
===================== How to Run ================================
1. Change to the test directory
2. Specify your test suite config file in "spec_helper.rb" (refer to the
   "How to add new test case" section below)
3. Issue "spec test_spec.rb" (see the example session below)
4. After the run finishes, check the results in your home directory
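  A typical session might look like the following (the test directory path
  and the way spec_helper.rb is edited are placeholders; only the "spec"
  command and the default results location come from this README):

    cd <test directory>              # step 1
    $EDITOR spec_helper.rb           # step 2: point it at your "*.config" file
    spec test_spec.rb                # step 3: run the suite with RSpec
    ls ~/boss_performance_results    # step 4: default results location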
===================== Test Results ==============================
* Test results are located in your home directory by default;
  you can change this by modifying "spec_helper.rb"
* Test results are structured as follows:
  ~/boss_performance_results
  |-- <case ID>: directory containing the results for one test case
  |   |-- xterm_atop.log: xterm log for the atop process
  |   |-- xterm_client.log: xterm log for the client process
  |   |-- xterm_engine.log: xterm log for the engine process
  |   |-- xterm_sizer.log: xterm log for a participant process
  |   |-- xterm_resizer.log: xterm log for a participant process
  |   |-- cpu.load: CPU load data for the engine process
  |   |-- mem.load: memory load data for the engine process
  |   |-- dsk.load: disk load data for the engine process
  |   |-- atop.raw: atop raw data (kept for reference)
  |   |-- storage: storage raw data (kept for reference)
* What you can get (see the example below):
  * CPU/memory/disk load data of the engine from the "*.load" files
  * iteration start/end times, iteration durations and rates from the
    "xterm_engine.log" file
  * useful info from the other files for debugging purposes
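  For example, a quick look at the data for one test case ("<case ID>" is a
  placeholder for the actual case directory; the file names are the ones
  listed above):

    cat  ~/boss_performance_results/<case ID>/cpu.load          # engine CPU load samples
    less ~/boss_performance_results/<case ID>/xterm_engine.log  # iteration times and rates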
===================== How to add new test case ==================
* A test case is described in a config file located in the "test_cases"
  directory; refer to "test_suite_0.config" as an example
* A test case config has parameters such as "load", "iteration",
  "storage"... Check "test_suite.template" for the details of each parameter
* You can modify existing config files to add your test cases, or add a new
  config file and modify "spec_helper.rb" to point to your test config file
  (files following the "*.config" naming are supported); see the sketch
  after this list
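  The snippet below is only a sketch of what pointing "spec_helper.rb" at a
  new suite might look like; the variable name "TEST_SUITE_CONFIG" is an
  assumption, so check the existing "spec_helper.rb" for the real setting:

    # spec_helper.rb (hypothetical excerpt)
    # TEST_SUITE_CONFIG is an assumed name -- use whatever setting the
    # real spec_helper.rb defines for the suite config path
    TEST_SUITE_CONFIG = File.join(File.dirname(__FILE__),
                                  'test_cases', 'my_suite.config')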
===================== How to add new participants ================
1. One participant is one file containing either a local participant (Ruby
   file) or a remote participant (Python file); refer to the files under
   "scripts/participants/" when creating your participant (a sketch of a
   local participant follows below)
2. Update "scripts/global.config" to add your participant's details
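  For a local (Ruby) participant, a minimal sketch might look like the code
  below. It assumes the engine is Ruote-based and uses Ruote's standard
  local participant API; the class and field names are made up, and the
  registration details still go into "scripts/global.config" as described
  above:

    # my_participant.rb -- hypothetical local participant sketch
    require 'ruote'

    class MyParticipant
      include Ruote::LocalParticipant

      # called by the engine with the current workitem
      def consume(workitem)
        workitem.fields['my_result'] = 'done'   # do the actual work here
        reply_to_engine(workitem)               # hand the workitem back
      end

      # called if the workflow containing this participant is cancelled
      def cancel(fei, flavour)
        # nothing to clean up in this sketch
      end
    end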
===================== How to add new workflow =====================
1. One workflow is one config file; refer to "scripts/workflows/workflow_simple.config"
   when creating a new workflow
2. To use your new workflow, just specify the workflow config file name in
   your test case config file