In the paper "Automatic Fall Detection in Combat Soldiers: A Machine Learning Based Approach", the author describes the collection of data from soldiers performing three types of activities: daily activities, military activities involving handling a rifle, and falling activities. The dataset is available at: https://zenodo.org/records/12760391.
Instructions to Replicate the Experiments
1. Structuring the Repository
1.1. Clone this repository: git clone https://github.com/AIRGOLAB-CEFET-RJ/fall-detect
1.2. Download the dataset and extract it into the main directory of the repository.
1.3. Install the necessary libraries using the command in the main directory of the repository: pip install -r requirements.txt
2. Create the Data Files to Train the Neural Network
2.1. Run the training_data_generator.py script found in the main directory of the repository, specifying the position of the device for which you want to generate the data (chest, left, or right). For example, for data collected on the chest: python training_data_generator.py chest
- After execution, the files will be saved in subdirectories of the labels_and_data folder.
3. Execute Bayesian Optimization and Neural Network Training
To better understand the training of the neural networks, here is a brief explanation of the chosen training scenarios and the labeling approaches used:
3.1. Scenarios:
- Scenario 1: Training using the magnitude of linear acceleration and the magnitude of angular acceleration individually: In this scenario, the neural networks are trained separately using only the magnitude of linear acceleration and the magnitude of angular acceleration. The trained neural networks will be named as follows: Sc.1-CNN1D-acc and Sc.1-CNN1D-gyr for the CNN1D networks trained with the linear and angular acceleration data, respectively; and Sc.1-MLP-acc and Sc.1-MLP-gyr for the MLP networks trained with the linear and angular acceleration data, respectively.
- Scenario 2: Training using the x, y, and z components of linear acceleration and angular acceleration individually: The neural networks are trained separately, each using the three components (x, y, and z) of linear acceleration and angular acceleration. The trained neural networks will be named as follows: Sc.2-CNN1D-acc and Sc.2-CNN1D-gyr for the CNN1D networks trained with the linear and angular acceleration data, respectively; and Sc.2-MLP-acc and Sc.2-MLP-gyr for the MLP networks trained with the linear and angular acceleration data, respectively.
- Scenario 3: Training using the magnitude of linear acceleration combined with the magnitude of angular acceleration: The neural networks are trained with the magnitude of linear acceleration and the magnitude of angular acceleration simultaneously. The trained neural networks will be named as follows: Sc.3-CNN1D for the CNN1D networks and Sc.3-MLP for the MLP networks.
- Scenario 4: Training using the x, y, and z components of linear acceleration combined with the x, y, and z components of angular acceleration: The neural networks are trained with the x, y, and z components of linear acceleration and the x, y, and z components of angular acceleration together. The trained neural networks will be named as follows: Sc.4-CNN1D for the CNN1D networks and Sc.4-MLP for the MLP networks.
- The acronyms of the neural networks for each scenario are accompanied by a "T" for the time domain or an "F" for the frequency domain. For example: Sc.1-CNN1D-acc-T for the CNN1D network trained with the linear acceleration data of scenario 1 in the time domain and Sc.1-CNN1D-acc-F for the same network trained in the frequency domain.
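The magnitude signals used in Scenarios 1 and 3, and the time-domain ("T") versus frequency-domain ("F") variants, can be sketched as follows. This is a minimal illustration using NumPy; the helper names are ours, not identifiers from the repository's code:

```python
import numpy as np

def magnitude(x, y, z):
    # Combine the x, y, and z axis components into a single magnitude
    # signal (the input used in Scenarios 1 and 3).
    x, y, z = np.asarray(x), np.asarray(y), np.asarray(z)
    return np.sqrt(x**2 + y**2 + z**2)

def to_frequency_domain(signal):
    # Magnitude spectrum of a real-valued window, as one possible way to
    # obtain the frequency-domain ("F") representation of a signal.
    return np.abs(np.fft.rfft(signal))
```

For a window of N samples, np.fft.rfft returns N//2 + 1 frequency bins, so the "F" variants of the networks see inputs of a different length than the "T" variants.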
3.2. Labels:
- binary_class_label_1: The approach where all activities of daily living and military operations receive label 1 and falls receive label 0. In addition, activities OM6 to OM8, relating to the transition to the prone shooting position, are considered fall activities and receive label 0.
- binary_class_label_2: The approach where all activities of daily living and military operations receive label 1 and falls receive label 0. However, activities OM6 to OM8, relating to the transition to the prone shooting position, are not considered fall activities and receive label 1.
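The difference between the two labeling approaches can be sketched as below. The activity codes other than OM6 to OM8 are hypothetical placeholders, not the dataset's actual codes:

```python
# Hypothetical code sets for illustration; the dataset's actual activity
# codes may differ (only OM6-OM8 are taken from the description above).
FALL_CODES = {"F01", "F02"}
PRONE_TRANSITION = {"OM6", "OM7", "OM8"}  # transition to prone shooting position

def binary_class_label_1(code):
    # Falls AND prone-transition activities receive label 0.
    return 0 if code in FALL_CODES or code in PRONE_TRANSITION else 1

def binary_class_label_2(code):
    # Only falls receive label 0; prone transitions count as normal activity.
    return 0 if code in FALL_CODES else 1
```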
3.3. Execution of the run_of_the_neural_network_model.py file
Run the file in the main directory of the repository with the following parameters:
- scenario: Sc1_acc_T, Sc1_gyr_T, Sc1_acc_F, Sc1_gyr_F, Sc_2_acc_T, Sc_2_gyr_T, Sc_2_acc_F, Sc_2_gyr_F, Sc_3_T, Sc_3_F, Sc_4_T, Sc_4_F
- position: left, chest, right
- label_type: binary_one, binary_two
- neural_network_type: CNN1D, MLP
Use the following example as a reference:
python run_of_the_neural_network_model.py --scenario Sc1_acc_T --position left --label_type binary_one --neural_network_type CNN1D
- The script performs Bayesian optimization over 100 trials and, after finding the parameter configuration that yields the best MCC (Matthews correlation coefficient), uses that configuration to run 20 training sessions. The results are saved under the output directory, in a subdirectory named after the neural network type, which contains a subdirectory named after the sensor position, which in turn contains a subdirectory named after the executed scenario.
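The nesting of the results directories described above can be sketched as follows (a minimal illustration; the helper function is ours, not part of the repository):

```python
import os

def results_dir(neural_network_type, position, scenario, base="output"):
    # output/<neural_network_type>/<position>/<scenario>, following the
    # directory nesting described above.
    return os.path.join(base, neural_network_type, position, scenario)

print(results_dir("CNN1D", "left", "Sc1_acc_T"))
# output/CNN1D/left/Sc1_acc_T on Linux/macOS
```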