Merge pull request #38 from ditrit/dev-3dtools
Dev 3dtools
Novanef authored Sep 15, 2023
2 parents 408b316 + 4513306 commit 022ca0a
Showing 31 changed files with 1,877 additions and 0 deletions.
167 changes: 167 additions & 0 deletions 3dtools/README.md
@@ -0,0 +1,167 @@
# COMPOS user guide

This is a user guide for the COMPOS API, a program that locates the components on a server board and outputs their coordinates in mm. From these results, a model of the 2D face is generated and applied to the front and rear faces of the 3D model.
## Components supported
| Component | Detection method |
| ------ | ------ |
| idrac | template matching |
| usb | rectangle detection |
| d-sub female / vga | image feature points |
| d-sub male / rs232 | image feature points |
| slot normal | YOLOv5 object detection |
| slot lp | YOLOv5 object detection |
| disk lff | YOLOv5 object detection |
| disk sff | YOLOv5 object detection |
| power supply unit | YOLOv5 object detection |
## Requirements
YOLOv5 needs to be cloned from the [YOLOv5 official page](https://github.com/ultralytics/yolov5#tutorials) into the `3dtools` directory.
Package information is listed in [requirements.txt](requirements.txt).

## Setup

```sh
git clone https://github.com/ditrit/OGrEE-Tools.git
cd OGrEE-Tools/3dtools
pip install -r ./requirements.txt
# YOLOv5 installation
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r ./requirements.txt
```

_setup script coming soon_

## Introduction
### main.py
This script drives all of the functions and is also the user interface.
Here is a minimal tutorial example.
```sh
cd <COMPOS path>
python main.py --servername image/serveur/dell-poweredge-r720xd.rear.png --height 86.8 --length 482.4
```
Some basic parameters must be provided when starting the program.
> --servername : string, file name of the server image.
--height : float, vertical dimension of the server face (mm).
--length : float, horizontal dimension of the server face (mm).
--face : string, 'front' or 'rear'.
>
The user can also simply run *main.py* from a Python console; in that case the default server "dell-poweredge-r720xd.rear.png" is used.
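
For orientation, here is a minimal `argparse` sketch of how these basic parameters could be declared. The flag names and example defaults follow the list above; the actual argument handling in *main.py* may differ.

```python
# Hedged sketch of the basic COMPOS arguments; not the actual main.py code.
import argparse

parser = argparse.ArgumentParser(description="COMPOS component locator")
parser.add_argument("--servername", type=str,
                    default="image/serveur/dell-poweredge-r720xd.rear.png",
                    help="file name of the server image")
parser.add_argument("--height", type=float, default=86.8,
                    help="vertical dimension of the server face (mm)")
parser.add_argument("--length", type=float, default=482.4,
                    help="horizontal dimension of the server face (mm)")
parser.add_argument("--face", type=str, choices=["front", "rear"], default="rear",
                    help="which face of the server the image shows")

args = parser.parse_args()
print(args.servername, args.height, args.length, args.face)
```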

There are other optional parameters the user can set to control the algorithm's behaviour.
> # YOLOv5 hyperparameters
--weights, type: str, model path or Triton URL; do not use it unless you want to change the YOLOv5 model.
--conf-thres, type: float, default=0.5, confidence threshold; YOLOv5 filters out results below it.
--iou-thres, type: float, default=0.45, NMS IoU threshold.
--device, cuda device, i.e. 0 or 0,1,2,3, or cpu.
--view-img, if provided, show results on screen.
--save-txt, if provided, save results to *.txt.
--save-conf, if provided, save confidences in --save-txt labels.
--save-crop, if provided, save cropped prediction boxes.
--nosave, if provided, do not save images/videos.
--augment, if provided, use augmented inference.
--visualize, if provided, visualize features.
--project, default=ROOT / 'detect', save results to project/name.
--name, default='exp', save results to project/name.
--exist-ok, if provided, an existing project/name is OK, do not increment (exp1, exp2, ...).
--line-thickness, type: int, default=1, bounding box thickness (pixels).
--hide-labels, default=False, if provided, hide labels.
--hide-conf, default=False, if provided, hide confidences.
>
After all the classifier parameters are set (in the future this step could be replaced by reading them from a database automatically), the program creates a classifier object named ogree and then runs each requested classifier. Use ogree.clxxxx to detect component xxxx; no further input is needed at this step.
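
A minimal, self-contained sketch of this flow is shown below. The class name *Classifiers* and the method names follow the text of this guide, but the bodies are placeholders, not the real COMPOS implementation.

```python
# Toy sketch of the ogree classifier flow; method bodies are placeholders.
class Classifiers:
    def __init__(self, servername, height, length, face):
        self.servername = servername
        self.height = height        # vertical dimension, mm
        self.length = length        # horizontal dimension, mm
        self.face = face
        self.components = []        # detected components accumulate here

    def clidrac(self):
        # real version: template matching against the standard idrac image
        self.components.append(("idrac", 214.0, 11.0, 0.63))

    def clvgars232(self):
        # real version: CENSURE feature points for vga / rs232
        self.components.append(("vga", 92.0, 9.0, 0.77))


ogree = Classifiers("image/serveur/dell-poweredge-r720xd.rear.png",
                    height=86.8, length=482.4, face="rear")
ogree.clidrac()
ogree.clvgars232()
print(ogree.components)
```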

### Steps
#### 1. Use the following commands to start the program.
```sh
cd <COMPOS path>
python main.py --servername image/serveur/dell-poweredge-r720xd.rear.png --height 86.8 --length 482.4 --face rear
```
#### 2. Enter a command
The following prompt is shown:
```sh
class list: {'d-sub female': '11', 'd-sub male': '12', 'idrac': '13', 'usb': '14', 'all': '15'}
or enter the name 'slot', 'disk', 'source'(without '')
Please input one by one. Enter 'finish' to output the json
----Enter component name or code:
```
The user is asked to enter the component name or code. Here is an example:
```sh
d-sub female
```
or
```sh
11
```
The result is printed in the window in the form "xxx in [x, y, angle, similarity]":
```sh
start detecting d-sub female
0° searching progress: 100%: ▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋

90° searching progress: 100%: ▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋▋

vga in : [[640.0, 860.0, 0.0, 0.7698690075499673]]
```
**Attention:** the input code is of type *string*. If this step is later turned into an interface called by another program, the command passed in must also be a *string*, not an int.
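
A tiny illustration of the point, using the class list printed by the program:

```python
# The codes in the class list are stored as strings, so an int never matches.
menu = {'d-sub female': '11', 'd-sub male': '12', 'idrac': '13', 'usb': '14', 'all': '15'}

print("11" in menu.values())   # True: "11" is one of the string codes
print(11 in menu.values())     # False: the integer 11 does not match any code
```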
#### 3. Finish and output
After detecting all the components, type `finish` to start the output processing.
```sh
finish
```
The program generates the JSON file and saves it under the folder *api* as the server name + '.json'. It will show:
```sh
{(16, 92): ('vga', 0.0, 0.8375591957768449), (16, 59): ('rs232', 0, 0.8667459425301842), ...}
dell-poweredge-r720xd json file in "/api/"
```
The JSON file is written in the following form:
```json
[{"location": "vga0", "type": "", "elemOrient": "horizontal", "elemPos": [92.0, 0, 9.0], "elemSize": [16.0, 11.0, 8.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.7698690075499673}},{...},...]
```
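
A hedged sketch of how one such entry could be built and written with the standard json module is shown below; the field values are taken from the example above, and the output file name is an assumption (the real program saves the file under the api folder).

```python
# Hedged sketch: serialising one detected component in the JSON shape shown above.
import json

entry = {
    "location": "vga0",
    "type": "",
    "elemOrient": "horizontal",
    "elemPos": [92.0, 0, 9.0],        # position in mm
    "elemSize": [16.0, 11.0, 8.0],    # size in mm
    "labelPos": "rear",
    "color": "",
    "attributes": {"factor": "", "similarity": 0.7698690075499673},
}

# Assumed output path; the real file name is the server name + '.json' under api/.
with open("dell-poweredge-r720xd.rear.json", "w") as f:
    json.dump([entry], f)
```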
## Results of the test example
### JSON file
The JSON file of the test example dell-poweredge-r720xd.rear should be the same as [test.json](api/test.json) when launching the commands 11 to 15.
### 3D model
Original photo
![image](image/serveur/dell-poweredge-r720xd.rear.png)
3D model
![image](image/serveur/test1.png)
![image](image/serveur/test2.png)
# Description of standard server (components):
## Standard image/components
We use the model ibm-x3690x5.rear as the standard server. This image gives a ratio of 9.14 pixels/mm. The standard components idrac, rs232 and vga are cropped from it and saved under /image/standard/. Another idrac template is cropped from *cisco-c240-m6-lff.rear.png* and is used only for Cisco models.

The standard usb component is also cropped from *ibm-x3690x5.rear.png* and saved under /image/standard/.
Explanation:
In std_vga and std_rs232, IBM draws larger holes than other manufacturers in the picture.
The idrac interface on Cisco has a different shape than on other manufacturers: the pins are shorter.


## Classifiers in *Classifier* class:
### clidrac:
Based on template matching: the program slides the standard idrac image over the server image and computes a similarity score at each position. A local_peak function is then applied to find the possible positions that pass the threshold (set to 0.45).
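
A hedged scikit-image sketch of this kind of pipeline is shown below; the template file name is an assumption, and COMPOS may use its own peak-finding code rather than peak_local_max.

```python
# Hedged sketch: template matching plus local peaks above a 0.45 threshold.
from skimage.io import imread
from skimage.feature import match_template, peak_local_max

image = imread("image/serveur/dell-poweredge-r720xd.rear.png", as_gray=True)
template = imread("image/standard/idrac.png", as_gray=True)   # assumed file name

score = match_template(image, template)             # normalised cross-correlation map
peaks = peak_local_max(score, threshold_abs=0.45)   # local maxima above the threshold

for r, c in peaks:
    print("idrac candidate at (row, col)", (r, c), "similarity", float(score[r, c]))
```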

### clvgars232:
This classifier finds vga and rs232 at the same time, because they share the same shape.
The method applied is the CENSURE algorithm, which extracts image feature points, here the pin positions.
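
A hedged sketch of the CENSURE step with scikit-image is shown below; matching the detected keypoints against the standard vga/rs232 templates is COMPOS-specific and not shown.

```python
# Hedged sketch: CENSURE keypoint detection on a grayscale server image.
from skimage.io import imread
from skimage.feature import CENSURE

patch = imread("image/serveur/dell-poweredge-r720xd.rear.png", as_gray=True)

detector = CENSURE()          # default DoB mode
detector.detect(patch)

print(detector.keypoints)     # (row, col) of detected features, e.g. pin positions
print(detector.scales)        # scale at which each keypoint was found
```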

### clusb:
This classifier is designed to find the power block in the image. We use the same template matching method to locate the C14 interface, which gives a point inside the block. A power block usually has a rectangular edge of a standard unit dimension, so we use a straight-line detection function to find candidate rectangles of that dimension. Finally, an algorithm based on vector geometry decides whether the point lies inside the rectangle, i.e. whether this is the power block we are searching for.
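
A small self-contained sketch of that final geometric test is shown below (cross products: the point is inside when it lies on the same side of all four edges); the real COMPOS function may be implemented differently.

```python
# Hedged sketch of the point-in-rectangle test based on vector geometry.
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_rectangle(p, corners):
    """corners must be given in order (clockwise or counter-clockwise)."""
    signs = [cross(corners[i], corners[(i + 1) % 4], p) for i in range(4)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# Example: a C14 point found by template matching, tested against a candidate block
print(point_in_rectangle((50, 30), [(0, 0), (100, 0), (100, 60), (0, 60)]))   # True
print(point_in_rectangle((150, 30), [(0, 0), (100, 0), (100, 60), (0, 60)]))  # False
```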

## External method: YOLOv5
The YOLOv5 API file is */api/yoloapi.py*. It is similar to */yolov5/detect.py*, but simpler and with different coordinate handling; some unnecessary parameters and code have been removed. The user can detect all the slots, disks and PSUs with the command *all*, or detect just one or two of them (a sketch of this command mapping follows the table below).
| Command | Task type(s) |
| ------ | ------ |
| all | slot normal, slot lp, disk sff, disk lff, psu |
| disk | disk lff, disk sff |
| slot | slot normal, slot lp |
| disk_sff | disk sff |
| disk_lff | disk lff |
| slot_normal | slot normal |
| slot_lp | slot lp |
| source | power supply unit |
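
A hedged sketch of how these commands could map onto the five YOLOv5 class names from *YOLOVcfg/serveur122.yaml* is shown below; the real dispatch lives in *main.py* / *api/yoloapi.py* and may be structured differently.

```python
# Hedged sketch: mapping user commands to the YOLOv5 class names.
YOLO_TASKS = {
    "all":         ["slot_normal", "slot_lp", "disk_sff", "disk_lff", "PSU"],
    "disk":        ["disk_lff", "disk_sff"],
    "slot":        ["slot_normal", "slot_lp"],
    "disk_sff":    ["disk_sff"],
    "disk_lff":    ["disk_lff"],
    "slot_normal": ["slot_normal"],
    "slot_lp":     ["slot_lp"],
    "source":      ["PSU"],
}

print(YOLO_TASKS["disk"])   # ['disk_lff', 'disk_sff']
```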


The detection results of YOLOv5 are also saved under /detect/exp x/, where the user can check the output. Remember to clean this folder regularly so it does not take up too much space.
#### Notes:
- The unit dimension of the power supply unit differs between manufacturers. A database needs to be created to provide this information.
- An Excel file about the shapes of the servers is available at */image/name_list.xlsx*, but not every entry is guaranteed to be correct.
- A very common error is that the x and y axes are inverted. This is because different libraries do not use the same index order: some use (vertical down, horizontal right), others (horizontal right, vertical down). For further development, check the axis order whenever a classifier returns a wrong position with high similarity or a component position falls outside the picture (see the sketch after this list). For users, the present version can be trusted; these bugs have been fixed.
- In the COMPOS code, the origin is the upper-left corner of the picture. In the JSON file, the 3D origin is the lower-right back corner of the brick.
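
Below is a hedged illustration of the axis and unit conventions described above; the 9.14 px/mm ratio is the value quoted for the standard ibm-x3690x5 image, and the conversion itself is a sketch, not the actual COMPOS code.

```python
# Image libraries index pixels as (row, col) = (vertical down, horizontal right);
# this sketch converts that into (x, y) in mm with the origin at the upper-left corner.
PX_PER_MM = 9.14   # ratio quoted for the standard ibm-x3690x5 image

def pixel_to_mm(row, col, px_per_mm=PX_PER_MM):
    x_mm = col / px_per_mm   # horizontal distance from the left edge
    y_mm = row / px_per_mm   # vertical distance from the top edge
    return x_mm, y_mm

print(pixel_to_mm(860, 640))   # roughly (70.0, 94.1) mm
```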
26 changes: 26 additions & 0 deletions 3dtools/YOLOVcfg/serveur122.yaml
@@ -0,0 +1,26 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license
# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics
# Example usage: python train.py --data coco128.yaml
# parent
# ├── yolov5
# └── datasets
# └── coco128 ← downloads here (7 MB)


# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: D:/Work/OGREE/image/YOLO_serveur/train/ # dataset root dir
train: D:/Work/OGREE/image/YOLO_serveur/train/images
val: D:/Work/OGREE/image/YOLO_serveur/train/images
test: # test images (optional)

# Classes
names:
0: slot_normal
1: slot_lp
2: disk_lff
3: disk_sff
4: PSU


# Download script/URL (optional)
download: https://ultralytics.com/assets/coco128.zip
48 changes: 48 additions & 0 deletions 3dtools/YOLOVcfg/serveur_yolov5s.yaml
@@ -0,0 +1,48 @@
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license

# Parameters
nc: 5 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32

# YOLOv5 v6.0 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]

# YOLOv5 v6.0 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13

[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)

[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)

[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)

[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
4 changes: 4 additions & 0 deletions 3dtools/YOLOVcfg/yolov5_guid.txt
@@ -0,0 +1,4 @@
This file explains how to configure the YOLOv5 model.
Step 1: copy "serveur122.yaml" and paste it under /yolov5/data. This is the configuration of our dataset.
Step 2: copy "serveur_yolov5s.yaml" and paste it under /yolov5/models. This is the configuration of the yolov5s model for the OGrEE case.
Then the YOLOv5 model can be used in our code.
Binary file added 3dtools/api/best.pt
1 change: 1 addition & 0 deletions 3dtools/api/test.json
@@ -0,0 +1 @@
[{"location": "vga0", "type": "", "elemOrient": "horizontal", "elemPos": [92.0, 0, 8.0], "elemSize": [16.0, 11.0, 8.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.8375591957768449}}, {"location": "rs2321", "type": "", "elemOrient": "horizontal", "elemPos": [59.0, 0, 8.0], "elemSize": [16.0, 11.0, 8.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.8667459425301842}}, {"location": "idrac2", "type": "", "elemOrient": "horizontal", "elemPos": [214.0, 0, 11.0], "elemSize": [14.0, 11.0, 11.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.6340864398834318}}, {"location": "idrac3", "type": "", "elemOrient": "horizontal", "elemPos": [192.0, 0, 11.0], "elemSize": [14.0, 11.0, 11.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.6216763517298072}}, {"location": "idrac4", "type": "", "elemOrient": "horizontal", "elemPos": [35.0, 0, 7.0], "elemSize": [14.0, 11.0, 11.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.6163130799679702}}, {"location": "idrac5", "type": "", "elemOrient": "horizontal", "elemPos": [148.0, 0, 11.0], "elemSize": [14.0, 11.0, 11.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.6053733629620092}}, {"location": "idrac6", "type": "", "elemOrient": "horizontal", "elemPos": [170.0, 0, 11.0], "elemSize": [14.0, 11.0, 11.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.5934850590318013}}, {"location": "slot_lp7", "type": "", "elemOrient": "horizontal", "elemPos": [20.0, 0, 20.0], "elemSize": [65.0, 175.0, 18.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.9166945219039917}}, {"location": "PSU8", "type": "", "elemOrient": "horizontal", "elemPos": [344.0, 0, 1.0], "elemSize": [90.0, 100.0, 40.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.9154388308525085}}, {"location": "slot_lp9", "type": "", "elemOrient": "horizontal", "elemPos": [20.0, 0, 61.0], "elemSize": [65.0, 175.0, 18.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.910266637802124}}, {"location": "PSU10", "type": "", "elemOrient": "horizontal", "elemPos": [253.0, 0, 1.0], "elemSize": [90.0, 100.0, 40.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.9088661670684814}}, {"location": "slot_lp11", "type": "", "elemOrient": "horizontal", "elemPos": [20.0, 0, 41.0], "elemSize": [65.0, 175.0, 18.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.9041398763656616}}, {"location": "disk_sff12", "type": "", "elemOrient": "horizontal", "elemPos": [337.0, 0, 52.0], "elemSize": [70.0, 101.0, 10.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.8937407732009888}}, {"location": "slot_normal13", "type": "", "elemOrient": "horizontal", "elemPos": [110.0, 0, 63.0], "elemSize": [107.0, 312.0, 18.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.8629162311553955}}, {"location": "slot_normal14", "type": "", "elemOrient": "horizontal", "elemPos": [248.0, 0, 63.0], "elemSize": [107.0, 312.0, 18.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.8617583513259888}}, {"location": "slot_normal15", "type": "", "elemOrient": "horizontal", "elemPos": [110.0, 0, 41.0], "elemSize": [107.0, 312.0, 18.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", 
"similarity": 0.8525636196136475}}, {"location": "disk_sff16", "type": "", "elemOrient": "horizontal", "elemPos": [249.0, 0, 52.0], "elemSize": [70.0, 101.0, 10.0], "labelPos": "rear", "color": "", "attributes": {"factor": "", "similarity": 0.8052607774734497}}]