A repository to host the open-source projects of the Autonomous Driving research group.
Community Creator: Mayur Waghchoure and friends
- All members are encouraged to contribute equally to the projects.
- All members are encouraged to come forward and share their knowledge and materials with everyone in the group, in the spirit of "teach by learning".
- Feel free to ask questions whenever you are stuck, so the project keeps progressing smoothly.
- Experienced (intermediate-level) members are urged to spare some time to answer these queries.
- Follow the agreed format so that everyone stays at the same pace.
- All members are requested to participate actively in the discussions.
Important:
Any member found to be frequently missing meetings or not contributing to the project may lose the opportunity to work on it.
- Literature review
- Deciding on the final architecture: to be finalized on 4th May (Saturday), before the meeting with Mayur on 5th May
- Prepare a small PPT summarising the progress
- Methodology and implementation: 70% completion
- Complete implementation and results, with the final presentation
A detailed map of all the elements the team needs to accomplish to reach the project's goals:
| Phase / Task | Timeline | Status |
| --- | --- | --- |
| Team Formation | 14/04/2024 | Done |
| Git fork and clone repository | before 05/05/24 | In Progress |
| Reading Research Papers | Week 2: 22/04/24 - 26/04/24 | In Progress |
| Literature Review from Papers | Week 3: 29/04/24 - 03/05/24 | |
| Developing Abstract and Architecture | Week 3: 29/04/24 - 03/05/24 | |
| Preparing PPTs | Week 3: 29/04/24 - 03/05/24 | |
| Execute and complete tasks | 06/05/24 - 31/05/24 | |
| Towards project closure | 03/06/24 - 25/06/24 | |
I am hoping all our projects will fit together like a jigsaw puzzle into autonomous-driving-capable software:
First, visual/sensor fusion + semantic BEV + detection + tracking + motion prediction to get the current tracks and future trajectories.
Then semantic SLAM, 3D mapping, and localization (or the point cloud project) to localize the vehicle and obtain the map.
Path planning would then jump in, followed by controls:
DBW, motor controls, and simulation-based testing.
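As a rough illustration of that data flow, here is a minimal Python sketch of one tick of the stack. Every name in it is a hypothetical placeholder for the corresponding project, not an implemented module.

```python
# Hypothetical per-cycle data flow between the project pieces; none of
# these modules exist yet, the names only mirror the plan above.
from dataclasses import dataclass

@dataclass
class Track:
    """Output of detection + tracking + motion prediction."""
    id: int
    position: tuple   # (x, y) in the map frame
    future_xy: list   # predicted trajectory points

def autonomy_step(sensors, modules):
    """One tick of the stack; `modules` holds the project implementations."""
    tracks = modules.perception(sensors)                   # fusion + BEV + detection + prediction
    pose, world_map = modules.localization(sensors)        # SLAM / mapping / localization
    trajectory = modules.planner(pose, world_map, tracks)  # global / behavior / local planning
    return modules.controller(trajectory, pose)            # controls -> DBW commands
```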
Fuse the data from multiple modalities to build an understanding of the scene, e.g. lidar and camera fusion to get the list of detected objects.
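A minimal sketch of the geometric core of lidar/camera fusion: projecting lidar points into the image so they can be matched with camera detections. The calibration matrices here are placeholders; real values come from your sensor calibration (e.g. KITTI calib files).

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project (N, 3) lidar points into pixel coordinates.

    points_lidar: (N, 3) xyz points in the lidar frame
    T_cam_lidar:  (4, 4) rigid transform from lidar frame to camera frame
    K:            (3, 3) camera intrinsic matrix
    """
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]  # pixel coordinates and depths
```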
Merge the data from multiple cameras to get a bird's-eye view (BEV) of the scene, and then perform the downstream tasks on it. First try this with IPM (inverse perspective mapping), and then using transformers.
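A minimal IPM sketch using OpenCV's homography utilities. The four point correspondences and the file name are made-up placeholders; in practice they come from camera calibration or from known ground-plane points.

```python
import cv2
import numpy as np

# Pixel coordinates of four points on the road surface in the camera image.
src = np.float32([[520, 460], [760, 460], [1100, 680], [180, 680]])
# Where those same points should land in the bird's-eye-view image.
dst = np.float32([[300, 0], [500, 0], [500, 600], [300, 600]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography
image = cv2.imread("frame.png")                  # any forward-facing frame
bev = cv2.warpPerspective(image, H, (800, 600))  # warped top-down view
```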
First, try out one-stage and two-stage object detectors on camera images to detect vehicles, pedestrians, traffic lights, and signs, and track them further. Second, perform this on bird's-eye-view (BEV) maps of the scene built by merging information from multiple cameras.
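A minimal sketch of running a pretrained two-stage detector (torchvision's Faster R-CNN) on one frame; a one-stage detector such as RetinaNet or YOLO can be swapped in behind the same loop. The file name is a placeholder.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Two-stage detector pretrained on COCO (covers cars, people, traffic lights).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("frame.png").convert("RGB"))
with torch.no_grad():
    pred = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

keep = pred["scores"] > 0.5   # simple confidence threshold
print(pred["boxes"][keep], pred["labels"][keep])
```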
Perform track prediction using a model-based approach, followed by a hybrid approach, and then a deep learning approach for long-term motion prediction.
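A minimal sketch of the model-based baseline: constant-velocity extrapolation of a track state over a prediction horizon. The [x, y, vx, vy] state layout and the horizon are assumptions for illustration.

```python
import numpy as np

def predict_cv(state, horizon_s=3.0, dt=0.1):
    """Roll an [x, y, vx, vy] state forward under constant velocity."""
    x, y, vx, vy = state
    steps = int(horizon_s / dt)
    t = np.arange(1, steps + 1) * dt
    return np.stack([x + vx * t, y + vy * t], axis=1)  # (steps, 2) future xy

# Example: a track at the origin moving 10 m/s forward, drifting slightly left.
future = predict_cv(np.array([0.0, 0.0, 10.0, 0.5]))
```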
Fuse the data from different sensors using different data association methods and Kalman filters.
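A minimal sketch of one data-association step: matching detections to existing tracks with the Hungarian algorithm (scipy) on Euclidean distance. The gating threshold is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks_xy, detections_xy, gate=3.0):
    """Match tracks to detections.

    tracks_xy:     (T, 2) current track positions
    detections_xy: (D, 2) new detection positions
    Returns (track_idx, det_idx) pairs whose distance is within the gate.
    """
    cost = np.linalg.norm(tracks_xy[:, None, :] - detections_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # globally optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```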
- GPS and odometry based localization using Kalman filters
- Visual odometry (see the two-frame sketch after this list)
- HD map based localization:
  a. Lanelet maps
  b. Lidar point cloud based localization
- Mapless localization
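For the visual odometry item above, a minimal two-frame sketch using OpenCV: ORB features, essential matrix, and relative pose recovery. The intrinsic matrix K is a KITTI-like placeholder; use your own calibration.

```python
import cv2
import numpy as np

K = np.array([[718.0, 0, 607.0], [0, 718.0, 185.0], [0, 0, 1.0]])

def relative_pose(img0, img1):
    """Estimate rotation and unit-scale translation between two grayscale frames."""
    orb = cv2.ORB_create(2000)
    k0, d0 = orb.detectAndCompute(img0, None)
    k1, d1 = orb.detectAndCompute(img1, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)
    p0 = np.float32([k0[m.queryIdx].pt for m in matches])
    p1 = np.float32([k1[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
    return R, t

# Usage: img0 = cv2.imread("f0.png", cv2.IMREAD_GRAYSCALE), likewise img1.
```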
Please find the relevant research papers and first try to implement GPS + IMU based localization (a minimal Kalman filter sketch follows below). Eventually try the HD map based localization.
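A minimal 1D sketch of the GPS + IMU fusion: the IMU acceleration drives the Kalman filter prediction, and the GPS position corrects it. All noise values and rates are illustrative assumptions to be tuned against real sensors.

```python
import numpy as np

dt = 0.01                            # IMU rate: 100 Hz (assumed)
F = np.array([[1, dt], [0, 1]])      # state [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])  # how acceleration enters the state
H = np.array([[1, 0]])               # GPS measures position only
Q = np.diag([1e-4, 1e-3])            # process noise (IMU drift, illustrative)
R = np.array([[2.0]])                # GPS variance (~1.4 m std, illustrative)

x = np.zeros((2, 1))                 # initial state
P = np.eye(2)                        # initial covariance

def predict(x, P, accel):
    """Run at IMU rate with the measured acceleration."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gps_pos):
    """Run whenever a GPS position fix arrives."""
    y = gps_pos - H @ x                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```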
Create a map of the environment using lidar or camera data.
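A minimal occupancy-grid sketch of lidar-based mapping: mark each lidar hit as occupied in a 2D grid around the vehicle. Resolution and grid size are illustrative assumptions; a real mapper would also ray-trace free space.

```python
import numpy as np

RES = 0.2    # metres per cell (assumed)
SIZE = 500   # 100 m x 100 m grid
grid = np.zeros((SIZE, SIZE), dtype=np.int8)

def mark_hits(points_xy, pose_xy=(0.0, 0.0)):
    """Mark (N, 2) lidar hit points (world frame) as occupied cells."""
    ij = ((points_xy - np.array(pose_xy)) / RES + SIZE // 2).astype(int)
    valid = ((ij >= 0) & (ij < SIZE)).all(axis=1)  # drop out-of-grid points
    grid[ij[valid, 1], ij[valid, 0]] = 1           # row = y, col = x
```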
Using graph neural networks, create the map of the environment and localize within it.
We want to develop path planning algorithms for autonomous vehicles:
- Global planner: develop the global planner using lanelet maps or the Google Maps API.
- Behavior planner: rule-based in the beginning using state machines (a minimal sketch follows this list), which we can later replace with RL.
- Local planner: the local planner would come in different styles, including path profiling and velocity profile planning; some algorithms are state lattice planners, RRT*, etc. We could also develop an MPC controller to do the trajectory planning and tracking.
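A minimal sketch of the rule-based behavior planner as a state machine; the states and the observation keys are hypothetical placeholders.

```python
from enum import Enum, auto

class Behavior(Enum):
    LANE_FOLLOW = auto()
    STOP = auto()

def step(state, obs):
    """One transition; obs is a dict with hypothetical keys
    'red_light' and 'pedestrian_ahead'."""
    blocked = obs.get("red_light") or obs.get("pedestrian_ahead")
    if blocked:
        return Behavior.STOP
    if state is Behavior.STOP and not blocked:
        return Behavior.LANE_FOLLOW   # resume once the way is clear
    return state

# Example: state = step(Behavior.LANE_FOLLOW, {"red_light": True}) -> STOP
```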