[WIP] calculate each reflection separately #79
base: dev
Conversation
So I did some tests, and throughput drops quite significantly for lower numbers of rays. I will follow this up with some profiling; maybe there is a way to optimize the overhead away (CUDA streams might be a solution if the GPU is not fully utilized).
Setup: executed on node Kepler002 in the Hypnos cluster, C example.
* / ** runtimes estimated after 10% of the simulation was completed. These times should be representative enough to get a good grasp of the performance implications.
Force-pushed from c674b4a to 42cf48b
OK, so I did some refactoring and debugging, and the code got a lot faster. As an added benefit, it would now be trivial to add CUDA streams.
* / ** runtimes estimated after 10% of the simulation was completed. These times should be representative enough to get a good grasp of the performance implications.
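For reference, a minimal sketch of the CUDA-streams idea: launch the many small per-slice kernels round-robin on a handful of streams, so independent launches can overlap when a single one does not fill the GPU. The kernel name, launch configuration, and slice count below are placeholders, not the actual project code.

```cuda
#include <cuda_runtime.h>

// Placeholder kernel standing in for the per-reflection-slice work.
__global__ void calcSliceKernel(unsigned slice)
{
  // ... trace the rays belonging to this reflection slice ...
}

int main()
{
  const int numStreams = 4;
  cudaStream_t streams[numStreams];
  for (int i = 0; i < numStreams; ++i)
    cudaStreamCreate(&streams[i]);

  const unsigned numberOfSlices = 32;   // assumed value, just for illustration
  for (unsigned slice = 0; slice < numberOfSlices; ++slice) {
    // Round-robin over the streams: small, independent launches can then
    // overlap instead of serializing on the default stream.
    calcSliceKernel<<<16, 128, 0, streams[slice % numStreams]>>>(slice);
  }

  cudaDeviceSynchronize();
  for (int i = 0; i < numStreams; ++i)
    cudaStreamDestroy(streams[i]);
  return 0;
}
```

Whether this actually recovers the lost throughput depends on how much of the device each small kernel occupies, which is exactly what the profiling should show.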
Force-pushed from 42cf48b to 999b652
Force-pushed from 999b652 to c534705
This is a test to see if we can split the calculation of a sample point into multiple kernel calls (one per reflection slice). The reason is that our current code computes all reflection slices in a single huge array. This old style had several disadvantages that could be fixed:

- the array `numberOfReflectionSlices` is **huge**: as big as `indicesOfPrisms`, so it is part of the bottleneck for the number of rays
- removing this array cuts our memory requirements almost by 50% (for really high ray numbers)
- the kernel does not have to do the memory lookup to find the correct `reflection_i` (all reflections in one call are in the same plane anyway)
- the arrays `numberOfReflectionSlices` and `raysPerPrism` are actually linearized 2D arrays that contain all the reflection planes, which makes the code in `mapRaysToPrisms` more difficult
- if there is only one reflection plane at a time, it becomes much easier to split the number of rays even further to allow more rays in total (see ComputationalRadiationPhysics#2)
- the resulting code is more maintainable

This is nice and all, but splitting the reflections might introduce some problems:

- if there are reflections, we will do a lot more kernel calls and each of them might be quite small, so the GPU may not be utilized as well. Previously, everything was done in one huge kernel call.
- since we don't know how many rays there are in each plane, we have to call `thrust::reduce` in each iteration
- since we need multiple ray schedules (one for each reflection plane), we also have to call `mapRaysToPrisms` in each iteration

All in all, the performance implications need to be tested. I believe that this commit can improve long-term code quality and will directly enable ComputationalRadiationPhysics#2. But if the performance suffers, we might need to code some workaround (maybe use the split functionality only for really high ray numbers, where the trade-off is not so bad and we really NEED it).
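To make the intended structure concrete, here is a rough sketch of what the per-reflection-slice loop could look like. This is only an illustration under assumptions: the kernel `calcSamplePhiAse`, its signature, the block size, and the inlined scheduling step are placeholders; only `raysPerPrism`, `indicesOfPrisms`, the per-iteration `thrust::reduce`, and the `mapRaysToPrisms` step correspond to things mentioned above.

```cuda
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/scan.h>
#include <cuda_runtime.h>

// Placeholder kernel: one thread per ray. All rays in one launch share the
// same reflection slice, so there is no per-ray numberOfReflectionSlices lookup.
__global__ void calcSamplePhiAse(const unsigned *indicesOfPrisms,
                                 unsigned reflectionSlice,
                                 unsigned raysInSlice,
                                 float *phiAse)
{
  unsigned rayId = blockIdx.x * blockDim.x + threadIdx.x;
  if (rayId >= raysInSlice) return;
  // ... trace one ray inside reflection slice `reflectionSlice` ...
}

void calcSamplePoint(const thrust::device_vector<unsigned> &raysPerPrism, // slices x prisms, linearized
                     unsigned numberOfPrisms,
                     unsigned numberOfSlices,
                     float *phiAse)
{
  // Buffers re-used across iterations instead of one huge allocation
  // that covers all reflection slices at once.
  thrust::device_vector<unsigned> prefixSum(numberOfPrisms);
  thrust::device_vector<unsigned> indicesOfPrisms;

  for (unsigned slice = 0; slice < numberOfSlices; ++slice) {
    auto first = raysPerPrism.begin() + slice * numberOfPrisms;
    auto last  = first + numberOfPrisms;

    // We don't know up front how many rays fall into this slice,
    // hence one thrust::reduce per iteration.
    unsigned raysInSlice = thrust::reduce(first, last, 0u);
    if (raysInSlice == 0) continue;

    // One ray schedule per reflection slice (the mapRaysToPrisms step).
    thrust::exclusive_scan(first, last, prefixSum.begin());
    indicesOfPrisms.resize(raysInSlice);
    // ... scatter the prism indices into indicesOfPrisms using prefixSum ...

    // One (possibly small) kernel launch per reflection slice.
    unsigned threads = 128;
    unsigned blocks  = (raysInSlice + threads - 1) / threads;
    calcSamplePhiAse<<<blocks, threads>>>(
        thrust::raw_pointer_cast(indicesOfPrisms.data()),
        slice, raysInSlice, phiAse);
  }
  cudaDeviceSynchronize();
}
```

The point of the loop body is that every launch only ever sees a single reflection plane, so the per-ray `reflection_i` lookup disappears and the huge `numberOfReflectionSlices` array is no longer needed.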
- Re-use all vectors as much as possible
- one device-vector is created inside the loop (which seems wasteful at first) to enable an easy transition to CUDA streams in the future :)
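A small, hypothetical sketch of why that loop-local device vector matters for the streams transition (the names and the `thrust::cuda::par.on` usage are illustrative, not the actual code): because the temporary lives for exactly one iteration, binding an iteration's work to its own stream later on does not require untangling any shared scratch buffers.

```cuda
#include <thrust/device_vector.h>
#include <thrust/system/cuda/execution_policy.h>
#include <thrust/fill.h>
#include <cuda_runtime.h>

// Hypothetical illustration of the commit note above, not the actual project code.
void perSliceLoop(unsigned numberOfSlices, unsigned numberOfPrisms)
{
  // Re-used buffer: allocated once, overwritten in every iteration.
  thrust::device_vector<unsigned> indicesOfPrisms(numberOfPrisms);

  for (unsigned slice = 0; slice < numberOfSlices; ++slice) {
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // The loop-local temporary looks wasteful, but its lifetime is exactly
    // one iteration, so it never aliases another iteration's data; that is
    // what keeps a later move to one stream per iteration simple.
    thrust::device_vector<float> gainPerPrism(numberOfPrisms);

    // Thrust work for this slice can already be issued on the iteration's stream.
    thrust::fill(thrust::cuda::par.on(stream),
                 gainPerPrism.begin(), gainPerPrism.end(), 0.0f);

    // ... fill indicesOfPrisms (mapRaysToPrisms) and launch the per-slice
    //     kernel on `stream` ...

    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);
  }
}
```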
Force-pushed from c534705 to 673b690