Start time: 1800 EDT
Zoom (Meeting ID: 936 0420 1350)
- Discussion of GPGPU and linear algebra
  - Gauging interest and experience
  - WebGPU, Emu, the ocl crate, the restarted accel crate, and a BLAS subset: 2D matrix dot product, Hadamard product, element-wise operations, transpose
- Creation of a Rust ML tutorial/documentation/prototype for understanding NNs from first principles
  - explaining how neural nets learn (using MNIST, show the learning process using recursive autodiff + epochs to approximate a solution)
  - showing where each piece fits in the process
  - should drive discussion for future development
- Meeting times/dates
  - should we change from 1800 EDT?
  - put it up to a vote on the Zulip?
  - is every 3 weeks working right now?
@degausser (Ricky)
@chrism (Chris)
@Geordon Worley
@Jamal Simpson
Talking about Cogent
- interesting that it is using arrayfire (C bindings for the GPU)
An alternative to GLSL on WebGPU: WGSL (https://gpuweb.github.io/gpuweb/wgsl.html)
- an example of something we could consider for multiple platforms, since WebGPU, CUDA, etc. should be supported by modern NNs
- WebGPU isn't stabilized (not 1.0)
Not against Emu as a basis, but it is missing some core functionality
- we could implement what we need for machine learning; it might be a good start and a good contribution back to Emu
The bare minimum we need to start on MNIST is:
- 2D matrix dot product, Hadamard product, element-wise operations, transpose
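For reference, all four of these operations are already available on the CPU through ndarray; a minimal sketch (the values and shapes below are arbitrary examples, not anything from the meeting):

```rust
use ndarray::{array, Array2};

fn main() {
    let a: Array2<f64> = array![[1.0, 2.0], [3.0, 4.0]];
    let b: Array2<f64> = array![[5.0, 6.0], [7.0, 8.0]];

    let dot = a.dot(&b);                     // 2D matrix dot product
    let hadamard = &a * &b;                  // Hadamard (element-wise) product
    let scaled = a.mapv(|x| x * 2.0 + 1.0);  // general element-wise operation
    let transposed = a.t();                  // transpose (a view, no copy)

    println!("{dot}\n{hadamard}\n{scaled}\n{transposed}");
}
```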
Could work on an OpenCL binding for a library that works with Emu, using ndarray as a backend for the linear algebra computations
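As a rough sketch of what using the ocl crate looks like for a single element-wise kernel (this follows ocl's general ProQue pattern; the kernel source, name, and dimensions are placeholder assumptions, and details may differ by crate version):

```rust
use ocl::ProQue;

fn scale_on_gpu() -> ocl::Result<Vec<f32>> {
    // Placeholder OpenCL kernel: multiply every element by a scalar.
    let src = r#"
        __kernel void scale(__global float* buf, float factor) {
            buf[get_global_id(0)] *= factor;
        }
    "#;

    // ProQue bundles platform, device, context, queue, and program for simple cases.
    let pro_que = ProQue::builder().src(src).dims(1024).build()?;

    let buffer = pro_que.create_buffer::<f32>()?;
    let kernel = pro_que
        .kernel_builder("scale")
        .arg(&buffer)
        .arg(2.0f32)
        .build()?;

    // Enqueue the kernel; unsafe because kernel arguments aren't statically checked.
    unsafe { kernel.enq()?; }

    let mut out = vec![0.0f32; buffer.len()];
    buffer.read(&mut out).enq()?;
    Ok(out)
}
```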
Geordon has gotten WebGPU to work well with Linux, Windows, and Intel/Nvidia GPUs so it is pretty portable in their opinion
If we want to use optimizations, we cannot code for just WebGPU; if a user has older CUDA libraries, we need to make sure those optimizations are still available to them
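One way to read this requirement (a hypothetical sketch, not an agreed design) is to keep the math behind a small backend trait so a WebGPU, CUDA, or plain-CPU implementation can be swapped in without touching the model code. Every name below is made up for illustration:

```rust
use ndarray::Array2;

/// Hypothetical backend abstraction: each GPU/CPU target implements the same
/// small set of linear-algebra primitives the MNIST example needs.
trait Backend {
    fn matmul(&self, a: &Array2<f32>, b: &Array2<f32>) -> Array2<f32>;
    fn hadamard(&self, a: &Array2<f32>, b: &Array2<f32>) -> Array2<f32>;
}

/// CPU reference implementation via ndarray; a WebGPU or CUDA backend would
/// implement the same trait with device buffers behind the scenes.
struct CpuBackend;

impl Backend for CpuBackend {
    fn matmul(&self, a: &Array2<f32>, b: &Array2<f32>) -> Array2<f32> {
        a.dot(b)
    }
    fn hadamard(&self, a: &Array2<f32>, b: &Array2<f32>) -> Array2<f32> {
        a * b
    }
}
```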
accel can write CUDA kernels directly in (unsafe) Rust
- chrism has been playing with this crate
- the main author of accel is part of the ndarray crate as well; a very specific compiler is required for this crate to work (nightly 2020-05-01)
- have to write kernels in OpenCL
Jamal wants to make sure we have a very simple start to a tutorial/documentation
Geordon brought up initialization
- loading prior weights/biases, etc.
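As a rough illustration (crate choice, layer shape, and helper name are assumptions, not decisions from the meeting), random initialization with ndarray-rand might look like this; loading prior weights would instead deserialize a saved array from disk:

```rust
use ndarray::Array2;
use ndarray_rand::rand_distr::Uniform;
use ndarray_rand::RandomExt;

/// Hypothetical helper: small random weights for a 784 -> 10 MNIST layer.
fn init_weights() -> Array2<f32> {
    Array2::random((784, 10), Uniform::new(-0.05, 0.05))
}
```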
Optimizers (just a few to show users)
- gradient descent
- Adam?
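For the gradient-descent case, the update rule is just w <- w - lr * dL/dw; a minimal sketch with ndarray (the function name is hypothetical):

```rust
use ndarray::Array2;

/// Plain gradient-descent update: weights = weights - lr * grad.
/// Adam would additionally keep running first/second moment estimates per
/// parameter; this shows only the simplest optimizer from the list above.
fn sgd_step(weights: &mut Array2<f32>, grad: &Array2<f32>, lr: f32) {
    weights.scaled_add(-lr, grad);
}
```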
How to write the book?
- try to keep it in markdown
- GitHub is fine as well
- plus they got the LaTeX plugin working for mdBook
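Assuming the plugin in question is the mdbook-katex preprocessor (an assumption; the notes don't name it), the book configuration would look roughly like:

```toml
# book.toml — minimal sketch, assuming the mdbook-katex preprocessor
[book]
title = "Rust ML from first principles"  # placeholder title

[preprocessor.katex]
```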
Recap: write a bit of code that forms a minimal framework and a minimal example of MNIST (maybe), just a basic implementation that serves as a reference point, and start a book that shows how this example was made piece by piece; a rough sketch of the shape such an example could take is below
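The sketch uses toy data and hand-written gradients in place of MNIST and autodiff; everything here is illustrative, not an agreed design:

```rust
use ndarray::{array, Array1, Array2};

fn main() {
    // Toy data: learn y = 2*x0 + 3*x1 from four samples (stand-in for MNIST).
    let x: Array2<f64> = array![[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]];
    let y: Array1<f64> = array![2.0, 3.0, 5.0, 7.0];

    let mut w: Array1<f64> = Array1::zeros(2); // weights to learn
    let lr = 0.1;                              // learning rate

    for epoch in 0..100 {
        let pred = x.dot(&w);                                 // forward pass
        let err = &pred - &y;                                 // residuals
        let loss = (&err * &err).mean().unwrap();             // mean squared error
        let grad = x.t().dot(&err) * (2.0 / y.len() as f64);  // dLoss/dw by hand
        w = &w - &(grad * lr);                                // gradient-descent update
        if epoch % 20 == 0 {
            println!("epoch {epoch}: loss = {loss:.4}");
        }
    }
    println!("learned weights: {w}");
}
```

The book would then replace the hand-written gradient with recursive autodiff, the toy data with MNIST, and the single weight vector with proper layers, explaining each substitution as it is made.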
Times? Let's open a poll or something for the Zulip group
2 weeks in between meetings
- tentative next meeting date and time: 2020-05-26 @ 1800 EDT
Jamal plans to try the OpenCL crate
Ricky will find a good way to create a poll for meeting times
- frequency will be 2 weeks starting from today
Jamal and Geordon will start on the minimal example
- not for the next meeting but just an ongoing process
- do they have the access they need to the GitHub group?
Chris asked for some contributions to their crate