Merge branch 'master' of https://github.com/portugueslab/sashimi
Showing 53 changed files with 2,664 additions and 199 deletions.
@@ -0,0 +1,15 @@

# Code architecture

Sashimi is structured in modules that communicate with each other via signals and use shared queues to broadcast data to different processes.
In the simplified schematic below, we can see how the `State` class is the core of the program, controlling and tying together the core functions.

It communicates with the GUI and updates values for the GUI to read, and it creates and oversees the processes that are responsible for directly controlling the hardware through custom interfaces.
Moreover, the `State` controls the _Global State_ variable of the program, which defines the mode the program is currently in. This is used by the GUI to adapt the interface and settings accordingly, and by the different processes to control the hardware.

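As a rough illustration, the global state can be thought of as a small enum that both the GUI and the hardware processes poll; the member names below are hypothetical, not the exact ones defined in Sashimi.

```python
from enum import Enum, auto


class GlobalState(Enum):
    # Hypothetical mode names, for illustration only
    PAUSED = auto()
    PREVIEW = auto()
    PLANAR = auto()
    VOLUMETRIC = auto()


def process_step(global_state):
    # Each process can branch on the current mode
    if global_state == GlobalState.PAUSED:
        return "wait for settings updates"
    elif global_state == GlobalState.VOLUMETRIC:
        return "drive piezo, galvos, and camera trigger synchronously"
    return "run the mode-specific loop"
```
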
```{figure} ../images/sashimi_struct.png
---
height: 500px
name: sashimi-struct
---
Sashimi simplified code structure
```
@@ -0,0 +1,63 @@

# Hardware Control

One of the strengths of **Sashimi** is its use of interfaces for the connection to potentially any hardware component. Whenever a component needs to be swapped, this can be done by creating a custom Python module that connects it to the Sashimi interface. The main hardware components needed are a `light source` (laser), a `camera`, a `piezo`, and multiple `galvos` for directing the laser and creating a sheet of light, and, finally, a `board` to drive most of the triggers. Each interface is defined as an abstract class, which ensures that any new custom module inheriting from the interface implements all the functions and properties necessary for the software to work.

Another way of interacting with the hardware, and with the software itself, is the `sashimi-config` module, which lets the user adjust software settings from the command line. This gives quick access to fixed settings that are used throughout the software, such as the maximum resolution of the camera sensor, the default save path, and the channels of the NI board.

## Light-Source Interface

The light source interface enforces that any custom light source module defines a method for setting the power of the laser, a method for closing the laser, and properties for reading the intensity of the laser and its status (ON, OFF).
At the moment, only two modules are present in the software:

- Mock light source, which mocks the behavior of a light source and is used mainly for testing purposes.
- Cobolt light source, which takes care of opening and setting up the Cobolt laser.

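As a sketch of the contract described above, an abstract light-source class could look roughly like this; the method and property names are assumptions for illustration, not the exact Sashimi API.

```python
from abc import ABC, abstractmethod


class LightSourceInterface(ABC):
    """Hypothetical contract that a custom light-source module would fulfil."""

    @abstractmethod
    def set_power(self, power):
        """Set the laser power."""

    @abstractmethod
    def close(self):
        """Close the connection to the laser."""

    @property
    @abstractmethod
    def intensity(self):
        """Current laser intensity."""

    @property
    @abstractmethod
    def status(self):
        """Either "ON" or "OFF"."""


class MockLightSource(LightSourceInterface):
    """Testing stand-in that only stores the requested values."""

    def __init__(self):
        self._power = 0.0

    def set_power(self, power):
        self._power = power

    def close(self):
        self._power = 0.0

    @property
    def intensity(self):
        return self._power

    @property
    def status(self):
        return "ON" if self._power > 0 else "OFF"
```
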
## Camera Interface

The camera interface outlines the functions and properties needed to control the camera and adjust the most relevant settings. The core methods start and stop the acquisition, shut down the camera, and, most importantly, `get_frames` returns a list of images.

Using the camera interface it is possible to change the following properties:

- Binning size
- Exposure time
- Trigger mode, which is expected to be external for Sashimi's volumetric mode
- Frame rate, which can only be read (to change this value you'll need to change the exposure time)
- Sensor resolution, which is computed from the binning size and the maximum resolution (a setting defined in the configuration file)

For this interface there's a mock module, which displays a Gaussian-filtered noise image, and a module for the Hamamatsu Orca Flash 4.0 v3 camera.

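For example, a mock `get_frames` could return Gaussian-filtered noise shaped like the (binned) sensor; this is a sketch with assumed parameter names, not the actual mock module.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def get_frames(sensor_resolution=(2048, 2048), binning=2, n_frames=1):
    """Return a list of noise images shaped like the binned sensor."""
    shape = tuple(side // binning for side in sensor_resolution)
    return [
        gaussian_filter(np.random.poisson(100, shape).astype(np.float32), sigma=3)
        for _ in range(n_frames)
    ]
```
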
## External-Trigger Interface

The external trigger interface allows synchronizing Sashimi with behavioral software for stimulus presentation and behavioral tracking. To achieve this it uses the Python bindings of ZeroMQ[[5](../overview/references.html#id5)], a high-performance asynchronous messaging library. The interface takes care of establishing the connection to the other software and returning the total duration of the experiment protocol. The duration is then used by the external communication process to update the acquisition duration of the experiment inside Sashimi.

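A bare-bones sketch of such an exchange with pyzmq is shown below; the socket pattern, address, and message format are illustrative assumptions, since the actual protocol is defined by the external-trigger module and the paired software.

```python
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)        # request/reply pattern (assumed)
socket.connect("tcp://localhost:5555")  # address of the behavioral software (assumed)

# Announce that the acquisition is about to start...
socket.send_json({"sashimi": "start"})

# ...and receive the protocol duration, used to set the acquisition length.
reply = socket.recv_json()
protocol_duration = reply["duration"]
```
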
There's a module that provides a built-in connection between Sashimi and [Stytra](https://www.portugueslab.com/stytra/index.html) [[1](references.html#id1)],[[2](references.html#id2)], an open-source software package designed to cover the general requirements of larval zebrafish behavioral experiments. Once an experiment protocol is ready, and both programs are set up correctly, [Stytra](https://www.portugueslab.com/stytra/index.html) will stand by and wait for a message from the acquisition software (Sashimi) to start the experiment. This message is sent automatically once the acquisition start is triggered in the Sashimi GUI.

## Scanning Interface

The scanning interface is the most complicated interface, since it needs to handle the NI board, which in turn controls the scanning hardware.
This interface outlines three simple methods to write and read samples from the board, as well as to initialize the board and start the relevant tasks.
There are multiple properties that control different functionalities:

- `z_piezo`, reads and writes values to move the piezo vertically.
- `z_frontal`, reads and writes values to move the frontal laser vertically.
- `xy_frontal`, reads and writes values to move the frontal laser horizontally.
- `z_lateral`, reads and writes values to move the lateral laser vertically.
- `xy_lateral`, reads and writes values to move the lateral laser horizontally.
- `Camera_trigger`, triggers the acquisition of one frame.

The implementation of the scanning interface connects to the NI board and initializes three analog streams:

- `xy_writer`, which combines the frontal and lateral galvos moving the laser horizontally and outputs a waveform.
- `z_reader`, which reads the piezo position.
- `z_writer`, which combines the frontal and lateral galvos moving the laser vertically, the piezo, and the camera trigger. For each of them, the output varies depending on the mode the software is in.

Inside the config file, there's a setting that applies a rescaling factor to the piezo.

Another major part of the interface is the implementation of the different scanning loops that continuously move the laser to form a sheet of light and move it in z synchronously with the piezo to keep the focus. There is a main class called `ScanLoop` which continuously checks whether the settings have changed, fills the input arrays with the appropriate waveform, writes this array to the NI board (through the scanning interface), reads the values from the NI board, and keeps count of the samples written and read so far. Two classes inherit from this main class:

- `PlanarScanLoop`
- `VolumetricScanLoop`

The main difference between the two is the way they fill the arrays responsible for controlling the vertical movement of the piezo and galvos. Inside the `planar loop`, there are two possible modes, one of which is used for calibration purposes and is completely manual. In this mode, the piezo is moved independently of the lateral and frontal vertical galvos. This allows for proper calibration of the focus plane for each specimen placed in the microscope. The other mode is synced and uses the linear function computed during calibration to derive the appropriate value for each galvo based on the piezo position.

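The linear mapping itself can be pictured as in the sketch below, assuming each galvo stores an (intercept, slope) pair obtained during calibration; the actual `calc_sync` used in the code may differ in its exact form.

```python
def calc_sync(piezo_position, sync_coef):
    # sync_coef = (intercept, slope) fitted during calibration (assumed layout)
    intercept, slope = sync_coef
    return intercept + slope * piezo_position
```
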
The `volumetric loop` instead writes a sawtooth waveform to the piezo, then reads the piezo position and computes the appropriate value to set the vertical galvos to. Given the desired framerate, it also generates an array of impulses for the camera trigger, where the initial or final frames can be skipped depending on the waveform of the piezo. For ease of use, the waveform is shown in the GUI so that the user can decide how many frames to skip depending on the settings they entered; see the [figure](sashimi-mode_vol), where the white line represents the waveform.
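
As a rough illustration of this step, a piezo ramp and the corresponding camera-trigger impulses could be generated as below; parameter names are assumptions, and the real implementation also handles flyback and skipped frames.

```python
import numpy as np


def sawtooth_and_triggers(n_samples, z_min, z_max, sample_rate, frame_rate):
    """One rising ramp for the piezo plus evenly spaced camera-trigger pulses."""
    ramp = np.linspace(z_min, z_max, n_samples)  # one sawtooth period, flyback omitted
    triggers = np.zeros(n_samples)
    samples_per_frame = int(sample_rate / frame_rate)
    triggers[::samples_per_frame] = 5.0          # impulse train for the camera trigger
    return ramp, triggers
```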
@@ -0,0 +1,47 @@

# Multiprocessing

Multiprocessing allows programs to run multiple sets of instructions in separate processing units at the same time. Correctly implemented, such programs run much faster and take full advantage of multi-core CPUs. Python has a built-in library that offers multiprocessing tools and functionality, such as spawning and synchronizing separate processes and sharing information between them. In microscopy, it is crucial to have events that happen quickly and synchronously with one another, especially in light-sheet microscopy, where the piezo, the vertical galvos, and the camera trigger must be synchronized to deliver a focused image. Multiprocessing ensures that multiple hardware components and functionalities can work simultaneously and, even more importantly, that priority can be redistributed to make sure that the most important tasks are executed in the correct time frame, while other, less time-sensitive tasks can be processed less promptly.

As an example, in a light-sheet microscope it is of the utmost importance that galvos, camera triggers, and piezo are synchronized, while the process that saves the data can work asynchronously with respect to the other processes. Of course, most processes need to keep up with the whole program in order to avoid stalls and delays, but given enough speed and buffering, some functionalities do not need to be as precise and synced as others. This also allows the computer to allocate resources dynamically to fulfill the tasks at hand.

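For context, the standard-library building blocks look roughly like this; the snippet is a generic illustration of processes and shared queues, not Sashimi code.

```python
from multiprocessing import Process, Queue


def producer(queue, n_frames=5):
    # A worker process pushing items for other processes to consume
    for i in range(n_frames):
        queue.put(f"frame {i}")


if __name__ == "__main__":
    queue = Queue()
    worker = Process(target=producer, args=(queue,))
    worker.start()
    for _ in range(5):
        print(queue.get())  # the main process reads data from the shared queue
    worker.join()
```
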
## Logging

The `logging process` is a simple class that implements a **concurrency logger**. This means that the logger records each message together with the context in which it was sent. The logger has built-in functions to log events, queues, and any particular message to a custom file. Every other process inherits from this class and therefore has a built-in logger, which makes logging events and messages easy and organized. To automatically log events there's another class called `LoggedEvent`, which accepts one of a range of internally defined events and a logger, and returns an `Event` (from the multiprocessing library) whose functionality is extended with the built-in logger.
The initialization process follows these steps (see the sketch after the list):

1. The main process creates a `LoggedEvent`
2. The `LoggedEvent` is passed to one of the processes
3. The process assigns its logger to the `LoggedEvent`
4. Now, every time the event is set, cleared, or pinged, it is automatically logged in the process logging file

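A minimal sketch of the idea behind such a wrapper is shown below; the real `LoggedEvent` differs in its details, this only shows an `Event` whose operations are also logged.

```python
from multiprocessing import Event


class LoggedEvent:
    """Wrap a multiprocessing Event so that set/clear calls are also logged."""

    def __init__(self, name):
        self.name = name
        self.event = Event()
        self.logger = None  # assigned later by the process that owns the event

    def set(self):
        if self.logger is not None:
            self.logger.info("event %s set", self.name)
        self.event.set()

    def clear(self):
        if self.logger is not None:
            self.logger.info("event %s cleared", self.name)
        self.event.clear()

    def is_set(self):
        return self.event.is_set()
```
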
## Camera

The camera process handles all the camera-related functionality and sets the camera parameters, mode, and trigger. It computes and checks the framerate and runs the camera in a mode-dependent loop.

If the current program mode is `Paused`, the loop waits for the mode to change and keeps checking for updates to the camera parameters. If the mode is `Preview`, on the other hand, the loop gets new frames from the camera and inserts them into a queue; at the end it checks for changes in the camera parameters and updates them if needed. Until the program is closed, the camera is kept in this constant loop between preview and paused mode.
The last possible camera mode is used to abort the current preview, stop the camera, and set the paused mode.

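Schematically, the camera loop can be pictured as below; the mode names and the queue and parameter handling are simplified assumptions rather than the actual implementation.

```python
def camera_loop(camera, mode_queue, frame_queue, parameter_queue):
    """Simplified sketch of the camera process main loop."""
    mode = "paused"
    while mode != "closed":
        if not mode_queue.empty():
            mode = mode_queue.get()          # paused / preview / abort / closed
        if mode == "preview":
            for frame in camera.get_frames():
                frame_queue.put(frame)       # hand frames over to the dispatcher
        if not parameter_queue.empty():
            camera.apply_parameters(parameter_queue.get())  # hypothetical setter
```
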
## Scanning

The scanning process leverages the implementations of the scanning loops inside the interface; it mainly sets up the loop and updates the relevant settings. It initializes the board, settings, and queues, which it then passes to a loop object. This loop object is an implementation of either the `PlanarScanLoop` or the `VolumetricScanLoop`, depending on the mode the program is in.

## External Communication

The external communication process uses the connection made by the external trigger interface to keep updating the settings and checking the trigger conditions. Once these conditions are met, it sends a trigger and receives the duration of the experiment. The duration is then put into a queue, from which it is read by the main process and used to compute the end signal of the acquisition.

## Dispatching & Saving

Two more processes take care of assembling the volumes and saving them. The dispatcher process runs a loop in which it gets the newest settings and a frame from the camera process queue.

This frame is then optionally cleaned of the sensor background noise (this can be activated in the volumetric mode widget) and stacked with others until it completes a volume. The volume is then fed to two queues: one for saving and one for the preview displayed by the viewer. The saving process is a bit more complex, since it also holds the saving parameters and the saving status (which is important to keep track of the chunk currently being saved).

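The stacking step can be sketched as collecting frames into a buffer until one frame per plane has arrived; the names below are assumptions for illustration.

```python
import numpy as np


def dispatch(frame_queue, save_queue, preview_queue, n_planes):
    """Group incoming frames into volumes and fan them out to two queues."""
    buffer = []
    while True:
        buffer.append(frame_queue.get())
        if len(buffer) == n_planes:
            volume = np.stack(buffer)   # shape: (n_planes, height, width)
            save_queue.put(volume)
            preview_queue.put(volume)
            buffer = []
```
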
The saving loop executes the following actions:

1. Initializes the saving folder
2. Reads a volume from the dispatcher queue
3. Calculates the optimal size of a file to be saved in chunks, based on the size of the data and the available RAM (see the sketch below)
4. Stores n volumes until it reaches the optimal size
5. Saves the chunk in an .h5 file
6. Once it finishes saving, writes a .json file with all the metadata

This dynamic approach to saving ensures that the program doesn't get overloaded while trying to write the acquired data.
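
Step 3 of the saving loop can be pictured as a small calculation of how many volumes fit in one chunk given the available RAM; the memory fraction used here is an arbitrary assumption, not Sashimi's actual policy.

```python
def volumes_per_chunk(volume_shape, bytes_per_voxel, available_ram, ram_fraction=0.1):
    """Sketch: how many volumes to buffer before writing one .h5 chunk."""
    volume_bytes = bytes_per_voxel
    for dim in volume_shape:
        volume_bytes *= dim
    # Dedicate at most a fixed fraction of the free RAM to one chunk
    return max(1, int(available_ram * ram_fraction // volume_bytes))


# e.g. 30-plane volumes of 1024x1024 uint16 pixels with 8 GB of RAM available
n_volumes = volumes_per_chunk((30, 1024, 1024), 2, 8 * 1024**3)
```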
@@ -0,0 +1,77 @@

# Scanning

The scanning interface is the most complicated interface, since it needs to handle the NI board, which in turn controls the scanning hardware.
This interface outlines three simple methods to write and read samples from the board, as well as to initialize the board and start the relevant tasks.
There are multiple properties that control different functionalities:

- `z_piezo`, reads and writes values to move the piezo vertically.
- `z_frontal`, reads and writes values to move the frontal laser vertically.
- `xy_frontal`, reads and writes values to move the frontal laser horizontally.
- `z_lateral`, reads and writes values to move the lateral laser vertically.
- `xy_lateral`, reads and writes values to move the lateral laser horizontally.
- `Camera_trigger`, triggers the acquisition of one frame.

The implementation of the scanning interface connects to the NI board and initializes three analog streams:

- `xy_writer`, which combines the frontal and lateral galvos moving the laser horizontally and outputs a waveform.
- `z_reader`, which reads the piezo position.
- `z_writer`, which combines the frontal and lateral galvos moving the laser vertically, the piezo, and the camera trigger. For each of them, the output varies depending on the mode the software is in.

Inside the config file, there's a setting that applies a rescaling factor to the piezo.

Another major part of the interface is the implementation of the different scanning loops that continuously move the laser to form a sheet of light and move it in z synchronously with the piezo to keep the focus. There is a main class called `ScanLoop` which continuously checks whether the settings have changed, fills the input arrays with the appropriate waveform, writes this array to the NI board (through the scanning interface), reads the values from the NI board, and keeps count of the samples written and read so far. Two classes inherit from this main class:

- `PlanarScanLoop`
- `VolumetricScanLoop`

The main difference between the two is the way they fill the arrays responsible for controlling the vertical movement of the piezo and galvos. Inside the `planar loop`, there are two possible modes, one of which is used for calibration purposes and is completely manual. In this mode, the piezo is moved independently of the lateral and frontal vertical galvos. This allows for proper calibration of the focus plane for each specimen placed in the microscope. The other mode is synced and uses the linear function computed during calibration to derive the appropriate value for each galvo based on the piezo position.

The `volumetric loop` instead writes a sawtooth waveform to the piezo, then reads the piezo position and computes the appropriate value to set the vertical galvos to. Given the desired framerate, it also generates an array of impulses for the camera trigger, where the initial or final frames can be skipped depending on the waveform of the piezo. For ease of use, the waveform is shown in the GUI so that the user can decide how many frames to skip depending on the settings they entered; see the [figure](sashimi-mode_vol), where the white line represents the waveform.

**PlanarScanLoop** `fill_arrays` method:

```python
def fill_arrays(self):
    # Fill the z values
    self.board.z_piezo = self.parameters.z.piezo
    if isinstance(self.parameters.z, ZManual):
        # Manual (calibration) mode: galvos follow the user-set values directly
        self.board.z_lateral = self.parameters.z.lateral
        self.board.z_frontal = self.parameters.z.frontal
    elif isinstance(self.parameters.z, ZSynced):
        # Synced mode: galvo values are derived from the piezo position
        # through the linear calibration (calc_sync)
        self.board.z_lateral = calc_sync(
            self.parameters.z.piezo, self.parameters.z.lateral_sync
        )
        self.board.z_frontal = calc_sync(
            self.parameters.z.piezo, self.parameters.z.frontal_sync
        )
    super().fill_arrays()

    self.wait_signal.clear()
```

**VolumetricScanLoop** `fill_arrays` method:

```python
def fill_arrays(self):
    super().fill_arrays()
    # Drive the piezo with the precomputed waveform for the current time window
    self.board.z_piezo = self.z_waveform.values(self.shifted_time)
    i_sample = self.i_sample % len(self.recorded_signal.buffer)

    if self.recorded_signal.is_complete():
        wave_part = self.recorded_signal.read(i_sample, self.n_samples)
        max_wave, min_wave = (np.max(wave_part), np.min(wave_part))
        # Only drive each galvo if the synced values stay within a safe range
        if (
            -2 < calc_sync(min_wave, self.parameters.z.lateral_sync) < 2
            and -2 < calc_sync(max_wave, self.parameters.z.lateral_sync) < 2
        ):
            self.board.z_lateral = calc_sync(
                wave_part, self.parameters.z.lateral_sync
            )
        if (
            -2 < calc_sync(min_wave, self.parameters.z.frontal_sync) < 2
            and -2 < calc_sync(max_wave, self.parameters.z.frontal_sync) < 2
        ):
            self.board.z_frontal = calc_sync(
                wave_part, self.parameters.z.frontal_sync
            )
```