Releases: pupil-labs/pupil
Pupil Capture, Player, and Service release (Updated)
We are pleased to announce the release of Pupil v1.16!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Features
Pupil Invisible Support - #1596, #1623
Preparing for the upcoming public release of Pupil Invisible (PI), we are adding support for recordings made with PI in Pupil Player.
You will be able to download the raw recordings from Pupil Cloud or copy them directly from the Companion device. Drag and drop the recordings onto Pupil Player as usual.
Most features of Player will support recordings made with PI. Currently not supported are the iMotions exporter and the blink detector.
Surface Tracker
Removed Robust Detection from SurfaceTracker - #1600
The robust detection mode was a remnant of the legacy surface marker system. The new Apriltag markers are by default more robust than even the old robust detection mode. Since this option no longer had any effect, it was removed.
Bugfixes
Disabled Hardware Timestamps on Windows Operating System - #1621
In the past we have received multiple reports of timestamps being out of sync on Windows. After some testing we discovered that the timestamps from the hardware were not always reliable on Windows. The exact cause of this is still unknown but we have disabled the use of hardware timestamps on Windows for now. We are now using software timestamps with an appropriate offset-mapping.
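As a rough sketch of what such an offset mapping looks like (a minimal illustration with hypothetical names, not Pupil's actual implementation): the offset between the software clock and the Pupil time base is measured once and then applied to every subsequent software timestamp.

```python
# Sketch of offset-mapped software timestamps (hypothetical helper,
# not Pupil's actual code).

def make_offset_mapper(pupil_time_now, software_time_now):
    """Return a mapper from software-clock readings to Pupil time,
    using a fixed offset measured once."""
    offset = pupil_time_now - software_time_now

    def to_pupil_time(software_ts):
        return software_ts + offset

    return to_pupil_time
```

For example, if the offset is measured as 60.0 s, a frame timestamped at software time 41.0 s maps to 101.0 s in Pupil time.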
Improved Eye Overlay Visualizations - #1609
Previously, the Eye Overlay plugin would always display the closest eye frame to the current playback position. Therefore, in cases where the eye camera disconnected during a recording, the eye video was visualized incorrectly. Instead, now, we show a gray frame to indicate missing data.
Smaller Fixes
- Fixed minor performance issues in Apriltag surface marker detector
- Fixed broken SurfaceTracker heatmap rendering when no gaze data is available - #1584
- Fixed SurfaceTracker timeline drawing performance issues - #1595
- Fixed issues with EyeMovementDetector when using inconsistent 3D gaze data - #1587
- Fixed UI/UX issues when editing surfaces in SurfaceTracker - #1591
- Fixed flickering surface markers - #1600
- Updated the app icons for the Windows bundles to the latest design
- Fixed incorrect handling of missing projection matrices for HMD - #1603
- Fixed performance-related crashes of the SurfaceTracker on Windows - #1605, #1606
- Improved user feedback when opening invalid recordings - #1585
- Fixed crashing offline detection in recordings with timing irregularities - #1615
- Improved user feedback when trying to load an unknown plugin - #1613
- Fixed a crash when trying to use Pupil Invisible with Capture - #1620
- Fixed a crash when setting color values for CircleViz to 1.0 - #1624
- Improved user feedback when recording from Pupil Mobile without TimeSync - #1629
Deprecated Recordings
Versions of Pupil Player released before v1.16 do not support recordings created with
- Pupil Capture `v1.2` or earlier, and
- Pupil Mobile `r0.22.x` or earlier.

This is because these recordings are missing meta information that is required for the upgrade to our new Pupil Player Recording Format 2.0. For details see "Missing Meta Information" below.
With this PR, instead of aborting with a "Recording too old" warning, Pupil Player will attempt to infer the missing values from other data in the recording. This inference is not perfect: it yields imprecise values and might cause issues when converting recorded Pupil time to wall clock time. For details see "Missing Meta Information" below.
Upgrading will only work in Player v1.16
Since this inference is limited, we will remove the upgrade functionality for deprecated recordings in v1.17. This means that if you want to open a deprecated recording in v1.17 or later, you will have to convert it to the new format using v1.16 first.
Missing Meta Information
Specifically, the following keys in the `info.csv` file are missing:
- `Start Time (System)`: Recording start time in Unix epoch
- `Start Time (Synced)`: Recording start time in Pupil time
Pupil Player assumes that these timestamps were measured at the same time. This allows for after-the-fact synchronization of Pupil data with externally recorded data that uses the Unix epoch.
Their values can be inferred from the existing recording, with a roughly known imprecision:
- `Start Time (System)`: The system start time can be inferred using the existing `Start Date` and `Start Time` fields (precision: one second). Unfortunately, `Start Time` is subject to the system's timezone while `Start Time (System)` is not. Therefore, its inference is off by the recording's timezone offset.
- `Start Time (Synced)`: This timestamp can be replaced by the earliest known recorded video timestamp in the given recording. In this case, the inference error depends on the startup delay of the cameras. This delay is typically less than a second for Pupil Capture recordings, and 2-5 seconds for Pupil Mobile recordings.
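The inference can be sketched as follows. This is an illustration only: the date/time format string is an assumption (the real `info.csv` format may differ), and treating the naive local time as UTC reproduces exactly the timezone imprecision described above.

```python
from datetime import datetime, timezone

def infer_start_time_system(start_date, start_time):
    """Combine Start Date and Start Time (one-second precision).
    The fields are in local time, so the result is off by the
    recording's timezone offset, as noted above.
    Assumption: fields use a DD.MM.YYYY / HH:MM:SS format."""
    dt = datetime.strptime(f"{start_date} {start_time}", "%d.%m.%Y %H:%M:%S")
    # Treat the naive local time as UTC -- this is the imprecision
    # the release notes describe.
    return dt.replace(tzinfo=timezone.utc).timestamp()

def infer_start_time_synced(video_timestamps):
    """Earliest recorded video timestamp; the error equals the camera
    startup delay (<1 s for Capture, 2-5 s for Pupil Mobile)."""
    return min(video_timestamps)
```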
Incorrect Playback
In some cases, deprecated recordings might not be played back correctly in Pupil Player. If this is the case for you, please contact [email protected]
for support.
Developers Notes
Changed Requirements
- Updated `pyndsi` dependency to v1.0: `pip install git+https://github.com/pupil-labs/pyndsi -U`
- Added `packaging` dependency: `pip install packaging`
Pupil Player Recording Format 2.0
Variable Framerate Videos
Instead of writing videos with equally distant presentation timestamps, Capture now saves videos with their original timing. This means that newly recorded videos can be played back correctly in VLC without having to export them first with Pupil Player.
Recording Info Files
Due to slight differences in how Capture, Mobile, and Invisible store recording meta info, we introduced a unified recording meta info file for Pupil Player: `info.player.json`. When opening a recording made with Pupil Mobile, Pupil Invisible, or older versions of Pupil Capture, we transform the recording into the new format (as we did before) and aggregate all necessary meta info in `info.player.json` without changing the original meta info file. The original meta info files will be renamed in order to uniquely identify their origin:
- Older versions of Pupil Capture*: `info.csv` -> `info.old_style.csv`
- Pupil Mobile: `info.csv` -> `info.mobile.csv`
- Pupil Invisible: `info.json` -> `info.invisible.json`
*Note that when e.g. a Pupil Mobile recording has previously been opened in an older version of Pupil Player, we also rename to `info.old_style.csv`, as the original `info.csv` will already have been adjusted by Player.
You can read about the exact specification of the new `info.player.json` format here.
Pretty Plugin Class Names - #1597
Custom plugin classes no longer have to rely on underscores in their names to be displayed nicely in the UI. You can now name your plugin classes in accordance with PEP 8 - Class Names:
```python
class MyAwesomePlugin(Analysis_Plugin_Base):
    @classmethod
    def parse_pretty_class_name(cls) -> str:
        return "Awesome Plugin"
```
Note that currently all existing pupil classes (e.g. `Analysis_Plugin_Base`) keep their names with underscores for backward compatibility.
We Are Hiring Python Developers!
Hey - you're reading the developer notes, so this is for you! We're looking to hire developers to contribute to Pupil source code. If you love Python and enjoy writing code that is a joy to read, get in touch. Experience with the scientific Python stack is a plus, but not required. We have a lot of exciting projects in the pipeline.
We are also looking for Dev Ops engineers that have experience with kubernetes, docker, and server-side Python.
Send an email to [email protected] with a CV to start a discussion. We look forward to hearing from you.
Release Note Updates
17.09.2019 22:40: Missing Microsoft Visual C++ Redistributable
We have received reports from two users that Pupil Capture failed to start on Windows with the following error message:
ImportError: DLL load failed: The specified module could not be found.
In both cases the "[Microsoft Visual C++ Re...
Pupil Capture, Player, and Service release (Updated)
We are pleased to announce the release of Pupil v1.15!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Features
Apriltags support for Surface Tracking
Starting with `v1.15`, the Surface Tracker will detect apriltags instead of "square markers".
We highly recommend replacing "square markers" with apriltags in your setup, since their improved detection performance translates to gaze being mapped to surfaces with higher accuracy and surfaces being detected more reliably, especially during movements.
You can still use the "square markers" by setting the `Marker Detector Mode` to `Legacy square markers`. This is especially useful if you want to process recordings that were made prior to this release.
Apriltags version 3 and support for Windows
We started to maintain our own fork of @duckietown's apriltags Python bindings for @AprilRobotics' AprilTag 3 library. Forking the projects was mostly necessary in order to support them on Windows.
This change allows us to finally enable the Head Pose Tracker feature (initially released in `v1.12`) on Windows!
Freeze the current 3d eye model
You now have the option to freeze the current eye model. Doing so will prevent any changes to the current eye model and discard any alternative models that might have built up in the background.
Warning: Freezing the eye model will disable its ability to compensate for any kind of slippage.
Bugfixes
- Drop invalid frame after disconnect - #1573
- Revert change that caused eye process to crash when minimizing
Developers notes
Changed requirements
Please install apriltags via
pip install git+https://github.com/pupil-labs/apriltags
API changes
Improved fixations_on_surface
export
A fixation is based on gaze data that fulfills the maximum-dispersion, minimum-duration criterion. We use it to infer e.g. the fixation's position. When mapping a fixation to a surface, we use the surface's homography to calculate the fixation's position in surface coordinates. The surface's homography is calculated based on the detected surface markers for a given world frame. In most cases, the fixation spans multiple world frames, i.e. there are multiple surface homographies which could be used to map the fixation to the surface. Until now, we only exported fixations for the first frame during which the fixation appeared. This is not a problem as long as the surface does not move in relation to the world camera during the fixation. Since this assumption does not always hold true, we will export all fixation mappings starting with v1.15. This means that if a fixation spans multiple world frames during which a surface was detected, we will export the fixation and its position for each of these world frames.
Therefore, we added two new columns, `world_timestamp` and `world_index`, to identify the world frame that was used to map the fixation.
Eye model freezing
You can un/freeze the 3d eye models via a notification:
```python
{
    "subject": "pupil_detector.set_property.3d",
    "name": "model_is_frozen",
    "value": True,  # set to False to unfreeze
    # remove the "target" line below to
    # un/freeze the models of both eyes
    "target": "eye0",
}
```
See this example on how to use the Pupil Detector Network API.
Change pupil detection ROI by notification - #1576
You can change the pupil detector's region of interest via a notification:
```python
{
    "subject": "pupil_detector.set_property",
    "target": "eye0",
    "name": "roi",
    # value: (minX, maxX, minY, maxY)
    "value": [0, 100, 0, 200],
}
```
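Such notifications are typically sent over the IPC backbone via Pupil Remote. As a minimal sketch (the helper name is hypothetical; the payload fields are those shown above), the payload can be built and sanity-checked before sending:

```python
def make_roi_notification(eye_id, min_x, max_x, min_y, max_y):
    """Build the pupil_detector.set_property notification payload
    for changing the detector ROI (helper name is hypothetical)."""
    if not (min_x < max_x and min_y < max_y):
        raise ValueError("ROI bounds must satisfy min < max per axis")
    return {
        "subject": "pupil_detector.set_property",
        "target": f"eye{eye_id}",
        "name": "roi",
        # value: (minX, maxX, minY, maxY)
        "value": [min_x, max_x, min_y, max_y],
    }
```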
HMD-Eyes Streaming - #1501
This new video backend allows you to stream a virtual Unity camera to Pupil Capture.
HMD-Eyes Streaming is an experimental feature for the hmd-eyes project and can only be activated via a network notification.
More news on this topic will follow soon over at the hmd-eyes
project!
Release Updated
Update: Versions `v1.15-65` and higher include multiple fixes for surface tracking. Please update Pupil if you have been using versions `v1.15-4` or `v1.15-5`. A release download for macOS will follow soon.
Update 2019-08-27 10:07: Please find the macOS release attached below.
Update 2019-09-05 12:05: We updated the Windows bundle to include fixes for the HMD Streaming backend.
Pupil Capture, Player, and Service release
We are pleased to announce the release of Pupil v1.14! This update primarily contains bugfixes and optimizations (no new features).
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Bugfixes
- Surface Tracker stability improvements - #1547, #1555
  - Includes re-introduction of the `fixations_on_surface_<name>.csv` export
- Head Pose Tracker stability improvements - #1561, #1562, #1569
- Timeline visualization improvements - #1563, #1565
- Calculate eye movement segment frame indices - #1539
- Work around negative time-jump issue on Windows - #1551
- Correctly disable Stripe Detector for Pupil Cam3 - #1554
- GLFW: Raise with detailed error message - #1559
Developers notes
Changed Requirements
We removed our dependency on boost 🍾.
Pupil Capture, Player, and Service release
We are pleased to announce the release of Pupil v1.13!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Features
Eye Movement Detector - #1512, #1516, #1519
Our new eye movement detector is based on Naive Segmented Linear Regression and classifies fixation, saccade, post-saccadic oscillation, and smooth pursuit movements.
Event identification is based on segmentation that simultaneously de-noises the signal and determines event boundaries. The full gaze position time-series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits and post-saccadic oscillations are derived from human labeling in a data-driven manner.[1]
More details about this approach can be found here.
The open source implementation can be found here.
See this Pupil Helper script to filter eye movements in real time.
Surface Tracker Revamp - #1268, #1515
We have rewritten the surface tracker and included many improvements, e.g.:
- Compensate camera distortion correctly when mapping gaze onto the surface
- Do not block when recomputing surface location cache in post-hoc surface tracker in Pupil Player
- Improve heatmap generation
- General UI improvements
- Improved timeline visualizations
- Option to freeze the image in Capture for easier surface setup and editing
- Use more than a single video frame to define a surface in post-hoc surface tracker in Pupil Player
See the pull request description for a full list.
Please be aware that the names of the exported csv files have changed slightly. See the documentation for the new names.
Caveat: You might need to readjust your surface definitions after upgrading pre-v1.13 recordings. This is due to the internal surface definition having changed and not being fully compatible with previously saved surface definitions.
Pupil Invisible Companion Compatibility Improvements
We are gearing up for a wider release of our new eye tracking glasses Pupil Invisible and are adding some features to Pupil desktop to support it. Pupil Invisible will ship with its own Companion Android app, which enables streaming video over WiFi for monitoring purposes.
Bugfixes
- Improved ui responsiveness - pyglui#105, #1527
- Fix Pupil Mobile intrinsics lookup - #1510
- Fix audio playback in cases where the audio is shorter than the world video - #1514
- Pupil data fields `theta` and `phi` changed to be consistently floats - #1526
Developers notes
Changed Requirements
- Eye movement detector: `nslr` and `nslr-hmm`
  - `pip install git+https://github.com/pupil-labs/nslr`
  - `pip install git+https://github.com/pupil-labs/nslr-hmm`
- `ndsi` v0.5: `pip install git+https://github.com/pupil-labs/pyndsi -U`
- `pyglui` v1.24: `pip install git+https://github.com/pupil-labs/pyglui -U`
API changes
- Eye movement detector: See the format documentation for information.
- New `world_process.adapt_window_size` notification. Request world process to fit window size to most recent world video frame - #1524
Pupil Capture, Player, and Service release
We are pleased to announce the release of Pupil v1.12!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Features
Head Pose Tracking - #1484
The plugin builds a 3d model by detecting markers (apriltag) in your environment. After building the model, you can track the Pupil wearer's head pose within this environment.
See our video tutorial on how to set up and use this feature. Enable captions on the youtube video for the tutorial commentary. Also, check out the documentation (Detailed Data Format section, last paragraph) on what the exported data format looks like.
Caveat: Head Pose Tracking is not yet available on Windows since the apriltag implementation used is incompatible with Windows.
Auto Camera Selection - #1470
We introduced a new button in the Backend Manager: `Start with default devices`
After switching to a different video backend, you can click the `Start with default devices` button. This will automatically select the correct sensor and start capturing for the corresponding world and eye windows.
This feature is particularly useful if you need to change the capture often.
Video Overlay - #1489
The `Video Overlay` plugin enables you to overlay any video with synchronized timestamps.
You can easily record videos with timestamps by using a second instance of Pupil Capture or Pupil Mobile.
Example use case: Use another camera to record a "third person" view of a participant in your eye tracking experiment and overlay this video with the first person world video + gaze visualization.
The `Vis Eye Video Overlay` has been renamed to `Eye Overlay`.
Improvements
- Full support for the new 200Hz HTC Vive Binocular Add-on - #1497
- Consistent export ranges - #1480
- Opencv 4 compatibility - #1495
- Eye video export - Fixed a bug where the data from the incorrect eye was rendered - #1488
- Fix offline gaze mapper crash when using a recorded calibration - #1499
Developer notes
Changed Requirements
- apriltag - The Head Pose Tracker uses the apriltag detector. Specifically, Pupil uses @swatbotics's C library via a modified version of their Python bindings.
API changes
- Plugins can consume UI events - #1481
Pupil Capture, Player, and Service release
We are pleased to announce the release of Pupil v1.11!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Features
Offline Calibration Refactor
Author: @ckbaumann
Pull Requests: #1389, #1400, #1407, #1415, #1434
- Calibration export and import: Export a calibration from one recording and use it to map gaze in another recording. See #1003 for a possible use case.
- Accuracy and precision: Evaluate accuracy and precision for separate gaze mappings.
- Caching of generated gaze data: Offline calibrated gaze is session-persistent and does not need to be recalculated after restarting Pupil Player.
- Streamlined usability: Detect reference locations, calibrate, map gaze, and calculate accuracy with the click of a single button.
Split Recording Support
Author: @Windsooon
Pull Requests: #1417, #1452, #1456
Starting with version `v0.37`, Pupil Mobile creates recordings with multiple video files. This ensures that we do not reach/exceed the file size limit on SD cards and additionally ensures that we can gracefully handle sensor disconnects of the headset without compromising the recording session.
Pupil Player will behave in the same way as it did previously, with the difference that it will fill any gaps within the recording, e.g. during a disconnect, with gray frames. World-less recordings as well as corrupted video files are also handled as gaps.
Developers notes
- Added `debug`-level logs for Pupil Remote requests and responses - #1436
Changed Requirements
- `numpy >= 1.13`
API changes
- Raw Data Export: Changed timestamp column names - #1444
  - `pupil_positions.csv`: `world_timestamp` -> `pupil_timestamp`
  - `gaze_positions.csv`: `world_timestamp` -> `gaze_timestamp`
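If you have scripts that consume raw-data exports from both older and newer recordings, a small compatibility shim can normalize the header row. This is a sketch using the stdlib `csv` module; the function name and the mapping argument are hypothetical:

```python
import csv
import io

def normalize_header(csv_text, renames):
    """Rewrite the header row of a raw-data export, applying the
    given old-name -> new-name mapping (e.g. the v1.11 renames)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows[0] = [renames.get(name, name) for name in rows[0]]
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()
```

For example, `normalize_header(old_text, {"world_timestamp": "pupil_timestamp"})` would bring a pre-v1.11 `pupil_positions.csv` in line with the new column names.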
We are hiring Python developers!
Hey - you're reading the developer notes, so this is for you! We're looking to hire developers to contribute to Pupil source code. If you love Python and enjoy writing code that is a joy to read, get in touch. Experience with the scientific Python stack is a plus, but not required. We have a lot of exciting projects in the pipeline.
We are also looking for full stack Python developers that have experience with kubernetes, docker, and async server-side Python.
Send an email to [email protected] with a CV to start a discussion. We look forward to hearing from you.
Pupil Capture, Player, and Service release
We are pleased to announce the release of Pupil v1.10!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Features
Capture
RealSense D400 Support - #1409
Please be aware that it is recommended to keep your D400's firmware up-to-date. See the Intel documentation on how to do that.
Intel does not currently provide a Python wrapper for macOS. Until they do, we are not able to support the Pupil Capture D400 backend on macOS.
Player
Export timestamps as csv on video export
- Timestamps will now be exported as a csv when you export a video - #1411
Command-line arguments - #1384
Previously, only simple arguments could be passed to Pupil via command-line. This included the `debug` and `profiled` modes. Motivated by #1315, we decided to start parsing command-line arguments properly.
See the pull request description for details on the available command-line arguments.
Pupil Remote port command-line argument
Using the `--port PORT` argument, it is now possible to set the Pupil Remote port for Capture and Service.
Bugfixes
- Fake Backend: Fix playback after seeking - #1396
- Fixation Detector: Fix export format - #1395
- Improved compatibility with macOS 10.14 Mojave - #1381
Developers notes
New dependencies
- Optional, required for RealSense D400 backend:
pyrealsense2
API changes
-
Pupil Detector Network API, see PR for details - #1395
-
Pupil Remote: Forward IPC/multipart messages - #1385
Pupil Capture, Player, and Service release
We are pleased to announce the release of Pupil v1.9!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Features
Capture
Remote Recorder - #1194
The `Remote Recorder` plugin no longer requires Pupil Mobile streams to be active in Capture. Instead, it lists all available devices and allows you to start and stop recordings with the click of a button. It is also possible to change the recording session names for all available devices at the same time.
This feature requires Pupil Mobile version `0.25.1` or higher.
Annotation Plugin
Annotations are no longer special types of notifications. Annotations are now sent via the IPC under the topic `annotation`. Recorded annotations are now stored in `annotation.pldata`.
Pupil Detectors - #1358
Reduced the default "maximum pupil size" setting in order to decrease false-positive detections.
Time Sync Group Members - #1277
All `Time Sync` actors (e.g. Capture or Pupil Mobile) join a network group for discovery and time sync announcements. We added the functionality of listing all present group members in the `Time Sync` plugin menu.
Be aware that all Pupil Mobile instances will be listed as `pupil-mobile-follower`.
Player
Buffered Playback - #1279
We added buffering to Pupil Player. Instead of decoding each frame on demand, Player decodes multiple frames ahead of time. This improves smooth playback, even at high playback speeds of h264 encoded videos.
Be aware that this will require a bit more memory as a trade-off.
Video Exporters
We improved the different video export plugins.
- There is a new `Eye Video Exporter` plugin (#1301). This plugin will export eye videos for your recording.
- `Video Export Launcher` was renamed to `World Video Exporter` to make its task clearer. By default, the exported video is no longer named `world_viz.mp4` but `world.mp4` (#1295).
- We implemented a new export management UI that is now used in `World Video Exporter`, `Eye Video Exporter`, and `iMotions Exporter`. You can add as many export tasks as needed and they will all be processed in order (#1322).
Raw Data Exporter
@fneitzel added the possibility to individually turn off pupil and gaze exports in the `Raw Data Exporter` (#1239).
The exported `timestamp`, `index`, and `id` csv columns have been renamed to more explicit titles (#1352).
iMotions Exporter: Support for world-less recordings - #1308
We added support for world-less recordings to the iMotions Exporter. World-less recordings do not include a scene (world) video and are common for AR/VR setups.
Annotation Plugin
After opening a recording in Player, all annotations (recorded or added in Player) are stored in `annotation_player.pldata` instead of the `offline data` directory. This does not overwrite `annotation.pldata`, so you can reset annotations to the recorded ones by deleting `annotation_player.pldata`.
Recordings created by former versions of Pupil will be updated to the new format.
Bugfixes
- Surface edit bug on Macs with Retina displays - #1252
- Crash when recording while Frame Publisher is enabled - #1263
- Synchronization Drift between Cameras and Recording Computer - #1266
- Incorrect `Frame Publisher` world frame topic - #1276
- `Fixation Detector` bug where binocular 3d gaze data was ignored - #1286
- Race condition during Player exports - #1304
- Incorrect log message handling in background tasks - #1305
- Timebase changes break fixation and blink detection in Capture - #1324
- Use of deprecated fields in `Blink Detection` - #1283
Developers notes
New dependencies
- pyav 0.4.2
- cysignal (on macOS and Linux only)
`black` format
The entire code base has been changed to the `black` format (#1343, #1344, #1346).
API changes
- `seek_control.trim_indeces_changed` notification - #1329
- `set_min_calibration_confidence` notification - #1361
- Fixed field consistency for monocularly mapped gaze - #1291
Pupil Capture, Player, and Service release
We are pleased to announce the release of Pupil v1.8!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
We have been working hard to significantly reduce the memory usage of Pupil Capture and Player. See incremental data serialization and deferred deserialization for details.
Features
Capture
Auto-exposure mode -- #1210
We have added an auto-exposure mode for the 200Hz Pupil cameras. You can enable it in the UVC Source
menu of the eye windows.
Incremental data serialization -- #1141
Prior to this release, data was cached in memory during recordings and written to disk after the recording had finished. This resulted in large memory consumption during recording.
Starting in `v1.8`, Pupil Capture will store data directly to disk as it becomes available during the recording. This reduces the memory footprint and improves the reliability of Pupil Capture. See the New Recording Format section in our documentation.
Automatic recording stop on low disk space
The recorder will show a warning if less than 5GB of disk space is available to the user. Recordings will be stopped gracefully as soon as less than 1GB of disk space is available.
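The thresholds can be sketched like this (illustrative only; the function name and return values are hypothetical, and a real free-space check would use e.g. `shutil.disk_usage`):

```python
GB = 1024 ** 3

def disk_space_action(free_bytes):
    """Map free disk space to the recorder behaviour described above:
    warn below 5 GB free, stop the recording gracefully below 1 GB."""
    if free_bytes < 1 * GB:
        return "stop_recording"
    if free_bytes < 5 * GB:
        return "warn"
    return "ok"
```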
Fingertip Calibration -- #1218
We introduced a proof-of-concept fingertip calibration with Pupil Capture v1.6
. It was based on traditional computer vision approaches and the fingertip detection was not very stable.
Now, we are releasing a revised version that uses convolutional neural networks (CNNs) for hand and fingertip detection. For details, check out our documentation.
Note - The current bundle supports CPU inference only. If you install from source and have an NVIDIA GPU with CUDA 9.0 drivers installed, you can install pytorch and our fingertip detector will use your GPU!
Player
Deferred Deserialization
Before `v1.8`, opening a recording in Pupil Player would read the entire `pupil_data` file and deserialize the data into Python objects. This enabled fast processing of the data but also used excessive amounts of memory, which led to software instabilities for users trying to process long recordings.
Starting with `v1.8`, Pupil Player only deserializes data if required. This reduces memory consumption dramatically and improves software stability.
Be aware that the initial upgrade of recordings to the new format can take some time. Please be patient while the recording is converted.
Temporarily disabled features
We had to disable the following features due to changes in how we handle data within Pupil Player:
- `Vis Scan Path`
- Manual gaze correction for `Gaze From Recording`
We are working on a solution and will hopefully be able to re-enable these in the next release.
Bugfixes
- Fixed a bug where Player crashed if `info.csv` included non-ASCII characters -- #1224
- Correctly reset the last known reference location after stopping the manual marker calibration in Capture -- #1206
Developers notes
New dependencies
We have added PyTorch to our dependencies. If you want to make use of GPU acceleration you will have to run Pupil Capture from source and install the GPU version of PyTorch. We will work on bundling GPU supported versions in the future.
New recording format
We had to make changes to our recording format in order to make the incremental serialization and deferred deserialization features possible. Please, see our documentation for more details on the New Recording Format.
API changes
`zmq_tools.Msg_Streamer.send()` has been reworked. The previous version required two arguments: `topic` and `payload`. The new version only accepts the `payload` argument and expects the payload to have a `topic` field instead.
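A minimal stub illustrating the change in calling convention (this stand-in is not the real `zmq_tools.Msg_Streamer`, which sends over ZeroMQ; it only records what would be sent):

```python
class MsgStreamerStub:
    """Stand-in for zmq_tools.Msg_Streamer to show the new send() API."""

    def __init__(self):
        self.sent = []

    def send(self, payload):
        # New API: the topic is read from the payload itself instead of
        # being passed as a separate first argument.
        self.sent.append((payload["topic"], payload))

streamer = MsgStreamerStub()
# old (pre-v1.8): streamer.send("gaze", {"timestamp": 1.0})
# new (v1.8+): the payload carries its own topic field
streamer.send({"topic": "gaze", "timestamp": 1.0})
```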
Real-time fixation format changes -- #1231
Online fixations are published at a high frequency. Each fixation has a `base_data` field that includes the gaze data related to this fixation. In turn, each gaze datum has a `base_data` field of its own including pupil data. As a result, recordings grew unreasonably fast in size if the `Online Fixation Detector` was enabled. E.g. for an eleven-minute recording, the `pupil_data` file grew to 1.4GB, of which 1.1GB were fixations alone.
As a consequence, we are replacing each gaze datum in the `base_data` field of online fixations with `(topic, timestamp)` tuples. These uniquely identify their corresponding gaze datum within the gaze data stream.
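To resolve these tuples back to full gaze datums, you can index the gaze stream by `(topic, timestamp)`. A sketch (the lookup structure and function name are assumptions, not Pupil API):

```python
def resolve_fixation_base_data(fixation, gaze_data):
    """Look up the gaze datums referenced by a fixation's base_data,
    given the recorded gaze stream as a list of dicts."""
    by_key = {(g["topic"], g["timestamp"]): g for g in gaze_data}
    return [by_key[tuple(ref)] for ref in fixation["base_data"]]
```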
We are hiring developers!
Hey - you're reading the developer notes, so this is for you!
Pupil Core Contributors - We're hiring developers to contribute to Pupil source code. If you love Python and enjoy writing code that is a joy to read, get in touch. Experience with the scientific Python stack is a plus, but not required. We have a lot of exciting projects in the pipeline.
Full Stack Developers - We are also looking for full stack developers that have experience with one or more of the following tools/platforms: kubernetes, docker, vue.js, and flask.
Send an email to [email protected]
with a CV to start a discussion. We look forward to hearing from you.
Pupil Capture, Player, and Service release
Overview
We are pleased to announce the release of Pupil `v1.7`!
Download the latest bundle and let us know what you think via the #pupil channel 😄
Features
- Adjustable audio playback volume - #1131
- Audio wave visualization - #1131
- Online Surface Tracker: Map and publish fixations - 4bb1aeb
- 2d and 3d detector improvements - #1168
- Capture opens both eye windows by default - e2d9576