Releases · VikParuchuri/marker
Significant speedup
This release brings a 15% speedup on GPU, 3x on CPU, and 7x on MPS. The speedup comes from new surya models for layout and text detection that are much more efficient.
This is a "best case" speedup; if you need to run OCR or equation recognition, the speedup will be smaller, but it will still be a lot faster.
Fix transformers bugs
- The new transformers version introduces a new kwarg in donut models. Handle this case by ignoring it.
- The new transformers version breaks MPS compatibility by using torch.isin for a comparison. Handle this by enabling the PyTorch MPS fallback (see the sketch below).
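A minimal sketch of the workaround, assuming the release refers to PyTorch's PYTORCH_ENABLE_MPS_FALLBACK environment variable; the variable has to be set before torch is imported:

```python
import os

# The fallback must be enabled before torch is imported so that ops without
# an MPS kernel (such as torch.isin on affected versions) fall back to CPU
# instead of raising an error.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# torch.isin on MPS tensors; with the fallback enabled, the op runs on CPU
# transparently if the MPS kernel is unavailable.
device = "mps" if torch.backends.mps.is_available() else "cpu"
a = torch.tensor([1, 2, 3], device=device)
b = torch.tensor([2, 4], device=device)
print(torch.isin(a, b))
```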
Pagination, bug fixes
- Add a setting to enable output pagination (see the sketch after this list)
- Enable convert.py to use MPS (though it is less memory efficient than CPU/CUDA)
- Fix a bug with the inference RAM setting
- Fix a bug with PDF filenames that contain dots
- Fix a bug with images at the end of blocks
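A hedged sketch of what the pagination toggle could look like; the setting name (PAGINATE_OUTPUT) and the page-break marker are illustrative assumptions, not confirmed by the release notes:

```python
import os

# Assumed environment-driven setting name; only the existence of a
# pagination toggle comes from the release notes.
os.environ["PAGINATE_OUTPUT"] = "true"

def paginate(pages_markdown, separator="\n\n{page_break}\n\n"):
    """Join per-page markdown with an explicit page-break marker so page
    boundaries survive in the output (illustrative only)."""
    return separator.join(pages_markdown)

print(paginate(["# Page 1 text", "# Page 2 text"]))
```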
Fix convert.py bug
Fix the model device check.
Specify page range
- Make it clearer that MPS can't be used with convert.py
- Allow specifying a page range in convert with start_page and max_pages (see the sketch below)
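A small illustrative sketch of the page-range logic; only the parameter names start_page and max_pages come from the release, the helper itself is hypothetical:

```python
def select_pages(num_pages, start_page=None, max_pages=None):
    """Return the page indices to convert given the range settings."""
    start = start_page or 0
    end = num_pages if max_pages is None else min(num_pages, start + max_pages)
    return list(range(start, end))

# Convert pages 5..14 of a 100-page document.
print(select_pages(100, start_page=5, max_pages=10))  # [5, 6, ..., 14]
```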
Python 3.12 compatibility
- Remove ray to enable Python 3.12 compatibility
- Removing ray frees a lot of VRAM (since we can use torch shared tensors), so on average each convert.py process now takes 3GB of VRAM instead of the previous 4.5GB-5GB. This enables much higher throughput (see the sketch below).
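A rough sketch of how the lower per-process VRAM translates into more parallel workers; the setting names below are assumptions for illustration, while the ~3GB-per-process figure is from the notes:

```python
INFERENCE_RAM = 16  # total GPU memory budget in GB (assumed setting name)
VRAM_PER_TASK = 3   # approx. VRAM per convert.py process after this release

# One worker per ~3GB of budget, with at least one worker.
workers = max(1, INFERENCE_RAM // VRAM_PER_TASK)
print(f"Running {workers} parallel conversion workers")  # 5 with the values above
```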
OCR speedups
- Pull in new surya and pdftext versions for speedups in OCR and text extraction, respectively
- Refine heuristics to reduce OCR false positives (at the cost of some true positives, unfortunately)
- Enable float batch multipliers (see the sketch below)
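A minimal sketch of a float batch multiplier, i.e. scaling a model's default batch size by a fractional factor; the function and defaults are illustrative:

```python
def scaled_batch_size(default_batch_size, batch_multiplier=1.0):
    """Scale the default batch size by a (possibly fractional) multiplier."""
    return max(1, int(default_batch_size * batch_multiplier))

print(scaled_batch_size(32, 0.5))   # 16 - useful on lower-VRAM GPUs
print(scaled_batch_size(32, 1.5))   # 48 - useful on higher-VRAM GPUs
```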
Speed improvements
- Enable parallel text extraction, with worker count settings (see the sketch below)
- Bump the surya version to pull in layout/line segmentation speed improvements and an OCR bug fix
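A hedged sketch of parallel text extraction with a configurable worker count; the setting name and the per-page helper are hypothetical, only the idea of parallel extraction with worker settings is from the release:

```python
from concurrent.futures import ProcessPoolExecutor

PDFTEXT_WORKERS = 4  # assumed name for the worker-count setting

def extract_page_text(page_number):
    """Placeholder for per-page text extraction (e.g. via pdftext)."""
    return f"text of page {page_number}"

def extract_all(num_pages, workers=PDFTEXT_WORKERS):
    # Extract pages in parallel across a pool of worker processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract_page_text, range(num_pages)))

if __name__ == "__main__":
    pages = extract_all(8)
    print(len(pages), "pages extracted")
```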
Faster OCR
- OCR is now ~2.5x faster, due to improvements in surya
Speed up inference
- Faster OCR, line detection, and layout inference (from surya)
- Unpin the transformers version after testing
It should be significantly faster now, but I haven't fully benchmarked it since I'm running low on time this week!