Offline maps stored in IndexedDB and 2GB blob limit in Chrome #100
Replies: 2 comments
-
Is this a 100% browser-based application (not using Cordova, Electron, etc.)? If you can guarantee that your new archive is under the 2GB limit, storing the entire archive as a single row in IndexedDB seems like the best choice, but you would need to write a new header/directory dynamically; no implementation exists for that yet. If you mean storing a single archive across multiple sub-2GB chunks, I don't have enough insight into how IndexedDB works to comment on possible ways to do that.
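For illustration, a minimal sketch of that single-row approach (the Dexie database, table, and field names here are hypothetical, not from any existing implementation):

```js
import Dexie from "dexie";

const db = new Dexie("offline-maps"); // hypothetical database name
db.version(1).stores({
  archives: "fileName" // one row per archive, keyed by file name
});

// Store the whole archive as a single row (must stay under the 2GB limit)
async function saveArchive(file) {
  await db.archives.put({ fileName: file.name, blob: file });
}

// Blob.slice() returns a view without copying the rest of the blob,
// so an arbitrary byte range can be read back without touching the rest
async function readRange(fileName, start, end) {
  const row = await db.archives.get(fileName);
  return row.blob.slice(start, end).arrayBuffer();
}
```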
-
Yes, it's a 100% browser app; I have only Web APIs to work with. The File System Access API is not ideal because it asks for file/directory permissions every time the app is launched (efforts to change this have met with a glacial response from the Chromium team, or rather a lack of one). Currently I'm doing just what you wrote: storing PMTiles under 2GB as single blobs, one per row. For reference, here is the rough code to divide maps into chunks, in case anyone stumbles upon this issue:

```js
// based on https://javascript.info/fetch-progress
async function upload(file) { // File object: https://developer.mozilla.org/en-US/docs/Web/API/File
  const reader = file.stream().getReader();
  let receivedLength = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) {
      console.log("file uploaded:", file.name);
      break;
    }
    // Dexie wrapper over IndexedDB; await the write so chunks are not
    // queued faster than IndexedDB can persist them
    await db.chunks.add({
      fileName: file.name,
      chunkPosition: receivedLength, // start offset of this chunk in the original file
      chunk: value
    });
    receivedLength += value.length;
  }
}
```
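A hypothetical usage, wiring it up to a file input:

```js
// Hypothetical wiring: chunk whichever file the user picks
document.querySelector('input[type="file"]')
  .addEventListener("change", (event) => upload(event.target.files[0]));
```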
-
I'm building a PWA where I want to store PMTiles inside IndexedDB for offline use. Unfortunately, Chrome has a 2GB limit per Blob/ArrayBuffer, which prevents any large map from being loaded into memory.
One popular workaround is to divide files into chunks, each under 2GB (100MB by default, if I recall correctly).
My idea is to serve only the part of the PMTiles file that the library requests, based on the byte range: merge the corresponding chunks (each chunk holds its byte range in the original file) and return one properly sliced Blob as a URL to query. I assume the byte ranges would have to be modified (translated) to this smaller blob (for example, instead of starting at the X-th byte of the big file, start at the first byte of the small file).
I'm stuck on translating byte ranges to the new file. Does anyone have tips on how to approach this?
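For concreteness, a rough sketch of the translation step (assuming chunks are stored with their start offset in chunkPosition, as in the chunking code earlier in this thread, and that fileName is an indexed field in the Dexie schema):

```js
// Sketch: rebuild the requested [start, end) byte range of the original
// file from its stored chunks. Assumes chunkPosition is the chunk's start
// offset in the original file and fileName is indexed in the Dexie schema.
async function readRange(fileName, start, end) {
  const rows = await db.chunks
    .where("fileName").equals(fileName)
    .sortBy("chunkPosition");

  const parts = [];
  for (const { chunkPosition, chunk } of rows) {
    const chunkEnd = chunkPosition + chunk.byteLength;
    if (chunkEnd <= start || chunkPosition >= end) continue; // no overlap
    // Translate global file offsets into this chunk's local coordinates
    parts.push(chunk.subarray(
      Math.max(start, chunkPosition) - chunkPosition,
      Math.min(end, chunkEnd) - chunkPosition
    ));
  }
  // The returned Blob starts at byte 0, so the caller's range is now 0-based
  return new Blob(parts);
}
```

Note this sketch pulls every row for the file into memory before filtering, which partly defeats the purpose; in practice you would query only the overlapping chunks, e.g. via a compound index such as [fileName+chunkPosition], so large maps never get fully materialized.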