This repository has been archived by the owner on Nov 4, 2020. It is now read-only.

Performance issues with 3000+ files #14

Open
ZetaTwo opened this issue Feb 3, 2019 · 1 comment

ZetaTwo commented Feb 3, 2019

Faxes started to fail due to timeouts. I have about 3000 sent faxes totalling ~700 MB.

> db.faxFile.stats()
{
	"ns" : "vapor.faxFile-backup1",
	"size" : 745257940,
	"count" : 3050,
	"avgObjSize" : 244346,
	"storageSize" : 728104960,
	...
}

Moving all the old files aside by running

> db.faxFile.renameCollection("faxFile_backup1")

got rid of the issue. In my opinion, 3000 files totalling ~700 MB isn't that much data, so it feels like there is some kind of inefficiency in the code. Unfortunately, I haven't had the time to look closer to see what the problem could be or how it could be improved, but I'm posting this here in the meanwhile as a starting point for further investigation.


ZetaTwo commented Feb 20, 2019

What's missing is an index on the faxFile collection. For example, running the following in the Mongo console completely solves the performance issue:

db.faxFile.createIndex({ "faxObjectId": 1 }, { "unique": true })

I don't know much about MongoKitten, but reading their docs suggests that placing the following code somewhere appropriate would create the relevant index:

try FaxFile.collection.createIndex(named: "faxObjectId", withParameters: .unique)

However, I know too little about the codebase to know where it should go.
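To illustrate why the missing index causes timeouts as the collection grows, here is a small standalone JavaScript sketch (plain Node.js, not MongoKitten or the actual driver code) comparing a full collection scan to an indexed lookup; the field name `faxObjectId` is taken from the commands above, everything else is illustrative:

```javascript
// Without an index, MongoDB must examine every document to find a fax by
// faxObjectId (O(n) per query). A unique index turns that into a B-tree
// lookup (O(log n)). This simulates the difference with an array scan
// versus a Map (hash) lookup.

const docs = [];
for (let i = 0; i < 3000; i++) {
  docs.push({ faxObjectId: `fax-${i}`, payload: "..." });
}

// Collection scan: checks documents one by one until a match is found.
function scanLookup(id) {
  return docs.find((d) => d.faxObjectId === id);
}

// Index: a single lookup, independent of collection size.
const byId = new Map(docs.map((d) => [d.faxObjectId, d]));
function indexedLookup(id) {
  return byId.get(id);
}

console.log(scanLookup("fax-2999").faxObjectId);    // worst case: 3000 comparisons
console.log(indexedLookup("fax-2999").faxObjectId); // one hash lookup
```

The scan cost also explains why the problem only surfaced at 3000+ files: each query's work grows linearly with the collection, until it eventually exceeds the request timeout.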
