
Production cleanup tasks. #105

Open
stevenvandervalk opened this issue Jul 16, 2015 · 7 comments
@stevenvandervalk

  • Linting of JS
  • Cleanup of CSS, HTML
  • Benchmarking of serve times and caching changes?

Previously tried the first, but it broke the JS on qa-deploy, so we need to test more systematically.

@stevenvandervalk stevenvandervalk self-assigned this Jul 16, 2015
@stevenvandervalk

  1. and 2. dealt with: committed and passing local tests in https://github.com/vecnet/dl-discovery/tree/qa-deploy-rollback
  2. The Rails caching guide (http://guides.rubyonrails.org/caching_with_rails.html), among others, suggests shifting to ActiveSupport::Cache::MemoryStore if we have the extra RAM to spare to deal with slow load times. But perhaps vecnet production has some other cache in place, if this is even an issue in the first place!

@dbrower if we end up having a chat during the week, you can say if this is even an issue.
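For reference, switching the store is a one-line config change. A hedged sketch; the store choice and size cap below are illustrative assumptions, not what vecnet production currently runs:

```ruby
# config/environments/production.rb -- sketch only; the 64 MB cap is an
# assumed value for illustration, not a measured figure for vecnet.
config.cache_store = :memory_store, { size: 64.megabytes }
```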

@dbrower commented Jul 20, 2015

I'm all for adding caching. I think either a file store, which I believe is the default, or a Redis store is better than the memory store. We are running Redis, so that would be easy to get going.

First, though, is whether we are even caching any partials. Also, I know the original backend is very slow. In which places is the discovery interface slow? For the most part it is getting its data from the database and from Solr, both of which are pretty fast.

@stevenvandervalk

Yep, file store is the default. Cool, I think Redis is a good option so long as the network latency is low, so we need to check the benchmarking.

We're currently not caching any partials, but I think we should consider the common elements, because not much else really changes. I could ask the Blacklight/geo folks what they are thinking in that regard as well? I don't think anything is that slow personally, but it's always worth checking just in case?
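To make the "cache the common elements" idea concrete, here is a minimal, self-contained sketch of the fetch-on-miss semantics that Rails fragment caching (via `Rails.cache.fetch`) provides, using a plain Hash in place of a real store. `TinyCache` and the footer string are illustrative, not part of the app:

```ruby
# Minimal sketch of Rails.cache.fetch semantics using a plain Hash.
# In the real app the backing store would be Redis or MemoryStore.
class TinyCache
  def initialize
    @store = {}
  end

  # Return the cached value for +key+, computing and storing it on a miss.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = TinyCache.new
calls = 0
render = -> { calls += 1; "<footer>vecnet</footer>" }

cache.fetch("views/footer") { render.call } # miss: the block runs once
cache.fetch("views/footer") { render.call } # hit: served from the cache
```

After both calls, the expensive "render" has only run once; that is the whole win for common partials that rarely change.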

@stevenvandervalk

Also, is your nginx handling compression of some kind? I don't think this app has Rack::Deflater on.
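If nginx turns out not to be compressing, enabling Rack::Deflater in the app is a one-liner. A sketch, assuming a standard Rails config layout:

```ruby
# config/application.rb -- gzip responses at the Rack layer; only worth
# enabling if nginx is confirmed not to be compressing already.
config.middleware.use Rack::Deflater
```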

@dbrower commented Jul 21, 2015

I think redis is on the same host as the ruby code, so there is not really any network latency.

Also, I think I tried to set up compression on nginx, but I don't know if it is working.

We could also cache pages in nginx, which is turned off at the moment. Or at least cache public pages. But since a fair number of results are private or what not, I think adding fragment caching to the application would also be a good idea, and that would be my preference between nginx caching and rails caching.

@stevenvandervalk

OK, let's set up Redis and fragment caching on the common partials? We'll need to record the current serving times from qa-deploy to compare against, I guess. Sent an email to arrange a time to chat and kick-start it, etc.
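For recording serve times to compare against, Ruby's stdlib Benchmark is enough. A sketch: the request here is a stand-in block, and the sample count is a placeholder, since against the real app it would be an HTTP GET to qa-deploy:

```ruby
require 'benchmark'

# Record wall-clock times for repeated "requests" so before/after caching
# numbers can be compared. The inner block is a stand-in for something like
# Net::HTTP.get(URI("https://qa-deploy.example/catalog")) -- placeholder URL.
def time_request(samples: 5)
  times = samples.times.map do
    Benchmark.realtime do
      10_000.times { |i| i * i } # stand-in workload
    end
  end
  { min: times.min, max: times.max, avg: times.sum / times.size }
end

stats = time_request
puts format("min=%.4fs avg=%.4fs max=%.4fs", stats[:min], stats[:avg], stats[:max])
```

Running this before and after the caching change gives a like-for-like comparison, min/avg/max being less noisy than a single sample.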

@stevenvandervalk

https://github.com/redis-store/redis-rails seems to be the current preference.
Also curious about HTTP caching; I've never tried it before with Redis.
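With redis-rails, wiring the store up would look roughly like this, per that gem's README. The URL, database index, and expiry below are placeholder assumptions, not the actual vecnet settings:

```ruby
# config/environments/production.rb -- sketch using the redis-rails gem;
# "redis://localhost:6379/0/cache" and the 1-hour expiry are placeholders.
config.cache_store = :redis_store, "redis://localhost:6379/0/cache", { expires_in: 1.hour }
```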
