When using a shared database, the search index should be written into that database as well!
Currently, JabRef writes entries into the shared database but keeps the search index on local disk.
Instead, JabRef should write both the entries and the search index to the shared database.
Using a (possibly quite large) shared database means thousands of entries. If the index is kept locally, every client of that shared database needs to generate its own search index, which is not very resource-efficient.
Please vote if you support this proposal:
- I would like to have this feature, too!
- I don’t care.
This is above my skill level. My rudimentary thoughts:
Search index exists on the clients → there is a change to the database → the change is transferred from the client to the server and the other clients → the search index on each client is recreated → bottlenecks: computing power of the clients (high?) + traffic (negligible?) + computing power of the server (negligible?)
Search index exists on the server → there is a change to the database → the change is transferred from the client to the server and the other clients → the search index on the server is recreated → the clients then have to download these index changes OR download the results of the search. → New bottlenecks: computing power of the server (high?) + traffic (negligible?) + computing power of the clients (negligible?)
I am sure it is not that easy, and maybe what I wrote here is wrong. Please correct me if I am mistaken. I repeat, this is above my head, but if you think this feature would bring some performance improvements, I am all for it.
This is a bit more complex, and especially hard if many clients can read and write. As a starting point, a simpler solution would be to copy the index from machine A to machine B once the majority of entries is indexed. That reduces the overhead.
Then JabRef only needs to index new files that arrive after the copy.
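To make that copy step concrete, here is a minimal sketch, assuming the local index is a plain directory of files on disk (as a Lucene index is) that can be copied to a location the other machine can read. The class and all paths are hypothetical, not JabRef's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: recursively copy a locally built search index directory to
// another location (e.g. a network share that machine B can access).
public class IndexCopy {
    static void copyIndex(Path source, Path target) throws IOException {
        try (var paths = Files.walk(source)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                // Mirror the relative layout of the source tree under target.
                Path dest = target.resolve(source.relativize(p).toString());
                if (Files.isDirectory(p)) {
                    Files.createDirectories(dest);
                } else {
                    Files.copy(p, dest, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```

Machine B would then open the copied directory as its index and only index entries added after the copy.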
Here are some more explanations:
So you would need something like a connector for Apache Solr in JabRef.
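To illustrate what such a connector could look like: Solr exposes search over a plain HTTP API (the `/select` request handler), so a minimal client only needs to build a query URL and fetch the JSON response. The class name, host, and core name (`jabref-index`) below are all hypothetical; this is a sketch under those assumptions, not a proposed JabRef implementation:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sketch of a minimal Solr "connector": build a query URL for Solr's
// /select handler and fetch the JSON result from a running server.
public class SolrConnector {
    private final String baseUrl; // e.g. "http://localhost:8983/solr/jabref-index"

    public SolrConnector(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Build the select URL for a full-text query; needs no running server.
    public String buildSelectUrl(String query) {
        String q = URLEncoder.encode(query, StandardCharsets.UTF_8);
        return baseUrl + "/select?q=" + q + "&wt=json";
    }

    // Execute the query against a running Solr instance and return raw JSON.
    public String search(String query) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(buildSelectUrl(query))).build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }
}
```

With server-side indexing along these lines, clients would send queries and download only the results, matching the second scenario sketched above.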