Full-search implementation concerns
First of all, full search is awesome. Really cool. But let me criticize a bit.
- Why do we need search-stopwords.json? lunr already contains built-in stop words for English, but you remove the default stopWordFilter, load a separate stop-word file, and generate a filter from it. Why? That search-stopwords.json contains the same stop words as the default built-in filter! Moreover, the lunr language addons (from https://github.com/MihaiValentin/lunr-languages) ship their own stop words. (See the short illustration right after this list.)
- Why not build the index at build time? Why instead do you load JSON at run time and then add items one by one into the index? It can be done (and usually is done) at build time; then at run time we can just load an index file (see the build-time sketch further below):
$.getJSON("index.json", function (data) { engine = lunr.Index.load(data); })
That's all. I understand that you enrich search results with title and keywords, which are absent from lunr.search's results, but that can be done via an additional index file.
- No i18n
The index should be built with other languages in mind. lunr natively supports only English; for additional languages we need to add the addons from https://github.com/MihaiValentin/lunr-languages:
At build time:
var lunr = require('lunr');
require('./lunr.stemmer.support.js')(lunr);
require('./lunr.ru.js')(lunr);
require('./lunr.multi.js')(lunr);
var lunrIdx = lunr(function () {
  this.use(lunr.multiLanguage('en', 'ru'));
  // config ref/fields
});
At run time:
lunr.multiLanguage('en', 'ru');
engine = lunr.Index.load(data);
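Coming back to the first point, here is a minimal sketch of how the built-in stop words could be used directly, assuming lunr 2.x (which exposes lunr.generateStopWordFilter); the extra words and the ref/field names below are hypothetical:

var lunr = require('lunr');

// The default pipeline already contains lunr.stopWordFilter (the built-in
// English stop words), so no separate search-stopwords.json is required.
// If extra words are really needed, a filter can be generated from an array:
var extraStopWords = lunr.generateStopWordFilter(['foo', 'bar']);
lunr.Pipeline.registerFunction(extraStopWords, 'extraStopWords');

var idx = lunr(function () {
  this.ref('href');
  this.field('title');
  // Keep the default English filter and add the extra one after it.
  this.pipeline.after(lunr.stopWordFilter, extraStopWords);
});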
I can create a template to customize the index building, but I think it should be possible without template customization. Also please see #650 - there are problems with the encoding of keywords extracted for indexing.
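For reference, the build-time step suggested above could look roughly like this - only a sketch, assuming lunr 2.x and a Node build script; the file names (search-source.json, index.json, search-meta.json) and the ref/fields are hypothetical, not what the current template uses:

var fs = require('fs');
var lunr = require('lunr');
// lunr-languages addons, as in the snippet above
require('./lunr.stemmer.support.js')(lunr);
require('./lunr.ru.js')(lunr);
require('./lunr.multi.js')(lunr);

// Documents extracted at build time, keyed by href (hypothetical shape).
var docs = JSON.parse(fs.readFileSync('search-source.json', 'utf8'));

var idx = lunr(function () {
  this.use(lunr.multiLanguage('en', 'ru')); // each addon ships its own stop words
  this.ref('href');
  this.field('title');
  this.field('keywords');
  Object.keys(docs).forEach(function (key) {
    this.add(docs[key]);
  }, this);
});

// The browser then only needs lunr.Index.load on this file.
fs.writeFileSync('index.json', JSON.stringify(idx));
// Titles/keywords for displaying results can live in a small companion file.
fs.writeFileSync('search-meta.json', JSON.stringify(docs));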
Issue Analytics
- State:
- Created: 7 years ago
- Reactions: 3
- Comments: 12 (2 by maintainers)
The performance of this runtime index processing isn't great. The doc site I have has an index.json file of about 8.5 MB, which means search isn't available for a minute or two while it's being processed. It's mentioned above that it might be possible to do this processing at build time instead of at runtime in the browser. If that is possible, does anyone know how I can achieve that?
Checking back in on build-time indexes. Our search takes over a minute for the index to be built on most desktops. It looks like search is just broken, because users would give up before the index is created.
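To make the build-time question in the comments above concrete, here is a rough sketch of the browser side, assuming the index was serialized at build time as suggested in the issue, and that a hypothetical search-meta.json holds the titles/keywords used for display (if the index was built multilingual, the lunr-languages scripts still have to be loaded first, as shown in the issue):

var engine, meta;

$.getJSON('index.json', function (serialized) {
  engine = lunr.Index.load(serialized); // no per-document add() in the browser
});

$.getJSON('search-meta.json', function (data) {
  meta = data; // titles/keywords for rendering results
});

function search(query) {
  // lunr returns [{ ref: ..., score: ... }]; display fields come from meta.
  return engine.search(query).map(function (hit) {
    return meta[hit.ref];
  });
}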