Commit 90d0f63

Authored by mridula-s109, with co-authors Copilot, tlrx, martijnvg, and breskeby
[8.19] Add L2 norm normalization support to linear retriever (#128972)
* Add l2_norm normalization support to linear retriever (#128504)
  * New L2 normalizer added
  * L2 score normalizer is registered
  * Test case added to the YAML suite
  * Documentation added
  * Resolved checkstyle issues
  * Update docs/changelog/128504.yaml
  * Update docs/reference/elasticsearch/rest-apis/retrievers.md (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
  * Score 0 test case added to check for corner cases
  * Edited the markdown doc description
  * Pruned the comment
  * Renamed the variable
  * Added comment to the class
  * Unit tests added
  * Spotless and checkstyle fixed
  * Fixed build failure
  * Fixed the forbidden test

  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Clarify Javadoc for L2ScoreNormalizer (l2_norm) (#128808)
  * Propagating retrievers to inner retrievers
  * Javadoc fixed
  * Cleaned up
  * Update docs/changelog/128808.yaml
  * Enhanced comment as suggested by Copilot
  * Delete docs/changelog/128808.yaml

* Add Cluster Feature for L2 Norm (#129181)
  * Propagating retrievers to inner retrievers
  * Test feature taken care of

* Small changes in concurrent multipart upload interfaces (#128977)

  Small changes in the BlobContainer interface and wrapper. Relates ES-11815

* Unmute FollowingEngineTests#testProcessOnceOnPrimary() test (#129054)

  The test failed because operations contained a _seq_no field with different doc value types (with and without skippers), which isn't allowed, since field types need to be consistent in a Lucene index. The initial operations were generated without knowing that the index mode was set to logsdb or time_series, so the operations lacked doc value skippers; when replayed via the following engine, the operations did have doc value skippers. The fix is to set `index.seq_no.index_options` to `points_and_doc_values`, so that the initial operations are indexed without doc value skippers. The test doesn't gain anything from storing seqno with doc value skippers, so there is no loss of testing coverage. Closes #128541

* [Build] Add support for publishing to maven central (#128659)

  This ensures we package an aggregation zip with all artifacts we want to publish to maven central as part of a release. Running zipAggregation will produce a zip file in the build/nmcp/zip folder. The content of this zip is meant to match the maven artifacts we have currently declared as dra maven artifacts.

* ESQL: Check for errors while loading blocks (#129016)

  Runs a sanity check after loading a block of values. Previously we did a quick check only when assertions were enabled. Now we do two quick checks all the time. Better, we attach information about how a block was loaded when there's a problem. Relates to #128959

* Make `PhaseCacheManagementTests` project-aware (#129047)

  The functionality in `PhaseCacheManagement` was already project-aware, but these tests were still using deprecated methods.

* Vector test tools (#128934)

  This adds some testing tools for verifying vector recall and latency directly, without having to spin up an entire ES node and run a rally track. It's pretty barebones and takes inspiration from lucene-util, but gives access to our own formats and tooling. Here is an example config file. This will build the initial index, run queries at num_candidates: 50, then again at num_candidates: 100 (without reindexing, and re-using the cached nearest neighbors):

  ```
  [
    {
      "doc_vectors": "path",
      "query_vectors": "path",
      "num_docs": 10000,
      "num_queries": 10,
      "index_type": "hnsw",
      "num_candidates": 50,
      "k": 10,
      "hnsw_m": 16,
      "hnsw_ef_construction": 200,
      "index_threads": 4,
      "reindex": true,
      "force_merge": false,
      "vector_space": "maximum_inner_product",
      "dimensions": 768
    },
    {
      "doc_vectors": "path",
      "query_vectors": "path",
      "num_docs": 10000,
      "num_queries": 10,
      "index_type": "hnsw",
      "num_candidates": 100,
      "k": 10,
      "hnsw_m": 16,
      "hnsw_ef_construction": 200,
      "vector_space": "maximum_inner_product",
      "dimensions": 768
    }
  ]
  ```

  To execute:

  ```
  ./gradlew :qa:vector:checkVec --args="/Path/to/knn_tester_config.json"
  ```

  Calling `./gradlew :qa:vector:checkVecHelp` gives some guidance on how to use it, and additionally provides a way to run it via java directly (useful to bypass gradlew guff).

* ES|QL: refactor generative tests (#129028)

* Add a test of LOOKUP JOIN against a time series index (#129007)

  Add a spec test of `LOOKUP JOIN` against a time series index.

* Make ILM `ClusterStateWaitStep` project-aware (#129042)

  This is part of an iterative process to make ILM project-aware.

* Mute org.elasticsearch.xpack.esql.qa.mixed.MixedClusterEsqlSpecIT test {lookup-join.LookupJoinOnTimeSeriesIndex ASYNC} #129078

* Remove `ClusterState` param from ILM `AsyncBranchingStep` (#129076)

  The `ClusterState` parameter of the `asyncPredicate` is not used anywhere.

* Mute org.elasticsearch.xpack.esql.qa.mixed.MixedClusterEsqlSpecIT test {lookup-join.LookupJoinOnTimeSeriesIndex SYNC} #129082

* Mute org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT test {p0=upgraded_cluster/70_ilm/Test Lifecycle Still There And Indices Are Still Managed} #129097

* Mute org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT test {p0=upgraded_cluster/90_ml_data_frame_analytics_crud/Get mixed cluster outlier_detection job} #129098

* Mute org.elasticsearch.packaging.test.DockerTests test081SymlinksAreFollowedWithEnvironmentVariableFiles #128867

* Threadpool merge executor is aware of available disk space (#127613)

  This PR introduces 3 new settings: indices.merge.disk.check_interval, indices.merge.disk.watermark.high, and indices.merge.disk.watermark.high.max_headroom, which control whether the threadpool merge executor starts executing new merges when disk space is getting low. The intent of this change is to avoid the situation where in-progress merges exhaust the available disk space on the node's local filesystem. To this end, the thread pool merge executor periodically monitors the available disk space, as well as the current disk space estimates required by all in-progress (currently running) merges on the node, and will NOT schedule any new merges if disk space is getting low (by default below 5% of the total disk space, or 100 GB, whichever is smaller; the same as the disk allocation flood stage level).

* Add option to include or exclude vectors from _source retrieval (#128735)

  This PR introduces a new include_vectors option to the _source retrieval context. When set to false, vectors are excluded from the returned _source. This is especially efficient when used with synthetic source, as it avoids loading vector fields entirely. By default, vectors remain included unless explicitly excluded.

* Remove direct minScore propagation to inner retrievers

* Cleaned up skip

* Mute org.elasticsearch.index.engine.ThreadPoolMergeExecutorServiceDiskSpaceTests testAvailableDiskSpaceMonitorWhenFileSystemStatErrors #129149

* Add transport version for ML inference Mistral chat completion (#129033)
  * Add transport version for ML inference Mistral chat completion
  * Add changelog for Mistral Chat Completion version fix
  * Revert "Add changelog for Mistral Chat Completion version fix" (reverts commit 7a57416)

* Correct index path validation (#129144)

  All we care about is whether reindex is true or false. We shouldn't worry about force merge: if reindex is true, we will create the directory; if it's false, we won't.

* Mute org.elasticsearch.index.engine.ThreadPoolMergeExecutorServiceDiskSpaceTests testUnavailableBudgetBlocksNewMergeTasksFromStartingExecution #129148

* Implemented completion task for Google VertexAI (#128694)
  * Google Vertex AI completion model, response entity and tests
  * Fixed GoogleVertexAiServiceTest for service configuration
  * Changelog
  * Removed downcasting and using `moveToFirstToken`
  * Created GoogleVertexAiChatCompletionResponseHandler for streaming and non-streaming responses
  * Added unit tests
  * PR feedback
  * Removed GoogleVertexAiCompletionModel; using just GoogleVertexAiChatCompletionModel for completion and chat completion
  * Renamed uri -> nonStreamingUri; added streamingUri and getters in GoogleVertexAiChatCompletionModel
  * Moved rateLimitGroupHashing to subclasses of GoogleVertexAiModel
  * Fixed rate limit hash of GoogleVertexAiRerankModel and refactored uri for GoogleVertexAiUnifiedChatCompletionRequest

  Co-authored-by: lhoet-google <lhoet@google.com>
  Co-authored-by: Jonathan Buttner <56361221+jonathan-buttner@users.noreply.github.com>

* Added cluster feature to yaml
* Node feature added
* Duplicate line (result of merge) removed
* Update docs/changelog/129181.yaml
* Update 129181.yaml

  Co-authored-by: Tanguy Leroux <tlrx.dev@gmail.com>
  Co-authored-by: Martijn van Groningen <martijn.v.groningen@gmail.com>
  Co-authored-by: Rene Groeschke <rene@elastic.co>
  Co-authored-by: Nik Everett <nik9000@gmail.com>
  Co-authored-by: Niels Bauman <33722607+nielsbauman@users.noreply.github.com>
  Co-authored-by: Benjamin Trent <ben.w.trent@gmail.com>
  Co-authored-by: Luigi Dell'Aquila <luigi.dellaquila@gmail.com>
  Co-authored-by: Bogdan Pintea <bogdan.pintea@elastic.co>
  Co-authored-by: elasticsearchmachine <58790826+elasticsearchmachine@users.noreply.github.com>
  Co-authored-by: Albert Zaharovits <email+github@zalbert.me>
  Co-authored-by: Jim Ferenczi <jim.ferenczi@elastic.co>
  Co-authored-by: Jan-Kazlouski-elastic <jan.kazlouski@elastic.co>
  Co-authored-by: Leonardo Hoet <55866308+leo-hoet@users.noreply.github.com>
  Co-authored-by: lhoet-google <lhoet@google.com>
  Co-authored-by: Jonathan Buttner <56361221+jonathan-buttner@users.noreply.github.com>

* Remove changelog for 129181; keep only 128504.yaml as the changelog entry
* Remove redundant retrievers.md; documentation is now in retrievers-overview.asciidoc
* Updated retriever-overview.asciidoc
* Resolved duplicate tag issue

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Tanguy Leroux <tlrx.dev@gmail.com>
Co-authored-by: Martijn van Groningen <martijn.v.groningen@gmail.com>
Co-authored-by: Rene Groeschke <rene@elastic.co>
Co-authored-by: Nik Everett <nik9000@gmail.com>
Co-authored-by: Niels Bauman <33722607+nielsbauman@users.noreply.github.com>
Co-authored-by: Benjamin Trent <ben.w.trent@gmail.com>
Co-authored-by: Luigi Dell'Aquila <luigi.dellaquila@gmail.com>
Co-authored-by: Bogdan Pintea <bogdan.pintea@elastic.co>
Co-authored-by: elasticsearchmachine <58790826+elasticsearchmachine@users.noreply.github.com>
Co-authored-by: Albert Zaharovits <email+github@zalbert.me>
Co-authored-by: Jim Ferenczi <jim.ferenczi@elastic.co>
Co-authored-by: Jan-Kazlouski-elastic <jan.kazlouski@elastic.co>
Co-authored-by: Leonardo Hoet <55866308+leo-hoet@users.noreply.github.com>
Co-authored-by: lhoet-google <lhoet@google.com>
Co-authored-by: Jonathan Buttner <56361221+jonathan-buttner@users.noreply.github.com>
1 parent: 5f364d9 · commit: 90d0f63

File tree: 7 files changed (+257, -1 lines)


docs/changelog/128504.yaml

Lines changed: 5 additions & 0 deletions
```diff
@@ -0,0 +1,5 @@
+pr: 128504
+summary: Add l2_norm normalization support to linear retriever
+area: Relevance
+type: enhancement
+issues: []
```

docs/reference/search/search-your-data/retrievers-overview.asciidoc

Lines changed: 34 additions & 0 deletions
````diff
@@ -26,6 +26,40 @@ Returns top documents from a <<search-api-knn,knn search>>, in the context of a
 * <<linear-retriever,*Linear Retriever*>>.
 Combines the top results from multiple sub-retrievers using a weighted sum of their scores. Allows to specify different
 weights for each retriever, as well as independently normalize the scores from each result set.
+
+[discrete]
+[[retrievers-overview-linear-retriever-parameters]]
+==== Linear Retriever Parameters
+
+`retrievers`
+: (Required, array of objects)
+A list of the sub-retriever configurations whose result sets will be merged through a weighted sum. Each configuration can have a different weight and normalization depending on the specified retriever.
+
+Each entry specifies the following parameters:
+
+`retriever`
+: (Required, a `retriever` object)
+Specifies the retriever for which we will compute the top documents. The retriever will produce `rank_window_size` results, which will later be merged based on the specified `weight` and `normalizer`.
+
+`weight`
+: (Optional, float)
+The weight that each score of this retriever's top docs will be multiplied with. Must be greater than or equal to 0. Defaults to 1.0.
+
+`normalizer`
+: (Optional, String)
+Specifies how we will normalize the retriever's scores before applying the specified `weight`. Available values are: `minmax`, `l2_norm`, and `none`. Defaults to `none`.
+
+* `none`
+* `minmax` : A `MinMaxScoreNormalizer` that normalizes scores based on the following formula
+
+```
+score = (score - min) / (max - min)
+```
+
+* `l2_norm` : An `L2ScoreNormalizer` that normalizes scores using the L2 norm of the score values.
+
+See also the hybrid search example for how to independently configure and apply normalizers to retrievers.
+
 * <<rrf-retriever,*RRF Retriever*>>.
 Combines and ranks multiple first-stage retrievers using the reciprocal rank fusion (RRF) algorithm.
 Allows you to combine multiple result sets with different relevance indicators into a single result set.
````
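The two normalizers described in the documentation above can be sketched outside Elasticsearch. This is a standalone Java illustration, not ES code: class and method names are invented, and the degenerate `max == min` case in `minMax` is pinned to 0 here purely as a simplifying assumption.

```java
import java.util.Arrays;

public class NormalizerSketch {

    // minmax: score = (score - min) / (max - min).
    // Assumption for this sketch: if all scores are equal, emit 0.
    static float[] minMax(float[] scores) {
        float min = Float.POSITIVE_INFINITY, max = Float.NEGATIVE_INFINITY;
        for (float s : scores) {
            min = Math.min(min, s);
            max = Math.max(max, s);
        }
        float[] out = new float[scores.length];
        for (int i = 0; i < scores.length; i++) {
            out[i] = max == min ? 0f : (scores[i] - min) / (max - min);
        }
        return out;
    }

    // l2_norm: each score divided by sqrt(sum of squared scores).
    static float[] l2Norm(float[] scores) {
        double sumOfSquares = 0.0;
        for (float s : scores) {
            sumOfSquares += (double) s * s;
        }
        double norm = Math.sqrt(sumOfSquares);
        float[] out = new float[scores.length];
        for (int i = 0; i < scores.length; i++) {
            out[i] = (float) (scores[i] / norm);
        }
        return out;
    }

    public static void main(String[] args) {
        float[] scores = { 3.0f, 4.0f };
        System.out.println(Arrays.toString(l2Norm(scores)));  // [0.6, 0.8]
        System.out.println(Arrays.toString(minMax(scores)));  // [0.0, 1.0]
    }
}
```

Note how the two differ: `minmax` maps the lowest score to 0 and the highest to 1, while `l2_norm` preserves the relative proportions of all scores.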

x-pack/plugin/rank-rrf/src/main/java/org/elasticsearch/xpack/rank/RankRRFFeatures.java

Lines changed: 2 additions & 1 deletion
```diff
@@ -14,6 +14,7 @@
 import java.util.Set;

 import static org.elasticsearch.search.retriever.CompoundRetrieverBuilder.INNER_RETRIEVERS_FILTER_SUPPORT;
+import static org.elasticsearch.xpack.rank.linear.L2ScoreNormalizer.LINEAR_RETRIEVER_L2_NORM;
 import static org.elasticsearch.xpack.rank.linear.MinMaxScoreNormalizer.LINEAR_RETRIEVER_MINMAX_SINGLE_DOC_FIX;
 import static org.elasticsearch.xpack.rank.rrf.RRFRetrieverBuilder.RRF_RETRIEVER_COMPOSITION_SUPPORTED;

@@ -31,6 +32,6 @@ public Set<NodeFeature> getFeatures() {

     @Override
     public Set<NodeFeature> getTestFeatures() {
-        return Set.of(INNER_RETRIEVERS_FILTER_SUPPORT, LINEAR_RETRIEVER_MINMAX_SINGLE_DOC_FIX);
+        return Set.of(INNER_RETRIEVERS_FILTER_SUPPORT, LINEAR_RETRIEVER_MINMAX_SINGLE_DOC_FIX, LINEAR_RETRIEVER_L2_NORM);
     }
 }
```
L2ScoreNormalizer.java (new file; package org.elasticsearch.xpack.rank.linear)

Lines changed: 66 additions & 0 deletions

```diff
@@ -0,0 +1,66 @@
+
+/*
+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
+ * or more contributor license agreements. Licensed under the Elastic License
+ * 2.0; you may not use this file except in compliance with the Elastic License
+ * 2.0.
+ */
+
+package org.elasticsearch.xpack.rank.linear;
+
+import org.apache.lucene.search.ScoreDoc;
+import org.elasticsearch.features.NodeFeature;
+
+/**
+ * A score normalizer that applies L2 normalization to a set of scores.
+ * <p>
+ * Each score is divided by the L2 norm of the scores if the norm is greater than a small EPSILON.
+ * If all scores are zero or NaN, normalization is skipped and the original scores are returned.
+ * </p>
+ */
+public class L2ScoreNormalizer extends ScoreNormalizer {
+
+    public static final L2ScoreNormalizer INSTANCE = new L2ScoreNormalizer();
+
+    public static final String NAME = "l2_norm";
+
+    private static final float EPSILON = 1e-6f;
+
+    public static final NodeFeature LINEAR_RETRIEVER_L2_NORM = new NodeFeature("linear_retriever.l2_norm");
+
+    public L2ScoreNormalizer() {}
+
+    @Override
+    public String getName() {
+        return NAME;
+    }
+
+    @Override
+    public ScoreDoc[] normalizeScores(ScoreDoc[] docs) {
+        if (docs.length == 0) {
+            return docs;
+        }
+        double sumOfSquares = 0.0;
+        boolean atLeastOneValidScore = false;
+        for (ScoreDoc doc : docs) {
+            if (Float.isNaN(doc.score) == false) {
+                atLeastOneValidScore = true;
+                sumOfSquares += doc.score * doc.score;
+            }
+        }
+        if (atLeastOneValidScore == false) {
+            // No valid scores to normalize
+            return docs;
+        }
+        double norm = Math.sqrt(sumOfSquares);
+        if (norm < EPSILON) {
+            return docs;
+        }
+        ScoreDoc[] scoreDocs = new ScoreDoc[docs.length];
+        for (int i = 0; i < docs.length; i++) {
+            float score = (float) (docs[i].score / norm);
+            scoreDocs[i] = new ScoreDoc(docs[i].doc, score, docs[i].shardIndex);
+        }
+        return scoreDocs;
+    }
+}
```
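The NaN and near-zero handling in `normalizeScores` above can be exercised without `ScoreDoc` or any other Elasticsearch class. This is a minimal standalone re-implementation over plain float arrays, for illustration only; the class and method names are invented here.

```java
public class L2EdgeCases {

    private static final float EPSILON = 1e-6f;

    // NaN scores are excluded from the norm computation. If no valid score
    // remains, or the norm is effectively zero, the input passes through
    // unchanged (mirroring the early returns in the real normalizer).
    static float[] normalize(float[] scores) {
        double sumOfSquares = 0.0;
        boolean atLeastOneValidScore = false;
        for (float s : scores) {
            if (Float.isNaN(s) == false) {
                atLeastOneValidScore = true;
                sumOfSquares += (double) s * s;
            }
        }
        double norm = Math.sqrt(sumOfSquares);
        if (atLeastOneValidScore == false || norm < EPSILON) {
            return scores; // all NaN, or all ~zero: nothing to normalize
        }
        float[] out = new float[scores.length];
        for (int i = 0; i < scores.length; i++) {
            out[i] = (float) (scores[i] / norm); // NaN / norm stays NaN
        }
        return out;
    }

    public static void main(String[] args) {
        // Mixed zero and NaN: the norm is zero, so everything passes through.
        float[] out = normalize(new float[] { 0.0f, Float.NaN });
        System.out.println(out[0] + " " + out[1]);
    }
}
```

This matches the corner cases the unit tests below assert: all-zero input stays zero, all-NaN input stays NaN, and mixed zero/NaN input is returned untouched.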

x-pack/plugin/rank-rrf/src/main/java/org/elasticsearch/xpack/rank/linear/ScoreNormalizer.java

Lines changed: 3 additions & 0 deletions
```diff
@@ -17,6 +17,9 @@ public abstract class ScoreNormalizer {
     public static ScoreNormalizer valueOf(String normalizer) {
         if (MinMaxScoreNormalizer.NAME.equalsIgnoreCase(normalizer)) {
             return MinMaxScoreNormalizer.INSTANCE;
+        } else if (L2ScoreNormalizer.NAME.equalsIgnoreCase(normalizer)) {
+            return L2ScoreNormalizer.INSTANCE;
+
         } else if (IdentityScoreNormalizer.NAME.equalsIgnoreCase(normalizer)) {
             return IdentityScoreNormalizer.INSTANCE;
```
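`valueOf` resolves a normalizer name case-insensitively to a shared singleton via an if/else chain. The same dispatch can be sketched with a map; this is a hedged standalone illustration (not ES code, all names here are invented), showing the lookup-plus-failure behavior the linear retriever relies on for its `normalizer` parameter.

```java
import java.util.Locale;
import java.util.Map;

public class NormalizerLookup {

    interface Normalizer {
        String name();
    }

    static final Normalizer MINMAX = () -> "minmax";
    static final Normalizer L2 = () -> "l2_norm";
    static final Normalizer NONE = () -> "none";

    // Lowercased name -> shared singleton, mirroring the case-insensitive
    // equalsIgnoreCase chain in valueOf.
    static final Map<String, Normalizer> BY_NAME = Map.of(
        "minmax", MINMAX,
        "l2_norm", L2,
        "none", NONE
    );

    static Normalizer valueOf(String normalizer) {
        Normalizer n = BY_NAME.get(normalizer.toLowerCase(Locale.ROOT));
        if (n == null) {
            // Unknown names fail loudly, as the YAML test
            // "should throw on unknown normalizer" expects.
            throw new IllegalArgumentException("Unknown normalizer [" + normalizer + "]");
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(valueOf("L2_NORM").name()); // l2_norm
    }
}
```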

L2ScoreNormalizerTests.java (new file; package org.elasticsearch.xpack.rank.linear)

Lines changed: 54 additions & 0 deletions

```diff
@@ -0,0 +1,54 @@
+/*
+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
+ * or more contributor license agreements. Licensed under the Elastic License
+ * 2.0; you may not use this file except in compliance with the Elastic License
+ * 2.0.
+ */
+
+package org.elasticsearch.xpack.rank.linear;
+
+import org.apache.lucene.search.ScoreDoc;
+import org.elasticsearch.test.ESTestCase;
+
+public class L2ScoreNormalizerTests extends ESTestCase {
+
+    public void testNormalizeTypicalVector() {
+        ScoreDoc[] docs = { new ScoreDoc(1, 3.0f, 0), new ScoreDoc(2, 4.0f, 0) };
+        ScoreDoc[] normalized = L2ScoreNormalizer.INSTANCE.normalizeScores(docs);
+        assertEquals(0.6f, normalized[0].score, 1e-5);
+        assertEquals(0.8f, normalized[1].score, 1e-5);
+    }
+
+    public void testAllZeros() {
+        ScoreDoc[] docs = { new ScoreDoc(1, 0.0f, 0), new ScoreDoc(2, 0.0f, 0) };
+        ScoreDoc[] normalized = L2ScoreNormalizer.INSTANCE.normalizeScores(docs);
+        assertEquals(0.0f, normalized[0].score, 0.0f);
+        assertEquals(0.0f, normalized[1].score, 0.0f);
+    }
+
+    public void testAllNaN() {
+        ScoreDoc[] docs = { new ScoreDoc(1, Float.NaN, 0), new ScoreDoc(2, Float.NaN, 0) };
+        ScoreDoc[] normalized = L2ScoreNormalizer.INSTANCE.normalizeScores(docs);
+        assertTrue(Float.isNaN(normalized[0].score));
+        assertTrue(Float.isNaN(normalized[1].score));
+    }
+
+    public void testMixedZeroAndNaN() {
+        ScoreDoc[] docs = { new ScoreDoc(1, 0.0f, 0), new ScoreDoc(2, Float.NaN, 0) };
+        ScoreDoc[] normalized = L2ScoreNormalizer.INSTANCE.normalizeScores(docs);
+        assertEquals(0.0f, normalized[0].score, 0.0f);
+        assertTrue(Float.isNaN(normalized[1].score));
+    }
+
+    public void testSingleElement() {
+        ScoreDoc[] docs = { new ScoreDoc(1, 42.0f, 0) };
+        ScoreDoc[] normalized = L2ScoreNormalizer.INSTANCE.normalizeScores(docs);
+        assertEquals(1.0f, normalized[0].score, 1e-5);
+    }
+
+    public void testEmptyArray() {
+        ScoreDoc[] docs = {};
+        ScoreDoc[] normalized = L2ScoreNormalizer.INSTANCE.normalizeScores(docs);
+        assertEquals(0, normalized.length);
+    }
+}
```

x-pack/plugin/rank-rrf/src/yamlRestTest/resources/rest-api-spec/test/linear/10_linear_retriever.yml

Lines changed: 93 additions & 0 deletions
```diff
@@ -265,6 +265,99 @@ setup:
   - match: { hits.hits.3._id: "3" }
   - close_to: { hits.hits.3._score: { value: 0.0, error: 0.001 } }

+---
+"should normalize initial scores with l2_norm":
+  - requires:
+      cluster_features: [ "linear_retriever.l2_norm" ]
+      reason: "Requires l2_norm normalization support in linear retriever"
+  - do:
+      search:
+        index: test
+        body:
+          retriever:
+            linear:
+              retrievers: [
+                {
+                  retriever: {
+                    standard: {
+                      query: {
+                        bool: {
+                          should: [
+                            { constant_score: { filter: { term: { keyword: { value: "one" } } }, boost: 3.0 } },
+                            { constant_score: { filter: { term: { keyword: { value: "two" } } }, boost: 4.0 } }
+                          ]
+                        }
+                      }
+                    }
+                  },
+                  weight: 10.0,
+                  normalizer: "l2_norm"
+                },
+                {
+                  retriever: {
+                    standard: {
+                      query: {
+                        bool: {
+                          should: [
+                            { constant_score: { filter: { term: { keyword: { value: "three" } } }, boost: 6.0 } },
+                            { constant_score: { filter: { term: { keyword: { value: "four" } } }, boost: 8.0 } }
+                          ]
+                        }
+                      }
+                    }
+                  },
+                  weight: 2.0,
+                  normalizer: "l2_norm"
+                }
+              ]
+
+  - match: { hits.total.value: 4 }
+  - match: { hits.hits.0._id: "2" }
+  - match: { hits.hits.0._score: 8.0 }
+  - match: { hits.hits.1._id: "1" }
+  - match: { hits.hits.1._score: 6.0 }
+  - match: { hits.hits.2._id: "4" }
+  - close_to: { hits.hits.2._score: { value: 1.6, error: 0.001 } }
+  - match: { hits.hits.3._id: "3" }
+  - close_to: { hits.hits.3._score: { value: 1.2, error: 0.001 } }
+
+---
+"should handle all zero scores in normalization":
+  - requires:
+      cluster_features: [ "linear_retriever.l2_norm" ]
+      reason: "Requires l2_norm normalization support in linear retriever"
+  - do:
+      search:
+        index: test
+        body:
+          retriever:
+            linear:
+              retrievers: [
+                {
+                  retriever: {
+                    standard: {
+                      query: {
+                        bool: {
+                          should: [
+                            { constant_score: { filter: { term: { keyword: { value: "one" } } }, boost: 0.0 } },
+                            { constant_score: { filter: { term: { keyword: { value: "two" } } }, boost: 0.0 } },
+                            { constant_score: { filter: { term: { keyword: { value: "three" } } }, boost: 0.0 } },
+                            { constant_score: { filter: { term: { keyword: { value: "four" } } }, boost: 0.0 } }
+                          ]
+                        }
+                      }
+                    }
+                  },
+                  weight: 1.0,
+                  normalizer: "l2_norm"
+                }
+              ]
+  - match: { hits.total.value: 4 }
+  - close_to: { hits.hits.0._score: { value: 0.0, error: 0.0001 } }
+  - close_to: { hits.hits.1._score: { value: 0.0, error: 0.0001 } }
+  - close_to: { hits.hits.2._score: { value: 0.0, error: 0.0001 } }
+  - close_to: { hits.hits.3._score: { value: 0.0, error: 0.0001 } }

 ---
 "should throw on unknown normalizer":
   - do:
```
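The expected scores in the first YAML test above follow directly from L2 normalization plus weighting. This standalone sketch works the arithmetic through (assuming, as in the test, that each document is returned by only one sub-retriever, so the weighted sum reduces to a single term per document; the class and method names are illustrative):

```java
public class LinearScoreArithmetic {

    // L2-normalize a retriever's scores, then apply its weight.
    static float[] l2ThenWeight(float[] scores, float weight) {
        double sumOfSquares = 0.0;
        for (float s : scores) {
            sumOfSquares += (double) s * s;
        }
        double norm = Math.sqrt(sumOfSquares);
        float[] out = new float[scores.length];
        for (int i = 0; i < scores.length; i++) {
            out[i] = (float) (scores[i] / norm * weight);
        }
        return out;
    }

    public static void main(String[] args) {
        // Retriever 1: docs "1" and "2" score 3.0 and 4.0 (norm 5), weight 10.0
        float[] first = l2ThenWeight(new float[] { 3.0f, 4.0f }, 10.0f);  // 6.0, 8.0
        // Retriever 2: docs "3" and "4" score 6.0 and 8.0 (norm 10), weight 2.0
        float[] second = l2ThenWeight(new float[] { 6.0f, 8.0f }, 2.0f);  // 1.2, 1.6
        // Final ranking asserted by the test: doc 2 (8.0), doc 1 (6.0),
        // doc 4 (1.6), doc 3 (1.2).
        System.out.printf("%.1f %.1f %.1f %.1f%n", first[1], first[0], second[1], second[0]);
    }
}
```

Note that `l2_norm` is scale-invariant within one retriever: both retrievers normalize to the same (0.6, 0.8) pair, so the final ordering is driven entirely by the weights.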
