Stop

The stop filter removes specified stop words from tokenized text, helping to eliminate common words that carry little meaning. You can configure the stop word list with the stop_words parameter.

Configuration

The stop filter is a custom filter in Milvus. To use it, specify "type": "stop" in the filter configuration and provide the list of stop words with the stop_words parameter.

analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "stop",  # Specifies the filter type as stop
        "stop_words": ["of", "to", "_english_"],  # Defines custom stop words and includes the English stop word list
    }],
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter",
        Collections.singletonList(
                new HashMap<String, Object>() {{
                    put("type", "stop");
                    put("stop_words", Arrays.asList("of", "to", "_english_"));
                }}
        )
);
const analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "stop", // Specifies the filter type as stop
        "stop_words": ["of", "to", "_english_"], // Defines custom stop words and includes the English stop word list
    }],
};
analyzerParams = map[string]any{
    "tokenizer": "standard",
    "filter": []any{map[string]any{
        "type":       "stop",
        "stop_words": []string{"of", "to", "_english_"},
    }},
}
# restful
analyzerParams='{
    "tokenizer": "standard",
    "filter": [
        {
            "type": "stop",
            "stop_words": [
                "of",
                "to",
                "_english_"
            ]
        }
    ]
}'

The stop filter accepts the following configurable parameters:

Parameter

Description

stop_words

A list of words to remove during tokenization. By default, the filter uses the built-in english dictionary. You can override or extend it in three ways (see the sketch after this list):

  • Built-in dictionary – Provide one of these language aliases to use a predefined dictionary: "english", "danish", "dutch", "finnish", "french", "german", "hungarian", "italian", "norwegian", "portuguese", "russian", "spanish", "swedish"

  • Custom list – Pass your own array of terms, for example ["foo", "bar", "baz"]

  • Mixed list – Combine aliases and custom terms, for example ["of", "to", "english"]

    For details on the exact contents of each predefined dictionary, see stop_words
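
As an illustrative sketch of the three options above (the values are placeholders, and "_english_" follows the spelling used in the configuration examples on this page), the stop_words parameter could be set as follows:

# Built-in dictionary only: rely on a predefined stop word list
filter_builtin = {"type": "stop", "stop_words": ["_english_"]}

# Custom list only: remove exactly these terms (illustrative values)
filter_custom = {"type": "stop", "stop_words": ["foo", "bar", "baz"]}

# Mixed list: custom terms plus a predefined dictionary, as in the configuration above
filter_mixed = {"type": "stop", "stop_words": ["of", "to", "_english_"]}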

The stop filter operates on the terms produced by a tokenizer, so it must be used together with a tokenizer. For the list of tokenizers available in Milvus, refer to the Tokenizer reference.
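
As a minimal sketch of this pairing (assuming the whitespace tokenizer listed in the Tokenizer reference), the same filter can follow a different tokenizer:

analyzer_params_ws = {
    "tokenizer": "whitespace",  # assumption: any tokenizer from the Tokenizer reference works here
    "filter": [{
        "type": "stop",
        "stop_words": ["_english_"],
    }],
}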

Once analyzer_params is defined, you can apply it to a VARCHAR field when defining a collection schema. This lets Milvus process the text in that field with the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
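
A minimal sketch of attaching this configuration to a VARCHAR field, assuming the pymilvus MilvusClient schema API (the field names, max_length, and URI are placeholders):

from pymilvus import MilvusClient, DataType

client = MilvusClient(uri="http://localhost:19530")  # placeholder URI; point to your Milvus instance

schema = client.create_schema()
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True, auto_id=True)
schema.add_field(
    field_name="text",               # placeholder field name
    datatype=DataType.VARCHAR,
    max_length=1000,                 # placeholder length
    enable_analyzer=True,            # enable text analysis on this field
    analyzer_params=analyzer_params  # the stop filter configuration defined above
)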

Examples

Before applying the analyzer configuration to your collection schema, verify its behavior with the run_analyzer method.

Analyzer configuration

analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "stop",  # Specifies the filter type as stop
        "stop_words": ["of", "to", "_english_"],  # Defines custom stop words and includes the English stop word list
    }],
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter",
        Collections.singletonList(
                new HashMap<String, Object>() {{
                    put("type", "stop");
                    put("stop_words", Arrays.asList("of", "to", "_english_"));
                }}
        )
);
analyzerParams = map[string]any{
    "tokenizer": "standard",
    "filter": []any{map[string]any{
        "type":       "stop",
        "stop_words": []string{"of", "to", "_english_"},
    }},
}

Verification with run_analyzer

from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")  # adjust the URI to your Milvus instance

# Sample text to analyze
sample_text = "The stop filter allows control over common stop words for text processing."

# Run the standard analyzer with the defined configuration
result = client.run_analyzer(sample_text, analyzer_params)
print(result)

Expected output

['The', 'stop', 'filter', 'allows', 'control', 'over', 'common', 'stop', 'words', 'text', 'processing']
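
Note that 'The' remains in the output because no case normalization ran before the stop filter, while 'of', 'to', and 'for' were removed. If you also want 'The' removed, one option is a sketch like the following, which places the built-in lowercase filter ahead of stop:

analyzer_params_lc = {
    "tokenizer": "standard",
    "filter": [
        "lowercase",  # normalize case first so "The" becomes "the"
        {"type": "stop", "stop_words": ["of", "to", "_english_"]},
    ],
}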