Index and query vectors
Learn how to index and query vector embeddings with Redis
The Redis query engine lets you index vector fields in hash or JSON objects (see the Vectors reference page for more information). Among other things, vector fields can store text embeddings, which are AI-generated vector representations of the semantic information in a piece of text. The vector distance between two embeddings indicates how semantically similar they are. By comparing the similarity of an embedding generated from some query text with embeddings stored in hash or JSON fields, Redis can retrieve documents that closely match the query in terms of their meaning.
In the example below, we use the sentence-transformers library to generate vector embeddings for storage and indexing with the Redis query engine.
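To make the idea of vector distance concrete, the short sketch below (not part of the main example; it assumes the sentence-transformers and numpy packages are installed) encodes two sentences and computes the L2 distance between their embeddings. The smaller the distance, the more similar the texts are in meaning:
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
v1 = model.encode("That is a very happy person")
v2 = model.encode("That is a happy dog")

# L2 (Euclidean) distance: a smaller value means greater semantic similarity.
print(np.linalg.norm(v1 - v2))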
Initialize
Install redis-py if you have not already done so. Also, install sentence-transformers with the following command:
pip install sentence-transformers
In a new Python source file, start by importing the required classes:
from sentence_transformers import SentenceTransformer
from redis.commands.search.query import Query
from redis.commands.search.field import TextField, TagField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
import numpy as np
import redis
The first of these imports is the SentenceTransformer class, which generates an embedding from a section of text.
Here, we create an instance of SentenceTransformer that uses the all-MiniLM-L6-v2 model for the embeddings. This model generates vectors with 384 dimensions, regardless of the length of the input text, but note that the input is truncated to 256 tokens (see Word piece tokenization at the Hugging Face docs to learn more about the way tokens are related to the original text).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
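If you want to confirm the embedding size before building the index, a quick check like the following (an optional addition, not part of the original example) prints the dimensionality of an encoded sentence:
# The model should report 384-dimensional embeddings.
print(model.encode("A quick test sentence").shape)  # Expected: (384,)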
Create the index
Connect to Redis and delete any index previously created with the name vector_idx. (The dropindex() call throws an exception if the index doesn't already exist, which is why you need the try: except: block.)
r = redis.Redis(decode_responses=True)
try:
r.ft("vector_idx").dropindex(True)
except redis.exceptions.ResponseError:
pass
Next, we create the index. The schema in the example below specifies hash objects for storage and includes three fields: the text content to index, a tag field to represent the "genre" of the text, and the embedding vector generated from the original text content. The embedding field specifies HNSW indexing, the L2 vector distance metric, Float32 values to represent the vector's components, and 384 dimensions, as required by the all-MiniLM-L6-v2 embedding model.
schema = (
TextField("content"),
TagField("genre"),
VectorField("embedding", "HNSW", {
"TYPE": "FLOAT32",
"DIM": 384,
"DISTANCE_METRIC":"L2"
})
)
r.ft("vector_idx").create_index(
schema,
definition=IndexDefinition(
prefix=["doc:"], index_type=IndexType.HASH
)
)
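To verify that the index was created as expected, you can inspect it with the info() method (an optional check that is not part of the original walkthrough):
# FT.INFO reports the index schema and, once data is added,
# the number of indexed documents.
info = r.ft("vector_idx").info()
print(info["index_name"], info["num_docs"])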
Add data
You can now supply the data objects, which will be indexed automatically when you add them with hset(), as long as you use the doc: prefix specified in the index definition.
Use the model.encode() method of SentenceTransformer as shown below to create the embedding that represents the content field. The astype() call that follows model.encode() specifies that we want a vector of float32 values, and the tobytes() call encodes the vector components together as a single binary string rather than the default list of float values. Use the binary string representation when you are indexing hash objects (as we are here), but use the default list of float values for JSON objects.
content = "That is a very happy person"
r.hset("doc:0", mapping={
"content": content,
"genre": "persons",
"embedding": model.encode(content).astype(np.float32).tobytes(),
})
content = "That is a happy dog"
r.hset("doc:1", mapping={
"content": content,
"genre": "pets",
"embedding": model.encode(content).astype(np.float32).tobytes(),
})
content = "Today is a sunny day"
r.hset("doc:2", mapping={
"content": content,
"genre": "weather",
"embedding": model.encode(content).astype(np.float32).tobytes(),
})
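For comparison, if you were storing JSON objects instead of hashes, you would pass the embedding as a plain list of floats. The following is a minimal sketch, assuming a hypothetical second index created with IndexType.JSON, a json: key prefix, and JSON path fields such as $.embedding (none of which are defined in the example above):
# Hypothetical JSON variant: the schema would use JSON paths, e.g.
#   VectorField("$.embedding", "HNSW", {...}, as_name="embedding")
# with definition=IndexDefinition(prefix=["json:"], index_type=IndexType.JSON).
content = "That is a very happy person"
r.json().set("json:0", "$", {
    "content": content,
    "genre": "persons",
    # tolist() produces the plain list of floats that JSON storage expects,
    # instead of the binary string used for hashes.
    "embedding": model.encode(content).astype(np.float32).tolist(),
})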
Run a query
After you have created the index and added the data, you are ready to run a query.
To do this, you must create another embedding vector from your chosen query
text. Redis calculates the similarity between the query vector and each
embedding vector in the index as it runs the query. It then ranks the
results in order of this numeric similarity value.
The code below creates the query embedding using model.encode(), as with the indexing, and passes it as a parameter when the query executes (see Vector search for more information about using query parameters with embeddings).
q = Query(
    "*=>[KNN 3 @embedding $vec AS vector_distance]"
).return_fields("vector_distance", "content").sort_by("vector_distance").dialect(2)
query_text = "That is a happy person"
res = r.ft("vector_idx").search(
q, query_params={
"vec": model.encode(query_text).astype(np.float32).tobytes()
}
)
print(res)
The code is now ready to run, but note that it may take a while to complete when you run it for the first time (which happens because the sentence-transformers library must download the all-MiniLM-L6-v2 model data before it can generate the embeddings). When you run the code, it outputs the following result object (slightly formatted here for clarity):
Result{
3 total,
docs: [
Document {
'id': 'doc:0',
'payload': None,
'vector_distance': '0.114169985056',
'content': 'That is a very happy person'
},
Document {
'id': 'doc:1',
'payload': None,
'vector_distance': '0.610845386982',
'content': 'That is a happy dog'
},
Document {
'id': 'doc:2',
'payload': None,
'vector_distance': '1.48624813557',
'content': 'Today is a sunny day'
}
]
}
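If you want a more readable printout than the raw Result object, you can iterate over the returned documents and read each returned field as an attribute (an optional addition to the example):
# Each entry in res.docs exposes the returned fields as attributes.
for doc in res.docs:
    print(f"{doc.id}: distance={doc.vector_distance}, content={doc.content!r}")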
Note that the results are ordered according to the value of the vector_distance field, with the lowest distance indicating the greatest similarity to the query. As you would expect, doc:0, with the content text "That is a very happy person", is the result most similar in meaning to the query text "That is a happy person".
Learn more
See Vector search for more information about the indexing options, distance metrics, and query format for vectors.