Index and query vectors

Learn how to index and query vector embeddings with Redis

The Redis Query Engine lets you index vector fields in hash or JSON objects (see the Vectors reference page for more information). Among other things, vector fields can store text embeddings, which are AI-generated vector representations of the semantic information in a piece of text. The vector distance between two embeddings indicates how similar they are semantically. By comparing the similarity of an embedding generated from some query text with embeddings stored in hash or JSON fields, Redis can retrieve documents that closely match the query in terms of their meaning.

In the example below, we use the @xenova/transformers library to generate vector embeddings to store and index with the Redis Query Engine.

Initialize

Install node-redis if you have not already done so. Also, install @xenova/transformers with the following command:

npm install @xenova/transformers

In a new JavaScript source file, start by importing the required classes:

import * as transformers from '@xenova/transformers';
import {VectorAlgorithms, createClient, SchemaFieldTypes} from 'redis';

The first of these imports is the @xenova/transformers module, which handles the embedding models. Here, we use an instance of the all-distilroberta-v1 model for the embeddings. This model generates vectors with 768 dimensions, regardless of the length of the input text, but note that the input is truncated to 128 tokens (see Word piece tokenization at the Hugging Face docs to learn more about the way tokens are related to the original text).

The pipe value obtained here is a function that we can call to generate the embeddings. We also need an object to pass some options for the pipe() function call. These specify the way the sentence embedding is generated from individual token embeddings (see the all-distilroberta-v1 docs for more information).

let pipe = await transformers.pipeline(
    'feature-extraction', 'Xenova/all-distilroberta-v1'
);

const pipeOptions = {
    pooling: 'mean',
    normalize: true,
};
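
As a quick sanity check (optional, and not needed for the rest of the example), you can call pipe() on a short test sentence and confirm that the data attribute of the result is a Float32Array with 768 entries, matching the dimensions required by the index created below:

// Optional check: generate an embedding for a test sentence and verify
// that it has the 768 dimensions produced by the all-distilroberta-v1 model.
const testEmbedding = await pipe('This is a test sentence', pipeOptions);
console.log(testEmbedding.data instanceof Float32Array); // true
console.log(testEmbedding.data.length); // 768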

Create the index

Connect to Redis and delete any index previously created with the name vector_idx. (The dropIndex() call throws an exception if the index doesn't already exist, which is why you need the try...catch block.)

const client = createClient({url: 'redis://localhost:6379'});

await client.connect();

try { await client.ft.dropIndex('vector_idx'); } catch {}

Next, create the index. The schema in the example below specifies hash objects for storage and includes three fields: the text content to index, a tag field to represent the "genre" of the text, and the embedding vector generated from the original text content. The embedding field specifies HNSW indexing, the L2 vector distance metric, Float32 values to represent the vector's components, and 768 dimensions, as required by the all-distilroberta-v1 embedding model.

await client.ft.create('vector_idx', {
    'content': {
        type: SchemaFieldTypes.TEXT,
    },
    'genre': {
        type: SchemaFieldTypes.TAG,
    },
    'embedding': {
        type: SchemaFieldTypes.VECTOR,
        TYPE: 'FLOAT32',
        ALGORITHM: VectorAlgorithms.HNSW,
        DISTANCE_METRIC: 'L2',
        DIM: 768,
    }
}, {
    ON: 'HASH',
    PREFIX: 'doc:'
});
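
If you want to confirm that the index was created as expected, you can optionally inspect it with ft.info(). This is just a sanity check and isn't required for the rest of the example (the exact shape of the reply object may vary between node-redis versions):

// Optional: fetch the definition and statistics of 'vector_idx'.
const indexInfo = await client.ft.info('vector_idx');
console.log(indexInfo);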

Add data

You can now supply the data objects, which will be indexed automatically when you add them with hSet(), as long as you use the doc: prefix specified in the index definition.

Use the pipe() method and the pipeOptions object that we created earlier to generate the embedding that represents the content field. The object returned by pipe() includes a data attribute, which is a Float32Array that contains the embedding data. If you are indexing hash objects, as we are here, then you must also call Buffer.from() on this array's buffer value to convert the Float32Array to a binary string. If you are indexing JSON objects, you can just use the Float32Array directly to represent the embedding.

Make the hSet() calls within a Promise.all() call to create a Redis pipeline (not to be confused with the @xenova/transformers pipeline). This combines the commands together into a batch to reduce network round trip time.

const sentence1 = 'That is a very happy person';
const doc1 = {
    'content': sentence1,
    'genre': 'persons',
    'embedding': Buffer.from(
        (await pipe(sentence1, pipeOptions)).data.buffer
    ),
};

const sentence2 = 'That is a happy dog';
const doc2 = {
    'content': sentence2,
    'genre': 'pets',
    'embedding': Buffer.from(
        (await pipe(sentence2, pipeOptions)).data.buffer
    )
};

const sentence3 = 'Today is a sunny day';
const doc3 = {
    'content': sentence3,
    'genre': 'weather',
    'embedding': Buffer.from(
        (await pipe(sentence3, pipeOptions)).data.buffer
    )
};

await Promise.all([
    client.hSet('doc:1', doc1),
    client.hSet('doc:2', doc2),
    client.hSet('doc:3', doc3)
]);
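
As noted above, if you were storing the documents as JSON objects rather than hashes, you would not convert the embedding to a binary string. The following is only a minimal sketch of the equivalent step: it assumes a separate index created with ON: 'JSON' and a hypothetical jdoc: prefix (not shown here), and it uses Array.from() to turn the Float32Array into a plain array so that it serializes as a JSON array of numbers:

// Hypothetical JSON variant (assumes an index defined with ON: 'JSON'
// and a 'jdoc:' prefix, which is not part of this example).
const jsonDoc1 = {
    content: sentence1,
    genre: 'persons',
    // For JSON, store the embedding as an array of numbers rather than
    // a binary string.
    embedding: Array.from((await pipe(sentence1, pipeOptions)).data),
};

await client.json.set('jdoc:1', '$', jsonDoc1);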

Run a query

After you have created the index and added the data, you are ready to run a query. To do this, you must create another embedding vector from your chosen query text. Redis calculates the vector distance between the query vector and each embedding vector in the index and then ranks the results in order of this distance value.

The code below creates the query embedding using pipe(), as with the indexing, and passes it as a parameter during execution (see Vector search for more information about using query parameters with embeddings).

The query returns an array of objects representing the documents that were found (which are hash objects here). The id attribute contains the document's key. The value attribute contains an object with a key-value entry corresponding to each index field specified in the RETURN option of the query.

const similar = await client.ft.search(
    'vector_idx',
    '*=>[KNN 3 @embedding $B AS score]',
    {
        'PARAMS': {
            B: Buffer.from(
                (await pipe('That is a happy person', pipeOptions)).data.buffer
            ),
        },
        'RETURN': ['score', 'content'],
        'DIALECT': '2'
    },
);

for (const doc of similar.documents) {
    console.log(`${doc.id}: '${doc.value.content}', Score: ${doc.value.score}`);
}

await client.quit();

The code is now ready to run, but note that it may take a while to download the all-distilroberta-v1 model data the first time you run it. The code outputs the following results:

doc:1: 'That is a very happy person', Score: 0.127055495977
doc:2: 'That is a happy dog', Score: 0.836842417717
doc:3: 'Today is a sunny day', Score: 1.50889515877

The results are ordered according to the value of the score field, which represents the vector distance here. The lowest distance indicates the greatest similarity to the query. As you would expect, the result for doc:1 with the content text "That is a very happy person" is the result that is most similar in meaning to the query text "That is a happy person".
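
The genre tag field defined in the schema isn't used by the query above, but you can combine a tag filter with the KNN clause to restrict the vector search to a subset of the documents. As a sketch (it would need to run before the client.quit() call), a query limited to documents tagged pets might look like this:

// Hybrid query sketch: only documents whose 'genre' tag is 'pets' are
// considered for the KNN vector search.
const similarPets = await client.ft.search(
    'vector_idx',
    '(@genre:{pets})=>[KNN 3 @embedding $B AS score]',
    {
        'PARAMS': {
            B: Buffer.from(
                (await pipe('That is a happy person', pipeOptions)).data.buffer
            ),
        },
        'RETURN': ['score', 'content'],
        'DIALECT': '2'
    },
);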

Learn more

See Vector search for more information about the indexing options, distance metrics, and query format for vectors.
