Index and query vectors
Learn how to index and query vector embeddings with Redis
The Redis Query Engine lets you index vector fields in hash or JSON objects (see the vector reference page for more information). Among other things, vector fields can store text embeddings, which are AI-generated vector representations of the semantic information in pieces of text. The vector distance between two embeddings indicates how similar they are semantically. By comparing the similarity of an embedding generated from some query text with embeddings stored in hash or JSON fields, Redis can retrieve documents that closely match the query in terms of their meaning.
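As a simple illustration of what "vector distance" means here, the L2 metric used later in this tutorial is based on the sum of squared differences between the vectors' components. The minimal sketch below uses made-up three-component vectors, purely for illustration (real embeddings have many more dimensions):

// Squared L2 distance between two toy 3-component "embeddings".
float[] a = { 0.1f, 0.8f, 0.3f };
float[] b = { 0.2f, 0.7f, 0.4f };

float sum = 0f;
for (int i = 0; i < a.Length; i++)
{
    float diff = a[i] - b[i];
    sum += diff * diff;
}

// Smaller values mean the vectors (and hence the texts) are more similar.
Console.WriteLine($"Squared L2 distance: {sum}"); // approximately 0.03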
In the example below, we use Microsoft.ML to generate the vector embeddings to store and index with the Redis Query Engine. We also show how to adapt the code to use Azure OpenAI for the embeddings.
Initialization
If you are starting with a new console app, you can create the app with the following command:
dotnet new console -n VecQueryExample
In the app's project folder, add NRedisStack:
dotnet add package NRedisStack
Then, add the Microsoft.ML package:
dotnet add package Microsoft.ML
If you want to try the optional Azure embedding described below, you should also add Azure.AI.OpenAI:
dotnet add package Azure.AI.OpenAI --prerelease
Import dependencies
Add the following imports to your source file:
// Redis connection and Query Engine.
using NRedisStack.RedisStackCommands;
using StackExchange.Redis;
using NRedisStack.Search;
using static NRedisStack.Search.Schema;
using NRedisStack.Search.Literals.Enums;
// Text embeddings.
using Microsoft.ML;
using Microsoft.ML.Transforms.Text;
If you are using the Azure embeddings, also add:
// Azure embeddings.
using Azure;
using Azure.AI.OpenAI;
Define a function to obtain the embedding model
Note: Ignore this step if you are using an Azure OpenAI embedding model.
A few steps are involved in initializing the embedding model (known as a PredictionEngine, in Microsoft terminology), so we declare a function to contain those steps together. (See the Microsoft.ML docs for more information about the ApplyWordEmbedding method, including example code.)
Note that we use two classes, TextData and TransformedTextData, to specify the PredictionEngine model. C# syntax requires us to place these classes after the main code in a console app source file. The section Declare TextData and TransformedTextData below shows how to declare them.
static PredictionEngine<TextData, TransformedTextData> GetPredictionEngine()
{
    // Create a new ML context for ML.NET operations. It can be used for
    // exception tracking and logging, and as the source of randomness.
    var mlContext = new MLContext();

    // Create an empty list as the dataset.
    var emptySamples = new List<TextData>();

    // Convert the sample list to an empty IDataView.
    var emptyDataView = mlContext.Data.LoadFromEnumerable(emptySamples);

    // A pipeline for converting text into a 150-dimension embedding vector.
    var textPipeline = mlContext.Transforms.Text.NormalizeText("Text")
        .Append(mlContext.Transforms.Text.TokenizeIntoWords("Tokens", "Text"))
        .Append(mlContext.Transforms.Text.ApplyWordEmbedding("Features", "Tokens",
            WordEmbeddingEstimator.PretrainedModelKind.SentimentSpecificWordEmbedding));

    // Fit to data.
    var textTransformer = textPipeline.Fit(emptyDataView);

    // Create the prediction engine to get the embedding vector from the input text/string.
    var predictionEngine = mlContext.Model.CreatePredictionEngine<TextData,
        TransformedTextData>(textTransformer);

    return predictionEngine;
}
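Before moving on, you can optionally confirm that the engine works as expected. The short sketch below is our addition, not part of the original steps; it generates one embedding and checks that it has the 150 components the index schema will expect:

// Smoke test: the SentimentSpecificWordEmbedding model should yield
// 150 components (min/average/max pooling over 50-dimension word vectors).
var engine = GetPredictionEngine();
var sample = engine.Predict(new TextData { Text = "hello world" });
Console.WriteLine(sample.Features.Length); // expected: 150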
Define a function to generate an embedding
Note: Ignore this step if you are using an Azure OpenAI embedding model.
Our embedding model represents the vectors as an array of float values, but when you store vectors in a Redis hash object, you must encode the vector array as a byte string. To simplify this, we declare a GetEmbedding() function that applies the PredictionEngine model described above, and then encodes the returned float array as a byte string. If you are storing your documents as JSON objects instead of hashes, then you should use the float array for the embedding directly, without first converting it to a byte string.
static byte[] GetEmbedding(
    PredictionEngine<TextData, TransformedTextData> model, string sentence
)
{
    // Call the prediction API to convert the text into an embedding vector.
    var data = new TextData()
    {
        Text = sentence
    };

    var prediction = model.Predict(data);

    // Convert prediction.Features to a binary blob.
    float[] floatArray = Array.ConvertAll(prediction.Features, x => (float)x);
    byte[] byteArray = new byte[floatArray.Length * sizeof(float)];
    Buffer.BlockCopy(floatArray, 0, byteArray, 0, byteArray.Length);

    return byteArray;
}
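If you plan to store your documents as JSON objects, a variant that returns the raw float array (skipping the byte encoding) might look like the following sketch. The GetEmbeddingFloats() name is our own hypothetical helper, not part of the tutorial:

// Hypothetical JSON-friendly variant: return the raw float array,
// since JSON documents store vectors as arrays rather than byte strings.
static float[] GetEmbeddingFloats(
    PredictionEngine<TextData, TransformedTextData> model, string sentence
)
{
    var data = new TextData() { Text = sentence };
    return model.Predict(data).Features;
}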
Generate an embedding from Azure OpenAI
Note: Ignore this step if you are using a Microsoft.ML embedding model.
Azure OpenAI can be a convenient way to access an embedding model, because you don't need to manage and scale the server infrastructure yourself. You can create an Azure OpenAI service and deployment to serve embeddings of whatever type you need. Select your region, note the service endpoint and key, and add them where you see placeholders in the function below. See Learn how to generate embeddings with Azure OpenAI for more information.
private static byte[] GetEmbeddingFromAzure(string sentence)
{
    Uri oaiEndpoint = new("your-azure-openai-endpoint");
    string oaiKey = "your-openai-key";

    AzureKeyCredential credentials = new(oaiKey);
    OpenAIClient openAIClient = new(oaiEndpoint, credentials);

    EmbeddingsOptions embeddingOptions = new()
    {
        DeploymentName = "your-deployment-name",
        Input = { sentence },
    };

    // Generate the vector embedding.
    var returnValue = openAIClient.GetEmbeddings(embeddingOptions);

    // Convert the array of floats to a binary blob.
    float[] floatArray = Array.ConvertAll(returnValue.Value.Data[0].Embedding.ToArray(), x => (float)x);
    byte[] byteArray = new byte[floatArray.Length * sizeof(float)];
    Buffer.BlockCopy(floatArray, 0, byteArray, 0, byteArray.Length);
    return byteArray;
}
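One caveat (our note, not part of the steps above): Azure OpenAI embedding models generally produce vectors with a different number of dimensions than the Microsoft.ML model used in this tutorial. For example, the text-embedding-ada-002 model returns 1536 components, so the vector field definition when you create the index in the next section would need a matching DIM value, roughly as sketched here:

// Hypothetical vector field sized for a 1536-dimension Azure embedding
// model (such as text-embedding-ada-002); DIM must match the model output.
var azureSchema = new Schema()
    .AddVectorField("embedding", VectorField.VectorAlgo.HNSW,
        new Dictionary<string, object>()
        {
            ["TYPE"] = "FLOAT32",
            ["DIM"] = "1536",
            ["DISTANCE_METRIC"] = "L2"
        }
    );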
Create the index
Connect to Redis and delete any index previously created with the name vector_idx. (The DropIndex() call throws an exception if the index doesn't already exist, which is why you need the try...catch block.)
var muxer = ConnectionMultiplexer.Connect("localhost:6379");
var db = muxer.GetDatabase();

try { db.FT().DropIndex("vector_idx"); } catch {}
Next, create the index. The schema in the example below includes three fields: the text content to index, a tag field to represent the "genre" of the text, and the embedding vector generated from the original text content. The embedding field specifies HNSW indexing, the L2 vector distance metric, Float32 values to represent the vector's components, and 150 dimensions, as required by our embedding model.

The FTCreateParams object specifies hash objects for storage and a prefix doc: that identifies the hash objects we want to index.
var schema = new Schema()
.AddTextField(new FieldName("content", "content"))
.AddTagField(new FieldName("genre", "genre"))
.AddVectorField("embedding", VectorField.VectorAlgo.HNSW,
new Dictionary<string, object>()
{
["TYPE"] = "FLOAT32",
["DIM"] = "150",
["DISTANCE_METRIC"] = "L2"
}
);
db.FT().Create(
"vector_idx",
new FTCreateParams()
.On(IndexDataType.HASH)
.Prefix("doc:"),
schema
);
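If you want to confirm that the index was created, you can inspect it with NRedisStack's Info() method. This check is our addition; we assume here that the returned InfoResult object exposes a NumDocs property for the number of indexed documents:

// Optional check: fetch index metadata. NumDocs is 0 until we add data.
var info = db.FT().Info("vector_idx");
Console.WriteLine($"Documents indexed: {info.NumDocs}");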
Add data
You can now supply the data objects, which will be indexed automatically when you add them with HashSet(), as long as you use the doc: prefix specified in the index definition.

Firstly, create an instance of the PredictionEngine model using our GetPredictionEngine() function. You can then pass this to the GetEmbedding() function to create the embedding that represents the content field, as shown below. (If you are using an Azure OpenAI model for the embeddings, then use GetEmbeddingFromAzure() instead of GetEmbedding(), and note that the embedding model is managed by the server, so you don't need to create an instance yourself.)
var predEngine = GetPredictionEngine();
var sentence1 = "That is a very happy person";
HashEntry[] doc1 = {
new("content", sentence1),
new("genre", "persons"),
new("embedding", GetEmbedding(predEngine, sentence1))
};
db.HashSet("doc:1", doc1);
var sentence2 = "That is a happy dog";
HashEntry[] doc2 = {
new("content", sentence2),
new("genre", "pets"),
new("embedding", GetEmbedding(predEngine, sentence2))
};
db.HashSet("doc:2", doc2);
var sentence3 = "Today is a sunny day";
HashEntry[] doc3 = {
new("content", sentence3),
new("genre", "weather"),
new("embedding", GetEmbedding(predEngine, sentence3))
};
db.HashSet("doc:3", doc3);
Run a query
After you have created the index and added the data, you are ready to run a query. To do this, you must create another embedding vector from your chosen query text. Redis calculates the vector distance between the query vector and each embedding vector in the index as it runs the query. We can request the results to be sorted to rank them in order of ascending distance.

The code below creates the query embedding using the GetEmbedding() method, as with the indexing, and passes it as a parameter when the query executes (see Vector search for more information about using query parameters with embeddings). The query is a K nearest neighbors (KNN) search that sorts the results in order of vector distance from the query vector. (As before, replace GetEmbedding() with GetEmbeddingFromAzure() if you are using Azure OpenAI.)
var res = db.FT().Search("vector_idx",
new Query("*=>[KNN 3 @embedding $query_vec AS score]")
.AddParam("query_vec", GetEmbedding(predEngine, "That is a happy person"))
.ReturnFields(
new FieldName("content", "content"),
new FieldName("score", "score")
)
.SetSortBy("score")
.Dialect(2));
foreach (var doc in res.Documents) {
var props = doc.GetProperties();
var propText = string.Join(
", ",
props.Select(p => $"{p.Key}: '{p.Value}'")
);
Console.WriteLine(
$"ID: {doc.Id}, Properties: [\n {propText}\n]"
);
}
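The KNN clause can also be combined with other query predicates. For example, the sketch below (our variation on the query above, not part of the original tutorial) limits the nearest-neighbor search to documents whose genre tag is pets, so only doc:2 can match:

// Hybrid query: filter by the "genre" tag before the KNN vector search.
var hybridRes = db.FT().Search("vector_idx",
    new Query("(@genre:{pets})=>[KNN 2 @embedding $query_vec AS score]")
        .AddParam("query_vec", GetEmbedding(predEngine, "That is a happy person"))
        .ReturnFields(
            new FieldName("content", "content"),
            new FieldName("score", "score")
        )
        .SetSortBy("score")
        .Dialect(2));

foreach (var doc in hybridRes.Documents) {
    Console.WriteLine(doc.Id); // doc:2 is the only document with this tag
}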
Declare TextData and TransformedTextData
Note: Ignore this step if you are using an Azure OpenAI embedding model.
As we noted in the section above about the embedding model, we must declare two very simple classes at the end of the source file. These are required because the API that generates the model expects classes with named fields for the input string and output float array.
class TextData
{
public string Text { get; set; }
}
class TransformedTextData : TextData
{
public float[] Features { get; set; }
}
Run the code
Assuming you have added the code from the steps above to your source file, it is now ready to run, but note that it may take a while to complete when you run it for the first time (this happens because the tokenizer must download the embedding model data before it can generate the embeddings). When you run the code, it outputs the following result text:
ID: doc:1, Properties: [
score: '4.30777168274', content: 'That is a very happy person'
]
ID: doc:2, Properties: [
score: '25.9752807617', content: 'That is a happy dog'
]
ID: doc:3, Properties: [
score: '68.8638000488', content: 'Today is a sunny day'
]
The results are ordered according to the value of the score field, which represents the vector distance here. The lowest distance indicates the greatest similarity to the query. As you would expect, the result for doc:1 with the content text "That is a very happy person" is the result that is most similar in meaning to the query text "That is a happy person".
Learn more
See Vector search for more information about the indexing options, distance metrics, and query format for vectors.