Query Transformation - Vectorizer Stage
To use vector search or hybrid vector search, you need to generate a vector representation of your query. This is done with a large language model embedding.
Therefore, you need to configure a query vectorizer as a query processing stage. This query stage has the following options.
Id of the transformer: the kind of transformer stage; for vectorization/embeddings, this must be set to Vectorizer.
Type: here you can configure the type of your language model provider. The Suite supports Open Llama as a local model as well as Azure OpenAI GPT. Support for more models is planned.
Open Llama configuration
Embedding model: here you can provide the name of the embedding model you want to use, for example mxbai-embed-large.
Use authentication: if enabled, the Suite uses basic authentication when communicating with the embedding endpoint. Please provide a corresponding username and password.
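Basic authentication simply sends the username and password Base64-encoded in an Authorization header. A minimal sketch of what such a request carries (the credentials below are placeholders, not Suite defaults):

```python
import base64

# Placeholder credentials -- use the username/password configured in the stage
username, password = "embed-user", "s3cret"

# Basic auth encodes "username:password" as Base64 in the Authorization header
token = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
print(headers["Authorization"])
```

Because the credentials travel only Base64-encoded, basic authentication should be combined with TLS, which is why the certificate settings below matter.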
Public keys for SSL certificates: this configuration is needed if you run the environment with self-signed certificates or with certificates that are not known to the Java key store.
The Suite uses a straightforward approach to validate SSL certificates: to render a certificate valid, add the modulus of its public key to this text field. You can access the modulus by viewing the certificate in the browser.
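Besides viewing the certificate in the browser, the modulus can also be read with the openssl command line tool. The sketch below generates a throwaway self-signed certificate and then extracts its modulus the same way you would for your real server certificate (file names and subject are illustrative):

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    key, crt = os.path.join(d, "server.key"), os.path.join(d, "server.crt")
    # Create a throwaway self-signed certificate (illustration only)
    subprocess.run(
        ["openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
         "-keyout", key, "-out", crt, "-days", "1", "-subj", "/CN=localhost"],
        check=True, capture_output=True)
    # Print the public-key modulus -- this is the value to paste into the text field
    out = subprocess.run(["openssl", "x509", "-in", crt, "-noout", "-modulus"],
                         check=True, capture_output=True, text=True)
    modulus = out.stdout.strip().removeprefix("Modulus=")
    print(modulus)
```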
Azure OpenAI GPT configuration
GPT Endpoint: provide the endpoint, such as https://<baseUrl>.openai.azure.com/openai/deployments/<deploymentName>/embeddings?api-version=<version>
Password: here please add your API key, which you can find in the Azure OpenAI configuration in the Azure portal.
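A minimal sketch of the request that goes against such an endpoint, assuming placeholder resource, deployment, and API version names (Azure OpenAI expects the key in an api-key header):

```python
import json
import urllib.request

# Placeholder values -- substitute your resource, deployment, and API version
base_url, deployment, version = "my-resource", "my-embedding-deployment", "2023-05-15"
endpoint = (f"https://{base_url}.openai.azure.com/openai/"
            f"deployments/{deployment}/embeddings?api-version={version}")

payload = json.dumps({"input": "example search query"}).encode()
request = urllib.request.Request(
    endpoint,
    data=payload,
    headers={"api-key": "<your-API-key>", "Content-Type": "application/json"},
)
# urllib.request.urlopen(request) would return a JSON body whose
# "data" entries contain the embedding vectors
print(request.full_url)
```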
At query time, the stage automatically transforms the input query into a vector representation, which is then sent to the vector fields of the search engine. Which fields are configured as vector fields can be looked up in the respective search engine configuration dialogs (cf. the documentation at Search Engines).
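Conceptually, the search engine then ranks documents by the similarity between the query vector and each document's vector. A toy illustration using cosine similarity (the vectors are made up; real embeddings have hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query_vec = [0.1, 0.9, 0.2]  # made-up query embedding
doc_vecs = {                 # made-up document embeddings in the vector field
    "doc1": [0.1, 0.8, 0.3],
    "doc2": [0.9, 0.1, 0.0],
}
ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)
print(ranked)  # → ['doc1', 'doc2']
```

In hybrid search, this vector similarity score is combined with classic keyword relevance to produce the final ranking.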