Error: Unsupported connection type None for embedding configuration. Expected types: [AzureOpenAI, OpenAI]

Encountering Connection Issues with AI RAG Chat Evaluator

I’m trying to use an evaluation tool for my AI chat application, but I keep running into this error when executing the evaluation command:

python -m evaltools run-evaluation --settings=my_config.json

Configuration Details

my_config.json:

{
  "data_source": "test_input/questions.jsonl",
  "output_folder": "evaluation_results/run<TIMESTAMP>",
  "metrics_to_evaluate": ["gpt_accuracy", "gpt_context_relevance", "gpt_response_quality", "response_time", "text_length"],
  "service_endpoint": "my service url",
  "evaluation_params": {
    "settings": {
      "max_results": 5,
      "response_temp": 0.2,
      "min_ranking_score": 0,
      "search_mode": "combined",
      "use_semantic_ranking": true,
      "include_captions": false,
      "show_follow_ups": false,
      "apply_security_filters": false,
      "embedding_fields": ["text_embedding"],
      "enable_vision_model": false,
      "random_seed": 42
    }
  },
  "response_content_path": "response.text",
  "context_data_path": "metadata.sources.content"
}

Environment Variables:

OPENAI_PROVIDER="openai"
OPENAI_MODEL_NAME="gpt-4o"

# Azure OpenAI settings (not applicable as using standard OpenAI):
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_BASE_URL=""
AZURE_OPENAI_API_KEY=""

# OpenAI settings:
OPENAI_API_KEY="my actual api key here"
OPENAI_ORG_ID=""

# Search service configuration:
AZURE_SEARCH_SERVICE="my search service url"
AZURE_SEARCH_INDEX_NAME="knowledge_base_index"
AZURE_SEARCH_API_KEY=""

I configured the service with a standard OpenAI API key instead of an Azure OpenAI key. The exact error message reads:

Not Support connection type None for embedding api. Connection type should be in [AzureOpenAI, OpenAI]

What steps should I take to resolve this embedding connection error? It appears the tool cannot determine which embedding service to use.

The issue is that your embedding configuration is incomplete. Since you’re using standard OpenAI rather than Azure OpenAI, you need to specify the embedding model explicitly in your environment variables: add OPENAI_EMBEDDING_MODEL="text-embedding-ada-002", which is typically the default embedding model for OpenAI. Also verify that the evaluation tool is actually reading your OPENAI_PROVIDER variable. I ran into similar issues when the tool couldn’t determine which embedding service to use because no embedding model was explicitly defined. The error shows the connection type resolving to None, which usually means the configuration parser can’t match your settings to either the Azure or the standard OpenAI pattern. Double-check that all your OpenAI environment variables are exported and visible to the evaluation script.
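For reference, the OpenAI portion of the environment would then look roughly like this. This is a sketch only: OPENAI_EMBEDDING_MODEL is the variable name I’m assuming here, so confirm the exact name against your evaltools version’s documentation.

# OpenAI settings with an explicit embedding model (variable name assumed, not confirmed):
OPENAI_PROVIDER="openai"
OPENAI_MODEL_NAME="gpt-4o"
OPENAI_EMBEDDING_MODEL="text-embedding-ada-002"
OPENAI_API_KEY="my actual api key here"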

Had this problem with a similar RAG evaluation setup last month. The error occurs because the evaltools library requires an explicit embedding configuration even when your main service is configured correctly. You need to add embedding-specific parameters to your JSON config file. Try adding "embedding_deployment": "text-embedding-ada-002" and "embedding_api_base": "https://api.openai.com/v1" at the root level of my_config.json, as sketched below. The evaluation tool treats embeddings as a separate service connection from your chat completion service, so they need their own configuration. I also noticed you have OPENAI_ORG_ID set to an empty string; either populate it with your actual organization ID or remove it entirely, since empty values can interfere with OpenAI client initialization. Make sure to restart your terminal session after updating environment variables so they are properly loaded.
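Here is a sketch of the top of my_config.json with those two keys added at the root (other keys unchanged and omitted here; the key names are what worked in my setup, so verify them against your evaltools version):

{
  "data_source": "test_input/questions.jsonl",
  "output_folder": "evaluation_results/run<TIMESTAMP>",
  "embedding_deployment": "text-embedding-ada-002",
  "embedding_api_base": "https://api.openai.com/v1"
}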

looks like you're missing the embedding connection type parameter in the config itself. i had a similar issue and solved it by adding "embedding_connection_type": "OpenAI" directly to the json config file. sometimes the tool can't automatically detect which service you're using even with the env vars set correctly. also check that you're on a recent evaltools version, since older versions had bugs with openai embedding detection.
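something like this (sketch only, other keys left out; the key name is from my setup, so double-check it against your evaltools version):

{
  "data_source": "test_input/questions.jsonl",
  "embedding_connection_type": "OpenAI",
  "service_endpoint": "my service url"
}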

I had this exact same error recently, and the root cause was a missing embedding endpoint configuration in my JSON config file. Your configuration is missing the embedding service specification entirely. Try adding the OpenAI embedding details to my_config.json: "embedding_endpoint": "https://api.openai.com/v1/embeddings" and "embedding_model": "text-embedding-ada-002" directly in the main config object, as sketched below. The evaluation tool needs to know where to send embedding requests, and without an explicit embedding configuration the connection type defaults to None. Also make sure your OPENAI_API_KEY has access to the embeddings endpoint; some restricted keys only work with chat completions. After adding the embedding configuration to the JSON, the tool should properly initialize the OpenAI connection for embeddings.
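A sketch of the main config object with those keys added (the key names follow what worked for me, so treat them as assumptions and confirm them against your evaltools version; unrelated keys are omitted):

{
  "data_source": "test_input/questions.jsonl",
  "embedding_endpoint": "https://api.openai.com/v1/embeddings",
  "embedding_model": "text-embedding-ada-002",
  "response_content_path": "response.text",
  "context_data_path": "metadata.sources.content"
}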

This error typically occurs when the evaluation tool cannot properly parse your embedding service configuration from the environment variables. Based on your setup, the issue is likely that you need to set OPENAI_EMBEDDING_ENDPOINT environment variable explicitly even when using standard OpenAI. Try adding OPENAI_EMBEDDING_ENDPOINT="https://api.openai.com/v1" to your environment variables. Another common cause is having empty string values for Azure variables which can confuse the connection type detection logic. Instead of setting AZURE_OPENAI_DEPLOYMENT="", completely remove or unset those Azure-related environment variables when using standard OpenAI. The tool might be interpreting empty Azure variables as an attempt to use Azure OpenAI service. Also verify your API key has embedding permissions by testing it directly with a curl request to the OpenAI embeddings endpoint before running the evaluation.
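A quick sketch of those checks in a POSIX shell (the unset list mirrors your Azure variables; OPENAI_EMBEDDING_ENDPOINT is an assumed variable name, so confirm it against your evaltools version; the curl call is a standard OpenAI embeddings request):

# Remove the empty Azure variables so they can't be mistaken for an Azure OpenAI setup:
unset AZURE_OPENAI_DEPLOYMENT AZURE_OPENAI_BASE_URL AZURE_OPENAI_API_KEY

# Assumed variable name; confirm against your evaltools version:
export OPENAI_EMBEDDING_ENDPOINT="https://api.openai.com/v1"

# Confirm the API key can reach the embeddings endpoint before running the evaluation:
curl https://api.openai.com/v1/embeddings \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": "connection test", "model": "text-embedding-ada-002"}'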