ModelConfig

The ModelConfig class provides configuration for AI model interactions, including provider selection, parameters, and feature flags.

Overview

Properties

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `str` | Required | The model identifier (e.g., `"claude-3-sonnet-20240229"`) |
| `provider` | `str` | Required | The provider name (e.g., `"anthropic"`, `"openai"`) |
| `api_base` | `Optional[str]` | `None` | Custom API base URL |
| `api_key` | `Optional[str]` | `None` | API key for authentication |
| `api_version` | `Optional[str]` | `None` | API version identifier |
| `max_tokens` | `Optional[int]` | `None` | Maximum tokens to generate |
| `max_history_tokens` | `Optional[int]` | `200000` | Maximum tokens to retain in history |
| `temperature` | `float` | `0.7` | Temperature for sampling (0.0-1.0) |
| `use_assistants_api` | `bool` | `False` | Whether to use the OpenAI Assistants API |
| `use_native_adapter` | `bool` | `True` | Whether to use native SDK adapters |
| `streaming_enabled` | `bool` | `False` | Whether to enable streaming responses |
| `enable_token_counting` | `bool` | `True` | Whether to track token usage |
| `vision_enabled` | `Optional[bool]` | `None` | Whether to enable vision capabilities |

Methods

__post_init__

```python
def __post_init__(self):
```

This method is called after initialization to:

  • Set default values for max_history_tokens
  • Set up assistants API configuration
  • Auto-detect vision capabilities based on model name
  • Set backward compatibility properties

get_config

```python
def get_config(self) -> Dict[str, Any]:
```

Returns a dictionary with the configuration values.
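A minimal sketch of how a method like this could be implemented with `dataclasses.asdict` (illustrative only — `ExampleConfig` is a hypothetical stand-in with a subset of the fields from the table above, not the actual `ModelConfig` implementation):

```python
from dataclasses import asdict, dataclass
from typing import Any, Dict, Optional

@dataclass
class ExampleConfig:
    # Hypothetical stand-in for ModelConfig with a few of its fields
    model: str
    provider: str
    temperature: float = 0.7
    max_tokens: Optional[int] = None

    def get_config(self) -> Dict[str, Any]:
        # Return every field as a plain dictionary
        return asdict(self)

cfg = ExampleConfig(model="claude-3-5-sonnet", provider="anthropic")
print(cfg.get_config())
```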

from_env

```python
@classmethod
def from_env(cls):
```

Creates a ModelConfig instance from environment variables:

| Environment Variable | Property |
| --- | --- |
| `PENGUIN_MODEL` | `model` |
| `PENGUIN_PROVIDER` | `provider` |
| `PENGUIN_API_BASE` | `api_base` |
| `PENGUIN_MAX_TOKENS` | `max_tokens` |
| `PENGUIN_TEMPERATURE` | `temperature` |
| `PENGUIN_MAX_HISTORY_TOKENS` | `max_history_tokens` |
| `PENGUIN_USE_ASSISTANTS_API` | `use_assistants_api` |
| `PENGUIN_USE_NATIVE_ADAPTER` | `use_native_adapter` |
| `PENGUIN_STREAMING_ENABLED` | `streaming_enabled` |
| `PENGUIN_VISION_ENABLED` | `vision_enabled` |

Auto-Detection Features

Vision Capabilities

The vision_enabled property is auto-detected if not explicitly set:

  • For Anthropic: True if model name contains "claude-3"
  • For OpenAI: True if model name contains "gpt-4" and "vision"
  • Default: False for other models
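The rules above can be sketched as a small standalone function (a simplified re-statement of the documented behavior, not the library's actual code):

```python
def detect_vision(provider: str, model: str) -> bool:
    # Simplified sketch of the auto-detection rules described above
    name = model.lower()
    if provider == "anthropic":
        return "claude-3" in name
    if provider == "openai":
        return "gpt-4" in name and "vision" in name
    return False
```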

Usage Examples

Basic Configuration

```python
from penguin.llm.model_config import ModelConfig

# Create basic config
config = ModelConfig(
    model="claude-3-5-sonnet",
    provider="anthropic",
    temperature=0.7
)
```

Configuration with Advanced Options

```python
# Create config with advanced options
config = ModelConfig(
    model="claude-3-5-sonnet",
    provider="anthropic",
    max_tokens=4096,
    temperature=0.5,
    use_native_adapter=True,
    streaming_enabled=True,
    vision_enabled=True
)
```

Loading from Environment

```python
import os

from penguin.llm.model_config import ModelConfig

# Set environment variables
os.environ["PENGUIN_MODEL"] = "gpt-4-turbo"
os.environ["PENGUIN_PROVIDER"] = "openai"
os.environ["PENGUIN_TEMPERATURE"] = "0.8"

# Load from environment
config = ModelConfig.from_env()
```

Provider-Specific Features

Anthropic Models

For Anthropic Claude models:

  • Vision automatically enabled for Claude 3 models
  • Native adapter (when use_native_adapter=True) uses Anthropic's Python SDK directly
  • Direct token counting for accurate token usage tracking

OpenAI Models

For OpenAI GPT models:

  • Vision automatically enabled for GPT-4 Vision models
  • Assistants API optionally available via use_assistants_api=True
  • Native OpenAI SDK support is planned for a future update; requests currently route through LiteLLM