# @gadget/ai
Gadget Code's AI API abstraction layer. Provides a single internal API contract for calling AI providers (Ollama, OpenAI) without consumer code knowing which provider is configured.
## Principles

- **One interface, all providers.** Consumer code calls `createAiApi()` once and holds the resulting `AiApi`. It never checks `provider.sdk` again.
- **All AI SDK knowledge is contained here.** No consumer imports the `ollama` or `openai` SDKs directly.
- **Responses are normalized.** All provider responses are translated to Gadget Code's internal interface types before returning.
## Usage

```ts
import { createAiApi } from "@gadget/ai";

const provider = {
  _id: "local-ollama",
  name: "Local Ollama",
  sdk: "ollama",
  baseUrl: "http://localhost:11434",
  apiKey: "",
};

const modelConfig = {
  provider,
  modelId: "llama3.2",
  params: {
    reasoning: false,
    temperature: 0.8,
    topP: 0.9,
    topK: 40,
  },
};

const ai = createAiApi(provider, logger); // logger is optional (see API below)

const result = await ai.generate(modelConfig, {
  prompt: "Explain what this code does",
  systemPrompt: "You are a code reviewer.",
});

console.log(result.response);
console.log(result.stats.duration.text); // formatted, e.g. "00:00:02"
```
## API

### Factory

`createAiApi(provider, logger?)` — returns an `AiApi` instance for the given provider. `logger` is optional and defaults to a no-op logger; pass your own logger to receive debug output.
### AiApi

Abstract base class. Currently implemented:

- `OllamaAiApi` — Ollama provider
- `OpenAiApi` — OpenAI provider (stubbed)
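The factory pattern above can be sketched as a switch on `provider.sdk` that instantiates one of these classes. This is a simplified illustration with toy class names and signatures, not the package's actual implementation:

```typescript
// Sketch of the createAiApi dispatch. Class bodies, the Provider/Logger
// shapes, and method signatures are simplified assumptions for illustration.
interface Provider { sdk: "ollama" | "openai"; baseUrl: string; apiKey: string }
interface Logger { debug(msg: string): void }

const noopLogger: Logger = { debug: () => {} };

abstract class AiApiSketch {
  constructor(protected provider: Provider, protected logger: Logger) {}
  abstract generate(prompt: string): Promise<string>;
}

class OllamaSketch extends AiApiSketch {
  async generate(prompt: string): Promise<string> {
    // Real code would call the ollama SDK and normalize the response.
    return `ollama:${prompt}`;
  }
}

class OpenAiSketch extends AiApiSketch {
  async generate(_prompt: string): Promise<string> {
    throw new Error("Not yet implemented"); // mirrors the stubbed provider
  }
}

function createAiApiSketch(provider: Provider, logger: Logger = noopLogger): AiApiSketch {
  logger.debug(`creating AiApi for sdk=${provider.sdk}`);
  switch (provider.sdk) {
    case "ollama": return new OllamaSketch(provider, logger);
    case "openai": return new OpenAiSketch(provider, logger);
    default: throw new Error(`Unknown sdk: ${provider.sdk satisfies never}`);
  }
}
```

Because the consumer only holds the returned base-class reference, adding a provider means adding one `case` here and nothing elsewhere.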
### `ai.generate(model, options, streamCallback?)`

Single-prompt generation. Returns `IAiGenerateResponse`.
### `ai.chat(model, options, streamCallback?)`

Chat with conversation history. Pass `options.context` for multi-turn conversations. Returns `IAiChatResponse`.
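To show how threading `options.context` across turns works in principle, here is a toy in-memory sketch. The `{ role, content }` message shape is an assumption for illustration; consult `IAiChatOptions` for the real definition:

```typescript
// Toy sketch of multi-turn context threading. The message shape and the
// "echo" reply are illustrative only; the real chat() calls a provider.
interface Msg { role: "user" | "assistant" | "system"; content: string }
interface ChatResult { response: string; context: Msg[] }

async function chatTurn(context: Msg[], userText: string): Promise<ChatResult> {
  // Append the new user message to the accumulated history.
  const history: Msg[] = [...context, { role: "user", content: userText }];
  const reply = `echo(${history.length}): ${userText}`;
  // Return the reply plus the updated context to pass into the next turn.
  return { response: reply, context: [...history, { role: "assistant", content: reply }] };
}
```

The key point is that the caller feeds each result's `context` back into the next call, so the provider sees the full conversation.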
## Interfaces

All interfaces are exported for use by consumers:

- `IAiProvider` — AI provider configuration
- `IAiModelConfig` — model + runtime parameters
- `IAiGenerateOptions` / `IAiGenerateResponse`
- `IAiChatOptions` / `IAiChatResponse` — includes `tool_calls` for function-calling models
- `IAiInferenceStats` — token counts and duration (both a raw `seconds` number and a formatted `text` string)
- `IAiLogger` — injectable logger interface (`debug`, `info`, `warn`, `error`)
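For orientation, the two configuration interfaces can be inferred from the Usage example above. These field lists are reconstructions from that example, not the authoritative exported definitions:

```typescript
// Shapes inferred from the Usage example; illustrative, not authoritative.
interface IAiProviderSketch {
  _id: string;
  name: string;
  sdk: "ollama" | "openai";
  baseUrl: string;
  apiKey: string;
}

interface IAiModelConfigSketch {
  provider: IAiProviderSketch;
  modelId: string;
  params: {
    reasoning: boolean;
    temperature: number;
    topP: number;
    topK: number;
  };
}

const provider: IAiProviderSketch = {
  _id: "local-ollama",
  name: "Local Ollama",
  sdk: "ollama",
  baseUrl: "http://localhost:11434",
  apiKey: "",
};

const modelConfig: IAiModelConfigSketch = {
  provider,
  modelId: "llama3.2",
  params: { reasoning: false, temperature: 0.8, topP: 0.9, topK: 40 },
};
```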
## Providers

### Ollama

Configured via `IAiProvider` with `sdk: "ollama"`. Uses the `ollama` npm package. Handles streaming responses and normalizes Ollama-specific response fields (thinking tokens, token counts, duration).
### OpenAI

Configured via `IAiProvider` with `sdk: "openai"`. Stubbed — `chat()` and `generate()` throw "Not yet implemented". Implement by wiring the `openai` npm package, following the same pattern as `OllamaAiApi`.
## Duration Formatting

The library uses `numeral` to provide a consistent formatted duration string (`stats.duration.text`) in `hh:mm:ss` format. The raw duration is also returned as a number in `stats.duration.seconds` for consumers that need it.
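The formatting behavior is equivalent to the following plain-TypeScript sketch (the library itself delegates to `numeral`; this just shows the expected input/output relationship):

```typescript
// Equivalent hh:mm:ss formatting logic, written out for illustration.
// The library produces the same style of string via numeral.
function formatDuration(totalSeconds: number): string {
  const s = Math.floor(totalSeconds);
  const hh = String(Math.floor(s / 3600)).padStart(2, "0");
  const mm = String(Math.floor((s % 3600) / 60)).padStart(2, "0");
  const ss = String(s % 60).padStart(2, "0");
  return `${hh}:${mm}:${ss}`;
}
```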
## Adding a New Provider

1. Create `packages/ai/src/<provider>.ts` — extend `AiApi` and implement all abstract methods.
2. Update `packages/ai/src/index.ts` — add the new class to the `createAiApi` factory switch.
3. Update this README.

No consumer code changes are required.
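Step 1 might look like the skeleton below. The abstract method signatures here are simplified placeholders; mirror the real signatures in `OllamaAiApi` when implementing:

```typescript
// Hypothetical skeleton for a new provider class. The base-class shape and
// method signatures are simplified assumptions for illustration.
abstract class AiApiBase {
  abstract generate(prompt: string): Promise<string>;
  abstract chat(messages: string[]): Promise<string>;
}

class ExampleProviderApi extends AiApiBase {
  async generate(prompt: string): Promise<string> {
    // Call the provider's SDK here, then normalize the response into
    // the package's internal interface types before returning.
    return `example: ${prompt}`;
  }

  async chat(messages: string[]): Promise<string> {
    // Same idea for chat: forward history, normalize the reply.
    return `example chat over ${messages.length} message(s)`;
  }
}
```

Once the class exists and the factory switch knows about it, consumers pick it up simply by configuring a provider with the new `sdk` value.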