# @gadget/ai

Gadget Code's AI API abstraction layer. Provides a single internal API contract for calling AI providers (Ollama, OpenAI) without consumer code knowing which provider is configured.

## Principles

1. **One interface, all providers.** Consumer code calls `createAiApi()` once and holds the resulting `AiApi`. It never checks `provider.sdk` again.
2. **All AI SDK knowledge is contained here.** No consumer imports `ollama` or `openai` SDKs directly.
3. **Responses are normalized.** All provider responses are translated to Gadget Code's internal interface types before returning.

## Usage

```typescript
import { createAiApi } from "@gadget/ai";

const provider = {
  _id: "local-ollama",
  name: "Local Ollama",
  sdk: "ollama",
  baseUrl: "http://localhost:11434",
  apiKey: "",
  defaultModelId: "llama3.2",
};

const modelConfig = {
  provider,
  modelId: "llama3.2",
  params: {
    reasoning: false,
    temperature: 0.8,
    topP: 0.9,
    topK: 40,
  },
};

const ai = createAiApi(provider); // logger argument is optional

const result = await ai.generate(modelConfig, {
  prompt: "Explain what this code does",
  systemPrompt: "You are a code reviewer.",
});

console.log(result.response);
console.log(result.stats.duration.text); // formatted, e.g. "00:00:02"
```

## API

### Factory

**`createAiApi(provider, logger?)`** — Returns an `AiApi` instance for the given provider. `logger` is optional and defaults to a no-op logger. Pass your own logger to receive debug output.
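
For example, a console-backed logger (a minimal sketch; `IAiLogger` requires only the four methods listed under Interfaces below, and the variadic signatures here are an assumption):

```typescript
import { createAiApi, type IAiLogger } from "@gadget/ai";

// Hypothetical console logger; assumes the IAiLogger methods
// accept variadic arguments.
const consoleLogger: IAiLogger = {
  debug: (...args: unknown[]) => console.debug("[ai]", ...args),
  info: (...args: unknown[]) => console.info("[ai]", ...args),
  warn: (...args: unknown[]) => console.warn("[ai]", ...args),
  error: (...args: unknown[]) => console.error("[ai]", ...args),
};

const ai = createAiApi(provider, consoleLogger); // provider as in Usage above
```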

### AiApi

Abstract base class. Currently implemented:

- **`OllamaAiApi`** — Ollama provider
- **`OpenAiApi`** — OpenAI provider (stubbed)

#### `ai.generate(model, options, streamCallback?)`

Single-prompt generation. Returns `IAiGenerateResponse`.
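
The optional `streamCallback` receives output incrementally as the provider streams it. A sketch, assuming the callback is handed text chunks (the exact chunk type is not documented here):

```typescript
// Hypothetical streaming call; the chunk type passed to the
// callback is an assumption.
const result = await ai.generate(
  modelConfig,
  { prompt: "Explain what this code does" },
  (chunk) => process.stdout.write(String(chunk)),
);
```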

#### `ai.chat(model, options, streamCallback?)`

Chat with conversation history. Pass `options.context` for multi-turn conversations. Returns `IAiChatResponse`.
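
A sketch of a multi-turn exchange. The `messages` field name and the presence of `context` on the response are assumptions; only `options.context` itself is documented:

```typescript
// Hypothetical multi-turn chat; field names other than `context`
// are assumptions about IAiChatOptions / IAiChatResponse.
const first = await ai.chat(modelConfig, {
  messages: [{ role: "user", content: "What does this function do?" }],
});

const followUp = await ai.chat(modelConfig, {
  messages: [{ role: "user", content: "Now suggest a simpler version." }],
  context: first.context, // carry the prior turns forward
});
console.log(followUp.response);
```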

### Interfaces

All interfaces are exported for use by consumers:

- **`IAiProvider`** — AI provider configuration
- **`IAiModelConfig`** — Model + runtime parameters
- **`IAiGenerateOptions`** / **`IAiGenerateResponse`**
- **`IAiChatOptions`** / **`IAiChatResponse`** — includes `tool_calls` for function-calling models
- **`IAiInferenceStats`** — token counts and duration (both raw `seconds` number and formatted `text` string)
- **`IAiLogger`** — injectable logger interface (`debug`, `info`, `warn`, `error`)
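
For instance, a consumer helper can be typed entirely against these exports (a sketch reusing `ai` from the Usage example and assuming `result.response` is a string, as that example suggests):

```typescript
import type { IAiModelConfig, IAiGenerateResponse } from "@gadget/ai";

// Typed purely against the exported interfaces; no provider SDK
// types leak in (Principle 2).
async function reviewCode(code: string, model: IAiModelConfig): Promise<string> {
  const result: IAiGenerateResponse = await ai.generate(model, {
    prompt: code,
    systemPrompt: "You are a code reviewer.",
  });
  return result.response;
}
```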

## Providers

### Ollama

Configured via `IAiProvider` with `sdk: "ollama"`. Uses the `ollama` npm package. Handles streaming responses and normalizes Ollama-specific response fields (thinking tokens, token counts, duration).

### OpenAI

Configured via `IAiProvider` with `sdk: "openai"`. Stubbed — `chat()` and `generate()` throw `"Not yet implemented"`. Implement by wiring the `openai` npm package following the same pattern as `OllamaAiApi`.
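
A minimal sketch of that wiring, with several stated assumptions: the `AiApi` constructor signature, the option and response field names, and the normalization shown are guesses at the pattern, not the actual `OllamaAiApi` code:

```typescript
import OpenAI from "openai";
import {
  AiApi,
  type IAiProvider,
  type IAiLogger,
  type IAiModelConfig,
  type IAiGenerateOptions,
} from "@gadget/ai";

// Hypothetical skeleton; chat() is omitted for brevity and would
// follow the same pattern with options.context folded into messages.
export class OpenAiApi extends AiApi {
  private client: OpenAI;

  constructor(provider: IAiProvider, logger?: IAiLogger) {
    super(provider, logger); // assumed base-class signature
    this.client = new OpenAI({
      apiKey: provider.apiKey,
      baseURL: provider.baseUrl, // e.g. a proxy or compatible endpoint
    });
  }

  async generate(model: IAiModelConfig, options: IAiGenerateOptions) {
    const completion = await this.client.chat.completions.create({
      model: model.modelId,
      messages: [
        ...(options.systemPrompt
          ? [{ role: "system" as const, content: options.systemPrompt }]
          : []),
        { role: "user" as const, content: options.prompt },
      ],
      temperature: model.params?.temperature,
      top_p: model.params?.topP,
    });
    // Normalize to the internal response type (Principle 3); mapping
    // completion.usage into IAiInferenceStats is omitted here.
    return { response: completion.choices[0]?.message?.content ?? "" };
  }
}
```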

## Duration Formatting

The library uses `numeral` to provide a consistent formatted duration string (`stats.duration.text`) in `hh:mm:ss` format. The raw duration in seconds is also returned in `stats.duration.seconds` for consumers that need the number.
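
For reference, `numeral`'s time format turns a seconds count into a clock string (a standalone sketch, not the library's internal code; the library may further zero-pad the hours field):

```typescript
import numeral from "numeral";

// numeral's built-in time format renders seconds as h:mm:ss.
numeral(2).format("00:00:00");    // "0:00:02"
numeral(3725).format("00:00:00"); // "1:02:05"
```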

## Adding a New Provider

1. Create `packages/ai/src/<provider>.ts` — extend `AiApi`, implement all abstract methods
2. Update `packages/ai/src/index.ts` — add the new class to the `createAiApi` factory switch (see the sketch below)
3. Update this README

No consumer code changes required.
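
A sketch of step 2, assuming the factory dispatches on `provider.sdk` (the actual switch in `index.ts` may differ, and the constructor arguments are assumptions):

```typescript
// packages/ai/src/index.ts -- hypothetical factory extension.
// `MyProviderAiApi` stands in for the new class from step 1.
export function createAiApi(provider: IAiProvider, logger?: IAiLogger): AiApi {
  switch (provider.sdk) {
    case "ollama":
      return new OllamaAiApi(provider, logger);
    case "openai":
      return new OpenAiApi(provider, logger);
    case "myprovider":
      return new MyProviderAiApi(provider, logger);
    default:
      throw new Error(`Unknown AI SDK: ${provider.sdk}`);
  }
}
```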