The persona field was missing from both the session object and the API
response during sign-in, causing User Settings to display an empty field
even when the user had a persona defined in the database.
1. Vite config: make HTTPS conditional on the SSL cert/key files existing
(pre-existing issue; builds broke when the certs were not present)
2. Drone reconnect handler: use socket.io Manager-level 'reconnect'
event (this.socket.io.on) instead of Socket-level event, and add
explicit type annotation for attemptNumber parameter
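The Manager-vs-Socket distinction in (2) can be illustrated with a minimal sketch. The two emitters below are stand-ins for socket.io's `Socket` and its underlying `Manager` (exposed as `socket.io`), not the project's actual code; `'reconnect'` fires on the Manager, so a handler registered on the Socket itself never sees it.

```typescript
import { EventEmitter } from "node:events";

// Illustrative stand-ins for socket.io's two emitters: the Socket and its
// underlying Manager (reachable as socket.io). The real 'reconnect' event is
// emitted by the Manager, which is why this.socket.io.on is required.
const manager = new EventEmitter();
const socket = Object.assign(new EventEmitter(), { io: manager });

let seen = -1;
// Manager-level registration, with the explicit attemptNumber type annotation:
socket.io.on("reconnect", (attemptNumber: number) => {
  seen = attemptNumber;
});

manager.emit("reconnect", 3);
console.log(seen); // 3 — the Manager-level handler fired
```

Registering the same handler via `socket.on("reconnect", ...)` would leave `seen` untouched, which is exactly the bug the fix addresses.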
This commit addresses two interrelated issues causing drones to
de-register and users to be forcibly signed out:
## Heartbeat Timeout Fixes
1. Move heartbeat interval to a Web Worker (not subject to browser
tab throttling). Chrome throttles setInterval in background tabs
to ~1/min, which causes the 19s heartbeat to miss the drone's
timeout timer. The Web Worker fires reliably regardless of tab
visibility.
2. Add visibilitychange handler: when the tab becomes visible again,
send an immediate heartbeat to reset the drone's timer after any
throttling that may have occurred.
3. Fix onReleaseSessionLock to clear the heartbeat timer. Previously,
releasing the lock left the 60s timer running, causing a spurious
timeout and status emit after the lock was already released.
4. Increase drone heartbeat timeout from 60s to 120s. With the Web
Worker fix, heartbeats should be reliable, but doubling the timeout
provides a generous safety margin.
5. Add socket disconnect/reconnect handlers on the drone side. On
disconnect, clear the heartbeat timer. On reconnect, re-emit drone
status so the platform knows the drone is alive.
6. Configure Socket.IO pingInterval/pingTimeout explicitly (25s/60s)
instead of relying on defaults.
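The control flow of (1) and (2) can be sketched as follows. The Web Worker is modeled here as a plain `setInterval` so the sketch runs outside a browser; in the real fix the interval lives in `heartbeat.worker.ts`, which Chrome does not throttle in background tabs. All names are illustrative assumptions.

```typescript
// Sketch of the heartbeat scheduler: a steady interval plus an immediate
// beat on visibilitychange to reset the drone's timer after throttling.
function startHeartbeat(send: () => void, intervalMs: number) {
  const timer = setInterval(send, intervalMs);
  return {
    // visibilitychange handler: fire one immediate heartbeat when the tab
    // becomes visible again, covering any beats lost to throttling.
    onVisible: () => send(),
    stop: () => clearInterval(timer),
  };
}

let beats = 0;
const hb = startHeartbeat(() => { beats += 1; }, 19_000);
hb.onVisible(); // simulate the tab becoming visible again
hb.stop();
console.log(beats); // 1 — only the immediate beat; the 19s interval never fired
```

In the browser, `onVisible` would be wired to `document.addEventListener("visibilitychange", ...)` and `send` would post a message to the worker-driven socket emit.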
## JWT Expiration Fixes
1. Increase WebToken DB record expiration from 1 hour to 7 days. The
1-hour expiration was the real session lifetime gate (the JWT crypto
exp was already 24h), and it was far too aggressive for a dev tool.
2. Fix web /auth/renew-token endpoint to use req.user from the session
cookie instead of verifyJsonWebToken(req.body.token). This eliminates
the catch-22 where an expired token cannot be used to request its
own renewal.
3. Fix token refresh response parsing. The API v1 renew-token endpoint
returns { success: true, token } at the top level, but the frontend
was looking for json.data?.token, causing every refresh to fail.
4. Add proactive token refresh: check the JWT exp claim before each
request and refresh if expiring within 5 minutes. This avoids
unnecessary 401 errors and the resulting socket disconnections.
5. Update socket JWT on token renewal via a callback registered in
App.tsx. This ensures that future socket reconnections use the new
token instead of the expired one.
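The proactive check in (4) amounts to decoding the JWT's `exp` claim and comparing it against a 5-minute window. A Node-flavored sketch (using `Buffer` rather than the browser's `atob`; the function names are assumptions, not the project's API):

```typescript
// Decode the JWT payload (middle base64url segment) and flag the token for
// refresh when it expires within windowSeconds. exp is seconds since the
// epoch per RFC 7519.
function jwtExpiresSoon(token: string, windowSeconds = 300, now = Date.now()): boolean {
  const payloadB64 = token.split(".")[1];
  if (!payloadB64) return true; // malformed: treat as needing refresh
  const payload = JSON.parse(Buffer.from(payloadB64, "base64url").toString("utf8"));
  return typeof payload.exp !== "number" || payload.exp * 1000 - now < windowSeconds * 1000;
}

// Helper to mint an unsigned test token expiring N seconds from now:
const mint = (secondsFromNow: number) =>
  ["e30",
   Buffer.from(JSON.stringify({ exp: Math.floor(Date.now() / 1000) + secondsFromNow }))
     .toString("base64url"),
   "sig"].join(".");

console.log(jwtExpiresSoon(mint(60)));   // true  — expires in 1 minute
console.log(jwtExpiresSoon(mint(3600))); // false — a full hour left
```

When the check returns true, the client calls the renew endpoint before issuing the request, avoiding the 401 and the socket disconnect it triggers.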
## Files Modified
- gadget-code/frontend/src/workers/heartbeat.worker.ts (NEW)
- gadget-code/frontend/src/lib/socket.ts
- gadget-code/frontend/src/lib/api.ts
- gadget-code/frontend/src/App.tsx
- gadget-code/src/services/session.ts
- gadget-code/src/controllers/auth.ts
- gadget-code/src/services/socket.ts
- gadget-drone/src/gadget-drone.ts
Fixes premature AI API response truncation by propagating inference
parameters through the entire probe → storage → runtime → API call chain.
Root cause: Ollama defaults num_predict to 128 tokens and num_ctx to
4096, silently truncating output and context. We never overrode these.
Changes:
- IAiModelSettings: add numPredict, maxCompletionTokens fields
- IDroneModelConfig: moved from gadget-drone to @gadget/api (shared),
expanded with numPredict, numCtx, maxCompletionTokens params
- IAiModelConfig.params: add numPredict, numCtx, maxCompletionTokens
- IAiModelProbeResult.settings: add numPredict, maxCompletionTokens
- AiModelSettingsSchema (Mongoose): add numPredict, maxCompletionTokens
- Ollama extractSettings(): extract num_predict from model parameters
- Ollama generate()/chat(): pass options: { num_ctx, num_predict }
- OpenAI all three create() calls: add max_completion_tokens
- web-cli.ts onProviderProbe(): compute numPredict (-1 for Ollama)
and maxCompletionTokens (contextWindow for OpenAI) during probe
- agent.ts main + subagent loops: read model settings from provider
cached models, build IDroneModelConfig with stored params
- ai.ts: remove local IDroneModelConfig, import from @gadget/api
- chat-session.ts: add new params to title generation call
- Tests: update all fixtures with new params, all 19 tests pass
Defaults when model settings unavailable:
- numPredict: -1 (Ollama unlimited - generate until natural stop)
- numCtx: 131072 (128k - covers most modern models)
- maxCompletionTokens: 16384 (16k - reasonable OpenAI default)
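The Ollama side of the change boils down to carrying the stored params into the request's `options` object, since Ollama otherwise defaults `num_predict` to 128 and `num_ctx` to 4096. A sketch under assumed names (the builder function is illustrative; the `options` keys match Ollama's API):

```typescript
interface ModelParams {
  numPredict: number; // -1 = Ollama "generate until natural stop"
  numCtx: number;
}

// Build the body for an Ollama /api/chat call, overriding the silent
// 128-token / 4096-context defaults with the probed model settings.
function buildOllamaChatBody(model: string, prompt: string, p: ModelParams) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    options: { num_predict: p.numPredict, num_ctx: p.numCtx },
  };
}

const body = buildOllamaChatBody("llama3", "hi", { numPredict: -1, numCtx: 131072 });
console.log(body.options); // { num_predict: -1, num_ctx: 131072 }
```

The OpenAI path is analogous but sets `max_completion_tokens` at the top level of the `create()` call rather than in a nested options object.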
The Chat View area was growing in width without bound; this has been fixed.
From the agent:
> The root cause was a flexbox sizing rule: in ChatSessionView.tsx:768
the parent row flex-1 flex bg-bg-primary overflow-hidden relative has
two children — the content area and the sidebar. The content area (line
777) was flex-1 flex flex-col relative, which as a flex child defaults
to min-width: auto. This means its intrinsic content width (driven by
the wide <pre>) is used as a minimum, forcing the flex item to expand
the whole row beyond the viewport.
>
> Adding min-w-0 overrides min-width: auto to min-width: 0, letting the
flex item shrink below its content's natural width. Now the
overflow-x-auto on the markdown wrapper actually has somewhere to scroll
into.
GPT 5.5 is performing badly and breaking things left and right. These
changes will likely all get dropped.
Move the 6 duplicated logging modules (component, log, log-transport,
log-transport-console, log-transport-file, log-file) from both
gadget-code (Dtp* prefix) and gadget-drone (Gadget* prefix) into
@shad/api, using gadget-drone's GadgetLog as the canonical version.
GadgetLog now uses static configuration (consoleEnabled, defaultFile)
set by each consumer's env.ts at module scope, removing the env
dependency from the shared library. The addDefaultTransport/
removeDefaultTransport/getDefaultTransports static methods are
preserved for future real-time log transport injection.
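The static-configuration pattern described above can be sketched as follows. The shared class carries no env dependency; each consumer's `env.ts` assigns the statics at module scope. Field and method names follow the text; the transport shape is an illustrative assumption.

```typescript
type Transport = (line: string) => void;

// Sketch of the canonical GadgetLog: configuration lives in static fields
// set by the consumer, not read from an env module inside the shared library.
class GadgetLog {
  static consoleEnabled = false;
  static defaultFile: string | undefined;
  private static transports: Transport[] = [];

  static addDefaultTransport(t: Transport) { this.transports.push(t); }
  static removeDefaultTransport(t: Transport) {
    this.transports = this.transports.filter((x) => x !== t);
  }
  static getDefaultTransports(): Transport[] { return [...this.transports]; }

  static log(line: string) {
    if (this.consoleEnabled) console.log(line);
    for (const t of this.transports) t(line); // future real-time injection point
  }
}

// A consumer's env.ts would do this at module scope:
GadgetLog.consoleEnabled = true;
GadgetLog.defaultFile = "/tmp/gadget.log";
GadgetLog.log("boot"); // printed, because consoleEnabled is true
```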
Replace the broken provider/model <select> elements (which sent an empty model
on provider change and were rejected by the backend) with a cog-icon-driven edit flow:
- Default view shows current provider/model as text labels with a cog icon
- Clicking cog enters edit mode with <select> elements + checkmark/cancel
- Save atomically sends both provider and model; Save disabled until both set
- Cancel restores original values; whole view grays out during edit
Apply the same cog metaphor to the session Name field — inline text edit
with save/cancel, Enter to confirm, Escape to cancel. No global gray-out.
User Settings will let the user enter a Persona, a description of the
user to be included in the system prompt. This helps calibrate the agent
to better assist each individual user in the ways that work best for them.
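How the persona might be folded into the system prompt, as a minimal sketch; the function name, template wording, and field handling are assumptions, not the project's actual implementation.

```typescript
// Append the user's persona to the base system prompt when one is set;
// an empty or whitespace-only persona leaves the prompt unchanged.
function buildSystemPrompt(base: string, persona?: string): string {
  if (!persona?.trim()) return base;
  return `${base}\n\nAbout the user: ${persona.trim()}`;
}

console.log(buildSystemPrompt("You are a helpful agent.", "Senior Go developer"));
console.log(buildSystemPrompt("You are a helpful agent."));
```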