Description
Session ID: v2:f7640137-c8c3-45a9-afc3-f4405bf91a81
Session Schema: v2.0
Pro User ID: n/a
Issue Description (required)
I am using MiniMax M2.7. My codebase context is about 140k tokens, and M2.7's maximum context is 204.8k tokens. I have capped the max output tokens at 32k, but all day today I have been getting responses that drop in the middle of the output stream, usually after only 2-5k output tokens, even though the model supports up to 131k output tokens. Dyad only supports the OpenAI-compatible API, while MiniMax recommends the Claude (Anthropic) API. Could that be the cause?
I know that 140k of context is a lot for a 205k-context model, but it did not behave this way when the model launched. My usage is barely 10% of my plan's 5-hour rate-limit window.
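To illustrate the difference between the two failure modes described above (assuming Dyad talks to MiniMax through an OpenAI-compatible chat-completions endpoint), here is a minimal sketch of how a client could tell a response that was cut off by the `max_tokens` cap (`finish_reason == "length"`) apart from a stream that simply dropped (the stream ends without any `finish_reason`). The function and threshold names are illustrative, not Dyad internals:

```python
def classify_stream_end(finish_reason, tokens_emitted, max_output_tokens):
    """Classify why a streamed chat completion ended.

    finish_reason: the OpenAI-style finish_reason from the final chunk,
                   or None if the stream ended without one.
    tokens_emitted: how many output tokens were actually received.
    max_output_tokens: the configured max_tokens cap (32k in this report).
    """
    if finish_reason == "stop":
        return "complete"
    if finish_reason == "length":
        # The model hit the configured output cap.
        return "hit max_tokens cap"
    if finish_reason is None and tokens_emitted < max_output_tokens:
        # Stream ended with no finish_reason well below the cap:
        # this looks like a dropped connection, not a token limit.
        return "dropped mid-stream"
    return "unknown"


# The symptom in this report: ~2-5k tokens out, cap of 32k, no clean finish.
print(classify_stream_end(None, 2500, 32000))   # dropped mid-stream
print(classify_stream_end("stop", 2500, 32000))  # complete
```

If the drops were caused by the 32k cap, the final chunk would carry `finish_reason: "length"`; a drop after only 2-5k tokens with no finish reason points at the transport or provider instead.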
Expected Behavior (required)
The model completes the whole output in a single response.
Actual Behavior (required)
The response drops in the middle of the output and I have to press "keep going"; if I retry instead, maybe 1 in 10 attempts actually completes the whole output as needed.
System Information
- Dyad Version: 0.40.0
- Platform: win32
- Architecture: x64
- Node Version: v22.14.0
- PNPM Version: 10.15.0
- Node Path: C:\Program Files\nodejs\node.exe
- Pro User ID: n/a
- Telemetry ID: b6a2441f-ac6f-4507-b7a5-eb5db9c3774a
- Model: custom::minimax:MiniMax-M2.7 | customId: 14
Settings
- Selected Model: custom::minimax:MiniMax-M2.7
- Chat Mode: build
- Auto Approve Changes: true
- Dyad Pro Enabled: n/a
- Thinking Budget: high
- Runtime Mode: n/a
- Release Channel: stable
- Auto Fix Problems: true
- Native Git: true
Logs
{
  source: 'remote',
  version: '2026-03-24T00:43:54.368Z',
  providerCount: 6
}
{
  source: 'remote',
  version: '2026-03-24T00:43:54.368Z',
  expiresAt: '2026-03-24T01:43:54.368Z'
}