Support Responses API, support native message API, fix inconsistent credit consumption in chat #170
Open
caozhiyuan wants to merge 63 commits into ericc-ch:master from
Conversation
…se output message type
…nd state management
…ure in translation tests
…arsing and align with vscode-copilot-chat extractThinkingData, otherwise it will occasionally cause cache misses
…ing signature check and update prompt
…ing small model if no tools are used 2. add bun idleTimeout = 0 3. feat: compatible with Claude Code JSONL file usage error scenarios, delay closeBlockIfOpen, map Responses API to Anthropic, support tool_use, and fix spelling errors 4. feat: add configuration management with extra prompt handling and ensure config file creation
…is incompatible with gpt-5-mini
…ssage translation
…just runServer to set verbose level correctly
…ponses-api # Conflicts: # src/start.ts
…adjusting input token calculations and handling tool prompts
Some clients, like RooCode, may send `service_tier` to the `/responses` endpoint, but Copilot does not support this field and returns an error
… expanded reasoning options and add doc
…ndling in responses
… code skill tool_result
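One recurring fix in the commits above is dropping request fields Copilot rejects, such as the `service_tier` field some clients (e.g. RooCode) send to `/responses`. A minimal TypeScript sketch of that idea; the function and constant names are assumptions for illustration, not the PR's actual code:

```typescript
// Fields known to be rejected by Copilot's /responses endpoint.
// Only `service_tier` is taken from the PR discussion; extend as needed.
const UNSUPPORTED_FIELDS = ["service_tier"] as const;

// Return a copy of the payload with unsupported fields removed,
// so the original client request object is left untouched.
function sanitizeResponsesPayload(
  payload: Record<string, unknown>,
): Record<string, unknown> {
  const clean = { ...payload };
  for (const field of UNSUPPORTED_FIELDS) {
    delete clean[field];
  }
  return clean;
}
```

Stripping the field silently (rather than returning an error to the client) keeps clients like RooCode working without requiring changes on their side.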
Contributor
Author
@ericc-ch This also fixes inconsistent credit consumption in chat and adapts the Claude Code skill tool_result. opencode has already fixed it.
Contributor
Author
This also supports the VS Code extension, in case you need it: https://github.com/caozhiyuan/copilot-api/tree/feature/vscode-extension. It does not depend on Bun.
Force-pushed from 2c2d0db to 0eb8e7f
Hi @caozhiyuan, thanks so much for your efforts on this. Question: none of the models seem to be showing thinking/reasoning in OpenWebUI, do you have any suggestions?
Contributor
Author
@tzhouML It works fine in Cherry Studio.
…update documentation
Force-pushed from 83dcfa8 to 4f6ee78
… 0.38.2 and update opencode plugin to set header x-session-id
Force-pushed from 4f6ee78 to e69e6a8
Force-pushed from 5f5a134 to ca803ba
Change ID format from `message.id-subagent-marker` to `prt-message.id-subagent-marker` to match opencode 1.2.26 requirements.
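The ID change above can be sketched in a few lines of TypeScript; the function name and the idempotency guard are illustrative assumptions, since the PR only states the required output format:

```typescript
// opencode 1.2.26 expects subagent IDs of the form
// `prt-<message.id>-<subagent-marker>`, not `<message.id>-<subagent-marker>`.
function toOpencodeId(messageId: string, subagentMarker: string): string {
  const base = `${messageId}-${subagentMarker}`;
  // Guard against double-prefixing if an ID was already converted.
  return base.startsWith("prt-") ? base : `prt-${base}`;
}
```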
…onsistent with github copilot extension and opencode
This pull request introduces a new configuration system, structured logging, support for the /v1/responses endpoint, and support for the Claude native message API, along with improvements to model selection and request handling. The most important changes are grouped below:

Responses API Integration:

Claude Native Message API:

Configuration Management:
Added a src/lib/config.ts module to provide persistent application configuration, including support for model-specific prompts, reasoning effort levels, and default model selection. Configuration is stored in a new config.json file in the app data directory, with automatic creation and safe permissions. [1] [2]

Logging Improvements:
Added src/lib/logger.ts for handler-level logging, with log rotation, retention, and structured output. Integrated this logger into key request handlers for better diagnostics. [1] [2] [3] [4] [5]

Token Counting Logic:
Updated src/lib/tokenizer.ts to more accurately account for tool calls, array parameters, and model-specific behaviors (including GPT and Anthropic/Grok models). Added support for excluding certain schema keys and improved calculation for nested parameters. [1] [2] [3] [4] [5] [6] [7] [8]

Fix Credit Consumption Inconsistency:
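As a concrete illustration of the configuration system described under "Configuration Management" above, a hypothetical shape for the config.json contents might look like the following. All field names and defaults are assumptions; the PR only states that the config supports model-specific prompts, reasoning effort levels, and default model selection:

```typescript
// Hypothetical shape of the persistent configuration; the actual
// src/lib/config.ts schema in the PR may differ.
interface AppConfig {
  defaultModel?: string;
  reasoningEffort?: "low" | "medium" | "high";
  // Extra prompt text keyed by model name.
  modelPrompts?: Record<string, string>;
}

// Fill in sensible defaults when keys are missing from config.json,
// so callers always receive a fully-populated config object.
function withDefaults(partial: Partial<AppConfig>): Required<AppConfig> {
  return {
    defaultModel: partial.defaultModel ?? "gpt-4.1",
    reasoningEffort: partial.reasoningEffort ?? "medium",
    modelPrompts: partial.modelPrompts ?? {},
  };
}
```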