test: add API and dashboard integration tests#5362
test: add API and dashboard integration tests#5362whatevertogo wants to merge 3 commits intoAstrBotDevs:masterfrom
Conversation
- Add API key and OpenAPI integration tests - Update dashboard and main entry tests - Update API compatibility smoke tests Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary of ChangesHello @whatevertogo, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the testing infrastructure of the Highlights
Changelog
Activity
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Code Review
This pull request introduces comprehensive integration tests for API endpoints, dashboard functionality, and adapter compatibility, along with robustness improvements in the main agent and a fallback mechanism for the cron manager. The new smoke tests for API compatibility and runtime adapter checks are particularly valuable for preventing regressions. However, a prompt injection vulnerability was identified in the conversation title generation logic. Addressing this will improve the security posture of the application by preventing potential manipulation of the dashboard's display and mitigating the risk of stored XSS. Additionally, consider improving the reliability of the test suite by adding timeouts to subprocess executions.
| "(e.g., “hi”, “hello”, “haha”), return <None>. " | ||
| "Output only the title itself or <None>, with no explanations." | ||
| ), | ||
| prompt=f"Generate a concise title for the following user query:\n{user_prompt}", |
There was a problem hiding this comment.
The conversation title generation logic in _handle_webchat is vulnerable to prompt injection because it directly concatenates untrusted user input (user_prompt) into the LLM prompt. An attacker could craft a message that overrides the instructions in the system prompt to control the generated title. Since this title is stored in the database and displayed in the web dashboard, this could lead to Stored Cross-Site Scripting (XSS) if the dashboard does not properly escape the session titles, or at least allow for defacement of the management interface.
To remediate this, use delimiters to wrap the user input and update the system prompt to only consider the content within those delimiters. Additionally, ensure the output is sanitized before storage and rendering.
| prompt=f"Generate a concise title for the following user query:\n{user_prompt}", | |
| prompt=f"Generate a concise title for the following user query:\n<user_query>\n{user_prompt}\n</user_query>", |
| def _run_python(code: str) -> subprocess.CompletedProcess[str]: | ||
| repo_root = Path(__file__).resolve().parents[2] | ||
| return subprocess.run( | ||
| [sys.executable, "-c", textwrap.dedent(code)], | ||
| cwd=repo_root, | ||
| capture_output=True, | ||
| text=True, | ||
| check=False, | ||
| ) |
There was a problem hiding this comment.
The _run_python helper executes code in a subprocess without a timeout. If the subprocess hangs (e.g., due to a deadlock in the mocked code or an infinite loop), the entire test suite will hang indefinitely. It is recommended to add a reasonable timeout (e.g., 30 seconds) to the subprocess.run call.
| def _run_python(code: str) -> subprocess.CompletedProcess[str]: | |
| repo_root = Path(__file__).resolve().parents[2] | |
| return subprocess.run( | |
| [sys.executable, "-c", textwrap.dedent(code)], | |
| cwd=repo_root, | |
| capture_output=True, | |
| text=True, | |
| check=False, | |
| ) | |
| def _run_python(code: str) -> subprocess.CompletedProcess[str]: | |
| repo_root = Path(__file__).resolve().parents[2] | |
| return subprocess.run( | |
| [sys.executable, "-c", textwrap.dedent(code)], | |
| cwd=repo_root, | |
| capture_output=True, | |
| text=True, | |
| check=False, | |
| timeout=30, | |
| ) |
| result = subprocess.run( | ||
| [sys.executable, "-c", script], | ||
| capture_output=True, | ||
| text=True, | ||
| cwd=repo_root, | ||
| check=False, | ||
| ) |
There was a problem hiding this comment.
Similar to the other subprocess execution in the test suite, this subprocess.run call lacks a timeout. Adding a timeout ensures that the test suite fails gracefully if the plugin import process hangs.
| result = subprocess.run( | |
| [sys.executable, "-c", script], | |
| capture_output=True, | |
| text=True, | |
| cwd=repo_root, | |
| check=False, | |
| ) | |
| result = subprocess.run( | |
| [sys.executable, "-c", script], | |
| capture_output=True, | |
| text=True, | |
| cwd=repo_root, | |
| check=False, | |
| timeout=30, | |
| ) |
There was a problem hiding this comment.
Hey - 我发现了 1 个问题,并提供了一些整体性的反馈:
- 新的适配器运行时冒烟测试在子进程字符串中内联了大量、且大多是重复的 SDK stub 代码;建议将这些通用 stub(例如 quart、wechatpy、slack_sdk、lark_oapi 等)提取到可复用的辅助模块或脚本中,以减少重复代码,并让这些 shim 将来的维护更安全、更容易。
给 AI Agent 的提示
请根据本次代码评审的评论进行修改:
## 整体意见
- 新的适配器运行时冒烟测试在子进程字符串中内联了大量、且大多是重复的 SDK stub 代码;建议将这些通用 stub(例如 quart、wechatpy、slack_sdk、lark_oapi 等)提取到可复用的辅助模块或脚本中,以减少重复代码,并让这些 shim 将来的维护更安全、更容易。
## 逐条评论
### 评论 1
<location> `tests/test_main.py:15-24` </location>
<code_context>
+def _make_version_info(
</code_context>
<issue_to_address>
**suggestion (testing):** 建议为新的 sandbox 提示词行为和 webchat 标题错误处理添加专门的测试
这些改动提升了 `check_env` 的覆盖率,但 `astrbot.core.astr_main_agent` 中的新行为仍然缺少直接测试:
- `_handle_webchat` 现在会记录并吞掉来自 `prov.text_chat` 的异常。
- `_apply_sandbox_tools` 现在在追加 `SANDBOX_MODE_PROMPT` 时会处理 `None`/空的 `system_prompt`。
建议在 `tests/core` 中添加如下测试:
- 模拟一个其 `text_chat` 会抛出异常的 provider,并断言 `_handle_webchat` 在不向外传播异常的情况下返回。
- 使用 `system_prompt=None` 和已有字符串分别调用 `_apply_sandbox_tools`,并断言结果中 `SANDBOX_MODE_PROMPT` 恰好出现一次,且不会出现字面量前缀 `"None"`。
这将把修复锁定下来,防止回归。
建议的实现方式:
```python
from main import check_dashboard_files, check_env
from astrbot.core.astr_main_agent import (
_handle_webchat,
_apply_sandbox_tools,
SANDBOX_MODE_PROMPT,
)
from types import SimpleNamespace
import pytest
from unittest.mock import AsyncMock
```
要完整实现这些测试,可以新增文件 `tests/core/test_astr_main_agent.py`,内容大致如下(根据你的项目结构调整 import 和路径):
```python
import pytest
from types import SimpleNamespace
from unittest.mock import AsyncMock
from astrbot.core.astr_main_agent import (
_handle_webchat,
_apply_sandbox_tools,
SANDBOX_MODE_PROMPT,
)
@pytest.mark.asyncio
async def test_handle_webchat_swallows_provider_exceptions():
# Arrange: provider whose text_chat raises
async def raising_text_chat(*args, **kwargs):
raise RuntimeError("provider failure")
prov = SimpleNamespace(text_chat=raising_text_chat)
# _handle_webchat likely needs other params; adapt as needed
# e.g. async def _handle_webchat(prov, conv, webchat, *...)
# Provide minimal dummy arguments to reach the text_chat call.
webchat = SimpleNamespace() # or whatever the function expects
conversation = SimpleNamespace()
# Act / Assert: no exception should propagate
await _handle_webchat(
provider=prov,
conversation=conversation,
webchat=webchat,
)
def test_apply_sandbox_tools_with_none_system_prompt():
# Arrange
args = SimpleNamespace(
system_prompt=None,
sandbox_mode=True, # or flag needed to trigger sandbox tools
)
# Act
updated_args = _apply_sandbox_tools(args)
# Assert: SANDBOX_MODE_PROMPT is present exactly once, and no "None" prefix
assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
assert "None" not in updated_args.system_prompt.split(SANDBOX_MODE_PROMPT)[0]
def test_apply_sandbox_tools_with_existing_system_prompt():
base_prompt = "Base system prompt."
args = SimpleNamespace(
system_prompt=base_prompt,
sandbox_mode=True, # or flag needed to trigger sandbox tools
)
updated_args = _apply_sandbox_tools(args)
assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
# The base prompt should be preserved (order might depend on implementation)
assert base_prompt in updated_args.system_prompt
```
你可能还需要:
- 调整 `_handle_webchat` 和 `_apply_sandbox_tools` 的参数名和参数位置,以匹配它们实际的函数签名。
- 将 `sandbox_mode=True` 替换为实际用于启用 sandbox 工具的标志/条件。
- 为 `conversation`、`webchat` 或其他必须参数提供更真实的虚拟对象,以便 `_handle_webchat` 在正常流程中确实调用到 `prov.text_chat`。
- 如果 `_handle_webchat` 不是异步函数,请移除 `@pytest.mark.asyncio` 和 `await`。
</issue_to_address>帮我变得更有用!请在每条评论上点击 👍 或 👎,我会根据你的反馈改进后续评审。
Original comment in English
Hey - I've found 1 issue, and left some high level feedback:
- The new adapter runtime smoke tests embed substantial, mostly duplicated SDK-stubbing code inside subprocess strings; consider extracting common stubs (e.g., for quart, wechatpy, slack_sdk, lark_oapi) into reusable helper modules or scripts to reduce duplication and make future maintenance of these shims safer and easier.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The new adapter runtime smoke tests embed substantial, mostly duplicated SDK-stubbing code inside subprocess strings; consider extracting common stubs (e.g., for quart, wechatpy, slack_sdk, lark_oapi) into reusable helper modules or scripts to reduce duplication and make future maintenance of these shims safer and easier.
## Individual Comments
### Comment 1
<location> `tests/test_main.py:15-24` </location>
<code_context>
+def _make_version_info(
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a dedicated test for the new sandbox prompt behavior and webchat title error handling
These changes improve `check_env` coverage, but the new behaviors in `astrbot.core.astr_main_agent` still lack direct tests:
- `_handle_webchat` now logs and swallows exceptions from `prov.text_chat`.
- `_apply_sandbox_tools` now handles `None`/empty `system_prompt` when appending `SANDBOX_MODE_PROMPT`.
Consider adding tests in `tests/core` that:
- Mock a provider whose `text_chat` raises, and assert `_handle_webchat` returns without propagating the exception.
- Call `_apply_sandbox_tools` with `system_prompt=None` and with an existing string, and assert the result contains `SANDBOX_MODE_PROMPT` exactly once and never includes a literal `"None"` prefix.
This will lock in the bugfixes and prevent regressions.
Suggested implementation:
```python
from main import check_dashboard_files, check_env
from astrbot.core.astr_main_agent import (
_handle_webchat,
_apply_sandbox_tools,
SANDBOX_MODE_PROMPT,
)
from types import SimpleNamespace
import pytest
from unittest.mock import AsyncMock
```
To fully implement the requested tests, add a new file `tests/core/test_astr_main_agent.py` with content along the following lines (adjust imports/paths to your project layout):
```python
import pytest
from types import SimpleNamespace
from unittest.mock import AsyncMock
from astrbot.core.astr_main_agent import (
_handle_webchat,
_apply_sandbox_tools,
SANDBOX_MODE_PROMPT,
)
@pytest.mark.asyncio
async def test_handle_webchat_swallows_provider_exceptions():
# Arrange: provider whose text_chat raises
async def raising_text_chat(*args, **kwargs):
raise RuntimeError("provider failure")
prov = SimpleNamespace(text_chat=raising_text_chat)
# _handle_webchat likely needs other params; adapt as needed
# e.g. async def _handle_webchat(prov, conv, webchat, *...)
# Provide minimal dummy arguments to reach the text_chat call.
webchat = SimpleNamespace() # or whatever the function expects
conversation = SimpleNamespace()
# Act / Assert: no exception should propagate
await _handle_webchat(
provider=prov,
conversation=conversation,
webchat=webchat,
)
def test_apply_sandbox_tools_with_none_system_prompt():
# Arrange
args = SimpleNamespace(
system_prompt=None,
sandbox_mode=True, # or flag needed to trigger sandbox tools
)
# Act
updated_args = _apply_sandbox_tools(args)
# Assert: SANDBOX_MODE_PROMPT is present exactly once, and no "None" prefix
assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
assert "None" not in updated_args.system_prompt.split(SANDBOX_MODE_PROMPT)[0]
def test_apply_sandbox_tools_with_existing_system_prompt():
base_prompt = "Base system prompt."
args = SimpleNamespace(
system_prompt=base_prompt,
sandbox_mode=True, # or flag needed to trigger sandbox tools
)
updated_args = _apply_sandbox_tools(args)
assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
# The base prompt should be preserved (order might depend on implementation)
assert base_prompt in updated_args.system_prompt
```
You may need to:
- Adjust argument names and positions in `_handle_webchat` and `_apply_sandbox_tools` to match their actual signatures.
- Replace `sandbox_mode=True` with the actual flag/condition used to enable sandbox tooling.
- Provide realistic dummy objects for `conversation`, `webchat`, or any other required parameters so that `_handle_webchat` reaches the `prov.text_chat` call in normal flow.
- If `_handle_webchat` is not async, remove `@pytest.mark.asyncio` and `await`.
</issue_to_address>Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
| def _make_version_info( | ||
| major: int, | ||
| minor: int, | ||
| micro: int = 0, | ||
| releaselevel: str = "final", | ||
| serial: int = 0, | ||
| ): | ||
| return SimpleNamespace( | ||
| major=major, | ||
| minor=minor, |
There was a problem hiding this comment.
suggestion (testing): 建议为新的 sandbox 提示词行为和 webchat 标题错误处理添加专门的测试
这些改动提升了 check_env 的覆盖率,但 astrbot.core.astr_main_agent 中的新行为仍然缺少直接测试:
_handle_webchat现在会记录并吞掉来自prov.text_chat的异常。_apply_sandbox_tools现在在追加SANDBOX_MODE_PROMPT时会处理None/空的system_prompt。
建议在 tests/core 中添加如下测试:
- 模拟一个其
text_chat会抛出异常的 provider,并断言_handle_webchat在不向外传播异常的情况下返回。 - 使用
system_prompt=None和已有字符串分别调用_apply_sandbox_tools,并断言结果中SANDBOX_MODE_PROMPT恰好出现一次,且不会出现字面量前缀"None"。
这将把修复锁定下来,防止回归。
建议的实现方式:
from main import check_dashboard_files, check_env
from astrbot.core.astr_main_agent import (
_handle_webchat,
_apply_sandbox_tools,
SANDBOX_MODE_PROMPT,
)
from types import SimpleNamespace
import pytest
from unittest.mock import AsyncMock要完整实现这些测试,可以新增文件 tests/core/test_astr_main_agent.py,内容大致如下(根据你的项目结构调整 import 和路径):
import pytest
from types import SimpleNamespace
from unittest.mock import AsyncMock
from astrbot.core.astr_main_agent import (
_handle_webchat,
_apply_sandbox_tools,
SANDBOX_MODE_PROMPT,
)
@pytest.mark.asyncio
async def test_handle_webchat_swallows_provider_exceptions():
# Arrange: provider whose text_chat raises
async def raising_text_chat(*args, **kwargs):
raise RuntimeError("provider failure")
prov = SimpleNamespace(text_chat=raising_text_chat)
# _handle_webchat likely needs other params; adapt as needed
# e.g. async def _handle_webchat(prov, conv, webchat, *...)
# Provide minimal dummy arguments to reach the text_chat call.
webchat = SimpleNamespace() # or whatever the function expects
conversation = SimpleNamespace()
# Act / Assert: no exception should propagate
await _handle_webchat(
provider=prov,
conversation=conversation,
webchat=webchat,
)
def test_apply_sandbox_tools_with_none_system_prompt():
# Arrange
args = SimpleNamespace(
system_prompt=None,
sandbox_mode=True, # or flag needed to trigger sandbox tools
)
# Act
updated_args = _apply_sandbox_tools(args)
# Assert: SANDBOX_MODE_PROMPT is present exactly once, and no "None" prefix
assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
assert "None" not in updated_args.system_prompt.split(SANDBOX_MODE_PROMPT)[0]
def test_apply_sandbox_tools_with_existing_system_prompt():
base_prompt = "Base system prompt."
args = SimpleNamespace(
system_prompt=base_prompt,
sandbox_mode=True, # or flag needed to trigger sandbox tools
)
updated_args = _apply_sandbox_tools(args)
assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
# The base prompt should be preserved (order might depend on implementation)
assert base_prompt in updated_args.system_prompt你可能还需要:
- 调整
_handle_webchat和_apply_sandbox_tools的参数名和参数位置,以匹配它们实际的函数签名。 - 将
sandbox_mode=True替换为实际用于启用 sandbox 工具的标志/条件。 - 为
conversation、webchat或其他必须参数提供更真实的虚拟对象,以便_handle_webchat在正常流程中确实调用到prov.text_chat。 - 如果
_handle_webchat不是异步函数,请移除@pytest.mark.asyncio和await。
Original comment in English
suggestion (testing): Consider adding a dedicated test for the new sandbox prompt behavior and webchat title error handling
These changes improve check_env coverage, but the new behaviors in astrbot.core.astr_main_agent still lack direct tests:
_handle_webchatnow logs and swallows exceptions fromprov.text_chat._apply_sandbox_toolsnow handlesNone/emptysystem_promptwhen appendingSANDBOX_MODE_PROMPT.
Consider adding tests in tests/core that:
- Mock a provider whose
text_chatraises, and assert_handle_webchatreturns without propagating the exception. - Call
_apply_sandbox_toolswithsystem_prompt=Noneand with an existing string, and assert the result containsSANDBOX_MODE_PROMPTexactly once and never includes a literal"None"prefix.
This will lock in the bugfixes and prevent regressions.
Suggested implementation:
from main import check_dashboard_files, check_env
from astrbot.core.astr_main_agent import (
_handle_webchat,
_apply_sandbox_tools,
SANDBOX_MODE_PROMPT,
)
from types import SimpleNamespace
import pytest
from unittest.mock import AsyncMockTo fully implement the requested tests, add a new file tests/core/test_astr_main_agent.py with content along the following lines (adjust imports/paths to your project layout):
import pytest
from types import SimpleNamespace
from unittest.mock import AsyncMock
from astrbot.core.astr_main_agent import (
_handle_webchat,
_apply_sandbox_tools,
SANDBOX_MODE_PROMPT,
)
@pytest.mark.asyncio
async def test_handle_webchat_swallows_provider_exceptions():
# Arrange: provider whose text_chat raises
async def raising_text_chat(*args, **kwargs):
raise RuntimeError("provider failure")
prov = SimpleNamespace(text_chat=raising_text_chat)
# _handle_webchat likely needs other params; adapt as needed
# e.g. async def _handle_webchat(prov, conv, webchat, *...)
# Provide minimal dummy arguments to reach the text_chat call.
webchat = SimpleNamespace() # or whatever the function expects
conversation = SimpleNamespace()
# Act / Assert: no exception should propagate
await _handle_webchat(
provider=prov,
conversation=conversation,
webchat=webchat,
)
def test_apply_sandbox_tools_with_none_system_prompt():
# Arrange
args = SimpleNamespace(
system_prompt=None,
sandbox_mode=True, # or flag needed to trigger sandbox tools
)
# Act
updated_args = _apply_sandbox_tools(args)
# Assert: SANDBOX_MODE_PROMPT is present exactly once, and no "None" prefix
assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
assert "None" not in updated_args.system_prompt.split(SANDBOX_MODE_PROMPT)[0]
def test_apply_sandbox_tools_with_existing_system_prompt():
base_prompt = "Base system prompt."
args = SimpleNamespace(
system_prompt=base_prompt,
sandbox_mode=True, # or flag needed to trigger sandbox tools
)
updated_args = _apply_sandbox_tools(args)
assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
# The base prompt should be preserved (order might depend on implementation)
assert base_prompt in updated_args.system_promptYou may need to:
- Adjust argument names and positions in
_handle_webchatand_apply_sandbox_toolsto match their actual signatures. - Replace
sandbox_mode=Truewith the actual flag/condition used to enable sandbox tooling. - Provide realistic dummy objects for
conversation,webchat, or any other required parameters so that_handle_webchatreaches theprov.text_chatcall in normal flow. - If
_handle_webchatis not async, remove@pytest.mark.asyncioandawait.
There was a problem hiding this comment.
Pull request overview
This PR adds integration tests for API endpoints and dashboard functionality while also attempting to refactor some import statements in the API layer. However, there is a critical bug in the import refactoring that will break the application at runtime.
Changes:
- Added comprehensive integration tests for API key management, dashboard routes, and adapter smoke tests
- Improved test fixtures with proper async event loop scoping (
loop_scope="module") - Added subprocess-based tests for platform adapters with mocked SDKs
- Fixed potential None-handling issue in system_prompt concatenation
- Added fallback handling for CronJobManager when apscheduler is mocked
- Added exception handling for webchat title generation
- CRITICAL BUG: Introduced broken imports that reference non-existent
astrbot.core.star.basemodule
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/unit/test_skipped_items_runtime.py | New comprehensive runtime tests for platform adapters using subprocess isolation and SDK stubs |
| tests/unit/test_fixture_plugin_usage.py | New tests verifying fixture plugin can be loaded in isolated process |
| tests/unit/test_api_compat_smoke.py | New API compatibility smoke tests (missing coverage for star.Star imports) |
| tests/test_main.py | Improved environment check tests with better mocking of path functions |
| tests/test_dashboard.py | Enhanced dashboard tests with mocked plugin operations and update check scenarios |
| tests/test_api_key_open_api.py | Fixed test isolation issues with unique creator names |
| tests/test_kb_import.py | Updated async fixture configuration with loop_scope |
| astrbot/core/cron/init.py | Added fallback for CronJobManager when apscheduler is unavailable (good for tests) |
| astrbot/core/astr_main_agent.py | Added exception handling and fixed None-handling for system_prompt (good fixes) |
| astrbot/api/star/init.py | BROKEN: Imports from non-existent astrbot.core.star.base module |
| astrbot/api/all.py | BROKEN: Imports from non-existent astrbot.core.star.base module |
| @@ -1,7 +1,9 @@ | |||
| from astrbot.core.star import Context, Star, StarTools | |||
| from astrbot.core.star.base import Star | |||
There was a problem hiding this comment.
Import error: astrbot.core.star.base module does not exist. The Star class is defined in astrbot.core.star.__init__.py, not in a separate base.py file. This import should be changed to from astrbot.core.star import Star.
| from astrbot.core.star.base import Star | |
| from astrbot.core.star import Star |
| register_star as register, # 注册插件(Star) | ||
| ) | ||
| from astrbot.core.star import Context, Star | ||
| from astrbot.core.star.base import Star |
There was a problem hiding this comment.
Import error: astrbot.core.star.base module does not exist. The Star class is defined in astrbot.core.star.__init__.py, not in a separate base.py file. This import should be changed to from astrbot.core.star import Star.
| from astrbot.core.star.base import Star | |
| from astrbot.core.star import Star |
| import astrbot.api as api | ||
|
|
||
| assert callable(api.agent) | ||
| assert callable(api.llm_tool) |
There was a problem hiding this comment.
Missing test coverage: The API compatibility smoke tests don't verify that Star and Context can be imported from astrbot.api.star. This module is used by many builtin plugins (session_controller, astrbot, builtin_commands, web_searcher, etc.) via "from astrbot.api import star" followed by "star.Star" and "star.Context". The tests should include a check like:
from astrbot.api import star
assert hasattr(star, 'Star')
assert hasattr(star, 'Context')This would have caught the broken import from astrbot.core.star.base which doesn't exist.
| assert callable(api.llm_tool) | |
| assert callable(api.llm_tool) | |
| def test_api_star_exports_star_and_context(): | |
| """astrbot.api.star should expose Star and Context for plugin imports.""" | |
| from astrbot.api import star | |
| assert hasattr(star, "Star") | |
| assert hasattr(star, "Context") |
| @@ -1,7 +1,9 @@ | |||
| from astrbot.core.star import Context, Star, StarTools | |||
| from astrbot.core.star.base import Star | |||
Add integration tests for API endpoints and dashboard functionality to improve test coverage for API and dashboard integration.
Modifications / 改动点
tests/test_api_key_open_api.pyfor API key management teststests/test_dashboard.pyandtests/test_main.pytests/unit/test_api_compat_smoke.pytests/unit/test_fixture_plugin_usage.pytests/unit/test_skipped_items_runtime.pyastrbot/api/all.py,astrbot/api/star/__init__.py,astrbot/dashboard/routes/auth.py,astrbot/core/astr_main_agent.py, andastrbot/core/cron/__init__.pyTest Coverage:
API key management
OpenAPI endpoints
Dashboard routes and authentication
API compatibility
Plugin fixture usage
This is NOT a breaking change. / 这不是一个破坏性变更。
Screenshots or Test Results / 运行截图或测试结果
Checklist / 检查清单
requirements.txt和pyproject.toml文件相应位置。/ I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations inrequirements.txtandpyproject.toml.Summary by Sourcery
围绕 API 端点、仪表盘更新流程以及平台/适配器层添加集成和兼容性测试,同时收紧错误处理和导入健壮性的核心行为。
增强内容:
apscheduler部分不可用时依然具备稳健的导入能力,在使用时暴露清晰错误,而不是在导入阶段即失败。测试:
astrbot.api公共表面保持向后兼容,并且夹具插件可以在隔离的 Python 进程中被导入和使用。Original summary in English
Summary by Sourcery
Add integration and compatibility tests around API endpoints, dashboard update flows, and platform/adapter layers while tightening core behavior for error handling and import robustness.
Enhancements:
apscheduleris partially unavailable, exposing a clear error on use instead of failing at import time.Tests:
astrbot.apipublic surface remains backward compatible and that fixture plugins can be imported and used in isolated Python processes.