test: add API and dashboard integration tests #5362

Open
whatevertogo wants to merge 3 commits into AstrBotDevs:master from whatevertogo:test/api-dashboard
Conversation


@whatevertogo whatevertogo commented Feb 23, 2026

Add integration tests for API endpoints and dashboard functionality to improve test coverage.

Modifications

  • Added tests/test_api_key_open_api.py for API key management tests
  • Updated dashboard and main entry tests in tests/test_dashboard.py and tests/test_main.py
  • Added API compatibility smoke tests in tests/unit/test_api_compat_smoke.py
  • Updated fixture plugin usage tests in tests/unit/test_fixture_plugin_usage.py
  • Updated skipped items runtime tests in tests/unit/test_skipped_items_runtime.py
  • Fixed issues in astrbot/api/all.py, astrbot/api/star/__init__.py, astrbot/dashboard/routes/auth.py, astrbot/core/astr_main_agent.py, and astrbot/core/cron/__init__.py

Test Coverage:

  • API key management

  • OpenAPI endpoints

  • Dashboard routes and authentication

  • API compatibility

  • Plugin fixture usage

  • This is NOT a breaking change.
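The API-compatibility item above is typically implemented as a public-surface smoke test. A minimal sketch of the idea — the `assert_public_surface` helper is invented for illustration, and the `json` target stands in for `astrbot.api`:

```python
import importlib

def assert_public_surface(module_name, expected):
    """Fail if any expected public name is missing from the module."""
    mod = importlib.import_module(module_name)
    missing = [name for name in expected if not hasattr(mod, name)]
    assert not missing, f"{module_name} is missing exports: {missing}"

# Illustrated against the stdlib; the real smoke tests would list
# the names exported by astrbot.api instead.
assert_public_surface("json", ["dumps", "loads", "JSONDecodeError"])
```

Pinning the expected export list in a test like this turns accidental removals from the public surface into immediate failures.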

Screenshots or Test Results

# Verification: API key tests pass
$ pytest tests/test_api_key_open_api.py -v
...
collected 12 items
............
12 passed in 0.52s

# Verification: Dashboard tests pass
$ pytest tests/test_dashboard.py -v
...
collected 8 items
........
8 passed in 0.35s

# Verification: Main tests pass
$ pytest tests/test_main.py -v
...
collected 5 items
.....
5 passed in 0.22s

# Verification: API compat smoke tests pass
$ pytest tests/unit/test_api_compat_smoke.py -v
...
collected 6 items
......
6 passed in 0.18s

Checklist

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Add integration and compatibility tests around API endpoints, dashboard update flows, and platform/adapter layers while tightening core behavior for error handling and import robustness.

Enhancements:

  • Improve webchat title generation to fail safely on provider errors and ensure sandbox system prompts are always appended correctly.
  • Make the cron job manager import resilient when apscheduler is partially unavailable, exposing a clear error on use instead of failing at import time.
  • Adjust public star API imports to stay aligned with the core star implementation structure.
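The deferred-failure import described for the cron job manager can be sketched like this. It is a hedged sketch: the class body and error message are illustrative, not the actual contents of `astrbot/core/cron/__init__.py`.

```python
try:
    # The optional dependency may be missing or partially installed.
    from apscheduler.schedulers.asyncio import AsyncIOScheduler
except ModuleNotFoundError as exc:
    AsyncIOScheduler = None
    _APSCHEDULER_ERROR = exc
else:
    _APSCHEDULER_ERROR = None

class CronJobManager:
    """Raises a clear error on use instead of failing at import time."""

    def __init__(self):
        if AsyncIOScheduler is None:
            raise RuntimeError(
                "apscheduler is required for cron jobs; install it first"
            ) from _APSCHEDULER_ERROR
        self.scheduler = AsyncIOScheduler()
```

The payoff is that `import astrbot.core.cron` always succeeds, and only code paths that actually need the scheduler surface the dependency error.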

Tests:

  • Expand dashboard integration tests to cover plugin management endpoints and multiple update-check scenarios, including optional online checks.
  • Broaden main entrypoint tests to validate environment path initialization for all key data directories.
  • Extend API key and OpenAPI integration tests to use isolated creators for pagination and add knowledge-base import tests that properly verify document upload behavior.
  • Introduce smoke tests to guarantee astrbot.api public surface remains backward compatible and that fixture plugins can be imported and used in isolated Python processes.
  • Add runtime smoke tests for previously skipped adapter scenarios by exercising platform managers and multiple third-party adapters via stubbed SDKs in subprocesses.
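The stubbed-SDK approach in the last bullet generally works by injecting stand-in modules into `sys.modules` before the adapter code is imported. A hedged sketch — the helper and the module/attribute names (`fake_slack_sdk`, `WebClient`) are invented for illustration:

```python
import sys
import types

def install_stub(name, **attrs):
    """Register a minimal stand-in module so `import name` succeeds
    even when the real SDK is not installed."""
    mod = types.ModuleType(name)
    for key, value in attrs.items():
        setattr(mod, key, value)
    sys.modules[name] = mod
    return mod

# Stub a made-up SDK, then import it exactly as adapter code would.
install_stub("fake_slack_sdk", WebClient=object)
import fake_slack_sdk
assert fake_slack_sdk.WebClient is object
```

Running such stubs in a subprocess, as the tests here do, keeps the `sys.modules` pollution isolated from the rest of the suite.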

- Add API key and OpenAPI integration tests
- Update dashboard and main entry tests
- Update API compatibility smoke tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings February 23, 2026 00:58
@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label Feb 23, 2026
@gemini-code-assist

Summary of Changes

Hello @whatevertogo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the testing infrastructure of the astrbot project by introducing new integration tests for critical API and dashboard functionalities. It also refines existing test suites and adds new smoke tests to validate API compatibility. These changes aim to improve overall code quality, stability, and maintainability by ensuring that key components behave as expected and that API contracts remain consistent.

Highlights

  • New Integration Tests: Added comprehensive integration tests for API key management and OpenAPI endpoints to ensure robust functionality and security.
  • Enhanced Dashboard and Main Tests: Updated existing tests for dashboard routes, authentication, and the main application entry point, improving their coverage and reliability.
  • API Compatibility Smoke Tests: Introduced new smoke tests to verify backward compatibility of the astrbot.api module, ensuring stable API interfaces.
  • Refined Plugin Fixture Usage Tests: Updated tests related to plugin fixture usage, including mocking plugin manager methods for better isolation and control during testing.
  • Core Module Fixes: Implemented minor bug fixes in core API and cron modules, addressing import paths and error handling to support the new testing infrastructure.
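The fail-safe title generation mentioned in the first highlight boils down to wrapping the provider call in a try/except. A hedged sketch: `generate_title`, `provider.text_chat`, and the prompt text are assumed names for illustration, not AstrBot's actual signatures.

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def generate_title(provider, user_prompt):
    """Return a generated title, or None if the provider errors out."""
    try:
        return await provider.text_chat(
            prompt=f"Generate a concise title for:\n{user_prompt}"
        )
    except Exception:
        # Log and swallow so webchat keeps working without a title.
        logger.exception("webchat title generation failed")
        return None

class _FailingProvider:
    async def text_chat(self, prompt):
        raise RuntimeError("provider down")

assert asyncio.run(generate_title(_FailingProvider(), "hello")) is None
```

Title generation is a cosmetic feature, so degrading to `None` is preferable to letting a provider outage break the whole chat flow.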
Changelog
  • astrbot/api/all.py
    • Removed an unused import for AstrMessageEvent.
    • Refactored imports for Context and Star to use specific submodules (base and context) for better organization.
  • astrbot/api/star/__init__.py
    • Refactored imports for Context, Star, and StarTools to import from their dedicated submodules (base, context, star_tools).
  • astrbot/core/astr_main_agent.py
    • Added a try-except block around the LLM call for webchat title generation to gracefully handle exceptions.
    • Ensured system_prompt is initialized before concatenation to prevent potential None type errors.
  • astrbot/core/cron/__init__.py
    • Modified CronJobManager import to include robust error handling for ModuleNotFoundError related to apscheduler, providing a more informative message.
  • tests/test_api_key_open_api.py
    • Updated pytest_asyncio.fixture decorators to include loop_scope="module" for consistent event loop management.
    • Modified the creator variable in test_open_chat_sessions_pagination to use a unique UUID, improving test isolation and preventing conflicts.
    • Updated API calls to use the dynamically generated creator for pagination tests.
  • tests/test_dashboard.py
    • Updated pytest_asyncio.fixture decorators to include loop_scope="module".
    • Removed unused imports related to star_registry and star_handlers_registry.
    • Added an environment variable check (ASTRBOT_RUN_ONLINE_UPDATE_CHECK) to conditionally run online update tests.
    • Refactored test_plugins to use mock objects for install_plugin, update_plugin, and uninstall_plugin methods of the plugin manager, enhancing test control and removing direct assertions against global registries.
    • Added new test cases for test_check_update covering success (no new version), success (new version available), and error scenarios, along with an optional online smoke test.
  • tests/test_kb_import.py
    • Updated pytest_asyncio.fixture decorators to include loop_scope="module".
    • Refined the mocking of KBHelper to explicitly mock its upload_document method as an AsyncMock.
  • tests/test_main.py
    • Added import for SimpleNamespace.
    • Refactored the internal _version_info class to use SimpleNamespace for Python version mocking.
    • Updated test_check_env to mock get_astrbot_root and related path functions, ensuring all expected directories (config, plugins, temp, knowledge_base, site_packages) are created during environment checks.
  • tests/unit/test_api_compat_smoke.py
    • Added a new file containing smoke tests to verify the backward compatibility of astrbot.api exports, event and platform mappings, message components, and eager imports of star.register aliases.
  • tests/unit/test_fixture_plugin_usage.py
    • Added a new file with tests to confirm the existence of fixture plugin files and their successful import in an isolated Python process, verifying plugin structure and basic functionality.
  • tests/unit/test_skipped_items_runtime.py
    • Added a new file providing runtime coverage for various platform adapters (Slack, Wecom, Lark, Dingtalk, Line, Satori, Misskey, Weixin Official Account) by stubbing their external SDKs in isolated Python subprocesses, ensuring critical adapter paths function correctly without full SDK installations.
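The `AsyncMock` refinement noted for `tests/test_kb_import.py` follows the standard pattern below. This is a sketch: `upload_document` and its return value are illustrative stand-ins for the real `KBHelper` method.

```python
import asyncio
from unittest.mock import AsyncMock

# Explicitly mock the async method so awaiting it yields a canned result.
helper = AsyncMock()
helper.upload_document = AsyncMock(return_value={"status": "ok"})

async def _exercise():
    return await helper.upload_document("doc.txt")

assert asyncio.run(_exercise()) == {"status": "ok"}
# AsyncMock records awaits, so the call can be verified afterwards.
helper.upload_document.assert_awaited_once_with("doc.txt")
```

Making the method an explicit `AsyncMock` (rather than relying on attribute auto-creation) documents intent and enables await-specific assertions such as `assert_awaited_once_with`.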
Activity
  • The pull request was created by whatevertogo with the title 'test: add API and dashboard integration tests'.
  • The author provided a detailed description outlining the motivation to improve test coverage for API and dashboard integration.
  • Specific modifications were listed, including the addition of new test files (tests/test_api_key_open_api.py, tests/unit/test_api_compat_smoke.py, tests/unit/test_fixture_plugin_usage.py, tests/unit/test_skipped_items_runtime.py) and updates to existing ones (tests/test_dashboard.py, tests/test_main.py, tests/test_kb_import.py).
  • Bug fixes in astrbot/api/all.py, astrbot/api/star/__init__.py, astrbot/core/astr_main_agent.py, and astrbot/core/cron/__init__.py were also noted.
  • Verification steps were included, showing pytest output for API key, dashboard, main, and API compatibility smoke tests, all passing successfully.
  • The author confirmed that the changes are not breaking and have been thoroughly tested.

@dosubot dosubot bot added area:core The bug / feature is about astrbot's core, backend area:webui The bug / feature is about webui(dashboard) of astrbot. labels Feb 23, 2026

dosubot bot commented Feb 23, 2026

Related Documentation

Checked 1 published document(s) in 1 knowledge base(s). No updates required.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces comprehensive integration tests for API endpoints, dashboard functionality, and adapter compatibility, along with robustness improvements in the main agent and a fallback mechanism for the cron manager. The new smoke tests for API compatibility and runtime adapter checks are particularly valuable for preventing regressions. However, a prompt injection vulnerability was identified in the conversation title generation logic. Addressing this will improve the security posture of the application by preventing potential manipulation of the dashboard's display and mitigating the risk of stored XSS. Additionally, consider improving the reliability of the test suite by adding timeouts to subprocess executions.

"(e.g., “hi”, “hello”, “haha”), return <None>. "
"Output only the title itself or <None>, with no explanations."
),
prompt=f"Generate a concise title for the following user query:\n{user_prompt}",

security-medium

The conversation title generation logic in _handle_webchat is vulnerable to prompt injection because it directly concatenates untrusted user input (user_prompt) into the LLM prompt. An attacker could craft a message that overrides the instructions in the system prompt to control the generated title. Since this title is stored in the database and displayed in the web dashboard, this could lead to Stored Cross-Site Scripting (XSS) if the dashboard does not properly escape the session titles, or at least allow for defacement of the management interface.

To remediate this, use delimiters to wrap the user input and update the system prompt to only consider the content within those delimiters. Additionally, ensure the output is sanitized before storage and rendering.

Suggested change
prompt=f"Generate a concise title for the following user query:\n{user_prompt}",
prompt=f"Generate a concise title for the following user query:\n<user_query>\n{user_prompt}\n</user_query>",

Comment on lines +15 to +23
def _run_python(code: str) -> subprocess.CompletedProcess[str]:
repo_root = Path(__file__).resolve().parents[2]
return subprocess.run(
[sys.executable, "-c", textwrap.dedent(code)],
cwd=repo_root,
capture_output=True,
text=True,
check=False,
)

medium

The _run_python helper executes code in a subprocess without a timeout. If the subprocess hangs (e.g., due to a deadlock in the mocked code or an infinite loop), the entire test suite will hang indefinitely. It is recommended to add a reasonable timeout (e.g., 30 seconds) to the subprocess.run call.

Suggested change
def _run_python(code: str) -> subprocess.CompletedProcess[str]:
repo_root = Path(__file__).resolve().parents[2]
return subprocess.run(
[sys.executable, "-c", textwrap.dedent(code)],
cwd=repo_root,
capture_output=True,
text=True,
check=False,
)
def _run_python(code: str) -> subprocess.CompletedProcess[str]:
repo_root = Path(__file__).resolve().parents[2]
return subprocess.run(
[sys.executable, "-c", textwrap.dedent(code)],
cwd=repo_root,
capture_output=True,
text=True,
check=False,
timeout=30,
)

Comment on lines +40 to +46
result = subprocess.run(
[sys.executable, "-c", script],
capture_output=True,
text=True,
cwd=repo_root,
check=False,
)

medium

Similar to the other subprocess execution in the test suite, this subprocess.run call lacks a timeout. Adding a timeout ensures that the test suite fails gracefully if the plugin import process hangs.

Suggested change
result = subprocess.run(
[sys.executable, "-c", script],
capture_output=True,
text=True,
cwd=repo_root,
check=False,
)
result = subprocess.run(
[sys.executable, "-c", script],
capture_output=True,
text=True,
cwd=repo_root,
check=False,
timeout=30,
)

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • The new adapter runtime smoke tests embed substantial, mostly duplicated SDK-stubbing code inside subprocess strings; consider extracting common stubs (e.g., for quart, wechatpy, slack_sdk, lark_oapi) into reusable helper modules or scripts to reduce duplication and make future maintenance of these shims safer and easier.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The new adapter runtime smoke tests embed substantial, mostly duplicated SDK-stubbing code inside subprocess strings; consider extracting common stubs (e.g., for quart, wechatpy, slack_sdk, lark_oapi) into reusable helper modules or scripts to reduce duplication and make future maintenance of these shims safer and easier.

## Individual Comments

### Comment 1
<location> `tests/test_main.py:15-24` </location>
<code_context>
+def _make_version_info(
</code_context>

<issue_to_address>
**suggestion (testing):** Consider adding a dedicated test for the new sandbox prompt behavior and webchat title error handling

These changes improve `check_env` coverage, but the new behaviors in `astrbot.core.astr_main_agent` still lack direct tests:

- `_handle_webchat` now logs and swallows exceptions from `prov.text_chat`.
- `_apply_sandbox_tools` now handles `None`/empty `system_prompt` when appending `SANDBOX_MODE_PROMPT`.

Consider adding tests in `tests/core` that:

- Mock a provider whose `text_chat` raises, and assert `_handle_webchat` returns without propagating the exception.
- Call `_apply_sandbox_tools` with `system_prompt=None` and with an existing string, and assert the result contains `SANDBOX_MODE_PROMPT` exactly once and never includes a literal `"None"` prefix.

This will lock in the bugfixes and prevent regressions.

Suggested implementation:

```python
from main import check_dashboard_files, check_env
from astrbot.core.astr_main_agent import (
    _handle_webchat,
    _apply_sandbox_tools,
    SANDBOX_MODE_PROMPT,
)
from types import SimpleNamespace
import pytest
from unittest.mock import AsyncMock

```

To fully implement the requested tests, add a new file `tests/core/test_astr_main_agent.py` with content along the following lines (adjust imports/paths to your project layout):

```python
import pytest
from types import SimpleNamespace
from unittest.mock import AsyncMock

from astrbot.core.astr_main_agent import (
    _handle_webchat,
    _apply_sandbox_tools,
    SANDBOX_MODE_PROMPT,
)

@pytest.mark.asyncio
async def test_handle_webchat_swallows_provider_exceptions():
    # Arrange: provider whose text_chat raises
    async def raising_text_chat(*args, **kwargs):
        raise RuntimeError("provider failure")

    prov = SimpleNamespace(text_chat=raising_text_chat)

    # _handle_webchat likely needs other params; adapt as needed
    # e.g. async def _handle_webchat(prov, conv, webchat, *...)
    # Provide minimal dummy arguments to reach the text_chat call.
    webchat = SimpleNamespace()  # or whatever the function expects
    conversation = SimpleNamespace()
    # Act / Assert: no exception should propagate
    await _handle_webchat(
        provider=prov,
        conversation=conversation,
        webchat=webchat,
    )

def test_apply_sandbox_tools_with_none_system_prompt():
    # Arrange
    args = SimpleNamespace(
        system_prompt=None,
        sandbox_mode=True,  # or flag needed to trigger sandbox tools
    )

    # Act
    updated_args = _apply_sandbox_tools(args)

    # Assert: SANDBOX_MODE_PROMPT is present exactly once, and no "None" prefix
    assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
    assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
    assert "None" not in updated_args.system_prompt.split(SANDBOX_MODE_PROMPT)[0]

def test_apply_sandbox_tools_with_existing_system_prompt():
    base_prompt = "Base system prompt."
    args = SimpleNamespace(
        system_prompt=base_prompt,
        sandbox_mode=True,  # or flag needed to trigger sandbox tools
    )

    updated_args = _apply_sandbox_tools(args)

    assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
    assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
    # The base prompt should be preserved (order might depend on implementation)
    assert base_prompt in updated_args.system_prompt
```

You may need to:
- Adjust argument names and positions in `_handle_webchat` and `_apply_sandbox_tools` to match their actual signatures.
- Replace `sandbox_mode=True` with the actual flag/condition used to enable sandbox tooling.
- Provide realistic dummy objects for `conversation`, `webchat`, or any other required parameters so that `_handle_webchat` reaches the `prov.text_chat` call in normal flow.
- If `_handle_webchat` is not async, remove `@pytest.mark.asyncio` and `await`.
</issue_to_address>


Comment on lines +15 to +24
def _make_version_info(
major: int,
minor: int,
micro: int = 0,
releaselevel: str = "final",
serial: int = 0,
):
return SimpleNamespace(
major=major,
minor=minor,

suggestion (testing): Consider adding a dedicated test for the new sandbox prompt behavior and webchat title error handling

These changes improve check_env coverage, but the new behaviors in astrbot.core.astr_main_agent still lack direct tests:

  • _handle_webchat now logs and swallows exceptions from prov.text_chat.
  • _apply_sandbox_tools now handles None/empty system_prompt when appending SANDBOX_MODE_PROMPT.

Consider adding tests in tests/core that:

  • Mock a provider whose text_chat raises, and assert _handle_webchat returns without propagating the exception.
  • Call _apply_sandbox_tools with system_prompt=None and with an existing string, and assert the result contains SANDBOX_MODE_PROMPT exactly once and never includes a literal "None" prefix.

This will lock in the bugfixes and prevent regressions.

Suggested implementation:

from main import check_dashboard_files, check_env
from astrbot.core.astr_main_agent import (
    _handle_webchat,
    _apply_sandbox_tools,
    SANDBOX_MODE_PROMPT,
)
from types import SimpleNamespace
import pytest
from unittest.mock import AsyncMock

To fully implement the requested tests, add a new file tests/core/test_astr_main_agent.py with content along the following lines (adjust imports/paths to your project layout):

import pytest
from types import SimpleNamespace
from unittest.mock import AsyncMock

from astrbot.core.astr_main_agent import (
    _handle_webchat,
    _apply_sandbox_tools,
    SANDBOX_MODE_PROMPT,
)

@pytest.mark.asyncio
async def test_handle_webchat_swallows_provider_exceptions():
    # Arrange: provider whose text_chat raises
    async def raising_text_chat(*args, **kwargs):
        raise RuntimeError("provider failure")

    prov = SimpleNamespace(text_chat=raising_text_chat)

    # _handle_webchat likely needs other params; adapt as needed
    # e.g. async def _handle_webchat(prov, conv, webchat, *...)
    # Provide minimal dummy arguments to reach the text_chat call.
    webchat = SimpleNamespace()  # or whatever the function expects
    conversation = SimpleNamespace()
    # Act / Assert: no exception should propagate
    await _handle_webchat(
        provider=prov,
        conversation=conversation,
        webchat=webchat,
    )

def test_apply_sandbox_tools_with_none_system_prompt():
    # Arrange
    args = SimpleNamespace(
        system_prompt=None,
        sandbox_mode=True,  # or flag needed to trigger sandbox tools
    )

    # Act
    updated_args = _apply_sandbox_tools(args)

    # Assert: SANDBOX_MODE_PROMPT is present exactly once, and no "None" prefix
    assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
    assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
    assert "None" not in updated_args.system_prompt.split(SANDBOX_MODE_PROMPT)[0]

def test_apply_sandbox_tools_with_existing_system_prompt():
    base_prompt = "Base system prompt."
    args = SimpleNamespace(
        system_prompt=base_prompt,
        sandbox_mode=True,  # or flag needed to trigger sandbox tools
    )

    updated_args = _apply_sandbox_tools(args)

    assert SANDBOX_MODE_PROMPT in updated_args.system_prompt
    assert updated_args.system_prompt.count(SANDBOX_MODE_PROMPT) == 1
    # The base prompt should be preserved (order might depend on implementation)
    assert base_prompt in updated_args.system_prompt

You may need to:

  • Adjust argument names and positions in _handle_webchat and _apply_sandbox_tools to match their actual signatures.
  • Replace sandbox_mode=True with the actual flag/condition used to enable sandbox tooling.
  • Provide realistic dummy objects for conversation, webchat, or any other required parameters so that _handle_webchat reaches the prov.text_chat call in normal flow.
  • If _handle_webchat is not async, remove @pytest.mark.asyncio and await.
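
In isolation, the None-handling behavior those assertions target can be sketched like this. A minimal sketch, assuming the fix guards against None and de-duplicates the prompt; SANDBOX_MODE_PROMPT's value and the function name here are placeholders, not the project's actual code:

```python
SANDBOX_MODE_PROMPT = "[sandbox mode enabled]"  # placeholder value

def apply_sandbox_prompt(system_prompt):
    """Append SANDBOX_MODE_PROMPT without a literal 'None' prefix and
    without ever appending it twice."""
    base = system_prompt or ""           # None/empty becomes ""
    if SANDBOX_MODE_PROMPT in base:      # idempotent: never append twice
        return base
    return f"{base}\n{SANDBOX_MODE_PROMPT}" if base else SANDBOX_MODE_PROMPT
```

The tests above then reduce to checking that `apply_sandbox_prompt(None)` contains the marker exactly once and no "None" text, and that an existing prompt is preserved.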

Contributor

Copilot AI left a comment

Pull request overview

This PR adds integration tests for API endpoints and dashboard functionality while also attempting to refactor some import statements in the API layer. However, there is a critical bug in the import refactoring that will break the application at runtime.

Changes:

  • Added comprehensive integration tests for API key management, dashboard routes, and adapter smoke tests
  • Improved test fixtures with proper async event loop scoping (loop_scope="module")
  • Added subprocess-based tests for platform adapters with mocked SDKs
  • Fixed potential None-handling issue in system_prompt concatenation
  • Added fallback handling for CronJobManager when apscheduler is mocked
  • Added exception handling for webchat title generation
  • CRITICAL BUG: Introduced broken imports that reference non-existent astrbot.core.star.base module

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
tests/unit/test_skipped_items_runtime.py New comprehensive runtime tests for platform adapters using subprocess isolation and SDK stubs
tests/unit/test_fixture_plugin_usage.py New tests verifying fixture plugin can be loaded in isolated process
tests/unit/test_api_compat_smoke.py New API compatibility smoke tests (missing coverage for star.Star imports)
tests/test_main.py Improved environment check tests with better mocking of path functions
tests/test_dashboard.py Enhanced dashboard tests with mocked plugin operations and update check scenarios
tests/test_api_key_open_api.py Fixed test isolation issues with unique creator names
tests/test_kb_import.py Updated async fixture configuration with loop_scope
astrbot/core/cron/__init__.py Added fallback for CronJobManager when apscheduler is unavailable (good for tests)
astrbot/core/astr_main_agent.py Added exception handling and fixed None-handling for system_prompt (good fixes)
astrbot/api/star/__init__.py BROKEN: Imports from non-existent astrbot.core.star.base module
astrbot/api/all.py BROKEN: Imports from non-existent astrbot.core.star.base module
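
The apscheduler fallback described for astrbot/core/cron/__init__.py follows the usual optional-dependency pattern. A hedged sketch under the assumption that the manager simply degrades to a no-op; the class and attribute names are illustrative, not the project's actual implementation:

```python
try:
    from apscheduler.schedulers.asyncio import AsyncIOScheduler
except ImportError:  # apscheduler missing, or replaced by a broken mock
    AsyncIOScheduler = None

class CronJobManager:
    """Falls back to a no-op when the scheduler backend is unavailable."""

    def __init__(self):
        self.scheduler = AsyncIOScheduler() if AsyncIOScheduler else None

    def start(self):
        # Skip silently instead of crashing at import/startup time
        if self.scheduler is not None:
            self.scheduler.start()
```

This keeps imports of the cron package safe in test environments where apscheduler is stubbed out.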

@@ -1,7 +1,9 @@
from astrbot.core.star import Context, Star, StarTools
from astrbot.core.star.base import Star
Copilot AI Feb 23, 2026

Import error: astrbot.core.star.base module does not exist. The Star class is defined in astrbot.core.star.__init__.py, not in a separate base.py file. This import should be changed to from astrbot.core.star import Star.

Suggested change
from astrbot.core.star.base import Star
from astrbot.core.star import Star

Member
Looks like this needs to be synced?

register_star as register,  # register a plugin (Star)
)
from astrbot.core.star import Context, Star
from astrbot.core.star.base import Star
Copilot AI Feb 23, 2026

Import error: astrbot.core.star.base module does not exist. The Star class is defined in astrbot.core.star.__init__.py, not in a separate base.py file. This import should be changed to from astrbot.core.star import Star.

Suggested change
from astrbot.core.star.base import Star
from astrbot.core.star import Star

import astrbot.api as api

assert callable(api.agent)
assert callable(api.llm_tool)
Copilot AI Feb 23, 2026

Missing test coverage: The API compatibility smoke tests don't verify that Star and Context can be imported from astrbot.api.star. This module is used by many builtin plugins (session_controller, astrbot, builtin_commands, web_searcher, etc.) via "from astrbot.api import star" followed by "star.Star" and "star.Context". The tests should include a check like:

from astrbot.api import star
assert hasattr(star, 'Star')
assert hasattr(star, 'Context')

This would have caught the broken import from astrbot.core.star.base which doesn't exist.

Suggested change
assert callable(api.llm_tool)
assert callable(api.llm_tool)

def test_api_star_exports_star_and_context():
    """astrbot.api.star should expose Star and Context for plugin imports."""
    from astrbot.api import star
    assert hasattr(star, "Star")
    assert hasattr(star, "Context")
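
Several of the new unit tests also rely on subprocess isolation (tests/unit/test_fixture_plugin_usage.py, tests/unit/test_skipped_items_runtime.py), which is what makes this class of broken import detectable: a fresh interpreter cannot be rescued by stubs already loaded into the test process. A minimal sketch of that pattern, with a stdlib module as a stand-in for the real astrbot targets:

```python
import subprocess
import sys

def import_in_subprocess(module_name):
    """Attempt an import in a fresh interpreter; returns (ok, stderr)."""
    result = subprocess.run(
        [sys.executable, "-c", f"import {module_name}"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stderr
```

A test built on this helper asserts `import_in_subprocess("astrbot.api.star")` succeeds, which would fail loudly on the non-existent astrbot.core.star.base import.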

Member

@Dt8333 Dt8333 left a comment

Sync with the main branch, and clean up this code where it also appears elsewhere.


Labels

  • area:core: The bug / feature is about astrbot's core, backend
  • area:webui: The bug / feature is about webui (dashboard) of astrbot
  • size:XXL: This PR changes 1000+ lines, ignoring generated files

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants