studio: setup log styling #4494

Open

Imagineer99 wants to merge 22 commits into unslothai:main from Imagineer99:feat/studio-setup-log-styling

Conversation


@Imagineer99 Imagineer99 commented Mar 20, 2026

Unify Studio terminal output styling across setup flows with a cleaner, step-based format and shared banner treatment.
Update Windows setup (studio/setup.ps1) to match the new visual style while keeping behavior intact.
Add unsloth studio setup --verbose (-v) to enable full dependency/build logs for troubleshooting.

Windows: [screenshot]

WSL: [screenshot]

Launch: [screenshot]
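The flag itself is simple to wire; a minimal argparse sketch of how such a `-v`/`--verbose` option might gate output (the actual CLI code is not shown in this thread, so the names here are illustrative):

```python
import argparse

# Illustrative only: mirrors the described "unsloth studio setup --verbose (-v)" flag.
parser = argparse.ArgumentParser(prog="unsloth studio setup")
parser.add_argument(
    "-v", "--verbose", action="store_true",
    help="show full dependency/build logs for troubleshooting",
)

args = parser.parse_args(["--verbose"])
# Downstream steps would stream child output when verbose, capture it otherwise.
print(args.verbose)  # True
```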

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the user experience during Unsloth Studio setup by standardizing and improving the terminal output. It introduces a consistent, step-based visual style for setup logs across different operating systems, making the process clearer and easier to follow. A new verbose mode provides detailed diagnostic information for advanced users, while making certain build steps optional prevents complete setup failures due to non-critical component issues.

Highlights

  • Unified Terminal Output Styling: Terminal output styling across Studio setup flows has been unified, presenting a cleaner, step-based format with a shared banner treatment for improved readability.
  • Windows Setup Script Update: The Windows setup script (studio/setup.ps1) was updated to match the new visual style, ensuring consistent user experience while retaining its original behavior.
  • Verbose Mode for Troubleshooting: A new --verbose (-v) option was added to unsloth studio setup to enable full dependency and build logs, facilitating troubleshooting for optional failures like llama.cpp or cmake.
  • Optional llama.cpp Build Steps: The llama.cpp build steps (GGUF server / quantize tooling) are now treated as optional; if they fail, setup continues and prints a non-fatal warning, improving UX on mixed GPU/toolchain setups.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b4e6ba32d0

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request significantly improves the setup experience by unifying the terminal output style across setup.sh, setup.ps1, and Python installation scripts. The new step-based format is much cleaner and easier to follow. The addition of a --verbose flag for detailed troubleshooting logs is a great feature. Making the llama.cpp build optional is also a smart move to improve user experience on systems with tricky toolchains.

I've found one issue in setup.ps1 where it exits on llama.cpp build failure, which is inconsistent with setup.sh and the PR's goal of making this step optional. My review includes a suggestion to fix this.

@Imagineer99
Contributor Author

Reverted to fail-fast behaviour in studio/setup.sh for core llama.cpp steps.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9a691e4117

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Imagineer99 and others added 2 commits March 21, 2026 00:20
Resolve merge conflicts in setup.sh and setup.ps1 while preserving
both main's behavioral additions and the PR's styling refactor:

setup.sh:
- Keep main's _run_quiet/run_quiet/run_quiet_no_exit architecture
  with PR's step() styled error messages
- Restore REQUESTED_PYTHON_VERSION support from main (install.sh
  contract) with substep() output style
- Restore graceful run_quiet_no_exit || BUILD_OK=false for llama.cpp
  clone/cmake/build steps (instead of hard exit)
- Keep PR's try_quiet for optional llama-quantize step
- Keep PR's compact python version check one-liners

setup.ps1:
- Restore main's cpu-first/CUDA-else PyTorch install structure with
  PR's step/substep/verbose styling
- Combine main's conda+URL detail in Python-not-found error with
  PR's step/substep format
- Style main's stale-venv and conda-skip messages to match PR
  conventions

Auto-merged from main (no conflicts):
- _platform_compat import and UTF-8 reconfigure in run.py
- Test-IsConda, py.exe launcher, Get-Command -All Python discovery
- Stale-venv rebuild logic
- Get-PytorchCudaTag with cu118/cpu fallbacks
@danielhanchen
Contributor

Merged main into this branch to resolve conflicts in setup.sh and setup.ps1.

What was conflicting and how it was resolved:

setup.sh (9 conflict regions):

  • _run_quiet / run_quiet / run_quiet_no_exit -- main refactored the quiet runner into a base _run_quiet with an exit-vs-return parameter. Kept main's architecture, applied the PR's step() styled error messages on top.
  • REQUESTED_PYTHON_VERSION -- main added support for install.sh passing its selected interpreter. Restored from main, using substep() for output.
  • llama.cpp build steps -- main uses run_quiet_no_exit || BUILD_OK=false for graceful failure handling (clone, cmake, build). Restored that pattern instead of the hard-exit run_quiet calls. Kept try_quiet for the optional llama-quantize step.
  • Python version checks -- kept the PR's compact one-liner style since the logic is identical to main.
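The helpers above are bash functions in setup.sh; a rough Python analogue of the exit-vs-return split (behavior assumed from the description, not the PR's actual code) looks like this:

```python
import subprocess
import sys

def _run_quiet(cmd, *, exit_on_fail):
    """Base runner: capture output, surface it only when the command fails."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Failure log goes to stderr, matching the try_quiet redirect fix.
        sys.stderr.write(result.stdout + result.stderr)
        if exit_on_fail:
            sys.exit(result.returncode)
    return result.returncode == 0

def run_quiet(cmd):
    # Hard-exit variant for required steps.
    return _run_quiet(cmd, exit_on_fail=True)

def run_quiet_no_exit(cmd):
    # Graceful variant: the caller records `|| BUILD_OK=false`-style state.
    return _run_quiet(cmd, exit_on_fail=False)

build_ok = run_quiet_no_exit([sys.executable, "-c", "raise SystemExit(1)"])
print(build_ok)  # False, but the script keeps going
```

The design point is that both public helpers share one capture path, so styled error output stays identical whether a step is fatal or optional.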

setup.ps1 (3 conflict regions):

  • Python not found error -- combined main's detail (conda note, python.org URL, winget command) with the PR's step/substep format.
  • PyTorch install -- main restructured to determine $CuTag first (including "cpu" path), then branch. Kept main's structure, applied the PR's step/substep/verbose styling throughout.
  • Stale venv + conda skip messages -- these auto-merged from main. Styled them to use substep()/step() to match the PR's conventions.

Auto-merged without conflicts (from main):

  • _platform_compat import and UTF-8 stdout reconfigure in run.py
  • Test-IsConda, py.exe launcher, Get-Command -All Python discovery in setup.ps1
  • Stale-venv rebuild logic in setup.ps1
  • Get-PytorchCudaTag with cu118/cpu fallbacks

All syntax checks pass (bash -n setup.sh, python -m py_compile on all Python files).

- Bracket IPv6 display_host in external_url to produce clickable URLs
- Redirect try_quiet failure log to stderr instead of stdout
- Clamp _step label to column width to prevent negative padding
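The IPv6 bracketing fix mentioned above follows the URL rule that IPv6 literals must be wrapped in brackets; a minimal sketch (function name illustrative, not the PR's actual code):

```python
def external_url(display_host: str, port: int) -> str:
    # An IPv6 literal must be bracketed to form a valid URL authority;
    # a bare "http://::1:8000" is neither clickable nor parseable.
    host = f"[{display_host}]" if ":" in display_host else display_host
    return f"http://{host}:{port}"

print(external_url("::1", 8000))        # http://[::1]:8000
print(external_url("127.0.0.1", 8000))  # http://127.0.0.1:8000
```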
@danielhanchen danielhanchen self-requested a review as a code owner March 24, 2026 10:18
@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Repo admins can enable using credits for code reviews in their settings.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7ed5b69c22

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +53 to +59
ipv6_bind = bind_host in ("::", "::1")
if ipv6_bind:
local_url = f"http://[::1]:{port}"
alt_local = f"http://localhost:{port}"
else:
local_url = f"http://127.0.0.1:{port}"
alt_local = f"http://localhost:{port}"


P2: Keep localhost banner URLs resolver-agnostic

Treating only ::/::1 as IPv6 means a bind_host of localhost always emits 127.0.0.1 URLs. On hosts where localhost resolves to IPv6 first (or IPv6-only loopback), Uvicorn can listen on ::1 while the printed local/API links point to IPv4 and fail to connect. This is a regression from using localhost-based URLs for loopback binds and affects users who start Studio with -H localhost.

Useful? React with 👍 / 👎.
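One way to act on this suggestion, sketched in Python (not the PR's actual code; the real banner logic may differ):

```python
def local_url(bind_host: str, port: int) -> str:
    # Keep "localhost" as-is so the printed URL follows the OS resolver
    # (IPv4-first or IPv6-first), instead of hard-coding 127.0.0.1.
    if bind_host == "localhost":
        return f"http://localhost:{port}"
    if bind_host in ("::", "::1"):
        return f"http://[::1]:{port}"
    return f"http://127.0.0.1:{port}"

print(local_url("localhost", 7860))  # http://localhost:7860
```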

danielhanchen and others added 2 commits March 24, 2026 11:00
Simulation harness (tests/simulate_pr4494.py) creates an isolated uv
venv, copies the real source files into it, and runs subprocess tests
for all three fixes with visual before/after demos and edge cases.

Standalone bash test (tests/test_try_quiet.sh) validates try_quiet
stderr redirect across 8 scenarios including broken-version contrast.

39 integration tests total (14 IPv6 + 15 try_quiet + 10 _step), all
existing 75 unit tests still pass.
@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Repo admins can enable using credits for code reviews in their settings.

The %-15s printf format pads short labels but does not truncate long
ones.  Change to %-15.15s so labels wider than 15 chars are clipped,
matching the PowerShell .Substring(0,15) and Python label[:15] logic.
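All three implementations now share the same pad-and-clip semantics; the Python side's behavior can be expressed as (illustrative helper name, not necessarily the code's own):

```python
def clip_label(label: str, width: int = 15) -> str:
    # Pad short labels and clip long ones, mirroring printf's "%-15.15s"
    # and PowerShell's .Substring(0, 15) plus padding.
    return f"{label[:width]:<{width}}"

print(repr(clip_label("Python")))                  # 'Python         '
print(repr(clip_label("A very long step label")))  # 'A very long ste'
```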

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 83cab0c395

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

external_url = f"http://{display_host}:{port}"
listen_all = bind_host in ("0.0.0.0", "::")
loopback_bind = bind_host in ("127.0.0.1", "localhost", "::1")
api_base = local_url if listen_all or loopback_bind else external_url


P2: Print reachable API URLs when binding all interfaces

When bind_host is 0.0.0.0/::, api_base is forced to local_url, so the banner advertises 127.0.0.1/::1 API endpoints even though the server is intentionally exposed on another host address. In remote/server setups, users copying the shown /api URL from the banner get a non-routable loopback address instead of the reachable network address, which is a regression from the prior display_host-based API links.

Useful? React with 👍 / 👎.
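A sketch of the suggested direction (hypothetical helper, not the PR's code): advertise display_host whenever the bind is not a loopback address, bracketing IPv6 literals as elsewhere in the banner.

```python
def choose_api_base(bind_host: str, display_host: str, port: int) -> str:
    # When listening on all interfaces, the loopback URL is not what remote
    # users need; advertise the reachable display host instead.
    if bind_host in ("0.0.0.0", "::"):
        host = f"[{display_host}]" if ":" in display_host else display_host
        return f"http://{host}:{port}"
    return f"http://127.0.0.1:{port}"

print(choose_api_base("0.0.0.0", "192.168.1.5", 7860))  # http://192.168.1.5:7860
```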

These test files are not part of the styling fix and should not
ship with this PR.
Resolve conflicts in install_python_stack.py: keep the PR's column-
layout output system (_step, _COL, _LABEL) and incorporate main's
_safe_print Unicode fallback for Windows consoles.
- install_python_stack.py: restore _red for patch_package_file
  warnings (was downgraded to _dim)
- setup.ps1: capture winget output and show on failure for CUDA,
  Node, Python, and OpenSSL installs (was piped to Out-Null)
- setup.ps1: always show git pull failure warning, not just in
  verbose mode
@danielhanchen
Contributor

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request significantly refactors the terminal output for Unsloth Studio's setup and startup processes across platforms (Linux/macOS, Windows, Colab). It introduces a standardized, color-coded logging system using step and substep functions for consistent, user-friendly feedback during installation and runtime, centralizes the startup banner in a new startup_banner.py module, and updates install_python_stack.py and the setup.sh/setup.ps1 scripts to use the new output helpers. The unsloth studio setup CLI command now supports a --verbose flag for detailed logging, and the scripts' logic for detecting and installing dependencies like Node.js, Python, the CUDA Toolkit, and the llama.cpp build has been refined for robustness and clarity.

Two changes are requested: stdout_supports_color in studio/backend/startup_banner.py should enable Windows Virtual Terminal processing for consistent color support on Windows, and its broad except Exception: block should log the exception, per the repository's rule against silent exception handling.

Comment on lines +15 to +42
def stdout_supports_color() -> bool:
"""True if we should emit ANSI colors."""
if os.environ.get("NO_COLOR", "").strip():
return False
if os.environ.get("FORCE_COLOR", "").strip():
return True
try:
return sys.stdout.isatty()
except Exception:
return False


def print_port_in_use_notice(original_port: int, new_port: int) -> None:
"""Message when the requested port is taken and another is chosen."""
msg = f"Port {original_port} is in use, using port {new_port} instead."
if stdout_supports_color():
print(f"\033[38;5;245m{msg}\033[0m")
else:
print(msg)


def print_studio_access_banner(
*,
port: int,
bind_host: str,
display_host: str,
) -> None:
"""Pretty-print URLs after the server is listening (beginner-friendly)."""

medium

For consistency with install_python_stack.py and to ensure colors work correctly on Windows, this function should also attempt to enable virtual terminal (VT) processing. The current implementation only checks if stdout is a TTY, which is not sufficient on its own for modern Windows terminals. Additionally, the broad except Exception: blocks have been modified to log the exception, adhering to the repository rule against silent exception handling for better debugging.

def stdout_supports_color() -> bool:
    """True if we should emit ANSI colors."""
    if os.environ.get("NO_COLOR", "").strip():
        return False
    if os.environ.get("FORCE_COLOR", "").strip():
        return True
    try:
        if not sys.stdout.isatty():
            return False
    except Exception as e:
        # Log the exception for debugging purposes as per repository rule.
        import logging
        logging.debug(f"stdout.isatty() check failed: {e}")
        return False

    if sys.platform == "win32":
        try:
            import ctypes
            kernel32 = ctypes.windll.kernel32
            handle = kernel32.GetStdHandle(-11)  # STD_OUTPUT_HANDLE
            mode = ctypes.c_ulong()
            kernel32.GetConsoleMode(handle, ctypes.byref(mode))
            kernel32.SetConsoleMode(handle, mode.value | 0x0004)
        except Exception as e:
            # Log the exception for debugging purposes as per repository rule.
            import logging
            logging.debug(f"Failed to enable VT processing on Windows: {e}")
            return False
    return True
References
  1. Avoid using broad, silent exception handlers like except Exception: pass. Instead, log the exception, even if at a debug level, to aid in future debugging.
