OpenXAI: Towards a Transparent Evaluation of Model Explanations (JavaScript, updated Aug 17, 2024)
Love2D LSP (VS Code / Neovim / Zed / etc.) extension for live coding and live variable tracking
Editing machine learning models to reflect human knowledge and values
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
A user interface to interpret machine learning models.
[ICLR 2026] DecAlign: Aligning Cross-Modal Semantics for Multimodal Foundation Models
Visually compare fill-in-the-blank LLM prompts to uncover learned biases and associations!
Online exploration of memory reduction strategies of a DRL agent trained to solve a navigation task on ViZDoom
ir_explain: a Python toolkit of explainable IR methods
Visual analytics approach presented in the paper "Visual Analytics Tool for the Interpretation of Hidden States in Recurrent Neural Networks" (VCIBA, 2021).
Web based interpretability tool for LLMs using Huggingface and Inseq
Colour-coded prompt debugging for AI engineers. Paste a prompt, run a perturbation-based saliency analysis, and see exactly which phrases are driving your model's output - and which are dead weight.
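The perturbation-based saliency idea above can be sketched in a few lines: remove each word in turn, re-score the prompt, and treat the drop in score as that word's importance. This is a minimal illustration with a hypothetical `score()` stand-in; a real tool would call an LLM and score its actual output.

```python
# Minimal sketch of perturbation-based prompt saliency.
# `score` is a hypothetical stand-in for a real model call.
def score(prompt: str) -> float:
    # Toy scorer: rewards a couple of task-relevant keywords.
    keywords = {"translate", "french"}
    return float(sum(len(w) for w in prompt.split() if w.lower() in keywords))

def phrase_saliency(prompt: str) -> dict[str, float]:
    """Saliency of each word = score drop when that word is removed."""
    words = prompt.split()
    base = score(prompt)
    saliency = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        saliency[w] = base - score(perturbed)
    return saliency

print(phrase_saliency("Please translate this to French now"))
```

Words whose removal does not change the score ("dead weight") get a saliency of zero, while task-driving phrases get large positive values.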
A web user interface for the OncoText Pathology System (https://github.com/yala/OncoText)
Build explainable machine learning products and services
Neural network inspector web application source repository
Interactive visualization tool for complex neural network architectures and activation maps.
📥 Enhance your YGGTorrent experience with YGGMollo, a Chrome extension that adds a direct download button using Ygg-API, removing download limits.