Explanation Objects

Classes for representing and visualizing explanations.

Explanation

class case_explainer.Explanation(test_sample, test_index, neighbors, predicted_class, true_class, correspondence, correspondence_interpretation, feature_names=None, class_names=None)[source]

Bases: object

Explanation object containing case-based explanation details.

__init__(test_sample, test_index, neighbors, predicted_class, true_class, correspondence, correspondence_interpretation, feature_names=None, class_names=None)[source]

Initialize explanation.

Parameters:
  • test_sample (ndarray) – The test sample being explained

  • test_index (Optional[int]) – Index in test set (if applicable)

  • neighbors (List[Neighbor]) – List of Neighbor objects

  • predicted_class (int) – Predicted class label

  • true_class (Optional[int]) – True class label (if available)

  • correspondence (float) – Correspondence score [0, 1]

  • correspondence_interpretation (str) – 'high', 'medium', or 'low'

  • feature_names (Optional[List[str]]) – Names of features (optional)

  • class_names (Optional[Dict[int, str]]) – Mapping from class labels to names (optional)

get_predicted_class_name()[source]

Get the predicted class name.

Return type:

str

get_true_class_name()[source]

Get the true class name.

Return type:

Optional[str]
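The getters above resolve labels through the optional class_names mapping. The library's internal lookup is not shown here; the following is a minimal sketch of the presumable behavior, where `resolve_class_name` is a hypothetical stand-in that falls back to the label's string form when no mapping entry exists:

```python
# Hypothetical helper sketching the class-name lookup; not part of
# the case_explainer public API.
def resolve_class_name(label, class_names=None):
    if label is None:
        return None  # mirrors get_true_class_name() when true_class is unset
    if class_names and label in class_names:
        return class_names[label]
    return str(label)  # fall back to the raw label

print(resolve_class_name(1, {0: "setosa", 1: "versicolor"}))  # versicolor
print(resolve_class_name(2))                                   # 2
```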

is_correct()[source]

Check if prediction matches true label (if available).

Return type:

Optional[bool]

summary()[source]

Generate a text summary of the explanation.

Return type:

str

to_dict()[source]

Export explanation as dictionary (for JSON serialization).

Return type:

Dict[str, Any]

plot(plot_type='radar', highlight_differences=True, show_distances=True, save_path=None, figsize=(12, 8))[source]

Visualize the explanation.

Parameters:
  • plot_type (str) – 'radar', 'bar', or 'parallel'

  • highlight_differences (bool) – Whether to highlight feature differences

  • show_distances (bool) – Whether to show distance values

  • save_path (Optional[str]) – Path to save figure (if provided)

  • figsize (Tuple[int, int]) – Figure size

Return type:

None

Key Attributes

Explanation.correspondence

The correspondence score between 0 and 1, indicating agreement between the prediction and retrieved neighbors. Higher values indicate stronger agreement with training precedent.

Explanation.correspondence_interpretation

Human-readable interpretation of correspondence: 'high' (≥85%), 'medium' (≥70% and <85%), or 'low' (<70%).
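The thresholds above can be restated as a small function. This is a re-implementation of the documented cutoffs for illustration only; `interpret_correspondence` is not part of the library's public API:

```python
# Re-implements the documented cutoffs: >=0.85 high, >=0.70 medium, else low.
def interpret_correspondence(score):
    if score >= 0.85:
        return "high"
    if score >= 0.70:
        return "medium"
    return "low"

print(interpret_correspondence(0.943))  # high
```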

Explanation.neighbors

List of Neighbor objects representing the k nearest training samples.

Explanation.predicted_class

The predicted class for the explained instance.

Explanation.true_class

The true class of the explained instance (if provided).

Neighbor

class case_explainer.explanation.Neighbor(index, distance, label, features, metadata=None)[source]

Bases: object

Represents a single nearest neighbor.

__init__(index, distance, label, features, metadata=None)[source]

Key Attributes

Neighbor.index

Index of the neighbor in the training set.

Neighbor.distance

Distance from the test sample to this neighbor.

Neighbor.label

Class label of the neighbor.

Neighbor.features

Feature values of the neighbor.

Neighbor.metadata

Optional metadata dictionary for this training sample.
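With these attributes, common post-processing such as picking the nearest precedent or tallying the majority label is straightforward. The sketch below uses a namedtuple stand-in with the same attribute names as Neighbor, since constructing real Neighbor objects requires the library:

```python
from collections import Counter, namedtuple

# Stand-in exposing the same attributes as case_explainer's Neighbor,
# used here only to illustrate typical post-processing.
FakeNeighbor = namedtuple("FakeNeighbor", "index distance label features metadata")

neighbors = [
    FakeNeighbor(42, 0.123, 1, [5.1, 3.5], None),
    FakeNeighbor(89, 0.345, 0, [4.9, 3.0], None),
    FakeNeighbor(67, 0.234, 1, [5.0, 3.4], None),
]

# Nearest precedent: smallest distance to the test sample
nearest = min(neighbors, key=lambda n: n.distance)
print(nearest.index)  # 42

# Majority label among the retrieved neighbors
majority_label, count = Counter(n.label for n in neighbors).most_common(1)[0]
print(majority_label, count)  # 1 2
```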

Example Usage

Accessing Explanation Details

from case_explainer import CaseExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Setup
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

clf = RandomForestClassifier()
clf.fit(X_train, y_train)

explainer = CaseExplainer(X_train, y_train)
explanation = explainer.explain_instance(X_test[0], k=5, model=clf)

# Access explanation properties
print(f"Correspondence: {explanation.correspondence:.2%}")
print(f"Interpretation: {explanation.correspondence_interpretation}")
print(f"Predicted class: {explanation.predicted_class}")
print(f"Is correct: {explanation.is_correct()}")

Inspecting Neighbors

# Examine each neighbor
for i, neighbor in enumerate(explanation.neighbors):
    print(f"\nNeighbor {i+1}:")
    print(f"  Training index: {neighbor.index}")
    print(f"  Distance: {neighbor.distance:.3f}")
    print(f"  Label: {neighbor.label}")
    print(f"  Features: {neighbor.features[:3]}...")  # First 3 features

    if neighbor.metadata:
        print(f"  Metadata: {neighbor.metadata}")

Exporting Explanations

# Convert to dictionary for JSON serialization
exp_dict = explanation.to_dict()

import json
with open('explanation.json', 'w') as f:
    json.dump(exp_dict, f, indent=2)

# The dictionary contains:
# - correspondence: float
# - correspondence_interpretation: str
# - predicted_class: int
# - true_class: int or None
# - is_correct: bool or None
# - neighbors: list of dicts with index, distance, label, features
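Because every field is a JSON-native type, the export round-trips losslessly. The dictionary below is a representative example built by hand from the key list above (the values are illustrative, not library output):

```python
import json

# Representative export with the documented keys; values are made up
# for illustration, not produced by case_explainer.
exp_dict = {
    "correspondence": 0.943,
    "correspondence_interpretation": "high",
    "predicted_class": 1,
    "true_class": 1,
    "is_correct": True,
    "neighbors": [
        {"index": 42, "distance": 0.123, "label": 1, "features": [5.1, 3.5, 1.4, 0.2]},
    ],
}

# Round-trip through JSON and confirm nothing is lost
restored = json.loads(json.dumps(exp_dict))
print(restored["correspondence_interpretation"])  # high
print(restored == exp_dict)  # True
```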

Generating Summaries

# Get human-readable summary
summary = explanation.summary()
print(summary)

# Example output:
# Explanation for test sample:
#   Predicted class: 1
#   True class: 1 (CORRECT)
#   Correspondence: 94.3% (high)
#
# Nearest neighbors:
#   1. Index 42, distance 0.123, label 1
#   2. Index 67, distance 0.234, label 1
#   3. Index 15, distance 0.289, label 1
#   4. Index 89, distance 0.345, label 0
#   5. Index 23, distance 0.401, label 1

Visualizing Explanations

# Create the default radar plot of neighbor distances and labels
explanation.plot()

# Choose a different plot type
explanation.plot(plot_type='bar')

# Save the figure directly via the save_path parameter
explanation.plot(save_path='explanation.png')

Batch Analysis

# Analyze multiple explanations
explanations = explainer.explain_batch(X_test[:50], k=5, y_test=y_test[:50], model=clf)

# Correspondence by class
from collections import defaultdict
corr_by_class = defaultdict(list)

for exp in explanations:
    corr_by_class[exp.predicted_class].append(exp.correspondence)

for cls, corrs in corr_by_class.items():
    mean_corr = sum(corrs) / len(corrs)
    print(f"Class {cls}: {mean_corr:.2%} avg correspondence ({len(corrs)} samples)")

# High vs low correspondence predictions
high_corr = [exp for exp in explanations if exp.correspondence >= 0.85]
low_corr = [exp for exp in explanations if exp.correspondence < 0.70]

print(f"\nHigh correspondence (≥85%): {len(high_corr)} samples")
print(f"Low correspondence (<70%): {len(low_corr)} samples")

# Accuracy by correspondence level (guard against empty groups)
high_acc = sum(1 for exp in high_corr if exp.is_correct()) / len(high_corr) if high_corr else 0
low_acc = sum(1 for exp in low_corr if exp.is_correct()) / len(low_corr) if low_corr else 0

print(f"High correspondence accuracy: {high_acc:.2%}")
print(f"Low correspondence accuracy: {low_acc:.2%}")

See Also

  • CaseExplainer: The main explainer class that generates these explanations

  • case_explainer.metrics: The correspondence computation functions