Display All Feature Names in Permutation Feature Importance Results

I’m analyzing feature importance in my machine learning model with scikit-learn’s permutation_importance, but I’ve run into an issue: when I check the importance scores, only a few feature names show up instead of all the columns from my dataset.

I need to see the importance value for every feature, even those with low scores, but at the moment the output appears incomplete or is missing some features.

Here’s the code I’m using for this task:

from sklearn.inspection import permutation_importance
import pandas as pd

# Calculate feature importance
result = permutation_importance(model, X_validation, y_validation, n_repeats=10, random_state=42)

# Display results
feature_names = X_validation.columns.tolist()
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")

I’ve searched for ways to specify the number of features that should be displayed, but I haven’t found anything clear. Does anyone know how to ensure that all feature names are included in the output, regardless of their importance scores?

Check if you accidentally added filtering conditions in your loop. I hit this exact issue on a credit scoring project - features with negative importance weren’t showing because I’d forgotten about a filter I added earlier. The permutation_importance function always returns results for every feature, so if some are missing, it’s probably how you’re accessing or displaying them. Print the raw arrays first:

print(f"Shape of importances_mean: {result.importances_mean.shape}")
print(f"All importance values: {result.importances_mean}")

Also make sure your feature_names list matches exactly what you used for training. I’ve seen mismatches between validation and training column names, especially after preprocessing like encoding or scaling. Check for duplicate column names too - pandas tolerates them, but they can mess up the feature name mapping. Run X_validation.columns.duplicated().any() to check.
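To make those checks concrete, here’s a minimal sketch - it assumes your training frame is called X_train, so adjust the name to whatever you actually use:

# Check for duplicate column names in the validation set
print(f"Duplicate columns: {X_validation.columns.duplicated().any()}")

# Compare validation columns against training columns (X_train is an assumed name)
missing = set(X_train.columns) - set(X_validation.columns)
extra = set(X_validation.columns) - set(X_train.columns)
print(f"In training but not validation: {sorted(missing)}")
print(f"In validation but not training: {sorted(extra)}")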

Had the same issue with a regression model recently. Your terminal or IDE might be truncating the output when there are too many features. If you display the results through pandas, these settings stop it from truncating rows and columns:

pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
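Those options only take effect when you print a pandas object, so wrap the results in a Series first - a quick sketch using the names from your code:

# Build a Series so the display options above actually apply
importances = pd.Series(result.importances_mean, index=feature_names)
print(importances.sort_values(ascending=False))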

Or just write the results to a file so nothing gets cut off:

with open('feature_importance.txt', 'w') as f:
    for i in result.importances_mean.argsort()[::-1]:
        f.write(f"{feature_names[i]}: {result.importances_mean[i]:.3f}\n")

Also check if your X_validation dataframe actually has all the features you think it does. Data cleaning sometimes removes columns but doesn’t update variable names. Quick check: print(X_validation.shape) and print(X_validation.columns).
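You can also compare against what the model itself expects. Assuming model is a fitted scikit-learn estimator (most expose n_features_in_ after fitting), a quick sanity check looks like this:

# The fitted estimator records how many features it was trained on
print(f"Model was trained on {model.n_features_in_} features")
print(f"X_validation has {X_validation.shape[1]} features")
assert model.n_features_in_ == X_validation.shape[1], "Feature count mismatch"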

Permutation importance should return scores for every feature in your data. If some are missing, there’s probably a mismatch between your feature names and the actual columns being processed.

Your code looks fine. The issue is probably not the calculation itself but how you’re reading the output.

I hit something similar last year debugging a fraud detection model. Turns out I was filtering my dataset upstream, so some features never made it to the permutation importance calculation.

First, print the lengths of your feature name list and your importance score array:

print(f"Number of features: {len(feature_names)}")
print(f"Number of importance scores: {len(result.importances_mean)}")

If these don’t match, you’ve got a data preprocessing problem. Check if you’re dropping columns with missing values or doing feature selection before this step.
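A common culprit is dropna(axis=1), which silently drops any column containing missing values. A toy example of the silent shrink:

import numpy as np
import pandas as pd

# Column "b" contains a NaN, so dropna(axis=1) removes it without warning
df = pd.DataFrame({"a": [1, 2], "b": [np.nan, 2], "c": [3, 4]})
print(df.shape)                 # (2, 3)
print(df.dropna(axis=1).shape)  # (2, 2) - column "b" is gone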

If the numbers match but you’re still not seeing all features, try this:

# Create a dataframe to see everything clearly
importance_df = pd.DataFrame({
    'feature': feature_names,
    'importance': result.importances_mean
}).sort_values('importance', ascending=False)

print(importance_df)

This shows every single feature with its score, even the zeros.
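One caveat: print on a long DataFrame still truncates the middle rows by default, so for a guaranteed full dump use to_string():

# to_string() renders every row regardless of pandas display settings
print(importance_df.to_string())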

Nine times out of ten, when features seem “missing” from permutation importance, they’re actually there with tiny values that get rounded to zero or filtered out in your pipeline.
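If you suspect rounding, bump the precision and print the spread across repeats too - a small sketch using the importances_std array that permutation_importance already returns:

# Show full precision plus the standard deviation across the n_repeats shuffles
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.6f} "
          f"+/- {result.importances_std[i]:.6f}")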