
Understanding Responsible ML

With great power comes great responsibility, especially when it comes to machine learning.

Kyle Lyon
15 min read


Introduction

With the increasing use of machine learning and artificial intelligence in our daily lives, it is essential that we consider the ethical implications of these technologies. Real-life examples of biased or unfair decisions made by machine learning models have emerged in recent years, leading to harmful consequences. Whether it's facial recognition software found to be less accurate for people with darker skin tones, resulting in potential misidentification and discrimination, or predictive policing algorithms criticized for perpetuating existing biases in law enforcement, responsible machine learning practices are necessary to mitigate the risks of these technologies and ensure their fair and ethical use.

In this article, we will explore the concept of responsibility in machine learning and artificial intelligence and discuss how explainable machine learning (XML) models can help achieve responsible machine learning. First, let's define what responsibility means in ML and AI. Here's a definition from Virginia Dignum's book Responsible Artificial Intelligence:

"Responsible Artificial Intelligence is about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and well-being in a sustainable world."

This quote, of course, is a definition of Responsible Artificial Intelligence, but it has a problem: it is circular. It uses the word "responsibility" to define Responsible AI, which doesn't help us understand the concept. A better way to describe it might be to use the term "risk management" instead, because "responsibility" is a difficult concept for many people to pin down, especially when applied to technology. As my former professor, Dr. Patrick Hall, suggested, framing it as "risk management" can help us better understand what Responsible AI and ML are all about.

The National Institute of Standards and Technology (NIST) offers a more in-depth depiction of technological trustworthiness and responsibility:

Trustworthy AI: Risks & Characteristics (credits)

Risk management is a critical element of this taxonomy. Each category has its own characteristics that identify potential risks, and with that understanding we can better manage those risks and reduce their impact on our organizations.

But why are there risks in the first place, you may ask?

Let's remind ourselves that computers cannot "trust" because they are inanimate objects that cannot understand or experience such concepts. Quite honestly, they cannot understand anything at all; they just execute lines of code. Although computers cannot trust, they can be trustworthy, and trustworthiness is achieved through careful programming, testing, and maintenance.

Okay, now that we understand responsibility, what's the difference between Responsible AI and ML?

If we had to summarize the distinction, we could say that AI is the broader field of computer science that focuses on designing computers to perform tasks that usually require human intelligence, while ML is a specific technique used to teach computers to learn and improve at performing tasks.

A simple way of viewing it is that AI is like a toolbox full of different tools that can be used to solve various problems. ML is like a hammer within this toolbox - it's a specific tool that can be used to solve certain issues, but it's just one of many tools that can be used.

What is an XML Model?

First and foremost, it is crucial to point out that there is a difference between interpretability and explainability. Interpretation is a meaningful, high-level understanding that uses context and leverages human background knowledge. That is to say, an interpretable ML model should be expected to describe what an output means in a real-life context. Explainability, on the other hand, is a low-level, detailed understanding that can describe a complex process. This means XML models should be expected to describe how an output occurred.

The distinction between interpretability and explainability is clear, but the more striking difference is how much harder it is to create an interpretable model than an explainable one. Developing an interpretable model requires a significant amount of effort and foresight, and many companies (including industry leaders) struggle to deploy interpretable models with accountability.

Why is XML Useful?

Although explainability is simpler to achieve than interpretability, it comes in many shapes and sizes. Some XML models are directly understandable to non-technical consumers, while others are only explainable to highly skilled data practitioners. Regardless, explainability is crucial for success in many areas, including:

  • Risk Management
  • Documentation
  • Compliance
  • Consulting
  • Finding and Fixing Discrimination
  • Debugging

The reality is that although machine learning is a powerful tool that can help us solve complex problems, it can also create issues if we don't fully understand how it works. When ML models are used to make crucial decisions, we must have a clear understanding of the decision-making process rather than relying on a "black-box" approach where the model's inner workings are not transparent. ML models can sometimes produce unexpected or biased results, especially if trained on biased data. By using a "glass-box" approach with XML, where the decision-making process is open and transparent, we can better understand the limitations and biases of our models and take steps to mitigate any potential risks. Additionally, having a clear understanding of our models is essential for the scientific method, as it allows us to evaluate the validity and reliability of our results.

Although XML models can be effective for both structured and unstructured data, they may be particularly well-suited to structured data. Its clear and well-defined... well, structure makes it easier to identify relationships between input features and output predictions with a minimal tradeoff in performance. With unstructured data, however, it can be more challenging to extract meaningful features and the relationships between them. This hinders the explainability of the model and may require more advanced techniques, such as neural networks with attention mechanisms or convolutional neural networks, which are themselves harder to interpret.

Now, let's examine some characteristics that make a model explainable.

What are the Characteristics of Explainability?

If a model must be capable of producing a description of how an output occurred to be considered explainable, then its characteristics must enable it to reach such a conclusion. The six factors listed below are helpful in that way:

  1. Additivity: whether/how a model takes an additive or modular form. When each variable in a function is treated independently, the model tends to be more explainable. For example, suppose a regression takes X1 and X2 as inputs. How X2 is treated never affects how X1 is treated, and vice versa ("treated" here means how a variable is fitted as a coefficient). Additivity helps explainability because when considering how X2 affects a model, we don't have to think about X1 or any other variable (see the short sketch after this list). For this same reason, neural networks are not explainable. Imagine a simple neural network with two layers and three inputs: X1, X2, and X3. Those three inputs would be mixed twice, making it very difficult to understand how the variables affect each other and the model output.
  2. Sparsity: whether/how features or model components are regularized. Models become more explainable when the cost function has fewer coefficients/variables, meaning less information in a model increases its explainability. Although it sounds counterintuitive, information and explainability can possess an inverse relationship. It becomes more apparent when you picture it in a real-world scenario. Imagine that you're in a boardroom and tasked to explain the output of a logistic regression that takes 50 variables as inputs. Could you explain how each variable singularly affects the model output? Truthfully, it would be tough for anyone to balance the relationships of several variables in their mind at once.
  3. Linearity: whether/how feature effects are linear. This characteristic should be the most well-known, as the concept of linearity is often covered in grade school. When the effect of a feature is constant in a model, the model is more explainable. That is to say, a constant change in X1 resulting in a constant change in Y is more explainable than a constant change in X1 resulting in an inconsistent change in Y. Non-linear models, on the other hand, are more challenging to comprehend and summarize.
  4. Smoothness: whether/how feature effects are continuous and smooth. To keep it simple, smooth functions are generally easier to understand than functions with sharp spikes or discontinuities, as they are more predictable. This can make them easier to work with mathematically and reason about their behavior. In contrast, functions whose outputs frequently jump between positive and negative numbers are less explainable due to their unpredictable behavior.
  5. Monotonicity: whether/how feature effects consistently increase or decrease with the target variable. When there is a monotonic relationship between two variables, it is clear that there is a direct and consistent relationship between them, which can make it easier to understand and explain the behavior of a model. This relationship can be either positive or negative and does not involve interactions or nonlinearities.
  6. Visualizability: whether/how the feature effects can be directly visualized. Models are considered more explainable when charts and other visuals can facilitate the final model diagnostics and explanation.
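
To make the additivity point above a bit more concrete, here is a minimal, purely hypothetical sketch (the coefficients are made up for illustration): in an additive model, each feature's contribution can be read off without reference to the others.

# hypothetical additive model: y_hat = 1.0 + 2.0*x1 - 0.5*x2
def additive_model(x1, x2):
    return 1.0 + 2.0 * x1 - 0.5 * x2

# x1's contribution (2.0 * x1) is the same no matter what value x2 takes,
# which is exactly what makes its effect easy to explain in isolation
print(additive_model(1.0, 0.0))   # 3.0  -> x1 contributes +2.0
print(additive_model(1.0, 10.0))  # -2.0 -> x1 still contributes +2.0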

Types of XML Models

Before we proceed, I want to clarify that some prior machine learning knowledge is necessary to understand the following content fully. Specifically, you should be familiar with statistics and probability theory, calculus, and linear algebra. Assuming that you have this knowledge, we will now delve into three types of models that increase in complexity.

Penalized GLM

A Penalized GLM (Generalized Linear Model) is a flexible generalization of an ordinary linear regression that allows for response variables with error distribution models other than a normal distribution (e.g., binomial, Poisson, and gamma) and is robust to correlation, wide data, and outliers.

Additionally, Penalized GLM applies regularization techniques to prevent overfitting, which occurs when a model learns the training data too well and fails to generalize to new, unseen data.

The "Penalty" in Penalized GLM comes from introducing penalties and additional terms added to the cost function during the model-building process, which shrinks the coefficients of the predictor variables. This leads to a more stable and more straightforward model that can better generalize to new data. The most common regularization techniques are Ridge Regression (L2 penalty) and LASSO (L1 penalty). Ridge Regression shrinks the coefficients towards zero, while LASSO can shrink some coefficients to precisely zero, effectively performing feature selection.

Mathematically:
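
As a sketch of the objective being minimized, here is the elastic net formulation used by scikit-learn's ElasticNet (with alpha playing the role of the overall penalty strength λ and l1_ratio the role of the mixing parameter α); pure LASSO and Ridge fall out as the special cases α = 1 and α = 0:

$$\hat{\beta} = \underset{\beta}{\arg\min} \; \frac{1}{2n} \lVert y - X\beta \rVert_2^2 + \lambda \left( \alpha \lVert \beta \rVert_1 + \frac{1-\alpha}{2} \lVert \beta \rVert_2^2 \right)$$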

Programmatically:

In the code snippet below, we generate synthetic data for a regression problem, set up an ElasticNet model with a LASSO penalty, and use GridSearchCV to find the best regularization parameter (alpha) using 5-fold cross-validation. The script then prints the final model's best alpha and coefficients.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

# generate sample data
n_samples, n_features = 100, 10
X, y = make_regression(n_samples, n_features, noise=0.5)

# set up the Lasso (L1) penalty
l1_ratio = 1.0  # 1.0 for Lasso, 0.0 for Ridge, between 0 and 1 for Elastic Net

# define the regularization path (alpha values)
alphas = np.logspace(-3, 1, 50)

# set up the ElasticNet model with cross-validation
model = ElasticNet(max_iter=10000, tol=1e-3, random_state=42)
grid = GridSearchCV(model, param_grid={'alpha': alphas, 'l1_ratio': [l1_ratio]}, cv=5)

# fit the penalized GLM model
grid.fit(X, y)

# extract the best model and coefficients
best_model = grid.best_estimator_
coefficients = best_model.coef_

# print the results
print(f"Best alpha: {best_model.alpha:.4f}")
print("Coefficients:")
print(coefficients)

Okay, that's covered. Let's take it up a notch, shall we?

GAMs and EBMs

GAMs (Generalized Additive Models) and EBMs (Explainable Boosting Machines) build upon GLMs by allowing for more flexible and complex relationships between response and predictor variables. GAMs achieve this by using smooth functions in the systematic component instead of linear combinations, while EBMs combine simpler functions (e.g., linear terms, decision trees, GAM-style smooth functions) with gradient-boosting techniques.

  1. GAMs allow for non-linear relationships between the response variable and predictor variables using smooth functions for predictors instead of linear combinations, which can capture more complex patterns in the data. GAMs are formulated as the sum of these smooth functions, allowing for a more flexible and explainable modeling approach because the contributions of each predictor variable can be visualized and understood separately. GAMs, like GLMs, can handle different response variables and error structures.
  2. EBMs combine the strengths of both boosted decision trees and GAMs. Like GAMs, EBMs model the relationship between the response and predictor variables using additive functions. Each term is a simple function of a single input feature, and the final prediction is the sum of these terms. Each term can be a linear function (similar to GLMs) or a more complex function like a decision tree or a GAM-style smooth function. Composing the model from these simpler pieces makes it more interpretable and easier to explain.

    EBMs use a stage-wise gradient boosting technique to learn the best combination of functions for each predictor variable. The boosting process involves iteratively adding new functions to the model, with each function being fit to the residual errors of the current model. This allows EBMs to capture complex interactions and non-linear relationships while maintaining a relatively simple and explainable structure.

Both GAMs and EBMs focus on improving the explainability and flexibility of traditional linear models. They allow for non-linear relationships and provide insights into the contribution of each predictor variable, making them valuable tools for achieving responsible and interpretable machine-learning outcomes.

Mathematically:
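
As a sketch, both models can be written in additive form. For a GAM with link function g and smooth functions f_j:

$$g\left(\mathbb{E}[y]\right) = \beta_0 + f_1(x_1) + f_2(x_2) + \dots + f_p(x_p)$$

An EBM keeps the same additive structure but learns each shape function with boosting and may add a small number of pairwise interaction terms:

$$g\left(\mathbb{E}[y]\right) = \beta_0 + \sum_j f_j(x_j) + \sum_{i \neq j} f_{ij}(x_i, x_j)$$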

Programmatically:

Generalized Additive Model:

In the example below, we generate synthetic data with a quadratic relationship between X and y and fit a GAM with a spline term using the pyGAM library. The script then makes predictions using the fitted GAM and plots the original data points and the GAM fit.

import numpy as np
import matplotlib.pyplot as plt
from pygam import LinearGAM, s
from sklearn.datasets import make_regression

# generate sample data with non-linear relationships
n_samples = 200
X = np.linspace(-10, 10, n_samples)
y = X ** 2 + np.random.normal(0, 9, n_samples)

# reshape X for compatibility with pyGAM
X = X.reshape(-1, 1)

# fit a GAM with a spline term
gam = LinearGAM(s(0, n_splines=25)).gridsearch(X, y)

# make predictions
X_pred = np.linspace(-12, 12, 300).reshape(-1, 1)
y_pred = gam.predict(X_pred)

# plot the original data and the GAM fit
plt.scatter(X, y, color='gray', alpha=0.5, label='Original data')
plt.plot(X_pred, y_pred, color='red', label='GAM fit')
plt.legend()
plt.title('GAM Example')
plt.xlabel('X')
plt.ylabel('y')
plt.show()
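
Because each smooth term in a GAM can be inspected on its own, a short follow-up sketch using pyGAM's partial_dependence helper (the 95% interval width is just an illustrative choice) can visualize the fitted spline term together with its confidence band:

# inspect the fitted spline term by itself: pyGAM exposes per-term partial
# dependence, which is what makes each feature's contribution visualizable
XX = gam.generate_X_grid(term=0)
pdep, confi = gam.partial_dependence(term=0, X=XX, width=0.95)

plt.plot(XX[:, 0], pdep, color='red')
plt.plot(XX[:, 0], confi, color='gray', ls='--')
plt.title('Partial Dependence of the Spline Term')
plt.xlabel('X')
plt.ylabel('Partial effect')
plt.show()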

Explainable Boosting Machine:

In the example below, we generate synthetic data for a regression problem and fit an EBM model using the interpret library. We split the data into training and testing sets and then fit the EBM model on the training data. The script makes predictions using the fitted EBM, evaluates the model on the test set, and plots the training and testing data points and the EBM fit.

import numpy as np
import matplotlib.pyplot as plt
from interpret.glassbox import ExplainableBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_regression

# generate sample data with more noise
n_samples, n_features = 200, 1
X, y = make_regression(n_samples, n_features, noise=20)

# split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# fit an EBM model
ebm = ExplainableBoostingRegressor(random_state=42)
ebm.fit(X_train, y_train)

# make predictions
X_pred = np.linspace(X.min(), X.max(), 300).reshape(-1, 1)
y_pred = ebm.predict(X_pred)

# evaluate the model on the test set
score = ebm.score(X_test, y_test)

# plot the original data and the EBM fit
plt.scatter(X_train, y_train, color='gray', alpha=0.5, label='Training data')
plt.scatter(X_test, y_test, color='blue', alpha=0.5, label='Testing data')
plt.plot(X_pred, y_pred, color='red', label='EBM fit')
plt.legend()
plt.title(f'EBM Example with Messier Data (R^2: {score:.2f})')
plt.xlabel('X')
plt.ylabel('y')
plt.show()
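
Because the whole point of an EBM is that its learned shape functions are inspectable, a short follow-up sketch using interpret's global explanation might look like this (assuming an environment, such as a Jupyter notebook, where the interactive dashboard can render):

from interpret import show

# explain_global() exposes the learned per-feature shape functions and their
# overall importance scores -- the "glass-box" view of the fitted EBM
ebm_global = ebm.explain_global()
show(ebm_global)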

Now, let's move on to my personal favorite.

Monotonic GBM

A Monotonic Gradient Boosting Machine (MGBM) builds an ensemble of decision trees while enforcing monotonic constraints on the relationship between the predictor variables and the response variable. Generally speaking, a monotonic model has an easy-to-understand relationship with certain input features: as these features increase, the model's output will only ever increase (or only ever decrease).

MGBMs offer explainability because they follow this simple, monotonic rule, which makes it easy to understand how a model works and how changes in the input data affect its predictions.

Gradient boosting can be difficult to explain due to its complexity: it is an iterative process that builds decision trees sequentially, with each tree focusing on the errors made by the previous trees. Enforcing monotonic constraints on the predictor variables addresses this. These constraints ensure that the relationship between a predictor variable and the response variable is always increasing or always decreasing, depending on the nature of the constraint. This leads to more understandable models, since the effect of each predictor variable on the response variable follows a consistent direction, while still allowing the model to gradually improve its predictions.

For example, in a credit scoring model, a monotonic constraint could be applied to the income variable, ensuring that predicted creditworthiness does not decrease as income increases. By enforcing such constraints, MGBM provides a better balance between model accuracy and interpretability, ensuring that users can trust and understand the model's decision-making process.

Mathematically:
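
As a sketch, the boosted ensemble is still a sum of trees, and the monotonic constraint is a condition on the final prediction function. With learning rate η and trees f_m, for a feature x_j constrained to be non-decreasing (all other features held fixed):

$$F(x) = \sum_{m=1}^{M} \eta \, f_m(x), \qquad x_j \le x_j' \;\Rightarrow\; F(\dots, x_j, \dots) \le F(\dots, x_j', \dots)$$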

Programmatically:

The code snippet below demonstrates using LightGBM to train a monotonic Gradient Boosting Machine on the Boston Housing dataset for a regression task. The monotone_constraints parameter is used to enforce the monotonicity constraint on the specified input feature.

import pandas as pd
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

# load the Boston Housing dataset
data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
X = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
y = raw_df.values[1::2, 2]

# split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# define the monotonic constraints (1 for non-decreasing, -1 for non-increasing, 0 for no constraint)
# here, we assume the first feature should have a non-decreasing relationship with the target
monotonic_constraints = [1] + [0] * (X.shape[1] - 1)

# set up the LightGBM parameters
params = {
    'boosting_type': 'gbdt',
    'objective': 'regression',
    'metric': 'l2',
    'num_leaves': 31,
    'learning_rate': 0.05,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': 0,
    'monotone_constraints': monotonic_constraints
}

# create the LightGBM datasets
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_test, label=y_test, reference=train_data)

# train the model
gbm = lgb.train(
    params,
    train_data,
    valid_sets=[test_data],
    num_boost_round=500,
    callbacks=[lgb.early_stopping(stopping_rounds=10)],
)

# make predictions
y_pred = gbm.predict(X_test)

The following code snippet creates a scatter plot to visualize the performance of the MGBM on the test set. The x-axis represents the actual target values (y_test), and the y-axis represents the predicted values (y_pred). A tighter clustering of points around a diagonal line indicates better model performance.

import matplotlib.pyplot as plt

# plot the true target values vs predicted values
plt.scatter(y_test, y_pred)
plt.xlabel('True Values')
plt.ylabel('Predicted Values')
plt.title('True Values vs Predicted Values for Monotonic GBM')
plt.grid(True)
plt.show()

To demonstrate the monotonic relationship, you can plot the feature with the monotonic constraint against the target and predicted values. In our example, we enforced a non-decreasing constraint on the first feature.

This code snippet sorts the test set by the first feature and then plots the actual target and predicted values against the first feature. The resulting chart should show that the predicted values follow the non-decreasing relationship with the first feature enforced by the monotonic constraint.

import matplotlib.pyplot as plt

# sort the test set by the first feature
sorted_indices = np.argsort(X_test[:, 0])
X_test_sorted = X_test[sorted_indices]
y_test_sorted = y_test[sorted_indices]
y_pred_sorted = y_pred[sorted_indices]

# plot the true target values and predicted values against the first feature
plt.plot(X_test_sorted[:, 0], y_test_sorted, 'o', label='True Values', markersize=5)
plt.plot(X_test_sorted[:, 0], y_pred_sorted, 'o', label='Predicted Values', markersize=3)
plt.xlabel('First Feature')
plt.ylabel('Target Values')
plt.title('Monotonic Relationship of First Feature with Target Values')
plt.legend()
plt.grid(True)
plt.show()

Wrap-Up

XML models have become increasingly relevant and prevalent in today's data-driven society. With the ever-growing demand for interpretable and explainable AI solutions, XML helps bridge the gap between complex algorithms and human understanding. Various XML models, such as Penalized GLMs, GAMs, EBMs, and Monotonic GBMs, cater to different use cases and requirements. For instance, in the financial sector, XML models are widely used for credit scoring and risk assessment, ensuring transparency and regulatory compliance. In the healthcare industry, they assist in making more informed decisions by providing clear explanations for diagnostic predictions. As we move forward, the importance of XML will only continue to grow as it empowers stakeholders to trust, understand, and make better use of AI and ML technologies in our daily lives.

Acknowledgments

This blog post on explainable machine learning is based on a lecture given by Patrick Hall from George Washington University on May 14, 2022. The material is shared under a CC BY 4.0 license, allowing for editing and redistribution, even for commercial purposes, with the requirement that any derivative work attributes the author.

I used various resources during the writing process to ensure accuracy and clarity, including a language model. While I manually curated and verified all outputs, the assistance of these tools was invaluable in producing a high-quality piece.

Kyle Lyon

I'm Kyle, a Data Scientist in DevSecOps contracting with the U.S. Space Force through Silicon Mountain Technologies.
