Machine learning is a powerful tool. Do you have the right framework in place to manage risk and bias in your algorithms? Kathryn Hume and Alex LaPlante explore the processes needed to ensure integrity from inputs to outputs in this Harvard Business Review article:
Design: Define the problem and articulate the business case. Determine the business’s tolerance for error and ascertain which regulations, if any, could affect the solution.
Exploration: Conduct a feasibility study on the available data. Determine whether the data are biased or imbalanced, and discuss the business’s need for explainability. May require returning to the design phase, depending on the answers to these questions.
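To make the exploration step concrete, here is a minimal sketch in pandas, assuming a hypothetical loan-application dataset with a binary 0/1 label column and a gender column (the file and column names are our assumptions, not the authors’):

```python
import pandas as pd

# Hypothetical dataset; file and column names are assumptions.
df = pd.read_csv("applications.csv")

# Class imbalance: a heavily skewed label distribution may force a return
# to the design phase to revisit the business's tolerance for error.
print(df["label"].value_counts(normalize=True))

# Representation bias: compare positive-outcome rates across a sensitive
# attribute; large gaps feed the explainability and fairness discussion.
print(df.groupby("gender")["label"].mean())
```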
Refinement: Train and test the model (or several potential model variants). Gauge the impact of fairness and privacy enhancements on accuracy.
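As one sketch of how the accuracy cost of a fairness enhancement might be gauged, the scikit-learn snippet below compares a baseline model against a variant that drops the sensitive attribute; the demographic-parity metric is our choice for illustration, and all names carry over from the hypothetical dataset above:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Same hypothetical dataset as above; assumes a binary 0/1 label.
df = pd.read_csv("applications.csv")
y = df["label"]
sensitive = df["gender"]

# Two variants: a baseline, and a "fairness enhancement" that simply
# drops the sensitive attribute so its accuracy cost can be measured.
X_full = pd.get_dummies(df.drop(columns=["label"]))
X_blind = pd.get_dummies(df.drop(columns=["label", "gender"]))

for name, X in [("baseline", X_full), ("gender-blind", X_blind)]:
    X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
        X, y, sensitive, test_size=0.3, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = pd.Series(model.predict(X_te), index=X_te.index)
    # Demographic-parity gap: spread in positive-prediction rates by group.
    rates = pred.groupby(s_te.to_numpy()).mean()
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, "
          f"parity gap={rates.max() - rates.min():.3f}")
```

Dropping the sensitive column (“fairness through unawareness”) is the simplest possible intervention and is often insufficient on its own, since other features can proxy for the dropped attribute; it is used here only to show how the accuracy-versus-fairness trade-off can be quantified.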
Build and ship: Implement a production-grade version of the model. Determine how frequently the model must be retrained, whether its output must be stored, and how these requirements affect infrastructure needs.
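A minimal sketch of how retraining frequency and output storage might surface in serving code; the 30-day interval, log path, and wrapper class are illustrative assumptions:

```python
import json
from datetime import datetime, timedelta, timezone

# Assumed retraining cadence; in practice this comes out of the
# build-and-ship discussion.
RETRAIN_AFTER = timedelta(days=30)

class ServedModel:
    def __init__(self, model, trained_at, log_path="predictions.jsonl"):
        self.model = model            # a fitted model with .predict()
        self.trained_at = trained_at  # timezone-aware datetime
        self.log_path = log_path

    def needs_retraining(self):
        return datetime.now(timezone.utc) - self.trained_at > RETRAIN_AFTER

    def predict(self, features):
        pred = self.model.predict([features])[0]
        # Persist every output so later audits and error reviews are possible.
        with open(self.log_path, "a") as f:
            f.write(json.dumps({
                "ts": datetime.now(timezone.utc).isoformat(),
                "features": features,
                "prediction": float(pred),
            }) + "\n")
        return pred
```

A scheduler or the serving layer itself can poll needs_retraining() to trigger the retraining pipeline, and the JSONL log gives auditors a durable record of every output.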
Measure: Document and learn from the model’s ongoing performance. Scale it to new contexts and incorporate new features. Discuss how to manage model errors and unexpected outcomes. May require returning to the build-and-ship phase, depending on the answers to these questions.
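One sketch of ongoing measurement, assuming the predictions logged above can later be joined to observed outcomes; the file names and the 0.85 threshold are placeholders for the tolerance agreed in the design phase:

```python
import pandas as pd

# Join logged predictions to ground truth as it arrives.
preds = pd.read_json("predictions.jsonl", lines=True)
truth = pd.read_csv("outcomes.csv")  # assumed columns: ts, outcome

joined = preds.merge(truth, on="ts")
joined["correct"] = joined["prediction"] == joined["outcome"]

# Weekly accuracy; a sustained drop signals drift and a return to the
# build-and-ship phase for retraining.
weekly = (
    joined.set_index(pd.to_datetime(joined["ts"]))
    .resample("W")["correct"]
    .mean()
)
print(weekly)
if weekly.iloc[-1] < 0.85:
    print("Accuracy below tolerance: review errors and schedule retraining.")
```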