Socially Aware Algorithms Are Ready to Help – Scientific American

Excerpt:

…rather than training the model to simply minimize the overall predictive error rate, we can instead train it under the additional fairness condition that no racial group is treated substantially worse than any other. Doing so will generally cause the overall error rate to be higher, since the fairness condition only makes the problem harder (a constrained optimum can never have lower error than the unconstrained one). Alternatively, if the error disparity stems from a lack of data from the minority population, it can be reduced by gathering more data, which of course requires an investment of money and other resources.
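
To make the idea concrete, here is a minimal, hypothetical sketch of fairness-constrained training; the article does not specify any method, and the data, penalty form, and all parameter values below are illustrative. It fits a logistic regression by gradient descent on the usual average loss plus a penalty on the gap between the two groups' average losses, a smooth stand-in for "no group is treated substantially worse than any other."

```python
import numpy as np

# Hypothetical sketch of fairness-constrained training (not the article's
# own method): logistic regression fit by gradient descent on the usual
# average loss plus a penalty on the gap between the two groups' average
# losses. The loss gap is a smooth surrogate for the error-rate gap.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def objective(w, X, y, group, lam):
    """Overall logistic loss + lam * (between-group loss gap)^2."""
    p = sigmoid(X @ w)
    eps = 1e-12
    loss = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    gap = loss[group == 0].mean() - loss[group == 1].mean()
    return loss.mean() + lam * gap ** 2

def train(X, y, group, lam, lr=0.2, steps=800, h=1e-5):
    """Plain gradient descent; a central-difference gradient keeps the
    sketch short (a real implementation would use the analytic gradient
    or an autodiff library)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = np.empty_like(w)
        for j in range(w.size):
            e = np.zeros_like(w)
            e[j] = h
            grad[j] = (objective(w + e, X, y, group, lam)
                       - objective(w - e, X, y, group, lam)) / (2 * h)
        w -= lr * grad
    return w

if __name__ == "__main__":
    # Synthetic data: a 20% minority group whose feature distribution
    # and labels differ slightly from the majority's.
    rng = np.random.default_rng(0)
    n = 2000
    group = (rng.random(n) < 0.2).astype(int)
    X = rng.normal(size=(n, 3))
    X[:, 0] += group
    y = (X @ np.array([1.0, -1.0, 0.5]) + 0.3 * group
         + rng.normal(scale=0.5, size=n) > 0).astype(int)
    X = np.column_stack([X, np.ones(n)])  # append a bias column

    w = train(X, y, group, lam=10.0)
    err = (sigmoid(X @ w) > 0.5).astype(int) != y
    print(f"overall error: {err.mean():.3f}")
    print(f"group-0 error: {err[group == 0].mean():.3f}")
    print(f"group-1 error: {err[group == 1].mean():.3f}")
```

The penalty acts on average losses rather than error rates because error rates are not differentiable; setting lam to zero recovers ordinary unconstrained training.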

These trade-offs highlight a fundamental but often painful truth: algorithmically enforcing social norms like fairness, privacy and transparency will come at a cost—usually a cost to accuracy, profit or “utility” more broadly. While there are a variety of good ways of reducing algorithmic bias, all of them will present the designer, and thus their corporate employer and society at large, with inescapable trade-offs between fairness and utility. Deciding how to balance these trade-offs will require executives, companies and scientists to make difficult yet crucial choices with far-reaching consequences for society.
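
That trade-off can be traced directly in the toy setting above by sweeping the penalty weight and recording overall error against the between-group error gap; each weight yields one point on a fairness-utility curve. The snippet below continues the hypothetical sketch and reuses its train and sigmoid definitions and synthetic data.

```python
# Continuing the sketch above (reuses train, sigmoid, X, y, group).
# Each penalty weight lam yields one point on the fairness-utility
# trade-off: larger lam typically shrinks the between-group error gap
# at the price of a higher overall error rate.
for lam in (0.0, 2.0, 10.0, 50.0):
    w = train(X, y, group, lam=lam)
    err = (sigmoid(X @ w) > 0.5).astype(int) != y
    gap = abs(err[group == 0].mean() - err[group == 1].mean())
    print(f"lam={lam:5.1f}  overall_error={err.mean():.3f}  group_gap={gap:.3f}")
```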
