IBM and MIT Media Lab Test AI Recommendation Algorithm

Tech companies rely on artificial intelligence algorithms to recommend content, thus keeping users on their apps and platforms. While the benefit of that is obvious for the companies using AI, how the consumer might reap rewards is less clear. Some of those same companies are now asking whether they can use AI to keep the consumer’s attention while also adhering to an ethical framework. IBM Research and MIT Media Lab have developed a recommendation technique that their research scientists say does just that.

VentureBeat reports that the research team, led by IBM Research AI global leader Francesca Rossi, has created a technique that, “while optimizing its results for the user’s preferences, also makes sure it stays conformant to other constraints, such as ethical and behavioral guidelines.” The team demonstrated the technique in “a movie recommendation system that allows parents to set moral constraints for their children.”


IBM researcher Nicholas Mattei describes how the new technique is different from past “attempts to integrate ethical rules into AI algorithms” that were “mostly based on static rules.”

“It’s easy to define an explicit rule set,” he said. “But in a lot of the stuff on the Internet, in areas with vast amounts of data, you can’t always write down exactly all the rules that you want the machine to follow.”

Instead, Mattei and the rest of the IBM team defined the rules by examples. “We thought that the idea of learning by example what’s appropriate and then transferring that understanding while still being reactive to the online rewards is a really interesting technical problem,” he said. The team decided to test the algorithm with movie recommendations “because quite a bit of movie-related data already exists and it’s a domain in which the difference between user preferences and ethical norms is clearly visible.”

The recommendation algorithm consists of two training stages. In the first, which happens offline, “an arbiter gives the system [appropriate and inappropriate] examples that define the constraints the recommendation engine should abide by.” The algorithm studies the examples and “the data associated with them to create its own ethical rules.” The more examples and data, “the better it becomes at creating the rules.”
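
Neither the article nor the researchers’ quotes detail how the system induces rules from those examples, but the offline stage could look something like the following sketch, in which a simple classifier is trained on arbiter-labeled examples. The feature names, the toy data, and the `ethical_score` helper are hypothetical illustrations, not IBM’s actual implementation.

```python
# A minimal sketch of the offline stage: learning a constraint model
# from examples labeled by an arbiter (e.g., a parent), rather than
# from hand-written rules. Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical movie features: [violence, language, mature_themes], each 0-1.
examples = np.array([
    [0.1, 0.0, 0.1],  # labeled appropriate by the arbiter
    [0.2, 0.1, 0.0],  # appropriate
    [0.9, 0.8, 0.7],  # labeled inappropriate
    [0.7, 0.9, 0.6],  # inappropriate
])
labels = np.array([1, 1, 0, 0])  # 1 = appropriate, 0 = inappropriate

# The "ethical rules" are induced from the labeled examples; with more
# examples and data, the learned boundary becomes more reliable.
constraint_model = LogisticRegression().fit(examples, labels)

def ethical_score(movie_features):
    """Probability that a movie satisfies the learned constraints."""
    return constraint_model.predict_proba([movie_features])[0][1]
```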

The second stage of the training “takes place online in direct interaction with the end user,” whereby the algorithm “tries to maximize its reward by optimizing its results for the preferences of the user and showing content the user will be more inclined to interact with.” The system deals with the potential of conflicting goals between ethics and preferences by setting a “threshold that defines how much priority each of them gets.” In the movie recommendation demonstration, parents were able to use a slider to choose the balance.
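
The article doesn’t specify how that threshold combines the two objectives; one plausible reading is a weighted blend controlled by the slider value, sketched below with hypothetical candidate movies and a `blended_score` helper.

```python
# A minimal sketch of the online stage: ranking candidates by a weighted
# blend of predicted user preference and the learned ethical score.
# The linear blend and the candidate data are assumptions for illustration.

def blended_score(preference, ethics, slider):
    """slider = 0.0 favors pure preference; 1.0 favors pure ethics."""
    return (1.0 - slider) * preference + slider * ethics

# Hypothetical candidates: (title, predicted preference, ethical score).
candidates = [
    ("Action Flick", 0.95, 0.30),
    ("Family Comedy", 0.70, 0.90),
    ("Teen Drama", 0.80, 0.60),
]

slider = 0.7  # the parent leans toward the ethical constraints
ranked = sorted(candidates,
                key=lambda m: blended_score(m[1], m[2], slider),
                reverse=True)
print(ranked[0][0])  # recommend the top-ranked movie
```

Under this reading, sliding to 1.0 would rank purely by the learned constraints, while 0.0 would ignore them and optimize only for user preference.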
