LinkedIn’s Latest AI Content Moderation Framework Unveiled

LinkedIn has introduced a new content moderation framework that optimizes its review queues, cutting the average time it takes to detect policy violations by roughly 60%. As this technology becomes more widely available, it could point to how content moderation is done in the future.

How Content Violations Are Moderated on LinkedIn

Teams of content moderators at LinkedIn manually review content that may violate company policies.

To find and remove violative content, they combine AI models, reports from LinkedIn members, and human evaluations.

The scale of the problem is huge, though: hundreds of thousands of items need to be examined each week.

Under the first-in, first-out (FIFO) system used in the past, every item waited its turn in a queue, which meant truly problematic items could take a long time to reach review and removal.

As a result, members were exposed to harmful content for longer than necessary.

LinkedIn described the shortcomings of the prior FIFO approach:

This approach has two significant flaws.

First, a significant share of the content reviewed by humans turns out to be non-violative and is cleared.

That diverts reviewers’ valuable bandwidth away from examining genuinely violative content.

Second, when items are reviewed on a FIFO basis, violative content can take longer to detect if non-violative content entered the queue ahead of it.

To prioritize content that is likely to violate its policies, LinkedIn developed an automated framework built around a machine learning model, which moves those items to the front of the queue.

This new process helped speed up review.

The New Framework Uses XGBoost

The new approach uses an XGBoost machine learning model to predict which content items are most likely to violate policy.

XGBoost, short for Extreme Gradient Boosting, is an open-source machine learning library used for ranking and classifying items in a dataset.

An XGBoost model is trained to identify patterns in a labeled dataset, in this case a dataset indicating which content items violate a particular rule.

LinkedIn trained its new framework with this same procedure:

“These models are tested on an additional out-of-time sample after being trained on a representative sample of historical human-labeled data from the content review queue.”
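LinkedIn has not published its feature set or training code, but a minimal sketch of that procedure, assuming a hypothetical labeled history file (`review_queue_history.csv`) and invented feature columns, might look like this:

```python
import pandas as pd
import xgboost as xgb
from sklearn.metrics import roc_auc_score

# Hypothetical historical review data: one row per content item, with
# engineered features and a human label (1 = violative, 0 = non-violative).
df = pd.read_csv("review_queue_history.csv", parse_dates=["reviewed_at"])
feature_cols = ["report_count", "author_account_age_days", "text_toxicity_score"]

# Out-of-time split: train on older reviews, evaluate on the most recent slice,
# mirroring the "out-of-time sample" mentioned in the announcement.
cutoff = df["reviewed_at"].quantile(0.8)
train, test = df[df["reviewed_at"] <= cutoff], df[df["reviewed_at"] > cutoff]

model = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
model.fit(train[feature_cols], train["is_violative"])

# AUC on the held-out, later period estimates how well the model's scores
# will rank items it has never seen.
scores = model.predict_proba(test[feature_cols])[:, 1]
print("Out-of-time AUC:", roc_auc_score(test["is_violative"], scores))
```

Evaluating on a later time window, rather than a random split, checks that the model still works as content and behavior drift over time.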

Once trained, the model can recognize content that likely violates policy and therefore requires human review, which is the specific application of the technology here.

In benchmarking tests, XGBoost has proven highly effective for this type of task, beating other kinds of algorithms on both accuracy and processing time.

LinkedIn explained this new strategy as follows:

“With this framework, a series of AI models score content that enters review queues and determines the likelihood that it breaches our policies.

Content that is more likely to violate policy is prioritized to expedite its detection and removal, while content that is less likely to violate policy is deprioritized to conserve the time of human reviewers.”
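As an illustration of that prioritization, here is a minimal sketch of a score-ordered review queue. The content IDs and scores are made up, and LinkedIn’s actual queueing infrastructure is certainly more involved:

```python
import heapq
import itertools

class ReviewQueue:
    """Priority queue that serves the riskiest content first,
    replacing the old first-in, first-out ordering."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker: FIFO among equal scores

    def push(self, content_id, violation_score):
        # heapq is a min-heap, so negate the score to pop high scores first.
        heapq.heappush(self._heap, (-violation_score, next(self._order), content_id))

    def pop(self):
        neg_score, _, content_id = heapq.heappop(self._heap)
        return content_id, -neg_score

queue = ReviewQueue()
queue.push("post:123", 0.92)  # likely violative, jumps the queue
queue.push("post:456", 0.05)  # likely benign, waits
queue.push("post:789", 0.61)

for _ in range(3):
    print(queue.pop())  # post:123 first, post:456 last
```

The insertion counter keeps items with equal scores in arrival order, so the old FIFO behavior survives as a tie-breaker rather than disappearing entirely.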

Effects on Moderation

According to LinkedIn, the new framework can automatically decide on roughly 10% of the content waiting for review, with an accuracy the company describes as “extremely high,” surpassing that of a human reviewer.
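LinkedIn does not say how those automatic decisions are made, but one plausible mechanism is double-sided thresholding, where only items with extreme scores bypass human review. The function and thresholds below are hypothetical:

```python
def triage(violation_score, auto_remove_at=0.99, auto_clear_at=0.01):
    """Route one scored item. Thresholds are illustrative; in practice they
    would be tuned so automated decisions stay more accurate than humans."""
    if violation_score >= auto_remove_at:
        return "auto-remove"   # very confident the item is violative
    if violation_score <= auto_clear_at:
        return "auto-clear"    # very confident the item is fine
    return "human-review"      # everything in between keeps a human in the loop

print(triage(0.995))  # auto-remove
print(triage(0.40))   # human-review
```

Under this kind of scheme, tightening the thresholds trades automation volume for accuracy, which would explain why only about 10% of items are decided automatically.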

The new framework also cuts the average time it takes to identify policy-violating content by nearly 60%.

Where the New AI Framework Is Used

The new content review prioritization method is currently in use for feed posts and comments. LinkedIn says it is working to extend the process to other parts of the platform.

Content moderation is crucial because it improves the user experience by reducing the number of members exposed to harmful content.

It also helps the moderation team handle high volume and scale its work.

The technology has proven effective, and as it becomes more widely available, it may eventually become commonplace.

Check out the announcement on LinkedIn:

Augmenting our content moderation efforts through machine learning and dynamic content prioritization
