Snorkel dives into data labeling and foundation AI models

Data labeling is a critical, though often time-consuming and complex, element of modern machine learning (ML) operations.

Data labeling could also be the key to unlocking the broader business potential of foundation models. While foundation models like GPT-3 and DALL-E are very useful for generating text and images, they often lack the context needed for specific business use cases. Adapting a foundation model requires additional training and tuning, and that often requires labeled data.

But what if a foundation model could be used to drive a data labeling process that makes a smaller model useful for specific business use cases? That’s the challenge that data labeling provider Snorkel AI now aims to help solve.

“It’s one thing if you’re trying to do creative generative tasks where you’re generating copy text or some creative images, but there’s a big chasm between that and a complex production use case where you need to perform at a high bar of precision for very specialized data and tasks,” Alex Ratner, CEO and co-founder of Snorkel AI, told VentureBeat.

To help solve that challenge, Snorkel AI today announced a preview of its new data-centric foundation model development capabilities. The goal is to help users of the company’s Snorkel Flow platform adapt foundation models for business use cases. Ratner explained that Snorkel’s core research and ideas have to do with finding more efficient ways to label data to train or tune models.

Go with the flow to build a new foundation for enterprise AI

Other vendors are also developing technology to make tuning foundation models easier. Among them is Nvidia, which in September announced its NeMo LLM (large language model) service.

One of the core components of the Nvidia service is the ability for users to tune large models for specific use cases with an approach known as prompt learning. With Nvidia’s prompt learning approach, a complementary model is trained to provide context to the pretrained LLM, using a prompt token.
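For readers unfamiliar with the technique, prompt learning generally means training a small set of “soft” prompt embeddings that are prepended to the input while the large model itself stays frozen. The sketch below is a minimal illustration of that general idea, assuming a Hugging Face Transformers causal language model as a stand-in for a large LLM; it is not Nvidia’s NeMo API, and the model name, prompt length, and training data are placeholders.

```python
# Minimal sketch of soft prompt tuning against a frozen language model.
# Assumptions: "gpt2" stands in for a large LLM; hyperparameters are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"
NUM_PROMPT_TOKENS = 20  # number of trainable "prompt tokens"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.requires_grad_(False)  # the base LLM stays frozen

embed_dim = model.get_input_embeddings().embedding_dim

# The only trainable parameters: soft prompt embeddings prepended to every input,
# learned to steer the frozen model toward the target task.
soft_prompt = nn.Parameter(torch.randn(NUM_PROMPT_TOKENS, embed_dim) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def loss_with_prompt(text: str) -> torch.Tensor:
    """Run the frozen LLM with the learned soft prompt prepended to the input."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    token_embeds = model.get_input_embeddings()(ids)        # (1, seq_len, dim)
    prompt_embeds = soft_prompt.unsqueeze(0)                # (1, P, dim)
    inputs_embeds = torch.cat([prompt_embeds, token_embeds], dim=1)
    # Ignore the prompt positions in the loss; predict only the original tokens.
    labels = torch.cat([torch.full((1, NUM_PROMPT_TOKENS), -100), ids], dim=1)
    return model(inputs_embeds=inputs_embeds, labels=labels).loss

# One illustrative optimization step on a single (made-up) example.
loss = loss_with_prompt("Classify the sentiment: great product! -> positive")
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

Because only the prompt embeddings are updated, the cost of adapting the model to a new task is a tiny fraction of full fine-tuning, which is the efficiency argument behind this style of approach.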

Snorkel also uses prompts as part of its Enterprise Foundation Model Management Suite with the Foundation Model Prompt Builder feature. However, Ratner emphasized that prompts are only one part of a larger set of tools needed to optimize foundation models for business use cases.

Another tool that Snorkel offers is the Foundation Model Warm Start capability, which uses an existing foundation model to help provide data labeling.

“Basically, when you load a dataset to label in Snorkel Flow, you can now get a kind of first-pass auto-labeling at the push of a button using the power of foundation models,” Ratner said.

Ratner noted that Warm Start isn’t a solution to all data labeling, but it will get you the “low-hanging fruit.” He suggests that users will likely use Warm Start in combination with the prompt builder, as well as Snorkel’s foundation model fine-tuning feature, to optimize models. The fine-tuning feature allows organizations to distill the foundation model into a smaller, domain-specific model using a labeled training set.
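As a rough illustration of the general “auto-label first, then train a smaller model” pattern Ratner describes, the sketch below uses an off-the-shelf zero-shot classifier as the foundation model and a simple scikit-learn classifier as the distilled model. It is not Snorkel Flow’s Warm Start or fine-tuning API; the labels, example texts, and model choices are all assumptions made for illustration.

```python
# Sketch of the warm-start-then-distill pattern (not Snorkel's actual API).
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical business labels and unlabeled examples.
LABELS = ["billing question", "technical issue", "cancellation"]
unlabeled_texts = [
    "My invoice shows a charge I don't recognize.",
    "The app crashes whenever I open the settings page.",
    "Please close my account at the end of the month.",
]

# Step 1: first-pass auto-labeling with a large zero-shot foundation model.
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
auto_labels = []
for text in unlabeled_texts:
    result = zero_shot(text, candidate_labels=LABELS)
    auto_labels.append(result["labels"][0])  # keep the top "low-hanging fruit" label

# Step 2: distill the auto-labeled data into a small, fast, domain-specific model.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(unlabeled_texts)
small_model = LogisticRegression(max_iter=1000).fit(features, auto_labels)

# The small model can now be used for cheap, high-throughput prediction.
print(small_model.predict(vectorizer.transform(["I was double charged last week."])))
```

In practice, the auto-generated labels would be reviewed and corrected where the foundation model is uncertain before the smaller model is trained, which is where a platform’s labeling workflow comes in.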

Generative vs. predictive AI business use cases

Snorkel AI’s goal is to make foundation models useful for real business use cases.

For better or worse, Ratner said, people today are probably more familiar with generative AI, which uses foundation models. He distinguished generative models from the predictive AI models that enterprises commonly use today to forecast an outcome.

Anecdotally, Ratner said he tried to generate some Snorkel AI logos using Stable Diffusion because “…it was so much fun.” He said he went through about 30 samples and never got exactly what he wanted: an octopus with a snorkel underwater, which is the actual corporate logo.

“Too many weird nonsense images, I guess, but I got some great logos after about 30 samples as a creative, generative, human-in-the-loop process,” Ratner said. “However, if you think about it from a predictive automation perspective, 30 attempts to get a successful result is a 3.3% success rate, and you can never ship something with a result that poor.”

One of Snorkel’s clients is online video ad optimization provider Pixability. Ratner explained that Pixability has millions of data points from YouTube videos that need to be classified for ML. Using the foundation model capabilities within Snorkel Flow, the company can rapidly perform classification with an accuracy level greater than 90%.

Ratner said that a large U.S. bank that is a Snorkel client was able to improve accuracy for text extraction from complex legal documents using the foundation model approach.

“We’re seeing this technology apply to the entire universe of applications where you’re trying to tag, classify, extract, or label something with very high precision for some kind of predictive automation task across text, PDF, image, and video,” Ratner said. “We think it’s going to accelerate all of the use cases we currently support, as well as add new ones that previously wouldn’t have been feasible with our existing approaches, which we’re very excited about.”
