Meta’s latest AI tools aim to reduce human involvement in AI development.
On Friday, Meta announced the release of a set of advanced AI models developed by its research division.
Among these tools is the “Self-Taught Evaluator,” which could reduce human involvement in the AI development process.
Meta first introduced the tool in an August paper, which detailed its reliance on the “chain of thought” technique to improve the reliability of the model’s judgments.
AI Models That Train Themselves
Meta’s Self-Taught Evaluator relies entirely on AI-generated data, eliminating the need for human input during the training phase.
The technique involves breaking down complex problems into smaller steps, improving the accuracy of responses in challenging subjects like coding, science, and math.
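The loop described in the paper — generate synthetic answer pairs where one side is known to be worse, have the judge reason step by step before picking a winner, and keep only the judgments that agree with the known label as training data — can be sketched roughly as follows. This is a toy illustration, not Meta's implementation: the function names are invented, and a simple arithmetic checker stands in for the LLM judge.

```python
import random

def make_pair(a, b):
    """Synthetic preference pair: the correct sum vs. a corrupted copy."""
    good = a + b
    bad = good + random.choice([-2, -1, 1, 2])
    return {"question": f"What is {a} + {b}?", "good": good, "bad": bad}

def judge(question, cand_a, cand_b):
    """Toy stand-in for the LLM judge: reason in steps, then pick A or B."""
    nums = [int(t) for t in question.rstrip("?").split() if t.isdigit()]
    target = sum(nums)
    chain = [
        f"Step 1: the question asks for {nums[0]} + {nums[1]}.",
        f"Step 2: {nums[0]} + {nums[1]} = {target}.",
        f"Step 3: candidate A is {cand_a}, candidate B is {cand_b}.",
    ]
    verdict = "A" if abs(cand_a - target) <= abs(cand_b - target) else "B"
    chain.append(f"Verdict: {verdict}")
    return chain, verdict

def build_training_set(n_pairs, seed=0):
    """Keep only self-consistent judgments as evaluator training data."""
    random.seed(seed)
    kept = []
    for _ in range(n_pairs):
        ex = make_pair(random.randint(1, 50), random.randint(1, 50))
        # Randomize which side holds the known-better answer.
        if random.random() < 0.5:
            cand_a, cand_b, label = ex["good"], ex["bad"], "A"
        else:
            cand_a, cand_b, label = ex["bad"], ex["good"], "B"
        chain, verdict = judge(ex["question"], cand_a, cand_b)
        if verdict == label:  # discard traces where the judge was wrong
            kept.append({"question": ex["question"], "chain": chain, "label": label})
    return kept
```

In the real pipeline the kept reasoning chains would be fed back to fine-tune the judge itself, so the evaluator improves on its own outputs without human annotation.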
Meta researchers believe that these advancements offer a glimpse into the future of AI, where models can autonomously evaluate their own performance and learn from their mistakes.
Jason Weston, one of the researchers, said, “We hope, as AI becomes more super-human, it will excel at checking its work, surpassing average human capabilities.”
Impact on AI Development Process
This advancement could significantly cut costs associated with the traditional Reinforcement Learning from Human Feedback (RLHF) method.
Today, RLHF typically depends on human annotators with specialized expertise to label data and verify answers to complex questions. Meta’s models present a pathway toward more efficient AI systems that rely less on that human oversight.
Meta’s New Releases and Future Potential
In addition to the Self-Taught Evaluator, Meta introduced updates to its image-identification Segment Anything model and tools to accelerate LLM response times. These releases signal Meta’s commitment to making its AI models publicly available, unlike competitors Google and Anthropic, which keep their models under wraps.