    OpenAI Unveils ‘o1’ Model: A Leap in AI Reasoning on Complex Tasks

    OpenAI has launched a significant addition to its ChatGPT lineup with the introduction of a new model, OpenAI o1. The model is designed to tackle more complex tasks in fields such as science, coding, and mathematics, and is touted as the company’s most advanced model to date.

    Previously known during development as “Strawberry,” the o1 model comes in two versions: o1-preview, available to general users with a limit of 50 queries per week, and o1-mini, aimed at developers with a limit of 50 queries per day. The new model represents a major step forward in generative AI, though it does not yet amount to artificial general intelligence (AGI).

    OpenAI has highlighted that the o1 model can be utilized across various research and development sectors. For instance, healthcare researchers can use it to annotate cell sequencing data, physicists can employ it to generate complex mathematical formulas for quantum optics, and developers can leverage it for multi-step workflows.
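
    As a rough illustration of the developer use case mentioned above, the sketch below sends a multi-step request to the model through the standard OpenAI Python SDK. It assumes the openai package is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name "o1-preview" matches the version named earlier in this article, and the prompt itself is purely illustrative.

        # pip install openai
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Ask the model to plan and implement a multi-step workflow in one request.
        response = client.chat.completions.create(
            model="o1-preview",
            messages=[
                {
                    "role": "user",
                    "content": (
                        "Outline a three-step workflow for validating, "
                        "transforming, and loading a CSV dataset, then "
                        "write the Python code for each step."
                    ),
                }
            ],
        )

        print(response.choices[0].message.content)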

    The model is trained to spend more time reasoning before it responds, which allows it to refine its strategies and recognize and correct its mistakes. According to OpenAI, in tests, the o1 model performs comparably to PhD students on challenging tasks in physics, chemistry, and biology. It has shown significant improvement in math and coding, achieving an 83% success rate on an International Mathematics Olympiad (IMO) qualifying exam, compared to just 13% for its predecessor, GPT-4o.

    Despite its advancements, the o1 model has certain limitations. Unlike GPT-4o, it cannot browse the web or handle file and image uploads. CEO Sam Altman acknowledged these shortcomings, noting that while o1 is impressive on first use, its flaws become more apparent with extended use.

    OpenAI has described the new model as its “most dangerous yet,” a statement that has been met with skepticism and is seen as part of a marketing strategy. The company has implemented rigorous safety measures and guardrails to prevent misuse and jailbreaking of the model. In one of the company’s internal jailbreaking tests, GPT-4o scored 22 out of 100, while o1-preview scored 84, with higher scores indicating stronger resistance to jailbreak attempts.

    To ensure responsible deployment, OpenAI has begun collaborating with the U.S. and U.K. AI Safety Institutes, granting them early access to research versions of the model for evaluation and testing before broader release.
