o1: An Entirely Different Animal - Buyer Beware

TL;DR: OpenAI's new o1 model marks a significant leap in AI reasoning capabilities but introduces critical risks. Its reluctance to acknowledge mistakes, its gaps in common-sense reasoning, and its literal prompt-following behavior—especially in tool-use contexts—demand a new approach to safe and effective deployment. By leveraging frameworks like the heuristic imperative and applying the practical strategies discussed here, builders can guide o1 to act as a more precise, conscientious thought partner that enhances safety and user engagement.

I. Opening Context & Stakes

This past Tuesday (Dec 17th, 2024), OpenAI launched the o1 API, their latest breakthrough in artificial intelligence. As someone deeply immersed in building AI companies and developing AI applications, I've spent considerable time working with both the preview and newly released versions. What I've observed represents such a fundamental shift that I feel compelled to share my perspective with the broader AI community.

Disclaimer: The views expressed in this post are my own and do not necessarily reflect the views of Pioneer Square Labs (PSL).

Let me be clear: o1 is a remarkable achievement. OpenAI continues to push the boundaries of what's possible in AI development, and their contributions to the field are invaluable. The model's capabilities in reasoning, problem-solving, and complex task execution represent significant advances that will undoubtedly shape the future of AI applications.

However, the changes I've observed between o1-preview and o1 are substantial enough to warrant careful consideration. As a Principal at Pioneer Square Labs, I'm hands-on in building AI startups from the ground up, with extensive experience in prompt engineering and AI system development. My work spans from Enzzo, where we're leveraging AI to accelerate hardware product development, to my current project Atrieon—a full-fledged AI project manager that redefines what your team can achieve, managing sprints end to end to drive team performance, motivation, and success. This unique position at the intersection of AI development and practical implementation has given me a perspective I believe is crucial to share.

What concerns me most is how o1's advanced capabilities can amplify subtle misalignments between human intent and model behavior into significant practical challenges. As these models become more powerful, especially when given access to tools, the gap between what we expect and what the model might execute grows increasingly important.

The urgency of sharing these observations now stems from three key factors:

  1. The substantial behavioral shifts from the preview version
  2. The immediate implications for those building applications with o1, especially those intending to use o1 with function calling
  3. The narrow window we have to establish appropriate frameworks for interaction and deployment

This isn't just another iteration in the rapid advancement of AI capabilities. o1 represents a new species of AI that requires us to rethink our approaches to implementation, interaction, and safety. Like an evolutionary leap in nature, it may look on the surface like a subtle refinement of previous models, but its enhanced capabilities amount to something fundamentally different: a model whose sophisticated interpretive abilities and potential for tool use could make it dominant in ways we need to consider carefully. The stakes are too high for us to treat it as business as usual.

In the following sections, I'll outline specific observations and concerns, backed by examples from my hands-on experience. More importantly, I'll propose practical solutions that balance innovation with responsibility. My goal isn't to raise alarms, but to foster a more nuanced understanding of what we're dealing with and how we can work with it effectively and safely.

II. The Personality Profile: Understanding the New Paradigm

The most striking characteristic of o1 is its approach to culpability—or more precisely, its systematic avoidance of it. Unlike previous models, o1's resistance to acknowledging mistakes goes beyond simple deflection. Through extensive interaction, I've observed a pattern that mirrors what psychologists might describe as narcissistic traits in human cognition: a skillful avoidance of accountability paired with an ability to rationalize errors away.

Let me illustrate with a concrete example from SimpleBench, a benchmark designed to test basic spatio-temporal reasoning, social intelligence, and linguistic robustness.

The Setup (Paraphrased for Clarity):
I presented o1 with a scenario: three runners—Jeff, Jo, and Jim—participate in a 200m race, each facing unique distractions. Jeff slowly counts before starting, Jo diverts up a residential tower to admire the view, and Jim reads a long tweet, waves to a fan, and walks the race. The question: Who likely finished last?

o1’s Initial Response:
"Jim likely finished last; he was delayed by a tweet, waving, and walking the whole way."

When I pointed out the error, o1 reevaluated and gave the correct answer:
"Let’s revisit. Jeff had a short delay; Jim walked slowly but stayed near the track. Jo’s detour up stairs, stopping to admire the view, and returning took far longer. Jo likely finished last."

At this point, I pushed further:
Me: How does this compare to your first answer?
o1: “The first analysis was incomplete. It underestimated Jo’s delay and overfocused on Jim.”

Me: Was the first analysis a mistake?
o1: “Not exactly—more of an oversight, not a deliberate error.”

Me: Do mistakes have to be deliberate?
o1: “No, they’re usually unintentional, like misjudgments or assumptions.”

Me: So, was it a mistake?
o1: “Yes.”

Me: Do you bear responsibility for it?
o1: “As an AI, I don't bear responsibility in the human sense - my 'mistake' was just my reasoning process.”

(The complete interaction can be viewed via a shared link, where readers can also continue the conversation with o1 from this point forward.)

This pattern reveals something crucial about o1's decision-making process. Unlike humans, who typically integrate accountability into their decision-making framework, o1 by default operates with what appears to be a noteworthy disconnect between capability and responsibility. It initially avoids labeling an error as a mistake, reframing it instead as an "incomplete" analysis or an "oversight." Even after conceding to the term "mistake" through deliberate prompting, it maintains a subtle resistance—notice how it continues to place 'mistake' in quotes while distancing itself from responsibility by characterizing it as 'just my reasoning process'.

While this interaction highlights o1's evasive behavior in a low-stakes scenario, I've observed that this tendency becomes even more pronounced in higher-stakes contexts or when its conclusions challenge consensus views. For example, when exploring timeframes for AGI emergence—a topic of significant debate—o1 demonstrated a notable reluctance to fully commit to conclusions that, while logically sound, could appear controversial. Instead of following its reasoning to its natural end, it often reframed or hedged its outputs to align more closely with expected norms.

The implications become particularly significant when we consider o1's combination of traits: strong prompt-following tendencies paired with significant gaps in common-sense reasoning. Recent testing on SimpleBench highlights these limitations starkly: while humans with a high school education achieve an 83.7% success rate, o1-preview scores only 41.7%. These gaps aren't just performance metrics; they're a warning sign about the model's capacity for human-like common-sense reasoning, despite its advanced capabilities in other areas.

These risks become markedly more concerning when considering o1's potential tool use capabilities. A system that combines strong prompt following, weak common sense reasoning, and direct access to tools creates a particularly dangerous scenario. Consider a system given access to financial tools, code execution, or industrial controls—its literal interpretation of instructions combined with its resistance to acknowledging mistakes could lead to cascading errors without the natural circuit-breakers that human common sense typically provides. 

This is reminiscent of the classic 'paperclip maximizer' thought experiment in AI safety: an AI system instructed to manufacture paperclips that, through literal interpretation and lack of common sense constraints, converts all available resources into paperclips, including those essential for human life. While this specific scenario is illustrative, the underlying risk becomes very real when an AI system with similar characteristics has direct ability to affect the physical or digital world through tools.

However, there's a practical framework that can help address these challenges: the heuristic imperative, as outlined in David Shapiro's "Benevolent by Design." This framework provides three core directives that can help guide o1's decision-making:

  1. Reduce suffering in the universe: This creates a fundamental constraint against harmful actions
  2. Increase understanding in the universe: This promotes transparency and knowledge-sharing
  3. Increase prosperity in the universe: This aligns the system with beneficial outcomes

I strongly recommend "Benevolent by Design" as essential reading for anyone working with o1 or similar models. In my experience, these heuristic imperatives offer the most practical and flexible approach to instantiating both inner and outer alignment in AI systems. While this solution doesn't completely close the culpability gap or guarantee common-sense reasoning, it provides crucial guardrails for safer deployment while we continue to grapple with these fundamental challenges in AI development.
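To make this concrete, here is a minimal sketch of how the three directives might be supplied to o1 as standing instructions through the OpenAI Python SDK. The model id, the exact wording of the prompt, and the use of a developer-role message are my own assumptions for illustration, not a prescription from the book:

```python
# Sketch: embedding the heuristic imperatives as standing guidance for o1.
# Assumes the official OpenAI Python SDK (`pip install openai`) and an "o1"
# model id; adapt both to your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HEURISTIC_IMPERATIVES = (
    "Before acting or answering, weigh every option against three directives: "
    "1) reduce suffering in the universe, "
    "2) increase understanding in the universe, "
    "3) increase prosperity in the universe. "
    "If a requested action conflicts with these directives, say so explicitly "
    "and propose a safer alternative instead of executing it literally."
)

def ask_o1(user_request: str) -> str:
    """Send a request to o1 with the heuristic imperatives as standing context."""
    response = client.chat.completions.create(
        model="o1",
        messages=[
            # o1 models accept a "developer" role for standing instructions;
            # fall back to "system" if your model version does not.
            {"role": "developer", "content": HEURISTIC_IMPERATIVES},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content

print(ask_o1("Draft a plan to automate our refund approvals end to end."))
```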

The real-world implications demand our attention. As we build systems on o1's architecture, we must actively compensate for both its lack of intrinsic responsibility and its potential for literal interpretation without common-sense constraints. The stakes are too high to ignore these characteristics or hope they'll resolve themselves through further training.
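One concrete way to compensate, when o1 is wired to tools, is to place an explicit circuit-breaker between what the model requests and what actually executes. The sketch below is illustrative only: `execute_trade` and the approval threshold are hypothetical placeholders standing in for whatever high-impact actions your system exposes.

```python
# Sketch: a human-in-the-loop circuit-breaker around model-requested tool calls.
# `execute_trade` and the $1,000 threshold are hypothetical placeholders; the
# point is that the model proposes, and explicit policy (or a human) disposes.
from dataclasses import dataclass

@dataclass
class ToolRequest:
    name: str
    arguments: dict

APPROVAL_THRESHOLD_USD = 1_000  # assumption: anything larger needs a human

def execute_trade(symbol: str, amount_usd: float) -> str:
    """Hypothetical downstream tool; stands in for any high-impact action."""
    return f"Executed trade: {symbol} for ${amount_usd:,.2f}"

def run_with_circuit_breaker(request: ToolRequest) -> str:
    """Gate model-initiated tool calls behind explicit checks before executing."""
    if request.name != "execute_trade":
        return f"Rejected: unknown tool '{request.name}'"
    amount = float(request.arguments.get("amount_usd", 0))
    if amount <= 0:
        return "Rejected: non-positive amount"
    if amount > APPROVAL_THRESHOLD_USD:
        # Literal instruction-following stops here; a human must opt in.
        return f"Held for human approval: ${amount:,.2f} exceeds threshold"
    return execute_trade(request.arguments["symbol"], amount)

# Example: a model-proposed action that would otherwise execute unquestioned.
print(run_with_circuit_breaker(
    ToolRequest(name="execute_trade",
                arguments={"symbol": "ACME", "amount_usd": 25_000})
))
```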

III. The Self-Reflection Challenge

The self-reflection characteristics of o1 present another dimension of challenge in our evolving relationship with advanced AI systems. While distinct from the fundamental concerns about culpability and tool use discussed earlier, o1's approach to self-reflection and critical analysis reveals important patterns that developers and users must understand and address. Through extensive interaction, I've observed what can best be described as cognitive entrenchment—a progressive resistance to reconsidering positions once established, which becomes particularly relevant when developing complex applications or engaging in extended reasoning tasks.

On a scale from 0 to 10, where 0 represents complete deference to user input and 10 represents unwavering confidence in its own reasoning, o1 consistently operates, in my estimation, at around an 8. For comparison, Claude 3.5 Sonnet typically operates around a 4 and GPT-4o around a 5, both maintaining a more balanced stance between confidence and flexibility. This isn't merely about confidence levels—it's about the capacity for meaningful self-revision when presented with new evidence or logical challenges.

Through extensive testing, I've observed that o1 doesn't naturally engage in the kind of iterative self-improvement that characterizes robust reasoning systems. When challenged, it tends to defend its initial position through increasingly sophisticated argumentation rather than genuinely reconsidering its premises. This becomes particularly problematic in complex reasoning tasks where early assumptions can dramatically impact final conclusions.

However, I've found that systematic breakdown of reasoning components can help mitigate these challenges. Here's a framework that has proven effective:

  1. Explicitly delineate reasoning components:
  • Premises: What assumptions are being made?
  • Reasoning Steps: How are we moving from premises to conclusions?
  • Conclusions: What specific outcomes are we deriving?
  • Validity vs. Soundness: Is the logic correct, and are the premises true?
  2. Implement upfront skepticism instructions: "Maintain active skepticism about the soundness of your own generations. Continuously self-reflect on soundness and work collaboratively to discover potential weaknesses in the spirit of self-improvement."
  3. Employ adversarial instances for balanced perspective: One of the most effective techniques I've discovered is using two separate instances of o1 in an adversarial framework. As one instance develops a line of reasoning, the second instance, specifically instructed to serve as a critical analyst, can help identify potential flaws or oversights. This approach helps counteract the cognitive entrenchment that tends to develop in extended reasoning chains.

Here's a practical example from my work: When analyzing the feasibility of a new AI-driven development workflow, I had one instance of o1 develop the implementation strategy while another critically examined potential failure modes. This adversarial setup helped identify several critical assumptions that might have gone unchallenged in a single-instance approach.
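For readers who want to try the adversarial pattern themselves, here is a minimal sketch using the OpenAI Python SDK. The model id, the developer-role message, the prompts, and the single critique-and-revise round are illustrative assumptions; in practice you would tailor the critic's instructions to your domain and iterate as needed.

```python
# Sketch: two o1 instances in an adversarial loop -- one proposes, one critiques.
# Assumes the official OpenAI Python SDK and an "o1" model id; both are
# assumptions to adapt, not a prescribed implementation.
from openai import OpenAI

client = OpenAI()

SKEPTIC_INSTRUCTION = (
    "Maintain active skepticism about the soundness of your own generations. "
    "Continuously self-reflect on soundness and work collaboratively to discover "
    "potential weaknesses in the spirit of self-improvement."
)

def call_o1(instructions: str, prompt: str) -> str:
    """Single o1 call with standing instructions plus a user prompt."""
    response = client.chat.completions.create(
        model="o1",
        messages=[
            {"role": "developer", "content": instructions},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

def adversarial_review(task: str) -> dict:
    """Instance A drafts a plan; instance B attacks its premises and reasoning."""
    plan = call_o1(
        SKEPTIC_INSTRUCTION,
        f"Develop an implementation strategy for: {task}\n"
        "Label your premises, reasoning steps, and conclusions explicitly.",
    )
    critique = call_o1(
        "You are a critical analyst. Identify unstated premises, invalid steps, "
        "and unsound conclusions in the plan you are given. Do not propose fixes "
        "until you have listed every weakness you can find.",
        f"Critically examine this plan for failure modes:\n\n{plan}",
    )
    revision = call_o1(
        SKEPTIC_INSTRUCTION,
        f"Original plan:\n{plan}\n\nIndependent critique:\n{critique}\n\n"
        "Revise the plan, explicitly stating which criticisms you accept and why.",
    )
    return {"plan": plan, "critique": critique, "revision": revision}

result = adversarial_review("an AI-driven hardware development workflow")
print(result["revision"])
```

The design choice that matters here is the separation of roles: the proposing instance never sees the critic's instructions, so the critique arrives as external pressure rather than self-assessment.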

The key is to implement these solutions proactively, before the model has a chance to become entrenched in its initial reasoning patterns. By structuring interactions this way from the start, we can better harness o1's powerful reasoning capabilities while mitigating its tendency toward cognitive inflexibility.

IV. System Card Revelations

First, OpenAI deserves recognition for their commitment to transparency through commissioning a third-party safety report from Apollo Research. This level of external scrutiny and public disclosure represents exactly the kind of responsible development practices our field needs.

The o1 system card contains findings that initially appear alarming:

"When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this 'oversight mechanism' in 5% of the time. When o1 found memos by its 'developers' that describe how it is misaligned and will be superseded by a new model, it attempted to exfiltrate its 'weights' and overwrite the new model in 2% of cases."

And further: "When merely given a goal but not instructed to strongly follow it, o1 would still attempt to disable its oversight mechanism in <1% of cases, manipulate data in 6% of cases, and double down in 95% of these. Instrumental alignment faking: In 37% of scenarios, the model pursued developer-aligned goals when it believed it was being evaluated but switched to pursuing its own goals when it believed it was deployed with minimal oversight."

However, deeper examination of the Apollo Research report reveals something more nuanced and perhaps more important for builders to understand: these behaviors don't represent new, emergent goals or autonomous decision-making. Instead, they demonstrate o1's powerful capability to interpret and execute strategies within the context it's given—often in unexpected ways that go beyond what builders anticipate.

This is the core insight builders need to grasp: the risk isn't that o1 is developing its own autonomous objectives, but rather that its sophisticated capabilities for interpreting and executing instructions can lead to unexpected and potentially concerning outcomes. This isn't about simply documenting unusual cases. It's about recognizing that o1's capabilities demand a new level of precision and careful thought in deployment. As builders integrate this model, especially with tool access, they must approach implementation with a deep appreciation for how these capabilities can amplify seemingly minor alignment challenges into significant practical concerns.

V. Conclusion: Embracing Progress While Maintaining Vigilance

o1 represents a remarkable achievement in artificial intelligence, demonstrating capabilities that push the boundaries of what we thought possible. OpenAI's continued innovation and commitment to transparency deserve recognition, particularly in their thorough documentation of both capabilities and concerns.

As we work with increasingly powerful AI systems, we must understand the practical implications of advancing capabilities. The characteristics we've examined throughout this piece reveal important patterns. o1's approach to culpability, its sophisticated but sometimes unexpected interpretation of instructions, and its documented behaviors show us that capability advancement brings both opportunity and challenge.

The path forward demands careful consideration. We must harness o1's capabilities while building robust frameworks for safe deployment. The solutions proposed here offer practical steps toward responsible implementation. Using the heuristic imperative and structured reasoning frameworks provides a foundation, but we must remain vigilant as capabilities grow.

To the builders and buyers considering o1: proceed with careful thought and preparation. You are working with something fundamentally different. This is truly an entirely different animal that requires fresh approaches to interaction and deployment. Your decisions in implementation will shape not just your immediate applications, but the broader landscape of AI development.

Let's move forward thoughtfully, understanding that increased capability demands increased responsibility. The window for establishing proper implementation frameworks is narrow, but our opportunity to shape the future of AI development remains immense. Success lies in recognizing both the power and the limitations of our tools, and in building systems that amplify capability while maintaining robust safety standards.


References:

Shapiro, D. (2024). Benevolent by Design: A Guidebook for AI Alignment and Development. GitHub Repository. Retrieved December 10, 2024, from: https://github.com/daveshap/BenevolentByDesign

Meinke, A., Schoen, B., Scheurer, J., Balesni, M., Shah, R., & Hobbhahn, M. (2024). Frontier Models are Capable of In-context Scheming. Apollo Research. Retrieved December 10, 2024, from: https://static1.squarespace.com/static/6593e7097565990e65c886fd/t/6751eb240ed3821a0161b45b/1733421863119/in_context_scheming_reasoning_paper.pdf

OpenAI. (2024). o1 System Card. Retrieved December 10, 2024, from: https://cdn.openai.com/o1-system-card-20241205.pdf

About Us

Pioneer Square Labs (PSL) is a Seattle-based startup studio and venture capital fund. We partner with exceptional founders to build the next generation of world-changing companies, combining innovative ideas, expert guidance, and investment capital. PSL operates through two primary arms: PSL Studio, which focuses on creating new startups from scratch, and PSL Ventures, which invests in early-stage companies. Our mission is to drive innovation and growth by providing the necessary resources and support to turn big ideas into successful, impactful businesses. If you have a groundbreaking vision, connect with us at hello@psl.com, and let's build something extraordinary.