OpenAI has just unveiled an exciting new research tool known as “deep research” that promises to revolutionize how we gather information online. This innovative tool, powered by the o3 model, is tailored for intensive knowledge work in fields like finance and science, providing detailed reports akin to those produced by a research analyst.
Unlike its predecessor, Operator, which focused on simpler tasks like shopping and making reservations, deep research is geared toward more complex work. It can offer personalized recommendations for significant purchases such as cars and appliances, completing in minutes tasks that would take a person hours.
Available exclusively to subscribers of the $200-per-month ChatGPT Pro plan, deep research scans text, images, and PDFs across the web to generate its responses. While it may take between five and 30 minutes to produce results, users can track its progress in real time through an activity sidebar. Current reports are text-only, but OpenAI plans to incorporate images and data visualizations in the near future.
Despite its impressive capabilities, deep research has limitations. OpenAI acknowledges that, like other large language models, the tool may occasionally provide inaccurate information and struggle to distinguish credible sources from rumors. This risk of error raises concerns about the reliability of the reports it generates, particularly in scientific contexts.
While deep research can streamline the information-gathering process, users should verify the accuracy of its findings. As OpenAI continues to advance AI technologies, it becomes increasingly important to maintain oversight and critically evaluate the information these tools present.