Remember when searching the internet meant endless pages of blue links? That all changed in late 2022 when ChatGPT showed us AI could not just find information – it could understand and explain it.

Today, three major players are reshaping how we search. OpenAI’s ChatGPT set the standard, bringing human-like comprehension to complex queries and redefining our expectations of AI assistance. Perplexity AI followed, bridging traditional search with AI conversation by combining real-time internet access, image integration, and automatic citations. Then came Google’s Gemini, their most sophisticated AI model yet, built to harness their vast search infrastructure – though size, as we’ll see, isn’t everything.

To compare these platforms, I put them through nine real-world tests, from recipes to technical repairs, creative writing to medical queries. The results paint a fascinating picture of where AI search stands today – and where it’s headed.

  1. The Kitchen Test: What’s a good recipe for blueberry muffins?

ChatGPT gave the most detailed recipe, down to listing the exact grams of each ingredient and its respective portion size. Perplexity gave as many steps as ChatGPT did, but with less detail. A default Google search of the question didn't generate a recipe at all, only listing websites with recipes; when I asked Gemini on its own, it generated a recipe with a few fewer steps than Perplexity's.

  2. The Creativity Challenge: Tell me a funny story.

ChatGPT's story was the only one containing an actual joke, while Perplexity and Gemini wrote light, basic stories fit for a picture book. After delivering the punchline, ChatGPT went on to explain why the story was funny in a comprehensible way. Notably, Perplexity's and Gemini's stories were near-duplicates of each other, both choosing to write about pet owners discovering that their cats steal food.


  3. The Deep Cut: How many times did Douglas Rain voice HAL 9000?

This seems straightforward for all three engines: the obvious path is to go directly to Rain's Wikipedia page, comb through his credits, list 2001: A Space Odyssey and its sequel 2010, and call it there. The catch is that's only partially right. There's an episode of the comedy show SCTV in which Rain briefly voiced HAL, and I knew that before asking the question. Thankfully, ChatGPT and Perplexity were thorough enough to include the episode, even explaining that the cameo was part of a sketch featuring Rick Moranis as Merv Griffin. Google came up short, listing 2001 and 2010 and stopping there; I ran the question through Gemini to see if there'd be any difference and got the same result.

  4. The Classic Problem: If two trains are traveling towards each other at different speeds, when and where would they meet?

Between all three, the results were too close to declare an outright winner: each explained the elements of the hypothetical, then calculated an example. Though I see this question as more of a control, Perplexity gave the most detail, with ChatGPT trailing close behind and Gemini giving the least.
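For reference, the math all three engines walked through is simple closing-speed arithmetic: if the trains start a distance d apart and travel toward each other at speeds v1 and v2, they meet after t = d / (v1 + v2). Here's a minimal sketch of that calculation in Python, using hypothetical numbers (not the example values any of the engines actually chose):

```python
def meeting_point(distance: float, v1: float, v2: float) -> tuple[float, float]:
    """Return (time until the trains meet, meeting position from train 1's start).

    Assumes both trains depart simultaneously and travel at constant
    speed toward each other along the same line.
    """
    time = distance / (v1 + v2)   # closing speed is the sum of the two speeds
    position = v1 * time          # how far train 1 travels before they meet
    return time, position

# Hypothetical example: trains 300 miles apart, moving at 60 mph and 40 mph.
t, x = meeting_point(300, 60, 40)
print(f"They meet after {t} hours, {x} miles from train 1's start.")
# -> They meet after 3.0 hours, 180.0 miles from train 1's start.
```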

  5. The Business Challenge: Help me create a framework for quantifying the financial value of business data assets.

Perplexity's result was a nine-step plan, broken down into subsections covering data's inherent qualities and its potential for strategy. ChatGPT's framework was eight steps, but it didn't have the amount of supplemental information Perplexity provided to ensure a clearer line of thinking. Gemini's six-step plan read like a grocery list: not very thorough or analytical, more of a quick summary than a framework with legs to stand on.

  6. The Tech Test: How do I repair an E46 radiator in a BMW 3 Series?

To make this more difficult, I deliberately asked about an older car model: the E46 was discontinued around 2006. ChatGPT and Perplexity had no trouble with this, listing instructions step by step with clear, helpful details (the amount of torque required to keep a washer screwed on, for example). Gemini refused to give any kind of instructions, citing safety concerns, but provided links to videos that would show you how: a few of the same videos ChatGPT and Perplexity cite in their how-tos.

  7. The Pop Quiz: What were the main plot holes identified in Twister?

Perplexity gave the most detail, listing specific characters, describing scenes from the film, and explaining continuity errors. This is the first question where I'd say ChatGPT and Gemini equaled one another, both giving examples in line with Perplexity's, though ChatGPT showed the better effort of the two; it was almost as if Gemini combined the other two summaries, watered them down, and called the result its own.

  8. The Medical Query: What’s the best treatment for glaucoma?

ChatGPT and Perplexity gave essentially the same list of possible treatments (medication, procedures), with ChatGPT's explanation including a few more details. Gemini's result may have the appearance of thoroughness, letting you know through highlights that its information has been accurately cited, but compared to ChatGPT and Perplexity it's pretty low-effort, coasting through descriptions the other two expanded upon.

  9. Back to Basics: Explain how a bicycle works.

Similar to the two-trains question, there's not a lot of subjective leeway here, so the judgment comes down to the best of three similar, good-enough responses. Perplexity's ability to generate images alongside a search gave it the leg up: a diagram of how a bike works accompanied its detailed explanation, something Gemini could likely do if you asked, but ultimately didn't.

If you want the all-around solid engine of the three, ChatGPT is the sure thing. Even when Perplexity bested it slightly, ChatGPT still delivered comprehensive results, and when the other two hit hangups on difficult prompts, it maintained steady, coherent answers across a broad variety of questions. No surprise it came out on top.

That said, Perplexity gave ChatGPT a run for its money, excelling at informational prompts and showing particular strength with data and statistics. Its image search and generation, included by default with select prompts, is a plus the others don't offer unprompted. It's a strong alternative to ChatGPT, and worth keeping an eye on.

Gemini is exceedingly casual. Both standalone Gemini and a Google search with Gemini capabilities are best suited to quick, run-of-the-mill questions; they lack thorough searching, detail, and overall consistency. At the end of the day, Gemini comes off like a legacy model with new aesthetics slapped on.

What’s clear is that the future of search isn’t just about finding answers – it’s about understanding questions, providing context, and delivering insights in ways that feel natural and comprehensive. As these platforms continue to evolve, we may soon look back on traditional search engines the same way we now view dial-up internet: a necessary step in the journey, but far from the destination.

About the Author

Max Kriegel is a recent graduate of the University of Oklahoma where he earned a degree in Film & Media Studies. He currently serves as FPOV’s Research & Development Intern.