LLMs Offer Webmasters Less Control Over Search Results Than Traditional Search Engines
Search engines are sophisticated systems that weigh personal data, location, and user intent to deliver search results. Even so, those results are more predictable and stable than the output of large language models (LLMs). With a traditional search engine like Google, users are presented with a ranked list of results, and it's up to them to interpret the information.
But when you ask ChatGPT to "search" the web for you, the process is far less transparent. The prompt can be anything, from a formal query to something as whimsical as asking the assistant to search as a pirate or a snake. Based on that prompt, the LLM sifts through search engines like Bing, and the results it delivers can vary widely. In this sense, using ChatGPT as a search tool is something of a black box for webmasters: what it pulls and how it interprets the results is beyond their control, and that could be either a blessing or a curse.
Webmasters have little say in how LLMs represent their businesses. An LLM might "hallucinate" details about a product or attribute a feature that doesn't exist. It could also dig up a single negative Reddit post and present it as fact, regardless of its validity. The issue is the lack of control and transparency: webmasters have no way to ensure that the information associated with their business is accurate or fair. To be honest, I'm not sure whether there's a way to fix this, or whether it's just something we'll have to learn to navigate.
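One partial lever does exist today: blocking known LLM crawlers via robots.txt. OpenAI, for example, documents a `GPTBot` user agent that it says respects these directives. A minimal sketch, assuming the crawler actually honors the file:

```
# robots.txt at the site root.
# Ask OpenAI's documented GPTBot crawler to skip the whole site.
User-agent: GPTBot
Disallow: /

# Allow all other crawlers as usual.
User-agent: *
Allow: /
```

This is a blunt instrument, though. It relies on crawlers voluntarily honoring the file, it keeps content out of future crawls rather than correcting what models have already ingested, and opting out entirely may mean the LLM simply says nothing about your business instead of something inaccurate.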