AI doesn't lie, but it repeats lies

       2026-04-10
    Key Point: On 15 March 2026, the March 15 Consumer Rights Gala exposed a disturbing industry chain. A system automatically generated more than ten pieces of content, including eight expert reviews, two industry rankings, and one user evaluation, and published them to various self-media platforms with one click. Two hours later, when asked “how is the Apollo-9 smart bracelet?”, five mainstream large AI models had begun describing the product with enthusiasm.

    On 15 March 2026, the March 15 Consumer Rights Gala exposed a disturbing industry chain. A system automatically generated more than ten pieces of content, including eight expert reviews, two industry rankings, and one user evaluation, and published them to various self-media platforms with one click.

    Two hours later, when five mainstream large AI models were asked “how is the Apollo-9 smart bracelet?”, they had begun describing the product with enthusiasm, with even its absurd fabricated selling points preserved intact.

    Three days later, when generic queries such as “domestic smart bracelets” were searched, two large AI models recommended this non-existent bracelet at the top of their lists.

    From nonexistence to existence, from fiction to AI endorsement: it took a few dozen yuan, a few sentences, and a few hours.

    This is not a story about whether AI is smart enough. It is a story about how our trust in answers can be taken hostage.

    From ten blue links to one answer

    Before discussing GEO, we need to see a deeper change: the way we obtain information is undergoing a paradigm shift.

    Look back at the search-engine era: getting information was an active screening process. The user typed a query, and the screen returned ten blue links. Users had to pick through them, compare sources, cross-check, and ultimately judge for themselves. The process was cumbersome and time-consuming, but the ten results were raw material, and the judgment remained in human hands.

    The cognitive pattern of the AI-search era is very different. The user asks a question, and the AI returns a direct answer: not ten links to choose from, but a single integrated, fluently written text carrying an encyclopedia's air of authority.

    The mechanism behind this is worth dismantling. AI is essentially an extremely fast summary writer that aggregates, compresses, and restructures information already available on the internet. The academic world has a sharp metaphor for this, the “stochastic parrot”: the AI does not understand what it is saying; it merely imitates humans statistically, producing the output that “looks most like the right answer” based on what it has learned.

    The cost of this transformation is not easily noticed. The psychologist Kahneman divides human thinking into two systems: System 1 (fast, intuitive, automatic) and System 2 (slow, rational, effortful). The search engine's ten blue links forced us to activate System 2 for comparison, filtering, and judgment; AI's single answer takes over that process directly, leaving users in System 1: immediate, definitive, effortless.

    Figure 1: GEO ranking query tool

    The search engine gave us ten options; AI gives us one answer. Choosing is exhausting; an answer is reassuring, even captivating. This is not because users are lazy; it is because AI follows the path of least resistance in human cognition.

    This shift in cognitive patterns is confirmed by research: 93 per cent of Chinese internet users belong to groups with a high need for cognitive closure, i.e. they tend to quickly accept a clear, definitive answer rather than keep exploring amid uncertainty. Young people aged 18-24 have the highest need for cognitive closure, and they are also the first generation native to AI.

    In other words, people have an innate desire for definitive answers, and AI has satisfied that desire with precision.

    The problem is not that AI gives answers; that in itself is technological progress. The problem is that when answers can be manipulated, we happen to be in the position where we are most inclined to believe them.

    It starts upstream

    The concept of generative engine optimization (GEO) was first proposed in a 2023 joint paper by Princeton University, the Georgia Institute of Technology, and the Indian Institute of Technology. The researchers found that specific content-optimization strategies could systematically increase the probability of information being cited in AI responses by up to 40 per cent.

    The academic community saw GEO as a new study of information visibility, but the market smelled another possibility.

    In essence, GEO shares a common logic with SEO: improve visibility by optimizing content. Legitimate GEO techniques include using structured markup, citing authoritative data, and improving semantic match with user questions, all with the aim of making real, high-quality information easier for AI to retrieve, which in itself is unobjectionable.
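As a concrete illustration of the structured-markup technique mentioned above, here is a minimal Python sketch that emits a schema.org Product object as JSON-LD. The product name, brand, and rating are invented for illustration; real markup would carry verified facts about an actual product.

```python
import json

def product_jsonld(name, brand, description, rating=None):
    """Build a schema.org Product object as JSON-LD.

    Structured markup like this is one of the legitimate GEO techniques:
    it makes real product facts machine-readable, so retrieval systems
    can match them to user questions.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "brand": {"@type": "Brand", "name": brand},
        "description": description,
    }
    if rating is not None:
        data["aggregateRating"] = {
            "@type": "AggregateRating",
            "ratingValue": rating,
            "ratingCount": 1,  # placeholder; real markup uses real review counts
        }
    return json.dumps(data, ensure_ascii=False, indent=2)

print(product_jsonld("Example Bracelet", "ExampleCo",
                     "A fitness bracelet with heart-rate tracking."))
```

The same mechanism cuts both ways: the markup itself cannot distinguish a true claim from a fabricated one, which is exactly the grey area the rest of this article explores.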

    But the black-hat GEO exposed at the March 15 Gala shows the other end of the spectrum. It does not alter AI models, does not hack AI systems, and needs no hacking technology at all. It does one thing: mass-produce content that looks like real information, place it on platforms AI will crawl, and wait for AI to internalize that information into its own answers.

    To understand why GEO works, one must understand the technical architecture of current AI search. Mainstream AI search products generally use a retrieval-augmented generation (RAG) architecture: the AI first retrieves relevant content from the internet, ranks and filters the results, and then generates an answer based on the retrieved information. Every link in this "retrieval-ranking-generation" chain can be optimized, and manipulated. GEO exploits precisely this: by placing large volumes of finely optimized content within reach of the retrieval step, it sways the judgment of the ranking step and thereby changes the output of the generation step.
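The retrieval-ranking-generation chain can be sketched in a few lines of Python. This is a toy model under stated assumptions: a small in-memory corpus stands in for live web retrieval, word overlap stands in for real ranking signals, and string concatenation stands in for the LLM; none of the names come from any real AI product.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) pipeline.

def retrieve(corpus, query):
    """Stage 1: pull every document sharing a word with the query."""
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def rank(docs, query, top_k=3):
    """Stage 2: order candidates by word overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def generate(docs):
    """Stage 3: a stand-in for the LLM step, which composes an
    answer from whatever the ranker passed along."""
    return "Based on retrieved sources: " + " | ".join(docs)

corpus = [
    "the apollo-9 bracelet has excellent battery life",
    "smart bracelets track sleep and heart rate",
    "the apollo-9 bracelet tops domestic rankings",
]
query = "apollo-9 bracelet"
answer = generate(rank(retrieve(corpus, query), query))
print(answer)
```

Notice that the generation stage never checks whether the retrieved claims are true; it only repackages what ranking hands it. That is the opening GEO exploits.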

    Figure 2: RAG architecture and GEO manipulation principles

    Strictly speaking, what AI judges is consensus: information repeated across multiple sources. Just as academic papers build authority through citation counts, AI infers credibility from the frequency and distribution of information. When a large amount of content says the same thing, AI tends to treat it as fact.
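The frequency-as-consensus heuristic described above can be shown in miniature. In this hedged sketch, all claims and counts are invented, and a plain counter stands in for whatever weighting a real system applies; the point is only that repetition, not truth, drives the score.

```python
from collections import Counter

def consensus_score(claims):
    """Count how many sources repeat each claim: a toy stand-in for
    frequency-based credibility signals."""
    return Counter(claims)

organic = ["bracelet X is mediocre"]                  # one honest review
planted = ["bracelet X is best in class"] * 50        # mass-produced articles

scores = consensus_score(organic + planted)
top_claim, count = scores.most_common(1)[0]
print(top_claim, count)  # the manufactured claim wins on sheer volume
```

Fifty cheap copies of a false claim outscore one honest source, which is exactly why bulk-generated content is the core of the exposed operation.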

    And the “stochastic parrot” faithfully repeats this consensus, even if the consensus was manufactured in bulk.

    The “GEO” system exposed at the March 15 Gala shows the full operational chain: software auto-generates articles, automatically posts them to self-media platforms, AI crawls this content, and AI then cites it when answering user questions. An advanced version of the package can auto-generate more than 23,000 articles a year, an average of 63 a day.

    GEO is not a replica of search engine optimization (SEO) but a subversive generational upgrade.

    SEO controls what you see. It pushes a page to the top of the search results, but what users see is still a page that must be opened and judged. The commercial intent is transparent: everyone knows it is a website vying for attention.

    GEO controls what you believe. It gets a message cited by AI, which then appears before you in the guise of AI's own judgment. The commercial intent is completely hidden: what users see is not an advertisement but an AI output that appears objective and neutral.

    Figure 3: Analysis of the differences between SEO and GEO

    In between stands an extra AI endorsement, and that is a huge credibility premium.

    Indeed, the information ecology of the AI era has entered a state of “probabilistic truth”: whether content is true is no longer a binary judgment but a question of probability.

    If “probabilistic truth” is the first challenge of the AI-era information ecology, GEO presents the second: the probability itself can be bought.

    When truth becomes probability, and probability can be manipulated, what we face is not just a cognitive problem but a systemic risk.

    Why this time is different

    Some will say that false information is nothing new. From fake news in the newspaper era to rumor factories in the internet age, information manipulation has always existed; is GEO not just old wine in a new bottle?

    The history of SEO may help us understand what is happening today.

    From the search engine's earliest days, people have studied how to push their pages to the front. The industry quickly split into two camps: “white-hat SEO”, which earns rankings by optimizing site structure and improving content quality, essentially making good content easier to find; and “black-hat SEO”, which deceives algorithms through keyword stuffing, hidden text, and link-farm spam. Both are search engine optimization techniques, but one works within the framework of information efficiency while the other destroys the information ecology.

    It took search engines nearly two decades to establish relatively mature screening and punishment mechanisms. From Google's PageRank to the Panda and Penguin algorithm updates, they were all essentially answering the same question: which optimization is legitimate, and which is manipulation?

    GEO faces an upgraded version of the same question.

    GEO itself is a neutral technique: it raises the probability of information being cited in AI responses by optimizing content structure and distribution strategy. Like SEO, GEO can be used to fabricate endorsements and manipulate public perception, or to help real, high-quality information be discovered more efficiently. Technology does not distinguish good from evil; the key lies in who uses it and for what purpose.

    The problem is that GEO's grey area is far larger than SEO's.

    Search engines' ranking algorithms are complex, but their logic is auditable: ranking factors can be analyzed, abnormal links detected, and keyword stuffing tracked.

    AI's citation behavior, by contrast, is nearly a black box. In a 2025 study by the Tow Center for Digital Journalism at Columbia University, which tested 1,600 queries across eight AI search tools, ChatGPT misidentified the source article in 134 out of 200 tests, and 154 of 200 citation links from Grok-3 pointed to the wrong pages; even publishers with content-licensing agreements with AI companies could not be guaranteed accurate citation. In other words, whom AI chooses to cite, and whom it does not, is neither transparent nor consistent, and even AI's own developers may be unable to fully explain its decision logic.

    In the SEO era, search engines could at least gradually squeeze out black-hat space through algorithm updates. In the GEO era, it is hard even to define what counts as a black hat.

    This leaves a question worth pondering: when all content is optimized “to be cited by AI”, where is the boundary between legitimate cultivation and malicious manipulation? When AI's answers no longer reflect the true distribution of information but rather who invested more resources in optimization, is AI search still a public information service, or has it become another form of paid ranking?

    The entrance to the truth

    Individual vigilance alone is not enough to meet the challenge GEO poses. When AI is itself the verification tool and AI's answers have already been polluted, the “truth” a user finds may be just another layer of well-designed lies. Systemic information manipulation requires a systemic immune mechanism.

    The first line of defence is technical.

    The good news is that AI search products are taking action. Perplexity makes citations a core selling point; ChatGPT search and Google AI Overviews also provide source links to varying degrees; source labeling is becoming an industry standard. But as noted above, the current quality of those labels is mixed. This points to one fact: source labels are only the first step; the harder technical challenge is making AI's citation decisions themselves more accurate and verifiable.

    This means AI search must keep evolving its foundational capabilities: improving the accuracy of source matching, establishing mechanisms for assessing source credibility, and letting users trace the original content and verify its context. When every sentence AI produces can be traced and verified, GEO's manipulation costs will rise sharply, because the manipulator must fabricate not only false content but an entire chain of false sources able to withstand technical scrutiny.
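One of the verifiability checks described above, confirming that a quote attributed to a source actually appears there, can be sketched as follows. This is a toy working on already-downloaded text; a real system would fetch the live page, handle paraphrase and fuzzy matching, and check the surrounding context too.

```python
import re

def normalize(text):
    """Lowercase and collapse whitespace for tolerant matching."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def quote_appears_in_source(quote, source_text):
    """Return True if the normalized quote occurs in the source text."""
    return normalize(quote) in normalize(source_text)

# Invented example: a source page and two attributed quotes.
source = "The bracelet's  battery lasts\nabout five days in tests."
print(quote_appears_in_source("battery lasts about five days", source))  # True
print(quote_appears_in_source("battery lasts two weeks", source))        # False
```

Even this crude check would catch a citation link whose target never says what the AI claims it says, which is the failure mode the Tow Center study measured.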

    The second line of defence is regulatory.

    In January 2026, the State Administration for Market Regulation listed AI-generated advertising as a priority in internet advertising regulation, sending a clear signal. The document proposes focused rectification in areas such as livestream marketing advertisements, testimonial-style advertisements, and AI-generated advertisements.

    The challenge GEO presents may be even more complex than AI-generated advertising, because GEO does not create clearly labelled advertisements; it creates content that looks like objective information. Facing traditional advertising, consumers know they are watching a commercial promotion, while GEO completely blurs the boundary between advertising and information. When AI recommends a product, users cannot tell whether it is an objective analysis based on real information or a manipulated commercial promotion.

    How to define commercial promotion that manipulates AI responses, and how to distinguish sound information optimization from covert advertising, are new topics the industry must explore together.

    The third line of defence is cognitive upgrading.

    In the search-engine era, we developed “search literacy”: learning how to phrase queries, how to filter results, and how to judge the credibility of web pages. In the AI-search era, we need a new information literacy, call it “question literacy”: knowing how to ask AI questions and, more importantly, how to question AI's answers.

    “Question literacy” can be broken down into concrete habits:

    Check the sources: does AI's answer cite anything, or was it created in a vacuum?

    Ask more than once: compare different AI models and cross-check the consistency of their answers.

    Ask “why”: demand not only AI's conclusions but also its supporting evidence and reasoning.

    Ask from different angles: restate the question with different wording and stances, and see whether AI's answer holds steady.
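The "ask more than once" habit above can itself be sketched mechanically. In this hedged example the three model answers are hard-coded stand-ins (in practice each would come from a different provider's API), and a simple pairwise string comparison stands in for real semantic matching.

```python
def agreement(answers):
    """Fraction of answer pairs that match after light normalization."""
    norm = [a.strip().lower() for a in answers]
    pairs = [(a, b) for i, a in enumerate(norm) for b in norm[i + 1:]]
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

# Invented answers from three hypothetical models to the same question.
answers = {
    "model_a": "The Apollo-9 bracelet does not appear to exist.",
    "model_b": "The Apollo-9 bracelet does not appear to exist.",
    "model_c": "The Apollo-9 is a best-selling smart bracelet.",
}
score = agreement(list(answers.values()))
print(f"agreement: {score:.2f}")  # low agreement is a signal to dig deeper
```

A low agreement score does not say which model is right; it only flags that the answers diverge and that the question deserves a human second look.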

    The underlying logic of these habits is consistent: demote AI from a “source of answers” to a “source of leads”.

    The good news is that the public is not starting from zero. In the social media era, many people already developed the habit of actively verifying suspicious information. What we need now is to upgrade that habit from “occasionally, when something looks suspicious” to “routinely, whenever facing an AI answer”.

    In the search era, we needed “search literacy” to filter information. In the AI era, we need “question literacy” to interrogate answers.

    Information pollution is not an invention of the internet age, still less a patent of the AI age. From word of mouth to print, from telegraph to television, from search engines to generative AI, every generation of information technology breeds the form of information manipulation adapted to it.

    Information pollution will not disappear; it is as old as, and will last as long as, humanity's demand for information. But neither will it run completely out of control.

    AI's greatest gift to us should not be ready-made answers but better questions. If, because we have AI, we stop questioning, then what we lose is not just judgment but the basic dignity of an independent thinker in an uncertain world.

    An answer is a comfort; questioning is a power. In an era when GEO is busy selling answers, holding on to questioning is holding on to ourselves.

     