
Squeezing The Juice Of LLM Neural Layers Promotes Greater Honesty And Could Be An AI Hallucination Antidote

2025/11/17 16:41

A clever technique could help reduce AI hallucinations and increase AI factuality. (Image credit: Getty)

In today’s column, I examine some exciting research that could demonstrably improve how generative AI and large language models (LLMs) operate. The nascent approach is only starting to be tried out. Time will tell whether the method will be of lasting value.

The gist is this. Most of the prevailing AI models tend to be structured internally on a pass-it-along basis. A result flows from one component to the next. When a response is shown to you, the result is typically only whatever the last component came up with. Everything else that took place during the processing is no longer considered. Only the final result is what comes out of the generative process.

A clever research study suggests that we might be able to overcome some of the issues of AI going awry, such as disconcertingly producing AI hallucinations or confabulations, by retooling the pass-it-along propensity. Suppose that upon reaching the final stage of generating the response, an additional mechanism revisited the processing that had occurred at each earlier stage. This additional mechanism might be able to see the forest for the trees. In other words, a computational and mathematical analysis of the processing at each stage could be applied at the very end to determine what the final result really ought to be.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

When Humans Are Problem-Solving

Before I leap into the AI side of things, I’d like to share with you a general analogy that highlights how humans working together might sometimes try to solve a problem. This brisk analogy will be helpful when I discuss the arcane AI mechanisms within LLMs.

Assume you have a group of ten people who are going to try to solve a simple arithmetic problem. We will line up the ten people in a sequence and have each work separately on solving the problem. They all have the same problem handed to them.

The first person in line tells the second person in line the answer that they, the first person, came up with. The second person then tells the third person an answer of their own, which might be the same as the first person's answer, or might not be. In making that choice, the second person considers whether to use the answer from the first person or to override it and come up with a different answer.

This continues in the same manner, repeatedly, proceeding from one person to the next. Since we have ten people, it means that the first person tells the second person an answer, the second person tells the third person an answer, the third person tells the fourth person an answer, and so on.

When a person in line receives a proposed answer from the person who preceded them, the receiving person can decide what to do with it. They can use the handed-over answer, or they might discard it. There is no guarantee that the handed-over answer is correct. It might be wrong. It might be right.

The Final Result Of The Problem Solving

Imagine that you were standing at the very end of this line of people and could not readily overhear the person-to-person rendition of the proposed answer. The tenth person finally turns to you and tells you that the answer is (let’s say) the number 642.

Can you believe this answer?

You only know what the last person tells you. Did this tenth person consider the answer provided by the ninth person? Did the ninth person consider the answer provided by the eighth person? Etc. Maybe the tenth person just concocted or derived an answer on their own and opted to completely ignore the answer from the ninth person.

Likewise, maybe each person in the sequence utterly ignored the preceding answer given to them. That seems like a darned shame. It could be that along the way an answer of, say, 648 was calculated, and suppose that is the correct answer; still, all you know is what the tenth person told you, namely that the alleged answer is 642.

Visibility And Combination

Contemplate for a moment the nature of the process that I just described.

It would sure be nice if we could somehow incorporate all ten answers into devising the final answer, rather than relying solely on the tenth person. Here’s what we will do. When the tenth person comes up with their answer, we will ask each of the other nine to tell us what their answers were.

We could then combine the ten answers in a manner that we hope will be a likely better answer than the sole answer coming from the tenth person. Consider an example. Pretend that we discover an answer of 648 came from the first through the seventh person, and only the eighth, ninth, and tenth person came up with 642. We might decide that the majority wins, in the sense that since more of the ten said the answer is 648 (seven of them did so), we will use that as the answer and set aside the answer of 642 (which only three people provided).

There are lots of ways that we could combine the respective answers. Maybe some of the people are more reliable than the others; thus, we will give their answers a greater weighting. And so on. Numerous means of combining the ten answers can be conceived of.
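
To make the combining step concrete, here is a minimal Python sketch (my own illustration, not from the article) of two ways to merge the ten answers: a simple majority vote and a reliability-weighted vote. The numbers mirror the example above, in which seven people said 648 and three said 642.

```python
from collections import Counter

def majority_vote(answers):
    """Return the answer proposed most often."""
    counts = Counter(answers)
    return counts.most_common(1)[0][0]

def weighted_vote(answers, weights):
    """Return the answer with the largest total weight.
    weights[i] reflects how reliable we judge person i to be."""
    totals = {}
    for answer, weight in zip(answers, weights):
        totals[answer] = totals.get(answer, 0.0) + weight
    return max(totals, key=totals.get)

# The example from the text: persons 1-7 say 648, persons 8-10 say 642.
answers = [648] * 7 + [642] * 3
print(majority_vote(answers))           # -> 648, the majority wins

# If the last three people were deemed far more reliable, the outcome can flip.
weights = [1.0] * 7 + [3.0] * 3
print(weighted_vote(answers, weights))  # -> 642, the weighted vote differs
```

The point of the sketch is simply that the choice of combining rule matters: a plain majority and a reliability-weighted tally can disagree, even over the same ten answers.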

Contemporary Generative AI

Shifting gears, I’d like to dive into the nature of generative AI and LLMs.

AI developers craft an LLM by scanning text that exists throughout the Internet. The AI pattern-matches the scanned text. As a result of scanning millions upon millions of stories, narratives, poems, and the like, the AI is mathematically and computationally able to appear fluent in human natural languages such as English. The AI is essentially mirroring how humans write.

Within the AI is an artificial neural network (ANN). It is a large-scale data structure that contains numeric values. The ANN does the bulk of the work when it comes to representing the pattern matching of the written materials that were scanned.

As an aside, please be aware that an ANN is not the same as a true neural network (NN), such as the one in your brain. Your brain uses a complex and intricate web of interconnected biochemical living neurons. Some cheekily refer to the human brain as wetware (a play on the fact that computers have hardware and software).

The ANN is simplistic in comparison and only an inspired imitation of some aspects of how the human brain works. An ANN is entirely computational and mathematical. I mention this to emphasize that, though many in the media tend to equate ANNs with real NNs, it is not a fair comparison. For more details on ANNs and how they function, see my discussion at the link here.
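
To underscore that an ANN is nothing more than arithmetic, here is a bare-bones Python sketch (my own illustration, not drawn from the article) of a single artificial neuron: a weighted sum of inputs passed through a simple nonlinearity.

```python
import numpy as np

# A single artificial neuron is just arithmetic: multiply the inputs by learned
# numeric weights, add a bias, and squash the result with a nonlinearity.
def artificial_neuron(inputs, weights, bias):
    return np.tanh(np.dot(weights, inputs) + bias)

inputs = np.array([0.5, -1.2, 0.3])    # values arriving from the previous layer
weights = np.array([0.8, 0.1, -0.4])   # learned numeric values, nothing biological
bias = 0.05
print(artificial_neuron(inputs, weights, bias))
```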

Layers Within The ANN

A large-scale artificial neural network is divided into layers, each layer consisting of many artificial neurons.

An AI developer decides how many artificial neurons are to be assigned to each layer. Likewise, an AI developer decides how many layers the entire ANN will consist of. Early LLMs contained ANNs with anywhere from a handful of layers to perhaps two dozen layers all told. Contemporary generative AI now uses a lot more layers. For example, ChatGPT has 96 layers.

Let’s consider how the layers operate with each other. This will be described at a 30,000-foot level, providing a simplified notion of how the inner workings actually occur.

Suppose you have entered a prompt into an LLM. The prompt is essentially fed into the first layer of the artificial neural network. In this first layer, the most rudimentary or lowest-level processing of a prompt will take place. The first layer will produce a single result and pass that result along to the second layer.

The second layer doesn’t have any visibility into what occurred inside the first layer. All the second layer receives is an output from the first layer. The second layer then does its respective processing. Upon completion of the processing, the second layer passes along a result to the third layer. The third layer doesn’t have visibility into what took place in the second layer. The third layer only has an output fed into it from the second layer.

And on this goes, continuing the same activity until the last layer is reached. The last layer produces a result that then becomes the final response displayed to you. You have no clue as to what happened in the in-between layers. The only aspect you are made aware of is the result that comes out of the last layer.
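
As a toy numerical sketch of this pass-it-along flow (my own simplification, not an actual transformer), each layer below transforms only what it received from the layer before it, and only the last layer's output is ever surfaced:

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, width = 4, 8
# Each layer is represented here by a random weight matrix, purely for illustration.
layers = [rng.standard_normal((width, width)) * 0.1 for _ in range(num_layers)]

def forward(prompt_vector):
    hidden = prompt_vector
    for weights in layers:
        # Each layer transforms what it received and hands the result onward;
        # it has no visibility into how earlier layers arrived at their output.
        hidden = np.tanh(weights @ hidden)
    return hidden  # only the final layer's result is surfaced

response = forward(rng.standard_normal(width))
print(response)  # the intermediate layer outputs were discarded along the way
```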

Rethinking The Pass It Along Approach

Aha, by now you are probably connecting the dots. My earlier analogy maps directly onto this mechanical dilemma of the LLM. The layers are playing a game of pass-it-along. This approach might not be the best game in town.

Rather than solely relying on the last layer to produce a final response, it could be quite useful to incorporate the other answers that were generated along the way. There are many ways we could do this. The overarching theme is that once the AI has reached the final layer during its processing, we should include a means of involving the answers from the prior layers in some sensible way.

A research study identified this novelty and performed experiments to see if it was effective. The study is entitled “SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models” by Jianyi Zhang, Da-Cheng Juan, Cyrus Rashtchian, Chun-Sung Ferng, Heinrich Jiang, Yiran Chen, arXiv, August 19, 2025, and made these salient points (excerpts):

  • “Large language models (LLMs) have demonstrated remarkable capabilities, but their outputs can sometimes be unreliable or factually incorrect. The issue of hallucinations undermines the reliability and trustworthiness of LLMs in practical applications.”
  • “To address this, we introduce Self Logits Evolution Decoding (SLED), a novel decoding framework that enhances the truthfulness of LLMs without relying on external knowledge bases or requiring further fine-tuning.”
  • “From an optimization perspective, our SLED framework leverages the latent knowledge embedded within the LLM by contrasting the output logits from the final layer with those from early layers. It then utilizes an approximate gradient approach to enable latent knowledge to guide the self-refinement of outputs, thereby effectively improving factual accuracy.”
  • “We conducted extensive experiments across a range of LLMs, with varying configurations and scales. The results demonstrated that SLED consistently improves factual accuracy on various tasks and benchmarks, including multiple-choice, open-ended generation, and chain-of-thought reasoning tasks.”

An Overlay Versus Outright Surgery

The beauty of this kind of approach is that you don’t necessarily have to do deep code-modifying surgery on the various layers and structure of the artificial neural network. No need to gut the code or data structures. The usual arrangements can be kept as is. By and large, you add a new piece at the end of the process, in a relatively unintrusive manner.
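
To give a feel for such an overlay, here is a deliberately simplified Python sketch (my own illustration of the general concept, not the paper's exact algorithm; SLED itself uses a contrastive, approximate-gradient refinement): read out logits from every layer with the same output head, then nudge the final-layer logits toward an aggregate of the early-layer logits before choosing the next token. The network itself is left untouched; only the decoding step changes.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def blended_next_token_logits(hidden_states, output_head, alpha=0.2):
    """hidden_states: per-layer hidden vectors for the current position.
    output_head: projection matrix mapping a hidden vector to vocabulary logits.
    alpha: how strongly the early layers pull on the final-layer logits."""
    per_layer_logits = [output_head @ h for h in hidden_states]
    final_logits = per_layer_logits[-1]
    early_consensus = np.mean(per_layer_logits[:-1], axis=0)
    return (1.0 - alpha) * final_logits + alpha * early_consensus

# Hypothetical tiny example: 5 layers, hidden size 4, vocabulary of 6 tokens.
rng = np.random.default_rng(1)
hidden_states = [rng.standard_normal(4) for _ in range(5)]
output_head = rng.standard_normal((6, 4))
probs = softmax(blended_next_token_logits(hidden_states, output_head))
print(probs.round(3))  # a blended next-token distribution
```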

Some final thoughts for now.

There’s a well-known adage that two heads are better than one. In a roundabout way, we are acknowledging that adage: bringing together the early-layer logits with the final-layer logits leverages the many proposed outputs into a hoped-for cohesive whole. A reasonable belief is that the final answer will stabilize around the factual values that are encoded in the early layers (assuming we do the combining thoughtfully). The final answer is a blended result.

It’s an intriguing way to deal with the prevailing concerns that LLMs often veer from true facts and produce false or made-up results.

I am reminded of a famous quote by Jeff Bezos regarding expanding our horizons when it comes to being innovative: “The only way to escape the box is to invent your way out.” Whether this pioneering means of escaping the prevailing way of designing the internals of LLMs will get us beyond the existing limitations of AI is an open matter. Meanwhile, let’s keep those ideas flowing and continue to be creatively inventive.

Welcome to thinking outside the box when it comes to architecting AI.

Source: https://www.forbes.com/sites/lanceeliot/2025/11/17/squeezing-the-juice-of-llm-neural-layers-promotes-greater-honesty-and-could-be-an-ai-hallucination-antidote/

