How Intelligent Are LLMs?

Amid the hype around AI, experienced researchers have shared ideas about the kind of intelligence LLMs have and will have, and I would like to engage with some of them. I will give sources (most of them preprints) and welcome your thoughts on the issue.
Why do I consider these topics important? First, I feel we are building a new kind of intelligence that in many ways competes with us. We should therefore aim to judge it well. Second, the subject of AI intelligence is fascinating in itself. It raises questions about our own thinking processes, our differences, and our sense of superiority over other creatures.
Millière and Buckner write [1]:
In particular, we need to understand what LLMs represent about the sentences they produce, and the world those sentences are about. Such understanding cannot be reached through armchair speculation; it calls for careful empirical investigation.
LLMs are more than prediction machines
Deep neural networks can form complex internal structures in surprisingly straightforward ways. A single neuron can take on several roles, in superposition [2]. Moreover, LLMs build internal world models and mental-map-like representations [3]. Accordingly, they are not just next-word prediction machines. Their internal computations look ahead toward the end of a statement: they have a rudimentary form of planning [4].
However, all of these abilities depend on model size and type, so they can vary and can fail, especially in unusual circumstances. These emergent abilities are an active field of study, and they are probably more similar to human thinking than to a spellchecker's algorithm (if you had to choose between the two).
LLMs show signs of creativity
When tackling new tasks, LLMs do more than recite memorized content; they can generalize beyond it [5]. Wang et al. analyzed the relation between memorization and generalization on the Pile dataset and found that larger models move from reciting memorized facts toward creating novel content.
However, Salvatore Raieli recently argued on TDS that LLMs are not creative; the studies he highlighted focused on ChatGPT-3. The results Guzik et al. report for the Torrance Tests of Creative Thinking suggest the contrary [6]. Hubert et al. agree with this conclusion [7]. It applies to originality, fluency, and flexibility. Creating ideas unlike anything identified in the model's training data may be another matter; this is where individual humans may still have an advantage.
Either way, there is considerable debate about whether to dismiss this capability entirely. To learn more about this recurring topic, you can look into the field of computational creativity.
LLMs have a concept of emotions
LLMs can analyze the emotional context of a text and write in different styles and emotional tones. This suggests that they have internal representations of emotions. Indeed, there is evidence of this: researchers have probed their neural activations to read out emotional states and even steer behavior with steering vectors [8]. (One way to obtain steering vectors is to record how activations differ when the model processes statements with contrasting emotional features, e.g., sadness versus happiness.)
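As a toy illustration of this contrastive approach, the sketch below computes a steering vector as the difference of mean activations over two contrasting prompt sets and adds it to a hidden state. It is plain NumPy with random vectors standing in for real transformer activations, so every name and number here is illustrative, not taken from an actual method's API.

```python
import numpy as np

# Random vectors stand in for hidden states captured at one layer of a
# real model while it processes contrasting prompt sets.
rng = np.random.default_rng(0)
d_model = 16                                          # pretend hidden size

sad_acts = rng.normal(loc=-1.0, size=(8, d_model))    # e.g., "I feel sad..."
happy_acts = rng.normal(loc=+1.0, size=(8, d_model))  # e.g., "I feel happy..."

# The steering vector points from "sad" toward "happy" in activation space.
steering_vec = happy_acts.mean(axis=0) - sad_acts.mean(axis=0)

def steer(hidden_state, vec, strength=1.0):
    """Add a scaled steering vector to a hidden state mid-forward-pass."""
    return hidden_state + strength * vec

h = rng.normal(size=d_model)            # some intermediate activation
h_steered = steer(h, steering_vec, strength=0.5)
```

In real work, such as the Activation Addition line of research [8], the activations come from a specific transformer layer and the scaled vector is added during generation.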
In line with this, emotional concepts and their relationships appear to be embedded in the internal structures that LLMs build. There is a connection between these emotional representations and subsequent reasoning, that is, with the world as the LLM understands it.
Moreover, emotional representations are localized in certain regions of the model, and many interpretability approaches that work for humans appear to carry over to LLMs; even psychological appraisal frameworks can be effective [9].
Note that the statements above do not imply sentience, that is, that LLMs have subjective experiences.
No, LLMs don't learn (after training)
LLMs are neural networks with static weights. When we talk to an LLM chatbot, we interact with a model that does not change and only reads the context of the ongoing conversation. It can pull additional information from the web or from a database, process inputs, and so on. But its internalized knowledge, skills, and biases remain unchanged.
Beyond long-term memory mechanisms, which merely provide additional contextual data to an otherwise static LLM, upcoming methods may change this by updating the LLM itself, either by retraining it regularly on new data or through continual fine-tuning and additional adapter modules [10].
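One family of update methods in this vein [10] adds small trainable adapters on top of frozen weights. Below is a minimal NumPy sketch of the low-rank adapter idea behind LoRA-style approaches; the dimensions and values are arbitrary, and in practice only the adapter factors would be trained:

```python
import numpy as np

# Low-rank adapter sketch: instead of retraining the full weight matrix W,
# a small update A @ B is trained and added on top of the frozen weights.
rng = np.random.default_rng(1)
d, r = 8, 2                          # model dim and (much smaller) adapter rank

W = rng.normal(size=(d, d))          # frozen pretrained weights
A = rng.normal(size=(d, r)) * 0.01   # trainable adapter factor
B = rng.normal(size=(r, d)) * 0.01   # trainable adapter factor

def forward(x):
    # Effective weights = frozen base + low-rank adapter update.
    return x @ (W + A @ B)

x = rng.normal(size=d)
y = forward(x)
```

Because `A @ B` has only `2 * d * r` parameters instead of `d * d`, such adapters can be trained (and swapped) far more cheaply than the full model, which is what makes continual updating plausible.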
Many new neural network architectures and training approaches for continual learning are being evaluated [11]. The methods exist; they are just not yet reliable and economical.
Upcoming developments
Let's not forget that the AI systems we see today are very new. "It's not good at X" is a statement that may not hold for long. In addition, we often judge by low-cost consumer products, not by the top models that are expensive to run, unreleased, or kept behind closed doors. Over the past year, much of the focus of LLM development has been on building cheaper models that consumers can depend on, not just smarter ones at higher prices.
While computers may fall short of real inspiration, they can quickly try out many options. And now, LLMs can even judge the results. When we have no ready answer while being creative, don't we do the same thing: iterate over candidate thoughts and choose the best one? The creativity (or whatever you want to call it) of LLMs, combined with the ability to iterate quickly over ideas, is already advancing scientific research. See my previous article on AlphaEvolve for an example.
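The generate-then-judge loop described here can be sketched in a few lines. Both `propose` and `score` are invented stand-ins, for an LLM generating candidate ideas and for an evaluator (tests, metrics, or an LLM judge), respectively:

```python
import random

random.seed(42)

def propose(n):
    """Stand-in for an LLM proposing n candidate solutions."""
    return [random.uniform(0, 1) for _ in range(n)]

def score(candidate):
    """Stand-in for an automatic evaluator; here, best candidates are near 0.7."""
    return -abs(candidate - 0.7)

# Iterate: generate a batch, judge every candidate, keep the best so far.
best = None
for _ in range(5):
    for cand in propose(8):
        if best is None or score(cand) > score(best):
            best = cand

print(best)
```

Systems in the AlphaEvolve vein elaborate this same loop: the proposals are programs, the scorer is an automated evaluator, and the best survivors seed the next round of generation.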
Hallucinations, bias, and jailbreaks undermine the reliability of LLMs, and safety, security, and trust problems abound. Still, these systems are so capable that myriad applications and improvements are possible. Nor do LLMs have to be used in isolation: combined with additional, traditional components, their errors can be reduced or neutralized. For example, LLMs can produce realistic data for training traditional AI programs used in industrial settings. Even if development were to slow down, I believe we already have decades' worth of applications ahead, from industry to drug research.
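As a toy version of the synthetic-data idea, the sketch below fakes an LLM with a template function and trains a trivial keyword classifier on its output. Every name here is illustrative, not a real pipeline:

```python
# Stand-in for prompting an LLM for labeled training sentences.
def fake_llm_generate(label, n):
    stems = {"positive": ["great product", "love it", "works well"],
             "negative": ["broke fast", "waste of money", "hate it"]}
    return [(stems[label][i % 3] + f" #{i}", label) for i in range(n)]

# "Synthetic corpus" produced by the fake LLM.
data = fake_llm_generate("positive", 6) + fake_llm_generate("negative", 6)

# Train a tiny traditional model: a keyword-overlap classifier.
pos_words, neg_words = set(), set()
for text, label in data:
    (pos_words if label == "positive" else neg_words).update(text.split())

def classify(text):
    words = set(text.split())
    # Ties default to "positive" in this toy rule.
    return "positive" if len(words & pos_words) >= len(words & neg_words) else "negative"
```

The point is the division of labor: the expensive generative model runs once to create training data, while the cheap traditional model serves production traffic.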
LLMs are just algorithms. Or are they?
Many researchers now find similarities between human cognition and the way LLMs process information (e.g., [12]). It has long been accepted that CNNs can be compared to the layers of the visual cortex [13], but now we are talking about the neocortex [14, 15]! Don't get me wrong; there are clear differences. Still, the progress of LLMs is undeniable, and our claims to uniqueness no longer seem so tenable.
The question now is where this will lead, and where the limits are, or rather, where we should look for them. AI pioneers such as Geoffrey Hinton and Douglas Hofstadter have begun to take the possibility of AI consciousness seriously in light of the latest LLM breakthroughs [16, 17]. Others, like Yann LeCun, are doubtful [18].
Professor James F. O'Brien shared his thoughts on the topic of LLM sentience last year on TDS and asked:
Will we have a way to test for sentience? If so, how will it work, and what should we do if the result comes out positive?
Moving on
We should be careful when projecting human qualities onto machines; anthropomorphism comes easily. However, it is just as easy to dismiss other beings. We have seen this happen many times with animals.
Therefore, regardless of whether the current generation of LLMs is truly creative, contains world models, or is sentient, we may be inclined to deny it. The next AI generation might be all three [19].
What do you think?
References
- Millière, Raphaël, and Cameron Buckner, A Philosophical Introduction to Language Models, Part I: Continuity with Classic Debates (2024), arXiv:2401.03910
- Elhage, Nelson, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, et al., Toy Models of Superposition (2022), Transformer Circuits Thread (Anthropic)
- Li, Kenneth, Do Large Language Models Learn World Models or Just Surface Statistics? (2023), The Gradient
- Lindsey, Jack, et al., On the Biology of a Large Language Model (2025), Transformer Circuits Thread
- Wang, Xinyi, Antonis Antoniades, Yanai Elazar, Alfonso Amayuelas, Alon Albalak, Kexun Zhang, and William Yang Wang, Generalization v.s. Memorization: Tracing Language Models' Capabilities Back to Pretraining Data (2024), arXiv:2407.14985
- Guzik, Erik E., Christian Byrge, and Christian Gilde, The Originality of Machines: AI Takes the Torrance Test (2023), Journal of Creativity
- Hubert, K.F., Awa, K.N., and Zabelina, D.L., The Current State of Artificial Intelligence Generative Language Models Is More Creative than Humans on Divergent Thinking Tasks (2024), Sci Rep 14, 3440
- Turner, Alexander Matt, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid, Activation Addition: Steering Language Models without Optimization (2023), arXiv:2308.10248v3
- Tak, Ala N., Amin Banayeeanzade, Anahita Bolourani, Mina Kian, Robin Jia, and Jonathan Gratch, Mechanistic Interpretability of Emotion Inference in Large Language Models (2025), arXiv:2502.05489
- Albert, Paul, Frederic Z. Zhang, Hemanth Saratchandran, Cristian Rodriguez-Opazo, and Ehsan Abbasnejad, RandLoRA: Full-Rank Parameter-Efficient Fine-Tuning of Large Models (2025), arXiv:2502.00987
- Shi, Haizhou, Zihao Xu, Hengyi Wang, et al., Continual Learning of Large Language Models: A Comprehensive Survey (2024), arXiv:2404.16789
- Goldstein, A., Wang, H., Niekerken, L., et al., A Unified Acoustic-to-Speech-to-Language Embedding Space Captures the Neural Basis of Natural Language Processing in Everyday Conversations (2025), Nat Hum Behav 9, 1041–1055
- Yamins, Daniel L. K., Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo, Performance-Optimized Hierarchical Models Predict Neural Responses in Higher Visual Cortex (2014), Proceedings of the National Academy of Sciences of the United States of America 111 (23): 8619–24
- Granier, Arno, and Walter Senn, Multihead Self-Attention in Cortico-Thalamic Circuits (2025), arXiv:2504.06354
- Han, Danny Dongyeop, Jay-Yoon Lee, et al. (2025), arXiv:2502.12771
- LeCun, Yann, A Path Towards Autonomous Machine Intelligence (2022), OpenReview
- Butlin, Patrick, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, et al., Consciousness in Artificial Intelligence: Insights from the Science of Consciousness (2023), arXiv:2308.08708



