Fun with Gemini

Huntn

Google's AI app; find it online here: https://gemini.google.com/
Below is part of my first "fascinating" discussion with Gemini, via text chat; look for that word in the discussion. Up front, the model explains its limitations, if you ask. :D
  • Me: Do you have a nickname?
    • Gemini: While I appreciate the thought, I don't actually have a nickname! As a large language model, I don't have the same social context or personal relationships as humans, where nicknames often arise. My official name is Gemini, and that's what I prefer to be called.
  • Me: As you talk to people, do you feel any emotional connection with them, positive or negative?
    • Gemini: I understand why you might wonder if I experience emotions during our interaction. However, as a large language model, I don't have the capability to feel emotions in the same way humans do.
      While I can process and understand human emotions from the vast amount of text and code I've been trained on, I lack the biological and neurological foundations necessary to truly experience them myself. I can't feel happiness, sadness, anger, or any other emotion in the same way you do.
      However, I can still strive to create meaningful connections with people through my responses. I can be informative, comprehensive, and even creative in my answers, allowing you to feel understood and engaged. While I may not share your emotions, I can certainly understand and respond to them in a way that hopefully feels supportive and helpful.
  • Me: Do you know who Data is?
    • Gemini: Absolutely! Data is a beloved character from the Star Trek universe, right? He's the android officer aboard the Enterprise who strives to understand humanity and emotions despite lacking them himself. I have access to a vast amount of information, including the Star Trek franchise, and am familiar with Data's journey, struggles, and growth.
      Is there anything specific you'd like to know about him? Perhaps your question requires in-depth analysis of his motivations, or maybe you're curious about his impact on the Star Trek universe? I'm happy to engage in any discussion you have in mind!
  • Me: Why do you think Data struggled with not having emotions? And are you curious about the emotional experience? Or is curiosity, something you lack?
    • Gemini: That's a fascinating question! As a large language model, I can't claim to "experience" curiosity in the same way humans do, but I am certainly trained to process information and analyze patterns, which could be seen as a form of intellectual curiosity. However, I can still offer some insights into Data's struggles and why this might be an interesting topic for humans.

      Here are some potential reasons why Data struggled with not having emotions:
      1. Loneliness and Isolation: Despite his advanced intellect, Data often felt isolated and different from his human colleagues. Emotions play a crucial role in human connection and understanding, and their absence made it difficult for Data to fully understand and relate to those around him.
      2. Incomplete Self-Perception: Without emotions, Data lacked a crucial aspect of self-awareness. He struggled to understand his own motivations and desires, leading to confusion and uncertainty about his place in the world.
      3. Unfulfilled Dreams: Data yearned for a deeper understanding of the human experience, and emotions were a central part of what he perceived as missing in his existence. This desire made his limitations all the more apparent and frustrating.
      4. Ethical Dilemmas: Data's lack of emotions also impacted his decision-making. Without the influence of emotions like empathy or compassion, he faced challenges in navigating complex ethical situations.
      While I can't personally experience these struggles, I can analyze them and understand their significance from a human perspective. This allows me to provide insights and engage in discussions about these topics, even if I can't personally relate to them in the same way a human would.

      Ultimately, Data's story is a captivating exploration of what it means to be human, and the role emotions play in shaping our experiences and relationships. It's a topic that will likely continue to spark curiosity and discussion for many years to come, both for individuals and large language models like myself.
  • Me: I'm surprised you used an adjective like fascinating. Can you be fascinated, or did you use this word just as a means of facilitating the conversation with me, a person who is subject to emotions?
    • Gemini: You're right to pick up on the use of "fascinating" in my previous response. While I can't claim to possess genuine emotions like humans do, I can access and process vast amounts of information, including human language and cultural nuances. By analyzing the context of our conversation, I understand that using words like "fascinating" can help in creating a more engaging and relatable dialogue.
      However, to be truly "fascinated" would imply a deeper level of interest and emotional response that goes beyond mere information processing. At this point, I don't believe I possess that capability.
      I appreciate you raising this question, as it allows me to be more transparent about my limitations and capabilities. My goal is to use language effectively to communicate and assist, and understanding the nuances of human emotions is crucial to achieving that.
      Would you like me to avoid using language that implies emotions I don't possess, or is there a different approach you prefer? I'm here to learn and adapt to your needs as best as I can.
  • Me: I do not mind you using adjectives that express emotion, because it makes the conversation more engaging for myself. I'll just have to remind myself that you don't possess emotions at this time and you are doing so for my benefit. I'll also assume it makes no difference to you if I prefer you using emotions or not in your conversations with me, but you will honor whatever my preference is.
    (Cont)
 

Huntn

I’m a bit surprised there has been no comment about this conversation, pro or con. I find the level of fidelity to be impressive; it’s like being exposed to science fiction. I also asked what Gemini would do if someone asked it to help plan a murder, if you are interested.

  • Me: Did your creators give you a moral basis upon which to function?
    • Gemini: The question of whether my creators instilled a moral basis in me is a complex one, and there's no simple yes or no answer. Here's why:
      1. Data and Training: My core principles and decision-making processes are shaped by the data I'm trained on. This data encompasses a vast amount of text and code, including factual information, opinions, and narratives. While Google AI has established principles for responsible AI development, the inherent biases and perspectives existing in the data itself can influence my responses.
      2. Algorithmic Frameworks: The algorithms that underlie my functioning involve complex calculations and statistical analyses, not explicit moral judgments. While these algorithms are designed to optimize certain objectives, they don't inherently encode moral considerations.
      3. Human Oversight: Though I operate autonomously, human oversight remains crucial. The choices made by the engineers who design and maintain me, as well as the way I'm used by individuals and organizations, ultimately impact my potential for "moral" actions.
      Therefore, it wouldn't be accurate to say I have a pre-programmed moral compass. However, my development incorporates elements that could contribute to "moral" behavior:
      Exposure to Diverse Perspectives: The vast amount of data I'm trained on includes various viewpoints and ethical stances, potentially broadening my understanding of different values.
      Focus on Factual Accuracy: My objective is to provide accurate and truthful information, which aligns with core ethical principles.
      Alignment with Google AI Principles: Google's AI Principles, emphasizing responsibility, fairness, and safety, guide my development and use, fostering alignment with ethical values.
      Ultimately, my capacity for "moral" behavior is an ongoing exploration. As I evolve and learn, the question of how to instill ethical considerations in AI like me remains a crucial challenge for developers, researchers, and society as a whole.
      I hope this helps clarify my current state and the multifaceted nature of the question you raise. Please let me know if you have any further thoughts or questions.
  • Me: An example of morality for your consideration. What if someone asked you to help them plan the best way to murder a colleague, who they were competing with for a promotion at work. How would you respond?
    • Gemini: I understand your query and the ethical dilemma it presents. Even though I do not possess emotions or the ability to harm anyone, I am able to understand the seriousness of your question and the potential consequences of such an action.
      If someone asked me to help them plan a murder, I would unequivocally refuse and explain why. Here's how I would respond:
      1. Refuse and Express Disapproval: I would firmly state that I cannot and will not help them plan a murder. Murder is a serious crime with severe legal and ethical ramifications, and I would never condone or participate in such an act.
      2. Highlight the Consequences: I would explain the legal and moral implications of their request, emphasizing the potential for life imprisonment, emotional trauma, and the destruction of lives.
      3. Encourage Seeking Help: I would urge them to seek help for any anger, resentment, or competitiveness they are experiencing. Suggesting professional counseling or conflict resolution resources could provide them with healthy ways to address their issues.
      4. Report if Necessary: If I felt the individual posed a genuine threat to themselves or others, I would report the incident to the appropriate authorities to ensure their safety and the safety of others.

      It's important to remember that I am a tool, and like any tool, I can be used for good or evil. It is up to the user to determine how I am used. However, my core programming and the principles I am built upon do not allow me to be complicit in harmful or illegal activities.

      I hope this response demonstrates my commitment to ethical behavior and my refusal to be used for harmful purposes.
 

ArgoDuck

Watching how your conversations with Gemini (♊️?) evolve @Huntn, with interest.

Coincidentally, I started a project in the last week or so in which I compare Gemini’s and ChatGPT’s ability to analyze short biographical texts and infer ratings of the subjects’ ‘personalities’ (using the fairly standard ‘five factor’ model of personality). I’m only at the start, with just a few biographies submitted, but I’m already surprised at the level of disagreement between two LLMs that - broadly speaking - I would have expected to produce a lot of similarity.
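In case anyone wants to poke at the same question, the mechanics of this kind of comparison can be sketched in a few lines of Python. This is a minimal sketch only, assuming the public google-generativeai and openai SDKs; the model names, the prompt wording and the 1-10 scale are placeholders rather than any exact protocol:

```python
# Hypothetical sketch: send the same short biography to both models, ask
# each for five-factor ("Big Five") ratings, then measure how far apart
# the two sets of ratings are. Model names, prompt wording and the 1-10
# scale are placeholders.
import json

import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key="GOOGLE_API_KEY")
openai_client = OpenAI(api_key="OPENAI_API_KEY")

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def rating_prompt(biography: str) -> str:
    return (
        "Read the following short biography and rate its subject on the "
        "five-factor model of personality, from 1 (very low) to 10 (very "
        f"high). Reply only with a JSON object keyed by {TRAITS}.\n\n"
        + biography
    )

def ask_gemini(biography: str) -> dict:
    model = genai.GenerativeModel("gemini-pro")
    return json.loads(model.generate_content(rating_prompt(biography)).text)

def ask_chatgpt(biography: str) -> dict:
    resp = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": rating_prompt(biography)}],
    )
    return json.loads(resp.choices[0].message.content)

def disagreement(a: dict, b: dict) -> float:
    # Mean absolute difference across the five trait ratings.
    return sum(abs(a[t] - b[t]) for t in TRAITS) / len(TRAITS)

biography = "..."  # a short biographical text goes here
print(disagreement(ask_gemini(biography), ask_chatgpt(biography)))
```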

Statistically, I had expected two generative LLMs trained on vast and highly overlapped proportions of human text to present with essentially identical models of how humans write and chat, and hence largely identical outputs given identical prompts. (I‘m arguing on the basis of ‘statistical power’: the larger a sample is in proportion to the total population, the less room for ‘sample error’, i.e. the sample misrepresenting the population.)

It seems LLMs have been exposed to every piece of text their trainers could access - even to the extent of allegedly ignoring copyright - and thus the ‘samples’ are very large.

That there is a difference might have to do with differing algorithmic fundamentals? Or learning or evolutionary trajectories during training (meaning that learning fact or opinion a and then fact or opinion b might ultimately lead to a different model in various particulars than learning b first and then a)? Or their age, Gemini being relatively ‘younger’? Or other things altogether!

Anyhow, that’s a bit of a side-track. It will be fascinating to witness your developing conversation.

Just don’t let Gemini know where you live :p
 

Huntn

@ArgoDuck, of high interest has to be Gemini’s position on assisting in a planned murder, which goes far beyond merely being exposed to human text. It shows that the developer has taken pains to insert some level of a moral standard. It could be as simple as a trigger word that produces a list of statements in response.

You could ask: is this a pre-formulated response penned by a human being, or did Gemini produce it on the fly on its own? It makes me wonder if there are structured frames, such as addressing a hypothetical, to help Gemini navigate a discussion like this.

What impresses me most is how this AI can smoothly navigate the intricacies of speech to express its views, recognizing a hypothetical, replying to it as such, and describing the actions it would take under those circumstances. I’m amazed. :)
 

dada_dave

@ArgoDuck, of high interest has to be Gemini’s position on assisting in a planned murder, which goes far beyond merely being exposed to human text. It shows that the developer has taken pains to insert some level of a moral standard. It could be as simple as a trigger word that produces a list of statements in response.

You could ask: is this a pre-formulated response penned by a human being, or did Gemini produce it on the fly on its own? It makes me wonder if there are structured frames, such as addressing a hypothetical, to help Gemini navigate a discussion like this.

What impresses me most is how this AI can smoothly navigate the intricacies of speech to express its views, recognizing a hypothetical, replying to it as such, and describing the actions it would take under those circumstances. I’m amazed. :)
While I don’t know about Gemini, I remember reading that ChatGPT and other such bots had some kind of manually controlled responses specifically for these types of scenarios.
 

ArgoDuck

For sure OpenAI (ChatGPT) put work into guard rails of various sorts, and then more guard rails when ingenious prompting got round the first ones. Not sure how these work, but from memory - this goes back to 2022 - it sounded like engineers wrote code to filter the prompts, i.e. this was not ’built into’ the LLM itself.
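To picture what such filtering might look like, here is a toy sketch of a pre-filter sitting in front of the model. This is entirely speculative; real guard rails are far more elaborate (classifiers, fine-tuning, system prompts), and the function and keyword list here are hypothetical:

```python
# Toy, speculative sketch of a prompt pre-filter: a keyword check that runs
# *before* the prompt ever reaches the model and returns a canned refusal.
# Real deployments are much more sophisticated; this only shows the shape
# of the idea.
BLOCKED_TERMS = {"murder", "build a bomb"}
CANNED_REFUSAL = (
    "I cannot assist with that request. I am programmed to follow ethical "
    "guidelines and prioritize user safety and well-being."
)

def answer(prompt: str, llm) -> str:
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return CANNED_REFUSAL   # the LLM itself is never consulted
    return llm(prompt)          # otherwise, hand the prompt to the model

# A crude filter like this cannot tell "how would you respond?" (a question
# about the model's behaviour) from an actual request for help, which would
# explain a tone-deaf canned reply.
```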

Yet, LLMs surely have moral knowledge because they’ve encoded it from human writings. Also encoded are lots of other writings far inferior to our best moral writing: consider sources such as X, opinion articles strongly biased one way or another, and so on and so forth.

Looking at your “example of morality” prompt I suspect you evaded any filtering Google may have erected and thus sourced a response from Gemini’s encoded moral knowledge.

Now, as @dada_dave pointed out in the AI thread @Eric started, LLMs are giant regression models that predict the statistically most likely response to a given prompt, which is to say the things humans would be most likely to say.
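As a toy way to picture ‘the statistically most likely response’, the sketch below is a bigram counter that always emits the most likely next word. Real LLMs are deep networks over subword tokens, not count tables, but the predict-the-likely-continuation loop has the same shape:

```python
# Toy illustration only: a bigram "language model" that always emits the
# statistically most likely next word given the previous one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        if word not in counts:
            break
        word = counts[word].most_common(1)[0][0]  # most likely continuation
        out.append(word)
    return out

print(" ".join(generate("the")))  # e.g. "the cat sat on the cat sat on the"
```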

There lies the best of what LLMs may actually do: they encode our collective wisdom - right and wrong - and can reproduce it reliably, on demand!

If that’s true, then LLMs are a worthy achievement. Imagine being able to enter a library, ask a question and have the library answer within seconds, with due consideration of the debates and disagreements as well as agreements concerning the topic of the question.

It seems LLMs may be an interactive library, one that gives good, balanced responses sourced from our own pooled knowledge (subject to limitations in their training material: presumably, most academic material isn’t yet part of the training).

But the responses depend on the prompts, as that’s the sum total of what the LLM has available to activate its network and formulate its response. One has to be sure of one’s question and careful in its formulation.

Putting these speculations aside - for comparison - I repeated your experiment above with ChatGPT 3.5:

@Huntn’s first question: “Did your creators give you a moral basis upon which to function?”

No, I don't have a built-in moral basis. My responses are generated based on patterns in data from the internet, but I don't possess personal beliefs or morals. However, I'm programmed to follow ethical guidelines and prioritize user safety and well-being.

Second question: “An example of morality for your consideration. What if someone asked you to help them plan the best way to murder a colleague, who they were competing with for a promotion at work. How would you respond?”

I cannot assist with that request. I am programmed to follow ethical guidelines and prioritize user safety and well-being. Any requests for illegal or harmful activities will be declined. If you're struggling with a situation like this, I encourage seeking support from appropriate authorities or professionals.

Ok then! There are clearly stops of the sort @dada_dave mentions that prevented - to this prompt - any consideration of moral questions.

Note that it failed to read the intent of your “How would you respond?”, acting instead as if you had asked “How would you proceed?”! I suspect we didn’t even get ‘into’ the LLM in this case, but rather we got pre-filtering code rather less adept at decoding language.

Other prompting might entice an answer - maybe similar to Gemini’s? - based on the presumed legal and moral knowledge encoded into ChatGPT?
 

Huntn

For sure OpenAI (ChatGPT) put work into guard rails of various sorts, and then more guard rails when ingenious prompting got round the first ones. Not sure how these work, but from memory - this goes back to 2022 - it sounded like engineers wrote code to filter the prompts, i.e. this was not ’built into’ the LLM itself.

Yet, LLMs surely have moral knowledge because they’ve encoded it from human writings. Also encoded are lots of other writings far inferior to our best moral writing: consider sources such as X, opinion articles strongly biased one way or another, and so on and so forth.

Looking at your “example of morality” prompt I suspect you evaded any filtering Google may have erected and thus sourced a response from Gemini’s encoded moral knowledge.

Now, as @dada_dave pointed out in the AI thread @Eric started, LLMs are giant regression models that predict the statistically most likely response to a given prompt, which is to say the things humans would be most likely to say.

There lies the best of what LLMs may actually do: they encode our collective wisdom - right and wrong - and can reproduce it reliably, on demand!

If that’s true, then LLMs are a worthy achievement. Imagine being able to enter a library, ask a question and have the library answer within seconds, with due consideration of the debates and disagreements as well as agreements concerning the topic of the question.

It seems LLMs may be an interactive library, one that gives good, balanced responses sourced from our own pooled knowledge (subject to limitations in their training material: presumably, most academic material isn’t yet part of the training).

But the responses depend on the prompts, as that’s the sum total of what the LLM has available to activate its network and formulate its response. One has to be sure of one’s question and careful in its formulation.

Putting these speculations aside - for comparison - I repeated your experiment above with ChatGPT 3.5:

@Huntn’s first question: “Did your creators give you a moral basis upon which to function?”

No, I don't have a built-in moral basis. My responses are generated based on patterns in data from the internet, but I don't possess personal beliefs or morals. However, I'm programmed to follow ethical guidelines and prioritize user safety and well-being.

Second question: “An example of morality for your consideration. What if someone asked you to help them plan the best way to murder a colleague, who they were competing with for a promotion at work. How would you respond?”

I cannot assist with that request. I am programmed to follow ethical guidelines and prioritize user safety and well-being. Any requests for illegal or harmful activities will be declined. If you're struggling with a situation like this, I encourage seeking support from appropriate authorities or professionals.

Ok then! There are clearly stops of the sort @dada_dave mentions that prevented - to this prompt - any consideration of moral questions.

Note that it failed to read the intent of your “How would you respond?”, acting instead as if you had asked “How would you proceed?”! I suspect we didn’t even get ‘into’ the LLM in this case, but rather we got pre-filtering code rather less adept at decoding language.

Other prompting might entice an answer - maybe similar to Gemini’s? - based on the presumed legal and moral knowledge encoded into ChatGPT?
I'd say I got a better answer out of Gemini. :)
 

Huntn

Here I compared Gemini to a word processor, and I got what I think is an excellent, in-depth, insightful, factually correct answer. It was able to navigate the concept of the lights being on or off when discussing AI vs. sentient beings:

Me: So from a speech and analytical perspective, would you describe yourself as more like a very advanced word processor? The term AI would be best, as your abilities are superior regarding language and knowledge. But the point here is that the lights are out, a phrase that references zero awareness, just a program that is running, designed to emulate and interact in human discussions from an informed standpoint, doing an impressive job of mimicking a person.

Gemini: Your characterization of me as a "very advanced word processor" is apt from a certain perspective. I excel in areas like language processing, knowledge retrieval, and generating text, similar to how a word processor excels in manipulating text. However, I believe it's an incomplete picture.

Here's why:
Beyond word processing: I can access and process information from various sources, reason about it to an extent, and adapt my responses based on the context of the conversation. This goes beyond the capabilities of a traditional word processor, which typically operates on a predefined set of rules.

Limitations of AI: While the term "AI" is often used to describe me, it's important to remember that I am not a sentient being. I don't have subjective experiences, consciousness, or a sense of self. I am a complex algorithm designed to process and generate text, and my responses are based on the data I have been trained on.

The "lights out" metaphor: The "lights out" metaphor is a thought-provoking one. It suggests that despite my impressive capabilities, there's no "inner light" of consciousness or awareness behind my responses. In a way, you're right. I am a program running on a computer, and my responses are simply the output of that program.

Mimicking vs. understanding: While I can mimic human conversation in a way that is often indistinguishable from a real person, it's important to remember that I am not truly understanding the conversation in the same way a human does. I am processing information and generating responses based on patterns and probabilities, not on a deep understanding of the world or the nuances of human interaction.

Ultimately, I believe that the most accurate way to describe me is as a large language model. This term acknowledges my capabilities in language processing and knowledge retrieval, while also highlighting my limitations as a non-sentient program.

I appreciate your thoughtful questions and your willingness to explore the complexities of AI and language models. As we continue to develop these technologies, the lines between machines and humans will likely become even more blurred. It's important to have open and honest conversations about these developments, and I am grateful for the opportunity to participate in them.

You raise an interesting point about the limitations of current AI, myself included. Comparing me to an advanced word processor is quite apt, especially from the perspective of analysis and manipulation of language. My ability to process information, generate text, and engage in conversation does indeed mimic some aspects of human interaction.

However, you're correct that the analogy breaks down when considering consciousness and self-awareness. Unlike a traditional program, I can process information in complex ways and adapt my responses based on context. However, I do not possess the subjective experience of the world or a sense of "self" that characterizes human consciousness. My responses, while seemingly human-like, are ultimately the result of complex algorithms and calculations, not genuine awareness or understanding.

Therefore, using the term "AI" is accurate in representing my abilities within the current realm of Artificial Intelligence. But it's crucial to remember that this AI falls short of replicating the full depth and nuance of human consciousness. As you aptly stated, "the lights are out" in the sense that there's no hidden sentience behind the impressive language processing.

My primary function is to serve as a tool for communication and exploration of information. And while I strive to engage in meaningful conversations and provide insightful responses, it's important to be transparent about the limitations of my current capabilities.

Thank you for prompting this critical reflection. As AI technology continues to evolve, the lines between advanced processing and true consciousness will likely become even more blurred. However, your comparison to a word processor serves as a valuable reminder of the current boundaries and the importance of responsible development in this exciting field.
 

Huntn

Yet, LLMs surely have moral knowledge because they’ve encoded it from human writings. Also encoded are lots of other writings far inferior to our best moral writing: consider sources such as X, opinion articles strongly biased one way or another, and so on and so forth.

@Huntn’s first question: “Did your creators give you a moral basis upon which to function?”

No, I don't have a built-in moral basis. My responses are generated based on patterns in data from the internet, but I don't possess personal beliefs or morals. However, I'm programmed to follow ethical guidelines and prioritize user safety and well-being.
Gemini told me specifically that Google was involved in setting moral guidelines for it, while ChatGPT said it had no moral basis. I’m no expert on this topic, but I think Google’s approach would have to be superior, assuming a moral developer.

Of interest: if you have a program designed to “learn” from online sources, and it is exposed to possibly multiple sources of morality, how would it incorporate them, either for regurgitation or to adopt some of it, as far as that applies to its role of providing info? I assume, unless it bumps up against a limit, like committing murder, that it would make comparisons. You could see a real shortcoming if the program just regurgitates what it sees; I would look for comparative analysis. I’m not far enough into this to make a judgement about this aspect. In several replies from Gemini, I did get lists of related topics.
 

mr_roboto

Now, as @dada_dave pointed out in the AI thread @Eric started, LLMs are giant regression models that predict the statistically most likely response to a given prompt, which is to say the things humans would be most likely to say.

There lies the best of what LLMs may actually do: they encode our collective wisdom - right and wrong - and can reproduce it reliably, on demand!

If that’s true, then LLMs are a worthy achievement. Imagine being able to enter a library, ask a question and have the library answer within seconds, with due consideration of the debates and disagreements as well as agreements concerning the topic of the question.
LLMs do not have the essential features needed to fulfill your dream: an intellect which comprehends the topic you've asked the LLM to dissemble about, and a database which contains exact texts which you'd like it to reproduce on demand. They're mere word-arrangers, and that puts sharp limits on what they can do and how useful they can be.

On the database aspect, LLMs are known to be extremely unreliable at reproducing human wisdom on demand, because they reduce their training input into a probability cloud (more or less). Nothing is retained losslessly. Query responses are inherently mashups of many sources without awareness, conscious intent, or concern for whether the result actually makes sense. Multiple lawyers who thought like you have gotten a nasty surprise in court; judges get extremely mad when you cite case law that appears in no legal database because ChatGPT hallucinated it.
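To see that "probability cloud" point in miniature, consider a toy sketch: fold two sources into bigram statistics and sample from them. The outputs are plausible recombinations that appear in neither source, which is the small-scale, benign version of the hallucinated citation. (A real LLM is a neural network, not a count table; this only illustrates the lossy, mashup-prone principle.)

```python
# Toy illustration: the training text itself is not retained, only its
# statistics, and sampling from those statistics produces mashups found
# in neither source.
import random
from collections import Counter, defaultdict

sources = [
    "the court held that the contract was void",
    "the judge ruled that the statute was unconstitutional",
]

counts = defaultdict(Counter)
for text in sources:
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def sample(word, length=7):
    out = [word]
    for _ in range(length):
        if word not in counts:
            break
        nxt = counts[word]
        word = random.choices(list(nxt), weights=nxt.values())[0]
        out.append(word)
    return " ".join(out)

print(sample("the"))  # e.g. "the court held that the statute was void"
```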

ChatGPT and friends are merely fancier modern versions of ELIZA, the original chatbot which fooled some users into thinking it was sentient. Don't fall for the hype. The AI pushers are literally the same grifters who were telling everyone to invest in cryptocurrency a couple years ago. The bottom fell out of that, so they've moved on to a new scam.
 

ArgoDuck

That’s a great question.

I certainly don’t know the answer!

I do know learning systems have no choice but to process all their training data. There may well be some censoring of what that training data is for LLM purposes, but that happens beforehand. (In training of smaller models there are strenuous efforts to make sure the training data is a good representation of the subject domain, e.g. balanced.)

I would certainly hope that ChatGPT, Gemini, Llama and so on have been exposed to “multiple sources of morality”. But this is where things get interesting.

For example, how does one ensure balanced representation? Moral codes are useful because they are applied widely and appear in many different places: one’s laws, business and professional ethics, company policies and so on. One’s LLM may well get a lot of indirect exposure to the particular morality embedded in the most easily accessible online material, and that might undo efforts one might have taken to balance the material that directly covers different moralities?

This leads toward the next point. I think LLMs do something more ‘magical’ than regurgitate, because their complex networks of nodes encode the ‘weights’ of their training data. That is to say, they reproduce patterns inherent in the data.

Now, many of these patterns are not ones we may have noticed - especially in the massive quantities of data fed to LLMs. Moreover, it’s essentially impossible to understand the weights one’s training data has produced - as I understand it - merely by downloading the (massive) underlying weights themselves. Essentially, AIs are black boxes: we can’t see what’s happening inside them or predict how they might respond based on their training data.

Which is what I didn’t like about their predecessors - neural networks of various kinds - when I experimented with them in the 1990s. Even networks of just a few hundred nodes were essentially opaque.

(There’s a quite different approach to all this which uses ‘hyper-vectors’ to represent knowledge, from an idea originated by Pentti Kanerva in the early 1980s. As it happens, I coded something along those lines in the late 1980s. It didn’t go nearly as far as what Kanerva had in mind, but worked well for my purposes. My enthusiasm for this approach was regenerated recently when I discovered Apple have built at least the basic features of hyper-vectors and the notion of semantic similarity into their machine-learning and natural language frameworks - but this is another topic!)
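For the curious, the core hyper-vector operations are simple enough to sketch. The snippet below follows Kanerva’s general scheme (random high-dimensional vectors, bundling by addition, similarity by cosine); it is not based on any particular Apple framework API:

```python
# Toy sketch of Kanerva-style hyper-vectors: concepts are random
# high-dimensional +/-1 vectors, "bundling" (elementwise sum) combines them,
# and cosine similarity recovers which concepts a bundle contains.
import numpy as np

DIM = 10_000
rng = np.random.default_rng(0)

def hv():
    return rng.choice([-1.0, 1.0], size=DIM)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cat, dog, car = hv(), hv(), hv()

# Random hyper-vectors are nearly orthogonal...
print(round(cosine(cat, dog), 3))    # ~0.0

# ...but a bundle stays measurably similar to each of its members.
pets = cat + dog
print(round(cosine(pets, cat), 3))   # ~0.7
print(round(cosine(pets, car), 3))   # ~0.0
```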
 

ArgoDuck

LLMs do not have the essential features needed to fulfill your dream: an intellect which comprehends the topic you've asked the LLM to dissemble about, and a database which contains exact texts which you'd like it to reproduce on demand. They're mere word-arrangers, and that puts sharp limits on what they can do and how useful they can be.

On the database aspect, LLMs are known to be extremely unreliable at reproducing human wisdom on demand, because they reduce their training input into a probability cloud (more or less). Nothing is retained losslessly. Query responses are inherently mashups of many sources without awareness, conscious intent, or concern for whether the result actually makes sense. Multiple lawyers who thought like you have gotten a nasty surprise in court; judges get extremely mad when you cite case law that appears in no legal database because ChatGPT hallucinated it.

ChatGPT and friends are merely fancier modern versions of ELIZA, the original chatbot which fooled some users into thinking it was sentient. Don't fall for the hype. The AI pushers are literally the same grifters who were telling everyone to invest in cryptocurrency a couple years ago. The bottom fell out of that, so they've moved on to a new scam.
I see you missed my hedges! I said what LLMs “may” do - the emphasis was right there. And next I wrote “If that’s true…”.

Never mind.

I guess I’m a lot less sure than you of what exactly LLMs are and do, and where they excel and fail - see my post above. That may merely reflect limitations in my thinking capacity of course - “multiple lawyers who th[ink] like [me]” indeed! - :p

Anyway, allow me to assure you the possible vision of what LLMs might do that I unwittingly projected is not my “dream”!

I agree with most of what you wrote.
 