As I continue my foray into large language models (LLMs), exploring how they can be used and tuned for my areas of interest (materials science, physics, and chemistry), I have started contemplating how the rapid development of fine-tuned or use-case-specific LLMs by various groups and companies might eventually lead to an extraordinary comprehension of the human-constructed world. Currently, tools like ChatGPT and GPT-4 are capable of performing a wide range of intelligent tasks. However, in many instances, they don't quite meet our expectations or aren't able to produce exceptional results. For example, if you ask ChatGPT-4 to draw a car using scalable vector graphics, it will generate something that most would recognize as a car, complete with tires, trunk, and windows. Yet, it doesn't fine-tune the drawing the way a human would. Of course, you could pass this output to Stable Diffusion or another image-generative platform to get a better result, but I'm focusing on the text-, markup-, or code-generating LLMs here.
This brings me to my thoughts on domain-specific LLMs. I believe that eventually, we will have hundreds or thousands of highly intelligent domain-specific LLM systems. While this is undoubtedly exciting, what if we connect these systems so they can send queries to one another? Furthermore, what would happen if we invoke a consensus policy based on a human user's prompt? Let's consider a question containing a factoid (i.e., something taken as true by some but not verifiable), for example:
Based on our understanding of the universe, is the following statement true:
"There is said to be an omnipresent entity that seeded the Big Bang, which brought forth our existence. This entity is believed to remain hidden indefinitely."
If you rely on faith to describe our existence, you might consider this factoid nearly true. However, if you are a staunch atheist, this statement would be an unverifiable claim and of little value.
For the sake of this blog, let's assume that these highly capable domain-specific LLMs have no guardrails limiting how they might answer the question. If these LLMs capture the distribution of human beliefs and behaviors, each one might provide a different response that either finds the factoid compelling or purely speculative. But what would happen if we now have the LLMs query one another? Would they reach a consensus that aligns with the majority of humanity's beliefs, which might be in favor[1] of the factoid? I'm not certain, but it seems plausible that these hundreds or thousands of LLMs could eventually encompass nearly all of humanity's collective "wisdom" or abstract constructions of our behaviors and existence.
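To make the consensus-policy idea concrete, here is a minimal sketch of what such a majority vote across domain-specific LLMs might look like. The expert functions below are hypothetical stand-ins I made up for illustration; in a real system each would be an API call to a separately fine-tuned model.

```python
from collections import Counter
from typing import Callable

# Hypothetical stand-ins for domain-specific LLM endpoints.
# In practice, each would query a separately fine-tuned model
# and return its verdict on the prompt.
def physics_llm(prompt: str) -> str:
    return "speculative"

def theology_llm(prompt: str) -> str:
    return "compelling"

def philosophy_llm(prompt: str) -> str:
    return "compelling"

def consensus(prompt: str, experts: list[Callable[[str], str]]) -> str:
    """Send the prompt to every domain LLM and return the majority verdict."""
    verdicts = [ask(prompt) for ask in experts]
    winner, count = Counter(verdicts).most_common(1)[0]
    # Require a strict majority; otherwise report that the group is split.
    return winner if count > len(verdicts) / 2 else "no consensus"

print(consensus("Is the factoid true?",
                [physics_llm, theology_llm, philosophy_llm]))
```

This is of course the simplest possible policy; one could imagine weighting votes by each model's domain relevance, or letting the models see and debate one another's answers before voting.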
I'm not entirely sure if what I'm proposing makes sense, but it seems possible that there could come a time when all these capable and impressive LLM systems are interconnected and start exhibiting collective behavior that surprises us. Alternatively, this could merely lead to a series of frustrating dialogues that produce dead-end responses. I eagerly anticipate discovering what the future holds for these advanced LLMs and their potential impact on our understanding of the world.
Update
In the few days since writing this, I came across a few projects that seem to be going in the direction mentioned above, namely, Auto-GPT, AgentGPT, and BabyAGI. From a technical perspective, this is mind-blowing (🤯), but it is also pretty scary. It would appear we are on a precipitous path toward true artificial general intelligence and possibly superhuman capabilities. When I first glanced at the name BabyAGI, I thought of Baba Yaga, which elicits a sense of worry. Anyhow, it appears my thoughts above are indeed playing out.
I also wanted to add that my line of thinking above was purely technical, meaning I was only considering how more intelligent systems could emerge simply through group thinking[2]. From a more personal perspective, I do find these systems and the prospect of AGI super useful for enhancing my productivity. From an existential stance, however, I'm very worried about what these systems will do and can achieve no matter how hard humanity ponders and prepares. It could very well be that the amazing productivity and increase in freedom we humans gain from these systems ultimately come at the cost of our well-being or even our existence.
[1] A Pew Research survey asked about God and morality and found a median value of 45% across 34 countries in favor of the need for God; in countries like the US the figure was fairly high. S. Greenwood, "The Global God Divide," Pew Research Center's Global Attitudes Project, 2020. https://www.pewresearch.org/global/2020/07/20/the-global-god-divide/ (accessed April 13, 2023).
[2] Sunstein, Cass R., and Reid Hastie. "Making Dumb Groups Smarter," Harvard Business Review, 1 Dec. 2014. https://hbr.org/2014/12/making-dumb-groups-smarter (accessed April 17, 2023).