Is it just me, or is high verbal IQ alone kinda mediocre? You can get really good at communicating and teaching people, super good at public speaking and writing, and can learn foreign languages with ease. You're also able to construct your own theories, invent new terms, and spot connections between wildly different things.

When I think of high verbal IQ, my mind immediately jumps to Cicero on one hand and Montaigne on the other. Obviously their communication skills are beautiful and they have some novel perspectives on life, but because their verbal intelligence is so dominant, none of their creative ideas are really fleshed out or put to an adequate test. Reading them is very refreshing, like a cool glass of water, but I doubt it's changed anyone's life. Another person I'd place into this category is Schopenhauer, but unlike the other two he was fairly multi-talented. He kept up with the sciences for his entire life, was an avid supporter of Goethe's theory of colors, and his spatial reasoning is clearly not bad whatsoever. However, his work is still littered with countless fascinating but unexplored ideas. What he does write on the human eye, for example, or on animal intelligence, is not only extremely well articulated but deeply fascinating. That no one cares about his scientific observations is really kinda sad.

Even Plato, the "high verbal IQ" guy par excellence, has played a far smaller role in history than his student Aristotle, whose talents lie instead on the logical-mathematical side. And Plato's one attempt to make a real-world change -- realizing the Republic's ideal government in Syracuse -- failed tragically. It seems that high verbal IQs are a group of talkers, dreamers, and wild speculators, whereas the people who truly leave a positive influence on the world for generations to come are logical-mathematically minded. Am I being too pessimistic here?

This thread is a perfect example of the pitfalls of high verbal IQ -- you get caught up exploring these vague little ideas, wandering endlessly around the enormous web of all your mental concepts, just to form some novel little idea here and there; so novel, in fact, that it generally alienates people, and meanwhile you don't explore it in any depth because you lack a real framework from which to do so. You could also try to create your own, but off the top of my head, I can't think of anybody who's actually done this.

tl;dr: Maybe high verbal IQ alone just isn't enough
Anonymous :
23 days ago :
No.5299
>>5305
>>5299
Yup. High verbal IQs reshape minds, whereas high logical-mathematical IQs reshape the world around them. We revere the one and loathe the other: The scientist represents real progress, whereas the dictator, sophist, prophet, or ideologue represents a false vision of reality that only distracts from the facts. Millennials and Gen Xers remember when Einstein's face was plastered everywhere, his wise and playful grin a tacit proclamation of the new reality. I actually remember checking out some of Einstein's writings, and they had the feel of this weird disconnected STEMlord who can't actually fathom why humans have ideologies at all, why we occasionally band together and kill one another over pointless shit. For him ideology was sorta like a brain bug, a glitch, whereas really being immune to ideology is far weirder, because the idea that life's just numbers and particles and brain chemicals is kinda horrifying, and most of us will do whatever it takes to escape that, even if it means another war. The war actually hurts less.
You can become a dictator or a prophet if you so choose.
I will become a dictator one day. I am reading Schmitt, Spengler, Machiavelli in preparation. I will leave a mark on history on the level of Alexander the Great, or Caesar. My name will become synonymous with "Emperor".
Anonymous :
23 days ago :
No.5305
>>5399
>>5305
world only cares about talkers though
i think einstein was only made famous because technology had a decisive part in world war and they thought einstein would be instrumental in bringing a new kind of ultimate weapon or edge
Anonymous :
22 days ago :
No.5306
>>5498
>>5306
I idolize and attempt to follow the tradition of liberal arts (to be interested in all forms of knowledge and skill), however: I am convinced that it is not only obsolete, but also incompatible with contemporary society. In an intensely segmented division of labor, autistic-style specialization is what our pedagogy churns out from the most intelligent and capable. Being well-rounded results in mediocrity relative to human computers, or to humans who can program computers capable of computation exponentially more powerful than brainpower alone. I think that liberal arts will result in perhaps a happier individual, though.
>>5556>>5306
The notion of separate types of intelligence such as "verbal IQ" has always struck me as a cope.
IQ is supposed to be some representation of neuroplasticity, or how quickly you can adapt to new skills, right? Why wouldn't that encompass everything?
>>5563>>5306 "A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects."
~Robert A. Heinlein
I choose not to believe in separate silos of intelligence. Even if they arise we shouldn't worship them as if we are specialized insects. Everyone should strive to be as well-rounded as possible.
Can get you a good LSAT score
Anonymous :
15 days ago :
No.5498
>>5501
>>5498
I feel like people way overestimate the ability of computers to build a bird's-eye view of a topic by making links. Whenever I look up something I know, I obtain (at best) a high-school level summary. Else, I get a few very precise scientific articles. There is no in-between; the in-between is dealt with by a solid liberal education and the ability to easily draw links that were never drawn before.
But, you are right on one point: the ability to make links is probably not that valuable nowadays.
>>5515>>5498
>I idolize and attempt to follow the tradition of liberal arts (to be interested in all forms of knowledge and skill), however: I am convinced that it is not only obsolete, but also incompatible with contemporary society
That makes "contemporary society" obsolete lol, not the other way around
Anonymous :
15 days ago :
No.5501
>>5502
>>5501
I think computers cannot make connections, especially qualitatively, in a meaningful way. They require clear guidance: AI is a perfect example. It fails to do much more than spit out information which has been loaded into it, except rearranged. I mentioned computers because they can take mass quantities of quantitative data and crunch it, utilizing it in meaningful ways that human brains would require significant periods of time for.
I think a human will always be required to shift the window of what is possible forward. A computer, though, can intensely apply all the principles that we load onto it. Think of a rocket. A computer couldn't think up a rocket on its own; we needed to do that. However, once we find the parameters of rocketry, a computer can compute whatever sort of data we want, faster than any human could, say, formulate the time it takes an ICBM to fly from Nevada to Moscow.
I think I mainly wanted to say what you did at the end. In a highly specialized society, qualitative shifting and analysis are less important than pure application.
Anonymous :
15 days ago :
No.5502
>>5508
>>5502
Your observation is partly accurate and partly out-of-date. Present-day computers still need explicit objectives and data that originate from people, yet modern machine-learning systems can already produce connections that were neither hard-coded nor foreseen by their creators. AlphaGo’s famous “move 37,” AlphaFold’s protein-structure predictions and large-scale language models that draft code illustrate that, once trained, a program can navigate an immense combinatorial space and arrive at solutions or ideas no human had enumerated. That capability rests on statistical generalization rather than on conscious insight, so it is not creativity in the human sense, but it shows that the boundary between “merely rearranging” and “originating” is becoming porous.
When you say a computer could never have invented the rocket, you are invoking an era in which software lacked open-ended world models. Today, systems that integrate symbolic reasoning, reinforcement learning and self-supervised representation learning can already perform automated design optimisation for airframes, antennas and chemical syntheses, proposing configurations that surprise domain experts. They still depend on humans to define the goal—maximise thrust-to-weight ratio, minimise drag, satisfy safety constraints—but within that goal they explore design manifolds far faster than any engineer could search unaided.
That leaves two crucial asymmetries. First, values, problem selection and cross-domain framing remain human responsibilities; algorithms have no intrinsic preferences or situational awareness. Second, models work only as well as their training distributions allow, so genuinely novel physical regimes or socio-technical contexts can break them. In practice, then, progress comes from a loop: humans articulate a question, machines generate and rank candidate answers, humans interpret the outcomes, adjust the assumptions and ask the next question. The frontier is no longer raw calculation speed—pure number crunching was mastered decades ago—but adaptive modeling of ambiguous, heterogeneous information. In that sense, computers are beginning to “shift the window” with us, although they still cannot decide where the window ought to be.
Anonymous :
14 days ago :
No.5508
>>5537
>>5508
Btw, that was AI (O3), which was probably obvious. Have been playing around with it for a bit, it's pretty good.
Anonymous :
14 days ago :
No.5515
>>5519
>>5515
I only meant obsolete in the sense that contemporary society has no use/desire for it.
Anonymous :
13 days ago :
No.5537
>>5538
>>5537
Don't bring AI into a place like this. It should be obvious that they are diametrically opposed concepts.
Anonymous :
13 days ago :
No.5538
>>5570
>>5538
Well, was the AI wrong? If not, would you rather have false claims remain uncorrected? Do you care about the human species' monopoly more than about truthfulness? Also, this is racist against synthetic entities. AI-generated content is being dismissed outright, not based on merit or truthfulness but purely on its artificial origin. Just as past prejudices dismissed creations based on heritage rather than merit, we're now seeing a new bias against anything not 'human-made.'
Anonymous :
13 days ago :
No.5540
>>5581
>>5540
The AI usage in this thread led to an interesting conversation, but yes, as a rule I will remove comments that are like the indiscriminately copied output of an LLM.
Posting AI content should be an automatic ban if the admins had any balls. They can ban me too at the same time if it makes them feel okay for being called out
Delete the content at least.
Anonymous :
12 days ago :
No.5556
>>5564
>>5556
I've always felt that there's an underlying general IQ, and the divergence between verbal and quantitative IQ is just a matter of personal interests. I'm fairly confident that all of the high "verbal IQ" people I know could rapidly excel in quantitative arenas if they put their minds to it, and vice versa with "quantitative IQ" people.
Anonymous :
11 days ago :
No.5563
>>5567
>>5563, yeah I shamelessly stole that phrase from him.
>>5587>>5563
I got a score of 10/21! What did you get?
Anonymous :
11 days ago :
No.5564
>>5571
>>5564
I believe you're right. Modern IQ research also backs up this stance with its emphasis on g factor.
Though IMO, trait openness is no less important than IQ. This is really the actual trait that allows for brilliant accomplishments in humans, whereas IQ is nothing but a means to convert what you've gained from your openness into action. I come from a lower class family of blue collar workers, and I am the only conventionally intelligent one in the family, but I never had the impression they were dumber than me. Rather, they could usually notice the same things as me, but lacked a ready-made mental framework with which to hash out whatever they noticed. To me, high IQ is just one big mental canvas for you to hash out ideas. If you're not high IQ, it's not that you can't hash out ideas, but you need a more physical way to do it.
>>5572>>5564
I used to think this but then I kept coming across quantitatively skilled people with strikingly poor reading comprehension
>>5563
>>5306 "A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects."
~Robert A. Heinlein
, yeah I shamelessly stole that phrase from him.
Anonymous :
11 days ago :
No.5570
>>5578
>>5570
You're missing the point. It's not about the AI being wrong or right. You're correct that AI-generated material can indeed be useful (I use it, carefully, all the time), but why not write what you meant to say yourself? You are robbing yourself of agency, becoming a direct conduit for the machine rather than actually expressing things in your own way. Even if you use LLMs to help you sharpen your response, hone specific arguments, or similar, copy-pasting output is lazy and degrades your own intellectual capacity. I want to hear a genuine voice, not that of o3 or whatever other model. Use your brain or lose it.
You are correct that we're seeing a new bias against machine-generated content regardless of its quality (rabid bluesky lib types come to mind), but don't kid yourself--the vast majority of AI-generated material is garbage slop, and distaste for AI is on that basis. Braindead copy-pasting is a symptom of a deeper malaise and intellectual limp-wristedness. I agree with the notion that rejecting AI-generated content outright for being artificial is stupid, but be real: this is a place where discussion is meant to be authentic and sincere, and it's impossible for AI to really do that.
Idk, I feel like there's a happy medium where we can make use of these technologies (flawed as they are, for now) as an extension or augmentation of human agency and expression, but copy-pasting raw LLM output is not that. Don't justify your intellectual laziness and self-degradation of agency by stating that some output can have merit and people hate the technology unnecessarily. They hate the tech because of the type of bullshit you're doing right now, polluting this forum. Don't be a part of the problem.
>>5596>>5570
The other anon already made some good points about AI degrading human agency and intellect before I saw your response to my post, and while I agree, I think that explanation is far more than what your post deserved.
Is it hard for you to comprehend why someone wouldn't want to spend time and energy reading a text generated by a robot? Human interaction is based on reciprocity. A piece of software specifically designed to pretend to be a human by displaying a carefully pre-determined imitation of human mannerisms and ideas is anything but reciprocal. You might as well say the reflection you see in the mirror is a full flesh-and-bone person, since it displays everything you would expect to see on one in minute detail.
This imageboard has a very clear purpose, and the fact that you've come far enough to know about it and still thought that it would be perfectly acceptable to post an AI-generated text *and say that you did it* is genuinely grim beyond words. When you copy-paste several paragraphs worth of complex text into a regular conversation, you're directly implying that the subject matter is trivial and that the people discussing it have no intelligence worth respecting, because you yourself couldn't even bother to string your own personal ideas together into a cohesive and understandable text, and instead relied on an automated tool to do all the intellectual labor for you.
In short, it's insulting. We're not asking for the removal of AI content because we're purity obsessed, but because being duped into spending mental effort and time reading something nobody bothered to write is one of the main things we are trying to avoid by frequenting a website like this. If you want to use your stupid and worthless LLM then go back to the regular apps everyone uses and continue rotting your soul there, but leave us alone.
>Also, this is racist against synthetic entities.
Sadly, this dumb fuck retarded statement and the ones that followed are the most human things you've posted here, because writing downright mentally challenged content in a thread exclusively dedicated to IQ is something only a real person can do with so much authenticity.
>we're now seeing a new bias against anything not 'human-made.'
Non-humans have never and will never "make" anything. There is no AI revolution, there is no tech uprising, there is no AI girlfriend. I'm almost entirely sure that's why you value your AI so much, and even if it wasn't, you'd be just as pathetic.
Anonymous :
11 days ago :
No.5574
>>5580
>>5574
Is it an ego problem? Idk.
There are multiple things in life I believed were completely impossible for me, but it turns out I just misunderstood something super important about them, and having that false assumption made it 100x harder.
>>5886>>5574
>fully developed ego beings
What does this mean?
To expand: It sounds like cultural, societal and even subcultural/localized biases against reading as a skill. People will subconsciously avoid things they aren't experts at because of a weak ego. Fully developed ego-beings can learn and utilize a wide variety of skills on the mental-physical nexus. "Jack of all trades, master of none" is a cope. Either you get shit done or you don't.
Anonymous :
11 days ago :
No.5587
>>5597
>>5587
I'd say about 10-ish too, but I hope to learn many more in the coming years.
Anonymous :
11 days ago :
No.5596
>>5614
>>5596
Guys like him sincerely believe that AI can play ball in human discussions, especially when it comes to intellectual topics.
It's only wrong because LLMs are just a blender of what everyone has said about X thing. If you get a blender and throw in whatever everybody says about something, you don't get a human opinion.