Everything is moving too quickly. The transformation has gotten so obvious that even normie sites have to admit that half of internet traffic is bots. Everyone has been made aware of the dead internet theory, and this has produced a paralysis of lame hollow discussion and ridiculous dichotomies: AI as singularity or AI as nothingburger. It seems we can't react to monstrous change with the vitality and human-ness we once did. Consensus is like gold these days, but I think we can all agree that everything is becoming more and more weird. ITT we try to make sense of this weirdness by taking a step back and searching for some insights in this mess. Has the amount of AI slop increased during your daily scroll? Are the comments you read getting more and more incoherent? Can you spot AI like a Voight-Kampff test, or have you been fooled more often than you would like to admit? How crazy are you getting? LET YOUR CRIES, HOPES AND CONCERNS LOOSE AND SHARE SOME OF YOUR EXPERIENCES AND OBSERVATIONS DURING THIS REAL WEIRD TRANSFOR-INFORMATION.
The only social media I use is Instagram, which I use to keep up with a niche sport I'm into. I never see any AI stuff, but then again I only look at my feed and stories, never the explore page or suggested posts.
The only social media I use is 4chan, Wired-7 and Petrarchan. In this regard, there are two ways AI makes its appearance for me: either it's blatantly obvious (its use here may be intended to be noticed), or it is covert enough that I can't feel 100% sure about its AI origins: I just get a "weird feeling".
IRL:
I'm a college student. Since I began my studies, I have had to participate in group projects. More often than not, someone uses AI, usually in a very obvious way. People who can't write a proper paragraph during class end up writing a 10/10 contribution to the work, for example. Of course, people may write better at home, but in general their usage is obvious to me, although I prefer not to complain about it.
This year I had to take a writing workshop, and the weird thing is that the teacher apparently was using AI (he showed us examples to guide our writing). I kek'd hard at first, but then I felt uneasy about it.
Lastly, googling things is annoying. I swear if I click on a link at random there's a high probability of it being AI (an excessive number of subheadings, each followed by one or two short paragraphs; verbosity and redundancy, etc.). It doesn't matter if what I'm looking for is a recipe or instructions for using certain software.
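Those tells can even be turned into a toy heuristic. Purely illustrative: the features and thresholds below are made up for this sketch, and real AI-text detection is far less reliable than counting subheadings.

```python
import re

def slop_score(text: str) -> float:
    """Crude score in [0, 1] for the 'AI slop' tells described above:
    lots of subheading-like lines and lots of one-or-two-sentence
    paragraphs. Thresholds are invented, not a real detector."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    if not paragraphs:
        return 0.0
    # Tell 1: short title-cased lines with no final period (subheadings)
    headings = [p for p in paragraphs
                if len(p.split()) <= 8 and p.istitle() and not p.endswith(".")]
    # Tell 2: paragraphs containing at most two sentence-ending marks
    short_paras = [p for p in paragraphs
                   if len(re.findall(r"[.!?]", p)) <= 2]
    return (len(headings) + len(short_paras)) / (2 * len(paragraphs))
```

A subheading-heavy page of two-line paragraphs scores high; a single dense paragraph of several full sentences scores near zero.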
Oh! This is my experience, of course. From time to time I hear what my brothers watch on their phones, and 90% of the time it's AI slop. Artificial voices, images, scripts, etc. There's not a shred of human-ness in those videos.
I suppose some may benefit from their usage. However, my opinion is that since the beginning of the AI "boom" everything is worse.
I think it's undeniable we breached some sort of AI threshold and the rate at which it can more successfully replicate (or resemble, more accurately) human created works is exponentially increasing. Will it ever be able to create something which is indistinguishable from a human masterpiece? Probably not, due to a computer being unable to breach human defined parameters and being inherently limited by the data fed into the model.
When the photorealistic AI imagery came on the scene, I'll admit to being fooled once. But once you understand the pattern, there are obvious tells which distinguish them. I think maybe the more interesting thing, more than the realism and ability of AI, is the way that we are now able to "offshore" a more significant portion of traditionally human labor onto a machine. I mean, the conjuring aspect of AI is really not that different than photoshop or rendering, except insofar as a machine is totally doing it all and in a fraction of the time as a human.
It's very much a spectacle, an illusion, but one that is being bought into full sail. I cannot tell what the ramifications will be yet, if it will just be a toy, or another machine that leads to the demise of labor.
Right now though, the widespread naming of AI made content as "slop" is accurate, as it is mostly a bucket of feed for the masses who either cannot tell or don't care to tell if it is AI.
How long until AI can write bioweapon code?
What the fuck is "bioweapon code"
Anonymous : 31 days ago : No.6624
>>6639
>>6624
I don't know what you mean, because there was no question, but if you are implying I am too online with your picrel then yes, obviously.
>>6586
i feel like AI is making me a bit schizo these days. i see a lot of things and think, was that real or fake, even though there's no way it was anything except real.
We already know the answer.
I don't know if you're familiar with the idea that we often get trapped by virtue rather than vice, but I think it applies to AI.
Yes, some use it to cheat or trump, but the danger is the trust that it asks from the user. You have to talk to it like a person, so you extend it a lot of privileges usually reserved for humans. Once I use it, I trust it, and that's the issue because it always lies.
More and more, it feels like a pact with the Devil. You can get any information you want instantly, but you will never know if it is true, and everyone will forget it can be wrong.
(I guess I could talk to it like a machine, but it is made to emulate another human and I don't want to learn how to dehumanize something that acts like a human; it would just be another way to dehumanize myself.)
Stolen from r/redscarepod... This image provides me a strange sense of comfort against the obvious impending doom of AI taking over the human faculty for thought and action. Even its most avid users would rather do nothing than have to do something, even if that something is made easier with an LLM. It's just a tool kids at school use to bypass all the stupid busywork that's thrown at them, which they know is all crap anyway. The more they associate it with this sort of fake busywork, the less they'll come to absorb it into their personal or social lives.
Anonymous : 17 days ago : No.7004
>>7016
>>7004
The dotcom bubble was indicative of financial speculation and its inability to use technological growth as a way to print money, not that the Internet was overblown. There is no way to diminish the effect the Internet has had on the psyche. I don't think I am an AI doomer, nor a worshipper, but it's not crazy to think that it will rapidly increase its feasibility in a short amount of time. And both the Internet and AI are the same thing at their core: information organizing devices. AI has the benefit of being sort of self contained, whereas the Internet is human to human.
Probably though the most amount of AI's impact will be from its perceived value or risk.
This feels more and more like the dotcom bubble.
Yes, it is useful and it will change things, like the Internet did, but there is a lot of overpromising, and unreal expectations fueled by the market euphoria.
I've been receiving a daily digest of AI news and new tools. The tone is epic; there are revolutions every day, yada yada. They recently decided to publish testimonies from actual users, and the discrepancy between the two is pretty massive.
Honestly? Kinda relatable. Them tuning the LLM to have hyperperfectionist anxieties is rather endearing. Whomst among us hasn't wanted to jump out a window after a minor mistake?
The biggest impact of AI (as it exists) will be the death of the take-home essay. Schools are really running out of homework options; there will have to be a fallback on testing. Parents will be mad that their conscientious-but-dumb kids get Cs now. This adjustment will take up to 10 years, and the decade of mass learning loss will be visible on charts forever.
Web developers can output a lot more feasible, bloated features per day (I hesitate to call it "slop" because the status quo was just as bad). More serious programmers will find limited usage from code generators, mostly relying on it for autocomplete and utility functions.
Using AI for emails and image generation will continue to be déclassé and may become widely understood as a status identifier.
The anti-AI movement will become a punchline, synonymous with tilting at windmills.
When it becomes clear that AGI is not on the horizon there will be a miniature dotcom crash. Leaked email from Sam Altman or something.
Regular people will continue to find it occasionally useful in their personal lives. Specialized applications will prove useful to some office jobs.
Maybe in 20 years they find a breakthrough towards AGI and you'd have some serious impacts on the job market. Probably not.
Anonymous : 17 days ago : No.7021
>>7031
>>7021
About a decade ago I came across the idea of competitive and complementary cognitive artifacts. Basically, some things like calculators plainly replace a cognitive ability in humans and when taken away we are worse for it. Others, like an abacus, increase cognitive capacity. When you learn how to use it, you both become more efficient at basic calculation and you can achieve roughly the same result when it is taken away by imagining an abacus. A map is another example of the complementary one. If I show you one, you can memorize a region's territory to some extent even if I take it away a moment later. A voice activated GPS with no map screen, on the other hand, would be competing with your brain's development of directional skills. If you spend your life navigating that way, the moment it's taken away you have no ability to navigate.
All this to say, LLMs are like 90 million competitive cognitive artifacts in one. Every aspect of human thinking that involves language, it's competing against to some extent. We are fundamentally changing the cognitive horizons of human experience and I genuinely think that the advent of widespread literacy and the invention of spoken language in general were the only two points in human history that were comparable.
While I am deeply sceptical of AGI, it does seem that current AIs could be more deeply integrated into our lives than they are now, and this is something which concerns me. Reflecting on society's loss of reading skills earlier (after reading https://kittenbeloved.substack.com/p/college-english-majors-cant-read), it struck me that we could just as easily hand off many menial tasks to LLMs, which could enfeeble us. The ancient Greeks identified that literacy was reducing young people's ability to memorise texts (true, but certainly worth it). What about AI? What about losing the ability to write simple letters and handing that off to the AI? What is being gained here? Already young people just call the AI 'Chat': let me ask Chat this, let me ask Chat that. Why not stop and think? If you have to ask Chat, you are introducing at least 5 seconds of latency in whatever you are doing. And Chat is fucking stupid too.
Anonymous : 17 days ago : No.7031
>>7034
>>7031
Nice distinction. In the end AI is competing against everything involved in live exchange between humans. So it will enhance any long-distance and/or asynchronous exchange, but it won't measure up to good old interaction (which will probably become rarer anyway).
>>7071
The only homework left is public presentation (with no notes), but it requires so much time that it is already a rarity.
>>7035>>7031
This is interesting.
One little-discussed point Popper makes in his 'Open Society and Its Enemies' is that the maximally open society would lack all face-to-face interaction. He seems uncertain about this, as he's obviously in favour of the open society.
I'm reminded of the Zizek joke about the couple who plug in her robotic dildo into his Fleshlight and let that contraption have the sex for them, and then, their guilt gone, they can sit and have a good talk
>>7036>>7031
Skill issue on the part of 80% of humanity. They let themselves get taken advantage of by elements in their environment instead of being the masters of those elements. Knowing your way around a computer in current year is starting to be like being part of a wholly separate race. I am almost done being social justice about it. Good luck with your fried brains, normies
I now realise that what I can't find on the Internet nowadays, I ask the AI agent; but these are things I used to be able to find, generally in obscure tutorials made by some geek, all those websites that disappeared over time and/or under deplorable SEO cheating.
This is a second death of the old Internet.