Are large language models really AI?
I subscribed to Claude and used the newest Opus 4.7.
Sounds promising

Asura.Hadroncollider said: » ... also it requires insane amounts of hardware.

Garuda.Chanti said: »
Asura.Hadroncollider said: » ... also it requires insane amounts of hardware.

An intelligent AI is only limited by the speed of the hardware and the RAM available, and requires petabytes. It doesn't stop growing unless it runs out of space.

AI isn't being developed for science; it's being developed for corporate use (to put Western people out of work and remove the leverage workers have over billionaires), government use (to track everything you do in real time), police work (pre-crime will be a real thing, at least to put you on watch lists, because they will be able to track everything you say and do, online or anywhere else, and match it to the behavior of people who ended up doing things; and by the way, this won't be so much to crack down on actual criminals as on people who cause problems for governments, like activists and protest organizers), more effective nudge-botting on social media to make us do and believe the things they want us to, and military options (making swarms of unmanned drones to kill people).
They don't want it to grow, and they don't want it to be aware (AGI is a meme to push up stock prices; nobody wants that). They purely want a tool that does what it is supposed to do and doesn't move out of the box. AI is going to be a horrible thing because of the power it gives to truly psychotic people with massive wealth, who previously just could not do these things due to the manpower needed; it's going to usher in the horrors of the dystopian society the movies always warned us about. You know that company Palantir? It's named after the orb in The Lord of the Rings. That's the mentality of the fukjobs running that company. "A palantír is a fictional magical artifact from J.R.R. Tolkien's The Lord of the Rings, described as an indestructible crystal ball used for communication and seeing distant events. These stones were created by the Elves and played significant roles in the story, particularly in the interactions between characters like Sauron and Saruman."

[embedded YouTube video]

AI was developed by science and is of course still used there. All the use cases you mentioned, as well as trillion-parameter LLMs, aren't made by companies like Microsoft (or even smaller ones) that take 3-6 years to make a crappy operating system. They simply cut out functions, set the limits, and test what they can use somewhat safely. And of course, to speed up evolution and be first on the market, researchers want it to grow. This shouldn't be news; there have been some really scary things out there for years. It's just like nuclear plants: absolutely safe in a controlled environment... until someone fks it up.
I've seen Palantir and tbh... a bunch of absolute degenerate lunatics like them... only in America. Nothing else to say.

At this point, I'm pretty sure Rady doesn't actually care about AI; he is only using it to talk about politics, hoping no one notices and he isn't banned from yet another thread.
And Ms Chatty is his accomplice!
Pantafernando said: » At this point, I'm pretty sure Rady doesn't actually care about AI; he is only using it to talk about politics, hoping no one notices and he isn't banned from yet another thread.

Asura.Hadroncollider said: » The reason data centers are so big is the tokens, the bandwidth, and the fact that it has to pay for itself by farming and storing your data.
Quote: An intelligent AI is only limited by the speed of the hardware and the RAM available, and requires petabytes. It doesn't stop growing unless it runs out of space.

It's pretty difficult to separate AI from politics given that it intersects a lot of political topics, like labour and climate.
I use my LLM to take my prompts and make them better. I felt so smart wiring my LM Studio into ComfyUI (a rough sketch of that wiring is a couple of posts below). I also use it for coding my hen.. Ren'Py game. But to be fair... when it breaks or stops working, I'm like this for two days until it's back up. It's been a while since it broke or had issues...

AI is power. The reason the general public is being allowed to screw with this stuff in any way they want is that they need us to train it; the interactions are extremely useful to them. What happens when they no longer need us to train them? Almost all of these companies run at a massive loss. It's going to get to a point where the access either gets cut off, the US locks access outside of its borders due to national security issues, or the price rises astronomically. Again, us plebs are not the long-term intended audience for this technology.

Local models will still always be available; you can't undo that. But I agree the price will rise astronomically (the other things seem less likely).
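For the LM Studio / ComfyUI wiring mentioned a couple of posts up, here is a minimal sketch of the LLM half, assuming LM Studio's OpenAI-compatible local server is running on its default port (1234). The model name and the system prompt are illustrative assumptions, not anything the poster described:

```python
# Hypothetical sketch: asking a local LM Studio model to punch up an
# image prompt before it goes to ComfyUI. Assumes LM Studio's built-in
# OpenAI-compatible server is running at its default localhost:1234.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def improve_prompt(rough_prompt: str) -> str:
    """Send a rough prompt to the local model and get a richer one back."""
    response = client.chat.completions.create(
        model="local-model",  # LM Studio serves whichever model is loaded
        messages=[
            {"role": "system",
             "content": "Rewrite the user's image prompt: add lighting, "
                        "composition and style details. Reply with the "
                        "prompt only."},
            {"role": "user", "content": rough_prompt},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

print(improve_prompt("a knight standing in rain"))
```

Feeding the returned string into a ComfyUI text node is left out here, since that depends on the workflow.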
Cerberus.Balloon said: » It's pretty difficult to separate AI from politics given that it intersects a lot of political topics, like labour and climate.

RadialArcana said: » Again, us plebs are not the long-term intended audience for this technology.

You're thinking a bit small... Dinosaurs lived for 235 million years (they mostly got wiped out through no fault of their own, and birds are still around). Humans have been running around for 300,000 years and managed within the last few centuries to wreck the planet in a way dinosaurs couldn't in 235 million. Sure, there have been multiple climate changes and many died within that timespan, but realistically humanity won't make it much farther than this, since an actual climate change would destroy the world as we know it (war, no infrastructure, conditions we can't live with, too many people). Too much greed, not smart enough, living way beyond what the earth can handle; evolution simply can't keep up with the destruction speed of humans. So... what could possibly help humanity overcome this? Something less greedy... smarter... something that evolves way, way faster than nature could manage? Deep Thought, or maybe the arrival of Jesus 2.0... but we all know how that went last time. The evolution of AI could in some way be at least one of the few last chances human society as we know it might have... but as it stands now, NA at least has made pretty clear that its only interest is power and control, driven by greed. So don't forget your towel, and thumbs up!

RadialArcana said: » AI is power. The reason the general public is being allowed to screw with this stuff in any way they want is that they need us to train it; the interactions are extremely useful to them. What happens when they no longer need us to train them? Almost all of these companies run at a massive loss. It's going to get to a point where the access either gets cut off, the US locks access outside of its borders due to national security issues, or the price rises astronomically. Again, us plebs are not the long-term intended audience for this technology.

"AI" is nothing but a data processing algorithm. Right now everyone knows they are running on pure hype fuel, and that hype will vaporize the moment a lender or investor calls it in and a company can't pay. To stave this off, those companies are banging the hype drum as loudly as possible and doing circular investment to create phantom revenue. Once that happens, the bubble will pop and everyone will rush to get out of the market ASAP, causing a recession. This will further cause investment capital, the lifeline for these companies, to dry up, and one of two things will happen. The first is that they get some sort of government bailout (why do you think they are all cozying up to the DoD?) or have a sufficiently large war chest / revenue stream (Microsoft, Google, Apple, Amazon) that they can ride it out. The second is that the company implodes and gets bought out by its competitors for pennies on the dollar. Right now every company is doing everything it can to be part of that first group. No company is making money off anything "AI", except maybe the hardware manufacturers. Everyone expects to make money sometime in the future, after the bust happens, their debt gets bailed out, and their competitors get bought. As for "who", this is for everyone, because it's just a data processing algorithm. It's not some sort of silly science-fiction imagination BS, but a way to process large amounts of data to extract patterns upon request. The query component is ultra fast, but the building and indexing of the data, aka "training", is astronomically expensive: large upfront capital costs, but with a large enough customer base you can spread them out and get economically viable products. This is a good video.
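To put rough numbers on that training/query asymmetry, here is a back-of-envelope sketch using the widely cited rules of thumb of roughly 6 x parameters x tokens FLOPs for a training run and roughly 2 x parameters FLOPs per generated token at inference. The model and corpus sizes are assumptions for illustration, not any specific vendor's figures:

```python
# Back-of-envelope on why training dwarfs per-query cost, using the
# common ~6*N*D (training) and ~2*N per token (inference) heuristics.
params = 70e9          # assume a 70B-parameter model
train_tokens = 15e12   # assume a 15-trillion-token training corpus

train_flops = 6 * params * train_tokens  # one full training run
flops_per_token = 2 * params             # one generated token

print(f"training:  ~{train_flops:.1e} FLOPs")              # ~6.3e+24
print(f"inference: ~{flops_per_token:.1e} FLOPs/token")    # ~1.4e+11
print(f"ratio:     ~{train_flops / flops_per_token:.0e}")  # ~4e+13
```

Under these assumptions, one training run costs as much compute as generating tens of trillions of tokens, which is exactly the "huge upfront cost, cheap queries" shape described above.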
I can't help but feel amused by how many channels are being demonetized under the rule of "inauthentic content" lately, when YouTube itself provides tools for you to create "inauthentic content".

[embedded YouTube video]

"Thinking" is nothing but a data processing algorithm with parameters set by a human's experiences, knowledge, personality, feelings, intelligence, and a few other things. Pretty much like how LLMs are created to simulate human reasoning: a bit clumsy, but good enough for the average. But we already determined that LLMs are not intelligent, that there are more forms of AI than LLMs, and that there are different use cases.
Asura.Hadroncollider said: » "Thinking" is nothing but a data processing algorithm with parameters set by a human's experiences, knowledge, personality, feelings, intelligence, and a few other things.

"Just put enough processing power together and thinking emerges." Yeah, no, it doesn't work like that. People have been saying that since the transistor, and it hasn't happened despite enough processing power to model every neuron in the human brain.
Well yeah. I guess this thread is dead now. The pros of knowledge and intelligence have arrived, the Dunning-Kruger effect kicks in at full force, and there are many of them.
Quote: Meta to start capturing employee mouse movements, keystrokes for AI training data

NEW YORK, April 21 (Reuters) - Meta (META.O) is installing new tracking software on U.S.-based employees’ computers to capture mouse movements, clicks and keystrokes for use in training its artificial intelligence models, part of a broad initiative to build AI agents that can perform work tasks autonomously, the company told staffers in internal memos seen by Reuters.

The tool, called Model Capability Initiative (MCI), will run on work-related apps and websites and will also take occasional snapshots of the content on employees’ screens, according to one of the memos, posted by a staff AI research scientist on Tuesday in a channel for the company's model-building Meta SuperIntelligence Labs team.

https://www.reuters.com/sustainability/boards-policy-regulation/meta-start-capturing-employee-mouse-movements-keystrokes-ai-training-data-2026-04-21/

-----

Quote: AI’s New Training Data: Your Old Work Slacks And Emails

When Shanna Johnson was winding down cielo24, the transcription and captioning company she ran as CEO, she discovered an unexpected asset: its operational exhaust—the digital leftovers that pile up across years of work and collaboration.

To close the company out, she worked with SimpleClosure, a startup that specializes in helping companies wind down. SimpleClosure helped her through the usual shutdown paperwork — closing out payroll and taxes, getting investor consents in order, and filing paperwork with the IRS. Then came the part nobody puts in the founder playbook: selling off cielo24’s 13-year digital footprint—every Slack joke, every Jira ticket, emails documenting internal victories or frustrations sitting in employees’ multi-terabyte Google Drives—as training data for the next generation of AI.

For that, cielo24 received “hundreds of thousands of dollars,” which Johnson said helped her go from “I don’t know how we are going to pay our bills" to "we can tie this up neatly with a bow and be able to walk away". “I’m still a bit emotional about shutting the company down,” she told Forbes. “But it’s cool to think that our data could be useful, live on and help other people.”

It’s a clean ending for a messy reality: the company didn’t survive, but its work trail did. And in 2026, that trail can be worth real money. Johnson’s data sale isn't an isolated exit strategy; it is a new frontier in the AI arms race.

AI labs started off by training their models on the public internet—Reddit threads, Wikipedia entries, digitized books. But they exhausted that — all of it — by late 2024, according to former OpenAI chief scientist Ilya Sutskever. And what’s more, it’s not super helpful for building "agentic" AI: models that can actually do work. But the hand-crafted work that was done during the daily operations of defunct companies like cielo24? That’s a sort of fossil fuel for AI agents. Turns out that if you’re shooting for AI competence in the workplace, you need examples of what doing the work actually looks like — a lot of them.

“Model companies are realizing the noise in the real-world environments is required to accurately test models,” said Ali Ansari, whose company micro1 sells a product to AI labs called “Roots,” a mock holding company where AI agents can practice their skills in tasks like financial services and managing complex calendars.

A Gold Rush On Old Paperwork

Demand for workplace data has been a boon for SimpleClosure, whose CEO Dori Yona said that the level of inbound interest in it from AI companies has been “insane”. “There’s a feeling of a gold rush from these companies trying to get their hands on real-world data,” he said.

To meet demand, SimpleClosure is launching Asset Hub, where companies shutting down can sell off their inventory of code, Slack archives, emails and whatnot. Parts of Asset Hub are still in beta, Yona said, because SimpleClosure removes all personally-identifiable information from the internal company data, a sensitive and technically difficult process that they want to make sure is “rock solid” before rolling it out more widely.

In the past year SimpleClosure has processed nearly 100 deals on behalf of dead companies, Yona said. It has recovered over $1 million on behalf of founders, typically paying between $10,000 and $100,000 per company. A competitor, Sunset, also buys defunct company data at similar prices. CEO Brendan Mahony told Forbes the price depends on the company’s size, its age, and ‘data richness’— a measure of internal traceability and cross-platform linkages within the data. A Jira ticket tied to a specific code commit carries more value than a standalone document, he said. Certain industries, like healthcare or finance, command a premium, he added.

Where some see this sort of salvage as a business opportunity, others see a privacy concern. Marc Rotenberg, founder of the Center for AI and Digital Policy, said that even if employees signed away intellectual property rights to work materials, that doesn’t settle whether employers should be allowed to sell internal communications to a third party—particularly when employees are unlikely to expect their Slack messages could be repurposed this way. “I think the privacy issues here are quite substantial,” he said. “Employee privacy remains a key concern, particularly because people have become so dependent on these new internal messaging tools like Slack…It's not generic data. It's identifiable people.”

Rotenberg’s organization sent a letter to the Senate Commerce Committee Tuesday calling on the FTC to scrutinize new AI business practices, citing concerns about safeguards for protecting personal data.

While all companies that buy this material say they take anonymization seriously, data industry veterans say that the process is far from simple. There’s no “on-off switch” for personally identifiable information tethered to a career’s worth of work. “If anonymization's not done correctly, there are risks that companies who have access to the data would be able to see the activities of individual organizations and people, and then if not treated carefully, could leak into model output,” said Bobby Samuels, whose company Protege specializes in navigating the complex regulatory and legal landscape of real-world data.

Beyond anonymization, there’s a chance a person’s chats could be “regurgitated” by AI models. One 2020 study from institutions including OpenAI and Google showed that large language models can unintentionally memorize sequences from their training data verbatim, which can then be extracted with the right prompts.

https://www.forbes.com/sites/annatong/2026/04/16/ais-new-training-data-your-old-work-slacks-and-emails/

Asura.Hadroncollider said: » Well yeah. I guess this thread is dead now. The pros of knowledge and intelligence have arrived, the Dunning-Kruger effect kicks in at full force, and there are many of them.

I'm still here to save the day. Haters gonna hate. But that is the only thing they are good at.

RadialArcana said: » [the Meta and Forbes articles quoted in full above]

@MsChatty Learn how to feel less lazy when posting something. At least Rady took the time to copy and paste, saving us the trouble of having to visit an external site. Rady > MsChatty

RadialArcana said: » where companies shutting down can sell off their inventory of code, Slack archives, emails and whatnot

Divorce lawyers are gonna love this one when someone decides to prompt the AI to find all the affairs going on in the workplace that got talked about, because people don't understand that delete doesn't mean delete.

Good time to remind everyone that all the public sites changed their TOS to allow user data to be used to train AI.
If you don't want your reddit, stackoverflow et al posts to be used to train AI, use something like redact.io to remove them.

Dodik said: » "Just put enough processing power together and thinking emerges." Yeah, no, it doesn't work like that. People have been saying that since the transistor, and it hasn't happened despite enough processing power to model every neuron in the human brain.

And we don't actually have the processing power to model the neurons, not even with a data center's worth of compute. We did get past parity on the neuron-to-transistor count.

Dodik said: » stackoverflow et al posts to be used to train AI, use something like redact.io to remove them.

Dodik said: » Oh no.

Yes, several prolific posters went back and nuked their own posts to prevent AI from using them for code generation. The staff at Stack Overflow had put monitors in place to detect exactly that event, and they banned those individuals and reverted their deletes. This is why I keep pointing out that almost everything you see out of ChatGPT, Copilot, and Claude is just code elements from upvoted Stack Overflow posts. What's hilarious is that people actually think those things are "creating" code on their own.
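On the anonymization point from the Forbes piece above: a deliberately naive scrub is easy to write and easy to get wrong, which is why the data brokers call the process "far from simple". A hypothetical sketch, with purely illustrative patterns:

```python
# A deliberately naive PII scrub; real pipelines need entity
# recognition, context, and human review on top of this.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

msg = "ping jane.doe@cielo24.com or 555-867-5309 about ticket JIRA-1042"
print(scrub(msg))
# -> "ping [EMAIL] or [PHONE] about ticket JIRA-1042"
# Note what survives: names written in prose, project codenames, and
# ticket IDs all sail through, and those identify people just as well.
```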