Are Large Language Models Really AI?
Are large language models really AI?
Other than as a slightly fancy search engine, no.
I use it to make mermaid markdown flow diagrams for managers to smile and nod at and feel like they understand what the software does.
Zoolander merMAN.gif
Fenrir.Niflheim said: » I use it to make mermaid markdown flow diagrams for managers to smile and nod at and feel like they understand what the software does.

Haha, this is one of those areas I can totally see generative AI being useful for. Feed it your curated design doc or code base and let it build flow charts for you. Since the design / code is curated and accurate, the chances of it producing inaccurate output are pretty slim.
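Concretely, that workflow can be as small as one API call. A minimal sketch, assuming the official OpenAI Python client; the model name, file path, and prompts are placeholders, not recommendations:

```python
# Rough sketch: turn a curated design doc into a mermaid flowchart with one
# LLM call. The model name, file path, and prompts below are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

design_doc = Path("docs/design.md").read_text()  # your curated source of truth

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You draw mermaid flowcharts. Reply with a single "
                       "mermaid code block and nothing else.",
        },
        {
            "role": "user",
            "content": f"Draw a flowchart of the process described here:\n\n{design_doc}",
        },
    ],
)

# Paste the output into any markdown renderer that supports mermaid.
print(response.choices[0].message.content)
```

The curated input is doing the heavy lifting here: the model only restates structure that is already correct, which is why the failure rate stays low.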
Asura.Saevel said: » the chances of it producing inaccurate output are pretty slim.

It's funny OpenAI is crashing and burning. They are behind the massive increase in RAM prices for everyone, and they were responsible for the tech being thrown into the world before it/society was ready for it. I think there was a planned slower release by the other big players, but he wanted to be a billionaire. So he is responsible for the chaos, and it's fitting he goes down in flames for it.

OpenAI is an Alphabet subsidiary. There is no "he".
It's just Google's executive hierarchy wanting to make even more money for themselves.

Dodik said: » OpenAI is an Alphabet subsidiary. There is no "he". It's just Google's executive hierarchy wanting to make even more money for themselves.

Ehh, no. OpenAI's for-profit arm, OpenAI Group PBC, is controlled by a not-for-profit parent. Microsoft invested for somewhere around a 27~49% stake, which includes use of Azure infrastructure. It's this wacky configuration that pissed off founder Elon Musk, because Sam Altman and his friends really wanted to get rich and couldn't do that with a not-for-profit model. Even though MS is heavily invested, they do not have voting power.

Asura.Saevel said: » Even though MS is heavily invested, they do not have voting power.

It's also clear that Microsoft is trying to divest from OpenAI at this point. I think what is a bit clearer is that the big players are all in some lock-step about legitimising the technology rather than competing, which is why you see massive investments from ostensible competitors (Google > Anthropic, for instance). SoftBank also owns a large part of OpenAI Group PBC.
The point is they get investments from companies with vested interests. Those vested interests are not "for the good of humanity"; they're to make profit, like every other investment. I do speak from first-hand knowledge of OpenAI's hiring practices, btw.

Somehow the current evolution of the AI industry reminds me of a joke.
Three businessmen were marooned on a desert island for a year. By the time they were rescued, they had become incredibly wealthy by selling their hats to each other. (And to update it for the 21st century...) When they returned to civilization, they used their new-found wealth to become venture capitalists.

The AI companies saying they want to sell "intelligence" effectively just means they want to make almost everyone reliant on AI to do their job (a job you could previously do just fine without it) and then make you pay per minute to use it.
If you're a coder, it's fine to use AI, but you should never be reliant on it; you should always be able to work without it and keep your skills sharp enough that that remains the case. You're dealing with absolute demons with mental problems running these companies. All these billionaires are psychopaths trying to fk you in the ***.

I like how the artist made Altman look like a zombie, and the others are just drawn normal.
You know, the Claude consumption model isn't as bad as I thought at first.

I thought we just got a fixed amount per month and had to make it last. But it actually grants a fixed amount every 5 hours. This short window changes my mentality from "save the most" to "use the most within this time frame", because the penalty for running out of tokens is lower in a 5-hour window than in a one-month window. I kinda enjoy this use-it-or-lose-it mentality. It makes me want to find better uses for it, to get the most out of it. Like some say, the feeling of running out of tokens in a 5-hour window is like, "Well, I did my part. Let's check on YouTube now!"

EDIT: Dang, there is a weekly limit too? This is bad!
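The incentive difference is easy to make concrete. Here's a toy model of per-window allowances; all numbers are invented for illustration and are not Anthropic's actual limits:

```python
# Toy model of "fixed allowance per window" quotas. All numbers are invented
# for illustration; they are not Anthropic's actual limits.
from dataclasses import dataclass


@dataclass
class WindowQuota:
    allowance: int       # tokens granted at the start of each window
    window_hours: float  # quota resets when the window rolls over
    used: int = 0

    def spend(self, tokens: int) -> bool:
        """Spend tokens if any remain; False means you're cut off until reset."""
        if self.used + tokens > self.allowance:
            return False
        self.used += tokens
        return True

    def worst_case_lockout_hours(self) -> float:
        # Burn everything the second the window opens, and this is the
        # longest you can possibly wait for the next reset.
        return self.window_hours


# Same total tokens per month either way; only the reset cadence differs.
five_hour = WindowQuota(allowance=200_000, window_hours=5.0)
monthly = WindowQuota(allowance=200_000 * 144, window_hours=720.0)  # 144 windows/month

print(five_hour.worst_case_lockout_hours())  # 5.0 hours locked out, worst case
print(monthly.worst_case_lockout_hours())    # 720.0 hours locked out, worst case
```

Exhausting a 5-hour window costs you an afternoon at most; exhausting a monthly pool could lock you out for weeks, which is exactly why the short window flips the mentality from hoarding to spending.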
And so it begins.
OpenAI reportedly missed revenue targets. Shares of Oracle and these chip stocks are falling.

Quote: The Wall Street Journal reported that OpenAI has recently missed its own projections for user growth and revenue. The shortfall has sparked internal concern about whether the company can keep pace with the massive financial commitments required to build out data centers and secure long-term computing capacity.

RadialArcana said: » That's when the leaders start wanting to start fabricated wars with forced conscription of men (the only group of their own nation's population they are really afraid of), so all the problematic men get thrown into the meat grinder and are no longer a threat to them.

With thunderous applause.

Garuda.Chanti said: » Quote: The Wall Street Journal reported that OpenAI has recently missed its own projections for user growth and revenue. [...]

They should've told everyone 5.5 was too dangerous to release.

I get the joke, but it's not like OpenAI isn't doing the same ***. Altman is over there using the attacks against him as a platform to talk about how dangerous and revolutionary AI will be.
The really concerning part is how widely accepted these takes are outside of niche circles. Enough clickbait AI-written articles and you can convince people of anything, apparently.

Shiva.Thorny said: » Enough clickbait AI-written articles and you can convince people of anything, apparently.

Shiva.Thorny said: » The really concerning part is how widely accepted these takes are outside of niche circles.

Yea, it's always been that way in the security industry and anything adjacent to it. A similar thing happened about 10 years ago: a group of academics released a whitepaper claiming they had developed a framework that could automatically generate exploits from start to finish using some tool they wrote. Similarly, they didn't release it, claiming it was "too dangerous", but anyone who knew anything about exploit dev could read it and see it was *** by omission. Before that it was source code scanners making all sorts of crazy claims, fuzzing tools being released to the public, etc. The talking heads all melt down over these things and create FUD, which is driven by the industry types they interview who benefit from it.

The problem, in comparison, is that AI seems a lot more plausible on the surface and does, to an extent, provide more useful information when used properly and in the correct context than those tools do (which isn't saying much). It's always what is not said that matters the most: the omissions and missing context that make things seem scarier than they should be, but would only be visible to people technical enough to understand the entire scope. Concepts like reachability, reliability, and exploitability are lost on the majority of the industry, to the point that some vendors will give exploitability scores to bugs that are flat out not exploitable or reachable. This has long been an issue.

In the end, though, it doesn't matter. What matters is what people believe; that's where budget decisions come from and what will steer the industry. Anyway, I'm practicing my "Welcome to Wendy's, what can I get you?" in the mirror.

It's not just "the industry". People - engineers, co-workers, CTOs - often get stuck on technicalities and miss the context. A potential bug that has no attack vector is, by definition, not a potential exploit, because it cannot be exploited. But CTOs will see something on a report and focus on that, not the fact that it cannot be exploited and that you're doing work for no benefit. AI-generated reports are no different: they miss context. I'm not so doom and gloom about these things as most people, though, other than my eyes glazing over whenever a coworker wants to talk about "vibe coding". Certain industries are certainly ripe for being heavily automated by AI. The value add of having people do that work over having AI do that work just doesn't add up. A company is always going to look at the cost/benefit to make a decision; just business. If you, in your day job, use AI to automate large parts of your job, why should your company continue to pay you as the middleman to use that AI? That's the question you should be asking yourself.
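The reachability point above is easy to show in miniature. A contrived sketch, not taken from any real report: a scanner flags the dangerous sink, but no input can ever reach it, so its real-world exploitability is zero.

```python
# Contrived sketch of "reachability": a scanner will flag the eval() below as
# a code-injection sink, but no input can ever reach it, so its real-world
# exploitability is zero.
DEBUG_EXPRESSIONS = False  # hard-coded constant, never influenced by input


def render_report(user_supplied: str) -> str:
    if DEBUG_EXPRESSIONS:
        # Dangerous if reachable; with the constant above, it is dead code.
        # This is the line a naive report gives a scary exploitability score.
        return str(eval(user_supplied))
    # The only live path treats user input strictly as data.
    return f"report: {user_supplied!r}"


print(render_report("__import__('os').system('whoami')"))
# -> report: "__import__('os').system('whoami')"  (printed, never executed)
```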
Dodik said: » Certain industries are certainly ripe for being heavily automated by AI. The value add of having people do that work over having AI do that work just doesn't add up.

Quote: Businesses are racking up huge AI usage bills they didn't expect, with a single employee spending over $150,000 a month on AI tokens.

Quote: If you, in your day job, use AI to automate large parts of your job, why should your company continue to pay you as the middleman to use that AI?

In practice a single AI subscription can replace a team of sales callers or helpdesk operators: jobs that read off a script with branching paths for certain types of answers.
Can you do that with higher-skilled jobs? No, not yet anyway. Should you do it? Not if the output relies on context. The $150k-a-month token guy is a good example: that's someone who's used to getting AI to do their job, essentially telling the company they are redundant. Meanwhile another employee, not burning $150k worth of AI tokens a month, is getting their job done just fine.
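For scale, here's the back-of-envelope math on a $150k month. All prices and usage numbers below are assumptions for illustration, not any vendor's actual rate card; the point is that a fleet of context-heavy agents gets there fast.

```python
# Back-of-envelope: how one heavy user hits ~$150k/month in tokens. Prices
# and usage are illustrative assumptions, not any vendor's actual rate card.
PRICE_PER_M_INPUT = 3.00    # USD per million input tokens (assumed)
PRICE_PER_M_OUTPUT = 15.00  # USD per million output tokens (assumed)

# A fleet of agentic coding loops, each re-sending large context every step.
parallel_agents = 20
steps_per_agent_per_day = 1_000
input_tokens_per_step = 75_000   # big repo context resent on every call
output_tokens_per_step = 2_000

daily_input = parallel_agents * steps_per_agent_per_day * input_tokens_per_step
daily_output = parallel_agents * steps_per_agent_per_day * output_tokens_per_step

daily_cost = (daily_input / 1e6 * PRICE_PER_M_INPUT
              + daily_output / 1e6 * PRICE_PER_M_OUTPUT)
print(f"${daily_cost:,.0f}/day -> ${daily_cost * 30:,.0f}/month")
# -> $5,100/day -> $153,000/month
```

Dial any of those knobs down and the bill collapses, which is the other point here: the same job evidently doesn't require that burn rate.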
Now HERE'S something that today's LLM AIs should be good at. Unfortunately.

How LLMs could supercharge mass surveillance in the US. The technology could make commercially available bulk datasets even more of a privacy concern. (MIT Technology Review)

Also, they should end online anonymity!

AI allows hackers to identify anonymous social media accounts, study finds

Richard Dawkins used an AI chatbot, and now he thinks it's conscious and the next step of evolution.
What a potato man.

https://x.com/RichardDawkins/status/2049973529576108160