Public discourse has focused a lot on artificial intelligence in the past few years. And even though the technology is hyped up and has plenty of supporters, there are lots of skeptics, too. People are worried about its (un)ethical use, environmental impact, effect on the job market, and more.
In an illuminating thread on AskReddit, tech workers and AI-savvy internet users revealed some of the secret things about the artificial intelligence industry that the public might not know about. Scroll down to read their insights.
#1
Google makes around $250 billion per year from controlling nearly all of the online advertising market.
OpenAI needs to recoup $1.5 trillion ($1,500 billion) just to break even on its hardware investment costs.
Their current *revenue* is just $13 billion per year.

Image source: queen-adreena, Getty Images
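To put those figures in perspective, here is the back-of-the-envelope arithmetic implied by the quote (using only the numbers cited above, which we haven't verified independently):

```python
# Back-of-the-envelope math using the figures quoted above (not independently verified).
hardware_costs = 1_500_000_000_000   # $1.5 trillion in claimed hardware commitments
annual_revenue = 13_000_000_000      # $13 billion per year in claimed current revenue

# Years of *gross* revenue needed just to cover the hardware bill,
# ignoring operating costs, growth, and margins entirely.
years_to_recoup = hardware_costs / annual_revenue
print(f"~{years_to_recoup:.0f} years of current revenue")  # roughly 115 years
```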
#2
If your job starts having you constantly log random information and interactions about your duties, you’re probably training AI to do your job.

Image source: onmy40, WBMUL / Envato (not the actual photo)
#3
That most jobs are safe from it, but the corporate sector thinks it’s saving money by reducing staff.
People are still needed to make ‘AI’ work. It doesn’t just know what you want.

Image source: nerdykronk, Andrej Lišakov / Unsplash (not the actual photo)
The AI industry is utterly massive. According to Statista, the market for AI technologies amounts to around $244 billion in 2025. It is expected to rise to $800 billion by 2030.
Naturally, this has many people wondering whether the investments match the actual value being delivered. Some folks worry that the (over)investment we’re seeing in AI companies and tools is akin to an economic bubble of sorts.
They argue that the AI tools that the public has access to right now are flawed, unreliable, and limited, often leading to far more work rather than less. In short, they argue that the tech is overhyped and not quite as great as major tech companies would have you believe.
Meanwhile, proponents believe that the technology is so fundamental and universal that it’s not going anywhere. From their point of view, it’s vital not only to invest in the tech ASAP, but also to adopt and integrate it into your workflows, no matter what you do.
#4
It’s just reinterpreting what it finds on the Internet. GIGO (Garbage In, Garbage Out) still applies.

Image source: GEEK-IP, John / Unsplash (not the actual photo)
#5
That most people don’t understand what AI is, even “tech” people.
AI is a very broad category. Everyone automatically assumes it means the mad-libs-style LLMs; however, machine learning models have been around for a long time and do a variety of things. Your spam filter, predictive text, and your nav system’s traffic avoidance are all variations of AI in the category of machine learning. These are tools that don’t take jobs; they make our lives easier.
Then there are AI machine learning models that DO take jobs, but actually do so much better than a person can do. Like ones that examine components for defects. They can identify things people may miss far quicker. This allows for better quality and safer products.

Image source: KelhGrim, DragonImages / Envato (not the actual photo)
#6
A ton of what is labeled as “AI” is just spreadsheets and algorithms that have existed for decades. Companies are calling anything done by a computer “AI” for marketing purposes.

Image source: RedditBugler, Rodrigo Rodrigues / Unsplash (not the actual photo)
However, a recent report by the MIT Media Lab/Project NANDA found that a jaw-dropping 95% of investments in generative AI have produced zero returns.
As the Harvard Business Review reports, while individuals are adopting generative AI tools, results still aren’t measurable at a profit and loss level in businesses.
#7
They keep saying general AI is around the corner. The current technology is fundamentally incapable of becoming a general AI. It’s like saying any day now your toaster will become a TV.

Image source: summonsays, DC_Studio / Envato (not the actual photo)
#8
It’s not actual AI. It still requires prompts. It doesn’t have true autonomy or self-inspired standalone operation. It relies wholly on pre-programming and external support.
What we have is more akin to adaptive algorithms, which is impressive, but it’s not AI.

Image source: Telrom_1, Impactphotography / Envato (not the actual photo)
#9
AI engineer here
– the internet (web browsing) as a whole is going to fundamentally change into being AI-based
– companies are moving away from being AI-dependent. Yes, everyone spent years saying AI is coming for everyone’s job and grandmother, but the pushback is real
– as someone who works in AI (on education and cancer research), the backlash I face is real.

Image source: ToughAd5010, vukasinlj81 / Envato (not the actual photo)
We’d like to hear your thoughts, dear Pandas. You can share them in the comments below.
What are your thoughts about AI tech and the industry as a whole? Do you think it’s overhyped, or do you see it as the future? What are the biggest pros and cons of artificial intelligence tools that you’ve personally noticed so far? Let us know!
#10
Most AI models are built on massive amounts of copyrighted data taken without permission or compensation. The entire industry is basically built on the largest-scale intellectual property theft in history.

Image source: Weird_Ad6669, AnnaStills / Envato (not the actual photo)
#11
I think at some point the costs (data centers, energy, the models themselves) will far outweigh the benefits for most companies.

Image source: gianlu_world, JuiceFlair / Envato (not the actual photo)
#12
CHANGE DEFAULT SEARCH PARAMETERS FOR MORE ACCURATE INFO (depending on the info you want). You have to give AI search parameters if you want legit info; otherwise it tends to use Reddit, FB, Wikipedia, etc. for a lot of the results. For example, if you’re researching mushrooms, you want to specify that you don’t want any info from Reddit, FB, Wikipedia, etc., and that you only want info from mycologists, fungal biologists, plant pathologists, experts in similar fields, PhDs, published research papers and books, and the like. You’ll get entirely different answers when you specify different search parameters.

Image source: Iamtress1, Yunus Tuğ / Unsplash (not the actual photo)
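For readers who want to try the approach described above programmatically rather than in a chat window, here is a minimal sketch of what “setting search parameters” can look like as a system prompt. It assumes the OpenAI Python client; the model name, the constraint wording, and the mushroom question are only illustrations, and the same instructions pasted into any chat interface have a similar effect:

```python
# A minimal sketch of source-constrained prompting, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
# The model name and constraint wording are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

constraints = (
    "Do not base your answer on Reddit, Facebook, Wikipedia, or other "
    "user-generated content. Prefer information attributable to mycologists, "
    "fungal biologists, plant pathologists, peer-reviewed papers, and published "
    "reference books. If a claim can't be traced to such a source, say so."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": constraints},
        {"role": "user", "content": "Which lookalikes of chanterelles are toxic?"},
    ],
)
print(response.choices[0].message.content)
```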
#13
The RAM price hikes and raised prices on some products are just the beginning. You see, AI data centers consume TONS of power. The next crisis will be an energy shortage as we balance AI data centers against everyday life.

Image source: Celcius_87, Iyus sugiharto / Unsplash (not the actual photo)
#14
Reddit partnered with Google last year. This part isn’t that much of a secret, but what most people don’t realize is that everything, and I mean absolutely everything, you post here is being used to train Google’s AI model. Then, said model is being used to post back on Reddit for many different purposes, through bot accounts. Then, it all gets fed back into AI, posted again, fed back, and so on. So not only are you all here arguing with bots, you’re arguing with lobotomized, braindead bots who are using your own regurgitated words back at you. It’s kinda like playing tennis against a wall with a poorly made drawing of you taped to it.

Image source: BaltazarOdGilzvita, Brett Jordan / Unsplash (not the actual photo)
#15
Not a tech worker, but I do know people at higher levels in this push:
It’s all a gamble. The companies are using huge amounts of borrowed money to see if they can turn what it is now into a gold mine that puts them in a position to capitalize on it for the next century or more.
And if the bubble pops? They file for bankruptcy, and the banks are too large to fail, which means we the people get to pick up the tab.

Image source: SciFi_MuffinMan, Celyn Kang / Unsplash (not the actual photo)
#16
It’s causing very legitimate problems in the judicial profession. I work for the courts, and attorneys have attempted to use rulings that literally do not exist to help their argument.

Image source: SnooPets1528, Getty Images / Unsplash (not the actual photo)
#17
It’s really good at coding. There’s almost no going back to writing all the code by hand.
But writing the code is usually the easiest part. The hardest part is to figure out how things should work.
AI can assist with that part too, but if you give it an ambiguous problem and let it choose, it will make some wild stuff.
So it’s good as a tool but can hardly replace humans at this point.

Image source: Nizurai, Ilya Pavlov / Unsplash (not the actual photo)
#18
I work at a big tech company in Silicon Valley that you’ve definitely heard of.
AI isn’t a smokeshow, isn’t a bubble, and is quite possibly undervalued in terms of the impact it might have on society.
AI is quite possibly the most powerful and unique technology in the history of the Valley. We’ve observed that as we scale the model size and data used to train it, the performance increases to the point where it can now outperform humans on almost every benchmark of intelligence we can come up with.
As a result, every company is in a prisoner’s dilemma to build the most powerful AI models, because the potential reward is so high.

Image source: liqui_date_me, Carles Rabada / Unsplash (not the actual photo)
#19
One dirty secret is that a lot of “AI” isn’t nearly as autonomous or intelligent as people think.
Many systems rely heavily on massive amounts of human labor behind the scenes: data labeling, moderation, cleanup, edge cases, and constant manual intervention. The public sees a polished model, but underneath there are thousands of low-paid workers correcting mistakes, filtering outputs, and patching failures in real time.
Another uncomfortable truth is that most AI products aren’t optimized for truth or long-term benefit. They’re optimized for engagement, retention, and revenue. If a model keeps users hooked, it’s considered successful even if it subtly reinforces bad habits, misinformation, or dependency.
AI isn’t “lying” to people, but the incentives shaping it are rarely aligned with human well-being. That gap is much bigger than most marketing admits.

Image source: EventNo9425, Flipsnack / Unsplash (not the actual photo)
#20
The dirty secret is that these models are not optimized for truth; they are optimized for plausibility. They are designed to predict the next word that makes the user happy, not the word that is factually correct.
It’s a kind of “Confidence Trap.” If you ask for a specific statistic or source that doesn’t exist, it will often invent a plausible-sounding citation just to be helpful. It has zero concept of “I don’t know” unless explicitly forced to admit it. It’s possible to overcome this “people-pleasing” tendency, but it requires explicit “Uncertainty Prompting” to force the model to flag what it isn’t sure about rather than guessing. I show solutions to these kinds of problems and ways to deal with them in my publications.

Image source: Beginning-Law2392, Natalya / Envato (not the actual photo)
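“Uncertainty Prompting” isn’t a formal feature of any model; in practice it usually just means wrapping a question in instructions that make “I don’t know” an acceptable answer. A minimal, hypothetical sketch of such a wrapper (the exact wording is ours, not a documented standard):

```python
# A minimal sketch of an "uncertainty prompt" wrapper. The wording below is a
# hypothetical example, not an official or standardized technique.
UNCERTAINTY_PREAMBLE = (
    "Answer the question below. For every factual claim, state whether you are "
    "confident or unsure. If you cannot verify a statistic, citation, or source, "
    "say 'I don't know' instead of inventing one. End with a list titled "
    "'Things I might be wrong about'."
)

def with_uncertainty(question: str) -> str:
    """Wrap a user question so the model is explicitly allowed to admit doubt."""
    return f"{UNCERTAINTY_PREAMBLE}\n\nQuestion: {question}"

if __name__ == "__main__":
    # The wrapped prompt can be pasted into any chat interface or sent via an API.
    print(with_uncertainty("What percentage of generative-AI pilots produce measurable ROI?"))
```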
#21
Lots of companies waste huge amounts of money using AI to solve problems that would be far easier to solve without AI, just so their execs can say ‘we use AI too’ to the market.
Genuine organisational use cases where AI is the best of all available solutions and provides true ROI are vanishingly rare. The current market around it is a huge bubble, but everyone’s too invested to let the cat out of the bag.
It’s the corporate equivalent of a tween trend with trillions of dollars behind it.
Image source: magicbellend
#22
If AI is a force multiplier, companies have two options:
1) reduce the workforce to offset this performance gain and achieve the same amount as before with fewer people
2) keep the people you have and gain more market share by leveraging the labor you already have along with the force multiplier provided by AI.
It’s telling that pretty much every company is choosing option 1. If it were everything people claimed it was, they would all be piling into option 2 and trying to win more of the market. Instead, it’s convenient cover to reduce the workforce while keeping a nice PR story.
Image source: GrayestRock
#23
Most of what companies are pushing as AI is NOT AI. It’s just automation. It’s just getting systems to talk to each other and kick off processes without human intervention.
Agentic AI bots are in essence just connected to FAQ documents: they look for keywords, spit out the answers, and automatically create Zendesk tickets (which are worked on by actual people) on the backend. Then, when the bot recognizes more help is needed beyond its prompts, it connects to REAL people to solve it.
So yes, AI and automation are changing the workforce. But they aren’t doing nearly as much as tech companies claim they are.
Image source: JGonz1224
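Stripped of the branding, the kind of “agentic” support bot described above can be surprisingly small. The sketch below is a deliberately simplified, hypothetical illustration (the FAQ entries, the create_zendesk_ticket stand-in, and the escalation rule are all invented for this example), but it captures the keyword-lookup-then-escalate pattern the commenter describes:

```python
# A deliberately simplified, hypothetical sketch of the pattern described above:
# keyword lookup against an FAQ, with escalation to a human ticket when nothing matches.
from dataclasses import dataclass

FAQ = {
    "password": "You can reset your password from Settings > Security.",
    "refund":   "Refunds are processed within 5-7 business days.",
    "invoice":  "Invoices are available under Billing > Documents.",
}

@dataclass
class Ticket:
    question: str
    note: str = "Escalated to a human agent."

def create_zendesk_ticket(question: str) -> Ticket:
    """Stand-in for a real ticketing call; a person picks this up on the backend."""
    return Ticket(question=question)

def support_bot(question: str):
    """Return a canned FAQ answer if a keyword matches, otherwise escalate."""
    lowered = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in lowered:
            return answer
    return create_zendesk_ticket(question)

print(support_bot("How do I reset my password?"))    # canned FAQ answer
print(support_bot("My account was double-charged"))  # Ticket(...) handled by a real person
```

Real products typically wrap this in an LLM that paraphrases the canned answer, but the division of labor the commenter describes (canned knowledge up front, humans on the backend) stays the same.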
#24
For every overhyped startup promising an AI revolution, there are 1,000 white-collar people quietly using LLMs daily for basic tasks. Without much thought, they decide they won’t need to hire a junior developer, expand their admin staff, or backfill the guy who retired, because they can do more and do it faster.
The dirty secret is that entry-level white-collar jobs are vanishing. Which means universities are selling a ticket for a train that’s being dismantled, and the pipeline for filling future senior roles is empty.
Image source: mechtonia
#25
It isn’t private. While some companies are trying to keep your data private, most are then using it to train on. This is particularly scary when some people are pushing it for uses like therapy, where the data is extremely sensitive.
Image source: ancalime9
#26
The consequence of people not hiring juniors as much, due to AI being able to handle much of the grunt work they would typically do, will be absolutely devastating ten to fifteen years down the line.
The old guard will retire, and we’ll suddenly have a lot of senior devs with few people to manage. When it’s their turn to leave… Well…
Image source: TheCharalampos
#27
AI can’t fully replace a software engineer… not even a junior @.@. In order for any company to have a shot at replacing devs, they need a strong development process that is well documented and comprehensive, which very, very few companies have.
Image source: Mem0
#28
AI is not new. People act like LLMs (the things that power ChatGPT and the like) are the be-all and end-all of AI, but it’s really just the first “viral” AI tool. AI has been critical to nearly every product and service you’ve used in the last 15 years; everything uses recommendation systems and computer vision and speech recognition. LLMs are an incredible leap forward in text generation (and now image generation), but those are probably two of the least practically useful applications of AI.
AI engineering has been my full-time job since 2016, and the biggest difference since COVID is not what we can do, but how management wants us to do it.
Image source: nowadaykid
#29
AI companies don’t make money, and most of the startups are scams to get VC funding and bounce. Also, just because AI can’t actually do your job doesn’t mean your boss won’t fire you and try to replace you with it. It’s not the AI convincing them; it’s the salesman.
Image source: 800Volts
#30
Many companies feel the need to incorporate AI to “stay competitive” despite having no idea what to do with it, but that isn’t stopping them.
Image source: ActionCalhoun
#31
We are rapidly reaching a point where AI is training on AI-generated content. Because the internet is getting flooded with AI text, the new models are learning from the ‘mistakes’ of the old ones. It’s a feedback loop that leads to ‘model collapse,’ where the AI eventually becomes a distorted, nonsensical version of itself because it hasn’t seen fresh, human-created data in months.
Image source: Ok-Bathroom273
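The feedback loop described above is what researchers call “model collapse,” and the mechanism can be seen in miniature without any neural network at all. The toy simulation below (our own illustrative sketch, not taken from any particular paper) repeatedly fits a simple distribution to samples generated by the previous generation’s fit; with no fresh human data, the estimated spread tends to shrink toward nothing, which is the degenerate behavior the commenter is pointing at:

```python
# A toy illustration of "model collapse": each generation fits a Gaussian to data
# sampled from the *previous* generation's fit, never seeing fresh human data again.
# Purely illustrative; real LLM training is vastly more complex.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0      # generation 0: the "human-created" distribution
n_samples = 20            # each generation trains on a small synthetic sample

for generation in range(1, 301):
    synthetic = rng.normal(mu, sigma, n_samples)   # data produced by the previous model
    mu, sigma = synthetic.mean(), synthetic.std()  # the next model fits only that data
    if generation % 60 == 0:
        print(f"generation {generation:3d}: estimated spread = {sigma:.4f}")

# The spread tends to decay toward zero: later "models" can only reproduce an
# ever-narrower caricature of the original distribution.
```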
#32
That it looks like we’ve come up with a way to waste even more energy (and now water) even faster than using bitcoin for McDonald’s.
Image source: Dapper-Network-3863
#33
I’m on the construction side and a lot of the contractors building these AI data centers have no idea what they’re doing. The demand is for these data centers to be built as quickly as possible. But there simply aren’t enough competent general contractors in the market that can do it. So that leads to a bunch of GCs and subcontractors with no experience in data center work building these projects.
That means a lot of mistakes are made and rework is required in the field, making these projects two or three times more expensive to build than originally budgeted.
Image source: JonathanStat
#34
There are very few people pushing AI who have a clue how to use it. Personally, I think it’s going to be like the metaverse, but instead of just Facebook buying in, the whole world is.
Image source: Level_Macaroon2533
#35
I think it is super inefficient. They talk of wanting more data centers all over, yet we have climate change, etc. And nobody wants nuclear power; they keep taking over farmland for solar fields, which seems insane. I have a feeling this will be a fad and then explode like the dot-com bust.
Image source: Network-King19
#36
As an outsider looking in, I’m thinking that the biggest secret about AI/AGI is that it’s an out-of-control mess that nobody really knows what to do with.
Image source: bobvagabond
#37
The tech companies are financing each other to keep afloat; it’s a scam for more money.
Image source: LostTiredWanderer
#38
Everything you’ve put into ChatGPT and most larger AI bots is searchable by anyone, and there is little legislation preventing companies from providing it to law enforcement on the backend.
Image source: justsomedudeusa
#39
The ability to measure its impact and ROI within the majority of organizations is basically non-existent and a massive afterthought.
Whenever Copilot pops up in Excel, Outlook, or PowerPoint, it is counted as if the employee actually used it; they equate that to time saved and therefore dollars saved in efficiency. The numbers reported by the tool’s counter (Viva) show millions of dollars in savings in larger organizations, when in reality it isn’t even close to that…
Executives who spend for their organization to have ‘AI’ are simply doing so so that they can tell their boss that they are adapting and using AI, and their boss can say the same to their boss, etc., etc. It is a solution looking for a problem.
Also – 90% of the agents we build cost the company $300k for a team to implement, and it is just us hooking up a query to a knowledge source to get information they probably would’ve found by themselves a tad bit faster…
Image source: FlaniganWackerMan
#40
Progress in AGI has very much plateaued because of LLMs. Three years ago, the big ML conferences would have dozens of new ideas and fundamental research presented; now it’s all minor modifications and micro-optimizations of existing attention-based architectures. And the benchmarks reflect this, sadly.
Image source: Active_Change9423