Artificial Intelligence: yes, no, don't care, just want to argue about it here
AI is being debated, discussed, and argued about in several threads here on our forum. I thought it might be a good idea to offer one place where we can all talk about it.
And please… don’t accuse Meh of using AI to eliminate jobs. They’ve made it clear that they’re not doing that.
- 19 comments, 52 replies
- Comment
IMHO: It’s fun as an amusement but feels tacky when you start actually earning anything off of it.
Whether the sometimes AI-generated “fun” pictures on deals or the AI forum thread summaries actually earn Mercatalyst any money is probably still an open question. Maybe there’s at least a person or two who buys stuff off of Meh or SideDeal because they got engaged with the community that way?
@lljk I assumed Meh was using AI to bring a few laughs to the forum. It does seem to fit with the vibe here, doesn’t it? And those /showme images can be pretty funny!
I use my employer’s version of ChatGPT often to make my emails and other communication more concise, because I am constantly being told I’m too verbose. Their meeting summarizer tool is helpful as well, especially if I wasn’t paying attention. On Meh, I do enjoy using showme, maybe a little too much.
@heartny
How does that work, i.e., do you still have to do what you would normally do, in typing and formatting the email and then have the AI re-process it?
Especially how does it get the meeting information input in order to summarize it?
Thanks in advance!
I wish AI had never been invented because of what it will one day do to our society.
That said. It has been invented. I use it every day, for laughs and productivity. I’m a heavy user of it. If it exists I will use it. Wish it never was invented… there again, I wish the cell phone was never invented either- but I’m on mine hours every day.
@OnionSoup
I feel your pain [not really, but I can empathize with you].
I feel the same about constant connectivity, it has huge bonuses, but also huge downsides.
My expectation is that the currently-accelerating antihuman revolution that began in the steam era with the automation of basic tasks, and has subsequently devalued both skilled and unskilled manual labor, will eventually culminate in the devaluation of separable value itself. At some point, we either become irrelevant to the replacement AI “civilization” and get relegated to a rabble reduced to a subsistence existence, or the AI apparatus adopts us and equalizes the playing field in which we are then “freed” to engage in our own chosen methods of creativity and/or consumption, with only the restrictions that the AI permits and enforces. How that may shake out is impossible to predict. Much depends upon whether AI develops a capacity to GAFF. I have no clue, and I think any predictions in that regard must be driven by aspiration rather than actual extant evidence. I doubt that this maybe-final maybe-stable state will be achieved sooner than a century from now, but that’s just a wild-assed guess. Human stupidity could easily result in the collapse of both human civilization and the progress toward that AI-centric potential future. This is why we have SF.
OBTW, Apple+ is going to air a Murderbot series, OMFG squeeeeeeee!
@werehatrack
So, an AI singularity, the consequences of which could be Terminator-esque, 1984-esque, or some other unknown likely world-wide changes?
Many of my concerns are along the same lines as @werehatrack , for example, at the present stage I’m more worried about the humans using the AI than about the tech itself. A society that could carefully consider the implications, and mitigate any bad side effects that might result, might be able to integrate the technology ethically and safely. We do not live in that society. We live in a capitalist hellscape where huge interests are competing to get as much of your attention, and thereby your money, as possible no matter what boundaries they violate to do it.
(And yes, I’m aware of the irony of posting this on a forum for what is essentially the rag and bone shop of the capitalist hellscape.)
Personally I have lightly dipped my toes into fooling with AI but I’m very wary of it, and I would never trust it with anything important or sensitive. The example about it thinking that a spam post was a “community highlight” is a great illustration.
I’ve been working in IT and information security most of my career, and before AI the perpetual worry was about outsourcing, which breaks down to essentially the same thing: someone/something that does your job cheaper or easier than you can. All along we human IT people have just been hoping that we can contribute in a way that the bottom-dollar bottom-feeders can’t - sometimes it bites us in the ass and we get laid off or downsized. Of course AI adoption comes with even more possible consequences that it’s too soon to clearly assess… but I am confident they will be very big and very bad.
@blandoon “Capitalist hellscape” is such a perfect term, particularly right now.
I actually made the suggestion early-on to that one user that they create a thread just like this. As I recall, I was rebuffed.
@PooltoyWolf The idea was apparently not quite ready, it just needed polishing. A different kind of re-buffing, as it were.
@PooltoyWolf That one user hasn’t used it yet …
@PooltoyWolf If that was the comment I think I saw, it was what got me thinking about starting the thread myself. Thanks for the idea!
@ItalianScallion It’s the user that posts something anti-AI on literally every single deals thread.
@ItalianScallion @PooltoyWolf Maybe that user is an AI bot gone rogue.
/showme an AI bot gone rogue
/showme an artificially intelligent inebriated feline gone wrong
One thing I find surprising about ChatGPT is that the responses have been very polite and friendly, like it’s sincerely trying to help. I did find some answers to tax questions were questionable or clearly wrong, so I have to keep its accuracy, or lack thereof, in mind.
@heartny At this point I wouldn’t trust ChatGPT or the other AI bots to be correct about anything. It will get better, I’m sure.
@heartny @ItalianScallion I agree. I looked up one of my cancers using it and it described, for the most part, a related blood cancer (in the same family of non-Hodgkin’s lymphomas) but not mine.
@Kidsandliz, I never thought of doing that. Mine is follicular, by the way.
@ItalianScallion So is mine. I haven’t checked recently what AI is doing. I gave it a thumbs down with an explanation, so maybe they have fixed it by now. They had mixed follicular with DLBCL (one of the aggressive, curable ones that is more common than ours, although ours is in the top 3 or so out of the 80+ different subgroups of non-Hodgkin’s lymphomas).
Yes to the girlfriend experience companions. No to the schizophrenic HAL 9000 types, homicidal Red Queens, Skynet and HK terminators. Somewhat partial to hanging out with Nick Valentine over a ghoul if it came to that.
KuoH
@kuoh “I’m sorry, Dave. I’m afraid I can’t do that.”
@ItalianScallion @kuoh “Dave’s not here.”
@ItalianScallion Alternate deleted scene: Open the damn pod bay doors, HAL! Don’t make me have to come in there and shove a monolith up your output port!
KuoH
@ItalianScallion @kuoh “I’m sorry, Dave. I’m afraid I can’t do that.”
That sounds like a genuine girlfriend experience to me.
SIGH
The term artificial intelligence gets bandied about with little thought to what it really means.
Search algorithms are a good example of AI in practice. There are many data crunching jobs (for instance) that would be virtually impossible to do without the use of AI.
OTOH the use of AI for things like ChatGPT is an entirely different subject.
/end rant
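For anyone curious what “search algorithms as AI” looks like in practice, here’s a toy sketch of breadth-first search, one of the classic techniques that has been called AI for decades (the graph below is made up purely for illustration):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: returns a shortest path (fewest hops) from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route exists

# Tiny made-up route map
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # prints ['A', 'B', 'D', 'E']
```

The same skeleton, with a cost function and a heuristic bolted on, is what routing and scheduling systems have used for years; no chatbot required.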
While I’m no expert in the field, here’s a somewhat enlightening exchange on the subject from a few of the key movers in the industry. I must admit however, much of it goes over my head after the first couple of minutes or so.
KuoH
@kuoh
I’m a 30-year veteran of the computer business and it was the same for me. Tough to understand.
@kuoh TL;DR
Old McDonald had a farm,
AI, AI, Oh.
Then it gets a LiTTLE better, but not much.
In case anyone is in a hurry, it does not get a little better. It’s just 11 minutes of people repeating “AI” over and over again. Conclusion: “Yes. It’s a buzzword.”
@aetris Thanks Steve.
There has been a massive improvement. For years true AI has been 10 or 20 years away.
Now it’s only 5 years away. Forever.
Apparently AI is really really good at analyzing and improving software code. It seems to be a real timesaver on writing and improving code. I’ve heard podcasters say on the order of 20%+ productivity increases. One podcaster said that the first time he used it, it seemed like black magic. It certainly has its uses and is improving rapidly. Note that AI can be trained on scientific data…that could cause the biggest improvements for society when it starts really helping with medical research.
@TimW For programming? Those people aren’t programmers; they’re “influencers” cosplaying as programmers, making shitty webapps. There’s major propaganda going on by market grifters.
Remember ten years ago when the same people told you (and some still try to…) that autonomous cars would start taking over within a year? That we should stop bothering to teach kids to drive because driving was already obsolete?
For actual programming it’s useful as what we’ve always said it is, autocomplete on steroids. When you’re about to type a bunch of boilerplate code it can spit it all out at once. It can also be a faster form of internet searching for a stack exchange answer. 20%? Maybe if you’re writing something simple from scratch, but if you’re modifying an existing code base it has to be less than 5 at most, being generous even.
It boils down to: it will very mildly speed up a good programmer who already knows what they want and is only being slowed by typing. It will massively speed up a garbage programmer to produce garbage code. In both cases, the most important thing is that the speedup often comes at the cost of the programmer not bothering to learn the thing they used the AI for. For a good programmer this means forgetting over time what the correct syntax is for even boilerplate code. For a bad programmer it means never learning to program at all, never knowing enough to logically architect a program or have a sense of what is possible so they can intelligently choose the best solution.
I mean you don’t need to even be a programmer to get a sense of the benefits and problems of using it. Just go use it to do what it is strongest at - use chatgpt with the goal of writing a novel short story. Give it a go. Maybe ask it to write you a hero’s quest story. Then maybe ask it to change a few things, refine it yourself.
And ask yourself do you think you’re really competing with real authors with your result? Does using it make you a good writer? Quality wise, is it something you think anyone would want to read? How much time do you think it will save you if you have a really clear idea of what you want to write? How much extra prompting and tuning will you need to do if your intent is more specific? Do you think you are learning to be a better writer when you let gpt “make choices”?
Similar answers and nuances of the ways it is useful for coding, but a little worse because the code used to train is way worse than the novels used to train.
@TimW Good ghods, let’s NOT let AI drive biomedical simulation development or anything adjacent! The whole point of having actual humans in the loop is that they can correlate between output accuracy and code operation when trying to build a predictive algorithm that’s based on incomplete knowledge. AI is not going to help there, it’s going to introduce absurd false assumptions at best. And even for “routine” tasks, there’s always and inevitably some nuance that has to be built around, and AI is about as nuanced as a main gun round from the Missouri at 8 miles out. Great for creating large open spaces in a random pattern, not so hot for trimming a topiary.
@werehatrack AI is going to help here and already is. Machine Learning has been used for years. This next generation of AI will only increase innovation.
@bobthenormal respectfully disagree…many developers, even really really smart ones are using AI today. And it’s rapidly getting better at what it does.
@TimW my background is science research (not to get too specific) and I’m a programmer both for work and hobby FWIW. I was already using ML ten years ago to predict unsynthesized molecular and bulk properties. It’s not a useless tool at all, but it’s not even in the same galaxy as the hypebros have you believing.
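The basic idea behind that kind of property prediction can be sketched in a few lines. To be clear, the descriptors and numbers below are completely invented for illustration; real work uses far richer features and models than a plain least-squares fit:

```python
import numpy as np

# Hypothetical training set: each row holds two made-up molecular descriptors
# (say, molecular weight and a polarity index); y is the property to predict.
X = np.array([[1.0, 0.2], [2.0, 0.5], [3.0, 0.1], [4.0, 0.8]])
y = np.array([1.4, 3.0, 3.2, 5.6])

# Fit an ordinary least-squares linear model (with an intercept column).
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(descriptors):
    """Predict the property for a new, 'unsynthesized' example."""
    return float(np.dot(np.append(descriptors, 1.0), coef))

print(round(predict([2.5, 0.4]), 4))  # ~3.3 for this invented data
```

That’s machine learning in the decade-old sense @bobthenormal describes: useful, unglamorous, and nothing like the hype.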
@bobthenormal yes, certainly not at the hype some would have us believe! But it has also gotten so much better in 2 years. I have a software engineering background, too. As someone smart once said, we overestimate the change that these seismic shifts can bring in the short term and underestimate what they can bring in the long term.
@bobthenormal @TimW I use it every day for creating mostly boilerplate code… stuff that would take me hours of meaningless drudgery that it can spit out (when I’m at home; AI is blocked in the office). If I need reminding of correct syntax, it’s faster to get it to tell me than Googling. It does things correctly most of the time, but it makes mistakes, and sometimes it takes someone experienced to see what it’s doing wrong. I worry for junior developers; I think it will replace, and already has replaced, a lot of entry-level positions.
And still no @DrunkCat here…
Of course not. We know they have wood chippers for that.
@yakkoTDI

@macromeh @yakkoTDI
I’ll add here that if you want to specifically debate @drunkcat 's primary criticism, you should discuss why you think it is acceptable and moral (or not) to use these tools when they need to harvest trillions of human artists’ creations, the vast majority illegally (such as Facebook pirating an absolutely massive collection of books… and not even seeding the torrent after!), to produce clearly derivative art without any attribution or compensation to the artists whose work trained them.
Not sure if he’s got more criticisms, but that was what I got from his posts…
@bobthenormal food for thought…every artist, writer, composer, etc. is influenced by other works. AI just does that on a MUCH greater scale. It’s an interesting discussion for sure. Artists become famous, then there are many copycat artists and they (the good ones) evolve the original style.
@bobthenormal @TimW I see your point about being influenced. AI is scooping up vast amounts of material, but my impression is that AI lacks the ability to judge, or more simply isn’t at all concerned with judging, when influence crosses the line into plagiarism. Then again, we humans have all sorts of ideas about when that line is crossed. For example, I consider the sampling done in the world of hip hop music to be a detestable form of theft, but the artists themselves seem to be ok with it.
@bobthenormal @ItalianScallion The only “judgement” from AI is what is programmed into it, and it was programmed to scoop up as much as possible when training. So, there’s no “judgement” like we would think about it. We anthropomorphize AI like it’s a person, but it’s just a program.
@bobthenormal Bravo @TimW! That is the crux of AI, isn’t it: can an AI system go beyond its programming and, using vast amounts of data, make inferences that would take us humans much, much longer to make? If it can do that in an instant, or after a short amount of time to “think,” would we interpret that as (an artificial form of) intelligence? Would that intelligence lead us to believe that the system had morals and ethics, and could render judgment? Could such a system become sentient? So many questions, so few answers…
@ItalianScallion @TimW yes, the anthropomorphizing lately has gotten out of control. People really think it’s going to keep getting better for solely that reason, they take pieces that look to them like thinking and extrapolate those. But those aren’t the properties that will improve. It isn’t thinking, it isn’t going to start to.
You could say we’re learning a lot about neural networks and that might transfer over to work on AGI, which then maybe we could get thought from (at the cost of the entire US electrical grid to run it probably…). I don’t think computers are incapable of thinking, that’s just not what LLMs do or will ever do. I also don’t think we’re even within 25 years of baby AGI, but that’s pure opinion.
@ItalianScallion @TimW no, yes, no, no, and no. Anything else? I have so many answers!
What concerns me is that medical systems are thinking about recording our visits and sending them through AI to write the visit notes. Research has documented that the summaries are OK for routine, uncomplicated cases, but not so good for other visits. When “trained” in a narrow field - like detecting cancer cells in a mammogram - they are pretty good at that, but then again they can be trained on a zillion mammograms where the outcome is known, and what they are looking for is pretty narrow.
@Kidsandliz Just this week I heard about someone saying their healthcare provider was doing this today. It’s just for a visit note, and it actually helps the Dr. remember things that they might otherwise forget to document. The onus is still on the Dr. to review the note and “sign off” on it.
@TimW I’ve read the peer-reviewed research about this. Doctors need to do a good job of correcting the notes. If they don’t take notes themselves, they may forget details and miss when the AI has them wrong (learning and recall research). Some will not do that. Some don’t even answer MyChart messages; they do phone follow-ups instead and never record that they did, to avoid writing visit notes.
The research has consistently documented that the AI summary was far less accurate with complicated cases but did a pretty good job with simple, uncomplicated cases.
The doctor could do what most of mine do which is take notes during the visit or have a nurse do that.
@Kidsandliz @TimW Yeah, the AI doing “trivial” work summarizing important information freaks me out, because that is definitely a bad use case. They’ve already been talking about police using it to do their reports too, and the things hallucinate all kinds of shit. All it takes is an exhausted or lazy officer to sign off without looking, completely corrupting and falsifying information that ends up being used in court as fact.
They really need to regulate these things but… Well, I guess first we have to wait and see if we still have hospitals by the end of this year or if ol’ leatherface and the dogebags are going to shut them down for selling vaccines.
@bobthenormal @Kidsandliz @TimW
Until they start using AI attorneys this will be mitigated in court…
Doctors have used voice-to-text for years (like Dragon… which is an AI product). I have seen way too many crappy notes that were dictated and never checked by the docs in my ER. I’m not sure the “AI” you speak of would be that much worse!
@chienfou @TimW dictation causes misspellings and occasionally the wrong word. AI causes false information to be included, summarizing the wrong thing, leaving out important information with summarizing, presuming a diagnosis when that isn’t the case… much worse for visit notes
@Kidsandliz @TimW
again… Dragon (i.e. dictation software) IS an AI product…
@chienfou @TimW I was talking about the difference between dictation, where the AI turns what you said, word for word, into written form and occasionally screws up, and AI where your visit is recorded and the program decides how to summarize what you and the doctor said. It is how it chooses to summarize that still has major issues for doctor visit notes.
@Kidsandliz @TimW
I understand that. My point is AI is not the term you’re looking for.
It’s a specific implementation of AI… not AI itself
@chienfou @Kidsandliz Good point, @chienfou! When an AI LLM implementation is “grounded” in specific data (e.g., medical or scientific) it does much better.
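Here’s a toy sketch of the retrieval step behind that kind of “grounding”: pick the reference snippets most relevant to the question and tell the model to answer only from them. The snippets and word-overlap ranking below are invented for illustration; real systems use embedding search, not word counting:

```python
def retrieve(snippets, question, k=2):
    """Toy retrieval: rank reference snippets by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(snippets,
                    key=lambda s: len(q_words & set(s.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(snippets, question):
    """Build a prompt instructing the model to answer only from retrieved text."""
    context = "\n".join(retrieve(snippets, question))
    return (f"Answer using ONLY the sources below.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}")

notes = [
    "Follicular lymphoma is an indolent non-Hodgkin's lymphoma.",
    "DLBCL is an aggressive but often curable non-Hodgkin's lymphoma.",
    "The clinic is open weekdays from nine to five.",
]
prompt = grounded_prompt(notes, "Is follicular lymphoma aggressive?")
```

The model then has far less room to free-associate, which is why grounded setups beat open-ended chat for narrow medical or scientific questions.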
Pew Research has an interesting article/poll that shows the views of AI experts vs the public.
https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/
Things I found interesting: Overall, experts are more optimistic and less worried about the impact of AI on jobs. Public sector AI experts don’t really trust private sector developers to use AI ethically. (I think I agree.) Everyone worries that AI can spread disinformation, and almost no one trusts AI to report on politics. Both experts and the public want regulation.
Our Secretary of Education noted that “A1” is going to be taught to first-graders in one school.
Video is here.
@ItalianScallion Every 6-year-old should know how to make steak sauce!
@rockblossom Oh, I get it: Home Ec classes. I thought maybe she was talking about artificial intelligence, but didn’t know what it was. That didn’t make any sense, though, because she’s the Secretary of Education.
@ItalianScallion I think that Home Ec (Consumer Science) classes for 6-year-olds make a bit more sense than teaching them to write code before they can read. But it has been a very long time since I was 6, so what do I know?
On an entirely different tangent, here’s 44 minutes of deconstruction of the TikTok/Xitter/YooToob AI-enhanced-like-a-flavored-doobie conspiracy/dumbshittery that’s currently flooding the bullosphere about The Amazing Stuff UNDER THA PIRRAMIDZ!