“Democrats fall in love, Republicans fall in line.”
Careful, Lemmy seems to think you won’t be able to use the power button on that new Mac Mini.
On iOS I have it set up as a text replacement. If I type ?! it is replaced by ‽
You certainly like to see a lot of your own words. You can dismiss me and keep talking. I don’t care anymore.
(I know you deleted this but I think it’s worth referring to.)
You are accusing me here, again, of dismissing you while simultaneously saying I type too much. These aren’t compatible.
Again, engaging with you and disagreeing isn’t dismissal. It’s conversation. It’s discussion.
Here, how about this:
I thought the video you linked was entertaining. It’s not my thing, but I can understand enjoying their style. And the claims they make are interesting for the value they hold in detecting simple, low-hanging AI fruit. I’ll grant you that.
But what I’m trying to tell you is that such a simple solution isn’t a robust one. It may work for, as I said above, low-hanging fruit. Fine. But again, if AI detection were that simple then people wouldn’t be trying to figure out how to consistently detect AI as the target continues to shift.
What I now find interesting is that you have shifted—when I addressed your video and your arguments—to attacking me and my writing rather than what I wrote. You downvote every reply I make and then try to act high and mighty about how I’m dismissing you or how I’m punching down. You dismiss me and then accuse me of it.
Anyway, I hope you have a good day.
Why on earth are you taking this so personally? We’re talking about AI image generation, why is your pride involved?
I asked a question about using a different method of detecting AI images, based on the fact that color brightness does still average out and base values are usually identical, and was met with condescension and incorrect information from you about how color and pixel math work.
You asked a question about why tools don’t use an extremely simple method of detecting AI images. I said that wouldn’t work. Initially I misunderstood your question and my response was overly simple, but it wasn’t wrong. Simple methods of detecting AI images don’t work for all AI images.
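(As an aside, the kind of "simple" check being argued about here is easy to sketch. This is a toy illustration only, not anything a real detector ships: the function name and the mid-gray thresholds are invented for the example, and numpy is assumed.)

```python
import numpy as np

# Toy version of the naive detector under discussion: flag an image
# whose average brightness lands in a mid-gray band. The thresholds
# here are made up for illustration, not taken from any real tool.
def naive_midgray_check(pixels, low=110, high=145):
    """Return True if mean brightness (0-255 scale) is near mid-gray."""
    return low <= float(np.mean(pixels)) <= high

# A flat mid-gray image trips the check...
print(naive_midgray_check(np.full((64, 64), 128)))  # True
# ...while a mostly-dark image doesn't -- and plenty of real AI
# outputs are mostly dark, which is the point.
print(naive_midgray_check(np.full((64, 64), 20)))   # False
```

A single scalar threshold like this is exactly the kind of low-hanging-fruit test that some AI images pass and others fail.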
You started with dismissal and haven’t gotten better.
I didn’t dismiss you. If I had I wouldn’t have bothered to respond. You hadn’t presented much besides a vague question initially, and I disagreed with it.
When you came back with more I presented my position, that AI image generation is much more varied and complicated than your question and YouTube video assume. Just because I’m disagreeing with you and providing context doesn’t mean I’m dismissing you. Dismissing you would be to say, “No, you’re wrong, go away.” Not to explain why the simple method you’re talking about isn’t feasible, broadly, for the entirety of AI images.
If I wanted to dismiss you, I wouldn’t bother wasting my time on a response.
It’s been an argument and an uphill battle to point out that this is true.
And you’re accusing me of clinging to my position. 🙄
I wanted a conversation and you wanted to punch down. You still want to be from the pulpit of right because you like your toy.
Where on earth did you get the impression that I want to be right because I like AI image generation? Or that I wanted to punch down?
Someone disagreeing with you and responding to your argument without accepting it isn’t dismissal, it isn’t punching down, it isn’t condescension. It’s engagement with what you’re saying. Just because I don’t agree with you doesn’t mean I think I’m better than you or smarter than you or anything like that, it just means I think I’m right.
So because you “make” AI generated images you are saying that they are magical and don’t follow the rules of their generation?
That’s what you got from what I wrote?
There’s nothing “magical,” but the variety of AI images that can be produced belies the simplicity of their detection. Which has been my point this whole time.
They are based on noise maps and inferred forward from there.
There are an infinite number of methods to diffuse noise into an image, and a change to any one of a wild number of variables produces a different image. Even with the same seed and model, different noise samplers can produce entirely different types of images. And there are a LOT of different samplers. And thousands of models.
Then there are millions of LoRAs that can add or remove concepts or styles. There are ControlNets that let a generator adjust other features of the image generation, from things like poses to depth mapping to edge smoothing to color noise offsets and many many many more.
The number of tweaks that can be made by someone trying to generate a specific concept is insanely high. And the outputs are wildly different.
I don’t pretend to be an expert in this subject, I’ve barely scratched the surface.
In the video I linked they even talk about how the red, blue, and green maps have the same values because it started with a colorless pixel anyway. A real sensor doesn’t do that.
No, they give an extremely simple explanation of how noise maps work, and then speak as if it were law, “You’ll never see an AI image that’s mostly dark with a tiny little bit of light or mostly light with a tiny little bit of dark.” Or “You won’t have an AI photo of a flat sunny field with no dark spots.”
But that’s simply not true. It’s nonsense that sounds simple enough to be believable, but the reality isn’t that simple. Each step diffuses further from the initial noise maps. And you can adjust how that happens, whether it focuses more on lighter or darker areas, or on busier or smoother areas.
Just because someone on YouTube says something with confidence doesn’t mean they’re right. YouTubers often scratch the surface of whatever they’re researching to give an overview of the subject because that’s their job. I don’t fault them for it. But they aren’t experts.
(Neither am I, but I know enough to know how insanely much there is that I—and they—don’t know.)
Everything they present in that video as though it were law or fact has already been thought of by people who know far more about the subject than these YouTubers (or me).
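(The channel-equality claim, at least, can be stated precisely as code. This is a toy check, with the helper name invented for the example and numpy assumed; the argument above is that real generator outputs routinely fail it.)

```python
import numpy as np

# Toy check of the video's claim: are the R, G, and B planes of an
# image exactly identical (i.e., is it effectively grayscale)?
# Helper name invented for this example; numpy assumed.
def channels_identical(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return bool(np.array_equal(r, g) and np.array_equal(g, b))

gray = np.full((4, 4, 3), 7)      # three identical planes
print(channels_identical(gray))   # True

color = gray.copy()
color[0, 0, 0] = 9                # nudge a single red value
print(channels_identical(color))  # False
```

Any image with actual color fails the check, so as a detection rule it only catches truly colorless output.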
I did mention earlier that this sort of thing might be true for Dall-E or Midjourney or other cheap/free online services with no settings the user can tweak. AI images generated with as few steps as possible, with as little machine use as possible. They will be easier to spot, more uniform. But those aren’t all there is of AI images.
Another thing to consider: this technology is, at any given moment, at the worst it’s going to be going forward. The leaps and bounds that have been made in image diffusion even in the last year are remarkable. It is currently, sometimes, difficult to detect AI images. As time goes on, it will only become harder.
(Which your video example even says.)
My point is that AI images don’t differ significantly enough from non-AI images for simple rules to reliably tell them apart. “AI images” is an extremely broad category.
If you are narrowing that category to, say, “all Dall-E images” or “all Midjourney images” or something, MAYBE. They tend to have a certain “look.” But even that strikes me as unlikely, and those are just a slice of the “AI images” pie.
As someone who has played around with Stable Diffusion and Flux, the “average color” of an image can vary dramatically based on what settings and models you’re running. AI can create remarkably real-looking images with proper variance in color and contrast, because it’s trained on real photos. Pixels, as I said, are pixels.
That’s not to mention anime or sketch or stained glass or any other medium imitation. And of course, image-to-image with in-painting, where only parts of an image are handled by the AI.
My point is that if there were overtly simple answers like, “all AI images average their color to a beige,” then there wouldn’t be all this worry about AI images. It would be easy to detect them. But things aren’t that simple, and if you spent a small amount of time looking into the depth that generating AI images has gained even in the last year, you’d realize how absurd a simple answer like that is.
Either that’s not true of AI images or it’s true of all images. There aren’t answers that simple to this. Pixels are pixels.
They can? My PS5 has been on the included vertical stand the entire time I’ve owned it, what risk am I ignoring?
I remember with the Xbox 360 the only issue was if you switched from vertical to horizontal or vice versa while the disc was being read.
I don’t know if there is a version of Poe’s law for Apple fanboys, but your comment makes me think there should be.
Roflmao
I don’t own a Mac Mini, and never will. I’m not trying to defend Apple.
But I’ll use my work laptop as an example. I have external monitors, so I never open the damn thing except on the rare occasions I need to use the power button. This happens infrequently enough that it gives me a pretty good notion of how often people need the actual power button on a modern computer.
If the button can be reached without turning over the device or even picking it up, as it sure appears, what’s the problem? Other than that it’s an Apple device and people love to hate on Apple devices.
How often do you need to actually turn it on? Won’t it sleep? You should pretty much only need the power button after moving the thing. You can restart from within the OS if you need to.
Twitter did it before Reddit, IIRC. It was part of the conversation around API fees for Reddit.
There’s no shame in crymaxing.
It is, but no educated person qualifies themselves by that name, as it means nothing.
People seek to label themselves in the most accurate category not the broadest one.
I’m not sure that’s true. If you ask someone what they do for a living and they say, “I’m a doctor,” you don’t say, “I doubt it. A real doctor would say, ‘I’m a cardiovascular surgeon,’ or ‘I’m a pediatrician.’” We adjust our labels for our audience.
I wouldn’t be surprised to find a biologist or a climatologist who might just say, “I’m a scientist” to a broad audience. Not that they couldn’t use the more accurate label, just that they don’t necessarily have to.
Scientist is the broader category though. If a square says “I’m a rectangle” they aren’t lying.
The way I handle this is to parse them differently. They mean the same thing, but “I couldn’t care less” is sincere and “I could care less” is sarcastic.
Sort of like, “I suppose it’s possible that I could care less about that” reduced to the phrase.
Because both phrases obviously communicate the same meaning, a lack of care, the issue for me isn’t in the understanding but in the parsing. So I had to come up with a way to parse it as sarcasm so it doesn’t bother me.
Like when someone says, “I’ll try and be there” my brain, mildly traumatized by really good English teachers in my youth, screams, “YOU’LL TRY TO BE THERE.” But lately I’ve been making an effort to interpret the “and <verb>” following “try” as an alternate form of the infinitive, since it’s so readily accepted and common in spoken English. We already construct other verbs that way anyway (e.g., “I’ll go and do that”).
I…might have a touch of the ‘tism. It wouldn’t surprise me. 😅
lol I feel like I’m living on a different planet.
😂 Are you just now learning that people experience different things in life?
I don’t use a shoehorn, and I’ve finally embraced the Skechers Slip-Ins lifestyle and am loving it, but shoehorns would definitely have made my life easier in some respects.
Ex-boss, Michael Eisner.
I strongly disagree. There are many times where someone else’s death is something to hope for. I think if you try you can think of a few relatively easily.