A new report warns that the proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos.
Images, yes, but mixing concepts is a mixed bag. Just because the model can draw, say, human faces and dog faces doesn’t mean it has the understanding necessary to blend those concepts. Without employing specialised models (and yes, of course the furries have been busy), the best you’ll get is facepaint. The pope at a beach bar doesn’t even come close to exercising that kind of capability: the pope is still the pope, the beach bar is still the beach bar, and a person is still sitting there slurping a caipirinha.
I’ll just leave this link here as a counterpoint (somewhat NSFW):
https://www.reddit.com/r/StableDiffusion/comments/11un888/flamboyant_origami_fgures/
A whole lot of weird stuff can be created by bashing things together with AI. The beauty of AI is, after all, that you can “edit” with high-level concepts, not just raw pixels.
And as for humans and dogs: https://imgur.com/a/TdXO7tz
That’s not concept mixing, and it’s not proper origami either (paper doesn’t fold like that). The AI knows “realistic swan” and “origami swan”, meaning it has a gradient from “realistic” to “origami”; crucially, that gradient changes only the style, not the subject. It also knows “realistic human”, so follow the gradient down to “origami human” and there you are. It’s the same capability that lets it draw a realistic Mickey Mouse.
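That style “gradient” is essentially interpolation in the text encoder’s embedding space. A minimal sketch, using toy vectors as stand-ins for real prompt embeddings (real CLIP embeddings are hundreds of dimensions); the slerp formula is the one commonly used for embedding and latent interpolation in Stable Diffusion workflows:

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical linear interpolation between two embedding vectors."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Vectors are (nearly) parallel; fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Hypothetical stand-ins for the prompt embeddings of two styles of one subject
realistic_swan = np.array([1.0, 0.0, 0.2])
origami_swan   = np.array([0.3, 1.0, 0.2])

# Walk the gradient from "realistic" (t=0) to "origami" (t=1)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, np.round(slerp(realistic_swan, origami_swan, t), 3))
```

The endpoints reproduce the original embeddings exactly, and intermediate `t` values stay on the arc between them, which is why the blend reads as “same subject, sliding style” rather than two subjects mashed together.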
Its understanding of two different subjects, say “swan” and “human”, however, doesn’t mean it has a gradient between the two, much less a usable one. It might manage to match up the legs and blend those a bit because the anatomy roughly lines up; a beak is a protrusion, so it might try to match it with the nose. Wings and arms? It has probably seen pictures of angels, and now we’re nowhere close to a proper chimera. There’s a model specialised on chimeras (gods, is that ponycat cute), but flick through the examples and you’ll see it’s quite limited unless you get lucky: you often get properties of both chimera ingredients, but they’re not connected in any reasonable way. That’s different from the behaviour of base SDXL, which is far more prone to bail out and put the ingredients next to each other. If you want a model to blend things reliably, you’ll have to train a specialised one on appropriate input data, like e.g. this one.