

Here’s a video about the manga if you’re interested.
He and I had this talk two years ago.
I keep forgetting how old Lucky☆Star is. It shows scenes of life from over twenty years ago.
It seems like what was said here worked. Thanks to @rumschlumpel@feddit.org @WillStealYourUsername@lemmy.blahaj.zone @riwo@lemmy.blahaj.zone and everyone else who commented.
What do you mean?
Thanks for taking the time to write such a nice reply. I hope to have good news soon.
Thanks for the recommendation.
They said they didn’t see the moderation abuse I was alleging, for one, and they weren’t just going to take my word for it.
I desperately want to try, so any help would be appreciated.
They said that since I couldn’t explain to their satisfaction what was wrong with the ideology and the moderation on those instances, they didn’t see any problem.
Teal is blue.
I know Speed Weed from Law & Order.
Do you get them by mail?
Swarm UI is pretty beginner-friendly. This tutorial will get you started.
This seems like a good place for discussion, so if you’ll humor me, I’d like to explain some things you might find in a prompt, maybe some things you weren’t aware you could do. Web services don’t allow for a lot of freedom, to keep users from generating things outside their terms of use, but with open source tools you can get a lot more involved.
Take a look at these generation parameters:
sarasf, 1girl, solo, robe, long sleeves, white footwear, smile, wide sleeves, closed mouth, blush, looking at viewer, sitting, tree stump, forest, tree, sky, traditional media, 1990s \(style\), <lora:sarasf_V2-10:0.7>
Negative prompt: (worst quality, low quality:1.4), FastNegativeV2
Steps: 21, VAE: kl-f8-anime2.ckpt, Size: 512x768, Seed: 2303584416, Model: Based64mix-V3-Pruned, Version: v1.6.0, Sampler: DPM++ 2M Karras, VAE hash: df3c506e51, CFG scale: 6, Clip skip: 2, Model hash: 98a1428d4c, Hires steps: 16, "sarasf_V2-10: 1ca692d73fb1", Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, "FastNegativeV2: a7465e7cc2a2",
ADetailer model: face_yolov8n.pt, ADetailer version: 23.11.1, Denoising strength: 0.38, ADetailer mask blur: 4, ADetailer model 2nd: Eyes.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur 2nd: 4, ADetailer confidence 2nd: 0.3, ADetailer inpaint padding: 32, ADetailer dilate erode 2nd: 4, ADetailer denoising strength: 0.42, ADetailer inpaint only masked: True, ADetailer inpaint padding 2nd: 32, ADetailer denoising strength 2nd: 0.43, ADetailer inpaint only masked 2nd: True
To break down a bit of what’s going on here: sarasf is the activation token for the character LoRA in this image, and <lora:sarasf_V2-10:0.7> invokes the LoRA itself, trained on Sarah from Shining Force II. LoRA are like supplementary models you use on top of a base model to capture a style or concept, like a patch. Some LoRA don’t have activation tokens, and some that have them can be used without their token to get different results.
The 0.7 in <lora:sarasf_V2-10:0.7> is the strength at which the weights from the LoRA are applied to the output; lowering the number makes the concept manifest more weakly. You can blend styles and concepts this way, with just the base model or with multiple LoRA at the same time at different strengths. You can even take a monochrome LoRA and push the weight into the negative to get some crazy colors, as in the example below.
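For example, a prompt like this blends a style LoRA with the character LoRA at different strengths (the style LoRA names here are made up for illustration):

1girl, solo, forest, <lora:watercolor_style:0.5>, <lora:sarasf_V2-10:0.8>

Swap in something like <lora:monochrome_style:-0.5> and you’ve pushed a monochrome LoRA into negative weight.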
The Negative Prompt is where you include things you don’t want in your image. The tokens in (worst quality, low quality:1.4) have their attention set to 1.4. Attention is sort of like weight, but for tokens: LoRA bring their own weights to add onto the model, whereas attention on tokens works entirely within the weights the model already has. In this negative prompt, FastNegativeV2 is an embedding, also known as a Textual Inversion. It’s sort of like a crystallized collection of tokens that tells the model something precise you want, without you having to enter the tokens yourself or mess with the attention manually. Embeddings you put in the negative prompt are known as Negative Embeddings.
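To give you an idea of the attention syntax the WebUI understands:

(blue eyes) -> multiplies attention by 1.1
((blue eyes)) -> multiplies by 1.1 twice, about 1.21
(blue eyes:1.4) -> sets attention to exactly 1.4
[blue eyes] -> divides attention by 1.1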
In the next part, Steps is how many steps you want the model to take to solve the starting noise into an image; more steps take longer. VAE is the Variational Autoencoder used in the generation; it decodes the model’s latent output into the final pixels, and a mismatch between VAE and model can yield blurry, desaturated images, so some models have their VAE baked in. Size is the dimensions in pixels the image will be generated at. Seed is the number representation of the starting noise for the image; you need it to reproduce a specific image.
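If you’re curious how these map onto code, here’s a minimal sketch using the diffusers Python library rather than the WebUI. The model ID is the stock SD 1.5 checkpoint standing in for the Based64 mix, and the prompt is shortened, so take it as an illustration of the concepts, not the exact setup above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Stock SD 1.5 checkpoint as a placeholder for the Based64 mix.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The seed fixes the starting noise, so the same seed plus the same
# settings reproduces the same image.
generator = torch.Generator("cuda").manual_seed(2303584416)

image = pipe(
    "1girl, solo, sitting, forest",          # shortened prompt
    negative_prompt="worst quality, low quality",
    num_inference_steps=21,                  # Steps
    width=512,                               # Size
    height=768,
    generator=generator,                     # Seed
).images[0]
image.save("output.png")
```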
Model is the name of the model used, and Sampler is the name of the algorithm that solves the noise into an image. There are a bunch of different samplers, often called schedulers in other tools, each with their own trade-offs in speed, quality, and memory usage. CFG scale is basically how closely you want the model to follow your prompt; some models can’t handle high CFG values and flip out, giving oversaturated or nonsense output. Hires steps is the number of steps for the second, upscaling pass, which is how you get higher-resolution images without visual artifacts. Hires upscaler is the name of the model used during that upscaling step, and again there are a ton of those with their own trade-offs and use cases.
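Continuing the sketch above, the sampler and CFG scale have direct equivalents in diffusers; as far as I know, DPM++ 2M Karras corresponds to DPMSolverMultistepScheduler with Karras sigmas. The hires pass has no one-liner here, since it’s essentially a second img2img pass over an upscaled copy:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Same placeholder checkpoint as before.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras maps to DPMSolverMultistepScheduler with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "1girl, solo, sitting, forest",
    num_inference_steps=21,
    guidance_scale=6.0,  # CFG scale: how strongly the prompt steers denoising
    generator=torch.Generator("cuda").manual_seed(2303584416),
).images[0]
```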
After ADetailer come the parameters for ADetailer, an extension that does a post-process pass to fix things like broken anatomy, faces, and hands. We’ll just leave it at that, because I don’t feel like explaining all the different settings found there.
I could continue if you want to hear more.
I remember you. It was my thread you commented in. You’re mad because a moderator removed your comment after you showed up and started being antagonistic toward no one in particular?
I’d like to ask you a question. How much experience do you have with any Stable Diffusion tools?
These fakes are definitely a problem. Don’t let your friends download shady shit.
It’s not often you get to experience CCP censorship firsthand. It honestly lives up to the hype. They are in your face about it.
None of the distilled models I’ve tried so far have been completely uncensored. They all fail the Tiananmen Square question regularly. I wouldn’t be surprised if the variance in answers and reasoning is due to censorship.
I got these two in a row, but after trying it some more, I’m getting some “no” answers with the weirdest logic.
I think AI art should be allowed. It doesn’t matter if a shitpost is AI or not, and witch hunting should always be punished. There are too many people out there harassing and hurting people online. Their behavior should be discouraged.
Edit: It should be allowed if only so people don’t have the excuse to act out. I can’t state an opinion without them feeling like they need to downvote to punish me.