Disclaimer: This is my own workflow and it's definitely not the only way to gen. I use NoobXL btw.
Let's start with the prompt. Everybody has their own prompt style, so do whatever you like here.
My preference is to keep a common prefix and suffix across most of my gens (see the example below), put the artist tags right after the prefix, and only change the middle part. The negative prompt usually stays the same across all gens.
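If it helps to see that structure spelled out, here's the idea as a few lines of Python. This is purely illustrative (I gen in a webui, not from a script); the constant names are my own:

```python
# Fixed prefix/suffix, artists pinned right after the prefix, and only the
# middle changing per gen. Strings taken from the example below.
PREFIX = "masterpiece, best quality, very aesthetic, absurdres, doujin cover"
ARTISTS = r"yabby, nonco, try \(lsc\)"
SUFFIX = ("best quality, amazing quality, very aesthetic, absurdres, newest, "
          "year 2023, year 2024, film grain")

def build_prompt(middle: str) -> str:
    return ", ".join([PREFIX, ARTISTS, middle, SUFFIX])

prompt = build_prompt("1girl, collared shirt, indoors, window")
```

A full example: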
masterpiece, best quality, very aesthetic, absurdres, doujin cover,
yabby, nonco, try \(lsc\),
1girl, button gap, collared shirt, parted lips, indoors, long sleeves, looking at viewer, naked shirt, shirt, solo, standing, thighs, white shirt, window
lucy \(cyberpunk\), blunt bangs, grey hair, bob cut, cyberpunk, red eyeliner, large breasts
best quality, amazing quality, very aesthetic, absurdres, newest, year 2023, year 2024, film grain,
Negative: multiple views, sound effects, lowres, (bad), text, error, fewer, extra, missing, (worst quality, jpeg artifacts, low quality:1.1), watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, simple background, young, comic

Find a seed you like for your prompt (e.g. you might like the composition, the character, etc.). If you want, you can also use a variation seed with different weights.
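As an aside, if you're curious what the variation seed actually does: webuis like A1111 build the initial noise from both seeds and spherically interpolate between them, with the variation strength as the weight. A rough PyTorch sketch (shapes and seeds are illustrative):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Spherical interpolation between two noise tensors."""
    a_flat, b_flat = a.flatten(), b.flatten()
    omega = torch.acos(
        torch.clamp(torch.dot(a_flat / a_flat.norm(), b_flat / b_flat.norm()), -1.0, 1.0)
    )
    so = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

shape = (1, 4, 128, 128)  # SDXL latent shape at 1024x1024
base = torch.randn(shape, generator=torch.Generator().manual_seed(1234))
var = torch.randn(shape, generator=torch.Generator().manual_seed(5678))
noise = slerp(0.3, base, var)  # 0.3 plays the role of the variation strength
```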
These are the settings I use:
Something wrong with your gen? No problem. Use inpainting or img2img to make a similar gen that's more to your liking.
I used to do a fair bit of inpainting back in the SD 1.5 days.
Nowadays it's not needed as much. In any case, if you still want to do it, I recommend grabbing an inpainting controlnet (like kataragi-inpaint); it'll help dramatically.
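In a webui this is just the inpaint tab plus the controlnet extension. For the script-inclined, diffusers has a combined controlnet + inpaint pipeline for SDXL. A minimal sketch below, with a canny control standing in (the kataragi inpaint model has its own preprocessor conventions, so follow its instructions; the model ids here are illustrative):

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetInpaintPipeline
from PIL import Image

# Canny controlnet as a stand-in; swap in an inpaint controlnet if you have one.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    "Laxhar/noobai-XL-1.1",  # illustrative id; point this at your NoobXL checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("gen.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # white = region to repaint
edges = cv2.Canny(np.array(init), 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

prompt = "1girl, collared shirt, parted lips, indoors"
fixed = pipe(
    prompt=prompt,
    image=init,
    mask_image=mask,
    control_image=control,
    strength=0.75,                      # how hard to repaint the masked area
    controlnet_conditioning_scale=0.5,  # keeps the composition from drifting
    num_inference_steps=28,
).images[0]
```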
Even if you have a good seed in txt2img, it's worth trying img2img as well to see if you can improve the gen further.
In my experience, img2img with lower denoise values (e.g. 0.3-0.4) tends to give a washed-out image. Maybe it's because of (vpred) NoobXL, who knows? Because of this, I adjusted my img2img settings.
My preference is to use the canny controlnet with the following settings:
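In diffusers terms, continuing the sketch above, the knobs involved look roughly like this (the values are illustrative, not my exact settings):

```python
from diffusers import StableDiffusionXLControlNetImg2ImgPipeline

# Reuse the canny controlnet, checkpoint, init image, and control map from above.
img2img = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "Laxhar/noobai-XL-1.1", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

out = img2img(
    prompt=prompt,
    image=init,
    control_image=control,
    strength=0.4,                       # low denoise; the canny grip counters the wash-out
    controlnet_conditioning_scale=0.6,
).images[0]
```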
Upscale your gen with your preferred upscaler, somewhere between 1.5x and 2x, so you can still img2img it in the next step.
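If you script this step, the only subtlety is the sizing. Plain Lanczos below as a stand-in for a real upscaler model:

```python
from PIL import Image

img = Image.open("gen.png").convert("RGB")
scale = 1.5  # stay between 1.5x and 2x so the next img2img pass still fits
target = (int(img.width * scale), int(img.height * scale))
# Lanczos is just a placeholder; in practice run an ESRGAN-family upscaler here.
upscaled = img.resize(target, Image.LANCZOS)
upscaled.save("gen_up.png")
```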
Some upscalers you might find useful:
Finally, img2img the upscaled image until all the earlier artifacts are fixed and you get the style you want.
These are the settings I use:
Depending on the gen, another option I sometimes go for is to repeat the img2img settings from step 3, but with a lower denoise strength and a higher controlnet weight. This gives you a lot more detail in your gens, with the caveat that it might mess things up due to the high resolution. I then follow up with the settings above to clear up any artifacts.
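Schematically, the two variants of this last stage look like this, continuing the earlier sketches (all numbers are illustrative):

```python
# Recompute the canny map from the upscaled image first.
edges_up = cv2.Canny(np.array(upscaled), 100, 200)
control_up = Image.fromarray(np.stack([edges_up] * 3, axis=-1))

# Variant A: one cleanup pass over the upscale.
final = img2img(
    prompt=prompt, image=upscaled, control_image=control_up,
    strength=0.45, controlnet_conditioning_scale=0.5,
).images[0]

# Variant B: a detail pass first (lower denoise, stronger canny grip),
# then the same cleanup pass to clear any high-res artifacts.
detailed = img2img(
    prompt=prompt, image=upscaled, control_image=control_up,
    strength=0.3, controlnet_conditioning_scale=0.8,
).images[0]
final = img2img(
    prompt=prompt, image=detailed, control_image=control_up,
    strength=0.45, controlnet_conditioning_scale=0.5,
).images[0]
```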

a quiet study
green plants on the windowsill
rays of morning light