I forgot to mention: I've had two more requests from accounts pretending to be you. They have an offer and a WhatsApp number. I report them, obviously, since it clearly isn't you. Just thought you should know. They use your image and everything.
So there'll be big money in making models more efficient and sized to suit the target market, not bigger than they need to be.
Chips designed for exactly the type of computation a model requires.
And datacentres sited where things like climate control are less of an issue, so colder climates, or even buried underground.
I have played with Stable Diffusion for the last two years, and some models have been trained to understand photographic terms: depth of field, bokeh, cowboy shot, low angle, wide angle, establishing view, close-up, and even specific lens lengths for certain types of distortion.
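To give a concrete idea, with the diffusers library those terms go straight into the prompt text. A minimal sketch, assuming diffusers and a CUDA GPU, using the stock SD 1.5 checkpoint; the prompt and settings are just illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the standard Stable Diffusion 1.5 checkpoint in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Photographic vocabulary goes directly into the prompt.
prompt = (
    "portrait of a detective in the rain, cowboy shot, low angle, "
    "85mm lens, shallow depth of field, bokeh"
)
image = pipe(
    prompt,
    negative_prompt="blurry, deformed",  # steer away from common failure modes
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("detective.png")
```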
I think the people who have spent a long time thinking visually will just need to learn to speak prompt, and they'll be fine. Many services in the multimedia space, like Midjourney, Krea, Hedra, Suno, and Udio, extrapolate from the input prompt.
The thing I like about Stable Diffusion is that it runs exactly and only the prompt I give it. My coding and systems background helps too, I think, given the path is model, LoRAs, ControlNet, inpainting, then some corrections (sketched below).
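That path maps fairly directly onto code. A rough sketch of the same flow in diffusers, assuming a Canny ControlNet with SD 1.5; the LoRA path, edge map, and mask file are hypothetical placeholders:

```python
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    StableDiffusionInpaintPipeline,
)
from diffusers.utils import load_image

# Stage 1: base model + LoRA + ControlNet for the main generation.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("./my-style-lora")  # hypothetical LoRA weights

edges = load_image("pose_edges.png")  # hypothetical Canny edge map
draft = pipe(
    "cowboy shot, 35mm lens, shallow depth of field",
    image=edges,  # ControlNet conditioning image
    num_inference_steps=30,
).images[0]

# Stage 2: inpaint corrections over a masked region (e.g. fixing hands).
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
mask = load_image("hands_mask.png")  # hypothetical white-on-black mask
final = inpaint(
    prompt="well-formed hands",
    image=draft,
    mask_image=mask,
).images[0]
final.save("final.png")
```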
Some of the artists I follow are starting to incorporate AI into their workflow, and they're getting a lot of shit from their fan base, which I think is stupid. New tools in Photoshop don't get critiqued like that, so who cares if you can speed up outlines, colouring, or lineart?
Keep up the good work simplifying this stuff.