How come you always have to install some version of PyTorch or TensorFlow to run these ML models? When I'm only doing inference, shouldn't there be easier ways of doing that, with automatic hardware selection etc.? Why aren't models distributed in a standard format like ONNX, with inference solved once per platform?
Atila from Apple on the expected performance:

> For distilled StableDiffusion 2 which requires 1 to 4 iterations instead of 50, the same M2 device should generate an image in <<1 second

There's also - which is an iOS / iPad app running Stable Diffusion.

The author has a detailed blog post outlining how he modified the model to use Metal on iOS devices.

I think it's sad that Apple doesn't even give attribution to any of the authors. If you copy the BibTeX from this site, the Author field is just empty. Their names are also not mentioned anywhere on this site.

This site is purely a marketing effort.

I’ve been using InvokeAI:

Great support for M1, basically since the beginning. The install is painless.

Release video for InvokeAI 2.2:

Great stuff. I like that they give directions for both Swift and Python.

This gets you text descriptions to images.

I have seen models that, given a picture, generate similar pictures. I want this because while I have many pictures of my grandmothers, I only have a couple of pictures of my grandfathers, and it would be nice to generate a few more.

Core ML is so well done. A year ago I wrote a book on Swift AI and used Core ML in several examples.

Man, this takes a ton of room to do the CoreML conversions - ran out of space doing the unet conversion even though I started with 25GB free. Going on a delete spree to get it up to 50GB free before trying again.
For the uninitiated, which macOS GUI app is this library most likely to show up in first/best? DiffusionBee?
I can't fine-tune the model on Apple Silicon due to PyTorch support issues. I don't have high hopes it will be supported.

How does this compare with using the Hugging Face `diffusers` package with MPS acceleration through PyTorch Nightly? I was under the impression that that used CoreML under the hood as well to convert the models so they ran on the Neural Engine.
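To my understanding, the `diffusers` MPS path runs through PyTorch's MPS backend directly rather than converting anything to Core ML, so it exercises the GPU but not the Neural Engine. A hedged sketch of that path (the model ID is illustrative; the pipeline lines are commented out because the first run downloads several GB of weights):

```python
import torch

# Pick MPS when available (Apple Silicon + PyTorch >= 1.12), else fall back
# to CPU. The getattr guard keeps this from crashing on older PyTorch builds
# that predate the mps backend.
mps = getattr(torch.backends, "mps", None)
device = "mps" if (mps is not None and mps.is_available()) else "cpu"
print(device)

# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5")  # illustrative model ID
# pipe = pipe.to(device)
# image = pipe("an astronaut riding a horse").images[0]
# image.save("out.png")
```

The Core ML route in the linked repo is a separate conversion step; the two approaches don't share a backend.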
Would it be possible to run 2 SD instances in parallel on a single M1/M2 chip?

One on the GPU and another on the ML core?

Can anyone explain in relatively lay terms how Apple's neural cores differ from a GPU? If they can run stable diffusion so much faster, which normally runs on a GPU, why aren't they used to run shaders for AAA games?
This may sound naive, but what are some use cases of running SD models locally? If the free/cheap options exist (like running SD on powerful servers), then what's the advantage of this new method?
What are some good resources to get into working with this and learning the basics around ML to get some fundamental understanding of how this works?
While running locally on an M1 Pro is nice, recently I've switched over to a Runpod[0] instance running Stable Diffusion instead. The main reasons are that high workloads placed on the laptop degrade the battery faster, and that it takes ~40s to render a single image. On an A5000 it takes mere seconds to do 40 steps. The cost is around $0.20/hr.


Can't wait to see this integrated into automatic1111 so I can use it as a normie.
Where is the community for this project?
Anyone know how to link this to a GUI?
MacBook Air M1 / 16 GB RAM took 3.56 to generate an image. This is pretty wild.
8 GB RAM