> For distilled StableDiffusion 2 which requires 1 to 4 iterations instead of 50, the same M2 device should generate an image in <<1 second
The author has a detailed blog post outlining how he modified the model to use Metal on iOS devices. https://liuliu.me/eyes/stretch-iphone-to-its-limit-a-2gib-mo...
This site is purely a marketing effort.
This gets you from text descriptions to images.
I have seen models that, given a picture, generate similar pictures. I want this because while I have many pictures of my grandmothers, I only have a couple of pictures of my grandfathers, and it would be nice to generate a few more.
Core ML is so well done. A year ago I wrote a book on Swift AI and used Core ML in several examples.
One on the GPU and another on the ML core?