Google’s research team recently ran Stable Diffusion on a smartphone, fully offline. Artificial intelligence models have changed many industries, from healthcare to finance, and foundation models are at the heart of that change. Foundation models are powerful tools that can be trained on almost anything, be it images, text or music. Training and running a foundation model takes a lot of compute and time, and running Stable Diffusion on a smartphone is a particularly hard problem. To ease these problems, Google researchers have developed a new approach to accelerate the performance of AI models on mobile devices.
The researchers developed a method to optimize the runtime of large diffusion models. The research focuses on finding ways to make these models run faster on smartphones. This approach has many benefits, including reducing server costs, providing offline functionality, and improving user privacy.
Minimizing memory usage is important because moving data in and out of memory takes time and can become a bottleneck in the performance of foundation models. Maximizing GPU utilization is also crucial because the GPU does most of the heavy lifting when running AI models. By optimizing foundation models for smartphone GPUs, the performance of the AI model can be greatly improved.
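Google’s paper targets mobile GPUs with custom kernels, but the same memory-versus-GPU trade-off shows up on a regular PC. Below is a minimal sketch of that idea using the Hugging Face diffusers library; the checkpoint name and the specific options are my own illustration, not something taken from the Google research.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the weights in half precision: this roughly halves the memory
# footprint and lets the GPU's faster fp16 math do most of the work.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # example checkpoint, use whichever you have
    torch_dtype=torch.float16,
).to("cuda")

# Compute attention in smaller slices so peak memory use stays low,
# at the cost of a little speed.
pipe.enable_attention_slicing()

# Decode the final image through the VAE one slice at a time, for the same reason.
pipe.enable_vae_slicing()
```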
What does the research say?
The Google research team, led by Yu-Hui Chen, published a research paper which you can find here. The team ran Stable Diffusion 1.4 on a smartphone and generated a 512×512 image in 11.5 seconds with 20 steps, which is just amazing. For reference: on my laptop with a GTX 1650 GPU, a 512×512 image with 20 steps takes about 50 seconds, and I am using SD 1.5. So I am really excited about this, as image generation will get significantly faster for notebook users as well in the future.
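If you want to do a rough timing comparison like mine, the sketch below times a 512×512, 20-step generation with diffusers. The prompt, checkpoint name and output filename are placeholders I picked; assume diffusers and a CUDA build of PyTorch are installed.

```python
import time
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion 1.5 in half precision so it fits on a small GPU
# like a GTX 1650 (4 GB of VRAM).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # substitute your local SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_attention_slicing()  # keeps peak VRAM low on small cards

prompt = "a photo of an astronaut riding a horse"  # example prompt

start = time.perf_counter()
image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0]
elapsed = time.perf_counter() - start

image.save("astronaut.png")
print(f"Generated a 512x512 image in {elapsed:.1f} s with 20 steps")
```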
In this research, the overall generation time was reduced by 52% on a Samsung S23 Ultra, which is just amazing. Things to note:
- Stable Diffusion is a model with over 1 billion parameters, and I am pretty sure Midjourney is even heavier than Stable Diffusion.
- This Google research also tells us that it is possible to make these models less power-hungry, which is great news for people who have to wait a minute to get their first image.
If you want to install Stable Diffusion on your laptop or PC, follow this link.
Reducing Server Costs
Running AI models like Stable Diffusion on a smartphone can greatly reduce server costs. Generally, AI models run on servers with powerful GPUs, which are really expensive. By running the model on a user’s device, the need for a large server is eliminated, reducing costs. This can be especially beneficial for companies that require large amounts of computing power for their AI models, and for companies with work-from-home setups where every employee can easily access these AI models on their smartphone.
Providing Offline Functionality
Running AI models on mobile devices can also provide offline functionality. When an AI model is accessed through a server or a website, an internet connection is a must. Suppose you are stuck somewhere with no network, or you don’t have internet for some reason, and you need to get your work done ASAP: this is where this research from Google is going to come in handy.
Improving User Privacy
Running AI models on mobile devices can also improve user privacy. Whenever something is processed in the cloud, there is a general concern that there is little to no privacy. Running the model offline on the user’s device therefore improves privacy as well.
I keep writing about AI-related stuff, so if you want to stay in touch you can bookmark this website for the future. Thank you.