Eykarim/stable-diffusion-v1

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images from any text input. For more information about how Stable Diffusion works, have a look at the 🤗 blog post Stable Diffusion with 🧨 Diffusers.

For more information about the model, license, and limitations, check the original model card at CompVis/stable-diffusion-v1-4.

License (CreativeML OpenRAIL-M)

The full license can be found here: License - a Hugging Face Space by CompVis


This repository implements a custom handler task for text-to-image generation for 🤗 Inference Endpoints. The code for the customized pipeline is in pipeline.py.

A notebook on how to create the handler.py is also included.
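For illustration, here is a minimal sketch of what such a handler could look like. It assumes the standard EndpointHandler interface that Inference Endpoints expects and a plain diffusers StableDiffusionPipeline; the actual code in this repository's pipeline.py may differ.

import base64
from io import BytesIO
from typing import Any, Dict

import torch
from diffusers import StableDiffusionPipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # load the pipeline from the repository weights and move it to the GPU
        self.pipe = StableDiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16)
        self.pipe = self.pipe.to("cuda")

    def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
        # run the diffusion pipeline on the prompt from the request payload
        prompt = data["inputs"]
        image = self.pipe(prompt).images[0]
        # encode the generated PIL image as base64 so it can be returned as JSON
        buffer = BytesIO()
        image.save(buffer, format="PNG")
        return {"image": base64.b64encode(buffer.getvalue()).decode("utf-8")}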

Expected request payload

{
    "inputs": "A prompt used for image generation"
}
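The endpoint is expected to return the generated image base64-encoded under an image key; this shape is inferred from the client code below, which reads resp["image"]:

{
    "image": "<base64-encoded image>"
}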

Below is an example of how to run a request using Python and the requests library.

Run Request

import base64
from io import BytesIO

import requests as r
from PIL import Image

ENDPOINT_URL = ""  # URL of your Inference Endpoint
HF_TOKEN = ""      # Hugging Face token with access to the endpoint

# helper to decode the base64-encoded image returned by the endpoint
def decode_base64_image(image_string):
    image_bytes = base64.b64decode(image_string)
    buffer = BytesIO(image_bytes)
    return Image.open(buffer)


def predict(prompt: str):
    # send the prompt as the "inputs" field of the request payload
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json={"inputs": prompt}
    )
    resp = response.json()
    return decode_base64_image(resp["image"])


prediction = predict(
    prompt="the first animal on mars"
)
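Since predict returns a PIL.Image.Image, the result can be saved or displayed directly, for example:

prediction.save("generation.png")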

Expected output
