Python - How do I pass a background task function's result from a FastAPI GET/POST endpoint into an HTML template? - Stack Overflow

2022-10-19 12:14:56

Actually, I have two questions. First, I'm running a background task in my API that takes an image and runs a prediction on it. For some reason, I can't store the background task's result in a variable and return it, which I need to do for the second part of the question.

API code:

from starlette.responses import RedirectResponse
from fastapi.templating import Jinja2Templates
from fastapi import FastAPI, File, UploadFile, BackgroundTasks
from tensorflow.keras import preprocessing
from fastapi.staticfiles import StaticFiles
from keras.models import load_model
from PIL import Image
import numpy as np
import uvicorn

app = FastAPI()
app.mount("/Templates", StaticFiles(directory="Templates"), name="Templates")
templates = Jinja2Templates(directory="Templates")

model_dir = 'F:\\Saved-Models\\Dog-Cat-Models\\json_function_test_dog_cat_optuna.h5'
model = load_model(model_dir)


def predict_image(image):
    pp_dogcat_image = Image.open(image.file).resize((150, 150), Image.NEAREST).convert("RGB")
    pp_dogcat_image_arr = preprocessing.image.img_to_array(pp_dogcat_image)
    input_arr = np.array([pp_dogcat_image_arr])
    prediction = np.argmax(model.predict(input_arr), axis=-1)

    if str(prediction) == '[1]':
        answer = "It's a Dog"
    else:
        answer = "It's a Cat"

    return answer


@app.get('/')
async def index():
    return RedirectResponse(url="/Templates/index.html")


# Background tasks are so that we can return a response regardless of how long it takes to process the image data
@app.post('/prediction_page')
async def prediction_form(background_tasks: BackgroundTasks, dogcat_img: UploadFile = File(...)):
    answer = background_tasks.add_task(predict_image, image=dogcat_img)
    return answer


if __name__ == '__main__':
    uvicorn.run(app, host='localhost', port=8000)
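The core of the problem in the endpoint above: `add_task()` does not run the function, it only records it, and the scheduled call runs after the response has gone out, so there is never a return value to store. A minimal stand-in class (hypothetical, not the real Starlette implementation) makes the behaviour concrete:

```python
# Minimal stand-in mimicking the semantics of FastAPI/Starlette
# BackgroundTasks (hypothetical class, not the real implementation).
class FakeBackgroundTasks:
    def __init__(self):
        self.tasks = []

    def add_task(self, func, *args, **kwargs):
        # Only records the call for later -- there is no return value,
        # which is why `answer` in the endpoint above is always None.
        self.tasks.append((func, args, kwargs))

    def run_all(self):
        # The framework invokes the tasks only after the response is sent.
        return [func(*args, **kwargs) for func, args, kwargs in self.tasks]


tasks = FakeBackgroundTasks()
answer = tasks.add_task(lambda: "It's a Dog")
print(answer)            # None -- the task has not run yet
print(tasks.run_all())   # ["It's a Dog"] -- but the response is already gone
```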

The second question: I'm trying to pass the result back to the HTML file as a Jinja tag. If I could store the background task's result, I'd want to return it to the actual HTML template. I've searched far and wide, and there is no useful information on doing this with FastAPI's GET/POST endpoints.

HTML code: (the original HTML snippet was not preserved in this capture; a verbatim copy of the API code above appeared in its place)

Answer:

I agree with @matslindh's comment: you'll likely want a task-queue system like Celery to schedule the image-prediction tasks. That would also help separate concerns, so the HTTP-serving application doesn't have to handle ML work.

With `BackgroundTasks`, the task only runs after the response has been returned, and `add_task()` itself returns None, so the endpoint should return a plain message rather than trying to return the result:

@app.post('/prediction_page')
async def prediction_form(background_tasks: BackgroundTasks, dogcat_img: UploadFile = File(...)):
    # add_task() only schedules the call and returns None;
    # predict_image runs after this response has been sent.
    background_tasks.add_task(predict_image, image=dogcat_img)
    return {"message": "Prediction is running in the background"}

But in your case you want to return the prediction result to the user immediately, and since the task is CPU-bound, it should ideally run in another process so that it doesn't block other requests.

There are two approaches you could take here:

  1. Implement the PRG (Post/Redirect/Get) design pattern: an index page with an HTML form, the image is sent to an endpoint, and the request is redirected to a page that displays the result. You would probably have to store the result in a database, or find some other way to pass it along and render it on the page.

  2. Build a JavaScript single-page application (SPA) that uses JS to make HTTP requests to the backend and render the results.
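The PRG route in option 1 can be sketched with a hypothetical in-memory store (all names here are illustrative, and a real app would use a database): the POST handler saves the prediction under a token and answers with a 303 redirect URL, and the GET handler looks the token up:

```python
import uuid

# Hypothetical in-memory result store for the PRG flow; a database
# would replace this in production.
results = {}


def handle_post(prediction: str) -> str:
    token = uuid.uuid4().hex
    results[token] = prediction
    # URL the "303 See Other" redirect would point the browser at
    return f"/result/{token}"


def handle_get(token: str) -> str:
    return results.get(token, "unknown result id")


redirect_url = handle_post("It's a Dog")
token = redirect_url.rsplit("/", 1)[-1]
print(handle_get(token))  # It's a Dog
```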

That said, below is a simple example that uses a ProcessPoolExecutor to run the prediction and the JS fetch method to retrieve and display the result:

main.py:

import asyncio
import io
from concurrent.futures import ProcessPoolExecutor
from starlette.responses import RedirectResponse
from fastapi.templating import Jinja2Templates
from fastapi import FastAPI, File, UploadFile
from tensorflow.keras import preprocessing
from fastapi.staticfiles import StaticFiles
from keras.models import load_model
from PIL import Image
import numpy as np
import uvicorn

app = FastAPI()
app.mount("/Templates", StaticFiles(directory="Templates"), name="Templates")
templates = Jinja2Templates(directory="Templates")

model_dir = 'F:\\Saved-Models\\Dog-Cat-Models\\json_function_test_dog_cat_optuna.h5'
model = load_model(model_dir)

# CPU-bound work runs in worker processes so the event loop stays free
executor = ProcessPoolExecutor()


def predict_image(contents: bytes) -> str:
    # Take raw bytes instead of an UploadFile: bytes can be pickled and
    # sent to a worker process, an open file handle cannot.
    pp_dogcat_image = Image.open(io.BytesIO(contents)).resize((150, 150), Image.NEAREST).convert("RGB")
    pp_dogcat_image_arr = preprocessing.image.img_to_array(pp_dogcat_image)
    input_arr = np.array([pp_dogcat_image_arr])
    prediction = np.argmax(model.predict(input_arr), axis=-1)
    return "It's a Dog" if prediction[0] == 1 else "It's a Cat"


@app.get('/')
async def index():
    return RedirectResponse(url="/Templates/index.html")


@app.post('/prediction_page')
async def prediction_form(dogcat_img: UploadFile = File(...)):
    contents = await dogcat_img.read()
    loop = asyncio.get_running_loop()
    # Run the prediction in the pool and await it, so the answer can be
    # returned in this same response without blocking other requests.
    answer = await loop.run_in_executor(executor, predict_image, contents)
    return {"answer": answer}


if __name__ == '__main__':
    uvicorn.run(app, host='localhost', port=8000)
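The `run_in_executor` pattern in main.py can be exercised on its own, with a stand-in CPU-bound function in place of the model prediction:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


def cpu_bound_predict(x: int) -> int:
    # Stand-in for predict_image: any picklable function works here.
    return x * x


async def handler(x: int) -> int:
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # The event loop stays free while the worker process computes.
        return await loop.run_in_executor(pool, cpu_bound_predict, x)


if __name__ == '__main__':
    print(asyncio.run(handler(7)))  # 49
```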

index.html (the original markup was not preserved in this capture; this is a minimal sketch of what the answer describes, assuming the endpoint returns JSON like {"answer": ...}):

<!DOCTYPE html>
<html>
<head>
    <title>Dog or Cat?</title>
</head>
<body>
    <form id="upload-form">
        <input type="file" name="dogcat_img" accept="image/*" required>
        <button type="submit">Predict</button>
    </form>
    <p id="result"></p>
    <script>
        document.getElementById("upload-form").addEventListener("submit", async (event) => {
            event.preventDefault();
            const data = new FormData(event.target);
            const response = await fetch("/prediction_page", {method: "POST", body: data});
            const json = await response.json();
            document.getElementById("result").textContent = json.answer;
        });
    </script>
</body>
</html>