How to use the mlpm.app.aidserver function in mlpm

To help you get started, we've selected a few mlpm examples drawn from popular ways it is used in public projects.


From autoai-org/AID, components/mlserve/mlpm/server.py:
@aidserver.route("/", methods=["GET"])
async def ping():
    # Health check: respond to GET / with a JSON status payload.
    return await json_resp({"status": "OK"}, status=200)
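The `aidserver` object is used like a typical async web application: `aidserver.route(path, methods=[...])` returns a decorator that registers the handler for that path and those HTTP methods. A minimal sketch of this decorator-based registration pattern is below; all names (`MiniServer`, `dispatch`) are illustrative and not part of mlpm's actual implementation.

```python
# Sketch of the route-decorator pattern that aidserver exposes.
# MiniServer and dispatch are illustrative stand-ins, not mlpm APIs.
class MiniServer:
    def __init__(self):
        self.routes = {}  # (method, path) -> handler

    def route(self, path, methods=("GET",)):
        def decorator(handler):
            for method in methods:
                self.routes[(method, path)] = handler
            return handler  # leave the function usable on its own
        return decorator

    def dispatch(self, method, path):
        handler = self.routes.get((method, path))
        if handler is None:
            return 404, None
        return 200, handler()

app = MiniServer()

@app.route("/", methods=["GET"])
def ping():
    return {"status": "OK"}

print(app.dispatch("GET", "/"))   # (200, {'status': 'OK'})
print(app.dispatch("POST", "/"))  # (404, None) - POST was not registered
```

The real handlers in the snippets above are `async def` coroutines, but the registration mechanics are the same: the decorator records the handler, and the framework looks it up by method and path at request time.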
From autoai-org/AID, components/mlserve/mlpm/server.py:
@aidserver.route("/batch", methods=["POST"])
async def batch_infer():
    # Delegate batch inference to the handler, passing the upload and
    # public folders configured for this server instance.
    if request.method == 'POST':
        return await handle_batch_infer_request(request, UPLOAD_INFER_FOLDER, PUBLIC_FOLDER)
From autoai-org/AID, components/mlserve/aid/server.py:
@aidserver.route("/", methods=["GET"])
async def ping(request):
    # Variant from the aid package: the handler receives the request
    # object explicitly and returns a plain-text response.
    return response.text('Hello world!', status=200)
From autoai-org/AID, components/mlserve/mlpm/server.py:
@aidserver.route("/static/<filename>")
async def send_static(filename):
    # Serve a file from the public assets directory. The <filename>
    # placeholder in the route binds the requested path segment to the
    # handler's argument.
    return await send_from_directory(os.path.abspath(PUBLIC_FOLDER), filename)
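Helpers like `send_from_directory` do more than join paths: they resolve the requested filename inside the base directory and reject path-traversal attempts (e.g. `../etc/passwd`). A stdlib sketch of that containment check follows; `resolve_static` is an illustrative name, not an mlpm function.

```python
import os

def resolve_static(public_folder, filename):
    """Resolve filename inside public_folder, rejecting path traversal.

    A stdlib sketch of the check that send_from_directory-style helpers
    perform; not mlpm's actual implementation.
    """
    base = os.path.abspath(public_folder)
    target = os.path.abspath(os.path.join(base, filename))
    # The fully resolved path must remain inside the base directory.
    if os.path.commonpath([base, target]) != base:
        raise ValueError("path traversal attempt: %r" % filename)
    return target

base = os.path.abspath("public")
# A normal relative path resolves inside the public folder.
print(resolve_static("public", "css/app.css") == os.path.join(base, "css", "app.css"))
# A traversal attempt escapes the base directory and is rejected.
try:
    resolve_static("public", "../secrets.txt")
except ValueError:
    print("traversal blocked")
```

Without a check like this, a request for `/static/../config.py` could read files outside the public folder, which is why production code should always serve static assets through such a helper rather than joining paths directly.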