Open
Description
Doing hardware-accelerated inference in a serverless environment is a compelling use case.
However, adding straight-up GPU passthrough means the microVM can't oversubscribe memory (guest pages have to be pinned for device DMA), and it requires adding PCI emulation to Firecracker, which brings a lot of extra complexity and attack surface.
The first step here will be to research the options and alternatives (e.g., GPU passthrough, or something else), and figure out the path forward.
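For context on what passthrough involves on the host side: the GPU has to be unbound from its native driver and handed to the VMM via VFIO, along with its whole IOMMU group. A minimal Python sketch for inspecting how a device is currently exposed on the host (the PCI address is a placeholder and nothing here is Firecracker-specific):

```python
# Sketch: inspect how a GPU is exposed on the host before considering passthrough.
# The PCI address below is a placeholder; adjust it for the actual device.
import os

PCI_ADDR = "0000:01:00.0"  # hypothetical GPU address
dev = f"/sys/bus/pci/devices/{PCI_ADDR}"

def bound_driver(dev_path: str) -> str | None:
    """Return the kernel driver the device is bound to (e.g. nvidia, vfio-pci), if any."""
    link = os.path.join(dev_path, "driver")
    return os.path.basename(os.readlink(link)) if os.path.islink(link) else None

def iommu_group(dev_path: str) -> str | None:
    """Return the IOMMU group number; passthrough hands the whole group to the guest."""
    link = os.path.join(dev_path, "iommu_group")
    return os.path.basename(os.readlink(link)) if os.path.islink(link) else None

if __name__ == "__main__":
    print(f"{PCI_ADDR}: driver={bound_driver(dev)}, iommu_group={iommu_group(dev)}")
```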
Status: Researching