[Devices] virtio vsock #650

Closed
raduweiss opened this issue Nov 26, 2018 · 30 comments

raduweiss (Contributor) commented Nov 26, 2018

We want to have vsock support to enable container integration, but we don't want to use vhost, since that would add another attack surface that directly exposes the host kernel. Instead, we'll write another back-end for vsock. See #194.

Currently, virtio vsock exists in the codebase as an experimental, development-only Rust feature, backed by the vhost implementation.

dhrgit (Contributor) commented Jan 22, 2019

Updating this issue with the current state of affairs, as discussed internally.

The main focus is on maintaining the Firecracker security barrier. I.e. Firecracker must control all data exchanged between the guest and the host. Using the stock vsock-via-vhost mechanism would bypass this barrier, allowing a malicious guest to pass data directly to the host kernel.

Currently I see two possible approaches to have both vsock and our security barrier:

  1. Emulate a vsock device via a different back-end on the host (e.g. a unix socket). That would mean the guest end would use AF_VSOCK, while the host end would use AF_UNIX.
    • pro: vhost is not used at all, so no new attack surface is added
    • con: added complexity in making AF_UNIX and AF_VSOCK work together
    • con: non-standard vsock use; requires the user to write more Firecracker-specific code
  2. Use vhost as the back-end for an emulated vsock device, with a twist. I.e. implement both device-specific and driver-specific behavior in Firecracker, such that both the guest and vhost think they're talking to each other, while they're actually talking to Firecracker. This might work, since vhost doesn't depend on KVM, but it needs further investigation / testing.
    • pro: standard AF_VSOCK functionality on both ends; no Firecracker-specific code required from the user
    • con: still using vhost code (new attack surface); all data fed into vhost is strictly controlled by Firecracker, though

I'm inclined towards the latter, but I'll need to test it, since it's just a hypothesis at the moment. Will update here in a couple of weeks.

rn (Contributor) commented Jan 22, 2019

An example of option 1 can be found in https://github.com/moby/hyperkit/blob/master/src/lib/pci_virtio_sock.c

raduweiss added this to the vsock Support milestone on Feb 15, 2019
alexandruag added the Priority: High label on Feb 15, 2019
dhrgit (Contributor) commented Feb 26, 2019

Update: internal discussion has settled on approach no. 1. The guest end would be an emulated AF_VSOCK socket, while the host end would be provided via an AF_LOCAL / AF_UNIX socket.

This will require establishing some convention to support vsock ports, such as appending the port number to the host-end unix socket file path (e.g. /path/to/host_socket.sock:52).

Also, from the host end, this will require at least one listener (one socket) per guest, since each Firecracker VMM is designed to be isolated and independent.
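
To make that convention concrete, here is a minimal, hypothetical host-side listener in Rust, assuming the port-suffix naming sketched above (the path, the echo logic, and the convention itself are illustrative only, not a settled Firecracker interface):

```rust
use std::io::{Read, Write};
use std::os::unix::net::UnixListener;

fn main() -> std::io::Result<()> {
    // One listener per microVM; this one serves guest connections to vsock port 52,
    // using the hypothetical "<socket path>:<port>" naming convention.
    let listener = UnixListener::bind("/path/to/host_socket.sock:52")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // Placeholder application logic: echo back whatever the guest sends.
        let mut buf = [0u8; 4096];
        let n = stream.read(&mut buf)?;
        stream.write_all(&buf[..n])?;
    }
    Ok(())
}
```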

If anybody tracking this issue has any input, please feel free to chime in. There are details not decided upon yet, so we'd appreciate any input into making this feature easier to integrate for Firecracker users.

sboeuf (Contributor) commented Feb 26, 2019

@dhrgit
Trying to make sure I understand the proposal here. With the chosen approach no.1, you will basically implement some sort of translation/abstraction layer in Firecracker, from AF_UNIX to AF_VSOCK.
By AF_VSOCK emulation in the guest, I guess you mean you are thinking about writing a virtio driver that would communicate with the Firecracker backend, and that would expose an AF_VSOCK socket from the guest perspective, is that right?

I don't have a good grasp on potential performance issues related to this approach, but it'd be nice to include @stefanha in this discussion since he's the vsock maintainer, and I'm sure he could give some valuable input here.

stefanha commented:

Approach #1 is comparable to port forwarding. It's fine if you control all applications and the protocols they speak. If you wish to support existing applications then you may be forced to make invasive changes to those applications:

When communication involves exchanging network address information over an existing connection, the host application receiving AF_VSOCK address information would be confused because that address isn't directly usable with AF_LOCAL.

Also, managing port forwarding complicates things if you wish to allow users to add their own applications but is easy with a fixed set of applications that you control.

dhrgit (Contributor) commented Feb 27, 2019

@sboeuf
There shouldn't be a need for any vsock driver changes. I meant Firecracker would emulate a vsock device that works with the guest vsock driver. So, nothing changes on the guest/driver end.

On the host side, though, the only way to support AF_VSOCK would be via vhost. Since the dominant opinion is to avoid adding vhost as a new dependency (and attack surface), we need to find another solution. The proposed approach (to use an AF_UNIX socket) is what hyperkit does and is linked to above by @rn.

This does impose some limitations, since there's no perfect translation between AF_VSOCK and AF_UNIX. Both the host apps and the guest apps will need to be aware of these limitations, in order to establish a communication channel.
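
For reference, the guest side would keep using plain AF_VSOCK, unchanged. A minimal Rust sketch of a guest application connecting to the host (using the libc crate; the CID/port values are only examples):

```rust
use std::io;
use std::mem;

/// Open an AF_VSOCK stream connection to `cid:port` and return the raw fd.
fn vsock_connect(cid: u32, port: u32) -> io::Result<i32> {
    unsafe {
        let fd = libc::socket(libc::AF_VSOCK, libc::SOCK_STREAM, 0);
        if fd < 0 {
            return Err(io::Error::last_os_error());
        }
        let mut addr: libc::sockaddr_vm = mem::zeroed();
        addr.svm_family = libc::AF_VSOCK as libc::sa_family_t;
        addr.svm_cid = cid;
        addr.svm_port = port;
        if libc::connect(
            fd,
            &addr as *const _ as *const libc::sockaddr,
            mem::size_of::<libc::sockaddr_vm>() as libc::socklen_t,
        ) < 0
        {
            let err = io::Error::last_os_error();
            libc::close(fd);
            return Err(err);
        }
        Ok(fd)
    }
}

fn main() -> io::Result<()> {
    // 2 is the well-known CID of the host; port 52 matches the earlier example.
    let fd = vsock_connect(2, 52)?;
    println!("connected to the host over vsock, fd = {}", fd);
    unsafe { libc::close(fd) };
    Ok(())
}
```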

The implementation details are not yet set, so my proposal is to use this issue here to discuss them and come up with a solution that makes sense to everyone.

samuelkarp commented:

Is it one AF_UNIX socket per guest, or one AF_UNIX socket per AF_VSOCK port? I think either would work for firecracker-containerd, but we expect to use multiple ports and would need to handle multiplexing if there's only a single AF_UNIX socket per guest.

sboeuf (Contributor) commented Feb 28, 2019

@dhrgit @samuelkarp

@mcastelino and I talked about the approach you're about to take. From a Kata Containers perspective, it should be fine as we can tweak the applications (kata-runtime and kata-shim, in this case) to handle the AF_UNIX socket instead of the regular AF_VSOCK one.

Now, from an implementation perspective, we were discussing how there are two distinct use cases that you need to handle, depending on whether the application running inside the VM runs as:

  • Server: the virtio_vsock device will be the one running the server exposing an AF_UNIX socket to the host. The core logic will retrieve the vsock frames through the virtqueues, and will run one AF_UNIX socket server per vsock port used in the VM. I can imagine the user of Firecracker specifying the guest CID and a root directory path where the sockets get created. You will need some sort of convention so that the user knows how to find the created socket name inside this directory, where you could have several sockets, each related to a specific port. Another approach might be to let the user create a virtio_vsock device per CID:port pair, in which case the socket path could also be fully provided by the user.
  • Client: this case means we expect to run the server on the host side. The user will have to provide a full socket path where it expects to listen, and this will expose a single port to the guest.

Globally, you will proxy everything through the virtio_vsock device.

Let me know if I'm missing something or if I actually misunderstood parts of the design.

sboeuf (Contributor) commented Mar 6, 2019

@dhrgit @samuelkarp @rn

@mcastelino and I thought about a slightly different approach that would not require the wrapper translation between AF_UNIX and AF_VSOCK.
The idea would be to create a new AF_VSOCK implementation in the kernel (we need a brand new kernel module for this purpose, something like vsock_user.ko), getting rid of the vhost part entirely. Bottom line: the kernel executes socket callbacks on behalf of the host application requesting interactions with the socket, but instead of having vhost_vsock.ko interact with the guest memory (virtqueues) directly, it would copy data frames coming from the app to the Firecracker virtio-vsock device.
We could have virtio-vsock set things up with the kernel by registering a CID associated with a file descriptor. When the kernel receives a socket connection matching that CID, it would dump the data to the previously registered file descriptor. It would remain the virtio-vsock device's responsibility to write that data to guest memory.

This approach is very similar to the one you proposed, since it prevents the kernel from interacting directly with the virtqueues. It is also similar in that the kernel would still be the one passing data from the application in userspace to the virtio-vsock device, also in userspace.

The main benefit of this approach is that you could run unmodified applications, since they would still talk to an AF_VSOCK socket. The second benefit is that the virtio-vsock device will not have to figure out a non-standard way to translate AF_UNIX connections into AF_VSOCK ones.

The main drawback of this approach is that we would need to introduce a new kernel module. Long term, though, this could be reused by other applications that would simply encapsulate data to be shared across a cluster of VMs, for instance. Basically, this would make vsock more portable.

Anyway, let me know what you think about this approach!

/cc @stefanha

stefanha commented Mar 6, 2019

@sboeuf A vhost_user transport is technically not necessary since vhost_vsock already provides the same functionality to any userspace process. While it might not be obvious at first glance, vhost_vsock is not tied to virtualization. You don't need to expose vhost_vsock to a guest. vhost_vsock is just a way of claiming a CID and transferring vsock messages to/from userspace.

The main difference between reusing vhost_vsock and implementing vhost_user is the userspace API: vhost ioctls + eventfd vs a tap-like file descriptor.

Both of these drivers still have to parse vsock messages in order to hand them to net/vmw_vsock/af_vsock.c (there are control messages, not just payloads). The security argument becomes whether you believe parsing an equivalent header from a tap-like file descriptor is more secure than parsing the header from the vring.
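
For reference, the header that has to be parsed in either case looks like this (a Rust mirror of struct virtio_vsock_hdr from the virtio spec / <linux/virtio_vsock.h>; field names and order follow the spec, all fields little-endian on the wire; treat this as a sketch and verify against the headers):

```rust
/// Mirror of the virtio-vsock packet header. Every message, including control
/// messages (REQUEST, RESPONSE, RST, SHUTDOWN, CREDIT_UPDATE, ...), starts
/// with this header; only OP_RW packets carry a payload of `len` bytes.
#[repr(C, packed)]
#[derive(Clone, Copy, Default)]
struct VirtioVsockHdr {
    src_cid: u64,
    dst_cid: u64,
    src_port: u32,
    dst_port: u32,
    len: u32,       // payload length (OP_RW only)
    type_: u16,     // stream type
    op: u16,        // operation code
    flags: u32,
    buf_alloc: u32, // credit-based flow control: receive buffer size
    fwd_cnt: u32,   // credit-based flow control: bytes consumed so far
}

fn main() {
    // 44 bytes, matching the packed spec layout.
    println!("virtio_vsock_hdr size = {}", std::mem::size_of::<VirtioVsockHdr>());
}
```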

In other words, the vsock_user code could also have bugs that lead to a host kernel compromise. If you are willing to accept that risk, then I think it makes more sense to reuse vhost_vsock.ko without exposing it to the guest. This will save you a lot of time now and maintenance in the future.

sboeuf (Contributor) commented Mar 7, 2019

@stefanha

While it might not be obvious at first glance, vhost_vsock is not tied to virtualization.

I can see that, since the memory regions given to vhost_vsock don't have to be guest memory, they could be any process memory on the host.

The main difference between reusing vhost_vsock and implementing vhost_user is the userspace API: vhost ioctls + eventfd vs a tap-like file descriptor.

How would I register a CID with the driver from the userspace virtio-vsock device if the only interface is a file descriptor?

Both of these drivers still have to parse vsock messages in order to hand them to net/vmw_vsock/af_vsock.c (there are control messages, not just payloads). The security argument becomes whether you believe parsing an equivalent header from a tap-like file descriptor is more secure than parsing the header from the vring.

Well, I thought the point was that because you receive a vsock frame through the vring, and because the data has to be retrieved based on what is provided by the vring, it'd be better to make this happen from userspace. The virtio code in the VMM itself would reach out to the virtqueue buffer pointed to by the descriptor table, and in the case of a malicious guest (putting wrong addresses in the descriptor table), it would not be able to access some random memory on the host.
Instead, by having the vrings processed by vhost in the kernel, you leave open the possibility for a malicious guest to access any address on the host.
I might be missing something, but I thought that was the main security concern here.
The parsing of each frame can subsequently happen in the kernel, but at least we ensure the data being parsed is not coming from anywhere other than guest memory.

stefanha commented Mar 7, 2019

@stefanha

The main difference between reusing vhost_vsock and implementing vhost_user is the userspace API: vhost ioctls + eventfd vs a tap-like file descriptor.

How would I register a CID to the driver from the virtio-vsock in userspace if the only interface is a file descriptor?

The tun/tap driver uses an ioctl to register the interface. Something similar could be done for the vhost_user fd. But we should first discuss whether vhost_user is necessary.

Both of these drivers still have to parse vsock messages in order to hand them to net/vmw_vsock/af_vsock.c (there are control messages, not just payloads). The security argument becomes whether you believe parsing an equivalent header from a tap-like file descriptor is more secure than parsing the header from the vring.

Well I thought the point was that because you receive a vsock frame through the vring, and because the data has to be retrieved based on what is provided by the vring, it'd be better to make this happen from userspace. The virtio code in the VMM itself would reach out the virtqueue buffer pointed by the descriptor table, and in case of a malicious guest (putting wrong addresses in the descriptor table), it would not be able to access some random memory on the host.
Instead, by having the vrings processed by vhost in kernel, you leave the possibility to a malicious guest to access any address on the host.

Here is the vhost_vsock solution that is not exposed to the guest:

The VMM opens /dev/vhost-vsock and issues ioctls to set the guest's CID and a userspace memory region where messages will be placed.

The VMM emulates the virtio-vsock device. Virtqueue processing is done by the VMM. It takes messages from the virtqueue and sanity-checks them. It may also copy data buffers to/from guest memory if the goal is never to expose guest RAM to the host kernel.

The virtio-vsock device constructs new virtio-vsock messages in the userspace memory region registered with vhost-vsock and signals the kick eventfd. The host kernel vhost_vsock module then processes those messages and communicates with AF_VSOCK sockets on the host.

Replies from host AF_VSOCK sockets come back in the reverse direction. vhost_vsock signals the notify eventfd which the VMM is monitoring. The virtio-vsock device emulation code takes reply messages from the userspace memory region and places them into the vring in guest RAM and notifies the guest.

This way the guest never directly interacts with vhost_vsock. vhost_vsock serves only as the API for communicating with the host kernel network stack.

In this solution the guest vrings are processed by the VMM, not by vhost_vsock. vhost_vsock plays the same role as vhost_user, it never touches the guest's vrings.
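
To make the setup step above concrete, here is a hedged sketch of how a VMM might claim a guest CID through /dev/vhost-vsock (the ioctl request codes are hand-encoded to mirror <linux/vhost.h> on x86-64; treat them as assumptions, verify against your kernel headers, and note the error handling is minimal):

```rust
use std::fs::OpenOptions;
use std::io;
use std::os::unix::io::AsRawFd;

// Hand-rolled _IO / _IOW encodings (x86-64 layout assumed), mirroring the
// VHOST_SET_OWNER and VHOST_VSOCK_SET_GUEST_CID definitions in <linux/vhost.h>.
const fn req_io(ty: u64, nr: u64) -> u64 {
    (ty << 8) | nr
}
const fn req_iow(ty: u64, nr: u64, size: u64) -> u64 {
    (1u64 << 30) | (size << 16) | (ty << 8) | nr
}
const VHOST_VIRTIO: u64 = 0xAF;
const VHOST_SET_OWNER: u64 = req_io(VHOST_VIRTIO, 0x01);
const VHOST_VSOCK_SET_GUEST_CID: u64 = req_iow(VHOST_VIRTIO, 0x60, 8); // __u64 CID

fn main() -> io::Result<()> {
    let vhost = OpenOptions::new().read(true).write(true).open("/dev/vhost-vsock")?;
    let fd = vhost.as_raw_fd();
    let guest_cid: u64 = 3; // example CID for this microVM

    // SAFETY: plain ioctls on a valid file descriptor.
    unsafe {
        if libc::ioctl(fd, VHOST_SET_OWNER) < 0 {
            return Err(io::Error::last_os_error());
        }
        if libc::ioctl(fd, VHOST_VSOCK_SET_GUEST_CID, &guest_cid) < 0 {
            return Err(io::Error::last_os_error());
        }
    }

    // From here, the VMM would register a userspace memory region and vrings it
    // owns (VHOST_SET_MEM_TABLE, VHOST_SET_VRING_*), copy sanity-checked
    // virtio-vsock messages into that region, and drive the kick/call eventfds,
    // so the guest's own vrings are never handed to the kernel.
    println!("claimed guest CID {} via vhost-vsock", guest_cid);
    Ok(())
}
```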

I might be missing something, but I thought that was the main security concern here.

We need input from folks who originally said they cannot use vhost_vsock for security reasons. There is a spectrum here from "I don't want the host kernel network stack involved at all, it's too risky and I only trust AF_UNIX" to "I just don't want the host kernel to process guest vrings". I'm not sure what the consensus on this is in the Firecracker community.

dhrgit (Contributor) commented Mar 7, 2019

This way the guest never directly interacts with vhost_vsock. vhost_vsock serves only as the API for communicating with the host kernel network stack.

This is quite an accurate description of the solution I've been advocating. I.e. reconstruct (and possibly sanitize) the virtio-vsock frames in the VMM userspace, and only have the VMM interact with vhost.

The main arguments against it, as I understood them, were a) vhost would be an extra dependency on the host, and b) sanitization code would be too complex. Perhaps @rn would be better suited to go into more details on those arguments.

sboeuf (Contributor) commented Mar 7, 2019

@stefanha

But we should first discuss whether vhost_user is necessary.

Of course, agreed here.

This way the guest never directly interacts with vhost_vsock. vhost_vsock serves only as the API for communicating with the host kernel network stack.
In this solution the guest vrings are processed by the VMM, not by vhost_vsock. vhost_vsock plays the same role as vhost_user, it never touches the guest's vrings.

Now I get it :)
You're still proxying things from the VMM, but vhost_vsock interacts with another memory region that has been provided by the VMM, not the vring buffers.
And your description of the whole solution is very accurate and makes sense to me. Based on this, I now understand why it does not make too much sense to introduce a new vsock_user driver.

@dhrgit

vhost would be an extra dependency on the host

Why does that matter since it's on the host?

sanitization code would be too complex

Even with the AF_UNIX to AF_VSOCK solution, wouldn't you be sanitizing frames received from vrings before injecting them into the kernel through the socket?

Looking forward to @rn feedback :)

sboeuf (Contributor) commented Mar 14, 2019

@rn
Any feedback?

rn (Contributor) commented Mar 16, 2019

Apologies for the long delay in replying. I had a hectic schedule.

Our main concern with using vhost is that there would be direct interaction of the guest with a complex, in-kernel component (beyond MMIO). So the guest could provide arbitrary inputs and try to exploit flaws in the in-kernel implementation. Hence the proposal of implementing the virtio vsock "backend" in Firecracker, a process which can be jailed. In my opinion, this is a very important property.

Now, once we do that, the next question is: how do we expose the vsocks to other processes? We could have the Firecracker vsock backend basically proxy vsock into the kernel. For that there seem to be two options: either feeding stuff into rings directly (after sanitizing), or proxying at a higher level, like byte streams. I think the sanitizing is tricky to get right, and you still risk the guest being able to more or less directly control the inputs into the kernel. If you proxy at a higher level (like byte streams) you basically terminate and re-originate vsock connections, and this is not much different from proxying to an AF_UNIX socket. I would think that AF_UNIX is better understood and tested/hardened, and does not require the host kernel to be configured to include vsock support (less code). Hence the suggestion to go with AF_UNIX.
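
To illustrate what "terminate and re-originate, then proxy at the byte-stream level" boils down to, here is a minimal Rust sketch (the two AF_UNIX streams standing in for the guest-facing and host-facing ends, and the paths, are assumptions for illustration only):

```rust
use std::io::{self, Read, Write};
use std::os::unix::net::UnixStream;
use std::thread;

/// Copy bytes one way; a full proxy runs one of these per direction.
fn pump(
    mut from: impl Read + Send + 'static,
    mut to: impl Write + Send + 'static,
) -> thread::JoinHandle<io::Result<u64>> {
    thread::spawn(move || io::copy(&mut from, &mut to))
}

fn main() -> io::Result<()> {
    // Stand-ins: in the device model, "guest_side" would be the re-originated
    // guest connection and "host_side" the host application's AF_UNIX socket.
    let guest_side = UnixStream::connect("/tmp/guest_side.sock")?;
    let host_side = UnixStream::connect("/tmp/host_side.sock")?;

    let guest_to_host = pump(guest_side.try_clone()?, host_side.try_clone()?);
    let host_to_guest = pump(host_side, guest_side);

    guest_to_host.join().unwrap()?;
    host_to_guest.join().unwrap()?;
    Ok(())
}
```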

sboeuf (Contributor) commented Mar 18, 2019

Thanks for the feedback @rn

I would think that AF_UNIX is better understood and tested/hardened and does not require the host kernel to be configured to include vsock support (less code).

I agree this looks a bit more secure, but the price to pay is the introduction of some kind of hybrid protocol that will need to be handled by the application on the host. This means the application will not be compatible with other hypervisors unless it is specifically modified.

Also, by bypassing the vsock kernel code like this, we are definitely not contributing to making it better long term.

I think the sanitizing is tricky to get right

What about the complexity of translating AF_VSOCK into AF_UNIX?

stefanha commented:

I would think that AF_UNIX is better understood and tested/hardened and does not require the host kernel to be configured to include vsock support (less code).

I agree this looks a bit more secure, but the price to pay is the introduction of some kind of hybrid protocol that will need to be handled by the application on the host. This means the application will not be compatible with other hypervisors unless specific modifications.

As mentioned in a previous comment, one way to consider this trade-off is whether you have a small, fixed number of vsock services you wish to run (then the AF_UNIX approach is fine) or whether vsock should be generally usable for user-defined purposes (then the AF_UNIX approach is impractical because it requires extensive modifications to Sockets API applications and some protocols may be unportable).

Which use case do you have in mind?

dhrgit (Contributor) commented Mar 19, 2019

Which use case do you have in mind?

I believe we are looking at the first use case. I.e. we are adding vsock, in order to enable container orchestrators to deploy and communicate with their agents inside the virtualized container / microvm.

We haven't explored, in depth, the idea of enabling generic applications over vsock. Is there demand for that?

raduweiss (Contributor, Author) commented:

@stefanha your question is spot on and probably something we should have made clearer initially.

Indeed, we just want to support whatever minimum set of functionality works for container orchestrators (we've arrived here by taking in requirements from firecracker-containerd and Kata Containers).

I think a good next step will be to detail the actual proposal and get feedback from container folks that want to use Firecracker as a microVM runtime.

raduweiss (Contributor, Author) commented Mar 19, 2019

By the way, I'm actually very much inclined to work towards a place where this vsock/unix_socket and the full vsock/vhost (if there's demand for it) are optional Firecracker features (in the Rust sense) that live in separate rust-vmm crates. Such work would probably happen after the initial implementation we'll do now.

sboeuf (Contributor) commented Mar 20, 2019

@raduweiss

a place where this vsock/unix_socket and the full vsock/vhost (if there's demand for it) are optional Firecracker features (in the Rust sense) that live in separate rust-vmm crates.

That's definitely what the end goal should be IMO.
As per the previous discussions, vsock/AF_UNIX sounds like a slightly more secure approach, but it has a very limited scope (running an agent inside the VM as the server side of the connection). This scope is fine for specific use cases, but if someone tries to use Firecracker for more general purposes (without the container use case in mind), we might want full vsock/AF_VSOCK support there.
I'm glad to see you're considering this as a viable option for Firecracker.

Such work would probably happen after the initial implementation we'll do now.

Agreed, as part of this work will be reused/shared with vsock/AF_VSOCK (accessing the queues), while the backend/proxying should be different.

raduweiss (Contributor, Author) commented:

That's definitely what the end goal should be IMO.

And I think this is the path we're on now: Firecracker keeps developing as a very narrow, focused, and optimized building block (as per our charter statement "Our mission is to enable secure, multi-tenant, minimal-overhead execution of container and function workloads." ... and nothing more), while rust-vmm grows into a community project where the various crates provide really top-quality VMM functionality (and there's no problem if there's more than one way to do things).

So, if Firecracker users end up wanting the full vsock/AF_VSOCK stack down the line, then we'll take it from rust-vmm and have it as a feature. If not, it will still be available in rust-vmm for use cases other than Firecracker.

I guess a discussion for the near future, once @dhrgit comes up with his vsock/AF_UNIX proposal, is if vsock/AF_UNIX is wanted in rust-vmm (@sboeuf, @stefanha, @andreeaflorescu).

sboeuf (Contributor) commented Mar 21, 2019

@raduweiss

I guess a discussion for the near future, once @dhrgit comes up with his vsock/AF_UNIX proposal, is if vsock/AF_UNIX is wanted in rust-vmm

As long as the scope is clear for potential users, I can see how other VMMs could benefit from it, hence no reason not to have this in rust-vmm.

dhrgit (Contributor) commented Apr 2, 2019

Hey everyone - I've posted #1044 as an RFC on the proposed vsock design.

raduweiss (Contributor, Author) commented:

We'll be happy to hear everyone's feedback & comments!

jiangliu (Contributor) commented:

The chosen solution has a great property: it has no dependency on the host kernel version.

dhrgit (Contributor) commented Apr 22, 2019

Update for visibility:

  • old vhost-based code removed
  • new vsock device added
  • new vsock components and data path: device / virtq <-> epoll handler <-> muxer <-> connection <-> unix socket (see the sketch after this list)
  • nested epoll dispatch mechanism for the vsock muxer
  • (PoC-level) guest-initiated connection multiplexing and data exchange
  • improve packet assembly logic to use properly aligned data
  • add traits and logic for generalizing the vsock backend (i.e. unix sockets should be only one instance / implementation)
  • handle graceful connection shutdown
  • add flow control
  • handle incoming connections (host-initiated)
  • add connection shutdown kill timers
  • major code cleanup
  • improve throughput (currently really low, ~300 Mbps)
  • add metrics and tests

Current code available in my WiP branch: https://github.com/dhrgit/firecracker/tree/vsock-wip
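
For readers following along, here is a heavily simplified, illustrative sketch of the "muxer <-> connection <-> unix socket" stage mentioned in the list above; the names, the (src_port, dst_port) key, and the per-port socket path are assumptions, not the actual types in the WiP branch:

```rust
use std::collections::HashMap;
use std::os::unix::net::UnixStream;

/// One guest-initiated vsock connection, backed by a host-side AF_UNIX stream.
struct Connection {
    host_stream: UnixStream,
}

/// The muxer maps each (guest port, host port) pair to its backing connection,
/// so many concurrent vsock connections can share one virtio-vsock device.
#[derive(Default)]
struct Muxer {
    conns: HashMap<(u32, u32), Connection>,
}

impl Muxer {
    /// Handle a guest connection request by opening the host-side unix socket.
    fn guest_connect(&mut self, src_port: u32, dst_port: u32, uds_path: &str) -> std::io::Result<()> {
        let host_stream = UnixStream::connect(uds_path)?;
        self.conns.insert((src_port, dst_port), Connection { host_stream });
        Ok(())
    }
}

fn main() {
    let mut muxer = Muxer::default();
    // Example: guest ephemeral port 1024 connecting to "port" 52 on the host,
    // exposed as a per-port unix socket (naming convention is hypothetical).
    let _ = muxer.guest_connect(1024, 52, "/path/to/host_socket.sock:52");
}
```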

andreeaflorescu added commits to andreeaflorescu/firecracker that referenced this issue on May 9, 2019.
dianpopa pushed a commit that referenced this issue on May 9, 2019:
Added firecracker-experimental.yaml in api_server/swagger.
This file is a copy of firecracker.yaml with an additional definition
for the vsock API request.

The point of this file is to be used by third-party projects (like
Kata Containers) to automatically generate an API client that knows
how to send requests to the Firecracker API. The definition currently
lies in a different file and not in firecracker.yaml because vsock
is considered an experimental feature. Once the production-ready vsock
is merged, we will get rid of this experimental yaml.

For tracking purposes, this is the issue regarding switching to
vsock with UDS:
#650

In the new proposed vsock implementation, the API request for
configuring the vsock will also change.

Fixes #1085

Signed-off-by: Andreea Florescu <[email protected]>
dhrgit (Contributor) commented May 26, 2019

The PR is up: #1106

dhrgit (Contributor) commented Sep 2, 2019

Closed by #1176.

dhrgit closed this as completed on Sep 2, 2019
acatangiu mentioned this issue on Sep 4, 2019
raduweiss changed the title from "Support Virtio Vsock" to "Devices: virtio vsock" on Sep 18, 2020
raduweiss changed the title from "Devices: virtio vsock" to "[Devices] virtio vsock" on Sep 18, 2020
raduweiss removed the Priority: High label on Sep 20, 2020