commit 32acab3181
This patch adds native multipath support to the nvme driver. For each namespace we create only a single block device node, which can be used to access that namespace through any of the controllers that refer to it.

The gendisk for each controller's path to the namespace still exists inside the kernel, but is hidden from userspace. The character device nodes are still available on a per-controller basis. A new link from the sysfs directory for the subsystem makes it possible to find all controllers for a given subsystem.

Currently we will always send I/O to the first available path; this will be changed once the NVMe Asymmetric Namespace Access (ANA) TP is ratified and implemented, at which point we will look at the ANA state for each namespace. Another possibility that was prototyped is to use the path that is closest to the submitting NUMA node, which will be mostly interesting for PCI, but might also be useful for RDMA or FC transports in the future. There is no plan to implement round-robin or I/O service-time path selectors, as those do not scale to the performance rates provided by NVMe.

The multipath device will go away once all paths to it disappear; any delay to keep it alive needs to be implemented at the controller level.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
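The "first available path" policy described above boils down to a linear scan over the sibling namespaces that share a namespace head. A condensed sketch of that selection logic, assuming the driver's internal nvme_ns_head/nvme_ns types and list layout (demo_find_first_path is an illustrative name, not the verbatim in-tree helper):

	/*
	 * Walk the RCU-protected list of per-controller paths for this
	 * namespace head and return the first one whose controller is
	 * live.  Callers are assumed to hold rcu_read_lock().
	 */
	static struct nvme_ns *demo_find_first_path(struct nvme_ns_head *head)
	{
		struct nvme_ns *ns;

		list_for_each_entry_rcu(ns, &head->list, siblings) {
			if (ns->ctrl->state == NVME_CTRL_LIVE)
				return ns;	/* first live path wins */
		}
		return NULL;	/* no usable path left */
	}

Once ANA support lands, this is the spot where the per-namespace ANA state would be consulted instead of simply taking the first live controller.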
config NVME_CORE
	tristate
config BLK_DEV_NVME
	tristate "NVM Express block device"
	depends on PCI && BLOCK
	select NVME_CORE
	---help---
	  The NVM Express driver is for solid state drives directly
	  connected to the PCI or PCI Express bus. If you know you
	  don't have one of these, it is safe to answer N.

	  To compile this driver as a module, choose M here: the
	  module will be called nvme.
config NVME_MULTIPATH
	bool "NVMe multipath support"
	depends on NVME_CORE
	---help---
	  This option enables support for multipath access to NVMe
	  subsystems. If this option is enabled, only a single
	  /dev/nvmeXnY device will show up for each NVMe namespace,
	  even if it is accessible through multiple controllers.
config NVME_FABRICS
	tristate
config NVME_RDMA
	tristate "NVM Express over Fabrics RDMA host driver"
	depends on INFINIBAND && BLOCK
	select NVME_CORE
	select NVME_FABRICS
	select SG_POOL
	help
	  This provides support for the NVMe over Fabrics protocol using
	  the RDMA (InfiniBand, RoCE, iWARP) transport. This allows you
	  to use remote block devices exported using the NVMe protocol set.

	  To configure an NVMe over Fabrics controller, use the nvme-cli
	  tool from https://github.com/linux-nvme/nvme-cli.

	  If unsure, say N.
config NVME_FC
	tristate "NVM Express over Fabrics FC host driver"
	depends on BLOCK
	depends on HAS_DMA
	select NVME_CORE
	select NVME_FABRICS
	select SG_POOL
	help
	  This provides support for the NVMe over Fabrics protocol using
	  the FC transport. This allows you to use remote block devices
	  exported using the NVMe protocol set.

	  To configure an NVMe over Fabrics controller, use the nvme-cli
	  tool from https://github.com/linux-nvme/nvme-cli.

	  If unsure, say N.
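Putting the options above to use: NVME_MULTIPATH is a boolean that sits on top of the core driver, so a kernel built for multipath access would carry a fragment like the following (a hypothetical .config excerpt; the option names come straight from this Kconfig):

	CONFIG_NVME_CORE=y
	CONFIG_BLK_DEV_NVME=y
	CONFIG_NVME_MULTIPATH=y

For the fabrics transports, the help texts defer to nvme-cli. A typical connect invocation over RDMA looks like the line below, where the address, port, and subsystem NQN are placeholders to be replaced with real target values:

	nvme connect -t rdma -a 192.168.1.10 -s 4420 -n nqn.2016-06.io.example:subsys1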