Overview
The InfiniBand™ Architecture (IBA) is an industry standard that defines a new high-speed switched fabric subsystem designed to connect processor nodes and I/O nodes to form a system area network. This new interconnect method moves away from the local transaction-based I/O model across busses to a remote message-passing model across channels. The architecture is independent of the host operating system (OS) and the processor platform.
IBA provides both reliable and unreliable transport mechanisms in which messages are enqueued for delivery between end systems. Hardware transport protocols are defined that support reliable and unreliable messaging (send/receive), and memory manipulation semantics (e.g., RDMA read/write) without software intervention in the data transfer path.
The InfiniBand specification primarily defines the electrical, mechanical, link-level, and management aspects of an InfiniBand fabric, but does not define the lowest layers of the operating system stack needed to communicate over that fabric. The remainder of the operating system stack to support storage, networking, IPC, and systems management is left to the operating system vendor to define. More on the InfiniBand architecture can be found at the InfiniBand Trade Association site, http://www.infinibandta.org.
The site is organized as a series of sub-projects, listed in the contents pane at the left. This page contains a high-level overview of each sub-project.
High-Level Architecture
Project Definition
The Linux InfiniBand Project is a collection of sub-projects and activities all focused around the common goal of providing the operating system software components needed to support an InfiniBand fabric, specifically for the Linux operating system. The architecture for several of these components is further influenced by existing and emerging standards that define uniform protocols for components of the operating system. Examples here are emerging protocols like Internet Protocol over InfiniBand (IPoIB) and the SCSI RDMA Protocol (SRP) and proposed definitions for standard InfiniBand transport and driver APIs.
Goals
This project is focused on promoting, enabling, and delivering the software components needed to support an InfiniBand fabric for the Linux operating system. It serves as an overall umbrella for the development of these operating system components.
Licensing Details
This software is being made available under a choice of one of two licenses. You may choose to be licensed under either the GNU General Public License (GPL) Version 2, June 1991, available at http://www.fsf.org/copyleft/gpl.html, or the Intel BSD + Patent License, described elsewhere on this site.
This code was developed to comply with the InfiniBand Architecture Specifications. The InfiniBand Architecture Specifications were developed by the InfiniBand Trade Association (IBTA), and there are certain legal rights and obligations associated with the specifications set forth by the IBTA. For more information on InfiniBand and the IBTA, see http://www.infinibandta.org.
The deliverables of this project are listed in the contents at the left under the "Sub-Projects" section. Below is a brief overview of each of these sub-projects; select a particular sub-project for details specific to it.

Hardware Drivers – A vendor-specific piece of software that, together with the vendor's Host Channel Adapter (HCA), forms a unit capable of implementing the verbs as specified in the InfiniBand specification.

HCA DDK – The HCA Driver Developers Kit is used by HCA vendors to plug their HCA driver into the InfiniBand Access layer. The HCA DDK contains a sample driver template and the headers needed to develop a driver that plugs into the access layer infrastructure.

Infrastructure Drivers:
InfiniBand Access – This software exports the full capabilities of the underlying HCA implementations to higher-level software. It exists to provide useful services that would otherwise be duplicated in the independent upper-level components, and to coordinate access to shared InfiniBand resources, such as the Subnet Management Interface (SMI) and General Services Interface (GSI), that are needed by multiple upper-level components. The InfiniBand Access component is envisioned to provide several services.

Fabric Boot – This software provides fabric boot support for operating systems that support the Extensible Firmware Interface (EFI) definition.

Storage Drivers:
SRP – The SCSI RDMA Protocol (SRP), developed by the ANSI NCITS T10 working group, is designed to take full advantage of the features provided by the InfiniBand Architecture and enables access to remote storage devices across an InfiniBand fabric.

Networking Drivers:
IPoIB – IPoIB provides standardized IP encapsulation over InfiniBand fabrics as defined by the IETF. This driver operates only within an InfiniBand fabric.

SDP – The Sockets Direct Protocol (SDP) is an InfiniBand-specific protocol defined by the Software Working Group (SWG) of the InfiniBand Trade Association (IBTA). It defines a standard wire protocol over an IBA fabric to support user-mode stream sockets (SOCK_STREAM) networking over IBA.

VNIC – The Virtual NIC (VNIC) driver enables a host on an InfiniBand fabric to access nodes on an external Ethernet network without requiring an Ethernet NIC to be installed in that host.

IPC Drivers:
kDAPL – kDAPL is the kernel Direct Access Provider Library reference implementation for InfiniBand. It is a high-performance Remote Direct Memory Access (RDMA) API for the kernel. This effort is being undertaken by the DAT Collaborative.

uDAPL – uDAPL is the user-mode Direct Access Provider Library reference implementation for InfiniBand. It is a high-performance Remote Direct Memory Access (RDMA) API for user mode.

Subnet Management – Provides the basic Subnet Manager functionality as defined in Volume 1 of the InfiniBand specification. Subnet Management handles several areas related to the operation of the subnet, including: discovery, monitoring, and configuration of the ports connected to the subnet; responding to subnet administration (SA) queries; configuration of I/O units with host channel drivers; performance management; and baseboard management.

The code can be downloaded from the main SourceForge project page.
Download Code
Most of the code is currently at or near production-level quality and has been tested using Mellanox InfiniBand HCAs and switches. The code has been tested on various versions of the 2.4.x kernel series. See INSTALL.txt and ReleaseNotes.txt in the tarball for details.
Project News
The project news is included on the SourceForge site. Click the "News" link at the left under "SourceForge Services".
Last Updated: 05/02/2002 02:29 PM