Direct Rendering Manager

Original author(s): kernel.org & freedesktop.org
Developer(s): kernel.org & freedesktop.org
Written in: C
Website: dri.freedesktop.org/wiki/DRM

The Direct Rendering Manager (DRM) is a subsystem of the Linux kernel responsible for interfacing with GPUs of modern video cards. DRM exposes an API that user space programs can use to send commands and data to the GPU, and perform operations such as configuring the mode setting of the display. DRM was first developed as the kernel space component of the X Server's Direct Rendering Infrastructure,[1] but since then it has been used by other graphic stack alternatives such as Wayland.

User space programs can use the DRM API to command the GPU to do hardware accelerated 3D rendering, video decoding as well as GPGPU computing.

Overview

The Linux kernel already had an API called fbdev for managing the framebuffer of a graphics adapter,[2] but it couldn't handle the needs of modern 3D-accelerated GPU-based video cards. These types of cards usually require setting up and managing a command queue in the card's memory (Video RAM) to dispatch commands to the GPU, and they also need proper management of the buffers and free space of the Video RAM itself.[3] Initially, user space programs (such as the X Server) managed these resources directly, but they usually acted as if they were the only ones with access to the card's resources. When two or more programs tried to control the same video card at the same time, each setting its resources in its own way, most of the time they ended catastrophically.[3]

Access to the video card without DRM and with DRM: DRM allows multiple programs concurrent access to the 3D video card, avoiding collisions

When the Direct Rendering Manager was first created, the purpose was to let multiple programs that use video card resources cooperate through it. DRM gets exclusive access to the video card, and it is responsible for initializing and maintaining the command queue, the VRAM and any other hardware resource. Programs that want to use the GPU send their requests to DRM, which acts as an arbitrator and takes care to avoid possible conflicts.

Since then, the scope of DRM has been expanded over the years to cover more functionality previously handled by user space programs, such as framebuffer management and mode setting, memory-sharing objects and memory synchronization.[4][5] Some of these expansions carry their own specific names, such as Graphics Execution Manager (GEM) or Kernel Mode-Setting (KMS), and the terminology prevails when the functionality they provide is specifically alluded to. But they are really parts of the whole kernel DRM subsystem.

The trend towards including two GPUs in a computer (a discrete GPU and an integrated one) led to new problems such as GPU switching that also needed to be solved at the DRM layer. In order to match the Nvidia Optimus technology, DRM was provided with GPU offloading abilities, called PRIME.[6]

Software architecture

A process using the Direct Rendering Manager of the Linux Kernel to access a 3D accelerated graphics card

The Direct Rendering Manager resides in kernel space, so user space programs must use kernel system calls to request its services. However, DRM doesn't define its own customized system calls. Instead, it follows the Unix principle of "everything is a file" to expose the GPUs through the filesystem namespace, using device files under the /dev hierarchy. Each GPU detected by DRM is referred to as a DRM device, and a device file /dev/dri/cardX (where X is a sequential number) is created to interface with it.[7][8] User space programs that want to talk to the GPU must open the file and use ioctl calls to communicate with DRM. Different ioctls correspond to different functions of the DRM API.
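
As a concrete illustration of this file-plus-ioctl model, the following minimal sketch opens a DRM device file and issues the generic DRM_IOCTL_VERSION ioctl to query the name and version of the driver bound to it. The device path /dev/dri/card0 and the header location are assumptions that may differ between systems; error handling is kept to a minimum.

    /* Minimal sketch: talk to a DRM device through plain open() and ioctl().
     * Assumes the kernel DRM uapi header is available as <drm/drm.h>
     * (on some systems it is installed as <libdrm/drm.h> instead). */
    #include <drm/drm.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);   /* first DRM device */
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        struct drm_version v;
        char name[64] = "";
        memset(&v, 0, sizeof(v));
        v.name = name;                 /* buffer for the driver name */
        v.name_len = sizeof(name) - 1;

        /* DRM_IOCTL_VERSION is one of the generic, hardware-independent ioctls
         * provided by the DRM core for every DRM device. */
        if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0)
            printf("driver %s, version %d.%d.%d\n", name,
                   v.version_major, v.version_minor, v.version_patchlevel);

        close(fd);
        return 0;
    }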

A library called libdrm was created to facilitate the interface of user space programs with the DRM subsystem. This library is merely a wrapper that provides a function written in C for every ioctl of the DRM API, as well as constants, structures and other helper elements.[9] The use of libdrm not only avoids exposing the kernel interface directly to user space, but also presents the usual advantages of reusing and sharing code between programs.
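
The same query can be written against libdrm, whose drmGetVersion() wrapper hides the raw ioctl and buffer handling shown above. This is a minimal sketch; the device path is again an assumption, and a real program would typically obtain the file descriptor from its windowing or buffer-allocation library.

    /* Minimal libdrm sketch: drmGetVersion() wraps DRM_IOCTL_VERSION.
     * Build with: gcc version.c $(pkg-config --cflags --libs libdrm) */
    #include <xf86drm.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return 1;

        drmVersionPtr v = drmGetVersion(fd);   /* allocates and fills a drmVersion */
        if (v) {
            printf("driver %s %d.%d.%d (%s)\n", v->name,
                   v->version_major, v->version_minor, v->version_patchlevel, v->desc);
            drmFreeVersion(v);                 /* release the allocated structure */
        }

        close(fd);
        return 0;
    }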

Direct Rendering Manager architecture details: DRM core and DRM driver (including GEM and KMS) interfaced by libdrm

DRM consists of two parts: a generic "DRM core" and a specific one ("DRM driver") for each type of supported hardware.[10] DRM core provides the basic framework where different DRM drivers can register, and also provides user space with a minimal set of ioctls with common, hardware-independent functionality.[7] A DRM driver, on the other hand, implements the hardware-dependent part of the API, specific to the type of GPU it supports; it should provide the implementation of the remaining ioctls not covered by DRM core, but it may also extend the API, offering additional ioctls with extra functionality only available on such hardware.[7] When a specific DRM driver provides an enhanced API, user space libdrm is also extended by an extra library, libdrm-driver, that user space can use to interface with the additional ioctls.

API

The DRM core exports several interfaces to user-space applications, generally intended to be used through corresponding libdrm wrapper functions. In addition, drivers export device-specific interfaces for use by user-space drivers and device-aware applications through ioctls and sysfs files. External interfaces include: memory mapping, context management, DMA operations, AGP management, vblank control, fence management, memory management, and output management.
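
As an example of one of these interfaces, the vblank control interface lets a program block until the next vertical blanking interval of a display. The sketch below uses the libdrm wrapper drmWaitVBlank(); the file descriptor fd is assumed to be an already opened DRM device.

    /* Sketch: wait for the next vertical blanking interval.
     * 'fd' is assumed to be an open DRM device file descriptor. */
    #include <xf86drm.h>

    static int wait_next_vblank(int fd)
    {
        drmVBlank vbl = {
            .request = {
                .type = DRM_VBLANK_RELATIVE,  /* relative to the current count */
                .sequence = 1,                /* wait for one more vblank */
            },
        };
        return drmWaitVBlank(fd, &vbl);       /* wraps the vblank ioctl */
    }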

Graphics Execution Manager

Due to the increasing size of video memory and the growing complexity of graphics APIs such as OpenGL, the strategy of reinitializing the graphics card state at each context switch was too expensive, performance-wise. Also, modern Linux desktops needed an optimal way to share off-screen buffers with the compositing manager. These requirements led to the development of new methods to manage graphics buffers inside the kernel. The Graphics Execution Manager (GEM) emerged as one of these methods.[5]

GEM provides an API with explicit memory management primitives.[5] Through GEM, a user space program can create, handle and destroy memory objects living in the GPU's video memory. These objects, called "GEM objects",[11] are persistent from the user space program's perspective and don't need to be reloaded every time the program regains control of the GPU. When a user space program needs a chunk of video memory (to store a framebuffer, texture or any other data required by the GPU[12]), it requests the allocation from the DRM driver using the GEM API. The DRM driver keeps track of the used video memory and is able to comply with the request if there is free memory available, returning a "handle" to user space to refer to the allocated memory in subsequent operations.[5][11] The GEM API also provides operations to populate the buffer and to release it when it is no longer needed. Memory from unreleased GEM handles is recovered when the user space process closes the DRM device file descriptor, whether intentionally or because it terminates.[13]
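
The creation ioctl itself is driver-specific. The following sketch, assuming an Intel GPU driven by the i915 driver, allocates a small GEM object with DRM_IOCTL_I915_GEM_CREATE and later releases the handle with the generic DRM_IOCTL_GEM_CLOSE ioctl of the DRM core; the size and header paths are illustrative.

    /* Sketch of the GEM allocation flow on an Intel GPU: object creation is
     * driver-specific (DRM_IOCTL_I915_GEM_CREATE here), while closing the handle
     * uses the generic DRM_IOCTL_GEM_CLOSE provided by the DRM core.
     * Header paths may differ between systems. */
    #include <drm/drm.h>
    #include <drm/i915_drm.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Allocate a 4 KiB GEM object and return its handle (0 on failure). */
    static __u32 gem_create_4k(int fd)
    {
        struct drm_i915_gem_create create;
        memset(&create, 0, sizeof(create));
        create.size = 4096;                          /* requested size in bytes */
        if (ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create))
            return 0;
        return create.handle;                        /* opaque handle for later ioctls */
    }

    static void gem_close(int fd, __u32 handle)
    {
        struct drm_gem_close close_args;
        memset(&close_args, 0, sizeof(close_args));
        close_args.handle = handle;
        ioctl(fd, DRM_IOCTL_GEM_CLOSE, &close_args); /* releases the object reference */
    }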

GEM also allows two or more user space processes using the same DRM device (hence the same DRM driver) to share a GEM object.[13] GEM handles are local 32-bit integers unique to a process but repeatable in other processes, and are therefore not suitable for sharing. What is needed is a global namespace, and GEM provides one through the use of global handles called GEM names. A GEM name refers to one, and only one, GEM object created within the same DRM device by the same DRM driver, using a unique 32-bit integer. GEM provides an operation, flink, to obtain a GEM name from a GEM handle.[13][14]:16 The process can then pass this GEM name (this 32-bit integer) to another process using any IPC mechanism available.[14]:15 The GEM name can be used by the recipient process to obtain a local GEM handle pointing to the original GEM object.
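
In terms of ioctls, this exchange maps onto the generic DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN operations. The sketch below shows both sides; the handle values and the IPC channel used to transport the 32-bit name are assumed to exist elsewhere in the programs.

    /* Sketch of GEM name sharing. Process A turns a local handle into a global
     * GEM name with flink; process B opens that name and gets its own handle. */
    #include <drm/drm.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Process A: obtain a global GEM name for a local handle. */
    static __u32 gem_flink(int fd, __u32 handle)
    {
        struct drm_gem_flink flink;
        memset(&flink, 0, sizeof(flink));
        flink.handle = handle;
        if (ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink))
            return 0;
        return flink.name;    /* 32-bit global name, sent to B over any IPC channel */
    }

    /* Process B: turn the received GEM name back into a local handle. */
    static __u32 gem_open(int fd, __u32 name)
    {
        struct drm_gem_open open_args;
        memset(&open_args, 0, sizeof(open_args));
        open_args.name = name;
        if (ioctl(fd, DRM_IOCTL_GEM_OPEN, &open_args))
            return 0;
        return open_args.handle;   /* local handle to the same underlying object */
    }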

Unfortunately, the use of GEM names to share buffers is not secure.[14]:16[15][16] A malicious third-party process accessing the same DRM device could try to guess the GEM name of a buffer shared by another two processes simply by probing 32-bit integers.[17][16] Once a GEM name is found, its contents can be accessed and modified, violating the confidentiality and integrity of the information in the buffer. This drawback was overcome later by the introduction of DMA-BUF support into DRM.

Another important task for any video memory management system besides managing the video memory space is handling the memory synchronization between the GPU and the CPU. Current memory architectures are very complex and usually involve various levels of caches for the system memory and sometimes for the video memory too. Therefore, video memory managers should also handle the cache coherence to ensure the data shared between CPU and GPU is consistent.[18] This means that often video memory management internals are highly dependent on hardware details of the GPU and memory architecture, and thus driver-specific.[19]

GEM was initially developed by Intel engineers to provide a video memory manager for its i915 driver.[18] The Intel GMA 9xx family are integrated GPUs with a Uniform Memory Architecture (UMA), where the GPU and CPU share the physical memory and there is no dedicated VRAM.[20] GEM defines "memory domains" for memory synchronization, and while these memory domains are GPU-independent,[5] they are specifically designed with a UMA memory architecture in mind, making them less suitable for other memory architectures like those with a separate VRAM. For this reason, other DRM drivers have decided to expose the GEM API to user space programs, but internally they implemented a different memory manager better suited to their particular hardware and memory architecture.[21]
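
One way memory domains surface in the API is the i915 set-domain ioctl, which a program issues before accessing a GEM object from the CPU so that the kernel can keep caches coherent. This is an i915-specific sketch using names from the i915 uapi header, not a generic DRM facility.

    /* Sketch: move a GEM object to the CPU read/write domain before the CPU
     * touches it, so the kernel can keep caches coherent (i915-specific ioctl). */
    #include <drm/i915_drm.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static int gem_set_cpu_domain(int fd, __u32 handle)
    {
        struct drm_i915_gem_set_domain sd;
        memset(&sd, 0, sizeof(sd));
        sd.handle = handle;
        sd.read_domains = I915_GEM_DOMAIN_CPU;   /* object will be read by the CPU */
        sd.write_domain = I915_GEM_DOMAIN_CPU;   /* and written by the CPU */
        return ioctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &sd);
    }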

The GEM API also provides ioctls for control of the execution flow (command buffers), but they are Intel-specific, meant to be used with Intel i915 and later GPUs.[5] No other DRM driver has attempted to implement any part of the GEM API beyond the memory management specific ioctls.

Translation Table Maps

The Translation Table Maps (TTM) memory manager was developed by Tungsten Graphics before GEM. GEM was eventually preferred as the memory management API exposed to user space, but TTM lives on inside several DRM drivers, such as radeon and nouveau, which use it internally while exposing GEM-compatible interfaces.

AGP, PCIe and other graphics cards contain an IOMMU called Graphics address remapping table (GART) which can be used to map various pages of system memory into the GPU's address space. The result is that, at any time, an arbitrary (scattered) subset of the system's RAM pages are accessible to the GPU.[4]

Kernel Mode Setting

There must be a "DRM master" in user space; this program has exclusive access to KMS.

In order to work properly, a video card or graphics adapter must set a mode (a combination of screen resolution, color depth and refresh rate) that is within the range of values supported by itself and by the attached display screen. This operation is called mode-setting,[22] and it usually requires raw access to the graphics hardware, i.e. the ability to write to certain registers of the video card.[23][24] A mode-setting operation must be performed before starting to use the framebuffer, and also whenever an application or the user requires a mode change.

In the early days, the user space programs that wanted to use the graphical framebuffer were also responsible for providing the mode-setting operations,[3] and therefore they needed to run with privileged access to the video hardware. In Unix-type operating systems, the X Server was the most prominent example, and its mode-setting implementation lived in the DDX driver for each specific type of video card.[25] This approach, later referred to as User space Mode-Setting (UMS),[26][27] poses several issues.[28][22] It not only breaks the isolation that operating systems should provide between programs and hardware, raising both stability and security concerns, but also could leave the graphics hardware in an inconsistent state if two or more user space programs tried to do the mode-setting at the same time. To avoid these conflicts, the X Server became in practice the only user space program that performed mode-setting operations; the remaining user space programs relied on the X Server to set the appropriate mode and to handle any other operation involving mode-setting. Initially the mode-setting was performed exclusively during the X Server startup process, but later the X Server gained the ability to do it while running.[29] The XFree86-VidModeExtension extension was introduced in XFree86 3.1.2 to let any X client request modeline (resolution) changes to the X Server.[30][31] The VidMode extension was later superseded by the more generic XRandR extension.

However, this was not the only code doing mode-setting in a Linux system. During the boot process, the Linux kernel sets a minimal text mode for the virtual console (based on the standard modes defined by the VESA BIOS extensions).[32] The Linux kernel framebuffer driver also contained mode-setting code to configure framebuffer devices.[2] To avoid mode-setting conflicts, the XFree86 Server and later the X.Org Server handled the case where the user switched from the graphical environment to a text virtual console by saving their mode-setting state and restoring it when the user switched back to X.[33] This process caused an annoying flicker in the transition, and could also fail, leading to a corrupted or unusable output display.[34]

Finally, it was decided that the best approach was to move the mode-setting code to a single place inside the kernel, specifically to the existing DRM module.[28][29][35][36] Then, every process, including the X Server, would be able to command the kernel to perform mode-setting operations, and the kernel would ensure that concurrent operations don't result in an inconsistent state. The new kernel API and code added to the DRM module to perform these mode-setting operations was called Kernel Mode-Setting (KMS).[22]

KMS device model

KMS models and manages the output devices as a series of abstract hardware blocks commonly found in the display output pipeline of a display controller. These blocks are CRTCs, connectors, encoders and planes.[37]
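
These blocks can be enumerated from user space through the KMS part of libdrm. The sketch below, assuming the device path /dev/dri/card0, prints the number of CRTCs and, for each connected connector, its first advertised mode.

    /* Sketch: enumerate KMS hardware blocks (CRTCs and connectors) of a DRM device. */
    #include <xf86drm.h>
    #include <xf86drmMode.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);   /* assumed device path */
        if (fd < 0)
            return 1;

        drmModeRes *res = drmModeGetResources(fd);   /* top-level KMS resources */
        if (!res) {
            close(fd);
            return 1;
        }

        printf("%d CRTCs, %d connectors\n", res->count_crtcs, res->count_connectors);

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
                printf("connector %u: %ux%u@%uHz\n", conn->connector_id,
                       (unsigned)conn->modes[0].hdisplay,
                       (unsigned)conn->modes[0].vdisplay,
                       (unsigned)conn->modes[0].vrefresh);
            drmModeFreeConnector(conn);
        }

        drmModeFreeResources(res);
        close(fd);
        return 0;
    }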

Atomic Mode Setting

Atomic mode-setting brings atomicity to the mode setting and page flipping operations on a DRM device.[25]
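
With libdrm, an atomic commit is assembled as a list of (object, property, value) updates and submitted in a single ioctl, so the whole update either succeeds or fails. The sketch below only shows the shape of the API: the plane, CRTC and framebuffer IDs, as well as the IDs of the plane's "FB_ID" and "CRTC_ID" properties, are assumed to have been looked up beforehand (for instance with drmModeObjectGetProperties()).

    /* Sketch: flip a plane to a new framebuffer through the atomic API.
     * plane_id, crtc_id, fb_id and the two property IDs are assumed to have been
     * discovered beforehand; this only illustrates the request/commit pattern. */
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static int atomic_flip(int fd, uint32_t plane_id, uint32_t crtc_id, uint32_t fb_id,
                           uint32_t prop_fb_id, uint32_t prop_crtc_id)
    {
        int ret;

        /* Atomic support must be requested explicitly by the client. */
        if (drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1))
            return -1;

        drmModeAtomicReq *req = drmModeAtomicAlloc();
        if (!req)
            return -1;

        /* Queue the property changes: new framebuffer and target CRTC for the plane. */
        drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
        drmModeAtomicAddProperty(req, plane_id, prop_crtc_id, crtc_id);

        /* All queued updates are applied together, or not at all. */
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);

        drmModeAtomicFree(req);
        return ret;
    }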

Render nodes

In the original DRM API, the DRM device /dev/dri/cardX is used for both privileged (modesetting and other display control) and non-privileged (rendering, GPGPU compute) operations.[8] For security reasons, opening the associated DRM device file requires special privileges "equivalent to root-privileges".[40] This leads to an architecture where only some trusted user space programs (the X server, a graphical compositor, ...) have full access to the DRM API, including the privileged parts like the modeset API. The remaining user space applications that want to render or make GPGPU computations must be granted access by the owner of the DRM device ("DRM Master") through the use of a special authentication interface.[41] The authenticated applications can then render or make computations using a restricted version of the DRM API without privileged operations. This design imposes a severe constraint: there must always be a running graphics server (the X Server, a Wayland compositor, ...) acting as DRM-Master of a DRM device so that other user space programs can be granted use of the device, even in cases not involving any graphics display, such as GPGPU computations.[40][41]

The "render nodes" concept try to solve these scenarios by splitting the DRM user space API in two interfaces, one privileged and another non-privileged, and using separated device files (or "nodes") for each one.[8] For every GPU found, its corresponding DRM driver if it supports the render nodes capability creates a device file /dev/dri/renderDX, called the render node, in addition to the primary node /dev/dri/cardX.[41][8] Clients that use a direct rendering model and applications that want to take advantage of the computing facilities of a GPU, can do it without requiring additional privileges by simply opening any existing render node and dispatching GPU operations using the limited subset of the DRM API supported by those nodes provided they have file system permissions to open the device file. Display servers, compositors and any other program that requires the modeset API or any other privileged operation must open the standard primary node that grants access to the full DRM API and use it as usual. Render nodes restricted API explicitly disallow the GEM flink operation to prevent buffer sharing using insecure GEM global names; only PRIME (DMA-BUF) file descriptors can be used to share buffers with another client, including the graphics server.[8][41]

Hardware support

DRM is used by user-mode graphics device drivers, such as AMD Catalyst or Mesa 3D. User space programs use the Linux system call interface to access DRM; DRM augments that interface with its own DRM-specific ioctls on the device files.[42]

The Linux DRM subsystem includes free and open-source drivers to support hardware from the three main manufacturers of GPUs for desktop computers (AMD, NVIDIA and Intel), as well as from a growing number of mobile GPU and System on a chip (SoC) integrators. The quality of each driver varies greatly, depending on the degree of cooperation by the manufacturer and other factors.

DRM drivers
Driver Since kernel Supported hardware Status/Notes
radeon 2.4.1 AMD (formerly ATi) Radeon GPU series, including R100, R200, R300, R400, Radeon X1000, HD 2000, HD 4000, HD 5000 ("Evergreen"), HD 6000 ("Northern Islands"), HD 7000/HD 8000 ("Southern Islands") and Rx 200 series
i915 2.6.9 Intel GMA 830M, 845G, 852GM, 855GM, 865G, 915G, 945G, 965G, G35, G41, G43, G45 chipsets. Intel HD and Iris Graphics HD Graphics 2000/3000/2500/4000/4200/4400/4600/P4600/P4700/5000, Iris Graphics 5100, Iris Pro Graphics 5200 integrated GPUs.
nouveau 2.6.33[43][44] NVIDIA Tesla, Fermi, Kepler, Maxwell based GeForce GPUs, Tegra K1 SoC
exynos 3.2[45] Samsung ARM-based Exynos SoCs
vmwgfx 3.2 (from staging)[46] Virtual GPU for the VMware SVGA2 virtual driver
gma500 3.3 (from staging)[47][48] Intel GMA 500 and other Imagination Technologies (PowerVR) based graphics GPUs experimental 2D KMS-only driver
ast 3.5[49] ASpeed Technologies 2000 series experimental
shmobile 3.7[50] Renesas SH Mobile
tegra 3.8[51] Nvidia Tegra20, Tegra30 SoCs
omapdrm 3.9[52] Texas Instruments OMAP5 SoCs
rcar-du 3.11[53] Renesas R-Car SoC Display Units
msm 3.12[54][55] Qualcomm's Adreno A2xx/A3xx/A4xx GPU families (Snapdragon SOCs)[56]
armada 3.13[57][58] Marvell Armada 510 SoCs
bochs 3.14[59] Virtual VGA cards using the Bochs dispi vga interface (such as QEMU stdvga) virtual driver
sti 3.17[60][61] STMicroelectronics SoC stiH41x series
imx 3.19 (from staging)[62][63] Freescale i.MX SoCs
rockchip 3.19[62][64] Rockchip SoC-based GPUs KMS-only
amdgpu[42] 4.2[65][66] AMD GCN 1.2 ("Volcanic Islands") microarchitecture GPUs, including Radeon R9 285 ("Tonga") and Radeon Rx 300 series ("Fiji"),[67] as well as "Carrizo" integrated APUs
virtio 4.2[68] virtual GPU driver for QEMU based virtual machine managers (like KVM or Xen) virtual driver
vc4 4.4[69][70][71] Raspberry Pi's Broadcom BCM2835 and BCM2836 SoCs (VideoCore IV GPU)
etnaviv 4.5[72][73][74] Vivante GPU cores found in several SoCs such as Marvell ARMADA and Freescale i.MX6 Series

There are also a number of drivers for old, obsolete hardware, detailed in the next table for historical purposes. Some of them still remain in the kernel code, but others have already been removed.

Historic DRM drivers
Driver Since kernel Supported hardware Status/Notes
gamma 2.3.18 3Dlabs GLINT GMX 2000 Removed since 2.6.14[75]
ffb 2.4 Creator/Creator3D (used by Sun Microsystems Ultra workstations) Removed since 2.6.21[76]
tdfx 2.4 3dfx Banshee/Voodoo3+
mga 2.4 Matrox G200/G400/G450
r128 2.4 ATI Rage 128
i810 2.4 Intel i810
sis 2.4.17 SiS 300/630/540
i830 2.4.20 Intel 830M/845G/852GM/855GM/865G Removed since 2.6.39[77] (replaced by i915 driver)
via 2.6.13[78] VIA Unichrome / Unichrome Pro
savage 2.6.14[79] S3 Graphics Savage 3D/MX/IX/4/SuperSavage/Pro/Twister

Development

The Direct Rendering Manager is developed within the Linux kernel, and its source code resides in the /drivers/gpu/drm directory of the Linux source code. The subsystem maintainer is Dave Airlie, with other maintainers taking care of specific drivers.[80] As usual in Linux kernel development, DRM submaintainers and contributors send their patches with new features and bug fixes to the main DRM maintainer, who integrates them into his own Linux repository. The DRM maintainer in turn submits all of the patches that are ready to be mainlined to Linus Torvalds whenever a new Linux version is going to be released. Torvalds, as top maintainer of the whole kernel, has the last word on whether a patch is suitable or not for inclusion in the kernel.

For historical reasons, the source code of the libdrm library is maintained under the umbrella of the Mesa project.[81]

History

In 1999, while developing DRI for XFree86, Precision Insight created the first version of DRM for the 3dfx video cards, as a Linux kernel patch included within the Mesa source code.[82] Later that year, the DRM code was mainlined in Linux kernel 2.3.18 under the /drivers/char/drm/ directory for character devices.[83] During the following years the number of supported video cards grew. When Linux 2.4.0 was released in January 2001 there was already support for the 3Dlabs GMX 2000, Intel i810, Matrox G200/G400 and ATI Rage 128, in addition to 3dfx Voodoo3 cards,[84] and that list expanded during the 2.4.x series, with drivers for ATI Radeon cards, some SiS video cards and Intel 830M and subsequent integrated GPUs.

The split of DRM into two components, DRM core and DRM driver, called DRM core/personality split was done during the second half of 2004,[10][85] and merged into kernel version 2.6.11.[86] This split allowed multiple DRM drivers for multiple devices to work simultaneously, opening the way to multi-GPU support.

The idea of putting all the video mode setting code in one place inside the kernel had been acknowledged for years,[87][88] but the graphics card manufacturers had argued that the only way to do the mode-setting was to use the routines provided by themselves and contained in the Video BIOS of each graphics card. Such code had to be executed using x86 real mode, which prevented it from being invoked by a kernel running in protected mode.[35] The situation changed when Luc Verhaegen and other developers found a way to do the mode-setting natively instead of BIOS-based,[89][35] showing that it was possible to do it using normal kernel code and laying the groundwork for what would become Kernel Mode Setting. In May 2007 Jesse Barnes (Intel) published the first proposal for a drm-modesetting API and a working native implementation of mode-setting for Intel GPUs within the i915 DRM driver.[34] In December 2007 Jerome Glisse started to add native mode-setting code for ATI cards to the radeon DRM driver.[90][91] Work on both the API and drivers continued during 2008, but got delayed by the need for a memory manager in kernel space to handle the framebuffers.[92]

In October 2008 the Linux kernel 2.6.27 brought a major source code reorganization, prior to some significant upcoming changes. The DRM source code tree was moved to its own source directory /drivers/gpu/drm/ and the different drivers were moved into their own subdirectories. Headers were also moved into a new /include/drm directory.[93]

The increasing complexity of video memory management led to several approaches to solving this issue. The first attempt was the Translation Table Maps (TTM) memory manager, developed by Thomas Hellstrom (Tungsten Graphics) in collaboration with Eric Anholt (Intel) and Dave Airlie (Red Hat).[4] TTM was proposed for inclusion into mainline kernel 2.6.25 in November 2007,[4] and again in May 2008, but was ditched in favor of a new approach called the Graphics Execution Manager (GEM).[94] GEM was first developed by Keith Packard and Eric Anholt from Intel as a simpler solution for memory management for their i915 driver.[5] GEM was well received and merged into the Linux kernel version 2.6.28 released in December 2008.[95] Meanwhile, TTM had to wait until September 2009 to be finally merged into Linux 2.6.31, as a requirement of the new Radeon KMS DRM driver.[96]

With memory management in place to handle buffer objects, DRM developers could finally add to the kernel the already finished API and code to do mode setting. This expanded API is what is called Kernel Mode-Setting (KMS), and the drivers which implement it are often referred to as KMS drivers. In March 2009, KMS was merged into the Linux kernel version 2.6.29,[22][97] along with KMS support for the i915 driver.[98] The KMS API has been exposed to user space programs since libdrm 2.4.3.[99] The user space X.Org DDX driver for Intel graphics cards was also the first to use the new GEM and KMS APIs.[100] KMS support for the radeon DRM driver was added in the Linux 2.6.31 release of September 2009.[101][102][103] The new radeon KMS driver used the TTM memory manager but exposed GEM-compatible interfaces and ioctls instead of TTM ones.[21]

Since 2006 the nouveau project had been developing a free software DRM driver for NVIDIA GPUs outside of the official Linux kernel. In 2010 the nouveau source code was merged into Linux 2.6.33 as an experimental driver.[43][44] At the time of merging, the driver had already been converted to KMS, and behind the GEM API it used TTM as its memory manager.[104]

The new KMS API, including the GEM API, was a big milestone in the development of DRM, but it didn't stop the API from being enhanced in the following years. KMS gained support for page flips in conjunction with asynchronous VBlank notifications in Linux 2.6.33,[105][106] initially only for the i915 driver; radeon and nouveau added it later, during the Linux 2.6.38 release.[107] The new page flip interface was added to libdrm 2.4.17.[108] In early 2011, during the Linux 2.6.39 release cycle, so-called dumb buffers (a hardware-independent, non-accelerated way to handle simple buffers suitable for use as framebuffers) were added to the KMS API.[109][110] The goal was to reduce the complexity of applications such as Plymouth that don't need to use special accelerated operations provided by driver-specific ioctls.[111] The feature was exposed by libdrm from version 2.4.25 onwards.[112] Later that year the API also gained a new main type of object, called planes. Planes were developed to represent hardware overlays supported by the scanout engine.[113][114] Plane support was merged into Linux 3.3[115] and libdrm 2.4.30. Another concept added to the API, during the Linux 3.5[116] and libdrm 2.4.36[117] releases, was generic object properties, a method to add generic values to any KMS object. Properties are especially useful to set special behaviour or features on objects such as CRTCs and planes.

An early proof of concept to provide GPU offloading between DRM drivers was developed by Dave Airlie in 2010.[6][118] Since Airlie was trying to mimic the NVIDIA Optimus technology, he decided to name it "PRIME".[6] Airlie resumed his work on PRIME in late 2011, this time based on the new DMA-BUF buffer sharing mechanism introduced by Linux kernel 3.3.[119] The basic DMA-BUF PRIME infrastructure was finished in March 2012[120] and merged into the Linux 3.4 release,[121][122][123] as well as into libdrm 2.4.34.[124] Later, during the Linux 3.5 release, several DRM drivers implemented PRIME support, including i915 for Intel cards, radeon for AMD cards and nouveau for NVIDIA cards.[125][126]

In recent years, the DRM API has incrementally expanded with new and improved features. In 2013, as part of GSoC, David Herrmann developed the multiple render nodes feature.[40] His code was added to the Linux kernel version 3.12 as an experimental feature[127][128] supported by the i915,[129] radeon[130] and nouveau[131] drivers, and enabled by default since Linux 3.17.[61] In 2014 Matt Roper (Intel) developed the universal planes (or unified planes) concept, by which framebuffers (primary planes), overlays (secondary planes) and cursors (cursor planes) are all treated as a single type of object with a unified API.[132] Universal planes support provides a more consistent DRM API with fewer, more generic ioctls.[25] In order to keep the API backwards compatible, the feature is exposed by DRM core as an additional capability that a DRM driver can provide. Universal plane support debuted in Linux 3.15[133] and libdrm 2.4.55.[134] Several drivers, such as the Intel i915,[135] have already implemented it.

The most recent DRM API enhancement is the atomic mode-setting API, which brings atomicity to the mode-setting and page flipping operations on a DRM device. The idea of an atomic API for mode-setting was first proposed in early 2012.[136] Ville Syrjälä (Intel) took over the task of designing and implementing such an atomic API.[137] Based on his work, Rob Clark (Texas Instruments) took a similar approach aiming to implement atomic page flips.[138] Later in 2013 both proposed features were reunited in a single one using a single ioctl for both tasks.[139] Since universal plane support was a prerequisite, the feature had to wait for it to be merged in mid-2014.[135] During the second half of 2014 the atomic code was greatly enhanced by Daniel Vetter (Intel) and other DRM developers[140]:18 in order to facilitate the transition of the existing KMS drivers to the new atomic framework.[141] All of this work was finally merged into the Linux 3.19[142] and Linux 4.0[143][144][145] releases, and enabled by default since Linux 4.2.[146] libdrm has exposed the new atomic API since version 2.4.62.[147] Several drivers have already been converted to the new atomic API.[140]:20

Adoption

The Direct Rendering Manager kernel subsystem was initially developed to be used with the new Direct Rendering Infrastructure of the XFree86 4.0 display server, later inherited by its successor, the X.Org Server. Therefore, the main users of DRM were DRI clients that link to the hardware-accelerated OpenGL implementation that lives in the Mesa 3D library, as well as the X Server itself. Nowadays DRM is also used by several Wayland compositors, including the Weston reference compositor. kmscon is a virtual console implementation that runs in user space using DRM's KMS facilities.[148]

Version 358.09 (beta) of the proprietary Nvidia GeForce driver received support for the DRM mode-setting interface implemented as a new kernel blob called nvidia-modeset.ko. This new driver component works in conjunction with the nvidia.ko kernel module to program the display engine (i.e. display controller) of the GPU.[149]


References

  1. "Linux kernel/drivers/gpu/drm/README.drm". kernel.org. Retrieved 2014-02-26.
  2. Uytterhoeven, Geert. "The Frame Buffer Device". Kernel.org. Retrieved 28 January 2015.
  3. White, Thomas. "How DRI and DRM Work". Retrieved 22 July 2014.
  4. Corbet, Jonathan (6 November 2007). "Memory management for graphics processors". LWN.net. Retrieved 23 July 2014.
  5. Packard, Keith; Anholt, Eric (13 May 2008). "GEM - the Graphics Execution Manager". dri-devel mailing list. Retrieved 23 July 2014.
  6. Airlie, Dave (12 March 2010). "GPU offloading - PRIME - proof of concept". Retrieved 10 February 2015.
  7. Kitching, Simon. "DRM and KMS kernel modules". Retrieved 23 July 2014.
  8. Herrmann, David (1 September 2013). "Splitting DRM and KMS device nodes". Retrieved 23 July 2014.
  9. "libdrm README". Retrieved 23 July 2014.
  10. Airlie, Dave (4 September 2004). "New proposed DRM interface design". dri-devel (Mailing list).
  11. Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel. "Linux GPU Driver Developer's Guide - Memory management". Kernel.org. Retrieved 31 January 2015.
  12. Vetter, Daniel. "i915/GEM Crashcourse by Daniel Vetter". Intel Open Source Technology Center. Retrieved 31 January 2015. GEM essentially deals with graphics buffer objects (which can contain textures, renderbuffers, shaders, or all kinds of other state objects and data used by the gpu)
  13. Vetter, Daniel (4 May 2011). "GEM Overview". Retrieved 13 February 2015.
  14. Peres, Martin; Ravier, Timothée (2 February 2013). "DRI-next/DRM2: A walkthrough the Linux Graphics stack and its security" (PDF). Retrieved 13 April 2016.
  15. Packard, Keith (28 September 2012). "DRI-Next". Retrieved 13 February 2015. GEM flink has lots of issues. The flink names are global, allowing anyone with access to the device to access the flink data contents.
  16. Herrmann, David (2 July 2013). "DRM Security". The 2013 X.Org Developer's Conference (XDC2013) Proceedings. Retrieved 13 February 2015. gem-flink doesn't provide any private namespaces to applications and servers. Instead, only one global namespace is provided per DRM node. Malicious authenticated applications can attack other clients via brute-force "name-guessing" of gem buffers
  17. Kerrisk, Michael (25 September 2012). "XDC2012: Graphics stack security". LWN.net. Retrieved 25 November 2015.
  18. Packard, Keith (4 July 2008). "gem update". Retrieved 25 April 2016.
  19. "drm-memory man page". Ubuntu manuals. Retrieved 29 January 2015. Many modern high-end GPUs come with their own memory managers. They even include several different caches that need to be synchronized during access. [...] . Therefore, memory management on GPUs is highly driver- and hardware-dependent.
  20. "Intel Graphics Media Accelerator Developer's Guide". Intel Corporation. Retrieved 24 November 2015.
  21. Larabel, Michael (26 August 2008). "A GEM-ified TTM Manager For Radeon". Phoronix. Retrieved 24 April 2016.
  22. "Linux 2.6.29 - Kernel Modesetting". Linux Kernel Newbies. Retrieved 19 November 2015.
  23. "VGA Hardware". OSDev.org. Retrieved 23 November 2015.
  24. Rathmann, B. (15 February 2008). "The state of Nouveau, part I". LWN.net. Retrieved 23 November 2015. Graphics cards are programmed in numerous ways, but most initialization and mode setting is done via memory-mapped IO. This is just a set of registers accessible to the CPU via its standard memory address space. The registers in this address space are split up into ranges dealing with various features of the graphics card such as mode setup, output control, or clock configuration.
  25. Paalanen, Pekka (5 June 2014). "From pre-history to beyond the global thermonuclear war". Retrieved 29 July 2014.
  26. "drm-kms manpage". Ubuntu manuals. Retrieved 19 November 2015.
  27. Corbet, Jonathan (13 January 2010). "The end of user-space mode setting?". LWN.net. Retrieved 20 November 2015.
  28. 1 2 "Mode Setting Design Discussion". X.Org Wiki. Retrieved 19 November 2015.
  29. 1 2 Corbet, Jonathan (22 January 2007). "LCA: Updates on the X Window System". LWN.net. Retrieved 23 November 2015.
  30. "XF86VIDMODE manual page". X.Org. Retrieved 23 April 2016.
  31. "X11R6.1 Release Notes". X.Org. 14 March 1996. Retrieved 23 April 2016.
  32. Corbet, Jonathan (20 July 2004). "Kernel Summit: Video Drivers". LWN.net. Retrieved 23 November 2015.
  33. "Fedora - Features/KernelModeSetting". Fedora Project. Retrieved 20 November 2015. Historically, the X server was responsible for saving output state when it started up, and then restoring it when it switched back to text mode. Fast user switching was accomplished with a VT switch, so switching away from the first user's X server would blink once to go to text mode, then immediately blink again to go to the second user's session.
  34. Barnes, Jesse (17 May 2007). "[RFC] enhancing the kernel's graphics subsystem". linux-kernel (Mailing list).
  35. Packard, Keith (16 September 2007). "kernel-mode-drivers". Retrieved 30 April 2016.
  36. "DrmModesetting - Enhancing kernel graphics". DRI Wiki. Retrieved 23 November 2015.
  37. Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel; Wunner, Lukas. "Linux GPU Driver Developer's Guide - KMS Initialization and Cleanup". Kernel.org. Retrieved 8 April 2016.
  38. "Video Cards". X.Org Wiki. Retrieved 11 April 2016.
  39. Deucher, Alex (15 April 2010). "Notes about radeon display hardware". Retrieved 8 April 2016.
  40. Herrmann, David (29 May 2013). "DRM Render- and Modeset-Nodes". Retrieved 21 July 2014.
  41. Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel; Wunner, Lukas. "Linux GPU Driver Developer's Guide - Render nodes". Kernel.org. Retrieved 21 April 2016.
  42. Deucher, Alex (20 April 2015). "Initial amdgpu driver release". dri-devel (Mailing list).
  43. "Linux 2.6.33 - Nouveau, a driver for Nvidia graphic cards". Linux Kernel Newbies. Retrieved 26 April 2016.
  44. "drm/nouveau: Add DRM driver for NVIDIA GPUs". Kernel.org. Retrieved 27 January 2015.
  45. "DRM: add DRM Driver for Samsung SoC EXYNOS4210.". Kernel.org. Retrieved 3 March 2016.
  46. "vmwgfx: Take the driver out of staging". Kernel.org. Retrieved 3 March 2016.
  47. "Linux 3.3 - DriverArch - Graphics". Linux Kernel Newbies. Retrieved 3 March 2016.
  48. Larabel, Michael (10 January 2012). "The Linux 3.3 DRM Pull Is Heavy On Enhancements". Phoronix. Retrieved 3 March 2016.
  49. "drm: Initial KMS driver for AST (ASpeed Technologies) 2000 series (v2)". Kernel.org. Retrieved 3 March 2016.
  50. "drm: Renesas SH Mobile DRM driver". Kernel.org. Retrieved 3 March 2016.
  51. "drm: Add NVIDIA Tegra20 support". Kernel.org. Retrieved 3 March 2016.
  52. "drm/omap: move out of staging". Kernel.org. Retrieved 3 March 2016.
  53. "drm: Renesas R-Car Display Unit DRM driver". Kernel.org. Retrieved 3 March 2016.
  54. "drm/msm: basic KMS driver for snapdragon". Kernel.org. Retrieved 3 March 2016.
  55. Larabel, Michael (28 August 2013). "Snapdragon DRM/KMS Driver Merged For Linux 3.12". Phoronix. Retrieved 26 January 2015.
  56. Edge, Jake (8 April 2015). "An update on the freedreno graphics driver". LWN.net. Retrieved 23 April 2015.
  57. King, Russell (18 October 2013). "[GIT PULL] Armada DRM support". dri-devel (Mailing list).
  58. "DRM: Armada: Add Armada DRM driver". Kernel.org. Retrieved 3 March 2016.
  59. "drm/bochs: new driver". Kernel.org. Retrieved 3 March 2016.
  60. Larabel, Michael (8 August 2014). "Linux 3.17 DRM Pull Brings New Graphics Driver". Phoronix. Retrieved 3 March 2016.
  61. Corbet, Jonathan (13 August 2014). "3.17 merge window, part 2". LWN.net. Retrieved 7 October 2014.
  62. Corbet, Jonathan (17 December 2014). "3.19 Merge window part 2". LWN.net. Retrieved 9 February 2015.
  63. "drm: imx: Move imx-drm driver out of staging". Kernel.org. Retrieved 9 February 2015.
  64. "drm: rockchip: Add basic drm driver". Kernel.org. Retrieved 3 March 2016.
  65. Larabel, Michael (25 June 2015). "Linux 4.2 DRM Updates: Lots Of AMD Attention, No Nouveau Driver Changes". Phoronix. Retrieved 31 August 2015.
  66. Corbet, Jonathan (1 July 2015). "4.2 Merge window part 2". LWN.net. Retrieved 31 August 2015.
  67. Deucher, Alex (3 August 2015). "[PATCH 00/11] Add Fiji Support". dri-devel (Mailing list).
  68. "Add virtio gpu driver.". Kernel.org. Retrieved 3 March 2016.
  69. Corbet, Jonathan (11 November 2015). "4.4 Merge window, part 1". LWN.net. Retrieved 11 January 2016.
  70. Larabel, Michael (15 November 2015). "A Look At The New Features Of The Linux 4.4 Kernel". Phoronix. Retrieved 11 January 2016.
  71. "drm/vc4: Add KMS support for Raspberry Pi.". Kernel.org.
  72. "Linux 4.5-DriversArch - Graphics". Linux Kernel Newbies. Retrieved 14 March 2016.
  73. Larabel, Michael (24 January 2016). "The Many New Features & Improvements Of The Linux 4.5 Kernel". Phoronix. Retrieved 14 March 2016.
  74. Corbet, Jonathan (20 January 2016). "4.5 merge window part 2". LWN.Net. Retrieved 14 March 2016.
  75. "drm: remove the gamma driver". Kernel.org. Retrieved 27 January 2015.
  76. "[DRM]: Delete sparc64 FFB driver code that never gets built". Kernel.org. Retrieved 27 January 2015.
  77. "drm: remove i830 driver". Kernel.org. Retrieved 27 January 2015.
  78. "drm: Add via unichrome support". Kernel.org. Retrieved 27 January 2015.
  79. "drm: add savage driver". Kernel.org. Retrieved 27 January 2015.
  80. "List of maintainers of the linux kernel". Kernel.org. Retrieved 14 July 2014.
  81. "libdrm git repository". Retrieved 23 July 2014.
  82. "First DRI release of 3dfx driver.". Mesa 3D. Retrieved 15 July 2014.
  83. "Import 2.3.18pre1". The History of Linux in GIT Repository Format 1992-2010 (2010). Retrieved 15 July 2014.
  84. Torvalds, Linus. "Linux 2.4.0 source code". Kernel.org. Retrieved 29 July 2014.
  85. Airlie, Dave (30 December 2004). "[bk pull] drm core/personality split". linux-kernel (Mailing list).
  86. Torvalds, Linus (11 January 2005). "Linux 2.6.11-rc1". linux-kernel (Mailing list).
  87. Gettys, James; Packard, Keith (15 June 2004). "The (Re)Architecture of the X Window System". Retrieved 30 April 2016.
  88. Smirl, Jon (30 August 2005). "The State of Linux Graphics". Retrieved 30 April 2016. I believe the best solution to this problem is for the kernel to provide a single, comprehensive device driver for each piece of video hardware. This means that conflicting drivers like fbdev and DRM must be merged into a cooperating system. It also means that poking hardware from user space while a kernel based device driver is loaded should be prevented.
  89. Verhaegen, Luc (2 March 2006). "X and Modesetting: Atrophy illustrated" (PDF). Retrieved 30 April 2016.
  90. Glisse, Jerome (4 December 2007). "Radeon kernel modesetting". Retrieved 30 April 2016.
  91. Larabel, Michael (1 October 2008). "The State of Kernel Mode-Setting". Phoronix. Retrieved 30 April 2016.
  92. Packard, Keith (21 July 2008). "X output status july 2008". Retrieved 1 May 2016.
  93. "drm: reorganise drm tree to be more future proof". Kernel.org.
  94. Corbet, Jonathan (28 May 2008). "GEM v. TTM". LWN.net. Retrieved 10 February 2015.
  95. "Linux 2.6.28 - The GEM Memory Manager for GPU memory". Linux Kernel Newbies. Retrieved 23 July 2014.
  96. "drm: Add the TTM GPU memory manager subsystem.". Kernel.org.
  97. "DRM: add mode setting support". Kernel.org.
  98. "DRM: i915: add mode setting support". Kernel.org.
  99. Anholt, Eric (22 December 2008). "[ANNOUNCE] libdrm-2.4.3". dri-devel (Mailing list).
  100. Barnes, Jesse (20 October 2008). "[ANNOUNCE] xf86-video-intel 2.5.0". xorg-announce (Mailing list).
  101. "Linux 2.6.31 - ATI Radeon Kernel Mode Setting support". Linux Kernel Newbies. Retrieved 28 April 2016.
  102. Torvalds, Linus (9 September 2009). "Linux 2.6.31". linux-kernel (Mailing list).
  103. "drm/radeon: introduce kernel modesetting for radeon hardware". Kernel.org.
  104. "The irregular Nouveau Development Companion #40". Nouveau project. Retrieved 3 May 2016.
  105. "Linux 2.6.33 - Graphic improvements". Linux Kernel Newbies. Retrieved 28 April 2016.
  106. "drm/kms: add page flipping ioctl". Kernel.org.
  107. "Linux 2.6.38 - Graphics". Linux Kernel Newbies. Retrieved 28 April 2016.
  108. Airlie, Dave (21 December 2009). "[ANNOUNCE] libdrm 2.4.17". dri-devel (Mailing list).
  109. "drm: dumb scanout create/mmap for intel/radeon (v3)". Kernel.org.
  110. "Linux 2 6 39-DriversArch". Linux Kernel Newbies. Retrieved 19 April 2016.
  111. Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel; Wunner, Lukas. "Linux GPU Driver Developer's Guide - Mode Setting". Kernel.org. Retrieved 24 April 2016.
  112. Wilson, Chris (11 April 2011). "[ANNOUNCE] libdrm 2.4.25". dri-devel (Mailing list).
  113. Barnes, Jesse (25 April 2011). "[RFC] drm: add overlays as first class KMS objects". dri-devel (Mailing list).
  114. Barnes, Jesse (13 May 2011). "[RFC] drm: add overlays as first class KMS objects". dri-devel (Mailing list).
  115. "drm: add plane support v3". Kernel.org.
  116. "drm: add generic ioctls to get/set properties on any object". Kernel.org.
  117. Widawsky, Ben (27 June 2012). "[ANNOUNCE] libdrm 2.4.36". xorg-announce (Mailing list).
  118. Larabel, Michael. "Proof Of Concept: Open-Source Multi-GPU Rendering!". Phoronix. Retrieved 14 April 2016.
  119. Larabel, Michael (23 February 2012). "DRM Base PRIME Support Part Of VGEM Work". Phoronix. Retrieved 14 April 2016.
  120. Airlie, Dave (27 March 2012). "[PATCH] drm: base prime/dma-buf support (v5)". dri-devel (Mailing list).
  121. Larabel, Michael (30 March 2012). "Last Minute For Linux 3.4: DMA-BUF PRIME Support". Phoronix. Retrieved 15 April 2016.
  122. "drm: base prime/dma-buf support (v5)". Kernel.org.
  123. "Linux 3.4 DriverArch". Linux Kernel Newbies. Retrieved 15 April 2016.
  124. Anholt, Eric (10 May 2012). "[ANNOUNCE] libdrm 2.4.34". dri-devel (Mailing list).
  125. Larabel, Michael (12 May 2012). "DMA-BUF PRIME Coming Together For Linux 3.5". Phoronix. Retrieved 15 April 2016.
  126. "Linux 3.5 DriverArch". Linux Kernel Newbies. Retrieved 15 April 2016.
  127. Corbet, Jonathan (11 September 2013). "3.12 merge window, part 2". LWN.net. Retrieved 21 July 2014.
  128. "drm: implement experimental render nodes". Kernel.org.
  129. "drm/i915: Support render nodes". Kernel.org.
  130. "drm/radeon: Support render nodes". Kernel.org.
  131. "drm/nouveau: Support render nodes". Kernel.org.
  132. Roper, Matt (7 March 2014). "[RFCv2 00/10] Universal plane support". dri-devel (Mailing list).
  133. Larabel, Michael (2 April 2014). "Universal Plane Support Set For Linux 3.15". Phoronix. Retrieved 14 April 2016.
  134. Lankhorst, Maarten (25 July 2014). "[ANNOUNCE] libdrm 2.4.55". dri-devel (Mailing list).
  135. Vetter, Daniel (7 August 2014). "Neat stuff for 3.17". Retrieved 14 April 2016.
  136. Barnes, Jesse (15 February 2012). "[RFC] drm: atomic mode set API". dri-devel (Mailing list).
  137. Syrjälä, Ville (24 May 2012). "[RFC][PATCH 0/6] WIP: drm: Atomic mode setting idea". dri-devel (Mailing list).
  138. Clark, Rob (9 September 2012). "[RFC 0/9] nuclear pageflip". dri-devel (Mailing list).
  139. Clark, Rob (6 October 2013). "[RFCv1 00/12] Atomic/nuclear modeset/pageflip". dri-devel (Mailing list).
  140. Vetter, Daniel (3 February 2016). "Embrace the Atomic Display Age" (PDF). Retrieved 4 May 2016.
  141. Vetter, Daniel (2 November 2014). "Atomic Modeset Support for KMS Drivers". Retrieved 4 May 2016.
  142. Airlie, Dave (14 December 2014). "[git pull] drm for 3.19-rc1". dri-devel (Mailing list).
  143. Vetter, Daniel (28 January 2015). "Update for Atomic Display Updates". Retrieved 4 May 2016.
  144. Airlie, Dave (15 February 2015). "[git pull] drm pull for 3.20-rc1". dri-devel (Mailing list).
  145. "Linux 4.0 - DriverArch - Graphics". Linux Kernel Newbies. Retrieved 3 May 2016.
  146. "Linux 4.2 - Atomic modesetting API enabled by default". Linux Kernel Newbies. Retrieved 3 May 2016.
  147. Velikov, Emil (29 June 2015). "[ANNOUNCE] libdrm 2.4.62". dri-devel (Mailing list).
  148. Herrmann, David (10 December 2012). "KMSCON Introduction". Retrieved 22 November 2015.
  149. "Linux, Solaris, and FreeBSD driver 358.09 (beta)".
