Absolutely unused. All the values are only ever initialized and
then used at most in some debug printout functions.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
This continues the theme started with vm_brk() and vm_munmap():
vm_mmap() does the same thing as do_mmap(), but additionally does the
required VM locking.
This uninlines do_mmap() (and rewrites it to be clearer), which sadly
means duplicating it in mm/mmap.c and mm/nommu.c. But that way we don't have
to export our internal do_mmap_pgoff() function.
Some day we hopefully won't have to export do_mmap() either, if all
modular users can switch to the simpler vm_mmap() instead. We're actually
very close to that already, with the notable exception of the (broken)
use in i810, and a couple of stragglers in binfmt_elf.
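For illustration, the helper is essentially a locked wrapper around do_mmap();
a minimal sketch using the mmap_sem locking of that era, not the exact patch:

    unsigned long vm_mmap(struct file *file, unsigned long addr,
                          unsigned long len, unsigned long prot,
                          unsigned long flag, unsigned long offset)
    {
            unsigned long ret;
            struct mm_struct *mm = current->mm;

            /* take the VM lock that raw do_mmap() callers had to handle themselves */
            down_write(&mm->mmap_sem);
            ret = do_mmap(file, addr, len, prot, flag, offset);
            up_write(&mm->mmap_sem);

            return ret;
    }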
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
They need this to get all the EXPORT_SYMBOL variants and THIS_MODULE.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
|
|
Drivers using multiple framebuffers got broken by commit
41c2e75e60200a860a74b7c84a6375c105e7437f which ignored the framebuffer
(or register) map offset when looking for existing maps. The rationale
was that the kernel-userspace ABI is fixed at a 32-bit offset, so the
real offsets could not always be handed over for comparison.
Instead of ignoring the offset we will compare the lower 32 bits. Drivers
using multiple framebuffers should just make sure that the lower 32 bits
are different. The existing drivers in question are practically limited
to 32-bit systems, so that should be fine for them.
It is assumed that current drivers always specify a correct framebuffer
map offset, even if this offset has been ignored since the above commit.
So this patch should not change anything for drivers using only one
framebuffer.
Drivers needing multiple framebuffers with 64-bit map offsets will need
to cook up something, for instance keeping an ID in the lower bits, which
is aligned away when the offset is actually used.
All of the above applies to _DRM_REGISTERS as well.
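For drivers, the practical effect is that the lookup for an existing map now
matches on something like the following (an illustrative sketch of the check,
not the exact hunk; "entry" iterates the device map list):

    /* in drm_find_matching_map(): compare only the lower 32 bits of the offset */
    if (entry->map->type == map->type &&
        (entry->map->offset & 0xffffffffUL) == (map->offset & 0xffffffffUL))
            return entry;   /* treated as the same framebuffer/register map */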
Signed-off-by: Tormod Volden <debian.tormod@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Remove an obsolete Alpha adjustment, and modify another,
to go with the current Alpha architecture support.
Signed-off-by: Jay Estabrook <jay.estabrook@gmail.com>
Tested-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Currently most, if not all, memory allocation in drm_bufs.c is followed by initializing the memory with 0.
Replace the use of kmalloc+memset with kzalloc.
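The conversion is mechanical; as a representative (illustrative) example of the
pattern being replaced:

    /* before: allocate, then clear by hand */
    entry->buflist = kmalloc(count * sizeof(*entry->buflist), GFP_KERNEL);
    if (!entry->buflist)
            return -ENOMEM;
    memset(entry->buflist, 0, count * sizeof(*entry->buflist));

    /* after: kzalloc() returns already-zeroed memory */
    entry->buflist = kzalloc(count * sizeof(*entry->buflist), GFP_KERNEL);
    if (!entry->buflist)
            return -ENOMEM;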
Signed-off-by: Davidlohr Bueso <dave@gnu.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
* drm-platform:
drm: Make sure the DRM offset matches the CPU
drm: Add __arm defines to DRM
drm: Add support for platform devices to register as DRM devices
drm: Remove drm_resource wrappers
|
|
Add __arm defines to specify behavior specific to an ARM processor.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Remove the drm_resource wrappers and directly use the
actual PCI and/or platform functions in their place.
[airlied: fixup nouveau properly to build]
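The conversion pattern in PCI drivers looks roughly like this (the dev_priv
fields are illustrative; only the wrapper-to-PCI-core substitution is the point):

    /* before: DRM wrapper around the PCI resource */
    dev_priv->mmio_base = drm_get_resource_start(dev, 0);
    dev_priv->mmio_size = drm_get_resource_len(dev, 0);

    /* after: use the PCI core directly */
    dev_priv->mmio_base = pci_resource_start(dev->pdev, 0);
    dev_priv->mmio_size = pci_resource_len(dev->pdev, 0);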
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
|
|
include cleanup: Update gfp.h and slab.h includes to prepare for breaking
implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the following
script was used as the basis of the conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. gfp.h if only gfp is used,
slab.h if slab is used.
* When the script inserts a new include, it looks at the include
blocks and tries to place the new include so that its order conforms
to its surroundings. It is put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, and for others it was more appropriate
to add it to an implementation .h or the embedding .c file. This step
added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as a bisection point.
Given that I had only a couple of failures from the tests in step 7,
I'm fairly confident about the coverage of this conversion patch. If
there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of the
specific arch.
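For the DRM files touched here the end result is simply an explicit include
next to the other core kernel headers, for example:

    #include <linux/slab.h>   /* kmalloc()/kzalloc()/kfree(): no longer pulled in via percpu.h */

or, where only the allocation flags and page allocator are used:

    #include <linux/gfp.h>    /* GFP_KERNEL and friends */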
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
|
|
[Ss]ytem => [Ss]ystem
udpate => update
paramters => parameters
orginal => original
Signed-off-by: Thomas Weber <swirl@gmx.li>
Acked-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
drm_pci_alloc() takes an address mask argument used to set the PCI DMA
mask on the device, something that should instead be set up properly by
the drm driver. Leaving it as a parameter to drm_pci_alloc() causes
confusion, and a mistake there can corrupt an otherwise correct DMA mask
setting, as seen on Intel hardware where the wrong DMA mask was set for
the hardware status page. So remove the parameter from drm_pci_alloc().
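The resulting prototype change is roughly the following (a sketch, modulo the
exact formatting in drmP.h):

    /* before: caller-supplied address mask, silently altering the device's DMA mask */
    drm_dma_handle_t *drm_pci_alloc(struct drm_device *dev, size_t size,
                                    size_t align, dma_addr_t maxaddr);

    /* after: DMA mask setup is the driver's job, not drm_pci_alloc()'s */
    drm_dma_handle_t *drm_pci_alloc(struct drm_device *dev, size_t size,
                                    size_t align);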
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Fix the error message: this is add, not rm.
Move the closing brace to the proper spot: the _DRM_GEM branch should not
be included in the block.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
It hasn't been used in ages, and having the user tell you how much
memory is being freed at free time is a recipe for disaster, even if it
was ever used.
Signed-off-by: Eric Anholt <eric@anholt.net>
|
|
A driver will use the _DRM_DRIVER map flag to indicate that it wants
to be responsible for removing the map itself, bypassing the DRM's
automagic cleanup code.
Since the multi-master changes this has been broken, resulting in some
drivers having their registers unmapped before the driver is finished
with them.
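Conceptually the core teardown loop has to honour the flag again, along the
lines of the following sketch (not necessarily the exact fix):

    /* in the master/lastclose map teardown: skip maps the driver owns */
    list_for_each_entry_safe(r_list, list_temp, &dev->maplist, head) {
            if (r_list->map->flags & _DRM_DRIVER)
                    continue;       /* driver removes this map itself */
            drm_rmmap_locked(dev, r_list->map);
    }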
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Currently, userspace can fail to obtain the SAREA mapping (among other
reasons) if it passes SAREA_MAX to drmAddMap without aligning it to the
page size. This breaks, for example, on PowerPC with 64K pages and radeon,
despite the kernel radeon driver actually doing the right rounding in the
first place.
The way SAREA_MAX is defined, with a bunch of ifdefs and duplicated
between libdrm and the X server, is gross; ultimately it should be
retrieved by userspace from the kernel, but in the meantime we have
plenty of existing userspace built with bad values that needs to work.
This patch works around broken userspace by rounding the requested size
in drm_addmap_core() of any SHM map to the page size. Since the backing
memory for SHM maps is also allocated within addmap_core, there is no
danger of adjacent memory being exposed due to the increased map size.
The only side effect is that drivers that previously tried to create or
access SHM maps using a size < PAGE_SIZE and failed (getting -EINVAL),
will now succeed at the cost of a little bit more memory used if that
happens to be when the map is created.
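The workaround itself is essentially a one-liner in drm_addmap_core(), along
these lines (sketch):

    /* round SHM maps up so unaligned SAREA_MAX values from userspace still match */
    if (map->type == _DRM_SHM)
            map->size = PAGE_ALIGN(map->size);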
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
|
|
Platforms such as sparc64 have D-cache aliasing issues. We
cannot allow virtual mappings in different contexts to be such
that two cache lines can be loaded for the same backing data.
Updates to one cache line won't be seen by accesses to the other
cache line.
Code in sparc64 and other architectures solves this problem by
making sure that all userland mappings of MAP_SHARED objects have
the same virtual address base. They implement this by keying
off of the page offset, and using that to choose a suitably
consistent virtual address for mmap() requests.
Making things even worse, getting this wrong on sparc64 can result
in hangs during DRM lock acquisition. This is because, at least on
UltraSPARC-III, normal loads consult the D-cache but atomics such
as 'cas' (which is what cmpxchg() is implemented using) only consult
the L2 cache. So if a D-cache alias is inserted, the load can
see different data than the atomic, and we'll loop forever because
the atomic compare-and-exchange will never complete successfully.
So to make this all work properly, we need to make sure that the
hash address computed by drm_map_handle() preserves the SHMLBA
relevant bits, and that's what this patch does for _DRM_SHM mappings.
As a historical note, many years ago this bug didn't exist because we
used to just use the low 32-bits of the address as the hash and just
hope for the best. This preserved the SHMLBA bits properly. But when
the hashtab code was added to DRM, this was no longer the case.
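The requirement on the hash can be stated as a small invariant: for _DRM_SHM
maps, the token handed back to userspace must agree with the real kernel
address in the bits that matter for SHMLBA colouring, so the arch mmap code
picks alias-safe virtual addresses. A sketch of that invariant (the helper
name is made up for illustration, not part of the patch):

    #include <asm/shmparam.h>       /* SHMLBA */

    /* What the new drm_map_handle() must guarantee for _DRM_SHM maps. */
    static bool drm_handle_keeps_shm_colour(unsigned long user_token,
                                            unsigned long kernel_addr)
    {
            return ((user_token ^ kernel_addr) & (SHMLBA - 1)) == 0;
    }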
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
This changes drm_local_map to use a resource_size_t for its "offset"
member instead of an unsigned long, thus allowing 32-bit machines
with a >32-bit physical address space to be able to store there
their register or framebuffer addresses when those are above 4G,
such as when using a PCI video card on a recent AMCC 440 SoC.
This patch isn't as "trivial" as it sounds: A few functions needed
to have some unsigned long/int changed to resource_size_t and a few
printk's had to be adjusted.
But also, because userspace isn't capable of passing such offsets,
I had to modify drm_find_matching_map() to ignore the offset passed
in for maps of type _DRM_FRAME_BUFFER or _DRM_REGISTERS.
If we ever support multiple _DRM_FRAME_BUFFER or _DRM_REGISTERS maps
for a given device, we might have to change that trick, but I don't
think that happens on any current driver.
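The core of the change is the type of the member itself, roughly (comment
wording here is illustrative):

    -	unsigned long offset;	 /* physical address of the resource */
    +	resource_size_t offset; /* physical address of the resource */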
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Dave Airlie <airlied@linux.ie>
|
|
Once upon a time, the DRM made the distinction between the drm_map
data structure exchanged with user space and the drm_local_map used
in the kernel.
For some reason, while the BSD port still has that "feature", the
Linux part abused drm_map for kernel internal usage, as the local
map only existed as a typedef of the struct drm_map.
This patch fixes it by declaring struct drm_local_map separately
(though its content is currently identical to the userspace variant),
and changing the kernel code to only use that, except when it's a
user<->kernel interface (ie. ioctl).
This allows subsequent changes to the in-kernel format.
I've also replaced the use of drm_local_map_t with struct drm_local_map
in a couple of places. Mostly by accident, but they are the same (the
former is a typedef of the latter), and I have some remote plans and a
half-finished patch to completely kill the drm_local_map_t typedef,
so I left those bits in.
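For reference, the new in-kernel type mirrors the ioctl structure field for
field; roughly the following (a sketch, comments mine):

    struct drm_local_map {
            unsigned long offset;           /* physical address of the resource */
            unsigned long size;             /* length of the mapping */
            enum drm_map_type type;         /* _DRM_FRAME_BUFFER, _DRM_REGISTERS, _DRM_SHM, ... */
            enum drm_map_flags flags;       /* _DRM_READ_ONLY, _DRM_LOCKED, ... */
            void *handle;                   /* kernel-space mapping / user token base */
            int mtrr;                       /* MTRR slot, or -1 if none */
    };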
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
|
|
The DRM uses its own wrappers to obtain resources from PCI devices,
which currently convert the resource_size_t into an unsigned long.
This is broken on 32-bit platforms with >32-bit physical address
space.
This fixes them, along with a few occurrences of unsigned long used
to store such a resource in drivers.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Dave Airlie <airlied@linux.ie>
|
|
Currently only one waiter is woken up, leaving other waiters
hanging waiting for the DRM lock.
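The fix amounts to waking the whole queue instead of a single task; sketched
below, assuming lock_queue is the DRM lock wait queue:

    /* before: only one task sleeping on the lock gets woken */
    wake_up_interruptible(&lock_data->lock_queue);

    /* after: let every waiter re-check the lock */
    wake_up_interruptible_all(&lock_data->lock_queue);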
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@linux.ie>
|
|
This exports the locked version of the symbol, as struct_mutex locks it all.
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Add core support for mapping of GEM objects. Drivers should provide a
vm_operations_struct if they want to support page faulting of objects.
The code for handling GEM object offsets was taken from TTM, which was
written by Thomas Hellström.
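A driver hooks in by pointing the core at its vm_operations_struct;
schematically it looks like this (the fault handler name is hypothetical):

    static const struct vm_operations_struct example_gem_vm_ops = {
            .fault = example_gem_fault,     /* driver fills pages into the mapping on demand */
            .open  = drm_gem_vm_open,       /* core helpers: take/drop a GEM object reference */
            .close = drm_gem_vm_close,
    };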
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
This is step one towards having multiple masters sharing a drm
device in order to get fast-user-switching to work.
It splits out the information associated with the drm master
into a separate kref counted structure, and allocates this when
a master opens the device node. It also allows the current master
to abdicate (say, while VT-switched), and a new master to take over
the hardware.
It moves the Intel and radeon drivers to using the sarea from
within the new master structures.
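Schematically, the split pulls the per-master state out into something like
the following (field list illustrative, not exhaustive):

    struct drm_master {
            struct kref refcount;           /* kref counted; freed when the last user drops it */
            char *unique;                   /* unique bus identifier */
            struct drm_lock_data lock;      /* the DRM lock now lives per master */
            void *driver_priv;              /* per-master driver state, e.g. where the sarea lives */
    };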
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
With the coming of kernel-based modesetting and the memory manager stuff,
the everything-in-one-directory approach was getting very ugly and
starting to be unmanageable.
This restructures the drm along the lines of other kernel components.
It creates a drivers/gpu/drm directory and moves the hw drivers into
subdirectories. It moves the includes into include/drm, and sets up
the unifdef for the userspace headers we should be exporting.
Signed-off-by: Dave Airlie <airlied@redhat.com>
|