489 files changed, 12938 insertions, 5755 deletions
diff --git a/Documentation/Intel-IOMMU.txt b/Documentation/Intel-IOMMU.txt new file mode 100644 index 00000000000..c2321903aa0 --- /dev/null +++ b/Documentation/Intel-IOMMU.txt @@ -0,0 +1,115 @@ +Linux IOMMU Support +=================== + +The architecture spec can be obtained from the below location. + +http://www.intel.com/technology/virtualization/ + +This guide gives a quick cheat sheet for some basic understanding. + +Some Keywords + +DMAR - DMA remapping +DRHD - DMA Engine Reporting Structure +RMRR - Reserved memory Region Reporting Structure +ZLR - Zero length reads from PCI devices +IOVA - IO Virtual address. + +Basic stuff +----------- + +ACPI enumerates and lists the different DMA engines in the platform, and +device scope relationships between PCI devices and which DMA engine controls +them. + +What is RMRR? +------------- + +There are some devices the BIOS controls, for e.g USB devices to perform +PS2 emulation. The regions of memory used for these devices are marked +reserved in the e820 map. When we turn on DMA translation, DMA to those +regions will fail. Hence BIOS uses RMRR to specify these regions along with +devices that need to access these regions. OS is expected to setup +unity mappings for these regions for these devices to access these regions. + +How is IOVA generated? +--------------------- + +Well behaved drivers call pci_map_*() calls before sending command to device +that needs to perform DMA. Once DMA is completed and mapping is no longer +required, device performs a pci_unmap_*() calls to unmap the region. + +The Intel IOMMU driver allocates a virtual address per domain. Each PCIE +device has its own domain (hence protection). Devices under p2p bridges +share the virtual address with all devices under the p2p bridge due to +transaction id aliasing for p2p bridges. + +IOVA generation is pretty generic. We used the same technique as vmalloc() +but these are not global address spaces, but separate for each domain. +Different DMA engines may support different number of domains. + +We also allocate gaurd pages with each mapping, so we can attempt to catch +any overflow that might happen. + + +Graphics Problems? +------------------ +If you encounter issues with graphics devices, you can try adding +option intel_iommu=igfx_off to turn off the integrated graphics engine. + +If it happens to be a PCI device included in the INCLUDE_ALL Engine, +then try enabling CONFIG_DMAR_GFX_WA to setup a 1-1 map. We hear +graphics drivers may be in process of using DMA api's in the near +future and at that time this option can be yanked out. + +Some exceptions to IOVA +----------------------- +Interrupt ranges are not address translated, (0xfee00000 - 0xfeefffff). +The same is true for peer to peer transactions. Hence we reserve the +address from PCI MMIO ranges so they are not allocated for IOVA addresses. + + +Fault reporting +--------------- +When errors are reported, the DMA engine signals via an interrupt. The fault +reason and device that caused it with fault reason is printed on console. + +See below for sample. + + +Boot Message Sample +------------------- + +Something like this gets printed indicating presence of DMAR tables +in ACPI. + +ACPI: DMAR (v001 A M I OEMDMAR 0x00000001 MSFT 0x00000097) @ 0x000000007f5b5ef0 + +When DMAR is being processed and initialized by ACPI, prints DMAR locations +and any RMRR's processed. 
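[Editorial note: the pci_map_*()/pci_unmap_*() flow described above under "How is IOVA generated?" reduces to a pattern like the following minimal sketch. The my_device_dma() helper and its buffer are illustrative only; the calls shown are the generic PCI DMA API of this era, not anything specific to the Intel IOMMU driver, which simply sits behind them.]

#include <linux/pci.h>

/* Hypothetical helper showing one DMA transaction through the IOMMU. */
static int my_device_dma(struct pci_dev *pdev, void *buf, size_t len)
{
	dma_addr_t bus_addr;

	/* The IOMMU driver allocates an IOVA in this device's domain. */
	bus_addr = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);

	/* ... program the device with bus_addr and wait for DMA completion ... */

	/* The mapping (and its guard page) is torn down once DMA is done. */
	pci_unmap_single(pdev, bus_addr, len, PCI_DMA_TODEVICE);
	return 0;
}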
+ +ACPI DMAR:Host address width 36 +ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed90000 +ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed91000 +ACPI DMAR:DRHD (flags: 0x00000001)base: 0x00000000fed93000 +ACPI DMAR:RMRR base: 0x00000000000ed000 end: 0x00000000000effff +ACPI DMAR:RMRR base: 0x000000007f600000 end: 0x000000007fffffff + +When DMAR is enabled for use, you will notice.. + +PCI-DMA: Using DMAR IOMMU + +Fault reporting +--------------- + +DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000 +DMAR:[fault reason 05] PTE Write access is not set +DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000 +DMAR:[fault reason 05] PTE Write access is not set + +TBD +---- + +- For compatibility testing, could use unity map domain for all devices, just + provide a 1-1 for all useful memory under a single domain for all devices. +- API for paravirt ops for abstracting functionlity for VMM folks. diff --git a/Documentation/feature-removal-schedule.txt b/Documentation/feature-removal-schedule.txt index 6b0f963f537..6bb9be54ab7 100644 --- a/Documentation/feature-removal-schedule.txt +++ b/Documentation/feature-removal-schedule.txt @@ -14,18 +14,6 @@ Who: Jiri Slaby <jirislaby@gmail.com> --------------------------- -What: V4L2 VIDIOC_G_MPEGCOMP and VIDIOC_S_MPEGCOMP -When: October 2007 -Why: Broken attempt to set MPEG compression parameters. These ioctls are - not able to implement the wide variety of parameters that can be set - by hardware MPEG encoders. A new MPEG control mechanism was created - in kernel 2.6.18 that replaces these ioctls. See the V4L2 specification - (section 1.9: Extended controls) for more information on this topic. -Who: Hans Verkuil <hverkuil@xs4all.nl> and - Mauro Carvalho Chehab <mchehab@infradead.org> - ---------------------------- - What: dev->power.power_state When: July 2007 Why: Broken design for runtime control over driver power states, confusing @@ -49,10 +37,10 @@ Who: David Miller <davem@davemloft.net> --------------------------- What: Video4Linux API 1 ioctls and video_decoder.h from Video devices. -When: December 2006 -Files: include/linux/video_decoder.h -Check: include/linux/video_decoder.h -Why: V4L1 AP1 was replaced by V4L2 API. during migration from 2.4 to 2.6 +When: December 2008 +Files: include/linux/video_decoder.h include/linux/videodev.h +Check: include/linux/video_decoder.h include/linux/videodev.h +Why: V4L1 AP1 was replaced by V4L2 API during migration from 2.4 to 2.6 series. The old API have lots of drawbacks and don't provide enough means to work with all video and audio standards. The newer API is already available on the main drivers and should be used instead. @@ -61,7 +49,9 @@ Why: V4L1 AP1 was replaced by V4L2 API. during migration from 2.4 to 2.6 Decoder iocts are using internally to allow video drivers to communicate with video decoders. This should also be improved to allow V4L2 calls being translated into compatible internal ioctls. -Who: Mauro Carvalho Chehab <mchehab@brturbo.com.br> + Compatibility ioctls will be provided, for a while, via + v4l1-compat module. 
+Who: Mauro Carvalho Chehab <mchehab@infradead.org> --------------------------- diff --git a/Documentation/filesystems/Exporting b/Documentation/filesystems/Exporting index 31047e0fe14..87019d2b598 100644 --- a/Documentation/filesystems/Exporting +++ b/Documentation/filesystems/Exporting @@ -2,9 +2,12 @@ Making Filesystems Exportable ============================= -Most filesystem operations require a dentry (or two) as a starting +Overview +-------- + +All filesystem operations require a dentry (or two) as a starting point. Local applications have a reference-counted hold on suitable -dentrys via open file descriptors or cwd/root. However remote +dentries via open file descriptors or cwd/root. However remote applications that access a filesystem via a remote filesystem protocol such as NFS may not be able to hold such a reference, and so need a different way to refer to a particular dentry. As the alternative @@ -13,14 +16,14 @@ server-reboot (among other things, though these tend to be the most problematic), there is no simple answer like 'filename'. The mechanism discussed here allows each filesystem implementation to -specify how to generate an opaque (out side of the filesystem) byte +specify how to generate an opaque (outside of the filesystem) byte string for any dentry, and how to find an appropriate dentry for any given opaque byte string. This byte string will be called a "filehandle fragment" as it corresponds to part of an NFS filehandle. A filesystem which supports the mapping between filehandle fragments -and dentrys will be termed "exportable". +and dentries will be termed "exportable". @@ -89,11 +92,9 @@ For a filesystem to be exportable it must: 1/ provide the filehandle fragment routines described below. 2/ make sure that d_splice_alias is used rather than d_add when ->lookup finds an inode for a given parent and name. - Typically the ->lookup routine will end: - if (inode) - return d_splice(inode, dentry); - d_add(dentry, inode); - return NULL; + Typically the ->lookup routine will end with a: + + return d_splice_alias(inode, dentry); } @@ -101,67 +102,39 @@ For a filesystem to be exportable it must: A file system implementation declares that instances of the filesystem are exportable by setting the s_export_op field in the struct super_block. This field must point to a "struct export_operations" -struct which could potentially be full of NULLs, though normally at -least get_parent will be set. - - The primary operations are decode_fh and encode_fh. -decode_fh takes a filehandle fragment and tries to find or create a -dentry for the object referred to by the filehandle. -encode_fh takes a dentry and creates a filehandle fragment which can -later be used to find/create a dentry for the same object. - -decode_fh will probably make use of "find_exported_dentry". -This function lives in the "exportfs" module which a filesystem does -not need unless it is being exported. So rather that calling -find_exported_dentry directly, each filesystem should call it through -the find_exported_dentry pointer in it's export_operations table. -This field is set correctly by the exporting agent (e.g. nfsd) when a -filesystem is exported, and before any export operations are called. - -find_exported_dentry needs three support functions from the -filesystem: - get_name. When given a parent dentry and a child dentry, this - should find a name in the directory identified by the parent - dentry, which leads to the object identified by the child dentry. 
- If no get_name function is supplied, a default implementation is - provided which uses vfs_readdir to find potential names, and - matches inode numbers to find the correct match. - - get_parent. When given a dentry for a directory, this should return - a dentry for the parent. Quite possibly the parent dentry will - have been allocated by d_alloc_anon. - The default get_parent function just returns an error so any - filehandle lookup that requires finding a parent will fail. - ->lookup("..") is *not* used as a default as it can leave ".." - entries in the dcache which are too messy to work with. - - get_dentry. When given an opaque datum, this should find the - implied object and create a dentry for it (possibly with - d_alloc_anon). - The opaque datum is whatever is passed down by the decode_fh - function, and is often simply a fragment of the filehandle - fragment. - decode_fh passes two datums through find_exported_dentry. One that - should be used to identify the target object, and one that can be - used to identify the object's parent, should that be necessary. - The default get_dentry function assumes that the datum contains an - inode number and a generation number, and it attempts to get the - inode using "iget" and check it's validity by matching the - generation number. A filesystem should only depend on the default - if iget can safely be used this way. - -If decode_fh and/or encode_fh are left as NULL, then default -implementations are used. These defaults are suitable for ext2 and -extremely similar filesystems (like ext3). - -The default encode_fh creates a filehandle fragment from the inode -number and generation number of the target together with the inode -number and generation number of the parent (if the parent is -required). - -The default decode_fh extract the target and parent datums from the -filehandle assuming the format used by the default encode_fh and -passed them to find_exported_dentry. +struct which has the following members: + + encode_fh (optional) + Takes a dentry and creates a filehandle fragment which can later be used + to find or create a dentry for the same object. The default + implementation creates a filehandle fragment that encodes a 32bit inode + and generation number for the inode encoded, and if necessary the + same information for the parent. + + fh_to_dentry (mandatory) + Given a filehandle fragment, this should find the implied object and + create a dentry for it (possibly with d_alloc_anon). + + fh_to_parent (optional but strongly recommended) + Given a filehandle fragment, this should find the parent of the + implied object and create a dentry for it (possibly with d_alloc_anon). + May fail if the filehandle fragment is too small. + + get_parent (optional but strongly recommended) + When given a dentry for a directory, this should return a dentry for + the parent. Quite possibly the parent dentry will have been allocated + by d_alloc_anon. The default get_parent function just returns an error + so any filehandle lookup that requires finding a parent will fail. + ->lookup("..") is *not* used as a default as it can leave ".." entries + in the dcache which are too messy to work with. + + get_name (optional) + When given a parent dentry and a child dentry, this should find a name + in the directory identified by the parent dentry, which leads to the + object identified by the child dentry. 
If no get_name function is + supplied, a default implementation is provided which uses vfs_readdir + to find potential names, and matches inode numbers to find the correct + match. A filehandle fragment consists of an array of 1 or more 4byte words, @@ -172,5 +145,3 @@ generated by encode_fh, in which case it will have been padded with nuls. Rather, the encode_fh routine should choose a "type" which indicates the decode_fh how much of the filehandle is valid, and how it should be interpreted. - - diff --git a/Documentation/i386/boot.txt b/Documentation/i386/boot.txt index 35985b34d5a..2f75e750e4f 100644 --- a/Documentation/i386/boot.txt +++ b/Documentation/i386/boot.txt @@ -168,6 +168,8 @@ Offset Proto Name Meaning 0234/1 2.05+ relocatable_kernel Whether kernel is relocatable or not 0235/3 N/A pad2 Unused 0238/4 2.06+ cmdline_size Maximum size of the kernel command line +023C/4 2.07+ hardware_subarch Hardware subarchitecture +0240/8 2.07+ hardware_subarch_data Subarchitecture-specific data (1) For backwards compatibility, if the setup_sects field contains 0, the real value is 4. @@ -204,7 +206,7 @@ boot loaders can ignore those fields. The byte order of all fields is littleendian (this is x86, after all.) -Field name: setup_secs +Field name: setup_sects Type: read Offset/size: 0x1f1/1 Protocol: ALL @@ -356,6 +358,13 @@ Protocol: 2.00+ - If 0, the protected-mode code is loaded at 0x10000. - If 1, the protected-mode code is loaded at 0x100000. + Bit 6 (write): KEEP_SEGMENTS + Protocol: 2.07+ + - if 0, reload the segment registers in the 32bit entry point. + - if 1, do not reload the segment registers in the 32bit entry point. + Assume that %cs %ds %ss %es are all set to flat segments with + a base of 0 (or the equivalent for their environment). + Bit 7 (write): CAN_USE_HEAP Set this bit to 1 to indicate that the value entered in the heap_end_ptr is valid. If this field is clear, some setup code @@ -480,6 +489,29 @@ Protocol: 2.06+ cmdline_size characters. With protocol version 2.05 and earlier, the maximum size was 255. +Field name: hardware_subarch +Type: write +Offset/size: 0x23c/4 +Protocol: 2.07+ + + In a paravirtualized environment the hardware low level architectural + pieces such as interrupt handling, page table handling, and + accessing process control registers needs to be done differently. + + This field allows the bootloader to inform the kernel we are in one + one of those environments. + + 0x00000000 The default x86/PC environment + 0x00000001 lguest + 0x00000002 Xen + +Field name: hardware_subarch_data +Type: write +Offset/size: 0x240/8 +Protocol: 2.07+ + + A pointer to data that is specific to hardware subarch + **** THE KERNEL COMMAND LINE diff --git a/Documentation/kbuild/makefiles.txt b/Documentation/kbuild/makefiles.txt index 6166e2d7da7..7a7753321a2 100644 --- a/Documentation/kbuild/makefiles.txt +++ b/Documentation/kbuild/makefiles.txt @@ -519,17 +519,17 @@ more details, with real examples. to the user why it stops. cc-cross-prefix - cc-cross-prefix is used to check if there exist a $(CC) in path with + cc-cross-prefix is used to check if there exists a $(CC) in path with one of the listed prefixes. The first prefix where there exist a prefix$(CC) in the PATH is returned - and if no prefix$(CC) is found then nothing is returned. Additional prefixes are separated by a single space in the call of cc-cross-prefix. 
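[Editorial note: to make the export_operations description in the Documentation/filesystems/Exporting hunk above more concrete, here is a minimal sketch of a filesystem wiring up the two key methods. The myfs_* names are purely illustrative, and the fh_to_dentry()/get_parent() prototypes are assumed to match the exportfs interface introduced by this series.]

#include <linux/exportfs.h>
#include <linux/fs.h>
#include <linux/err.h>

static struct dentry *myfs_fh_to_dentry(struct super_block *sb,
					struct fid *fid, int fh_len, int fh_type)
{
	/* Decode the inode/generation pair from the filehandle fragment and
	 * return a dentry for it, typically via iget() plus d_alloc_anon(). */
	return ERR_PTR(-ESTALE);	/* placeholder */
}

static struct dentry *myfs_get_parent(struct dentry *child)
{
	/* Return a dentry for the parent directory of "child". */
	return ERR_PTR(-ESTALE);	/* placeholder */
}

static struct export_operations myfs_export_ops = {
	.fh_to_dentry	= myfs_fh_to_dentry,
	.get_parent	= myfs_get_parent,
};

/* In the filesystem's fill_super():  sb->s_export_op = &myfs_export_ops; */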
- This functionality is usefull for architecture Makefile that try - to set CROSS_COMPILE to well know values but may have several + This functionality is useful for architecture Makefiles that try + to set CROSS_COMPILE to well-known values but may have several values to select between. - It is recommended only to try to set CROSS_COMPILE is it is a cross - build (host arch is different from target arch). And is CROSS_COMPILE + It is recommended only to try to set CROSS_COMPILE if it is a cross + build (host arch is different from target arch). And if CROSS_COMPILE is already set then leave it with the old value. Example: diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index 6accd360da7..b2361667839 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -772,6 +772,23 @@ and is between 256 and 4096 characters. It is defined in the file inttest= [IA64] + intel_iommu= [DMAR] Intel IOMMU driver (DMAR) option + off + Disable intel iommu driver. + igfx_off [Default Off] + By default, gfx is mapped as normal device. If a gfx + device has a dedicated DMAR unit, the DMAR unit is + bypassed by not enabling DMAR with this option. In + this case, gfx device will use physical address for + DMA. + forcedac [x86_64] + With this option iommu will not optimize to look + for io virtual address below 32 bit forcing dual + address cycle on pci bus for cards supporting greater + than 32 bit addressing. The default is to look + for translation below 32 bit and if not available + then look in the higher range. + io7= [HW] IO7 for Marvel based alpha systems See comment before marvel_specify_io7 in arch/alpha/kernel/core_marvel.c. diff --git a/Documentation/memory-hotplug.txt b/Documentation/memory-hotplug.txt index 5fbcc22c98e..168117bd6ee 100644 --- a/Documentation/memory-hotplug.txt +++ b/Documentation/memory-hotplug.txt @@ -2,7 +2,8 @@ Memory Hotplug ============== -Last Updated: Jul 28 2007 +Created: Jul 28 2007 +Add description of notifier of memory hotplug Oct 11 2007 This document is about memory hotplug including how-to-use and current status. Because Memory Hotplug is still under development, contents of this text will @@ -24,7 +25,8 @@ be changed often. 6.1 Memory offline and ZONE_MOVABLE 6.2. How to offline memory 7. Physical memory remove -8. Future Work List +8. Memory hotplug event notifier +9. Future Work List Note(1): x86_64's has special implementation for memory hotplug. This text does not describe it. @@ -307,8 +309,58 @@ Need more implementation yet.... - Notification completion of remove works by OS to firmware. - Guard from remove if not yet. +-------------------------------- +8. Memory hotplug event notifier +-------------------------------- +Memory hotplug has event notifer. There are 6 types of notification. + +MEMORY_GOING_ONLINE + Generated before new memory becomes available in order to be able to + prepare subsystems to handle memory. The page allocator is still unable + to allocate from the new memory. + +MEMORY_CANCEL_ONLINE + Generated if MEMORY_GOING_ONLINE fails. + +MEMORY_ONLINE + Generated when memory has succesfully brought online. The callback may + allocate pages from the new memory. + +MEMORY_GOING_OFFLINE + Generated to begin the process of offlining memory. Allocations are no + longer possible from the memory but some of the memory to be offlined + is still in use. The callback can be used to free memory known to a + subsystem from the indicated memory section. 
+ +MEMORY_CANCEL_OFFLINE + Generated if MEMORY_GOING_OFFLINE fails. Memory is available again from + the section that we attempted to offline. + +MEMORY_OFFLINE + Generated after offlining memory is complete. + +A callback routine can be registered by + hotplug_memory_notifier(callback_func, priority) + +The second argument of callback function (action) is event types of above. +The third argument is passed by pointer of struct memory_notify. + +struct memory_notify { + unsigned long start_pfn; + unsigned long nr_pages; + int status_cahnge_nid; +} + +start_pfn is start_pfn of online/offline memory. +nr_pages is # of pages of online/offline memory. +status_change_nid is set node id when N_HIGH_MEMORY of nodemask is (will be) +set/clear. It means a new(memoryless) node gets new memory by online and a +node loses all memory. If this is -1, then nodemask status is not changed. +If status_changed_nid >= 0, callback should create/discard structures for the +node if necessary. + -------------- -8. Future Work +9. Future Work -------------- - allowing memory hot-add to ZONE_MOVABLE. maybe we need some switch like sysctl or new control file. diff --git a/Documentation/powerpc/mpc52xx-device-tree-bindings.txt b/Documentation/powerpc/mpc52xx-device-tree-bindings.txt index 5f7d536cb0c..5e03610e186 100644 --- a/Documentation/powerpc/mpc52xx-device-tree-bindings.txt +++ b/Documentation/powerpc/mpc52xx-device-tree-bindings.txt @@ -185,7 +185,7 @@ bestcomm@<addr> dma-controller mpc5200-bestcomm 5200 pic also requires Recommended soc5200 child nodes; populate as needed for your board name device_type compatible Description ---- ----------- ---------- ----------- -gpt@<addr> gpt mpc5200-gpt General purpose timers +gpt@<addr> gpt fsl,mpc5200-gpt General purpose timers rtc@<addr> rtc mpc5200-rtc Real time clock mscan@<addr> mscan mpc5200-mscan CAN bus controller pci@<addr> pci mpc5200-pci PCI bridge @@ -213,7 +213,7 @@ cell-index int When multiple devices are present, is the 5) General Purpose Timer nodes (child of soc5200 node) On the mpc5200 and 5200b, GPT0 has a watchdog timer function. If the board design supports the internal wdt, then the device node for GPT0 should -include the empty property 'has-wdt'. +include the empty property 'fsl,has-wdt'. 
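[Editorial note: as an illustration of the memory hotplug event notifier interface documented in the memory-hotplug.txt hunk above, the sketch below registers a callback with hotplug_memory_notifier(). The my_mem_* names are illustrative, and the MEM_* constants are assumed to be the in-kernel spellings of the MEMORY_* events listed in that document.]

#include <linux/memory.h>
#include <linux/notifier.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int my_mem_callback(struct notifier_block *self,
			   unsigned long action, void *arg)
{
	struct memory_notify *mn = arg;

	switch (action) {
	case MEM_GOING_ONLINE:
		/* Prepare for pages [start_pfn, start_pfn + nr_pages);
		 * the page allocator cannot hand them out yet. */
		break;
	case MEM_ONLINE:
		/* New memory is usable; status_change_nid >= 0 means a
		 * (previously memoryless) node just gained memory. */
		printk(KERN_INFO "onlined %lu pages at pfn %lu (node %d)\n",
		       mn->nr_pages, mn->start_pfn, mn->status_change_nid);
		break;
	case MEM_GOING_OFFLINE:
		/* Free anything this subsystem holds in the section. */
		break;
	case MEM_CANCEL_ONLINE:
	case MEM_CANCEL_OFFLINE:
	case MEM_OFFLINE:
		break;
	}
	return NOTIFY_OK;
}

static int __init my_mem_notifier_init(void)
{
	hotplug_memory_notifier(my_mem_callback, 0);
	return 0;
}
__initcall(my_mem_notifier_init);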
6) PSC nodes (child of soc5200 node) PSC nodes can define the optional 'port-number' property to force assignment @@ -1505,15 +1505,16 @@ quiet_cmd_rmfiles = $(if $(wildcard $(rm-files)),CLEAN $(wildcard $(rm-files)) # and we build for the host arch quiet_cmd_depmod = DEPMOD $(KERNELRELEASE) cmd_depmod = \ - if [ -r System.map -a -x $(DEPMOD) -a "$(SUBARCH)" = "$(ARCH)" ]; then \ + if [ -r System.map -a -x $(DEPMOD) ]; then \ $(DEPMOD) -ae -F System.map \ $(if $(strip $(INSTALL_MOD_PATH)), -b $(INSTALL_MOD_PATH) -r) \ $(KERNELRELEASE); \ fi # Create temporary dir for module support files -cmd_crmodverdir = $(Q)mkdir -p $(MODVERDIR); rm -f $(MODVERDIR)/* - +# clean it up only when building all modules +cmd_crmodverdir = $(Q)mkdir -p $(MODVERDIR) \ + $(if $(KBUILD_MODULES),; rm -f $(MODVERDIR)/*) a_flags = -Wp,-MD,$(depfile) $(KBUILD_AFLAGS) $(AFLAGS_KERNEL) \ $(NOSTDINC_FLAGS) $(KBUILD_CPPFLAGS) \ diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c index e1c470752eb..2d00a08d3f0 100644 --- a/arch/alpha/kernel/pci_iommu.c +++ b/arch/alpha/kernel/pci_iommu.c @@ -7,6 +7,7 @@ #include <linux/pci.h> #include <linux/slab.h> #include <linux/bootmem.h> +#include <linux/scatterlist.h> #include <linux/log2.h> #include <asm/io.h> @@ -465,7 +466,7 @@ EXPORT_SYMBOL(pci_free_consistent); Write dma_length of each leader with the combined lengths of the mergable followers. */ -#define SG_ENT_VIRT_ADDRESS(SG) (page_address((SG)->page) + (SG)->offset) +#define SG_ENT_VIRT_ADDRESS(SG) (sg_virt((SG))) #define SG_ENT_PHYS_ADDRESS(SG) __pa(SG_ENT_VIRT_ADDRESS(SG)) static void diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c index 44ab0dad403..52fc6a88328 100644 --- a/arch/arm/common/dmabounce.c +++ b/arch/arm/common/dmabounce.c @@ -29,6 +29,7 @@ #include <linux/dma-mapping.h> #include <linux/dmapool.h> #include <linux/list.h> +#include <linux/scatterlist.h> #include <asm/cacheflush.h> @@ -442,7 +443,7 @@ dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, BUG_ON(dir == DMA_NONE); for (i = 0; i < nents; i++, sg++) { - struct page *page = sg->page; + struct page *page = sg_page(sg); unsigned int offset = sg->offset; unsigned int length = sg->length; void *ptr = page_address(page) + offset; diff --git a/arch/blackfin/Makefile b/arch/blackfin/Makefile index 3c87291bcda..f7cac7c51e7 100644 --- a/arch/blackfin/Makefile +++ b/arch/blackfin/Makefile @@ -12,8 +12,8 @@ LDFLAGS_vmlinux := -X OBJCOPYFLAGS := -O binary -R .note -R .comment -S GZFLAGS := -9 -CFLAGS += $(call cc-option,-mno-fdpic) -AFLAGS += $(call cc-option,-mno-fdpic) +KBUILD_CFLAGS += $(call cc-option,-mno-fdpic) +KBUILD_AFLAGS += $(call cc-option,-mno-fdpic) CFLAGS_MODULE += -mlong-calls KALLSYMS += --symbol-prefix=_ diff --git a/arch/blackfin/kernel/dma-mapping.c b/arch/blackfin/kernel/dma-mapping.c index 94d7b119b71..a16cb03c529 100644 --- a/arch/blackfin/kernel/dma-mapping.c +++ b/arch/blackfin/kernel/dma-mapping.c @@ -160,8 +160,7 @@ dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, BUG_ON(direction == DMA_NONE); for (i = 0; i < nents; i++, sg++) { - sg->dma_address = (dma_addr_t)(page_address(sg->page) + - sg->offset); + sg->dma_address = (dma_addr_t) sg_virt(sg); invalidate_dcache_range(sg_dma_address(sg), sg_dma_address(sg) + diff --git a/arch/blackfin/kernel/setup.c b/arch/blackfin/kernel/setup.c index 0e746449c29..f1b059e5a06 100644 --- a/arch/blackfin/kernel/setup.c +++ b/arch/blackfin/kernel/setup.c @@ -501,7 +501,7 @@ EXPORT_SYMBOL(sclk_to_usecs); unsigned long 
usecs_to_sclk(unsigned long usecs) { - return get_sclk() / (USEC_PER_SEC * (u64)usecs); + return (get_sclk() * (u64)usecs) / USEC_PER_SEC; } EXPORT_SYMBOL(usecs_to_sclk); @@ -589,7 +589,7 @@ static int show_cpuinfo(struct seq_file *m, void *v) #elif defined CONFIG_BFIN_WT "wt" #endif - , 0); + "", 0); seq_printf(m, "%s\n", cache); diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c index 3c95f4184b9..bc859a311ea 100644 --- a/arch/ia64/hp/common/sba_iommu.c +++ b/arch/ia64/hp/common/sba_iommu.c @@ -246,7 +246,7 @@ static int reserve_sba_gart = 1; static SBA_INLINE void sba_mark_invalid(struct ioc *, dma_addr_t, size_t); static SBA_INLINE void sba_free_range(struct ioc *, dma_addr_t, size_t); -#define sba_sg_address(sg) (page_address((sg)->page) + (sg)->offset) +#define sba_sg_address(sg) sg_virt((sg)) #ifdef FULL_VALID_PDIR static u64 prefetch_spill_page; diff --git a/arch/ia64/hp/sim/simscsi.c b/arch/ia64/hp/sim/simscsi.c index a3a558a0675..6ef9b521993 100644 --- a/arch/ia64/hp/sim/simscsi.c +++ b/arch/ia64/hp/sim/simscsi.c @@ -131,7 +131,7 @@ simscsi_sg_readwrite (struct scsi_cmnd *sc, int mode, unsigned long offset) stat.fd = desc[sc->device->id]; scsi_for_each_sg(sc, sl, scsi_sg_count(sc), i) { - req.addr = __pa(page_address(sl->page) + sl->offset); + req.addr = __pa(sg_virt(sl)); req.len = sl->length; if (DBG) printk("simscsi_sg_%s @ %lx (off %lx) use_sg=%d len=%d\n", @@ -212,7 +212,7 @@ static void simscsi_fillresult(struct scsi_cmnd *sc, char *buf, unsigned len) if (!len) break; thislen = min(len, slp->length); - memcpy(page_address(slp->page) + slp->offset, buf, thislen); + memcpy(sg_virt(slp), buf, thislen); len -= thislen; } } diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c index 8e4894b205e..3f7ea13358e 100644 --- a/arch/ia64/kernel/efi.c +++ b/arch/ia64/kernel/efi.c @@ -1090,7 +1090,8 @@ efi_memmap_init(unsigned long *s, unsigned long *e) void efi_initialize_iomem_resources(struct resource *code_resource, - struct resource *data_resource) + struct resource *data_resource, + struct resource *bss_resource) { struct resource *res; void *efi_map_start, *efi_map_end, *p; @@ -1171,6 +1172,7 @@ efi_initialize_iomem_resources(struct resource *code_resource, */ insert_resource(res, code_resource); insert_resource(res, data_resource); + insert_resource(res, bss_resource); #ifdef CONFIG_KEXEC insert_resource(res, &efi_memmap_res); insert_resource(res, &boot_param_res); diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c index cbf67f1aa29..ae6c3c02e11 100644 --- a/arch/ia64/kernel/setup.c +++ b/arch/ia64/kernel/setup.c @@ -90,7 +90,12 @@ static struct resource code_resource = { .name = "Kernel code", .flags = IORESOURCE_BUSY | IORESOURCE_MEM }; -extern char _text[], _end[], _etext[]; + +static struct resource bss_resource = { + .name = "Kernel bss", + .flags = IORESOURCE_BUSY | IORESOURCE_MEM +}; +extern char _text[], _end[], _etext[], _edata[], _bss[]; unsigned long ia64_max_cacheline_size; @@ -200,8 +205,11 @@ static int __init register_memory(void) code_resource.start = ia64_tpa(_text); code_resource.end = ia64_tpa(_etext) - 1; data_resource.start = ia64_tpa(_etext); - data_resource.end = ia64_tpa(_end) - 1; - efi_initialize_iomem_resources(&code_resource, &data_resource); + data_resource.end = ia64_tpa(_edata) - 1; + bss_resource.start = ia64_tpa(_bss); + bss_resource.end = ia64_tpa(_end) - 1; + efi_initialize_iomem_resources(&code_resource, &data_resource, + &bss_resource); return 0; } diff --git a/arch/ia64/sn/pci/pci_dma.c 
b/arch/ia64/sn/pci/pci_dma.c index ecd8a52b9b9..511db2fd7bf 100644 --- a/arch/ia64/sn/pci/pci_dma.c +++ b/arch/ia64/sn/pci/pci_dma.c @@ -16,7 +16,7 @@ #include <asm/sn/pcidev.h> #include <asm/sn/sn_sal.h> -#define SG_ENT_VIRT_ADDRESS(sg) (page_address((sg)->page) + (sg)->offset) +#define SG_ENT_VIRT_ADDRESS(sg) (sg_virt((sg))) #define SG_ENT_PHYS_ADDRESS(SG) virt_to_phys(SG_ENT_VIRT_ADDRESS(SG)) /** diff --git a/arch/m68k/kernel/dma.c b/arch/m68k/kernel/dma.c index 9d4e4b5b6bd..ef490e1ce60 100644 --- a/arch/m68k/kernel/dma.c +++ b/arch/m68k/kernel/dma.c @@ -121,7 +121,7 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, int i; for (i = 0; i < nents; sg++, i++) { - sg->dma_address = page_to_phys(sg->page) + sg->offset; + sg->dma_address = sg_phys(sg); dma_sync_single_for_device(dev, sg->dma_address, sg->length, dir); } return nents; diff --git a/arch/m68knommu/Kconfig b/arch/m68knommu/Kconfig index f52c627bdad..f4b582cbb56 100644 --- a/arch/m68knommu/Kconfig +++ b/arch/m68knommu/Kconfig @@ -451,6 +451,12 @@ config MOD5272 help Support for the Netburner MOD-5272 board. +config SAVANTrosie1 + bool "Savant Rosie1 board support" + depends on M523x + help + Support for the Savant Rosie1 board. + config ROMFS_FROM_ROM bool "ROMFS image not RAM resident" depends on (NETtel || SNAPGEAR) @@ -492,7 +498,12 @@ config SNEHA bool default y depends on CPU16B - + +config SAVANT + bool + default y + depends on SAVANTrosie1 + config AVNET bool default y diff --git a/arch/m68knommu/Makefile b/arch/m68knommu/Makefile index 92227aaaa26..30aa2553693 100644 --- a/arch/m68knommu/Makefile +++ b/arch/m68knommu/Makefile @@ -48,6 +48,7 @@ board-$(CONFIG_SNEHA) := SNEHA board-$(CONFIG_M5208EVB) := M5208EVB board-$(CONFIG_MOD5272) := MOD5272 board-$(CONFIG_AVNET) := AVNET +board-$(CONFIG_SAVANT) := SAVANT BOARD := $(board-y) model-$(CONFIG_RAMKERNEL) := ram @@ -117,4 +118,4 @@ core-y += arch/m68knommu/kernel/ \ libs-y += arch/m68knommu/lib/ archclean: - $(Q)$(MAKE) $(clean)=arch/m68knommu/boot + diff --git a/arch/m68knommu/defconfig b/arch/m68knommu/defconfig index 3891de09ac2..5a0ecaaee3b 100644 --- a/arch/m68knommu/defconfig +++ b/arch/m68knommu/defconfig @@ -1,41 +1,48 @@ # # Automatically generated make config: don't edit -# Linux kernel version: 2.6.17 -# Tue Jun 27 12:57:06 2006 +# Linux kernel version: 2.6.23 +# Thu Oct 18 13:17:38 2007 # CONFIG_M68K=y # CONFIG_MMU is not set # CONFIG_FPU is not set +CONFIG_ZONE_DMA=y CONFIG_RWSEM_GENERIC_SPINLOCK=y # CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +# CONFIG_ARCH_HAS_ILOG2_U32 is not set +# CONFIG_ARCH_HAS_ILOG2_U64 is not set CONFIG_GENERIC_FIND_NEXT_BIT=y CONFIG_GENERIC_HWEIGHT=y +CONFIG_GENERIC_HARDIRQS=y CONFIG_GENERIC_CALIBRATE_DELAY=y CONFIG_TIME_LOW_RES=y +CONFIG_NO_IOPORT=y +CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" # -# Code maturity level options +# General setup # CONFIG_EXPERIMENTAL=y CONFIG_BROKEN_ON_SMP=y CONFIG_INIT_ENV_ARG_LIMIT=32 - -# -# General setup -# CONFIG_LOCALVERSION="" CONFIG_LOCALVERSION_AUTO=y # CONFIG_SYSVIPC is not set # CONFIG_POSIX_MQUEUE is not set # CONFIG_BSD_PROCESS_ACCT is not set -# CONFIG_SYSCTL is not set +# CONFIG_TASKSTATS is not set +# CONFIG_USER_NS is not set # CONFIG_AUDIT is not set # CONFIG_IKCONFIG is not set +CONFIG_LOG_BUF_SHIFT=14 +# CONFIG_SYSFS_DEPRECATED is not set # CONFIG_RELAY is not set -CONFIG_INITRAMFS_SOURCE="" -CONFIG_UID16=y +# CONFIG_BLK_DEV_INITRD is not set # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set +CONFIG_SYSCTL=y CONFIG_EMBEDDED=y +CONFIG_UID16=y 
+CONFIG_SYSCTL_SYSCALL=y # CONFIG_KALLSYMS is not set # CONFIG_HOTPLUG is not set CONFIG_PRINTK=y @@ -44,20 +51,25 @@ CONFIG_ELF_CORE=y CONFIG_BASE_FULL=y # CONFIG_FUTEX is not set # CONFIG_EPOLL is not set +# CONFIG_SIGNALFD is not set +# CONFIG_EVENTFD is not set +# CONFIG_VM_EVENT_COUNTERS is not set CONFIG_SLAB=y +# CONFIG_SLUB is not set +# CONFIG_SLOB is not set CONFIG_TINY_SHMEM=y CONFIG_BASE_SMALL=0 -# CONFIG_SLOB is not set - -# -# Loadable module support -# -# CONFIG_MODULES is not set - -# -# Block layer -# +CONFIG_MODULES=y +CONFIG_MODULE_UNLOAD=y +# CONFIG_MODULE_FORCE_UNLOAD is not set +# CONFIG_MODVERSIONS is not set +# CONFIG_MODULE_SRCVERSION_ALL is not set +# CONFIG_KMOD is not set +CONFIG_BLOCK=y +# CONFIG_LBD is not set # CONFIG_BLK_DEV_IO_TRACE is not set +# CONFIG_LSF is not set +# CONFIG_BLK_DEV_BSG is not set # # IO Schedulers @@ -99,6 +111,7 @@ CONFIG_CLOCK_DIV=1 # # Platform # +# CONFIG_UC5272 is not set CONFIG_M5272C3=y # CONFIG_COBRA5272 is not set # CONFIG_CANCam is not set @@ -107,7 +120,6 @@ CONFIG_M5272C3=y # CONFIG_CPU16B is not set # CONFIG_MOD5272 is not set CONFIG_FREESCALE=y -# CONFIG_LARGE_ALLOCS is not set CONFIG_4KSTACKS=y # @@ -121,6 +133,11 @@ CONFIG_RAMAUTOBIT=y # CONFIG_RAM8BIT is not set # CONFIG_RAM16BIT is not set # CONFIG_RAM32BIT is not set + +# +# ROM configuration +# +# CONFIG_ROM is not set CONFIG_RAMKERNEL=y # CONFIG_ROMKERNEL is not set CONFIG_SELECT_MEMORY_MODEL=y @@ -131,20 +148,19 @@ CONFIG_FLATMEM=y CONFIG_FLAT_NODE_MEM_MAP=y # CONFIG_SPARSEMEM_STATIC is not set CONFIG_SPLIT_PTLOCK_CPUS=4 +# CONFIG_RESOURCES_64BIT is not set +CONFIG_ZONE_DMA_FLAG=1 +CONFIG_VIRT_TO_BUS=y # # Bus options (PCI, PCMCIA, EISA, MCA, ISA) # # CONFIG_PCI is not set +# CONFIG_ARCH_SUPPORTS_MSI is not set # # PCCARD (PCMCIA/CardBus) support # -# CONFIG_PCCARD is not set - -# -# PCI Hotplug Support -# # # Executable file formats @@ -168,7 +184,6 @@ CONFIG_NET=y # # Networking options # -# CONFIG_NETDEBUG is not set CONFIG_PACKET=y # CONFIG_PACKET_MMAP is not set CONFIG_UNIX=y @@ -187,27 +202,21 @@ CONFIG_IP_FIB_HASH=y # CONFIG_INET_IPCOMP is not set # CONFIG_INET_XFRM_TUNNEL is not set # CONFIG_INET_TUNNEL is not set +# CONFIG_INET_XFRM_MODE_TRANSPORT is not set +# CONFIG_INET_XFRM_MODE_TUNNEL is not set +# CONFIG_INET_XFRM_MODE_BEET is not set # CONFIG_INET_DIAG is not set # CONFIG_TCP_CONG_ADVANCED is not set -CONFIG_TCP_CONG_BIC=y +CONFIG_TCP_CONG_CUBIC=y +CONFIG_DEFAULT_TCP_CONG="cubic" +# CONFIG_TCP_MD5SIG is not set # CONFIG_IPV6 is not set # CONFIG_INET6_XFRM_TUNNEL is not set # CONFIG_INET6_TUNNEL is not set +# CONFIG_NETWORK_SECMARK is not set # CONFIG_NETFILTER is not set - -# -# DCCP Configuration (EXPERIMENTAL) -# # CONFIG_IP_DCCP is not set - -# -# SCTP Configuration (EXPERIMENTAL) -# # CONFIG_IP_SCTP is not set - -# -# TIPC Configuration (EXPERIMENTAL) -# # CONFIG_TIPC is not set # CONFIG_ATM is not set # CONFIG_BRIDGE is not set @@ -218,7 +227,6 @@ CONFIG_TCP_CONG_BIC=y # CONFIG_ATALK is not set # CONFIG_X25 is not set # CONFIG_LAPB is not set -# CONFIG_NET_DIVERT is not set # CONFIG_ECONET is not set # CONFIG_WAN_ROUTER is not set @@ -234,7 +242,17 @@ CONFIG_TCP_CONG_BIC=y # CONFIG_HAMRADIO is not set # CONFIG_IRDA is not set # CONFIG_BT is not set +# CONFIG_AF_RXRPC is not set + +# +# Wireless +# +# CONFIG_CFG80211 is not set +# CONFIG_WIRELESS_EXT is not set +# CONFIG_MAC80211 is not set # CONFIG_IEEE80211 is not set +# CONFIG_RFKILL is not set +# CONFIG_NET_9P is not set # # Device Drivers @@ -245,16 +263,8 @@ CONFIG_TCP_CONG_BIC=y # 
CONFIG_STANDALONE=y CONFIG_PREVENT_FIRMWARE_BUILD=y -# CONFIG_FW_LOADER is not set - -# -# Connector - unified userspace <-> kernelspace linker -# +# CONFIG_SYS_HYPERVISOR is not set # CONFIG_CONNECTOR is not set - -# -# Memory Technology Devices (MTD) -# CONFIG_MTD=y # CONFIG_MTD_DEBUG is not set # CONFIG_MTD_CONCAT is not set @@ -266,11 +276,13 @@ CONFIG_MTD_PARTITIONS=y # User Modules And Translation Layers # CONFIG_MTD_CHAR=y +CONFIG_MTD_BLKDEVS=y CONFIG_MTD_BLOCK=y # CONFIG_FTL is not set # CONFIG_NFTL is not set # CONFIG_INFTL is not set # CONFIG_RFD_FTL is not set +# CONFIG_SSFDC is not set # # RAM/ROM/Flash chip drivers @@ -290,7 +302,6 @@ CONFIG_MTD_CFI_I2=y CONFIG_MTD_RAM=y # CONFIG_MTD_ROM is not set # CONFIG_MTD_ABSENT is not set -# CONFIG_MTD_OBSOLETE_CHIPS is not set # # Mapping drivers for chip access @@ -313,42 +324,25 @@ CONFIG_MTD_UCLINUX=y # CONFIG_MTD_DOC2000 is not set # CONFIG_MTD_DOC2001 is not set # CONFIG_MTD_DOC2001PLUS is not set - -# -# NAND Flash Device Drivers -# # CONFIG_MTD_NAND is not set - -# -# OneNAND Flash Device Drivers -# # CONFIG_MTD_ONENAND is not set # -# Parallel port support +# UBI - Unsorted block images # +# CONFIG_MTD_UBI is not set # CONFIG_PARPORT is not set - -# -# Plug and Play support -# - -# -# Block devices -# +CONFIG_BLK_DEV=y # CONFIG_BLK_DEV_COW_COMMON is not set # CONFIG_BLK_DEV_LOOP is not set # CONFIG_BLK_DEV_NBD is not set CONFIG_BLK_DEV_RAM=y CONFIG_BLK_DEV_RAM_COUNT=16 CONFIG_BLK_DEV_RAM_SIZE=4096 -# CONFIG_BLK_DEV_INITRD is not set +CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024 # CONFIG_CDROM_PKTCDVD is not set # CONFIG_ATA_OVER_ETH is not set - -# -# ATA/ATAPI/MFM/RLL support -# +# CONFIG_MISC_DEVICES is not set # CONFIG_IDE is not set # @@ -356,67 +350,29 @@ CONFIG_BLK_DEV_RAM_SIZE=4096 # # CONFIG_RAID_ATTRS is not set # CONFIG_SCSI is not set - -# -# Multi-device support (RAID and LVM) -# +# CONFIG_SCSI_DMA is not set +# CONFIG_SCSI_NETLINK is not set # CONFIG_MD is not set - -# -# Fusion MPT device support -# -# CONFIG_FUSION is not set - -# -# IEEE 1394 (FireWire) support -# - -# -# I2O device support -# - -# -# Network device support -# CONFIG_NETDEVICES=y +# CONFIG_NETDEVICES_MULTIQUEUE is not set # CONFIG_DUMMY is not set # CONFIG_BONDING is not set +# CONFIG_MACVLAN is not set # CONFIG_EQUALIZER is not set # CONFIG_TUN is not set - -# -# PHY device support -# # CONFIG_PHYLIB is not set - -# -# Ethernet (10 or 100Mbit) -# CONFIG_NET_ETHERNET=y # CONFIG_MII is not set CONFIG_FEC=y # CONFIG_FEC2 is not set +# CONFIG_NETDEV_1000 is not set +# CONFIG_NETDEV_10000 is not set # -# Ethernet (1000 Mbit) -# - -# -# Ethernet (10000 Mbit) -# - -# -# Token Ring devices -# - -# -# Wireless LAN (non-hamradio) -# -# CONFIG_NET_RADIO is not set - -# -# Wan interfaces +# Wireless LAN # +# CONFIG_WLAN_PRE80211 is not set +# CONFIG_WLAN_80211 is not set # CONFIG_WAN is not set CONFIG_PPP=y # CONFIG_PPP_MULTILINK is not set @@ -427,20 +383,14 @@ CONFIG_PPP=y # CONFIG_PPP_BSDCOMP is not set # CONFIG_PPP_MPPE is not set # CONFIG_PPPOE is not set +# CONFIG_PPPOL2TP is not set # CONFIG_SLIP is not set +CONFIG_SLHC=y # CONFIG_SHAPER is not set # CONFIG_NETCONSOLE is not set # CONFIG_NETPOLL is not set # CONFIG_NET_POLL_CONTROLLER is not set - -# -# ISDN subsystem -# # CONFIG_ISDN is not set - -# -# Telephony Support -# # CONFIG_PHONE is not set # @@ -472,34 +422,13 @@ CONFIG_SERIAL_COLDFIRE=y # CONFIG_UNIX98_PTYS is not set CONFIG_LEGACY_PTYS=y CONFIG_LEGACY_PTY_COUNT=256 - -# -# IPMI -# # CONFIG_IPMI_HANDLER is not set - -# -# Watchdog Cards -# # 
CONFIG_WATCHDOG is not set +# CONFIG_HW_RANDOM is not set # CONFIG_GEN_RTC is not set -# CONFIG_DTLK is not set # CONFIG_R3964 is not set - -# -# Ftape, the floppy tape device driver -# # CONFIG_RAW_DRIVER is not set - -# -# TPM devices -# # CONFIG_TCG_TPM is not set -# CONFIG_TELCLOCK is not set - -# -# I2C support -# # CONFIG_I2C is not set # @@ -507,101 +436,74 @@ CONFIG_LEGACY_PTY_COUNT=256 # # CONFIG_SPI is not set # CONFIG_SPI_MASTER is not set - -# -# Dallas's 1-wire bus -# # CONFIG_W1 is not set - -# -# Hardware Monitoring support -# +# CONFIG_POWER_SUPPLY is not set # CONFIG_HWMON is not set -# CONFIG_HWMON_VID is not set # -# Misc devices +# Multifunction device drivers # +# CONFIG_MFD_SM501 is not set # # Multimedia devices # # CONFIG_VIDEO_DEV is not set -CONFIG_VIDEO_V4L2=y +# CONFIG_DVB_CORE is not set +CONFIG_DAB=y # -# Digital Video Broadcasting Devices +# Graphics support # -# CONFIG_DVB is not set +# CONFIG_BACKLIGHT_LCD_SUPPORT is not set # -# Graphics support +# Display device support # +# CONFIG_DISPLAY_SUPPORT is not set +# CONFIG_VGASTATE is not set +CONFIG_VIDEO_OUTPUT_CONTROL=y # CONFIG_FB is not set # # Sound # # CONFIG_SOUND is not set - -# -# USB support -# -# CONFIG_USB_ARCH_HAS_HCD is not set -# CONFIG_USB_ARCH_HAS_OHCI is not set -# CONFIG_USB_ARCH_HAS_EHCI is not set - -# -# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' -# - -# -# USB Gadget Support -# -# CONFIG_USB_GADGET is not set - -# -# MMC/SD Card support -# +# CONFIG_USB_SUPPORT is not set # CONFIG_MMC is not set - -# -# LED devices -# # CONFIG_NEW_LEDS is not set +# CONFIG_RTC_CLASS is not set # -# LED drivers +# DMA Engine support # +# CONFIG_DMA_ENGINE is not set # -# LED Triggers +# DMA Clients # # -# InfiniBand support +# DMA Devices # # -# EDAC - error detection and reporting (RAS) (EXPERIMENTAL) +# Userspace I/O # - -# -# Real Time Clock -# -# CONFIG_RTC_CLASS is not set +# CONFIG_UIO is not set # # File systems # CONFIG_EXT2_FS=y # CONFIG_EXT2_FS_XATTR is not set -# CONFIG_EXT2_FS_XIP is not set # CONFIG_EXT3_FS is not set +# CONFIG_EXT4DEV_FS is not set # CONFIG_REISERFS_FS is not set # CONFIG_JFS_FS is not set # CONFIG_FS_POSIX_ACL is not set # CONFIG_XFS_FS is not set +# CONFIG_GFS2_FS is not set # CONFIG_OCFS2_FS is not set # CONFIG_MINIX_FS is not set CONFIG_ROMFS_FS=y @@ -629,6 +531,7 @@ CONFIG_ROMFS_FS=y # Pseudo filesystems # CONFIG_PROC_FS=y +CONFIG_PROC_SYSCTL=y CONFIG_SYSFS=y # CONFIG_TMPFS is not set # CONFIG_HUGETLB_PAGE is not set @@ -645,7 +548,6 @@ CONFIG_RAMFS=y # CONFIG_BEFS_FS is not set # CONFIG_BFS_FS is not set # CONFIG_EFS_FS is not set -# CONFIG_JFFS_FS is not set # CONFIG_JFFS2_FS is not set # CONFIG_CRAMFS is not set # CONFIG_VXFS_FS is not set @@ -664,7 +566,6 @@ CONFIG_RAMFS=y # CONFIG_NCP_FS is not set # CONFIG_CODA_FS is not set # CONFIG_AFS_FS is not set -# CONFIG_9P_FS is not set # # Partition Types @@ -678,15 +579,21 @@ CONFIG_MSDOS_PARTITION=y # CONFIG_NLS is not set # +# Distributed Lock Manager +# +# CONFIG_DLM is not set + +# # Kernel hacking # # CONFIG_PRINTK_TIME is not set +# CONFIG_ENABLE_MUST_CHECK is not set # CONFIG_MAGIC_SYSRQ is not set +# CONFIG_UNUSED_SYMBOLS is not set +# CONFIG_DEBUG_FS is not set +# CONFIG_HEADERS_CHECK is not set # CONFIG_DEBUG_KERNEL is not set -CONFIG_LOG_BUF_SHIFT=14 # CONFIG_DEBUG_BUGVERBOSE is not set -# CONFIG_DEBUG_FS is not set -# CONFIG_UNWIND_INFO is not set # CONFIG_FULLDEBUG is not set # CONFIG_HIGHPROFILE is not set # CONFIG_BOOTPARAM is not set @@ -699,20 +606,16 @@ CONFIG_LOG_BUF_SHIFT=14 # # 
CONFIG_KEYS is not set # CONFIG_SECURITY is not set - -# -# Cryptographic options -# # CONFIG_CRYPTO is not set # -# Hardware crypto devices -# - -# # Library routines # # CONFIG_CRC_CCITT is not set # CONFIG_CRC16 is not set +# CONFIG_CRC_ITU_T is not set # CONFIG_CRC32 is not set +# CONFIG_CRC7 is not set # CONFIG_LIBCRC32C is not set +CONFIG_HAS_IOMEM=y +CONFIG_HAS_DMA=y diff --git a/arch/m68knommu/kernel/setup.c b/arch/m68knommu/kernel/setup.c index 3f86ade3a22..74bf94948ec 100644 --- a/arch/m68knommu/kernel/setup.c +++ b/arch/m68knommu/kernel/setup.c @@ -151,27 +151,15 @@ void setup_arch(char **cmdline_p) #ifdef CONFIG_ELITE printk(KERN_INFO "Modified for M5206eLITE by Rob Scott, rscott@mtrob.fdns.net\n"); #endif -#ifdef CONFIG_TELOS - printk(KERN_INFO "Modified for Omnia ToolVox by James D. Schettine, james@telos-systems.com\n"); -#endif #endif printk(KERN_INFO "Flat model support (C) 1998,1999 Kenneth Albanowski, D. Jeff Dionne\n"); #if defined( CONFIG_PILOT ) && defined( CONFIG_M68328 ) printk(KERN_INFO "TRG SuperPilot FLASH card support <info@trgnet.com>\n"); #endif - #if defined( CONFIG_PILOT ) && defined( CONFIG_M68EZ328 ) printk(KERN_INFO "PalmV support by Lineo Inc. <jeff@uclinux.com>\n"); #endif - -#ifdef CONFIG_M68EZ328ADS - printk(KERN_INFO "M68EZ328ADS board support (C) 1999 Vladimir Gurevich <vgurevic@cisco.com>\n"); -#endif - -#ifdef CONFIG_ALMA_ANS - printk(KERN_INFO "Alma Electronics board support (C) 1999 Vladimir Gurevich <vgurevic@cisco.com>\n"); -#endif #if defined (CONFIG_M68360) printk(KERN_INFO "QUICC port done by SED Systems <hamilton@sedsystems.ca>,\n"); printk(KERN_INFO "based on 2.0.38 port by Lineo Inc. <mleslie@lineo.com>.\n"); @@ -188,11 +176,9 @@ void setup_arch(char **cmdline_p) "BSS=0x%06x-0x%06x\n", (int) &_stext, (int) &_etext, (int) &_sdata, (int) &_edata, (int) &_sbss, (int) &_ebss); - printk(KERN_DEBUG "KERNEL -> ROMFS=0x%06x-0x%06x MEM=0x%06x-0x%06x " - "STACK=0x%06x-0x%06x\n", + printk(KERN_DEBUG "MEMORY -> ROMFS=0x%06x-0x%06x MEM=0x%06x-0x%06x\n ", (int) &_ebss, (int) memory_start, - (int) memory_start, (int) memory_end, - (int) memory_end, (int) _ramend); + (int) memory_start, (int) memory_end); #endif /* Keep a copy of command line */ @@ -287,12 +273,3 @@ struct seq_operations cpuinfo_op = { .show = show_cpuinfo, }; -void arch_gettod(int *year, int *mon, int *day, int *hour, - int *min, int *sec) -{ - if (mach_gettod) - mach_gettod(year, mon, day, hour, min, sec); - else - *year = *mon = *day = *hour = *min = *sec = 0; -} - diff --git a/arch/m68knommu/kernel/signal.c b/arch/m68knommu/kernel/signal.c index 437f8c6c14a..70371378db8 100644 --- a/arch/m68knommu/kernel/signal.c +++ b/arch/m68knommu/kernel/signal.c @@ -781,15 +781,7 @@ asmlinkage int do_signal(sigset_t *oldset, struct pt_regs *regs) /* Did we come from a system call? 
*/ if (regs->orig_d0 >= 0) { /* Restart the system call - no handlers present */ - if (regs->d0 == -ERESTARTNOHAND - || regs->d0 == -ERESTARTSYS - || regs->d0 == -ERESTARTNOINTR) { - regs->d0 = regs->orig_d0; - regs->pc -= 2; - } else if (regs->d0 == -ERESTART_RESTARTBLOCK) { - regs->d0 = __NR_restart_syscall; - regs->pc -= 2; - } + handle_restart(regs, NULL, 0); } return 0; } diff --git a/arch/m68knommu/kernel/time.c b/arch/m68knommu/kernel/time.c index 467053da2d0..77e5375a2dd 100644 --- a/arch/m68knommu/kernel/time.c +++ b/arch/m68knommu/kernel/time.c @@ -27,7 +27,6 @@ #define TICK_SIZE (tick_nsec / 1000) - static inline int set_rtc_mmss(unsigned long nowtime) { if (mach_set_clock_mmss) @@ -39,15 +38,11 @@ static inline int set_rtc_mmss(unsigned long nowtime) * timer_interrupt() needs to keep up the real-time clock, * as well as call the "do_timer()" routine every clocktick */ -static irqreturn_t timer_interrupt(int irq, void *dummy) +irqreturn_t arch_timer_interrupt(int irq, void *dummy) { /* last time the cmos clock got updated */ static long last_rtc_update=0; - /* may need to kick the hardware timer */ - if (mach_tick) - mach_tick(); - write_seqlock(&xtime_lock); do_timer(1); @@ -103,10 +98,10 @@ void time_init(void) { unsigned int year, mon, day, hour, min, sec; - extern void arch_gettod(int *year, int *mon, int *day, int *hour, - int *min, int *sec); - - arch_gettod(&year, &mon, &day, &hour, &min, &sec); + if (mach_gettod) + mach_gettod(&year, &mon, &day, &hour, &min, &sec); + else + year = mon = day = hour = min = sec = 0; if ((year += 1900) < 1970) year += 100; @@ -114,7 +109,7 @@ void time_init(void) xtime.tv_nsec = 0; wall_to_monotonic.tv_sec = -xtime.tv_sec; - mach_sched_init(timer_interrupt); + hw_timer_init(); } /* @@ -128,7 +123,7 @@ void do_gettimeofday(struct timeval *tv) do { seq = read_seqbegin_irqsave(&xtime_lock, flags); - usec = mach_gettimeoffset ? mach_gettimeoffset() : 0; + usec = hw_timer_offset(); sec = xtime.tv_sec; usec += (xtime.tv_nsec / 1000); } while (read_seqretry_irqrestore(&xtime_lock, seq, flags)); @@ -160,8 +155,7 @@ int do_settimeofday(struct timespec *tv) * Discover what correction gettimeofday * would have done, and then undo it! 
*/ - if (mach_gettimeoffset) - nsec -= (mach_gettimeoffset() * 1000); + nsec -= (hw_timer_offset() * 1000); wtm_sec = wall_to_monotonic.tv_sec + (xtime.tv_sec - sec); wtm_nsec = wall_to_monotonic.tv_nsec + (xtime.tv_nsec - nsec); diff --git a/arch/m68knommu/platform/5206/config.c b/arch/m68knommu/platform/5206/config.c index d0f2dc5cb5a..b3c4dd4cc13 100644 --- a/arch/m68knommu/platform/5206/config.c +++ b/arch/m68knommu/platform/5206/config.c @@ -10,13 +10,10 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> -#include <asm/irq.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> #include <asm/mcftimer.h> @@ -25,9 +22,6 @@ /***************************************************************************/ -void coldfire_tick(void); -void coldfire_timer_init(irq_handler_t handler); -unsigned long coldfire_timer_offset(void); void coldfire_reset(void); /***************************************************************************/ @@ -97,9 +91,6 @@ int mcf_timerirqpending(int timer) void config_BSP(char *commandp, int size) { mcf_setimr(MCFSIM_IMR_MASKALL); - mach_sched_init = coldfire_timer_init; - mach_tick = coldfire_tick; - mach_gettimeoffset = coldfire_timer_offset; mach_reset = coldfire_reset; } diff --git a/arch/m68knommu/platform/5206e/config.c b/arch/m68knommu/platform/5206e/config.c index 425703fb6ce..f84a4aea8cb 100644 --- a/arch/m68knommu/platform/5206e/config.c +++ b/arch/m68knommu/platform/5206e/config.c @@ -9,23 +9,16 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/interrupt.h> -#include <asm/irq.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> -#include <asm/mcftimer.h> #include <asm/mcfsim.h> #include <asm/mcfdma.h> /***************************************************************************/ -void coldfire_tick(void); -void coldfire_timer_init(irq_handler_t handler); -unsigned long coldfire_timer_offset(void); void coldfire_reset(void); /***************************************************************************/ @@ -102,9 +95,6 @@ void config_BSP(char *commandp, int size) commandp[size-1] = 0; #endif /* CONFIG_NETtel */ - mach_sched_init = coldfire_timer_init; - mach_tick = coldfire_tick; - mach_gettimeoffset = coldfire_timer_offset; mach_reset = coldfire_reset; } diff --git a/arch/m68knommu/platform/520x/config.c b/arch/m68knommu/platform/520x/config.c index a2c95bebd00..6edbd41261c 100644 --- a/arch/m68knommu/platform/520x/config.c +++ b/arch/m68knommu/platform/520x/config.c @@ -27,9 +27,6 @@ unsigned int dma_device_address[MAX_M68K_DMA_CHANNELS]; /***************************************************************************/ -void coldfire_pit_tick(void); -void coldfire_pit_init(irq_handler_t handler); -unsigned long coldfire_pit_offset(void); void coldfire_reset(void); /***************************************************************************/ @@ -47,10 +44,7 @@ void mcf_autovector(unsigned int vec) void config_BSP(char *commandp, int size) { - mach_sched_init = coldfire_pit_init; - mach_tick = coldfire_pit_tick; - mach_gettimeoffset = coldfire_pit_offset; - mach_reset = coldfire_reset; + mach_reset = coldfire_reset; } /***************************************************************************/ diff 
--git a/arch/m68knommu/platform/523x/config.c b/arch/m68knommu/platform/523x/config.c index 0a3af05a434..e7f80c8e863 100644 --- a/arch/m68knommu/platform/523x/config.c +++ b/arch/m68knommu/platform/523x/config.c @@ -13,12 +13,10 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> #include <asm/mcfsim.h> @@ -26,9 +24,6 @@ /***************************************************************************/ -void coldfire_pit_tick(void); -void coldfire_pit_init(irq_handler_t handler); -unsigned long coldfire_pit_offset(void); void coldfire_reset(void); /***************************************************************************/ @@ -62,9 +57,6 @@ void mcf_autovector(unsigned int vec) void config_BSP(char *commandp, int size) { mcf_disableall(); - mach_sched_init = coldfire_pit_init; - mach_tick = coldfire_pit_tick; - mach_gettimeoffset = coldfire_pit_offset; mach_reset = coldfire_reset; } diff --git a/arch/m68knommu/platform/5249/config.c b/arch/m68knommu/platform/5249/config.c index dc2c362590c..d4d39435cb1 100644 --- a/arch/m68knommu/platform/5249/config.c +++ b/arch/m68knommu/platform/5249/config.c @@ -9,24 +9,17 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> -#include <asm/irq.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> -#include <asm/mcftimer.h> #include <asm/mcfsim.h> #include <asm/mcfdma.h> /***************************************************************************/ -void coldfire_tick(void); -void coldfire_timer_init(irq_handler_t handler); -unsigned long coldfire_timer_offset(void); void coldfire_reset(void); /***************************************************************************/ @@ -95,9 +88,6 @@ int mcf_timerirqpending(int timer) void config_BSP(char *commandp, int size) { mcf_setimr(MCFSIM_IMR_MASKALL); - mach_sched_init = coldfire_timer_init; - mach_tick = coldfire_tick; - mach_gettimeoffset = coldfire_timer_offset; mach_reset = coldfire_reset; } diff --git a/arch/m68knommu/platform/5272/config.c b/arch/m68knommu/platform/5272/config.c index 1365a8300d5..634a6375e4a 100644 --- a/arch/m68knommu/platform/5272/config.c +++ b/arch/m68knommu/platform/5272/config.c @@ -10,24 +10,17 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> -#include <asm/irq.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> -#include <asm/mcftimer.h> #include <asm/mcfsim.h> #include <asm/mcfdma.h> /***************************************************************************/ -void coldfire_tick(void); -void coldfire_timer_init(irq_handler_t handler); -unsigned long coldfire_timer_offset(void); void coldfire_reset(void); extern unsigned int mcf_timervector; @@ -128,9 +121,6 @@ void config_BSP(char *commandp, int size) mcf_timervector = 69; mcf_profilevector = 70; - mach_sched_init = coldfire_timer_init; - mach_tick = coldfire_tick; - mach_gettimeoffset = coldfire_timer_offset; mach_reset = coldfire_reset; } diff --git 
a/arch/m68knommu/platform/527x/config.c b/arch/m68knommu/platform/527x/config.c index 1b820441419..9cbfbc68ae4 100644 --- a/arch/m68knommu/platform/527x/config.c +++ b/arch/m68knommu/platform/527x/config.c @@ -13,12 +13,10 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> #include <asm/mcfsim.h> @@ -26,9 +24,6 @@ /***************************************************************************/ -void coldfire_pit_tick(void); -void coldfire_pit_init(irq_handler_t handler); -unsigned long coldfire_pit_offset(void); void coldfire_reset(void); /***************************************************************************/ @@ -62,9 +57,6 @@ void mcf_autovector(unsigned int vec) void config_BSP(char *commandp, int size) { mcf_disableall(); - mach_sched_init = coldfire_pit_init; - mach_tick = coldfire_pit_tick; - mach_gettimeoffset = coldfire_pit_offset; mach_reset = coldfire_reset; } diff --git a/arch/m68knommu/platform/528x/config.c b/arch/m68knommu/platform/528x/config.c index a089e951369..acbd43486d9 100644 --- a/arch/m68knommu/platform/528x/config.c +++ b/arch/m68knommu/platform/528x/config.c @@ -13,12 +13,10 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> #include <asm/mcfsim.h> @@ -26,9 +24,6 @@ /***************************************************************************/ -void coldfire_pit_tick(void); -void coldfire_pit_init(irq_handler_t handler); -unsigned long coldfire_pit_offset(void); void coldfire_reset(void); /***************************************************************************/ @@ -62,9 +57,6 @@ void mcf_autovector(unsigned int vec) void config_BSP(char *commandp, int size) { mcf_disableall(); - mach_sched_init = coldfire_pit_init; - mach_tick = coldfire_pit_tick; - mach_gettimeoffset = coldfire_pit_offset; mach_reset = coldfire_reset; } diff --git a/arch/m68knommu/platform/5307/config.c b/arch/m68knommu/platform/5307/config.c index e3461619fd6..6040821e637 100644 --- a/arch/m68knommu/platform/5307/config.c +++ b/arch/m68knommu/platform/5307/config.c @@ -10,25 +10,18 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> -#include <asm/irq.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> -#include <asm/mcftimer.h> #include <asm/mcfsim.h> #include <asm/mcfdma.h> #include <asm/mcfwdebug.h> /***************************************************************************/ -void coldfire_tick(void); -void coldfire_timer_init(irq_handler_t handler); -unsigned long coldfire_timer_offset(void); void coldfire_reset(void); extern unsigned int mcf_timervector; @@ -122,9 +115,6 @@ void config_BSP(char *commandp, int size) mcf_timerlevel = 6; #endif - mach_sched_init = coldfire_timer_init; - mach_tick = coldfire_tick; - mach_gettimeoffset = coldfire_timer_offset; mach_reset = coldfire_reset; #ifdef MCF_BDM_DISABLE diff --git a/arch/m68knommu/platform/5307/entry.S 
b/arch/m68knommu/platform/5307/entry.S index a8cd867805c..b333731b875 100644 --- a/arch/m68knommu/platform/5307/entry.S +++ b/arch/m68knommu/platform/5307/entry.S @@ -74,7 +74,8 @@ ENTRY(system_call) movel %sp,%d2 /* get thread_info pointer */ andl #-THREAD_SIZE,%d2 /* at start of kernel stack */ movel %d2,%a0 - movel %sp,%a0@(THREAD_ESP0) /* save top of frame */ + movel %a0@,%a1 /* save top of frame */ + movel %sp,%a1@(TASK_THREAD+THREAD_ESP0) btst #(TIF_SYSCALL_TRACE%8),%a0@(TI_FLAGS+(31-TIF_SYSCALL_TRACE)/8) bnes 1f @@ -83,6 +84,8 @@ ENTRY(system_call) movel %d0,%sp@(PT_D0) /* save the return value */ jra ret_from_exception 1: + movel #-ENOSYS,%d2 /* strace needs -ENOSYS in PT_D0 */ + movel %d2,PT_D0(%sp) /* on syscall entry */ subql #4,%sp SAVE_SWITCH_STACK jbsr syscall_trace diff --git a/arch/m68knommu/platform/5307/pit.c b/arch/m68knommu/platform/5307/pit.c index f18352fa35a..173b754d1cd 100644 --- a/arch/m68knommu/platform/5307/pit.c +++ b/arch/m68knommu/platform/5307/pit.c @@ -17,6 +17,7 @@ #include <linux/init.h> #include <linux/interrupt.h> #include <linux/irq.h> +#include <asm/machdep.h> #include <asm/io.h> #include <asm/coldfire.h> #include <asm/mcfpit.h> @@ -31,28 +32,30 @@ /***************************************************************************/ -void coldfire_pit_tick(void) +static irqreturn_t hw_tick(int irq, void *dummy) { unsigned short pcsr; /* Reset the ColdFire timer */ pcsr = __raw_readw(TA(MCFPIT_PCSR)); __raw_writew(pcsr | MCFPIT_PCSR_PIF, TA(MCFPIT_PCSR)); + + return arch_timer_interrupt(irq, dummy); } /***************************************************************************/ static struct irqaction coldfire_pit_irq = { - .name = "timer", - .flags = IRQF_DISABLED | IRQF_TIMER, + .name = "timer", + .flags = IRQF_DISABLED | IRQF_TIMER, + .handler = hw_tick, }; -void coldfire_pit_init(irq_handler_t handler) +void hw_timer_init(void) { volatile unsigned char *icrp; volatile unsigned long *imrp; - coldfire_pit_irq.handler = handler; setup_irq(MCFINT_VECBASE + MCFINT_PIT1, &coldfire_pit_irq); icrp = (volatile unsigned char *) (MCF_IPSBAR + MCFICM_INTC0 + @@ -71,7 +74,7 @@ void coldfire_pit_init(irq_handler_t handler) /***************************************************************************/ -unsigned long coldfire_pit_offset(void) +unsigned long hw_timer_offset(void) { volatile unsigned long *ipr; unsigned long pmr, pcntr, offset; diff --git a/arch/m68knommu/platform/5307/timers.c b/arch/m68knommu/platform/5307/timers.c index 64bd0ff9029..489dec85c85 100644 --- a/arch/m68knommu/platform/5307/timers.c +++ b/arch/m68knommu/platform/5307/timers.c @@ -9,10 +9,9 @@ /***************************************************************************/ #include <linux/kernel.h> +#include <linux/init.h> #include <linux/sched.h> -#include <linux/param.h> #include <linux/interrupt.h> -#include <linux/init.h> #include <linux/irq.h> #include <asm/io.h> #include <asm/traps.h> @@ -54,24 +53,28 @@ extern int mcf_timerirqpending(int timer); /***************************************************************************/ -void coldfire_tick(void) +static irqreturn_t hw_tick(int irq, void *dummy) { /* Reset the ColdFire timer */ __raw_writeb(MCFTIMER_TER_CAP | MCFTIMER_TER_REF, TA(MCFTIMER_TER)); + + return arch_timer_interrupt(irq, dummy); } /***************************************************************************/ static struct irqaction coldfire_timer_irq = { - .name = "timer", - .flags = IRQF_DISABLED | IRQF_TIMER, + .name = "timer", + .flags = IRQF_DISABLED | IRQF_TIMER, + .handler 
= hw_tick, }; +/***************************************************************************/ + static int ticks_per_intr; -void coldfire_timer_init(irq_handler_t handler) +void hw_timer_init(void) { - coldfire_timer_irq.handler = handler; setup_irq(mcf_timervector, &coldfire_timer_irq); __raw_writew(MCFTIMER_TMR_DISABLE, TA(MCFTIMER_TMR)); @@ -89,7 +92,7 @@ void coldfire_timer_init(irq_handler_t handler) /***************************************************************************/ -unsigned long coldfire_timer_offset(void) +unsigned long hw_timer_offset(void) { unsigned long tcn, offset; diff --git a/arch/m68knommu/platform/532x/config.c b/arch/m68knommu/platform/532x/config.c index b32c6425f82..f77328b7b6d 100644 --- a/arch/m68knommu/platform/532x/config.c +++ b/arch/m68knommu/platform/532x/config.c @@ -18,25 +18,18 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> -#include <asm/irq.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> -#include <asm/mcftimer.h> #include <asm/mcfsim.h> #include <asm/mcfdma.h> #include <asm/mcfwdebug.h> /***************************************************************************/ -void coldfire_tick(void); -void coldfire_timer_init(irq_handler_t handler); -unsigned long coldfire_timer_offset(void); void coldfire_reset(void); extern unsigned int mcf_timervector; @@ -104,9 +97,6 @@ void config_BSP(char *commandp, int size) mcf_timervector = 64+32; mcf_profilevector = 64+33; - mach_sched_init = coldfire_timer_init; - mach_tick = coldfire_tick; - mach_gettimeoffset = coldfire_timer_offset; mach_reset = coldfire_reset; #ifdef MCF_BDM_DISABLE diff --git a/arch/m68knommu/platform/5407/config.c b/arch/m68knommu/platform/5407/config.c index e692536817d..2d3b62eba7c 100644 --- a/arch/m68knommu/platform/5407/config.c +++ b/arch/m68knommu/platform/5407/config.c @@ -10,24 +10,17 @@ /***************************************************************************/ #include <linux/kernel.h> -#include <linux/sched.h> #include <linux/param.h> #include <linux/init.h> #include <linux/interrupt.h> -#include <asm/irq.h> #include <asm/dma.h> -#include <asm/traps.h> #include <asm/machdep.h> #include <asm/coldfire.h> -#include <asm/mcftimer.h> #include <asm/mcfsim.h> #include <asm/mcfdma.h> /***************************************************************************/ -void coldfire_tick(void); -void coldfire_timer_init(irq_handler_t handler); -unsigned long coldfire_timer_offset(void); void coldfire_reset(void); extern unsigned int mcf_timervector; @@ -108,9 +101,6 @@ void config_BSP(char *commandp, int size) mcf_timerlevel = 6; #endif - mach_sched_init = coldfire_timer_init; - mach_tick = coldfire_tick; - mach_gettimeoffset = coldfire_timer_offset; mach_reset = coldfire_reset; } diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index 3ecff5e9e4f..61262c5f9c6 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -66,6 +66,7 @@ config BCM47XX config MIPS_COBALT bool "Cobalt Server" select CEVT_R4K + select CEVT_GT641XX select DMA_NONCOHERENT select HW_HAS_PCI select I8253 @@ -729,6 +730,9 @@ config ARCH_MAY_HAVE_PC_FDC config BOOT_RAW bool +config CEVT_GT641XX + bool + config CEVT_R4K bool diff --git a/arch/mips/Kconfig.debug b/arch/mips/Kconfig.debug index 3efe117721a..fd7124c1b75 100644 --- a/arch/mips/Kconfig.debug +++ b/arch/mips/Kconfig.debug @@ -6,18 +6,6 @@ config 
TRACE_IRQFLAGS_SUPPORT source "lib/Kconfig.debug" -config CROSSCOMPILE - bool "Are you using a crosscompiler" - help - Say Y here if you are compiling the kernel on a different - architecture than the one it is intended to run on. This is just a - convenience option which will select the appropriate value for - the CROSS_COMPILE make variable which otherwise has to be passed on - the command line from mips-linux-, mipsel-linux-, mips64-linux- and - mips64el-linux- as appropriate for a particular kernel configuration. - You will have to pass the value for CROSS_COMPILE manually if the - name prefix for your tools is different. - config CMDLINE string "Default kernel command string" default "" diff --git a/arch/mips/Makefile b/arch/mips/Makefile index 14164c2b879..23c17755eca 100644 --- a/arch/mips/Makefile +++ b/arch/mips/Makefile @@ -18,15 +18,15 @@ cflags-y := # Select the object file format to substitute into the linker script. # ifdef CONFIG_CPU_LITTLE_ENDIAN -32bit-tool-prefix = mipsel-linux- -64bit-tool-prefix = mips64el-linux- +32bit-tool-archpref = mipsel +64bit-tool-archpref = mips64el 32bit-bfd = elf32-tradlittlemips 64bit-bfd = elf64-tradlittlemips 32bit-emul = elf32ltsmip 64bit-emul = elf64ltsmip else -32bit-tool-prefix = mips-linux- -64bit-tool-prefix = mips64-linux- +32bit-tool-archpref = mips +64bit-tool-archpref = mips64 32bit-bfd = elf32-tradbigmips 64bit-bfd = elf64-tradbigmips 32bit-emul = elf32btsmip @@ -34,16 +34,18 @@ else endif ifdef CONFIG_32BIT -tool-prefix = $(32bit-tool-prefix) +tool-archpref = $(32bit-tool-archpref) UTS_MACHINE := mips endif ifdef CONFIG_64BIT -tool-prefix = $(64bit-tool-prefix) +tool-archpref = $(64bit-tool-archpref) UTS_MACHINE := mips64 endif -ifdef CONFIG_CROSSCOMPILE -CROSS_COMPILE := $(tool-prefix) +ifneq ($(SUBARCH),$(ARCH)) + ifeq ($(CROSS_COMPILE),) + CROSS_COMPILE := $(call cc-cross-prefix, $(tool-archpref)-linux- $(tool-archpref)-gnu-linux- $(tool-archpref)-unknown-gnu-linux-) + endif endif ifdef CONFIG_32BIT diff --git a/arch/mips/cobalt/Makefile b/arch/mips/cobalt/Makefile index 6b83f4ddc8f..d73833b7c78 100644 --- a/arch/mips/cobalt/Makefile +++ b/arch/mips/cobalt/Makefile @@ -2,7 +2,7 @@ # Makefile for the Cobalt micro systems family specific parts of the kernel # -obj-y := buttons.o irq.o led.o reset.o rtc.o serial.o setup.o +obj-y := buttons.o irq.o led.o reset.o rtc.o serial.o setup.o time.o obj-$(CONFIG_PCI) += pci.o obj-$(CONFIG_EARLY_PRINTK) += console.o diff --git a/arch/mips/cobalt/setup.c b/arch/mips/cobalt/setup.c index d11bb1bc7b6..dd23beb8604 100644 --- a/arch/mips/cobalt/setup.c +++ b/arch/mips/cobalt/setup.c @@ -9,19 +9,17 @@ * Copyright (C) 2001, 2002, 2003 by Liam Davies (ldavies@agile.tv) * */ -#include <linux/interrupt.h> #include <linux/init.h> +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/ioport.h> #include <linux/pm.h> #include <asm/bootinfo.h> -#include <asm/time.h> -#include <asm/i8253.h> -#include <asm/io.h> #include <asm/reboot.h> #include <asm/gt64120.h> #include <cobalt.h> -#include <irq.h> extern void cobalt_machine_restart(char *command); extern void cobalt_machine_halt(void); @@ -41,17 +39,6 @@ const char *get_system_type(void) return "MIPS Cobalt"; } -void __init plat_timer_setup(struct irqaction *irq) -{ - /* Load timer value for HZ (TCLK is 50MHz) */ - GT_WRITE(GT_TC0_OFS, 50*1000*1000 / HZ); - - /* Enable timer0 */ - GT_WRITE(GT_TC_CONTROL_OFS, GT_TC_CONTROL_ENTC0_MSK | GT_TC_CONTROL_SELTC0_MSK); - - setup_irq(GT641XX_TIMER0_IRQ, irq); -} - /* * Cobalt doesn't have PS/2 
keyboard/mouse interfaces, * keyboard conntroller is never used. @@ -84,11 +71,6 @@ static struct resource cobalt_reserved_resources[] = { }, }; -void __init plat_time_init(void) -{ - setup_pit_timer(); -} - void __init plat_mem_setup(void) { int i; diff --git a/arch/mips/cobalt/time.c b/arch/mips/cobalt/time.c new file mode 100644 index 00000000000..fa819fccd5d --- /dev/null +++ b/arch/mips/cobalt/time.c @@ -0,0 +1,35 @@ +/* + * Cobalt time initialization. + * + * Copyright (C) 2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include <linux/init.h> + +#include <asm/gt64120.h> +#include <asm/i8253.h> +#include <asm/time.h> + +#define GT641XX_BASE_CLOCK 50000000 /* 50MHz */ + +void __init plat_time_init(void) +{ + setup_pit_timer(); + + gt641xx_set_base_clock(GT641XX_BASE_CLOCK); + + mips_timer_state = gt641xx_timer0_state; +} diff --git a/arch/mips/kernel/Makefile b/arch/mips/kernel/Makefile index a3afa39faae..d7745c8976f 100644 --- a/arch/mips/kernel/Makefile +++ b/arch/mips/kernel/Makefile @@ -9,6 +9,7 @@ obj-y += cpu-probe.o branch.o entry.o genex.o irq.o process.o \ time.o topology.o traps.o unaligned.o obj-$(CONFIG_CEVT_R4K) += cevt-r4k.o +obj-$(CONFIG_CEVT_GT641XX) += cevt-gt641xx.o binfmt_irix-objs := irixelf.o irixinv.o irixioctl.o irixsig.o \ irix5sys.o sysirix.o diff --git a/arch/mips/kernel/cevt-gt641xx.c b/arch/mips/kernel/cevt-gt641xx.c new file mode 100644 index 00000000000..4c651b2680f --- /dev/null +++ b/arch/mips/kernel/cevt-gt641xx.c @@ -0,0 +1,144 @@ +/* + * GT641xx clockevent routines. + * + * Copyright (C) 2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include <linux/clockchips.h> +#include <linux/init.h> +#include <linux/interrupt.h> +#include <linux/spinlock.h> + +#include <asm/gt64120.h> +#include <asm/time.h> + +#include <irq.h> + +static DEFINE_SPINLOCK(gt641xx_timer_lock); +static unsigned int gt641xx_base_clock; + +void gt641xx_set_base_clock(unsigned int clock) +{ + gt641xx_base_clock = clock; +} + +int gt641xx_timer0_state(void) +{ + if (GT_READ(GT_TC0_OFS)) + return 0; + + GT_WRITE(GT_TC0_OFS, gt641xx_base_clock / HZ); + GT_WRITE(GT_TC_CONTROL_OFS, GT_TC_CONTROL_ENTC0_MSK); + + return 1; +} + +static int gt641xx_timer0_set_next_event(unsigned long delta, + struct clock_event_device *evt) +{ + unsigned long flags; + u32 ctrl; + + spin_lock_irqsave(&gt641xx_timer_lock, flags); + + ctrl = GT_READ(GT_TC_CONTROL_OFS); + ctrl &= ~(GT_TC_CONTROL_ENTC0_MSK | GT_TC_CONTROL_SELTC0_MSK); + ctrl |= GT_TC_CONTROL_ENTC0_MSK; + + GT_WRITE(GT_TC0_OFS, delta); + GT_WRITE(GT_TC_CONTROL_OFS, ctrl); + + spin_unlock_irqrestore(&gt641xx_timer_lock, flags); + + return 0; +} + +static void gt641xx_timer0_set_mode(enum clock_event_mode mode, + struct clock_event_device *evt) +{ + unsigned long flags; + u32 ctrl; + + spin_lock_irqsave(&gt641xx_timer_lock, flags); + + ctrl = GT_READ(GT_TC_CONTROL_OFS); + ctrl &= ~(GT_TC_CONTROL_ENTC0_MSK | GT_TC_CONTROL_SELTC0_MSK); + + switch (mode) { + case CLOCK_EVT_MODE_PERIODIC: + ctrl |= GT_TC_CONTROL_ENTC0_MSK | GT_TC_CONTROL_SELTC0_MSK; + break; + case CLOCK_EVT_MODE_ONESHOT: + ctrl |= GT_TC_CONTROL_ENTC0_MSK; + break; + default: + break; + } + + GT_WRITE(GT_TC_CONTROL_OFS, ctrl); + + spin_unlock_irqrestore(&gt641xx_timer_lock, flags); +} + +static void gt641xx_timer0_event_handler(struct clock_event_device *dev) +{ +} + +static struct clock_event_device gt641xx_timer0_clockevent = { + .name = "gt641xx-timer0", + .features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT, + .cpumask = CPU_MASK_CPU0, + .irq = GT641XX_TIMER0_IRQ, + .set_next_event = gt641xx_timer0_set_next_event, + .set_mode = gt641xx_timer0_set_mode, + .event_handler = gt641xx_timer0_event_handler, +}; + +static irqreturn_t gt641xx_timer0_interrupt(int irq, void *dev_id) +{ + struct clock_event_device *cd = &gt641xx_timer0_clockevent; + + cd->event_handler(cd); + + return IRQ_HANDLED; +} + +static struct irqaction gt641xx_timer0_irqaction = { + .handler = gt641xx_timer0_interrupt, + .flags = IRQF_DISABLED | IRQF_PERCPU, + .name = "gt641xx_timer0", +}; + +static int __init gt641xx_timer0_clockevent_init(void) +{ + struct clock_event_device *cd; + + if (!gt641xx_base_clock) + return 0; + + GT_WRITE(GT_TC0_OFS, gt641xx_base_clock / HZ); + + cd = &gt641xx_timer0_clockevent; + cd->rating = 200 + gt641xx_base_clock / 10000000; + cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd); + cd->min_delta_ns = clockevent_delta2ns(0x300, cd); + clockevent_set_clock(cd, gt641xx_base_clock); + + clockevents_register_device(&gt641xx_timer0_clockevent); + + return setup_irq(GT641XX_TIMER0_IRQ, &gt641xx_timer0_irqaction); +} +arch_initcall(gt641xx_timer0_clockevent_init); diff --git a/arch/mips/kernel/cevt-r4k.c b/arch/mips/kernel/cevt-r4k.c index a915e569342..ae2984fff58 100644 --- a/arch/mips/kernel/cevt-r4k.c +++ b/arch/mips/kernel/cevt-r4k.c @@ -186,7 +186,7 @@ static int c0_compare_int_usable(void) * IP7 already pending? 
Try to clear it by acking the timer. */ if (c0_compare_int_pending()) { - write_c0_compare(read_c0_compare()); + write_c0_compare(read_c0_count()); irq_disable_hazard(); if (c0_compare_int_pending()) return 0; @@ -202,7 +202,7 @@ static int c0_compare_int_usable(void) if (!c0_compare_int_pending()) return 0; - write_c0_compare(read_c0_compare()); + write_c0_compare(read_c0_count()); irq_disable_hazard(); if (c0_compare_int_pending()) return 0; diff --git a/arch/mips/kernel/time.c b/arch/mips/kernel/time.c index c4e6866d5cb..6c6849a8f13 100644 --- a/arch/mips/kernel/time.c +++ b/arch/mips/kernel/time.c @@ -195,8 +195,8 @@ void __cpuinit clockevent_set_clock(struct clock_event_device *cd, /* Find a shift value */ for (shift = 32; shift > 0; shift--) { - temp = (u64) NSEC_PER_SEC << shift; - do_div(temp, clock); + temp = (u64) clock << shift; + do_div(temp, NSEC_PER_SEC); if ((temp >> 32) == 0) break; } diff --git a/arch/mips/mips-boards/generic/time.c b/arch/mips/mips-boards/generic/time.c index 1d00b778ff1..9d6243a8c15 100644 --- a/arch/mips/mips-boards/generic/time.c +++ b/arch/mips/mips-boards/generic/time.c @@ -147,21 +147,8 @@ void __init plat_time_init(void) #endif } -//static irqreturn_t mips_perf_interrupt(int irq, void *dev_id) -//{ -// return perf_irq(); -//} - -//static struct irqaction perf_irqaction = { -// .handler = mips_perf_interrupt, -// .flags = IRQF_DISABLED | IRQF_PERCPU, -// .name = "performance", -//}; - void __init plat_perf_setup(void) { -// struct irqaction *irq = &perf_irqaction; - cp0_perfcount_irq = -1; #ifdef MSC01E_INT_BASE diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c index 98b5e5bac02..b1b40527658 100644 --- a/arch/mips/mm/dma-default.c +++ b/arch/mips/mm/dma-default.c @@ -13,6 +13,7 @@ #include <linux/mm.h> #include <linux/module.h> #include <linux/string.h> +#include <linux/scatterlist.h> #include <asm/cache.h> #include <asm/io.h> @@ -165,12 +166,11 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, for (i = 0; i < nents; i++, sg++) { unsigned long addr; - addr = (unsigned long) page_address(sg->page); + addr = (unsigned long) sg_virt(sg); if (!plat_device_is_coherent(dev) && addr) - __dma_sync(addr + sg->offset, sg->length, direction); + __dma_sync(addr, sg->length, direction); sg->dma_address = plat_map_dma_mem(dev, - (void *)(addr + sg->offset), - sg->length); + (void *)addr, sg->length); } return nents; @@ -223,10 +223,9 @@ void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries, for (i = 0; i < nhwentries; i++, sg++) { if (!plat_device_is_coherent(dev) && direction != DMA_TO_DEVICE) { - addr = (unsigned long) page_address(sg->page); + addr = (unsigned long) sg_virt(sg); if (addr) - __dma_sync(addr + sg->offset, sg->length, - direction); + __dma_sync(addr, sg->length, direction); } plat_unmap_dma_mem(sg->dma_address); } @@ -304,7 +303,7 @@ void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, /* Make sure that gcc doesn't leave the empty loop body. */ for (i = 0; i < nelems; i++, sg++) { if (cpu_is_noncoherent_r10000(dev)) - __dma_sync((unsigned long)page_address(sg->page), + __dma_sync((unsigned long)page_address(sg_page(sg)), sg->length, direction); plat_unmap_dma_mem(sg->dma_address); } @@ -322,7 +321,7 @@ void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nele /* Make sure that gcc doesn't leave the empty loop body. 
*/ for (i = 0; i < nelems; i++, sg++) { if (!plat_device_is_coherent(dev)) - __dma_sync((unsigned long)page_address(sg->page), + __dma_sync((unsigned long)page_address(sg_page(sg)), sg->length, direction); plat_unmap_dma_mem(sg->dma_address); } diff --git a/arch/mips/sgi-ip27/ip27-init.c b/arch/mips/sgi-ip27/ip27-init.c index 681b593071c..3305fa9ae66 100644 --- a/arch/mips/sgi-ip27/ip27-init.c +++ b/arch/mips/sgi-ip27/ip27-init.c @@ -110,7 +110,7 @@ static void __init per_hub_init(cnodeid_t cnode) } } -void __init per_cpu_init(void) +void __cpuinit per_cpu_init(void) { int cpu = smp_processor_id(); int slice = LOCAL_HUB_L(PI_CPU_NUM); diff --git a/arch/mips/sgi-ip27/ip27-timer.c b/arch/mips/sgi-ip27/ip27-timer.c index d467bf4f6c3..f5dccf01da1 100644 --- a/arch/mips/sgi-ip27/ip27-timer.c +++ b/arch/mips/sgi-ip27/ip27-timer.c @@ -111,8 +111,24 @@ unsigned long read_persistent_clock(void) return mktime(year, month, date, hour, min, sec); } -static int rt_set_next_event(unsigned long delta, - struct clock_event_device *evt) +static void enable_rt_irq(unsigned int irq) +{ +} + +static void disable_rt_irq(unsigned int irq) +{ +} + +static struct irq_chip rt_irq_type = { + .name = "SN HUB RT timer", + .ack = disable_rt_irq, + .mask = disable_rt_irq, + .mask_ack = disable_rt_irq, + .unmask = enable_rt_irq, + .eoi = enable_rt_irq, +}; + +static int rt_next_event(unsigned long delta, struct clock_event_device *evt) { unsigned int cpu = smp_processor_id(); int slice = cputoslice(cpu) == 0; @@ -129,50 +145,24 @@ static void rt_set_mode(enum clock_event_mode mode, struct clock_event_device *evt) { switch (mode) { - case CLOCK_EVT_MODE_PERIODIC: + case CLOCK_EVT_MODE_ONESHOT: /* The only mode supported */ break; + case CLOCK_EVT_MODE_PERIODIC: case CLOCK_EVT_MODE_UNUSED: case CLOCK_EVT_MODE_SHUTDOWN: - case CLOCK_EVT_MODE_ONESHOT: case CLOCK_EVT_MODE_RESUME: /* Nothing to do */ break; } } -struct clock_event_device rt_clock_event_device = { - .name = "HUB-RT", - .features = CLOCK_EVT_FEAT_ONESHOT, - - .rating = 300, - .set_next_event = rt_set_next_event, - .set_mode = rt_set_mode, -}; - -static void enable_rt_irq(unsigned int irq) -{ -} - -static void disable_rt_irq(unsigned int irq) -{ -} - -static struct irq_chip rt_irq_type = { - .name = "SN HUB RT timer", - .ack = disable_rt_irq, - .mask = disable_rt_irq, - .mask_ack = disable_rt_irq, - .unmask = enable_rt_irq, - .eoi = enable_rt_irq, -}; - unsigned int rt_timer_irq; -static irqreturn_t ip27_rt_timer_interrupt(int irq, void *dev_id) +static irqreturn_t hub_rt_counter_handler(int irq, void *dev_id) { - struct clock_event_device *cd = &rt_clock_event_device; + struct clock_event_device *cd = dev_id; unsigned int cpu = smp_processor_id(); int slice = cputoslice(cpu) == 0; @@ -182,11 +172,10 @@ static irqreturn_t ip27_rt_timer_interrupt(int irq, void *dev_id) return IRQ_HANDLED; } -static struct irqaction rt_irqaction = { - .handler = (irq_handler_t) ip27_rt_timer_interrupt, - .flags = IRQF_DISABLED, - .mask = CPU_MASK_NONE, - .name = "timer" +struct irqaction hub_rt_irqaction = { + .handler = hub_rt_counter_handler, + .flags = IRQF_DISABLED | IRQF_PERCPU, + .name = "hub-rt", }; /* @@ -200,32 +189,48 @@ static struct irqaction rt_irqaction = { #define NSEC_PER_CYCLE 800 #define CYCLES_PER_SEC (NSEC_PER_SEC / NSEC_PER_CYCLE) -static void __init ip27_rt_clock_event_init(void) +static DEFINE_PER_CPU(struct clock_event_device, hub_rt_clockevent); +static DEFINE_PER_CPU(char [11], hub_rt_name); + +static void __cpuinit hub_rt_clock_event_init(void) { - 
struct clock_event_device *cd = &rt_clock_event_device; unsigned int cpu = smp_processor_id(); - int irq = allocate_irqno(); - - if (irq < 0) - panic("Can't allocate interrupt number for timer interrupt"); - - rt_timer_irq = irq; - + struct clock_event_device *cd = &per_cpu(hub_rt_clockevent, cpu); + unsigned char *name = per_cpu(hub_rt_name, cpu); + int irq = rt_timer_irq; + + sprintf(name, "hub-rt %d", cpu); + cd->name = "HUB-RT", + cd->features = CLOCK_EVT_FEAT_ONESHOT, + clockevent_set_clock(cd, CYCLES_PER_SEC); + cd->max_delta_ns = clockevent_delta2ns(0xfffffffffffff, cd); + cd->min_delta_ns = clockevent_delta2ns(0x300, cd); + cd->rating = 200, cd->irq = irq, cd->cpumask = cpumask_of_cpu(cpu), - - /* - * Calculate the min / max delta - */ - cd->mult = - div_sc((unsigned long) CYCLES_PER_SEC, NSEC_PER_SEC, 32); - cd->shift = 32; - cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd); - cd->min_delta_ns = clockevent_delta2ns(0x300, cd); + cd->rating = 300, + cd->set_next_event = rt_next_event, + cd->set_mode = rt_set_mode, clockevents_register_device(cd); +} + +static void __init hub_rt_clock_event_global_init(void) +{ + unsigned int irq; + + do { + smp_wmb(); + irq = rt_timer_irq; + if (irq) + break; + + irq = allocate_irqno(); + if (irq < 0) + panic("Allocation of irq number for timer failed"); + } while (xchg(&rt_timer_irq, irq)); set_irq_chip_and_handler(irq, &rt_irq_type, handle_percpu_irq); - setup_irq(irq, &rt_irqaction); + setup_irq(irq, &hub_rt_irqaction); } static cycle_t hub_rt_read(void) @@ -233,27 +238,29 @@ static cycle_t hub_rt_read(void) return REMOTE_HUB_L(cputonasid(0), PI_RT_COUNT); } -struct clocksource ht_rt_clocksource = { +struct clocksource hub_rt_clocksource = { .name = "HUB-RT", .rating = 200, .read = hub_rt_read, .mask = CLOCKSOURCE_MASK(52), - .shift = 32, .flags = CLOCK_SOURCE_IS_CONTINUOUS, }; -static void __init ip27_rt_clocksource_init(void) +static void __init hub_rt_clocksource_init(void) { - clocksource_register(&ht_rt_clocksource); + struct clocksource *cs = &hub_rt_clocksource; + + clocksource_set_clock(cs, CYCLES_PER_SEC); + clocksource_register(cs); } void __init plat_time_init(void) { - ip27_rt_clock_event_init(); - ip27_rt_clocksource_init(); + hub_rt_clocksource_init(); + hub_rt_clock_event_global_init(); } -void __init cpu_time_init(void) +void __cpuinit cpu_time_init(void) { lboard_t *board; klcpu_t *cpu; @@ -271,6 +278,7 @@ void __init cpu_time_init(void) printk("CPU %d clock is %dMHz.\n", smp_processor_id(), cpu->cpu_speed); + hub_rt_clock_event_init(); set_c0_status(SRB_TIMOCLK); } diff --git a/arch/mips/sibyte/bcm1480/irq.c b/arch/mips/sibyte/bcm1480/irq.c index 7aa79bf63c4..10299bafeab 100644 --- a/arch/mips/sibyte/bcm1480/irq.c +++ b/arch/mips/sibyte/bcm1480/irq.c @@ -452,6 +452,43 @@ static void bcm1480_kgdb_interrupt(void) extern void bcm1480_mailbox_interrupt(void); +static inline void dispatch_ip4(void) +{ + int cpu = smp_processor_id(); + int irq = K_BCM1480_INT_TIMER_0 + cpu; + + /* Reset the timer */ + __raw_writeq(M_SCD_TIMER_ENABLE|M_SCD_TIMER_MODE_CONTINUOUS, + IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG))); + + do_IRQ(irq); +} + +static inline void dispatch_ip2(void) +{ + unsigned long long mask_h, mask_l; + unsigned int cpu = smp_processor_id(); + unsigned long base; + + /* + * Default...we've hit an IP[2] interrupt, which means we've got to + * check the 1480 interrupt registers to figure out what to do. Need + * to detect which CPU we're on, now that smp_affinity is supported. 
+ */ + base = A_BCM1480_IMR_MAPPER(cpu); + mask_h = __raw_readq( + IOADDR(base + R_BCM1480_IMR_INTERRUPT_STATUS_BASE_H)); + mask_l = __raw_readq( + IOADDR(base + R_BCM1480_IMR_INTERRUPT_STATUS_BASE_L)); + + if (mask_h) { + if (mask_h ^ 1) + do_IRQ(fls64(mask_h) - 1); + else if (mask_l) + do_IRQ(63 + fls64(mask_l)); + } +} + asmlinkage void plat_irq_dispatch(void) { unsigned int pending; @@ -469,17 +506,8 @@ asmlinkage void plat_irq_dispatch(void) else #endif - if (pending & CAUSEF_IP4) { - int cpu = smp_processor_id(); - int irq = K_BCM1480_INT_TIMER_0 + cpu; - - /* Reset the timer */ - __raw_writeq(M_SCD_TIMER_ENABLE|M_SCD_TIMER_MODE_CONTINUOUS, - IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG))); - - do_IRQ(irq); - } - + if (pending & CAUSEF_IP4) + dispatch_ip4(); #ifdef CONFIG_SMP else if (pending & CAUSEF_IP3) bcm1480_mailbox_interrupt(); @@ -490,27 +518,6 @@ asmlinkage void plat_irq_dispatch(void) bcm1480_kgdb_interrupt(); /* KGDB (uart 1) */ #endif - else if (pending & CAUSEF_IP2) { - unsigned long long mask_h, mask_l; - unsigned long base; - - /* - * Default...we've hit an IP[2] interrupt, which means we've - * got to check the 1480 interrupt registers to figure out what - * to do. Need to detect which CPU we're on, now that - * smp_affinity is supported. - */ - base = A_BCM1480_IMR_MAPPER(smp_processor_id()); - mask_h = __raw_readq( - IOADDR(base + R_BCM1480_IMR_INTERRUPT_STATUS_BASE_H)); - mask_l = __raw_readq( - IOADDR(base + R_BCM1480_IMR_INTERRUPT_STATUS_BASE_L)); - - if (mask_h) { - if (mask_h ^ 1) - do_IRQ(fls64(mask_h) - 1); - else - do_IRQ(63 + fls64(mask_l)); - } - } + else if (pending & CAUSEF_IP2) + dispatch_ip2(); } diff --git a/arch/mips/sibyte/bcm1480/smp.c b/arch/mips/sibyte/bcm1480/smp.c index 02b266a31c4..436ba78359a 100644 --- a/arch/mips/sibyte/bcm1480/smp.c +++ b/arch/mips/sibyte/bcm1480/smp.c @@ -58,7 +58,7 @@ static void *mailbox_0_regs[] = { /* * SMP init and finish on secondary CPUs */ -void bcm1480_smp_init(void) +void __cpuinit bcm1480_smp_init(void) { unsigned int imask = STATUSF_IP4 | STATUSF_IP3 | STATUSF_IP2 | STATUSF_IP1 | STATUSF_IP0; @@ -67,7 +67,7 @@ void bcm1480_smp_init(void) change_c0_status(ST0_IM, imask); } -void bcm1480_smp_finish(void) +void __cpuinit bcm1480_smp_finish(void) { extern void sb1480_clockevent_init(void); diff --git a/arch/mips/sibyte/bcm1480/time.c b/arch/mips/sibyte/bcm1480/time.c index c730744aa47..610f0253954 100644 --- a/arch/mips/sibyte/bcm1480/time.c +++ b/arch/mips/sibyte/bcm1480/time.c @@ -15,22 +15,12 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ - -/* - * These are routines to set up and handle interrupts from the - * bcm1480 general purpose timer 0. We're using the timer as a - * system clock, so we set it up to run at 100 Hz. On every - * interrupt, we update our idea of what the time of day is, - * then call do_timer() in the architecture-independent kernel - * code to do general bookkeeping (e.g. update jiffies, run - * bottom halves, etc.) 
- */ #include <linux/clockchips.h> #include <linux/interrupt.h> +#include <linux/irq.h> #include <linux/percpu.h> #include <linux/spinlock.h> -#include <asm/irq.h> #include <asm/addrspace.h> #include <asm/time.h> #include <asm/io.h> @@ -47,33 +37,10 @@ #define IMR_IP3_VAL K_BCM1480_INT_MAP_I1 #define IMR_IP4_VAL K_BCM1480_INT_MAP_I2 -#ifdef CONFIG_SIMULATION -#define BCM1480_HPT_VALUE 50000 -#else -#define BCM1480_HPT_VALUE 1000000 -#endif - extern int bcm1480_steal_irq(int irq); -void __init plat_time_init(void) -{ - unsigned int cpu = smp_processor_id(); - unsigned int irq = K_BCM1480_INT_TIMER_0 + cpu; - - BUG_ON(cpu > 3); /* Only have 4 general purpose timers */ - - bcm1480_mask_irq(cpu, irq); - - /* Map the timer interrupt to ip[4] of this cpu */ - __raw_writeq(IMR_IP4_VAL, IOADDR(A_BCM1480_IMR_REGISTER(cpu, R_BCM1480_IMR_INTERRUPT_MAP_BASE_H) - + (irq<<3))); - - bcm1480_unmask_irq(cpu, irq); - bcm1480_steal_irq(irq); -} - /* - * The general purpose timer ticks at 1 Mhz independent if + * The general purpose timer ticks at 1MHz independent if * the rest of the system */ static void sibyte_set_mode(enum clock_event_mode mode, @@ -88,7 +55,7 @@ static void sibyte_set_mode(enum clock_event_mode mode, switch (mode) { case CLOCK_EVT_MODE_PERIODIC: __raw_writeq(0, timer_cfg); - __raw_writeq(BCM1480_HPT_VALUE / HZ - 1, timer_init); + __raw_writeq((V_SCD_TIMER_FREQ / HZ) - 1, timer_init); __raw_writeq(M_SCD_TIMER_ENABLE | M_SCD_TIMER_MODE_CONTINUOUS, timer_cfg); break; @@ -121,80 +88,96 @@ static int sibyte_next_event(unsigned long delta, struct clock_event_device *cd) return res; } -static DEFINE_PER_CPU(struct clock_event_device, sibyte_hpt_clockevent); - static irqreturn_t sibyte_counter_handler(int irq, void *dev_id) { unsigned int cpu = smp_processor_id(); - struct clock_event_device *cd = &per_cpu(sibyte_hpt_clockevent, cpu); + struct clock_event_device *cd = dev_id; + void __iomem *timer_cfg; + + timer_cfg = IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG)); /* Reset the timer */ __raw_writeq(M_SCD_TIMER_ENABLE | M_SCD_TIMER_MODE_CONTINUOUS, - IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG))); + timer_cfg); cd->event_handler(cd); return IRQ_HANDLED; } -static struct irqaction sibyte_counter_irqaction = { - .handler = sibyte_counter_handler, - .flags = IRQF_DISABLED | IRQF_PERCPU, - .name = "timer", -}; +static DEFINE_PER_CPU(struct clock_event_device, sibyte_hpt_clockevent); +static DEFINE_PER_CPU(struct irqaction, sibyte_hpt_irqaction); +static DEFINE_PER_CPU(char [18], sibyte_hpt_name); -/* - * This interrupt is "special" in that it doesn't use the request_irq - * way to hook the irq line. The timer interrupt is initialized early - * enough to make this a major pain, and it's also firing enough to - * warrant a bit of special case code. 
bcm1480_timer_interrupt is - * called directly from irq_handler.S when IP[4] is set during an - * interrupt - */ void __cpuinit sb1480_clockevent_init(void) { unsigned int cpu = smp_processor_id(); unsigned int irq = K_BCM1480_INT_TIMER_0 + cpu; + struct irqaction *action = &per_cpu(sibyte_hpt_irqaction, cpu); struct clock_event_device *cd = &per_cpu(sibyte_hpt_clockevent, cpu); + unsigned char *name = per_cpu(sibyte_hpt_name, cpu); + + BUG_ON(cpu > 3); /* Only have 4 general purpose timers */ - cd->name = "bcm1480-counter"; + sprintf(name, "bcm1480-counter %d", cpu); + cd->name = name; cd->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_MODE_ONESHOT; + clockevent_set_clock(cd, V_SCD_TIMER_FREQ); + cd->max_delta_ns = clockevent_delta2ns(0x7fffff, cd); + cd->min_delta_ns = clockevent_delta2ns(1, cd); + cd->rating = 200; + cd->irq = irq; + cd->cpumask = cpumask_of_cpu(cpu); cd->set_next_event = sibyte_next_event; cd->set_mode = sibyte_set_mode; - cd->irq = irq; - clockevent_set_clock(cd, BCM1480_HPT_VALUE); + clockevents_register_device(cd); + + bcm1480_mask_irq(cpu, irq); + + /* + * Map timer interrupt to IP[4] of this cpu + */ + __raw_writeq(IMR_IP4_VAL, + IOADDR(A_BCM1480_IMR_REGISTER(cpu, + R_BCM1480_IMR_INTERRUPT_MAP_BASE_H) + (irq << 3))); - setup_irq(irq, &sibyte_counter_irqaction); + bcm1480_unmask_irq(cpu, irq); + bcm1480_steal_irq(irq); + + action->handler = sibyte_counter_handler; + action->flags = IRQF_DISABLED | IRQF_PERCPU; + action->name = name; + action->dev_id = cd; + setup_irq(irq, action); } static cycle_t bcm1480_hpt_read(void) { - /* We assume this function is called xtime_lock held. */ - unsigned long count = - __raw_readq(IOADDR(A_SCD_TIMER_REGISTER(0, R_SCD_TIMER_CNT))); - return (jiffies + 1) * (BCM1480_HPT_VALUE / HZ) - count; + return (cycle_t) __raw_readq(IOADDR(A_SCD_ZBBUS_CYCLE_COUNT)); } struct clocksource bcm1480_clocksource = { - .name = "MIPS", + .name = "zbbus-cycles", .rating = 200, .read = bcm1480_hpt_read, - .mask = CLOCKSOURCE_MASK(32), - .shift = 32, + .mask = CLOCKSOURCE_MASK(64), .flags = CLOCK_SOURCE_IS_CONTINUOUS, }; void __init sb1480_clocksource_init(void) { struct clocksource *cs = &bcm1480_clocksource; + unsigned int plldiv; + unsigned long zbbus; - clocksource_set_clock(cs, BCM1480_HPT_VALUE); + plldiv = G_BCM1480_SYS_PLL_DIV(__raw_readq(IOADDR(A_SCD_SYSTEM_CFG))); + zbbus = ((plldiv >> 1) * 50000000) + ((plldiv & 1) * 25000000); + clocksource_set_clock(cs, zbbus); clocksource_register(cs); } -void __init bcm1480_hpt_setup(void) +void __init plat_time_init(void) { - mips_hpt_frequency = BCM1480_HPT_VALUE; sb1480_clocksource_init(); sb1480_clockevent_init(); } diff --git a/arch/mips/sibyte/sb1250/irq.c b/arch/mips/sibyte/sb1250/irq.c index 500d17e84c0..53780a179d1 100644 --- a/arch/mips/sibyte/sb1250/irq.c +++ b/arch/mips/sibyte/sb1250/irq.c @@ -402,6 +402,22 @@ static void sb1250_kgdb_interrupt(void) extern void sb1250_mailbox_interrupt(void); +static inline void dispatch_ip2(void) +{ + unsigned int cpu = smp_processor_id(); + unsigned long long mask; + + /* + * Default...we've hit an IP[2] interrupt, which means we've got to + * check the 1250 interrupt registers to figure out what to do. Need + * to detect which CPU we're on, now that smp_affinity is supported. 
+ */ + mask = __raw_readq(IOADDR(A_IMR_REGISTER(cpu, + R_IMR_INTERRUPT_STATUS_BASE))); + if (mask) + do_IRQ(fls64(mask) - 1); +} + asmlinkage void plat_irq_dispatch(void) { unsigned int cpu = smp_processor_id(); @@ -434,21 +450,8 @@ asmlinkage void plat_irq_dispatch(void) sb1250_kgdb_interrupt(); #endif - else if (pending & CAUSEF_IP2) { - unsigned long long mask; - - /* - * Default...we've hit an IP[2] interrupt, which means we've - * got to check the 1250 interrupt registers to figure out what - * to do. Need to detect which CPU we're on, now that - * smp_affinity is supported. - */ - mask = __raw_readq(IOADDR(A_IMR_REGISTER(smp_processor_id(), - R_IMR_INTERRUPT_STATUS_BASE))); - if (mask) - do_IRQ(fls64(mask) - 1); - else - spurious_interrupt(); - } else + else if (pending & CAUSEF_IP2) + dispatch_ip2(); + else spurious_interrupt(); } diff --git a/arch/mips/sibyte/sb1250/smp.c b/arch/mips/sibyte/sb1250/smp.c index aaa4f30dda7..3f52c95a4eb 100644 --- a/arch/mips/sibyte/sb1250/smp.c +++ b/arch/mips/sibyte/sb1250/smp.c @@ -46,7 +46,7 @@ static void *mailbox_regs[] = { /* * SMP init and finish on secondary CPUs */ -void sb1250_smp_init(void) +void __cpuinit sb1250_smp_init(void) { unsigned int imask = STATUSF_IP4 | STATUSF_IP3 | STATUSF_IP2 | STATUSF_IP1 | STATUSF_IP0; @@ -55,7 +55,7 @@ void sb1250_smp_init(void) change_c0_status(ST0_IM, imask); } -void sb1250_smp_finish(void) +void __cpuinit sb1250_smp_finish(void) { extern void sb1250_clockevent_init(void); diff --git a/arch/mips/sibyte/sb1250/time.c b/arch/mips/sibyte/sb1250/time.c index 9ef54628bc9..a41e908bc21 100644 --- a/arch/mips/sibyte/sb1250/time.c +++ b/arch/mips/sibyte/sb1250/time.c @@ -52,26 +52,6 @@ extern int sb1250_steal_irq(int irq); -static cycle_t sb1250_hpt_read(void); - -void __init sb1250_hpt_setup(void) -{ - int cpu = smp_processor_id(); - - if (!cpu) { - /* Setup hpt using timer #3 but do not enable irq for it */ - __raw_writeq(0, IOADDR(A_SCD_TIMER_REGISTER(SB1250_HPT_NUM, R_SCD_TIMER_CFG))); - __raw_writeq(SB1250_HPT_VALUE, - IOADDR(A_SCD_TIMER_REGISTER(SB1250_HPT_NUM, R_SCD_TIMER_INIT))); - __raw_writeq(M_SCD_TIMER_ENABLE | M_SCD_TIMER_MODE_CONTINUOUS, - IOADDR(A_SCD_TIMER_REGISTER(SB1250_HPT_NUM, R_SCD_TIMER_CFG))); - - mips_hpt_frequency = V_SCD_TIMER_FREQ; - clocksource_mips.read = sb1250_hpt_read; - clocksource_mips.mask = M_SCD_TIMER_INIT; - } -} - /* * The general purpose timer ticks at 1 Mhz independent if * the rest of the system @@ -121,18 +101,14 @@ sibyte_next_event(unsigned long delta, struct clock_event_device *evt) return 0; } -struct clock_event_device sibyte_hpt_clockevent = { - .name = "sb1250-counter", - .features = CLOCK_EVT_FEAT_PERIODIC, - .set_mode = sibyte_set_mode, - .set_next_event = sibyte_next_event, - .shift = 32, - .irq = 0, -}; - static irqreturn_t sibyte_counter_handler(int irq, void *dev_id) { - struct clock_event_device *cd = &sibyte_hpt_clockevent; + unsigned int cpu = smp_processor_id(); + struct clock_event_device *cd = dev_id; + + /* ACK interrupt */ + ____raw_writeq(M_SCD_TIMER_ENABLE | M_SCD_TIMER_MODE_CONTINUOUS, + IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG))); cd->event_handler(cd); @@ -145,15 +121,35 @@ static struct irqaction sibyte_irqaction = { .name = "timer", }; +static DEFINE_PER_CPU(struct clock_event_device, sibyte_hpt_clockevent); +static DEFINE_PER_CPU(struct irqaction, sibyte_hpt_irqaction); +static DEFINE_PER_CPU(char [18], sibyte_hpt_name); + void __cpuinit sb1250_clockevent_init(void) { - struct clock_event_device *cd = &sibyte_hpt_clockevent; unsigned 
int cpu = smp_processor_id(); - int irq = K_INT_TIMER_0 + cpu; + unsigned int irq = K_INT_TIMER_0 + cpu; + struct irqaction *action = &per_cpu(sibyte_hpt_irqaction, cpu); + struct clock_event_device *cd = &per_cpu(sibyte_hpt_clockevent, cpu); + unsigned char *name = per_cpu(sibyte_hpt_name, cpu); /* Only have 4 general purpose timers, and we use last one as hpt */ BUG_ON(cpu > 2); + sprintf(name, "bcm1480-counter %d", cpu); + cd->name = name; + cd->features = CLOCK_EVT_FEAT_PERIODIC | + CLOCK_EVT_MODE_ONESHOT; + clockevent_set_clock(cd, V_SCD_TIMER_FREQ); + cd->max_delta_ns = clockevent_delta2ns(0x7fffff, cd); + cd->min_delta_ns = clockevent_delta2ns(1, cd); + cd->rating = 200; + cd->irq = irq; + cd->cpumask = cpumask_of_cpu(cpu); + cd->set_next_event = sibyte_next_event; + cd->set_mode = sibyte_set_mode; + clockevents_register_device(cd); + sb1250_mask_irq(cpu, irq); /* Map the timer interrupt to ip[4] of this cpu */ @@ -165,17 +161,11 @@ void __cpuinit sb1250_clockevent_init(void) sb1250_unmask_irq(cpu, irq); sb1250_steal_irq(irq); - /* - * This interrupt is "special" in that it doesn't use the request_irq - * way to hook the irq line. The timer interrupt is initialized early - * enough to make this a major pain, and it's also firing enough to - * warrant a bit of special case code. sb1250_timer_interrupt is - * called directly from irq_handler.S when IP[4] is set during an - * interrupt - */ + action->handler = sibyte_counter_handler; + action->flags = IRQF_DISABLED | IRQF_PERCPU; + action->name = name; + action->dev_id = cd; setup_irq(irq, &sibyte_irqaction); - - clockevents_register_device(cd); } /* @@ -195,8 +185,7 @@ struct clocksource bcm1250_clocksource = { .name = "MIPS", .rating = 200, .read = sb1250_hpt_read, - .mask = CLOCKSOURCE_MASK(32), - .shift = 32, + .mask = CLOCKSOURCE_MASK(23), .flags = CLOCK_SOURCE_IS_CONTINUOUS, }; @@ -204,6 +193,17 @@ void __init sb1250_clocksource_init(void) { struct clocksource *cs = &bcm1250_clocksource; + /* Setup hpt using timer #3 but do not enable irq for it */ + __raw_writeq(0, + IOADDR(A_SCD_TIMER_REGISTER(SB1250_HPT_NUM, + R_SCD_TIMER_CFG))); + __raw_writeq(SB1250_HPT_VALUE, + IOADDR(A_SCD_TIMER_REGISTER(SB1250_HPT_NUM, + R_SCD_TIMER_INIT))); + __raw_writeq(M_SCD_TIMER_ENABLE | M_SCD_TIMER_MODE_CONTINUOUS, + IOADDR(A_SCD_TIMER_REGISTER(SB1250_HPT_NUM, + R_SCD_TIMER_CFG))); + clocksource_set_clock(cs, V_SCD_TIMER_FREQ); clocksource_register(cs); } diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c index 41f8e321e34..9448d4e9114 100644 --- a/arch/parisc/kernel/pci-dma.c +++ b/arch/parisc/kernel/pci-dma.c @@ -25,6 +25,7 @@ #include <linux/slab.h> #include <linux/string.h> #include <linux/types.h> +#include <linux/scatterlist.h> #include <asm/cacheflush.h> #include <asm/dma.h> /* for DMA_CHUNK_SIZE */ diff --git a/arch/powerpc/boot/dts/lite5200.dts b/arch/powerpc/boot/dts/lite5200.dts index bc45f5fbb06..6731763f028 100644 --- a/arch/powerpc/boot/dts/lite5200.dts +++ b/arch/powerpc/boot/dts/lite5200.dts @@ -70,18 +70,16 @@ }; gpt@600 { // General Purpose Timer - compatible = "mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200-gpt"; cell-index = <0>; reg = <600 10>; interrupts = <1 9 0>; interrupt-parent = <&mpc5200_pic>; - has-wdt; + fsl,has-wdt; }; gpt@610 { // General Purpose Timer - compatible = "mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200-gpt"; cell-index = <1>; reg = <610 10>; interrupts = <1 a 0>; @@ -89,8 +87,7 @@ }; gpt@620 { // General Purpose Timer - compatible = "mpc5200-gpt"; 
- device_type = "gpt"; + compatible = "fsl,mpc5200-gpt"; cell-index = <2>; reg = <620 10>; interrupts = <1 b 0>; @@ -98,8 +95,7 @@ }; gpt@630 { // General Purpose Timer - compatible = "mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200-gpt"; cell-index = <3>; reg = <630 10>; interrupts = <1 c 0>; @@ -107,8 +103,7 @@ }; gpt@640 { // General Purpose Timer - compatible = "mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200-gpt"; cell-index = <4>; reg = <640 10>; interrupts = <1 d 0>; @@ -116,8 +111,7 @@ }; gpt@650 { // General Purpose Timer - compatible = "mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200-gpt"; cell-index = <5>; reg = <650 10>; interrupts = <1 e 0>; @@ -125,8 +119,7 @@ }; gpt@660 { // General Purpose Timer - compatible = "mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200-gpt"; cell-index = <6>; reg = <660 10>; interrupts = <1 f 0>; @@ -134,8 +127,7 @@ }; gpt@670 { // General Purpose Timer - compatible = "mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200-gpt"; cell-index = <7>; reg = <670 10>; interrupts = <1 10 0>; diff --git a/arch/powerpc/boot/dts/lite5200b.dts b/arch/powerpc/boot/dts/lite5200b.dts index 6582c9a39b2..b540388c608 100644 --- a/arch/powerpc/boot/dts/lite5200b.dts +++ b/arch/powerpc/boot/dts/lite5200b.dts @@ -70,18 +70,16 @@ }; gpt@600 { // General Purpose Timer - compatible = "mpc5200b-gpt","mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200b-gpt","fsl,mpc5200-gpt"; cell-index = <0>; reg = <600 10>; interrupts = <1 9 0>; interrupt-parent = <&mpc5200_pic>; - has-wdt; + fsl,has-wdt; }; gpt@610 { // General Purpose Timer - compatible = "mpc5200b-gpt","mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200b-gpt","fsl,mpc5200-gpt"; cell-index = <1>; reg = <610 10>; interrupts = <1 a 0>; @@ -89,8 +87,7 @@ }; gpt@620 { // General Purpose Timer - compatible = "mpc5200b-gpt","mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200b-gpt","fsl,mpc5200-gpt"; cell-index = <2>; reg = <620 10>; interrupts = <1 b 0>; @@ -98,8 +95,7 @@ }; gpt@630 { // General Purpose Timer - compatible = "mpc5200b-gpt","mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200b-gpt","fsl,mpc5200-gpt"; cell-index = <3>; reg = <630 10>; interrupts = <1 c 0>; @@ -107,8 +103,7 @@ }; gpt@640 { // General Purpose Timer - compatible = "mpc5200b-gpt","mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200b-gpt","fsl,mpc5200-gpt"; cell-index = <4>; reg = <640 10>; interrupts = <1 d 0>; @@ -116,8 +111,7 @@ }; gpt@650 { // General Purpose Timer - compatible = "mpc5200b-gpt","mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200b-gpt","fsl,mpc5200-gpt"; cell-index = <5>; reg = <650 10>; interrupts = <1 e 0>; @@ -125,8 +119,7 @@ }; gpt@660 { // General Purpose Timer - compatible = "mpc5200b-gpt","mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200b-gpt","fsl,mpc5200-gpt"; cell-index = <6>; reg = <660 10>; interrupts = <1 f 0>; @@ -134,8 +127,7 @@ }; gpt@670 { // General Purpose Timer - compatible = "mpc5200b-gpt","mpc5200-gpt"; - device_type = "gpt"; + compatible = "fsl,mpc5200b-gpt","fsl,mpc5200-gpt"; cell-index = <7>; reg = <670 10>; interrupts = <1 10 0>; diff --git a/arch/powerpc/kernel/dma_64.c b/arch/powerpc/kernel/dma_64.c index 9001104b56b..14206e3f081 100644 --- a/arch/powerpc/kernel/dma_64.c +++ b/arch/powerpc/kernel/dma_64.c @@ -161,8 +161,7 @@ static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int i; for_each_sg(sgl, sg, nents, 
i) { - sg->dma_address = (page_to_phys(sg->page) + sg->offset) | - dma_direct_offset; + sg->dma_address = sg_phys(sg) | dma_direct_offset; sg->dma_length = sg->length; } diff --git a/arch/powerpc/kernel/ibmebus.c b/arch/powerpc/kernel/ibmebus.c index 289d7e93591..72fd87156b2 100644 --- a/arch/powerpc/kernel/ibmebus.c +++ b/arch/powerpc/kernel/ibmebus.c @@ -102,8 +102,7 @@ static int ibmebus_map_sg(struct device *dev, int i; for_each_sg(sgl, sg, nents, i) { - sg->dma_address = (dma_addr_t)page_address(sg->page) - + sg->offset; + sg->dma_address = (dma_addr_t) sg_virt(sg); sg->dma_length = sg->length; } diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c index 306a6f75b6c..2d0c9ef555e 100644 --- a/arch/powerpc/kernel/iommu.c +++ b/arch/powerpc/kernel/iommu.c @@ -307,7 +307,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist, continue; } /* Allocate iommu entries for that segment */ - vaddr = (unsigned long)page_address(s->page) + s->offset; + vaddr = (unsigned long) sg_virt(s); npages = iommu_num_pages(vaddr, slen); entry = iommu_range_alloc(tbl, npages, &handle, mask >> IOMMU_PAGE_SHIFT, 0); diff --git a/arch/powerpc/platforms/52xx/lite5200.c b/arch/powerpc/platforms/52xx/lite5200.c index 65b7ae42623..25d2bfa3d9d 100644 --- a/arch/powerpc/platforms/52xx/lite5200.c +++ b/arch/powerpc/platforms/52xx/lite5200.c @@ -145,6 +145,9 @@ static void __init lite5200_setup_arch(void) /* Some mpc5200 & mpc5200b related configuration */ mpc5200_setup_xlb_arbiter(); + /* Map wdt for mpc52xx_restart() */ + mpc52xx_map_wdt(); + #ifdef CONFIG_PM mpc52xx_suspend.board_suspend_prepare = lite5200_suspend_prepare; mpc52xx_suspend.board_resume_finish = lite5200_resume_finish; @@ -183,5 +186,6 @@ define_machine(lite5200) { .init = mpc52xx_declare_of_platform_devices, .init_IRQ = mpc52xx_init_irq, .get_irq = mpc52xx_get_irq, + .restart = mpc52xx_restart, .calibrate_decr = generic_calibrate_decr, }; diff --git a/arch/powerpc/platforms/52xx/mpc52xx_common.c b/arch/powerpc/platforms/52xx/mpc52xx_common.c index 3bc201e07e6..9850685c542 100644 --- a/arch/powerpc/platforms/52xx/mpc52xx_common.c +++ b/arch/powerpc/platforms/52xx/mpc52xx_common.c @@ -18,15 +18,20 @@ #include <asm/prom.h> #include <asm/mpc52xx.h> +/* + * This variable is mapped in mpc52xx_map_wdt() and used in mpc52xx_restart(). + * Permanent mapping is required because mpc52xx_restart() can be called + * from interrupt context while node mapping (which calls ioremap()) + * cannot be used at such point. 
+ */ +static volatile struct mpc52xx_gpt *mpc52xx_wdt = NULL; -void __iomem * -mpc52xx_find_and_map(const char *compatible) +static void __iomem * +mpc52xx_map_node(struct device_node *ofn) { - struct device_node *ofn; const u32 *regaddr_p; u64 regaddr64, size64; - ofn = of_find_compatible_node(NULL, NULL, compatible); if (!ofn) return NULL; @@ -42,8 +47,23 @@ mpc52xx_find_and_map(const char *compatible) return ioremap((u32)regaddr64, (u32)size64); } + +void __iomem * +mpc52xx_find_and_map(const char *compatible) +{ + return mpc52xx_map_node( + of_find_compatible_node(NULL, NULL, compatible)); +} + EXPORT_SYMBOL(mpc52xx_find_and_map); +void __iomem * +mpc52xx_find_and_map_path(const char *path) +{ + return mpc52xx_map_node(of_find_node_by_path(path)); +} + +EXPORT_SYMBOL(mpc52xx_find_and_map_path); /** * mpc52xx_find_ipb_freq - Find the IPB bus frequency for a device @@ -113,3 +133,46 @@ mpc52xx_declare_of_platform_devices(void) "Error while probing of_platform bus\n"); } +void __init +mpc52xx_map_wdt(void) +{ + const void *has_wdt; + struct device_node *np; + + /* mpc52xx_wdt is mapped here and used in mpc52xx_restart, + * possibly from a interrupt context. wdt is only implement + * on a gpt0, so check has-wdt property before mapping. + */ + for_each_compatible_node(np, NULL, "fsl,mpc5200-gpt") { + has_wdt = of_get_property(np, "fsl,has-wdt", NULL); + if (has_wdt) { + mpc52xx_wdt = mpc52xx_map_node(np); + return; + } + } + for_each_compatible_node(np, NULL, "mpc5200-gpt") { + has_wdt = of_get_property(np, "has-wdt", NULL); + if (has_wdt) { + mpc52xx_wdt = mpc52xx_map_node(np); + return; + } + } +} + +void +mpc52xx_restart(char *cmd) +{ + local_irq_disable(); + + /* Turn on the watchdog and wait for it to expire. + * It effectively does a reset. */ + if (mpc52xx_wdt) { + out_be32(&mpc52xx_wdt->mode, 0x00000000); + out_be32(&mpc52xx_wdt->count, 0x000000ff); + out_be32(&mpc52xx_wdt->mode, 0x00009004); + } else + printk("mpc52xx_restart: Can't access wdt. " + "Restart impossible, system halted.\n"); + + while (1); +} diff --git a/arch/powerpc/platforms/ps3/system-bus.c b/arch/powerpc/platforms/ps3/system-bus.c index 07e64b48e7f..6405f4a3676 100644 --- a/arch/powerpc/platforms/ps3/system-bus.c +++ b/arch/powerpc/platforms/ps3/system-bus.c @@ -628,9 +628,8 @@ static int ps3_sb_map_sg(struct device *_dev, struct scatterlist *sgl, int i; for_each_sg(sgl, sg, nents, i) { - int result = ps3_dma_map(dev->d_region, - page_to_phys(sg->page) + sg->offset, sg->length, - &sg->dma_address, 0); + int result = ps3_dma_map(dev->d_region, sg_phys(sg), + sg->length, &sg->dma_address, 0); if (result) { pr_debug("%s:%d: ps3_dma_map failed (%d)\n", diff --git a/arch/powerpc/sysdev/bestcomm/bestcomm.c b/arch/powerpc/sysdev/bestcomm/bestcomm.c index 48492a83e5a..740ad73ce5c 100644 --- a/arch/powerpc/sysdev/bestcomm/bestcomm.c +++ b/arch/powerpc/sysdev/bestcomm/bestcomm.c @@ -269,6 +269,7 @@ bcom_engine_init(void) int task; phys_addr_t tdt_pa, ctx_pa, var_pa, fdt_pa; unsigned int tdt_size, ctx_size, var_size, fdt_size; + u16 regval; /* Allocate & clear SRAM zones for FDT, TDTs, contexts and vars/incs */ tdt_size = BCOM_MAX_TASKS * sizeof(struct bcom_tdt); @@ -319,9 +320,11 @@ bcom_engine_init(void) /* Init 'always' initiator */ out_8(&bcom_eng->regs->ipr[BCOM_INITIATOR_ALWAYS], BCOM_IPR_ALWAYS); - /* Disable COMM Bus Prefetch, apparently it's not reliable yet */ - /* FIXME: This should be done on 5200 and not 5200B ... 
*/ - out_be16(&bcom_eng->regs->PtdCntrl, in_be16(&bcom_eng->regs->PtdCntrl) | 1); + /* Disable COMM Bus Prefetch on the original 5200; it's broken */ + if ((mfspr(SPRN_SVR) & MPC5200_SVR_MASK) == MPC5200_SVR) { + regval = in_be16(&bcom_eng->regs->PtdCntrl); + out_be16(&bcom_eng->regs->PtdCntrl, regval | 1); + } /* Init lock */ spin_lock_init(&bcom_eng->lock); diff --git a/arch/s390/defconfig b/arch/s390/defconfig index 2aae23dba4b..ece7b99da89 100644 --- a/arch/s390/defconfig +++ b/arch/s390/defconfig @@ -1,7 +1,7 @@ # # Automatically generated make config: don't edit -# Linux kernel version: 2.6.22 -# Tue Jul 17 12:50:23 2007 +# Linux kernel version: 2.6.23 +# Mon Oct 22 12:10:44 2007 # CONFIG_MMU=y CONFIG_ZONE_DMA=y @@ -19,15 +19,11 @@ CONFIG_S390=y CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" # -# Code maturity level options +# General setup # CONFIG_EXPERIMENTAL=y CONFIG_LOCK_KERNEL=y CONFIG_INIT_ENV_ARG_LIMIT=32 - -# -# General setup -# CONFIG_LOCALVERSION="" CONFIG_LOCALVERSION_AUTO=y CONFIG_SWAP=y @@ -42,7 +38,14 @@ CONFIG_AUDIT=y CONFIG_IKCONFIG=y CONFIG_IKCONFIG_PROC=y CONFIG_LOG_BUF_SHIFT=17 +CONFIG_CGROUPS=y +# CONFIG_CGROUP_DEBUG is not set +CONFIG_CGROUP_NS=y +CONFIG_CGROUP_CPUACCT=y # CONFIG_CPUSETS is not set +CONFIG_FAIR_GROUP_SCHED=y +CONFIG_FAIR_USER_SCHED=y +# CONFIG_FAIR_CGROUP_SCHED is not set CONFIG_SYSFS_DEPRECATED=y # CONFIG_RELAY is not set CONFIG_BLK_DEV_INITRD=y @@ -63,7 +66,6 @@ CONFIG_FUTEX=y CONFIG_ANON_INODES=y CONFIG_EPOLL=y CONFIG_SIGNALFD=y -CONFIG_TIMERFD=y CONFIG_EVENTFD=y CONFIG_SHMEM=y CONFIG_VM_EVENT_COUNTERS=y @@ -83,6 +85,7 @@ CONFIG_STOP_MACHINE=y CONFIG_BLOCK=y # CONFIG_BLK_DEV_IO_TRACE is not set CONFIG_BLK_DEV_BSG=y +CONFIG_BLOCK_COMPAT=y # # IO Schedulers @@ -108,7 +111,6 @@ CONFIG_64BIT=y CONFIG_SMP=y CONFIG_NR_CPUS=32 CONFIG_HOTPLUG_CPU=y -CONFIG_DEFAULT_MIGRATION_COST=1000000 CONFIG_COMPAT=y CONFIG_SYSVIPC_COMPAT=y CONFIG_AUDIT_ARCH=y @@ -143,9 +145,11 @@ CONFIG_FLATMEM_MANUAL=y CONFIG_FLATMEM=y CONFIG_FLAT_NODE_MEM_MAP=y # CONFIG_SPARSEMEM_STATIC is not set +# CONFIG_SPARSEMEM_VMEMMAP_ENABLE is not set CONFIG_SPLIT_PTLOCK_CPUS=4 CONFIG_RESOURCES_64BIT=y CONFIG_ZONE_DMA_FLAG=1 +CONFIG_BOUNCE=y CONFIG_VIRT_TO_BUS=y CONFIG_HOLES_IN_ZONE=y @@ -219,12 +223,14 @@ CONFIG_INET_TUNNEL=y CONFIG_INET_XFRM_MODE_TRANSPORT=y CONFIG_INET_XFRM_MODE_TUNNEL=y CONFIG_INET_XFRM_MODE_BEET=y +CONFIG_INET_LRO=y CONFIG_INET_DIAG=y CONFIG_INET_TCP_DIAG=y # CONFIG_TCP_CONG_ADVANCED is not set CONFIG_TCP_CONG_CUBIC=y CONFIG_DEFAULT_TCP_CONG="cubic" # CONFIG_TCP_MD5SIG is not set +# CONFIG_IP_VS is not set CONFIG_IPV6=y # CONFIG_IPV6_PRIVACY is not set # CONFIG_IPV6_ROUTER_PREF is not set @@ -243,7 +249,48 @@ CONFIG_IPV6_SIT=y # CONFIG_IPV6_TUNNEL is not set # CONFIG_IPV6_MULTIPLE_TABLES is not set # CONFIG_NETWORK_SECMARK is not set -# CONFIG_NETFILTER is not set +CONFIG_NETFILTER=y +# CONFIG_NETFILTER_DEBUG is not set + +# +# Core Netfilter Configuration +# +CONFIG_NETFILTER_NETLINK=m +CONFIG_NETFILTER_NETLINK_QUEUE=m +CONFIG_NETFILTER_NETLINK_LOG=m +CONFIG_NF_CONNTRACK_ENABLED=m +CONFIG_NF_CONNTRACK=m +# CONFIG_NF_CT_ACCT is not set +# CONFIG_NF_CONNTRACK_MARK is not set +# CONFIG_NF_CONNTRACK_EVENTS is not set +# CONFIG_NF_CT_PROTO_SCTP is not set +# CONFIG_NF_CT_PROTO_UDPLITE is not set +# CONFIG_NF_CONNTRACK_AMANDA is not set +# CONFIG_NF_CONNTRACK_FTP is not set +# CONFIG_NF_CONNTRACK_H323 is not set +# CONFIG_NF_CONNTRACK_IRC is not set +# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set +# CONFIG_NF_CONNTRACK_PPTP is not set +# 
CONFIG_NF_CONNTRACK_SANE is not set +# CONFIG_NF_CONNTRACK_SIP is not set +# CONFIG_NF_CONNTRACK_TFTP is not set +# CONFIG_NF_CT_NETLINK is not set +# CONFIG_NETFILTER_XTABLES is not set + +# +# IP: Netfilter Configuration +# +# CONFIG_NF_CONNTRACK_IPV4 is not set +# CONFIG_IP_NF_QUEUE is not set +# CONFIG_IP_NF_IPTABLES is not set +# CONFIG_IP_NF_ARPTABLES is not set + +# +# IPv6: Netfilter Configuration (EXPERIMENTAL) +# +# CONFIG_NF_CONNTRACK_IPV6 is not set +# CONFIG_IP6_NF_QUEUE is not set +# CONFIG_IP6_NF_IPTABLES is not set # CONFIG_IP_DCCP is not set CONFIG_IP_SCTP=m # CONFIG_SCTP_DBG_MSG is not set @@ -263,12 +310,7 @@ CONFIG_SCTP_HMAC_MD5=y # CONFIG_LAPB is not set # CONFIG_ECONET is not set # CONFIG_WAN_ROUTER is not set - -# -# QoS and/or fair queueing -# CONFIG_NET_SCHED=y -CONFIG_NET_SCH_FIFO=y # # Queueing/Scheduling @@ -306,10 +348,12 @@ CONFIG_NET_CLS_ACT=y CONFIG_NET_ACT_POLICE=y # CONFIG_NET_ACT_GACT is not set # CONFIG_NET_ACT_MIRRED is not set +CONFIG_NET_ACT_NAT=m # CONFIG_NET_ACT_PEDIT is not set # CONFIG_NET_ACT_SIMP is not set CONFIG_NET_CLS_POLICE=y # CONFIG_NET_CLS_IND is not set +CONFIG_NET_SCH_FIFO=y # # Network testing @@ -329,6 +373,7 @@ CONFIG_CCW=y # # Generic Driver Options # +CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" CONFIG_STANDALONE=y CONFIG_PREVENT_FIRMWARE_BUILD=y # CONFIG_FW_LOADER is not set @@ -400,17 +445,11 @@ CONFIG_SCSI_FC_ATTRS=y # CONFIG_SCSI_ISCSI_ATTRS is not set # CONFIG_SCSI_SAS_ATTRS is not set # CONFIG_SCSI_SAS_LIBSAS is not set - -# -# SCSI low-level drivers -# +# CONFIG_SCSI_SRP_ATTRS is not set +CONFIG_SCSI_LOWLEVEL=y # CONFIG_ISCSI_TCP is not set # CONFIG_SCSI_DEBUG is not set CONFIG_ZFCP=y - -# -# Multi-device support (RAID and LVM) -# CONFIG_MD=y CONFIG_BLK_DEV_MD=y CONFIG_MD_LINEAR=m @@ -429,7 +468,9 @@ CONFIG_DM_ZERO=y CONFIG_DM_MULTIPATH=y # CONFIG_DM_MULTIPATH_EMC is not set # CONFIG_DM_MULTIPATH_RDAC is not set +# CONFIG_DM_MULTIPATH_HP is not set # CONFIG_DM_DELAY is not set +# CONFIG_DM_UEVENT is not set CONFIG_NETDEVICES=y # CONFIG_NETDEVICES_MULTIQUEUE is not set # CONFIG_IFB is not set @@ -438,8 +479,13 @@ CONFIG_BONDING=m # CONFIG_MACVLAN is not set CONFIG_EQUALIZER=m CONFIG_TUN=m +CONFIG_VETH=m CONFIG_NET_ETHERNET=y # CONFIG_MII is not set +# CONFIG_IBM_NEW_EMAC_ZMII is not set +# CONFIG_IBM_NEW_EMAC_RGMII is not set +# CONFIG_IBM_NEW_EMAC_TAH is not set +# CONFIG_IBM_NEW_EMAC_EMAC4 is not set CONFIG_NETDEV_1000=y CONFIG_NETDEV_10000=y # CONFIG_TR is not set @@ -473,7 +519,6 @@ CONFIG_CCWGROUP=y CONFIG_UNIX98_PTYS=y CONFIG_LEGACY_PTYS=y CONFIG_LEGACY_PTY_COUNT=256 -# CONFIG_WATCHDOG is not set CONFIG_HW_RANDOM=m # CONFIG_R3964 is not set CONFIG_RAW_DRIVER=m @@ -490,7 +535,6 @@ CONFIG_TN3270_CONSOLE=y CONFIG_TN3215=y CONFIG_TN3215_CONSOLE=y CONFIG_CCW_CONSOLE=y -CONFIG_SCLP=y CONFIG_SCLP_TTY=y CONFIG_SCLP_CONSOLE=y CONFIG_SCLP_VT220_TTY=y @@ -514,6 +558,11 @@ CONFIG_S390_TAPE_34XX=m CONFIG_MONWRITER=m CONFIG_S390_VMUR=m # CONFIG_POWER_SUPPLY is not set +# CONFIG_WATCHDOG is not set + +# +# Sonics Silicon Backplane +# # # File systems @@ -569,7 +618,6 @@ CONFIG_SYSFS=y CONFIG_TMPFS=y CONFIG_TMPFS_POSIX_ACL=y # CONFIG_HUGETLB_PAGE is not set -CONFIG_RAMFS=y CONFIG_CONFIGFS_FS=m # @@ -588,10 +636,7 @@ CONFIG_CONFIGFS_FS=m # CONFIG_QNX4FS_FS is not set # CONFIG_SYSV_FS is not set # CONFIG_UFS_FS is not set - -# -# Network File Systems -# +CONFIG_NETWORK_FILESYSTEMS=y CONFIG_NFS_FS=y CONFIG_NFS_V3=y # CONFIG_NFS_V3_ACL is not set @@ -638,27 +683,13 @@ CONFIG_MSDOS_PARTITION=y # CONFIG_KARMA_PARTITION is not set # 
CONFIG_EFI_PARTITION is not set # CONFIG_SYSV68_PARTITION is not set - -# -# Native Language Support -# # CONFIG_NLS is not set - -# -# Distributed Lock Manager -# CONFIG_DLM=m # CONFIG_DLM_DEBUG is not set - -# -# Instrumentation Support -# - -# -# Profiling support -# +CONFIG_INSTRUMENTATION=y # CONFIG_PROFILING is not set CONFIG_KPROBES=y +# CONFIG_MARKERS is not set # # Kernel hacking @@ -682,6 +713,7 @@ CONFIG_DEBUG_SPINLOCK=y CONFIG_DEBUG_MUTEXES=y # CONFIG_DEBUG_LOCK_ALLOC is not set # CONFIG_PROVE_LOCKING is not set +# CONFIG_LOCK_STAT is not set CONFIG_DEBUG_SPINLOCK_SLEEP=y # CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set # CONFIG_DEBUG_KOBJECT is not set @@ -694,14 +726,17 @@ CONFIG_FORCED_INLINING=y # CONFIG_RCU_TORTURE_TEST is not set # CONFIG_LKDTM is not set # CONFIG_FAULT_INJECTION is not set +CONFIG_SAMPLES=y # # Security options # # CONFIG_KEYS is not set # CONFIG_SECURITY is not set +# CONFIG_SECURITY_FILE_CAPABILITIES is not set CONFIG_CRYPTO=y CONFIG_CRYPTO_ALGAPI=y +CONFIG_CRYPTO_AEAD=m CONFIG_CRYPTO_BLKCIPHER=y CONFIG_CRYPTO_HASH=m CONFIG_CRYPTO_MANAGER=y @@ -720,6 +755,7 @@ CONFIG_CRYPTO_ECB=m CONFIG_CRYPTO_CBC=y CONFIG_CRYPTO_PCBC=m # CONFIG_CRYPTO_LRW is not set +# CONFIG_CRYPTO_XTS is not set # CONFIG_CRYPTO_CRYPTD is not set # CONFIG_CRYPTO_DES is not set CONFIG_CRYPTO_FCRYPT=m @@ -733,11 +769,13 @@ CONFIG_CRYPTO_FCRYPT=m # CONFIG_CRYPTO_ARC4 is not set # CONFIG_CRYPTO_KHAZAD is not set # CONFIG_CRYPTO_ANUBIS is not set +CONFIG_CRYPTO_SEED=m # CONFIG_CRYPTO_DEFLATE is not set # CONFIG_CRYPTO_MICHAEL_MIC is not set # CONFIG_CRYPTO_CRC32C is not set CONFIG_CRYPTO_CAMELLIA=m # CONFIG_CRYPTO_TEST is not set +CONFIG_CRYPTO_AUTHENC=m CONFIG_CRYPTO_HW=y # CONFIG_CRYPTO_SHA1_S390 is not set # CONFIG_CRYPTO_SHA256_S390 is not set @@ -755,5 +793,6 @@ CONFIG_BITREVERSE=m # CONFIG_CRC16 is not set # CONFIG_CRC_ITU_T is not set CONFIG_CRC32=m +CONFIG_CRC7=m # CONFIG_LIBCRC32C is not set CONFIG_PLIST=y diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c index 66b51901c87..ce0856d3250 100644 --- a/arch/s390/kernel/ipl.c +++ b/arch/s390/kernel/ipl.c @@ -648,6 +648,8 @@ static int dump_set_type(enum dump_type type) case DUMP_TYPE_CCW: if (MACHINE_IS_VM) dump_method = DUMP_METHOD_CCW_VM; + else if (diag308_set_works) + dump_method = DUMP_METHOD_CCW_DIAG; else dump_method = DUMP_METHOD_CCW_CIO; break; diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c index 70c57378f42..96492cf2d49 100644 --- a/arch/s390/kernel/process.c +++ b/arch/s390/kernel/process.c @@ -44,6 +44,7 @@ #include <asm/processor.h> #include <asm/irq.h> #include <asm/timer.h> +#include <asm/cpu.h> asmlinkage void ret_from_fork(void) asm ("ret_from_fork"); @@ -91,6 +92,14 @@ EXPORT_SYMBOL(unregister_idle_notifier); void do_monitor_call(struct pt_regs *regs, long interruption_code) { + struct s390_idle_data *idle; + + idle = &__get_cpu_var(s390_idle); + spin_lock(&idle->lock); + idle->idle_time += get_clock() - idle->idle_enter; + idle->in_idle = 0; + spin_unlock(&idle->lock); + /* disable monitor call class 0 */ __ctl_clear_bit(8, 15); @@ -105,6 +114,7 @@ extern void s390_handle_mcck(void); static void default_idle(void) { int cpu, rc; + struct s390_idle_data *idle; /* CPU is going idle. 
*/ cpu = smp_processor_id(); @@ -142,6 +152,12 @@ static void default_idle(void) return; } + idle = &__get_cpu_var(s390_idle); + spin_lock(&idle->lock); + idle->idle_count++; + idle->in_idle = 1; + idle->idle_enter = get_clock(); + spin_unlock(&idle->lock); trace_hardirqs_on(); /* Wait for external, I/O or machine check interrupt. */ __load_psw_mask(psw_kernel_bits | PSW_MASK_WAIT | @@ -254,14 +270,12 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long new_stackp, save_fp_regs(¤t->thread.fp_regs); memcpy(&p->thread.fp_regs, ¤t->thread.fp_regs, sizeof(s390_fp_regs)); - p->thread.user_seg = __pa((unsigned long) p->mm->pgd) | _SEGMENT_TABLE; /* Set a new TLS ? */ if (clone_flags & CLONE_SETTLS) p->thread.acrs[0] = regs->gprs[6]; #else /* CONFIG_64BIT */ /* Save the fpu registers to new thread structure. */ save_fp_regs(&p->thread.fp_regs); - p->thread.user_seg = __pa((unsigned long) p->mm->pgd) | _REGION_TABLE; /* Set a new TLS ? */ if (clone_flags & CLONE_SETTLS) { if (test_thread_flag(TIF_31BIT)) { diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c index 35edbef1d22..1d97fe1c0e5 100644 --- a/arch/s390/kernel/smp.c +++ b/arch/s390/kernel/smp.c @@ -42,6 +42,7 @@ #include <asm/tlbflush.h> #include <asm/timer.h> #include <asm/lowcore.h> +#include <asm/cpu.h> /* * An array with a pointer the lowcore of every CPU. @@ -325,7 +326,7 @@ static void smp_ext_bitcall(int cpu, ec_bit_sig sig) */ void smp_ptlb_callback(void *info) { - local_flush_tlb(); + __tlb_flush_local(); } void smp_ptlb_all(void) @@ -494,6 +495,8 @@ int __cpuinit start_secondary(void *cpuvoid) return 0; } +DEFINE_PER_CPU(struct s390_idle_data, s390_idle); + static void __init smp_create_idle(unsigned int cpu) { struct task_struct *p; @@ -506,6 +509,7 @@ static void __init smp_create_idle(unsigned int cpu) if (IS_ERR(p)) panic("failed fork for CPU %u: %li", cpu, PTR_ERR(p)); current_set[cpu] = p; + spin_lock_init(&(&per_cpu(s390_idle, cpu))->lock); } static int cpu_stopped(int cpu) @@ -724,6 +728,7 @@ void __init smp_prepare_boot_cpu(void) cpu_set(0, cpu_online_map); S390_lowcore.percpu_offset = __per_cpu_offset[0]; current_set[0] = current; + spin_lock_init(&(&__get_cpu_var(s390_idle))->lock); } void __init smp_cpus_done(unsigned int max_cpus) @@ -756,22 +761,71 @@ static ssize_t show_capability(struct sys_device *dev, char *buf) } static SYSDEV_ATTR(capability, 0444, show_capability, NULL); +static ssize_t show_idle_count(struct sys_device *dev, char *buf) +{ + struct s390_idle_data *idle; + unsigned long long idle_count; + + idle = &per_cpu(s390_idle, dev->id); + spin_lock_irq(&idle->lock); + idle_count = idle->idle_count; + spin_unlock_irq(&idle->lock); + return sprintf(buf, "%llu\n", idle_count); +} +static SYSDEV_ATTR(idle_count, 0444, show_idle_count, NULL); + +static ssize_t show_idle_time(struct sys_device *dev, char *buf) +{ + struct s390_idle_data *idle; + unsigned long long new_time; + + idle = &per_cpu(s390_idle, dev->id); + spin_lock_irq(&idle->lock); + if (idle->in_idle) { + new_time = get_clock(); + idle->idle_time += new_time - idle->idle_enter; + idle->idle_enter = new_time; + } + new_time = idle->idle_time; + spin_unlock_irq(&idle->lock); + return sprintf(buf, "%llu us\n", new_time >> 12); +} +static SYSDEV_ATTR(idle_time, 0444, show_idle_time, NULL); + +static struct attribute *cpu_attrs[] = { + &attr_capability.attr, + &attr_idle_count.attr, + &attr_idle_time.attr, + NULL, +}; + +static struct attribute_group cpu_attr_group = { + .attrs = cpu_attrs, +}; + static int __cpuinit 
smp_cpu_notify(struct notifier_block *self, unsigned long action, void *hcpu) { unsigned int cpu = (unsigned int)(long)hcpu; struct cpu *c = &per_cpu(cpu_devices, cpu); struct sys_device *s = &c->sysdev; + struct s390_idle_data *idle; switch (action) { case CPU_ONLINE: case CPU_ONLINE_FROZEN: - if (sysdev_create_file(s, &attr_capability)) + idle = &per_cpu(s390_idle, cpu); + spin_lock_irq(&idle->lock); + idle->idle_enter = 0; + idle->idle_time = 0; + idle->idle_count = 0; + spin_unlock_irq(&idle->lock); + if (sysfs_create_group(&s->kobj, &cpu_attr_group)) return NOTIFY_BAD; break; case CPU_DEAD: case CPU_DEAD_FROZEN: - sysdev_remove_file(s, &attr_capability); + sysfs_remove_group(&s->kobj, &cpu_attr_group); break; } return NOTIFY_OK; @@ -784,6 +838,7 @@ static struct notifier_block __cpuinitdata smp_cpu_nb = { static int __init topology_init(void) { int cpu; + int rc; register_cpu_notifier(&smp_cpu_nb); @@ -796,7 +851,9 @@ static int __init topology_init(void) if (!cpu_online(cpu)) continue; s = &c->sysdev; - sysdev_create_file(s, &attr_capability); + rc = sysfs_create_group(&s->kobj, &cpu_attr_group); + if (rc) + return rc; } return 0; } diff --git a/arch/s390/lib/uaccess_pt.c b/arch/s390/lib/uaccess_pt.c index b159a9d6568..7e8efaade2e 100644 --- a/arch/s390/lib/uaccess_pt.c +++ b/arch/s390/lib/uaccess_pt.c @@ -15,6 +15,27 @@ #include <asm/futex.h> #include "uaccess.h" +static inline pte_t *follow_table(struct mm_struct *mm, unsigned long addr) +{ + pgd_t *pgd; + pud_t *pud; + pmd_t *pmd; + + pgd = pgd_offset(mm, addr); + if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) + return NULL; + + pud = pud_offset(pgd, addr); + if (pud_none(*pud) || unlikely(pud_bad(*pud))) + return NULL; + + pmd = pmd_offset(pud, addr); + if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd))) + return NULL; + + return pte_offset_map(pmd, addr); +} + static int __handle_fault(struct mm_struct *mm, unsigned long address, int write_access) { @@ -85,8 +106,6 @@ static size_t __user_copy_pt(unsigned long uaddr, void *kptr, { struct mm_struct *mm = current->mm; unsigned long offset, pfn, done, size; - pgd_t *pgd; - pmd_t *pmd; pte_t *pte; void *from, *to; @@ -94,15 +113,7 @@ static size_t __user_copy_pt(unsigned long uaddr, void *kptr, retry: spin_lock(&mm->page_table_lock); do { - pgd = pgd_offset(mm, uaddr); - if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) - goto fault; - - pmd = pmd_offset(pgd, uaddr); - if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd))) - goto fault; - - pte = pte_offset_map(pmd, uaddr); + pte = follow_table(mm, uaddr); if (!pte || !pte_present(*pte) || (write_user && !pte_write(*pte))) goto fault; @@ -142,22 +153,12 @@ static unsigned long __dat_user_addr(unsigned long uaddr) { struct mm_struct *mm = current->mm; unsigned long pfn, ret; - pgd_t *pgd; - pmd_t *pmd; pte_t *pte; int rc; ret = 0; retry: - pgd = pgd_offset(mm, uaddr); - if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) - goto fault; - - pmd = pmd_offset(pgd, uaddr); - if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd))) - goto fault; - - pte = pte_offset_map(pmd, uaddr); + pte = follow_table(mm, uaddr); if (!pte || !pte_present(*pte)) goto fault; @@ -229,8 +230,6 @@ static size_t strnlen_user_pt(size_t count, const char __user *src) unsigned long uaddr = (unsigned long) src; struct mm_struct *mm = current->mm; unsigned long offset, pfn, done, len; - pgd_t *pgd; - pmd_t *pmd; pte_t *pte; size_t len_str; @@ -240,15 +239,7 @@ static size_t strnlen_user_pt(size_t count, const char __user *src) retry: spin_lock(&mm->page_table_lock); do { - pgd = pgd_offset(mm, 
uaddr); - if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) - goto fault; - - pmd = pmd_offset(pgd, uaddr); - if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd))) - goto fault; - - pte = pte_offset_map(pmd, uaddr); + pte = follow_table(mm, uaddr); if (!pte || !pte_present(*pte)) goto fault; @@ -308,8 +299,6 @@ static size_t copy_in_user_pt(size_t n, void __user *to, uaddr, done, size; unsigned long uaddr_from = (unsigned long) from; unsigned long uaddr_to = (unsigned long) to; - pgd_t *pgd_from, *pgd_to; - pmd_t *pmd_from, *pmd_to; pte_t *pte_from, *pte_to; int write_user; @@ -317,39 +306,14 @@ static size_t copy_in_user_pt(size_t n, void __user *to, retry: spin_lock(&mm->page_table_lock); do { - pgd_from = pgd_offset(mm, uaddr_from); - if (pgd_none(*pgd_from) || unlikely(pgd_bad(*pgd_from))) { - uaddr = uaddr_from; - write_user = 0; - goto fault; - } - pgd_to = pgd_offset(mm, uaddr_to); - if (pgd_none(*pgd_to) || unlikely(pgd_bad(*pgd_to))) { - uaddr = uaddr_to; - write_user = 1; - goto fault; - } - - pmd_from = pmd_offset(pgd_from, uaddr_from); - if (pmd_none(*pmd_from) || unlikely(pmd_bad(*pmd_from))) { - uaddr = uaddr_from; - write_user = 0; - goto fault; - } - pmd_to = pmd_offset(pgd_to, uaddr_to); - if (pmd_none(*pmd_to) || unlikely(pmd_bad(*pmd_to))) { - uaddr = uaddr_to; - write_user = 1; - goto fault; - } - - pte_from = pte_offset_map(pmd_from, uaddr_from); + pte_from = follow_table(mm, uaddr_from); if (!pte_from || !pte_present(*pte_from)) { uaddr = uaddr_from; write_user = 0; goto fault; } - pte_to = pte_offset_map(pmd_to, uaddr_to); + + pte_to = follow_table(mm, uaddr_to); if (!pte_to || !pte_present(*pte_to) || !pte_write(*pte_to)) { uaddr = uaddr_to; write_user = 1; diff --git a/arch/s390/mm/Makefile b/arch/s390/mm/Makefile index f95449b29fa..66401930f83 100644 --- a/arch/s390/mm/Makefile +++ b/arch/s390/mm/Makefile @@ -2,6 +2,6 @@ # Makefile for the linux s390-specific parts of the memory manager. 
# -obj-y := init.o fault.o extmem.o mmap.o vmem.o +obj-y := init.o fault.o extmem.o mmap.o vmem.o pgtable.o obj-$(CONFIG_CMM) += cmm.o diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 3a25bbf2eb0..b234bb4a6da 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -81,6 +81,7 @@ void show_mem(void) static void __init setup_ro_region(void) { pgd_t *pgd; + pud_t *pud; pmd_t *pmd; pte_t *pte; pte_t new_pte; @@ -91,7 +92,8 @@ static void __init setup_ro_region(void) for (; address < end; address += PAGE_SIZE) { pgd = pgd_offset_k(address); - pmd = pmd_offset(pgd, address); + pud = pud_offset(pgd, address); + pmd = pmd_offset(pud, address); pte = pte_offset_kernel(pmd, address); new_pte = mk_pte_phys(address, __pgprot(_PAGE_RO)); *pte = new_pte; @@ -103,32 +105,28 @@ static void __init setup_ro_region(void) */ void __init paging_init(void) { - pgd_t *pg_dir; - int i; - unsigned long pgdir_k; static const int ssm_mask = 0x04000000L; unsigned long max_zone_pfns[MAX_NR_ZONES]; + unsigned long pgd_type; - pg_dir = swapper_pg_dir; - + init_mm.pgd = swapper_pg_dir; + S390_lowcore.kernel_asce = __pa(init_mm.pgd) & PAGE_MASK; #ifdef CONFIG_64BIT - pgdir_k = (__pa(swapper_pg_dir) & PAGE_MASK) | _KERN_REGION_TABLE; - for (i = 0; i < PTRS_PER_PGD; i++) - pgd_clear_kernel(pg_dir + i); + S390_lowcore.kernel_asce |= _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH; + pgd_type = _REGION3_ENTRY_EMPTY; #else - pgdir_k = (__pa(swapper_pg_dir) & PAGE_MASK) | _KERNSEG_TABLE; - for (i = 0; i < PTRS_PER_PGD; i++) - pmd_clear_kernel((pmd_t *)(pg_dir + i)); + S390_lowcore.kernel_asce |= _ASCE_TABLE_LENGTH; + pgd_type = _SEGMENT_ENTRY_EMPTY; #endif + clear_table((unsigned long *) init_mm.pgd, pgd_type, + sizeof(unsigned long)*2048); vmem_map_init(); setup_ro_region(); - S390_lowcore.kernel_asce = pgdir_k; - /* enable virtual mapping in kernel mode */ - __ctl_load(pgdir_k, 1, 1); - __ctl_load(pgdir_k, 7, 7); - __ctl_load(pgdir_k, 13, 13); + __ctl_load(S390_lowcore.kernel_asce, 1, 1); + __ctl_load(S390_lowcore.kernel_asce, 7, 7); + __ctl_load(S390_lowcore.kernel_asce, 13, 13); __raw_local_irq_ssm(ssm_mask); memset(max_zone_pfns, 0, sizeof(max_zone_pfns)); diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c new file mode 100644 index 00000000000..e60e0ae1340 --- /dev/null +++ b/arch/s390/mm/pgtable.c @@ -0,0 +1,94 @@ +/* + * arch/s390/mm/pgtable.c + * + * Copyright IBM Corp. 
2007 + * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com> + */ + +#include <linux/sched.h> +#include <linux/kernel.h> +#include <linux/errno.h> +#include <linux/mm.h> +#include <linux/swap.h> +#include <linux/smp.h> +#include <linux/highmem.h> +#include <linux/slab.h> +#include <linux/pagemap.h> +#include <linux/spinlock.h> +#include <linux/module.h> +#include <linux/quicklist.h> + +#include <asm/system.h> +#include <asm/pgtable.h> +#include <asm/pgalloc.h> +#include <asm/tlb.h> +#include <asm/tlbflush.h> + +#ifndef CONFIG_64BIT +#define ALLOC_ORDER 1 +#else +#define ALLOC_ORDER 2 +#endif + +unsigned long *crst_table_alloc(struct mm_struct *mm, int noexec) +{ + struct page *page = alloc_pages(GFP_KERNEL, ALLOC_ORDER); + + if (!page) + return NULL; + page->index = 0; + if (noexec) { + struct page *shadow = alloc_pages(GFP_KERNEL, ALLOC_ORDER); + if (!shadow) { + __free_pages(page, ALLOC_ORDER); + return NULL; + } + page->index = page_to_phys(shadow); + } + return (unsigned long *) page_to_phys(page); +} + +void crst_table_free(unsigned long *table) +{ + unsigned long *shadow = get_shadow_table(table); + + if (shadow) + free_pages((unsigned long) shadow, ALLOC_ORDER); + free_pages((unsigned long) table, ALLOC_ORDER); +} + +/* + * page table entry allocation/free routines. + */ +unsigned long *page_table_alloc(int noexec) +{ + struct page *page = alloc_page(GFP_KERNEL); + unsigned long *table; + + if (!page) + return NULL; + page->index = 0; + if (noexec) { + struct page *shadow = alloc_page(GFP_KERNEL); + if (!shadow) { + __free_page(page); + return NULL; + } + table = (unsigned long *) page_to_phys(shadow); + clear_table(table, _PAGE_TYPE_EMPTY, PAGE_SIZE); + page->index = (addr_t) table; + } + table = (unsigned long *) page_to_phys(page); + clear_table(table, _PAGE_TYPE_EMPTY, PAGE_SIZE); + return table; +} + +void page_table_free(unsigned long *table) +{ + unsigned long *shadow = get_shadow_pte(table); + + if (shadow) + free_page((unsigned long) shadow); + free_page((unsigned long) table); + +} diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c index fd594d5fe14..fb9c5a85aa5 100644 --- a/arch/s390/mm/vmem.c +++ b/arch/s390/mm/vmem.c @@ -73,31 +73,28 @@ static void __init_refok *vmem_alloc_pages(unsigned int order) return alloc_bootmem_pages((1 << order) * PAGE_SIZE); } +#define vmem_pud_alloc() ({ BUG(); ((pud_t *) NULL); }) + static inline pmd_t *vmem_pmd_alloc(void) { - pmd_t *pmd; - int i; + pmd_t *pmd = NULL; - pmd = vmem_alloc_pages(PMD_ALLOC_ORDER); +#ifdef CONFIG_64BIT + pmd = vmem_alloc_pages(2); if (!pmd) return NULL; - for (i = 0; i < PTRS_PER_PMD; i++) - pmd_clear_kernel(pmd + i); + clear_table((unsigned long *) pmd, _SEGMENT_ENTRY_EMPTY, PAGE_SIZE*4); +#endif return pmd; } static inline pte_t *vmem_pte_alloc(void) { - pte_t *pte; - pte_t empty_pte; - int i; + pte_t *pte = vmem_alloc_pages(0); - pte = vmem_alloc_pages(PTE_ALLOC_ORDER); if (!pte) return NULL; - pte_val(empty_pte) = _PAGE_TYPE_EMPTY; - for (i = 0; i < PTRS_PER_PTE; i++) - pte[i] = empty_pte; + clear_table((unsigned long *) pte, _PAGE_TYPE_EMPTY, PAGE_SIZE); return pte; } @@ -108,6 +105,7 @@ static int vmem_add_range(unsigned long start, unsigned long size) { unsigned long address; pgd_t *pg_dir; + pud_t *pu_dir; pmd_t *pm_dir; pte_t *pt_dir; pte_t pte; @@ -116,13 +114,21 @@ static int vmem_add_range(unsigned long start, unsigned long size) for (address = start; address < start + size; address += PAGE_SIZE) { pg_dir = pgd_offset_k(address); if (pgd_none(*pg_dir)) { + pu_dir = vmem_pud_alloc(); + if 
(!pu_dir) + goto out; + pgd_populate_kernel(&init_mm, pg_dir, pu_dir); + } + + pu_dir = pud_offset(pg_dir, address); + if (pud_none(*pu_dir)) { pm_dir = vmem_pmd_alloc(); if (!pm_dir) goto out; - pgd_populate_kernel(&init_mm, pg_dir, pm_dir); + pud_populate_kernel(&init_mm, pu_dir, pm_dir); } - pm_dir = pmd_offset(pg_dir, address); + pm_dir = pmd_offset(pu_dir, address); if (pmd_none(*pm_dir)) { pt_dir = vmem_pte_alloc(); if (!pt_dir) @@ -148,6 +154,7 @@ static void vmem_remove_range(unsigned long start, unsigned long size) { unsigned long address; pgd_t *pg_dir; + pud_t *pu_dir; pmd_t *pm_dir; pte_t *pt_dir; pte_t pte; @@ -155,9 +162,10 @@ static void vmem_remove_range(unsigned long start, unsigned long size) pte_val(pte) = _PAGE_TYPE_EMPTY; for (address = start; address < start + size; address += PAGE_SIZE) { pg_dir = pgd_offset_k(address); - if (pgd_none(*pg_dir)) + pu_dir = pud_offset(pg_dir, address); + if (pud_none(*pu_dir)) continue; - pm_dir = pmd_offset(pg_dir, address); + pm_dir = pmd_offset(pu_dir, address); if (pmd_none(*pm_dir)) continue; pt_dir = pte_offset_kernel(pm_dir, address); @@ -174,6 +182,7 @@ static int vmem_add_mem_map(unsigned long start, unsigned long size) unsigned long address, start_addr, end_addr; struct page *map_start, *map_end; pgd_t *pg_dir; + pud_t *pu_dir; pmd_t *pm_dir; pte_t *pt_dir; pte_t pte; @@ -188,13 +197,21 @@ static int vmem_add_mem_map(unsigned long start, unsigned long size) for (address = start_addr; address < end_addr; address += PAGE_SIZE) { pg_dir = pgd_offset_k(address); if (pgd_none(*pg_dir)) { + pu_dir = vmem_pud_alloc(); + if (!pu_dir) + goto out; + pgd_populate_kernel(&init_mm, pg_dir, pu_dir); + } + + pu_dir = pud_offset(pg_dir, address); + if (pud_none(*pu_dir)) { pm_dir = vmem_pmd_alloc(); if (!pm_dir) goto out; - pgd_populate_kernel(&init_mm, pg_dir, pm_dir); + pud_populate_kernel(&init_mm, pu_dir, pm_dir); } - pm_dir = pmd_offset(pg_dir, address); + pm_dir = pmd_offset(pu_dir, address); if (pmd_none(*pm_dir)) { pt_dir = vmem_pte_alloc(); if (!pt_dir) diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c index 9c3ed88853f..97aa50d1e4a 100644 --- a/arch/sparc/kernel/ioport.c +++ b/arch/sparc/kernel/ioport.c @@ -727,9 +727,8 @@ int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, BUG_ON(direction == PCI_DMA_NONE); /* IIep is write-through, not flushing. 
*/ for_each_sg(sgl, sg, nents, n) { - BUG_ON(page_address(sg->page) == NULL); - sg->dvma_address = - virt_to_phys(page_address(sg->page)) + sg->offset; + BUG_ON(page_address(sg_page(sg)) == NULL); + sg->dvma_address = virt_to_phys(sg_virt(sg)); sg->dvma_length = sg->length; } return nents; @@ -748,9 +747,9 @@ void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, BUG_ON(direction == PCI_DMA_NONE); if (direction != PCI_DMA_TODEVICE) { for_each_sg(sgl, sg, nents, n) { - BUG_ON(page_address(sg->page) == NULL); + BUG_ON(page_address(sg_page(sg)) == NULL); mmu_inval_dma_area( - (unsigned long) page_address(sg->page), + (unsigned long) page_address(sg_page(sg)), (sg->length + PAGE_SIZE-1) & PAGE_MASK); } } @@ -798,9 +797,9 @@ void pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sgl, int BUG_ON(direction == PCI_DMA_NONE); if (direction != PCI_DMA_TODEVICE) { for_each_sg(sgl, sg, nents, n) { - BUG_ON(page_address(sg->page) == NULL); + BUG_ON(page_address(sg_page(sg)) == NULL); mmu_inval_dma_area( - (unsigned long) page_address(sg->page), + (unsigned long) page_address(sg_page(sg)), (sg->length + PAGE_SIZE-1) & PAGE_MASK); } } @@ -814,9 +813,9 @@ void pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sgl, BUG_ON(direction == PCI_DMA_NONE); if (direction != PCI_DMA_TODEVICE) { for_each_sg(sgl, sg, nents, n) { - BUG_ON(page_address(sg->page) == NULL); + BUG_ON(page_address(sg_page(sg)) == NULL); mmu_inval_dma_area( - (unsigned long) page_address(sg->page), + (unsigned long) page_address(sg_page(sg)), (sg->length + PAGE_SIZE-1) & PAGE_MASK); } } diff --git a/arch/sparc/mm/io-unit.c b/arch/sparc/mm/io-unit.c index 375b4db6370..1666087c5b8 100644 --- a/arch/sparc/mm/io-unit.c +++ b/arch/sparc/mm/io-unit.c @@ -144,7 +144,7 @@ static void iounit_get_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_bus spin_lock_irqsave(&iounit->lock, flags); while (sz != 0) { --sz; - sg->dvma_address = iounit_get_area(iounit, (unsigned long)page_address(sg->page) + sg->offset, sg->length); + sg->dvma_address = iounit_get_area(iounit, sg_virt(sg), sg->length); sg->dvma_length = sg->length; sg = sg_next(sg); } diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c index 283656d9f6e..4b934270f05 100644 --- a/arch/sparc/mm/iommu.c +++ b/arch/sparc/mm/iommu.c @@ -238,7 +238,7 @@ static void iommu_get_scsi_sgl_noflush(struct scatterlist *sg, int sz, struct sb while (sz != 0) { --sz; n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT; - sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset; + sg->dvma_address = iommu_get_one(sg_page(sg), n, sbus) + sg->offset; sg->dvma_length = (__u32) sg->length; sg = sg_next(sg); } @@ -252,7 +252,7 @@ static void iommu_get_scsi_sgl_gflush(struct scatterlist *sg, int sz, struct sbu while (sz != 0) { --sz; n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT; - sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset; + sg->dvma_address = iommu_get_one(sg_page(sg), n, sbus) + sg->offset; sg->dvma_length = (__u32) sg->length; sg = sg_next(sg); } @@ -273,7 +273,7 @@ static void iommu_get_scsi_sgl_pflush(struct scatterlist *sg, int sz, struct sbu * XXX Is this a good assumption? * XXX What if someone else unmaps it here and races us? */ - if ((page = (unsigned long) page_address(sg->page)) != 0) { + if ((page = (unsigned long) page_address(sg_page(sg))) != 0) { for (i = 0; i < n; i++) { if (page != oldpage) { /* Already flushed? 
*/ flush_page_for_dma(page); @@ -283,7 +283,7 @@ static void iommu_get_scsi_sgl_pflush(struct scatterlist *sg, int sz, struct sbu } } - sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset; + sg->dvma_address = iommu_get_one(sg_page(sg), n, sbus) + sg->offset; sg->dvma_length = (__u32) sg->length; sg = sg_next(sg); } diff --git a/arch/sparc/mm/sun4c.c b/arch/sparc/mm/sun4c.c index ee6708fc449..a2cc141291c 100644 --- a/arch/sparc/mm/sun4c.c +++ b/arch/sparc/mm/sun4c.c @@ -1228,7 +1228,7 @@ static void sun4c_get_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_bus * { while (sz != 0) { --sz; - sg->dvma_address = (__u32)sun4c_lockarea(page_address(sg->page) + sg->offset, sg->length); + sg->dvma_address = (__u32)sun4c_lockarea(sg_virt(sg), sg->length); sg->dvma_length = sg->length; sg = sg_next(sg); } diff --git a/arch/sparc64/Kconfig b/arch/sparc64/Kconfig index c7a74e37698..03c4e5c1b94 100644 --- a/arch/sparc64/Kconfig +++ b/arch/sparc64/Kconfig @@ -72,6 +72,10 @@ config ARCH_NO_VIRT_TO_BUS config OF def_bool y +config GENERIC_HARDIRQS_NO__DO_IRQ + bool + def_bool y + choice prompt "Kernel page size" default SPARC64_PAGE_SIZE_8KB diff --git a/arch/sparc64/Makefile b/arch/sparc64/Makefile index 6c92a42efe7..01159cb5f16 100644 --- a/arch/sparc64/Makefile +++ b/arch/sparc64/Makefile @@ -18,8 +18,6 @@ NEW_GCC := $(call cc-option-yn, -m64 -mcmodel=medlow) NEW_GAS := $(shell if $(LD) -V 2>&1 | grep 'elf64_sparc' > /dev/null; then echo y; else echo n; fi) UNDECLARED_REGS := $(shell if $(CC) -c -x assembler /dev/null -Wa,--help | grep undeclared-regs > /dev/null; then echo y; else echo n; fi; ) -export NEW_GCC - ifneq ($(NEW_GAS),y) AS = sparc64-linux-as LD = sparc64-linux-ld @@ -58,8 +56,6 @@ core-y += arch/sparc64/kernel/ arch/sparc64/mm/ core-$(CONFIG_SOLARIS_EMUL) += arch/sparc64/solaris/ core-y += arch/sparc64/math-emu/ libs-y += arch/sparc64/prom/ arch/sparc64/lib/ - -# FIXME: is drivers- right? 
drivers-$(CONFIG_OPROFILE) += arch/sparc64/oprofile/ boot := arch/sparc64/boot diff --git a/arch/sparc64/defconfig b/arch/sparc64/defconfig index 1aa2c4048e4..e023d4b2fef 100644 --- a/arch/sparc64/defconfig +++ b/arch/sparc64/defconfig @@ -1,7 +1,7 @@ # # Automatically generated make config: don't edit # Linux kernel version: 2.6.23 -# Sat Oct 13 21:53:54 2007 +# Sun Oct 21 19:57:44 2007 # CONFIG_SPARC=y CONFIG_SPARC64=y @@ -49,6 +49,10 @@ CONFIG_POSIX_MQUEUE=y # CONFIG_AUDIT is not set # CONFIG_IKCONFIG is not set CONFIG_LOG_BUF_SHIFT=18 +# CONFIG_CGROUPS is not set +CONFIG_FAIR_GROUP_SCHED=y +CONFIG_FAIR_USER_SCHED=y +# CONFIG_FAIR_CGROUP_SCHED is not set CONFIG_SYSFS_DEPRECATED=y CONFIG_RELAY=y # CONFIG_BLK_DEV_INITRD is not set @@ -145,7 +149,10 @@ CONFIG_SELECT_MEMORY_MODEL=y CONFIG_SPARSEMEM_MANUAL=y CONFIG_SPARSEMEM=y CONFIG_HAVE_MEMORY_PRESENT=y -CONFIG_SPARSEMEM_STATIC=y +# CONFIG_SPARSEMEM_STATIC is not set +CONFIG_SPARSEMEM_EXTREME=y +CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y +CONFIG_SPARSEMEM_VMEMMAP=y CONFIG_SPLIT_PTLOCK_CPUS=4 CONFIG_RESOURCES_64BIT=y CONFIG_ZONE_DMA_FLAG=0 @@ -275,10 +282,6 @@ CONFIG_VLAN_8021Q=m # CONFIG_LAPB is not set # CONFIG_ECONET is not set # CONFIG_WAN_ROUTER is not set - -# -# QoS and/or fair queueing -# # CONFIG_NET_SCHED is not set # @@ -372,8 +375,6 @@ CONFIG_IDEPCI_PCIBUS_ORDER=y # CONFIG_BLK_DEV_GENERIC is not set # CONFIG_BLK_DEV_OPTI621 is not set CONFIG_BLK_DEV_IDEDMA_PCI=y -# CONFIG_BLK_DEV_IDEDMA_FORCED is not set -CONFIG_IDEDMA_ONLYDISK=y # CONFIG_BLK_DEV_AEC62XX is not set CONFIG_BLK_DEV_ALI15X3=y # CONFIG_WDC_ALI15X3 is not set @@ -401,6 +402,7 @@ CONFIG_BLK_DEV_ALI15X3=y # CONFIG_BLK_DEV_TC86C001 is not set # CONFIG_IDE_ARM is not set CONFIG_BLK_DEV_IDEDMA=y +CONFIG_IDE_ARCH_OBSOLETE_INIT=y # CONFIG_BLK_DEV_HD is not set # @@ -441,6 +443,7 @@ CONFIG_SCSI_FC_ATTRS=y CONFIG_SCSI_ISCSI_ATTRS=m # CONFIG_SCSI_SAS_ATTRS is not set # CONFIG_SCSI_SAS_LIBSAS is not set +# CONFIG_SCSI_SRP_ATTRS is not set CONFIG_SCSI_LOWLEVEL=y CONFIG_ISCSI_TCP=m # CONFIG_BLK_DEV_3W_XXXX_RAID is not set @@ -492,14 +495,8 @@ CONFIG_DM_MIRROR=m CONFIG_DM_ZERO=m # CONFIG_DM_MULTIPATH is not set # CONFIG_DM_DELAY is not set - -# -# Fusion MPT device support -# +# CONFIG_DM_UEVENT is not set # CONFIG_FUSION is not set -# CONFIG_FUSION_SPI is not set -# CONFIG_FUSION_FC is not set -# CONFIG_FUSION_SAS is not set # # IEEE 1394 (FireWire) support @@ -638,7 +635,6 @@ CONFIG_INPUT_MOUSEDEV_PSAUX=y CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024 CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768 # CONFIG_INPUT_JOYDEV is not set -# CONFIG_INPUT_TSDEV is not set CONFIG_INPUT_EVDEV=y # CONFIG_INPUT_EVBUG is not set @@ -714,11 +710,9 @@ CONFIG_SERIAL_CORE_CONSOLE=y CONFIG_UNIX98_PTYS=y # CONFIG_LEGACY_PTYS is not set # CONFIG_IPMI_HANDLER is not set -# CONFIG_WATCHDOG is not set # CONFIG_HW_RANDOM is not set # CONFIG_R3964 is not set # CONFIG_APPLICOM is not set -# CONFIG_DRM is not set # CONFIG_RAW_DRIVER is not set # CONFIG_TCG_TPM is not set CONFIG_DEVPORT=y @@ -786,8 +780,6 @@ CONFIG_I2C_ALGOBIT=y # CONFIG_POWER_SUPPLY is not set CONFIG_HWMON=y # CONFIG_HWMON_VID is not set -# CONFIG_SENSORS_ABITUGURU is not set -# CONFIG_SENSORS_ABITUGURU3 is not set # CONFIG_SENSORS_AD7418 is not set # CONFIG_SENSORS_ADM1021 is not set # CONFIG_SENSORS_ADM1025 is not set @@ -795,12 +787,12 @@ CONFIG_HWMON=y # CONFIG_SENSORS_ADM1029 is not set # CONFIG_SENSORS_ADM1031 is not set # CONFIG_SENSORS_ADM9240 is not set -# CONFIG_SENSORS_ASB100 is not set +# CONFIG_SENSORS_ADT7470 is not set # CONFIG_SENSORS_ATXP1 is not set # 
CONFIG_SENSORS_DS1621 is not set # CONFIG_SENSORS_F71805F is not set -# CONFIG_SENSORS_FSCHER is not set -# CONFIG_SENSORS_FSCPOS is not set +# CONFIG_SENSORS_F71882FG is not set +# CONFIG_SENSORS_F75375S is not set # CONFIG_SENSORS_GL518SM is not set # CONFIG_SENSORS_GL520SM is not set # CONFIG_SENSORS_IT87 is not set @@ -836,6 +828,7 @@ CONFIG_HWMON=y # CONFIG_SENSORS_W83627HF is not set # CONFIG_SENSORS_W83627EHF is not set # CONFIG_HWMON_DEBUG_CHIP is not set +# CONFIG_WATCHDOG is not set # # Sonics Silicon Backplane @@ -858,12 +851,7 @@ CONFIG_SSB_POSSIBLE=y # # Graphics support # -# CONFIG_BACKLIGHT_LCD_SUPPORT is not set - -# -# Display device support -# -# CONFIG_DISPLAY_SUPPORT is not set +# CONFIG_DRM is not set # CONFIG_VGASTATE is not set # CONFIG_VIDEO_OUTPUT_CONTROL is not set CONFIG_FB=y @@ -872,6 +860,7 @@ CONFIG_FB_DDC=y CONFIG_FB_CFB_FILLRECT=y CONFIG_FB_CFB_COPYAREA=y CONFIG_FB_CFB_IMAGEBLIT=y +# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set # CONFIG_FB_SYS_FILLRECT is not set # CONFIG_FB_SYS_COPYAREA is not set # CONFIG_FB_SYS_IMAGEBLIT is not set @@ -890,6 +879,7 @@ CONFIG_FB_TILEBLITTING=y # CONFIG_FB_PM2 is not set # CONFIG_FB_ASILIANT is not set # CONFIG_FB_IMSTT is not set +# CONFIG_FB_UVESA is not set # CONFIG_FB_SBUS is not set # CONFIG_FB_XVR500 is not set # CONFIG_FB_XVR2500 is not set @@ -915,6 +905,12 @@ CONFIG_FB_RADEON_I2C=y # CONFIG_FB_ARK is not set # CONFIG_FB_PM3 is not set # CONFIG_FB_VIRTUAL is not set +# CONFIG_BACKLIGHT_LCD_SUPPORT is not set + +# +# Display device support +# +# CONFIG_DISPLAY_SUPPORT is not set # # Console display driver support @@ -1066,6 +1062,7 @@ CONFIG_AC97_BUS=m CONFIG_HID_SUPPORT=y CONFIG_HID=y # CONFIG_HID_DEBUG is not set +# CONFIG_HIDRAW is not set # # USB Input Devices @@ -1187,19 +1184,6 @@ CONFIG_USB_STORAGE=m # CONFIG_RTC_CLASS is not set # -# DMA Engine support -# -# CONFIG_DMA_ENGINE is not set - -# -# DMA Clients -# - -# -# DMA Devices -# - -# # Userspace I/O # # CONFIG_UIO is not set @@ -1275,7 +1259,6 @@ CONFIG_TMPFS=y # CONFIG_TMPFS_POSIX_ACL is not set CONFIG_HUGETLBFS=y CONFIG_HUGETLB_PAGE=y -CONFIG_RAMFS=y # CONFIG_CONFIGFS_FS is not set # @@ -1295,10 +1278,7 @@ CONFIG_RAMFS=y # CONFIG_QNX4FS_FS is not set # CONFIG_SYSV_FS is not set # CONFIG_UFS_FS is not set - -# -# Network File Systems -# +CONFIG_NETWORK_FILESYSTEMS=y # CONFIG_NFS_FS is not set # CONFIG_NFSD is not set # CONFIG_SMB_FS is not set @@ -1313,10 +1293,6 @@ CONFIG_RAMFS=y # CONFIG_PARTITION_ADVANCED is not set CONFIG_MSDOS_PARTITION=y CONFIG_SUN_PARTITION=y - -# -# Native Language Support -# CONFIG_NLS=m CONFIG_NLS_DEFAULT="iso8859-1" # CONFIG_NLS_CODEPAGE_437 is not set @@ -1357,18 +1333,12 @@ CONFIG_NLS_DEFAULT="iso8859-1" # CONFIG_NLS_KOI8_R is not set # CONFIG_NLS_KOI8_U is not set # CONFIG_NLS_UTF8 is not set - -# -# Distributed Lock Manager -# # CONFIG_DLM is not set - -# -# Instrumentation Support -# +CONFIG_INSTRUMENTATION=y CONFIG_PROFILING=y CONFIG_OPROFILE=m CONFIG_KPROBES=y +# CONFIG_MARKERS is not set # # Kernel hacking @@ -1402,9 +1372,11 @@ CONFIG_DEBUG_BUGVERBOSE=y # CONFIG_DEBUG_VM is not set # CONFIG_DEBUG_LIST is not set CONFIG_FORCED_INLINING=y +# CONFIG_BOOT_PRINTK_DELAY is not set # CONFIG_RCU_TORTURE_TEST is not set # CONFIG_LKDTM is not set # CONFIG_FAULT_INJECTION is not set +# CONFIG_SAMPLES is not set # CONFIG_DEBUG_STACK_USAGE is not set # CONFIG_DEBUG_DCFLUSH is not set # CONFIG_STACK_DEBUG is not set @@ -1417,6 +1389,7 @@ CONFIG_FORCED_INLINING=y CONFIG_KEYS=y # CONFIG_KEYS_DEBUG_PROC_KEYS is not set # CONFIG_SECURITY 
is not set +# CONFIG_SECURITY_FILE_CAPABILITIES is not set CONFIG_XOR_BLOCKS=m CONFIG_ASYNC_CORE=m CONFIG_ASYNC_MEMCPY=m diff --git a/arch/sparc64/kernel/Makefile b/arch/sparc64/kernel/Makefile index 112c46e6657..ef50d217432 100644 --- a/arch/sparc64/kernel/Makefile +++ b/arch/sparc64/kernel/Makefile @@ -39,12 +39,3 @@ else obj-y += sys_sunos32.o sunos_ioctl32.o endif endif - -ifneq ($(NEW_GCC),y) - CMODEL_CFLAG := -mmedlow -else - CMODEL_CFLAG := -m64 -mcmodel=medlow -endif - -head.o: head.S ttable.S itlb_miss.S dtlb_miss.S ktlb.S tsb.S \ - etrap.S rtrap.S winfixup.S entry.S diff --git a/arch/sparc64/kernel/iommu.c b/arch/sparc64/kernel/iommu.c index 29af777d7ac..070a4846c0c 100644 --- a/arch/sparc64/kernel/iommu.c +++ b/arch/sparc64/kernel/iommu.c @@ -472,8 +472,7 @@ static void dma_4u_unmap_single(struct device *dev, dma_addr_t bus_addr, spin_unlock_irqrestore(&iommu->lock, flags); } -#define SG_ENT_PHYS_ADDRESS(SG) \ - (__pa(page_address((SG)->page)) + (SG)->offset) +#define SG_ENT_PHYS_ADDRESS(SG) (__pa(sg_virt((SG)))) static void fill_sg(iopte_t *iopte, struct scatterlist *sg, int nused, int nelems, @@ -565,9 +564,7 @@ static int dma_4u_map_sg(struct device *dev, struct scatterlist *sglist, /* Fast path single entry scatterlists. */ if (nelems == 1) { sglist->dma_address = - dma_4u_map_single(dev, - (page_address(sglist->page) + - sglist->offset), + dma_4u_map_single(dev, sg_virt(sglist), sglist->length, direction); if (unlikely(sglist->dma_address == DMA_ERROR_CODE)) return 0; diff --git a/arch/sparc64/kernel/iommu_common.c b/arch/sparc64/kernel/iommu_common.c index d7ca900ec51..b70324e0d83 100644 --- a/arch/sparc64/kernel/iommu_common.c +++ b/arch/sparc64/kernel/iommu_common.c @@ -73,7 +73,7 @@ static int verify_one_map(struct scatterlist *dma_sg, struct scatterlist **__sg, daddr = dma_sg->dma_address; sglen = sg->length; - sgaddr = (unsigned long) (page_address(sg->page) + sg->offset); + sgaddr = (unsigned long) sg_virt(sg); while (dlen > 0) { unsigned long paddr; @@ -123,7 +123,7 @@ static int verify_one_map(struct scatterlist *dma_sg, struct scatterlist **__sg, sg = sg_next(sg); if (--nents <= 0) break; - sgaddr = (unsigned long) (page_address(sg->page) + sg->offset); + sgaddr = (unsigned long) sg_virt(sg); sglen = sg->length; } if (dlen < 0) { @@ -191,7 +191,7 @@ void verify_sglist(struct scatterlist *sglist, int nents, iopte_t *iopte, int np printk("sg(%d): page_addr(%p) off(%x) length(%x) " "dma_address[%016x] dma_length[%016x]\n", i, - page_address(sg->page), sg->offset, + page_address(sg_page(sg)), sg->offset, sg->length, sg->dma_address, sg->dma_length); } @@ -207,15 +207,14 @@ unsigned long prepare_sg(struct scatterlist *sg, int nents) unsigned long prev; u32 dent_addr, dent_len; - prev = (unsigned long) (page_address(sg->page) + sg->offset); + prev = (unsigned long) sg_virt(sg); prev += (unsigned long) (dent_len = sg->length); - dent_addr = (u32) ((unsigned long)(page_address(sg->page) + sg->offset) - & (IO_PAGE_SIZE - 1UL)); + dent_addr = (u32) ((unsigned long)(sg_virt(sg)) & (IO_PAGE_SIZE - 1UL)); while (--nents) { unsigned long addr; sg = sg_next(sg); - addr = (unsigned long) (page_address(sg->page) + sg->offset); + addr = (unsigned long) sg_virt(sg); if (! 
VCONTIG(prev, addr)) { dma_sg->dma_address = dent_addr; dma_sg->dma_length = dent_len; @@ -234,6 +233,11 @@ unsigned long prepare_sg(struct scatterlist *sg, int nents) dma_sg->dma_address = dent_addr; dma_sg->dma_length = dent_len; + if (dma_sg != sg) { + dma_sg = next_sg(dma_sg); + dma_sg->dma_length = 0; + } + return ((unsigned long) dent_addr + (unsigned long) dent_len + (IO_PAGE_SIZE - 1UL)) >> IO_PAGE_SHIFT; diff --git a/arch/sparc64/kernel/irq.c b/arch/sparc64/kernel/irq.c index 2c3bea22815..30431bd24e1 100644 --- a/arch/sparc64/kernel/irq.c +++ b/arch/sparc64/kernel/irq.c @@ -257,8 +257,8 @@ struct irq_handler_data { unsigned long imap; void (*pre_handler)(unsigned int, void *, void *); - void *pre_handler_arg1; - void *pre_handler_arg2; + void *arg1; + void *arg2; }; #ifdef CONFIG_SMP @@ -346,7 +346,7 @@ static void sun4u_irq_disable(unsigned int virt_irq) } } -static void sun4u_irq_end(unsigned int virt_irq) +static void sun4u_irq_eoi(unsigned int virt_irq) { struct irq_handler_data *data = get_irq_chip_data(virt_irq); struct irq_desc *desc = irq_desc + virt_irq; @@ -401,7 +401,7 @@ static void sun4v_irq_disable(unsigned int virt_irq) "err(%d)\n", ino, err); } -static void sun4v_irq_end(unsigned int virt_irq) +static void sun4v_irq_eoi(unsigned int virt_irq) { unsigned int ino = virt_irq_table[virt_irq].dev_ino; struct irq_desc *desc = irq_desc + virt_irq; @@ -478,7 +478,7 @@ static void sun4v_virq_disable(unsigned int virt_irq) dev_handle, dev_ino, err); } -static void sun4v_virq_end(unsigned int virt_irq) +static void sun4v_virq_eoi(unsigned int virt_irq) { struct irq_desc *desc = irq_desc + virt_irq; unsigned long dev_handle, dev_ino; @@ -498,33 +498,11 @@ static void sun4v_virq_end(unsigned int virt_irq) dev_handle, dev_ino, err); } -static void run_pre_handler(unsigned int virt_irq) -{ - struct irq_handler_data *data = get_irq_chip_data(virt_irq); - unsigned int ino; - - ino = virt_irq_table[virt_irq].dev_ino; - if (likely(data->pre_handler)) { - data->pre_handler(ino, - data->pre_handler_arg1, - data->pre_handler_arg2); - } -} - static struct irq_chip sun4u_irq = { .typename = "sun4u", .enable = sun4u_irq_enable, .disable = sun4u_irq_disable, - .end = sun4u_irq_end, - .set_affinity = sun4u_set_affinity, -}; - -static struct irq_chip sun4u_irq_ack = { - .typename = "sun4u+ack", - .enable = sun4u_irq_enable, - .disable = sun4u_irq_disable, - .ack = run_pre_handler, - .end = sun4u_irq_end, + .eoi = sun4u_irq_eoi, .set_affinity = sun4u_set_affinity, }; @@ -532,7 +510,7 @@ static struct irq_chip sun4v_irq = { .typename = "sun4v", .enable = sun4v_irq_enable, .disable = sun4v_irq_disable, - .end = sun4v_irq_end, + .eoi = sun4v_irq_eoi, .set_affinity = sun4v_set_affinity, }; @@ -540,31 +518,33 @@ static struct irq_chip sun4v_virq = { .typename = "vsun4v", .enable = sun4v_virq_enable, .disable = sun4v_virq_disable, - .end = sun4v_virq_end, + .eoi = sun4v_virq_eoi, .set_affinity = sun4v_virt_set_affinity, }; +static void fastcall pre_flow_handler(unsigned int virt_irq, + struct irq_desc *desc) +{ + struct irq_handler_data *data = get_irq_chip_data(virt_irq); + unsigned int ino = virt_irq_table[virt_irq].dev_ino; + + data->pre_handler(ino, data->arg1, data->arg2); + + handle_fasteoi_irq(virt_irq, desc); +} + void irq_install_pre_handler(int virt_irq, void (*func)(unsigned int, void *, void *), void *arg1, void *arg2) { struct irq_handler_data *data = get_irq_chip_data(virt_irq); - struct irq_chip *chip = get_irq_chip(virt_irq); - - if (WARN_ON(chip == &sun4v_irq || chip == 
&sun4v_virq)) { - printk(KERN_ERR "IRQ: Trying to install pre-handler on " - "sun4v irq %u\n", virt_irq); - return; - } + struct irq_desc *desc = irq_desc + virt_irq; data->pre_handler = func; - data->pre_handler_arg1 = arg1; - data->pre_handler_arg2 = arg2; - - if (chip == &sun4u_irq_ack) - return; + data->arg1 = arg1; + data->arg2 = arg2; - set_irq_chip(virt_irq, &sun4u_irq_ack); + desc->handle_irq = pre_flow_handler; } unsigned int build_irq(int inofixup, unsigned long iclr, unsigned long imap) @@ -582,7 +562,10 @@ unsigned int build_irq(int inofixup, unsigned long iclr, unsigned long imap) if (!virt_irq) { virt_irq = virt_irq_alloc(0, ino); bucket_set_virt_irq(__pa(bucket), virt_irq); - set_irq_chip(virt_irq, &sun4u_irq); + set_irq_chip_and_handler_name(virt_irq, + &sun4u_irq, + handle_fasteoi_irq, + "IVEC"); } data = get_irq_chip_data(virt_irq); @@ -617,7 +600,9 @@ static unsigned int sun4v_build_common(unsigned long sysino, if (!virt_irq) { virt_irq = virt_irq_alloc(0, sysino); bucket_set_virt_irq(__pa(bucket), virt_irq); - set_irq_chip(virt_irq, chip); + set_irq_chip_and_handler_name(virt_irq, chip, + handle_fasteoi_irq, + "IVEC"); } data = get_irq_chip_data(virt_irq); @@ -665,7 +650,10 @@ unsigned int sun4v_build_virq(u32 devhandle, unsigned int devino) virt_irq = virt_irq_alloc(devhandle, devino); bucket_set_virt_irq(__pa(bucket), virt_irq); - set_irq_chip(virt_irq, &sun4v_virq); + + set_irq_chip_and_handler_name(virt_irq, &sun4v_virq, + handle_fasteoi_irq, + "IVEC"); data = kzalloc(sizeof(struct irq_handler_data), GFP_ATOMIC); if (unlikely(!data)) @@ -724,6 +712,7 @@ void handler_irq(int irq, struct pt_regs *regs) : "memory"); while (bucket_pa) { + struct irq_desc *desc; unsigned long next_pa; unsigned int virt_irq; @@ -731,7 +720,9 @@ void handler_irq(int irq, struct pt_regs *regs) virt_irq = bucket_get_virt_irq(bucket_pa); bucket_clear_chain_pa(bucket_pa); - __do_IRQ(virt_irq); + desc = irq_desc + virt_irq; + + desc->handle_irq(virt_irq, desc); bucket_pa = next_pa; } diff --git a/arch/sparc64/kernel/ldc.c b/arch/sparc64/kernel/ldc.c index 85a2be0b096..c8313cb60f0 100644 --- a/arch/sparc64/kernel/ldc.c +++ b/arch/sparc64/kernel/ldc.c @@ -2057,7 +2057,7 @@ static void fill_cookies(struct cookie_state *sp, unsigned long pa, static int sg_count_one(struct scatterlist *sg) { - unsigned long base = page_to_pfn(sg->page) << PAGE_SHIFT; + unsigned long base = page_to_pfn(sg_page(sg)) << PAGE_SHIFT; long len = sg->length; if ((sg->offset | len) & (8UL - 1)) diff --git a/arch/sparc64/kernel/pci.c b/arch/sparc64/kernel/pci.c index 9b808640a19..63b3ebc0c3c 100644 --- a/arch/sparc64/kernel/pci.c +++ b/arch/sparc64/kernel/pci.c @@ -207,8 +207,7 @@ static struct { { "SUNW,sun4v-pci", sun4v_pci_init }, { "pciex108e,80f0", fire_pci_init }, }; -#define PCI_NUM_CONTROLLER_TYPES (sizeof(pci_controller_table) / \ - sizeof(pci_controller_table[0])) +#define PCI_NUM_CONTROLLER_TYPES ARRAY_SIZE(pci_controller_table) static int __init pci_controller_init(const char *model_name, int namelen, struct device_node *dp) { diff --git a/arch/sparc64/kernel/pci_msi.c b/arch/sparc64/kernel/pci_msi.c index 31a165fd3e4..d6d64b44af6 100644 --- a/arch/sparc64/kernel/pci_msi.c +++ b/arch/sparc64/kernel/pci_msi.c @@ -28,8 +28,15 @@ static irqreturn_t sparc64_msiq_interrupt(int irq, void *cookie) unsigned long msi; err = ops->dequeue_msi(pbm, msiqid, &head, &msi); - if (likely(err > 0)) - __do_IRQ(pbm->msi_irq_table[msi - pbm->msi_first]); + if (likely(err > 0)) { + struct irq_desc *desc; + unsigned int virt_irq; + + 
virt_irq = pbm->msi_irq_table[msi - pbm->msi_first]; + desc = irq_desc + virt_irq; + + desc->handle_irq(virt_irq, desc); + } if (unlikely(err < 0)) goto err_dequeue; @@ -128,7 +135,8 @@ int sparc64_setup_msi_irq(unsigned int *virt_irq_p, if (!*virt_irq_p) goto out_err; - set_irq_chip(*virt_irq_p, &msi_irq); + set_irq_chip_and_handler_name(*virt_irq_p, &msi_irq, + handle_simple_irq, "MSI"); err = alloc_msi(pbm); if (unlikely(err < 0)) diff --git a/arch/sparc64/kernel/pci_sun4v.c b/arch/sparc64/kernel/pci_sun4v.c index fe46ace3e59..8c4875bdb4a 100644 --- a/arch/sparc64/kernel/pci_sun4v.c +++ b/arch/sparc64/kernel/pci_sun4v.c @@ -365,8 +365,7 @@ static void dma_4v_unmap_single(struct device *dev, dma_addr_t bus_addr, spin_unlock_irqrestore(&iommu->lock, flags); } -#define SG_ENT_PHYS_ADDRESS(SG) \ - (__pa(page_address((SG)->page)) + (SG)->offset) +#define SG_ENT_PHYS_ADDRESS(SG) (__pa(sg_virt((SG)))) static long fill_sg(long entry, struct device *dev, struct scatterlist *sg, @@ -477,9 +476,7 @@ static int dma_4v_map_sg(struct device *dev, struct scatterlist *sglist, /* Fast path single entry scatterlists. */ if (nelems == 1) { sglist->dma_address = - dma_4v_map_single(dev, - (page_address(sglist->page) + - sglist->offset), + dma_4v_map_single(dev, sg_virt(sglist), sglist->length, direction); if (unlikely(sglist->dma_address == DMA_ERROR_CODE)) return 0; diff --git a/arch/sparc64/math-emu/Makefile b/arch/sparc64/math-emu/Makefile index a0b06fd2946..cc5cb9baf6a 100644 --- a/arch/sparc64/math-emu/Makefile +++ b/arch/sparc64/math-emu/Makefile @@ -4,4 +4,4 @@ obj-y := math.o -EXTRA_CFLAGS = -I. -Iinclude/math-emu -w +EXTRA_CFLAGS = -Iinclude/math-emu -w diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c index 25b248a0250..3a8cd3dfb51 100644 --- a/arch/um/drivers/ubd_kern.c +++ b/arch/um/drivers/ubd_kern.c @@ -1115,7 +1115,7 @@ static void do_ubd_request(struct request_queue *q) } prepare_request(req, io_req, (unsigned long long) req->sector << 9, - sg->offset, sg->length, sg->page); + sg->offset, sg->length, sg_page(sg)); last_sectors = sg->length >> 9; n = os_write_file(thread_fd, &io_req, diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S index f35ea223752..a0ae2e7f6ce 100644 --- a/arch/x86/boot/compressed/head_32.S +++ b/arch/x86/boot/compressed/head_32.S @@ -27,13 +27,22 @@ #include <asm/segment.h> #include <asm/page.h> #include <asm/boot.h> +#include <asm/asm-offsets.h> .section ".text.head","ax",@progbits .globl startup_32 startup_32: - cld - cli + /* check to see if KEEP_SEGMENTS flag is meaningful */ + cmpw $0x207, BP_version(%esi) + jb 1f + + /* test KEEP_SEGMENTS flag to see if the bootloader is asking + * us to not reload segments */ + testb $(1<<6), BP_loadflags(%esi) + jnz 2f + +1: cli movl $(__BOOT_DS),%eax movl %eax,%ds movl %eax,%es @@ -41,6 +50,8 @@ startup_32: movl %eax,%gs movl %eax,%ss +2: cld + /* Calculate the delta between where we were compiled to run * at and where we were actually loaded at. This can only be done * with a short local call on x86. 
Nothing else will tell us what diff --git a/arch/x86/boot/compressed/misc_32.c b/arch/x86/boot/compressed/misc_32.c index 1dc1e19c0a9..b74d60d1b2f 100644 --- a/arch/x86/boot/compressed/misc_32.c +++ b/arch/x86/boot/compressed/misc_32.c @@ -247,6 +247,9 @@ static void putstr(const char *s) int x,y,pos; char c; + if (RM_SCREEN_INFO.orig_video_mode == 0 && lines == 0 && cols == 0) + return; + x = RM_SCREEN_INFO.orig_x; y = RM_SCREEN_INFO.orig_y; diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S index f3140e596d4..8353c81c41c 100644 --- a/arch/x86/boot/header.S +++ b/arch/x86/boot/header.S @@ -119,7 +119,7 @@ _start: # Part 2 of the header, from the old setup.S .ascii "HdrS" # header signature - .word 0x0206 # header version number (>= 0x0105) + .word 0x0207 # header version number (>= 0x0105) # or else old loadlin-1.5 will fail) .globl realmode_swtch realmode_swtch: .word 0, 0 # default_switch, SETUPSEG @@ -214,6 +214,11 @@ cmdline_size: .long COMMAND_LINE_SIZE-1 #length of the command line, #added with boot protocol #version 2.06 +hardware_subarch: .long 0 # subarchitecture, added with 2.07 + # default to 0 for normal x86 PC + +hardware_subarch_data: .quad 0 + # End of setup header ##################################################### .section ".inittext", "ax" diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c index f1b7cdda82b..f8764716b0c 100644 --- a/arch/x86/kernel/asm-offsets_32.c +++ b/arch/x86/kernel/asm-offsets_32.c @@ -15,6 +15,7 @@ #include <asm/fixmap.h> #include <asm/processor.h> #include <asm/thread_info.h> +#include <asm/bootparam.h> #include <asm/elf.h> #include <xen/interface/xen.h> @@ -146,4 +147,10 @@ void foo(void) OFFSET(LGUEST_PAGES_regs_errcode, lguest_pages, regs.errcode); OFFSET(LGUEST_PAGES_regs, lguest_pages, regs); #endif + + BLANK(); + OFFSET(BP_scratch, boot_params, scratch); + OFFSET(BP_loadflags, boot_params, hdr.loadflags); + OFFSET(BP_hardware_subarch, boot_params, hdr.hardware_subarch); + OFFSET(BP_version, boot_params, hdr.version); } diff --git a/arch/x86/kernel/e820_32.c b/arch/x86/kernel/e820_32.c index 58fd54eb557..18f500d185a 100644 --- a/arch/x86/kernel/e820_32.c +++ b/arch/x86/kernel/e820_32.c @@ -51,6 +51,13 @@ struct resource code_resource = { .flags = IORESOURCE_BUSY | IORESOURCE_MEM }; +struct resource bss_resource = { + .name = "Kernel bss", + .start = 0, + .end = 0, + .flags = IORESOURCE_BUSY | IORESOURCE_MEM +}; + static struct resource system_rom_resource = { .name = "System ROM", .start = 0xf0000, @@ -254,7 +261,9 @@ static void __init probe_roms(void) * and also for regions reported as reserved by the e820. 
*/ static void __init -legacy_init_iomem_resources(struct resource *code_resource, struct resource *data_resource) +legacy_init_iomem_resources(struct resource *code_resource, + struct resource *data_resource, + struct resource *bss_resource) { int i; @@ -287,6 +296,7 @@ legacy_init_iomem_resources(struct resource *code_resource, struct resource *dat */ request_resource(res, code_resource); request_resource(res, data_resource); + request_resource(res, bss_resource); #ifdef CONFIG_KEXEC if (crashk_res.start != crashk_res.end) request_resource(res, &crashk_res); @@ -307,9 +317,11 @@ static int __init request_standard_resources(void) printk("Setting up standard PCI resources\n"); if (efi_enabled) - efi_initialize_iomem_resources(&code_resource, &data_resource); + efi_initialize_iomem_resources(&code_resource, + &data_resource, &bss_resource); else - legacy_init_iomem_resources(&code_resource, &data_resource); + legacy_init_iomem_resources(&code_resource, + &data_resource, &bss_resource); /* EFI systems may still have VGA */ request_resource(&iomem_resource, &video_ram_resource); diff --git a/arch/x86/kernel/e820_64.c b/arch/x86/kernel/e820_64.c index 57616865d8a..04698e0b056 100644 --- a/arch/x86/kernel/e820_64.c +++ b/arch/x86/kernel/e820_64.c @@ -47,7 +47,7 @@ unsigned long end_pfn_map; */ static unsigned long __initdata end_user_pfn = MAXMEM>>PAGE_SHIFT; -extern struct resource code_resource, data_resource; +extern struct resource code_resource, data_resource, bss_resource; /* Check for some hardcoded bad areas that early boot is not allowed to touch */ static inline int bad_addr(unsigned long *addrp, unsigned long size) @@ -225,6 +225,7 @@ void __init e820_reserve_resources(void) */ request_resource(res, &code_resource); request_resource(res, &data_resource); + request_resource(res, &bss_resource); #ifdef CONFIG_KEXEC if (crashk_res.start != crashk_res.end) request_resource(res, &crashk_res); @@ -729,3 +730,22 @@ __init void e820_setup_gap(void) printk(KERN_INFO "Allocating PCI resources starting at %lx (gap: %lx:%lx)\n", pci_mem_start, gapstart, gapsize); } + +int __init arch_get_ram_range(int slot, u64 *addr, u64 *size) +{ + int i; + + if (slot < 0 || slot >= e820.nr_map) + return -1; + for (i = slot; i < e820.nr_map; i++) { + if (e820.map[i].type != E820_RAM) + continue; + break; + } + if (i == e820.nr_map || e820.map[i].addr > (max_pfn << PAGE_SHIFT)) + return -1; + *addr = e820.map[i].addr; + *size = min_t(u64, e820.map[i].size + e820.map[i].addr, + max_pfn << PAGE_SHIFT) - *addr; + return i + 1; +} diff --git a/arch/x86/kernel/efi_32.c b/arch/x86/kernel/efi_32.c index b42558c48e9..e2be78f4939 100644 --- a/arch/x86/kernel/efi_32.c +++ b/arch/x86/kernel/efi_32.c @@ -603,7 +603,8 @@ void __init efi_enter_virtual_mode(void) void __init efi_initialize_iomem_resources(struct resource *code_resource, - struct resource *data_resource) + struct resource *data_resource, + struct resource *bss_resource) { struct resource *res; efi_memory_desc_t *md; @@ -675,6 +676,7 @@ efi_initialize_iomem_resources(struct resource *code_resource, if (md->type == EFI_CONVENTIONAL_MEMORY) { request_resource(res, code_resource); request_resource(res, data_resource); + request_resource(res, bss_resource); #ifdef CONFIG_KEXEC request_resource(res, &crashk_res); #endif diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S index 39677965e16..00b1c2c5645 100644 --- a/arch/x86/kernel/head_32.S +++ b/arch/x86/kernel/head_32.S @@ -79,22 +79,30 @@ INIT_MAP_BEYOND_END = BOOTBITMAP_SIZE + (PAGE_TABLE_SIZE + 
ALLOCATOR_SLOP)*PAGE_ */ .section .text.head,"ax",@progbits ENTRY(startup_32) + /* check to see if KEEP_SEGMENTS flag is meaningful */ + cmpw $0x207, BP_version(%esi) + jb 1f + + /* test KEEP_SEGMENTS flag to see if the bootloader is asking + us to not reload segments */ + testb $(1<<6), BP_loadflags(%esi) + jnz 2f /* * Set segments to known values. */ - cld - lgdt boot_gdt_descr - __PAGE_OFFSET +1: lgdt boot_gdt_descr - __PAGE_OFFSET movl $(__BOOT_DS),%eax movl %eax,%ds movl %eax,%es movl %eax,%fs movl %eax,%gs +2: /* * Clear BSS first so that there are no surprises... - * No need to cld as DF is already clear from cld above... */ + cld xorl %eax,%eax movl $__bss_start - __PAGE_OFFSET,%edi movl $__bss_stop - __PAGE_OFFSET,%ecx @@ -128,6 +136,35 @@ ENTRY(startup_32) movsl 1: +#ifdef CONFIG_PARAVIRT + cmpw $0x207, (boot_params + BP_version - __PAGE_OFFSET) + jb default_entry + + /* Paravirt-compatible boot parameters. Look to see what architecture + we're booting under. */ + movl (boot_params + BP_hardware_subarch - __PAGE_OFFSET), %eax + cmpl $num_subarch_entries, %eax + jae bad_subarch + + movl subarch_entries - __PAGE_OFFSET(,%eax,4), %eax + subl $__PAGE_OFFSET, %eax + jmp *%eax + +bad_subarch: +WEAK(lguest_entry) +WEAK(xen_entry) + /* Unknown implementation; there's really + nothing we can do at this point. */ + ud2a +.data +subarch_entries: + .long default_entry /* normal x86/PC */ + .long lguest_entry /* lguest hypervisor */ + .long xen_entry /* Xen hypervisor */ +num_subarch_entries = (. - subarch_entries) / 4 +.previous +#endif /* CONFIG_PARAVIRT */ + /* * Initialize page tables. This creates a PDE and a set of page * tables, which are located immediately beyond _end. The variable @@ -140,6 +177,7 @@ ENTRY(startup_32) */ page_pde_offset = (__PAGE_OFFSET >> 20); +default_entry: movl $(pg0 - __PAGE_OFFSET), %edi movl $(swapper_pg_dir - __PAGE_OFFSET), %edx movl $0x007, %eax /* 0x007 = PRESENT+RW+USER */ diff --git a/arch/x86/kernel/io_apic_64.c b/arch/x86/kernel/io_apic_64.c index b3c2d268d70..953328b55a3 100644 --- a/arch/x86/kernel/io_apic_64.c +++ b/arch/x86/kernel/io_apic_64.c @@ -31,6 +31,7 @@ #include <linux/sysdev.h> #include <linux/msi.h> #include <linux/htirq.h> +#include <linux/dmar.h> #ifdef CONFIG_ACPI #include <acpi/acpi_bus.h> #endif @@ -2031,8 +2032,64 @@ void arch_teardown_msi_irq(unsigned int irq) destroy_irq(irq); } -#endif /* CONFIG_PCI_MSI */ +#ifdef CONFIG_DMAR +#ifdef CONFIG_SMP +static void dmar_msi_set_affinity(unsigned int irq, cpumask_t mask) +{ + struct irq_cfg *cfg = irq_cfg + irq; + struct msi_msg msg; + unsigned int dest; + cpumask_t tmp; + + cpus_and(tmp, mask, cpu_online_map); + if (cpus_empty(tmp)) + return; + + if (assign_irq_vector(irq, mask)) + return; + + cpus_and(tmp, cfg->domain, mask); + dest = cpu_mask_to_apicid(tmp); + + dmar_msi_read(irq, &msg); + + msg.data &= ~MSI_DATA_VECTOR_MASK; + msg.data |= MSI_DATA_VECTOR(cfg->vector); + msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK; + msg.address_lo |= MSI_ADDR_DEST_ID(dest); + + dmar_msi_write(irq, &msg); + irq_desc[irq].affinity = mask; +} +#endif /* CONFIG_SMP */ + +struct irq_chip dmar_msi_type = { + .name = "DMAR_MSI", + .unmask = dmar_msi_unmask, + .mask = dmar_msi_mask, + .ack = ack_apic_edge, +#ifdef CONFIG_SMP + .set_affinity = dmar_msi_set_affinity, +#endif + .retrigger = ioapic_retrigger_irq, +}; + +int arch_setup_dmar_msi(unsigned int irq) +{ + int ret; + struct msi_msg msg; + + ret = msi_compose_msg(NULL, irq, &msg); + if (ret < 0) + return ret; + dmar_msi_write(irq, &msg); + 
set_irq_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq, + "edge"); + return 0; +} +#endif +#endif /* CONFIG_PCI_MSI */ /* * Hypertransport interrupt support */ diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c index 5098f58063a..1a20fe31338 100644 --- a/arch/x86/kernel/pci-calgary_64.c +++ b/arch/x86/kernel/pci-calgary_64.c @@ -411,8 +411,10 @@ static int calgary_nontranslate_map_sg(struct device* dev, int i; for_each_sg(sg, s, nelems, i) { - BUG_ON(!s->page); - s->dma_address = virt_to_bus(page_address(s->page) +s->offset); + struct page *p = sg_page(s); + + BUG_ON(!p); + s->dma_address = virt_to_bus(sg_virt(s)); s->dma_length = s->length; } return nelems; @@ -432,9 +434,9 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg, return calgary_nontranslate_map_sg(dev, sg, nelems, direction); for_each_sg(sg, s, nelems, i) { - BUG_ON(!s->page); + BUG_ON(!sg_page(s)); - vaddr = (unsigned long)page_address(s->page) + s->offset; + vaddr = (unsigned long) sg_virt(s); npages = num_dma_pages(vaddr, s->length); entry = iommu_range_alloc(tbl, npages); diff --git a/arch/x86/kernel/pci-dma_64.c b/arch/x86/kernel/pci-dma_64.c index afaf9f12c03..393e2725a6e 100644 --- a/arch/x86/kernel/pci-dma_64.c +++ b/arch/x86/kernel/pci-dma_64.c @@ -7,6 +7,7 @@ #include <linux/string.h> #include <linux/pci.h> #include <linux/module.h> +#include <linux/dmar.h> #include <asm/io.h> #include <asm/iommu.h> #include <asm/calgary.h> @@ -305,6 +306,8 @@ void __init pci_iommu_alloc(void) detect_calgary(); #endif + detect_intel_iommu(); + #ifdef CONFIG_SWIOTLB pci_swiotlb_init(); #endif @@ -316,6 +319,8 @@ static int __init pci_iommu_init(void) calgary_iommu_init(); #endif + intel_iommu_init(); + #ifdef CONFIG_IOMMU gart_iommu_init(); #endif diff --git a/arch/x86/kernel/pci-gart_64.c b/arch/x86/kernel/pci-gart_64.c index 5cdfab65e93..c56e9ee6496 100644 --- a/arch/x86/kernel/pci-gart_64.c +++ b/arch/x86/kernel/pci-gart_64.c @@ -302,7 +302,7 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg, #endif for_each_sg(sg, s, nents, i) { - unsigned long addr = page_to_phys(s->page) + s->offset; + unsigned long addr = sg_phys(s); if (nonforced_iommu(dev, addr, s->length)) { addr = dma_map_area(dev, addr, s->length, dir); if (addr == bad_dma_address) { @@ -397,7 +397,7 @@ static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, start_sg = sgmap = sg; ps = NULL; /* shut up gcc */ for_each_sg(sg, s, nents, i) { - dma_addr_t addr = page_to_phys(s->page) + s->offset; + dma_addr_t addr = sg_phys(s); s->dma_address = addr; BUG_ON(s->length == 0); diff --git a/arch/x86/kernel/pci-nommu_64.c b/arch/x86/kernel/pci-nommu_64.c index e85d4360360..faf70bdca33 100644 --- a/arch/x86/kernel/pci-nommu_64.c +++ b/arch/x86/kernel/pci-nommu_64.c @@ -62,8 +62,8 @@ static int nommu_map_sg(struct device *hwdev, struct scatterlist *sg, int i; for_each_sg(sg, s, nents, i) { - BUG_ON(!s->page); - s->dma_address = virt_to_bus(page_address(s->page) +s->offset); + BUG_ON(!sg_page(s)); + s->dma_address = virt_to_bus(sg_virt(s)); if (!check_addr("map_sg", hwdev, s->dma_address, s->length)) return 0; s->dma_length = s->length; diff --git a/arch/x86/kernel/setup_32.c b/arch/x86/kernel/setup_32.c index ba2e165a8a0..cc0e91447b7 100644 --- a/arch/x86/kernel/setup_32.c +++ b/arch/x86/kernel/setup_32.c @@ -60,6 +60,7 @@ #include <asm/vmi.h> #include <setup_arch.h> #include <bios_ebda.h> +#include <asm/cacheflush.h> /* This value is set up by the early boot code to point 
to the value immediately after the boot time page tables. It contains a *physical* @@ -73,6 +74,7 @@ int disable_pse __devinitdata = 0; */ extern struct resource code_resource; extern struct resource data_resource; +extern struct resource bss_resource; /* cpu data as detected by the assembly code in head.S */ struct cpuinfo_x86 new_cpu_data __cpuinitdata = { 0, 0, 0, 0, -1, 1, 0, 0, -1 }; @@ -600,6 +602,8 @@ void __init setup_arch(char **cmdline_p) code_resource.end = virt_to_phys(_etext)-1; data_resource.start = virt_to_phys(_etext); data_resource.end = virt_to_phys(_edata)-1; + bss_resource.start = virt_to_phys(&__bss_start); + bss_resource.end = virt_to_phys(&__bss_stop)-1; parse_early_param(); diff --git a/arch/x86/kernel/setup_64.c b/arch/x86/kernel/setup_64.c index 31322d42eaa..e7a9e36bd52 100644 --- a/arch/x86/kernel/setup_64.c +++ b/arch/x86/kernel/setup_64.c @@ -58,6 +58,7 @@ #include <asm/numa.h> #include <asm/sections.h> #include <asm/dmi.h> +#include <asm/cacheflush.h> /* * Machine setup.. */ @@ -133,6 +134,12 @@ struct resource code_resource = { .end = 0, .flags = IORESOURCE_RAM, }; +struct resource bss_resource = { + .name = "Kernel bss", + .start = 0, + .end = 0, + .flags = IORESOURCE_RAM, +}; #ifdef CONFIG_PROC_VMCORE /* elfcorehdr= specifies the location of elf core header @@ -276,6 +283,8 @@ void __init setup_arch(char **cmdline_p) code_resource.end = virt_to_phys(&_etext)-1; data_resource.start = virt_to_phys(&_etext); data_resource.end = virt_to_phys(&_edata)-1; + bss_resource.start = virt_to_phys(&__bss_start); + bss_resource.end = virt_to_phys(&__bss_stop)-1; early_identify_cpu(&boot_cpu_data); diff --git a/arch/x86/mm/pageattr_64.c b/arch/x86/mm/pageattr_64.c index c7b7dfe1d40..c40afbaaf93 100644 --- a/arch/x86/mm/pageattr_64.c +++ b/arch/x86/mm/pageattr_64.c @@ -61,10 +61,10 @@ static struct page *split_large_page(unsigned long address, pgprot_t prot, return base; } -static void cache_flush_page(void *adr) +void clflush_cache_range(void *adr, int size) { int i; - for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size) + for (i = 0; i < size; i += boot_cpu_data.x86_clflush_size) clflush(adr+i); } @@ -80,7 +80,7 @@ static void flush_kernel_map(void *arg) asm volatile("wbinvd" ::: "memory"); else list_for_each_entry(pg, l, lru) { void *adr = page_address(pg); - cache_flush_page(adr); + clflush_cache_range(adr, PAGE_SIZE); } __flush_tlb_all(); } diff --git a/arch/x86_64/Kconfig b/arch/x86_64/Kconfig index aab25f3ba3c..c2d24991bb2 100644 --- a/arch/x86_64/Kconfig +++ b/arch/x86_64/Kconfig @@ -750,6 +750,38 @@ config PCI_DOMAINS depends on PCI default y +config DMAR + bool "Support for DMA Remapping Devices (EXPERIMENTAL)" + depends on PCI_MSI && ACPI && EXPERIMENTAL + default y + help + DMA remapping (DMAR) device support enables independent address + translations for Direct Memory Access (DMA) from devices. + These DMA remapping devices are reported via ACPI tables + and include PCI device scope covered by these DMA + remapping devices. + +config DMAR_GFX_WA + bool "Support for Graphics workaround" + depends on DMAR + default y + help + Current graphics drivers tend to use physical addresses + for DMA and avoid using DMA APIs. Setting this config + option permits the IOMMU driver to set a unity map for + all the OS-visible memory. Hence the driver can continue + to use physical addresses for DMA. + +config DMAR_FLOPPY_WA + bool + depends on DMAR + default y + help + Floppy disk drivers are known to bypass DMA API calls + thereby failing to work when IOMMU is enabled. 
This + workaround will set up a 1:1 mapping for the first + 16M to make floppy (an ISA device) work. + source "drivers/pci/pcie/Kconfig" source "drivers/pci/Kconfig" diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c index 8025d646ab3..de5ba479c22 100644 --- a/block/ll_rw_blk.c +++ b/block/ll_rw_blk.c @@ -1351,11 +1351,22 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq, new_segment: if (!sg) sg = sglist; - else + else { + /* + * If the driver previously mapped a shorter + * list, we could see a termination bit + * prematurely unless it fully inits the sg + * table on each mapping. We KNOW that there + * must be more entries here or the driver + * would be buggy, so force clear the + * termination bit to avoid doing a full + * sg_init_table() in drivers for each command. + */ + sg->page_link &= ~0x02; sg = sg_next(sg); + } - memset(sg, 0, sizeof(*sg)); - sg->page = bvec->bv_page; + sg_set_page(sg, bvec->bv_page); sg->length = nbytes; sg->offset = bvec->bv_offset; nsegs++; @@ -1363,6 +1374,9 @@ new_segment: bvprv = bvec; } /* segments in rq */ + if (sg) + __sg_mark_end(sg); + return nsegs; } diff --git a/crypto/digest.c b/crypto/digest.c index e56de6748b1..8871dec8cae 100644 --- a/crypto/digest.c +++ b/crypto/digest.c @@ -41,7 +41,7 @@ static int update2(struct hash_desc *desc, return 0; for (;;) { - struct page *pg = sg->page; + struct page *pg = sg_page(sg); unsigned int offset = sg->offset; unsigned int l = sg->length; diff --git a/crypto/hmac.c b/crypto/hmac.c index 8802fb6dd5a..e4eb6ac53b5 100644 --- a/crypto/hmac.c +++ b/crypto/hmac.c @@ -159,7 +159,8 @@ static int hmac_digest(struct hash_desc *pdesc, struct scatterlist *sg, desc.flags = pdesc->flags & CRYPTO_TFM_REQ_MAY_SLEEP; sg_set_buf(sg1, ipad, bs); - sg1[1].page = (void *)sg; + + sg_set_page(&sg1[1], (void *) sg); sg1[1].length = 0; sg_set_buf(sg2, opad, bs + ds); diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c index d6852c33cfb..b9bbda0bb9f 100644 --- a/crypto/scatterwalk.c +++ b/crypto/scatterwalk.c @@ -54,7 +54,7 @@ static void scatterwalk_pagedone(struct scatter_walk *walk, int out, if (out) { struct page *page; - page = walk->sg->page + ((walk->offset - 1) >> PAGE_SHIFT); + page = sg_page(walk->sg) + ((walk->offset - 1) >> PAGE_SHIFT); flush_dcache_page(page); } diff --git a/crypto/scatterwalk.h b/crypto/scatterwalk.h index 9c73e37a42c..87ed681cceb 100644 --- a/crypto/scatterwalk.h +++ b/crypto/scatterwalk.h @@ -22,13 +22,13 @@ static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg) { - return (++sg)->length ? sg : (void *)sg->page; + return (++sg)->length ? 
sg : (void *) sg_page(sg); } static inline unsigned long scatterwalk_samebuf(struct scatter_walk *walk_in, struct scatter_walk *walk_out) { - return !(((walk_in->sg->page - walk_out->sg->page) << PAGE_SHIFT) + + return !(((sg_page(walk_in->sg) - sg_page(walk_out->sg)) << PAGE_SHIFT) + (int)(walk_in->offset - walk_out->offset)); } @@ -60,7 +60,7 @@ static inline unsigned int scatterwalk_aligned(struct scatter_walk *walk, static inline struct page *scatterwalk_page(struct scatter_walk *walk) { - return walk->sg->page + (walk->offset >> PAGE_SHIFT); + return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT); } static inline void scatterwalk_unmap(void *vaddr, int out) diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 18d489c8b93..d741c63af42 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -317,7 +317,7 @@ static void test_cipher(char *algo, int enc, goto out; } - q = kmap(sg[0].page) + sg[0].offset; + q = kmap(sg_page(&sg[0])) + sg[0].offset; hexdump(q, cipher_tv[i].rlen); printk("%s\n", @@ -390,7 +390,7 @@ static void test_cipher(char *algo, int enc, temp = 0; for (k = 0; k < cipher_tv[i].np; k++) { printk("page %u\n", k); - q = kmap(sg[k].page) + sg[k].offset; + q = kmap(sg_page(&sg[k])) + sg[k].offset; hexdump(q, cipher_tv[i].tap[k]); printk("%s\n", memcmp(q, cipher_tv[i].result + temp, diff --git a/crypto/xcbc.c b/crypto/xcbc.c index 9f502b86e0e..ac68f3b62fd 100644 --- a/crypto/xcbc.c +++ b/crypto/xcbc.c @@ -120,7 +120,7 @@ static int crypto_xcbc_digest_update2(struct hash_desc *pdesc, do { - struct page *pg = sg[i].page; + struct page *pg = sg_page(&sg[i]); unsigned int offset = sg[i].offset; unsigned int slen = sg[i].length; diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index 629eadbd0ec..69092bce1ad 100644 --- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -4296,7 +4296,7 @@ void ata_sg_clean(struct ata_queued_cmd *qc) sg_last(sg, qc->orig_n_elem)->length += qc->pad_len; if (pad_buf) { struct scatterlist *psg = &qc->pad_sgent; - void *addr = kmap_atomic(psg->page, KM_IRQ0); + void *addr = kmap_atomic(sg_page(psg), KM_IRQ0); memcpy(addr + psg->offset, pad_buf, qc->pad_len); kunmap_atomic(addr, KM_IRQ0); } @@ -4686,11 +4686,11 @@ static int ata_sg_setup(struct ata_queued_cmd *qc) * data in this function or read data in ata_sg_clean. 
*/ offset = lsg->offset + lsg->length - qc->pad_len; - psg->page = nth_page(lsg->page, offset >> PAGE_SHIFT); + sg_set_page(psg, nth_page(sg_page(lsg), offset >> PAGE_SHIFT)); psg->offset = offset_in_page(offset); if (qc->tf.flags & ATA_TFLAG_WRITE) { - void *addr = kmap_atomic(psg->page, KM_IRQ0); + void *addr = kmap_atomic(sg_page(psg), KM_IRQ0); memcpy(pad_buf, addr + psg->offset, qc->pad_len); kunmap_atomic(addr, KM_IRQ0); } @@ -4836,7 +4836,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc) if (qc->curbytes == qc->nbytes - qc->sect_size) ap->hsm_task_state = HSM_ST_LAST; - page = qc->cursg->page; + page = sg_page(qc->cursg); offset = qc->cursg->offset + qc->cursg_ofs; /* get the current page and offset */ @@ -4988,7 +4988,7 @@ next_sg: sg = qc->cursg; - page = sg->page; + page = sg_page(sg); offset = sg->offset + qc->cursg_ofs; /* get the current page and offset */ diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c index 9fbb39cd0f5..5b758b9ad0b 100644 --- a/drivers/ata/libata-scsi.c +++ b/drivers/ata/libata-scsi.c @@ -1544,7 +1544,7 @@ static unsigned int ata_scsi_rbuf_get(struct scsi_cmnd *cmd, u8 **buf_out) struct scatterlist *sg = scsi_sglist(cmd); if (sg) { - buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + buf = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; buflen = sg->length; } else { buf = NULL; diff --git a/drivers/base/memory.c b/drivers/base/memory.c index c41d0728efe..7868707c7ed 100644 --- a/drivers/base/memory.c +++ b/drivers/base/memory.c @@ -137,7 +137,7 @@ static ssize_t show_mem_state(struct sys_device *dev, char *buf) return len; } -static inline int memory_notify(unsigned long val, void *v) +int memory_notify(unsigned long val, void *v) { return blocking_notifier_call_chain(&memory_chain, val, v); } @@ -183,7 +183,6 @@ memory_block_action(struct memory_block *mem, unsigned long action) break; case MEM_OFFLINE: mem->state = MEM_GOING_OFFLINE; - memory_notify(MEM_GOING_OFFLINE, NULL); start_paddr = page_to_pfn(first_page) << PAGE_SHIFT; ret = remove_memory(start_paddr, PAGES_PER_SECTION << PAGE_SHIFT); @@ -191,7 +190,6 @@ memory_block_action(struct memory_block *mem, unsigned long action) mem->state = old_state; break; } - memory_notify(MEM_MAPPING_INVALID, NULL); break; default: printk(KERN_WARNING "%s(%p, %ld) unknown action: %ld\n", @@ -199,11 +197,6 @@ memory_block_action(struct memory_block *mem, unsigned long action) WARN_ON(1); ret = -EINVAL; } - /* - * For now, only notify on successful memory operations - */ - if (!ret) - memory_notify(action, NULL); return ret; } diff --git a/drivers/block/DAC960.c b/drivers/block/DAC960.c index 84d6aa500e2..53505422867 100644 --- a/drivers/block/DAC960.c +++ b/drivers/block/DAC960.c @@ -345,6 +345,7 @@ static bool DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller) Command->V1.ScatterGatherList = (DAC960_V1_ScatterGatherSegment_T *)ScatterGatherCPU; Command->V1.ScatterGatherListDMA = ScatterGatherDMA; + sg_init_table(Command->cmd_sglist, DAC960_V1_ScatterGatherLimit); } else { Command->cmd_sglist = Command->V2.ScatterList; Command->V2.ScatterGatherList = @@ -353,6 +354,7 @@ static bool DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller) Command->V2.RequestSense = (DAC960_SCSI_RequestSense_T *)RequestSenseCPU; Command->V2.RequestSenseDMA = RequestSenseDMA; + sg_init_table(Command->cmd_sglist, DAC960_V2_ScatterGatherLimit); } } return true; diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c index 7c2cfde08f1..5a6fe17fc63 100644 --- a/drivers/block/cciss.c +++ 
b/drivers/block/cciss.c @@ -2610,7 +2610,7 @@ static void do_cciss_request(struct request_queue *q) (int)creq->nr_sectors); #endif /* CCISS_DEBUG */ - memset(tmp_sg, 0, sizeof(tmp_sg)); + sg_init_table(tmp_sg, MAXSGENTRIES); seg = blk_rq_map_sg(q, creq, tmp_sg); /* get the DMA records for the setup */ @@ -2621,7 +2621,7 @@ static void do_cciss_request(struct request_queue *q) for (i = 0; i < seg; i++) { c->SG[i].Len = tmp_sg[i].length; - temp64.val = (__u64) pci_map_page(h->pdev, tmp_sg[i].page, + temp64.val = (__u64) pci_map_page(h->pdev, sg_page(&tmp_sg[i]), tmp_sg[i].offset, tmp_sg[i].length, dir); c->SG[i].Addr.lower = temp64.val32.lower; diff --git a/drivers/block/cpqarray.c b/drivers/block/cpqarray.c index 568603d3043..efab27fa108 100644 --- a/drivers/block/cpqarray.c +++ b/drivers/block/cpqarray.c @@ -918,6 +918,7 @@ queue_next: DBGPX( printk("sector=%d, nr_sectors=%d\n", creq->sector, creq->nr_sectors); ); + sg_init_table(tmp_sg, SG_MAX); seg = blk_rq_map_sg(q, creq, tmp_sg); /* Now do all the DMA Mappings */ @@ -929,7 +930,7 @@ DBGPX( { c->req.sg[i].size = tmp_sg[i].length; c->req.sg[i].addr = (__u32) pci_map_page(h->pci_dev, - tmp_sg[i].page, + sg_page(&tmp_sg[i]), tmp_sg[i].offset, tmp_sg[i].length, dir); } diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c index 40535036e89..1b58b010797 100644 --- a/drivers/block/cryptoloop.c +++ b/drivers/block/cryptoloop.c @@ -26,6 +26,7 @@ #include <linux/crypto.h> #include <linux/blkdev.h> #include <linux/loop.h> +#include <linux/scatterlist.h> #include <asm/semaphore.h> #include <asm/uaccess.h> @@ -119,14 +120,17 @@ cryptoloop_transfer(struct loop_device *lo, int cmd, .tfm = tfm, .flags = CRYPTO_TFM_REQ_MAY_SLEEP, }; - struct scatterlist sg_out = { NULL, }; - struct scatterlist sg_in = { NULL, }; + struct scatterlist sg_out; + struct scatterlist sg_in; encdec_cbc_t encdecfunc; struct page *in_page, *out_page; unsigned in_offs, out_offs; int err; + sg_init_table(&sg_out, 1); + sg_init_table(&sg_in, 1); + if (cmd == READ) { in_page = raw_page; in_offs = raw_off; @@ -146,11 +150,11 @@ cryptoloop_transfer(struct loop_device *lo, int cmd, u32 iv[4] = { 0, }; iv[0] = cpu_to_le32(IV & 0xffffffff); - sg_in.page = in_page; + sg_set_page(&sg_in, in_page); sg_in.offset = in_offs; sg_in.length = sz; - sg_out.page = out_page; + sg_set_page(&sg_out, out_page); sg_out.offset = out_offs; sg_out.length = sz; diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c index 317a790c153..7276f7d207c 100644 --- a/drivers/block/sunvdc.c +++ b/drivers/block/sunvdc.c @@ -388,6 +388,7 @@ static int __send_request(struct request *req) op = VD_OP_BWRITE; } + sg_init_table(sg, port->ring_cookies); nsg = blk_rq_map_sg(req->q, req, sg); len = 0; diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c index 402209fec59..282a69558e8 100644 --- a/drivers/block/sx8.c +++ b/drivers/block/sx8.c @@ -522,6 +522,7 @@ static struct carm_request *carm_get_request(struct carm_host *host) host->n_msgs++; assert(host->n_msgs <= CARM_MAX_REQ); + sg_init_table(crq->sg, CARM_MAX_REQ_SG); return crq; } diff --git a/drivers/block/ub.c b/drivers/block/ub.c index c57dd2b3a0c..14143f2c484 100644 --- a/drivers/block/ub.c +++ b/drivers/block/ub.c @@ -25,6 +25,7 @@ #include <linux/usb_usual.h> #include <linux/blkdev.h> #include <linux/timer.h> +#include <linux/scatterlist.h> #include <scsi/scsi.h> #define DRV_NAME "ub" @@ -656,6 +657,7 @@ static int ub_request_fn_1(struct ub_lun *lun, struct request *rq) if ((cmd = ub_get_cmd(lun)) == NULL) return -1; memset(cmd, 0, 
sizeof(struct ub_scsi_cmd)); + sg_init_table(cmd->sgv, UB_MAX_REQ_SG); blkdev_dequeue_request(rq); @@ -1309,9 +1311,8 @@ static void ub_data_start(struct ub_dev *sc, struct ub_scsi_cmd *cmd) else pipe = sc->send_bulk_pipe; sc->last_pipe = pipe; - usb_fill_bulk_urb(&sc->work_urb, sc->dev, pipe, - page_address(sg->page) + sg->offset, sg->length, - ub_urb_complete, sc); + usb_fill_bulk_urb(&sc->work_urb, sc->dev, pipe, sg_virt(sg), + sg->length, ub_urb_complete, sc); sc->work_urb.actual_length = 0; sc->work_urb.error_count = 0; sc->work_urb.status = 0; @@ -1427,7 +1428,7 @@ static void ub_state_sense(struct ub_dev *sc, struct ub_scsi_cmd *cmd) scmd->state = UB_CMDST_INIT; scmd->nsg = 1; sg = &scmd->sgv[0]; - sg->page = virt_to_page(sc->top_sense); + sg_set_page(sg, virt_to_page(sc->top_sense)); sg->offset = (unsigned long)sc->top_sense & (PAGE_SIZE-1); sg->length = UB_SENSE_SIZE; scmd->len = UB_SENSE_SIZE; @@ -1863,7 +1864,7 @@ static int ub_sync_read_cap(struct ub_dev *sc, struct ub_lun *lun, cmd->state = UB_CMDST_INIT; cmd->nsg = 1; sg = &cmd->sgv[0]; - sg->page = virt_to_page(p); + sg_set_page(sg, virt_to_page(p)); sg->offset = (unsigned long)p & (PAGE_SIZE-1); sg->length = 8; cmd->len = 8; diff --git a/drivers/block/viodasd.c b/drivers/block/viodasd.c index e824b672e05..ab5d404faa1 100644 --- a/drivers/block/viodasd.c +++ b/drivers/block/viodasd.c @@ -41,6 +41,7 @@ #include <linux/dma-mapping.h> #include <linux/completion.h> #include <linux/device.h> +#include <linux/scatterlist.h> #include <asm/uaccess.h> #include <asm/vio.h> @@ -270,6 +271,7 @@ static int send_request(struct request *req) d = req->rq_disk->private_data; /* Now build the scatter-gather list */ + sg_init_table(sg, VIOMAXBLOCKDMA); nsg = blk_rq_map_sg(req->q, req, sg); nsg = dma_map_sg(d->dev, sg, nsg, direction); diff --git a/drivers/bluetooth/Kconfig b/drivers/bluetooth/Kconfig index b9fbe6e7f9a..075598e1c50 100644 --- a/drivers/bluetooth/Kconfig +++ b/drivers/bluetooth/Kconfig @@ -22,6 +22,30 @@ config BT_HCIUSB_SCO Say Y here to compile support for SCO over HCI USB. +config BT_HCIBTUSB + tristate "HCI USB driver (alternate version)" + depends on USB && EXPERIMENTAL && BT_HCIUSB=n + help + Bluetooth HCI USB driver. + This driver is required if you want to use Bluetooth devices with + a USB interface. + + This driver is still experimental and has no SCO support. + + Say Y here to compile support for Bluetooth USB devices into the + kernel or say M to compile it as module (btusb). + +config BT_HCIBTSDIO + tristate "HCI SDIO driver" + depends on MMC + help + Bluetooth HCI SDIO driver. + This driver is required if you want to use Bluetooth devices with + an SDIO interface. + + Say Y here to compile support for Bluetooth SDIO devices into the + kernel or say M to compile it as module (btsdio). + config BT_HCIUART tristate "HCI UART driver" help @@ -55,6 +79,17 @@ config BT_HCIUART_BCSP Say Y here to compile support for HCI BCSP protocol. +config BT_HCIUART_LL + bool "HCILL protocol support" + depends on BT_HCIUART + help + HCILL (HCI Low Level) is a serial protocol for communication + between the Bluetooth device and the host. This protocol is required for + serial Bluetooth devices that are based on Texas Instruments' + BRF chips. + + Say Y here to compile support for HCILL protocol. 
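The btusb and btsdio drivers selected by the options above register with the HCI core through the same pattern, visible in their probe functions further down in this patch. The condensed C sketch below is illustrative only: the btfoo_* names and struct btfoo_data are hypothetical placeholders, the transport callbacks are assumed to be defined elsewhere, and error handling beyond registration is trimmed.

    #include <net/bluetooth/bluetooth.h>
    #include <net/bluetooth/hci_core.h>

    /* Hypothetical per-device driver state; real drivers add anchors,
     * work structs, queues and endpoint descriptors here. */
    struct btfoo_data {
            struct hci_dev *hdev;
    };

    static int btfoo_register(struct btfoo_data *data, struct device *parent)
    {
            struct hci_dev *hdev;
            int err;

            hdev = hci_alloc_dev();                 /* allocate the HCI device shell */
            if (!hdev)
                    return -ENOMEM;

            hdev->type = HCI_USB;                   /* or HCI_SDIO for the SDIO transport */
            hdev->driver_data = data;
            data->hdev = hdev;

            SET_HCIDEV_DEV(hdev, parent);           /* parent bus device for sysfs */

            hdev->open     = btfoo_open;            /* transport-specific callbacks */
            hdev->close    = btfoo_close;
            hdev->flush    = btfoo_flush;
            hdev->send     = btfoo_send_frame;
            hdev->destruct = btfoo_destruct;
            hdev->owner    = THIS_MODULE;

            err = hci_register_dev(hdev);           /* hand the device to the HCI core */
            if (err < 0) {
                    hci_free_dev(hdev);
                    return err;
            }

            return 0;
    }

The two transports then differ only in how those callbacks move data: btusb submits URBs kept on USB anchors, while btsdio queues frames and drains them from a workqueue via sdio_writesb().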
+ config BT_HCIBCM203X tristate "HCI BCM203x USB driver" depends on USB diff --git a/drivers/bluetooth/Makefile b/drivers/bluetooth/Makefile index 08c10e178e0..77444afbf10 100644 --- a/drivers/bluetooth/Makefile +++ b/drivers/bluetooth/Makefile @@ -13,7 +13,11 @@ obj-$(CONFIG_BT_HCIBT3C) += bt3c_cs.o obj-$(CONFIG_BT_HCIBLUECARD) += bluecard_cs.o obj-$(CONFIG_BT_HCIBTUART) += btuart_cs.o +obj-$(CONFIG_BT_HCIBTUSB) += btusb.o +obj-$(CONFIG_BT_HCIBTSDIO) += btsdio.o + hci_uart-y := hci_ldisc.o hci_uart-$(CONFIG_BT_HCIUART_H4) += hci_h4.o hci_uart-$(CONFIG_BT_HCIUART_BCSP) += hci_bcsp.o +hci_uart-$(CONFIG_BT_HCIUART_LL) += hci_ll.o hci_uart-objs := $(hci_uart-y) diff --git a/drivers/bluetooth/bluecard_cs.c b/drivers/bluetooth/bluecard_cs.c index 851de4d5b7d..bcf57927b7a 100644 --- a/drivers/bluetooth/bluecard_cs.c +++ b/drivers/bluetooth/bluecard_cs.c @@ -503,10 +503,7 @@ static irqreturn_t bluecard_interrupt(int irq, void *dev_inst) unsigned int iobase; unsigned char reg; - if (!info || !info->hdev) { - BT_ERR("Call of irq %d for unknown device", irq); - return IRQ_NONE; - } + BUG_ON(!info->hdev); if (!test_bit(CARD_READY, &(info->hw_state))) return IRQ_HANDLED; diff --git a/drivers/bluetooth/bpa10x.c b/drivers/bluetooth/bpa10x.c index e8ebd5d3de8..1375b5345a0 100644 --- a/drivers/bluetooth/bpa10x.c +++ b/drivers/bluetooth/bpa10x.c @@ -2,7 +2,7 @@ * * Digianswer Bluetooth USB driver * - * Copyright (C) 2004-2005 Marcel Holtmann <marcel@holtmann.org> + * Copyright (C) 2004-2007 Marcel Holtmann <marcel@holtmann.org> * * * This program is free software; you can redistribute it and/or modify @@ -21,13 +21,14 @@ * */ -#include <linux/module.h> - #include <linux/kernel.h> +#include <linux/module.h> #include <linux/init.h> #include <linux/slab.h> #include <linux/types.h> +#include <linux/sched.h> #include <linux/errno.h> +#include <linux/skbuff.h> #include <linux/usb.h> @@ -39,7 +40,7 @@ #define BT_DBG(D...) 
#endif -#define VERSION "0.8" +#define VERSION "0.9" static int ignore = 0; @@ -52,393 +53,285 @@ static struct usb_device_id bpa10x_table[] = { MODULE_DEVICE_TABLE(usb, bpa10x_table); -#define BPA10X_CMD_EP 0x00 -#define BPA10X_EVT_EP 0x81 -#define BPA10X_TX_EP 0x02 -#define BPA10X_RX_EP 0x82 - -#define BPA10X_CMD_BUF_SIZE 252 -#define BPA10X_EVT_BUF_SIZE 16 -#define BPA10X_TX_BUF_SIZE 384 -#define BPA10X_RX_BUF_SIZE 384 - struct bpa10x_data { - struct hci_dev *hdev; - struct usb_device *udev; + struct hci_dev *hdev; + struct usb_device *udev; - rwlock_t lock; + struct usb_anchor tx_anchor; + struct usb_anchor rx_anchor; - struct sk_buff_head cmd_queue; - struct urb *cmd_urb; - struct urb *evt_urb; - struct sk_buff *evt_skb; - unsigned int evt_len; - - struct sk_buff_head tx_queue; - struct urb *tx_urb; - struct urb *rx_urb; + struct sk_buff *rx_skb[2]; }; -#define HCI_VENDOR_HDR_SIZE 5 +#define HCI_VENDOR_HDR_SIZE 5 struct hci_vendor_hdr { - __u8 type; - __le16 snum; - __le16 dlen; + __u8 type; + __le16 snum; + __le16 dlen; } __attribute__ ((packed)); -static void bpa10x_recv_bulk(struct bpa10x_data *data, unsigned char *buf, int count) +static int bpa10x_recv(struct hci_dev *hdev, int queue, void *buf, int count) { - struct hci_acl_hdr *ah; - struct hci_sco_hdr *sh; - struct hci_vendor_hdr *vh; - struct sk_buff *skb; - int len; + struct bpa10x_data *data = hdev->driver_data; + + BT_DBG("%s queue %d buffer %p count %d", hdev->name, + queue, buf, count); + + if (queue < 0 || queue > 1) + return -EILSEQ; + + hdev->stat.byte_rx += count; while (count) { - switch (*buf++) { - case HCI_ACLDATA_PKT: - ah = (struct hci_acl_hdr *) buf; - len = HCI_ACL_HDR_SIZE + __le16_to_cpu(ah->dlen); - skb = bt_skb_alloc(len, GFP_ATOMIC); - if (skb) { - memcpy(skb_put(skb, len), buf, len); - skb->dev = (void *) data->hdev; - bt_cb(skb)->pkt_type = HCI_ACLDATA_PKT; - hci_recv_frame(skb); - } - break; + struct sk_buff *skb = data->rx_skb[queue]; + struct { __u8 type; int expect; } *scb; + int type, len = 0; - case HCI_SCODATA_PKT: - sh = (struct hci_sco_hdr *) buf; - len = HCI_SCO_HDR_SIZE + sh->dlen; - skb = bt_skb_alloc(len, GFP_ATOMIC); - if (skb) { - memcpy(skb_put(skb, len), buf, len); - skb->dev = (void *) data->hdev; - bt_cb(skb)->pkt_type = HCI_SCODATA_PKT; - hci_recv_frame(skb); + if (!skb) { + /* Start of the frame */ + + type = *((__u8 *) buf); + count--; buf++; + + switch (type) { + case HCI_EVENT_PKT: + if (count >= HCI_EVENT_HDR_SIZE) { + struct hci_event_hdr *h = buf; + len = HCI_EVENT_HDR_SIZE + h->plen; + } else + return -EILSEQ; + break; + + case HCI_ACLDATA_PKT: + if (count >= HCI_ACL_HDR_SIZE) { + struct hci_acl_hdr *h = buf; + len = HCI_ACL_HDR_SIZE + + __le16_to_cpu(h->dlen); + } else + return -EILSEQ; + break; + + case HCI_SCODATA_PKT: + if (count >= HCI_SCO_HDR_SIZE) { + struct hci_sco_hdr *h = buf; + len = HCI_SCO_HDR_SIZE + h->dlen; + } else + return -EILSEQ; + break; + + case HCI_VENDOR_PKT: + if (count >= HCI_VENDOR_HDR_SIZE) { + struct hci_vendor_hdr *h = buf; + len = HCI_VENDOR_HDR_SIZE + + __le16_to_cpu(h->dlen); + } else + return -EILSEQ; + break; } - break; - case HCI_VENDOR_PKT: - vh = (struct hci_vendor_hdr *) buf; - len = HCI_VENDOR_HDR_SIZE + __le16_to_cpu(vh->dlen); skb = bt_skb_alloc(len, GFP_ATOMIC); - if (skb) { - memcpy(skb_put(skb, len), buf, len); - skb->dev = (void *) data->hdev; - bt_cb(skb)->pkt_type = HCI_VENDOR_PKT; - hci_recv_frame(skb); + if (!skb) { + BT_ERR("%s no memory for packet", hdev->name); + return -ENOMEM; } - break; - - default: - len = count - 1; - 
break; - } - buf += len; - count -= (len + 1); - } -} - -static int bpa10x_recv_event(struct bpa10x_data *data, unsigned char *buf, int size) -{ - BT_DBG("data %p buf %p size %d", data, buf, size); + skb->dev = (void *) hdev; - if (data->evt_skb) { - struct sk_buff *skb = data->evt_skb; + data->rx_skb[queue] = skb; - memcpy(skb_put(skb, size), buf, size); + scb = (void *) skb->cb; + scb->type = type; + scb->expect = len; + } else { + /* Continuation */ - if (skb->len == data->evt_len) { - data->evt_skb = NULL; - data->evt_len = 0; - hci_recv_frame(skb); - } - } else { - struct sk_buff *skb; - struct hci_event_hdr *hdr; - unsigned char pkt_type; - int pkt_len = 0; - - if (size < HCI_EVENT_HDR_SIZE + 1) { - BT_ERR("%s event packet block with size %d is too short", - data->hdev->name, size); - return -EILSEQ; + scb = (void *) skb->cb; + len = scb->expect; } - pkt_type = *buf++; - size--; - - if (pkt_type != HCI_EVENT_PKT) { - BT_ERR("%s unexpected event packet start byte 0x%02x", - data->hdev->name, pkt_type); - return -EPROTO; - } + len = min(len, count); - hdr = (struct hci_event_hdr *) buf; - pkt_len = HCI_EVENT_HDR_SIZE + hdr->plen; + memcpy(skb_put(skb, len), buf, len); - skb = bt_skb_alloc(pkt_len, GFP_ATOMIC); - if (!skb) { - BT_ERR("%s no memory for new event packet", - data->hdev->name); - return -ENOMEM; - } + scb->expect -= len; - skb->dev = (void *) data->hdev; - bt_cb(skb)->pkt_type = pkt_type; + if (scb->expect == 0) { + /* Complete frame */ - memcpy(skb_put(skb, size), buf, size); + data->rx_skb[queue] = NULL; - if (pkt_len == size) { + bt_cb(skb)->pkt_type = scb->type; hci_recv_frame(skb); - } else { - data->evt_skb = skb; - data->evt_len = pkt_len; } + + count -= len; buf += len; } return 0; } -static void bpa10x_wakeup(struct bpa10x_data *data) +static void bpa10x_tx_complete(struct urb *urb) { - struct urb *urb; - struct sk_buff *skb; - int err; + struct sk_buff *skb = urb->context; + struct hci_dev *hdev = (struct hci_dev *) skb->dev; - BT_DBG("data %p", data); + BT_DBG("%s urb %p status %d count %d", hdev->name, + urb, urb->status, urb->actual_length); - urb = data->cmd_urb; - if (urb->status == -EINPROGRESS) - skb = NULL; + if (!test_bit(HCI_RUNNING, &hdev->flags)) + goto done; + + if (!urb->status) + hdev->stat.byte_tx += urb->transfer_buffer_length; else - skb = skb_dequeue(&data->cmd_queue); + hdev->stat.err_tx++; - if (skb) { - struct usb_ctrlrequest *cr; +done: + kfree(urb->setup_packet); - if (skb->len > BPA10X_CMD_BUF_SIZE) { - BT_ERR("%s command packet with size %d is too big", - data->hdev->name, skb->len); - kfree_skb(skb); - return; - } + kfree_skb(skb); +} + +static void bpa10x_rx_complete(struct urb *urb) +{ + struct hci_dev *hdev = urb->context; + struct bpa10x_data *data = hdev->driver_data; + int err; - cr = (struct usb_ctrlrequest *) urb->setup_packet; - cr->wLength = __cpu_to_le16(skb->len); + BT_DBG("%s urb %p status %d count %d", hdev->name, + urb, urb->status, urb->actual_length); - skb_copy_from_linear_data(skb, urb->transfer_buffer, skb->len); - urb->transfer_buffer_length = skb->len; + if (!test_bit(HCI_RUNNING, &hdev->flags)) + return; - err = usb_submit_urb(urb, GFP_ATOMIC); - if (err < 0 && err != -ENODEV) { - BT_ERR("%s submit failed for command urb %p with error %d", - data->hdev->name, urb, err); - skb_queue_head(&data->cmd_queue, skb); - } else - kfree_skb(skb); + if (urb->status == 0) { + if (bpa10x_recv(hdev, usb_pipebulk(urb->pipe), + urb->transfer_buffer, + urb->actual_length) < 0) { + BT_ERR("%s corrupted event packet", hdev->name); + 
hdev->stat.err_rx++; + } } - urb = data->tx_urb; - if (urb->status == -EINPROGRESS) - skb = NULL; - else - skb = skb_dequeue(&data->tx_queue); - - if (skb) { - skb_copy_from_linear_data(skb, urb->transfer_buffer, skb->len); - urb->transfer_buffer_length = skb->len; - - err = usb_submit_urb(urb, GFP_ATOMIC); - if (err < 0 && err != -ENODEV) { - BT_ERR("%s submit failed for command urb %p with error %d", - data->hdev->name, urb, err); - skb_queue_head(&data->tx_queue, skb); - } else - kfree_skb(skb); + usb_anchor_urb(urb, &data->rx_anchor); + + err = usb_submit_urb(urb, GFP_ATOMIC); + if (err < 0) { + BT_ERR("%s urb %p failed to resubmit (%d)", + hdev->name, urb, -err); + usb_unanchor_urb(urb); } } -static void bpa10x_complete(struct urb *urb) +static inline int bpa10x_submit_intr_urb(struct hci_dev *hdev) { - struct bpa10x_data *data = urb->context; - unsigned char *buf = urb->transfer_buffer; - int err, count = urb->actual_length; + struct bpa10x_data *data = hdev->driver_data; + struct urb *urb; + unsigned char *buf; + unsigned int pipe; + int err, size = 16; - BT_DBG("data %p urb %p buf %p count %d", data, urb, buf, count); + BT_DBG("%s", hdev->name); - read_lock(&data->lock); + urb = usb_alloc_urb(0, GFP_KERNEL); + if (!urb) + return -ENOMEM; - if (!test_bit(HCI_RUNNING, &data->hdev->flags)) - goto unlock; + buf = kmalloc(size, GFP_KERNEL); + if (!buf) { + usb_free_urb(urb); + return -ENOMEM; + } - if (urb->status < 0 || !count) - goto resubmit; + pipe = usb_rcvintpipe(data->udev, 0x81); - if (usb_pipein(urb->pipe)) { - data->hdev->stat.byte_rx += count; + usb_fill_int_urb(urb, data->udev, pipe, buf, size, + bpa10x_rx_complete, hdev, 1); - if (usb_pipetype(urb->pipe) == PIPE_INTERRUPT) - bpa10x_recv_event(data, buf, count); + urb->transfer_flags |= URB_FREE_BUFFER; - if (usb_pipetype(urb->pipe) == PIPE_BULK) - bpa10x_recv_bulk(data, buf, count); - } else { - data->hdev->stat.byte_tx += count; + usb_anchor_urb(urb, &data->rx_anchor); - bpa10x_wakeup(data); + err = usb_submit_urb(urb, GFP_KERNEL); + if (err < 0) { + BT_ERR("%s urb %p submission failed (%d)", + hdev->name, urb, -err); + usb_unanchor_urb(urb); + kfree(buf); } -resubmit: - if (usb_pipein(urb->pipe)) { - err = usb_submit_urb(urb, GFP_ATOMIC); - if (err < 0 && err != -ENODEV) { - BT_ERR("%s urb %p type %d resubmit status %d", - data->hdev->name, urb, usb_pipetype(urb->pipe), err); - } - } + usb_free_urb(urb); -unlock: - read_unlock(&data->lock); + return err; } -static inline struct urb *bpa10x_alloc_urb(struct usb_device *udev, unsigned int pipe, - size_t size, gfp_t flags, void *data) +static inline int bpa10x_submit_bulk_urb(struct hci_dev *hdev) { + struct bpa10x_data *data = hdev->driver_data; struct urb *urb; - struct usb_ctrlrequest *cr; unsigned char *buf; + unsigned int pipe; + int err, size = 64; - BT_DBG("udev %p data %p", udev, data); + BT_DBG("%s", hdev->name); - urb = usb_alloc_urb(0, flags); + urb = usb_alloc_urb(0, GFP_KERNEL); if (!urb) - return NULL; + return -ENOMEM; - buf = kmalloc(size, flags); + buf = kmalloc(size, GFP_KERNEL); if (!buf) { usb_free_urb(urb); - return NULL; + return -ENOMEM; } - switch (usb_pipetype(pipe)) { - case PIPE_CONTROL: - cr = kmalloc(sizeof(*cr), flags); - if (!cr) { - kfree(buf); - usb_free_urb(urb); - return NULL; - } + pipe = usb_rcvbulkpipe(data->udev, 0x82); - cr->bRequestType = USB_TYPE_VENDOR; - cr->bRequest = 0; - cr->wIndex = 0; - cr->wValue = 0; - cr->wLength = __cpu_to_le16(0); + usb_fill_bulk_urb(urb, data->udev, pipe, + buf, size, bpa10x_rx_complete, hdev); - 
usb_fill_control_urb(urb, udev, pipe, (void *) cr, buf, 0, bpa10x_complete, data); - break; + urb->transfer_flags |= URB_FREE_BUFFER; - case PIPE_INTERRUPT: - usb_fill_int_urb(urb, udev, pipe, buf, size, bpa10x_complete, data, 1); - break; + usb_anchor_urb(urb, &data->rx_anchor); - case PIPE_BULK: - usb_fill_bulk_urb(urb, udev, pipe, buf, size, bpa10x_complete, data); - break; - - default: + err = usb_submit_urb(urb, GFP_KERNEL); + if (err < 0) { + BT_ERR("%s urb %p submission failed (%d)", + hdev->name, urb, -err); + usb_unanchor_urb(urb); kfree(buf); - usb_free_urb(urb); - return NULL; } - return urb; -} - -static inline void bpa10x_free_urb(struct urb *urb) -{ - BT_DBG("urb %p", urb); - - if (!urb) - return; - - kfree(urb->setup_packet); - kfree(urb->transfer_buffer); - usb_free_urb(urb); + + return err; } static int bpa10x_open(struct hci_dev *hdev) { struct bpa10x_data *data = hdev->driver_data; - struct usb_device *udev = data->udev; - unsigned long flags; int err; - BT_DBG("hdev %p data %p", hdev, data); + BT_DBG("%s", hdev->name); if (test_and_set_bit(HCI_RUNNING, &hdev->flags)) return 0; - data->cmd_urb = bpa10x_alloc_urb(udev, usb_sndctrlpipe(udev, BPA10X_CMD_EP), - BPA10X_CMD_BUF_SIZE, GFP_KERNEL, data); - if (!data->cmd_urb) { - err = -ENOMEM; - goto done; - } - - data->evt_urb = bpa10x_alloc_urb(udev, usb_rcvintpipe(udev, BPA10X_EVT_EP), - BPA10X_EVT_BUF_SIZE, GFP_KERNEL, data); - if (!data->evt_urb) { - bpa10x_free_urb(data->cmd_urb); - err = -ENOMEM; - goto done; - } - - data->rx_urb = bpa10x_alloc_urb(udev, usb_rcvbulkpipe(udev, BPA10X_RX_EP), - BPA10X_RX_BUF_SIZE, GFP_KERNEL, data); - if (!data->rx_urb) { - bpa10x_free_urb(data->evt_urb); - bpa10x_free_urb(data->cmd_urb); - err = -ENOMEM; - goto done; - } - - data->tx_urb = bpa10x_alloc_urb(udev, usb_sndbulkpipe(udev, BPA10X_TX_EP), - BPA10X_TX_BUF_SIZE, GFP_KERNEL, data); - if (!data->rx_urb) { - bpa10x_free_urb(data->rx_urb); - bpa10x_free_urb(data->evt_urb); - bpa10x_free_urb(data->cmd_urb); - err = -ENOMEM; - goto done; - } + err = bpa10x_submit_intr_urb(hdev); + if (err < 0) + goto error; - write_lock_irqsave(&data->lock, flags); + err = bpa10x_submit_bulk_urb(hdev); + if (err < 0) + goto error; - err = usb_submit_urb(data->evt_urb, GFP_ATOMIC); - if (err < 0) { - BT_ERR("%s submit failed for event urb %p with error %d", - data->hdev->name, data->evt_urb, err); - } else { - err = usb_submit_urb(data->rx_urb, GFP_ATOMIC); - if (err < 0) { - BT_ERR("%s submit failed for rx urb %p with error %d", - data->hdev->name, data->evt_urb, err); - usb_kill_urb(data->evt_urb); - } - } + return 0; - write_unlock_irqrestore(&data->lock, flags); +error: + usb_kill_anchored_urbs(&data->rx_anchor); -done: - if (err < 0) - clear_bit(HCI_RUNNING, &hdev->flags); + clear_bit(HCI_RUNNING, &hdev->flags); return err; } @@ -446,27 +339,13 @@ done: static int bpa10x_close(struct hci_dev *hdev) { struct bpa10x_data *data = hdev->driver_data; - unsigned long flags; - BT_DBG("hdev %p data %p", hdev, data); + BT_DBG("%s", hdev->name); if (!test_and_clear_bit(HCI_RUNNING, &hdev->flags)) return 0; - write_lock_irqsave(&data->lock, flags); - - skb_queue_purge(&data->cmd_queue); - usb_kill_urb(data->cmd_urb); - usb_kill_urb(data->evt_urb); - usb_kill_urb(data->rx_urb); - usb_kill_urb(data->tx_urb); - - write_unlock_irqrestore(&data->lock, flags); - - bpa10x_free_urb(data->cmd_urb); - bpa10x_free_urb(data->evt_urb); - bpa10x_free_urb(data->rx_urb); - bpa10x_free_urb(data->tx_urb); + usb_kill_anchored_urbs(&data->rx_anchor); return 0; } @@ -475,9 +354,9 
@@ static int bpa10x_flush(struct hci_dev *hdev) { struct bpa10x_data *data = hdev->driver_data; - BT_DBG("hdev %p data %p", hdev, data); + BT_DBG("%s", hdev->name); - skb_queue_purge(&data->cmd_queue); + usb_kill_anchored_urbs(&data->tx_anchor); return 0; } @@ -485,45 +364,78 @@ static int bpa10x_flush(struct hci_dev *hdev) static int bpa10x_send_frame(struct sk_buff *skb) { struct hci_dev *hdev = (struct hci_dev *) skb->dev; - struct bpa10x_data *data; - - BT_DBG("hdev %p skb %p type %d len %d", hdev, skb, bt_cb(skb)->pkt_type, skb->len); + struct bpa10x_data *data = hdev->driver_data; + struct usb_ctrlrequest *dr; + struct urb *urb; + unsigned int pipe; + int err; - if (!hdev) { - BT_ERR("Frame for unknown HCI device"); - return -ENODEV; - } + BT_DBG("%s", hdev->name); if (!test_bit(HCI_RUNNING, &hdev->flags)) return -EBUSY; - data = hdev->driver_data; + urb = usb_alloc_urb(0, GFP_ATOMIC); + if (!urb) + return -ENOMEM; /* Prepend skb with frame type */ - memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1); + *skb_push(skb, 1) = bt_cb(skb)->pkt_type; switch (bt_cb(skb)->pkt_type) { case HCI_COMMAND_PKT: + dr = kmalloc(sizeof(*dr), GFP_ATOMIC); + if (!dr) { + usb_free_urb(urb); + return -ENOMEM; + } + + dr->bRequestType = USB_TYPE_VENDOR; + dr->bRequest = 0; + dr->wIndex = 0; + dr->wValue = 0; + dr->wLength = __cpu_to_le16(skb->len); + + pipe = usb_sndctrlpipe(data->udev, 0x00); + + usb_fill_control_urb(urb, data->udev, pipe, (void *) dr, + skb->data, skb->len, bpa10x_tx_complete, skb); + hdev->stat.cmd_tx++; - skb_queue_tail(&data->cmd_queue, skb); break; case HCI_ACLDATA_PKT: + pipe = usb_sndbulkpipe(data->udev, 0x02); + + usb_fill_bulk_urb(urb, data->udev, pipe, + skb->data, skb->len, bpa10x_tx_complete, skb); + hdev->stat.acl_tx++; - skb_queue_tail(&data->tx_queue, skb); break; case HCI_SCODATA_PKT: + pipe = usb_sndbulkpipe(data->udev, 0x02); + + usb_fill_bulk_urb(urb, data->udev, pipe, + skb->data, skb->len, bpa10x_tx_complete, skb); + hdev->stat.sco_tx++; - skb_queue_tail(&data->tx_queue, skb); break; - }; - read_lock(&data->lock); + default: + return -EILSEQ; + } + + usb_anchor_urb(urb, &data->tx_anchor); - bpa10x_wakeup(data); + err = usb_submit_urb(urb, GFP_ATOMIC); + if (err < 0) { + BT_ERR("%s urb %p submission failed", hdev->name, urb); + kfree(urb->setup_packet); + usb_unanchor_urb(urb); + } - read_unlock(&data->lock); + usb_free_urb(urb); return 0; } @@ -532,16 +444,17 @@ static void bpa10x_destruct(struct hci_dev *hdev) { struct bpa10x_data *data = hdev->driver_data; - BT_DBG("hdev %p data %p", hdev, data); + BT_DBG("%s", hdev->name); + kfree(data->rx_skb[0]); + kfree(data->rx_skb[1]); kfree(data); } static int bpa10x_probe(struct usb_interface *intf, const struct usb_device_id *id) { - struct usb_device *udev = interface_to_usbdev(intf); - struct hci_dev *hdev; struct bpa10x_data *data; + struct hci_dev *hdev; int err; BT_DBG("intf %p id %p", intf, id); @@ -549,48 +462,43 @@ static int bpa10x_probe(struct usb_interface *intf, const struct usb_device_id * if (ignore) return -ENODEV; - if (intf->cur_altsetting->desc.bInterfaceNumber > 0) + if (intf->cur_altsetting->desc.bInterfaceNumber != 0) return -ENODEV; data = kzalloc(sizeof(*data), GFP_KERNEL); - if (!data) { - BT_ERR("Can't allocate data structure"); + if (!data) return -ENOMEM; - } - - data->udev = udev; - rwlock_init(&data->lock); + data->udev = interface_to_usbdev(intf); - skb_queue_head_init(&data->cmd_queue); - skb_queue_head_init(&data->tx_queue); + init_usb_anchor(&data->tx_anchor); + 
init_usb_anchor(&data->rx_anchor); hdev = hci_alloc_dev(); if (!hdev) { - BT_ERR("Can't allocate HCI device"); kfree(data); return -ENOMEM; } - data->hdev = hdev; - hdev->type = HCI_USB; hdev->driver_data = data; + + data->hdev = hdev; + SET_HCIDEV_DEV(hdev, &intf->dev); - hdev->open = bpa10x_open; - hdev->close = bpa10x_close; - hdev->flush = bpa10x_flush; - hdev->send = bpa10x_send_frame; - hdev->destruct = bpa10x_destruct; + hdev->open = bpa10x_open; + hdev->close = bpa10x_close; + hdev->flush = bpa10x_flush; + hdev->send = bpa10x_send_frame; + hdev->destruct = bpa10x_destruct; hdev->owner = THIS_MODULE; err = hci_register_dev(hdev); if (err < 0) { - BT_ERR("Can't register HCI device"); - kfree(data); hci_free_dev(hdev); + kfree(data); return err; } @@ -602,19 +510,17 @@ static int bpa10x_probe(struct usb_interface *intf, const struct usb_device_id * static void bpa10x_disconnect(struct usb_interface *intf) { struct bpa10x_data *data = usb_get_intfdata(intf); - struct hci_dev *hdev = data->hdev; BT_DBG("intf %p", intf); - if (!hdev) + if (!data) return; usb_set_intfdata(intf, NULL); - if (hci_unregister_dev(hdev) < 0) - BT_ERR("Can't unregister HCI device %s", hdev->name); + hci_unregister_dev(data->hdev); - hci_free_dev(hdev); + hci_free_dev(data->hdev); } static struct usb_driver bpa10x_driver = { @@ -626,15 +532,9 @@ static struct usb_driver bpa10x_driver = { static int __init bpa10x_init(void) { - int err; - BT_INFO("Digianswer Bluetooth USB driver ver %s", VERSION); - err = usb_register(&bpa10x_driver); - if (err < 0) - BT_ERR("Failed to register USB driver"); - - return err; + return usb_register(&bpa10x_driver); } static void __exit bpa10x_exit(void) diff --git a/drivers/bluetooth/bt3c_cs.c b/drivers/bluetooth/bt3c_cs.c index 39516074636..a18f9b8c9e1 100644 --- a/drivers/bluetooth/bt3c_cs.c +++ b/drivers/bluetooth/bt3c_cs.c @@ -344,10 +344,7 @@ static irqreturn_t bt3c_interrupt(int irq, void *dev_inst) unsigned int iobase; int iir; - if (!info || !info->hdev) { - BT_ERR("Call of irq %d for unknown device", irq); - return IRQ_NONE; - } + BUG_ON(!info->hdev); iobase = info->p_dev->io.BasePort1; diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c new file mode 100644 index 00000000000..b786f618790 --- /dev/null +++ b/drivers/bluetooth/btsdio.c @@ -0,0 +1,406 @@ +/* + * + * Generic Bluetooth SDIO driver + * + * Copyright (C) 2007 Cambridge Silicon Radio Ltd. + * Copyright (C) 2007 Marcel Holtmann <marcel@holtmann.org> + * + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/init.h> +#include <linux/slab.h> +#include <linux/types.h> +#include <linux/sched.h> +#include <linux/errno.h> +#include <linux/skbuff.h> + +#include <linux/mmc/sdio_ids.h> +#include <linux/mmc/sdio_func.h> + +#include <net/bluetooth/bluetooth.h> +#include <net/bluetooth/hci_core.h> + +#ifndef CONFIG_BT_HCIBTSDIO_DEBUG +#undef BT_DBG +#define BT_DBG(D...) +#endif + +#define VERSION "0.1" + +static const struct sdio_device_id btsdio_table[] = { + /* Generic Bluetooth Type-A SDIO device */ + { SDIO_DEVICE_CLASS(SDIO_CLASS_BT_A) }, + + /* Generic Bluetooth Type-B SDIO device */ + { SDIO_DEVICE_CLASS(SDIO_CLASS_BT_B) }, + + { } /* Terminating entry */ +}; + +MODULE_DEVICE_TABLE(sdio, btsdio_table); + +struct btsdio_data { + struct hci_dev *hdev; + struct sdio_func *func; + + struct work_struct work; + + struct sk_buff_head txq; +}; + +#define REG_RDAT 0x00 /* Receiver Data */ +#define REG_TDAT 0x00 /* Transmitter Data */ +#define REG_PC_RRT 0x10 /* Read Packet Control */ +#define REG_PC_WRT 0x11 /* Write Packet Control */ +#define REG_RTC_STAT 0x12 /* Retry Control Status */ +#define REG_RTC_SET 0x12 /* Retry Control Set */ +#define REG_INTRD 0x13 /* Interrupt Indication */ +#define REG_CL_INTRD 0x13 /* Interrupt Clear */ +#define REG_EN_INTRD 0x14 /* Interrupt Enable */ +#define REG_MD_STAT 0x20 /* Bluetooth Mode Status */ + +static int btsdio_tx_packet(struct btsdio_data *data, struct sk_buff *skb) +{ + int err; + + BT_DBG("%s", data->hdev->name); + + /* Prepend Type-A header */ + skb_push(skb, 4); + skb->data[0] = (skb->len & 0x0000ff); + skb->data[1] = (skb->len & 0x00ff00) >> 8; + skb->data[2] = (skb->len & 0xff0000) >> 16; + skb->data[3] = bt_cb(skb)->pkt_type; + + err = sdio_writesb(data->func, REG_TDAT, skb->data, skb->len); + if (err < 0) { + sdio_writeb(data->func, 0x01, REG_PC_WRT, NULL); + return err; + } + + data->hdev->stat.byte_tx += skb->len; + + kfree_skb(skb); + + return 0; +} + +static void btsdio_work(struct work_struct *work) +{ + struct btsdio_data *data = container_of(work, struct btsdio_data, work); + struct sk_buff *skb; + int err; + + BT_DBG("%s", data->hdev->name); + + sdio_claim_host(data->func); + + while ((skb = skb_dequeue(&data->txq))) { + err = btsdio_tx_packet(data, skb); + if (err < 0) { + data->hdev->stat.err_tx++; + skb_queue_head(&data->txq, skb); + break; + } + } + + sdio_release_host(data->func); +} + +static int btsdio_rx_packet(struct btsdio_data *data) +{ + u8 hdr[4] __attribute__ ((aligned(4))); + struct sk_buff *skb; + int err, len; + + BT_DBG("%s", data->hdev->name); + + err = sdio_readsb(data->func, hdr, REG_RDAT, 4); + if (err < 0) + return err; + + len = hdr[0] | (hdr[1] << 8) | (hdr[2] << 16); + if (len < 4 || len > 65543) + return -EILSEQ; + + skb = bt_skb_alloc(len - 4, GFP_KERNEL); + if (!skb) { + /* Out of memory. Prepare a read retry and just + * return with the expectation that the next time + * we're called we'll have more memory. 
*/ + return -ENOMEM; + } + + skb_put(skb, len - 4); + + err = sdio_readsb(data->func, skb->data, REG_RDAT, len - 4); + if (err < 0) { + kfree(skb); + return err; + } + + data->hdev->stat.byte_rx += len; + + skb->dev = (void *) data->hdev; + bt_cb(skb)->pkt_type = hdr[3]; + + err = hci_recv_frame(skb); + if (err < 0) { + kfree(skb); + return err; + } + + sdio_writeb(data->func, 0x00, REG_PC_RRT, NULL); + + return 0; +} + +static void btsdio_interrupt(struct sdio_func *func) +{ + struct btsdio_data *data = sdio_get_drvdata(func); + int intrd; + + BT_DBG("%s", data->hdev->name); + + intrd = sdio_readb(func, REG_INTRD, NULL); + if (intrd & 0x01) { + sdio_writeb(func, 0x01, REG_CL_INTRD, NULL); + + if (btsdio_rx_packet(data) < 0) { + data->hdev->stat.err_rx++; + sdio_writeb(data->func, 0x01, REG_PC_RRT, NULL); + } + } +} + +static int btsdio_open(struct hci_dev *hdev) +{ + struct btsdio_data *data = hdev->driver_data; + int err; + + BT_DBG("%s", hdev->name); + + if (test_and_set_bit(HCI_RUNNING, &hdev->flags)) + return 0; + + sdio_claim_host(data->func); + + err = sdio_enable_func(data->func); + if (err < 0) { + clear_bit(HCI_RUNNING, &hdev->flags); + goto release; + } + + err = sdio_claim_irq(data->func, btsdio_interrupt); + if (err < 0) { + sdio_disable_func(data->func); + clear_bit(HCI_RUNNING, &hdev->flags); + goto release; + } + + if (data->func->class == SDIO_CLASS_BT_B) + sdio_writeb(data->func, 0x00, REG_MD_STAT, NULL); + + sdio_writeb(data->func, 0x01, REG_EN_INTRD, NULL); + +release: + sdio_release_host(data->func); + + return err; +} + +static int btsdio_close(struct hci_dev *hdev) +{ + struct btsdio_data *data = hdev->driver_data; + + BT_DBG("%s", hdev->name); + + if (!test_and_clear_bit(HCI_RUNNING, &hdev->flags)) + return 0; + + sdio_claim_host(data->func); + + sdio_writeb(data->func, 0x00, REG_EN_INTRD, NULL); + + sdio_release_irq(data->func); + sdio_disable_func(data->func); + + sdio_release_host(data->func); + + return 0; +} + +static int btsdio_flush(struct hci_dev *hdev) +{ + struct btsdio_data *data = hdev->driver_data; + + BT_DBG("%s", hdev->name); + + skb_queue_purge(&data->txq); + + return 0; +} + +static int btsdio_send_frame(struct sk_buff *skb) +{ + struct hci_dev *hdev = (struct hci_dev *) skb->dev; + struct btsdio_data *data = hdev->driver_data; + + BT_DBG("%s", hdev->name); + + if (!test_bit(HCI_RUNNING, &hdev->flags)) + return -EBUSY; + + switch (bt_cb(skb)->pkt_type) { + case HCI_COMMAND_PKT: + hdev->stat.cmd_tx++; + break; + + case HCI_ACLDATA_PKT: + hdev->stat.acl_tx++; + break; + + case HCI_SCODATA_PKT: + hdev->stat.sco_tx++; + break; + + default: + return -EILSEQ; + } + + skb_queue_tail(&data->txq, skb); + + schedule_work(&data->work); + + return 0; +} + +static void btsdio_destruct(struct hci_dev *hdev) +{ + struct btsdio_data *data = hdev->driver_data; + + BT_DBG("%s", hdev->name); + + kfree(data); +} + +static int btsdio_probe(struct sdio_func *func, + const struct sdio_device_id *id) +{ + struct btsdio_data *data; + struct hci_dev *hdev; + struct sdio_func_tuple *tuple = func->tuples; + int err; + + BT_DBG("func %p id %p class 0x%04x", func, id, func->class); + + while (tuple) { + BT_DBG("code 0x%x size %d", tuple->code, tuple->size); + tuple = tuple->next; + } + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return -ENOMEM; + + data->func = func; + + INIT_WORK(&data->work, btsdio_work); + + skb_queue_head_init(&data->txq); + + hdev = hci_alloc_dev(); + if (!hdev) { + kfree(data); + return -ENOMEM; + } + + hdev->type = HCI_SDIO; + 
hdev->driver_data = data; + + data->hdev = hdev; + + SET_HCIDEV_DEV(hdev, &func->dev); + + hdev->open = btsdio_open; + hdev->close = btsdio_close; + hdev->flush = btsdio_flush; + hdev->send = btsdio_send_frame; + hdev->destruct = btsdio_destruct; + + hdev->owner = THIS_MODULE; + + err = hci_register_dev(hdev); + if (err < 0) { + hci_free_dev(hdev); + kfree(data); + return err; + } + + sdio_set_drvdata(func, data); + + return 0; +} + +static void btsdio_remove(struct sdio_func *func) +{ + struct btsdio_data *data = sdio_get_drvdata(func); + struct hci_dev *hdev; + + BT_DBG("func %p", func); + + if (!data) + return; + + hdev = data->hdev; + + sdio_set_drvdata(func, NULL); + + hci_unregister_dev(hdev); + + hci_free_dev(hdev); +} + +static struct sdio_driver btsdio_driver = { + .name = "btsdio", + .probe = btsdio_probe, + .remove = btsdio_remove, + .id_table = btsdio_table, +}; + +static int __init btsdio_init(void) +{ + BT_INFO("Generic Bluetooth SDIO driver ver %s", VERSION); + + return sdio_register_driver(&btsdio_driver); +} + +static void __exit btsdio_exit(void) +{ + sdio_unregister_driver(&btsdio_driver); +} + +module_init(btsdio_init); +module_exit(btsdio_exit); + +MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>"); +MODULE_DESCRIPTION("Generic Bluetooth SDIO driver ver " VERSION); +MODULE_VERSION(VERSION); +MODULE_LICENSE("GPL"); diff --git a/drivers/bluetooth/btuart_cs.c b/drivers/bluetooth/btuart_cs.c index d7d2ea0d86a..08f48d577ab 100644 --- a/drivers/bluetooth/btuart_cs.c +++ b/drivers/bluetooth/btuart_cs.c @@ -294,10 +294,7 @@ static irqreturn_t btuart_interrupt(int irq, void *dev_inst) int boguscount = 0; int iir, lsr; - if (!info || !info->hdev) { - BT_ERR("Call of irq %d for unknown device", irq); - return IRQ_NONE; - } + BUG_ON(!info->hdev); iobase = info->p_dev->io.BasePort1; diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c new file mode 100644 index 00000000000..12e108914f1 --- /dev/null +++ b/drivers/bluetooth/btusb.c @@ -0,0 +1,564 @@ +/* + * + * Generic Bluetooth USB driver + * + * Copyright (C) 2005-2007 Marcel Holtmann <marcel@holtmann.org> + * + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/init.h> +#include <linux/slab.h> +#include <linux/types.h> +#include <linux/sched.h> +#include <linux/errno.h> +#include <linux/skbuff.h> + +#include <linux/usb.h> + +#include <net/bluetooth/bluetooth.h> +#include <net/bluetooth/hci_core.h> + +//#define CONFIG_BT_HCIBTUSB_DEBUG +#ifndef CONFIG_BT_HCIBTUSB_DEBUG +#undef BT_DBG +#define BT_DBG(D...) 
+#endif + +#define VERSION "0.1" + +static struct usb_device_id btusb_table[] = { + /* Generic Bluetooth USB device */ + { USB_DEVICE_INFO(0xe0, 0x01, 0x01) }, + + { } /* Terminating entry */ +}; + +MODULE_DEVICE_TABLE(usb, btusb_table); + +static struct usb_device_id blacklist_table[] = { + { } /* Terminating entry */ +}; + +#define BTUSB_INTR_RUNNING 0 +#define BTUSB_BULK_RUNNING 1 + +struct btusb_data { + struct hci_dev *hdev; + struct usb_device *udev; + + spinlock_t lock; + + unsigned long flags; + + struct work_struct work; + + struct usb_anchor tx_anchor; + struct usb_anchor intr_anchor; + struct usb_anchor bulk_anchor; + + struct usb_endpoint_descriptor *intr_ep; + struct usb_endpoint_descriptor *bulk_tx_ep; + struct usb_endpoint_descriptor *bulk_rx_ep; +}; + +static void btusb_intr_complete(struct urb *urb) +{ + struct hci_dev *hdev = urb->context; + struct btusb_data *data = hdev->driver_data; + int err; + + BT_DBG("%s urb %p status %d count %d", hdev->name, + urb, urb->status, urb->actual_length); + + if (!test_bit(HCI_RUNNING, &hdev->flags)) + return; + + if (urb->status == 0) { + if (hci_recv_fragment(hdev, HCI_EVENT_PKT, + urb->transfer_buffer, + urb->actual_length) < 0) { + BT_ERR("%s corrupted event packet", hdev->name); + hdev->stat.err_rx++; + } + } + + if (!test_bit(BTUSB_INTR_RUNNING, &data->flags)) + return; + + usb_anchor_urb(urb, &data->intr_anchor); + + err = usb_submit_urb(urb, GFP_ATOMIC); + if (err < 0) { + BT_ERR("%s urb %p failed to resubmit (%d)", + hdev->name, urb, -err); + usb_unanchor_urb(urb); + } +} + +static inline int btusb_submit_intr_urb(struct hci_dev *hdev) +{ + struct btusb_data *data = hdev->driver_data; + struct urb *urb; + unsigned char *buf; + unsigned int pipe; + int err, size; + + BT_DBG("%s", hdev->name); + + urb = usb_alloc_urb(0, GFP_ATOMIC); + if (!urb) + return -ENOMEM; + + size = le16_to_cpu(data->intr_ep->wMaxPacketSize); + + buf = kmalloc(size, GFP_ATOMIC); + if (!buf) { + usb_free_urb(urb); + return -ENOMEM; + } + + pipe = usb_rcvintpipe(data->udev, data->intr_ep->bEndpointAddress); + + usb_fill_int_urb(urb, data->udev, pipe, buf, size, + btusb_intr_complete, hdev, + data->intr_ep->bInterval); + + urb->transfer_flags |= URB_FREE_BUFFER; + + usb_anchor_urb(urb, &data->intr_anchor); + + err = usb_submit_urb(urb, GFP_ATOMIC); + if (err < 0) { + BT_ERR("%s urb %p submission failed (%d)", + hdev->name, urb, -err); + usb_unanchor_urb(urb); + kfree(buf); + } + + usb_free_urb(urb); + + return err; +} + +static void btusb_bulk_complete(struct urb *urb) +{ + struct hci_dev *hdev = urb->context; + struct btusb_data *data = hdev->driver_data; + int err; + + BT_DBG("%s urb %p status %d count %d", hdev->name, + urb, urb->status, urb->actual_length); + + if (!test_bit(HCI_RUNNING, &hdev->flags)) + return; + + if (urb->status == 0) { + if (hci_recv_fragment(hdev, HCI_ACLDATA_PKT, + urb->transfer_buffer, + urb->actual_length) < 0) { + BT_ERR("%s corrupted ACL packet", hdev->name); + hdev->stat.err_rx++; + } + } + + if (!test_bit(BTUSB_BULK_RUNNING, &data->flags)) + return; + + usb_anchor_urb(urb, &data->bulk_anchor); + + err = usb_submit_urb(urb, GFP_ATOMIC); + if (err < 0) { + BT_ERR("%s urb %p failed to resubmit (%d)", + hdev->name, urb, -err); + usb_unanchor_urb(urb); + } +} + +static inline int btusb_submit_bulk_urb(struct hci_dev *hdev) +{ + struct btusb_data *data = hdev->driver_data; + struct urb *urb; + unsigned char *buf; + unsigned int pipe; + int err, size; + + BT_DBG("%s", hdev->name); + + urb = usb_alloc_urb(0, GFP_KERNEL); + if (!urb) + 
return -ENOMEM; + + size = le16_to_cpu(data->bulk_rx_ep->wMaxPacketSize); + + buf = kmalloc(size, GFP_KERNEL); + if (!buf) { + usb_free_urb(urb); + return -ENOMEM; + } + + pipe = usb_rcvbulkpipe(data->udev, data->bulk_rx_ep->bEndpointAddress); + + usb_fill_bulk_urb(urb, data->udev, pipe, + buf, size, btusb_bulk_complete, hdev); + + urb->transfer_flags |= URB_FREE_BUFFER; + + usb_anchor_urb(urb, &data->bulk_anchor); + + err = usb_submit_urb(urb, GFP_KERNEL); + if (err < 0) { + BT_ERR("%s urb %p submission failed (%d)", + hdev->name, urb, -err); + usb_unanchor_urb(urb); + kfree(buf); + } + + usb_free_urb(urb); + + return err; +} + +static void btusb_tx_complete(struct urb *urb) +{ + struct sk_buff *skb = urb->context; + struct hci_dev *hdev = (struct hci_dev *) skb->dev; + + BT_DBG("%s urb %p status %d count %d", hdev->name, + urb, urb->status, urb->actual_length); + + if (!test_bit(HCI_RUNNING, &hdev->flags)) + goto done; + + if (!urb->status) + hdev->stat.byte_tx += urb->transfer_buffer_length; + else + hdev->stat.err_tx++; + +done: + kfree(urb->setup_packet); + + kfree_skb(skb); +} + +static int btusb_open(struct hci_dev *hdev) +{ + struct btusb_data *data = hdev->driver_data; + int err; + + BT_DBG("%s", hdev->name); + + if (test_and_set_bit(HCI_RUNNING, &hdev->flags)) + return 0; + + if (test_and_set_bit(BTUSB_INTR_RUNNING, &data->flags)) + return 0; + + err = btusb_submit_intr_urb(hdev); + if (err < 0) { + clear_bit(BTUSB_INTR_RUNNING, &hdev->flags); + clear_bit(HCI_RUNNING, &hdev->flags); + } + + return err; +} + +static int btusb_close(struct hci_dev *hdev) +{ + struct btusb_data *data = hdev->driver_data; + + BT_DBG("%s", hdev->name); + + if (!test_and_clear_bit(HCI_RUNNING, &hdev->flags)) + return 0; + + clear_bit(BTUSB_BULK_RUNNING, &data->flags); + usb_kill_anchored_urbs(&data->bulk_anchor); + + clear_bit(BTUSB_INTR_RUNNING, &data->flags); + usb_kill_anchored_urbs(&data->intr_anchor); + + return 0; +} + +static int btusb_flush(struct hci_dev *hdev) +{ + struct btusb_data *data = hdev->driver_data; + + BT_DBG("%s", hdev->name); + + usb_kill_anchored_urbs(&data->tx_anchor); + + return 0; +} + +static int btusb_send_frame(struct sk_buff *skb) +{ + struct hci_dev *hdev = (struct hci_dev *) skb->dev; + struct btusb_data *data = hdev->driver_data; + struct usb_ctrlrequest *dr; + struct urb *urb; + unsigned int pipe; + int err; + + BT_DBG("%s", hdev->name); + + if (!test_bit(HCI_RUNNING, &hdev->flags)) + return -EBUSY; + + switch (bt_cb(skb)->pkt_type) { + case HCI_COMMAND_PKT: + urb = usb_alloc_urb(0, GFP_ATOMIC); + if (!urb) + return -ENOMEM; + + dr = kmalloc(sizeof(*dr), GFP_ATOMIC); + if (!dr) { + usb_free_urb(urb); + return -ENOMEM; + } + + dr->bRequestType = USB_TYPE_CLASS; + dr->bRequest = 0; + dr->wIndex = 0; + dr->wValue = 0; + dr->wLength = __cpu_to_le16(skb->len); + + pipe = usb_sndctrlpipe(data->udev, 0x00); + + usb_fill_control_urb(urb, data->udev, pipe, (void *) dr, + skb->data, skb->len, btusb_tx_complete, skb); + + hdev->stat.cmd_tx++; + break; + + case HCI_ACLDATA_PKT: + urb = usb_alloc_urb(0, GFP_ATOMIC); + if (!urb) + return -ENOMEM; + + pipe = usb_sndbulkpipe(data->udev, + data->bulk_tx_ep->bEndpointAddress); + + usb_fill_bulk_urb(urb, data->udev, pipe, + skb->data, skb->len, btusb_tx_complete, skb); + + hdev->stat.acl_tx++; + break; + + case HCI_SCODATA_PKT: + hdev->stat.sco_tx++; + kfree_skb(skb); + return 0; + + default: + return -EILSEQ; + } + + usb_anchor_urb(urb, &data->tx_anchor); + + err = usb_submit_urb(urb, GFP_ATOMIC); + if (err < 0) { + BT_ERR("%s urb %p 
submission failed", hdev->name, urb); + kfree(urb->setup_packet); + usb_unanchor_urb(urb); + } + + usb_free_urb(urb); + + return err; +} + +static void btusb_destruct(struct hci_dev *hdev) +{ + struct btusb_data *data = hdev->driver_data; + + BT_DBG("%s", hdev->name); + + kfree(data); +} + +static void btusb_notify(struct hci_dev *hdev, unsigned int evt) +{ + struct btusb_data *data = hdev->driver_data; + + BT_DBG("%s evt %d", hdev->name, evt); + + if (evt == HCI_NOTIFY_CONN_ADD || evt == HCI_NOTIFY_CONN_DEL) + schedule_work(&data->work); +} + +static void btusb_work(struct work_struct *work) +{ + struct btusb_data *data = container_of(work, struct btusb_data, work); + struct hci_dev *hdev = data->hdev; + + if (hdev->conn_hash.acl_num == 0) { + clear_bit(BTUSB_BULK_RUNNING, &data->flags); + usb_kill_anchored_urbs(&data->bulk_anchor); + return; + } + + if (!test_and_set_bit(BTUSB_BULK_RUNNING, &data->flags)) { + if (btusb_submit_bulk_urb(hdev) < 0) + clear_bit(BTUSB_BULK_RUNNING, &data->flags); + else + btusb_submit_bulk_urb(hdev); + } +} + +static int btusb_probe(struct usb_interface *intf, + const struct usb_device_id *id) +{ + struct usb_endpoint_descriptor *ep_desc; + struct btusb_data *data; + struct hci_dev *hdev; + int i, err; + + BT_DBG("intf %p id %p", intf, id); + + if (intf->cur_altsetting->desc.bInterfaceNumber != 0) + return -ENODEV; + + if (!id->driver_info) { + const struct usb_device_id *match; + match = usb_match_id(intf, blacklist_table); + if (match) + id = match; + } + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return -ENOMEM; + + for (i = 0; i < intf->cur_altsetting->desc.bNumEndpoints; i++) { + ep_desc = &intf->cur_altsetting->endpoint[i].desc; + + if (!data->intr_ep && usb_endpoint_is_int_in(ep_desc)) { + data->intr_ep = ep_desc; + continue; + } + + if (!data->bulk_tx_ep && usb_endpoint_is_bulk_out(ep_desc)) { + data->bulk_tx_ep = ep_desc; + continue; + } + + if (!data->bulk_rx_ep && usb_endpoint_is_bulk_in(ep_desc)) { + data->bulk_rx_ep = ep_desc; + continue; + } + } + + if (!data->intr_ep || !data->bulk_tx_ep || !data->bulk_rx_ep) { + kfree(data); + return -ENODEV; + } + + data->udev = interface_to_usbdev(intf); + + spin_lock_init(&data->lock); + + INIT_WORK(&data->work, btusb_work); + + init_usb_anchor(&data->tx_anchor); + init_usb_anchor(&data->intr_anchor); + init_usb_anchor(&data->bulk_anchor); + + hdev = hci_alloc_dev(); + if (!hdev) { + kfree(data); + return -ENOMEM; + } + + hdev->type = HCI_USB; + hdev->driver_data = data; + + data->hdev = hdev; + + SET_HCIDEV_DEV(hdev, &intf->dev); + + hdev->open = btusb_open; + hdev->close = btusb_close; + hdev->flush = btusb_flush; + hdev->send = btusb_send_frame; + hdev->destruct = btusb_destruct; + hdev->notify = btusb_notify; + + hdev->owner = THIS_MODULE; + + set_bit(HCI_QUIRK_RESET_ON_INIT, &hdev->quirks); + + err = hci_register_dev(hdev); + if (err < 0) { + hci_free_dev(hdev); + kfree(data); + return err; + } + + usb_set_intfdata(intf, data); + + return 0; +} + +static void btusb_disconnect(struct usb_interface *intf) +{ + struct btusb_data *data = usb_get_intfdata(intf); + struct hci_dev *hdev; + + BT_DBG("intf %p", intf); + + if (!data) + return; + + hdev = data->hdev; + + usb_set_intfdata(intf, NULL); + + hci_unregister_dev(hdev); + + hci_free_dev(hdev); +} + +static struct usb_driver btusb_driver = { + .name = "btusb", + .probe = btusb_probe, + .disconnect = btusb_disconnect, + .id_table = btusb_table, +}; + +static int __init btusb_init(void) +{ + BT_INFO("Generic Bluetooth USB driver ver %s", 
VERSION); + + return usb_register(&btusb_driver); +} + +static void __exit btusb_exit(void) +{ + usb_deregister(&btusb_driver); +} + +module_init(btusb_init); +module_exit(btusb_exit); + +MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>"); +MODULE_DESCRIPTION("Generic Bluetooth USB driver ver " VERSION); +MODULE_VERSION(VERSION); +MODULE_LICENSE("GPL"); diff --git a/drivers/bluetooth/dtl1_cs.c b/drivers/bluetooth/dtl1_cs.c index 7f9c54b9964..dae45cdf02b 100644 --- a/drivers/bluetooth/dtl1_cs.c +++ b/drivers/bluetooth/dtl1_cs.c @@ -298,10 +298,7 @@ static irqreturn_t dtl1_interrupt(int irq, void *dev_inst) int boguscount = 0; int iir, lsr; - if (!info || !info->hdev) { - BT_ERR("Call of irq %d for unknown device", irq); - return IRQ_NONE; - } + BUG_ON(!info->hdev); iobase = info->p_dev->io.BasePort1; diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c index d66064ccb31..696f7528f02 100644 --- a/drivers/bluetooth/hci_bcsp.c +++ b/drivers/bluetooth/hci_bcsp.c @@ -237,7 +237,8 @@ static struct sk_buff *bcsp_prepare_pkt(struct bcsp_struct *bcsp, u8 *data, if (hciextn && chan == 5) { struct hci_command_hdr *hdr = (struct hci_command_hdr *) data; - if (hci_opcode_ogf(__le16_to_cpu(hdr->opcode)) == OGF_VENDOR_CMD) { + /* Vendor specific commands */ + if (hci_opcode_ogf(__le16_to_cpu(hdr->opcode)) == 0x3f) { u8 desc = *(data + HCI_COMMAND_HDR_SIZE); if ((desc & 0xf0) == 0xc0) { data += HCI_COMMAND_HDR_SIZE + 1; diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c index 6055b9c0ac0..e68821d074b 100644 --- a/drivers/bluetooth/hci_ldisc.c +++ b/drivers/bluetooth/hci_ldisc.c @@ -549,7 +549,10 @@ static int __init hci_uart_init(void) #ifdef CONFIG_BT_HCIUART_BCSP bcsp_init(); #endif - +#ifdef CONFIG_BT_HCIUART_LL + ll_init(); +#endif + return 0; } @@ -563,6 +566,9 @@ static void __exit hci_uart_exit(void) #ifdef CONFIG_BT_HCIUART_BCSP bcsp_deinit(); #endif +#ifdef CONFIG_BT_HCIUART_LL + ll_deinit(); +#endif /* Release tty registration of line discipline */ if ((err = tty_unregister_ldisc(N_HCI))) diff --git a/drivers/bluetooth/hci_ll.c b/drivers/bluetooth/hci_ll.c new file mode 100644 index 00000000000..8c3e62a17b4 --- /dev/null +++ b/drivers/bluetooth/hci_ll.c @@ -0,0 +1,531 @@ +/* + * Texas Instruments' Bluetooth HCILL UART protocol + * + * HCILL (HCI Low Level) is a Texas Instruments' power management + * protocol extension to H4. + * + * Copyright (C) 2007 Texas Instruments, Inc. + * + * Written by Ohad Ben-Cohen <ohad@bencohen.org> + * + * Acknowledgements: + * This file is based on hci_h4.c, which was written + * by Maxim Krasnyansky and Marcel Holtmann. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <linux/module.h> +#include <linux/kernel.h> + +#include <linux/init.h> +#include <linux/sched.h> +#include <linux/types.h> +#include <linux/fcntl.h> +#include <linux/interrupt.h> +#include <linux/ptrace.h> +#include <linux/poll.h> + +#include <linux/slab.h> +#include <linux/tty.h> +#include <linux/errno.h> +#include <linux/string.h> +#include <linux/signal.h> +#include <linux/ioctl.h> +#include <linux/skbuff.h> + +#include <net/bluetooth/bluetooth.h> +#include <net/bluetooth/hci_core.h> + +#include "hci_uart.h" + +/* HCILL commands */ +#define HCILL_GO_TO_SLEEP_IND 0x30 +#define HCILL_GO_TO_SLEEP_ACK 0x31 +#define HCILL_WAKE_UP_IND 0x32 +#define HCILL_WAKE_UP_ACK 0x33 + +/* HCILL receiver States */ +#define HCILL_W4_PACKET_TYPE 0 +#define HCILL_W4_EVENT_HDR 1 +#define HCILL_W4_ACL_HDR 2 +#define HCILL_W4_SCO_HDR 3 +#define HCILL_W4_DATA 4 + +/* HCILL states */ +enum hcill_states_e { + HCILL_ASLEEP, + HCILL_ASLEEP_TO_AWAKE, + HCILL_AWAKE, + HCILL_AWAKE_TO_ASLEEP +}; + +struct hcill_cmd { + u8 cmd; +} __attribute__((packed)); + +struct ll_struct { + unsigned long rx_state; + unsigned long rx_count; + struct sk_buff *rx_skb; + struct sk_buff_head txq; + spinlock_t hcill_lock; /* HCILL state lock */ + unsigned long hcill_state; /* HCILL power state */ + struct sk_buff_head tx_wait_q; /* HCILL wait queue */ +}; + +/* + * Builds and sends an HCILL command packet. + * These are very simple packets with only 1 cmd byte + */ +static int send_hcill_cmd(u8 cmd, struct hci_uart *hu) +{ + int err = 0; + struct sk_buff *skb = NULL; + struct ll_struct *ll = hu->priv; + struct hcill_cmd *hcill_packet; + + BT_DBG("hu %p cmd 0x%x", hu, cmd); + + /* allocate packet */ + skb = bt_skb_alloc(1, GFP_ATOMIC); + if (!skb) { + BT_ERR("cannot allocate memory for HCILL packet"); + err = -ENOMEM; + goto out; + } + + /* prepare packet */ + hcill_packet = (struct hcill_cmd *) skb_put(skb, 1); + hcill_packet->cmd = cmd; + skb->dev = (void *) hu->hdev; + + /* send packet */ + skb_queue_tail(&ll->txq, skb); +out: + return err; +} + +/* Initialize protocol */ +static int ll_open(struct hci_uart *hu) +{ + struct ll_struct *ll; + + BT_DBG("hu %p", hu); + + ll = kzalloc(sizeof(*ll), GFP_ATOMIC); + if (!ll) + return -ENOMEM; + + skb_queue_head_init(&ll->txq); + skb_queue_head_init(&ll->tx_wait_q); + spin_lock_init(&ll->hcill_lock); + + ll->hcill_state = HCILL_AWAKE; + + hu->priv = ll; + + return 0; +} + +/* Flush protocol data */ +static int ll_flush(struct hci_uart *hu) +{ + struct ll_struct *ll = hu->priv; + + BT_DBG("hu %p", hu); + + skb_queue_purge(&ll->tx_wait_q); + skb_queue_purge(&ll->txq); + + return 0; +} + +/* Close protocol */ +static int ll_close(struct hci_uart *hu) +{ + struct ll_struct *ll = hu->priv; + + BT_DBG("hu %p", hu); + + skb_queue_purge(&ll->tx_wait_q); + skb_queue_purge(&ll->txq); + + if (ll->rx_skb) + kfree_skb(ll->rx_skb); + + hu->priv = NULL; + + kfree(ll); + + return 0; +} + +/* + * internal function, which does common work of the device wake up process: + * 1. places all pending packets (waiting in tx_wait_q list) in txq list. + * 2. changes internal state to HCILL_AWAKE. + * Note: assumes that hcill_lock spinlock is taken, + * shouldn't be called otherwise! 
+ */ +static void __ll_do_awake(struct ll_struct *ll) +{ + struct sk_buff *skb = NULL; + + while ((skb = skb_dequeue(&ll->tx_wait_q))) + skb_queue_tail(&ll->txq, skb); + + ll->hcill_state = HCILL_AWAKE; +} + +/* + * Called upon a wake-up-indication from the device + */ +static void ll_device_want_to_wakeup(struct hci_uart *hu) +{ + unsigned long flags; + struct ll_struct *ll = hu->priv; + + BT_DBG("hu %p", hu); + + /* lock hcill state */ + spin_lock_irqsave(&ll->hcill_lock, flags); + + switch (ll->hcill_state) { + case HCILL_ASLEEP: + /* acknowledge device wake up */ + if (send_hcill_cmd(HCILL_WAKE_UP_ACK, hu) < 0) { + BT_ERR("cannot acknowledge device wake up"); + goto out; + } + break; + case HCILL_ASLEEP_TO_AWAKE: + /* + * this state means that a wake-up-indication + * is already on its way to the device, + * and will serve as the required wake-up-ack + */ + BT_DBG("dual wake-up-indication"); + break; + default: + /* any other state are illegal */ + BT_ERR("received HCILL_WAKE_UP_IND in state %ld", ll->hcill_state); + break; + } + + /* send pending packets and change state to HCILL_AWAKE */ + __ll_do_awake(ll); + +out: + spin_unlock_irqrestore(&ll->hcill_lock, flags); + + /* actually send the packets */ + hci_uart_tx_wakeup(hu); +} + +/* + * Called upon a sleep-indication from the device + */ +static void ll_device_want_to_sleep(struct hci_uart *hu) +{ + unsigned long flags; + struct ll_struct *ll = hu->priv; + + BT_DBG("hu %p", hu); + + /* lock hcill state */ + spin_lock_irqsave(&ll->hcill_lock, flags); + + /* sanity check */ + if (ll->hcill_state != HCILL_AWAKE) + BT_ERR("ERR: HCILL_GO_TO_SLEEP_IND in state %ld", ll->hcill_state); + + /* acknowledge device sleep */ + if (send_hcill_cmd(HCILL_GO_TO_SLEEP_ACK, hu) < 0) { + BT_ERR("cannot acknowledge device sleep"); + goto out; + } + + /* update state */ + ll->hcill_state = HCILL_ASLEEP; + +out: + spin_unlock_irqrestore(&ll->hcill_lock, flags); + + /* actually send the sleep ack packet */ + hci_uart_tx_wakeup(hu); +} + +/* + * Called upon wake-up-acknowledgement from the device + */ +static void ll_device_woke_up(struct hci_uart *hu) +{ + unsigned long flags; + struct ll_struct *ll = hu->priv; + + BT_DBG("hu %p", hu); + + /* lock hcill state */ + spin_lock_irqsave(&ll->hcill_lock, flags); + + /* sanity check */ + if (ll->hcill_state != HCILL_ASLEEP_TO_AWAKE) + BT_ERR("received HCILL_WAKE_UP_ACK in state %ld", ll->hcill_state); + + /* send pending packets and change state to HCILL_AWAKE */ + __ll_do_awake(ll); + + spin_unlock_irqrestore(&ll->hcill_lock, flags); + + /* actually send the packets */ + hci_uart_tx_wakeup(hu); +} + +/* Enqueue frame for transmittion (padding, crc, etc) */ +/* may be called from two simultaneous tasklets */ +static int ll_enqueue(struct hci_uart *hu, struct sk_buff *skb) +{ + unsigned long flags = 0; + struct ll_struct *ll = hu->priv; + + BT_DBG("hu %p skb %p", hu, skb); + + /* Prepend skb with frame type */ + memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1); + + /* lock hcill state */ + spin_lock_irqsave(&ll->hcill_lock, flags); + + /* act according to current state */ + switch (ll->hcill_state) { + case HCILL_AWAKE: + BT_DBG("device awake, sending normally"); + skb_queue_tail(&ll->txq, skb); + break; + case HCILL_ASLEEP: + BT_DBG("device asleep, waking up and queueing packet"); + /* save packet for later */ + skb_queue_tail(&ll->tx_wait_q, skb); + /* awake device */ + if (send_hcill_cmd(HCILL_WAKE_UP_IND, hu) < 0) { + BT_ERR("cannot wake up device"); + break; + } + ll->hcill_state = HCILL_ASLEEP_TO_AWAKE; + 
break; + case HCILL_ASLEEP_TO_AWAKE: + BT_DBG("device waking up, queueing packet"); + /* transient state; just keep packet for later */ + skb_queue_tail(&ll->tx_wait_q, skb); + break; + default: + BT_ERR("illegal hcill state: %ld (losing packet)", ll->hcill_state); + kfree_skb(skb); + break; + } + + spin_unlock_irqrestore(&ll->hcill_lock, flags); + + return 0; +} + +static inline int ll_check_data_len(struct ll_struct *ll, int len) +{ + register int room = skb_tailroom(ll->rx_skb); + + BT_DBG("len %d room %d", len, room); + + if (!len) { + hci_recv_frame(ll->rx_skb); + } else if (len > room) { + BT_ERR("Data length is too large"); + kfree_skb(ll->rx_skb); + } else { + ll->rx_state = HCILL_W4_DATA; + ll->rx_count = len; + return len; + } + + ll->rx_state = HCILL_W4_PACKET_TYPE; + ll->rx_skb = NULL; + ll->rx_count = 0; + + return 0; +} + +/* Recv data */ +static int ll_recv(struct hci_uart *hu, void *data, int count) +{ + struct ll_struct *ll = hu->priv; + register char *ptr; + struct hci_event_hdr *eh; + struct hci_acl_hdr *ah; + struct hci_sco_hdr *sh; + register int len, type, dlen; + + BT_DBG("hu %p count %d rx_state %ld rx_count %ld", hu, count, ll->rx_state, ll->rx_count); + + ptr = data; + while (count) { + if (ll->rx_count) { + len = min_t(unsigned int, ll->rx_count, count); + memcpy(skb_put(ll->rx_skb, len), ptr, len); + ll->rx_count -= len; count -= len; ptr += len; + + if (ll->rx_count) + continue; + + switch (ll->rx_state) { + case HCILL_W4_DATA: + BT_DBG("Complete data"); + hci_recv_frame(ll->rx_skb); + + ll->rx_state = HCILL_W4_PACKET_TYPE; + ll->rx_skb = NULL; + continue; + + case HCILL_W4_EVENT_HDR: + eh = (struct hci_event_hdr *) ll->rx_skb->data; + + BT_DBG("Event header: evt 0x%2.2x plen %d", eh->evt, eh->plen); + + ll_check_data_len(ll, eh->plen); + continue; + + case HCILL_W4_ACL_HDR: + ah = (struct hci_acl_hdr *) ll->rx_skb->data; + dlen = __le16_to_cpu(ah->dlen); + + BT_DBG("ACL header: dlen %d", dlen); + + ll_check_data_len(ll, dlen); + continue; + + case HCILL_W4_SCO_HDR: + sh = (struct hci_sco_hdr *) ll->rx_skb->data; + + BT_DBG("SCO header: dlen %d", sh->dlen); + + ll_check_data_len(ll, sh->dlen); + continue; + } + } + + /* HCILL_W4_PACKET_TYPE */ + switch (*ptr) { + case HCI_EVENT_PKT: + BT_DBG("Event packet"); + ll->rx_state = HCILL_W4_EVENT_HDR; + ll->rx_count = HCI_EVENT_HDR_SIZE; + type = HCI_EVENT_PKT; + break; + + case HCI_ACLDATA_PKT: + BT_DBG("ACL packet"); + ll->rx_state = HCILL_W4_ACL_HDR; + ll->rx_count = HCI_ACL_HDR_SIZE; + type = HCI_ACLDATA_PKT; + break; + + case HCI_SCODATA_PKT: + BT_DBG("SCO packet"); + ll->rx_state = HCILL_W4_SCO_HDR; + ll->rx_count = HCI_SCO_HDR_SIZE; + type = HCI_SCODATA_PKT; + break; + + /* HCILL signals */ + case HCILL_GO_TO_SLEEP_IND: + BT_DBG("HCILL_GO_TO_SLEEP_IND packet"); + ll_device_want_to_sleep(hu); + ptr++; count--; + continue; + + case HCILL_GO_TO_SLEEP_ACK: + /* shouldn't happen */ + BT_ERR("received HCILL_GO_TO_SLEEP_ACK (in state %ld)", ll->hcill_state); + ptr++; count--; + continue; + + case HCILL_WAKE_UP_IND: + BT_DBG("HCILL_WAKE_UP_IND packet"); + ll_device_want_to_wakeup(hu); + ptr++; count--; + continue; + + case HCILL_WAKE_UP_ACK: + BT_DBG("HCILL_WAKE_UP_ACK packet"); + ll_device_woke_up(hu); + ptr++; count--; + continue; + + default: + BT_ERR("Unknown HCI packet type %2.2x", (__u8)*ptr); + hu->hdev->stat.err_rx++; + ptr++; count--; + continue; + }; + + ptr++; count--; + + /* Allocate packet */ + ll->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC); + if (!ll->rx_skb) { + BT_ERR("Can't allocate mem for 
new packet"); + ll->rx_state = HCILL_W4_PACKET_TYPE; + ll->rx_count = 0; + return 0; + } + + ll->rx_skb->dev = (void *) hu->hdev; + bt_cb(ll->rx_skb)->pkt_type = type; + } + + return count; +} + +static struct sk_buff *ll_dequeue(struct hci_uart *hu) +{ + struct ll_struct *ll = hu->priv; + return skb_dequeue(&ll->txq); +} + +static struct hci_uart_proto llp = { + .id = HCI_UART_LL, + .open = ll_open, + .close = ll_close, + .recv = ll_recv, + .enqueue = ll_enqueue, + .dequeue = ll_dequeue, + .flush = ll_flush, +}; + +int ll_init(void) +{ + int err = hci_uart_register_proto(&llp); + + if (!err) + BT_INFO("HCILL protocol initialized"); + else + BT_ERR("HCILL protocol registration failed"); + + return err; +} + +int ll_deinit(void) +{ + return hci_uart_unregister_proto(&llp); +} diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h index 1097ce72393..50113db06b9 100644 --- a/drivers/bluetooth/hci_uart.h +++ b/drivers/bluetooth/hci_uart.h @@ -33,12 +33,13 @@ #define HCIUARTGETDEVICE _IOR('U', 202, int) /* UART protocols */ -#define HCI_UART_MAX_PROTO 4 +#define HCI_UART_MAX_PROTO 5 #define HCI_UART_H4 0 #define HCI_UART_BCSP 1 #define HCI_UART_3WIRE 2 #define HCI_UART_H4DS 3 +#define HCI_UART_LL 4 struct hci_uart; @@ -85,3 +86,8 @@ int h4_deinit(void); int bcsp_init(void); int bcsp_deinit(void); #endif + +#ifdef CONFIG_BT_HCIUART_LL +int ll_init(void); +int ll_deinit(void); +#endif diff --git a/drivers/char/cyclades.c b/drivers/char/cyclades.c index d1bd0f08a33..e4f579c3e24 100644 --- a/drivers/char/cyclades.c +++ b/drivers/char/cyclades.c @@ -1602,8 +1602,8 @@ static void cyz_handle_tx(struct cyclades_port *info, info->icount.tx++; } #endif -ztxdone: tty_wakeup(tty); +ztxdone: /* Update tx_put */ cy_writel(&buf_ctrl->tx_put, tx_put); } diff --git a/drivers/firewire/fw-ohci.c b/drivers/firewire/fw-ohci.c index 2f307c4df33..67588326ae5 100644 --- a/drivers/firewire/fw-ohci.c +++ b/drivers/firewire/fw-ohci.c @@ -606,7 +606,7 @@ static int at_context_queue_packet(struct context *ctx, struct fw_packet *packet) { struct fw_ohci *ohci = ctx->ohci; - dma_addr_t d_bus, payload_bus; + dma_addr_t d_bus, uninitialized_var(payload_bus); struct driver_data *driver_data; struct descriptor *d, *last; __le32 *header; @@ -1459,7 +1459,7 @@ ohci_allocate_iso_context(struct fw_card *card, int type, size_t header_size) /* FIXME: We need a fallback for pre 1.1 OHCI. */ if (callback == handle_ir_dualbuffer_packet && ohci->version < OHCI_VERSION_1_1) - return ERR_PTR(-EINVAL); + return ERR_PTR(-ENOSYS); spin_lock_irqsave(&ohci->lock, flags); index = ffs(*mask) - 1; @@ -1778,7 +1778,7 @@ ohci_queue_iso(struct fw_iso_context *base, buffer, payload); else /* FIXME: Implement fallback for OHCI 1.0 controllers. */ - return -EINVAL; + return -ENOSYS; } static const struct fw_card_driver ohci_driver = { @@ -1898,7 +1898,12 @@ pci_probe(struct pci_dev *dev, const struct pci_device_id *ent) ohci->version = reg_read(ohci, OHCI1394_Version) & 0x00ff00ff; fw_notify("Added fw-ohci device %s, OHCI version %x.%x\n", dev->dev.bus_id, ohci->version >> 16, ohci->version & 0xff); - + if (ohci->version < OHCI_VERSION_1_1) { + fw_notify(" Isochronous I/O is not yet implemented for " + "OHCI 1.0 chips.\n"); + fw_notify(" Cameras, audio devices etc. 
won't work on " + "this controller with this driver version.\n"); + } return 0; fail_self_id: diff --git a/drivers/ide/cris/ide-cris.c b/drivers/ide/cris/ide-cris.c index ff20377b4c8..e196aefa207 100644 --- a/drivers/ide/cris/ide-cris.c +++ b/drivers/ide/cris/ide-cris.c @@ -935,11 +935,11 @@ static int cris_ide_build_dmatable (ide_drive_t *drive) * than two possibly non-adjacent physical 4kB pages. */ /* group sequential buffers into one large buffer */ - addr = page_to_phys(sg->page) + sg->offset; + addr = sg_phys(sg); size = sg_dma_len(sg); while (--i) { sg = sg_next(sg); - if ((addr + size) != page_to_phys(sg->page) + sg->offset) + if ((addr + size) != sg_phys(sg)) break; size += sg_dma_len(sg); } diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c index d5146c57e5b..6a6f2e066b4 100644 --- a/drivers/ide/ide-probe.c +++ b/drivers/ide/ide-probe.c @@ -47,6 +47,7 @@ #include <linux/spinlock.h> #include <linux/kmod.h> #include <linux/pci.h> +#include <linux/scatterlist.h> #include <asm/byteorder.h> #include <asm/irq.h> @@ -1317,12 +1318,14 @@ static int hwif_init(ide_hwif_t *hwif) if (!hwif->sg_max_nents) hwif->sg_max_nents = PRD_ENTRIES; - hwif->sg_table = kzalloc(sizeof(struct scatterlist)*hwif->sg_max_nents, + hwif->sg_table = kmalloc(sizeof(struct scatterlist)*hwif->sg_max_nents, GFP_KERNEL); if (!hwif->sg_table) { printk(KERN_ERR "%s: unable to allocate SG table.\n", hwif->name); goto out; } + + sg_init_table(hwif->sg_table, hwif->sg_max_nents); if (init_irq(hwif) == 0) goto done; diff --git a/drivers/ide/ide-taskfile.c b/drivers/ide/ide-taskfile.c index 73ef6bf5fbc..d066546f283 100644 --- a/drivers/ide/ide-taskfile.c +++ b/drivers/ide/ide-taskfile.c @@ -261,7 +261,7 @@ static void ide_pio_sector(ide_drive_t *drive, unsigned int write) hwif->cursg = sg; } - page = cursg->page; + page = sg_page(cursg); offset = cursg->offset + hwif->cursg_ofs * SECTOR_SIZE; /* get the current page and offset */ diff --git a/drivers/ide/mips/au1xxx-ide.c b/drivers/ide/mips/au1xxx-ide.c index 1de58566e5b..a4ce3ba15d6 100644 --- a/drivers/ide/mips/au1xxx-ide.c +++ b/drivers/ide/mips/au1xxx-ide.c @@ -276,8 +276,7 @@ static int auide_build_dmatable(ide_drive_t *drive) if (iswrite) { if(!put_source_flags(ahwif->tx_chan, - (void*)(page_address(sg->page) - + sg->offset), + (void*) sg_virt(sg), tc, flags)) { printk(KERN_ERR "%s failed %d\n", __FUNCTION__, __LINE__); @@ -285,8 +284,7 @@ static int auide_build_dmatable(ide_drive_t *drive) } else { if(!put_dest_flags(ahwif->rx_chan, - (void*)(page_address(sg->page) - + sg->offset), + (void*) sg_virt(sg), tc, flags)) { printk(KERN_ERR "%s failed %d\n", __FUNCTION__, __LINE__); diff --git a/drivers/ieee1394/dma.c b/drivers/ieee1394/dma.c index 45d60558192..25e113b50d8 100644 --- a/drivers/ieee1394/dma.c +++ b/drivers/ieee1394/dma.c @@ -111,7 +111,7 @@ int dma_region_alloc(struct dma_region *dma, unsigned long n_bytes, unsigned long va = (unsigned long)dma->kvirt + (i << PAGE_SHIFT); - dma->sglist[i].page = vmalloc_to_page((void *)va); + sg_set_page(&dma->sglist[i], vmalloc_to_page((void *)va)); dma->sglist[i].length = PAGE_SIZE; } diff --git a/drivers/ieee1394/sbp2.c b/drivers/ieee1394/sbp2.c index 1b353b964b3..d5dfe11aa5c 100644 --- a/drivers/ieee1394/sbp2.c +++ b/drivers/ieee1394/sbp2.c @@ -1466,7 +1466,7 @@ static void sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb, cmd->dma_size = sgpnt[0].length; cmd->dma_type = CMD_DMA_PAGE; cmd->cmd_dma = dma_map_page(hi->host->device.parent, - sgpnt[0].page, sgpnt[0].offset, + sg_page(&sgpnt[0]), 
sgpnt[0].offset, cmd->dma_size, cmd->dma_dir); orb->data_descriptor_lo = cmd->cmd_dma; diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index 2f54e29dc7a..14159ff2940 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -55,9 +55,11 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d ib_dma_unmap_sg(dev, chunk->page_list, chunk->nents, DMA_BIDIRECTIONAL); for (i = 0; i < chunk->nents; ++i) { + struct page *page = sg_page(&chunk->page_list[i]); + if (umem->writable && dirty) - set_page_dirty_lock(chunk->page_list[i].page); - put_page(chunk->page_list[i].page); + set_page_dirty_lock(page); + put_page(page); } kfree(chunk); @@ -164,11 +166,12 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr, } chunk->nents = min_t(int, ret, IB_UMEM_MAX_PAGE_CHUNK); + sg_init_table(chunk->page_list, chunk->nents); for (i = 0; i < chunk->nents; ++i) { if (vma_list && !is_vm_hugetlb_page(vma_list[i + off])) umem->hugetlb = 0; - chunk->page_list[i].page = page_list[i + off]; + sg_set_page(&chunk->page_list[i], page_list[i + off]); chunk->page_list[i].offset = 0; chunk->page_list[i].length = PAGE_SIZE; } @@ -179,7 +182,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr, DMA_BIDIRECTIONAL); if (chunk->nmap <= 0) { for (i = 0; i < chunk->nents; ++i) - put_page(chunk->page_list[i].page); + put_page(sg_page(&chunk->page_list[i])); kfree(chunk); ret = -ENOMEM; diff --git a/drivers/infiniband/hw/ehca/ehca_mrmw.c b/drivers/infiniband/hw/ehca/ehca_mrmw.c index da88738265e..ead7230d773 100644 --- a/drivers/infiniband/hw/ehca/ehca_mrmw.c +++ b/drivers/infiniband/hw/ehca/ehca_mrmw.c @@ -1776,7 +1776,7 @@ static int ehca_set_pagebuf_user1(struct ehca_mr_pginfo *pginfo, list_for_each_entry_continue( chunk, (&(pginfo->u.usr.region->chunk_list)), list) { for (i = pginfo->u.usr.next_nmap; i < chunk->nmap; ) { - pgaddr = page_to_pfn(chunk->page_list[i].page) + pgaddr = page_to_pfn(sg_page(&chunk->page_list[i])) << PAGE_SHIFT ; *kpage = phys_to_abs(pgaddr + (pginfo->next_hwpage * @@ -1832,7 +1832,7 @@ static int ehca_check_kpages_per_ate(struct scatterlist *page_list, { int t; for (t = start_idx; t <= end_idx; t++) { - u64 pgaddr = page_to_pfn(page_list[t].page) << PAGE_SHIFT; + u64 pgaddr = page_to_pfn(sg_page(&page_list[t])) << PAGE_SHIFT; ehca_gen_dbg("chunk_page=%lx value=%016lx", pgaddr, *(u64 *)abs_to_virt(phys_to_abs(pgaddr))); if (pgaddr - PAGE_SIZE != *prev_pgaddr) { @@ -1867,7 +1867,7 @@ static int ehca_set_pagebuf_user2(struct ehca_mr_pginfo *pginfo, chunk, (&(pginfo->u.usr.region->chunk_list)), list) { for (i = pginfo->u.usr.next_nmap; i < chunk->nmap; ) { if (nr_kpages == kpages_per_hwpage) { - pgaddr = ( page_to_pfn(chunk->page_list[i].page) + pgaddr = ( page_to_pfn(sg_page(&chunk->page_list[i])) << PAGE_SHIFT ); *kpage = phys_to_abs(pgaddr); if ( !(*kpage) ) { diff --git a/drivers/infiniband/hw/ipath/ipath_dma.c b/drivers/infiniband/hw/ipath/ipath_dma.c index 22709a4f8fc..e90a0ea538a 100644 --- a/drivers/infiniband/hw/ipath/ipath_dma.c +++ b/drivers/infiniband/hw/ipath/ipath_dma.c @@ -108,7 +108,7 @@ static int ipath_map_sg(struct ib_device *dev, struct scatterlist *sgl, BUG_ON(!valid_dma_direction(direction)); for_each_sg(sgl, sg, nents, i) { - addr = (u64) page_address(sg->page); + addr = (u64) page_address(sg_page(sg)); /* TODO: handle highmem pages */ if (!addr) { ret = 0; @@ -127,7 +127,7 @@ static void ipath_unmap_sg(struct ib_device *dev, static u64 
ipath_sg_dma_address(struct ib_device *dev, struct scatterlist *sg) { - u64 addr = (u64) page_address(sg->page); + u64 addr = (u64) page_address(sg_page(sg)); if (addr) addr += sg->offset; diff --git a/drivers/infiniband/hw/ipath/ipath_mr.c b/drivers/infiniband/hw/ipath/ipath_mr.c index e442470a237..db4ba92f79f 100644 --- a/drivers/infiniband/hw/ipath/ipath_mr.c +++ b/drivers/infiniband/hw/ipath/ipath_mr.c @@ -225,7 +225,7 @@ struct ib_mr *ipath_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, for (i = 0; i < chunk->nents; i++) { void *vaddr; - vaddr = page_address(chunk->page_list[i].page); + vaddr = page_address(sg_page(&chunk->page_list[i])); if (!vaddr) { ret = ERR_PTR(-EINVAL); goto bail; diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c index e61f3e62698..007b38157fc 100644 --- a/drivers/infiniband/hw/mthca/mthca_memfree.c +++ b/drivers/infiniband/hw/mthca/mthca_memfree.c @@ -71,7 +71,7 @@ static void mthca_free_icm_pages(struct mthca_dev *dev, struct mthca_icm_chunk * PCI_DMA_BIDIRECTIONAL); for (i = 0; i < chunk->npages; ++i) - __free_pages(chunk->mem[i].page, + __free_pages(sg_page(&chunk->mem[i]), get_order(chunk->mem[i].length)); } @@ -81,7 +81,7 @@ static void mthca_free_icm_coherent(struct mthca_dev *dev, struct mthca_icm_chun for (i = 0; i < chunk->npages; ++i) { dma_free_coherent(&dev->pdev->dev, chunk->mem[i].length, - lowmem_page_address(chunk->mem[i].page), + lowmem_page_address(sg_page(&chunk->mem[i])), sg_dma_address(&chunk->mem[i])); } } @@ -107,10 +107,13 @@ void mthca_free_icm(struct mthca_dev *dev, struct mthca_icm *icm, int coherent) static int mthca_alloc_icm_pages(struct scatterlist *mem, int order, gfp_t gfp_mask) { - mem->page = alloc_pages(gfp_mask, order); - if (!mem->page) + struct page *page; + + page = alloc_pages(gfp_mask, order); + if (!page) return -ENOMEM; + sg_set_page(mem, page); mem->length = PAGE_SIZE << order; mem->offset = 0; return 0; @@ -157,6 +160,7 @@ struct mthca_icm *mthca_alloc_icm(struct mthca_dev *dev, int npages, if (!chunk) goto fail; + sg_init_table(chunk->mem, MTHCA_ICM_CHUNK_LEN); chunk->npages = 0; chunk->nsg = 0; list_add_tail(&chunk->list, &icm->chunk_list); @@ -304,7 +308,7 @@ void *mthca_table_find(struct mthca_icm_table *table, int obj, dma_addr_t *dma_h * so if we found the page, dma_handle has already * been assigned to. 
*/ if (chunk->mem[i].length > offset) { - page = chunk->mem[i].page; + page = sg_page(&chunk->mem[i]); goto out; } offset -= chunk->mem[i].length; @@ -445,6 +449,7 @@ static u64 mthca_uarc_virt(struct mthca_dev *dev, struct mthca_uar *uar, int pag int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar, struct mthca_user_db_table *db_tab, int index, u64 uaddr) { + struct page *pages[1]; int ret = 0; u8 status; int i; @@ -472,16 +477,17 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar, } ret = get_user_pages(current, current->mm, uaddr & PAGE_MASK, 1, 1, 0, - &db_tab->page[i].mem.page, NULL); + pages, NULL); if (ret < 0) goto out; + sg_set_page(&db_tab->page[i].mem, pages[0]); db_tab->page[i].mem.length = MTHCA_ICM_PAGE_SIZE; db_tab->page[i].mem.offset = uaddr & ~PAGE_MASK; ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); if (ret < 0) { - put_page(db_tab->page[i].mem.page); + put_page(pages[0]); goto out; } @@ -491,7 +497,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar, ret = -EINVAL; if (ret) { pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); - put_page(db_tab->page[i].mem.page); + put_page(sg_page(&db_tab->page[i].mem)); goto out; } @@ -557,7 +563,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar, if (db_tab->page[i].uvirt) { mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1, &status); pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE); - put_page(db_tab->page[i].mem.page); + put_page(sg_page(&db_tab->page[i].mem)); } } diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c index f3529b6f0a3..d6879806179 100644 --- a/drivers/infiniband/ulp/iser/iser_memory.c +++ b/drivers/infiniband/ulp/iser/iser_memory.c @@ -131,7 +131,7 @@ static int iser_start_rdma_unaligned_sg(struct iscsi_iser_cmd_task *iser_ctask, p = mem; for_each_sg(sgl, sg, data->size, i) { - from = kmap_atomic(sg->page, KM_USER0); + from = kmap_atomic(sg_page(sg), KM_USER0); memcpy(p, from + sg->offset, sg->length); @@ -191,7 +191,7 @@ void iser_finalize_rdma_unaligned_sg(struct iscsi_iser_cmd_task *iser_ctask, p = mem; for_each_sg(sgl, sg, sg_size, i) { - to = kmap_atomic(sg->page, KM_SOFTIRQ0); + to = kmap_atomic(sg_page(sg), KM_SOFTIRQ0); memcpy(to + sg->offset, p, sg->length); @@ -300,7 +300,7 @@ static unsigned int iser_data_buf_aligned_len(struct iser_data_buf *data, for_each_sg(sgl, sg, data->dma_nents, i) { /* iser_dbg("Checking sg iobuf [%d]: phys=0x%08lX " "offset: %ld sz: %ld\n", i, - (unsigned long)page_to_phys(sg->page), + (unsigned long)sg_phys(sg), (unsigned long)sg->offset, (unsigned long)sg->length); */ end_addr = ib_sg_dma_address(ibdev, sg) + @@ -336,7 +336,7 @@ static void iser_data_buf_dump(struct iser_data_buf *data, iser_err("sg[%d] dma_addr:0x%lX page:0x%p " "off:0x%x sz:0x%x dma_len:0x%x\n", i, (unsigned long)ib_sg_dma_address(ibdev, sg), - sg->page, sg->offset, + sg_page(sg), sg->offset, sg->length, ib_sg_dma_len(ibdev, sg)); } diff --git a/drivers/input/keyboard/bf54x-keys.c b/drivers/input/keyboard/bf54x-keys.c index a67b29b089e..e5f4da92834 100644 --- a/drivers/input/keyboard/bf54x-keys.c +++ b/drivers/input/keyboard/bf54x-keys.c @@ -256,7 +256,6 @@ static int __devinit bfin_kpad_probe(struct platform_device *pdev) printk(KERN_ERR DRV_NAME ": unable to claim irq %d; error %d\n", bf54x_kpad->irq, error); - error = -EBUSY; goto out2; } diff --git a/drivers/input/mouse/appletouch.c 
b/drivers/input/mouse/appletouch.c index 0117817bf53..f132702d137 100644 --- a/drivers/input/mouse/appletouch.c +++ b/drivers/input/mouse/appletouch.c @@ -504,25 +504,22 @@ static void atp_complete(struct urb* urb) memset(dev->xy_acc, 0, sizeof(dev->xy_acc)); } - /* Geyser 3 will continue to send packets continually after + input_report_key(dev->input, BTN_LEFT, key); + input_sync(dev->input); + + /* Many Geysers will continue to send packets continually after the first touch unless reinitialised. Do so if it's been idle for a while in order to avoid waking the kernel up several hundred times a second */ - if (atp_is_geyser_3(dev)) { - if (!x && !y && !key) { - dev->idlecount++; - if (dev->idlecount == 10) { - dev->valid = 0; - schedule_work(&dev->work); - } + if (!x && !y && !key) { + dev->idlecount++; + if (dev->idlecount == 10) { + dev->valid = 0; + schedule_work(&dev->work); } - else - dev->idlecount = 0; - } - - input_report_key(dev->input, BTN_LEFT, key); - input_sync(dev->input); + } else + dev->idlecount = 0; exit: retval = usb_submit_urb(dev->urb, GFP_ATOMIC); diff --git a/drivers/input/serio/i8042.c b/drivers/input/serio/i8042.c index 11dafc0ee99..1a0cea3c529 100644 --- a/drivers/input/serio/i8042.c +++ b/drivers/input/serio/i8042.c @@ -20,6 +20,7 @@ #include <linux/err.h> #include <linux/rcupdate.h> #include <linux/platform_device.h> +#include <linux/i8042.h> #include <asm/io.h> @@ -208,7 +209,7 @@ static int __i8042_command(unsigned char *param, int command) return 0; } -static int i8042_command(unsigned char *param, int command) +int i8042_command(unsigned char *param, int command) { unsigned long flags; int retval; @@ -219,6 +220,7 @@ static int i8042_command(unsigned char *param, int command) return retval; } +EXPORT_SYMBOL(i8042_command); /* * i8042_kbd_write() sends a byte out through the keyboard interface. diff --git a/drivers/input/serio/i8042.h b/drivers/input/serio/i8042.h index b3eb7a72d96..dd22d91f8b3 100644 --- a/drivers/input/serio/i8042.h +++ b/drivers/input/serio/i8042.h @@ -61,28 +61,6 @@ #define I8042_CTR_XLATE 0x40 /* - * Commands. - */ - -#define I8042_CMD_CTL_RCTR 0x0120 -#define I8042_CMD_CTL_WCTR 0x1060 -#define I8042_CMD_CTL_TEST 0x01aa - -#define I8042_CMD_KBD_DISABLE 0x00ad -#define I8042_CMD_KBD_ENABLE 0x00ae -#define I8042_CMD_KBD_TEST 0x01ab -#define I8042_CMD_KBD_LOOP 0x11d2 - -#define I8042_CMD_AUX_DISABLE 0x00a7 -#define I8042_CMD_AUX_ENABLE 0x00a8 -#define I8042_CMD_AUX_TEST 0x01a9 -#define I8042_CMD_AUX_SEND 0x10d4 -#define I8042_CMD_AUX_LOOP 0x11d3 - -#define I8042_CMD_MUX_PFX 0x0090 -#define I8042_CMD_MUX_SEND 0x1090 - -/* * Return codes. */ diff --git a/drivers/input/touchscreen/Kconfig b/drivers/input/touchscreen/Kconfig index e3e0baa1a15..fa8442b6241 100644 --- a/drivers/input/touchscreen/Kconfig +++ b/drivers/input/touchscreen/Kconfig @@ -202,6 +202,7 @@ config TOUCHSCREEN_USB_COMPOSITE - DMC TSC-10/25 - IRTOUCHSYSTEMS/UNITOP - IdealTEK URTC1000 + - GoTop Super_Q2/GogoPen/PenPower tablets Have a look at <http://linux.chapter7.ch/touchkit/> for a usage description and the required user-space stuff. 
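The i8042 hunks above export i8042_command() and drop the controller command constants from the driver-private drivers/input/serio/i8042.h; a minimal usage sketch follows, assuming those I8042_CMD_* values are now provided by the shared <linux/i8042.h> header that the patch starts including (the helper name below is illustrative, not part of the patch):

#include <linux/i8042.h>

/*
 * Sketch only: read the i8042 controller's command byte through the
 * newly exported i8042_command(). I8042_CMD_CTL_RCTR (0x0120 in the
 * old private header) requests one byte back from the controller;
 * the call returns 0 on success with the byte stored in *ctr.
 */
static int example_read_i8042_ctr(unsigned char *ctr)
{
	return i8042_command(ctr, I8042_CMD_CTL_RCTR);
}
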
@@ -259,4 +260,9 @@ config TOUCHSCREEN_USB_GENERAL_TOUCH bool "GeneralTouch Touchscreen device support" if EMBEDDED depends on TOUCHSCREEN_USB_COMPOSITE +config TOUCHSCREEN_USB_GOTOP + default y + bool "GoTop Super_Q2/GogoPen/PenPower tablet device support" if EMBEDDED + depends on TOUCHSCREEN_USB_COMPOSITE + endif diff --git a/drivers/input/touchscreen/usbtouchscreen.c b/drivers/input/touchscreen/usbtouchscreen.c index 5f34b78d5dd..19055e7381f 100644 --- a/drivers/input/touchscreen/usbtouchscreen.c +++ b/drivers/input/touchscreen/usbtouchscreen.c @@ -11,8 +11,9 @@ * - DMC TSC-10/25 * - IRTOUCHSYSTEMS/UNITOP * - IdealTEK URTC1000 + * - GoTop Super_Q2/GogoPen/PenPower tablets * - * Copyright (C) 2004-2006 by Daniel Ritz <daniel.ritz@gmx.ch> + * Copyright (C) 2004-2007 by Daniel Ritz <daniel.ritz@gmx.ch> * Copyright (C) by Todd E. Johnson (mtouchusb.c) * * This program is free software; you can redistribute it and/or @@ -115,6 +116,7 @@ enum { DEVTYPE_IRTOUCH, DEVTYPE_IDEALTEK, DEVTYPE_GENERAL_TOUCH, + DEVTYPE_GOTOP, }; static struct usb_device_id usbtouch_devices[] = { @@ -168,6 +170,12 @@ static struct usb_device_id usbtouch_devices[] = { {USB_DEVICE(0x0dfc, 0x0001), .driver_info = DEVTYPE_GENERAL_TOUCH}, #endif +#ifdef CONFIG_TOUCHSCREEN_USB_GOTOP + {USB_DEVICE(0x08f2, 0x007f), .driver_info = DEVTYPE_GOTOP}, + {USB_DEVICE(0x08f2, 0x00ce), .driver_info = DEVTYPE_GOTOP}, + {USB_DEVICE(0x08f2, 0x00f4), .driver_info = DEVTYPE_GOTOP}, +#endif + {} }; @@ -501,6 +509,20 @@ static int general_touch_read_data(struct usbtouch_usb *dev, unsigned char *pkt) #endif /***************************************************************************** + * GoTop Part + */ +#ifdef CONFIG_TOUCHSCREEN_USB_GOTOP +static int gotop_read_data(struct usbtouch_usb *dev, unsigned char *pkt) +{ + dev->x = ((pkt[1] & 0x38) << 4) | pkt[2]; + dev->y = ((pkt[1] & 0x07) << 7) | pkt[3]; + dev->touch = pkt[0] & 0x01; + return 1; +} +#endif + + +/***************************************************************************** * the different device descriptors */ static struct usbtouch_device_info usbtouch_dev_info[] = { @@ -623,9 +645,19 @@ static struct usbtouch_device_info usbtouch_dev_info[] = { .max_yc = 0x0500, .rept_size = 7, .read_data = general_touch_read_data, - } + }, #endif +#ifdef CONFIG_TOUCHSCREEN_USB_GOTOP + [DEVTYPE_GOTOP] = { + .min_xc = 0x0, + .max_xc = 0x03ff, + .min_yc = 0x0, + .max_yc = 0x03ff, + .rept_size = 4, + .read_data = gotop_read_data, + }, +#endif }; diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c index af2d288c881..07ae280e8fe 100644 --- a/drivers/kvm/kvm_main.c +++ b/drivers/kvm/kvm_main.c @@ -198,21 +198,15 @@ static void vcpu_put(struct kvm_vcpu *vcpu) static void ack_flush(void *_completed) { - atomic_t *completed = _completed; - - atomic_inc(completed); } void kvm_flush_remote_tlbs(struct kvm *kvm) { - int i, cpu, needed; + int i, cpu; cpumask_t cpus; struct kvm_vcpu *vcpu; - atomic_t completed; - atomic_set(&completed, 0); cpus_clear(cpus); - needed = 0; for (i = 0; i < KVM_MAX_VCPUS; ++i) { vcpu = kvm->vcpus[i]; if (!vcpu) @@ -221,23 +215,9 @@ void kvm_flush_remote_tlbs(struct kvm *kvm) continue; cpu = vcpu->cpu; if (cpu != -1 && cpu != raw_smp_processor_id()) - if (!cpu_isset(cpu, cpus)) { - cpu_set(cpu, cpus); - ++needed; - } - } - - /* - * We really want smp_call_function_mask() here. But that's not - * available, so ipi all cpus in parallel and wait for them - * to complete. 
- */ - for (cpu = first_cpu(cpus); cpu != NR_CPUS; cpu = next_cpu(cpu, cpus)) - smp_call_function_single(cpu, ack_flush, &completed, 1, 0); - while (atomic_read(&completed) != needed) { - cpu_relax(); - barrier(); + cpu_set(cpu, cpus); } + smp_call_function_mask(cpus, ack_flush, NULL, 1); } int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id) @@ -2054,12 +2034,21 @@ again: kvm_x86_ops->run(vcpu, kvm_run); - kvm_guest_exit(); vcpu->guest_mode = 0; local_irq_enable(); ++vcpu->stat.exits; + /* + * We must have an instruction between local_irq_enable() and + * kvm_guest_exit(), so the timer interrupt isn't delayed by + * the interrupt shadow. The stat.exits increment will do nicely. + * But we need to prevent reordering, hence this barrier(): + */ + barrier(); + + kvm_guest_exit(); + preempt_enable(); /* diff --git a/drivers/kvm/lapic.c b/drivers/kvm/lapic.c index a190587cf6a..238fcad3cec 100644 --- a/drivers/kvm/lapic.c +++ b/drivers/kvm/lapic.c @@ -494,12 +494,19 @@ static void apic_send_ipi(struct kvm_lapic *apic) static u32 apic_get_tmcct(struct kvm_lapic *apic) { - u32 counter_passed; - ktime_t passed, now = apic->timer.dev.base->get_time(); - u32 tmcct = apic_get_reg(apic, APIC_TMICT); + u64 counter_passed; + ktime_t passed, now; + u32 tmcct; ASSERT(apic != NULL); + now = apic->timer.dev.base->get_time(); + tmcct = apic_get_reg(apic, APIC_TMICT); + + /* if initial count is 0, current count should also be 0 */ + if (tmcct == 0) + return 0; + if (unlikely(ktime_to_ns(now) <= ktime_to_ns(apic->timer.last_update))) { /* Wrap around */ @@ -514,15 +521,24 @@ static u32 apic_get_tmcct(struct kvm_lapic *apic) counter_passed = div64_64(ktime_to_ns(passed), (APIC_BUS_CYCLE_NS * apic->timer.divide_count)); - tmcct -= counter_passed; - if (tmcct <= 0) { - if (unlikely(!apic_lvtt_period(apic))) + if (counter_passed > tmcct) { + if (unlikely(!apic_lvtt_period(apic))) { + /* one-shot timers stick at 0 until reset */ tmcct = 0; - else - do { - tmcct += apic_get_reg(apic, APIC_TMICT); - } while (tmcct <= 0); + } else { + /* + * periodic timers reset to APIC_TMICT when they + * hit 0. The while loop simulates this happening N + * times. (counter_passed %= tmcct) would also work, + * but might be slower or not work on 32-bit?? 
+ */ + while (counter_passed > tmcct) + counter_passed -= tmcct; + tmcct -= counter_passed; + } + } else { + tmcct -= counter_passed; } return tmcct; @@ -853,7 +869,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu) apic_set_reg(apic, APIC_ISR + 0x10 * i, 0); apic_set_reg(apic, APIC_TMR + 0x10 * i, 0); } - apic->timer.divide_count = 0; + update_divide_count(apic); atomic_set(&apic->timer.pending, 0); if (vcpu->vcpu_id == 0) vcpu->apic_base |= MSR_IA32_APICBASE_BSP; diff --git a/drivers/kvm/mmu.c b/drivers/kvm/mmu.c index 6d84d30f5ed..feb5ac986c5 100644 --- a/drivers/kvm/mmu.c +++ b/drivers/kvm/mmu.c @@ -1049,6 +1049,7 @@ int kvm_mmu_reset_context(struct kvm_vcpu *vcpu) destroy_kvm_mmu(vcpu); return init_kvm_mmu(vcpu); } +EXPORT_SYMBOL_GPL(kvm_mmu_reset_context); int kvm_mmu_load(struct kvm_vcpu *vcpu) { @@ -1088,7 +1089,7 @@ static void mmu_pte_write_zap_pte(struct kvm_vcpu *vcpu, mmu_page_remove_parent_pte(child, spte); } } - *spte = 0; + set_shadow_pte(spte, 0); kvm_flush_remote_tlbs(vcpu->kvm); } diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c index 4f115a8e45e..bb56ae3f89b 100644 --- a/drivers/kvm/vmx.c +++ b/drivers/kvm/vmx.c @@ -523,6 +523,8 @@ static unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu) static void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags) { + if (vcpu->rmode.active) + rflags |= IOPL_MASK | X86_EFLAGS_VM; vmcs_writel(GUEST_RFLAGS, rflags); } @@ -1128,6 +1130,7 @@ static void enter_rmode(struct kvm_vcpu *vcpu) fix_rmode_seg(VCPU_SREG_GS, &vcpu->rmode.gs); fix_rmode_seg(VCPU_SREG_FS, &vcpu->rmode.fs); + kvm_mmu_reset_context(vcpu); init_rmode_tss(vcpu->kvm); } @@ -1760,10 +1763,8 @@ static int handle_exception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) set_bit(irq / BITS_PER_LONG, &vcpu->irq_summary); } - if ((intr_info & INTR_INFO_INTR_TYPE_MASK) == 0x200) { /* nmi */ - asm ("int $2"); - return 1; - } + if ((intr_info & INTR_INFO_INTR_TYPE_MASK) == 0x200) /* nmi */ + return 1; /* already handled by vmx_vcpu_run() */ if (is_no_device(intr_info)) { vmx_fpu_activate(vcpu); @@ -2196,6 +2197,7 @@ static void vmx_intr_assist(struct kvm_vcpu *vcpu) static void vmx_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) { struct vcpu_vmx *vmx = to_vmx(vcpu); + u32 intr_info; /* * Loading guest fpu may have cleared host cr0.ts @@ -2322,6 +2324,12 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) asm ("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS)); vmx->launched = 1; + + intr_info = vmcs_read32(VM_EXIT_INTR_INFO); + + /* We need to handle NMIs before interrupts are enabled */ + if ((intr_info & INTR_INFO_INTR_TYPE_MASK) == 0x200) /* nmi */ + asm("int $2"); } static void vmx_inject_page_fault(struct kvm_vcpu *vcpu, diff --git a/drivers/kvm/x86_emulate.c b/drivers/kvm/x86_emulate.c index 9737c3b2f48..a6ace302e0c 100644 --- a/drivers/kvm/x86_emulate.c +++ b/drivers/kvm/x86_emulate.c @@ -212,7 +212,8 @@ static u16 twobyte_table[256] = { 0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov, DstReg | SrcMem16 | ModRM | Mov, /* 0xC0 - 0xCF */ - 0, 0, 0, 0, 0, 0, 0, ImplicitOps | ModRM, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, DstMem | SrcReg | ModRM | Mov, 0, 0, 0, ImplicitOps | ModRM, + 0, 0, 0, 0, 0, 0, 0, 0, /* 0xD0 - 0xDF */ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0xE0 - 0xEF */ @@ -596,11 +597,10 @@ x86_emulate_memop(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops) case 0xf0: /* LOCK */ lock_prefix = 1; break; + case 0xf2: /* REPNE/REPNZ */ case 0xf3: /* REP/REPE/REPZ */ rep_prefix = 1; break; - case 0xf2: /* REPNE/REPNZ 
*/ - break; default: goto done_prefixes; } @@ -825,6 +825,14 @@ done_prefixes: if (twobyte && b == 0x01 && modrm_reg == 7) break; srcmem_common: + /* + * For instructions with a ModR/M byte, switch to register + * access if Mod = 3. + */ + if ((d & ModRM) && modrm_mod == 3) { + src.type = OP_REG; + break; + } src.type = OP_MEM; src.ptr = (unsigned long *)cr2; src.val = 0; @@ -893,6 +901,14 @@ done_prefixes: dst.ptr = (unsigned long *)cr2; dst.bytes = (d & ByteOp) ? 1 : op_bytes; dst.val = 0; + /* + * For instructions with a ModR/M byte, switch to register + * access if Mod = 3. + */ + if ((d & ModRM) && modrm_mod == 3) { + dst.type = OP_REG; + break; + } if (d & BitOp) { unsigned long mask = ~(dst.bytes * 8 - 1); @@ -1083,31 +1099,6 @@ push: case 0xd2 ... 0xd3: /* Grp2 */ src.val = _regs[VCPU_REGS_RCX]; goto grp2; - case 0xe8: /* call (near) */ { - long int rel; - switch (op_bytes) { - case 2: - rel = insn_fetch(s16, 2, _eip); - break; - case 4: - rel = insn_fetch(s32, 4, _eip); - break; - case 8: - rel = insn_fetch(s64, 8, _eip); - break; - default: - DPRINTF("Call: Invalid op_bytes\n"); - goto cannot_emulate; - } - src.val = (unsigned long) _eip; - JMP_REL(rel); - goto push; - } - case 0xe9: /* jmp rel */ - case 0xeb: /* jmp rel short */ - JMP_REL(src.val); - no_wb = 1; /* Disable writeback. */ - break; case 0xf6 ... 0xf7: /* Grp3 */ switch (modrm_reg) { case 0 ... 1: /* test */ @@ -1350,6 +1341,32 @@ special_insn: case 0xae ... 0xaf: /* scas */ DPRINTF("Urk! I don't handle SCAS.\n"); goto cannot_emulate; + case 0xe8: /* call (near) */ { + long int rel; + switch (op_bytes) { + case 2: + rel = insn_fetch(s16, 2, _eip); + break; + case 4: + rel = insn_fetch(s32, 4, _eip); + break; + case 8: + rel = insn_fetch(s64, 8, _eip); + break; + default: + DPRINTF("Call: Invalid op_bytes\n"); + goto cannot_emulate; + } + src.val = (unsigned long) _eip; + JMP_REL(rel); + goto push; + } + case 0xe9: /* jmp rel */ + case 0xeb: /* jmp rel short */ + JMP_REL(src.val); + no_wb = 1; /* Disable writeback. */ + break; + } goto writeback; @@ -1501,6 +1518,10 @@ twobyte_insn: dst.bytes = op_bytes; dst.val = (d & ByteOp) ? (s8) src.val : (s16) src.val; break; + case 0xc3: /* movnti */ + dst.bytes = op_bytes; + dst.val = (op_bytes == 4) ? 
(u32) src.val : (u64) src.val; + break; } goto writeback; diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c index 927cb34c480..7c426d07a55 100644 --- a/drivers/md/bitmap.c +++ b/drivers/md/bitmap.c @@ -274,7 +274,7 @@ static int write_sb_page(struct bitmap *bitmap, struct page *page, int wait) if (bitmap->offset < 0) { /* DATA BITMAP METADATA */ if (bitmap->offset - + page->index * (PAGE_SIZE/512) + + (long)(page->index * (PAGE_SIZE/512)) + size/512 > 0) /* bitmap runs in to metadata */ return -EINVAL; diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 0eb5416798b..ac54f697c50 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -348,16 +348,17 @@ static int crypt_convert(struct crypt_config *cc, ctx->idx_out < ctx->bio_out->bi_vcnt) { struct bio_vec *bv_in = bio_iovec_idx(ctx->bio_in, ctx->idx_in); struct bio_vec *bv_out = bio_iovec_idx(ctx->bio_out, ctx->idx_out); - struct scatterlist sg_in = { - .page = bv_in->bv_page, - .offset = bv_in->bv_offset + ctx->offset_in, - .length = 1 << SECTOR_SHIFT - }; - struct scatterlist sg_out = { - .page = bv_out->bv_page, - .offset = bv_out->bv_offset + ctx->offset_out, - .length = 1 << SECTOR_SHIFT - }; + struct scatterlist sg_in, sg_out; + + sg_init_table(&sg_in, 1); + sg_set_page(&sg_in, bv_in->bv_page); + sg_in.offset = bv_in->bv_offset + ctx->offset_in; + sg_in.length = 1 << SECTOR_SHIFT; + + sg_init_table(&sg_out, 1); + sg_set_page(&sg_out, bv_out->bv_page); + sg_out.offset = bv_out->bv_offset + ctx->offset_out; + sg_out.length = 1 << SECTOR_SHIFT; ctx->offset_in += sg_in.length; if (ctx->offset_in >= bv_in->bv_len) { diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 8ee181a01f5..80a67d789b7 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -376,7 +376,12 @@ static unsigned long get_stripe_work(struct stripe_head *sh) ack++; sh->ops.count -= ack; - BUG_ON(sh->ops.count < 0); + if (unlikely(sh->ops.count < 0)) { + printk(KERN_ERR "pending: %#lx ops.pending: %#lx ops.ack: %#lx " + "ops.complete: %#lx\n", pending, sh->ops.pending, + sh->ops.ack, sh->ops.complete); + BUG(); + } return pending; } @@ -550,8 +555,7 @@ static void ops_complete_biofill(void *stripe_head_ref) } } } - clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack); - clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending); + set_bit(STRIPE_OP_BIOFILL, &sh->ops.complete); return_io(return_bi); @@ -2893,6 +2897,13 @@ static void handle_stripe6(struct stripe_head *sh, struct page *tmp_page) s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state); /* Now to look around and see what can be done */ + /* clean-up completed biofill operations */ + if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) { + clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending); + clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack); + clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete); + } + rcu_read_lock(); for (i=disks; i--; ) { mdk_rdev_t *rdev; diff --git a/drivers/media/common/ir-keymaps.c b/drivers/media/common/ir-keymaps.c index aefcf28da1c..185e8a860c1 100644 --- a/drivers/media/common/ir-keymaps.c +++ b/drivers/media/common/ir-keymaps.c @@ -1074,41 +1074,41 @@ EXPORT_SYMBOL_GPL(ir_codes_manli); /* Mike Baikov <mike@baikov.com> */ IR_KEYTAB_TYPE ir_codes_gotview7135[IR_KEYTAB_SIZE] = { - [ 0x21 ] = KEY_POWER, - [ 0x69 ] = KEY_TV, - [ 0x33 ] = KEY_0, - [ 0x51 ] = KEY_1, - [ 0x31 ] = KEY_2, - [ 0x71 ] = KEY_3, - [ 0x3b ] = KEY_4, - [ 0x58 ] = KEY_5, - [ 0x41 ] = KEY_6, - [ 0x48 ] = KEY_7, - [ 0x30 ] = KEY_8, - [ 0x53 ] = KEY_9, - [ 0x73 ] = KEY_AGAIN, /* LOOP */ - [ 0x0a ] = KEY_AUDIO, - [ 0x61 ] 
= KEY_PRINT, /* PREVIEW */ - [ 0x7a ] = KEY_VIDEO, - [ 0x20 ] = KEY_CHANNELUP, - [ 0x40 ] = KEY_CHANNELDOWN, - [ 0x18 ] = KEY_VOLUMEDOWN, - [ 0x50 ] = KEY_VOLUMEUP, - [ 0x10 ] = KEY_MUTE, - [ 0x4a ] = KEY_SEARCH, - [ 0x7b ] = KEY_SHUFFLE, /* SNAPSHOT */ - [ 0x22 ] = KEY_RECORD, - [ 0x62 ] = KEY_STOP, - [ 0x78 ] = KEY_PLAY, - [ 0x39 ] = KEY_REWIND, - [ 0x59 ] = KEY_PAUSE, - [ 0x19 ] = KEY_FORWARD, - [ 0x09 ] = KEY_ZOOM, - - [ 0x52 ] = KEY_F21, /* LIVE TIMESHIFT */ - [ 0x1a ] = KEY_F22, /* MIN TIMESHIFT */ - [ 0x3a ] = KEY_F23, /* TIMESHIFT */ - [ 0x70 ] = KEY_F24, /* NORMAL TIMESHIFT */ + [ 0x11 ] = KEY_POWER, + [ 0x35 ] = KEY_TV, + [ 0x1b ] = KEY_0, + [ 0x29 ] = KEY_1, + [ 0x19 ] = KEY_2, + [ 0x39 ] = KEY_3, + [ 0x1f ] = KEY_4, + [ 0x2c ] = KEY_5, + [ 0x21 ] = KEY_6, + [ 0x24 ] = KEY_7, + [ 0x18 ] = KEY_8, + [ 0x2b ] = KEY_9, + [ 0x3b ] = KEY_AGAIN, /* LOOP */ + [ 0x06 ] = KEY_AUDIO, + [ 0x31 ] = KEY_PRINT, /* PREVIEW */ + [ 0x3e ] = KEY_VIDEO, + [ 0x10 ] = KEY_CHANNELUP, + [ 0x20 ] = KEY_CHANNELDOWN, + [ 0x0c ] = KEY_VOLUMEDOWN, + [ 0x28 ] = KEY_VOLUMEUP, + [ 0x08 ] = KEY_MUTE, + [ 0x26 ] = KEY_SEARCH, /*SCAN*/ + [ 0x3f ] = KEY_SHUFFLE, /* SNAPSHOT */ + [ 0x12 ] = KEY_RECORD, + [ 0x32 ] = KEY_STOP, + [ 0x3c ] = KEY_PLAY, + [ 0x1d ] = KEY_REWIND, + [ 0x2d ] = KEY_PAUSE, + [ 0x0d ] = KEY_FORWARD, + [ 0x05 ] = KEY_ZOOM, /*FULL*/ + + [ 0x2a ] = KEY_F21, /* LIVE TIMESHIFT */ + [ 0x0e ] = KEY_F22, /* MIN TIMESHIFT */ + [ 0x1e ] = KEY_F23, /* TIMESHIFT */ + [ 0x38 ] = KEY_F24, /* NORMAL TIMESHIFT */ }; EXPORT_SYMBOL_GPL(ir_codes_gotview7135); diff --git a/drivers/media/common/saa7146_core.c b/drivers/media/common/saa7146_core.c index 365a22118a0..2b1f8b4be00 100644 --- a/drivers/media/common/saa7146_core.c +++ b/drivers/media/common/saa7146_core.c @@ -112,12 +112,13 @@ static struct scatterlist* vmalloc_to_sg(unsigned char *virt, int nr_pages) sglist = kcalloc(nr_pages, sizeof(struct scatterlist), GFP_KERNEL); if (NULL == sglist) return NULL; + sg_init_table(sglist, nr_pages); for (i = 0; i < nr_pages; i++, virt += PAGE_SIZE) { pg = vmalloc_to_page(virt); if (NULL == pg) goto err; BUG_ON(PageHighMem(pg)); - sglist[i].page = pg; + sg_set_page(&sglist[i], pg); sglist[i].length = PAGE_SIZE; } return sglist; diff --git a/drivers/media/dvb/cinergyT2/cinergyT2.c b/drivers/media/dvb/cinergyT2/cinergyT2.c index a05e5c18228..db08b0a8888 100644 --- a/drivers/media/dvb/cinergyT2/cinergyT2.c +++ b/drivers/media/dvb/cinergyT2/cinergyT2.c @@ -345,7 +345,9 @@ static int cinergyt2_start_feed(struct dvb_demux_feed *dvbdmxfeed) struct dvb_demux *demux = dvbdmxfeed->demux; struct cinergyt2 *cinergyt2 = demux->priv; - if (cinergyt2->disconnect_pending || mutex_lock_interruptible(&cinergyt2->sem)) + if (cinergyt2->disconnect_pending) + return -EAGAIN; + if (mutex_lock_interruptible(&cinergyt2->sem)) return -ERESTARTSYS; if (cinergyt2->streaming == 0) @@ -361,7 +363,9 @@ static int cinergyt2_stop_feed(struct dvb_demux_feed *dvbdmxfeed) struct dvb_demux *demux = dvbdmxfeed->demux; struct cinergyt2 *cinergyt2 = demux->priv; - if (cinergyt2->disconnect_pending || mutex_lock_interruptible(&cinergyt2->sem)) + if (cinergyt2->disconnect_pending) + return -EAGAIN; + if (mutex_lock_interruptible(&cinergyt2->sem)) return -ERESTARTSYS; if (--cinergyt2->streaming == 0) @@ -481,12 +485,16 @@ static int cinergyt2_open (struct inode *inode, struct file *file) { struct dvb_device *dvbdev = file->private_data; struct cinergyt2 *cinergyt2 = dvbdev->priv; - int err = -ERESTARTSYS; + int err = -EAGAIN; - if 
(cinergyt2->disconnect_pending || mutex_lock_interruptible(&cinergyt2->wq_sem)) + if (cinergyt2->disconnect_pending) + goto out; + err = mutex_lock_interruptible(&cinergyt2->wq_sem); + if (err) goto out; - if (mutex_lock_interruptible(&cinergyt2->sem)) + err = mutex_lock_interruptible(&cinergyt2->sem); + if (err) goto out_unlock1; if ((err = dvb_generic_open(inode, file))) @@ -550,7 +558,9 @@ static unsigned int cinergyt2_poll (struct file *file, struct poll_table_struct struct cinergyt2 *cinergyt2 = dvbdev->priv; unsigned int mask = 0; - if (cinergyt2->disconnect_pending || mutex_lock_interruptible(&cinergyt2->sem)) + if (cinergyt2->disconnect_pending) + return -EAGAIN; + if (mutex_lock_interruptible(&cinergyt2->sem)) return -ERESTARTSYS; poll_wait(file, &cinergyt2->poll_wq, wait); @@ -625,7 +635,9 @@ static int cinergyt2_ioctl (struct inode *inode, struct file *file, if (copy_from_user(&p, (void __user*) arg, sizeof(p))) return -EFAULT; - if (cinergyt2->disconnect_pending || mutex_lock_interruptible(&cinergyt2->sem)) + if (cinergyt2->disconnect_pending) + return -EAGAIN; + if (mutex_lock_interruptible(&cinergyt2->sem)) return -ERESTARTSYS; param->cmd = CINERGYT2_EP1_SET_TUNER_PARAMETERS; @@ -996,7 +1008,9 @@ static int cinergyt2_suspend (struct usb_interface *intf, pm_message_t state) { struct cinergyt2 *cinergyt2 = usb_get_intfdata (intf); - if (cinergyt2->disconnect_pending || mutex_lock_interruptible(&cinergyt2->wq_sem)) + if (cinergyt2->disconnect_pending) + return -EAGAIN; + if (mutex_lock_interruptible(&cinergyt2->wq_sem)) return -ERESTARTSYS; cinergyt2_suspend_rc(cinergyt2); @@ -1017,16 +1031,18 @@ static int cinergyt2_resume (struct usb_interface *intf) { struct cinergyt2 *cinergyt2 = usb_get_intfdata (intf); struct dvbt_set_parameters_msg *param = &cinergyt2->param; - int err = -ERESTARTSYS; + int err = -EAGAIN; - if (cinergyt2->disconnect_pending || mutex_lock_interruptible(&cinergyt2->wq_sem)) + if (cinergyt2->disconnect_pending) + goto out; + err = mutex_lock_interruptible(&cinergyt2->wq_sem); + if (err) goto out; - if (mutex_lock_interruptible(&cinergyt2->sem)) + err = mutex_lock_interruptible(&cinergyt2->sem); + if (err) goto out_unlock1; - err = 0; - if (!cinergyt2->sleeping) { cinergyt2_sleep(cinergyt2, 0); cinergyt2_command(cinergyt2, (char *) param, sizeof(*param), NULL, 0); diff --git a/drivers/media/dvb/dvb-core/dvb_ca_en50221.c b/drivers/media/dvb/dvb-core/dvb_ca_en50221.c index 084a508a03d..89437fdab8b 100644 --- a/drivers/media/dvb/dvb-core/dvb_ca_en50221.c +++ b/drivers/media/dvb/dvb-core/dvb_ca_en50221.c @@ -972,7 +972,7 @@ static int dvb_ca_en50221_thread(void *data) /* main loop */ while (!kthread_should_stop()) { /* sleep for a bit */ - while (!ca->wakeup) { + if (!ca->wakeup) { set_current_state(TASK_INTERRUPTIBLE); schedule_timeout(ca->delay); if (kthread_should_stop()) diff --git a/drivers/media/dvb/dvb-usb/dib0700_devices.c b/drivers/media/dvb/dvb-usb/dib0700_devices.c index e8c4a869453..58452b52002 100644 --- a/drivers/media/dvb/dvb-usb/dib0700_devices.c +++ b/drivers/media/dvb/dvb-usb/dib0700_devices.c @@ -828,7 +828,7 @@ MODULE_DEVICE_TABLE(usb, dib0700_usb_id_table); #define DIB0700_DEFAULT_DEVICE_PROPERTIES \ .caps = DVB_USB_IS_AN_I2C_ADAPTER, \ .usb_ctrl = DEVICE_SPECIFIC, \ - .firmware = "dvb-usb-dib0700-03-pre1.fw", \ + .firmware = "dvb-usb-dib0700-1.10.fw", \ .download_firmware = dib0700_download_firmware, \ .no_reconnect = 1, \ .size_of_priv = sizeof(struct dib0700_state), \ diff --git a/drivers/media/radio/miropcm20-radio.c 
b/drivers/media/radio/miropcm20-radio.c index c7c9d1dc069..3ae56fef8c9 100644 --- a/drivers/media/radio/miropcm20-radio.c +++ b/drivers/media/radio/miropcm20-radio.c @@ -229,7 +229,6 @@ static struct video_device pcm20_radio = { .owner = THIS_MODULE, .name = "Miro PCM 20 radio", .type = VID_TYPE_TUNER, - .hardware = VID_HARDWARE_RTRACK, .fops = &pcm20_fops, .priv = &pcm20_unit }; diff --git a/drivers/media/radio/radio-gemtek.c b/drivers/media/radio/radio-gemtek.c index 0c963db0361..5e4b9ddb23c 100644 --- a/drivers/media/radio/radio-gemtek.c +++ b/drivers/media/radio/radio-gemtek.c @@ -554,7 +554,6 @@ static struct video_device gemtek_radio = { .owner = THIS_MODULE, .name = "GemTek Radio card", .type = VID_TYPE_TUNER, - .hardware = VID_HARDWARE_GEMTEK, .fops = &gemtek_fops, .vidioc_querycap = vidioc_querycap, .vidioc_g_tuner = vidioc_g_tuner, diff --git a/drivers/media/video/arv.c b/drivers/media/video/arv.c index 19e9929ffa0..c94a4d0f280 100644 --- a/drivers/media/video/arv.c +++ b/drivers/media/video/arv.c @@ -755,7 +755,6 @@ static struct video_device ar_template = { .owner = THIS_MODULE, .name = "Colour AR VGA", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_ARV, .fops = &ar_fops, .release = ar_release, .minor = -1, diff --git a/drivers/media/video/bt8xx/bttv-driver.c b/drivers/media/video/bt8xx/bttv-driver.c index 7a332b3efe5..9feeb636ff9 100644 --- a/drivers/media/video/bt8xx/bttv-driver.c +++ b/drivers/media/video/bt8xx/bttv-driver.c @@ -3877,7 +3877,6 @@ static struct video_device bttv_video_template = .name = "UNSET", .type = VID_TYPE_CAPTURE|VID_TYPE_TUNER| VID_TYPE_CLIPPING|VID_TYPE_SCALES, - .hardware = VID_HARDWARE_BT848, .fops = &bttv_fops, .minor = -1, }; @@ -3886,7 +3885,6 @@ static struct video_device bttv_vbi_template = { .name = "bt848/878 vbi", .type = VID_TYPE_TUNER|VID_TYPE_TELETEXT, - .hardware = VID_HARDWARE_BT848, .fops = &bttv_fops, .minor = -1, }; @@ -4034,7 +4032,6 @@ static struct video_device radio_template = { .name = "bt848/878 radio", .type = VID_TYPE_TUNER, - .hardware = VID_HARDWARE_BT848, .fops = &radio_fops, .minor = -1, }; diff --git a/drivers/media/video/bw-qcam.c b/drivers/media/video/bw-qcam.c index 7f7e3d3398d..58423525591 100644 --- a/drivers/media/video/bw-qcam.c +++ b/drivers/media/video/bw-qcam.c @@ -899,7 +899,6 @@ static struct video_device qcam_template= .owner = THIS_MODULE, .name = "Connectix Quickcam", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_QCAM_BW, .fops = &qcam_fops, }; diff --git a/drivers/media/video/c-qcam.c b/drivers/media/video/c-qcam.c index f76c6a6c376..cf1546b5a7f 100644 --- a/drivers/media/video/c-qcam.c +++ b/drivers/media/video/c-qcam.c @@ -699,7 +699,6 @@ static struct video_device qcam_template= .owner = THIS_MODULE, .name = "Colour QuickCam", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_QCAM_C, .fops = &qcam_fops, }; diff --git a/drivers/media/video/cpia.c b/drivers/media/video/cpia.c index a1d02e5ce0f..7c630f5ee72 100644 --- a/drivers/media/video/cpia.c +++ b/drivers/media/video/cpia.c @@ -65,10 +65,6 @@ MODULE_PARM_DESC(colorspace_conv, #define ABOUT "V4L-Driver for Vision CPiA based cameras" -#ifndef VID_HARDWARE_CPIA -#define VID_HARDWARE_CPIA 24 /* FIXME -> from linux/videodev.h */ -#endif - #define CPIA_MODULE_CPIA (0<<5) #define CPIA_MODULE_SYSTEM (1<<5) #define CPIA_MODULE_VP_CTRL (5<<5) @@ -3804,7 +3800,6 @@ static struct video_device cpia_template = { .owner = THIS_MODULE, .name = "CPiA Camera", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_CPIA, .fops = &cpia_fops, }; diff --git 
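Several of the hunks above (and more later in this series) do nothing but delete the `.hardware = VID_HARDWARE_*` member from static video_device templates; cpia.c also drops its local fallback #define. Because these templates use C designated initializers, any member that is not named is zero-initialized, so if the field still exists it simply ends up as 0. A tiny stand-alone illustration of that property; the struct and field names here are invented, not the V4L2 ones.

#include <stdio.h>

struct video_template {
    const char *name;
    int type;
    int hardware;   /* obsolete field: deliberately left out of the initializer below */
};

/* Designated initializers: any member not named is implicitly zeroed,
 * so removing ".hardware = ..." is equivalent to setting it to 0. */
static struct video_template qcam_template = {
    .name = "Example capture device",
    .type = 1,
};

int main(void)
{
    printf("%s: type=%d hardware=%d\n",
           qcam_template.name, qcam_template.type, qcam_template.hardware);
    return 0;
}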
a/drivers/media/video/cpia2/cpia2_v4l.c b/drivers/media/video/cpia2/cpia2_v4l.c index e3aaba1e0e0..e378abec806 100644 --- a/drivers/media/video/cpia2/cpia2_v4l.c +++ b/drivers/media/video/cpia2/cpia2_v4l.c @@ -86,10 +86,6 @@ MODULE_LICENSE("GPL"); #define ABOUT "V4L-Driver for Vision CPiA2 based cameras" -#ifndef VID_HARDWARE_CPIA2 -#error "VID_HARDWARE_CPIA2 should have been defined in linux/videodev.h" -#endif - struct control_menu_info { int value; char name[32]; @@ -1942,7 +1938,6 @@ static struct video_device cpia2_template = { .type= VID_TYPE_CAPTURE, .type2 = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING, - .hardware= VID_HARDWARE_CPIA2, .minor= -1, .fops= &fops_template, .release= video_device_release, diff --git a/drivers/media/video/cx23885/cx23885-core.c b/drivers/media/video/cx23885/cx23885-core.c index af16505bd2e..3cdd136477e 100644 --- a/drivers/media/video/cx23885/cx23885-core.c +++ b/drivers/media/video/cx23885/cx23885-core.c @@ -793,7 +793,7 @@ static int cx23885_dev_setup(struct cx23885_dev *dev) dev->pci->subsystem_device); cx23885_devcount--; - goto fail_free; + return -ENODEV; } /* PCIe stuff */ @@ -835,10 +835,6 @@ static int cx23885_dev_setup(struct cx23885_dev *dev) } return 0; - -fail_free: - kfree(dev); - return -ENODEV; } void cx23885_dev_unregister(struct cx23885_dev *dev) diff --git a/drivers/media/video/cx88/cx88-alsa.c b/drivers/media/video/cx88/cx88-alsa.c index 141dadf7cf1..40ffd7a5579 100644 --- a/drivers/media/video/cx88/cx88-alsa.c +++ b/drivers/media/video/cx88/cx88-alsa.c @@ -39,6 +39,7 @@ #include <sound/pcm_params.h> #include <sound/control.h> #include <sound/initval.h> +#include <sound/tlv.h> #include "cx88.h" #include "cx88-reg.h" @@ -82,6 +83,7 @@ typedef struct cx88_audio_dev snd_cx88_card_t; + /**************************************************************************** Module global static vars ****************************************************************************/ @@ -545,8 +547,8 @@ static int __devinit snd_cx88_pcm(snd_cx88_card_t *chip, int device, char *name) /**************************************************************************** CONTROL INTERFACE ****************************************************************************/ -static int snd_cx88_capture_volume_info(struct snd_kcontrol *kcontrol, - struct snd_ctl_elem_info *info) +static int snd_cx88_volume_info(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_info *info) { info->type = SNDRV_CTL_ELEM_TYPE_INTEGER; info->count = 2; @@ -556,9 +558,8 @@ static int snd_cx88_capture_volume_info(struct snd_kcontrol *kcontrol, return 0; } -/* OK - TODO: test it */ -static int snd_cx88_capture_volume_get(struct snd_kcontrol *kcontrol, - struct snd_ctl_elem_value *value) +static int snd_cx88_volume_get(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *value) { snd_cx88_card_t *chip = snd_kcontrol_chip(kcontrol); struct cx88_core *core=chip->core; @@ -573,8 +574,8 @@ static int snd_cx88_capture_volume_get(struct snd_kcontrol *kcontrol, } /* OK - TODO: test it */ -static int snd_cx88_capture_volume_put(struct snd_kcontrol *kcontrol, - struct snd_ctl_elem_value *value) +static int snd_cx88_volume_put(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *value) { snd_cx88_card_t *chip = snd_kcontrol_chip(kcontrol); struct cx88_core *core=chip->core; @@ -605,14 +606,67 @@ static int snd_cx88_capture_volume_put(struct snd_kcontrol *kcontrol, return changed; } -static struct snd_kcontrol_new snd_cx88_capture_volume = { +static const DECLARE_TLV_DB_SCALE(snd_cx88_db_scale, 
-6300, 100, 0); + +static struct snd_kcontrol_new snd_cx88_volume = { + .iface = SNDRV_CTL_ELEM_IFACE_MIXER, + .access = SNDRV_CTL_ELEM_ACCESS_READWRITE | + SNDRV_CTL_ELEM_ACCESS_TLV_READ, + .name = "Playback Volume", + .info = snd_cx88_volume_info, + .get = snd_cx88_volume_get, + .put = snd_cx88_volume_put, + .tlv.p = snd_cx88_db_scale, +}; + +static int snd_cx88_switch_get(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *value) +{ + snd_cx88_card_t *chip = snd_kcontrol_chip(kcontrol); + struct cx88_core *core = chip->core; + u32 bit = kcontrol->private_value; + + value->value.integer.value[0] = !(cx_read(AUD_VOL_CTL) & bit); + return 0; +} + +static int snd_cx88_switch_put(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *value) +{ + snd_cx88_card_t *chip = snd_kcontrol_chip(kcontrol); + struct cx88_core *core = chip->core; + u32 bit = kcontrol->private_value; + int ret = 0; + u32 vol; + + spin_lock_irq(&chip->reg_lock); + vol = cx_read(AUD_VOL_CTL); + if (value->value.integer.value[0] != !(vol & bit)) { + vol ^= bit; + cx_write(AUD_VOL_CTL, vol); + ret = 1; + } + spin_unlock_irq(&chip->reg_lock); + return ret; +} + +static struct snd_kcontrol_new snd_cx88_dac_switch = { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, - .name = "Capture Volume", - .info = snd_cx88_capture_volume_info, - .get = snd_cx88_capture_volume_get, - .put = snd_cx88_capture_volume_put, + .name = "Playback Switch", + .info = snd_ctl_boolean_mono_info, + .get = snd_cx88_switch_get, + .put = snd_cx88_switch_put, + .private_value = (1<<8), }; +static struct snd_kcontrol_new snd_cx88_source_switch = { + .iface = SNDRV_CTL_ELEM_IFACE_MIXER, + .name = "Capture Switch", + .info = snd_ctl_boolean_mono_info, + .get = snd_cx88_switch_get, + .put = snd_cx88_switch_put, + .private_value = (1<<6), +}; /**************************************************************************** Basic Flow for Sound Devices @@ -762,7 +816,13 @@ static int __devinit cx88_audio_initdev(struct pci_dev *pci, if (err < 0) goto error; - err = snd_ctl_add(card, snd_ctl_new1(&snd_cx88_capture_volume, chip)); + err = snd_ctl_add(card, snd_ctl_new1(&snd_cx88_volume, chip)); + if (err < 0) + goto error; + err = snd_ctl_add(card, snd_ctl_new1(&snd_cx88_dac_switch, chip)); + if (err < 0) + goto error; + err = snd_ctl_add(card, snd_ctl_new1(&snd_cx88_source_switch, chip)); if (err < 0) goto error; diff --git a/drivers/media/video/cx88/cx88-blackbird.c b/drivers/media/video/cx88/cx88-blackbird.c index 6d6f5048d76..f33f0b47142 100644 --- a/drivers/media/video/cx88/cx88-blackbird.c +++ b/drivers/media/video/cx88/cx88-blackbird.c @@ -527,44 +527,6 @@ static void blackbird_codec_settings(struct cx8802_dev *dev) cx2341x_update(dev, blackbird_mbox_func, NULL, &dev->params); } -static struct v4l2_mpeg_compression default_mpeg_params = { - .st_type = V4L2_MPEG_PS_2, - .st_bitrate = { - .mode = V4L2_BITRATE_CBR, - .min = 0, - .target = 0, - .max = 0 - }, - .ts_pid_pmt = 16, - .ts_pid_audio = 260, - .ts_pid_video = 256, - .ts_pid_pcr = 259, - .ps_size = 0, - .au_type = V4L2_MPEG_AU_2_II, - .au_bitrate = { - .mode = V4L2_BITRATE_CBR, - .min = 224, - .target = 224, - .max = 224 - }, - .au_sample_rate = 48000, - .au_pesid = 0, - .vi_type = V4L2_MPEG_VI_2, - .vi_aspect_ratio = V4L2_MPEG_ASPECT_4_3, - .vi_bitrate = { - .mode = V4L2_BITRATE_CBR, - .min = 4000, - .target = 4500, - .max = 6000 - }, - .vi_frame_rate = 25, - .vi_frames_per_gop = 12, - .vi_bframes_count = 2, - .vi_pesid = 0, - .closed_gops = 1, - .pulldown = 0 -}; - static int 
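The cx88-alsa changes above rename the old capture volume control to "Playback Volume", attach a dB-scale TLV to it, and add two mute switches ("Playback Switch" and "Capture Switch") that share a single get/put pair and differ only in the bit carried in .private_value (1<<8 and 1<<6 respectively, toggled in AUD_VOL_CTL). The user-space sketch below shows the "one callback pair, parameterized by private_value" idea with simplified stand-in types; it is not the ALSA control API.

#include <stdio.h>

/* A pretend hardware register with per-function mute bits, as in AUD_VOL_CTL. */
static unsigned int aud_vol_ctl;

struct control {
    const char *name;
    unsigned int private_value;                 /* which bit this control toggles */
    int (*get)(const struct control *c);
    int (*put)(struct control *c, int on);
};

/* Shared callbacks: behaviour is selected purely by private_value. */
static int switch_get(const struct control *c)
{
    return !(aud_vol_ctl & c->private_value);   /* bit set means muted */
}

static int switch_put(struct control *c, int on)
{
    unsigned int bit = c->private_value;

    if (on == !(aud_vol_ctl & bit))
        return 0;                               /* no change */
    aud_vol_ctl ^= bit;
    return 1;                                   /* value changed */
}

static struct control dac_switch    = { "Playback Switch", 1u << 8, switch_get, switch_put };
static struct control source_switch = { "Capture Switch",  1u << 6, switch_get, switch_put };

int main(void)
{
    dac_switch.put(&dac_switch, 0);             /* mute the DAC path */
    printf("%s: %d, %s: %d\n",
           dac_switch.name, dac_switch.get(&dac_switch),
           source_switch.name, source_switch.get(&source_switch));
    return 0;
}

Sharing the callbacks keeps the register access in one place; a third switch would be a one-line descriptor.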
blackbird_initialize_codec(struct cx8802_dev *dev) { struct cx88_core *core = dev->core; @@ -852,23 +814,6 @@ static int vidioc_streamoff(struct file *file, void *priv, enum v4l2_buf_type i) return videobuf_streamoff(&fh->mpegq); } -static int vidioc_g_mpegcomp (struct file *file, void *fh, - struct v4l2_mpeg_compression *f) -{ - printk(KERN_WARNING "VIDIOC_G_MPEGCOMP is obsolete. " - "Replace with VIDIOC_G_EXT_CTRLS!"); - memcpy(f,&default_mpeg_params,sizeof(*f)); - return 0; -} - -static int vidioc_s_mpegcomp (struct file *file, void *fh, - struct v4l2_mpeg_compression *f) -{ - printk(KERN_WARNING "VIDIOC_S_MPEGCOMP is obsolete. " - "Replace with VIDIOC_S_EXT_CTRLS!"); - return 0; -} - static int vidioc_g_ext_ctrls (struct file *file, void *priv, struct v4l2_ext_controls *f) { @@ -1216,8 +1161,6 @@ static struct video_device cx8802_mpeg_template = .vidioc_dqbuf = vidioc_dqbuf, .vidioc_streamon = vidioc_streamon, .vidioc_streamoff = vidioc_streamoff, - .vidioc_g_mpegcomp = vidioc_g_mpegcomp, - .vidioc_s_mpegcomp = vidioc_s_mpegcomp, .vidioc_g_ext_ctrls = vidioc_g_ext_ctrls, .vidioc_s_ext_ctrls = vidioc_s_ext_ctrls, .vidioc_try_ext_ctrls = vidioc_try_ext_ctrls, diff --git a/drivers/media/video/cx88/cx88-dvb.c b/drivers/media/video/cx88/cx88-dvb.c index d16e5c6d21c..fce19caf9d0 100644 --- a/drivers/media/video/cx88/cx88-dvb.c +++ b/drivers/media/video/cx88/cx88-dvb.c @@ -475,8 +475,9 @@ static int dvb_register(struct cx8802_dev *dev) break; case CX88_BOARD_DNTV_LIVE_DVB_T_PRO: #if defined(CONFIG_VIDEO_CX88_VP3054) || (defined(CONFIG_VIDEO_CX88_VP3054_MODULE) && defined(MODULE)) + /* MT352 is on a secondary I2C bus made from some GPIO lines */ dev->dvb.frontend = dvb_attach(mt352_attach, &dntv_live_dvbt_pro_config, - &((struct vp3054_i2c_state *)dev->card_priv)->adap); + &dev->vp3054->adap); if (dev->dvb.frontend != NULL) { dvb_attach(dvb_pll_attach, dev->dvb.frontend, 0x61, &dev->core->i2c_adap, DVB_PLL_FMD1216ME); diff --git a/drivers/media/video/cx88/cx88-mpeg.c b/drivers/media/video/cx88/cx88-mpeg.c index a652f294d23..448c6738094 100644 --- a/drivers/media/video/cx88/cx88-mpeg.c +++ b/drivers/media/video/cx88/cx88-mpeg.c @@ -79,7 +79,8 @@ static int cx8802_start_dma(struct cx8802_dev *dev, { struct cx88_core *core = dev->core; - dprintk(1, "cx8802_start_dma w: %d, h: %d, f: %d\n", dev->width, dev->height, buf->vb.field); + dprintk(1, "cx8802_start_dma w: %d, h: %d, f: %d\n", + buf->vb.width, buf->vb.height, buf->vb.field); /* setup fifo + format */ cx88_sram_channel_setup(core, &cx88_sram_channels[SRAM_CH28], @@ -177,7 +178,6 @@ static int cx8802_restart_queue(struct cx8802_dev *dev, struct cx88_dmaqueue *q) { struct cx88_buffer *buf; - struct list_head *item; dprintk( 1, "cx8802_restart_queue\n" ); if (list_empty(&q->active)) @@ -223,10 +223,8 @@ static int cx8802_restart_queue(struct cx8802_dev *dev, dprintk(2,"restart_queue [%p/%d]: restart dma\n", buf, buf->vb.i); cx8802_start_dma(dev, q, buf); - list_for_each(item,&q->active) { - buf = list_entry(item, struct cx88_buffer, vb.queue); + list_for_each_entry(buf, &q->active, vb.queue) buf->count = q->count++; - } mod_timer(&q->timeout, jiffies+BUFFER_TIMEOUT); return 0; } @@ -572,42 +570,29 @@ int cx8802_resume_common(struct pci_dev *pci_dev) return 0; } +#if defined(CONFIG_VIDEO_CX88_BLACKBIRD) || \ + defined(CONFIG_VIDEO_CX88_BLACKBIRD_MODULE) struct cx8802_dev * cx8802_get_device(struct inode *inode) { int minor = iminor(inode); - struct cx8802_dev *h = NULL; - struct list_head *list; + struct cx8802_dev *dev; - 
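Several cx88-mpeg hunks here convert open-coded list_for_each() + list_entry() loops (cx8802_restart_queue(), cx8802_get_device(), and the driver registration paths that follow) to list_for_each_entry(), which yields the containing structure directly and drops the separate struct list_head cursor. The self-contained sketch below re-implements just enough of the intrusive-list machinery to run in user space (GNU C, gcc or clang); it is a simplified stand-in, not <linux/list.h>.

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
    new->prev = head->prev;
    new->next = head;
    head->prev->next = new;
    head->prev = new;
}

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Walk the list yielding the containing structure directly,
 * instead of a bare struct list_head cursor plus list_entry(). */
#define list_for_each_entry(pos, head, member)                            \
    for (pos = container_of((head)->next, typeof(*pos), member);          \
         &pos->member != (head);                                          \
         pos = container_of(pos->member.next, typeof(*pos), member))

struct buffer {
    int count;
    struct list_head queue;   /* list node embedded in the object */
};

int main(void)
{
    struct list_head active = LIST_HEAD_INIT(active);
    struct buffer bufs[3];
    struct buffer *buf;
    int n = 0;

    for (int i = 0; i < 3; i++)
        list_add_tail(&bufs[i].queue, &active);

    /* Equivalent of the cx8802_restart_queue() hunk: renumber in order. */
    list_for_each_entry(buf, &active, queue)
        buf->count = n++;

    list_for_each_entry(buf, &active, queue)
        printf("buffer count = %d\n", buf->count);
    return 0;
}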
list_for_each(list,&cx8802_devlist) { - h = list_entry(list, struct cx8802_dev, devlist); - if (h->mpeg_dev && h->mpeg_dev->minor == minor) - return h; - } + list_for_each_entry(dev, &cx8802_devlist, devlist) + if (dev->mpeg_dev && dev->mpeg_dev->minor == minor) + return dev; return NULL; } +EXPORT_SYMBOL(cx8802_get_device); +#endif struct cx8802_driver * cx8802_get_driver(struct cx8802_dev *dev, enum cx88_board_type btype) { - struct cx8802_dev *h = NULL; - struct cx8802_driver *d = NULL; - struct list_head *list; - struct list_head *list2; - - list_for_each(list,&cx8802_devlist) { - h = list_entry(list, struct cx8802_dev, devlist); - if (h != dev) - continue; - - list_for_each(list2, &h->drvlist.devlist) { - d = list_entry(list2, struct cx8802_driver, devlist); + struct cx8802_driver *d; - /* only unregister the correct driver type */ - if (d->type_id == btype) { - return d; - } - } - } + list_for_each_entry(d, &dev->drvlist, drvlist) + if (d->type_id == btype) + return d; return NULL; } @@ -671,10 +656,9 @@ static int cx8802_check_driver(struct cx8802_driver *drv) int cx8802_register_driver(struct cx8802_driver *drv) { - struct cx8802_dev *h; + struct cx8802_dev *dev; struct cx8802_driver *driver; - struct list_head *list; - int err = 0, i = 0; + int err, i = 0; printk(KERN_INFO "cx88/2: registering cx8802 driver, type: %s access: %s\n", @@ -686,14 +670,12 @@ int cx8802_register_driver(struct cx8802_driver *drv) return err; } - list_for_each(list,&cx8802_devlist) { - h = list_entry(list, struct cx8802_dev, devlist); - + list_for_each_entry(dev, &cx8802_devlist, devlist) { printk(KERN_INFO "%s/2: subsystem: %04x:%04x, board: %s [card=%d]\n", - h->core->name, h->pci->subsystem_vendor, - h->pci->subsystem_device, h->core->board.name, - h->core->boardnr); + dev->core->name, dev->pci->subsystem_vendor, + dev->pci->subsystem_device, dev->core->board.name, + dev->core->boardnr); /* Bring up a new struct for each driver instance */ driver = kzalloc(sizeof(*drv),GFP_KERNEL); @@ -701,7 +683,7 @@ int cx8802_register_driver(struct cx8802_driver *drv) return -ENOMEM; /* Snapshot of the driver registration data */ - drv->core = h->core; + drv->core = dev->core; drv->suspend = cx8802_suspend_common; drv->resume = cx8802_resume_common; drv->request_acquire = cx8802_request_acquire; @@ -712,49 +694,38 @@ int cx8802_register_driver(struct cx8802_driver *drv) if (err == 0) { i++; mutex_lock(&drv->core->lock); - list_add_tail(&driver->devlist,&h->drvlist.devlist); + list_add_tail(&driver->drvlist, &dev->drvlist); mutex_unlock(&drv->core->lock); } else { printk(KERN_ERR "%s/2: cx8802 probe failed, err = %d\n", - h->core->name, err); + dev->core->name, err); } } - if (i == 0) - err = -ENODEV; - else - err = 0; - return err; + return i ? 0 : -ENODEV; } int cx8802_unregister_driver(struct cx8802_driver *drv) { - struct cx8802_dev *h; - struct cx8802_driver *d; - struct list_head *list; - struct list_head *list2, *q; - int err = 0, i = 0; + struct cx8802_dev *dev; + struct cx8802_driver *d, *dtmp; + int err = 0; printk(KERN_INFO "cx88/2: unregistering cx8802 driver, type: %s access: %s\n", drv->type_id == CX88_MPEG_DVB ? "dvb" : "blackbird", drv->hw_access == CX8802_DRVCTL_SHARED ? 
"shared" : "exclusive"); - list_for_each(list,&cx8802_devlist) { - i++; - h = list_entry(list, struct cx8802_dev, devlist); - + list_for_each_entry(dev, &cx8802_devlist, devlist) { printk(KERN_INFO "%s/2: subsystem: %04x:%04x, board: %s [card=%d]\n", - h->core->name, h->pci->subsystem_vendor, - h->pci->subsystem_device, h->core->board.name, - h->core->boardnr); - - list_for_each_safe(list2, q, &h->drvlist.devlist) { - d = list_entry(list2, struct cx8802_driver, devlist); + dev->core->name, dev->pci->subsystem_vendor, + dev->pci->subsystem_device, dev->core->board.name, + dev->core->boardnr); + list_for_each_entry_safe(d, dtmp, &dev->drvlist, drvlist) { /* only unregister the correct driver type */ if (d->type_id != drv->type_id) continue; @@ -762,12 +733,12 @@ int cx8802_unregister_driver(struct cx8802_driver *drv) err = d->remove(d); if (err == 0) { mutex_lock(&drv->core->lock); - list_del(list2); + list_del(&d->drvlist); mutex_unlock(&drv->core->lock); + kfree(d); } else printk(KERN_ERR "%s/2: cx8802 driver remove " - "failed (%d)\n", h->core->name, err); - + "failed (%d)\n", dev->core->name, err); } } @@ -805,7 +776,7 @@ static int __devinit cx8802_probe(struct pci_dev *pci_dev, if (err != 0) goto fail_free; - INIT_LIST_HEAD(&dev->drvlist.devlist); + INIT_LIST_HEAD(&dev->drvlist); list_add_tail(&dev->devlist,&cx8802_devlist); /* Maintain a reference so cx88-video can query the 8802 device. */ @@ -825,23 +796,30 @@ static int __devinit cx8802_probe(struct pci_dev *pci_dev, static void __devexit cx8802_remove(struct pci_dev *pci_dev) { struct cx8802_dev *dev; - struct cx8802_driver *h; - struct list_head *list; dev = pci_get_drvdata(pci_dev); dprintk( 1, "%s\n", __FUNCTION__); - list_for_each(list,&dev->drvlist.devlist) { - h = list_entry(list, struct cx8802_driver, devlist); - dprintk( 1, " ->driver\n"); - if (h->remove == NULL) { - printk(KERN_ERR "%s .. skipping driver, no probe function\n", __FUNCTION__); - continue; + if (!list_empty(&dev->drvlist)) { + struct cx8802_driver *drv, *tmp; + int err; + + printk(KERN_WARNING "%s/2: Trying to remove cx8802 driver " + "while cx8802 sub-drivers still loaded?!\n", + dev->core->name); + + list_for_each_entry_safe(drv, tmp, &dev->drvlist, drvlist) { + err = drv->remove(drv); + if (err == 0) { + mutex_lock(&drv->core->lock); + list_del(&drv->drvlist); + mutex_unlock(&drv->core->lock); + } else + printk(KERN_ERR "%s/2: cx8802 driver remove " + "failed (%d)\n", dev->core->name, err); + kfree(drv); } - printk(KERN_INFO "%s .. Removing driver type %d\n", __FUNCTION__, h->type_id); - cx8802_unregister_driver(h); - list_del(&dev->drvlist.devlist); } /* Destroy any 8802 reference. 
*/ @@ -901,7 +879,6 @@ EXPORT_SYMBOL(cx8802_fini_common); EXPORT_SYMBOL(cx8802_register_driver); EXPORT_SYMBOL(cx8802_unregister_driver); -EXPORT_SYMBOL(cx8802_get_device); EXPORT_SYMBOL(cx8802_get_driver); /* ----------------------------------------------------------- */ /* diff --git a/drivers/media/video/cx88/cx88-video.c b/drivers/media/video/cx88/cx88-video.c index 231ae6c4dd2..5ee05f8f3fa 100644 --- a/drivers/media/video/cx88/cx88-video.c +++ b/drivers/media/video/cx88/cx88-video.c @@ -1675,7 +1675,6 @@ static struct video_device cx8800_radio_template = { .name = "cx8800-radio", .type = VID_TYPE_TUNER, - .hardware = 0, .fops = &radio_fops, .minor = -1, .vidioc_querycap = radio_querycap, diff --git a/drivers/media/video/cx88/cx88-vp3054-i2c.c b/drivers/media/video/cx88/cx88-vp3054-i2c.c index 77c37889232..6ce5af48847 100644 --- a/drivers/media/video/cx88/cx88-vp3054-i2c.c +++ b/drivers/media/video/cx88/cx88-vp3054-i2c.c @@ -41,7 +41,7 @@ static void vp3054_bit_setscl(void *data, int state) { struct cx8802_dev *dev = data; struct cx88_core *core = dev->core; - struct vp3054_i2c_state *vp3054_i2c = dev->card_priv; + struct vp3054_i2c_state *vp3054_i2c = dev->vp3054; if (state) { vp3054_i2c->state |= 0x0001; /* SCL high */ @@ -58,7 +58,7 @@ static void vp3054_bit_setsda(void *data, int state) { struct cx8802_dev *dev = data; struct cx88_core *core = dev->core; - struct vp3054_i2c_state *vp3054_i2c = dev->card_priv; + struct vp3054_i2c_state *vp3054_i2c = dev->vp3054; if (state) { vp3054_i2c->state |= 0x0002; /* SDA high */ @@ -113,10 +113,10 @@ int vp3054_i2c_probe(struct cx8802_dev *dev) if (core->boardnr != CX88_BOARD_DNTV_LIVE_DVB_T_PRO) return 0; - dev->card_priv = kzalloc(sizeof(*vp3054_i2c), GFP_KERNEL); - if (dev->card_priv == NULL) + vp3054_i2c = kzalloc(sizeof(*vp3054_i2c), GFP_KERNEL); + if (vp3054_i2c == NULL) return -ENOMEM; - vp3054_i2c = dev->card_priv; + dev->vp3054 = vp3054_i2c; memcpy(&vp3054_i2c->algo, &vp3054_i2c_algo_template, sizeof(vp3054_i2c->algo)); @@ -139,8 +139,8 @@ int vp3054_i2c_probe(struct cx8802_dev *dev) if (0 != rc) { printk("%s: vp3054_i2c register FAILED\n", core->name); - kfree(dev->card_priv); - dev->card_priv = NULL; + kfree(dev->vp3054); + dev->vp3054 = NULL; } return rc; @@ -148,7 +148,7 @@ int vp3054_i2c_probe(struct cx8802_dev *dev) void vp3054_i2c_remove(struct cx8802_dev *dev) { - struct vp3054_i2c_state *vp3054_i2c = dev->card_priv; + struct vp3054_i2c_state *vp3054_i2c = dev->vp3054; if (vp3054_i2c == NULL || dev->core->boardnr != CX88_BOARD_DNTV_LIVE_DVB_T_PRO) diff --git a/drivers/media/video/cx88/cx88.h b/drivers/media/video/cx88/cx88.h index 42e0a9b8c55..eb296bdecb1 100644 --- a/drivers/media/video/cx88/cx88.h +++ b/drivers/media/video/cx88/cx88.h @@ -412,7 +412,9 @@ struct cx8802_suspend_state { struct cx8802_driver { struct cx88_core *core; - struct list_head devlist; + + /* List of drivers attached to device */ + struct list_head drvlist; /* Type of driver and access required */ enum cx88_board_type type_id; @@ -453,27 +455,33 @@ struct cx8802_dev { /* for blackbird only */ struct list_head devlist; +#if defined(CONFIG_VIDEO_CX88_BLACKBIRD) || \ + defined(CONFIG_VIDEO_CX88_BLACKBIRD_MODULE) struct video_device *mpeg_dev; u32 mailbox; int width; int height; + /* mpeg params */ + struct cx2341x_mpeg_params params; +#endif + #if defined(CONFIG_VIDEO_CX88_DVB) || defined(CONFIG_VIDEO_CX88_DVB_MODULE) /* for dvb only */ struct videobuf_dvb dvb; +#endif - void *card_priv; +#if defined(CONFIG_VIDEO_CX88_VP3054) || \ + 
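In the hunks above, cx8802_unregister_driver() and cx8802_remove() walk the per-device sub-driver list with list_for_each_entry_safe(), because matching entries are unlinked and kfree()d during the walk; a plain iterator would step through freed memory to reach the next node. The small user-space example below shows the same "save the next pointer before freeing the current node" discipline on an ordinary singly linked list; the names and structure are invented for the illustration.

#include <stdio.h>
#include <stdlib.h>

struct driver {
    int type_id;
    struct driver *next;
};

/* Remove and free every matching entry.  The second cursor ("tmp") is taken
 * before the current node may be freed -- the same reason the cx88 code
 * switched to list_for_each_entry_safe(). */
static void unregister_type(struct driver **head, int type_id)
{
    struct driver *d = *head, *tmp;
    struct driver **link = head;

    for (; d != NULL; d = tmp) {
        tmp = d->next;              /* saved before d can be freed */
        if (d->type_id == type_id) {
            *link = tmp;            /* unlink */
            free(d);
        } else {
            link = &d->next;
        }
    }
}

int main(void)
{
    struct driver *head = NULL;

    for (int i = 0; i < 4; i++) {
        struct driver *d = malloc(sizeof(*d));
        d->type_id = i % 2;         /* alternate two driver types */
        d->next = head;
        head = d;
    }

    unregister_type(&head, 1);      /* drop all type-1 drivers */

    for (struct driver *d = head; d; d = d->next)
        printf("remaining driver type %d\n", d->type_id);
    return 0;
}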
defined(CONFIG_VIDEO_CX88_VP3054_MODULE) + /* For VP3045 secondary I2C bus support */ + struct vp3054_i2c_state *vp3054; #endif /* for switching modulation types */ unsigned char ts_gen_cntrl; - /* mpeg params */ - struct cx2341x_mpeg_params params; - /* List of attached drivers */ - struct cx8802_driver drvlist; - struct work_struct request_module_wk; - + struct list_head drvlist; + struct work_struct request_module_wk; }; /* ----------------------------------------------------------- */ diff --git a/drivers/media/video/em28xx/em28xx-core.c b/drivers/media/video/em28xx/em28xx-core.c index d3282ec62c5..d56484f2046 100644 --- a/drivers/media/video/em28xx/em28xx-core.c +++ b/drivers/media/video/em28xx/em28xx-core.c @@ -648,7 +648,7 @@ void em28xx_uninit_isoc(struct em28xx *dev) */ int em28xx_init_isoc(struct em28xx *dev) { - /* change interface to 3 which allowes the biggest packet sizes */ + /* change interface to 3 which allows the biggest packet sizes */ int i, errCode; const int sb_size = EM28XX_NUM_PACKETS * dev->max_pkt_size; @@ -673,6 +673,7 @@ int em28xx_init_isoc(struct em28xx *dev) ("unable to allocate %i bytes for transfer buffer %i\n", sb_size, i); em28xx_uninit_isoc(dev); + usb_free_urb(urb); return -ENOMEM; } memset(dev->transfer_buffer[i], 0, sb_size); diff --git a/drivers/media/video/em28xx/em28xx-video.c b/drivers/media/video/em28xx/em28xx-video.c index e467682aabd..a4c2a907124 100644 --- a/drivers/media/video/em28xx/em28xx-video.c +++ b/drivers/media/video/em28xx/em28xx-video.c @@ -1617,7 +1617,6 @@ static int em28xx_init_dev(struct em28xx **devhandle, struct usb_device *udev, /* Fills VBI device info */ dev->vbi_dev->type = VFL_TYPE_VBI; - dev->vbi_dev->hardware = 0; dev->vbi_dev->fops = &em28xx_v4l_fops; dev->vbi_dev->minor = -1; dev->vbi_dev->dev = &dev->udev->dev; @@ -1629,7 +1628,6 @@ static int em28xx_init_dev(struct em28xx **devhandle, struct usb_device *udev, dev->vdev->type = VID_TYPE_CAPTURE; if (dev->has_tuner) dev->vdev->type |= VID_TYPE_TUNER; - dev->vdev->hardware = 0; dev->vdev->fops = &em28xx_v4l_fops; dev->vdev->minor = -1; dev->vdev->dev = &dev->udev->dev; diff --git a/drivers/media/video/et61x251/et61x251_core.c b/drivers/media/video/et61x251/et61x251_core.c index d5fef4c01c8..d19d73b81ed 100644 --- a/drivers/media/video/et61x251/et61x251_core.c +++ b/drivers/media/video/et61x251/et61x251_core.c @@ -2585,7 +2585,6 @@ et61x251_usb_probe(struct usb_interface* intf, const struct usb_device_id* id) strcpy(cam->v4ldev->name, "ET61X[12]51 PC Camera"); cam->v4ldev->owner = THIS_MODULE; cam->v4ldev->type = VID_TYPE_CAPTURE | VID_TYPE_SCALES; - cam->v4ldev->hardware = 0; cam->v4ldev->fops = &et61x251_fops; cam->v4ldev->minor = video_nr[dev_nr]; cam->v4ldev->release = video_device_release; diff --git a/drivers/media/video/ir-kbd-i2c.c b/drivers/media/video/ir-kbd-i2c.c index d98dd0d1e37..29779d8bf7f 100644 --- a/drivers/media/video/ir-kbd-i2c.c +++ b/drivers/media/video/ir-kbd-i2c.c @@ -528,6 +528,7 @@ static int ir_probe(struct i2c_adapter *adap) break; case I2C_HW_B_CX2388x: probe = probe_cx88; + break; case I2C_HW_B_CX23885: probe = probe_cx23885; break; diff --git a/drivers/media/video/ivtv/ivtv-driver.c b/drivers/media/video/ivtv/ivtv-driver.c index fd7a932e1d3..6d2dd8764f8 100644 --- a/drivers/media/video/ivtv/ivtv-driver.c +++ b/drivers/media/video/ivtv/ivtv-driver.c @@ -1003,8 +1003,6 @@ static int __devinit ivtv_probe(struct pci_dev *dev, IVTV_DEBUG_INFO("base addr: 0x%08x\n", itv->base_addr); - mutex_lock(&itv->serialize_lock); - /* PCI Device Setup */ if 
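The one-line ir-kbd-i2c.c fix above adds the break that was missing after the I2C_HW_B_CX2388x case, so the cx23885 probe routine no longer silently overwrites the cx88 one by fall-through. A minimal demonstration of that failure mode, with made-up enum and probe names:

#include <stdio.h>

enum bus { BUS_CX2388X, BUS_CX23885, BUS_OTHER };

static const char *pick_probe(enum bus id)
{
    const char *probe = "none";

    switch (id) {
    case BUS_CX2388X:
        probe = "probe_cx88";
        break;              /* without this break, execution falls through and
                               "probe_cx23885" wins for BUS_CX2388X as well */
    case BUS_CX23885:
        probe = "probe_cx23885";
        break;
    default:
        break;
    }
    return probe;
}

int main(void)
{
    printf("CX2388x -> %s\n", pick_probe(BUS_CX2388X));
    printf("CX23885 -> %s\n", pick_probe(BUS_CX23885));
    return 0;
}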
((retval = ivtv_setup_pci(itv, dev, pci_id)) != 0) { if (retval == -EIO) @@ -1064,7 +1062,7 @@ static int __devinit ivtv_probe(struct pci_dev *dev, IVTV_DEBUG_INFO("activating i2c...\n"); if (init_ivtv_i2c(itv)) { IVTV_ERR("Could not initialize i2c\n"); - goto free_irq; + goto free_io; } IVTV_DEBUG_INFO("Active card count: %d.\n", ivtv_cards_active); @@ -1176,7 +1174,11 @@ static int __devinit ivtv_probe(struct pci_dev *dev, IVTV_ERR("Failed to register irq %d\n", retval); goto free_streams; } - mutex_unlock(&itv->serialize_lock); + retval = ivtv_streams_register(itv); + if (retval) { + IVTV_ERR("Error %d registering devices\n", retval); + goto free_irq; + } IVTV_INFO("Initialized card #%d: %s\n", itv->num, itv->card_name); return 0; @@ -1195,7 +1197,6 @@ static int __devinit ivtv_probe(struct pci_dev *dev, release_mem_region(itv->base_addr + IVTV_DECODER_OFFSET, IVTV_DECODER_SIZE); free_workqueue: destroy_workqueue(itv->irq_work_queues); - mutex_unlock(&itv->serialize_lock); err: if (retval == 0) retval = -ENODEV; diff --git a/drivers/media/video/ivtv/ivtv-fileops.c b/drivers/media/video/ivtv/ivtv-fileops.c index da50fa4a72a..a200a8a95a2 100644 --- a/drivers/media/video/ivtv/ivtv-fileops.c +++ b/drivers/media/video/ivtv/ivtv-fileops.c @@ -822,6 +822,11 @@ int ivtv_v4l2_close(struct inode *inode, struct file *filp) crystal_freq.flags = 0; ivtv_saa7115(itv, VIDIOC_INT_S_CRYSTAL_FREQ, &crystal_freq); } + if (atomic_read(&itv->capturing) > 0) { + /* Undo video mute */ + ivtv_vapi(itv, CX2341X_ENC_MUTE_VIDEO, 1, + itv->params.video_mute | (itv->params.video_mute_yuv << 8)); + } /* Done! Unmute and continue. */ ivtv_unmute(itv); ivtv_release_stream(s); @@ -892,6 +897,7 @@ static int ivtv_serialized_open(struct ivtv_stream *s, struct file *filp) if (atomic_read(&itv->capturing) > 0) { /* switching to radio while capture is in progress is not polite */ + ivtv_release_stream(s); kfree(item); return -EBUSY; } @@ -947,7 +953,7 @@ int ivtv_v4l2_open(struct inode *inode, struct file *filp) if (itv == NULL) { /* Couldn't find a device registered on that minor, shouldn't happen! */ - IVTV_WARN("No ivtv device found on minor %d\n", minor); + printk(KERN_WARNING "No ivtv device found on minor %d\n", minor); return -ENXIO; } diff --git a/drivers/media/video/ivtv/ivtv-ioctl.c b/drivers/media/video/ivtv/ivtv-ioctl.c index 206eee7542d..fd6826f472e 100644 --- a/drivers/media/video/ivtv/ivtv-ioctl.c +++ b/drivers/media/video/ivtv/ivtv-ioctl.c @@ -555,6 +555,7 @@ static int ivtv_try_or_set_fmt(struct ivtv *itv, int streamtype, /* set window size */ if (fmt->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) { + struct cx2341x_mpeg_params *p = &itv->params; int w = fmt->fmt.pix.width; int h = fmt->fmt.pix.height; @@ -566,17 +567,19 @@ static int ivtv_try_or_set_fmt(struct ivtv *itv, int streamtype, fmt->fmt.pix.width = w; fmt->fmt.pix.height = h; - if (!set_fmt || (itv->params.width == w && itv->params.height == h)) + if (!set_fmt || (p->width == w && p->height == h)) return 0; if (atomic_read(&itv->capturing) > 0) return -EBUSY; - itv->params.width = w; - itv->params.height = h; + p->width = w; + p->height = h; if (w != 720 || h != (itv->is_50hz ? 
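The ivtv-ioctl.c hunk here introduces a local alias, struct cx2341x_mpeg_params *p = &itv->params;, and rewrites the repeated itv->params accesses through it; the ivtv-yuv.c hunks later in this series do the same with yi = &itv->yuv_info. The change is purely for readability and line length. A small sketch of the pattern with invented structure names (the 720x576/480 temporal-filter rule is the one visible in the hunk):

#include <stdio.h>

struct mpeg_params { int width, height, video_temporal_filter; };
struct itv_dev     { struct mpeg_params params; int is_50hz; };

static void set_size(struct itv_dev *itv, int w, int h)
{
    struct mpeg_params *p = &itv->params;   /* local alias, as in the patch */

    p->width  = w;
    p->height = h;
    /* full-size frames get the stronger temporal filter, as in ivtv */
    p->video_temporal_filter =
        (w == 720 && h == (itv->is_50hz ? 576 : 480)) ? 8 : 0;
}

int main(void)
{
    struct itv_dev itv = { .is_50hz = 1 };

    set_size(&itv, 720, 576);
    printf("filter = %d\n", itv.params.video_temporal_filter);
    return 0;
}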
576 : 480)) - itv->params.video_temporal_filter = 0; + p->video_temporal_filter = 0; else - itv->params.video_temporal_filter = 8; + p->video_temporal_filter = 8; + if (p->video_encoding == V4L2_MPEG_VIDEO_ENCODING_MPEG_1) + fmt->fmt.pix.width /= 2; itv->video_dec_func(itv, VIDIOC_S_FMT, fmt); return ivtv_get_fmt(itv, streamtype, fmt); } diff --git a/drivers/media/video/ivtv/ivtv-streams.c b/drivers/media/video/ivtv/ivtv-streams.c index fd135985e70..aa03e61ef31 100644 --- a/drivers/media/video/ivtv/ivtv-streams.c +++ b/drivers/media/video/ivtv/ivtv-streams.c @@ -166,10 +166,9 @@ static void ivtv_stream_init(struct ivtv *itv, int type) ivtv_queue_init(&s->q_io); } -static int ivtv_reg_dev(struct ivtv *itv, int type) +static int ivtv_prep_dev(struct ivtv *itv, int type) { struct ivtv_stream *s = &itv->streams[type]; - int vfl_type = ivtv_stream_info[type].vfl_type; int minor_offset = ivtv_stream_info[type].minor_offset; int minor; @@ -187,15 +186,12 @@ static int ivtv_reg_dev(struct ivtv *itv, int type) if (type >= IVTV_DEC_STREAM_TYPE_MPG && !(itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT)) return 0; - if (minor_offset >= 0) - /* card number + user defined offset + device offset */ - minor = itv->num + ivtv_first_minor + minor_offset; - else - minor = -1; + /* card number + user defined offset + device offset */ + minor = itv->num + ivtv_first_minor + minor_offset; /* User explicitly selected 0 buffers for these streams, so don't create them. */ - if (minor >= 0 && ivtv_stream_info[type].dma != PCI_DMA_NONE && + if (ivtv_stream_info[type].dma != PCI_DMA_NONE && itv->options.kilobytes[type] == 0) { IVTV_INFO("Disabled %s device\n", ivtv_stream_info[type].name); return 0; @@ -223,21 +219,53 @@ static int ivtv_reg_dev(struct ivtv *itv, int type) s->v4l2dev->fops = ivtv_stream_info[type].fops; s->v4l2dev->release = video_device_release; - if (minor >= 0) { - /* Register device. First try the desired minor, then any free one. */ - if (video_register_device(s->v4l2dev, vfl_type, minor) && - video_register_device(s->v4l2dev, vfl_type, -1)) { - IVTV_ERR("Couldn't register v4l2 device for %s minor %d\n", - s->name, minor); - video_device_release(s->v4l2dev); - s->v4l2dev = NULL; - return -ENOMEM; - } + return 0; +} + +/* Initialize v4l2 variables and prepare v4l2 devices */ +int ivtv_streams_setup(struct ivtv *itv) +{ + int type; + + /* Setup V4L2 Devices */ + for (type = 0; type < IVTV_MAX_STREAMS; type++) { + /* Prepare device */ + if (ivtv_prep_dev(itv, type)) + break; + + if (itv->streams[type].v4l2dev == NULL) + continue; + + /* Allocate Stream */ + if (ivtv_stream_alloc(&itv->streams[type])) + break; } - else { - /* Don't register a 'hidden' stream (OSD) */ - IVTV_INFO("Created framebuffer stream for %s\n", s->name); + if (type == IVTV_MAX_STREAMS) return 0; + + /* One or more streams could not be initialized. Clean 'em all up. */ + ivtv_streams_cleanup(itv); + return -ENOMEM; +} + +static int ivtv_reg_dev(struct ivtv *itv, int type) +{ + struct ivtv_stream *s = &itv->streams[type]; + int vfl_type = ivtv_stream_info[type].vfl_type; + int minor; + + if (s->v4l2dev == NULL) + return 0; + + minor = s->v4l2dev->minor; + /* Register device. First try the desired minor, then any free one. 
*/ + if (video_register_device(s->v4l2dev, vfl_type, minor) && + video_register_device(s->v4l2dev, vfl_type, -1)) { + IVTV_ERR("Couldn't register v4l2 device for %s minor %d\n", + s->name, minor); + video_device_release(s->v4l2dev); + s->v4l2dev = NULL; + return -ENOMEM; } switch (vfl_type) { @@ -262,27 +290,18 @@ static int ivtv_reg_dev(struct ivtv *itv, int type) return 0; } -/* Initialize v4l2 variables and register v4l2 devices */ -int ivtv_streams_setup(struct ivtv *itv) +/* Register v4l2 devices */ +int ivtv_streams_register(struct ivtv *itv) { int type; + int err = 0; - /* Setup V4L2 Devices */ - for (type = 0; type < IVTV_MAX_STREAMS; type++) { - /* Register Device */ - if (ivtv_reg_dev(itv, type)) - break; - - if (itv->streams[type].v4l2dev == NULL) - continue; + /* Register V4L2 devices */ + for (type = 0; type < IVTV_MAX_STREAMS; type++) + err |= ivtv_reg_dev(itv, type); - /* Allocate Stream */ - if (ivtv_stream_alloc(&itv->streams[type])) - break; - } - if (type == IVTV_MAX_STREAMS) { + if (err == 0) return 0; - } /* One or more streams could not be initialized. Clean 'em all up. */ ivtv_streams_cleanup(itv); @@ -303,11 +322,8 @@ void ivtv_streams_cleanup(struct ivtv *itv) continue; ivtv_stream_free(&itv->streams[type]); - /* Free Device */ - if (vdev->minor == -1) /* 'Hidden' never registered stream (OSD) */ - video_device_release(vdev); - else /* All others, just unregister. */ - video_unregister_device(vdev); + /* Unregister device */ + video_unregister_device(vdev); } } @@ -425,6 +441,7 @@ int ivtv_start_v4l2_encode_stream(struct ivtv_stream *s) { u32 data[CX2341X_MBOX_MAX_DATA]; struct ivtv *itv = s->itv; + struct cx2341x_mpeg_params *p = &itv->params; int captype = 0, subtype = 0; int enable_passthrough = 0; @@ -445,7 +462,7 @@ int ivtv_start_v4l2_encode_stream(struct ivtv_stream *s) } itv->mpg_data_received = itv->vbi_data_inserted = 0; itv->dualwatch_jiffies = jiffies; - itv->dualwatch_stereo_mode = itv->params.audio_properties & 0x0300; + itv->dualwatch_stereo_mode = p->audio_properties & 0x0300; itv->search_pack_header = 0; break; @@ -477,9 +494,6 @@ int ivtv_start_v4l2_encode_stream(struct ivtv_stream *s) s->subtype = subtype; s->buffers_stolen = 0; - /* mute/unmute video */ - ivtv_vapi(itv, CX2341X_ENC_MUTE_VIDEO, 1, test_bit(IVTV_F_I_RADIO_USER, &itv->i_flags) ? 
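The ivtv-streams.c rework above splits stream creation into ivtv_prep_dev()/ivtv_streams_setup(), which only allocate and fill in the stream structures, and a separate ivtv_streams_register() that actually exposes the device nodes; ivtv_probe() now registers only after the interrupt handler is installed. A user-space sketch of that prepare-then-publish shape follows; all names and the malloc-based "allocation" are stand-ins, and error handling is reduced to the essentials.

#include <stdio.h>
#include <stdlib.h>

#define MAX_STREAMS 3

struct stream {
    char *buffer;     /* allocated in the prepare phase */
    int   registered; /* only set in the register phase */
};

static struct stream streams[MAX_STREAMS];

static void streams_cleanup(void)
{
    for (int i = 0; i < MAX_STREAMS; i++) {
        free(streams[i].buffer);
        streams[i].buffer = NULL;
        streams[i].registered = 0;
    }
}

/* Phase 1: allocate everything, publish nothing.  Failing here is cheap
 * because no user-visible device exists yet. */
static int streams_setup(void)
{
    for (int i = 0; i < MAX_STREAMS; i++) {
        streams[i].buffer = malloc(4096);
        if (!streams[i].buffer) {
            streams_cleanup();
            return -1;
        }
    }
    return 0;
}

/* Phase 2: make the streams visible, once interrupts etc. are in place. */
static int streams_register(void)
{
    for (int i = 0; i < MAX_STREAMS; i++)
        streams[i].registered = 1;
    return 0;
}

int main(void)
{
    if (streams_setup())
        return 1;
    /* ...request_irq() and other late initialisation would go here... */
    if (streams_register())
        return 1;
    printf("all %d streams registered\n", MAX_STREAMS);
    streams_cleanup();
    return 0;
}

Keeping registration last means a half-initialized card is never visible to userspace, and the failure path only has to undo allocations.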
1 : 0); - /* Clear Streamoff flags in case left from last capture */ clear_bit(IVTV_F_S_STREAMOFF, &s->s_flags); @@ -536,7 +550,12 @@ int ivtv_start_v4l2_encode_stream(struct ivtv_stream *s) itv->pgm_info_offset, itv->pgm_info_num); /* Setup API for Stream */ - cx2341x_update(itv, ivtv_api_func, NULL, &itv->params); + cx2341x_update(itv, ivtv_api_func, NULL, p); + + /* mute if capturing radio */ + if (test_bit(IVTV_F_I_RADIO_USER, &itv->i_flags)) + ivtv_vapi(itv, CX2341X_ENC_MUTE_VIDEO, 1, + 1 | (p->video_mute_yuv << 8)); } /* Vsync Setup */ @@ -585,6 +604,7 @@ static int ivtv_setup_v4l2_decode_stream(struct ivtv_stream *s) { u32 data[CX2341X_MBOX_MAX_DATA]; struct ivtv *itv = s->itv; + struct cx2341x_mpeg_params *p = &itv->params; int datatype; if (s->v4l2dev == NULL) @@ -623,7 +643,7 @@ static int ivtv_setup_v4l2_decode_stream(struct ivtv_stream *s) break; } if (ivtv_vapi(itv, CX2341X_DEC_SET_DECODER_SOURCE, 4, datatype, - itv->params.width, itv->params.height, itv->params.audio_properties)) { + p->width, p->height, p->audio_properties)) { IVTV_DEBUG_WARN("Couldn't initialize decoder source\n"); } return 0; diff --git a/drivers/media/video/ivtv/ivtv-streams.h b/drivers/media/video/ivtv/ivtv-streams.h index 8f5f5b1c7c8..3d76a415fbd 100644 --- a/drivers/media/video/ivtv/ivtv-streams.h +++ b/drivers/media/video/ivtv/ivtv-streams.h @@ -22,6 +22,7 @@ #define IVTV_STREAMS_H int ivtv_streams_setup(struct ivtv *itv); +int ivtv_streams_register(struct ivtv *itv); void ivtv_streams_cleanup(struct ivtv *itv); /* Capture related */ diff --git a/drivers/media/video/ivtv/ivtv-udma.c b/drivers/media/video/ivtv/ivtv-udma.c index c4626d1cdf4..912b424e520 100644 --- a/drivers/media/video/ivtv/ivtv-udma.c +++ b/drivers/media/video/ivtv/ivtv-udma.c @@ -63,10 +63,10 @@ int ivtv_udma_fill_sg_list (struct ivtv_user_dma *dma, struct ivtv_dma_page_info memcpy(page_address(dma->bouncemap[map_offset]) + offset, src, len); kunmap_atomic(src, KM_BOUNCE_READ); local_irq_restore(flags); - dma->SGlist[map_offset].page = dma->bouncemap[map_offset]; + sg_set_page(&dma->SGlist[map_offset], dma->bouncemap[map_offset]); } else { - dma->SGlist[map_offset].page = dma->map[map_offset]; + sg_set_page(&dma->SGlist[map_offset], dma->map[map_offset]); } offset = 0; map_offset++; diff --git a/drivers/media/video/ivtv/ivtv-yuv.c b/drivers/media/video/ivtv/ivtv-yuv.c index e2288f224ab..9091c4837bb 100644 --- a/drivers/media/video/ivtv/ivtv-yuv.c +++ b/drivers/media/video/ivtv/ivtv-yuv.c @@ -710,7 +710,7 @@ static u32 ivtv_yuv_window_setup (struct ivtv *itv, struct yuv_frame_info *windo /* If there's nothing to safe to display, we may as well stop now */ if ((int)window->dst_w <= 2 || (int)window->dst_h <= 2 || (int)window->src_w <= 2 || (int)window->src_h <= 2) { - return 0; + return IVTV_YUV_UPDATE_INVALID; } /* Ensure video remains inside OSD area */ @@ -791,7 +791,7 @@ static u32 ivtv_yuv_window_setup (struct ivtv *itv, struct yuv_frame_info *windo /* Check again. 
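Both vmalloc_to_sg() in saa7146_core.c (earlier in this series) and ivtv_udma_fill_sg_list() above stop assigning sglist[i].page directly and go through sg_init_table()/sg_set_page() instead, presumably so that whatever extra bookkeeping the scatterlist code keeps stays consistent. The sketch below shows the same "initialize the table, then only use the setter" discipline in user space; the struct and helper names are invented stand-ins, not the kernel scatterlist API.

#include <stdio.h>
#include <string.h>

struct sg_entry {
    void        *page;     /* what callers used to assign directly */
    unsigned int length;
    unsigned int flags;    /* extra state the setter keeps consistent */
};

#define SG_VALID 0x1

static void sg_table_init(struct sg_entry *sg, int nents)
{
    memset(sg, 0, nents * sizeof(*sg));   /* known-clean starting state */
}

static void sg_entry_set(struct sg_entry *sg, void *page, unsigned int len)
{
    sg->page   = page;
    sg->length = len;
    sg->flags |= SG_VALID;                /* bookkeeping a direct assignment would miss */
}

int main(void)
{
    static char page_a[4096], page_b[4096];
    struct sg_entry sgl[2];

    sg_table_init(sgl, 2);
    sg_entry_set(&sgl[0], page_a, sizeof(page_a));
    sg_entry_set(&sgl[1], page_b, sizeof(page_b));

    for (int i = 0; i < 2; i++)
        printf("entry %d: %u bytes, valid=%d\n",
               i, sgl[i].length, !!(sgl[i].flags & SG_VALID));
    return 0;
}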
If there's nothing to safe to display, stop now */ if ((int)window->dst_w <= 2 || (int)window->dst_h <= 2 || (int)window->src_w <= 2 || (int)window->src_h <= 2) { - return 0; + return IVTV_YUV_UPDATE_INVALID; } /* Both x offset & width are linked, so they have to be done together */ @@ -840,110 +840,118 @@ void ivtv_yuv_work_handler (struct ivtv *itv) if (!(yuv_update = ivtv_yuv_window_setup (itv, &window))) return; - /* Update horizontal settings */ - if (yuv_update & IVTV_YUV_UPDATE_HORIZONTAL) - ivtv_yuv_handle_horizontal(itv, &window); + if (yuv_update & IVTV_YUV_UPDATE_INVALID) { + write_reg(0x01008080, 0x2898); + } else if (yuv_update) { + write_reg(0x00108080, 0x2898); - if (yuv_update & IVTV_YUV_UPDATE_VERTICAL) - ivtv_yuv_handle_vertical(itv, &window); + if (yuv_update & IVTV_YUV_UPDATE_HORIZONTAL) + ivtv_yuv_handle_horizontal(itv, &window); + + if (yuv_update & IVTV_YUV_UPDATE_VERTICAL) + ivtv_yuv_handle_vertical(itv, &window); + } memcpy(&itv->yuv_info.old_frame_info, &window, sizeof (itv->yuv_info.old_frame_info)); } static void ivtv_yuv_init (struct ivtv *itv) { + struct yuv_playback_info *yi = &itv->yuv_info; + IVTV_DEBUG_YUV("ivtv_yuv_init\n"); /* Take a snapshot of the current register settings */ - itv->yuv_info.reg_2834 = read_reg(0x02834); - itv->yuv_info.reg_2838 = read_reg(0x02838); - itv->yuv_info.reg_283c = read_reg(0x0283c); - itv->yuv_info.reg_2840 = read_reg(0x02840); - itv->yuv_info.reg_2844 = read_reg(0x02844); - itv->yuv_info.reg_2848 = read_reg(0x02848); - itv->yuv_info.reg_2854 = read_reg(0x02854); - itv->yuv_info.reg_285c = read_reg(0x0285c); - itv->yuv_info.reg_2864 = read_reg(0x02864); - itv->yuv_info.reg_2870 = read_reg(0x02870); - itv->yuv_info.reg_2874 = read_reg(0x02874); - itv->yuv_info.reg_2898 = read_reg(0x02898); - itv->yuv_info.reg_2890 = read_reg(0x02890); - - itv->yuv_info.reg_289c = read_reg(0x0289c); - itv->yuv_info.reg_2918 = read_reg(0x02918); - itv->yuv_info.reg_291c = read_reg(0x0291c); - itv->yuv_info.reg_2920 = read_reg(0x02920); - itv->yuv_info.reg_2924 = read_reg(0x02924); - itv->yuv_info.reg_2928 = read_reg(0x02928); - itv->yuv_info.reg_292c = read_reg(0x0292c); - itv->yuv_info.reg_2930 = read_reg(0x02930); - itv->yuv_info.reg_2934 = read_reg(0x02934); - itv->yuv_info.reg_2938 = read_reg(0x02938); - itv->yuv_info.reg_293c = read_reg(0x0293c); - itv->yuv_info.reg_2940 = read_reg(0x02940); - itv->yuv_info.reg_2944 = read_reg(0x02944); - itv->yuv_info.reg_2948 = read_reg(0x02948); - itv->yuv_info.reg_294c = read_reg(0x0294c); - itv->yuv_info.reg_2950 = read_reg(0x02950); - itv->yuv_info.reg_2954 = read_reg(0x02954); - itv->yuv_info.reg_2958 = read_reg(0x02958); - itv->yuv_info.reg_295c = read_reg(0x0295c); - itv->yuv_info.reg_2960 = read_reg(0x02960); - itv->yuv_info.reg_2964 = read_reg(0x02964); - itv->yuv_info.reg_2968 = read_reg(0x02968); - itv->yuv_info.reg_296c = read_reg(0x0296c); - itv->yuv_info.reg_2970 = read_reg(0x02970); - - itv->yuv_info.v_filter_1 = -1; - itv->yuv_info.v_filter_2 = -1; - itv->yuv_info.h_filter = -1; + yi->reg_2834 = read_reg(0x02834); + yi->reg_2838 = read_reg(0x02838); + yi->reg_283c = read_reg(0x0283c); + yi->reg_2840 = read_reg(0x02840); + yi->reg_2844 = read_reg(0x02844); + yi->reg_2848 = read_reg(0x02848); + yi->reg_2854 = read_reg(0x02854); + yi->reg_285c = read_reg(0x0285c); + yi->reg_2864 = read_reg(0x02864); + yi->reg_2870 = read_reg(0x02870); + yi->reg_2874 = read_reg(0x02874); + yi->reg_2898 = read_reg(0x02898); + yi->reg_2890 = read_reg(0x02890); + + yi->reg_289c = read_reg(0x0289c); + 
yi->reg_2918 = read_reg(0x02918); + yi->reg_291c = read_reg(0x0291c); + yi->reg_2920 = read_reg(0x02920); + yi->reg_2924 = read_reg(0x02924); + yi->reg_2928 = read_reg(0x02928); + yi->reg_292c = read_reg(0x0292c); + yi->reg_2930 = read_reg(0x02930); + yi->reg_2934 = read_reg(0x02934); + yi->reg_2938 = read_reg(0x02938); + yi->reg_293c = read_reg(0x0293c); + yi->reg_2940 = read_reg(0x02940); + yi->reg_2944 = read_reg(0x02944); + yi->reg_2948 = read_reg(0x02948); + yi->reg_294c = read_reg(0x0294c); + yi->reg_2950 = read_reg(0x02950); + yi->reg_2954 = read_reg(0x02954); + yi->reg_2958 = read_reg(0x02958); + yi->reg_295c = read_reg(0x0295c); + yi->reg_2960 = read_reg(0x02960); + yi->reg_2964 = read_reg(0x02964); + yi->reg_2968 = read_reg(0x02968); + yi->reg_296c = read_reg(0x0296c); + yi->reg_2970 = read_reg(0x02970); + + yi->v_filter_1 = -1; + yi->v_filter_2 = -1; + yi->h_filter = -1; /* Set some valid size info */ - itv->yuv_info.osd_x_offset = read_reg(0x02a04) & 0x00000FFF; - itv->yuv_info.osd_y_offset = (read_reg(0x02a04) >> 16) & 0x00000FFF; + yi->osd_x_offset = read_reg(0x02a04) & 0x00000FFF; + yi->osd_y_offset = (read_reg(0x02a04) >> 16) & 0x00000FFF; /* Bit 2 of reg 2878 indicates current decoder output format 0 : NTSC 1 : PAL */ if (read_reg(0x2878) & 4) - itv->yuv_info.decode_height = 576; + yi->decode_height = 576; else - itv->yuv_info.decode_height = 480; + yi->decode_height = 480; - /* If no visible size set, assume full size */ - if (!itv->yuv_info.osd_vis_w) - itv->yuv_info.osd_vis_w = 720 - itv->yuv_info.osd_x_offset; - - if (!itv->yuv_info.osd_vis_h) { - itv->yuv_info.osd_vis_h = itv->yuv_info.decode_height - itv->yuv_info.osd_y_offset; + if (!itv->osd_info) { + yi->osd_vis_w = 720 - yi->osd_x_offset; + yi->osd_vis_h = yi->decode_height - yi->osd_y_offset; } else { - /* If output video standard has changed, requested height may - not be legal */ - if (itv->yuv_info.osd_vis_h + itv->yuv_info.osd_y_offset > itv->yuv_info.decode_height) { - IVTV_DEBUG_WARN("Clipping yuv output - fb size (%d) exceeds video standard limit (%d)\n", - itv->yuv_info.osd_vis_h + itv->yuv_info.osd_y_offset, - itv->yuv_info.decode_height); - itv->yuv_info.osd_vis_h = itv->yuv_info.decode_height - itv->yuv_info.osd_y_offset; + /* If no visible size set, assume full size */ + if (!yi->osd_vis_w) + yi->osd_vis_w = 720 - yi->osd_x_offset; + + if (!yi->osd_vis_h) + yi->osd_vis_h = yi->decode_height - yi->osd_y_offset; + else { + /* If output video standard has changed, requested height may + not be legal */ + if (yi->osd_vis_h + yi->osd_y_offset > yi->decode_height) { + IVTV_DEBUG_WARN("Clipping yuv output - fb size (%d) exceeds video standard limit (%d)\n", + yi->osd_vis_h + yi->osd_y_offset, + yi->decode_height); + yi->osd_vis_h = yi->decode_height - yi->osd_y_offset; + } } } /* We need a buffer for blanking when Y plane is offset - non-fatal if we can't get one */ - itv->yuv_info.blanking_ptr = kzalloc(720*16,GFP_KERNEL); - if (itv->yuv_info.blanking_ptr) { - itv->yuv_info.blanking_dmaptr = pci_map_single(itv->dev, itv->yuv_info.blanking_ptr, 720*16, PCI_DMA_TODEVICE); - } + yi->blanking_ptr = kzalloc(720*16, GFP_KERNEL); + if (yi->blanking_ptr) + yi->blanking_dmaptr = pci_map_single(itv->dev, yi->blanking_ptr, 720*16, PCI_DMA_TODEVICE); else { - itv->yuv_info.blanking_dmaptr = 0; - IVTV_DEBUG_WARN ("Failed to allocate yuv blanking buffer\n"); + yi->blanking_dmaptr = 0; + IVTV_DEBUG_WARN("Failed to allocate yuv blanking buffer\n"); } - IVTV_DEBUG_WARN("Enable video output\n"); - write_reg_sync(0x00108080, 
0x2898); - /* Enable YUV decoder output */ write_reg_sync(0x01, IVTV_REG_VDM); set_bit(IVTV_F_I_DECODING_YUV, &itv->i_flags); - atomic_set(&itv->yuv_info.next_dma_frame,0); + atomic_set(&yi->next_dma_frame, 0); } int ivtv_yuv_prep_frame(struct ivtv *itv, struct ivtv_dma_frame *args) diff --git a/drivers/media/video/ivtv/ivtv-yuv.h b/drivers/media/video/ivtv/ivtv-yuv.h index f7215eeca01..3b966f0a204 100644 --- a/drivers/media/video/ivtv/ivtv-yuv.h +++ b/drivers/media/video/ivtv/ivtv-yuv.h @@ -34,6 +34,7 @@ #define IVTV_YUV_UPDATE_HORIZONTAL 0x01 #define IVTV_YUV_UPDATE_VERTICAL 0x02 +#define IVTV_YUV_UPDATE_INVALID 0x04 extern const u32 yuv_offset[4]; diff --git a/drivers/media/video/ivtv/ivtvfb.c b/drivers/media/video/ivtv/ivtvfb.c index 9684048fe56..52ffd154a3d 100644 --- a/drivers/media/video/ivtv/ivtvfb.c +++ b/drivers/media/video/ivtv/ivtvfb.c @@ -55,7 +55,6 @@ static int ivtvfb_card_id = -1; static int ivtvfb_debug = 0; static int osd_laced; -static int osd_compat; static int osd_depth; static int osd_upper; static int osd_left; @@ -65,7 +64,6 @@ static int osd_xres; module_param(ivtvfb_card_id, int, 0444); module_param_named(debug,ivtvfb_debug, int, 0644); module_param(osd_laced, bool, 0444); -module_param(osd_compat, bool, 0444); module_param(osd_depth, int, 0444); module_param(osd_upper, int, 0444); module_param(osd_left, int, 0444); @@ -80,12 +78,6 @@ MODULE_PARM_DESC(debug, "Debug level (bitmask). Default: errors only\n" "\t\t\t(debug = 3 gives full debugging)"); -MODULE_PARM_DESC(osd_compat, - "Compatibility mode - Display size is locked (use for old X drivers)\n" - "\t\t\t0=off\n" - "\t\t\t1=on\n" - "\t\t\tdefault off"); - /* Why upper, left, xres, yres, depth, laced ? To match terminology used by fbset. Why start at 1 for left & upper coordinate ? Because X doesn't allow 0 */ @@ -166,9 +158,6 @@ struct osd_info { unsigned long fb_end_aligned_physaddr; #endif - /* Current osd mode */ - int osd_mode; - /* Store the buffer offset */ int set_osd_coords_x; int set_osd_coords_y; @@ -470,13 +459,11 @@ static int ivtvfb_set_var(struct ivtv *itv, struct fb_var_screeninfo *var) IVTVFB_DEBUG_WARN("ivtvfb_set_var - Invalid bpp\n"); } - /* Change osd mode if needed. - Although rare, things can go wrong. The extra mode - change seems to help... */ - if (osd_mode != -1 && osd_mode != oi->osd_mode) { + /* Set video mode. Although rare, the display can become scrambled even + if we don't change mode. 
Always 'bounce' to osd_mode via mode 0 */ + if (osd_mode != -1) { ivtv_vapi(itv, CX2341X_OSD_SET_PIXEL_FORMAT, 1, 0); ivtv_vapi(itv, CX2341X_OSD_SET_PIXEL_FORMAT, 1, osd_mode); - oi->osd_mode = osd_mode; } oi->bits_per_pixel = var->bits_per_pixel; @@ -579,14 +566,6 @@ static int _ivtvfb_check_var(struct fb_var_screeninfo *var, struct ivtv *itv) osd_height_limit = 480; } - /* Check the bits per pixel */ - if (osd_compat) { - if (var->bits_per_pixel != 32) { - IVTVFB_DEBUG_WARN("Invalid colour mode: %d\n", var->bits_per_pixel); - return -EINVAL; - } - } - if (var->bits_per_pixel == 8 || var->bits_per_pixel == 32) { var->transp.offset = 24; var->transp.length = 8; @@ -638,32 +617,20 @@ static int _ivtvfb_check_var(struct fb_var_screeninfo *var, struct ivtv *itv) } /* Check the resolution */ - if (osd_compat) { - if (var->xres != oi->ivtvfb_defined.xres || - var->yres != oi->ivtvfb_defined.yres || - var->xres_virtual != oi->ivtvfb_defined.xres_virtual || - var->yres_virtual != oi->ivtvfb_defined.yres_virtual) { - IVTVFB_DEBUG_WARN("Invalid resolution: %dx%d (virtual %dx%d)\n", - var->xres, var->yres, var->xres_virtual, var->yres_virtual); - return -EINVAL; - } + if (var->xres > IVTV_OSD_MAX_WIDTH || var->yres > osd_height_limit) { + IVTVFB_DEBUG_WARN("Invalid resolution: %dx%d\n", + var->xres, var->yres); + return -EINVAL; } - else { - if (var->xres > IVTV_OSD_MAX_WIDTH || var->yres > osd_height_limit) { - IVTVFB_DEBUG_WARN("Invalid resolution: %dx%d\n", - var->xres, var->yres); - return -EINVAL; - } - /* Max horizontal size is 1023 @ 32bpp, 2046 & 16bpp, 4092 @ 8bpp */ - if (var->xres_virtual > 4095 / (var->bits_per_pixel / 8) || - var->xres_virtual * var->yres_virtual * (var->bits_per_pixel / 8) > oi->video_buffer_size || - var->xres_virtual < var->xres || - var->yres_virtual < var->yres) { - IVTVFB_DEBUG_WARN("Invalid virtual resolution: %dx%d\n", - var->xres_virtual, var->yres_virtual); - return -EINVAL; - } + /* Max horizontal size is 1023 @ 32bpp, 2046 & 16bpp, 4092 @ 8bpp */ + if (var->xres_virtual > 4095 / (var->bits_per_pixel / 8) || + var->xres_virtual * var->yres_virtual * (var->bits_per_pixel / 8) > oi->video_buffer_size || + var->xres_virtual < var->xres || + var->yres_virtual < var->yres) { + IVTVFB_DEBUG_WARN("Invalid virtual resolution: %dx%d\n", + var->xres_virtual, var->yres_virtual); + return -EINVAL; } /* Some extra checks if in 8 bit mode */ @@ -877,17 +844,15 @@ static int ivtvfb_init_vidmode(struct ivtv *itv) /* Color mode */ - if (osd_compat) osd_depth = 32; - if (osd_depth != 8 && osd_depth != 16 && osd_depth != 32) osd_depth = 8; + if (osd_depth != 8 && osd_depth != 16 && osd_depth != 32) + osd_depth = 8; oi->bits_per_pixel = osd_depth; oi->bytes_per_pixel = oi->bits_per_pixel / 8; - /* Invalidate current osd mode to force a mode switch later */ - oi->osd_mode = -1; - /* Horizontal size & position */ - if (osd_xres > 720) osd_xres = 720; + if (osd_xres > 720) + osd_xres = 720; /* Must be a multiple of 4 for 8bpp & 2 for 16bpp */ if (osd_depth == 8) @@ -895,10 +860,7 @@ static int ivtvfb_init_vidmode(struct ivtv *itv) else if (osd_depth == 16) osd_xres &= ~1; - if (osd_xres) - start_window.width = osd_xres; - else - start_window.width = osd_compat ? 720: 640; + start_window.width = osd_xres ? osd_xres : 640; /* Check horizontal start (osd_left). 
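With osd_compat gone, _ivtvfb_check_var() above always applies the generic limits: xres/yres are bounded by the OSD maximum width and the standard-dependent height limit, and the virtual resolution is bounded by a 4095-byte scan line (1023 pixels at 32 bpp, 2046 at 16 bpp, 4092 at 8 bpp), the OSD buffer size, and the visible resolution. A stand-alone version of just that arithmetic is below; OSD_MAX_WIDTH is assumed to be 720 here (the cap applied to osd_xres elsewhere in the file), since the hunk itself does not show the constant.

#include <stdio.h>

#define OSD_MAX_WIDTH 720   /* assumption for this demo, see note above */

/* Mirror of the virtual-resolution checks in _ivtvfb_check_var():
 * a scan line may not exceed 4095 bytes and the whole virtual screen
 * must fit in the OSD buffer. */
static int check_var(int xres, int yres, int xres_virtual, int yres_virtual,
                     int bits_per_pixel, long video_buffer_size,
                     int osd_height_limit)
{
    int bytespp = bits_per_pixel / 8;

    if (xres > OSD_MAX_WIDTH || yres > osd_height_limit)
        return -1;
    if (xres_virtual > 4095 / bytespp ||
        (long)xres_virtual * yres_virtual * bytespp > video_buffer_size ||
        xres_virtual < xres || yres_virtual < yres)
        return -1;
    return 0;
}

int main(void)
{
    /* 1024 virtual pixels at 32 bpp is one pixel too wide (4095 / 4 = 1023). */
    printf("640x480/32bpp, virt 1024x480: %s\n",
           check_var(640, 480, 1024, 480, 32, 8 << 20, 480) ? "rejected" : "ok");
    printf("640x480/32bpp, virt 1023x480: %s\n",
           check_var(640, 480, 1023, 480, 32, 8 << 20, 480) ? "rejected" : "ok");
    return 0;
}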
*/ if (osd_left && osd_left + start_window.width > 721) { @@ -921,10 +883,7 @@ static int ivtvfb_init_vidmode(struct ivtv *itv) if (osd_yres > max_height) osd_yres = max_height; - if (osd_yres) - start_window.height = osd_yres; - else - start_window.height = osd_compat ? max_height : (itv->is_50hz ? 480 : 400); + start_window.height = osd_yres ? osd_yres : itv->is_50hz ? 480 : 400; /* Check vertical start (osd_upper). */ if (osd_upper + start_window.height > max_height + 1) { @@ -1127,10 +1086,6 @@ static int ivtvfb_init_card(struct ivtv *itv) /* Enable the osd */ ivtvfb_blank(FB_BLANK_UNBLANK, &itv->osd_info->ivtvfb_info); - /* Note if we're running in compatibility mode */ - if (osd_compat) - IVTVFB_INFO("Running in compatibility mode. Display resize & mode change disabled\n"); - /* Allocate DMA */ ivtv_udma_alloc(itv); return 0; @@ -1177,9 +1132,12 @@ static void ivtvfb_cleanup(void) for (i = 0; i < ivtv_cards_active; i++) { itv = ivtv_cards[i]; if (itv && (itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT) && itv->osd_info) { + if (unregister_framebuffer(&itv->osd_info->ivtvfb_info)) { + IVTVFB_WARN("Framebuffer %d is in use, cannot unload\n", i); + return; + } IVTVFB_DEBUG_INFO("Unregister framebuffer %d\n", i); ivtvfb_blank(FB_BLANK_POWERDOWN, &itv->osd_info->ivtvfb_info); - unregister_framebuffer(&itv->osd_info->ivtvfb_info); ivtvfb_release_buffers(itv); itv->osd_video_pbase = 0; } diff --git a/drivers/media/video/meye.c b/drivers/media/video/meye.c index 69283926a8d..c3116329043 100644 --- a/drivers/media/video/meye.c +++ b/drivers/media/video/meye.c @@ -1762,7 +1762,6 @@ static struct video_device meye_template = { .owner = THIS_MODULE, .name = "meye", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_MEYE, .fops = &meye_fops, .release = video_device_release, .minor = -1, diff --git a/drivers/media/video/ov511.c b/drivers/media/video/ov511.c index b8d4ac0d938..d55d5800efb 100644 --- a/drivers/media/video/ov511.c +++ b/drivers/media/video/ov511.c @@ -4668,7 +4668,6 @@ static struct video_device vdev_template = { .owner = THIS_MODULE, .name = "OV511 USB Camera", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_OV511, .fops = &ov511_fops, .release = video_device_release, .minor = -1, diff --git a/drivers/media/video/planb.c b/drivers/media/video/planb.c index 0ef73d9d584..ce4b2f9791e 100644 --- a/drivers/media/video/planb.c +++ b/drivers/media/video/planb.c @@ -2013,7 +2013,6 @@ static struct video_device planb_template= .owner = THIS_MODULE, .name = PLANB_DEVICE_NAME, .type = VID_TYPE_OVERLAY, - .hardware = VID_HARDWARE_PLANB, .open = planb_open, .close = planb_close, .read = planb_read, diff --git a/drivers/media/video/pms.c b/drivers/media/video/pms.c index b5a67f0dd19..6820c2aabd3 100644 --- a/drivers/media/video/pms.c +++ b/drivers/media/video/pms.c @@ -895,7 +895,6 @@ static struct video_device pms_template= .owner = THIS_MODULE, .name = "Mediavision PMS", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_PMS, .fops = &pms_fops, }; diff --git a/drivers/media/video/pvrusb2/pvrusb2-encoder.c b/drivers/media/video/pvrusb2/pvrusb2-encoder.c index 20b614436d2..205087a3e13 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-encoder.c +++ b/drivers/media/video/pvrusb2/pvrusb2-encoder.c @@ -209,6 +209,11 @@ static int pvr2_encoder_cmd(void *ctxt, LOCK_TAKE(hdw->ctl_lock); do { + if (!hdw->flag_encoder_ok) { + ret = -EIO; + break; + } + retry_flag = 0; try_count++; ret = 0; @@ -273,6 +278,7 @@ static int pvr2_encoder_cmd(void *ctxt, ret = -EBUSY; } if (ret) { + hdw->flag_encoder_ok = 0; 
pvr2_trace( PVR2_TRACE_ERROR_LEGS, "Giving up on command." diff --git a/drivers/media/video/pvrusb2/pvrusb2-hdw-internal.h b/drivers/media/video/pvrusb2/pvrusb2-hdw-internal.h index 985d9ae7f5e..f873994b088 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-hdw-internal.h +++ b/drivers/media/video/pvrusb2/pvrusb2-hdw-internal.h @@ -225,11 +225,12 @@ struct pvr2_hdw { unsigned int cmd_debug_write_len; // unsigned int cmd_debug_read_len; // - int flag_ok; // device in known good state - int flag_disconnected; // flag_ok == 0 due to disconnect - int flag_init_ok; // true if structure is fully initialized - int flag_streaming_enabled; // true if streaming should be on - int fw1_state; // current situation with fw1 + int flag_ok; /* device in known good state */ + int flag_disconnected; /* flag_ok == 0 due to disconnect */ + int flag_init_ok; /* true if structure is fully initialized */ + int flag_streaming_enabled; /* true if streaming should be on */ + int fw1_state; /* current situation with fw1 */ + int flag_encoder_ok; /* True if encoder is healthy */ int flag_decoder_is_tuned; diff --git a/drivers/media/video/pvrusb2/pvrusb2-hdw.c b/drivers/media/video/pvrusb2/pvrusb2-hdw.c index 27b12b4b5c8..402c5948825 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-hdw.c +++ b/drivers/media/video/pvrusb2/pvrusb2-hdw.c @@ -1248,6 +1248,8 @@ int pvr2_upload_firmware2(struct pvr2_hdw *hdw) time we configure the encoder, then we'll fully configure it. */ hdw->enc_cur_valid = 0; + hdw->flag_encoder_ok = 0; + /* First prepare firmware loading */ ret |= pvr2_write_register(hdw, 0x0048, 0xffffffff); /*interrupt mask*/ ret |= pvr2_hdw_gpio_chg_dir(hdw,0xffffffff,0x00000088); /*gpio dir*/ @@ -1346,6 +1348,7 @@ int pvr2_upload_firmware2(struct pvr2_hdw *hdw) pvr2_trace(PVR2_TRACE_ERROR_LEGS, "firmware2 upload post-proc failure"); } else { + hdw->flag_encoder_ok = !0; hdw->subsys_enabled_mask |= (1<<PVR2_SUBSYS_B_ENC_FIRMWARE); } return ret; diff --git a/drivers/media/video/pvrusb2/pvrusb2-v4l2.c b/drivers/media/video/pvrusb2/pvrusb2-v4l2.c index 4563b3df8a0..7a596ea7cfe 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-v4l2.c +++ b/drivers/media/video/pvrusb2/pvrusb2-v4l2.c @@ -1121,15 +1121,12 @@ static const struct file_operations vdev_fops = { }; -#define VID_HARDWARE_PVRUSB2 38 /* FIXME : need a good value */ - static struct video_device vdev_template = { .owner = THIS_MODULE, .type = VID_TYPE_CAPTURE | VID_TYPE_TUNER, .type2 = (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VBI_CAPTURE | V4L2_CAP_TUNER | V4L2_CAP_AUDIO | V4L2_CAP_READWRITE), - .hardware = VID_HARDWARE_PVRUSB2, .fops = &vdev_fops, }; diff --git a/drivers/media/video/pwc/pwc-if.c b/drivers/media/video/pwc/pwc-if.c index 950da254214..7300ace8f44 100644 --- a/drivers/media/video/pwc/pwc-if.c +++ b/drivers/media/video/pwc/pwc-if.c @@ -166,7 +166,6 @@ static struct video_device pwc_template = { .owner = THIS_MODULE, .name = "Philips Webcam", /* Filled in later */ .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_PWC, .release = video_device_release, .fops = &pwc_fops, .minor = -1, diff --git a/drivers/media/video/saa7134/saa6752hs.c b/drivers/media/video/saa7134/saa6752hs.c index 57f1f5d409e..002e70a33a4 100644 --- a/drivers/media/video/saa7134/saa6752hs.c +++ b/drivers/media/video/saa7134/saa6752hs.c @@ -71,7 +71,6 @@ static const struct v4l2_format v4l2_format_table[] = struct saa6752hs_state { struct i2c_client client; - struct v4l2_mpeg_compression old_params; struct saa6752hs_mpeg_params params; enum saa6752hs_videoformat video_format; v4l2_std_id 
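The pvrusb2 hunks above add a flag_encoder_ok health flag: it is cleared before firmware upload and whenever a command attempt finally gives up, set only after the firmware post-processing succeeds, and checked at the top of pvr2_encoder_cmd() so a dead encoder fails fast with -EIO instead of being retried forever. A compact user-space sketch of that pattern; the types and the simulated upload are invented for the illustration.

#include <stdio.h>
#include <errno.h>          /* EIO */

struct encoder {
    int flag_ok;            /* health flag, as in pvrusb2's flag_encoder_ok */
};

/* Pretend firmware upload: clears the flag first, sets it only on success. */
static int upload_firmware(struct encoder *enc, int will_succeed)
{
    enc->flag_ok = 0;
    /* ...the actual transfer would happen here... */
    if (!will_succeed)
        return -EIO;
    enc->flag_ok = 1;
    return 0;
}

/* Commands refuse to run against an encoder known to be dead. */
static int encoder_cmd(struct encoder *enc)
{
    if (!enc->flag_ok)
        return -EIO;        /* fail fast instead of retrying a broken device */
    return 0;
}

int main(void)
{
    struct encoder enc = { 0 };

    upload_firmware(&enc, 0);                    /* simulate a failed upload */
    printf("cmd after failed upload: %d\n", encoder_cmd(&enc));

    upload_firmware(&enc, 1);
    printf("cmd after good upload:   %d\n", encoder_cmd(&enc));
    return 0;
}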
standard; @@ -161,35 +160,6 @@ static struct saa6752hs_mpeg_params param_defaults = .au_l2_bitrate = V4L2_MPEG_AUDIO_L2_BITRATE_256K, }; -static struct v4l2_mpeg_compression old_param_defaults = -{ - .st_type = V4L2_MPEG_TS_2, - .st_bitrate = { - .mode = V4L2_BITRATE_CBR, - .target = 7000, - }, - - .ts_pid_pmt = 16, - .ts_pid_video = 260, - .ts_pid_audio = 256, - .ts_pid_pcr = 259, - - .vi_type = V4L2_MPEG_VI_2, - .vi_aspect_ratio = V4L2_MPEG_ASPECT_4_3, - .vi_bitrate = { - .mode = V4L2_BITRATE_VBR, - .target = 4000, - .max = 6000, - }, - - .au_type = V4L2_MPEG_AU_2_II, - .au_bitrate = { - .mode = V4L2_BITRATE_CBR, - .target = 256, - }, - -}; - /* ---------------------------------------------------------------------- */ static int saa6752hs_chip_command(struct i2c_client* client, @@ -362,74 +332,6 @@ static void saa6752hs_set_subsampling(struct i2c_client* client, } -static void saa6752hs_old_set_params(struct i2c_client* client, - struct v4l2_mpeg_compression* params) -{ - struct saa6752hs_state *h = i2c_get_clientdata(client); - - /* check PIDs */ - if (params->ts_pid_pmt <= MPEG_PID_MAX) { - h->old_params.ts_pid_pmt = params->ts_pid_pmt; - h->params.ts_pid_pmt = params->ts_pid_pmt; - } - if (params->ts_pid_pcr <= MPEG_PID_MAX) { - h->old_params.ts_pid_pcr = params->ts_pid_pcr; - h->params.ts_pid_pcr = params->ts_pid_pcr; - } - if (params->ts_pid_video <= MPEG_PID_MAX) { - h->old_params.ts_pid_video = params->ts_pid_video; - h->params.ts_pid_video = params->ts_pid_video; - } - if (params->ts_pid_audio <= MPEG_PID_MAX) { - h->old_params.ts_pid_audio = params->ts_pid_audio; - h->params.ts_pid_audio = params->ts_pid_audio; - } - - /* check bitrate parameters */ - if ((params->vi_bitrate.mode == V4L2_BITRATE_CBR) || - (params->vi_bitrate.mode == V4L2_BITRATE_VBR)) { - h->old_params.vi_bitrate.mode = params->vi_bitrate.mode; - h->params.vi_bitrate_mode = (params->vi_bitrate.mode == V4L2_BITRATE_VBR) ? 
- V4L2_MPEG_VIDEO_BITRATE_MODE_VBR : V4L2_MPEG_VIDEO_BITRATE_MODE_CBR; - } - if (params->vi_bitrate.mode != V4L2_BITRATE_NONE) - h->old_params.st_bitrate.target = params->st_bitrate.target; - if (params->vi_bitrate.mode != V4L2_BITRATE_NONE) - h->old_params.vi_bitrate.target = params->vi_bitrate.target; - if (params->vi_bitrate.mode == V4L2_BITRATE_VBR) - h->old_params.vi_bitrate.max = params->vi_bitrate.max; - if (params->au_bitrate.mode != V4L2_BITRATE_NONE) - h->old_params.au_bitrate.target = params->au_bitrate.target; - - /* aspect ratio */ - if (params->vi_aspect_ratio == V4L2_MPEG_ASPECT_4_3 || - params->vi_aspect_ratio == V4L2_MPEG_ASPECT_16_9) { - h->old_params.vi_aspect_ratio = params->vi_aspect_ratio; - if (params->vi_aspect_ratio == V4L2_MPEG_ASPECT_4_3) - h->params.vi_aspect = V4L2_MPEG_VIDEO_ASPECT_4x3; - else - h->params.vi_aspect = V4L2_MPEG_VIDEO_ASPECT_16x9; - } - - /* range checks */ - if (h->old_params.st_bitrate.target > MPEG_TOTAL_TARGET_BITRATE_MAX) - h->old_params.st_bitrate.target = MPEG_TOTAL_TARGET_BITRATE_MAX; - if (h->old_params.vi_bitrate.target > MPEG_VIDEO_TARGET_BITRATE_MAX) - h->old_params.vi_bitrate.target = MPEG_VIDEO_TARGET_BITRATE_MAX; - if (h->old_params.vi_bitrate.max > MPEG_VIDEO_MAX_BITRATE_MAX) - h->old_params.vi_bitrate.max = MPEG_VIDEO_MAX_BITRATE_MAX; - h->params.vi_bitrate = params->vi_bitrate.target; - h->params.vi_bitrate_peak = params->vi_bitrate.max; - if (h->old_params.au_bitrate.target <= 256) { - h->old_params.au_bitrate.target = 256; - h->params.au_l2_bitrate = V4L2_MPEG_AUDIO_L2_BITRATE_256K; - } - else { - h->old_params.au_bitrate.target = 384; - h->params.au_l2_bitrate = V4L2_MPEG_AUDIO_L2_BITRATE_384K; - } -} - static int handle_ctrl(struct saa6752hs_mpeg_params *params, struct v4l2_ext_control *ctrl, unsigned int cmd) { @@ -697,7 +599,6 @@ static int saa6752hs_attach(struct i2c_adapter *adap, int addr, int kind) return -ENOMEM; h->client = client_template; h->params = param_defaults; - h->old_params = old_param_defaults; h->client.adapter = adap; h->client.addr = addr; @@ -734,23 +635,11 @@ saa6752hs_command(struct i2c_client *client, unsigned int cmd, void *arg) { struct saa6752hs_state *h = i2c_get_clientdata(client); struct v4l2_ext_controls *ctrls = arg; - struct v4l2_mpeg_compression *old_params = arg; struct saa6752hs_mpeg_params params; int err = 0; int i; switch (cmd) { - case VIDIOC_S_MPEGCOMP: - if (NULL == old_params) { - /* apply settings and start encoder */ - saa6752hs_init(client); - break; - } - saa6752hs_old_set_params(client, old_params); - /* fall through */ - case VIDIOC_G_MPEGCOMP: - *old_params = h->old_params; - break; case VIDIOC_S_EXT_CTRLS: if (ctrls->ctrl_class != V4L2_CTRL_CLASS_MPEG) return -EINVAL; diff --git a/drivers/media/video/saa7134/saa7134-core.c b/drivers/media/video/saa7134/saa7134-core.c index 1a4a24471f2..a499eea379e 100644 --- a/drivers/media/video/saa7134/saa7134-core.c +++ b/drivers/media/video/saa7134/saa7134-core.c @@ -429,7 +429,7 @@ int saa7134_set_dmabits(struct saa7134_dev *dev) assert_spin_locked(&dev->slock); - if (dev->inresume) + if (dev->insuspend) return 0; /* video capture -- dma 0 + video task A */ @@ -563,6 +563,9 @@ static irqreturn_t saa7134_irq(int irq, void *dev_id) unsigned long report,status; int loop, handled = 0; + if (dev->insuspend) + goto out; + for (loop = 0; loop < 10; loop++) { report = saa_readl(SAA7134_IRQ_REPORT); status = saa_readl(SAA7134_IRQ_STATUS); @@ -1163,6 +1166,7 @@ static void __devexit saa7134_finidev(struct pci_dev *pci_dev) kfree(dev); } 
+#ifdef CONFIG_PM static int saa7134_suspend(struct pci_dev *pci_dev , pm_message_t state) { @@ -1176,6 +1180,19 @@ static int saa7134_suspend(struct pci_dev *pci_dev , pm_message_t state) saa_writel(SAA7134_IRQ2, 0); saa_writel(SAA7134_MAIN_CTRL, 0); + synchronize_irq(pci_dev->irq); + dev->insuspend = 1; + + /* Disable timeout timers - if we have active buffers, we will + fill them on resume*/ + + del_timer(&dev->video_q.timeout); + del_timer(&dev->vbi_q.timeout); + del_timer(&dev->ts_q.timeout); + + if (dev->remote) + saa7134_ir_stop(dev); + pci_set_power_state(pci_dev, pci_choose_state(pci_dev, state)); pci_save_state(pci_dev); @@ -1194,24 +1211,27 @@ static int saa7134_resume(struct pci_dev *pci_dev) /* Do things that are done in saa7134_initdev , except of initializing memory structures.*/ - dev->inresume = 1; saa7134_board_init1(dev); + /* saa7134_hwinit1 */ if (saa7134_boards[dev->board].video_out) saa7134_videoport_init(dev); - if (card_has_mpeg(dev)) saa7134_ts_init_hw(dev); - + if (dev->remote) + saa7134_ir_start(dev, dev->remote); saa7134_hw_enable1(dev); - saa7134_set_decoder(dev); - saa7134_i2c_call_clients(dev, VIDIOC_S_STD, &dev->tvnorm->id); + + saa7134_board_init2(dev); - saa7134_hw_enable2(dev); + /*saa7134_hwinit2*/ + saa7134_set_tvnorm_hw(dev); saa7134_tvaudio_setmute(dev); saa7134_tvaudio_setvolume(dev, dev->ctl_volume); + saa7134_tvaudio_do_scan(dev); saa7134_enable_i2s(dev); + saa7134_hw_enable2(dev); /*resume unfinished buffer(s)*/ spin_lock_irqsave(&dev->slock, flags); @@ -1219,13 +1239,19 @@ static int saa7134_resume(struct pci_dev *pci_dev) saa7134_buffer_requeue(dev, &dev->vbi_q); saa7134_buffer_requeue(dev, &dev->ts_q); + /* FIXME: Disable DMA audio sound - temporary till proper support + is implemented*/ + + dev->dmasound.dma_running = 0; + /* start DMA now*/ - dev->inresume = 0; + dev->insuspend = 0; saa7134_set_dmabits(dev); spin_unlock_irqrestore(&dev->slock, flags); return 0; } +#endif /* ----------------------------------------------------------- */ @@ -1262,8 +1288,10 @@ static struct pci_driver saa7134_pci_driver = { .id_table = saa7134_pci_tbl, .probe = saa7134_initdev, .remove = __devexit_p(saa7134_finidev), +#ifdef CONFIG_PM .suspend = saa7134_suspend, .resume = saa7134_resume +#endif }; static int saa7134_init(void) diff --git a/drivers/media/video/saa7134/saa7134-empress.c b/drivers/media/video/saa7134/saa7134-empress.c index 34ca874dd7f..75d0c5bf46d 100644 --- a/drivers/media/video/saa7134/saa7134-empress.c +++ b/drivers/media/video/saa7134/saa7134-empress.c @@ -284,17 +284,6 @@ static int ts_do_ioctl(struct inode *inode, struct file *file, case VIDIOC_S_CTRL: return saa7134_common_ioctl(dev, cmd, arg); - case VIDIOC_S_MPEGCOMP: - printk(KERN_WARNING "VIDIOC_S_MPEGCOMP is obsolete. " - "Replace with VIDIOC_S_EXT_CTRLS!"); - saa7134_i2c_call_clients(dev, VIDIOC_S_MPEGCOMP, arg); - ts_init_encoder(dev); - return 0; - case VIDIOC_G_MPEGCOMP: - printk(KERN_WARNING "VIDIOC_G_MPEGCOMP is obsolete. " - "Replace with VIDIOC_G_EXT_CTRLS!"); - saa7134_i2c_call_clients(dev, VIDIOC_G_MPEGCOMP, arg); - return 0; case VIDIOC_S_EXT_CTRLS: /* count == 0 is abused in saa6752hs.c, so that special case is handled here explicitly. 
*/ @@ -342,7 +331,6 @@ static struct video_device saa7134_empress_template = .name = "saa7134-empress", .type = 0 /* FIXME */, .type2 = 0 /* FIXME */, - .hardware = 0, .fops = &ts_fops, .minor = -1, }; diff --git a/drivers/media/video/saa7134/saa7134-input.c b/drivers/media/video/saa7134/saa7134-input.c index 80d2644f765..3abaa1b8ac9 100644 --- a/drivers/media/video/saa7134/saa7134-input.c +++ b/drivers/media/video/saa7134/saa7134-input.c @@ -44,6 +44,14 @@ module_param(ir_rc5_remote_gap, int, 0644); static int ir_rc5_key_timeout = 115; module_param(ir_rc5_key_timeout, int, 0644); +static int repeat_delay = 500; +module_param(repeat_delay, int, 0644); +MODULE_PARM_DESC(repeat_delay, "delay before key repeat started"); +static int repeat_period = 33; +module_param(repeat_period, int, 0644); +MODULE_PARM_DESC(repeat_period, "repeat period between" + "keypresses when key is down"); + #define dprintk(fmt, arg...) if (ir_debug) \ printk(KERN_DEBUG "%s/ir: " fmt, dev->name , ## arg) #define i2cdprintk(fmt, arg...) if (ir_debug) \ @@ -59,6 +67,13 @@ static int build_key(struct saa7134_dev *dev) struct card_ir *ir = dev->remote; u32 gpio, data; + /* here comes the additional handshake steps for some cards */ + switch (dev->board) { + case SAA7134_BOARD_GOTVIEW_7135: + saa_setb(SAA7134_GPIO_GPSTATUS1, 0x80); + saa_clearb(SAA7134_GPIO_GPSTATUS1, 0x80); + break; + } /* rising SAA7134_GPIO_GPRESCAN reads the status */ saa_clearb(SAA7134_GPIO_GPMODE3,SAA7134_GPIO_GPRESCAN); saa_setb(SAA7134_GPIO_GPMODE3,SAA7134_GPIO_GPRESCAN); @@ -159,7 +174,7 @@ static void saa7134_input_timer(unsigned long data) mod_timer(&ir->timer, jiffies + msecs_to_jiffies(ir->polling)); } -static void saa7134_ir_start(struct saa7134_dev *dev, struct card_ir *ir) +void saa7134_ir_start(struct saa7134_dev *dev, struct card_ir *ir) { if (ir->polling) { setup_timer(&ir->timer, saa7134_input_timer, @@ -182,7 +197,7 @@ static void saa7134_ir_start(struct saa7134_dev *dev, struct card_ir *ir) } } -static void saa7134_ir_stop(struct saa7134_dev *dev) +void saa7134_ir_stop(struct saa7134_dev *dev) { if (dev->remote->polling) del_timer_sync(&dev->remote->timer); @@ -285,10 +300,10 @@ int saa7134_input_init1(struct saa7134_dev *dev) break; case SAA7134_BOARD_GOTVIEW_7135: ir_codes = ir_codes_gotview7135; - mask_keycode = 0x0003EC; - mask_keyup = 0x008000; + mask_keycode = 0x0003CC; mask_keydown = 0x000010; - polling = 50; // ms + polling = 5; /* ms */ + saa_setb(SAA7134_GPIO_GPMODE1, 0x80); break; case SAA7134_BOARD_VIDEOMATE_TV_PVR: case SAA7134_BOARD_VIDEOMATE_GOLD_PLUS: @@ -386,6 +401,10 @@ int saa7134_input_init1(struct saa7134_dev *dev) if (err) goto err_out_stop; + /* the remote isn't as bouncy as a keyboard */ + ir->dev->rep[REP_DELAY] = repeat_delay; + ir->dev->rep[REP_PERIOD] = repeat_period; + return 0; err_out_stop: diff --git a/drivers/media/video/saa7134/saa7134-tvaudio.c b/drivers/media/video/saa7134/saa7134-tvaudio.c index 1b9e39a5ea4..f8e304c7623 100644 --- a/drivers/media/video/saa7134/saa7134-tvaudio.c +++ b/drivers/media/video/saa7134/saa7134-tvaudio.c @@ -27,6 +27,7 @@ #include <linux/kthread.h> #include <linux/slab.h> #include <linux/delay.h> +#include <linux/freezer.h> #include <asm/div64.h> #include "saa7134-reg.h" @@ -231,7 +232,7 @@ static void mute_input_7134(struct saa7134_dev *dev) } if (dev->hw_mute == mute && - dev->hw_input == in && !dev->inresume) { + dev->hw_input == in && !dev->insuspend) { dprintk("mute/input: nothing to do [mute=%d,input=%s]\n", mute,in->name); return; @@ -502,13 +503,17 @@ static int 
tvaudio_thread(void *data) unsigned int i, audio, nscan; int max1,max2,carrier,rx,mode,lastmode,default_carrier; - allow_signal(SIGTERM); + + set_freezable(); + for (;;) { tvaudio_sleep(dev,-1); - if (kthread_should_stop() || signal_pending(current)) + if (kthread_should_stop()) goto done; restart: + try_to_freeze(); + dev->thread.scan1 = dev->thread.scan2; dprintk("tvaudio thread scan start [%d]\n",dev->thread.scan1); dev->tvaudio = NULL; @@ -612,9 +617,12 @@ static int tvaudio_thread(void *data) lastmode = 42; for (;;) { + + try_to_freeze(); + if (tvaudio_sleep(dev,5000)) goto restart; - if (kthread_should_stop() || signal_pending(current)) + if (kthread_should_stop()) break; if (UNSET == dev->thread.mode) { rx = tvaudio_getstereo(dev,&tvaudio[i]); @@ -630,6 +638,7 @@ static int tvaudio_thread(void *data) } done: + dev->thread.stopped = 1; return 0; } @@ -777,7 +786,8 @@ static int tvaudio_thread_ddep(void *data) struct saa7134_dev *dev = data; u32 value, norms, clock; - allow_signal(SIGTERM); + + set_freezable(); clock = saa7134_boards[dev->board].audio_clock; if (UNSET != audio_clock_override) @@ -790,10 +800,13 @@ static int tvaudio_thread_ddep(void *data) for (;;) { tvaudio_sleep(dev,-1); - if (kthread_should_stop() || signal_pending(current)) + if (kthread_should_stop()) goto done; restart: + + try_to_freeze(); + dev->thread.scan1 = dev->thread.scan2; dprintk("tvaudio thread scan start [%d]\n",dev->thread.scan1); @@ -870,6 +883,7 @@ static int tvaudio_thread_ddep(void *data) } done: + dev->thread.stopped = 1; return 0; } @@ -997,7 +1011,7 @@ int saa7134_tvaudio_init2(struct saa7134_dev *dev) int saa7134_tvaudio_fini(struct saa7134_dev *dev) { /* shutdown tvaudio thread */ - if (dev->thread.thread) + if (dev->thread.thread && !dev->thread.stopped) kthread_stop(dev->thread.thread); saa_andorb(SAA7134_ANALOG_IO_SELECT, 0x07, 0x00); /* LINE1 */ @@ -1013,7 +1027,9 @@ int saa7134_tvaudio_do_scan(struct saa7134_dev *dev) } else if (dev->thread.thread) { dev->thread.mode = UNSET; dev->thread.scan2++; - wake_up_process(dev->thread.thread); + + if (!dev->insuspend && !dev->thread.stopped) + wake_up_process(dev->thread.thread); } else { dev->automute = 0; saa7134_tvaudio_setmute(dev); diff --git a/drivers/media/video/saa7134/saa7134-video.c b/drivers/media/video/saa7134/saa7134-video.c index 471b92793c1..3b9ffb4b648 100644 --- a/drivers/media/video/saa7134/saa7134-video.c +++ b/drivers/media/video/saa7134/saa7134-video.c @@ -560,15 +560,8 @@ void set_tvnorm(struct saa7134_dev *dev, struct saa7134_tvnorm *norm) dev->crop_current = dev->crop_defrect; - saa7134_set_decoder(dev); + saa7134_set_tvnorm_hw(dev); - if (card_in(dev, dev->ctl_input).tv) { - if ((card(dev).tuner_type == TUNER_PHILIPS_TDA8290) - && ((card(dev).tuner_config == 1) - || (card(dev).tuner_config == 2))) - saa7134_set_gpio(dev, 22, 5); - saa7134_i2c_call_clients(dev, VIDIOC_S_STD, &norm->id); - } } static void video_mux(struct saa7134_dev *dev, int input) @@ -579,7 +572,8 @@ static void video_mux(struct saa7134_dev *dev, int input) saa7134_tvaudio_setinput(dev, &card_in(dev, input)); } -void saa7134_set_decoder(struct saa7134_dev *dev) + +static void saa7134_set_decoder(struct saa7134_dev *dev) { int luma_control, sync_control, mux; @@ -630,6 +624,19 @@ void saa7134_set_decoder(struct saa7134_dev *dev) saa_writeb(SAA7134_RAW_DATA_OFFSET, 0x80); } +void saa7134_set_tvnorm_hw(struct saa7134_dev *dev) +{ + saa7134_set_decoder(dev); + + if (card_in(dev, dev->ctl_input).tv) { + if ((card(dev).tuner_type == TUNER_PHILIPS_TDA8290) + 
&& ((card(dev).tuner_config == 1) + || (card(dev).tuner_config == 2))) + saa7134_set_gpio(dev, 22, 5); + saa7134_i2c_call_clients(dev, VIDIOC_S_STD, &dev->tvnorm->id); + } +} + static void set_h_prescale(struct saa7134_dev *dev, int task, int prescale) { static const struct { @@ -2352,7 +2359,6 @@ struct video_device saa7134_video_template = .name = "saa7134-video", .type = VID_TYPE_CAPTURE|VID_TYPE_TUNER| VID_TYPE_CLIPPING|VID_TYPE_SCALES, - .hardware = 0, .fops = &video_fops, .minor = -1, }; @@ -2361,7 +2367,6 @@ struct video_device saa7134_vbi_template = { .name = "saa7134-vbi", .type = VID_TYPE_TUNER|VID_TYPE_TELETEXT, - .hardware = 0, .fops = &video_fops, .minor = -1, }; @@ -2370,7 +2375,6 @@ struct video_device saa7134_radio_template = { .name = "saa7134-radio", .type = VID_TYPE_TUNER, - .hardware = 0, .fops = &radio_fops, .minor = -1, }; diff --git a/drivers/media/video/saa7134/saa7134.h b/drivers/media/video/saa7134/saa7134.h index 28ec6804bd5..66a390c321a 100644 --- a/drivers/media/video/saa7134/saa7134.h +++ b/drivers/media/video/saa7134/saa7134.h @@ -333,6 +333,7 @@ struct saa7134_thread { unsigned int scan1; unsigned int scan2; unsigned int mode; + unsigned int stopped; }; /* buffer for one video/vbi/ts frame */ @@ -524,7 +525,7 @@ struct saa7134_dev { unsigned int hw_mute; int last_carrier; int nosignal; - unsigned int inresume; + unsigned int insuspend; /* SAA7134_MPEG_* */ struct saa7134_ts ts; @@ -632,7 +633,7 @@ extern struct video_device saa7134_radio_template; void set_tvnorm(struct saa7134_dev *dev, struct saa7134_tvnorm *norm); int saa7134_videoport_init(struct saa7134_dev *dev); -void saa7134_set_decoder(struct saa7134_dev *dev); +void saa7134_set_tvnorm_hw(struct saa7134_dev *dev); int saa7134_common_ioctl(struct saa7134_dev *dev, unsigned int cmd, void *arg); @@ -706,6 +707,8 @@ int saa7134_input_init1(struct saa7134_dev *dev); void saa7134_input_fini(struct saa7134_dev *dev); void saa7134_input_irq(struct saa7134_dev *dev); void saa7134_set_i2c_ir(struct saa7134_dev *dev, struct IR_i2c *ir); +void saa7134_ir_start(struct saa7134_dev *dev, struct card_ir *ir); +void saa7134_ir_stop(struct saa7134_dev *dev); /* diff --git a/drivers/media/video/se401.c b/drivers/media/video/se401.c index 93fb04ed99a..d5d7d6cf734 100644 --- a/drivers/media/video/se401.c +++ b/drivers/media/video/se401.c @@ -1231,7 +1231,6 @@ static struct video_device se401_template = { .owner = THIS_MODULE, .name = "se401 USB camera", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_SE401, .fops = &se401_fops, }; diff --git a/drivers/media/video/sn9c102/sn9c102_core.c b/drivers/media/video/sn9c102/sn9c102_core.c index 6991e06f765..511847912c4 100644 --- a/drivers/media/video/sn9c102/sn9c102_core.c +++ b/drivers/media/video/sn9c102/sn9c102_core.c @@ -3319,7 +3319,6 @@ sn9c102_usb_probe(struct usb_interface* intf, const struct usb_device_id* id) strcpy(cam->v4ldev->name, "SN9C1xx PC Camera"); cam->v4ldev->owner = THIS_MODULE; cam->v4ldev->type = VID_TYPE_CAPTURE | VID_TYPE_SCALES; - cam->v4ldev->hardware = 0; cam->v4ldev->fops = &sn9c102_fops; cam->v4ldev->minor = video_nr[dev_nr]; cam->v4ldev->release = video_device_release; diff --git a/drivers/media/video/stradis.c b/drivers/media/video/stradis.c index eb220461ac7..3fb85af5d1f 100644 --- a/drivers/media/video/stradis.c +++ b/drivers/media/video/stradis.c @@ -1917,7 +1917,6 @@ static const struct file_operations saa_fops = { static struct video_device saa_template = { .name = "SAA7146A", .type = VID_TYPE_CAPTURE | VID_TYPE_OVERLAY, - .hardware = 
VID_HARDWARE_SAA7146, .fops = &saa_fops, .minor = -1, }; diff --git a/drivers/media/video/stv680.c b/drivers/media/video/stv680.c index 9e009a7ab86..afc32aa56fd 100644 --- a/drivers/media/video/stv680.c +++ b/drivers/media/video/stv680.c @@ -1398,7 +1398,6 @@ static struct video_device stv680_template = { .owner = THIS_MODULE, .name = "STV0680 USB camera", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_SE401, .fops = &stv680_fops, .release = video_device_release, .minor = -1, diff --git a/drivers/media/video/tuner-core.c b/drivers/media/video/tuner-core.c index 94843086cda..6a777604f07 100644 --- a/drivers/media/video/tuner-core.c +++ b/drivers/media/video/tuner-core.c @@ -113,7 +113,7 @@ static void fe_standby(struct tuner *t) static int fe_has_signal(struct tuner *t) { struct dvb_tuner_ops *fe_tuner_ops = &t->fe.ops.tuner_ops; - u16 strength; + u16 strength = 0; if (fe_tuner_ops->get_rf_strength) fe_tuner_ops->get_rf_strength(&t->fe, &strength); diff --git a/drivers/media/video/usbvideo/usbvideo.c b/drivers/media/video/usbvideo/usbvideo.c index 37ce36b9e58..fb434b5602a 100644 --- a/drivers/media/video/usbvideo/usbvideo.c +++ b/drivers/media/video/usbvideo/usbvideo.c @@ -952,7 +952,6 @@ static const struct file_operations usbvideo_fops = { static const struct video_device usbvideo_template = { .owner = THIS_MODULE, .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_CPIA, .fops = &usbvideo_fops, }; diff --git a/drivers/media/video/usbvideo/vicam.c b/drivers/media/video/usbvideo/vicam.c index db3c9e3deb2..da1ba021110 100644 --- a/drivers/media/video/usbvideo/vicam.c +++ b/drivers/media/video/usbvideo/vicam.c @@ -1074,7 +1074,6 @@ static struct video_device vicam_template = { .owner = THIS_MODULE, .name = "ViCam-based USB Camera", .type = VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_VICAM, .fops = &vicam_fops, .minor = -1, }; diff --git a/drivers/media/video/usbvision/usbvision-video.c b/drivers/media/video/usbvision/usbvision-video.c index e2f3c01cfa1..36e689fa16c 100644 --- a/drivers/media/video/usbvision/usbvision-video.c +++ b/drivers/media/video/usbvision/usbvision-video.c @@ -1400,7 +1400,6 @@ static const struct file_operations usbvision_fops = { static struct video_device usbvision_video_template = { .owner = THIS_MODULE, .type = VID_TYPE_TUNER | VID_TYPE_CAPTURE, - .hardware = VID_HARDWARE_USBVISION, .fops = &usbvision_fops, .name = "usbvision-video", .release = video_device_release, @@ -1455,7 +1454,6 @@ static struct video_device usbvision_radio_template= { .owner = THIS_MODULE, .type = VID_TYPE_TUNER, - .hardware = VID_HARDWARE_USBVISION, .fops = &usbvision_radio_fops, .name = "usbvision-radio", .release = video_device_release, @@ -1492,7 +1490,6 @@ static struct video_device usbvision_vbi_template= { .owner = THIS_MODULE, .type = VID_TYPE_TUNER, - .hardware = VID_HARDWARE_USBVISION, .fops = &usbvision_vbi_fops, .release = video_device_release, .name = "usbvision-vbi", diff --git a/drivers/media/video/v4l2-common.c b/drivers/media/video/v4l2-common.c index 321249240d0..1141b4bf41c 100644 --- a/drivers/media/video/v4l2-common.c +++ b/drivers/media/video/v4l2-common.c @@ -317,8 +317,6 @@ static const char *v4l2_ioctls[] = { [_IOC_NR(VIDIOC_ENUM_FMT)] = "VIDIOC_ENUM_FMT", [_IOC_NR(VIDIOC_G_FMT)] = "VIDIOC_G_FMT", [_IOC_NR(VIDIOC_S_FMT)] = "VIDIOC_S_FMT", - [_IOC_NR(VIDIOC_G_MPEGCOMP)] = "VIDIOC_G_MPEGCOMP", - [_IOC_NR(VIDIOC_S_MPEGCOMP)] = "VIDIOC_S_MPEGCOMP", [_IOC_NR(VIDIOC_REQBUFS)] = "VIDIOC_REQBUFS", [_IOC_NR(VIDIOC_QUERYBUF)] = "VIDIOC_QUERYBUF", 
[_IOC_NR(VIDIOC_G_FBUF)] = "VIDIOC_G_FBUF", diff --git a/drivers/media/video/videobuf-core.c b/drivers/media/video/videobuf-core.c index 5599a36490f..89a44f16f0b 100644 --- a/drivers/media/video/videobuf-core.c +++ b/drivers/media/video/videobuf-core.c @@ -967,6 +967,7 @@ int videobuf_cgmbuf(struct videobuf_queue *q, return 0; } +EXPORT_SYMBOL_GPL(videobuf_cgmbuf); #endif /* --------------------------------------------------------------------- */ @@ -985,7 +986,6 @@ EXPORT_SYMBOL_GPL(videobuf_reqbufs); EXPORT_SYMBOL_GPL(videobuf_querybuf); EXPORT_SYMBOL_GPL(videobuf_qbuf); EXPORT_SYMBOL_GPL(videobuf_dqbuf); -EXPORT_SYMBOL_GPL(videobuf_cgmbuf); EXPORT_SYMBOL_GPL(videobuf_streamon); EXPORT_SYMBOL_GPL(videobuf_streamoff); diff --git a/drivers/media/video/videobuf-dma-sg.c b/drivers/media/video/videobuf-dma-sg.c index 3eb6123227b..0a18286279d 100644 --- a/drivers/media/video/videobuf-dma-sg.c +++ b/drivers/media/video/videobuf-dma-sg.c @@ -60,12 +60,13 @@ videobuf_vmalloc_to_sg(unsigned char *virt, int nr_pages) sglist = kcalloc(nr_pages, sizeof(struct scatterlist), GFP_KERNEL); if (NULL == sglist) return NULL; + sg_init_table(sglist, nr_pages); for (i = 0; i < nr_pages; i++, virt += PAGE_SIZE) { pg = vmalloc_to_page(virt); if (NULL == pg) goto err; BUG_ON(PageHighMem(pg)); - sglist[i].page = pg; + sg_set_page(&sglist[i], pg); sglist[i].length = PAGE_SIZE; } return sglist; @@ -86,13 +87,14 @@ videobuf_pages_to_sg(struct page **pages, int nr_pages, int offset) sglist = kcalloc(nr_pages, sizeof(*sglist), GFP_KERNEL); if (NULL == sglist) return NULL; + sg_init_table(sglist, nr_pages); if (NULL == pages[0]) goto nopage; if (PageHighMem(pages[0])) /* DMA to highmem pages might not work */ goto highmem; - sglist[0].page = pages[0]; + sg_set_page(&sglist[0], pages[0]); sglist[0].offset = offset; sglist[0].length = PAGE_SIZE - offset; for (i = 1; i < nr_pages; i++) { @@ -100,7 +102,7 @@ videobuf_pages_to_sg(struct page **pages, int nr_pages, int offset) goto nopage; if (PageHighMem(pages[i])) goto highmem; - sglist[i].page = pages[i]; + sg_set_page(&sglist[i], pages[i]); sglist[i].length = PAGE_SIZE; } return sglist; diff --git a/drivers/media/video/videocodec.c b/drivers/media/video/videocodec.c index f2bbd7a4d56..87951ec8254 100644 --- a/drivers/media/video/videocodec.c +++ b/drivers/media/video/videocodec.c @@ -86,8 +86,8 @@ videocodec_attach (struct videocodec_master *master) } dprintk(2, - "videocodec_attach: '%s', type: %x, flags %lx, magic %lx\n", - master->name, master->type, master->flags, master->magic); + "videocodec_attach: '%s', flags %lx, magic %lx\n", + master->name, master->flags, master->magic); if (!h) { dprintk(1, diff --git a/drivers/media/video/videodev.c b/drivers/media/video/videodev.c index 8d8e517b344..9611c399028 100644 --- a/drivers/media/video/videodev.c +++ b/drivers/media/video/videodev.c @@ -1313,48 +1313,6 @@ static int __video_do_ioctl(struct inode *inode, struct file *file, ret=vfd->vidioc_cropcap(file, fh, p); break; } - case VIDIOC_G_MPEGCOMP: - { - struct v4l2_mpeg_compression *p=arg; - - /*FIXME: Several fields not shown */ - if (!vfd->vidioc_g_mpegcomp) - break; - ret=vfd->vidioc_g_mpegcomp(file, fh, p); - if (!ret) - dbgarg (cmd, "ts_pid_pmt=%d, ts_pid_audio=%d," - " ts_pid_video=%d, ts_pid_pcr=%d, " - "ps_size=%d, au_sample_rate=%d, " - "au_pesid=%c, vi_frame_rate=%d, " - "vi_frames_per_gop=%d, " - "vi_bframes_count=%d, vi_pesid=%c\n", - p->ts_pid_pmt,p->ts_pid_audio, - p->ts_pid_video,p->ts_pid_pcr, - p->ps_size, p->au_sample_rate, - p->au_pesid, 
p->vi_frame_rate, - p->vi_frames_per_gop, - p->vi_bframes_count, p->vi_pesid); - break; - } - case VIDIOC_S_MPEGCOMP: - { - struct v4l2_mpeg_compression *p=arg; - /*FIXME: Several fields not shown */ - if (!vfd->vidioc_s_mpegcomp) - break; - dbgarg (cmd, "ts_pid_pmt=%d, ts_pid_audio=%d, " - "ts_pid_video=%d, ts_pid_pcr=%d, ps_size=%d, " - "au_sample_rate=%d, au_pesid=%c, " - "vi_frame_rate=%d, vi_frames_per_gop=%d, " - "vi_bframes_count=%d, vi_pesid=%c\n", - p->ts_pid_pmt,p->ts_pid_audio, p->ts_pid_video, - p->ts_pid_pcr, p->ps_size, p->au_sample_rate, - p->au_pesid, p->vi_frame_rate, - p->vi_frames_per_gop, p->vi_bframes_count, - p->vi_pesid); - ret=vfd->vidioc_s_mpegcomp(file, fh, p); - break; - } case VIDIOC_G_JPEGCOMP: { struct v4l2_jpegcompression *p=arg; diff --git a/drivers/media/video/vivi.c b/drivers/media/video/vivi.c index b532aa280a1..ee73dc75131 100644 --- a/drivers/media/video/vivi.c +++ b/drivers/media/video/vivi.c @@ -1119,7 +1119,6 @@ static const struct file_operations vivi_fops = { static struct video_device vivi = { .name = "vivi", .type = VID_TYPE_CAPTURE, - .hardware = 0, .fops = &vivi_fops, .minor = -1, // .release = video_device_release, diff --git a/drivers/media/video/w9966.c b/drivers/media/video/w9966.c index 47366408637..08aaae07c7e 100644 --- a/drivers/media/video/w9966.c +++ b/drivers/media/video/w9966.c @@ -196,7 +196,6 @@ static struct video_device w9966_template = { .owner = THIS_MODULE, .name = W9966_DRIVERNAME, .type = VID_TYPE_CAPTURE | VID_TYPE_SCALES, - .hardware = VID_HARDWARE_W9966, .fops = &w9966_fops, }; diff --git a/drivers/media/video/w9968cf.c b/drivers/media/video/w9968cf.c index 9e7f3e685d7..2ae1430f5f7 100644 --- a/drivers/media/video/w9968cf.c +++ b/drivers/media/video/w9968cf.c @@ -3549,7 +3549,6 @@ w9968cf_usb_probe(struct usb_interface* intf, const struct usb_device_id* id) strcpy(cam->v4ldev->name, symbolic(camlist, mod_id)); cam->v4ldev->owner = THIS_MODULE; cam->v4ldev->type = VID_TYPE_CAPTURE | VID_TYPE_SCALES; - cam->v4ldev->hardware = VID_HARDWARE_W9968CF; cam->v4ldev->fops = &w9968cf_fops; cam->v4ldev->minor = video_nr[dev_nr]; cam->v4ldev->release = video_device_release; diff --git a/drivers/media/video/zc0301/zc0301_core.c b/drivers/media/video/zc0301/zc0301_core.c index 08a93c31c0a..2c5665c8244 100644 --- a/drivers/media/video/zc0301/zc0301_core.c +++ b/drivers/media/video/zc0301/zc0301_core.c @@ -1985,7 +1985,6 @@ zc0301_usb_probe(struct usb_interface* intf, const struct usb_device_id* id) strcpy(cam->v4ldev->name, "ZC0301[P] PC Camera"); cam->v4ldev->owner = THIS_MODULE; cam->v4ldev->type = VID_TYPE_CAPTURE | VID_TYPE_SCALES; - cam->v4ldev->hardware = 0; cam->v4ldev->fops = &zc0301_fops; cam->v4ldev->minor = video_nr[dev_nr]; cam->v4ldev->release = video_device_release; diff --git a/drivers/media/video/zoran_card.c b/drivers/media/video/zoran_card.c index 48da36a15fc..6e0ac4c5c37 100644 --- a/drivers/media/video/zoran_card.c +++ b/drivers/media/video/zoran_card.c @@ -1235,8 +1235,14 @@ zoran_setup_videocodec (struct zoran *zr, return m; } - m->magic = 0L; /* magic not used */ - m->type = VID_HARDWARE_ZR36067; + /* magic and type are unused for master struct. Makes sense only at + codec structs. 
+ In the past, .type was initialized to the old V4L1 .hardware + value, as VID_HARDWARE_ZR36067 + */ + m->magic = 0L; + m->type = 0; + m->flags = CODEC_FLAG_ENCODER | CODEC_FLAG_DECODER; strncpy(m->name, ZR_DEVNAME(zr), sizeof(m->name)); m->data = zr; diff --git a/drivers/media/video/zoran_driver.c b/drivers/media/video/zoran_driver.c index 419e5af7853..dd3d7d2c8b0 100644 --- a/drivers/media/video/zoran_driver.c +++ b/drivers/media/video/zoran_driver.c @@ -60,7 +60,6 @@ #include <linux/spinlock.h> #define MAP_NR(x) virt_to_page(x) -#define ZORAN_HARDWARE VID_HARDWARE_ZR36067 #define ZORAN_VID_TYPE ( \ VID_TYPE_CAPTURE | \ VID_TYPE_OVERLAY | \ @@ -4659,7 +4658,6 @@ struct video_device zoran_template __devinitdata = { #ifdef CONFIG_VIDEO_V4L2 .type2 = ZORAN_V4L2_VID_FLAGS, #endif - .hardware = ZORAN_HARDWARE, .fops = &zoran_fops, .release = &zoran_vdev_release, .minor = -1 diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c index a5d0354bbbd..9203a0b221b 100644 --- a/drivers/mmc/card/queue.c +++ b/drivers/mmc/card/queue.c @@ -13,6 +13,7 @@ #include <linux/blkdev.h> #include <linux/freezer.h> #include <linux/kthread.h> +#include <linux/scatterlist.h> #include <linux/mmc/card.h> #include <linux/mmc/host.h> @@ -153,19 +154,21 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock blk_queue_max_hw_segments(mq->queue, bouncesz / 512); blk_queue_max_segment_size(mq->queue, bouncesz); - mq->sg = kzalloc(sizeof(struct scatterlist), + mq->sg = kmalloc(sizeof(struct scatterlist), GFP_KERNEL); if (!mq->sg) { ret = -ENOMEM; goto cleanup_queue; } + sg_init_table(mq->sg, 1); - mq->bounce_sg = kzalloc(sizeof(struct scatterlist) * + mq->bounce_sg = kmalloc(sizeof(struct scatterlist) * bouncesz / 512, GFP_KERNEL); if (!mq->bounce_sg) { ret = -ENOMEM; goto cleanup_queue; } + sg_init_table(mq->bounce_sg, bouncesz / 512); } } #endif @@ -302,12 +305,12 @@ static void copy_sg(struct scatterlist *dst, unsigned int dst_len, BUG_ON(dst_len == 0); if (dst_size == 0) { - dst_buf = page_address(dst->page) + dst->offset; + dst_buf = sg_virt(dst); dst_size = dst->length; } if (src_size == 0) { - src_buf = page_address(src->page) + src->offset; + src_buf = sg_virt(src); src_size = src->length; } @@ -353,9 +356,7 @@ unsigned int mmc_queue_map_sg(struct mmc_queue *mq) return 1; } - mq->sg[0].page = virt_to_page(mq->bounce_buf); - mq->sg[0].offset = offset_in_page(mq->bounce_buf); - mq->sg[0].length = 0; + sg_init_one(mq->sg, mq->bounce_buf, 0); while (sg_len) { mq->sg[0].length += mq->bounce_sg[sg_len - 1].length; diff --git a/drivers/mmc/host/at91_mci.c b/drivers/mmc/host/at91_mci.c index 7a452c2ad1f..b1edcefdd4f 100644 --- a/drivers/mmc/host/at91_mci.c +++ b/drivers/mmc/host/at91_mci.c @@ -149,7 +149,7 @@ static inline void at91_mci_sg_to_dma(struct at91mci_host *host, struct mmc_data sg = &data->sg[i]; - sgbuffer = kmap_atomic(sg->page, KM_BIO_SRC_IRQ) + sg->offset; + sgbuffer = kmap_atomic(sg_page(sg), KM_BIO_SRC_IRQ) + sg->offset; amount = min(size, sg->length); size -= amount; @@ -226,7 +226,7 @@ static void at91_mci_pre_dma_read(struct at91mci_host *host) sg = &data->sg[host->transfer_index++]; pr_debug("sg = %p\n", sg); - sg->dma_address = dma_map_page(NULL, sg->page, sg->offset, sg->length, DMA_FROM_DEVICE); + sg->dma_address = dma_map_page(NULL, sg_page(sg), sg->offset, sg->length, DMA_FROM_DEVICE); pr_debug("dma address = %08X, length = %d\n", sg->dma_address, sg->length); @@ -283,7 +283,7 @@ static void at91_mci_post_dma_read(struct at91mci_host *host) int index; /* 
Swap the contents of the buffer */ - buffer = kmap_atomic(sg->page, KM_BIO_SRC_IRQ) + sg->offset; + buffer = kmap_atomic(sg_page(sg), KM_BIO_SRC_IRQ) + sg->offset; pr_debug("buffer = %p, length = %d\n", buffer, sg->length); for (index = 0; index < (sg->length / 4); index++) @@ -292,7 +292,7 @@ static void at91_mci_post_dma_read(struct at91mci_host *host) kunmap_atomic(buffer, KM_BIO_SRC_IRQ); } - flush_dcache_page(sg->page); + flush_dcache_page(sg_page(sg)); } /* Is there another transfer to trigger? */ diff --git a/drivers/mmc/host/au1xmmc.c b/drivers/mmc/host/au1xmmc.c index 92c4d0dfee4..bcbb6d247bf 100644 --- a/drivers/mmc/host/au1xmmc.c +++ b/drivers/mmc/host/au1xmmc.c @@ -340,7 +340,7 @@ static void au1xmmc_send_pio(struct au1xmmc_host *host) /* This is the pointer to the data buffer */ sg = &data->sg[host->pio.index]; - sg_ptr = page_address(sg->page) + sg->offset + host->pio.offset; + sg_ptr = sg_virt(sg) + host->pio.offset; /* This is the space left inside the buffer */ sg_len = data->sg[host->pio.index].length - host->pio.offset; @@ -400,7 +400,7 @@ static void au1xmmc_receive_pio(struct au1xmmc_host *host) if (host->pio.index < host->dma.len) { sg = &data->sg[host->pio.index]; - sg_ptr = page_address(sg->page) + sg->offset + host->pio.offset; + sg_ptr = sg_virt(sg) + host->pio.offset; /* This is the space left inside the buffer */ sg_len = sg_dma_len(&data->sg[host->pio.index]) - host->pio.offset; @@ -613,14 +613,11 @@ au1xmmc_prepare_data(struct au1xmmc_host *host, struct mmc_data *data) if (host->flags & HOST_F_XMIT){ ret = au1xxx_dbdma_put_source_flags(channel, - (void *) (page_address(sg->page) + - sg->offset), - len, flags); + (void *) sg_virt(sg), len, flags); } else { ret = au1xxx_dbdma_put_dest_flags(channel, - (void *) (page_address(sg->page) + - sg->offset), + (void *) sg_virt(sg), len, flags); } diff --git a/drivers/mmc/host/imxmmc.c b/drivers/mmc/host/imxmmc.c index 6ebc41e7592..fc72e1fadb6 100644 --- a/drivers/mmc/host/imxmmc.c +++ b/drivers/mmc/host/imxmmc.c @@ -262,7 +262,7 @@ static void imxmci_setup_data(struct imxmci_host *host, struct mmc_data *data) } /* Convert back to virtual address */ - host->data_ptr = (u16*)(page_address(data->sg->page) + data->sg->offset); + host->data_ptr = (u16*)sg_virt(data->sg); host->data_cnt = 0; clear_bit(IMXMCI_PEND_DMA_DATA_b, &host->pending_events); diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c index 7ae18eaed6c..12c2d807c14 100644 --- a/drivers/mmc/host/mmc_spi.c +++ b/drivers/mmc/host/mmc_spi.c @@ -813,7 +813,7 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd, && dir == DMA_FROM_DEVICE) dir = DMA_BIDIRECTIONAL; - dma_addr = dma_map_page(dma_dev, sg->page, 0, + dma_addr = dma_map_page(dma_dev, sg_page(sg), 0, PAGE_SIZE, dir); if (direction == DMA_TO_DEVICE) t->tx_dma = dma_addr + sg->offset; @@ -822,7 +822,7 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd, } /* allow pio too; we don't allow highmem */ - kmap_addr = kmap(sg->page); + kmap_addr = kmap(sg_page(sg)); if (direction == DMA_TO_DEVICE) t->tx_buf = kmap_addr + sg->offset; else @@ -855,8 +855,8 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd, /* discard mappings */ if (direction == DMA_FROM_DEVICE) - flush_kernel_dcache_page(sg->page); - kunmap(sg->page); + flush_kernel_dcache_page(sg_page(sg)); + kunmap(sg_page(sg)); if (dma_dev) dma_unmap_page(dma_dev, dma_addr, PAGE_SIZE, dir); diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c index 60a67dfcda6..971e18b91f4 100644 --- 
a/drivers/mmc/host/omap.c +++ b/drivers/mmc/host/omap.c @@ -24,10 +24,10 @@ #include <linux/mmc/host.h> #include <linux/mmc/card.h> #include <linux/clk.h> +#include <linux/scatterlist.h> #include <asm/io.h> #include <asm/irq.h> -#include <asm/scatterlist.h> #include <asm/mach-types.h> #include <asm/arch/board.h> @@ -383,7 +383,7 @@ mmc_omap_sg_to_buf(struct mmc_omap_host *host) sg = host->data->sg + host->sg_idx; host->buffer_bytes_left = sg->length; - host->buffer = page_address(sg->page) + sg->offset; + host->buffer = sg_virt(sg); if (host->buffer_bytes_left > host->total_bytes_left) host->buffer_bytes_left = host->total_bytes_left; } diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index b397121b947..0db837e44b7 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -231,7 +231,7 @@ static void sdhci_deactivate_led(struct sdhci_host *host) static inline char* sdhci_sg_to_buffer(struct sdhci_host* host) { - return page_address(host->cur_sg->page) + host->cur_sg->offset; + return sg_virt(host->cur_sg); } static inline int sdhci_next_sg(struct sdhci_host* host) diff --git a/drivers/mmc/host/tifm_sd.c b/drivers/mmc/host/tifm_sd.c index 9b904795eb7..c11a3d25605 100644 --- a/drivers/mmc/host/tifm_sd.c +++ b/drivers/mmc/host/tifm_sd.c @@ -192,7 +192,7 @@ static void tifm_sd_transfer_data(struct tifm_sd *host) } off = sg[host->sg_pos].offset + host->block_pos; - pg = nth_page(sg[host->sg_pos].page, off >> PAGE_SHIFT); + pg = nth_page(sg_page(&sg[host->sg_pos]), off >> PAGE_SHIFT); p_off = offset_in_page(off); p_cnt = PAGE_SIZE - p_off; p_cnt = min(p_cnt, cnt); @@ -241,18 +241,18 @@ static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data) } off = sg[host->sg_pos].offset + host->block_pos; - pg = nth_page(sg[host->sg_pos].page, off >> PAGE_SHIFT); + pg = nth_page(sg_page(&sg[host->sg_pos]), off >> PAGE_SHIFT); p_off = offset_in_page(off); p_cnt = PAGE_SIZE - p_off; p_cnt = min(p_cnt, cnt); p_cnt = min(p_cnt, t_size); if (r_data->flags & MMC_DATA_WRITE) - tifm_sd_copy_page(host->bounce_buf.page, + tifm_sd_copy_page(sg_page(&host->bounce_buf), r_data->blksz - t_size, pg, p_off, p_cnt); else if (r_data->flags & MMC_DATA_READ) - tifm_sd_copy_page(pg, p_off, host->bounce_buf.page, + tifm_sd_copy_page(pg, p_off, sg_page(&host->bounce_buf), r_data->blksz - t_size, p_cnt); t_size -= p_cnt; diff --git a/drivers/mmc/host/wbsd.c b/drivers/mmc/host/wbsd.c index 80db11c05f2..fa4c8c53cc7 100644 --- a/drivers/mmc/host/wbsd.c +++ b/drivers/mmc/host/wbsd.c @@ -269,7 +269,7 @@ static inline int wbsd_next_sg(struct wbsd_host *host) static inline char *wbsd_sg_to_buffer(struct wbsd_host *host) { - return page_address(host->cur_sg->page) + host->cur_sg->offset; + return sg_virt(host->cur_sg); } static inline void wbsd_sg_to_dma(struct wbsd_host *host, struct mmc_data *data) @@ -283,7 +283,7 @@ static inline void wbsd_sg_to_dma(struct wbsd_host *host, struct mmc_data *data) len = data->sg_len; for (i = 0; i < len; i++) { - sgbuf = page_address(sg[i].page) + sg[i].offset; + sgbuf = sg_virt(&sg[i]); memcpy(dmabuf, sgbuf, sg[i].length); dmabuf += sg[i].length; } @@ -300,7 +300,7 @@ static inline void wbsd_dma_to_sg(struct wbsd_host *host, struct mmc_data *data) len = data->sg_len; for (i = 0; i < len; i++) { - sgbuf = page_address(sg[i].page) + sg[i].offset; + sgbuf = sg_virt(&sg[i]); memcpy(sgbuf, dmabuf, sg[i].length); dmabuf += sg[i].length; } diff --git a/drivers/net/cpmac.c b/drivers/net/cpmac.c index ed53aaab4c0..ae419736158 100644 --- a/drivers/net/cpmac.c 
+++ b/drivers/net/cpmac.c @@ -471,7 +471,7 @@ static int cpmac_start_xmit(struct sk_buff *skb, struct net_device *dev) } len = max(skb->len, ETH_ZLEN); - queue = skb->queue_mapping; + queue = skb_get_queue_mapping(skb); #ifdef CONFIG_NETDEVICES_MULTIQUEUE netif_stop_subqueue(dev, queue); #else diff --git a/drivers/net/fec.c b/drivers/net/fec.c index 2b5782056dd..0fbf1bbbaee 100644 --- a/drivers/net/fec.c +++ b/drivers/net/fec.c @@ -751,13 +751,11 @@ mii_queue(struct net_device *dev, int regval, void (*func)(uint, struct net_devi if (mii_head) { mii_tail->mii_next = mip; mii_tail = mip; - } - else { + } else { mii_head = mii_tail = mip; fep->hwp->fec_mii_data = regval; } - } - else { + } else { retval = 1; } @@ -768,14 +766,11 @@ mii_queue(struct net_device *dev, int regval, void (*func)(uint, struct net_devi static void mii_do_cmd(struct net_device *dev, const phy_cmd_t *c) { - int k; - if(!c) return; - for(k = 0; (c+k)->mii_data != mk_mii_end; k++) { - mii_queue(dev, (c+k)->mii_data, (c+k)->funct); - } + for (; c->mii_data != mk_mii_end; c++) + mii_queue(dev, c->mii_data, c->funct); } static void mii_parse_sr(uint mii_reg, struct net_device *dev) @@ -792,7 +787,6 @@ static void mii_parse_sr(uint mii_reg, struct net_device *dev) status |= PHY_STAT_FAULT; if (mii_reg & 0x0020) status |= PHY_STAT_ANC; - *s = status; } @@ -1239,7 +1233,6 @@ mii_link_interrupt(int irq, void * dev_id); #endif #if defined(CONFIG_M5272) - /* * Code specific to Coldfire 5272 setup. */ @@ -2020,8 +2013,7 @@ static void mii_relink(struct work_struct *work) & (PHY_STAT_100FDX | PHY_STAT_10FDX)) duplex = 1; fec_restart(dev, duplex); - } - else + } else fec_stop(dev); #if 0 @@ -2119,8 +2111,7 @@ mii_discover_phy(uint mii_reg, struct net_device *dev) fep->phy_id = phytype << 16; mii_queue(dev, mk_mii_read(MII_REG_PHYIR2), mii_discover_phy3); - } - else { + } else { fep->phy_addr++; mii_queue(dev, mk_mii_read(MII_REG_PHYIR1), mii_discover_phy); @@ -2574,8 +2565,7 @@ fec_restart(struct net_device *dev, int duplex) if (duplex) { fecp->fec_r_cntrl = OPT_FRAME_SIZE | 0x04;/* MII enable */ fecp->fec_x_cntrl = 0x04; /* FD enable */ - } - else { + } else { /* MII enable|No Rcv on Xmit */ fecp->fec_r_cntrl = OPT_FRAME_SIZE | 0x06; fecp->fec_x_cntrl = 0x00; diff --git a/drivers/net/mlx4/icm.c b/drivers/net/mlx4/icm.c index 4b3c109d5ea..887633b207d 100644 --- a/drivers/net/mlx4/icm.c +++ b/drivers/net/mlx4/icm.c @@ -60,7 +60,7 @@ static void mlx4_free_icm_pages(struct mlx4_dev *dev, struct mlx4_icm_chunk *chu PCI_DMA_BIDIRECTIONAL); for (i = 0; i < chunk->npages; ++i) - __free_pages(chunk->mem[i].page, + __free_pages(sg_page(&chunk->mem[i]), get_order(chunk->mem[i].length)); } @@ -70,7 +70,7 @@ static void mlx4_free_icm_coherent(struct mlx4_dev *dev, struct mlx4_icm_chunk * for (i = 0; i < chunk->npages; ++i) dma_free_coherent(&dev->pdev->dev, chunk->mem[i].length, - lowmem_page_address(chunk->mem[i].page), + lowmem_page_address(sg_page(&chunk->mem[i])), sg_dma_address(&chunk->mem[i])); } @@ -95,10 +95,13 @@ void mlx4_free_icm(struct mlx4_dev *dev, struct mlx4_icm *icm, int coherent) static int mlx4_alloc_icm_pages(struct scatterlist *mem, int order, gfp_t gfp_mask) { - mem->page = alloc_pages(gfp_mask, order); - if (!mem->page) + struct page *page; + + page = alloc_pages(gfp_mask, order); + if (!page) return -ENOMEM; + sg_set_page(mem, page); mem->length = PAGE_SIZE << order; mem->offset = 0; return 0; @@ -145,6 +148,7 @@ struct mlx4_icm *mlx4_alloc_icm(struct mlx4_dev *dev, int npages, if (!chunk) goto fail; + 
sg_init_table(chunk->mem, MLX4_ICM_CHUNK_LEN); chunk->npages = 0; chunk->nsg = 0; list_add_tail(&chunk->list, &icm->chunk_list); @@ -334,7 +338,7 @@ void *mlx4_table_find(struct mlx4_icm_table *table, int obj, dma_addr_t *dma_han * been assigned to. */ if (chunk->mem[i].length > offset) { - page = chunk->mem[i].page; + page = sg_page(&chunk->mem[i]); goto out; } offset -= chunk->mem[i].length; diff --git a/drivers/net/niu.c b/drivers/net/niu.c index ed1f9bbb2a3..112ab079ce7 100644 --- a/drivers/net/niu.c +++ b/drivers/net/niu.c @@ -3103,31 +3103,12 @@ static int niu_alloc_tx_ring_info(struct niu *np, static void niu_size_rbr(struct niu *np, struct rx_ring_info *rp) { - u16 bs; + u16 bss; - switch (PAGE_SIZE) { - case 4 * 1024: - case 8 * 1024: - case 16 * 1024: - case 32 * 1024: - rp->rbr_block_size = PAGE_SIZE; - rp->rbr_blocks_per_page = 1; - break; + bss = min(PAGE_SHIFT, 15); - default: - if (PAGE_SIZE % (32 * 1024) == 0) - bs = 32 * 1024; - else if (PAGE_SIZE % (16 * 1024) == 0) - bs = 16 * 1024; - else if (PAGE_SIZE % (8 * 1024) == 0) - bs = 8 * 1024; - else if (PAGE_SIZE % (4 * 1024) == 0) - bs = 4 * 1024; - else - BUG(); - rp->rbr_block_size = bs; - rp->rbr_blocks_per_page = PAGE_SIZE / bs; - } + rp->rbr_block_size = 1 << bss; + rp->rbr_blocks_per_page = 1 << (PAGE_SHIFT-bss); rp->rbr_sizes[0] = 256; rp->rbr_sizes[1] = 1024; @@ -7902,12 +7883,7 @@ static int __init niu_init(void) { int err = 0; - BUILD_BUG_ON((PAGE_SIZE < 4 * 1024) || - ((PAGE_SIZE > 32 * 1024) && - ((PAGE_SIZE % (32 * 1024)) != 0 && - (PAGE_SIZE % (16 * 1024)) != 0 && - (PAGE_SIZE % (8 * 1024)) != 0 && - (PAGE_SIZE % (4 * 1024)) != 0))); + BUILD_BUG_ON(PAGE_SIZE < 4 * 1024); niu_debug = netif_msg_init(debug, NIU_MSG_DEFAULT); diff --git a/drivers/net/ppp_mppe.c b/drivers/net/ppp_mppe.c index c0b6d19d145..bcb0885011c 100644 --- a/drivers/net/ppp_mppe.c +++ b/drivers/net/ppp_mppe.c @@ -55,7 +55,7 @@ #include <linux/mm.h> #include <linux/ppp_defs.h> #include <linux/ppp-comp.h> -#include <asm/scatterlist.h> +#include <linux/scatterlist.h> #include "ppp_mppe.h" @@ -68,9 +68,7 @@ MODULE_VERSION("1.0.2"); static unsigned int setup_sg(struct scatterlist *sg, const void *address, unsigned int length) { - sg[0].page = virt_to_page(address); - sg[0].offset = offset_in_page(address); - sg[0].length = length; + sg_init_one(sg, address, length); return length; } diff --git a/drivers/net/tg3.c b/drivers/net/tg3.c index 014dc2cfe4d..09440d783e6 100644 --- a/drivers/net/tg3.c +++ b/drivers/net/tg3.c @@ -64,8 +64,8 @@ #define DRV_MODULE_NAME "tg3" #define PFX DRV_MODULE_NAME ": " -#define DRV_MODULE_VERSION "3.84" -#define DRV_MODULE_RELDATE "October 12, 2007" +#define DRV_MODULE_VERSION "3.85" +#define DRV_MODULE_RELDATE "October 18, 2007" #define TG3_DEF_MAC_MODE 0 #define TG3_DEF_RX_MODE 0 @@ -200,6 +200,7 @@ static struct pci_device_id tg3_pci_tbl[] = { {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5906M)}, {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5784)}, {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5764)}, + {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5723)}, {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5761)}, {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5761E)}, {PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_9DXX)}, @@ -5028,10 +5029,7 @@ static int tg3_poll_fw(struct tg3 *tp) /* Save PCI command register before chip reset */ static void tg3_save_pci_state(struct tg3 *tp) { - u32 val; - - pci_read_config_dword(tp->pdev, 
TG3PCI_COMMAND, &val); - tp->pci_cmd = val; + pci_read_config_word(tp->pdev, PCI_COMMAND, &tp->pci_cmd); } /* Restore PCI state after chip reset */ @@ -5054,7 +5052,7 @@ static void tg3_restore_pci_state(struct tg3 *tp) PCISTATE_ALLOW_APE_SHMEM_WR; pci_write_config_dword(tp->pdev, TG3PCI_PCISTATE, val); - pci_write_config_dword(tp->pdev, TG3PCI_COMMAND, tp->pci_cmd); + pci_write_config_word(tp->pdev, PCI_COMMAND, tp->pci_cmd); if (!(tp->tg3_flags2 & TG3_FLG2_PCI_EXPRESS)) { pci_write_config_byte(tp->pdev, PCI_CACHE_LINE_SIZE, @@ -10820,9 +10818,24 @@ out_not_found: strcpy(tp->board_part_number, "none"); } +static int __devinit tg3_fw_img_is_valid(struct tg3 *tp, u32 offset) +{ + u32 val; + + if (tg3_nvram_read_swab(tp, offset, &val) || + (val & 0xfc000000) != 0x0c000000 || + tg3_nvram_read_swab(tp, offset + 4, &val) || + val != 0) + return 0; + + return 1; +} + static void __devinit tg3_read_fw_ver(struct tg3 *tp) { u32 val, offset, start; + u32 ver_offset; + int i, bcnt; if (tg3_nvram_read_swab(tp, 0, &val)) return; @@ -10835,29 +10848,71 @@ static void __devinit tg3_read_fw_ver(struct tg3 *tp) return; offset = tg3_nvram_logical_addr(tp, offset); - if (tg3_nvram_read_swab(tp, offset, &val)) + + if (!tg3_fw_img_is_valid(tp, offset) || + tg3_nvram_read_swab(tp, offset + 8, &ver_offset)) return; - if ((val & 0xfc000000) == 0x0c000000) { - u32 ver_offset, addr; - int i; + offset = offset + ver_offset - start; + for (i = 0; i < 16; i += 4) { + if (tg3_nvram_read(tp, offset + i, &val)) + return; - if (tg3_nvram_read_swab(tp, offset + 4, &val) || - tg3_nvram_read_swab(tp, offset + 8, &ver_offset)) + val = le32_to_cpu(val); + memcpy(tp->fw_ver + i, &val, 4); + } + + if (!(tp->tg3_flags & TG3_FLAG_ENABLE_ASF) || + (tp->tg3_flags & TG3_FLG3_ENABLE_APE)) + return; + + for (offset = TG3_NVM_DIR_START; + offset < TG3_NVM_DIR_END; + offset += TG3_NVM_DIRENT_SIZE) { + if (tg3_nvram_read_swab(tp, offset, &val)) return; - if (val != 0) + if ((val >> TG3_NVM_DIRTYPE_SHIFT) == TG3_NVM_DIRTYPE_ASFINI) + break; + } + + if (offset == TG3_NVM_DIR_END) + return; + + if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS)) + start = 0x08000000; + else if (tg3_nvram_read_swab(tp, offset - 4, &start)) + return; + + if (tg3_nvram_read_swab(tp, offset + 4, &offset) || + !tg3_fw_img_is_valid(tp, offset) || + tg3_nvram_read_swab(tp, offset + 8, &val)) + return; + + offset += val - start; + + bcnt = strlen(tp->fw_ver); + + tp->fw_ver[bcnt++] = ','; + tp->fw_ver[bcnt++] = ' '; + + for (i = 0; i < 4; i++) { + if (tg3_nvram_read(tp, offset, &val)) return; - addr = offset + ver_offset - start; - for (i = 0; i < 16; i += 4) { - if (tg3_nvram_read(tp, addr + i, &val)) - return; + val = le32_to_cpu(val); + offset += sizeof(val); - val = cpu_to_le32(val); - memcpy(tp->fw_ver + i, &val, 4); + if (bcnt > TG3_VER_SIZE - sizeof(val)) { + memcpy(&tp->fw_ver[bcnt], &val, TG3_VER_SIZE - bcnt); + break; } + + memcpy(&tp->fw_ver[bcnt], &val, sizeof(val)); + bcnt += sizeof(val); } + + tp->fw_ver[TG3_VER_SIZE - 1] = 0; } static struct pci_dev * __devinit tg3_find_peer(struct tg3 *); diff --git a/drivers/net/tg3.h b/drivers/net/tg3.h index 6dbdad2b8f8..1d5b2a3dd29 100644 --- a/drivers/net/tg3.h +++ b/drivers/net/tg3.h @@ -1540,6 +1540,12 @@ #define TG3_EEPROM_MAGIC_HW 0xabcd #define TG3_EEPROM_MAGIC_HW_MSK 0xffff +#define TG3_NVM_DIR_START 0x18 +#define TG3_NVM_DIR_END 0x78 +#define TG3_NVM_DIRENT_SIZE 0xc +#define TG3_NVM_DIRTYPE_SHIFT 24 +#define TG3_NVM_DIRTYPE_ASFINI 1 + /* 32K Window into NIC internal memory */ #define NIC_SRAM_WIN_BASE 0x00008000 
@@ -2415,10 +2421,11 @@ struct tg3 { #define PHY_REV_BCM5411_X0 0x1 /* Found on Netgear GA302T */ u32 led_ctrl; - u32 pci_cmd; + u16 pci_cmd; char board_part_number[24]; - char fw_ver[16]; +#define TG3_VER_SIZE 32 + char fw_ver[TG3_VER_SIZE]; u32 nic_sram_data_cfg; u32 pci_clock_ctrl; struct pci_dev *pdev_peer; diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c index b3c4dbff26b..7c60cbd85dc 100644 --- a/drivers/parisc/ccio-dma.c +++ b/drivers/parisc/ccio-dma.c @@ -42,6 +42,7 @@ #include <linux/reboot.h> #include <linux/proc_fs.h> #include <linux/seq_file.h> +#include <linux/scatterlist.h> #include <asm/byteorder.h> #include <asm/cache.h> /* for L1_CACHE_BYTES */ diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c index e5c323936ea..e527a0e1d6c 100644 --- a/drivers/parisc/sba_iommu.c +++ b/drivers/parisc/sba_iommu.c @@ -28,6 +28,7 @@ #include <linux/mm.h> #include <linux/string.h> #include <linux/pci.h> +#include <linux/scatterlist.h> #include <asm/byteorder.h> #include <asm/io.h> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile index 006054a4099..55505565073 100644 --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile @@ -20,6 +20,9 @@ obj-$(CONFIG_PCI_MSI) += msi.o # Build the Hypertransport interrupt support obj-$(CONFIG_HT_IRQ) += htirq.o +# Build Intel IOMMU support +obj-$(CONFIG_DMAR) += dmar.o iova.o intel-iommu.o + # # Some architectures use the generic PCI setup functions # diff --git a/drivers/pci/dmar.c b/drivers/pci/dmar.c new file mode 100644 index 00000000000..5dfdfdac92e --- /dev/null +++ b/drivers/pci/dmar.c @@ -0,0 +1,329 @@ +/* + * Copyright (c) 2006, Intel Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + * Copyright (C) Ashok Raj <ashok.raj@intel.com> + * Copyright (C) Shaohua Li <shaohua.li@intel.com> + * Copyright (C) Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> + * + * This file implements early detection/parsing of DMA Remapping Devices + * reported to OS through BIOS via DMA remapping reporting (DMAR) ACPI + * tables. + */ + +#include <linux/pci.h> +#include <linux/dmar.h> + +#undef PREFIX +#define PREFIX "DMAR:" + +/* No locks are needed as DMA remapping hardware unit + * list is constructed at boot time and hotplug of + * these units are not supported by the architecture. + */ +LIST_HEAD(dmar_drhd_units); +LIST_HEAD(dmar_rmrr_units); + +static struct acpi_table_header * __initdata dmar_tbl; + +static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd) +{ + /* + * add INCLUDE_ALL at the tail, so scan the list will find it at + * the very end. 
+ */ + if (drhd->include_all) + list_add_tail(&drhd->list, &dmar_drhd_units); + else + list_add(&drhd->list, &dmar_drhd_units); +} + +static void __init dmar_register_rmrr_unit(struct dmar_rmrr_unit *rmrr) +{ + list_add(&rmrr->list, &dmar_rmrr_units); +} + +static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope, + struct pci_dev **dev, u16 segment) +{ + struct pci_bus *bus; + struct pci_dev *pdev = NULL; + struct acpi_dmar_pci_path *path; + int count; + + bus = pci_find_bus(segment, scope->bus); + path = (struct acpi_dmar_pci_path *)(scope + 1); + count = (scope->length - sizeof(struct acpi_dmar_device_scope)) + / sizeof(struct acpi_dmar_pci_path); + + while (count) { + if (pdev) + pci_dev_put(pdev); + /* + * Some BIOSes list non-exist devices in DMAR table, just + * ignore it + */ + if (!bus) { + printk(KERN_WARNING + PREFIX "Device scope bus [%d] not found\n", + scope->bus); + break; + } + pdev = pci_get_slot(bus, PCI_DEVFN(path->dev, path->fn)); + if (!pdev) { + printk(KERN_WARNING PREFIX + "Device scope device [%04x:%02x:%02x.%02x] not found\n", + segment, bus->number, path->dev, path->fn); + break; + } + path ++; + count --; + bus = pdev->subordinate; + } + if (!pdev) { + printk(KERN_WARNING PREFIX + "Device scope device [%04x:%02x:%02x.%02x] not found\n", + segment, scope->bus, path->dev, path->fn); + *dev = NULL; + return 0; + } + if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT && \ + pdev->subordinate) || (scope->entry_type == \ + ACPI_DMAR_SCOPE_TYPE_BRIDGE && !pdev->subordinate)) { + pci_dev_put(pdev); + printk(KERN_WARNING PREFIX + "Device scope type does not match for %s\n", + pci_name(pdev)); + return -EINVAL; + } + *dev = pdev; + return 0; +} + +static int __init dmar_parse_dev_scope(void *start, void *end, int *cnt, + struct pci_dev ***devices, u16 segment) +{ + struct acpi_dmar_device_scope *scope; + void * tmp = start; + int index; + int ret; + + *cnt = 0; + while (start < end) { + scope = start; + if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT || + scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE) + (*cnt)++; + else + printk(KERN_WARNING PREFIX + "Unsupported device scope\n"); + start += scope->length; + } + if (*cnt == 0) + return 0; + + *devices = kcalloc(*cnt, sizeof(struct pci_dev *), GFP_KERNEL); + if (!*devices) + return -ENOMEM; + + start = tmp; + index = 0; + while (start < end) { + scope = start; + if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT || + scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE) { + ret = dmar_parse_one_dev_scope(scope, + &(*devices)[index], segment); + if (ret) { + kfree(*devices); + return ret; + } + index ++; + } + start += scope->length; + } + + return 0; +} + +/** + * dmar_parse_one_drhd - parses exactly one DMA remapping hardware definition + * structure which uniquely represent one DMA remapping hardware unit + * present in the platform + */ +static int __init +dmar_parse_one_drhd(struct acpi_dmar_header *header) +{ + struct acpi_dmar_hardware_unit *drhd; + struct dmar_drhd_unit *dmaru; + int ret = 0; + static int include_all; + + dmaru = kzalloc(sizeof(*dmaru), GFP_KERNEL); + if (!dmaru) + return -ENOMEM; + + drhd = (struct acpi_dmar_hardware_unit *)header; + dmaru->reg_base_addr = drhd->address; + dmaru->include_all = drhd->flags & 0x1; /* BIT0: INCLUDE_ALL */ + + if (!dmaru->include_all) + ret = dmar_parse_dev_scope((void *)(drhd + 1), + ((void *)drhd) + header->length, + &dmaru->devices_cnt, &dmaru->devices, + drhd->segment); + else { + /* Only allow one INCLUDE_ALL */ + if 
(include_all) { + printk(KERN_WARNING PREFIX "Only one INCLUDE_ALL " + "device scope is allowed\n"); + ret = -EINVAL; + } + include_all = 1; + } + + if (ret || (dmaru->devices_cnt == 0 && !dmaru->include_all)) + kfree(dmaru); + else + dmar_register_drhd_unit(dmaru); + return ret; +} + +static int __init +dmar_parse_one_rmrr(struct acpi_dmar_header *header) +{ + struct acpi_dmar_reserved_memory *rmrr; + struct dmar_rmrr_unit *rmrru; + int ret = 0; + + rmrru = kzalloc(sizeof(*rmrru), GFP_KERNEL); + if (!rmrru) + return -ENOMEM; + + rmrr = (struct acpi_dmar_reserved_memory *)header; + rmrru->base_address = rmrr->base_address; + rmrru->end_address = rmrr->end_address; + ret = dmar_parse_dev_scope((void *)(rmrr + 1), + ((void *)rmrr) + header->length, + &rmrru->devices_cnt, &rmrru->devices, rmrr->segment); + + if (ret || (rmrru->devices_cnt == 0)) + kfree(rmrru); + else + dmar_register_rmrr_unit(rmrru); + return ret; +} + +static void __init +dmar_table_print_dmar_entry(struct acpi_dmar_header *header) +{ + struct acpi_dmar_hardware_unit *drhd; + struct acpi_dmar_reserved_memory *rmrr; + + switch (header->type) { + case ACPI_DMAR_TYPE_HARDWARE_UNIT: + drhd = (struct acpi_dmar_hardware_unit *)header; + printk (KERN_INFO PREFIX + "DRHD (flags: 0x%08x)base: 0x%016Lx\n", + drhd->flags, drhd->address); + break; + case ACPI_DMAR_TYPE_RESERVED_MEMORY: + rmrr = (struct acpi_dmar_reserved_memory *)header; + + printk (KERN_INFO PREFIX + "RMRR base: 0x%016Lx end: 0x%016Lx\n", + rmrr->base_address, rmrr->end_address); + break; + } +} + +/** + * parse_dmar_table - parses the DMA reporting table + */ +static int __init +parse_dmar_table(void) +{ + struct acpi_table_dmar *dmar; + struct acpi_dmar_header *entry_header; + int ret = 0; + + dmar = (struct acpi_table_dmar *)dmar_tbl; + if (!dmar) + return -ENODEV; + + if (!dmar->width) { + printk (KERN_WARNING PREFIX "Zero: Invalid DMAR haw\n"); + return -EINVAL; + } + + printk (KERN_INFO PREFIX "Host address width %d\n", + dmar->width + 1); + + entry_header = (struct acpi_dmar_header *)(dmar + 1); + while (((unsigned long)entry_header) < + (((unsigned long)dmar) + dmar_tbl->length)) { + dmar_table_print_dmar_entry(entry_header); + + switch (entry_header->type) { + case ACPI_DMAR_TYPE_HARDWARE_UNIT: + ret = dmar_parse_one_drhd(entry_header); + break; + case ACPI_DMAR_TYPE_RESERVED_MEMORY: + ret = dmar_parse_one_rmrr(entry_header); + break; + default: + printk(KERN_WARNING PREFIX + "Unknown DMAR structure type\n"); + ret = 0; /* for forward compatibility */ + break; + } + if (ret) + break; + + entry_header = ((void *)entry_header + entry_header->length); + } + return ret; +} + + +int __init dmar_table_init(void) +{ + + parse_dmar_table(); + if (list_empty(&dmar_drhd_units)) { + printk(KERN_INFO PREFIX "No DMAR devices found\n"); + return -ENODEV; + } + return 0; +} + +/** + * early_dmar_detect - checks to see if the platform supports DMAR devices + */ +int __init early_dmar_detect(void) +{ + acpi_status status = AE_OK; + + /* if we could find DMAR table, then there are DMAR devices */ + status = acpi_get_table(ACPI_SIG_DMAR, 0, + (struct acpi_table_header **)&dmar_tbl); + + if (ACPI_SUCCESS(status) && !dmar_tbl) { + printk (KERN_WARNING PREFIX "Unable to map DMAR\n"); + status = AE_NOT_FOUND; + } + + return (ACPI_SUCCESS(status) ? 
1 : 0); +} diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c new file mode 100644 index 00000000000..0c4ab3b0727 --- /dev/null +++ b/drivers/pci/intel-iommu.c @@ -0,0 +1,2271 @@ +/* + * Copyright (c) 2006, Intel Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + * Copyright (C) Ashok Raj <ashok.raj@intel.com> + * Copyright (C) Shaohua Li <shaohua.li@intel.com> + * Copyright (C) Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> + */ + +#include <linux/init.h> +#include <linux/bitmap.h> +#include <linux/slab.h> +#include <linux/irq.h> +#include <linux/interrupt.h> +#include <linux/sysdev.h> +#include <linux/spinlock.h> +#include <linux/pci.h> +#include <linux/dmar.h> +#include <linux/dma-mapping.h> +#include <linux/mempool.h> +#include "iova.h" +#include "intel-iommu.h" +#include <asm/proto.h> /* force_iommu in this header in x86-64*/ +#include <asm/cacheflush.h> +#include <asm/iommu.h> +#include "pci.h" + +#define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY) +#define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA) + +#define IOAPIC_RANGE_START (0xfee00000) +#define IOAPIC_RANGE_END (0xfeefffff) +#define IOVA_START_ADDR (0x1000) + +#define DEFAULT_DOMAIN_ADDRESS_WIDTH 48 + +#define DMAR_OPERATION_TIMEOUT (HZ*60) /* 1m */ + +#define DOMAIN_MAX_ADDR(gaw) ((((u64)1) << gaw) - 1) + +static void domain_remove_dev_info(struct dmar_domain *domain); + +static int dmar_disabled; +static int __initdata dmar_map_gfx = 1; +static int dmar_forcedac; + +#define DUMMY_DEVICE_DOMAIN_INFO ((struct device_domain_info *)(-1)) +static DEFINE_SPINLOCK(device_domain_lock); +static LIST_HEAD(device_domain_list); + +static int __init intel_iommu_setup(char *str) +{ + if (!str) + return -EINVAL; + while (*str) { + if (!strncmp(str, "off", 3)) { + dmar_disabled = 1; + printk(KERN_INFO"Intel-IOMMU: disabled\n"); + } else if (!strncmp(str, "igfx_off", 8)) { + dmar_map_gfx = 0; + printk(KERN_INFO + "Intel-IOMMU: disable GFX device mapping\n"); + } else if (!strncmp(str, "forcedac", 8)) { + printk (KERN_INFO + "Intel-IOMMU: Forcing DAC for PCI devices\n"); + dmar_forcedac = 1; + } + + str += strcspn(str, ","); + while (*str == ',') + str++; + } + return 0; +} +__setup("intel_iommu=", intel_iommu_setup); + +static struct kmem_cache *iommu_domain_cache; +static struct kmem_cache *iommu_devinfo_cache; +static struct kmem_cache *iommu_iova_cache; + +static inline void *iommu_kmem_cache_alloc(struct kmem_cache *cachep) +{ + unsigned int flags; + void *vaddr; + + /* trying to avoid low memory issues */ + flags = current->flags & PF_MEMALLOC; + current->flags |= PF_MEMALLOC; + vaddr = kmem_cache_alloc(cachep, GFP_ATOMIC); + current->flags &= (~PF_MEMALLOC | flags); + return vaddr; +} + + +static inline void *alloc_pgtable_page(void) +{ + unsigned int flags; + void *vaddr; + + /* trying to avoid low memory issues */ + flags = 
current->flags & PF_MEMALLOC; + current->flags |= PF_MEMALLOC; + vaddr = (void *)get_zeroed_page(GFP_ATOMIC); + current->flags &= (~PF_MEMALLOC | flags); + return vaddr; +} + +static inline void free_pgtable_page(void *vaddr) +{ + free_page((unsigned long)vaddr); +} + +static inline void *alloc_domain_mem(void) +{ + return iommu_kmem_cache_alloc(iommu_domain_cache); +} + +static inline void free_domain_mem(void *vaddr) +{ + kmem_cache_free(iommu_domain_cache, vaddr); +} + +static inline void * alloc_devinfo_mem(void) +{ + return iommu_kmem_cache_alloc(iommu_devinfo_cache); +} + +static inline void free_devinfo_mem(void *vaddr) +{ + kmem_cache_free(iommu_devinfo_cache, vaddr); +} + +struct iova *alloc_iova_mem(void) +{ + return iommu_kmem_cache_alloc(iommu_iova_cache); +} + +void free_iova_mem(struct iova *iova) +{ + kmem_cache_free(iommu_iova_cache, iova); +} + +static inline void __iommu_flush_cache( + struct intel_iommu *iommu, void *addr, int size) +{ + if (!ecap_coherent(iommu->ecap)) + clflush_cache_range(addr, size); +} + +/* Gets context entry for a given bus and devfn */ +static struct context_entry * device_to_context_entry(struct intel_iommu *iommu, + u8 bus, u8 devfn) +{ + struct root_entry *root; + struct context_entry *context; + unsigned long phy_addr; + unsigned long flags; + + spin_lock_irqsave(&iommu->lock, flags); + root = &iommu->root_entry[bus]; + context = get_context_addr_from_root(root); + if (!context) { + context = (struct context_entry *)alloc_pgtable_page(); + if (!context) { + spin_unlock_irqrestore(&iommu->lock, flags); + return NULL; + } + __iommu_flush_cache(iommu, (void *)context, PAGE_SIZE_4K); + phy_addr = virt_to_phys((void *)context); + set_root_value(root, phy_addr); + set_root_present(root); + __iommu_flush_cache(iommu, root, sizeof(*root)); + } + spin_unlock_irqrestore(&iommu->lock, flags); + return &context[devfn]; +} + +static int device_context_mapped(struct intel_iommu *iommu, u8 bus, u8 devfn) +{ + struct root_entry *root; + struct context_entry *context; + int ret; + unsigned long flags; + + spin_lock_irqsave(&iommu->lock, flags); + root = &iommu->root_entry[bus]; + context = get_context_addr_from_root(root); + if (!context) { + ret = 0; + goto out; + } + ret = context_present(context[devfn]); +out: + spin_unlock_irqrestore(&iommu->lock, flags); + return ret; +} + +static void clear_context_table(struct intel_iommu *iommu, u8 bus, u8 devfn) +{ + struct root_entry *root; + struct context_entry *context; + unsigned long flags; + + spin_lock_irqsave(&iommu->lock, flags); + root = &iommu->root_entry[bus]; + context = get_context_addr_from_root(root); + if (context) { + context_clear_entry(context[devfn]); + __iommu_flush_cache(iommu, &context[devfn], \ + sizeof(*context)); + } + spin_unlock_irqrestore(&iommu->lock, flags); +} + +static void free_context_table(struct intel_iommu *iommu) +{ + struct root_entry *root; + int i; + unsigned long flags; + struct context_entry *context; + + spin_lock_irqsave(&iommu->lock, flags); + if (!iommu->root_entry) { + goto out; + } + for (i = 0; i < ROOT_ENTRY_NR; i++) { + root = &iommu->root_entry[i]; + context = get_context_addr_from_root(root); + if (context) + free_pgtable_page(context); + } + free_pgtable_page(iommu->root_entry); + iommu->root_entry = NULL; +out: + spin_unlock_irqrestore(&iommu->lock, flags); +} + +/* page table handling */ +#define LEVEL_STRIDE (9) +#define LEVEL_MASK (((u64)1 << LEVEL_STRIDE) - 1) + +static inline int agaw_to_level(int agaw) +{ + return agaw + 2; +} + +static inline int 
agaw_to_width(int agaw) +{ + return 30 + agaw * LEVEL_STRIDE; + +} + +static inline int width_to_agaw(int width) +{ + return (width - 30) / LEVEL_STRIDE; +} + +static inline unsigned int level_to_offset_bits(int level) +{ + return (12 + (level - 1) * LEVEL_STRIDE); +} + +static inline int address_level_offset(u64 addr, int level) +{ + return ((addr >> level_to_offset_bits(level)) & LEVEL_MASK); +} + +static inline u64 level_mask(int level) +{ + return ((u64)-1 << level_to_offset_bits(level)); +} + +static inline u64 level_size(int level) +{ + return ((u64)1 << level_to_offset_bits(level)); +} + +static inline u64 align_to_level(u64 addr, int level) +{ + return ((addr + level_size(level) - 1) & level_mask(level)); +} + +static struct dma_pte * addr_to_dma_pte(struct dmar_domain *domain, u64 addr) +{ + int addr_width = agaw_to_width(domain->agaw); + struct dma_pte *parent, *pte = NULL; + int level = agaw_to_level(domain->agaw); + int offset; + unsigned long flags; + + BUG_ON(!domain->pgd); + + addr &= (((u64)1) << addr_width) - 1; + parent = domain->pgd; + + spin_lock_irqsave(&domain->mapping_lock, flags); + while (level > 0) { + void *tmp_page; + + offset = address_level_offset(addr, level); + pte = &parent[offset]; + if (level == 1) + break; + + if (!dma_pte_present(*pte)) { + tmp_page = alloc_pgtable_page(); + + if (!tmp_page) { + spin_unlock_irqrestore(&domain->mapping_lock, + flags); + return NULL; + } + __iommu_flush_cache(domain->iommu, tmp_page, + PAGE_SIZE_4K); + dma_set_pte_addr(*pte, virt_to_phys(tmp_page)); + /* + * high level table always sets r/w, last level page + * table control read/write + */ + dma_set_pte_readable(*pte); + dma_set_pte_writable(*pte); + __iommu_flush_cache(domain->iommu, pte, sizeof(*pte)); + } + parent = phys_to_virt(dma_pte_addr(*pte)); + level--; + } + + spin_unlock_irqrestore(&domain->mapping_lock, flags); + return pte; +} + +/* return address's pte at specific level */ +static struct dma_pte *dma_addr_level_pte(struct dmar_domain *domain, u64 addr, + int level) +{ + struct dma_pte *parent, *pte = NULL; + int total = agaw_to_level(domain->agaw); + int offset; + + parent = domain->pgd; + while (level <= total) { + offset = address_level_offset(addr, total); + pte = &parent[offset]; + if (level == total) + return pte; + + if (!dma_pte_present(*pte)) + break; + parent = phys_to_virt(dma_pte_addr(*pte)); + total--; + } + return NULL; +} + +/* clear one page's page table */ +static void dma_pte_clear_one(struct dmar_domain *domain, u64 addr) +{ + struct dma_pte *pte = NULL; + + /* get last level pte */ + pte = dma_addr_level_pte(domain, addr, 1); + + if (pte) { + dma_clear_pte(*pte); + __iommu_flush_cache(domain->iommu, pte, sizeof(*pte)); + } +} + +/* clear last level pte, a tlb flush should be followed */ +static void dma_pte_clear_range(struct dmar_domain *domain, u64 start, u64 end) +{ + int addr_width = agaw_to_width(domain->agaw); + + start &= (((u64)1) << addr_width) - 1; + end &= (((u64)1) << addr_width) - 1; + /* in case it's partial page */ + start = PAGE_ALIGN_4K(start); + end &= PAGE_MASK_4K; + + /* we don't need lock here, nobody else touches the iova range */ + while (start < end) { + dma_pte_clear_one(domain, start); + start += PAGE_SIZE_4K; + } +} + +/* free page table pages. 
last level pte should already be cleared */ +static void dma_pte_free_pagetable(struct dmar_domain *domain, + u64 start, u64 end) +{ + int addr_width = agaw_to_width(domain->agaw); + struct dma_pte *pte; + int total = agaw_to_level(domain->agaw); + int level; + u64 tmp; + + start &= (((u64)1) << addr_width) - 1; + end &= (((u64)1) << addr_width) - 1; + + /* we don't need lock here, nobody else touches the iova range */ + level = 2; + while (level <= total) { + tmp = align_to_level(start, level); + if (tmp >= end || (tmp + level_size(level) > end)) + return; + + while (tmp < end) { + pte = dma_addr_level_pte(domain, tmp, level); + if (pte) { + free_pgtable_page( + phys_to_virt(dma_pte_addr(*pte))); + dma_clear_pte(*pte); + __iommu_flush_cache(domain->iommu, + pte, sizeof(*pte)); + } + tmp += level_size(level); + } + level++; + } + /* free pgd */ + if (start == 0 && end >= ((((u64)1) << addr_width) - 1)) { + free_pgtable_page(domain->pgd); + domain->pgd = NULL; + } +} + +/* iommu handling */ +static int iommu_alloc_root_entry(struct intel_iommu *iommu) +{ + struct root_entry *root; + unsigned long flags; + + root = (struct root_entry *)alloc_pgtable_page(); + if (!root) + return -ENOMEM; + + __iommu_flush_cache(iommu, root, PAGE_SIZE_4K); + + spin_lock_irqsave(&iommu->lock, flags); + iommu->root_entry = root; + spin_unlock_irqrestore(&iommu->lock, flags); + + return 0; +} + +#define IOMMU_WAIT_OP(iommu, offset, op, cond, sts) \ +{\ + unsigned long start_time = jiffies;\ + while (1) {\ + sts = op (iommu->reg + offset);\ + if (cond)\ + break;\ + if (time_after(jiffies, start_time + DMAR_OPERATION_TIMEOUT))\ + panic("DMAR hardware is malfunctioning\n");\ + cpu_relax();\ + }\ +} + +static void iommu_set_root_entry(struct intel_iommu *iommu) +{ + void *addr; + u32 cmd, sts; + unsigned long flag; + + addr = iommu->root_entry; + + spin_lock_irqsave(&iommu->register_lock, flag); + dmar_writeq(iommu->reg + DMAR_RTADDR_REG, virt_to_phys(addr)); + + cmd = iommu->gcmd | DMA_GCMD_SRTP; + writel(cmd, iommu->reg + DMAR_GCMD_REG); + + /* Make sure hardware complete it */ + IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, + readl, (sts & DMA_GSTS_RTPS), sts); + + spin_unlock_irqrestore(&iommu->register_lock, flag); +} + +static void iommu_flush_write_buffer(struct intel_iommu *iommu) +{ + u32 val; + unsigned long flag; + + if (!cap_rwbf(iommu->cap)) + return; + val = iommu->gcmd | DMA_GCMD_WBF; + + spin_lock_irqsave(&iommu->register_lock, flag); + writel(val, iommu->reg + DMAR_GCMD_REG); + + /* Make sure hardware complete it */ + IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, + readl, (!(val & DMA_GSTS_WBFS)), val); + + spin_unlock_irqrestore(&iommu->register_lock, flag); +} + +/* return value determine if we need a write buffer flush */ +static int __iommu_flush_context(struct intel_iommu *iommu, + u16 did, u16 source_id, u8 function_mask, u64 type, + int non_present_entry_flush) +{ + u64 val = 0; + unsigned long flag; + + /* + * In the non-present entry flush case, if hardware doesn't cache + * non-present entry we do nothing and if hardware cache non-present + * entry, we flush entries of domain 0 (the domain id is used to cache + * any non-present entries) + */ + if (non_present_entry_flush) { + if (!cap_caching_mode(iommu->cap)) + return 1; + else + did = 0; + } + + switch (type) { + case DMA_CCMD_GLOBAL_INVL: + val = DMA_CCMD_GLOBAL_INVL; + break; + case DMA_CCMD_DOMAIN_INVL: + val = DMA_CCMD_DOMAIN_INVL|DMA_CCMD_DID(did); + break; + case DMA_CCMD_DEVICE_INVL: + val = DMA_CCMD_DEVICE_INVL|DMA_CCMD_DID(did) + | 
DMA_CCMD_SID(source_id) | DMA_CCMD_FM(function_mask); + break; + default: + BUG(); + } + val |= DMA_CCMD_ICC; + + spin_lock_irqsave(&iommu->register_lock, flag); + dmar_writeq(iommu->reg + DMAR_CCMD_REG, val); + + /* Make sure hardware complete it */ + IOMMU_WAIT_OP(iommu, DMAR_CCMD_REG, + dmar_readq, (!(val & DMA_CCMD_ICC)), val); + + spin_unlock_irqrestore(&iommu->register_lock, flag); + + /* flush context entry will implictly flush write buffer */ + return 0; +} + +static int inline iommu_flush_context_global(struct intel_iommu *iommu, + int non_present_entry_flush) +{ + return __iommu_flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL, + non_present_entry_flush); +} + +static int inline iommu_flush_context_domain(struct intel_iommu *iommu, u16 did, + int non_present_entry_flush) +{ + return __iommu_flush_context(iommu, did, 0, 0, DMA_CCMD_DOMAIN_INVL, + non_present_entry_flush); +} + +static int inline iommu_flush_context_device(struct intel_iommu *iommu, + u16 did, u16 source_id, u8 function_mask, int non_present_entry_flush) +{ + return __iommu_flush_context(iommu, did, source_id, function_mask, + DMA_CCMD_DEVICE_INVL, non_present_entry_flush); +} + +/* return value determine if we need a write buffer flush */ +static int __iommu_flush_iotlb(struct intel_iommu *iommu, u16 did, + u64 addr, unsigned int size_order, u64 type, + int non_present_entry_flush) +{ + int tlb_offset = ecap_iotlb_offset(iommu->ecap); + u64 val = 0, val_iva = 0; + unsigned long flag; + + /* + * In the non-present entry flush case, if hardware doesn't cache + * non-present entry we do nothing and if hardware cache non-present + * entry, we flush entries of domain 0 (the domain id is used to cache + * any non-present entries) + */ + if (non_present_entry_flush) { + if (!cap_caching_mode(iommu->cap)) + return 1; + else + did = 0; + } + + switch (type) { + case DMA_TLB_GLOBAL_FLUSH: + /* global flush doesn't need set IVA_REG */ + val = DMA_TLB_GLOBAL_FLUSH|DMA_TLB_IVT; + break; + case DMA_TLB_DSI_FLUSH: + val = DMA_TLB_DSI_FLUSH|DMA_TLB_IVT|DMA_TLB_DID(did); + break; + case DMA_TLB_PSI_FLUSH: + val = DMA_TLB_PSI_FLUSH|DMA_TLB_IVT|DMA_TLB_DID(did); + /* Note: always flush non-leaf currently */ + val_iva = size_order | addr; + break; + default: + BUG(); + } + /* Note: set drain read/write */ +#if 0 + /* + * This is probably to be super secure.. Looks like we can + * ignore it without any impact. 
+ */ + if (cap_read_drain(iommu->cap)) + val |= DMA_TLB_READ_DRAIN; +#endif + if (cap_write_drain(iommu->cap)) + val |= DMA_TLB_WRITE_DRAIN; + + spin_lock_irqsave(&iommu->register_lock, flag); + /* Note: Only uses first TLB reg currently */ + if (val_iva) + dmar_writeq(iommu->reg + tlb_offset, val_iva); + dmar_writeq(iommu->reg + tlb_offset + 8, val); + + /* Make sure hardware complete it */ + IOMMU_WAIT_OP(iommu, tlb_offset + 8, + dmar_readq, (!(val & DMA_TLB_IVT)), val); + + spin_unlock_irqrestore(&iommu->register_lock, flag); + + /* check IOTLB invalidation granularity */ + if (DMA_TLB_IAIG(val) == 0) + printk(KERN_ERR"IOMMU: flush IOTLB failed\n"); + if (DMA_TLB_IAIG(val) != DMA_TLB_IIRG(type)) + pr_debug("IOMMU: tlb flush request %Lx, actual %Lx\n", + DMA_TLB_IIRG(type), DMA_TLB_IAIG(val)); + /* flush context entry will implictly flush write buffer */ + return 0; +} + +static int inline iommu_flush_iotlb_global(struct intel_iommu *iommu, + int non_present_entry_flush) +{ + return __iommu_flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH, + non_present_entry_flush); +} + +static int inline iommu_flush_iotlb_dsi(struct intel_iommu *iommu, u16 did, + int non_present_entry_flush) +{ + return __iommu_flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH, + non_present_entry_flush); +} + +static int iommu_flush_iotlb_psi(struct intel_iommu *iommu, u16 did, + u64 addr, unsigned int pages, int non_present_entry_flush) +{ + unsigned int mask; + + BUG_ON(addr & (~PAGE_MASK_4K)); + BUG_ON(pages == 0); + + /* Fallback to domain selective flush if no PSI support */ + if (!cap_pgsel_inv(iommu->cap)) + return iommu_flush_iotlb_dsi(iommu, did, + non_present_entry_flush); + + /* + * PSI requires page size to be 2 ^ x, and the base address is naturally + * aligned to the size + */ + mask = ilog2(__roundup_pow_of_two(pages)); + /* Fallback to domain selective flush if size is too big */ + if (mask > cap_max_amask_val(iommu->cap)) + return iommu_flush_iotlb_dsi(iommu, did, + non_present_entry_flush); + + return __iommu_flush_iotlb(iommu, did, addr, mask, + DMA_TLB_PSI_FLUSH, non_present_entry_flush); +} + +static int iommu_enable_translation(struct intel_iommu *iommu) +{ + u32 sts; + unsigned long flags; + + spin_lock_irqsave(&iommu->register_lock, flags); + writel(iommu->gcmd|DMA_GCMD_TE, iommu->reg + DMAR_GCMD_REG); + + /* Make sure hardware complete it */ + IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, + readl, (sts & DMA_GSTS_TES), sts); + + iommu->gcmd |= DMA_GCMD_TE; + spin_unlock_irqrestore(&iommu->register_lock, flags); + return 0; +} + +static int iommu_disable_translation(struct intel_iommu *iommu) +{ + u32 sts; + unsigned long flag; + + spin_lock_irqsave(&iommu->register_lock, flag); + iommu->gcmd &= ~DMA_GCMD_TE; + writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG); + + /* Make sure hardware complete it */ + IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, + readl, (!(sts & DMA_GSTS_TES)), sts); + + spin_unlock_irqrestore(&iommu->register_lock, flag); + return 0; +} + +/* iommu interrupt handling. Most stuff are MSI-like. 
*/ + +static char *fault_reason_strings[] = +{ + "Software", + "Present bit in root entry is clear", + "Present bit in context entry is clear", + "Invalid context entry", + "Access beyond MGAW", + "PTE Write access is not set", + "PTE Read access is not set", + "Next page table ptr is invalid", + "Root table address invalid", + "Context table ptr is invalid", + "non-zero reserved fields in RTP", + "non-zero reserved fields in CTP", + "non-zero reserved fields in PTE", + "Unknown" +}; +#define MAX_FAULT_REASON_IDX ARRAY_SIZE(fault_reason_strings) + +char *dmar_get_fault_reason(u8 fault_reason) +{ + if (fault_reason > MAX_FAULT_REASON_IDX) + return fault_reason_strings[MAX_FAULT_REASON_IDX]; + else + return fault_reason_strings[fault_reason]; +} + +void dmar_msi_unmask(unsigned int irq) +{ + struct intel_iommu *iommu = get_irq_data(irq); + unsigned long flag; + + /* unmask it */ + spin_lock_irqsave(&iommu->register_lock, flag); + writel(0, iommu->reg + DMAR_FECTL_REG); + /* Read a reg to force flush the post write */ + readl(iommu->reg + DMAR_FECTL_REG); + spin_unlock_irqrestore(&iommu->register_lock, flag); +} + +void dmar_msi_mask(unsigned int irq) +{ + unsigned long flag; + struct intel_iommu *iommu = get_irq_data(irq); + + /* mask it */ + spin_lock_irqsave(&iommu->register_lock, flag); + writel(DMA_FECTL_IM, iommu->reg + DMAR_FECTL_REG); + /* Read a reg to force flush the post write */ + readl(iommu->reg + DMAR_FECTL_REG); + spin_unlock_irqrestore(&iommu->register_lock, flag); +} + +void dmar_msi_write(int irq, struct msi_msg *msg) +{ + struct intel_iommu *iommu = get_irq_data(irq); + unsigned long flag; + + spin_lock_irqsave(&iommu->register_lock, flag); + writel(msg->data, iommu->reg + DMAR_FEDATA_REG); + writel(msg->address_lo, iommu->reg + DMAR_FEADDR_REG); + writel(msg->address_hi, iommu->reg + DMAR_FEUADDR_REG); + spin_unlock_irqrestore(&iommu->register_lock, flag); +} + +void dmar_msi_read(int irq, struct msi_msg *msg) +{ + struct intel_iommu *iommu = get_irq_data(irq); + unsigned long flag; + + spin_lock_irqsave(&iommu->register_lock, flag); + msg->data = readl(iommu->reg + DMAR_FEDATA_REG); + msg->address_lo = readl(iommu->reg + DMAR_FEADDR_REG); + msg->address_hi = readl(iommu->reg + DMAR_FEUADDR_REG); + spin_unlock_irqrestore(&iommu->register_lock, flag); +} + +static int iommu_page_fault_do_one(struct intel_iommu *iommu, int type, + u8 fault_reason, u16 source_id, u64 addr) +{ + char *reason; + + reason = dmar_get_fault_reason(fault_reason); + + printk(KERN_ERR + "DMAR:[%s] Request device [%02x:%02x.%d] " + "fault addr %llx \n" + "DMAR:[fault reason %02d] %s\n", + (type ? 
"DMA Read" : "DMA Write"), + (source_id >> 8), PCI_SLOT(source_id & 0xFF), + PCI_FUNC(source_id & 0xFF), addr, fault_reason, reason); + return 0; +} + +#define PRIMARY_FAULT_REG_LEN (16) +static irqreturn_t iommu_page_fault(int irq, void *dev_id) +{ + struct intel_iommu *iommu = dev_id; + int reg, fault_index; + u32 fault_status; + unsigned long flag; + + spin_lock_irqsave(&iommu->register_lock, flag); + fault_status = readl(iommu->reg + DMAR_FSTS_REG); + + /* TBD: ignore advanced fault log currently */ + if (!(fault_status & DMA_FSTS_PPF)) + goto clear_overflow; + + fault_index = dma_fsts_fault_record_index(fault_status); + reg = cap_fault_reg_offset(iommu->cap); + while (1) { + u8 fault_reason; + u16 source_id; + u64 guest_addr; + int type; + u32 data; + + /* highest 32 bits */ + data = readl(iommu->reg + reg + + fault_index * PRIMARY_FAULT_REG_LEN + 12); + if (!(data & DMA_FRCD_F)) + break; + + fault_reason = dma_frcd_fault_reason(data); + type = dma_frcd_type(data); + + data = readl(iommu->reg + reg + + fault_index * PRIMARY_FAULT_REG_LEN + 8); + source_id = dma_frcd_source_id(data); + + guest_addr = dmar_readq(iommu->reg + reg + + fault_index * PRIMARY_FAULT_REG_LEN); + guest_addr = dma_frcd_page_addr(guest_addr); + /* clear the fault */ + writel(DMA_FRCD_F, iommu->reg + reg + + fault_index * PRIMARY_FAULT_REG_LEN + 12); + + spin_unlock_irqrestore(&iommu->register_lock, flag); + + iommu_page_fault_do_one(iommu, type, fault_reason, + source_id, guest_addr); + + fault_index++; + if (fault_index > cap_num_fault_regs(iommu->cap)) + fault_index = 0; + spin_lock_irqsave(&iommu->register_lock, flag); + } +clear_overflow: + /* clear primary fault overflow */ + fault_status = readl(iommu->reg + DMAR_FSTS_REG); + if (fault_status & DMA_FSTS_PFO) + writel(DMA_FSTS_PFO, iommu->reg + DMAR_FSTS_REG); + + spin_unlock_irqrestore(&iommu->register_lock, flag); + return IRQ_HANDLED; +} + +int dmar_set_interrupt(struct intel_iommu *iommu) +{ + int irq, ret; + + irq = create_irq(); + if (!irq) { + printk(KERN_ERR "IOMMU: no free vectors\n"); + return -EINVAL; + } + + set_irq_data(irq, iommu); + iommu->irq = irq; + + ret = arch_setup_dmar_msi(irq); + if (ret) { + set_irq_data(irq, NULL); + iommu->irq = 0; + destroy_irq(irq); + return 0; + } + + /* Force fault register is cleared */ + iommu_page_fault(irq, iommu); + + ret = request_irq(irq, iommu_page_fault, 0, iommu->name, iommu); + if (ret) + printk(KERN_ERR "IOMMU: can't request irq\n"); + return ret; +} + +static int iommu_init_domains(struct intel_iommu *iommu) +{ + unsigned long ndomains; + unsigned long nlongs; + + ndomains = cap_ndoms(iommu->cap); + pr_debug("Number of Domains supportd <%ld>\n", ndomains); + nlongs = BITS_TO_LONGS(ndomains); + + /* TBD: there might be 64K domains, + * consider other allocation for future chip + */ + iommu->domain_ids = kcalloc(nlongs, sizeof(unsigned long), GFP_KERNEL); + if (!iommu->domain_ids) { + printk(KERN_ERR "Allocating domain id array failed\n"); + return -ENOMEM; + } + iommu->domains = kcalloc(ndomains, sizeof(struct dmar_domain *), + GFP_KERNEL); + if (!iommu->domains) { + printk(KERN_ERR "Allocating domain array failed\n"); + kfree(iommu->domain_ids); + return -ENOMEM; + } + + /* + * if Caching mode is set, then invalid translations are tagged + * with domainid 0. Hence we need to pre-allocate it. 
+ */ + if (cap_caching_mode(iommu->cap)) + set_bit(0, iommu->domain_ids); + return 0; +} + +static struct intel_iommu *alloc_iommu(struct dmar_drhd_unit *drhd) +{ + struct intel_iommu *iommu; + int ret; + int map_size; + u32 ver; + + iommu = kzalloc(sizeof(*iommu), GFP_KERNEL); + if (!iommu) + return NULL; + iommu->reg = ioremap(drhd->reg_base_addr, PAGE_SIZE_4K); + if (!iommu->reg) { + printk(KERN_ERR "IOMMU: can't map the region\n"); + goto error; + } + iommu->cap = dmar_readq(iommu->reg + DMAR_CAP_REG); + iommu->ecap = dmar_readq(iommu->reg + DMAR_ECAP_REG); + + /* the registers might be more than one page */ + map_size = max_t(int, ecap_max_iotlb_offset(iommu->ecap), + cap_max_fault_reg_offset(iommu->cap)); + map_size = PAGE_ALIGN_4K(map_size); + if (map_size > PAGE_SIZE_4K) { + iounmap(iommu->reg); + iommu->reg = ioremap(drhd->reg_base_addr, map_size); + if (!iommu->reg) { + printk(KERN_ERR "IOMMU: can't map the region\n"); + goto error; + } + } + + ver = readl(iommu->reg + DMAR_VER_REG); + pr_debug("IOMMU %llx: ver %d:%d cap %llx ecap %llx\n", + drhd->reg_base_addr, DMAR_VER_MAJOR(ver), DMAR_VER_MINOR(ver), + iommu->cap, iommu->ecap); + ret = iommu_init_domains(iommu); + if (ret) + goto error_unmap; + spin_lock_init(&iommu->lock); + spin_lock_init(&iommu->register_lock); + + drhd->iommu = iommu; + return iommu; +error_unmap: + iounmap(iommu->reg); + iommu->reg = 0; +error: + kfree(iommu); + return NULL; +} + +static void domain_exit(struct dmar_domain *domain); +static void free_iommu(struct intel_iommu *iommu) +{ + struct dmar_domain *domain; + int i; + + if (!iommu) + return; + + i = find_first_bit(iommu->domain_ids, cap_ndoms(iommu->cap)); + for (; i < cap_ndoms(iommu->cap); ) { + domain = iommu->domains[i]; + clear_bit(i, iommu->domain_ids); + domain_exit(domain); + i = find_next_bit(iommu->domain_ids, + cap_ndoms(iommu->cap), i+1); + } + + if (iommu->gcmd & DMA_GCMD_TE) + iommu_disable_translation(iommu); + + if (iommu->irq) { + set_irq_data(iommu->irq, NULL); + /* This will mask the irq */ + free_irq(iommu->irq, iommu); + destroy_irq(iommu->irq); + } + + kfree(iommu->domains); + kfree(iommu->domain_ids); + + /* free context mapping */ + free_context_table(iommu); + + if (iommu->reg) + iounmap(iommu->reg); + kfree(iommu); +} + +static struct dmar_domain * iommu_alloc_domain(struct intel_iommu *iommu) +{ + unsigned long num; + unsigned long ndomains; + struct dmar_domain *domain; + unsigned long flags; + + domain = alloc_domain_mem(); + if (!domain) + return NULL; + + ndomains = cap_ndoms(iommu->cap); + + spin_lock_irqsave(&iommu->lock, flags); + num = find_first_zero_bit(iommu->domain_ids, ndomains); + if (num >= ndomains) { + spin_unlock_irqrestore(&iommu->lock, flags); + free_domain_mem(domain); + printk(KERN_ERR "IOMMU: no free domain ids\n"); + return NULL; + } + + set_bit(num, iommu->domain_ids); + domain->id = num; + domain->iommu = iommu; + iommu->domains[num] = domain; + spin_unlock_irqrestore(&iommu->lock, flags); + + return domain; +} + +static void iommu_free_domain(struct dmar_domain *domain) +{ + unsigned long flags; + + spin_lock_irqsave(&domain->iommu->lock, flags); + clear_bit(domain->id, domain->iommu->domain_ids); + spin_unlock_irqrestore(&domain->iommu->lock, flags); +} + +static struct iova_domain reserved_iova_list; + +static void dmar_init_reserved_ranges(void) +{ + struct pci_dev *pdev = NULL; + struct iova *iova; + int i; + u64 addr, size; + + init_iova_domain(&reserved_iova_list); + + /* IOAPIC ranges shouldn't be accessed by DMA */ + iova = 
reserve_iova(&reserved_iova_list, IOVA_PFN(IOAPIC_RANGE_START), + IOVA_PFN(IOAPIC_RANGE_END)); + if (!iova) + printk(KERN_ERR "Reserve IOAPIC range failed\n"); + + /* Reserve all PCI MMIO to avoid peer-to-peer access */ + for_each_pci_dev(pdev) { + struct resource *r; + + for (i = 0; i < PCI_NUM_RESOURCES; i++) { + r = &pdev->resource[i]; + if (!r->flags || !(r->flags & IORESOURCE_MEM)) + continue; + addr = r->start; + addr &= PAGE_MASK_4K; + size = r->end - addr; + size = PAGE_ALIGN_4K(size); + iova = reserve_iova(&reserved_iova_list, IOVA_PFN(addr), + IOVA_PFN(size + addr) - 1); + if (!iova) + printk(KERN_ERR "Reserve iova failed\n"); + } + } + +} + +static void domain_reserve_special_ranges(struct dmar_domain *domain) +{ + copy_reserved_iova(&reserved_iova_list, &domain->iovad); +} + +static inline int guestwidth_to_adjustwidth(int gaw) +{ + int agaw; + int r = (gaw - 12) % 9; + + if (r == 0) + agaw = gaw; + else + agaw = gaw + 9 - r; + if (agaw > 64) + agaw = 64; + return agaw; +} + +static int domain_init(struct dmar_domain *domain, int guest_width) +{ + struct intel_iommu *iommu; + int adjust_width, agaw; + unsigned long sagaw; + + init_iova_domain(&domain->iovad); + spin_lock_init(&domain->mapping_lock); + + domain_reserve_special_ranges(domain); + + /* calculate AGAW */ + iommu = domain->iommu; + if (guest_width > cap_mgaw(iommu->cap)) + guest_width = cap_mgaw(iommu->cap); + domain->gaw = guest_width; + adjust_width = guestwidth_to_adjustwidth(guest_width); + agaw = width_to_agaw(adjust_width); + sagaw = cap_sagaw(iommu->cap); + if (!test_bit(agaw, &sagaw)) { + /* hardware doesn't support it, choose a bigger one */ + pr_debug("IOMMU: hardware doesn't support agaw %d\n", agaw); + agaw = find_next_bit(&sagaw, 5, agaw); + if (agaw >= 5) + return -ENODEV; + } + domain->agaw = agaw; + INIT_LIST_HEAD(&domain->devices); + + /* always allocate the top pgd */ + domain->pgd = (struct dma_pte *)alloc_pgtable_page(); + if (!domain->pgd) + return -ENOMEM; + __iommu_flush_cache(iommu, domain->pgd, PAGE_SIZE_4K); + return 0; +} + +static void domain_exit(struct dmar_domain *domain) +{ + u64 end; + + /* Domain 0 is reserved, so dont process it */ + if (!domain) + return; + + domain_remove_dev_info(domain); + /* destroy iovas */ + put_iova_domain(&domain->iovad); + end = DOMAIN_MAX_ADDR(domain->gaw); + end = end & (~PAGE_MASK_4K); + + /* clear ptes */ + dma_pte_clear_range(domain, 0, end); + + /* free page tables */ + dma_pte_free_pagetable(domain, 0, end); + + iommu_free_domain(domain); + free_domain_mem(domain); +} + +static int domain_context_mapping_one(struct dmar_domain *domain, + u8 bus, u8 devfn) +{ + struct context_entry *context; + struct intel_iommu *iommu = domain->iommu; + unsigned long flags; + + pr_debug("Set context mapping for %02x:%02x.%d\n", + bus, PCI_SLOT(devfn), PCI_FUNC(devfn)); + BUG_ON(!domain->pgd); + context = device_to_context_entry(iommu, bus, devfn); + if (!context) + return -ENOMEM; + spin_lock_irqsave(&iommu->lock, flags); + if (context_present(*context)) { + spin_unlock_irqrestore(&iommu->lock, flags); + return 0; + } + + context_set_domain_id(*context, domain->id); + context_set_address_width(*context, domain->agaw); + context_set_address_root(*context, virt_to_phys(domain->pgd)); + context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL); + context_set_fault_enable(*context); + context_set_present(*context); + __iommu_flush_cache(iommu, context, sizeof(*context)); + + /* it's a non-present to present mapping */ + if (iommu_flush_context_device(iommu, 
domain->id, + (((u16)bus) << 8) | devfn, DMA_CCMD_MASK_NOBIT, 1)) + iommu_flush_write_buffer(iommu); + else + iommu_flush_iotlb_dsi(iommu, 0, 0); + spin_unlock_irqrestore(&iommu->lock, flags); + return 0; +} + +static int +domain_context_mapping(struct dmar_domain *domain, struct pci_dev *pdev) +{ + int ret; + struct pci_dev *tmp, *parent; + + ret = domain_context_mapping_one(domain, pdev->bus->number, + pdev->devfn); + if (ret) + return ret; + + /* dependent device mapping */ + tmp = pci_find_upstream_pcie_bridge(pdev); + if (!tmp) + return 0; + /* Secondary interface's bus number and devfn 0 */ + parent = pdev->bus->self; + while (parent != tmp) { + ret = domain_context_mapping_one(domain, parent->bus->number, + parent->devfn); + if (ret) + return ret; + parent = parent->bus->self; + } + if (tmp->is_pcie) /* this is a PCIE-to-PCI bridge */ + return domain_context_mapping_one(domain, + tmp->subordinate->number, 0); + else /* this is a legacy PCI bridge */ + return domain_context_mapping_one(domain, + tmp->bus->number, tmp->devfn); +} + +static int domain_context_mapped(struct dmar_domain *domain, + struct pci_dev *pdev) +{ + int ret; + struct pci_dev *tmp, *parent; + + ret = device_context_mapped(domain->iommu, + pdev->bus->number, pdev->devfn); + if (!ret) + return ret; + /* dependent device mapping */ + tmp = pci_find_upstream_pcie_bridge(pdev); + if (!tmp) + return ret; + /* Secondary interface's bus number and devfn 0 */ + parent = pdev->bus->self; + while (parent != tmp) { + ret = device_context_mapped(domain->iommu, parent->bus->number, + parent->devfn); + if (!ret) + return ret; + parent = parent->bus->self; + } + if (tmp->is_pcie) + return device_context_mapped(domain->iommu, + tmp->subordinate->number, 0); + else + return device_context_mapped(domain->iommu, + tmp->bus->number, tmp->devfn); +} + +static int +domain_page_mapping(struct dmar_domain *domain, dma_addr_t iova, + u64 hpa, size_t size, int prot) +{ + u64 start_pfn, end_pfn; + struct dma_pte *pte; + int index; + + if ((prot & (DMA_PTE_READ|DMA_PTE_WRITE)) == 0) + return -EINVAL; + iova &= PAGE_MASK_4K; + start_pfn = ((u64)hpa) >> PAGE_SHIFT_4K; + end_pfn = (PAGE_ALIGN_4K(((u64)hpa) + size)) >> PAGE_SHIFT_4K; + index = 0; + while (start_pfn < end_pfn) { + pte = addr_to_dma_pte(domain, iova + PAGE_SIZE_4K * index); + if (!pte) + return -ENOMEM; + /* We don't need lock here, nobody else + * touches the iova range + */ + BUG_ON(dma_pte_addr(*pte)); + dma_set_pte_addr(*pte, start_pfn << PAGE_SHIFT_4K); + dma_set_pte_prot(*pte, prot); + __iommu_flush_cache(domain->iommu, pte, sizeof(*pte)); + start_pfn++; + index++; + } + return 0; +} + +static void detach_domain_for_dev(struct dmar_domain *domain, u8 bus, u8 devfn) +{ + clear_context_table(domain->iommu, bus, devfn); + iommu_flush_context_global(domain->iommu, 0); + iommu_flush_iotlb_global(domain->iommu, 0); +} + +static void domain_remove_dev_info(struct dmar_domain *domain) +{ + struct device_domain_info *info; + unsigned long flags; + + spin_lock_irqsave(&device_domain_lock, flags); + while (!list_empty(&domain->devices)) { + info = list_entry(domain->devices.next, + struct device_domain_info, link); + list_del(&info->link); + list_del(&info->global); + if (info->dev) + info->dev->dev.archdata.iommu = NULL; + spin_unlock_irqrestore(&device_domain_lock, flags); + + detach_domain_for_dev(info->domain, info->bus, info->devfn); + free_devinfo_mem(info); + + spin_lock_irqsave(&device_domain_lock, flags); + } + spin_unlock_irqrestore(&device_domain_lock, flags); +} + +/* + * 
find_domain + * Note: we use struct pci_dev->dev.archdata.iommu stores the info + */ +struct dmar_domain * +find_domain(struct pci_dev *pdev) +{ + struct device_domain_info *info; + + /* No lock here, assumes no domain exit in normal case */ + info = pdev->dev.archdata.iommu; + if (info) + return info->domain; + return NULL; +} + +static int dmar_pci_device_match(struct pci_dev *devices[], int cnt, + struct pci_dev *dev) +{ + int index; + + while (dev) { + for (index = 0; index < cnt; index ++) + if (dev == devices[index]) + return 1; + + /* Check our parent */ + dev = dev->bus->self; + } + + return 0; +} + +static struct dmar_drhd_unit * +dmar_find_matched_drhd_unit(struct pci_dev *dev) +{ + struct dmar_drhd_unit *drhd = NULL; + + list_for_each_entry(drhd, &dmar_drhd_units, list) { + if (drhd->include_all || dmar_pci_device_match(drhd->devices, + drhd->devices_cnt, dev)) + return drhd; + } + + return NULL; +} + +/* domain is initialized */ +static struct dmar_domain *get_domain_for_dev(struct pci_dev *pdev, int gaw) +{ + struct dmar_domain *domain, *found = NULL; + struct intel_iommu *iommu; + struct dmar_drhd_unit *drhd; + struct device_domain_info *info, *tmp; + struct pci_dev *dev_tmp; + unsigned long flags; + int bus = 0, devfn = 0; + + domain = find_domain(pdev); + if (domain) + return domain; + + dev_tmp = pci_find_upstream_pcie_bridge(pdev); + if (dev_tmp) { + if (dev_tmp->is_pcie) { + bus = dev_tmp->subordinate->number; + devfn = 0; + } else { + bus = dev_tmp->bus->number; + devfn = dev_tmp->devfn; + } + spin_lock_irqsave(&device_domain_lock, flags); + list_for_each_entry(info, &device_domain_list, global) { + if (info->bus == bus && info->devfn == devfn) { + found = info->domain; + break; + } + } + spin_unlock_irqrestore(&device_domain_lock, flags); + /* pcie-pci bridge already has a domain, uses it */ + if (found) { + domain = found; + goto found_domain; + } + } + + /* Allocate new domain for the device */ + drhd = dmar_find_matched_drhd_unit(pdev); + if (!drhd) { + printk(KERN_ERR "IOMMU: can't find DMAR for device %s\n", + pci_name(pdev)); + return NULL; + } + iommu = drhd->iommu; + + domain = iommu_alloc_domain(iommu); + if (!domain) + goto error; + + if (domain_init(domain, gaw)) { + domain_exit(domain); + goto error; + } + + /* register pcie-to-pci device */ + if (dev_tmp) { + info = alloc_devinfo_mem(); + if (!info) { + domain_exit(domain); + goto error; + } + info->bus = bus; + info->devfn = devfn; + info->dev = NULL; + info->domain = domain; + /* This domain is shared by devices under p2p bridge */ + domain->flags |= DOMAIN_FLAG_MULTIPLE_DEVICES; + + /* pcie-to-pci bridge already has a domain, uses it */ + found = NULL; + spin_lock_irqsave(&device_domain_lock, flags); + list_for_each_entry(tmp, &device_domain_list, global) { + if (tmp->bus == bus && tmp->devfn == devfn) { + found = tmp->domain; + break; + } + } + if (found) { + free_devinfo_mem(info); + domain_exit(domain); + domain = found; + } else { + list_add(&info->link, &domain->devices); + list_add(&info->global, &device_domain_list); + } + spin_unlock_irqrestore(&device_domain_lock, flags); + } + +found_domain: + info = alloc_devinfo_mem(); + if (!info) + goto error; + info->bus = pdev->bus->number; + info->devfn = pdev->devfn; + info->dev = pdev; + info->domain = domain; + spin_lock_irqsave(&device_domain_lock, flags); + /* somebody is fast */ + found = find_domain(pdev); + if (found != NULL) { + spin_unlock_irqrestore(&device_domain_lock, flags); + if (found != domain) { + domain_exit(domain); + domain = found; 
+ } + free_devinfo_mem(info); + return domain; + } + list_add(&info->link, &domain->devices); + list_add(&info->global, &device_domain_list); + pdev->dev.archdata.iommu = info; + spin_unlock_irqrestore(&device_domain_lock, flags); + return domain; +error: + /* recheck it here, maybe others set it */ + return find_domain(pdev); +} + +static int iommu_prepare_identity_map(struct pci_dev *pdev, u64 start, u64 end) +{ + struct dmar_domain *domain; + unsigned long size; + u64 base; + int ret; + + printk(KERN_INFO + "IOMMU: Setting identity map for device %s [0x%Lx - 0x%Lx]\n", + pci_name(pdev), start, end); + /* page table init */ + domain = get_domain_for_dev(pdev, DEFAULT_DOMAIN_ADDRESS_WIDTH); + if (!domain) + return -ENOMEM; + + /* The address might not be aligned */ + base = start & PAGE_MASK_4K; + size = end - base; + size = PAGE_ALIGN_4K(size); + if (!reserve_iova(&domain->iovad, IOVA_PFN(base), + IOVA_PFN(base + size) - 1)) { + printk(KERN_ERR "IOMMU: reserve iova failed\n"); + ret = -ENOMEM; + goto error; + } + + pr_debug("Mapping reserved region %lx@%llx for %s\n", + size, base, pci_name(pdev)); + /* + * RMRR range might have overlap with physical memory range, + * clear it first + */ + dma_pte_clear_range(domain, base, base + size); + + ret = domain_page_mapping(domain, base, base, size, + DMA_PTE_READ|DMA_PTE_WRITE); + if (ret) + goto error; + + /* context entry init */ + ret = domain_context_mapping(domain, pdev); + if (!ret) + return 0; +error: + domain_exit(domain); + return ret; + +} + +static inline int iommu_prepare_rmrr_dev(struct dmar_rmrr_unit *rmrr, + struct pci_dev *pdev) +{ + if (pdev->dev.archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO) + return 0; + return iommu_prepare_identity_map(pdev, rmrr->base_address, + rmrr->end_address + 1); +} + +#ifdef CONFIG_DMAR_GFX_WA +extern int arch_get_ram_range(int slot, u64 *addr, u64 *size); +static void __init iommu_prepare_gfx_mapping(void) +{ + struct pci_dev *pdev = NULL; + u64 base, size; + int slot; + int ret; + + for_each_pci_dev(pdev) { + if (pdev->dev.archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO || + !IS_GFX_DEVICE(pdev)) + continue; + printk(KERN_INFO "IOMMU: gfx device %s 1-1 mapping\n", + pci_name(pdev)); + slot = arch_get_ram_range(0, &base, &size); + while (slot >= 0) { + ret = iommu_prepare_identity_map(pdev, + base, base + size); + if (ret) + goto error; + slot = arch_get_ram_range(slot, &base, &size); + } + continue; +error: + printk(KERN_ERR "IOMMU: mapping reserved region failed\n"); + } +} +#endif + +#ifdef CONFIG_DMAR_FLOPPY_WA +static inline void iommu_prepare_isa(void) +{ + struct pci_dev *pdev; + int ret; + + pdev = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL); + if (!pdev) + return; + + printk(KERN_INFO "IOMMU: Prepare 0-16M unity mapping for LPC\n"); + ret = iommu_prepare_identity_map(pdev, 0, 16*1024*1024); + + if (ret) + printk("IOMMU: Failed to create 0-64M identity map, " + "floppy might not work\n"); + +} +#else +static inline void iommu_prepare_isa(void) +{ + return; +} +#endif /* !CONFIG_DMAR_FLPY_WA */ + +int __init init_dmars(void) +{ + struct dmar_drhd_unit *drhd; + struct dmar_rmrr_unit *rmrr; + struct pci_dev *pdev; + struct intel_iommu *iommu; + int ret, unit = 0; + + /* + * for each drhd + * allocate root + * initialize and program root entry to not present + * endfor + */ + for_each_drhd_unit(drhd) { + if (drhd->ignored) + continue; + iommu = alloc_iommu(drhd); + if (!iommu) { + ret = -ENOMEM; + goto error; + } + + /* + * TBD: + * we could share the same root & context tables + * amoung all 
IOMMU's. Need to Split it later. + */ + ret = iommu_alloc_root_entry(iommu); + if (ret) { + printk(KERN_ERR "IOMMU: allocate root entry failed\n"); + goto error; + } + } + + /* + * For each rmrr + * for each dev attached to rmrr + * do + * locate drhd for dev, alloc domain for dev + * allocate free domain + * allocate page table entries for rmrr + * if context not allocated for bus + * allocate and init context + * set present in root table for this bus + * init context with domain, translation etc + * endfor + * endfor + */ + for_each_rmrr_units(rmrr) { + int i; + for (i = 0; i < rmrr->devices_cnt; i++) { + pdev = rmrr->devices[i]; + /* some BIOS lists non-exist devices in DMAR table */ + if (!pdev) + continue; + ret = iommu_prepare_rmrr_dev(rmrr, pdev); + if (ret) + printk(KERN_ERR + "IOMMU: mapping reserved region failed\n"); + } + } + + iommu_prepare_gfx_mapping(); + + iommu_prepare_isa(); + + /* + * for each drhd + * enable fault log + * global invalidate context cache + * global invalidate iotlb + * enable translation + */ + for_each_drhd_unit(drhd) { + if (drhd->ignored) + continue; + iommu = drhd->iommu; + sprintf (iommu->name, "dmar%d", unit++); + + iommu_flush_write_buffer(iommu); + + ret = dmar_set_interrupt(iommu); + if (ret) + goto error; + + iommu_set_root_entry(iommu); + + iommu_flush_context_global(iommu, 0); + iommu_flush_iotlb_global(iommu, 0); + + ret = iommu_enable_translation(iommu); + if (ret) + goto error; + } + + return 0; +error: + for_each_drhd_unit(drhd) { + if (drhd->ignored) + continue; + iommu = drhd->iommu; + free_iommu(iommu); + } + return ret; +} + +static inline u64 aligned_size(u64 host_addr, size_t size) +{ + u64 addr; + addr = (host_addr & (~PAGE_MASK_4K)) + size; + return PAGE_ALIGN_4K(addr); +} + +struct iova * +iommu_alloc_iova(struct dmar_domain *domain, size_t size, u64 end) +{ + struct iova *piova; + + /* Make sure it's in range */ + end = min_t(u64, DOMAIN_MAX_ADDR(domain->gaw), end); + if (!size || (IOVA_START_ADDR + size > end)) + return NULL; + + piova = alloc_iova(&domain->iovad, + size >> PAGE_SHIFT_4K, IOVA_PFN(end), 1); + return piova; +} + +static struct iova * +__intel_alloc_iova(struct device *dev, struct dmar_domain *domain, + size_t size) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct iova *iova = NULL; + + if ((pdev->dma_mask <= DMA_32BIT_MASK) || (dmar_forcedac)) { + iova = iommu_alloc_iova(domain, size, pdev->dma_mask); + } else { + /* + * First try to allocate an io virtual address in + * DMA_32BIT_MASK and if that fails then try allocating + * from higer range + */ + iova = iommu_alloc_iova(domain, size, DMA_32BIT_MASK); + if (!iova) + iova = iommu_alloc_iova(domain, size, pdev->dma_mask); + } + + if (!iova) { + printk(KERN_ERR"Allocating iova for %s failed", pci_name(pdev)); + return NULL; + } + + return iova; +} + +static struct dmar_domain * +get_valid_domain_for_dev(struct pci_dev *pdev) +{ + struct dmar_domain *domain; + int ret; + + domain = get_domain_for_dev(pdev, + DEFAULT_DOMAIN_ADDRESS_WIDTH); + if (!domain) { + printk(KERN_ERR + "Allocating domain for %s failed", pci_name(pdev)); + return 0; + } + + /* make sure context mapping is ok */ + if (unlikely(!domain_context_mapped(domain, pdev))) { + ret = domain_context_mapping(domain, pdev); + if (ret) { + printk(KERN_ERR + "Domain context map for %s failed", + pci_name(pdev)); + return 0; + } + } + + return domain; +} + +static dma_addr_t intel_map_single(struct device *hwdev, void *addr, + size_t size, int dir) +{ + struct pci_dev *pdev = to_pci_dev(hwdev); + int 
ret; + struct dmar_domain *domain; + unsigned long start_addr; + struct iova *iova; + int prot = 0; + + BUG_ON(dir == DMA_NONE); + if (pdev->dev.archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO) + return virt_to_bus(addr); + + domain = get_valid_domain_for_dev(pdev); + if (!domain) + return 0; + + addr = (void *)virt_to_phys(addr); + size = aligned_size((u64)addr, size); + + iova = __intel_alloc_iova(hwdev, domain, size); + if (!iova) + goto error; + + start_addr = iova->pfn_lo << PAGE_SHIFT_4K; + + /* + * Check if DMAR supports zero-length reads on write only + * mappings.. + */ + if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \ + !cap_zlr(domain->iommu->cap)) + prot |= DMA_PTE_READ; + if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) + prot |= DMA_PTE_WRITE; + /* + * addr - (addr + size) might be partial page, we should map the whole + * page. Note: if two part of one page are separately mapped, we + * might have two guest_addr mapping to the same host addr, but this + * is not a big problem + */ + ret = domain_page_mapping(domain, start_addr, + ((u64)addr) & PAGE_MASK_4K, size, prot); + if (ret) + goto error; + + pr_debug("Device %s request: %lx@%llx mapping: %lx@%llx, dir %d\n", + pci_name(pdev), size, (u64)addr, + size, (u64)start_addr, dir); + + /* it's a non-present to present mapping */ + ret = iommu_flush_iotlb_psi(domain->iommu, domain->id, + start_addr, size >> PAGE_SHIFT_4K, 1); + if (ret) + iommu_flush_write_buffer(domain->iommu); + + return (start_addr + ((u64)addr & (~PAGE_MASK_4K))); + +error: + if (iova) + __free_iova(&domain->iovad, iova); + printk(KERN_ERR"Device %s request: %lx@%llx dir %d --- failed\n", + pci_name(pdev), size, (u64)addr, dir); + return 0; +} + +static void intel_unmap_single(struct device *dev, dma_addr_t dev_addr, + size_t size, int dir) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct dmar_domain *domain; + unsigned long start_addr; + struct iova *iova; + + if (pdev->dev.archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO) + return; + domain = find_domain(pdev); + BUG_ON(!domain); + + iova = find_iova(&domain->iovad, IOVA_PFN(dev_addr)); + if (!iova) + return; + + start_addr = iova->pfn_lo << PAGE_SHIFT_4K; + size = aligned_size((u64)dev_addr, size); + + pr_debug("Device %s unmapping: %lx@%llx\n", + pci_name(pdev), size, (u64)start_addr); + + /* clear the whole page */ + dma_pte_clear_range(domain, start_addr, start_addr + size); + /* free page tables */ + dma_pte_free_pagetable(domain, start_addr, start_addr + size); + + if (iommu_flush_iotlb_psi(domain->iommu, domain->id, start_addr, + size >> PAGE_SHIFT_4K, 0)) + iommu_flush_write_buffer(domain->iommu); + + /* free iova */ + __free_iova(&domain->iovad, iova); +} + +static void * intel_alloc_coherent(struct device *hwdev, size_t size, + dma_addr_t *dma_handle, gfp_t flags) +{ + void *vaddr; + int order; + + size = PAGE_ALIGN_4K(size); + order = get_order(size); + flags &= ~(GFP_DMA | GFP_DMA32); + + vaddr = (void *)__get_free_pages(flags, order); + if (!vaddr) + return NULL; + memset(vaddr, 0, size); + + *dma_handle = intel_map_single(hwdev, vaddr, size, DMA_BIDIRECTIONAL); + if (*dma_handle) + return vaddr; + free_pages((unsigned long)vaddr, order); + return NULL; +} + +static void intel_free_coherent(struct device *hwdev, size_t size, + void *vaddr, dma_addr_t dma_handle) +{ + int order; + + size = PAGE_ALIGN_4K(size); + order = get_order(size); + + intel_unmap_single(hwdev, dma_handle, size, DMA_BIDIRECTIONAL); + free_pages((unsigned long)vaddr, order); +} + +#define SG_ENT_VIRT_ADDRESS(sg) 
(sg_virt((sg))) +static void intel_unmap_sg(struct device *hwdev, struct scatterlist *sglist, + int nelems, int dir) +{ + int i; + struct pci_dev *pdev = to_pci_dev(hwdev); + struct dmar_domain *domain; + unsigned long start_addr; + struct iova *iova; + size_t size = 0; + void *addr; + struct scatterlist *sg; + + if (pdev->dev.archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO) + return; + + domain = find_domain(pdev); + + iova = find_iova(&domain->iovad, IOVA_PFN(sglist[0].dma_address)); + if (!iova) + return; + for_each_sg(sglist, sg, nelems, i) { + addr = SG_ENT_VIRT_ADDRESS(sg); + size += aligned_size((u64)addr, sg->length); + } + + start_addr = iova->pfn_lo << PAGE_SHIFT_4K; + + /* clear the whole page */ + dma_pte_clear_range(domain, start_addr, start_addr + size); + /* free page tables */ + dma_pte_free_pagetable(domain, start_addr, start_addr + size); + + if (iommu_flush_iotlb_psi(domain->iommu, domain->id, start_addr, + size >> PAGE_SHIFT_4K, 0)) + iommu_flush_write_buffer(domain->iommu); + + /* free iova */ + __free_iova(&domain->iovad, iova); +} + +static int intel_nontranslate_map_sg(struct device *hddev, + struct scatterlist *sglist, int nelems, int dir) +{ + int i; + struct scatterlist *sg; + + for_each_sg(sglist, sg, nelems, i) { + BUG_ON(!sg_page(sg)); + sg->dma_address = virt_to_bus(SG_ENT_VIRT_ADDRESS(sg)); + sg->dma_length = sg->length; + } + return nelems; +} + +static int intel_map_sg(struct device *hwdev, struct scatterlist *sglist, + int nelems, int dir) +{ + void *addr; + int i; + struct pci_dev *pdev = to_pci_dev(hwdev); + struct dmar_domain *domain; + size_t size = 0; + int prot = 0; + size_t offset = 0; + struct iova *iova = NULL; + int ret; + struct scatterlist *sg; + unsigned long start_addr; + + BUG_ON(dir == DMA_NONE); + if (pdev->dev.archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO) + return intel_nontranslate_map_sg(hwdev, sglist, nelems, dir); + + domain = get_valid_domain_for_dev(pdev); + if (!domain) + return 0; + + for_each_sg(sglist, sg, nelems, i) { + addr = SG_ENT_VIRT_ADDRESS(sg); + addr = (void *)virt_to_phys(addr); + size += aligned_size((u64)addr, sg->length); + } + + iova = __intel_alloc_iova(hwdev, domain, size); + if (!iova) { + sglist->dma_length = 0; + return 0; + } + + /* + * Check if DMAR supports zero-length reads on write only + * mappings.. 
+ */ + if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \ + !cap_zlr(domain->iommu->cap)) + prot |= DMA_PTE_READ; + if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) + prot |= DMA_PTE_WRITE; + + start_addr = iova->pfn_lo << PAGE_SHIFT_4K; + offset = 0; + for_each_sg(sglist, sg, nelems, i) { + addr = SG_ENT_VIRT_ADDRESS(sg); + addr = (void *)virt_to_phys(addr); + size = aligned_size((u64)addr, sg->length); + ret = domain_page_mapping(domain, start_addr + offset, + ((u64)addr) & PAGE_MASK_4K, + size, prot); + if (ret) { + /* clear the page */ + dma_pte_clear_range(domain, start_addr, + start_addr + offset); + /* free page tables */ + dma_pte_free_pagetable(domain, start_addr, + start_addr + offset); + /* free iova */ + __free_iova(&domain->iovad, iova); + return 0; + } + sg->dma_address = start_addr + offset + + ((u64)addr & (~PAGE_MASK_4K)); + sg->dma_length = sg->length; + offset += size; + } + + /* it's a non-present to present mapping */ + if (iommu_flush_iotlb_psi(domain->iommu, domain->id, + start_addr, offset >> PAGE_SHIFT_4K, 1)) + iommu_flush_write_buffer(domain->iommu); + return nelems; +} + +static struct dma_mapping_ops intel_dma_ops = { + .alloc_coherent = intel_alloc_coherent, + .free_coherent = intel_free_coherent, + .map_single = intel_map_single, + .unmap_single = intel_unmap_single, + .map_sg = intel_map_sg, + .unmap_sg = intel_unmap_sg, +}; + +static inline int iommu_domain_cache_init(void) +{ + int ret = 0; + + iommu_domain_cache = kmem_cache_create("iommu_domain", + sizeof(struct dmar_domain), + 0, + SLAB_HWCACHE_ALIGN, + + NULL); + if (!iommu_domain_cache) { + printk(KERN_ERR "Couldn't create iommu_domain cache\n"); + ret = -ENOMEM; + } + + return ret; +} + +static inline int iommu_devinfo_cache_init(void) +{ + int ret = 0; + + iommu_devinfo_cache = kmem_cache_create("iommu_devinfo", + sizeof(struct device_domain_info), + 0, + SLAB_HWCACHE_ALIGN, + + NULL); + if (!iommu_devinfo_cache) { + printk(KERN_ERR "Couldn't create devinfo cache\n"); + ret = -ENOMEM; + } + + return ret; +} + +static inline int iommu_iova_cache_init(void) +{ + int ret = 0; + + iommu_iova_cache = kmem_cache_create("iommu_iova", + sizeof(struct iova), + 0, + SLAB_HWCACHE_ALIGN, + + NULL); + if (!iommu_iova_cache) { + printk(KERN_ERR "Couldn't create iova cache\n"); + ret = -ENOMEM; + } + + return ret; +} + +static int __init iommu_init_mempool(void) +{ + int ret; + ret = iommu_iova_cache_init(); + if (ret) + return ret; + + ret = iommu_domain_cache_init(); + if (ret) + goto domain_error; + + ret = iommu_devinfo_cache_init(); + if (!ret) + return ret; + + kmem_cache_destroy(iommu_domain_cache); +domain_error: + kmem_cache_destroy(iommu_iova_cache); + + return -ENOMEM; +} + +static void __init iommu_exit_mempool(void) +{ + kmem_cache_destroy(iommu_devinfo_cache); + kmem_cache_destroy(iommu_domain_cache); + kmem_cache_destroy(iommu_iova_cache); + +} + +void __init detect_intel_iommu(void) +{ + if (swiotlb || no_iommu || iommu_detected || dmar_disabled) + return; + if (early_dmar_detect()) { + iommu_detected = 1; + } +} + +static void __init init_no_remapping_devices(void) +{ + struct dmar_drhd_unit *drhd; + + for_each_drhd_unit(drhd) { + if (!drhd->include_all) { + int i; + for (i = 0; i < drhd->devices_cnt; i++) + if (drhd->devices[i] != NULL) + break; + /* ignore DMAR unit if no pci devices exist */ + if (i == drhd->devices_cnt) + drhd->ignored = 1; + } + } + + if (dmar_map_gfx) + return; + + for_each_drhd_unit(drhd) { + int i; + if (drhd->ignored || drhd->include_all) + continue; + + for 
(i = 0; i < drhd->devices_cnt; i++) + if (drhd->devices[i] && + !IS_GFX_DEVICE(drhd->devices[i])) + break; + + if (i < drhd->devices_cnt) + continue; + + /* bypass IOMMU if it is just for gfx devices */ + drhd->ignored = 1; + for (i = 0; i < drhd->devices_cnt; i++) { + if (!drhd->devices[i]) + continue; + drhd->devices[i]->dev.archdata.iommu = DUMMY_DEVICE_DOMAIN_INFO; + } + } +} + +int __init intel_iommu_init(void) +{ + int ret = 0; + + if (no_iommu || swiotlb || dmar_disabled) + return -ENODEV; + + if (dmar_table_init()) + return -ENODEV; + + iommu_init_mempool(); + dmar_init_reserved_ranges(); + + init_no_remapping_devices(); + + ret = init_dmars(); + if (ret) { + printk(KERN_ERR "IOMMU: dmar init failed\n"); + put_iova_domain(&reserved_iova_list); + iommu_exit_mempool(); + return ret; + } + printk(KERN_INFO + "PCI-DMA: Intel(R) Virtualization Technology for Directed I/O\n"); + + force_iommu = 1; + dma_ops = &intel_dma_ops; + return 0; +} + diff --git a/drivers/pci/intel-iommu.h b/drivers/pci/intel-iommu.h new file mode 100644 index 00000000000..ee88dd2400c --- /dev/null +++ b/drivers/pci/intel-iommu.h @@ -0,0 +1,325 @@ +/* + * Copyright (c) 2006, Intel Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. + * + * Copyright (C) Ashok Raj <ashok.raj@intel.com> + * Copyright (C) Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> + */ + +#ifndef _INTEL_IOMMU_H_ +#define _INTEL_IOMMU_H_ + +#include <linux/types.h> +#include <linux/msi.h> +#include "iova.h" +#include <linux/io.h> + +/* + * Intel IOMMU register specification per version 1.0 public spec. 
+ */ + +#define DMAR_VER_REG 0x0 /* Arch version supported by this IOMMU */ +#define DMAR_CAP_REG 0x8 /* Hardware supported capabilities */ +#define DMAR_ECAP_REG 0x10 /* Extended capabilities supported */ +#define DMAR_GCMD_REG 0x18 /* Global command register */ +#define DMAR_GSTS_REG 0x1c /* Global status register */ +#define DMAR_RTADDR_REG 0x20 /* Root entry table */ +#define DMAR_CCMD_REG 0x28 /* Context command reg */ +#define DMAR_FSTS_REG 0x34 /* Fault Status register */ +#define DMAR_FECTL_REG 0x38 /* Fault control register */ +#define DMAR_FEDATA_REG 0x3c /* Fault event interrupt data register */ +#define DMAR_FEADDR_REG 0x40 /* Fault event interrupt addr register */ +#define DMAR_FEUADDR_REG 0x44 /* Upper address register */ +#define DMAR_AFLOG_REG 0x58 /* Advanced Fault control */ +#define DMAR_PMEN_REG 0x64 /* Enable Protected Memory Region */ +#define DMAR_PLMBASE_REG 0x68 /* PMRR Low addr */ +#define DMAR_PLMLIMIT_REG 0x6c /* PMRR low limit */ +#define DMAR_PHMBASE_REG 0x70 /* pmrr high base addr */ +#define DMAR_PHMLIMIT_REG 0x78 /* pmrr high limit */ + +#define OFFSET_STRIDE (9) +/* +#define dmar_readl(dmar, reg) readl(dmar + reg) +#define dmar_readq(dmar, reg) ({ \ + u32 lo, hi; \ + lo = readl(dmar + reg); \ + hi = readl(dmar + reg + 4); \ + (((u64) hi) << 32) + lo; }) +*/ +static inline u64 dmar_readq(void *addr) +{ + u32 lo, hi; + lo = readl(addr); + hi = readl(addr + 4); + return (((u64) hi) << 32) + lo; +} + +static inline void dmar_writeq(void __iomem *addr, u64 val) +{ + writel((u32)val, addr); + writel((u32)(val >> 32), addr + 4); +} + +#define DMAR_VER_MAJOR(v) (((v) & 0xf0) >> 4) +#define DMAR_VER_MINOR(v) ((v) & 0x0f) + +/* + * Decoding Capability Register + */ +#define cap_read_drain(c) (((c) >> 55) & 1) +#define cap_write_drain(c) (((c) >> 54) & 1) +#define cap_max_amask_val(c) (((c) >> 48) & 0x3f) +#define cap_num_fault_regs(c) ((((c) >> 40) & 0xff) + 1) +#define cap_pgsel_inv(c) (((c) >> 39) & 1) + +#define cap_super_page_val(c) (((c) >> 34) & 0xf) +#define cap_super_offset(c) (((find_first_bit(&cap_super_page_val(c), 4)) \ + * OFFSET_STRIDE) + 21) + +#define cap_fault_reg_offset(c) ((((c) >> 24) & 0x3ff) * 16) +#define cap_max_fault_reg_offset(c) \ + (cap_fault_reg_offset(c) + cap_num_fault_regs(c) * 16) + +#define cap_zlr(c) (((c) >> 22) & 1) +#define cap_isoch(c) (((c) >> 23) & 1) +#define cap_mgaw(c) ((((c) >> 16) & 0x3f) + 1) +#define cap_sagaw(c) (((c) >> 8) & 0x1f) +#define cap_caching_mode(c) (((c) >> 7) & 1) +#define cap_phmr(c) (((c) >> 6) & 1) +#define cap_plmr(c) (((c) >> 5) & 1) +#define cap_rwbf(c) (((c) >> 4) & 1) +#define cap_afl(c) (((c) >> 3) & 1) +#define cap_ndoms(c) (((unsigned long)1) << (4 + 2 * ((c) & 0x7))) +/* + * Extended Capability Register + */ + +#define ecap_niotlb_iunits(e) ((((e) >> 24) & 0xff) + 1) +#define ecap_iotlb_offset(e) ((((e) >> 8) & 0x3ff) * 16) +#define ecap_max_iotlb_offset(e) \ + (ecap_iotlb_offset(e) + ecap_niotlb_iunits(e) * 16) +#define ecap_coherent(e) ((e) & 0x1) + + +/* IOTLB_REG */ +#define DMA_TLB_GLOBAL_FLUSH (((u64)1) << 60) +#define DMA_TLB_DSI_FLUSH (((u64)2) << 60) +#define DMA_TLB_PSI_FLUSH (((u64)3) << 60) +#define DMA_TLB_IIRG(type) ((type >> 60) & 7) +#define DMA_TLB_IAIG(val) (((val) >> 57) & 7) +#define DMA_TLB_READ_DRAIN (((u64)1) << 49) +#define DMA_TLB_WRITE_DRAIN (((u64)1) << 48) +#define DMA_TLB_DID(id) (((u64)((id) & 0xffff)) << 32) +#define DMA_TLB_IVT (((u64)1) << 63) +#define DMA_TLB_IH_NONLEAF (((u64)1) << 6) +#define DMA_TLB_MAX_SIZE (0x3f) + +/* GCMD_REG */ +#define DMA_GCMD_TE 
(((u32)1) << 31) +#define DMA_GCMD_SRTP (((u32)1) << 30) +#define DMA_GCMD_SFL (((u32)1) << 29) +#define DMA_GCMD_EAFL (((u32)1) << 28) +#define DMA_GCMD_WBF (((u32)1) << 27) + +/* GSTS_REG */ +#define DMA_GSTS_TES (((u32)1) << 31) +#define DMA_GSTS_RTPS (((u32)1) << 30) +#define DMA_GSTS_FLS (((u32)1) << 29) +#define DMA_GSTS_AFLS (((u32)1) << 28) +#define DMA_GSTS_WBFS (((u32)1) << 27) + +/* CCMD_REG */ +#define DMA_CCMD_ICC (((u64)1) << 63) +#define DMA_CCMD_GLOBAL_INVL (((u64)1) << 61) +#define DMA_CCMD_DOMAIN_INVL (((u64)2) << 61) +#define DMA_CCMD_DEVICE_INVL (((u64)3) << 61) +#define DMA_CCMD_FM(m) (((u64)((m) & 0x3)) << 32) +#define DMA_CCMD_MASK_NOBIT 0 +#define DMA_CCMD_MASK_1BIT 1 +#define DMA_CCMD_MASK_2BIT 2 +#define DMA_CCMD_MASK_3BIT 3 +#define DMA_CCMD_SID(s) (((u64)((s) & 0xffff)) << 16) +#define DMA_CCMD_DID(d) ((u64)((d) & 0xffff)) + +/* FECTL_REG */ +#define DMA_FECTL_IM (((u32)1) << 31) + +/* FSTS_REG */ +#define DMA_FSTS_PPF ((u32)2) +#define DMA_FSTS_PFO ((u32)1) +#define dma_fsts_fault_record_index(s) (((s) >> 8) & 0xff) + +/* FRCD_REG, 32 bits access */ +#define DMA_FRCD_F (((u32)1) << 31) +#define dma_frcd_type(d) ((d >> 30) & 1) +#define dma_frcd_fault_reason(c) (c & 0xff) +#define dma_frcd_source_id(c) (c & 0xffff) +#define dma_frcd_page_addr(d) (d & (((u64)-1) << 12)) /* low 64 bit */ + +/* + * 0: Present + * 1-11: Reserved + * 12-63: Context Ptr (12 - (haw-1)) + * 64-127: Reserved + */ +struct root_entry { + u64 val; + u64 rsvd1; +}; +#define ROOT_ENTRY_NR (PAGE_SIZE_4K/sizeof(struct root_entry)) +static inline bool root_present(struct root_entry *root) +{ + return (root->val & 1); +} +static inline void set_root_present(struct root_entry *root) +{ + root->val |= 1; +} +static inline void set_root_value(struct root_entry *root, unsigned long value) +{ + root->val |= value & PAGE_MASK_4K; +} + +struct context_entry; +static inline struct context_entry * +get_context_addr_from_root(struct root_entry *root) +{ + return (struct context_entry *) + (root_present(root)?phys_to_virt( + root->val & PAGE_MASK_4K): + NULL); +} + +/* + * low 64 bits: + * 0: present + * 1: fault processing disable + * 2-3: translation type + * 12-63: address space root + * high 64 bits: + * 0-2: address width + * 3-6: aval + * 8-23: domain id + */ +struct context_entry { + u64 lo; + u64 hi; +}; +#define context_present(c) ((c).lo & 1) +#define context_fault_disable(c) (((c).lo >> 1) & 1) +#define context_translation_type(c) (((c).lo >> 2) & 3) +#define context_address_root(c) ((c).lo & PAGE_MASK_4K) +#define context_address_width(c) ((c).hi & 7) +#define context_domain_id(c) (((c).hi >> 8) & ((1 << 16) - 1)) + +#define context_set_present(c) do {(c).lo |= 1;} while (0) +#define context_set_fault_enable(c) \ + do {(c).lo &= (((u64)-1) << 2) | 1;} while (0) +#define context_set_translation_type(c, val) \ + do { \ + (c).lo &= (((u64)-1) << 4) | 3; \ + (c).lo |= ((val) & 3) << 2; \ + } while (0) +#define CONTEXT_TT_MULTI_LEVEL 0 +#define context_set_address_root(c, val) \ + do {(c).lo |= (val) & PAGE_MASK_4K;} while (0) +#define context_set_address_width(c, val) do {(c).hi |= (val) & 7;} while (0) +#define context_set_domain_id(c, val) \ + do {(c).hi |= ((val) & ((1 << 16) - 1)) << 8;} while (0) +#define context_clear_entry(c) do {(c).lo = 0; (c).hi = 0;} while (0) + +/* + * 0: readable + * 1: writable + * 2-6: reserved + * 7: super page + * 8-11: available + * 12-63: Host physcial address + */ +struct dma_pte { + u64 val; +}; +#define dma_clear_pte(p) do {(p).val = 0;} while (0) + +#define 
DMA_PTE_READ (1)
+#define DMA_PTE_WRITE (2)
+
+#define dma_set_pte_readable(p) do {(p).val |= DMA_PTE_READ;} while (0)
+#define dma_set_pte_writable(p) do {(p).val |= DMA_PTE_WRITE;} while (0)
+#define dma_set_pte_prot(p, prot) \
+        do {(p).val = ((p).val & ~3) | ((prot) & 3); } while (0)
+#define dma_pte_addr(p) ((p).val & PAGE_MASK_4K)
+#define dma_set_pte_addr(p, addr) do {\
+        (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
+#define dma_pte_present(p) (((p).val & 3) != 0)
+
+struct intel_iommu;
+
+struct dmar_domain {
+        int id;                         /* domain id */
+        struct intel_iommu *iommu;      /* back pointer to owning iommu */
+
+        struct list_head devices;       /* all devices' list */
+        struct iova_domain iovad;       /* iova's that belong to this domain */
+
+        struct dma_pte *pgd;            /* virtual address */
+        spinlock_t mapping_lock;        /* page table lock */
+        int gaw;                        /* max guest address width */
+
+        /* adjusted guest address width, 0 is level 2 30-bit */
+        int agaw;
+
+#define DOMAIN_FLAG_MULTIPLE_DEVICES 1
+        int flags;
+};
+
+/* PCI domain-device relationship */
+struct device_domain_info {
+        struct list_head link;          /* link to domain siblings */
+        struct list_head global;        /* link to global list */
+        u8 bus;                         /* PCI bus number */
+        u8 devfn;                       /* PCI devfn number */
+        struct pci_dev *dev;            /* it's NULL for PCIE-to-PCI bridge */
+        struct dmar_domain *domain;     /* pointer to domain */
+};
+
+extern int init_dmars(void);
+
+struct intel_iommu {
+        void __iomem *reg;              /* Pointer to hardware regs, virtual addr */
+        u64 cap;
+        u64 ecap;
+        unsigned long *domain_ids;      /* bitmap of domains */
+        struct dmar_domain **domains;   /* ptr to domains */
+        int seg;
+        u32 gcmd;                       /* Holds TE, EAFL. Don't need SRTP, SFL, WBF */
+        spinlock_t lock;                /* protect context, domain ids */
+        spinlock_t register_lock;       /* protect register handling */
+        struct root_entry *root_entry;  /* virtual address */
+
+        unsigned int irq;
+        unsigned char name[7];          /* Device Name */
+        struct msi_msg saved_msg;
+        struct sys_device sysdev;
+};
+
+#ifndef CONFIG_DMAR_GFX_WA
+static inline void iommu_prepare_gfx_mapping(void)
+{
+        return;
+}
+#endif /* !CONFIG_DMAR_GFX_WA */
+
+#endif
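Before the patch moves on to the iova allocator, a note on the cap_*() accessors defined in the intel-iommu.h hunk above: they are plain bit-field extractions from the 64-bit DMAR capability register. The short standalone fragment below is not part of the patch; it is only a sketch of how such a raw value could be decoded, with the macros copied from the header above and a register value that is invented purely for illustration.

#include <stdio.h>
#include <stdint.h>

/* Bit-field helpers copied from the intel-iommu.h hunk above. */
#define cap_mgaw(c)             ((((c) >> 16) & 0x3f) + 1)
#define cap_sagaw(c)            (((c) >> 8) & 0x1f)
#define cap_ndoms(c)            (((unsigned long)1) << (4 + 2 * ((c) & 0x7)))
#define cap_zlr(c)              (((c) >> 22) & 1)
#define cap_caching_mode(c)     (((c) >> 7) & 1)

int main(void)
{
        uint64_t cap = 0x00000000c9008110ULL;   /* invented sample value, not from real hardware */

        printf("MGAW (max guest address width): %lu\n", (unsigned long)cap_mgaw(cap));
        printf("SAGAW (supported AGAW bitmap):  0x%lx\n", (unsigned long)cap_sagaw(cap));
        printf("number of domains:              %lu\n", cap_ndoms(cap));
        printf("zero-length read support:       %s\n", cap_zlr(cap) ? "yes" : "no");
        printf("caching mode:                   %s\n", cap_caching_mode(cap) ? "yes" : "no");
        return 0;
}

The ecap_*() extended-capability accessors defined in the same hunk follow the same pattern.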
diff --git a/drivers/pci/iova.c b/drivers/pci/iova.c
new file mode 100644
index 00000000000..a84571c2936
--- /dev/null
+++ b/drivers/pci/iova.c
@@ -0,0 +1,394 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ */
+
+#include "iova.h"
+
+void
+init_iova_domain(struct iova_domain *iovad)
+{
+        spin_lock_init(&iovad->iova_alloc_lock);
+        spin_lock_init(&iovad->iova_rbtree_lock);
+        iovad->rbroot = RB_ROOT;
+        iovad->cached32_node = NULL;
+
+}
+
+static struct rb_node *
+__get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
+{
+        if ((*limit_pfn != DMA_32BIT_PFN) ||
+                (iovad->cached32_node == NULL))
+                return rb_last(&iovad->rbroot);
+        else {
+                struct rb_node *prev_node = rb_prev(iovad->cached32_node);
+                struct iova *curr_iova =
+                        container_of(iovad->cached32_node, struct iova, node);
+                *limit_pfn = curr_iova->pfn_lo - 1;
+                return prev_node;
+        }
+}
+
+static void
+__cached_rbnode_insert_update(struct iova_domain *iovad,
+        unsigned long limit_pfn, struct iova *new)
+{
+        if (limit_pfn != DMA_32BIT_PFN)
+                return;
+        iovad->cached32_node = &new->node;
+}
+
+static void
+__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
+{
+        struct iova *cached_iova;
+        struct rb_node *curr;
+
+        if (!iovad->cached32_node)
+                return;
+        curr = iovad->cached32_node;
+        cached_iova = container_of(curr, struct iova, node);
+
+        if (free->pfn_lo >= cached_iova->pfn_lo)
+                iovad->cached32_node = rb_next(&free->node);
+}
+
+/* Computes the padding size required to make the
+ * start address naturally aligned on its size
+ */
+static int
+iova_get_pad_size(int size, unsigned int limit_pfn)
+{
+        unsigned int pad_size = 0;
+        unsigned int order = ilog2(size);
+
+        if (order)
+                pad_size = (limit_pfn + 1) % (1 << order);
+
+        return pad_size;
+}
+
+static int __alloc_iova_range(struct iova_domain *iovad, unsigned long size,
+        unsigned long limit_pfn, struct iova *new, bool size_aligned)
+{
+        struct rb_node *curr = NULL;
+        unsigned long flags;
+        unsigned long saved_pfn;
+        unsigned int pad_size = 0;
+
+        /* Walk the tree backwards */
+        spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+        saved_pfn = limit_pfn;
+        curr = __get_cached_rbnode(iovad, &limit_pfn);
+        while (curr) {
+                struct iova *curr_iova = container_of(curr, struct iova, node);
+                if (limit_pfn < curr_iova->pfn_lo)
+                        goto move_left;
+                else if (limit_pfn < curr_iova->pfn_hi)
+                        goto adjust_limit_pfn;
+                else {
+                        if (size_aligned)
+                                pad_size = iova_get_pad_size(size, limit_pfn);
+                        if ((curr_iova->pfn_hi + size + pad_size) <= limit_pfn)
+                                break;  /* found a free slot */
+                }
+adjust_limit_pfn:
+                limit_pfn = curr_iova->pfn_lo - 1;
+move_left:
+                curr = rb_prev(curr);
+        }
+
+        if (!curr) {
+                if (size_aligned)
+                        pad_size = iova_get_pad_size(size, limit_pfn);
+                if ((IOVA_START_PFN + size + pad_size) > limit_pfn) {
+                        spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+                        return -ENOMEM;
+                }
+        }
+
+        /* pfn_lo will point to size aligned address if size_aligned is set */
+        new->pfn_lo = limit_pfn - (size + pad_size) + 1;
+        new->pfn_hi = new->pfn_lo + size - 1;
+
+        spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+        return 0;
+}
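The size-aligned path of __alloc_iova_range() above leans on iova_get_pad_size(): once the request has been rounded up to a power of two (as alloc_iova() does further down), padding by (limit_pfn + 1) % size makes the resulting pfn_lo a multiple of size. The standalone fragment below is not part of the patch; it only checks that arithmetic, with made-up limit_pfn and size values.

#include <assert.h>
#include <stdio.h>

/* Mirrors iova_get_pad_size() above for the case where size is
 * already a power of two. */
static unsigned long pad_size(unsigned long size, unsigned long limit_pfn)
{
        return (limit_pfn + 1) % size;
}

int main(void)
{
        unsigned long limit_pfn = 0xfffff;      /* made-up 32-bit DMA limit, in pages */
        unsigned long size = 16;                /* request rounded up to a power of two */
        unsigned long pad = pad_size(size, limit_pfn);
        unsigned long pfn_lo = limit_pfn - (size + pad) + 1;

        assert(pfn_lo % size == 0);             /* naturally aligned on its size */
        printf("pfn_lo = 0x%lx, pad = %lu\n", pfn_lo, pad);
        return 0;
}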
+
+static void
+iova_insert_rbtree(struct rb_root *root, struct iova *iova)
+{
+        struct rb_node **new = &(root->rb_node), *parent = NULL;
+        /* Figure out where to put new node */
+        while (*new) {
+                struct iova *this = container_of(*new, struct iova, node);
+                parent = *new;
+
+                if (iova->pfn_lo < this->pfn_lo)
+                        new = &((*new)->rb_left);
+                else if (iova->pfn_lo > this->pfn_lo)
+                        new = &((*new)->rb_right);
+                else
+                        BUG(); /* this should not happen */
+        }
+        /* Add new node and rebalance tree. */
+        rb_link_node(&iova->node, parent, new);
+        rb_insert_color(&iova->node, root);
+}
+
+/**
+ * alloc_iova - allocates an iova
+ * @iovad - iova domain in question
+ * @size - size of page frames to allocate
+ * @limit_pfn - max limit address
+ * @size_aligned - set if size_aligned address range is required
+ * This function allocates an iova in the range limit_pfn to IOVA_START_PFN
+ * looking from limit_pfn instead from IOVA_START_PFN. If the size_aligned
+ * flag is set then the allocated address iova->pfn_lo will be naturally
+ * aligned on roundup_power_of_two(size).
+ */
+struct iova *
+alloc_iova(struct iova_domain *iovad, unsigned long size,
+        unsigned long limit_pfn,
+        bool size_aligned)
+{
+        unsigned long flags;
+        struct iova *new_iova;
+        int ret;
+
+        new_iova = alloc_iova_mem();
+        if (!new_iova)
+                return NULL;
+
+        /* If size aligned is set then round the size to
+         * the next power of two.
+         */
+        if (size_aligned)
+                size = __roundup_pow_of_two(size);
+
+        spin_lock_irqsave(&iovad->iova_alloc_lock, flags);
+        ret = __alloc_iova_range(iovad, size, limit_pfn, new_iova,
+                size_aligned);
+
+        if (ret) {
+                spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags);
+                free_iova_mem(new_iova);
+                return NULL;
+        }
+
+        /* Insert the new_iova into domain rbtree by holding writer lock */
+        spin_lock(&iovad->iova_rbtree_lock);
+        iova_insert_rbtree(&iovad->rbroot, new_iova);
+        __cached_rbnode_insert_update(iovad, limit_pfn, new_iova);
+        spin_unlock(&iovad->iova_rbtree_lock);
+
+        spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags);
+
+        return new_iova;
+}
+
+/**
+ * find_iova - finds an iova for a given pfn
+ * @iovad - iova domain in question.
+ * @pfn - page frame number
+ * This function finds and returns an iova belonging to the
+ * given domain which matches the given pfn.
+ */
+struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
+{
+        unsigned long flags;
+        struct rb_node *node;
+
+        /* Take the lock so that no other thread is manipulating the rbtree */
+        spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+        node = iovad->rbroot.rb_node;
+        while (node) {
+                struct iova *iova = container_of(node, struct iova, node);
+
+                /* If pfn falls within iova's range, return iova */
+                if ((pfn >= iova->pfn_lo) && (pfn <= iova->pfn_hi)) {
+                        spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+                        /* We are not holding the lock while this iova
+                         * is referenced by the caller as the same thread
+                         * which called this function also calls __free_iova()
+                         * and it is by design that only one thread can possibly
+                         * reference a particular iova and hence no conflict.
+                         */
+                        return iova;
+                }
+
+                if (pfn < iova->pfn_lo)
+                        node = node->rb_left;
+                else if (pfn > iova->pfn_lo)
+                        node = node->rb_right;
+        }
+
+        spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+        return NULL;
+}
+
+/**
+ * __free_iova - frees the given iova
+ * @iovad: iova domain in question.
+ * @iova: iova in question.
+ * Frees the given iova belonging to the given domain
+ */
+void
+__free_iova(struct iova_domain *iovad, struct iova *iova)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+        __cached_rbnode_delete_update(iovad, iova);
+        rb_erase(&iova->node, &iovad->rbroot);
+        spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+        free_iova_mem(iova);
+}
+
+/**
+ * free_iova - finds and frees the iova for a given pfn
+ * @iovad: - iova domain in question.
+ * @pfn: - pfn that is allocated previously
+ * This function finds an iova for a given pfn and then
+ * frees the iova from that domain.
+ */
+void
+free_iova(struct iova_domain *iovad, unsigned long pfn)
+{
+        struct iova *iova = find_iova(iovad, pfn);
+        if (iova)
+                __free_iova(iovad, iova);
+
+}
+
+/**
+ * put_iova_domain - destroys the iova domain
+ * @iovad: - iova domain in question.
+ * All the iova's in that domain are destroyed.
+ */
+void put_iova_domain(struct iova_domain *iovad)
+{
+        struct rb_node *node;
+        unsigned long flags;
+
+        spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+        node = rb_first(&iovad->rbroot);
+        while (node) {
+                struct iova *iova = container_of(node, struct iova, node);
+                rb_erase(node, &iovad->rbroot);
+                free_iova_mem(iova);
+                node = rb_first(&iovad->rbroot);
+        }
+        spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+}
+
+static int
+__is_range_overlap(struct rb_node *node,
+        unsigned long pfn_lo, unsigned long pfn_hi)
+{
+        struct iova *iova = container_of(node, struct iova, node);
+
+        if ((pfn_lo <= iova->pfn_hi) && (pfn_hi >= iova->pfn_lo))
+                return 1;
+        return 0;
+}
+
+static struct iova *
+__insert_new_range(struct iova_domain *iovad,
+        unsigned long pfn_lo, unsigned long pfn_hi)
+{
+        struct iova *iova;
+
+        iova = alloc_iova_mem();
+        if (!iova)
+                return iova;
+
+        iova->pfn_hi = pfn_hi;
+        iova->pfn_lo = pfn_lo;
+        iova_insert_rbtree(&iovad->rbroot, iova);
+        return iova;
+}
+
+static void
+__adjust_overlap_range(struct iova *iova,
+        unsigned long *pfn_lo, unsigned long *pfn_hi)
+{
+        if (*pfn_lo < iova->pfn_lo)
+                iova->pfn_lo = *pfn_lo;
+        if (*pfn_hi > iova->pfn_hi)
+                *pfn_lo = iova->pfn_hi + 1;
+}
+
+/**
+ * reserve_iova - reserves an iova in the given range
+ * @iovad: - iova domain pointer
+ * @pfn_lo: - lower page frame address
+ * @pfn_hi: - higher pfn address
+ * This function reserves the address range from pfn_lo to pfn_hi so
+ * that this address is not dished out as part of alloc_iova.
+ */
+struct iova *
+reserve_iova(struct iova_domain *iovad,
+        unsigned long pfn_lo, unsigned long pfn_hi)
+{
+        struct rb_node *node;
+        unsigned long flags;
+        struct iova *iova;
+        unsigned int overlap = 0;
+
+        spin_lock_irqsave(&iovad->iova_alloc_lock, flags);
+        spin_lock(&iovad->iova_rbtree_lock);
+        for (node = rb_first(&iovad->rbroot); node; node = rb_next(node)) {
+                if (__is_range_overlap(node, pfn_lo, pfn_hi)) {
+                        iova = container_of(node, struct iova, node);
+                        __adjust_overlap_range(iova, &pfn_lo, &pfn_hi);
+                        if ((pfn_lo >= iova->pfn_lo) &&
+                                (pfn_hi <= iova->pfn_hi))
+                                goto finish;
+                        overlap = 1;
+
+                } else if (overlap)
+                        break;
+        }
+
+        /* We are here either because this is the first reserved node
+         * or we need to insert the remaining non-overlapping addr range
+         */
+        iova = __insert_new_range(iovad, pfn_lo, pfn_hi);
+finish:
+
+        spin_unlock(&iovad->iova_rbtree_lock);
+        spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags);
+        return iova;
+}
+
+/**
+ * copy_reserved_iova - copies the reserved iova's between domains
+ * @from: - source domain from where to copy
+ * @to: - destination domain where to copy
+ * This function copies reserved iova's from one domain to
+ * another.
+ */ +void +copy_reserved_iova(struct iova_domain *from, struct iova_domain *to) +{ + unsigned long flags; + struct rb_node *node; + + spin_lock_irqsave(&from->iova_alloc_lock, flags); + spin_lock(&from->iova_rbtree_lock); + for (node = rb_first(&from->rbroot); node; node = rb_next(node)) { + struct iova *iova = container_of(node, struct iova, node); + struct iova *new_iova; + new_iova = reserve_iova(to, iova->pfn_lo, iova->pfn_hi); + if (!new_iova) + printk(KERN_ERR "Reserve iova range %lx@%lx failed\n", + iova->pfn_lo, iova->pfn_lo); + } + spin_unlock(&from->iova_rbtree_lock); + spin_unlock_irqrestore(&from->iova_alloc_lock, flags); +} diff --git a/drivers/pci/iova.h b/drivers/pci/iova.h new file mode 100644 index 00000000000..ae3028d5a94 --- /dev/null +++ b/drivers/pci/iova.h @@ -0,0 +1,63 @@ +/* + * Copyright (c) 2006, Intel Corporation. + * + * This file is released under the GPLv2. + * + * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> + * + */ + +#ifndef _IOVA_H_ +#define _IOVA_H_ + +#include <linux/types.h> +#include <linux/kernel.h> +#include <linux/rbtree.h> +#include <linux/dma-mapping.h> + +/* + * We need a fixed PAGE_SIZE of 4K irrespective of + * arch PAGE_SIZE for IOMMU page tables. + */ +#define PAGE_SHIFT_4K (12) +#define PAGE_SIZE_4K (1UL << PAGE_SHIFT_4K) +#define PAGE_MASK_4K (((u64)-1) << PAGE_SHIFT_4K) +#define PAGE_ALIGN_4K(addr) (((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K) + +/* IO virtual address start page frame number */ +#define IOVA_START_PFN (1) + +#define IOVA_PFN(addr) ((addr) >> PAGE_SHIFT_4K) +#define DMA_32BIT_PFN IOVA_PFN(DMA_32BIT_MASK) +#define DMA_64BIT_PFN IOVA_PFN(DMA_64BIT_MASK) + +/* iova structure */ +struct iova { + struct rb_node node; + unsigned long pfn_hi; /* IOMMU dish out addr hi */ + unsigned long pfn_lo; /* IOMMU dish out addr lo */ +}; + +/* holds all the iova translations for a domain */ +struct iova_domain { + spinlock_t iova_alloc_lock;/* Lock to protect iova allocation */ + spinlock_t iova_rbtree_lock; /* Lock to protect update of rbtree */ + struct rb_root rbroot; /* iova domain rbtree root */ + struct rb_node *cached32_node; /* Save last alloced node */ +}; + +struct iova *alloc_iova_mem(void); +void free_iova_mem(struct iova *iova); +void free_iova(struct iova_domain *iovad, unsigned long pfn); +void __free_iova(struct iova_domain *iovad, struct iova *iova); +struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size, + unsigned long limit_pfn, + bool size_aligned); +struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, + unsigned long pfn_hi); +void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to); +void init_iova_domain(struct iova_domain *iovad); +struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn); +void put_iova_domain(struct iova_domain *iovad); + +#endif diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h index 6fda33de84e..fc87e14b50d 100644 --- a/drivers/pci/pci.h +++ b/drivers/pci/pci.h @@ -90,3 +90,4 @@ pci_match_one_device(const struct pci_device_id *id, const struct pci_dev *dev) return NULL; } +struct pci_dev *pci_find_upstream_pcie_bridge(struct pci_dev *pdev); diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index 5db6b6690b5..463a5a9d583 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c @@ -837,6 +837,19 @@ static void pci_release_dev(struct device *dev) kfree(pci_dev); } +static void set_pcie_port_type(struct pci_dev *pdev) +{ + int pos; + u16 reg16; + + pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); + 
if (!pos) + return; + pdev->is_pcie = 1; + pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, ®16); + pdev->pcie_type = (reg16 & PCI_EXP_FLAGS_TYPE) >> 4; +} + /** * pci_cfg_space_size - get the configuration space size of the PCI device. * @dev: PCI device @@ -951,6 +964,7 @@ pci_scan_device(struct pci_bus *bus, int devfn) dev->device = (l >> 16) & 0xffff; dev->cfg_size = pci_cfg_space_size(dev); dev->error_state = pci_channel_io_normal; + set_pcie_port_type(dev); /* Assume 32-bit PCI; let 64-bit PCI cards (which are far rarer) set this higher, assuming the system even supports it. */ diff --git a/drivers/pci/search.c b/drivers/pci/search.c index c6e79d01ce3..b001b5922e3 100644 --- a/drivers/pci/search.c +++ b/drivers/pci/search.c @@ -14,6 +14,40 @@ #include "pci.h" DECLARE_RWSEM(pci_bus_sem); +/* + * find the upstream PCIE-to-PCI bridge of a PCI device + * if the device is PCIE, return NULL + * if the device isn't connected to a PCIE bridge (that is its parent is a + * legacy PCI bridge and the bridge is directly connected to bus 0), return its + * parent + */ +struct pci_dev * +pci_find_upstream_pcie_bridge(struct pci_dev *pdev) +{ + struct pci_dev *tmp = NULL; + + if (pdev->is_pcie) + return NULL; + while (1) { + if (!pdev->bus->self) + break; + pdev = pdev->bus->self; + /* a p2p bridge */ + if (!pdev->is_pcie) { + tmp = pdev; + continue; + } + /* PCI device should connect to a PCIE bridge */ + if (pdev->pcie_type != PCI_EXP_TYPE_PCI_BRIDGE) { + /* Busted hardware? */ + WARN_ON_ONCE(1); + return NULL; + } + return pdev; + } + + return tmp; +} static struct pci_bus *pci_do_find_bus(struct pci_bus *bus, unsigned char busnr) { diff --git a/drivers/power/apm_power.c b/drivers/power/apm_power.c index 39a90a6f0f8..bbf3ee10da0 100644 --- a/drivers/power/apm_power.c +++ b/drivers/power/apm_power.c @@ -26,65 +26,124 @@ static struct power_supply *main_battery; static void find_main_battery(void) { struct device *dev; - struct power_supply *bat, *batm; + struct power_supply *bat = NULL; + struct power_supply *max_charge_bat = NULL; + struct power_supply *max_energy_bat = NULL; union power_supply_propval full; int max_charge = 0; + int max_energy = 0; main_battery = NULL; - batm = NULL; + list_for_each_entry(dev, &power_supply_class->devices, node) { bat = dev_get_drvdata(dev); - /* If none of battery devices cantains 'use_for_apm' flag, - choice one with maximum design charge */ - if (!PSY_PROP(bat, CHARGE_FULL_DESIGN, &full)) { + + if (bat->use_for_apm) { + /* nice, we explicitly asked to report this battery. 
*/ + main_battery = bat; + return; + } + + if (!PSY_PROP(bat, CHARGE_FULL_DESIGN, &full) || + !PSY_PROP(bat, CHARGE_FULL, &full)) { if (full.intval > max_charge) { - batm = bat; + max_charge_bat = bat; max_charge = full.intval; } + } else if (!PSY_PROP(bat, ENERGY_FULL_DESIGN, &full) || + !PSY_PROP(bat, ENERGY_FULL, &full)) { + if (full.intval > max_energy) { + max_energy_bat = bat; + max_energy = full.intval; + } } + } - if (bat->use_for_apm) - main_battery = bat; + if ((max_energy_bat && max_charge_bat) && + (max_energy_bat != max_charge_bat)) { + /* try guess battery with more capacity */ + if (!PSY_PROP(max_charge_bat, VOLTAGE_MAX_DESIGN, &full)) { + if (max_energy > max_charge * full.intval) + main_battery = max_energy_bat; + else + main_battery = max_charge_bat; + } else if (!PSY_PROP(max_energy_bat, VOLTAGE_MAX_DESIGN, + &full)) { + if (max_charge > max_energy / full.intval) + main_battery = max_charge_bat; + else + main_battery = max_energy_bat; + } else { + /* give up, choice any */ + main_battery = max_energy_bat; + } + } else if (max_charge_bat) { + main_battery = max_charge_bat; + } else if (max_energy_bat) { + main_battery = max_energy_bat; + } else { + /* give up, try the last if any */ + main_battery = bat; } - if (!main_battery) - main_battery = batm; } -static int calculate_time(int status) +static int calculate_time(int status, int using_charge) { - union power_supply_propval charge_full, charge_empty; - union power_supply_propval charge, I; + union power_supply_propval full; + union power_supply_propval empty; + union power_supply_propval cur; + union power_supply_propval I; + enum power_supply_property full_prop; + enum power_supply_property full_design_prop; + enum power_supply_property empty_prop; + enum power_supply_property empty_design_prop; + enum power_supply_property cur_avg_prop; + enum power_supply_property cur_now_prop; - if (MPSY_PROP(CHARGE_FULL, &charge_full)) { - /* if battery can't report this property, use design value */ - if (MPSY_PROP(CHARGE_FULL_DESIGN, &charge_full)) + if (MPSY_PROP(CURRENT_AVG, &I)) { + /* if battery can't report average value, use momentary */ + if (MPSY_PROP(CURRENT_NOW, &I)) return -1; } - if (MPSY_PROP(CHARGE_EMPTY, &charge_empty)) { - /* if battery can't report this property, use design value */ - if (MPSY_PROP(CHARGE_EMPTY_DESIGN, &charge_empty)) - charge_empty.intval = 0; + if (using_charge) { + full_prop = POWER_SUPPLY_PROP_CHARGE_FULL; + full_design_prop = POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN; + empty_prop = POWER_SUPPLY_PROP_CHARGE_EMPTY; + empty_design_prop = POWER_SUPPLY_PROP_CHARGE_EMPTY; + cur_avg_prop = POWER_SUPPLY_PROP_CHARGE_AVG; + cur_now_prop = POWER_SUPPLY_PROP_CHARGE_NOW; + } else { + full_prop = POWER_SUPPLY_PROP_ENERGY_FULL; + full_design_prop = POWER_SUPPLY_PROP_ENERGY_FULL_DESIGN; + empty_prop = POWER_SUPPLY_PROP_ENERGY_EMPTY; + empty_design_prop = POWER_SUPPLY_PROP_CHARGE_EMPTY; + cur_avg_prop = POWER_SUPPLY_PROP_ENERGY_AVG; + cur_now_prop = POWER_SUPPLY_PROP_ENERGY_NOW; } - if (MPSY_PROP(CHARGE_AVG, &charge)) { - /* if battery can't report average value, use momentary */ - if (MPSY_PROP(CHARGE_NOW, &charge)) + if (_MPSY_PROP(full_prop, &full)) { + /* if battery can't report this property, use design value */ + if (_MPSY_PROP(full_design_prop, &full)) return -1; } - if (MPSY_PROP(CURRENT_AVG, &I)) { + if (_MPSY_PROP(empty_prop, &empty)) { + /* if battery can't report this property, use design value */ + if (_MPSY_PROP(empty_design_prop, &empty)) + empty.intval = 0; + } + + if (_MPSY_PROP(cur_avg_prop, 
&cur)) { /* if battery can't report average value, use momentary */ - if (MPSY_PROP(CURRENT_NOW, &I)) + if (_MPSY_PROP(cur_now_prop, &cur)) return -1; } if (status == POWER_SUPPLY_STATUS_CHARGING) - return ((charge.intval - charge_full.intval) * 60L) / - I.intval; + return ((cur.intval - full.intval) * 60L) / I.intval; else - return -((charge.intval - charge_empty.intval) * 60L) / - I.intval; + return -((cur.intval - empty.intval) * 60L) / I.intval; } static int calculate_capacity(int using_charge) @@ -200,18 +259,22 @@ static void apm_battery_apm_get_power_status(struct apm_power_info *info) info->units = APM_UNITS_MINS; if (status.intval == POWER_SUPPLY_STATUS_CHARGING) { - if (MPSY_PROP(TIME_TO_FULL_AVG, &time_to_full)) { - if (MPSY_PROP(TIME_TO_FULL_NOW, &time_to_full)) - info->time = calculate_time(status.intval); - else - info->time = time_to_full.intval / 60; + if (!MPSY_PROP(TIME_TO_FULL_AVG, &time_to_full) || + !MPSY_PROP(TIME_TO_FULL_NOW, &time_to_full)) { + info->time = time_to_full.intval / 60; + } else { + info->time = calculate_time(status.intval, 0); + if (info->time == -1) + info->time = calculate_time(status.intval, 1); } } else { - if (MPSY_PROP(TIME_TO_EMPTY_AVG, &time_to_empty)) { - if (MPSY_PROP(TIME_TO_EMPTY_NOW, &time_to_empty)) - info->time = calculate_time(status.intval); - else - info->time = time_to_empty.intval / 60; + if (!MPSY_PROP(TIME_TO_EMPTY_AVG, &time_to_empty) || + !MPSY_PROP(TIME_TO_EMPTY_NOW, &time_to_empty)) { + info->time = time_to_empty.intval / 60; + } else { + info->time = calculate_time(status.intval, 0); + if (info->time == -1) + info->time = calculate_time(status.intval, 1); } } diff --git a/drivers/s390/char/raw3270.c b/drivers/s390/char/raw3270.c index 2edd5fb6d3d..8d1c64a24de 100644 --- a/drivers/s390/char/raw3270.c +++ b/drivers/s390/char/raw3270.c @@ -48,8 +48,8 @@ struct raw3270 { struct timer_list timer; /* Device timer. */ unsigned char *ascebc; /* ascii -> ebcdic table */ - struct class_device *clttydev; /* 3270-class tty device ptr */ - struct class_device *cltubdev; /* 3270-class tub device ptr */ + struct device *clttydev; /* 3270-class tty device ptr */ + struct device *cltubdev; /* 3270-class tub device ptr */ struct raw3270_request init_request; unsigned char init_data[256]; @@ -1107,11 +1107,9 @@ raw3270_delete_device(struct raw3270 *rp) /* Remove from device chain. 
*/ mutex_lock(&raw3270_mutex); if (rp->clttydev && !IS_ERR(rp->clttydev)) - class_device_destroy(class3270, - MKDEV(IBM_TTY3270_MAJOR, rp->minor)); + device_destroy(class3270, MKDEV(IBM_TTY3270_MAJOR, rp->minor)); if (rp->cltubdev && !IS_ERR(rp->cltubdev)) - class_device_destroy(class3270, - MKDEV(IBM_FS3270_MAJOR, rp->minor)); + device_destroy(class3270, MKDEV(IBM_FS3270_MAJOR, rp->minor)); list_del_init(&rp->list); mutex_unlock(&raw3270_mutex); @@ -1181,24 +1179,22 @@ static int raw3270_create_attributes(struct raw3270 *rp) if (rc) goto out; - rp->clttydev = class_device_create(class3270, NULL, - MKDEV(IBM_TTY3270_MAJOR, rp->minor), - &rp->cdev->dev, "tty%s", - rp->cdev->dev.bus_id); + rp->clttydev = device_create(class3270, &rp->cdev->dev, + MKDEV(IBM_TTY3270_MAJOR, rp->minor), + "tty%s", rp->cdev->dev.bus_id); if (IS_ERR(rp->clttydev)) { rc = PTR_ERR(rp->clttydev); goto out_ttydev; } - rp->cltubdev = class_device_create(class3270, NULL, - MKDEV(IBM_FS3270_MAJOR, rp->minor), - &rp->cdev->dev, "tub%s", - rp->cdev->dev.bus_id); + rp->cltubdev = device_create(class3270, &rp->cdev->dev, + MKDEV(IBM_FS3270_MAJOR, rp->minor), + "tub%s", rp->cdev->dev.bus_id); if (!IS_ERR(rp->cltubdev)) goto out; rc = PTR_ERR(rp->cltubdev); - class_device_destroy(class3270, MKDEV(IBM_TTY3270_MAJOR, rp->minor)); + device_destroy(class3270, MKDEV(IBM_TTY3270_MAJOR, rp->minor)); out_ttydev: sysfs_remove_group(&rp->cdev->dev.kobj, &raw3270_attr_group); diff --git a/drivers/s390/char/tape_class.c b/drivers/s390/char/tape_class.c index 2e0d29730b6..aa7f166f403 100644 --- a/drivers/s390/char/tape_class.c +++ b/drivers/s390/char/tape_class.c @@ -69,12 +69,9 @@ struct tape_class_device *register_tape_dev( if (rc) goto fail_with_cdev; - tcd->class_device = class_device_create( - tape_class, - NULL, - tcd->char_device->dev, - device, - "%s", tcd->device_name + tcd->class_device = device_create(tape_class, device, + tcd->char_device->dev, + "%s", tcd->device_name ); rc = IS_ERR(tcd->class_device) ? 
PTR_ERR(tcd->class_device) : 0; if (rc) @@ -90,7 +87,7 @@ struct tape_class_device *register_tape_dev( return tcd; fail_with_class_device: - class_device_destroy(tape_class, tcd->char_device->dev); + device_destroy(tape_class, tcd->char_device->dev); fail_with_cdev: cdev_del(tcd->char_device); @@ -105,11 +102,9 @@ EXPORT_SYMBOL(register_tape_dev); void unregister_tape_dev(struct tape_class_device *tcd) { if (tcd != NULL && !IS_ERR(tcd)) { - sysfs_remove_link( - &tcd->class_device->dev->kobj, - tcd->mode_name - ); - class_device_destroy(tape_class, tcd->char_device->dev); + sysfs_remove_link(&tcd->class_device->kobj, + tcd->mode_name); + device_destroy(tape_class, tcd->char_device->dev); cdev_del(tcd->char_device); kfree(tcd); } diff --git a/drivers/s390/char/tape_class.h b/drivers/s390/char/tape_class.h index a8bd9b47fad..e2b5ac918ac 100644 --- a/drivers/s390/char/tape_class.h +++ b/drivers/s390/char/tape_class.h @@ -24,8 +24,8 @@ #define TAPECLASS_NAME_LEN 32 struct tape_class_device { - struct cdev * char_device; - struct class_device * class_device; + struct cdev *char_device; + struct device *class_device; char device_name[TAPECLASS_NAME_LEN]; char mode_name[TAPECLASS_NAME_LEN]; }; diff --git a/drivers/s390/char/vmlogrdr.c b/drivers/s390/char/vmlogrdr.c index 12f7a4ce82c..e0c4c508e12 100644 --- a/drivers/s390/char/vmlogrdr.c +++ b/drivers/s390/char/vmlogrdr.c @@ -74,7 +74,7 @@ struct vmlogrdr_priv_t { int dev_in_use; /* 1: already opened, 0: not opened*/ spinlock_t priv_lock; struct device *device; - struct class_device *class_device; + struct device *class_device; int autorecording; int autopurge; }; @@ -762,12 +762,10 @@ static int vmlogrdr_register_device(struct vmlogrdr_priv_t *priv) device_unregister(dev); return ret; } - priv->class_device = class_device_create( - vmlogrdr_class, - NULL, - MKDEV(vmlogrdr_major, priv->minor_num), - dev, - "%s", dev->bus_id ); + priv->class_device = device_create(vmlogrdr_class, dev, + MKDEV(vmlogrdr_major, + priv->minor_num), + "%s", dev->bus_id); if (IS_ERR(priv->class_device)) { ret = PTR_ERR(priv->class_device); priv->class_device=NULL; @@ -783,8 +781,7 @@ static int vmlogrdr_register_device(struct vmlogrdr_priv_t *priv) static int vmlogrdr_unregister_device(struct vmlogrdr_priv_t *priv) { - class_device_destroy(vmlogrdr_class, - MKDEV(vmlogrdr_major, priv->minor_num)); + device_destroy(vmlogrdr_class, MKDEV(vmlogrdr_major, priv->minor_num)); if (priv->device != NULL) { sysfs_remove_group(&priv->device->kobj, &vmlogrdr_attr_group); device_unregister(priv->device); diff --git a/drivers/s390/cio/chp.c b/drivers/s390/cio/chp.c index 42c1f4659ad..297cdceb0ca 100644 --- a/drivers/s390/cio/chp.c +++ b/drivers/s390/cio/chp.c @@ -246,7 +246,7 @@ int chp_add_cmg_attr(struct channel_path *chp) static ssize_t chp_status_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct channel_path *chp = container_of(dev, struct channel_path, dev); + struct channel_path *chp = to_channelpath(dev); if (!chp) return 0; @@ -258,7 +258,7 @@ static ssize_t chp_status_write(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { - struct channel_path *cp = container_of(dev, struct channel_path, dev); + struct channel_path *cp = to_channelpath(dev); char cmd[10]; int num_args; int error; @@ -286,7 +286,7 @@ static ssize_t chp_configure_show(struct device *dev, struct channel_path *cp; int status; - cp = container_of(dev, struct channel_path, dev); + cp = to_channelpath(dev); status = chp_info_get_status(cp->chpid); if 
(status < 0) return status; @@ -308,7 +308,7 @@ static ssize_t chp_configure_write(struct device *dev, return -EINVAL; if (val != 0 && val != 1) return -EINVAL; - cp = container_of(dev, struct channel_path, dev); + cp = to_channelpath(dev); chp_cfg_schedule(cp->chpid, val); cfg_wait_idle(); @@ -320,7 +320,7 @@ static DEVICE_ATTR(configure, 0644, chp_configure_show, chp_configure_write); static ssize_t chp_type_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct channel_path *chp = container_of(dev, struct channel_path, dev); + struct channel_path *chp = to_channelpath(dev); if (!chp) return 0; @@ -374,7 +374,7 @@ static void chp_release(struct device *dev) { struct channel_path *cp; - cp = container_of(dev, struct channel_path, dev); + cp = to_channelpath(dev); kfree(cp); } diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c index 5d83dd47146..838f7ac0dc3 100644 --- a/drivers/s390/cio/css.c +++ b/drivers/s390/cio/css.c @@ -182,6 +182,15 @@ static int css_register_subchannel(struct subchannel *sch) sch->dev.bus = &css_bus_type; sch->dev.release = &css_subchannel_release; sch->dev.groups = subch_attr_groups; + /* + * We don't want to generate uevents for I/O subchannels that don't + * have a working ccw device behind them since they will be + * unregistered before they can be used anyway, so we delay the add + * uevent until after device recognition was successful. + */ + if (!cio_is_console(sch->schid)) + /* Console is special, no need to suppress. */ + sch->dev.uevent_suppress = 1; css_update_ssd_info(sch); /* make it known to the system */ ret = css_sch_device_register(sch); diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c index 7507067351b..fd5d0c1570d 100644 --- a/drivers/s390/scsi/zfcp_aux.c +++ b/drivers/s390/scsi/zfcp_aux.c @@ -559,6 +559,7 @@ zfcp_sg_list_alloc(struct zfcp_sg_list *sg_list, size_t size) retval = -ENOMEM; goto out; } + sg_init_table(sg_list->sg, sg_list->count); for (i = 0, sg = sg_list->sg; i < sg_list->count; i++, sg++) { sg->length = min(size, PAGE_SIZE); diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 57cac7008e0..326e7ee232c 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -63,7 +63,7 @@ static inline void * zfcp_sg_to_address(struct scatterlist *list) { - return (void *) (page_address(list->page) + list->offset); + return sg_virt(list); } /** @@ -74,7 +74,7 @@ zfcp_sg_to_address(struct scatterlist *list) static inline void zfcp_address_to_sg(void *address, struct scatterlist *list) { - list->page = virt_to_page(address); + sg_set_page(list, virt_to_page(address)); list->offset = ((unsigned long) address) & (PAGE_SIZE - 1); } diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index a6475a2bb8a..9438d0b2879 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -308,13 +308,15 @@ zfcp_erp_adisc(struct zfcp_port *port) if (send_els == NULL) goto nomem; - send_els->req = kzalloc(sizeof(struct scatterlist), GFP_ATOMIC); + send_els->req = kmalloc(sizeof(struct scatterlist), GFP_ATOMIC); if (send_els->req == NULL) goto nomem; + sg_init_table(send_els->req, 1); - send_els->resp = kzalloc(sizeof(struct scatterlist), GFP_ATOMIC); + send_els->resp = kmalloc(sizeof(struct scatterlist), GFP_ATOMIC); if (send_els->resp == NULL) goto nomem; + sg_init_table(send_els->resp, 1); address = (void *) get_zeroed_page(GFP_ATOMIC); if (address == NULL) @@ -363,7 +365,7 @@ zfcp_erp_adisc(struct zfcp_port *port) retval = 
-ENOMEM; freemem: if (address != NULL) - __free_pages(send_els->req->page, 0); + __free_pages(sg_page(send_els->req), 0); if (send_els != NULL) { kfree(send_els->req); kfree(send_els->resp); @@ -437,7 +439,7 @@ zfcp_erp_adisc_handler(unsigned long data) out: zfcp_port_put(port); - __free_pages(send_els->req->page, 0); + __free_pages(sg_page(send_els->req), 0); kfree(send_els->req); kfree(send_els->resp); kfree(send_els); diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c index fb14014ee16..afb262b4be1 100644 --- a/drivers/scsi/3w-9xxx.c +++ b/drivers/scsi/3w-9xxx.c @@ -1840,7 +1840,7 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, (scsi_bufflen(srb) < TW_MIN_SGL_LENGTH)) { if (srb->sc_data_direction == DMA_TO_DEVICE || srb->sc_data_direction == DMA_BIDIRECTIONAL) { struct scatterlist *sg = scsi_sglist(srb); - char *buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + char *buf = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; memcpy(tw_dev->generic_buffer_virt[request_id], buf, sg->length); kunmap_atomic(buf - sg->offset, KM_IRQ0); } @@ -1919,7 +1919,7 @@ static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int re char *buf; unsigned long flags = 0; local_irq_save(flags); - buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + buf = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; memcpy(buf, tw_dev->generic_buffer_virt[request_id], sg->length); kunmap_atomic(buf - sg->offset, KM_IRQ0); local_irq_restore(flags); diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c index a64153b9603..59716ebeb10 100644 --- a/drivers/scsi/3w-xxxx.c +++ b/drivers/scsi/3w-xxxx.c @@ -1469,7 +1469,7 @@ static void tw_transfer_internal(TW_Device_Extension *tw_dev, int request_id, struct scatterlist *sg = scsi_sglist(cmd); local_irq_save(flags); - buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + buf = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; transfer_len = min(sg->length, len); memcpy(buf, data, transfer_len); diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c index 988f0bc5eda..2597209183d 100644 --- a/drivers/scsi/NCR5380.c +++ b/drivers/scsi/NCR5380.c @@ -298,8 +298,7 @@ static __inline__ void initialize_SCp(Scsi_Cmnd * cmd) if (cmd->use_sg) { cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer; cmd->SCp.buffers_residual = cmd->use_sg - 1; - cmd->SCp.ptr = page_address(cmd->SCp.buffer->page)+ - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); cmd->SCp.this_residual = cmd->SCp.buffer->length; } else { cmd->SCp.buffer = NULL; @@ -2143,8 +2142,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance) { ++cmd->SCp.buffer; --cmd->SCp.buffers_residual; cmd->SCp.this_residual = cmd->SCp.buffer->length; - cmd->SCp.ptr = page_address(cmd->SCp.buffer->page)+ - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); dprintk(NDEBUG_INFORMATION, ("scsi%d : %d bytes and %d buffers left\n", instance->host_no, cmd->SCp.this_residual, cmd->SCp.buffers_residual)); } /* diff --git a/drivers/scsi/NCR53C9x.c b/drivers/scsi/NCR53C9x.c index 96e8e29aa05..5b0efc90391 100644 --- a/drivers/scsi/NCR53C9x.c +++ b/drivers/scsi/NCR53C9x.c @@ -927,7 +927,7 @@ static void esp_get_dmabufs(struct NCR_ESP *esp, Scsi_Cmnd *sp) esp->dma_mmu_get_scsi_sgl(esp, sp); else sp->SCp.ptr = - (char *) virt_to_phys((page_address(sp->SCp.buffer->page) + sp->SCp.buffer->offset)); + (char *) virt_to_phys(sg_virt(sp->SCp.buffer)); } } @@ -1748,7 +1748,7 @@ static inline void advance_sg(struct NCR_ESP *esp, 
Scsi_Cmnd *sp) if (esp->dma_advance_sg) esp->dma_advance_sg (sp); else - sp->SCp.ptr = (char *) virt_to_phys((page_address(sp->SCp.buffer->page) + sp->SCp.buffer->offset)); + sp->SCp.ptr = (char *) virt_to_phys(sg_virt(sp->SCp.buffer)); } diff --git a/drivers/scsi/NCR53c406a.c b/drivers/scsi/NCR53c406a.c index 3168a179484..137d065db3d 100644 --- a/drivers/scsi/NCR53c406a.c +++ b/drivers/scsi/NCR53c406a.c @@ -875,8 +875,7 @@ static void NCR53c406a_intr(void *dev_id) outb(TRANSFER_INFO | DMA_OP, CMD_REG); #if USE_PIO scsi_for_each_sg(current_SC, sg, scsi_sg_count(current_SC), i) { - NCR53c406a_pio_write(page_address(sg->page) + sg->offset, - sg->length); + NCR53c406a_pio_write(sg_virt(sg), sg->length); } REG0; #endif /* USE_PIO */ @@ -897,8 +896,7 @@ static void NCR53c406a_intr(void *dev_id) outb(TRANSFER_INFO | DMA_OP, CMD_REG); #if USE_PIO scsi_for_each_sg(current_SC, sg, scsi_sg_count(current_SC), i) { - NCR53c406a_pio_read(page_address(sg->page) + sg->offset, - sg->length); + NCR53c406a_pio_read(sg_virt(sg), sg->length); } REG0; #endif /* USE_PIO */ diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c index 80e448d0f3d..a77ab8d693d 100644 --- a/drivers/scsi/aacraid/aachba.c +++ b/drivers/scsi/aacraid/aachba.c @@ -356,7 +356,7 @@ static void aac_internal_transfer(struct scsi_cmnd *scsicmd, void *data, unsigne int transfer_len; struct scatterlist *sg = scsi_sglist(scsicmd); - buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + buf = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; transfer_len = min(sg->length, len + offset); transfer_len -= offset; diff --git a/drivers/scsi/aha152x.c b/drivers/scsi/aha152x.c index a58c265dc8a..ea8c6994764 100644 --- a/drivers/scsi/aha152x.c +++ b/drivers/scsi/aha152x.c @@ -613,7 +613,7 @@ struct aha152x_scdata { #define SCNEXT(SCpnt) SCDATA(SCpnt)->next #define SCSEM(SCpnt) SCDATA(SCpnt)->done -#define SG_ADDRESS(buffer) ((char *) (page_address((buffer)->page)+(buffer)->offset)) +#define SG_ADDRESS(buffer) ((char *) sg_virt((buffer))) /* state handling */ static void seldi_run(struct Scsi_Host *shpnt); diff --git a/drivers/scsi/aha1542.c b/drivers/scsi/aha1542.c index 961a1882cb7..bbcc2c52d79 100644 --- a/drivers/scsi/aha1542.c +++ b/drivers/scsi/aha1542.c @@ -49,7 +49,7 @@ #include "aha1542.h" #define SCSI_BUF_PA(address) isa_virt_to_bus(address) -#define SCSI_SG_PA(sgent) (isa_page_to_bus((sgent)->page) + (sgent)->offset) +#define SCSI_SG_PA(sgent) (isa_page_to_bus(sg_page((sgent))) + (sgent)->offset) static void BAD_DMA(void *address, unsigned int length) { @@ -66,8 +66,7 @@ static void BAD_SG_DMA(Scsi_Cmnd * SCpnt, int badseg) { printk(KERN_CRIT "sgpnt[%d:%d] page %p/0x%llx length %u\n", - badseg, nseg, - page_address(sgp->page) + sgp->offset, + badseg, nseg, sg_virt(sgp), (unsigned long long)SCSI_SG_PA(sgp), sgp->length); @@ -712,8 +711,7 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) printk(KERN_CRIT "Bad segment list supplied to aha1542.c (%d, %d)\n", SCpnt->use_sg, i); scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) { printk(KERN_CRIT "%d: %p %d\n", i, - (page_address(sg->page) + - sg->offset), sg->length); + sg_virt(sg), sg->length); }; printk(KERN_CRIT "cptr %x: ", (unsigned int) cptr); ptr = (unsigned char *) &cptr[i]; diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c index f81777586b8..f7a252885a5 100644 --- a/drivers/scsi/arcmsr/arcmsr_hba.c +++ b/drivers/scsi/arcmsr/arcmsr_hba.c @@ -1343,7 +1343,7 @@ static int arcmsr_iop_message_xfer(struct 
AdapterControlBlock *acb, \ /* 4 bytes: Areca io control code */ sg = scsi_sglist(cmd); - buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + buffer = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; if (scsi_sg_count(cmd) > 1) { retvalue = ARCMSR_MESSAGE_FAIL; goto message_out; @@ -1593,7 +1593,7 @@ static void arcmsr_handle_virtual_command(struct AdapterControlBlock *acb, strncpy(&inqdata[32], "R001", 4); /* Product Revision */ sg = scsi_sglist(cmd); - buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + buffer = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; memcpy(buffer, inqdata, sizeof(inqdata)); sg = scsi_sglist(cmd); diff --git a/drivers/scsi/atari_NCR5380.c b/drivers/scsi/atari_NCR5380.c index 52d0b87e9aa..d1780980fb2 100644 --- a/drivers/scsi/atari_NCR5380.c +++ b/drivers/scsi/atari_NCR5380.c @@ -515,8 +515,7 @@ static inline void initialize_SCp(Scsi_Cmnd *cmd) if (cmd->use_sg) { cmd->SCp.buffer = (struct scatterlist *)cmd->request_buffer; cmd->SCp.buffers_residual = cmd->use_sg - 1; - cmd->SCp.ptr = (char *)page_address(cmd->SCp.buffer->page) + - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); cmd->SCp.this_residual = cmd->SCp.buffer->length; /* ++roman: Try to merge some scatter-buffers if they are at * contiguous physical addresses. @@ -2054,8 +2053,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance) ++cmd->SCp.buffer; --cmd->SCp.buffers_residual; cmd->SCp.this_residual = cmd->SCp.buffer->length; - cmd->SCp.ptr = page_address(cmd->SCp.buffer->page) + - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); /* ++roman: Try to merge some scatter-buffers if * they are at contiguous physical addresses. */ diff --git a/drivers/scsi/eata_pio.c b/drivers/scsi/eata_pio.c index 96180bb47e4..982c5092be1 100644 --- a/drivers/scsi/eata_pio.c +++ b/drivers/scsi/eata_pio.c @@ -172,7 +172,7 @@ static void IncStat(struct scsi_pointer *SCp, unsigned int Increment) SCp->Status = 0; else { SCp->buffer++; - SCp->ptr = page_address(SCp->buffer->page) + SCp->buffer->offset; + SCp->ptr = sg_virt(SCp->buffer); SCp->this_residual = SCp->buffer->length; } } @@ -410,7 +410,7 @@ static int eata_pio_queue(struct scsi_cmnd *cmd, } else { cmd->SCp.buffer = cmd->request_buffer; cmd->SCp.buffers_residual = cmd->use_sg; - cmd->SCp.ptr = page_address(cmd->SCp.buffer->page) + cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); cmd->SCp.this_residual = cmd->SCp.buffer->length; } cmd->SCp.Status = (cmd->SCp.this_residual != 0); /* TRUE as long as bytes diff --git a/drivers/scsi/fd_mcs.c b/drivers/scsi/fd_mcs.c index 668569e8856..8335b608e57 100644 --- a/drivers/scsi/fd_mcs.c +++ b/drivers/scsi/fd_mcs.c @@ -973,7 +973,7 @@ static irqreturn_t fd_mcs_intr(int irq, void *dev_id) if (current_SC->SCp.buffers_residual) { --current_SC->SCp.buffers_residual; ++current_SC->SCp.buffer; - current_SC->SCp.ptr = page_address(current_SC->SCp.buffer->page) + current_SC->SCp.buffer->offset; + current_SC->SCp.ptr = sg_virt(current_SC->SCp.buffer); current_SC->SCp.this_residual = current_SC->SCp.buffer->length; } else break; @@ -1006,7 +1006,7 @@ static irqreturn_t fd_mcs_intr(int irq, void *dev_id) if (!current_SC->SCp.this_residual && current_SC->SCp.buffers_residual) { --current_SC->SCp.buffers_residual; ++current_SC->SCp.buffer; - current_SC->SCp.ptr = page_address(current_SC->SCp.buffer->page) + current_SC->SCp.buffer->offset; + current_SC->SCp.ptr = sg_virt(current_SC->SCp.buffer); current_SC->SCp.this_residual = current_SC->SCp.buffer->length; } } 
@@ -1109,7 +1109,7 @@ static int fd_mcs_queue(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) if (current_SC->use_sg) { current_SC->SCp.buffer = (struct scatterlist *) current_SC->request_buffer; - current_SC->SCp.ptr = page_address(current_SC->SCp.buffer->page) + current_SC->SCp.buffer->offset; + current_SC->SCp.ptr = sg_virt(current_SC->SCp.buffer); current_SC->SCp.this_residual = current_SC->SCp.buffer->length; current_SC->SCp.buffers_residual = current_SC->use_sg - 1; } else { diff --git a/drivers/scsi/fdomain.c b/drivers/scsi/fdomain.c index 5d282e6a6ae..2cd6b4959eb 100644 --- a/drivers/scsi/fdomain.c +++ b/drivers/scsi/fdomain.c @@ -1321,7 +1321,7 @@ static irqreturn_t do_fdomain_16x0_intr(int irq, void *dev_id) if (current_SC->SCp.buffers_residual) { --current_SC->SCp.buffers_residual; ++current_SC->SCp.buffer; - current_SC->SCp.ptr = page_address(current_SC->SCp.buffer->page) + current_SC->SCp.buffer->offset; + current_SC->SCp.ptr = sg_virt(current_SC->SCp.buffer); current_SC->SCp.this_residual = current_SC->SCp.buffer->length; } else break; @@ -1354,7 +1354,7 @@ static irqreturn_t do_fdomain_16x0_intr(int irq, void *dev_id) && current_SC->SCp.buffers_residual) { --current_SC->SCp.buffers_residual; ++current_SC->SCp.buffer; - current_SC->SCp.ptr = page_address(current_SC->SCp.buffer->page) + current_SC->SCp.buffer->offset; + current_SC->SCp.ptr = sg_virt(current_SC->SCp.buffer); current_SC->SCp.this_residual = current_SC->SCp.buffer->length; } } @@ -1439,8 +1439,7 @@ static int fdomain_16x0_queue(struct scsi_cmnd *SCpnt, if (scsi_sg_count(current_SC)) { current_SC->SCp.buffer = scsi_sglist(current_SC); - current_SC->SCp.ptr = page_address(current_SC->SCp.buffer->page) - + current_SC->SCp.buffer->offset; + current_SC->SCp.ptr = sg_virt(current_SC->SCp.buffer); current_SC->SCp.this_residual = current_SC->SCp.buffer->length; current_SC->SCp.buffers_residual = scsi_sg_count(current_SC) - 1; } else { diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c index 3ac080ee6e2..5ab3ce76248 100644 --- a/drivers/scsi/gdth.c +++ b/drivers/scsi/gdth.c @@ -2374,18 +2374,18 @@ static void gdth_copy_internal_data(gdth_ha_str *ha, Scsi_Cmnd *scp, if (cpsum+cpnow > cpcount) cpnow = cpcount - cpsum; cpsum += cpnow; - if (!sl->page) { + if (!sg_page(sl)) { printk("GDT-HA %d: invalid sc/gt element in gdth_copy_internal_data()\n", ha->hanum); return; } local_irq_save(flags); - address = kmap_atomic(sl->page, KM_BIO_SRC_IRQ) + sl->offset; + address = kmap_atomic(sg_page(sl), KM_BIO_SRC_IRQ) + sl->offset; if (to_buffer) memcpy(buffer, address, cpnow); else memcpy(address, buffer, cpnow); - flush_dcache_page(sl->page); + flush_dcache_page(sg_page(sl)); kunmap_atomic(address, KM_BIO_SRC_IRQ); local_irq_restore(flags); if (cpsum == cpcount) diff --git a/drivers/scsi/ibmmca.c b/drivers/scsi/ibmmca.c index 714e6273a70..db004a45073 100644 --- a/drivers/scsi/ibmmca.c +++ b/drivers/scsi/ibmmca.c @@ -1828,7 +1828,7 @@ static int ibmmca_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) BUG_ON(scsi_sg_count(cmd) > 16); scsi_for_each_sg(cmd, sg, scsi_sg_count(cmd), i) { - ld(shpnt)[ldn].sge[i].address = (void *) (isa_page_to_bus(sg->page) + sg->offset); + ld(shpnt)[ldn].sge[i].address = (void *) (isa_page_to_bus(sg_page(sg)) + sg->offset); ld(shpnt)[ldn].sge[i].byte_length = sg->length; } scb->enable |= IM_POINTER_TO_LIST; diff --git a/drivers/scsi/ide-scsi.c b/drivers/scsi/ide-scsi.c index 252d1806467..8d0244c2e7d 100644 --- a/drivers/scsi/ide-scsi.c +++ b/drivers/scsi/ide-scsi.c @@ -175,18 +175,18 @@ static 
void idescsi_input_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsigne while (bcount) { count = min(pc->sg->length - pc->b_count, bcount); - if (PageHighMem(pc->sg->page)) { + if (PageHighMem(sg_page(pc->sg))) { unsigned long flags; local_irq_save(flags); - buf = kmap_atomic(pc->sg->page, KM_IRQ0) + + buf = kmap_atomic(sg_page(pc->sg), KM_IRQ0) + pc->sg->offset; drive->hwif->atapi_input_bytes(drive, buf + pc->b_count, count); kunmap_atomic(buf - pc->sg->offset, KM_IRQ0); local_irq_restore(flags); } else { - buf = page_address(pc->sg->page) + pc->sg->offset; + buf = sg_virt(pc->sg); drive->hwif->atapi_input_bytes(drive, buf + pc->b_count, count); } @@ -212,18 +212,18 @@ static void idescsi_output_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsign while (bcount) { count = min(pc->sg->length - pc->b_count, bcount); - if (PageHighMem(pc->sg->page)) { + if (PageHighMem(sg_page(pc->sg))) { unsigned long flags; local_irq_save(flags); - buf = kmap_atomic(pc->sg->page, KM_IRQ0) + + buf = kmap_atomic(sg_page(pc->sg), KM_IRQ0) + pc->sg->offset; drive->hwif->atapi_output_bytes(drive, buf + pc->b_count, count); kunmap_atomic(buf - pc->sg->offset, KM_IRQ0); local_irq_restore(flags); } else { - buf = page_address(pc->sg->page) + pc->sg->offset; + buf = sg_virt(pc->sg); drive->hwif->atapi_output_bytes(drive, buf + pc->b_count, count); } diff --git a/drivers/scsi/imm.c b/drivers/scsi/imm.c index 74cdc1f0a78..a3d0c6b1495 100644 --- a/drivers/scsi/imm.c +++ b/drivers/scsi/imm.c @@ -705,9 +705,7 @@ static int imm_completion(struct scsi_cmnd *cmd) cmd->SCp.buffer++; cmd->SCp.this_residual = cmd->SCp.buffer->length; - cmd->SCp.ptr = - page_address(cmd->SCp.buffer->page) + - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); /* * Make sure that we transfer even number of bytes @@ -844,9 +842,7 @@ static int imm_engine(imm_struct *dev, struct scsi_cmnd *cmd) cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer; cmd->SCp.this_residual = cmd->SCp.buffer->length; - cmd->SCp.ptr = - page_address(cmd->SCp.buffer->page) + - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); } else { /* else fill the only available buffer */ cmd->SCp.buffer = NULL; diff --git a/drivers/scsi/in2000.c b/drivers/scsi/in2000.c index ab7cbf3449c..c8b452f2878 100644 --- a/drivers/scsi/in2000.c +++ b/drivers/scsi/in2000.c @@ -372,7 +372,7 @@ static int in2000_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) if (cmd->use_sg) { cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer; cmd->SCp.buffers_residual = cmd->use_sg - 1; - cmd->SCp.ptr = (char *) page_address(cmd->SCp.buffer->page) + cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); cmd->SCp.this_residual = cmd->SCp.buffer->length; } else { cmd->SCp.buffer = NULL; @@ -764,7 +764,7 @@ static void transfer_bytes(Scsi_Cmnd * cmd, int data_in_dir) ++cmd->SCp.buffer; --cmd->SCp.buffers_residual; cmd->SCp.this_residual = cmd->SCp.buffer->length; - cmd->SCp.ptr = page_address(cmd->SCp.buffer->page) + cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); } /* Set up hardware registers */ diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c index c316a0bcae6..439b97a6a26 100644 --- a/drivers/scsi/ipr.c +++ b/drivers/scsi/ipr.c @@ -2872,6 +2872,7 @@ static struct ipr_sglist *ipr_alloc_ucode_buffer(int buf_len) } scatterlist = sglist->scatterlist; + sg_init_table(scatterlist, num_elem); sglist->order = order; sglist->num_sg = num_elem; @@ -2884,12 +2885,12 @@ static struct ipr_sglist 
*ipr_alloc_ucode_buffer(int buf_len) /* Free up what we already allocated */ for (j = i - 1; j >= 0; j--) - __free_pages(scatterlist[j].page, order); + __free_pages(sg_page(&scatterlist[j]), order); kfree(sglist); return NULL; } - scatterlist[i].page = page; + sg_set_page(&scatterlist[i], page); } return sglist; @@ -2910,7 +2911,7 @@ static void ipr_free_ucode_buffer(struct ipr_sglist *sglist) int i; for (i = 0; i < sglist->num_sg; i++) - __free_pages(sglist->scatterlist[i].page, sglist->order); + __free_pages(sg_page(&sglist->scatterlist[i]), sglist->order); kfree(sglist); } @@ -2940,9 +2941,11 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, scatterlist = sglist->scatterlist; for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) { - kaddr = kmap(scatterlist[i].page); + struct page *page = sg_page(&scatterlist[i]); + + kaddr = kmap(page); memcpy(kaddr, buffer, bsize_elem); - kunmap(scatterlist[i].page); + kunmap(page); scatterlist[i].length = bsize_elem; @@ -2953,9 +2956,11 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, } if (len % bsize_elem) { - kaddr = kmap(scatterlist[i].page); + struct page *page = sg_page(&scatterlist[i]); + + kaddr = kmap(page); memcpy(kaddr, buffer, len % bsize_elem); - kunmap(scatterlist[i].page); + kunmap(page); scatterlist[i].length = len % bsize_elem; } diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c index edaac2714c5..5c5a9b2628f 100644 --- a/drivers/scsi/ips.c +++ b/drivers/scsi/ips.c @@ -1515,7 +1515,7 @@ static int ips_is_passthru(struct scsi_cmnd *SC) /* kmap_atomic() ensures addressability of the user buffer.*/ /* local_irq_save() protects the KM_IRQ0 address slot. */ local_irq_save(flags); - buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + buffer = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; if (buffer && buffer[0] == 'C' && buffer[1] == 'O' && buffer[2] == 'P' && buffer[3] == 'P') { kunmap_atomic(buffer - sg->offset, KM_IRQ0); @@ -3523,7 +3523,7 @@ ips_scmd_buf_write(struct scsi_cmnd *scmd, void *data, unsigned int count) /* kmap_atomic() ensures addressability of the data buffer.*/ /* local_irq_save() protects the KM_IRQ0 address slot. */ local_irq_save(flags); - buffer = kmap_atomic(sg[i].page, KM_IRQ0) + sg[i].offset; + buffer = kmap_atomic(sg_page(&sg[i]), KM_IRQ0) + sg[i].offset; memcpy(buffer, &cdata[xfer_cnt], min_cnt); kunmap_atomic(buffer - sg[i].offset, KM_IRQ0); local_irq_restore(flags); @@ -3556,7 +3556,7 @@ ips_scmd_buf_read(struct scsi_cmnd *scmd, void *data, unsigned int count) /* kmap_atomic() ensures addressability of the data buffer.*/ /* local_irq_save() protects the KM_IRQ0 address slot. 
*/ local_irq_save(flags); - buffer = kmap_atomic(sg[i].page, KM_IRQ0) + sg[i].offset; + buffer = kmap_atomic(sg_page(&sg[i]), KM_IRQ0) + sg[i].offset; memcpy(&cdata[xfer_cnt], buffer, min_cnt); kunmap_atomic(buffer - sg[i].offset, KM_IRQ0); local_irq_restore(flags); diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c index a21455d0274..6ce4109efdf 100644 --- a/drivers/scsi/iscsi_tcp.c +++ b/drivers/scsi/iscsi_tcp.c @@ -70,9 +70,7 @@ module_param_named(max_lun, iscsi_max_lun, uint, S_IRUGO); static inline void iscsi_buf_init_iov(struct iscsi_buf *ibuf, char *vbuf, int size) { - ibuf->sg.page = virt_to_page(vbuf); - ibuf->sg.offset = offset_in_page(vbuf); - ibuf->sg.length = size; + sg_init_one(&ibuf->sg, vbuf, size); ibuf->sent = 0; ibuf->use_sendmsg = 1; } @@ -80,13 +78,14 @@ iscsi_buf_init_iov(struct iscsi_buf *ibuf, char *vbuf, int size) static inline void iscsi_buf_init_sg(struct iscsi_buf *ibuf, struct scatterlist *sg) { - ibuf->sg.page = sg->page; + sg_init_table(&ibuf->sg, 1); + sg_set_page(&ibuf->sg, sg_page(sg)); ibuf->sg.offset = sg->offset; ibuf->sg.length = sg->length; /* * Fastpath: sg element fits into single page */ - if (sg->length + sg->offset <= PAGE_SIZE && !PageSlab(sg->page)) + if (sg->length + sg->offset <= PAGE_SIZE && !PageSlab(sg_page(sg))) ibuf->use_sendmsg = 0; else ibuf->use_sendmsg = 1; @@ -716,7 +715,7 @@ static int iscsi_scsi_data_in(struct iscsi_conn *conn) for (i = tcp_ctask->sg_count; i < scsi_sg_count(sc); i++) { char *dest; - dest = kmap_atomic(sg[i].page, KM_SOFTIRQ0); + dest = kmap_atomic(sg_page(&sg[i]), KM_SOFTIRQ0); rc = iscsi_ctask_copy(tcp_conn, ctask, dest + sg[i].offset, sg[i].length, offset); kunmap_atomic(dest, KM_SOFTIRQ0); @@ -1103,9 +1102,9 @@ iscsi_send(struct iscsi_conn *conn, struct iscsi_buf *buf, int size, int flags) * slab case. 
*/ if (buf->use_sendmsg) - res = sock_no_sendpage(sk, buf->sg.page, offset, size, flags); + res = sock_no_sendpage(sk, sg_page(&buf->sg), offset, size, flags); else - res = tcp_conn->sendpage(sk, buf->sg.page, offset, size, flags); + res = tcp_conn->sendpage(sk, sg_page(&buf->sg), offset, size, flags); if (res >= 0) { conn->txdata_octets += res; diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c index 10d1aff9938..66c65203573 100644 --- a/drivers/scsi/megaraid.c +++ b/drivers/scsi/megaraid.c @@ -658,7 +658,7 @@ mega_build_cmd(adapter_t *adapter, Scsi_Cmnd *cmd, int *busy) struct scatterlist *sg; sg = scsi_sglist(cmd); - buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; + buf = kmap_atomic(sg_page(sg), KM_IRQ0) + sg->offset; memset(buf, 0, cmd->cmnd[4]); kunmap_atomic(buf - sg->offset, KM_IRQ0); @@ -1542,10 +1542,8 @@ mega_cmd_done(adapter_t *adapter, u8 completed[], int nstatus, int status) if( cmd->cmnd[0] == INQUIRY && !islogical ) { sgl = scsi_sglist(cmd); - if( sgl->page ) { - c = *(unsigned char *) - page_address((&sgl[0])->page) + - (&sgl[0])->offset; + if( sg_page(sgl) ) { + c = *(unsigned char *) sg_virt(&sgl[0]); } else { printk(KERN_WARNING "megaraid: invalid sg.\n"); diff --git a/drivers/scsi/megaraid/megaraid_mbox.c b/drivers/scsi/megaraid/megaraid_mbox.c index 78779209ac8..c8923108183 100644 --- a/drivers/scsi/megaraid/megaraid_mbox.c +++ b/drivers/scsi/megaraid/megaraid_mbox.c @@ -1584,10 +1584,8 @@ megaraid_mbox_build_cmd(adapter_t *adapter, struct scsi_cmnd *scp, int *busy) caddr_t vaddr; sgl = scsi_sglist(scp); - if (sgl->page) { - vaddr = (caddr_t) - (page_address((&sgl[0])->page) - + (&sgl[0])->offset); + if (sg_page(sgl)) { + vaddr = (caddr_t) sg_virt(&sgl[0]); memset(vaddr, 0, scp->cmnd[4]); } @@ -2328,10 +2326,8 @@ megaraid_mbox_dpc(unsigned long devp) && IS_RAID_CH(raid_dev, scb->dev_channel)) { sgl = scsi_sglist(scp); - if (sgl->page) { - c = *(unsigned char *) - (page_address((&sgl[0])->page) + - (&sgl[0])->offset); + if (sg_page(sgl)) { + c = *(unsigned char *) sg_virt(&sgl[0]); } else { con_log(CL_ANN, (KERN_WARNING "megaraid mailbox: invalid sg:%d\n", diff --git a/drivers/scsi/oktagon_esp.c b/drivers/scsi/oktagon_esp.c index 26a6d55faf3..8e5eadbd5c5 100644 --- a/drivers/scsi/oktagon_esp.c +++ b/drivers/scsi/oktagon_esp.c @@ -550,8 +550,7 @@ void dma_mmu_get_scsi_one(struct NCR_ESP *esp, Scsi_Cmnd *sp) void dma_mmu_get_scsi_sgl(struct NCR_ESP *esp, Scsi_Cmnd *sp) { - sp->SCp.ptr = page_address(sp->SCp.buffer->page)+ - sp->SCp.buffer->offset; + sp->SCp.ptr = sg_virt(sp->SCp.buffer); } void dma_mmu_release_scsi_one(struct NCR_ESP *esp, Scsi_Cmnd *sp) @@ -564,8 +563,7 @@ void dma_mmu_release_scsi_sgl(struct NCR_ESP *esp, Scsi_Cmnd *sp) void dma_advance_sg(Scsi_Cmnd *sp) { - sp->SCp.ptr = page_address(sp->SCp.buffer->page)+ - sp->SCp.buffer->offset; + sp->SCp.ptr = sg_virt(sp->SCp.buffer); } diff --git a/drivers/scsi/osst.c b/drivers/scsi/osst.c index 331b789937c..1c5c4b68f20 100644 --- a/drivers/scsi/osst.c +++ b/drivers/scsi/osst.c @@ -542,7 +542,7 @@ static int osst_verify_frame(struct osst_tape * STp, int frame_seq_number, int q if (STp->raw) { if (STp->buffer->syscall_result) { for (i=0; i < STp->buffer->sg_segs; i++) - memset(page_address(STp->buffer->sg[i].page), + memset(page_address(sg_page(&STp->buffer->sg[i])), 0, STp->buffer->sg[i].length); strcpy(STp->buffer->b_data, "READ ERROR ON FRAME"); } else @@ -4437,7 +4437,7 @@ static int os_scsi_tape_open(struct inode * inode, struct file * filp) for (i = 0, b_size = 0; (i < STp->buffer->sg_segs) && 
((b_size + STp->buffer->sg[i].length) <= OS_DATA_SIZE); b_size += STp->buffer->sg[i++].length); - STp->buffer->aux = (os_aux_t *) (page_address(STp->buffer->sg[i].page) + OS_DATA_SIZE - b_size); + STp->buffer->aux = (os_aux_t *) (page_address(sg_page(&STp->buffer->sg[i])) + OS_DATA_SIZE - b_size); #if DEBUG printk(OSST_DEB_MSG "%s:D: b_data points to %p in segment 0 at %p\n", name, STp->buffer->b_data, page_address(STp->buffer->sg[0].page)); @@ -5252,25 +5252,26 @@ static int enlarge_buffer(struct osst_buffer *STbuffer, int need_dma) /* Try to allocate the first segment up to OS_DATA_SIZE and the others big enough to reach the goal (code assumes no segments in place) */ for (b_size = OS_DATA_SIZE, order = OSST_FIRST_ORDER; b_size >= PAGE_SIZE; order--, b_size /= 2) { - STbuffer->sg[0].page = alloc_pages(priority, order); + struct page *page = alloc_pages(priority, order); + STbuffer->sg[0].offset = 0; - if (STbuffer->sg[0].page != NULL) { + if (page != NULL) { + sg_set_page(&STbuffer->sg[0], page); STbuffer->sg[0].length = b_size; - STbuffer->b_data = page_address(STbuffer->sg[0].page); + STbuffer->b_data = page_address(page); break; } } - if (STbuffer->sg[0].page == NULL) { + if (sg_page(&STbuffer->sg[0]) == NULL) { printk(KERN_NOTICE "osst :I: Can't allocate tape buffer main segment.\n"); return 0; } /* Got initial segment of 'bsize,order', continue with same size if possible, except for AUX */ for (segs=STbuffer->sg_segs=1, got=b_size; segs < max_segs && got < OS_FRAME_SIZE; ) { - STbuffer->sg[segs].page = - alloc_pages(priority, (OS_FRAME_SIZE - got <= PAGE_SIZE) ? 0 : order); + struct page *page = alloc_pages(priority, (OS_FRAME_SIZE - got <= PAGE_SIZE) ? 0 : order); STbuffer->sg[segs].offset = 0; - if (STbuffer->sg[segs].page == NULL) { + if (page == NULL) { if (OS_FRAME_SIZE - got <= (max_segs - segs) * b_size / 2 && order) { b_size /= 2; /* Large enough for the rest of the buffers */ order--; @@ -5284,6 +5285,7 @@ static int enlarge_buffer(struct osst_buffer *STbuffer, int need_dma) normalize_buffer(STbuffer); return 0; } + sg_set_page(&STbuffer->sg[segs], page); STbuffer->sg[segs].length = (OS_FRAME_SIZE - got <= PAGE_SIZE / 2) ? (OS_FRAME_SIZE - got) : b_size; got += STbuffer->sg[segs].length; STbuffer->buffer_size = got; @@ -5316,7 +5318,7 @@ static void normalize_buffer(struct osst_buffer *STbuffer) b_size < STbuffer->sg[i].length; b_size *= 2, order++); - __free_pages(STbuffer->sg[i].page, order); + __free_pages(sg_page(&STbuffer->sg[i]), order); STbuffer->buffer_size -= STbuffer->sg[i].length; } #if DEBUG @@ -5344,7 +5346,7 @@ static int append_to_buffer(const char __user *ubp, struct osst_buffer *st_bp, i for ( ; i < st_bp->sg_segs && do_count > 0; i++) { cnt = st_bp->sg[i].length - offset < do_count ? st_bp->sg[i].length - offset : do_count; - res = copy_from_user(page_address(st_bp->sg[i].page) + offset, ubp, cnt); + res = copy_from_user(page_address(sg_page(&st_bp->sg[i])) + offset, ubp, cnt); if (res) return (-EFAULT); do_count -= cnt; @@ -5377,7 +5379,7 @@ static int from_buffer(struct osst_buffer *st_bp, char __user *ubp, int do_count for ( ; i < st_bp->sg_segs && do_count > 0; i++) { cnt = st_bp->sg[i].length - offset < do_count ? 
st_bp->sg[i].length - offset : do_count; - res = copy_to_user(ubp, page_address(st_bp->sg[i].page) + offset, cnt); + res = copy_to_user(ubp, page_address(sg_page(&st_bp->sg[i])) + offset, cnt); if (res) return (-EFAULT); do_count -= cnt; @@ -5410,7 +5412,7 @@ static int osst_zero_buffer_tail(struct osst_buffer *st_bp) i < st_bp->sg_segs && do_count > 0; i++) { cnt = st_bp->sg[i].length - offset < do_count ? st_bp->sg[i].length - offset : do_count ; - memset(page_address(st_bp->sg[i].page) + offset, 0, cnt); + memset(page_address(sg_page(&st_bp->sg[i])) + offset, 0, cnt); do_count -= cnt; offset = 0; } @@ -5430,7 +5432,7 @@ static int osst_copy_to_buffer(struct osst_buffer *st_bp, unsigned char *ptr) for (i = 0; i < st_bp->sg_segs && do_count > 0; i++) { cnt = st_bp->sg[i].length < do_count ? st_bp->sg[i].length : do_count ; - memcpy(page_address(st_bp->sg[i].page), ptr, cnt); + memcpy(page_address(sg_page(&st_bp->sg[i])), ptr, cnt); do_count -= cnt; ptr += cnt; } @@ -5451,7 +5453,7 @@ static int osst_copy_from_buffer(struct osst_buffer *st_bp, unsigned char *ptr) for (i = 0; i < st_bp->sg_segs && do_count > 0; i++) { cnt = st_bp->sg[i].length < do_count ? st_bp->sg[i].length : do_count ; - memcpy(ptr, page_address(st_bp->sg[i].page), cnt); + memcpy(ptr, page_address(sg_page(&st_bp->sg[i])), cnt); do_count -= cnt; ptr += cnt; } diff --git a/drivers/scsi/pcmcia/nsp_cs.h b/drivers/scsi/pcmcia/nsp_cs.h index 98397559c53..7db28cd4944 100644 --- a/drivers/scsi/pcmcia/nsp_cs.h +++ b/drivers/scsi/pcmcia/nsp_cs.h @@ -393,7 +393,7 @@ enum _burst_mode { #define MSG_EXT_SDTR 0x01 /* scatter-gather table */ -# define BUFFER_ADDR ((char *)((unsigned int)(SCpnt->SCp.buffer->page) + SCpnt->SCp.buffer->offset)) +# define BUFFER_ADDR ((char *)((sg_virt(SCpnt->SCp.buffer)))) #endif /*__nsp_cs__*/ /* end */ diff --git a/drivers/scsi/pcmcia/sym53c500_cs.c b/drivers/scsi/pcmcia/sym53c500_cs.c index 190e2a7d706..969b9387a0c 100644 --- a/drivers/scsi/pcmcia/sym53c500_cs.c +++ b/drivers/scsi/pcmcia/sym53c500_cs.c @@ -443,8 +443,7 @@ SYM53C500_intr(int irq, void *dev_id) scsi_for_each_sg(curSC, sg, scsi_sg_count(curSC), i) { SYM53C500_pio_write(fast_pio, port_base, - page_address(sg->page) + sg->offset, - sg->length); + sg_virt(sg), sg->length); } REG0(port_base); } @@ -463,8 +462,7 @@ SYM53C500_intr(int irq, void *dev_id) scsi_for_each_sg(curSC, sg, scsi_sg_count(curSC), i) { SYM53C500_pio_read(fast_pio, port_base, - page_address(sg->page) + sg->offset, - sg->length); + sg_virt(sg), sg->length); } REG0(port_base); } diff --git a/drivers/scsi/ppa.c b/drivers/scsi/ppa.c index 67b6d76a6c8..67ee51a3d7e 100644 --- a/drivers/scsi/ppa.c +++ b/drivers/scsi/ppa.c @@ -608,9 +608,7 @@ static int ppa_completion(struct scsi_cmnd *cmd) cmd->SCp.buffer++; cmd->SCp.this_residual = cmd->SCp.buffer->length; - cmd->SCp.ptr = - page_address(cmd->SCp.buffer->page) + - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); } } /* Now check to see if the drive is ready to comunicate */ @@ -756,8 +754,7 @@ static int ppa_engine(ppa_struct *dev, struct scsi_cmnd *cmd) /* if many buffers are available, start filling the first */ cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer; cmd->SCp.this_residual = cmd->SCp.buffer->length; - cmd->SCp.ptr = page_address(cmd->SCp.buffer->page) + - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); } else { /* else fill the only available buffer */ cmd->SCp.buffer = NULL; diff --git a/drivers/scsi/ps3rom.c b/drivers/scsi/ps3rom.c index 0f43d1d046d..17b4a7c4618 
100644 --- a/drivers/scsi/ps3rom.c +++ b/drivers/scsi/ps3rom.c @@ -111,14 +111,14 @@ static int fill_from_dev_buffer(struct scsi_cmnd *cmd, const void *buf) req_len = act_len = 0; scsi_for_each_sg(cmd, sgpnt, scsi_sg_count(cmd), k) { if (active) { - kaddr = kmap_atomic(sgpnt->page, KM_IRQ0); + kaddr = kmap_atomic(sg_page(sgpnt), KM_IRQ0); len = sgpnt->length; if ((req_len + len) > buflen) { active = 0; len = buflen - req_len; } memcpy(kaddr + sgpnt->offset, buf + req_len, len); - flush_kernel_dcache_page(sgpnt->page); + flush_kernel_dcache_page(sg_page(sgpnt)); kunmap_atomic(kaddr, KM_IRQ0); act_len += len; } @@ -147,7 +147,7 @@ static int fetch_to_dev_buffer(struct scsi_cmnd *cmd, void *buf) req_len = fin = 0; scsi_for_each_sg(cmd, sgpnt, scsi_sg_count(cmd), k) { - kaddr = kmap_atomic(sgpnt->page, KM_IRQ0); + kaddr = kmap_atomic(sg_page(sgpnt), KM_IRQ0); len = sgpnt->length; if ((req_len + len) > buflen) { len = buflen - req_len; diff --git a/drivers/scsi/qlogicfas408.c b/drivers/scsi/qlogicfas408.c index 2bfbf26c00e..de7b3bc2cbc 100644 --- a/drivers/scsi/qlogicfas408.c +++ b/drivers/scsi/qlogicfas408.c @@ -317,7 +317,7 @@ static unsigned int ql_pcmd(struct scsi_cmnd *cmd) return ((priv->qabort == 1 ? DID_ABORT : DID_RESET) << 16); } - buf = page_address(sg->page) + sg->offset; + buf = sg_virt(sg); if (ql_pdma(priv, phase, buf, sg->length)) break; } diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 72ee4c9cfb1..46cae5a212d 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -625,7 +625,7 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr, scsi_for_each_sg(scp, sg, scp->use_sg, k) { if (active) { kaddr = (unsigned char *) - kmap_atomic(sg->page, KM_USER0); + kmap_atomic(sg_page(sg), KM_USER0); if (NULL == kaddr) return (DID_ERROR << 16); kaddr_off = (unsigned char *)kaddr + sg->offset; @@ -672,7 +672,7 @@ static int fetch_to_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr, sg = scsi_sglist(scp); req_len = fin = 0; for (k = 0; k < scp->use_sg; ++k, sg = sg_next(sg)) { - kaddr = (unsigned char *)kmap_atomic(sg->page, KM_USER0); + kaddr = (unsigned char *)kmap_atomic(sg_page(sg), KM_USER0); if (NULL == kaddr) return -1; kaddr_off = (unsigned char *)kaddr + sg->offset; diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index aac8a02cbe8..61fdaf02f25 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -295,7 +295,7 @@ static int scsi_req_map_sg(struct request *rq, struct scatterlist *sgl, int i, err, nr_vecs = 0; for_each_sg(sgl, sg, nsegs, i) { - page = sg->page; + page = sg_page(sg); off = sg->offset; len = sg->length; data_len += len; @@ -764,7 +764,7 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) if (unlikely(!sgl)) goto enomem; - memset(sgl, 0, sizeof(*sgl) * sgp->size); + sg_init_table(sgl, sgp->size); /* * first loop through, set initial index and return value @@ -781,6 +781,13 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl); /* + * if we have nothing left, mark the last segment as + * end-of-list + */ + if (!left) + sg_mark_end(sgl, this); + + /* * don't allow subsequent mempool allocs to sleep, it would * violate the mempool principle. 
*/ @@ -2353,7 +2360,7 @@ void *scsi_kmap_atomic_sg(struct scatterlist *sgl, int sg_count, *offset = *offset - len_complete + sg->offset; /* Assumption: contiguous pages can be accessed as "page + i" */ - page = nth_page(sg->page, (*offset >> PAGE_SHIFT)); + page = nth_page(sg_page(sg), (*offset >> PAGE_SHIFT)); *offset &= ~PAGE_MASK; /* Bytes in this sg-entry from *offset to the end of the page */ diff --git a/drivers/scsi/seagate.c b/drivers/scsi/seagate.c index ce80fa9ad81..b11324479b5 100644 --- a/drivers/scsi/seagate.c +++ b/drivers/scsi/seagate.c @@ -999,14 +999,14 @@ connect_loop: for (i = 0; i < nobuffs; ++i) printk("scsi%d : buffer %d address = %p length = %d\n", hostno, i, - page_address(buffer[i].page) + buffer[i].offset, + sg_virt(&buffer[i]), buffer[i].length); } #endif buffer = (struct scatterlist *) SCint->request_buffer; len = buffer->length; - data = page_address(buffer->page) + buffer->offset; + data = sg_virt(buffer); } else { DPRINTK (DEBUG_SG, "scsi%d : scatter gather not requested.\n", hostno); buffer = NULL; @@ -1239,7 +1239,7 @@ connect_loop: --nobuffs; ++buffer; len = buffer->length; - data = page_address(buffer->page) + buffer->offset; + data = sg_virt(buffer); DPRINTK (DEBUG_SG, "scsi%d : next scatter-gather buffer len = %d address = %08x\n", hostno, len, data); @@ -1396,7 +1396,7 @@ connect_loop: --nobuffs; ++buffer; len = buffer->length; - data = page_address(buffer->page) + buffer->offset; + data = sg_virt(buffer); DPRINTK (DEBUG_SG, "scsi%d : next scatter-gather buffer len = %d address = %08x\n", hostno, len, data); } break; diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c index 7238b2dfc49..cc197100284 100644 --- a/drivers/scsi/sg.c +++ b/drivers/scsi/sg.c @@ -1169,7 +1169,7 @@ sg_vma_nopage(struct vm_area_struct *vma, unsigned long addr, int *type) len = vma->vm_end - sa; len = (len < sg->length) ? len : sg->length; if (offset < len) { - page = virt_to_page(page_address(sg->page) + offset); + page = virt_to_page(page_address(sg_page(sg)) + offset); get_page(page); /* increment page count */ break; } @@ -1717,13 +1717,13 @@ st_map_user_pages(struct scatterlist *sgl, const unsigned int max_pages, goto out_unlock; */ } - sgl[0].page = pages[0]; + sg_set_page(sgl, pages[0]); sgl[0].offset = uaddr & ~PAGE_MASK; if (nr_pages > 1) { sgl[0].length = PAGE_SIZE - sgl[0].offset; count -= sgl[0].length; for (i=1; i < nr_pages ; i++) { - sgl[i].page = pages[i]; + sg_set_page(&sgl[i], pages[i]); sgl[i].length = count < PAGE_SIZE ? count : PAGE_SIZE; count -= PAGE_SIZE; } @@ -1754,7 +1754,7 @@ st_unmap_user_pages(struct scatterlist *sgl, const unsigned int nr_pages, int i; for (i=0; i < nr_pages; i++) { - struct page *page = sgl[i].page; + struct page *page = sg_page(&sgl[i]); if (dirtied) SetPageDirty(page); @@ -1854,7 +1854,7 @@ sg_build_indirect(Sg_scatter_hold * schp, Sg_fd * sfp, int buff_size) scatter_elem_sz_prev = ret_sz; } } - sg->page = p; + sg_set_page(sg, p); sg->length = (ret_sz > num) ? 
num : ret_sz; SCSI_LOG_TIMEOUT(5, printk("sg_build_indirect: k=%d, num=%d, " @@ -1907,14 +1907,14 @@ sg_write_xfer(Sg_request * srp) onum = 1; ksglen = sg->length; - p = page_address(sg->page); + p = page_address(sg_page(sg)); for (j = 0, k = 0; j < onum; ++j) { res = sg_u_iovec(hp, iovec_count, j, 1, &usglen, &up); if (res) return res; for (; p; sg = sg_next(sg), ksglen = sg->length, - p = page_address(sg->page)) { + p = page_address(sg_page(sg))) { if (usglen <= 0) break; if (ksglen > usglen) { @@ -1991,12 +1991,12 @@ sg_remove_scat(Sg_scatter_hold * schp) } else { int k; - for (k = 0; (k < schp->k_use_sg) && sg->page; + for (k = 0; (k < schp->k_use_sg) && sg_page(sg); ++k, sg = sg_next(sg)) { SCSI_LOG_TIMEOUT(5, printk( "sg_remove_scat: k=%d, pg=0x%p, len=%d\n", - k, sg->page, sg->length)); - sg_page_free(sg->page, sg->length); + k, sg_page(sg), sg->length)); + sg_page_free(sg_page(sg), sg->length); } } kfree(schp->buffer); @@ -2038,7 +2038,7 @@ sg_read_xfer(Sg_request * srp) } else onum = 1; - p = page_address(sg->page); + p = page_address(sg_page(sg)); ksglen = sg->length; for (j = 0, k = 0; j < onum; ++j) { res = sg_u_iovec(hp, iovec_count, j, 0, &usglen, &up); @@ -2046,7 +2046,7 @@ sg_read_xfer(Sg_request * srp) return res; for (; p; sg = sg_next(sg), ksglen = sg->length, - p = page_address(sg->page)) { + p = page_address(sg_page(sg))) { if (usglen <= 0) break; if (ksglen > usglen) { @@ -2092,15 +2092,15 @@ sg_read_oxfer(Sg_request * srp, char __user *outp, int num_read_xfer) if ((!outp) || (num_read_xfer <= 0)) return 0; - for (k = 0; (k < schp->k_use_sg) && sg->page; ++k, sg = sg_next(sg)) { + for (k = 0; (k < schp->k_use_sg) && sg_page(sg); ++k, sg = sg_next(sg)) { num = sg->length; if (num > num_read_xfer) { - if (__copy_to_user(outp, page_address(sg->page), + if (__copy_to_user(outp, page_address(sg_page(sg)), num_read_xfer)) return -EFAULT; break; } else { - if (__copy_to_user(outp, page_address(sg->page), + if (__copy_to_user(outp, page_address(sg_page(sg)), num)) return -EFAULT; num_read_xfer -= num; diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c index 73c44cbdea4..ce69b9efc10 100644 --- a/drivers/scsi/st.c +++ b/drivers/scsi/st.c @@ -3797,7 +3797,7 @@ static void buf_to_sg(struct st_buffer *STbp, unsigned int length) sg = &(STbp->sg[0]); frp = STbp->frp; for (i=count=0; count < length; i++) { - sg[i].page = frp[i].page; + sg_set_page(&sg[i], frp[i].page); if (length - count > frp[i].length) sg[i].length = frp[i].length; else @@ -4446,14 +4446,14 @@ static int sgl_map_user_pages(struct scatterlist *sgl, const unsigned int max_pa } /* Populate the scatter/gather list */ - sgl[0].page = pages[0]; + sg_set_page(&sgl[0], pages[0]); sgl[0].offset = uaddr & ~PAGE_MASK; if (nr_pages > 1) { sgl[0].length = PAGE_SIZE - sgl[0].offset; count -= sgl[0].length; for (i=1; i < nr_pages ; i++) { + sg_set_page(&sgl[i], pages[i]);; sgl[i].offset = 0; - sgl[i].page = pages[i]; sgl[i].length = count < PAGE_SIZE ? 
count : PAGE_SIZE; count -= PAGE_SIZE; } @@ -4483,7 +4483,7 @@ static int sgl_unmap_user_pages(struct scatterlist *sgl, const unsigned int nr_p int i; for (i=0; i < nr_pages; i++) { - struct page *page = sgl[i].page; + struct page *page = sg_page(&sgl[i]); if (dirtied) SetPageDirty(page); diff --git a/drivers/scsi/sun3_NCR5380.c b/drivers/scsi/sun3_NCR5380.c index 4aafe89b557..2dcde373b20 100644 --- a/drivers/scsi/sun3_NCR5380.c +++ b/drivers/scsi/sun3_NCR5380.c @@ -272,8 +272,7 @@ static struct scsi_host_template *the_template = NULL; #define HOSTNO instance->host_no #define H_NO(cmd) (cmd)->device->host->host_no -#define SGADDR(buffer) (void *)(((unsigned long)page_address((buffer)->page)) + \ - (buffer)->offset) +#define SGADDR(buffer) (void *)(((unsigned long)sg_virt(((buffer))))) #ifdef SUPPORT_TAGS diff --git a/drivers/scsi/sym53c416.c b/drivers/scsi/sym53c416.c index 8befab7e983..90cee94d952 100644 --- a/drivers/scsi/sym53c416.c +++ b/drivers/scsi/sym53c416.c @@ -196,7 +196,7 @@ static unsigned int sym53c416_base_3[2] = {0,0}; #define MAXHOSTS 4 -#define SG_ADDRESS(buffer) ((char *) (page_address((buffer)->page)+(buffer)->offset)) +#define SG_ADDRESS(buffer) ((char *) sg_virt((buffer))) enum phases { diff --git a/drivers/scsi/tmscsim.c b/drivers/scsi/tmscsim.c index 5c72ca31a47..44193049c4a 100644 --- a/drivers/scsi/tmscsim.c +++ b/drivers/scsi/tmscsim.c @@ -430,10 +430,7 @@ static __inline__ void dc390_Going_remove (struct dc390_dcb* pDCB, struct dc390_ static struct scatterlist* dc390_sg_build_single(struct scatterlist *sg, void *addr, unsigned int length) { - memset(sg, 0, sizeof(struct scatterlist)); - sg->page = virt_to_page(addr); - sg->length = length; - sg->offset = (unsigned long)addr & ~PAGE_MASK; + sg_init_one(sg, addr, length); return sg; } diff --git a/drivers/scsi/ultrastor.c b/drivers/scsi/ultrastor.c index ea72bbeb8f9..6d1f0edd798 100644 --- a/drivers/scsi/ultrastor.c +++ b/drivers/scsi/ultrastor.c @@ -681,7 +681,7 @@ static inline void build_sg_list(struct mscp *mscp, struct scsi_cmnd *SCpnt) max = scsi_sg_count(SCpnt); scsi_for_each_sg(SCpnt, sg, max, i) { - mscp->sglist[i].address = isa_page_to_bus(sg->page) + sg->offset; + mscp->sglist[i].address = isa_page_to_bus(sg_page(sg)) + sg->offset; mscp->sglist[i].num_bytes = sg->length; transfer_length += sg->length; } diff --git a/drivers/scsi/wd33c93.c b/drivers/scsi/wd33c93.c index 0e8e642fd3b..fdbb92d1f72 100644 --- a/drivers/scsi/wd33c93.c +++ b/drivers/scsi/wd33c93.c @@ -410,8 +410,7 @@ wd33c93_queuecommand(struct scsi_cmnd *cmd, if (cmd->use_sg) { cmd->SCp.buffer = (struct scatterlist *) cmd->request_buffer; cmd->SCp.buffers_residual = cmd->use_sg - 1; - cmd->SCp.ptr = page_address(cmd->SCp.buffer->page) + - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); cmd->SCp.this_residual = cmd->SCp.buffer->length; } else { cmd->SCp.buffer = NULL; @@ -745,8 +744,7 @@ transfer_bytes(const wd33c93_regs regs, struct scsi_cmnd *cmd, ++cmd->SCp.buffer; --cmd->SCp.buffers_residual; cmd->SCp.this_residual = cmd->SCp.buffer->length; - cmd->SCp.ptr = page_address(cmd->SCp.buffer->page) + - cmd->SCp.buffer->offset; + cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); } if (!cmd->SCp.this_residual) /* avoid bogus setups */ return; diff --git a/drivers/scsi/wd7000.c b/drivers/scsi/wd7000.c index 255c611e78b..03cd44f231d 100644 --- a/drivers/scsi/wd7000.c +++ b/drivers/scsi/wd7000.c @@ -1123,7 +1123,7 @@ static int wd7000_queuecommand(struct scsi_cmnd *SCpnt, any2scsi(scb->maxlen, nseg * sizeof(Sgb)); scsi_for_each_sg(SCpnt, 
sg, nseg, i) { - any2scsi(sgb[i].ptr, isa_page_to_bus(sg->page) + sg->offset); + any2scsi(sgb[i].ptr, isa_page_to_bus(sg_page(sg)) + sg->offset); any2scsi(sgb[i].len, sg->length); } } else { diff --git a/drivers/serial/mcf.c b/drivers/serial/mcf.c new file mode 100644 index 00000000000..a7d4360ea7d --- /dev/null +++ b/drivers/serial/mcf.c @@ -0,0 +1,653 @@ +/****************************************************************************/ + +/* + * mcf.c -- Freescale ColdFire UART driver + * + * (C) Copyright 2003-2007, Greg Ungerer <gerg@snapgear.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + */ + +/****************************************************************************/ + +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/interrupt.h> +#include <linux/module.h> +#include <linux/console.h> +#include <linux/tty.h> +#include <linux/tty_flip.h> +#include <linux/serial.h> +#include <linux/serial_core.h> +#include <linux/io.h> +#include <asm/coldfire.h> +#include <asm/mcfsim.h> +#include <asm/mcfuart.h> +#include <asm/nettel.h> + +/****************************************************************************/ + +/* + * Some boards implement the DTR/DCD lines using GPIO lines, most + * don't. Dummy out the access macros for those that don't. Those + * that do should define these macros somewhere in there board + * specific inlude files. + */ +#if !defined(mcf_getppdcd) +#define mcf_getppdcd(p) (1) +#endif +#if !defined(mcf_getppdtr) +#define mcf_getppdtr(p) (1) +#endif +#if !defined(mcf_setppdtr) +#define mcf_setppdtr(p, v) do { } while (0) +#endif + +/****************************************************************************/ + +/* + * Local per-uart structure. + */ +struct mcf_uart { + struct uart_port port; + unsigned int sigs; /* Local copy of line sigs */ + unsigned char imr; /* Local IMR mirror */ +}; + +/****************************************************************************/ + +static unsigned int mcf_tx_empty(struct uart_port *port) +{ + return (readb(port->membase + MCFUART_USR) & MCFUART_USR_TXEMPTY) ? + TIOCSER_TEMT : 0; +} + +/****************************************************************************/ + +static unsigned int mcf_get_mctrl(struct uart_port *port) +{ + struct mcf_uart *pp = (struct mcf_uart *) port; + unsigned long flags; + unsigned int sigs; + + spin_lock_irqsave(&port->lock, flags); + sigs = (readb(port->membase + MCFUART_UIPR) & MCFUART_UIPR_CTS) ? + 0 : TIOCM_CTS; + sigs |= (pp->sigs & TIOCM_RTS); + sigs |= (mcf_getppdcd(port->line) ? TIOCM_CD : 0); + sigs |= (mcf_getppdtr(port->line) ? 
TIOCM_DTR : 0); + spin_unlock_irqrestore(&port->lock, flags); + return sigs; +} + +/****************************************************************************/ + +static void mcf_set_mctrl(struct uart_port *port, unsigned int sigs) +{ + struct mcf_uart *pp = (struct mcf_uart *) port; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + pp->sigs = sigs; + mcf_setppdtr(port->line, (sigs & TIOCM_DTR)); + if (sigs & TIOCM_RTS) + writeb(MCFUART_UOP_RTS, port->membase + MCFUART_UOP1); + else + writeb(MCFUART_UOP_RTS, port->membase + MCFUART_UOP0); + spin_unlock_irqrestore(&port->lock, flags); +} + +/****************************************************************************/ + +static void mcf_start_tx(struct uart_port *port) +{ + struct mcf_uart *pp = (struct mcf_uart *) port; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + pp->imr |= MCFUART_UIR_TXREADY; + writeb(pp->imr, port->membase + MCFUART_UIMR); + spin_unlock_irqrestore(&port->lock, flags); +} + +/****************************************************************************/ + +static void mcf_stop_tx(struct uart_port *port) +{ + struct mcf_uart *pp = (struct mcf_uart *) port; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + pp->imr &= ~MCFUART_UIR_TXREADY; + writeb(pp->imr, port->membase + MCFUART_UIMR); + spin_unlock_irqrestore(&port->lock, flags); +} + +/****************************************************************************/ + +static void mcf_stop_rx(struct uart_port *port) +{ + struct mcf_uart *pp = (struct mcf_uart *) port; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + pp->imr &= ~MCFUART_UIR_RXREADY; + writeb(pp->imr, port->membase + MCFUART_UIMR); + spin_unlock_irqrestore(&port->lock, flags); +} + +/****************************************************************************/ + +static void mcf_break_ctl(struct uart_port *port, int break_state) +{ + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + if (break_state == -1) + writeb(MCFUART_UCR_CMDBREAKSTART, port->membase + MCFUART_UCR); + else + writeb(MCFUART_UCR_CMDBREAKSTOP, port->membase + MCFUART_UCR); + spin_unlock_irqrestore(&port->lock, flags); +} + +/****************************************************************************/ + +static void mcf_enable_ms(struct uart_port *port) +{ +} + +/****************************************************************************/ + +static int mcf_startup(struct uart_port *port) +{ + struct mcf_uart *pp = (struct mcf_uart *) port; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + + /* Reset UART, get it into known state... 
*/ + writeb(MCFUART_UCR_CMDRESETRX, port->membase + MCFUART_UCR); + writeb(MCFUART_UCR_CMDRESETTX, port->membase + MCFUART_UCR); + + /* Enable the UART transmitter and receiver */ + writeb(MCFUART_UCR_RXENABLE | MCFUART_UCR_TXENABLE, + port->membase + MCFUART_UCR); + + /* Enable RX interrupts now */ + pp->imr = MCFUART_UIR_RXREADY; + writeb(pp->imr, port->membase + MCFUART_UIMR); + + spin_unlock_irqrestore(&port->lock, flags); + + return 0; +} + +/****************************************************************************/ + +static void mcf_shutdown(struct uart_port *port) +{ + struct mcf_uart *pp = (struct mcf_uart *) port; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + + /* Disable all interrupts now */ + pp->imr = 0; + writeb(pp->imr, port->membase + MCFUART_UIMR); + + /* Disable UART transmitter and receiver */ + writeb(MCFUART_UCR_CMDRESETRX, port->membase + MCFUART_UCR); + writeb(MCFUART_UCR_CMDRESETTX, port->membase + MCFUART_UCR); + + spin_unlock_irqrestore(&port->lock, flags); +} + +/****************************************************************************/ + +static void mcf_set_termios(struct uart_port *port, struct ktermios *termios, + struct ktermios *old) +{ + unsigned long flags; + unsigned int baud, baudclk; + unsigned char mr1, mr2; + + baud = uart_get_baud_rate(port, termios, old, 0, 230400); + baudclk = ((MCF_BUSCLK / baud) + 16) / 32; + + mr1 = MCFUART_MR1_RXIRQRDY | MCFUART_MR1_RXERRCHAR; + mr2 = 0; + + switch (termios->c_cflag & CSIZE) { + case CS5: mr1 |= MCFUART_MR1_CS5; break; + case CS6: mr1 |= MCFUART_MR1_CS6; break; + case CS7: mr1 |= MCFUART_MR1_CS7; break; + case CS8: + default: mr1 |= MCFUART_MR1_CS8; break; + } + + if (termios->c_cflag & PARENB) { + if (termios->c_cflag & CMSPAR) { + if (termios->c_cflag & PARODD) + mr1 |= MCFUART_MR1_PARITYMARK; + else + mr1 |= MCFUART_MR1_PARITYSPACE; + } else { + if (termios->c_cflag & PARODD) + mr1 |= MCFUART_MR1_PARITYODD; + else + mr1 |= MCFUART_MR1_PARITYEVEN; + } + } else { + mr1 |= MCFUART_MR1_PARITYNONE; + } + + if (termios->c_cflag & CSTOPB) + mr2 |= MCFUART_MR2_STOP2; + else + mr2 |= MCFUART_MR2_STOP1; + + if (termios->c_cflag & CRTSCTS) { + mr1 |= MCFUART_MR1_RXRTS; + mr2 |= MCFUART_MR2_TXCTS; + } + + spin_lock_irqsave(&port->lock, flags); + writeb(MCFUART_UCR_CMDRESETRX, port->membase + MCFUART_UCR); + writeb(MCFUART_UCR_CMDRESETTX, port->membase + MCFUART_UCR); + writeb(MCFUART_UCR_CMDRESETMRPTR, port->membase + MCFUART_UCR); + writeb(mr1, port->membase + MCFUART_UMR); + writeb(mr2, port->membase + MCFUART_UMR); + writeb((baudclk & 0xff00) >> 8, port->membase + MCFUART_UBG1); + writeb((baudclk & 0xff), port->membase + MCFUART_UBG2); + writeb(MCFUART_UCSR_RXCLKTIMER | MCFUART_UCSR_TXCLKTIMER, + port->membase + MCFUART_UCSR); + writeb(MCFUART_UCR_RXENABLE | MCFUART_UCR_TXENABLE, + port->membase + MCFUART_UCR); + spin_unlock_irqrestore(&port->lock, flags); +} + +/****************************************************************************/ + +static void mcf_rx_chars(struct mcf_uart *pp) +{ + struct uart_port *port = (struct uart_port *) pp; + unsigned char status, ch, flag; + + while ((status = readb(port->membase + MCFUART_USR)) & MCFUART_USR_RXREADY) { + ch = readb(port->membase + MCFUART_URB); + flag = TTY_NORMAL; + port->icount.rx++; + + if (status & MCFUART_USR_RXERR) { + writeb(MCFUART_UCR_CMDRESETERR, + port->membase + MCFUART_UCR); + + if (status & MCFUART_USR_RXBREAK) { + port->icount.brk++; + if (uart_handle_break(port)) + continue; + } else if (status & MCFUART_USR_RXPARITY) 
{ + port->icount.parity++; + } else if (status & MCFUART_USR_RXOVERRUN) { + port->icount.overrun++; + } else if (status & MCFUART_USR_RXFRAMING) { + port->icount.frame++; + } + + status &= port->read_status_mask; + + if (status & MCFUART_USR_RXBREAK) + flag = TTY_BREAK; + else if (status & MCFUART_USR_RXPARITY) + flag = TTY_PARITY; + else if (status & MCFUART_USR_RXFRAMING) + flag = TTY_FRAME; + } + + if (uart_handle_sysrq_char(port, ch)) + continue; + uart_insert_char(port, status, MCFUART_USR_RXOVERRUN, ch, flag); + } + + tty_flip_buffer_push(port->info->tty); +} + +/****************************************************************************/ + +static void mcf_tx_chars(struct mcf_uart *pp) +{ + struct uart_port *port = (struct uart_port *) pp; + struct circ_buf *xmit = &port->info->xmit; + + if (port->x_char) { + /* Send special char - probably flow control */ + writeb(port->x_char, port->membase + MCFUART_UTB); + port->x_char = 0; + port->icount.tx++; + return; + } + + while (readb(port->membase + MCFUART_USR) & MCFUART_USR_TXREADY) { + if (xmit->head == xmit->tail) + break; + writeb(xmit->buf[xmit->tail], port->membase + MCFUART_UTB); + xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE -1); + port->icount.tx++; + } + + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) + uart_write_wakeup(port); + + if (xmit->head == xmit->tail) { + pp->imr &= ~MCFUART_UIR_TXREADY; + writeb(pp->imr, port->membase + MCFUART_UIMR); + } +} + +/****************************************************************************/ + +static irqreturn_t mcf_interrupt(int irq, void *data) +{ + struct uart_port *port = data; + struct mcf_uart *pp = (struct mcf_uart *) port; + unsigned int isr; + + isr = readb(port->membase + MCFUART_UISR) & pp->imr; + if (isr & MCFUART_UIR_RXREADY) + mcf_rx_chars(pp); + if (isr & MCFUART_UIR_TXREADY) + mcf_tx_chars(pp); + return IRQ_HANDLED; +} + +/****************************************************************************/ + +static void mcf_config_port(struct uart_port *port, int flags) +{ + port->type = PORT_MCF; + + /* Clear mask, so no surprise interrupts. */ + writeb(0, port->membase + MCFUART_UIMR); + + if (request_irq(port->irq, mcf_interrupt, IRQF_DISABLED, "UART", port)) + printk(KERN_ERR "MCF: unable to attach ColdFire UART %d " + "interrupt vector=%d\n", port->line, port->irq); +} + +/****************************************************************************/ + +static const char *mcf_type(struct uart_port *port) +{ + return (port->type == PORT_MCF) ? "ColdFire UART" : NULL; +} + +/****************************************************************************/ + +static int mcf_request_port(struct uart_port *port) +{ + /* UARTs always present */ + return 0; +} + +/****************************************************************************/ + +static void mcf_release_port(struct uart_port *port) +{ + /* Nothing to release... */ +} + +/****************************************************************************/ + +static int mcf_verify_port(struct uart_port *port, struct serial_struct *ser) +{ + if ((ser->type != PORT_UNKNOWN) && (ser->type != PORT_MCF)) + return -EINVAL; + return 0; +} + +/****************************************************************************/ + +/* + * Define the basic serial functions we support. 
+ */ +static struct uart_ops mcf_uart_ops = { + .tx_empty = mcf_tx_empty, + .get_mctrl = mcf_get_mctrl, + .set_mctrl = mcf_set_mctrl, + .start_tx = mcf_start_tx, + .stop_tx = mcf_stop_tx, + .stop_rx = mcf_stop_rx, + .enable_ms = mcf_enable_ms, + .break_ctl = mcf_break_ctl, + .startup = mcf_startup, + .shutdown = mcf_shutdown, + .set_termios = mcf_set_termios, + .type = mcf_type, + .request_port = mcf_request_port, + .release_port = mcf_release_port, + .config_port = mcf_config_port, + .verify_port = mcf_verify_port, +}; + +static struct mcf_uart mcf_ports[3]; + +#define MCF_MAXPORTS (sizeof(mcf_ports) / sizeof(struct mcf_uart)) + +/****************************************************************************/ +#if defined(CONFIG_SERIAL_MCF_CONSOLE) +/****************************************************************************/ + +int __init early_mcf_setup(struct mcf_platform_uart *platp) +{ + struct uart_port *port; + int i; + + for (i = 0; ((i < MCF_MAXPORTS) && (platp[i].mapbase)); i++) { + port = &mcf_ports[i].port; + + port->line = i; + port->type = PORT_MCF; + port->mapbase = platp[i].mapbase; + port->membase = (platp[i].membase) ? platp[i].membase : + (unsigned char __iomem *) port->mapbase; + port->iotype = SERIAL_IO_MEM; + port->irq = platp[i].irq; + port->uartclk = MCF_BUSCLK; + port->flags = ASYNC_BOOT_AUTOCONF; + port->ops = &mcf_uart_ops; + } + + return 0; +} + +/****************************************************************************/ + +static void mcf_console_putc(struct console *co, const char c) +{ + struct uart_port *port = &(mcf_ports + co->index)->port; + int i; + + for (i = 0; (i < 0x10000); i++) { + if (readb(port->membase + MCFUART_USR) & MCFUART_USR_TXREADY) + break; + } + writeb(c, port->membase + MCFUART_UTB); + for (i = 0; (i < 0x10000); i++) { + if (readb(port->membase + MCFUART_USR) & MCFUART_USR_TXREADY) + break; + } +} + +/****************************************************************************/ + +static void mcf_console_write(struct console *co, const char *s, unsigned int count) +{ + for (; (count); count--, s++) { + mcf_console_putc(co, *s); + if (*s == '\n') + mcf_console_putc(co, '\r'); + } +} + +/****************************************************************************/ + +static int __init mcf_console_setup(struct console *co, char *options) +{ + struct uart_port *port; + int baud = CONFIG_SERIAL_MCF_BAUDRATE; + int bits = 8; + int parity = 'n'; + int flow = 'n'; + + if ((co->index >= 0) && (co->index <= MCF_MAXPORTS)) + co->index = 0; + port = &mcf_ports[co->index].port; + if (port->membase == 0) + return -ENODEV; + + if (options) + uart_parse_options(options, &baud, &parity, &bits, &flow); + + return uart_set_options(port, co, baud, parity, bits, flow); +} + +/****************************************************************************/ + +static struct uart_driver mcf_driver; + +static struct console mcf_console = { + .name = "ttyS", + .write = mcf_console_write, + .device = uart_console_device, + .setup = mcf_console_setup, + .flags = CON_PRINTBUFFER, + .index = -1, + .data = &mcf_driver, +}; + +static int __init mcf_console_init(void) +{ + register_console(&mcf_console); + return 0; +} + +console_initcall(mcf_console_init); + +#define MCF_CONSOLE &mcf_console + +/****************************************************************************/ +#else +/****************************************************************************/ + +#define MCF_CONSOLE NULL + +/****************************************************************************/ 
+#endif /* CONFIG_MCF_CONSOLE */ +/****************************************************************************/ + +/* + * Define the mcf UART driver structure. + */ +static struct uart_driver mcf_driver = { + .owner = THIS_MODULE, + .driver_name = "mcf", + .dev_name = "ttyS", + .major = TTY_MAJOR, + .minor = 64, + .nr = MCF_MAXPORTS, + .cons = MCF_CONSOLE, +}; + +/****************************************************************************/ + +static int __devinit mcf_probe(struct platform_device *pdev) +{ + struct mcf_platform_uart *platp = pdev->dev.platform_data; + struct uart_port *port; + int i; + + for (i = 0; ((i < MCF_MAXPORTS) && (platp[i].mapbase)); i++) { + port = &mcf_ports[i].port; + + port->line = i; + port->type = PORT_MCF; + port->mapbase = platp[i].mapbase; + port->membase = (platp[i].membase) ? platp[i].membase : + (unsigned char __iomem *) platp[i].mapbase; + port->iotype = SERIAL_IO_MEM; + port->irq = platp[i].irq; + port->uartclk = MCF_BUSCLK; + port->ops = &mcf_uart_ops; + port->flags = ASYNC_BOOT_AUTOCONF; + + uart_add_one_port(&mcf_driver, port); + } + + return 0; +} + +/****************************************************************************/ + +static int mcf_remove(struct platform_device *pdev) +{ + struct uart_port *port; + int i; + + for (i = 0; (i < MCF_MAXPORTS); i++) { + port = &mcf_ports[i].port; + if (port) + uart_remove_one_port(&mcf_driver, port); + } + + return 0; +} + +/****************************************************************************/ + +static struct platform_driver mcf_platform_driver = { + .probe = mcf_probe, + .remove = __devexit_p(mcf_remove), + .driver = { + .name = "mcfuart", + .owner = THIS_MODULE, + }, +}; + +/****************************************************************************/ + +static int __init mcf_init(void) +{ + int rc; + + printk("ColdFire internal UART serial driver\n"); + + rc = uart_register_driver(&mcf_driver); + if (rc) + return rc; + rc = platform_driver_register(&mcf_platform_driver); + if (rc) + return rc; + return 0; +} + +/****************************************************************************/ + +static void __exit mcf_exit(void) +{ + platform_driver_unregister(&mcf_platform_driver); + uart_unregister_driver(&mcf_driver); +} + +/****************************************************************************/ + +module_init(mcf_init); +module_exit(mcf_exit); + +MODULE_AUTHOR("Greg Ungerer <gerg@snapgear.com>"); +MODULE_DESCRIPTION("Freescale ColdFire UART driver"); +MODULE_LICENSE("GPL"); + +/****************************************************************************/ diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c index 8dd5a6afd51..90d64a80846 100644 --- a/drivers/usb/core/message.c +++ b/drivers/usb/core/message.c @@ -437,13 +437,11 @@ int usb_sg_init ( #if defined(CONFIG_HIGHMEM) || defined(CONFIG_IOMMU) io->urbs[i]->transfer_buffer = NULL; #else - io->urbs[i]->transfer_buffer = - page_address(sg[i].page) + sg[i].offset; + io->urbs[i]->transfer_buffer = sg_virt(&sg[i]); #endif } else { /* hc may use _only_ transfer_buffer */ - io->urbs [i]->transfer_buffer = - page_address (sg [i].page) + sg [i].offset; + io->urbs [i]->transfer_buffer = sg_virt(&sg[i]); len = sg [i].length; } diff --git a/drivers/usb/image/microtek.c b/drivers/usb/image/microtek.c index e7d982a7154..91e999c9f68 100644 --- a/drivers/usb/image/microtek.c +++ b/drivers/usb/image/microtek.c @@ -519,8 +519,7 @@ static void mts_do_sg (struct urb* transfer) context->fragment++; mts_int_submit_urb(transfer, 
context->data_pipe, - page_address(sg[context->fragment].page) + - sg[context->fragment].offset, + sg_virt(&sg[context->fragment]), sg[context->fragment].length, context->fragment + 1 == scsi_sg_count(context->srb) ? mts_data_done : mts_do_sg); @@ -557,7 +556,7 @@ mts_build_transfer_context(struct scsi_cmnd *srb, struct mts_desc* desc) return; } else { sg = scsi_sglist(srb); - desc->context.data = page_address(sg[0].page) + sg[0].offset; + desc->context.data = sg_virt(&sg[0]); desc->context.data_length = sg[0].length; } diff --git a/drivers/usb/misc/usbtest.c b/drivers/usb/misc/usbtest.c index e901d31e051..ea316214648 100644 --- a/drivers/usb/misc/usbtest.c +++ b/drivers/usb/misc/usbtest.c @@ -360,9 +360,9 @@ static void free_sglist (struct scatterlist *sg, int nents) if (!sg) return; for (i = 0; i < nents; i++) { - if (!sg [i].page) + if (!sg_page(&sg[i])) continue; - kfree (page_address (sg [i].page) + sg [i].offset); + kfree (sg_virt(&sg[i])); } kfree (sg); } diff --git a/drivers/usb/storage/protocol.c b/drivers/usb/storage/protocol.c index cc8f7c52c72..889622baac2 100644 --- a/drivers/usb/storage/protocol.c +++ b/drivers/usb/storage/protocol.c @@ -195,7 +195,7 @@ unsigned int usb_stor_access_xfer_buf(unsigned char *buffer, * the *offset and *index values for the next loop. */ cnt = 0; while (cnt < buflen) { - struct page *page = sg->page + + struct page *page = sg_page(sg) + ((sg->offset + *offset) >> PAGE_SHIFT); unsigned int poff = (sg->offset + *offset) & (PAGE_SIZE-1); diff --git a/drivers/watchdog/mpc5200_wdt.c b/drivers/watchdog/mpc5200_wdt.c index 9cfb9757662..11f6a111e75 100644 --- a/drivers/watchdog/mpc5200_wdt.c +++ b/drivers/watchdog/mpc5200_wdt.c @@ -176,6 +176,8 @@ static int mpc5200_wdt_probe(struct of_device *op, const struct of_device_id *ma has_wdt = of_get_property(op->node, "has-wdt", NULL); if (!has_wdt) + has_wdt = of_get_property(op->node, "fsl,has-wdt", NULL); + if (!has_wdt) return -ENODEV; wdt = kzalloc(sizeof(*wdt), GFP_KERNEL); @@ -254,6 +256,7 @@ static int mpc5200_wdt_shutdown(struct of_device *op) static struct of_device_id mpc5200_wdt_match[] = { { .compatible = "mpc5200-gpt", }, + { .compatible = "fsl,mpc5200-gpt", }, {}, }; static struct of_platform_driver mpc5200_wdt_driver = { diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h index 0a3ee5a322b..5574ba3ab1f 100644 --- a/fs/cifs/cifsfs.h +++ b/fs/cifs/cifsfs.h @@ -103,7 +103,7 @@ extern int cifs_ioctl(struct inode *inode, struct file *filep, unsigned int command, unsigned long arg); #ifdef CONFIG_CIFS_EXPERIMENTAL -extern struct export_operations cifs_export_ops; +extern const struct export_operations cifs_export_ops; #endif /* EXPERIMENTAL */ #define CIFS_VERSION "1.51" diff --git a/fs/cifs/export.c b/fs/cifs/export.c index d614b91caec..75949d6a5f1 100644 --- a/fs/cifs/export.c +++ b/fs/cifs/export.c @@ -53,7 +53,7 @@ static struct dentry *cifs_get_parent(struct dentry *dentry) return ERR_PTR(-EACCES); } -struct export_operations cifs_export_ops = { +const struct export_operations cifs_export_ops = { .get_parent = cifs_get_parent, /* Following five export operations are unneeded so far and can default: .get_dentry = diff --git a/fs/dcache.c b/fs/dcache.c index 2bb3f7ac683..d9ca1e5ceb9 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -1479,6 +1479,8 @@ static void switch_names(struct dentry *dentry, struct dentry *target) * dentry:internal, target:external. Steal target's * storage and make target internal. 
*/ + memcpy(target->d_iname, dentry->d_name.name, + dentry->d_name.len + 1); dentry->d_name.name = target->d_name.name; target->d_name.name = target->d_iname; } diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c index 1ae90ef2c74..0a9882edf56 100644 --- a/fs/ecryptfs/crypto.c +++ b/fs/ecryptfs/crypto.c @@ -283,7 +283,7 @@ int virt_to_scatterlist(const void *addr, int size, struct scatterlist *sg, pg = virt_to_page(addr); offset = offset_in_page(addr); if (sg) { - sg[i].page = pg; + sg_set_page(&sg[i], pg); sg[i].offset = offset; } remainder_of_page = PAGE_CACHE_SIZE - offset; @@ -713,10 +713,13 @@ ecryptfs_encrypt_page_offset(struct ecryptfs_crypt_stat *crypt_stat, { struct scatterlist src_sg, dst_sg; - src_sg.page = src_page; + sg_init_table(&src_sg, 1); + sg_init_table(&dst_sg, 1); + + sg_set_page(&src_sg, src_page); src_sg.offset = src_offset; src_sg.length = size; - dst_sg.page = dst_page; + sg_set_page(&dst_sg, dst_page); dst_sg.offset = dst_offset; dst_sg.length = size; return encrypt_scatterlist(crypt_stat, &dst_sg, &src_sg, size, iv); @@ -742,10 +745,13 @@ ecryptfs_decrypt_page_offset(struct ecryptfs_crypt_stat *crypt_stat, { struct scatterlist src_sg, dst_sg; - src_sg.page = src_page; + sg_init_table(&src_sg, 1); + sg_init_table(&dst_sg, 1); + + sg_set_page(&src_sg, src_page); src_sg.offset = src_offset; src_sg.length = size; - dst_sg.page = dst_page; + sg_set_page(&dst_sg, dst_page); dst_sg.offset = dst_offset; dst_sg.length = size; return decrypt_scatterlist(crypt_stat, &dst_sg, &src_sg, size, iv); diff --git a/fs/ecryptfs/keystore.c b/fs/ecryptfs/keystore.c index 89d9710dd63..263fed88c0c 100644 --- a/fs/ecryptfs/keystore.c +++ b/fs/ecryptfs/keystore.c @@ -1040,6 +1040,9 @@ decrypt_passphrase_encrypted_session_key(struct ecryptfs_auth_tok *auth_tok, }; int rc = 0; + sg_init_table(&dst_sg, 1); + sg_init_table(&src_sg, 1); + if (unlikely(ecryptfs_verbosity > 0)) { ecryptfs_printk( KERN_DEBUG, "Session key encryption key (size [%d]):\n", diff --git a/fs/efs/namei.c b/fs/efs/namei.c index 5276b19423c..f7f407075be 100644 --- a/fs/efs/namei.c +++ b/fs/efs/namei.c @@ -10,6 +10,8 @@ #include <linux/string.h> #include <linux/efs_fs.h> #include <linux/smp_lock.h> +#include <linux/exportfs.h> + static efs_ino_t efs_find_entry(struct inode *inode, const char *name, int len) { struct buffer_head *bh; @@ -75,13 +77,10 @@ struct dentry *efs_lookup(struct inode *dir, struct dentry *dentry, struct namei return NULL; } -struct dentry *efs_get_dentry(struct super_block *sb, void *vobjp) +static struct inode *efs_nfs_get_inode(struct super_block *sb, u64 ino, + u32 generation) { - __u32 *objp = vobjp; - unsigned long ino = objp[0]; - __u32 generation = objp[1]; struct inode *inode; - struct dentry *result; if (ino == 0) return ERR_PTR(-ESTALE); @@ -91,20 +90,25 @@ struct dentry *efs_get_dentry(struct super_block *sb, void *vobjp) if (is_bad_inode(inode) || (generation && inode->i_generation != generation)) { - result = ERR_PTR(-ESTALE); - goto out_iput; + iput(inode); + return ERR_PTR(-ESTALE); } - result = d_alloc_anon(inode); - if (!result) { - result = ERR_PTR(-ENOMEM); - goto out_iput; - } - return result; + return inode; +} - out_iput: - iput(inode); - return result; +struct dentry *efs_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_dentry(sb, fid, fh_len, fh_type, + efs_nfs_get_inode); +} + +struct dentry *efs_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_parent(sb, 
fid, fh_len, fh_type, + efs_nfs_get_inode); } struct dentry *efs_get_parent(struct dentry *child) diff --git a/fs/efs/super.c b/fs/efs/super.c index 25d0326c5f1..c79bc627f10 100644 --- a/fs/efs/super.c +++ b/fs/efs/super.c @@ -113,8 +113,9 @@ static const struct super_operations efs_superblock_operations = { .remount_fs = efs_remount, }; -static struct export_operations efs_export_ops = { - .get_dentry = efs_get_dentry, +static const struct export_operations efs_export_ops = { + .fh_to_dentry = efs_fh_to_dentry, + .fh_to_parent = efs_fh_to_parent, .get_parent = efs_get_parent, }; diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c index 8adb32a9387..109ab5e44ec 100644 --- a/fs/exportfs/expfs.c +++ b/fs/exportfs/expfs.c @@ -1,4 +1,13 @@ - +/* + * Copyright (C) Neil Brown 2002 + * Copyright (C) Christoph Hellwig 2007 + * + * This file contains the code mapping from inodes to NFS file handles, + * and for mapping back from file handles to dentries. + * + * For details on why we do all the strange and hairy things in here + * take a look at Documentation/filesystems/Exporting. + */ #include <linux/exportfs.h> #include <linux/fs.h> #include <linux/file.h> @@ -9,32 +18,19 @@ #define dprintk(fmt, args...) do{}while(0) -static int get_name(struct dentry *dentry, char *name, +static int get_name(struct vfsmount *mnt, struct dentry *dentry, char *name, struct dentry *child); -static struct dentry *exportfs_get_dentry(struct super_block *sb, void *obj) +static int exportfs_get_name(struct vfsmount *mnt, struct dentry *dir, + char *name, struct dentry *child) { - struct dentry *result = ERR_PTR(-ESTALE); - - if (sb->s_export_op->get_dentry) { - result = sb->s_export_op->get_dentry(sb, obj); - if (!result) - result = ERR_PTR(-ESTALE); - } - - return result; -} - -static int exportfs_get_name(struct dentry *dir, char *name, - struct dentry *child) -{ - struct export_operations *nop = dir->d_sb->s_export_op; + const struct export_operations *nop = dir->d_sb->s_export_op; if (nop->get_name) return nop->get_name(dir, name, child); else - return get_name(dir, name, child); + return get_name(mnt, dir, name, child); } /* @@ -98,7 +94,7 @@ find_disconnected_root(struct dentry *dentry) * It may already be, as the flag isn't always updated when connection happens. 
*/ static int -reconnect_path(struct super_block *sb, struct dentry *target_dir) +reconnect_path(struct vfsmount *mnt, struct dentry *target_dir) { char nbuf[NAME_MAX+1]; int noprogress = 0; @@ -121,7 +117,7 @@ reconnect_path(struct super_block *sb, struct dentry *target_dir) pd->d_flags &= ~DCACHE_DISCONNECTED; spin_unlock(&pd->d_lock); noprogress = 0; - } else if (pd == sb->s_root) { + } else if (pd == mnt->mnt_sb->s_root) { printk(KERN_ERR "export: Eeek filesystem root is not connected, impossible\n"); spin_lock(&pd->d_lock); pd->d_flags &= ~DCACHE_DISCONNECTED; @@ -147,8 +143,8 @@ reconnect_path(struct super_block *sb, struct dentry *target_dir) struct dentry *npd; mutex_lock(&pd->d_inode->i_mutex); - if (sb->s_export_op->get_parent) - ppd = sb->s_export_op->get_parent(pd); + if (mnt->mnt_sb->s_export_op->get_parent) + ppd = mnt->mnt_sb->s_export_op->get_parent(pd); mutex_unlock(&pd->d_inode->i_mutex); if (IS_ERR(ppd)) { @@ -161,7 +157,7 @@ reconnect_path(struct super_block *sb, struct dentry *target_dir) dprintk("%s: find name of %lu in %lu\n", __FUNCTION__, pd->d_inode->i_ino, ppd->d_inode->i_ino); - err = exportfs_get_name(ppd, nbuf, pd); + err = exportfs_get_name(mnt, ppd, nbuf, pd); if (err) { dput(ppd); dput(pd); @@ -214,125 +210,6 @@ reconnect_path(struct super_block *sb, struct dentry *target_dir) return 0; } -/** - * find_exported_dentry - helper routine to implement export_operations->decode_fh - * @sb: The &super_block identifying the filesystem - * @obj: An opaque identifier of the object to be found - passed to - * get_inode - * @parent: An optional opqaue identifier of the parent of the object. - * @acceptable: A function used to test possible &dentries to see if they are - * acceptable - * @context: A parameter to @acceptable so that it knows on what basis to - * judge. - * - * find_exported_dentry is the central helper routine to enable file systems - * to provide the decode_fh() export_operation. It's main task is to take - * an &inode, find or create an appropriate &dentry structure, and possibly - * splice this into the dcache in the correct place. - * - * The decode_fh() operation provided by the filesystem should call - * find_exported_dentry() with the same parameters that it received except - * that instead of the file handle fragment, pointers to opaque identifiers - * for the object and optionally its parent are passed. The default decode_fh - * routine passes one pointer to the start of the filehandle fragment, and - * one 8 bytes into the fragment. It is expected that most filesystems will - * take this approach, though the offset to the parent identifier may well be - * different. - * - * find_exported_dentry() will call get_dentry to get an dentry pointer from - * the file system. If any &dentry in the d_alias list is acceptable, it will - * be returned. Otherwise find_exported_dentry() will attempt to splice a new - * &dentry into the dcache using get_name() and get_parent() to find the - * appropriate place. - */ - -struct dentry * -find_exported_dentry(struct super_block *sb, void *obj, void *parent, - int (*acceptable)(void *context, struct dentry *de), - void *context) -{ - struct dentry *result, *alias; - int err = -ESTALE; - - /* - * Attempt to find the inode. 
- */ - result = exportfs_get_dentry(sb, obj); - if (IS_ERR(result)) - return result; - - if (S_ISDIR(result->d_inode->i_mode)) { - if (!(result->d_flags & DCACHE_DISCONNECTED)) { - if (acceptable(context, result)) - return result; - err = -EACCES; - goto err_result; - } - - err = reconnect_path(sb, result); - if (err) - goto err_result; - } else { - struct dentry *target_dir, *nresult; - char nbuf[NAME_MAX+1]; - - alias = find_acceptable_alias(result, acceptable, context); - if (alias) - return alias; - - if (parent == NULL) - goto err_result; - - target_dir = exportfs_get_dentry(sb,parent); - if (IS_ERR(target_dir)) { - err = PTR_ERR(target_dir); - goto err_result; - } - - err = reconnect_path(sb, target_dir); - if (err) { - dput(target_dir); - goto err_result; - } - - /* - * As we weren't after a directory, have one more step to go. - */ - err = exportfs_get_name(target_dir, nbuf, result); - if (!err) { - mutex_lock(&target_dir->d_inode->i_mutex); - nresult = lookup_one_len(nbuf, target_dir, - strlen(nbuf)); - mutex_unlock(&target_dir->d_inode->i_mutex); - if (!IS_ERR(nresult)) { - if (nresult->d_inode) { - dput(result); - result = nresult; - } else - dput(nresult); - } - } - dput(target_dir); - } - - alias = find_acceptable_alias(result, acceptable, context); - if (alias) - return alias; - - /* drat - I just cannot find anything acceptable */ - dput(result); - /* It might be justifiable to return ESTALE here, - * but the filehandle at-least looks reasonable good - * and it may just be a permission problem, so returning - * -EACCESS is safer - */ - return ERR_PTR(-EACCES); - - err_result: - dput(result); - return ERR_PTR(err); -} - struct getdents_callback { char *name; /* name that was found. It already points to a buffer NAME_MAX+1 is size */ @@ -370,8 +247,8 @@ static int filldir_one(void * __buf, const char * name, int len, * calls readdir on the parent until it finds an entry with * the same inode number as the child, and returns that. */ -static int get_name(struct dentry *dentry, char *name, - struct dentry *child) +static int get_name(struct vfsmount *mnt, struct dentry *dentry, + char *name, struct dentry *child) { struct inode *dir = dentry->d_inode; int error; @@ -387,7 +264,7 @@ static int get_name(struct dentry *dentry, char *name, /* * Open the directory ... */ - file = dentry_open(dget(dentry), NULL, O_RDONLY); + file = dentry_open(dget(dentry), mntget(mnt), O_RDONLY); error = PTR_ERR(file); if (IS_ERR(file)) goto out; @@ -434,100 +311,177 @@ out: * can be used to check that it is still valid. It places them in the * filehandle fragment where export_decode_fh expects to find them. 
*/ -static int export_encode_fh(struct dentry *dentry, __u32 *fh, int *max_len, - int connectable) +static int export_encode_fh(struct dentry *dentry, struct fid *fid, + int *max_len, int connectable) { struct inode * inode = dentry->d_inode; int len = *max_len; - int type = 1; + int type = FILEID_INO32_GEN; if (len < 2 || (connectable && len < 4)) return 255; len = 2; - fh[0] = inode->i_ino; - fh[1] = inode->i_generation; + fid->i32.ino = inode->i_ino; + fid->i32.gen = inode->i_generation; if (connectable && !S_ISDIR(inode->i_mode)) { struct inode *parent; spin_lock(&dentry->d_lock); parent = dentry->d_parent->d_inode; - fh[2] = parent->i_ino; - fh[3] = parent->i_generation; + fid->i32.parent_ino = parent->i_ino; + fid->i32.parent_gen = parent->i_generation; spin_unlock(&dentry->d_lock); len = 4; - type = 2; + type = FILEID_INO32_GEN_PARENT; } *max_len = len; return type; } - -/** - * export_decode_fh - default export_operations->decode_fh function - * @sb: The superblock - * @fh: pointer to the file handle fragment - * @fh_len: length of file handle fragment - * @acceptable: function for testing acceptability of dentrys - * @context: context for @acceptable - * - * This is the default decode_fh() function. - * a fileid_type of 1 indicates that the filehandlefragment - * just contains an object identifier understood by get_dentry. - * a fileid_type of 2 says that there is also a directory - * identifier 8 bytes in to the filehandlefragement. - */ -static struct dentry *export_decode_fh(struct super_block *sb, __u32 *fh, int fh_len, - int fileid_type, - int (*acceptable)(void *context, struct dentry *de), - void *context) -{ - __u32 parent[2]; - parent[0] = parent[1] = 0; - if (fh_len < 2 || fileid_type > 2) - return NULL; - if (fileid_type == 2) { - if (fh_len > 2) parent[0] = fh[2]; - if (fh_len > 3) parent[1] = fh[3]; - } - return find_exported_dentry(sb, fh, parent, - acceptable, context); -} - -int exportfs_encode_fh(struct dentry *dentry, __u32 *fh, int *max_len, +int exportfs_encode_fh(struct dentry *dentry, struct fid *fid, int *max_len, int connectable) { - struct export_operations *nop = dentry->d_sb->s_export_op; + const struct export_operations *nop = dentry->d_sb->s_export_op; int error; if (nop->encode_fh) - error = nop->encode_fh(dentry, fh, max_len, connectable); + error = nop->encode_fh(dentry, fid->raw, max_len, connectable); else - error = export_encode_fh(dentry, fh, max_len, connectable); + error = export_encode_fh(dentry, fid, max_len, connectable); return error; } EXPORT_SYMBOL_GPL(exportfs_encode_fh); -struct dentry *exportfs_decode_fh(struct vfsmount *mnt, __u32 *fh, int fh_len, - int fileid_type, int (*acceptable)(void *, struct dentry *), - void *context) +struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid, + int fh_len, int fileid_type, + int (*acceptable)(void *, struct dentry *), void *context) { - struct export_operations *nop = mnt->mnt_sb->s_export_op; - struct dentry *result; + const struct export_operations *nop = mnt->mnt_sb->s_export_op; + struct dentry *result, *alias; + int err; - if (nop->decode_fh) { - result = nop->decode_fh(mnt->mnt_sb, fh, fh_len, fileid_type, - acceptable, context); + /* + * Try to get any dentry for the given file handle from the filesystem. + */ + result = nop->fh_to_dentry(mnt->mnt_sb, fid, fh_len, fileid_type); + if (!result) + result = ERR_PTR(-ESTALE); + if (IS_ERR(result)) + return result; + + if (S_ISDIR(result->d_inode->i_mode)) { + /* + * This request is for a directory. 
+ * + * On the positive side there is only one dentry for each + * directory inode. On the negative side this implies that we + * to ensure our dentry is connected all the way up to the + * filesystem root. + */ + if (result->d_flags & DCACHE_DISCONNECTED) { + err = reconnect_path(mnt, result); + if (err) + goto err_result; + } + + if (!acceptable(context, result)) { + err = -EACCES; + goto err_result; + } + + return result; } else { - result = export_decode_fh(mnt->mnt_sb, fh, fh_len, fileid_type, - acceptable, context); + /* + * It's not a directory. Life is a little more complicated. + */ + struct dentry *target_dir, *nresult; + char nbuf[NAME_MAX+1]; + + /* + * See if either the dentry we just got from the filesystem + * or any alias for it is acceptable. This is always true + * if this filesystem is exported without the subtreecheck + * option. If the filesystem is exported with the subtree + * check option there's a fair chance we need to look at + * the parent directory in the file handle and make sure + * it's connected to the filesystem root. + */ + alias = find_acceptable_alias(result, acceptable, context); + if (alias) + return alias; + + /* + * Try to extract a dentry for the parent directory from the + * file handle. If this fails we'll have to give up. + */ + err = -ESTALE; + if (!nop->fh_to_parent) + goto err_result; + + target_dir = nop->fh_to_parent(mnt->mnt_sb, fid, + fh_len, fileid_type); + if (!target_dir) + goto err_result; + err = PTR_ERR(target_dir); + if (IS_ERR(target_dir)) + goto err_result; + + /* + * And as usual we need to make sure the parent directory is + * connected to the filesystem root. The VFS really doesn't + * like disconnected directories.. + */ + err = reconnect_path(mnt, target_dir); + if (err) { + dput(target_dir); + goto err_result; + } + + /* + * Now that we've got both a well-connected parent and a + * dentry for the inode we're after, make sure that our + * inode is actually connected to the parent. + */ + err = exportfs_get_name(mnt, target_dir, nbuf, result); + if (!err) { + mutex_lock(&target_dir->d_inode->i_mutex); + nresult = lookup_one_len(nbuf, target_dir, + strlen(nbuf)); + mutex_unlock(&target_dir->d_inode->i_mutex); + if (!IS_ERR(nresult)) { + if (nresult->d_inode) { + dput(result); + result = nresult; + } else + dput(nresult); + } + } + + /* + * At this point we are done with the parent, but it's pinned + * by the child dentry anyway. + */ + dput(target_dir); + + /* + * And finally make sure the dentry is actually acceptable + * to NFSD. + */ + alias = find_acceptable_alias(result, acceptable, context); + if (!alias) { + err = -EACCES; + goto err_result; + } + + return alias; } - return result; + err_result: + dput(result); + return ERR_PTR(err); } EXPORT_SYMBOL_GPL(exportfs_decode_fh); -EXPORT_SYMBOL(find_exported_dentry); - MODULE_LICENSE("GPL"); diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c index 05d9342bb64..d868e26c15e 100644 --- a/fs/ext2/dir.c +++ b/fs/ext2/dir.c @@ -28,6 +28,24 @@ typedef struct ext2_dir_entry_2 ext2_dirent; +static inline unsigned ext2_rec_len_from_disk(__le16 dlen) +{ + unsigned len = le16_to_cpu(dlen); + + if (len == EXT2_MAX_REC_LEN) + return 1 << 16; + return len; +} + +static inline __le16 ext2_rec_len_to_disk(unsigned len) +{ + if (len == (1 << 16)) + return cpu_to_le16(EXT2_MAX_REC_LEN); + else if (len > (1 << 16)) + BUG(); + return cpu_to_le16(len); +} + /* * ext2 uses block-sized chunks. 
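The two ext2 helpers just added exist so that a directory entry covering a whole 64KiB block (rec_len == 65536, which does not fit in 16 bits) can be stored on disk as a reserved sentinel. A user-space sketch of the round trip, assuming EXT2_MAX_REC_LEN is the 16-bit all-ones sentinel defined by the companion ext2_fs.h change (not shown in this hunk) and ignoring the __le16 byte-swapping the kernel versions do:

#include <assert.h>
#include <stdint.h>

#define EXT2_MAX_REC_LEN ((1 << 16) - 1)	/* assumed value of the sentinel */

static unsigned int rec_len_from_disk(uint16_t dlen)
{
	return dlen == EXT2_MAX_REC_LEN ? 1 << 16 : dlen;
}

static uint16_t rec_len_to_disk(unsigned int len)
{
	assert(len <= 1 << 16);		/* the kernel version BUG()s here */
	return len == 1 << 16 ? EXT2_MAX_REC_LEN : len;
}

int main(void)
{
	/* An entry spanning a full 64KiB block survives the encoding... */
	assert(rec_len_from_disk(rec_len_to_disk(1 << 16)) == 1 << 16);
	/* ...and ordinary lengths are stored verbatim. */
	assert(rec_len_from_disk(rec_len_to_disk(4096)) == 4096);
	return 0;
}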
Arguably, sector-sized ones would be * more robust, but we have what we have @@ -106,7 +124,7 @@ static void ext2_check_page(struct page *page) } for (offs = 0; offs <= limit - EXT2_DIR_REC_LEN(1); offs += rec_len) { p = (ext2_dirent *)(kaddr + offs); - rec_len = le16_to_cpu(p->rec_len); + rec_len = ext2_rec_len_from_disk(p->rec_len); if (rec_len < EXT2_DIR_REC_LEN(1)) goto Eshort; @@ -204,7 +222,8 @@ static inline int ext2_match (int len, const char * const name, */ static inline ext2_dirent *ext2_next_entry(ext2_dirent *p) { - return (ext2_dirent *)((char*)p + le16_to_cpu(p->rec_len)); + return (ext2_dirent *)((char *)p + + ext2_rec_len_from_disk(p->rec_len)); } static inline unsigned @@ -316,7 +335,7 @@ ext2_readdir (struct file * filp, void * dirent, filldir_t filldir) return 0; } } - filp->f_pos += le16_to_cpu(de->rec_len); + filp->f_pos += ext2_rec_len_from_disk(de->rec_len); } ext2_put_page(page); } @@ -425,7 +444,7 @@ void ext2_set_link(struct inode *dir, struct ext2_dir_entry_2 *de, { loff_t pos = page_offset(page) + (char *) de - (char *) page_address(page); - unsigned len = le16_to_cpu(de->rec_len); + unsigned len = ext2_rec_len_from_disk(de->rec_len); int err; lock_page(page); @@ -482,7 +501,7 @@ int ext2_add_link (struct dentry *dentry, struct inode *inode) /* We hit i_size */ name_len = 0; rec_len = chunk_size; - de->rec_len = cpu_to_le16(chunk_size); + de->rec_len = ext2_rec_len_to_disk(chunk_size); de->inode = 0; goto got_it; } @@ -496,7 +515,7 @@ int ext2_add_link (struct dentry *dentry, struct inode *inode) if (ext2_match (namelen, name, de)) goto out_unlock; name_len = EXT2_DIR_REC_LEN(de->name_len); - rec_len = le16_to_cpu(de->rec_len); + rec_len = ext2_rec_len_from_disk(de->rec_len); if (!de->inode && rec_len >= reclen) goto got_it; if (rec_len >= name_len + reclen) @@ -518,8 +537,8 @@ got_it: goto out_unlock; if (de->inode) { ext2_dirent *de1 = (ext2_dirent *) ((char *) de + name_len); - de1->rec_len = cpu_to_le16(rec_len - name_len); - de->rec_len = cpu_to_le16(name_len); + de1->rec_len = ext2_rec_len_to_disk(rec_len - name_len); + de->rec_len = ext2_rec_len_to_disk(name_len); de = de1; } de->name_len = namelen; @@ -550,7 +569,8 @@ int ext2_delete_entry (struct ext2_dir_entry_2 * dir, struct page * page ) struct inode *inode = mapping->host; char *kaddr = page_address(page); unsigned from = ((char*)dir - kaddr) & ~(ext2_chunk_size(inode)-1); - unsigned to = ((char*)dir - kaddr) + le16_to_cpu(dir->rec_len); + unsigned to = ((char *)dir - kaddr) + + ext2_rec_len_from_disk(dir->rec_len); loff_t pos; ext2_dirent * pde = NULL; ext2_dirent * de = (ext2_dirent *) (kaddr + from); @@ -574,7 +594,7 @@ int ext2_delete_entry (struct ext2_dir_entry_2 * dir, struct page * page ) &page, NULL); BUG_ON(err); if (pde) - pde->rec_len = cpu_to_le16(to - from); + pde->rec_len = ext2_rec_len_to_disk(to - from); dir->inode = 0; err = ext2_commit_chunk(page, pos, to - from); inode->i_ctime = inode->i_mtime = CURRENT_TIME_SEC; @@ -610,14 +630,14 @@ int ext2_make_empty(struct inode *inode, struct inode *parent) memset(kaddr, 0, chunk_size); de = (struct ext2_dir_entry_2 *)kaddr; de->name_len = 1; - de->rec_len = cpu_to_le16(EXT2_DIR_REC_LEN(1)); + de->rec_len = ext2_rec_len_to_disk(EXT2_DIR_REC_LEN(1)); memcpy (de->name, ".\0\0", 4); de->inode = cpu_to_le32(inode->i_ino); ext2_set_de_type (de, inode); de = (struct ext2_dir_entry_2 *)(kaddr + EXT2_DIR_REC_LEN(1)); de->name_len = 2; - de->rec_len = cpu_to_le16(chunk_size - EXT2_DIR_REC_LEN(1)); + de->rec_len = ext2_rec_len_to_disk(chunk_size - 
EXT2_DIR_REC_LEN(1)); de->inode = cpu_to_le32(parent->i_ino); memcpy (de->name, "..\0", 4); ext2_set_de_type (de, inode); diff --git a/fs/ext2/super.c b/fs/ext2/super.c index 77bd5f9262f..154e25f13d7 100644 --- a/fs/ext2/super.c +++ b/fs/ext2/super.c @@ -311,13 +311,10 @@ static const struct super_operations ext2_sops = { #endif }; -static struct dentry *ext2_get_dentry(struct super_block *sb, void *vobjp) +static struct inode *ext2_nfs_get_inode(struct super_block *sb, + u64 ino, u32 generation) { - __u32 *objp = vobjp; - unsigned long ino = objp[0]; - __u32 generation = objp[1]; struct inode *inode; - struct dentry *result; if (ino < EXT2_FIRST_INO(sb) && ino != EXT2_ROOT_INO) return ERR_PTR(-ESTALE); @@ -338,15 +335,21 @@ static struct dentry *ext2_get_dentry(struct super_block *sb, void *vobjp) iput(inode); return ERR_PTR(-ESTALE); } - /* now to find a dentry. - * If possible, get a well-connected one - */ - result = d_alloc_anon(inode); - if (!result) { - iput(inode); - return ERR_PTR(-ENOMEM); - } - return result; + return inode; +} + +static struct dentry *ext2_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_dentry(sb, fid, fh_len, fh_type, + ext2_nfs_get_inode); +} + +static struct dentry *ext2_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_parent(sb, fid, fh_len, fh_type, + ext2_nfs_get_inode); } /* Yes, most of these are left as NULL!! @@ -354,9 +357,10 @@ static struct dentry *ext2_get_dentry(struct super_block *sb, void *vobjp) * systems, but can be improved upon. * Currently only get_parent is required. */ -static struct export_operations ext2_export_ops = { +static const struct export_operations ext2_export_ops = { + .fh_to_dentry = ext2_fh_to_dentry, + .fh_to_parent = ext2_fh_to_parent, .get_parent = ext2_get_parent, - .get_dentry = ext2_get_dentry, }; static unsigned long get_sb_block(void **data) diff --git a/fs/ext3/super.c b/fs/ext3/super.c index 81868c0bc40..de55da9e28b 100644 --- a/fs/ext3/super.c +++ b/fs/ext3/super.c @@ -631,13 +631,10 @@ static int ext3_show_options(struct seq_file *seq, struct vfsmount *vfs) } -static struct dentry *ext3_get_dentry(struct super_block *sb, void *vobjp) +static struct inode *ext3_nfs_get_inode(struct super_block *sb, + u64 ino, u32 generation) { - __u32 *objp = vobjp; - unsigned long ino = objp[0]; - __u32 generation = objp[1]; struct inode *inode; - struct dentry *result; if (ino < EXT3_FIRST_INO(sb) && ino != EXT3_ROOT_INO) return ERR_PTR(-ESTALE); @@ -660,15 +657,22 @@ static struct dentry *ext3_get_dentry(struct super_block *sb, void *vobjp) iput(inode); return ERR_PTR(-ESTALE); } - /* now to find a dentry. 
- * If possible, get a well-connected one - */ - result = d_alloc_anon(inode); - if (!result) { - iput(inode); - return ERR_PTR(-ENOMEM); - } - return result; + + return inode; +} + +static struct dentry *ext3_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_dentry(sb, fid, fh_len, fh_type, + ext3_nfs_get_inode); +} + +static struct dentry *ext3_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_parent(sb, fid, fh_len, fh_type, + ext3_nfs_get_inode); } #ifdef CONFIG_QUOTA @@ -737,9 +741,10 @@ static const struct super_operations ext3_sops = { #endif }; -static struct export_operations ext3_export_ops = { +static const struct export_operations ext3_export_ops = { + .fh_to_dentry = ext3_fh_to_dentry, + .fh_to_parent = ext3_fh_to_parent, .get_parent = ext3_get_parent, - .get_dentry = ext3_get_dentry, }; enum { diff --git a/fs/ext4/super.c b/fs/ext4/super.c index b11e9e2bcd0..8031dc0e24e 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -686,13 +686,10 @@ static int ext4_show_options(struct seq_file *seq, struct vfsmount *vfs) } -static struct dentry *ext4_get_dentry(struct super_block *sb, void *vobjp) +static struct inode *ext4_nfs_get_inode(struct super_block *sb, + u64 ino, u32 generation) { - __u32 *objp = vobjp; - unsigned long ino = objp[0]; - __u32 generation = objp[1]; struct inode *inode; - struct dentry *result; if (ino < EXT4_FIRST_INO(sb) && ino != EXT4_ROOT_INO) return ERR_PTR(-ESTALE); @@ -715,15 +712,22 @@ static struct dentry *ext4_get_dentry(struct super_block *sb, void *vobjp) iput(inode); return ERR_PTR(-ESTALE); } - /* now to find a dentry. - * If possible, get a well-connected one - */ - result = d_alloc_anon(inode); - if (!result) { - iput(inode); - return ERR_PTR(-ENOMEM); - } - return result; + + return inode; +} + +static struct dentry *ext4_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_dentry(sb, fid, fh_len, fh_type, + ext4_nfs_get_inode); +} + +static struct dentry *ext4_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_parent(sb, fid, fh_len, fh_type, + ext4_nfs_get_inode); } #ifdef CONFIG_QUOTA @@ -792,9 +796,10 @@ static const struct super_operations ext4_sops = { #endif }; -static struct export_operations ext4_export_ops = { +static const struct export_operations ext4_export_ops = { + .fh_to_dentry = ext4_fh_to_dentry, + .fh_to_parent = ext4_fh_to_parent, .get_parent = ext4_get_parent, - .get_dentry = ext4_get_dentry, }; enum { diff --git a/fs/fat/inode.c b/fs/fat/inode.c index c0c5e9c55b5..920a576e1c2 100644 --- a/fs/fat/inode.c +++ b/fs/fat/inode.c @@ -653,24 +653,15 @@ static const struct super_operations fat_sops = { * of i_logstart is used to store the directory entry offset. 
*/ -static struct dentry * -fat_decode_fh(struct super_block *sb, __u32 *fh, int len, int fhtype, - int (*acceptable)(void *context, struct dentry *de), - void *context) -{ - if (fhtype != 3) - return ERR_PTR(-ESTALE); - if (len < 5) - return ERR_PTR(-ESTALE); - - return sb->s_export_op->find_exported_dentry(sb, fh, NULL, acceptable, context); -} - -static struct dentry *fat_get_dentry(struct super_block *sb, void *inump) +static struct dentry *fat_fh_to_dentry(struct super_block *sb, + struct fid *fid, int fh_len, int fh_type) { struct inode *inode = NULL; struct dentry *result; - __u32 *fh = inump; + u32 *fh = fid->raw; + + if (fh_len < 5 || fh_type != 3) + return NULL; inode = iget(sb, fh[0]); if (!inode || is_bad_inode(inode) || inode->i_generation != fh[1]) { @@ -783,10 +774,9 @@ out: return parent; } -static struct export_operations fat_export_ops = { - .decode_fh = fat_decode_fh, +static const struct export_operations fat_export_ops = { .encode_fh = fat_encode_fh, - .get_dentry = fat_get_dentry, + .fh_to_dentry = fat_fh_to_dentry, .get_parent = fat_get_parent, }; diff --git a/fs/gfs2/ops_export.c b/fs/gfs2/ops_export.c index e2d1347796a..b9da62348a8 100644 --- a/fs/gfs2/ops_export.c +++ b/fs/gfs2/ops_export.c @@ -31,40 +31,6 @@ #define GFS2_LARGE_FH_SIZE 8 #define GFS2_OLD_FH_SIZE 10 -static struct dentry *gfs2_decode_fh(struct super_block *sb, - __u32 *p, - int fh_len, - int fh_type, - int (*acceptable)(void *context, - struct dentry *dentry), - void *context) -{ - __be32 *fh = (__force __be32 *)p; - struct gfs2_inum_host inum, parent; - - memset(&parent, 0, sizeof(struct gfs2_inum)); - - switch (fh_len) { - case GFS2_LARGE_FH_SIZE: - case GFS2_OLD_FH_SIZE: - parent.no_formal_ino = ((u64)be32_to_cpu(fh[4])) << 32; - parent.no_formal_ino |= be32_to_cpu(fh[5]); - parent.no_addr = ((u64)be32_to_cpu(fh[6])) << 32; - parent.no_addr |= be32_to_cpu(fh[7]); - case GFS2_SMALL_FH_SIZE: - inum.no_formal_ino = ((u64)be32_to_cpu(fh[0])) << 32; - inum.no_formal_ino |= be32_to_cpu(fh[1]); - inum.no_addr = ((u64)be32_to_cpu(fh[2])) << 32; - inum.no_addr |= be32_to_cpu(fh[3]); - break; - default: - return NULL; - } - - return gfs2_export_ops.find_exported_dentry(sb, &inum, &parent, - acceptable, context); -} - static int gfs2_encode_fh(struct dentry *dentry, __u32 *p, int *len, int connectable) { @@ -189,10 +155,10 @@ static struct dentry *gfs2_get_parent(struct dentry *child) return dentry; } -static struct dentry *gfs2_get_dentry(struct super_block *sb, void *inum_obj) +static struct dentry *gfs2_get_dentry(struct super_block *sb, + struct gfs2_inum_host *inum) { struct gfs2_sbd *sdp = sb->s_fs_info; - struct gfs2_inum_host *inum = inum_obj; struct gfs2_holder i_gh, ri_gh, rgd_gh; struct gfs2_rgrpd *rgd; struct inode *inode; @@ -289,11 +255,50 @@ fail: return ERR_PTR(error); } -struct export_operations gfs2_export_ops = { - .decode_fh = gfs2_decode_fh, +static struct dentry *gfs2_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + struct gfs2_inum_host this; + __be32 *fh = (__force __be32 *)fid->raw; + + switch (fh_type) { + case GFS2_SMALL_FH_SIZE: + case GFS2_LARGE_FH_SIZE: + case GFS2_OLD_FH_SIZE: + this.no_formal_ino = ((u64)be32_to_cpu(fh[0])) << 32; + this.no_formal_ino |= be32_to_cpu(fh[1]); + this.no_addr = ((u64)be32_to_cpu(fh[2])) << 32; + this.no_addr |= be32_to_cpu(fh[3]); + return gfs2_get_dentry(sb, &this); + default: + return NULL; + } +} + +static struct dentry *gfs2_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) 
+{ + struct gfs2_inum_host parent; + __be32 *fh = (__force __be32 *)fid->raw; + + switch (fh_type) { + case GFS2_LARGE_FH_SIZE: + case GFS2_OLD_FH_SIZE: + parent.no_formal_ino = ((u64)be32_to_cpu(fh[4])) << 32; + parent.no_formal_ino |= be32_to_cpu(fh[5]); + parent.no_addr = ((u64)be32_to_cpu(fh[6])) << 32; + parent.no_addr |= be32_to_cpu(fh[7]); + return gfs2_get_dentry(sb, &parent); + default: + return NULL; + } +} + +const struct export_operations gfs2_export_ops = { .encode_fh = gfs2_encode_fh, + .fh_to_dentry = gfs2_fh_to_dentry, + .fh_to_parent = gfs2_fh_to_parent, .get_name = gfs2_get_name, .get_parent = gfs2_get_parent, - .get_dentry = gfs2_get_dentry, }; diff --git a/fs/gfs2/ops_fstype.h b/fs/gfs2/ops_fstype.h index 407029b3b2b..da849051183 100644 --- a/fs/gfs2/ops_fstype.h +++ b/fs/gfs2/ops_fstype.h @@ -14,6 +14,6 @@ extern struct file_system_type gfs2_fs_type; extern struct file_system_type gfs2meta_fs_type; -extern struct export_operations gfs2_export_ops; +extern const struct export_operations gfs2_export_ops; #endif /* __OPS_FSTYPE_DOT_H__ */ diff --git a/fs/isofs/export.c b/fs/isofs/export.c index 4af856a7fda..29f9753ae5e 100644 --- a/fs/isofs/export.c +++ b/fs/isofs/export.c @@ -42,16 +42,6 @@ isofs_export_iget(struct super_block *sb, return result; } -static struct dentry * -isofs_export_get_dentry(struct super_block *sb, void *vobjp) -{ - __u32 *objp = vobjp; - unsigned long block = objp[0]; - unsigned long offset = objp[1]; - __u32 generation = objp[2]; - return isofs_export_iget(sb, block, offset, generation); -} - /* This function is surprisingly simple. The trick is understanding * that "child" is always a directory. So, to find its parent, you * simply need to find its ".." entry, normalize its block and offset, @@ -182,43 +172,44 @@ isofs_export_encode_fh(struct dentry *dentry, return type; } +struct isofs_fid { + u32 block; + u16 offset; + u16 parent_offset; + u32 generation; + u32 parent_block; + u32 parent_generation; +}; -static struct dentry * -isofs_export_decode_fh(struct super_block *sb, - __u32 *fh32, - int fh_len, - int fileid_type, - int (*acceptable)(void *context, struct dentry *de), - void *context) +static struct dentry *isofs_fh_to_dentry(struct super_block *sb, + struct fid *fid, int fh_len, int fh_type) { - __u16 *fh16 = (__u16*)fh32; - __u32 child[3]; /* The child is what triggered all this. */ - __u32 parent[3]; /* The parent is just along for the ride. */ + struct isofs_fid *ifid = (struct isofs_fid *)fid; - if (fh_len < 3 || fileid_type > 2) + if (fh_len < 3 || fh_type > 2) return NULL; - child[0] = fh32[0]; - child[1] = fh16[2]; /* fh16 [sic] */ - child[2] = fh32[2]; - - parent[0] = 0; - parent[1] = 0; - parent[2] = 0; - if (fileid_type == 2) { - if (fh_len > 2) parent[0] = fh32[3]; - parent[1] = fh16[3]; /* fh16 [sic] */ - if (fh_len > 4) parent[2] = fh32[4]; - } - - return sb->s_export_op->find_exported_dentry(sb, child, parent, - acceptable, context); + return isofs_export_iget(sb, ifid->block, ifid->offset, + ifid->generation); } +static struct dentry *isofs_fh_to_parent(struct super_block *sb, + struct fid *fid, int fh_len, int fh_type) +{ + struct isofs_fid *ifid = (struct isofs_fid *)fid; + + if (fh_type != 2) + return NULL; + + return isofs_export_iget(sb, + fh_len > 2 ? ifid->parent_block : 0, + ifid->parent_offset, + fh_len > 4 ? 
ifid->parent_generation : 0); +} -struct export_operations isofs_export_ops = { - .decode_fh = isofs_export_decode_fh, +const struct export_operations isofs_export_ops = { .encode_fh = isofs_export_encode_fh, - .get_dentry = isofs_export_get_dentry, + .fh_to_dentry = isofs_fh_to_dentry, + .fh_to_parent = isofs_fh_to_parent, .get_parent = isofs_export_get_parent, }; diff --git a/fs/isofs/isofs.h b/fs/isofs/isofs.h index a07e67b1ea7..f3213f9f89a 100644 --- a/fs/isofs/isofs.h +++ b/fs/isofs/isofs.h @@ -178,4 +178,4 @@ isofs_normalize_block_and_offset(struct iso_directory_record* de, extern const struct inode_operations isofs_dir_inode_operations; extern const struct file_operations isofs_dir_operations; extern const struct address_space_operations isofs_symlink_aops; -extern struct export_operations isofs_export_ops; +extern const struct export_operations isofs_export_ops; diff --git a/fs/jfs/jfs_inode.h b/fs/jfs/jfs_inode.h index f0ec72b263f..8e2cf2cde18 100644 --- a/fs/jfs/jfs_inode.h +++ b/fs/jfs/jfs_inode.h @@ -18,6 +18,8 @@ #ifndef _H_JFS_INODE #define _H_JFS_INODE +struct fid; + extern struct inode *ialloc(struct inode *, umode_t); extern int jfs_fsync(struct file *, struct dentry *, int); extern int jfs_ioctl(struct inode *, struct file *, @@ -32,7 +34,10 @@ extern void jfs_truncate_nolock(struct inode *, loff_t); extern void jfs_free_zero_link(struct inode *); extern struct dentry *jfs_get_parent(struct dentry *dentry); extern void jfs_get_inode_flags(struct jfs_inode_info *); -extern struct dentry *jfs_get_dentry(struct super_block *sb, void *vobjp); +extern struct dentry *jfs_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type); +extern struct dentry *jfs_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type); extern void jfs_set_inode_flags(struct inode *); extern int jfs_get_block(struct inode *, sector_t, struct buffer_head *, int); diff --git a/fs/jfs/namei.c b/fs/jfs/namei.c index 932797ba433..4e0a8493cef 100644 --- a/fs/jfs/namei.c +++ b/fs/jfs/namei.c @@ -20,6 +20,7 @@ #include <linux/fs.h> #include <linux/ctype.h> #include <linux/quotaops.h> +#include <linux/exportfs.h> #include "jfs_incore.h" #include "jfs_superblock.h" #include "jfs_inode.h" @@ -1477,13 +1478,10 @@ static struct dentry *jfs_lookup(struct inode *dip, struct dentry *dentry, struc return dentry; } -struct dentry *jfs_get_dentry(struct super_block *sb, void *vobjp) +static struct inode *jfs_nfs_get_inode(struct super_block *sb, + u64 ino, u32 generation) { - __u32 *objp = vobjp; - unsigned long ino = objp[0]; - __u32 generation = objp[1]; struct inode *inode; - struct dentry *result; if (ino == 0) return ERR_PTR(-ESTALE); @@ -1493,20 +1491,25 @@ struct dentry *jfs_get_dentry(struct super_block *sb, void *vobjp) if (is_bad_inode(inode) || (generation && inode->i_generation != generation)) { - result = ERR_PTR(-ESTALE); - goto out_iput; + iput(inode); + return ERR_PTR(-ESTALE); } - result = d_alloc_anon(inode); - if (!result) { - result = ERR_PTR(-ENOMEM); - goto out_iput; - } - return result; + return inode; +} - out_iput: - iput(inode); - return result; +struct dentry *jfs_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_dentry(sb, fid, fh_len, fh_type, + jfs_nfs_get_inode); +} + +struct dentry *jfs_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_parent(sb, fid, fh_len, fh_type, + jfs_nfs_get_inode); } struct dentry *jfs_get_parent(struct 
dentry *dentry) diff --git a/fs/jfs/super.c b/fs/jfs/super.c index cff60c17194..314bb4ff1ba 100644 --- a/fs/jfs/super.c +++ b/fs/jfs/super.c @@ -48,7 +48,7 @@ MODULE_LICENSE("GPL"); static struct kmem_cache * jfs_inode_cachep; static const struct super_operations jfs_super_operations; -static struct export_operations jfs_export_operations; +static const struct export_operations jfs_export_operations; static struct file_system_type jfs_fs_type; #define MAX_COMMIT_THREADS 64 @@ -737,8 +737,9 @@ static const struct super_operations jfs_super_operations = { #endif }; -static struct export_operations jfs_export_operations = { - .get_dentry = jfs_get_dentry, +static const struct export_operations jfs_export_operations = { + .fh_to_dentry = jfs_fh_to_dentry, + .fh_to_parent = jfs_fh_to_parent, .get_parent = jfs_get_parent, }; diff --git a/fs/libfs.c b/fs/libfs.c index ae51481e45e..6e68b700958 100644 --- a/fs/libfs.c +++ b/fs/libfs.c @@ -8,6 +8,7 @@ #include <linux/mount.h> #include <linux/vfs.h> #include <linux/mutex.h> +#include <linux/exportfs.h> #include <asm/uaccess.h> @@ -678,6 +679,93 @@ out: return ret; } +/* + * This is what d_alloc_anon should have been. Once the exportfs + * argument transition has been finished I will update d_alloc_anon + * to this prototype and this wrapper will go away. --hch + */ +static struct dentry *exportfs_d_alloc(struct inode *inode) +{ + struct dentry *dentry; + + if (!inode) + return NULL; + if (IS_ERR(inode)) + return ERR_PTR(PTR_ERR(inode)); + + dentry = d_alloc_anon(inode); + if (!dentry) { + iput(inode); + dentry = ERR_PTR(-ENOMEM); + } + return dentry; +} + +/** + * generic_fh_to_dentry - generic helper for the fh_to_dentry export operation + * @sb: filesystem to do the file handle conversion on + * @fid: file handle to convert + * @fh_len: length of the file handle in bytes + * @fh_type: type of file handle + * @get_inode: filesystem callback to retrieve inode + * + * This function decodes @fid as long as it has one of the well-known + * Linux filehandle types and calls @get_inode on it to retrieve the + * inode for the object specified in the file handle. + */ +struct dentry *generic_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type, struct inode *(*get_inode) + (struct super_block *sb, u64 ino, u32 gen)) +{ + struct inode *inode = NULL; + + if (fh_len < 2) + return NULL; + + switch (fh_type) { + case FILEID_INO32_GEN: + case FILEID_INO32_GEN_PARENT: + inode = get_inode(sb, fid->i32.ino, fid->i32.gen); + break; + } + + return exportfs_d_alloc(inode); +} +EXPORT_SYMBOL_GPL(generic_fh_to_dentry); + +/** + * generic_fh_to_dentry - generic helper for the fh_to_parent export operation + * @sb: filesystem to do the file handle conversion on + * @fid: file handle to convert + * @fh_len: length of the file handle in bytes + * @fh_type: type of file handle + * @get_inode: filesystem callback to retrieve inode + * + * This function decodes @fid as long as it has one of the well-known + * Linux filehandle types and calls @get_inode on it to retrieve the + * inode for the _parent_ object specified in the file handle if it + * is specified in the file handle, or NULL otherwise. 
+ */ +struct dentry *generic_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type, struct inode *(*get_inode) + (struct super_block *sb, u64 ino, u32 gen)) +{ + struct inode *inode = NULL; + + if (fh_len <= 2) + return NULL; + + switch (fh_type) { + case FILEID_INO32_GEN_PARENT: + inode = get_inode(sb, fid->i32.parent_ino, + (fh_len > 3 ? fid->i32.parent_gen : 0)); + break; + } + + return exportfs_d_alloc(inode); +} +EXPORT_SYMBOL_GPL(generic_fh_to_parent); + EXPORT_SYMBOL(dcache_dir_close); EXPORT_SYMBOL(dcache_dir_lseek); EXPORT_SYMBOL(dcache_dir_open); diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c index 04b26672980..66d0aeb32a4 100644 --- a/fs/nfsd/export.c +++ b/fs/nfsd/export.c @@ -386,15 +386,13 @@ static int check_export(struct inode *inode, int flags, unsigned char *uuid) dprintk("exp_export: export of non-dev fs without fsid\n"); return -EINVAL; } - if (!inode->i_sb->s_export_op) { + + if (!inode->i_sb->s_export_op || + !inode->i_sb->s_export_op->fh_to_dentry) { dprintk("exp_export: export of invalid fs type.\n"); return -EINVAL; } - /* Ok, we can export it */; - if (!inode->i_sb->s_export_op->find_exported_dentry) - inode->i_sb->s_export_op->find_exported_dentry = - find_exported_dentry; return 0; } diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c index ebd03cc0747..6f03918018a 100644 --- a/fs/nfsd/nfs4recover.c +++ b/fs/nfsd/nfs4recover.c @@ -88,7 +88,7 @@ nfs4_make_rec_clidname(char *dname, struct xdr_netobj *clname) { struct xdr_netobj cksum; struct hash_desc desc; - struct scatterlist sg[1]; + struct scatterlist sg; __be32 status = nfserr_resource; dprintk("NFSD: nfs4_make_rec_clidname for %.*s\n", @@ -102,11 +102,9 @@ nfs4_make_rec_clidname(char *dname, struct xdr_netobj *clname) if (cksum.data == NULL) goto out; - sg[0].page = virt_to_page(clname->data); - sg[0].offset = offset_in_page(clname->data); - sg[0].length = clname->len; + sg_init_one(&sg, clname->data, clname->len); - if (crypto_hash_digest(&desc, sg, sg->length, cksum.data)) + if (crypto_hash_digest(&desc, &sg, sg.length, cksum.data)) goto out; md5_to_hex(dname, cksum.data); diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c index 7011d62acfc..4f712e97058 100644 --- a/fs/nfsd/nfsfh.c +++ b/fs/nfsd/nfsfh.c @@ -115,8 +115,7 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, int type, int access) dprintk("nfsd: fh_verify(%s)\n", SVCFH_fmt(fhp)); if (!fhp->fh_dentry) { - __u32 *datap=NULL; - __u32 tfh[3]; /* filehandle fragment for oldstyle filehandles */ + struct fid *fid = NULL, sfid; int fileid_type; int data_left = fh->fh_size/4; @@ -128,7 +127,6 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, int type, int access) if (fh->fh_version == 1) { int len; - datap = fh->fh_auth; if (--data_left<0) goto out; switch (fh->fh_auth_type) { case 0: break; @@ -144,9 +142,11 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, int type, int access) fh->fh_fsid[1] = fh->fh_fsid[2]; } if ((data_left -= len)<0) goto out; - exp = rqst_exp_find(rqstp, fh->fh_fsid_type, datap); - datap += len; + exp = rqst_exp_find(rqstp, fh->fh_fsid_type, + fh->fh_auth); + fid = (struct fid *)(fh->fh_auth + len); } else { + __u32 tfh[2]; dev_t xdev; ino_t xino; if (fh->fh_size != NFS_FHSIZE) @@ -190,22 +190,22 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, int type, int access) error = nfserr_badhandle; if (fh->fh_version != 1) { - tfh[0] = fh->ofh_ino; - tfh[1] = fh->ofh_generation; - tfh[2] = fh->ofh_dirino; - datap = tfh; + sfid.i32.ino = fh->ofh_ino; + sfid.i32.gen = 
fh->ofh_generation; + sfid.i32.parent_ino = fh->ofh_dirino; + fid = &sfid; data_left = 3; if (fh->ofh_dirino == 0) - fileid_type = 1; + fileid_type = FILEID_INO32_GEN; else - fileid_type = 2; + fileid_type = FILEID_INO32_GEN_PARENT; } else fileid_type = fh->fh_fileid_type; - if (fileid_type == 0) + if (fileid_type == FILEID_ROOT) dentry = dget(exp->ex_dentry); else { - dentry = exportfs_decode_fh(exp->ex_mnt, datap, + dentry = exportfs_decode_fh(exp->ex_mnt, fid, data_left, fileid_type, nfsd_acceptable, exp); } @@ -286,16 +286,21 @@ out: * an inode. In this case a call to fh_update should be made * before the fh goes out on the wire ... */ -static inline int _fh_update(struct dentry *dentry, struct svc_export *exp, - __u32 *datap, int *maxsize) +static void _fh_update(struct svc_fh *fhp, struct svc_export *exp, + struct dentry *dentry) { - if (dentry == exp->ex_dentry) { - *maxsize = 0; - return 0; - } + if (dentry != exp->ex_dentry) { + struct fid *fid = (struct fid *) + (fhp->fh_handle.fh_auth + fhp->fh_handle.fh_size/4 - 1); + int maxsize = (fhp->fh_maxsize - fhp->fh_handle.fh_size)/4; + int subtreecheck = !(exp->ex_flags & NFSEXP_NOSUBTREECHECK); - return exportfs_encode_fh(dentry, datap, maxsize, - !(exp->ex_flags & NFSEXP_NOSUBTREECHECK)); + fhp->fh_handle.fh_fileid_type = + exportfs_encode_fh(dentry, fid, &maxsize, subtreecheck); + fhp->fh_handle.fh_size += maxsize * 4; + } else { + fhp->fh_handle.fh_fileid_type = FILEID_ROOT; + } } /* @@ -457,12 +462,8 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry, datap += len/4; fhp->fh_handle.fh_size = 4 + len; - if (inode) { - int size = (fhp->fh_maxsize-len-4)/4; - fhp->fh_handle.fh_fileid_type = - _fh_update(dentry, exp, datap, &size); - fhp->fh_handle.fh_size += size*4; - } + if (inode) + _fh_update(fhp, exp, dentry); if (fhp->fh_handle.fh_fileid_type == 255) return nfserr_opnotsupp; } @@ -479,7 +480,6 @@ __be32 fh_update(struct svc_fh *fhp) { struct dentry *dentry; - __u32 *datap; if (!fhp->fh_dentry) goto out_bad; @@ -490,15 +490,10 @@ fh_update(struct svc_fh *fhp) if (fhp->fh_handle.fh_version != 1) { _fh_update_old(dentry, fhp->fh_export, &fhp->fh_handle); } else { - int size; - if (fhp->fh_handle.fh_fileid_type != 0) + if (fhp->fh_handle.fh_fileid_type != FILEID_ROOT) goto out; - datap = fhp->fh_handle.fh_auth+ - fhp->fh_handle.fh_size/4 -1; - size = (fhp->fh_maxsize - fhp->fh_handle.fh_size)/4; - fhp->fh_handle.fh_fileid_type = - _fh_update(dentry, fhp->fh_export, datap, &size); - fhp->fh_handle.fh_size += size*4; + + _fh_update(fhp, fhp->fh_export, dentry); if (fhp->fh_handle.fh_fileid_type == 255) return nfserr_opnotsupp; } diff --git a/fs/ntfs/namei.c b/fs/ntfs/namei.c index e93c6142b23..e1781c8b165 100644 --- a/fs/ntfs/namei.c +++ b/fs/ntfs/namei.c @@ -450,58 +450,40 @@ try_next: return parent_dent; } -/** - * ntfs_get_dentry - find a dentry for the inode from a file handle sub-fragment - * @sb: super block identifying the mounted ntfs volume - * @fh: the file handle sub-fragment - * - * Find a dentry for the inode given a file handle sub-fragment. This function - * is called from fs/exportfs/expfs.c::find_exported_dentry() which in turn is - * called from the default ->decode_fh() which is export_decode_fh() in the - * same file. The code is closely based on the default ->get_dentry() helper - * fs/exportfs/expfs.c::get_object(). - * - * The @fh contains two 32-bit unsigned values, the first one is the inode - * number and the second one is the inode generation. 
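With fh_verify() and _fh_update() converted, nfsd's side of the interface boils down to two calls: exportfs_encode_fh() fills a struct fid and returns the fileid type, and exportfs_decode_fh() turns type plus fid back into a dentry. A minimal sketch of that calling convention, with hypothetical names and an 'acceptable' callback that, unlike nfsd_acceptable, accepts everything:

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/exportfs.h>
#include <linux/fs.h>
#include <linux/mount.h>

/* Hypothetical 'acceptable' callback: take whatever dentry we are given. */
static int sketch_acceptable(void *context, struct dentry *dentry)
{
	return 1;
}

/*
 * Encode a connectable handle for @dentry into @fid, then decode it
 * again through the vfsmount it belongs to, mirroring the calls made
 * by _fh_update() and fh_verify() above.
 */
static struct dentry *sketch_roundtrip(struct vfsmount *mnt,
				       struct dentry *dentry,
				       struct fid *fid, int max_words)
{
	int type = exportfs_encode_fh(dentry, fid, &max_words, 1);

	if (type == 255)	/* filesystem could not encode into this buffer */
		return ERR_PTR(-EOPNOTSUPP);

	return exportfs_decode_fh(mnt, fid, max_words, type,
				  sketch_acceptable, NULL);
}

Note that the length is counted in 32-bit words throughout, which is why the nfsd code above keeps dividing and multiplying fh_size by four.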
- * - * Return the dentry on success or the error code on error (IS_ERR() is true). - */ -static struct dentry *ntfs_get_dentry(struct super_block *sb, void *fh) +static struct inode *ntfs_nfs_get_inode(struct super_block *sb, + u64 ino, u32 generation) { - struct inode *vi; - struct dentry *dent; - unsigned long ino = ((u32 *)fh)[0]; - u32 gen = ((u32 *)fh)[1]; + struct inode *inode; - ntfs_debug("Entering for inode 0x%lx, generation 0x%x.", ino, gen); - vi = ntfs_iget(sb, ino); - if (IS_ERR(vi)) { - ntfs_error(sb, "Failed to get inode 0x%lx.", ino); - return (struct dentry *)vi; - } - if (unlikely(is_bad_inode(vi) || vi->i_generation != gen)) { - /* We didn't find the right inode. */ - ntfs_error(sb, "Inode 0x%lx, bad count: %d %d or version 0x%x " - "0x%x.", vi->i_ino, vi->i_nlink, - atomic_read(&vi->i_count), vi->i_generation, - gen); - iput(vi); - return ERR_PTR(-ESTALE); - } - /* Now find a dentry. If possible, get a well-connected one. */ - dent = d_alloc_anon(vi); - if (unlikely(!dent)) { - iput(vi); - return ERR_PTR(-ENOMEM); + inode = ntfs_iget(sb, ino); + if (!IS_ERR(inode)) { + if (is_bad_inode(inode) || inode->i_generation != generation) { + iput(inode); + inode = ERR_PTR(-ESTALE); + } } - ntfs_debug("Done for inode 0x%lx, generation 0x%x.", ino, gen); - return dent; + + return inode; +} + +static struct dentry *ntfs_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_dentry(sb, fid, fh_len, fh_type, + ntfs_nfs_get_inode); +} + +static struct dentry *ntfs_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + return generic_fh_to_parent(sb, fid, fh_len, fh_type, + ntfs_nfs_get_inode); } /** * Export operations allowing NFS exporting of mounted NTFS partitions. * - * We use the default ->decode_fh() and ->encode_fh() for now. Note that they + * We use the default ->encode_fh() for now. Note that they * use 32 bits to store the inode number which is an unsigned long so on 64-bit * architectures is usually 64 bits so it would all fail horribly on huge * volumes. I guess we need to define our own encode and decode fh functions @@ -517,10 +499,9 @@ static struct dentry *ntfs_get_dentry(struct super_block *sb, void *fh) * allowing the inode number 0 which is used in NTFS for the system file $MFT * and due to using iget() whereas NTFS needs ntfs_iget(). */ -struct export_operations ntfs_export_ops = { +const struct export_operations ntfs_export_ops = { .get_parent = ntfs_get_parent, /* Find the parent of a given directory. */ - .get_dentry = ntfs_get_dentry, /* Find a dentry for the inode - given a file handle - sub-fragment. 
*/ + .fh_to_dentry = ntfs_fh_to_dentry, + .fh_to_parent = ntfs_fh_to_parent, }; diff --git a/fs/ntfs/ntfs.h b/fs/ntfs/ntfs.h index d73f5a9ac34..d6a340bf80f 100644 --- a/fs/ntfs/ntfs.h +++ b/fs/ntfs/ntfs.h @@ -69,7 +69,7 @@ extern const struct inode_operations ntfs_dir_inode_ops; extern const struct file_operations ntfs_empty_file_ops; extern const struct inode_operations ntfs_empty_inode_ops; -extern struct export_operations ntfs_export_ops; +extern const struct export_operations ntfs_export_ops; /** * NTFS_SB - return the ntfs volume given a vfs super block diff --git a/fs/ocfs2/export.c b/fs/ocfs2/export.c index c3bbc198f9c..535bfa9568a 100644 --- a/fs/ocfs2/export.c +++ b/fs/ocfs2/export.c @@ -45,9 +45,9 @@ struct ocfs2_inode_handle u32 ih_generation; }; -static struct dentry *ocfs2_get_dentry(struct super_block *sb, void *vobjp) +static struct dentry *ocfs2_get_dentry(struct super_block *sb, + struct ocfs2_inode_handle *handle) { - struct ocfs2_inode_handle *handle = vobjp; struct inode *inode; struct dentry *result; @@ -194,54 +194,37 @@ bail: return type; } -static struct dentry *ocfs2_decode_fh(struct super_block *sb, u32 *fh_in, - int fh_len, int fileid_type, - int (*acceptable)(void *context, - struct dentry *de), - void *context) +static struct dentry *ocfs2_fh_to_dentry(struct super_block *sb, + struct fid *fid, int fh_len, int fh_type) { - struct ocfs2_inode_handle handle, parent; - struct dentry *ret = NULL; - __le32 *fh = (__force __le32 *) fh_in; - - mlog_entry("(0x%p, 0x%p, %d, %d, 0x%p, 0x%p)\n", - sb, fh, fh_len, fileid_type, acceptable, context); - - if (fh_len < 3 || fileid_type > 2) - goto bail; - - if (fileid_type == 2) { - if (fh_len < 6) - goto bail; - - parent.ih_blkno = (u64)le32_to_cpu(fh[3]) << 32; - parent.ih_blkno |= (u64)le32_to_cpu(fh[4]); - parent.ih_generation = le32_to_cpu(fh[5]); + struct ocfs2_inode_handle handle; - mlog(0, "Decoding parent: blkno: %llu, generation: %u\n", - (unsigned long long)parent.ih_blkno, - parent.ih_generation); - } + if (fh_len < 3 || fh_type > 2) + return NULL; - handle.ih_blkno = (u64)le32_to_cpu(fh[0]) << 32; - handle.ih_blkno |= (u64)le32_to_cpu(fh[1]); - handle.ih_generation = le32_to_cpu(fh[2]); + handle.ih_blkno = (u64)le32_to_cpu(fid->raw[0]) << 32; + handle.ih_blkno |= (u64)le32_to_cpu(fid->raw[1]); + handle.ih_generation = le32_to_cpu(fid->raw[2]); + return ocfs2_get_dentry(sb, &handle); +} - mlog(0, "Encoding fh: blkno: %llu, generation: %u\n", - (unsigned long long)handle.ih_blkno, handle.ih_generation); +static struct dentry *ocfs2_fh_to_parent(struct super_block *sb, + struct fid *fid, int fh_len, int fh_type) +{ + struct ocfs2_inode_handle parent; - ret = ocfs2_export_ops.find_exported_dentry(sb, &handle, &parent, - acceptable, context); + if (fh_type != 2 || fh_len < 6) + return NULL; -bail: - mlog_exit_ptr(ret); - return ret; + parent.ih_blkno = (u64)le32_to_cpu(fid->raw[3]) << 32; + parent.ih_blkno |= (u64)le32_to_cpu(fid->raw[4]); + parent.ih_generation = le32_to_cpu(fid->raw[5]); + return ocfs2_get_dentry(sb, &parent); } -struct export_operations ocfs2_export_ops = { - .decode_fh = ocfs2_decode_fh, +const struct export_operations ocfs2_export_ops = { .encode_fh = ocfs2_encode_fh, - + .fh_to_dentry = ocfs2_fh_to_dentry, + .fh_to_parent = ocfs2_fh_to_parent, .get_parent = ocfs2_get_parent, - .get_dentry = ocfs2_get_dentry, }; diff --git a/fs/ocfs2/export.h b/fs/ocfs2/export.h index e08bed9e45a..41a738678c3 100644 --- a/fs/ocfs2/export.h +++ b/fs/ocfs2/export.h @@ -28,6 +28,6 @@ #include <linux/exportfs.h> 
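ocfs2, fat and gfs2 show the other half of the conversion: filesystems whose handles do not fit the generic ino/generation layout keep decoding fid->raw[] themselves and signal "not my handle" by returning NULL, which exportfs_decode_fh() maps to -ESTALE. A sketch of that shape, with a hypothetical filesystem and lookup helper:

#include <linux/exportfs.h>
#include <linux/fs.h>

/* Hypothetical: resolve an on-disk key plus generation to a dentry. */
extern struct dentry *barfs_lookup_by_key(struct super_block *sb,
					  u32 key, u32 generation);

static struct dentry *barfs_fh_to_dentry(struct super_block *sb,
					 struct fid *fid, int fh_len, int fh_type)
{
	/* Unknown type or short handle: let the caller turn NULL into -ESTALE. */
	if (fh_type != 3 || fh_len < 2)
		return NULL;

	return barfs_lookup_by_key(sb, fid->raw[0], fid->raw[1]);
}

Real errors still travel back as ERR_PTR() values; the NULL return is reserved for handles the filesystem does not recognise at all.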
-extern struct export_operations ocfs2_export_ops; +extern const struct export_operations ocfs2_export_ops; #endif /* OCFS2_EXPORT_H */ diff --git a/fs/proc/base.c b/fs/proc/base.c index 39a3d7c969c..aeaf0d0f2f5 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -2255,27 +2255,6 @@ static const struct inode_operations proc_tgid_base_inode_operations = { .setattr = proc_setattr, }; -/** - * proc_flush_task - Remove dcache entries for @task from the /proc dcache. - * - * @task: task that should be flushed. - * - * Looks in the dcache for - * /proc/@pid - * /proc/@tgid/task/@pid - * if either directory is present flushes it and all of it'ts children - * from the dcache. - * - * It is safe and reasonable to cache /proc entries for a task until - * that task exits. After that they just clog up the dcache with - * useless entries, possibly causing useful dcache entries to be - * flushed instead. This routine is proved to flush those useless - * dcache entries at process exit time. - * - * NOTE: This routine is just an optimization so it does not guarantee - * that no dcache entries will exist at process exit time it - * just makes it very unlikely that any will persist. - */ static void proc_flush_task_mnt(struct vfsmount *mnt, pid_t pid, pid_t tgid) { struct dentry *dentry, *leader, *dir; @@ -2322,10 +2301,29 @@ out: return; } -/* - * when flushing dentries from proc one need to flush them from global +/** + * proc_flush_task - Remove dcache entries for @task from the /proc dcache. + * @task: task that should be flushed. + * + * When flushing dentries from proc, one needs to flush them from global * proc (proc_mnt) and from all the namespaces' procs this task was seen - * in. this call is supposed to make all this job. + * in. This call is supposed to do all of this job. + * + * Looks in the dcache for + * /proc/@pid + * /proc/@tgid/task/@pid + * if either directory is present flushes it and all of it'ts children + * from the dcache. + * + * It is safe and reasonable to cache /proc entries for a task until + * that task exits. After that they just clog up the dcache with + * useless entries, possibly causing useful dcache entries to be + * flushed instead. This routine is proved to flush those useless + * dcache entries at process exit time. + * + * NOTE: This routine is just an optimization so it does not guarantee + * that no dcache entries will exist at process exit time it + * just makes it very unlikely that any will persist. 
*/ void proc_flush_task(struct task_struct *task) diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c index a991af96f3f..231fd5ccadc 100644 --- a/fs/reiserfs/inode.c +++ b/fs/reiserfs/inode.c @@ -1515,19 +1515,20 @@ struct inode *reiserfs_iget(struct super_block *s, const struct cpu_key *key) return inode; } -struct dentry *reiserfs_get_dentry(struct super_block *sb, void *vobjp) +static struct dentry *reiserfs_get_dentry(struct super_block *sb, + u32 objectid, u32 dir_id, u32 generation) + { - __u32 *data = vobjp; struct cpu_key key; struct dentry *result; struct inode *inode; - key.on_disk_key.k_objectid = data[0]; - key.on_disk_key.k_dir_id = data[1]; + key.on_disk_key.k_objectid = objectid; + key.on_disk_key.k_dir_id = dir_id; reiserfs_write_lock(sb); inode = reiserfs_iget(sb, &key); - if (inode && !IS_ERR(inode) && data[2] != 0 && - data[2] != inode->i_generation) { + if (inode && !IS_ERR(inode) && generation != 0 && + generation != inode->i_generation) { iput(inode); inode = NULL; } @@ -1544,14 +1545,9 @@ struct dentry *reiserfs_get_dentry(struct super_block *sb, void *vobjp) return result; } -struct dentry *reiserfs_decode_fh(struct super_block *sb, __u32 * data, - int len, int fhtype, - int (*acceptable) (void *contect, - struct dentry * de), - void *context) +struct dentry *reiserfs_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) { - __u32 obj[3], parent[3]; - /* fhtype happens to reflect the number of u32s encoded. * due to a bug in earlier code, fhtype might indicate there * are more u32s then actually fitted. @@ -1564,32 +1560,28 @@ struct dentry *reiserfs_decode_fh(struct super_block *sb, __u32 * data, * 6 - as above plus generation of directory * 6 does not fit in NFSv2 handles */ - if (fhtype > len) { - if (fhtype != 6 || len != 5) + if (fh_type > fh_len) { + if (fh_type != 6 || fh_len != 5) reiserfs_warning(sb, - "nfsd/reiserfs, fhtype=%d, len=%d - odd", - fhtype, len); - fhtype = 5; + "nfsd/reiserfs, fhtype=%d, len=%d - odd", + fh_type, fh_len); + fh_type = 5; } - obj[0] = data[0]; - obj[1] = data[1]; - if (fhtype == 3 || fhtype >= 5) - obj[2] = data[2]; - else - obj[2] = 0; /* generation number */ + return reiserfs_get_dentry(sb, fid->raw[0], fid->raw[1], + (fh_type == 3 || fh_type >= 5) ? fid->raw[2] : 0); +} - if (fhtype >= 4) { - parent[0] = data[fhtype >= 5 ? 3 : 2]; - parent[1] = data[fhtype >= 5 ? 4 : 3]; - if (fhtype == 6) - parent[2] = data[5]; - else - parent[2] = 0; - } - return sb->s_export_op->find_exported_dentry(sb, obj, - fhtype < 4 ? NULL : parent, - acceptable, context); +struct dentry *reiserfs_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type) +{ + if (fh_type < 4) + return NULL; + + return reiserfs_get_dentry(sb, + (fh_type >= 5) ? fid->raw[3] : fid->raw[2], + (fh_type >= 5) ? fid->raw[4] : fid->raw[3], + (fh_type == 6) ? 
fid->raw[5] : 0); } int reiserfs_encode_fh(struct dentry *dentry, __u32 * data, int *lenp, diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c index 98c3781bc06..5cd85fe5df5 100644 --- a/fs/reiserfs/super.c +++ b/fs/reiserfs/super.c @@ -661,11 +661,11 @@ static struct quotactl_ops reiserfs_qctl_operations = { }; #endif -static struct export_operations reiserfs_export_ops = { +static const struct export_operations reiserfs_export_ops = { .encode_fh = reiserfs_encode_fh, - .decode_fh = reiserfs_decode_fh, + .fh_to_dentry = reiserfs_fh_to_dentry, + .fh_to_parent = reiserfs_fh_to_parent, .get_parent = reiserfs_get_parent, - .get_dentry = reiserfs_get_dentry, }; /* this struct is used in reiserfs_getopt () for containing the value for those diff --git a/fs/xfs/linux-2.6/xfs_export.c b/fs/xfs/linux-2.6/xfs_export.c index 3586c7a28d2..15bd4948832 100644 --- a/fs/xfs/linux-2.6/xfs_export.c +++ b/fs/xfs/linux-2.6/xfs_export.c @@ -33,62 +33,25 @@ static struct dentry dotdot = { .d_name.name = "..", .d_name.len = 2, }; /* - * XFS encodes and decodes the fileid portion of NFS filehandles - * itself instead of letting the generic NFS code do it. This - * allows filesystems with 64 bit inode numbers to be exported. - * - * Note that a side effect is that xfs_vget() won't be passed a - * zero inode/generation pair under normal circumstances. As - * however a malicious client could send us such data, the check - * remains in that code. + * Note that we only accept fileids which are long enough rather than allow + * the parent generation number to default to zero. XFS considers zero a + * valid generation number not an invalid/wildcard value. */ - -STATIC struct dentry * -xfs_fs_decode_fh( - struct super_block *sb, - __u32 *fh, - int fh_len, - int fileid_type, - int (*acceptable)( - void *context, - struct dentry *de), - void *context) +static int xfs_fileid_length(int fileid_type) { - xfs_fid_t ifid; - xfs_fid_t pfid; - void *parent = NULL; - int is64 = 0; - __u32 *p = fh; - -#if XFS_BIG_INUMS - is64 = (fileid_type & XFS_FILEID_TYPE_64FLAG); - fileid_type &= ~XFS_FILEID_TYPE_64FLAG; -#endif - - /* - * Note that we only accept fileids which are long enough - * rather than allow the parent generation number to default - * to zero. XFS considers zero a valid generation number not - * an invalid/wildcard value. There's little point printk'ing - * a warning here as we don't have the client information - * which would make such a warning useful. 
- */ - if (fileid_type > 2 || - fh_len < xfs_fileid_length((fileid_type == 2), is64)) - return NULL; - - p = xfs_fileid_decode_fid2(p, &ifid, is64); - - if (fileid_type == 2) { - p = xfs_fileid_decode_fid2(p, &pfid, is64); - parent = &pfid; + switch (fileid_type) { + case FILEID_INO32_GEN: + return 2; + case FILEID_INO32_GEN_PARENT: + return 4; + case FILEID_INO32_GEN | XFS_FILEID_TYPE_64FLAG: + return 3; + case FILEID_INO32_GEN_PARENT | XFS_FILEID_TYPE_64FLAG: + return 6; } - - fh = (__u32 *)&ifid; - return sb->s_export_op->find_exported_dentry(sb, fh, parent, acceptable, context); + return 255; /* invalid */ } - STATIC int xfs_fs_encode_fh( struct dentry *dentry, @@ -96,21 +59,21 @@ xfs_fs_encode_fh( int *max_len, int connectable) { + struct fid *fid = (struct fid *)fh; + struct xfs_fid64 *fid64 = (struct xfs_fid64 *)fh; struct inode *inode = dentry->d_inode; - int type = 1; - __u32 *p = fh; + int fileid_type; int len; - int is64 = 0; -#if XFS_BIG_INUMS - if (!(XFS_M(inode->i_sb)->m_flags & XFS_MOUNT_SMALL_INUMS)) { - /* filesystem may contain 64bit inode numbers */ - is64 = XFS_FILEID_TYPE_64FLAG; - } -#endif /* Directories don't need their parent encoded, they have ".." */ if (S_ISDIR(inode->i_mode)) - connectable = 0; + fileid_type = FILEID_INO32_GEN; + else + fileid_type = FILEID_INO32_GEN_PARENT; + + /* filesystem may contain 64bit inode numbers */ + if (!(XFS_M(inode->i_sb)->m_flags & XFS_MOUNT_SMALL_INUMS)) + fileid_type |= XFS_FILEID_TYPE_64FLAG; /* * Only encode if there is enough space given. In practice @@ -118,39 +81,118 @@ xfs_fs_encode_fh( * over NFSv2 with the subtree_check export option; the other * seven combinations work. The real answer is "don't use v2". */ - len = xfs_fileid_length(connectable, is64); + len = xfs_fileid_length(fileid_type); if (*max_len < len) return 255; *max_len = len; - p = xfs_fileid_encode_inode(p, inode, is64); - if (connectable) { + switch (fileid_type) { + case FILEID_INO32_GEN_PARENT: spin_lock(&dentry->d_lock); - p = xfs_fileid_encode_inode(p, dentry->d_parent->d_inode, is64); + fid->i32.parent_ino = dentry->d_parent->d_inode->i_ino; + fid->i32.parent_gen = dentry->d_parent->d_inode->i_generation; spin_unlock(&dentry->d_lock); - type = 2; + /*FALLTHRU*/ + case FILEID_INO32_GEN: + fid->i32.ino = inode->i_ino; + fid->i32.gen = inode->i_generation; + break; + case FILEID_INO32_GEN_PARENT | XFS_FILEID_TYPE_64FLAG: + spin_lock(&dentry->d_lock); + fid64->parent_ino = dentry->d_parent->d_inode->i_ino; + fid64->parent_gen = dentry->d_parent->d_inode->i_generation; + spin_unlock(&dentry->d_lock); + /*FALLTHRU*/ + case FILEID_INO32_GEN | XFS_FILEID_TYPE_64FLAG: + fid64->ino = inode->i_ino; + fid64->gen = inode->i_generation; + break; } - BUG_ON((p - fh) != len); - return type | is64; + + return fileid_type; } -STATIC struct dentry * -xfs_fs_get_dentry( +STATIC struct inode * +xfs_nfs_get_inode( struct super_block *sb, - void *data) -{ + u64 ino, + u32 generation) + { + xfs_fid_t xfid; bhv_vnode_t *vp; - struct inode *inode; - struct dentry *result; int error; - error = xfs_vget(XFS_M(sb), &vp, data); - if (error || vp == NULL) - return ERR_PTR(-ESTALE) ; + xfid.fid_len = sizeof(xfs_fid_t) - sizeof(xfid.fid_len); + xfid.fid_pad = 0; + xfid.fid_ino = ino; + xfid.fid_gen = generation; - inode = vn_to_inode(vp); + error = xfs_vget(XFS_M(sb), &vp, &xfid); + if (error) + return ERR_PTR(-error); + + return vp ? 
vn_to_inode(vp) : NULL; +} + +STATIC struct dentry * +xfs_fs_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fileid_type) +{ + struct xfs_fid64 *fid64 = (struct xfs_fid64 *)fid; + struct inode *inode = NULL; + struct dentry *result; + + if (fh_len < xfs_fileid_length(fileid_type)) + return NULL; + + switch (fileid_type) { + case FILEID_INO32_GEN_PARENT: + case FILEID_INO32_GEN: + inode = xfs_nfs_get_inode(sb, fid->i32.ino, fid->i32.gen); + break; + case FILEID_INO32_GEN_PARENT | XFS_FILEID_TYPE_64FLAG: + case FILEID_INO32_GEN | XFS_FILEID_TYPE_64FLAG: + inode = xfs_nfs_get_inode(sb, fid64->ino, fid64->gen); + break; + } + + if (!inode) + return NULL; + if (IS_ERR(inode)) + return ERR_PTR(PTR_ERR(inode)); + result = d_alloc_anon(inode); + if (!result) { + iput(inode); + return ERR_PTR(-ENOMEM); + } + return result; +} + +STATIC struct dentry * +xfs_fs_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fileid_type) +{ + struct xfs_fid64 *fid64 = (struct xfs_fid64 *)fid; + struct inode *inode = NULL; + struct dentry *result; + + switch (fileid_type) { + case FILEID_INO32_GEN_PARENT: + inode = xfs_nfs_get_inode(sb, fid->i32.parent_ino, + fid->i32.parent_gen); + break; + case FILEID_INO32_GEN_PARENT | XFS_FILEID_TYPE_64FLAG: + inode = xfs_nfs_get_inode(sb, fid64->parent_ino, + fid64->parent_gen); + break; + } + + if (!inode) + return NULL; + if (IS_ERR(inode)) + return ERR_PTR(PTR_ERR(inode)); result = d_alloc_anon(inode); - if (!result) { + if (!result) { iput(inode); return ERR_PTR(-ENOMEM); } @@ -178,9 +220,9 @@ xfs_fs_get_parent( return parent; } -struct export_operations xfs_export_operations = { - .decode_fh = xfs_fs_decode_fh, +const struct export_operations xfs_export_operations = { .encode_fh = xfs_fs_encode_fh, + .fh_to_dentry = xfs_fs_fh_to_dentry, + .fh_to_parent = xfs_fs_fh_to_parent, .get_parent = xfs_fs_get_parent, - .get_dentry = xfs_fs_get_dentry, }; diff --git a/fs/xfs/linux-2.6/xfs_export.h b/fs/xfs/linux-2.6/xfs_export.h index 2f36071a86f..3272b6ae7a3 100644 --- a/fs/xfs/linux-2.6/xfs_export.h +++ b/fs/xfs/linux-2.6/xfs_export.h @@ -59,50 +59,14 @@ * a subdirectory) or use the "fsid" export option. */ +struct xfs_fid64 { + u64 ino; + u32 gen; + u64 parent_ino; + u32 parent_gen; +} __attribute__((packed)); + /* This flag goes on the wire. Don't play with it. */ #define XFS_FILEID_TYPE_64FLAG 0x80 /* NFS fileid has 64bit inodes */ -/* Calculate the length in u32 units of the fileid data */ -static inline int -xfs_fileid_length(int hasparent, int is64) -{ - return hasparent ? (is64 ? 6 : 4) : (is64 ? 3 : 2); -} - -/* - * Decode encoded inode information (either for the inode itself - * or the parent) into an xfs_fid_t structure. Advances and - * returns the new data pointer - */ -static inline __u32 * -xfs_fileid_decode_fid2(__u32 *p, xfs_fid_t *fid, int is64) -{ - fid->fid_len = sizeof(xfs_fid_t) - sizeof(fid->fid_len); - fid->fid_pad = 0; - fid->fid_ino = *p++; -#if XFS_BIG_INUMS - if (is64) - fid->fid_ino |= (((__u64)(*p++)) << 32); -#endif - fid->fid_gen = *p++; - return p; -} - -/* - * Encode inode information (either for the inode itself or the - * parent) into a fileid buffer. Advances and returns the new - * data pointer. 
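/*
 * Editorial sketch, not part of this patch: the shape a filesystem's NFS
 * export hooks take with the new API.  struct fid, FILEID_INO32_GEN and
 * struct export_operations come from <linux/exportfs.h>; the fid layout
 * check and the d_alloc_anon() error handling mirror the reiserfs and XFS
 * conversions above.  "examplefs_iget" is a hypothetical lookup helper,
 * not a real kernel function.
 */
static struct dentry *examplefs_fh_to_dentry(struct super_block *sb,
                                             struct fid *fid,
                                             int fh_len, int fh_type)
{
        struct inode *inode;
        struct dentry *result;

        /* a 32-bit ino/gen pair needs at least two words in the handle */
        if (fh_type != FILEID_INO32_GEN || fh_len < 2)
                return NULL;

        inode = examplefs_iget(sb, fid->i32.ino, fid->i32.gen);
        if (IS_ERR(inode))
                return ERR_PTR(PTR_ERR(inode));
        if (!inode)
                return NULL;

        /* get a disconnected dentry that the NFS code can reconnect later */
        result = d_alloc_anon(inode);
        if (!result) {
                iput(inode);
                return ERR_PTR(-ENOMEM);
        }
        return result;
}

static const struct export_operations examplefs_export_ops = {
        .fh_to_dentry   = examplefs_fh_to_dentry,
        /* .fh_to_parent, .encode_fh and .get_parent are wired up the same way */
};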
- */ -static inline __u32 * -xfs_fileid_encode_inode(__u32 *p, struct inode *inode, int is64) -{ - *p++ = (__u32)inode->i_ino; -#if XFS_BIG_INUMS - if (is64) - *p++ = (__u32)(inode->i_ino >> 32); -#endif - *p++ = inode->i_generation; - return p; -} - #endif /* __XFS_EXPORT_H__ */ diff --git a/fs/xfs/linux-2.6/xfs_super.h b/fs/xfs/linux-2.6/xfs_super.h index c78c23310fe..3efcf45b14a 100644 --- a/fs/xfs/linux-2.6/xfs_super.h +++ b/fs/xfs/linux-2.6/xfs_super.h @@ -118,7 +118,7 @@ extern int xfs_blkdev_get(struct xfs_mount *, const char *, extern void xfs_blkdev_put(struct block_device *); extern void xfs_blkdev_issue_flush(struct xfs_buftarg *); -extern struct export_operations xfs_export_operations; +extern const struct export_operations xfs_export_operations; #define XFS_M(sb) ((struct xfs_mount *)((sb)->s_fs_info)) diff --git a/include/acpi/actbl1.h b/include/acpi/actbl1.h index 4e5d3ca53a8..a1b1b2ee3e5 100644 --- a/include/acpi/actbl1.h +++ b/include/acpi/actbl1.h @@ -257,7 +257,8 @@ struct acpi_table_dbgp { struct acpi_table_dmar { struct acpi_table_header header; /* Common ACPI table header */ u8 width; /* Host Address Width */ - u8 reserved[11]; + u8 flags; + u8 reserved[10]; }; /* DMAR subtable header */ @@ -265,8 +266,6 @@ struct acpi_table_dmar { struct acpi_dmar_header { u16 type; u16 length; - u8 flags; - u8 reserved[3]; }; /* Values for subtable type in struct acpi_dmar_header */ @@ -274,13 +273,15 @@ struct acpi_dmar_header { enum acpi_dmar_type { ACPI_DMAR_TYPE_HARDWARE_UNIT = 0, ACPI_DMAR_TYPE_RESERVED_MEMORY = 1, - ACPI_DMAR_TYPE_RESERVED = 2 /* 2 and greater are reserved */ + ACPI_DMAR_TYPE_ATSR = 2, + ACPI_DMAR_TYPE_RESERVED = 3 /* 3 and greater are reserved */ }; struct acpi_dmar_device_scope { u8 entry_type; u8 length; - u8 segment; + u16 reserved; + u8 enumeration_id; u8 bus; }; @@ -290,7 +291,14 @@ enum acpi_dmar_scope_type { ACPI_DMAR_SCOPE_TYPE_NOT_USED = 0, ACPI_DMAR_SCOPE_TYPE_ENDPOINT = 1, ACPI_DMAR_SCOPE_TYPE_BRIDGE = 2, - ACPI_DMAR_SCOPE_TYPE_RESERVED = 3 /* 3 and greater are reserved */ + ACPI_DMAR_SCOPE_TYPE_IOAPIC = 3, + ACPI_DMAR_SCOPE_TYPE_HPET = 4, + ACPI_DMAR_SCOPE_TYPE_RESERVED = 5 /* 5 and greater are reserved */ +}; + +struct acpi_dmar_pci_path { + u8 dev; + u8 fn; }; /* @@ -301,6 +309,9 @@ enum acpi_dmar_scope_type { struct acpi_dmar_hardware_unit { struct acpi_dmar_header header; + u8 flags; + u8 reserved; + u16 segment; u64 address; /* Register Base Address */ }; @@ -312,7 +323,9 @@ struct acpi_dmar_hardware_unit { struct acpi_dmar_reserved_memory { struct acpi_dmar_header header; - u64 address; /* 4_k aligned base address */ + u16 reserved; + u16 segment; + u64 base_address; /* 4_k aligned base address */ u64 end_address; /* 4_k aligned limit address */ }; diff --git a/include/asm-alpha/scatterlist.h b/include/asm-alpha/scatterlist.h index 917365405e8..440747ca634 100644 --- a/include/asm-alpha/scatterlist.h +++ b/include/asm-alpha/scatterlist.h @@ -5,7 +5,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; diff --git a/include/asm-arm/dma-mapping.h b/include/asm-arm/dma-mapping.h index 1eb8aac4322..e99406a7bec 100644 --- a/include/asm-arm/dma-mapping.h +++ b/include/asm-arm/dma-mapping.h @@ -5,7 +5,7 @@ #include <linux/mm.h> /* need struct page */ -#include <asm/scatterlist.h> +#include <linux/scatterlist.h> /* * DMA-consistent mapping functions. 
These allocate/free a region of @@ -274,8 +274,8 @@ dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, for (i = 0; i < nents; i++, sg++) { char *virt; - sg->dma_address = page_to_dma(dev, sg->page) + sg->offset; - virt = page_address(sg->page) + sg->offset; + sg->dma_address = page_to_dma(dev, sg_page(sg)) + sg->offset; + virt = sg_virt(sg); if (!arch_is_coherent()) dma_cache_maint(virt, sg->length, dir); @@ -371,7 +371,7 @@ dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents, int i; for (i = 0; i < nents; i++, sg++) { - char *virt = page_address(sg->page) + sg->offset; + char *virt = sg_virt(sg); if (!arch_is_coherent()) dma_cache_maint(virt, sg->length, dir); } @@ -384,7 +384,7 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nents, int i; for (i = 0; i < nents; i++, sg++) { - char *virt = page_address(sg->page) + sg->offset; + char *virt = sg_virt(sg); if (!arch_is_coherent()) dma_cache_maint(virt, sg->length, dir); } diff --git a/include/asm-arm/scatterlist.h b/include/asm-arm/scatterlist.h index de2f65eb42e..ca0a37d0340 100644 --- a/include/asm-arm/scatterlist.h +++ b/include/asm-arm/scatterlist.h @@ -5,7 +5,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; /* buffer page */ +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; /* buffer offset */ dma_addr_t dma_address; /* dma address */ unsigned int length; /* length */ diff --git a/include/asm-avr32/dma-mapping.h b/include/asm-avr32/dma-mapping.h index 81e342636ac..a7131630c05 100644 --- a/include/asm-avr32/dma-mapping.h +++ b/include/asm-avr32/dma-mapping.h @@ -217,8 +217,8 @@ dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, for (i = 0; i < nents; i++) { char *virt; - sg[i].dma_address = page_to_bus(sg[i].page) + sg[i].offset; - virt = page_address(sg[i].page) + sg[i].offset; + sg[i].dma_address = page_to_bus(sg_page(&sg[i])) + sg[i].offset; + virt = sg_virt(&sg[i]); dma_cache_sync(dev, virt, sg[i].length, direction); } @@ -327,8 +327,7 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int i; for (i = 0; i < nents; i++) { - dma_cache_sync(dev, page_address(sg[i].page) + sg[i].offset, - sg[i].length, direction); + dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, direction); } } diff --git a/include/asm-avr32/scatterlist.h b/include/asm-avr32/scatterlist.h index c6d5ce3b3a2..377320e3bd1 100644 --- a/include/asm-avr32/scatterlist.h +++ b/include/asm-avr32/scatterlist.h @@ -4,7 +4,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; dma_addr_t dma_address; unsigned int length; diff --git a/include/asm-blackfin/scatterlist.h b/include/asm-blackfin/scatterlist.h index 60e07b92044..04f448711cd 100644 --- a/include/asm-blackfin/scatterlist.h +++ b/include/asm-blackfin/scatterlist.h @@ -4,7 +4,10 @@ #include <linux/mm.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; dma_addr_t dma_address; unsigned int length; @@ -17,7 +20,6 @@ struct scatterlist { * returns, or alternatively stop on the first sg_dma_len(sg) which * is 0. 
*/ -#define sg_address(sg) (page_address((sg)->page) + (sg)->offset) #define sg_dma_address(sg) ((sg)->dma_address) #define sg_dma_len(sg) ((sg)->length) diff --git a/include/asm-cris/scatterlist.h b/include/asm-cris/scatterlist.h index 4bdc44c4ac3..faff53ad1f9 100644 --- a/include/asm-cris/scatterlist.h +++ b/include/asm-cris/scatterlist.h @@ -2,11 +2,14 @@ #define __ASM_CRIS_SCATTERLIST_H struct scatterlist { +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif char * address; /* Location data is to be transferred to */ unsigned int length; /* The following is i386 highmem junk - not used by us */ - struct page * page; /* Location for highmem page, if any */ + unsigned long page_link; unsigned int offset;/* for highmem, page offset */ }; diff --git a/include/asm-frv/scatterlist.h b/include/asm-frv/scatterlist.h index 8e827fa853f..99ba76edc42 100644 --- a/include/asm-frv/scatterlist.h +++ b/include/asm-frv/scatterlist.h @@ -4,25 +4,28 @@ #include <asm/types.h> /* - * Drivers must set either ->address or (preferred) ->page and ->offset + * Drivers must set either ->address or (preferred) page and ->offset * to indicate where data must be transferred to/from. * - * Using ->page is recommended since it handles highmem data as well as + * Using page is recommended since it handles highmem data as well as * low mem. ->address is restricted to data which has a virtual mapping, and - * it will go away in the future. Updating to ->page can be automated very + * it will go away in the future. Updating to page can be automated very * easily -- something like * * sg->address = some_ptr; * * can be rewritten as * - * sg->page = virt_to_page(some_ptr); + * sg_set_page(virt_to_page(some_ptr)); * sg->offset = (unsigned long) some_ptr & ~PAGE_MASK; * * and that's it. There's no excuse for not highmem enabling YOUR driver. 
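 *
 * (Editorial aside, not part of the original comment or of this patch:
 * after the scatterlist conversion in this series, open-coded accesses
 * elsewhere in a driver change the same way, e.g.
 *
 *      virt = page_address(sg->page) + sg->offset;
 *      page = sg->page;
 *
 * become
 *
 *      virt = sg_virt(sg);
 *      page = sg_page(sg);
 *
 * exactly as the arm, avr32 and powerpc dma-mapping hunks in this patch do.)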
/jens */ struct scatterlist { - struct page *page; /* Location for highmem page, if any */ +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; /* for highmem, page offset */ dma_addr_t dma_address; diff --git a/include/asm-h8300/scatterlist.h b/include/asm-h8300/scatterlist.h index 985fdf54eac..d3ecdd87ac9 100644 --- a/include/asm-h8300/scatterlist.h +++ b/include/asm-h8300/scatterlist.h @@ -4,7 +4,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; dma_addr_t dma_address; unsigned int length; diff --git a/include/asm-ia64/scatterlist.h b/include/asm-ia64/scatterlist.h index 7d5234d5031..d6f57874041 100644 --- a/include/asm-ia64/scatterlist.h +++ b/include/asm-ia64/scatterlist.h @@ -9,7 +9,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; /* buffer length */ diff --git a/include/asm-m32r/scatterlist.h b/include/asm-m32r/scatterlist.h index 352415ff5eb..1ed372c73d0 100644 --- a/include/asm-m32r/scatterlist.h +++ b/include/asm-m32r/scatterlist.h @@ -4,9 +4,12 @@ #include <asm/types.h> struct scatterlist { +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif char * address; /* Location data is to be transferred to, NULL for * highmem page */ - struct page * page; /* Location for highmem page, if any */ + unsigned long page_link; unsigned int offset;/* for highmem, page offset */ dma_addr_t dma_address; diff --git a/include/asm-m68k/scatterlist.h b/include/asm-m68k/scatterlist.h index 24887a2d9c7..d3a7a0edfec 100644 --- a/include/asm-m68k/scatterlist.h +++ b/include/asm-m68k/scatterlist.h @@ -4,7 +4,10 @@ #include <linux/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; diff --git a/include/asm-m68knommu/module.h b/include/asm-m68knommu/module.h index 57e95cc01ad..2e45ab50b23 100644 --- a/include/asm-m68knommu/module.h +++ b/include/asm-m68knommu/module.h @@ -1 +1,11 @@ -#include <asm-m68k/module.h> +#ifndef ASM_M68KNOMMU_MODULE_H +#define ASM_M68KNOMMU_MODULE_H + +struct mod_arch_specific { +}; + +#define Elf_Shdr Elf32_Shdr +#define Elf_Sym Elf32_Sym +#define Elf_Ehdr Elf32_Ehdr + +#endif /* ASM_M68KNOMMU_MODULE_H */ diff --git a/include/asm-m68knommu/scatterlist.h b/include/asm-m68knommu/scatterlist.h index 4da79d3d3f3..afc4788b0d2 100644 --- a/include/asm-m68knommu/scatterlist.h +++ b/include/asm-m68knommu/scatterlist.h @@ -5,13 +5,15 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; dma_addr_t dma_address; unsigned int length; }; -#define sg_address(sg) (page_address((sg)->page) + (sg)->offset) #define sg_dma_address(sg) ((sg)->dma_address) #define sg_dma_len(sg) ((sg)->length) diff --git a/include/asm-m68knommu/uaccess.h b/include/asm-m68knommu/uaccess.h index 9ed9169a884..68bbe9b312f 100644 --- a/include/asm-m68knommu/uaccess.h +++ b/include/asm-m68knommu/uaccess.h @@ -170,10 +170,12 @@ static inline long strnlen_user(const char *src, long n) */ static inline unsigned long -clear_user(void *to, unsigned long n) +__clear_user(void *to, unsigned long n) { memset(to, 0, n); return 0; } +#define clear_user(to,n) 
__clear_user(to,n) + #endif /* _M68KNOMMU_UACCESS_H */ diff --git a/include/asm-mips/gt64120.h b/include/asm-mips/gt64120.h index 4bf8e28f885..e64b41093c4 100644 --- a/include/asm-mips/gt64120.h +++ b/include/asm-mips/gt64120.h @@ -21,6 +21,8 @@ #ifndef _ASM_GT64120_H #define _ASM_GT64120_H +#include <linux/clocksource.h> + #include <asm/addrspace.h> #include <asm/byteorder.h> @@ -572,4 +574,7 @@ #define GT_READ(ofs) le32_to_cpu(__GT_READ(ofs)) #define GT_WRITE(ofs, data) __GT_WRITE(ofs, cpu_to_le32(data)) +extern void gt641xx_set_base_clock(unsigned int clock); +extern int gt641xx_timer0_state(void); + #endif /* _ASM_GT64120_H */ diff --git a/include/asm-mips/i8253.h b/include/asm-mips/i8253.h index 8f689d7df6b..affb32ce4af 100644 --- a/include/asm-mips/i8253.h +++ b/include/asm-mips/i8253.h @@ -2,8 +2,8 @@ * Machine specific IO port address definition for generic. * Written by Osamu Tomita <tomita@cinet.co.jp> */ -#ifndef _MACH_IO_PORTS_H -#define _MACH_IO_PORTS_H +#ifndef __ASM_I8253_H +#define __ASM_I8253_H /* i8253A PIT registers */ #define PIT_MODE 0x43 @@ -27,4 +27,4 @@ extern void setup_pit_timer(void); -#endif /* !_MACH_IO_PORTS_H */ +#endif /* __ASM_I8253_H */ diff --git a/include/asm-mips/scatterlist.h b/include/asm-mips/scatterlist.h index 7af104c95b2..83d69fe17c9 100644 --- a/include/asm-mips/scatterlist.h +++ b/include/asm-mips/scatterlist.h @@ -4,7 +4,10 @@ #include <asm/types.h> struct scatterlist { - struct page * page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; dma_addr_t dma_address; unsigned int length; diff --git a/include/asm-mips/sibyte/sb1250.h b/include/asm-mips/sibyte/sb1250.h index 494aa65dcfb..0dad844a3b5 100644 --- a/include/asm-mips/sibyte/sb1250.h +++ b/include/asm-mips/sibyte/sb1250.h @@ -45,13 +45,11 @@ extern unsigned int soc_type; extern unsigned int periph_rev; extern unsigned int zbbus_mhz; -extern void sb1250_hpt_setup(void); extern void sb1250_time_init(void); extern void sb1250_mask_irq(int cpu, int irq); extern void sb1250_unmask_irq(int cpu, int irq); extern void sb1250_smp_finish(void); -extern void bcm1480_hpt_setup(void); extern void bcm1480_time_init(void); extern void bcm1480_mask_irq(int cpu, int irq); extern void bcm1480_unmask_irq(int cpu, int irq); diff --git a/include/asm-parisc/scatterlist.h b/include/asm-parisc/scatterlist.h index e7211c74844..62269b31ebf 100644 --- a/include/asm-parisc/scatterlist.h +++ b/include/asm-parisc/scatterlist.h @@ -5,7 +5,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; @@ -15,7 +18,7 @@ struct scatterlist { __u32 iova_length; /* bytes mapped */ }; -#define sg_virt_addr(sg) ((unsigned long)(page_address(sg->page) + sg->offset)) +#define sg_virt_addr(sg) ((unsigned long)sg_virt(sg)) #define sg_dma_address(sg) ((sg)->iova) #define sg_dma_len(sg) ((sg)->iova_length) diff --git a/include/asm-powerpc/dma-mapping.h b/include/asm-powerpc/dma-mapping.h index 65be95dd03a..ff52013c0e2 100644 --- a/include/asm-powerpc/dma-mapping.h +++ b/include/asm-powerpc/dma-mapping.h @@ -285,9 +285,9 @@ dma_map_sg(struct device *dev, struct scatterlist *sgl, int nents, BUG_ON(direction == DMA_NONE); for_each_sg(sgl, sg, nents, i) { - BUG_ON(!sg->page); - __dma_sync_page(sg->page, sg->offset, sg->length, direction); - sg->dma_address = page_to_bus(sg->page) + sg->offset; + BUG_ON(!sg_page(sg)); + __dma_sync_page(sg_page(sg), 
sg->offset, sg->length, direction); + sg->dma_address = page_to_bus(sg_page(sg)) + sg->offset; } return nents; @@ -328,7 +328,7 @@ static inline void dma_sync_sg_for_cpu(struct device *dev, BUG_ON(direction == DMA_NONE); for_each_sg(sgl, sg, nents, i) - __dma_sync_page(sg->page, sg->offset, sg->length, direction); + __dma_sync_page(sg_page(sg), sg->offset, sg->length, direction); } static inline void dma_sync_sg_for_device(struct device *dev, @@ -341,7 +341,7 @@ static inline void dma_sync_sg_for_device(struct device *dev, BUG_ON(direction == DMA_NONE); for_each_sg(sgl, sg, nents, i) - __dma_sync_page(sg->page, sg->offset, sg->length, direction); + __dma_sync_page(sg_page(sg), sg->offset, sg->length, direction); } static inline int dma_mapping_error(dma_addr_t dma_addr) diff --git a/include/asm-powerpc/mpc52xx.h b/include/asm-powerpc/mpc52xx.h index 568135fe52e..fcb2ebbfddb 100644 --- a/include/asm-powerpc/mpc52xx.h +++ b/include/asm-powerpc/mpc52xx.h @@ -20,6 +20,11 @@ #include <linux/suspend.h> +/* Variants of the 5200(B) */ +#define MPC5200_SVR 0x80110010 +#define MPC5200_SVR_MASK 0xfffffff0 +#define MPC5200B_SVR 0x80110020 +#define MPC5200B_SVR_MASK 0xfffffff0 /* ======================================================================== */ /* Structures mapping of some unit register set */ @@ -244,6 +249,7 @@ struct mpc52xx_cdm { #ifndef __ASSEMBLY__ extern void __iomem * mpc52xx_find_and_map(const char *); +extern void __iomem * mpc52xx_find_and_map_path(const char *path); extern unsigned int mpc52xx_find_ipb_freq(struct device_node *node); extern void mpc5200_setup_xlb_arbiter(void); extern void mpc52xx_declare_of_platform_devices(void); @@ -253,6 +259,9 @@ extern unsigned int mpc52xx_get_irq(void); extern int __init mpc52xx_add_bridge(struct device_node *node); +extern void __init mpc52xx_map_wdt(void); +extern void mpc52xx_restart(char *cmd); + #endif /* __ASSEMBLY__ */ #ifdef CONFIG_PM diff --git a/include/asm-powerpc/scatterlist.h b/include/asm-powerpc/scatterlist.h index b075f619c3b..fcf7d55afe4 100644 --- a/include/asm-powerpc/scatterlist.h +++ b/include/asm-powerpc/scatterlist.h @@ -14,7 +14,10 @@ #include <asm/dma.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; diff --git a/include/asm-ppc/system.h b/include/asm-ppc/system.h index cc45780421c..51df94c7384 100644 --- a/include/asm-ppc/system.h +++ b/include/asm-ppc/system.h @@ -33,6 +33,7 @@ #define set_mb(var, value) do { var = value; mb(); } while (0) +#define AT_VECTOR_SIZE_ARCH 6 /* entries in ARCH_DLINFO */ #ifdef CONFIG_SMP #define smp_mb() mb() #define smp_rmb() rmb() diff --git a/include/asm-s390/cpu.h b/include/asm-s390/cpu.h new file mode 100644 index 00000000000..352dde194f3 --- /dev/null +++ b/include/asm-s390/cpu.h @@ -0,0 +1,25 @@ +/* + * include/asm-s390/cpu.h + * + * Copyright IBM Corp. 
2007 + * Author(s): Heiko Carstens <heiko.carstens@de.ibm.com> + */ + +#ifndef _ASM_S390_CPU_H_ +#define _ASM_S390_CPU_H_ + +#include <linux/types.h> +#include <linux/percpu.h> +#include <linux/spinlock.h> + +struct s390_idle_data { + spinlock_t lock; + unsigned int in_idle; + unsigned long long idle_count; + unsigned long long idle_enter; + unsigned long long idle_time; +}; + +DECLARE_PER_CPU(struct s390_idle_data, s390_idle); + +#endif /* _ASM_S390_CPU_H_ */ diff --git a/include/asm-s390/mmu_context.h b/include/asm-s390/mmu_context.h index 501cb9b0631..05b842126b9 100644 --- a/include/asm-s390/mmu_context.h +++ b/include/asm-s390/mmu_context.h @@ -21,45 +21,43 @@ #ifndef __s390x__ #define LCTL_OPCODE "lctl" -#define PGTABLE_BITS (_SEGMENT_TABLE|USER_STD_MASK) #else #define LCTL_OPCODE "lctlg" -#define PGTABLE_BITS (_REGION_TABLE|USER_STD_MASK) #endif -static inline void enter_lazy_tlb(struct mm_struct *mm, - struct task_struct *tsk) +static inline void update_mm(struct mm_struct *mm, struct task_struct *tsk) { + pgd_t *pgd = mm->pgd; + unsigned long asce_bits; + + /* Calculate asce bits from the first pgd table entry. */ + asce_bits = _ASCE_TABLE_LENGTH | _ASCE_USER_BITS; +#ifdef CONFIG_64BIT + asce_bits |= _ASCE_TYPE_REGION3; +#endif + S390_lowcore.user_asce = asce_bits | __pa(pgd); + if (switch_amode) { + /* Load primary space page table origin. */ + pgd_t *shadow_pgd = get_shadow_table(pgd) ? : pgd; + S390_lowcore.user_exec_asce = asce_bits | __pa(shadow_pgd); + asm volatile(LCTL_OPCODE" 1,1,%0\n" + : : "m" (S390_lowcore.user_exec_asce) ); + } else + /* Load home space page table origin. */ + asm volatile(LCTL_OPCODE" 13,13,%0" + : : "m" (S390_lowcore.user_asce) ); } static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, struct task_struct *tsk) { - pgd_t *shadow_pgd = get_shadow_pgd(next->pgd); - - if (prev != next) { - S390_lowcore.user_asce = (__pa(next->pgd) & PAGE_MASK) | - PGTABLE_BITS; - if (shadow_pgd) { - /* Load primary/secondary space page table origin. */ - S390_lowcore.user_exec_asce = - (__pa(shadow_pgd) & PAGE_MASK) | PGTABLE_BITS; - asm volatile(LCTL_OPCODE" 1,1,%0\n" - LCTL_OPCODE" 7,7,%1" - : : "m" (S390_lowcore.user_exec_asce), - "m" (S390_lowcore.user_asce) ); - } else if (switch_amode) { - /* Load primary space page table origin. */ - asm volatile(LCTL_OPCODE" 1,1,%0" - : : "m" (S390_lowcore.user_asce) ); - } else - /* Load home space page table origin. 
*/ - asm volatile(LCTL_OPCODE" 13,13,%0" - : : "m" (S390_lowcore.user_asce) ); - } + if (unlikely(prev == next)) + return; cpu_set(smp_processor_id(), next->cpu_vm_mask); + update_mm(next, tsk); } +#define enter_lazy_tlb(mm,tsk) do { } while (0) #define deactivate_mm(tsk,mm) do { } while (0) static inline void activate_mm(struct mm_struct *prev, diff --git a/include/asm-s390/page.h b/include/asm-s390/page.h index ceec3826a67..584d0ee3c7f 100644 --- a/include/asm-s390/page.h +++ b/include/asm-s390/page.h @@ -82,6 +82,7 @@ typedef struct { unsigned long pte; } pte_t; #ifndef __s390x__ typedef struct { unsigned long pmd; } pmd_t; +typedef struct { unsigned long pud; } pud_t; typedef struct { unsigned long pgd0; unsigned long pgd1; @@ -90,6 +91,7 @@ typedef struct { } pgd_t; #define pmd_val(x) ((x).pmd) +#define pud_val(x) ((x).pud) #define pgd_val(x) ((x).pgd0) #else /* __s390x__ */ @@ -98,10 +100,12 @@ typedef struct { unsigned long pmd0; unsigned long pmd1; } pmd_t; +typedef struct { unsigned long pud; } pud_t; typedef struct { unsigned long pgd; } pgd_t; #define pmd_val(x) ((x).pmd0) #define pmd_val1(x) ((x).pmd1) +#define pud_val(x) ((x).pud) #define pgd_val(x) ((x).pgd) #endif /* __s390x__ */ diff --git a/include/asm-s390/pgalloc.h b/include/asm-s390/pgalloc.h index e45d3c9a4b7..709dd174095 100644 --- a/include/asm-s390/pgalloc.h +++ b/include/asm-s390/pgalloc.h @@ -19,140 +19,115 @@ #define check_pgt_cache() do {} while (0) -/* - * Page allocation orders. - */ -#ifndef __s390x__ -# define PTE_ALLOC_ORDER 0 -# define PMD_ALLOC_ORDER 0 -# define PGD_ALLOC_ORDER 1 -#else /* __s390x__ */ -# define PTE_ALLOC_ORDER 0 -# define PMD_ALLOC_ORDER 2 -# define PGD_ALLOC_ORDER 2 -#endif /* __s390x__ */ +unsigned long *crst_table_alloc(struct mm_struct *, int); +void crst_table_free(unsigned long *); -/* - * Allocate and free page tables. The xxx_kernel() versions are - * used to allocate a kernel page table - this turns on ASN bits - * if any. - */ +unsigned long *page_table_alloc(int); +void page_table_free(unsigned long *); -static inline pgd_t *pgd_alloc(struct mm_struct *mm) +static inline void clear_table(unsigned long *s, unsigned long val, size_t n) { - pgd_t *pgd = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ALLOC_ORDER); - int i; - - if (!pgd) - return NULL; - if (s390_noexec) { - pgd_t *shadow_pgd = (pgd_t *) - __get_free_pages(GFP_KERNEL, PGD_ALLOC_ORDER); - struct page *page = virt_to_page(pgd); - - if (!shadow_pgd) { - free_pages((unsigned long) pgd, PGD_ALLOC_ORDER); - return NULL; - } - page->lru.next = (void *) shadow_pgd; - } - for (i = 0; i < PTRS_PER_PGD; i++) -#ifndef __s390x__ - pmd_clear(pmd_offset(pgd + i, i*PGDIR_SIZE)); + *s = val; + n = (n / 256) - 1; + asm volatile( +#ifdef CONFIG_64BIT + " mvc 8(248,%0),0(%0)\n" #else - pgd_clear(pgd + i); + " mvc 4(252,%0),0(%0)\n" #endif - return pgd; + "0: mvc 256(256,%0),0(%0)\n" + " la %0,256(%0)\n" + " brct %1,0b\n" + : "+a" (s), "+d" (n)); } -static inline void pgd_free(pgd_t *pgd) +static inline void crst_table_init(unsigned long *crst, unsigned long entry) { - pgd_t *shadow_pgd = get_shadow_pgd(pgd); - - if (shadow_pgd) - free_pages((unsigned long) shadow_pgd, PGD_ALLOC_ORDER); - free_pages((unsigned long) pgd, PGD_ALLOC_ORDER); + clear_table(crst, entry, sizeof(unsigned long)*2048); + crst = get_shadow_table(crst); + if (crst) + clear_table(crst, entry, sizeof(unsigned long)*2048); } #ifndef __s390x__ -/* - * page middle directory allocation/free routines. - * We use pmd cache only on s390x, so these are dummy routines. 
This - * code never triggers because the pgd will always be present. - */ -#define pmd_alloc_one(mm,address) ({ BUG(); ((pmd_t *)2); }) -#define pmd_free(x) do { } while (0) -#define __pmd_free_tlb(tlb,x) do { } while (0) -#define pgd_populate(mm, pmd, pte) BUG() -#define pgd_populate_kernel(mm, pmd, pte) BUG() -#else /* __s390x__ */ -static inline pmd_t * pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr) + +static inline unsigned long pgd_entry_type(struct mm_struct *mm) { - pmd_t *pmd = (pmd_t *) __get_free_pages(GFP_KERNEL, PMD_ALLOC_ORDER); - int i; - - if (!pmd) - return NULL; - if (s390_noexec) { - pmd_t *shadow_pmd = (pmd_t *) - __get_free_pages(GFP_KERNEL, PMD_ALLOC_ORDER); - struct page *page = virt_to_page(pmd); - - if (!shadow_pmd) { - free_pages((unsigned long) pmd, PMD_ALLOC_ORDER); - return NULL; - } - page->lru.next = (void *) shadow_pmd; - } - for (i=0; i < PTRS_PER_PMD; i++) - pmd_clear(pmd + i); - return pmd; + return _SEGMENT_ENTRY_EMPTY; } -static inline void pmd_free (pmd_t *pmd) +#define pud_alloc_one(mm,address) ({ BUG(); ((pud_t *)2); }) +#define pud_free(x) do { } while (0) + +#define pmd_alloc_one(mm,address) ({ BUG(); ((pmd_t *)2); }) +#define pmd_free(x) do { } while (0) + +#define pgd_populate(mm, pgd, pud) BUG() +#define pgd_populate_kernel(mm, pgd, pud) BUG() + +#define pud_populate(mm, pud, pmd) BUG() +#define pud_populate_kernel(mm, pud, pmd) BUG() + +#else /* __s390x__ */ + +static inline unsigned long pgd_entry_type(struct mm_struct *mm) { - pmd_t *shadow_pmd = get_shadow_pmd(pmd); + return _REGION3_ENTRY_EMPTY; +} + +#define pud_alloc_one(mm,address) ({ BUG(); ((pud_t *)2); }) +#define pud_free(x) do { } while (0) - if (shadow_pmd) - free_pages((unsigned long) shadow_pmd, PMD_ALLOC_ORDER); - free_pages((unsigned long) pmd, PMD_ALLOC_ORDER); +static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr) +{ + unsigned long *crst = crst_table_alloc(mm, s390_noexec); + if (crst) + crst_table_init(crst, _SEGMENT_ENTRY_EMPTY); + return (pmd_t *) crst; } +#define pmd_free(pmd) crst_table_free((unsigned long *) pmd) -#define __pmd_free_tlb(tlb,pmd) \ - do { \ - tlb_flush_mmu(tlb, 0, 0); \ - pmd_free(pmd); \ - } while (0) +#define pgd_populate(mm, pgd, pud) BUG() +#define pgd_populate_kernel(mm, pgd, pud) BUG() -static inline void -pgd_populate_kernel(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmd) +static inline void pud_populate_kernel(struct mm_struct *mm, + pud_t *pud, pmd_t *pmd) { - pgd_val(*pgd) = _PGD_ENTRY | __pa(pmd); + pud_val(*pud) = _REGION3_ENTRY | __pa(pmd); } -static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmd) +static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd) { - pgd_t *shadow_pgd = get_shadow_pgd(pgd); - pmd_t *shadow_pmd = get_shadow_pmd(pmd); + pud_t *shadow_pud = get_shadow_table(pud); + pmd_t *shadow_pmd = get_shadow_table(pmd); - if (shadow_pgd && shadow_pmd) - pgd_populate_kernel(mm, shadow_pgd, shadow_pmd); - pgd_populate_kernel(mm, pgd, pmd); + if (shadow_pud && shadow_pmd) + pud_populate_kernel(mm, shadow_pud, shadow_pmd); + pud_populate_kernel(mm, pud, pmd); } #endif /* __s390x__ */ +static inline pgd_t *pgd_alloc(struct mm_struct *mm) +{ + unsigned long *crst = crst_table_alloc(mm, s390_noexec); + if (crst) + crst_table_init(crst, pgd_entry_type(mm)); + return (pgd_t *) crst; +} +#define pgd_free(pgd) crst_table_free((unsigned long *) pgd) + static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) { #ifndef __s390x__ - 
pmd_val(pmd[0]) = _PAGE_TABLE + __pa(pte); - pmd_val(pmd[1]) = _PAGE_TABLE + __pa(pte+256); - pmd_val(pmd[2]) = _PAGE_TABLE + __pa(pte+512); - pmd_val(pmd[3]) = _PAGE_TABLE + __pa(pte+768); + pmd_val(pmd[0]) = _SEGMENT_ENTRY + __pa(pte); + pmd_val(pmd[1]) = _SEGMENT_ENTRY + __pa(pte+256); + pmd_val(pmd[2]) = _SEGMENT_ENTRY + __pa(pte+512); + pmd_val(pmd[3]) = _SEGMENT_ENTRY + __pa(pte+768); #else /* __s390x__ */ - pmd_val(*pmd) = _PMD_ENTRY + __pa(pte); - pmd_val1(*pmd) = _PMD_ENTRY + __pa(pte+256); + pmd_val(*pmd) = _SEGMENT_ENTRY + __pa(pte); + pmd_val1(*pmd) = _SEGMENT_ENTRY + __pa(pte+256); #endif /* __s390x__ */ } @@ -160,7 +135,7 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct page *page) { pte_t *pte = (pte_t *)page_to_phys(page); - pmd_t *shadow_pmd = get_shadow_pmd(pmd); + pmd_t *shadow_pmd = get_shadow_table(pmd); pte_t *shadow_pte = get_shadow_pte(pte); pmd_populate_kernel(mm, pmd, pte); @@ -171,67 +146,14 @@ pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct page *page) /* * page table entry allocation/free routines. */ -static inline pte_t * -pte_alloc_one_kernel(struct mm_struct *mm, unsigned long vmaddr) -{ - pte_t *pte = (pte_t *) __get_free_page(GFP_KERNEL|__GFP_REPEAT); - int i; - - if (!pte) - return NULL; - if (s390_noexec) { - pte_t *shadow_pte = (pte_t *) - __get_free_page(GFP_KERNEL|__GFP_REPEAT); - struct page *page = virt_to_page(pte); - - if (!shadow_pte) { - free_page((unsigned long) pte); - return NULL; - } - page->lru.next = (void *) shadow_pte; - } - for (i=0; i < PTRS_PER_PTE; i++) { - pte_clear(mm, vmaddr, pte + i); - vmaddr += PAGE_SIZE; - } - return pte; -} - -static inline struct page * -pte_alloc_one(struct mm_struct *mm, unsigned long vmaddr) -{ - pte_t *pte = pte_alloc_one_kernel(mm, vmaddr); - if (pte) - return virt_to_page(pte); - return NULL; -} - -static inline void pte_free_kernel(pte_t *pte) -{ - pte_t *shadow_pte = get_shadow_pte(pte); - - if (shadow_pte) - free_page((unsigned long) shadow_pte); - free_page((unsigned long) pte); -} - -static inline void pte_free(struct page *pte) -{ - struct page *shadow_page = get_shadow_page(pte); - - if (shadow_page) - __free_page(shadow_page); - __free_page(pte); -} - -#define __pte_free_tlb(tlb, pte) \ -({ \ - struct mmu_gather *__tlb = (tlb); \ - struct page *__pte = (pte); \ - struct page *shadow_page = get_shadow_page(__pte); \ - if (shadow_page) \ - tlb_remove_page(__tlb, shadow_page); \ - tlb_remove_page(__tlb, __pte); \ -}) +#define pte_alloc_one_kernel(mm, vmaddr) \ + ((pte_t *) page_table_alloc(s390_noexec)) +#define pte_alloc_one(mm, vmaddr) \ + virt_to_page(page_table_alloc(s390_noexec)) + +#define pte_free_kernel(pte) \ + page_table_free((unsigned long *) pte) +#define pte_free(pte) \ + page_table_free((unsigned long *) page_to_phys((struct page *) pte)) #endif /* _S390_PGALLOC_H */ diff --git a/include/asm-s390/pgtable.h b/include/asm-s390/pgtable.h index 39bb5192dc3..f2cc25b74ad 100644 --- a/include/asm-s390/pgtable.h +++ b/include/asm-s390/pgtable.h @@ -13,8 +13,6 @@ #ifndef _ASM_S390_PGTABLE_H #define _ASM_S390_PGTABLE_H -#include <asm-generic/4level-fixup.h> - /* * The Linux memory management assumes a three-level page table setup. 
For * s390 31 bit we "fold" the mid level into the top-level page table, so @@ -35,9 +33,6 @@ #include <asm/bug.h> #include <asm/processor.h> -struct vm_area_struct; /* forward declaration (include/linux/mm.h) */ -struct mm_struct; - extern pgd_t swapper_pg_dir[] __attribute__ ((aligned (4096))); extern void paging_init(void); extern void vmem_map_init(void); @@ -63,14 +58,18 @@ extern char empty_zero_page[PAGE_SIZE]; */ #ifndef __s390x__ # define PMD_SHIFT 22 +# define PUD_SHIFT 22 # define PGDIR_SHIFT 22 #else /* __s390x__ */ # define PMD_SHIFT 21 +# define PUD_SHIFT 31 # define PGDIR_SHIFT 31 #endif /* __s390x__ */ #define PMD_SIZE (1UL << PMD_SHIFT) #define PMD_MASK (~(PMD_SIZE-1)) +#define PUD_SIZE (1UL << PUD_SHIFT) +#define PUD_MASK (~(PUD_SIZE-1)) #define PGDIR_SIZE (1UL << PGDIR_SHIFT) #define PGDIR_MASK (~(PGDIR_SIZE-1)) @@ -83,10 +82,12 @@ extern char empty_zero_page[PAGE_SIZE]; #ifndef __s390x__ # define PTRS_PER_PTE 1024 # define PTRS_PER_PMD 1 +# define PTRS_PER_PUD 1 # define PTRS_PER_PGD 512 #else /* __s390x__ */ # define PTRS_PER_PTE 512 # define PTRS_PER_PMD 1024 +# define PTRS_PER_PUD 1 # define PTRS_PER_PGD 2048 #endif /* __s390x__ */ @@ -96,6 +97,8 @@ extern char empty_zero_page[PAGE_SIZE]; printk("%s:%d: bad pte %p.\n", __FILE__, __LINE__, (void *) pte_val(e)) #define pmd_ERROR(e) \ printk("%s:%d: bad pmd %p.\n", __FILE__, __LINE__, (void *) pmd_val(e)) +#define pud_ERROR(e) \ + printk("%s:%d: bad pud %p.\n", __FILE__, __LINE__, (void *) pud_val(e)) #define pgd_ERROR(e) \ printk("%s:%d: bad pgd %p.\n", __FILE__, __LINE__, (void *) pgd_val(e)) @@ -195,7 +198,7 @@ extern unsigned long vmalloc_end; * I Segment-Invalid Bit: Segment is not available for address-translation * TT Type 01 * TF - * TL Table lenght + * TL Table length * * The 64 bit regiontable origin of S390 has following format: * | region table origon | DTTL @@ -221,6 +224,8 @@ extern unsigned long vmalloc_end; /* Hardware bits in the page table entry */ #define _PAGE_RO 0x200 /* HW read-only bit */ #define _PAGE_INVALID 0x400 /* HW invalid bit */ + +/* Software bits in the page table entry */ #define _PAGE_SWT 0x001 /* SW pte type bit t */ #define _PAGE_SWX 0x002 /* SW pte type bit x */ @@ -264,60 +269,75 @@ extern unsigned long vmalloc_end; #ifndef __s390x__ -/* Bits in the segment table entry */ -#define _PAGE_TABLE_LEN 0xf /* only full page-tables */ -#define _PAGE_TABLE_COM 0x10 /* common page-table */ -#define _PAGE_TABLE_INV 0x20 /* invalid page-table */ -#define _SEG_PRESENT 0x001 /* Software (overlap with PTL) */ - -/* Bits int the storage key */ -#define _PAGE_CHANGED 0x02 /* HW changed bit */ -#define _PAGE_REFERENCED 0x04 /* HW referenced bit */ - -#define _USER_SEG_TABLE_LEN 0x7f /* user-segment-table up to 2 GB */ -#define _KERNEL_SEG_TABLE_LEN 0x7f /* kernel-segment-table up to 2 GB */ - -/* - * User and Kernel pagetables are identical - */ -#define _PAGE_TABLE _PAGE_TABLE_LEN -#define _KERNPG_TABLE _PAGE_TABLE_LEN - -/* - * The Kernel segment-tables includes the User segment-table - */ +/* Bits in the segment table address-space-control-element */ +#define _ASCE_SPACE_SWITCH 0x80000000UL /* space switch event */ +#define _ASCE_ORIGIN_MASK 0x7ffff000UL /* segment table origin */ +#define _ASCE_PRIVATE_SPACE 0x100 /* private space control */ +#define _ASCE_ALT_EVENT 0x80 /* storage alteration event control */ +#define _ASCE_TABLE_LENGTH 0x7f /* 128 x 64 entries = 8k */ -#define _SEGMENT_TABLE (_USER_SEG_TABLE_LEN|0x80000000|0x100) -#define _KERNSEG_TABLE _KERNEL_SEG_TABLE_LEN +/* Bits in the 
segment table entry */ +#define _SEGMENT_ENTRY_ORIGIN 0x7fffffc0UL /* page table origin */ +#define _SEGMENT_ENTRY_INV 0x20 /* invalid segment table entry */ +#define _SEGMENT_ENTRY_COMMON 0x10 /* common segment bit */ +#define _SEGMENT_ENTRY_PTL 0x0f /* page table length */ -#define USER_STD_MASK 0x00000080UL +#define _SEGMENT_ENTRY (_SEGMENT_ENTRY_PTL) +#define _SEGMENT_ENTRY_EMPTY (_SEGMENT_ENTRY_INV) #else /* __s390x__ */ +/* Bits in the segment/region table address-space-control-element */ +#define _ASCE_ORIGIN ~0xfffUL/* segment table origin */ +#define _ASCE_PRIVATE_SPACE 0x100 /* private space control */ +#define _ASCE_ALT_EVENT 0x80 /* storage alteration event control */ +#define _ASCE_SPACE_SWITCH 0x40 /* space switch event */ +#define _ASCE_REAL_SPACE 0x20 /* real space control */ +#define _ASCE_TYPE_MASK 0x0c /* asce table type mask */ +#define _ASCE_TYPE_REGION1 0x0c /* region first table type */ +#define _ASCE_TYPE_REGION2 0x08 /* region second table type */ +#define _ASCE_TYPE_REGION3 0x04 /* region third table type */ +#define _ASCE_TYPE_SEGMENT 0x00 /* segment table type */ +#define _ASCE_TABLE_LENGTH 0x03 /* region table length */ + +/* Bits in the region table entry */ +#define _REGION_ENTRY_ORIGIN ~0xfffUL/* region/segment table origin */ +#define _REGION_ENTRY_INV 0x20 /* invalid region table entry */ +#define _REGION_ENTRY_TYPE_MASK 0x0c /* region/segment table type mask */ +#define _REGION_ENTRY_TYPE_R1 0x0c /* region first table type */ +#define _REGION_ENTRY_TYPE_R2 0x08 /* region second table type */ +#define _REGION_ENTRY_TYPE_R3 0x04 /* region third table type */ +#define _REGION_ENTRY_LENGTH 0x03 /* region third length */ + +#define _REGION1_ENTRY (_REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_LENGTH) +#define _REGION1_ENTRY_EMPTY (_REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INV) +#define _REGION2_ENTRY (_REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_LENGTH) +#define _REGION2_ENTRY_EMPTY (_REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INV) +#define _REGION3_ENTRY (_REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_LENGTH) +#define _REGION3_ENTRY_EMPTY (_REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INV) + /* Bits in the segment table entry */ -#define _PMD_ENTRY_INV 0x20 /* invalid segment table entry */ -#define _PMD_ENTRY 0x00 +#define _SEGMENT_ENTRY_ORIGIN ~0x7ffUL/* segment table origin */ +#define _SEGMENT_ENTRY_RO 0x200 /* page protection bit */ +#define _SEGMENT_ENTRY_INV 0x20 /* invalid segment table entry */ + +#define _SEGMENT_ENTRY (0) +#define _SEGMENT_ENTRY_EMPTY (_SEGMENT_ENTRY_INV) -/* Bits in the region third table entry */ -#define _PGD_ENTRY_INV 0x20 /* invalid region table entry */ -#define _PGD_ENTRY 0x07 +#endif /* __s390x__ */ /* - * User and kernel page directory + * A user page table pointer has the space-switch-event bit, the + * private-space-control bit and the storage-alteration-event-control + * bit set. A kernel page table pointer doesn't need them. */ -#define _REGION_THIRD 0x4 -#define _REGION_THIRD_LEN 0x3 -#define _REGION_TABLE (_REGION_THIRD|_REGION_THIRD_LEN|0x40|0x100) -#define _KERN_REGION_TABLE (_REGION_THIRD|_REGION_THIRD_LEN) - -#define USER_STD_MASK 0x0000000000000080UL +#define _ASCE_USER_BITS (_ASCE_SPACE_SWITCH | _ASCE_PRIVATE_SPACE | \ + _ASCE_ALT_EVENT) -/* Bits in the storage key */ +/* Bits int the storage key */ #define _PAGE_CHANGED 0x02 /* HW changed bit */ #define _PAGE_REFERENCED 0x04 /* HW referenced bit */ -#endif /* __s390x__ */ - /* * Page protection definitions. 
*/ @@ -358,65 +378,38 @@ extern unsigned long vmalloc_end; #define __S111 PAGE_EX_RW #ifndef __s390x__ -# define PMD_SHADOW_SHIFT 1 -# define PGD_SHADOW_SHIFT 1 +# define PxD_SHADOW_SHIFT 1 #else /* __s390x__ */ -# define PMD_SHADOW_SHIFT 2 -# define PGD_SHADOW_SHIFT 2 +# define PxD_SHADOW_SHIFT 2 #endif /* __s390x__ */ static inline struct page *get_shadow_page(struct page *page) { - if (s390_noexec && !list_empty(&page->lru)) - return virt_to_page(page->lru.next); - return NULL; -} - -static inline pte_t *get_shadow_pte(pte_t *ptep) -{ - unsigned long pteptr = (unsigned long) (ptep); - - if (s390_noexec) { - unsigned long offset = pteptr & (PAGE_SIZE - 1); - void *addr = (void *) (pteptr ^ offset); - struct page *page = virt_to_page(addr); - if (!list_empty(&page->lru)) - return (pte_t *) ((unsigned long) page->lru.next | - offset); - } + if (s390_noexec && page->index) + return virt_to_page((void *)(addr_t) page->index); return NULL; } -static inline pmd_t *get_shadow_pmd(pmd_t *pmdp) +static inline void *get_shadow_pte(void *table) { - unsigned long pmdptr = (unsigned long) (pmdp); + unsigned long addr, offset; + struct page *page; - if (s390_noexec) { - unsigned long offset = pmdptr & - ((PAGE_SIZE << PMD_SHADOW_SHIFT) - 1); - void *addr = (void *) (pmdptr ^ offset); - struct page *page = virt_to_page(addr); - if (!list_empty(&page->lru)) - return (pmd_t *) ((unsigned long) page->lru.next | - offset); - } - return NULL; + addr = (unsigned long) table; + offset = addr & (PAGE_SIZE - 1); + page = virt_to_page((void *)(addr ^ offset)); + return (void *)(addr_t)(page->index ? (page->index | offset) : 0UL); } -static inline pgd_t *get_shadow_pgd(pgd_t *pgdp) +static inline void *get_shadow_table(void *table) { - unsigned long pgdptr = (unsigned long) (pgdp); + unsigned long addr, offset; + struct page *page; - if (s390_noexec) { - unsigned long offset = pgdptr & - ((PAGE_SIZE << PGD_SHADOW_SHIFT) - 1); - void *addr = (void *) (pgdptr ^ offset); - struct page *page = virt_to_page(addr); - if (!list_empty(&page->lru)) - return (pgd_t *) ((unsigned long) page->lru.next | - offset); - } - return NULL; + addr = (unsigned long) table; + offset = addr & ((PAGE_SIZE << PxD_SHADOW_SHIFT) - 1); + page = virt_to_page((void *)(addr ^ offset)); + return (void *)(addr_t)(page->index ? (page->index | offset) : 0UL); } /* @@ -424,7 +417,8 @@ static inline pgd_t *get_shadow_pgd(pgd_t *pgdp) * within a page table are directly modified. Thus, the following * hook is made available. 
*/ -static inline void set_pte(pte_t *pteptr, pte_t pteval) +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, + pte_t *pteptr, pte_t pteval) { pte_t *shadow_pte = get_shadow_pte(pteptr); @@ -437,7 +431,6 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval) pte_val(*shadow_pte) = _PAGE_TYPE_EMPTY; } } -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) /* * pgd/pmd/pte query functions @@ -448,47 +441,50 @@ static inline int pgd_present(pgd_t pgd) { return 1; } static inline int pgd_none(pgd_t pgd) { return 0; } static inline int pgd_bad(pgd_t pgd) { return 0; } -static inline int pmd_present(pmd_t pmd) { return pmd_val(pmd) & _SEG_PRESENT; } -static inline int pmd_none(pmd_t pmd) { return pmd_val(pmd) & _PAGE_TABLE_INV; } -static inline int pmd_bad(pmd_t pmd) -{ - return (pmd_val(pmd) & (~PAGE_MASK & ~_PAGE_TABLE_INV)) != _PAGE_TABLE; -} +static inline int pud_present(pud_t pud) { return 1; } +static inline int pud_none(pud_t pud) { return 0; } +static inline int pud_bad(pud_t pud) { return 0; } #else /* __s390x__ */ -static inline int pgd_present(pgd_t pgd) +static inline int pgd_present(pgd_t pgd) { return 1; } +static inline int pgd_none(pgd_t pgd) { return 0; } +static inline int pgd_bad(pgd_t pgd) { return 0; } + +static inline int pud_present(pud_t pud) { - return (pgd_val(pgd) & ~PAGE_MASK) == _PGD_ENTRY; + return pud_val(pud) & _REGION_ENTRY_ORIGIN; } -static inline int pgd_none(pgd_t pgd) +static inline int pud_none(pud_t pud) { - return pgd_val(pgd) & _PGD_ENTRY_INV; + return pud_val(pud) & _REGION_ENTRY_INV; } -static inline int pgd_bad(pgd_t pgd) +static inline int pud_bad(pud_t pud) { - return (pgd_val(pgd) & (~PAGE_MASK & ~_PGD_ENTRY_INV)) != _PGD_ENTRY; + unsigned long mask = ~_REGION_ENTRY_ORIGIN & ~_REGION_ENTRY_INV; + return (pud_val(pud) & mask) != _REGION3_ENTRY; } +#endif /* __s390x__ */ + static inline int pmd_present(pmd_t pmd) { - return (pmd_val(pmd) & ~PAGE_MASK) == _PMD_ENTRY; + return pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN; } static inline int pmd_none(pmd_t pmd) { - return pmd_val(pmd) & _PMD_ENTRY_INV; + return pmd_val(pmd) & _SEGMENT_ENTRY_INV; } static inline int pmd_bad(pmd_t pmd) { - return (pmd_val(pmd) & (~PAGE_MASK & ~_PMD_ENTRY_INV)) != _PMD_ENTRY; + unsigned long mask = ~_SEGMENT_ENTRY_ORIGIN & ~_SEGMENT_ENTRY_INV; + return (pmd_val(pmd) & mask) != _SEGMENT_ENTRY; } -#endif /* __s390x__ */ - static inline int pte_none(pte_t pte) { return (pte_val(pte) & _PAGE_INVALID) && !(pte_val(pte) & _PAGE_SWT); @@ -508,7 +504,8 @@ static inline int pte_file(pte_t pte) return (pte_val(pte) & mask) == _PAGE_TYPE_FILE; } -#define pte_same(a,b) (pte_val(a) == pte_val(b)) +#define __HAVE_ARCH_PTE_SAME +#define pte_same(a,b) (pte_val(a) == pte_val(b)) /* * query functions pte_write/pte_dirty/pte_young only work if @@ -543,58 +540,52 @@ static inline int pte_young(pte_t pte) #ifndef __s390x__ -static inline void pgd_clear(pgd_t * pgdp) { } +#define pgd_clear(pgd) do { } while (0) +#define pud_clear(pud) do { } while (0) static inline void pmd_clear_kernel(pmd_t * pmdp) { - pmd_val(pmdp[0]) = _PAGE_TABLE_INV; - pmd_val(pmdp[1]) = _PAGE_TABLE_INV; - pmd_val(pmdp[2]) = _PAGE_TABLE_INV; - pmd_val(pmdp[3]) = _PAGE_TABLE_INV; -} - -static inline void pmd_clear(pmd_t * pmdp) -{ - pmd_t *shadow_pmd = get_shadow_pmd(pmdp); - - pmd_clear_kernel(pmdp); - if (shadow_pmd) - pmd_clear_kernel(shadow_pmd); + pmd_val(pmdp[0]) = _SEGMENT_ENTRY_EMPTY; + pmd_val(pmdp[1]) = _SEGMENT_ENTRY_EMPTY; + pmd_val(pmdp[2]) = _SEGMENT_ENTRY_EMPTY; + 
pmd_val(pmdp[3]) = _SEGMENT_ENTRY_EMPTY; } #else /* __s390x__ */ -static inline void pgd_clear_kernel(pgd_t * pgdp) +#define pgd_clear(pgd) do { } while (0) + +static inline void pud_clear_kernel(pud_t *pud) { - pgd_val(*pgdp) = _PGD_ENTRY_INV | _PGD_ENTRY; + pud_val(*pud) = _REGION3_ENTRY_EMPTY; } -static inline void pgd_clear(pgd_t * pgdp) +static inline void pud_clear(pud_t * pud) { - pgd_t *shadow_pgd = get_shadow_pgd(pgdp); + pud_t *shadow = get_shadow_table(pud); - pgd_clear_kernel(pgdp); - if (shadow_pgd) - pgd_clear_kernel(shadow_pgd); + pud_clear_kernel(pud); + if (shadow) + pud_clear_kernel(shadow); } static inline void pmd_clear_kernel(pmd_t * pmdp) { - pmd_val(*pmdp) = _PMD_ENTRY_INV | _PMD_ENTRY; - pmd_val1(*pmdp) = _PMD_ENTRY_INV | _PMD_ENTRY; + pmd_val(*pmdp) = _SEGMENT_ENTRY_EMPTY; + pmd_val1(*pmdp) = _SEGMENT_ENTRY_EMPTY; } +#endif /* __s390x__ */ + static inline void pmd_clear(pmd_t * pmdp) { - pmd_t *shadow_pmd = get_shadow_pmd(pmdp); + pmd_t *shadow_pmd = get_shadow_table(pmdp); pmd_clear_kernel(pmdp); if (shadow_pmd) pmd_clear_kernel(shadow_pmd); } -#endif /* __s390x__ */ - static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { pte_t *shadow_pte = get_shadow_pte(ptep); @@ -663,24 +654,19 @@ static inline pte_t pte_mkyoung(pte_t pte) return pte; } -static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) +#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG +static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep) { return 0; } -static inline int -ptep_clear_flush_young(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH +static inline int ptep_clear_flush_young(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep) { /* No need to flush TLB; bits are in storage key */ - return ptep_test_and_clear_young(vma, address, ptep); -} - -static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) -{ - pte_t pte = *ptep; - pte_clear(mm, addr, ptep); - return pte; + return 0; } static inline void __ptep_ipte(unsigned long address, pte_t *ptep) @@ -709,6 +695,32 @@ static inline void ptep_invalidate(unsigned long address, pte_t *ptep) __ptep_ipte(address, ptep); } +/* + * This is hard to understand. ptep_get_and_clear and ptep_clear_flush + * both clear the TLB for the unmapped pte. The reason is that + * ptep_get_and_clear is used in common code (e.g. change_pte_range) + * to modify an active pte. The sequence is + * 1) ptep_get_and_clear + * 2) set_pte_at + * 3) flush_tlb_range + * On s390 the tlb needs to get flushed with the modification of the pte + * if the pte is active. The only way how this can be implemented is to + * have ptep_get_and_clear do the tlb flush. In exchange flush_tlb_range + * is a nop. 
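/*
 * Editorial sketch, not part of this patch: the generic three-step pte
 * update described in the comment above, roughly what change_pte_range()
 * does.  "example_update_prot" is only illustrative; on s390 step 1 has
 * already flushed the TLB for a live pte, which is why the
 * flush_tlb_range() of step 3 can be a nop.
 */
static inline void example_update_prot(struct vm_area_struct *vma,
                                       unsigned long addr, pte_t *ptep,
                                       pgprot_t newprot)
{
        struct mm_struct *mm = vma->vm_mm;
        pte_t old;

        old = ptep_get_and_clear(mm, addr, ptep);               /* 1) clear (+ flush) */
        set_pte_at(mm, addr, ptep, pte_modify(old, newprot));   /* 2) store new pte */
        /* 3) the caller's flush_tlb_range(vma, start, end) follows */
}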
+ */ +#define __HAVE_ARCH_PTEP_GET_AND_CLEAR +#define ptep_get_and_clear(__mm, __address, __ptep) \ +({ \ + pte_t __pte = *(__ptep); \ + if (atomic_read(&(__mm)->mm_users) > 1 || \ + (__mm) != current->active_mm) \ + ptep_invalidate(__address, __ptep); \ + else \ + pte_clear((__mm), (__address), (__ptep)); \ + __pte; \ +}) + +#define __HAVE_ARCH_PTEP_CLEAR_FLUSH static inline pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) { @@ -717,12 +729,40 @@ static inline pte_t ptep_clear_flush(struct vm_area_struct *vma, return pte; } -static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) +/* + * The batched pte unmap code uses ptep_get_and_clear_full to clear the + * ptes. Here an optimization is possible. tlb_gather_mmu flushes all + * tlbs of an mm if it can guarantee that the ptes of the mm_struct + * cannot be accessed while the batched unmap is running. In this case + * full==1 and a simple pte_clear is enough. See tlb.h. + */ +#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL +static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, + unsigned long addr, + pte_t *ptep, int full) { - pte_t old_pte = *ptep; - set_pte_at(mm, addr, ptep, pte_wrprotect(old_pte)); + pte_t pte = *ptep; + + if (full) + pte_clear(mm, addr, ptep); + else + ptep_invalidate(addr, ptep); + return pte; } +#define __HAVE_ARCH_PTEP_SET_WRPROTECT +#define ptep_set_wrprotect(__mm, __addr, __ptep) \ +({ \ + pte_t __pte = *(__ptep); \ + if (pte_write(__pte)) { \ + if (atomic_read(&(__mm)->mm_users) > 1 || \ + (__mm) != current->active_mm) \ + ptep_invalidate(__addr, __ptep); \ + set_pte_at(__mm, __addr, __ptep, pte_wrprotect(__pte)); \ + } \ +}) + +#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS #define ptep_set_access_flags(__vma, __addr, __ptep, __entry, __dirty) \ ({ \ int __changed = !pte_same(*(__ptep), __entry); \ @@ -740,11 +780,13 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, * should therefore only be called if it is not mapped in any * address space. */ +#define __HAVE_ARCH_PAGE_TEST_DIRTY static inline int page_test_dirty(struct page *page) { return (page_get_storage_key(page_to_phys(page)) & _PAGE_CHANGED) != 0; } +#define __HAVE_ARCH_PAGE_CLEAR_DIRTY static inline void page_clear_dirty(struct page *page) { page_set_storage_key(page_to_phys(page), PAGE_DEFAULT_KEY); @@ -753,6 +795,7 @@ static inline void page_clear_dirty(struct page *page) /* * Test and clear referenced bit in storage key. 
*/ +#define __HAVE_ARCH_PAGE_TEST_AND_CLEAR_YOUNG static inline int page_test_and_clear_young(struct page *page) { unsigned long physpage = page_to_phys(page); @@ -784,63 +827,48 @@ static inline pte_t mk_pte(struct page *page, pgprot_t pgprot) return mk_pte_phys(physpage, pgprot); } -static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) -{ - unsigned long physpage = __pa((pfn) << PAGE_SHIFT); - - return mk_pte_phys(physpage, pgprot); -} - -#ifdef __s390x__ - -static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot) -{ - unsigned long physpage = __pa((pfn) << PAGE_SHIFT); - - return __pmd(physpage + pgprot_val(pgprot)); -} - -#endif /* __s390x__ */ - -#define pte_pfn(x) (pte_val(x) >> PAGE_SHIFT) -#define pte_page(x) pfn_to_page(pte_pfn(x)) +#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1)) +#define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1)) +#define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) +#define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE-1)) -#define pmd_page_vaddr(pmd) (pmd_val(pmd) & PAGE_MASK) +#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) +#define pgd_offset_k(address) pgd_offset(&init_mm, address) -#define pmd_page(pmd) pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT) +#ifndef __s390x__ -#define pgd_page_vaddr(pgd) (pgd_val(pgd) & PAGE_MASK) +#define pmd_deref(pmd) (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN) +#define pud_deref(pmd) ({ BUG(); 0UL; }) +#define pgd_deref(pmd) ({ BUG(); 0UL; }) -#define pgd_page(pgd) pfn_to_page(pgd_val(pgd) >> PAGE_SHIFT) +#define pud_offset(pgd, address) ((pud_t *) pgd) +#define pmd_offset(pud, address) ((pmd_t *) pud + pmd_index(address)) -/* to find an entry in a page-table-directory */ -#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1)) -#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address)) +#else /* __s390x__ */ -/* to find an entry in a kernel page-table-directory */ -#define pgd_offset_k(address) pgd_offset(&init_mm, address) +#define pmd_deref(pmd) (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN) +#define pud_deref(pud) (pud_val(pud) & _REGION_ENTRY_ORIGIN) +#define pgd_deref(pgd) ({ BUG(); 0UL; }) -#ifndef __s390x__ +#define pud_offset(pgd, address) ((pud_t *) pgd) -/* Find an entry in the second-level page table.. */ -static inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address) +static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address) { - return (pmd_t *) dir; + pmd_t *pmd = (pmd_t *) pud_deref(*pud); + return pmd + pmd_index(address); } -#else /* __s390x__ */ +#endif /* __s390x__ */ -/* Find an entry in the second-level page table.. */ -#define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) -#define pmd_offset(dir,addr) \ - ((pmd_t *) pgd_page_vaddr(*(dir)) + pmd_index(addr)) +#define pfn_pte(pfn,pgprot) mk_pte_phys(__pa((pfn) << PAGE_SHIFT),(pgprot)) +#define pte_pfn(x) (pte_val(x) >> PAGE_SHIFT) +#define pte_page(x) pfn_to_page(pte_pfn(x)) -#endif /* __s390x__ */ +#define pmd_page(pmd) pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT) -/* Find an entry in the third-level page table.. */ -#define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE-1)) -#define pte_offset_kernel(pmd, address) \ - ((pte_t *) pmd_page_vaddr(*(pmd)) + pte_index(address)) +/* Find an entry in the lowest level page table.. 
*/ +#define pte_offset(pmd, addr) ((pte_t *) pmd_deref(*(pmd)) + pte_index(addr)) +#define pte_offset_kernel(pmd, address) pte_offset(pmd,address) #define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address) #define pte_offset_map_nested(pmd, address) pte_offset_kernel(pmd, address) #define pte_unmap(pte) do { } while (0) @@ -930,17 +958,6 @@ extern int remove_shared_memory(unsigned long start, unsigned long size); #define __HAVE_ARCH_MEMMAP_INIT extern void memmap_init(unsigned long, int, unsigned long, unsigned long); -#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS -#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG -#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH -#define __HAVE_ARCH_PTEP_GET_AND_CLEAR -#define __HAVE_ARCH_PTEP_CLEAR_FLUSH -#define __HAVE_ARCH_PTEP_SET_WRPROTECT -#define __HAVE_ARCH_PTE_SAME -#define __HAVE_ARCH_PAGE_TEST_DIRTY -#define __HAVE_ARCH_PAGE_CLEAR_DIRTY -#define __HAVE_ARCH_PAGE_TEST_AND_CLEAR_YOUNG #include <asm-generic/pgtable.h> #endif /* _S390_PAGE_H */ - diff --git a/include/asm-s390/processor.h b/include/asm-s390/processor.h index 3b972d4c6b2..21d40a19355 100644 --- a/include/asm-s390/processor.h +++ b/include/asm-s390/processor.h @@ -93,7 +93,6 @@ struct thread_struct { s390_fp_regs fp_regs; unsigned int acrs[NUM_ACRS]; unsigned long ksp; /* kernel stack pointer */ - unsigned long user_seg; /* HSTD */ mm_segment_t mm_segment; unsigned long prot_addr; /* address of protection-excep. */ unsigned int error_code; /* error-code of last prog-excep. */ @@ -128,22 +127,9 @@ struct stack_frame { #define ARCH_MIN_TASKALIGN 8 -#ifndef __s390x__ -# define __SWAPPER_PG_DIR __pa(&swapper_pg_dir[0]) + _SEGMENT_TABLE -#else /* __s390x__ */ -# define __SWAPPER_PG_DIR __pa(&swapper_pg_dir[0]) + _REGION_TABLE -#endif /* __s390x__ */ - -#define INIT_THREAD {{0,{{0},{0},{0},{0},{0},{0},{0},{0},{0},{0}, \ - {0},{0},{0},{0},{0},{0}}}, \ - {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, \ - sizeof(init_stack) + (unsigned long) &init_stack, \ - __SWAPPER_PG_DIR, \ - {0}, \ - 0,0,0, \ - (per_struct) {{{{0,}}},0,0,0,0,{{0,}}}, \ - 0, 0 \ -} +#define INIT_THREAD { \ + .ksp = sizeof(init_stack) + (unsigned long) &init_stack, \ +} /* * Do necessary setup to start up a new thread. diff --git a/include/asm-s390/scatterlist.h b/include/asm-s390/scatterlist.h index a43b3afc5e2..29ec8e28c8d 100644 --- a/include/asm-s390/scatterlist.h +++ b/include/asm-s390/scatterlist.h @@ -2,7 +2,10 @@ #define _ASMS390_SCATTERLIST_H struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; }; diff --git a/include/asm-s390/tlb.h b/include/asm-s390/tlb.h index 51bd957b85b..618693cfc10 100644 --- a/include/asm-s390/tlb.h +++ b/include/asm-s390/tlb.h @@ -2,19 +2,130 @@ #define _S390_TLB_H /* - * s390 doesn't need any special per-pte or - * per-vma handling.. + * TLB flushing on s390 is complicated. The following requirement + * from the principles of operation is the most arduous: + * + * "A valid table entry must not be changed while it is attached + * to any CPU and may be used for translation by that CPU except to + * (1) invalidate the entry by using INVALIDATE PAGE TABLE ENTRY, + * or INVALIDATE DAT TABLE ENTRY, (2) alter bits 56-63 of a page + * table entry, or (3) make a change by means of a COMPARE AND SWAP + * AND PURGE instruction that purges the TLB." + * + * The modification of a pte of an active mm struct therefore is + * a two step process: i) invalidate the pte, ii) store the new pte. 
+ * This is true for the page protection bit as well. + * The only possible optimization is to flush at the beginning of + * a tlb_gather_mmu cycle if the mm_struct is currently not in use. + * + * Pages used for the page tables is a different story. FIXME: more */ -#define tlb_start_vma(tlb, vma) do { } while (0) -#define tlb_end_vma(tlb, vma) do { } while (0) -#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0) + +#include <linux/mm.h> +#include <linux/swap.h> +#include <asm/processor.h> +#include <asm/pgalloc.h> +#include <asm/smp.h> +#include <asm/tlbflush.h> + +#ifndef CONFIG_SMP +#define TLB_NR_PTRS 1 +#else +#define TLB_NR_PTRS 508 +#endif + +struct mmu_gather { + struct mm_struct *mm; + unsigned int fullmm; + unsigned int nr_ptes; + unsigned int nr_pmds; + void *array[TLB_NR_PTRS]; +}; + +DECLARE_PER_CPU(struct mmu_gather, mmu_gathers); + +static inline struct mmu_gather *tlb_gather_mmu(struct mm_struct *mm, + unsigned int full_mm_flush) +{ + struct mmu_gather *tlb = &get_cpu_var(mmu_gathers); + + tlb->mm = mm; + tlb->fullmm = full_mm_flush || (num_online_cpus() == 1) || + (atomic_read(&mm->mm_users) <= 1 && mm == current->active_mm); + tlb->nr_ptes = 0; + tlb->nr_pmds = TLB_NR_PTRS; + if (tlb->fullmm) + __tlb_flush_mm(mm); + return tlb; +} + +static inline void tlb_flush_mmu(struct mmu_gather *tlb, + unsigned long start, unsigned long end) +{ + if (!tlb->fullmm && (tlb->nr_ptes > 0 || tlb->nr_pmds < TLB_NR_PTRS)) + __tlb_flush_mm(tlb->mm); + while (tlb->nr_ptes > 0) + pte_free(tlb->array[--tlb->nr_ptes]); + while (tlb->nr_pmds < TLB_NR_PTRS) + pmd_free((pmd_t *) tlb->array[tlb->nr_pmds++]); +} + +static inline void tlb_finish_mmu(struct mmu_gather *tlb, + unsigned long start, unsigned long end) +{ + tlb_flush_mmu(tlb, start, end); + + /* keep the page table cache within bounds */ + check_pgt_cache(); + + put_cpu_var(mmu_gathers); +} /* - * .. because we flush the whole mm when it - * fills up. + * Release the page cache reference for a pte removed by + * tlb_ptep_clear_flush. In both flush modes the tlb fo a page cache page + * has already been freed, so just do free_page_and_swap_cache. */ -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) +static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page) +{ + free_page_and_swap_cache(page); +} -#include <asm-generic/tlb.h> +/* + * pte_free_tlb frees a pte table and clears the CRSTE for the + * page table from the tlb. + */ +static inline void pte_free_tlb(struct mmu_gather *tlb, struct page *page) +{ + if (!tlb->fullmm) { + tlb->array[tlb->nr_ptes++] = page; + if (tlb->nr_ptes >= tlb->nr_pmds) + tlb_flush_mmu(tlb, 0, 0); + } else + pte_free(page); +} +/* + * pmd_free_tlb frees a pmd table and clears the CRSTE for the + * segment table entry from the tlb. 
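
A detail that is easy to miss in struct mmu_gather above: the single array[TLB_NR_PTRS] batches both kinds of deferred frees, pte pages growing up from index 0 (nr_ptes) and pmd tables growing down from the top (nr_pmds), with a flush forced once the two ends meet. A self-contained sketch of that two-ended batching scheme, with toy names standing in for the kernel structures:

#include <stdio.h>

#define TOY_NR_PTRS 8

struct toy_gather {
        unsigned int nr_ptes;           /* grows up from 0 */
        unsigned int nr_pmds;           /* grows down from TOY_NR_PTRS */
        void *array[TOY_NR_PTRS];
};

static void toy_flush(struct toy_gather *g)
{
        printf("flush: %u ptes, %u pmds batched\n",
               g->nr_ptes, TOY_NR_PTRS - g->nr_pmds);
        g->nr_ptes = 0;
        g->nr_pmds = TOY_NR_PTRS;
}

static void toy_defer_pte(struct toy_gather *g, void *pte)
{
        g->array[g->nr_ptes++] = pte;
        if (g->nr_ptes >= g->nr_pmds)   /* the two ends met: array is full */
                toy_flush(g);
}

static void toy_defer_pmd(struct toy_gather *g, void *pmd)
{
        g->array[--g->nr_pmds] = pmd;
        if (g->nr_ptes >= g->nr_pmds)
                toy_flush(g);
}

int main(void)
{
        struct toy_gather g = { .nr_ptes = 0, .nr_pmds = TOY_NR_PTRS };
        int i;

        for (i = 0; i < 10; i++)
                toy_defer_pte(&g, (void *) 0);
        toy_defer_pmd(&g, (void *) 0);
        toy_flush(&g);
        return 0;
}
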
+ */ +static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) +{ +#ifdef __s390x__ + if (!tlb->fullmm) { + tlb->array[--tlb->nr_pmds] = (struct page *) pmd; + if (tlb->nr_ptes >= tlb->nr_pmds) + tlb_flush_mmu(tlb, 0, 0); + } else + pmd_free(pmd); #endif +} + +#define pud_free_tlb(tlb, pud) do { } while (0) + +#define tlb_start_vma(tlb, vma) do { } while (0) +#define tlb_end_vma(tlb, vma) do { } while (0) +#define tlb_remove_tlb_entry(tlb, ptep, addr) do { } while (0) +#define tlb_migrate_finish(mm) do { } while (0) + +#endif /* _S390_TLB_H */ diff --git a/include/asm-s390/tlbflush.h b/include/asm-s390/tlbflush.h index 6de2632a3e4..a69bd2490d5 100644 --- a/include/asm-s390/tlbflush.h +++ b/include/asm-s390/tlbflush.h @@ -6,68 +6,19 @@ #include <asm/pgalloc.h> /* - * TLB flushing: - * - * - flush_tlb() flushes the current mm struct TLBs - * - flush_tlb_all() flushes all processes TLBs - * - flush_tlb_mm(mm) flushes the specified mm context TLB's - * - flush_tlb_page(vma, vmaddr) flushes one page - * - flush_tlb_range(vma, start, end) flushes a range of pages - * - flush_tlb_kernel_range(start, end) flushes a range of kernel pages - */ - -/* - * S/390 has three ways of flushing TLBs - * 'ptlb' does a flush of the local processor - * 'csp' flushes the TLBs on all PUs of a SMP - * 'ipte' invalidates a pte in a page table and flushes that out of - * the TLBs of all PUs of a SMP - */ - -#define local_flush_tlb() \ -do { asm volatile("ptlb": : :"memory"); } while (0) - -#ifndef CONFIG_SMP - -/* - * We always need to flush, since s390 does not flush tlb - * on each context switch + * Flush all tlb entries on the local cpu. */ - -static inline void flush_tlb(void) +static inline void __tlb_flush_local(void) { - local_flush_tlb(); + asm volatile("ptlb" : : : "memory"); } -static inline void flush_tlb_all(void) -{ - local_flush_tlb(); -} -static inline void flush_tlb_mm(struct mm_struct *mm) -{ - local_flush_tlb(); -} -static inline void flush_tlb_page(struct vm_area_struct *vma, - unsigned long addr) -{ - local_flush_tlb(); -} -static inline void flush_tlb_range(struct vm_area_struct *vma, - unsigned long start, unsigned long end) -{ - local_flush_tlb(); -} - -#define flush_tlb_kernel_range(start, end) \ - local_flush_tlb(); - -#else -#include <asm/smp.h> - -extern void smp_ptlb_all(void); - -static inline void global_flush_tlb(void) +/* + * Flush all tlb entries on all cpus. + */ +static inline void __tlb_flush_global(void) { + extern void smp_ptlb_all(void); register unsigned long reg2 asm("2"); register unsigned long reg3 asm("3"); register unsigned long reg4 asm("4"); @@ -89,66 +40,75 @@ static inline void global_flush_tlb(void) } /* - * We only have to do global flush of tlb if process run since last - * flush on any other pu than current. - * If we have threads (mm->count > 1) we always do a global flush, - * since the process runs on more than one processor at the same time. + * Flush all tlb entries of a page table on all cpus. */ +static inline void __tlb_flush_idte(pgd_t *pgd) +{ + asm volatile( + " .insn rrf,0xb98e0000,0,%0,%1,0" + : : "a" (2048), "a" (__pa(pgd) & PAGE_MASK) : "cc" ); +} -static inline void __flush_tlb_mm(struct mm_struct * mm) +static inline void __tlb_flush_mm(struct mm_struct * mm) { cpumask_t local_cpumask; if (unlikely(cpus_empty(mm->cpu_vm_mask))) return; + /* + * If the machine has IDTE we prefer to do a per mm flush + * on all cpus instead of doing a local flush if the mm + * only ran on the local cpu. 
+ */ if (MACHINE_HAS_IDTE) { - pgd_t *shadow_pgd = get_shadow_pgd(mm->pgd); + pgd_t *shadow_pgd = get_shadow_table(mm->pgd); - if (shadow_pgd) { - asm volatile( - " .insn rrf,0xb98e0000,0,%0,%1,0" - : : "a" (2048), - "a" (__pa(shadow_pgd) & PAGE_MASK) : "cc" ); - } - asm volatile( - " .insn rrf,0xb98e0000,0,%0,%1,0" - : : "a" (2048), "a" (__pa(mm->pgd)&PAGE_MASK) : "cc"); + if (shadow_pgd) + __tlb_flush_idte(shadow_pgd); + __tlb_flush_idte(mm->pgd); return; } preempt_disable(); + /* + * If the process only ran on the local cpu, do a local flush. + */ local_cpumask = cpumask_of_cpu(smp_processor_id()); if (cpus_equal(mm->cpu_vm_mask, local_cpumask)) - local_flush_tlb(); + __tlb_flush_local(); else - global_flush_tlb(); + __tlb_flush_global(); preempt_enable(); } -static inline void flush_tlb(void) -{ - __flush_tlb_mm(current->mm); -} -static inline void flush_tlb_all(void) -{ - global_flush_tlb(); -} -static inline void flush_tlb_mm(struct mm_struct *mm) -{ - __flush_tlb_mm(mm); -} -static inline void flush_tlb_page(struct vm_area_struct *vma, - unsigned long addr) -{ - __flush_tlb_mm(vma->vm_mm); -} -static inline void flush_tlb_range(struct vm_area_struct *vma, - unsigned long start, unsigned long end) +static inline void __tlb_flush_mm_cond(struct mm_struct * mm) { - __flush_tlb_mm(vma->vm_mm); + if (atomic_read(&mm->mm_users) <= 1 && mm == current->active_mm) + __tlb_flush_mm(mm); } -#define flush_tlb_kernel_range(start, end) global_flush_tlb() +/* + * TLB flushing: + * flush_tlb() - flushes the current mm struct TLBs + * flush_tlb_all() - flushes all processes TLBs + * flush_tlb_mm(mm) - flushes the specified mm context TLB's + * flush_tlb_page(vma, vmaddr) - flushes one page + * flush_tlb_range(vma, start, end) - flushes a range of pages + * flush_tlb_kernel_range(start, end) - flushes a range of kernel pages + */ -#endif +/* + * flush_tlb_mm goes together with ptep_set_wrprotect for the + * copy_page_range operation and flush_tlb_range is related to + * ptep_get_and_clear for change_protection. ptep_set_wrprotect and + * ptep_get_and_clear do not flush the TLBs directly if the mm has + * only one user. At the end of the update the flush_tlb_mm and + * flush_tlb_range functions need to do the flush. 
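
Stepping back, __tlb_flush_mm() above chooses between three strategies: an IDTE flush of just this address space on all CPUs when the machine supports it, a local ptlb when the mm only ever ran on the local CPU, and a global flush otherwise (and nothing at all if the mm never ran anywhere). A compact restatement of that decision tree as plain C, with the predicates passed in as flags instead of being read from the real cpumask and machine flags:

enum s390_flush_kind {
        FLUSH_NONE,     /* cpu_vm_mask empty: nothing to do */
        FLUSH_IDTE,     /* per-address-space flush on all CPUs */
        FLUSH_LOCAL,    /* "ptlb" on this CPU only */
        FLUSH_GLOBAL    /* flush everywhere */
};

static enum s390_flush_kind pick_flush(int mm_ever_ran, int machine_has_idte,
                                       int ran_only_on_this_cpu)
{
        if (!mm_ever_ran)
                return FLUSH_NONE;
        if (machine_has_idte)
                return FLUSH_IDTE;
        if (ran_only_on_this_cpu)
                return FLUSH_LOCAL;
        return FLUSH_GLOBAL;
}
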
+ */ +#define flush_tlb() do { } while (0) +#define flush_tlb_all() do { } while (0) +#define flush_tlb_mm(mm) __tlb_flush_mm_cond(mm) +#define flush_tlb_page(vma, addr) do { } while (0) +#define flush_tlb_range(vma, start, end) __tlb_flush_mm_cond(mm) +#define flush_tlb_kernel_range(start, end) __tlb_flush_mm(&init_mm) #endif /* _S390_TLBFLUSH_H */ diff --git a/include/asm-sh/dma-mapping.h b/include/asm-sh/dma-mapping.h index 84fefdaa01a..fcea067f7a9 100644 --- a/include/asm-sh/dma-mapping.h +++ b/include/asm-sh/dma-mapping.h @@ -2,7 +2,7 @@ #define __ASM_SH_DMA_MAPPING_H #include <linux/mm.h> -#include <asm/scatterlist.h> +#include <linux/scatterlist.h> #include <asm/cacheflush.h> #include <asm/io.h> @@ -85,10 +85,9 @@ static inline int dma_map_sg(struct device *dev, struct scatterlist *sg, for (i = 0; i < nents; i++) { #if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT) - dma_cache_sync(dev, page_address(sg[i].page) + sg[i].offset, - sg[i].length, dir); + dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir); #endif - sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset; + sg[i].dma_address = sg_phys(&sg[i]); } return nents; @@ -138,10 +137,9 @@ static inline void dma_sync_sg(struct device *dev, struct scatterlist *sg, for (i = 0; i < nelems; i++) { #if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT) - dma_cache_sync(dev, page_address(sg[i].page) + sg[i].offset, - sg[i].length, dir); + dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir); #endif - sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset; + sg[i].dma_address = sg_phys(&sg[i]); } } diff --git a/include/asm-sh/scatterlist.h b/include/asm-sh/scatterlist.h index b9ae53c3836..a7d0d1856a9 100644 --- a/include/asm-sh/scatterlist.h +++ b/include/asm-sh/scatterlist.h @@ -4,7 +4,10 @@ #include <asm/types.h> struct scatterlist { - struct page * page; /* Location for highmem page, if any */ +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset;/* for highmem, page offset */ dma_addr_t dma_address; unsigned int length; diff --git a/include/asm-sh64/dma-mapping.h b/include/asm-sh64/dma-mapping.h index e661857f98d..1438b763a5e 100644 --- a/include/asm-sh64/dma-mapping.h +++ b/include/asm-sh64/dma-mapping.h @@ -2,7 +2,7 @@ #define __ASM_SH_DMA_MAPPING_H #include <linux/mm.h> -#include <asm/scatterlist.h> +#include <linux/scatterlist.h> #include <asm/io.h> struct pci_dev; @@ -71,10 +71,9 @@ static inline int dma_map_sg(struct device *dev, struct scatterlist *sg, for (i = 0; i < nents; i++) { #if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT) - dma_cache_sync(dev, page_address(sg[i].page) + sg[i].offset, - sg[i].length, dir); + dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir); #endif - sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset; + sg[i].dma_address = sg_phys(&sg[i]); } return nents; @@ -124,10 +123,9 @@ static inline void dma_sync_sg(struct device *dev, struct scatterlist *sg, for (i = 0; i < nelems; i++) { #if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT) - dma_cache_sync(dev, page_address(sg[i].page) + sg[i].offset, - sg[i].length, dir); + dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir); #endif - sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset; + sg[i].dma_address = sg_phys(&sg[i]); } } diff --git a/include/asm-sh64/scatterlist.h b/include/asm-sh64/scatterlist.h index 1c723f2d7a9..5109251970e 100644 --- a/include/asm-sh64/scatterlist.h +++ 
b/include/asm-sh64/scatterlist.h @@ -14,7 +14,10 @@ #include <asm/types.h> struct scatterlist { - struct page * page; /* Location for highmem page, if any */ +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset;/* for highmem, page offset */ dma_addr_t dma_address; unsigned int length; diff --git a/include/asm-sparc/scatterlist.h b/include/asm-sparc/scatterlist.h index 4055af90ad7..e08d3d775b0 100644 --- a/include/asm-sparc/scatterlist.h +++ b/include/asm-sparc/scatterlist.h @@ -5,7 +5,10 @@ #include <linux/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; diff --git a/include/asm-sparc64/scatterlist.h b/include/asm-sparc64/scatterlist.h index 703c5bbe6c8..6df23f070b1 100644 --- a/include/asm-sparc64/scatterlist.h +++ b/include/asm-sparc64/scatterlist.h @@ -6,7 +6,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; diff --git a/include/asm-v850/scatterlist.h b/include/asm-v850/scatterlist.h index 56f402920db..02d27b3fb06 100644 --- a/include/asm-v850/scatterlist.h +++ b/include/asm-v850/scatterlist.h @@ -17,7 +17,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned offset; dma_addr_t dma_address; unsigned length; diff --git a/include/asm-x86/bootparam.h b/include/asm-x86/bootparam.h index ef67b59dbdb..dc031cf4463 100644 --- a/include/asm-x86/bootparam.h +++ b/include/asm-x86/bootparam.h @@ -28,8 +28,9 @@ struct setup_header { u16 kernel_version; u8 type_of_loader; u8 loadflags; -#define LOADED_HIGH 0x01 -#define CAN_USE_HEAP 0x80 +#define LOADED_HIGH (1<<0) +#define KEEP_SEGMENTS (1<<6) +#define CAN_USE_HEAP (1<<7) u16 setup_move_size; u32 code32_start; u32 ramdisk_image; @@ -41,6 +42,10 @@ struct setup_header { u32 initrd_addr_max; u32 kernel_alignment; u8 relocatable_kernel; + u8 _pad2[3]; + u32 cmdline_size; + u32 hardware_subarch; + u64 hardware_subarch_data; } __attribute__((packed)); struct sys_desc_table { diff --git a/include/asm-x86/cacheflush.h b/include/asm-x86/cacheflush.h index b3d43de44c5..9411a2d3f19 100644 --- a/include/asm-x86/cacheflush.h +++ b/include/asm-x86/cacheflush.h @@ -27,6 +27,7 @@ void global_flush_tlb(void); int change_page_attr(struct page *page, int numpages, pgprot_t prot); int change_page_attr_addr(unsigned long addr, int numpages, pgprot_t prot); +void clflush_cache_range(void *addr, int size); #ifdef CONFIG_DEBUG_PAGEALLOC /* internal debugging function */ diff --git a/include/asm-x86/device.h b/include/asm-x86/device.h index d9ee5e52e91..87a715367a1 100644 --- a/include/asm-x86/device.h +++ b/include/asm-x86/device.h @@ -5,6 +5,9 @@ struct dev_archdata { #ifdef CONFIG_ACPI void *acpi_handle; #endif +#ifdef CONFIG_DMAR + void *iommu; /* hook for IOMMU specific extension */ +#endif }; #endif /* _ASM_X86_DEVICE_H */ diff --git a/include/asm-x86/dma-mapping_32.h b/include/asm-x86/dma-mapping_32.h index 6a2d26cb5da..55f01bd9e55 100644 --- a/include/asm-x86/dma-mapping_32.h +++ b/include/asm-x86/dma-mapping_32.h @@ -45,9 +45,9 @@ dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents, WARN_ON(nents == 0 || sglist[0].length == 0); for_each_sg(sglist, sg, nents, i) { - BUG_ON(!sg->page); + BUG_ON(!sg_page(sg)); - 
sg->dma_address = page_to_phys(sg->page) + sg->offset; + sg->dma_address = sg_phys(sg); } flush_write_buffers(); diff --git a/include/asm-x86/scatterlist_32.h b/include/asm-x86/scatterlist_32.h index bd5164aa8f6..0e7d997a34b 100644 --- a/include/asm-x86/scatterlist_32.h +++ b/include/asm-x86/scatterlist_32.h @@ -4,7 +4,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; dma_addr_t dma_address; unsigned int length; diff --git a/include/asm-x86/scatterlist_64.h b/include/asm-x86/scatterlist_64.h index ef3986ba4b7..1847c72befe 100644 --- a/include/asm-x86/scatterlist_64.h +++ b/include/asm-x86/scatterlist_64.h @@ -4,7 +4,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; unsigned int length; dma_addr_t dma_address; diff --git a/include/asm-xtensa/scatterlist.h b/include/asm-xtensa/scatterlist.h index ca337a29429..810080bb0a2 100644 --- a/include/asm-xtensa/scatterlist.h +++ b/include/asm-xtensa/scatterlist.h @@ -14,7 +14,10 @@ #include <asm/types.h> struct scatterlist { - struct page *page; +#ifdef CONFIG_DEBUG_SG + unsigned long sg_magic; +#endif + unsigned long page_link; unsigned int offset; dma_addr_t dma_address; unsigned int length; diff --git a/include/linux/capability.h b/include/linux/capability.h index 7a8d7ade28a..bb017edffd5 100644 --- a/include/linux/capability.h +++ b/include/linux/capability.h @@ -56,10 +56,8 @@ typedef struct __user_cap_data_struct { struct vfs_cap_data { __u32 magic_etc; /* Little endian */ - struct { - __u32 permitted; /* Little endian */ - __u32 inheritable; /* Little endian */ - } data[1]; + __u32 permitted; /* Little endian */ + __u32 inheritable; /* Little endian */ }; #ifdef __KERNEL__ diff --git a/include/linux/dmar.h b/include/linux/dmar.h new file mode 100644 index 00000000000..ffb6439cb5e --- /dev/null +++ b/include/linux/dmar.h @@ -0,0 +1,86 @@ +/* + * Copyright (c) 2006, Intel Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple + * Place - Suite 330, Boston, MA 02111-1307 USA. 
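
The dma_map_sg()/dma_sync_sg() hunks above (asm-sh, asm-sh64, asm-x86) all follow the same mechanical conversion: with the scatterlist page pointer hidden behind page_link, drivers go through sg_page()/sg_phys()/sg_virt() instead of touching sg->page directly. A sketch of the resulting per-entry loop as a driver would now write it (example_map_entries is an illustrative name):

#include <linux/scatterlist.h>

static void example_map_entries(struct scatterlist *sglist, int nents)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sglist, sg, nents, i) {
                /* sg_virt() == page_address(sg_page(sg)) + sg->offset */
                void *vaddr = sg_virt(sg);

                /* sg_phys() == page_to_phys(sg_page(sg)) + sg->offset */
                sg->dma_address = sg_phys(sg);
                (void) vaddr;   /* cache maintenance would go here */
        }
}
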
+ * + * Copyright (C) Ashok Raj <ashok.raj@intel.com> + * Copyright (C) Shaohua Li <shaohua.li@intel.com> + */ + +#ifndef __DMAR_H__ +#define __DMAR_H__ + +#include <linux/acpi.h> +#include <linux/types.h> +#include <linux/msi.h> + +#ifdef CONFIG_DMAR +struct intel_iommu; + +extern char *dmar_get_fault_reason(u8 fault_reason); + +/* Can't use the common MSI interrupt functions + * since DMAR is not a pci device + */ +extern void dmar_msi_unmask(unsigned int irq); +extern void dmar_msi_mask(unsigned int irq); +extern void dmar_msi_read(int irq, struct msi_msg *msg); +extern void dmar_msi_write(int irq, struct msi_msg *msg); +extern int dmar_set_interrupt(struct intel_iommu *iommu); +extern int arch_setup_dmar_msi(unsigned int irq); + +/* Intel IOMMU detection and initialization functions */ +extern void detect_intel_iommu(void); +extern int intel_iommu_init(void); + +extern int dmar_table_init(void); +extern int early_dmar_detect(void); + +extern struct list_head dmar_drhd_units; +extern struct list_head dmar_rmrr_units; + +struct dmar_drhd_unit { + struct list_head list; /* list of drhd units */ + u64 reg_base_addr; /* register base address*/ + struct pci_dev **devices; /* target device array */ + int devices_cnt; /* target device count */ + u8 ignored:1; /* ignore drhd */ + u8 include_all:1; + struct intel_iommu *iommu; +}; + +struct dmar_rmrr_unit { + struct list_head list; /* list of rmrr units */ + u64 base_address; /* reserved base address*/ + u64 end_address; /* reserved end address */ + struct pci_dev **devices; /* target devices */ + int devices_cnt; /* target device count */ +}; + +#define for_each_drhd_unit(drhd) \ + list_for_each_entry(drhd, &dmar_drhd_units, list) +#define for_each_rmrr_units(rmrr) \ + list_for_each_entry(rmrr, &dmar_rmrr_units, list) +#else +static inline void detect_intel_iommu(void) +{ + return; +} +static inline int intel_iommu_init(void) +{ + return -ENODEV; +} + +#endif /* !CONFIG_DMAR */ +#endif /* __DMAR_H__ */ diff --git a/include/linux/efi.h b/include/linux/efi.h index 0b9579a4cd4..14813b59580 100644 --- a/include/linux/efi.h +++ b/include/linux/efi.h @@ -298,7 +298,7 @@ extern int efi_mem_attribute_range (unsigned long phys_addr, unsigned long size, u64 attr); extern int __init efi_uart_console_only (void); extern void efi_initialize_iomem_resources(struct resource *code_resource, - struct resource *data_resource); + struct resource *data_resource, struct resource *bss_resource); extern unsigned long efi_get_time(void); extern int efi_set_rtc_mmss(unsigned long nowtime); extern int is_available_memory(efi_memory_desc_t * md); diff --git a/include/linux/efs_fs.h b/include/linux/efs_fs.h index 16cb25cbf7c..dd57fe523e9 100644 --- a/include/linux/efs_fs.h +++ b/include/linux/efs_fs.h @@ -35,6 +35,7 @@ static inline struct efs_sb_info *SUPER_INFO(struct super_block *sb) } struct statfs; +struct fid; extern const struct inode_operations efs_dir_inode_operations; extern const struct file_operations efs_dir_operations; @@ -45,7 +46,10 @@ extern efs_block_t efs_map_block(struct inode *, efs_block_t); extern int efs_get_block(struct inode *, sector_t, struct buffer_head *, int); extern struct dentry *efs_lookup(struct inode *, struct dentry *, struct nameidata *); -extern struct dentry *efs_get_dentry(struct super_block *sb, void *vobjp); +extern struct dentry *efs_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type); +extern struct dentry *efs_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type); extern 
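
The dmar_drhd_unit and dmar_rmrr_unit lists declared above are meant to be walked with the for_each_drhd_unit()/for_each_rmrr_units() helpers once dmar_table_init() has parsed the ACPI tables. A minimal consumer sketch, assuming CONFIG_DMAR=y (count_active_drhds is an illustrative name, not an existing function):

#include <linux/dmar.h>

static int count_active_drhds(void)
{
        struct dmar_drhd_unit *drhd;
        int n = 0;

        for_each_drhd_unit(drhd) {
                if (drhd->ignored)
                        continue;       /* firmware told us to skip it */
                n++;
        }
        return n;
}
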
struct dentry *efs_get_parent(struct dentry *); extern int efs_bmap(struct inode *, int); diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h index 8872fe8392d..51d21413881 100644 --- a/include/linux/exportfs.h +++ b/include/linux/exportfs.h @@ -4,9 +4,48 @@ #include <linux/types.h> struct dentry; +struct inode; struct super_block; struct vfsmount; +/* + * The fileid_type identifies how the file within the filesystem is encoded. + * In theory this is freely set and parsed by the filesystem, but we try to + * stick to conventions so we can share some generic code and don't confuse + * sniffers like ethereal/wireshark. + * + * The filesystem must not use the value '0' or '0xff'. + */ +enum fid_type { + /* + * The root, or export point, of the filesystem. + * (Never actually passed down to the filesystem. + */ + FILEID_ROOT = 0, + + /* + * 32bit inode number, 32 bit generation number. + */ + FILEID_INO32_GEN = 1, + + /* + * 32bit inode number, 32 bit generation number, + * 32 bit parent directory inode number. + */ + FILEID_INO32_GEN_PARENT = 2, +}; + +struct fid { + union { + struct { + u32 ino; + u32 gen; + u32 parent_ino; + u32 parent_gen; + } i32; + __u32 raw[6]; + }; +}; /** * struct export_operations - for nfsd to communicate with file systems @@ -15,43 +54,9 @@ struct vfsmount; * @get_name: find the name for a given inode in a given directory * @get_parent: find the parent of a given directory * @get_dentry: find a dentry for the inode given a file handle sub-fragment - * @find_exported_dentry: - * set by the exporting module to a standard helper function. - * - * Description: - * The export_operations structure provides a means for nfsd to communicate - * with a particular exported file system - particularly enabling nfsd and - * the filesystem to co-operate when dealing with file handles. - * - * export_operations contains two basic operation for dealing with file - * handles, decode_fh() and encode_fh(), and allows for some other - * operations to be defined which standard helper routines use to get - * specific information from the filesystem. - * - * nfsd encodes information use to determine which filesystem a filehandle - * applies to in the initial part of the file handle. The remainder, termed - * a file handle fragment, is controlled completely by the filesystem. The - * standard helper routines assume that this fragment will contain one or - * two sub-fragments, one which identifies the file, and one which may be - * used to identify the (a) directory containing the file. * - * In some situations, nfsd needs to get a dentry which is connected into a - * specific part of the file tree. To allow for this, it passes the - * function acceptable() together with a @context which can be used to see - * if the dentry is acceptable. As there can be multiple dentrys for a - * given file, the filesystem should check each one for acceptability before - * looking for the next. As soon as an acceptable one is found, it should - * be returned. - * - * decode_fh: - * @decode_fh is given a &struct super_block (@sb), a file handle fragment - * (@fh, @fh_len) and an acceptability testing function (@acceptable, - * @context). It should return a &struct dentry which refers to the same - * file that the file handle fragment refers to, and which passes the - * acceptability test. If it cannot, it should return a %NULL pointer if - * the file was found but no acceptable &dentries were available, or a - * %ERR_PTR error code indicating why it couldn't be found (e.g. 
%ENOENT or - * %ENOMEM). + * See Documentation/filesystems/Exporting for details on how to use + * this interface correctly. * * encode_fh: * @encode_fh should store in the file handle fragment @fh (using at most @@ -63,6 +68,21 @@ struct vfsmount; * the filehandle fragment. encode_fh() should return the number of bytes * stored or a negative error code such as %-ENOSPC * + * fh_to_dentry: + * @fh_to_dentry is given a &struct super_block (@sb) and a file handle + * fragment (@fh, @fh_len). It should return a &struct dentry which refers + * to the same file that the file handle fragment refers to. If it cannot, + * it should return a %NULL pointer if the file was found but no acceptable + * &dentries were available, or an %ERR_PTR error code indicating why it + * couldn't be found (e.g. %ENOENT or %ENOMEM). Any suitable dentry can be + * returned including, if necessary, a new dentry created with d_alloc_root. + * The caller can then find any other extant dentries by following the + * d_alias links. + * + * fh_to_parent: + * Same as @fh_to_dentry, except that it returns a pointer to the parent + * dentry if it was encoded into the filehandle fragment by @encode_fh. + * * get_name: * @get_name should find a name for the given @child in the given @parent * directory. The name should be stored in the @name (with the @@ -75,52 +95,37 @@ struct vfsmount; * is also a directory. In the event that it cannot be found, or storage * space cannot be allocated, a %ERR_PTR should be returned. * - * get_dentry: - * Given a &super_block (@sb) and a pointer to a file-system specific inode - * identifier, possibly an inode number, (@inump) get_dentry() should find - * the identified inode and return a dentry for that inode. Any suitable - * dentry can be returned including, if necessary, a new dentry created with - * d_alloc_root. The caller can then find any other extant dentrys by - * following the d_alias links. If a new dentry was created using - * d_alloc_root, DCACHE_NFSD_DISCONNECTED should be set, and the dentry - * should be d_rehash()ed. - * - * If the inode cannot be found, either a %NULL pointer or an %ERR_PTR code - * can be returned. The @inump will be whatever was passed to - * nfsd_find_fh_dentry() in either the @obj or @parent parameters. 
- * * Locking rules: * get_parent is called with child->d_inode->i_mutex down * get_name is not (which is possibly inconsistent) */ struct export_operations { - struct dentry *(*decode_fh)(struct super_block *sb, __u32 *fh, - int fh_len, int fh_type, - int (*acceptable)(void *context, struct dentry *de), - void *context); int (*encode_fh)(struct dentry *de, __u32 *fh, int *max_len, int connectable); + struct dentry * (*fh_to_dentry)(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type); + struct dentry * (*fh_to_parent)(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type); int (*get_name)(struct dentry *parent, char *name, struct dentry *child); struct dentry * (*get_parent)(struct dentry *child); - struct dentry * (*get_dentry)(struct super_block *sb, void *inump); - - /* This is set by the exporting module to a standard helper */ - struct dentry * (*find_exported_dentry)( - struct super_block *sb, void *obj, void *parent, - int (*acceptable)(void *context, struct dentry *de), - void *context); }; -extern struct dentry *find_exported_dentry(struct super_block *sb, void *obj, - void *parent, int (*acceptable)(void *context, struct dentry *de), - void *context); - -extern int exportfs_encode_fh(struct dentry *dentry, __u32 *fh, int *max_len, - int connectable); -extern struct dentry *exportfs_decode_fh(struct vfsmount *mnt, __u32 *fh, +extern int exportfs_encode_fh(struct dentry *dentry, struct fid *fid, + int *max_len, int connectable); +extern struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid, int fh_len, int fileid_type, int (*acceptable)(void *, struct dentry *), void *context); +/* + * Generic helpers for filesystems. + */ +extern struct dentry *generic_fh_to_dentry(struct super_block *sb, + struct fid *fid, int fh_len, int fh_type, + struct inode *(*get_inode) (struct super_block *sb, u64 ino, u32 gen)); +extern struct dentry *generic_fh_to_parent(struct super_block *sb, + struct fid *fid, int fh_len, int fh_type, + struct inode *(*get_inode) (struct super_block *sb, u64 ino, u32 gen)); + #endif /* LINUX_EXPORTFS_H */ diff --git a/include/linux/ext2_fs.h b/include/linux/ext2_fs.h index c77c3bbfe4b..0f6c86c634f 100644 --- a/include/linux/ext2_fs.h +++ b/include/linux/ext2_fs.h @@ -561,6 +561,7 @@ enum { #define EXT2_DIR_ROUND (EXT2_DIR_PAD - 1) #define EXT2_DIR_REC_LEN(name_len) (((name_len) + 8 + EXT2_DIR_ROUND) & \ ~EXT2_DIR_ROUND) +#define EXT2_MAX_REC_LEN ((1<<16)-1) static inline ext2_fsblk_t ext2_group_first_block_no(struct super_block *sb, unsigned long group_no) diff --git a/include/linux/fs.h b/include/linux/fs.h index 50078bb30a1..b3ec4a496d6 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -987,7 +987,7 @@ struct super_block { const struct super_operations *s_op; struct dquot_operations *dq_op; struct quotactl_ops *s_qcop; - struct export_operations *s_export_op; + const struct export_operations *s_export_op; unsigned long s_flags; unsigned long s_magic; struct dentry *s_root; diff --git a/include/linux/i8042.h b/include/linux/i8042.h new file mode 100644 index 00000000000..7907a72403e --- /dev/null +++ b/include/linux/i8042.h @@ -0,0 +1,35 @@ +#ifndef _LINUX_I8042_H +#define _LINUX_I8042_H + +/* + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + */ + + +/* + * Standard commands. 
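
For a filesystem that can already look an inode up by inode number and generation, adopting the new export_operations above is mostly a matter of wrapping the generic helpers declared at the end of the header. A hedged sketch; example_iget() stands in for the filesystem's own lookup routine and is not a real API:

#include <linux/exportfs.h>

/* hypothetical per-filesystem lookup: (ino, generation) -> inode */
static struct inode *example_iget(struct super_block *sb, u64 ino, u32 gen);

static struct dentry *example_fh_to_dentry(struct super_block *sb,
                                           struct fid *fid,
                                           int fh_len, int fh_type)
{
        return generic_fh_to_dentry(sb, fid, fh_len, fh_type, example_iget);
}

static struct dentry *example_fh_to_parent(struct super_block *sb,
                                           struct fid *fid,
                                           int fh_len, int fh_type)
{
        return generic_fh_to_parent(sb, fid, fh_len, fh_type, example_iget);
}

static const struct export_operations example_export_ops = {
        .fh_to_dentry   = example_fh_to_dentry,
        .fh_to_parent   = example_fh_to_parent,
};

/* wired up at mount time with sb->s_export_op = &example_export_ops; */
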
+ */ + +#define I8042_CMD_CTL_RCTR 0x0120 +#define I8042_CMD_CTL_WCTR 0x1060 +#define I8042_CMD_CTL_TEST 0x01aa + +#define I8042_CMD_KBD_DISABLE 0x00ad +#define I8042_CMD_KBD_ENABLE 0x00ae +#define I8042_CMD_KBD_TEST 0x01ab +#define I8042_CMD_KBD_LOOP 0x11d2 + +#define I8042_CMD_AUX_DISABLE 0x00a7 +#define I8042_CMD_AUX_ENABLE 0x00a8 +#define I8042_CMD_AUX_TEST 0x01a9 +#define I8042_CMD_AUX_SEND 0x10d4 +#define I8042_CMD_AUX_LOOP 0x11d3 + +#define I8042_CMD_MUX_PFX 0x0090 +#define I8042_CMD_MUX_SEND 0x1090 + +int i8042_command(unsigned char *param, int command); + +#endif diff --git a/include/linux/ide.h b/include/linux/ide.h index 2e4b8dd03cf..4ed4777bba6 100644 --- a/include/linux/ide.h +++ b/include/linux/ide.h @@ -667,7 +667,7 @@ typedef struct hwif_s { u8 straight8; /* Alan's straight 8 check */ u8 bus_state; /* power state of the IDE bus */ - u16 host_flags; + u32 host_flags; u8 pio_mask; diff --git a/include/linux/linkage.h b/include/linux/linkage.h index 6c9873f8828..ff203dd0291 100644 --- a/include/linux/linkage.h +++ b/include/linux/linkage.h @@ -34,6 +34,12 @@ name: #endif +#ifndef WEAK +#define WEAK(name) \ + .weak name; \ + name: +#endif + #define KPROBE_ENTRY(name) \ .pushsection .kprobes.text, "ax"; \ ENTRY(name) diff --git a/include/linux/memory.h b/include/linux/memory.h index 654ef554487..33f0ff0cf63 100644 --- a/include/linux/memory.h +++ b/include/linux/memory.h @@ -41,18 +41,15 @@ struct memory_block { #define MEM_ONLINE (1<<0) /* exposed to userspace */ #define MEM_GOING_OFFLINE (1<<1) /* exposed to userspace */ #define MEM_OFFLINE (1<<2) /* exposed to userspace */ +#define MEM_GOING_ONLINE (1<<3) +#define MEM_CANCEL_ONLINE (1<<4) +#define MEM_CANCEL_OFFLINE (1<<5) -/* - * All of these states are currently kernel-internal for notifying - * kernel components and architectures. - * - * For MEM_MAPPING_INVALID, all notifier chains with priority >0 - * are called before pfn_to_page() becomes invalid. The priority=0 - * entry is reserved for the function that actually makes - * pfn_to_page() stop working. Any notifiers that want to be called - * after that should have priority <0. 
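
The point of the new linux/i8042.h above is the exported i8042_command(), which lets code outside the serio core issue keyboard-controller commands. A usage sketch follows; reading the constants, the upper byte appears to encode how many parameter bytes are written and read (0x10xx writes one, 0x01xx reads one), but treat that as an assumption here since the authoritative definition lives in the i8042 driver:

#include <linux/i8042.h>

/* sketch: fetch the controller's command byte via the exported helper */
static int example_read_command_byte(unsigned char *ctr)
{
        return i8042_command(ctr, I8042_CMD_CTL_RCTR);  /* 0 on success (assumed) */
}
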
- */ -#define MEM_MAPPING_INVALID (1<<3) +struct memory_notify { + unsigned long start_pfn; + unsigned long nr_pages; + int status_change_nid; +}; struct notifier_block; struct mem_section; @@ -69,21 +66,31 @@ static inline int register_memory_notifier(struct notifier_block *nb) static inline void unregister_memory_notifier(struct notifier_block *nb) { } +static inline int memory_notify(unsigned long val, void *v) +{ + return 0; +} #else +extern int register_memory_notifier(struct notifier_block *nb); +extern void unregister_memory_notifier(struct notifier_block *nb); extern int register_new_memory(struct mem_section *); extern int unregister_memory_section(struct mem_section *); extern int memory_dev_init(void); extern int remove_memory_block(unsigned long, struct mem_section *, int); - +extern int memory_notify(unsigned long val, void *v); #define CONFIG_MEM_BLOCK_SIZE (PAGES_PER_SECTION<<PAGE_SHIFT) #endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */ +#ifdef CONFIG_MEMORY_HOTPLUG #define hotplug_memory_notifier(fn, pri) { \ static struct notifier_block fn##_mem_nb = \ { .notifier_call = fn, .priority = pri }; \ register_memory_notifier(&fn##_mem_nb); \ } +#else +#define hotplug_memory_notifier(fn, pri) do { } while (0) +#endif #endif /* _LINUX_MEMORY_H_ */ diff --git a/include/linux/net.h b/include/linux/net.h index c136abce7ef..dd79cdb8c4c 100644 --- a/include/linux/net.h +++ b/include/linux/net.h @@ -313,6 +313,10 @@ static const struct proto_ops name##_ops = { \ #define MODULE_ALIAS_NET_PF_PROTO(pf, proto) \ MODULE_ALIAS("net-pf-" __stringify(pf) "-proto-" __stringify(proto)) +#define MODULE_ALIAS_NET_PF_PROTO_TYPE(pf, proto, type) \ + MODULE_ALIAS("net-pf-" __stringify(pf) "-proto-" __stringify(proto) \ + "-type-" __stringify(type)) + #ifdef CONFIG_SYSCTL #include <linux/sysctl.h> extern ctl_table net_table[]; diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 6f85db3535e..4a3f54e358e 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -996,7 +996,7 @@ static inline void netif_stop_subqueue(struct net_device *dev, u16 queue_index) * * Check individual transmit queue of a device with multiple transmit queues. 
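
The new MEM_GOING_ONLINE/MEM_CANCEL_ONLINE/MEM_CANCEL_OFFLINE states and struct memory_notify above arrive through the normal notifier-chain machinery; a subsystem hooks in with hotplug_memory_notifier(). A sketch of a callback that just reports the affected pfn range (the callback name and priority 0 are illustrative):

#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/memory.h>

static int example_memory_callback(struct notifier_block *self,
                                   unsigned long action, void *arg)
{
        struct memory_notify *mn = arg;

        if (action == MEM_GOING_ONLINE)
                printk(KERN_INFO "memory going online: start_pfn=%lx nr_pages=%lu\n",
                       mn->start_pfn, mn->nr_pages);
        return NOTIFY_OK;
}

/* registered from init code, e.g.:
 *      hotplug_memory_notifier(example_memory_callback, 0);
 */
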
*/ -static inline int netif_subqueue_stopped(const struct net_device *dev, +static inline int __netif_subqueue_stopped(const struct net_device *dev, u16 queue_index) { #ifdef CONFIG_NETDEVICES_MULTIQUEUE @@ -1007,6 +1007,11 @@ static inline int netif_subqueue_stopped(const struct net_device *dev, #endif } +static inline int netif_subqueue_stopped(const struct net_device *dev, + struct sk_buff *skb) +{ + return __netif_subqueue_stopped(dev, skb_get_queue_mapping(skb)); +} /** * netif_wake_subqueue - allow sending packets on subqueue diff --git a/include/linux/pci.h b/include/linux/pci.h index 768b93359f9..5d2281f661f 100644 --- a/include/linux/pci.h +++ b/include/linux/pci.h @@ -141,6 +141,7 @@ struct pci_dev { unsigned int class; /* 3 bytes: (base,sub,prog-if) */ u8 revision; /* PCI revision, low byte of class word */ u8 hdr_type; /* PCI header type (`multi' flag masked out) */ + u8 pcie_type; /* PCI-E device/port type */ u8 rom_base_reg; /* which config register controls the ROM */ u8 pin; /* which interrupt pin this device uses */ @@ -183,6 +184,7 @@ struct pci_dev { unsigned int msi_enabled:1; unsigned int msix_enabled:1; unsigned int is_managed:1; + unsigned int is_pcie:1; atomic_t enable_cnt; /* pci_enable_device has been called */ u32 saved_config_space[16]; /* config space saved at suspend time */ diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index df948b44eda..4e10a074ca5 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -1943,6 +1943,7 @@ #define PCI_DEVICE_ID_TIGON3_5720 0x1658 #define PCI_DEVICE_ID_TIGON3_5721 0x1659 #define PCI_DEVICE_ID_TIGON3_5722 0x165a +#define PCI_DEVICE_ID_TIGON3_5723 0x165b #define PCI_DEVICE_ID_TIGON3_5705M 0x165d #define PCI_DEVICE_ID_TIGON3_5705M_2 0x165e #define PCI_DEVICE_ID_TIGON3_5714 0x1668 diff --git a/include/linux/reiserfs_fs.h b/include/linux/reiserfs_fs.h index 72bfccd3da2..422eab4958a 100644 --- a/include/linux/reiserfs_fs.h +++ b/include/linux/reiserfs_fs.h @@ -28,6 +28,8 @@ #include <linux/reiserfs_fs_sb.h> #endif +struct fid; + /* * include/linux/reiser_fs.h * @@ -1877,12 +1879,10 @@ void reiserfs_delete_inode(struct inode *inode); int reiserfs_write_inode(struct inode *inode, int); int reiserfs_get_block(struct inode *inode, sector_t block, struct buffer_head *bh_result, int create); -struct dentry *reiserfs_get_dentry(struct super_block *, void *); -struct dentry *reiserfs_decode_fh(struct super_block *sb, __u32 * data, - int len, int fhtype, - int (*acceptable) (void *contect, - struct dentry * de), - void *context); +struct dentry *reiserfs_fh_to_dentry(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type); +struct dentry *reiserfs_fh_to_parent(struct super_block *sb, struct fid *fid, + int fh_len, int fh_type); int reiserfs_encode_fh(struct dentry *dentry, __u32 * data, int *lenp, int connectable); diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h index 2dc7464cce5..42daf5e1526 100644 --- a/include/linux/scatterlist.h +++ b/include/linux/scatterlist.h @@ -4,47 +4,95 @@ #include <asm/scatterlist.h> #include <linux/mm.h> #include <linux/string.h> +#include <asm/io.h> +/* + * Notes on SG table design. + * + * Architectures must provide an unsigned long page_link field in the + * scatterlist struct. We use that to place the page pointer AND encode + * information about the sg table as well. The two lower bits are reserved + * for this information. + * + * If bit 0 is set, then the page_link contains a pointer to the next sg + * table list. 
Otherwise the next entry is at sg + 1. + * + * If bit 1 is set, then this sg entry is the last element in a list. + * + * See sg_next(). + * + */ + +#define SG_MAGIC 0x87654321 + +/** + * sg_set_page - Set sg entry to point at given page + * @sg: SG entry + * @page: The page + * + * Description: + * Use this function to set an sg entry pointing at a page, never assign + * the page directly. We encode sg table information in the lower bits + * of the page pointer. See sg_page() for looking up the page belonging + * to an sg entry. + * + **/ +static inline void sg_set_page(struct scatterlist *sg, struct page *page) +{ + unsigned long page_link = sg->page_link & 0x3; + +#ifdef CONFIG_DEBUG_SG + BUG_ON(sg->sg_magic != SG_MAGIC); +#endif + sg->page_link = page_link | (unsigned long) page; +} + +#define sg_page(sg) ((struct page *) ((sg)->page_link & ~0x3)) + +/** + * sg_set_buf - Set sg entry to point at given data + * @sg: SG entry + * @buf: Data + * @buflen: Data length + * + **/ static inline void sg_set_buf(struct scatterlist *sg, const void *buf, unsigned int buflen) { - sg->page = virt_to_page(buf); + sg_set_page(sg, virt_to_page(buf)); sg->offset = offset_in_page(buf); sg->length = buflen; } -static inline void sg_init_one(struct scatterlist *sg, const void *buf, - unsigned int buflen) -{ - memset(sg, 0, sizeof(*sg)); - sg_set_buf(sg, buf, buflen); -} - /* * We overload the LSB of the page pointer to indicate whether it's * a valid sg entry, or whether it points to the start of a new scatterlist. * Those low bits are there for everyone! (thanks mason :-) */ -#define sg_is_chain(sg) ((unsigned long) (sg)->page & 0x01) +#define sg_is_chain(sg) ((sg)->page_link & 0x01) +#define sg_is_last(sg) ((sg)->page_link & 0x02) #define sg_chain_ptr(sg) \ - ((struct scatterlist *) ((unsigned long) (sg)->page & ~0x01)) + ((struct scatterlist *) ((sg)->page_link & ~0x03)) /** * sg_next - return the next scatterlist entry in a list * @sg: The current sg entry * - * Usually the next entry will be @sg@ + 1, but if this sg element is part - * of a chained scatterlist, it could jump to the start of a new - * scatterlist array. - * - * Note that the caller must ensure that there are further entries after - * the current entry, this function will NOT return NULL for an end-of-list. + * Description: + * Usually the next entry will be @sg@ + 1, but if this sg element is part + * of a chained scatterlist, it could jump to the start of a new + * scatterlist array. * - */ + **/ static inline struct scatterlist *sg_next(struct scatterlist *sg) { - sg++; +#ifdef CONFIG_DEBUG_SG + BUG_ON(sg->sg_magic != SG_MAGIC); +#endif + if (sg_is_last(sg)) + return NULL; + sg++; if (unlikely(sg_is_chain(sg))) sg = sg_chain_ptr(sg); @@ -62,14 +110,15 @@ static inline struct scatterlist *sg_next(struct scatterlist *sg) * @sgl: First entry in the scatterlist * @nents: Number of entries in the scatterlist * - * Should only be used casually, it (currently) scan the entire list - * to get the last entry. + * Description: + * Should only be used casually, it (currently) scan the entire list + * to get the last entry. * - * Note that the @sgl@ pointer passed in need not be the first one, - * the important bit is that @nents@ denotes the number of entries that - * exist from @sgl@. + * Note that the @sgl@ pointer passed in need not be the first one, + * the important bit is that @nents@ denotes the number of entries that + * exist from @sgl@. 
* - */ + **/ static inline struct scatterlist *sg_last(struct scatterlist *sgl, unsigned int nents) { @@ -83,6 +132,10 @@ static inline struct scatterlist *sg_last(struct scatterlist *sgl, ret = sg; #endif +#ifdef CONFIG_DEBUG_SG + BUG_ON(sgl[0].sg_magic != SG_MAGIC); + BUG_ON(!sg_is_last(ret)); +#endif return ret; } @@ -92,16 +145,111 @@ static inline struct scatterlist *sg_last(struct scatterlist *sgl, * @prv_nents: Number of entries in prv * @sgl: Second scatterlist * - * Links @prv@ and @sgl@ together, to form a longer scatterlist. + * Description: + * Links @prv@ and @sgl@ together, to form a longer scatterlist. * - */ + **/ static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents, struct scatterlist *sgl) { #ifndef ARCH_HAS_SG_CHAIN BUG(); #endif - prv[prv_nents - 1].page = (struct page *) ((unsigned long) sgl | 0x01); + prv[prv_nents - 1].page_link = (unsigned long) sgl | 0x01; +} + +/** + * sg_mark_end - Mark the end of the scatterlist + * @sgl: Scatterlist + * @nents: Number of entries in sgl + * + * Description: + * Marks the last entry as the termination point for sg_next() + * + **/ +static inline void sg_mark_end(struct scatterlist *sgl, unsigned int nents) +{ + sgl[nents - 1].page_link = 0x02; +} + +static inline void __sg_mark_end(struct scatterlist *sg) +{ + sg->page_link |= 0x02; +} + +/** + * sg_init_one - Initialize a single entry sg list + * @sg: SG entry + * @buf: Virtual address for IO + * @buflen: IO length + * + * Notes: + * This should not be used on a single entry that is part of a larger + * table. Use sg_init_table() for that. + * + **/ +static inline void sg_init_one(struct scatterlist *sg, const void *buf, + unsigned int buflen) +{ + memset(sg, 0, sizeof(*sg)); +#ifdef CONFIG_DEBUG_SG + sg->sg_magic = SG_MAGIC; +#endif + sg_mark_end(sg, 1); + sg_set_buf(sg, buf, buflen); +} + +/** + * sg_init_table - Initialize SG table + * @sgl: The SG table + * @nents: Number of entries in table + * + * Notes: + * If this is part of a chained sg table, sg_mark_end() should be + * used only on the last table part. + * + **/ +static inline void sg_init_table(struct scatterlist *sgl, unsigned int nents) +{ + memset(sgl, 0, sizeof(*sgl) * nents); + sg_mark_end(sgl, nents); +#ifdef CONFIG_DEBUG_SG + { + int i; + for (i = 0; i < nents; i++) + sgl[i].sg_magic = SG_MAGIC; + } +#endif +} + +/** + * sg_phys - Return physical address of an sg entry + * @sg: SG entry + * + * Description: + * This calls page_to_phys() on the page in this sg entry, and adds the + * sg offset. The caller must know that it is legal to call page_to_phys() + * on the sg page. + * + **/ +static inline unsigned long sg_phys(struct scatterlist *sg) +{ + return page_to_phys(sg_page(sg)) + sg->offset; +} + +/** + * sg_virt - Return virtual address of an sg entry + * @sg: SG entry + * + * Description: + * This calls page_address() on the page in this sg entry, and adds the + * sg offset. The caller must know that the sg page has a valid virtual + * mapping. 
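
With the end marker now stored in the entries themselves, a list has to be set up with sg_init_table() (or sg_init_one() for a single buffer) so that the final entry carries the bit sg_next() stops on. A short kernel-context sketch that fills a two-entry table and then walks it; the buffer names are illustrative:

#include <linux/scatterlist.h>

static unsigned int example_fill_and_walk(struct scatterlist sg[2],
                                          void *hdr, unsigned int hdr_len,
                                          void *body, unsigned int body_len)
{
        struct scatterlist *s;
        unsigned int total = 0;

        sg_init_table(sg, 2);           /* zeroes entries, marks sg[1] as last */
        sg_set_buf(&sg[0], hdr, hdr_len);
        sg_set_buf(&sg[1], body, body_len);

        for (s = sg; s; s = sg_next(s)) /* sg_next() returns NULL after sg[1] */
                total += s->length;
        return total;
}
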
+ * + **/ +static inline void *sg_virt(struct scatterlist *sg) +{ + return page_address(sg_page(sg)) + sg->offset; } #endif /* _LINUX_SCATTERLIST_H */ diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index f93f22b3d2f..fd4e12f2427 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -41,8 +41,7 @@ #define SKB_DATA_ALIGN(X) (((X) + (SMP_CACHE_BYTES - 1)) & \ ~(SMP_CACHE_BYTES - 1)) #define SKB_WITH_OVERHEAD(X) \ - (((X) - sizeof(struct skb_shared_info)) & \ - ~(SMP_CACHE_BYTES - 1)) + ((X) - SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) #define SKB_MAX_ORDER(X, ORDER) \ SKB_WITH_OVERHEAD((PAGE_SIZE << (ORDER)) - (X)) #define SKB_MAX_HEAD(X) (SKB_MAX_ORDER((X), 0)) @@ -301,8 +300,9 @@ struct sk_buff { #endif int iif; +#ifdef CONFIG_NETDEVICES_MULTIQUEUE __u16 queue_mapping; - +#endif #ifdef CONFIG_NET_SCHED __u16 tc_index; /* traffic control index */ #ifdef CONFIG_NET_CLS_ACT @@ -1770,6 +1770,15 @@ static inline void skb_set_queue_mapping(struct sk_buff *skb, u16 queue_mapping) #endif } +static inline u16 skb_get_queue_mapping(struct sk_buff *skb) +{ +#ifdef CONFIG_NETDEVICES_MULTIQUEUE + return skb->queue_mapping; +#else + return 0; +#endif +} + static inline void skb_copy_queue_mapping(struct sk_buff *to, const struct sk_buff *from) { #ifdef CONFIG_NETDEVICES_MULTIQUEUE diff --git a/include/linux/socket.h b/include/linux/socket.h index f852e1afd65..c22ef1c1afb 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -291,6 +291,7 @@ struct ucred { #define SOL_TIPC 271 #define SOL_RXRPC 272 #define SOL_PPPOL2TP 273 +#define SOL_BLUETOOTH 274 /* IPX options */ #define IPX_TYPE 1 diff --git a/include/linux/videodev.h b/include/linux/videodev.h index 8dba97a291f..52e3d5fd5be 100644 --- a/include/linux/videodev.h +++ b/include/linux/videodev.h @@ -294,48 +294,6 @@ struct video_code #define VID_PLAY_RESET 13 #define VID_PLAY_END_MARK 14 - - -#define VID_HARDWARE_BT848 1 -#define VID_HARDWARE_QCAM_BW 2 -#define VID_HARDWARE_PMS 3 -#define VID_HARDWARE_QCAM_C 4 -#define VID_HARDWARE_PSEUDO 5 -#define VID_HARDWARE_SAA5249 6 -#define VID_HARDWARE_AZTECH 7 -#define VID_HARDWARE_SF16MI 8 -#define VID_HARDWARE_RTRACK 9 -#define VID_HARDWARE_ZOLTRIX 10 -#define VID_HARDWARE_SAA7146 11 -#define VID_HARDWARE_VIDEUM 12 /* Reserved for Winnov videum */ -#define VID_HARDWARE_RTRACK2 13 -#define VID_HARDWARE_PERMEDIA2 14 /* Reserved for Permedia2 */ -#define VID_HARDWARE_RIVA128 15 /* Reserved for RIVA 128 */ -#define VID_HARDWARE_PLANB 16 /* PowerMac motherboard video-in */ -#define VID_HARDWARE_BROADWAY 17 /* Broadway project */ -#define VID_HARDWARE_GEMTEK 18 -#define VID_HARDWARE_TYPHOON 19 -#define VID_HARDWARE_VINO 20 /* SGI Indy Vino */ -#define VID_HARDWARE_CADET 21 /* Cadet radio */ -#define VID_HARDWARE_TRUST 22 /* Trust FM Radio */ -#define VID_HARDWARE_TERRATEC 23 /* TerraTec ActiveRadio */ -#define VID_HARDWARE_CPIA 24 -#define VID_HARDWARE_ZR36120 25 /* Zoran ZR36120/ZR36125 */ -#define VID_HARDWARE_ZR36067 26 /* Zoran ZR36067/36060 */ -#define VID_HARDWARE_OV511 27 -#define VID_HARDWARE_ZR356700 28 /* Zoran 36700 series */ -#define VID_HARDWARE_W9966 29 -#define VID_HARDWARE_SE401 30 /* SE401 USB webcams */ -#define VID_HARDWARE_PWC 31 /* Philips webcams */ -#define VID_HARDWARE_MEYE 32 /* Sony Vaio MotionEye cameras */ -#define VID_HARDWARE_CPIA2 33 -#define VID_HARDWARE_VICAM 34 -#define VID_HARDWARE_SF16FMR2 35 -#define VID_HARDWARE_W9968CF 36 -#define VID_HARDWARE_SAA7114H 37 -#define VID_HARDWARE_SN9C102 38 -#define VID_HARDWARE_ARV 39 - #endif /* 
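
The SKB_WITH_OVERHEAD change above is easy to gloss over: the old macro subtracted the raw size of struct skb_shared_info and then rounded the result down to a cache-line multiple, while the new one rounds the shared-info size itself up and subtracts that, so the two only agree when the starting size is already cache-aligned. A tiny userspace check with made-up sizes (the real SMP_CACHE_BYTES and struct size depend on the configuration):

#include <stdio.h>

#define TOY_CACHE_BYTES  64
#define TOY_SHINFO_SIZE  200    /* stand-in for sizeof(struct skb_shared_info) */

#define TOY_DATA_ALIGN(x)    (((x) + (TOY_CACHE_BYTES - 1)) & ~(TOY_CACHE_BYTES - 1))
#define TOY_NEW_OVERHEAD(x)  ((x) - TOY_DATA_ALIGN(TOY_SHINFO_SIZE))
#define TOY_OLD_OVERHEAD(x)  (((x) - TOY_SHINFO_SIZE) & ~(TOY_CACHE_BYTES - 1))

int main(void)
{
        /* cache-aligned input: both formulas give the same answer */
        printf("X=2048: new=%d old=%d\n", TOY_NEW_OVERHEAD(2048), TOY_OLD_OVERHEAD(2048));
        /* unaligned input: the results diverge */
        printf("X=2000: new=%d old=%d\n", TOY_NEW_OVERHEAD(2000), TOY_OLD_OVERHEAD(2000));
        return 0;
}
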
CONFIG_VIDEO_V4L1_COMPAT */ #endif /* __LINUX_VIDEODEV_H */ diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h index 1f503e94eff..439474f24e3 100644 --- a/include/linux/videodev2.h +++ b/include/linux/videodev2.h @@ -441,94 +441,6 @@ struct v4l2_timecode #define V4L2_TC_USERBITS_8BITCHARS 0x0008 /* The above is based on SMPTE timecodes */ -#ifdef __KERNEL__ -/* - * M P E G C O M P R E S S I O N P A R A M E T E R S - * - * ### WARNING: This experimental MPEG compression API is obsolete. - * ### It is replaced by the MPEG controls API. - * ### This old API will disappear in the near future! - * - */ -enum v4l2_bitrate_mode { - V4L2_BITRATE_NONE = 0, /* not specified */ - V4L2_BITRATE_CBR, /* constant bitrate */ - V4L2_BITRATE_VBR, /* variable bitrate */ -}; -struct v4l2_bitrate { - /* rates are specified in kbit/sec */ - enum v4l2_bitrate_mode mode; - __u32 min; - __u32 target; /* use this one for CBR */ - __u32 max; -}; - -enum v4l2_mpeg_streamtype { - V4L2_MPEG_SS_1, /* MPEG-1 system stream */ - V4L2_MPEG_PS_2, /* MPEG-2 program stream */ - V4L2_MPEG_TS_2, /* MPEG-2 transport stream */ - V4L2_MPEG_PS_DVD, /* MPEG-2 program stream with DVD header fixups */ -}; -enum v4l2_mpeg_audiotype { - V4L2_MPEG_AU_2_I, /* MPEG-2 layer 1 */ - V4L2_MPEG_AU_2_II, /* MPEG-2 layer 2 */ - V4L2_MPEG_AU_2_III, /* MPEG-2 layer 3 */ - V4L2_MPEG_AC3, /* AC3 */ - V4L2_MPEG_LPCM, /* LPCM */ -}; -enum v4l2_mpeg_videotype { - V4L2_MPEG_VI_1, /* MPEG-1 */ - V4L2_MPEG_VI_2, /* MPEG-2 */ -}; -enum v4l2_mpeg_aspectratio { - V4L2_MPEG_ASPECT_SQUARE = 1, /* square pixel */ - V4L2_MPEG_ASPECT_4_3 = 2, /* 4 : 3 */ - V4L2_MPEG_ASPECT_16_9 = 3, /* 16 : 9 */ - V4L2_MPEG_ASPECT_1_221 = 4, /* 1 : 2,21 */ -}; - -struct v4l2_mpeg_compression { - /* general */ - enum v4l2_mpeg_streamtype st_type; - struct v4l2_bitrate st_bitrate; - - /* transport streams */ - __u16 ts_pid_pmt; - __u16 ts_pid_audio; - __u16 ts_pid_video; - __u16 ts_pid_pcr; - - /* program stream */ - __u16 ps_size; - __u16 reserved_1; /* align */ - - /* audio */ - enum v4l2_mpeg_audiotype au_type; - struct v4l2_bitrate au_bitrate; - __u32 au_sample_rate; - __u8 au_pesid; - __u8 reserved_2[3]; /* align */ - - /* video */ - enum v4l2_mpeg_videotype vi_type; - enum v4l2_mpeg_aspectratio vi_aspect_ratio; - struct v4l2_bitrate vi_bitrate; - __u32 vi_frame_rate; - __u16 vi_frames_per_gop; - __u16 vi_bframes_count; - __u8 vi_pesid; - __u8 reserved_3[3]; /* align */ - - /* misc flags */ - __u32 closed_gops:1; - __u32 pulldown:1; - __u32 reserved_4:30; /* align */ - - /* I don't expect the above being perfect yet ;) */ - __u32 reserved_5[8]; -}; -#endif - struct v4l2_jpegcompression { int quality; @@ -1420,10 +1332,6 @@ struct v4l2_chip_ident { #define VIDIOC_ENUM_FMT _IOWR ('V', 2, struct v4l2_fmtdesc) #define VIDIOC_G_FMT _IOWR ('V', 4, struct v4l2_format) #define VIDIOC_S_FMT _IOWR ('V', 5, struct v4l2_format) -#ifdef __KERNEL__ -#define VIDIOC_G_MPEGCOMP _IOR ('V', 6, struct v4l2_mpeg_compression) -#define VIDIOC_S_MPEGCOMP _IOW ('V', 7, struct v4l2_mpeg_compression) -#endif #define VIDIOC_REQBUFS _IOWR ('V', 8, struct v4l2_requestbuffers) #define VIDIOC_QUERYBUF _IOWR ('V', 9, struct v4l2_buffer) #define VIDIOC_G_FBUF _IOR ('V', 10, struct v4l2_framebuffer) diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h index e75d5e6c4ce..c544c6f9089 100644 --- a/include/media/v4l2-dev.h +++ b/include/media/v4l2-dev.h @@ -94,7 +94,6 @@ struct video_device char name[32]; int type; /* v4l1 */ int type2; /* v4l2 */ - int hardware; int minor; int debug; /* 
Activates debug level*/ @@ -272,10 +271,6 @@ struct video_device int (*vidioc_s_crop) (struct file *file, void *fh, struct v4l2_crop *a); /* Compression ioctls */ - int (*vidioc_g_mpegcomp) (struct file *file, void *fh, - struct v4l2_mpeg_compression *a); - int (*vidioc_s_mpegcomp) (struct file *file, void *fh, - struct v4l2_mpeg_compression *a); int (*vidioc_g_jpegcomp) (struct file *file, void *fh, struct v4l2_jpegcompression *a); int (*vidioc_s_jpegcomp) (struct file *file, void *fh, diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index ebfb96b4110..a8a9eb6af96 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -200,119 +200,18 @@ enum { #define HCI_LM_SECURE 0x0020 /* ----- HCI Commands ---- */ -/* OGF & OCF values */ - -/* Informational Parameters */ -#define OGF_INFO_PARAM 0x04 - -#define OCF_READ_LOCAL_VERSION 0x0001 -struct hci_rp_read_loc_version { - __u8 status; - __u8 hci_ver; - __le16 hci_rev; - __u8 lmp_ver; - __le16 manufacturer; - __le16 lmp_subver; -} __attribute__ ((packed)); - -#define OCF_READ_LOCAL_FEATURES 0x0003 -struct hci_rp_read_local_features { - __u8 status; - __u8 features[8]; -} __attribute__ ((packed)); - -#define OCF_READ_BUFFER_SIZE 0x0005 -struct hci_rp_read_buffer_size { - __u8 status; - __le16 acl_mtu; - __u8 sco_mtu; - __le16 acl_max_pkt; - __le16 sco_max_pkt; -} __attribute__ ((packed)); - -#define OCF_READ_BD_ADDR 0x0009 -struct hci_rp_read_bd_addr { - __u8 status; - bdaddr_t bdaddr; -} __attribute__ ((packed)); - -/* Host Controller and Baseband */ -#define OGF_HOST_CTL 0x03 -#define OCF_RESET 0x0003 -#define OCF_READ_AUTH_ENABLE 0x001F -#define OCF_WRITE_AUTH_ENABLE 0x0020 - #define AUTH_DISABLED 0x00 - #define AUTH_ENABLED 0x01 - -#define OCF_READ_ENCRYPT_MODE 0x0021 -#define OCF_WRITE_ENCRYPT_MODE 0x0022 - #define ENCRYPT_DISABLED 0x00 - #define ENCRYPT_P2P 0x01 - #define ENCRYPT_BOTH 0x02 - -#define OCF_WRITE_CA_TIMEOUT 0x0016 -#define OCF_WRITE_PG_TIMEOUT 0x0018 - -#define OCF_WRITE_SCAN_ENABLE 0x001A - #define SCAN_DISABLED 0x00 - #define SCAN_INQUIRY 0x01 - #define SCAN_PAGE 0x02 - -#define OCF_SET_EVENT_FLT 0x0005 -struct hci_cp_set_event_flt { - __u8 flt_type; - __u8 cond_type; - __u8 condition[0]; -} __attribute__ ((packed)); - -/* Filter types */ -#define HCI_FLT_CLEAR_ALL 0x00 -#define HCI_FLT_INQ_RESULT 0x01 -#define HCI_FLT_CONN_SETUP 0x02 - -/* CONN_SETUP Condition types */ -#define HCI_CONN_SETUP_ALLOW_ALL 0x00 -#define HCI_CONN_SETUP_ALLOW_CLASS 0x01 -#define HCI_CONN_SETUP_ALLOW_BDADDR 0x02 - -/* CONN_SETUP Conditions */ -#define HCI_CONN_SETUP_AUTO_OFF 0x01 -#define HCI_CONN_SETUP_AUTO_ON 0x02 - -#define OCF_READ_CLASS_OF_DEV 0x0023 -struct hci_rp_read_dev_class { - __u8 status; - __u8 dev_class[3]; -} __attribute__ ((packed)); - -#define OCF_WRITE_CLASS_OF_DEV 0x0024 -struct hci_cp_write_dev_class { - __u8 dev_class[3]; -} __attribute__ ((packed)); - -#define OCF_READ_VOICE_SETTING 0x0025 -struct hci_rp_read_voice_setting { - __u8 status; - __le16 voice_setting; +#define HCI_OP_INQUIRY 0x0401 +struct hci_cp_inquiry { + __u8 lap[3]; + __u8 length; + __u8 num_rsp; } __attribute__ ((packed)); -#define OCF_WRITE_VOICE_SETTING 0x0026 -struct hci_cp_write_voice_setting { - __le16 voice_setting; -} __attribute__ ((packed)); +#define HCI_OP_INQUIRY_CANCEL 0x0402 -#define OCF_HOST_BUFFER_SIZE 0x0033 -struct hci_cp_host_buffer_size { - __le16 acl_mtu; - __u8 sco_mtu; - __le16 acl_max_pkt; - __le16 sco_max_pkt; -} __attribute__ ((packed)); - -/* Link Control */ -#define OGF_LINK_CTL 0x01 
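
This hci.h rework collapses the old OGF/OCF pairs into single 16-bit opcodes: each HCI_OP_* value above (and in the lines that follow) is the 10-bit OCF with the 6-bit OGF shifted into the top bits, which is why OCF_INQUIRY 0x0001 in OGF_LINK_CTL 0x01 reappears as HCI_OP_INQUIRY 0x0401. A one-line illustration of the packing; the macro name is chosen here for the example and is not part of the patch:

/* pack OGF (6 bits) and OCF (10 bits) the way the new HCI_OP_* values are laid out */
#define EXAMPLE_HCI_OPCODE(ogf, ocf)  ((unsigned short)(((ogf) << 10) | ((ocf) & 0x03ff)))

/* EXAMPLE_HCI_OPCODE(0x01, 0x0001) == 0x0401  (HCI_OP_INQUIRY)    */
/* EXAMPLE_HCI_OPCODE(0x01, 0x0006) == 0x0406  (HCI_OP_DISCONNECT) */
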
+#define HCI_OP_EXIT_PERIODIC_INQ 0x0404 -#define OCF_CREATE_CONN 0x0005 +#define HCI_OP_CREATE_CONN 0x0405 struct hci_cp_create_conn { bdaddr_t bdaddr; __le16 pkt_type; @@ -322,105 +221,138 @@ struct hci_cp_create_conn { __u8 role_switch; } __attribute__ ((packed)); -#define OCF_CREATE_CONN_CANCEL 0x0008 -struct hci_cp_create_conn_cancel { - bdaddr_t bdaddr; -} __attribute__ ((packed)); - -#define OCF_ACCEPT_CONN_REQ 0x0009 -struct hci_cp_accept_conn_req { - bdaddr_t bdaddr; - __u8 role; -} __attribute__ ((packed)); - -#define OCF_REJECT_CONN_REQ 0x000a -struct hci_cp_reject_conn_req { - bdaddr_t bdaddr; - __u8 reason; -} __attribute__ ((packed)); - -#define OCF_DISCONNECT 0x0006 +#define HCI_OP_DISCONNECT 0x0406 struct hci_cp_disconnect { __le16 handle; __u8 reason; } __attribute__ ((packed)); -#define OCF_ADD_SCO 0x0007 +#define HCI_OP_ADD_SCO 0x0407 struct hci_cp_add_sco { __le16 handle; __le16 pkt_type; } __attribute__ ((packed)); -#define OCF_INQUIRY 0x0001 -struct hci_cp_inquiry { - __u8 lap[3]; - __u8 length; - __u8 num_rsp; +#define HCI_OP_CREATE_CONN_CANCEL 0x0408 +struct hci_cp_create_conn_cancel { + bdaddr_t bdaddr; } __attribute__ ((packed)); -#define OCF_INQUIRY_CANCEL 0x0002 +#define HCI_OP_ACCEPT_CONN_REQ 0x0409 +struct hci_cp_accept_conn_req { + bdaddr_t bdaddr; + __u8 role; +} __attribute__ ((packed)); -#define OCF_EXIT_PERIODIC_INQ 0x0004 +#define HCI_OP_REJECT_CONN_REQ 0x040a +struct hci_cp_reject_conn_req { + bdaddr_t bdaddr; + __u8 reason; +} __attribute__ ((packed)); -#define OCF_LINK_KEY_REPLY 0x000B +#define HCI_OP_LINK_KEY_REPLY 0x040b struct hci_cp_link_key_reply { bdaddr_t bdaddr; __u8 link_key[16]; } __attribute__ ((packed)); -#define OCF_LINK_KEY_NEG_REPLY 0x000C +#define HCI_OP_LINK_KEY_NEG_REPLY 0x040c struct hci_cp_link_key_neg_reply { bdaddr_t bdaddr; } __attribute__ ((packed)); -#define OCF_PIN_CODE_REPLY 0x000D +#define HCI_OP_PIN_CODE_REPLY 0x040d struct hci_cp_pin_code_reply { bdaddr_t bdaddr; __u8 pin_len; __u8 pin_code[16]; } __attribute__ ((packed)); -#define OCF_PIN_CODE_NEG_REPLY 0x000E +#define HCI_OP_PIN_CODE_NEG_REPLY 0x040e struct hci_cp_pin_code_neg_reply { bdaddr_t bdaddr; } __attribute__ ((packed)); -#define OCF_CHANGE_CONN_PTYPE 0x000F +#define HCI_OP_CHANGE_CONN_PTYPE 0x040f struct hci_cp_change_conn_ptype { __le16 handle; __le16 pkt_type; } __attribute__ ((packed)); -#define OCF_AUTH_REQUESTED 0x0011 +#define HCI_OP_AUTH_REQUESTED 0x0411 struct hci_cp_auth_requested { __le16 handle; } __attribute__ ((packed)); -#define OCF_SET_CONN_ENCRYPT 0x0013 +#define HCI_OP_SET_CONN_ENCRYPT 0x0413 struct hci_cp_set_conn_encrypt { __le16 handle; __u8 encrypt; } __attribute__ ((packed)); -#define OCF_CHANGE_CONN_LINK_KEY 0x0015 +#define HCI_OP_CHANGE_CONN_LINK_KEY 0x0415 struct hci_cp_change_conn_link_key { __le16 handle; } __attribute__ ((packed)); -#define OCF_READ_REMOTE_FEATURES 0x001B +#define HCI_OP_REMOTE_NAME_REQ 0x0419 +struct hci_cp_remote_name_req { + bdaddr_t bdaddr; + __u8 pscan_rep_mode; + __u8 pscan_mode; + __le16 clock_offset; +} __attribute__ ((packed)); + +#define HCI_OP_REMOTE_NAME_REQ_CANCEL 0x041a +struct hci_cp_remote_name_req_cancel { + bdaddr_t bdaddr; +} __attribute__ ((packed)); + +#define HCI_OP_READ_REMOTE_FEATURES 0x041b struct hci_cp_read_remote_features { __le16 handle; } __attribute__ ((packed)); -#define OCF_READ_REMOTE_VERSION 0x001D +#define HCI_OP_READ_REMOTE_EXT_FEATURES 0x041c +struct hci_cp_read_remote_ext_features { + __le16 handle; + __u8 page; +} __attribute__ ((packed)); + +#define HCI_OP_READ_REMOTE_VERSION 
0x041d struct hci_cp_read_remote_version { __le16 handle; } __attribute__ ((packed)); -/* Link Policy */ -#define OGF_LINK_POLICY 0x02 +#define HCI_OP_SETUP_SYNC_CONN 0x0428 +struct hci_cp_setup_sync_conn { + __le16 handle; + __le32 tx_bandwidth; + __le32 rx_bandwidth; + __le16 max_latency; + __le16 voice_setting; + __u8 retrans_effort; + __le16 pkt_type; +} __attribute__ ((packed)); -#define OCF_SNIFF_MODE 0x0003 +#define HCI_OP_ACCEPT_SYNC_CONN_REQ 0x0429 +struct hci_cp_accept_sync_conn_req { + bdaddr_t bdaddr; + __le32 tx_bandwidth; + __le32 rx_bandwidth; + __le16 max_latency; + __le16 content_format; + __u8 retrans_effort; + __le16 pkt_type; +} __attribute__ ((packed)); + +#define HCI_OP_REJECT_SYNC_CONN_REQ 0x042a +struct hci_cp_reject_sync_conn_req { + bdaddr_t bdaddr; + __u8 reason; +} __attribute__ ((packed)); + +#define HCI_OP_SNIFF_MODE 0x0803 struct hci_cp_sniff_mode { __le16 handle; __le16 max_interval; @@ -429,12 +361,12 @@ struct hci_cp_sniff_mode { __le16 timeout; } __attribute__ ((packed)); -#define OCF_EXIT_SNIFF_MODE 0x0004 +#define HCI_OP_EXIT_SNIFF_MODE 0x0804 struct hci_cp_exit_sniff_mode { __le16 handle; } __attribute__ ((packed)); -#define OCF_ROLE_DISCOVERY 0x0009 +#define HCI_OP_ROLE_DISCOVERY 0x0809 struct hci_cp_role_discovery { __le16 handle; } __attribute__ ((packed)); @@ -444,7 +376,13 @@ struct hci_rp_role_discovery { __u8 role; } __attribute__ ((packed)); -#define OCF_READ_LINK_POLICY 0x000C +#define HCI_OP_SWITCH_ROLE 0x080b +struct hci_cp_switch_role { + bdaddr_t bdaddr; + __u8 role; +} __attribute__ ((packed)); + +#define HCI_OP_READ_LINK_POLICY 0x080c struct hci_cp_read_link_policy { __le16 handle; } __attribute__ ((packed)); @@ -454,13 +392,7 @@ struct hci_rp_read_link_policy { __le16 policy; } __attribute__ ((packed)); -#define OCF_SWITCH_ROLE 0x000B -struct hci_cp_switch_role { - bdaddr_t bdaddr; - __u8 role; -} __attribute__ ((packed)); - -#define OCF_WRITE_LINK_POLICY 0x000D +#define HCI_OP_WRITE_LINK_POLICY 0x080d struct hci_cp_write_link_policy { __le16 handle; __le16 policy; @@ -470,7 +402,7 @@ struct hci_rp_write_link_policy { __le16 handle; } __attribute__ ((packed)); -#define OCF_SNIFF_SUBRATE 0x0011 +#define HCI_OP_SNIFF_SUBRATE 0x0811 struct hci_cp_sniff_subrate { __le16 handle; __le16 max_latency; @@ -478,59 +410,156 @@ struct hci_cp_sniff_subrate { __le16 min_local_timeout; } __attribute__ ((packed)); -/* Status params */ -#define OGF_STATUS_PARAM 0x05 +#define HCI_OP_SET_EVENT_MASK 0x0c01 +struct hci_cp_set_event_mask { + __u8 mask[8]; +} __attribute__ ((packed)); -/* Testing commands */ -#define OGF_TESTING_CMD 0x3E +#define HCI_OP_RESET 0x0c03 -/* Vendor specific commands */ -#define OGF_VENDOR_CMD 0x3F +#define HCI_OP_SET_EVENT_FLT 0x0c05 +struct hci_cp_set_event_flt { + __u8 flt_type; + __u8 cond_type; + __u8 condition[0]; +} __attribute__ ((packed)); -/* ---- HCI Events ---- */ -#define HCI_EV_INQUIRY_COMPLETE 0x01 +/* Filter types */ +#define HCI_FLT_CLEAR_ALL 0x00 +#define HCI_FLT_INQ_RESULT 0x01 +#define HCI_FLT_CONN_SETUP 0x02 -#define HCI_EV_INQUIRY_RESULT 0x02 -struct inquiry_info { - bdaddr_t bdaddr; - __u8 pscan_rep_mode; - __u8 pscan_period_mode; - __u8 pscan_mode; +/* CONN_SETUP Condition types */ +#define HCI_CONN_SETUP_ALLOW_ALL 0x00 +#define HCI_CONN_SETUP_ALLOW_CLASS 0x01 +#define HCI_CONN_SETUP_ALLOW_BDADDR 0x02 + +/* CONN_SETUP Conditions */ +#define HCI_CONN_SETUP_AUTO_OFF 0x01 +#define HCI_CONN_SETUP_AUTO_ON 0x02 + +#define HCI_OP_WRITE_LOCAL_NAME 0x0c13 +struct hci_cp_write_local_name { + __u8 name[248]; +} 
__attribute__ ((packed)); + +#define HCI_OP_READ_LOCAL_NAME 0x0c14 +struct hci_rp_read_local_name { + __u8 status; + __u8 name[248]; +} __attribute__ ((packed)); + +#define HCI_OP_WRITE_CA_TIMEOUT 0x0c16 + +#define HCI_OP_WRITE_PG_TIMEOUT 0x0c18 + +#define HCI_OP_WRITE_SCAN_ENABLE 0x0c1a + #define SCAN_DISABLED 0x00 + #define SCAN_INQUIRY 0x01 + #define SCAN_PAGE 0x02 + +#define HCI_OP_READ_AUTH_ENABLE 0x0c1f + +#define HCI_OP_WRITE_AUTH_ENABLE 0x0c20 + #define AUTH_DISABLED 0x00 + #define AUTH_ENABLED 0x01 + +#define HCI_OP_READ_ENCRYPT_MODE 0x0c21 + +#define HCI_OP_WRITE_ENCRYPT_MODE 0x0c22 + #define ENCRYPT_DISABLED 0x00 + #define ENCRYPT_P2P 0x01 + #define ENCRYPT_BOTH 0x02 + +#define HCI_OP_READ_CLASS_OF_DEV 0x0c23 +struct hci_rp_read_class_of_dev { + __u8 status; __u8 dev_class[3]; - __le16 clock_offset; } __attribute__ ((packed)); -#define HCI_EV_INQUIRY_RESULT_WITH_RSSI 0x22 -struct inquiry_info_with_rssi { - bdaddr_t bdaddr; - __u8 pscan_rep_mode; - __u8 pscan_period_mode; +#define HCI_OP_WRITE_CLASS_OF_DEV 0x0c24 +struct hci_cp_write_class_of_dev { __u8 dev_class[3]; - __le16 clock_offset; - __s8 rssi; } __attribute__ ((packed)); -struct inquiry_info_with_rssi_and_pscan_mode { + +#define HCI_OP_READ_VOICE_SETTING 0x0c25 +struct hci_rp_read_voice_setting { + __u8 status; + __le16 voice_setting; +} __attribute__ ((packed)); + +#define HCI_OP_WRITE_VOICE_SETTING 0x0c26 +struct hci_cp_write_voice_setting { + __le16 voice_setting; +} __attribute__ ((packed)); + +#define HCI_OP_HOST_BUFFER_SIZE 0x0c33 +struct hci_cp_host_buffer_size { + __le16 acl_mtu; + __u8 sco_mtu; + __le16 acl_max_pkt; + __le16 sco_max_pkt; +} __attribute__ ((packed)); + +#define HCI_OP_READ_LOCAL_VERSION 0x1001 +struct hci_rp_read_local_version { + __u8 status; + __u8 hci_ver; + __le16 hci_rev; + __u8 lmp_ver; + __le16 manufacturer; + __le16 lmp_subver; +} __attribute__ ((packed)); + +#define HCI_OP_READ_LOCAL_COMMANDS 0x1002 +struct hci_rp_read_local_commands { + __u8 status; + __u8 commands[64]; +} __attribute__ ((packed)); + +#define HCI_OP_READ_LOCAL_FEATURES 0x1003 +struct hci_rp_read_local_features { + __u8 status; + __u8 features[8]; +} __attribute__ ((packed)); + +#define HCI_OP_READ_LOCAL_EXT_FEATURES 0x1004 +struct hci_rp_read_local_ext_features { + __u8 status; + __u8 page; + __u8 max_page; + __u8 features[8]; +} __attribute__ ((packed)); + +#define HCI_OP_READ_BUFFER_SIZE 0x1005 +struct hci_rp_read_buffer_size { + __u8 status; + __le16 acl_mtu; + __u8 sco_mtu; + __le16 acl_max_pkt; + __le16 sco_max_pkt; +} __attribute__ ((packed)); + +#define HCI_OP_READ_BD_ADDR 0x1009 +struct hci_rp_read_bd_addr { + __u8 status; bdaddr_t bdaddr; - __u8 pscan_rep_mode; - __u8 pscan_period_mode; - __u8 pscan_mode; - __u8 dev_class[3]; - __le16 clock_offset; - __s8 rssi; } __attribute__ ((packed)); -#define HCI_EV_EXTENDED_INQUIRY_RESULT 0x2F -struct extended_inquiry_info { +/* ---- HCI Events ---- */ +#define HCI_EV_INQUIRY_COMPLETE 0x01 + +#define HCI_EV_INQUIRY_RESULT 0x02 +struct inquiry_info { bdaddr_t bdaddr; __u8 pscan_rep_mode; __u8 pscan_period_mode; + __u8 pscan_mode; __u8 dev_class[3]; __le16 clock_offset; - __s8 rssi; - __u8 data[240]; } __attribute__ ((packed)); -#define HCI_EV_CONN_COMPLETE 0x03 +#define HCI_EV_CONN_COMPLETE 0x03 struct hci_ev_conn_complete { __u8 status; __le16 handle; @@ -539,40 +568,63 @@ struct hci_ev_conn_complete { __u8 encr_mode; } __attribute__ ((packed)); -#define HCI_EV_CONN_REQUEST 0x04 +#define HCI_EV_CONN_REQUEST 0x04 struct hci_ev_conn_request { bdaddr_t bdaddr; __u8 
dev_class[3]; __u8 link_type; } __attribute__ ((packed)); -#define HCI_EV_DISCONN_COMPLETE 0x05 +#define HCI_EV_DISCONN_COMPLETE 0x05 struct hci_ev_disconn_complete { __u8 status; __le16 handle; __u8 reason; } __attribute__ ((packed)); -#define HCI_EV_AUTH_COMPLETE 0x06 +#define HCI_EV_AUTH_COMPLETE 0x06 struct hci_ev_auth_complete { __u8 status; __le16 handle; } __attribute__ ((packed)); -#define HCI_EV_ENCRYPT_CHANGE 0x08 +#define HCI_EV_REMOTE_NAME 0x07 +struct hci_ev_remote_name { + __u8 status; + bdaddr_t bdaddr; + __u8 name[248]; +} __attribute__ ((packed)); + +#define HCI_EV_ENCRYPT_CHANGE 0x08 struct hci_ev_encrypt_change { __u8 status; __le16 handle; __u8 encrypt; } __attribute__ ((packed)); -#define HCI_EV_CHANGE_CONN_LINK_KEY_COMPLETE 0x09 -struct hci_ev_change_conn_link_key_complete { +#define HCI_EV_CHANGE_LINK_KEY_COMPLETE 0x09 +struct hci_ev_change_link_key_complete { + __u8 status; + __le16 handle; +} __attribute__ ((packed)); + +#define HCI_EV_REMOTE_FEATURES 0x0b +struct hci_ev_remote_features { + __u8 status; + __le16 handle; + __u8 features[8]; +} __attribute__ ((packed)); + +#define HCI_EV_REMOTE_VERSION 0x0c +struct hci_ev_remote_version { __u8 status; __le16 handle; + __u8 lmp_ver; + __le16 manufacturer; + __le16 lmp_subver; } __attribute__ ((packed)); -#define HCI_EV_QOS_SETUP_COMPLETE 0x0D +#define HCI_EV_QOS_SETUP_COMPLETE 0x0d struct hci_qos { __u8 service_type; __u32 token_rate; @@ -586,33 +638,33 @@ struct hci_ev_qos_setup_complete { struct hci_qos qos; } __attribute__ ((packed)); -#define HCI_EV_CMD_COMPLETE 0x0E +#define HCI_EV_CMD_COMPLETE 0x0e struct hci_ev_cmd_complete { __u8 ncmd; __le16 opcode; } __attribute__ ((packed)); -#define HCI_EV_CMD_STATUS 0x0F +#define HCI_EV_CMD_STATUS 0x0f struct hci_ev_cmd_status { __u8 status; __u8 ncmd; __le16 opcode; } __attribute__ ((packed)); -#define HCI_EV_NUM_COMP_PKTS 0x13 -struct hci_ev_num_comp_pkts { - __u8 num_hndl; - /* variable length part */ -} __attribute__ ((packed)); - -#define HCI_EV_ROLE_CHANGE 0x12 +#define HCI_EV_ROLE_CHANGE 0x12 struct hci_ev_role_change { __u8 status; bdaddr_t bdaddr; __u8 role; } __attribute__ ((packed)); -#define HCI_EV_MODE_CHANGE 0x14 +#define HCI_EV_NUM_COMP_PKTS 0x13 +struct hci_ev_num_comp_pkts { + __u8 num_hndl; + /* variable length part */ +} __attribute__ ((packed)); + +#define HCI_EV_MODE_CHANGE 0x14 struct hci_ev_mode_change { __u8 status; __le16 handle; @@ -620,53 +672,88 @@ struct hci_ev_mode_change { __le16 interval; } __attribute__ ((packed)); -#define HCI_EV_PIN_CODE_REQ 0x16 +#define HCI_EV_PIN_CODE_REQ 0x16 struct hci_ev_pin_code_req { bdaddr_t bdaddr; } __attribute__ ((packed)); -#define HCI_EV_LINK_KEY_REQ 0x17 +#define HCI_EV_LINK_KEY_REQ 0x17 struct hci_ev_link_key_req { bdaddr_t bdaddr; } __attribute__ ((packed)); -#define HCI_EV_LINK_KEY_NOTIFY 0x18 +#define HCI_EV_LINK_KEY_NOTIFY 0x18 struct hci_ev_link_key_notify { bdaddr_t bdaddr; - __u8 link_key[16]; - __u8 key_type; + __u8 link_key[16]; + __u8 key_type; } __attribute__ ((packed)); -#define HCI_EV_REMOTE_FEATURES 0x0B -struct hci_ev_remote_features { +#define HCI_EV_CLOCK_OFFSET 0x1c +struct hci_ev_clock_offset { __u8 status; __le16 handle; - __u8 features[8]; + __le16 clock_offset; } __attribute__ ((packed)); -#define HCI_EV_REMOTE_VERSION 0x0C -struct hci_ev_remote_version { +#define HCI_EV_PSCAN_REP_MODE 0x20 +struct hci_ev_pscan_rep_mode { + bdaddr_t bdaddr; + __u8 pscan_rep_mode; +} __attribute__ ((packed)); + +#define HCI_EV_INQUIRY_RESULT_WITH_RSSI 0x22 +struct inquiry_info_with_rssi { + bdaddr_t 
bdaddr; + __u8 pscan_rep_mode; + __u8 pscan_period_mode; + __u8 dev_class[3]; + __le16 clock_offset; + __s8 rssi; +} __attribute__ ((packed)); +struct inquiry_info_with_rssi_and_pscan_mode { + bdaddr_t bdaddr; + __u8 pscan_rep_mode; + __u8 pscan_period_mode; + __u8 pscan_mode; + __u8 dev_class[3]; + __le16 clock_offset; + __s8 rssi; +} __attribute__ ((packed)); + +#define HCI_EV_REMOTE_EXT_FEATURES 0x23 +struct hci_ev_remote_ext_features { __u8 status; __le16 handle; - __u8 lmp_ver; - __le16 manufacturer; - __le16 lmp_subver; + __u8 page; + __u8 max_page; + __u8 features[8]; } __attribute__ ((packed)); -#define HCI_EV_CLOCK_OFFSET 0x01C -struct hci_ev_clock_offset { +#define HCI_EV_SYNC_CONN_COMPLETE 0x2c +struct hci_ev_sync_conn_complete { __u8 status; __le16 handle; - __le16 clock_offset; + bdaddr_t bdaddr; + __u8 link_type; + __u8 tx_interval; + __u8 retrans_window; + __le16 rx_pkt_len; + __le16 tx_pkt_len; + __u8 air_mode; } __attribute__ ((packed)); -#define HCI_EV_PSCAN_REP_MODE 0x20 -struct hci_ev_pscan_rep_mode { - bdaddr_t bdaddr; - __u8 pscan_rep_mode; +#define HCI_EV_SYNC_CONN_CHANGED 0x2d +struct hci_ev_sync_conn_changed { + __u8 status; + __le16 handle; + __u8 tx_interval; + __u8 retrans_window; + __le16 rx_pkt_len; + __le16 tx_pkt_len; } __attribute__ ((packed)); -#define HCI_EV_SNIFF_SUBRATE 0x2E +#define HCI_EV_SNIFF_SUBRATE 0x2e struct hci_ev_sniff_subrate { __u8 status; __le16 handle; @@ -676,14 +763,25 @@ struct hci_ev_sniff_subrate { __le16 max_local_timeout; } __attribute__ ((packed)); +#define HCI_EV_EXTENDED_INQUIRY_RESULT 0x2f +struct extended_inquiry_info { + bdaddr_t bdaddr; + __u8 pscan_rep_mode; + __u8 pscan_period_mode; + __u8 dev_class[3]; + __le16 clock_offset; + __s8 rssi; + __u8 data[240]; +} __attribute__ ((packed)); + /* Internal events generated by Bluetooth stack */ -#define HCI_EV_STACK_INTERNAL 0xFD +#define HCI_EV_STACK_INTERNAL 0xfd struct hci_ev_stack_internal { __u16 type; __u8 data[0]; } __attribute__ ((packed)); -#define HCI_EV_SI_DEVICE 0x01 +#define HCI_EV_SI_DEVICE 0x01 struct hci_ev_si_device { __u16 event; __u16 dev_id; @@ -704,40 +802,40 @@ struct hci_ev_si_security { #define HCI_SCO_HDR_SIZE 3 struct hci_command_hdr { - __le16 opcode; /* OCF & OGF */ + __le16 opcode; /* OCF & OGF */ __u8 plen; } __attribute__ ((packed)); struct hci_event_hdr { - __u8 evt; - __u8 plen; + __u8 evt; + __u8 plen; } __attribute__ ((packed)); struct hci_acl_hdr { - __le16 handle; /* Handle & Flags(PB, BC) */ - __le16 dlen; + __le16 handle; /* Handle & Flags(PB, BC) */ + __le16 dlen; } __attribute__ ((packed)); struct hci_sco_hdr { - __le16 handle; - __u8 dlen; + __le16 handle; + __u8 dlen; } __attribute__ ((packed)); #ifdef __KERNEL__ #include <linux/skbuff.h> static inline struct hci_event_hdr *hci_event_hdr(const struct sk_buff *skb) { - return (struct hci_event_hdr *)skb->data; + return (struct hci_event_hdr *) skb->data; } static inline struct hci_acl_hdr *hci_acl_hdr(const struct sk_buff *skb) { - return (struct hci_acl_hdr *)skb->data; + return (struct hci_acl_hdr *) skb->data; } static inline struct hci_sco_hdr *hci_sco_hdr(const struct sk_buff *skb) { - return (struct hci_sco_hdr *)skb->data; + return (struct hci_sco_hdr *) skb->data; } #endif @@ -771,13 +869,13 @@ struct sockaddr_hci { struct hci_filter { unsigned long type_mask; unsigned long event_mask[2]; - __le16 opcode; + __le16 opcode; }; struct hci_ufilter { - __u32 type_mask; - __u32 event_mask[2]; - __le16 opcode; + __u32 type_mask; + __u32 event_mask[2]; + __le16 opcode; }; #define 
HCI_FLT_TYPE_BITS 31 @@ -825,15 +923,15 @@ struct hci_dev_info { struct hci_conn_info { __u16 handle; bdaddr_t bdaddr; - __u8 type; - __u8 out; - __u16 state; - __u32 link_mode; + __u8 type; + __u8 out; + __u16 state; + __u32 link_mode; }; struct hci_dev_req { - __u16 dev_id; - __u32 dev_opt; + __u16 dev_id; + __u32 dev_opt; }; struct hci_dev_list_req { diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index 8f67c8a7169..ea13baa3851 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -71,7 +71,10 @@ struct hci_dev { __u16 id; __u8 type; bdaddr_t bdaddr; + __u8 dev_name[248]; + __u8 dev_class[3]; __u8 features[8]; + __u8 commands[64]; __u8 hci_ver; __u16 hci_rev; __u16 manufacturer; @@ -310,10 +313,12 @@ static inline struct hci_conn *hci_conn_hash_lookup_state(struct hci_dev *hdev, void hci_acl_connect(struct hci_conn *conn); void hci_acl_disconn(struct hci_conn *conn, __u8 reason); void hci_add_sco(struct hci_conn *conn, __u16 handle); +void hci_setup_sync(struct hci_conn *conn, __u16 handle); struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst); -int hci_conn_del(struct hci_conn *conn); -void hci_conn_hash_flush(struct hci_dev *hdev); +int hci_conn_del(struct hci_conn *conn); +void hci_conn_hash_flush(struct hci_dev *hdev); +void hci_conn_check_pending(struct hci_dev *hdev); struct hci_conn *hci_connect(struct hci_dev *hdev, int type, bdaddr_t *src); int hci_conn_auth(struct hci_conn *conn); @@ -617,11 +622,11 @@ int hci_unregister_cb(struct hci_cb *hcb); int hci_register_notifier(struct notifier_block *nb); int hci_unregister_notifier(struct notifier_block *nb); -int hci_send_cmd(struct hci_dev *hdev, __u16 ogf, __u16 ocf, __u32 plen, void *param); +int hci_send_cmd(struct hci_dev *hdev, __u16 opcode, __u32 plen, void *param); int hci_send_acl(struct hci_conn *conn, struct sk_buff *skb, __u16 flags); int hci_send_sco(struct hci_conn *conn, struct sk_buff *skb); -void *hci_sent_cmd_data(struct hci_dev *hdev, __u16 ogf, __u16 ocf); +void *hci_sent_cmd_data(struct hci_dev *hdev, __u16 opcode); void hci_si_event(struct hci_dev *hdev, int type, int dlen, void *data); diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h index 70e70f5d3dd..73e115bc12d 100644 --- a/include/net/bluetooth/l2cap.h +++ b/include/net/bluetooth/l2cap.h @@ -29,7 +29,8 @@ #define L2CAP_DEFAULT_MTU 672 #define L2CAP_DEFAULT_FLUSH_TO 0xFFFF -#define L2CAP_CONN_TIMEOUT (HZ * 40) +#define L2CAP_CONN_TIMEOUT (40000) /* 40 seconds */ +#define L2CAP_INFO_TIMEOUT (4000) /* 4 seconds */ /* L2CAP socket address */ struct sockaddr_l2 { @@ -148,6 +149,19 @@ struct l2cap_conf_opt { #define L2CAP_CONF_MAX_SIZE 22 +struct l2cap_conf_rfc { + __u8 mode; + __u8 txwin_size; + __u8 max_transmit; + __le16 retrans_timeout; + __le16 monitor_timeout; + __le16 max_pdu_size; +} __attribute__ ((packed)); + +#define L2CAP_MODE_BASIC 0x00 +#define L2CAP_MODE_RETRANS 0x01 +#define L2CAP_MODE_FLOWCTL 0x02 + struct l2cap_disconn_req { __le16 dcid; __le16 scid; @@ -160,7 +174,6 @@ struct l2cap_disconn_rsp { struct l2cap_info_req { __le16 type; - __u8 data[0]; } __attribute__ ((packed)); struct l2cap_info_rsp { @@ -192,6 +205,13 @@ struct l2cap_conn { unsigned int mtu; + __u32 feat_mask; + + __u8 info_state; + __u8 info_ident; + + struct timer_list info_timer; + spinlock_t lock; struct sk_buff *rx_skb; @@ -202,6 +222,9 @@ struct l2cap_conn { struct l2cap_chan_list chan_list; }; +#define L2CAP_INFO_CL_MTU_REQ_SENT 0x01 +#define 
L2CAP_INFO_FEAT_MASK_REQ_SENT 0x02 + /* ----- L2CAP channel and socket info ----- */ #define l2cap_pi(sk) ((struct l2cap_pinfo *) sk) @@ -221,7 +244,6 @@ struct l2cap_pinfo { __u8 conf_len; __u8 conf_state; __u8 conf_retry; - __u16 conf_mtu; __u8 ident; @@ -232,10 +254,11 @@ struct l2cap_pinfo { struct sock *prev_c; }; -#define L2CAP_CONF_REQ_SENT 0x01 -#define L2CAP_CONF_INPUT_DONE 0x02 -#define L2CAP_CONF_OUTPUT_DONE 0x04 -#define L2CAP_CONF_MAX_RETRIES 2 +#define L2CAP_CONF_REQ_SENT 0x01 +#define L2CAP_CONF_INPUT_DONE 0x02 +#define L2CAP_CONF_OUTPUT_DONE 0x04 + +#define L2CAP_CONF_MAX_RETRIES 2 void l2cap_load(void); diff --git a/include/sound/version.h b/include/sound/version.h index 8d4a8dd8923..a2be8ad8894 100644 --- a/include/sound/version.h +++ b/include/sound/version.h @@ -1,3 +1,3 @@ /* include/version.h. Generated by alsa/ksync script. */ #define CONFIG_SND_VERSION "1.0.15" -#define CONFIG_SND_DATE " (Tue Oct 16 14:57:44 2007 UTC)" +#define CONFIG_SND_DATE " (Tue Oct 23 06:09:18 2007 UTC)" diff --git a/kernel/auditsc.c b/kernel/auditsc.c index 80ecab0942e..bce9ecdb771 100644 --- a/kernel/auditsc.c +++ b/kernel/auditsc.c @@ -1615,7 +1615,7 @@ static void audit_copy_inode(struct audit_names *name, const struct inode *inode /** * audit_inode - store the inode and device from a lookup * @name: name being audited - * @inode: inode being audited + * @dentry: dentry being audited * * Called from fs/namei.c:path_lookup(). */ @@ -1650,7 +1650,7 @@ void __audit_inode(const char *name, const struct dentry *dentry) /** * audit_inode_child - collect inode info for created/removed objects * @dname: inode's dentry name - * @inode: inode being audited + * @dentry: dentry being audited * @parent: inode of dentry parent * * For syscalls that create or remove filesystem objects, audit_inode diff --git a/kernel/sched.c b/kernel/sched.c index 7581e331b13..2810e562a99 100644 --- a/kernel/sched.c +++ b/kernel/sched.c @@ -3375,7 +3375,6 @@ void account_system_time(struct task_struct *p, int hardirq_offset, if (p->flags & PF_VCPU) { account_guest_time(p, cputime); - p->flags &= ~PF_VCPU; return; } diff --git a/kernel/sysctl_check.c b/kernel/sysctl_check.c index 3c9ef5a7d57..ed6fe51df77 100644 --- a/kernel/sysctl_check.c +++ b/kernel/sysctl_check.c @@ -731,7 +731,7 @@ static struct trans_ctl_table trans_net_table[] = { { NET_UNIX, "unix", trans_net_unix_table }, { NET_IPV4, "ipv4", trans_net_ipv4_table }, { NET_IPX, "ipx", trans_net_ipx_table }, - { NET_ATALK, "atalk", trans_net_atalk_table }, + { NET_ATALK, "appletalk", trans_net_atalk_table }, { NET_NETROM, "netrom", trans_net_netrom_table }, { NET_AX25, "ax25", trans_net_ax25_table }, { NET_BRIDGE, "bridge", trans_net_bridge_table }, diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index c567f219191..1faa5087dc8 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -389,6 +389,16 @@ config DEBUG_LIST If unsure, say N. +config DEBUG_SG + bool "Debug SG table operations" + depends on DEBUG_KERNEL + help + Enable this to turn on checks on scatter-gather tables. This can + help find problems with drivers that do not properly initialize + their sg tables. + + If unsure, say N. 
+ config FRAME_POINTER bool "Compile the kernel with frame pointers" depends on DEBUG_KERNEL && (X86 || CRIS || M68K || M68KNOMMU || FRV || UML || S390 || AVR32 || SUPERH || BFIN) diff --git a/lib/swiotlb.c b/lib/swiotlb.c index 752fd95323f..1a8050ade86 100644 --- a/lib/swiotlb.c +++ b/lib/swiotlb.c @@ -35,7 +35,7 @@ #define OFFSET(val,align) ((unsigned long) \ ( (val) & ( (align) - 1))) -#define SG_ENT_VIRT_ADDRESS(sg) (page_address((sg)->page) + (sg)->offset) +#define SG_ENT_VIRT_ADDRESS(sg) (sg_virt((sg))) #define SG_ENT_PHYS_ADDRESS(sg) virt_to_bus(SG_ENT_VIRT_ADDRESS(sg)) /* diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 1833879f843..3a47871a29d 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -187,7 +187,24 @@ int online_pages(unsigned long pfn, unsigned long nr_pages) unsigned long onlined_pages = 0; struct zone *zone; int need_zonelists_rebuild = 0; + int nid; + int ret; + struct memory_notify arg; + + arg.start_pfn = pfn; + arg.nr_pages = nr_pages; + arg.status_change_nid = -1; + + nid = page_to_nid(pfn_to_page(pfn)); + if (node_present_pages(nid) == 0) + arg.status_change_nid = nid; + ret = memory_notify(MEM_GOING_ONLINE, &arg); + ret = notifier_to_errno(ret); + if (ret) { + memory_notify(MEM_CANCEL_ONLINE, &arg); + return ret; + } /* * This doesn't need a lock to do pfn_to_page(). * The section can't be removed here because of the @@ -222,6 +239,10 @@ int online_pages(unsigned long pfn, unsigned long nr_pages) build_all_zonelists(); vm_total_pages = nr_free_pagecache_pages(); writeback_set_ratelimit(); + + if (onlined_pages) + memory_notify(MEM_ONLINE, &arg); + return 0; } #endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */ @@ -467,8 +488,9 @@ int offline_pages(unsigned long start_pfn, { unsigned long pfn, nr_pages, expire; long offlined_pages; - int ret, drain, retry_max; + int ret, drain, retry_max, node; struct zone *zone; + struct memory_notify arg; BUG_ON(start_pfn >= end_pfn); /* at least, alignment against pageblock is necessary */ @@ -480,11 +502,27 @@ int offline_pages(unsigned long start_pfn, we assume this for now. 
.*/ if (!test_pages_in_a_zone(start_pfn, end_pfn)) return -EINVAL; + + zone = page_zone(pfn_to_page(start_pfn)); + node = zone_to_nid(zone); + nr_pages = end_pfn - start_pfn; + /* set above range as isolated */ ret = start_isolate_page_range(start_pfn, end_pfn); if (ret) return ret; - nr_pages = end_pfn - start_pfn; + + arg.start_pfn = start_pfn; + arg.nr_pages = nr_pages; + arg.status_change_nid = -1; + if (nr_pages >= node_present_pages(node)) + arg.status_change_nid = node; + + ret = memory_notify(MEM_GOING_OFFLINE, &arg); + ret = notifier_to_errno(ret); + if (ret) + goto failed_removal; + pfn = start_pfn; expire = jiffies + timeout; drain = 0; @@ -539,20 +577,24 @@ repeat: /* reset pagetype flags */ start_isolate_page_range(start_pfn, end_pfn); /* removal success */ - zone = page_zone(pfn_to_page(start_pfn)); zone->present_pages -= offlined_pages; zone->zone_pgdat->node_present_pages -= offlined_pages; totalram_pages -= offlined_pages; num_physpages -= offlined_pages; + vm_total_pages = nr_free_pagecache_pages(); writeback_set_ratelimit(); + + memory_notify(MEM_OFFLINE, &arg); return 0; failed_removal: printk(KERN_INFO "memory offlining %lx to %lx failed\n", start_pfn, end_pfn); + memory_notify(MEM_CANCEL_OFFLINE, &arg); /* pushback to free area */ undo_isolate_page_range(start_pfn, end_pfn); + return ret; } #else diff --git a/mm/mmap.c b/mm/mmap.c index 7a30c498823..facc1a75bd4 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1171,8 +1171,7 @@ munmap_back: vm_flags = vma->vm_flags; if (vma_wants_writenotify(vma)) - vma->vm_page_prot = - protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC)]; + vma->vm_page_prot = vm_get_page_prot(vm_flags & ~VM_SHARED); if (!file || !vma_merge(mm, prev, addr, vma->vm_end, vma->vm_flags, NULL, file, pgoff, vma_policy(vma))) { diff --git a/mm/mprotect.c b/mm/mprotect.c index 55227845abb..4de546899dc 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -194,7 +194,7 @@ success: vma->vm_flags = newflags; vma->vm_page_prot = vm_get_page_prot(newflags); if (vma_wants_writenotify(vma)) { - vma->vm_page_prot = vm_get_page_prot(newflags); + vma->vm_page_prot = vm_get_page_prot(newflags & ~VM_SHARED); dirty_accountable = 1; } diff --git a/mm/shmem.c b/mm/shmem.c index 289dbb0a6fd..404e53bb212 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -2020,33 +2020,25 @@ static int shmem_match(struct inode *ino, void *vfh) return ino->i_ino == inum && fh[0] == ino->i_generation; } -static struct dentry *shmem_get_dentry(struct super_block *sb, void *vfh) +static struct dentry *shmem_fh_to_dentry(struct super_block *sb, + struct fid *fid, int fh_len, int fh_type) { - struct dentry *de = NULL; struct inode *inode; - __u32 *fh = vfh; - __u64 inum = fh[2]; - inum = (inum << 32) | fh[1]; + struct dentry *dentry = NULL; + u64 inum = fid->raw[2]; + inum = (inum << 32) | fid->raw[1]; + + if (fh_len < 3) + return NULL; - inode = ilookup5(sb, (unsigned long)(inum+fh[0]), shmem_match, vfh); + inode = ilookup5(sb, (unsigned long)(inum + fid->raw[0]), + shmem_match, fid->raw); if (inode) { - de = d_find_alias(inode); + dentry = d_find_alias(inode); iput(inode); } - return de? 
de: ERR_PTR(-ESTALE); -} - -static struct dentry *shmem_decode_fh(struct super_block *sb, __u32 *fh, - int len, int type, - int (*acceptable)(void *context, struct dentry *de), - void *context) -{ - if (len < 3) - return ERR_PTR(-ESTALE); - - return sb->s_export_op->find_exported_dentry(sb, fh, NULL, acceptable, - context); + return dentry; } static int shmem_encode_fh(struct dentry *dentry, __u32 *fh, int *len, @@ -2079,11 +2071,10 @@ static int shmem_encode_fh(struct dentry *dentry, __u32 *fh, int *len, return 1; } -static struct export_operations shmem_export_ops = { +static const struct export_operations shmem_export_ops = { .get_parent = shmem_get_parent, - .get_dentry = shmem_get_dentry, .encode_fh = shmem_encode_fh, - .decode_fh = shmem_decode_fh, + .fh_to_dentry = shmem_fh_to_dentry, }; static int shmem_parse_options(char *options, int *mode, uid_t *uid, diff --git a/mm/slub.c b/mm/slub.c index e29a42988c7..aac1dd3c657 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -20,6 +20,7 @@ #include <linux/mempolicy.h> #include <linux/ctype.h> #include <linux/kallsyms.h> +#include <linux/memory.h> /* * Lock order: @@ -2694,6 +2695,121 @@ int kmem_cache_shrink(struct kmem_cache *s) } EXPORT_SYMBOL(kmem_cache_shrink); +#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG) +static int slab_mem_going_offline_callback(void *arg) +{ + struct kmem_cache *s; + + down_read(&slub_lock); + list_for_each_entry(s, &slab_caches, list) + kmem_cache_shrink(s); + up_read(&slub_lock); + + return 0; +} + +static void slab_mem_offline_callback(void *arg) +{ + struct kmem_cache_node *n; + struct kmem_cache *s; + struct memory_notify *marg = arg; + int offline_node; + + offline_node = marg->status_change_nid; + + /* + * If the node still has available memory, we still need the + * kmem_cache_node for it. + */ + if (offline_node < 0) + return; + + down_read(&slub_lock); + list_for_each_entry(s, &slab_caches, list) { + n = get_node(s, offline_node); + if (n) { + /* + * if n->nr_slabs > 0, slabs still exist on the node + * that is going down. We were unable to free them, + * and the offline_pages() function shouldn't call this + * callback. So, we must fail. + */ + BUG_ON(atomic_read(&n->nr_slabs)); + + s->node[offline_node] = NULL; + kmem_cache_free(kmalloc_caches, n); + } + } + up_read(&slub_lock); +} + +static int slab_mem_going_online_callback(void *arg) +{ + struct kmem_cache_node *n; + struct kmem_cache *s; + struct memory_notify *marg = arg; + int nid = marg->status_change_nid; + int ret = 0; + + /* + * If the node's memory is already available, then kmem_cache_node is + * already created. Nothing to do. + */ + if (nid < 0) + return 0; + + /* + * We are bringing a node online. No memory is available yet. We must + * allocate a kmem_cache_node structure in order to bring the node + * online. + */ + down_read(&slub_lock); + list_for_each_entry(s, &slab_caches, list) { + /* + * XXX: kmem_cache_alloc_node will fall back to other nodes + * since memory is not yet available from the node that + * is brought up. 
+ */ + n = kmem_cache_alloc(kmalloc_caches, GFP_KERNEL); + if (!n) { + ret = -ENOMEM; + goto out; + } + init_kmem_cache_node(n); + s->node[nid] = n; + } +out: + up_read(&slub_lock); + return ret; +} + +static int slab_memory_callback(struct notifier_block *self, + unsigned long action, void *arg) +{ + int ret = 0; + + switch (action) { + case MEM_GOING_ONLINE: + ret = slab_mem_going_online_callback(arg); + break; + case MEM_GOING_OFFLINE: + ret = slab_mem_going_offline_callback(arg); + break; + case MEM_OFFLINE: + case MEM_CANCEL_ONLINE: + slab_mem_offline_callback(arg); + break; + case MEM_ONLINE: + case MEM_CANCEL_OFFLINE: + break; + } + + ret = notifier_from_errno(ret); + return ret; +} + +#endif /* CONFIG_MEMORY_HOTPLUG */ + /******************************************************************** * Basic setup of slabs *******************************************************************/ @@ -2715,6 +2831,8 @@ void __init kmem_cache_init(void) sizeof(struct kmem_cache_node), GFP_KERNEL); kmalloc_caches[0].refcount = -1; caches++; + + hotplug_memory_notifier(slab_memory_callback, 1); #endif /* Able to allocate the per node structures */ diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index 5fdfc9a67d3..9483320f6da 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -78,11 +78,11 @@ void hci_acl_connect(struct hci_conn *conn) cp.pkt_type = cpu_to_le16(hdev->pkt_type & ACL_PTYPE_MASK); if (lmp_rswitch_capable(hdev) && !(hdev->link_mode & HCI_LM_MASTER)) - cp.role_switch = 0x01; + cp.role_switch = 0x01; else - cp.role_switch = 0x00; + cp.role_switch = 0x00; - hci_send_cmd(hdev, OGF_LINK_CTL, OCF_CREATE_CONN, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_CREATE_CONN, sizeof(cp), &cp); } static void hci_acl_connect_cancel(struct hci_conn *conn) @@ -95,8 +95,7 @@ static void hci_acl_connect_cancel(struct hci_conn *conn) return; bacpy(&cp.bdaddr, &conn->dst); - hci_send_cmd(conn->hdev, OGF_LINK_CTL, - OCF_CREATE_CONN_CANCEL, sizeof(cp), &cp); + hci_send_cmd(conn->hdev, HCI_OP_CREATE_CONN_CANCEL, sizeof(cp), &cp); } void hci_acl_disconn(struct hci_conn *conn, __u8 reason) @@ -109,8 +108,7 @@ void hci_acl_disconn(struct hci_conn *conn, __u8 reason) cp.handle = cpu_to_le16(conn->handle); cp.reason = reason; - hci_send_cmd(conn->hdev, OGF_LINK_CTL, - OCF_DISCONNECT, sizeof(cp), &cp); + hci_send_cmd(conn->hdev, HCI_OP_DISCONNECT, sizeof(cp), &cp); } void hci_add_sco(struct hci_conn *conn, __u16 handle) @@ -126,7 +124,29 @@ void hci_add_sco(struct hci_conn *conn, __u16 handle) cp.handle = cpu_to_le16(handle); cp.pkt_type = cpu_to_le16(hdev->pkt_type & SCO_PTYPE_MASK); - hci_send_cmd(hdev, OGF_LINK_CTL, OCF_ADD_SCO, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_ADD_SCO, sizeof(cp), &cp); +} + +void hci_setup_sync(struct hci_conn *conn, __u16 handle) +{ + struct hci_dev *hdev = conn->hdev; + struct hci_cp_setup_sync_conn cp; + + BT_DBG("%p", conn); + + conn->state = BT_CONNECT; + conn->out = 1; + + cp.handle = cpu_to_le16(handle); + cp.pkt_type = cpu_to_le16(hdev->esco_type); + + cp.tx_bandwidth = cpu_to_le32(0x00001f40); + cp.rx_bandwidth = cpu_to_le32(0x00001f40); + cp.max_latency = cpu_to_le16(0xffff); + cp.voice_setting = cpu_to_le16(hdev->voice_setting); + cp.retrans_effort = 0xff; + + hci_send_cmd(hdev, HCI_OP_SETUP_SYNC_CONN, sizeof(cp), &cp); } static void hci_conn_timeout(unsigned long arg) @@ -143,7 +163,10 @@ static void hci_conn_timeout(unsigned long arg) switch (conn->state) { case BT_CONNECT: - hci_acl_connect_cancel(conn); + if (conn->type == ACL_LINK) + 
hci_acl_connect_cancel(conn); + else + hci_acl_disconn(conn, 0x13); break; case BT_CONNECTED: hci_acl_disconn(conn, 0x13); @@ -330,8 +353,12 @@ struct hci_conn *hci_connect(struct hci_dev *hdev, int type, bdaddr_t *dst) hci_conn_hold(sco); if (acl->state == BT_CONNECTED && - (sco->state == BT_OPEN || sco->state == BT_CLOSED)) - hci_add_sco(sco, acl->handle); + (sco->state == BT_OPEN || sco->state == BT_CLOSED)) { + if (lmp_esco_capable(hdev)) + hci_setup_sync(sco, acl->handle); + else + hci_add_sco(sco, acl->handle); + } return sco; } @@ -348,7 +375,7 @@ int hci_conn_auth(struct hci_conn *conn) if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->pend)) { struct hci_cp_auth_requested cp; cp.handle = cpu_to_le16(conn->handle); - hci_send_cmd(conn->hdev, OGF_LINK_CTL, OCF_AUTH_REQUESTED, sizeof(cp), &cp); + hci_send_cmd(conn->hdev, HCI_OP_AUTH_REQUESTED, sizeof(cp), &cp); } return 0; } @@ -369,7 +396,7 @@ int hci_conn_encrypt(struct hci_conn *conn) struct hci_cp_set_conn_encrypt cp; cp.handle = cpu_to_le16(conn->handle); cp.encrypt = 1; - hci_send_cmd(conn->hdev, OGF_LINK_CTL, OCF_SET_CONN_ENCRYPT, sizeof(cp), &cp); + hci_send_cmd(conn->hdev, HCI_OP_SET_CONN_ENCRYPT, sizeof(cp), &cp); } return 0; } @@ -383,7 +410,7 @@ int hci_conn_change_link_key(struct hci_conn *conn) if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->pend)) { struct hci_cp_change_conn_link_key cp; cp.handle = cpu_to_le16(conn->handle); - hci_send_cmd(conn->hdev, OGF_LINK_CTL, OCF_CHANGE_CONN_LINK_KEY, sizeof(cp), &cp); + hci_send_cmd(conn->hdev, HCI_OP_CHANGE_CONN_LINK_KEY, sizeof(cp), &cp); } return 0; } @@ -401,7 +428,7 @@ int hci_conn_switch_role(struct hci_conn *conn, uint8_t role) struct hci_cp_switch_role cp; bacpy(&cp.bdaddr, &conn->dst); cp.role = role; - hci_send_cmd(conn->hdev, OGF_LINK_POLICY, OCF_SWITCH_ROLE, sizeof(cp), &cp); + hci_send_cmd(conn->hdev, HCI_OP_SWITCH_ROLE, sizeof(cp), &cp); } return 0; } @@ -423,8 +450,7 @@ void hci_conn_enter_active_mode(struct hci_conn *conn) if (!test_and_set_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) { struct hci_cp_exit_sniff_mode cp; cp.handle = cpu_to_le16(conn->handle); - hci_send_cmd(hdev, OGF_LINK_POLICY, - OCF_EXIT_SNIFF_MODE, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_EXIT_SNIFF_MODE, sizeof(cp), &cp); } timer: @@ -455,8 +481,7 @@ void hci_conn_enter_sniff_mode(struct hci_conn *conn) cp.max_latency = cpu_to_le16(0); cp.min_remote_timeout = cpu_to_le16(0); cp.min_local_timeout = cpu_to_le16(0); - hci_send_cmd(hdev, OGF_LINK_POLICY, - OCF_SNIFF_SUBRATE, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_SNIFF_SUBRATE, sizeof(cp), &cp); } if (!test_and_set_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) { @@ -466,8 +491,7 @@ void hci_conn_enter_sniff_mode(struct hci_conn *conn) cp.min_interval = cpu_to_le16(hdev->sniff_min_interval); cp.attempt = cpu_to_le16(4); cp.timeout = cpu_to_le16(1); - hci_send_cmd(hdev, OGF_LINK_POLICY, - OCF_SNIFF_MODE, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_SNIFF_MODE, sizeof(cp), &cp); } } @@ -493,6 +517,22 @@ void hci_conn_hash_flush(struct hci_dev *hdev) } } +/* Check pending connect attempts */ +void hci_conn_check_pending(struct hci_dev *hdev) +{ + struct hci_conn *conn; + + BT_DBG("hdev %s", hdev->name); + + hci_dev_lock(hdev); + + conn = hci_conn_hash_lookup_state(hdev, ACL_LINK, BT_CONNECT2); + if (conn) + hci_acl_connect(conn); + + hci_dev_unlock(hdev); +} + int hci_get_conn_list(void __user *arg) { struct hci_conn_list_req req, *cl; diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c index 18e3afc964d..372b0d3b75a 100644 
--- a/net/bluetooth/hci_core.c +++ b/net/bluetooth/hci_core.c @@ -176,7 +176,7 @@ static void hci_reset_req(struct hci_dev *hdev, unsigned long opt) BT_DBG("%s %ld", hdev->name, opt); /* Reset device */ - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_RESET, 0, NULL); + hci_send_cmd(hdev, HCI_OP_RESET, 0, NULL); } static void hci_init_req(struct hci_dev *hdev, unsigned long opt) @@ -202,16 +202,16 @@ static void hci_init_req(struct hci_dev *hdev, unsigned long opt) /* Reset */ if (test_bit(HCI_QUIRK_RESET_ON_INIT, &hdev->quirks)) - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_RESET, 0, NULL); + hci_send_cmd(hdev, HCI_OP_RESET, 0, NULL); /* Read Local Supported Features */ - hci_send_cmd(hdev, OGF_INFO_PARAM, OCF_READ_LOCAL_FEATURES, 0, NULL); + hci_send_cmd(hdev, HCI_OP_READ_LOCAL_FEATURES, 0, NULL); /* Read Local Version */ - hci_send_cmd(hdev, OGF_INFO_PARAM, OCF_READ_LOCAL_VERSION, 0, NULL); + hci_send_cmd(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL); /* Read Buffer Size (ACL mtu, max pkt, etc.) */ - hci_send_cmd(hdev, OGF_INFO_PARAM, OCF_READ_BUFFER_SIZE, 0, NULL); + hci_send_cmd(hdev, HCI_OP_READ_BUFFER_SIZE, 0, NULL); #if 0 /* Host buffer size */ @@ -221,29 +221,35 @@ static void hci_init_req(struct hci_dev *hdev, unsigned long opt) cp.sco_mtu = HCI_MAX_SCO_SIZE; cp.acl_max_pkt = cpu_to_le16(0xffff); cp.sco_max_pkt = cpu_to_le16(0xffff); - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_HOST_BUFFER_SIZE, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_HOST_BUFFER_SIZE, sizeof(cp), &cp); } #endif /* Read BD Address */ - hci_send_cmd(hdev, OGF_INFO_PARAM, OCF_READ_BD_ADDR, 0, NULL); + hci_send_cmd(hdev, HCI_OP_READ_BD_ADDR, 0, NULL); + + /* Read Class of Device */ + hci_send_cmd(hdev, HCI_OP_READ_CLASS_OF_DEV, 0, NULL); + + /* Read Local Name */ + hci_send_cmd(hdev, HCI_OP_READ_LOCAL_NAME, 0, NULL); /* Read Voice Setting */ - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_READ_VOICE_SETTING, 0, NULL); + hci_send_cmd(hdev, HCI_OP_READ_VOICE_SETTING, 0, NULL); /* Optional initialization */ /* Clear Event Filters */ flt_type = HCI_FLT_CLEAR_ALL; - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_SET_EVENT_FLT, 1, &flt_type); + hci_send_cmd(hdev, HCI_OP_SET_EVENT_FLT, 1, &flt_type); /* Page timeout ~20 secs */ param = cpu_to_le16(0x8000); - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_WRITE_PG_TIMEOUT, 2, &param); + hci_send_cmd(hdev, HCI_OP_WRITE_PG_TIMEOUT, 2, &param); /* Connection accept timeout ~20 secs */ param = cpu_to_le16(0x7d00); - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_WRITE_CA_TIMEOUT, 2, &param); + hci_send_cmd(hdev, HCI_OP_WRITE_CA_TIMEOUT, 2, &param); } static void hci_scan_req(struct hci_dev *hdev, unsigned long opt) @@ -253,7 +259,7 @@ static void hci_scan_req(struct hci_dev *hdev, unsigned long opt) BT_DBG("%s %x", hdev->name, scan); /* Inquiry and Page scans */ - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_WRITE_SCAN_ENABLE, 1, &scan); + hci_send_cmd(hdev, HCI_OP_WRITE_SCAN_ENABLE, 1, &scan); } static void hci_auth_req(struct hci_dev *hdev, unsigned long opt) @@ -263,7 +269,7 @@ static void hci_auth_req(struct hci_dev *hdev, unsigned long opt) BT_DBG("%s %x", hdev->name, auth); /* Authentication */ - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_WRITE_AUTH_ENABLE, 1, &auth); + hci_send_cmd(hdev, HCI_OP_WRITE_AUTH_ENABLE, 1, &auth); } static void hci_encrypt_req(struct hci_dev *hdev, unsigned long opt) @@ -273,7 +279,7 @@ static void hci_encrypt_req(struct hci_dev *hdev, unsigned long opt) BT_DBG("%s %x", hdev->name, encrypt); /* Authentication */ - hci_send_cmd(hdev, OGF_HOST_CTL, OCF_WRITE_ENCRYPT_MODE, 1, &encrypt); + hci_send_cmd(hdev, HCI_OP_WRITE_ENCRYPT_MODE, 
1, &encrypt); } /* Get HCI device by index. @@ -384,7 +390,7 @@ static void hci_inq_req(struct hci_dev *hdev, unsigned long opt) memcpy(&cp.lap, &ir->lap, 3); cp.length = ir->length; cp.num_rsp = ir->num_rsp; - hci_send_cmd(hdev, OGF_LINK_CTL, OCF_INQUIRY, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_INQUIRY, sizeof(cp), &cp); } int hci_inquiry(void __user *arg) @@ -1111,13 +1117,13 @@ static int hci_send_frame(struct sk_buff *skb) } /* Send HCI command */ -int hci_send_cmd(struct hci_dev *hdev, __u16 ogf, __u16 ocf, __u32 plen, void *param) +int hci_send_cmd(struct hci_dev *hdev, __u16 opcode, __u32 plen, void *param) { int len = HCI_COMMAND_HDR_SIZE + plen; struct hci_command_hdr *hdr; struct sk_buff *skb; - BT_DBG("%s ogf 0x%x ocf 0x%x plen %d", hdev->name, ogf, ocf, plen); + BT_DBG("%s opcode 0x%x plen %d", hdev->name, opcode, plen); skb = bt_skb_alloc(len, GFP_ATOMIC); if (!skb) { @@ -1126,7 +1132,7 @@ int hci_send_cmd(struct hci_dev *hdev, __u16 ogf, __u16 ocf, __u32 plen, void *p } hdr = (struct hci_command_hdr *) skb_put(skb, HCI_COMMAND_HDR_SIZE); - hdr->opcode = cpu_to_le16(hci_opcode_pack(ogf, ocf)); + hdr->opcode = cpu_to_le16(opcode); hdr->plen = plen; if (plen) @@ -1143,7 +1149,7 @@ int hci_send_cmd(struct hci_dev *hdev, __u16 ogf, __u16 ocf, __u32 plen, void *p } /* Get data from the previously sent command */ -void *hci_sent_cmd_data(struct hci_dev *hdev, __u16 ogf, __u16 ocf) +void *hci_sent_cmd_data(struct hci_dev *hdev, __u16 opcode) { struct hci_command_hdr *hdr; @@ -1152,10 +1158,10 @@ void *hci_sent_cmd_data(struct hci_dev *hdev, __u16 ogf, __u16 ocf) hdr = (void *) hdev->sent_cmd->data; - if (hdr->opcode != cpu_to_le16(hci_opcode_pack(ogf, ocf))) + if (hdr->opcode != cpu_to_le16(opcode)) return NULL; - BT_DBG("%s ogf 0x%x ocf 0x%x", hdev->name, ogf, ocf); + BT_DBG("%s opcode 0x%x", hdev->name, opcode); return hdev->sent_cmd->data + HCI_COMMAND_HDR_SIZE; } @@ -1355,6 +1361,26 @@ static inline void hci_sched_sco(struct hci_dev *hdev) } } +static inline void hci_sched_esco(struct hci_dev *hdev) +{ + struct hci_conn *conn; + struct sk_buff *skb; + int quote; + + BT_DBG("%s", hdev->name); + + while (hdev->sco_cnt && (conn = hci_low_sent(hdev, ESCO_LINK, &quote))) { + while (quote-- && (skb = skb_dequeue(&conn->data_q))) { + BT_DBG("skb %p len %d", skb, skb->len); + hci_send_frame(skb); + + conn->sent++; + if (conn->sent == ~0) + conn->sent = 0; + } + } +} + static void hci_tx_task(unsigned long arg) { struct hci_dev *hdev = (struct hci_dev *) arg; @@ -1370,6 +1396,8 @@ static void hci_tx_task(unsigned long arg) hci_sched_sco(hdev); + hci_sched_esco(hdev); + /* Send next queued raw (unknown type) packet */ while ((skb = skb_dequeue(&hdev->raw_q))) hci_send_frame(skb); diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 4baea1e3865..46df2e403df 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -52,234 +52,273 @@ /* Handle HCI Event packets */ -/* Command Complete OGF LINK_CTL */ -static void hci_cc_link_ctl(struct hci_dev *hdev, __u16 ocf, struct sk_buff *skb) +static void hci_cc_inquiry_cancel(struct hci_dev *hdev, struct sk_buff *skb) { - __u8 status; - struct hci_conn *pend; + __u8 status = *((__u8 *) skb->data); - BT_DBG("%s ocf 0x%x", hdev->name, ocf); + BT_DBG("%s status 0x%x", hdev->name, status); - switch (ocf) { - case OCF_INQUIRY_CANCEL: - case OCF_EXIT_PERIODIC_INQ: - status = *((__u8 *) skb->data); + if (status) + return; - if (status) { - BT_DBG("%s Inquiry cancel error: status 0x%x", hdev->name, status); - } else { - 
clear_bit(HCI_INQUIRY, &hdev->flags); - hci_req_complete(hdev, status); - } + clear_bit(HCI_INQUIRY, &hdev->flags); - hci_dev_lock(hdev); + hci_req_complete(hdev, status); - pend = hci_conn_hash_lookup_state(hdev, ACL_LINK, BT_CONNECT2); - if (pend) - hci_acl_connect(pend); + hci_conn_check_pending(hdev); +} - hci_dev_unlock(hdev); +static void hci_cc_exit_periodic_inq(struct hci_dev *hdev, struct sk_buff *skb) +{ + __u8 status = *((__u8 *) skb->data); - break; + BT_DBG("%s status 0x%x", hdev->name, status); - default: - BT_DBG("%s Command complete: ogf LINK_CTL ocf %x", hdev->name, ocf); - break; + if (status) + return; + + clear_bit(HCI_INQUIRY, &hdev->flags); + + hci_conn_check_pending(hdev); +} + +static void hci_cc_remote_name_req_cancel(struct hci_dev *hdev, struct sk_buff *skb) +{ + BT_DBG("%s", hdev->name); +} + +static void hci_cc_role_discovery(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_role_discovery *rp = (void *) skb->data; + struct hci_conn *conn; + + BT_DBG("%s status 0x%x", hdev->name, rp->status); + + if (rp->status) + return; + + hci_dev_lock(hdev); + + conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(rp->handle)); + if (conn) { + if (rp->role) + conn->link_mode &= ~HCI_LM_MASTER; + else + conn->link_mode |= HCI_LM_MASTER; } + + hci_dev_unlock(hdev); } -/* Command Complete OGF LINK_POLICY */ -static void hci_cc_link_policy(struct hci_dev *hdev, __u16 ocf, struct sk_buff *skb) +static void hci_cc_write_link_policy(struct hci_dev *hdev, struct sk_buff *skb) { + struct hci_rp_write_link_policy *rp = (void *) skb->data; struct hci_conn *conn; - struct hci_rp_role_discovery *rd; - struct hci_rp_write_link_policy *lp; void *sent; - BT_DBG("%s ocf 0x%x", hdev->name, ocf); + BT_DBG("%s status 0x%x", hdev->name, rp->status); - switch (ocf) { - case OCF_ROLE_DISCOVERY: - rd = (void *) skb->data; + if (rp->status) + return; - if (rd->status) - break; + sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_LINK_POLICY); + if (!sent) + return; - hci_dev_lock(hdev); + hci_dev_lock(hdev); - conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(rd->handle)); - if (conn) { - if (rd->role) - conn->link_mode &= ~HCI_LM_MASTER; - else - conn->link_mode |= HCI_LM_MASTER; - } + conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(rp->handle)); + if (conn) { + __le16 policy = get_unaligned((__le16 *) (sent + 2)); + conn->link_policy = __le16_to_cpu(policy); + } - hci_dev_unlock(hdev); - break; + hci_dev_unlock(hdev); +} - case OCF_WRITE_LINK_POLICY: - sent = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_WRITE_LINK_POLICY); - if (!sent) - break; +static void hci_cc_reset(struct hci_dev *hdev, struct sk_buff *skb) +{ + __u8 status = *((__u8 *) skb->data); - lp = (struct hci_rp_write_link_policy *) skb->data; + BT_DBG("%s status 0x%x", hdev->name, status); - if (lp->status) - break; + hci_req_complete(hdev, status); +} - hci_dev_lock(hdev); +static void hci_cc_write_local_name(struct hci_dev *hdev, struct sk_buff *skb) +{ + __u8 status = *((__u8 *) skb->data); + void *sent; - conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(lp->handle)); - if (conn) { - __le16 policy = get_unaligned((__le16 *) (sent + 2)); - conn->link_policy = __le16_to_cpu(policy); - } + BT_DBG("%s status 0x%x", hdev->name, status); - hci_dev_unlock(hdev); - break; + sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_LOCAL_NAME); + if (!sent) + return; - default: - BT_DBG("%s: Command complete: ogf LINK_POLICY ocf %x", - hdev->name, ocf); - break; + if (!status) + memcpy(hdev->dev_name, sent, 248); +} + +static 
void hci_cc_read_local_name(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_read_local_name *rp = (void *) skb->data; + + BT_DBG("%s status 0x%x", hdev->name, rp->status); + + if (rp->status) + return; + + memcpy(hdev->dev_name, rp->name, 248); +} + +static void hci_cc_write_auth_enable(struct hci_dev *hdev, struct sk_buff *skb) +{ + __u8 status = *((__u8 *) skb->data); + void *sent; + + BT_DBG("%s status 0x%x", hdev->name, status); + + sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_AUTH_ENABLE); + if (!sent) + return; + + if (!status) { + __u8 param = *((__u8 *) sent); + + if (param == AUTH_ENABLED) + set_bit(HCI_AUTH, &hdev->flags); + else + clear_bit(HCI_AUTH, &hdev->flags); } + + hci_req_complete(hdev, status); } -/* Command Complete OGF HOST_CTL */ -static void hci_cc_host_ctl(struct hci_dev *hdev, __u16 ocf, struct sk_buff *skb) +static void hci_cc_write_encrypt_mode(struct hci_dev *hdev, struct sk_buff *skb) { - __u8 status, param; - __u16 setting; - struct hci_rp_read_voice_setting *vs; + __u8 status = *((__u8 *) skb->data); void *sent; - BT_DBG("%s ocf 0x%x", hdev->name, ocf); + BT_DBG("%s status 0x%x", hdev->name, status); - switch (ocf) { - case OCF_RESET: - status = *((__u8 *) skb->data); - hci_req_complete(hdev, status); - break; + sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_ENCRYPT_MODE); + if (!sent) + return; - case OCF_SET_EVENT_FLT: - status = *((__u8 *) skb->data); - if (status) { - BT_DBG("%s SET_EVENT_FLT failed %d", hdev->name, status); - } else { - BT_DBG("%s SET_EVENT_FLT succeseful", hdev->name); - } - break; + if (!status) { + __u8 param = *((__u8 *) sent); - case OCF_WRITE_AUTH_ENABLE: - sent = hci_sent_cmd_data(hdev, OGF_HOST_CTL, OCF_WRITE_AUTH_ENABLE); - if (!sent) - break; + if (param) + set_bit(HCI_ENCRYPT, &hdev->flags); + else + clear_bit(HCI_ENCRYPT, &hdev->flags); + } - status = *((__u8 *) skb->data); - param = *((__u8 *) sent); + hci_req_complete(hdev, status); +} - if (!status) { - if (param == AUTH_ENABLED) - set_bit(HCI_AUTH, &hdev->flags); - else - clear_bit(HCI_AUTH, &hdev->flags); - } - hci_req_complete(hdev, status); - break; +static void hci_cc_write_scan_enable(struct hci_dev *hdev, struct sk_buff *skb) +{ + __u8 status = *((__u8 *) skb->data); + void *sent; - case OCF_WRITE_ENCRYPT_MODE: - sent = hci_sent_cmd_data(hdev, OGF_HOST_CTL, OCF_WRITE_ENCRYPT_MODE); - if (!sent) - break; + BT_DBG("%s status 0x%x", hdev->name, status); - status = *((__u8 *) skb->data); - param = *((__u8 *) sent); + sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_SCAN_ENABLE); + if (!sent) + return; - if (!status) { - if (param) - set_bit(HCI_ENCRYPT, &hdev->flags); - else - clear_bit(HCI_ENCRYPT, &hdev->flags); - } - hci_req_complete(hdev, status); - break; + if (!status) { + __u8 param = *((__u8 *) sent); - case OCF_WRITE_CA_TIMEOUT: - status = *((__u8 *) skb->data); - if (status) { - BT_DBG("%s OCF_WRITE_CA_TIMEOUT failed %d", hdev->name, status); - } else { - BT_DBG("%s OCF_WRITE_CA_TIMEOUT succeseful", hdev->name); - } - break; + clear_bit(HCI_PSCAN, &hdev->flags); + clear_bit(HCI_ISCAN, &hdev->flags); - case OCF_WRITE_PG_TIMEOUT: - status = *((__u8 *) skb->data); - if (status) { - BT_DBG("%s OCF_WRITE_PG_TIMEOUT failed %d", hdev->name, status); - } else { - BT_DBG("%s: OCF_WRITE_PG_TIMEOUT succeseful", hdev->name); - } - break; + if (param & SCAN_INQUIRY) + set_bit(HCI_ISCAN, &hdev->flags); - case OCF_WRITE_SCAN_ENABLE: - sent = hci_sent_cmd_data(hdev, OGF_HOST_CTL, OCF_WRITE_SCAN_ENABLE); - if (!sent) - break; + if (param & SCAN_PAGE) + set_bit(HCI_PSCAN, 
&hdev->flags); + } - status = *((__u8 *) skb->data); - param = *((__u8 *) sent); + hci_req_complete(hdev, status); +} - BT_DBG("param 0x%x", param); +static void hci_cc_read_class_of_dev(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_read_class_of_dev *rp = (void *) skb->data; - if (!status) { - clear_bit(HCI_PSCAN, &hdev->flags); - clear_bit(HCI_ISCAN, &hdev->flags); - if (param & SCAN_INQUIRY) - set_bit(HCI_ISCAN, &hdev->flags); + BT_DBG("%s status 0x%x", hdev->name, rp->status); - if (param & SCAN_PAGE) - set_bit(HCI_PSCAN, &hdev->flags); - } - hci_req_complete(hdev, status); - break; + if (rp->status) + return; - case OCF_READ_VOICE_SETTING: - vs = (struct hci_rp_read_voice_setting *) skb->data; + memcpy(hdev->dev_class, rp->dev_class, 3); - if (vs->status) { - BT_DBG("%s READ_VOICE_SETTING failed %d", hdev->name, vs->status); - break; - } + BT_DBG("%s class 0x%.2x%.2x%.2x", hdev->name, + hdev->dev_class[2], hdev->dev_class[1], hdev->dev_class[0]); +} - setting = __le16_to_cpu(vs->voice_setting); +static void hci_cc_write_class_of_dev(struct hci_dev *hdev, struct sk_buff *skb) +{ + __u8 status = *((__u8 *) skb->data); + void *sent; - if (hdev->voice_setting != setting ) { - hdev->voice_setting = setting; + BT_DBG("%s status 0x%x", hdev->name, status); - BT_DBG("%s: voice setting 0x%04x", hdev->name, setting); + sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_CLASS_OF_DEV); + if (!sent) + return; - if (hdev->notify) { - tasklet_disable(&hdev->tx_task); - hdev->notify(hdev, HCI_NOTIFY_VOICE_SETTING); - tasklet_enable(&hdev->tx_task); - } - } - break; + if (!status) + memcpy(hdev->dev_class, sent, 3); +} - case OCF_WRITE_VOICE_SETTING: - sent = hci_sent_cmd_data(hdev, OGF_HOST_CTL, OCF_WRITE_VOICE_SETTING); - if (!sent) - break; +static void hci_cc_read_voice_setting(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_read_voice_setting *rp = (void *) skb->data; + __u16 setting; + + BT_DBG("%s status 0x%x", hdev->name, rp->status); + + if (rp->status) + return; + + setting = __le16_to_cpu(rp->voice_setting); + + if (hdev->voice_setting == setting ) + return; + + hdev->voice_setting = setting; - status = *((__u8 *) skb->data); - setting = __le16_to_cpu(get_unaligned((__le16 *) sent)); + BT_DBG("%s voice setting 0x%04x", hdev->name, setting); - if (!status && hdev->voice_setting != setting) { + if (hdev->notify) { + tasklet_disable(&hdev->tx_task); + hdev->notify(hdev, HCI_NOTIFY_VOICE_SETTING); + tasklet_enable(&hdev->tx_task); + } +} + +static void hci_cc_write_voice_setting(struct hci_dev *hdev, struct sk_buff *skb) +{ + __u8 status = *((__u8 *) skb->data); + void *sent; + + BT_DBG("%s status 0x%x", hdev->name, status); + + sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_VOICE_SETTING); + if (!sent) + return; + + if (!status) { + __u16 setting = __le16_to_cpu(get_unaligned((__le16 *) sent)); + + if (hdev->voice_setting != setting) { hdev->voice_setting = setting; - BT_DBG("%s: voice setting 0x%04x", hdev->name, setting); + BT_DBG("%s voice setting 0x%04x", hdev->name, setting); if (hdev->notify) { tasklet_disable(&hdev->tx_task); @@ -287,143 +326,153 @@ static void hci_cc_host_ctl(struct hci_dev *hdev, __u16 ocf, struct sk_buff *skb tasklet_enable(&hdev->tx_task); } } - hci_req_complete(hdev, status); - break; - - case OCF_HOST_BUFFER_SIZE: - status = *((__u8 *) skb->data); - if (status) { - BT_DBG("%s OCF_BUFFER_SIZE failed %d", hdev->name, status); - hci_req_complete(hdev, status); - } - break; - - default: - BT_DBG("%s Command complete: ogf HOST_CTL ocf %x", 
hdev->name, ocf); - break; } } -/* Command Complete OGF INFO_PARAM */ -static void hci_cc_info_param(struct hci_dev *hdev, __u16 ocf, struct sk_buff *skb) +static void hci_cc_host_buffer_size(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_rp_read_loc_version *lv; - struct hci_rp_read_local_features *lf; - struct hci_rp_read_buffer_size *bs; - struct hci_rp_read_bd_addr *ba; + __u8 status = *((__u8 *) skb->data); - BT_DBG("%s ocf 0x%x", hdev->name, ocf); + BT_DBG("%s status 0x%x", hdev->name, status); - switch (ocf) { - case OCF_READ_LOCAL_VERSION: - lv = (struct hci_rp_read_loc_version *) skb->data; + hci_req_complete(hdev, status); +} - if (lv->status) { - BT_DBG("%s READ_LOCAL_VERSION failed %d", hdev->name, lf->status); - break; - } +static void hci_cc_read_local_version(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_read_local_version *rp = (void *) skb->data; - hdev->hci_ver = lv->hci_ver; - hdev->hci_rev = btohs(lv->hci_rev); - hdev->manufacturer = btohs(lv->manufacturer); + BT_DBG("%s status 0x%x", hdev->name, rp->status); - BT_DBG("%s: manufacturer %d hci_ver %d hci_rev %d", hdev->name, - hdev->manufacturer, hdev->hci_ver, hdev->hci_rev); + if (rp->status) + return; - break; + hdev->hci_ver = rp->hci_ver; + hdev->hci_rev = btohs(rp->hci_rev); + hdev->manufacturer = btohs(rp->manufacturer); - case OCF_READ_LOCAL_FEATURES: - lf = (struct hci_rp_read_local_features *) skb->data; + BT_DBG("%s manufacturer %d hci ver %d:%d", hdev->name, + hdev->manufacturer, + hdev->hci_ver, hdev->hci_rev); +} - if (lf->status) { - BT_DBG("%s READ_LOCAL_FEATURES failed %d", hdev->name, lf->status); - break; - } +static void hci_cc_read_local_commands(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_read_local_commands *rp = (void *) skb->data; - memcpy(hdev->features, lf->features, sizeof(hdev->features)); + BT_DBG("%s status 0x%x", hdev->name, rp->status); - /* Adjust default settings according to features - * supported by device. */ - if (hdev->features[0] & LMP_3SLOT) - hdev->pkt_type |= (HCI_DM3 | HCI_DH3); + if (rp->status) + return; - if (hdev->features[0] & LMP_5SLOT) - hdev->pkt_type |= (HCI_DM5 | HCI_DH5); + memcpy(hdev->commands, rp->commands, sizeof(hdev->commands)); +} - if (hdev->features[1] & LMP_HV2) { - hdev->pkt_type |= (HCI_HV2); - hdev->esco_type |= (ESCO_HV2); - } +static void hci_cc_read_local_features(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_read_local_features *rp = (void *) skb->data; - if (hdev->features[1] & LMP_HV3) { - hdev->pkt_type |= (HCI_HV3); - hdev->esco_type |= (ESCO_HV3); - } + BT_DBG("%s status 0x%x", hdev->name, rp->status); - if (hdev->features[3] & LMP_ESCO) - hdev->esco_type |= (ESCO_EV3); + if (rp->status) + return; - if (hdev->features[4] & LMP_EV4) - hdev->esco_type |= (ESCO_EV4); + memcpy(hdev->features, rp->features, 8); - if (hdev->features[4] & LMP_EV5) - hdev->esco_type |= (ESCO_EV5); + /* Adjust default settings according to features + * supported by device. 
*/ - BT_DBG("%s: features 0x%x 0x%x 0x%x", hdev->name, - lf->features[0], lf->features[1], lf->features[2]); + if (hdev->features[0] & LMP_3SLOT) + hdev->pkt_type |= (HCI_DM3 | HCI_DH3); - break; + if (hdev->features[0] & LMP_5SLOT) + hdev->pkt_type |= (HCI_DM5 | HCI_DH5); - case OCF_READ_BUFFER_SIZE: - bs = (struct hci_rp_read_buffer_size *) skb->data; + if (hdev->features[1] & LMP_HV2) { + hdev->pkt_type |= (HCI_HV2); + hdev->esco_type |= (ESCO_HV2); + } - if (bs->status) { - BT_DBG("%s READ_BUFFER_SIZE failed %d", hdev->name, bs->status); - hci_req_complete(hdev, bs->status); - break; - } + if (hdev->features[1] & LMP_HV3) { + hdev->pkt_type |= (HCI_HV3); + hdev->esco_type |= (ESCO_HV3); + } - hdev->acl_mtu = __le16_to_cpu(bs->acl_mtu); - hdev->sco_mtu = bs->sco_mtu; - hdev->acl_pkts = __le16_to_cpu(bs->acl_max_pkt); - hdev->sco_pkts = __le16_to_cpu(bs->sco_max_pkt); + if (hdev->features[3] & LMP_ESCO) + hdev->esco_type |= (ESCO_EV3); - if (test_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks)) { - hdev->sco_mtu = 64; - hdev->sco_pkts = 8; - } + if (hdev->features[4] & LMP_EV4) + hdev->esco_type |= (ESCO_EV4); - hdev->acl_cnt = hdev->acl_pkts; - hdev->sco_cnt = hdev->sco_pkts; + if (hdev->features[4] & LMP_EV5) + hdev->esco_type |= (ESCO_EV5); - BT_DBG("%s mtu: acl %d, sco %d max_pkt: acl %d, sco %d", hdev->name, - hdev->acl_mtu, hdev->sco_mtu, hdev->acl_pkts, hdev->sco_pkts); - break; + BT_DBG("%s features 0x%.2x%.2x%.2x%.2x%.2x%.2x%.2x%.2x", hdev->name, + hdev->features[0], hdev->features[1], + hdev->features[2], hdev->features[3], + hdev->features[4], hdev->features[5], + hdev->features[6], hdev->features[7]); +} - case OCF_READ_BD_ADDR: - ba = (struct hci_rp_read_bd_addr *) skb->data; +static void hci_cc_read_buffer_size(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_read_buffer_size *rp = (void *) skb->data; - if (!ba->status) { - bacpy(&hdev->bdaddr, &ba->bdaddr); - } else { - BT_DBG("%s: READ_BD_ADDR failed %d", hdev->name, ba->status); - } + BT_DBG("%s status 0x%x", hdev->name, rp->status); - hci_req_complete(hdev, ba->status); - break; + if (rp->status) + return; - default: - BT_DBG("%s Command complete: ogf INFO_PARAM ocf %x", hdev->name, ocf); - break; + hdev->acl_mtu = __le16_to_cpu(rp->acl_mtu); + hdev->sco_mtu = rp->sco_mtu; + hdev->acl_pkts = __le16_to_cpu(rp->acl_max_pkt); + hdev->sco_pkts = __le16_to_cpu(rp->sco_max_pkt); + + if (test_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks)) { + hdev->sco_mtu = 64; + hdev->sco_pkts = 8; } + + hdev->acl_cnt = hdev->acl_pkts; + hdev->sco_cnt = hdev->sco_pkts; + + BT_DBG("%s acl mtu %d:%d sco mtu %d:%d", hdev->name, + hdev->acl_mtu, hdev->acl_pkts, + hdev->sco_mtu, hdev->sco_pkts); +} + +static void hci_cc_read_bd_addr(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_rp_read_bd_addr *rp = (void *) skb->data; + + BT_DBG("%s status 0x%x", hdev->name, rp->status); + + if (!rp->status) + bacpy(&hdev->bdaddr, &rp->bdaddr); + + hci_req_complete(hdev, rp->status); +} + +static inline void hci_cs_inquiry(struct hci_dev *hdev, __u8 status) +{ + BT_DBG("%s status 0x%x", hdev->name, status); + + if (status) { + hci_req_complete(hdev, status); + + hci_conn_check_pending(hdev); + } else + set_bit(HCI_INQUIRY, &hdev->flags); } -/* Command Status OGF LINK_CTL */ static inline void hci_cs_create_conn(struct hci_dev *hdev, __u8 status) { + struct hci_cp_create_conn *cp; struct hci_conn *conn; - struct hci_cp_create_conn *cp = hci_sent_cmd_data(hdev, OGF_LINK_CTL, OCF_CREATE_CONN); + BT_DBG("%s status 0x%x", hdev->name, 
status); + + cp = hci_sent_cmd_data(hdev, HCI_OP_CREATE_CONN); if (!cp) return; @@ -431,8 +480,7 @@ static inline void hci_cs_create_conn(struct hci_dev *hdev, __u8 status) conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &cp->bdaddr); - BT_DBG("%s status 0x%x bdaddr %s conn %p", hdev->name, - status, batostr(&cp->bdaddr), conn); + BT_DBG("%s bdaddr %s conn %p", hdev->name, batostr(&cp->bdaddr), conn); if (status) { if (conn && conn->state == BT_CONNECT) { @@ -457,234 +505,138 @@ static inline void hci_cs_create_conn(struct hci_dev *hdev, __u8 status) hci_dev_unlock(hdev); } -static void hci_cs_link_ctl(struct hci_dev *hdev, __u16 ocf, __u8 status) +static void hci_cs_add_sco(struct hci_dev *hdev, __u8 status) { - BT_DBG("%s ocf 0x%x", hdev->name, ocf); + struct hci_cp_add_sco *cp; + struct hci_conn *acl, *sco; + __u16 handle; - switch (ocf) { - case OCF_CREATE_CONN: - hci_cs_create_conn(hdev, status); - break; - - case OCF_ADD_SCO: - if (status) { - struct hci_conn *acl, *sco; - struct hci_cp_add_sco *cp = hci_sent_cmd_data(hdev, OGF_LINK_CTL, OCF_ADD_SCO); - __u16 handle; - - if (!cp) - break; + BT_DBG("%s status 0x%x", hdev->name, status); - handle = __le16_to_cpu(cp->handle); - - BT_DBG("%s Add SCO error: handle %d status 0x%x", hdev->name, handle, status); + if (!status) + return; - hci_dev_lock(hdev); + cp = hci_sent_cmd_data(hdev, HCI_OP_ADD_SCO); + if (!cp) + return; - acl = hci_conn_hash_lookup_handle(hdev, handle); - if (acl && (sco = acl->link)) { - sco->state = BT_CLOSED; + handle = __le16_to_cpu(cp->handle); - hci_proto_connect_cfm(sco, status); - hci_conn_del(sco); - } + BT_DBG("%s handle %d", hdev->name, handle); - hci_dev_unlock(hdev); - } - break; + hci_dev_lock(hdev); - case OCF_INQUIRY: - if (status) { - BT_DBG("%s Inquiry error: status 0x%x", hdev->name, status); - hci_req_complete(hdev, status); - } else { - set_bit(HCI_INQUIRY, &hdev->flags); - } - break; + acl = hci_conn_hash_lookup_handle(hdev, handle); + if (acl && (sco = acl->link)) { + sco->state = BT_CLOSED; - default: - BT_DBG("%s Command status: ogf LINK_CTL ocf %x status %d", - hdev->name, ocf, status); - break; + hci_proto_connect_cfm(sco, status); + hci_conn_del(sco); } + + hci_dev_unlock(hdev); } -/* Command Status OGF LINK_POLICY */ -static void hci_cs_link_policy(struct hci_dev *hdev, __u16 ocf, __u8 status) +static void hci_cs_remote_name_req(struct hci_dev *hdev, __u8 status) { - BT_DBG("%s ocf 0x%x", hdev->name, ocf); - - switch (ocf) { - case OCF_SNIFF_MODE: - if (status) { - struct hci_conn *conn; - struct hci_cp_sniff_mode *cp = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_SNIFF_MODE); + BT_DBG("%s status 0x%x", hdev->name, status); +} - if (!cp) - break; +static void hci_cs_setup_sync_conn(struct hci_dev *hdev, __u8 status) +{ + struct hci_cp_setup_sync_conn *cp; + struct hci_conn *acl, *sco; + __u16 handle; - hci_dev_lock(hdev); + BT_DBG("%s status 0x%x", hdev->name, status); - conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(cp->handle)); - if (conn) { - clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend); - } - - hci_dev_unlock(hdev); - } - break; + if (!status) + return; - case OCF_EXIT_SNIFF_MODE: - if (status) { - struct hci_conn *conn; - struct hci_cp_exit_sniff_mode *cp = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_EXIT_SNIFF_MODE); + cp = hci_sent_cmd_data(hdev, HCI_OP_SETUP_SYNC_CONN); + if (!cp) + return; - if (!cp) - break; + handle = __le16_to_cpu(cp->handle); - hci_dev_lock(hdev); + BT_DBG("%s handle %d", hdev->name, handle); - conn = hci_conn_hash_lookup_handle(hdev, 
__le16_to_cpu(cp->handle)); - if (conn) { - clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend); - } + hci_dev_lock(hdev); - hci_dev_unlock(hdev); - } - break; + acl = hci_conn_hash_lookup_handle(hdev, handle); + if (acl && (sco = acl->link)) { + sco->state = BT_CLOSED; - default: - BT_DBG("%s Command status: ogf LINK_POLICY ocf %x", hdev->name, ocf); - break; + hci_proto_connect_cfm(sco, status); + hci_conn_del(sco); } -} -/* Command Status OGF HOST_CTL */ -static void hci_cs_host_ctl(struct hci_dev *hdev, __u16 ocf, __u8 status) -{ - BT_DBG("%s ocf 0x%x", hdev->name, ocf); - - switch (ocf) { - default: - BT_DBG("%s Command status: ogf HOST_CTL ocf %x", hdev->name, ocf); - break; - } + hci_dev_unlock(hdev); } -/* Command Status OGF INFO_PARAM */ -static void hci_cs_info_param(struct hci_dev *hdev, __u16 ocf, __u8 status) +static void hci_cs_sniff_mode(struct hci_dev *hdev, __u8 status) { - BT_DBG("%s: hci_cs_info_param: ocf 0x%x", hdev->name, ocf); - - switch (ocf) { - default: - BT_DBG("%s Command status: ogf INFO_PARAM ocf %x", hdev->name, ocf); - break; - } -} + struct hci_cp_sniff_mode *cp; + struct hci_conn *conn; -/* Inquiry Complete */ -static inline void hci_inquiry_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) -{ - __u8 status = *((__u8 *) skb->data); - struct hci_conn *pend; + BT_DBG("%s status 0x%x", hdev->name, status); - BT_DBG("%s status %d", hdev->name, status); + if (!status) + return; - clear_bit(HCI_INQUIRY, &hdev->flags); - hci_req_complete(hdev, status); + cp = hci_sent_cmd_data(hdev, HCI_OP_SNIFF_MODE); + if (!cp) + return; hci_dev_lock(hdev); - pend = hci_conn_hash_lookup_state(hdev, ACL_LINK, BT_CONNECT2); - if (pend) - hci_acl_connect(pend); + conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(cp->handle)); + if (conn) + clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend); hci_dev_unlock(hdev); } -/* Inquiry Result */ -static inline void hci_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cs_exit_sniff_mode(struct hci_dev *hdev, __u8 status) { - struct inquiry_data data; - struct inquiry_info *info = (struct inquiry_info *) (skb->data + 1); - int num_rsp = *((__u8 *) skb->data); + struct hci_cp_exit_sniff_mode *cp; + struct hci_conn *conn; - BT_DBG("%s num_rsp %d", hdev->name, num_rsp); + BT_DBG("%s status 0x%x", hdev->name, status); - if (!num_rsp) + if (!status) + return; + + cp = hci_sent_cmd_data(hdev, HCI_OP_EXIT_SNIFF_MODE); + if (!cp) return; hci_dev_lock(hdev); - for (; num_rsp; num_rsp--) { - bacpy(&data.bdaddr, &info->bdaddr); - data.pscan_rep_mode = info->pscan_rep_mode; - data.pscan_period_mode = info->pscan_period_mode; - data.pscan_mode = info->pscan_mode; - memcpy(data.dev_class, info->dev_class, 3); - data.clock_offset = info->clock_offset; - data.rssi = 0x00; - info++; - hci_inquiry_cache_update(hdev, &data); - } + conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(cp->handle)); + if (conn) + clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend); hci_dev_unlock(hdev); } -/* Inquiry Result With RSSI */ -static inline void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_inquiry_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct inquiry_data data; - int num_rsp = *((__u8 *) skb->data); - - BT_DBG("%s num_rsp %d", hdev->name, num_rsp); - - if (!num_rsp) - return; - - hci_dev_lock(hdev); + __u8 status = *((__u8 *) skb->data); - if ((skb->len - 1) / num_rsp != sizeof(struct inquiry_info_with_rssi)) { - struct inquiry_info_with_rssi_and_pscan_mode 
*info = - (struct inquiry_info_with_rssi_and_pscan_mode *) (skb->data + 1); + BT_DBG("%s status %d", hdev->name, status); - for (; num_rsp; num_rsp--) { - bacpy(&data.bdaddr, &info->bdaddr); - data.pscan_rep_mode = info->pscan_rep_mode; - data.pscan_period_mode = info->pscan_period_mode; - data.pscan_mode = info->pscan_mode; - memcpy(data.dev_class, info->dev_class, 3); - data.clock_offset = info->clock_offset; - data.rssi = info->rssi; - info++; - hci_inquiry_cache_update(hdev, &data); - } - } else { - struct inquiry_info_with_rssi *info = - (struct inquiry_info_with_rssi *) (skb->data + 1); + clear_bit(HCI_INQUIRY, &hdev->flags); - for (; num_rsp; num_rsp--) { - bacpy(&data.bdaddr, &info->bdaddr); - data.pscan_rep_mode = info->pscan_rep_mode; - data.pscan_period_mode = info->pscan_period_mode; - data.pscan_mode = 0x00; - memcpy(data.dev_class, info->dev_class, 3); - data.clock_offset = info->clock_offset; - data.rssi = info->rssi; - info++; - hci_inquiry_cache_update(hdev, &data); - } - } + hci_req_complete(hdev, status); - hci_dev_unlock(hdev); + hci_conn_check_pending(hdev); } -/* Extended Inquiry Result */ -static inline void hci_extended_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct inquiry_data data; - struct extended_inquiry_info *info = (struct extended_inquiry_info *) (skb->data + 1); + struct inquiry_info *info = (void *) (skb->data + 1); int num_rsp = *((__u8 *) skb->data); BT_DBG("%s num_rsp %d", hdev->name, num_rsp); @@ -696,12 +648,12 @@ static inline void hci_extended_inquiry_result_evt(struct hci_dev *hdev, struct for (; num_rsp; num_rsp--) { bacpy(&data.bdaddr, &info->bdaddr); - data.pscan_rep_mode = info->pscan_rep_mode; - data.pscan_period_mode = info->pscan_period_mode; - data.pscan_mode = 0x00; + data.pscan_rep_mode = info->pscan_rep_mode; + data.pscan_period_mode = info->pscan_period_mode; + data.pscan_mode = info->pscan_mode; memcpy(data.dev_class, info->dev_class, 3); - data.clock_offset = info->clock_offset; - data.rssi = info->rssi; + data.clock_offset = info->clock_offset; + data.rssi = 0x00; info++; hci_inquiry_cache_update(hdev, &data); } @@ -709,70 +661,18 @@ static inline void hci_extended_inquiry_result_evt(struct hci_dev *hdev, struct hci_dev_unlock(hdev); } -/* Connect Request */ -static inline void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) -{ - struct hci_ev_conn_request *ev = (struct hci_ev_conn_request *) skb->data; - int mask = hdev->link_mode; - - BT_DBG("%s Connection request: %s type 0x%x", hdev->name, - batostr(&ev->bdaddr), ev->link_type); - - mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ev->link_type); - - if (mask & HCI_LM_ACCEPT) { - /* Connection accepted */ - struct hci_conn *conn; - struct hci_cp_accept_conn_req cp; - - hci_dev_lock(hdev); - conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr); - if (!conn) { - if (!(conn = hci_conn_add(hdev, ev->link_type, &ev->bdaddr))) { - BT_ERR("No memmory for new connection"); - hci_dev_unlock(hdev); - return; - } - } - memcpy(conn->dev_class, ev->dev_class, 3); - conn->state = BT_CONNECT; - hci_dev_unlock(hdev); - - bacpy(&cp.bdaddr, &ev->bdaddr); - - if (lmp_rswitch_capable(hdev) && (mask & HCI_LM_MASTER)) - cp.role = 0x00; /* Become master */ - else - cp.role = 0x01; /* Remain slave */ - - hci_send_cmd(hdev, OGF_LINK_CTL, - OCF_ACCEPT_CONN_REQ, sizeof(cp), &cp); - } else { - /* Connection rejected */ - struct hci_cp_reject_conn_req cp; - - 
bacpy(&cp.bdaddr, &ev->bdaddr); - cp.reason = 0x0f; - hci_send_cmd(hdev, OGF_LINK_CTL, - OCF_REJECT_CONN_REQ, sizeof(cp), &cp); - } -} - -/* Connect Complete */ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_conn_complete *ev = (struct hci_ev_conn_complete *) skb->data; - struct hci_conn *conn, *pend; + struct hci_ev_conn_complete *ev = (void *) skb->data; + struct hci_conn *conn; BT_DBG("%s", hdev->name); hci_dev_lock(hdev); conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr); - if (!conn) { - hci_dev_unlock(hdev); - return; - } + if (!conn) + goto unlock; if (!ev->status) { conn->handle = __le16_to_cpu(ev->handle); @@ -788,8 +688,7 @@ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *s if (conn->type == ACL_LINK) { struct hci_cp_read_remote_features cp; cp.handle = ev->handle; - hci_send_cmd(hdev, OGF_LINK_CTL, - OCF_READ_REMOTE_FEATURES, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_READ_REMOTE_FEATURES, sizeof(cp), &cp); } /* Set link policy */ @@ -797,8 +696,7 @@ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *s struct hci_cp_write_link_policy cp; cp.handle = ev->handle; cp.policy = cpu_to_le16(hdev->link_policy); - hci_send_cmd(hdev, OGF_LINK_POLICY, - OCF_WRITE_LINK_POLICY, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_WRITE_LINK_POLICY, sizeof(cp), &cp); } /* Set packet type for incoming connection */ @@ -809,8 +707,7 @@ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *s cpu_to_le16(hdev->pkt_type & ACL_PTYPE_MASK): cpu_to_le16(hdev->pkt_type & SCO_PTYPE_MASK); - hci_send_cmd(hdev, OGF_LINK_CTL, - OCF_CHANGE_CONN_PTYPE, sizeof(cp), &cp); + hci_send_cmd(hdev, HCI_OP_CHANGE_CONN_PTYPE, sizeof(cp), &cp); } else { /* Update disconnect timer */ hci_conn_hold(conn); @@ -822,9 +719,12 @@ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *s if (conn->type == ACL_LINK) { struct hci_conn *sco = conn->link; if (sco) { - if (!ev->status) - hci_add_sco(sco, conn->handle); - else { + if (!ev->status) { + if (lmp_esco_capable(hdev)) + hci_setup_sync(sco, conn->handle); + else + hci_add_sco(sco, conn->handle); + } else { hci_proto_connect_cfm(sco, ev->status); hci_conn_del(sco); } @@ -835,136 +735,104 @@ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *s if (ev->status) hci_conn_del(conn); - pend = hci_conn_hash_lookup_state(hdev, ACL_LINK, BT_CONNECT2); - if (pend) - hci_acl_connect(pend); - +unlock: hci_dev_unlock(hdev); -} - -/* Disconnect Complete */ -static inline void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) -{ - struct hci_ev_disconn_complete *ev = (struct hci_ev_disconn_complete *) skb->data; - struct hci_conn *conn; - - BT_DBG("%s status %d", hdev->name, ev->status); - - if (ev->status) - return; - hci_dev_lock(hdev); - - conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle)); - if (conn) { - conn->state = BT_CLOSED; - hci_proto_disconn_ind(conn, ev->reason); - hci_conn_del(conn); - } - - hci_dev_unlock(hdev); + hci_conn_check_pending(hdev); } -/* Number of completed packets */ -static inline void hci_num_comp_pkts_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_num_comp_pkts *ev = (struct hci_ev_num_comp_pkts *) skb->data; - __le16 *ptr; - int i; - - skb_pull(skb, sizeof(*ev)); - - BT_DBG("%s num_hndl %d", hdev->name, ev->num_hndl); + 
struct hci_ev_conn_request *ev = (void *) skb->data; + int mask = hdev->link_mode; - if (skb->len < ev->num_hndl * 4) { - BT_DBG("%s bad parameters", hdev->name); - return; - } + BT_DBG("%s bdaddr %s type 0x%x", hdev->name, + batostr(&ev->bdaddr), ev->link_type); - tasklet_disable(&hdev->tx_task); + mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ev->link_type); - for (i = 0, ptr = (__le16 *) skb->data; i < ev->num_hndl; i++) { + if (mask & HCI_LM_ACCEPT) { + /* Connection accepted */ struct hci_conn *conn; - __u16 handle, count; - - handle = __le16_to_cpu(get_unaligned(ptr++)); - count = __le16_to_cpu(get_unaligned(ptr++)); - conn = hci_conn_hash_lookup_handle(hdev, handle); - if (conn) { - conn->sent -= count; + hci_dev_lock(hdev); - if (conn->type == ACL_LINK) { - if ((hdev->acl_cnt += count) > hdev->acl_pkts) - hdev->acl_cnt = hdev->acl_pkts; - } else { - if ((hdev->sco_cnt += count) > hdev->sco_pkts) - hdev->sco_cnt = hdev->sco_pkts; + conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr); + if (!conn) { + if (!(conn = hci_conn_add(hdev, ev->link_type, &ev->bdaddr))) { + BT_ERR("No memmory for new connection"); + hci_dev_unlock(hdev); + return; } } - } - hci_sched_tx(hdev); - tasklet_enable(&hdev->tx_task); -} + memcpy(conn->dev_class, ev->dev_class, 3); + conn->state = BT_CONNECT; -/* Role Change */ -static inline void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb) -{ - struct hci_ev_role_change *ev = (struct hci_ev_role_change *) skb->data; - struct hci_conn *conn; + hci_dev_unlock(hdev); - BT_DBG("%s status %d", hdev->name, ev->status); + if (ev->link_type == ACL_LINK || !lmp_esco_capable(hdev)) { + struct hci_cp_accept_conn_req cp; - hci_dev_lock(hdev); + bacpy(&cp.bdaddr, &ev->bdaddr); - conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr); - if (conn) { - if (!ev->status) { - if (ev->role) - conn->link_mode &= ~HCI_LM_MASTER; + if (lmp_rswitch_capable(hdev) && (mask & HCI_LM_MASTER)) + cp.role = 0x00; /* Become master */ else - conn->link_mode |= HCI_LM_MASTER; - } + cp.role = 0x01; /* Remain slave */ - clear_bit(HCI_CONN_RSWITCH_PEND, &conn->pend); + hci_send_cmd(hdev, HCI_OP_ACCEPT_CONN_REQ, + sizeof(cp), &cp); + } else { + struct hci_cp_accept_sync_conn_req cp; - hci_role_switch_cfm(conn, ev->status, ev->role); - } + bacpy(&cp.bdaddr, &ev->bdaddr); + cp.pkt_type = cpu_to_le16(hdev->esco_type); - hci_dev_unlock(hdev); + cp.tx_bandwidth = cpu_to_le32(0x00001f40); + cp.rx_bandwidth = cpu_to_le32(0x00001f40); + cp.max_latency = cpu_to_le16(0xffff); + cp.content_format = cpu_to_le16(hdev->voice_setting); + cp.retrans_effort = 0xff; + + hci_send_cmd(hdev, HCI_OP_ACCEPT_SYNC_CONN_REQ, + sizeof(cp), &cp); + } + } else { + /* Connection rejected */ + struct hci_cp_reject_conn_req cp; + + bacpy(&cp.bdaddr, &ev->bdaddr); + cp.reason = 0x0f; + hci_send_cmd(hdev, HCI_OP_REJECT_CONN_REQ, sizeof(cp), &cp); + } } -/* Mode Change */ -static inline void hci_mode_change_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_mode_change *ev = (struct hci_ev_mode_change *) skb->data; + struct hci_ev_disconn_complete *ev = (void *) skb->data; struct hci_conn *conn; BT_DBG("%s status %d", hdev->name, ev->status); + if (ev->status) + return; + hci_dev_lock(hdev); conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle)); if (conn) { - conn->mode = ev->mode; - conn->interval = __le16_to_cpu(ev->interval); - - if (!test_and_clear_bit(HCI_CONN_MODE_CHANGE_PEND, 
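/*
 * Editorial aside, not part of the patch: in the rewritten connection
 * request handler above, an eSCO-capable controller now answers a SCO/eSCO
 * request with Accept Synchronous Connection Request.  The fixed numbers
 * are the usual voice defaults: 0x00001f40 is 8000 bytes per second
 * (64 kbit/s each way), 0xffff for max latency and 0xff for retransmission
 * effort both mean "don't care", and the content format comes from the
 * stored voice setting.  The standalone sketch mirrors those parameters in
 * a plain struct; the struct, its field names and the 0x0060 example voice
 * setting are illustrative, not the kernel definitions.
 */
#include <stdint.h>
#include <stdio.h>

struct sync_conn_accept {
        uint32_t tx_bandwidth;          /* bytes per second */
        uint32_t rx_bandwidth;          /* bytes per second */
        uint16_t max_latency;           /* ms, 0xffff = don't care */
        uint16_t content_format;        /* taken from the voice setting */
        uint8_t  retrans_effort;        /* 0xff = don't care */
};

int main(void)
{
        struct sync_conn_accept acc = {
                .tx_bandwidth   = 0x00001f40,   /* 8000 B/s = 64 kbit/s voice */
                .rx_bandwidth   = 0x00001f40,
                .max_latency    = 0xffff,
                .content_format = 0x0060,       /* example CVSD voice setting */
                .retrans_effort = 0xff,
        };

        printf("tx %u B/s (%u kbit/s), latency 0x%04x, retrans 0x%02x\n",
               acc.tx_bandwidth, acc.tx_bandwidth * 8 / 1000,
               acc.max_latency, acc.retrans_effort);
        return 0;
}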
&conn->pend)) { - if (conn->mode == HCI_CM_ACTIVE) - conn->power_save = 1; - else - conn->power_save = 0; - } + conn->state = BT_CLOSED; + hci_proto_disconn_ind(conn, ev->reason); + hci_conn_del(conn); } hci_dev_unlock(hdev); } -/* Authentication Complete */ static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_auth_complete *ev = (struct hci_ev_auth_complete *) skb->data; + struct hci_ev_auth_complete *ev = (void *) skb->data; struct hci_conn *conn; BT_DBG("%s status %d", hdev->name, ev->status); @@ -985,8 +853,8 @@ static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *s struct hci_cp_set_conn_encrypt cp; cp.handle = cpu_to_le16(conn->handle); cp.encrypt = 1; - hci_send_cmd(conn->hdev, OGF_LINK_CTL, - OCF_SET_CONN_ENCRYPT, sizeof(cp), &cp); + hci_send_cmd(conn->hdev, + HCI_OP_SET_CONN_ENCRYPT, sizeof(cp), &cp); } else { clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->pend); hci_encrypt_cfm(conn, ev->status, 0x00); @@ -997,10 +865,16 @@ static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *s hci_dev_unlock(hdev); } -/* Encryption Change */ +static inline void hci_remote_name_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + BT_DBG("%s", hdev->name); + + hci_conn_check_pending(hdev); +} + static inline void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_encrypt_change *ev = (struct hci_ev_encrypt_change *) skb->data; + struct hci_ev_encrypt_change *ev = (void *) skb->data; struct hci_conn *conn; BT_DBG("%s status %d", hdev->name, ev->status); @@ -1024,10 +898,9 @@ static inline void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff * hci_dev_unlock(hdev); } -/* Change Connection Link Key Complete */ -static inline void hci_change_conn_link_key_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_change_link_key_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_change_conn_link_key_complete *ev = (struct hci_ev_change_conn_link_key_complete *) skb->data; + struct hci_ev_change_link_key_complete *ev = (void *) skb->data; struct hci_conn *conn; BT_DBG("%s status %d", hdev->name, ev->status); @@ -1047,25 +920,263 @@ static inline void hci_change_conn_link_key_complete_evt(struct hci_dev *hdev, s hci_dev_unlock(hdev); } -/* Pin Code Request*/ -static inline void hci_pin_code_request_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_remote_features_evt(struct hci_dev *hdev, struct sk_buff *skb) { + struct hci_ev_remote_features *ev = (void *) skb->data; + struct hci_conn *conn; + + BT_DBG("%s status %d", hdev->name, ev->status); + + if (ev->status) + return; + + hci_dev_lock(hdev); + + conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle)); + if (conn) + memcpy(conn->features, ev->features, 8); + + hci_dev_unlock(hdev); } -/* Link Key Request */ -static inline void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_remote_version_evt(struct hci_dev *hdev, struct sk_buff *skb) { + BT_DBG("%s", hdev->name); } -/* Link Key Notification */ -static inline void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_qos_setup_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { + BT_DBG("%s", hdev->name); } -/* Remote Features */ -static inline void hci_remote_features_evt(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { - 
struct hci_ev_remote_features *ev = (struct hci_ev_remote_features *) skb->data; + struct hci_ev_cmd_complete *ev = (void *) skb->data; + __u16 opcode; + + skb_pull(skb, sizeof(*ev)); + + opcode = __le16_to_cpu(ev->opcode); + + switch (opcode) { + case HCI_OP_INQUIRY_CANCEL: + hci_cc_inquiry_cancel(hdev, skb); + break; + + case HCI_OP_EXIT_PERIODIC_INQ: + hci_cc_exit_periodic_inq(hdev, skb); + break; + + case HCI_OP_REMOTE_NAME_REQ_CANCEL: + hci_cc_remote_name_req_cancel(hdev, skb); + break; + + case HCI_OP_ROLE_DISCOVERY: + hci_cc_role_discovery(hdev, skb); + break; + + case HCI_OP_WRITE_LINK_POLICY: + hci_cc_write_link_policy(hdev, skb); + break; + + case HCI_OP_RESET: + hci_cc_reset(hdev, skb); + break; + + case HCI_OP_WRITE_LOCAL_NAME: + hci_cc_write_local_name(hdev, skb); + break; + + case HCI_OP_READ_LOCAL_NAME: + hci_cc_read_local_name(hdev, skb); + break; + + case HCI_OP_WRITE_AUTH_ENABLE: + hci_cc_write_auth_enable(hdev, skb); + break; + + case HCI_OP_WRITE_ENCRYPT_MODE: + hci_cc_write_encrypt_mode(hdev, skb); + break; + + case HCI_OP_WRITE_SCAN_ENABLE: + hci_cc_write_scan_enable(hdev, skb); + break; + + case HCI_OP_READ_CLASS_OF_DEV: + hci_cc_read_class_of_dev(hdev, skb); + break; + + case HCI_OP_WRITE_CLASS_OF_DEV: + hci_cc_write_class_of_dev(hdev, skb); + break; + + case HCI_OP_READ_VOICE_SETTING: + hci_cc_read_voice_setting(hdev, skb); + break; + + case HCI_OP_WRITE_VOICE_SETTING: + hci_cc_write_voice_setting(hdev, skb); + break; + + case HCI_OP_HOST_BUFFER_SIZE: + hci_cc_host_buffer_size(hdev, skb); + break; + + case HCI_OP_READ_LOCAL_VERSION: + hci_cc_read_local_version(hdev, skb); + break; + + case HCI_OP_READ_LOCAL_COMMANDS: + hci_cc_read_local_commands(hdev, skb); + break; + + case HCI_OP_READ_LOCAL_FEATURES: + hci_cc_read_local_features(hdev, skb); + break; + + case HCI_OP_READ_BUFFER_SIZE: + hci_cc_read_buffer_size(hdev, skb); + break; + + case HCI_OP_READ_BD_ADDR: + hci_cc_read_bd_addr(hdev, skb); + break; + + default: + BT_DBG("%s opcode 0x%x", hdev->name, opcode); + break; + } + + if (ev->ncmd) { + atomic_set(&hdev->cmd_cnt, 1); + if (!skb_queue_empty(&hdev->cmd_q)) + hci_sched_cmd(hdev); + } +} + +static inline void hci_cmd_status_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_ev_cmd_status *ev = (void *) skb->data; + __u16 opcode; + + skb_pull(skb, sizeof(*ev)); + + opcode = __le16_to_cpu(ev->opcode); + + switch (opcode) { + case HCI_OP_INQUIRY: + hci_cs_inquiry(hdev, ev->status); + break; + + case HCI_OP_CREATE_CONN: + hci_cs_create_conn(hdev, ev->status); + break; + + case HCI_OP_ADD_SCO: + hci_cs_add_sco(hdev, ev->status); + break; + + case HCI_OP_REMOTE_NAME_REQ: + hci_cs_remote_name_req(hdev, ev->status); + break; + + case HCI_OP_SETUP_SYNC_CONN: + hci_cs_setup_sync_conn(hdev, ev->status); + break; + + case HCI_OP_SNIFF_MODE: + hci_cs_sniff_mode(hdev, ev->status); + break; + + case HCI_OP_EXIT_SNIFF_MODE: + hci_cs_exit_sniff_mode(hdev, ev->status); + break; + + default: + BT_DBG("%s opcode 0x%x", hdev->name, opcode); + break; + } + + if (ev->ncmd) { + atomic_set(&hdev->cmd_cnt, 1); + if (!skb_queue_empty(&hdev->cmd_q)) + hci_sched_cmd(hdev); + } +} + +static inline void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_ev_role_change *ev = (void *) skb->data; + struct hci_conn *conn; + + BT_DBG("%s status %d", hdev->name, ev->status); + + hci_dev_lock(hdev); + + conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr); + if (conn) { + if (!ev->status) { + if (ev->role) + conn->link_mode &= ~HCI_LM_MASTER; + else 
+ conn->link_mode |= HCI_LM_MASTER; + } + + clear_bit(HCI_CONN_RSWITCH_PEND, &conn->pend); + + hci_role_switch_cfm(conn, ev->status, ev->role); + } + + hci_dev_unlock(hdev); +} + +static inline void hci_num_comp_pkts_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_ev_num_comp_pkts *ev = (void *) skb->data; + __le16 *ptr; + int i; + + skb_pull(skb, sizeof(*ev)); + + BT_DBG("%s num_hndl %d", hdev->name, ev->num_hndl); + + if (skb->len < ev->num_hndl * 4) { + BT_DBG("%s bad parameters", hdev->name); + return; + } + + tasklet_disable(&hdev->tx_task); + + for (i = 0, ptr = (__le16 *) skb->data; i < ev->num_hndl; i++) { + struct hci_conn *conn; + __u16 handle, count; + + handle = __le16_to_cpu(get_unaligned(ptr++)); + count = __le16_to_cpu(get_unaligned(ptr++)); + + conn = hci_conn_hash_lookup_handle(hdev, handle); + if (conn) { + conn->sent -= count; + + if (conn->type == ACL_LINK) { + if ((hdev->acl_cnt += count) > hdev->acl_pkts) + hdev->acl_cnt = hdev->acl_pkts; + } else { + if ((hdev->sco_cnt += count) > hdev->sco_pkts) + hdev->sco_cnt = hdev->sco_pkts; + } + } + } + + hci_sched_tx(hdev); + + tasklet_enable(&hdev->tx_task); +} + +static inline void hci_mode_change_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_ev_mode_change *ev = (void *) skb->data; struct hci_conn *conn; BT_DBG("%s status %d", hdev->name, ev->status); @@ -1073,17 +1184,39 @@ static inline void hci_remote_features_evt(struct hci_dev *hdev, struct sk_buff hci_dev_lock(hdev); conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle)); - if (conn && !ev->status) { - memcpy(conn->features, ev->features, sizeof(conn->features)); + if (conn) { + conn->mode = ev->mode; + conn->interval = __le16_to_cpu(ev->interval); + + if (!test_and_clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) { + if (conn->mode == HCI_CM_ACTIVE) + conn->power_save = 1; + else + conn->power_save = 0; + } } hci_dev_unlock(hdev); } -/* Clock Offset */ +static inline void hci_pin_code_request_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + BT_DBG("%s", hdev->name); +} + +static inline void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + BT_DBG("%s", hdev->name); +} + +static inline void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + BT_DBG("%s", hdev->name); +} + static inline void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_clock_offset *ev = (struct hci_ev_clock_offset *) skb->data; + struct hci_ev_clock_offset *ev = (void *) skb->data; struct hci_conn *conn; BT_DBG("%s status %d", hdev->name, ev->status); @@ -1103,10 +1236,9 @@ static inline void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *sk hci_dev_unlock(hdev); } -/* Page Scan Repetition Mode */ static inline void hci_pscan_rep_mode_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_pscan_rep_mode *ev = (struct hci_ev_pscan_rep_mode *) skb->data; + struct hci_ev_pscan_rep_mode *ev = (void *) skb->data; struct inquiry_entry *ie; BT_DBG("%s", hdev->name); @@ -1121,10 +1253,91 @@ static inline void hci_pscan_rep_mode_evt(struct hci_dev *hdev, struct sk_buff * hci_dev_unlock(hdev); } -/* Sniff Subrate */ +static inline void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct inquiry_data data; + int num_rsp = *((__u8 *) skb->data); + + BT_DBG("%s num_rsp %d", hdev->name, num_rsp); + + if (!num_rsp) + return; + + hci_dev_lock(hdev); + + if ((skb->len - 1) / num_rsp != sizeof(struct inquiry_info_with_rssi)) { + 
struct inquiry_info_with_rssi_and_pscan_mode *info = (void *) (skb->data + 1); + + for (; num_rsp; num_rsp--) { + bacpy(&data.bdaddr, &info->bdaddr); + data.pscan_rep_mode = info->pscan_rep_mode; + data.pscan_period_mode = info->pscan_period_mode; + data.pscan_mode = info->pscan_mode; + memcpy(data.dev_class, info->dev_class, 3); + data.clock_offset = info->clock_offset; + data.rssi = info->rssi; + info++; + hci_inquiry_cache_update(hdev, &data); + } + } else { + struct inquiry_info_with_rssi *info = (void *) (skb->data + 1); + + for (; num_rsp; num_rsp--) { + bacpy(&data.bdaddr, &info->bdaddr); + data.pscan_rep_mode = info->pscan_rep_mode; + data.pscan_period_mode = info->pscan_period_mode; + data.pscan_mode = 0x00; + memcpy(data.dev_class, info->dev_class, 3); + data.clock_offset = info->clock_offset; + data.rssi = info->rssi; + info++; + hci_inquiry_cache_update(hdev, &data); + } + } + + hci_dev_unlock(hdev); +} + +static inline void hci_remote_ext_features_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + BT_DBG("%s", hdev->name); +} + +static inline void hci_sync_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_ev_sync_conn_complete *ev = (void *) skb->data; + struct hci_conn *conn; + + BT_DBG("%s status %d", hdev->name, ev->status); + + hci_dev_lock(hdev); + + conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr); + if (!conn) + goto unlock; + + if (!ev->status) { + conn->handle = __le16_to_cpu(ev->handle); + conn->state = BT_CONNECTED; + } else + conn->state = BT_CLOSED; + + hci_proto_connect_cfm(conn, ev->status); + if (ev->status) + hci_conn_del(conn); + +unlock: + hci_dev_unlock(hdev); +} + +static inline void hci_sync_conn_changed_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + BT_DBG("%s", hdev->name); +} + static inline void hci_sniff_subrate_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_ev_sniff_subrate *ev = (struct hci_ev_sniff_subrate *) skb->data; + struct hci_ev_sniff_subrate *ev = (void *) skb->data; struct hci_conn *conn; BT_DBG("%s status %d", hdev->name, ev->status); @@ -1138,22 +1351,42 @@ static inline void hci_sniff_subrate_evt(struct hci_dev *hdev, struct sk_buff *s hci_dev_unlock(hdev); } -void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb) +static inline void hci_extended_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_event_hdr *hdr = (struct hci_event_hdr *) skb->data; - struct hci_ev_cmd_complete *ec; - struct hci_ev_cmd_status *cs; - u16 opcode, ocf, ogf; + struct inquiry_data data; + struct extended_inquiry_info *info = (void *) (skb->data + 1); + int num_rsp = *((__u8 *) skb->data); - skb_pull(skb, HCI_EVENT_HDR_SIZE); + BT_DBG("%s num_rsp %d", hdev->name, num_rsp); - BT_DBG("%s evt 0x%x", hdev->name, hdr->evt); + if (!num_rsp) + return; - switch (hdr->evt) { - case HCI_EV_NUM_COMP_PKTS: - hci_num_comp_pkts_evt(hdev, skb); - break; + hci_dev_lock(hdev); + + for (; num_rsp; num_rsp--) { + bacpy(&data.bdaddr, &info->bdaddr); + data.pscan_rep_mode = info->pscan_rep_mode; + data.pscan_period_mode = info->pscan_period_mode; + data.pscan_mode = 0x00; + memcpy(data.dev_class, info->dev_class, 3); + data.clock_offset = info->clock_offset; + data.rssi = info->rssi; + info++; + hci_inquiry_cache_update(hdev, &data); + } + hci_dev_unlock(hdev); +} + +void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_event_hdr *hdr = (void *) skb->data; + __u8 event = hdr->evt; + + skb_pull(skb, HCI_EVENT_HDR_SIZE); + + switch (event) { case 
HCI_EV_INQUIRY_COMPLETE: hci_inquiry_complete_evt(hdev, skb); break; @@ -1162,44 +1395,64 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb) hci_inquiry_result_evt(hdev, skb); break; - case HCI_EV_INQUIRY_RESULT_WITH_RSSI: - hci_inquiry_result_with_rssi_evt(hdev, skb); - break; - - case HCI_EV_EXTENDED_INQUIRY_RESULT: - hci_extended_inquiry_result_evt(hdev, skb); + case HCI_EV_CONN_COMPLETE: + hci_conn_complete_evt(hdev, skb); break; case HCI_EV_CONN_REQUEST: hci_conn_request_evt(hdev, skb); break; - case HCI_EV_CONN_COMPLETE: - hci_conn_complete_evt(hdev, skb); - break; - case HCI_EV_DISCONN_COMPLETE: hci_disconn_complete_evt(hdev, skb); break; - case HCI_EV_ROLE_CHANGE: - hci_role_change_evt(hdev, skb); - break; - - case HCI_EV_MODE_CHANGE: - hci_mode_change_evt(hdev, skb); - break; - case HCI_EV_AUTH_COMPLETE: hci_auth_complete_evt(hdev, skb); break; + case HCI_EV_REMOTE_NAME: + hci_remote_name_evt(hdev, skb); + break; + case HCI_EV_ENCRYPT_CHANGE: hci_encrypt_change_evt(hdev, skb); break; - case HCI_EV_CHANGE_CONN_LINK_KEY_COMPLETE: - hci_change_conn_link_key_complete_evt(hdev, skb); + case HCI_EV_CHANGE_LINK_KEY_COMPLETE: + hci_change_link_key_complete_evt(hdev, skb); + break; + + case HCI_EV_REMOTE_FEATURES: + hci_remote_features_evt(hdev, skb); + break; + + case HCI_EV_REMOTE_VERSION: + hci_remote_version_evt(hdev, skb); + break; + + case HCI_EV_QOS_SETUP_COMPLETE: + hci_qos_setup_complete_evt(hdev, skb); + break; + + case HCI_EV_CMD_COMPLETE: + hci_cmd_complete_evt(hdev, skb); + break; + + case HCI_EV_CMD_STATUS: + hci_cmd_status_evt(hdev, skb); + break; + + case HCI_EV_ROLE_CHANGE: + hci_role_change_evt(hdev, skb); + break; + + case HCI_EV_NUM_COMP_PKTS: + hci_num_comp_pkts_evt(hdev, skb); + break; + + case HCI_EV_MODE_CHANGE: + hci_mode_change_evt(hdev, skb); break; case HCI_EV_PIN_CODE_REQ: @@ -1214,10 +1467,6 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb) hci_link_key_notify_evt(hdev, skb); break; - case HCI_EV_REMOTE_FEATURES: - hci_remote_features_evt(hdev, skb); - break; - case HCI_EV_CLOCK_OFFSET: hci_clock_offset_evt(hdev, skb); break; @@ -1226,82 +1475,32 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb) hci_pscan_rep_mode_evt(hdev, skb); break; - case HCI_EV_SNIFF_SUBRATE: - hci_sniff_subrate_evt(hdev, skb); + case HCI_EV_INQUIRY_RESULT_WITH_RSSI: + hci_inquiry_result_with_rssi_evt(hdev, skb); break; - case HCI_EV_CMD_STATUS: - cs = (struct hci_ev_cmd_status *) skb->data; - skb_pull(skb, sizeof(cs)); - - opcode = __le16_to_cpu(cs->opcode); - ogf = hci_opcode_ogf(opcode); - ocf = hci_opcode_ocf(opcode); - - switch (ogf) { - case OGF_INFO_PARAM: - hci_cs_info_param(hdev, ocf, cs->status); - break; - - case OGF_HOST_CTL: - hci_cs_host_ctl(hdev, ocf, cs->status); - break; - - case OGF_LINK_CTL: - hci_cs_link_ctl(hdev, ocf, cs->status); - break; - - case OGF_LINK_POLICY: - hci_cs_link_policy(hdev, ocf, cs->status); - break; - - default: - BT_DBG("%s Command Status OGF %x", hdev->name, ogf); - break; - } - - if (cs->ncmd) { - atomic_set(&hdev->cmd_cnt, 1); - if (!skb_queue_empty(&hdev->cmd_q)) - hci_sched_cmd(hdev); - } + case HCI_EV_REMOTE_EXT_FEATURES: + hci_remote_ext_features_evt(hdev, skb); break; - case HCI_EV_CMD_COMPLETE: - ec = (struct hci_ev_cmd_complete *) skb->data; - skb_pull(skb, sizeof(*ec)); - - opcode = __le16_to_cpu(ec->opcode); - ogf = hci_opcode_ogf(opcode); - ocf = hci_opcode_ocf(opcode); - - switch (ogf) { - case OGF_INFO_PARAM: - hci_cc_info_param(hdev, ocf, skb); - break; - - case OGF_HOST_CTL: - 
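/*
 * Editorial aside, not part of the patch: an HCI command opcode is a single
 * 16-bit value with the OGF (command group) in the upper 6 bits and the OCF
 * in the lower 10 bits.  That is why the new hci_cmd_complete_evt and
 * hci_cmd_status_evt can switch directly on combined HCI_OP_* constants
 * instead of the nested per-OGF/per-OCF switches being removed here, and
 * why the hci_sock.c hunk further below can simply compare the extracted
 * OGF against the literal 0x3f (the vendor-specific group).  The sketch
 * uses the standard encodings for Reset and Read BD_ADDR; the macro and
 * function names themselves are illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define HCI_OP(ogf, ocf)        ((uint16_t)(((ogf) << 10) | ((ocf) & 0x03ff)))

#define OP_RESET                HCI_OP(0x03, 0x0003)    /* 0x0c03 */
#define OP_READ_BD_ADDR         HCI_OP(0x04, 0x0009)    /* 0x1009 */
#define OGF_VENDOR              0x3f                    /* vendor-specific group */

static uint16_t opcode_ogf(uint16_t op) { return op >> 10; }
static uint16_t opcode_ocf(uint16_t op) { return op & 0x03ff; }

static void dispatch(uint16_t opcode)
{
        switch (opcode) {               /* one flat switch, no per-OGF nesting */
        case OP_RESET:
                printf("reset complete\n");
                break;
        case OP_READ_BD_ADDR:
                printf("read bd_addr complete\n");
                break;
        default:
                printf("unhandled 0x%04x (ogf 0x%02x ocf 0x%04x)\n",
                       opcode, opcode_ogf(opcode), opcode_ocf(opcode));
                break;
        }
}

int main(void)
{
        dispatch(OP_READ_BD_ADDR);
        dispatch(HCI_OP(OGF_VENDOR, 0x0001));   /* lands in the default case */
        return 0;
}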
hci_cc_host_ctl(hdev, ocf, skb); - break; + case HCI_EV_SYNC_CONN_COMPLETE: + hci_sync_conn_complete_evt(hdev, skb); + break; - case OGF_LINK_CTL: - hci_cc_link_ctl(hdev, ocf, skb); - break; + case HCI_EV_SYNC_CONN_CHANGED: + hci_sync_conn_changed_evt(hdev, skb); + break; - case OGF_LINK_POLICY: - hci_cc_link_policy(hdev, ocf, skb); - break; + case HCI_EV_SNIFF_SUBRATE: + hci_sniff_subrate_evt(hdev, skb); + break; - default: - BT_DBG("%s Command Completed OGF %x", hdev->name, ogf); - break; - } + case HCI_EV_EXTENDED_INQUIRY_RESULT: + hci_extended_inquiry_result_evt(hdev, skb); + break; - if (ec->ncmd) { - atomic_set(&hdev->cmd_cnt, 1); - if (!skb_queue_empty(&hdev->cmd_q)) - hci_sched_cmd(hdev); - } + default: + BT_DBG("%s event 0x%x", hdev->name, event); break; } diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c index 43dd6373bff..8825102c517 100644 --- a/net/bluetooth/hci_sock.c +++ b/net/bluetooth/hci_sock.c @@ -451,7 +451,7 @@ static int hci_sock_sendmsg(struct kiocb *iocb, struct socket *sock, goto drop; } - if (test_bit(HCI_RAW, &hdev->flags) || (ogf == OGF_VENDOR_CMD)) { + if (test_bit(HCI_RAW, &hdev->flags) || (ogf == 0x3f)) { skb_queue_tail(&hdev->raw_q, skb); hci_sched_tx(hdev); } else { diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c index 25835403d65..cef1e3e1881 100644 --- a/net/bluetooth/hci_sysfs.c +++ b/net/bluetooth/hci_sysfs.c @@ -41,6 +41,26 @@ static ssize_t show_type(struct device *dev, struct device_attribute *attr, char return sprintf(buf, "%s\n", typetostr(hdev->type)); } +static ssize_t show_name(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct hci_dev *hdev = dev_get_drvdata(dev); + char name[249]; + int i; + + for (i = 0; i < 248; i++) + name[i] = hdev->dev_name[i]; + + name[248] = '\0'; + return sprintf(buf, "%s\n", name); +} + +static ssize_t show_class(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct hci_dev *hdev = dev_get_drvdata(dev); + return sprintf(buf, "0x%.2x%.2x%.2x\n", + hdev->dev_class[2], hdev->dev_class[1], hdev->dev_class[0]); +} + static ssize_t show_address(struct device *dev, struct device_attribute *attr, char *buf) { struct hci_dev *hdev = dev_get_drvdata(dev); @@ -49,6 +69,17 @@ static ssize_t show_address(struct device *dev, struct device_attribute *attr, c return sprintf(buf, "%s\n", batostr(&bdaddr)); } +static ssize_t show_features(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct hci_dev *hdev = dev_get_drvdata(dev); + + return sprintf(buf, "0x%02x%02x%02x%02x%02x%02x%02x%02x\n", + hdev->features[0], hdev->features[1], + hdev->features[2], hdev->features[3], + hdev->features[4], hdev->features[5], + hdev->features[6], hdev->features[7]); +} + static ssize_t show_manufacturer(struct device *dev, struct device_attribute *attr, char *buf) { struct hci_dev *hdev = dev_get_drvdata(dev); @@ -170,7 +201,10 @@ static ssize_t store_sniff_min_interval(struct device *dev, struct device_attrib } static DEVICE_ATTR(type, S_IRUGO, show_type, NULL); +static DEVICE_ATTR(name, S_IRUGO, show_name, NULL); +static DEVICE_ATTR(class, S_IRUGO, show_class, NULL); static DEVICE_ATTR(address, S_IRUGO, show_address, NULL); +static DEVICE_ATTR(features, S_IRUGO, show_features, NULL); static DEVICE_ATTR(manufacturer, S_IRUGO, show_manufacturer, NULL); static DEVICE_ATTR(hci_version, S_IRUGO, show_hci_version, NULL); static DEVICE_ATTR(hci_revision, S_IRUGO, show_hci_revision, NULL); @@ -185,7 +219,10 @@ static DEVICE_ATTR(sniff_min_interval, S_IRUGO | 
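/*
 * Editorial aside, not part of the patch: the new sysfs "name" attribute
 * above copies the controller's 248-byte name field into a buffer one byte
 * larger and terminates it by hand, because the field as read from the
 * device is fixed-width and not guaranteed to be NUL-terminated.  Below is
 * a minimal userspace sketch of the same precaution; the length constant
 * matches the field above, the function name is made up.
 */
#include <stdio.h>
#include <string.h>

#define DEV_NAME_LEN 248        /* fixed-width field, may lack a terminator */

static void printable_name(const char raw[DEV_NAME_LEN], char out[DEV_NAME_LEN + 1])
{
        memcpy(out, raw, DEV_NAME_LEN);
        out[DEV_NAME_LEN] = '\0';       /* guarantee termination before printing */
}

int main(void)
{
        char raw[DEV_NAME_LEN];
        char out[DEV_NAME_LEN + 1];

        memset(raw, 'A', sizeof(raw));  /* worst case: every byte is used */
        printable_name(raw, out);
        printf("%.20s... (%zu chars)\n", out, strlen(out));
        return 0;
}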
S_IWUSR, static struct device_attribute *bt_attrs[] = { &dev_attr_type, + &dev_attr_name, + &dev_attr_class, &dev_attr_address, + &dev_attr_features, &dev_attr_manufacturer, &dev_attr_hci_version, &dev_attr_hci_revision, diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c index 66c736953cf..4bbacddeb49 100644 --- a/net/bluetooth/hidp/core.c +++ b/net/bluetooth/hidp/core.c @@ -247,7 +247,7 @@ static inline int hidp_queue_report(struct hidp_session *session, unsigned char { struct sk_buff *skb; - BT_DBG("session %p hid %p data %p size %d", session, device, data, size); + BT_DBG("session %p hid %p data %p size %d", session, session->hid, data, size); if (!(skb = alloc_skb(size + 1, GFP_ATOMIC))) { BT_ERR("Can't allocate memory for new frame"); diff --git a/net/bluetooth/l2cap.c b/net/bluetooth/l2cap.c index 36ef27b625d..6fbbae78b30 100644 --- a/net/bluetooth/l2cap.c +++ b/net/bluetooth/l2cap.c @@ -55,7 +55,9 @@ #define BT_DBG(D...) #endif -#define VERSION "2.8" +#define VERSION "2.9" + +static u32 l2cap_feat_mask = 0x0000; static const struct proto_ops l2cap_sock_ops; @@ -258,7 +260,119 @@ static void l2cap_chan_del(struct sock *sk, int err) sk->sk_state_change(sk); } +static inline u8 l2cap_get_ident(struct l2cap_conn *conn) +{ + u8 id; + + /* Get next available identificator. + * 1 - 128 are used by kernel. + * 129 - 199 are reserved. + * 200 - 254 are used by utilities like l2ping, etc. + */ + + spin_lock_bh(&conn->lock); + + if (++conn->tx_ident > 128) + conn->tx_ident = 1; + + id = conn->tx_ident; + + spin_unlock_bh(&conn->lock); + + return id; +} + +static inline int l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len, void *data) +{ + struct sk_buff *skb = l2cap_build_cmd(conn, code, ident, len, data); + + BT_DBG("code 0x%2.2x", code); + + if (!skb) + return -ENOMEM; + + return hci_send_acl(conn->hcon, skb, 0); +} + /* ---- L2CAP connections ---- */ +static void l2cap_conn_start(struct l2cap_conn *conn) +{ + struct l2cap_chan_list *l = &conn->chan_list; + struct sock *sk; + + BT_DBG("conn %p", conn); + + read_lock(&l->lock); + + for (sk = l->head; sk; sk = l2cap_pi(sk)->next_c) { + bh_lock_sock(sk); + + if (sk->sk_type != SOCK_SEQPACKET) { + l2cap_sock_clear_timer(sk); + sk->sk_state = BT_CONNECTED; + sk->sk_state_change(sk); + } else if (sk->sk_state == BT_CONNECT) { + struct l2cap_conn_req req; + l2cap_pi(sk)->ident = l2cap_get_ident(conn); + req.scid = cpu_to_le16(l2cap_pi(sk)->scid); + req.psm = l2cap_pi(sk)->psm; + l2cap_send_cmd(conn, l2cap_pi(sk)->ident, + L2CAP_CONN_REQ, sizeof(req), &req); + } + + bh_unlock_sock(sk); + } + + read_unlock(&l->lock); +} + +static void l2cap_conn_ready(struct l2cap_conn *conn) +{ + BT_DBG("conn %p", conn); + + if (conn->chan_list.head || !hlist_empty(&l2cap_sk_list.head)) { + struct l2cap_info_req req; + + req.type = cpu_to_le16(L2CAP_IT_FEAT_MASK); + + conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_SENT; + conn->info_ident = l2cap_get_ident(conn); + + mod_timer(&conn->info_timer, + jiffies + msecs_to_jiffies(L2CAP_INFO_TIMEOUT)); + + l2cap_send_cmd(conn, conn->info_ident, + L2CAP_INFO_REQ, sizeof(req), &req); + } +} + +/* Notify sockets that we cannot guaranty reliability anymore */ +static void l2cap_conn_unreliable(struct l2cap_conn *conn, int err) +{ + struct l2cap_chan_list *l = &conn->chan_list; + struct sock *sk; + + BT_DBG("conn %p", conn); + + read_lock(&l->lock); + + for (sk = l->head; sk; sk = l2cap_pi(sk)->next_c) { + if (l2cap_pi(sk)->link_mode & L2CAP_LM_RELIABLE) + sk->sk_err = err; + } + + 
read_unlock(&l->lock); +} + +static void l2cap_info_timeout(unsigned long arg) +{ + struct l2cap_conn *conn = (void *) arg; + + conn->info_ident = 0; + + l2cap_conn_start(conn); +} + static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon, u8 status) { struct l2cap_conn *conn = hcon->l2cap_data; @@ -279,6 +393,12 @@ static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon, u8 status) conn->src = &hcon->hdev->bdaddr; conn->dst = &hcon->dst; + conn->feat_mask = 0; + + init_timer(&conn->info_timer); + conn->info_timer.function = l2cap_info_timeout; + conn->info_timer.data = (unsigned long) conn; + spin_lock_init(&conn->lock); rwlock_init(&conn->chan_list.lock); @@ -318,40 +438,6 @@ static inline void l2cap_chan_add(struct l2cap_conn *conn, struct sock *sk, stru write_unlock_bh(&l->lock); } -static inline u8 l2cap_get_ident(struct l2cap_conn *conn) -{ - u8 id; - - /* Get next available identificator. - * 1 - 128 are used by kernel. - * 129 - 199 are reserved. - * 200 - 254 are used by utilities like l2ping, etc. - */ - - spin_lock_bh(&conn->lock); - - if (++conn->tx_ident > 128) - conn->tx_ident = 1; - - id = conn->tx_ident; - - spin_unlock_bh(&conn->lock); - - return id; -} - -static inline int l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len, void *data) -{ - struct sk_buff *skb = l2cap_build_cmd(conn, code, ident, len, data); - - BT_DBG("code 0x%2.2x", code); - - if (!skb) - return -ENOMEM; - - return hci_send_acl(conn->hcon, skb, 0); -} - /* ---- Socket interface ---- */ static struct sock *__l2cap_get_sock_by_addr(__le16 psm, bdaddr_t *src) { @@ -508,7 +594,6 @@ static void l2cap_sock_init(struct sock *sk, struct sock *parent) /* Default config options */ pi->conf_len = 0; - pi->conf_mtu = L2CAP_DEFAULT_MTU; pi->flush_to = L2CAP_DEFAULT_FLUSH_TO; } @@ -530,7 +615,7 @@ static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock, int p INIT_LIST_HEAD(&bt_sk(sk)->accept_q); sk->sk_destruct = l2cap_sock_destruct; - sk->sk_sndtimeo = L2CAP_CONN_TIMEOUT; + sk->sk_sndtimeo = msecs_to_jiffies(L2CAP_CONN_TIMEOUT); sock_reset_flag(sk, SOCK_ZAPPED); @@ -650,6 +735,11 @@ static int l2cap_do_connect(struct sock *sk) l2cap_sock_set_timer(sk, sk->sk_sndtimeo); if (hcon->state == BT_CONNECTED) { + if (!(conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT)) { + l2cap_conn_ready(conn); + goto done; + } + if (sk->sk_type == SOCK_SEQPACKET) { struct l2cap_conn_req req; l2cap_pi(sk)->ident = l2cap_get_ident(conn); @@ -958,7 +1048,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname, ch opts.imtu = l2cap_pi(sk)->imtu; opts.omtu = l2cap_pi(sk)->omtu; opts.flush_to = l2cap_pi(sk)->flush_to; - opts.mode = 0x00; + opts.mode = L2CAP_MODE_BASIC; len = min_t(unsigned int, sizeof(opts), optlen); if (copy_from_user((char *) &opts, optval, len)) { @@ -1007,7 +1097,7 @@ static int l2cap_sock_getsockopt(struct socket *sock, int level, int optname, ch opts.imtu = l2cap_pi(sk)->imtu; opts.omtu = l2cap_pi(sk)->omtu; opts.flush_to = l2cap_pi(sk)->flush_to; - opts.mode = 0x00; + opts.mode = L2CAP_MODE_BASIC; len = min_t(unsigned int, len, sizeof(opts)); if (copy_to_user(optval, (char *) &opts, len)) @@ -1084,52 +1174,6 @@ static int l2cap_sock_release(struct socket *sock) return err; } -static void l2cap_conn_ready(struct l2cap_conn *conn) -{ - struct l2cap_chan_list *l = &conn->chan_list; - struct sock *sk; - - BT_DBG("conn %p", conn); - - read_lock(&l->lock); - - for (sk = l->head; sk; sk = l2cap_pi(sk)->next_c) { - bh_lock_sock(sk); - - if (sk->sk_type 
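/*
 * Editorial aside, not part of the patch: with the L2CAP hunks above, a new
 * connection first sends an Information Request for the remote feature
 * mask, and pending channels are only started by l2cap_conn_start() once
 * the response arrives or the info timer fires.  Every signalling command
 * carries an identifier from the allocator shown above, which wraps inside
 * the kernel's 1..128 range (129..199 reserved, 200..254 left to tools such
 * as l2ping).  The sketch below is just that allocator lifted into a
 * standalone program; the struct name is invented.
 */
#include <stdint.h>
#include <stdio.h>

struct sig_conn {
        uint8_t tx_ident;       /* last identifier handed out, 0 = none yet */
};

static uint8_t next_ident(struct sig_conn *c)
{
        if (++c->tx_ident > 128)
                c->tx_ident = 1;
        return c->tx_ident;
}

int main(void)
{
        struct sig_conn c = { .tx_ident = 126 };
        int i;

        for (i = 0; i < 5; i++)
                printf("ident %u\n", next_ident(&c));   /* 127 128 1 2 3 */
        return 0;
}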
!= SOCK_SEQPACKET) { - l2cap_sock_clear_timer(sk); - sk->sk_state = BT_CONNECTED; - sk->sk_state_change(sk); - } else if (sk->sk_state == BT_CONNECT) { - struct l2cap_conn_req req; - l2cap_pi(sk)->ident = l2cap_get_ident(conn); - req.scid = cpu_to_le16(l2cap_pi(sk)->scid); - req.psm = l2cap_pi(sk)->psm; - l2cap_send_cmd(conn, l2cap_pi(sk)->ident, L2CAP_CONN_REQ, sizeof(req), &req); - } - - bh_unlock_sock(sk); - } - - read_unlock(&l->lock); -} - -/* Notify sockets that we cannot guaranty reliability anymore */ -static void l2cap_conn_unreliable(struct l2cap_conn *conn, int err) -{ - struct l2cap_chan_list *l = &conn->chan_list; - struct sock *sk; - - BT_DBG("conn %p", conn); - - read_lock(&l->lock); - for (sk = l->head; sk; sk = l2cap_pi(sk)->next_c) { - if (l2cap_pi(sk)->link_mode & L2CAP_LM_RELIABLE) - sk->sk_err = err; - } - read_unlock(&l->lock); -} - static void l2cap_chan_ready(struct sock *sk) { struct sock *parent = bt_sk(sk)->parent; @@ -1256,11 +1300,11 @@ static inline int l2cap_get_conf_opt(void **ptr, int *type, int *olen, unsigned break; case 2: - *val = __le16_to_cpu(*((__le16 *)opt->val)); + *val = __le16_to_cpu(*((__le16 *) opt->val)); break; case 4: - *val = __le32_to_cpu(*((__le32 *)opt->val)); + *val = __le32_to_cpu(*((__le32 *) opt->val)); break; default: @@ -1332,6 +1376,8 @@ static int l2cap_parse_conf_req(struct sock *sk, void *data) int len = pi->conf_len; int type, hint, olen; unsigned long val; + struct l2cap_conf_rfc rfc = { .mode = L2CAP_MODE_BASIC }; + u16 mtu = L2CAP_DEFAULT_MTU; u16 result = L2CAP_CONF_SUCCESS; BT_DBG("sk %p", sk); @@ -1344,7 +1390,7 @@ static int l2cap_parse_conf_req(struct sock *sk, void *data) switch (type) { case L2CAP_CONF_MTU: - pi->conf_mtu = val; + mtu = val; break; case L2CAP_CONF_FLUSH_TO: @@ -1354,6 +1400,11 @@ static int l2cap_parse_conf_req(struct sock *sk, void *data) case L2CAP_CONF_QOS: break; + case L2CAP_CONF_RFC: + if (olen == sizeof(rfc)) + memcpy(&rfc, (void *) val, olen); + break; + default: if (hint) break; @@ -1368,12 +1419,24 @@ static int l2cap_parse_conf_req(struct sock *sk, void *data) /* Configure output options and let the other side know * which ones we don't like. 
*/ - if (pi->conf_mtu < pi->omtu) + if (rfc.mode == L2CAP_MODE_BASIC) { + if (mtu < pi->omtu) + result = L2CAP_CONF_UNACCEPT; + else { + pi->omtu = mtu; + pi->conf_state |= L2CAP_CONF_OUTPUT_DONE; + } + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, pi->omtu); + } else { result = L2CAP_CONF_UNACCEPT; - else - pi->omtu = pi->conf_mtu; - l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, pi->omtu); + memset(&rfc, 0, sizeof(rfc)); + rfc.mode = L2CAP_MODE_BASIC; + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, + sizeof(rfc), (unsigned long) &rfc); + } } rsp->scid = cpu_to_le16(pi->dcid); @@ -1397,6 +1460,23 @@ static int l2cap_build_conf_rsp(struct sock *sk, void *data, u16 result, u16 fla return ptr - data; } +static inline int l2cap_command_rej(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, u8 *data) +{ + struct l2cap_cmd_rej *rej = (struct l2cap_cmd_rej *) data; + + if (rej->reason != 0x0000) + return 0; + + if ((conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT) && + cmd->ident == conn->info_ident) { + conn->info_ident = 0; + del_timer(&conn->info_timer); + l2cap_conn_start(conn); + } + + return 0; +} + static inline int l2cap_connect_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, u8 *data) { struct l2cap_chan_list *list = &conn->chan_list; @@ -1577,16 +1657,19 @@ static inline int l2cap_config_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, len, rsp); - /* Output config done. */ - l2cap_pi(sk)->conf_state |= L2CAP_CONF_OUTPUT_DONE; - /* Reset config buffer. */ l2cap_pi(sk)->conf_len = 0; + if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_OUTPUT_DONE)) + goto unlock; + if (l2cap_pi(sk)->conf_state & L2CAP_CONF_INPUT_DONE) { sk->sk_state = BT_CONNECTED; l2cap_chan_ready(sk); - } else if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_REQ_SENT)) { + goto unlock; + } + + if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_REQ_SENT)) { u8 req[64]; l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_CONF_REQ, l2cap_build_conf_req(sk, req), req); @@ -1646,7 +1729,6 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn, struct l2cap_cmd_hdr if (flags & 0x01) goto done; - /* Input config done */ l2cap_pi(sk)->conf_state |= L2CAP_CONF_INPUT_DONE; if (l2cap_pi(sk)->conf_state & L2CAP_CONF_OUTPUT_DONE) { @@ -1711,16 +1793,27 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn, struct l2cap_cmd static inline int l2cap_information_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, u8 *data) { struct l2cap_info_req *req = (struct l2cap_info_req *) data; - struct l2cap_info_rsp rsp; u16 type; type = __le16_to_cpu(req->type); BT_DBG("type 0x%4.4x", type); - rsp.type = cpu_to_le16(type); - rsp.result = cpu_to_le16(L2CAP_IR_NOTSUPP); - l2cap_send_cmd(conn, cmd->ident, L2CAP_INFO_RSP, sizeof(rsp), &rsp); + if (type == L2CAP_IT_FEAT_MASK) { + u8 buf[8]; + struct l2cap_info_rsp *rsp = (struct l2cap_info_rsp *) buf; + rsp->type = cpu_to_le16(L2CAP_IT_FEAT_MASK); + rsp->result = cpu_to_le16(L2CAP_IR_SUCCESS); + put_unaligned(cpu_to_le32(l2cap_feat_mask), (__le32 *) rsp->data); + l2cap_send_cmd(conn, cmd->ident, + L2CAP_INFO_RSP, sizeof(buf), buf); + } else { + struct l2cap_info_rsp rsp; + rsp.type = cpu_to_le16(type); + rsp.result = cpu_to_le16(L2CAP_IR_NOTSUPP); + l2cap_send_cmd(conn, cmd->ident, + L2CAP_INFO_RSP, sizeof(rsp), &rsp); + } return 0; } @@ -1735,6 +1828,15 @@ static inline int l2cap_information_rsp(struct l2cap_conn *conn, struct l2cap_cm BT_DBG("type 0x%4.4x result 0x%2.2x", type, result); + conn->info_ident = 0; + + 
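/*
 * Editorial aside, not part of the patch: an L2CAP configure request is a
 * sequence of type/length/value options.  The reworked parser above now
 * recognizes the RFC (retransmission and flow control) option, defaults to
 * basic mode and reports anything else as unacceptable, and the
 * information-request handler answers a feature-mask query with a 32-bit
 * mask (still zero) instead of "not supported".  The standalone sketch
 * below walks such an option list; it assumes a little-endian host and
 * treats the high bit of the type as the usual hint flag.
 */
#include <stdint.h>
#include <stdio.h>

#define OPT_MTU 0x01
#define OPT_RFC 0x04

static void parse_conf(const uint8_t *buf, size_t len)
{
        size_t off = 0;

        while (off + 2 <= len) {
                uint8_t type = buf[off];
                uint8_t olen = buf[off + 1];
                const uint8_t *val = buf + off + 2;

                if (off + 2 + olen > len)
                        break;                  /* truncated option */

                switch (type & 0x7f) {          /* high bit marks a hint */
                case OPT_MTU:
                        if (olen == 2)
                                printf("MTU %u\n", (unsigned)(val[0] | (val[1] << 8)));
                        break;
                case OPT_RFC:
                        printf("RFC mode 0x%02x\n", val[0]);
                        break;
                default:
                        printf("option 0x%02x (%u bytes) skipped\n", type, olen);
                        break;
                }
                off += 2 + olen;
        }
}

int main(void)
{
        /* MTU 672, then a 9-byte RFC option asking for basic mode (0x00). */
        const uint8_t req[] = { 0x01, 0x02, 0xa0, 0x02, 0x04, 0x09,
                                0x00, 0, 0, 0, 0, 0, 0, 0, 0 };
        parse_conf(req, sizeof(req));
        return 0;
}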
del_timer(&conn->info_timer); + + if (type == L2CAP_IT_FEAT_MASK) + conn->feat_mask = __le32_to_cpu(get_unaligned((__le32 *) rsp->data)); + + l2cap_conn_start(conn); + return 0; } @@ -1764,7 +1866,7 @@ static inline void l2cap_sig_channel(struct l2cap_conn *conn, struct sk_buff *sk switch (cmd.code) { case L2CAP_COMMAND_REJ: - /* FIXME: We should process this */ + l2cap_command_rej(conn, &cmd, data); break; case L2CAP_CONN_REQ: diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c index bb7220770f2..e7ac6ba7eca 100644 --- a/net/bluetooth/rfcomm/core.c +++ b/net/bluetooth/rfcomm/core.c @@ -33,11 +33,11 @@ #include <linux/sched.h> #include <linux/signal.h> #include <linux/init.h> -#include <linux/freezer.h> #include <linux/wait.h> #include <linux/device.h> #include <linux/net.h> #include <linux/mutex.h> +#include <linux/kthread.h> #include <net/sock.h> #include <asm/uaccess.h> @@ -68,7 +68,6 @@ static DEFINE_MUTEX(rfcomm_mutex); static unsigned long rfcomm_event; static LIST_HEAD(session_list); -static atomic_t terminate, running; static int rfcomm_send_frame(struct rfcomm_session *s, u8 *data, int len); static int rfcomm_send_sabm(struct rfcomm_session *s, u8 dlci); @@ -1850,26 +1849,6 @@ static inline void rfcomm_process_sessions(void) rfcomm_unlock(); } -static void rfcomm_worker(void) -{ - BT_DBG(""); - - while (!atomic_read(&terminate)) { - set_current_state(TASK_INTERRUPTIBLE); - if (!test_bit(RFCOMM_SCHED_WAKEUP, &rfcomm_event)) { - /* No pending events. Let's sleep. - * Incoming connections and data will wake us up. */ - schedule(); - } - set_current_state(TASK_RUNNING); - - /* Process stuff */ - clear_bit(RFCOMM_SCHED_WAKEUP, &rfcomm_event); - rfcomm_process_sessions(); - } - return; -} - static int rfcomm_add_listener(bdaddr_t *ba) { struct sockaddr_l2 addr; @@ -1935,22 +1914,28 @@ static void rfcomm_kill_listener(void) static int rfcomm_run(void *unused) { - rfcomm_thread = current; - - atomic_inc(&running); + BT_DBG(""); - daemonize("krfcommd"); set_user_nice(current, -10); - BT_DBG(""); - rfcomm_add_listener(BDADDR_ANY); - rfcomm_worker(); + while (!kthread_should_stop()) { + set_current_state(TASK_INTERRUPTIBLE); + if (!test_bit(RFCOMM_SCHED_WAKEUP, &rfcomm_event)) { + /* No pending events. Let's sleep. + * Incoming connections and data will wake us up. */ + schedule(); + } + set_current_state(TASK_RUNNING); + + /* Process stuff */ + clear_bit(RFCOMM_SCHED_WAKEUP, &rfcomm_event); + rfcomm_process_sessions(); + } rfcomm_kill_listener(); - atomic_dec(&running); return 0; } @@ -2059,7 +2044,11 @@ static int __init rfcomm_init(void) hci_register_cb(&rfcomm_cb); - kernel_thread(rfcomm_run, NULL, CLONE_KERNEL); + rfcomm_thread = kthread_run(rfcomm_run, NULL, "krfcommd"); + if (IS_ERR(rfcomm_thread)) { + hci_unregister_cb(&rfcomm_cb); + return PTR_ERR(rfcomm_thread); + } if (class_create_file(bt_class, &class_attr_rfcomm_dlc) < 0) BT_ERR("Failed to create RFCOMM info file"); @@ -2081,14 +2070,7 @@ static void __exit rfcomm_exit(void) hci_unregister_cb(&rfcomm_cb); - /* Terminate working thread. - * ie. 
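Note: the rfcomm/core.c changes above, together with the kthread_stop() call that follows, drop the hand-rolled terminate/running counters in favour of the kthread API. A minimal, self-contained sketch of that pattern; the demo names are made up and the actual work step is elided.

#include <linux/module.h>
#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

static struct task_struct *demo_task;

static int demo_thread(void *unused)
{
        /* Run until kthread_stop() is called; sleep between rounds. */
        while (!kthread_should_stop()) {
                /* ... process pending work here ... */
                schedule_timeout_interruptible(HZ);
        }
        return 0;
}

static int __init demo_init(void)
{
        demo_task = kthread_run(demo_thread, NULL, "kdemod");
        if (IS_ERR(demo_task))
                return PTR_ERR(demo_task);
        return 0;
}

static void __exit demo_exit(void)
{
        /* Wakes the thread and waits for it to return, replacing the
         * old terminate flag plus busy-wait on the running counter. */
        kthread_stop(demo_task);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");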
Set terminate flag and wake it up */ - atomic_inc(&terminate); - rfcomm_schedule(RFCOMM_SCHED_STATE); - - /* Wait until thread is running */ - while (atomic_read(&running)) - schedule(); + kthread_stop(rfcomm_thread); #ifdef CONFIG_BT_RFCOMM_TTY rfcomm_cleanup_ttys(); diff --git a/net/bluetooth/rfcomm/tty.c b/net/bluetooth/rfcomm/tty.c index 22a832098d4..e447651a2db 100644 --- a/net/bluetooth/rfcomm/tty.c +++ b/net/bluetooth/rfcomm/tty.c @@ -189,6 +189,23 @@ static struct device *rfcomm_get_device(struct rfcomm_dev *dev) return conn ? &conn->dev : NULL; } +static ssize_t show_address(struct device *tty_dev, struct device_attribute *attr, char *buf) +{ + struct rfcomm_dev *dev = dev_get_drvdata(tty_dev); + bdaddr_t bdaddr; + baswap(&bdaddr, &dev->dst); + return sprintf(buf, "%s\n", batostr(&bdaddr)); +} + +static ssize_t show_channel(struct device *tty_dev, struct device_attribute *attr, char *buf) +{ + struct rfcomm_dev *dev = dev_get_drvdata(tty_dev); + return sprintf(buf, "%d\n", dev->channel); +} + +static DEVICE_ATTR(address, S_IRUGO, show_address, NULL); +static DEVICE_ATTR(channel, S_IRUGO, show_channel, NULL); + static int rfcomm_dev_add(struct rfcomm_dev_req *req, struct rfcomm_dlc *dlc) { struct rfcomm_dev *dev; @@ -281,6 +298,14 @@ out: return err; } + dev_set_drvdata(dev->tty_dev, dev); + + if (device_create_file(dev->tty_dev, &dev_attr_address) < 0) + BT_ERR("Failed to create address attribute"); + + if (device_create_file(dev->tty_dev, &dev_attr_channel) < 0) + BT_ERR("Failed to create channel attribute"); + return dev->id; } diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c index 65b6fb1c415..82d0dfdfa7e 100644 --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -189,7 +189,7 @@ static int sco_connect(struct sock *sk) struct sco_conn *conn; struct hci_conn *hcon; struct hci_dev *hdev; - int err = 0; + int err, type; BT_DBG("%s -> %s", batostr(src), batostr(dst)); @@ -200,7 +200,9 @@ static int sco_connect(struct sock *sk) err = -ENOMEM; - hcon = hci_connect(hdev, SCO_LINK, dst); + type = lmp_esco_capable(hdev) ? 
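Note: the rfcomm/tty.c hunk above exposes the remote address and channel through sysfs: stash a pointer with dev_set_drvdata() and register read-only files with DEVICE_ATTR() plus device_create_file(). A minimal sketch of the same pattern with a made-up structure and attribute name.

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/stat.h>

struct demo_dev {
        int channel;
};

static ssize_t show_channel(struct device *tty_dev,
                            struct device_attribute *attr, char *buf)
{
        struct demo_dev *dev = dev_get_drvdata(tty_dev);

        return sprintf(buf, "%d\n", dev->channel);
}

static DEVICE_ATTR(channel, S_IRUGO, show_channel, NULL);

static int demo_register_sysfs(struct device *tty_dev, struct demo_dev *dev)
{
        dev_set_drvdata(tty_dev, dev);
        return device_create_file(tty_dev, &dev_attr_channel);
}

As in the patch itself, a failure to create the attribute is worth logging but does not have to be fatal to device registration.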
ESCO_LINK : SCO_LINK; + + hcon = hci_connect(hdev, type, dst); if (!hcon) goto done; @@ -224,6 +226,7 @@ static int sco_connect(struct sock *sk) sk->sk_state = BT_CONNECT; sco_sock_set_timer(sk, sk->sk_sndtimeo); } + done: hci_dev_unlock_bh(hdev); hci_dev_put(hdev); @@ -846,7 +849,7 @@ static int sco_connect_cfm(struct hci_conn *hcon, __u8 status) { BT_DBG("hcon %p bdaddr %s status %d", hcon, batostr(&hcon->dst), status); - if (hcon->type != SCO_LINK) + if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK) return 0; if (!status) { @@ -865,10 +868,11 @@ static int sco_disconn_ind(struct hci_conn *hcon, __u8 reason) { BT_DBG("hcon %p reason %d", hcon, reason); - if (hcon->type != SCO_LINK) + if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK) return 0; sco_conn_del(hcon, bt_err(reason)); + return 0; } diff --git a/net/core/dev.c b/net/core/dev.c index 38b03da5c1c..872658927e4 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -1553,7 +1553,7 @@ gso: return rc; } if (unlikely((netif_queue_stopped(dev) || - netif_subqueue_stopped(dev, skb->queue_mapping)) && + netif_subqueue_stopped(dev, skb)) && skb->next)) return NETDEV_TX_BUSY; } while (skb->next); @@ -1661,7 +1661,7 @@ gso: q = dev->qdisc; if (q->enqueue) { /* reset queue_mapping to zero */ - skb->queue_mapping = 0; + skb_set_queue_mapping(skb, 0); rc = q->enqueue(skb, q); qdisc_run(dev); spin_unlock(&dev->queue_lock); @@ -1692,7 +1692,7 @@ gso: HARD_TX_LOCK(dev, cpu); if (!netif_queue_stopped(dev) && - !netif_subqueue_stopped(dev, skb->queue_mapping)) { + !netif_subqueue_stopped(dev, skb)) { rc = 0; if (!dev_hard_start_xmit(skb, dev)) { HARD_TX_UNLOCK(dev); diff --git a/net/core/neighbour.c b/net/core/neighbour.c index 67ba9914e52..05979e35696 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -1438,6 +1438,9 @@ int neigh_table_clear(struct neigh_table *tbl) free_percpu(tbl->stats); tbl->stats = NULL; + kmem_cache_destroy(tbl->kmem_cachep); + tbl->kmem_cachep = NULL; + return 0; } diff --git a/net/core/netpoll.c b/net/core/netpoll.c index 95daba62496..bf8d18f1b01 100644 --- a/net/core/netpoll.c +++ b/net/core/netpoll.c @@ -67,7 +67,7 @@ static void queue_process(struct work_struct *work) local_irq_save(flags); netif_tx_lock(dev); if ((netif_queue_stopped(dev) || - netif_subqueue_stopped(dev, skb->queue_mapping)) || + netif_subqueue_stopped(dev, skb)) || dev->hard_start_xmit(skb, dev) != NETDEV_TX_OK) { skb_queue_head(&npinfo->txq, skb); netif_tx_unlock(dev); @@ -269,7 +269,7 @@ static void netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) tries > 0; --tries) { if (netif_tx_trylock(dev)) { if (!netif_queue_stopped(dev) && - !netif_subqueue_stopped(dev, skb->queue_mapping)) + !netif_subqueue_stopped(dev, skb)) status = dev->hard_start_xmit(skb, dev); netif_tx_unlock(dev); diff --git a/net/core/pktgen.c b/net/core/pktgen.c index c4719edb55c..de33f36947e 100644 --- a/net/core/pktgen.c +++ b/net/core/pktgen.c @@ -2603,8 +2603,7 @@ static struct sk_buff *fill_packet_ipv4(struct net_device *odev, skb->network_header = skb->tail; skb->transport_header = skb->network_header + sizeof(struct iphdr); skb_put(skb, sizeof(struct iphdr) + sizeof(struct udphdr)); - skb->queue_mapping = pkt_dev->cur_queue_map; - + skb_set_queue_mapping(skb, pkt_dev->cur_queue_map); iph = ip_hdr(skb); udph = udp_hdr(skb); @@ -2941,8 +2940,7 @@ static struct sk_buff *fill_packet_ipv6(struct net_device *odev, skb->network_header = skb->tail; skb->transport_header = skb->network_header + sizeof(struct ipv6hdr); skb_put(skb, sizeof(struct ipv6hdr) + 
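Note: the net/core hunks above (dev.c, netpoll.c, pktgen.c) and the sch_teql.c change further down stop touching skb->queue_mapping directly and use the accessor helpers, with netif_subqueue_stopped() now taking the skb itself. A minimal sketch of the accessor-based transmit check; demo_can_xmit() is illustrative, not a real driver hook.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int demo_can_xmit(struct net_device *dev, struct sk_buff *skb)
{
        u16 subq = skb_get_queue_mapping(skb);

        /* Either the whole device or just this subqueue may be
         * flow-controlled; check both before transmitting. */
        if (netif_queue_stopped(dev) || __netif_subqueue_stopped(dev, subq))
                return 0;
        return 1;
}

static void demo_reset_mapping(struct sk_buff *skb)
{
        /* As done before handing the skb to a single-queue qdisc. */
        skb_set_queue_mapping(skb, 0);
}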
sizeof(struct udphdr)); - skb->queue_mapping = pkt_dev->cur_queue_map; - + skb_set_queue_mapping(skb, pkt_dev->cur_queue_map); iph = ipv6_hdr(skb); udph = udp_hdr(skb); @@ -3385,7 +3383,7 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev) if ((netif_queue_stopped(odev) || (pkt_dev->skb && - netif_subqueue_stopped(odev, pkt_dev->skb->queue_mapping))) || + netif_subqueue_stopped(odev, pkt_dev->skb))) || need_resched()) { idle_start = getCurUs(); @@ -3402,7 +3400,7 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev) pkt_dev->idle_acc += getCurUs() - idle_start; if (netif_queue_stopped(odev) || - netif_subqueue_stopped(odev, pkt_dev->skb->queue_mapping)) { + netif_subqueue_stopped(odev, pkt_dev->skb)) { pkt_dev->next_tx_us = getCurUs(); /* TODO */ pkt_dev->next_tx_ns = 0; goto out; /* Try the next interface */ @@ -3431,7 +3429,7 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev) netif_tx_lock_bh(odev); if (!netif_queue_stopped(odev) && - !netif_subqueue_stopped(odev, pkt_dev->skb->queue_mapping)) { + !netif_subqueue_stopped(odev, pkt_dev->skb)) { atomic_inc(&(pkt_dev->skb->users)); retry_now: diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 70d9b5da96a..4e2c84fcf27 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -2045,7 +2045,7 @@ skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len) if (copy > 0) { if (copy > len) copy = len; - sg[elt].page = virt_to_page(skb->data + offset); + sg_set_page(&sg[elt], virt_to_page(skb->data + offset)); sg[elt].offset = (unsigned long)(skb->data + offset) % PAGE_SIZE; sg[elt].length = copy; elt++; @@ -2065,7 +2065,7 @@ skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len) if (copy > len) copy = len; - sg[elt].page = frag->page; + sg_set_page(&sg[elt], frag->page); sg[elt].offset = frag->page_offset+offset-start; sg[elt].length = copy; elt++; diff --git a/net/dccp/diag.c b/net/dccp/diag.c index 0f3745585a9..d8a3509b26f 100644 --- a/net/dccp/diag.c +++ b/net/dccp/diag.c @@ -68,3 +68,4 @@ module_exit(dccp_diag_fini); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Arnaldo Carvalho de Melo <acme@mandriva.com>"); MODULE_DESCRIPTION("DCCP inet_diag handler"); +MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_INET_DIAG, DCCPDIAG_GETSOCK); diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c index 44f6e17e105..222549ab274 100644 --- a/net/dccp/ipv4.c +++ b/net/dccp/ipv4.c @@ -1037,8 +1037,8 @@ module_exit(dccp_v4_exit); * values directly, Also cover the case where the protocol is not specified, * i.e. net-pf-PF_INET-proto-0-type-SOCK_DCCP */ -MODULE_ALIAS("net-pf-" __stringify(PF_INET) "-proto-33-type-6"); -MODULE_ALIAS("net-pf-" __stringify(PF_INET) "-proto-0-type-6"); +MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET, 33, 6); +MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET, 0, 6); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Arnaldo Carvalho de Melo <acme@mandriva.com>"); MODULE_DESCRIPTION("DCCP - Datagram Congestion Controlled Protocol"); diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c index cac53548c2d..bbadd6681b8 100644 --- a/net/dccp/ipv6.c +++ b/net/dccp/ipv6.c @@ -1219,8 +1219,8 @@ module_exit(dccp_v6_exit); * values directly, Also cover the case where the protocol is not specified, * i.e. 
net-pf-PF_INET6-proto-0-type-SOCK_DCCP */ -MODULE_ALIAS("net-pf-" __stringify(PF_INET6) "-proto-33-type-6"); -MODULE_ALIAS("net-pf-" __stringify(PF_INET6) "-proto-0-type-6"); +MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET6, 33, 6); +MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET6, 0, 6); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Arnaldo Carvalho de Melo <acme@mandriva.com>"); MODULE_DESCRIPTION("DCCPv6 - Datagram Congestion Controlled Protocol"); diff --git a/net/ieee80211/ieee80211_crypt_tkip.c b/net/ieee80211/ieee80211_crypt_tkip.c index 72e6ab66834..c796661a021 100644 --- a/net/ieee80211/ieee80211_crypt_tkip.c +++ b/net/ieee80211/ieee80211_crypt_tkip.c @@ -390,9 +390,7 @@ static int ieee80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv) icv[3] = crc >> 24; crypto_blkcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16); - sg.page = virt_to_page(pos); - sg.offset = offset_in_page(pos); - sg.length = len + 4; + sg_init_one(&sg, pos, len + 4); return crypto_blkcipher_encrypt(&desc, &sg, &sg, len + 4); } @@ -485,9 +483,7 @@ static int ieee80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv) plen = skb->len - hdr_len - 12; crypto_blkcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16); - sg.page = virt_to_page(pos); - sg.offset = offset_in_page(pos); - sg.length = plen + 4; + sg_init_one(&sg, pos, plen + 4); if (crypto_blkcipher_decrypt(&desc, &sg, &sg, plen + 4)) { if (net_ratelimit()) { printk(KERN_DEBUG ": TKIP: failed to decrypt " @@ -539,11 +535,12 @@ static int michael_mic(struct crypto_hash *tfm_michael, u8 * key, u8 * hdr, printk(KERN_WARNING "michael_mic: tfm_michael == NULL\n"); return -1; } - sg[0].page = virt_to_page(hdr); + sg_init_table(sg, 2); + sg_set_page(&sg[0], virt_to_page(hdr)); sg[0].offset = offset_in_page(hdr); sg[0].length = 16; - sg[1].page = virt_to_page(data); + sg_set_page(&sg[1], virt_to_page(data)); sg[1].offset = offset_in_page(data); sg[1].length = data_len; diff --git a/net/ieee80211/ieee80211_crypt_wep.c b/net/ieee80211/ieee80211_crypt_wep.c index 8d182459344..0af6103d715 100644 --- a/net/ieee80211/ieee80211_crypt_wep.c +++ b/net/ieee80211/ieee80211_crypt_wep.c @@ -170,9 +170,7 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv) icv[3] = crc >> 24; crypto_blkcipher_setkey(wep->tx_tfm, key, klen); - sg.page = virt_to_page(pos); - sg.offset = offset_in_page(pos); - sg.length = len + 4; + sg_init_one(&sg, pos, len + 4); return crypto_blkcipher_encrypt(&desc, &sg, &sg, len + 4); } @@ -212,9 +210,7 @@ static int prism2_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv) plen = skb->len - hdr_len - 8; crypto_blkcipher_setkey(wep->rx_tfm, key, klen); - sg.page = virt_to_page(pos); - sg.offset = offset_in_page(pos); - sg.length = plen + 4; + sg_init_one(&sg, pos, plen + 4); if (crypto_blkcipher_decrypt(&desc, &sg, &sg, plen + 4)) return -7; diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c index 7eb83ebed2e..dc429b6b0ba 100644 --- a/net/ipv4/inet_diag.c +++ b/net/ipv4/inet_diag.c @@ -815,6 +815,12 @@ static int inet_diag_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) nlmsg_len(nlh) < hdrlen) return -EINVAL; +#ifdef CONFIG_KMOD + if (inet_diag_table[nlh->nlmsg_type] == NULL) + request_module("net-pf-%d-proto-%d-type-%d", PF_NETLINK, + NETLINK_INET_DIAG, nlh->nlmsg_type); +#endif + if (inet_diag_table[nlh->nlmsg_type] == NULL) return -ENOENT; @@ -914,3 +920,4 @@ static void __exit inet_diag_exit(void) module_init(inet_diag_init); module_exit(inet_diag_exit); MODULE_LICENSE("GPL"); +MODULE_ALIAS_NET_PF_PROTO(PF_NETLINK, 
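Note: the inet_diag.c hunk above asks kmod to load a handler module on demand, and the dccp/tcp diag modules in the neighbouring hunks advertise themselves with MODULE_ALIAS_NET_PF_PROTO_TYPE(). The diff only shows the macro's users, so the following is a rough sketch of how the two halves are assumed to meet: the alias macro stringifies its arguments into the same "net-pf-<pf>-proto-<proto>-type-<type>" string that request_module() formats at run time.

#include <linux/module.h>
#include <linux/kmod.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/netlink.h>
#include <linux/inet_diag.h>

/* Handler module side: publish an alias modprobe can resolve. */
MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_INET_DIAG, TCPDIAG_GETSOCK);

/* Dispatcher side: try to autoload a handler before failing with -ENOENT. */
static void demo_autoload(int nlmsg_type)
{
        request_module("net-pf-%d-proto-%d-type-%d",
                       PF_NETLINK, NETLINK_INET_DIAG, nlmsg_type);
}

MODULE_LICENSE("GPL");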
NETLINK_INET_DIAG); diff --git a/net/ipv4/tcp_diag.c b/net/ipv4/tcp_diag.c index 3904d2158a9..2fbcc7d1b1a 100644 --- a/net/ipv4/tcp_diag.c +++ b/net/ipv4/tcp_diag.c @@ -56,3 +56,4 @@ static void __exit tcp_diag_exit(void) module_init(tcp_diag_init); module_exit(tcp_diag_exit); MODULE_LICENSE("GPL"); +MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_INET_DIAG, TCPDIAG_GETSOCK); diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c index 67cd06613a2..66a9139d46e 100644 --- a/net/ipv6/ah6.c +++ b/net/ipv6/ah6.c @@ -483,6 +483,7 @@ static int ah6_init_state(struct xfrm_state *x) break; case XFRM_MODE_TUNNEL: x->props.header_len += sizeof(struct ipv6hdr); + break; default: goto error; } diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c index b0715432e45..72a659806ca 100644 --- a/net/ipv6/esp6.c +++ b/net/ipv6/esp6.c @@ -360,6 +360,7 @@ static int esp6_init_state(struct xfrm_state *x) break; case XFRM_MODE_TUNNEL: x->props.header_len += sizeof(struct ipv6hdr); + break; default: goto error; } diff --git a/net/mac80211/wep.c b/net/mac80211/wep.c index 6675261e958..cc806d640f7 100644 --- a/net/mac80211/wep.c +++ b/net/mac80211/wep.c @@ -138,9 +138,7 @@ void ieee80211_wep_encrypt_data(struct crypto_blkcipher *tfm, u8 *rc4key, *icv = cpu_to_le32(~crc32_le(~0, data, data_len)); crypto_blkcipher_setkey(tfm, rc4key, klen); - sg.page = virt_to_page(data); - sg.offset = offset_in_page(data); - sg.length = data_len + WEP_ICV_LEN; + sg_init_one(&sg, data, data_len + WEP_ICV_LEN); crypto_blkcipher_encrypt(&desc, &sg, &sg, sg.length); } @@ -204,9 +202,7 @@ int ieee80211_wep_decrypt_data(struct crypto_blkcipher *tfm, u8 *rc4key, __le32 crc; crypto_blkcipher_setkey(tfm, rc4key, klen); - sg.page = virt_to_page(data); - sg.offset = offset_in_page(data); - sg.length = data_len + WEP_ICV_LEN; + sg_init_one(&sg, data, data_len + WEP_ICV_LEN); crypto_blkcipher_decrypt(&desc, &sg, &sg, sg.length); crc = cpu_to_le32(~crc32_le(~0, data, data_len)); diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c index be57cf317a7..421281d9dd1 100644 --- a/net/sched/sch_teql.c +++ b/net/sched/sch_teql.c @@ -266,7 +266,7 @@ static int teql_master_xmit(struct sk_buff *skb, struct net_device *dev) int busy; int nores; int len = skb->len; - int subq = skb->queue_mapping; + int subq = skb_get_queue_mapping(skb); struct sk_buff *skb_res = NULL; start = master->slaves; @@ -284,7 +284,7 @@ restart: if (slave->qdisc_sleeping != q) continue; if (netif_queue_stopped(slave) || - netif_subqueue_stopped(slave, subq) || + __netif_subqueue_stopped(slave, subq) || !netif_running(slave)) { busy = 1; continue; @@ -294,7 +294,7 @@ restart: case 0: if (netif_tx_trylock(slave)) { if (!netif_queue_stopped(slave) && - !netif_subqueue_stopped(slave, subq) && + !__netif_subqueue_stopped(slave, subq) && slave->hard_start_xmit(skb, slave) == 0) { netif_tx_unlock(slave); master->slaves = NEXT_SLAVE(q); diff --git a/net/sctp/auth.c b/net/sctp/auth.c index 78181072471..cbd64b216cc 100644 --- a/net/sctp/auth.c +++ b/net/sctp/auth.c @@ -726,7 +726,8 @@ void sctp_auth_calculate_hmac(const struct sctp_association *asoc, /* set up scatter list */ end = skb_tail_pointer(skb); - sg.page = virt_to_page(auth); + sg_init_table(&sg, 1); + sg_set_page(&sg, virt_to_page(auth)); sg.offset = (unsigned long)(auth) % PAGE_SIZE; sg.length = end - (unsigned char *)auth; diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c index f983a369d4e..658476c4d58 100644 --- a/net/sctp/sm_make_chunk.c +++ b/net/sctp/sm_make_chunk.c @@ -56,7 +56,7 @@ #include <linux/ipv6.h> #include 
<linux/net.h> #include <linux/inet.h> -#include <asm/scatterlist.h> +#include <linux/scatterlist.h> #include <linux/crypto.h> #include <net/sock.h> @@ -1513,7 +1513,8 @@ static sctp_cookie_param_t *sctp_pack_cookie(const struct sctp_endpoint *ep, struct hash_desc desc; /* Sign the message. */ - sg.page = virt_to_page(&cookie->c); + sg_init_table(&sg, 1); + sg_set_page(&sg, virt_to_page(&cookie->c)); sg.offset = (unsigned long)(&cookie->c) % PAGE_SIZE; sg.length = bodysize; keylen = SCTP_SECRET_SIZE; @@ -1585,7 +1586,8 @@ struct sctp_association *sctp_unpack_cookie( /* Check the signature. */ keylen = SCTP_SECRET_SIZE; - sg.page = virt_to_page(bear_cookie); + sg_init_table(&sg, 1); + sg_set_page(&sg, virt_to_page(bear_cookie)); sg.offset = (unsigned long)(bear_cookie) % PAGE_SIZE; sg.length = bodysize; key = (char *)ep->secret_key[ep->current_key]; diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c index bfb6a29633d..32be431affc 100644 --- a/net/sunrpc/auth_gss/gss_krb5_crypto.c +++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c @@ -197,9 +197,9 @@ encryptor(struct scatterlist *sg, void *data) int i = (page_pos + outbuf->page_base) >> PAGE_CACHE_SHIFT; in_page = desc->pages[i]; } else { - in_page = sg->page; + in_page = sg_page(sg); } - desc->infrags[desc->fragno].page = in_page; + sg_set_page(&desc->infrags[desc->fragno], in_page); desc->fragno++; desc->fraglen += sg->length; desc->pos += sg->length; @@ -215,11 +215,11 @@ encryptor(struct scatterlist *sg, void *data) if (ret) return ret; if (fraglen) { - desc->outfrags[0].page = sg->page; + sg_set_page(&desc->outfrags[0], sg_page(sg)); desc->outfrags[0].offset = sg->offset + sg->length - fraglen; desc->outfrags[0].length = fraglen; desc->infrags[0] = desc->outfrags[0]; - desc->infrags[0].page = in_page; + sg_set_page(&desc->infrags[0], in_page); desc->fragno = 1; desc->fraglen = fraglen; } else { @@ -287,7 +287,7 @@ decryptor(struct scatterlist *sg, void *data) if (ret) return ret; if (fraglen) { - desc->frags[0].page = sg->page; + sg_set_page(&desc->frags[0], sg_page(sg)); desc->frags[0].offset = sg->offset + sg->length - fraglen; desc->frags[0].length = fraglen; desc->fragno = 1; diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c index 6a59180e166..3d1f7cdf9dd 100644 --- a/net/sunrpc/xdr.c +++ b/net/sunrpc/xdr.c @@ -1059,7 +1059,7 @@ xdr_process_buf(struct xdr_buf *buf, unsigned int offset, unsigned int len, do { if (thislen > page_len) thislen = page_len; - sg->page = buf->pages[i]; + sg_set_page(sg, buf->pages[i]); sg->offset = page_offset; sg->length = thislen; ret = actor(sg, data); diff --git a/net/xfrm/xfrm_algo.c b/net/xfrm/xfrm_algo.c index 5ced62c19c6..313d4bed3aa 100644 --- a/net/xfrm/xfrm_algo.c +++ b/net/xfrm/xfrm_algo.c @@ -13,6 +13,7 @@ #include <linux/kernel.h> #include <linux/pfkeyv2.h> #include <linux/crypto.h> +#include <linux/scatterlist.h> #include <net/xfrm.h> #if defined(CONFIG_INET_AH) || defined(CONFIG_INET_AH_MODULE) || defined(CONFIG_INET6_AH) || defined(CONFIG_INET6_AH_MODULE) #include <net/ah.h> @@ -552,7 +553,7 @@ int skb_icv_walk(const struct sk_buff *skb, struct hash_desc *desc, if (copy > len) copy = len; - sg.page = virt_to_page(skb->data + offset); + sg_set_page(&sg, virt_to_page(skb->data + offset)); sg.offset = (unsigned long)(skb->data + offset) % PAGE_SIZE; sg.length = copy; @@ -577,7 +578,7 @@ int skb_icv_walk(const struct sk_buff *skb, struct hash_desc *desc, if (copy > len) copy = len; - sg.page = frag->page; + sg_set_page(&sg, frag->page); sg.offset = 
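Note: most of the crypto, SCTP, sunrpc and xfrm hunks above are the same mechanical conversion: stop filling scatterlist fields (page, offset, length) by hand and let the helpers initialise the entries. A minimal sketch of the before/after shape; the buffer names are made up, and sg_set_buf() stands in here for the open-coded page/offset pairs.

#include <linux/scatterlist.h>

/* One flat, virtually-contiguous kernel buffer -> one sg entry. */
static void demo_map_one(struct scatterlist *sg, void *buf, unsigned int len)
{
        /* Replaces:  sg->page   = virt_to_page(buf);
         *            sg->offset = offset_in_page(buf);
         *            sg->length = len;               */
        sg_init_one(sg, buf, len);
}

/* Two buffers -> a two-entry table, as in michael_mic() above. */
static void demo_map_pair(struct scatterlist sg[2],
                          void *hdr, unsigned int hdr_len,
                          void *data, unsigned int data_len)
{
        sg_init_table(sg, 2);
        sg_set_buf(&sg[0], hdr, hdr_len);
        sg_set_buf(&sg[1], data, data_len);
}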
frag->page_offset + offset-start; sg.length = copy; diff --git a/security/commoncap.c b/security/commoncap.c index 43f902750a1..bf67871173e 100644 --- a/security/commoncap.c +++ b/security/commoncap.c @@ -190,7 +190,8 @@ int cap_inode_killpriv(struct dentry *dentry) return inode->i_op->removexattr(dentry, XATTR_NAME_CAPS); } -static inline int cap_from_disk(__le32 *caps, struct linux_binprm *bprm, +static inline int cap_from_disk(struct vfs_cap_data *caps, + struct linux_binprm *bprm, int size) { __u32 magic_etc; @@ -198,7 +199,7 @@ static inline int cap_from_disk(__le32 *caps, struct linux_binprm *bprm, if (size != XATTR_CAPS_SZ) return -EINVAL; - magic_etc = le32_to_cpu(caps[0]); + magic_etc = le32_to_cpu(caps->magic_etc); switch ((magic_etc & VFS_CAP_REVISION_MASK)) { case VFS_CAP_REVISION: @@ -206,8 +207,8 @@ static inline int cap_from_disk(__le32 *caps, struct linux_binprm *bprm, bprm->cap_effective = true; else bprm->cap_effective = false; - bprm->cap_permitted = to_cap_t( le32_to_cpu(caps[1]) ); - bprm->cap_inheritable = to_cap_t( le32_to_cpu(caps[2]) ); + bprm->cap_permitted = to_cap_t(le32_to_cpu(caps->permitted)); + bprm->cap_inheritable = to_cap_t(le32_to_cpu(caps->inheritable)); return 0; default: return -EINVAL; @@ -219,7 +220,7 @@ static int get_file_caps(struct linux_binprm *bprm) { struct dentry *dentry; int rc = 0; - __le32 v1caps[XATTR_CAPS_SZ]; + struct vfs_cap_data incaps; struct inode *inode; if (bprm->file->f_vfsmnt->mnt_flags & MNT_NOSUID) { @@ -232,8 +233,14 @@ static int get_file_caps(struct linux_binprm *bprm) if (!inode->i_op || !inode->i_op->getxattr) goto out; - rc = inode->i_op->getxattr(dentry, XATTR_NAME_CAPS, &v1caps, - XATTR_CAPS_SZ); + rc = inode->i_op->getxattr(dentry, XATTR_NAME_CAPS, NULL, 0); + if (rc > 0) { + if (rc == XATTR_CAPS_SZ) + rc = inode->i_op->getxattr(dentry, XATTR_NAME_CAPS, + &incaps, XATTR_CAPS_SZ); + else + rc = -EINVAL; + } if (rc == -ENODATA || rc == -EOPNOTSUPP) { /* no data, that's ok */ rc = 0; @@ -242,7 +249,7 @@ static int get_file_caps(struct linux_binprm *bprm) if (rc < 0) goto out; - rc = cap_from_disk(v1caps, bprm, rc); + rc = cap_from_disk(&incaps, bprm, rc); if (rc) printk(KERN_NOTICE "%s: cap_from_disk returned %d for %s\n", __FUNCTION__, rc, bprm->filename); diff --git a/sound/core/control.c b/sound/core/control.c index 4c3aa8e1037..df0774c76f6 100644 --- a/sound/core/control.c +++ b/sound/core/control.c @@ -93,15 +93,16 @@ static int snd_ctl_open(struct inode *inode, struct file *file) static void snd_ctl_empty_read_queue(struct snd_ctl_file * ctl) { + unsigned long flags; struct snd_kctl_event *cread; - spin_lock(&ctl->read_lock); + spin_lock_irqsave(&ctl->read_lock, flags); while (!list_empty(&ctl->events)) { cread = snd_kctl_event(ctl->events.next); list_del(&cread->list); kfree(cread); } - spin_unlock(&ctl->read_lock); + spin_unlock_irqrestore(&ctl->read_lock, flags); } static int snd_ctl_release(struct inode *inode, struct file *file) diff --git a/sound/i2c/other/tea575x-tuner.c b/sound/i2c/other/tea575x-tuner.c index fe31bb5cffb..37c47fb95ac 100644 --- a/sound/i2c/other/tea575x-tuner.c +++ b/sound/i2c/other/tea575x-tuner.c @@ -189,7 +189,6 @@ void snd_tea575x_init(struct snd_tea575x *tea) tea->vd.owner = tea->card->module; strcpy(tea->vd.name, tea->tea5759 ? 
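Note: the sound/core/control.c hunk above switches snd_ctl_empty_read_queue() to the irqsave spinlock variants, so the event queue can be emptied safely no matter whether interrupts are already disabled on the calling CPU. A minimal sketch of that locking pattern with made-up structure names.

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/slab.h>

struct demo_event {
        struct list_head list;
};

struct demo_queue {
        spinlock_t lock;
        struct list_head events;
};

static void demo_empty_queue(struct demo_queue *q)
{
        unsigned long flags;
        struct demo_event *ev;

        spin_lock_irqsave(&q->lock, flags);
        while (!list_empty(&q->events)) {
                ev = list_first_entry(&q->events, struct demo_event, list);
                list_del(&ev->list);
                kfree(ev);
        }
        spin_unlock_irqrestore(&q->lock, flags);
}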
"TEA5759 radio" : "TEA5757 radio"); tea->vd.type = VID_TYPE_TUNER; - tea->vd.hardware = VID_HARDWARE_RTRACK; /* FIXME: assign new number */ tea->vd.release = snd_tea575x_release; video_set_drvdata(&tea->vd, tea); tea->vd.fops = &tea->fops; diff --git a/sound/pci/bt87x.c b/sound/pci/bt87x.c index 91f9e6a112f..2dba752faf4 100644 --- a/sound/pci/bt87x.c +++ b/sound/pci/bt87x.c @@ -165,7 +165,7 @@ struct snd_bt87x_board { unsigned no_digital:1; /* No digital input */ }; -static const __devinitdata struct snd_bt87x_board snd_bt87x_boards[] = { +static __devinitdata struct snd_bt87x_board snd_bt87x_boards[] = { [SND_BT87X_BOARD_UNKNOWN] = { .dig_rate = 32000, /* just a guess */ }, @@ -848,7 +848,7 @@ static int __devinit snd_bt87x_detect_card(struct pci_dev *pci) int i; const struct pci_device_id *supported; - supported = pci_match_device(&driver, pci); + supported = pci_match_id(snd_bt87x_ids, pci); if (supported && supported->driver_data > 0) return supported->driver_data; diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c index 187533e477c..ad4cb38109f 100644 --- a/sound/pci/hda/hda_codec.c +++ b/sound/pci/hda/hda_codec.c @@ -626,24 +626,19 @@ int __devinit snd_hda_codec_new(struct hda_bus *bus, unsigned int codec_addr, snd_hda_get_codec_name(codec, bus->card->mixername, sizeof(bus->card->mixername)); -#ifdef CONFIG_SND_HDA_GENERIC if (is_generic_config(codec)) { err = snd_hda_parse_generic_codec(codec); goto patched; } -#endif if (codec->preset && codec->preset->patch) { err = codec->preset->patch(codec); goto patched; } /* call the default parser */ -#ifdef CONFIG_SND_HDA_GENERIC err = snd_hda_parse_generic_codec(codec); -#else - printk(KERN_ERR "hda-codec: No codec parser is available\n"); - err = -ENODEV; -#endif + if (err < 0) + printk(KERN_ERR "hda-codec: No codec parser is available\n"); patched: if (err < 0) { diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h index a79d0ed5469..20c5e625037 100644 --- a/sound/pci/hda/hda_local.h +++ b/sound/pci/hda/hda_local.h @@ -245,7 +245,14 @@ int snd_hda_multi_out_analog_cleanup(struct hda_codec *codec, /* * generic codec parser */ +#ifdef CONFIG_SND_HDA_GENERIC int snd_hda_parse_generic_codec(struct hda_codec *codec); +#else +static inline int snd_hda_parse_generic_codec(struct hda_codec *codec) +{ + return -ENODEV; +} +#endif /* * generic proc interface @@ -303,16 +310,17 @@ enum { extern const char *auto_pin_cfg_labels[AUTO_PIN_LAST]; +#define AUTO_CFG_MAX_OUTS 5 + struct auto_pin_cfg { int line_outs; - hda_nid_t line_out_pins[5]; /* sorted in the order of - * Front/Surr/CLFE/Side - */ + /* sorted in the order of Front/Surr/CLFE/Side */ + hda_nid_t line_out_pins[AUTO_CFG_MAX_OUTS]; int speaker_outs; - hda_nid_t speaker_pins[5]; + hda_nid_t speaker_pins[AUTO_CFG_MAX_OUTS]; int hp_outs; int line_out_type; /* AUTO_PIN_XXX_OUT */ - hda_nid_t hp_pins[5]; + hda_nid_t hp_pins[AUTO_CFG_MAX_OUTS]; hda_nid_t input_pins[AUTO_PIN_LAST]; hda_nid_t dig_out_pin; hda_nid_t dig_in_pin; diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c index 54cfd4526d2..0ee8ae4d441 100644 --- a/sound/pci/hda/patch_analog.c +++ b/sound/pci/hda/patch_analog.c @@ -72,7 +72,7 @@ struct ad198x_spec { unsigned int num_kctl_alloc, num_kctl_used; struct snd_kcontrol_new *kctl_alloc; struct hda_input_mux private_imux; - hda_nid_t private_dac_nids[4]; + hda_nid_t private_dac_nids[AUTO_CFG_MAX_OUTS]; unsigned int jack_present :1; @@ -612,7 +612,8 @@ static void ad1986a_hp_automute(struct hda_codec *codec) unsigned int present; 
present = snd_hda_codec_read(codec, 0x1a, 0, AC_VERB_GET_PIN_SENSE, 0); - spec->jack_present = (present & 0x80000000) != 0; + /* Lenovo N100 seems to report the reversed bit for HP jack-sensing */ + spec->jack_present = !(present & 0x80000000); ad1986a_update_hp(codec); } diff --git a/sound/pci/hda/patch_cmedia.c b/sound/pci/hda/patch_cmedia.c index 2468f317122..6c54793bf42 100644 --- a/sound/pci/hda/patch_cmedia.c +++ b/sound/pci/hda/patch_cmedia.c @@ -50,7 +50,7 @@ struct cmi_spec { /* playback */ struct hda_multi_out multiout; - hda_nid_t dac_nids[4]; /* NID for each DAC */ + hda_nid_t dac_nids[AUTO_CFG_MAX_OUTS]; /* NID for each DAC */ int num_dacs; /* capture */ @@ -73,7 +73,6 @@ struct cmi_spec { unsigned int pin_def_confs; /* multichannel pins */ - hda_nid_t multich_pin[4]; /* max 8-channel */ struct hda_verb multi_init[9]; /* 2 verbs for each pin + terminator */ }; diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index 080e3001d9c..6aa07398674 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -85,7 +85,7 @@ struct conexant_spec { unsigned int num_kctl_alloc, num_kctl_used; struct snd_kcontrol_new *kctl_alloc; struct hda_input_mux private_imux; - hda_nid_t private_dac_nids[4]; + hda_nid_t private_dac_nids[AUTO_CFG_MAX_OUTS]; }; @@ -554,10 +554,16 @@ static struct snd_kcontrol_new cxt5045_mixers[] = { .get = conexant_mux_enum_get, .put = conexant_mux_enum_put }, - HDA_CODEC_VOLUME("Int Mic Volume", 0x1a, 0x01, HDA_INPUT), - HDA_CODEC_MUTE("Int Mic Switch", 0x1a, 0x01, HDA_INPUT), - HDA_CODEC_VOLUME("Ext Mic Volume", 0x1a, 0x02, HDA_INPUT), - HDA_CODEC_MUTE("Ext Mic Switch", 0x1a, 0x02, HDA_INPUT), + HDA_CODEC_VOLUME("Int Mic Capture Volume", 0x1a, 0x01, HDA_INPUT), + HDA_CODEC_MUTE("Int Mic Capture Switch", 0x1a, 0x01, HDA_INPUT), + HDA_CODEC_VOLUME("Ext Mic Capture Volume", 0x1a, 0x02, HDA_INPUT), + HDA_CODEC_MUTE("Ext Mic Capture Switch", 0x1a, 0x02, HDA_INPUT), + HDA_CODEC_VOLUME("PCM Playback Volume", 0x17, 0x0, HDA_INPUT), + HDA_CODEC_MUTE("PCM Playback Switch", 0x17, 0x0, HDA_INPUT), + HDA_CODEC_VOLUME("Int Mic Playback Volume", 0x17, 0x1, HDA_INPUT), + HDA_CODEC_MUTE("Int Mic Playback Switch", 0x17, 0x1, HDA_INPUT), + HDA_CODEC_VOLUME("Ext Mic Playback Volume", 0x17, 0x2, HDA_INPUT), + HDA_CODEC_MUTE("Ext Mic Playback Switch", 0x17, 0x2, HDA_INPUT), HDA_BIND_VOL("Master Playback Volume", &cxt5045_hp_bind_master_vol), { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, @@ -576,16 +582,15 @@ static struct hda_verb cxt5045_init_verbs[] = { {0x12, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_IN }, {0x14, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_IN|AC_PINCTL_VREF_80 }, /* HP, Amp */ - {0x11, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_HP }, - {0x17, AC_VERB_SET_CONNECT_SEL,0x01}, - {0x17, AC_VERB_SET_AMP_GAIN_MUTE, - AC_AMP_SET_OUTPUT|AC_AMP_SET_RIGHT|AC_AMP_SET_LEFT|0x01}, - {0x17, AC_VERB_SET_AMP_GAIN_MUTE, - AC_AMP_SET_OUTPUT|AC_AMP_SET_RIGHT|AC_AMP_SET_LEFT|0x02}, - {0x17, AC_VERB_SET_AMP_GAIN_MUTE, - AC_AMP_SET_OUTPUT|AC_AMP_SET_RIGHT|AC_AMP_SET_LEFT|0x03}, - {0x17, AC_VERB_SET_AMP_GAIN_MUTE, - AC_AMP_SET_OUTPUT|AC_AMP_SET_RIGHT|AC_AMP_SET_LEFT|0x04}, + {0x10, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT}, + {0x10, AC_VERB_SET_CONNECT_SEL, 0x1}, + {0x11, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_HP}, + {0x11, AC_VERB_SET_CONNECT_SEL, 0x1}, + {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)}, + {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(1)}, + {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(2)}, + {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(3)}, + {0x17, 
AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(4)}, /* Record selector: Int mic */ {0x1a, AC_VERB_SET_CONNECT_SEL,0x1}, {0x1a, AC_VERB_SET_AMP_GAIN_MUTE, diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 53b0428abfc..d9f78c809ee 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -238,7 +238,7 @@ struct alc_spec { unsigned int num_kctl_alloc, num_kctl_used; struct snd_kcontrol_new *kctl_alloc; struct hda_input_mux private_imux; - hda_nid_t private_dac_nids[5]; + hda_nid_t private_dac_nids[AUTO_CFG_MAX_OUTS]; /* hooks */ void (*init_hook)(struct hda_codec *codec); diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c index bf950195107..f9b2c435a13 100644 --- a/sound/pci/hda/patch_sigmatel.c +++ b/sound/pci/hda/patch_sigmatel.c @@ -111,6 +111,7 @@ struct sigmatel_spec { unsigned int alt_switch: 1; unsigned int hp_detect: 1; unsigned int gpio_mute: 1; + unsigned int no_vol_knob :1; unsigned int gpio_mask, gpio_data; @@ -1930,7 +1931,8 @@ static int stac92xx_auto_create_hp_ctls(struct hda_codec *codec, } if (spec->multiout.hp_nid) { const char *pfx; - if (old_num_dacs == spec->multiout.num_dacs) + if (old_num_dacs == spec->multiout.num_dacs && + spec->no_vol_knob) pfx = "Master"; else pfx = "Headphone"; @@ -2487,6 +2489,7 @@ static int patch_stac9200(struct hda_codec *codec) codec->spec = spec; spec->num_pins = ARRAY_SIZE(stac9200_pin_nids); spec->pin_nids = stac9200_pin_nids; + spec->no_vol_knob = 1; spec->board_config = snd_hda_check_board_config(codec, STAC_9200_MODELS, stac9200_models, stac9200_cfg_tbl); @@ -2541,6 +2544,7 @@ static int patch_stac925x(struct hda_codec *codec) codec->spec = spec; spec->num_pins = ARRAY_SIZE(stac925x_pin_nids); spec->pin_nids = stac925x_pin_nids; + spec->no_vol_knob = 1; spec->board_config = snd_hda_check_board_config(codec, STAC_925x_MODELS, stac925x_models, stac925x_cfg_tbl); diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c index 33b5e1ffa81..4cdf3e6df4b 100644 --- a/sound/pci/hda/patch_via.c +++ b/sound/pci/hda/patch_via.c @@ -114,7 +114,7 @@ struct via_spec { unsigned int num_kctl_alloc, num_kctl_used; struct snd_kcontrol_new *kctl_alloc; struct hda_input_mux private_imux; - hda_nid_t private_dac_nids[4]; + hda_nid_t private_dac_nids[AUTO_CFG_MAX_OUTS]; #ifdef CONFIG_SND_HDA_POWER_SAVE struct hda_loopback_check loopback; diff --git a/sound/sh/aica.c b/sound/sh/aica.c index 131ec481228..88dc840152c 100644 --- a/sound/sh/aica.c +++ b/sound/sh/aica.c @@ -106,11 +106,14 @@ static void spu_write_wait(void) static void spu_memset(u32 toi, u32 what, int length) { int i; + unsigned long flags; snd_assert(length % 4 == 0, return); for (i = 0; i < length; i++) { if (!(i % 8)) spu_write_wait(); + local_irq_save(flags); writel(what, toi + SPU_MEMORY_BASE); + local_irq_restore(flags); toi++; } } @@ -118,6 +121,7 @@ static void spu_memset(u32 toi, u32 what, int length) /* spu_memload - write to SPU address space */ static void spu_memload(u32 toi, void *from, int length) { + unsigned long flags; u32 *froml = from; u32 __iomem *to = (u32 __iomem *) (SPU_MEMORY_BASE + toi); int i; @@ -128,7 +132,9 @@ static void spu_memload(u32 toi, void *from, int length) if (!(i % 8)) spu_write_wait(); val = *froml; + local_irq_save(flags); writel(val, to); + local_irq_restore(flags); froml++; to++; } @@ -138,28 +144,36 @@ static void spu_memload(u32 toi, void *from, int length) static void spu_disable(void) { int i; + unsigned long flags; u32 regval; spu_write_wait(); regval = 
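Note: the sound/sh/aica.c hunks above and just below bracket every register write with local_irq_save()/local_irq_restore(), so each writel() to the sound processor happens with local interrupts off. A minimal sketch of that wrapper; demo_mmio_write() and its register argument are illustrative only.

#include <linux/types.h>
#include <linux/io.h>
#include <linux/irqflags.h>

static void demo_mmio_write(void __iomem *reg, u32 val)
{
        unsigned long flags;

        local_irq_save(flags);
        writel(val, reg);
        local_irq_restore(flags);
}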
readl(ARM_RESET_REGISTER); regval |= 1; spu_write_wait(); + local_irq_save(flags); writel(regval, ARM_RESET_REGISTER); + local_irq_restore(flags); for (i = 0; i < 64; i++) { spu_write_wait(); regval = readl(SPU_REGISTER_BASE + (i * 0x80)); regval = (regval & ~0x4000) | 0x8000; spu_write_wait(); + local_irq_save(flags); writel(regval, SPU_REGISTER_BASE + (i * 0x80)); + local_irq_restore(flags); } } /* spu_enable - set spu registers to enable sound output */ static void spu_enable(void) { + unsigned long flags; u32 regval = readl(ARM_RESET_REGISTER); regval &= ~1; spu_write_wait(); + local_irq_save(flags); writel(regval, ARM_RESET_REGISTER); + local_irq_restore(flags); } /* @@ -168,25 +182,34 @@ static void spu_enable(void) */ static void spu_reset(void) { + unsigned long flags; spu_disable(); spu_memset(0, 0, 0x200000 / 4); /* Put ARM7 in endless loop */ + local_irq_save(flags); ctrl_outl(0xea000002, SPU_MEMORY_BASE); + local_irq_restore(flags); spu_enable(); } /* aica_chn_start - write to spu to start playback */ static void aica_chn_start(void) { + unsigned long flags; spu_write_wait(); + local_irq_save(flags); writel(AICA_CMD_KICK | AICA_CMD_START, (u32 *) AICA_CONTROL_POINT); + local_irq_restore(flags); } /* aica_chn_halt - write to spu to halt playback */ static void aica_chn_halt(void) { + unsigned long flags; spu_write_wait(); + local_irq_save(flags); writel(AICA_CMD_KICK | AICA_CMD_STOP, (u32 *) AICA_CONTROL_POINT); + local_irq_restore(flags); } /* ALSA code below */ @@ -213,12 +236,13 @@ static int aica_dma_transfer(int channels, int buffer_size, int q, err, period_offset; struct snd_card_aica *dreamcastcard; struct snd_pcm_runtime *runtime; - err = 0; + unsigned long flags; dreamcastcard = substream->pcm->private_data; period_offset = dreamcastcard->clicks; period_offset %= (AICA_PERIOD_NUMBER / channels); runtime = substream->runtime; for (q = 0; q < channels; q++) { + local_irq_save(flags); err = dma_xfer(AICA_DMA_CHANNEL, (unsigned long) (runtime->dma_area + (AICA_BUFFER_SIZE * q) / @@ -228,9 +252,12 @@ static int aica_dma_transfer(int channels, int buffer_size, AICA_CHANNEL0_OFFSET + q * CHANNEL_OFFSET + AICA_PERIOD_SIZE * period_offset, buffer_size / channels, AICA_DMA_MODE); - if (unlikely(err < 0)) + if (unlikely(err < 0)) { + local_irq_restore(flags); break; + } dma_wait_for_completion(AICA_DMA_CHANNEL); + local_irq_restore(flags); } return err; } diff --git a/sound/sparc/cs4231.c b/sound/sparc/cs4231.c index 9785382a5f3..f8c7a120ccb 100644 --- a/sound/sparc/cs4231.c +++ b/sound/sparc/cs4231.c @@ -400,65 +400,44 @@ static void snd_cs4231_mce_up(struct snd_cs4231 *chip) static void snd_cs4231_mce_down(struct snd_cs4231 *chip) { - unsigned long flags; - unsigned long end_time; - int timeout; + unsigned long flags, timeout; + int reg; - spin_lock_irqsave(&chip->lock, flags); snd_cs4231_busy_wait(chip); + spin_lock_irqsave(&chip->lock, flags); #ifdef CONFIG_SND_DEBUG if (__cs4231_readb(chip, CS4231U(chip, REGSEL)) & CS4231_INIT) snd_printdd("mce_down [%p] - auto calibration time out (0)\n", CS4231U(chip, REGSEL)); #endif chip->mce_bit &= ~CS4231_MCE; - timeout = __cs4231_readb(chip, CS4231U(chip, REGSEL)); - __cs4231_writeb(chip, chip->mce_bit | (timeout & 0x1f), + reg = __cs4231_readb(chip, CS4231U(chip, REGSEL)); + __cs4231_writeb(chip, chip->mce_bit | (reg & 0x1f), CS4231U(chip, REGSEL)); - if (timeout == 0x80) - snd_printdd("mce_down [%p]: serious init problem - " - "codec still busy\n", - chip->port); - if ((timeout & CS4231_MCE) == 0) { + if (reg == 0x80) + 
snd_printdd("mce_down [%p]: serious init problem " + "- codec still busy\n", chip->port); + if ((reg & CS4231_MCE) == 0) { spin_unlock_irqrestore(&chip->lock, flags); return; } /* - * Wait for (possible -- during init auto-calibration may not be set) - * calibration process to start. Needs upto 5 sample periods on AD1848 - * which at the slowest possible rate of 5.5125 kHz means 907 us. + * Wait for auto-calibration (AC) process to finish, i.e. ACI to go low. */ - msleep(1); - - /* check condition up to 250ms */ - end_time = jiffies + msecs_to_jiffies(250); - while (snd_cs4231_in(chip, CS4231_TEST_INIT) & - CS4231_CALIB_IN_PROGRESS) { - + timeout = jiffies + msecs_to_jiffies(250); + do { spin_unlock_irqrestore(&chip->lock, flags); - if (time_after(jiffies, end_time)) { - snd_printk("mce_down - " - "auto calibration time out (2)\n"); - return; - } - msleep(1); - spin_lock_irqsave(&chip->lock, flags); - } - - /* check condition up to 100ms */ - end_time = jiffies + msecs_to_jiffies(100); - while (__cs4231_readb(chip, CS4231U(chip, REGSEL)) & CS4231_INIT) { - spin_unlock_irqrestore(&chip->lock, flags); - if (time_after(jiffies, end_time)) { - snd_printk("mce_down - " - "auto calibration time out (3)\n"); - return; - } msleep(1); spin_lock_irqsave(&chip->lock, flags); - } + reg = snd_cs4231_in(chip, CS4231_TEST_INIT); + reg &= CS4231_CALIB_IN_PROGRESS; + } while (reg && time_before(jiffies, timeout)); spin_unlock_irqrestore(&chip->lock, flags); + + if (reg) + snd_printk(KERN_ERR + "mce_down - auto calibration time out (2)\n"); } static void snd_cs4231_advance_dma(struct cs4231_dma_control *dma_cont, diff --git a/sound/usb/usbquirks.h b/sound/usb/usbquirks.h index 743568f8990..59410f43770 100644 --- a/sound/usb/usbquirks.h +++ b/sound/usb/usbquirks.h @@ -84,6 +84,15 @@ USB_DEVICE_ID_MATCH_INT_CLASS | USB_DEVICE_ID_MATCH_INT_SUBCLASS, .idVendor = 0x046d, + .idProduct = 0x08f5, + .bInterfaceClass = USB_CLASS_AUDIO, + .bInterfaceSubClass = USB_SUBCLASS_AUDIO_CONTROL +}, +{ + .match_flags = USB_DEVICE_ID_MATCH_DEVICE | + USB_DEVICE_ID_MATCH_INT_CLASS | + USB_DEVICE_ID_MATCH_INT_SUBCLASS, + .idVendor = 0x046d, .idProduct = 0x08f6, .bInterfaceClass = USB_CLASS_AUDIO, .bInterfaceSubClass = USB_SUBCLASS_AUDIO_CONTROL |
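Note: the cs4231 mce_down() rewrite above collapses two open-coded calibration waits into a single jiffies-bounded poll: take a deadline with msecs_to_jiffies(), sleep a millisecond per iteration, and re-check the status until it clears or the deadline passes. A minimal sketch of that pattern; demo_busy() is a placeholder for the hardware status read.

#include <linux/jiffies.h>
#include <linux/delay.h>
#include <linux/errno.h>

static int demo_wait_ready(int (*demo_busy)(void))
{
        unsigned long timeout = jiffies + msecs_to_jiffies(250);
        int busy;

        do {
                msleep(1);
                busy = demo_busy();
        } while (busy && time_before(jiffies, timeout));

        return busy ? -ETIMEDOUT : 0;
}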