Age | Commit message | Author |
|
When unmapping grants, instead of converting the kernel map ops to
unmap ops on the fly, pre-populate the set of unmap ops.
This allows the grant unmap for the kernel mappings to be trivially
batched in the future.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
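A minimal sketch of the idea (the helper name is made up; the fields are the
public gnttab_map/unmap_grant_ref ABI):

    /* Hypothetical helper: derive the kernel unmap ops from the map ops up
     * front, so the unmap can later be issued as one batched
     * GNTTABOP_unmap_grant_ref call.  The handle is only known after the map
     * hypercall returns, so it is copied over at that point. */
    static void prime_kernel_unmap_ops(struct gnttab_unmap_grant_ref *kunmap_ops,
                                       const struct gnttab_map_grant_ref *kmap_ops,
                                       unsigned int count)
    {
        unsigned int i;

        for (i = 0; i < count; i++) {
            kunmap_ops[i].host_addr    = kmap_ops[i].host_addr;
            kunmap_ops[i].dev_bus_addr = kmap_ops[i].dev_bus_addr;
            kunmap_ops[i].handle       = kmap_ops[i].handle;
        }
    }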
|
|
Introduce an arch specific function to find out whether a particular dma
mapping operation needs to bounce on the swiotlb buffer.
On ARM and ARM64, if the page involved is a foreign page and the device
is not coherent, we need to bounce because at unmap time we cannot
execute any required cache maintenance operations (we don't know how to
find the pfn from the mfn).
No change of behaviour for x86.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
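A sketch of the decision being introduced (names are illustrative, not the
exact arch hook added by the patch):

    /* Bounce when the page is foreign (pfn != mfn) and the device is not
     * cache-coherent; x86 never needs to bounce for this reason. */
    static inline bool xen_mapping_needs_bounce(struct device *dev,
                                                unsigned long pfn,
                                                unsigned long mfn)
    {
    #if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
        return pfn != mfn && !is_device_dma_coherent(dev);
    #else
        return false;
    #endif
    }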
|
|
In xen_dma_map_page, if the page is a local page, call the native
map_page dma_ops. If the page is foreign, call __xen_dma_map_page that
issues any required cache maintenance operations via hypercall.
The reason for doing this is that the native dma_ops map_page could
allocate buffers that need to be freed. Since for a foreign page we don't
call the native unmap_page dma_ops function, that would result in a memory leak.
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
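A simplified sketch of the dispatch (argument list of that era, dma_attrs
based; __generic_dma_ops and __xen_dma_map_page are the names used in the
surrounding commits):

    /* A local page (pfn of dev_addr == pfn of the page) goes through the
     * native map_page dma_op, a foreign page through the hypercall-based
     * cache maintenance path. */
    static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
                                        dma_addr_t dev_addr, unsigned long offset,
                                        size_t size, enum dma_data_direction dir,
                                        struct dma_attrs *attrs)
    {
        bool local = PFN_DOWN(dev_addr) == page_to_pfn(page);

        if (local)
            __generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size,
                                               dir, attrs);
        else
            __xen_dma_map_page(hwdev, page, dev_addr, offset, size, dir, attrs);
    }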
|
|
dev_addr is the machine address of the page.
The new parameter can be used by the ARM and ARM64 implementations of
xen_dma_map_page to find out if the page is a local page (pfn == mfn) or
a foreign page (pfn != mfn).
dev_addr could be retrieved again from the physical address, using
pfn_to_mfn, but it requires accessing an rbtree. Since we already have
the dev_addr in our hands at the call site there is no need to get the
mfn twice.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
Remove code duplication in mm32.c by calling the native dma_ops if the
page is a local page (not a foreign page). Use a simple pfn_valid(pfn)
check to figure out if the page is local, exploiting the fact that dom0
is mapped 1:1, therefore pfn_valid always returns false when called on a
foreign mfn.
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
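A sketch of the check, relying on the 1:1 dom0 mapping described above (the
helper name is made up):

    /* dom0 is mapped 1:1, so a foreign mfn never corresponds to a valid
     * local pfn: pfn_valid() is enough to tell local from foreign. */
    static inline bool dma_handle_is_local(dma_addr_t handle)
    {
        return pfn_valid(PFN_DOWN(handle));
    }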
|
|
Remove the rbtree used to keep track of machine to physical mappings:
the frontend can grant the same page multiple times, leading to errors
inserting or removing entries from the mach_to_phys tree.
Linux only needed to know the physical address corresponding to a given
machine address in swiotlb-xen. Now that swiotlb-xen can call the
xen_dma_* functions passing the machine address directly, we can remove
it.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Denis Schneider <v1ne2go@gmail.com>
|
|
xen_dma_unmap_page, xen_dma_sync_single_for_cpu and
xen_dma_sync_single_for_device are currently implemented by calling into
the corresponding generic ARM implementation of these functions. In
order to do this, firstly the dma_addr_t handle, that on Xen is a
machine address, needs to be translated into a physical address. The
operation is expensive and inaccurate, given that a single machine
address can correspond to multiple physical addresses in one domain,
because the same page can be granted multiple times by the frontend.
To avoid this problem, we introduce a Xen specific implementation of
xen_dma_unmap_page, xen_dma_sync_single_for_cpu and
xen_dma_sync_single_for_device, that can operate on machine addresses
directly.
The new implementation relies on the fact that the hypervisor creates a
second p2m mapping of any grant pages at physical address == machine
address of the page for dom0. Therefore we can access memory at physical
address == dma_addr_t handle and perform the cache flushing there. Some
cache maintenance operations require a virtual address. Instead of using
ioremap_cache, which is not safe in interrupt context, we allocate a
per-cpu PAGE_KERNEL scratch page and manually update its pte.
arm64 doesn't need cache maintenance operations on unmap for now.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Denis Schneider <v1ne2go@gmail.com>
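A rough sketch of the per-cpu scratch mapping (variable and function names are
invented; preemption handling and the unmap path are omitted):

    static DEFINE_PER_CPU(unsigned long, xen_scratch_virt);   /* PAGE_KERNEL VA */
    static DEFINE_PER_CPU(pte_t *, xen_scratch_ptep);         /* its pte */

    /* Point the scratch pte at the machine page behind 'handle' (for dom0 the
     * grant is also visible at physical address == machine address), flush
     * the stale TLB entry and return a VA usable for cache maintenance. */
    static void *xen_remap_machine_page(dma_addr_t handle)
    {
        unsigned long virt = get_cpu_var(xen_scratch_virt);
        pte_t *ptep = __this_cpu_read(xen_scratch_ptep);
        unsigned long offset = handle & ~PAGE_MASK;

        set_pte_ext(ptep, pfn_pte(PFN_DOWN(handle), PAGE_KERNEL), 0);
        local_flush_tlb_kernel_page(virt);

        return (void *)(virt + offset);
    }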
|
|
Introduce HYPERVISOR_suspend() and a few additional empty stubs for
Xen arch specific functions called by drivers/xen/manage.c.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
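A sketch of what HYPERVISOR_suspend amounts to on ARM, assuming the generic
sched_op shutdown interface (simplified; the additional stubs are empty):

    static inline int HYPERVISOR_suspend(unsigned long start_info_mfn)
    {
        struct sched_shutdown r = { .reason = SHUTDOWN_suspend };

        /* start_info_mfn is unused on ARM; suspend is just a shutdown
         * hypercall with the "suspend" reason. */
        return HYPERVISOR_sched_op(SCHEDOP_shutdown, &r);
    }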
|
|
virt_to_pfn has been defined in asm/memory.h by the commit e26a9e0 "ARM: Better
virt_to_page() handling"
This results in a compilation warning when CONFIG_XEN is enabled.
arch/arm/include/asm/xen/page.h:80:0: warning: "virt_to_pfn" redefined [enabled by default]
#define virt_to_pfn(v) (PFN_DOWN(__pa(v)))
^
In file included from arch/arm/include/asm/page.h:163:0,
from arch/arm/include/asm/xen/page.h:4,
from include/xen/page.h:4,
from arch/arm/xen/grant-table.c:33:
The definition in memory.h is nearly the same (it directly expands PFN_DOWN),
so we can safely drop virt_to_pfn from the Xen include.
Signed-off-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
|
|
As part of this make the usual change to xen_ulong_t in place of unsigned long.
This change has no impact on x86.
The Linux definition of struct multicall_entry.result differs from the Xen
definition, I think for good reasons, and used a long rather than an unsigned
long. Therefore introduce a xen_long_t, which is a long on x86 architectures
and a signed 64-bit integer on ARM.
Use uint32_t nr_calls on x86 for consistency with the ARM definition.
Build tested on amd64 and i386 builds. Runtime tested on ARM.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
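A sketch of the resulting per-architecture definition, paraphrased from the
description above:

    #if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
    typedef int64_t xen_long_t;    /* signed 64-bit, even on 32-bit ARM */
    #else  /* x86 */
    typedef long xen_long_t;       /* plain long, no ABI change */
    #endif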
|
|
The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
for blkback and future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this series does the following:
- the bulk of the original function (everything after the mapping hypercall)
is moved to arch-dependent set/clear_foreign_p2m_mapping
- the "if (xen_feature(XENFEAT_auto_translated_physmap))" branch goes to ARM
- therefore the ARM function could be much smaller, the m2p_override stubs
could be also removed
- on x86 the set_phys_to_machine calls were moved up to this new function
from m2p_override functions
- and m2p_override functions are only called when there is a kmap_ops param
It also removes a stray space from arch/x86/include/asm/xen/page.h.
Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: Anthony Liguori <aliguori@amazon.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
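A simplified sketch of the resulting split (status checks and error handling
are omitted):

    /* Generic part: do the mapping hypercall, then hand everything after it
     * to the arch-dependent hook. */
    int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
                        struct gnttab_map_grant_ref *kmap_ops,
                        struct page **pages, unsigned int count)
    {
        int ret;

        ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
        if (ret)
            return ret;

        /* x86: set_phys_to_machine (+ m2p_override only if kmap_ops);
         * ARM: the old auto-translated branch lives here now. */
        return set_foreign_p2m_mapping(map_ops, kmap_ops, pages, count);
    }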
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull Xen updates from Konrad Rzeszutek Wilk:
"Two major features that Xen community is excited about:
The first is event channel scalability by David Vrabel - we switch
over from a two-level per-cpu bitmap of events (IRQs) to a FIFO
queue with priorities. This lets us handle more events,
have lower latency, and better scalability. Good stuff.
The other is PVH by Mukesh Rathor. In short, PV is a mode where the
kernel lets the hypervisor program page-tables, segments, etc. With
EPT/NPT capabilities in current processors, the overhead of doing this
in an HVM (Hardware Virtual Machine) container is much lower than the
hypervisor doing it for us.
In short we let a PV guest run without doing page-table, segment,
syscall, etc updates through the hypervisor - instead it is all done
within the guest container. It is a "hybrid" PV - hence the 'PVH'
name - a PV guest within an HVM container.
The major benefits are less code to deal with - for example we only
use one function from the pv_mmu_ops (which has 39 function
calls); faster performance for syscalls (no context switches into the
hypervisor); fewer traps on various operations; etc.
It is still being baked - the ABI is not yet set in stone. But it is
pretty awesome and we are excited about it.
Lastly, there are some changes to ARM code - you should get a simple
conflict which has been resolved in #linux-next.
In short, this pull has awesome features.
Features:
- FIFO event channels. Key advantages: support for over 100,000
events (2^17), 16 different event priorities, improved fairness in
event latency through the use of FIFOs.
- Xen PVH support. "It’s a fully PV kernel mode, running with
paravirtualized disk and network, paravirtualized interrupts and
timers, no emulated devices of any kind (and thus no qemu), no BIOS
or legacy boot — but instead of requiring PV MMU, it uses the HVM
hardware extensions to virtualize the pagetables, as well as system
calls and other privileged operations." (from "The
Paravirtualization Spectrum, Part 2: From poles to a spectrum")
Bug-fixes:
- Fixes in balloon driver (refactor and make it work under ARM)
- Allow xenfb to be used in HVM guests.
- Allow xen_platform_pci=0 to work properly.
- Refactors in event channels"
* tag 'stable/for-linus-3.14-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: (52 commits)
xen/pvh: Set X86_CR0_WP and others in CR0 (v2)
MAINTAINERS: add git repository for Xen
xen/pvh: Use 'depend' instead of 'select'.
xen: delete new instances of __cpuinit usage
xen/fb: allow xenfb initialization for hvm guests
xen/evtchn_fifo: fix error return code in evtchn_fifo_setup()
xen-platform: fix error return code in platform_pci_init()
xen/pvh: remove duplicated include from enlighten.c
xen/pvh: Fix compile issues with xen_pvh_domain()
xen: Use dev_is_pci() to check whether it is pci device
xen/grant-table: Force to use v1 of grants.
xen/pvh: Support ParaVirtualized Hardware extensions (v3).
xen/pvh: Piggyback on PVHVM XenBus.
xen/pvh: Piggyback on PVHVM for grant driver (v4)
xen/grant: Implement an grant frame array struct (v3).
xen/grant-table: Refactor gnttab_init
xen/grants: Remove gnttab_max_grant_frames dependency on gnttab_init.
xen/pvh: Piggyback on PVHVM for event channels (v2)
xen/pvh: Update E820 to work with PVH (v2)
xen/pvh: Secondary VCPU bringup (non-bootup CPUs)
...
|
|
The 'xen_hvm_resume_frames' used to be an 'unsigned long'
and contain the virtual address of the grants. That was OK
for most architectures (PVHVM, ARM) where the grants are contiguous
in memory. That however is not the case for PVH - in which case
we will have to do a lookup for each virtual address for the PFN.
Instead of doing that, let's make it a structure which will contain
the array of PFNs, the virtual address and the count of said PFNs.
Also provide generic functions: gnttab_setup_auto_xlat_frames and
gnttab_free_auto_xlat_frames to populate said structure with
appropriate values for PVHVM and ARM.
To round it off, change the name from 'xen_hvm_resume_frames' to
a more descriptive one - 'xen_auto_xlat_grant_frames'.
For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
v2 moves the xen_remap in the gnttab_setup_auto_xlat_frames
and also introduces xen_unmap for gnttab_free_auto_xlat_frames.
Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v3: Based on top of 'asm/xen/page.h: remove redundant semicolon']
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
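A sketch of what such a structure and its helpers look like (close to the
description above; treat the exact names as illustrative):

    struct grant_frames {
        xen_pfn_t *pfn;       /* array of PFNs backing the grant frames */
        unsigned int count;   /* how many PFNs */
        void *vaddr;          /* xen_remap()ed virtual address */
    };
    extern struct grant_frames xen_auto_xlat_grant_frames;

    /* Populate/tear down the structure for PVHVM and ARM; v2 moved the
     * xen_remap() call in here and added xen_unmap() on the free side. */
    int gnttab_setup_auto_xlat_frames(phys_addr_t addr);
    void gnttab_free_auto_xlat_frames(void);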
|
|
Signed-off-by: Wei Liu <liuw@liuw.name>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
|
|
ioremap_cache is more aligned with other architectures.
There are only 2 users of this in the kernel: pxa2xx-flash and Xen.
This fixes Xen build failures on arm64:
drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Some common Xen drivers, like balloon.c, call pfn_to_mfn and mfn_to_pfn
even for autotranslate guests, expecting the argument back.
The following commit broke these drivers by changing the behavior of
pfn_to_mfn and mfn_to_pfn:
commit 4a19138c6505e224d9f4cc2fe9ada9188d7100ea
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu Oct 17 16:22:27 2013 +0000
arm/xen,arm64/xen: introduce p2m
They now return INVALID_P2M_ENTRY if Linux doesn't actually know the mfn
backing a pfn or the pfn corresponding to an mfn.
Fix the regression by switching to the old behavior.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
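A sketch of the restored behaviour (the lookup helper name is illustrative):

    static inline unsigned long pfn_to_mfn(unsigned long pfn)
    {
        unsigned long mfn = __pfn_to_mfn(pfn);    /* p2m lookup */

        /* No explicit entry: hand the argument back, which is what
         * autotranslate callers such as balloon.c expect. */
        return mfn == INVALID_P2M_ENTRY ? pfn : mfn;
    }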
|
|
stable/for-linus-3.13
* stefano/swiotlb-xen-9.1:
swiotlb-xen: fix error code returned by xen_swiotlb_map_sg_attrs
swiotlb-xen: static inline xen_phys_to_bus, xen_bus_to_phys, xen_virt_to_bus and range_straddles_page_boundary
grant-table: call set_phys_to_machine after mapping grant refs
arm,arm64: do not always merge biovec if we are running on Xen
swiotlb: print a warning when the swiotlb is full
swiotlb-xen: use xen_dma_map/unmap_page, xen_dma_sync_single_for_cpu/device
xen: introduce xen_dma_map/unmap_page and xen_dma_sync_single_for_cpu/device
swiotlb-xen: use xen_alloc/free_coherent_pages
xen: introduce xen_alloc/free_coherent_pages
arm64/xen: get_dma_ops: return xen_dma_ops if we are running as xen_initial_domain
arm/xen: get_dma_ops: return xen_dma_ops if we are running as xen_initial_domain
swiotlb-xen: introduce xen_swiotlb_set_dma_mask
xen/arm,arm64: enable SWIOTLB_XEN
xen: make xen_create_contiguous_region return the dma address
xen/x86: allow __set_phys_to_machine for autotranslate guests
arm/xen,arm64/xen: introduce p2m
arm64: define DMA_ERROR_CODE
arm: make SWIOTLB available
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Conflicts:
arch/arm/include/asm/dma-mapping.h
drivers/xen/swiotlb-xen.c
[Conflicts arose b/c "arm: make SWIOTLB available" v8 was in Stefano's
branch, while I had v9 + Ack from Russell. I also fixed up white-space
issues]
|
|
Introduce xen_dma_map_page, xen_dma_unmap_page,
xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device.
They have empty implementations on x86 and ia64 but they call the
corresponding platform dma_ops function on arm and arm64.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Changes in v9:
- xen_dma_map_page return void, avoid page_to_phys.
|
|
xen_swiotlb_alloc_coherent needs to allocate a coherent buffer for cpu
and devices. On native x86 it is sufficient to call __get_free_pages in
order to get a coherent buffer, while on ARM (and potentially ARM64) we
need to call the native dma_ops->alloc implementation.
Introduce xen_alloc_coherent_pages to abstract the arch specific buffer
allocation.
Similarly introduce xen_free_coherent_pages to free a coherent buffer:
on x86 it is simply a call to free_pages, while on ARM and ARM64 it is
arm_dma_ops.free.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Changes in v7:
- rename __get_dma_ops to __generic_dma_ops;
- call __generic_dma_ops(hwdev)->alloc/free on arm64 too.
Changes in v6:
- call __get_dma_ops to get the native dma_ops pointer on arm.
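A sketch of the abstraction described above (simplified; struct dma_attrs was
the attrs type of that era):

    static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
            dma_addr_t *dma_handle, gfp_t flags, struct dma_attrs *attrs)
    {
    #ifdef CONFIG_X86
        /* Any page-aligned kernel buffer is already coherent on x86. */
        void *vstart = (void *)__get_free_pages(flags, get_order(size));

        *dma_handle = virt_to_phys(vstart);
        return vstart;
    #else
        /* ARM/ARM64: defer to the native dma_ops alloc (free side mirrors
         * this with ->free / free_pages). */
        return __generic_dma_ops(hwdev)->alloc(hwdev, size, dma_handle,
                                               flags, attrs);
    #endif
    }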
|
|
Xen on arm and arm64 needs SWIOTLB_XEN: when running on Xen we need to
program the hardware with mfns rather than pfns for dma addresses.
Remove SWIOTLB_XEN dependency on X86 and PCI and make XEN select
SWIOTLB_XEN on arm and arm64.
At the moment we always rely on swiotlb-xen, but when Xen starts supporting
hardware IOMMUs we'll be able to avoid it conditionally on the presence
of an IOMMU on the platform.
Implement xen_create_contiguous_region on arm and arm64: for the moment
we assume that dom0 has been mapped 1:1 (physical addresses == machine
addresses) therefore we don't need to call XENMEM_exchange. Simply
return the physical address as dma address.
Initialize the xen-swiotlb from xen_early_init (before the native
dma_ops are initialized), set xen_dma_ops to &xen_swiotlb_dma_ops.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Changes in v8:
- assume dom0 is mapped 1:1, no need to call XENMEM_exchange.
Changes in v7:
- call __set_phys_to_machine_multi from xen_create_contiguous_region and
xen_destroy_contiguous_region to update the P2M;
- don't call XENMEM_unpin, it has been removed;
- call XENMEM_exchange instead of XENMEM_exchange_and_pin;
- set nr_exchanged to 0 before calling the hypercall.
Changes in v6:
- introduce and export xen_dma_ops;
- call xen_mm_init as an arch_initcall.
Changes in v4:
- remove redefinition of DMA_ERROR_CODE;
- update the code to use XENMEM_exchange_and_pin and XENMEM_unpin;
- add a note about hardware IOMMU in the commit message.
Changes in v3:
- code style changes;
- warn on XENMEM_put_dma_buf failures.
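A sketch of the ARM side under the 1:1 assumption stated above (simplified):

    /* dom0 is mapped 1:1 (physical == machine), so no XENMEM_exchange is
     * needed: the region is already contiguous from the device's point of
     * view and the physical address doubles as the dma address. */
    int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
                                     unsigned int address_bits,
                                     dma_addr_t *dma_handle)
    {
        *dma_handle = pstart;
        return 0;
    }

    void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
    {
        /* nothing to undo */
    }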
|
|
Introduce physical to machine and machine to physical tracking
mechanisms based on rbtrees for arm/xen and arm64/xen.
We need it because all guests on ARM are autotranslate guests,
therefore a physical address is potentially different from a machine
address. When programming a device to do DMA, we need to be
extra-careful to use machine addresses rather than physical addresses to
program the device. Therefore we need to know the physical to machine
mappings.
For the moment we assume that dom0 starts with a 1:1 physical to machine
mapping, in other words physical addresses correspond to machine
addresses. However when mapping a foreign grant reference, obviously the
1:1 model doesn't work anymore. So at the very least we need to be able
to track grant mappings.
We need locking to protect accesses to the two trees.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Changes in v8:
- move pfn_to_mfn and mfn_to_pfn to page.h as static inline functions;
- no need to walk the tree if phys_to_mach.rb_node is NULL;
- correctly handle multipage p2m entries;
- substitute the spin_lock with a rwlock.
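A sketch of the tracking structures (field names illustrative, shortened from
the real thing):

    struct xen_p2m_entry {
        unsigned long pfn;            /* start of the physical range */
        unsigned long mfn;            /* start of the machine range */
        unsigned long nr_pages;       /* multipage entries are allowed */
        struct rb_node rbnode_phys;   /* node in the phys->mach tree */
        struct rb_node rbnode_mach;   /* node in the mach->phys tree */
    };

    static struct rb_root phys_to_mach = RB_ROOT;
    static struct rb_root mach_to_phys = RB_ROOT;
    static DEFINE_RWLOCK(p2m_lock);   /* v8: rwlock instead of spinlock */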
|
|
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
|
|
Define xen_remap as ioremap_cache (MT_MEMORY and MT_DEVICE_CACHED end up
having the same AttrIndx encoding).
Remove the include of asm/mach/map.h, which is now unneeded.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
|
|
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
|
|
Rob Herring has observed that c81611c4e96f "xen: event channel arrays are
xen_ulong_t and not unsigned long" introduced a compile failure when building
without CONFIG_AEABI:
/tmp/ccJaIZOW.s: Assembler messages:
/tmp/ccJaIZOW.s:831: Error: even register required -- `ldrexd r5,r6,[r4]'
Will Deacon pointed out that this is because OABI does not require even base
registers for 64-bit values. We can avoid this by simply using the existing
atomic64_xchg operation and the same container_of trick as used by the cmpxchg
macros. However since this code is used on memory which is shared with the
hypervisor we require proper atomic instructions and cannot use the generic
atomic64 callbacks (which are based on spinlocks), therefore add a dependency
on !GENERIC_ATOMIC64. Since we already depend on !CPU_V6 there isn't much
downside to this.
While thinking about this we also observed that OABI has different struct
alignment requirements to EABI, which is a problem for hypercall argument
structs which are shared with the hypervisor and which must be in EABI layout.
Since I don't expect people to want to run OABI kernels on Xen depend on
CONFIG_AEABI explicitly too (although it also happens to be enforced by the
!GENERIC_ATOMIC64 requirement too).
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Rob Herring <robherring2@gmail.com>
Acked-by: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
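A sketch of the macro this ends up being (essentially the container_of trick
described above):

    /* The shared event words are xen_ulong_t (64-bit on ARM); reuse the
     * architecture's atomic64_xchg by treating the word as the counter of an
     * atomic64_t.  Requires real atomic64 ops, hence !GENERIC_ATOMIC64. */
    #define xchg_xen_ulong(ptr, val) \
        atomic64_xchg(container_of((ptr), atomic64_t, counter), (val))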
|
|
On ARM we want these to be the same size on 32- and 64-bit.
This is an ABI change on ARM. X86 does not change.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Keir (Xen.org) <keir@xen.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: xen-devel@lists.xen.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
ioremap can't be used to map ring pages on ARM because it uses device
memory caching attributes (MT_DEVICE*).
Introduce a Xen specific abstraction to map ring pages, called
xen_remap, that is defined as ioremap on x86 (no behavioral changes).
On ARM it explicitly calls __arm_ioremap with the right caching
attributes: MT_MEMORY.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
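A sketch of the abstraction (as described; MT_MEMORY gives normal, cacheable
memory on ARM):

    #ifdef CONFIG_X86
    /* No behavioural change on x86. */
    #define xen_remap(cookie, size) ioremap((cookie), (size))
    #else  /* ARM */
    /* ioremap() would give MT_DEVICE* attributes; ring pages need normal
     * cacheable memory. */
    #define xen_remap(cookie, size) __arm_ioremap((cookie), (size), MT_MEMORY)
    #endif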
|
|
We use XENMEM_add_to_physmap_range which is the preferred interface
for foreign mappings.
Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
This makes common code less ifdef-y and is consistent with PVHVM on
x86.
Also note that phys_to_machine_mapping_valid should take a pfn
argument; make it do so.
Add __set_phys_to_machine, make set_phys_to_machine a simple wrapper
(on systems with non-nop implementations the outer one can allocate
new p2m pages).
Make __set_phys_to_machine check for identity mapping or invalid only.
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
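A sketch of the wrapper arrangement on ARM (simplified):

    /* __set_phys_to_machine() only has to handle the identity-mapping and
     * INVALID_P2M_ENTRY cases here; the outer wrapper is where a non-nop
     * implementation could allocate new p2m pages. */
    static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
    {
        return __set_phys_to_machine(pfn, mfn);
    }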
|
|
This correctly sizes it as 64 bit on ARM but leaves it as unsigned
long on x86 (therefore no intended change on x86).
The long and ulong guest handles are now unused (and a bit dangerous)
so remove them.
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
Define PRI macros for xen_ulong_t and xen_pfn_t and use to fix:
drivers/xen/sys-hypervisor.c:288:4: warning: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'xen_ulong_t' [-Wformat]
Ideally this would use PRIx64 on ARM but these (or equivalent) don't
seem to be available in the kernel.
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
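A sketch of the per-arch format macros and their use (macro names and widths
as I recall them; treat them as illustrative):

    #define PRI_xen_ulong  "llx"    /* ARM: xen_ulong_t is 64-bit */
    #define PRI_xen_pfn    "llx"    /* x86 would use "lx" instead */

    static void show_flags(xen_ulong_t flags)
    {
        pr_info("flags %"PRI_xen_ulong"\n", flags);  /* no -Wformat warning */
    }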
|
|
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
Compile events.c on ARM.
Parse, map and enable the IRQ to get event notifications from the device
tree (node "/xen").
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
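A sketch of the device-tree side (simplified, error handling mostly omitted;
the hypervisor node is matched by its "xen,xen" compatible string):

    /* Find the hypervisor node and map its event-channel upcall IRQ. */
    static int __init xen_events_irq_init(void)
    {
        struct device_node *node;
        unsigned int irq;

        node = of_find_compatible_node(NULL, NULL, "xen,xen");
        if (!node)
            return -ENODEV;

        irq = irq_of_parse_and_map(node, 0);
        of_node_put(node);

        return irq ? irq : -ENODEV;
    }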
|
|
All the original Xen headers have xen_ulong_t as an unsigned long type; however,
when they were imported into Linux, xen_ulong_t was replaced with
unsigned long. That might work for x86 and ia64 but it does not for arm.
Bring back xen_ulong_t and let each architecture define xen_ulong_t as it
sees fit.
Also explicitly size pointers (__DEFINE_GUEST_HANDLE) to 64 bit.
Changes in v3:
- remove the incorrect changes to multicall_entry;
- remove the change to apic_physbase.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
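A sketch of the per-arch definition this allows (as described):

    #if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
    typedef uint64_t xen_ulong_t;          /* fixed 64-bit on ARM */
    #else
    typedef unsigned long xen_ulong_t;     /* x86/ia64 keep the old meaning */
    #endif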
|
|
ARM Xen guests always use paging in hardware, like PV on HVM guests in
the X86 world.
Changes in v3:
- improve comments.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
Use r12 to pass the hypercall number to the hypervisor.
We need a register to pass the hypercall number because we might not
know it at compile time and HVC only takes an immediate argument.
Among the available registers r12 seems to be the best choice because it
is defined as "intra-procedure call scratch register".
Use the ISS to pass a hypervisor-specific tag.
Changes in v2:
- define a HYPERCALL macro for 5-argument hypercall wrappers, even if
at the moment it is unused;
- use ldm instead of pop;
- fix up comments.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
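A rough C-level sketch of the calling convention (the real wrappers live in
assembly; the function name here is made up and the tag value is illustrative):

    /* Hypercall number in r12, arguments in r0-r4, Xen-specific tag in the
     * HVC immediate (reflected in the ISS).  One-argument wrapper only. */
    static inline long xen_hypercall_1arg(unsigned long op, unsigned long a1)
    {
        register unsigned long r12 asm("r12") = op;
        register unsigned long r0 asm("r0") = a1;

        asm volatile("hvc #0xEA1"
                     : "+r" (r0)
                     : "r" (r12)
                     : "memory");

        return r0;
    }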
|
|
- Basic hypervisor.h and interface.h definitions.
- Skeleton enlighten.c, set xen_start_info to an empty struct.
- Make xen_initial_domain dependent on the SIF_PRIVILIGED_BIT.
The new code only compiles when CONFIG_XEN is set, which is going to be
added to arch/arm/Kconfig in patch #11 "xen/arm: introduce CONFIG_XEN on
ARM".
Changes in v3:
- improve comments.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
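A sketch of the skeleton's xen_start_info setup (as described; ARM has no real
start_info page, so an empty struct keeps generic code happy):

    struct start_info _xen_start_info;
    struct start_info *xen_start_info = &_xen_start_info;
    EXPORT_SYMBOL_GPL(xen_start_info);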
|