|
Add support for Dual/Quad SPI Transfers to the spidev API.
As this uses SPI mode bits that don't fit in a single byte, two new
ioctls (SPI_IOC_RD_MODE32 and SPI_IOC_WR_MODE32) are introduced.
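For illustration, a minimal user-space sketch of the new ioctls (the
device path and helper name are assumptions, and SPI_TX_QUAD/SPI_RX_QUAD
are used as example dual/quad mode bits):
	#include <fcntl.h>
	#include <stdint.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/spi/spidev.h>

	/* Switch an spidev node to quad transfers; returns 0 on success. */
	static int enable_quad(const char *dev)
	{
		uint32_t mode;
		int ret, fd = open(dev, O_RDWR);

		if (fd < 0)
			return -1;
		ret = ioctl(fd, SPI_IOC_RD_MODE32, &mode); /* full 32-bit mode */
		if (ret == 0) {
			mode |= SPI_TX_QUAD | SPI_RX_QUAD;
			ret = ioctl(fd, SPI_IOC_WR_MODE32, &mode);
		}
		close(fd);
		return ret;
	}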
Signed-off-by: Geert Uytterhoeven <geert+renesas@linux-m68k.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
Merge branch 'drm-next-3.15' of git://people.freedesktop.org/~deathsimple/linux into drm-next
So this is the initial pull request for radeon drm-next 3.15. Highlights:
- VCE bringup including DPM support
- A few cleanups for the ring handling code
* 'drm-next-3.15' of git://people.freedesktop.org/~deathsimple/linux:
drm/radeon: cleanup false positive lockup handling
drm/radeon: drop radeon_ring_force_activity
drm/radeon: drop drivers copy of the rptr
drm/radeon/cik: enable/disable vce cg when encoding v2
drm/radeon: add support for vce 2.0 clock gating
drm/radeon/dpm: properly enable/disable vce when vce pg is enabled
drm/radeon/dpm: enable dynamic vce state switching v2
drm/radeon: add vce dpm support for KV/KB
drm/radeon: enable vce dpm on CI
drm/radeon: add vce dpm support for CI
drm/radeon: fill in set_vce_clocks for CIK asics
drm/radeon/dpm: fetch vce states from the vbios
drm/radeon/dpm: fill in some initial vce infrastructure
drm/radeon/dpm: move platform caps fetching to a separate function
drm/radeon: add callback for setting vce clocks
drm/radeon: add VCE version parsing and checking
drm/radeon: add VCE ring query
drm/radeon: initial VCE support v4
drm/radeon: fix CP semaphores on CIK
|
|
Merge tag 'drm-intel-next-2014-02-07' of ssh://git.freedesktop.org/git/drm-intel into drm-next
- Yet more steps towards atomic modeset from Ville.
- DP panel power sequencing improvements from Paulo.
- irq code cleanups from Ville.
- 5.4 GHz dp lane clock support for bdw/hsw from Todd.
- Clock readout support for hsw/bdw (aka fastboot) from Jesse.
- Make pipe underruns report at ERROR level (Ville). This is to check our
improved watermarks code.
- Full ppgtt support from Ben for gen7.
- More fbc fixes and improvements from Ville all over the place, unfortunately
not yet enabled by default on more platforms.
- w/a cleanups from Ville.
- HiZ stall optimization settings (Chia-I Wu).
- Display register mmio offset refactor patch from Antti.
- RPS improvements for corner-cases from Jeff McGee.
* tag 'drm-intel-next-2014-02-07' of ssh://git.freedesktop.org/git/drm-intel: (166 commits)
drm/i915: Update rps interrupt limits
drm/i915: Restore rps/rc6 on reset
drm/i915: Prevent recursion by retiring requests when the ring is full
drm/i915: Generate a hang error code
drm/i915: unify FLIP_DONE macro names
drm/i915: vlv: s/spin_lock_irqsave/spin_lock/ in irq handler
drm/i915: factor out valleyview_pipestat_irq_handler
drm/i915: vlv: don't unmask IIR[DISPLAY_PIPE_A/B_VBLANK] interrupt
drm/i915: Reorganize display pipe register accesses
drm/i915: Treat using a purged buffer as a source of EFAULT
drm/i915: Convert EFAULT into a silent SIGBUS
drm/i915: release mutex in i915_gem_init()'s error path
drm/i915: check for oom when allocating private_default_ctx
drm/i915/vlv: WA to fix Voltage not getting dropped to Vmin when Gfx is power gated.
drm/i915: Get rid of acthd based guilty batch search
drm/i915: Use hangcheck score to find guilty context
drm/i915: Drop WaDisablePSDDualDispatchEnable:ivb for IVB GT2
drm/i915: Fix IVB GT2 WaDisableDopClockGating and WaDisablePSDDualDispatchEnable
drm/i915: Don't access snooped pages through the GTT (even for error capture)
drm/i915: Only print information for filing bug reports once
...
Conflicts:
drivers/gpu/drm/i915/intel_dp.c
|
|
Merge tag 'drm/dp-aux-for-3.15-rc1' of git://anongit.freedesktop.org/tegra/linux into drm-next
drm: DisplayPort AUX framework for v3.15-rc1
This series of patches implements a small framework that abstracts away
some of the functionality that the DisplayPort AUX channel provides. It
comes with a set of generic helpers that use the driver implementations
to reduce code duplication.
* tag 'drm/dp-aux-for-3.15-rc1' of git://anongit.freedesktop.org/tegra/linux:
drm/dp: Allow registering AUX channels as I2C busses
drm/dp: Add DisplayPort link helpers
drm/dp: Add drm_dp_dpcd_read_link_status()
drm/dp: Add AUX channel infrastructure
|
|
This patch creates the public hci_req_add_le_passive_scan helper so
it can be reused outside hci_core.c in the next patch.
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
We should only accept connection parameters from identity addresses
(public or random static). Thus, we should check the address type
in hci_conn_params_add().
Additionally, since the IRK is removed during unpair, we should also
remove the connection parameters from that device.
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
This patch introduces the LE auto connection options HCI_AUTO_CONN_ALWAYS
and HCI_AUTO_CONN_LINK_LOSS. They work as follows:
The HCI_AUTO_CONN_ALWAYS option configures the kernel to always
re-establish the connection, no matter the reason the connection was
terminated. This feature is required by some LE profiles such as
HID over GATT, Health Thermometer and Blood Pressure. These profiles
require that the host autonomously connect to the device as soon as it
enters connectable mode (starts advertising) so the device is able
to deliver notifications or indications.
The HCI_AUTO_CONN_LINK_LOSS option configures the kernel to
re-establish the connection in case it was terminated due to a link
loss. This feature is required by the majority of LE profiles such as
Proximity, Find Me, Cycling Speed and Cadence, and Time.
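The two options can be pictured next to the existing disabled state
roughly as follows (a sketch; the enum layout and the DISABLED entry
are assumptions, not taken from this patch):
	enum {
		HCI_AUTO_CONN_DISABLED,  /* no autonomous reconnection */
		HCI_AUTO_CONN_ALWAYS,    /* reconnect regardless of reason */
		HCI_AUTO_CONN_LINK_LOSS, /* reconnect only after link loss */
	};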
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
This patch introduces the LE auto connection infrastructure which
will be used to implement the LE auto connection options.
In summary, the auto connection mechanism works as follows: once the
first pending LE connection is created, background scanning is started.
When the target device is found in range, the kernel autonomously
starts the connection attempt. If the connection is established
successfully, that pending LE connection is deleted and background
scanning is stopped.
To achieve that, this patch introduces hci_update_background_scan(),
which controls the background scanning state. This function starts or
stops background scanning based on the hdev->pend_le_conns list: if
there is no pending LE connection, background scanning is stopped;
otherwise, it is started.
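A sketch of the control function just described (the scan start/stop
calls are placeholders for the real HCI requests):
	void hci_update_background_scan(struct hci_dev *hdev)
	{
		if (list_empty(&hdev->pend_le_conns)) {
			/* No pending LE connection: stop scanning. */
			background_scan_stop(hdev);
		} else {
			/* At least one pending LE connection: start
			 * scanning (a no-op if already running).
			 */
			background_scan_start(hdev);
		}
	}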
Then, every time a pending LE connection is added we call
hci_update_background_scan() so background scanning is started (in case
it is not already running). Likewise, every time a pending LE
connection is deleted we call hci_update_background_scan() so
background scanning is stopped (in case this was the last pending LE
connection) or started again (in case we have more pending LE
connections). Finally, we also call hci_update_background_scan() in
hci_le_conn_failed() so the background scan is restarted in case the
connection establishment fails. This way background scanning keeps
running until all pending LE connections are established.
At this point, resolvable addresses are not supported by this
infrastructure. Proper support is added in upcoming patches.
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
This patch introduces the hdev->pend_le_conn list, which holds the
device addresses the kernel should autonomously connect to. It also
introduces some helper functions to manipulate the list.
The list and helper functions will be used by the next patch which
implements the LE auto connection infrastructure.
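The helpers could look roughly like this (the signatures are
assumptions based on the description, not taken from the patch):
	bool hci_pend_le_conn_lookup(struct hci_dev *hdev, bdaddr_t *addr);
	void hci_pend_le_conn_add(struct hci_dev *hdev, bdaddr_t *addr);
	void hci_pend_le_conn_del(struct hci_dev *hdev, bdaddr_t *addr);
	void hci_pend_le_conns_clear(struct hci_dev *hdev);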
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
hci_connect() is a very thin and needless wrapper around the
hci_connect_acl() and hci_connect_le() functions. Additionally,
everywhere hci_connect() is called, the link type value is passed
explicitly. This way, we can safely delete hci_connect(), declare
hci_connect_acl() and hci_connect_le() in hci_core.h and call them
directly.
No functionality is changed by this patch.
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
Some LE controllers don't support scanning and creating a connection
at the same time. So we should always stop scanning in order to
establish the connection.
Since we may prematurely stop the discovery procedure in favor of
the connection establishment, we should also cancel the
hdev->le_scan_disable delayed work and set the discovery state to
DISCOVERY_STOPPED.
This change is a small improvement in itself, since the user is no
longer required to stop scanning before connecting. Moreover, it is
required by the upcoming LE auto connection mechanism in order to work
properly with controllers that don't support background scanning and
connection establishment at the same time.
In the future, we might want to do a small optimization by checking
whether the controller is able to scan and connect at the same time.
For now, we take the simplest approach and always stop scanning (even
if the controller is able to carry out both operations).
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
This patch adds the "hci_" prefix to le_conn_failed() helper and
declares it in hci_core.h so it can be reused in hci_event.c.
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
This patch moves the duplicated code for stopping LE scanning to a
single place and reuses it. This avoids further duplication in
upcoming patches.
Signed-off-by: Andre Guedes <andre.guedes@openbossa.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
Suggested by Arnd: abstract MMC tuning as clock behavior, since
different SoCs have different tuning methods and registers.
hi3620_mmc_clks is added to handle the MMC clocks specifically on
hi3620.
Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
|
|
Use push and pop to both guarantee that the correct alignment is used,
and to restore the alignment to whatever it was before the header
was included.
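The pattern looks like this (the structure is a made-up example, not
one from the ACPICA headers):
	#pragma pack(push)	/* save the current packing alignment */
	#pragma pack(1)		/* byte-pack everything that follows */

	struct acpi_example_table {	/* hypothetical table layout */
		u8  type;
		u32 address;
	};

	#pragma pack(pop)	/* restore the saved alignment */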
It has been reported that certain GCC versions do not support the
#pragma pack(push/pop) directives. Even so, this patch should not
affect kernel builds: the old ACPICA headers already used
#pragma pack([1]) directives, so no GCC currently used to compile the
kernel's ACPI code can lack support for #pragma pack() directives.
References: https://bugs.acpica.org/show_bug.cgi?id=1058
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
|
|
Upcoming congestion controls for TCP require usec resolution for RTT
estimations. Millisecond resolution is simply not enough these days.
FQ/pacing in DC environments also requires this change for finer
control and for removal of the bimodal behavior due to the current
hack in tcp_update_pacing_rate() for 'small rtt'.
TCP_CONG_RTT_STAMP is no longer needed.
As Julian Anastasov pointed out, we need to keep user compatibility:
tcp_metrics used to export RTT and RTTVAR in msec resolution, so we
added RTT_US and RTTVAR_US. An iproute2 patch is needed to use the new
attributes if provided by the kernel.
In this example, the ss command displays an srtt of 32 usec (10Gbit
link):
lpk51:~# ./ss -i dst lpk52
Netid State Recv-Q Send-Q Local Address:Port  Peer Address:Port
tcp   ESTAB 0      1      10.246.11.51:42959  10.246.11.52:64614
	 cubic wscale:6,6 rto:201 rtt:0.032/0.001 ato:40 mss:1448 cwnd:10
	 send 3620.0Mbps pacing_rate 7240.0Mbps unacked:1 rcv_rtt:993
	 rcv_space:29559
Updated iproute2 ip command displays:
lpk51:~# ./ip tcp_metrics | grep 10.246.11.52
10.246.11.52 age 561.914sec cwnd 10 rtt 274us rttvar 213us source 10.246.11.51
Old binary displays:
lpk51:~# ip tcp_metrics | grep 10.246.11.52
10.246.11.52 age 561.914sec cwnd 10 rtt 250us rttvar 125us source 10.246.11.51
With help from Julian Anastasov, Stephen Hemminger and Yuchung Cheng
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Larry Brakmo <brakmo@google.com>
Cc: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
ktime_get() is too expensive in some cases, and we'd like to get
usec resolution timestamps in the TCP stack.
This patch adds a lightweight facility using a combination of
local_clock() and jiffies samples.
Instead of:
	u64 t0, t1;
	t0 = ktime_get();
	// stuff
	t1 = ktime_get();
	delta_us = ktime_us_delta(t1, t0);
use:
	struct skb_mstamp t0, t1;
	skb_mstamp_get(&t0);
	// stuff
	skb_mstamp_get(&t1);
	delta_us = skb_mstamp_us_delta(&t1, &t0);
Note: local_clock() might have a (bounded) drift between CPUs.
Do not use this infrastructure in place of ktime_get() without
understanding the issues.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Larry Brakmo <brakmo@google.com>
Cc: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sometimes we have a struct resource where we know the type (MEM/IO/etc.)
and the size, but we haven't assigned address space for it. The
IORESOURCE_UNSET flag is a way to indicate this situation. For these
"unset" resources, the start address is meaningless, so print only the
size, e.g.,
- pci 0000:0c:00.0: reg 184: [mem 0x00000000-0x00001fff 64bit]
+ pci 0000:0c:00.0: reg 184: [mem size 0x2000 64bit]
For %pr (printing with raw flags), we still print the address range,
because %pr is mostly used for debugging anyway.
Thanks to Fengguang Wu <fengguang.wu@intel.com> for suggesting
resource_size().
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
We have two identical copies of resource_contains() already, and more
places that could use it. This moves it to ioport.h where it can be
shared.
resource_contains(struct resource *r1, struct resource *r2) returns true
iff r1 and r2 are the same type (most callers already checked this
separately) and the r1 address range completely contains r2.
In addition, the new resource_contains() checks that both r1 and r2 have
addresses assigned to them. If a resource is IORESOURCE_UNSET, it doesn't
have a valid address and can't contain or be contained by another resource.
Some callers already check this themselves, or check res->start
instead.
No functional change.
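Consistent with the description above, the helper can be sketched as
follows (a sketch, not necessarily the exact patch):
	static inline bool resource_contains(struct resource *r1,
					     struct resource *r2)
	{
		if (resource_type(r1) != resource_type(r2))
			return false;
		/* Unset resources have no valid address. */
		if ((r1->flags & IORESOURCE_UNSET) ||
		    (r2->flags & IORESOURCE_UNSET))
			return false;
		return r1->start <= r2->start && r1->end >= r2->end;
	}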
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
This option has the same semantics as IP_PMTUDISC_OMIT for IPv4,
which was recently introduced. It doesn't honor the path MTU
discovered by the host but, in contrast to IPV6_PMTUDISC_INTERFACE,
allows the generation of fragments if the packet size exceeds the MTU
of the outgoing interface.
Fixes: 93b36cf3425b9b ("ipv6: support IPV6_PMTU_INTERFACE on sockets")
Cc: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
IP_PMTUDISC_INTERFACE has a design error: because it does not allow
the generation of fragments if the interface MTU is exceeded, it is
very hard to make use of this option in the already deployed name
server software for which I introduced it.
This patch adds yet another IP_MTU_DISCOVER option which neither
honors any path MTU information nor accepts new ICMP notifications
destined for the socket it is enabled on. However, we allow outgoing
fragmentation in case the packet size exceeds the outgoing interface
MTU.
As such, this new option can be used as a drop-in replacement for
IP_PMTUDISC_DONT, which is currently in use by most name server
software, making the adoption of this option smooth and easy.
The original advantage of IP_PMTUDISC_INTERFACE is still maintained:
ignoring incoming path MTU updates and not honoring discovered path MTUs
in the output path.
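A minimal user-space sketch of opting in (assuming the new constant
is exported to user space as IP_PMTUDISC_OMIT):
	#include <netinet/in.h>
	#include <sys/socket.h>

	/* Ignore path MTU data but still allow outgoing fragmentation. */
	static int use_pmtudisc_omit(int fd)
	{
		int val = IP_PMTUDISC_OMIT;

		return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
				  &val, sizeof(val));
	}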
Fixes: 482fc6094afad5 ("ipv4: introduce new IP_MTU_DISCOVER mode IP_PMTUDISC_INTERFACE")
Cc: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a sysfs file to enable user space to query the device
port number used by a netdevice instance. This is needed for
devices that have multiple ports on the same PCI function.
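User space could then query the port with something as simple as the
sketch below (the attribute path is an assumption for illustration):
	#include <stdio.h>

	/* Read the device port number of eth0 from sysfs. */
	static int read_dev_port(void)
	{
		int port = -1;
		FILE *f = fopen("/sys/class/net/eth0/dev_port", "r");

		if (f) {
			if (fscanf(f, "%d", &port) != 1)
				port = -1;
			fclose(f);
		}
		return port;
	}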
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Three counters are added:
- one to track when we went from non-zero to zero window
- one to track the reverse
- one counter incremented when we want to announce a zero window,
but can't because doing so would shrink the current window.
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
All ethertypes other than ETH_P_MPLS_UC, ETH_P_MPLS_MC and
ETH_P_ATMMPOA were already ordered numerically. This commit moves
those three ETH_P_... values into correct numerical order too.
Signed-off-by: Neil Jerram <Neil.Jerram@metaswitch.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This lets us check extensions, particularly VFIO_DMA_CC_IOMMU using
the external user interface, allowing KVM to probe IOMMU coherency.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
Now that the type1 IOMMU backend can support IOMMU_CACHE, we need to
be able to test whether coherency is currently enforced. Add an
extension for this.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
We currently have a problem that we cannot support advanced features
of an IOMMU domain (ex. IOMMU_CACHE), because we have no guarantee
that those features will be supported by all of the hardware units
involved with the domain over its lifetime. For instance, the Intel
VT-d architecture does not require that all DRHDs support snoop
control. If we create a domain based on a device behind a DRHD that
does support snoop control and enable SNP support via the IOMMU_CACHE
mapping option, we cannot then add a device behind a DRHD which does
not support snoop control or we'll get reserved bit faults from the
SNP bit in the pagetables. To add to the complexity, we can't know
the properties of a domain until a device is attached.
We could pass this problem off to userspace and require that a
separate vfio container be used, but we don't know how to handle page
accounting in that case. How would we know that a page pinned in one
container is the same page as one pinned in a different container, so
as to avoid double-billing the user for it?
The solution is therefore to support multiple IOMMU domains per
container. In the majority of cases, only one domain will be required
since hardware is typically consistent within a system. However, this
provides us the ability to validate compatibility of domains and
support mixed environments where page table flags can be different
between domains.
To do this, our DMA tracking needs to change. We currently try to
coalesce user mappings into as few tracking entries as possible. The
problem then becomes that we lose granularity of user mappings. We've
never guaranteed that a user is able to unmap at a finer granularity
than the original mapping, but we must honor the granularity of the
original mapping. This coalescing code is therefore removed, allowing
only unmaps covering complete maps. The change in accounting is
fairly small here; a typical QEMU VM will start out with roughly a
dozen entries, so it's arguable whether this coalescing was ever
needed.
We also move IOMMU domain creation to the point where a group is
attached to the container. An interesting side-effect of this is that
we now have access to the device at the time of domain creation and
can probe the devices within the group to determine the bus_type.
This finally makes vfio_iommu_type1 completely device/bus agnostic.
In fact, each IOMMU domain can host devices on different buses managed
by different physical IOMMUs, and present a single DMA mapping
interface to the user. When a new domain is created, mappings are
replayed to bring the IOMMU pagetables up to the state of the current
container. And of course, DMA mapping and unmapping automatically
traverse all of the configured IOMMU domains.
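The resulting per-container bookkeeping can be pictured roughly as
follows (the field names are illustrative assumptions):
	struct vfio_iommu {
		struct list_head domain_list; /* one entry per IOMMU domain */
		struct rb_root   dma_list;    /* user mappings, no coalescing */
		struct mutex     lock;
	};
Map and unmap operations walk domain_list so that every domain's page
tables stay in sync.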
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: Varun Sethi <Varun.Sethi@freescale.com>
|
|
Implements an I2C-over-AUX I2C adapter on top of the generic drm_dp_aux
infrastructure. It extracts the retry logic from existing drivers, which
should help in porting those drivers to this new helper.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
---
Changes in v5:
- move comments partially to the header file
- keep MOT set between I2C messages
- return -EPROTO on short reads
Changes in v4:
- fix typo "bitrate" -> "bit rate"
Changes in v3:
- add back DRM_DEBUG_KMS and DRM_ERROR messages
- embed i2c_adapter within struct drm_dp_aux
- fix typo in comment
|
|
Add a helper to probe a DP link (read out the supported DPCD revision,
maximum rate, link count and capabilities) as well as power up the DP
link and configure it accordingly.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
---
Changes in v5:
- export helpers
Changes in v4:
- fix a couple of typos in comments as pointed out by Alex Deucher
Changes in v3:
- split into drm_dp_link_power_up() and drm_dp_link_configure()
- do not change sink state for DPCD versions earlier than 1.1
- sleep for 1-2 ms after setting local sink to D0 state
- read and write consecutive registers where possible
- read DPCD revision when link is probed
- remove duplicate kerneldoc
|
|
The function reads the link status (6 bytes starting at offset 0x202)
from the DPCD so that it can be conveniently passed to other DPCD
helpers.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
This is a superset of the current i2c_dp_aux bus functionality and can
be used to transfer native AUX in addition to I2C-over-AUX messages.
Helpers are provided to read and write the DPCD, either block-wise or
byte-wise. Many of the existing helpers for DisplayPort take a copy of a
portion of the DPCD and operate on that, without a way to write data
back to the DPCD (e.g. for configuration of the link).
Subsequent patches will build upon this infrastructure to provide common
functionality in a generic way.
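For instance, reading the DPCD revision and the link status through
the new helpers might look like this (a sketch with error handling
elided):
	u8 rev;
	u8 status[6];

	/* single-byte read of DPCD offset 0x000 */
	drm_dp_dpcd_readb(aux, DP_DPCD_REV, &rev);

	/* block read of the 6 link status bytes at 0x202 */
	drm_dp_dpcd_read(aux, DP_LANE0_1_STATUS, status, sizeof(status));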
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
---
Changes in v5:
- move comments partially to struct drm_dp_aux_msg in header file
- return -EPROTO on short reads in DPCD helpers
Changes in v4:
- fix a typo in a comment
Changes in v3:
- reorder drm_dp_dpcd_writeb() arguments to be more intuitive
- return number of bytes transferred in drm_dp_dpcd_write()
- factor out drm_dp_dpcd_access()
- describe error codes
|
|
torture.2014.02.23a: locktorture addition and rcutorture changes
|
|
Merge branches 'doc.2014.02.24a', 'fixes.2014.02.26a' and
'rt.2014.02.17b' into HEAD
doc.2014.02.24a: Documentation changes
fixes.2014.02.26a: Miscellaneous fixes
rt.2014.02.17b: Response-time-related changes
|
|
This commit fixes the following warning:
kernel/ksysfs.c:143:5: warning: symbol 'rcu_expedited' was not declared. Should it be static?
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
[ paulmck: Moved the declaration to include/linux/rcupdate.h to avoid
including the RCU-internal rcu.h file outside of RCU. ]
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
Devices with more complex boot procedures may occasionally need to
apply the register patch manually. regmap_multi_reg_write() is a
logical way to do so; however, the patch must be applied with cache
bypass on, so that it doesn't override any user settings. This patch
adds a regmap_multi_reg_write_bypassed() function that applies a set
of writes with the bypass enabled.
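Usage could then look like this (the register values are made up for
illustration):
	static const struct reg_default codec_patch[] = {
		{ 0x20, 0x0001 },	/* hypothetical boot-time fixups */
		{ 0x21, 0x8000 },
	};

	ret = regmap_multi_reg_write_bypassed(map, codec_patch,
					      ARRAY_SIZE(codec_patch));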
Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
There should be no need for the writes supplied to this function to
be edited by it, so mark them as const.
Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
Commit 93e6f119c0ce ("ipc/mqueue: cleanup definition names and
locations") added global hardcoded limits to the amount of message
queues that can be created. While these limits are per-namespace,
reality is that it ends up breaking userspace applications.
Historically users have, at least in theory, been able to create up to
INT_MAX queues, and limiting it to just 1024 is way too low and dramatic
for some workloads and use cases. For instance, Madars reports:
"This update imposes bad limits on our multi-process application. As
our app uses approaches that each process opens its own set of queues
(usually something about 3-5 queues per process). In some scenarios
we might run up to 3000 processes or more (which of-course for linux
is not a problem). Thus we might need up to 9000 queues or more. All
processes run under one user."
Other affected users can be found in launchpad bug #1155695:
https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695
Instead of increasing this limit, revert it entirely and fall back to
the original way of dealing with queue limits -- where once a user's
resource limit is reached, and all memory is used, new queues cannot
be created.
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Reported-by: Madars Vitolins <m@silodev.com>
Acked-by: Doug Ledford <dledford@redhat.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: <stable@vger.kernel.org> [3.5+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit c02cecb92ed4 ("ARM: orion: move platform_data definitions")
moved the file to the current location but forgot to remove the pointer
to its previous location. Clean it up. While at it also change the header
file protection macros appropriately.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Chris Ball <chris@printf.net>
|
|
Commit 1ef21f6343ff ("ARM: msm: move platform_data definitions")
moved the file to the current location but forgot to remove the pointer
to its previous location. Clean it up. While at it also change the header
file protection macros appropriately.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Chris Ball <chris@printf.net>
|
|
Merge tag 'davinci-for-v3.15/nand' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into next/drivers
A patch to break the DaVinci NAND driver's dependency on
mach-davinci. Required for reuse of the driver on other
platforms (keystone).
* tag 'davinci-for-v3.15/nand' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
ARM: davinci: aemif: get rid of davinci-nand driver dependency on aemif
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Send the Channel Availability Check (CAC) time as a parameter of the
start_radar_detection() callback.
Get the CAC time from the regulatory database.
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Introduce DFS CAC time as a regd param, configured per REG_RULE and
set per channel in cfg80211. DFS CAC time is closely connected to the
regulatory database configuration. Instead of using hardcoded values,
get the DFS CAC time from the regulatory database. Pass the DFS CAC
time to user mode (mainly for iw reg get, iw list, iw info). Allow
setting the DFS CAC time via CRDA. Add support for the internal
regulatory database.
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
[rewrap commit log]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
As mount() and kill_sb() are not a one-to-one match, we shouldn't
grab the ns refcnt unconditionally in sysfs_mount(); instead, we
should grab the refcnt only when kernfs_mount() has allocated a new
superblock.
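With the optional argument in place, sysfs_mount() could do something
along these lines (a sketch; the exact signature and flow may differ):
	bool new_sb;

	root = kernfs_mount(fs_type, flags, sysfs_root, &new_sb, ns);
	if (IS_ERR(root) || !new_sb)
		/* no new superblock was allocated: drop the ns ref */
		kobj_ns_drop(KOBJ_NS_TYPE_NET, ns);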
v2:
- Changed the name of the new argument, suggested by Tejun.
- Made the argument optional, suggested by Tejun.
v3:
- Make the new argument as second-to-last arg, suggested by Tejun.
Signed-off-by: Li Zefan <lizefan@huawei.com>
Acked-by: Tejun Heo <tj@kernel.org>
---
fs/kernfs/mount.c | 8 +++++++-
fs/sysfs/mount.c | 5 +++--
include/linux/kernfs.h | 9 +++++----
3 files changed, 15 insertions(+), 7 deletions(-)
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
For optimization, task_lock() is additionally used to protect
task->cgroups. The optimization is pretty dubious as either
css_set_rwsem is grabbed anyway or PF_EXITING already protects
task->cgroups. It adds only overhead and confusion at this point.
Let's drop task_[un]lock() and update comments accordingly.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
|
|
Currently, process / task migration is a single operation which may
fail depending on memory pressure or the involved controllers'
->can_attach() callbacks. One problem with this approach is migration
of multiple targets. It's impossible to tell whether a given target
will be successfully migrated beforehand and cgroup core can't keep
track of enough states to roll back after intermediate failure.
This is already an issue with cgroup_transfer_tasks(). Also, we're
gonna need multiple target migration for unified hierarchy.
This patch splits migration into four stages -
cgroup_migrate_add_src(), cgroup_migrate_prepare_dst(),
cgroup_migrate() and cgroup_migrate_finish(), where
cgroup_migrate_prepare_dst() performs all the operations which may
fail due to allocation failure without actually migrating the target.
The four separate stages mean that, disregarding ->can_attach()
failures, the success or failure of multi target migration can be
determined before performing any actual migration. If preparations of
all targets succeed, the whole thing will succeed. If not, the whole
operation can fail without any side-effect.
Since the previous patch made the migration path use css_set->mg_tasks
to keep track of migration targets, the only thing which may need
memory allocation during migration is the target css_sets.
cgroup_migrate_prepare_dst() pins all source and target css_sets and
links them up. Note that this
can be performed without holding threadgroup_lock even if the target
is a process. As long as cgroup_mutex is held, no new css_set can be
put into play.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
|
|
Currently, while migrating tasks from one cgroup to another,
cgroup_attach_task() builds a flex array of all target tasks;
unfortunately, this has a couple issues.
* Flex array has a size limit. On 64bit, struct task_and_cgroup is
24 bytes, making the flex element limit around 87k. It is a high
number but not impossible to hit. This means that the current
cgroup implementation can't migrate a process with more than 87k
threads.
* Process migration involves memory allocation whose size is dependent
on the number of threads the process has. This means that cgroup
core can't guarantee success or failure of multi-process migrations
as memory allocation failure can happen in the middle. This is in
part because cgroup can't grab threadgroup locks of multiple
processes at the same time, so when there are multiple processes to
migrate, it is impossible to tell how many tasks are to be migrated
beforehand.
Note that this already affects cgroup_transfer_tasks(). cgroup
currently cannot guarantee atomic success or failure of the
operation. It may fail in the middle and after such failure cgroup
doesn't have enough information to roll back properly. It just
aborts with some tasks migrated and others not.
To resolve the situation, this patch updates the migration path to use
task->cg_list to track target tasks. The previous patch already added
css_set->mg_tasks and updated iterations in non-migration paths to
include them during task migration. This patch updates migration path
to actually make use of it.
Instead of putting onto a flex_array, each target task is moved from
its css_set->tasks list to css_set->mg_tasks and the migration path
keeps track of all the source css_sets and the associated cgroups.
Once all source css_sets are determined, the destination css_set for
each is determined, linked to the matching source css_set and put on a
separate list.
To iterate the target tasks, the migration path just needs to iterate
through either the source or target css_sets, depending on whether
migration has been committed or not, and the tasks on their ->mg_tasks
lists. cgroup_taskset is updated to contain the list_heads for source
and target css_sets and the iteration cursor. cgroup_taskset_*() are
accordingly updated to walk through css_sets and their ->mg_tasks.
This resolves the above listed issues with moderate additional
complexity.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
|
|
Currently, while migrating tasks from one cgroup to another,
cgroup_attach_task() builds a flex array of all target tasks;
unfortunately, this has a couple issues.
* Flex array has a size limit. On 64bit, struct task_and_cgroup is
24 bytes, making the flex element limit around 87k. It is a high
number but not impossible to hit. This means that the current
cgroup implementation can't migrate a process with more than 87k
threads.
* Process migration involves memory allocation whose size is dependent
on the number of threads the process has. This means that cgroup
core can't guarantee success or failure of multi-process migrations
as memory allocation failure can happen in the middle. This is in
part because cgroup can't grab threadgroup locks of multiple
processes at the same time, so when there are multiple processes to
migrate, it is impossible to tell how many tasks are to be migrated
beforehand.
Note that this already affects cgroup_transfer_tasks(). cgroup
currently cannot guarantee atomic success or failure of the
operation. It may fail in the middle and after such failure cgroup
doesn't have enough information to roll back properly. It just
aborts with some tasks migrated and others not.
To resolve the situation, we're going to use task->cg_list during
migration too. Instead of building a separate array, target tasks
will be linked into a dedicated migration list_head on the owning
css_set. Tasks on the migration list are treated the same as tasks on
the usual tasks list; however, being on a separate list allows cgroup
migration code path to keep track of the target tasks by simply
keeping the list of css_sets with tasks being migrated, making
unpredictable dynamic allocation unnecessary.
In preparation for such a migration path update, this patch introduces
css_set->mg_tasks list and updates css_set task iterations so that
they walk both css_set->tasks and ->mg_tasks. Note that ->mg_tasks
isn't used yet.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
|
|
Remove file references rendered invalid due to relocation.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
Remove file references rendered invalid due to relocation.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|