/*
 * mac80211 configuration hooks for cfg80211
 *
 * Copyright 2006-2010	Johannes Berg <johannes@sipsolutions.net>
 * Copyright 2013-2014  Intel Mobile Communications GmbH
 *
 * This file is GPLv2 as found in COPYING.
 */

#include <linux/ieee80211.h>
#include <linux/nl80211.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
#include <net/net_namespace.h>
#include <linux/rcupdate.h>
#include <linux/if_ether.h>
#include <net/cfg80211.h>
#include "ieee80211_i.h"
#include "driver-ops.h"
#include "cfg.h"
#include "rate.h"
#include "mesh.h"
#include "wme.h"

static struct wireless_dev *ieee80211_add_iface(struct wiphy *wiphy,
						const char *name,
						enum nl80211_iftype type,
						u32 *flags,
						struct vif_params *params)
{
	struct ieee80211_local *local = wiphy_priv(wiphy);
	struct wireless_dev *wdev;
	struct ieee80211_sub_if_data *sdata;
	int err;

	err = ieee80211_if_add(local, name, &wdev, type, params);
	if (err)
		return ERR_PTR(err);

	if (type == NL80211_IFTYPE_MONITOR && flags) {
		sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
		sdata->u.mntr_flags = *flags;
	}

	return wdev;
}

static int ieee80211_del_iface(struct wiphy *wiphy, struct wireless_dev *wdev)
{
	ieee80211_if_remove(IEEE80211_WDEV_TO_SUB_IF(wdev));

	return 0;
}

static int ieee80211_change_iface(struct wiphy *wiphy,
				  struct net_device *dev,
				  enum nl80211_iftype type, u32 *flags,
				  struct vif_params *params)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	int ret;

	ret = ieee80211_if_change_type(sdata, type);
	if (ret)
		return ret;

	if (type == NL80211_IFTYPE_AP_VLAN &&
	    params && params->use_4addr == 0)
		RCU_INIT_POINTER(sdata->u.vlan.sta, NULL);
	else if (type == NL80211_IFTYPE_STATION &&
		 params && params->use_4addr >= 0)
		sdata->u.mgd.use_4addr = params->use_4addr;

	if (sdata->vif.type == NL80211_IFTYPE_MONITOR && flags) {
		struct ieee80211_local *local = sdata->local;

		if (ieee80211_sdata_running(sdata)) {
			u32 mask = MONITOR_FLAG_COOK_FRAMES |
				   MONITOR_FLAG_ACTIVE;

			/*
			 * Prohibit MONITOR_FLAG_COOK_FRAMES and
			 * MONITOR_FLAG_ACTIVE to be changed while the
			 * interface is up.
			 * Else we would need to add a lot of cruft
			 * to update everything:
			 *	cooked_mntrs, monitor and all fif_* counters
			 *	reconfigure hardware
			 */
			if ((*flags & mask) != (sdata->u.mntr_flags & mask))
				return -EBUSY;

			ieee80211_adjust_monitor_flags(sdata, -1);
			sdata->u.mntr_flags = *flags;
			ieee80211_adjust_monitor_flags(sdata, 1);

			ieee80211_configure_filter(local);
		} else {
			/*
			 * Because the interface is down, ieee80211_do_stop
			 * and ieee80211_do_open take care of "everything"
			 * mentioned in the comment above.
			 */
			sdata->u.mntr_flags = *flags;
		}
	}

	return 0;
}

static int ieee80211_start_p2p_device(struct wiphy *wiphy,
				      struct wireless_dev *wdev)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
	int ret;

	mutex_lock(&sdata->local->chanctx_mtx);
	ret = ieee80211_check_combinations(sdata, NULL, 0, 0);
	mutex_unlock(&sdata->local->chanctx_mtx);
	if (ret < 0)
		return ret;

	return ieee80211_do_open(wdev, true);
}

static void ieee80211_stop_p2p_device(struct wiphy *wiphy,
				      struct wireless_dev *wdev)
{
	ieee80211_sdata_stop(IEEE80211_WDEV_TO_SUB_IF(wdev));
}

static int ieee80211_set_noack_map(struct wiphy *wiphy,
				   struct net_device *dev,
				   u16 noack_map)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	sdata->noack_map = noack_map;
	return 0;
}

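/*
 * cfg80211 add_key handler: validate the requested cipher, allocate a new
 * key and link it to the interface or, when a MAC address is given, to an
 * associated station.
 */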
static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
			     u8 key_idx, bool pairwise, const u8 *mac_addr,
			     struct key_params *params)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_local *local = sdata->local;
	struct sta_info *sta = NULL;
	const struct ieee80211_cipher_scheme *cs = NULL;
	struct ieee80211_key *key;
	int err;

	if (!ieee80211_sdata_running(sdata))
		return -ENETDOWN;

	/* reject WEP and TKIP keys if WEP failed to initialize */
	switch (params->cipher) {
	case WLAN_CIPHER_SUITE_WEP40:
	case WLAN_CIPHER_SUITE_TKIP:
	case WLAN_CIPHER_SUITE_WEP104:
		if (IS_ERR(local->wep_tx_tfm))
			return -EINVAL;
		break;
	case WLAN_CIPHER_SUITE_CCMP:
	case WLAN_CIPHER_SUITE_AES_CMAC:
	case WLAN_CIPHER_SUITE_GCMP:
		break;
	default:
		cs = ieee80211_cs_get(local, params->cipher, sdata->vif.type);
		break;
	}

	key = ieee80211_key_alloc(params->cipher, key_idx, params->key_len,
				  params->key, params->seq_len, params->seq,
				  cs);
	if (IS_ERR(key))
		return PTR_ERR(key);

	if (pairwise)
		key->conf.flags |= IEEE80211_KEY_FLAG_PAIRWISE;

	mutex_lock(&local->sta_mtx);

	if (mac_addr) {
		if (ieee80211_vif_is_mesh(&sdata->vif))
			sta = sta_info_get(sdata, mac_addr);
		else
			sta = sta_info_get_bss(sdata, mac_addr);
		/*
		 * The ASSOC test makes sure the driver is ready to
		 * receive the key. When wpa_supplicant has roamed
		 * using FT, it attempts to set the key before
		 * association has completed, this rejects that attempt
		 * so it will set the key again after association.
		 *
		 * TODO: accept the key if we have a station entry and
		 *       add it to the device after the station.
		 */
		if (!sta || !test_sta_flag(sta, WLAN_STA_ASSOC)) {
			ieee80211_key_free_unused(key);
			err = -ENOENT;
			goto out_unlock;
		}
	}

	switch (sdata->vif.type) {
	case NL80211_IFTYPE_STATION:
		if (sdata->u.mgd.mfp != IEEE80211_MFP_DISABLED)
			key->conf.flags |= IEEE80211_KEY_FLAG_RX_MGMT;
		break;
	case NL80211_IFTYPE_AP:
	case NL80211_IFTYPE_AP_VLAN:
		/* Keys without a station are used for TX only */
		if (key->sta && test_sta_flag(key->sta, WLAN_STA_MFP))
			key->conf.flags |= IEEE80211_KEY_FLAG_RX_MGMT;
		break;
	case NL80211_IFTYPE_ADHOC:
		/* no MFP (yet) */
		break;
	case NL80211_IFTYPE_MESH_POINT:
#ifdef CONFIG_MAC80211_MESH
		if (sdata->u.mesh.security != IEEE80211_MESH_SEC_NONE)
			key->conf.flags |= IEEE80211_KEY_FLAG_RX_MGMT;
		break;
#endif
	case NL80211_IFTYPE_WDS:
	case NL80211_IFTYPE_MONITOR:
	case NL80211_IFTYPE_P2P_DEVICE:
	case NL80211_IFTYPE_UNSPECIFIED:
	case NUM_NL80211_IFTYPES:
	case NL80211_IFTYPE_P2P_CLIENT:
	case NL80211_IFTYPE_P2P_GO:
		/* shouldn't happen */
		WARN_ON_ONCE(1);
		break;
	}

	if (sta)
		sta->cipher_scheme = cs;

	err = ieee80211_key_link(key, sdata, sta);

 out_unlock:
	mutex_unlock(&local->sta_mtx);

	return err;
}

static int ieee80211_del_key(struct wiphy *wiphy, struct net_device *dev,
			     u8 key_idx, bool pairwise, const u8 *mac_addr)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_local *local = sdata->local;
	struct sta_info *sta;
	struct ieee80211_key *key = NULL;
	int ret;

	mutex_lock(&local->sta_mtx);
	mutex_lock(&local->key_mtx);

	if (mac_addr) {
		ret = -ENOENT;

		sta = sta_info_get_bss(sdata, mac_addr);
		if (!sta)
			goto out_unlock;

		if (pairwise)
			key = key_mtx_dereference(local, sta->ptk[key_idx]);
		else
			key = key_mtx_dereference(local, sta->gtk[key_idx]);
	} else
		key = key_mtx_dereference(local, sdata->keys[key_idx]);

	if (!key) {
		ret = -ENOENT;
		goto out_unlock;
	}

	ieee80211_key_free(key, true);

	ret = 0;
 out_unlock:
	mutex_unlock(&local->key_mtx);
	mutex_unlock(&local->sta_mtx);

	return ret;
}

static int ieee80211_get_key(struct wiphy *wiphy, struct net_device *dev,
			     u8 key_idx, bool pairwise, const u8 *mac_addr,
			     void *cookie,
			     void (*callback)(void *cookie,
					      struct key_params *params))
{
	struct ieee80211_sub_if_data *sdata;
	struct sta_info *sta = NULL;
	u8 seq[6] = {0};
	struct key_params params;
	struct ieee80211_key *key = NULL;
	u64 pn64;
	u32 iv32;
	u16 iv16;
	int err = -ENOENT;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	rcu_read_lock();

	if (mac_addr) {
		sta = sta_info_get_bss(sdata, mac_addr);
		if (!sta)
			goto out;

		if (pairwise && key_idx < NUM_DEFAULT_KEYS)
			key = rcu_dereference(sta->ptk[key_idx]);
		else if (!pairwise &&
			 key_idx < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS)
			key = rcu_dereference(sta->gtk[key_idx]);
	} else
		key = rcu_dereference(sdata->keys[key_idx]);

	if (!key)
		goto out;

	memset(&params, 0, sizeof(params));

	params.cipher = key->conf.cipher;

	switch (key->conf.cipher) {
	case WLAN_CIPHER_SUITE_TKIP:
		iv32 = key->u.tkip.tx.iv32;
		iv16 = key->u.tkip.tx.iv16;

		if (key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)
			drv_get_tkip_seq(sdata->local,
					 key->conf.hw_key_idx,
					 &iv32, &iv16);

		seq[0] = iv16 & 0xff;
		seq[1] = (iv16 >> 8) & 0xff;
		seq[2] = iv32 & 0xff;
		seq[3] = (iv32 >> 8) & 0xff;
		seq[4] = (iv32 >> 16) & 0xff;
		seq[5] = (iv32 >> 24) & 0xff;
		params.seq = seq;
		params.seq_len = 6;
		break;
	case WLAN_CIPHER_SUITE_CCMP:
		pn64 = atomic64_read(&key->u.ccmp.tx_pn);
		seq[0] = pn64;
		seq[1] = pn64 >> 8;
		seq[2] = pn64 >> 16;
		seq[3] = pn64 >> 24;
		seq[4] = pn64 >> 32;
		seq[5] = pn64 >> 40;
		params.seq = seq;
		params.seq_len = 6;
		break;
	case WLAN_CIPHER_SUITE_AES_CMAC:
		pn64 = atomic64_read(&key->u.aes_cmac.tx_pn);
		seq[0] = pn64;
		seq[1] = pn64 >> 8;
		seq[2] = pn64 >> 16;
		seq[3] = pn64 >> 24;
		seq[4] = pn64 >> 32;
		seq[5] = pn64 >> 40;
		params.seq = seq;
		params.seq_len = 6;
		break;
	}

	params.key = key->conf.key;
	params.key_len = key->conf.keylen;

	callback(cookie, &params);
	err = 0;

 out:
	rcu_read_unlock();
	return err;
}

static int ieee80211_config_default_key(struct wiphy *wiphy,
					struct net_device *dev,
					u8 key_idx, bool uni,
					bool multi)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	ieee80211_set_default_key(sdata, key_idx, uni, multi);

	return 0;
}

static int ieee80211_config_default_mgmt_key(struct wiphy *wiphy,
					     struct net_device *dev,
					     u8 key_idx)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	ieee80211_set_default_mgmt_key(sdata, key_idx);

	return 0;
}

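/* Translate an ieee80211_tx_rate (HT/VHT/legacy) into cfg80211's rate_info. */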
void sta_set_rate_info_tx(struct sta_info *sta,
			  const struct ieee80211_tx_rate *rate,
			  struct rate_info *rinfo)
{
	rinfo->flags = 0;
	if (rate->flags & IEEE80211_TX_RC_MCS) {
		rinfo->flags |= RATE_INFO_FLAGS_MCS;
		rinfo->mcs = rate->idx;
	} else if (rate->flags & IEEE80211_TX_RC_VHT_MCS) {
		rinfo->flags |= RATE_INFO_FLAGS_VHT_MCS;
		rinfo->mcs = ieee80211_rate_get_vht_mcs(rate);
		rinfo->nss = ieee80211_rate_get_vht_nss(rate);
	} else {
		struct ieee80211_supported_band *sband;
		int shift = ieee80211_vif_get_shift(&sta->sdata->vif);
		u16 brate;

		sband = sta->local->hw.wiphy->bands[
				ieee80211_get_sdata_band(sta->sdata)];
		brate = sband->bitrates[rate->idx].bitrate;
		rinfo->legacy = DIV_ROUND_UP(brate, 1 << shift);
	}
	if (rate->flags & IEEE80211_TX_RC_40_MHZ_WIDTH)
		rinfo->flags |= RATE_INFO_FLAGS_40_MHZ_WIDTH;
	if (rate->flags & IEEE80211_TX_RC_80_MHZ_WIDTH)
		rinfo->flags |= RATE_INFO_FLAGS_80_MHZ_WIDTH;
	if (rate->flags & IEEE80211_TX_RC_160_MHZ_WIDTH)
		rinfo->flags |= RATE_INFO_FLAGS_160_MHZ_WIDTH;
	if (rate->flags & IEEE80211_TX_RC_SHORT_GI)
		rinfo->flags |= RATE_INFO_FLAGS_SHORT_GI;
}

void sta_set_rate_info_rx(struct sta_info *sta, struct rate_info *rinfo)
{
	rinfo->flags = 0;

	if (sta->last_rx_rate_flag & RX_FLAG_HT) {
		rinfo->flags |= RATE_INFO_FLAGS_MCS;
		rinfo->mcs = sta->last_rx_rate_idx;
	} else if (sta->last_rx_rate_flag & RX_FLAG_VHT) {
		rinfo->flags |= RATE_INFO_FLAGS_VHT_MCS;
		rinfo->nss = sta->last_rx_rate_vht_nss;
		rinfo->mcs = sta->last_rx_rate_idx;
	} else {
		struct ieee80211_supported_band *sband;
		int shift = ieee80211_vif_get_shift(&sta->sdata->vif);
		u16 brate;

		sband = sta->local->hw.wiphy->bands[
				ieee80211_get_sdata_band(sta->sdata)];
		brate = sband->bitrates[sta->last_rx_rate_idx].bitrate;
		rinfo->legacy = DIV_ROUND_UP(brate, 1 << shift);
	}

	if (sta->last_rx_rate_flag & RX_FLAG_40MHZ)
		rinfo->flags |= RATE_INFO_FLAGS_40_MHZ_WIDTH;
	if (sta->last_rx_rate_flag & RX_FLAG_SHORT_GI)
		rinfo->flags |= RATE_INFO_FLAGS_SHORT_GI;
	if (sta->last_rx_rate_vht_flag & RX_VHT_FLAG_80MHZ)
		rinfo->flags |= RATE_INFO_FLAGS_80_MHZ_WIDTH;
	if (sta->last_rx_rate_vht_flag & RX_VHT_FLAG_80P80MHZ)
		rinfo->flags |= RATE_INFO_FLAGS_80P80_MHZ_WIDTH;
	if (sta->last_rx_rate_vht_flag & RX_VHT_FLAG_160MHZ)
		rinfo->flags |= RATE_INFO_FLAGS_160_MHZ_WIDTH;
}

static int ieee80211_dump_station(struct wiphy *wiphy, struct net_device *dev,
				  int idx, u8 *mac, struct station_info *sinfo)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_local *local = sdata->local;
	struct sta_info *sta;
	int ret = -ENOENT;

	mutex_lock(&local->sta_mtx);

	sta = sta_info_get_by_idx(sdata, idx);
	if (sta) {
		ret = 0;
		memcpy(mac, sta->sta.addr, ETH_ALEN);
		sta_set_sinfo(sta, sinfo);
	}

	mutex_unlock(&local->sta_mtx);

	return ret;
}

static int ieee80211_dump_survey(struct wiphy *wiphy, struct net_device *dev,
				 int idx, struct survey_info *survey)
{
	struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr);

	return drv_get_survey(local, idx, survey);
}

static int ieee80211_get_station(struct wiphy *wiphy, struct net_device *dev,
				 const u8 *mac, struct station_info *sinfo)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_local *local = sdata->local;
	struct sta_info *sta;
	int ret = -ENOENT;

	mutex_lock(&local->sta_mtx);

	sta = sta_info_get_bss(sdata, mac);
	if (sta) {
		ret = 0;
		sta_set_sinfo(sta, sinfo);
	}

	mutex_unlock(&local->sta_mtx);

	return ret;
}

static int ieee80211_set_monitor_channel(struct wiphy *wiphy,
					 struct cfg80211_chan_def *chandef)
{
	struct ieee80211_local *local = wiphy_priv(wiphy);
	struct ieee80211_sub_if_data *sdata;
	int ret = 0;

	if (cfg80211_chandef_identical(&local->monitor_chandef, chandef))
		return 0;

	mutex_lock(&local->mtx);
	mutex_lock(&local->iflist_mtx);
	if (local->use_chanctx) {
		sdata = rcu_dereference_protected(
				local->monitor_sdata,
				lockdep_is_held(&local->iflist_mtx));
		if (sdata) {
			ieee80211_vif_release_channel(sdata);
			ret = ieee80211_vif_use_channel(sdata, chandef,
					IEEE80211_CHANCTX_EXCLUSIVE);
		}
	} else if (local->open_count == local->monitors) {
		local->_oper_chandef = *chandef;
		ieee80211_hw_config(local, 0);
	}

	if (ret == 0)
		local->monitor_chandef = *chandef;
	mutex_unlock(&local->iflist_mtx);
	mutex_unlock(&local->mtx);

	return ret;
}

static int ieee80211_set_probe_resp(struct ieee80211_sub_if_data *sdata,
				    const u8 *resp, size_t resp_len,
				    const struct ieee80211_csa_settings *csa)
{
	struct probe_resp *new, *old;

	if (!resp || !resp_len)
		return 1;

	old = sdata_dereference(sdata->u.ap.probe_resp, sdata);

	new = kzalloc(sizeof(struct probe_resp) + resp_len, GFP_KERNEL);
	if (!new)
		return -ENOMEM;

	new->len = resp_len;
	memcpy(new->data, resp, resp_len);

	if (csa)
		memcpy(new->csa_counter_offsets, csa->counter_offsets_presp,
		       csa->n_counter_offsets_presp *
		       sizeof(new->csa_counter_offsets[0]));

	rcu_assign_pointer(sdata->u.ap.probe_resp, new);
	if (old)
		kfree_rcu(old, rcu_head);

	return 0;
}

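/*
 * Build a new beacon_data block (head plus optional tail, reusing the old
 * parts when not supplied) and publish it via RCU; returns the BSS_CHANGED_*
 * bits the caller must signal, or a negative error.
 */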
static int ieee80211_assign_beacon(struct ieee80211_sub_if_data *sdata,
				   struct cfg80211_beacon_data *params,
				   const struct ieee80211_csa_settings *csa)
{
	struct beacon_data *new, *old;
	int new_head_len, new_tail_len;
	int size, err;
	u32 changed = BSS_CHANGED_BEACON;

	old = sdata_dereference(sdata->u.ap.beacon, sdata);

	/* Need to have a beacon head if we don't have one yet */
	if (!params->head && !old)
		return -EINVAL;

	/* new or old head? */
	if (params->head)
		new_head_len = params->head_len;
	else
		new_head_len = old->head_len;

	/* new or old tail? */
	if (params->tail || !old)
		/* params->tail_len will be zero for !params->tail */
		new_tail_len = params->tail_len;
	else
		new_tail_len = old->tail_len;

	size = sizeof(*new) + new_head_len + new_tail_len;

	new = kzalloc(size, GFP_KERNEL);
	if (!new)
		return -ENOMEM;

	/* start filling the new info now */

	/*
	 * pointers go into the block we allocated,
	 * memory is | beacon_data | head | tail |
	 */
	new->head = ((u8 *) new) + sizeof(*new);
	new->tail = new->head + new_head_len;
	new->head_len = new_head_len;
	new->tail_len = new_tail_len;

	if (csa) {
		new->csa_current_counter = csa->count;
		memcpy(new->csa_counter_offsets, csa->counter_offsets_beacon,
		       csa->n_counter_offsets_beacon *
		       sizeof(new->csa_counter_offsets[0]));
	}

	/* copy in head */
	if (params->head)
		memcpy(new->head, params->head, new_head_len);
	else
		memcpy(new->head, old->head, new_head_len);

	/* copy in optional tail */
	if (params->tail)
		memcpy(new->tail, params->tail, new_tail_len);
	else
		if (old)
			memcpy(new->tail, old->tail, new_tail_len);

	err = ieee80211_set_probe_resp(sdata, params->probe_resp,
				       params->probe_resp_len, csa);
	if (err < 0)
		return err;
	if (err == 0)
		changed |= BSS_CHANGED_AP_PROBE_RESP;

	rcu_assign_pointer(sdata->u.ap.beacon, new);

	if (old)
		kfree_rcu(old, rcu_head);

	return changed;
}

static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
			      struct cfg80211_ap_settings *params)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_local *local = sdata->local;
	struct beacon_data *old;
	struct ieee80211_sub_if_data *vlan;
	u32 changed = BSS_CHANGED_BEACON_INT |
		      BSS_CHANGED_BEACON_ENABLED |
		      BSS_CHANGED_BEACON |
		      BSS_CHANGED_SSID |
		      BSS_CHANGED_P2P_PS;
	int err;

	old = sdata_dereference(sdata->u.ap.beacon, sdata);
	if (old)
		return -EALREADY;

	switch (params->smps_mode) {
	case NL80211_SMPS_OFF:
		sdata->smps_mode = IEEE80211_SMPS_OFF;
		break;
	case NL80211_SMPS_STATIC:
		sdata->smps_mode = IEEE80211_SMPS_STATIC;
		break;
	case NL80211_SMPS_DYNAMIC:
		sdata->smps_mode = IEEE80211_SMPS_DYNAMIC;
		break;
	default:
		return -EINVAL;
	}
	sdata->needed_rx_chains = sdata->local->rx_chains;

	mutex_lock(&local->mtx);
	err = ieee80211_vif_use_channel(sdata, &params->chandef,
					IEEE80211_CHANCTX_SHARED);
	if (!err)
		ieee80211_vif_copy_chanctx_to_vlans(sdata, false);
	mutex_unlock(&local->mtx);
	if (err)
		return err;

	/*
	 * Apply control port protocol, this allows us to
	 * not encrypt dynamic WEP control frames.
	 */
	sdata->control_port_protocol = params->crypto.control_port_ethertype;
	sdata->control_port_no_encrypt = params->crypto.control_port_no_encrypt;
	sdata->encrypt_headroom = ieee80211_cs_headroom(sdata->local,
							&params->crypto,
							sdata->vif.type);

	list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list) {
		vlan->control_port_protocol =
			params->crypto.control_port_ethertype;
		vlan->control_port_no_encrypt =
			params->crypto.control_port_no_encrypt;
		vlan->encrypt_headroom =
			ieee80211_cs_headroom(sdata->local,
					      &params->crypto,
					      vlan->vif.type);
	}

	sdata->vif.bss_conf.beacon_int = params->beacon_interval;
	sdata->vif.bss_conf.dtim_period = params->dtim_period;
	sdata->vif.bss_conf.enable_beacon = true;

	sdata->vif.bss_conf.ssid_len = params->ssid_len;
	if (params->ssid_len)
		memcpy(sdata->vif.bss_conf.ssid, params->ssid,
		       params->ssid_len);
	sdata->vif.bss_conf.hidden_ssid =
		(params->hidden_ssid != NL80211_HIDDEN_SSID_NOT_IN_USE);

	memset(&sdata->vif.bss_conf.p2p_noa_attr, 0,
	       sizeof(sdata->vif.bss_conf.p2p_noa_attr));
	sdata->vif.bss_conf.p2p_noa_attr.oppps_ctwindow =
		params->p2p_ctwindow & IEEE80211_P2P_OPPPS_CTWINDOW_MASK;
	if (params->p2p_opp_ps)
		sdata->vif.bss_conf.p2p_noa_attr.oppps_ctwindow |=
					IEEE80211_P2P_OPPPS_ENABLE_BIT;

	err = ieee80211_assign_beacon(sdata, &params->beacon, NULL);
	if (err < 0) {
		ieee80211_vif_release_channel(sdata);
		return err;
	}
	changed |= err;

	err = drv_start_ap(sdata->local, sdata);
	if (err) {
		old = sdata_dereference(sdata->u.ap.beacon, sdata);

		if (old)
			kfree_rcu(old, rcu_head);
		RCU_INIT_POINTER(sdata->u.ap.beacon, NULL);
		ieee80211_vif_release_channel(sdata);
		return err;
	}

	ieee80211_recalc_dtim(local, sdata);
	ieee80211_bss_info_change_notify(sdata, changed);

	netif_carrier_on(dev);
	list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list)
		netif_carrier_on(vlan->dev);

	return 0;
}

static int ieee80211_change_beacon(struct wiphy *wiphy, struct net_device *dev,
				   struct cfg80211_beacon_data *params)
{
	struct ieee80211_sub_if_data *sdata;
	struct beacon_data *old;
	int err;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	sdata_assert_lock(sdata);

	/* don't allow changing the beacon while CSA is in place - offset
	 * of channel switch counter may change
	 */
	if (sdata->vif.csa_active)
		return -EBUSY;

	old = sdata_dereference(sdata->u.ap.beacon, sdata);
	if (!old)
		return -ENOENT;

	err = ieee80211_assign_beacon(sdata, params, NULL);
	if (err < 0)
		return err;
	ieee80211_bss_info_change_notify(sdata, err);
	return 0;
}

static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_sub_if_data *vlan;
	struct ieee80211_local *local = sdata->local;
	struct beacon_data *old_beacon;
	struct probe_resp *old_probe_resp;
	struct cfg80211_chan_def chandef;

	sdata_assert_lock(sdata);

	old_beacon = sdata_dereference(sdata->u.ap.beacon, sdata);
	if (!old_beacon)
		return -ENOENT;
	old_probe_resp = sdata_dereference(sdata->u.ap.probe_resp, sdata);

	/* abort any running channel switch */
	mutex_lock(&local->mtx);
	sdata->vif.csa_active = false;
	if (sdata->csa_block_tx) {
		ieee80211_wake_vif_queues(local, sdata,
					  IEEE80211_QUEUE_STOP_REASON_CSA);
		sdata->csa_block_tx = false;
	}

	mutex_unlock(&local->mtx);

	kfree(sdata->u.ap.next_beacon);
	sdata->u.ap.next_beacon = NULL;

	/* turn off carrier for this interface and dependent VLANs */
	list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list)
		netif_carrier_off(vlan->dev);
	netif_carrier_off(dev);

	/* remove beacon and probe response */
	RCU_INIT_POINTER(sdata->u.ap.beacon, NULL);
	RCU_INIT_POINTER(sdata->u.ap.probe_resp, NULL);
	kfree_rcu(old_beacon, rcu_head);
	if (old_probe_resp)
		kfree_rcu(old_probe_resp, rcu_head);
	sdata->u.ap.driver_smps_mode = IEEE80211_SMPS_OFF;

	__sta_info_flush(sdata, true);
	ieee80211_free_keys(sdata, true);

	sdata->vif.bss_conf.enable_beacon = false;
	sdata->vif.bss_conf.ssid_len = 0;
	clear_bit(SDATA_STATE_OFFCHANNEL_BEACON_STOPPED, &sdata->state);
	ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_BEACON_ENABLED);

	if (sdata->wdev.cac_started) {
		chandef = sdata->vif.bss_conf.chandef;
		cancel_delayed_work_sync(&sdata->dfs_cac_timer_work);
		cfg80211_cac_event(sdata->dev, &chandef,
				   NL80211_RADAR_CAC_ABORTED,
				   GFP_KERNEL);
	}

	drv_stop_ap(sdata->local, sdata);

	/* free all potentially still buffered bcast frames */
	local->total_ps_buffered -= skb_queue_len(&sdata->u.ap.ps.bc_buf);
	skb_queue_purge(&sdata->u.ap.ps.bc_buf);

	mutex_lock(&local->mtx);
	ieee80211_vif_copy_chanctx_to_vlans(sdata, true);
	ieee80211_vif_release_channel(sdata);
	mutex_unlock(&local->mtx);

	return 0;
}

/* Layer 2 Update frame (802.2 Type 1 LLC XID Update response) */
struct iapp_layer2_update {
	u8 da[ETH_ALEN];	/* broadcast */
	u8 sa[ETH_ALEN];	/* STA addr */
	__be16 len;		/* 6 */
	u8 dsap;		/* 0 */
	u8 ssap;		/* 0 */
	u8 control;
	u8 xid_info[3];
} __packed;

static void ieee80211_send_layer2_update(struct sta_info *sta)
{
	struct iapp_layer2_update *msg;
	struct sk_buff *skb;

	/* Send Level 2 Update Frame to update forwarding tables in layer 2
	 * bridge devices */

	skb = dev_alloc_skb(sizeof(*msg));
	if (!skb)
		return;
	msg = (struct iapp_layer2_update *)skb_put(skb, sizeof(*msg));

	/* 802.2 Type 1 Logical Link Control (LLC) Exchange Identifier (XID)
	 * Update response frame; IEEE Std 802.2-1998, 5.4.1.2.1 */

	eth_broadcast_addr(msg->da);
	memcpy(msg->sa, sta->sta.addr, ETH_ALEN);
	msg->len = htons(6);
	msg->dsap = 0;
	msg->ssap = 0x01;	/* NULL LSAP, CR Bit: Response */
	msg->control = 0xaf;	/* XID response lsb.1111F101.
				 * F=0 (no poll command; unsolicited frame) */
	msg->xid_info[0] = 0x81;	/* XID format identifier */
	msg->xid_info[1] = 1;	/* LLC types/classes: Type 1 LLC */
	msg->xid_info[2] = 0;	/* XID sender's receive window size (RW) */

	skb->dev = sta->sdata->dev;
	skb->protocol = eth_type_trans(skb, sta->sdata->dev);
	memset(skb->cb, 0, sizeof(skb->cb));
	netif_rx_ni(skb);
}

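/*
 * Walk the station through the NONE/AUTH/ASSOC/AUTHORIZED state machine
 * according to the flag mask/set requested by userspace.
 */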
nl80211/mac80211: support full station state in AP mode
Today, stations are added already associated. That is
inefficient if, for example, the driver has no room
for stations any more because then the station will
go through the entire auth/assoc handshake, only to
be kicked out afterwards.
To address this a bit better, at least with drivers
using the new station state callback, allow hostapd
to add stations in unauthenticated mode, just after
receiving the AUTH frame, before even replying. Thus
if there's no more space at that point, it can send
a negative auth frame back. It still needs to handle
later state transition errors though, of course.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2012-10-26 15:53:44 +00:00
|
|
|
static int sta_apply_auth_flags(struct ieee80211_local *local,
|
|
|
|
struct sta_info *sta,
|
|
|
|
u32 mask, u32 set)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (mask & BIT(NL80211_STA_FLAG_AUTHENTICATED) &&
|
|
|
|
set & BIT(NL80211_STA_FLAG_AUTHENTICATED) &&
|
|
|
|
!test_sta_flag(sta, WLAN_STA_AUTH)) {
|
|
|
|
ret = sta_info_move_state(sta, IEEE80211_STA_AUTH);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (mask & BIT(NL80211_STA_FLAG_ASSOCIATED) &&
|
|
|
|
set & BIT(NL80211_STA_FLAG_ASSOCIATED) &&
|
|
|
|
!test_sta_flag(sta, WLAN_STA_ASSOC)) {
|
|
|
|
ret = sta_info_move_state(sta, IEEE80211_STA_ASSOC);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (mask & BIT(NL80211_STA_FLAG_AUTHORIZED)) {
|
|
|
|
if (set & BIT(NL80211_STA_FLAG_AUTHORIZED))
|
|
|
|
ret = sta_info_move_state(sta, IEEE80211_STA_AUTHORIZED);
|
|
|
|
else if (test_sta_flag(sta, WLAN_STA_AUTHORIZED))
|
|
|
|
ret = sta_info_move_state(sta, IEEE80211_STA_ASSOC);
|
|
|
|
else
|
|
|
|
ret = 0;
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (mask & BIT(NL80211_STA_FLAG_ASSOCIATED) &&
|
|
|
|
!(set & BIT(NL80211_STA_FLAG_ASSOCIATED)) &&
|
|
|
|
test_sta_flag(sta, WLAN_STA_ASSOC)) {
|
|
|
|
ret = sta_info_move_state(sta, IEEE80211_STA_AUTH);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (mask & BIT(NL80211_STA_FLAG_AUTHENTICATED) &&
|
|
|
|
!(set & BIT(NL80211_STA_FLAG_AUTHENTICATED)) &&
|
|
|
|
test_sta_flag(sta, WLAN_STA_AUTH)) {
|
|
|
|
ret = sta_info_move_state(sta, IEEE80211_STA_NONE);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
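
/*
 * Translate a cfg80211 station_parameters block into mac80211 station
 * state: flag mangling for mesh and TDLS peers, short-preamble/WME/MFP
 * flags, U-APSD settings, AID, supported rates, HT/VHT capabilities and,
 * for mesh interfaces, peer-link state and power mode.  For TDLS peers the
 * auth flags are applied only at the end, once all other station
 * information has been set.
 */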

static int sta_apply_parameters(struct ieee80211_local *local,
				struct sta_info *sta,
				struct station_parameters *params)
{
	int ret = 0;
	struct ieee80211_supported_band *sband;
	struct ieee80211_sub_if_data *sdata = sta->sdata;
	enum ieee80211_band band = ieee80211_get_sdata_band(sdata);
	u32 mask, set;

	sband = local->hw.wiphy->bands[band];

	mask = params->sta_flags_mask;
	set = params->sta_flags_set;

	if (ieee80211_vif_is_mesh(&sdata->vif)) {
		/*
		 * In mesh mode, ASSOCIATED isn't part of the nl80211
		 * API but must follow AUTHENTICATED for driver state.
		 */
		if (mask & BIT(NL80211_STA_FLAG_AUTHENTICATED))
			mask |= BIT(NL80211_STA_FLAG_ASSOCIATED);
		if (set & BIT(NL80211_STA_FLAG_AUTHENTICATED))
			set |= BIT(NL80211_STA_FLAG_ASSOCIATED);
	} else if (test_sta_flag(sta, WLAN_STA_TDLS_PEER)) {
		/*
		 * TDLS -- everything follows authorized, but
		 * only becoming authorized is possible, not
		 * going back
		 */
		if (set & BIT(NL80211_STA_FLAG_AUTHORIZED)) {
			set |= BIT(NL80211_STA_FLAG_AUTHENTICATED) |
			       BIT(NL80211_STA_FLAG_ASSOCIATED);
			mask |= BIT(NL80211_STA_FLAG_AUTHENTICATED) |
				BIT(NL80211_STA_FLAG_ASSOCIATED);
		}
	}

	/* auth flags will be set later for TDLS stations */
	if (!test_sta_flag(sta, WLAN_STA_TDLS_PEER)) {
		ret = sta_apply_auth_flags(local, sta, mask, set);
		if (ret)
			return ret;
	}

	if (mask & BIT(NL80211_STA_FLAG_SHORT_PREAMBLE)) {
		if (set & BIT(NL80211_STA_FLAG_SHORT_PREAMBLE))
			set_sta_flag(sta, WLAN_STA_SHORT_PREAMBLE);
		else
			clear_sta_flag(sta, WLAN_STA_SHORT_PREAMBLE);
	}

	if (mask & BIT(NL80211_STA_FLAG_WME))
		sta->sta.wme = set & BIT(NL80211_STA_FLAG_WME);

	if (mask & BIT(NL80211_STA_FLAG_MFP)) {
		if (set & BIT(NL80211_STA_FLAG_MFP))
			set_sta_flag(sta, WLAN_STA_MFP);
		else
			clear_sta_flag(sta, WLAN_STA_MFP);
	}

	if (mask & BIT(NL80211_STA_FLAG_TDLS_PEER)) {
		if (set & BIT(NL80211_STA_FLAG_TDLS_PEER))
			set_sta_flag(sta, WLAN_STA_TDLS_PEER);
		else
			clear_sta_flag(sta, WLAN_STA_TDLS_PEER);
	}

	if (params->sta_modify_mask & STATION_PARAM_APPLY_UAPSD) {
		sta->sta.uapsd_queues = params->uapsd_queues;
		sta->sta.max_sp = params->max_sp;
	}

	/*
	 * cfg80211 validates this (1-2007) and allows setting the AID
	 * only when creating a new station entry
	 */
	if (params->aid)
		sta->sta.aid = params->aid;

	/*
	 * Some of the following updates would be racy if called on an
	 * existing station, via ieee80211_change_station(). However,
	 * all such changes are rejected by cfg80211 except for updates
	 * changing the supported rates on an existing but not yet used
	 * TDLS peer.
	 */

	if (params->listen_interval >= 0)
		sta->listen_interval = params->listen_interval;

	if (params->supported_rates) {
		ieee80211_parse_bitrates(&sdata->vif.bss_conf.chandef,
					 sband, params->supported_rates,
					 params->supported_rates_len,
					 &sta->sta.supp_rates[band]);
	}

	if (params->ht_capa)
		ieee80211_ht_cap_ie_to_sta_ht_cap(sdata, sband,
						  params->ht_capa, sta);

	if (params->vht_capa)
		ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
						    params->vht_capa, sta);

	if (params->opmode_notif_used) {
		/* returned value is only needed for rc update, but the
		 * rc isn't initialized here yet, so ignore it
		 */
		__ieee80211_vht_handle_opmode(sdata, sta,
					      params->opmode_notif,
					      band, false);
	}

	if (ieee80211_vif_is_mesh(&sdata->vif)) {
#ifdef CONFIG_MAC80211_MESH
		u32 changed = 0;

		if (params->sta_modify_mask & STATION_PARAM_APPLY_PLINK_STATE) {
			switch (params->plink_state) {
			case NL80211_PLINK_ESTAB:
				if (sta->plink_state != NL80211_PLINK_ESTAB)
					changed = mesh_plink_inc_estab_count(
							sdata);
				sta->plink_state = params->plink_state;

				ieee80211_mps_sta_status_update(sta);
				changed |= ieee80211_mps_set_sta_local_pm(sta,
					      sdata->u.mesh.mshcfg.power_mode);
				break;
			case NL80211_PLINK_LISTEN:
			case NL80211_PLINK_BLOCKED:
			case NL80211_PLINK_OPN_SNT:
			case NL80211_PLINK_OPN_RCVD:
			case NL80211_PLINK_CNF_RCVD:
			case NL80211_PLINK_HOLDING:
				if (sta->plink_state == NL80211_PLINK_ESTAB)
					changed = mesh_plink_dec_estab_count(
							sdata);
				sta->plink_state = params->plink_state;

				ieee80211_mps_sta_status_update(sta);
				changed |= ieee80211_mps_set_sta_local_pm(sta,
						NL80211_MESH_POWER_UNKNOWN);
				break;
			default:
				/* nothing */
				break;
			}
		}

		switch (params->plink_action) {
		case NL80211_PLINK_ACTION_NO_ACTION:
			/* nothing */
			break;
		case NL80211_PLINK_ACTION_OPEN:
			changed |= mesh_plink_open(sta);
			break;
		case NL80211_PLINK_ACTION_BLOCK:
			changed |= mesh_plink_block(sta);
			break;
		}

		if (params->local_pm)
			changed |=
			      ieee80211_mps_set_sta_local_pm(sta,
							     params->local_pm);
		ieee80211_mbss_info_change_notify(sdata, changed);
#endif
	}

	/* set the STA state after all sta info from usermode has been set */
	if (test_sta_flag(sta, WLAN_STA_TDLS_PEER)) {
		ret = sta_apply_auth_flags(local, sta, mask, set);
		if (ret)
			return ret;
	}

	return 0;
}
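
/*
 * cfg80211 add_station() handler.  Unless the new entry is a TDLS peer it
 * is created directly in the AUTH+ASSOC state (the pre-move-state calls
 * below), which matches how userspace traditionally added already
 * associated stations; a TDLS peer instead starts out as a plain peer
 * entry and is promoted later.  The layer-2 update frame is only sent for
 * AP/AP_VLAN interfaces so that bridges can relearn the station's port.
 */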

static int ieee80211_add_station(struct wiphy *wiphy, struct net_device *dev,
				 const u8 *mac,
				 struct station_parameters *params)
{
	struct ieee80211_local *local = wiphy_priv(wiphy);
	struct sta_info *sta;
	struct ieee80211_sub_if_data *sdata;
	int err;
	int layer2_update;

	if (params->vlan) {
		sdata = IEEE80211_DEV_TO_SUB_IF(params->vlan);

		if (sdata->vif.type != NL80211_IFTYPE_AP_VLAN &&
		    sdata->vif.type != NL80211_IFTYPE_AP)
			return -EINVAL;
	} else
		sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	if (ether_addr_equal(mac, sdata->vif.addr))
		return -EINVAL;

	if (is_multicast_ether_addr(mac))
		return -EINVAL;

	sta = sta_info_alloc(sdata, mac, GFP_KERNEL);
	if (!sta)
		return -ENOMEM;

	/*
	 * defaults -- if userspace wants something else we'll
	 * change it accordingly in sta_apply_parameters()
	 */
	if (!(params->sta_flags_set & BIT(NL80211_STA_FLAG_TDLS_PEER))) {
		sta_info_pre_move_state(sta, IEEE80211_STA_AUTH);
		sta_info_pre_move_state(sta, IEEE80211_STA_ASSOC);
	} else {
		sta->sta.tdls = true;
	}

	err = sta_apply_parameters(local, sta, params);
	if (err) {
		sta_info_free(local, sta);
		return err;
	}

	/*
	 * for TDLS, rate control should be initialized only when
	 * rates are known and station is marked authorized
	 */
	if (!test_sta_flag(sta, WLAN_STA_TDLS_PEER))
		rate_control_rate_init(sta);

	layer2_update = sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||
		sdata->vif.type == NL80211_IFTYPE_AP;

	err = sta_info_insert_rcu(sta);
	if (err) {
		rcu_read_unlock();
		return err;
	}

	if (layer2_update)
		ieee80211_send_layer2_update(sta);

	rcu_read_unlock();

	return 0;
}
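
/*
 * cfg80211 del_station() handler: a specific MAC address removes that one
 * entry, a NULL address flushes every station on the interface.
 */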

static int ieee80211_del_station(struct wiphy *wiphy, struct net_device *dev,
				 struct station_del_parameters *params)
{
	struct ieee80211_sub_if_data *sdata;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	if (params->mac)
		return sta_info_destroy_addr_bss(sdata, params->mac);

	sta_info_flush(sdata);
	return 0;
}
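
/*
 * cfg80211 change_station() handler.  The interface type (and, for TDLS,
 * the current authorization state) is first mapped to a
 * cfg80211_station_type so that cfg80211_check_station_change() can reject
 * invalid updates; moving a station to a 4-addr AP_VLAN also re-points the
 * VLAN's single-station pointer and adjusts the multicast-station count.
 */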

static int ieee80211_change_station(struct wiphy *wiphy,
				    struct net_device *dev, const u8 *mac,
				    struct station_parameters *params)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_local *local = wiphy_priv(wiphy);
	struct sta_info *sta;
	struct ieee80211_sub_if_data *vlansdata;
	enum cfg80211_station_type statype;
	int err;

	mutex_lock(&local->sta_mtx);

	sta = sta_info_get_bss(sdata, mac);
	if (!sta) {
		err = -ENOENT;
		goto out_err;
	}

	switch (sdata->vif.type) {
	case NL80211_IFTYPE_MESH_POINT:
		if (sdata->u.mesh.user_mpm)
			statype = CFG80211_STA_MESH_PEER_USER;
		else
			statype = CFG80211_STA_MESH_PEER_KERNEL;
		break;
	case NL80211_IFTYPE_ADHOC:
		statype = CFG80211_STA_IBSS;
		break;
	case NL80211_IFTYPE_STATION:
		if (!test_sta_flag(sta, WLAN_STA_TDLS_PEER)) {
			statype = CFG80211_STA_AP_STA;
			break;
		}
		if (test_sta_flag(sta, WLAN_STA_AUTHORIZED))
			statype = CFG80211_STA_TDLS_PEER_ACTIVE;
		else
			statype = CFG80211_STA_TDLS_PEER_SETUP;
		break;
	case NL80211_IFTYPE_AP:
	case NL80211_IFTYPE_AP_VLAN:
		statype = CFG80211_STA_AP_CLIENT;
		break;
	default:
		err = -EOPNOTSUPP;
		goto out_err;
	}

	err = cfg80211_check_station_change(wiphy, params, statype);
	if (err)
		goto out_err;

	if (params->vlan && params->vlan != sta->sdata->dev) {
		bool prev_4addr = false;
		bool new_4addr = false;

		vlansdata = IEEE80211_DEV_TO_SUB_IF(params->vlan);

		if (params->vlan->ieee80211_ptr->use_4addr) {
			if (vlansdata->u.vlan.sta) {
				err = -EBUSY;
				goto out_err;
			}

			rcu_assign_pointer(vlansdata->u.vlan.sta, sta);
			new_4addr = true;
		}

		if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
		    sta->sdata->u.vlan.sta) {
			RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL);
			prev_4addr = true;
		}

		sta->sdata = vlansdata;

		if (sta->sta_state == IEEE80211_STA_AUTHORIZED &&
		    prev_4addr != new_4addr) {
			if (new_4addr)
				atomic_dec(&sta->sdata->bss->num_mcast_sta);
			else
				atomic_inc(&sta->sdata->bss->num_mcast_sta);
		}

		ieee80211_send_layer2_update(sta);
	}

	err = sta_apply_parameters(local, sta, params);
	if (err)
		goto out_err;

	/* When peer becomes authorized, init rate control as well */
	if (test_sta_flag(sta, WLAN_STA_TDLS_PEER) &&
	    test_sta_flag(sta, WLAN_STA_AUTHORIZED))
		rate_control_rate_init(sta);

	mutex_unlock(&local->sta_mtx);

	if ((sdata->vif.type == NL80211_IFTYPE_AP ||
	     sdata->vif.type == NL80211_IFTYPE_AP_VLAN) &&
	    sta->known_smps_mode != sta->sdata->bss->req_smps &&
	    test_sta_flag(sta, WLAN_STA_AUTHORIZED) &&
	    sta_info_tx_streams(sta) != 1) {
		ht_dbg(sta->sdata,
		       "%pM just authorized and MIMO capable - update SMPS\n",
		       sta->sta.addr);
		ieee80211_send_smps_action(sta->sdata,
					   sta->sdata->bss->req_smps,
					   sta->sta.addr,
					   sta->sdata->vif.bss_conf.bssid);
	}

	if (sdata->vif.type == NL80211_IFTYPE_STATION &&
	    params->sta_flags_mask & BIT(NL80211_STA_FLAG_AUTHORIZED)) {
		ieee80211_recalc_ps(local, -1);
		ieee80211_recalc_ps_vif(sdata);
	}

	return 0;
out_err:
	mutex_unlock(&local->sta_mtx);
	return err;
}
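
/*
 * Mesh path management (mesh interfaces only).  ieee80211_add_mpath()
 * creates a path entry towards @dst and pins its next hop to an already
 * known station; mesh_path_fix_nexthop() is assumed to mark the path as
 * fixed so the HWMP path selection code will not override it.
 */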

#ifdef CONFIG_MAC80211_MESH
static int ieee80211_add_mpath(struct wiphy *wiphy, struct net_device *dev,
			       const u8 *dst, const u8 *next_hop)
{
	struct ieee80211_sub_if_data *sdata;
	struct mesh_path *mpath;
	struct sta_info *sta;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	rcu_read_lock();
	sta = sta_info_get(sdata, next_hop);
	if (!sta) {
		rcu_read_unlock();
		return -ENOENT;
	}

	mpath = mesh_path_add(sdata, dst);
	if (IS_ERR(mpath)) {
		rcu_read_unlock();
		return PTR_ERR(mpath);
	}

	mesh_path_fix_nexthop(mpath, sta);

	rcu_read_unlock();
	return 0;
}

static int ieee80211_del_mpath(struct wiphy *wiphy, struct net_device *dev,
			       const u8 *dst)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	if (dst)
		return mesh_path_del(sdata, dst);

	mesh_path_flush_by_iface(sdata);
	return 0;
}

static int ieee80211_change_mpath(struct wiphy *wiphy, struct net_device *dev,
				  const u8 *dst, const u8 *next_hop)
{
	struct ieee80211_sub_if_data *sdata;
	struct mesh_path *mpath;
	struct sta_info *sta;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	rcu_read_lock();

	sta = sta_info_get(sdata, next_hop);
	if (!sta) {
		rcu_read_unlock();
		return -ENOENT;
	}

	mpath = mesh_path_lookup(sdata, dst);
	if (!mpath) {
		rcu_read_unlock();
		return -ENOENT;
	}

	mesh_path_fix_nexthop(mpath, sta);

	rcu_read_unlock();
	return 0;
}
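
/*
 * Fill a cfg80211 mpath_info structure from an internal mesh_path entry;
 * used by both the lookup and the dump callbacks below.  The next-hop
 * address is zeroed when no next-hop station is currently set.
 */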

static void mpath_set_pinfo(struct mesh_path *mpath, u8 *next_hop,
			    struct mpath_info *pinfo)
{
	struct sta_info *next_hop_sta = rcu_dereference(mpath->next_hop);

	if (next_hop_sta)
		memcpy(next_hop, next_hop_sta->sta.addr, ETH_ALEN);
	else
		memset(next_hop, 0, ETH_ALEN);

	memset(pinfo, 0, sizeof(*pinfo));

	pinfo->generation = mesh_paths_generation;

	pinfo->filled = MPATH_INFO_FRAME_QLEN |
			MPATH_INFO_SN |
			MPATH_INFO_METRIC |
			MPATH_INFO_EXPTIME |
			MPATH_INFO_DISCOVERY_TIMEOUT |
			MPATH_INFO_DISCOVERY_RETRIES |
			MPATH_INFO_FLAGS;

	pinfo->frame_qlen = mpath->frame_queue.qlen;
	pinfo->sn = mpath->sn;
	pinfo->metric = mpath->metric;
	if (time_before(jiffies, mpath->exp_time))
		pinfo->exptime = jiffies_to_msecs(mpath->exp_time - jiffies);
	pinfo->discovery_timeout =
			jiffies_to_msecs(mpath->discovery_timeout);
	pinfo->discovery_retries = mpath->discovery_retries;
	if (mpath->flags & MESH_PATH_ACTIVE)
		pinfo->flags |= NL80211_MPATH_FLAG_ACTIVE;
	if (mpath->flags & MESH_PATH_RESOLVING)
		pinfo->flags |= NL80211_MPATH_FLAG_RESOLVING;
	if (mpath->flags & MESH_PATH_SN_VALID)
		pinfo->flags |= NL80211_MPATH_FLAG_SN_VALID;
	if (mpath->flags & MESH_PATH_FIXED)
		pinfo->flags |= NL80211_MPATH_FLAG_FIXED;
	if (mpath->flags & MESH_PATH_RESOLVED)
		pinfo->flags |= NL80211_MPATH_FLAG_RESOLVED;
}

static int ieee80211_get_mpath(struct wiphy *wiphy, struct net_device *dev,
			       u8 *dst, u8 *next_hop, struct mpath_info *pinfo)
{
	struct ieee80211_sub_if_data *sdata;
	struct mesh_path *mpath;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	rcu_read_lock();
	mpath = mesh_path_lookup(sdata, dst);
	if (!mpath) {
		rcu_read_unlock();
		return -ENOENT;
	}
	memcpy(dst, mpath->dst, ETH_ALEN);
	mpath_set_pinfo(mpath, next_hop, pinfo);
	rcu_read_unlock();
	return 0;
}

static int ieee80211_dump_mpath(struct wiphy *wiphy, struct net_device *dev,
				int idx, u8 *dst, u8 *next_hop,
				struct mpath_info *pinfo)
{
	struct ieee80211_sub_if_data *sdata;
	struct mesh_path *mpath;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	rcu_read_lock();
	mpath = mesh_path_lookup_by_idx(sdata, idx);
	if (!mpath) {
		rcu_read_unlock();
		return -ENOENT;
	}
	memcpy(dst, mpath->dst, ETH_ALEN);
	mpath_set_pinfo(mpath, next_hop, pinfo);
	rcu_read_unlock();
	return 0;
}

static void mpp_set_pinfo(struct mesh_path *mpath, u8 *mpp,
			  struct mpath_info *pinfo)
{
	memset(pinfo, 0, sizeof(*pinfo));
	memcpy(mpp, mpath->mpp, ETH_ALEN);

	pinfo->generation = mpp_paths_generation;
}

static int ieee80211_get_mpp(struct wiphy *wiphy, struct net_device *dev,
			     u8 *dst, u8 *mpp, struct mpath_info *pinfo)
{
	struct ieee80211_sub_if_data *sdata;
	struct mesh_path *mpath;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	rcu_read_lock();
	mpath = mpp_path_lookup(sdata, dst);
	if (!mpath) {
		rcu_read_unlock();
		return -ENOENT;
	}
	memcpy(dst, mpath->dst, ETH_ALEN);
	mpp_set_pinfo(mpath, mpp, pinfo);
	rcu_read_unlock();
	return 0;
}

static int ieee80211_dump_mpp(struct wiphy *wiphy, struct net_device *dev,
			      int idx, u8 *dst, u8 *mpp,
			      struct mpath_info *pinfo)
{
	struct ieee80211_sub_if_data *sdata;
	struct mesh_path *mpath;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	rcu_read_lock();
	mpath = mpp_path_lookup_by_idx(sdata, idx);
	if (!mpath) {
		rcu_read_unlock();
		return -ENOENT;
	}
	memcpy(dst, mpath->dst, ETH_ALEN);
	mpp_set_pinfo(mpath, mpp, pinfo);
	rcu_read_unlock();
	return 0;
}

static int ieee80211_get_mesh_config(struct wiphy *wiphy,
				     struct net_device *dev,
				     struct mesh_config *conf)
{
	struct ieee80211_sub_if_data *sdata;
	sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	memcpy(conf, &(sdata->u.mesh.mshcfg), sizeof(struct mesh_config));
	return 0;
}
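
/*
 * Test whether a single mesh configuration attribute was selected in the
 * nl80211 change mask.  The nl80211_meshconf_params enum starts at 1, so
 * attribute N corresponds to bit (N - 1), e.g. (sketch, for illustration):
 *	if (_chg_mesh_attr(NL80211_MESHCONF_TTL, mask))
 *		... NL80211_MESHCONF_TTL was supplied by userspace ...
 */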

static inline bool _chg_mesh_attr(enum nl80211_meshconf_params parm, u32 mask)
{
	return (mask >> (parm-1)) & 0x1;
}
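
/*
 * Copy the one-time join parameters (IEs, mesh ID, path selection and
 * synchronization method, security flags, multicast and basic rates,
 * beacon interval and DTIM period) from a cfg80211 mesh_setup into the
 * interface.  The previous IE buffer, if any, is replaced and freed.
 */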

static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
			   const struct mesh_setup *setup)
{
	u8 *new_ie;
	const u8 *old_ie;
	struct ieee80211_sub_if_data *sdata = container_of(ifmsh,
					struct ieee80211_sub_if_data, u.mesh);

	/* allocate information elements */
	new_ie = NULL;
	old_ie = ifmsh->ie;

	if (setup->ie_len) {
		new_ie = kmemdup(setup->ie, setup->ie_len,
				 GFP_KERNEL);
		if (!new_ie)
			return -ENOMEM;
	}
	ifmsh->ie_len = setup->ie_len;
	ifmsh->ie = new_ie;
	kfree(old_ie);

	/* now copy the rest of the setup parameters */
	ifmsh->mesh_id_len = setup->mesh_id_len;
	memcpy(ifmsh->mesh_id, setup->mesh_id, ifmsh->mesh_id_len);
	ifmsh->mesh_sp_id = setup->sync_method;
	ifmsh->mesh_pp_id = setup->path_sel_proto;
	ifmsh->mesh_pm_id = setup->path_metric;
	ifmsh->user_mpm = setup->user_mpm;
	ifmsh->mesh_auth_id = setup->auth_id;
	ifmsh->security = IEEE80211_MESH_SEC_NONE;
	if (setup->is_authenticated)
		ifmsh->security |= IEEE80211_MESH_SEC_AUTHED;
	if (setup->is_secure)
		ifmsh->security |= IEEE80211_MESH_SEC_SECURED;

	/* mcast rate setting in Mesh Node */
	memcpy(sdata->vif.bss_conf.mcast_rate, setup->mcast_rate,
	       sizeof(setup->mcast_rate));
	sdata->vif.bss_conf.basic_rates = setup->basic_rates;

	sdata->vif.bss_conf.beacon_int = setup->beacon_interval;
	sdata->vif.bss_conf.dtim_period = setup->dtim_period;

	return 0;
}
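
/*
 * Apply the subset of mesh_config fields selected by @mask.  Most entries
 * are plain copies into the live mshcfg; a few (root mode, gate
 * announcements, HT operation mode, power mode) also kick the relevant
 * mesh or BSS update paths so the change takes effect immediately.
 */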

static int ieee80211_update_mesh_config(struct wiphy *wiphy,
					struct net_device *dev, u32 mask,
					const struct mesh_config *nconf)
{
	struct mesh_config *conf;
	struct ieee80211_sub_if_data *sdata;
	struct ieee80211_if_mesh *ifmsh;

	sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	ifmsh = &sdata->u.mesh;

	/* Set the config options which we are interested in setting */
	conf = &(sdata->u.mesh.mshcfg);
	if (_chg_mesh_attr(NL80211_MESHCONF_RETRY_TIMEOUT, mask))
		conf->dot11MeshRetryTimeout = nconf->dot11MeshRetryTimeout;
	if (_chg_mesh_attr(NL80211_MESHCONF_CONFIRM_TIMEOUT, mask))
		conf->dot11MeshConfirmTimeout = nconf->dot11MeshConfirmTimeout;
	if (_chg_mesh_attr(NL80211_MESHCONF_HOLDING_TIMEOUT, mask))
		conf->dot11MeshHoldingTimeout = nconf->dot11MeshHoldingTimeout;
	if (_chg_mesh_attr(NL80211_MESHCONF_MAX_PEER_LINKS, mask))
		conf->dot11MeshMaxPeerLinks = nconf->dot11MeshMaxPeerLinks;
	if (_chg_mesh_attr(NL80211_MESHCONF_MAX_RETRIES, mask))
		conf->dot11MeshMaxRetries = nconf->dot11MeshMaxRetries;
	if (_chg_mesh_attr(NL80211_MESHCONF_TTL, mask))
		conf->dot11MeshTTL = nconf->dot11MeshTTL;
	if (_chg_mesh_attr(NL80211_MESHCONF_ELEMENT_TTL, mask))
		conf->element_ttl = nconf->element_ttl;
	if (_chg_mesh_attr(NL80211_MESHCONF_AUTO_OPEN_PLINKS, mask)) {
		if (ifmsh->user_mpm)
			return -EBUSY;
		conf->auto_open_plinks = nconf->auto_open_plinks;
	}
	if (_chg_mesh_attr(NL80211_MESHCONF_SYNC_OFFSET_MAX_NEIGHBOR, mask))
		conf->dot11MeshNbrOffsetMaxNeighbor =
			nconf->dot11MeshNbrOffsetMaxNeighbor;
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_MAX_PREQ_RETRIES, mask))
		conf->dot11MeshHWMPmaxPREQretries =
			nconf->dot11MeshHWMPmaxPREQretries;
	if (_chg_mesh_attr(NL80211_MESHCONF_PATH_REFRESH_TIME, mask))
		conf->path_refresh_time = nconf->path_refresh_time;
	if (_chg_mesh_attr(NL80211_MESHCONF_MIN_DISCOVERY_TIMEOUT, mask))
		conf->min_discovery_timeout = nconf->min_discovery_timeout;
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_ACTIVE_PATH_TIMEOUT, mask))
		conf->dot11MeshHWMPactivePathTimeout =
			nconf->dot11MeshHWMPactivePathTimeout;
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_PREQ_MIN_INTERVAL, mask))
		conf->dot11MeshHWMPpreqMinInterval =
			nconf->dot11MeshHWMPpreqMinInterval;
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_PERR_MIN_INTERVAL, mask))
		conf->dot11MeshHWMPperrMinInterval =
			nconf->dot11MeshHWMPperrMinInterval;
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_NET_DIAM_TRVS_TIME,
			   mask))
		conf->dot11MeshHWMPnetDiameterTraversalTime =
			nconf->dot11MeshHWMPnetDiameterTraversalTime;
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_ROOTMODE, mask)) {
		conf->dot11MeshHWMPRootMode = nconf->dot11MeshHWMPRootMode;
		ieee80211_mesh_root_setup(ifmsh);
	}
	if (_chg_mesh_attr(NL80211_MESHCONF_GATE_ANNOUNCEMENTS, mask)) {
		/* our current gate announcement implementation rides on root
		 * announcements, so require this ifmsh to also be a root node
		 */
		if (nconf->dot11MeshGateAnnouncementProtocol &&
		    !(conf->dot11MeshHWMPRootMode > IEEE80211_ROOTMODE_ROOT)) {
			conf->dot11MeshHWMPRootMode = IEEE80211_PROACTIVE_RANN;
			ieee80211_mesh_root_setup(ifmsh);
		}
		conf->dot11MeshGateAnnouncementProtocol =
			nconf->dot11MeshGateAnnouncementProtocol;
	}
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_RANN_INTERVAL, mask))
		conf->dot11MeshHWMPRannInterval =
			nconf->dot11MeshHWMPRannInterval;
	if (_chg_mesh_attr(NL80211_MESHCONF_FORWARDING, mask))
		conf->dot11MeshForwarding = nconf->dot11MeshForwarding;
	if (_chg_mesh_attr(NL80211_MESHCONF_RSSI_THRESHOLD, mask)) {
		/* our RSSI threshold implementation is supported only for
		 * devices that report signal in dBm.
		 */
		if (!(sdata->local->hw.flags & IEEE80211_HW_SIGNAL_DBM))
			return -ENOTSUPP;
		conf->rssi_threshold = nconf->rssi_threshold;
	}
	if (_chg_mesh_attr(NL80211_MESHCONF_HT_OPMODE, mask)) {
		conf->ht_opmode = nconf->ht_opmode;
		sdata->vif.bss_conf.ht_operation_mode = nconf->ht_opmode;
		ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_HT);
	}
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_PATH_TO_ROOT_TIMEOUT, mask))
		conf->dot11MeshHWMPactivePathToRootTimeout =
			nconf->dot11MeshHWMPactivePathToRootTimeout;
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_ROOT_INTERVAL, mask))
		conf->dot11MeshHWMProotInterval =
			nconf->dot11MeshHWMProotInterval;
	if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_CONFIRMATION_INTERVAL, mask))
		conf->dot11MeshHWMPconfirmationInterval =
			nconf->dot11MeshHWMPconfirmationInterval;
	if (_chg_mesh_attr(NL80211_MESHCONF_POWER_MODE, mask)) {
		conf->power_mode = nconf->power_mode;
		ieee80211_mps_local_status_update(sdata);
	}
	if (_chg_mesh_attr(NL80211_MESHCONF_AWAKE_WINDOW, mask))
		conf->dot11MeshAwakeWindowDuration =
			nconf->dot11MeshAwakeWindowDuration;
	if (_chg_mesh_attr(NL80211_MESHCONF_PLINK_TIMEOUT, mask))
		conf->plink_timeout = nconf->plink_timeout;
	ieee80211_mbss_info_change_notify(sdata, BSS_CHANGED_BEACON);
	return 0;
}

static int ieee80211_join_mesh(struct wiphy *wiphy, struct net_device *dev,
			       const struct mesh_config *conf,
			       const struct mesh_setup *setup)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
	int err;

	memcpy(&ifmsh->mshcfg, conf, sizeof(struct mesh_config));
	err = copy_mesh_setup(ifmsh, setup);
	if (err)
		return err;

	/* can mesh use other SMPS modes? */
	sdata->smps_mode = IEEE80211_SMPS_OFF;
	sdata->needed_rx_chains = sdata->local->rx_chains;

	mutex_lock(&sdata->local->mtx);
	err = ieee80211_vif_use_channel(sdata, &setup->chandef,
					IEEE80211_CHANCTX_SHARED);
	mutex_unlock(&sdata->local->mtx);
	if (err)
		return err;

	return ieee80211_start_mesh(sdata);
}

static int ieee80211_leave_mesh(struct wiphy *wiphy, struct net_device *dev)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	ieee80211_stop_mesh(sdata);
	mutex_lock(&sdata->local->mtx);
	ieee80211_vif_release_channel(sdata);
	mutex_unlock(&sdata->local->mtx);

	return 0;
}
#endif
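
/*
 * cfg80211 change_bss() handler for AP interfaces: updates ERP protection,
 * preamble and slot-time settings, basic rates, AP isolation, HT operation
 * mode and P2P opportunistic power save / CTWindow, then pushes the
 * accumulated BSS_CHANGED_* bits to the driver in one notification.
 */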

static int ieee80211_change_bss(struct wiphy *wiphy,
				struct net_device *dev,
				struct bss_parameters *params)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	enum ieee80211_band band;
	u32 changed = 0;

	if (!sdata_dereference(sdata->u.ap.beacon, sdata))
		return -ENOENT;

	band = ieee80211_get_sdata_band(sdata);

	if (params->use_cts_prot >= 0) {
		sdata->vif.bss_conf.use_cts_prot = params->use_cts_prot;
		changed |= BSS_CHANGED_ERP_CTS_PROT;
	}
	if (params->use_short_preamble >= 0) {
		sdata->vif.bss_conf.use_short_preamble =
			params->use_short_preamble;
		changed |= BSS_CHANGED_ERP_PREAMBLE;
	}

	if (!sdata->vif.bss_conf.use_short_slot &&
	    band == IEEE80211_BAND_5GHZ) {
		sdata->vif.bss_conf.use_short_slot = true;
		changed |= BSS_CHANGED_ERP_SLOT;
	}

	if (params->use_short_slot_time >= 0) {
		sdata->vif.bss_conf.use_short_slot =
			params->use_short_slot_time;
		changed |= BSS_CHANGED_ERP_SLOT;
	}

	if (params->basic_rates) {
		ieee80211_parse_bitrates(&sdata->vif.bss_conf.chandef,
					 wiphy->bands[band],
					 params->basic_rates,
					 params->basic_rates_len,
					 &sdata->vif.bss_conf.basic_rates);
		changed |= BSS_CHANGED_BASIC_RATES;
	}

	if (params->ap_isolate >= 0) {
		if (params->ap_isolate)
			sdata->flags |= IEEE80211_SDATA_DONT_BRIDGE_PACKETS;
		else
			sdata->flags &= ~IEEE80211_SDATA_DONT_BRIDGE_PACKETS;
	}

	if (params->ht_opmode >= 0) {
		sdata->vif.bss_conf.ht_operation_mode =
			(u16) params->ht_opmode;
		changed |= BSS_CHANGED_HT;
	}

	if (params->p2p_ctwindow >= 0) {
		sdata->vif.bss_conf.p2p_noa_attr.oppps_ctwindow &=
					~IEEE80211_P2P_OPPPS_CTWINDOW_MASK;
		sdata->vif.bss_conf.p2p_noa_attr.oppps_ctwindow |=
			params->p2p_ctwindow & IEEE80211_P2P_OPPPS_CTWINDOW_MASK;
		changed |= BSS_CHANGED_P2P_PS;
	}

	if (params->p2p_opp_ps > 0) {
		sdata->vif.bss_conf.p2p_noa_attr.oppps_ctwindow |=
					IEEE80211_P2P_OPPPS_ENABLE_BIT;
		changed |= BSS_CHANGED_P2P_PS;
	} else if (params->p2p_opp_ps == 0) {
		sdata->vif.bss_conf.p2p_noa_attr.oppps_ctwindow &=
					~IEEE80211_P2P_OPPPS_ENABLE_BIT;
		changed |= BSS_CHANGED_P2P_PS;
	}

	ieee80211_bss_info_change_notify(sdata, changed);

	return 0;
}
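
/*
 * Program the EDCA parameters (AIFS, CWmin/CWmax, TXOP) for one access
 * category.  U-APSD is forced off here because this path is only used for
 * AP ("master") configuration.
 */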

static int ieee80211_set_txq_params(struct wiphy *wiphy,
				    struct net_device *dev,
				    struct ieee80211_txq_params *params)
{
	struct ieee80211_local *local = wiphy_priv(wiphy);
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
	struct ieee80211_tx_queue_params p;

	if (!local->ops->conf_tx)
		return -EOPNOTSUPP;

	if (local->hw.queues < IEEE80211_NUM_ACS)
		return -EOPNOTSUPP;

	memset(&p, 0, sizeof(p));
	p.aifs = params->aifs;
	p.cw_max = params->cwmax;
	p.cw_min = params->cwmin;
	p.txop = params->txop;

	/*
	 * Setting tx queue params disables u-apsd because it's only
	 * called in master mode.
	 */
	p.uapsd = false;

	sdata->tx_conf[params->ac] = p;
	if (drv_conf_tx(local, sdata, params->ac, &p)) {
		wiphy_debug(local->hw.wiphy,
			    "failed to set TX queue parameters for AC %d\n",
			    params->ac);
		return -EINVAL;
	}

	ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_QOS);

	return 0;
}

#ifdef CONFIG_PM
static int ieee80211_suspend(struct wiphy *wiphy,
			     struct cfg80211_wowlan *wowlan)
{
	return __ieee80211_suspend(wiphy_priv(wiphy), wowlan);
}

static int ieee80211_resume(struct wiphy *wiphy)
{
	return __ieee80211_resume(wiphy_priv(wiphy));
}
#else
#define ieee80211_suspend NULL
#define ieee80211_resume NULL
#endif
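
/*
 * cfg80211 scan() handler.  Scanning is allowed on most interface types;
 * a P2P GO may only scan when the driver implements hw_scan (otherwise it
 * falls through to the AP rules), and an AP that is already beaconing may
 * scan only when userspace forced the scan (NL80211_SCAN_FLAG_AP) and the
 * driver advertises NL80211_FEATURE_AP_SCAN.
 */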

static int ieee80211_scan(struct wiphy *wiphy,
			  struct cfg80211_scan_request *req)
{
	struct ieee80211_sub_if_data *sdata;

	sdata = IEEE80211_WDEV_TO_SUB_IF(req->wdev);

	switch (ieee80211_vif_type_p2p(&sdata->vif)) {
	case NL80211_IFTYPE_STATION:
	case NL80211_IFTYPE_ADHOC:
	case NL80211_IFTYPE_MESH_POINT:
	case NL80211_IFTYPE_P2P_CLIENT:
	case NL80211_IFTYPE_P2P_DEVICE:
		break;
	case NL80211_IFTYPE_P2P_GO:
		if (sdata->local->ops->hw_scan)
			break;
		/*
		 * FIXME: implement NoA while scanning in software,
		 * for now fall through to allow scanning only when
		 * beaconing hasn't been configured yet
		 */
	case NL80211_IFTYPE_AP:
		/*
		 * If the scan has been forced (and the driver supports
		 * forcing), don't care about being beaconing already.
		 * This will create problems to the attached stations (e.g. all
		 * the frames sent while scanning on other channel will be
		 * lost)
		 */
		if (sdata->u.ap.beacon &&
		    (!(wiphy->features & NL80211_FEATURE_AP_SCAN) ||
		     !(req->flags & NL80211_SCAN_FLAG_AP)))
			return -EOPNOTSUPP;
		break;
	default:
		return -EOPNOTSUPP;
	}

	return ieee80211_request_scan(sdata, req);
}

static int
ieee80211_sched_scan_start(struct wiphy *wiphy,
			   struct net_device *dev,
			   struct cfg80211_sched_scan_request *req)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	if (!sdata->local->ops->sched_scan_start)
		return -EOPNOTSUPP;

	return ieee80211_request_sched_scan_start(sdata, req);
}

static int
ieee80211_sched_scan_stop(struct wiphy *wiphy, struct net_device *dev)
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	if (!sdata->local->ops->sched_scan_stop)
		return -EOPNOTSUPP;

	return ieee80211_request_sched_scan_stop(sdata);
}

static int ieee80211_auth(struct wiphy *wiphy, struct net_device *dev,
			  struct cfg80211_auth_request *req)
{
	return ieee80211_mgd_auth(IEEE80211_DEV_TO_SUB_IF(dev), req);
}

static int ieee80211_assoc(struct wiphy *wiphy, struct net_device *dev,
			   struct cfg80211_assoc_request *req)
{
	return ieee80211_mgd_assoc(IEEE80211_DEV_TO_SUB_IF(dev), req);
}

static int ieee80211_deauth(struct wiphy *wiphy, struct net_device *dev,
			    struct cfg80211_deauth_request *req)
{
	return ieee80211_mgd_deauth(IEEE80211_DEV_TO_SUB_IF(dev), req);
}

static int ieee80211_disassoc(struct wiphy *wiphy, struct net_device *dev,
			      struct cfg80211_disassoc_request *req)
{
	return ieee80211_mgd_disassoc(IEEE80211_DEV_TO_SUB_IF(dev), req);
}

static int ieee80211_join_ibss(struct wiphy *wiphy, struct net_device *dev,
			       struct cfg80211_ibss_params *params)
{
	return ieee80211_ibss_join(IEEE80211_DEV_TO_SUB_IF(dev), params);
}

static int ieee80211_leave_ibss(struct wiphy *wiphy, struct net_device *dev)
{
	return ieee80211_ibss_leave(IEEE80211_DEV_TO_SUB_IF(dev));
}

static int ieee80211_set_mcast_rate(struct wiphy *wiphy, struct net_device *dev,
				    int rate[IEEE80211_NUM_BANDS])
{
	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);

	memcpy(sdata->vif.bss_conf.mcast_rate, rate,
	       sizeof(int) * IEEE80211_NUM_BANDS);

	return 0;
}

static int ieee80211_set_wiphy_params(struct wiphy *wiphy, u32 changed)
{
	struct ieee80211_local *local = wiphy_priv(wiphy);
	int err;

	if (changed & WIPHY_PARAM_FRAG_THRESHOLD) {
		err = drv_set_frag_threshold(local, wiphy->frag_threshold);

		if (err)
			return err;
	}

	if ((changed & WIPHY_PARAM_COVERAGE_CLASS) ||
	    (changed & WIPHY_PARAM_DYN_ACK)) {
		s16 coverage_class;

		coverage_class = changed & WIPHY_PARAM_COVERAGE_CLASS ?
					wiphy->coverage_class : -1;
		err = drv_set_coverage_class(local, coverage_class);

		if (err)
			return err;
	}

	if (changed & WIPHY_PARAM_RTS_THRESHOLD) {
		err = drv_set_rts_threshold(local, wiphy->rts_threshold);

		if (err)
			return err;
	}

	if (changed & WIPHY_PARAM_RETRY_SHORT) {
		if (wiphy->retry_short > IEEE80211_MAX_TX_RETRY)
			return -EINVAL;
		local->hw.conf.short_frame_max_tx_count = wiphy->retry_short;
	}
	if (changed & WIPHY_PARAM_RETRY_LONG) {
		if (wiphy->retry_long > IEEE80211_MAX_TX_RETRY)
			return -EINVAL;
		local->hw.conf.long_frame_max_tx_count = wiphy->retry_long;
	}
	if (changed &
	    (WIPHY_PARAM_RETRY_SHORT | WIPHY_PARAM_RETRY_LONG))
		ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_RETRY_LIMITS);

	return 0;
}
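
/*
 * cfg80211 set_tx_power() handler.  With a specific wdev only that
 * interface's user power level is updated; without one the limit is stored
 * globally and propagated to every interface.  Power arrives in mBm and
 * must be a whole number of dB, e.g. (sketch) 2000 mBm -> MBM_TO_DBM(2000)
 * == 20 dBm; "automatic" resets the level to IEEE80211_UNSET_POWER_LEVEL.
 */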
|
|
|
|
|
2009-06-02 11:01:39 +00:00
|
|
|
static int ieee80211_set_tx_power(struct wiphy *wiphy,
|
2012-10-24 08:17:18 +00:00
|
|
|
struct wireless_dev *wdev,
|
2010-06-23 09:12:37 +00:00
|
|
|
enum nl80211_tx_power_setting type, int mbm)
|
2009-06-02 11:01:39 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
2012-10-24 08:59:25 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata;
|
2009-06-02 11:01:39 +00:00
|
|
|
|
2012-10-24 08:59:25 +00:00
|
|
|
if (wdev) {
|
|
|
|
sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
|
|
|
|
|
|
|
|
switch (type) {
|
|
|
|
case NL80211_TX_POWER_AUTOMATIC:
|
|
|
|
sdata->user_power_level = IEEE80211_UNSET_POWER_LEVEL;
|
|
|
|
break;
|
|
|
|
case NL80211_TX_POWER_LIMITED:
|
|
|
|
case NL80211_TX_POWER_FIXED:
|
|
|
|
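/*
 * Power is requested in mBm but mac80211 tracks whole dBm, so
 * only non-negative multiples of 100 mBm can be honoured.
 */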
if (mbm < 0 || (mbm % 100))
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
sdata->user_power_level = MBM_TO_DBM(mbm);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
ieee80211_recalc_txpower(sdata);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
2012-07-26 15:24:39 +00:00
|
|
|
|
2009-06-02 11:01:39 +00:00
|
|
|
switch (type) {
|
2010-06-23 09:12:37 +00:00
|
|
|
case NL80211_TX_POWER_AUTOMATIC:
|
2012-10-24 08:59:25 +00:00
|
|
|
local->user_power_level = IEEE80211_UNSET_POWER_LEVEL;
|
2009-06-02 11:01:39 +00:00
|
|
|
break;
|
2010-06-23 09:12:37 +00:00
|
|
|
case NL80211_TX_POWER_LIMITED:
|
|
|
|
case NL80211_TX_POWER_FIXED:
|
|
|
|
if (mbm < 0 || (mbm % 100))
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
local->user_power_level = MBM_TO_DBM(mbm);
|
2009-06-02 11:01:39 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2012-10-24 08:59:25 +00:00
|
|
|
mutex_lock(&local->iflist_mtx);
|
|
|
|
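/*
 * Update every interface's level first; the recalculation in the
 * second pass may look at other interfaces (e.g. ones sharing a
 * channel context), so it must see the new values everywhere.
 */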
list_for_each_entry(sdata, &local->interfaces, list)
|
|
|
|
sdata->user_power_level = local->user_power_level;
|
|
|
|
list_for_each_entry(sdata, &local->interfaces, list)
|
|
|
|
ieee80211_recalc_txpower(sdata);
|
|
|
|
mutex_unlock(&local->iflist_mtx);
|
2009-06-02 11:01:39 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-10-24 08:17:18 +00:00
|
|
|
static int ieee80211_get_tx_power(struct wiphy *wiphy,
|
|
|
|
struct wireless_dev *wdev,
|
|
|
|
int *dbm)
|
2009-06-02 11:01:39 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
2012-10-24 08:59:25 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
|
2009-06-02 11:01:39 +00:00
|
|
|
|
2014-10-25 22:32:53 +00:00
|
|
|
if (local->ops->get_txpower)
|
|
|
|
return drv_get_txpower(local, sdata, dbm);
|
|
|
|
|
2012-10-24 08:59:25 +00:00
|
|
|
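/*
 * Without channel contexts there is a single global power level;
 * with them, each interface tracks its own TX power.
 */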
if (!local->use_chanctx)
|
|
|
|
*dbm = local->hw.conf.power_level;
|
|
|
|
else
|
|
|
|
*dbm = sdata->vif.bss_conf.txpower;
|
2009-06-02 11:01:39 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-07-01 19:26:58 +00:00
|
|
|
static int ieee80211_set_wds_peer(struct wiphy *wiphy, struct net_device *dev,
|
2010-10-07 11:11:09 +00:00
|
|
|
const u8 *addr)
|
2009-07-01 19:26:58 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
|
|
|
|
memcpy(&sdata->u.wds.remote_addr, addr, ETH_ALEN);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-06-02 11:01:41 +00:00
|
|
|
static void ieee80211_rfkill_poll(struct wiphy *wiphy)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
|
|
|
|
|
|
|
drv_rfkill_poll(local);
|
|
|
|
}
|
|
|
|
|
2009-07-01 19:26:51 +00:00
|
|
|
#ifdef CONFIG_NL80211_TESTMODE
|
2013-07-31 15:04:15 +00:00
|
|
|
static int ieee80211_testmode_cmd(struct wiphy *wiphy,
|
|
|
|
struct wireless_dev *wdev,
|
|
|
|
void *data, int len)
|
2009-07-01 19:26:51 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
2013-07-31 15:06:22 +00:00
|
|
|
struct ieee80211_vif *vif = NULL;
|
2009-07-01 19:26:51 +00:00
|
|
|
|
|
|
|
if (!local->ops->testmode_cmd)
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
2013-07-31 15:06:22 +00:00
|
|
|
if (wdev) {
|
|
|
|
struct ieee80211_sub_if_data *sdata;
|
|
|
|
|
|
|
|
sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
|
|
|
|
if (sdata->flags & IEEE80211_SDATA_IN_DRIVER)
|
|
|
|
vif = &sdata->vif;
|
|
|
|
}
|
|
|
|
|
|
|
|
return local->ops->testmode_cmd(&local->hw, vif, data, len);
|
2009-07-01 19:26:51 +00:00
|
|
|
}
|
2011-05-20 16:05:54 +00:00
|
|
|
|
|
|
|
static int ieee80211_testmode_dump(struct wiphy *wiphy,
|
|
|
|
struct sk_buff *skb,
|
|
|
|
struct netlink_callback *cb,
|
|
|
|
void *data, int len)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
|
|
|
|
|
|
|
if (!local->ops->testmode_dump)
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
|
|
|
return local->ops->testmode_dump(&local->hw, skb, cb, data, len);
|
|
|
|
}
|
2009-07-01 19:26:51 +00:00
|
|
|
#endif
|
|
|
|
|
2013-10-01 13:45:43 +00:00
|
|
|
int __ieee80211_request_smps_ap(struct ieee80211_sub_if_data *sdata,
|
|
|
|
enum ieee80211_smps_mode smps_mode)
|
|
|
|
{
|
|
|
|
struct sta_info *sta;
|
|
|
|
enum ieee80211_smps_mode old_req;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_AP))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
old_req = sdata->u.ap.req_smps;
|
|
|
|
sdata->u.ap.req_smps = smps_mode;
|
|
|
|
|
|
|
|
/* AUTOMATIC doesn't mean much for AP - don't allow it */
|
|
|
|
if (old_req == smps_mode ||
|
|
|
|
smps_mode == IEEE80211_SMPS_AUTOMATIC)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
/* If no associated stations, there's no need to do anything */
|
|
|
|
if (!atomic_read(&sdata->u.ap.num_mcast_sta)) {
|
|
|
|
sdata->smps_mode = smps_mode;
|
|
|
|
ieee80211_queue_work(&sdata->local->hw, &sdata->recalc_smps);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
ht_dbg(sdata,
|
|
|
|
"SMSP %d requested in AP mode, sending Action frame to %d stations\n",
|
|
|
|
smps_mode, atomic_read(&sdata->u.ap.num_mcast_sta));
|
|
|
|
|
|
|
|
mutex_lock(&sdata->local->sta_mtx);
|
|
|
|
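/*
 * Walk the station hash and notify every MIMO-capable peer of
 * this BSS (and its VLANs) that can currently receive the frame.
 */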
for (i = 0; i < STA_HASH_SIZE; i++) {
|
|
|
|
for (sta = rcu_dereference_protected(sdata->local->sta_hash[i],
|
|
|
|
lockdep_is_held(&sdata->local->sta_mtx));
|
|
|
|
sta;
|
|
|
|
sta = rcu_dereference_protected(sta->hnext,
|
|
|
|
lockdep_is_held(&sdata->local->sta_mtx))) {
|
|
|
|
/*
|
|
|
|
* Only stations associated to our AP and
|
|
|
|
* associated VLANs
|
|
|
|
*/
|
|
|
|
if (sta->sdata->bss != &sdata->u.ap)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* This station doesn't support MIMO - skip it */
|
|
|
|
if (sta_info_tx_streams(sta) == 1)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't wake up a STA just to send the action frame
|
|
|
|
* unless we are getting more restrictive.
|
|
|
|
*/
|
|
|
|
if (test_sta_flag(sta, WLAN_STA_PS_STA) &&
|
|
|
|
!ieee80211_smps_is_restrictive(sta->known_smps_mode,
|
|
|
|
smps_mode)) {
|
|
|
|
ht_dbg(sdata,
|
|
|
|
"Won't send SMPS to sleeping STA %pM\n",
|
|
|
|
sta->sta.addr);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the STA is not authorized, wait until it gets
|
|
|
|
* authorized and the action frame will be sent then.
|
|
|
|
*/
|
|
|
|
if (!test_sta_flag(sta, WLAN_STA_AUTHORIZED))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
ht_dbg(sdata, "Sending SMPS to %pM\n", sta->sta.addr);
|
|
|
|
ieee80211_send_smps_action(sdata, smps_mode,
|
|
|
|
sta->sta.addr,
|
|
|
|
sdata->vif.bss_conf.bssid);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
mutex_unlock(&sdata->local->sta_mtx);
|
|
|
|
|
|
|
|
sdata->smps_mode = smps_mode;
|
|
|
|
ieee80211_queue_work(&sdata->local->hw, &sdata->recalc_smps);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
int __ieee80211_request_smps_mgd(struct ieee80211_sub_if_data *sdata,
|
|
|
|
enum ieee80211_smps_mode smps_mode)
|
2009-12-01 12:37:02 +00:00
|
|
|
{
|
|
|
|
const u8 *ap;
|
|
|
|
enum ieee80211_smps_mode old_req;
|
|
|
|
int err;
|
|
|
|
|
2013-05-10 10:32:47 +00:00
|
|
|
lockdep_assert_held(&sdata->wdev.mtx);
|
2011-04-19 18:44:04 +00:00
|
|
|
|
2013-10-01 13:45:43 +00:00
|
|
|
if (WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_STATION))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2009-12-01 12:37:02 +00:00
|
|
|
old_req = sdata->u.mgd.req_smps;
|
|
|
|
sdata->u.mgd.req_smps = smps_mode;
|
|
|
|
|
|
|
|
if (old_req == smps_mode &&
|
|
|
|
smps_mode != IEEE80211_SMPS_AUTOMATIC)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If not associated, or current association is not an HT
|
2012-09-11 12:34:12 +00:00
|
|
|
* association, there's no need to do anything, just store
|
|
|
|
* the new value until we associate.
|
2009-12-01 12:37:02 +00:00
|
|
|
*/
|
|
|
|
if (!sdata->u.mgd.associated ||
|
2012-11-09 10:39:59 +00:00
|
|
|
sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT)
|
2009-12-01 12:37:02 +00:00
|
|
|
return 0;
|
|
|
|
|
2009-12-23 12:15:39 +00:00
|
|
|
ap = sdata->u.mgd.associated->bssid;
|
2009-12-01 12:37:02 +00:00
|
|
|
|
|
|
|
if (smps_mode == IEEE80211_SMPS_AUTOMATIC) {
|
|
|
|
if (sdata->u.mgd.powersave)
|
|
|
|
smps_mode = IEEE80211_SMPS_DYNAMIC;
|
|
|
|
else
|
|
|
|
smps_mode = IEEE80211_SMPS_OFF;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* send SM PS frame to AP */
|
|
|
|
err = ieee80211_send_smps_action(sdata, smps_mode,
|
|
|
|
ap, ap);
|
|
|
|
if (err)
|
|
|
|
sdata->u.mgd.req_smps = old_req;
|
|
|
|
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2009-07-01 19:26:57 +00:00
|
|
|
static int ieee80211_set_power_mgmt(struct wiphy *wiphy, struct net_device *dev,
|
|
|
|
bool enabled, int timeout)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr);
|
|
|
|
|
2013-10-29 22:11:59 +00:00
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION)
|
2010-01-15 11:21:37 +00:00
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
2009-07-01 19:26:57 +00:00
|
|
|
if (!(local->hw.flags & IEEE80211_HW_SUPPORTS_PS))
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
|
|
|
if (enabled == sdata->u.mgd.powersave &&
|
2010-06-09 06:51:52 +00:00
|
|
|
timeout == local->dynamic_ps_forced_timeout)
|
2009-07-01 19:26:57 +00:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
sdata->u.mgd.powersave = enabled;
|
2010-06-09 06:51:52 +00:00
|
|
|
local->dynamic_ps_forced_timeout = timeout;
|
2009-07-01 19:26:57 +00:00
|
|
|
|
2009-12-01 12:37:02 +00:00
|
|
|
/* the SMPS request itself is unchanged, but in automatic mode it follows powersave */
|
2013-06-03 11:51:59 +00:00
|
|
|
sdata_lock(sdata);
|
2013-10-01 13:45:43 +00:00
|
|
|
__ieee80211_request_smps_mgd(sdata, sdata->u.mgd.req_smps);
|
2013-06-03 11:51:59 +00:00
|
|
|
sdata_unlock(sdata);
|
2009-12-01 12:37:02 +00:00
|
|
|
|
2009-07-01 19:26:57 +00:00
|
|
|
if (local->hw.flags & IEEE80211_HW_SUPPORTS_DYNAMIC_PS)
|
|
|
|
ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
|
|
|
|
|
|
|
|
ieee80211_recalc_ps(local, -1);
|
2012-07-27 09:33:22 +00:00
|
|
|
ieee80211_recalc_ps_vif(sdata);
|
2009-07-01 19:26:57 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2010-03-23 07:02:34 +00:00
|
|
|
static int ieee80211_set_cqm_rssi_config(struct wiphy *wiphy,
|
|
|
|
struct net_device *dev,
|
|
|
|
s32 rssi_thold, u32 rssi_hyst)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_vif *vif = &sdata->vif;
|
|
|
|
struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
|
|
|
|
|
|
|
|
if (rssi_thold == bss_conf->cqm_rssi_thold &&
|
|
|
|
rssi_hyst == bss_conf->cqm_rssi_hyst)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
bss_conf->cqm_rssi_thold = rssi_thold;
|
|
|
|
bss_conf->cqm_rssi_hyst = rssi_hyst;
|
|
|
|
|
|
|
|
/* tell the driver upon association, unless already associated */
|
2012-01-19 08:29:58 +00:00
|
|
|
if (sdata->u.mgd.associated &&
|
|
|
|
sdata->vif.driver_flags & IEEE80211_VIF_SUPPORTS_CQM_RSSI)
|
2010-03-23 07:02:34 +00:00
|
|
|
ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_CQM);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-07-01 19:26:59 +00:00
|
|
|
static int ieee80211_set_bitrate_mask(struct wiphy *wiphy,
|
|
|
|
struct net_device *dev,
|
|
|
|
const u8 *addr,
|
|
|
|
const struct cfg80211_bitrate_mask *mask)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr);
|
2011-04-27 11:26:51 +00:00
|
|
|
int i, ret;
|
2009-12-04 08:26:38 +00:00
|
|
|
|
2012-06-12 09:41:15 +00:00
|
|
|
if (!ieee80211_sdata_running(sdata))
|
|
|
|
return -ENETDOWN;
|
|
|
|
|
2011-04-27 11:26:51 +00:00
|
|
|
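/*
 * Drivers doing their own rate control take the mask directly;
 * otherwise it is stored below for the software rate control
 * algorithms to consult.
 */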
if (local->hw.flags & IEEE80211_HW_HAS_RATE_CONTROL) {
|
|
|
|
ret = drv_set_bitrate_mask(local, sdata, mask);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
2009-07-01 19:26:59 +00:00
|
|
|
|
2012-01-28 16:25:33 +00:00
|
|
|
for (i = 0; i < IEEE80211_NUM_BANDS; i++) {
|
2013-04-16 11:38:42 +00:00
|
|
|
struct ieee80211_supported_band *sband = wiphy->bands[i];
|
|
|
|
int j;
|
|
|
|
|
2010-01-06 11:09:08 +00:00
|
|
|
sdata->rc_rateidx_mask[i] = mask->control[i].legacy;
|
2013-12-05 09:02:15 +00:00
|
|
|
memcpy(sdata->rc_rateidx_mcs_mask[i], mask->control[i].ht_mcs,
|
|
|
|
sizeof(mask->control[i].ht_mcs));
|
2013-04-16 11:38:42 +00:00
|
|
|
|
|
|
|
sdata->rc_has_mcs_mask[i] = false;
|
|
|
|
if (!sband)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
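/*
 * An all-ones mask byte means "no restriction"; any cleared bit
 * means rate control must honour an MCS mask for this band.
 */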
for (j = 0; j < IEEE80211_HT_MCS_MASK_LEN; j++)
|
|
|
|
if (~sdata->rc_rateidx_mcs_mask[i][j]) {
|
|
|
|
sdata->rc_has_mcs_mask[i] = true;
|
|
|
|
break;
|
|
|
|
}
|
2012-01-28 16:25:33 +00:00
|
|
|
}
|
2009-07-01 19:26:59 +00:00
|
|
|
|
2010-01-06 11:09:08 +00:00
|
|
|
return 0;
|
2009-07-01 19:26:59 +00:00
|
|
|
}
|
|
|
|
|
2014-09-03 12:25:06 +00:00
|
|
|
static bool ieee80211_coalesce_started_roc(struct ieee80211_local *local,
|
|
|
|
struct ieee80211_roc_work *new_roc,
|
|
|
|
struct ieee80211_roc_work *cur_roc)
|
|
|
|
{
|
|
|
|
unsigned long j = jiffies;
|
|
|
|
unsigned long cur_roc_end = cur_roc->hw_start_time +
|
|
|
|
msecs_to_jiffies(cur_roc->duration);
|
|
|
|
struct ieee80211_roc_work *next_roc;
|
|
|
|
int new_dur;
|
|
|
|
|
|
|
|
if (WARN_ON(!cur_roc->started || !cur_roc->hw_begun))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
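/*
 * If the current ROC is about to end, there is too little time
 * left to usefully piggy-back on it.
 */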
if (time_after(j + IEEE80211_ROC_MIN_LEFT, cur_roc_end))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
ieee80211_handle_roc_started(new_roc);
|
|
|
|
|
|
|
|
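/* duration still needed once the current ROC's remaining time is used */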
new_dur = new_roc->duration - jiffies_to_msecs(cur_roc_end - j);
|
|
|
|
|
|
|
|
/* cur_roc is long enough - add new_roc to the dependents list. */
|
|
|
|
if (new_dur <= 0) {
|
|
|
|
list_add_tail(&new_roc->list, &cur_roc->dependents);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
new_roc->duration = new_dur;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* if cur_roc was already coalesced before, we might
|
|
|
|
* want to extend the next roc instead of adding
|
|
|
|
* a new one.
|
|
|
|
*/
|
|
|
|
next_roc = list_entry(cur_roc->list.next,
|
|
|
|
struct ieee80211_roc_work, list);
|
|
|
|
if (&next_roc->list != &local->roc_list &&
|
|
|
|
next_roc->chan == new_roc->chan &&
|
|
|
|
next_roc->sdata == new_roc->sdata &&
|
|
|
|
!WARN_ON(next_roc->started)) {
|
|
|
|
list_add_tail(&new_roc->list, &next_roc->dependents);
|
|
|
|
next_roc->duration = max(next_roc->duration,
|
|
|
|
new_roc->duration);
|
|
|
|
next_roc->type = max(next_roc->type, new_roc->type);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* add right after cur_roc */
|
|
|
|
list_add(&new_roc->list, &cur_roc->list);
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
mac80211: unify SW/offload remain-on-channel
Redesign all the off-channel code, getting rid of
the generic off-channel work concept, replacing
it with a simple remain-on-channel list.
This fixes a number of small issues with the ROC
implementation:
* offloaded remain-on-channel couldn't be queued,
now we can queue it as well, if needed
* in iwlwifi (the only user) offloaded ROC is
mutually exclusive with scanning, use the new
queue to handle that case -- I expect that it
will later depend on a HW flag
The bigger issue though is that there's a bad bug
in the current implementation: if we get a mgmt
TX request while HW roc is active, and this new
request has a wait time, we actually schedule a
software ROC instead since we can't guarantee the
existing offloaded ROC will still be that long.
To fix this, the queuing mechanism was needed.
The queuing mechanism for offloaded ROC isn't yet
optimal, ideally we should add API to have the HW
extend the ROC if needed. We could add that later
but for now use a software implementation.
Overall, this unifies the behaviour between the
offloaded and software-implemented case as much
as possible.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2012-06-05 12:28:42 +00:00
|
|
|
static int ieee80211_start_roc_work(struct ieee80211_local *local,
|
|
|
|
struct ieee80211_sub_if_data *sdata,
|
|
|
|
struct ieee80211_channel *channel,
|
|
|
|
unsigned int duration, u64 *cookie,
|
2013-02-12 07:34:13 +00:00
|
|
|
struct sk_buff *txskb,
|
|
|
|
enum ieee80211_roc_type type)
|
2012-06-05 12:28:42 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_roc_work *roc, *tmp;
|
|
|
|
bool queued = false;
|
2010-12-18 16:20:47 +00:00
|
|
|
int ret;
|
|
|
|
|
|
|
|
lockdep_assert_held(&local->mtx);
|
|
|
|
|
2012-07-26 12:55:08 +00:00
|
|
|
if (local->use_chanctx && !local->ops->remain_on_channel)
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
roc = kzalloc(sizeof(*roc), GFP_KERNEL);
|
|
|
|
if (!roc)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2013-12-19 11:25:29 +00:00
|
|
|
/*
|
|
|
|
* If the duration is zero, then the driver
|
|
|
|
* wouldn't actually do anything. Set it to
|
|
|
|
* 10 ms for now.
|
|
|
|
*
|
|
|
|
* TODO: cancel the off-channel operation
|
|
|
|
* when we get the SKB's TX status and
|
|
|
|
* the wait time was zero before.
|
|
|
|
*/
|
|
|
|
if (!duration)
|
|
|
|
duration = 10;
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
roc->chan = channel;
|
|
|
|
roc->duration = duration;
|
|
|
|
roc->req_duration = duration;
|
|
|
|
roc->frame = txskb;
|
2013-02-12 07:34:13 +00:00
|
|
|
roc->type = type;
|
2012-06-05 12:28:42 +00:00
|
|
|
roc->mgmt_tx_cookie = (unsigned long)txskb;
|
|
|
|
roc->sdata = sdata;
|
|
|
|
INIT_DELAYED_WORK(&roc->work, ieee80211_sw_roc_work);
|
|
|
|
INIT_LIST_HEAD(&roc->dependents);
|
|
|
|
|
2014-01-12 09:06:37 +00:00
|
|
|
/*
|
|
|
|
* cookie is either the roc cookie (for normal roc)
|
|
|
|
* or the SKB (for mgmt TX)
|
|
|
|
*/
|
|
|
|
if (!txskb) {
|
|
|
|
/* local->mtx protects this */
|
|
|
|
local->roc_cookie_counter++;
|
|
|
|
roc->cookie = local->roc_cookie_counter;
|
|
|
|
/* wow, you wrapped 64 bits ... more likely a bug */
|
|
|
|
if (WARN_ON(roc->cookie == 0)) {
|
|
|
|
roc->cookie = 1;
|
|
|
|
local->roc_cookie_counter++;
|
|
|
|
}
|
|
|
|
*cookie = roc->cookie;
|
|
|
|
} else {
|
|
|
|
*cookie = (unsigned long)txskb;
|
|
|
|
}
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
/* if there's one pending or we're scanning, queue this one */
|
2013-02-08 17:16:20 +00:00
|
|
|
if (!list_empty(&local->roc_list) ||
|
|
|
|
local->scanning || local->radar_detect_enabled)
|
2012-06-05 12:28:42 +00:00
|
|
|
goto out_check_combine;
|
|
|
|
|
|
|
|
/* if not HW assist, just queue & schedule work */
|
|
|
|
if (!local->ops->remain_on_channel) {
|
|
|
|
ieee80211_queue_delayed_work(&local->hw, &roc->work, 0);
|
|
|
|
goto out_queue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* otherwise actually kick it off here (for error handling) */
|
|
|
|
|
2013-02-12 07:34:13 +00:00
|
|
|
ret = drv_remain_on_channel(local, sdata, channel, duration, type);
|
2010-12-18 16:20:47 +00:00
|
|
|
if (ret) {
|
2012-06-05 12:28:42 +00:00
|
|
|
kfree(roc);
|
|
|
|
return ret;
|
2010-12-18 16:20:47 +00:00
|
|
|
}
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
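/*
 * The driver accepted the request; hw_begun will be set once it
 * notifies us that the ROC period has actually begun.
 */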
roc->started = true;
|
|
|
|
goto out_queue;
|
|
|
|
|
|
|
|
out_check_combine:
|
|
|
|
list_for_each_entry(tmp, &local->roc_list, list) {
|
2012-11-08 17:31:02 +00:00
|
|
|
if (tmp->chan != channel || tmp->sdata != sdata)
|
2012-06-05 12:28:42 +00:00
|
|
|
continue;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Extend this ROC if possible:
|
|
|
|
*
|
|
|
|
* If it hasn't started yet, just increase the duration
|
|
|
|
* and add the new one to the list of dependents.
|
2013-02-12 07:34:13 +00:00
|
|
|
* If the type of the new ROC has higher priority, modify the
|
|
|
|
* type of the previous one to match that of the new one.
|
2012-06-05 12:28:42 +00:00
|
|
|
*/
|
|
|
|
if (!tmp->started) {
|
|
|
|
list_add_tail(&roc->list, &tmp->dependents);
|
|
|
|
tmp->duration = max(tmp->duration, roc->duration);
|
2013-02-12 07:34:13 +00:00
|
|
|
tmp->type = max(tmp->type, roc->type);
|
2012-06-05 12:28:42 +00:00
|
|
|
queued = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If it has already started, it's more difficult ... */
|
|
|
|
if (local->ops->remain_on_channel) {
|
|
|
|
/*
|
|
|
|
* In the offloaded ROC case, if it hasn't begun, add
|
|
|
|
* this new one to the dependent list to be handled
|
2013-02-12 07:34:13 +00:00
|
|
|
* when the master one begins. If it has begun,
|
2012-06-05 12:28:42 +00:00
|
|
|
* check that there's still a minimum time left and
|
|
|
|
* if so, start this one, transmitting the frame, but
|
2013-02-12 07:34:13 +00:00
|
|
|
* add it to the list directly after this one with
|
2012-06-05 12:28:42 +00:00
|
|
|
* a reduced time so we'll ask the driver to execute
|
|
|
|
* it right after finishing the previous one, in the
|
|
|
|
* hope that it'll also be executed right afterwards,
|
|
|
|
* effectively extending the old one.
|
|
|
|
* If there's no minimum time left, just add it to the
|
|
|
|
* normal list.
|
2013-02-12 07:34:13 +00:00
|
|
|
* TODO: the ROC type is ignored here, assuming that it
|
|
|
|
* is better to immediately use the current ROC.
|
2012-06-05 12:28:42 +00:00
|
|
|
*/
|
|
|
|
if (!tmp->hw_begun) {
|
|
|
|
list_add_tail(&roc->list, &tmp->dependents);
|
|
|
|
queued = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2014-09-03 12:25:06 +00:00
|
|
|
if (ieee80211_coalesce_started_roc(local, roc, tmp))
|
2012-06-05 12:28:42 +00:00
|
|
|
queued = true;
|
|
|
|
} else if (del_timer_sync(&tmp->work.timer)) {
|
|
|
|
unsigned long new_end;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* In the software ROC case, cancel the timer, if
|
|
|
|
* that fails then the finish work is already
|
|
|
|
* queued/pending and thus we queue the new ROC
|
|
|
|
* normally, if that succeeds then we can extend
|
|
|
|
* the timer duration and TX the frame (if any.)
|
|
|
|
*/
|
|
|
|
|
|
|
|
list_add_tail(&roc->list, &tmp->dependents);
|
|
|
|
queued = true;
|
|
|
|
|
|
|
|
new_end = jiffies + msecs_to_jiffies(roc->duration);
|
|
|
|
|
|
|
|
/* ok, it was started & we canceled timer */
|
|
|
|
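/*
 * del_timer_sync() stopped the timer, so it must be re-armed in
 * either case: extend it if the new request needs more time,
 * otherwise restore the original expiry.
 */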
if (time_after(new_end, tmp->work.timer.expires))
|
|
|
|
mod_timer(&tmp->work.timer, new_end);
|
|
|
|
else
|
|
|
|
add_timer(&tmp->work.timer);
|
|
|
|
|
|
|
|
ieee80211_handle_roc_started(roc);
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
out_queue:
|
|
|
|
if (!queued)
|
|
|
|
list_add_tail(&roc->list, &local->roc_list);
|
|
|
|
|
|
|
|
return 0;
|
2010-12-18 16:20:47 +00:00
|
|
|
}
|
|
|
|
|
2009-12-23 12:15:42 +00:00
|
|
|
static int ieee80211_remain_on_channel(struct wiphy *wiphy,
|
2012-06-15 13:30:18 +00:00
|
|
|
struct wireless_dev *wdev,
|
2009-12-23 12:15:42 +00:00
|
|
|
struct ieee80211_channel *chan,
|
|
|
|
unsigned int duration,
|
|
|
|
u64 *cookie)
|
|
|
|
{
|
2012-06-15 13:30:18 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
|
2010-12-18 16:20:47 +00:00
|
|
|
struct ieee80211_local *local = sdata->local;
|
2012-06-05 12:28:42 +00:00
|
|
|
int ret;
|
2010-12-18 16:20:47 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
mutex_lock(&local->mtx);
|
2012-11-08 17:31:02 +00:00
|
|
|
ret = ieee80211_start_roc_work(local, sdata, chan,
|
2013-02-12 07:34:13 +00:00
|
|
|
duration, cookie, NULL,
|
|
|
|
IEEE80211_ROC_TYPE_NORMAL);
|
2012-06-05 12:28:42 +00:00
|
|
|
mutex_unlock(&local->mtx);
|
2009-12-23 12:15:42 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
return ret;
|
2009-12-23 12:15:42 +00:00
|
|
|
}
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
static int ieee80211_cancel_roc(struct ieee80211_local *local,
|
|
|
|
u64 cookie, bool mgmt_tx)
|
2010-12-18 16:20:47 +00:00
|
|
|
{
|
2012-06-05 12:28:42 +00:00
|
|
|
struct ieee80211_roc_work *roc, *tmp, *found = NULL;
|
2010-12-18 16:20:47 +00:00
|
|
|
int ret;
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
mutex_lock(&local->mtx);
|
|
|
|
list_for_each_entry_safe(roc, tmp, &local->roc_list, list) {
|
2012-06-11 15:09:41 +00:00
|
|
|
struct ieee80211_roc_work *dep, *tmp2;
|
|
|
|
|
|
|
|
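/*
 * The requested item may have been queued as a dependent of
 * another ROC, so check each entry's dependents as well.
 */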
list_for_each_entry_safe(dep, tmp2, &roc->dependents, list) {
|
2012-10-26 14:13:06 +00:00
|
|
|
if (!mgmt_tx && dep->cookie != cookie)
|
2012-06-11 15:09:41 +00:00
|
|
|
continue;
|
|
|
|
else if (mgmt_tx && dep->mgmt_tx_cookie != cookie)
|
|
|
|
continue;
|
|
|
|
/* found dependent item -- just remove it */
|
|
|
|
list_del(&dep->list);
|
|
|
|
mutex_unlock(&local->mtx);
|
|
|
|
|
2013-03-25 10:51:14 +00:00
|
|
|
ieee80211_roc_notify_destroy(dep, true);
|
2012-06-11 15:09:41 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-10-26 14:13:06 +00:00
|
|
|
if (!mgmt_tx && roc->cookie != cookie)
|
2012-06-05 12:28:42 +00:00
|
|
|
continue;
|
|
|
|
else if (mgmt_tx && roc->mgmt_tx_cookie != cookie)
|
|
|
|
continue;
|
2010-12-18 16:20:47 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
found = roc;
|
|
|
|
break;
|
|
|
|
}
|
2010-12-18 16:20:47 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
if (!found) {
|
|
|
|
mutex_unlock(&local->mtx);
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
2010-12-18 16:20:47 +00:00
|
|
|
|
2012-06-11 15:09:41 +00:00
|
|
|
/*
|
|
|
|
* We found the item to cancel, so do that. Note that it
|
|
|
|
* may have dependents, which we also cancel (and send
|
|
|
|
* the expired signal for.) Not doing so would be quite
|
|
|
|
* tricky here, but we may need to fix it later.
|
|
|
|
*/
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
if (local->ops->remain_on_channel) {
|
|
|
|
if (found->started) {
|
|
|
|
ret = drv_cancel_remain_on_channel(local);
|
|
|
|
if (WARN_ON_ONCE(ret)) {
|
|
|
|
mutex_unlock(&local->mtx);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
}
|
2010-12-18 16:20:47 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
list_del(&found->list);
|
2010-12-18 16:20:47 +00:00
|
|
|
|
2012-06-20 18:11:33 +00:00
|
|
|
if (found->started)
|
|
|
|
ieee80211_start_next_roc(local);
|
2012-06-05 12:28:42 +00:00
|
|
|
mutex_unlock(&local->mtx);
|
2010-12-18 16:20:47 +00:00
|
|
|
|
2013-03-25 10:51:14 +00:00
|
|
|
ieee80211_roc_notify_destroy(found, true);
|
2012-06-05 12:28:42 +00:00
|
|
|
} else {
|
|
|
|
/* work may be pending so use it all the time */
|
|
|
|
found->abort = true;
|
|
|
|
ieee80211_queue_delayed_work(&local->hw, &found->work, 0);
|
2010-12-18 16:20:47 +00:00
|
|
|
|
|
|
|
mutex_unlock(&local->mtx);
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
/* work will clean up etc */
|
|
|
|
flush_delayed_work(&found->work);
|
2013-03-25 10:51:14 +00:00
|
|
|
WARN_ON(!found->to_be_freed);
|
|
|
|
kfree(found);
|
2010-12-18 16:20:47 +00:00
|
|
|
}
|
2009-12-23 12:15:42 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
return 0;
|
2009-12-23 12:15:42 +00:00
|
|
|
}
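The cancel path above walks local->roc_list looking for a matching cookie. For contrast, here is a minimal hedged sketch of the enqueue side of that list; it is not the actual mac80211 start path, the helper name is made up, and the field names are taken from the cancel code above, plus the assumption that the entry type is struct ieee80211_roc_work.
/*
 * Hedged sketch only, not cfg.c code: queue a new ROC request.  The
 * caller is assumed to hold local->mtx, matching the cancel path above.
 * Only the head of local->roc_list runs at any time; later entries wait
 * until ieee80211_start_next_roc() promotes them.
 */
static void example_roc_enqueue(struct ieee80211_local *local,
                                struct ieee80211_roc_work *roc)
{
        bool was_empty = list_empty(&local->roc_list);

        list_add_tail(&roc->list, &local->roc_list);

        if (was_empty)
                ieee80211_start_next_roc(local);
}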
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
static int ieee80211_cancel_remain_on_channel(struct wiphy *wiphy,
|
2012-06-15 13:30:18 +00:00
|
|
|
struct wireless_dev *wdev,
|
2012-06-05 12:28:42 +00:00
|
|
|
u64 cookie)
|
2010-11-25 09:02:30 +00:00
|
|
|
{
|
2012-06-15 13:30:18 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
|
2012-06-05 12:28:42 +00:00
|
|
|
struct ieee80211_local *local = sdata->local;
|
2010-11-25 09:02:30 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
return ieee80211_cancel_roc(local, cookie, false);
|
2010-11-25 09:02:30 +00:00
|
|
|
}
|
|
|
|
|
2013-02-08 17:16:20 +00:00
|
|
|
static int ieee80211_start_radar_detection(struct wiphy *wiphy,
|
|
|
|
struct net_device *dev,
|
2014-02-21 18:46:13 +00:00
|
|
|
struct cfg80211_chan_def *chandef,
|
|
|
|
u32 cac_time_ms)
|
2013-02-08 17:16:20 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_local *local = sdata->local;
|
|
|
|
int err;
|
|
|
|
|
mac80211: fix iflist_mtx/mtx locking in radar detection
The scan code creates an iflist_mtx -> mtx locking dependency,
and a few other places, notably radar detection, were creating
the opposite dependency, causing lockdep to complain. As scan
and radar detection are mutually exclusive, the deadlock can't
really happen in practice, but it's still bad form.
A similar issue exists in the monitor mode code, but this is
only used by channel-context drivers right now and those have
to have hardware scan, so that also can't happen.
Still, fix these issues by making some of the channel context
code require the mtx to be held rather than acquiring it, thus
allowing the monitor/radar callers to keep the iflist_mtx->mtx
lock ordering.
While at it, also fix access to the local->scanning variable
in the radar code, and document that radar_detect_enabled is
now properly protected by the mtx.
All this would now introduce an ABBA deadlock between the DFS
work cancelling and local->mtx, so change the locking there
slightly so that plain cancel_delayed_work() suffices instead of
cancel_delayed_work_sync(). The work is also safely
stopped/removed when the interface is stopped, so no extra
changes are needed.
Reported-by: Kalle Valo <kvalo@qca.qualcomm.com>
Tested-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2013-12-18 08:43:33 +00:00
|
|
|
mutex_lock(&local->mtx);
|
|
|
|
if (!list_empty(&local->roc_list) || local->scanning) {
|
|
|
|
err = -EBUSY;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2013-02-08 17:16:20 +00:00
|
|
|
|
|
|
|
/* whatever, but channel contexts should not complain about that one */
|
|
|
|
sdata->smps_mode = IEEE80211_SMPS_OFF;
|
|
|
|
sdata->needed_rx_chains = local->rx_chains;
|
|
|
|
|
|
|
|
err = ieee80211_vif_use_channel(sdata, chandef,
|
|
|
|
IEEE80211_CHANCTX_SHARED);
|
|
|
|
if (err)
|
2013-12-18 08:43:33 +00:00
|
|
|
goto out_unlock;
|
2013-02-08 17:16:20 +00:00
|
|
|
|
|
|
|
ieee80211_queue_delayed_work(&sdata->local->hw,
|
2014-02-21 18:46:13 +00:00
|
|
|
&sdata->dfs_cac_timer_work,
|
|
|
|
msecs_to_jiffies(cac_time_ms));
|
2013-02-08 17:16:20 +00:00
|
|
|
|
2013-12-18 08:43:33 +00:00
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&local->mtx);
|
|
|
|
return err;
|
2013-02-08 17:16:20 +00:00
|
|
|
}
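The locking note above pins the lock order to iflist_mtx before mtx. A hedged illustration of that ordering follows; the function is hypothetical and exists only to show the convention, it is not part of cfg.c.
/*
 * Illustrative only: the iflist_mtx -> mtx ordering the blame note above
 * establishes.  Taking the two mutexes in the opposite order elsewhere
 * is what made lockdep complain.
 */
static void example_lock_order(struct ieee80211_local *local)
{
        mutex_lock(&local->iflist_mtx);         /* outer: interface list */
        mutex_lock(&local->mtx);                /* inner: scan/ROC/radar state */

        /* ... work that needs both locks ... */

        mutex_unlock(&local->mtx);
        mutex_unlock(&local->iflist_mtx);
}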
|
|
|
|
|
2013-07-11 14:09:06 +00:00
|
|
|
static struct cfg80211_beacon_data *
|
|
|
|
cfg80211_beacon_dup(struct cfg80211_beacon_data *beacon)
|
|
|
|
{
|
|
|
|
struct cfg80211_beacon_data *new_beacon;
|
|
|
|
u8 *pos;
|
|
|
|
int len;
|
|
|
|
|
|
|
|
len = beacon->head_len + beacon->tail_len + beacon->beacon_ies_len +
|
|
|
|
beacon->proberesp_ies_len + beacon->assocresp_ies_len +
|
|
|
|
beacon->probe_resp_len;
|
|
|
|
|
|
|
|
new_beacon = kzalloc(sizeof(*new_beacon) + len, GFP_KERNEL);
|
|
|
|
if (!new_beacon)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
pos = (u8 *)(new_beacon + 1);
|
|
|
|
if (beacon->head_len) {
|
|
|
|
new_beacon->head_len = beacon->head_len;
|
|
|
|
new_beacon->head = pos;
|
|
|
|
memcpy(pos, beacon->head, beacon->head_len);
|
|
|
|
pos += beacon->head_len;
|
|
|
|
}
|
|
|
|
if (beacon->tail_len) {
|
|
|
|
new_beacon->tail_len = beacon->tail_len;
|
|
|
|
new_beacon->tail = pos;
|
|
|
|
memcpy(pos, beacon->tail, beacon->tail_len);
|
|
|
|
pos += beacon->tail_len;
|
|
|
|
}
|
|
|
|
if (beacon->beacon_ies_len) {
|
|
|
|
new_beacon->beacon_ies_len = beacon->beacon_ies_len;
|
|
|
|
new_beacon->beacon_ies = pos;
|
|
|
|
memcpy(pos, beacon->beacon_ies, beacon->beacon_ies_len);
|
|
|
|
pos += beacon->beacon_ies_len;
|
|
|
|
}
|
|
|
|
if (beacon->proberesp_ies_len) {
|
|
|
|
new_beacon->proberesp_ies_len = beacon->proberesp_ies_len;
|
|
|
|
new_beacon->proberesp_ies = pos;
|
|
|
|
memcpy(pos, beacon->proberesp_ies, beacon->proberesp_ies_len);
|
|
|
|
pos += beacon->proberesp_ies_len;
|
|
|
|
}
|
|
|
|
if (beacon->assocresp_ies_len) {
|
|
|
|
new_beacon->assocresp_ies_len = beacon->assocresp_ies_len;
|
|
|
|
new_beacon->assocresp_ies = pos;
|
|
|
|
memcpy(pos, beacon->assocresp_ies, beacon->assocresp_ies_len);
|
|
|
|
pos += beacon->assocresp_ies_len;
|
|
|
|
}
|
|
|
|
if (beacon->probe_resp_len) {
|
|
|
|
new_beacon->probe_resp_len = beacon->probe_resp_len;
|
|
|
|
new_beacon->probe_resp = pos;
|
|
|
|
memcpy(pos, beacon->probe_resp, beacon->probe_resp_len);
|
|
|
|
pos += beacon->probe_resp_len;
|
|
|
|
}
|
|
|
|
|
|
|
|
return new_beacon;
|
|
|
|
}
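cfg80211_beacon_dup() packs the struct and all IE buffers into a single kzalloc() region, so the copy and everything it points to is released with one kfree(). A small hedged usage sketch follows; the wrapper function below is hypothetical.
/* Hedged usage sketch: duplicate a beacon, then free the copy with a
 * single kfree(), since every pointer references the same allocation. */
static int example_dup_beacon(struct cfg80211_beacon_data *beacon)
{
        struct cfg80211_beacon_data *copy = cfg80211_beacon_dup(beacon);

        if (!copy)
                return -ENOMEM;

        /* copy->head, copy->tail, ... all point just past *copy itself */
        kfree(copy);
        return 0;
}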
|
|
|
|
|
mac80211: only set CSA beacon when at least one beacon must be transmitted
A beacon should never have a Channel Switch Announcement information
element with a count of 0, because a count of 1 means switch just
before the next beacon. So, if a count of 0 was valid in a beacon, it
would have been transmitted in the next channel already, which is
useless. A CSA count equal to zero is only meaningful in action
frames or probe_responses.
Fix the ieee80211_csa_is_complete() and ieee80211_update_csa()
functions accordingly.
With a CSA count of 0, we won't transmit any CSA beacons, because the
switch will happen before the next TBTT. To avoid extra work and
potential confusion in the drivers, complete the CSA immediately,
instead of waiting for the driver to call ieee80211_csa_finish().
To keep things simpler, we also switch immediately when the CSA count
is 1, while in theory we should delay the switch until just before the
next TBTT.
Additionally, move the ieee80211_csa_finish() function to cfg.c,
where it makes more sense.
Tested-by: Simon Wunderlich <sw@simonwunderlich.de>
Acked-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2014-01-13 17:43:00 +00:00
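In short, the rule stated above is that a countdown of 0 or 1 never produces a CSA beacon, so the switch completes immediately. A one-line hedged helper capturing that check; the helper name is invented, and the real code later in this file simply tests params->count <= 1 inline.
/* Hedged illustration of the count rule above, not a real cfg.c helper. */
static bool example_csa_needs_countdown(const struct cfg80211_csa_settings *params)
{
        /* count <= 1 means "switch before the next TBTT": no CSA beacon */
        return params->count > 1;
}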
|
|
|
void ieee80211_csa_finish(struct ieee80211_vif *vif)
|
2013-07-11 14:09:06 +00:00
|
|
|
{
|
2014-01-13 17:43:00 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
|
2013-07-11 14:09:06 +00:00
|
|
|
|
2014-01-13 17:43:00 +00:00
|
|
|
ieee80211_queue_work(&sdata->local->hw,
|
|
|
|
&sdata->csa_finalize_work);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(ieee80211_csa_finish);
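ieee80211_csa_finish() is the exported hook a driver calls once the last countdown beacon has been transmitted; as shown above it only schedules csa_finalize_work. A hedged driver-side sketch follows; the surrounding callback is hypothetical, only ieee80211_csa_finish() is real API.
/* Hedged driver-side sketch: tell mac80211 the CSA countdown is done. */
static void example_driver_csa_countdown_done(struct ieee80211_vif *vif)
{
        ieee80211_csa_finish(vif);
}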
|
2013-11-21 17:19:51 +00:00
|
|
|
|
2014-04-09 13:11:00 +00:00
|
|
|
static int ieee80211_set_after_csa_beacon(struct ieee80211_sub_if_data *sdata,
|
|
|
|
u32 *changed)
|
2014-01-13 17:43:00 +00:00
|
|
|
{
|
2014-04-09 13:11:00 +00:00
|
|
|
int err;
|
2013-09-01 14:15:51 +00:00
|
|
|
|
2013-08-28 11:41:31 +00:00
|
|
|
switch (sdata->vif.type) {
|
|
|
|
case NL80211_IFTYPE_AP:
|
2014-06-05 12:21:36 +00:00
|
|
|
err = ieee80211_assign_beacon(sdata, sdata->u.ap.next_beacon,
|
|
|
|
NULL);
|
2014-01-29 06:56:18 +00:00
|
|
|
kfree(sdata->u.ap.next_beacon);
|
|
|
|
sdata->u.ap.next_beacon = NULL;
|
|
|
|
|
2013-08-28 11:41:31 +00:00
|
|
|
if (err < 0)
|
2014-04-09 13:11:00 +00:00
|
|
|
return err;
|
|
|
|
*changed |= err;
|
2013-08-28 11:41:31 +00:00
|
|
|
break;
|
|
|
|
case NL80211_IFTYPE_ADHOC:
|
2014-01-29 06:56:17 +00:00
|
|
|
err = ieee80211_ibss_finish_csa(sdata);
|
|
|
|
if (err < 0)
|
2014-04-09 13:11:00 +00:00
|
|
|
return err;
|
|
|
|
*changed |= err;
|
2013-08-28 11:41:31 +00:00
|
|
|
break;
|
2013-10-17 22:55:02 +00:00
|
|
|
#ifdef CONFIG_MAC80211_MESH
|
|
|
|
case NL80211_IFTYPE_MESH_POINT:
|
|
|
|
err = ieee80211_mesh_finish_csa(sdata);
|
|
|
|
if (err < 0)
|
2014-04-09 13:11:00 +00:00
|
|
|
return err;
|
|
|
|
*changed |= err;
|
2013-10-17 22:55:02 +00:00
|
|
|
break;
|
|
|
|
#endif
|
2013-08-28 11:41:31 +00:00
|
|
|
default:
|
|
|
|
WARN_ON(1);
|
2014-04-09 13:11:00 +00:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-05-08 07:10:02 +00:00
|
|
|
static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
|
2014-04-09 13:11:00 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_local *local = sdata->local;
|
|
|
|
u32 changed = 0;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
sdata_assert_lock(sdata);
|
|
|
|
lockdep_assert_held(&local->mtx);
|
2014-06-25 10:35:08 +00:00
|
|
|
lockdep_assert_held(&local->chanctx_mtx);
|
2014-04-09 13:11:00 +00:00
|
|
|
|
2014-06-25 10:35:08 +00:00
|
|
|
/*
|
|
|
|
* using reservation isn't immediate as it may be deferred until later
|
|
|
|
* with multi-vif. once reservation is complete it will re-schedule the
|
|
|
|
* work with no reserved_chanctx so verify chandef to check if it
|
|
|
|
* completed successfully
|
|
|
|
*/
|
2014-04-09 13:11:00 +00:00
|
|
|
|
2014-06-25 10:35:08 +00:00
|
|
|
if (sdata->reserved_chanctx) {
|
|
|
|
/*
|
|
|
|
* with multi-vif csa driver may call ieee80211_csa_finish()
|
|
|
|
* many times while waiting for other interfaces to use their
|
|
|
|
* reservations
|
|
|
|
*/
|
|
|
|
if (sdata->reserved_ready)
|
|
|
|
return 0;
|
|
|
|
|
2014-10-09 18:36:22 +00:00
|
|
|
return ieee80211_vif_use_reserved_context(sdata);
|
2013-08-28 11:41:31 +00:00
|
|
|
}
|
2013-07-11 14:09:06 +00:00
|
|
|
|
2014-06-25 10:35:08 +00:00
|
|
|
if (!cfg80211_chandef_identical(&sdata->vif.bss_conf.chandef,
|
|
|
|
&sdata->csa_chandef))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2014-04-09 13:11:00 +00:00
|
|
|
sdata->vif.csa_active = false;
|
|
|
|
|
|
|
|
err = ieee80211_set_after_csa_beacon(sdata, &changed);
|
|
|
|
if (err)
|
2014-05-08 07:10:02 +00:00
|
|
|
return err;
|
2014-01-29 06:56:17 +00:00
|
|
|
|
2014-04-09 13:11:00 +00:00
|
|
|
ieee80211_bss_info_change_notify(sdata, changed);
|
2014-04-09 13:10:59 +00:00
|
|
|
|
2014-06-13 13:30:07 +00:00
|
|
|
if (sdata->csa_block_tx) {
|
|
|
|
ieee80211_wake_vif_queues(local, sdata,
|
|
|
|
IEEE80211_QUEUE_STOP_REASON_CSA);
|
|
|
|
sdata->csa_block_tx = false;
|
|
|
|
}
|
2014-05-08 07:10:02 +00:00
|
|
|
|
2014-10-08 06:48:38 +00:00
|
|
|
err = drv_post_channel_switch(sdata);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
|
|
|
cfg80211_ch_switch_notify(sdata->dev, &sdata->csa_chandef);
|
|
|
|
|
2014-05-08 07:10:02 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
|
|
|
|
{
|
|
|
|
if (__ieee80211_csa_finalize(sdata)) {
|
|
|
|
sdata_info(sdata, "failed to finalize CSA, disconnecting\n");
|
|
|
|
cfg80211_stop_iface(sdata->local->hw.wiphy, &sdata->wdev,
|
|
|
|
GFP_KERNEL);
|
|
|
|
}
|
2014-01-13 17:43:00 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void ieee80211_csa_finalize_work(struct work_struct *work)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata =
|
|
|
|
container_of(work, struct ieee80211_sub_if_data,
|
|
|
|
csa_finalize_work);
|
2014-04-09 13:10:59 +00:00
|
|
|
struct ieee80211_local *local = sdata->local;
|
2014-01-13 17:43:00 +00:00
|
|
|
|
|
|
|
sdata_lock(sdata);
|
2014-04-09 13:10:59 +00:00
|
|
|
mutex_lock(&local->mtx);
|
2014-06-25 10:35:08 +00:00
|
|
|
mutex_lock(&local->chanctx_mtx);
|
2014-04-09 13:10:59 +00:00
|
|
|
|
2014-01-13 17:43:00 +00:00
|
|
|
/* AP might have been stopped while waiting for the lock. */
|
|
|
|
if (!sdata->vif.csa_active)
|
|
|
|
goto unlock;
|
|
|
|
|
|
|
|
if (!ieee80211_sdata_running(sdata))
|
|
|
|
goto unlock;
|
|
|
|
|
|
|
|
ieee80211_csa_finalize(sdata);
|
2013-11-21 17:19:51 +00:00
|
|
|
|
|
|
|
unlock:
|
2014-06-25 10:35:08 +00:00
|
|
|
mutex_unlock(&local->chanctx_mtx);
|
2014-04-09 13:10:59 +00:00
|
|
|
mutex_unlock(&local->mtx);
|
2013-11-21 17:19:51 +00:00
|
|
|
sdata_unlock(sdata);
|
2013-07-11 14:09:06 +00:00
|
|
|
}
|
|
|
|
|
2014-02-28 14:59:06 +00:00
|
|
|
static int ieee80211_set_csa_beacon(struct ieee80211_sub_if_data *sdata,
|
|
|
|
struct cfg80211_csa_settings *params,
|
|
|
|
u32 *changed)
|
2013-07-11 14:09:06 +00:00
|
|
|
{
|
2014-06-05 12:21:36 +00:00
|
|
|
struct ieee80211_csa_settings csa = {};
|
2014-02-28 14:59:06 +00:00
|
|
|
int err;
|
2013-07-11 14:09:06 +00:00
|
|
|
|
|
|
|
switch (sdata->vif.type) {
|
|
|
|
case NL80211_IFTYPE_AP:
|
2013-08-28 11:41:31 +00:00
|
|
|
sdata->u.ap.next_beacon =
|
|
|
|
cfg80211_beacon_dup(¶ms->beacon_after);
|
|
|
|
if (!sdata->u.ap.next_beacon)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2014-01-13 17:43:00 +00:00
|
|
|
/*
|
|
|
|
* With a count of 0, we don't have to wait for any
|
|
|
|
* TBTT before switching, so complete the CSA
|
|
|
|
* immediately. In theory, with a count == 1 we
|
|
|
|
* should delay the switch until just before the next
|
|
|
|
* TBTT, but that would complicate things so we switch
|
|
|
|
* immediately too. If we would delay the switch
|
|
|
|
* until the next TBTT, we would have to set the probe
|
|
|
|
* response here.
|
|
|
|
*
|
|
|
|
* TODO: A channel switch with count <= 1 without
|
|
|
|
* sending a CSA action frame is kind of useless,
|
|
|
|
* because the clients won't know we're changing
|
|
|
|
* channels. The action frame must be implemented
|
|
|
|
* either here or in userspace.
|
|
|
|
*/
|
|
|
|
if (params->count <= 1)
|
|
|
|
break;
|
|
|
|
|
2014-05-09 11:11:47 +00:00
|
|
|
if ((params->n_counter_offsets_beacon >
|
|
|
|
IEEE80211_MAX_CSA_COUNTERS_NUM) ||
|
|
|
|
(params->n_counter_offsets_presp >
|
|
|
|
IEEE80211_MAX_CSA_COUNTERS_NUM))
|
|
|
|
return -EINVAL;
|
2014-05-09 11:11:46 +00:00
|
|
|
|
2014-06-05 12:21:36 +00:00
|
|
|
csa.counter_offsets_beacon = params->counter_offsets_beacon;
|
|
|
|
csa.counter_offsets_presp = params->counter_offsets_presp;
|
|
|
|
csa.n_counter_offsets_beacon = params->n_counter_offsets_beacon;
|
|
|
|
csa.n_counter_offsets_presp = params->n_counter_offsets_presp;
|
|
|
|
csa.count = params->count;
|
2014-05-09 11:11:46 +00:00
|
|
|
|
2014-06-05 12:21:36 +00:00
|
|
|
err = ieee80211_assign_beacon(sdata, ¶ms->beacon_csa, &csa);
|
2013-08-28 11:41:31 +00:00
|
|
|
if (err < 0) {
|
|
|
|
kfree(sdata->u.ap.next_beacon);
|
|
|
|
return err;
|
|
|
|
}
|
2014-02-28 14:59:06 +00:00
|
|
|
*changed |= err;
|
2014-01-13 17:43:00 +00:00
|
|
|
|
2013-08-28 11:41:31 +00:00
|
|
|
break;
|
|
|
|
case NL80211_IFTYPE_ADHOC:
|
|
|
|
if (!sdata->vif.bss_conf.ibss_joined)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (params->chandef.width != sdata->u.ibss.chandef.width)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
switch (params->chandef.width) {
|
|
|
|
case NL80211_CHAN_WIDTH_40:
|
|
|
|
if (cfg80211_get_chandef_type(¶ms->chandef) !=
|
|
|
|
cfg80211_get_chandef_type(&sdata->u.ibss.chandef))
|
|
|
|
return -EINVAL;
|
|
|
|
case NL80211_CHAN_WIDTH_5:
|
|
|
|
case NL80211_CHAN_WIDTH_10:
|
|
|
|
case NL80211_CHAN_WIDTH_20_NOHT:
|
|
|
|
case NL80211_CHAN_WIDTH_20:
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* changes into another band are not supported */
|
|
|
|
if (sdata->u.ibss.chandef.chan->band !=
|
|
|
|
params->chandef.chan->band)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2014-01-13 17:43:00 +00:00
|
|
|
/* see comments in the NL80211_IFTYPE_AP block */
|
|
|
|
if (params->count > 1) {
|
|
|
|
err = ieee80211_ibss_csa_beacon(sdata, params);
|
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2014-02-28 14:59:06 +00:00
|
|
|
*changed |= err;
|
2014-01-13 17:43:00 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
ieee80211_send_action_csa(sdata, params);
|
|
|
|
|
2013-07-11 14:09:06 +00:00
|
|
|
break;
|
2013-10-15 02:08:28 +00:00
|
|
|
#ifdef CONFIG_MAC80211_MESH
|
2014-02-28 14:59:06 +00:00
|
|
|
case NL80211_IFTYPE_MESH_POINT: {
|
|
|
|
struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
|
2013-10-15 02:08:28 +00:00
|
|
|
|
|
|
|
if (params->chandef.width != sdata->vif.bss_conf.chandef.width)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
/* changes into another band are not supported */
|
|
|
|
if (sdata->vif.bss_conf.chandef.chan->band !=
|
|
|
|
params->chandef.chan->band)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2014-01-22 06:53:04 +00:00
|
|
|
if (ifmsh->csa_role == IEEE80211_MESH_CSA_ROLE_NONE) {
|
2014-01-13 17:42:58 +00:00
|
|
|
ifmsh->csa_role = IEEE80211_MESH_CSA_ROLE_INIT;
|
2014-01-22 06:53:04 +00:00
|
|
|
if (!ifmsh->pre_value)
|
|
|
|
ifmsh->pre_value = 1;
|
|
|
|
else
|
|
|
|
ifmsh->pre_value++;
|
|
|
|
}
|
2014-01-13 17:42:58 +00:00
|
|
|
|
2014-01-13 17:43:00 +00:00
|
|
|
/* see comments in the NL80211_IFTYPE_AP block */
|
|
|
|
if (params->count > 1) {
|
|
|
|
err = ieee80211_mesh_csa_beacon(sdata, params);
|
|
|
|
if (err < 0) {
|
|
|
|
ifmsh->csa_role = IEEE80211_MESH_CSA_ROLE_NONE;
|
|
|
|
return err;
|
|
|
|
}
|
2014-02-28 14:59:06 +00:00
|
|
|
*changed |= err;
|
2013-11-08 07:09:43 +00:00
|
|
|
}
|
2014-01-13 17:43:00 +00:00
|
|
|
|
|
|
|
if (ifmsh->csa_role == IEEE80211_MESH_CSA_ROLE_INIT)
|
|
|
|
ieee80211_send_action_csa(sdata, params);
|
|
|
|
|
2013-10-15 02:08:28 +00:00
|
|
|
break;
|
2014-02-28 14:59:06 +00:00
|
|
|
}
|
2013-10-15 02:08:28 +00:00
|
|
|
#endif
|
2013-07-11 14:09:06 +00:00
|
|
|
default:
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
}
|
|
|
|
|
2014-02-28 14:59:06 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-05-07 17:05:12 +00:00
|
|
|
static int
|
|
|
|
__ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
|
|
|
|
struct cfg80211_csa_settings *params)
|
2014-02-28 14:59:06 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_local *local = sdata->local;
|
2014-10-08 06:48:37 +00:00
|
|
|
struct ieee80211_channel_switch ch_switch;
|
2014-04-09 13:29:32 +00:00
|
|
|
struct ieee80211_chanctx_conf *conf;
|
2014-02-28 14:59:06 +00:00
|
|
|
struct ieee80211_chanctx *chanctx;
|
2014-10-20 19:36:04 +00:00
|
|
|
u32 changed = 0;
|
|
|
|
int err;
|
2014-02-28 14:59:06 +00:00
|
|
|
|
|
|
|
sdata_assert_lock(sdata);
|
2014-04-09 13:10:59 +00:00
|
|
|
lockdep_assert_held(&local->mtx);
|
2014-02-28 14:59:06 +00:00
|
|
|
|
|
|
|
if (!list_empty(&local->roc_list) || local->scanning)
|
|
|
|
return -EBUSY;
|
|
|
|
|
|
|
|
if (sdata->wdev.cac_started)
|
|
|
|
return -EBUSY;
|
|
|
|
|
|
|
|
if (cfg80211_chandef_identical(¶ms->chandef,
|
|
|
|
&sdata->vif.bss_conf.chandef))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2014-06-25 10:35:08 +00:00
|
|
|
/* don't allow another channel switch if one is already active. */
|
|
|
|
if (sdata->vif.csa_active)
|
|
|
|
return -EBUSY;
|
|
|
|
|
2014-04-09 13:29:32 +00:00
|
|
|
mutex_lock(&local->chanctx_mtx);
|
|
|
|
conf = rcu_dereference_protected(sdata->vif.chanctx_conf,
|
|
|
|
lockdep_is_held(&local->chanctx_mtx));
|
|
|
|
if (!conf) {
|
2014-06-25 10:35:08 +00:00
|
|
|
err = -EBUSY;
|
|
|
|
goto out;
|
2014-02-28 14:59:06 +00:00
|
|
|
}
|
|
|
|
|
2014-04-09 13:29:32 +00:00
|
|
|
chanctx = container_of(conf, struct ieee80211_chanctx, conf);
|
2014-06-25 10:35:08 +00:00
|
|
|
if (!chanctx) {
|
|
|
|
err = -EBUSY;
|
|
|
|
goto out;
|
2014-02-28 14:59:06 +00:00
|
|
|
}
|
|
|
|
|
2014-10-08 06:48:37 +00:00
|
|
|
err = drv_pre_channel_switch(sdata, &ch_switch);
|
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2014-06-25 10:35:08 +00:00
|
|
|
err = ieee80211_vif_reserve_chanctx(sdata, ¶ms->chandef,
|
|
|
|
chanctx->mode,
|
|
|
|
params->radar_required);
|
|
|
|
if (err)
|
|
|
|
goto out;
|
2014-02-28 14:59:06 +00:00
|
|
|
|
2014-06-25 10:35:08 +00:00
|
|
|
/* if reservation is invalid then this will fail */
|
|
|
|
err = ieee80211_check_combinations(sdata, NULL, chanctx->mode, 0);
|
|
|
|
if (err) {
|
|
|
|
ieee80211_vif_unreserve_chanctx(sdata);
|
|
|
|
goto out;
|
|
|
|
}
|
2014-02-28 14:59:06 +00:00
|
|
|
|
2014-10-08 06:48:37 +00:00
|
|
|
ch_switch.timestamp = 0;
|
|
|
|
ch_switch.device_timestamp = 0;
|
|
|
|
ch_switch.block_tx = params->block_tx;
|
|
|
|
ch_switch.chandef = params->chandef;
|
|
|
|
ch_switch.count = params->count;
|
|
|
|
|
2014-02-28 14:59:06 +00:00
|
|
|
err = ieee80211_set_csa_beacon(sdata, params, &changed);
|
2014-06-25 10:35:08 +00:00
|
|
|
if (err) {
|
|
|
|
ieee80211_vif_unreserve_chanctx(sdata);
|
|
|
|
goto out;
|
|
|
|
}
|
2014-02-28 14:59:06 +00:00
|
|
|
|
2013-11-11 18:34:54 +00:00
|
|
|
sdata->csa_chandef = params->chandef;
|
2014-04-09 13:10:59 +00:00
|
|
|
sdata->csa_block_tx = params->block_tx;
|
2013-07-11 14:09:06 +00:00
|
|
|
sdata->vif.csa_active = true;
|
|
|
|
|
2014-04-09 13:10:59 +00:00
|
|
|
if (sdata->csa_block_tx)
|
2014-06-13 13:30:07 +00:00
|
|
|
ieee80211_stop_vif_queues(local, sdata,
|
|
|
|
IEEE80211_QUEUE_STOP_REASON_CSA);
|
2014-04-09 13:10:59 +00:00
|
|
|
|
2014-01-13 17:43:00 +00:00
|
|
|
if (changed) {
|
|
|
|
ieee80211_bss_info_change_notify(sdata, changed);
|
|
|
|
drv_channel_switch_beacon(sdata, ¶ms->chandef);
|
|
|
|
} else {
|
|
|
|
/* if the beacon didn't change, we can finalize immediately */
|
|
|
|
ieee80211_csa_finalize(sdata);
|
|
|
|
}
|
2013-07-11 14:09:06 +00:00
|
|
|
|
2014-06-25 10:35:08 +00:00
|
|
|
out:
|
|
|
|
mutex_unlock(&local->chanctx_mtx);
|
|
|
|
return err;
|
2013-07-11 14:09:06 +00:00
|
|
|
}
|
|
|
|
|
2014-04-09 13:10:59 +00:00
|
|
|
int ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
|
|
|
|
struct cfg80211_csa_settings *params)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_local *local = sdata->local;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
mutex_lock(&local->mtx);
|
|
|
|
err = __ieee80211_channel_switch(wiphy, dev, params);
|
|
|
|
mutex_unlock(&local->mtx);
|
|
|
|
|
|
|
|
return err;
|
|
|
|
}
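These handlers are reached through the cfg80211 ops table registered with the wiphy. A trimmed, hedged sketch of that wiring follows; the real table lives elsewhere in this file and contains many more entries, so treat the struct below as illustrative only.
/* Hedged, trimmed sketch of how cfg80211 reaches the handlers above;
 * not the full mac80211 ops table. */
static const struct cfg80211_ops example_config_ops = {
        .cancel_remain_on_channel = ieee80211_cancel_remain_on_channel,
        .start_radar_detection = ieee80211_start_radar_detection,
        .channel_switch = ieee80211_channel_switch,
};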
|
|
|
|
|
2012-06-15 13:30:18 +00:00
|
|
|
static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
|
2013-11-18 17:06:49 +00:00
|
|
|
struct cfg80211_mgmt_tx_params *params,
|
|
|
|
u64 *cookie)
|
2010-02-15 10:53:10 +00:00
|
|
|
{
|
2012-06-15 13:30:18 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
|
2010-06-09 15:20:33 +00:00
|
|
|
struct ieee80211_local *local = sdata->local;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
struct sta_info *sta;
|
2013-11-18 17:06:49 +00:00
|
|
|
const struct ieee80211_mgmt *mgmt = (void *)params->buf;
|
2012-06-05 12:28:42 +00:00
|
|
|
bool need_offchan = false;
|
2011-11-04 10:18:21 +00:00
|
|
|
u32 flags;
|
2012-06-05 12:28:42 +00:00
|
|
|
int ret;
|
2014-05-09 11:11:45 +00:00
|
|
|
u8 *data;
|
2010-11-25 09:02:29 +00:00
|
|
|
|
2013-11-18 17:06:49 +00:00
|
|
|
if (params->dont_wait_for_ack)
|
2011-11-04 10:18:21 +00:00
|
|
|
flags = IEEE80211_TX_CTL_NO_ACK;
|
|
|
|
else
|
|
|
|
flags = IEEE80211_TX_INTFL_NL80211_FRAME_TX |
|
|
|
|
IEEE80211_TX_CTL_REQ_TX_STATUS;
|
|
|
|
|
2013-11-18 17:06:49 +00:00
|
|
|
if (params->no_cck)
|
2011-09-25 09:23:31 +00:00
|
|
|
flags |= IEEE80211_TX_CTL_NO_CCK_RATE;
|
|
|
|
|
2010-06-09 15:20:33 +00:00
|
|
|
switch (sdata->vif.type) {
|
|
|
|
case NL80211_IFTYPE_ADHOC:
|
2012-06-05 12:28:42 +00:00
|
|
|
if (!sdata->vif.bss_conf.ibss_joined)
|
|
|
|
need_offchan = true;
|
|
|
|
/* fall through */
|
|
|
|
#ifdef CONFIG_MAC80211_MESH
|
|
|
|
case NL80211_IFTYPE_MESH_POINT:
|
|
|
|
if (ieee80211_vif_is_mesh(&sdata->vif) &&
|
|
|
|
!sdata->u.mesh.mesh_id_len)
|
|
|
|
need_offchan = true;
|
|
|
|
/* fall through */
|
|
|
|
#endif
|
2010-09-30 19:06:09 +00:00
|
|
|
case NL80211_IFTYPE_AP:
|
|
|
|
case NL80211_IFTYPE_AP_VLAN:
|
|
|
|
case NL80211_IFTYPE_P2P_GO:
|
2012-06-05 12:28:42 +00:00
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_ADHOC &&
|
|
|
|
!ieee80211_vif_is_mesh(&sdata->vif) &&
|
|
|
|
!rcu_access_pointer(sdata->bss->beacon))
|
|
|
|
need_offchan = true;
|
2010-09-30 19:06:09 +00:00
|
|
|
if (!ieee80211_is_action(mgmt->frame_control) ||
|
2013-06-21 06:50:58 +00:00
|
|
|
mgmt->u.action.category == WLAN_CATEGORY_PUBLIC ||
|
2013-08-28 11:41:31 +00:00
|
|
|
mgmt->u.action.category == WLAN_CATEGORY_SELF_PROTECTED ||
|
|
|
|
mgmt->u.action.category == WLAN_CATEGORY_SPECTRUM_MGMT)
|
2010-06-09 15:20:33 +00:00
|
|
|
break;
|
|
|
|
rcu_read_lock();
|
|
|
|
sta = sta_info_get(sdata, mgmt->da);
|
|
|
|
rcu_read_unlock();
|
|
|
|
if (!sta)
|
|
|
|
return -ENOLINK;
|
|
|
|
break;
|
|
|
|
case NL80211_IFTYPE_STATION:
|
2010-09-30 19:06:09 +00:00
|
|
|
case NL80211_IFTYPE_P2P_CLIENT:
|
2012-06-05 12:28:42 +00:00
|
|
|
if (!sdata->u.mgd.associated)
|
|
|
|
need_offchan = true;
|
2010-06-09 15:20:33 +00:00
|
|
|
break;
|
2012-06-18 18:07:15 +00:00
|
|
|
case NL80211_IFTYPE_P2P_DEVICE:
|
|
|
|
need_offchan = true;
|
|
|
|
break;
|
2010-06-09 15:20:33 +00:00
|
|
|
default:
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
}
|
|
|
|
|
2013-06-11 12:20:00 +00:00
|
|
|
/* configurations requiring offchan cannot work if no channel has been
|
|
|
|
* specified
|
|
|
|
*/
|
2013-11-18 17:06:49 +00:00
|
|
|
if (need_offchan && !params->chan)
|
2013-06-11 12:20:00 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
mutex_lock(&local->mtx);
|
|
|
|
|
|
|
|
/* Check if the operating channel is the requested channel */
|
|
|
|
if (!need_offchan) {
|
2012-07-26 15:24:39 +00:00
|
|
|
struct ieee80211_chanctx_conf *chanctx_conf;
|
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
|
|
|
|
|
2013-06-11 12:20:00 +00:00
|
|
|
if (chanctx_conf) {
|
2013-11-18 17:06:49 +00:00
|
|
|
need_offchan = params->chan &&
|
|
|
|
(params->chan !=
|
|
|
|
chanctx_conf->def.chan);
|
|
|
|
} else if (!params->chan) {
|
2013-06-11 12:20:00 +00:00
|
|
|
ret = -EINVAL;
|
|
|
|
rcu_read_unlock();
|
|
|
|
goto out_unlock;
|
|
|
|
} else {
|
2012-06-05 12:28:42 +00:00
|
|
|
need_offchan = true;
|
2013-06-11 12:20:00 +00:00
|
|
|
}
|
2012-07-26 15:24:39 +00:00
|
|
|
rcu_read_unlock();
|
2012-06-05 12:28:42 +00:00
|
|
|
}
|
|
|
|
|
2013-11-18 17:06:49 +00:00
|
|
|
if (need_offchan && !params->offchan) {
|
2012-06-05 12:28:42 +00:00
|
|
|
ret = -EBUSY;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
|
2013-11-18 17:06:49 +00:00
|
|
|
skb = dev_alloc_skb(local->hw.extra_tx_headroom + params->len);
|
2012-06-05 12:28:42 +00:00
|
|
|
if (!skb) {
|
|
|
|
ret = -ENOMEM;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2010-06-09 15:20:33 +00:00
|
|
|
skb_reserve(skb, local->hw.extra_tx_headroom);
|
|
|
|
|
2014-05-09 11:11:45 +00:00
|
|
|
data = skb_put(skb, params->len);
|
|
|
|
memcpy(data, params->buf, params->len);
|
|
|
|
|
|
|
|
/* Update CSA counters */
|
|
|
|
if (sdata->vif.csa_active &&
|
|
|
|
(sdata->vif.type == NL80211_IFTYPE_AP ||
|
|
|
|
sdata->vif.type == NL80211_IFTYPE_ADHOC) &&
|
|
|
|
params->n_csa_offsets) {
|
|
|
|
int i;
|
2014-06-05 12:21:36 +00:00
|
|
|
struct beacon_data *beacon = NULL;
|
2014-05-09 11:11:45 +00:00
|
|
|
|
2014-06-05 12:21:36 +00:00
|
|
|
rcu_read_lock();
|
|
|
|
|
|
|
|
if (sdata->vif.type == NL80211_IFTYPE_AP)
|
|
|
|
beacon = rcu_dereference(sdata->u.ap.beacon);
|
|
|
|
else if (sdata->vif.type == NL80211_IFTYPE_ADHOC)
|
|
|
|
beacon = rcu_dereference(sdata->u.ibss.presp);
|
|
|
|
else if (ieee80211_vif_is_mesh(&sdata->vif))
|
|
|
|
beacon = rcu_dereference(sdata->u.mesh.beacon);
|
|
|
|
|
|
|
|
if (beacon)
|
|
|
|
for (i = 0; i < params->n_csa_offsets; i++)
|
|
|
|
data[params->csa_offsets[i]] =
|
|
|
|
beacon->csa_current_counter;
|
|
|
|
|
|
|
|
rcu_read_unlock();
|
2014-05-09 11:11:45 +00:00
|
|
|
}
|
2010-06-09 15:20:33 +00:00
|
|
|
|
|
|
|
IEEE80211_SKB_CB(skb)->flags = flags;
|
|
|
|
|
|
|
|
skb->dev = sdata->dev;
|
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
if (!need_offchan) {
|
2012-07-16 16:36:52 +00:00
|
|
|
*cookie = (unsigned long) skb;
|
2010-11-25 09:02:30 +00:00
|
|
|
ieee80211_tx_skb(sdata, skb);
|
2012-06-05 12:28:42 +00:00
|
|
|
ret = 0;
|
|
|
|
goto out_unlock;
|
2010-11-25 09:02:30 +00:00
|
|
|
}
|
|
|
|
|
2013-02-11 17:21:07 +00:00
|
|
|
IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_TX_OFFCHAN |
|
|
|
|
IEEE80211_TX_INTFL_OFFCHAN_TX_OK;
|
2012-06-05 12:28:42 +00:00
|
|
|
if (local->hw.flags & IEEE80211_HW_QUEUE_CONTROL)
|
|
|
|
IEEE80211_SKB_CB(skb)->hw_queue =
|
|
|
|
local->hw.offchannel_tx_hw_queue;
|
2010-11-25 09:02:30 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
/* This will handle all kinds of coalescing and immediate TX */
|
2013-11-18 17:06:49 +00:00
|
|
|
ret = ieee80211_start_roc_work(local, sdata, params->chan,
|
|
|
|
params->wait, cookie, skb,
|
2013-02-12 07:34:13 +00:00
|
|
|
IEEE80211_ROC_TYPE_MGMT_TX);
|
2012-06-05 12:28:42 +00:00
|
|
|
if (ret)
|
|
|
|
kfree_skb(skb);
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&local->mtx);
|
|
|
|
return ret;
|
2010-02-15 10:53:10 +00:00
|
|
|
}
|
|
|
|
|
2010-11-25 09:02:30 +00:00
|
|
|
static int ieee80211_mgmt_tx_cancel_wait(struct wiphy *wiphy,
|
2012-06-15 13:30:18 +00:00
|
|
|
struct wireless_dev *wdev,
|
2010-11-25 09:02:30 +00:00
|
|
|
u64 cookie)
|
|
|
|
{
|
2012-06-15 13:30:18 +00:00
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
2010-11-25 09:02:30 +00:00
|
|
|
|
2012-06-05 12:28:42 +00:00
|
|
|
return ieee80211_cancel_roc(local, cookie, true);
|
2010-11-25 09:02:30 +00:00
|
|
|
}
|
|
|
|
|
2010-10-13 10:06:24 +00:00
|
|
|
static void ieee80211_mgmt_frame_register(struct wiphy *wiphy,
|
2012-06-15 13:30:18 +00:00
|
|
|
struct wireless_dev *wdev,
|
2010-10-13 10:06:24 +00:00
|
|
|
u16 frame_type, bool reg)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
|
|
|
|
2012-06-20 15:51:14 +00:00
|
|
|
switch (frame_type) {
|
|
|
|
case IEEE80211_FTYPE_MGMT | IEEE80211_STYPE_PROBE_REQ:
|
|
|
|
if (reg)
|
|
|
|
local->probe_req_reg++;
|
|
|
|
else
|
|
|
|
local->probe_req_reg--;
|
2010-10-13 10:06:24 +00:00
|
|
|
|
2012-10-31 14:50:34 +00:00
|
|
|
if (!local->open_count)
|
|
|
|
break;
|
|
|
|
|
2012-06-20 15:51:14 +00:00
|
|
|
ieee80211_queue_work(&local->hw, &local->reconfig_filter);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
2010-10-13 10:06:24 +00:00
|
|
|
}
|
|
|
|
|
2010-11-10 03:50:56 +00:00
|
|
|
static int ieee80211_set_antenna(struct wiphy *wiphy, u32 tx_ant, u32 rx_ant)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
|
|
|
|
|
|
|
if (local->started)
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
|
|
|
return drv_set_antenna(local, tx_ant, rx_ant);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int ieee80211_get_antenna(struct wiphy *wiphy, u32 *tx_ant, u32 *rx_ant)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
|
|
|
|
|
|
|
return drv_get_antenna(local, tx_ant, rx_ant);
|
|
|
|
}
|
|
|
|
|
2011-07-05 14:35:41 +00:00
|
|
|
static int ieee80211_set_rekey_data(struct wiphy *wiphy,
|
|
|
|
struct net_device *dev,
|
|
|
|
struct cfg80211_gtk_rekey_data *data)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
|
|
|
|
if (!local->ops->set_rekey_data)
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
|
|
|
drv_set_rekey_data(local, sdata, data);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-11-04 10:18:16 +00:00
|
|
|
static int ieee80211_probe_client(struct wiphy *wiphy, struct net_device *dev,
|
|
|
|
const u8 *peer, u64 *cookie)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_local *local = sdata->local;
|
|
|
|
struct ieee80211_qos_hdr *nullfunc;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
int size = sizeof(*nullfunc);
|
|
|
|
__le16 fc;
|
|
|
|
bool qos;
|
|
|
|
struct ieee80211_tx_info *info;
|
|
|
|
struct sta_info *sta;
|
2012-07-26 15:24:39 +00:00
|
|
|
struct ieee80211_chanctx_conf *chanctx_conf;
|
|
|
|
enum ieee80211_band band;
|
2011-11-04 10:18:16 +00:00
|
|
|
|
|
|
|
rcu_read_lock();
|
2012-07-26 15:24:39 +00:00
|
|
|
chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
|
|
|
|
if (WARN_ON(!chanctx_conf)) {
|
|
|
|
rcu_read_unlock();
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2012-11-09 10:39:59 +00:00
|
|
|
band = chanctx_conf->def.chan->band;
|
2013-09-29 19:39:33 +00:00
|
|
|
sta = sta_info_get_bss(sdata, peer);
|
2011-11-11 19:22:30 +00:00
|
|
|
if (sta) {
|
2014-07-22 12:50:47 +00:00
|
|
|
qos = sta->sta.wme;
|
2011-11-11 19:22:30 +00:00
|
|
|
} else {
|
|
|
|
rcu_read_unlock();
|
2011-11-04 10:18:16 +00:00
|
|
|
return -ENOLINK;
|
2011-11-11 19:22:30 +00:00
|
|
|
}
|
2011-11-04 10:18:16 +00:00
|
|
|
|
|
|
|
if (qos) {
|
|
|
|
fc = cpu_to_le16(IEEE80211_FTYPE_DATA |
|
|
|
|
IEEE80211_STYPE_QOS_NULLFUNC |
|
|
|
|
IEEE80211_FCTL_FROMDS);
|
|
|
|
} else {
|
|
|
|
size -= 2;
|
|
|
|
fc = cpu_to_le16(IEEE80211_FTYPE_DATA |
|
|
|
|
IEEE80211_STYPE_NULLFUNC |
|
|
|
|
IEEE80211_FCTL_FROMDS);
|
|
|
|
}
|
|
|
|
|
|
|
|
skb = dev_alloc_skb(local->hw.extra_tx_headroom + size);
|
2012-07-26 15:24:39 +00:00
|
|
|
if (!skb) {
|
|
|
|
rcu_read_unlock();
|
2011-11-04 10:18:16 +00:00
|
|
|
return -ENOMEM;
|
2012-07-26 15:24:39 +00:00
|
|
|
}
|
2011-11-04 10:18:16 +00:00
|
|
|
|
|
|
|
skb->dev = dev;
|
|
|
|
|
|
|
|
skb_reserve(skb, local->hw.extra_tx_headroom);
|
|
|
|
|
|
|
|
nullfunc = (void *) skb_put(skb, size);
|
|
|
|
nullfunc->frame_control = fc;
|
|
|
|
nullfunc->duration_id = 0;
|
|
|
|
memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN);
|
|
|
|
memcpy(nullfunc->addr2, sdata->vif.addr, ETH_ALEN);
|
|
|
|
memcpy(nullfunc->addr3, sdata->vif.addr, ETH_ALEN);
|
|
|
|
nullfunc->seq_ctrl = 0;
|
|
|
|
|
|
|
|
info = IEEE80211_SKB_CB(skb);
|
|
|
|
|
|
|
|
info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS |
|
|
|
|
IEEE80211_TX_INTFL_NL80211_FRAME_TX;
|
|
|
|
|
|
|
|
skb_set_queue_mapping(skb, IEEE80211_AC_VO);
|
|
|
|
skb->priority = 7;
|
|
|
|
if (qos)
|
|
|
|
nullfunc->qos_ctrl = cpu_to_le16(7);
|
|
|
|
|
|
|
|
local_bh_disable();
|
2012-07-26 15:24:39 +00:00
|
|
|
ieee80211_xmit(sdata, skb, band);
|
2011-11-04 10:18:16 +00:00
|
|
|
local_bh_enable();
|
2012-07-26 15:24:39 +00:00
|
|
|
rcu_read_unlock();
|
2011-11-04 10:18:16 +00:00
|
|
|
|
|
|
|
*cookie = (unsigned long) skb;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-11-08 20:25:48 +00:00
|
|
|
static int ieee80211_cfg_get_channel(struct wiphy *wiphy,
|
|
|
|
struct wireless_dev *wdev,
|
|
|
|
struct cfg80211_chan_def *chandef)
|
2012-07-12 17:45:08 +00:00
|
|
|
{
|
2012-07-26 15:24:39 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
|
2013-02-23 18:02:14 +00:00
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
2012-07-26 15:24:39 +00:00
|
|
|
struct ieee80211_chanctx_conf *chanctx_conf;
|
2012-11-08 20:25:48 +00:00
|
|
|
int ret = -ENODATA;
|
2012-07-26 15:24:39 +00:00
|
|
|
|
|
|
|
rcu_read_lock();
|
2013-02-28 08:59:22 +00:00
|
|
|
chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
|
|
|
|
if (chanctx_conf) {
|
2014-09-30 04:08:02 +00:00
|
|
|
*chandef = sdata->vif.bss_conf.chandef;
|
2013-02-28 08:59:22 +00:00
|
|
|
ret = 0;
|
|
|
|
} else if (local->open_count > 0 &&
|
|
|
|
local->open_count == local->monitors &&
|
|
|
|
sdata->vif.type == NL80211_IFTYPE_MONITOR) {
|
|
|
|
if (local->use_chanctx)
|
|
|
|
*chandef = local->monitor_chandef;
|
|
|
|
else
|
2013-03-25 15:26:57 +00:00
|
|
|
*chandef = local->_oper_chandef;
|
2012-11-08 20:25:48 +00:00
|
|
|
ret = 0;
|
2012-07-26 15:24:39 +00:00
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
2012-07-12 17:45:08 +00:00
|
|
|
|
2012-11-08 20:25:48 +00:00
|
|
|
return ret;
|
2012-07-12 17:45:08 +00:00
|
|
|
}
|
|
|
|
|
2012-04-04 13:05:25 +00:00
|
|
|
#ifdef CONFIG_PM
|
|
|
|
static void ieee80211_set_wakeup(struct wiphy *wiphy, bool enabled)
|
|
|
|
{
|
|
|
|
drv_set_wakeup(wiphy_priv(wiphy), enabled);
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2013-12-17 07:04:43 +00:00
|
|
|
static int ieee80211_set_qos_map(struct wiphy *wiphy,
|
|
|
|
struct net_device *dev,
|
|
|
|
struct cfg80211_qos_map *qos_map)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct mac80211_qos_map *new_qos_map, *old_qos_map;
|
|
|
|
|
|
|
|
if (qos_map) {
|
|
|
|
new_qos_map = kzalloc(sizeof(*new_qos_map), GFP_KERNEL);
|
|
|
|
if (!new_qos_map)
|
|
|
|
return -ENOMEM;
|
|
|
|
memcpy(&new_qos_map->qos_map, qos_map, sizeof(*qos_map));
|
|
|
|
} else {
|
|
|
|
/* A NULL qos_map was passed to disable QoS mapping */
|
|
|
|
new_qos_map = NULL;
|
|
|
|
}
|
|
|
|
|
2013-12-30 22:12:37 +00:00
|
|
|
old_qos_map = sdata_dereference(sdata->qos_map, sdata);
|
2013-12-17 07:04:43 +00:00
|
|
|
rcu_assign_pointer(sdata->qos_map, new_qos_map);
|
|
|
|
if (old_qos_map)
|
|
|
|
kfree_rcu(old_qos_map, rcu_head);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-04-28 08:22:25 +00:00
|
|
|
static int ieee80211_set_ap_chanwidth(struct wiphy *wiphy,
|
|
|
|
struct net_device *dev,
|
|
|
|
struct cfg80211_chan_def *chandef)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
int ret;
|
|
|
|
u32 changed = 0;
|
|
|
|
|
|
|
|
ret = ieee80211_vif_change_bandwidth(sdata, chandef, &changed);
|
|
|
|
if (ret == 0)
|
|
|
|
ieee80211_bss_info_change_notify(sdata, changed);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2014-10-07 07:38:50 +00:00
|
|
|
static int ieee80211_add_tx_ts(struct wiphy *wiphy, struct net_device *dev,
|
|
|
|
u8 tsid, const u8 *peer, u8 up,
|
|
|
|
u16 admitted_time)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
|
|
|
|
int ac = ieee802_1d_to_ac[up];
|
|
|
|
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION)
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
|
|
|
if (!(sdata->wmm_acm & BIT(up)))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (ifmgd->tx_tspec[ac].admitted_time)
|
|
|
|
return -EBUSY;
|
|
|
|
|
|
|
|
if (admitted_time) {
|
|
|
|
ifmgd->tx_tspec[ac].admitted_time = 32 * admitted_time;
|
|
|
|
ifmgd->tx_tspec[ac].tsid = tsid;
|
|
|
|
ifmgd->tx_tspec[ac].up = up;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int ieee80211_del_tx_ts(struct wiphy *wiphy, struct net_device *dev,
|
|
|
|
u8 tsid, const u8 *peer)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
|
|
|
|
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
|
|
|
|
struct ieee80211_local *local = wiphy_priv(wiphy);
|
|
|
|
int ac;
|
|
|
|
|
|
|
|
for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
|
|
|
|
struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac];
|
|
|
|
|
|
|
|
/* skip unused entries */
|
|
|
|
if (!tx_tspec->admitted_time)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (tx_tspec->tsid != tsid)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* due to this, new packets will be reassigned to non-ACM ACs */
|
|
|
|
tx_tspec->up = -1;
|
|
|
|
|
|
|
|
/* Make sure that all packets have been sent to avoid
|
|
|
|
* restoring the QoS params on packets that are still on the
|
|
|
|
* queues.
|
|
|
|
*/
|
|
|
|
synchronize_net();
|
|
|
|
ieee80211_flush_queues(local, sdata);
|
|
|
|
|
|
|
|
/* restore the normal QoS parameters
|
|
|
|
* (unconditionally to avoid races)
|
|
|
|
*/
|
|
|
|
tx_tspec->action = TX_TSPEC_ACTION_STOP_DOWNGRADE;
|
|
|
|
tx_tspec->downgraded = false;
|
|
|
|
ieee80211_sta_handle_tspec_ac_params(sdata);
|
|
|
|
|
|
|
|
/* finally clear all the data */
|
|
|
|
memset(tx_tspec, 0, sizeof(*tx_tspec));
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
|
|
|
|
2014-01-20 22:55:44 +00:00
|
|
|
const struct cfg80211_ops mac80211_config_ops = {
|
2007-05-05 18:45:53 +00:00
|
|
|
.add_virtual_intf = ieee80211_add_iface,
|
|
|
|
.del_virtual_intf = ieee80211_del_iface,
|
2007-09-28 19:52:27 +00:00
|
|
|
.change_virtual_intf = ieee80211_change_iface,
|
2012-06-18 18:07:15 +00:00
|
|
|
.start_p2p_device = ieee80211_start_p2p_device,
|
|
|
|
.stop_p2p_device = ieee80211_stop_p2p_device,
|
2007-12-19 01:03:30 +00:00
|
|
|
.add_key = ieee80211_add_key,
|
|
|
|
.del_key = ieee80211_del_key,
|
2007-12-19 01:03:31 +00:00
|
|
|
.get_key = ieee80211_get_key,
|
2007-12-19 01:03:30 +00:00
|
|
|
.set_default_key = ieee80211_config_default_key,
|
2009-01-08 11:32:02 +00:00
|
|
|
.set_default_mgmt_key = ieee80211_config_default_mgmt_key,
|
2012-02-13 14:17:18 +00:00
|
|
|
.start_ap = ieee80211_start_ap,
|
|
|
|
.change_beacon = ieee80211_change_beacon,
|
|
|
|
.stop_ap = ieee80211_stop_ap,
|
2007-12-19 01:03:35 +00:00
|
|
|
.add_station = ieee80211_add_station,
|
|
|
|
.del_station = ieee80211_del_station,
|
|
|
|
.change_station = ieee80211_change_station,
|
2007-12-19 01:03:37 +00:00
|
|
|
.get_station = ieee80211_get_station,
|
2008-02-23 14:17:17 +00:00
|
|
|
.dump_station = ieee80211_dump_station,
|
2010-04-19 08:23:57 +00:00
|
|
|
.dump_survey = ieee80211_dump_survey,
|
2008-02-23 14:17:17 +00:00
|
|
|
#ifdef CONFIG_MAC80211_MESH
|
|
|
|
.add_mpath = ieee80211_add_mpath,
|
|
|
|
.del_mpath = ieee80211_del_mpath,
|
|
|
|
.change_mpath = ieee80211_change_mpath,
|
|
|
|
.get_mpath = ieee80211_get_mpath,
|
|
|
|
.dump_mpath = ieee80211_dump_mpath,
|
2014-09-12 06:58:50 +00:00
|
|
|
.get_mpp = ieee80211_get_mpp,
|
|
|
|
.dump_mpp = ieee80211_dump_mpp,
|
2010-12-17 01:37:48 +00:00
|
|
|
.update_mesh_config = ieee80211_update_mesh_config,
|
|
|
|
.get_mesh_config = ieee80211_get_mesh_config,
|
2010-12-03 08:20:44 +00:00
|
|
|
.join_mesh = ieee80211_join_mesh,
|
|
|
|
.leave_mesh = ieee80211_leave_mesh,
|
2008-02-23 14:17:17 +00:00
|
|
|
#endif
|
2008-08-07 17:07:01 +00:00
|
|
|
.change_bss = ieee80211_change_bss,
|
2008-10-30 14:59:24 +00:00
|
|
|
.set_txq_params = ieee80211_set_txq_params,
|
2012-06-06 06:18:22 +00:00
|
|
|
.set_monitor_channel = ieee80211_set_monitor_channel,
|
2009-01-19 16:20:53 +00:00
|
|
|
.suspend = ieee80211_suspend,
|
|
|
|
.resume = ieee80211_resume,
|
2009-02-10 20:25:55 +00:00
|
|
|
.scan = ieee80211_scan,
|
2011-05-11 14:09:36 +00:00
|
|
|
.sched_scan_start = ieee80211_sched_scan_start,
|
|
|
|
.sched_scan_stop = ieee80211_sched_scan_stop,
|
2009-03-19 11:39:22 +00:00
|
|
|
.auth = ieee80211_auth,
|
|
|
|
.assoc = ieee80211_assoc,
|
|
|
|
.deauth = ieee80211_deauth,
|
|
|
|
.disassoc = ieee80211_disassoc,
|
2009-04-19 19:25:43 +00:00
|
|
|
.join_ibss = ieee80211_join_ibss,
|
|
|
|
.leave_ibss = ieee80211_leave_ibss,
|
2012-11-02 12:27:49 +00:00
|
|
|
.set_mcast_rate = ieee80211_set_mcast_rate,
|
2009-04-20 16:39:05 +00:00
|
|
|
.set_wiphy_params = ieee80211_set_wiphy_params,
|
2009-06-02 11:01:39 +00:00
|
|
|
.set_tx_power = ieee80211_set_tx_power,
|
|
|
|
.get_tx_power = ieee80211_get_tx_power,
|
2009-07-01 19:26:58 +00:00
|
|
|
.set_wds_peer = ieee80211_set_wds_peer,
|
2009-06-02 11:01:41 +00:00
|
|
|
.rfkill_poll = ieee80211_rfkill_poll,
|
2009-07-01 19:26:51 +00:00
|
|
|
CFG80211_TESTMODE_CMD(ieee80211_testmode_cmd)
|
2011-05-20 16:05:54 +00:00
|
|
|
CFG80211_TESTMODE_DUMP(ieee80211_testmode_dump)
|
2009-07-01 19:26:57 +00:00
|
|
|
.set_power_mgmt = ieee80211_set_power_mgmt,
|
2009-07-01 19:26:59 +00:00
|
|
|
.set_bitrate_mask = ieee80211_set_bitrate_mask,
|
2009-12-23 12:15:42 +00:00
|
|
|
.remain_on_channel = ieee80211_remain_on_channel,
|
|
|
|
.cancel_remain_on_channel = ieee80211_cancel_remain_on_channel,
|
2010-08-12 13:38:38 +00:00
|
|
|
.mgmt_tx = ieee80211_mgmt_tx,
|
2010-11-25 09:02:30 +00:00
|
|
|
.mgmt_tx_cancel_wait = ieee80211_mgmt_tx_cancel_wait,
|
2010-03-23 07:02:34 +00:00
|
|
|
.set_cqm_rssi_config = ieee80211_set_cqm_rssi_config,
|
2010-10-13 10:06:24 +00:00
|
|
|
.mgmt_frame_register = ieee80211_mgmt_frame_register,
|
2010-11-10 03:50:56 +00:00
|
|
|
.set_antenna = ieee80211_set_antenna,
|
|
|
|
.get_antenna = ieee80211_get_antenna,
|
2011-07-05 14:35:41 +00:00
|
|
|
.set_rekey_data = ieee80211_set_rekey_data,
|
2011-09-28 11:12:52 +00:00
|
|
|
.tdls_oper = ieee80211_tdls_oper,
|
|
|
|
.tdls_mgmt = ieee80211_tdls_mgmt,
|
2011-11-04 10:18:16 +00:00
|
|
|
.probe_client = ieee80211_probe_client,
|
2011-11-18 13:20:44 +00:00
|
|
|
.set_noack_map = ieee80211_set_noack_map,
|
2012-04-04 13:05:25 +00:00
|
|
|
#ifdef CONFIG_PM
|
|
|
|
.set_wakeup = ieee80211_set_wakeup,
|
|
|
|
#endif
|
2012-07-12 17:45:08 +00:00
|
|
|
.get_channel = ieee80211_cfg_get_channel,
|
2013-02-08 17:16:20 +00:00
|
|
|
.start_radar_detection = ieee80211_start_radar_detection,
|
2013-07-11 14:09:06 +00:00
|
|
|
.channel_switch = ieee80211_channel_switch,
|
2013-12-17 07:04:43 +00:00
|
|
|
.set_qos_map = ieee80211_set_qos_map,
|
2014-04-28 08:22:25 +00:00
|
|
|
.set_ap_chanwidth = ieee80211_set_ap_chanwidth,
|
2014-10-07 07:38:50 +00:00
|
|
|
.add_tx_ts = ieee80211_add_tx_ts,
|
|
|
|
.del_tx_ts = ieee80211_del_tx_ts,
|
2007-05-05 18:45:53 +00:00
|
|
|
};
|