Follow Notes from Two Scientific Psychologists on Feedspot


Haptics (or proprioception) is the sensory modality built into our bodies; it provides constant information about the state of the body and the things it is in mechanical contact with, such as tools. Many ecological psychologists (myself included) have investigated haptic perception and its role in the control of action, but unlike the optic array, we have basically zero work identifying what the relevant information variables look like. 

I first investigated haptic perception in the context of coordinated rhythmic movements (Wilson, Bingham & Craig, 2003). Geoff had run studies showing that visual judgements of different relative phases varied in stability in the same way that the production of those relative phases does. This suggested that the movement phenomena were being caused by the way relative phase is perceived. This was vision, however, and the movement phenomena obviously involve motion of the body and the haptic system. This involvement was typically explained in terms of muscle homology and neural crosstalk effects. Our study had people track manipulanda that moved up and down at one of three mean relative phases with various levels of phase variability added, and had them make judgements of that variability (replicating the visual studies). We found haptic perception of relative phase, as measured by those judgements, behaved just like visual perception of relative phase - we inferred that the information, the relative direction of motion, can be detected by both systems and has the same effects. 
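To make the candidate variable concrete, here is a minimal sketch (my own toy illustration, not code from the study) of how continuous relative phase and the relative direction of motion can be computed from two position time series, assuming simple sinusoidal motion and made-up parameter values:

```python
import numpy as np

# Toy illustration: two 1 Hz oscillators at a mean relative phase of 90
# degrees, sampled at 100 Hz. All parameter values are assumptions.
fs, freq = 100, 1.0
t = np.arange(0, 10, 1 / fs)
x1 = np.sin(2 * np.pi * freq * t)
x2 = np.sin(2 * np.pi * freq * t - np.pi / 2)  # lags x1 by 90 degrees

# Velocities via numerical differentiation.
v1 = np.gradient(x1, 1 / fs)
v2 = np.gradient(x2, 1 / fs)

# Continuous relative phase: difference between the two phase angles, each
# recovered from position and frequency-normalised velocity.
phase1 = np.arctan2(x1, v1 / (2 * np.pi * freq))
phase2 = np.arctan2(x2, v2 / (2 * np.pi * freq))
rel_phase = np.degrees(np.angle(np.exp(1j * (phase1 - phase2))))

# Relative direction of motion: +1 when the oscillators are moving the same
# way, -1 when they move in opposition. At 0 degrees this is always +1, at
# 180 degrees always -1, and at 90 degrees it alternates half and half.
rel_direction = np.sign(v1 * v2)
```

The point of the sketch is just that both candidate variables are defined over the same kinematics but have very different signatures across mean relative phases.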

I am moving back into the haptic information world for two related reasons. 

First, I want to replace the muscle homology/neural crosstalk stories with a haptic perception story. The effects these theories account for are very large and reliable, and Geoff's perception-action model currently only applies to visual information. Specifically, muscle homology applies to relative phase defined in an egocentric (body centred) frame of reference, while Geoff's model applies to relative phase defined in an allocentric (external) frame of reference. Relative phase is clearly detected in both frames of reference; when they are pitted against one another experimentally, both matter and the egocentric effects dominate (e.g. Pickavance, Azmoodah & Wilson, 2018).

Second, I have become interested in individual variation in the variables used to perceive relative phase. Based on his data, Geoff's model predicts relative phase is perceived via the information variable relative direction of motion, the detection of which is modified by the relative speed of the oscillators. In Wilson & Bingham (2008; blog post), we showed this was true in 7 out of 10 untrained participants judging 0° and 180°. The other three became unable to judge these phases when we perturbed another candidate variable, relative position. This experiment also showed that people trained to perceive 90° had improved because they had switched to this variable, but we were not expecting people at the other relative phases to be using this variable. I'm finally getting back into experiments probing the prevalence of this individual difference in visual information use and the consequences for perception-action stability (briefly: there's a lot of variation and it matters!). As part of the above project, I want to do the same kinds of studies on haptic perception too. 

My problem here is, there is essentially no information in the literature on the nature of haptic information variables. This Peril lays out my current hypothesis about where to look; please, dear God, come help me!

Dynamic Touch
One huge literature that does investigate haptic perception is the dynamic touch literature. These are experiments that study the haptic perception of limb and object properties that is possible when we wield (move) objects. 

The basis of perception via dynamic touch is the inertia tensor. This is the mathematical description of the characteristic way an object resists being moved in all 6 degrees of freedom (translations along the x, y and z axes, and the pitch, roll and yaw rotations). Judgements of object properties such as relative mass or length covary with various moments of inertia (the eigenvalues of this matrix). This is now well established. 
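As a concrete illustration of that distinction, here is a sketch (my own, with made-up rod dimensions) that builds the inertia tensor of a wielded rod about the hand and extracts its principal moments as the eigenvalues of the matrix:

```python
import numpy as np

# A 0.2 kg, 0.6 m rod held at one end, approximated as 50 point masses along
# the z axis. The dimensions are illustrative assumptions, not data.
n = 50
masses = np.full(n, 0.2 / n)
points = np.column_stack([np.zeros(n), np.zeros(n), np.linspace(0, 0.6, n)])

# Inertia tensor about the wielding point: sum of point-mass contributions
# m * (|r|^2 * I3 - outer(r, r)).
I = np.zeros((3, 3))
for m, r in zip(masses, points):
    I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

# The principal moments of inertia are the eigenvalues of this symmetric
# matrix; the eigenvectors give the principal axes of rotation.
moments, axes = np.linalg.eigh(I)
print(moments)  # ascending: ~0 about the rod's long axis, two equal transverse moments
```

For a uniform rod of mass M and length L swung about one end, the two transverse moments should approximate the textbook value ML²/3, which is a handy sanity check on the discretisation.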

The problem is that everyone in this literature refers to the moments of inertia and properties such as mass as information variables. This is wrong; they are the dynamical characteristics of the object to be perceived, not the information. So to my knowledge, this literature has not characterised any haptic information variables. 
Limb Kinematics
Geoff has investigated dynamic touch in the context of hefting an object to perceive how throw-able it is (e.g. Bingham, Schmidt & Rosenblum, 1989; blog post; this was replicated and extended by Zhu & Bingham, 2008; blog post).

In the 1989 paper, Geoff did extensive analyses on the kinematics of the limbs during hefting, looking for invariant features of that motion that might be serving as information about throwability. He identified a candidate, but the follow up paper tested and ruled this particular one out. Over all the perceptual work he's done with Zhu, they have not been able to identify any patterns of limb motion that could be the information. They have shown that equally throwable objects that vary in size feel equally heavy, but they tested and ruled out a role for the inertia tensor in creating felt heaviness and so right now, there is no known information variable supporting the (very good) perception of the affordance. 
A Better Place to Look
The failures to find invariants in any kinematic features of limb motion have bugged me for a long time; where the hell could the information be, if not there? I've recently realised the answer; haptic information lives in the kinematic motions of the medium of the haptic array caused by the kinematics of limbs during dynamic touch activities. Geoff was working one level too high up, and the dynamic touch people are working one level too low. 

The medium of the optic array is light; the medium of the acoustic array is the atmosphere. Turvey & Fonseca (2014) propose the hypothesis that the medium of the haptic array involves muscles and the web of connective tissues that bind them to the skeleton (that paper was actually part of a special issue on tensegrity analyses of haptic perception). They then further propose that the most perceptually appropriate way to characterise the physical properties of this medium is as a tensegrity (specifically, a multifractal tensegrity, in which multiple tensegrities are nested within and coupled to each other). In the same way that visual information is implemented by the physical properties of light, haptic information must be implemented by the physical properties of the haptic tensegrity. 

In order to understand haptic perception of dynamical properties, we need to characterise how wielding limbs and objects with those properties affects not just the kinematics of the limb, but then how those altered kinematics affect the kinematics of the haptic array. 
Where I'm At Now
I am in the early days of this. I have read Turvey & Fonseca a couple of times and, like with all of Turvey's papers, I think he's right but I don't yet understand the full awesomeness I'm sure is in there. So the tensegrity papers are at the top of my reading list for any haptic perception project. 

The basic idea of a tensegrity structure sounds like an ideal formalism. To quote Wikipedia, 
Tensegrity, tensional integrity or floating compression is a structural principle based on the use of isolated components in compression inside a net of continuous tension, in such a way that the compressed members (usually bars or struts) do not touch each other and the prestressed tensioned members (usually cables or tendons) delineate the system spatially
Here is a video of Buckminster Fuller explaining this idea, here is a video of Tom Myers explaining this idea in the context of the connective tissue of the body, and here is a video of Turvey explaining the idea in the context of haptic perception. These are the three stages of this hypothesis being developed; from a physical principle for constructing things in a certain way, to the hypothesis that the fascia of the body are constructed this way, to the hypothesis that the medium of haptic perception is therefore constructed this way. You can make tensegrity structures yourself pretty easily (e.g. see this video for instructions). 

Based on the egocentric constraint on coordination dynamics, my proposal of haptic perception of relative phase over muscle homology requires that haptic perception must happen in an egocentric frame of reference, and so the medium of haptic perception must be egocentric. The tensegrity hypothesis fits this bill, as far as I can tell. I still have to run many experiments on the haptic perception of relative phase (I need to replicate the whole suite of visual judgement studies Geoff did, plus a series that pits haptic vs visual perception against one another), but assuming haptic perception of coordination is egocentric, the tensegrity analysis will be the place to go to characterise the actual information variables that are being used. I'd like to write a grant for this project; it requires some equipment and a lot of time. This is me getting my head into the game to get ready for that.

Beyond this, I do not have any further specific details worked out. Haptic perception is going to be a damned tough problem, not least of which is the intimidating mathematics of nested multifractal tensegrity systems (as opposed to the still hard but more obviously tractable geometry of the optic array). I'm going to need some heavy hitting support on that part, so feel free to join in - please! :)
In order to properly characterise the nature of haptic information variables, we need to characterise what happens to the multifractal tensegrity haptic medium when we move and mechanically interact with dynamical properties of objects such as the inertia tensor. I want to get into this in the context of coordination dynamics because that's where my interest and expertise lies, but the other obvious domain is dynamic touch. I'd love to see work developing this hypothesis. This work will not only be a rich investigation of haptic perception, but it will help improve all the theory work that currently depends heavily (but inappropriately) on the dynamic touch literature. I have in mind here the direct learning model of Jacobs and Michaels, 2007, which I like but which currently spends all its time talking about attuning to moments of inertia as information variables. The ecological approach has been heavily visual for lots of good pragmatic reasons for most of its life, but this is a rich and as yet untapped vein of perception-action research waiting to happen. 
The field of motor control has recently been moving steadily towards the idea that there is no such thing as an ideal movement. The system is not trying to reliably produce a single, stable, perfect form, and movement variability has gone from being treated as noise to being studied and analysed as a key feature of a flexible, adaptive control process. This formalises Bernstein's notion of 'repetition without repetition' in movement, and recognises that the redundancy in our behavioural capabilities relative to any given task means that multiple solutions to that task are legitimate options. 

There are many new analysis techniques within this 'motor abundance' framework, and I've reviewed most of them already; uncontrolled manifold analysis, stochastic optimal control theory and goal equivalent manifolds are the three big ones, as well as nonlinear covariation analysis. The essence of all these methods is that they take variability in the execution or outcome of a movement, and decompose that variability into variability that does not interfere with achieving the outcome and variability that does.

This post will explain the variability decomposition process in Sternad & Cohen's (2009) Tolerance, Noise and Covariation (TNC) analysis, which my students and I are busily applying to some new throwing data from the lab. I have talked a little about this analysis here but I focused on the part of the analysis that involves a task dynamical analysis identical to the one I did for my throwing paper in 2016. In this post, I want to explain the TNC analysis itself. I will be relying on Sternad et al, 2010, which I've found to be a crystal clear explanation of the entire approach; you can also download Matlab code implementing the analysis from her website.  

Sternad et al (2010) explain that the key motivation for developing this analysis is to remove worrying researcher degrees of freedom. UCM and the related analyses decompose variance in the kinematics of the performance of an action. There are many ways to express these kinematics; in terms of joint positions, velocities, or angles (relative or absolute). UCM gives you different answers depending on the coordinate system of the kinematic data, which makes coordinate frame a researcher degree of freedom that is rarely justified a priori. 

Sternad solves this by a) moving the analysis to results in execution space and b) defining that execution space a priori by a task dynamical analysis. While I think that there may be room to use UCM's coordinate sensitivity as a way to explore data for evidence of which frame of reference is behaviourally relevant, I am 100% on board with Sternad's analysis of the problem, as well as her solution. 

The only thing I will do here that goes beyond what she does is say that in order for the task dynamics to constrain action variability, they must be perceived. This makes behaviourally relevant task dynamics affordances (Wilson et al, 2016) and also means that the story isn't complete until we have the information analysis as well. But this task dynamical analysis is the right place to start. 
Task Dynamics of Throwing
I have reviewed this specific analysis in detail here. The key is that throwing entails creating a projectile motion that either maximises distance or intercepts a distant target. For a given projectile, the dynamics of projectile motion requires three initial conditions to be specified; release angle (relative to the ground plane), release velocity (speed and direction), and release height (relative to the target height). The task dynamical analysis therefore specifies a priori that this is the execution space in which the results must be analysed. (Both Dagmar and I restricted our analysis to the 2D release angle/release speed space, because that's where most of the control action is. I did see some tentative evidence that release angle was being adaptively controlled in the 2016 paper, however, so I do want to extend this back out to the 3D execution space. I will focus here on the 2D analysis because it's way easier to graph :)

Within that space of possible release parameter combinations, there is a subset that achieves the goal. For example, for a target that has some non-zero size, you can miss the centre but still hit the target. This subset is referred to as the result function (bounded by lines of constant result) which maps parameter combinations onto results, and the subset of the result function that produces 0 error (e.g., exactly hits the centre of the target) is the solution manifold. (Sternad notes that this function can be readily identified, but so far I've only been able to map it point by point in the simulations, rather than identify the actual function.)
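Here is a sketch of that point-by-point mapping for a simple untethered throw, assuming point-mass projectile motion; the target distance, target size and grid ranges are illustrative values I picked for the demo, not the actual task parameters from either paper:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def vertical_error(angle_deg, speed, distance=5.0, release_offset=0.0):
    """Height error when the projectile crosses the target plane.
    `distance` (m) and `release_offset` (release height minus target-centre
    height, m) are illustrative values, not those from either study."""
    a = np.radians(angle_deg)
    t_flight = distance / (speed * np.cos(a))  # time to cross the target plane
    return release_offset + speed * np.sin(a) * t_flight - 0.5 * G * t_flight ** 2

# Map the result function point by point over a grid in execution space.
angles = np.linspace(-10, 60, 141)   # release angle (degrees)
speeds = np.linspace(4, 20, 161)     # release speed (m/s)
A, S = np.meshgrid(angles, speeds)
errors = vertical_error(A, S)

# Combinations landing within the target's half-height hit the target (the
# bounded result function); near-zero error approximates the solution manifold.
hits = np.abs(errors) < 0.61         # half-height of a ~1.22 m (4 ft) target
manifold = np.abs(errors) < 0.05     # within ~5 cm of dead centre
```

Plotting `hits` (or contouring `errors`) over the angle/speed grid gives the kind of execution-space picture shown in Figure 1; this is the brute-force version of identifying the result function.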

Figure 1 plots an example of a result function from 2016 alongside one of Dagmar's; her throwing task is actually a tetherball task, so her dynamical analysis, while still projectile motion, produces a different result function to my untethered throwing task. The underlying analysis is identical, however. 
Figure 1. Result functions for two throwing tasks, plotted in execution space. Left, Wilson; Right, Sternad
All of the following analysis evaluates the data you record from your participants relative to this solution manifold and result function. As you can see, not all regions of the space are equally useful; sometimes the result function is very narrow. But this result function gives you a reference frame to evaluate various aspects of the variability in the observed distribution of data; 
  • Does it live in the most stable region of the space (Tolerance)?
  • How much noise does it show (Noise)?
  • Does it show evidence of a synergy in action between the execution space variables (Covariation)? 
See Figure 2 for some example data from Wilson et al (2016), plotted on the appropriate result function.
Figure 2. Observed release angle/release speed combinations with the result function for that condition
Tolerance asks 'given the spread of data we observed, is that distributed data living in the most tolerant-of-error/stable region of the result function?' The analysis takes the data, preserves the spread in both dimensions, and moves that set around to all different parts of the space. It evaluates the average result of each virtual data set with respect to the result function, and the location that yields the lowest distance from the solution manifold to the average result is the best location that distribution of data could hope to be centred on. The tolerance cost is the difference between this ideal minimum and the result at the actual location of the real data set; the smaller the tolerance cost, the more the real data is living in the most tolerant-of-error region of the space. 
Not all variability is functional; there is still good old fashioned noise in the system, specifically variation that pulls you away from the solution manifold. But how much? The analysis takes the observed data distribution, and progressively shrinks it towards the average. At each step, this average result is evaluated with respect to the result function, and the greatest improvement in performance is the Noise cost. This measures how far away from the noiseless performance the observed data are; there can still be variation in the virtual data set at this point, but by definition none of it will be moving the data out of the result function, just moving it around inside the bounds.
In principle, the release parameters can be varied independently of each other; I can throw at (almost) any combination of release angle, speed and height within the execution space. So I could be ending up in this stable region by controlling them separately. However, I might also be taking advantage of the relationship between the variables determined by the task dynamics; I can offset speed for a higher angle, for example, because the underlying physics allows both those solutions to work. The Covariation analysis takes the original [release angle, release speed] data and shuffles the speed values so each produced speed is paired with every produced angle in a collection of virtual data sets. Each set is evaluated with respect to the result function, and the difference between the best performing virtual set and the real data set is the Covariation cost. If this is small, it tells you that the release parameter combinations that were produced were not just any old set that worked, but close to the best possible set, suggesting they were not being produced independently of each other.
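Putting the three steps together, here is a sketch of the decomposition logic on toy data. The result function, data, and search procedures are all my own simplifications for illustration; Sternad's actual Matlab implementation differs in its details:

```python
import numpy as np

rng = np.random.default_rng(1)

def err(x, y):
    """Toy result function: performance is perfect on the solution manifold
    x * y = 10, and error grows with distance from it. Illustration only."""
    return np.abs(x * y - 10)

# Simulated execution-space data living near the manifold, with covariation
# between the two 'release parameters'.
x = rng.normal(2.0, 0.2, 200)
y = 10 / x + rng.normal(0, 0.1, 200)
actual = err(x, y).mean()

# Tolerance: translate the whole distribution around the space; T-cost is how
# far the real data's mean error sits above the best achievable location.
offsets = np.linspace(-1, 1, 41)
best_translated = min(err(x + dx, y + dy).mean()
                      for dx in offsets for dy in offsets)
T_cost = actual - best_translated

# Noise: shrink the distribution towards its mean; N-cost is the greatest
# improvement available from removing spread.
best_shrunk = min(err(x.mean() + k * (x - x.mean()),
                      y.mean() + k * (y - y.mean())).mean()
                  for k in np.linspace(0, 1, 21))
N_cost = actual - best_shrunk

# Covariation: re-pair the observed x and y values; C-cost compares the real
# pairing to the best re-pairing found (random shuffles, plus the rank-reversed
# pairing, which is near-optimal for this particular manifold).
candidates = [err(x, rng.permutation(y)).mean() for _ in range(200)]
candidates.append(err(np.sort(x), np.sort(y)[::-1]).mean())
C_cost = actual - min(candidates)

print(T_cost, N_cost, C_cost)
```

With data generated to covary along the manifold, a typical random re-pairing performs much worse than the real pairing, which is exactly the signature of a functional synergy between the two parameters.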

Note: this measure is sensitive to the coordinate frame, under certain conditions. Sternad et al (2010) propose a solution that entails rotating the coordinate frame if required, and they are working on a more robust, coordinate frame independent solution. 
Overall, each measure evaluates the observed result variability with respect to 'best case performance' defined by the task-dynamically defined result function. This quantifies the degree to which your observed data shows evidence of being organised with respect to that result function (i.e. with respect to the task dynamically defined affordance). Each of these quantifies an intuition I had about my 2016 data set; I came up with a couple of ways to assess the idea that participants were living in a nice, stable region of the space but nothing as useful as this. It is an excellent, well motivated and well executed analysis and I'm looking forward to getting into it with my current and future studies. 

One of the more enduring arguments in ecological psychology is about the best way to formally describe affordances. The two basic approaches are that they are dispositions (Turvey, Scarantino, me) or that they are relations (Reitveld, Kiverstein, Chemero). The argument has mostly settled down into just agreeing to disagree, but I am still convinced that the relational analysis is critically flawed and I want to try and either get them to solve the problem or end the debate once and for all. I've reviewed this in a bunch of places (e.g. here, here, and here) but this post is just setting out my challenge plainly: you cannot perceive a relational affordance, and there is as yet no good story about how to learn new affordances.

My problem stems from this Gibson (1979) quote (we all have our favourite, but this one seems to cut to the heart of it)
The central question for the theory of affordances is not whether they exist and are real but whether information is available in ambient light for perceiving them.
Right now, the affordances-are-relations camp have no story for how these can structure light (or other energy media) and therefore create information about themselves. They are therefore, as currently formulated, not even in principle perceptible. This means affordances-as-relations is of zero use to the ecological approach. 

Bruinberg et al (2018) tried to address this problem, but as I blogged here their solution is not ecological information and it reveals that these authors do not as yet understand what information actually is. My challenge is therefore this: tell me a story in which affordances-as-relations are able to create ecological information in energy arrays, and might therefore be learned, and the debate will be back on. Until then, affordances-as-dispositions is the only account that formalises the right properties and the debate is over. 

My basic argument goes as follows. As you go, you should notice that the word 'relation' shows up in a lot of places. This is on purpose, and I think it is part of the confusion.

Here, crudely, is the perception-action loop.
Figure 1. A good old fashioned perception-action loop picture
The organism is a real part of the loop, as is the environment. A given perception-action event entails the organism placing themselves into some relation to the environment (acting) and detecting the ongoing consequences (perceiving). The question at hand is, where in this loop do affordances live?

Affordances as Dispositions
This analysis (Turvey, Shaw, Reed & Mace, 1981; Turvey, 1992) places affordances in the environment. They are higher-order properties of objects and events, and they are constituted by various lower-order properties (things like surfaces) being arranged in a specific spatial and temporal relation to each other.

From this point of view, to say “My coffee mug affords grasping” is to identify that the relations between the surfaces that constitute the mug make it so that when certain satisfying conditions are met (say, the presence of an appropriately sized hand) the disposition to be grasped can be manifested. The affordance disposition is constituted by the cup’s surfaces and their physical properties (the anchoring properties of the disposition; Turvey et al, 1981) but the affordance itself is a distinct, higher order property of the cup. The ecological hypothesis is that perceiving-acting organisms organise their behaviour with respect to this latter property directly, and not via internal, inferential combination of the various anchoring properties.

Dispositions come in complementary pairs. The description above includes a specification of the higher-order properties of something that isn't the coffee mug that can do the grasping. Turvey calls the organism's complement of the affordance an effectivity.

So affordances are properties of the environment, and effectivities are properties of the organism. As dispositions, they co-define each other in interesting ways (Scarantino's analysis is currently the most up-to-date in terms of the ontology of dispositions, building on Mumford's definitive work on this topic) but they exist independently of each other; a coffee mug affords grasping even if there is no-one around to effect that grasp.

Affordances as Relations
This analysis (Chemero, 2003, 2009) places affordances at the level of the organism-environment relation. From this point of view, affordances are what you get when you place the relata (the organism, and the environment) into a relation. In his 2009 book, Chemero went even further and proposed dynamical Affordances 2.0 in which the organism and environment causally interact in real time to create affordances that ebb and flow. 

This account is mostly positioned as a way to solve problems with the dispositional account, primarily the issue of malfunction. When the complementary pairs of a disposition are in each other's presence, they must manifest the disposition. Chemero and others worry that this means there cannot therefore be malfunctions, or errors, which clearly occur. However, given that the effectivity is always a complex dynamic that must be learned to be softly assembled, and given that there really is an element of compulsory-ness to skilled action (think about being unable to not pull that door handle, even if the sign says push), I think this problem is already solvable.

Affordances-as-relations also let you find more complex affordances; social affordances, cultural affordances, linguistic affordances. This is how the Skilled Intentionality Framework (e.g. Bruineberg & Reitveld, 2014) works to tackle these issues ecologically. 

The Problem
Refer to Figure 1. Both the organism and the (physical) environment are things that light and other energy media can interact with. The organism-environment relation is not. The relation is not constituted by surfaces, it's constituted by the spatiotemporal layout of the organism-environment system. Light (or other media) cannot bounce off a layout. This means that affordances-as-relations cannot create information about themselves, and they therefore cannot be perceived. They cannot, therefore, play any role in an ecological analysis of behaviour. Affordances and effectivities can, however, and so they are at least in principle an option. 

A second, slightly more subtle issue is that the organism-environment relation does not exist prior to the relata being in the relation. In affordances 2.0, organisms co-create affordances by their causal interactions with the environment. This means that I can only create affordances using abilities I already have; so how do I learn new affordances? It can't be by being in the presence of those new affordances, because I cannot create them yet; I need to learn something else. What that is has never been laid out; but it means learning is not affordance-based and there is suddenly a need for something else in the ontology. 
One of the most robust sex differences occurs in throwing. Men can throw (on average) much faster and therefore much farther than women, and this gap even exists at comparable levels of sports such as baseball and softball. The most common explanations are that a) men are, on average, larger and stronger than women, and b) most cultures gender throwing activities as male, leading to earlier acquisition and much more practice. YouTube has plenty of videos of men throwing with their off hand that point to the critical role of learning. 

However, Lombardo & Deaner (2018; L&D) have just published a hypothesis that while these factors are at play, they rest on top of an underlying biological advantage and that 'throwing is a male adaptation'. Specifically, they claim that there has been greater selective evolutionary pressure on men (as compared to women) to develop the strength, skills and anatomy needed to throw for large distances and great accuracy. Men have evolved to be better throwers than women.

This post will briefly review the hypothesis and the evidence, and then come to two conclusions. First, many of the differences they discuss seem quite closely aligned to the cultural sex differences around throwing that we know exist and so may not be biologically innate. Second, and more importantly, there may not even be a throwing-specific sex difference to explain. Right now, the only clear finding is that men throw faster; but they are also (on average) stronger and larger for non-throwing reasons. There is, as yet, no clear evidence that men are better throwers. I will then review some recent data of my own that suggests when the full perception-action task dynamic is analysed in closer detail, trained women show every sign of being just as skilled at throwing as trained men.
The Hypothesis
L&D first consider the origin of throwing. Modern humans have been anatomically able to throw well since Homo erectus, but at some point we were non-throwing primates. What made us begin to develop along a trajectory that enabled throwing? (NB the following is their argument, not mine; I'll note when I'm interjecting a comment)

L&D note that modern non-human primates never throw in hunting contexts (ADW note: because they can't, because they live in crowded jungles where throwing isn't that useful, or because they are often not primarily carnivores - take your pick). When they do throw, they do so in the context of antagonistic social encounters, to scare off a same-species competitor. Most of this kind of behaviour is done by the males. So throwing has, since the dawn of the skill, been a primarily male behaviour. 

Once we became the 'primates who throw', speed and accuracy went up and this skill showed up in hunting and warfare (all mostly male activities). Males who were better at hunting and warfare got more and better mates and more and better children. This feedback loop kept the highest pressure on males, and thus the sex differences in modern human performance are the result of throwing specific, male-selective evolutionary pressures. 
The Evidence
Behavioural
Boys throw very early and much earlier than girls, and 'early' suggests 'biologically innate' over 'socialised'. While girls/women can learn to throw, their speed and form never catch up to men's. Finally, men are better throwers overall, where 'better' means they can throw faster and farther, and more accurately (although the data here are very limited). These differences persist cross-culturally, even when female throwing is more normal, or when things like the American obsession with baseball isn't a massive driver, or when the women are highly trained like the men.

They also discuss male advantages in interception, but of course interception and throwing are two entirely different perception-action task dynamics and they are therefore not related in the way the authors assume. 
Anatomical
L&D review pectoral girdle anatomy and show that some of the features that support throwing show sex differences in favour of men. Some do not, and some go the other way; some of the features are affected by use and some are less so. Only some are clearly throwing-specific, and others may have become male-advantaged or female-disadvantaged for non-throwing related reasons. 
As I summarised in the Introduction, most of the evidence is suggestive but not much more than that. The least contestable difference is in throwing for aggression, hunting and war (although I'd bet the story is muddier than they think). But this cultural difference will only make throwing a male adaptation if it shows up as male-specific anatomical or behavioural differences. The behavioural differences all have a confound, namely the culturally-driven high male involvement in throwing. The authors admit as much (page 110) but don't take it seriously enough. The anatomical differences are often there, but use (biased by the cultural gendering of throwing) and non-throwing related selection pressures remain options for many of them.

My primary concern, however, is with the overall framing: that men are better throwers than women and that this needs explaining. 'Better' mostly means 'faster', and they have made no effort to partial out the contribution to this of the non-throwing related strength and size differences between men and women they review on page 93. 'Better' briefly means 'more accurate', although the evidence rests on only two unconvincing papers (page 97). Finally, 'better' occasionally means 'throwing form', although this is only ever assessed using an observational checklist looking for a particular, mature form; there are no detailed kinematic analyses and no recognition that action control is not about the production of one perfect movement. Overall, therefore, they present very little evidence that men are 'better' throwers than women.
A Little Data
I actually have some data that I plan to dig back into to test the hypothesis that men are better throwers than women. It will take a little while to get the formal analysis done (I need to get Sternad's T-N-C analysis code working on my solution manifolds) but I can demonstrate the idea with a couple of graphs.

Wilson et al (2016) tested three groups of skilled throwers trying to hit a 4ft x 4ft target at 3 distances (5m, 10m and 15m) and 3 heights (centre at 1m, 1.5m and 2m). The three groups were male American college level baseballers, female American college level softballers, and male UK club level cricketers. 

The baseballers threw about 10m/s faster than the softballers, but the softballers threw about 10m/s faster than the cricketers, at around 30m/s. So the data showed a clear difference as a function of sport, with the women actually in the middle and throwing fast.

However, there are two elements in the data that go beyond speed as a metric for quality; refer to Figure 1. 
Figure 1. Release parameters from Wilson et al, 2016
First, note that within each group, everyone scales their release speed in basically the same way as distance increases (Figure 1a). Second, note that release angle varies appropriately within every group as a function of distance (to offset the speed differences) and of height (to maintain a flat, fast trajectory). So the softball women, while not the fastest group, showed a lot of evidence for the same level of control over the relevant action parameters.
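That speed-angle trade-off falls straight out of basic projectile motion: at a fixed distance, a faster throw needs a flatter release angle. As a toy illustration (ignoring air drag, and assuming a 1.8m release height and a 1.5m target centre, which are illustrative numbers, not values from the paper), the flatter of the two launch angles that hits the target can be solved in closed form:

```python
import math

def release_angle(v, d, h_release=1.8, h_target=1.5, g=9.81):
    """Flatter of the two launch angles (degrees) that hits a target at
    horizontal distance d (m) and height h_target (m), released at speed
    v (m/s) from height h_release. Drag is ignored. Returns None if the
    target is out of range at this speed."""
    a = g * d**2 / (2 * v**2)
    # h_target - h_release = d*tan(th) - a*(1 + tan(th)^2), quadratic in tan(th)
    disc = d**2 - 4 * a * (h_target - h_release + a)
    if disc < 0:
        return None  # unreachable
    t = (d - math.sqrt(disc)) / (2 * a)  # smaller root = flatter trajectory
    return math.degrees(math.atan(t))

# Faster throws need flatter angles at the same distance:
for v in (20.0, 30.0):
    print(v, round(release_angle(v, 10.0), 1))
```

So the within-group angle scaling in Figure 1b is exactly what control over the task dynamic demands, whatever the group's absolute speed.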

The other part of that paper used affordance maps (solution manifolds) to assess performance. This involved simulating throws across a wide range of release parameters, identifying which combinations produced hits, and placing the human data on that map. I've produced an example of softball performance in Figure 2.
Figure 2. Female release speed and angle combinations mapped onto the solution manifold/affordance map
As you can see, the females produced very consistent data that lives within a stable region of the solution manifold. This tight clustering and good location was typical across all conditions; in fact, the softball data was much tidier than the men's, who tended to be spread out along the release speed axis. I interpreted this as the men trying to throw as fast as possible, not always getting it right, and making the necessary adjustments. Good control, just a messy strategy. So along this dimension, I'd argue the women are throwing better!
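The manifold construction itself is conceptually simple: sweep a grid of release speeds and angles, run the projectile equations, and keep the combinations whose trajectory passes through the target. The sketch below is a drag-free, vertical-plane toy version (it treats the 4ft target as a purely vertical window of ±0.61m and ignores left-right error, and the release height is an assumed value), so it shows the logic rather than reproducing the paper's actual simulations:

```python
import math

def height_at_target(v, angle_deg, d, h_release=1.8, g=9.81):
    """Projectile height (m) when it reaches horizontal distance d,
    ignoring air resistance."""
    th = math.radians(angle_deg)
    t = d / (v * math.cos(th))  # time of flight to the target plane
    return h_release + v * math.sin(th) * t - 0.5 * g * t**2

def solution_manifold(d, centre, half=0.61,
                      speeds=range(10, 41), angles=range(-10, 46)):
    """Grid-sweep release speed (m/s) and angle (deg); return the set of
    (speed, angle) combinations whose trajectory passes through the
    vertical target window centre +/- half at distance d."""
    hits = set()
    for v in speeds:
        for a in angles:
            y = height_at_target(v, a, d)
            if abs(y - centre) <= half:
                hits.add((v, a))
    return hits

# e.g. the 5m condition with the target centred at 1m:
hits = solution_manifold(d=5.0, centre=1.0)
```

Plotting the human (speed, angle) pairs over the `hits` region is what produces maps like Figure 2.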

I can now quantify these patterns using Sternad's Tolerance-Noise-Covariance analysis, and I will do this comprehensively for this data as soon as I have time to get her student's code working. These three components assess three different aspects of execution relative to the solution manifold, and I am guessing that they will show the women producing throws at least as skilled as the men's. I think I will also get into this more explicitly in the future (Registered Report, anyone?).
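The surrogate logic behind that decomposition can be sketched, with the caveat that this is a crude toy split and not Sternad's actual code: recentring the data at a good point on the manifold prices the Tolerance component (cost of aiming at the wrong region), independently shuffling the speed and angle columns destroys their covariation and prices the Covariance component (benefit of speed-angle coordination), and the remainder is attributed to Noise. The cost function and the 'best' point used here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def miss(params, d=5.0, centre=1.0, h_release=1.8, g=9.81):
    """Toy cost: vertical miss distance (m) at the target plane for a
    drag-free projectile released with [speed (m/s), angle (deg)]."""
    v, a = params
    th = np.radians(a)
    t = d / (v * np.cos(th))
    y = h_release + v * np.sin(th) * t - 0.5 * g * t**2
    return abs(y - centre)

def mean_cost(X):
    return float(np.mean([miss(x) for x in X]))

def tnc(X, best):
    """Crude sketch of T-N-C surrogate logic. X: n x 2 array of (speed,
    angle) releases; best: a low-cost point on the manifold. Positive T
    means cost was due to location; positive C means the thrower was
    benefiting from speed-angle covariation."""
    actual = mean_cost(X)
    recentred = X - X.mean(axis=0) + np.asarray(best)
    T = actual - mean_cost(recentred)           # gain from better location
    shuffled = np.column_stack(
        [rng.permutation(X[:, j]) for j in range(X.shape[1])])
    C = mean_cost(shuffled) - actual            # gain from covariation
    N = actual - T - C                          # residual, labelled noise here
    return T, C, N
```

A group with tight clustering in a good region (like the softballers in Figure 2) should show a small T component, which is the kind of pattern the formal analysis would test.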

My current conclusion is that the evidence for L&D's strong evolutionary hypothesis is, in fact, weak, and that it's time to get into this with up-to-date perception-action techniques (and probably some more detailed evolutionary biology). 
