EdSurge's Tony Wan is first out of the blocks with an Instructurecon coverage article this year. (Because of my recent change in professional focus, I will not be on the LMS conference circuit this year.) Tony broke some news in his interview with CEO Dan Goldsmith with this tidbit about the forthcoming DIG analytics product:

One example with DIG is around student success and student risk. We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom. Throughout that class, or even at the beginning, we can make recommendations to the teacher or student on things they can do to increase their chances of success.

Instructure CEO Dan Goldsmith

There isn't a whole lot of detail to go on here, so I don't want to speculate too much. But the phrase "even before they set foot in the classroom" is a clue as to what this might be. I suspect that the particular functionality he is talking about is what's known as a "student retention early warning system."

Or maybe not. Time will tell.

Either way, it provides me with the thin pretext I was looking for to write a post on student retention early warning systems. It seems like a good time to review the history, anatomy, and challenges of the product category, since I haven't written about these systems in quite a while and they've become something of a fixture. The product category is also a good case study in why a tool that could be tremendously useful in supporting the students who need help the most often fails to live up to either its educational or commercial potential.

The archetype: Purdue Course Signals

The first retention early warning system that I know of was Purdue Course Signals. It was an experiment undertaken by Purdue University to—you guessed it—increase student retention, particularly in the first year of college, when students tend to drop out most often. The leader of the project, John Campbell, and his fellow researchers Kim Arnold and Matthew Pistilli looked at data from their Student Information System (SIS) as well as the LMS to see if they could predict which students were at risk and influence their behavior. Their first goal was to prevent those students from dropping courses, but ultimately they wanted to prevent them from dropping out.

They looked at quite a few variables from both systems, but the main results they found are fairly intuitive. On the LMS side, the four biggest predictors they found for students staying in the class (or, conversely, for falling through the cracks) were

  1. Student logins (i.e., whether they are showing up for class)
  2. Student assignments (i.e., whether they are turning in their work)
  3. Student grades (i.e., whether their work is passing)
  4. Student discussion participation (i.e., whether they are participating in class)

All four of these variables were compared to the class average, because not all instructors were using the LMS in the same way. If, for example, the instructor wasn't conducting class discussions online, then the fact that a student wasn't posting on the discussion board wouldn't be a meaningful indicator.

These are basically four of the same very generic criteria that any instructor would look at to determine whether a student is starting to get in trouble. The system is just more objective and vigilant in applying these criteria than instructors can be at times, particularly in large classes (which is likely to be the norm for many first-year students). The sensitivity with which Course Signals would respond to those factors would be modified by what the system "knew" about the students from their longitudinal data—their prior course grades, their SAT or ACT scores, their biographical and demographic data, and so on. For example, the system would be less "concerned" about an honors student living on campus who doesn't log in for a week than about a student on academic probation who lives off-campus.
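To make the mechanics concrete, here is a minimal, hypothetical Python sketch of this kind of scoring: behavior signals compared to the course average, with a background-derived sensitivity multiplier. The signal names, weights, and the `sensitivity` input are illustrative assumptions, not Purdue's actual model.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Activity:
    logins: float       # logins during the look-back window
    submissions: float  # assignments turned in
    grade: float        # current grade, 0-100
    posts: float        # discussion posts


def risk_score(student: Activity, course: list[Activity], sensitivity: float) -> float:
    """Return a rough 0-1 risk estimate for one student.

    Each signal is compared to the course average; signals the course as a
    whole isn't using (e.g., nobody posts to the discussion board) are
    ignored. `sensitivity` is a 0-1 multiplier derived from SIS/longitudinal
    data (prior grades, probation status, etc.): the higher it is, the more
    strongly the system reacts to the same behavior.
    """
    shortfalls = []
    for signal in ("logins", "submissions", "grade", "posts"):
        course_avg = mean(getattr(s, signal) for s in course)
        if course_avg == 0:  # signal not in use for this course
            continue
        shortfalls.append(max(0.0, 1.0 - getattr(student, signal) / course_avg))
    base = mean(shortfalls) if shortfalls else 0.0
    return min(1.0, base * (0.5 + sensitivity))
```

In this toy version, the same week of inactivity produces a higher score for a student whose background data raises the sensitivity multiplier, which is the behavior described above.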

In the latter case, the data used by the system might not normally be accessible, or even legal, for the instructor to look at. For example, a disability could be a student retention risk factor for which there are laws governing the conditions under which faculty can be informed. Of course, instructors don't have to be informed in order for the early warning system to be influenced by the risk factor. One way to think about how this sensitive information could be handled is as something like a credit score. There is some composite score that informs the instructor that the student is at increased risk based on a variety of factors, some of which are private to the student. The people who are authorized to see the data can verify that the model works and that there is legitimate reason to be concerned about the student, but the people who are not authorized are only told that the student is considered at-risk.

Already, we are in a bit of an ethical rabbit hole here. Note that this is not caused by the technology. At least in my state, the great Commonwealth of Massachusetts, instructors are not permitted to ask students about their disabilities, even though that knowledge could be very helpful in teaching those students. (I should know whether that's a Federal law, but I don't.) Colleges and universities face complicated challenges today, in the analog world, with the tensions between their obligation to protect student privacy and their affirmative obligation to help the students based on what they know about what the students need. And this is exactly the way John Campbell characterized the problem when he talked about it. This is not a "Facebook" problem. It's a genuine educational ethical dilemma.

Some of you may remember some controversy around the Purdue research. The details matter here. Purdue's original study, which showed increased course completion and improved course grades, particularly for "C" and "D" students, was never questioned. It still stands. A subsequent study, which purported to show that student gains persisted in subsequent classes, was later called into question. You can read the details of that drama here. (e-Literate played a minor role in that drama by helping to amplify the voices of the people who caught the problem in the research.)

But if you remember the controversy, it's important to remember three things about it. First, the original research was never called into question. Second, the subsequent finding about persistence was not disproven; rather, the follow-up evidence was inconclusive—a null result. We have proof neither for nor against the hypothesis that the Purdue system can produce longer-term effects. And finally, the biggest problem that the controversy exposed was with university IR departments releasing non-peer-reviewed research papers that staff researchers have no standing to defend on their own when the work gets criticized. That's worth exploring further some other time, but for now, the point is that the process problem was the real story. The controversy didn't invalidate the fundamental idea behind the software.

Since then

Since then, we've seen lots of tinkering with the model on both the LMS and SIS sides of the equation. Predictive models have gotten better. Both Blackboard and D2L have some sort of retention early warning products, as do Hobsons, Civitas, EAB, and HelioCampus, among others. There were some early problems related to a generational shift in data analytics technologies; most LMSs and SISs were originally architected well before the era when systems were expected to provide the kind of high-volume transactional data flows needed to perform near-real-time early warning analytics. Those problems have increasingly been either ironed out or, at least, worked around. So in one sense, this is a relatively mature product category. We have a pretty good sense of what a solution looks like, and there are a number of providers in the market right now with variations on the theme.

In a second sense, the product category hasn't fundamentally changed since Purdue created Course Signals over a decade ago. We've seen incremental improvements to the model, but no fundamental changes to it. Maybe that's because the Purdue folks pretty much nailed the basic model for a single institution on the first try. What's left are three challenges that share the common characteristic of becoming harder when converted from an experiment by a single university to a product supported by a third-party company. At the same time, they fall on different places on the spectrum between being primarily human challenges and primarily technology challenges. The first, the aforementioned privacy dilemma, is mostly a human challenge. It's a university policy issue that can be supported by software affordances. The second, model tuning, is on the opposite end of the spectrum. It's all about the software. And the third, the last-mile problem of getting from good analytics to actual impact, is somewhere in the messy middle.

Three significant challenges

I've already spent some time on the student data privacy challenge specific to these systems, so I won't spend much more time on it here. The macro issue is that these systems sometimes rely on privacy-sensitive data to determine—with demonstrated accuracy—which students are most likely to need extra attention to make sure they don't fall through the cracks. This is an academic (and legal) problem that can only be resolved by academic (and legal) stakeholders. The role of the technologists is to make the effectiveness and the privacy consequences of various software settings both clear and clearly in the control of the appropriate stakeholders. In other words, the software should support and enable appropriate policy decisions rather than obscuring or impeding them. At Purdue, where Course Signals was not a purchased product but a research initiative with active, high-level buy-in from academic leadership, these issues could be worked through. But for a company selling the product into as many universities as possible, each with differing levels of sophistication and policy-making capability in this area, the best the vendor can do is build a transparent product and try to educate its customers as best it can. You can lead a horse to water and all that.

On the other end of the human/technology spectrum, there is an open question about the degree to which these systems can be made accurate without individual hand tuning of the algorithms for each institution. Purdue was building a system for exactly one university, so it didn't face this problem. We don't have good public data on how well its commercial successors work out of the box. I am not a data scientist, but this question has been raised by some of the folks I trust most in this field. If significant hand tuning is required, then each installation of the product needs a substantial services component, which raises the cost and makes these systems less affordable to the access-oriented institutions that need them the most. This is not a settled question, and I would like to see more public proof points that have undergone some form of peer review.

And in the middle, there's the question of what to do with the predictions in order to produce positive results. Suppose you know on Day 1 which students are more likely to fail the course. Suppose your confidence level is high. Maybe not Minority Report-level stuff—although, if I remember the movie correctly, they got a big case wrong, didn't they?—but pretty accurate. What then? At my recent IMS conference visit, I heard one panelist on learning analytics (depressingly) say, "We're getting really good at predicting which students are likely to fail, but we're not getting much better at preventing them from failing."

Purdue had both a specific theory of action for helping students and good connections among the various program offices that would need to execute that theory of action. Campbell et al. believed, based on prior academic research, that students who struggle academically in their first year of college are likely to be weak in a skill called "help-seeking behavior." Academically at-risk students often are not good at knowing when they need help or how to get it. Course Signals would send students carefully crafted and increasingly insistent emails urging them to go to the tutoring center, where staff would track which students actually came. The IR department would analyze the results. Over time, the academic IT department that owned the Course Signals system experimented with different email messages, in collaboration with IR, and figured out which ones were most effective at motivating students to take action and seek help.

Notice two critical features of Purdue's method. First, they had a theory about student learning—in this case, learning about productive study behaviors—that could be supported or disproven by evidence. Second, they used data science to test a learning intervention that they believed would help students based on their theory of what is going on inside the students' heads. This is learning engineering. It also explains why the Purdue folks had reason to hypothesize that the effects of using Course Signals might persist after students stopped using the product. They believed that students might learn the skill from the product. The fact that the experimental design of their follow-up study was flawed doesn't mean that their hypothesis was a bad one.

When Blackboard built their first version of a retention early warning system—one, it should be noted, that is substantially different from their current product in a number of ways—they didn't choose Purdue's theory of change. Instead, they gave the risk information to the instructors and let them decide what to do with it. So have many other designers of these systems. While everybody that I know of copied Purdue's basic analytics design, nobody that I know of—at least no commercial product developers—copied Purdue's decision to put so much emphasis on student empowerment first. Some of this has started to enter product design in more recent years, now that "nudges" have made the leap from behavioral economics into consumer software design. (Fitbit, anyone?) But the faculty and administrators remain the primary personas in the design process for many of these products. (For non-software designers, a "persona" is an idealized person whom you imagine you're designing the software for.)

Why? Two reasons. First, students don't buy enterprise academic software. So however much the companies that design these products may genuinely want to serve students well, their relationship with them is inherently mediated. The second reason is the same as with the previous two challenges in scaling Purdue's solution. Individual institutions can do things that companies can't. Purdue was able to foster extensive coordination between academic IT, institutional research, and the tutoring center, even though those three organizations live on completely different branches of the organizational chart in pretty much every college and university that I know. An LMS vendor has no way of compelling such inter-departmental coordination in its customers. The best they can do is give information to a single stakeholder who is most likely to be in a position to take action and hope that person does something. In this case, the instructor.

One could imagine different kinds of vendor relationships with a service component—a consultancy or an OPM, for example—where this kind of coordination would be supported. One could also imagine colleges and universities reorganizing themselves and learning new skills to become better at the sort of cross-functional cooperation needed to serve students. If academia is going to survive and thrive in the changing environment it finds itself in, both of these possibilities will have to become far more common. The kinds of scaling problems I just described in retention early warning systems are far from unique to that category. Before higher education can develop and apply the new techniques and enabling technologies it needs to serve students more effectively with high ethical standards, we first need to cultivate an academic ecosystem that can make proper use of better tools.

Given a hammer, everything looks pretty frustrating if you don't have an opposable thumb.

The post Instructure DIG and Student Early Warning Systems appeared first on e-Literate.


In my recent IMS update post, I wrote,

[T]he nature and challenges of interoperability our sector will be facing in the next decade are fundamentally different from the ones that we faced in the last one. Up until now, we have primarily been concerned with synchronizing administration-related bits across applications. Which people are in this class? Are they students or instructors? What grades did they get on which assignments? And how much does each assignment count toward the final course grade? These challenges are hard in all the ways that are familiar to anyone who works on any sort of generic data interoperability questions. 
But the next decade is going to be about data interoperability as it pertains to insight. Data scientists think this is still familiar territory and are excited because it keeps them at the frontier of their own profession. But this will not be generic data science, for several reasons.

I then asserted the following positions:

  • Because learning processes are not directly observable, blindly running machine learning algorithms against the click streams in our learning platforms will probably not teach us much about learning.
  • On the other hand, if our analytics are theory-driven, i.e., if we start with some empirically grounded hypotheses about learning processes and design our analytics to search for data that either support or disprove those hypotheses, then we might actually get somewhere.
  • Because learning analytics expressions written in the IMS Caliper standard can be readily translated into plain English, Caliper could form a basis for expressing educational hypotheses and translating them into interoperable tools for testing those hypotheses across the boundaries of tech tools and platforms.
  • The kind of Caliper-mediated conversation I imagined among learning scientists, practicing educators, data scientists, learning system designers, and others, is relevant to a term coined and still used heavily at Carnegie Mellon University—"learning engineering."

In this post, I'm going to explore the last two points in more detail.

What the heck is "learning engineering"?

The term "learning engineering" was first used by Nobel laureate and Carnegie Mellon University polymath Herbert Simon in 1966. It has been around for quite a while. But it is a term whose time as finally has come and, as such, we are seeing the usual academic turf wars over its meaning and value. On the one hand, some folks love it, embrace it, and want to apply it liberally. IEEE has an entire group devoted to defining it. As is always the case, some of this sort of enthusiasm is thoughtful, and some of it is less so. At its worst, there is a tendency for people to get tangled up in the term because it provides a certain je ne sais quoi they've been yearning for to describe the aspects of their jobs that they really want to be doing as change agents rather than the mundane tasks that they keep being dragged back into doing, much like the way some folks are wrapping "innovation" and "design" around themselves like a warm blanket. It's perfectly understandable, and I think it attaches to something real in many cases, but it's hard to say exactly what that is. And, of course, where there are enthusiasts in academia, there are critics. Again, some thoughtful, while others...less so. (Note my comment in the thread on that particularly egregious column.)

If you want to get a clear sense of the range of possible meanings of "learning engineering" as used by people who actually think about it deeply, one good place to start would be Learning Engineering for Online Education: Theoretical Contexts and Design-Based Examples edited by Chris Dede, John Richards, and Bror Saxberg. (I am still working on getting half a day's worth of Carnegie Mellon University video presentations on their own learning engineering work ready for posting on the web. I promise it is coming.) There are a lot of great take-aways from that anthology, one of which is that even the people who think hard about the term and work together to put together something like a coherent tome on the subject don't fully agree on what the term means.

And that's really OK. Let's just set a few boundary conditions. On the one hand, learning engineering isn't an all-encompassing discipline and methodology that is going to make all previous roles, disciplines, and methodologies obsolete. If you are an instructional designer, or a learning designer, or a user experience designer; if you practice design thinking, or ADDIE; be not afraid. On the other hand, learning engineering is not creeping Stalinism either. Think about learning engineering, writ large, as applying data and cognitive sciences to help bring about desired learning outcomes, usually within the context of a team of colleagues with different skills all working together. That's still pretty vague, but it's specific enough for the current cultural moment.

Forget about your stereotypes of engineers and their practices. Do you believe there is a place for applied science in our efforts to improve the ways in which we design and deliver our courses, or to understand and serve our students' needs and goals? If so, what would such an applied science look like? What would a person applying the science need to know? What would their role be? How would they work with other educators who have complementary expertise?

That is the possibility space that learning engineering inhabits.

Applied science as a design exercise

One of the reasons that people have trouble wrapping their heads around the notion of learning engineering is that it was conceived of by a very unusual mind. Some of the critiques I've seen online of the term position "learning engineering" in opposition to "learning design." But as Phil Long points out in his essay in the aforementioned anthology, Herb Simon both coined the term "learning engineering" and is essentially the grandfather of design thinking:

Design science was introduced by Buckminster Fuller in 1963, but it was Herbert Simon who is most closely associated with it and has established how we think of it today. "The Sciences of the Artificial" (Simon, 1967) distinguished the artificial, or practical sciences, from the natural sciences. Simon described design as an ill-structured problem, much like the learning environment, which involves man-made responses to the world. Design science is influenced by the limitations of human cognition, unlike mathematical models. Human decision-making is further constrained by the practical attributes of limited time and available information. This bounded rationality makes us prone to seek adequate as opposed to optimal solutions to problems. That is, we engage in satisficing, not optimizing. Design is central to the artificial sciences: 'Everyone designs who devises courses of action aimed at changing existing situations into desired ones.' Natural sciences are concerned with understanding what is; design science instead asks about what should be. This distinction separates the study of the science of learning from the design of learning. Learning scientists are interested in how humans learn. Learning engineers are part of a team focused on how students ought to learn.

Phil Long, "The Role of the Learning Engineer"

Phil points out two important dichotomies in Simon's thinking. The first one: is vs. ought. Natural science is about what is, while design science is about what you would like to exist. What you want to bring into being. The second dichotomy is about well structured vs. poorly structured. For Simon, "design" is a set of activities one undertakes to solve a poorly structured problem. To need or want is human, and to be human is to be messy. Understanding a human need is about understanding a messy problem. Understanding how different humans with different backgrounds and different cognitive and non-cognitive abilities learn, given a wide range of contextual variables like the teaching strategies being employed, the personal relationships between students and teacher, what else is going on in the students' lives at the time, whether different students are coming to class well fed and well slept, and so on, is pretty much the definition of a poorly structured problem. So as far as Herb Simon is concerned, education is a design problem by definition, whether or not you choose to use the word "engineer."

In the next section of his article, Phil then makes a fascinating connection between the evolution of design thinking, which emerged out of design science, and learning engineering. The key is in identifying the central social activity that defines design thinking:

Design thinking represents those processes that designers use to create new designs, possible approaches to problem solution spaces where none existed before. A problem-solving method has been derived from this and applied to human social interactions, iteratively taking the designer and/or co-design participants from inspiration to ideation and then to implementation. The designer and design team may have a mental model of the solution to a proposed problem, but it is essential to externalize this representation in terms of a sketch, a description of a learning design sequence, or by actual prototyping of the activities in which the learner is asked to engage. [Emphasis added.] All involved can see the attributes of the proposed design solution that were not apparent in the conceptualization of it. This process of externalizing and prototyping design solutions allows it to be situated in larger and different contexts, what Donald Schon called reframing the design, situating it in contexts other than originally considered.

Phil Long, "The Role of the Learning Engineer"

So the essential feature that Phil is calling out in design thinking is putting the idea out into the world so that everybody can see it, respond to it, and talk about it together. Now watch where he takes this:

As learning environments are intentionally designed in digital contexts, the opportunity to instrument the learning environment emerges. Learners benefit in terms of feedback or suggested possible actions. Evaluators can assess how the course performed on a number of dimensions. The faculty and others in the learning-design team can get data through the instrumented learning behaviors, which may provide insight into how the design is working, for whom it is working, and in what context.

Phil Long, "The Role of the Learning Engineer"

Rather than a sketch, a wireframe, or a prototype, a learning engineer makes the graph, the dashboard, or the visualization into the externalization. For Herb Simon, as for Phil Long, these design artifacts serve the same purpose. They're the same thing, basically.

If you're not a data person, this might be hard to grasp. (I'm not a data person. This is hard for me to grasp sometimes.) How can you take numbers in a table and turn them into a meaningful artifact that a group of people can look at together, discuss, make sense of, debate, and learn from? What might that even look like?

Well, it might look something like this, for example:

Phil Hill's famous squid diagram

Phil Hill has a graduate degree in engineering. Not learning engineering. Electrical. (Also, he's not a Stalinist.)

By the way, when we externalize and share data with a student about her learning processes in a form that is designed to provoke thought and discussion, we have a particular term of art for that in education. It's called "formative assessment." If we do it in a way such that the student always has access to such externalizations, which are continually updating based on the student's actions, we call that "continuous formative assessment." When executed well, there is evidence that it can be an effective educational practice.

Caliper statements as learning engineering artifacts

So here's where we've arrived at this point in the post:

  • Design is a process by which we tackle ill-defined problems of meeting human needs and wants, such as needing or wanting to learn something.
  • Engineering is a word that we're not going to worry about defining precisely for now, but it relates to applying science to a design problem, and therefore often involves measurement and numbers.
  • One important innovation in design methodology is the creation of external artifacts early in the design process so that various stakeholders with different sorts of experience and expertise can provide feedback in a social context. In other words, create something that makes the idea more "real" and therefore easier to discuss.
  • Learning engineering includes the skills of creation and manipulation of design artifacts that require more technical expertise, including expertise in data and software engineering.

The twist with Caliper is that, rather than using visualizations and dashboards as the externalization, we can use human language. This was the original idea behind the Semantic Web, which is still brilliant in concept, even if the original implementation was flawed. Let's review that basic idea as implemented in Caliper:

  • You can express statements about the world (or the world-wide web) in three-word sentences of the form [subject] [verb] [direct object], e.g., [student A] [correctly answers] [question 13].
  • Because English grammar works the way it does, you can string these sentences together to form inferences, e.g., [question 13] [tests knowledge of] [multiplying fractions]; therefore, [student A] [correctly answers] [a question about multiplying fractions].
  • We can define mandatory and optional details of every noun and verb, e.g., it might be mandatory to know that question 13 was a multiple choice question, but it might be optional to include the actual text of the question, the correct answer, and the distractors.

That's it. Three-word sentences, which work the way they do in English grammar, and definitions of the "words."
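As a toy illustration of that idea—plain triples and a single chained inference—here is a Python sketch. This is not the Caliper JSON-LD wire format; the tuples and the inference rule are made up purely to show how three-word statements compose.

```python
triples = {
    ("question 13", "tests knowledge of", "multiplying fractions"),
    ("student A", "correctly answers", "question 13"),
}


def infer_skill_evidence(triples):
    """Chain two statements: if a student correctly answers a question, and
    that question tests a skill, infer that the student correctly answered
    a question about that skill."""
    inferred = set()
    for subject, verb, obj in triples:
        if verb != "correctly answers":
            continue
        for question, verb2, skill in triples:
            if question == obj and verb2 == "tests knowledge of":
                inferred.add((subject, "correctly answers a question about", skill))
    return inferred


print(infer_skill_evidence(triples))
# {('student A', 'correctly answers a question about', 'multiplying fractions')}
```

The point is not the code itself but that the statements are readable by both humans and machines, which is what makes them usable as shared design artifacts.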

A learning engineer could use Caliper paragraphs as a design artifact to facilitate conversations about refining the standard, the products involved, and the experimental design. I'll share a modified version of an example I recently shared with an IMS engineer to illustrate this same point.

Suppose you are interested in helping students become better at reflective writing. You want to do this by providing them with continuous formative assessment, i.e., in addition to the feedback that you give them as an instructor, you want to provide them an externalization of the language in their reflective writing assignments. You want to use textual analysis to help the students look at their own writing through a new lens, find the spots where they are really doing serious thought work, and also the spots where maybe they could think a little harder.

But you have to solve a few problems in order to give this affordance to your students. First, you have to develop the natural language analysis tool that can detect cues in the students' writing that indicate self-reflection (or not). That's hard enough, but the research is being conducted and progress is being made. The second problem is that you are designing a new experiment to test your latest iteration and need some sort of summative measure to test against. So maybe you design a randomized controlled trial where half the students in the class use the new feedback tool, half don't, and all get the same human-graded final reflective writing assignment. You compare the results.

This is an example of theory-driven learning analytics. Your theory is that student reflection improves when students become more aware of certain types of reflective language in their journaling. You think you can train a textual analysis algorithm to reliably distinguish—externalize—the kind of language that you want students to be more aware of in their writing and point it out to them. You want to test that by giving students such a tool and seeing whether their reflective writing does, in fact, improve. Either students' reflective writing will improve under the test condition, which will provide supporting evidence for the theory, or it won't, which at the very least will not support the theory and might provide evidence that tends to disprove it, depending on the specifics. Data science and machine learning are being employed here, but they are being employed more selectively than just shotgunning an algorithm at a data set and expecting it to come up with novel insights about the mysteries of human cognition.
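Here is a minimal sketch of how the summative comparison in such a design might be run once the rubric-graded finals are in hand. The scores below are simulated placeholders, and the choice of Welch's t-test is my own illustrative assumption, not a claim about any particular study's analysis plan.

```python
import random

from scipy import stats

random.seed(0)
students = [f"s{i:02d}" for i in range(60)]
random.shuffle(students)
treatment, control = students[:30], students[30:]  # random assignment

# In the real study these would be the rubric scores on the final,
# human-graded reflective writing assignment; here they are simulated.
final_scores = {s: random.gauss(75, 8) for s in control}
final_scores.update({s: random.gauss(79, 8) for s in treatment})

t_scores = [final_scores[s] for s in treatment]
c_scores = [final_scores[s] for s in control]

# Welch's t-test (doesn't assume equal variances) on the two groups.
t_stat, p_value = stats.ttest_ind(t_scores, c_scores, equal_var=False)

print(f"treatment mean = {sum(t_scores) / len(t_scores):.1f}, "
      f"control mean = {sum(c_scores) / len(c_scores):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```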

Constructing theory-driven learning analytics of the sort described here is challenging enough to do in a unified system that is designed for the experiment. But now we get to the problem for which we will need the help of IMS over the next decade, which is that the various activities we need to monitor for this work often happen in different applications. Each writing assignment is in response to a reading. So the first thing you might want to do, at least for the experiment if not in the production application, is to control for students who do the reading. If they aren't doing the reading, then their reflective writing on that reading isn't going to tell you much. Let's say the reading happens to take place in an ebook app. But their writing takes place in a separate notebook app. Maybe it's whatever notebook app they normally use—Evernote, One Note, etc. Ideally, you would want them to journal in whatever they normally use for that sort of activity. And if it's reflective writing for their own growth, it should be an app that they own and that will travel with them after they leave the class and the institution. On the other hand, the final writing assignment needs to be submittable, gradable, and maybe markable. So maybe it gets submitted through an LMS, or maybe through a specialized tool like Turnitin.

This is an interoperability problem. But it's a special one, because the semantics have to be preserved through all of these connections in order for (a) the researchers to conduct the study, and then (b) the formative assessment tool to have real value to the students. The people who normally write Caliper metric profiles—the technical definitions of the nouns in Caliper—would have no idea about any of this on their own. Nor would the application developers. Both groups would need to have a conversation with the researchers in order to get the clarity they need in order to define the profiles for this purpose.

The language of Caliper could help with this if a person with the right role and expertise were facilitating the conversation. That person would start by eliciting a set of three-word sentences from the researchers. What do you need to know? The answers might include statements like the following:

  • Student A reads text 1
  • Student A writes text alpha
  • Text alpha is a learning reflection of text 1
  • Student A reads text 2
  • Text 2 is a learning reflection of texts 1 and 2
  • Etc.

The person asking the questions of the researcher and the feature designer—let's call that person the learning engineer—would then ask questions about the meanings and details of the words, such as the following:

  • In what system or systems is the reading activity happening?
  • Do you need to know if the student started the reading? Finished it? Anything finer grained than that?
  • What do you need to know about the student's writing in order to perform your textual analysis? What data and metadata do you need? And how long a writing sample do you need to elicit in order to perform the kind of textual analysis you intend and get worthwhile results back?
  • What do you mean when you say that text 2 is a reflection of both text 1 and 2, and how would you make that determination?

At some point, the data scientist and software systems engineers would join in the conversation and different concerns would start to come up, such as the following:

  • Right now, I have no way of associating Student A in the note-taking system with Student A in the reading system.
  • To do the analysis you want, you need the full text of the reflection. That's not currently in the spec, and it has performance implications. We should discuss this.
  • The student data privacy implications are very different for an IRB-approved research study, an individual student dashboard, and an instructor- or administrator-facing dashboard. Who owns these privacy concerns and how do we expect them to be handled?

Notice that the Caliper language has become the externalization that we manipulate socially in the design exercise. There are two aspects of Caliper that make this work: (1) the three-word sentences are linguistically generative, i.e., they can express new ideas that have never been expressed before, and (2) every human-readable expression directly maps to a machine-readable expression. These two properties together enable rich conversations among very different kinds of stakeholders to map out theory-driven analytics and the interoperability requirements that they entail.
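A small sketch of that second property: the same triple rendered both as the plain-English sentence stakeholders can debate and as a machine-readable record. The field names below are invented for illustration and are not the actual Caliper serialization.

```python
def to_sentence(subject: str, verb: str, obj: str) -> str:
    """Render a triple as the plain-English sentence stakeholders discuss."""
    return f"{subject} {verb} {obj}."


def to_record(subject: str, verb: str, obj: str) -> dict:
    """Render the same triple as a machine-readable record.

    Field names are invented for this example; they are not the Caliper
    JSON-LD serialization.
    """
    return {"actor": subject, "action": verb, "object": obj}


triple = ("Student A", "reads", "text 1")
print(to_sentence(*triple))  # Student A reads text 1.
print(to_record(*triple))    # {'actor': 'Student A', 'action': 'reads', 'object': 'text 1'}
```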

This is the kind of conversation by which Caliper can evolve into a standard that leads to useful insights and tools for improving learning impact. And in the early days, it will likely happen one use case at a time. Over time, the working group would learn from having enough of these conversations that design patterns would emerge, both for writing new portions of the specification itself and for the process by which the specification is modified and extended.

Copyright Carnegie Mellon University, CC-BY

The post Learning Engineering: A Caliper Example appeared first on e-Literate.


Today I am sharing the first video out of the Empirical Educator Project (EEP) 2019 summit, and with it, one of the central concerns of the project. Much of the basic machinery of our learning processes works so naturally and automatically so much of the time that it is invisible to us. So pervasively invisible, in fact, that most of us are barely aware that it even exists. And that's a problem. If you believe that the job of education is to work within what psychologist Lev Vygotsky called the "zone of proximal development"—the kind of learning challenge that would be too hard for a student to learn on her own but not so hard that she can't learn it at all—then we have to have a very finely tuned understanding of that learning machinery, to the point where we can accurately find each student's zone of proximal development with a high level of consistency.

We fail to do this all the time. Some students are bored while others struggle. The more heterogeneous the student population is, the bigger a problem this is. As higher education becomes more committed to serving post-traditional students, first-generation students, and students with 40-year educational relationships to the school rather than 4-year relationships, this need to be able to see and understand these invisible learning processes becomes more acute. For this reason among others, fostering academic literacy around the mental machinery of learning—making the invisible visible—is one of the central goals of EEP. I therefore wanted to start the 2019 EEP summit by highlighting this challenge. So I invited three Carnegie Mellon University (CMU) professors with complementary areas of expertise to participate in a panel that could highlight several dimensions of the problem.

This wasn't the first time I had interviewed these three particular academics. I had been fortunate enough to be invited to a CMU press fellowship three years earlier. I brought my video camera along and happened to be able to get some air time with these same three people, two of whom I had never met before. The interviews turned out to be formative for me, particularly with regard to my thinking about EEP. I'm going to write a little about the complementary insights that these three academics gave me and then share both the interview video from the summit and the original interview videos from two years ago.

Expert blind spots

As we get old and forgetful, we like to joke that our minds have to make room for the new information by clearing out old information. It turns out that there's truth behind this joke in multiple ways. First, we have different kinds of memory. If I asked you to list the steps required to tie your shoe, those steps would probably not come tripping off your tongue. Does that mean that you don't know how to tie your shoe? No, it doesn't. It means that you've moved that knowledge to a more efficient memory space in your brain. One that's quick and efficient enough that you can easily bend down and tie your shoes while performing other, more demanding cognitive tasks. But that knowledge is not accessible to your conscious mind. It is "tacit" knowledge. Your brain is very efficient at shunting information that it needs to access but does not need to consciously examine into a different compartment than the one it was in when you were learning a skill.

There was a time when you could list the steps in tying your shoe, because that was how you first learned those steps. Your brain put that information into a box once it no longer needed conscious access to it. Chances are good that you don't remember that time well and that you don't remember the experience of those steps fading from your conscious memory. I tried to recreate this experience recently for myself. I am learning to swim. In the first weeks, I was thinking about very basic aspects of moving my arms and, separately, moving my legs. That period was about nine months ago. I decided to try a little experiment with memory encoding in the process. Every two weeks, I would try to remember the steps that I learned in my first lesson. I didn't try to memorize those steps. That would have triggered a different memory process and invalidated the experiment. I just tried to reconstruct the steps in my mind. Meanwhile, I spent most of my time at the pool learning to be a better swimmer.

As the weeks went on, I found myself thinking less about what my arms and legs were doing separately and more about what my whole body was doing. I also found it harder and harder to remember what the original steps were that I learned in my first lesson. Nine months in, I barely remember anything about how I first thought about what I was doing. If I had to teach somebody to swim from scratch, I couldn't just reproduce the lesson that was taught to me. I'd have to make something up. Nor could I reproduce the learning steps I took—many of which I made on my own, without my instructor—to get from my beginner's understanding to the level of expertise I have achieved as of today. I might be able to draw on some of my knowledge and experience, but I would have to invent more of my teaching moves than most teachers like to admit, through trial and error, by working with students.

So our brains do, in fact, make room for new information by boxing up old information and putting it into storage. In addition to the memory changes, we also process information differently as our domain knowledge gets more sophisticated. When you're learning math, or cooking, or yoga, or any other discipline with integrated skills that build on each other, at first you're learning each skill separately. Over time, your mind integrates steps and makes general rules. As novice cooks become expert cooks, their way of thinking about cooking looks less like meticulously following one out of hundreds of completely separate recipes and more like following some generalized principles that they've drawn from their experience of making so many recipes. They stop thinking algorithmically and start thinking heuristically.

We don't generally notice these changes in our cognition as we move from novices to experts in a topic. They're not directly observable and not usually consciously experienced. They just happen. This is a problem for teaching, because professors, as experts, have undergone all of these changes in their learning processes. They no longer think the way their students do. They don't think about cooking as following individual recipes. Further, because their evolution as thinkers was largely silent, and because most professors have had no professional development in these processes, it's not always obvious to them the extent to which their brains process information in fundamentally different ways than those of their students. Ironically, it is their very expertise that causes them to struggle sometimes to understand how their students think about their subjects or how to work with them in that zone of proximal development. CMU Professor Ken Koedinger, Director of LearnLab at the Pittsburgh Science of Learning Center, is an expert in this conundrum.

Expert teaching blind spots

There's a related phenomenon that I'll call an expert teaching blind spot, even though I don't think that's an official term of art. Just as it is possible to not consciously know what you know in any domain of knowledge, it's possible to have tacit knowledge specifically about teaching. In addition to the reasons above, I'll add another one: interpersonal skills, including teaching skills, fall somewhere in the middle of the learning spectrum between things that we are hardwired to learn without anyone specifically teaching us (like spoken language as young children) and things that are intellectual creations which must be consciously learned (like political science). Many educators have what we colloquially refer to as teaching "instincts," and that word is not far from the truth. We have tacit interpersonal knowledge, sometimes including tacit knowledge about the learning processes of our students. We know some things about how to teach in a very real sense, but that knowledge is not fully consciously accessible to us.

As a result, it can be very difficult to talk to even highly skilled teachers about what they do, because in many cases they've never even tried to put what they do into language. They just do what seems right and obvious to them. And if they do verbalize what they're doing, they usually aren't using terms of art, because they usually haven't been taught any. Their insights seem personal because nobody has told them that, beyond the personal and phenomenological, there could be a sharable, learnable, teachable body of knowledge that their instincts are tapping into. CMU's Marsha Lovett, Director of the Eberly Center for Teaching Excellence & Educational Innovation, is an expert in this problem domain.

If we don't have a coherent answer, then we make one up

If you put all of this together, it adds up to a very significant challenge to serious educators. They don't have easy ways of knowing how they think differently than their students or easy access to their own cognitive journeys that got them from novice learners to expert learners. And yet, most of us have vivid memories of our formative experiences as students. On top of that, teachers teach, and students learn. It happens all the time. Humans are such incredible learning machines, and the machinery is so well hidden from us, that many people tend to assume that there really isn't much to it (when nothing could be further from the truth). Most professors are good at academic learning. That's how they ended up as professors.

And they usually had at least one experience that really inspired them to learn about their chosen field. That association is often all it takes for educators to attribute causality. "Well, I had an amazing experience in Professor Smith's class, and Professor Smith did X, so X must be a great way to teach." Given that most professors diligently worked through five to seven years of graduate school without being exposed to the tiniest hint of any of the above and then were expected to somehow magically know how to teach well, what tends to happen is that professors make up their own stories about what effective teaching is based on their own personal experiences—which is the only data they have, really—and they go on that. And they don't change their minds about it very much or very easily. CMU anthropologist and Simon Research Faculty Lauren Herckis has conducted some fascinating research in this area.

We have a literacy problem

If you put all of this together, it's clear that we're not going to make substantial progress on improving education until educators are taught to see that which is currently invisible. We have to develop a common cultural understanding that learning involves a complex set of cognitive processes, that being an expert in a knowledge domain is not sufficient to be a good teacher of novices, and that good teaching instincts are often based on tacit knowledge which we can make explicit and therefore more sharable and useful. Only by doing this together, as a sector, can we make substantial progress on improving student success. One of the main goals of the Empirical Educator Project is to begin fostering the cultural infrastructure that we need in order to do that.

Here are the three original video interviews I conducted of Marsha, Ken, and Lauren two years ago:

What is Learning Science from an Educator's Perspective? - YouTube
e-Literate TV CMU Interviews

I got lucky with those interviews. The coherence in the interviews is a product of the coherent body of work at CMU's Simon Initiative as represented by the three people who happened to be available to interview rather than through some master plan of mine.

At the summit, I chose to frame up both the discussion and the event more consciously. In addition to their work, I asked the three to reflect on their personal journeys as educators to embrace views about teaching and learning that may have seemed surprising or even counter-intuitive to them:

Empirical Education, Version 1.0 beta - YouTube
EEP Summit 2019: Empirical Education 1.0 beta Panel

The journeys that these experts describe are emblematic of the bigger picture that EEP is all about. And not just in classroom work specifically, but in every aspect of serving students.

I have said before that academia needs to move from a philosophical commitment to student success toward operational excellence at supporting student success. The implied gap is knowhow. It will show up differently in the classroom than it will in, say, advising, but the pattern is going to be the same, and I think academics will be most comfortable thinking about it as starting with a literacy problem. There is some discipline, either new or existing, that they must learn to some degree of competence in order to serve their students well. They might not have to be expert in it—they don't have to have PhDs in cognitive psychology, for example—but they do need to be literate in it.

The post EEP 2019: The Invisible Miracle of Learning appeared first on e-Literate.


Yesterday, I wrote about my experiences at the recent IMS Learning Impact Leadership Institute. Today, I'm going to write about a sentence that I heard uttered several times while at that summit. One that I've been expecting to hear for nearly a year now.

"Instructure is the new Blackboard."

It's not the first time I have heard that sentence, but it has reached critical mass. I have known it was coming since last Instructurecon. I wrote a blog post specifically to prepare for this entirely predictable moment. It has finally arrived.

And now it is time to explain why nobody should ever say "X is the new Blackboard" about any company ever again.

Predicting the inevitable

I have often characterized Instructure's first decade of customer relations as "gravity-defying." Once or twice, I have had people challenge me on the blog about that characterization. "Why are you rooting for them to fail?" they would ask. But I wasn't. I was merely observing that gravity exists, and nobody can defy the laws of physics forever. What goes up eventually comes down. And in ed tech, any fall is a fall from grace. As a rule, educators are distrustful of ed tech companies, are really distrustful of large ones, and are bitterly resentful of companies that disappoint them. At some point, Instructure would have to slip from abnormally good and revert to mean. And when that happened, there would be blowback.

It was clear that moment had arrived at Instructurecon 2018, because Instructure was no longer able to pull off the impossible. Josh Coates keynotes should have been impossible. Josh is a smart, interesting, thoughtful guy. He is not a good keynote speaker. He rambles. He careens. He talks about what he cares about, and what he thinks you should care about, but doesn't give a lot of thought to what you think you need to hear from him. And yet, somehow, his Instructurecon keynotes came off as charming and fascinating. Nobody cared that he said not one damned thing about the topics that any other LMS company CEO would have been shredded by customers for failing to cover. He was like some funhouse mirror version of Mr. Rogers.

Until 2018, when his keynote was a disaster. It wasn't just that the quirky charm failed to work this time. Josh offended multiple groups in the audience. What goes up must eventually come down.

Then there was Josh's fireside chat with Dan Goldsmith, then the newly announced President. It was obvious to Phil and me that Dan was being introduced to the customers because he would be CEO within a year. Gravity-defying Instructure would have somehow magically helped the audience understand that they were being introduced to the line of succession while being reassured that things were steady-as-she-goes. But that would have been a near-impossible feat to pull off, and the Instructure of 2018 walked on the earth like you and me. So the audience reaction was, basically, "Uh, he seems nice, but why do I need to hear about how he was an Uber driver for a while?"

There were also smaller signs, and other facts from which one could draw inferences. There was the small but noticeable reduction in spending on the conference. There was the increasing pressure from the stock market for Instructure to grow their sales of Bridge to corporations. The dominos had already started falling, and the pattern was set for the next ones to fall in a certain order:

  • Josh would leave soon. Other executives and senior managers would likely leave as well. Some would go because they had had a good run and were ready to move on. Others would go because Dan would want to put his own team in place.
  • Instructure was built around Josh, who is an idiosyncratic leader. It was also built to sell to higher education. In order to retool it so that it is something that can run well under Dan's leadership style and sell into higher education, K12, and the corporate market, many things would have to change internally. People would move around. Some people would leave. Others would arrive. Processes would change. All of this is distracting to people who are trying to do their jobs. Things inevitably would fall through the cracks. Some of those things would be important to some customers. Those customers would notice.
  • All of this uncertainty would inevitably create some trepidation among the employees, even if the new management handles the situation beautifully. The fact is that when people are no longer sure what their job is or how they can be successful at it, which is inevitable in this kind of environment of change, they tend to keep their heads down until they figure it out. They may not challenge decisions that they think are on the wrong track.
  • Meanwhile, some of the new senior management, crucially including the CEO, were new to education and didn't know where the landmines are. And there are many, many landmines. It wouldn't matter how smart they are. It wouldn't matter how decent and kind they are. Since they didn't know where the landmines are, and their people were likely too nervous or distracted to warn them, sooner or later they would step on one.

In March of this year, Dan Goldsmith said this:

What's even more interesting and compelling is that we can take that information, correlate it across all sorts of universities, curricula, etc, and we can start making recommendations and suggestions to the student or instructor in how they can be more successful. Watch this video, read this passage, do problems 17-34 in this textbook, spend an extra two hours on this or that. When we drive student success, we impact things like retention, we impact the productivity of the teachers, and it's a huge opportunity. That's just one small example.

Our DIG initiative, it is first and foremost a platform for ML and AI, and we will deliver and monetize it by offering different functional domains of predictive algorithms and insights. Maybe things like student success, retention, coaching and advising, career pathing, as well as a number of the other metrics that will help improve the value of an institution or connectivity across institutions. [snip]

We've gone through enough cycles thus far to have demonstrable results around improving outcomes with students and improving student success. [snip] I hope to have something at least in beta by the end of this year.

That quote is pulled from Phil's contemporaneous post on the statement, where he then goes on to reference the "robot tutor in the sky." But Dan probably wouldn't have gotten that reference, because he wasn't in the industry at the time that former Knewton CEO Jose Ferreira made it. As a result, his own statement, which was predictably explosive to Phil and me, probably seemed somewhere between anodyne and exciting to him.

So. You have an ed tech company that has spectacularly over-performed for a decade. Their performance slips, not to horror-show levels, but to levels where some customers are noticeably unhappy. The company leadership makes a tone-deaf statement or two about unreleased products that we really don't know that much about.

And that is all it takes to become a fallen angel in higher education ed tech. There is likely no way that employees at Instructure who have only ever worked at that one ed tech company could have known that to be true in advance of having experienced it. There is likely no way that executives coming in from outside of ed tech could have known that to be true without having experienced it either. Because it doesn't make sense. But it is true. Instructure's brand was destined to crash hard precisely because it was so good. That's how it works in ed tech. Cynics are disappointed optimists, and we have a lot of those.

But why, specifically, "the new Blackboard?" It's not the first time I've heard that phrase used about a company. And really, it's unfair to both Instructure and Blackboard. In fact, when I wrote in my last post about how some companies that used to be barriers to interoperability work are now among its most important champions, I was specifically thinking of Blackboard. The complaints I've had about them have been related to (1) trying to spin their financial challenges and (2) struggling to execute well during an extraordinarily tough transition. In other words, totally normal company stuff. Today's Blackboard may not be perfect, but it is basically a decent company. In the moral sense.

This sector has a lingering revulsion for a version of a company that ceased to exist in 2012 and yet continues to loom as a shadow over the entire vendor space, creating a sense of ever-present subconscious dread. It's like having a lifelong fear of clowns from something that happened at a circus when you were three years old but that you can no longer remember.

It is time to remember.

The personal as parable

As I described in a recent post, my public debates with Blackboard over their patent assertion are something of an origin story for e-Literate. There is a lot about the story that I'm going to tell now—some of it for the first time on the blog—that became personal because certain parties at Blackboard chose to make it personal. Throughout that period, and through my writing since, I have tried to keep e-Literate professional and focused only on details that are worth sharing insofar as they advance the public good. I have not always succeeded in that aspiration, but it is important to me to try.

Today I choose to share some actions that were taken against me because I think it is important to understand how truly bad actors behave. These are not the kinds of actions that either Instructure or today's Blackboard would take. If the sector is going to improve, then we need to get better at distinguishing between bad behavior, which can have a variety of causes and can be corrected through engagement, and truly bad actors, with whom there can be no negotiating. In my experience, truly bad actors are rare.

So I'm going to share some personal experiences later in this blog, but I'm going to try to keep this as minimally personal as I can. When possible, I'm going to avoid naming names, even though some of you will know who I'm talking about. I will share some details but not others. What I ask you to think about as you read my portion of the story is not what happened to me or who did what but how what happened then is qualitatively different from what is happening now.

The Old Blackboard

The period of Blackboard's history that I am talking about is specifically from 1999 to 2012. During this period, the company developed a carefully crafted and highly successful business strategy. First, they were pioneers in the software rental business. You didn't own Blackboard software, even if you ran it on your own servers. You paid an annual license fee. I can't say that Blackboard invented this strategy—I'm not sure who did; it might have been Oracle—but Blackboard certainly drove it deep into the education sector.

This could be a handsomely profitable business model, particularly if they could hold market share and maintain pricing power. Which brings us to the second leg of their strategy. Blackboard sought to dominate ed tech product categories by buying up every vendor in the category as soon as it reached significant market share. Here's how that looked in the LMS product category:

  • In 2000, they acquired MadDuck Technologies, which made Web Course in a Box
  • In 2002, it was George Washington University's Prometheus
  • In 2006, WebCT (which had spun out of University of British Columbia but had been independent for a while)
  • In 2009, ANGEL Learning from IUPUI
  • In 2012, after reportedly failing to buy Moodle Pty, the company bought Moodlerooms and NetSpot, the biggest Moodle partners in the US and Australia respectively

The reason that Phil's famous LMS market share graphic is called the "squid graph" is that Blackboard formed the body by continuously gobbling up competitors as they emerged.

In every case except Moodle, Blackboard would kill off the acquired platform after acquisition. They weren't really looking to acquire technology. To the contrary: they didn't want the expense of maintaining multiple platforms, and they showed almost no interest in any of the technical innovations they acquired until after the ANGEL acquisition, when Ray Henderson started driving some of the product strategy for them. Rather, Blackboard was interested in acquiring customers. They knew that some of those customers would leave—in fact, some of those customers had previously left Blackboard for the very platform that was now being acquired—but that was OK. By keeping competition low and competitors under a certain size, Blackboard was really protecting its pricing power. LMS license fees were, not coincidentally, significantly more expensive during this period than they are today.

There was one company—Desire2Learn—that represented an increasing threat to Blackboard but would not sell. So Blackboard tried a different tactic, which we'll come to a little later in this narrative.

Blackboard tried a similar trick of domination through acquisition, somewhat less successfully, in the web conferencing space by simultaneously buying Wimba and Elluminate, which were two of the largest education-specific web conferencing platforms at the time. If there hadn't been an explosion of cheap and excellent generic web conferencing solutions soon afterward, it might have worked.

Blackboard did not really consider itself a software development company during this period and was not afraid to say so explicitly to customers. I was told this by a Blackboard representative, and I know of one ePortfolio company that was told the same thing. That company started up specifically because, when its founders asked as university customers whether Blackboard would build an ePortfolio, the response was, "We don't really develop software, but if you know of any good ePortfolio companies, we might consider acquiring one."

Blackboard did have an internal product development strategy of sorts, albeit an anemic one. Companies understand that it's easier (and cheaper) to sell a second product to an existing customer than a first product to a new customer. So they often develop a portfolio of products and services to "cross-sell" to those existing clients. In and of itself, there's absolutely nothing wrong with that. And like many companies, Blackboard had a formula for how many products they needed to cross-sell in order to hit their financial goals. Again, this is pretty standard stuff. The objectionable part was the way in which that formula drove the product road map.

The quintessential example of this was Blackboard Community. Keep in mind that the LMS originated when universities started taking generic groupware (like Lotus Notes, for example) and adding education specific features like a grade book and a homework drop box. Blackboard's idea was to strip those education-specific features back out of the product and license it separately to use for clubs, committees, and so on. I'm sure it wasn't quite that simple from a development perspective, but it wasn't very far off. Take the product you've already sold to the customer, strip out some features, integrate the stripped down version with the original version—badly—and sell it to the customer a second time.

Blackboard also had epically bad customer service. Far worse than any of the LMS vendors today. To be clear, there were individuals at Blackboard—on the development teams, in customer service, and in other parts of the company—who worked their butts off to serve their customers well. There are always good people at sufficiently large companies. But the company's processes were not optimized for customer service, and it did not invest in it. One can only conclude that customer service was not a priority for executive management, whatever the line employees may have felt about it.

The patent suit

As I mentioned earlier, Desire2Learn was becoming a thorn in Blackboard's side. But Blackboard's management team was developing a legal strategy that they thought would complement their acquisition strategy, especially in cases where pesky entrepreneurs would not sell. They started filing for patents. Now, software patents are an unfortunate reality in our world. I don't like them, but since they exist, I understand why some companies feel the need to have them. That said, Blackboard's patents were intended neither for defensive purposes nor for demonstrating durable value to investors. They intended to assert their patents against other companies.

In industries like pharmaceuticals or electronics, where innovation takes considerable investment up front but yields significant, long-term profits afterward, the economics can support patent assertion. There is enough money flowing in the system that there is at least a plausible argument that paying the inventor a licensing fee incentivizes investment in innovation. But education is not that sort of market, and the LMS product category in particular has thin margins. If new LMS vendors had to pay patent royalties, there likely wouldn't have been new LMS vendors.

Blackboard received a patent for LMS functionality, the precise definition of which I will get to momentarily. They immediately asserted that patent against Desire2Learn. They probably expected the company to fold and agree to either pay the royalty or sell. Companies usually don't fight patents. If Desire2Learn had folded, that would have given Blackboard's patent added legal weight. And Blackboard had other patents it had filed. There was every indication that they were attempting to create what is called a "patent thicket," effectively making it impossible to bring a new product to market without running into one or another of their patents. If they had succeeded, they would have owned the LMS market forever.

They would have killed the LMS market.

And what was Blackboard's first patent? What was their supposed innovation?

A system where a user could log into one course as an instructor and another as a student.

That's it.

Really.

When I learned enough about how to read a patent to figure that out, I couldn't believe it. And this is where Blackboard started fighting with me. But it was all non-denial denials. There is a moment in the legal process of a patent fight where the court determines the scope of the patent. Before that, legally speaking, the patent is undefined. So when Blackboard pushed back against my posts, all they were really saying was that the court hadn't spoken yet.

When the two companies faced each other in court and argued for their definition of the scope of the patent, what did Blackboard argue was the scope of their patent?

A system where a user could log into one course as an instructor and another as a student.

Blackboard didn't like me writing stuff like that. They—where "they" means specific executives who I chose not to mention by name, rather than some hive mind of every human working at the company—did not like it when I called them out on it in advance. And they really did not like it when I pointed out afterward that they had been misleading at best in their previous statements about what they believed the scope of the patent to be.

What concerned me was that their repeatedly calling attention to my writing by arguing with me in public was irrational. I was relatively unknown until they started responding to me. This kind of regular unforced error was out of character. It was telling me...something. What was it telling me? The most logical explanation was that I had gotten under their skin. I had cause to suspect that they were the kind of people who did not have a high tolerance for being challenged. That could be dangerous.

As long as I was working at SUNY, I was protected. They may have been irrationally focused on me, but they weren't stupid. They were not about to attack a university employee. However, once I became an Oracle employee, I was concerned that things would get ugly.

I was right.

What ugly looks like

When I was offered the job at Oracle, I had a conversation with my prospective manager about the Blackboard situation. I told him that I thought the patent assertion was a threat to the health of the sector, that I did not intend to stop writing about it, and that it was possible that Blackboard would come after me once I was no longer working for a university. He replied that he respected my right to continue writing as long as I made clear on the blog that my opinions were my own—which I did, scrupulously—but that if the politics reached above a certain level in the organization, then his ability to protect me would have its limits. We agreed that it would be unfortunate if that were to happen, we each understood and respected the other's position, and we agreed to give it a go. Nothing ventured and all that.

It didn't take long. I was at a Blackboard reception at EDUCAUSE when one of the executives approached me and started a conversation about my posts. "You know, I wouldn't complain to Oracle about it. I would never do that. I respect your independence. But this isn't good for the relationship between our two companies."

That's a nice shiny new job you got there, kid. It would be a shame if anything were to, you know. Happen to it.

I kept writing.

Not many months after that, the same executive, in the presence of my manager, sat down next to my colleague and started complaining to her about me. Repeatedly. Incessantly. To the point where my manager had to physically interpose himself between the executive and my colleague in order to protect her from what he perceived to be harassment. At which point, the executive started complaining to my manager about me.

It had the opposite of the intended effect. My manager was very protective of his people.

I kept writing.

Not all of the writing was negative, by the way. For example, when Blackboard's Chief Legal Counsel showed up at a Sakai conference to debate the Free Software Foundation's Eben Moglen on the merits of the patent, I argued both that Blackboard's representative had been unfairly treated and that it was important to continue to try to work with the company constructively on the larger patent problem if at all possible.

Nevertheless, Blackboard continued what I can only describe as a widening and escalating campaign to convince my employer to either silence me or remove me. My employer was specifically told that I was unwelcome at Blackboard-hosted events. The message was clear: Feldstein is harming Oracle's relationship with Blackboard. And if that weren't clear enough, I started being approached by random Oracle employees. The conversation would go like this:

Do you know [Blackboard employee name redacted]?

Yeah, I know him. Why?

Well I don't, but he just came up to me at BbWorld and started complaining to me about how you're harming Blackboard's relationship with Oracle.

That same Blackboard employee accosted me at an IMS meeting, literally yelling at me, telling me that he had almost convinced his bosses to adopt the new version of the LIS standard we were developing—the one that was going to save universities time and money by getting rid of the need to manually monitor the integration between the registrar software and the LMS—but they killed it when they read my latest blog post.



A few weeks back, I had the pleasure of attending the IMS Learning Impact Leadership Institute (LILI). For those of you who aren't familiar with it, IMS is the major learning application technical interoperability organization for higher education and K12 (and is making some forays into the corporate training and development world as well). They're behind specifications like LIS, which lets your registrar software automagically populate your LMS course shell with students, and LTI, which lets you plug in many different learning applications. (I'll have a lot more to say about LTI later in this post.)
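
To make that concrete, here is a rough sketch of what a launch actually looked like in the LTI 1.0/1.1 era discussed in this post (LTI 1.3 has since moved to OAuth 2.0 and signed JWTs). The LMS, acting as the "tool consumer," gathers a handful of parameters about the user and the course, adds OAuth 1.0a fields, signs everything with HMAC-SHA1 using a key and secret it shares with the tool, and has the browser POST the result to the tool's launch URL. This is an illustrative sketch only, in Python using just the standard library; the launch URL, key, secret, and identifiers below are made-up placeholders rather than anything from the IMS specification documents or a real product.

import base64, hashlib, hmac, time, uuid
from urllib.parse import quote

def _enc(value):
    # RFC 3986 percent-encoding, as OAuth 1.0a requires.
    return quote(str(value), safe="")

def sign_lti_launch(launch_url, params, consumer_key, shared_secret):
    """Add oauth_* fields and an HMAC-SHA1 signature to an LTI 1.1-style launch."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: METHOD & encoded URL & encoded, sorted parameter string.
    pairs = sorted((_enc(k), _enc(v)) for k, v in all_params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join(["POST", _enc(launch_url), _enc(param_string)])
    # No token secret in a launch, so the signing key ends with a bare "&".
    signing_key = _enc(shared_secret) + "&"
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params

launch = sign_lti_launch(
    "https://tool.example.com/lti/launch",  # hypothetical tool launch URL
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course42-assignment7",  # made-up identifiers
        "context_id": "course42",
        "user_id": "student-123",
        "roles": "Learner",
    },
    consumer_key="example-key",
    shared_secret="example-secret",
)
# In practice the LMS renders these parameters as a hidden, auto-submitting HTML
# form; the student's browser POSTs them to the tool, which verifies the signature
# with the same shared secret before trusting the user and course information.

The point, for purposes of this story, is how little the two systems need to know about each other: a URL, a key, and a secret, and any conformant tool can plug into any conformant LMS.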

While you may not pay much attention to them if you aren't a technical person, they have been and will continue to be vital to creating the kind of infrastructure necessary to support more and better teaching and learning affordances in our educational technology. As I'll describe in this post, I think the nature of that role is likely to evolve somewhat as the interoperability needs of the sector change.

The IMS is very healthy

I'm happy to report that the IMS appears to be thriving by any obvious measure. The conference was well attended. It attracted a remarkably diverse group of people for an event hosted by an organization that could easily be perceived as techie-only. Furthermore, the attendees seemed very engaged and the discussions were lively.

On more objective measures, the organization's annual report bears out this impression of strong engagement. They have strong international representation across a range of organization types.

From the IMS Global 2018 Annual Report

Whether your measure is membership, product certifications, or financial health, the IMS is setting records.

From the IMS Global 2018 Annual Report

This state of affairs is even more remarkable given that, 13 years ago, there was some question as to whether the IMS was financially sustainable.

From the IMS Global 2018 Annual Report

If you look carefully at this graph, you'll see three distinct periods of improvement: 2005-2008, 2009-2013, and 2013-2018. Based on what I know about the state of the organization at the time, the first period can most plausibly be attributed to immediate changes implemented by Rob Abel, who took over the reins of the organization in February of 2006 and likely saved it from extinction. Likewise, the magnitude of growth in the second period is consistent with that of a healthy membership organization that has been put back on track.

But that third period is different. That's not normal growth. That's hockey stick growth.

I am not a San Franciscan. By and large, I do not believe in heroic entrepreneur geniuses who change the world through sheer force of will. Whenever I see that kind of an upward trend, I look for a systemic change that enabled a leader or organization—through insight, luck, or both—to catch an updraft.

There is no doubt in my mind that the IMS has capitalized on some major updrafts over the last decade. That is an observation, not a criticism. That said, the winds are changing, in part because the IMS has helped move the sector through an important period of evolution and is now helping to usher in the next one. That will raise some new challenges that the IMS is certainly healthy enough to take on but will likely require them to develop a few new tricks.

The world of 2005

In the first year of the chart above, when the IMS was in danger of dying, there was very little in the way of ed tech to interoperate. There were LMSs and registrar systems (a.k.a. SISs). Those were the two main systems that had to talk to each other. And they did, after a fashion. There was an IMS standard at the time, but it wasn't a very good one. The result was that, even with the standard, there was a person in each college or university IT department whose job it was to manage the integration process, keep it running, fix it when it broke, and so on. This was not an occasional tweak, but a continual effort that ran from the first day of class registration through the last day of add/drop. If you picture an old-timey railroad engineer shoveling coal into the engine to keep it running and checking the pressure gauge every ten minutes to make sure it didn't blow up, you wouldn't be too far off. As for reporting final grades from the LMS's electronic grade book automatically to the SIS's electronic final grade record, well, forget it.

If you ignore some of the older content-oriented specifications like QTI for test questions and Common Cartridge for importing static course content, then that was pretty much it in terms of application-to-application interoperability. Once you were inside the LMS, it was basically a bare-bones box with not much you could add. Today, the IMS lists 276 officially certified products that one can plug into any LMS (or other LTI-compliant consumer), from Academic ASAP to Xinics Commons. I am certain that is a substantial undercount of the number of LTI-compatible applications, since not all compatible product makers get officially certified. In 2005, there were zero, because LTI didn't exist. There were LMS-specific extensions. Blackboard, for example, had Building Blocks. But with a few exceptions, most weren't very elaborate or interesting.

My personal experience at the time was working at SUNY Systems Administration and running a search committee for an LMS that could be centrally hosted—preferably on a single instance—and potentially support all 64 campuses. For those who aren't familiar with it, SUNY is a highly diverse system, with everything from rural (and urban) community colleges to R1s and everything in between, with some specialty schools thrown into the mix like the Fashion Institute of Technology, a medical school or two, an optometry school, and so on. Both the pedagogical needs and the on-campus support capabilities across the system were (and presumably still are) incredibly diverse. There simply was not any existing LMS at the time, with or without proprietary extensions, that could meet such a diverse set of needs across the system. We saw no signs that this state of affairs was changing at a pace that was visible to the naked eye, and relatively few signs that it was even widely recognized as a problem.

To be honest, I came to the realization of the need fairly slowly myself, one conversation at a time. A couple of art history professors dragged me excitedly to Columbia University to see an open source image annotation tool, only to be disappointed when they discovered that the tool was developed to teach clinical histology, which uses image annotation to teach in an entirely different way than is typically employed in art history classes. An astronomy professor at a community college on the far tip of Long Island, where there was relatively little light pollution, wanted to give every astronomy student in SUNY remote access to his telescope if only we could figure out how to get it to talk to the LMS. Anyone who has either taught or been an instructional designer for a few wildly different subjects has a leg up on this insight (and I had done both), but even so, there are levels of understanding. The art history/histology thing definitely took me by surprise.

A colleague and I, in an effort to raise awareness about the problem, wrote an article about the need for "tinkerable" learning environments in eLearn Magazine. But there were very few models at the time, even in the consumer world. The first iPhone wasn't released until 2007. The first practically usable iPhone wasn't released until 2008. (And we now know that even Steve Jobs was secretly skeptical that apps on a phone were a good idea.) It is a sign of just how impoverished our world of examples was in January of 2006 that the best we could think of to show what a world of learning apps could be like was Google Maps:

There are several different ways that software can be designed for extensibility. One of the most common is for developers to provide a set of application programming interfaces, or APIs, which other developers can use to hook into their own software. For example, Blackboard provides a set of APIs for building extensions that they call "Building Blocks." The company lists about 70 such blocks that have been developed for Blackboard 6 over the several years that the product version has been in existence. That sounds like a lot, doesn't it? On the other hand, in the first five months after Google made the APIs available for Google Maps, at least ten times that many extensions have been created for the new tool. Google doesn't formally track the number of extensions that people create using their APIs, but Mike Pegg, author of the Google Maps Mania weblog, estimates that 800-900 English-language extensions, or "mash-ups," with a "usable, polished Google Maps implementation" have been developed during that time—with a growth rate continuing at about 1,000 new applications being developed every six months. According to Pegg, "There are about five sites out there that facilitate users to create a map by taking out an account. These sites include wayfaring.com, communitywalk.com, and mapbuilder.net—each of these sites probably has hundreds of maps for which just one key has been registered at Google." (Google requires people who are extending their application to register for free software "keys.") Perhaps for this reason, Chris DiBona, Google's own Open Source Program Manager, has heard estimates that are much higher. "I've seen speculation that there are hundreds or thousands," says DiBona, noting that estimates can vary widely depending on how you count.

Nevertheless, even the most conservative estimate of Google Maps mash-ups is higher than the total number of extensions that exist for any mainstream LMS by an order of magnitude.

There seemed little hope for this kind of growth any time in the foreseeable future. By early 2007, having failed to convince SUNY to use its institutional weight to push interoperability forward, I had a new job working at Oracle and was representing them on a specification development committee at the IMS. It was hard, which I didn't mind, but it was also depressing. There was little incentive for the small number of LMS and SIS vendors who dominated specification development at that time to do anything ambitious. To the contrary, the market was so anemic that the dominant vendors had every reason to maintain their dominance by resisting interoperability. Every step forward represented an internal battle within those companies between the obvious benefit of a competitive moat and the less obvious enlightened self-interest of doing something good for customers. This is simply not the kind of environment in which interoperability standards grow and thrive.

And yet, despite the fact that it certainly didn't feel like it, change was in the air.

Glaciers are slow, but they reshape the planet

For starters, there was the LMS, which was both a change agent in and of itself and an indicator of deeper changes in the institutions that were adopting it. EDUCAUSE data shows that the US LMS market became saturated some time roughly around 2003. At that time, Blackboard and WebCT had the major leads as #1 and #2, respectively. The dynamic for the next 10 years was a seesaw, with new competitors rising and Blackboard buying and killing them off as fast as it could. Take a look at the period between 2003 and 2013 in Phil's squid graph:

It was absolutely vicious.

None of this would materially affect the standards-making process inside the IMS until, first, Blackboard's practice of continually buying up market share eventually failed (thus allowing an actual market with actual market pressures to form) and, second, the management team that came up with this decidedly anti-competitive strategy...er...chose to spend more time with their respective families. (I'll have more to say about Heckle and Jeckle and their lasting impact on market perceptions in a future post.)

But the important dynamic during this period is that customers kept trying to leave Blackboard (even if they found themselves being reacquired shortly thereafter) and other companies kept trying to provide better alternatives. So even though we didn't have a functioning, competitive market that could incentivize interoperability, and even though it certainly didn't feel like we had one, some of the preconditions for one were being established.

Meanwhile, online education growth was being driven by no fewer than three different vectors. First, for-profit providers were hitting their stride. By 2005, the University of Phoenix alone was at over 400,000 enrollments. Second, public access-oriented institutions, many of which had been seeded a decade earlier with grants from the Sloan Foundation, were starting to show impressive growth as well. A couple were getting particular attention. UMUC, for example, may not have had over 400,000 online enrollments in 2005, but they had well over 40,000, which is enough to get the attention of anyone in charge of an access-oriented public university's budget. More quietly, many smaller schools were having online successes that were proportional to their sizes and missions. For example, when I arrived at SUNY in 2005, they had a handful of community colleges with self-sustaining online degree programs that supported both the missions and the budgets of the campuses. Many more were offering individual courses and partial degrees in order to increase access for students. (Most of New York is rural, after all.)

The third driver of online education, which is more tightly intertwined with the first two than most people realize, is that Online Program Management companies (OPMs) were taking off. The early pioneers, like Deltak (now Wiley Education Services), Embanet, Compass Knowledge (now both subsumed into Pearson), and Orbis (recently acquired by Grand Canyon Education), had proved out the model. The second wave was coming. Academic Partnerships and 2Tor (now 2U) were both founded in 2008. Altius Education came in 2009. In 2010, Learning House (now also owned by Wiley) was founded.

Counting online enrollments is a notoriously slippery business, but this chart from the Babson survey is highly suggestive and accurate enough for our purpose:

If you're a campus leader and thirty percent of your students are taking at least one online class, that becomes hard for you to ignore. Uptime becomes far more important. Quality of user experience becomes far more important. Educational affordances become far more important. Obviously, thirty percent is an average, and one that is highly unevenly distributed across segments. But it's significant enough to be market-changing.

And the market did change. In a number of ways, the biggest one being that it became an actual, functioning market (or at least as close to one as we've gotten in this space).

When glaciers recede

Let's revisit that second growth period in the IMS graph—2008 to 2013—and talk about what was happening in the world during that period. For starters, online continued its rocket ride. The for-profits peaked in 2010 at roughly 2 million enrollments (before beginning their spectacular downward spiral shortly thereafter). Not-for-profits (and odd mostly-not hybrids) ramped up the competition. ASU launched its first online 4-year degree in 2006. SNHU started a new online unit in 2009. WGU expanded into Indiana in 2010, which was the same year that Embanet merged with Compass Knowledge and was promptly bought by Pearson. (Wiley acquired Deltak two years later.)

Once again, the more online students you have, the less you are able to tolerate downtime, a poor user interface that drives down productivity, or generic course shells that make it hard to teach students what they need to learn in the ways in which they need to learn. Instructure was founded in 2008. They emphasized a few distinctions from their competitors out of the gate. The first was their native multitenant cloud architecture. Reduced downtime? Check. The second was a strong emphasis on usability. The big feature that they touted, which became their early runaway hit, was SpeedGrader. Increased productivity? Check.

Instructure had found their updraft to give them their hockey stick growth.

But they also emphasized that they were going to be a learning platform. They weren't going to build out every tool imaginable. Instead, they were going to build a platform and encourage others to build the specialized tools that teachers and students need. And they would aggressively encourage the development and usage of standards to do so. On the one hand, this fit from a cultural perspective. Instructure was more like a Silicon Valley company than its competitors, and platforms were hot in the Valley. On the other hand, it was still a little weird for the education space. There still weren't good interoperability standards for what they wanted to do. There still hadn't been an explosion of good learning tools. This is one of those situations where it's hard to tell how much of their success was prescience and how much of it was luck that higher ed caught up with their cultural inclination at that exact moment.

Co-evolution

The very same year that Brian Whitmer and Devlin Daley founded Instructure, Chuck Severance and Marc Alier were mentoring Jordi Piguillem on a Google Summer of Code project that would become the initial implementation of LTI. In 2010, the same year that Instructure scored its first major win with the Utah Education Network, IMS Global released the final specification for LTI v1.0. All this time that the market had felt like it had been standing still, it had actually been iterating. We just hadn't been experiencing the benefits of it. Chuck, who had been thinking about interoperability in part through his work on Sakai, had been tinkering. Students like Brian and Devlin, who had been frustrated with their LMS, had been tinkering. The IMS, which actually had a precursor specification before LTI, had been tinkering. While nothing had changed visibly on the surface of the glacier, way down, a mile below, the topography of the land was changing.

Meanwhile in Arizona, in 2009, the very first ASU+GSV summit was held. I admit that I have had writer's block regarding this particular conference the last few years. It has gotten so big that it's hard to know how to think about it, much less how to sum it up. In 2009, it was an idea. What if a university and a company that facilitates start-ups (in multiple ways) got together to encourage ed tech companies to work more effectively with universities? That's my retrospective interpretation of the original vision. I wasn't at many of those early conferences and I certainly wasn't an insider. It was hard for me, with my particular background, to know what to make of it then and even harder now.

But something clicked for me this year when it turned out that IMS LILI was held at the same hotel that the ASU+GSV summit had been at a couple of months earlier. How does the IMS get to 523 product certifications and $8 million in the bank? A lot of things have to go right for that to happen, but for starters, there have to be 523 products to certify and lots of companies that can afford to pay certification fees. That economy simply did not exist in 2008. Without it, there would be no updraft to ride and consequently no hockey stick growth. ASU+GSV's phenomenal growth, and the ecosystem that it enabled, was another major factor that influenced what I saw at IMS LILI this month.

There is a lot of chicken-and-egg here. LTI made a lot of this possible, and the success LTI (and IMS Global) have experienced would not have been possible without a lot of this. The harder you stare at the picture, the more complicated it looks. This is what "systems thinking" is all about. There isn't a linear cause-and-effect story. There are multiple interacting feedback cycles. And the reason that's important to understand is that complex systems can behave in non-linear ways, changing suddenly and dramatically in non-obvious directions. The past is not necessarily prologue.

After the glacier comes the flood

What I saw at the IMS LILI..


In my last post, I wrote at a high level about the new e-Literate:

Think of the organization as having two parts. The first part is dedicated to spreading understanding. e-Literate has been doing this from the very beginning. The Empirical Educator Project (EEP) can be thought of as a social, IRL extension of e-Literate. I think of this as "spreading the gospel," but the business-speak for this part would be "media and events." There will be the blog, the summits (which may evolve into a conference as interest grows), a podcast, and other logical extensions. I will have more details to share on this, including a way to pay for the expense of expanding this work that I think will actually do net good rather than harm, in tomorrow's post. One does not get rich from education media and events, but if they pay the mortgages of the people working on them, that's just fine.

The other half of the business is to help organizations implement the knowledge and effective practices that are brought forward in the first half. Everything published through e-Literate and EEP will be contributed to the commons via a Creative Commons or open source license. But sometimes organizations need help implementing these ideas, or they want context-specific coaching. So the rest of the business will provide a combination of workshops and consulting that help apply and extend just about anything covered in e-Literate or EEP. I do have significant ambitions to grow this part of the organization, largely because I think it's an important way to grow impact.

In today's post, I'm going to make good on the promise of describing my plan for paying for the "spreading the gospel" part of the mission. Here's the short version:

Vendors are always trying to get me to write good things about them. They know my word is valuable because people believe I will write good things about vendors if and only if I think they are true. By definition, I can't take their money to write good things about them. That would defeat the purpose. Which means it becomes ever harder to cover authentically good work as the demand for attention increases and I have no way to pay for the cost of the time to cover that news. In fact, in the old consulting model, a lot of good work that we knew about was hard to cover because the way we learned about it—namely, through our paid consulting with the vendor—created a conflict of interest.

But what if vendors paid not for good coverage but for help in becoming better collaborators and contributors, particularly with non-customers? And what if there were a kind of peer review mechanism to help vet their success at becoming better collaborators and contributors? And what if they got coverage for free based on that success?

That's the new model in a nutshell. Now I'll break down where it came from and how it works.

The past: An origin story

Back in the early days of this blog, when I was just some guy working at a university writing about stuff that was on my mind, I didn't think or write much about vendors or business issues. Sure, I was frustrated by the teaching tools they were providing, but I didn't think much about the organizations themselves, how they ran their businesses, or the dynamics of the markets.

Then Blackboard decided to sue Desire2Learn for patent infringement. I didn't know anything about software patents, either legally or economically, but this sounded at first blush like it might not be a good thing. So I started asking around. It turns out that when you work in an academic environment it isn't too hard to learn stuff about just about anything. Pretty soon I knew more than I ever thought I wanted to know about patent claim construction, patent thickets, different ways that patents can be challenged, and so on. The more I learned, the more concerned I grew. And the more concerned I grew, the more I wrote about it.

When I started writing about the topic, only a handful of people were reading my blog. But then a weird thing happened. Matt Small, Blackboard's Chief Legal Counsel, started arguing with me. In public. The next thing I knew, an Associated Press reporter was at my door with a photographer, and my picture was in USA Today. My readership skyrocketed. My public arguments with Matt Small continued literally for years, and with them, my readership continued to grow. Blackboard had made themselves the most hated company in ed tech and I became known as the little guy who took them on.

It started off as an accident, but once it had happened, I felt compelled to continue. Nobody else was doing that kind of accountability work in ed tech until Phil came along, and it was important work. Importantly, it sometimes got results. The bigger I got, the more likely companies were to behave better when we poked them.

That said, there are two aspects of this work that I don't like. The first is the collateral damage. A blog post is a blunt force instrument. Companies do dumb or harmful things for a lot of different reasons, and some educators tend to be very reductive about their vendors and the people working at them. Sometimes that damage is a necessary evil, but I've come to appreciate that a sharp-elbowed vendor accountability piece is not something to be written lightly. The potential for good has to outweigh the potential for harm, and that's often a tough calculation to make.

The second aspect I don't like about the "cop on the beat" role of e-Literate is the opportunity cost. Every post I write about something that somebody shouldn't have done is a missed opportunity to highlight something good that we should be doing more of.

Put these two problems together, add in the problem that none of the time on this work is paid for, and it feels like e-Literate could be doing a lot more than it has been to foster a new economy in ed tech. One where good behavior is rewarded. And the funny thing is, ed tech companies try to make contributions all the time. Their efforts are generally ignored, dismissed, or simply not noticed.

There are many reasons for this. One is that many of these companies are not skilled at making contributions in ways that are helpful to educators and likely to be noticed. Another is that many educators have trouble thinking about vendors as complex organizations that can do a mix of good things, bad things, and neutral things all at the same time for reasons other than some evil master plan. There are others I could list, but the net effect is that contribution and collaboration are not reliably effective methods for vendors to increase their brand value in this space. So they do less of it than they could and don't work as hard as they should at getting better at it.

Education companies should compete based on how much they contribute to education, particularly including how much they contribute to the public good. We haven't created a world in which it is possible for them to do so effectively. That strikes me as a problem worth tackling. e-Literate is trying to do something about it via EEP.

The present

I realize that many of you are still trying to wrap your heads around exactly what the Empirical Educator Project (EEP) is. That's OK. For the purpose of this post, you just need to understand this much:

  • It's a project where colleges and universities share what they've learned and collaborate on projects to help their students learn, succeed, and thrive.
  • Knowledge shared in EEP is intended to be contributed to the commons via some sort of open source license and to be made as practically accessible as possible to as many organizations as possible.
  • EEP is, in many ways, a real-world extension of e-Literate.

EEP is also vendor-sponsored. In fact, it has a very particular and carefully crafted approach to sponsorship that is designed to help create an economy that rewards vendor collaboration and contribution.

Here are the principles:

Sponsors are vetted, not once but continually

I personally hand-pick sponsors. I interview them before I accept their money. I have turned down sponsors, including high-dollar-value ones whose logos would have looked very good on the EEP web site. I have declined to invite sponsors back when their behavior has not lived up to expectations. The first step in establishing a new economy of collaboration and contribution is establishing a higher level of trust. I can only have EEP sponsors in the room who can demonstrate that they are trustworthy.

Sponsors are participants

We refer to the vendors who support EEP as "sponsoring participants," and we only admit sponsors who have something to offer in addition to money. The corollary is that sponsoring participants are not allowed to send sales or marketing people to the summit. They have to send product designers, executives, or researchers. In other words, they have to send people who are in roles that enable them to collaborate. Ultimately, we want our sponsors to contribute something. That is what they are there for.

For each company, there is only one price

There is no gold, silver, or bronze sponsorship. Nobody pays for a lunch or a lanyard or a program ad. You are either in or you are out. The only price difference is based on size. We have three bands based on broad ranges of company valuations—small, medium, or large. The price is the price, and it covers a year of sponsorship.

There may be ancillaries in the future—the one in the works right now is a podcast—but the core will always be one price based on company size with no distinction in the value received in return for that price, because we do not want to privilege large companies over small ones.

Credit is proportional to contribution

Becoming an EEP sponsoring participant in and of itself gets the vendor a few things:

  • Their logo on the web site to show that they are participating in creating this new ecosystem
  • My word to EEP network participants that the vendor has been vetted for participation
  • An opportunity to sit side-by-side with the participants at the summit and seek collaboration opportunities throughout the year
  • Help from e-Literate in finding collaboration and contribution opportunities

It does not buy them a single positive blog post on e-Literate. (Nor does it buy them protection from e-Literate's watchdog function. The blog will continue to do what it does.)

If the participating sponsor makes a contribution under some form of open license, that will get coverage from e-Literate. I will be publishing several such posts in the coming weeks based on contributions from the recent summit. For the short term, I am vetting those contributions. As we grow, I intend to put in place a more robust review mechanism.

If the participating sponsor makes a non-proprietary contribution that is adopted by colleges and universities in the EEP network, that will get more attention on e-Literate because the adoption acts as a peer review mechanism. And if that adoption comes with some sort of impact measure, that will get the most attention of all.

Cost is proportional to value

The way we came up with the sponsorship costs is that, within each of the three company sponsorship size bands, we figured out roughly how much the marketing manager could sign off on without getting executive approval. Then we charged a little more than that. We are creating an environment in which vendors have an opportunity to prove that they are genuinely contributing to the betterment of education. They should be willing to invest in that future.

Some questions you may have

It's pretty abnormal to have such a long post on e-Literate about how we make our money, so you probably have some questions. I'll try to anticipate a few, but feel free to ask yours in the comments section below.

  • Are you selling out, Michael? My answer on this one is the same as always. I'm the wrong person to answer that question. You be the judge. All I ask is that you watch my actions going forward and judge me by what I do.
  • Is e-Literate going to be all about EEP all the time now? No. As a matter of fact, I have a couple of posts coming up in the immediate future that are exactly the kind of analysis I've been writing for a long time now. But my writing has always evolved based on whatever I've been working on and thinking about. You may not have noticed it, because I haven't always called attention to it. But it has. It's going to do so again.
  • Does this mean you will only write about companies that are EEP sponsors? No, but it does mean that I have a filter for my already limited writing time. I ignore about a dozen press releases every week. That was true before EEP, and it's still true now. I will write about good work wherever I see it. The question is really one of where I have time to focus my attention. Going forward, I am going to be focusing more of my attention on the work coming out of the EEP network, so I will be more likely to notice work being done there.
  • Do you really think you can change the vendor economy? Me by myself? Probably not. I have been surprised by how much I've been able to nudge it on occasion, but changing the whole darned thing is a pretty heavy lift. All of us together, on the other hand? Yeah, I kinda do think we can make it happen.
  • Can my company sponsor EEP? Maybe. Contact me and we'll talk.
  • Are you going to be writing about your business all the time now? Nah. Don't get too weirded out by all of this. I am more me than ever. I had to explain the changes and will return to the details from time to time when they are relevant, but e-Literate is still gonna e-Literate.

The post Building a Better Vendor Ecosystem appeared first on e-Literate.


You'll notice that the e-Literate web site has gotten a redesign and that we've switched URLs to http://eliterate.us. (Please update your bookmarks and RSS feeds.) You will also notice that there is a new "Get Help (Services)" menu item at the top of the page. While I had originally put a commercial services page on the Empirical Educator Project site, that was just a placeholder until I could complete the e-Literate site makeover. It has migrated here, where it belongs. In this post and the next, I'm going to provide some more details about how e-Literate is evolving, what's changing, and what isn't.

The Past

During the first half of e-Literate's life, it was simply a personal blog that was a work of passion. I started writing it as a way of thinking out loud about my work and of experimenting with what was then a new form. As my readership unexpectedly grew, it became a way for me to find new colleagues, engage in productive conversation, and advocate for change. For the most part, I never really thought about it as something that benefited me professionally, except on the rare occasions when I looked to change jobs and discovered, to my surprise, that people I was interviewing with already knew who I was.

When Phil and I started MindWires together, we were still pretty naive about the relationship between the blog and the company. We thought that MindWires would be our day job that we would use to feed our blogging habit. As it turned out, most of our work came because people knew us through the blog.

But it wasn't that simple. Phil and I naturally gravitate toward somewhat different kinds of analysis and different kinds of work. As writers working in the same publication, those differences worked beautifully together. We complemented each other well. As business partners, it was harder to build a coherent business. Phil is a natural market analyst and process consultant. He does those things extremely well and enjoys the work for its own sake. He also likes maintaining the kind of neutrality that one needs in order to do that work well. I, on the other hand, am more of a puzzle solver than a referee and have always thought of myself as an activist, which is not a word that Phil uses to describe his own vocation. So there was always a certain amount of parallel play in both e-Literate and MindWires. That worked for a while, but the more ambitious we got, the harder it became to reconcile our respective passions under one roof.

Hence, the split. Our time together taught us something important about what we each need to be doing. And, as we have also learned through our seven years working together, our work and our writing are tied together more intimately than we had realized. Splitting up the business while continuing to blog together wouldn't take us to where we each need to go right now.

For me, after 14 years of trying to have an impact through e-Literate while earning my living in various ways that are only loosely connected to the passion expressed on these pages, I need to bring the two closer together. So that is what I am doing.

The Future

e-Literate the publication is now one piece of a larger e-Literate organization, although it remains the beating heart of it.

Think of the organization as having two parts. The first part is dedicated to spreading understanding. e-Literate has been doing this from the very beginning. The Empirical Educator Project (EEP) can be thought of as a social, IRL extension of e-Literate. I think of this as "spreading the gospel," but the business-speak for this part would be "media and events." There will be the blog, the summits (which may evolve into a conference as interest grows), a podcast, and other logical extensions. I will have more details to share on this, including a way to pay for the expense of expanding this work that I think will actually do net good rather than harm, in tomorrow's post. One does not get rich from education media and events, but if they pay the mortgages of the people working on them, that's just fine.

The other half of the business is to help organizations implement the knowledge and effective practices that are brought forward in the first half. Everything published through e-Literate and EEP will be contributed to the commons via a Creative Commons or open source license. But sometimes organizations need help implementing these ideas, or they want context-specific coaching. So the rest of the business will provide a combination of workshops and consulting that help apply and extend just about anything covered in e-Literate or EEP. I do have significant ambitions to grow this part of the organization, largely because I think it's an important way to grow impact.

Keeping Trust

Moving forward, I intend to raise the already high standard of transparency that we have always tried to maintain on the site. I will continue the practice of prominently disclosing any financial relationship whenever I am writing about an organization with which my company has one. And I will also continue to write critical posts about current clients if I think doing so is important to fulfilling the function that e-Literate plays in the higher education ecosystem.

By moving the unpaid writing and the paid work under the same brand, I am increasing that transparency further. Maintaining separate brands for the two different kinds of work creates only an illusion of independence in small organizations, where both kinds of tasks are performed by the same people. There is no practical firewall. The only way to provide assurance of ethical standards in such situations is through transparency. Putting everything under the same brand makes the responsibility as clear as possible. As an editorial convention, I will continue to italicize e-Literate when I am referring to the publication but will not do so when I am referring to the eponymous umbrella organization. But in the big picture, e-Literate and e-Literate are one and the same.

My intention is to build something that is ultimately bigger than me and that will outlast me. But for now, I understand that I am the personification of e-Literate. To reflect that reality, and in keeping with the proud tradition of small company owners making up their own absurd titles, I am calling myself e-Literate's Chief Accountability Officer.

The buck stops here.

The post Understanding the New e-Literate Ecosystem appeared first on e-Literate.

Hi. I'm Michael. I'm a teacher from a family of teachers.

Over the fifteen years that I've been writing here, I have played a lot of roles, including technologist, product designer, quasi-academic, market analyst, journalist, and watchdog. And I will still play some of those roles from time to time when I think that doing so might be useful to you. But those are just things that I do. I am a teacher from a family of teachers.

The title of this blog was intended to be self-deprecating. When I started, I felt only semi-literate about the subjects that I was writing about. I still feel that way. I like that feeling. It tells me that I have an opportunity to learn something.

In the beginning, I was focused on learning about how technology could be useful to educators. So the "e" in "e-Literate" was like the one in "email" or "ecommerce" or "e-Learning." The lesson of the last two decades in all three of those domains is that, while some things change, a lot of others stay the same. Ecommerce is still commerce. Likewise, e-Learning is still learning. Yes, there is additional craft we can draw on that is afforded by the new tools. Nevertheless, humans still learn in basically the same ways they always have.

The "e" I'm more interested in now is educational effectiveness. In all honesty, that's what I've always been interested in. But I personally, and we as a sector, had to learn a lot about where technology can be helpful. I won't say that we're done with that, but we no longer need it to be the main point of the story all the time. And it shouldn't be.

If you've been following my writing lately, a lot of it has been about the Empirical Educator Project (EEP). There is ed tech in these stories, yes, but there is also research on effective practices. Above all, there is culture-building to support educators in learning, adopting, and contributing new effective practices. We need new structures, processes, and support mechanisms. Universities have to reinvent themselves from the inside out. And commercial providers can and should play a vital role in speeding this transition along in a healthy direction.

I have been laying out the breadcrumbs of my thinking for a while and am getting better at telling the story more clearly and succinctly. I have a point of view and a theory of change. This is my new focus. While I will continue to write about many of the topics you are used to seeing me write about, you will notice more thematic coherence moving forward. The Empirical Educator Project, a sister project to e-Literate, is my opportunity to test and refine the theory of change with like-minded colleagues whom I admire. If you are not among their number yet, then I hope you will be soon. And I am now launching a commercial offshoot of that work to further support the mission. You can see some early details here. If I am successful, then the business will enable me to scale EEP to reach more people more quickly and effectively.

When Phil and I were working together, I sometimes joked that we had a blog that owned a business. Today, that is more true than ever. This is the first time in my nineteen years of writing that my day job has aligned seamlessly with my public writing. You will see that reflected here. I will be writing about my work in ways that I haven't before.

I am traveling this week to the IMS Learning Impact Leadership Institute, where I look forward to reconnecting with old friends and re-engaging with the IMS community in ways that I haven't had the luxury of doing in some time. When I return, you will begin to see some changes here.

The post Welcome to the New e-Literate appeared first on e-Literate.

All good things must come to an end.

After nearly seven great years as business partners and an even longer run as co-bloggers, Phil and Michael, a.k.a. the Statler and Waldorf of ed tech, are undergoing a process that Gwyneth Paltrow calls “conscious uncoupling.”

(Is it weirder that we’re referring to ourselves in the third person or that we’re quoting Gwyneth Paltrow? Anyway...)

We’ve had great fun together and hopefully done a little good in the world, but we each need to be doing different things at the moment. Phil will be keeping MindWires as well as the LMS Market Analysis subscription service. He will be forking off to his new blog home at http://PhilOnEdTech.com, but all historical posts will remain at e-Literate. He will continue to cover ed tech and online/hybrid education market trends in general, and basically all the stuff that you’re used to reading about from him. Email subscribers to e-Literate will initially be subscribed to Phil’s blog but will be given clear options to opt out. That said, you should follow his blog. It’s going to be great.

Michael is going to stay right here at e-Literate. His new business will help universities and education companies adopt the innovations contributed by the Empirical Educator Project network and, more generally, will help organizations improve their educational impact. In keeping with that new venture, e-Literate will be returning to its roots of focusing on the nuts and bolts of education, covering technologies, companies, and markets only from angles that set the context for discussions of educational impact. Expect a blog post from him here on e-Literate with more details soon.

This does not mean that you have heard the last from Statler and Waldorf. We remain friends and, more importantly, one of us owes the other money. (We won’t say who.)

Keep your ears open. From time to time, you will still hear some cackling from the cheap seats.

The post Major Changes at e-Literate and MindWires appeared first on e-Literate.

Interest in the Empirical Educator Project (EEP) Summit has been off the charts. We want and intend to include everybody, but only when we can include people in a way that is useful to them. So we are being intentional about the pace and ways in which we are growing.

That said, we know a lot of people are very interested. We had already planned to release video of much of the summit after the fact. We've decided that we're going to try to live stream the audio as well. (My experience with live-streaming video is that there isn't much value in the visuals unless your setup is better than we will be able to manage, so we'd rather focus on trying to get you a solid audio stream.)

We have a placeholder page set up at http://empiricaleducators.net/2019-eep-summit/. Between now and Monday, we will be posting the summit agenda and putting up a widget for the audio stream on that page. Check there periodically for updates. For planning purposes, I can tell you now that the audio streaming will run from 9 AM to 3:30 PM EDT on Monday, May 6th, and from 9 AM to 12 PM EDT on Tuesday, May 7th. Again, the agenda will be posted on the EEP summit page soon. This is a last-minute addition driven by demand, so we're winging it a bit.

We also invite you to discuss the summit on Twitter as it is streamed. We will not have the luxury of a dedicated social media person to monitor and respond to the conversation live, but we will be encouraging the on-site community to participate and will definitely be looking at what you have to say afterward to see what we can learn from your input. The hashtag for the event is #EEP2019.

We're adding two more hashtags for more specific input, since EEP is ultimately about doing things together. If you use these, please be sure to catch the early sessions on Monday that explain the goals of EEP so that your input is on point. The first hashtag, #EEP2019ideas, is for suggestions about how EEP members—both current and prospective—can work together to accomplish the goals of the network. The second, #EEP2019challenges, is for obstacles you want us to be aware of as we think about how to build out the collaborative network.

To prepare you for the streaming of the event, I'm going to assign you some homework. The main reading is very short. I just published a piece in Forbes about Carnegie Mellon's contribution. It's not what you're used to reading from me: Forbes required the piece to be only about 800 words and strictly enforced a requirement that readers shouldn't need any knowledge of higher education or software whatsoever in order to understand the article. The downside of those requirements is that I had to flatten and truncate some details and nuances that e-Literate readers are used to getting from me. (One example that I particularly want to get off my chest is that I briefly described the fruits of Lumen Learning's collaboration with Carnegie Mellon but wasn't able to give them proper credit.) But there were some benefits to those restrictions too. I think the piece captures something of the sense of professional identity and culture that both Carnegie Mellon and EEP seek to foster. Also, did I mention that it's probably the shortest piece by me that you will ever see? Go read it.

Beyond that, if you want to get a deeper sense of the train of thought behind the effort, take a dip into the archive of EEP-related blog posts here at e-Literate.

The post EEP 2019 Will Be Live Audiostreamed appeared first on e-Literate.
