This will probably be the final post in Higher Ed Data Stories for a while.  As you may already know, I'm leaving DePaul for the "best coast" to take the position of Vice Provost of Enrollment at Oregon State University, effective July 1.  Thus, I'll be in personal and professional transition for some time, and once I arrive, I'll be busy learning a new institution and working on new challenges that should keep me occupied. (Additionally, I'm not sure what my Tableau situation will be like...)

As you may already know, HEDS got its start when I decided to revise and share work I was already doing in my job at DePaul. Much of it, I presumed, would be of interest to university colleagues and high school and independent counselors.  But I had no idea how many people craved information like this.

So, for now, here is information about bachelor's degrees earned in 2011, 2014, and 2017, the most recent available in the IPEDS data center.  It's presented three different ways so you can find the information you're curious about.

First Tab: Macro Trends shows bachelor's degrees awarded by US four-year, public and private, not-for-profit institutions in three years (2011, 2014, and 2017).  Use the highlight control at top to call out a specific area, and use the filters to limit the universe, for instance, to look at just private colleges in New England, or Doctoral Institutions in the Southeast.  This will give you an overall perspective on the market, which is much bigger, much more powerful, and far more independent than we'd like to admit. Hover over data points for details via the popup.

Second Tab: Single Institution allows you to select a single college or university to compare over time.  You generally won't see a lot of change over any three-year period, but the overall upward trends are interesting.  The slices are labeled by total number and percent of total for ease of comparison. 

Third Tab: Changes in Bachelor's Degrees breaks out the data by discipline.  It starts with Education majors, but you can use the control to change it to any academic area.  Again, if you don't want to see the whole universe of institutions, use the filters to change the included universities. 

One note about this: Some institutions have meteoric increases due to small base numbers in 2011.  For instance, going from 2 to 16 is a 700% increase, which is interesting but not compelling.  To make the data more meaningful, I set the filter (at lower right) to include only those institutions with at least 50 degrees awarded in the discipline in 2011.  You might want to change this, to 15 for Philosophy, for instance, or 100 for Engineering.  As always, you can't break this; just use the reset icon at lower right.
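If it helps to see why that minimum-base filter matters, here's a rough Python sketch (this is just an illustration of the idea, not the workbook's actual logic; the institution names and counts are made up):

```python
# Why a minimum-base filter matters when ranking percentage increases.
# Names and counts below are hypothetical.
records = [
    {"institution": "Tiny College", "degrees_2011": 2, "degrees_2017": 16},
    {"institution": "Big State U", "degrees_2011": 400, "degrees_2017": 520},
]

MIN_BASE = 50  # same idea as the default filter of 50 degrees in 2011

for r in records:
    pct_change = (r["degrees_2017"] - r["degrees_2011"]) / r["degrees_2011"] * 100
    included = r["degrees_2011"] >= MIN_BASE
    print(f"{r['institution']}: {pct_change:+.0f}% change (included: {included})")

# Tiny College: +700% change (included: False)
# Big State U: +30% change (included: True)
```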

And as always, let me know what you see that's interesting. 


If you want to strike fear into the hearts of enrollment managers everywhere, just say, "The trustees want to talk about the discount rate."

If you don't know, the discount rate is a simple calculation: Institutional financial aid as a percentage of tuition (or tuition and fees) revenue.  If your university billed $100 million in tuition and fees, and awarded $45 million in aid, your discount is 45%.  In that instance, you'd have $55 million in hard cash to run the organization.
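For those who like to see the arithmetic spelled out, here's a tiny Python sketch of the calculation, using only the example figures above (nothing institution-specific):

```python
def discount_rate(institutional_aid, gross_tuition_and_fees):
    """Institutional aid as a share of gross tuition (and fee) revenue."""
    return institutional_aid / gross_tuition_and_fees

def net_revenue(institutional_aid, gross_tuition_and_fees):
    """The hard cash left to run the organization after institutional aid."""
    return gross_tuition_and_fees - institutional_aid

billed = 100_000_000  # $100 million billed in tuition and fees
aid = 45_000_000      # $45 million awarded in institutional aid

print(f"Discount rate: {discount_rate(aid, billed):.0%}")   # 45%
print(f"Net revenue: ${net_revenue(aid, billed):,.0f}")     # $55,000,000
```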

Discount used to be a reporting function, something you would look at when the year was over to see where you stood.  Now, it's become a management target. And that's a problem.  If you want to know why, read this quick little explanation of Campbell's Law. The short explanation is this: If you want to lower discount--if that's really the thing you are after--you can do it very easily.  Just shrink your enrollment.  Or lower your quality, as measured by things like GPA and test scores. Easy.

Of course, this is generally not what people mean when they say they want to decrease the discount rate.  They usually mean "decrease the discount and keep everything else the same, or better yet, improve those measures."  That's not so easy.  The simple reason is that decreasing your discount means you're raising price.  And we all know what happens when you raise price, unless you turn your college into a Giffen good (one where demand rises as the price goes up), which you can't do, of course.

What people really want is more net revenue: that $55 million in the example above.  You'd probably like it to be $57 million, which, on the same $100 million billed, would mean lowering your discount rate to 43%.  That happens because you either charge students more, or enroll more students who bring external aid, like Pell or state grants.  You don't care, really.  Cash is cash.

The absurdity of discount was demonstrated to me by a finance professor friend, who said back in the late '90s, "If we generate $12,000 in average net revenue on an $18,000 tuition (a 33% discount), let's propose raising tuition to $100,000 and the discount to 80%."  That would be $20,000 in net revenue per student, at a much higher discount rate.  Yes, believe it or not, the denominator is important when calculating percentages, which is why it's hard to compare discounts in a meaningful way with competitors who charge more.

If you're interested, here's a little presentation I did on why colleges have tended to increase discount and net revenue at the same time.  This exercise is probably close to the breaking point, however.

Now that you understand a little more about discount, on to the data. This is from the IPEDS data for Fall, 2016, the most recent available showing both aid and admissions data.  There are four views, using the tabs across the top.

View 1: Discount overview

No interactivity: Just average discount rates by Carnegie type, Region, and Urbanicity.  I think the bottom one is the most fascinating discovery I've come across yet, just from playing around with the data.

View 2: Discount by Market Category

This one combines the three categories above (Carnegie, Region, and Urbanicity) into a single category to see how discounts play out.  To be included, a category had to contain at least ten colleges.  You can see that the highest discount, on average, is at Baccalaureate institutions in distant towns in the South Central region of the US.  You can color this by any of the three individual categories using the little control at the top right.

View 3: Individual Colleges

This lists all the private colleges for which I could calculate a freshman discount rate and net revenue per freshman.  The controls at the top allow you to look at schools like yours, if you want.  Note the slider at top right: I started showing freshman classes of at least 200, as some small college data gets a bit funky.  You can expand or narrow that by pulling the sliders to your heart's content.

View 4: Multidimensional

Each college in this view is a bubble, arrayed on the chart in two dimensions: freshman discount and average net revenue per freshman.  The size of the bubble shows freshman selectivity (bigger is more selective).  The color of the bubble shows the percentage of freshmen with institutional aid.  Note that the highest net revenue institutions are also the most selective, suggesting people will pay for prestige (or prestige and wealth pave the way to admission). And the lowest net revenue institutions are dark blue, showing almost everyone getting institutional aid (either "merit" or "need-based," although those distinctions are silly).

Use the filters to limit the colleges on the view, and use the highlight function (just start typing) to highlight the institution of your choice.  Note especially what happens when you limit the view to colleges with higher tuition.  Go ahead.  You won't break anything.

As always, let me know what you see.

If you are at all interested in college admissions, you are perhaps already sick of the coverage of the Varsity Blues Scandal, in which some parents allegedly conspired to break the law to get unfair advantages for their children in the admissions process.

Almost no one thinks this behavior is appropriate.  Almost no one.

There have been calls for reform in college admissions.  And it's clear, of course, that this scandal has exposed some weak spots in the process.  At some institutions.  OK, a handful of institutions.  If you had your thumb removed.  And even then, it's a coach or two at those institutions who gave in to greed and decided to take the money in exchange for a spot on the team and a greased path to admission, which they controlled.

Of course, as we get deeper into discovery on this, we may find that the SAT and ACT cheating scandal goes far deeper than we would have anticipated.  Clearly, the most zealous users of the SAT and ACT are the very institutions everyone is fascinated with, and this is what they get for putting faith in a test that was a) originally designed to keep Jews out of the Ivy League, b) is now produced by a private company, and c) measures wealth and ethnicity better than academic potential. (The Institutional Research Office at Yale knew this as early as the mid-'60s and, fortunately for all of us, put it on paper and sent it to the archives.)

Those of us who work in higher education have long since given up trying to corral the media fascination with this handful of institutions and their quirky ways.  But the calls for reform suggest that admission to college in the US is extraordinarily competitive, so seeing the scope and context from a high level is still important, I think.

So, this:

Four views of admissions data from 2001 to 2017, all interactive and filterable to your heart's content.  Dive right in, or if you've never interacted with Tableau software before, take a few minutes to learn how it works.

1st Tab (tabs are across the top): The three bars represent freshman applications, admissions, and enrollments at all four-year, public and private not-for-profit institutions in America that are not open enrollment, from 2001 to 2017.  The number of institutions varies a tiny bit over time, but nothing that fundamentally changes this analysis.

The orange line represents the aggregate admissions rate (percentage of applicants admitted).  If you want to look at a subset of these institutions, use one of the controls at the top: Look at just public universities, or just colleges in New England, or use the filters in any combination. 

2nd Tab: Compare any four institutions to each other.  The view starts with four highly regarded Big 10 institutions, but you can use the drop down boxes to choose any four institutions you wish, from Abilene Christian to Youngstown State.

3rd Tab: This shows the universe broken into groups of colleges by selectivity.  Use the controls to see how many applications, admissions, or enrollments there are at each band of colleges (Most Selective is a 2017 admit rate of 10% or less, for instance; Extremely Selective includes all colleges with admit rates between 10% and 25% in that year). 

The top chart shows raw numbers; the middle chart shows percent of total; and the bottom shows how many colleges are in each category.  Hint: Dark blue is the most selective group of colleges--the ones everyone talks about.  And remember, this data doesn't even include community colleges (which do not report admissions data to IPEDS).
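If you'd like to see the banding idea in code, here's a minimal Python sketch.  Only the first two cutoffs (10% or less, and 10% to 25%) come from the description above; the catch-all bucket is a placeholder of mine, since the remaining band definitions aren't spelled out here:

```python
def selectivity_band(admit_rate_2017):
    """Bucket a 2017 admit rate into a selectivity band.
    Only the first two cutoffs are from the post; the rest is a placeholder."""
    if admit_rate_2017 <= 0.10:
        return "Most Selective"
    elif admit_rate_2017 <= 0.25:
        return "Extremely Selective"
    return "Other (cutoffs not specified in the post)"

print(selectivity_band(0.06))  # Most Selective
print(selectivity_band(0.22))  # Extremely Selective
```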

4th Tab: This shows four key metrics for all colleges, and can be broken out (that is, the lines separated and colored) by several different categories using the control at top right, and can be filtered to show only certain types of colleges using the controls in the middle of the right-hand column.

Draw rate (the bottom chart) is especially important, because it's a measure of market power, calculated as the yield rate divided by the admit rate.  If I might make a suggestion: Notice the national average draw rates, and then look at them by the selectivity categories.  While almost every college strives to raise this rate, note who has: The ones everyone talks about.  Chicken, meet egg.  You two fight it out.
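Here's what that calculation looks like, with made-up numbers (this is just the definition above applied to a hypothetical college, not any real institution):

```python
def draw_rate(applications, admits, enrolls):
    """Draw rate = yield rate divided by admit rate."""
    admit_rate = admits / applications   # share of applicants admitted
    yield_rate = enrolls / admits        # share of admits who enroll
    return yield_rate / admit_rate

# Hypothetical college: 20,000 applications, 5,000 admits, 2,000 enrolls.
# Admit rate 25%, yield 40%, so the draw rate is 1.6.
print(round(draw_rate(20_000, 5_000, 2_000), 2))
```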

As always, you can't break anything.  The little reset arrow in the lower right is your friend, so use it if you get stuck.

And let me know what you think.



Several years ago, The College Board produced a study of "discrepant performance," based on about 150,000 students and their freshman-year grades in college.  If you want to see the study, you can get a pdf of it here. The title, of course, is interesting. (And before we get too deep, it's important to note that these are old SAT scores, in case you think the new test is better and want to argue that point, nice tight concordance between the old and the new notwithstanding.)

Discrepant performance is defined as standardized test scores that are inconsistent with a student's academic performance in high school.  The distributions of scores and grades were normalized, and then each student's z-score on tests was compared to their z-score on grades.  Just under two-thirds of students had scores and grades that were consistent, and we don't need to talk too much more about them.

The most interesting thing is the other one-third of students: Those whose grades were higher than their tests (High GPA) and those whose tests were higher than their grades (High SAT).  Each group was about one-sixth (17.5%) of the sample.
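To make the mechanics concrete, here is a minimal Python sketch of the z-score comparison.  The one-standard-deviation cutoff and the student records are my own assumptions for illustration, not the study's actual threshold or data:

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of values (mean 0, standard deviation 1)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def classify(sat_scores, gpas, cutoff=1.0):
    """Label each student Consistent, High SAT, or High GPA.
    The cutoff (one standard deviation) is an assumption, not the
    study's published definition of 'discrepant.'"""
    labels = []
    for z_sat, z_gpa in zip(z_scores(sat_scores), z_scores(gpas)):
        if z_sat - z_gpa > cutoff:
            labels.append("High SAT")
        elif z_gpa - z_sat > cutoff:
            labels.append("High GPA")
        else:
            labels.append("Consistent")
    return labels

# Four hypothetical students
print(classify([1400, 1050, 1200, 900], [3.2, 3.9, 3.5, 3.8]))
```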

Back to The College Board presentation for a moment.  It suggests that high testers with low grades are a better risk than low testers with high grades, but it also says grades are a better predictor by themselves than tests.  That's not statistically impossible, of course, but it does seem curious.  And it does fly in the face of both my experience and conventional wisdom regarding the best way to make admissions decisions; I think most admissions officers believe high tests/low grades are the weaker bet of the two extremes.

But let's go with that for a minute.  And let's ask why it might be true. But first, let's argue with what little methodology is presented in the study.  A lot of the conceptual problem in predicting human performance, of course, comes from our own arrogance: In this case, the belief that a limited number of pre-college factors represent the sum total of factors affecting freshman grades.  How limited, in this case?  Two.

If you really wanted to get a good model to predict freshman performance, you'd look at a lot of factors: Family income, parental attainment, ethnicity of the student vis-à-vis the student body and vis-à-vis the high school they came from, just to name a few.  All of those factors are important, and what we find is that students from lower-income families, whose parents didn't go to college, and who feel out of place in the college they've enrolled in tend to struggle.  I don't see any of these factors controlled for in this analysis (if I'm wrong, I'll be happy to correct it).

You can see the table of how discrepant performance breaks out, but can you really see it?  Let me draw you a picture.  On this chart (which shows only the students with discrepant performance), the light blue bar on the left chart shows the number of students with high tests and lower GPA (High SAT); the orange bar on the left chart shows the number of students with low tests and high GPA (High GPA).  Hover over the bars to see how many there are.  On the right chart, the mauve-colored bar shows what percentage of each group had high SAT (the ones The College Board says you should give the breaks to).



Surprise: Guess who tends to have higher grades and lower scores?  Women (who get better grades at every level of education than men, by the way); poorer students; students from underrepresented ethnic groups; and students whose parents have less education.  This narrative plays smoothly into the prevailing wisdom of 1930, which suggested they just were not suited for higher education, and which some people still seem to believe.

Who has higher scores and lower grades? Men, white and Asian students, wealthier students, and children of well-educated parents.  And The College Board statistics tell you these are the students you should give a break to in the admissions process, because they did better on a three-hour test.  You see, in the simple approach, only SAT scores and GPA determine your college performance, and it's not at all affected by how much you have to work, or worry about money, or spend time figuring out how college operates, or whether you belong there.  So keep giving the white guys the break.

Two final points: If I took the labels off the bars and told you "This is the gender chart," or "This is the income chart," you could probably put the labels back on in the correct order pretty easily.  Second (and this could be a whole other blog post altogether), even the weakest students (a B- average) with the lowest test scores ended the first year with an average GPA of 2.0, and the differences between and among the groups are exaggerated by a truncated y-axis on the chart in the presentation.
As always, let me know what you think.
Much has been made recently of the attempts by colleges to increase the enrollment of Pell-eligible students.  For those who don't know, the Pell Grant is the federal grant awarded to students with the highest financial need.  In fact, the pressure may be backfiring, in a classic case of Campbell's law.

Regardless, given the state of federal reporting requirements (why can't the FISAP be in IPEDS??), this blunt tool is still the best one we have widely available to help take stock of the economic diversity of enrolling students.

So this is where we are.

This morning, Robert Kelchen sent this tweet about the data he uses to measure grad rate gaps between Pell and Non-Pell recipients.  I asked him for it, and he graciously shared it right away.  I spent 30 minutes visualizing it (for our own internal use, mostly), and tidied it up for others who might want to take a look.

On the first view, four data points are displayed: The college's grad rate for Pell (light blue) and Non-Pell (dark blue) students on the left; the percentage of Pell students in the measured freshman cohort (purple, center); and the gap, in percentage points.  The identical chart at the bottom breaks it out by sector.
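In case the percentage-point language trips anyone up, the gap is just the simple difference between the two grad rates.  A quick Python sketch (the numbers and the sign convention here are mine, not Robert's):

```python
def pell_gap_points(pell_grad_rate, non_pell_grad_rate):
    """Gap in percentage points; positive means Pell students graduate
    at a lower rate (a sign convention I'm assuming)."""
    return (non_pell_grad_rate - pell_grad_rate) * 100

# Hypothetical college: 62% Pell grad rate, 70% Non-Pell grad rate.
print(round(pell_gap_points(0.62, 0.70), 1))  # 8.0 percentage points
```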

I recommend you use the filters at the top to limit the top chart by a) the size of the cohort (for instance, between 500 and 5,000) and b) sector.  For these two filters, the bottom chart will not change.  However, if you want to look at a specific state, using that filter will affect both the top and bottom.

If you want to sort the data by either the red or purple bars, hover over the top of the column, and click on the small icon that appears.  Sort descending, ascending, or alphabetical on consecutive clicks.

The second chart is mostly a nothing burger: I was curious to see if the percentage of Pell students in the cohort had an effect on the gap.  As you can see, it doesn't.  On this chart, type and select any institution to see it highlighted.

And, as always, let me know what you see.



This data has long been of interest to high school counselors, and of course, I decided to update it at the worst possible time: During the recent shutdown of the federal government.  I found the NSF website shuttered.

Fortunately, the Polar Vortex gave almost everyone in Chicago a two-day break shortly after the government re-opened, and there was not much to do with the wind chill approaching -60° F; but the government was open again, and the data were available.  So here you go.

There are two simple views of doctoral education here: The first is the undergraduate institution of doctoral recipients from 2013 to 2017.  You can use the controls at the top to limit your view by public or private control, Carnegie type, state, or HBCU status.  If you want to, you can also focus on a single year or range of years using the sliders.

For instance, if you wanted to look at how many graduates of Baccalaureate institutions in California received a doctorate in chemistry in 2014, just play around until you get there. (The top college may surprise you!)

The second view is similar, but shows the universities awarding the doctorate.  The filters work the same way.

Let me know what you find interesting here.

When designing a data visualization, the first thing to ask is, "What does the viewer want to see, or need to know?"  If you're designing a dashboard for a CFO or a CEO or a VP for Marketing, those things are pretty straightforward: You're designing for one person, and you have a pretty good idea what that person wants.

But in higher education, we want to look at segments of the industry, and trends that are specific to our sector.  And there are thousands of you (if this blog post is average, that is).  So I can't know.

This visualization of enrollment data measures only one thing: Enrollment.  But it measures several different types of enrollment (full-time, part-time, graduate, and undergraduate, in combination) at many different types of institutions (doctoral, baccalaureate, public, private, etc.)  And the best thing is that you can make it yours with a few clicks.

The top chart shows total headcount, and the bottom shows percentage change since the first year selected.  If you want to change the years, or change the types of enrollment, or the universe of the colleges selected, use the gray boxes at the right.  At any time, use the lavender box at top right to change the breakouts of the charts: To color by region, or grad/undergrad, or any other variable listed.

There are lots of interesting trends here, some of which will help you realize that while enrollment may be declining, it's not declining everywhere, or for every type of institution.

See something interesting? Post in the comments below.


This is pretty interesting, I think, mostly for the patterns you don't see.

This is data on medical school admission in the US; some of it is compiled for a single year, and some for two years (which is OK because this data appears to be pretty stable over time.)

Tab 1 is not interactive, but does show application, admit, and admit-rate data on grids defined by GPA and MCAT scores.  Darker colors show higher numbers (that is, more counts, or higher admit rates).  While we cannot get a sense of all test takers like we do with other standardized tests, this does perhaps show a strong correlation between college GPA and MCAT scores.  (Of course, another explanation may be that students self-select out, which then makes me wonder about that one student with less than a 2.0 GPA and less than a 486 Total MCAT score who applied, was admitted, and then enrolled.)
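If you're curious how a grid like that gets built, here's a bare-bones Python sketch: bin each applicant by GPA and total MCAT, then compute an admit rate per cell.  The bin edges and the applicant records are invented for illustration and are not the AAMC's published bands or data:

```python
from collections import defaultdict

def gpa_bin(gpa):
    return "3.6+" if gpa >= 3.6 else "3.0-3.59" if gpa >= 3.0 else "<3.0"

def mcat_bin(total_mcat):
    return "510+" if total_mcat >= 510 else "495-509" if total_mcat >= 495 else "<495"

# Hypothetical applicants, not real AAMC records.
applicants = [
    {"gpa": 3.8, "mcat": 516, "admitted": True},
    {"gpa": 3.7, "mcat": 512, "admitted": False},
    {"gpa": 3.2, "mcat": 500, "admitted": False},
    {"gpa": 2.9, "mcat": 488, "admitted": False},
]

counts, admits = defaultdict(int), defaultdict(int)
for a in applicants:
    cell = (gpa_bin(a["gpa"]), mcat_bin(a["mcat"]))
    counts[cell] += 1
    admits[cell] += a["admitted"]

for cell, n in counts.items():
    print(cell, f"admit rate: {admits[cell] / n:.0%}")
```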

The second and third tabs show applicants by undergraduate major, and ethnicity, respectively.  Choose a value at upper right (Total MCAT, or Science GPA, or Total GPA, for instance), and then compare that value for all applicants and all enrolling students on the bars; gold is applicants, and purple is enrollers.  The label only shows the value for the longer bar; hover on the other for details.

I was frankly surprised by some of these results.  How about you?


IPEDS just released Fall, 2017 Admissions data, and I downloaded it and took a quick look at it. If you've been here before, most of this should be self-explanatory.

Three tabs, here: The first is to take a look at a single institution.  Use the control at top to select the college or university you're looking for. (Hint, type a few letters of the name to make scrolling quicker).

The second tab allows you to compare ten institutions (you can do more, but it gets messy).  I started with the ten most people want to see, but you can remove any of them by unchecking them in the drop-down and clicking Apply, and you can add institutions by checking the box by their name.

The final tab shows the relationship between test scores and Pell, which I've done before but never get tired of. Choose SAT or ACT calculated means for the x-axis, then limit by region and/or control if you so desire.

Notes:

1) Some of the admissions data for 2017 is tentative, so apparent anomalies are probably reporting errors.
2) Test-optional colleges are not allowed to report test data.
3) Financial aid data is for 2016, as the 2017 data is not yet available.  It tends not to change dramatically from one year to the next, however.


The last several days have seen a couple of articles about the decline of history majors in America.  How big is the problem?  And is it isolated, or across the proverbial board?

This will let you see the macro trend, and drill down all the way to a single institution, if you'd like.

The four charts, clockwise from top left, are: Raw numbers of bachelor's degrees awarded from 2011-2016 (AY); percentage of total (which only makes sense when you color the bars) to show the origins of those degrees; percentage change since the first year selected; and numeric change since the first year selected.

You can color the bars by anything in the top box at right (the blue one) or just leave totals; and you can filter the results to any region, or group of years, or major group (for instance, history, or physical sciences), or even any specific institution.  And of course you can combine filters to look at Business majors in the Southeast, if you wish.

That's it.  Pretty simple.  Let me know what looks interesting here.

