

The Arc Approach

A flat puzzle (tiling) with dozens or hundreds of identical pieces may sound a little dull and predictable. But what is the most interesting shape we can use, to get the most unusual designs and the most variety? To make it more visually interesting, let’s say we want a shape with no straight edges—only curves. The following guidelines should help us get started.

  1. Let’s use circular arcs, all with the same radius of a unit length. Hereafter we won’t talk about lengths, just about angles: the angles of the arcs and the corner angles where arcs meet. For good tiling these angles need to be divisors of 360°, such as multiples of 12° or 15°: “agreeable” angles.
  2. Since the arcs must fit together there must be as much concave arc as convex arc.
  3. We’ll look at shapes that at least tile periodically—that is, by repeating the shape in simple translation—but we are really looking for tiles that also fit together after rotation, with the more options the better.
  4. Let’s say we are free to use the reflection or mirror image of the shape. This might not seem important at first with symmetrical shapes but will be important later with more complex shapes and tilings.
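As an aside, the “agreeable” angles of guideline 1 are easy to list. A quick sketch (Python, not part of the original article; it simply enumerates the whole-degree divisors of 360°):

```python
# Enumerate the "agreeable" angles: whole-degree divisors of 360.
# (The guideline above singles out multiples of 12 and 15 degrees; every
# such multiple that divides 360 shows up in this list.)
def agreeable_angles():
    return [d for d in range(1, 361) if 360 % d == 0]

print(agreeable_angles())
# 24 divisors in all, including 12, 15, 30, 45, 60, 90, 120, and 180
```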

Circling the Square
It is simplest to start with a square, since we can just replace the sides with two concave and two convex arcs, and get a tiling based on adjacent squares as shown below. We can start with 90° arcs, which could encircle the square. The bottom shape below will show up again; it has been used for centuries. Since it could be viewed as a stylized horseshoe crab, we’ll call it the Crab.

The arc angle can be any value up to 180°, as shown below.

Similar results can be achieved when starting with a rhombus, but with a more distorted view:

Trying Triangles
If we want to start with a triangle, things become more difficult. We can’t just replace the three sides of an equilateral triangle with identical arcs, since we won’t be able to get the same amount of convex and concave arc.
A 45° right triangle can be easily converted by putting a 180° arc on the hypotenuse, and 90° arcs on the two smaller sides. This gives us the Crab again.

Any right triangle can be converted to a tiling shape, by putting a convex 180° arc on the hypotenuse, and same-radius concave arcs on each of the smaller sides. This is because any right triangle can be inscribed in a half circle.

This shape can tile periodically, and some special cases, such as conversions from 45° and 30°/60° right triangles, result in shapes with agreeable angles that can also tile with rotations. But with all other right triangles we can’t easily get the final agreeable angles we want.

Coming Full Circle

If we start with a whole circle, we will want to replace half the circumference with concave arcs. We could start by creating two 90° concave arcs, either opposite each other or next to each other, and get the same two shapes we got initially using squares.
We can also use three 60° concave arcs. This can be done in the three arrangements shown below.

These shapes can also be made using a hexagon as the starting point.
The shape on the right above, with the three adjacent concave cutouts, can be modified with other sizes of concave arcs. If we stay symmetrical, we can use various combinations of concave arcs totaling 180° as shown below. These will all tile in the same periodic manner. If the bottom middle cutout is reduced to nothing, we will have just two 90° concave arcs: the Crab again.

This approach with three concave cutouts in the lower half can also be used with a lens shape as the starting point. The lens is created by taking one arc (up to 180°) and mirroring it about its endpoints; the circle is just the special case where the arc is a full 180°. As we did with the circle, we can make three concave cutouts bounded by one of the arcs, with similar periodic tiling.

All the shapes above primarily tile predictably and periodically, albeit with a wide range of possible arc angles and corner angles. Some of them can fit together in more complex ways, with rotation and more choices for tiling. How can we get the most flexibility from a single shape; or better, from a family of shapes?

Trifocal Lenses

The shape family with the most overall flexibility has three sides. But it is not constructed from a triangle; rather, it starts with the desired corner angles or arcs in the framework of a lens shape.
Let’s say we want a triangle-like shape with usable corner angles of 30° and 60°. These will also be the angles of the two concave arcs. We could start construction with these, but it’s easier to start with the large-arc lens, whose arc will be the sum of these, or 90°. So we make a 90° arc and mirror it to make a lens shape. Then divide the mirrored arc into the two smaller arcs, and mirror each of them about its endpoints.

The resulting shape allows surprising flexibility for tiling.

The big advantage with this approach is that we choose the corner angles first, and the rest follows. If we want to build tiling around 5-pointed stars or flowers, we can choose small angles of, say, 36° and 72°.
Assuming we use reasonable angles, this construction and tiling works for any large angle up to 180°, and any proportioning of the two smaller arcs. The corner angle opposite the large convex arc is always the supplement (difference from 180°) of the large arc. And the smaller corner angles are always the same as the concave arcs.
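These angle relations can be checked with a few lines of code. This is a hypothetical helper, not from the article, encoding the rules just stated: the two concave arcs sum to the large arc, the corner opposite the large arc is its supplement, and the small corners equal the concave arcs.

```python
# Corner angles of a tricurve, per the rules above: the large convex arc A
# splits into two concave arcs a and (A - a); the corner opposite the large
# arc is 180 - A, and the two small corners equal the concave arcs.
def tricurve_corners(large_arc, small_arc):
    other_arc = large_arc - small_arc
    return (180 - large_arc, small_arc, other_arc)

corners = tricurve_corners(90, 30)   # the 30/60 example above
print(corners)       # (90, 30, 60)
print(sum(corners))  # 180
```

One pleasant consequence that follows directly from these rules: the three corner angles of any tricurve total 180°, just as in a straight-sided triangle.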


The above approach lets us make a wide range of shapes, with complex and varied tilings that are radial/polar, periodic, or non-periodic, or some combination of these. This new family of shapes we can call tricurves.
Please try this out, explore the possibilities, and share what you find!
For more information on tricurves, see National Curve Bank entry and article and more images.

The post Tiling with One Arc-Sided Shape appeared first on Math ∞ Blog.



US K-12 mathematics has long been dominated by the notion that any legitimate pathway through high school and into college requires passing over Calculus Mountain. It isn’t a real mountain, but it might as well be for those who wish to pursue certain professions, not all of which actually make use of calculus in graduate or professional school, or indeed in the field. In particular, medicine, dentistry, veterinary medicine, and other prestigious branches of “the healing arts” make calculus an absolute hurdle for those who wish to qualify for professional training. And the results can be harmful not only to individuals who are thus shut out because they can’t get over Calculus Mountain but to the professions themselves and the public they serve. Few doctors outside of those doing research ever use one lick of calculus after their undergraduate education, but only those who can “cut it” for four semesters in college are going to get into the requisite professional schools. And not all those who make it are necessarily the best-suited to be healers; neither are all those who are kept out necessarily otherwise ill-qualified.

The above issues aside, there seems to be a factor of inertia and traditionalism that keeps calculus as the “One True Grail” for school mathematics. And to ask why this need be the only path for our students is to risk being branded a heretic. One person who has challenged the received wisdom is Dr. Joseph G. Rosenstein, an emeritus professor of mathematics at Rutgers University. He’s done so not by attacking analysis, but by offering a major alternative for students in K-12: discrete mathematics. In the introduction to his textbook, PROBLEM SOLVING AND REASONING with DISCRETE MATHEMATICS, he writes that the text “addresses five types of issues related to problem solving and reasoning: (a) strategies for solving problems, (b) thinking and acting systematically, (c) mathematical reasoning, (d) mathematical modeling, and (e) mathematical practice. Although many of the mathematical topics in this book are not addressed in the current version of the Common Core State Standards, these five types of issues play a major role in what the standards refer to as ‘Standards for Mathematical Practice.’”

Dr. Rosenstein’s textbook offers a coherent treatment of a major strand of mathematics that has many of the most important qualities of both abstract and applied math, including rigorous thinking and proving, utility, and beauty, all of which can be accessible to K-12 students who may not necessarily have been drawn to or successful with every aspect of the traditional curriculum. In these senses, Rosenstein provides a pathway that exemplifies the heart and soul of the Standards for Mathematical Practice, the piece of the Common Core Mathematics Standards that to my thinking exists above and beyond any particular list or sequencing of topics in the Content Standards.

Further, his book is eminently accessible to a broad range of students and teachers. He has selected from the most appealing areas of discrete mathematics, starting with coloring, graphs, counting, and paths and circuits, all of which provide students with the opportunity to dive into serious math and be successful with it. The problems are chosen to be thought-provoking and engaging, with a mix of the concrete, the abstract, the applied, and the esthetic. Teachers who are not themselves deeply familiar with one or more of the topics can readily explore discrete mathematics with students in ways that would benefit everyone. Those who have a background in discrete mathematics will find plenty of familiar and new problems to challenge both their students and themselves.

As someone who loves discrete mathematics, I am truly excited to see this book appear just at a time when the United States needs to find alternatives for students who have not been well-served by the traditional curriculum for K-12 math. While no textbook can guarantee that it will instantly make students who have heretofore feared and loathed the subject come to love mathematics, I believe Rosenstein’s book comes close to offering a vast range of students a tempting invitation to reconsider math as something enjoyable, beautiful, and perhaps most importantly, understandable. Nothing breeds success like success, and with this book, students will be able to succeed in authentic mathematical problem-solving, regardless of previous experiences.

The post Review: Problem Solving and Reasoning With Discrete Mathematics appeared first on Math ∞ Blog.


Joseph G. Rosenstein

It’s our pleasure to welcome Professor Joseph G. Rosenstein to the Math Blog. He is Distinguished Emeritus Professor of Mathematics at Rutgers University. His biography is extensive and you may read about his accomplishments on his web page.

Michael Paul Goldenberg: Welcome, Joe. It’s a pleasure to be speaking with you again. I know you from attending a summer program for high school teachers in applied graph theory at DIMACS in the summer of 2001. You were running a parallel program for middle and elementary school teachers. You were quite a legend among the high school teachers who were returning from previous years. I was told to be  on the lookout for someone with a big beard, some sort of splashy tie, and shorts. Do you still dress like that during your summer work? Any reasons beyond a personal taste for the “Joe Rosenstein Look”?


Joe Rosenstein: Yes, I still look much the same. I still have a beard – my beard is now almost 52 years old – but, curiously, it is now much whiter than in the past. I always wear a tie, year-round, and whenever they’re out of style (which is often), they can be described as “splashy;” I started wearing ties over 50 years ago, and have quite a collection. I also wear shorts in the late spring and summer every year; indeed, with global warming, this year I wore shorts through the end of October. There is no mathematical, pedagogical, spiritual, or political reason for the way I look. It’s just who I am. 


MPG:  How did you first become interested in mathematics as a serious pursuit?  


JR: I can’t point to a specific moment or incident when I decided that mathematics would be the direction in which I would go. As a high school student, I would go to the main public library in Rochester, New York and devour whatever math books they had; I scored 100 on all the math Regents exams given in New York State (including Solid Geometry). It was natural for me to continue to focus on mathematics when I went to college. Initially, I was very discouraged, because most of the 75 students in the honors calculus course in which I was enrolled in Columbia had gone to high schools like Bronx Science and were much better prepared than me, but I stuck it out, and I was one of only six students who survived that two-year honors course. So clearly math and I were meant for each other. Therefore, it was clear that I would go to graduate school in mathematics.

I went to Cornell because while at Columbia I became interested in mathematical logic and Cornell was then (and still is) a center for mathematical logic, which indeed became my focus in graduate school. My mentors and thesis advisors at Cornell were Anil Nerode and  Gerald Sacks. One day I was told that I would be going to the University of Minnesota as an Assistant Professor, and that is what I did. That was September 1969.

I continued doing mathematical research for about 15 years and then decided to write a book called “Linear Orderings” which included much of the research that I had done, but was also intended to organize and put in one place all that was known about linear orderings; that book was published in 1983 in Academic Press’ Series on Pure and Applied Mathematics. By 1987, however, my focus had shifted from mathematical research to mathematics education.

When I am asked on occasion why I chose to go into mathematics, and to become an academic mathematician, I respond that I was good at math and never asked the question of what I really wanted to spend my life doing … so I just continued what I was doing successfully. Sixty years ago, young people often did not ask questions about their future, but rather took the path that was somehow laid out for them. As time passed, I learned that I had other interests and talents, and have pursued them as well. But I have no regrets about spending my life with mathematics.


MPG: What led you to a focus on discrete mathematics? 


JR: Before 1989 I had not focused on discrete mathematics, although in retrospect it had become clear to me by then that much of my research would now be called either theoretical computer science or infinite combinatorics. However, in that year, a proposal was submitted to the National Science Foundation to create a “Science and Technology Center” based at Rutgers whose focus was on discrete mathematics. As part of the proposal, applicants had to describe how their center would reach out to precollege teachers and students. By then I was already quite involved in running programs for high school teachers; I had started a well-attended annual “precalculus conference” (the 32nd conference will take place in March 2018!) and an annual week-long summer institute for new math teachers (and a parallel institute for new science teachers). So I think that we could make a convincing case that we could do the outreach to schools that NSF required. Indeed, our proposal was funded by NSF and we were one of the initial half dozen Science and Technology Centers; our center is called DIMACS, the Center for Discrete Mathematics and Theoretical Computer Science, and almost 30 years on, it is a flourishing center with an impressive track record and international reputation.

Soon after NSF funded DIMACS, I submitted proposals to NSF to fund programs for high school teachers and high school students. These proposals were also successful; indeed, I received four consecutive grants from NSF for the Rutgers Young Scholars Program (RYSP) in Discrete Mathematics (extending over 8 years) and three consecutive grants from NSF for the Leadership Program (LP) in Discrete Mathematics for teachers (extending over 11 years). The LP continued for another 8 years with funding from other sources and the RYSP celebrated its 26th iteration this past summer.

MPG: What did you learn from the Leadership Program in Discrete Mathematics? 

JR: In its initial years, the LP provided programs just for high school teachers. We learned from those programs that

  • students were motivated by, and indeed excited by, the focus on applications;
  • all students were able to learn the topics in discrete mathematics, since there were few mathematical prerequisites for these topics;
  • while being accessible to all students, the topics could provide challenges to mathematically talented students;
  • discrete math provided an opportunity for a “new start” in mathematics to students who had previously been unsuccessful;
  • discrete math also provided an opportunity for a “new start” for teachers, in that techniques they had avoided when teaching traditional topics (such as focusing on problem solving and understanding, using group work, and using questioning techniques) could be introduced successfully when teaching discrete math, and the teachers were often able to transfer their new strategies into teaching traditional topics.

Although we had initially assumed that discrete math would work only for high school students, the teachers in the LP told us that they were able to introduce discrete math topics successfully to middle school students as well.

As a result, when we applied for a second grant from the NSF, the reach was expanded to include middle school teachers as well. And the third, five-year, grant from the NSF was for K-8 teachers, since many of the topics in discrete math were accessible to elementary students as well, although the LP participants of course had to modify the curriculum and instruction to be suitable for their grade levels. 

MPG: You’ve spent a large part of your career promoting discrete mathematics as part of K-12 curricula on a state and national level. When I first came to the University of Michigan to do graduate work in mathematics education in 1992, I discovered that the State of Michigan already had a discrete mathematics strand embedded throughout its curriculum framework for math. While I was field supervisor of student teachers in secondary mathematics for U-M, I met a local high school teacher who was an enthusiastic booster of discrete math and did presentations for teachers at various conferences. I subsequently arranged to have him speak to my student teachers every semester and eventually was able to get a couple of them placed in his classroom. In my conversations with him, I realized that, outside of his classroom and those of a handful of other teachers around the state, the discrete mathematics strand was being honored far more in the breach than in the observance. The issue, he told me, was that there were absolutely no discrete mathematics items on the annual state assessments. How does this compare to your experiences in New Jersey? 


JR: In the early 1990s it became clear to the leadership of the Mathematical Sciences Education Board, part of the National Academy of Sciences, that improvement in mathematics education should be state-based. Influenced and encouraged by the MSEB, I created the New Jersey Mathematics Coalition in 1991, bringing together leaders from colleges, schools, industry, government, and the public sector in order to improve the mathematics education of New Jersey students. (Similar coalitions were created in subsequent years in many other states, as well as a national organization of such coalitions.) The coalitions advocated for the adoption of state standards.

As the idea of state standards gained momentum, the New Jersey Mathematics Coalition, in partnership with the NJ State Department of Education, was able to get a grant from the US Department of Education to create mathematics standards and a mathematics framework in New Jersey, an effort that I directed, that involved hundreds of New Jersey educators, together with people from the other sectors, and that produced the standards that were adopted by the State Board of Education in 1996 and that produced the New Jersey Mathematics Curriculum Framework to assist teachers and schools in implementing the standards.

These standards included a standard in discrete mathematics. Thus, when the state assessments were developed, they included items on discrete mathematics. Since the effort to develop the 1996 standards was not envisioned as providing specifications for statewide assessments, the standards were revised in 2002 in order to better align the revised state standards with revised state assessments.

As a result, New Jersey’s state assessments have, since soon after 1996 and until about 2008, included questions on discrete mathematics. I cannot avoid mentioning that, during this period, the scores of New Jersey students on the National Assessment of Educational Progress (NAEP) were among the highest in the nation.


MPG: What about on the national level? What happened to discrete mathematics with NCTM and with the Common Core Content Standards for mathematics? 


JR: With the advent of the Common Core standards, discrete mathematics has been essentially absent from the national math standards and, of course, from the New Jersey math standards. The one exception is that combinatorics has been incorporated into the standard relating to probability at the 8th grade level, although systematic listing and counting should be introduced at the elementary level.


MPG: Why do you think there has been so much resistance and inertia when it comes to discrete math in American K-12 mathematics education? 


JR: There has been a traditional and systemic resistance on the part of mathematicians against discrete mathematics, including a reluctance to consider it mathematics at all. Even Euler, the founder of graph theory, considered his solution to the question of whether a given graph has an Euler circuit as reasoning, not as mathematics. The attitude seems to be that if summations, integrals, partial derivatives, or complex numbers are not present, then it’s not real mathematics. This perspective filters down to the K-12 curriculum into the view that all of mathematics should be preparation for calculus, a view that is echoed by teachers who never learned discrete mathematics in preparing for their teaching careers.

Indeed, although the national standards were always intended to include the mathematics that prepare students for college, careers, and citizenship, the Common Core has hijacked the math standards so that it became preparation for calculus. That is very unfortunate, for not all students need to prepare themselves for calculus.

Unfortunately, most teachers don’t know discrete math or its value for their students, and most college mathematicians who teach prospective teachers do not know how valuable it would be for their students, our future K-12 teachers.

MPG: Why do you think the Common Core focused on preparing students for calculus?

JR: Three important reasons that I think led to the Common Core’s focus on preparing students for calculus were (a) a concern about the STEM pipeline, (b) a concern about US students’ performance on international assessments, and (c) a concern about the number of students who come to college with inadequate preparation for college math courses. From a superficial perspective, each of these supports a calculus-based curriculum.

With respect to (a), our research suggests that it is not that the STEM pipeline is too small (if shortages indeed exist), but rather that it is too leaky; a substantial number of students who already appear to be in the STEM pipeline are provided little encouragement to pursue STEM-based careers and drop out of math after taking AP calculus. 

With respect to (b), if the gap between US students and international students is indeed a problem (of which I am not convinced), then it’s not clear that narrowing our curriculum is an effective way to solve the problem. (In theory, if we spend 100% of our class time on the topics covered in the international assessments, then our students will do better than if we spend 90% of our class time on those topics and 10% of our time on other topics.) Our students’ scores may rise, but their overall understanding and experience with mathematics will suffer.

With respect to (c), very few of the students who come to college unprepared for college mathematics are going to end up taking, let alone succeeding, in calculus. They would be better prepared for college mathematics if they had a stronger experience with problem solving and reasoning.

Looking at the three issues more closely, we see that what all of these students need is not a narrower curriculum, but a broader curriculum, one that focuses more on problem solving and reasoning, as is the case when one incorporates discrete mathematics in the curriculum.

These ideas are discussed in more detail in my article “The Absence of Discrete Mathematics from Primary and Secondary Education in the United States … and Why that is Counterproductive” that will soon appear in the ICMI-13 Monograph published by Springer and entitled “Teaching and Learning Discrete Mathematics Worldwide: Curriculum and Research.”

MPG: I taught math in an alternative high school from 1998 to 2000. Virtually all of my students were testing at a 4th to 5th grade level in literacy and mathematics, and few of them had earned any high school credits in mathematics. During my second year there, after stumbling around trying to find something that would be accessible and interesting to students who feared and loathed mathematics and were so weak in basic arithmetic that trying to teach them algebra was essentially futile, I stumbled via the Core-Plus curriculum into a unit on graph theory. While many of them floundered with Euler circuits and paths, something I thought would work for them, I was thrilled to finally find a topic that more than a few of them liked and with which some of the weakest and most resistant students were extremely successful: graph coloring. What are your thoughts on that as a way to engage students who have not been doing well previously?


JR: That is exactly right. When I teach courses for prospective K-8 teachers, I start with map coloring. More specifically, I provide each group of 4-5 students with a map of the continental United States in which all the states are colored white, and an envelope full of paper chips of various colors, and ask them to color the states so that bordering states have different colors. I don’t say anything about the number of colors. Of course, each group colors the whole map in one or two minutes, and I then ask them whether they can eliminate one color, then whether they can eliminate another color, and so on, and every group is always able to reduce the number of colors to four.

This activity, which we also used to start off the Leadership Program, has the advantage that it doesn’t appear to be mathematics, and therefore does not arouse all of the students’ and teachers’ residual fears about mathematics, the negative experiences they have had with math in the past, and their lack of confidence in their mathematical abilities. Through this activity, they find out that they can be successful in mathematics, and this initial successful encounter – and those that follow – enables them to continue to succeed. This activity, as well as the other activities in the LP, was developed by myself and Valerie DeBellis, who served as LP Associate Director.

My book, “Problem Solving and Reasoning with Discrete Mathematics,” [note: reviewed this month] also begins with map coloring, although the activity described above does not work the same way with readers of a textbook as it does in a classroom setting. From map coloring, we go to vertex-edge graphs and graph coloring, and then to applications of graph coloring, and then to systematic construction of graphs. This book is designed both for a course for high school students and for a course for prospective K-8 teachers. But it is also appropriate for those who are mathematically curious.
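For the mathematically curious reader, the spirit of the map-coloring activity can even be mimicked in code. The sketch below is purely illustrative: it uses a tiny made-up border list rather than the full US map, and the greedy strategy it applies (give each region the smallest color not used by an already-colored neighbor) is roughly what groups discover by hand. Greedy coloring is not guaranteed to be optimal, though by the four-color theorem four colors always suffice for a real map.

```python
# A toy version of the map-coloring activity: color each region with the
# smallest color not already used by a colored neighbor (greedy coloring).
# The border list is a small made-up fragment, not the full US map.
borders = {
    "NJ": ["NY", "PA", "DE"],
    "NY": ["NJ", "PA", "CT"],
    "PA": ["NJ", "NY", "DE"],
    "DE": ["NJ", "PA"],
    "CT": ["NY"],
}

def greedy_coloring(graph):
    colors = {}
    for region in graph:
        used = {colors[n] for n in graph[region] if n in colors}
        colors[region] = next(c for c in range(len(graph)) if c not in used)
    return colors

coloring = greedy_coloring(borders)
# No two bordering regions share a color:
assert all(coloring[a] != coloring[b] for a in borders for b in borders[a])
print(len(set(coloring.values())))  # 3 colors suffice for this fragment
```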


MPG: Do you see any promising approaches to getting discrete mathematics accepted as an option for students in K-12 who may not be ready for or interested in calculus? 


JR: As it becomes clearer that the Common Core standards are the wrong standards, that they are inappropriate for a substantial number of students, the question of which standards are appropriate will be answered on a state-by-state basis, following the conception formulated by the MSEB almost 30 years ago. Perhaps having national standards is a good idea but, given the disastrous standards that were produced in this round, the national standards route will not be taken for many decades. That will make it possible for those math teachers who know about discrete mathematics to attempt to convince their states of its value.

In order for that to happen, those teachers need to work now to institute courses that can serve as models for their colleagues. For the past ten years, their hands have been tied, as schools and districts insist on spending class time exclusively on what’s in the standards. But in the coming decade, I anticipate that those restrictions will be lifted and teachers will have more freedom to explore other mathematical topics, including discrete math.


MPG: What are you working on now? 


JR: High school math teachers will have to convince their supervisors and principals that discrete mathematics is valuable for their students. So I am working on producing a video (actually a series of videos) that are designed for that purpose – emphasizing the importance of problem-solving and reasoning and how discrete math promotes that, emphasizing the importance of seeing how math can be used to solve real-world problems (which in a calculus-based curriculum doesn’t happen very much until calculus), and emphasizing that all students can benefit from learning about these topics. I hope that these videos will convince teachers of the value of discrete mathematics and that teachers will encourage their supervisors and principals to watch the videos and also come to see the value of discrete mathematics. I hope that many will, as a result, introduce discrete mathematics into their schools on an experimental basis and, when successful, will expand its availability to all of their students.

In order for this plan to work, and in order for teachers to realize that my book, “Problem Solving and Reasoning with Discrete Mathematics,” and other books are available, I will also have to develop strategies for reaching teachers, including through social networks – and for that I hope to recruit many other educators to promote discrete mathematics. I hope that some of those who read this interview will join me in this enterprise.

As you may know, we just elected a new governor in New Jersey. I hope that he will be amenable to developing new math standards for New Jersey, and I am prepared to be actively involved in that process. Such an effort may lead to bringing discrete math back into the New Jersey curriculum.


MPG: Anything else you’d like to share with us? 


 JR: The website for my book on discrete mathematics is new-math-text.com. I also have another website for other books I have written on Jewish themes, and particularly Jewish prayerbooks – that is newsiddur.org.

I recently retired after 48 years at Rutgers, and am getting used to adding “emeritus” to my title of “Distinguished Professor of Mathematics.” My Rutgers website is http://dimacs.rutgers.edu/~joer/joer.html

Finally, my wife and I have been married for 48 years, and are very proud of our five daughters, five sons-in-law, and eleven grandchildren.


MPG: Thanks so much for taking the time to speak with us, Joe, and for sharing your passion for mathematics.

The post Joe Rosenstein: The Art of Being Discrete appeared first on Math ∞ Blog.


Typical algorithms for doing square roots by hand require estimation. I have taught a different algorithm that does not rely on estimation but instead uses subtraction of successive odd integers. First, I offer examples that illustrate two situations that may arise. Then I present a third situation (as well as how to deal with the square roots of non-perfect squares).

This approach is based on the fact that the nth perfect square is the sum of the first n odd integers. This fact can be used to subtract successive odd integers from a given number for which one wishes to find the square root. If the number isn’t a perfect square, this method can be extended by adding pairs of zeroes to the original number and continuing the process for each additional decimal place one wishes to find.
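The identity behind the method is easy to check numerically. A minimal sketch (the helper name is mine, not from the article):

```python
# The fact in question: 1 + 3 + 5 + ... + (2n - 1) = n^2.
def sum_of_first_n_odds(n):
    return sum(2 * k - 1 for k in range(1, n + 1))

# Spot-check the identity for the first thousand values of n.
print(all(sum_of_first_n_odds(n) == n * n for n in range(1, 1001)))  # → True
```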


It helps to look at a couple of examples to illustrate two “special cases” that arise with some numbers, requiring one or two additional “rules” or steps.

Using the example of 54,756:

Start by marking pairs of digits from the right-most digit: 5 | 47 | 56

Then subtract 1 from the leftmost digit or pair: 5 – 1 = 4.
Continue with the next odd integer: 4 – 3 = 1.

We can’t subtract 5 from 1, so we count how many odd integers we’ve subtracted thus far (2) and mark that above the 5.

Bring down the next pair of digits and append it to the 1 yielding 147.

To get the next odd integer to subtract, multiply the last odd integer subtracted by 10 and add 11 (this is Rule #1) to the product. Here, we have 10 x 3 + 11 = 41. Proceed as previously, subtracting 41 from 147 = 106.
Subtract the next odd integer, 43 from 106 = 63.
Subtract the next odd integer, 45 from 63 = 18.

Again, we can’t subtract 47 from 18; counting, we have done 3 subtractions and place 3 above the pair 47. Multiply 45 x 10 and add 11 = 461.

Bring down the next pair of digits, 56, and append them to the 18, yielding 1856.

Subtract 461 from 1856 = 1395.
Subtract the next odd integer, 463 from 1395 = 932.
Subtract the next odd integer, 465 from 932 = 467.
Subtract the next odd integer, 467 from 467 = 0.

Counting how many subtractions, we see it is 4 and we write 4 above the 56.

Our answer is that 234 is the square root of 54,756. Alternatively, instead of keeping a running total of the subtractions and placing the digits above successive pairs of digits from the left, take the last number subtracted, 467, add 1, and divide the result by 2: (467 + 1) / 2 = 234, the same as what we determined the other way.
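The whole procedure can be sketched in a few lines of code. This is my own illustrative implementation of the steps above (the function and variable names are mine, not from the article):

```python
def odd_subtraction_sqrt(n):
    """Square root of a perfect square by subtracting successive odd
    integers, one pair of digits at a time (a sketch of the method
    described above)."""
    s = str(n)
    if len(s) % 2:               # pad so the digits split into pairs
        s = "0" + s
    remainder, odd, digits = 0, 1, []
    for i in range(0, len(s), 2):
        remainder = remainder * 100 + int(s[i:i + 2])  # bring down a pair
        count = 0
        while remainder >= odd:  # subtract successive odd integers
            remainder -= odd
            odd += 2
            count += 1
        digits.append(str(count))       # the count goes "above" this pair
        odd = (odd - 2) * 10 + 11       # Rule #1: last odd x 10, plus 11
    return int("".join(digits))

print(odd_subtraction_sqrt(54756))  # → 234
```

For 54,756 this reproduces the hand computation exactly: counts of 2, 3, and 4 over the three pairs. (Numerically, the Rule #1 update in the last line also reproduces the zero-insertion rule introduced in the next example, since both equal ten times the next unsubtracted odd integer minus 9.)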


A second example introduces another rule not previously required: find the square root of 41,210,624,016 using the subtraction of successive odd integers.

Begin as above by marking pairs of digits from the right-most digit: 4 | 12 | 10 | 62 | 40 | 16

Subtract 1 from 4 = 3.
Subtract 3 from 3 = 0.
Write down 2 for the two subtractions above the 4.
Bring down the next pair of digits, 12.
Multiply 3 x 10 and add 11 = 41.
Note that 41 is too big to subtract from 12.
Write 0 above the 12, since we did 0 subtractions.

Bring down the next pair of digits, 10, and append to the 12 => 1210.

Insert a 0 to the left of the last digit in 41 => 401. (This is Rule #2)
Subtract 401 from 1210 = 809.
Subtract the next odd integer, 403, from 809 = 406.
Subtract the next odd integer, 405 from 406 = 1.

For the three subtractions, write 3 above the 10.

Bring down the next pair of integers, 62 and append to the 1 => 162
Multiply 405 by 10 and add 11 = 4061.
We need to apply Rule #2 again. Write 0 above the 62, bring down the next pair of digits, 40, and append to the 162 => 16240.
Insert 0 to the left of the last digit of 4061 => 40601.

Note that this is still too big to subtract from 16240.
Apply Rule #2 again (and it may have to be applied more than twice in particular cases).
Write 0 above the 40, bring down the 16 and append to the 16240 => 1624016.
Insert a 0 to the left of the last digit of 40601 => 406001.

Subtract 406001 from 1624016 = 1218015.
Subtract the next odd integer, 406003 from 1218015 = 812012.
Subtract the next odd integer, 406005 from 812012 = 406007.
Subtract the next odd integer, 406007 from 406007 = 0.
Write a 4 above the last pair of digits, 16.

The square root of 41,210,624,016 = 203,004.

Again, alternatively, the answer = (406007 + 1) / 2 = 203,004.
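Rule #2 is a purely mechanical edit of the odd number. Every odd integer produced by Rule #1 ends in 1, so inserting a 0 before its last digit is the same as computing (odd − 1) × 10 + 1. A small sketch (the function name is mine):

```python
def rule2(odd):
    """Rule #2 (sketch): insert a 0 to the left of the last digit.
    Equivalent to (odd - 1) * 10 + 1 for numbers ending in 1."""
    s = str(odd)
    return int(s[:-1] + "0" + s[-1])

print(rule2(41))      # → 401
print(rule2(4061))    # → 40601
print(rule2(40601))   # → 406001
```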


There is a group of numbers for which the process previously described won’t work. For example, try to use it to find the square root of 100.

Grouping as before: 1 | 00

Subtracting 1 from 1 = 0.

Write 1 above the 1, bring down the next pair of digits, 00, and append to the 0.

Multiply 1 x 10 and add 11 = 21.

Can’t subtract 21 from 0. Hmm. Although we know the answer is 10, to make things work, we can note the following, which is Rule #3:

If you want the square root of a whole number that ends in two or more zeros, write the number as a product of a number and an even power of ten.

So 100 = 1 x 10^2.

We get that the square root of 1 = 1, append one zero for every pair of zeroes in the original number, and Bob’s your uncle. (Or something like that).

For example, to find the square root of 3,610,000, remove two pairs of zeroes from the original number, then apply the original procedure:

Group: 3 | 61.

Subtract 1 from 3 = 2

Can’t subtract 3 from 2, so write 1 above the 3, bring down the next pair of digits and append them to the 2 => 261.

Multiply 1 x 10 and add 11 = 21.

Subtract 21 from 261 = 240.
Subtract 23 from 240 = 217
Subtract 25 from 217 = 192
Subtract 27 from 192 = 165
Subtract 29 from 165 = 136
Subtract 31 from 136 = 105
Subtract 33 from 105 = 72
Subtract 35 from 72 = 37
Subtract 37 from 37 = 0

So write a 9 above the 61; the square root of 361 is 19. Append two zeroes to the 19, one for each pair removed.
Then the square root of 3,610,000 = 1900.
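Rule #3 amounts to peeling off pairs of trailing zeroes, taking the root of what remains, and restoring one zero per pair. A sketch under that reading (names are mine):

```python
def strip_zero_pairs(n):
    """Rule #3 (sketch): factor out pairs of trailing zeroes,
    returning the reduced number and how many pairs were removed."""
    pairs = 0
    while n > 0 and n % 100 == 0:
        n //= 100
        pairs += 1
    return n, pairs

reduced, pairs = strip_zero_pairs(3_610_000)
print(reduced, pairs)    # → 361 2
# The odd-subtraction method gives sqrt(361) = 19; restore one zero per pair:
print(19 * 10 ** pairs)  # → 1900
```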


Finally, this process works for whole numbers that aren’t perfect squares and for decimals. It just won’t terminate in those cases (except arbitrarily). For a decimal, also break the number into pairs of digits to the right of the decimal point.

For example, consider finding the square root of 3 to three decimal places.

Append pairs of zeroes for each decimal place you want in the answer, plus two more to be able to round to the given place.

So write 3 as 3 | 00 | 00 | 00 | 00

Subtract 1 from 3 = 2.

Write 1 above the 3. Bring down a pair of zeroes, append to the 2 => 200.

Multiply 1 x 10 and add 11 = 21.

Subtract 21 from 200 = 179
Subtract 23 from 179 = 156
Subtract 25 from 156 = 131
Subtract 27 from 131 = 104
Subtract 29 from 104 = 75
Subtract 31 from 75 = 44
Subtract 33 from 44 = 11.

Write 7 above the first pair of zeroes.

Bring down the next pair of zeroes and append to the 11 => 1100.

Multiply 33 x 10 and add 11 = 341.

Subtract 341 from 1100 = 759.
Subtract 343 from 759 = 416.
Subtract 345 from 416 = 71.

Write 3 above the second pair of zeroes.

Append the next pair of zeroes to the 71 => 7100.

Multiply 345 x 10 and add 11 = 3461.

Subtract 3461 from 7100 = 3639.
Subtract 3463 from 3639 = 176.

Write 2 above the third pair of zeroes.

Append the last pair of zeroes to the 176 => 17600

Multiply 3463 x 10 and add 11 = 34641.

We could continue, but it suffices to realize that the next digit will be 0 and so our answer is that the square root of 3 is 1.732 rounded to three decimal places.
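The decimal extension can be sketched the same way. Here Python’s `math.isqrt` stands in for the whole-number odd-subtraction method (both return the same integer root); the function name and structure are mine, not the article’s:

```python
from math import isqrt

def sqrt_to_places(n, places):
    """Decimal extension (sketch): append one pair of zeroes per desired
    decimal place, plus one extra pair for rounding, take the integer
    square root, then shift the decimal point back."""
    root = isqrt(n * 100 ** (places + 1))   # e.g. 3 -> 3|00|00|00|00
    return round(root / 10 ** (places + 1), places)

print(sqrt_to_places(3, 3))  # → 1.732
```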

The post Finding Square Roots Without Estimating appeared first on Math ∞ Blog.


James Grime is a mathematician with a personal passion for maths communication and the promotion of mathematics in schools and to the general public. He can be mostly found doing exactly that, either touring the world giving public talks or on YouTube.

James has a Ph.D. in mathematics and his academic interests include group theory (the mathematics of symmetry) and combinatorics (the mathematics of networks and solving problems with diagrams and pictures). James also has a keen interest in cryptography (the mathematics of codes and secret messages), probability (games, gambling and predicting the future) and number theory (the properties of numbers).

James went on to study mathematics at Lancaster University. He was attracted by the challenge of the analytical and creative thought required in a maths degree, but it was probably the lack of essays and the reading list he found most attractive. Later, James went to York University with the aim of getting a Ph.D. and avoiding the real world for at least another three years. He was successful on both counts.

In his spare time, James has many hobbies, including juggling, unicycling and a great number of other circus skills, and has finally embraced the fact that his ultimate purpose in life may be simply to make a fool of himself in public.*

Michael Paul Goldenberg: Welcome, James. It’s very exciting to have a chance to speak with you.

James Grime: Hi Michael, I’m very happy to be here.


MPG: I’ve wanted to ask you this for years, James: where did you come up with the name, “The Singing Banana,” or would that be giving away a deep secret?

JG: Ha! In case people don’t know, singingbanana is my original YouTube channel. When people ask me about the name I like to pretend that it’s a perfectly reasonable name for a maths channel. The truth is, I have been using singingbanana as my internet name since I was 17 – when the internet was young. I took the name from my school tuck shop [Editor’s note: “canteen” to us Yanks] which itself was inspired by an advert from around the time. So that’s the name I naturally used when I first got a YouTube account – I never intended it to become so public. Having said that, I don’t want to change it, I like it.


 MPG: It’s extremely memorable and evocative of a host of things. So which came first, your own video channel or Numberphile? How did you become involved with Brady Haran and the mathematicians and scientists who contribute there?

JG: My own channel was first. It started in the early days of YouTube and the videos I uploaded were holiday videos and videos of me juggling. Then one day a friend showed me a puzzle he had come up with, which was good but I wasn’t happy with the set-up for the puzzle. So I recorded a different version of the puzzle just to show my friend. To my surprise, a few other people saw the video, so I thought I would make another maths video. And that was the start of it.

A few years later Brady launched Periodic Videos, his chemistry channel. I showed this to people in my department and said, look this is what we should be doing. There wasn’t much enthusiasm. I wrote a Guardian blog about the rise of science on YouTube including channels like Periodic Videos and I contacted Brady saying if he ever makes a maths channel to let me know. And he did.


 MPG: One of the first videos of yours I saw was on non-transitive dice. You were introducing some gear you’d developed and your enthusiasm was like nothing I’d ever heard or seen from a mathematician. Aside from the intriguing mathematics with which I was partially familiar (though I’d never seen dice that could be successfully played against two opponents simultaneously!), I was captivated by the excitement you were able to convey. Is that a common reaction you get?

JG: Thank you! I always believe in leading by example. And if I want people to be excited and interested in mathematics, how can I expect them to be if I don’t look interested and excited. So I try and present in that way. I remember one YouTube comment that said I was like a children’s TV presenter. I think they intended that as an insult, but that’s kind of what I am going for!


I don’t come from an academic background. So it was children’s TV that got me interested in science and maths. What’s great about YouTube is that it reaches kids all over the world, many of whom don’t come from academic backgrounds either.


 MPG: It’s hardly original for me to observe that many people, quite likely most people, consider mathematics to be a rather dry subject that could never engender the sort of passion you express. Would you comment on the role of online videos in changing that perception?

 JG: Absolutely. And I understand what you mean about the perception of maths being a very dry subject. Mathematicians don’t believe that but need to get our enthusiasm across. I think it’s very important to humanize the subject, to see a real person – with all their quirks – someone who is interested in what they are talking about. Even if you don’t understand it all, you can see the person’s enthusiasm. And people respond to that. That is something Numberphile has done very well.

MPG: How old were you when you started to consider seriously becoming a mathematician? Who or what were some of the big influences on your taking that path?

JG: It was a secret ambition in the back of my mind. I think I discovered that was a thing you could do when I was around 10 – and I learned that through TV. It was presenters like Johnny Ball and the Royal Institution Christmas Lectures that showed me that. However, it seemed unlikely so I kept that ambition to myself, while quietly working towards it.

Being a mathematician was always the goal, but I took it one step at a time, from A-levels, to university, to Ph.D. But part of the plan had always been to pay back what those TV presenters had done for me. So it wasn’t an accident when I started making maths YouTube videos. But no one expected it to be as popular as it became. And it has been an honor to pay back a little of what I experienced watching the presenters I watched as I child.

MPG: Let me backtrack a bit: do you run into resistance and/or criticism of what you and others who are using YouTube to reach schoolchildren and others about mathematics they would not likely encounter in K-12 classrooms? I ask in particular as someone who has encountered a strain of staunch resistance from various mathematicians, engineers, and the like to straying outside the traditional school mathematics topics. In the US, what has come to be called “The Math Wars” has been going on for the past quarter century. To those on one side, everything you’re doing would be termed “math avoidance,” rather than the much more positive descriptors I would use. Have you had to deal with that kind of thing?

JG: I haven’t had that problem. Maybe it is because the UK has a long tradition of this kind of public lecture for children and the general public. I know the Royal Institution Christmas Lectures go back to Michael Faraday in 1825. I am a fan of these lectures that introduce real science in an accessible way to young people. It is true that I am trying to show people the interesting stuff beyond the school curriculum. A good analogy is how we learn music. We have to learn the basics but we are still allowed to listen to great classical pianists or your favorite pop star. In the same way, I am trying to provide inspiration and motivation for what students learn in school.

MPG: What is one of your favorite videos from your channel or Numberphile? What makes it special to you? How do you imagine teachers making use of it with students?

JG: That’s a really difficult question to answer. I think the best videos are the ones that stimulate your curiosity. I know the most popular videos are ones about infinity or dividing by zero. I think these are questions that a lot of students of maths are interested in. There are some great fun ones, too; viewers I meet often mention the video where we try to order 43 chicken nuggets from McDonald’s – it can’t be done, by the way!


 MPG: Tell us a bit about the live presentations you do. Who is the audience and what typically goes into your talks?

JG: I travel the UK and the world giving talks about maths; in particular, I talk about the history and mathematics of code breaking. It’s called The Enigma Project and I bring with me an original WWII Enigma machine – one of the few left in the world. I visit schools and speak to students of all ages, from primary school to secondary school and colleges, as well as universities, festivals and other events. It’s a pleasure to do and people love it – because codes are cool! Who doesn’t love spies and secret messages? The real message behind it all, though, is that it’s about solving problems and showing people what it’s like to be a mathematician.


 MPG: Any projects in the works you’d like to tell us about?

JG: I am currently working on a small exhibit with the Fitzwilliam Museum in Cambridge. It’s called Codebreakers and Groundbreakers and will be open from the 23rd of October 2017 to February 2018. Mainly it’s about two code-breakers, Alan Turing who broke the German Enigma code in WWII and Michael Ventris who broke a forgotten Greek script called Linear B in the 1950s. So although one was a mathematician and one was a linguist we are trying to show that there are skills these people shared. And the success of Bletchley Park in WWII was due to the collaboration of people from different disciplines.

MPG: Thank you so much for sharing your time with us, James. I hope we can get more people looking at mathematics and science via the work you and the other folks at Numberphile are doing. I can say sincerely that if something like this had been available when I was in school, I wouldn’t have waited until my mid-thirties to start playing seriously with mathematics.

 JG: Thank you!


*Hence the title of this interview, coupled with my recent viewing of Robert Altman’s 1985 film of Sam Shepard’s amazing play, Fool For Love.

The post James Grime: FOOL FOR MATHS* appeared first on Math ∞ Blog.

Subtraction: What is “the” Standard Algorithm?

One common complaint amongst anti-reform pundits is that  progressive reform math advocates and the programs they create and/or teach from “hate” standard arithmetic algorithms and fail to teach them. While I have not found this to be the case in actual classrooms with real teachers where series such as EVERYDAY MATHEMATICS, INVESTIGATIONS IN NUMBER DATA & SPACE, or MATH TRAILBLAZERS were being used (in fact, the so-called “standard” algorithms are ALWAYS taught and frequently given pride of place by teachers regardless of the program employed), the claim begs the question of how and why a given algorithm became “standard” as well as how being “standard” automatically means “superior” or “the only one students should have the opportunity to learn or use.” It strikes me that it is almost as if such people are stuck in some pre-technological age in which we trained low-level white collar office workers to be scribes, number-crunchers who summed and re-summed large columns of figures by hand, etc. The absurdity of seeing kids today as needing to prepare for THAT sort of world is evident to anyone who spends any time in a modern office, including that of a small business. Desktop and handheld calculators are commonplace. So are desktop and laptop computers, not to mention tablets or smartphones in shirt pockets running Desmos, Wolfram|Alpha, etc. There is a need for people to understand basic mathematics, but not to be fast and expert number-crunchers in that 19th-century sense.

Thus, it seems reasonable to ask what should be an obvious question: if the goal is to know what numbers to crunch and how (what operations need be used) to crunch them, and, most importantly, to correctly interpret and make decisions based upon the results of the right calculations, and further if it is glaringly obvious that the actual number-crunching itself is done faster and more accurately by machines than by the vast, vast majority of humans can reasonably expect to do, why would any intelligent person be obsessing in 2017 over the SPEED of an algorithm for paper and pencil arithmetic? For the big argument raised for always (and exclusively) teaching one standard algorithm for each arithmetic operation seems to be speed and efficiency.

I have argued repeatedly that the efficiency issue is only reasonable if one fairly assesses it. And to do that is to grant that a student who misunderstands and botches ANY algorithm is unlikely to be performing “efficiently” with it. Compared with a student who uses even a ludicrously slow algorithm (e.g., repeated addition in place of any other approach to multiplication) accurately, the student who can’t accurately make use of the fastest possible algorithm is going to be taking a long time to arrive at the right answer which will be reached, if at all, only after many missteps and revisits to the same problem. For that student, at least, the “algorithm of choice” is not efficient at all. So finding one that the student understands and can use properly would by necessity be preferable. But not, apparently, in the mind of ideologues. For them, there’s one true way to do each sort of calculation and they are its prophets.

Of course, I’m not favoring teaching alternate algorithms because I dislike any particular standard one or feel the need to “prove” that, say, lattice multiplication is “better” than the currently favored algorithm. On the contrary, I’m all for teaching the standard algorithm. But not alone and not mechanically, and not at the expense of student understanding. Indeed, from my perspective, it’s difficult to understand why it is necessary to mount a defense for alternative algorithms in general, though any particular one may be of questionable value and might need some justifying or explaining. If anything, it is those who hold that there is a single best algorithm that is the only one that deserves to be taught who need to make the case for such a narrow position. In my reading, I’ve yet to encounter a convincing argument, and indeed most people who hold that viewpoint seem to think it’s glaringly obvious that their anointed algorithms are both necessary and sufficient.

What compounds my outrage at the narrower viewpoint is the fact that it is based for the most part on woeful historical ignorance. Previously, I’ve addressed the question of the lattice multiplication method, which has come under attack from various anti-reform groups and individuals almost certainly because it has been re-introduced in some progressive elementary programs such as Everyday Math and Investigations in Number, Data, and Space. The arguments raised against it are very much in keeping with above-mentioned concerns with speed and efficiency. Ostensibly, the algorithm is unwieldy for larger, multi-digit calculations. The fact is that it is just as easy to use (easier for those who prefer it and get it), and while it’s possible to use a vast amount of space to write out a problem, it’s not required that one do so and the amount of paper used is a social, not a pedagogical issue. But please note that I said RE-introduced, and that was not a slip. The fact is that this algorithm was widely used for hundreds of years with no ill effects. Issues that strictly had to do with the ease of printing it in books with relatively primitive technology and problems of readability when the printing quality was poor, NOT concerns with the actual carrying out of the algorithm, caused it to fall into disuse. Not a pedagogical issue at all, and with modern printing methods, completely irrelevant from any perspective. Yet the anti-reformers howl bloody murder when they see this method being taught. The only believable explanation for their outrage is politics. They simply find it politically unacceptable to teach ANY alternatives to their approved “standard” methods. And their ignorance of the historical basis for lattice multiplication as well as their refusal to acknowledge that it is thoroughly and logically grounded in exactly the same processes that inform the current standard approach suggests that bias and politics, not logic, is their motivation.

Subtraction algorithms

I raise all these questions because I had my attention drawn to a “non-standard” algorithm (actually two such algorithms and some related variations) for subtraction. Tad Watanabe, a professor of mathematics education whom I’ve known since the early 1990s, posted the following on a mathematics education discussion list:

Someone told me (while back) that the subtraction algorithm sometimes called “equal addition algorithm” was the commonly used algorithm in the US until about 50 years ago. Does anyone know if that is indeed the case, and if so, about when we shifted to the current conventional algorithm?

I couldn’t recall having heard of this method, and so I was eager to find out what he was talking about. Searching the web, I discovered an article that repaired my ignorance on the algorithm: “Subtraction in the United States: An Historical Perspective,” by Susan Ross and Mary Pratt-Cotter. This 2000 appearance in THE MATHEMATICS EDUCATOR was a reprint of the article that had originally appeared several years previously in the same journal. It draws upon a host of historical sources, the earliest of which is from 1819. And there are other articles available online, including Marilyn N. Suydam’s “Recent Research on Mathematics Instruction” in ERIC/SMEAC Mathematics Education Digest No. 2; and Peter McCarthy’s “Investigating Teaching and Learning of Subtractions That Involves Renaming Using Base Complement Additions.”

The Ross article makes clear that as far back as 1819, American textbooks taught the equal additions algorithm. To wit,

1. Place the less number under the greater, with units under units, tens under tens, etc.

2. Begin at the right hand and take the lower figure from the one above it and set the difference.

3. If the figure in the lower line be greater than the one above it, take the lower figure from 10 and add the difference to the upper figure, which sum set down.

4. When the lower figure is taken from 10 there must be one added to the next lower figure.
In fact, according to a 1938 article by J. T. Johnson, “The relative merits of three methods of subtraction: An experimental comparison of the decomposition method of subtraction with the equal additions method and the Austrian method,” equal additions as a way to do subtraction goes back at least to the 15th and 16th centuries. And while this approach, which was taught on a wide-scale basis in the United States prior to the late 1930s, works from right to left, as do all the standard arithmetic algorithms currently in use EXCEPT notably long division (which may in part help account for student difficulties with that operation far more serious and frequent than those associated with the other three basic operations), it can be done just as handily from left to right.

Consider the example of finding the difference between 6354 and 2978. Using the standard approach, we write 2978 beneath 6354, aligning places, and work as follows: 1) 8 is greater than 4, so we “borrow 1” from the 5 and then subtract 8 from 14 and get 6. We cross out the 5 and write 4 to account for the borrowing; 2) 7 is greater than 4, so we borrow 1 from the 3 and then subtract 7 from 14 and get 7. Again, we scratch out the 3 and write 2 to account for the borrowing; 3) 9 is greater than 2, so we borrow 1 from the 6, subtract 9 from 12, and write 3. Again, we scratch out the 6 and write 5 because of the borrowing; 4) finally, we subtract 2 from 5 and get 3, leaving us with the answer: 3376. (Although it may not be obvious, we could do that subtraction from left to right using an approach similar to what I will show below for the equal additions algorithm.)
Equal additions
The equal additions method works as follows for the same problem:
1) 8 from 14 is 6;
2) 8 from 15 is 7;
3) 10 from 13 is 3;
4) 3 from 6 is 3.
That is to say, each time the digit in the subtrahend is greater than the digit in the same place in the minuend, 10 (or 100 or 1000, etc.) is added to the digit in the minuend and also to the digit in the next largest place in the subtrahend. However, for example, in step #1 above, adding 10 to the units digit in the minuend appears to be “compensated” for by adding only 1 to the 7 in the subtrahend. In reality, they both represent additions of 10 because of place value. The algorithm really does involve equal additions at each step as necessary. And of course, because adding equal quantities to the minuend and subtrahend does not change their difference, the resulting computation is correct.
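A sketch of equal additions in code, assuming the minuend is at least the subtrahend (the function and variable names are mine):

```python
def equal_additions_subtract(minuend, subtrahend):
    """Equal-additions subtraction, digit by digit from the right
    (a sketch of the algorithm described above).
    Assumes minuend >= subtrahend >= 0."""
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).rjust(len(top), "0")]
    result = []
    carry = 0                  # the 1 "added to the next lower figure"
    for t, b in zip(reversed(top), reversed(bottom)):
        b += carry
        if b > t:
            t += 10            # add ten to the minuend digit...
            carry = 1          # ...and ten (one unit of the next place) to the subtrahend
        else:
            carry = 0
        result.append(t - b)
    return int("".join(str(d) for d in reversed(result)))

print(equal_additions_subtract(6354, 2978))  # → 3376
```

Because ten is added to both minuend and subtrahend at each step, the difference is unchanged, which is exactly why the hand method works.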
Left-to-right subtraction?
Could the calculation be performed correctly operating from left to right? Consider the following approach: 1) beginning on the left, take 2 from 6 and write down 4; 2) since 9 is greater than 3, add “10” (really 1000) to the 3 and take 9 from 13 getting 4 which we write down. Take “1” (really 1000) from the previous 4; 3) since 7 is greater than 5, add “10” (really 100) to the 5 and subtract 7 from 15 and write down 8. Take “1” (really 100) from the previous 4; 4) since 8 is greater than 6, add 10 to the 4 and take 8 from 14 and write down 6. Take “1” (really 10) from the previous 8. The answer is 3376.
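The left-to-right procedure can be sketched similarly; the only wrinkle is that a “take 1” correction may need to ripple back through zeros (names are mine, and minuend ≥ subtrahend is assumed):

```python
def left_to_right_subtract(minuend, subtrahend):
    """Left-to-right subtraction with back-corrections, a sketch of
    the procedure described above. Assumes minuend >= subtrahend >= 0."""
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).rjust(len(top), "0")]
    result = []
    for t, b in zip(top, bottom):
        if b > t:
            t += 10                # add ten to this minuend digit
            j = len(result) - 1
            while result[j] == 0:  # the "take 1" may ripple through zeros
                result[j] = 9
                j -= 1
            result[j] -= 1         # take 1 from a previously written digit
        result.append(t - b)
    return int("".join(str(d) for d in result))

print(left_to_right_subtract(6354, 2978))  # → 3376
```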
Is this “better” than the standard (“compensation”) algorithm? Is it worse? I only mention it because research has suggested that many young students, left completely to their own devices, are likely to develop similar left-to-right strategies, correct or flawed. It seems highly likely that this is a natural outgrowth of the fact that we read English from left to right, and we teach students to read numbers the same way. It seems almost bizarre, once one thinks about it, that the standard algorithms for addition, subtraction, and multiplication, as well as some alternative methods, insist upon working from right to left. I suspect, too, that this causes some students much difficulty (there is plenty of evidence of kids who have problems TRYING to do arithmetic from left to right and getting enmired).

I will not discuss or describe in detail the Austrian algorithm other than to say that it doesn’t feel “right” to me. That’s not saying it’s “wrong,” but rather that I can’t see it as one I would use. And here is one major difference between me and the reform-haters: that doesn’t mean I wouldn’t revisit it or wouldn’t show it to teachers, and perhaps if I saw a particular student or class for whom it might prove helpful, I’d teach it. My “taste” isn’t the issue, but rather keeping a large number of options available for my practice and for my students. I suppose that’s just not very “efficient” of me.

Finally, it bears noting that there are references in the above-mentioned articles to research on the use of these algorithms, and at least some reason to think that equal additions should be looked at again very seriously by mathematics teacher educators and K-5 teachers. If you read the historical treatment of subtraction algorithms in the US, you’ll likely note how much chance and arbitrariness there can be in how one particular algorithm comes into fashion while others fall into disuse. I see no firm evidence for the “superiority” of the current most commonly-taught algorithm, and there is clearly a history of its causing difficulties for particular students. Would the universe collapse if we were to teach both? Even more, would it collapse if we didn’t rush to teach it right away, but rather, as has been proposed by more than a few researchers and theorists on early mathematics education, let students play and invent their own algorithms first, before trying to steer them toward one or another of our own? Sadly, the anti-reformers amongst us, the activist educational conservatives who are constantly trying to narrow rather than open up K-12 education, believe that there’s always one best way to do everything. And not coincidentally, that way always turns out to be the one they learned as a child. That, more than anything, is why I think it reasonable to call the not-so-traditional math that they push on everyone “nostalgia math.” It’s not that what they learned is better. It’s just what they learned back in simpler times when life was easy and there were no Math Wars and no one like me to suggest that their emperor is stark naked.

The post Subtraction: What is “the” Standard Algorithm? appeared first on Math ∞ Blog.


An Interview with Alan Schoenfeld: At the Boundaries of Effective Mathematics Thinking, Teaching, and Learning

Alan Schoenfeld is the Elizabeth and Edward Conner Professor of Education and Affiliated Professor of Mathematics at the University of California at Berkeley. He has served as President of AERA and Vice President of the National Academy of Education. He holds the International Commission on Mathematics Instruction’s Klein Medal, the highest international distinction in mathematics education; AERA’s Distinguished Contributions to Research in Education award, AERA’s highest honor; and the Mathematical Association of America’s Mary P. Dolciani award, given to a pure or applied mathematician for distinguished contributions to the mathematical education of K-16 students.

Schoenfeld’s main focus is on Teaching for Robust Understanding. A brief overview of the TRU framework, which applies to all learning environments, can be found at The Teaching for Robust Understanding (TRU) Framework. A discussion of TRU as it applies to classrooms can be found in What makes for Powerful Classrooms?, and a discussion of how the framework can be used systemically can be found in Thoughts on Scale.

Schoenfeld’s research deals broadly with thinking, teaching, and learning. His book, Mathematical Problem Solving, characterizes what it means to think mathematically and describes a research-based undergraduate course in mathematical problem solving. Schoenfeld led the Balanced Assessment project and was one of the leaders of the NSF-sponsored center for Diversity in Mathematics Education (DiME). The DiME Center was awarded the AERA Division G Henry T. Trueba Award for Research Leading to the Transformation of the Social Contexts of Education. He was lead author for grades 9-12 of the National Council of Teachers of Mathematics’ Principles and Standards for School Mathematics. He was one of the founding editors of Research in Collegiate Mathematics Education, and has served as associate editor of Cognition and Instruction. He has served as senior advisor to the Educational Human Resources Directorate of the National Science Foundation, and senior content advisor to the U.S. Department of Education’s What Works Clearinghouse.

Schoenfeld has written, edited, or co-edited twenty-two books and approximately two hundred articles on thinking and learning. He has an ongoing interest in the development of productive mechanisms for systemic change and for deepening the connections between educational research and practice. His most recent book, How We Think, provides detailed models of human decision making in complex situations such as teaching, and his current research focuses on the attributes of classrooms that produce students who are powerful thinkers. Schoenfeld’s current projects (the Algebra Teaching Study, funded by NSF; the Mathematics Assessment Project (MAP) and Formative Assessment with Computational Technologies (FACT), funded by the Gates Foundation; and work with the San Francisco and Oakland Unified School Districts under the auspices of the National Research Council’s SERP project) all focus on understanding and enhancing mathematics teaching and learning.

Michael Paul Goldenberg: It’s my pleasure to welcome you to The Math Blog, Alan. I appreciate your taking the time to be interviewed.

Alan Schoenfeld: Thank you, Michael. It’s great to have the opportunity to speak with you.

MPG: You started your professional career as a research mathematician. What led you to become a mathematics educator and researcher?

AS: I became a mathematician because I loved math, plain and simple. I also loved teaching. Then, I read Pólya’s HOW TO SOLVE IT and my head exploded – he described my mathematical thought processes! Why hadn’t I been taught them directly? It seemed clear to me that using these strategies could open math up to a lot more people!

I asked around and people I spoke with (Putnam team coaches and math-ed researchers) said that the ideas might feel right, but they didn’t work. Now THAT was a problem worth working on. This was in the mid 70s, as cognitive science was beginning to form as a field, and I decided to make the transition from mathematician to math-ed researcher. If I could figure out how to make Pólya’s strategies work, I’d be doing challenging research and doing work that could have real-world impact.

Having that kind of impact was my lifelong career goal. I love mathematics, and it’s distressing to see how many people don’t. The big question was, could we understand thinking, teaching, and learning well enough that we could create learning environments in which students would become powerful and resourceful thinkers and problem solvers, and as a result come to love math? (How could you not, if you’re successful?)

MPG:  I would think there’s a rather different flavor to that sort of task compared with doing mathematics itself.

AS: Yes, definitely so. The major goal in doing mathematics hinges on proving. The challenge in making the transition to educational research was the loss of certainty that one experiences when leaving mathematics. I don’t mean “proof” in the narrow sense. Yes, having a proof means that there’s no doubt: this thing is true. But that’s the tip of the iceberg. Most proofs provide a sense of mechanism. They say how things fit together, and why they have to be true.

MPG: Could you give us an example?

AS: Certainly. Think of space-filling curves, say one of the standard maps from the unit interval onto the unit square. You start with a simple map into the square, then expand it in a way that’s essentially fractal. As you do, you can see the sequence of functions you create both filling things up and getting more and more dense. So if you understand why the uniform limits of continuous functions are continuous, you know the result is onto. Moreover, if you look closely, you see that each 4th of the unit interval maps onto ¼ of the unit square, that each 4th of those 4ths maps onto 1/16 of the unit square, and so on – the map is actually measure preserving! In this example, which is the kind of mathematical argument I like, the proof tells you much more than the fact that a result is true. It tells you how and why it’s true. (For what it’s worth, my early mathematical work was in this arena, working on general topological spaces.)
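Schoenfeld’s observation that “each 4th of the unit interval maps onto ¼ of the unit square” can be made concrete at a finite stage of such a construction. The sketch below uses the Hilbert curve — one standard space-filling construction; Schoenfeld doesn’t say which map he has in mind, so this is an illustrative stand-in — on a discrete grid, and checks that the first quarter of the index range lands entirely in one quadrant:

```python
def d2xy(n, d):
    """Map index d in [0, n*n) to cell (x, y) of an n-by-n grid
    along the Hilbert curve (n must be a power of 2)."""
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                 # rotate/reflect this sub-square
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

n = 8
# The first quarter of the indices fills exactly one quadrant of the grid,
# the finite analogue of "each 4th of the interval maps onto 1/4 of the square."
first_quarter = [d2xy(n, d) for d in range(n * n // 4)]
assert all(x < n // 2 and y < n // 2 for x, y in first_quarter)
# Every cell is visited exactly once (the map is onto at this resolution).
assert len({d2xy(n, d) for d in range(n * n)}) == n * n
```

In the limit, these finite stages converge uniformly to a continuous surjection of the interval onto the square, which is the measure-preserving behavior Schoenfeld describes.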

When I left math, I left that kind of certainty and explanation. (Henry Pollak once said, “There are no theorems in mathematics education.”) So, what do you replace it with?

Basically, the tools depended on the problem. In my early problem solving days, part of the proof was in the empirical pudding. That is: I came to realize that Pólya’s strategies weren’t implementable as he described them: a “simple” strategy like “try to solve an easier related problem. The method or result may help you solve the original” was in fact a dozen or more separate strategies, because there were a dozen or more very distinct ways to create easier related problems. But those dozen sub-strategies were all well enough defined to be teachable, and students could learn them; when they learned enough of them, then they could use the strategy. The proof? Both in lab experimentation and in my courses, students could solve problems they hadn’t been able to approach before. (see MATHEMATICAL PROBLEM SOLVING).

MPG: So proof does come into play?

AS: Yes, but that’s proof in the narrow sense. The results were documented, but I couldn’t really say what was going on in people’s heads. In my work on metacognition (or “executive control”) I could show that ineffective monitoring and self-regulation doomed students to failure, and that they could learn to get better at it; but I still didn’t have a theory that described how and why they made the choices they did. That took another twenty years.

Since my ultimate goal was to improve teaching, I turned my attention to tutoring (an “easier related problem” in the space of teaching) – the question being, why does a tutor make the choices he or she does, when interacting with a student?

Here’s where the idea of modeling comes in. It’s just not reasonable to make ad hoc claims about what someone is doing and why – you can explain almost any decision in an ad hoc way, but if you keep track of the rationales for those decisions, you’ll find that the rationale for decision #20 may flatly contradict the rationale for decision #11. I know many qualitative studies provide “blow by blow” explanations of events, but I found that profoundly unsatisfying.

MPG: How can you avoid falling into that trap?

AS: Modeling prevents you from doing that. If you’re modeling someone’s (a student’s, a tutor’s, a teacher’s) decision making, then you have to stipulate the elements of the model and how they’re related: these things matter, and under these circumstances, the model will act in the following way. You take the thing you’re modeling, stipulate which aspects of it get represented in the model, and run the model. There’s no fudging. If you’re modeling a teacher’s decision making, does the model you’ve constructed make the decisions that the teacher does? Almost certainly not, the first time you run the model – which means you missed something. It could be a theoretical element, it could be something you hadn’t noticed about the teacher. So you refine the model and see if it does better. As you do that, you’re not just testing an individual model: you’re working on the architecture of such models in general. You’re building a theory, which you’re testing with a variety of models. If you can model a wide range of examples – from, say, a beginning teacher to a highly-reflective and knowledgeable one like Deborah Ball – then there’s a pretty good chance your theory focuses on the right things. After some 20-25 years, I’d gotten to the point where those ideas were pretty robust. (See HOW WE THINK: A theory of human decision-making with educational applications) 

MPG: Did you find applications for this model beyond teaching?

AS: Yes, the work on decision making connected to decision making in other fields (e.g., medicine, electronic trouble-shooting) as well as being an abstraction of the problem solving work, but it wasn’t enough. The ultimate question for me was, what is the nature of productive learning environments – environments from which students emerge as powerful thinkers and problem solvers? The modeling work I’d done had focused on individuals, but there’s the whole question of classroom dynamics.

I found myself hamstrung. I tried using the modeling techniques I’d developed for understanding teaching to approach the classroom discourse as a whole, and it was just impossible. Ultimately, we started afresh: we listed everything we could think of that mattered in classroom interactions (from the literature, from watching tapes, etc.) – a huge number! – and then created equivalence classes of those. It turned out that we could distill all of the consequential events into five equivalence classes: the quality of the mathematics; opportunities students had for sense making and for “productive struggle”; equitable access to the key content for all students; opportunities for students to interact with the content and each other in ways that let them develop a sense of agency, ownership over the content, and productive disciplinary identities; and making student thinking public so that instruction could be productively modified to “meet the students where they are” (formative assessment). We call this framework the Teaching for Robust Understanding (TRU) framework.

(Details of its development can be found in Schoenfeld, 2013; an intro to the framework and a large set of professional development tools can be found in Schoenfeld & the Teaching for Robust Understanding Project, 2016).

MPG: So where is this heading and how is it working out?

AS: Of course, having such a framework is having a hypothesis: these five things are what counts. (In mathematical language, the five dimensions of TRU are necessary and sufficient for creating powerful mathematics classrooms.) This in a way sends me back in time to the R&D work on problem solving. I now have an idea of what counts, but how do I get compelling evidence, and how do I help people use these ideas? Some of the evidence is correlational: the students who come from classrooms that score well on a rubric that assesses the five dimensions do better on tests of thinking and problem solving than students who come from classrooms with lower scores. Some of it consists of existence proofs: when Chicago adopted TRU and used it for professional development, Chicago’s math scores went up while the rest of Illinois’ scores went down. Some is yet to come: we’re building tools to help Teacher Learning Communities implement TRU, and we’ll do detailed studies of what kinds of changes there are in classroom dynamics and how those relate to student outcomes (see http://map.mathshell.org/trumath.php and http://edcollaboration.org/TRU-LS/trutools.html for tools and evidence). There are very positive signals as we work, some at a level of mechanism, but the road to understanding – and compelling evidence – is long and hard. Ask me in a few years and I’ll let you know how far we’ve progressed.

MPG: I would love to have you back to fill us in. What you’ve described thus far is exciting and thought-provoking to me as a mathematics educator. Thank you, Alan: you certainly have an open invitation to visit again.

AS: It’s been a pleasure speaking with you, Michael.

Additional References

Schoenfeld, A. H. (2013). Classroom observations in theory and practice. ZDM, the International Journal of Mathematics Education, 45: 607-621. DOI 10.1007/s11858-012-0483-1.

Schoenfeld, A. H., & the Teaching for Robust Understanding Project. (2016). An Introduction to the Teaching for Robust Understanding (TRU) Framework. Berkeley, CA: Graduate School of Education. Retrieved from http://map.mathshell.org/trumath.php

The TRU Math Suite Page on The Mathematics Assessment Project web site:


The TRU Math Suite Page on the TRU-Lesson Study web site:


The post Alan Schoenfeld: At the Boundaries of Effective Mathematics Thinking, Teaching, and Learning appeared first on Math ∞ Blog.


What would make you write a book about writing a book you recently published on a 13th-century mathematician? When you’re Stanford University’s Keith Devlin (aka NPR’s “The Math Guy”) and the mathematician is Leonardo of Pisa (aka “Fibonacci”), the account of researching the first book, The Man of Numbers, becomes an incredible story in itself: Finding Fibonacci: The Quest to Rediscover the Forgotten Mathematical Genius Who Changed the World

What makes Devlin’s story so compelling is that it involves many other people, multiple countries, 900+ years, and enough setbacks, twists and turns, courage, and fortitude to rival a fictional adventure. Throw in the idea that Leonardo’s work helped revolutionize the world forever, parallels with another earth-shaking revolutionary, Steve Jobs, sprinkle well with the best-known number sequence of all time, and you have yourself a real page-turner.

I don’t want to spoil Devlin’s tale, but it’s impossible to resist mentioning the key notion that Leonardo gave us the mathematical tools that made possible much of the commerce of the millennium that followed him, as well as providing the framework for much more that we take for granted in every mathematically-based arena of modern civilization. Absent Leonardo’s genius, there’s no telling how long Europe would have remained hobbled by Roman numerals, counting boards, and mathematical computation reserved for a relatively small number of specialists. Instead, Leonardo put into the hands of the European merchant community the perfect means to grow almost boundlessly.

Devlin’s book will serve to enlighten anyone fortunate enough to read it as to the pivotal role Leonardo played in the rise of the West out of its Dark Ages. By taking crucial ideas from India via the greatness of Islamic civilization, bringing them to Italy, developing practical methods for doing complicated calculations including those of vital importance to merchants and tradespeople, Leonardo of Pisa gave us the modern world. And Keith Devlin helps give this genius the long-overdue credit he deserves.

The post Dr. Devlin Finds Fibonacci appeared first on Math ∞ Blog.


Models and Delving Deeper Into Division

Last month, we unpacked the long-division procedure taught in American K-6 classrooms, tied it into the standard “long multiplication” algorithm, discussed issues with the loss of information traded for an increase in speed and efficiency, and explored some alternative methods for doing long division.

What was taken for granted was that we knew what division “is” (I use scare quotation marks here advisedly: it’s dangerous to try to be too restrictive about the underlying meaning of “simple, elementary” mathematical operations, and I do so here without wishing to suggest that what I am offering is or could possibly be definitive). My sense is that to a large extent, K-6 teachers have at best a very limited grasp of what division means, particularly when extended to the integers. This piece proposes to help flesh out that understanding, particularly for those who have struggled with the requisite ideas or not given the matter much consideration.

First, it is reasonable to state that division is the inverse operation for multiplication (I recommend avoiding or minimizing the use of “opposite” in this context), in the same sense that subtraction is the inverse operation for addition. In fact, it’s not at all unusual to define subtraction as “adding the inverse (or negative)”; similarly, we may define division as “multiplying by the inverse (or reciprocal).” However, some caution is required. If children haven’t learned about integers, teachers will have to restrict subtraction to cases of a – b where b ≤ a. This raises issues about whether to state that “You can’t take away a bigger number from a smaller number,” which is false once students learn about negative numbers. I recommend stating something along the lines of, “Given the numbers you know about so far, questions like ‘What is 3 – 5?’ don’t have an answer; however, you will learn about numbers that let us answer such questions in a few years.”
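The “inverse operation” definitions can be written out directly. Here is a minimal sketch in Python, using the standard-library Fraction type so reciprocals are exact; the function names are mine, purely illustrative:

```python
from fractions import Fraction

def subtract(a, b):
    # Subtraction defined as "adding the inverse (or negative)"
    return a + (-b)

def divide(a, b):
    # Division defined as "multiplying by the inverse (or reciprocal)"
    return a * (Fraction(1) / b)

print(subtract(3, 5))                    # -2: answerable once integers are known
print(divide(Fraction(3), Fraction(7)))  # 3/7: answerable once rationals are known
```

Note that both definitions only make sense once the required number system is in place, which is exactly the caution raised above: subtract(3, 5) needs the integers, and divide(3, 7) needs the rationals.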

Similarly, before they have access to the rational numbers, questions like 16 ÷ 5 = ? have to be answered using remainders, and something along the lines of 3 ÷ 7 = ? might not make much sense at all! Stating flatly, “You can’t divide a smaller number by a bigger one” is not only misleading but also leaves students and teachers alike in a serious quandary when faced with dividing -6 by -3. Does no one teaching K-6 realize that while the answer is POSITIVE 2, the problem shouldn’t be possible within the integers if that proscription about division is true?
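Python’s built-in divmod captures this quotient-with-remainder form of division, including the signed cases just discussed (with one caveat: Python floors toward negative infinity, so with mixed signs the remainder takes the divisor’s sign):

```python
# Quotient-and-remainder division, as taught before rationals are available.
q, r = divmod(16, 5)
assert (q, r) == (3, 1)          # 16 = 3 * 5 + 1

# Dividing -6 by -3 is a perfectly good integer question:
assert -6 // -3 == 2             # POSITIVE 2, as noted above

# Caution: with mixed signs, Python's floor semantics give a remainder
# with the divisor's sign rather than a "schoolbook" remainder.
assert divmod(-7, 3) == (-3, 2)  # -7 = -3 * 3 + 2
```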

Real-World Models for Division

It’s not my contention that students learn arithmetic better if they can ground it in their real-world experiences, or that they’ll enjoy mathematics more if they can relate it to their everyday concrete experiences. That might well be true for some children and irrelevant for others. Be that as it may, I believe teachers benefit from having various models at their disposal as they teach wide-ranging students mathematics. And good models won’t take away from understanding as long as teachers understand and communicate to their charges that models are ways to aid understanding the mathematics; they are not the mathematics itself.

The two main models for division in elementary school mathematics are the partitive (aka, fair-share) model and the quotitive (measurement) model.

Most young children have a reasonably good intuitive/experiential understanding of what fair shares are by the time division is introduced in school, even if they are not all that comfortable with formal arithmetic operations (there is a good deal of research with children in Brazil and elsewhere that suggests many children who work with money in market stalls and other non-school settings develop facility with complicated calculations that are not mirrored in their school performance with arithmetic).

Given a fixed number of items, x (say, gummy bears), and a fixed number of friends, k (including the child doing the sharing), students have effective ways to determine how much each person gets, q (share size), and how many are left over, r (remainder), if any. What gets done with the remainder varies, of course. Local customs are not standardized.

What most children know is that, remainder aside and assuming that all gummy bears are equally desirable, fair-sharing requires that each child gets the same number of bears. (However, be careful if you’re trying to figure out whether given kids really get the “fair-share” idea enough to connect it accurately with fractions (i.e., rational numbers). It’s not unusual for younger children sharing a candy bar with one other person to ask for “the bigger half.” That makes perfect sense as long as we’re using ordinary English rather than mathematical terminology.) A typical strategy for figuring out the quotient is to distribute one bear at a time to each person, repeating the process until there are no more gummies or not enough for everyone to get an additional bear.
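The one-bear-per-round strategy can be simulated directly, and it lands on exactly the quotient and remainder of integer division. A small illustrative sketch (the function name is mine):

```python
def fair_share(items, kids):
    """Deal one item per child per round until a full round is impossible —
    the one-gummy-bear-at-a-time strategy described above."""
    shares = [0] * kids
    pile = items
    while pile >= kids:        # enough left for everyone to get one more
        for k in range(kids):
            shares[k] += 1
            pile -= 1
    return shares, pile        # equal shares, plus the leftover

shares, left = fair_share(17, 5)
print(shares, left)            # [3, 3, 3, 3, 3] 2 — matching divmod(17, 5)
```

The simulation makes the partitive picture explicit: the number of completed rounds is the share size q, and the undealt pile is the remainder r.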

Note that in the partitive model of division, there is a fixed total/whole, x, a fixed number of shares, k, and an unknown share size, q, which is to be determined. For simplicity, we’ll not discuss how the remainder, r, is “disposed of.”

With the quotitive model, there is again a fixed total/whole, x, but now the share size is fixed in advance, and what is unknown is how many shares/groups of this size can be made before exhausting the supply or not having enough left to make another group.

A typical situation for quotitive division is cooking, where there might be 12 cups of flour, and a cookie recipe that calls for 1 1/3 cups of flour per batch. The question would now be, “How many full batches of cookies can be made with 12 cups of flour?” (Assumed for this example is that there are no other constraints: adequate amounts of all other ingredients are available).
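The batch-counting question is just repeated subtraction, which is one natural way to present quotitive division. A sketch using exact fractions (the function name is mine):

```python
from fractions import Fraction

def full_batches(supply, per_batch):
    """Quotitive division: how many groups of a fixed size fit in the total?"""
    batches = 0
    while supply >= per_batch:
        supply -= per_batch    # measure off one more batch
        batches += 1
    return batches, supply     # batches made, plus flour left over

print(full_batches(Fraction(12), Fraction(4, 3)))  # 12 cups / (1 1/3 cups): 9 batches
```

Here 12 ÷ 1⅓ comes out to exactly 9 batches with nothing left over; with 10 cups the same routine would report 7 full batches and ⅔ cup remaining.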

Generally, this is a more difficult situation for many students (and teachers) to grapple with. Research has indicated, for example, that many elementary teachers and teacher education students have a real struggle writing word problems that involve division by proper fractions. Asked to write a ‘real world problem’ that would be solved by dividing by ½, a significant number of those asked will instead provide one that represents division by 2, even when the dividend and divisor are explicitly provided in writing.

Modeling Division With Integers

Consider how partitive and quotitive models apply to signed number division, keeping in mind, too, that multiplication of real numbers and their subsets is commutative, but division is not.

For integers p and q, with q ≠ 0, consider what happens for various combinations of p & q being positive or negative. With p & q both positive, we can easily imagine both partitive and quotitive division situations and have already mentioned such examples.

If p & q are both negative, say -12 and -3, we can ask meaningfully, “Into how many groups of size -3 can we divide a total of -12?” The answer, positive four, makes sense arithmetically, but it can also be viewed as distributing a debt of negative $12 into equal groups of size negative $3, then asking how many partners would be needed to absorb the debt equally. This is an example of quotitive division.

If p is negative and q is positive, we can also make a meaningful model. Let’s say that p is again –12 and that q is 3: then we might ask, “How much debt must each of 3 partners take to cover a debt of $12?” Here, the answer, -4, makes sense because we’re talking about sharing a fixed debt into a fixed number of groups and each share contains the same negative number of dollars. The model is partitive.

Now, what happens when p is positive and q is negative? Can we make a sensible partitive or quotitive model? A bit of thought suggests that we cannot. We can’t have a partitive model with a negative divisor because a negative number of groups simply makes no sense in the real world. On the other hand, a measurement model doesn’t work either. Trying to divide a positive total into shares of negative size won’t fly.

Nonetheless, such computations pose no actual difficulty: 12 ÷ -3 = -4, and this is consistent with our notions about the relationship between division and multiplication, since -4 • -3 = 12. Furthermore, the rules students are taught about signed-number multiplication and division hold up: no contradiction is introduced into arithmetic thereby. We should all be happy.
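Even without a real-world model, the consistency claimed here is easy to check mechanically: every signed quotient, multiplied back by the divisor, recovers the dividend. In Python (where // agrees with exact division whenever the divisor divides the dividend evenly, as in all four cases below):

```python
# All four sign combinations for 12 and 3: quotient times divisor
# recovers the dividend, whether or not a partitive/quotitive model exists.
for p, q in [(12, 3), (-12, 3), (-12, -3), (12, -3)]:
    quot = p // q          # exact here, since q divides p evenly
    assert quot * q == p

assert 12 // -3 == -4      # the "model-less" case still computes cleanly
assert -4 * -3 == 12
```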

But what about our nice models? The answer is, they break down here. And perhaps that’s a good thing. Mathematics does not depend on a correspondence with the “real world.” It depends on logical consistency among the objects, the rules for working with them, and the laws of reasoning. If we do not arrive at contradictions, we’re happy. Finding one or more models or metaphors to aid understanding may be desirable, but it is not necessary.

So upper-elementary and middle school students who are learning to make sense of signed-number division should have a chance to play and grapple with these issues. In many cases, they should be able to tackle the more abstract idea of division as multiplication by the reciprocal. Eventually, we would like all students to be able to think more abstractly in terms of mathematical objects and rules for working with them. The interplay between models and mathematics continues even as the abstraction is ramped up, but models are, much of the time, a kind of scaffolding we should be prepared to abandon when necessary or convenient, and the inability to find a model for a particular bit of mathematics should not be an insuperable barrier to tackling it.

The post Models and Delving Deeper Into Division appeared first on Math ∞ Blog.

