Learning computer programming at the high school/middle school level is often compared to the study of mathematics at the same level. Both require the acquisition of what is basically a new language with its own “words” and syntax, both require a logical, step-by-step thought process, and both require patience and perseverance to get half-way decent at either. The problem with this comparison is that, from a teaching standpoint, the differences between mathematics and programming are more significant than the similarities.

To be good at mathematics often requires intuitive leaps to solve new problems. As a math teacher I see that my best students have the ability to make the leap, then come back and fill in the holes left by that leap, while the average and below-average students struggle along, trying to rely on rote memorization of previous examples. The good students have an understanding of the concept behind the math and have some examples memorized. The average students have some examples memorized, and if a problem varies enough from the example, they are lost. In programming there can be no intuitive leaps in writing a program. The computer simply does not like them. Examples of code are not memorized, they are cut and pasted. Cutting and pasting on a math exam is bad; cutting and pasting in a programming assignment is expected.

For years I have been trying to teach math kids to think outside the box, while in programming it is all inside the box. Programming does take leaps of intuition, but hopefully not while typing the actual code. I teach my students that there are two major phases of programming: design and code. I admit this is a bit simplistic, but they get the idea. In the design phase the programmer needs to think smart. This is the phase where you take the concept/idea/assignment/whatever and build a rough outline of how the thing is going to work: what is the input, what is the output, what does the GUI look like, what functions and procedures would help break the program down into component pieces, and so on. Hopefully this is where any leaps of intuition are going to take place. This is also where the big money is made. The code phase is where the programmer is actually hammering on the keyboard trying to make the ideas work. This is where the programmer has to think stupid, just like the computer. This is also the job that usually gets shipped overseas. Here the programming consists of using a language that has been largely predefined. There is no such thing as writing an original For/Next statement, so there is no need to get clever. The clever part was back in the design phase, where the For/Next was laid out.

Many of the kids (all right, all of the kids) like to skip the design phase of this whole process. They want to sit down, start hacking some code and see if something is going to work. Now I admit this is my own favorite method of writing programs; it is sort of fun this way, but experience has taught me that it is usually not the most efficient method. Beginning programming students simply do not have the years of coding experience to make this work. In mathematics class I love it when the kids just start trying things; I like to see them experiment and dig through the textbook. When a programming kid starts just trying things, I know they do not have a clue what to do and they are going to take forever to get anywhere. One of the big differences in using the trial-and-error method in math versus programming is the sheer scope or size of the problems. The typical high school math problem is at most one page; the same can rarely be said of a typical programming problem. If the kids try to solve a programming assignment by hacking at it, the assignment just does not get done in a reasonable amount of time. That design/planning time is critical.

Look at the programming statement “x = x + 1”. Put yourself in the position of an average high school student as you look at this statement. From the high school mathematics point of view this thing makes no sense. For years the kids have been taught that x equals x, not x + 1. All of a sudden this statement is supposed to make sense? Good luck with that. The better students will pick it up quickly; they can compartmentalize the differences as context requires. The average kid is going to keep trying to apply the algebra rules he or she struggled so hard to learn over the last four years or so. Differences like this between mathematics and programming seem minor to us, the teachers and programmers, but to the kid struggling with both they are major sources of confusion.
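One way to show students what the computer actually does with that statement is to trace it step by step. A minimal sketch, using Python purely for illustration (any language would make the same point):

```python
# "x = x + 1" is an instruction, not an equation:
# "read the current value of x, add 1, store the result back in x."
x = 5        # x now holds 5
x = x + 1    # the right side is computed first (5 + 1), then stored in x
print(x)     # prints 6
```

Walking through the right-side-first evaluation is often the moment the statement stops looking like a false algebra claim and starts looking like a recipe step.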

Programming can be hard, and sometimes we, the teachers, make it even harder by not paying attention to the differences between what the students already know and what we are trying to teach them. The logic that a program has to go through, with every little step defined and explicit, is not the easiest way for kids to think. They want to make those jumps in logic and skip steps, so forcing the kids to “think stupid” runs counter to their normal thinking. Beginning programmers need to be able to count on their fingers.

July 8, 2010 at 8:49 pm |

Never use single-character variables, especially a, b, c, x, y and z. They confuse students coming from algebra. Use proper names, like hit = hit + 1; these statements are easier for the students to understand and grasp.

July 8, 2010 at 9:12 pm |

Excellent point. I forgot about that difference. Mathematics is a world of single-character variables, while programming should not use them at all.

July 8, 2010 at 9:30 pm |

You have the same issue if you used foo = foo + 1, though. The big issue is probably that we use the equal sign to mean different things in math and programming. Perhaps it would be easier and clearer if we used foo <- foo + 1, but I'm not sure. foo += 1 is a whole different thing, but we can't always use it. foo = bar + xray is still not the same in math and programming.
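The two meanings layered onto the equal sign can be put side by side in a short sketch (Python here, purely as an illustration; the same split exists in most C-family languages, with = for assignment and == for comparison):

```python
count = 3          # "=" is a command: store 3 in count
count = count + 1  # read 3, add 1, store 4 back in count
print(count == 4)  # "==" is a question: is count equal to 4?  prints True
count += 1         # shorthand for count = count + 1
print(count)       # prints 5
```

Showing the assignment form and the comparison form in the same snippet makes the overloaded symbol explicit rather than leaving students to discover the distinction from compiler errors.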

I sometimes wonder if the reason that programming came sort of easy to me was because math came sort of hard. My math skills got a lot better after programming for a while. Counter-intuitive, perhaps, but it worked that way in part because I suddenly had more motivation to learn the math.

July 9, 2010 at 5:04 pm |

The same symbol is used for two different operations, and now let’s throw in some new stuff like or, and, xor, nand, &&, ||, ==, and so on. Oh, let’s also not forget the differences between languages. Talk about fubar! (I do know what the acronym fubar stands for.) Of course, that is why programmers make the big bucks. LOL.

Oh, and by the way, any good comments in these replies will get used in my methods course materials.

July 12, 2010 at 10:18 am |

[…] the confusion between teaching math and computer programming for many students in a post called Algebra vs. programming syntax – x=x+1 makes no sense. This is a point I have heard before. The syntax we use in programming often looks like what they […]

July 13, 2010 at 3:59 am |

I think the unfortunate syntactic legacy of the C family of languages is part of the issue. My first CS class in high school (back in 1994) was taught in Pascal and the more distinct syntax for assignment, x := x + 1, stood out in my mind as a separate concept from equality.

I can think of a couple of other non-parallels that I had trouble resolving as I was learning or as I was mentoring others…

– In the C family of languages, 3 / 2 == 1, which is not what you get in algebra, where 3 / 2 = 1.5. It’s a gotcha I still see today when I inspect code. You have to learn the type system and all the implicit coercion rules at some point, but I think having some sort of static checker to throw a warning would prevent this error from frustrating the daylights out of the novice programmer for hours.
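Python 3, for what it's worth, splits the two behaviors into separate operators, which makes this gotcha easy to demonstrate side by side (note one extra wrinkle: Python's // floors the result, while C truncates toward zero, so negative operands differ between the two):

```python
print(3 / 2)    # 1.5 -- in Python 3, "/" always performs real division
print(3 // 2)   # 1   -- "//" is explicit integer (floor) division, the C-style result
print(7 // 2)   # 3
print(-3 // 2)  # -2  -- floor division rounds toward negative infinity (C would give -1)
```

Having two distinct operators means the programmer states which division they meant, which is roughly the "warning" role the commenter wishes a static checker would play.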

– 1/10 in programming is never exactly equal to 0.1, due to floating point representation issues. Again, something everyone has to learn, but there’s rarely a solution in any programming book that shows the learner how to use (or write) a function like AreClose(x, y, acceptableDelta). Along similar lines, all the examples in the textbooks suggest using a float or double to represent currency values (instead of using a Decimal type or writing a currency type), which I think is setting students up for a trip later on.
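Python's standard library happens to ship both halves of this fix, which makes for a quick classroom demo. A sketch, where are_close is a hand-rolled stand-in for the hypothetical AreClose(x, y, acceptableDelta) named above:

```python
import math
from decimal import Decimal

print(0.1 + 0.2 == 0.3)              # False -- binary floats cannot represent 0.1 exactly
print(math.isclose(0.1 + 0.2, 0.3))  # True  -- the "AreClose" idea, built into the stdlib

# A hand-rolled version of the AreClose(x, y, acceptableDelta) idea:
def are_close(x, y, acceptable_delta=1e-9):
    return abs(x - y) <= acceptable_delta

print(are_close(0.1 + 0.2, 0.3))     # True

# For currency, exact decimal arithmetic sidesteps the problem entirely:
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
```

The Decimal line is the textbook fix for money: construct from strings, not floats, so the values are exact from the start.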

– Rounding library functions rarely behave as novices expect. In math, we round real numbers to the nearest tenths place, hundredths place, or nearest whole number, and a half always rounds up. In .NET, Math.Round(4.5) returns 4 while Math.Round(5.5) returns 6, because Math.Round implements banker’s rounding (round half to even). I’m always surprised there isn’t a better API design for the rounding functions, one that would let novices get by in the beginning with mathematical rounding to a certain number of decimal places, before they learn the intricacies of banker’s rounding the hard way and spend time manipulating a number to get it to round to the nearest place instead of focusing on the real problem at hand.

– Functions in programming have little relation to functions in mathematics, despite the similar appearance. Early on, we learn how to use library functions like y = sin(theta) and to write functions like x = avg(a, b) as a set of instructions that transforms input arguments into a return value. This parallel sort of fits, but a function in math is technically a mapping between two sets, and the map or Dictionary data structures are more closely related to the algebraic definition of a function.
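That contrast is easy to make concrete: a dict literally is a finite mapping between two sets, while a def is a procedure. A small Python sketch along those lines:

```python
import math

# The programming view: a function is a procedure that computes a result.
def avg(a, b):
    return (a + b) / 2

# The set-theoretic view: a function is a mapping between sets.
# A dict models this directly (here, a small piece of the squaring function).
square = {1: 1, 2: 4, 3: 9}

print(avg(4, 10))   # 7.0 -- computed on demand from its arguments
print(square[3])    # 9   -- looked up in a stored mapping
print(math.sin(0))  # 0.0 -- library functions like sin() fit the procedural view
```

Seeing the same input-to-output idea expressed once as computation and once as lookup can help students connect the programming word "function" back to the algebraic one.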

September 20, 2012 at 9:45 pm |

hats off…great article!!