Why, Professor Clements, why? Why are you taking all of our loops away?
I feel like the Grinch, sometimes, taking all of the pretty little candy loops away from the wide-eyed first-year students.
But the fact is… that I really don’t like loops.
Loops in every popular language simply execute a block of code repeatedly. This means that your code has to perform mutation in order to allow a value to escape from the loop body. That requires before-and-after reasoning, and it introduces a huge new source of bugs into your program. By contrast, there’s almost no chance to tangle the action of the caller and the callee in a recursive call.
A corollary of this: it’s easy to update loop variables in the wrong order, e.g. i += 1 before sum += arr[i]. Oops, bug. (See the sketch after this list.)
Loops can’t easily be debugged; in a functional style, each loop iteration becomes a recursive call for which the student can write an explicit test. That simply isn’t possible with a loop body.
In functional style, you can’t forget to update a variable; each recursive call must contain values for all loop variables, or you get a sensible (and generally statically detectable) error.
You can’t write a loop in a non-tail-calling fashion; you have to do all of the work on the way down. Traversing a tree is basically impossible (to do it, you wind up building an explicit model of the stack; Turing tar-pit, etc.).
Finally, writing in a functional style is about 80% of the way to proving your code correct; you have to choose a name for the action of the recursive call, and you can usually state an invariant on the relation between the inputs and the outputs without difficulty.
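To make a couple of these bullet points concrete, here’s a quick sketch of my own (the function names and the buggy loop are invented for illustration, not taken from any student submission):

```python
def sum_list_loop(arr):
    # Loop version: the loop variables must be updated in the right order.
    total = 0
    i = 0
    while i < len(arr):
        i += 1            # oops: incrementing i before reading arr[i]
        total += arr[i]   # ...skips arr[0] and eventually walks off the end
    return total

def sum_list_rec(arr, i=0, total=0):
    # Recursive version: every "iteration" is a call that names all of its
    # loop variables at once, and each call can be tested on its own.
    if i == len(arr):
        return total
    return sum_list_rec(arr, i + 1, total + arr[i])

# Each "iteration" of the recursive version is independently testable:
assert sum_list_rec([3, 4, 5], 1, 3) == 12
assert sum_list_rec([3, 4, 5]) == 12
```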
Nothing here is new; it’s just a way of blowing off steam while grading my students’ terrible, terrible code. Sigh.
(EDIT: I should add… none of this applies to list comprehension forms such as for/list; those are dandy. Not sure? Run down the list of bullet points and check for yourself.)
Okay, it’s week four of data structures in Python. In the past few days, I’ve read a lot of terrible code. Here’s a beautiful, horrible, example:
```python
# An IntList is one of
# - None, or
# - Pair(int, IntList)

class Pair:
    def __init__(self, first, rest):
        self.first = first
        self.rest = rest

    # standard definitions of __eq__ and __repr__ ...

# A Position is one of
# - an int, representing a list index, or
# - None

# IntList int -> Position
# find the position of the sought element in the list, return None if not found.
def search(l, sought):
    if l == None:
        return None
    rest_result = search(l.rest, sought)
    if (l.first == sought or rest_result) != None:
        if l.first == sought:
            return 0
        else:
            return 1 + rest_result
```
This code works correctly. It searches a list to find the position of a given element. Notice anything interesting about it?
Take a look at the parentheses in the if line. How do you feel about this code now?
(Spoilers after the jump. Figure out why it works before clicking, and how to avoid this problem.)
It’s much closer to ‘go’ time now with Python, and I must say, getting to know Python better is not making me like it better. I know it’s widely used, but it really has many nasty bits, especially when I look toward using it for teaching. Here’s my old list:
Testing framework involves hideous boilerplate.
Testing framework has standard problems with floating-point numbers.
Scoping was clearly designed by someone who’d never taken (or failed to pay attention in) a programming languages course.
The vile ‘return’ appears everywhere.
But wait, now I have many more, and I’m a bit more shouty:
Oh dear lord, I’m going to have to force my students to implement their own equality method in order to get test-case-intensional checking. Awful. Discovering this was the moment when I switched from actually writing Python to writing Racket code that generates Python. Bleah.
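For concreteness, the workaround looks roughly like this; a minimal sketch, assuming the same Pair class as in the example above:

```python
class Pair:
    def __init__(self, first, rest):
        self.first = first
        self.rest = rest

    # Without this, == falls back to object identity, and a test that compares
    # two structurally identical lists fails.
    def __eq__(self, other):
        return (isinstance(other, Pair)
                and self.first == other.first
                and self.rest == other.rest)

    def __repr__(self):
        return "Pair({!r}, {!r})".format(self.first, self.rest)

# Now a test case can compare values rather than identities:
assert Pair(1, Pair(2, None)) == Pair(1, Pair(2, None))
```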
Python’s timing mechanism involves a hideously unhygienic “pass me a string representing a program” mechanism. Totally dreadful. Worse than C macros.
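In case you haven’t seen it, the string-passing interface I’m complaining about looks roughly like this (a sketch; it assumes search and l are defined at the top level of the module being timed):

```python
import timeit

# Both the statement to be timed and its setup are handed over as strings,
# which the timeit module then evaluates in its own little namespace.
elapsed = timeit.timeit(
    stmt="search(l, 17)",
    setup="from __main__ import search, l",
    number=10000)
print(elapsed)
```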
Finally, I just finished reading Guido van Rossum’s piece on tail-calling, and I find his arguments not just unconvincing, not just wrong, but sort of deliberately insulting. His best point is his first: TRE (or TCO, or just proper tail-calling) can reduce the utility of stack traces. However, the solution of translating this code to loops destroys the stack traces too! You can argue that you only lose stack frames in those instances in which you make tail calls that are not representable as loops, and in that case I guess I’d point you to our work with continuation marks. His next point can be paraphrased as “If we give them nice things, they might come to depend on them.” Well, yes. His third point suggests to me that he’s tired of losing arguments with Scheme programmers. Fourth, and maybe this is the most persuasive, he points out that Python is a poorly designed language and that it’s not easy for a compiler to reliably determine whether a call is in tail position. Actually, it looks like he’s wrong even here; I read it more carefully, and he’s getting hung up on some extremely simple scoping issues. I’m really not impressed by GvR as a language designer.
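For what it’s worth, here’s a tiny illustration (my own toy example, not GvR’s) of tail calls that you can’t rewrite as a loop inside either function:

```python
def is_even(n):
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)

# Both recursive calls are in tail position, yet neither function contains a
# loop you could "just rewrite by hand"; without proper tail calls, the
# mutual recursion burns one stack frame per step.
print(is_even(100))        # True
# print(is_even(10**6))    # RecursionError in CPython
```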
Some data: how long is it taking our students to graduate? How is that changing over time?
It looks like CSC and CPE are improving. It looks like CPE students are having trouble getting out in 4 years. On the other hand, the 5-year graduation rate for CPE majors is steadily rising.
It looks like lots of students graduate in four years and one quarter.
It looks like very few students graduate in their sixth year; that may be because there’s nobody left. This graph doesn’t distinguish between those who haven’t yet graduated and those who have been discontinued or disqualified. However, given the very small numbers for six years, it seems reasonable to assume that most of those who haven’t finished after six years already gave up, possibly years earlier.
Note that for students admitted in 2012, none of them are shown as having graduated in 4 years plus 2 quarters or more… because it hasn’t yet been that long since they were admitted. By the same token, we don’t see six-year numbers for 2011 admits, etc.
Also, all of this data uses “adjusted cohorts”. That is, students who change majors out of the program are removed from both the numerator and the denominator, and students who change majors into the program are added to both.
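Concretely, with made-up numbers (this is just to pin down the arithmetic, not real data):

```python
# Hypothetical numbers, purely to illustrate the "adjusted cohort" idea.
admitted          = 120   # admitted to the major in a given fall
switched_out      = 15    # later changed majors out of the program
switched_in       = 10    # changed majors into the program
graduated_4_years = 55    # of the adjusted cohort, finished within 4 years

adjusted_cohort = admitted - switched_out + switched_in      # 115
four_year_rate  = graduated_4_years / adjusted_cohort
print(f"{four_year_rate:.1%}")                               # 47.8%
```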
Here’s a picture of (lecture, non-supervisory) sections taught by CSSE faculty since Fall 2008:
Because the number of sections taught has gone up rather dramatically, here’s the same data, normalized by sections per year, to show the fraction of classes taught in each category:
There are many caveats and things to explain about these pictures.
Following Zoe’s excellent suggestion, I added color to the course dependency chart:
The arrows should be the same as the previous chart, though you’ll notice that because of the way I generated it, the chart now includes the classes that don’t have any edges connected to them.
The colors relate to the number of students that have enrolled in each class: specifically, the log of the number of students that have taken the class from Fall 2010 through Spring 2016. Classes with zero enrollment are given log(0.5). The class with the highest value is red, the one with the lowest value is green. Yes, there should be a legend. Classes that aren’t in the Computer Science catalog are gray.
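The mapping itself is nothing fancy. Here’s a sketch of the idea in Python (the real rendering was done in Racket, and the exact gradient may differ):

```python
import math

def enrollment_color(enrollment, lo, hi):
    """Map an enrollment count to a green-to-red color, on a log scale.

    lo and hi are the log-values of the least- and most-enrolled classes;
    zero-enrollment classes get log(0.5) so the log is still defined.
    """
    v = math.log(enrollment if enrollment > 0 else 0.5)
    t = (v - lo) / (hi - lo)          # 0.0 = least enrolled, 1.0 = most
    red   = round(255 * t)
    green = round(255 * (1 - t))
    return "#{:02x}{:02x}00".format(red, green)

# e.g. with the log-range endpoints computed from the full enrollment list:
counts = [0, 12, 40, 350, 900]
logs = [math.log(c if c > 0 else 0.5) for c in counts]
lo, hi = min(logs), max(logs)
print([enrollment_color(c, lo, hi) for c in counts])
```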
Note that I have no idea how many of these enrolls are repeats. That would be interesting, but I’ll need a different data source to answer that question.
The numbers for each class are interesting, and I think I’ll just publish them separately.
This picture shows the evolution of class sizes in classes taught by Computer Science faculty since Fall 2008. Specifically, it shows the likelihood that a randomly chosen SCU (student credit unit) will belong to a class of a given size.
So, for instance, you can see that in Fall 2008, more than 50% of our SCUs were delivered in classes of size less than 30, and that in Winter 2017 (2172), about 90% of our SCUs were delivered in classes of size 40 or less.
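If you want to check a number like that yourself, the computation is roughly this (made-up section sizes; it also treats every class as carrying the same number of units, so enrollment stands in for SCUs):

```python
# Hypothetical section sizes for one quarter, for illustration only.
sizes = [12, 18, 25, 25, 30, 35, 40, 110, 180]

def fraction_of_scus_in_classes_under(limit, sizes):
    # Weight each class by its enrollment: a 180-student lecture counts for
    # 180 students' worth of SCUs, not for one "class".
    total = sum(sizes)
    small = sum(s for s in sizes if s < limit)
    return small / total

print(fraction_of_scus_in_classes_under(40, sizes))   # SCU-weighted fraction
print(sum(1 for s in sizes if s < 40) / len(sizes))   # per-class, for contrast
```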
Here’s the picture for the College as a whole:
You may be wondering why I keep yammering on about SCUs, rather than saying things like “50% of classes…”.
Here’s another interesting picture. (Well, I thought it was interesting, anyway.) It shows the number of WTUs taught by the CS department faculty, from Fall 2008 up through Spring 2016. It includes courses with a bunch of different prefixes: CSC, CPE, HNRS, EE, ENGR, LAES, ME, and DATA.
This graph is broken up by the level of the course. The lowest (white) region shows courses whose numbers start with “01” (like “0123” and “0101”), the second region shows courses whose numbers start with “02”, and so forth. The “Sup” region shows the supervisory courses: senior project, master’s thesis, and so on.
One note on “adjusted WTUs”: this data is taken from the FAD report, which misclassifies senior projects as lab courses, resulting in some very broken data. I’ve corrected this by re-assigning WTUs according to the CSU’s formulae.
Also, these are all classes taught by faculty associated with the department, so it includes lots of courses taught to nonmajors, as well as some courses taught by department faculty with other prefixes (for instance, a Mechanical Engineering course).
To me, the most interesting thing about this picture is frankly how flat it is. Our enrollments have gone way up, but the number of WTUs we’re teaching is pretty much unchanged.
I think the next picture to draw is how class sizes have changed over the years.
All parsing and rendering done in Racket. Isn’t it time that you learned Racket for yourself? :).
Here’s an SVG showing all of the dependencies associated with CSC courses:
Yes, it’s a little small to read. Click on it to see a bigger version. It’s an SVG, so you can blow it up arbitrarily. (Note: this picture is a lot more readable since Aaron Keen made the eminently sensible suggestion that it be left-to-right rather than bottom-to-top.)
Things to know about this data:
It’s scraped from the 2015–2017 course catalog in HTML format.
All cross-listed courses are normalized to their CSC equivalents.
Arrows are shown to all courses mentioned in the prereqs.
The last of these is significant. If a course has a pre-requisite like “Both CSC 124 and one of MATH 117 or MATH 118”, I just draw arrows to all of them. So don’t assume that the number of outgoing arrows is an indication of the number of courses required to take this course.
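The extraction really is that naive. Roughly, in Python (the real version is Racket, and the regex here is just a guess at the catalog’s formatting):

```python
import re

# Hypothetical prerequisite text, in the style the catalog uses.
prereq_text = "Both CSC 124 and one of MATH 117 or MATH 118"

# Pull out every department-and-number mention; no attempt is made to parse
# the and/or structure, so every mentioned course gets an arrow.
courses = re.findall(r"\b([A-Z]{2,4})\s+(\d{3})\b", prereq_text)
edges = [f"{dept} {num}" for dept, num in courses]
print(edges)   # ['CSC 124', 'MATH 117', 'MATH 118']
```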
There are lots of courses shown here that haven’t been taught in a long time. CSC 108 jumps out at me, but there are others.
Some courses have a prerequisite that can be fulfilled by a no-longer-existing course. For instance, CSC 141 changed into 348, but there are still a bunch of courses that list CSC 141. Since 141 is not displayed as a hyperlink in the catalog, we assume that it’s defunct, and we don’t show it.
No dependencies are shown for non-CSC courses.
All scraping and processing done in Racket, natch. Graph drawn with Dot.
It appears that I have not recently documented exactly how you “capture,” and (more importantly) feed and care for, a wild yeast culture.
I should start by saying that most of what I know comes from
“Cooked,” by Michael Pollan,
“Tartine Bread,” by Chad Robertson,
… and certain brief conversations with biologists at Cal Poly
The fundamental assumption behind the effectiveness of what I’m going to tell you comes from the ‘everything is everywhere, but the environment selects’ hypothesis, “promulgated by Dutch microbiologist Martinus Willem Beijerinck early in the twentieth century.”1 The basic idea is that most microorganisms are present in varying densities nearly everywhere on earth, and that it’s possible, by providing the right environment, to breed pretty much any microorganism that you want.
In this case, we want a yeast culture. Some people take great care to preserve a particular culture that has been handed down for generations. My reading suggests that this is … well, kind of pointless. You can create your own culture in a week or two (or maybe less) without much fuss, and without anything but flour and water.
With that said, it’s certainly easier to preserve a culture than it is to create one, so once you’ve got it going well, it makes sense to me to keep it going. Honestly, it’s a lot like fire was for a primitive culture: starting it is a pain, and feeding it isn’t too expensive.
Okay, here’s my “recipe.” I’ve started from scratch on this twice, and I’ll probably do it again.