Cal Poly Solar Decathlon Site
The student-run Cal Poly Solar Decathlon site is now online, at http://www.calpolysolardecathlon.org.
Check it out!
I’m sympathetic to teachers’ unions. In fact, I’m in a teachers’ union. More specifically, I’m a tenure-track associate professor at Cal Poly, and a member of the California Faculty Association.
Many of the faculty union’s actions I find commendable. In particular, I’m thankful that the union supports faculty wages1, and tries to ensure the continued presence of full-time faculty.
However, I find the union’s seniority rules pretty much indefensible. In particular, article 38.16 of the contract (Collective Bargaining Agreement) negotiated by the CFA with the California State University system (or CSU) stipulates (IANAL) that “The President shall establish the order of layoff for tenured faculty unit employees in a unit of layoff by reverse order of seniority.”
Why would this be the case? Is the administration presumed to be so incapable of estimating worth that this decision needs to be taken out of their hands completely? It appears to me that the current goal of the union is to demonize every aspect of the CSU administration’s activity. The level of the CFA rhetoric in its published materials is incredibly low; to take just one example, the idea of paying more money to certain employees based on their performance is described by the CFA as “Pucker Pay.” Please.
Now’s where I should launch into a detailed analysis of the history of labor laws and the role played by seniority layoffs… but I don’t have that background, or that time. If I could build a model using Redex and publish it in POPL, I’d be all over it. Instead, it will just be my opinion.
Here’s another part of my opinion: civil discourse is the basis for forward progress in our government.
I guess I can say this: I voted for Marshall Tuck.
1 though probably not mine, actually
What? There are no decimal time zones?
Okay, backing up.
I love time-wasting hard-to-learn idiosyncrasies. I use the dvorak keyboard, I run in sandals I make myself, I run my own mail server (surely the stupidest of my habits).
About two years ago I “invented” decimal time. Which is to say: I did in fact think of it myself. Unfortunately, there’s a bit of prior art here, going back to the French Revolution.
Short version: Our current day has 86,400 seconds in it. This is not really very far from 100,000. So… what if we just designated a decimal second as being 1/100,000 of a day? Then we could have all of our hours and minutes be decimal divisions. More specifically: the day is divided into 10 decimal hours, each hour into 100 decimal minutes, and each minute into 100 decimal seconds. Works great! The decimal hours are quite long, but the decimal minutes are pretty close to our existing ones.
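For concreteness, here’s a quick sketch of the conversion (in Rust; the function name and shape are mine, not part of any standard):

```rust
// Convert an ordinary h:m:s time of day to decimal time,
// where one decimal second is 1/100,000 of a day.
fn to_decimal(h: u32, m: u32, s: u32) -> (u32, u32, u32) {
    let seconds = h * 3600 + m * 60 + s; // ordinary seconds since midnight
    // fraction of the day elapsed, scaled to 100,000 decimal seconds
    let total = (seconds as f64 / 86_400.0 * 100_000.0).round() as u32;
    // split into 10 decimal hours, 100 decimal minutes, 100 decimal seconds
    (total / 10_000, (total / 100) % 100, total % 100)
}
```

Noon, for instance, comes out as exactly 5 decimal hours: half the day is gone.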
A brief diary of broken bicycle frames.
All dates approximate.
This list does not include broken axles or bent forks.
EDIT:
Back in 2005, Cal Poly placed third in the Solar Decathlon… and there’s a movie, to prove it!
The Rahus Institute has graciously agreed to put their 2005 Solar Decathlon movie online. Here’s the first of three segments of the hour-long movie.
I feel like the list of things I’m outraged about politically is always growing. How can I keep track of it?
I have the feeling I’m going to be adding to these.
I’m totally delighted to move from octopress to Greg Hendershott’s frog, a Racket-based static blog generator. No more of those !@#$ RVMENVRCETCETC dot files.
Yes, there are still lots of things to fix. Some of those images are a wee bit enormous, for instance. That’s not going to happen today.
Granite Mon 2013 is in the books. Well, in some books. This was the 19th running of the … well, of the Long Island Challenge part of the Granite Mon. This year it was organized by Alice Clements, and enjoyed by many.
We arose at an absurdly early hour on August 17th, and met at the KYC at 5:00 AM. Miraculously, we had plenty of chase boats, despite a few late cancellations. We counted seven swimmers and eight chase boats, if I recall correctly, so we packed kayaks into the motorboats and headed over.
It was a really lovely morning:
Here we are after we finished:
From left to right, in this picture:
Justin Pollard
Matt ??
Also swimming was Matt, whose last name I can’t remember and who is cut out of the picture. That’s really too bad, and if anyone can give me a picture, I’ll stick it in here.
Being on sabbatical has given me a bit of experience with other systems and languages. Also, my kids are now old enough to “mess around” with programming. Learning from both of these, I’d like to hazard a bit of HtDP heresy: students should learn

for i = 1 to 10

before they learn the recursive traversal of an inductively defined data structure.
To many of you, this may seem obvious. I’m not writing to you. Or maybe you folks can just read along and nod sagely.
HtDP takes this small and very lovely thing—recursive traversals over inductively defined data—and shows how it covers a huge piece of real estate. Really, if students could just understand how to write this class of programs effectively, they would have a vastly easier time with much of the rest of their programming careers, to say nothing of the remainder of their undergraduate tenure. Throw a few twists in there—a bit of mutation for efficiency, some memoization, some dynamic programming—and you’re pretty much done with the programming part of your first four years.
The sad thing is that many, many students make it through an entire four-year curriculum without ever really figuring out how to write a simple recursive traversal of an inductively defined data structure. This makes professors sad.
Among the Very Simple applications of this nice idea is that of “indexes.” That is, the natural numbers can be regarded as an inductively defined set, where a natural number is either 0 or the successor of a natural number. This allows you to regard any kind of indexing loop as simply a special case of … a recursive traversal of an inductively defined data structure.
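To make that concrete, here’s the idea in code (my sketch, in Rust rather than the student-facing language): an indexing computation written as a recursive traversal of the naturals, next to the loop it specializes to.

```rust
// A natural number is either 0 or the successor of a natural number.
// Summing 1..n as a recursive traversal of that inductive structure:
fn sum_to(n: u64) -> u64 {
    match n {
        0 => 0,                 // base case: zero
        _ => n + sum_to(n - 1), // successor case: recur on the predecessor
    }
}

// The same computation as an indexing loop -- a special case of the above:
fn sum_to_loop(n: u64) -> u64 {
    let mut total = 0;
    for i in 1..=n {
        total += i;
    }
    total
}
```

The two compute the same thing; the recursive version just makes the inductive structure of the index explicit.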
So here’s the problem: in September, you face a bunch of bright-eyed, enthusiastic, deeply forgiving first-year college students. And you give them the recursive traversal of the inductively defined data structure. A very small number of them get it, and they’re off to the races. The rest of them struggle, and struggle, and finally get their teammates to help them write the code, and really wish they’d taken some other class.
However, another big part of the problem is … well, monads are like burritos.
Let me take a step back.
The notion of repeated action is a visceral and easily-understood one. Here’s what I mean. “A human can multiply a pair of 32-bit integers in about a minute. A computer can multiply 32-bit integers at a rate of several billion per second, or about a hundred billion times as fast as a person.” That’s an easily-understood claim: we understand what it means to do the same thing a whole bunch of times really fast.
So, when I write
for i=[1..100] multiply_two_numbers();
it’s pretty easy to understand that I’m doing something one hundred times.
Is this post a thinly disguised ripoff of Brian Anderson’s post about embedding Rust in Ruby? Why yes. Yes it is.
Okay, let me start with a little background. Rust is a magnificent language that comes from Mozilla; it’s targeted at programmers who want
I think the Mozilla Research homepage is probably the best place to start learning about Rust.
To be honest, though, I’m probably flattering myself if I think that this blog post is being read by anyone who doesn’t already know lots about Rust.
One of the key requirements of a language like Rust is that it be embeddable; that is, it should be possible to call Rust code from another language just as it’s possible to call C code from another language.
This is now possible.
To illustrate this, Brian Anderson posted a lovely example of embedding Rust in Ruby. But of course, embedding Rust in Ruby is pretty much exactly the same as embedding Rust in any other language.
Say, for instance, Racket.
So, without further ado, here’s the setup. You just happen to have a small web app written in Racket that performs a Gaussian Blur. You decide to optimize the performance by porting your code to Rust. Then you want to plug your Rust code into your Racket application. Done! Here’s the github repo that contains all of the code.
Let’s see that again in slow motion.
First, here’s the gaussian blur function, written in Racket. We’re going to stick with a grayscale image. It works fine in color, but the code is just that much harder to read.
```racket
;; the gaussian filter used in the racket blur.
;; boosted center value by 1/1000 to make sure that whites stay white.
(define filter '[[0.011 0.084 0.011]
                 [0.084 0.620 0.084]
                 [0.011 0.084 0.011]])

;; racket-blur: blur the image using the gaussian filter
;; number number list-of-bytes -> vector-of-bytes
(define (racket-blur width height data)
  (define data-vec (list->vector data))
  ;; ij->offset : compute the offset of the pixel data within the buffer
  (define (ij->offset i j)
    (+ i (* j width)))
  (define bytes-len (* width height))
  (define new-bytes (make-vector bytes-len 0))
  (define filter-x (length (car filter)))
  (define filter-y (length filter))
  (define offset-x (/ (sub1 filter-x) 2))
  (define offset-y (/ (sub1 filter-y) 2))
  ;; compute the filtered byte array
  (for* ([x width]
         [y height])
    (define new-val
      (for*/fold ([sum 0.0])
                 ([dx filter-x]
                  [dy filter-y])
        (define sample-x (modulo (+ dx (- x offset-x)) width))
        (define sample-y (modulo (+ dy (- y offset-y)) height))
        (define sample-value (vector-ref data-vec (ij->offset sample-x sample-y)))
        (define weight (list-ref (list-ref filter dy) dx))
        (+ sum (* weight sample-value))))
    (vector-set! new-bytes (ij->offset x y) new-val))
  (vector->list new-bytes))
```
Suppose we want to rewrite that in Rust. Here’s what it might look like:
```rust
fn blur_rust(width: uint, height: uint, data: &[u8]) -> ~[u8] {
    let filter = [[0.011, 0.084, 0.011],
                  [0.084, 0.620, 0.084],
                  [0.011, 0.084, 0.011]];
    let mut newdata = ~[];
    for uint::range(0, height) |y| {
        for uint::range(0, width) |x| {
            let mut new_value = 0.0;
            for uint::range(0, filter.len()) |yy| {
                for uint::range(0, filter.len()) |xx| {
                    let x_sample = x - (filter.len() - 1) / 2 + xx;
                    let y_sample = y - (filter.len() - 1) / 2 + yy;
                    let sample_value = data[width * (y_sample % height)
                                            + (x_sample % width)];
                    let sample_value = sample_value as float;
                    let weight = filter[yy][xx];
                    new_value += sample_value * weight;
                }
            }
            newdata.push(new_value as u8);
        }
    }
    return newdata;
}
```
Pretty similar. Of course, it uses curly braces, so it runs about three times faster…
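(That 2013-era dialect, with `uint`, `~[u8]`, and `do` blocks, no longer compiles. Purely as a sketch of my own, not part of the original post, the same wrap-around blur might read like this in present-day Rust:)

```rust
// Modern-Rust sketch of the same grayscale blur with wrap-around sampling.
fn blur_rust(width: usize, height: usize, data: &[u8]) -> Vec<u8> {
    let filter = [[0.011, 0.084, 0.011],
                  [0.084, 0.620, 0.084],
                  [0.011, 0.084, 0.011]];
    let offset = filter.len() / 2;
    let mut newdata = Vec::with_capacity(width * height);
    for y in 0..height {
        for x in 0..width {
            let mut new_value = 0.0;
            for (yy, row) in filter.iter().enumerate() {
                for (xx, weight) in row.iter().enumerate() {
                    // add `width`/`height` before subtracting so the unsigned
                    // index can't underflow, then wrap around the edges
                    let x_sample = (x + width - offset + xx) % width;
                    let y_sample = (y + height - offset + yy) % height;
                    new_value += data[width * y_sample + x_sample] as f64 * weight;
                }
            }
            newdata.push(new_value as u8);
        }
    }
    newdata
}
```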
So: what kind of glue code is necessary to link the Rust code to the Racket code? Not a lot. On the Rust side, we need to create a pointer to the C data, then copy the result back into the source buffer when we’re done with the blur:
```rust
#[no_mangle]
pub extern fn blur(width: c_uint, height: c_uint, data: *mut u8) {
    let width = width as uint;
    let height = height as uint;
    unsafe {
        do vec::raw::mut_buf_as_slice(data, width * height) |data| {
            let out_data = blur_rust(width, height, data);
            vec::raw::copy_memory(data, out_data, width * height);
        }
    }
}
```
On the Racket side, it’s just a question of making an ffi call, which is super-concise:
```racket
;; link to the rust library:
(define rust-lib
  (ffi-lib (build-path here "libblur-68a2c114141ca-0.0")))
(define rust-blur-fun
  (get-ffi-obj "blur" rust-lib (_fun _uint _uint _cvector -> _void)))

(define (rust-blur width height data)
  (define cvec (list->cvector data _byte))
  (rust-blur-fun width height cvec)
  (cvector->list cvec))
```
And away you go!
I’ve got this code running live at FIXME. What’s that you say? You can’t seem to find FIXME?