I gave a lecture yesterday as part of a lab course I was TA-ing. The assignment for this week had to do with understanding how different TCP variants perform in a wireless setting.
To prepare students for the assignment, my lecture was designed to be a refresher on TCP’s basics.
My plan was to discuss what TCP sets out to accomplish, some of the early design problems associated with it, and how each subsequent improvement of TCP solved a problem that the previous one didn’t (or introduced). This could have been a very one-sided lecture, with me parroting all of the above. But the best way to keep a classroom interactive is to deliver a lecture packed with questions, and have the students come up with the answers.
So I began the lecture by asking the students what TCP tries to accomplish. They threw all kinds of answers at me, and we discussed each one in turn: what reliability means, how reliable TCP’s guarantee of reliability actually is, and what TCP aims for from a performance standpoint. Note that at this point I was still on slide number 1, with only “What are TCP’s objectives?” on it. Next, we went into the law of conservation of packets, and I asked them why it matters. Once that round of discussion was complete, we started with TCP Tahoe. I posed each problem Tahoe tries to fix and each one it doesn’t, and asked them what the ramifications of its particular design decisions are or would be. This went on for a while, with the students getting more and more worked up about the topic, until we had covered all the TCP variants I had planned to teach. By that point, the students themselves had discussed, debated and attempted to solve each of the many issues involved in making TCP perform well.
Next, we moved on to the problems associated with running TCP over wireless links, and I asked them to suggest avenues for constructing a solution. The discussion that followed was pretty exciting, and at some point they even began correcting and arguing with each other. Little did they know that the one-line problem statement I had offered them took several PhD theses to arrive at even partially working solutions.
I’ve tried different variations of this strategy in the past, and after all these years I’ve concluded this: Leaving students with questions during a lecture puts them in the shoes of those before them who tried to find the answers. Leaving students with the answers makes them mere consumers of knowledge.
When we tell students about a solution alongside the problem itself, we’ve already put horse blinders on their chain of thought: we’re directing it down a single, linear path. Leaving them with the questions for long enough makes them think more, and in my opinion, it works very well in making a classroom interactive.
I recently stumbled upon this.
The gist of the discussion is that a good deal of CS research published at reputable venues is notoriously difficult or even impossible to replicate. Hats off to the team from Arizona for helping to bring this to the limelight. It’s something we as a community ought to be really concerned about.
Among the most common reasons seem to be:
- None of the authors can be contacted for any help relating to the paper.
- Single points of failure: the only author capable of reproducing the work has graduated.
- With the objective of publishing a paper accomplished, the software went unmaintained and now requires divine intervention to even build or set up, let alone use.
- The software used or built in the paper cannot be publicly released. This is either due to licensing reasons, the first two points, or plain refusal by the authors.
- Critical details that are required to re-implement the work are omitted from the paper.
One criticism I have of the study is that its methodology marked a piece of code as “cannot build” if 30 minutes of programmer time was insufficient to build the tool. I doubt many of my own sincere attempts to make code publicly available would pass this test. Odin comes to mind here: it is a pain to set up, despite the fact that others have successfully used it for their research and continue to do so.
So what can we do to minimise academic abandonware? Packaging your entire software environment into VMs and releasing them via a project website sounds to me like an idea worth pursuing. It avoids the problem of having to find, compile and link combinations of ancient libraries. True, it doesn’t help if one requires special hardware resources or a testbed in order to run the system, but it’s a start nevertheless. Investing time and research into building thoroughly validated simulators and emulators may also aid in this direction.
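To make the VM idea concrete, here’s a hypothetical sketch of the sort of thing I mean, using VirtualBox’s export facility (the VM name and output filename are placeholders, and your hypervisor of choice will have its own equivalent):

```shell
# Hypothetical sketch: export a prepared VirtualBox VM containing the whole
# software environment as a single .ova appliance, which can then be linked
# from the project website. "paper-artifact-vm" is a placeholder name.
VBoxManage export "paper-artifact-vm" --output paper-artifact.ova
```

Anyone can then import the appliance and run the system without hunting down a single dependency.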
I’ll end this post with a comic I once drew.
I’ve had a couple of paper deadlines in the last few months, all of which were not-so-conveniently placed a couple of hours apart from each other. While the month leading up to them was insanely stressful, I managed to push out most of what I had in the pipeline, and I don’t have any more paper deadlines to worry about for a few months.
I’m now doing the usual post-submission mental detox to clear my head: taking it easy at work and catching up on life in general. I’ve been completing some pending reviews, preparing an undergraduate course for the upcoming semester, and rabidly catching up on lost gaming time. I’m also going on holiday to Argentina in a week, an opportunity to disconnect from work altogether.
This freedom to manage my time the way that suits me best is what I enjoy most about doing a PhD. I can work insanely hard in the weeks leading up to a deadline to push out a paper, and then slow down for a while to clear my head again.
Now back to exploring dungeons in Skyrim.
The Setup Interviews is an interesting website which features interviews with professionals from different fields about the hardware/software they use on a daily basis. The interviews conclude with the interviewees being asked what their dream setup is. While most people tend to answer this question with some set of gizmos they’d like to own, I feel Matt Might got it right in his answer:
When I was young, I dreamed about building a “nerd cave” full of fast hardware, big monitors, sleek software and cool gadgets. I see now that technology can only nip at the margins of happiness, creativity and productivity relative to the effect of having sharp colleagues, good friends and close family nearby. I have many sharp colleagues that double as good friends. And, there’s an outside chance that in the next two or three years both of my brothers and all three of my sisters-in-law (each of whom is like an actual sister to me) will have joined me and my wife in Utah. I hope it happens. That’s my dream setup.
It’s the New Year and that means it’s time for change. I’ve finally moved my blog off wordpress.com and onto Github + Jekyll.
Jekyll has been a pleasure to deal with so far. The import from wordpress was mostly trivial but with some rough edges.
First, export your wordpress.com blog using the admin console (you should get an xml dump of your site) and then run:
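(The exact invocation depends on your version of the jekyll-import gem; mine looked something like the following, with wordpress.xml being the dump you downloaded.)

```shell
# Run the wordpress.com importer from the jekyll-import gem against the
# exported XML dump. Requires the jekyll-import gem to be installed.
ruby -r rubygems -e 'require "jekyll-import";
  JekyllImport::Importers::WordpressDotCom.run({
    "source" => "wordpress.xml"
  })'
```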
I only exported my posts because I wanted to set up the pages myself. The above command should populate Jekyll’s _posts folder with your blog’s posts as HTML files.
I found the generated HTML to be rather mangled; there were no paragraph separations, and blockquotes looked ugly. This required some monkey patching with awk to fix. There are still a few loose ends which I’ll get around to later. I’ve set up Disqus for comments, and I still need to import all the comments from the Wordpress site.
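To give a flavour of the kind of awk patching I mean (a hypothetical, simplified sketch; the real fixes depended on whatever markup the importer actually spat out), here’s a one-liner that restores paragraph tags around blank-line-separated chunks of text:

```shell
# awk's paragraph mode: with RS="" each blank-line-separated chunk is one
# record, so we can wrap every chunk in <p>...</p> tags.
printf 'first paragraph\n\nsecond paragraph\n' \
  | awk 'BEGIN { RS=""; ORS="\n\n" } { print "<p>" $0 "</p>" }'
```

In practice you’d point this at the generated post files rather than a printf, but the idea is the same.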
I’m currently on the Hyde theme which I modified a bit to suit my liking.
All in all, it’s been a breeze to deploy on Github, and I’m quite happy to have much more control over how my site looks.