• Academic Code Quality

  • Chasing the Thesis Carrot

    My thesis defense is scheduled for the 22nd of June, so I've been in writing mode for the last two weeks. In parallel, though, I've been evaluating my system, which seems to be producing pretty graphs for the time being.

    I'm both surprised and sad at how easily I get distracted when writing my thesis. I've always had an attention span on the order of microseconds, but this is an all-time low.

    Here's how my typical weekday seems to go of late:

    • 8:30: Wake up.
    • 8:30 - 9:00: Ponder about the mysteries of the universe whilst showering.
    • 9:00 - 9:30: Have breakfast, and watch a full episode of The Simpsons or Family Guy.
    • 10:00: Reach the lab. Set up laptop, mouse, keyboard, and extra monitor. Open window for some fresh air. Go grab coffee.
    • 11:30: Done checking my mail, zero-unread-ing my feed of web comics, browsing through HN, Slashdot, and some other news sites (and a few "Oooh! Cat picture!" moments).
    • 11:30 - 12:30: Lunch.
    • 12:30 - 13:30: Post-lunch-procrastination (see 11:30).
    • 13:30 - 14:30: Body has begun processing lunch, so feeling drowsy -- Need. More. Coffee.
    • 14:30: Open up editor for writing thesis. The "let's settle this once and for all!" feeling surges through my body.
    • 14:31 - 14:45: Check Facebook.
    • (The above two repeat for a while.)
    • 15:30: "This is boring! I think I'll do something that matters. Like code!"
    • 15:31: Implement new feature! "Byzantine-fault-tolerant-key-value-based-scalable-elastic-hadoop-LTE-fabric-on-the-cloud!"
    • 16:00: Realise that new feature broke all unit and system tests.
    • 16:01: git reset --hard HEAD
    • 16:01 - 16:02: Check Google+. Doesn't take that long though because there's nothing there.
    • 16:30: "That's it! I'm going to do more experiments! Nothing like graphs to make you feel like a scientist!" * challenge-accepted-rage-face *
    • 16:45: Fire shell script and watch as the whole testbed dances to your bidding, cables filling with packets, WiFi waves flowing through space. You feel empowered, like you're about to introduce a tear in the fabric of space.
    • 16:47: Realise that you misconfigured everything.
    • 16:49: Repeat experiment. Pretty sure it's correct this time, so need to do something useful for an hour.
    • 16:50: Continue with writing thesis.
    • 17:00: Time for more coffee.
    • 17:05: Back to desk, "What was I doing again?".
    • 17:06: Facebook time.
    • 17:08: Booooored.
    • 17:10: Write a few more lines of related work. "Previous work by Joe et al. [10] has been known to suck".
    • 17:15: Discover some feature in text editor. Optimise key bindings for maximum productivity.
    • 17:49: Experiment's over. Fire SQL queries to extract data from measurements database, and pass it through gnuplot.
    • 17:50: Add graphs to thesis. Defend weird results with "Proof-of-concept".
    • 18:00: "Woah! Is it warm here in the lab or what? Screw you guys! I'm going home so that I can write comfortably!"
    • 18:30: At home. Have dinner in the company of Homer or Peter.
    • 19:00: Feel sleepy. Idle around.
    • 23:00: Sleep.
    • Repeat.

    And I wonder why the carrot's never getting closer.

  • Murali

  • Network Operating System?

    I've just begun dealing with Software Defined Networks (SDN) for my Master's thesis, and I'm experimenting on top of Floodlight, an open source OpenFlow controller from Big Switch Networks. In OpenFlow, a logically centralised entity known as the controller can control the forwarding tables of a bunch of switches which speak OpenFlow. OpenFlow applications then talk to the controller using some controller-specific API to 'program' the network (manipulate forwarding tables on the switches). The high level architecture looks something like this:

    Just like an operating system abstracts away the complexities of the underlying hardware for a user-space application, the controller abstracts away the complexities of the network for OpenFlow applications. For this reason, the controller is often referred to as a "network operating system". Applications have some API to talk to the network-OS, and it translates those APIs into OpenFlow commands that control the switches.
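    To make that concrete, here's a small sketch of what "programming the network" can look like from an application's point of view. Floodlight exposes a REST-based static flow pusher along these lines, but the endpoint and field names below are illustrative and vary by version -- treat this as a hypothetical shape, not the exact API:

    ```python
    import json

    # Hypothetical sketch: an OpenFlow application builds a flow entry and
    # hands it to the controller, which translates it into OpenFlow
    # commands (FLOW_MOD messages) for the switch.

    def make_flow_entry(switch_dpid, name, in_port, out_port, priority=100):
        """Build a flow entry: match on the ingress port, forward to out_port."""
        return {
            "switch": switch_dpid,       # datapath ID of the target switch
            "name": name,                # application-chosen identifier
            "priority": str(priority),
            "ingress-port": str(in_port),
            "active": "true",
            "actions": "output=%d" % out_port,
        }

    entry = make_flow_entry("00:00:00:00:00:00:00:01", "app-a-flow-1", 1, 2)
    payload = json.dumps(entry)

    # The application would then POST `payload` to the controller's REST
    # interface (e.g. something like /wm/staticflowentrypusher/json on
    # Floodlight), and the controller programs the switch on its behalf.
    ```

    The point is the division of labour: the application thinks in terms of matches and actions, and the network OS owns the OpenFlow protocol details.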

    For my thesis, the plan for my architecture was to have two applications that provide different services to the network and are expected to run simultaneously. Both of them collect information from the OpenFlow switches and some other framework-specific agents situated at the edges of the network to make optimisation-type decisions. But as soon as I implemented one of the applications, it was clear that I had no straightforward way of ensuring that both my applications wouldn't make decisions that counteract each other. Although I really don't like the idea of doing this, the easiest way to solve it is to wrap both applications into one. And from the looks of it, this is a problem that hasn't been solved yet.

    Controllers like NOX and Onix make the assumption that only one OpenFlow application is running on a given network at any point in time. This is a reasonable assumption from a systems perspective. But what's gotten me confused is how OpenFlow applications fit into the "SDN for enterprises" picture. I was under the impression that a network operator using a particular controller could choose between different 3rd party OpenFlow applications to handle different aspects of the network: a load balancing application from vendor A for the edge, a routing daemon application from vendor B, and so forth. While these are relatively orthogonal applications, it looks like it's possible for two OpenFlow applications to make decisions and choices that adversely affect each other (leading to oscillations in switch state). Floodlight allows you to run multiple applications at the same time, but leaves it to the developer (or user?) to ensure that applications can safely co-exist with each other.
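    A toy model makes the oscillation easy to see. Below, two independent applications (names are made up) each install their preferred rule for the same traffic on a one-entry "switch"; since neither knows about the other, the rule never settles:

    ```python
    # Toy model of the co-existence problem: two applications repeatedly
    # overwrite the same flow table entry, so switch state oscillates.

    flow_table = {}   # match -> action, standing in for a switch's table
    history = []      # every value the entry has taken

    def load_balancer_app():
        # Wants port-80 traffic sent out port 2 (towards replica B).
        flow_table["tcp_dst=80"] = "output=2"
        history.append(flow_table["tcp_dst=80"])

    def routing_app():
        # Its shortest-path computation says the same traffic leaves port 1.
        flow_table["tcp_dst=80"] = "output=1"
        history.append(flow_table["tcp_dst=80"])

    # Each app reacts to the network and re-installs its rule in turn.
    for _ in range(3):
        load_balancer_app()
        routing_app()

    # history is now ['output=2', 'output=1', 'output=2', 'output=1', ...]
    ```

    Nothing in the controller model above prevents this; both applications are individually correct, and the conflict only exists in their composition.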

    So again, if my observation isn't mistaken, how do OpenFlow applications fit cleanly into the SDN ecosystem?  How can I manage my network using building blocks of applications from different vendors? Will I need to rely on OneBigApplianceFromBigBadVendor per network? Does this necessitate something analogous to per-process resource allocation as in traditional operating systems? I can see that FlowVisor style slicing is one way to go about it, but will that suffice?

    So what *should* the network operating system do here? Let the applications run wild and fight it out? Or provide some mechanism to enforce policies between applications?
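    One direction I can imagine (purely a sketch, not something any current controller does as far as I know) is for the network OS to mediate writes rather than apply them blindly. Here a minimal arbiter rejects a rule that would overwrite another application's entry for the same match unless the newcomer has strictly higher priority -- a stand-in for a real inter-application policy:

    ```python
    # Hypothetical arbitration layer inside a network OS: flow-table writes
    # go through an arbiter that enforces a simple ownership-plus-priority
    # policy between applications.

    class FlowArbiter:
        def __init__(self):
            self.table = {}   # match -> (app, priority, action)

        def install(self, app, match, action, priority):
            """Install a rule; refuse conflicting writes from other apps."""
            existing = self.table.get(match)
            if existing is not None and existing[0] != app and existing[1] >= priority:
                return False  # another app owns this match at >= priority
            self.table[match] = (app, priority, action)
            return True

    arb = FlowArbiter()
    assert arb.install("lb", "tcp_dst=80", "output=2", priority=10)        # accepted
    assert not arb.install("router", "tcp_dst=80", "output=1", priority=10)  # conflict
    assert arb.install("router", "tcp_dst=80", "output=1", priority=20)    # overrides
    ```

    This is roughly the per-process resource allocation analogy from above: the network OS, not the applications, decides who gets to touch which part of the switch state. FlowVisor's slicing attacks the same problem by partitioning flowspace between controllers instead.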

    If I am indeed mistaken in my assumption, please do let me know what I'm missing here! :)

  • 11 days to WNS3 2012

    If you're not aware already, the 4th International Workshop on NS-3 (WNS3) 2012 is just around the corner. We're almost done with the organising and have a very interesting lineup of presentations. Tom Henderson and Mathieu Lacage will be giving keynote talks. We then have 12 full paper presentations spread across the day, and a shared poster session with the Omnet++ workshop, for which we have 8 poster/demo submissions from our side. Have a look at the final program here.

    If you're an ns-3 user, don't miss this chance to share ideas and learn more about what researchers from around the globe are up to with the project. Don't forget, there's also the developers' meeting the next day. Just add yourself to the wiki if you're interested in attending in person or remotely.

    Now if only I can find a damn youth hostel in Desenzano...