A False Sense of Security with Test-driven Development

Test-driven development is great as long as you have proper tests. The problem is that it’s very hard to predict enough edge cases to cover the field of possible scenarios. Code coverage analysis helps developers make sure all code blocks are executed, but it does nothing to ensure an application correctly handles variations in data, user interaction, and failure scenarios, or how it behaves under different stress conditions.

Most developers are already conscious that tests are helpful but never complete. The danger is that better tests make worse developers! It’s very easy to lean too heavily on passing tests, wildly changing code until the light goes green, without spending enough time thinking through the application’s logic.

I’m basically saying that, psychologically speaking, passing tests gives us a false sense of security. They can be a distraction from carefully crafted, well-thought-out code. That’s why I advocate writing tests only for the purpose of regression testing. It should be a follow-up step, not an integral part of initial development.

Democracy 2.0 with Micro-voting

The democratic process hasn’t changed much: the public elects representatives, and representatives make decisions. The average citizen is essentially limited to three tools for effecting change. First, we elect the politician who best convinces us they’ll make the decisions we would want made. Second, we can send correspondence to our representatives hoping to influence their decisions. Third, we can hold protests and demonstrations to broadcast our opinions.

But there could be a better way. Modern communication tools give the public better access to government and could revolutionize the democratic process by letting citizens vote directly on the issues. Using internet, text message, and phone voting systems, citizens can be directly involved in the decision-making process. The role of the representative is reduced to more of an organizer than a decision maker, because constituents decide most of the issues themselves. This is Democracy 2.0 not because of the invention of new tools, but because it changes the way people behave. I believe we’re headed in the direction of popular governing, but it’s not a perfect world.

Old-school tools of Democracy 1.0

There are three essential tools. Ballot voting is infrequent and highly constrained: decisions are simply yes/no or choose your favorite candidate. No second choices, no weighted scores.

Correspondence with representatives is a free-form way to express opinions and ideas, but it is usually ignored or not accurately counted because representatives simply can’t handle the volume. It lacks transparency and accountability in that only the representative knows the aggregate opinion. And sending correspondence requires both time and knowledge of the system.

Protests show what a sample of the population thinks about an issue. To decision makers, a protest roughly quantifies two things: what proportion of the constituency holds an opinion and how intensely they hold it.

There’s a better way with Democracy 2.0

With more ubiquitous methods of communication, we now have the tools for a more involved democracy, one where citizens can take a direct role in the decision-making process. Arranging “micro-polls” through the web, text message, or telephone is now so easy that we can weigh in on any range of issues. By making these polls open, transparent, and frequent, we could have truly participatory democracy. Micro-polls can be used to develop policy decisions through rapid iteration, harnessing the “wisdom of crowds”: first determine whether any action should be taken, then what kind of action, until the constituency arrives at the most agreeable outcome.

Round 1:
Poll: Should the federal government adopt policies to reduce the number of illegal immigrants?

Round 2:
Poll: Which method of reducing the number of illegal immigrants do you most agree with?
— Forced deportation
— Keeping non-citizen status, but documenting and taxing
— Naturalizing to eventually become US citizens

Round 3:
Poll: For non-citizen, documented workers, the following services and taxes should be imposed:
— All normal social services and Medicare except Social Security, with all normal taxes except Social Security
— No social services, with all normal taxes except Social Security

Public surveys on non-policy issues, such as a yearly report card for representatives, give feedback so they can do their job better without waiting for the next election cycle. Constituents rate various qualities of their representative on a scale from 1 to 10. Descriptive statistics would provide even more valuable information, such as the standard deviation and a breakdown of ratings by demographics.
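
As a toy illustration of the kind of aggregation this could involve, here’s a quick Python sketch; the ratings data and the demographic field are invented for the example:

# Toy report-card aggregation; the data and field names are invented.
ratings = [
    {'age_group': '18-29', 'score': 7},
    {'age_group': '18-29', 'score': 4},
    {'age_group': '30-44', 'score': 9},
    {'age_group': '30-44', 'score': 8},
]

def mean(xs):
    return float(sum(xs)) / len(xs)

def stddev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

scores = [r['score'] for r in ratings]
print 'overall: mean %.1f, stddev %.1f' % (mean(scores), stddev(scores))

# Break the ratings down by demographic group
groups = {}
for r in ratings:
    groups.setdefault(r['age_group'], []).append(r['score'])
for name, xs in sorted(groups.items()):
    print '%s: mean %.1f' % (name, mean(xs))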

Crowd-sourced idea generation could lead to novel solutions for hard problems. Anyone can submit ideas, and the best ideas are voted up. Discussions revolve around the ideas to help them mature.

The web could be used as an official forum for discourse between representatives and their constituencies where everyone gets a voice. Think of it as a town hall meeting on a national scale. Voting and crowd filtering would make quality questions and comments rise to the top, while trolls and inappropriate comments would be voted down and hidden by popular disapproval. The forum also gives the representative a venue for communicating their personal beliefs on the issues as a well-informed political professional. This is where the organizing and rallying happens.

Being more involved in decisions, with a high frequency of iteration, strengthens the feedback loop citizens feel when exercising their rights. Psychologically, many people would feel more connected to the democratic process and, through a greater sense of ownership, would hopefully be willing to contribute more mental energy toward the running of their country.

Not without its problems

No political system is perfect. While Democracy 2.0 could be a vast improvement, it doesn’t solve every problem of Democracy 1.0, and it exposes new dangers. People are quick to point out that electronic voting is unsafe and error-prone. There are secure, open-source options, but that is the subject of many other blog posts. Some countries, like Estonia, already employ internet voting using the national ID card for authentication. Great care would need to be taken by the election commission to certify any system used.

The general public is often more susceptible to manipulation by persuasive individuals than an elected official would be. News pundits and politically motivated propaganda are no strangers to Americans today, but their effect could be even greater, and more detrimental, when the people have direct control over more decisions. Democracy depends on a well-informed public, and that is a value we need to instill in our culture regardless.

Mob dynamics don’t always produce the best decision. Numerous studies show how the sense of moral responsibility declines when people act in groups rather than individually. And sometimes the “most agreeable” compromise is worse than either extreme; we’ve all heard the maxim “A camel is a horse designed by committee.” This is where the role of an inspirational representative is so important: getting constituents behind a cause and guiding the conversation toward an optimal outcome.

The idea of a representative has been necessary because people don’t have the time or knowledge to vote wisely on every issue, so they place their trust in their elected official. That problem is still valid, but there are ways of having it both ways. For example, if a constituent doesn’t vote, their vote could be allocated to the representative. Alternatively, those who care deeply about an issue will be represented, and those who care less will not be heard. Voting is, and always should be, optional.

There is still a lot of thought and experimentation that needs to go into Democracy 2.0, but we can move forward in baby steps.

More Resources

http://usnowfilm.com
http://opengov.ideascale.com/a/dtd/2865-4049
http://www.nytimes.com/2009/09/12/world/americas/12iht-currents.html
http://en.wikipedia.org/wiki/Electronic_voting_in_Estonia
http://en.wikipedia.org/wiki/Direct_democracy#Electronic_direct_democracy

Getting Wikipedia Summary from the Page ID

While working on my forthcoming checkin.to project, I needed to use the MediaWiki API to get the summary paragraph of Wikipedia articles pertaining to places. Checkin.to relies on Yahoo Where On Earth Identifiers (WOEIDs). Yahoo also conveniently offers a concordance API, so from the WOEID I get the Geonames ID and the Wikipedia page ID, among other things. As far as I can tell, the MediaWiki API doesn’t let you request page content using the page ID, so the first step is to resolve the page ID into a unique page title. This can be done using the query action like so:

http://en.wikipedia.org/w/api.php?action=query&pageids=49728&format=json

It gives a response resembling:

{"query":{"pages":{"49728":{"pageid":49728,"ns":0,"title":"San Francisco"}}}}

Step 2 is to get the actual page content. There are a variety of formats available, including the raw wiki markup, but for my purpose the formatted HTML is much more useful. We also need to convert the spaces in the page title to underscores. The request looks like this:

http://en.wikipedia.org/w/api.php?action=parse&prop=text&page=San_Francisco&format=json

And it gives a response resembling:

{"parse":{"text":{"*":"<div class=\"dablink\">This article is about the place in California. [...] "}}}

Step 3 is to parse the resulting article HTML and extract just the first body paragraph, which typically summarizes the whole article. The problem is that a bunch of other stuff, including all the sidebar content, comes before the first body paragraph, and that other stuff can itself include p tags. jQuery is a big help here, as usual. First, let’s wrap the entire resulting wiki page in a div element to give everything a root. Then we can select just the children of that wrapper element to find the first root-level p tag.

wikipage = $("<div>"+data.parse.text['*']+"<div>").children('p:first');

Below is the entire resulting function that goes from page ID to summary paragraph and appends it to a <div> somewhere in my DOM called #wiki_container. I also perform some optional cleanup, including removing citations, updating the relative hrefs to absolute hrefs pointing to http://en.wikipedia.org, and adding a read-more link.

function getAreaMetaInfo_Wikipedia(page_id) {
  $.ajax({
    url: 'http://en.wikipedia.org/w/api.php',
    data: {
      action: 'query',
      pageids: page_id,
      format: 'json'
    },
    dataType: 'jsonp',
    success: function(data) {
      // Resolve the page ID to a title, converting all spaces to underscores
      var title = data.query.pages[page_id].title.replace(/ /g, '_');
      $.ajax({
        url: 'http://en.wikipedia.org/w/api.php',
        data: {
          action: 'parse',
          prop: 'text',
          page: title,
          format: 'json'
        },
        dataType: 'jsonp',
        success: function(data) {
          // Wrap the page HTML in a root element and grab the first top-level paragraph
          var wikipage = $("<div>" + data.parse.text['*'] + "</div>").children('p:first');
          // Remove citation markers
          wikipage.find('sup').remove();
          // Rewrite relative hrefs as absolute links into en.wikipedia.org
          wikipage.find('a').each(function() {
            $(this)
              .attr('href', 'http://en.wikipedia.org' + $(this).attr('href'))
              .attr('target', 'wikipedia');
          });
          $("#wiki_container").append(wikipage);
          $("#wiki_container").append("<a href='http://en.wikipedia.org/wiki/" + title + "' target='wikipedia'>Read more on Wikipedia</a>");
        }
      });
    }
  });
}
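
If you’d rather do this server-side than in the browser, the same two requests can be chained in Python. The sketch below is just an illustration of the flow, not production code: it grabs the first <p> with a naive regex instead of real HTML parsing, so it won’t skip sidebar paragraphs the way the jQuery version above does.

import json
import re
import urllib
import urllib2

API = 'http://en.wikipedia.org/w/api.php'

def wikipedia_summary(page_id):
    # Step 1: resolve the page ID to a unique title
    # (note: Wikipedia may require a descriptive User-Agent in practice)
    query = urllib.urlencode({'action': 'query', 'pageids': page_id, 'format': 'json'})
    data = json.loads(urllib2.urlopen(API + '?' + query).read())
    title = data['query']['pages'][str(page_id)]['title'].replace(' ', '_')
    # Step 2: fetch the parsed article HTML for that title
    query = urllib.urlencode({'action': 'parse', 'prop': 'text', 'page': title, 'format': 'json'})
    html = json.loads(urllib2.urlopen(API + '?' + query).read())['parse']['text']['*']
    # Step 3 (naive): return the contents of the first <p> block
    match = re.search(r'<p>(.*?)</p>', html, re.DOTALL)
    return match.group(1) if match else None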

A continuous, blocking python interface for streaming Flickr photos

As I explained in my last post, Yahoo! claims their Firehose is a real-time streaming API, but it’s not. So to make life a bit easier for app developers, I wrote a Python wrapper that provides a continuous, blocking interface to the Flickr polling API. Effectively it emulates a streaming API by stringing together frequent requests to flickr.photos.getRecent. And it’s dead simple.

from PyFlickrStreamr import PyFlickrStreamr

fs = PyFlickrStreamr('your_api_key_here', extras=['date_upload','url_m'])
for row in fs:
    print str(row['id'])+"   "+row['url_m']
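
Under the hood there’s no magic: the wrapper just polls, de-duplicates, and sleeps. Here’s a rough sketch of the pattern, not the actual PyFlickrStreamr source; the class name and polling interval are invented for the example:

import json
import time
import urllib
import urllib2

# Sketch of a blocking iterator over a polling API (not the real PyFlickrStreamr).
class PollingPhotoStream(object):
    API_URL = 'http://api.flickr.com/services/rest/'

    def __init__(self, api_key, interval=10):
        self.api_key = api_key
        self.interval = interval
        self.seen = set()  # photo ids already yielded (grows unbounded in this sketch)

    def __iter__(self):
        while True:
            params = urllib.urlencode({
                'method': 'flickr.photos.getRecent',
                'api_key': self.api_key,
                'format': 'json',
                'nojsoncallback': 1,
            })
            response = json.loads(urllib2.urlopen(self.API_URL + '?' + params).read())
            for photo in response['photos']['photo']:
                if photo['id'] not in self.seen:
                    self.seen.add(photo['id'])
                    yield photo
            time.sleep(self.interval)  # block until the next poll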

You can download the package from PyPI or fork the source code on GitHub. Have fun.

The Yahoo Firehose "feed" isn’t a feed at all

The web has been on a big real-time trend for the past couple of years. FriendFeed was one of the first services to show real-time updates across your social network, and real-time feeds took the stage in a big way when Twitter started its streaming API. In April, Yahoo! announced its Firehose API, claiming “it includes a real-time feed of every public action taken on our network”. The thing is, this isn’t a “feed” or a “stream” in the same sense that Twitter’s streaming API is. It’s a database you can poll with Yahoo’s YQL, an SQL-like query language. Sure, the updates may be available in their database in near real-time, but to receive them you need to issue a new request. In fact, the only way to know whether there are updates is to continuously poll the service. A feed would be something like long-polling with HTTP server push (what Twitter does) or PubSubHubbub.

It may be just semantics to some, but this bothers me. To those of us who build applications that publish or consume real-time information, this is a very important distinction. I plan on writing a Python library that wraps Flickr’s polling API into a “real-time” blocking, continuous stream for a project I’m working on. I’ll publish the code on GitHub and post it here when done.

Resolving HTTP Redirects in Python

Since everyone is using short URLs these days, and sometimes we just need to know where a URL leads, I wrote this handy little function to find out for us. Redirection can be a tricky thing: we have 301 (“permanent”) and 302 (“temporary”) status codes and multiple layers of redirection. I think the simplest approach is this: whenever the server returns a Location HTTP header whose value differs from the URL you requested, we can be pretty sure it’s a redirect. The function below uses the HTTP HEAD method to request only the headers, so as not to waste bandwidth, and recursively calls itself until it gets a non-redirecting result. As a safeguard against infinite recursion I have a depth counter.

import urlparse
import httplib

# Recursively follow redirects until there isn't a location header
def resolve_http_redirect(url, depth=0):
    if depth > 10:
        raise Exception("Redirected %d times, giving up." % depth)
    o = urlparse.urlparse(url, allow_fragments=True)
    # Use the connection class that matches the URL's scheme
    if o.scheme == 'https':
        conn = httplib.HTTPSConnection(o.netloc)
    else:
        conn = httplib.HTTPConnection(o.netloc)
    path = o.path or '/'
    if o.query:
        path += '?' + o.query
    # HEAD fetches only the headers, saving bandwidth
    conn.request("HEAD", path)
    res = conn.getresponse()
    headers = dict(res.getheaders())
    if 'location' in headers and headers['location'] != url:
        # The Location header may be relative, so resolve it against the current URL
        return resolve_http_redirect(urlparse.urljoin(url, headers['location']), depth + 1)
    else:
        return url
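
Usage is a one-liner. The short URL here is hypothetical, just for illustration:

# hypothetical short URL, for illustration only
print resolve_http_redirect('http://bit.ly/example')
# prints the final destination after following each Location header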

Simple Skew Animation with ActionScript 3

I couldn’t find any good examples of a simple tween of a skew effect in ActionScript 3, so I thought I’d share what I came up with.

The problem is that skew is not a property on the MovieClip like x or height or the others you’re used to tweening with fl.transitions. To apply a skew effect in AS3 you need to use a matrix transform like this:

    import flash.geom.Matrix;
    var matrix:Matrix = new Matrix();
    matrix.b = Math.tan(_y * Math.PI/180);  // vertical skew
    matrix.c = Math.tan(_x * Math.PI/180);  // horizontal skew
    matrix.concat(my_mc.transform.matrix);
    my_mc.transform.matrix = matrix;

or

    import flash.geom.Matrix;
    var matrix:Matrix = new Matrix(1, Math.tan(_y * Math.PI/180), Math.tan(_x * Math.PI/180), 1, 0, 0);
    my_mc.transform.matrix = matrix;

where _y and _x are the skew angles in degrees; the Math.PI/180 factor converts them to radians, and Math.tan maps the angle onto the matrix’s b and c skew properties. Unfortunately, when you update a property of the transform matrix like

my_mc.transform.matrix.b = Math.tan(_y * Math.PI/180);

it doesn’t update the MovieClip. You need to actually re-assign the matrix to trigger an update, so you can’t simply tween my_mc.transform.matrix.b directly. Here’s my solution.

    import fl.transitions.Tween;
    import fl.transitions.easing.*;
    import fl.transitions.TweenEvent;
    import flash.geom.Matrix;

    var mymatrix:Matrix = new Matrix(1, 0, 0, 1, 0, 0);

    // Re-assign the matrix on every tween update so the change becomes visible
    function reassignMatrix(e:TweenEvent):void {
        my_mc.transform.matrix = mymatrix;
    }

    // Tween the matrix's b property (vertical skew) instead of the MovieClip itself
    var skewTween:Tween = new Tween(mymatrix, 'b', Regular.easeOut, mymatrix.b, Math.tan(_y * (Math.PI/180)), 10);
    skewTween.addEventListener(TweenEvent.MOTION_CHANGE, reassignMatrix);

Note this will overwrite the scale and rotation properties, which are part of the transform matrix. See the Adobe livedocs for more information.

Three Alternative Inputs for Video Games

When you use a computer enough, it can start to feel like the mouse becomes a part of you. Very quickly you forget about consciously moving it right or left to control the cursor; it becomes second nature. Essentially it is an extension of your physical self, with which you manipulate the screen as naturally as you pick up an object with your hand. Combined with a keyboard, it is the only form of input for most ‘serious’ video games in the home because of the level of precision required. I think there are other forms of input that can be used in conjunction with the traditional mouse and keyboard for a much richer, more immersive experience.

  1. Eye tracking
  2. Voice control
  3. Video gestures

Eye tracking has been employed in usability and behavioral studies for over a decade, but it has so far been demonstrated only in a very limited sense as actual control input. These two videos demonstrate eye tracking used to control camera movement in place of the mouse, and to aim in place of what would normally be a physical gun in an arcade.

In the case of a free camera, the player moves the camera by looking toward the edge of the screen. When the eyes are centered, the camera is still. This is the same kind of control a joystick uses. Without having tried it myself, I feel like it could be unintuitive, because when we look at something we don’t expect it to move away. With the second example you can see that, with a fixed camera in place of a physical gun, eye tracking is quite effective, but I don’t see any major advantages over the traditional method.

In first-person shooters, I feel the best combination will be the mouse for camera control and eye tracking for aim. When you look at something on the screen it doesn’t move away; you shoot precisely where you look. Gamers can use the same intuitive interface they’re already used to for moving the character around and firing the weapon. For the quickest, most precise control of aiming, the lightning-fast twitch reflex of eye movement is perfect and, in fact, is something players already do.

In 2005, Erika Jonsson of the Royal Institute of Technology in Sweden conducted a study of the methods I described and arrived at similar conclusions:

The result showed that interaction with the eyes is very fast, easy to learn and perceived to be natural and relaxed. According to the usability study, eye control can provide a more fun and committing gaming experience than ordinary mouse control. Eye controlled computer games is a very new area that needs to be further developed and evaluated. The result of this study suggests that eye based interaction may be very successful in computer games.

The main problem facing this method is the cost of hardware. Current technologies are used in a limited fashion by academics and usability studies and are generally not available in the mass market. Accurate eye tracking is apparently not an easy thing to do. I’m guessing that, were there a serious market opportunity, some enterprising group of young researchers could simplify the hardware and prepare it for mass production, bringing costs down to reasonable levels for a luxury product.

Voice control can be used for high-level commands that might otherwise be accessed from menus or complex key combinations. Using voice saves the player from having to break from the immersive experience of controlling the character. It has the additional side effect of engaging other parts of the brain, and it encourages the more realistic style of interaction that people encounter in daily life.

The jury is still out on whether people like talking to their computer. I think its reception will rest, as usual, on the intuitiveness of the commands. The game should never force the player to use voice commands (there always needs to be another path of access), and the player should never have to remember what a command or sequence is. You just say what you want to happen. I think this could be especially interesting in non-linear storylines where the options aren’t necessarily clear to the player. Instead of selecting from a menu of pre-defined choices, the player could explore (as long as they don’t feel they’re searching in the dark).

Video gestures have been used as a fun, gimmicky activity with the PS3 Eye, and soon with Microsoft’s Project Natal, but I haven’t seen a use that actually results in interesting gameplay in what we call “serious” games. One idea is to take hints from the camera rather than direct input. When communicating with teammates in multiplayer, a user might say a command in voice chat and point in a general direction relative to where he’s looking. The camera could take that hint and cause his character to also point in that direction. This is not something that can be used for precise control, but we can attempt to mimic non-essential body language as added value.

When combined with voice commands, video gestures could enhance the level of realism and natural interface. For instance, the direction of the face could indicate whether the player is giving a command to the game or talking to another person in the room. In storytelling dialog, more than one player can give input, and the video indicates which of them is talking. Again, I think the best we can do with camera input at this point is imprecise input. Project Natal looks like it might do a great job in the casual games space, but we’ll have to see it in the wild controlling games by third-party developers.

Visualizing Wikipedia As One Large Node Graph

As a network visualization tool, node graphs are an intuitive method of communicating relationships between entities. I’ve been thinking a lot about the semantic web lately and thought it would be cool to visualize all of the links between articles in Wikipedia at once. I want to pull back and get the 10,000-foot view of the state of modern knowledge, which I don’t think has been done before in a comprehensible way. Chris Harrison’s WikiViz project comes closest, but it quickly becomes incomprehensible and is not dynamic.

I have not yet found a tool capable of pulling this off. There are two key ideas that go into representing information at such vast scale. First, we need to be able to show detailed information in a narrow focus without getting bogged down when zoomed out, which means representing the graph at different resolutions. This problem has already been solved for viewing images at scale: Google Earth represents the earth at vastly different resolutions, and GigaPan can zoom into images many gigapixels in size. Second, the kind of information you’re displaying needs to make sense at any height. When you’re looking at the graph from 10,000 feet it shouldn’t devolve into a gray blur. Google Maps demonstrates this by removing detail such as building names, street names, cities, and states as you zoom out. Because I’m a gamer, I’m also inspired by Supreme Commander, which introduced an innovative way of showing tactical information: you can zoom out to see the playing field as a whole and seamlessly zoom in to examine an area in detail. When zoomed out, individual units become symbols that still convey what the unit is.

At a detailed level, a single node could contain basic information including the name, some classification, and perhaps a summary. We could use a sunburst-style visualization to represent links. As you zoom out, that detail gradually disappears. At a high level, less significant articles can be represented by generalized concepts. Higher yet, even more general concepts begin to swallow up the more specific ones; the higher you get, the more general the concept. Less significant links between concepts fade into the background. The big challenge is reliably building a concept tree for any node in Wikipedia. A lot of research and effort has gone into that area, but it’s not quite there yet. People would be forgiving of the accuracy to start with.

So here’s a summary of the requirements for a tool to visualize Wikipedia:

  • Must handle 3.2 million nodes and tens of millions of edges (links)
  • Must be able to modify the graph dynamically to highlight changes in real-time. This means we need something other than the standard spring-plotting algorithm, which runs in O(n^2) computational complexity.
  • Must be able to represent clusters of nodes symbolically as concepts and gradually move between levels of detail
  • Must be able to operate with partial graph data. The client application will see only a small slice of the network at once, or a high-level view of a larger area at a low resolution.

In my brief analysis, there are very few tools designed to handle large data sets dynamically. AI3 has a list of 26 candidate tools for large-scale graph visualization, and although some are visually stunning and some are large-scale, none satisfy the requirements above. It seems like the major innovation needed here is a map-reduce-style algorithm for undirected graphs. Map-reduce works well with a tree structure, but not as well with unstructured, cyclic graphs. In Wikipedia, any node can be linked to any other node, and there’s no consistent parent-child relationship; everything is an “unowned” entity. If a comprehensive and reliable concept hierarchy could be generated from Wikipedia links and text, we might be able to use that as the tree-like structure, where each level of the tree roughly represents one resolution of the network.
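
As a toy illustration of that idea, here’s a sketch of a single coarsening pass in Python; the concept hierarchy and link data below are invented, since reliably deriving them is exactly the unsolved part:

# Toy coarsening pass: collapse article-to-article links into weighted
# concept-to-concept edges. The hierarchy and links are invented examples.
def coarsen(links, parent):
    coarse = {}
    for a, b in links:
        ca, cb = parent.get(a, a), parent.get(b, b)
        if ca == cb:
            continue  # links within a single concept vanish at this resolution
        key = tuple(sorted((ca, cb)))
        coarse[key] = coarse.get(key, 0) + 1  # weight = number of collapsed links
    return coarse

links = [('San Francisco', 'California'), ('Oakland', 'California'),
         ('San Francisco', 'Golden Gate Bridge')]
parent = {'San Francisco': 'City', 'Oakland': 'City',
          'Golden Gate Bridge': 'Landmark', 'California': 'Region'}
print coarsen(links, parent)
# -> {('City', 'Region'): 2, ('City', 'Landmark'): 1}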

Anyway – that’s something to think on.

UPDATE: A new open-source project called Gephi looks really interesting. http://gephi.org

Here are some more links of interest:
http://cytoscape.org/screenshots.php
http://arxiv.org/abs/cs/0512085
http://blog.semanticfoundry.com/2009/06/01/dynamic-visualization-introduction-theory/