Category Archives: Ideas

Capitalizing on the Value of the Elderly in Modern Society

As modern societies develop, grow richer, and life expectancies increase, the ratio of the elderly, non-working population to the younger, working population shifts toward unsustainability. In the United States in 2010 there were 22 non-working elderly for every 100 working-age adults; by 2030 there will be 37 (from Census data). The rising dependency ratio on the 20-65 age group will slow economic growth and add to the burden on already strained government support programs.

What seems so backwards is that instead of providing value and filling important roles in society, much of the 65+ age group is filed away in retirement homes. As we age we often become less effective in the kinds of roles modern society has chosen to value, like manual labor and creative ingenuity.

What I suggest is that there are in fact important social roles in which elderly populations can be very effective, but which they are not currently filling in any systematic way.

Many grandparents are already playing a small role in the upbringing of grandchildren, but cultural, physical, and economic factors are preventing the full potential for that role to be reached. For every 100 working age adults there are 45 children who must be supported. Why are there so many child care facilities while there are so many able-minded elderly with nothing to do? One burdensome population can actually lessen the burden of the other.

Jared Diamond just gave an eye-opening TED talk comparing the value of elderly populations in traditional, tribal societies and modern westernized societies. In tribal societies the elderly were valued for their wisdom, leadership, craftsmanship, and child-rearing abilities, freeing younger members to hunt and fill other physically intensive roles. The elderly often lived among all age groups and could provide value wherever they were able. Responsibilities were shared among a community, not just a single immediate family. Though we live in a very different world now, we would be smart to learn something from historical social structures and see where we might institute programs to make better use of the human resource we collectively share.

Lawrence Lessig’s TED Talk on Reclaiming the Republic

Lessig talks about how the people have lost control of our government to those who have money. He tells stories of politicians, and everyone else connected to the political and legislative process, evolving their thinking to appease their financiers. Deregulation is politically counter-productive because those involved get steady cash from lobbyists to manipulate those regulations. Total reform is nearly impossible because keeping the status quo means keeping your financial backing.

Lessig offers an emotional argument to draw attention to this issue of corruption, a word he uses many times. He reminds us that regardless of how impossible the task of forcing change in the engine of political finance may seem, we must do it because we love this country as we love our children. Lessig gives no path of action or inspirational light at the end of his talk, making it all the easier to push the problem out of our minds. I would at least resolve to keep the issue in the back of my mind in hope of having some small idea to effect change.

Beauty Can Make the World Safer

It’s amazing how appearances affect our behavior. Edi Rama, an artist and the former mayor of Tirana, the capital of Albania, gave an inspiring TED talk about transforming his city by adding color to old buildings and improving public spaces. Streets that used to be crime-ridden became safer because they looked nicer. People are less likely to litter on clean, well-groomed streets than on streets already covered in dirt and rubble and lined with broken windows. Shop owners and residents regain their pride and become protective of their neighborhood.

It’s amazing how this concept extends to so many areas.  Rama tells a story of combating corruption, again using the tool of appearances. Replacing a government office that was a shack with a dark hole in the wall with a well-lit building where citizens deal face to face with officials reduced the frequency of bribes. And a little closer to personal experience: when my office’s kitchen sink has a few dirty dishes lying about, everyone feels like they can add to the pile, but as long as the sink is clean people tend to wash their own.

Arguments Against Owning a Home: Specialization of Labor

I recently read this article on TechCrunch by James Altucher entitled “Why Entrepreneurs Should NOT Buy Homes”. In discussing the topic with some friends, I was turned on to this article in the Wall Street Journal by economist Robert Bridges showing statistically why home ownership is almost always a poor investment. The idea rings true to me. Altucher focuses on entrepreneurs, but I believe his argument could be extended to almost anyone, and it has to do with specialization of skill.

You could summarize Altucher’s article in two core points:

  1. Entrepreneurs should not own homes because the investment is illiquid, meaning you can’t use the money for other things when you need it. In the same sense, when you need to pick up and move to a new city for a job, or your level of income changes, it can be difficult and expensive to sell your house.
  2. Owning a home takes time, demands new skills, and adds a lot of stress that distracts you from the things that are most important: being really good at what you do and enjoying your life.

Altucher’s argument is similar to the idea of specialization of skill and division of labor. Economic prosperity and development really took off when people stopped doing everything for themselves and started to focus more on what they’re good at. If you’re good at making furniture it’s more efficient for you to spend your time doing that and investing money in better tools rather than spending half your day farming and investing money in farming equipment. Leave the food to someone who does that for a living and sell him some furniture in exchange. Likewise, leave the property management and investment to someone who does that for a living and focus your time and money on being better at whatever it is you do – be it startups or photography or teaching economics.

If, on the other hand, you really ENJOY fixing up houses and aren’t worried about moving soon, it could make sense to buy a house for personal reasons, but less as an economic investment. Also, this argument ignores the psychological benefit of the forced investing discipline that mortgage payments impose, but a similar effect can be had by maxing out your 401k and other automatic investment strategies.

Inspiring Civic Hacking

Mick Ebeling recently gave a TED talk about the homemade eye-tracking device he and a bunch of hackers made to allow a paralyzed man to communicate, Stephen Hawking style. They did this with an off-the-shelf PS3 camera and some open source software, for $50. That’s what I call a righteous hack. Most importantly, it has real-world significance. And it’s totally something I, or many people I know, could have done if we had thought of the idea.

I think a lot of hackers are hungry for this kind of meaningful work. We need a repository of project ideas like the Eyewriter: immediate needs that have a tangible social effect and can be done in a weekend or two. Organize the ideas by skills required and offer a platform for organizing groups of hackers to tackle each problem. There are a lot of developers out there looking for a side project and a way to have an impact. And we also need idea people: social workers, NGOs, and everyday people to tell us how technology could solve the problems they see in the field.

Some resources:
Random Hacks of Kindness
Code For America
Public Equals Online
Applications for Good

For goodness sake, hack!

Three Alternative Inputs for Video Games

When you use a computer enough it can start to feel like the mouse becomes a part of you. Very quickly you forget about consciously moving it right or left to control the cursor as it becomes second nature. Essentially it is an extension of your physical self, with which you manipulate the screen as naturally as you pick up an object with your hand. Combined with a keyboard it is the only form of input for most ‘serious’ video games in the home because of the level of precision required. I think there are other forms of input that can be used in conjunction with traditional mouse and keyboard for a much richer, more immersive experience.

  1. Eye tracking
  2. Voice control
  3. Video gestures

Eye tracking has been employed in usability and behavioral studies for over a decade, but it has so far been demonstrated only in a very limited sense as an actual control input. These two videos demonstrate eye tracking used to control camera movement in place of the mouse, and to aim in place of what would normally be a physical gun in an arcade.

In the case of a free camera, the player moves the camera by looking toward the edge of the screen. When the eyes are centered the camera is still. This is the same kind of control a joystick uses. Without having tried it myself, I suspect it could be unintuitive, because when we look at something we don’t expect it to move away. The second example shows that with a fixed camera, in place of a physical gun, eye tracking is quite effective, but I don’t see any major advantages over the traditional method.
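To make the joystick analogy concrete, here’s a minimal sketch of gaze-driven camera control. Everything in it is my own assumption, not from either demo: normalized gaze coordinates, a made-up deadzone radius, and a made-up maximum turn speed.

```python
import math

DEADZONE = 0.3    # no camera movement while gaze stays within this radius (assumed value)
MAX_SPEED = 90.0  # degrees per second at the screen edge (assumed value)

def camera_velocity(gaze_x, gaze_y):
    """Map a normalized gaze point ((0, 0) = screen center, edges at +/-1)
    to a camera angular velocity (yaw, pitch), joystick-style."""
    r = math.hypot(gaze_x, gaze_y)
    if r < DEADZONE:
        return (0.0, 0.0)  # looking near the center keeps the camera still
    # Speed ramps from 0 at the deadzone edge up to MAX_SPEED at the screen edge.
    speed = MAX_SPEED * min((r - DEADZONE) / (1.0 - DEADZONE), 1.0)
    return (speed * gaze_x / r, speed * gaze_y / r)
```

The deadzone is what keeps the “things move away when I look at them” problem from triggering on every small glance; tuning its size would be the whole game-feel battle.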

In first-person shooters I feel the best combination will be mouse for camera control and eye tracking for aim. When you look at something on the screen it doesn’t move away – you shoot precisely where you look. Gamers can keep the intuitive interface they’re already used to for moving the character around and firing the weapon. For the quickest, most precise aiming, the lightning-fast twitch reflex of eye movement is perfect and, in fact, is something players already do.
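The aiming half of that combination could be as simple as placing the crosshair at the gaze point, with some smoothing to damp tracker jitter. This sketch assumes the tracker reports a gaze point in screen pixels; the exponential smoothing and its factor are my own guesses, not anything from a real eye-tracking SDK.

```python
class GazeAim:
    """Smooth a noisy stream of gaze points into a stable crosshair position."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha  # smoothing factor in (0, 1]; higher = more responsive
        self.x = None
        self.y = None

    def update(self, gaze_x, gaze_y):
        """Feed in this frame's raw gaze point; return the smoothed crosshair."""
        if self.x is None:
            self.x, self.y = gaze_x, gaze_y  # first sample: snap to it
        else:
            # Exponential moving average: drift part of the way toward the new sample.
            self.x += self.alpha * (gaze_x - self.x)
            self.y += self.alpha * (gaze_y - self.y)
        return (self.x, self.y)
```

The trade-off is the usual one: more smoothing hides tracker noise but adds lag to exactly the twitch reflex you wanted to exploit.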

In 2005, Erika Jonsson of the Royal Institute of Technology in Sweden conducted a study of the methods I described and arrived at similar conclusions:

The result showed that interaction with the eyes is very fast, easy to learn and perceived to be natural and relaxed. According to the usability study, eye control can provide a more fun and committing gaming experience than ordinary mouse control. Eye controlled computer games is a very new area that needs to be further developed and evaluated. The result of this study suggests that eye based interaction may be very successful in computer games.

The main problem facing this method is the cost of hardware. Current technologies are used in a limited fashion by academics and usability labs and are generally not available in the mass market. Accurate eye tracking is apparently not an easy thing to do. I’m guessing that, were there a serious market opportunity, some enterprising group of young researchers could simplify the hardware and prepare it for mass production, bringing costs down to reasonable levels for a luxury product.

Voice control can be used for high-level commands that might otherwise be buried in menus or complex key combinations. Using voice saves the player from having to break out of the immersive experience of controlling the character. It has the additional side effect of engaging other parts of the brain and encouraging the more realistic style of interaction that people use in daily life.

The jury is still out on whether people like talking to their computers. I think reception will rest, as usual, on the intuitiveness of the commands. The game should never force the player to use voice commands – there always needs to be another path of access – and the player should never have to remember what a command or sequence is. You just say what you want to happen. I think this could be especially interesting in non-linear story lines where the options aren’t necessarily clear to the player. Instead of selecting from a menu of pre-defined choices, the player could explore (as long as they don’t feel they’re searching in the dark).
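As an illustration of “just say what you want to happen”, here’s a toy keyword matcher. The command names and keyword sets are invented for the example, and returning None stands in for falling back to the normal menu path rather than forcing the player to retry.

```python
# Hypothetical command vocabulary: match on keywords, not exact phrases,
# so the player never has to memorize a magic incantation.
COMMANDS = {
    "open_map":   {"map"},
    "reload":     {"reload", "ammo"},
    "give_order": {"attack", "charge"},
}

def match_command(utterance):
    """Return the command whose keywords best overlap the utterance,
    or None, in which case the game falls back to menus as usual."""
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for command, keywords in COMMANDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = command, overlap
    return best
```

A real game would sit this behind a speech recognizer, but the design principle is in the None branch: voice is a shortcut, never the only door.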

Video gestures have been used for fun, gimmicky activities with the PS3 Eye, and soon with Microsoft’s Project Natal, but I haven’t seen a use that actually results in interesting gameplay in what we call “serious” games. One idea is to take hints from the camera rather than direct input. When communicating with teammates in multiplayer, a user might say a command in voice chat and point in a general direction relative to where he’s looking. The camera could take that hint and cause his character to point in that direction too. This is not something that can be used for precise control, but we can attempt to mimic non-essential body language as added value.
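A rough sketch of that hint idea, assuming the camera pipeline already hands us 2D pixel positions for the player’s face and pointing hand (a big assumption), with an arbitrary reach threshold to separate deliberate points from idle hand motion:

```python
import math

def pointing_direction(face_xy, hand_xy, min_reach=60.0):
    """Return a pointing angle in degrees (0 = right, counter-clockwise)
    if the hand is extended far enough from the face to look deliberate,
    else None (no hint passed to the character animation)."""
    dx = hand_xy[0] - face_xy[0]
    dy = hand_xy[1] - face_xy[1]
    if math.hypot(dx, dy) < min_reach:
        return None  # hand too close to the body: probably not a point
    return math.degrees(math.atan2(dy, dx))
```

Since this only drives non-essential body language, false negatives are cheap: when in doubt, the character simply doesn’t point.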

When combined with voice commands, video gestures could enhance the level of realism and the naturalness of the interface. For instance, the direction of the face could indicate whether the player is giving a command to the game or talking to another person in the room. In storytelling dialog, more than one player can give input, and the video indicates which of them is talking. Again, I think the best we can do with camera input at this point is imprecise input. Project Natal looks like it might do a great job in the casual games space, but we’ll have to see it in the wild controlling games by third-party developers.

Visualizing Wikipedia As One Large Node Graph

As a network visualization tool, node graphs are an intuitive method of communicating relationships between entities. I’ve been thinking a lot about the semantic web lately and thought it would be cool to visualize all of the links between Wikipedia articles at once. I want to pull back and get the 10,000-foot view of the state of modern knowledge, which I don’t think has been done before in a comprehensible way. Chris Harrison’s WikiViz project comes closest, but it quickly becomes incomprehensible and is not dynamic.

I have not yet found a tool capable of pulling this off. There are two key ideas that go into representing information at such vast scale. First, we need to show detailed information in a narrow focus without getting bogged down when zoomed out, which means representing the graph at different resolutions. This problem has already been solved for images at scale: Google Earth represents the earth at vastly different resolutions, and GigaPan can zoom into images many gigapixels in size. Second, the kind of information you’re displaying needs to make sense at any height. When you’re looking at the graph from 10,000 feet it shouldn’t devolve into a gray blur. Google Maps demonstrates this by removing detail such as building names, street names, cities, and states as you zoom out. Because I’m a gamer, I’m inspired by Supreme Commander, which developed an innovative way of showing tactical information. You can zoom out to see the playing field as a whole and seamlessly zoom in to examine an area in detail. When zoomed out, individual units become symbols that still convey what each unit is.

At a detailed level a single node could contain basic information including the name, some classification, and perhaps a summary. We can use a sunburst-style visualization to represent links. As you zoom out, that detail gradually disappears. At a high level, less significant articles can be represented by generalized concepts. Higher yet, even more general concepts begin to swallow up the more specific ones; the higher you get, the more general the concept. Less significant links between concepts fade into the background. The big challenge is reliably building a concept tree for any node in Wikipedia. A lot of research and effort has gone into that area, but it’s not quite there yet. People would be forgiving of the accuracy to start with.
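Assuming we had such a concept tree, the zoom behavior itself could be as simple as truncating each article’s path up the tree at a depth chosen by the zoom level. The paths below are made up for illustration; building them reliably is, as I said, the hard part.

```python
# Hypothetical concept paths from the root of knowledge down to each article.
CONCEPT_PATHS = {
    "Quicksort":     ["Knowledge", "Computer science", "Algorithms", "Quicksort"],
    "Mergesort":     ["Knowledge", "Computer science", "Algorithms", "Mergesort"],
    "Impressionism": ["Knowledge", "Art", "Art movements", "Impressionism"],
}

def visible_node(article, zoom_level):
    """Collapse an article to its ancestor concept at the given depth.
    zoom_level 0 = fully zoomed out (most general); larger = more detail."""
    path = CONCEPT_PATHS[article]
    return path[min(zoom_level, len(path) - 1)]
```

At zoom level 2, Quicksort and Mergesort both render as a single “Algorithms” symbol, exactly the Supreme Commander effect of units becoming icons that still say what they are.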

So here’s a summary of the requirements for a tool to visualize Wikipedia:

  • Must handle 3.2 million nodes and tens of millions of edges (links)
  • Must be able to modify the graph dynamically to highlight changes in real-time. This means we need something other than the standard spring-layout algorithm, whose naive form runs in computational complexity O(n^2) per iteration (a Barnes-Hut approximation can bring that down to O(n log n)).
  • Must be able to represent clusters of nodes symbolically as concepts and gradually move between level of detail
  • Must be able to operate with partial graph data. The client application will see only a small slice of the network at once or a high level view of a larger area at a low resolution.

In my brief analysis there are very few tools designed to handle large data sets dynamically. AI3 has a list of 26 candidate tools for large-scale graph visualization, and although some are visually stunning and some are large scale, none satisfies the requirements above. It seems like the major innovation needed here is a map-reduce-style algorithm for undirected graphs. Map-reduce works well with tree structures, but not as well with unstructured, cyclic graphs. In Wikipedia any node can be linked to any other node and there’s no consistent parent-child relationship; everything is an “unowned” entity. If a comprehensive and reliable concept hierarchy could be generated from Wikipedia links and text, we might be able to use that as the tree-like structure, where each level of the tree roughly represents one resolution of the network.
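A tiny sketch of what one such map-reduce pass might look like once a concept assignment exists: map each article-to-article link onto a concept-to-concept edge, then reduce by counting. The concept mapping itself is assumed here, and it is the genuinely hard part.

```python
from collections import Counter

def coarsen(edges, concept_of):
    """Collapse an article-level link graph to a concept-level graph.
    edges: iterable of (article, article) pairs; concept_of: article -> concept."""
    # Map step: project each edge onto concepts, in canonical sorted order
    # so the undirected edges (a, b) and (b, a) reduce to the same key.
    mapped = (tuple(sorted((concept_of[u], concept_of[v]))) for u, v in edges)
    # Reduce step: count how many fine-grained links each coarse edge bundles,
    # dropping self-edges (links between articles under the same concept).
    return Counter(pair for pair in mapped if pair[0] != pair[1])
```

The counts double as edge weights at the coarse resolution, which is what lets “less significant links between concepts fade into the background.”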

Anyway – that’s something to think on.

UPDATE: A new open-source project called Gephi looks really interesting. http://gephi.org

Here are some more links of interest:
http://cytoscape.org/screenshots.php
http://arxiv.org/abs/cs/0512085
http://blog.semanticfoundry.com/2009/06/01/dynamic-visualization-introduction-theory/

Climate Data Mashup

The problem: people generally have very little perspective on the actual scale of the contributing components of climate change, or on the effects of different proposed measures to stop it. What percentage of CO2 emissions comes from city residential electrical consumption vs. agriculture vs. vehicles? How much of a difference will legislation X make in the big picture? When Obama says that the United States will cut greenhouse gas emissions 80% by 2050, what kind of effect does that actually have? What would happen to the weather in 10 years if everyone in the world stopped driving tomorrow?

The solution: let people build hypothetical scenarios themselves. Design an interface centered on an attractive timeline graph of climate data in all its various forms, including temperature increase, carbon emissions, and sea level. Curious users can toggle different proposed solutions on and off to see the real overall effect on projected emissions, along with dollar cost over time. Group data by current relevancy, such as proposals being discussed at the climate summit in Copenhagen this week. Include competing predictions from different agencies and scientific groups to communicate the level of uncertainty.
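The core computation behind such an interface could be very simple. Here’s a sketch where every number is a made-up placeholder, standing in for real projections and measured effects that the actual tool would pull from the competing agency datasets.

```python
# Placeholder baseline emissions projection (GtCO2/yr) -- NOT real data.
BASELINE = {2020: 50.0, 2030: 55.0, 2040: 60.0, 2050: 65.0}

# Hypothetical measures, each a fractional emissions reduction per year.
MEASURES = {
    "no_driving": {2020: 0.10, 2030: 0.10, 2040: 0.10, 2050: 0.10},
    "clean_grid": {2020: 0.00, 2030: 0.10, 2040: 0.20, 2050: 0.30},
}

def projection(enabled):
    """Emissions trajectory with the user's enabled measures applied."""
    result = {}
    for year, base in BASELINE.items():
        cut = sum(MEASURES[m][year] for m in enabled)
        result[year] = base * (1.0 - min(cut, 1.0))  # cap at 100% reduction
    return result
```

Each toggle in the interface just adds or removes a name from `enabled` and redraws the timeline; swapping `BASELINE` per agency gives the competing-predictions view.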

Extra credit for building a system to automatically pull in current data from a variety of sources.