There have been several articles documenting the work of Google’s Salar Kamangar and Eric Veach in bringing this to AdWords. What is less known (at least to me) is that they implemented this model to solve another problem entirely. I came across an old talk from a Google employee; in the speaker notes, it describes how Kamangar and Veach implemented this feature to stop advertisers from logging into the system and constantly modifying their bids (since that’s what people tend to do in an open English auction). By implementing a second-price auction, they were hoping to reduce the load on the system.
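The incentive at play can be sketched with a toy second-price (Vickrey) auction. This is my own illustration of the mechanism, not Google’s implementation:

```python
# Toy second-price (Vickrey) auction: the highest bidder wins but pays
# the second-highest bid. Illustrative only -- not Google's code.

def second_price_auction(bids):
    """bids: dict of bidder -> bid amount. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder, the winner simply pays their own bid.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = second_price_auction({"alice": 3.00, "bob": 2.50, "carol": 1.75})
# alice wins but pays 2.50, bob's bid
```

Because the price is set by the runner-up’s bid, bidding your true value is a dominant strategy, so there is no payoff from repeatedly logging in to shave your bid just above the competition’s, which is exactly the behaviour the second-price design removes.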
Genepax, a Japanese company, has unveiled a car that can run on water. It apparently extracts hydrogen from water and uses it to create energy to fuel the car. The prototype was driven around the city of Osaka in Japan. Engadget has more details:
The key to that system, it seems, is its membrane electrode assembly (or MEA), which contains a material that’s capable of breaking down water into hydrogen and oxygen through a chemical reaction. Not surprisingly, the company isn’t getting much more specific than that, with it only saying that it’s adopted a “well-known process to produce hydrogen from water to the MEA.” Currently, that system costs on the order of ¥2,000,000 (or about $18,700 — not including the car), but company says that if it can get it into mass production that could be cut to ¥500,000 or less (or just under $5,000)
There is a video from Reuters that I have tried to embed below, but I am not sure if it will show up on the final blog (here’s the link to the Reuters page that hosts the video):
Update: Looks like there’s more to it than meets the eye. See this discussion on Slashdot.
A very interesting article by Tim Harford about how prizes motivated a big chunk of research that got productised in earlier times, and how they could be making a comeback. The advantages are quicker solutions, the involvement of a more diverse community with more diverse ideas, less bureaucracy, fame and fortune for the inventors, and of course, problems getting solved. He cites how a competition was used to build a clock accurate enough to determine the longitude of ships at sea, and how today, everyone from the Gates Foundation (for pneumococcal diseases) to Netflix (for machine-learning algorithms) is using a cash prize to motivate people to solve important problems. Prizes could also be used by governments to replace patents for solving large problems. He says:
Champions of prizes see them as a component of a wider system to promote innovation, rather than as an outright replacement either for grants or patents. Instead, the hope is that prizes will help to compensate for the specific weaknesses of those alternatives.
The downside of a patent is fundamental to its design: in order to reward an innovator, the patent confers a monopoly. Economists view this as, at best, a necessary evil since monopolies distort prices. In the hope of raising profits from some customers, they will price others out of a market. The most obvious victims are consumers in poor countries.
In an ideal world, prizes could replace patents. Instead of offering a patent for an innovation, the government could offer a prize. The inventor would pocket the prize but would not be allowed to exploit any monopoly power, so the innovation would be freely available to use in products for poor consumers – cheap drugs for Africa, for instance – and, importantly, in further innovations. But to explain that idea is to see its limitations. How could the government know enough about the costs and benefits – and even the very possibility – of an innovation to put a price tag on it and write the terms of reference for a prize competition? For this reason it is hard to see prizes replacing patents in most cases. But it is not impossible.
The modern heir to 18th-century prizes for canning, water turbines and finding longitude at sea is the advanced market commitment for vaccines for the poor: the goal is clear, the costs and benefits can be guessed at, and the quasi-prize nudges the patent system to one side with a prize contract that respects the patent but, in exchange for a large subsidy, radically constricts the holder’s right to exploit it.
Prizes may be an effective way to build technologies that solve a specific problem, but I doubt they can help in open-ended sojourns into the world of science. Most of our applied technologies are built upon basic scientific fundamentals, and I don’t know if a gold rush will lead to the newest laws of physics or fundamental rules of mathematical logic. Issues of ownership of intellectual property are also a little ambiguous, and have to be specified clearly up front. In many cases, gauging the ramifications of a new mathematical theory or basic physical law might be extremely difficult (which is the reason Nobel prizes are awarded only after the work has been established over the long term).
All said and done, I am sure prizes (not just the cash, but the fame and respect as well) make for great motivation, and we might see a lot more of them.
In a previous post on the honey-bee algorithm for allocating servers, I had referred to a paper on Swarm Intelligence by Eric Bonabeau and Christopher Meyer, published in the Harvard Business Review, and I finally got time to go back and read it. I found it quite fascinating! The paper describes case studies where people have used algorithms inspired by nature (ants, bees) that use a decentralized model of computation and optimization.
The paper points out that the main advantages of using algorithms like these are flexibility, robustness and self-organization. The algorithms work in a completely decentralized manner, on the principle that the wisdom of all the ants (or other small agents) can be harnessed in such a way that the whole is far greater than the sum of its parts. The algorithms are also invariably robust to failure and adaptive, since they don’t make use of a central decision-making body and there is a lot of experimentation with new sources of food (or results, in the case of the algorithms).
The paper also points out that there are several cases where these concepts have been used successfully (both in business and academia):
- Optimally routing telephone calls and Internet data packets is a tough problem because a centralized algorithm is neither robust nor adaptive. Algorithms based on swarm intelligence come to the rescue: they have no central decision-making body, and instead work on the principle that scouts recruit other agents to follow new promising paths.
- Fleet management and cargo management suffer from similar problems. The paper points out that Southwest Airlines found that, in some cases, letting cargo go to the wrong destination and recovering is faster and more robust than always making sure that all cargo is handled correctly.
- Small, simple rules that let people make decisions for themselves usually work best. This has since been shown to work very well for companies such as Google as well.
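To make the routing idea above concrete, here is a minimal sketch (my own illustration, not from the paper) of the core swarm mechanism: keep a per-next-hop “pheromone” score at each node, choose hops probabilistically rather than greedily, and reinforce hops that recently worked well while letting the rest evaporate:

```python
import random

def choose_hop(pheromone):
    """Pick a next hop with probability proportional to its pheromone score.

    pheromone: dict mapping next_hop -> positive score.
    """
    total = sum(pheromone.values())
    r = random.random() * total
    for hop, score in pheromone.items():
        r -= score
        if r <= 0:
            return hop
    return hop  # guard against floating-point rounding

def reinforce(pheromone, hop, reward, evaporation=0.1):
    """Evaporate every trail a little, then strengthen the hop just used."""
    for h in pheromone:
        pheromone[h] *= (1.0 - evaporation)
    pheromone[hop] += reward
```

Evaporation is what keeps the system adaptive: a path that stops being reinforced (say, because a link failed) fades away on its own, and the probabilistic choice keeps a trickle of traffic exploring alternatives, with no central controller anywhere.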
There are more case studies in the paper, but what’s fascinating is that these techniques are becoming even more popular nowadays because companies have realized that it is easier to tolerate failure than to eradicate it, more so in the context of the Internet, where there is a race to build systems that are self-correcting (such as MapReduce, Hadoop and Dryad). Also, the new realities of emerging architectures (multi-core, highly parallel, massive clusters and grids) mean that we have more parallel horsepower to run our applications, and such self-organizing algorithms are going to become even more popular in the computing community.
However, one concern is the programming model for such computing bedrocks. We still don’t understand how to manage parallel computation very well, so translating such algorithms into code is going to remain difficult for the average programmer for quite some time.
I came across this article in Linux Today which describes Project Fortress, an open-source effort from Sun to provide a language based on Fortran to easily write parallel programs. The project seems to be built on top of Java. Some salient features seem to be:
- Implicit parallelism: If you want to execute a loop sequentially you have to explicitly write that. The big claim is of course, using this efficiently on multi-core machines.
- Support for Unicode: As a result, the scientific research community can use Greek letters in their code, and even things like superscripts, subscripts, hats and bars! This means that your code is going to look a lot more like your algorithm.
- Automatic type inference: The system has extensive type inference (the kind that functional languages and C# 3.0 have), which means that your code is far more readable.
- Extensive library support: In fact, even some parts of the main system are implemented as libraries. They expose the parsed AST to the programmer, giving them extensive control.
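Fortress inverts the default most languages have: loops are parallel unless you explicitly mark them sequential. The sketch below expresses that contrast in Python (whose default is the opposite), purely as an illustration; it is not Fortress syntax:

```python
from multiprocessing import Pool

def f(x):
    return x * x

def sequential(xs):
    # What a Fortress programmer would have to ask for explicitly.
    return [f(x) for x in xs]

def parallel(xs):
    # What a plain Fortress loop would give you for free:
    # iterations fan out across workers.
    with Pool() as pool:
        return pool.map(f, list(xs))
```

Both `sequential(range(5))` and `parallel(range(5))` yield the same result; the point of Fortress is that the second behaviour is the default spelling of a loop, so ordinary code can exploit multi-core machines without extra ceremony.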
These sound quite interesting, and it seems that the scientific computing language of the future is going to look a lot like Fortress, if they are successful with this effort.
I came across an interesting article in The Hindu today (see the story from GaTech news; I couldn’t find the link on The Hindu website) describing work done by Sunil Nakrani and Craig Tovey, researchers at GaTech, on using a decentralized, profit-aware load-balancing algorithm to allocate servers for serving HTTP requests for multiple hosted services on the Web. The interesting thing is that the algorithm is based on how honey bees in a beehive decide where to collect nectar. I decided to take a look at the paper.
Essentially, the forager bees collect information about how profitable a particular nectar source is and how much cost is involved in collecting from it (the round-trip time). Based on a composite score, they perform a waggle dance, which essentially signals the value of foraging where they have been. The inactive foragers can thereafter figure out where to go to look for nectar.
The researchers modeled this in the server space with an advert board, where servers post the profit from serving a request and the time required to serve it. Other servers can then choose which colony (or service) they wish to be a part of. Existing servers can also move to another colony, with a probability determined from a look-up table indexed by the ratio of their own profits to the profits of their colony.
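A minimal sketch of that scheme, as I understand it from the paper; the class and function names, and the switching-table values, are my own illustration rather than the authors’ code:

```python
import random

class AdvertBoard:
    """Servers post profit-per-unit-time for requests they just served."""

    def __init__(self):
        self.posts = {}  # service (colony) -> list of posted profit rates

    def post(self, service, profit, service_time):
        self.posts.setdefault(service, []).append(profit / service_time)

    def profitability(self, service):
        rates = self.posts.get(service, [])
        return sum(rates) / len(rates) if rates else 0.0

def maybe_switch(my_profit, colony_profit, switch_table):
    """Decide whether a server should leave its colony.

    switch_table: (ratio_threshold, switch_probability) pairs in
    increasing threshold order; a low own-to-colony profit ratio maps
    to a high probability of switching to another colony.
    """
    ratio = my_profit / colony_profit if colony_profit else 0.0
    for threshold, prob in switch_table:
        if ratio <= threshold:
            return random.random() < prob
    return False
```

As with the bees, no server needs a global view: each one reads the board, compares its own recent profit to its colony’s, and occasionally defects, which is what makes the allocation both decentralized and adaptive to shifting request loads.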
Their results indicate that the algorithm does quite well compared to an optimal omniscient strategy (one which knows the pattern of all future web requests) and better than existing greedy and static-assignment strategies. It shows that we still have a lot to learn from nature!
One thing that flummoxed me though was that the original paper seems to have been published way back in 2003 (see Tovey’s publication page). I wonder why it got press publicity only now.
[The paper also cites a Harvard Business Review paper titled Swarm Intelligence: A whole New Way to Think About Business]
I had the good fortune of being able to listen to Fran Allen (IBM profile and Wikipedia entry) today. Fran pioneered a lot of work in compiler optimization and was awarded the Turing Award for her contribution to Computer Science in 2007. That makes her the first and (to date) only woman to have won the Turing Award, the highest honour in Computer Science.
It was inspiring listening to her talk about her adventures. She practically traced the evolution of high-performance computing, with the earliest IBM systems starting from Stretch, which was supposed to be 100x faster than the existing machines (but turned out to be only 50x faster) and was delivered to the National Security Agency. She also described some of the failures she had been involved in (Stretch, since it was 2x slower than intended, and then the ACS project). What was interesting was that much of the pioneering work she described had its basis in the work she had done on these failed projects. That failures are the foundations of mammoth successes is one message she clearly drove home with her optimistic outlook. She also described her work on the System/360 and PowerPC projects.
Her appeal to computer science researchers and students was mainly about the programming models and architecture decisions revolving around multi-core, a buzzword most of us have been left confused and wondering about. This new revolution, which promises to change the way we write software and exploit parallelism in our programs, is the biggest opportunity, as Fran put it!
What was also interesting was how we got to the lecture hall, wading through mud in a construction site during the rain. Apparently, there are two conference halls at IISc — one called JRD Tata and the other JN Tata. Naturally, we got to the wrong one and found that it was hosting a conference on Power Electronics. Note to self: always check the location properly before setting forth.
Other than that, I have lately been busy hacking on Python for S60 (a brilliant idea: having a platform-agnostic scripting language!) to get it working on my phone, and on a Python-based remote administration toolkit. Will post more about them soon!