AI in a post-capitalistic world

Having just finished Paul Mason's incredible PostCapitalism: A Guide to Our Future, I found it sparking many ideas about how artificial intelligence can fit into an economy increasingly dependent on the abundance of information. I should mention that I am not at all an economist, so apologies in advance if I confuse any aspects of the theory. Although Mason doesn't explicitly mention artificial intelligence, many of his ideas on info-tech are directly applicable and highly relevant, so I will first attempt to explain them.

Paul Mason envisions the postcapitalist economy as a distributed network, without any central planner, operating primarily on the free access and dissemination of information. The lifeblood of such a setup would be instantaneous, real-time information feedback, supported by a decentralized technological network. His argument is that, as we shift increasingly toward an information-based economy, we move away from scarcity, because information is infinitely replicable. Something that is infinitely replicable has zero marginal cost (meaning that after the first copy is made, it costs practically nothing to make another), and that makes it very difficult to price.

In a free-market economy, prices are determined by scarcity. For example, because there are only so many TVs available at any given time, if more people want a TV, the price will increase until supply meets demand; supply can't expand immediately because there is a production cost for each additional TV. But with info-tech, this doesn't apply. Netflix lets you share your account with multiple devices and family members without requiring you to pay more, because their additional cost per use (the marginal cost) is practically nothing. The information itself, once created, can be freely copied as many times as needed to meet demand. Netflix is essentially charging for the fixed cost of creating its information aggregation technology (plus, of course, other things like advertising, content creation, engineers, and distribution). Countless other examples exist of people and organizations (e.g. Wikipedia, GitHub, open-source projects) that don't even charge for the fixed cost of access to information.
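To make the marginal-cost point concrete, here is a tiny numerical sketch (my own illustration with made-up numbers, not something from the book) of what happens to the average cost per copy of an information good once the fixed cost of creating it is spread over more and more copies:

```python
# Toy illustration of fixed vs. marginal cost for an information good.

def average_cost(fixed_cost, marginal_cost, copies):
    """Total cost spread over all copies produced."""
    return (fixed_cost + marginal_cost * copies) / copies

FIXED = 100_000_000   # hypothetical cost of creating the content/platform in the first place
MARGINAL = 0.001      # hypothetical cost of serving one more copy (bandwidth, storage)

for copies in (1, 1_000, 1_000_000, 1_000_000_000):
    print(f"{copies:>13,} copies -> ${average_cost(FIXED, MARGINAL, copies):,.2f} per copy")

# 1 copy costs the full $100,000,000; a billion copies cost about $0.10 each.
# The price is dominated entirely by the fixed cost, and scarcity-based pricing
# has nothing left to grip: each additional copy is essentially free.
```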

So how do you price something like this? Paul Mason makes a good point that at present, the market doesn’t know how to price “information”. Hence the crazy valuations of startups that aim to aggregate and deliver services based on it. The most we can do is compare with prices of services without info-tech. For example, if I book a hotel room, it will cost me X dollars, but if I can find someone willing to let me stay in their home while they’re gone for Y dollars, then the value of the service that allows me to find that someone should be close to X - Y. But what about the price of having songs recommended to me on Pandora or Spotify? I would not have paid to listen to these songs otherwise, as I didn’t even know they existed. But surely this service is worth something to me. What about something I don’t even have to explicitly pay for, such as Twitter or Reddit? What is the value to me of having in my hands a snapshot of current trends and articles, tailored to my specific interests? 

With capitalism's inability to appropriately price information-based goods, along with the ease of copying information, Mason argues that the cost of labor will be driven to zero, producing enormous profit for corporations. This works until almost all the labor has been squeezed out of the system, at which point profit flatlines and prices are driven toward zero, because nobody can afford the products anymore. (This is my very layperson understanding of the labor theory of value.) Therefore, in a post-scarcity world, capitalism cannot exist.

I found this a fascinating idea, but I have a major issue with the contention: information cannot reduce the cost of resources to zero. In fact, nothing can. We still need to make physical products such as housing, food, and clothing. Info-tech can perhaps make resource consumption much more efficient, for example by encouraging telecommuting or reducing overproduction, but in the absence of infinite resources it can never drive costs to zero.

However, his point is well-taken, because the second thought I had was that the replacement of labor is primarily going to come from artificial intelligence. Mason doesn't really mention AI in his book, instead simply referring to "automation," but he seems to assume that once a process is automated, it essentially becomes capital: it cannot ever inject more value into the system. But AI appears to be making remarkable strides, and is predicted by many to cause massive unemployment in the near future (especially for routine work, see below, and in the automotive sector). As artificial intelligence replaces labor, most people will be driven out of the market.

This will be a problem (to say the least), because postcapitalism assumes that we as individuals can retain control of social and other information. But most companies are extremely protective of their information (as they are essentially smart information aggregators). You can use Uber to get a ride, but you cannot have access to their database. (It is this information imbalance that gets converted into profit.) So with corporations owning both the aggregated information and the algorithms to process it, the replacement of human labor could lead to an economic disaster. Perhaps, in the face of such widespread unemployment, social pressures will be so large that governments will be forced to institute a universal basic income. This is vigorously advocated by Mason as one of the central requisites of a postcapitalist economy, but it seems like there will need to be incredible social upheaval before it will even be considered.

Another interesting idea the book brings up is that, in an information-abundant, technologically connected society, profit derives primarily from creating something new. In this world, anything that can be planned can be automated, so the only jobs left will be those that are inherently innovative, creative, and unplanned. But we are already starting to see AI that can create art, generate Shakespeare-style text, and even dream. True, there is still a long way to go before we recognize such output as truly innovative or useful to daily life, but we are only beginning to explore the full potential of deep learning and other relatively new AI methods. Will it just be a matter of time before humans are competing with machines to innovate and create? And if and when that day comes, will it be ethical for anyone or anything to "own" these creative artificial intelligences?

Overall, I found PostCapitalism to be a great read. Highly recommended for anyone looking to get a better sense of how our economy is being transformed. Mason doesn't go into detail about how exactly we are supposed to make the transition to postcapitalism, but he makes a great case that certainly something interesting is on the horizon.

Apple vs the World: On the broken state of patent law

Image from Cult of Mac

(originally posted at amaral-lab.org)

On August 24, 2012, Apple won a huge patent lawsuit against Samsung and was subsequently awarded $1.05 billion in damages, an amount that they are now seeking to triple because the infringement was found to be “willful.” However, this is only the latest battle in a huge litigation war waged by the two tech giants in multiple countries around the world. It also sets the stage for a larger attack on Android, which Steve Jobs claimed in his biography was stolen and which he vowed to destroy (although I honestly don’t see how anyone could mistake iOS for Android).

The patents found to have been infringed: pinch to zoom, bounce-back when scrolling, tap to enlarge, multi-touch sensing, rounded square icons, and the general shape and look of the phone.

The implications of this verdict could be far-reaching. In this modern technological age, IP (intellectual property) is becoming increasingly hard to define, especially given the current state of patent law in the US, which many have decried as broken. How do we determine the validity of a patent for “swipe to unlock” when it’s the digital equivalent of “turn doorknob to open door”? Incidentally, Apple lost a suit against HTC in the UK over this particular patent, which the judge declared “obvious.”

Does this mean that tech companies shouldn’t patent? It’s telling that most IP scuffles in the tech industry end in settlement, with both sides agreeing to share a bit of their intellectual property in a fair manner. This is because the industry is so interdependent, and patents exist for everything. Any one device uses scores of patented technologies and hardware components that are all essential to its function, and no tech company could possibly exist in isolation. I for one find it highly amusing that Apple itself is Samsung’s largest customer, purchasing over $5 billion (estimated to be as high as $11 billion for 2012) in processors, screens, storage, and other parts from it every year, while Samsung is Apple’s key supplier. The two literally could not survive without each other. Apple’s own iOS uses several technologies that Samsung invented, a fact which Samsung tried to use to countersue, although those efforts ultimately proved unsuccessful.

Let’s just say the Facebook relationship status of these two would be “it’s complicated.”

What is the fallout from this trial? Most think it’ll lead to many more legal battles, since a precedent has been set that such battles can be won and are highly profitable. Phone makers will shy away from the current standard of smartphone design, which Apple did indeed pioneer and establish. This could drive innovation, but it could also create unnecessary obstacles if phone makers feel they can’t use common-sense features such as “rounded corners” or “slide to unlock.” There could also be a gradual shift away from the Android OS, which is the ultimate target of the war Apple is waging. Google realizes that it can no longer afford to sit on the sidelines. The company has already stepped into the fray by capitalizing on its recent purchase of Motorola Mobility, filing a lawsuit against Apple for infringing on a wide range of mobile patents it acquired through the purchase.

Essentially, in an industry 100% dependent on constant innovation, it’s hard to define what is truly innovative and how much of that innovation rests on what came before it; it’s basically a credit assignment problem. How do you determine what is “obvious,” meaning an idea that would have been independently reached by most people because it naturally follows from what already existed? On the flip side, how do you determine what is “innovative,” meaning a significant, novel contribution that would not have existed otherwise? And is it fair to expect overworked, time-limited patent examiners to answer these tough questions, answers that are literally responsible for the outcomes of billion-dollar lawsuits around the world?

I don’t have answers to these questions, but I do believe that the patenting of cosmetic features is completely frivolous and wastes both time and money. It’s interesting to compare the tech industry to the fashion sector, which is also predicated on never-ending reinvention and innovation. What would happen if we started to patent “green dresses with zig-zagging dark patterns” or “high-waisted plaid shorts worn with belts”? If it’s ridiculous to patent these design concepts, it should also be ridiculous to patent the “rounded rectangular shape” of the iPad or the “round square icons on a black background” on iPhones (patents which both exist). In fashion, just as in technology, you pay a premium for the brand. Therefore, it’s illegal for knock-off Gucci products to claim to be actual Gucci, but it’s perfectly fine to create discounted items which look highly similar, but don’t carry the same brand name. If Samsung had slapped an Apple logo on all their products and tried to capitalize on their premium brand name, then this would be a clear case of stealing. But technology, just like fashion, follows trends, and the current trend in smartphone technology is to have large touchscreen rectangular phones that are application-based. It is my opinion that you can’t patent trends. And chasing after perceived patent infringements could very well come back to bite Apple in the ass. Because if there’s one thing we know about fads, it’s that they all eventually die away.

The Wired Generation

(originally posted at amaral-lab.org)

At this moment, as I write this, I am among the most connected human beings in all of history. If you wanted to, you could send me a Facebook message, email me, send me a Gchat message (all of which I get on both my laptop and my phone), text me, call me, or any combination of the above, and I would see it instantly. In the same vein, I can click over to Google and have access to almost any information I want, watch (legally or illegally) any movie or show that strikes my fancy, and reach out and connect to strangers from all around the world. I am truly a new breed of super-connected, wired human, and I am completely typical of my generation.

The problem is, science is starting to show that this is not an entirely good thing. In fact, it might be fundamentally changing the way we think and learn. Recently I watched a Frontline documentary called Digital Nation (highly recommended) which discussed these very matters. It spoke of how children and teenagers are almost never doing just one thing at a time anymore. They are doing homework at the same time as messaging their friends, watching a YouTube video, downloading music, and more. They don’t write essays; they write a series of paragraphs. They don’t read books; they find summaries and synopses online. None of this was particularly surprising to me, and I’ve been guilty of more than a few of the same things. What did surprise me was just how sure all of these students were of their ability to multitask and to be good at all the tasks they’re doing.

Researchers have demonstrated that, in fact, the opposite is true. A recent article in PNAS (Eyal Ophir et al., "Cognitive control in media multitaskers," PNAS 106 (37): 15583-7, 2009) shows that the heaviest multitaskers are worse than light multitaskers at filtering out irrelevant stimuli when trying to attend to multiple tasks at once, and are slower to switch between tasks, both of which hurt their performance. In other words, a person who chronically multitasks is actually worse at multitasking than someone who rarely multitasks, a finding with very troubling implications for students today who grew up multitasking.

Why might this be? Shouldn’t someone who practices something get better at it? My own opinion is that multitaskers are not actively controlling the switching among their tasks in any productive fashion. They have in fact developed an addiction to stimuli and are extremely receptive to any distraction. Focusing on any one thing for a length of time invokes boredom, or a feeling that they are missing out on something, that they could or should be doing something else at the same time. The only way to stop this feeling is constant stimulation. This is probably why most movie trailers nowadays are just a series of one-second, high-voltage clips set to extremely emotional music: an absolute buffet of stimulation.

Does this mean that all of our attention spans are getting shorter? Can I write a paragraph without feeling the need to check Facebook or my email? Can a random question pop into my head without my immediately reaching for my smartphone to look it up? Maybe it’s not a good thing that it’s now totally acceptable to be socializing with one friend in person and then start texting another one, sometimes mid-sentence. Maybe it’s not wise to allow students to be on their laptops during class, surfing the internet and Facebooking while they’re supposed to be absorbing new information and knowledge.

There is now a disconnect between what we’re doing and what we’re thinking about. We’re always somewhere else, jacked into everything and yet fully connected with nothing. I worry that it’s affected our ability to formulate complex thought, which requires a high level of focus and absorption. If we try to do everything at once, if we’re constantly being interrupted by our phones, our email, and even our own need for new stimulation, can we ever do any one thing truly well anymore? Maybe it’s enough to just be aware of our own tendency to distract ourselves and to make an active effort to focus and not let irrelevant stimuli in. After all, this was a rather long blog entry. Ask yourself: how many times did you stop reading this to check on Facebook or your email? Be honest.

A new type of mob: Group intelligence in the twenty-first century

For human beings, communication, as the pathway to understanding, forms the bedrock and vital infrastructure of our social nature. Where it is prevalent and free, we are rewarded with a wealth of ideas and with intellectual as well as social progress. Where it is stifled, confused, or simply absent, we encounter the worst vices of mankind: exploitation, paranoia, aggression, and, inevitably, war. The stakes are higher than ever in our modern culture of global economies and extreme specialization of professions. As our technological prowess and standards of living steadily rise, legions of experts are called upon to study more and more about less and less. The robotics engineer may know more than anyone else in the world about how to translate computer code into fluid, lifelike interfaces, but probably has no idea where his breakfast comes from in the morning. We are inextricably bound to and dependent on each other for our survival, much like the different organs in a living organism, and yet we are only beginning to study the patterns of our connections and the possibilities for intelligent cooperation.

As I will show later on, there is ample evidence supporting the notion that a diverse mob of people can exhibit intelligence similar to, or even greater than, that of the most intelligent person within it. That a group of individuals can exhibit a collective intelligence is already enticing enough to warrant careful consideration; furthermore, once we have a stronger grasp of the underlying dynamics, perhaps it will be possible to apply what we learn to harness this mostly untapped resource of intelligence. In order to motivate a structured and systematic overview of such group dynamics and cooperative problem-solving, it is useful to consider analogies in a system we naturally think of as possessing intelligence: the human brain. I surveyed two models of the mind in order to obtain a wider range of perspectives on how intelligence can be modeled, and I will consider cooperative intelligence within the framework of what I take to be the most important characteristics of these models.

Induction, by John Holland et al., builds a rigorous and highly specified model of learning and of the conceptual structure of the mind. Using the bucket brigade algorithm and classifier system we studied in his book Hidden Order, he describes a process by which an initially simple system adapts to its environment, learns, and becomes more complex and better able to model and respond to the external world accurately. Central to his construction of a mental model are what he calls “q-morphisms,” or quasi-morphisms. He describes morphisms as mathematical structures relating how two spaces map to each other, and uses this formalism to illustrate how our minds map parts of the world into our models of it. The external world undergoes a transition function, called T, from one state to another over time, and likewise our model of it undergoes a transition T’. The external world both before and after the transition is mapped onto our model using a categorization function P, which is ostensibly time-invariant.

However, Holland makes the point that when our model isn’t accurately depicting the environment, we have to create layers of exceptions and new transition and categorization functions. It is this process of creating multiple layers of a mental model branching off of each other that he calls a “q-morphism,” because the world no longer maps uniquely into our model. With incomplete knowledge, we can make decisions based on the default transition layers, but as more information is acquired, we can move into more complicated and specified levels.
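To make this concrete, here is a toy sketch of how I read the setup (my own illustrative example with made-up states, not code or notation from the book): the model is adequate wherever categorizing and then predicting agrees with letting the world move and then categorizing, that is, wherever P(T(s)) = T'(P(s)); the states where this fails are exactly where exception layers get added.

```python
# Hypothetical micro-world: a traffic light that cycles, except it sticks when "broken".
def T(state):                      # the world's transition from one state to the next
    return {"red": "green", "green": "yellow", "yellow": "red",
            "broken": "broken"}[state]

def P(state):                      # categorization into the model's coarser categories
    return "go" if state == "green" else "stop"

def T_model(category):             # the model's default transition rule
    return {"stop": "go", "go": "stop"}[category]

for s in ("red", "green", "yellow", "broken"):
    prediction = T_model(P(s))     # categorize first, then predict with the model
    reality = P(T(s))              # let the world move, then categorize the result
    status = "ok" if prediction == reality else "EXCEPTION: needs a new rule layer"
    print(f"{s:>7}: model says {prediction}, world gives {reality}  [{status}]")

# "red" and "green" satisfy P(T(s)) == T_model(P(s)), but "yellow" and "broken"
# do not: the default layer predicts "go" after a stop, while the world stays
# stopped. A q-morphism keeps the simple default layer and hangs exception
# layers off it for exactly these cases, instead of demanding a perfect mapping.
```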

A vital supposition of this system is that we are able to accurately depict our environment in our model and then translate subsequent decisions into actions. Holland doesn’t assume that these abilities are built in from the start, but rather that they are acquired through testing and reinforcement via the bucket brigade algorithm. He describes two types of empirical rules, analogous to the “messages” that agents in his classifier system can post and match, each serving a different important purpose in building up and adapting the model. Synchronic rules are categorical and create hierarchies of definitions and associations between different concepts. An example given is that a dog is a type of animal, and hence sits below “animal” in the hierarchy of definitions, but it is also associated with “cat,” even though the two are not directly related hierarchically. Diachronic rules make predictions about the external world, such as “If you stay up too late, you’ll be tired tomorrow,” but can also dictate appropriate behaviors in certain situations, such as “If you see a car driving straight toward you, get out of the way.” Rules are created and compete with each other in the same way as the messages posted to the bulletin board in Hidden Order, bidding against each other and gaining or losing strength based on feedback from the external world. With increased experience, the model becomes more sophisticated and develops more specific rules for interpreting the world and formulating actions; essentially, the model learns.
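As a rough illustration of how rule strengths might evolve under this kind of competition, here is a heavily simplified sketch (my own, not Holland's actual algorithm; a real bucket brigade passes bids back along chains of rules, which I have collapsed into a single auction with direct environmental reward):

```python
class Rule:
    def __init__(self, name, strength=10.0, bid_fraction=0.1):
        self.name = name
        self.strength = strength
        self.bid_fraction = bid_fraction

    def bid(self):
        return self.strength * self.bid_fraction

def run_round(rules, reward_for):
    """One auction: the strongest bidder acts, pays its bid, and collects any reward."""
    winner = max(rules, key=lambda r: r.bid())
    winner.strength -= winner.bid()              # the bid is paid away (to supplier rules, in the full algorithm)
    winner.strength += reward_for(winner.name)   # feedback from the environment
    return winner

# Two hypothetical diachronic rules competing to fire in the same situation.
rules = [Rule("if-tired-then-sleep"), Rule("if-tired-then-more-coffee")]
reward = lambda name: 5.0 if name == "if-tired-then-sleep" else 0.0

for _ in range(20):
    run_round(rules, reward)

for r in rules:
    print(f"{r.name}: strength {r.strength:.1f}")
# The rule that keeps earning environmental reward accumulates strength and keeps
# winning the auction; the rival never gets rewarded, falls behind, and is
# effectively priced out of acting at all.
```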

Marvin Minsky, in his book Society of Mind, presents a very different view of a model of the mind, although the two still exhibit striking similarities. Overall, Minsky’s version is much more qualitative, subjective, and fragmented than Holland’s. Instead of presenting an overall model, he addresses many small issues concerning human behavior, almost independently of each other. Because of this, it was hard to piece together a cohesive theory of the mind. In addition, whereas Holland focused exclusively on learning and problem-solving, Minsky almost never discusses development or learning, choosing instead to examine aspects of the already formed mind, such as goals and language. It almost seems as if they were approaching the same model from two separate directions: the former building the mind from the ground up, focusing on how we learn; the latter breaking the mind down into its component parts, focusing on who we are. Minsky views the mind as a very goal-driven collection of agents which can call on each other to do subtasks. For instance, in order to DRIVE, we must call on the subagents STEER and SPEED CONTROL. These subsequently call other agents, and so on, all the way down to mini-agents which control muscle movement. Although very hierarchical, these hierarchies of agents are not static, and their relationships can shift around so that an agent that was called by one agent may do the calling next time. Interestingly, although he phrases it differently, Minsky also brings up the point that agents compete with each other in the mind: if subagents are locked in tight conflict, their meta-agent is weakened, paving the way for another agent to take the reins. In addition, he recognizes that in the absence of complete information, the mind often resorts to default assumptions based on past experience.

In both models, there is a tightly knit system of adaptive agents which compete with each other and react to feedback from the environment, with the goal of achieving something collectively. This raises the question: what happens if we replace the agents with people? Is group intelligence then possible? In The Wisdom of Crowds, James Surowiecki argues that there are already numerous manifestations of group intelligence. In a particularly striking example, he makes the point that the decision reached by a group is often eerily accurate, despite no one person in the group possessing enough information to have come to the same conclusion. In 1968, a U.S. submarine that had disappeared in deep water was all but given up for lost until naval officer John Craven decided to pursue a unique search strategy as an experiment. Instead of asking a few experts their opinions, he asked a diverse group of people with a wide range of knowledge to bet on different scenarios. From their responses, he concocted a likely location for the submarine, and amazingly enough it was found 220 meters away, a remarkable feat considering how little information they had to go on. Surowiecki’s main point is that aggregating diverse opinions that were reached independently can yield surprisingly brilliant results.
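A quick numerical sketch (my own toy example, not Surowiecki's data) shows why this kind of aggregation works when opinions really are independent and their errors are unbiased:

```python
import random
random.seed(1)

TRUE_VALUE = 100.0                                              # the quantity everyone is guessing at
guesses = [random.gauss(TRUE_VALUE, 20) for _ in range(1000)]   # independent, unbiased errors

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
worse_than_crowd = sum(abs(g - TRUE_VALUE) > crowd_error for g in guesses)

print(f"crowd estimate: {crowd_estimate:.2f} (error {crowd_error:.2f})")
print(f"individuals worse than the crowd: {worse_than_crowd} of {len(guesses)}")
# With independent, unbiased errors, the average is typically far closer to the
# truth than nearly every individual guess, because the errors largely cancel.
# The catch, as the examples below show, is that conformity and social pressure
# destroy exactly the independence this relies on.
```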

On the other hand, there are also numerous examples of groups making poor decisions.  An obvious instantiation of this would be “mob mentality,” in which otherwise perfectly rational people exhibit extreme irrationality and volatility.  In a group where people are pressured to conform and heavily influence each other, the bulk of the decision-making often falls to the most vocal or nominally important person, which is not necessarily the person with the most information and best judgment.  As an example, Surowiecki cites the Mission Management Team which was in charge of ensuring the safe return of the space shuttle Columbia in 2003.  Although several people on the team actually expressed concern about a large piece of foam which had broken off and damaged the shuttle during launch, the group as a whole chose to ignore this assessment because the leadership had unilaterally decided that the foam was of no import and never put it before the whole group or opened up any sort of debate.  Ultimately this decision proved to be a very poor one which tragically cost seven crew members their lives. 

Indeed, groups can often leave meetings much more polarized than before, as a result of extremely vocal individuals swaying moderate opinion-holders to their side, or of pressure on the group to come quickly to a consensus. In light of this, perhaps the close and intimate social settings in which juries are forced to make decisions are not the most conducive environment for rational verdicts. It seems as if groups are intellectually capable of collective intelligence, but the complex social interactions between people complicate and hinder this ability. In general, our myriad social interests and susceptibilities all but blind us to practical group decision-making.

However, I believe that we as a species are at a unique turning point in our development, with the advent of the internet and wireless communication. Never before in history has it been possible to communicate instantaneously with anyone in the world, to join an online community of everyone who shares your interests, or to share any idea you want with the entire World Wide Web. Essentially, the internet is as close to a meritocracy of ideas as we can get. Whereas in the past we were bound to our geographic communities and therefore tended to form dense networks where everyone knew everyone else, our social networks are now much more dispersed and organized around common interests. Communication on the internet is both selective and nonselective: ideas propagate freely, but personal relationships are determined by choice and shared interests. This creates a fertile environment for breeding new ideas within tight communities of similar interests and rapidly disseminating them to the masses. I believe that this change in network topology also tends to minimize the social complications which hinder intelligent decision-making. People are not forced to listen to or be influenced by an extreme minority that simply happens to be in the same vicinity, because the internet has no boundaries and it is all too easy to seek out a wide variety of opinions.

The internet also tends to screen out status cues which might otherwise bias how people receive ideas. In real life, people are awarded status in decision-making scenarios even when they are unqualified. Even in academia, bias remains a major problem which precludes work from being judged solely on its merit. Surowiecki remarks that “most scientific papers are read by almost no one, while a small number of papers are read by many people” (170). This seems contrary to the scientific ideal of objectivity and reverence for the pursuit of truth, but unfortunately all scientists are still human and subject to the same social influences as anyone else. On the internet, however, this seems to be less of a problem, because even though reputation-based systems are common, the rapid dissemination of ideas ensures that most of the time these reputations are well deserved. Despite the internet being an unimaginably vast place, even the most obscure entertaining or useful piece manages to find its way to virtual stardom, motivating those who already have solid reputations to work hard to maintain them.

In fact, the concept of ideas proliferating on their own merit is by no means new. It was introduced in 1976 by Richard Dawkins under the name “memes.” He described them in the following way:

Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation. (194)

Considering ideas to be in evolutionary competition with each other is very reminiscent of Holland’s classifier system, with the internet as the “bulletin board” and ideas propagating from person to person as the messages. The proliferation of reputation-based sites such as Google or Amazon might be analogous to his bucket brigade system, which assigns strength to ideas based on their performance. If I’m searching for a particular web site, I can type in a few key words and use the combined experience of all those who used Google before me to track it down almost instantly.

Of course, there are two parts to intelligence: the ability to accurately portray the world, and the ability to act on that portrayal. Memes proliferating throughout cyberspace might correspond to the former, but for the latter to hold, there must be a form of rational, coordinated, and planned group cooperation. I want to stress the “rational” and “planned” parts of that statement because although there are many instances of coordinated human activity throughout history, very few of them display the rationality and self-awareness that is evident in true intelligence. To illustrate what I mean, I’d like to turn to another book which focuses on our new culture of being “constantly connected.” In Smart Mobs, Howard Rheingold writes about this new phenomenon of rapid wireless communication and the state-of-the-art inventions that should revolutionize how we connect. Although the possible technologies he describes are indeed sexy, far more intriguing are his accounts of group cooperation made possible by this new culture. When the impeachment trial of President Estrada of the Philippines suddenly halted on January 20, 2001 as a result of efforts by senators linked to Estrada, “[o]pposition leaders broadcast text messages, and within seventy-five minutes […], 20,000 people converged on Edsa [a popular and well-known meeting spot…]  Over four days, more than a million people showed up.  The military withdrew support from the regime; the Estrada government fell” (160). Instead of a spontaneous mob born out of passion or random self-interest, the group that gathered at Edsa was well informed and had the intention of accomplishing a goal only possible through cooperative effort and coordination. This would not have been possible without mobile wireless communication and the widespread use of SMS messaging throughout the country.

It seems that effective communication is the key to harnessing the vast knowledge and considerable abilities of a diverse group of people, and only now are we in an opportune position to explore the ramifications of instantaneous, widespread sharing of ideas. Far from being able to offer any serious predictions about the future, I’d at least like to make some observations and indulge in a bit of wild speculation. The biggest limitation of the Holland classifier analogy to group intelligence, and also its most intriguing aspect, is that in a model of the mind, the external world is very well defined. When we extend the model to groups of people, it is no longer clear where the feedback comes from. Because the largest problems facing humanity are in fact manmade, in order to engage in group problem-solving, we will be forced to model ourselves. When a system becomes capable of modeling itself, an argument can be made that it has gained self-awareness and moved into the realm of consciousness. Is it completely preposterous to contemplate the idea of a future collective consciousness of which we, as mere simple agents, will not be aware? Although I cannot say for sure that it’s out of the realm of possibility, I also cannot claim that such an analogy even holds when we’re dealing with groups of already sentient and intelligent beings. In this case it’s extremely difficult to define at what stage the modeling actually occurs: inside the mind of each human, or between them.

In addition, there are many complicating factors which could prevent this idealistic model from ever becoming reality.  It is not clear whether simply spreading an idea rapidly will necessarily mobilize cooperative action.  Because of the wide range of interests which exist on the internet, it’s possible that people would be unmotivated to act in accordance with each other.  In addition, in order for novel ideas to be created, people must maintain their diversity.  It’s unclear if the internet, despite being a haven for those who wish to indulge in their interests, will encourage others to develop interests in the first place.  If anything we can think of creating is already on the net somewhere, what is the point?  Can we manage to hold onto our very dear concepts of individuality in the face of an avalanche of other people’s thoughts and ideas?  The future could be a bleak and uneventful landscape of legions of people net-surfing all day, never contributing a thing. 

In spite of these objections and concerns, I find it exciting that the infrastructure is there nonetheless.  Mostly, I believe it’s important to acknowledge the significant impact that widespread mobile wireless communication will have on our ability to solve problems as a society.  At best, it will be a boon to our cultural and technological progress; at worst, it will create a culture of hyper-mass-consumerism.  Either way, it’s the beginning of a new era.

  

Works Cited:

    Dawkins, Richard. The Selfish Gene. New York: Oxford University Press Inc., 1976.

    Holland, John H., Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard. Induction: Processes of Inference, Learning, and Discovery. Cambridge: MIT Press, 1989.

    Minsky, Marvin. Society of Mind. New York: Simon & Schuster, 1988.

    Rheingold, Howard. Smart Mobs. New York: Perseus Books Group, 2002.

    Surowiecki, James. The Wisdom of Crowds. New York: Random House, 2004.

(Written 2007)