Internet Governance Forum continued

November 1, 2009

This year, the Internet Governance Forum will meet in Sharm El Sheikh, Egypt, in early November. This is year four of this interesting United Nations-sponsored event, and the first to be held in the wake of increased independence for ICANN.

There is a lot on the program – here’s just a summary of some of the sessions I am personally involved in and some preliminary thoughts.

Civil Society meeting – 7pm, November 14. This will be my last civil society meeting as a Co-ordinator of the Internet Governance Caucus, the main coordinating body for non-government sector input into the forum. It’s been a difficult year in several respects, and I am glad to see my term of office come to an end. The meeting is important, particularly for newcomers, as a chance to meet their colleagues before the meeting proper gets underway.

Workshop on Transnationalisation of Internet Governance – where to from here? 9.30am, November 15. I’m really looking forward to this session right at the start of the meeting proper, which I will be chairing. After expert overviews from Drs Wolfgang Kleinwachter (Giganet) and Janna Anderson (Imagining the Internet and Pew Internet Survey), we have a great panel to respond to the issues raised, including Dr Jeremy Malcolm, Dr Elena Pavan (by remote participation), Robert Pepper from Cisco, and Robert Kahn, acknowledged by most people as the co-founder of the Internet with Vint Cerf. We are looking forward to a good exchange of viewpoints here, and with the audience as well – in the wake of increased independence for ICANN and a greater emphasis on the multistakeholder model, what will this mean for areas such as cybercrime, and for other bodies such as the ITU? There are many questions here; it should be a great scene-setter.

Workshop on Core Values, November 16, 2pm. I’m looking forward to speaking at this one, with a great range of colleagues. The identification of core values is important – I am certain the technology will continue to change, but what is it that has made the Internet useful that we need to ensure we maintain amidst the changes? This should be an interesting workshop.

Net Neutrality Workshop, 2pm, November 18. I’ll be speaking here with a group with a wide range of interests in this subject, from activists through to network managers and national regulators. I’ve written before on the confusion that surrounds this debate – I am looking forward to being able to say a few things from a consumer perspective about what we really need to achieve here.

And apart from these, I am looking forward to seeing old friends and socialising, and getting a chance to see the pyramids en route home (via Beijing of all places).

It will be interesting to see how the meeting goes. The IGF has had a great influence on the Internet, bringing together stakeholders from business, government and non-governmental organisations to get a good exchange of viewpoints on where the Internet needs governance structures. To date its achievement has been facilitating dialogue between all the key players – perhaps that is all it will ever achieve – and if that’s the case, it has still been worthwhile.


Origins of packet switching

April 6, 2009

On packet switching

The following email exchange occurred in April 2009, and was precipitated by a discussion I had with Jovan Kurbalija from the Diplo Foundation in Geneva. The discussion related to the role of Paul Baran in defining the Arpanet, and the extent to which that may have reflected a desire to build a network to avoid nuclear war. To help clarify that role, I record the following exchange.

Note: it seems packet switching, and the related datagram concept (Louis Pouzin), emerged in a number of places at once, occurring to anyone who thought deeply enough about how to construct a more reliable networking environment. It would also seem that packet switching had a precursor in the non-digital world in torn-tape telegraph messaging (thanks to Larry Press for pointing that out).

Question (Ian Peter)

From what I have read here and elsewhere, it would appear that both Paul Baran and Donald Davies (via Roger Scantlebury) met at times in the Arpanet design phase with various members of the Arpanet team to explain their similar theories on packet switching. I have also read that it was the Davies design which was adopted, not Baran’s.

Can anyone shed more light on this – who met whom, and why, if both designs were known, was Davies’ design preferred?

Vint Cerf

Bob Kahn was a key player in the design of the IMP and the ARPANET.

Paul’s principal conceptual ideas related to packetization (message blocks) and hot-potato routing. I believe Donald Davies had a multi-node idea but implemented only one node owing to funding limits. He or his team, via Roger Scantlebury, influenced the ARPANET design by convincing Larry Roberts to use 50 kb/s circuits rather than 2.4 kb/s circuits.

Donald also contributed the term “packet” into the literature.
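
As an aside for readers unfamiliar with the term, “hot-potato” routing means a node forwards each packet immediately on the best outgoing link available at that instant, rather than holding it in a buffer. Here is a toy sketch of the idea in Python – the topology, queue lengths and tie-breaking below are invented purely for illustration:

```python
import random

# Toy illustration of Baran's "hot-potato" idea: a node does not sit on a
# packet waiting for the ideal path to free up; it immediately forwards it
# on the best currently available outgoing link (here, the one with the
# shortest output queue). Everything below is invented for illustration.

class Node:
    def __init__(self, name, links):
        self.name = name
        self.links = links  # neighbour name -> current output queue length

    def forward(self, packet):
        # Pick the least-loaded link right now, breaking ties at random -
        # never hold the packet, hence "hot potato".
        best = min(self.links, key=lambda n: (self.links[n], random.random()))
        self.links[best] += 1
        print(f"{self.name}: forwarding {packet} via {best}")
        return best

node = Node("A", {"B": 3, "C": 0, "D": 1})
node.forward("packet-1")  # goes via C, the idle link
```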

Robert Kahn


Vint, my comments below are intended to amplify on what you briefly summarized. I’m not sure what the context for this is, but many folks wrote about networking over a number of years.

You cited some of them. Kleinrock, Baran, Davies, Licklider to mention four early ones. Baran tried to explain how to build a packet net before it was practical to do so. He started out by postulating the use of broadcast radio stations (like normal AM stations) to hopefully get data from place to place “hopping” from radio station to radio station.

There was no available electronics to do so (VLSI had not been created yet), and only big mainframes would have been available for use at great cost. There was no discussion of protocols and all that involves. His was a fine conceptual start, but hardly a blueprint to build anything. Donald Davies actually tried to raise funds to build a packet net, but was unable to do so. He ended up outfitting a single minicomputer (which had only shortly come into commercial availability) with multiple terminals to demonstrate the packet concept from one terminal in a room to another.

Kleinrock’s MIT thesis provided interesting analytical approaches to evaluating commodity queuing networks and stimulated many follow-on research projects, out of which came many of the students that went into the field. He also pioneered the field of network measurement (and I presume modeling as well).

Licklider was a wonderful spokesman for the possibilities of networking, but never really a practitioner.

With encouragement from Bob Taylor and Charles Herzfeld, Larry Roberts set out to create the first packet net – the ARPANET, as it came to be called. He outlined the goals and the trajectory for the project, sufficient for someone to figure out how to make happen what he outlined: a packet network (actually it was called a message switching network initially, with the packet switching moniker adopted only later) that consisted of nodes connected by “high speed” digital lines to be leased from the (then) telephone company. He called for messages to be broken into packets of up to 1000 bits each, and then routed through the net and reassembled into the original message at the destination. Larry’s background would likely have enabled him to build the network himself, or with a small technical team, but he had managerial responsibilities at DARPA that did not lend themselves to establishing the required level of technical immersion in-house. So DARPA issued an RFQ to have a contractor design and build the network. BBN won the contract, and that successful initiative created the ARPANET IMPs, which formed the basis for the ARPANET (with 50 kb/s lines from AT&T).

The technical task was to figure out how to build such a device, the IMP, to make such a network come into existence.

I was responsible for the system design of the network (and primarily the design of the IMP). Severo Ornstein was responsible for the hardware design and development, and Bill Crowther and Dave Walden for the detailed software implementation. Frank Heart was the BBN executive who had managerial responsibility for the effort. Dave and I did the initial field testing and debugging of the initial four-node net on the west coast (plus Utah). Larry masterminded the ARPANET project from his position at DARPA. I would say the BBN effort resulted in the first demonstrated proof of concept of a packet switching network, and the ARPANET project demonstrated the first example of a computer communication network. In parallel with the IMP and packet net development, the effort to develop protocols and host computer applications proceeded and, in many ways, was the required counterpart to the packet switching network piece, which made the network concept useful to real users.

Then there was the plan to interconnect multiple heterogeneous packet networks (and their attached computers cum protocols) which demonstrated the concept of Internetting and led to today’s Internet. But, then again, that’s an entirely different story.

Bob
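
To make the packetization scheme Bob describes concrete – messages broken into packets of up to 1000 bits, routed independently, and reassembled into the original message at the destination – here is a minimal sketch in Python. The framing is purely illustrative and bears no relation to the actual IMP message formats:

```python
# Minimal sketch of the ARPANET packetization idea described above: break
# a message into packets of up to 1000 bits (125 bytes), send each packet
# independently, and reassemble in sequence order at the destination.

MAX_PACKET_BITS = 1000
MAX_PAYLOAD = MAX_PACKET_BITS // 8  # 125 bytes of payload per packet

def packetize(message: bytes):
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + MAX_PAYLOAD])
        for seq, i in enumerate(range(0, len(message), MAX_PAYLOAD))
    ]

def reassemble(packets):
    """Rebuild the original message, tolerating out-of-order arrival."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"LO" * 400      # an 800-byte message
pkts = packetize(msg)  # 7 packets
pkts.reverse()         # simulate out-of-order delivery
assert reassemble(pkts) == msg
print(f"{len(msg)} bytes -> {len(pkts)} packets of <= {MAX_PAYLOAD} bytes")
```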

Vint Cerf

(in reply to a question about torn tape telegraph messaging)

Yes of course we did know about “torn tape” and also about message switching in the form of AUTODIN.

If you look at Len Kleinrock’s book that emerged from his dissertation, it was about stochastic flow and delay in message-switched systems. An ARPANET innovation was to break messages up into “packets” for purposes of transmission, to reduce the transmission delay, which was significant over the low-speed backbone trunks available at the time.

Also, for purposes of noise resistance, the shorter packets had a higher probability of arriving intact than long messages.

Communication Nets: Stochastic Message Flow and Delay

This text develops a queuing theory model of communications nets, with realistic assessments that will benefit those working with computers and other communications systems. Topics include optimal channel capacity assignment, effect of priority and other queue disciplines, choice of routing procedure, fixed-cost restraint, and design of topological structures. 1964 edition.

Publisher: Dover Publications
ISBN: 0486458806
EAN: 9780486458809
No. of Pages: 209
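
Vint’s two points above – packets cut store-and-forward delay, and short blocks are more likely to arrive intact – lend themselves to a back-of-envelope calculation. In the sketch below the 50 kb/s line rate matches the ARPANET trunks, but the hop count, message size and bit error rate are assumed figures chosen only for illustration:

```python
# Back-of-envelope sketch of the two effects of packetization: pipelined
# store-and-forward delay, and the chance of a block arriving intact.

LINE_RATE = 50_000     # bits/s, the ARPANET's 50 kb/s trunks
BIT_ERROR_RATE = 1e-5  # assumed line error rate, for illustration only

def store_and_forward_delay(message_bits, packet_bits, hops):
    """Transmission delay over `hops` store-and-forward links when the
    message is cut into packets: after the first packet fills the
    pipeline, the remaining packets follow one line-time apart."""
    n_packets = -(-message_bits // packet_bits)  # ceiling division
    per_packet = packet_bits / LINE_RATE
    return hops * per_packet + (n_packets - 1) * per_packet

def p_intact(bits):
    """Probability a block of `bits` arrives with no bit errors."""
    return (1 - BIT_ERROR_RATE) ** bits

msg = 80_000  # an 80,000-bit message crossing 5 hops
print(f"whole message: {store_and_forward_delay(msg, msg, 5):.2f} s, "
      f"P(intact) = {p_intact(msg):.3f}")
print(f"1000-bit packets: {store_and_forward_delay(msg, 1000, 5):.2f} s, "
      f"P(packet intact) = {p_intact(1000):.3f}")
```

With these assumed numbers the packetized message crosses five hops in roughly a fifth of the time (1.68 s against 8 s), and each 1000-bit packet arrives intact about 99% of the time, while the odds for the whole 80,000-bit message are closer to a coin toss.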

Vint Cerf (to Roger Scantlebury)

Roger,

Is it your understanding that Donald and the NPL team were unaware of the Baran work at RAND during the period of development of the network at NPL? I don’t think this has to do with anyone fighting over paternity. It is just a question about when the various “packet”  efforts became aware of each other.

For example, your interaction with Larry Roberts in 1967 is the key link that drew ARPA and NPL groups into mutual awareness I think.
I had once thought that you might have drawn Roberts’ attention to Baran’s work but I suppose not, if you were unaware of it in 1967.
In fact, it would be of interest to know when and how you (or Donald) might have learned of it?

Roger Scantlebury

Hi Vint

We referenced Baran’s paper in our 1967 Gatlinburg ACM paper. You will find it in the References. Therefore I am sure that we introduced Baran’s work to Larry (and hence the BBN guys).

We were unaware of Baran’s work when we started our own design work in 1965, but were given a copy of his paper by one of our colleagues in the UK Ministry of Defence (in 1966) while we were writing the 1967 paper. Clearly Donald and Paul Baran had independently come to a similar idea, albeit for different purposes: Paul’s for a survivable voice/telex network, ours for a high-speed computer network.

I hope this explains the timeline.

 

Time to stop talking about net neutrality

February 12, 2009

It’s time we stopped talking about network neutrality and instead described what we are actually trying to achieve. The term has been greatly distorted, becoming a series of arguments about traffic shaping, network management, carrier profitability, and endless nit-picking technical points. This is distorting some important matters about the future of the Internet and only causing confusion. The term has lost its meaning.

And, in fact, there probably never was network neutrality. A few recent postings on the Internet History list have shown that, even in the pre-commercial Internet days of the NSFNET, engineers were shaping traffic in order to give priority to the interactive telnet application (a precursor to the World Wide Web) over email and newsgroup traffic. Sensible bandwidth management.
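
For what it’s worth, the simplest form of that kind of shaping is strict priority queueing: interactive packets always jump ahead of bulk traffic. Here is a minimal sketch of the general technique in Python – an illustration only, not the actual NSFNET code:

```python
import heapq

# Minimal strict-priority packet scheduler in the spirit of the shaping
# described above: interactive traffic is always served before bulk
# traffic. The application classes and priorities are illustrative.

PRIORITY = {"telnet": 0, "email": 1, "news": 2}  # lower = served first

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps same-priority packets in order

    def enqueue(self, app, packet):
        heapq.heappush(self._queue, (PRIORITY[app], self._seq, app, packet))
        self._seq += 1

    def dequeue(self):
        _, _, app, packet = heapq.heappop(self._queue)
        return app, packet

sched = PriorityScheduler()
sched.enqueue("email", "msg-1")
sched.enqueue("telnet", "keystroke")
sched.enqueue("news", "article")
print(sched.dequeue())  # ('telnet', 'keystroke') jumps the queue
```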

For most of us, the reason for getting involved here was to protect a fundamental principle – the right of users to control which websites they visit, which access methods and which applications they use, rather than having these forced on them by a carrier or other parties for their commercial gain. If this principle is lost, and we end up with something like a pay-TV model of Internet access, where you pay to access certain sites or content providers pay a premium to get fast access from carriers, something fundamentally important about the Internet will be lost.

The substance here is vitally important, but the terminology has lost its meaning. Perhaps we need to talk about equitable access to carriage for content providers, equitable access to content and applications for end users, and similar principles. It’s important to get this debate back on track.


Emerging Issues in Internet Governance

December 10, 2008

Below are my edited opening remarks from the Plenary Session on Emerging Issues at Hyderabad, India, December 6, 2008.

Thank you very much, and good morning, everybody. I think it helps in examining subjects like this to pull ourselves away from our immediate environment and, as best we can, look into the future, and see if we can identify some of the issues and some of the factors that we are going to address. So, to me, the next billion users of the Internet are going to happen, and happen very rapidly – it may be over by the time we meet again. But the last billion, well, that’s going to take some time. The first billion took 20 years, as was pointed out. I think by the time we get to the last billion, we face another order of difficulty, and that might take us out another 20 years.

So in order to do so, I’d like to welcome you all to IGF 2028. The IGF 2028 meeting is taking place in Reykjavik, Iceland – I’ll explain a little bit later why we’re there. I’m participating virtually from Australia for various reasons, on a high-definition conference link. So thank you for the opportunity. The other piece of news I must relate, 20 years hence, is that Australia has just beaten India at cricket for the first time in 20 years. So that is a great thrill for me. Thank you.

When I’m talking about the last billion, I should say that we’re not talking about the last billion of the world’s population. I think we can leave out everybody who arrived yesterday, and probably everybody under the age of three. But we do start to pick up users at the age of about three or four, as I know from my grandchildren, who hop on the net and play games. So we do have three- and four-year-old users. I guess there are also a number of people who simply don’t want to have anything to do with the Internet. So that’s okay, too – I’m excluding them. I’m including in the last billion the people who see there is advantage for their family, for their children, in being connected; all the advantages that other people have from the Internet, they want them. So they’re in, and we’re trying to deal with those.

So let’s get a profile of where they are, as best we can. And let me say that quite a few of this last billion are in developed countries. They are in rural pockets which have not been connected and are very hard to connect – in countries like Australia it could be the remote indigenous communities who are amongst the last billion. So we certainly still have pockets in developed countries, and we have urban poor in developed countries who also will be part of that last billion. But to a large degree, the last billion will be those who are slower in adopting – those slower at this point in time will still be so. So there are areas on the planet that will need higher concentration than others in order to bring the equity that derives for all of us from having access to this thing.

Quite a few of the last billion won’t be able to read or write. That won’t be a problem for them, because a lot of the uses they will have will be around gaming or around downloading – downloading videos, downloading music, these sorts of things. So that’s not a problem, but it is an interesting factor, because then the tool for literacy that exists with the Internet becomes a very interesting thing that starts to come to the fore. These are the things that are happening. And the other factor that comes in with the last billion that’s very interesting for us is multilingualism. You ain’t seen nothing yet. By the time we get to the last billion, some of the languages we have to deal with start to become quite complex.

Let’s talk about the devices we use with the last billion, and very few of them are going to be computers. Most of them are going to be mobile devices. We are going to be in a mobile world. And some of these devices will be mobile phones, some will be what we used to call PDAs and all sorts of smaller devices, but the computer won’t be amongst the dominant devices at that point in time. That raises fairly interesting issues, and we will come back to those as we start to talk about infrastructure and so on. But as I said before, the dominant — some of the dominant media we use at this point in time will not be the dominant Internet media with the last billion. The concentration will be more towards what our kids do, which is the downloading, the texting, the games. All these things come into this space. There’s a ramification of these things with mobile and we will get to it. Let me talk about some of the issues and try and structure this a little bit. And I am not going to paint the whole picture, but perhaps raise some questions and some ideas which are a part of the picture for you to fill in yourself and see how you think we ought to address some of these things.

Now, why are we in Reykjavik, Iceland? We are in Iceland because Iceland is the fastest growing Internet economy on the planet. Iceland, the economic basket case of the great depression of 2008, has jumped ahead because of the major project under way to create the carbon-neutral Internet. Iceland’s vast geothermal resources have been put to use, and major server farms – in fact most server farms in Europe, most major locations in Europe, a lot of government data centers in Europe – have all relocated to Reykjavik to get near the geothermal energy. Other areas of the world with good renewable energy resources have also jumped ahead. So there is a great new economy and a great number of new possibilities arising from this. Some of the other features that we’ll see: most of us will have our biodegradable mobile phones. With the carbon-neutral Internet we have started to address the major problems of junk, and I know one of my fellow panelists is going to raise this issue later. In getting to this carbon-neutral Internet we have started to create a vastly different architecture and to use the way the Internet works in a vastly different way.

Let’s talk about the infrastructure for this. You might think that usage will probably be about six times what it is at the moment. That would be nice. One of the factors that’s interesting to look at is that the high-end users of Internet bandwidth at this point in time are using 10,000 times the bandwidth of the low-end Internet user. So there’s a vast discrepancy. And what is going to happen is that more of us are going to move to this bigger group – the 10,000-times-normal usage pattern – and this will happen more and more, particularly in developing countries. So I don’t think we will have anything like 6 times; we might be looking at 60,000 times the current capacity being needed. I would say conservatively we are definitely looking at 10,000 times the current capacity by 2028. That has a number of ramifications for the way we do things. I mentioned video. We are not just talking about recreational video and YouTube downloads; we are talking medical video, the high-definition conferencing which allows me to participate from Australia. These sorts of things are all part of this too. This creates a number of issues. One of the issues here is shared infrastructure. Is every telco and every ISP going to roll out parallel infrastructure across every country to try and do this, or are we going to look at shared infrastructure models to create efficiency and to create this global network which we need? How are we going to cope with this vastly bigger demand?

And what’s this going to look like? Let me give you a couple of ideas. First of all, we are looking at the mobile Internet to a very large degree. And as I say, the dominant use will be mobile. Now, back in 2008, the mobile device was a strange hybrid. It used non-Internet standards when it connected by voice – though some of the cheaper ones went over to what were called Internet standards – and then there were Internet standards for dial-up and other standards being introduced, and there was quite a bit of a mess. I think this eventually, to deal with the expansion of all of this, gave rise to the new standards institution which was created in about 2010: the IETFTU. The IETFTU looked to harmonize all the standards in the Internet area and the telephony area to create this big globally connected network. The major work of the IETFTU was the workshop which was trying to reduce the number of standards to less than 10,000, and this was consuming a lot of energy as people tried to get to a workable number of standards. However, there was a new organization which had just arisen, and I ask you to think about how strong it was: the III, the Internet Intersect Initiative. This was sort of like the WWW. They figured that the IETFTU was never going to cope. The IETFTU was far too old, far too staid; the IETFTU didn’t get it, and a lot of innovators moved to the III. The III was taking a clean-slate approach, and many of the users were starting to use the III standards and the new III network. The III network had all these wonderful applications and became the platform for innovation, because the other platform had got to the stage where innovation was becoming more and more difficult. And some of the people there remembered the thoughts of Robert Kahn at the IGF way back in 2007, who started to say that the Internet standards and the Internet standards body were starting to ossify and become incapable of handling new areas of innovation. So that was an interesting factor that came in.

So what does our governance look like in 2028? Well, it would defy all known management logic if the current structures in their current form were able to cope. They won’t. By 2028, things will have changed. Things will be moving very fast. I think the new motto, because of the pace of all of this adoption, will be: if it’s about to break, fix it. And that will be the dominant motto by which we start to look at the emerging governance models. One of the interesting questions around this is going to be jurisdiction. It will still be an issue. And we are living in lawyers’ paradise. The Internet is the lawyers’ paradise, where various countries all think they have jurisdiction of various sorts over the same domain name, where privacy issues and questions as to who should do what are all out there. We have industry self-regulatory regimes starting to spring up, but governments are also, in their own way, setting up their regimes. Do we have an industry self-regulatory regime dealing with most of these issues, or do we have a government regime separately? What is the structure that is going to deal with these tremendous emerging issues around intellectual property and privacy and individual rights and human rights, and the content that’s offensive in one country but not offensive in another? How are we going to deal with all of those? So, as I say, I think we are in lawyers’ paradise as we start to deal with these jurisdiction issues.

That’s about it, I think. I hope 2028 provokes you to start to think. If I just summarize very quickly, the areas where I think we have issues are how we deal with climate change and environment issues, how we deal with infrastructure, how we deal with access, how we deal with ossification and standards, and how we deal with governments as we go forward. Very interesting, and I hope I have provoked some thinking.

Intervention in Hyderabad following Chinese Govt comments on root zone

December 10, 2008

What follows are my comments at the Hyderabad meeting of the Internet Governance Forum, following the Chinese Government’s comments as regards the root zone authorisation role still carried by the US Government.

I again want to take up the comments from the government of China, and to say thank you very much for those comments. And it is not only governments that will agree with what you’re talking about.
Many of us feel that the root zone authorization process should be changed. And I think you’ll find support for that broadly in civil society, and you’ll find support for it in many other quarters as well.
But adding to that, I think we should thank the U.S. government for taking on this role at the beginning of the Internet.
There was a time when this function was necessary. There was a time when, if this function had not existed, a takeover by business would have been possible – something that would not have been in the best interests of the growth of the Internet. So there should be thanks to the U.S. for the role – the legacy role, I must say – that they did carry when the Internet was very young and needed such a role.
I’m in the country of Mahatma Gandhi, and I do know what Mahatma Gandhi would say about a foreign government continuing to carry a role when things have grown up and we want to do it differently now.  Where there will be difference is what should happen instead of this unilateral control mechanism.
Most of us in civil society would like to see this carried on as a — perhaps a function of ICANN, which gives us reasonable multistakeholder input.
So, in fact, we can look at that particular function as absolutely unnecessary.
If ICANN has made a decision, having consulted on a multistakeholder basis, there is no need for any authority to override that decision, because everybody has been consulted.
So, yes, I would definitely agree with the government of China: this is an issue. We should discuss it. Change is necessary. But the change I would see is to abandon the legacy function. It is no longer needed.
Thank you very much.

Mumbai, media, and my thoughts

November 29, 2008

I followed this week’s events in Mumbai more closely than a lot of people – particularly because, on the morning the news broke of the terrorist attacks, I was due to travel to Mumbai and stay in Colaba, where the major attacks took place. I re-routed at considerable expense, having discovered a clause in my travel insurance that excluded terrorist events.

That being said, a couple of observations should be made.

Firstly, many more Indians died than Westerners. They died in an attack that singled out Westerners. They died trying to protect the rest of us from this act. Whatever we might say, the Indian people made a great sacrifice trying to stop this – and considering how many Westerners were there at the time, did remarkably well. Let’s not forget that.

Secondly, the part played by Internet media – and here we can single out Twitter – was without precedent. The mainstream media followed Twitter for leads, and citizen reporters provided a great deal of meaningful and important information. Although, as a friend pointed out, we have been able to watch citizen media playing a major role since the glasnost-era attacks on the Russian parliament, the difference here is that now the mainstream media looked to alternative media for leads.

And thirdly – as long as we have a war on terror, we will have more terror. The answer to what we have just witnessed is not more troops in Afghanistan or Pakistan, retaliation, blaming, or escalated religious rhetoric. We would do better to reflect on the factors at work in our current world that lead to the escalation of violence in certain groups, and on why they feel so angry about the West. There just might be something that needs to be understood here!

The Transboundary Internet – coping with new realities

November 17, 2008

At the Internet Governance Forum meeting in Hyderabad I will be chairing a workshop entitled “The Transboundary Internet – Jurisdiction, Control, and Sovereignty”. I am delighted to be working with very knowledgeable panelists, including:

  • Meryem Marzouki – European Digital Rights (EDRI), Europe & National Centre
    for Scientific Research (CNRS) – Univ. Pierre et Marie Curie, France
  • Philippe Boillat – Director General of Human Rights and Legal Affairs, Council of Europe
  • James Love – Director, Knowledge Ecology International
  • Bret Fausett – Intellectual Property and Internet attorney, Adorno Yoss
  • William J. Drake – Centre for International Governance, Graduate Institute of
    International and Development Studies – Geneva
  • Rishi Jaitly, South Asia Government Affairs Manager, Google Inc.

Our rationale was stated as follows.

“The Internet crosses the boundaries of all nations and raises some unique transboundary jurisdictional problems. The recent case of a British citizen living in Spain, with Internet servers in the Bahamas, selling holidays to Cuba, and having his domain name impounded by a registrar located in the USA because it appeared to break the US embargo against Cuba, is one recent case in point. Another landmark case was the French-US Yahoo! case in 2000, dealing with the sale of Nazi memorabilia; but apart from these high-profile content cases there are many examples in other areas such as privacy, consumer issues, cybercrime, and intellectual property”.

Let’s mention another example. The Pakistan government ordered a block on offensive content published on YouTube, headquartered in the USA – material by a Dutch politician publicising cartoons from a Danish newspaper which were offensive to many Islamic people. The way the block was implemented took all of YouTube’s global content offline for two hours. I won’t go into the technicalities, but the overkill was rectified and something more locally suitable implemented.

But questions remain as to how such issues should be dealt with. This workshop will discuss the many implications of competing national jurisdictions being projected into a globalized space where multiple normative sources apply, such as political, legal, technical, contractual, and behavioral regulations. Through practical case studies, this workshop will look at the implications of various approaches to resolving these issues, and the implications for Internet governance, international law, national sovereignty, democracy, and human rights and fundamental freedoms. It should be an interesting session!

The workshop also explores the implications for Internet governance where no structures are in place to deal with emerging issues, and how default unilateral action in the absence of structural alternatives can lead to de facto Internet governance.

All inputs and thoughts welcome! Some detailed discussion of this area is long overdue and it would be good to see attention paid to how we should address these sorts of issues.

Ian Peter as Co-ordinator of Internet Governance Caucus

August 22, 2008

In August 2008 Ian Peter was elected as a Co-ordinator of the Internet Governance Caucus, an international alliance of non-government organisations and individuals involved with the United Nations Internet Governance Forum (www.intgovforum.org). He joins Parminder Singh of IT for Change in India as co-coordinator.

Mobile growth outpaces Internet

April 15, 2008

http://www.washingtonpost.com/wp-dyn/content/article/2008/02/22/AR2008022202283.html

From essentially zero, we’ve passed a watershed of more than 3.3 billion active cellphones on a planet of some 6.6 billion humans in about 26 years.

This is the fastest global diffusion of any technology in human history — faster even than the polio vaccine.

“We knew this was going to happen a few years ago. And we know how it will end,” says Eric Schmidt, chairman of the board and chief executive officer of Google. “It will end with 5 billion out of the 6” with cellphones. “A reasonable prediction is 4 billion in the next few years — the current proposal is 4 billion by 2010. And then the final billion or so within a few years thereafter.

“Eventually there will be more cellphone users than people who read and write. I think if you get that right, then everything else becomes obvious.”

“It’s the technology most adapted to the essence of the human species — sociability,” says Arthur Molella, director of the Smithsonian’s Lemelson Center for the Study of Invention and Innovation. “It’s the ultimate tool to find each other. It’s wonderful technology for being human.”

Maybe. But do our mobiles now render us unprecedentedly free? Or permanently tethered?

Too much information?

April 15, 2008

IDC have published an update on the size of the digital universe. The full report can be found via http://arstechnica.com/news.ars/post/20080312-study-amount-of-digital-info-global-storage-capacity.html

Here’s a summary

The digital universe in 2007 — at 2.25 × 10²¹ bits (281 exabytes or 281 billion gigabytes) — was 10% bigger than we thought. The resizing comes as a result of faster growth in cameras, digital TV shipments, and better understanding of information replication.
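
That headline figure is easy to sanity-check – 281 exabytes, at 8 bits per byte, is indeed about 2.25 × 10²¹ bits:

```python
# Sanity-checking IDC's headline figure: 281 exabytes expressed in bits.
exabytes = 281
bytes_total = exabytes * 10**18  # 281 exabytes = 281 billion gigabytes
bits_total = bytes_total * 8     # 8 bits per byte
print(f"{bits_total:.2e} bits")  # ~2.25e+21, i.e. 2.25 x 10^21
```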

• By 2011, the digital universe will be 10 times the size it was in 2006.

• As forecast, the amount of information created, captured, or replicated exceeded available storage for the first time in 2007. Not all information created and transmitted gets stored, but by 2011, almost half of the digital universe will not have a permanent home.

• Fast-growing corners of the digital universe include those related to digital TV, surveillance cameras, Internet access in emerging countries, sensor-based applications, datacenters supporting “cloud computing,” and social networks.

• The diversity of the digital universe can be seen in the variability of file sizes, from 6 gigabyte movies on DVD to 128-bit signals from RFID tags. Because of the growth of VoIP, sensors, and RFID, the number of electronic information “containers” — files, images, packets, tag contents — is growing 50% faster than the number of gigabytes. The information created in 2011 will be contained in more than 20 quadrillion — 20 million billion — of such containers, a tremendous management challenge for both businesses and consumers.

• Of that portion of the digital universe created by individuals, less than half can be accounted for by user activities — pictures taken, phone calls made, emails sent — while the rest constitutes a digital “shadow” — surveillance photos, Web search histories, financial transaction journals, mailing lists, and so on.

• The enterprise share of the digital universe is widely skewed by industry, having little relationship to GDP or IT spending. The finance industry, for instance, accounts for almost 20% of worldwide IT spending but only 6% of the digital universe. Meanwhile, media, entertainment, and communications industries will account for 10 times their share of the digital universe in 2011 as their share of worldwide gross economic output.

• The picture related to the source and governance of digital information remains intact: Approximately 70% of the digital universe is created by individuals, but enterprises are responsible for the security, privacy, reliability, and compliance of 85%.

To deal with this explosion of the digital universe in size and complexity, IT organizations will face three main imperatives:

One. They will need to transform their existing relationships with the business units. It will take all competent hands in an organization to deal with information creation, storage, management, security, retention, and disposal in an enterprise. Dealing with the digital universe is not a technical problem alone.

Two. They will need to spearhead the development of organization-wide policies for information governance: information security, information retention, data access, and compliance.

Three. They will need to rush new tools and standards into the organization, from storage optimization, unstructured data search, and database analytics to resource pooling (virtualization) and management and security tools. All will be required to make the information infrastructure as flexible, adaptable, and scalable as possible.