ICANNWatch
 

    This discussion has been archived. No new comments can be posted.
    'Twas the Night Before Christmas (in Marina del Rey) | 97 comments
    The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
    Re: 'Twas the Night Before Christmas (in Marina del Rey)
    by Anonymous on Tuesday December 25 2001, @03:06PM (#4391)
    MARY JO WHITE
    United States Attorney for the
    Southern District of New York
    By: STEVEN M. HABER (SH-8315)
    Assistant United States Attorney
    100 Church Street - 19th Floor
    New York, NY 10007
    Tel. (212) 637-2718

    UNITED STATES DISTRICT COURT
    SOUTHERN DISTRICT OF NEW YORK
    ---------------------------------x
    PGMEDIA, INC.,
    d/b/a NAME.SPACE,
                                   Plaintiff,
                                                     DECLARATION OF
              v.                                     DR. GEORGE STRAWN
                                                     CIVIL ACTION NO.
    NETWORK SOLUTIONS, INC. and                      97 Civ. 1946 (RPP)
    NATIONAL SCIENCE FOUNDATION,
                                   Defendants.
    ---------------------------------x

    GEORGE STRAWN declares, pursuant to 28 U.S.C. § 1746, as follows:

    1. I am the Advanced Networking Infrastructure and Research ("ANIR")
    Division Director for the Computer and Information Science and
    Engineering ("CISE") Directorate of the National Science Foundation
    ("NSF"), a position I have held since 1995. I also co-chair the
    Federal Interagency Large Scale Networking working group, which
    oversees the Presidential Next Generation Internet Initiative. From
    1991 to 1993, I was NSFNET Program Director in the same division and,
    among other things, I was responsible for the transition from NSFNET
    to the Internet. I have served at NSF full time while on leave from
    Iowa State University, under the Intergovernmental Personnel Act, 5
    U.S.C. §3374.

    2. From 1985 to 1995 I was Director of the Iowa State University
    Computation Center, where I was a charter member and chair of the
    advisory committee of an NSF supported regional network, MIDNET. From
    1983 to 1986 I was chair of the Iowa State Department of Computer
    Science. I have been an Iowa State Computer Science faculty member
    since 1966 and have worked in the programming language and compiler
    area. Prior to going to Iowa State I spent four years with IBM as a
    systems engineer and computer salesman. I hold a Ph.D. in abstract
    algebra from Iowa State (1969) and a BA in mathematics and physics
    from Cornell College (1962).

    3. I am ultimately responsible for all aspects of program management
    within the ANIR Division of CISE, and submit this declaration in
    opposition to plaintiff's motion for partial summary judgment and in
    support of NSF's cross-motion for summary judgment. The statements
    herein are based both on personal knowledge and on information
    available to NSF.

    BACKGROUND ABOUT THE INTERNET

    4. Today's "Internet" is an overarching network of computer networks
    and individual computers that are interconnected by communications
    facilities, such as telephone lines.

    5. The antecedents of the Internet were systems for two relatively
    small groups of research-oriented governmental, academic and corporate
    entities - ARPANET and NSFNET. The earlier group, ARPANET, was engaged
    in military research that received principal support from the
    Department of Defense and related agencies. The second group, NSFNET,
    consisted of many of the same entities that were included in the
    ARPANET, along with other entities engaged in general scientific
    research that received support from numerous sources, including NSF
    and other federal agencies, academic institutions and corporate
    sponsors.

    6. The "ARPANET phase" of internet networking operated between
    approximately 1969 to 1985. During this phase, the networking system
    so~ware was called Network Control Protocol. The "NSFNET phase"
    operated between approximately 1985 and 1995.


    During this phase, a new generation of network system so~ware was
    utilized that was called Transmission Control ProtocoUIntemet Protocol
    ("TCP/IP"). And since approximately l99S, this networking has been
    commonly referred to as "the Internet" a~er the Internet Protocol part
    ofthe TCP/IP network so~ware. Throughout this declaration, I sometimes
    refer to the ARPANET/NSFNET phases as the "early Internet."

    7. Initially, most of the individual users of the early Internet were
    affiliated either with federal government agencies or with academic or
    corporate institutions carrying out research that was sponsored by one
    or more of those agencies. Each of these research institutions had
    timeshared mainframe computers and in the early stages of this
    evolution users would typically have access only through computer
    terminals connected to one of the mainframes.

    8. As the number of institutions and sites connected to NSFNET grew,
    and other federal research agencies interconnected their networks with
    the NSFNET, organizations wishing to communicate with this
    fast-growing "research Internet" and utilize its resources began to
    interconnect their facilities and networks with it (and with the
    pre-existing ARPANET).

    9. Today's diversified Internet community thus grew from a relatively
    small-scale research-oriented environment. Historically, cultural
    ethics during the ARPANET/NSFNET days were based on cooperation and
    collegiality. This environment allowed participants in the community
    to handle many tasks informally, with many important responsibilities
    delegated to individual persons.

    10. The environment further allowed for voluntary participation in
    internet networking. Networks or institutions that elected to
    participate were required to operate in accordance with the
    consensus-based standards in order to communicate with similar
    entities through internet networking. The fundamental requirement for
    such interconnection was agreement to use TCP/IP to exchange traffic
    with other connected networks.

    11. The "best practices" for interconnection and new protocols for the
    growing number of networks were determined by a group of people from
    various organizations involved in the early Internet. The activities of
    the group were supported by NSF and other federal agencies. In 1986,
    this group became known as the Internet Engineering Task Force
    ("IETF").

    12. The IETF is a loosely self-organized group of people who make
    technical and other contributions to the engineering and technologies
    of the Internet. According to its website (annexed hereto as Exhibit
    A), the IETF began (by that name) in 1986 with only 15 attendees at
    its first meeting, utilizing a collegial model in which individuals
    would put forward a suggestion and others would add to or amend it
    until they had "rough consensus and running code." The Internet
    community's consensus would ultimately be published on the
    Internet -- designated as a Request For Comments ("RFC") -- by the
    Internet Architecture Board (originally known as the Internet
    Activities Board), an affiliate organization of the IETF.

    13. In order for the Internet to operate, each entity (computer,
    router, network, etc.)
    connected to the Internet must have one or more unique numeric
    "addresses" which will permit other connected entities to send it
    communications. Thus, every entity connected to the Internet has at
    least one numeric address, called an "IP number," under an addressing
    system that was implemented in 1983 on the ARPANET. IP numbers are
    four strings of numbers set apart by periods, with the IP number
    totaling no more than twelve digits.
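
    [Illustrative aside, not part of the declaration: a minimal Python
    sketch of the dotted-quad IP number format described in paragraph 13.
    The function name and the malformed second example are hypothetical.]

        def is_valid_ipv4(address):
            # Dotted-quad form: four numeric fields separated by periods,
            # each field between 0 and 255.
            fields = address.split(".")
            if len(fields) != 4:
                return False
            for field in fields:
                if not field.isdigit() or len(field) > 3:
                    return False
                if not 0 <= int(field) <= 255:
                    return False
            return True

        print(is_valid_ipv4("204.146.46.8"))  # True: the example IP number from paragraph 17
        print(is_valid_ipv4("204.146.46"))    # False: only three fields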

    14. During the early Internet, these addresses were maintained and
    assigned by one individual in order to insure uniqueness and
    reliability. That individual was Dr. Jon Postel at the University of
    Southern California's Information Sciences Institute, who performed
    this work pursuant to the ARPANET experiment. Dr. Postel's project
    subsequently became identified as the Internet Assigned Numbers
    Authority ("IANA"). The IANA is still responsible for overseeing the
    allocation of IP numbers, a task it undertakes under contract with the
    Department of Defense.

    15. Initially, network users informally assigned names to their own
    computers and these names were tracked and associated with their
    corresponding IP numbers in a file maintained centrally and downloaded
    to the host computers at all Internet sites. It was widely felt that
    these names would be easier for people to remember than the
    twelve-digit IP numeric address.

    THE DOMAIN NAME SYSTEM

    16. In 1987, RFC 1034 (copy annexed hereto as Exhibit B) announced a
    new hierarchical Domain Name System ("DNS") for associating names with
    IP numbers on the Internet. Under the DNS, top level domain names,
    consisting of two or three characters, identify the highest
    subdivisions in which an address can be located. Second level and
    lower level domain names identify network "host" computers and
    individual sites.

    17. The DNS utilizes a system of databases that convert Internet
    domain names (e.g., ibm.com or nsf.gov) into IP numbers (e.g.,
    204.146.46.8). This conversion function makes it possible for Internet
    users to address messages to other users and to Internet-attached
    computers by name (e.g., ibm.com or gstrawn@nsf.gov) rather than
    number. The DNS database is "distributed" (i.e., different segments of
    the database are maintained on computers at various locations), so
    that an IP number query may be routed consecutively to databases
    located on different computers.

    18. The groups of alphanumeric characters ("strings") that make up
    domain names are separated by periods. The far right string is called
    the top level domain ("TLD"); the next is called the second level
    domain ("SLD"); and so on. In the above
    examples, ".com" and ".gov" are TLDs; "ibm" and "nsf" are SLDs. Under
    RFC 1034, no individual string can be longer than 63 characters, and
    the total length of a domain name may not exceed 255 characters. Under
    the DNS architecture, the only limit to the number of lower level
    domains, or strings within a domain name, is that the total length of
    the domain name must be within the maximum prescribed in RFC 1034.
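
    [Illustrative aside, not part of the declaration: a short Python
    sketch of the two RFC 1034 limits described in paragraph 18 -- labels
    of at most 63 characters and a total name length of at most 255
    characters. The function name and test names are hypothetical.]

        def check_domain_name(name):
            # Whole name at most 255 characters; each string ("label")
            # between the periods at most 63 characters.
            if len(name) > 255:
                return False
            labels = name.split(".")
            return all(0 < len(label) <= 63 for label in labels)

        print(check_domain_name("ibm.com"))          # True
        print(check_domain_name("a" * 64 + ".com"))  # False: first label exceeds 63 characters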

    19. As mentioned above, the DNS utilizes a "distributed" database:
    different parts of the DNS database are stored on different
    Internet-connected computers called domain name servers. Each of these
    databases serves to assist in locating the intended recipient of an
    Internet communication. Address inquiries seeking to convert a
    particular domain name are processed hierarchically, that is, the
    address query will begin by locating the name server for the TLD, then
    the name server for the SLD, and so on.

    20. At the highest level, there is a part of the DNS database called
    "the root zone file" or "the dot" whose function is to "point the way
    to" (direct the address query to) other parts of the DNS database
    called the TLD zone files. The TLD zone files contain information
    regarding the location of the seven "generic" (non-country) TLDs
    (commonly referred to as "gTLDs") -- ".com", ".org", ".net", ".gov",
    ".int", ".mil" and ".edu" -- as well as approximately 240 country code
    TLDs such as ".US" or ".UK". The country code TLDs (commonly referred
    to as "ccTLDs"), are taken from the International Standards
    Organization official list ("ISO 3166"), a copy of which is annexed
    hereto as Exhibit C.

    21. The TLD zone files in turn direct queries to the SLD zone files -
    those parts of the distributed DNS database that contain entries for
    all of the SLDs under the given TLD. For example, the ".com" zone file
    contains entries for ibm.com and att.com, etc. Each of these SLD zone
    files in turn directs queries to lower-level portions of the DNS
    database.

    22. By following these "pointers," an Internet name-resolution query
    will eventually come to a part of the DNS database that contains an IP
    number for the intended recipient rather than a pointer to yet another
    name server. This IP number is returned to the requesting computer.
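
    [Illustrative aside, not part of the declaration: a toy Python model
    of the pointer-following described in paragraphs 19 through 22. The
    zone contents are hypothetical except for the 204.146.46.8 example
    number taken from paragraph 17; real DNS software is far more
    elaborate.]

        # Each toy "zone" either points to a more specific zone or holds
        # the final IP number, mirroring root -> TLD -> SLD delegation.
        TOY_DNS = {
            "root": {"com": "com-zone", "gov": "gov-zone"},
            "com-zone": {"ibm.com": "204.146.46.8"},
            "gov-zone": {"nsf.gov": "192.0.2.7"},  # hypothetical number
        }

        def resolve(name):
            # Start at the root ("the dot"), follow its pointer to the TLD
            # zone file, then look up the second level domain there.
            labels = name.split(".")
            tld_zone = TOY_DNS["root"].get(labels[-1])
            if tld_zone is None:
                return None
            return TOY_DNS[tld_zone].get(".".join(labels[-2:]))

        print(resolve("ibm.com"))   # 204.146.46.8
        print(resolve("ibm.shop"))  # None: ".shop" is not in the toy root zone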

    23. It is, of course, necessary to maintain and update the DNS
    database continuously. The new domain name information is obtained and
    disseminated through a process called DNS registration. An Internet
    user who wishes to register a domain name first obtains (from an
    Internet Service Provider or from an IP number registry) an IP number
    to be associated with a desired domain name. Under the existing
    registration system, if the desired domain name has not already been
    registered in the TLD of the user's choice, it can - subject to
    trademark considerations not discussed here - be registered on a first
    come, first served basis.
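
    [Illustrative aside, not part of the declaration: a minimal Python
    sketch of the first come, first served rule described in paragraph
    23. The registry contents and the function name are hypothetical.]

        # One shared registry per TLD: a name can be registered only if
        # nobody has registered it first.
        REGISTRY = {"com": {"ibm.com", "att.com"}, "org": set()}

        def register(name):
            tld = name.rsplit(".", 1)[-1]
            zone = REGISTRY.get(tld)
            if zone is None or name in zone:
                return False  # unknown TLD, or the name is already taken
            zone.add(name)
            return True

        print(register("example.com"))  # True: first come, first served
        print(register("example.com"))  # False: already registered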

    24. Most individual users of the Internet connect their personal
    computers to an Internet Service Provider ("ISP") that has one or more
    computers that are continuously linked to the Internet. Such users
    will utilize the ISP's domain name as part of their Internet address,
    and thus will not need a domain name of their own.

    25. Because of the large number of requests for DNS name resolution
    (conversion of domain names into IP numbers) that are occurring
    continuously, the "root" and "generic TLD" zone files are replicated
    at a number of different locations. This replication permits
    concurrent processing of a greater number of name queries and thus
    speeds up the operation of this part of the Internet.

    26. This system of 13 identical root zone files is called the "root
    server system." The master (called the "A") root server is maintained
    by NSI in Herndon, Virginia, pursuant to the Cooperative Agreement and
    RFCs 1174 and 1591 (annexed hereto as Exhibits F, G and H
    respectively). The other 12 root servers obtain the daily updated
    domain name information by copying from the "A" root zone server.1

    THE COOPERATIVE AGREEMENT AND NSF's INVOLVEMENT IN DNS

    27. During the early Internet, the IANA had responsibility for
    registration of first- and second-level domain names. As such, the
    responsibility for assigning IP numbers and registering domain names
    was centralized with the IANA. The Defense Information Systems Agency
    Network Information Center, a military contractor-operated facility,
    actually performed the number assignment registrations.

    28. By the late 1980s, however, a significant number of new
    registrants were research and educational institutions (primarily in
    the .edu TLD), which were likely to be supported by NSF and other
    civilian research agencies. Accordingly, NSF assumed support of
    registration services for the non-military Internet.

    29. Between 1987 and 1991, domain name and number registration were
    the responsibility of the IANA under a Department of Defense contract.
    The registry function was performed up to 1990 by SRI (formerly known
    as the "Stanford Research Institute"), and from 1991 to 1992 by
    Government Systems Incorporated ("GSI"). In March 1991, defendant
    Network Solutions, Inc. ("NSI") began to perform the registry
    functions as a subcontractor to GSI in support of the Defense Data
    Network and Internet under contract with the Defense Information
    Systems Agency.

    30. In March 1992, NSF released Program Solicitation 92-24
    (the"Solicitation") imiting competitive proposals for "Network
    Information Services Managers (NIS Managers) for NSF~ET and the NREN."
    (A true copy of the Solicitation is annexed hereto as Exhibit

    1 In ¶ 13 of Plaintiff's Statement of Material Facts, the master root
    server is referred to as the "czar" of the other root servers.
    Although the master root server does feed information to the other
    root servers, equating the master root server to a "czar" implies a
    "czar-like" authority. To the contrary, the other root server
    operators have no contractual or other legal relationship with the
    master root server. They have a purely voluntary association with it
    because of their common interests in a universally resolvable DNS.

    D). The domestic, non-military portion of the Internet was defined to
    include NSFNET, as well as other federally sponsored networks,
    collectively referred to as the National Research and Education
    Network ("NREN"). Pursuant to the Solicitation, the NIS Manager
    responsible for non-military registration services would provide
    registration services for non-military domain names.

    31. The Solicitation sought three types of "Information Services":
    registration services for the non-military Internet; a central
    directory and database service (also serving the broad Internet
    community); and an information service (help desk etc.) to support new
    institutions coming on to the Internet (usually with NSF support).

    32. The best proposal in each of the three areas was submitted by a
    different firm. NSI submitted the best proposal in the Registration
    Services area (annexed in relevant part hereto as
    Exhibit E), AT&T submitted the best proposal in the Directory and
    Database area, and General Atomics submitted the best proposal in the
    Information Services area. During the course of negotiations and as a
    part of their best and final offers, the three firms were asked to
    develop a service concept that would allow "one stop shopping" and a
    seamless interface for the academic research community (NSF's primary
    constituency). NSF wanted to simplify matters for the users so that
    they would perceive only one service entity, rather than the three
    separate awardees. Thus, the concept the three firms developed
    involved operating under a single name with a uniform interface. The
    name given to the joint activity was the "Internet Network Information
    Center (InterNIC)." NSI, being responsible for domain name
    registrations, is the domain name registrant of Internic.net.

    33. The NSI Proposal, No. 92-93, described the registration services
    to be provided as follows: "Network Solutions will provide
    registration services to include the ROOT domain, top-level country
    code domains, and second level domains under .us, .edu, .com,
    .gov, .org and .net. In addition, we will register inverse addresses
    [matching IP numbers to domain names] ...."

    34. Effective January 1, 1993 NSI and NSF entered into Cooperative
    Agreement No. NCR-9218742 (the "Cooperative Agreement" or
    "Agreement"). A true copy of the Agreement is annexed hereto as
    Exhibit F. The Cooperative Agreement remains in effect through
    September 30, 1998.

    35. As a general matter, NSF uses either grants or cooperative
    agreements in making federal financial assistance awards, depending on
    the appropriate circumstances as defined in the Federal Grant and
    Cooperative Agreement Act. 31 U.S.C. §§ 6301-08. In this case, NSF
    determined a cooperative agreement to be the instrument of choice
    because, unlike a grant, the Foundation contemplated that this
    situation would require "substantial involvement" by the agency.

    36. The Cooperative Agreement named NSI as the NIS Manager, though NSI
    had not, at the point it began to implement the Agreement, been named
    in an RFC as the Internet Registry. However, the task of registering
    second level domain names within five of the generic TLDs (or "gTLDs")
    (".com", ".org", ".net", ".edu" and ".gov") was transferred to NSI
    from the GSI subcontract. Thus, NSI continued registration
    services, but under the Cooperative Agreement. IANA continued its
    function of overseeing the allocation of IP numbers and domain name
    registrations.

    37. The Cooperative Agreement requires that NSI conduct its
    registration services in accordance with RFC 1174. RFC 1174 (copy
    annexed hereto as Exhibit G), issued in August 1990, recognizes that
    the "Internet Registry" is the principal registry for all network and
    autonomous system numbers, and maintains the list of root DNS servers
    and a database of registered nets and autonomous systems. (See RFC
    1174, Exhibit G, Art. 1.2 & 1.3).

    38. In March 1994, RFC 1591 (copy annexed hereto as Exhibit H) - the
    successor to RFC 1174 - was issued. RFC 1591, like RFC 1174, concerned
    the functioning of the Internet Registry. However, while the earlier
    RFC referred to the DNS only in passing (concentrating instead on the
    allocation of IP numbers), the later RFC addressed in detail the
    structure and operation of the DNS. RFC 1591 officially named NSI as
    the Internet Registry ("IR"). That RFC also contemplated that it was
    "extremely unlikely" that any new gTLDs would be created, and in any
    event set forth the standard that "applications for new top-level
    domains (for example country code domains) [were to be] handled by the
    IR [NSI] with consultation with the IANA." See Exhibit H, ¶¶ 2 and 3
    (emphasis added). NSF understood that the IANA would authorize
    substantive changes to the DNS only where those changes had consensus
    support within the Internet community.

    39. The Cooperative Agreement, read in conjunction with applicable RFC
    1591, provides that NSI will serve as the Internet Registry. That is
    an administrative support role, which consists of maintaining
    accurate, up-to-date lists of the categories of registrants. The list
    must be available to all Internet users on a central, authoritative
    basis, i.e., there must be only one source of this information that
    can be consulted.

    40. The level of registration services provided by NSI has grown
    exponentially since the start of the Cooperative Agreement. In 1993,
    the DNS database for the five gTLDs registered by NSI contained only
    several thousand total entries. In 1998, thousands of names are being
    registered per day, the great majority under the ".com" TLD. (The one
    millionth name was registered by NSI in 1997, the two millionth name
    was registered in 1998, and it is possible that the three millionth
    name will also be registered in 1998). At present, almost 2.4 million
    names are registered under ".com", while less than 400,000 are
    registered under ".org", ".net", ".edu" and ".gov" combined.

    UNITED STATES' ROLE IN ADMINISTRATION OF THE ROOT SERVERS

    41. Of the thirteen root servers worldwide, ten are located in the
    United States. Two of these (root servers "A" & "J") are maintained by
    NSI in Herndon, Virginia pursuant to the Cooperative Agreement.

    42. Three root servers are either owned or directly funded by the
    United States Government. One ("E") is at the National Aeronautics and
    Space Administration Ames Research Center in Moffett Field, California.
    One ("G") is at the Department of Defense Network Information Center,
    which is located at the Boeing Corporation facility in Tysons Corner,
    Vienna, Virginia and is funded by the Defense Information Systems
    Agency to register ".mil" names. Another ("H") is at the United States
    Army's Aberdeen Proving Ground in Maryland.

    43. Three more root servers are at universities -- "B" and "L" at the
    Information Sciences Institute at the University of Southern
    California and "D" at the University of Maryland's Computer Science
    Center -- that receive significant federal funding.

    44. The remaining two U.S.-based root servers ("C" at Performance
    Systems International, Inc., in Herndon, Virginia, and "F" at the
    Internet Software Corporation in Palo Alto, California) are at private
    firms that, so far as I am aware, are not Government contractors.

    NSF's DIRECTIVE TO NSI REGARDING PGMEDIA's REQUEST FOR NEW gTLDs

    45. Under the Cooperative Agreement and RFC 1591, NSI had no
    unilateral authority to register new gTLDs. NSI instead was required
    to consult with the IANA regarding any applications for new TLDs.
    PGMedia's request for the addition of hundreds of new gTLDs was
    initially forwarded by NSI to the IANA. Subsequently, I was informed
    that by letter dated April 4, 1997, the IANA disavowed any authority
    to make a decision in response to the request. NSI then referred the
    question to NSF, which viewed IANA's disavowal as inconsistent with
    RFC 1591's requirement that applications for new TLDs be disposed
    of"with consultation with the IANA."

    46. In the meantime, an interagency working group had been studying
    domain name problems since March of 1997. On July 1, 1997, President
    Clinton directed the establishment of an interagency task force to
    develop recommendations for privatizing, increasing competition in,
    and promoting international participation in the DNS. See 63 FR 8826. NSF
    raised the question of the PGMedia request in discussions with the
    interagency DNS working group.

    47. NSF then directed NSI not to add any new gTLDs, for the following
    reasons. First, the process by which the Government sought to transfer
    administration of the DNS to the private sector, and to address
    various issues regarding domain name administration, had just
    commenced. NSF believed that granting the PGMedia request to add new
    gTLDs could render that process moot, because as a practical matter it
    would be very difficult to undo the addition of the new gTLDs once new
    names were registered under them. The difficulty here would not be
    technical inability to delete names from the database, but the problem
    of dealing with businesses or members of the public who would have
    registered in the new gTLDs with the expectation that they were
    obtaining a durable Internet address, and the confusion
    engendered for users if those gTLDs were to be removed from service.
    Granting the PGMedia request could thus preempt further decisions by
    the Government on the issue whether to add any gTLDs and if so, the
    principles that should govern such decisions.

    48. Second, NSF also believed that the granting of PGMedia's request
    for hundreds of additional gTLDs would set a precedent that might
    introduce risks of instability into the system, both in terms of
    potential confusion to users and overload of the queries to the top
    level of the transmission and routing infrastructure. As an initial
    matter, NSF believed that were PGMedia's request granted, any other
    like request would have to be granted as well, thus posing the
    prospect of vast numbers of new gTLDs.

    49. The hierarchical, distributed DNS had replaced a "flat" naming
    system by establishing multiple levels and a limited number of TLDs,
    because, in part, the hierarchical system provides some limits on the
    amount of inquiries that the servers at the top level would be
    required to handle. In the Internet system, unlike the telephone
    system, both routing/switching and content are handled on the same
    &cilities. The inquiries regarding where messages are to be sent
    presently constitute a significant amount of the Intemet traffic. A
    hierarchical system allows messages to be sent at the lowest level,
    with the least amount of queries to the top level, depending on the
    configuration below the Top Level Domain and the messaging pattems. By
    contrast, the original ARPANET naming system was "flat," i.e. all
    names were retained in one file that was downloaded to all host
    computers. The present hierarchical system was implemented due to the
    strain imposed on the Internet by having the prior, flat system handle
    the vastly increased (and continually increasing) Internet traffic.

    50. Thus, the expansion of the number of TLDs is constrained by
    operational concerns having to do with Internet performance beyond the
    technical question of expansion of the root zone file. The DNS, as an
    hierarchical naming system, permits hierarchically
    focused name searches in order to minimize the query time for the IP
    number involved in each Internet message transaction. In the worst
    case, with a completely flat name space at the top level and no lower
    level structure, such focused name searching becomes impossible. This
    would be equivalent to having a single telephone book for the entire
    world. While such a book is "technically" feasible, it offers little
    by way of ease of use or efficiency. To use a different analogy, that
    of traffic at a busy intersection, there is no debate that a relatively
    small number of cars can be added without significantly impeding the
    flow of traffic. Equally, there should be no debate that the
    simultaneous addition of "unlimited" cars - like unlimited TLDs - poses
    a substantial risk of gridlock.
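
    [Illustrative aside, not part of the declaration: a toy Python
    simulation of the load argument in paragraphs 49 and 50. It assumes a
    resolver that remembers where each TLD's zone lives, so only the
    first lookup per TLD reaches the top level, whereas a flat name space
    sends every lookup there. The name list is hypothetical.]

        def count_top_level_queries(names, hierarchical):
            # Flat model: every lookup consults the single top-level table.
            # Hierarchical model: once the resolver has learned a TLD's
            # zone location, later names under that TLD skip the root.
            known_tlds = set()
            root_queries = 0
            for name in names:
                tld = name.rsplit(".", 1)[-1]
                if not hierarchical:
                    root_queries += 1
                elif tld not in known_tlds:
                    root_queries += 1
                    known_tlds.add(tld)
            return root_queries

        lookups = ["ibm.com", "att.com", "nsf.gov", "isi.edu", "umd.edu", "ibm.com"]
        print(count_top_level_queries(lookups, hierarchical=False))  # 6: every lookup hits the top level
        print(count_top_level_queries(lookups, hierarchical=True))   # 3: one query per TLD (.com, .gov, .edu)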

    EXPLANATION OF COUNTRY CODE TLDS

    51. Generic TLDs were the only TLDs in use in the original Internet.
    At this time the Internet was contained entirely within the U.S.

    52. As foreign institutions were authorized and began to use the
    Internet, they initially used some of these gTLDs. However,
    subsequently, as country code TLDs were implemented, they became the
    usual TLDs of choice for entities residing within the respective
    countries that they denoted.

    53. The fact that during the early Internet a small number of non-U.S.
    organizations registered under the gTLDs causes some confusion.
    However, these foreign organizations all registered before the country
    code TLDs became the norm. Accordingly, gTLDs (except for ".mil" and
    ".gov") were identified in RFC 1591 as being both "generic" and
    "worldwide."

    54. It is important to recognize that, in 1994, within the Internet
    community (still predominantly a U.S.-based cooperative and coherent
    technically-oriented group) the "generic" TLDs were largely considered
    a U.S. anachronism that the Internet would soon outgrow. When RFC 1591
    established the procedures for the addition of TLDs, it was written
    with specific reference to ISO 3166 country codes, because the
    assumption within the Internet community was that (i) all future TLDs
    would be based on ISO 3166 country codes, and (ii) that the U.S. would
    eventually "rationalize" its naming conventions to conform to that
    used by the rest of the world.

    55. NSI's responsibilities under the Cooperative Agreement did not
    initially include registration of gTLDs. See NSI proposal, § I.2.2 at
    I-3, Exhibit E (noting that NSI will "provide registration services to
    include the ROOT domain, top-level country domains, and second-level
    domains under .us, .edu, .com, .gov, .org, and .net"). The role of NSI
    in possible registration of new TLDs was addressed in RFC 1591. Under
    that RFC, NSI looked to IANA for (and IANA provided) guidance
    regarding the addition of country code TLDs. PGMedia's request,
    however, was different in that it requested the addition of hundreds
    of new gTLDs. Accordingly, when the NSF issued its 1997 directive to NSI
    not to add any new TLDs, the directive was mutually understood to be
    limited to the addition of gTLDs, not country code TLDs, given that
    IANA continued to participate in the process of adding new country
    code TLDs.

    PGMEDIA's TLDs DO NOT MAKE "SEARCHING THE INTERNET" EASIER

    56. In paragraph 10 of his declaration, Paul Garrin claims that
    PGMedia's proposal for greatly expanded TLDs "would rationalize the
    organization of the vast quantity of information available on the
    Internet." In particular, he asserts by way of example that
    individuals interested in purchasing a camera could "immediately go to
    the '.cameras' directory rather than [sic] searching through the
    nearly two million entries in '.com.'" To the extent plaintiff claims
    that such a search would be more efficient or effective than current
    searches, this claim is factually incorrect for a number of reasons.

    57. Searches, in which the user is looking for information but does
    not know the domain name or numeric address of the location(s) at
    which the information may be found, are different from IP number
    queries, discussed above, in which the user sending a message knows
    the domain name of the addressee. Searches on the Internet are
    primarily conducted through use of software known as "search engines."
    Many Internet registrants that want to provide information or
    transactional opportunities to users participate in a communications
    overlay on the Internet called the World Wide Web, or "the Web." Search
    engines do not conduct their searches by domain name. Rather, they
    search what is called "metadata," which are expressions contained
    within individual websites that list the categories of information
    located at that site. Metadata -- which are viewable only by the search
    engine, not by an individual visiting the website -- permit search
    engines to evaluate the content of each site and determine which sites
    are most likely to contain the information sought by the user.

    58. To the extent plaintiff implies that searching two million entries
    under ".com" is significantly more time consuming than a search under
    a more limited TLD such as ".camera", that implication is incorrect.
    Searching two million - or even twenty million - records is a trivial
    number given the speed with which search engines view the metadata
    contained within individual websites. For example, a search using
    "Yahoo!" (a common search engine) for the terms "camera" and
    "purchase" - the example cited by plaintiff - would take on average
    about one to three seconds to search through all websites contained
    under all the existing TLDs (both generic and country code).2
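
    [Illustrative aside, not part of the declaration: a toy Python sketch
    of the point in paragraphs 57 and 58 that keyword searches examine
    per-site metadata rather than TLDs. The site names and metadata are
    hypothetical.]

        # The search looks only at each site's metadata, never at the TLD
        # under which the site happens to be registered.
        SITES = {
            "acme-cameras.com":  {"camera", "purchase", "photography"},
            "acme-cameras.shop": {"camera", "purchase", "photography"},
            "nsf.gov":           {"science", "grants"},
        }

        def search(keywords):
            # Return every site whose metadata contains all the keywords.
            wanted = set(keywords)
            return [site for site, metadata in SITES.items() if wanted <= metadata]

        print(search(["camera", "purchase"]))  # both camera sites, regardless of TLD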

    59. Even putting aside the fact that plaintiff's proposed system would
    not appreciably alter the time involved in searching on the Internet,
    and assuming that searches were

    2 The example cited assumes the searcher is utilizing common
    up-to-date hardware such as a Pentium Processor and a 56,000,000 baud
    high speed modem. Less up-to-date hardware could result in slower
    search times. However, under any scheme of searching, search times
    would fluctuate depending on the hardware utilized by the searcher.

    conducted within individual TLDs (as opposed to across all TLDs), the
    system would in fact make searching more difficult by increasing the
    number of different directories that must be searched. To again take
    plaintiff's example, an individual searching the current Internet for
    a camera to purchase can enter certain key words (such as "camera" and
    "purchase," or "buy," or "sell") in a search engine that then examines
    all websites within all TLDs. Under plaintiff's proposed system, a
    user would have to choose which of a number of potentially relevant
    TLDs to search. While a camera store might be registered under
    ".camera", it might also - using just the list of proposed TLDs
    attached as Exhibit C to plaintiff's Second Amended Complaint - be
    registered under ".art", ".artists", ".arts", ".cam", ".corp",
    ".electric", "electronique", ".enterprises", ".entertainment",
    ".factory", ".film", ".firm", ".general", "graphics", ".image",
    ".inc", ".Itd", ".mall", ".market", ".movie", ".multimedia", ".photo",
    ".pictures", "products", ".sale", ".shop", or ".video". For that
    matter, the store probably would also continue to be registered under
    ".com". Thus, a user seeking to conduct a thorough search by
    individual TLDs would have to undertake nearly thirty separate
    searches. By the same token, a camera seller might feel obliged to
    register under each of those many TLDs in order to maximize the
    potential for being located by prospective users.
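
    [Illustrative aside, not part of the declaration: a short Python
    check that the "nearly thirty separate searches" figure in paragraph
    59 follows from the TLDs enumerated there.]

        # The alternative TLDs listed in paragraph 59 (drawn from the list
        # attached to plaintiff's Second Amended Complaint).
        alternative_tlds = [
            ".art", ".artists", ".arts", ".cam", ".corp", ".electric",
            ".electronique", ".enterprises", ".entertainment", ".factory",
            ".film", ".firm", ".general", ".graphics", ".image", ".inc",
            ".ltd", ".mall", ".market", ".movie", ".multimedia", ".photo",
            ".pictures", ".products", ".sale", ".shop", ".video",
        ]
        searches_needed = len(alternative_tlds) + 2  # plus ".camera" and ".com" themselves
        print(searches_needed)                       # 29, i.e. "nearly thirty separate searches"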

    60. In this way, plaintiff's system fundamentally differs from a
    closed system such as Westlaw. In Westlaw, one can, for example, search
    for all Second Circuit cases by limiting the search to the "CTA2"
    directory, with no concern that the search might not include all
    potentially relevant cases. This system works because the contents of
    the directories are controlled by a central authority (Westlaw), such
    that all Second Circuit cases, and no cases from outside the Second
    Circuit, are contained within the directory. By contrast, plaintiff's
    proposed system contains no such limitations. Accordingly, one cannot
    search within a particular TLD (such as ".camera") with any assurance
    at all that the websites listed thereunder constitute all, or even a
    significant part, of the potentially relevant sites.

    61. To overcome this problem, plaintiff's system would have to be
    altered in one of two fundamental ways. First, searches could be
    conducted not under the individual TLDs but under all TLDs (to
    analogize again to Westlaw, searches would be conducted in a directory
    including all federal cases rather than a more specific directory like
    "CTA2"). Such a global search, of course, would be no different from
    what occurs today on the Internet. Second, there could be a central
    authority that imposes some standard conventions governing which TLDs
    can be utilized by what types of entities (for example, requiring all
    camera sellers to register under ".camera"). Even assuming that such a
    central authority existed, that the authority could possibly create a
    classification system competent to organize the entire Internet, and
    that the Internet community would accept such an authority, that system
    would be inconsistent with plaintiff's proclaimed desire to open up
    the DNS to unlimited TLDs.

    62. Another problem with plaintiff's proposed system is that it
    lessens the ability of an Internet user to guess the domain name of a
    known entity. To use plaintiff's example again, suppose that a user
    were seeking information about a particular camera store. In the
    present system, the user knows that one likely domain name would be
    the store name followed by ".com". Under plaintiff's system of
    unlimited TLDs, a user seeking to guess a domain name would have to
    make a large number of guesses in order to cover all the potentially
    relevant TLDs. Similarly, there are many entities that engage in a
    large range of commercial activity. Thus, to find information in the
    current system on, for example, Sears, one would guess at "sears.com";
    under plaintiff's system, the user would have to guess at "sears"
    followed by one of a range of TLDs that possibly correspond to the
    businesses engaged in by the store (e.g., ".jewelry", ".shoes",
    ".camera", etc.).

    I declare under penalty of perjury that the foregoing is true and correct.

    George Strawn

    Dated: July 2, 1998

