
    This discussion has been archived. No new comments can be posted.
    Secret, Closed WHOIS Meeting Excludes Privacy Advocates | 70 comments
    The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
    The Immaculate Conception of TLDs That Linger On?
    by Anonymous on Thursday May 26 2005, @08:48PM (#15368)
    What happens when 20 million $50 Wi-Fi routers
    start to trigger their hidden TLD creation code?

    A user just happens to ask for a web site in
    a TLD that is not widely used. The always-on
    routers start to collaborate and a Mesh-Registry
    is seeded. The user is informed that the site does
    not exist, but that the name is available for FREE or
    a small fee. [It must be FREE; where would the
    money be sent?]

    Poof, out of nowhere a TLD starts to emerge.
    The ICANN thought police could spend a long time
    trying to find "the Registry". The TLD could
    linger on for a long time, just like bit-torrent
    files. The original seed can be removed but the
    files live on.

    Try to convince 20 million people to replace
    their Wi-Fi routers or turn them off. They do
    not use any root servers, so it does not matter if
    the TLD is removed from servers that are not used.

    Stay tuned...
    Re: The Immaculate Conception of TLDs That Linger On
    by Anonymous on Thursday May 26 2005, @09:04PM (#15369)

    Open DHT is a publicly accessible distributed hash table (DHT) service. In contrast to the usual DHT model, clients of Open DHT do not need to run a DHT node in order to use the service. Instead, they can issue put and get operations to any DHT node, which processes the operations on their behalf. No credentials or accounts are required to use the service, and the available storage is fairly shared across all active clients.
    Re: The Immaculate Conception of TLDs That Linger On
    by Anonymous on Thursday May 26 2005, @09:13PM (#15370)

    A distributed hash table, or DHT, is a building block for peer-to-peer applications. At the most basic level, it allows a group of distributed hosts to collectively manage a mapping from keys to data values, without any fixed hierarchy, and with very little human assistance. This building block can then be used to ease the implementation of a diverse variety of peer-to-peer applications such as file sharing services, DNS replacements, web caches, etc.
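The key-to-value mapping described above can be sketched with consistent hashing: nodes and keys share one 160-bit identifier space, and each key is stored on the node whose ID follows it on the ring. This is a toy illustration, not any real DHT's code; the node names and `ToyDHT` class are invented for the example.

```python
import hashlib
from bisect import bisect_right

def node_id(name: str) -> int:
    """SHA-1 gives the 160-bit identifiers DHTs of this era typically used."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

class ToyDHT:
    def __init__(self, node_names):
        # Sorted ring of node IDs; each key is stored on its successor.
        self.ring = sorted(node_id(n) for n in node_names)
        self.store = {nid: {} for nid in self.ring}

    def _owner(self, key: bytes) -> int:
        k = int.from_bytes(hashlib.sha1(key).digest(), "big")
        i = bisect_right(self.ring, k) % len(self.ring)  # wrap around the ring
        return self.ring[i]

    def put(self, key: bytes, value: bytes):
        # Multiple values under one key are all kept, as in OpenDHT.
        self.store[self._owner(key)].setdefault(key, []).append(value)

    def get(self, key: bytes):
        return self.store[self._owner(key)].get(key, [])

dht = ToyDHT(["node-a", "node-b", "node-c"])
dht.put(b"example.tld", b"1.2.3.4")
print(dht.get(b"example.tld"))  # [b'1.2.3.4']
```

Because key placement depends only on hashing, any node can route a put or get to the owner without a central registry, which is what makes the "Mesh-Registry" idea above plausible.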
    Re: The Immaculate Conception of TLDs That Linger On
    by Anonymous on Thursday May 26 2005, @09:20PM (#15371)
    The Router stage runs the Bamboo router. It requires one or more gateways; if the only gateway is the node itself, it will start a DHT all by itself. Otherwise, it will try to contact each of the other gateways listed, one at a time, and join their DHT(s). As soon as it succeeds with one, it stops trying to contact the others.
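The gateway logic above can be sketched as follows. The real Bamboo router is Java; this is an illustrative Python sketch, and `start_router` and `try_join` are invented names, not Bamboo's actual API.

```python
def start_router(self_addr, gateways, try_join):
    """try_join(addr) -> bool is a hypothetical RPC that attempts a join."""
    others = [g for g in gateways if g != self_addr]
    if not others:
        # The node is its own only gateway: bootstrap a fresh DHT.
        return "started new DHT"
    for gw in others:           # one at a time, stop at the first success
        if try_join(gw):
            return f"joined via {gw}"
    return "no gateway reachable"

# Example: the second listed gateway answers.
reachable = {"gw2.example.net"}
result = start_router("me.example.net",
                      ["me.example.net", "gw1.example.net", "gw2.example.net"],
                      lambda gw: gw in reachable)
print(result)  # joined via gw2.example.net
```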
    Re: The Immaculate Conception of TLDs That Linger On
    by Anonymous on Thursday May 26 2005, @09:28PM (#15372)

    SFS is a secure, global network file system with completely decentralized control. SFS lets you access your files from anywhere and share them with anyone, anywhere. Anyone can set up an SFS server, and any user can access any server from any client. SFS lets you share files across administrative realms without involving administrators or certification authorities.
    Re: The Immaculate Conception of TLDs That Linger On
    by Anonymous on Thursday May 26 2005, @09:41PM (#15373)
    http://research.microsoft.com/~antr/Pastry/default.htm

    Pastry is a generic, scalable and efficient substrate for peer-to-peer applications. Pastry nodes form a decentralized, self-organizing and fault-tolerant overlay network within the Internet. Pastry provides efficient request routing, deterministic object location, and load balancing in an application-independent manner. Furthermore, Pastry provides mechanisms that support and facilitate application-specific object replication, caching, and fault recovery.
    "A key in OpenDHT is any 20-byte value."
    by Anonymous on Thursday May 26 2005, @10:53PM (#15374)
    A key in OpenDHT is any 20-byte value. There are no other restrictions. If one or more clients put more than one value under the same key, Open DHT stores them all. On a get, all of the values are returned. (If there are many values, only some are returned at a time. Subsequent gets can fetch the rest. See the User's Guide for details.)

    "20-byte value"? That is the same size as an IPv4 packet header without options (160 bits).

    That means one can take a packet header and use
    it as a "key" (which includes the source and
    destination addresses), and the "value" can be the
    packet contents. All 16 possible version numbers work
    because those bits are part of the "key": IPv4
    sets them to 4 and IPv6 to 6.

    If you are the receiver, you can read your
    value from a particular source address. Instead
    of packets being routed to you, you go pick
    them up when you want them.
    Re: The Immaculate Conception of TLDs That Linger On
    by Anonymous on Thursday May 26 2005, @11:01PM (#15375)
    Bamboo already has code to heal the network from partitions that works as follows. Each node keeps a set of the last 20 or so nodes that used to be its neighbors but have since become unreachable. This list is called the down_nodes set. Every minute, it randomly selects one of these nodes and sends it a join message. If the node is reachable again, and it's part of the same network or a smaller partition, then the leaf set returned in the join response will not be useful to the first node and nothing will change. On the other hand, if the node sending the join message is the one in the smaller partition, then the returned leaf set will be made up of nodes much closer to it (in the ID space) than its existing leaf set. As a result, it will use those new nodes instead, healing the partition.

    The trick to fixing the multiple gateways problem is to reuse this mechanism. If you set the config variable immediate_join to true, on startup the Router will add all of the gateways in the gateway list to its down_nodes set (except itself, in the case where it's a gateway, too) and then consider itself joined into a network of one node. It also sends its first partition-healing join message immediately. Eventually, it will contact one of the gateways and join that network, and if all of the gateways come up, the partition-healing code guarantees that they'll all eventually be in the same network. I've been using this mechanism on PlanetLab for a week or so now and it seems to work fine.
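The down_nodes retry described above can be sketched as follows. This is an illustrative model, not Bamboo's code: `Node`, `neighbour_lost`, `heal_tick`, and `try_join` are invented names, and `try_join` stands in for the join RPC, returning the leaf set from the join response (or None on failure).

```python
import random

MAX_DOWN = 20  # keep roughly the last 20 unreachable ex-neighbours

class Node:
    def __init__(self):
        self.down_nodes = []     # recently unreachable former neighbours
        self.leaf_set = set()

    def neighbour_lost(self, addr):
        self.down_nodes.append(addr)
        self.down_nodes = self.down_nodes[-MAX_DOWN:]

    def heal_tick(self, try_join):
        """Called once a minute: retry one randomly chosen down node."""
        if not self.down_nodes:
            return None
        target = random.choice(self.down_nodes)
        leaf_set = try_join(target)          # join response, or None
        if leaf_set is not None:
            # If we were in the smaller partition, the returned leaf set
            # contains closer nodes; adopting them merges the partitions.
            self.leaf_set |= leaf_set
            self.down_nodes.remove(target)
        return target

node = Node()
node.neighbour_lost("gw1.example.net")
node.heal_tick(lambda addr: {"peer-a"})  # rejoin succeeds
print(node.leaf_set)  # {'peer-a'}
```

Seeding down_nodes with the full gateway list at startup, as the immediate_join trick does, turns this same loop into a bootstrap mechanism.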
    "signed by a threshold of the directory servers"
    by Anonymous on Friday May 27 2005, @06:50AM (#15377)
    Tor uses a small group of redundant, well-known onion routers to track changes in network topology and node state, including keys and exit policies. Each such directory server acts as an HTTP server, so clients can fetch current network state and router lists, and so other ORs can upload state information. Onion routers periodically publish signed statements of their state to each directory server. The directory servers combine this information with their own views of network liveness, and generate a signed description (a directory) of the entire network state. Client software is pre-loaded with a list of the directory servers and their keys, to bootstrap each client's view of the network.
    When a directory server receives a signed statement for an OR, it checks whether the OR's identity key is recognized. Directory servers do not advertise unrecognized ORs; if they did, an adversary could take over the network by creating many servers [22]. Instead, new nodes must be approved by the directory server administrator before they are included. Mechanisms for automated node approval are an area of active research, and are discussed more in Section 9.
    Of course, a variety of attacks remain. An adversary who controls a directory server can track clients by providing them different information, perhaps by listing only nodes under its control, or by informing only certain clients about a given node. Even an external adversary can exploit differences in client knowledge: clients who use a node listed on one directory server but not the others are vulnerable.
    Thus these directory servers must be synchronized and redundant, so that they can agree on a common directory. Clients should only trust this directory if it is signed by a threshold of the directory servers.
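The client-side threshold check can be sketched as follows. This is a schematic illustration of the rule "accept only if at least t of the n pre-loaded servers signed it", not Tor's actual code; `verify` is a placeholder where a real client would check a public-key signature over the directory.

```python
import hashlib

def verify(key, digest, sig):
    # Illustrative stand-in for real signature verification.
    return sig == f"{key}:{digest}"

def trusted(directory: bytes, signatures: dict, server_keys: dict,
            threshold: int) -> bool:
    """Accept the directory only if >= threshold known servers signed it."""
    digest = hashlib.sha256(directory).hexdigest()
    valid = sum(
        1 for server, sig in signatures.items()
        if server in server_keys and verify(server_keys[server], digest, sig)
    )
    return valid >= threshold

keys = {"d1": "k1", "d2": "k2", "d3": "k3"}   # pre-loaded server keys
directory = b"network-state"
digest = hashlib.sha256(directory).hexdigest()
sigs = {"d1": f"k1:{digest}", "d2": f"k2:{digest}", "d3": "bogus"}
print(trusted(directory, sigs, keys, threshold=2))  # True
```

With a threshold of 2 of 3, one compromised or misbehaving directory server cannot push a forged directory on its own.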
    Re: The Immaculate Conception of TLDs That Linger On
    by Anonymous on Friday May 27 2005, @07:20AM (#15378)
    In wireless networks comprised of numerous mobile stations, the routing problem of finding paths from a traffic source to a traffic destination through a series of intermediate forwarding nodes is particularly challenging. When nodes move, the topology of the network can change rapidly. Such networks require a responsive routing algorithm that finds valid routes quickly as the topology changes and old routes break. Yet the limited capacity of the network channel demands efficient routing algorithms and protocols that do not drive the network into a congested state as they learn new routes. The tension between these two goals, responsiveness and bandwidth efficiency, is the essence of the mobile routing problem.

    Greedy Perimeter Stateless Routing, GPSR, is a responsive and efficient routing protocol for mobile, wireless networks. Unlike established routing algorithms before it, which use graph-theoretic notions of shortest paths and transitive reachability to find routes, GPSR exploits the correspondence between geographic position and connectivity in a wireless network, by using the positions of nodes to make packet forwarding decisions. GPSR uses greedy forwarding to forward packets to nodes that are always progressively closer to the destination. In regions of the network where such a greedy path does not exist (i.e., the only path requires that one move temporarily farther away from the destination), GPSR recovers by forwarding in perimeter mode, in which a packet traverses successively closer faces of a planar subgraph of the full radio network connectivity graph, until reaching a node closer to the destination, where greedy forwarding resumes.
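GPSR's greedy mode can be sketched in a few lines: forward to the neighbour geographically closest to the destination, as long as that makes progress; when no neighbour is closer than the current node (a local minimum), a real implementation falls back to perimeter mode, which is omitted here. Positions are assumed to be known (x, y) coordinates.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(here, neighbours, dest):
    """Return the neighbour closest to dest, or None at a local minimum
    (where GPSR would switch to perimeter-mode forwarding)."""
    best = min(neighbours, key=lambda n: dist(n, dest), default=None)
    if best is None or dist(best, dest) >= dist(here, dest):
        return None
    return best

# A packet at (0, 0) bound for (10, 0) with two neighbours:
print(greedy_next_hop((0, 0), [(3, 1), (1, 3)], (10, 0)))  # (3, 1)
```

Because each forwarding decision uses only the positions of immediate neighbours and the destination, nodes keep no per-route state, which is what makes the protocol scale under rapid topology change.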
