Building the alternative to DNS
I made it up, though I may not be the first.
I'm unaware of any references.
Key size shouldn't have any effect on latency.
Latency is a function of how many computers you have to go through
to get an answer, and that is mostly determined by cache time and delegation.
Cache times would be longer with this scheme, which would reduce latency.
There would be more authoritative handle servers, which means more
geographic diversity, which would also reduce latency.
And there would be less delegation, which means fewer lookups
and therefore less latency. But this depends on the distribution
model chosen. You could map it onto DNS, in which case
it would have the same amount of delegation that DNS has now.
Or you could use the Usenet model, in which case the local
caching server is likely to have a complete copy at all times.
Overall, a lot less latency, and a lot less network traffic.
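As a rough sketch of the delegation point (the three-level chain and the
function names are my own illustration, not part of any spec), compare the
round trips a cold iterative lookup needs against a flat handle space
replicated Usenet-style:

    # Toy model: round trips needed on a completely cold cache.
    # Assumes an iterative resolver and one query per delegation level.
    def dns_round_trips(name: str) -> int:
        labels = name.strip(".").split(".")
        return len(labels)  # "www.example.com" -> root, .com, example.com = 3

    def handle_round_trips(have_local_replica: bool) -> int:
        # Usenet-style full replica answers locally; otherwise one query
        # to the nearest authoritative handle server.
        return 0 if have_local_replica else 1

    print(dns_round_trips("www.example.com"))  # 3
    print(handle_round_trips(True))            # 0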
Larger keys are more secure, but require more space to store.
These days, people recommend at least 2048 bits (256 bytes) of key
for RSA. Actually, that's pretty small by today's standards -
a 20 Gig drive could hold all the keys for every domain currently registered.
There are no known attacks that have succeeded against even 768-bit RSA keys,
but Bernstein thinks it may be possible to crack 1024-bit keys with a few
million dollars' worth of hardware.
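To put a number on the storage claim (the domain count is my own ballpark
figure, not something from the registries):

    # Back-of-the-envelope check on key storage.
    key_bytes = 2048 // 8              # 256 bytes of raw key material per RSA key
    drive_bytes = 20 * 10**9           # a "20 Gig" drive
    print(drive_bytes // key_bytes)    # 78,125,000 keys fit
    # With something like 30 million domains registered at the moment
    # (my rough estimate), the full key set fits with room to spare.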
The chance of two given MD5 hashes colliding is
340,282,366,920,938,463,463,374,607,431,768,211,456 (2^128) to 1 against.
Even with 5 billion handles, the birthday-bound odds against any two of them
colliding are still tens of quintillions to 1. But if you are incapable of
accepting any risk at all, no matter how infinitesimal (or you just think you
can't sell "acceptable risk"), then say the first person to publish a key "wins".
It might be desirable to have a central repository that stored them, so
that the list didn't have to be replicated across thousands of servers.
On the other hand, I would expect all ISPs over a certain size to set up a
cache (repository) for entries, since they do it now with DNS for
performance reasons. If the handles are verified with use,
then they could be cached indefinitely, so every caching server
becomes a central repository. The downside to verification is that
each endpoint server must do more work, and the total amount
of work that is done is slightly higher. But the upside is that
changes can propagate instantly, and it scales much better.
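Here is a minimal sketch of what "verified with use" could look like, assuming
(my assumption, it isn't spelled out above) that a handle is simply the hash of
the owner's public key, so any cache or endpoint can check an entry on the spot:

    import hashlib

    def handle_for(public_key: bytes) -> str:
        # Assumption: a handle is the 128-bit digest of the public key.
        return hashlib.md5(public_key).hexdigest()

    def verify_cached_entry(handle: str, public_key: bytes) -> bool:
        # Any caching server (or end client) can re-derive the handle from
        # the key it was handed; no central authority needs to be consulted.
        return handle == handle_for(public_key)

    key = b"...example public key bytes..."
    h = handle_for(key)
    print(verify_cached_entry(h, key))          # True
    print(verify_cached_entry(h, b"tampered"))  # False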