The Internet and other organizations

The Christmas Goat and IPv6 (Year 8)

CircleID - Wed, 2017-12-27 20:19

This is the eighth year we have measured IPv6 on the Christmas Goat. And with the crazy climate we now have to live in, there is no snow on the goat or on the ground… (If you want to remember the crazy snow storm from 1998, watch this.) But IPv6 is doing better than the climate this year: we increased from 27% in 2016 to 40% in 2017. In Sweden, Tele2, Tre and Comhem are still the only major ISPs with IPv6 enabled. Tele2 (with IPv6 for about three years) and Tre are mostly mobile operators, and Comhem has enabled IPv6 in its DOCSIS network.

But these three only account for about 10% by themselves; most of the other 30% comes from outside Sweden.

Values from previous measurements:

2010 – 0.1% Native IPv6
2011 – 1%
2012 – 1.4%
2013 – 3.4%
2014 – 11.1%
2015 – 14%
2016 – 27%
2017 – 40%

You can see where the line is heading; next year we should easily break the 50% barrier! :)

In Sweden, IPv6 traffic increased from 4% to 6% this year according to Google's measurements. That's a great increase, but we still have much to do. I have written some articles (in Swedish, but with a Google Translate link at the bottom) about our own problems; the first of them is here (Why Internet in Sweden is broken).

And Google's worldwide measurements show an increase from ~15% in 2016 to ~23% in 2017.

Have a happy and good IPv6 year in 2018, and I hope I can do the measurement in 2018 too!

Written by Torbjörn Eklöv, CTO, Senior Network Architect, DNSSEC/IPv6

Follow CircleID on Twitter

More under: IPv6

The Emergence of Consensus in the UDRP

CircleID - Wed, 2017-12-27 19:13

The modus operandi of the Internet Corporation for Assigned Names and Numbers (ICANN) is achieving consensus. This also holds true for the principal rights protection mechanism that emerged from a two-year round of debates organized by the World Intellectual Property Organization (WIPO) that ICANN implemented in 1999 as the Uniform Domain Name Dispute Resolution Policy (UDRP). Consensus rules; not precedent, although consensus inevitably becomes that. The concept of consensus was highlighted in the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Original Edition (2005): "On most of these issues, consensus or clear majority views have developed." It also pointed out that for certain other questions there had emerged (not surprisingly given the diversity of panelists from different jurisdictions) different views. The word "consensus" appeared 21 times in the Original Overview.

In the Second Edition of the Overview (2011), "consensus" appears 28 times: "This WIPO Overview was originally created and has been updated and expanded in recognition of the need that has been expressed to identify, as much as possible, consensus among UDRP decisions, so as to maximize the consistency of the UDRP system." It continued the practice of the First Edition by identifying where there was divergence, "View 1" and "View 2". The Third Edition (2017) eliminates the word "consensus" as it applies to decisions for the simple reason that there is now more consensus and that for the most part differences have converged to become "precedent" even though only informally acknowledged:

While the UDRP does not operate on a strict doctrine of binding precedent, it is considered important for the overall credibility of the UDRP system that parties can reasonably anticipate the result of their case.

I say "for the most part" because, while the diversity of views has been narrowed by a jurisprudence endorsed by consensus, some panelists continue to act as though there were no consensus, which has a negative effect on "the overall credibility of the UDRP system." A recent example of a nonconformist decision is the <> case, Incorporated v. Manuel Schraner, FA171000 1755537 (Forum November 27, 2017). This decision has been treated to a blistering attack from the Internet Commerce Association here on CircleID. (I also discussed the decision earlier in a separate essay.) When the agreed-upon consensus is willfully upset, as it is in "devex," it introduces an unacceptable bias into the ICANN process. (Appointed Panels are supposed to be neutral.) One of the comments to the ICA essay is that de-accrediting panelists "seems to be a sensible approach [in that it] would provide for some pause to rogue panelists who decide to rewrite policy." I will return to this de-accrediting suggestion in my conclusion, where I will also point out a couple of other possibilities.

A general view formed early in the jurisprudence is that the UDRP should not be a roulette wheel. It "should consist of more than, '[i]t depends [on] what panelist you draw.'" Time Inc. v. Chip Cooper, D2000-1342 (WIPO February 13, 2001) (<>). The goal (as stated in many UDRP decisions) is achieved through "a strong body of precedent" which "is strongly persuasive" even if not binding. Pantaloon Retail India Limited v. RareNames, WebReg, D2010-0587 (WIPO June 21, 2010) ("Whether [the consensus in holding that a respondent in the domain name business] is justified may be a matter for debate, but in the opinion of the Panel there is a strong body of precedent which, though not binding, is strongly persuasive.")

Consistency with consensus views (the "strong body of precedent") is officially encouraged by WIPO, and presumably also by the other providers. Although the devex case comes from the Forum, there are some past WIPO decisions in which the reasoning and conclusions are clearly in opposition to the law. A WIPO email I received on December 20 reported that on "October 23 some 100 WIPO Panelists met at WIPO Headquarters in Geneva to discuss recent jurisprudential developments in cases administered by the WIPO Center." The email emphasized that "the meeting provided a venue for presentations and discussions to help maintain jurisprudential consistency" (my emphasis).

Panelists whose rulings are inconsistent with the jurisprudence mainly favor trademark owners whose rights did not exist when the domain names were registered. If the jurisprudence is properly applied, there can be no cybersquatting claim under the UDRP (or, for that matter, the Anticybersquatting Consumer Protection Act). Priority is one of the Policy's basic principles. De Lage Landen International B.V. v. Steve Thomas, D2017-2045 (WIPO December 7, 2017) (<>) and Virgin Enterprises Limited v. Domain Admin/This Domain is for Sale, D2017-1961 (WIPO December 11, 2017) are the antithesis of this maverick approach. In De Lage Landen, the Panel noted "Complainant's registered rights in the letters 'DLL' as a word mark dating from 2007, long after the Disputed Domain Name was registered." In Virgin Enterprises, registering "domain names that include generic words for the purposes of selling them ... can be legitimate and are not in themselves a breach of the Policy, so long as they do not encroach on third parties' trademark rights." In both cases, the domain names were registered before the mark existed. Potent as the VIRGIN mark is, Complainant does not own the word; its exclusive right does not extend to every phrase in which "virgin" is combined with another word.

In the devex case the Panel willfully misstates the law when it intones that "[t]he fact that Complainant had no rights in DEVEX at the time Respondent registered <> does not permit Respondent to exploit another's after-acquired rights." Rights acquired after registration of the domain name are not actionable under the UDRP, period.

That such maverick decision-making comes about is due in large measure to the repudiation of consensus in two 2009 sharply reasoned decisions from (surprisingly!) the Panel responsible for the binary view becoming the bedrock principle of the UDRP. When in the first UDRP decision (2000) the Panel announced that "[i]t is clear from the legislative history that ICANN intended that the complainant must establish not only bad faith registration but also bad faith use" he created precedent by interpreting what he believed was the intention agreed upon by the multiple constituencies engaged in birthing the UDRP, World Wrestling Federation Entertainment, Inc. v. Michael Bosman, D1999-0001 (WIPO January 14, 2000). However, in Octogen Pharmacal Company, Inc. v. Domains By Proxy, Inc. / Rich Sanders and Octogen e-Solutions, D2009-0786 (WIPO August 19, 2009) the first Panel took back his interpretation of the UDRP by formulating the so-called "retroactive bad faith" principle: "[I]n this Panel's view bad faith registration can be deemed to have occurred even without regard to the state of mind of the registrant at the time of registration, if the domain name is subsequently used to trade on the goodwill of the mark holder." In Guru Denim Inc. v. Ibrahim Ali Ibrahim abu-Harb, D2013-1324 (WIPO September 27, 2013) (<>) the same Panel (in dissent) explained that "[i]t would be much easier for this panelist to maintain that his original decision [approving the binary concept] was correct, and not recant. But in view of the evidence [of the correctness of the unitary view], I am unable to do so." The majority got it right by following consensus:

In the present case, when the Respondent registered the disputed domain name, the trademark did not exist. Respondent was therefore entitled to continue using the domain name <> as he had been using it, or to use it for any purpose whatsoever, so long as he did not intentionally use it to profit from the goodwill associated with Complainant's later created trademark.

Nevertheless, several panelists agreed with the Octogen e-Solutions analysis (and some continue to agree, as we see in the devex case). In Big 5 Corp. v. / Roy Fang, FA130800 1513704 (Forum October 11, 2013) (with a dissent in favor of consensus) (<>), the majority (hewing to the new construction) held that it "deems Respondent's 2012 renewal of the disputed domain name to be the date on which to measure whether the disputed domain name was registered and used in bad faith for purposes of paragraph 4(a)(iii)." Similarly, in Milly LLC v. Domain Admin, Mrs. Jello, LLC, D2014-0377 (WIPO May 25, 2014) (<>), Complainant requested the Panel consider the retroactive bad faith or unified concept approach, which the Panel obligingly did: "[t]he fact Respondent may have registered the Disputed Domain Name prior to Complainant's acquisition of trademark rights does not per se preclude a finding of bad faith under the circumstances of this case for the purposes of paragraph 4(a)(iii)." Ditto Jappy GmbH v. Satoshi Shimoshita, D2010-1001 (WIPO September 28, 2010) and other cases, all of which were enabled by the Octogen e-Solutions decision and all contrary to precedent, which holds that paragraph 4(a)(iii) is a unitary concept.

Interestingly, the Octogen e-Solutions Panel appears more recently to have realigned with the consensus view. In Group One Holdings Pte Ltd v. Steven Hafto, D2017-0183 (WIPO March 28, 2017), a three-member Panel that included the first panelist unanimously rejected the retroactive bad faith reasoning, thereby essentially declaring it a dead end. The interpretation that the UDRP demands proof of bad faith registration AND (not "OR") bad faith use quickly became the consensus view, and still is. (The devex Panel's misapplication of the law is equivalent to putting a finger on the scale in favor of a party having no actionable claim under either the UDRP or the ACPA; the latest WIPO Overview even repudiates that reasoning.)

Since the only legal challenge to erroneous awards (in the U.S. at least) is a lawsuit under the Anticybersquatting Consumer Protection Act (ACPA), few domain name losers have the stomach to afford or endure an expensive lawsuit. But several have. For example, in Camilla Australia Pty Ltd v. Domain Admin, Mrs Jello, LLC, D2015-1593 (WIPO November 30, 2015), the Respondent, a domain investor, challenged the award under the ACPA in Mrs. Jello, LLC v. Camilla Australia Pty Ltd, 15-cv-08753 (D. NJ 8/1/2016), and the award was vacated by stipulation, a result that should (if Panels like the one in the devex case are paying attention) clarify the law as it ought to be applied in UDRP disputes. (Telepathy, Inc. has also gone the ACPA route to vindicate its rights, with similar results including recovery of attorney's fees.) But if the facts are not right, investors can also get hammered in ACPA cases; they too should have sought professional advice.

So what can be done about panelists who willfully apply their own law? Is de-accreditation of panelists the right answer? Certainly, panelists should be educated in the jurisprudence; they should know what the law is and apply it consistently. It is a disgrace when they are clearly in error, when they apply the wrong law! But I do not believe there would be consensus for de-accreditation, and it certainly would not be approved by providers. Another solution, for which there may be consensus, is creating an appeals procedure similar to Nominet UK's, together with a differently phrased version of a rule ICANN introduced in 2013 for the Uniform Rapid Suspension System (URS). One other possibility would be the formation of a corporate entity, funded by the domain community, to retain counsel to represent investors in challenging decisions inconsistent with UDRP jurisprudence, repaid from recovery of attorney's fees and damages under the ACPA.

Final thought: the reason there is a secondary market in domain names at all (it seems to me) is due to the first Panel's original (and luminously reasoned) construction in World Wrestling. Had he construed the Policy in 2000 as he later did in Octogen, a secondary market would never have developed, and we would not be discussing what to do with panelists who rule in favor of complainants whose marks postdate domain name registrations. (In other words, the devex ruling would have emerged as the consensus!)

Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP


More under: Domain Names, ICANN, Law, UDRP

LeoSat Satellite Internet Project

CircleID - Tue, 2017-12-26 18:05

I began tracking Leosat's Internet service project in 2015, before I began contributing to CircleID. The following is my original post, with subsequent updates on their progress, including an investment by the Japanese geosynchronous satellite company SkyPerfectJsat and an agreement with Thales Alenia Space for satellite development.

I've been tracking Greg Wyler and Elon Musk's plans to launch low-Earth orbit satellites to provide Internet connectivity. Musk's SpaceX and Wyler's OneWeb have been joined by a would-be low-Earth connectivity provider, Leosat.

Leosat will not be marketing to individual end users but will target government and business — maritime applications, oil and gas exploration and production, telecom back-haul and trunking, enterprise VSAT, etc. Their market seems closer to that of Wyler's former company O3b, but Leosat plans to cover the entire Earth, while O3b is restricted to locations near the equator.

They plan to offer encrypted connectivity at up to 1.2 Gbps with latency under 50 ms, using a constellation of 80 to 120 small satellites, with launches beginning in 2019 or 2020.

While SpaceX and OneWeb have focused their publicity on end users and developing nations, they will also have the ability to deliver low latency service over long distances. As shown below, a terrestrial link from my home in Los Angeles to La Universidad de Magallanes in Punta Arenas, Chile required 14 hops whereas a satellite route could be achieved with five hops. (The following illustration is drawn to approximate scale assuming a satellite altitude of 700 miles).

The ping time for the terrestrial link averages around 224 ms, considerably slower than the sub-50 ms latency Leosat hopes to achieve.
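
The physics behind that gap can be sketched with a back-of-envelope calculation: light travels at roughly 2/3 c in optical fiber but at full c on free-space links between satellites. The coordinates, altitude, and simple path model below are my own illustrative assumptions, not Leosat's figures:

```python
from math import radians, sin, cos, asin, sqrt

C = 299_792_458        # speed of light in vacuum, m/s
FIBER_FACTOR = 2 / 3   # light in fiber travels at roughly 2/3 c
EARTH_RADIUS_KM = 6371

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

# Los Angeles to Punta Arenas, Chile (approximate coordinates)
dist_km = haversine_km(34.05, -118.24, -53.16, -70.91)

# Idealized round-trip times: propagation delay only, no routing or queuing
rtt_fiber_ms = 2 * (dist_km * 1000) / (C * FIBER_FACTOR) * 1000
sat_altitude_km = 1127  # ~700 miles, the altitude assumed in the illustration
sat_path_km = dist_km + 2 * sat_altitude_km  # up to the constellation, across, and back down
rtt_sat_ms = 2 * (sat_path_km * 1000) / C * 1000

print(f"{dist_km:.0f} km: fiber RTT >= {rtt_fiber_ms:.0f} ms, satellite RTT >= {rtt_sat_ms:.0f} ms")
```

On these assumptions the fiber floor comes out around 107 ms and the satellite floor somewhat lower, so a large share of the measured 224 ms is indirect routing and queuing rather than raw distance — exactly where a more direct satellite path can win.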

Like many Americans, I am served by a monopoly Internet service provider. Might these folks actually be able to provide competition — at least in the developing world — some day?

Update Dec 4, 2017

Leosat, which recently received an investment from the Japanese geosynchronous satellite company SkyPerfectJsat, has announced rough milestone dates for their LEO project. They plan to offer store-and-forward service using two "early bird" satellites in 2019. In 2021 they will begin launching the rest of their 108-satellite constellation. Completion of the constellation and full worldwide service is planned for 2022. (Their initial store-and-forward connectivity is reminiscent of VITAsat, which offered 38.4 kbps email service in Africa in the 1990s.) Judging from the illustration, they seem to be planning both polar and inclined orbits, like Telesat.

Leosat seems to be planning a mix of polar and inclined-orbit satellites.

In this interview (5:31), CEO Mark Rigolle says they will focus on point-to-point connections rather than linking to terrestrially connected ground stations. Doing so will cut latency; he estimates 119 ms between Singapore and London. These point-to-point links will also be more secure than those using the terrestrial Internet. These features will appeal to enterprises needing to synchronize databases, financial trading firms, firms with a lot of sensitive data online, etc.

Thales Alenia Space will develop the relatively large, 670 kg satellites, which will have four optical links to other satellites; 10 Ka-band steerable antennas, each providing up to 1.6 Gbps of symmetrical data connectivity; and two steerable high-performance antennas, each providing up to 5.2 Gbps of symmetrical data connectivity. With their emphasis on speed and security, they are focusing on a premium market, in contrast to OneWeb or SpaceX, which hope to provide affordable connectivity to homes, schools, community centers, etc., as well as long-distance links like the one illustrated above.

I also found their patent application for a "System and method for satellite routing of data" on Google, but could not find one in the US Patent Office database. I am not sure why that is, nor am I sure what patent-worthy unique invention they claim.

Update Dec 21, 2017

Leosat has updated their Web site — it has a rotating image gallery at the top with illustrations and the following captions:

  • Faster Than Fiber
  • Secure Data Communication
  • Instant Infrastructure
  • From Anywhere to Everywhere

This, along with their recent SkyPerfectJsat investment, is an indication that they are making progress and focusing on high-end, fast, secure links.

In a talk at the opening of the SpaceX office in Seattle, Elon Musk predicted that they would get 50% of the long-haul Internet traffic. It seems like Leosat will be a strong, focused competitor. (Note that Musk also based his prediction on inaccurate assumptions).

Written by Larry Press, Professor of Information Systems at California State University


More under: Access Providers, Broadband, Wireless

Do We Really Need a New BGP?

CircleID - Tue, 2017-12-26 18:00

From time to time, I run across (yet another) article about why Border Gateway Protocol (BGP) is so bad, and how it needs to be replaced. This one, for instance, is a recent example.

It seems the easiest way to solve this problem would be to find new people — ones who don't make mistakes — to work on BGP configuration, build IRR databases, and decide what should be included in BGP. Ivan points out how hopeless a situation this is, however. As he says, you cannot solve people problems with technology. You can hint in the right direction, and you can try to make things a little saner and a little less complex, but people cannot be fixed with technology. Given that we cannot fix the people problem, would replacing BGP itself really help? Is there anything we could do to make things better?

To understand the answer to these questions, it is important to tear down a major misconception about BGP. The misconception?

BGP is a routing protocol in the same sense as OSPF, IS-IS, or EIGRP.

BGP was not designed to be a routing protocol in the way other protocols were. It was designed to provide a loop-free path through a series of independently operated networks, each with its own policy and business goals. In the sense that BGP provides a loop-free route to a destination, it provides routing. But the "routing" it provides is largely couched in terms of explicit, rather than implicit, policy (see the note below). Loop-free routes are not always the "shortest" path in terms of hop count, or the "lowest cost" path in terms of delay, or the "best available" path in terms of bandwidth, or anything else. This is why BGP relies on the AS Path to prevent loops. We call things "metrics" in BGP in a loose way, but they are really explicit expressions of policy.
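
The AS Path loop check is simple enough to sketch: a BGP speaker rejects any update whose AS_PATH already contains its own AS number. This is a simplified illustration of the rule, not an implementation of any particular BGP stack:

```python
def accept_update(my_asn, as_path):
    """AS_PATH loop prevention: reject a route that has already passed through us."""
    return my_asn not in as_path

# AS 64500 sees a normal update, then its own route reflected back to it.
assert accept_update(64500, [64511, 64496])           # accepted
assert not accept_update(64500, [64511, 64500, 64496])  # rejected: would loop
```

Because every AS prepends its own number as the route propagates, this single check guarantees loop-free paths without any shared metric at all.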

Consider this: the primary policies anyone cares about in interdomain routing are: where do I want this traffic to exit my AS, and where do I want this traffic to enter my AS? The Local Preference is an expression of where traffic to this particular destination should exit this AS. The Multi-Exit Discriminator (MED) is an expression of where this AS would like to receive traffic being forwarded to this destination. Everything other than these are just tie breakers. All the rest of the stuff we do to try to influence the path of traffic into and out of an AS, like messing with the AS Path, are hacks. If you can get this pair of "things people really care about" into your head, the BGP best-path process, and much of the routing that goes on in the DFZ, makes a lot more sense.

It really is that simple.
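
Simple enough, in fact, that the front of the best-path decision can be sketched in a few lines: highest Local Preference first, then shortest AS Path, then lowest MED. This is a deliberate simplification; real implementations add many more tie breakers (origin code, eBGP vs. iBGP, IGP cost, router ID):

```python
def best_path(routes):
    """Pick the best route: highest LOCAL_PREF, then shortest AS_PATH, then lowest MED.
    Only the policy knobs discussed above are modeled here."""
    return min(routes, key=lambda r: (-r["local_pref"], len(r["as_path"]), r["med"]))

routes = [
    {"via": "peer A",    "local_pref": 100, "as_path": [64496],        "med": 50},
    {"via": "transit B", "local_pref": 200, "as_path": [64497, 64498], "med": 0},
]
# Local Preference dominates: the longer AS_PATH via transit B still wins.
print(best_path(routes)["via"])  # transit B
```

Note how the "shorter path" loses to explicit policy, which is the whole point: Local Preference and MED are expressions of business intent, not measurements of the network.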

How does this relate to the problem of replacing BGP? There are several things you could improve about BGP, but automatic metrics are not one of them. There are, in fact, already "automatic metrics" in BGP, but "automatic metrics" like the IGP cost are tie breakers. A tie breaker is a convenient stand-in for what the protocol designer and/or implementor thinks the most natural policy should be. Whether they are right or wrong in a specific situation is a… guess.

What about something like the RPKI? The RPKI is not going to help in most situations where a human at a transit provider makes a mistake. It would help with transit edge failures and hijacks, but these are a different class of problem. You could ask for BGPsec to counter these problems, of course, but BGPsec would likely cause more problems than it solves (I've written about this before, here, here, here, here, and here, to start; you can find a lot more on rule11 by following this link).

Given replacing the metrics is not a possibility, and RPKI is only going to get you "so far," what else can be done? There are, in fact, several practical steps that could be taken.

You could specify that BGP implementations should, by default, only advertise routes if there is some policy configured. Something like, say… RFC8212?

Giving operators more information to understand what they are configuring (perhaps by cleaning up the Internet Routing Registries?) would also be helpful. Perhaps we could build a graph overlay on top of the Default Free Zone (DFZ) so a richer set of policies could be expressed, and policies could be better observed and understood (but you have to convince the transit providers that this would not harm their business before this could happen).

Maybe we could also stop trying to use BGP as the trash can of the Internet, throwing anything we don't know what else to do with in there. We've somehow forgotten the old maxim that a protocol is not done until we have removed everything that is not needed. Now our mantra seems to be "the protocol isn't done until it solves every problem anyone has ever thought of." We just keep throwing junk at BGP as if it is the abominable snowman — we assume it'll bounce when it hits bottom. Guess what: it's not, and it won't.

Replacing BGP is not realistic — nor even necessary. Maybe it is best to put it this way:

  • BGP expresses policy
  • Policy is messy
  • Therefore, BGP is messy

We definitely need to work towards building good engineers and good tools — but replacing BGP is not going to "solve" either of these problems.

P.S. I have differentiated between "metrics" and "policy" here — but metrics can be seen as an implicit form of policy. Choosing the highest bandwidth path is a policy. Choosing the path with the shortest hop count is a policy, too. The shortest path (for some meaning of "shortest") will always be provably loop-free, so it is a useful way to always choose a loop-free path in the face of simple, uniform, policies. But BGP doesn't live in the world of simple uniform policies; it lives in the world of "more than one metric." BGP lives in a world where different policies not only overlap but directly compete. Computing a path with more than one metric is provably at least bistable, and often completely unstable, no matter what those metrics are.

Written by Russ White, Network Architect at LinkedIn


More under: Internet Protocol, Networks

The Great Forking Bitcoins of China

CircleID - Sat, 2017-12-23 21:31

Let's say I'm with the Chinese government and decide that I am tired of people evading currency controls and money laundering using Bitcoin. So we adjust the Great Firewall of China to block port 8333. We also add some proxies that allow some uncleared transactions from outside to flow into Chinese networks but not the other way and keep track of which ones we let through.

Since a large fraction of the miners are inside China, and all of the hard currency exchanges are outside, this will cause a pretty serious fork. No doubt people will start trying to evade the block, but the Great Firewall of China works pretty well, and any evasion will take a while to start being effective. It'd also be easy to tell who was trying to evade (look for outside transactions in the chains they publish) and send someone around to chat with them.

Even if the two sides are eventually reunited, then what? You have two separate chains with overlapping sets of transactions, which would make any sort of ad-hoc hack to splice one chain onto the other impossibly hard, even if the anarchists in the Bitcoin world could agree to it. The Bitcoin voting algorithm would eventually make one chain win and the other one disappear. If some of the disappeared transactions were yours, how would that affect your opinion of Bitcoin?
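
The "voting" referred to here is Bitcoin's rule that nodes adopt the chain with the most accumulated proof-of-work (often simplified to the longest chain). A toy model, with work reduced to block count and transactions as plain strings, shows what happens to transactions confirmed only on the losing side of the partition:

```python
def resolve_fork(chain_a, chain_b):
    """Adopt the chain with more blocks (a stand-in for most accumulated work).
    Returns the winning chain and the transactions that lose their confirmations."""
    winner, loser = (chain_a, chain_b) if len(chain_a) >= len(chain_b) else (chain_b, chain_a)
    confirmed = {tx for block in winner for tx in block}
    orphaned = [tx for block in loser for tx in block if tx not in confirmed]
    return winner, orphaned

# Shared history, then a partition: each side mines its own blocks.
inside_china = [["tx1"], ["tx2", "tx3"], ["tx5"]]           # 3 blocks
outside      = [["tx1"], ["tx2", "tx4"], ["tx6"], ["tx7"]]  # 4 blocks

winner, orphaned = resolve_fork(inside_china, outside)
print(orphaned)  # ['tx3', 'tx5'] — confirmed inside China, gone after the merge
```

In the real network, orphaned transactions return to the mempool and may be re-mined, but anything that conflicts with the winning chain (a double spend across the partition) is simply lost.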

Written by John Levine, Author, Consultant & Speaker


More under: Blockchain

ISPs in UK Legally Obliged to Provide High-Speed Broadband Upon Request, Starting 2020

CircleID - Fri, 2017-12-22 00:20

UK Government says internet providers will be legally required to meet user requests for speeds of at least 10Mbps starting in 2020. Jessica Elgot reporting in the Guardian: "British homes and businesses will have a legal right to high-speed broadband by 2020 ... dismissing calls from the network provider BT that it should be a voluntary rather than legal obligation on providers. Broadband providers will now have a legal requirement to provide high-speed broadband to anyone who requests it, no matter where they are in the country." It is reported that 4% of UK homes and offices (i.e., about 1.1m properties) cannot access broadband speeds of at least 10Mbps.


More under: Access Providers, Broadband, Law, Policy & Regulation

Another Registrant Loses UDRP Where Trademark 'Spans the Dot'

CircleID - Thu, 2017-12-21 23:17

Here's another example of a domain name dispute where the top-level domain (TLD) was essential to the outcome of the case — because it formed a part of the complainant's trademark: <>.

In this decision under the Uniform Domain Name Dispute Resolution Policy (UDRP), the panel joined a short but (slowly) growing list of disputes in which the TLD plays a vital role.

These decisions contravene the longstanding but now outdated rule that a TLD "should" or even "must" be disregarded in a domain name dispute, simply because the TLD is only a "technical requirement of registration." That rule has started to fade as new gTLDs appear in UDRP proceedings.

The <> decision, like some of the others that have considered the TLD to be a relevant factor, notes that the complainant's trademark "spans the dot" — that is, the trademark appears when the second-level domain ("mr") is combined with the TLD ("green").

The <> decision even includes an interesting but ultimately irrelevant argument about the role of the dot itself, given that "Mr." is a common abbreviation for "Mister".

In comparing the respondent's domain name <> to the complainant's trademark MR GREEN (used in connection with online casinos), the panel wrote:

In these circumstances, the Panel will compare the Complainant's trademark MR GREEN to the entirety of the disputed domain name <> in the assessment of confusing similarity. In so doing, the Panel finds that the Complainant's mark is readily identifiable in the disputed domain name, taken as a whole. It should be noted that the disputed domain name is alphanumerically identical to the trademark with the exception of the addition of the dot which does nothing in the Panel's view to distinguish the mark from the disputed domain name. In the Panel's view, this leads to a finding of confusing similarity.

Other similar decisions involving new gTLDs include those for <>; <> (subject of an earlier blog post); and three involving the WE WORK trademark. Plus, at least two important decisions involving country-code domains (ccTLDs) also include trademarks that span the dot: <> and <>.

This concept of a trademark spanning the dot is now recognized in the new WIPO Overview of UDRP decisions (although that phrase is not actually used). The Overview says: "Where the applicable TLD and the second-level portion of the domain name in combination contain the relevant trademark, panels may consider the domain name in its entirety for purposes of assessing confusing similarity."

Trademark owners and domain name registrants should be aware of the growing importance of TLDs in domain name disputes.

Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm


More under: Domain Names, Law, New TLDs, UDRP

New Report Uses Media Impressions as a Measure for New TLD Usage and Success

CircleID - Thu, 2017-12-21 00:56

(This is an excerpt from a new report, powered by the media monitoring tool Meltwater. This is an initial report, and the plan is to replicate it over time in order to show increases (or decreases) in the media impression metric. You can download a PDF of the full report here.)

With so many new domain extensions now available in the online space, it is very hard to measure the success of an individual TLD (Top-Level Domain). There have been various methods used, such as total domain registration count, hosted live sites in the Alexa top million, premium domain sales, aftermarket value, etc. Many of these metrics do help in understanding a new extension's progress; however, top-line registrations alone do not tell a complete story and leave room for loopholes and shortcomings from free or near-free promotions, large individual portfolio holders, and other factors that may contribute to inflated registration numbers.

It is believed that usage is a more reliable measure of a TLD's overall success. True usage is when a person or business uses a domain name as the primary web address for their business or online identity. When a functioning business or end customer uses a domain in this primary fashion, they actively promote it, advertise it, invest in it, and renew it over the longer term, as it is the base of their online presence. A business will invest its time and money to incorporate a domain name that it trusts and values. The domain becomes an active component of its branding, marketing, and PR activities.

When the press or media picks up announcements and/or writes articles about these businesses, the domain name typically gets mentioned in the articles and press releases. This leads to further awareness, familiarity, and trust built around the domain name extensions that are mentioned most frequently in the press.

Meltwater is a media monitoring software tool that collects data on online press impressions (articles) — when there is a mention of a specified keyword or term — in this case, a domain name with a particular domain extension cited in an online press or media article. A few examples of such press impressions are:

  • Mentions of new or existing businesses, startups, celebrities, individuals using any of the new domain extensions, in news articles, blog articles, etc.
  • Press releases from businesses or any media/press articles that have mentioned web addresses using new domain extensions.

This report uses the Meltwater tool to examine and analyze nearly 6 million press impressions over a six-month period (January 1st 2017 to June 30th 2017), searching for press mentions of web addresses using ten targeted domain extensions: .xyz, .top, .loan, .club, .win, .online, .vip, .wang, .site and .bid. At the time of the study, these were the top-10 new domain extensions based on the total number of registered domains according to

For this report, the TLDs were analyzed on various factors such as Media Impressions, Countries, Languages, Reach, and Sentiment. Consistent search parameters and filters were used across all ten TLDs. In order to focus on domain usage, articles that mentioned the extension(s) in the context of the registry, or as a domain extension (i.e. articles about the domain industry mentioning the new extensions as TLDs, or articles specifically about the activities of a registry operator) were filtered out of the results.

Summary of Results: When tracking the number of press impressions (articles) in terms of raw numbers, the top 3 were: .CLUB, with 14,519 impressions; .XYZ, with 10,770 impressions; and .ONLINE, with 9,595 impressions. When looking at the impression data against topline registration numbers, the top 3 TLDs were: .CLUB, with 13.29 impressions for every 1,000 registrations; .ONLINE, with 12.87 impressions for every 1,000 registrations; and .SITE, with 6.55 impressions for every 1,000 registrations. As for positive sentiment, the top 3 TLDs were: .CLUB, with 4,300 articles; .ONLINE, with 2,200 articles; and .XYZ with 2,189 articles.
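
The "impressions per 1,000 registrations" figures above are a simple normalization of raw impression counts; a quick sketch (the implied registration base is my own back-calculation from the report's two .CLUB figures, not a number the report states):

```python
def impressions_per_1k(impressions: int, registrations: int) -> float:
    """Media impressions normalized per 1,000 registered domains."""
    return round(impressions / registrations * 1000, 2)

# Working backwards from the report's .CLUB figures (14,519 impressions,
# 13.29 per 1,000 registrations), the implied registration base is
# roughly 14,519 / 13.29 * 1000, i.e. a little over a million domains.
implied_base = round(14519 / 13.29 * 1000)
```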

Figure 1. Total Impressions for each of the top 10 gTLDs — January 1 through June 30, 2017.

Figure 2. Media impressions compared to total registrations

You can download a PDF of the full report here: DOWNLOAD FULL REPORT

This excerpt first appeared on the .CLUB blog.

Written by Jeffrey Sass, Chief Marketing Officer, .CLUB Domains

Follow CircleID on Twitter

More under: Domain Names, New TLDs, Web

The Internet's Obesity Crisis

CircleID - Śro, 2017-12-20 21:58

In 2001, I published a report on website weights and their impact on website performance. Why, you might ask, was I researching website weights all the way back in 2001…

The great broadband divide

At the time, in the United States and many other countries, homes and businesses were in the process of upgrading from dial-up internet connections to broadband. Because businesses were on the leading edge of this upgrade, many web teams designed fancy new websites that relied heavily on images and on a fancy new technology known as Flash. But at the time, just 5% of US homes had broadband connections, so home users were forced to wait 30 seconds or more for many of these fancy new web pages to display.

For example, in 2001, the home page of Wal-Mart weighed 238 kilobytes, which, for a dial-up internet user, required up to a 30-second wait for the home page to display.

Around this time period, a startup was emerging that prioritized speed to such a degree that its homepage consisted of nothing more than a few words of text and a logo, weighing all of 13 kilobytes. Its homepage loaded in less than 3 seconds.

That startup was Google.

The Google homepage weighed less than half of the Yahoo! homepage and users noticed. It wasn't just the quality of search that won Google its customers; it was the responsiveness of the interface.

Flash forward to 2017.

Here is the weight of the Google homepage in 2001 (blue) compared with today (in green). Google now comes in at a whopping 550 kilobytes (on average). But you don't have to look far to find websites that weigh many times more than Google, such as IBM, Microsoft, and Amazon.

The mobile broadband divide

So what does this mean in terms of website performance?

If you don't have a high-speed connection, it means the difference between a fast-loading website and a website that you might just give up on.

Not everyone has a high-speed connection

So let's say you have a smartphone on a 3G network, which describes vast portions of China and most emerging countries, such as Indonesia and Turkey. A web page that weighs more than 3 MB could take anywhere from six to 10 seconds to load. If you want your website to display within the coveted 3-second threshold, you would be wise to keep it under 1 MB.
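
The arithmetic behind those figures is just transfer size over bandwidth; a back-of-envelope sketch, where the 3G throughput range (2.4 to 4 Mbps) is my own illustrative assumption rather than a figure from the article:

```python
def load_seconds(page_mb: float, mbps: float) -> float:
    """Seconds to transfer a page of page_mb megabytes at mbps megabits/s
    (transfer time only; ignores latency, DNS, and rendering)."""
    return page_mb * 8 / mbps

# A 3 MB page on an assumed 2.4-4 Mbps 3G link lands in the article's
# six-to-ten-second range; a 1 MB page fits under the 3-second threshold.
slow_3g = load_seconds(3, 2.4)  # ~10 s
fast_3g = load_seconds(3, 4.0)  # 6 s
one_mb = load_seconds(1, 4.0)   # 2 s
```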

Based on my research for the Web Globalization Report Card, mobile websites have been steadily increasing in weight. Just over the past two years, they have nearly doubled in weight.

Mobile website weight is now one of the many elements that factor into a website's total score.

If you want to better understand the speed of Internet connections around the world, check out the Speedtest global index.

The Speedtest Global Index compares internet speed data from around the world on a monthly basis. Data for the Index comes from the hundreds of millions of tests taken by real people using Speedtest every month. To be included in the Index, countries must have more than 3,333 unique user test results for fixed broadband and more than 670 unique user test results for mobile in the reported month. Results are updated at the beginning of each month for the previous month.

Here's an excerpt from October:

So while Norway currently leads the pack with nearly 60Mbps, Brazil comes in at 15Mbps. And Brazil is far from alone in the bottom half of this list.

What's the key takeaway here?

All the usability testing in the world is meaningless if your customers can't quickly load your website or mobile web app.

Get your mobile website under 1MB, and you'll be well positioned against the competition — and you'll be better serving your customers. Get it under 500 kilobytes, and you'll be on par with Google's home page; not a bad place to be.

Written by John Yunker, Author and founder of Byte Level Research

Follow CircleID on Twitter

More under: Broadband, Mobile Internet, Web

Blockchain Technology Can Solve Some of Africa's Problems

CircleID - Śro, 2017-12-20 18:46

Lately, the word blockchain is gaining a lot of attention from businesses, investors and governments, especially around how it could transform how we do business today. As the world looks up to Blockchain technology for radical transformations in many industries and sectors, I want to take a look at how it could help governance in Africa.

The African continent is plagued by corruption, bad governance, mismanagement and a lack of accountability. These issues see states lose millions of dollars yearly to tax evasion, excessive spending, and mismanagement of public funds. It is also not uncommon to hear of efforts by different African states to eliminate these ills, with almost guaranteed failure to implement such reforms. It would be naive to assume this alarming failure is solely due to high levels of corruption and nepotism, but one cannot ignore the importance of accountability and transparency in implementing such projects. This is where blockchain technology could help, with accountability and integrity built into the technology by design. But before we look at how, let's take a look at what a blockchain is.

So what is blockchain?

Originally developed as an accounting platform (ledger) for Bitcoin and other cryptocurrencies, a blockchain is a distributed ledger technology, a way for untrusted parties to use a distributed database to ensure a common record of truth or proof of record. It operates in a distributed system of nodes, where each node holds a copy of the transaction ledger and trust is managed by all nodes, not a central authority.

A blockchain is essentially a chain of blocks, each storing a transaction record (or any data, for that matter). A block is a permanent record on the chain: it must reference a parent block, and once created it cannot be altered or removed, because every subsequent block is linked to it. This is a very important attribute, especially for use cases that attempt to solve issues of accountability and corruption.
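
The parent-link and immutability properties described above can be sketched in a few lines of Python. This is a toy hash chain for illustration only, not any production blockchain:

```python
import hashlib
import json

def make_block(data, parent_hash):
    """Create a block whose hash covers its data and its parent's hash,
    so the block permanently links to its parent."""
    block = {"data": data, "parent": parent_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute each block's hash; any altered or removed block breaks
    the links for every block that follows it."""
    for parent, child in zip(chain, chain[1:]):
        payload = json.dumps(
            {"data": parent["data"], "parent": parent["parent"]},
            sort_keys=True).encode()
        expected = hashlib.sha256(payload).hexdigest()
        if parent["hash"] != expected or child["parent"] != expected:
            return False
    return True

# Build a tiny three-block chain of (hypothetical) transaction records.
genesis = make_block("genesis", "0" * 64)
b1 = make_block({"payment": 100}, genesis["hash"])
b2 = make_block({"payment": 250}, b1["hash"])
```

Tampering with `b1`'s data changes its recomputed hash, so `b2`'s parent pointer no longer matches and validation fails, which is exactly the tamper-evidence property the article relies on.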

How does this fit in?

Smart contracts are blockchain-based applications that control the exchange of assets (money, property, shares or anything of value) in a transparent and immutable way, according to certain conditions. Smart contracts not only define the conditions and penalties of an agreement or transaction; they also automatically enforce them. Applied in some areas of governance, this would eliminate corruption and the middleman effect because of its accuracy, provenance, transparency and automated enforcement. But in what areas could smart contracts help the fight against corruption?
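
The "define and automatically enforce" behavior can be illustrated with a toy settlement rule. The contract terms, field names and amounts here are entirely hypothetical, and real smart contracts run on a blockchain platform rather than as plain Python:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeliveryContract:
    """Toy procurement contract: pay the supplier if goods arrive by the
    deadline; otherwise refund the buyer and charge the supplier a penalty.
    Settlement is a pure function of the terms, so no official can bend it."""
    price: int
    penalty: int
    deadline: int  # e.g. a block height or timestamp

    def settle(self, delivered_at: Optional[int]) -> dict:
        if delivered_at is not None and delivered_at <= self.deadline:
            return {"supplier": self.price, "buyer": 0}
        # Late or never delivered: enforce the refund and penalty.
        return {"supplier": -self.penalty, "buyer": self.price}
```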

Procurement – Procurement and many other areas of government in Africa typically run on paper-based systems involving many people and numerous approval channels, making them costly and slow and increasing the exposure to fraud and loss. With blockchain, transactions are transparent, ensuring authenticity and proof of record. Because blockchain smart contracts also enforce the conditions and penalties of transactions automatically, this eliminates the problem of corruption and fraud. Faster transactions also mean better efficiency and reduced costs, saving millions.

Land and property administration – Real estate and land ownership is usually not effectively managed by African governments, making land disputes very difficult to resolve and leaving millions lost in unclaimed taxes. Attempts to fix this have been largely unsuccessful in the past because the sector is plagued with corruption and nepotism. A system that ensures each record of ownership is not only immutable but publicly accessible solves the corruption problem and also introduces efficiency when potential buyers need to check the current and past owners of a property they are interested in purchasing. In addition, with blockchain, land could be held as equity, making bank loans more accessible.

Tax administration – With corruption being the primary cause of under-collected or fraudulently filed tax returns in most African countries, blockchain technology could greatly benefit these economies, allowing countries to collect billions in tax revenue that are being lost today. Because blockchain technology provides provenance, traceability, and transparent, immutable information about transactions, fraud and corruption become almost impossible in the system and easier to detect. Using blockchain technology to increase the efficiency of tax payments and decrease the time it takes to pay taxes would also encourage businesses to pay tax and discourage fraudulent workarounds and tax evasion.

In fact, a government-wide implementation of blockchain technology could potentially help drive a general behavioral change in society. The knowledge that wrongdoing is easy to detect and that records cannot be manipulated might deter actors involved in corruption and fraud.

Written by Tomslin Samme-Nlar, Technology Consultant

Follow CircleID on Twitter

More under: Blockchain, Policy & Regulation

A Safe Pharmacy Environment in the Digital Age

CircleID - Śro, 2017-12-20 02:45

Today's ever-evolving, digital world has fundamentally changed, enhanced and challenged the way in which businesses all over the world must operate. For organizations and professions that have existed for centuries, this has created the opportunity and the test of adapting to change to remain successful and relevant.

The National Association of Boards of Pharmacy (NABP) was founded in 1904, at a time when there was little uniformity in the practice of, or standards for pharmacy. NABP Executive Director/Secretary Carmen A. Catizone, MS, RPh, DPh, explains why NABP was so important at that time in the industry's history.

"Each state had very different requirements, and there was no uniform measure of what skills and knowledge a pharmacist needed to receive a license. Pharmaceuticals, too, were unregulated. There were no standards for safety or efficacy.

"NABP was formed to assist the state boards in achieving inter-state reciprocity of pharmacists' licenses based on minimum standards of education and uniform legislation."

The dark side of technological advancement

Over the decades, developments targeted at patient safety have resulted in regulations that have raised standards for pharmacy practice and the pharmaceutical supply chain to a level where "pharmacists have become one of the most trusted sources for medication and related information," says Dr. Catizone.

However, as it did for many established industries, the widespread adoption of the internet raised challenges.

"Online drug sellers became more common in the late 1990s, and we started seeing some back-sliding in the achievement of safety made to that point. They began circumventing the laws and standards established to keep patients safe."

These developments have made NABP's role both more complex and more crucial. NABP stated that it has evaluated more than 11,500 websites selling prescription drugs and found 96 percent of them to be operating illegally — the majority of them dispensing potentially dangerous medicine without requiring a prescription.

"Consumers today put their safety at risk when they buy medicine online," explains Dr. Catizone. "It's not enough to make sure high standards are in place for the practice of pharmacy because consumers don't always go to their corner drugstore. They need to know how to find safe, legitimate online pharmacies that are in compliance with patient safety standards."

How a 110+ year-old organization adapts to change

To address the need for better consumer education, NABP now runs awareness programs on the dangers of online criminals and counterfeit drugs, empowering consumers to make safer decisions when buying medicine online.

NABP also grants certifications to legitimate pharmacies and related organizations that uphold high standards and demonstrate best practices.

"The Verified Internet Pharmacy Practice Sites, or VIPPS, program was launched in 1999 in response to public and regulatory agency concerns regarding the safety of internet pharmacy practices and to provide a means for patients to identify legitimate online pharmacies. VIPPS accreditation is still considered the gold standard for internet pharmacies in the United States."

As Dr. Catizone explains, the VIPPS program itself has also evolved, most recently with the introduction of the .pharmacy Top-Level Domain as part of the .Pharmacy Verified Websites Program.

"NABP believes strongly that its .Pharmacy Program is the future of safe pharmacy and pharmacy-related services online, offering a superior means of displaying approval to consumers and other entities," he said.

"As such, NABP is requiring all VIPPS-accredited pharmacies to register and use a .pharmacy domain name in order maintain their accreditation. It's no longer enough to have a seal of approval. Seals can be copied and pasted and displayed fraudulently to dupe patients into thinking they are visiting a safe website.

"The .pharmacy domain name identifies legitimately operating pharmacies and pharmacy-related entities for consumers, advertisers, and search engine companies by incorporating the 'seal of approval' into the domain name."

The future of online pharmacy standards

Ultimately, NABP aims to see widespread usage of the .pharmacy domain across the industry, as a sign of credibility and security for consumers.

"Pharmacies and related entities will recognize the .pharmacy domain name as a way to stand out against the overwhelming number of websites selling medicine illegally, including substandard and counterfeit products," said Dr. Catizone.

As the internet challenges organizations like NABP to evolve, NABP has decided not just to keep up, but to contribute to a better, safer, environment for online consumers.

"NABP would like to see search engines recognizing domains ending in .pharmacy as verified and legitimate and pushing them to the top of search results. The more consumers know about the .pharmacy domain and that it is a beacon of safety online, the more they will choose to visit websites with a .pharmacy domain name. In the end, the .Pharmacy Program contributes to a safer internet."

Anyone interested in learning more about NABP and the .Pharmacy Program can visit NABP's website and the .pharmacy domain website.

Written by Sue Schuster, Client Engagement Manager, Registry Services at Neustar

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity, New TLDs

The Top 10 .Brand Moments of 2017

CircleID - Śro, 2017-12-20 00:50

Each year in December, I sit down and take a moment to reflect on how the .brands space has progressed in the previous 12 months.

Most folks will understand that starting a movement to create the next evolution of the internet with 'digital superbrands' was a little slow at first. Slowly but surely, that has started to change and especially in 2016, it really felt like people were starting to 'get it' — both in terms of brands themselves, and the media and consumers.

Then came 2017. We entered the year with a lot of ambition, and a great sense of optimism.

That optimism was well and truly rewarded and I can honestly say the progress we've seen in 2017 has been amazing, and incredibly exciting in terms of the future.

From full transitions to emerging usage trends, to some brilliant examples of promotional pages and everything in between — there have been so many great moments.

But let's get to it – my top 10 .brand moments of 2017.

* * *

1. Major Amazon Web Services advertising campaign featuring .aws

We're always on the lookout for .brand domains being used 'in the wild', and in October we were excited to see a national US campaign involving out-of-home and TV advertising by the tech giant Amazon Web Services. Of equal importance is that this is reportedly the first time Amazon has done major campaign advertising for AWS Cloud solutions, reinforcing the 'build on' catchphrase with the memorable domain

As I wrote in a blog about this campaign, the size and scale of AWS as a brand makes this all the more impressive — and it serves as a true challenge to other .brand applicants to "tackle the challenges of internal engagement and take advantage of the unique branding opportunities offered by .brand domains — or be left behind by those who are."

2. Major League Baseball's SEO expert shared his advice for .brands

We were thrilled earlier this year to have Matt Dorville of Major League Baseball write a guest series on how a vanity URL strategy using .brand domains could mean great things for search ranking. Matt's insights are well worth a read (start with Part 1 here) and as I've mentioned already, redirecting domains have increased significantly throughout the year — so it seems brands are definitely catching on to the potential here.

3. Full transitions

In addition to the ever-increasing use of .brands being used for campaign sites and social media, it's been incredibly encouraging to see that some have taken the plunge and fully transitioned their digital properties to their .brand TLD.

Whilst most have undertaken the transfer to their .brand as part of a web redesign effort, these organizations should be applauded for embracing the future and taking the leap into the next evolution of the internet.

Some examples of companies that have made this shift include the State Bank of India, which moved to .sbi; Spanish brand SENER Engineering and Construction, to .sener; and FAGE yogurt, to .fage.

4. Australian Football League paves the way for global sporting organizations

With a live attendance of more than 6 million people in 2016 and an average television audience of over 4 million, the Australian Football League (AFL) is Australia's most popular and influential sporting code. In 2017, the AFL launched its historic women's league, a milestone moment for elite sport in Australia, and chose to use its .brand for the league's primary site. Drawing massive media attention around the country, the site joined other important community assets, including the AFL's grassroots engagement program

5. .brands: the future for a connected world – a video

This year we produced a video that really tells the story of .brands and what they could mean for businesses and consumers. Our 'Connected World' video has already had over 400,000 views, and we're super proud of how it looks and the narrative it conveys — you can check out the video below and read a bit more about how it came together in my blog post 'The story of a video.'

6. Use of .brands in social media becomes a go-to

The topic of integrating .brands into social media through branded URL shorteners has dominated our recent content and been the number one topic of conversation with our clients in recent months. The benefits for branding and security, combined with the ease and simplicity of implementation, have really captured the attention of brands who are moving towards usage of this strategy. You can read my previous blog on this topic or some recent pieces from Neustar's Corey Grant on how branded links in social can boost security, as well as 5 reasons you should use your .brand in social media.

7. AXA integrates Blockchain technology with .brand domains

In recent months, insurance company AXA launched a new product for travelers called Fizzy, available via its .brand domain. Not only is it an interesting product offering, but in this case AXA has integrated its .brand with an additional new technology — as Andrew Allemann of Domain Name Wire reported, "Fizzy is based on the Ethereum blockchain. Insurance purchases are recorded on the blockchain, so they are tamperproof. Payments are automatically made whenever a contract should pay off; in this case if a flight is 2+ hours late." This is a super cool concept, and I predict we'll see more innovation like this among .brands in 2018.

8. FOX uses its .brand to promote Predator strategy

The media and entertainment industries have been a little slower off the mark in terms of .brand usage, but given their incredible brand recognition and audience sizes, even small steps can be really significant. Recently, mass media corporation FOX took a huge leap, using its .brand to launch a fan application promoting the new Predator movie. This could be fantastic publicity for .brands as a whole and is an excellent, self-contained way for FOX to roll out its TLD and start educating users about new possibilities for navigating to FOX content.

9. French transport company promises customers more with its .brand

Only last week, the French National Railway Corporation (Société nationale des chemins de fer français, or 'SNCF') completely moved its existing ticketing portal from its ugly old domain over to a .sncf domain (whose name translates to 'yes' in English).

This is a huge deal, as the site is responsible for millions of tickets and billions of dollars in annual revenue.

While promising no disruption to the service it offers consumers, SNCF explained that the new .sncf website would provide "more clarity in choice", "advice and support" and "more ideas and inspiration" to become "your favorite go-to travel companion." After 17 years in operation and with more than 180,000 employees in 120 countries, this is yet another major European organization to go big with its .brand.

10. A few industries rise to the top

This year we launched the Spotlight series to delve deeper into some of the interesting facets of the .brands space. Through it, we've examined some of the industry sectors that showed high rates of adoption and interesting use cases — including the Banking & Finance industries, Insurance industry, and Automotive sector.

* * *

With all this momentum driving us into 2018, we're looking forward to some great progress in the New Year and wish all of our readers the happiest of holiday seasons.

This article was originally published on MakeWay.World.

Written by Tony Kirsch, Head of Professional Services at Neustar

Follow CircleID on Twitter

More under: Blockchain, Domain Names, New TLDs, Web

Sorry, Ajit Pai: Smaller Telcos Did Not Reduce Investment After Net Neutrality Ruling

CircleID - Pon, 2017-12-18 23:11

Pai justifies his Net Neutrality choice with the claim, "the impact has been particularly serious for smaller Internet service providers." #wrong (Actually, NN has minimal effects on investment, up or down, I'm convinced. Competition, new technology, customer demand and similar are far more important.)

The two largest suppliers to "smaller ISPs" saw sales go up. Adtran's sales for the most recent nine months were $540M, up from $473M the year before; full-year 2016 was $636M, 2015 $600M. Calix sold $372M in the last nine months, up from $327M; its full year 2016 was $459M, up from $407M in 2015. Clearfield, a supplier of fiber optic gear, saw sales to smaller ISPs rise 8%.

There is nothing in the data from others that suggests an alternate trend.

The results in larger companies are ambiguous. I can "prove" capex went up or went down by selecting the right data.

Anyone could have found this data in a few minutes from the company quarterly reports. Pai offered no numbers to back that up. He based that claim on what he heard from fewer than 5% of the smaller service providers. They provided no figures. Pai strongly opposed Net Neutrality. It's reasonable to assume the handful of people he's citing wanted to say something he would agree with.

It's also reasonable to assume that his "confirmation bias" would make it hard for him to hear contrary evidence, such as the clear comments of the biggest suppliers in this space. I constantly have to fight a similar problem.

For example, I've spent over a decade reporting DSL and landline broadband. It was hard for me to see how important mobile broadband is becoming. Eventually, I've come to recognize the impact, probably two years later than I should have seen it. Landlines are still important — AT&T is extending 3M/year — but wireless is now good enough for many people.

Japan has fiber almost everywhere, but Softbank reports ~3M customers have recently chosen a wireless connection instead. The convenience of being connected within 15 minutes of bringing a modem home was more important to them than the greater performance of fiber. Most were young people who could take the modem with them if they moved. (I was surprised by this, so I double-checked.)

Actually, whether capex went up or down in 2016 tells us almost nothing about the choice on neutrality. Everyone knows a single datapoint could be random or due to other causes.

Even if he had enough data to form a hypothesis, any statistician would require him to eliminate "confounding variables": other possible explanations for the effects he sees. In this case, many are well-known. AT&T, the largest spender, had told Wall Street, before Title II was considered likely, that it intended to cut capital spending. It was finishing a large project of bringing LTE to 97% of the U.S. and would then reduce capex. This was well-known. Less discussed was that it had also finished a very large project of extending fiber to a million business locations.

In addition, 5G technology was not ready to deploy in 2015-2017. At the time, that was expected to be the next big telco investment. 5G will be modest in 2018 but likely will pick up in 2019 and 2020 at AT&T. In the last two months, Verizon has said the reach is better than expected so that the spending bump won't be necessary.

Other factors loom large in carrier investment. In over 15 years of listening to CFOs and CTOs explaining capex decisions, I've nearly never heard investment attributed to any government action. They are far more likely to cite competition worries, better technology, and customer demand.

Competition often inspires investment. Verizon President Larry Babbio told me they built Fios, the largest fiber network in the western world, because "we have to get cable out of the house." Cablevision was taking away too many customers. More recently, Verizon's successful first-in-the-world large LTE network was killing AT&T, which was upgrading more slowly. AT&T added several billion dollars to capex after 2011 to catch up.

Cecilia Kang at the Times and the other D.C. reporters knew the decision was coming. Pai made this claim in the past, so they should have found the data. "The usual sources" in D.C. are typically unreliable about the telecom world outside Washington. Both left and right suffer from Beltway Blindness.

The results in the larger companies were more ambiguous.

The four largest companies' capex (two-thirds of the total) went up from $52.7B during 2015 to $55.7B in 2016, an increase in line with sales. The result remains positive after making sensible adjustments for mergers and acquisitions. That's as close to "proving" that Net Neutrality led to increased spending as the facts chosen to prove the opposite.

I could also find data that shows spending going down. To get meaningful results, you have to choose the right data set.

Pai cites a paper by Bob Hahn here. Hahn happens to be a strong opponent of Neutrality, and the telcos are important clients of his. That doesn't mean he's wrong, but it's irresponsible not to examine the assumptions in his work. In fact, a Neutrality proponent has done that: making different assumptions, he came to the opposite conclusion. Both points of view are totally unproven because of inadequate data.

Probably, the effect of Net Neutrality on investment is minimal in either direction, but even I don't have enough data to be certain.

* * *

For the record: I have supported Net Neutrality since 1999. It's my job to report accurately no matter what my personal opinion. I have written that the claims from some people who agree with me are ridiculous. According to former FCC commissioner Michael Copps, ending net neutrality will end the Internet as we know it. Michael knows I respect him, but this is ridiculous.

Update, Dec 19, 2017: Initial paragraphs at the beginning of the post updated for added clarity.

Written by Dave Burstein, Editor, DSL Prime

Follow CircleID on Twitter

More under: Net Neutrality, Policy & Regulation, Telecom

Is it Finally Time to Eliminate Needs-Based IPv4 Transfer Policies?

CircleID - Pon, 2017-12-18 18:13

This article was co-authored by Marc Lindsey, President and Co-founder of Avenue4 LLC and Janine Goodman, Vice President and Co-founder of Avenue4 LLC.

Two weeks before depletion of the American Registry for Internet Numbers (ARIN) IPv4 free pool in September 2015, we published an article recommending that the ARIN community adopt transfer policies that encourage trading transparency and improve whois registry accuracy. By eliminating needs justification as a pre-condition to updating the registry, we argued that ARIN could eliminate existing policy-based barriers that have kept many otherwise lawful and legitimate commercial transactions in the shadows.

At that time, the majority of ARIN's active members participating in its policy development processes strenuously objected to any policy proposal that relaxed needs requirements for IPv4 block transfers. During 2014 and 2015, eight such policies were proposed. All of them were abandoned for lack of support.

In the public comment period, opponents of these proposals asserted that rampant fraud, hoarding, and speculation in the market would follow adoption of policies that relaxed needs-based requirements for IPv4 transfers. These arguments prevailed within the ARIN community even though there was no data or other evidence to support them.

Experience Leads to Experimentation

After September 2015, attitudes within the ARIN community shifted. Rigorous needs-based requirements imposed real-world market impediments for smaller network operators while the largest network operators were able to readily acquire substantial quantities of address space. At around the same time, stiff opposition to relaxing needs justification for all transfers began to fade.

In 2016, three policies were introduced that collectively reduced transfer utilization thresholds and needs criteria. All were overwhelmingly supported by the community and adopted by ARIN the following year. The core of these new policies appears in Sections 8.5.4 and 8.5.7 of ARIN's Number Resource Policy Manual ("NRPM").

Section 8.5.4 of the NRPM allows network operators without any existing address holdings to obtain a /24 block without any demonstration of need. This dramatically reduced the burden on start-ups, downstream ISPs and end users to obtain an initial small IPv4 block via the transfer market.

Section 8.5.7 of the NRPM allows any organization that can demonstrate 80% utilization of its current IPv4 address space to receive a transfer that would double the size of its then-current holdings — up to a /16 (65,536 numbers). Under this policy, the transfer recipient does not have to convince ARIN that it will attain any future utilization threshold as a condition of the transfer. There is, however, a cap on the quantity of address space any one organization can acquire pursuant to this policy: an organization may not receive more than 65,536 numbers in any rolling six-month period.
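The arithmetic in Section 8.5.7 is easy to misread, so here is a minimal sketch of one reading of the rule as described above. The function name and example figures are illustrative only; this is not ARIN's official tooling.

```python
def max_transfer_8_5_7(current_holdings, utilized):
    """One illustrative reading of ARIN NRPM Section 8.5.7 (not official tooling).

    An organization demonstrating 80% utilization of its current holdings may
    receive a transfer that doubles those holdings, capped at a /16
    (65,536 numbers) in any rolling six-month period.
    """
    SIX_MONTH_CAP = 65_536  # a /16
    if current_holdings == 0:
        return 0  # organizations with no holdings fall under Section 8.5.4 instead
    if utilized / current_holdings < 0.80:
        return 0  # utilization threshold not met
    # "Double the size" means receiving up to the current holding size.
    return min(current_holdings, SIX_MONTH_CAP)

# An org holding a /18 (16,384 numbers) with 14,000 in use may double to a /17:
print(max_transfer_8_5_7(16_384, 14_000))    # 16384
# A fully utilized /14 (262,144 numbers) is still capped at a /16:
print(max_transfer_8_5_7(262_144, 262_144))  # 65536
```

Note how the six-month cap, not the doubling rule, becomes the binding constraint for any holder larger than a /16.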

According to a presentation at the ARIN 39 meeting in April 2017, approximately 97% of transfers to date would have qualified for this policy. This means the policy effectively eliminated needs justification for most market participants. But there was relatively little dissent during the policy discussion and comment period. The fear of fraud, hoarding, and speculation that had scuttled prior liberalizing transfer policy proposals had no sway.

Although this outcome is an abrupt shift in policy position, it aligns with actual market activities. Fraud, speculation and hoarding on the buying/receiving side of transfers simply aren't a factor. In his presentation on IP addressing at ARIN 39, APNIC scientist Geoff Huston noted that he saw no evidence of speculation based on his review of historical transfer data. Six months after adoption of the relaxed needs justification policies, there is still no evidence of market-distorting bad behavior by transfer recipients.

In contrast, these new policies have been largely successful in enabling smaller network operators to access and participate in the market to obtain the address space they need to operate their businesses.

Collateral Damage: Whois Registry Accuracy

The success of these limited relaxed-needs policies invites a close look at whether any needs-based requirement is desirable under current IPv4 market conditions, where supply cannot keep up with demand and hidden transactions are introducing registry inaccuracies.

In her presentation on whois registry accuracy last spring, Leslie Nobile described whois accuracy as a key responsibility of the RIRs and vital to the operation of the Internet. She noted that network operators rely on whois to resolve technical and abuse issues, law enforcement relies on whois in its investigations, and greater registry accuracy helps protect against number hijacking. She cautioned that, unless changes are made, whois accuracy would likely worsen, not improve, over time.

One important change that would advance whois accuracy is the elimination of needs-based requirements for all IPv4 transfers. The experiences of other Internet registries bear out this result.

When RIPE NCC decided to eliminate its needs requirements for intra-RIR transfers in order to encourage more transparent IPv4 market transactions, one of the key cited benefits was greater whois registry accuracy. Imposition of needs requirements distorts the public reporting of marketplace activity and contributes to whois registry inaccuracy by encouraging market participants to enter into transactions that are never reported to the RIR registries.

Since 2011, there have been over 50 large block transfers (i.e., greater than 250,000 IPv4 numbers) recorded in the ARIN registry. For this purpose, a "transfer" means one or more IPv4 blocks transferred between the same two parties and registered on the same day. More than half of those transfers — comprising nearly 50 million numbers — are attributable to just ten buyer/seller "pairs" who engaged in two or more transfers over months or years.
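The grouping rule used here — one or more blocks between the same two parties registered on the same day counts as a single transfer — can be sketched as follows. The records are hypothetical, invented for illustration; real ARIN transfer logs have a different shape.

```python
from collections import defaultdict

# Hypothetical transfer-log rows: (seller, buyer, registration_date, ipv4_numbers).
records = [
    ("OrgA", "OrgB", "2016-03-01", 131_072),  # a /15
    ("OrgA", "OrgB", "2016-03-01", 65_536),   # same parties, same day: same "transfer"
    ("OrgA", "OrgB", "2017-01-15", 262_144),  # same pair, later date: separate transfer
    ("OrgC", "OrgD", "2016-06-10", 16_384),
]

LARGE = 250_000  # the article's threshold for a "large block transfer"

# Aggregate individual blocks into transfers keyed by (seller, buyer, date).
transfers = defaultdict(int)
for seller, buyer, date, n in records:
    transfers[(seller, buyer, date)] += n

# Flag large transfers and count how often each buyer/seller pair recurs.
large = {k: v for k, v in transfers.items() if v > LARGE}
pair_counts = defaultdict(int)
for (seller, buyer, _date) in transfers:
    pair_counts[(seller, buyer)] += 1

print(large)        # only the 2017 OrgA->OrgB transfer exceeds the threshold
print(pair_counts)  # OrgA/OrgB show up as a repeat pair
```

Repeat pairs like OrgA/OrgB are exactly the pattern the article describes: a single commercial deal surfacing in the registry as several same-party transfers spread over time.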

In some cases, the same contracting parties may have entered into separate contracts for each transfer (either as part of completely separate deals or the result of a series of maturing options in a single deal). In other instances, parties entered into a single transaction that involved numbers being registered over time in order to better accommodate ARIN's needs requirements, which frustrates the very purpose and function of the whois registry (accurate recordation of who controls allocated IP address space) and the needs-based policy constraint (limiting the quantity of address space one entity can acquire).

These multi-step transactions are just part of a larger problem. Evidence suggests that a significant number of alternative transactions exist in which the parties have agreed to forego the registration process altogether in order to avoid the ARIN transfer hoops: long-term leases, corporate acquisitions, or straight sales in which buyer and seller simply agree to take the risk of conveying beneficial use without updating the RIR registries.

In all of these cases, the net result is an ARIN registry that does not identify the real parties in control of specific IP address space. This hampers efforts by network operators to resolve technical coordination issues and impairs law enforcement when conducting investigations that rely on whois data to trace IP addresses to identifiable organizations.

Onward Progress – Reframing the Problems and the Solutions

The collective experience of the RIRs, including ARIN, is that neither relaxing needs requirements nor eliminating them altogether causes hoarding or speculation or any other nefarious activity as long as there is a means to ensure that receiving organizations are lawfully organized and operate IP networks. When IPv4 numbers were being allocated from the free pool, constraining allocations and assignments based on demonstrated need was a useful means to stave off exhaustion and fairly allocate limited (but free) resource supply. Now that the free pool is exhausted, supply of new address space must be stimulated through significant capital outlays, and conveyed under structured contracts (with the associated transactional and counterparty risks) — all of which introduce considerable barriers for organizations seeking additional IPv4 address space.

With these market realities, transfer policies that focus on vetting the legitimacy of the parties' standing to participate in a transfer without any additional needs constraint would represent a material improvement for all market participants. Existing policies as applied by ARIN's staff on the source side of designated transfers are working very well. On the recipient side, however, policies should be reshaped to focus only on validating the legal standing of recipients, and the existence of their operational networks without imposing judgments on the quantity of address space they should buy. This would reduce the number of hidden transactions, encourage all market participants to submit their transactions for registration, and still provide adequate safeguards against pure financial speculation.

In a post-free IPv4 pool world with limited supply of IPv4 number blocks, it's time to retire needs justification for IPv4 transfers and re-focus transfer policy on addressing the real market and related registry challenges.

Written by Marc Lindsey, President and Co-founder at Avenue4 LLC


More under: IP Addressing, Policy & Regulation

So, You Claim to Have an Unregistered Mark! Is there Cybersquatting?

CircleID - Pon, 2017-12-18 18:08

Complainants have standing to proceed with a claim of cybersquatting under the Uniform Domain Name Dispute Resolution Policy (UDRP) if the accused "domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights" (4(a)(i) of the Policy). Within the first full year of the Policy's implementation (2000), Panels construed "rights" to include unregistered as well as registered marks, a construction swiftly adopted by consensus. Only if a complainant has rights may it proceed to the second and third requirements. Excluded from the term "rights" are "intent to use" marks (because their market presence looks to the future) and marks on the supplemental register (for lacking any secondary meaning).

This consensus came with a proviso, however, namely that complainant must prove its unregistered mark predated the registration of the accused domain name. (Predating is not required for registered marks; complainant has standing as long as its registered mark predates the filing of the complaint, but if it does not predate the registration of the domain name, it will have standing but no actionable claim). The current version of the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Jurisprudential Overview 3.0 states that

Specific evidence supporting assertions of acquired distinctiveness should be included in the complaint; conclusory allegations of unregistered or common law rights, even if undisputed in the particular UDRP case, would not normally suffice to show secondary meaning. In cases involving unregistered or common law marks that are comprised solely of descriptive terms which are not inherently distinctive, there is a greater onus on the complainant to present evidence of acquired distinctiveness/secondary meaning.

There have been, and continue to be, cases in which complainants hold registered marks postdating the domain name registration while alleging unregistered priority. As the Overview states, marks "comprised solely of descriptive terms ... [have] a greater onus ... to present evidence of acquired distinctiveness/secondary meaning." As a practical matter, what satisfies the requirement?

There are four possible factual patterns: 1) complainant makes a naked claim but lacks sufficient proof to establish an unregistered right [Barnes Crossing Auto LLC v. Jonathon Hewitt, SEO Sport, LLC., D2017-1782 (WIPO December 5, 2017) (<>, <>, <>, and three more)]; 2) complainant establishes that it does have an unregistered right [Marquette Golf Club v. Al Perkins, FA1706001738263 (Forum July 27, 2017) (<>)]; 3) complainant alleges its mark predates the domain name and satisfies the standing requirement by having a registered mark, but lacks proof to support the second and third requirements [Weeds, Inc. v. Registration Private, Domains By Proxy, LLC / Innovation HQ, Inc, D2017-1517 (WIPO November 23, 2017) (<>)]; 4) complainant has both a registered right postdating registration of the domain name and proof of an unregistered right predating the domain name [Biofert Manufacturing Inc. v. Muhammad Adnan / Biofert manufacturing, FA171000 1753132 (Forum November 27, 2017) (<>)]. If complainant cannot satisfy the first limb of the Policy, the complaint must be dismissed for lack of standing.

Unless complainants can prove their allegations of market presence predating registration of the domain name they will invariably fall short of satisfying their burden of proof. They may also fall short by waking up many years later to allege cybersquatting because the passage of time supports an inference the claim is manufactured for the occasion. Even though Panels generally reject laches as a defense, waiting likely supports respondent under 4(c)(i) of the Policy, particularly if it is conducting a bona fide business which can include investors in the business of buying and selling domain names.

Complainant in Weeds had standing based on its registered mark but it waited too long. Barnes Crossing Auto is not a waiting case. Complainant had a two-fold burden for standing: to prove a common law right to "BC" either separate from or joined with the well-known "Manufacturer Marks." The three-member Panel provided no formal answers, but it offered some thoughtful guidance on what was missing and would have been required. First, there was no evidence that "bc" alone had (or could have) earned secondary meaning, and second, there was no evidence Complainant had permission to claim infringement of the Manufacturer Marks: "[t]he only thing that is certain from the record is that Complainant is not the owner of the Manufacturer Marks." Nor was there persuasive proof of "'direct authorization' from Chevrolet to file its Complaint, yet the supporting evidence is merely an email (not a declaration) from a Chevrolet 'zone manager' (whose authority with respect to any trademarks is unknown) that 'Chevrolet has no objections to Barnes Crossing Chevrolet' (not the Complainant) 'filing the complaint as you outlined below.'"

Instead of ruling on the standing requirement, the Panel in Barnes Crossing Auto found the claim outside the scope of the Policy and dismissed the complaint. It explained that

Given the number of unresolved factual questions, especially the nature of the relationship between the parties and their intentions concerning the Disputed Domain Names, the Panel finds that this is primarily a contractual dispute between the parties that is not appropriate for resolution in this proceeding.

If one were to look for a mark composed of a "solely ... descriptive term[ ]," none would fit the portrayal so well as "weeds." In Weeds, Inc. the three-member Panel dismissed the complaint for failure to prove a prima facie case that Respondent (a domain investor) lacked rights or legitimate interests in the domain name. The facts are both interesting and confusing: Complainant alleges that its unregistered mark dates from 1966, although this is undercut by its trademark application, which states a first use in commerce of 2000. (Note: even before the Panel filed its award, Complainant commenced an action under the Anticybersquatting Consumer Protection Act (ACPA), so the denouement is yet to come.)

The Panel noted that Complainant had disclaimed "weeds" in its earlier registration for W. WEEDS INC. ("No claim is made to the exclusive right to use 'weeds' and 'Inc.' apart from the mark as shown."), while its later application for WEEDS was filed on an intent-to-use (1(b)) basis and was not registered until 2007 (Respondent registered the domain name in either 2001 or 2004). As the Panel observed, disclaimers have consequences:

Complainant contends that it has been using the mark WEEDS at common law since September 28, 1966, the date of incorporation of Complainant Weeds, Inc. However, Complainant fails to submit persuasive evidence that the term "weeds" had acquired secondary meaning at the time of the registration of the disputed domain name. Having in mind that "weed" is a descriptive, dictionary word, such specific evidence is essential for a complainant invoking a common law mark.

That conclusion pretty much explains the Panel's decision. Absent sufficient evidence of secondary meaning the decision essentially echoes the Jurisprudential Overview:

Relevant evidence demonstrating such acquired distinctiveness (also referred to as secondary meaning) includes a range of factors such as (i) the duration and nature of use of the mark, (ii) the amount of sales under the mark, (iii) the nature and extent of advertising using the mark, (iv) the degree of actual public (e.g., consumer, industry, media) recognition, and (v) consumer surveys.

While Panels have not been consistent in checking off each of these factors, they are consistent in insisting that complainant's burden is higher as the mark descends to the generic end of the classification scale. Weeds and BC are probably as far down as it is possible to go. Dictionary words, even when they have acquired secondary meaning, do not shed their primary meaning. Complainant in Fitness People B.V. v. Jes Hvid Mikkelsen, CAC 101587 (July 18, 2017) alleged that <> infringed its common law right to that combination. The Panel disagreed:

[Where the alleged mark] consists of two generic words that could be seen by the public to have very wide meanings ... [the] onus on the Complainant to show a clear secondary meaning of the words, i.e., another meaning in addition to the primary meaning of the words which links them to the complainant and its goods and services [is higher]."

The Panel concluded that "there is no evidence to establish that important element." Rather, the "Complainant in the present case has not proved or even attempted to prove in any understandable or persuasive way that it has such [an unregistered] trademark in FITNESS PEOPLE."

The skill in creating a proper record (whether complainant or respondent) is in laying out the evidence in a persuasive manner. How this is done is illustrated in Marquette Golf Club and Biofert Manufacturing (Respondents did not appear in either case). Complainant in Marquette Golf Club made "intense efforts to advertise and promote its golf club." Complainant in Biofert supported its claim for common law rights by submitting evidence of "substantial sales and extensive advertising and promoting [through which it has] become very well-known [ ] as identifying fertilizers and supplements for agricultural use originating from, sponsored by, or associated with Complainant."

Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP


More under: Domain Management, Domain Names, Intellectual Property, Law, UDRP

Net Neutrality Not a Serious Issue Outside America

CircleID - Nie, 2017-12-17 18:17

Most countries don't have to fear internet quality problems in the same way as is now possible in the USA.

The US competition watchdog has little power to hold telcos accountable for the nature of their broadband services. Back in 1996, broadband was classified as a content service rather than a telecom service.

So, for example, if a telco wants to provide preferred access to Google, it can sell Google a superior broadband service. This could create a two-speed internet: one lane for those who pay the premium, whose services get priority and higher speed, and a slow lane for the service providers who don't pay.

Because of this lack of regulatory protection for broadband, the US — under President Obama and FCC Chief Tom Wheeler — created a bandage solution in the form of net neutrality, aimed at protecting consumers against telcos exploiting this 'content' situation. These protections have now been removed.

The incumbents were against Net Neutrality because it prevented them from exploiting the generous content status they enjoy, which lets them block any serious retail broadband competition since they don't have to make broadband capacity available on a wholesale basis.

The situation in all of the other developed economies is totally different. Broadband is part of the telecoms regulatory regime, and this has created a well-functioning competitive retail market (in Australia, for example, with over 50 providers).

Competition, not bandage solutions such as Net Neutrality, is the solution to the American problem. If there is sufficient competition, the incumbents can't misuse their dominant position to create a two-tier broadband system. Furthermore, net neutrality is a rather blunt tool; one would like to allow providers to create the best possible broadband configuration for the services that they want to develop, and net neutrality rules would make that more difficult.

Even if, in more competitive markets, incumbents were able to create a two-tiered access system, and such a service were deemed anti-competitive, the regulator could step in and stop them.

As broadband is now again outside the protection of the US regulator (FCC), the big telcos can, if they want, exploit the fast lane/slow lane internet system.

It is clear that under President Trump there is little interest in giving the regulator the powers to intervene in such situations.

For many years there has been a serious "conflict of interest" within US politics whereby the three major incumbent telcos (who call themselves ISPs) give senators and congressmen an incentive to protect their interests. Over $100 million is spent annually on lobbying. Politicians are given this money for community projects and the like; voting against the interest of the incumbents would see an end to such handouts.

While it is important for all countries to keep a close eye on anti-competitive behavior, because of much better functioning competitive markets, it is highly unlikely that such misuse of market power is possible anywhere outside America (in developed economies).

One of the arguments incumbents used to get rid of regulation is that it hampers competition. Yet despite being one of the least regulated telecoms markets in the world, the USA has dropped down the international broadband ladder: from the top three or five twenty years ago to around position 15 today (depending on what is measured by whom). So that argument doesn't cut it. On the other side, in at least 15 countries where broadband is treated as telecoms, a better broadband service is available than in the USA.

The financial market in the USA has reacted in a subdued way to the announcement. Because telecoms regulation is such a hot political potato, investors don't see the current ruling as a final solution. It is highly likely that a next Administration will introduce new regulations, and this uncertainty is not stimulating the much-needed investment in the American telecoms market.

Written by Paul Budde, Managing Director of Paul Budde Communication


More under: Access Providers, Broadband, Net Neutrality, Policy & Regulation, Telecom

The Digital Geneva Convention Exists: Just Use It

CircleID - Sob, 2017-12-16 18:41

It is one of those surreal, ironic moments in time. This coming week, an event called the Internet Governance Forum (IGF) 2017 will be held at Geneva in the old League of Nations headquarters now known as the Palais des Nations. On its agenda is a workshop to discuss "A Digital Geneva Convention to protect cyberspace."

If the IGF participants, as they enter the Palais grounds, simply look in the opposite direction south across the Place des Nations, they would see, 100 meters away, a glass cube building provided by the Republic and Canton of Geneva. Two floors down, in the second sub-basement, are the archives that hold the existing Digital Geneva Convention to protect cyberspace — signed and ratified by every nation in the world. The archivist would probably make the documents available for view, but thanks to one of the most extensive digital archival initiatives of treaty instruments in the world, the entire series of Digital Geneva Conventions is available together with all the treaty conference materials going back 152 years. (The previous 15 years of convention materials are still in the Austrian State Archives in Vienna.)

The existing Digital Geneva Convention was crafted when the first digital networks were interconnected across national borders in 1850. Many of the basic cybersecurity issues were vetted for weeks among the nations present and provisions placed in the treaty instrument. Protection of users, national security, privacy, identity management, structured reporting, technical protocols — the provisions are all there. As new services, facilities, and technologies emerged over the subsequent decades, the provisions were evolved and expanded. The biggest expansions were those relating to radiocommunication, undersea cables, voice communications, broadcasting, satellite communications, digital networks, and datagram internets.

Further digital cybersecurity protections were undertaken as part of this process in 1988. Prior to then, the deployment of public datagram internets based on any of the multiple internet protocols was prohibited. When the treaty conference was held, the infamous first major internet cybersecurity incident occurred — the Morris Worm — which resulted in additional cybersecurity provisions being included in the treaty. The treaty instruments enabled the offering of public datagram internet services and the considerable array of supported applications when they came into force the following year, and they remain in force today — literally providing the basis in public international law for these services.

The existing Digital Geneva Convention is actually a comprehensive set of treaty instruments and technical standards together with a permanent organization with well-established, very effective, open processes, and state-of-the-art facilities provided by the Canton of Geneva. The scope encompasses all digital (and analog) communications, services, and technologies. The focus on effecting legal and policy agreements among nations on matters such as cybersecurity through ancillary technical specifications has proven an effective component in actually implementing meaningful cybersecurity capabilities. The permanent organization for the Digital Geneva Convention regime was given the name International Telecommunication Union in 1932. It owes the name to an intergovernmental treaty devised in 1920 by the U.S. Wilson Administration — the Universal Electrical Communications Union — to effect cybersecurity following events and technology developments during World War I.

Along with that history and stature as the sole global intergovernmental mechanism for digital security come additional features. It has a partnership with the U.N. and just about every other intergovernmental and industry body in the world. Its published documents are freely available on-line with persistent identifiers, and exist in five languages. It has one of the best information systems and meeting support capabilities in the world. It curates best-of-breed cybersecurity specifications from other bodies and republishes them so as not to reinvent work already accomplished. And, it even has a free headquarters campus of elegant buildings in a country that provides ready access to every nation's citizens, in the most international city in the world, with one of the world's best air traffic hubs.

Following the legalization of international internets in 1988, the cybersecurity components were implemented through joint ITU-ISO specifications. Many of these were implemented by companies such as Microsoft — whose secure eMail platform is based on ITU-T X.400. The standards additionally included trusted identity management, PKI, network management, transport and network layer security, and threat sharing — that have been widely adopted and enhanced by other standards bodies, government agencies, and industry implementers. These platforms remain essential components of cybersecurity today in all networks and services.

So, the first obvious question is: why on earth would one try to invent another Digital Geneva Convention? Such a convention would have to replicate everything in the existing Convention ensemble that has existed and evolved over the past 167 years, and get the same 193 Nation States to sign and ratify its provisions. Furthermore, the materials introduced in the IGF2017 workshop on this topic are far less comprehensive, are immature, and ignore long-standing public international law that already exists.

The second obvious question is: why don't the participants simply exit the Palais, and walk across the Place des Nations plaza, over to the ITU campus and begin participating in the considerable array of cybersecurity technical, development, and legislative activities underway — together with all the member countries and participants. That could start by visiting the best communications network reference library in the world on the top floor above the archives. Participation can occur directly, or through a national administration, or through the many cooperating organizations.

On an especially important last point — if anyone is seriously interested in advancing the existing Digital Geneva Convention for cybersecurity to enhance it with any provisions or capabilities felt necessary, they have the opportunity to do that with all the nations of the world at the 2018 Plenipotentiary Conference at the end of October. Most national Administrations and many industry bodies are now beginning preparations.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC


More under: Cybersecurity, Internet Governance, Policy & Regulation, Privacy

A Closer Look at Why Russia Wants an Independent Internet

CircleID - Pią, 2017-12-15 21:46

Actually practical and not necessarily a problem. The Security Council of the Russian Federation, headed by Vladimir Putin, has ordered the "government to develop an independent internet infrastructure for BRICS nations, which would continue to work in the event of global internet malfunctions." (RT, the Russian government-funded news service.) RT believes "this system would be used by countries of the BRICS bloc — Brazil, Russia, India, China and South Africa."

Expect dramatic claims about Russia's plan for an alternate root for the BRICS, one not under Western control. The battle over ICANN and domain names is essentially symbolic. Managing the DNS is a relatively insignificant task, more clerical than governing. ICANN Chair Steve Crocker pointed out that ICANN has very little to do with policy.

Columbia University Professor Eli Noam and then ICANN CEO Fadi Chehadé have both said such a system is perfectly practical as long as there is robust interconnection.

Noam has pointed out that "multiple Internets" might actually be a good thing. Fadi agreed that this could work but worried about who would protect that "robust interconnection."

Some have already discussed this as "splitting the Internet," with the implication that it would destroy the net and be a major human rights issue. The U.S. walked out of the ITU, the U.N. telecommunications organization, over issues like this. (I've asked some of the likely people for comments and will pass them on verbatim.)

Some will claim this is about blocking free speech, but that's rhetoric. Russia doesn't need to fiddle with the DNS for censorship, as the Chinese have demonstrated.

The biggest obstacle to the Russian proposal is that China may not be interested. After the WCIT, they realized that ICANN and the DNS are side issues not worth bothering about. They de-emphasized the ITU because the Americans made it obvious they would block anything they didn't like. Instead, they have been building alternate institutions, including the World Internet Conference in Wuzhen and the BRICS conferences. Tim Cook of Apple and Sundar Pichai of Google paid homage to the Chinese in Wuzhen.

The Chinese have put their main effort where decisions that matter are made. Wireless standards are set by 3GPP, where nothing can be approved without China's consent. In 5G, "the Chinese hold more than 30 key positions in standards organizations, with 23% of the voting power, 30% of the manuscripts, and 40% of the lead projects." They are leaders at the IEEE, where Wi-Fi standards are set. Anywhere the future is being designed, I see many Chinese, from SDN to autonomous cars.

The American battle at the ITU is proving to be a historic mistake. A few compromises there would have kept the institution central, and the U.S. and its allies would have continued in a strong role even while sharing some power with others. The result instead: new centers of activity where the U.S. government has very little influence.

Why does Russia want an independent Internet?

Russia's communications minister, Nikolay Nikiforov, worries about "a scenario where our esteemed partners would suddenly decide to disconnect us from the internet." I think that's highly unlikely, but Nikiforov points out that "recently, Russia is being addressed in a language of unilateral sanctions: first, our credit cards are being cut off; then the European Parliament says that they'll disconnect us from SWIFT."

U.S. Senators have called for much stronger sanctions. FCC Commissioner Michael O'Rielly has suggested defunding the ITU unless the U.S. gets more power. (He doesn't seem to realize the U.S. already has an effective veto over any ITU action, but that's another story.)

It makes sense for the Russians to prepare for such a contingency, as the Cold War has been warming up on both sides. Britain's top military chief, Air Chief Marshal Sir Stuart Peach, just made headlines warning that Russian subs "could CRIPPLE Britain by cutting undefended undersea internet cables."

Why the "splitting the Internet" meme was invented

ICANN's American contract became a symbol of "control of the Internet." Actually, ICANN has less real power than France Telecom/Orange, Google, Alibaba, or Facebook. But as the center of the Internet moved south, other countries believed they should have a share of control. The Americans and their friends resisted change, though the most important opposition came from the U.S. security organizations.

Many people in good faith saw this as a conflict over freedom of speech, including Vint Cerf and Kathy Brown of ISOC. They were the public face of the dispute, but the real power came from the U.S. government. The main battle was at the ITU WCIT in Dubai, which I attended. So did 14 representatives of U.S. three-letter agencies (NSA, CIA, DHS, DOD). They weren't there to protect freedom of speech; their mission was to protect the NSA's ability to do what the NSA does so well.

Larry Strickling, head of the U.S. delegation at WCIT, explained to me that the battle was necessary "unless you want Russia or China to take over the Internet." (Anyone objective would realize that Larry, generally a good guy, was off base. The other side would have been happy with minor concessions, but we gave them nothing.)

The U.S. ran a multi-million-dollar effort that resembled a domestic political campaign: it found emotional issues that would win support and hammered them home worldwide. Political pros led by the U.S. State Department dominated the media. The folks at the ITU tried to answer back with facts but were totally outclassed as campaigners.

One theory they invented and propagated was that collecting any taxes from companies like Google would lead them to boycott Africa, crippling education. In reality, Google was not going to abandon a billion potential viewers over modest taxation.

They also circulated everywhere a picture of Secretary-General Touré shaking hands with the president of one of the ITU's largest members and spread the rumor that he was a Russian stooge. While Hamadoun Touré did his graduate work in Russia, he was an ardent capitalist. He saw Russia during the era of stagnation and was not impressed. In fact, he wanted to do his Ph.D. in Canada but couldn't get a scholarship; Russia was the only country that offered one. He said the polite things diplomats say about a powerful country, but in private he was very clear.

There are hundreds of "Internet governance" professionals concentrating on ICANN and venues like the IGF. Some are well paid; most are of good faith. Many are more interested in access and getting everyone connected, and don't realize they should be working elsewhere.

The picture is the ICANN board in 2016. The Internet doesn't look like this anymore.


I went to, searched Internet, and found this story. Few have picked it up in English except the Russian RT service.

The story began at,

I found more at,

The ITU, ICANN, and WCIT reporting are mine or from the organization's websites.

Written by Dave Burstein, Editor, DSL Prime


More under: DNS, ICANN, Internet Governance

Cuban Satellite Connectivity - Today and (Maybe?) Tomorrow

CircleID - Fri, 2017-12-15 16:58

Source: Dyn Research

Last January, Doug Madory of Dyn Research reported on Cuban traffic, noting that C&W's share had increased. And this week Madory reported that ETECSA had activated a new internet transit provider, medium-Earth-orbit (MEO) satellite-connectivity provider O3b Networks ("Other 3 billion"), replacing geostationary satellite provider Intelsat. (They have also added Telecom Italia, which, until 2011, owned 11% of ETECSA, but I will save that for another post.)

O3b's MEO satellites orbit at an altitude of around 8,012 km above the equator, while Intelsat's geosynchronous satellites sit at around 35,786 km. The time for a data packet to travel from Earth to an O3b satellite and back is therefore significantly less than to an Intelsat satellite. This move to O3b may be related to ETECSA's recent decision to offer SMS messaging service to the US (at an exorbitant price), and it will surely improve the speed of interactive applications.
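The latency difference is easy to quantify: light-speed propagation alone sets a floor on round-trip time that is roughly four to five times higher for GEO than for O3b's MEO orbit. A minimal sketch using the altitudes cited above (a best case that ignores ground-station geometry, on-board processing, and terrestrial hops, all of which add to real-world figures):

```python
# Minimum physical round-trip delay for one satellite hop,
# assuming the satellite is directly overhead.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def min_round_trip_ms(altitude_km: float) -> float:
    """Ground -> satellite -> ground propagation delay in milliseconds."""
    return 2 * altitude_km / C_KM_PER_S * 1000

print(f"O3b MEO (8,012 km):       {min_round_trip_ms(8_012):.0f} ms")
print(f"Intelsat GEO (35,786 km): {min_round_trip_ms(35_786):.0f} ms")
```

Those physical minimums, roughly 53 ms versus 239 ms per hop, are why the switch matters for interactive applications even when raw bandwidth is similar.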

That is today's situation as I understand it, but now I want to speculate on the future of Cuban satellite connectivity — say in the early 2020s.

First a little background on O3b Networks. O3b is a wholly owned subsidiary of SES, but it was founded in 2007 by Greg Wyler, who has since moved on to a new venture called OneWeb. While O3b provides service to companies like ETECSA, OneWeb plans to also provide fast global connectivity to individuals in fixed locations like homes and schools as well as the "Internet of things."

OneWeb plans to connect the "other 3 billion" people using a constellation of around 1,600 satellites in low-Earth orbit (LEO) at an altitude of 1,200 km and another 1,300 in MEO at 8,500 km. They are working with many vendors and partners and plan to launch their first satellites in March 2018. They will begin offering service in Alaska in 2019 and hope to cover all of Alaska by the end of 2020. By 2025 they expect to have 1 billion subscribers, and their mission is to eliminate the global digital divide by 2027.

Now, back to Cuba. ETECSA is doing business with Wyler's previous company O3b. Might they also be talking with his current company, OneWeb? It takes time to launch hundreds of satellites, so service is being phased in — might Cuba come online sometime after Alaska? By connecting Cuba, OneWeb would gain publicity, the goodwill of many nations and access to a relatively well-educated, Internet-starved market and it would enable Cuba to quickly deploy broadband technology.

As I said, this is pure speculation. OneWeb faces significant technical, business and political challenges and may fail. Politics would be particularly challenging in the case of Cuba. Both the US and Cuba would have to make major policy changes, but maybe the time is right for that — the Cuban government will change in 2018, and the US government is likely to change in 2020 when Alaska comes online.

OneWeb has established a relationship with ETECSA through O3b, but other companies, including SpaceX and Boeing, are working on similar LEO projects. Might ETECSA be talking to the others? You can see a survey of LEO satellite plans and issues here.

Written by Larry Press, Professor of Information Systems at California State University


More under: Access Providers, Wireless

Cyberattack Causes Operational Disruption to Critical Infrastructure Using New Malware TRITON

CircleID - Fri, 2017-12-15 00:54

A new malware designed to manipulate industrial safety systems was deployed against a critical infrastructure organization that provides emergency shutdown capability for industrial processes, according to a report released today. Security firm FireEye says: "This malware, which we call TRITON, is an attack framework built to interact with Triconex Safety Instrumented System (SIS) controllers. ... The attacker gained remote access to an SIS engineering workstation and deployed the TRITON attack framework to reprogram the SIS controllers. During the incident, some SIS controllers entered a failed safe state, which automatically shutdown the industrial process and prompted the asset owner to initiate an investigation. The investigation found that the SIS controllers initiated a safe shutdown when application code between redundant processing units failed a validation check — resulting in an MP diagnostic failure message."


More under: Cyberattack, Cybersecurity, Malware
