Subscribe to CircleID content
Latest posts on CircleID
Updated: 3 hours 43 minutes ago

Broadband Consumption Continues Explosive Growth

Thu, 2020-02-20 23:01

OpenVault just released its Broadband Industry Report for 4Q 2019, which tracks how the US consumes data. The results are as eye-opening as OpenVault's reports from the last few years. OpenVault has been collecting broadband usage data for more than ten years.

As usual, the OpenVault statistics are a wake-up call for the industry. The most important finding is that the average monthly data consumed by households grew by 27% from 2018 to 2019; in the fourth quarter of 2019, the average home used 344 gigabytes of data, up from 270 gigabytes a year earlier. Note that consumption is a combination of download and upload usage — with most of it being downloads.

The monthly weighted average data consumed by subscribers in 4Q19 was 344 gigabytes, up 27% from 4Q18’s weighted average of 270.2 GB and up almost 69 GB, or 25%, from 3Q19.
Source: OpenVault Broadband Industry Report (OVBI) 4Q 2019

For the first time, the company compared homes with unlimited data plans to those that have plans with data caps. They reported that homes with no data caps used 353 gigabytes per month, while homes with data caps used 337 gigabytes per month. That statistic would suggest that homes with data caps try to curb their usage to avoid overage charges.

Interestingly, median usage differed significantly from average usage. The median is the midpoint: median usage was 191 gigabytes per month, meaning half of US homes used more than that and half used less. Looking at the numbers, I have to suppose that the median is far below the average because of the many homes on slow DSL that can't consume much broadband.
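The gap between the average (344 GB) and the median (191 GB) is the signature of a right-skewed distribution. A minimal sketch with hypothetical household figures (not OpenVault's data) shows how a handful of heavy users pulls the mean well above the midpoint:

```python
import statistics

# Hypothetical monthly usage (GB) for ten homes; a few heavy users sit
# far above the rest, as fiber homes do relative to slow DSL homes.
usage_gb = [40, 60, 90, 120, 160, 210, 280, 450, 700, 1300]

mean_gb = statistics.mean(usage_gb)      # pulled upward by the heavy tail
median_gb = statistics.median(usage_gb)  # midpoint: half above, half below

print(mean_gb, median_gb)  # 341.0 185.0
```

Here the mean exceeds the median by a wide margin even though most homes use far less than the average, which is the pattern the OpenVault numbers suggest.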

The report also looks at power users — homes that consume a lot of broadband. Nearly 1% of homes now use over 2 terabytes per month, and 7.7% use over 1 terabyte per month (a terabyte is 1,000 gigabytes). The percentage of homes using over 1 terabyte climbed from 4% a year earlier. This statistic is important because it shows a quickly increasing number of homes that will hit the 1 terabyte data caps of ISPs like Comcast, AT&T, Cox, and CenturyLink. I clearly remember Comcast saying just a few years ago that almost no homes had an issue with its data caps, but that can no longer be true.

Homes are starting to buy 1 gigabit broadband when it's available and affordable. 2.8% of homes in the country now subscribe to gigabit speeds, up 86% from the 1.5% of homes that bought gigabit in 2018.

54% of homes now purchase broadband plans with speeds of 100 Mbps or faster, and another 23.6% subscribe to plans between 50 Mbps and 75 Mbps, meaning nearly 78% of homes subscribe to plans faster than 50 Mbps. The average subscribed speed grew significantly since 2018, from 103 Mbps to 128 Mbps. These subscriber statistics should shame the FCC for deciding to stick with the 25/3 Mbps definition of broadband. The agency is clearly setting a target speed for rural America that is far behind the reality of the marketplace.

OpenVault made one comparison to Europe, showing that we consume far more broadband here: average consumption in 4Q 2019 was 344 gigabytes in the US versus 196 gigabytes in Europe.

As OpenVault's statistics have shown in the past, the demand for broadband is not abating but continuing to explode. A 27% annual increase in broadband consumption means demand is still doubling every three years. If that growth rate is sustained, our networks need to be prepared to carry about 8.6 times more data than today within a decade. That's enough to keep network engineers up at night.
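The projection above is just compound growth; a quick check of the arithmetic:

```python
# 27% annual growth: usage roughly doubles every three years and
# grows about 8.6-fold over nine years, i.e. within a decade.
rate = 0.27

three_year = (1 + rate) ** 3   # ~2.05x, a doubling every ~3 years
nine_year = (1 + rate) ** 9    # ~8.6x

print(round(three_year, 2), round(nine_year, 1))  # 2.05 8.6
```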

Written by Doug Dawson, President at CCG Consulting

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom

The Sale of .ORG Registry: Continuing the Conversation We Should Be Having

Thu, 2020-02-20 15:02

On 11 February, I participated in a discussion about the pending sale of PIR at American University Washington College of Law, appropriately titled, The Controversial Sale of the .ORG Registry: The Conversation We Should Be Having. It was great to have a balanced discussion, free of some of the emotions that have often made it hard to discern the realities of the transaction. Certain misapprehensions arose in the discussion that we lacked the time to explore fully, so I want to take those up here.

Misapprehension #1: PIR is being sold for the wrong price

Some are suggesting that the $1.135 billion price for PIR is too low. One version of this argument compares the number of domains under management by PIR to those of VeriSign, and then draws conclusions about PIR's putative value based on VeriSign's market capitalization. The math may be right as far as it goes, but it's misapplied.

It is important to note that, because it operates its own registry back end, VeriSign is in a line of business that PIR is not. One would expect, then, some differences between the valuation of PIR and the valuation of VeriSign.

The valuation of PIR in this transaction is roughly 27 times EBITDA, a ratio reflecting one standard way of evaluating company valuations. According to public reports, VeriSign's Enterprise Value/EBITDA is 28.66, which suggests that the value of PIR in this proposed transaction is approximately correct.
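A back-of-the-envelope check of that comparison (the implied EBITDA below is my derivation from the reported price and multiple, not a disclosed figure):

```python
# Implied EBITDA at the reported price and ~27x multiple, then the
# value that EBITDA would carry at VeriSign's reported 28.66x multiple.
price = 1.135e9          # reported sale price for PIR
pir_multiple = 27.0      # reported valuation multiple
vrsn_multiple = 28.66    # VeriSign's reported EV/EBITDA

implied_ebitda = price / pir_multiple            # ~$42M
at_vrsn_multiple = implied_ebitda * vrsn_multiple

print(round(implied_ebitda / 1e6, 1))    # 42.0 (millions)
print(round(at_vrsn_multiple / 1e9, 2))  # 1.2 (billions)
```

The two multiples put the valuations within a few percent of each other, which is the sense in which the price is "approximately correct."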

Others claim that the Internet Society will get not too little for PIR, but too much. They claim PIR will need to gut services, underspend on technical infrastructure, and increase prices by large amounts to ensure a return.

Those in the best position to know about ongoing business expenses — PIR and its back-end service provider, Afilias — have stated clearly that such service and infrastructure cuts are not contemplated. PIR will remain the operator of .ORG and its contracts will continue. Since PIR will no longer deliver a significant part of its revenue to the Internet Society, that revenue will now be available for investment in the business as well as to provide a return on investment. So, there is no reason to suppose that the sale price is too high.

Misapprehension #2: This is a dangerous leveraged buyout with lots of debt

People hear a private equity firm is involved and suppose this case will necessarily resemble other celebrated cases where a private equity firm acquired a company with a lot of debt and bankrupted the acquired company. I understand this concern; however, not all private equity works that way, and this particular transaction does not have a lot of debt. There is a loan of a little over $300 million involved. Although that sounds like a lot, in a transaction worth more than $1 billion, it is not. This is not a "leveraged buyout” where 90% of the payment is borrowed money.

Misapprehension #3: A public auction should have been held

It seems intuitive that the surest way to get the best price for a business is to put it up for sale, advertise it, and run an auction among prospective bidders. This idea, while seemingly appealing, turns out to be bad from a business perspective. The problem is what happens while the "for sale" sign hangs on the door.

In such cases, the very announcement causes uncertainty; some employees seek other jobs. Meanwhile, the distraction consumes the employees who remain, devaluing the very asset being sold.

The Internet Society's Trustees and PIR's Directors appropriately consulted with both financial and legal advisors. A wide-ranging public auction would have been bad for PIR and .ORG. Multiple suitors, however, submitted bids that were evaluated. In addition, the Boards of both the Internet Society and PIR obtained and relied upon a valuation report provided by an independent firm to ensure that the transaction was no less than fair market value.

The risk of depressing PIR's value is also why it would have been unacceptable for the Internet Society and PIR to consult widely about the sale: it would have done damage to PIR, which the Trustees and Directors could not do. Cases like this are why the Internet Society has Trustees, and PIR has Directors, who are selected the way they are. Without being "community representatives" per se, they bring to their position the perspective from the community that selected them. And they make prudent decisions for the organization.

Misapprehension #4: Considerations from 2002 override everything

Some of those concerned about the pending transaction focus only on an event of nearly two decades ago. The 2002-3 reassignment of .ORG, however, was a complete redelegation: In current ICANN-speak it would be a change of direct control. In today's proposed transaction, PIR will remain the registry operator with the same contracts and provider. So it's inaccurate to call this "the sale of .ORG."

Second, many of the conditions from 2002 no longer apply: The Internet has changed. People used to see domain names a lot when using the Internet, especially in the address bar of a web browser. On modern mobile devices and tablets, nobody types a domain name. When .ORG was reassigned, the iPhone was still 5 years in the future. Moreover, in 2002 there were only a few gTLDs. Many new TLDs started operating more recently, and now .ORG has thousands of competitors.

These realities mean that things have changed for PIR and .ORG. It's not some fatal change, but as ISOC Trustee Mike Godwin has stated, the status quo is not the best way to ensure .ORG thrives for the long term.

Taking all these factors into account and exercising the sober duty of Trustees, the Internet Society and PIR Boards made a big decision, ensured PIR was priced appropriately, prevented loss of value, and performed extensive due diligence activities to protect the Internet and .ORG. In the process, we did what I am sure is best for the prosperity of the Internet and the .ORG registry, enabling these priceless resources for tremendous good to get even better.

All parties involved in the transaction are committed to the conversation we should be having — one where facts prevail. For more information about the sale, follow

Written by Andrew Sullivan, President and CEO, Internet Society (as of Sept 1, 2018)

More under: Domain Names, Registry Services

ISOC and the PIR Sale: Lessons Being Learned

Thu, 2020-02-20 06:25

The PIR/.ORG transaction is a watershed moment for ISOC. What had once seemed (at least to ISOC and its Board) to be ISOC's chance to transform its finances now seems to many to be a threat to ISOC's essence, and even its very existence.

From the ISOC-NY perspective, this entire affair points out the paucity of community-involved multistakeholder participation in ISOC's critical decision-making processes (and other processes, too). Navigating the very different worlds of non-profits, ICANN and billion-dollar private equity transactions has been a huge challenge for ISOC. This has exposed cracks in ISOC's processes and in its sense of self at every point where decisions had to be weighed and made. ISOC's claim to be even moderately multistakeholder in governance has been found wanting.

A fair amount of the (most visible) opposition to the sale is alarmist, ill-informed and rife with self-dealing. But that doesn't make the sale a Good Thing. It also doesn't make all of their criticisms wrong or unfounded (even if they are overblown).

Frustratingly, this fervent opposition tends to obscure a number of the more thoughtful and analytical critics and criticisms of the sale, coming from a variety of different angles. Those who are not taking a hard line in opposing the sale are not supporters of the sale (though some of the hardliners want to say they are or convert them to their cause). This is the skeptical but undecided "middle" portion of the community, trying to think this through and avoiding rushed judgments, even if they feel a bit queasy.

The ultimate positions of these swing groups are important, maybe even more important than those of the entrenched opposition. In one version of reality, these folks would have swung toward supporting the deal, their concerns addressed, their voices counted. We are not in that version of reality. For better or worse, I think these "swing groups" are now swinging toward the opposition, or at least putting on the brakes until significant concerns can be addressed.

The demise of the summit meeting planned for the weekend of February 22 shows how troubled this whole steaming stew has become. Speculating wildly, it sounds to me like some of the hardliners pulled out of (or were never in) the meeting because they have a plan, something up their sleeves, perhaps even a counterattack in mind. This left the more moderate but concerned contingent in the lurch. Apparently, this in turn caused one of the deal participants to shrink the meeting to irrelevancy, perhaps because the participants wanted to deal with the noisy hardliners rather than the more low-profile opponents left on the agenda. This move further alienated those in the moderate camp.

This is a negative outcome for those who want the deal to go forward, and for all those involved in the deal regardless of the outcome.

The moderates were at least fact-checking the hardliners while still demanding facts and changes from the deal team. Potentially, at least some of this group could have been brought around to support the sale, with some significant safeguards and adjustments. Without some serious support from within this group, the deal participants, their advisors, and the few active proponents of the deal look pretty isolated.

The traditional playbook of M&A dealmakers at this point would likely be to "Get The Deal Done," by any means necessary. The Proskauer letter is solidly within that tradition. The traditional "Get The Deal Done" playbook would focus efforts on those with the clear, formal power to stop the deal (ICANN, government regulators, AGs), convincing, mollifying, co-opting, neutralizing, or defeating them. Meanwhile, the noise of the rabble would be more or less ignored (and guess what? We're all rabble). The deal participants would be limited to carefully-crafted statements and sound bites and carefully orchestrated situations. No matter how many words, ultimately they would say very little.

The traditional playbook has certainly informed the buyer's decisions. It has seemed to inform much of PIR and ISOC's thinking as well.

The traditional playbook has been a disaster at every turn. If the deal participants keep following this playbook, there's still a possibility the deal will get through and close, while leaving scorched earth in every direction and a persistent stench wafting over the land. More likely every day, taking the traditional approach will continue to harden and grow the opposition to the deal and spawn more significant, powerful and effective opponents and methods of killing the deal.

The private equity veterans, investment bankers and lawyers involved in the deal have likely been very persuasive in convincing the other deal participants to "stay the course." Sadly, these experts had little or no idea what they were getting themselves into. The world of investment banks, private investment firms, Big Law, big finance, and private equity could not be more different from the world of domain names, ICANN, non-profits, domain investors, social activists and Internet pioneers. This is a culture clash.

As someone with experience in both worlds, it's been frustrating to watch this slow-motion train wreck. At every major inflection point, I've thought I should tell ISOC that "this is going to get worse before it gets better." And I hoped for the best. At this point, the message feels like "this is going to get worse, and it may not get better." Unfortunately, this is almost as much a product of the way the deal has been handled as it is a product of the fundamentals of the deal.

To my mind, the only way this deal gets done in a satisfactory way (and maybe if at all) is if there is a radical restructuring of the deal, of the buyer, of the post-ISOC shape of PIR combined with an embrace of radical transparency, the adoption of genuine accountability, and a true multistakeholder stake and meaningful voice in PIR.

No matter what the outcome, ISOC will come through this facing some real challenges. It is dented and scuffed. It didn't help that PIR and Ethos have adopted a low public profile (at least in person), leaving ISOC (and particularly Andrew Sullivan) swinging in the wind while trying to explain and defend the transaction, including parts that have nothing to do with ISOC. This has been unfair to ISOC (and particularly Andrew Sullivan).

Though worse for wear, ISOC is still structurally sound. It is not a given that ISOC will remain so as things move forward. How ISOC handles matters, how it defines itself, how it makes decisions and how it communicates from this moment on (both during and after the denouement of this entire affair), will speak volumes about its long-term identity and even viability. ISOC needs to figure this moment out, and who it wants to be.

I, for one, want ISOC to come through this better and stronger than before, more multistakeholder, more meaningful and more engaged. A noisy few want quite the opposite (and quite a number of them want something for themselves). Many others opposing the sale know little about ISOC, and may not have heard of it at all just a few months ago. Most of what these people have heard is so negative (some of it true, some slanted and some "fake news") that ISOC is merely a cartoon villain to them. This increases both the difficulty and the stakes for ISOC "getting it right" from now on.

The rest of this month (and beyond) will be a true test for all involved. Innovative, inclusive, empathic thinking will likely be rewarded. Accepting outcomes that are not any one party's ideal solution will be required if there are going to be real winners. (Not surprisingly, this is often the successful result of the bottom-up consensus-driven multistakeholder process.) If the various participants choose to battle it out instead, it seems likely there will be no real winners — only survivors.

Written by Greg Shatan, Partner, Moses & Singer; President, ISOC-NY; ICANN Participant

More under: Domain Names, Registry Services

ICANN to Hold First-Ever Remote Public Meeting

Thu, 2020-02-20 01:16

The following announcement was issued today by ICANN:

The Internet Corporation for Assigned Names and Numbers (ICANN) today announced that its ICANN67 Public Meeting, which was to be held in Cancún, Mexico, will now be held via remote participation-only. This decision was made as a result of the COVID-19 outbreak, considered a public health emergency of international concern by the World Health Organization.

The meeting, scheduled for 7-12 March 2020, marks the first time in ICANN's history that it will hold a Public Meeting solely with remote participation.

Each ICANN Public Meeting attracts thousands of attendees from more than 150 countries. With cases in at least 26 of those countries, there is the potential of bringing the virus to Cancún and into the ICANN meeting site. If this were to happen, attendees, staff, and others who come in contact with an infected individual could be accidentally exposed to the virus.

COVID-19 continues to be a rapidly evolving global situation, with new cases emerging daily.

"This is a decision that the ICANN Board has been considering since the outbreak was first announced and it is one that we haven't taken lightly," said Maarten Botterman, ICANN Board Chair. "We know that changing this meeting to remote participation-only will have an impact on and cause disruption to our community; however, this decision is about people. Protecting the health and safety of the ICANN community is our top priority."

The community will have many questions about travel arrangements, scheduling, and other meeting-related issues. ICANN will consult with community leaders and groups to focus the virtual program on the most essential sessions, and will publish a Frequently Asked Questions (FAQ) page in the coming days.

ICANN thanks its regional partners in Latin America and the Caribbean (LAC) who had worked tirelessly to host this meeting in Cancún. We appreciate their understanding and we look forward to returning to the LAC region for ICANN70 in 2021.

Remote participation is an integral part of any ICANN Public Meeting, but it will be vastly expanded for ICANN67, and will leverage the robust technology platform in use by the community today.

Those interested in attending the remote meeting should still register here, if they have not done so previously. To learn more about remote participation, visit ICANN Public Meetings.

ICANN will continue to make further announcements as circumstances warrant. In the meantime, ICANN org will hold a webinar to provide a short update and take questions. The webinar will be held on Thursday, 20 February at 1800 UTC.

ICANN also is reviewing upcoming meetings, such as the GDD Summit in Paris and the ICANN68 Meeting in Malaysia. So far, no decisions have been made and these are proceeding as planned. ICANN will keep the community informed of any changes.

More under: ICANN

Troubling Efforts to Distort and Undermine the Multistakeholder Process

Wed, 2020-02-19 00:45

ICANN's request for comment on amending the .com registry agreement to restore Verisign's pre-2012 pricing flexibility ended last Friday and, with 8,998 responses submitted by stakeholders, may have been a multistakeholder version of the St. Valentine's Day Massacre. Public interest in .com pricing is understandably high but the sheer volume of responses — nearly three times the number of comments submitted this summer on deregulating .org pricing — also suggests a show of force from stakeholders still outraged by ICANN's decision to deregulate .org pricing despite an overwhelming 98.1% of stakeholder comments opposing the move.

Stakeholders became further inflamed when, shortly after .org pricing was deregulated, it was announced that Public Interest Registry (PIR), which operates the .org registry, was being sold to private equity interests in a complex deal involving several ICANN insiders and worth more than a billion dollars. The resulting controversy is ongoing, and California's Attorney General has launched an investigation.

Given ICANN's disregard for their views on .org, stakeholders are skeptical, if not cynical, about their ability to influence decisions on .com pricing — particularly because of the quid pro quo where ICANN will be paid $20 million over five years by Verisign after approving .com price increases. An unscientific analysis of responses reveals stakeholder sentiment to be a mix of outrage and frustration along with pessimism and resignation about a process that is seen as rigged. If negative sentiment persists or worsens, then it may lead to a crisis of confidence in ICANN. This may bring into question the integrity and legitimacy of private sector-led multistakeholder Internet governance such that momentum is added to efforts led by Russia and China to establish authoritarian and censored versions of the Internet.

Interestingly, Verisign submitted one of the last responses, an unsigned letter to ICANN with the subject "Troubling Efforts to Distort and Undermine The Multistakeholder Process." The letter is rather bombastic, accusatory, and mostly wrong. But it also offers a rare glimpse into the mindset of the Internet's dominant registry operator that bears scrutiny.

The letter opens by asserting the belief that Amendment 3's changes are "consistent with those approved by the United States Government." However, this is not germane — the United States Government is not a party to this Amendment or to the .com Registry Agreement itself. Verisign and the U.S. Government, through the National Telecommunications and Information Administration (NTIA), maintain a separate but related Cooperative Agreement, which was modified in November 2018 when Verisign and NTIA agreed to Amendment 35, which states, in part, that:

Without further approval by the Department, at any time following the Effective Date of this Amendment 35, Verisign and ICANN may agree to amend Section 7.3(d)(i) (Maximum Price) of the .com Registry Agreement to permit Verisign in each of the last four years of every six year period, beginning two years from the Effective Date of this Amendment 35 (i.e., on or after the anniversary of the Effective Date of this Amendment 35 in 2020-2023, 2026-2029, and so on) to increase the Maximum Price charged by Verisign for each yearly registration or renewal of a .com domain name up to seven percent over the highest Maximum Price charged in the previous calendar year. (emphasis added)

Rather than requiring price increases, NTIA merely says that Verisign and ICANN "may agree" to price increases. If NTIA had intended for Amendment 35 to require price increases, then this provision would have said that ICANN and Verisign "shall agree" and more than likely would have included a deadline for doing so. Amendment 35 is NTIA acting, on behalf of the U.S. Government, to regulate wholesale .com pricing by setting out and prescriptively describing what may be — not shall be — agreed upon by Verisign and ICANN for purposes of Maximum Price of .com registrations.
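As a sketch of the ceiling Amendment 35 describes (assuming, hypothetically, the then-current $7.85 wholesale price as the starting Maximum Price; the increases are permitted, not required):

```python
# Up to 7% increases in each of the last four years of a six-year period.
start_price = 7.85  # assumed starting wholesale price for a .com registration
price = start_price
for year in range(1, 7):   # one six-year period
    if year >= 3:          # increases allowed only in the last four years
        price *= 1.07

print(round(price, 2))      # 10.29: the ceiling after one full period
print(round(1.07 ** 4, 3))  # 1.311: up to ~31% per six-year cycle
```

Compounded, the cap allows roughly a 31% rise per cycle, which is why the "may agree" versus "shall agree" distinction matters so much.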

Reference is often made to an agreement between Verisign and ICANN that is part of a 2016 extension of the .com Registry Agreement (RA) and which calls for ICANN and Verisign to "flow through" to the RA any relevant changes that are made to the Cooperative Agreement. Regulating pricing is a messy job for which ICANN has shown no real aptitude or interest. But NTIA has delegated to ICANN an important and potent tool that offers new leverage for helping to restore balance to its relationship with Verisign. "Flow through" doesn't mean "rubber stamp," and ICANN should insist that implementation of NTIA-regulated pricing flexibility be predicated upon an assessment of benefit to the public interest — which is different from the benefit of $20 million being paid to ICANN. This would be a deliberate act of responsible DNS stewardship — and would also demonstrate that ICANN has a rudimentary grasp of Negotiation 101 — that doesn't require ICANN to regulate pricing. The most obvious starting point for any public interest review is with the expectations communicated in NTIA's announcement of Amendment 35, in November 2018, which stated:

Amendment 35 confirms that Verisign will operate the .com registry in a content neutral manner with a commitment to participate in ICANN processes. To that end, NTIA looks forward to working with Verisign and other ICANN stakeholders in the coming year on trusted notifier programs to provide transparency and accountability in the .com top level domain. (emphasis added)

What benefit can there be to the public interest by serving the pricing flexibility "dessert" before the transparency and accountability "peas and carrots" have been cleared from the plate?

The next few sentences of Verisign's letter allege that "a small but vocal group" with "undisclosed pecuniary interests" are attempting to distort the truth, hijack the process, and undermine ICANN's legitimacy. It also makes gratuitous attacks on two registrars, Namecheap and Dynadot, as being aligned with the speculation business and fomenting revolution with campaign-style tactics to flood ICANN with letters.

Attacking registrars for being aligned with the speculation business isn't new; it results from a disconnect between Verisign and its registrar sales channel that's rooted in the fact that no private or public interest benefits from higher .com pricing except Verisign. Another example came the day after Amendment 35 was announced, when the company celebrated by issuing a bizarre blog post that attacked two registrars for, among other things, being aligned with speculative domain name investors. Now, as then, it seems strange for any company to publicly bully two significant channel sales partners; but why is the .com registry operator attacking registrars for aligning with registrants who acquire and hold large portfolios of .com domain names?

First, Verisign is scapegoating when it selectively characterizes registrants of large domain name portfolios as shadowy speculators illicitly profiting from ill-gotten domain names rather than as investors engaged in the age-old activity of buying low and selling high. This characterization also excludes brands, media companies, and large conglomerates, among others, that maintain portfolios of hundreds of thousands and even millions of domain names (e.g., Verizon, 21st Century Fox, Unilever).

The reality is that the demographic presented opportunistically as the avatar of domain name registrant purity — i.e. the business, organization, individual, or cause that registers a domain name for building an online presence — is in the minority. A study conducted last year by the Singapore Data Company concluded that this demographic comprised about 31% of the .com namespace, and the other 69% of domain names are parked, serving ads, or porn.

Two other factors are relevant here as well. First, most registrars don't make the bulk of their revenue from domain name sales. Instead, they sell domain name registrations at a small margin, at cost, or as a loss leader bundled with services such as web design, hosting, etc. Second, domain investing has evolved since the early days when "domainers" included a lot of arms merchants, porn kings, and other wildcatters who were "cleaning" money. Today, domain investors include all sorts of people seeking financial returns and driven by diverse profit motives, including professional speculators, working people mistrustful of Wall Street, and domain developers that offer turnkey branded online businesses built on domain names.

Amidst this churn of commerce is a monopoly with one of the largest gross profit margins of any company ever that, far from being content, covets more. People covet what they envy others having, and Verisign's own words are forthright and revealing. The unsigned letter leans heavily on detailed and suggestive insinuations to conjure jabberwockies of shadowy speculators that "revel in the exorbitant prices that they themselves set and obtained for .com domain names." But there's nothing wrong with setting a price, exorbitant or not, that can be obtained from a free market — that's the basis of capitalism.

This statement suggests that Verisign either fundamentally misunderstands or disagrees with its role in the DNS and believes that it's been shortchanged somehow. Instead of seeing the billions of dollars generated by more than 130 million domain name registrations, Verisign is blinded by envy at the relative pocket change being earned by investors on the secondary market. Envy deceives them into seeing "deceptive campaigns" filled with "misleading omissions" being driven by shadowy speculators with "undisclosed pecuniary interests" where there are, in reality, only grassroots efforts to drive public awareness of and engagement on an important public interest issue. Perhaps it is also envy that causes them to forget that not everybody can afford to put ICANN's constituency chairs and other influencers on retainer.

This letter — by coming late in the process when resounding opposition to price increases was already clear and with no executive willing to sign their name to it — should be seen for what it actually is — a hegemonic bellowing of frustration and rage.

That's all.

Written by Greg Thomas, Founder of The Viking Group LLC

Follow CircleID on Twitter

More under: Domain Names, ICANN, Internet Governance, Policy & Regulation, Registry Services

Broadband in China

Tue, 2020-02-18 20:15

For years I've been hearing how we are losing the broadband battle with China, so I decided to take a look at the current state of broadband in the country. The China Internet Network Information Center (CNNIC) publishes statistics about the state of broadband in the country, and I used the Statistical Report on Internet Development in China from August 2019 in writing this blog.

Here are some of the more interesting statistics about the state of broadband in the country:

  • China is a lot larger than the US, with a current population just below 1.4 billion compared to an estimated US population of around 327 million.
  • As of June 2019, China had 854 million people connected to the web in some manner, for an overall Internet penetration of 61.2% of the population. It's not easy to compare that statistic to the US since we track Internet usage by household subscriptions.
  • China is still rapidly adding people to the Internet. In the first six months of 2019, the country added 26 million new Internet users.
  • The Chinese interface with the Internet in a variety of ways, with the following statistics for June 2019:
    • Cellphone: 847 million (99%)
    • Desktop: 394 million (46%)
    • Laptop: 308 million (36%)
    • TV: 283 million (33%)
    • Tablet: 242 million (28%)
  • As of June 2019, China had 396 million users on fiber-to-the-home. China is adding fiber faster than the US and there were over 67 million customers added for the year ending in June 2019.
  • Chinese speeds for landline connections averaged 31.3 Mbps in June 2019, up 25% since 2018. Mobile speeds in 2019 averaged 23 Mbps, up 7% from 2018.
  • Like the US, China has a rural digital divide. In 2018 the country had 225 million rural Internet users representing a 39% penetration. Urban Internet users were 630 million, a 77% penetration. There are 347 million rural Chinese without access to the Internet, almost 25% of all citizens in the country. It's hard to compare that statistic to the US since the FCC does such a lousy job of counting households with broadband.
  • China is working to solve the rural digital divide and added 3 million rural Chinese to the Internet in the first half of 2019. However, much like here, that rate of growth is glacial; at that pace, it will take 36 years for rural penetration to reach the current urban level.
  • The Chinese are heavy users of instant messaging with 96.5% of Internet users using messaging in 2018.
  • It's important to remember that Chinese web users are monitored closely and live behind what the west calls the Great Firewall of China. The government tracks how people use broadband in detail for which we have no direct US equivalent:
    • Watch online video: 88.8%
    • Use online news: 80.3%
    • Shop online: 74.8%
    • Online bill payment: 74.1%
    • Order meals online: 49.3%
    • Car hailing services: 39.4%
  • China's mobile data traffic is growing even faster than in the US. In the first half of 2018, the Chinese mobile networks carried 266 petabytes of traffic. By the first half of 2019 that traffic had more than doubled to 554 petabytes. China's cellular data usage doubled in one year, while here it's taking two years to double. The numbers are huge; a petabyte equals one million gigabytes.
  • The average Chinese broadband user spent 27.9 hours per week online in 2019.
  • The CNNIC tracks why people don't use the Internet. 45% don't have access to broadband; 37% lack the skills to use broadband; 15% don't have computers; 11% say they have no need. The interesting thing about the list in China is that nobody said they couldn't afford Internet access.
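
The rural-divide and traffic figures above can be sanity-checked with simple arithmetic. Below is a back-of-the-envelope sketch in Python; the 77% urban-penetration target and the 6-million-per-year pace are inferred from the report figures quoted above, not additional data:

```python
# Back-of-the-envelope checks on the CNNIC figures cited above.

# Rural digital divide: how long to match current urban penetration?
rural_users = 225_000_000        # rural Internet users (2018)
rural_no_access = 347_000_000    # rural Chinese without Internet access
rural_population = rural_users + rural_no_access  # ~572 million
urban_penetration = 0.77         # assumed target: today's urban penetration
added_per_year = 6_000_000       # 3 million added in the first half of 2019

target_users = urban_penetration * rural_population
years_needed = (target_users - rural_users) / added_per_year
print(f"Years to match urban penetration: {years_needed:.0f}")  # ~36

# Mobile traffic growth: 266 PB (1H 2018) -> 554 PB (1H 2019)
growth = 554 / 266
print(f"Year-over-year mobile traffic growth: {growth:.2f}x")  # just over 2x
```

The result confirms the 36-year figure in the report and shows the mobile traffic slightly more than doubled year over year.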

There was one interesting thing missing in the Chinese report. There was no mention of 5G. That means, at least to the government agency that tracks broadband usage in China, there is no 5G race. It's obvious that the Chinese need 5G, probably more badly than here, since the volumes of data on their mobile networks are doubling annually. But the topic wasn't worth a mention in their annual report of the status of broadband.

Written by Doug Dawson, President at CCG Consulting


More under: Access Providers, Broadband, Mobile Internet, Telecom, Wireless

Stop Propagating False Information About the .ORG Transaction

Tue, 2020-02-18 17:54

We were disappointed to see The Pittsburgh Post-Gazette publish a recent editorial on February 13 about the sale of Public Interest Registry (PIR, the company that operates .ORG) that propagates false information about the transaction, including runaway prices, censorship and lack of experience.

Runaway prices? Ethos Capital and PIR have committed to capping price increases to no more than ten percent per year on average. With current pricing of $9.93, that equates to a price increase of less than one dollar. It is surprising the editorial staff would term $1 a year a big price increase, especially since the Post-Gazette itself has recently more than doubled its subscription price.
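
As a quick check on the dollar figures, the first-year arithmetic behind that "less than one dollar" claim is straightforward; this minimal sketch just multiplies the current $9.93 price (quoted above) by the committed 10% cap:

```python
# First-year price increase under the committed 10% annual cap on .ORG.
current_price = 9.93   # current .ORG wholesale price in USD (from the text)
annual_cap = 0.10      # committed cap: no more than 10% per year on average

max_increase = current_price * annual_cap
print(f"Maximum first-year increase: ${max_increase:.2f}")  # $0.99, under $1
```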

Censorship? Critics have put forth the idea that somehow, Ethos and PIR would start to curb free expression on .ORG websites. That is absolutely not going to happen. Ethos and PIR have repeatedly publicly committed to establishing a Stewardship Council of outside Internet experts to guide .ORG policies and ensure that the registry's commitment to free speech will continue unabated.

Lack of experience? The Post-Gazette's claim that there would be a lack of expertise at .ORG ignores the fact that Ethos is acquiring PIR, the operator of .ORG. PIR's management team, including its CEO Jon Nevett (a 15-year domain-industry veteran), would remain intact and operate PIR and .ORG. It also ignores the fact that several members of the Ethos team have extensive experience in the domain industry.

So, back to what the proposed sale is really about. .ORG is the most respected brand in the domain industry, and for good reason. It sets the standard as a platform for non-profits and other .ORG users to serve communities. And we think it can have an even bigger future helping more non-profits and other organizations fulfill their mission. That starts with ensuring that current profits from .ORG are used to directly benefit the .ORG community. Currently, those profits go to the Internet Society for the general benefit of the Internet. We'd like to see them dedicated to .ORG.

Ethos understands that owning PIR makes us stewards of an essential part of the fabric of the Internet. The .ORG domain is bigger than its 10 million domains. It is both a symbol of non-profits and mission-driven organizations on the Internet and a means by which millions of organizations operate, communicate, fundraise, and provide services to those in need.

For that reason, stewardship of .ORG is paramount. The community deserves guarantees about .ORG's future. That is why Ethos and PIR have made commitments on prices, policy making and community enablement. It is unfortunate that the Post-Gazette failed to mention those important commitments in its editorial.

Written by Nora Abusitta-Ouri, Chief Purpose Officer at Ethos Capital


More under: Domain Names, Registry Services

ICANN: Do Not Allow Closed New gTLDs With Generic Strings

Sun, 2020-02-16 18:42

The Board was right in 2014 when it disallowed them.

Within the next year, the ICANN Board may well face a decision that will help determine whether ICANN is capable of serving the global public interest or whether it is degenerating into an industry-controlled self-regulatory association. The issue can be framed quite simply: will ICANN approve a process for the creation of a new wave of new generic top level domains that will include "closed generic" gTLDs?

The issue can be semantically confusing since the word 'generic' is used in two different senses. There are at present about 1200 gTLDs, or generic top level domains, recognized by the DNS. Each of these gTLDs is operated by a registry under contract with ICANN. Most of these gTLDs are 'open,' which means that anyone who subscribes to the requirements of the registry and pays the subscription fee can establish a second level domain under the gTLD.

By "Closed Generics" I mean the situation in which the gTLD registry controls registration of all second level domains of the "generic name" of its business or industry. As an example, the new gTLD .book is an open generic gTLD, and anyone should be able to apply for a second level domain name registration under it. Alternatively, the registry may have a stated policy to restrict registration, but must allow domain names to be registered by all "similarly-situated" businesses or industries, e.g., all cloud companies in .cloud, all search industries in .search, and all bloggers in .blog.

A small number of new gTLDs are closed, and the owners of those registries determine how the second and further levels of the DNS within their gTLDs are to be allowed, structured and used. Some brand TLDs are closed, and can be used for the specific purposes of the brand holder; .canon and .ibm are examples of such domains. They have been approved by ICANN and are legitimate TLDs.

There was a real question whether "closed generics" were barred in the 2012 round of the new gTLD process. Some of those who wrote the original gTLD Applicant Guidebook said no, and some said yes. After the evaluation process had begun, the ICANN Board was confronted with a number of applications in which the registries applied for strings that represented generic categories and that were to be closed — specifically closed in that the registry said that 'it would be owner of all the second level domains of the "closed generic" gTLD.' Thus, Amazon applied for .book (among others) as "closed generics," Google for .search (among others) and Dish DBS for .mobile. This would have given these large, senior competitors in these industries and businesses total control of an information space for major project groups and industries — control to shape, or distort, an Internet user's view of the market for goods and services in these areas.

Asked to review the issue, and after a public comment period that included many new voices from across the world, including the Global South, the ICANN Board decided that closed generic strings (such as .book) would not be allowed in the 2012 round of new gTLDs. The essence of the Board's concern was that "closed generics" would allow the monopolization of a significant part of the information space and would create a non-level playing field for others within that space. In effect, a registry awarded a closed generic string gTLD would have complete control over which second level domains were created. Nothing would prevent it from biasing its selection in any manner it wished, whether in the public interest or in any of its own competitive interests.

I was part of the group on the ICANN Board that reviewed the objections to the pending Closed Generics applications and decided to require these gTLD applications to "become open" before moving forward. My concern was simple. Consider the following analogy: In the mid-1900s, not too long ago, public libraries were a major source of information and were the equivalent of the Internet of today. Let's suppose that your local government decided to subcontract its public library to a commercial organization. Suppose further that the terms of the contract allowed the contracting organization complete control over what books, periodicals, and newspapers it would acquire and make available to patrons. The subcontractor could say, "we don't believe in racial tolerance, we don't believe in social class mobility, and we don't believe in the sexual health of women. So let's make sure that the material that we make available does not focus on any of those things, and let's only buy nice books that reinforce our readers' comfort and understanding of the benefits of the status quo. We don't need to state our beliefs up front; we can embody them in the material that we make available as a result of our acquisition policies, so let's just let our collection of materials speak for itself."

What I have just described is behavior analogous to what a registry running a closed generic string could engage in. In our wildest imagination, would we ever accept a library that acted in the manner described above? In fact, would we ever even consider accepting such a contractual arrangement? If not, why do we even admit the possibility of consenting to delegate closed generic string new gTLDs?

Closed generics give total control of a very important and intensively used Internet information space about a major product group or industry to that registry, a private organization, often one of the largest competitors in the field. Control of that information space by the registry/competitor includes the power to shape, and to distort in any way the registry/competitor wants, an Internet user's view of that market or industry. Further, the information contained in all of the web pages associated with the domain names of the "closed generic" gTLD is completely under the control of the registry, which could route all business to itself or even publish false information, without the possibility of on-site, on-line refutation.

It could be argued that ICANN could write contracts with the owners of generic string new gTLDs that would mitigate any such harms, possibly using public interest commitments (PICs) as a safeguard against various kinds of misuse. Doing so would require a difficult exercise in defining the misuse, and the result would surely be gamed, probably successfully. To the extent that adherence to a "no-misuse of a closed generic gTLD" policy would be voluntary, it would simply camouflage the uselessness of such PICs. Further, markets work best when the financial interests of the participants work in the same direction as the public interest rather than against it, as is the case here.

Finally, given current ICANN policies of purchase and sale of registries as financial assets, any promises of "openness" and competition with a "closed generic" gTLD could amount to nothing should the original gTLD registry transfer or sell the gTLD to another leading industry competitor — or another owner who might repurpose this term of an entire industry or field with apparent impunity.

At this time, the GNSO's New gTLD Subsequent Procedures Policy Development Process Working Group is finalizing its recommendations for the specifications of the next (and perhaps ongoing) edition of the Applicant Guidebook — rules that will govern the process of applying for and receiving approval for new gTLDs. The application period is estimated to start in the next several years. Among the undecided issues is whether new gTLDs with generic strings will be allowed as "open" or "closed". The working group appears to be split, with adherents on both sides of the issue. The issue is likely to be resolved one way or the other in the near future.

To the Subsequent Procedures WG, I would like to share that during the evaluation period of the last round of new gTLDs, the committee of the ICANN Board tasked with overseeing the implementation of that round ruled that delegation of closed generics would not be permitted for that round. I was a member of that committee and I voted in favor of the decision to bar closed generic gTLDs. It was the right decision.

Some people now believe that the way in which this Board decision was reported suggested that the Board expected that the issue of Closed Generics would be revisited and potentially revised for later rounds. That was not my understanding, although my memory of the details of the discussion and decision is not precise after six years. Our Board work at the time was comprehensive and our decision was based on global input.

Just the knowledge that closed generics are again being advocated by some members of the community tells me that there is a lack of understanding of the role of ICANN with respect to the global public interest. If the Subsequent Procedures Working Group — incidentally, one of the most obscure and misleading names a working group could ever have — decides to recommend that new gTLDs in future application rounds may be "closed generics" then this decision will be both against previous board policy, and will significantly reinforce my sense that those members of the working group regard ICANN as an industry lobbying group for registries, and are helping to push ICANN in that direction.

Should the Subsequent Procedures WG make such a decision — I would hope that the ICANN Board would intervene and return the New gTLD Program to its 2014 stance — no Closed Generics allowed.

Should Closed Generics still be implemented in the next and future rounds of new gTLDs — unstopped by the WG, the GNSO or the ICANN Board — I believe this would signal a huge loss of trust and show that ICANN and the ICANN community are indifferent to the global public interest.

Such a result would, to me, signal that the ICANN multistakeholder experiment in Internet governance is inconsistent with a commitment to uphold the global public interest, and that it has failed in action. This would be disappointing to many, but would also be evident. It would then be time to start thinking about significant changes to or replacement of some of the Internet's governing organizations. It's essential that we have governance mechanisms that can understand and enforce the concept of the public good, open and competitive markets and can differentiate and ensure its independence from rapacious private enterprise. The "closed generics" issue is one test of whether that time has come.

Written by George Sadowsky, Information Communication Technology Consultant


More under: Domain Names, ICANN, Internet Governance, New TLDs

LEO Broadband Will Create Millions of Jobs

Sun, 2020-02-16 03:21

If the satellite broadband ISP business model pans out, SpaceX and the other ISPs, their suppliers, partners and organizations that serve three billion new users will create millions of jobs.

Earlier this month, Elon Musk tweeted an invitation to a job fair at the new SpaceX production and launch facility near Boca Chica, Texas. The tweet says they want hard-working, trustworthy people with common sense. They are not looking for specific skills or education, but for certain character traits — "the rest we can train."

That tweet reminded me of hiring practices when I graduated from college. My first professional job was with IBM, but I had no experience with computers or unit-record (punch-card) data processing machines. They interviewed me, gave me an aptitude test, hired me, and then sent me to school to pick up the skills they needed. At the time, new hires at IBM were enrolled in a two-year, three-phase training program that alternated between classes and field experience. I don't recall the details, but phase one was eight weeks of full-time training on IBM policy and culture and the programming of unit-record machines. We learned to program computers in phase two. IBM was not unusual — that sort of training was common in those days.

Postgraduate training programs were particularly necessary for industries that anticipated rapid growth — like electronic computers then and space launch and Internet service now. For example, in the late 1950s and early 1960s, IBM built the SAGE early-warning network. The Department of Defense spent approximately $8 billion on SAGE, which required IBM to hire and train 3,000 computer programmers, not to mention the people who designed, manufactured, installed, operated and maintained the system and the workers hired by IBM's supply-chain companies. This was just one example of the demand for programmers, salespeople, support technicians, etc. hired and trained by IBM at that time.

SpaceX and its would-be competitors hope to bring broadband connectivity to the roughly 3 billion people who lack Internet access today, rural schools, clinics, markets and businesses, ships at sea, planes in the air, mobile-phone towers, high-speed arbitrage traders on Wall Street, cars, trains, buses, Internet of things sensors and appliances, governments, enterprises, space forces, etc. How long would that take and how many direct, supporting and supply chain jobs — technical and non-technical — would have to be created and filled? How many secondary jobs would be needed to serve a couple of billion new Internet users?

SpaceX cannot do all of that alone. If the satellite broadband ISP business model pans out, SpaceX and the other ISPs, their suppliers, partners, and the organizations that serve three billion new users will create millions of jobs. Space and renewable energy may keep us employed for years.

Written by Larry Press, Professor of Information Systems at California State University


More under: Access Providers, Broadband

WTSA-2020: Reflecting on a Contemporary ITU-T Role

Sat, 2020-02-15 19:46

Setting the stage

Every four years — as it has done for nearly a hundred years — the ITU-T, the world's only global intergovernmental standards body for all telecommunication, invites its 196 sovereign state members to a meeting where they examine their work and set the stage for the next four years. No treaty is prepared, but they do examine major developments and decide needed standardization work, priorities, and the structure of the organization itself, including its leadership. The plenary meeting is called the World Telecommunication Standardization Assembly (WTSA), and this time it will be hosted 17-27 November in Hyderabad, India.

Although there are all manner of standards bodies today treating parts of the telecom, internet, mobile, and IoT spectrum, the ITU-T, pursuant to longstanding public international law acceded to by the U.S. and 195 other sovereign nations, exists as the only global intergovernmental standards body for an array of essential purposes. Those purposes encompass the foundations for global electronic communication, and the ITU-T remains unique as a public-private standards body.

Because of this unique ITU-T stature as a treaty-based body tracing its origins back to 1850, the formal participation by the United States has necessitated the involvement and oversight of the State Department acting pursuant to powers provided in the U.S. Constitution. Even if the U.S. does not give treaty status to ITU-T actions and enactments, most other nations of the world do so. Whatever ITU-T vision a U.S. Administration chooses for itself is substantially constrained by both Constitutional obligation as well as the visions of other sovereign nations expressed in multilateral telecommunication treaty instruments and activities.

As it has for decades, the U.S. State Department is conducting a public comment process to solicit views about the U.S. and its ITU-T activities. Although the comments are supposed to be made public, it is not clear exactly where and when. Given the current U.S. administration's somewhat conflicting positions (seeking leadership yet hostile toward multilateral cooperation), what comes next should be interesting. Unlike most other sectors, global telecommunication, by definition, compels multilateral cooperation.

Adding to the complexity is that over most of the past hundred years, the U.S. has played by far the most substantive leading role in shaping ITU treaty provisions and activities to meet widely varying U.S. long-range strategic economic and national security interests. The technologies evolve, the Administrations change, but the strategic interests remain largely the same.

Today — in an emerging world of virtualized infrastructure and services based on 5G specifications — the establishment of extraterritorial norms and practices through the ITU-T becomes much more important. When similar needs emerged for internetworking platforms in the 1980s under the Reagan Administration, the U.S. national security community, industry, and the State Department engaged significantly in ITU-T work. The additional complexity now being faced with vastly expanded work and engagement in global industry forums such as 3GPP, NFV, ETSI, OASIS, etc., does not diminish the unique importance of the ITU-T — a fact realized today by most leading nations in the ICT sector.

The U.S. public and private-sector agencies must expand their roles together and develop new strategies based on significantly enhanced knowledge of and involvement in the constellation of technical standards venues that very much includes the ITU-T. The State Department, as always, remains essential to facilitate that strategy. Not doing so will have adverse effects on the economic and national security interests of the nation.

This article provides the author's personal perspectives on this subject, based on 45 years of extensive involvement in almost every ITU body and many of its activities, while in government and in the private sector on behalf of many companies and organizations, and as a historian. These views should not be taken to represent those of any entity with which he is associated.

The unique nature and increasingly important stature of the ITU-T requires enhanced public-private knowledge and strategic engagement in its activities

The ITU and its precursor bodies and instruments stem from the necessity, evident from the emergence of the first transborder electrical communication internets in 1850 onward, to develop legal norms in public international law among sovereign nations together with technical standards for operations, services, and equipment among network operators. (Today's ITU sector bodies have existed in various forms extending back to 1850. However, an integrated treaty-based organization with instruments designated as the ITU, based on the Interallied Commission Meetings at Paris between 1918 and 1921, did not come into existence until 1932.) The provisions are based on the fundamental reality and shared agreement that every nation exercises sovereign jurisdiction over any and all electrical communication and media within its geospatial boundaries, coupled with a common desire to ensure national security, bring about global communication, facilitate development of the technologies and infrastructure, and provide expanded global economic opportunities for its commercial enterprises. Over the ensuing decades, as new technologies and applications emerged, the needs have remained the same. The most disruptive of all technologies — radiocommunication — enhanced the value of ITU activities because of radio's instantaneous extraterritorial reach and ability to implement cyberattacks.

The U.S. as a nation did not have representation in the ITU precursor activities until a lone observer attended the 1890 Paris Conférence télégraphique internationale. The non-participation existed because, unlike most other nations, the U.S. government itself did not own telecommunication facilities used for providing service to the public and had no need to participate in the intergovernmental meetings. Participation began scaling in 1903, when General A. W. Greely and Commander F. M. Barber, as leaders of the U.S. communications national security community, represented the U.S. at the Berlin Preliminary Conference on Wireless Telegraphy.

Over the next 117 years, the level of U.S. engagement, strategy, and leadership in ITU bodies went through periods of dramatic highs and lows. The high points of ITU engagement occurred during the Administrations of Wilson, Hoover, Roosevelt, Johnson, and Reagan — when each devoted substantial resources to develop and lead strategically significant legal and institutional innovations through major evolutions in the ITU instruments and activities to advance U.S. economic and national security interests.

Over those 117 years, the State Department played three essential roles necessary to facilitate participation in ITU-T activities. First — because the activities occur in a treaty-based activity and often involved significant negotiations with the representatives of foreign governments — State was ultimately responsible for the representation of the country and the outcomes, even if private-sector experts were involved. This role also shielded private-sector participants from any antitrust or other legal culpabilities. Second — because of the frequently significantly divergent and often conflicting interests of the government agencies and private-sector actors in the ITU-T activity strategies and outcomes, State provided an essential role of consensus building and adjudication among the parties to move forward with common U.S. positions. During some Administrations, this role was substantially expanded to develop and expand ITU-T activities — especially relating to expanding economic interests and national security matters. Third — State provided an array of essential, highly-specialized support services for ITU-T venues, including diplomatic protocols, collaboration with secretariats and foreign offices, long-range institutional knowledge, determinations of public international law, and engagement of classified intelligence assets. Most other major nations engaged in the ITU-T have similar or even greater support capabilities, and this is not the kind of expertise commonly found in the private-sector — especially smaller entities involved in new innovations.

The bottom line is that the ITU-T is unlike any other standards development organization because of its intrinsic intergovernmental nature — where the results may be bound to treaty provisions that have force-and-effect in many if not all jurisdictions throughout the world, and where the participants are understood to act on behalf of their administrations. This feature of the ITU-T has existed for essentially the past 100 years, and it is plainly not going to change in the foreseeable future. Furthermore, other nations will — as the U.S. itself has frequently done through many periods of ITU-T history — train and support personnel who are experts in maximizing the nation's interests and effectiveness in ITU-T venues, and facilitate involvement of the best of their private-sector industry and academic institutions.

Lastly, and perhaps most significantly, the ITU-T is a multilateral venue for which there is no plausible multilateral or bilateral substitute. Only the Common Criteria Recognition Agreement approximates this global reach, on a more limited scale relating to cybersecurity. The ITU-T's long, successful history is a testament to this reality. A hundred years ago, the Harding Administration killed off the Wilson Administration's grand ITU plans; a few years later, the Hoover Administration reversed course to ensure both U.S. global security interests and a significant stake in the rapidly emerging global internet radiocommunication marketplace. Indeed, the equivalent of today's NSA Director, the legendary William Friedman, who had worked during WW-1 with France's Father of Cryptology, François Cartier, prominently participated in the 1927 Washington Conference to provide a seminal report on cryptography in ITU international telecommunication services. Today — in a rapidly emerging world of virtualized infrastructure and services based on 5G specifications — the establishment of extraterritorial norms and practices through the ITU-T becomes significantly more important to the economic growth and national security interests shared by all nations.

Failure to pursue an effective, knowledgeable ITU-T engagement strategy places important economic and national security interests at risk

Two decades ago, the Clinton-Gore Administration reversed the course of previous U.S. administrations and the national security community, withdrawing support for ITU telecommunication treaty instruments and engagement in related ITU-T activities in pursuit of the economic and political objectives then expressed as the "information superhighway" and the "Internet economy." Domestically, substantial engagement of Federal regulatory and national security agencies was largely eliminated, even as some, including the DARPA Director who had approved TCP/IP development, warned that the strategy would have significant adverse long-term national security consequences. The strategy was grounded in a belief that U.S. industry dominance in one particular protocol (TCP/IP) and the use of open end-to-end architectures, unfettered by any government oversight or regulation, including engagement in the ITU-T, would significantly benefit the nation's economic and global strategic interests in the Internet Economy. This strategy may have worked in the short term to increase market share, revenue, and share prices of some private-sector companies. However, the price paid in exponentially increasing cybersecurity incidents and vulnerabilities, mounting network crime, and diminishing information trust calls into question the sustainability and wisdom of the Clinton-Gore strategy, which some parties with a stake in maintaining it still support today.

However, any one nation's strategy of embracing a particular network protocol and architecture does not by itself transform a longstanding intergovernmental organization. It was two dramatic developments in telecommunication that accomplished the real changes: a transition in most countries away from the government-agency-based national provisioning model known as PTTs, and the emergence of GSM-based mobile networks as the common global electronic communications infrastructure of value. Those two changes shifted the principal venue of choice for international standards and operational norms from the ITU-T to 3GPP, GSMA, ETSI, and an array of related new mobile venues. Even the market served by the ITU's Telecom trade show, which operated in conjunction with the ITU-T and once dominated the industry, largely migrated to the GSMA Mobile World Congress.

Over the past twenty years, the ITU-T sought to adapt to these major transitions. The most significant organizational changes included restructuring and rationalizing its work, significantly accelerating the standards development process, improving management accountability, and transitioning to free online availability of both ITU-T standards and "names and numbers" registry databases, including the use of real-time identifier resolvers. It also engaged in extensive outreach and collaboration with other standards bodies, including the creation of the Global Standards Collaboration (GSC) organization. These transitions were facilitated by a series of experienced, knowledgeable, and visionary Directors of the TSB Secretariat from Germany, China, the UK, and South Korea.

On the substantive standards side, the ITU-T stuck to its knitting, namely legacy telecommunication network protocols and services, combined with a focus on specific proven protocols and platforms for greater identity trust and cybersecurity, while also engaging in areas not being addressed by other standards bodies. Outstanding cybersecurity examples included: the continued evolution of the universally used digital PKI certificate standard X.509, OID IoT identifiers, and the ASN.1 syntax language in SG17; the introduction by ARPA networking legend Larry Roberts of a secure Internet Protocol in SG13 in 2006; and the many Identity Management and cybersecurity information exchange (CYBEX) initiatives of NSA's most experienced standards leader in SG17. Because the ITU-T is responsible for the specification and registration of telephone calling identifiers, it has also sought, in SG2 and SG17, to enhance measures to mitigate international caller ID and Web owner spoofing, work that is only now being appreciated. In many areas, the latest best-of-breed standards from other bodies were also curated into useful ITU-T implementation guides through the CJK (China-Japan-Korea) group, which has typically maintained comprehensive, well-regarded strategic oversight of developments across a broad array of standards organizations.

Today's fundamentally different environment

The past two decades constituted a kind of ICT Old World. Today, we are collectively experiencing a transformational revolution: virtualised electronic communication infrastructures and services, optimized with 5G NFV and SDN for tailored content distribution on demand or through AI, that is unfolding rapidly and exponentially. The emerging low-latency content distribution networks use fundamentally different, new protocols, architectures, end-point addresses, and resolvers that can be scripted on demand. The equipment component largely becomes a low-cost global commodity market pursued by a handful of companies who can meet the worldwide demand and implementation specifications, and where wholesale banning of equipment is a losing proposition; the appropriate course of action is compliance with new tailored security measures such as the EU's 5G Toolbox (boîte à outils 5G). That equipment is deployed both by end users, fixed in their premises or vehicles or roaming worldwide, and by providers of resilient large regional data centres at strategic locations and network edges, cable headends, or satellite hubs, connected by wholesale physical-layer transport bandwidth and a plethora of local radio cells and satellite downlinks using expanded spectrum allocations. The most significant market, however, lies in the ability to locate and securely deliver, at significant scale, tailored content and communication channels to as many roaming and fixed customers and devices as possible at the lowest cost. Most markets are potentially transnational.

As part of this transformational revolution, the ecosystem of institutions for collaboration has also expanded and reshaped itself. All of these technical and operational changes give rise to complex combinations of requirements for trusted implementations that meet the diverse legal, regulatory, competition, and national security requirements, including extraterritorial retained-data and lawful-interception requests, imposed by almost every nation. The challenges are certain to evolve over time, but several are obvious at this point: 1) effective, inclusive global arrangements for extraterritorial orchestration of 5G architectures and services, including access to forensics; 2) the concentration of 5G orchestration, resolver, and end-point intelligence services in the hands of a few commercial providers; and 3) end-to-end ephemeral encryption protocols used by end users, which impede the ability of 5G providers to manage their networks and meet legal and security obligations. Providing solutions to these challenges will accrue to the market success of those enterprises best suited to provide virtualised network architectures and services on a global scale. The ITU-T has historically proven a useful venue for meeting such needs and was used extensively by the U.S. over many decades. It may prove useful again in the world of extraterritorial 5G virtualized architectures.

In the 5G virtualisation ecosystem, wholesale banning of equipment based on national origin is highly counterproductive as an economic or national security strategy. It produces only a domestic political illusion of security and deflects from the kinds of steps necessary to actually reduce threat risks: independent certification and testing of all equipment against stringent specifications, rigorous supply chain accountability, and then the application of Critical Security Controls tailored for virtualisation, constant patching of software, and continuous monitoring and remediation of security postures and threats. Wholesale banning also invites retribution against U.S. suppliers of lucrative extraterritorial 5G orchestration, discovery, and content services, and it poisons the cooperative environment, significantly impeding U.S. public-private work in the ITU-T. Notably, it was the U.S. itself, 25 years ago in the ITU and WTO, that orchestrated what was then a Reagan Administration strategy to eliminate banning of equipment and services based on national origin.

In 3GPP, where the preponderance of 5G virtualisation work occurs across massive numbers of meetings and participating parties, the "R" and "T" sectors (known as RAN and SA) are far more integrated and constantly collaborate among their working groups and with external organizations. Going forward, this integration model, which in the ITU's case would involve the ITU-R, may be worth emulating in the ITU-T, because the level of interaction between virtualised network architectures and virtualised radio access platforms is significant.

Part of this 5G virtualisation revolution is the disappearance of the TCP/IP "Internet" and its legacy institutions. In a fully 5G world, anyone can potentially run a script to instantly create a network architecture and services for any desired period of time, anywhere in the world. Those architectures can be instantiated with any kind of network protocol, and the most attractive configurations use Ethernet with the MEF 3.0 industry standards, or LISP; many other network protocols are now more attractive than legacy TCP/IP (or IPv6). The new environment is regarded in the industry as transformational, facilitating an exciting next-generation era of robust competition in network protocols. The policy dimension is no less transformational, as the so-called "governance" organizations clustered around the nearly 50-year-old TCP/IP protocol lose relevance. What also loses relevance is the 25-year-old Clinton-Gore embrace of that protocol and its governance bodies, which has underpinned "U.S. international digital economy policy [and its] approach to international standards."

The State Department's ITU-T related role going forward should also expand to address several significant challenges within the U.S. government itself: facilitating a difficult transition from a legacy policy world to one of integrated national security and international digital economy policy appropriate for 5G extraterritorial virtualised content delivery architectures and services through public-private multilateral cooperation, and moving away from counterproductive equipment banning toward knowledgeable approaches that are actually effective. Each government agency typically focuses on its own needs, practices, and constituencies, which usually encompass only a few of the standards bodies that make up the new 5G virtualised network ecosystem. If State can expand its scope beyond the relative handful of treaty-based standards bodies, it can play a useful role in curing the international myopia and segmentation that currently exist among Federal agencies and effect holistic engagement with the entire 5G virtualisation standards ecosystem. A key component is expanding the involvement of the NSA and FCC in these activities, as was once a key part of U.S. ICT strategy. State also needs to address NIST's continued support for ISO as a viable international standards venue, when ISO, unlike most other international standards bodies including the ITU-T, impedes the availability of standards that NIST helps develop in order to sell them for enormous sums.

Lastly, State can facilitate and coordinate private-sector involvement in the ITU-T (thereby providing private-sector antitrust immunity) and in the array of other international 5G virtualisation standards bodies, similar to most other major nations, and as the U.S. itself once did very successfully. Other nations emulated that public-private success; the U.S. largely abandoned it. Whether anything existential is at risk without significant U.S. participation is debatable, as other nations will simply carry on the work of collaboration, reaching agreements and pursuing global opportunities; after all, many third-world countries do not participate. What is lost is the stature of the nation and the ability to influence important standards that affect national security and the global opportunities of its enterprises. What is that worth?

The Bottom Line

The current U.S. Administration faces an ironic conundrum. Does it stick with the Clinton-Gore strategy of the 1990s, which deprecates the ITU-T and forgoes U.S. leadership, or does it reinvent the Reagan multilateral strategy of extensive leadership, making the ITU-T part of a global strategy to lower national barriers and expand U.S. vendor market opportunities in the 5G virtualized world of the future? The former strategy will today diminish U.S. companies' opportunities, especially when coupled with equipment banning based on national origin, and force them to establish independent data centres and operations in each foreign market. Perhaps most importantly, it will lead to greater destabilization and impairment of global telecommunication-related security for everyone.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

Follow CircleID on Twitter

More under: Internet Governance, Telecom

It's Time to Curtail the Censorship Industry Cartel

Fri, 2020-02-14 18:05

Last month INHOPE, a global trade association of child abuse reporting hotlines, rejected a joint call from Prostasia Foundation, the National Coalition Against Censorship, Article 19, and the Comic Book Legal Defense Fund for its members to stop treating cartoons as if they were images of child sexual abuse. As our joint letter pointed out, INHOPE's conflation of offensive artwork with actual abuse images has resulted in the misdirection of police resources against artists and fans, predominantly LGBTQ+ people and women, rather than towards the apprehension of those who abuse real children.

INHOPE is not a child protection organization, but an industry association for organizations and agencies that provide censorship services to government and private industry. Its Articles of Association are surprisingly explicit about this: its objective is to "facilitate and promote the work of INHOPE Member Hotlines, whose work is to eradicate illegal content, primarily child sexual abuse material, on the internet" [emphasis added].

It executes this mission by collecting personal information of those who share images that are reported to it (which can include a name, email address, phone number, and IP address), and sharing this information among its member hotlines and with police. Again, it is explicit about this, acknowledging that its "core business revolves around the exchange of sensitive data." INHOPE members have actively lobbied to weaken European privacy rules so that they can maintain these data collection practices, while refusing to accept a compromise allowing continued scanning for actual child abuse images.

Such data collection is clearly justifiable when it is limited to actual sexual abuse images. But INHOPE's data collection isn't limited to this. It siphons up all manner of reports that its members declare to be illegal in their country, and (with one exception mentioned below) gives them another "once-over" to determine whether they are illegal worldwide, only in the reporting or hosting country, or not at all, before forwarding them to INTERPOL. Even if this assessment leads to a determination that the images are lawful, INHOPE doesn't delete them. Inexplicably, it instead classifies them as "Other Child-Related Content," retains them in a database, and sends them to law enforcement for what it describes as "documentation purposes."

Images reported by NCMEC, the American hotline, undergo even less vetting. Despite being an INHOPE member, NCMEC doesn't utilize the services of INHOPE analysts, but directly shares reported images and associated personal information with law enforcement agencies around the world. According to Swiss authorities, up to 90% of these images are later found to be lawful.

INHOPE chose to mischaracterize our call as being grounded in a misunderstanding of the fact that some countries do prohibit artistic sexual representations of minors by law. But our letter explicitly acknowledged that fact, calling on INHOPE to establish a policy for its members that "artistic images should not be added to image hash lists that INHOPE members maintain, and should not be reported to authorities, unless required by the law where the hotline operates" [emphasis added].

There are indeed some countries in which lawmakers do ill-advisedly use the same laws to criminalize the dissemination of offensive art as they use to prohibit the image-based abuse of real children. But the risks of an international organization allowing national authorities to act as gatekeepers of the images that it treats as child abuse and reports to INTERPOL should be obvious.

For example, Canada's overbroad child pornography laws have recently drawn public attention over the much-criticised prosecution of an author and publisher for a novel that includes a brief scene of child sexual abuse in its retelling of the story of Hansel and Gretel. The Canadian Centre for Child Protection, one of only two INHOPE members that proactively search for illegal material, was responsible for the arrest of a 17-year-old girl for posting artwork to her blog, having reported her to authorities in Costa Rica, where such artwork is also illegal.

In other countries where cartoon images are illegal, criminal laws are used to disproportionately target and criminalize LGBTQ+ people and women. An example given in our letter was the case of a Russian trans woman who was arrested over cartoon images and sentenced to imprisonment in a men's prison.

Russia's INHOPE member, the Friendly Runet Foundation, encourages people to report if they are "exasperated by the on-line materials transgressing morality," and boasts that it was "created at the direct participation and works in close partnership with the Department "K" of the Russian ministry of Interior." This terminology, and the hotline's association with the ministry that criminalized "gay propaganda," is understood by Russian citizens as an attack on LGBTQ+ people's speech. Notably, no LGBTQ+ representatives are included on INHOPE's Advisory Board.

INHOPE can't do anything, directly, about unjust national laws that conflate artistic images with child abuse. INHOPE and its members also can't do much to prevent conservative members of the public from reporting non-actionable content (although one member has taken steps to address this problem). That's why we are directly targeting the public with our "Don't report it, block it" information campaign, to stem such false reports at the source.

But what INHOPE can do is to decide what to do with reports that it receives about artistic content. Passing them to law enforcement authorities, using a censorship and surveillance infrastructure that was established to deal with real images of child sexual abuse, isn't its only option here. Neither is it necessary to place those who share such images in the crosshairs of police, especially in countries that have unjust laws or repressive governments.

In 2019, we held a seminar with Internet companies and experts to discuss more proportionate ways of dealing with content such as child nudity, child modeling, and artistic images that doesn't rise to the legal definition of child abuse, but which can still be triggering or offensive, or harmful when shared in the wrong context. Through a multi-stakeholder process, this resulted in the development of a set of principles for sexual content moderation and child protection that were launched at last year's Internet Governance Forum.

INHOPE already has a Code of Practice that its members are required to comply with. To be clear, some INHOPE members already do have good practices, and Britain's Internet Watch Foundation (IWF) is one of these: although cartoon images are unlawful in the United Kingdom and the IWF is mandated to accept reports about them, it doesn't include these reports in its hash lists of abuse images, nor share them with foreign police. Our joint letter invited INHOPE to take the opportunity to amend its Code of Practice to apply similar standards to its other members. Its decision not to consider this doesn't reflect well on the organization.

Internet reporting hotlines are selling a product to law enforcement authorities: a censorship service for which actual images of child abuse are only the selling point. This can be a lucrative gig; NCMEC alone received $33 million from the United States government in 2018. Therefore, as a business proposition, it makes sense for INHOPE and its members to ask few questions about the scope of the censorship services their governments call upon them to provide. Conversely, since almost no federal money is being allocated towards abuse prevention, there is little incentive for them to invest in prevention interventions that could reduce abuse in the long run.

But these perverse incentives are leading it down a dangerous path. It's time for us to call this censorship cartel to account, and to demand that it consider the human rights of the innocent people who are being hurt by its approach. The plain fact is that INHOPE doesn't represent the voices of experts who work on child sexual abuse prevention, it represents the law enforcement sector. By refusing to curtail its activities to place the censorship of artistic images outside its remit, INHOPE has lost the moral authority that provides the only justification for its sweeping and dangerous powers.

Written by Jeremy Malcolm, Executive Director, Prostasia Foundation


More under: Censorship, Internet Governance

Predicting the Cost of Cryptocurrency Hacks in 2020

Thu, 2020-02-13 18:55

The last few years have proven to be a crucial moment for cryptocurrency security. The more cryptocurrency has risen in popularity, the more high profile security breaches have occurred, and the more key institutions have been targeted.

The young cryptocurrency industry has always been brimming with opportunity, but with this comes risk, especially when there are lapses in security. Crypto security matters deeply to crypto owners because one of the main points of cryptocurrencies like Bitcoin in the first place has been to prevent criminals from accessing your currency as easily as they can steal actual money.

There are two key hacks that shed light on such lapses:

In early 2018, bad actors targeted Japan's Coincheck exchange and succeeded in stealing over $500 million in NEM tokens. To this day, it is one of the largest and most notable crypto heists, standing shoulder to shoulder with the notorious Mt. Gox attack, a heist of roughly 800,000 BTC.

Even earlier, in 2016, Bangladesh Bank found itself in the crosshairs of ambitious and skilled hackers. Using fully authenticated transactions, thieves attempted to steal over $800 million across the SWIFT network. Although the thieves netted a "meager" $101 million for their efforts, $81 million did eventually make its way into the hands of beneficiaries in South Asia.

What ties these examples together? The victims were sloppy. Both a central bank and notable cryptocurrency exchanges had poorly managed security (such as login details) when it came to the transfer of cryptocurrency or fiat money.

Although the SWIFT network was at the center of the Bangladesh Bank heist and similar cybercrimes, the network itself was not hacked; the network's users were. Likewise, in both the Coincheck and Mt. Gox hacks, the blockchains central to the heists were never compromised; rather, the exchanges themselves, and their users, were. Login usernames, passwords, and even the systems themselves had such poor security that hackers were essentially left an open door, one they had no compunction about using.

Thankfully, greater cybersecurity controls were put in place by the SWIFT community. The weak links were quickly identified, and the hackers' go-to methods of attack were disseminated amongst the community.

Can the cryptocurrency industry claim, at the enterprise level, that it has done the same? Can it claim to have learned from its own mistakes in an age where negative media coverage is one of the first things customers will see online? It is difficult to say, but what is clear is that in 2020 the industry must come together and rise to face the growing risk of crypto threats.

Crypto has matured, but a lot of growth is still needed

The crypto industry's security has grown more robust over the last few years. The solutions presented by custodial and noncustodial wallet providers are increasingly resilient.

Powered by new multiparty protocols or hardware security, these wallets enable secure asset transfers on a consistent basis. Given how popular crypto trading has become under the multiple regulatory codes of both the EU and the USA, these new tools are essential.

Both hardware- and software-based multi-signature wallet access is being widely used by organizations. Operating environments are increasingly being encrypted, addresses are being whitelisted, and many other areas of security are being monitored and tightened. Additional improvements have been seen in wallet management systems.
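
To make the multi-signature idea concrete, here is a toy sketch of the M-of-N logic such wallets enforce. This is my own illustration, not any particular wallet's implementation: real wallets verify cryptographic signatures on-chain rather than comparing name strings, and the key-holder names below are hypothetical.

```python
# Toy sketch of M-of-N multi-signature approval: a transfer proceeds
# only when at least M of the N registered key holders approve it.
# Real wallets verify cryptographic signatures, not name strings.

def multisig_approved(approvals: set, keyholders: set, m: int) -> bool:
    """Return True when at least m approvals come from registered key holders."""
    valid = approvals & keyholders  # discard approvals from unknown signers
    return len(valid) >= m

KEYHOLDERS = {"ops-key", "cfo-key", "security-key"}  # hypothetical names

print(multisig_approved({"ops-key", "cfo-key"}, KEYHOLDERS, m=2))       # True
print(multisig_approved({"ops-key", "intruder-key"}, KEYHOLDERS, m=2))  # False
```

The point of the scheme is that compromising a single credential (the failure mode in the Coincheck and Mt. Gox cases) is no longer enough to move funds.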

The security community now discusses hacks as they happen, taking steps to patch holes in security and blacklist any addresses that were party to a theft. However, given how often these attacks recurred in 2019, there is still much more work to be done.

Upgrading security technology is important, yes, but even more important are the steps taken to improve the risk management operations at the enterprise level. While technology is important, having efficient operations will make all security efforts far more productive and effective. Likewise, more rigorous checks on access to customer assets are key.

Customer investments must be secured, and the industry must adopt standard business practices when it comes to security, access, and any conflicts of interest. In other words, the industry has to start taking itself more seriously.

While no typical asset manager in the world takes custody of their customers' assets, this is not the case in the crypto industry. This is a huge mistake. Without the right principles in place, the industry will continue to deny itself the investment it needs, investment that would keep it from remaining vulnerable.

Security has become a huge concern not just for companies and exchanges but also for individuals who hold cryptocurrency. More and more people are turning to security measures such as hardware wallets, two-factor authentication, and VPN services to keep their cryptocurrency wallets and transactions safe.
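
As a minimal sketch of what two-factor authentication actually computes, here is the TOTP algorithm (RFC 6238) that most authenticator apps implement: a short code derived from a shared secret and the current 30-second time window. This is illustrative only; real deployments should use a vetted library and base32-encoded secrets.

```python
# Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over a 30-second time
# counter, dynamically truncated to a short numeric code.
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    counter = for_time // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> 94287082
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because both sides derive the code from the same secret and clock, an attacker who steals only the password still cannot produce a valid code.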

But if they see an industry that isn't doing the same, will they trust it? How long will it take the industry to realize that it needs to adopt the financial practices that have proven to work in traditional finance?

In the last year alone, countless foundations, exchanges, and funds have recognized that the crypto industry will never reach its full potential without mature business practices and complete transparency. These are the two things that ultimately protect customers and their assets, and they are the elements that matter most. In an age where cybercrime seems to be hitting its stride, this is essential.

As the industry has started to shift toward transparency and best practices, enterprise-level solutions have increasingly emerged to counter hacking risks. Machine learning and AI, for instance, are cutting-edge technologies that hackers struggle to counter. This has made insurance companies more willing to cover third-party custodians who are using the right security technologies.

How will 2020 change the cryptocurrency industry?

For the cryptocurrency industry to evolve in the ways it needs to, there must be more awareness of security risks and a lot more education. Funds, foundations, exchanges, projects, and others must ensure that their processes are secure, transparent, and follow best practices, the practices that keep their customers' assets safe. Most players will correctly decide to outsource this important task to third-party companies who specialize in these exact practices.

This will lead to a state of affairs that sees 2020 close with funds being more difficult to hack than ever. With more organization and collaboration between players and more adoption of enterprise-level security practices and principles, thieves will be far more discouraged from undertaking an attack on a crypto organization.

If the industry can galvanize itself and make this happen, then the future of the cryptocurrency industry looks bright.

Written by Samuel Bocetta, Security Analyst and Consultant


More under: Blockchain, Cybercrime, Cybersecurity

Can 5G Replace WiFi?

Czw, 2020-02-13 16:53

Verizon recently posted a webcast with investors in which Ronan Dunne, EVP and CEO of the Verizon Consumer Group, said that he believed 5G hotspots using millimeter-wave spectrum would eventually displace WiFi in homes.

He cites major benefits of 5G over WiFi. He believes that a 5G network will be more reliable and more secure. He thinks that people will value the safety that comes from having the traffic inside their home encrypted as it rides Verizon's 5G network, compared to the more public nature of WiFi, where every neighbor can see a home's WiFi network.

He also cites the convenience of being able to transfer 5G traffic between networks. He paints a picture in which a customer making a call or watching a video on a home 5G hotspot will be able to walk out the door and seamlessly continue the session outside on their cellphone. That's pretty slick stuff, should it ever come to pass.

The picture he's painting for Verizon investors is a future where homes buy a Verizon 5G subscription to use in place of WiFi. This is part of Verizon's ongoing effort to find a business case for 5G. His vision of the future is possible, but there are many hurdles for Verizon to overcome to achieve that vision.

It's going to get harder to compete with WiFi, since the technology is getting a lot better with two major upgrades. First, the industry has introduced WiFi 6, which brings higher-quality performance, lower latency, and faster data rates. WiFi 6 will use techniques like improved beamforming to greatly reduce interference between WiFi devices within the home.

Even more importantly, WiFi will be incorporating the new 6 GHz spectrum band that will increase bandwidth capabilities by adding seven 160 MHz bands and fourteen 80 MHz bands. It will be much easier to put home devices on separate channels when these new channels are added to the existing channels available on 2.4 and 5 GHz. This means that 5G will be competing against a much-improved WiFi compared to the technology we all use today.
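
As a back-of-the-envelope sanity check on those channel counts (my own arithmetic, not from the article): the U.S. 6 GHz allocation spans 5.925 to 7.125 GHz, or 1,200 MHz, so simple division shows how the wide channels fit.

```python
# Naive channel arithmetic for the 6 GHz WiFi band (5925-7125 MHz).
# This is an upper bound: the actual channel plan follows a fixed
# numbering grid, which can trim the count slightly.

BAND_MHZ = 1200  # 7125 - 5925

def max_channels(channel_mhz: int, band_mhz: int = BAND_MHZ) -> int:
    """Naive upper bound on non-overlapping channels of a given width."""
    return band_mhz // channel_mhz

print(max_channels(160))  # 7  -> matches the seven 160 MHz channels
print(max_channels(80))   # 15 -> the published plan defines 14, since
                          #       channels sit on a fixed numbering grid
```

Either way, the takeaway stands: the new band adds more wide-channel capacity than the 2.4 and 5 GHz bands offer combined.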

Another big hurdle for Verizon to overcome is that WiFi is ubiquitous today. WiFi is built into a huge number of devices, and a homeowner might already own a dozen or more devices capable of using WiFi. Verizon will have to convince homeowners somehow that 5G is so superior that it's worth replacing the panoply of WiFi devices.

Another hurdle is that there are going to be WiFi vendors painting almost the same picture as Verizon. The makers of WiFi routers are already envisioning future devices that will introduce millimeter-wave spectrum, including 5G, into the home. There are vendors already working on devices that will provide both WiFi 6 and 5G simultaneously over millimeter-wave connections in the publicly available 60 GHz V band. These solutions envision offering everything that Verizon can do, except the ability to roam seamlessly in and out of a home — and it will be done by selling a box instead of a new monthly subscription.

Another interesting hurdle to switching home networks to 5G is that there might be separate 5G solutions for each cellular carrier, each using different bands of spectrum. It's relatively easy for device makers today to build a cellphone or other device that can use different cellular carriers because the carriers all use similar spectrum. But as each cellular company picks a different mix of frequencies moving forward, there are likely to be cellphones and other devices that are specific to one carrier. It's impossible to build a cellphone with today's battery technology that can receive a huge range of spectrum bands — the multiple antenna systems would drain a cellphone dry in no time.

The largest hurdle of all is that WiFi is free to use after buying a WiFi router or meshed WiFi devices for the home. There is no monthly subscription fee to use the wireless WiFi connections within the home. Verizon clearly foresees a world where every home has a new monthly subscription to use its in-home 5G network.

Mr. Dunne makes one good point. It's becoming increasingly clear that public WiFi networks are susceptible to hacking. A 5G network controlled by a carrier should be a lot safer than a WiFi hotspot managed by a coffee shop. The big question is whether this is enough incentive for people to buy 5G-capable devices or for coffee shops to switch to 5G networks. Even should coffee shops go with a 5G solution, will homes follow suit?

Mr. Dunne's vision has an underlying assumption that people will value data security enough to be willing to pay more for it. He envisions people choosing a managed network when they have a choice. He could be right, and perhaps there will be enough data breaches in the coming years with WiFi that the paradigm will change from WiFi to 5G. But it's going to be incredibly hard to dislodge WiFi, particularly when it's evolving and improving along with 5G.

Even if Mr. Dunne is right, this shift is not coming soon, probably not within this decade. For now, WiFi has won the device war, and any shift to 5G would drag out over many years. It's going to be incredibly difficult for the cellular carriers to convince everybody to switch to 5G.

I sympathize with Mr. Dunne's dilemma. Investors want to understand where the revenues will come from to fund the expensive upgrades to 5G. Verizon and the other cellular carriers have tossed out many ideas, but so far, none of them have stuck to the wall. Investors are getting rightfully nervous since there doesn't appear to be any significant 5G revenues coming in the next few years. The carriers keep painting pictures of an amazing 5G future as a way to not have to talk about the lack of 5G revenues today.

Written by Doug Dawson, President at CCG Consulting

Follow CircleID on Twitter

More under: Access Providers, Broadband, Mobile Internet, Wireless

Cracks Appearing in Trump's Huawei Boycott

Śro, 2020-02-12 02:29

It must have been a galling experience for President Trump when his good mate British Prime Minister Boris Johnson failed to step in line with Trump's demand that the UK also boycott the Chinese firm Huawei by barring it from the rollout of 5G in Britain. However, the involvement of Huawei will be limited.

It further proves that boycotting Huawei is a political and not a technical issue. Huawei is a poster child for China's international technology success and, by boycotting Huawei, Trump is striking at China's standing as a global technology leader.

While there are other good telecoms manufacturers, Huawei is internationally recognized as the leader in 5G technology, innovation and R&D. At the same time, it has been able to offer its products and services at a significantly lower cost than its competitors. Britain recognizes, as do many other countries in Europe and Asia, that this provides them with the best possible mobile technology, which will assist these countries in global competitiveness and provide lower prices to their citizens.

To highlight this situation, the restriction put on Huawei in the rollout of 5G in the UK is going to cost British Telecom £500 million, as it will have to buy more expensive gear from other suppliers. BT's shares, already down 25% over the previous 12 months, were down a further 7.5% after the company's assessment of the Huawei impact.

I totally agree we need to be very wary of the totalitarian regime in China, where President Xi Jinping is using technology in an Orwellian way to control and manipulate its population, with the aim of making them placid and complacent. And he would like to extend his surveillance state model beyond the Chinese borders.

However, these sorts of concerns should be addressed through international forums putting pressure on China to adhere to global values and agreements. In these international forums, the rest of the world shouldn't shy away from strong pressure and strong condemnation.

As mentioned, the UK is not giving Huawei a free ride — there is a range of restrictions on the company's participation in the 5G rollout. Boris Johnson has also voiced his support for more local R&D in order to stimulate more competition in the telecoms equipment market. There are basically only three major global telecoms manufacturers: Huawei, Ericsson and Nokia (the latter two both European companies).

Back to the politics of the issue, in my opinion, Trump tries to mix these real concerns with global hegemony issues and the fear of the United States losing out economically to China.

It will be interesting to see if there will be any fallout of Johnson's decision not to follow Trump's lead. Unlike other countries in Europe and Asia who are still buying Huawei equipment, Britain is part of the Five Eyes countries. These Anglo-Saxon countries (Australia, Canada, New Zealand, the United Kingdom and the U.S.) share intelligence, and Trump has already mentioned his concerns about any of these countries not complying with the U.S. policy on Huawei.

With Britain leaving the EU, this country is now desperately looking for new bilateral trade deals, and Trump could make life difficult for Johnson by dragging out negotiations and/or being stubborn about making deals.

By the same token, an unpredictable President Trump could suddenly end the Huawei boycott if he believes he may get good concessions out of President Xi Jinping.

Another interesting development to follow is the reaction of other countries in the process of making decisions about the rollout of their 5G networks. Will they follow the UK's lead and withstand the Trump threats? Through the so-called Nine Eyes and Fourteen Eyes alliances, many more countries are linked to intelligence-sharing arrangements with the United States. Apart from Australia and Japan, none of them have followed the U.S. lead.

New Zealand and Canada are now expected to follow the UK's lead. The EU, as a group, has already indicated it is not in favor of banning any company from the 5G rollouts. Instead, it is working on a stringent security framework for these networks that will be imposed on all players. German Chancellor Angela Merkel has also voiced her opposition to a Huawei ban, and the UK decision will no doubt further strengthen her stand on the issue.

At the same time, countries in Africa and Asia are continuing to roll out networks with Huawei's 5G equipment, and here the UK decision will have a positive effect on further decisions to be made on these continents.

In short, this story is far from over, and there will be many more twists and turns before we see the end of it. In the meantime, the real focus should be on global cooperation aimed at ensuring that our democratic and human rights values are well protected in the wake of all the new technologies — not just in relation to 5G, but also, and in particular, to AI.

Written by Paul Budde, Managing Director of Paul Budde Communication


More under: Mobile Internet, Policy & Regulation, Telecom, Wireless

Looking Forward to 'The Conversation We Should Be Having'

Wto, 2020-02-11 14:00

No, this topic hasn't yet been exhausted: There's still plenty more conversation we can and should have about the proposed sale of the .ORG registry operator to a private firm. Ideally, that conversation will add more information and more clarity about the issues at stake and the facts that underpin those issues.

That's why I'm planning to attend today's event at American University where the sale's proponents, opponents and undecideds will have a tremendous opportunity to better understand one another. The event title says it all: "The Controversial Sale of the .ORG Registry: The Conversation We Should Be Having."

Andrew Sullivan, president and CEO of the Internet Society, will be joined on stage by representatives from the Electronic Frontier Foundation and the Washington College of Law, as well as both the former chair and former director of policy at Public Interest Registry. My plan is to be a spectator, but I hope my presence underscores my conviction, even in the face of sharp criticism, that this sale is the right way to #SaveDotOrg. (This isn't exactly a spoiler, given my earlier op-ed: I believe the move is good for .ORG, good for the Internet Society, and good for people who use, or hope to use, the Internet.)

I know the hosts want to foster an objective, evenhanded discussion. All I hope for is a balanced dialogue. The Internet Society's goal is to answer, as candidly as possible, any and all questions around the decision, which I and the other board members endorsed unanimously after extensive, wide-ranging discussion and due diligence.

To be clear, ISOC board members remain confident in the wisdom and integrity of our decision, just as our strongest critics continue to press the objections and doubts. But it's heartening that everyone is still open to conversation. In my mind, the conversation we should be having would cover:

  • How the pending transaction aims to protect, preserve, and empower the Public Interest Registry and to maintain and build on the .ORG community rather than stick with the status quo.
  • How appeals to emotion have eclipsed measured analysis of facts, e.g., here, here and here.
  • How .ORG will get stronger at a time when most of the original TLDs have been on the decline.
  • How the Internet Society, a dot-org itself, will now be able to work more independently on the Internet's biggest threats, including efforts to censor freedom of expression and undermine digital security.

Perhaps the most important fact is this: There are real threats to the Internet right now that plainly dwarf even the most outlandish vision of mercenary administration of .ORG. Take these, for example:

  • Governments around the world want backdoors to encrypted devices and networks, a development that would eliminate Internet privacy forever.
  • Both nations and private companies are making efforts to create centralized, controlled, Internet-like islands that undermine the Internet way of networking.
  • Nearly half of the world lacks access to the Internet and the enormous economic, social and educational opportunities it provides.

I hope we get to all of these topics. Because these bigger issues constitute the more important conversation we should be having. At the end of the day, even if we don't agree, finally, about the right decisions to make regarding .ORG and the Public Interest Registry, we absolutely will need to agree to join forces on the bigger Internet problems that are hurtling toward us in this century. I look forward to standing shoulder to shoulder with today's critics to defend our shared Internet against tomorrow's challenges.

See you at the event IRL or on the webcast.

Written by Mike Godwin, Member Board Of Trustees at Internet Society


More under: Domain Names, ICANN, Internet Governance, Policy & Regulation, Registry Services

Israel's Entire Voter Registry Exposed, the Massive Data Leak Involves 6.5 Million Voters

Wto, 2020-02-11 03:07

Israel's entire voter registry was recently uploaded to a vulnerable voting management app which effectively left the data wide open for days. The exposed information includes names, identification numbers, phone numbers and addresses, which could be accessed by anyone from a web browser. "Developed and managed by a company called Feed-b, the Elector app is used by prime minister Netanyahu's party to contact voters with news and updates," reports Phil Muncaster in Infosecurity. "The problem stemmed from an API endpoint which was left exposed without a password, and a lack of two-factor authentication throughout the site." The leak was discovered and reported on Monday by Ran Bar-Zik, an Israeli-born frontend developer for Verizon Media, according to various sources.


More under: Cybersecurity, Privacy

American University Washington College of Law to Hold Open Discussion on the .ORG Sale Controversy

Pon, 2020-02-10 22:43

The American University Washington College of Law has announced it will be hosting a fireside chat on the sale of the Public Interest Registry (PIR) to the private equity firm Ethos Capital. The event, titled "The Controversial Sale of the .ORG Registry: The Conversation We Should Be Having," will be held on Tuesday, February 11th, with the following confirmed speakers:

Andrew Sullivan, President & CEO, Internet Society

Mitch Stoltz, Senior Staff Attorney, Electronic Frontier Foundation

Benjamin Leff, Professor of Law, Charitable and Non-Profit Organizations, Washington College of Law

Marc Rotenberg, President, Electronic Privacy Information Center, Former Chair, Public Interest Registry (.ORG)

The discussion will be facilitated by Kathryn Kleiman, IP & Tech Clinic, Washington College of Law and Former Dir. of Policy, Public Interest Registry (.ORG), and will attempt to address questions such as: Can a non-profit (ISOC) sell a non-profit (PIR)? Are top-level domains still global public resources? What can ISOC and PIR do to protect the online communication of millions of .ORG registrants? What mechanisms could exist to address concerns of the .ORG community? "The answers could profoundly affect Internet speech for decades to come," says WCL.


More under: Domain Names, ICANN, Internet Governance, Policy & Regulation, Registry Services

Deep Sea Diving: The State of Submarine Cable Technology

Pon, 2020-02-10 21:39

Last month I attended the New Zealand Network Operators' Group meeting (NZNOG'20). One of the more interesting talks for me was given by Cisco's Beatty Lane-Davis on the current state of subsea cable technology. There is something quite compelling about engineering a piece of state-of-the-art technology that is intended to be dropped off a boat and then operate flawlessly for the next twenty-five years or more in the silent depths of the world's oceans! It brings together advanced physics, marine technology, and engineering to create some truly amazing pieces of communications infrastructure.

A Potted (and Regionally Biased) History of Subsea Cables

On the 5th August 1858, after a couple of false starts, the Atlantic Telegraph Company completed the first trans-Atlantic submarine telegraph cable. It was a simple affair of seven copper conductor wires, wrapped with three coats of the new wonder material, gutta-percha (or as we know it today, rubber). This was further wrapped in tarred hemp and an 18-strand helical sheath of iron wires. It didn't last long, as the cable company's electrical engineer, Dr. Wildman Whitehouse, preferred to remedy a fading signal by increasing the voltage on the circuit (as distinct from William Thomson's (later Lord Kelvin) chosen remedy of increasing the sensitivity of his mirror galvanometer receivers). A 2KVDC power setting proved fatal for the insulation of the cable, which simply ceased to function from that point onward.

Figure 1 – Isambard Kingdom Brunel's SS Great Eastern, the ship that laid the first lasting transatlantic cable in 1866

Over the ensuing years, techniques improved, with the addition of in-line amplifiers (or repeaters) to allow the signal to be propagated across longer distances, and progressive improvements in signal processing to improve the capacity of these systems. Telegraph turned to telephony, valves turned to transistors, and polymers replaced rubber, but the basic design remained the same: a copper conductor sheathed in a watertight, insulating cover, with steel jacketing to protect the cable in the shallower landing segments.

In the Australian context, the first telegraph system was completed in 1872, using an overland route to Darwin and then short undersea segments to connect to Singapore, and from there to India and the UK.

Figure 2 — Erecting the first telegraph pole for the Overland Telegraph Line in Darwin in 1870

Such cables were used for telegraphy, and the first trans-oceanic telephone systems were radio-based; it took some decades for advances in electronics to offer cable-based voice service.

One of the earlier systems to service Australia was commissioned in 1962. COMPAC supported 80 x 3kHz voice channels linking Australia via New Zealand, Fiji, and Hawaii to Canada, and from there via a microwave service across Canada and then via CANTAT to the UK. This cable used valve-based undersea repeaters. COMPAC was decommissioned in 1984, when the ANZCAN cable was commissioned.

ANZCAN followed a similar path across the Pacific, with wet segments from Sydney to Norfolk Island, then to Fiji, Hawaii and Canada. It was a 14MHz analogue system with solid-state repeaters spaced every 13.5km.

This was replaced in 1995 by the PACRIM cable system, with a capacity of 2 x 560MHz analogue systems. COMPAC had a service life of 22 years, and ANZCAN had a service life of 11 years. PACRIM had a commercial life of less than two years, as it was already superseded by 2.5Gbps submarine circuits on all-optical systems, and it was woefully inadequate for the explosive demand of the emerging Internet.

These days there are just under 400 submarine cables in service around the world, comprising 1.2 million km of cable. TeleGeography's cable map is a comprehensive resource that shows this cable inventory.

Figure 3 – TeleGeography's map of subsea cables


The first cable systems were incredibly expensive undertakings compared to the size of the economies that they serviced. The high construction and operating costs made the service prohibitively expensive for most potential users. For example, when the Australian Overland Telegraph was completed, a 30-word telegram to the UK cost the equivalent of three weeks' average wages. Early users were limited to the press and government agencies as a result.

The overland telegraph system used 'repeater' stations where human operators recorded incoming messages and re-keyed them into the next cable segment. Considering that, across Asia, few of these repeater station operators were native English speakers, it is unsurprising that up to one-third of the words in a message could be in error. So the service was both extremely expensive and highly error-prone.

What is perhaps most surprising is that folk persisted in spite of these impediments, and, in time, the cost did go down, and the reliability improved.

Many of these projects were funded by governments and used specialised companies that managed individual cables. In the late 19th century there was the British Indian Submarine Telegraph Company, the Eastern Extension Australasia and China Telegraph Company and the British Australia Telegraph Company, among many others. The links with government were evident, such as the commandeering of all cable services out of Britain during WWI. The emergence of the public national telephone operator model in the first part of the twentieth century was mirrored by the public nature of the ownership of cable systems.

The model of consortium cable ownership was developed within the larger framework, where a cable was floated as a joint-stock private company that raised the capital to construct the cable as a debt to conventional merchant banking institutions. The company was effectively 'owned' by the national carriers that purchased capacity on the cable, where the share of ownership equated to the share of purchased capacity.

The purchased capacity on a cable generally takes the form of the purchase of an indefeasible right of use (IRU), giving the owner of the IRU exclusive access to cable capacity for a fixed term (commonly for a period of between 15 and 25 years, and largely aligned to the expected service life of the cable). The IRU conventionally includes the obligation to pay for a proportion of the cable's operational costs.
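As an illustration of how an IRU's economics might be modelled (all figures below are hypothetical, not drawn from any actual cable): the upfront purchase price is amortised over the term of the IRU, and the owner also pays a proportional share of the cable's running costs.

```python
# Illustrative only: a simple model of IRU economics.
# All figures are hypothetical, not from any actual cable system.

def iru_annual_cost(purchase_price: float,
                    term_years: int,
                    capacity_share: float,
                    cable_annual_opex: float) -> float:
    """Straight-line amortisation of the IRU purchase price, plus the
    owner's proportional share of the cable's annual operating costs."""
    amortisation = purchase_price / term_years
    opex_share = capacity_share * cable_annual_opex
    return amortisation + opex_share

# e.g. a $30M IRU over a 20-year term for 10% of cable capacity,
# on a cable costing $20M/year to operate
cost = iru_annual_cost(30_000_000, 20, 0.10, 20_000_000)
print(f"annual cost: ${cost:,.0f}")
```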

When the major customer for undersea cable systems was the national carrier sector, the cost of each IRU was normally shared equally between the two carriers that terminated either end of the IRU circuits. (This was part of the balanced financial settlement regime of national carriers, where the cost of common infrastructure to interconnect national communications services was divided equally between the connecting parties.) While this model was developed in the world of national monopoly carriers, the progressive deregulation of the carrier world did not greatly impact this consortium model of cable ownership for many decades. Part of the rationale of this shared-cost model of half circuits constructed on jointly owned IRUs was that carriers cooperated on the capital and recurrent costs of facility construction and operation and competed on services.

One of the essential elements of this bureaucratic style of ownership was that capacity pricing of the cable was determined by the consortium, preventing any carrier or group of carriers from underselling the other members of the consortium. The intention was to preserve the market value of the cable by preventing undercutting and dumping. The actual outcome was a classic case of supply-side rationing and price-fixing, where the capacity of the cable was released into the market in small increments, ensuring that demand always exceeded available capacity over the lifetime of the cable, and cable prices remained buoyant.

The internet construction boom in the 1990s coincided with large-scale deregulation of many telco markets. This allowed other entities to obtain cable landing rights in many countries, which led to the entrance of carrier wholesalers into the submarine cable market, and also to the concept of "wholly-owned capacity" on cable systems. Early entrants in these markets were wholesale providers to the emerging ISP sector, such as Global Crossing, but it was not long before the large content enterprises, including Google, Facebook and others, entered this submarine cable capacity market with their own investments. The essential difference in this form of operation is the elimination of price setting, allowing cable pricing to reflect prevailing market conditions of supply and demand.


The basic physical cable design that's used today for these subsea systems is much the same as the early designs. The signal bearer itself has changed from copper to fibre, but the remainder of the cable is much the same. A steel strength member is included with the signal bearers, and these signal cables are wrapped in a gel to prevent abrasion.

Surrounding the signal bundle is a copper jacket to provide power, then an insulating and waterproofing sheath (a polyethylene resin), then, depending on the intended location of the particular cable segment, layers of protection. The shallower the depth of the cable segment and the greater the amount of commercial shipping, the higher the number of protection elements, so that the cable will survive some forms of accidental snagging (Figure 4).

Figure 4 – Cable cross-section

Normally the cable is laid on the seabed, but in areas of high marine activity, the steel-sheathed cable might be laid into a ploughed trench, and in special circumstances, the cable may be laid in a trough cut out of a seabed rock shelf.

The cable laying technique has not changed to any significant degree. An entire wet segment is loaded on a cable-laying ship, end-to-end-tested, and then the ship sets out to traverse the cable path in a single run. The speed and position of the ship are carefully determined so as to lay the cable on the seabed without putting the cable under tensile stress. The ship sails the lay path in a single journey without stopping, laying the cable on the seabed, whose average depth is 3,600m, and up to 11,000m at its deepest. The cable is strung out during laying up to 8,000m behind the lay ship.

Figure 5 – Loading a Cable Ship
Figure 6 – A Cable Laying Ship (left) & Cable Repeater (right)

Cable repair is also a consideration. It takes some 20 hours to drop a grapnel to a depth of 6,000m, and that depth is pretty much the maximum feasible depth of cable repair operations. Cables in deeper trenches are not repaired directly but spliced at either side of the trench. The implication is that when very deep-water cable segments fail, repairing the cable can be a protracted and complex process.

Figure 7 – Cable Branching Unit
While early cable systems provided simple point-to-point connectivity, the commercial opportunities in using a single cable system to connect many endpoints fuelled the need for Branching Units. The simplest form of an optical branching unit splits up the physical fibres in the cable core. These days it's more common to see the use of reconfigurable optical add-drop multiplexers (ROADMs). These units allow individual or multiple wavelengths carrying data channels to be added and/or dropped from a common carriage fibre without the need to convert the signals on all of the wavelength-division-multiplexed channels to electronic signals and back again to optical signals. The main advantage of ROADMs is the deferral of planning of the entire bandwidth assignment in advance, as ROADMs allow capacity in the system to be reconfigured in response to demand. The reconfiguration can be done as and when required without affecting traffic already passing through the ROADM.

The undersea system is typically referred to as the wet segment, and these systems interface to surface systems at cable stations. These stations house the equipment that supplies power to the cable. The power configuration is DC, and long-haul cable systems are typically powered by 10KVDC feeds at both ends of the cable. The cable station also typically includes the wavelength termination equipment and the Line Monitoring Equipment.

Optical Repeaters

Optical "Repeaters" are perhaps a misnomer these days. Earlier electrical repeaters operated in a conventional repeating mode, using a receiver to convert the input analogue signal into a digital signal, and then re-encoding the data into an analogue signal and injecting it into the next cable segment.

Figure 8 – EDFA amplification
These days optical cable repeaters are photon amplifiers that operate at full gain at the bottom of the ocean for an anticipated service life of 25 years. Light (at 980nm or 1480nm) is pumped into a relatively short erbium-doped fibre segment. The erbium ions cause an incoming light stream in the region around 1550nm to be amplified. The pump energy raises the erbium ions to a higher energy state, and when stimulated by a signal photon, an ion will decay back to a lower energy level, emitting a photon with a light frequency equal to that of the triggering incoming signal. This emitted amplified signal conveniently shares the same direction and phase as the incoming light signal. These are called "EDFA" units (Erbium Doped Fibre Amplifiers). This has been totally revolutionary for subsea cables. The entire wet segment, including the repeaters, is entirely agnostic with respect to the carrier signal. The number of lit wavelengths, the signal encoding and decoding, and the entire cable capacity are now dependent on the equipment at the cable stations at each end of the cable. This has extended the service life of optical systems, where additional capacity can be scavenged from deployed cables by placing new technology in the cable stations at either end, leaving the wet segment unaltered. The wet plant is agnostic with respect to the cable carrying capacity in these all-optical systems.
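Each amplifier's job is to restore the power lost in the preceding fibre span. A minimal link-budget sketch, assuming a typical fibre attenuation of 0.2 dB/km (an assumed figure; real subsea designs also budget for splices, margins and ageing):

```python
# Sketch of a per-span link budget for an amplified subsea system.
# Assumes ~0.2 dB/km fibre attenuation (a typical figure, not from
# any particular cable design).

ATTENUATION_DB_PER_KM = 0.2

def span_loss_db(span_km: float) -> float:
    """Total fibre loss over one span, in dB."""
    return ATTENUATION_DB_PER_KM * span_km

def power_after_span(input_mw: float, span_km: float) -> float:
    """Optical power (mW) remaining at the end of a fibre span."""
    return input_mw * 10 ** (-span_loss_db(span_km) / 10)

# A 60 km span loses 12 dB, i.e. ~94% of the launched power,
# which the EDFA at the far end must restore.
launch = 1.0  # mW
remaining = power_after_span(launch, 60)
print(f"span loss: {span_loss_db(60):.1f} dB, "
      f"power remaining: {remaining * 1000:.1f} uW")
```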

The subsea optical repeater units are designed to operate for the entire operational life of the cable without any further intervention. The design includes an element of redundancy: if a repeater fails, the cable capacity may be degraded to some extent, but the cable will still operate with viable capacity.

Figure 9 – EDFA Gain Equalization
The EDFA units have a bias in amplification across the operational frequency range, and it's necessary to add a passive filter to the amplified signal in order to generate a flatter power spectrum. This allows the cumulative sum of these in-line amplifiers to produce an outcome that maximises signal performance for the entire spectrum of the band used on the cable. Over extended distances, this is still insufficient, and cables may also use active units, called "Gain Equalisation Units." The number, spacing and equalisation settings used in these units are part of the customised design of each cable system.

In terrestrial systems, amplifier control can be managed dynamically, and as channels are added or removed, the amplifiers can be reconfigured to produce optimal gain. Subsea amplifiers have no such dynamic control, and they are set to gain saturation, or always on "maximum." In order to avoid overdriving the lit channels, all unused spectrum channels are occupied by an 'idler' signal.

Repeaters are a significant cost component of the total cable cost, and there is a compromise between a 'close' spacing of repeaters, every 60km or so, and stretching the inter-repeater distance to 100km, making significant savings in the number of repeaters in the system. On balance, the more you are prepared to spend on the cable system, the higher the cable's carrying capacity.
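The spacing tradeoff is easy to put in rough numbers. A back-of-the-envelope sketch on a hypothetical 12,000 km trans-Pacific route (the route length and the repeater-counting convention are illustrative assumptions, not figures from any real system):

```python
# Back-of-the-envelope comparison of repeater counts for the two
# spacing options discussed above, on a hypothetical 12,000 km route.

import math

def repeater_count(route_km: float, spacing_km: float) -> int:
    # Repeaters sit between spans, so N spans need N-1 repeaters.
    return math.ceil(route_km / spacing_km) - 1

ROUTE_KM = 12_000
for spacing in (60, 100):
    n = repeater_count(ROUTE_KM, spacing)
    print(f"{spacing} km spacing: {n} repeaters")
```

Stretching the spacing from 60 km to 100 km removes roughly 80 repeaters from this hypothetical route, which is where the capital savings come from.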

The observation here is that a submarine cable is not built by assembling standard components and connecting them together using a consistent set of engineering design rules, but by customising each component within a bespoke design to produce a system which is built to optimise its service outcomes for the particular environment where the cable is to be deployed. In many respects, every undersea cable project is built from scratch.

Cable Capacity and Signal Encoding

The earliest submarine cable optical systems were designed in the 1980s and deployed in the late '80s. These first coaxial cable systems used electrical regeneration and amplification equipment, and amplifiers were typically deployed every 40km on the cable. The first measure that was used to increase cable capacity was to use a system that had been the mainstay of the radio world for many years, namely frequency division multiplexing (FDM). The first optical transmission electrical amplification cables used FDM to create multiple voice circuits over a single coaxial cable carrier. These cables supported a total capacity of 560Mb divided into some 80,000 voice circuits.

Plans were underway to double the per-bearer capacity of these hybrid optical/electrical systems when all-optical EDFA systems were introduced, and the first deployments of EDFA submarine cable systems were in 1994. These cables used the same form of frequency-based optical multiplexing, where each optical cable is divided into a number of discrete wavelength channels (lambdas) in an analogous sharing framework termed WDM (Wavelength Division Multiplexing).

As the carrier frequency of the signal increased, coupled with long cable runs, the factor of chromatic dispersion became more critical. Chromatic dispersion describes the phenomenon that light travels at slightly different speeds at different frequencies. What this means is that a square wave input will be received as a smoothed wave, and at some point, the introduced distortion will push the signal beyond the capability of the receiver digital signal processor (DSP) to reliably decode. The response to chromatic dispersion is the use of negative dispersion fibre segments, where the germanium dioxide doping of the fibre is set up to compensate for chromatic dispersion. It's by no means a perfect solution, and while it's possible to design a dispersion compensation system that compensates for dispersion at the middle frequency of the C-band, the edge frequencies will still show significant chromatic dispersion when using long cable runs.
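
The scale of the problem can be illustrated with the standard dispersion relation Δt = D × L × Δλ. The numbers below (a dispersion coefficient of 17 ps/nm/km, typical of standard single-mode fibre at 1550nm, a 7,500km run, and a 0.1nm source linewidth) are illustrative:

```python
def dispersion_broadening_ps(d_ps_per_nm_km: float, length_km: float,
                             spectral_width_nm: float) -> float:
    """Pulse broadening from chromatic dispersion: delta-t = D * L * delta-lambda."""
    return d_ps_per_nm_km * length_km * spectral_width_nm

# Typical SMF at 1550nm: D ~ 17 ps/(nm.km), over a 7,500km run
broadening = dispersion_broadening_ps(17, 7500, 0.1)
print(f"{broadening:.0f} ps")  # ~12,750 ps of spreading
```

At 10Gbps a bit slot is only 100ps wide, so uncompensated dispersion of this magnitude smears each pulse across thousands of neighbouring slots, which is why compensation, whether in the fibre or in the DSP, is unavoidable on long runs.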

This first generation of all-optical systems used simple on/off keying (OOK) of the digital signal into light on the wire. This OOK signal encoding technique has been used for signal speeds of up to 10Gbps per lambda in a WDM system, achieved in 2000 in deployed systems, but cables with yet higher capacity per lambda are infeasible for long cable runs due to the combination of chromatic dispersion and polarisation mode dispersion.

At this point, coherent radio frequency modulation techniques were introduced into the digital signal processors used for optical signals, combined with wave division multiplexing. This was enabled by the development of improved digital signal processing (DSP) techniques borrowed from the radio domain, where receiving equipment was able to detect rapid changes in the phase of the incoming carrier signal, as well as changes in amplitude and polarization.

Figure 10 – Phase-Amplitude space mapping of QPSK keying

Using these DSPs, it's possible to modulate the signal in each lambda by performing phase modulation of the signal. Quadrature Phase Shift Keying (QPSK) defines four signal points, each separated by a 90-degree phase shift, allowing 2 bits to be encoded in a single symbol. A combination of 2-point polarisation mode encoding and QPSK allows for 4 bits per symbol. The practical outcome is that a C-band based 5THz optical carriage system using QPSK and DWDM can be configured to carry a total capacity across all of its channels of some 25Tbps, assuming a reasonably good signal-to-noise ratio. The other beneficial outcome is that these extremely high speeds can be achieved with far more modest components. A 100G channel is constructed as 8 x 12.5G individual bearers.
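
The arithmetic behind a figure of this order can be sketched as follows. QPSK carries 2 bits per symbol, dual polarisation doubles that to 4, and in a Nyquist-ideal system the aggregate symbol rate across the band approaches one symbol per second per hertz of optical bandwidth. Real systems do somewhat better or worse depending on shaping and guard bands, so this is only a back-of-envelope estimate:

```python
def cable_capacity_tbps(band_thz: float, bits_per_symbol: float,
                        symbols_per_hz: float = 1.0) -> float:
    """Back-of-envelope capacity: optical bandwidth x symbol rate per Hz x bits/symbol."""
    return band_thz * symbols_per_hz * bits_per_symbol

# 5THz C-band, QPSK (2 bits) x dual polarisation (x2) = 4 bits/symbol
print(cable_capacity_tbps(5.0, 4))  # 20.0 (Tbps), the same order as the quoted 25Tbps
```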

This encoding can be further augmented with amplitude modulation. Beyond QPSK, there is 8QAM, which adds another four points to the QPSK encoding at additional phase offsets of 45 degrees and at half the amplitude. 8QAM permits a group coding of 3 bits per symbol but requires an improvement in the signal-to-noise ratio of 4dB. 16QAM defines, as its name suggests, 16 discrete points in the phase-amplitude space, which allows the encoding of 4 bits per symbol, at a cost of a further 3dB in the minimum acceptable S/N ratio. The practical limit to increasing the number of encoding points in phase-amplitude space is the signal-to-noise ratio of the cable, as the more complex the encoding, the greater the demands placed on the decoder.

Figure 11 – Adaptive Modulation Constellations for QPSK, 8PSK, 16QAM and 64QAM
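
The pattern across these constellations is that each doubling of the number of points buys one more bit per symbol, since a symbol drawn from M points encodes log2(M) bits:

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    """A symbol chosen from M constellation points carries log2(M) bits."""
    return int(math.log2(constellation_points))

for name, m in [("QPSK", 4), ("8QAM", 8), ("16QAM", 16), ("64QAM", 64)]:
    print(f"{name}: {bits_per_symbol(m)} bits/symbol")
```

Each extra bit comes at the cost of a tighter packing of points, and hence a higher minimum signal-to-noise ratio, which is the trade-off described above.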

The other technique that can help in extracting a dense digital signal from a noise-prone analogue bearer is the use of Forward Error Correction (FEC) codes in the digital signal. At a cost of a proportion of the signal capacity, FEC codes allow the detection and correction of a small number of errors per FEC frame.

The current state of the art in FEC is Polar codes, where the channel performance has now almost closed the gap to the Shannon limit, which sets the maximum data rate for a given bandwidth and a given noise level.
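
The Shannon limit itself is simple to state: C = B·log2(1 + S/N). A small sketch, using an illustrative 50GHz channel and a 10dB signal-to-noise ratio:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit: C = B * log2(1 + S/N), with S/N as a linear power ratio."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 50GHz channel at 10dB S/N cannot exceed roughly 173Gbps, no matter the coding
print(f"{shannon_capacity_bps(50e9, 10) / 1e9:.1f} Gbps")
```

No amount of coding cleverness can exceed this bound; FEC refinements such as Polar codes simply narrow the gap between practical decoders and this theoretical ceiling.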

It's now feasible and cost-efficient to deploy medium to long cable systems with capacities approaching 50Tbps in a single fibre. But that's not the limit we can achieve in terms of the capacity of these systems.

There are two frequency bands available to cable designers. The conventional band is C-band, which spans wavelengths from 1,530nm to 1,565nm. There is an adjacent band, L-band, which spans wavelengths from 1,570nm to 1,610nm. In analogue terms, there is some 4.0 to 4.8 THz in each band. Using both bands with DWDM and QPSK encoding can result in cable systems that can sustain some 70Tbps per fibre through some 7,500km of cable.
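
Those per-band figures follow directly from converting the wavelength windows into frequency terms, since bandwidth in hertz is what capacity calculations work with:

```python
C_M_PER_S = 299_792_458  # speed of light in a vacuum

def band_width_thz(lambda_low_nm: float, lambda_high_nm: float) -> float:
    """Width of a wavelength window expressed in frequency terms: c/l1 - c/l2."""
    low_hz = C_M_PER_S / (lambda_low_nm * 1e-9)
    high_hz = C_M_PER_S / (lambda_high_nm * 1e-9)
    return (low_hz - high_hz) / 1e12

print(f"C-band: {band_width_thz(1530, 1565):.2f} THz")  # ~4.38 THz
print(f"L-band: {band_width_thz(1570, 1610):.2f} THz")  # ~4.74 THz
```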

Achieving this bandwidth comes at a considerable price, as the EDFA units operate in either the C or the L band, so cable systems that use both C and L bands need twice the number of EDFA amplifiers (which, of course, requires double the injected power at the cable stations). At some point in the cable design exercise, it may prove more cost-efficient to increase the number of fibre pairs in a cable than to use ever more complex encoding mechanisms. However, there are limits to the total amount of power that can be injected into long-distance submarine cables, so such long-distance systems typically have no more than 8 fibre pairs.

A different form of optical amplification is used in the Fibre Raman Amplifier (FRA). The principle of the FRA is based on the Stimulated Raman Scattering (SRS) effect. The gain medium is undoped optical fibre, and power is transferred to the optical signal by a non-linear optical process known as the Raman effect. An incident photon excites an electron to a virtual state, and stimulated emission occurs when the electron de-excites down to the vibrational state of the glass molecule. The advantages of FRAs are variable wavelength amplification, compatibility with installed single-mode fibre, and their suitability for extending EDFAs. FRAs require very high pump power lasers and sophisticated gain control; however, a combination of EDFA and FRA may result in a lower average power over a span. FRAs operate over a very broad signal band (1280nm to 1650nm).

All this assumes that the glass itself is a passive medium that exhibits distortion (or noise) only in linear ways — twice the cable, twice the signal attenuation and twice the degree of chromatic dispersion and so on. When large amounts of energy, in the form of photons, are pumped into glass, it displays non-linear behavior, and as the energy levels increase, the non-linear behaviours become significantly more evident.

The Optical Kerr effect is seen with laser injection into optical cables where the intensity of the light can cause a change in the glass' index of refraction, which in turn can produce modulation instability, particularly on phase modulation used in optical systems. Brillouin scattering results from the scattering of photons caused by large scale, low-frequency phonons. Intense beams of light through glass may induce acoustic vibrations in the glass medium that generate these phonons. This causes remodulation of the light beam as well as modification of both light amplification and absorption characteristics.

The result is that high-density signaling in cables requires high power, which results in non-linear distortion of the signal, particularly in terms of phase distortion. The trade-offs are to balance the power budget, the chromatic and phase distortion, and the cable capacity. Some of the approaches used in current cable systems include different encoding for different lambdas to optimise the total cable carrying capacity.

Figure 12 – Non-linear distortion of a QPSK keyed signal

Current cable design since 2010 largely leaves dispersion to the DSP and does away with dispersion compensation segments in the cable itself. The DSP now performs feedback compensation. This allows for greater signal coherency, albeit at a cost of increased complexity in the DSP. Cable design is now also looking at a larger diameter of the glass core in single-mode fibre. The larger effective size of the glass core reduces the non-linear effects of so much power being pumped into the glass, leading to up to ten times the usable capacity in these larger-core fibre systems.

Work is also looking at higher-powered DSPs that can perform more functions. This relies to a certain extent on Moore's Law in operation: increasing the number of gates in a chip allows for greater functionality to be loaded onto the DSP chip while still having viable power and cooling requirements. Not only can DSPs operate with feedback compensation; they can also perform adaptive encoding and decoding. For example, a DSP can alternate between two encodings for every symbol pair: if symbols are encoded with alternating 8QAM and 16QAM, the result is 7 bits per symbol pair, or an average of 3.5 bits per symbol, reducing the large increments between the various encoding levels. The DSP can also test the various phase-amplitude points and will reduce its use of symbols that have a higher error probability. This is termed Probabilistic Constellation Shaping, or PCS. This can be combined with FEC operating at around 30% to give a wide range of usable capacity levels for a system.
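
As a rough sketch of how these techniques compose (the 30% FEC figure is the one quoted above; treating it as the fraction of raw bits given over to coding is a simplifying assumption):

```python
def effective_bits_per_symbol(modulation_bits: list, fec_overhead: float) -> float:
    """Average payload bits per symbol when alternating between modulations,
    discounted by the fraction of raw bits consumed by FEC."""
    raw_average = sum(modulation_bits) / len(modulation_bits)
    return raw_average * (1 - fec_overhead)

# Alternating 8QAM (3 bits) and 16QAM (4 bits): 3.5 raw bits/symbol on average,
# reduced to 2.45 payload bits/symbol by a 30% FEC share
print(round(effective_bits_per_symbol([3, 4], 0.30), 2))
```

Varying the modulation mix and the FEC share is what gives the designer the fine-grained range of usable capacity levels described above.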

Taking all this into account, what is the capacity of deployed cable systems today? The largest capacity cable in service at the moment is the MAREA cable, connecting Bilbao in Spain to Virginia Beach in the US, with a service capacity of 208Tbps.


We are by no means near the end of the path in the evolution of subsea cable systems, and ideas on how to improve the cost and performance abound. Optical transmission capability has increased by a factor of around 100 every decade for the past three decades, and while it would be foolhardy to predict that this pace of capability refinement will come to an abrupt halt, it also has to be admitted that sustaining this growth will take a considerable degree of technological innovation in the coming years.

One observation is that the work so far has concentrated on getting the most we can out of a single fibre pair. The issue is that to achieve this, we are running the system in a very inefficient power mode where a large proportion of the power gets converted to optical noise that we are then required to filter out. An alternative approach is to use a collection of cores within a multi-core fibre and drive each core at a far lower power level. System capacity and power efficiency can both be improved with such an approach.

The refinements of DSPs will continue, but we may see changes to the systems that inject the signal into the cable. In the same way that vectored DSL systems use pre-compensation of the injected signal in order to compensate for signal distortion in the copper loop, it may be possible to use pre-distortion in the laser drivers, or possibly even in the EDFA segments, in order to achieve even higher performance from these undersea systems.

Written by Geoff Huston, Author & Chief Scientist at APNIC

Follow CircleID on Twitter

More under: Broadband, Telecom

Truth in Web Digital Identity?

Sun, 2020-02-09 19:07

Most of us, when we go to a website and see the little lock at the top of the browser, don't think twice and trust that we are communicating with the right company or organization. However, this is no longer the case because of a rather radical development that has largely occurred without notice or intervention by almost everyone. The web now has its own rapidly spreading version of CallerID spoofing that is about to get worse.

Thirty-five years ago, the National Security Agency, working with the private sector, developed what has proven to be the most important and widely used means for digital identity trust. It is known as the Public Key Infrastructure digital certificate, or "PKI cert" for short, and was specified in a global intergovernmental standard known as ITU-T X.509.

The idea was simple. Any organization that wants to be trusted goes to a special provider known as a public Certificate Authority (CA) who is supposed to verify certain essential identity basics, and then issue a unique, encrypted key — the PKI cert — to the organization with its identity information securely contained. The platform was approved by all the world's governments and became the basis for trusted digital identity globally. Europe added further trust features through an ETSI Electronic Signatures and Infrastructures standards group.

Then came the World Wide Web with sites all over the world as a kind of universal user interface to billions of people. The problem was that users couldn't trust who was actually running the websites. So a little over ten years ago, the five companies which produce most of the world's web browsers got together with most of the CAs to develop a standard for vetting organization identity for trusted website certificates and display that information in a little lock icon that appears at the top of the browser. They collaborate and reach agreements through an organization known as the CA/Browser Forum. The activity has very far-reaching, fundamental cybersecurity consequences as they control who gets trusted, how verification occurs, and how that trust is provided to billions of users around the world.

Until relatively recently, as required by well-established global standards and practices, PKI certs had some substantial vetting of an organization's identity, which was then coded into the certificates and displayed to end-users in the browser lock. There was even a high-trust certificate known as an "extended validation certificate" that turned the lock green in most browsers and displayed the validated name.

However, starting in 2013, several parties started up a 501(c)(3) non-profit corporation in Silicon Valley (Internet Security Research Group) to dramatically disrupt the digital identity world by issuing free, zero-trust, instant certificates with no organization identity vetting. These so-called Domain Certificates were then marketed commercially beginning in 2016 under the registered trademark Let's Encrypt® and browser vendors were asked to recognize them as a trusted CA. If you see one of these Let's Encrypt certificates (identified as "DST Root CA X3") and click on the lock, the Subject Organization identity information is completely missing and simply says "unknown." It is caveat emptor.

The tactic proved enormously successful, as the organization itself described in a highly detailed, tell-all paper presented at a London conference and made public last December. As they note in the paper, it "has grown to become the world's largest HTTPS CA… and by January 2019, it had issued over 538 million certificates..." The paper also documents how Let's Encrypt has had a profound effect on the CA market, which it now dominates with 57% of the certificates: "Let's Encrypt has seen rapidly growing adoption among top million sites since its launch, while most other CAs have not." They also describe how they used the Internet Engineering Task Force (IETF) to leverage their activities. The commercial opportunity was further facilitated through sponsors who make tax-exempt contributions to the organization's $3.5 million reported 2018 income, some of whom then market the certificates as part of their business offerings.

The paper also admits that "important security challenges remain." The cybersecurity impacts arise — because with zero validation, anyone with interest in spoofing, hiding their identity, or otherwise exploiting security flaws can do so — and indeed have.

Legal and public policy concerns

Although Let's Encrypt has a small section in its December paper describing the "legal environment," it doesn't even begin to treat the major national security, public policy, public safety, antitrust, tort liability, law enforcement, IRS, and consumer protection dimensions, which have gone virtually without notice or discussion. Perhaps the most central concern can be summed up in four questions: who gets to decide who is trusted, with what level of vetting, with what manner of notice to end users, and who bears the consequences?

The challenge of digital identity trust was largely solved 35 years ago through a comprehensive, visionary Reagan Administration initiative known as Secure Data Network Systems (SDNS), which in fact was responsible for today's X.509 PKI environment. However, all the required public-private administrative and identity-vetting actions necessary to successfully implement the platform were eliminated a decade later by the Clinton-Gore Administration in the belief that Silicon Valley itself could handle everything and grow the information economy.

As a result, we have inherited a world of rampant cybersecurity and societal problems stemming from an inability to trust anything online, and where some of the most important identity trust decisions for most of the world's population are made by a handful of firms and organizations with no oversight, control, or consequences. It seems long overdue for a concerted global public-private effort to significantly improve digital identity trust for the web and all the giga-objects and services that will constitute the new 5G virtualised communications ecosystem. A potential sweetener for Silicon Valley to accept government involvement is relief from the potentially enormous antitrust, consumer protection, and tort liability consequences.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

More under: Cybersecurity, Internet Governance, Web

Cyberspace Security in Africa – Where Do We Stand?

Sun, 2020-02-09 18:51

Very few African states today have developed a national cybersecurity strategy or have in place cybersecurity and data protection regulations and laws. Yet, the continent has made major headway in developing its digital ecosystem, and moreover, it is home to the largest free trade area in the world, which is predicted to create an entirely new development path harnessing the potential of its resources and people.

The World Bank believes a digital economy in Africa can boost economic growth on the continent by up to two percentage points per year and reduce poverty by one percentage point per year in Sub-Saharan Africa alone. But not even such great predictions and clear solutions to poverty alleviation have convinced the continent's leadership to work towards ensuring that once the digital ecosystem (an ecosystem so critical to the continent's success and future) is developed, it is protected and kept stable. Such laxity explains why, according to a survey carried out by the African Union Commission (AUC), out of the 55 African states only 8 countries have a national strategy on cybersecurity, only 13 have a Computer Emergency Response Team (CERT) or Computer Security Incident Response Team (CSIRT), 14 have personal data protection laws, and only 11 have cybercrime laws. A report by Deloitte raises similar concerns.

While the individual governments on the continent seem to be very slow to appreciate the importance of cyber safety, the regional political body, the African Union, seems to be making some gains in raising awareness and advocating for better cyber safety, well, at least to the continent's ministers of Information and Communications Technology. On September 20, 2018, the African Union Commission (AUC) put out a call for experts to join its African Union Cyber Security Expert Group (AUCSEG). The call was based on a resolution by the AU Executive Council in January of the same year to create an Africa Cyber Security collaboration and coordination committee that would advise the AUC and policymakers on cyber strategies, with the following specific tasks:

  • Advising the AUC on cybersecurity issues and policies, such as capacity building initiatives;
  • Proposing solutions to facilitate the ratification and domestication of the Malabo Convention into national laws;
  • Sharing best practice on critical and Internet infrastructure security and how to mitigate current and new threats;
  • Identifying areas of research needed for the formulation of policies, guidelines, etc., which can be general or sector-specific, for instance, cybersecurity for smart grid technologies in the electric power industry, for financial systems, and for equipment monitoring tools;
  • Identifying ways to support Computer Security Incident Response Teams (CSIRTs), in the area of capacity building and information sharing at the regional and African Union level;
  • Encouraging close collaboration among the AU Member States and stakeholders, including in responsible and coordinated disclosures;
  • Proposing ways to increase the skills of information systems and cybersecurity professionals in Africa (e.g., by fostering trusted certification programs);
  • Supporting AUC in formulating strategies for cybersecurity and capacity building programs;
  • Supporting AUC and Member States on international cooperation matters regarding cybersecurity, personal data protection and combating cybercrime.

The group was formed and held its inaugural meeting on December 10, 2019. It has, through its chair, been asking African experts to submit their personal assessments of the state of cybersecurity on the continent, especially as it pertains to what the continent has done right and what it can do better.

To answer that call, I would say that the adoption of the African Union Convention on Cyber Security and Personal Data Protection in 2014 is amongst the things that Africa has done right in this area, even though most countries are yet to ratify the convention. Even with the challenge of ratification, it remains a major step forward towards increasing awareness amongst the ministers and administrators of member states. Then there was the work done to develop and launch the Privacy and Personal Data Protection Guidelines by the African Union Commission in partnership with the Internet Society (ISOC). That was also an important milestone towards a secure cyberspace in Africa.

However, and as I've written before, it is disappointing to see that continent-wide and regional initiatives like the Continental Free Trade Area (CFTA) do not embed cybersecurity considerations and concepts at their conception phases and when such projects are developed. In light of current technological trends, and in line with progress being made in developing the African digital ecosystem, free intra-regional trade will not only be offline. Rather, we are sure to see a significant amount of intra-regional trade taking place on the Internet. Digital trade generally requires a great deal of free movement and flow of personal data, as data is the lifeblood of the digital economy. A continent-wide digital trade involving consumers cannot occur without the collection and movement of personal data like names, email addresses, and billing information across borders. In order for such a market to be efficiently regulated, the region will need to look into unifying implementations of cybersecurity and data protection regulations across the continent. The best way to do that (in my opinion) would be for African states to adopt the African Union Convention on Cyber Security and Personal Data Protection or at least align their national cybersecurity legislation with it. Current disparate implementations of data protection regulation (where they exist) make it a very tedious task for multinational businesses, or any company carrying out business with partners in multiple countries in the region, to lawfully transfer data across borders as part of their operations. Non-compliance with the different data protection regulations may preclude companies from potential business opportunities in the region.

We must also remember that in most advanced information societies, regulation tends to play catch-up to innovation. Technology use led by the private sector should, in theory, be speeding ahead, while government and public policymakers struggle to keep pace. But that is not even the picture we see across the continent. Admittedly, there is some technological progress, but not nearly fast enough to transform the continent into an information society. Therefore, we must start asking questions like: what are the implications if the private sector that is meant to lead innovation also suffers from a lack of cybersecurity awareness, just like its public sector and civil society counterparts?

It is often assumed that the key issue hindering progress in the maturity of cybersecurity posture in Africa is the public leaders. In fact, in a request posted in one of the African policy chat forums (the Free Software and Open Source Foundation for Africa, FOSSFA, Telegram channel), the chair of the AUCSEG asked for "suggestions on how to message cybersecurity/technology and digital trust ideas to analog African leadership." Yet, in an empirical study on National Cyber Security Awareness in Africa using focus groups, some African stakeholders responded that "the government realizes that lack of awareness is crucial and recognizes the importance of a multi-stakeholder approach towards this goal." This raises many questions, amongst them: are our assumptions about what hinders the advancement of the cybersecurity posture on the continent, and even the general adoption of technological solutions, wrong?

Another pertinent question that comes out of the above statement is, if African governments are aware, or at the very least have an idea of what needs to be done to improve their countries' cybersecurity posture but no progress is being made on that front, then what exactly is stopping them?

As the new year and decade begin, these are some of the important questions the AUCSEG should be finding answers to, hopefully propelling the continent to a better cybersecurity posture than we find ourselves in today. With the right answers, the continent might move from the Start-up stage (stage 1) to at least the Established stage (stage 3) of the University of Oxford Cybersecurity Capacity Maturity Model for Nations (CMM). The CMM assesses the cybersecurity capacity maturity of states over five dimensions (Cybersecurity Policy and Strategy; Cyber Culture and Society; Cybersecurity Education, Training and Skills; Legal and Regulatory Frameworks; and Standards, Organizations, and Technologies), with indicators that describe the steps and actions required to reach one of five stages of maturity: 1) Start-up; 2) Formative; 3) Established; 4) Strategic; 5) Dynamic.

But if, in answering these questions, the AUCSEG finds that it is indeed the 'analog-ness' of our leaders that is hindering progress in cybersecurity on the continent, then I would recommend the following next steps:

  1. Investing in awareness among the 'analog' leaders of how cybercrime and a poor (or absent) national cybersecurity strategy and regulation affect the various state economies and their governments' legitimacy.
  2. The AUC should invest in trust-building mechanisms between governments and their private sectors and civil society, in order to create channels of communication and trust in local expert advice. This also enables successful public-private partnerships in national security.

Once these are in place, strategies like a Whole-of-Government (WoG) approach, which is necessary to achieve efficient and cost-effective national cybersecurity, should be recommended to African states. This approach lends itself to better coordination and the use of existing resources.

And finally, if the AUCSEG is going to support the AUC and member states on international cooperation on matters of cybersecurity and cybercrime, as listed in its tasks, then it should investigate and advise the AUC on how recognition (or the lack thereof) of cyberspace as the fifth domain of military warfare could impact the national security of African states. Only one country in Africa, the Republic of South Africa, has researched and considered the concept of a Revolution in Military Affairs (RMA), a military concept which proposes that new military doctrines, strategies, tactics and technologies are required for future warfare. This is especially relevant in a digital era where more and more public civilian infrastructure is being targeted, both in peacetime and in wartime, as a legitimate military target due to the dual-use nature of cyber infrastructure.

While it is understandable that financial limitations, amongst other things, prevent developing countries from adopting such a concept, African leadership must be aware of and well versed in the concept to substantially contribute to current global security and international law (as it relates to cyberspace) discussions and fora, like the United Nations Group of Governmental Experts (UN GGE) on Developments in the Field of Information and Telecommunications in the Context of International Security and the UN Open-Ended Working Group (OEWG) looking at cyberspace norms.

Written by Tomslin Samme-Nlar, Researcher

More under: Cyberattack, Cybercrime, Cybersecurity, Internet Governance, Policy & Regulation