The Internet and Other Organizations

What Does the Future Hold for the Internet?

CircleID - 23 hours 9 minutes ago

Explore the interactive 2017 Global Internet Report: Paths to Our Digital Future

This is the fundamental question that the Internet Society is posing through the report just launched today, our 2017 Global Internet Report: Paths to Our Digital Future.

The report is a window into the diverse views and perspectives of a global community that cares deeply about how the Internet will evolve and impact humanity over the next 5-7 years. We couldn't know what we would find when we embarked on the journey to map what stakeholders believe could shape the future of the Internet, nor can we truly know what will happen to the Internet, but we do now have a sense of what we need to think about today to help shape the Internet of tomorrow. The report reflects the views and aspirations of our community as well as some of the most pressing challenges facing the future of this great innovation.

What have we learned? We've learned that our community remains confident that the core values that gave rise to the Internet remain valid. We also heard very strong worries that the user-centric model of the Internet is under extraordinary pressure from governments, from technology giants, and even from the technology itself. There is a sense that there are forces beyond the users' control that may define the Internet's future, and that the user may no longer be at the center of the Internet's path.

It is, perhaps, trite to say that the world is more connected today than ever before. Indeed, we are only beginning to understand the implications of a hyperconnected society that is dependent on the generation, collection and movement of data in ways that many do not fully understand. The Internet of the future will most certainly enable a host of products and services that could revolutionize our daily lives. At the same time, our dependence on the technology raises a myriad of challenges that society may be ill-equipped to address.

Clearly, the Internet is increasingly intertwined with a geopolitical environment that feels uncertain and even precarious. The Internet provides governments both with opportunities to better the lives of their people and with tools for surveillance and even control. This report highlights the serious choices we all must make about how to ensure that rights and freedoms prevail in the Internet of the future. The decisions we make will determine whether humanity remains in the driver's seat of technology or not.

In short, the decisions we make about the Internet can no longer be seen as "separate", as "over there" — the implications of a globally interconnected world will be felt by all of us. And the decisions we make about the Internet will be felt far and wide. We are still just beginning to understand the implications of a globally connected society and what it will mean for individuals, business, government and society at large.

How we address the opportunities and challenges that today's forces of change are creating for the future is paramount, but one thing above all others is certain — the choices are ours alone to make, and the future we want is up to us to shape.

Explore the interactive 2017 Global Internet Report: Paths to Our Digital Future

Written by Sally Shipman Wentworth, VP of Global Policy Development, Internet Society

Follow CircleID on Twitter

More under: Broadband, Censorship, Cybersecurity, Internet Governance, Internet Protocol, Mobile Internet, Networks, Policy & Regulation, Privacy, Web

Google Global Cache Servers Go Online in Cuba, But App Engine Blocked

CircleID - Fri, 2017-09-22 20:28

I had hoped to get more information before publishing this post, but difficult Internet access in Cuba and now the hurricane got in the way — better late than never.

Cuban requests for Google services are being routed to GCC servers in Cuba, and all Google services that are available in Cuba are being cached — not just YouTube. That will cut latency significantly, but Cuban data rates remain painfully slow. My guess is that Cubans will notice the improved performance in interactive applications, but maybe not perceive much of a change when watching a streaming video.

Note the qualifier in the above paragraph — all Google services that are available in Cuba — evidently, Google blocks access to its App Engine hosting and application development platform. Cuban developers cannot build App Engine applications, and Cubans cannot access applications like the Khan Academy or Google's G-Suite.

The last time I checked, Rackspace and Amazon allowed access to their hosting platforms from Cuba, but IBM Softlayer and Google did not. President Obama clearly favored improved telecommunication for Cuba, stating in his Cuba Policy Changes:

"I've authorized increased telecommunications connections between the United States and Cuba. Businesses will be able to sell goods that enable Cubans to communicate with the United States and other countries."

While Trump claimed that he was "canceling the last administration's completely one-sided deal with Cuba," he made few changes and has said nothing about restrictions on access to Internet services by Cubans.

I wonder why IBM and Google do not follow the lead of Amazon and Rackspace.

Written by Larry Press, Professor of Information Systems at California State University

Follow CircleID on Twitter

More under: Access Providers, Broadband, Internet Governance, Policy & Regulation, Web

Networks Are Not Cars Nor Cell Phones

CircleID - Thu, 2017-09-21 18:24

The network engineering world has long emphasized the longevity of the hardware we buy; I have sat through many vendor presentations where the salesman says "this feature set makes our product future proof! You can buy with confidence knowing this product will not need to be replaced for another ten years..." Over at the Networking Nerd, Tom has an article posted supporting this view of networking equipment, entitled Network Longevity: Think Car, not iPhone.

It seems, to me, that these concepts of longevity have the entire situation precisely backward. These ideas of "car length longevity" and "future proof hardware" are looking at the network from the perspective of an appliance, rather than from the perspective of a set of services. Let me put this in a little bit of context by considering two specific examples.

In terms of cars, I have owned four in the last 31 years. I owned a Jeep Wrangler for 13 years, a second Jeep Wrangler for eight years, and a third Jeep Wrangler for nine years. I recently switched to a Jeep Cherokee, which I have now been driving for just about a year.

What if I bought network equipment like I buy cars? What sort of router was available nine years ago? That is 2008. I was still working at Cisco, and my lab, if I remember right, was made up of 7200's and 2600's. Younger engineers probably look at those model numbers and see completely different equipment than what I actually had; I doubt many readers of this blog ever deployed in their networks the kind of 7200's I had in my lab. Do I really want to run a network today on 9-year-old hardware? I don't see how the answer to that question can be "yes." Why?

First, do you really know what hardware capacity you will need in ten years? Really? I doubt your business leaders can tell you what products they will be creating in ten years beyond a general description, nor can they tell you how large the company will be, who their competitors will be, or what shifts might occur in the competitive landscape.

Hardware vendors try to get around this by building big chassis boxes and selling blades that will slide into them. But does this model really work? The Cisco 7500 was the current chassis box nine years ago, I think — even if you could get blades for it today, would it meet your needs? Would you really want to pay for the power and cooling of an old 7500 for nine years because you didn't know, nine years ago, whether you would need one slot or seven?

Building a hardware platform for ten years of service in a world where two years is too far to predict is like rearranging the chairs on the Titanic. It's entertaining, perhaps, but it's pretty pointless entertainment.

Second, why are we not taking the lessons of the compute and storage worlds into our thinking, and learning to scale out, rather than scaling up? We treat our routers like the server folks of yore — add another blade slot and make it go faster. Scale up makes your network capacity grow in big, expensive steps, always bought well ahead of actual demand.

Do you see those grey areas between the capacity you have paid for and the capacity you actually use? They are costing you money. Do you enjoy defenestrating money?
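A toy back-of-the-envelope sketch of that grey area, with entirely invented numbers (this models no particular vendor's pricing), makes the point:

```python
# Toy illustration (invented numbers): scale-up vs. scale-out overprovisioning.
# Capacity bought in big chassis-sized steps leaves a "grey area" of paid-for
# but unused capacity while smooth demand growth catches up.

demand = [10, 14, 20, 28, 40, 56, 80, 113, 160, 226]  # Gbps per year, hypothetical

def scale_up(d, step=250):
    """Buy a big chassis whenever demand would exceed what we own."""
    return ((d // step) + 1) * step

def scale_out(d, unit=25):
    """Add small units just ahead of demand."""
    return ((d // unit) + 1) * unit

grey_up = sum(scale_up(d) - d for d in demand)
grey_out = sum(scale_out(d) - d for d in demand)

print(f"unused capacity, scale-up:  {grey_up} Gbps-years")
print(f"unused capacity, scale-out: {grey_out} Gbps-years")
```

The difference between the two sums is money spent on capacity that sits idle for years.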

These are symptoms of looking at the network as a bunch of wires and appliances, as hardware with a little side of software thrown in.

What about the software? Well, it may be hard to believe, but pretty much every commercial operating system available for routers today is an updated version of software that was available ten years ago. Some, in fact, are more than twenty years old. We don't tend to see this because we deploy routers and switches as appliances, which means we treat the software as just another form of hardware. We might deploy ten to fifteen different operating systems in our network without thinking about it — something we would never do in our data centers, or on our desktop computers.

So what this appliance-based way of looking at things emphasizes is this: buy enough hardware to last you ten years, and treat the software as fungible — a second-tier player that is a simple enabler for the expensive bits, the hardware. The problem with this view of things is that it simply ignores reality. We need to reverse our thinking.

Software is the actual core of the network, not hardware.

If you look at the entire networking space from a software-centric perspective, you can think about it a lot differently. It doesn't matter what hardware you buy; what matters is what software it runs. This is the revolutionary observation behind white box, bright box, and disaggregated networking. Hardware is cheap, software is expensive. Hardware is CAPEX, software is OPEX. Hardware only loosely interacts with business and operations; software interacts with both.

The appliance model, and the idea of buying big iron like a car, is hampering the growth and usefulness of networks in real businesses. It is going to take a change to realize that most of us care much less about hardware than software in our daily lives, and to transfer this thinking to the network engineering realm.

It is time for a new way of looking at the network. A router is not a car, nor is it a cell phone. It is a router, and it deserves its own way of looking at value. The value is in connecting the software to the business, and the hardware to the speeds and feeds. These are separate problems which the appliance model ties into a single "thing." This makes the appliance world bad for businesses, bad for network design, and bad for network engineers.

It's time to rethink the way we look at network engineering so we can build networks that are better for business: to adjust our idea of "future proof" to mean a software-based system that can be used across many generations of hardware, while hardware becomes a "just in time" component, used and recycled as needs must.

Written by Russ White, Network Architect at LinkedIn

Follow CircleID on Twitter

More under: Networks

Spanish Police Raid the Offices of .cat gTLD Registry

CircleID - Wed, 2017-09-20 16:29

Photo posted by Fundació puntCAT during the raid.

The offices of the .cat gTLD registry Fundació puntCAT were raided by the Spanish police this morning. The company reported the incident via a series of tweets as the raid was being carried out. "Right now spanish police @guardiacivil is doing an intervention in our office @ICANN," was tweeted about four hours ago, followed by another tweet reporting that the police were headed to the CTO's home: "We're wating for him to arrive to our office to start the intervention."

Michele Neylon writes: "The move comes a couple of days after a Spanish court ordered the domain registry to take down all .cat domain names being used by the upcoming Catalan referendum. The .cat domain registry currently has over 100 thousand active domain names, and in light of the actions taken by the Spanish government, it's unclear how the registry will continue to operate if their offices are effectively shut down by the Spanish authorities. The seizure won't impact live domain names or general day-to-day operations by registrars, as the registry backend is run by CORE and leverages global DNS infrastructure. However, it is deeply worrying that the Spanish government's actions would spill over onto an entire namespace."

Update – 20 SEP 2017: puntCAT's head of IT, Pep Masoliver, has been arrested as part of a Spanish government crackdown on pushes for independence, reports Kevin Murphy in Domain Incite: "He's been charged with 'sedition' and is still in police custody this evening… His arrest coincided with the military police raid of puntCAT's office in Barcelona that started this morning, related to a forthcoming Catalan independence referendum."

Fundació puntCAT releases statement: "The Fundació puntCAT wants to express its utmost condemnation, indignation and reprobation for the actions that it has been suffering lately with successive judicial mandates, searches and finally the arrest of our Director of Innovation and Information Systems, Pep Masoliver. ... The show that we have experienced in our offices this morning has been shameful and degrading, unworthy of a civilized country. We feel helpless in the face of these immensely disproportionate facts. We demand the immediate release of our colleague and friend."

Update 21 Sep 2017: EFF issues press letter condemning the police raid: "We have deep concerns about the use of the domain name system to censor content in general, even when such seizures are authorized by a court, as happened here. And there are two particular factors that compound those concerns in this case. First, the content in question here is essentially political speech, which the European Court of Human Rights has ruled as deserving of a higher level of protection than some other forms of speech. Even though the speech concerns a referendum that has been ruled illegal, the speech does not in itself pose any imminent threat to life or limb. The second factor that especially concerns us here is that the seizure took place with only 10 days remaining until the scheduled referendum, making it unlikely that the legality of the domains' seizures could be judicially reviewed before the referendum is scheduled to take place."

Follow CircleID on Twitter

More under: Registry Services, Top-Level Domains

The Madness of Broadband Speed Tests

CircleID - Tue, 2017-09-19 19:55

The broadband industry has falsely sold its customers on "speed", so unsurprisingly "speed tests" have become an insane and destructive benchmark.

As a child, I would go to bed, and sometimes the garage door would swing open before I fell asleep. My father had come home early from the late shift, where he was a Licensed Aircraft Maintenance Engineer for British Airways. I would wait for him eagerly, and he would come upstairs, still smelling of kerosene and Swarfega. With me lying in bed, he would tell me tales of his work, and stories about the world.

Just don't break the wings off as you board!

Funnily enough, he never told me about British Airways breaking the wings off its aircraft. You see, he was involved in major maintenance checks on Boeing 747s. He joined BOAC in 1970 and stayed with the company for 34 years until retirement. Not once did he even hint at any desire for destructive testing of aircraft.

Now, when a manufacturer makes a brand-new airplane type, it does test it to destruction. Here's a picture I shamelessly nicked showing the Airbus A350 wing flex test.

I can assure you, they don't do this in the British Airways hangars TBJ and TBK at Hatton Cross maintenance base at Heathrow. Instead, they have non-destructive testing using ultrasound and X-rays to look for cracks and defects.

So what's this all got to do with broadband? Well, we're doing the equivalent of asking the customers to break the wings off every time they board. And even worse, our own engineers have adopted destructive testing over non-destructive testing!

Because marketing departments at ISPs refuse to define what experience they actually intend to deliver (and what it is unreasonable to expect), the network engineers are left with a single and simple marketing requirement: "make it better than it was".

When you probe them on what this means, they shrug and tell you "well, we're selling all our products on peak speed, so we try to make the speed tests better".

This, my friends, is bonkers.

The first problem is that the end users are conducting a denial-of-service attack on themselves and their neighbours. A speed test deliberately saturates the network, placing it under maximum possible stress.

The second problem is that ISPs themselves have adopted speed tests internally, so they are driving mad levels of cost carrying useless traffic designed to over-stress their network elements.

Then to top it all, regulators are encouraging speed tests as a key metric, deploying huge numbers of boxes hammering the broadband infrastructure even in its most fragile peak hour. The proportion of traffic coming from speed tests is non-trivial.

So what's the alternative? Easy! Instead of destructive testing, do non-destructive testing.

We know how to X-ray a network, and the results are rather revealing. If you use the right metrics, you can also model the performance limits of any application from the measurements you take. Even a speed test! So you don't need to snap the wings off your broadband service every time you use it after all.
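By way of a toy sketch — this is an illustration of how light a non-destructive probe can be, not the measurement methods referred to above, and the target host is an arbitrary example — one can time a few TCP handshakes rather than saturate the link:

```python
# Toy sketch of a non-destructive probe: time a few TCP handshakes instead of
# flooding the link with a bulk transfer. Host and port are arbitrary examples.
import socket
import statistics
import time

def connect_rtt(host="example.com", port=443, timeout=3.0):
    """Time one TCP three-way handshake: a few packets, not a flood."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000  # milliseconds

samples = [connect_rtt() for _ in range(10)]
print(f"median RTT: {statistics.median(samples):.1f} ms")
print(f"jitter (stdev): {statistics.stdev(samples):.1f} ms")
```

The load is a few dozen packets; the information — delay and its variability under ambient conditions — is exactly what a saturating test destroys in the act of measuring.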

I think I'll tell my daughters this story at their next bedtime. It's good life guidance. Although I can imagine my 14-year-old dismissing it as another embarrassing fatherly gesture and an uninteresting piece of parental advice. Sometimes it takes a while to appreciate our inherited wisdom.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom

EFF Resigns from World Wide Web Consortium (W3C) over EME Decision

CircleID - Tue, 2017-09-19 16:36

In an open letter to the World Wide Web Consortium (W3C), the Electronic Frontier Foundation (EFF) announced on Tuesday that it is resigning from the W3C in response to the organization publishing Encrypted Media Extensions (EME) as a standard. From the letter: "In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing 'Encrypted Media Extensions,' an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties. ... We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack-surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they'll be able to ensure no one ever subjects them to the same innovative pressures."

Follow CircleID on Twitter

More under: Cybersecurity, Policy & Regulation, Privacy, Web

Net Neutrality Advocates Planning Two Days of Protest in Washington DC

CircleID - Mon, 2017-09-18 18:53

A coalition of activists and consumer groups is planning to gather in Washington, DC, to meet directly with members of Congress and protest plans to defang regulations meant to protect an open internet.

The event organizer, Fight for the Future, is running a dedicated website 'battleforthenet.com/dc' in which it states in part: "On September 26-27 Internet users from across the country will converge on Washington, DC to meet directly with their members of Congress, which is by far the most effective way to influence their positions and counter the power of telecom lobbyists and campaign contributions. ... The only thing that can stop them is a coordinated grassroots effort of constituents directly pressuring our members of Congress, who have the power to stop the FCC and vote down bad legislation."

Participating organizations in the protest include Fight for the Future, Public Knowledge, EFF, Center for Media Justice, Common Cause, Consumers Union, Free Press and the Writers Guild of America West. See additional report by Dominic Rushe in The Guardian.

Follow CircleID on Twitter

More under: Net Neutrality, Policy & Regulation

Forty Percent of New Generic TLDs Shrinking, According to Domain Incite Analysis

CircleID - Mon, 2017-09-18 17:39

Forty percent of non-brand new gTLDs are shrinking, reports Kevin Murphy in Domain Incite: "According to numbers culled from registry reports, 172 of the 436 commercial gTLDs we looked at had fewer domains under management at the start of June than they did a year earlier. ... As you might expect, registries with the greatest exposure to the budget and/or Chinese markets were hardest hit over the period. .wang, .red, .ren, .science and .party all saw DUM decline by six figures. Another 27 gTLDs saw declines of over 10,000 names."

Follow CircleID on Twitter

More under: Domain Names, Registry Services, Top-Level Domains

Preliminary Thoughts on the Equifax Hack

CircleID - Sun, 2017-09-17 19:08

As you've undoubtedly heard, the Equifax credit reporting agency was hit by a major attack, exposing the personal data of 143 million Americans and many more people in other countries. There's been a lot of discussion of liability; as of a few days ago, at least 25 lawsuits had been filed, with the state of Massachusetts preparing its own suit. It's certainly too soon to draw any firm conclusions about who, if anyone, is at fault — we need more information, which may not be available until discovery during a lawsuit — but there are a number of interesting things we can glean from Equifax's latest statement.

First and foremost, the attackers exploited a known bug in the open source Apache Struts package. A patch was available on March 6. Equifax says that their "Security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." The obvious question is why this particular system was not patched.

One possible answer is, of course, that patching is hard. Were they trying? What does "took efforts to identify and to patch" mean? Were the assorted development groups actively installing the patch and testing the resulting system? It turns out that this fix is difficult to install:

You then have to hope that nothing is broken. If you're using Struts 2.3.5 then in theory Struts 2.3.32 won't break anything. In theory it's just bug fixes and security updates, because the major.minor version is unchanged. In theory.

In practice, I think any developer going from 2.3.5 to 2.3.32 without a QA cycle is very brave, or very foolhardy, or some combination of the two. Sure, you'll have your unit tests (maybe), but you'll probably need to deploy into your QA environment and do some kind of integration testing too. That's assuming, of course, that you have a compatible QA environment within which you can deploy your old, possibly abandoned application.

Were they trying hard enough, i.e., devoting enough resources to the problem?

Ascertaining liability here — moral and/or legal — can't be done without seeing the email traffic between the security organization and the relevant development groups; you'd also have to see the activity logs (code changes, test runs, etc.) of these groups. Furthermore, if problems were found during testing, it might take quite a while to correct the code, especially if there were many Struts apps that needed to be fixed.

As hard as patching and testing are, though, when there are active exploitations going on you have to take the risk and patch immediately. That was the case with this vulnerability. Did the Security group know about the active attacks or not? If they didn't, they probably aren't paying enough attention to important information sources. Again, this is information we're only likely to learn through discovery. If they did know, why didn't they order a flash-patch? Did they even know which systems were vulnerable? Put another way, did they have access to a comprehensive database of hardware and software systems in the company? They need one — there are all sorts of other things you can't do easily without such a database. Companies that don't invest up front in their IT infrastructure will hurt in many other ways, too. Equifax has a market capitalization of more than $17 billion; they don't really have an excuse for not running a good IT shop.
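To make the inventory point concrete, here is a hypothetical sketch — all hostnames, app names, and versions are invented — of what such a database buys you. CVE-2017-5638, the Struts flaw in question, affected 2.3.x before 2.3.32 and 2.5.x before 2.5.10.1; with an inventory, finding every affected system is a query rather than a scramble:

```python
# Hypothetical sketch: what a comprehensive software inventory buys you.
# CVE-2017-5638 affected Struts 2.3.x before 2.3.32 and 2.5.x before 2.5.10.1.
# All hostnames, app names, and versions below are invented.

def parse(version):
    return tuple(int(x) for x in version.split("."))

def is_vulnerable(version):
    v = parse(version)
    return (parse("2.3") <= v < parse("2.3.32")) or (parse("2.5") <= v < parse("2.5.10.1"))

# An inventory would normally live in a real database; a dict stands in here.
inventory = {
    "dispute-portal": ("web-01.example.com", "2.3.5"),
    "partner-api":    ("web-07.example.com", "2.3.32"),
    "hr-intranet":    ("web-11.example.com", "2.5.8"),
}

for app, (host, version) in inventory.items():
    if is_vulnerable(version):
        print(f"PATCH NOW: {app} on {host} runs Struts {version}")
```

Without knowing what is deployed where, "took efforts to identify and to patch any vulnerable systems" is exactly as hard as it sounds.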

It may be, of course, that Equifax knew all of that and still chose to leave the vulnerable servers up. Why? Apparently, the vulnerable machine was their "U.S. online dispute portal". I'm pretty certain that they're required by law to have a dispute mechanism, and while it probably doesn't have to be a website (and some people suggest that complainants shouldn't use it anyway), it's almost certainly a much cheaper way to receive disputes than paper mail. That opens the possibility that there was a conscious decision that taking the risk was worthwhile. Besides, if many applications needed patching and they had limited development resources, they'd have had to set priorities on which web servers were more at risk. Again, we need more internal documents to know.

Some text in the announcement does suggest either ignorance or a conscious decision to delay patching — the timeline from Equifax implies that they were able to patch Struts very quickly after observing anomalous network traffic to that server. That is, once they knew that there was a specific problem, rather than a potential one, they were able to respond very quickly. Alternatively, this server was on the "must be patched" list, but was too low down on the priority list until the actual incident was discovered.

We thus have several possible scenarios: difficulty in patching a large number of Struts applications, ignorance of the true threat, inadequate IT infrastructure, or a conscious decision to wait, possibly for priority reasons. The first and perhaps the last would seem to be exculpatory; the others would leave the company in a bad moral position. But without more data we can't distinguish among these cases.

A more interesting question is why it took Equifax so long to detect the breach. They did notice anomalous network traffic, but not until July 29. Their statement says that data was exposed starting May 13. Did they have inadequate intrusion detection? That might be more serious from a liability standpoint — unlike patching, running an IDS doesn't risk breaking things. You need to tune your IDS correctly to avoid too many false positives, and you need to pay attention to alerts, but beyond dispute an enterprise of Equifax's scale should have one deployed. It is instructive to read what Judge Learned Hand wrote in 1932 in a liability case when some barges sank because the tugboat did not have a weather radio:

Indeed in most cases reasonable prudence is in fact common prudence; but strictly it is never its measure; a whole calling may have unduly lagged in the adoption of new and available devices. It may never set its own tests, however persuasive be its usages. Courts must in the end say what is required; there are precautions so imperative that even their universal disregard will not excuse their omission… But here there was no custom at all as to receiving sets; some had them, some did not; the most that can be urged is that they had not yet become general. Certainly in such a case we need not pause; when some have thought a device necessary, at least we may say that they were right, and the others too slack… We hold [against] the tugs therefore because [if] they had been properly equipped, they would have got the Arlington [weather] reports. The injury was a direct consequence of this unseaworthiness.

It strikes me as entirely possible that Equifax's exposure is greater on this issue than on patching.
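For a sense of how modest the baseline capability is, here is a deliberately naive sketch — invented traffic numbers, not Equifax's data, and far cruder than any production IDS — of flagging anomalous egress volume; the `threshold` parameter is where the false-positive tuning mentioned above lives:

```python
# Deliberately naive anomaly check: flag hours whose egress volume deviates
# wildly from a rolling baseline. Production IDSes are far more sophisticated;
# the traffic numbers are invented.
import statistics

hourly_egress_gb = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1, 9.7, 1.0, 1.2]

def anomalies(series, window=5, threshold=4.0):
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 0.1  # floor to avoid divide-by-zero
        if (series[i] - mu) / sigma > threshold:   # higher threshold, fewer false alarms
            flagged.append((i, series[i]))
    return flagged

for hour, volume in anomalies(hourly_egress_gb):
    print(f"hour {hour}: {volume} GB egress vs. recent baseline -- investigate")
```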

This is a big case, affecting a lot of people. The outcome is likely to change the norms of how corporations world-wide protect their infrastructure. I hope the change will be in the right direction.

* * *

Update – Monday, Sep 18:

A news report today claims that Equifax was hacked twice, once in March (which is very soon after the Struts vulnerability was disclosed) and once in mid-May. The news article does not say if the same vulnerability was exploited; it does, however, say that their sources claim that "the breaches involve the same intruders".

If it was the same exploit, it suggests to me one of the possibilities I mentioned above: that the company lacked a comprehensive software inventory. After all, if you know there's a hole in some package and you know that you're being targeted by attackers who know of it and have used it against you, you have a very strong incentive to fix all instances immediately. That Equifax did not do so would seem to indicate that they were unaware that they were still vulnerable. In fact, the real question might be why it took the attackers so long to return. Maybe they couldn't believe that that door would still be open…

On another note, several people have sent me notes pointing out that Susan Mauldin, the former CSO at Equifax, graduated with degrees in music, not computer science. I was aware of that and regard it as quite irrelevant. As I and others have pointed out, gender bias seems to be a more likely explanation for the complaints. And remember that being a CSO is a thankless job.

Update – Thursday, Sep 21:

In the Sep. 18 update above, I noted that Equifax had been breached in March, and quoted the article as saying that the attackers had been "the same intruders" as in the May breach. In a newer news report, Equifax has denied that:

"The March event reported by Bloomberg is not related to the criminal hacking that was discovered on 29 July," Equifax's statement continues. "Mandiant has investigated both events and found no evidence that these two separate events or the attackers were related. The criminal hacking that was discovered on 29 July did not affect the customer databases hosted by the Equifax business unit that was the subject of the March event."

So: I'll withdraw the speculation I posted about this incident confirming one of my hypotheses and wait for further, authoritative information. I repeat my call for public investigations of incidents of this scale.

Also worth noting: Brian Krebs was one of the very few to report the March incident.

Written by Steven Bellovin, Professor of Computer Science at Columbia University

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Cybersecurity, Law

China to Create National Cyberattack Database

CircleID - Fri, 2017-09-15 22:43

China has revealed plans to create a national data repository for information on cyberattacks and will require telecom firms, internet companies and domain name service providers to report threats to it. Reuters reports: "The Ministry of Industry and Information Technology (MIIT) said companies and telcos as well as government bodies must share information on incidents including Trojan malware, hardware vulnerabilities, and content linked to "malicious" IP addresses to the new platform. An MIIT policy note also said that the ministry, which is creating the platform, will be liable for disposing of threats under the new rules, which will take effect on Jan. 1."

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity, Policy & Regulation, Registry Services, Telecom

Bluetooth-Based Attack Vector Dubbed "BlueBorne" Exposes Almost Every Connected Device

CircleID - Fri, 2017-09-15 22:30

A newly discovered set of zero-day Bluetooth-related vulnerabilities can affect billions of devices in use today. Security firm Armis Labs has revealed a new attack vector that can target major mobile, desktop, and IoT operating systems, including Android, iOS, Windows, and Linux, and the devices using them. The new vector is named "BlueBorne" because it spreads through the air (airborne) and attacks devices via Bluetooth.

No pairing required: "BlueBorne is an attack vector by which hackers can leverage Bluetooth connections to penetrate and take complete control over targeted devices. BlueBorne affects ordinary computers, mobile phones, and the expanding realm of IoT devices. The attack does not require the targeted device to be paired to the attacker's device, or even to be set on discoverable mode."

— "The BlueBorne attack vector has several qualities which can have a devastating effect when combined. By spreading through the air, BlueBorne targets the weakest spot in the networks' defense — and the only one that no security measure protects. Spreading from device to device through the air also makes BlueBorne highly infectious. Moreover, since the Bluetooth process has high privileges on all operating systems, exploiting it provides virtually full control over the device."

Vulnerabilities were found in Android, Microsoft, Linux, and pre-iOS 10 versions of iOS. "Armis reported the vulnerabilities to Google, Microsoft, and the Linux community. Google and Microsoft are releasing updates and patches on Tuesday, September 12. Others are preparing patches that are in various stages of being released."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, Malware, Mobile Internet, Wireless

U.S. Navy Investigating Possibility of Cyberattack Behind Two Navy Destroyer Collisions

CircleID - Fri, 2017-09-15 21:53

Deputy chief of naval operations for information warfare, Vice Adm. Jan Tighe, says the military is investigating the possibility that compromised computer systems were behind two U.S. Navy destroyer collisions with merchant vessels that occurred in recent months. Elias Groll, reporting in Foreign Policy: "Naval investigators are scrambling to determine the causes of the mishaps, including whether hackers infiltrated the computer systems of the USS John S. McCain ahead of the collision on Aug. 21, Tighe said during an appearance at the Center for Strategic and International Studies in Washington… [T]he Navy has no indication that a cyberattack was behind either of the incidents, but it is dispatching investigators to the McCain to put those questions to rest, she said."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity

In Response to 'Networking Vendors Are Only Good for the Free Lunch'

CircleID - Fri, 2017-09-15 00:39

I ran into an article over at the Register this week which painted the entire networking industry, from vendors to standards bodies, with a rather broad brush. While there are bits and pieces of truth in the piece, some balance seems to be in order. The article recaps a presentation by Peyton Koran at Electronic Arts (I suspect the Register spiced things up a little for effect); the line of argument seems to run something like this —

  • Vendors are only paying attention to larger customers, and/or a large group of customers asking for the same thing; if you are not in either group, then you get no service from any vendor
  • Vendors further bake secret sauce into their hardware, making it impossible to get what you want from your network without buying from them
  • Standards bodies are too slow, and hence useless
  • People are working around this, and getting to the inter-operable networks they really want, by moving to the cloud
  • There is another way: just treat your networking gear like servers, and write your own protocols — after all, you probably already have programmers on staff who know how to do this

Let's think about these a little more deeply.

Vendors only pay attention to big customers and/or big markets. – Ummm… Yes. I do not know of any company that does anything different here, including the Register itself. If you can find a company that actually seeks the smallest market, please tell me about them, so I can avoid their products, as they are very likely to go out of business in the near future. So this is true, but it is just a part of the real world.

Vendors bake secret sauce into their hardware to increase their profits. – Well, again… Yes. And how is any game vendor any different, for instance? Or what about an online shop that sells content? Okay, next.

Standards bodies are too slow, and hence useless. – Whenever I hear this complaint, I wonder if the person making the complaint has actually ever built a real live running system, or a real live deployed standard that provides interoperability across a lot of different vendors, open source projects, etc. Yes, it often seems silly how long it takes for the IETF to ratify something as a standard. But have you ever considered how many times things are widely implemented and deployed before there is a standard? Have you ever really looked at the way standards bodies work to understand that there are many different kinds of standards, each of which with a different meaning, and that not everything needs to be the absolute tip top rung on the standards ladder to be useful? Have you ever asked how long it takes to build anything large and complicated? I guess we could say the entire open source community is slow and useless because it took many years for even the Linux operating system to be widely deployed, and to solve a lot of problems.

Look, I know the IETF is slow. And I know the IETF has a lot more politics than it should. I live both of those things. But I also know the fastest answer is not always the right answer, and throwing away decades of experience in designing protocols that actually work is a pretty dumb idea — unless you really just want to reinvent the wheel every time you need to build a car.

In the next couple of sentences, we suddenly find that someone needs to call out the contradiction police, replete in their bright yellow suits and funny hats. Because now, it seems, people want inter-operable networks without standards bodies! Let me make a simple point here that many people just do not seem to realize:

You cannot have interoperability across multiple vendors and multiple open source projects, without some forum where they can all discuss the best way to do something, and find enough common ground to make their various products inter-operate.

I hate to break the news to you, but that forum is called a standards body.

In the end, if you truly want every network to be a unique snowflake, groaning under the technical debt of poor decisions made by a bunch of folks who know how to code up a UI, but do not understand the intimate details of how a network actually converges in the real world, feel free to abandon the standards, and just throw the problem to any old group of coders you have handy.

Let me know how it turns out — but remember, I am not the one who has to answer the phone at 2AM when your network falls over, killing your entire business.

People are working around this by moving to the cloud. Yep — this is what every company I've talked to who is moving to the cloud has said to me: "We're doing it to get to inter-operable networks." 'nuff said.

There is a better way. On this I can agree entirely. But the better way is not to build each network into a unique snowflake, nor to abandon standards. There is a real path forward, but as always it will not be the apparently easy path of getting mad at vendors and the IETF, and making the bald statement you can build it all on your own. The real path forward looks something like this —

  • Learn to be, and build, real engineers, rather than CLI slingers
  • Rationally assess the problems that need to be solved to build the network your organization needs
  • Choose a set of solutions that seem right to solve that set of problems (and I don't mean appliances here!)
  • Look around for implementations of those things (open source and commercial), take in lessons others have learned, and refine the solution set; in other words, don't abandon years of experience, but rather leverage it
  • If the solution set doesn't exist, decide how you can break the solution set into reasonable pieces
  • Figure out which pieces you should outsource, which you should not, and what the API looks like between these two
  • Build it

Oh, and along the way — rather than complaining about standards bodies, get involved in them. There are far too few people who even make an attempt at changing what is there, and far too many who just whine about it. You don't need to be involved in every IETF or W3C mailing list to be "involved;" you can pick a narrow realm to be useful in and make a real difference. Far too many people see these bodies as large monoliths; either you must be involved in everything, or nothing. This is simply not true.

Written by Russ White, Network Architect at LinkedIn

Follow CircleID on Twitter

More under: Networks

Abusive and Malicious Registrations of Domain Names

CircleID - Thu, 2017-09-14 16:43

When ICANN implemented the Uniform Domain Name Dispute Resolution Policy (UDRP) in 1999, it explained its purpose as combating "abusive registrations" of domain names, which it defined as registrations "made with bad-faith intent to profit commercially from others' trademarks (e.g., cybersquatting and cyberpiracy)." (The full statement can be found in the Second Staff Report on Implementation Documents for the Uniform Dispute Resolution Policy, Paragraph 4.1(c)). Bad actors employ a palette of stratagems, such as combining marks with generic qualifiers, truncating or varying marks, or removing, reversing, and rearranging letters within the second level domain (typosquatting). These registrations are costly to police, and maintaining forfeited domain names is likely even more costly, but for all the pain they inflict they are essentially plain vanilla irritants.
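To illustrate those stratagems from the defender's side, here is a minimal sketch (the mark is an invented example) that enumerates the obvious variants — generic qualifiers, truncations, transpositions — so a brand owner can check which are actually registered:

```python
# Minimal sketch of the stratagems described above, from a defender's side:
# enumerate obvious variants of a mark to check against registrations.
# The mark is an invented example.

mark = "example"

def variants(m):
    out = set()
    for q in ("shop", "online", "support", "login"):  # generic qualifiers
        out.add(m + q)
    for i in range(len(m)):                           # truncation: drop one letter
        out.add(m[:i] + m[i + 1:])
    for i in range(len(m) - 1):                       # transposition: swap adjacent letters
        out.add(m[:i] + m[i + 1] + m[i] + m[i + 2:])
    out.discard(m)
    return sorted(out)

for v in variants(mark):
    print(v + ".com")
```

Tools in actual use generate far richer variant sets (homoglyphs, bit-flips, alternate TLDs), but even this crude list shows why policing is costly: the candidate space grows quickly.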

While these kinds of disputes essentially dominate the UDRP docket, there has been an increase in the number of disputes involving malicious registrations. The first instances of "phishing" and "spoofing" appear in a 2005 case, CareerBuilder, LLC v. Stephen Baker, D2005-0251 (WIPO May 6, 2005) in which the Panel found that the "disputed domain name is being used as part of a phishing attack (i.e., using 'spoofed' e-mails and a fraudulent website designed to fool recipients into divulging personal financial data such as credit card numbers, account usernames and passwords, social security numbers, etc.")

The quainter forms of abuse are registrants looking to pluck lower-hanging fruit. They are so obviously opportunistic that respondents don't even bother to appear (they also don't appear in the malicious cases, but for another reason: to conceal their identity). The plain vanilla type is represented by such cases as Guess? IP Holder L.P. and Guess? Inc. v. Domain Admin: Damon Nelson — Manager, Quantec LLC, Novo Point LLC, D2017-1350 (WIPO August 24, 2017) (<guessaccessories.com>), in which Complainant's product line includes "accessories." In these types of cases, respondents are essentially looking for visitors.

In contrast, malicious registrations are of the kind described, for example, in Google Inc. v. 1&1 Internet Limited, FA1708001742725 (Forum August 31, 2017) (<web-account-google.com>), in which

respondent used the complainant's mark and logo on a resolving website containing offers for technical support and password recovery services, and soliciting Internet users' personal information. ... Complainant's exhibit 11 displays a malware message on the webpage, which Complainant claims indicates fraudulent conduct.

Malicious registrations are a step up in that they introduce a new, more disturbing, and even criminal element into the cyber marketplace. Respondents are not just looking for visitors; they are targeting brands for victims. Their bad faith is more than "profit[ing] commercially from others' trademarks"; it is operating websites (or using e-mails) as trojan horses. It aligns registrations actionable under the UDRP with conduct policed and prosecuted by governments.

The UDRP, then, is not just a "rights protection mechanism." The term "abusive registration" has enlarged in meaning (and, thus, in jurisdiction) to include malicious conduct generally. Total security is a pipe dream. ICANN has working groups devoted to mapping the problem, and there are analytical studies assessing its extent in legacy and new TLDs. Some idea of the magnitude is seen in the "Statistical Analysis of DNS Abuse in gTLDs Final Report" commissioned by an ICANN-mandated review team, the Competition, Consumer Trust and Consumer Choice Review Team (CCTRT). Incidents of abusive and malicious activity online, radiating out to affect the public offline, make up the universe of cybercrime and uncivil behavior, in which UDRP disputes play a minor although important role in policing the Internet. In initiating complaints, mark owners are on the front line, not only protecting the integrity of their marks but also protecting visitors who land on fake websites, by shutting down infectious domain names.

It is interesting to learn that disputes filed with UDRP providers are the tip of the iceberg. There are a number of organizations devoted to collecting, analyzing, correlating, and reporting incidents of abusive and malicious activity on the Internet. Stopbadware.org, for example, reports that 3,918,603 domain names are currently blacklisted; Securedomain.org compiles "badness" indices of TLDs, registrars, spammers, and bot ISPs; Antiphishing.org and Arwg.org warn us to be vigilant against malware-infected domain names and e-mails. Not surprisingly, cyberspace is a microcosm of the social world — calm on the surface; turbulence below.

Malicious registrations are reserved for more outrageous conduct (a step above abusive), not only threatening mark owners but also consumers. It is a kind of misconduct that has (I believe) become more common, even to the point of including miscreant complainants who have no actionable claims for cybersquatting but file complaints anyway (not without a spice of malice), at the cost of incurring only a minor penalty. Somewhere on the time-line between the implementation of the UDRP and now there has been a marked increase in the number of these kinds of registrations. "Phishing" ("spoofing" is a less used term and appears to have become folded into phishing) became more common after 2008, and increasingly so in 2011 and 2012. Already in September 2017 there have been 8 decisions involving spoofing, phishing, and distribution of malware, and there were over 20 in August. This upward trajectory has been an evolutionary process in the direction of criminal conduct.

To take some examples of the various forms of malicious conduct: In CommScope, Inc. of North Carolina v. Chris Lowe / comm-scope / Chris Lowe / comm-scopes / Chris Lowa / commmscope, FA1707001742149 (Forum September 7, 2017), Respondent "used the domain names as an email suffix and has solicited third parties to submit personally identifiable information." In Novartis AG v. CHRIS TAITAGUE, FA1708001744264 (Forum September 11, 2017) (<sandozcareers.com>), Respondent targets job seekers. In Goodwin Procter LLP v. GAYLE FANDETTI, FA1706001738231 (), Respondent targeted a law firm "to misdirect funds in an e mail for an illegal and fraudulent purpose."

The target is not necessarily the mark owner but consumers drawn to the website because of what the domain name implies. In Yahoo Holdings, Inc. v. Registration Private, Domains By Proxy, LLC / Technonics Solutions, D2017-1336 (WIPO August 11, 2017) (<yahoodomainsupport.com>), the domain name offers "support":

The evidence supports the inference that Respondent sought to use the disputed domain name to create a false association with Complainant to perpetuate a phishing scam. Although Respondent has no affiliation with Complainant, the website associated with the disputed domain name purports to offer technical support for Yahoo-branded services and urges customers seeking assistance to call a provided phone number.

Also, in Hill-Rom Inc. v. Jyoti Bansal, FA1703001724573 (Forum May 3, 2017) (<himlrom.org>), Respondent was using the e-mail to send messages

to Complainant's distributors, fraudulently attempting to create the impression that the emails originate from Complainant and requesting payment from the recipients, in what Complainant describes as a "phishing attack."

Similar are The Travelers Indemnity Company v. jack Halua / Google Inc., FA1707001739643 (Forum August 21, 2017) (<travelerschampionshipgolf.org>) and Home Depot Product Authority, LLC v. Jim Brainard, FA1707001739571 (Forum August 8, 2017) (<homedepotmemphis.com>).

Good examples of spoofing (not always called as such, but that's the term for payment instruction fraud) are found in Arla Foods Amba v. ESMM EMPIRE staincollins, CAC 101578 (ADR.eu August 14, 2017) and optionsXpress Holdings, Inc. v. David A., FA1701001711999 (Forum February 15, 2017) (<optionexpress.net>). In Arla Foods, Respondent was both spoofing the mark owner and phishing for personal information. The general complaint is that Respondent was engaged in a "fraudulent scheme to deceive Internet users into providing their credit card and personal information." Respondent was using the domain name to "send emails in the name of Complainant's employees, in an attempt to commit fraud and deceptively steal sensitive information," by "impersonat[ing] the Complainant and fraudulently attempt[ing] to obtain payments and sensitive personal information," and by "solicit[ing] payment of fraudulent invoices by the Complainant's actual or prospective customers."

At bottom, respondents are engaged in a hunt to syphon funds from mark owners and anyone who deals with them, such as distributors and customers. In Shotgun Software Inc. v. Domain Admin / Hulmiho Ukolen, Poste restante, D2017-1273 (WIPO August 23, 2017) (<shotgunstudios.com>), Respondent added another layer of deceit by diverting visitors to "sponsored links" for the purpose of distributing malware:

The disputed domain name resolves to different successive websites after repeated access, named by the Complainant as a "Scam Page", a "Disable Tracking Page", "Malware Pages", and sponsored links. The "Scam Page" is designed to trick the visitor into taking action, through a specified telephone number, to eliminate a virus but is an attempt to phish for confidential information. The "Disable Tracking Page" is designed to trick visitors into supposedly disabling their Internet search history but leads to a phishing attempt. The "Malware Pages" may attempt to download malware on to the visitor's computer. The sponsored links pages lead to advertisements including those of the Complainant's competitors.

What brands are now experiencing with domain names can be seen as similar to the mischievous and criminal hacking of corporate aggregators of sensitive personal data. The business model employed by these registrants (if it can be dignified as such) is to use domain names to commit fraud and larceny, testing how much they can get away with before they are shut down, only to reappear with other fraudulent and larcenous schemes. Cyber security is not just a matter of data protection; it extends to protecting reputations and the general public on the Internet.

Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP

Follow CircleID on Twitter

More under: Cybercrime, Cybersquatting, Domain Names, ICANN, Law

Can Constellations of Internet Routing Satellites Compete With Long-Distance Terrestrial Cables?

CircleID - Wed, 2017-09-13 23:16

The goal will be to have the majority of long distance traffic go over this network. —Elon Musk

Three companies, SpaceX, OneWeb, and Boeing are working on constellations of low-Earth orbiting satellites to provide Internet connectivity. While all three may be thinking of competing with long, terrestrial cables, SpaceX CEO Elon Musk said "the goal will be to have the majority of long-distance traffic go over this (satellite) network" at the opening of SpaceX's Seattle office in 2015 (video below).

SpaceX orbital path schematic (source).

Can he pull that off?

Their first constellation will consist of 4,425 satellites operating in 83 orbital planes at altitudes ranging from 1,110 to 1,325 km. They plan to launch a prototype satellite before the end of this year and a second one during the early months of 2018. They will start launching operational satellites in 2019 and will complete the first constellation by 2024.

The satellites will use radios to communicate with ground stations, but links between the satellites will be optical.

At an altitude of 1,110 kilometers, the distance to the horizon is 3,923 kilometers. That means each satellite will have a line-of-sight view of all other satellites within 7,846 kilometers, forming an immense mesh network. Terrestrial networks are not so richly interconnected, and cables must zig-zag around continents and islands if undersea, and around other obstructions if underground.

Latency in a super-mesh of long, straight-line links should be much lower than with terrestrial cable. Additionally, Musk says the speed of light in a vacuum is 40-50 percent faster than in a cable, cutting latency further.
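Both numbers are easy to check with a quick back-of-the-envelope computation, assuming the equatorial Earth radius and a typical fiber refractive index of about 1.47 (both assumptions mine, not SpaceX figures):

```python
# Back-of-the-envelope check of the horizon and latency claims above.
# Assumptions (mine): equatorial Earth radius, fiber refractive index ~1.47.
import math

R = 6378.0       # equatorial Earth radius, km
h = 1110.0       # satellite altitude, km
C = 299792.458   # speed of light in vacuum, km/s
N_FIBER = 1.47   # typical refractive index of optical fiber

horizon = math.sqrt((R + h) ** 2 - R ** 2)
print(f"distance to horizon: {horizon:,.0f} km")              # ~3,923 km
print(f"max satellite-to-satellite: {2 * horizon:,.0f} km")   # ~7,846 km

path_km = 10000.0  # a round number for a long intercontinental path
t_vacuum = path_km / C * 1000                 # one-way, milliseconds
t_fiber = path_km / (C / N_FIBER) * 1000
print(f"one-way over {path_km:,.0f} km: {t_vacuum:.1f} ms in vacuum, "
      f"{t_fiber:.1f} ms in fiber ({(t_fiber / t_vacuum - 1) * 100:.0f}% slower)")
```

Light in fiber arriving roughly 47 percent later is the same thing as vacuum being 40-50 percent faster, which is the figure Musk quotes — and the straight-line satellite path is shorter than the zig-zagging cable to begin with.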

Let's look at an example. I traced the route from my home in Los Angeles to the University of Magallanes in Punta Arenas at the southern tip of Chile. As shown here, the terrestrial route was 14 hops and the theoretical satellite link only five hops. (The figure is drawn roughly to scale).

So, we have 5 low-latency links versus 14 higher-latency links. The gap may close somewhat as cable technology improves, but it seems that Musk may be onto something.

Check out the following video of the speech Musk gave at the opening of SpaceX's Seattle office. His comments about the long-distance connections discussed here come at the three-minute mark, but I'd advise you to watch the entire 26-minute speech:

Written by Larry Press, Professor of Information Systems at California State University

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom, Wireless

Innovative Solutions for Farming Emerge at the Apps for Ag Hackathon

CircleID - Wed, 2017-09-13 18:16

Too often, people consider themselves passive consumers of the Internet. The apps and websites we visit are made by people with technical expertise using languages we don't understand. It's hard to know how to plug in, even if you have a great idea to contribute. One solution for this problem is the hackathon.

Entering the Hackathon Arena

For the uninitiated, a hackathon is a place of hyper-productivity. A group of people converges for a set period of time, generally a weekend, to build solutions to specific problems. Often, the hackathon has an overall goal, like the Sacramento Apps for Ag hackathon.

"The Apps for Ag Hackathon was created to bring farmers, technologists, students and others from the agriculture and technology industries together in a vibrant, focused environment to create the seeds of new solutions for farmers using technology," says Gabriel Youtsey, Chief Innovation Officer, Agriculture and Natural Resources.

Now in its fourth year, the hackathon was bigger than ever and was held at The Urban Hive in Sacramento, with the pitch presentations taking place during the California State Fair. The event kicked off on Friday evening, with perspectives from a farmer on the challenges for agriculture in California, including labor, water supply, food safety, and pests, and how technology can help solve them. Hackathon participants also had opportunities to get up and talk about their own ideas for apps or other technology-related concepts to solve food and agriculture problems for farmers.

From there, teams freely formed based on people's skills and inclinations. Although the hackathon is competitive, there is a great deal of collaboration happening, as people hash out ideas together. The hackathon itself provides tools and direction, and experts provide valuable advice and mentorship. At the end of the event, the teams presented working models of their apps and a slide deck to describe the business plan. Judges then decided who got to go home with the prizes, which often include support like office space, cash, and cloud dollars so that developers can keep building their software.

For Entrepreneurs, Newbies, and Techies Alike

In late July of this year, three people with very different career backgrounds entered the Apps for Ag Hackathon to dedicate their weekend to building a piece of software. They all walked away with a top prize and a renewed commitment to reimagining how technology can contribute to agriculture and food production. In the room was Sreejumon Kundilepurayil, a hackathon veteran who has worked for tech giants building mobile and software solutions, Scott Kirkland, a UC Davis software developer and gardener, and Heather Lee, a self-described generalist in business and agritourist enthusiast.

"I was terrified," Lee shared. "I'm tech capable — I've taken some coding classes — but I had no idea what my role would be. I decided to go and put myself in an uncomfortable position. When I got there, I realized that telling a story was my role." While her team members were mapping out the API and back-end development, Lee was working on the copy, graphics, video, and brand guide. Her idea for a mobile app that connects farmers and tourists for unique day-trips to farms ended up winning third place.

First place went to Kundilepurayil and Vidya Kannoly for an app called Dr Green, which will help gardeners and farmers diagnose plant diseases using artificial intelligence and machine learning. Initially built for the California market, it will eventually be available globally as the machine grows more adept at identifying plants and problems. Through their phones, growers will also have access to a messaging feature to ask questions and get advice.

The first place winners!

The Benefits (and Limitations) of a Hackathon

By design, a hackathon encourages collaboration and allows for people's individual strengths to shine. Some people come to a hackathon with the goal of building software, while others are looking for career opportunities, mentorship, and feedback on business ideas. What they usually don't go home with is a totally perfect piece of software. Because of time constraints, teams are judged on a working mock-up of their idea.

That doesn't mean the app won't eventually be released to the market. Lee, who now owns all rights to her business (as agreed with her team at the beginning of the hackathon), will have to rebuild the software so that it works for her company at scale. Still, her experience at the hackathon was invaluable. "It's proof that I have a place in tech and a viable business plan," she said. When I talked to her, she was in the process of launching an LLC and was in Napa Valley pitching the idea to farmers.

It also doesn't mean that teams have to dissolve after the hackathon. Kirkland, who was part of the winning team last year, is still involved in the roll-out of the app that sealed his team's victory. His work for this year's hackathon might have legs, too: the new app, called Greener, uses machine-learning-backed image recognition to diagnose plant diseases and problems. Because of connections he made at the hackathon, he's now in conversation with public and private entities about how to move the idea forward.

The SF Bay Area Internet Society Chapter was a partner for the 2017 Apps for Ag Hackathon as part of its ongoing efforts to promote its project, "Bridging California's Rural/Urban Digital Divide with Mobile Broadband".

This article was written by Jenna Spagnolo on behalf of the SF Bay Area Internet Society Chapter.

Written by Jenna Spagnolo, Consultant & Non-Profit Leader

Follow CircleID on Twitter

More under: Mobile Internet, Web

Amazon's Letter to ICANN Board: It's Time to Approve Our Applications for .AMAZON TLDs

CircleID - Tue, 2017-09-12 23:54

When ICANN launched the new gTLD program five years ago, Amazon eagerly joined the process, applying for .AMAZON and its Chinese and Japanese translations, among many others. Our mission was — and is — simple and singular: We want to innovate on behalf of our customers through the DNS.

ICANN evaluated our applications according to the community-developed Applicant Guidebook in 2012; they achieved perfect scores. Importantly, ICANN's Geographic Names Panel determined that "AMAZON" is not a geographic name that is prohibited or one that requires governmental approval. We sincerely appreciate the care with which ICANN itself made these determinations, and are hopeful that a full approval of our applications is forthcoming.

In a letter we sent to the ICANN Board on September 7, 2017 (the full text of which may be found below), we laid out the reasons why our applications should be swiftly approved now that an Independent Review Process (IRP) panel has found in our favor. Our letter highlights the proactive engagement we attempted with the governments of the Amazonia region over a five-year period to alleviate any concerns about using .AMAZON for our business purposes.

First, we have worked to ensure that the governments of Brazil and Peru understand we will not use the TLDs in a confusing manner. We proposed to support a future gTLD representing the region under one of its geographic terms, such as .AMAZONIA, .AMAZONICA, or .AMAZONAS. We also offered to reserve for the relevant governments certain domain names that could cause confusion or touch on national sensitivities.

During the course of numerous formal and informal engagements, we repeatedly expressed our interest in finding an agreed-upon outcome. And while the governments have declined these offers, we stand by the binding commitment in our July 4, 2013 Public Interest Commitment (PIC) for the .AMAZON applications: we will limit registration of culturally sensitive terms, engaging in regular conversations with the relevant governments to identify those terms, and we will not object to any future applications for .AMAZONAS, .AMAZONIA, or .AMAZONICA.

We continue to believe it is possible to use .AMAZON for our business purposes while respecting the people, culture, history, and ecology of the Amazonia region.

We appreciate the ICANN Board's careful deliberation of our applications and the IRP decision. But as our letter states, approval of our .AMAZON applications by the ICANN Board is the only decision that is consistent with the bottom-up, multistakeholder rules that govern ICANN and the new gTLD program. We urge the ICANN Board to now approve our applications. An ICANN accountable to the global multistakeholder community must do no less.

The full text of our letter is below.

* * *

Dear Chairman Crocker and Members of the ICANN Board of Directors:

We write as the ICANN Board considers the July 10, 2017 Final Declaration of the Independent Review Process Panel (IRP) in Amazon EU S.à.r.l. v. ICANN regarding the .AMAZON Applications. Because the Panel concluded that the Board acted in a manner inconsistent with its Bylaws, we ask the Board to immediately approve our long-pending .AMAZON Applications. Such action is necessary because there is no sovereign right under international or national law to the name "Amazon," because there are no well-founded and substantiated public policy reasons to block our Applications, because we are committed to using the TLDs in a respectful manner, and because the Board should respect the IRP accountability mechanism.

First, the Board should recognize that the IRP Panel carefully examined the legal and public policy reasons offered by the objecting governments and found each to be insufficient or inaccurate. The Board should respect the IRP Panel conclusions.

Second, for the last 5 years, Amazon has repeatedly offered to work with the concerned governments to find an amicable solution, exploring how we can best use .AMAZON for our business purposes while respecting the people, culture, history, and ecology of the Amazonia region. Although those governments consistently declined our offers, we remain willing to adhere to our July 4, 2013 Public Interest Commitment (PIC) for the .AMAZON Applications. This binding commitment, which provides a practical solution, underscores why approving these Applications immediately is in the public interest.

Finally, the Board last acted in 2014 — before the IANA transition and the resulting changes to ICANN's Bylaws. The Board should take this opportunity to demonstrate to everyone — including those who objected to the IANA transition on the grounds that it would give too much control to governments — that ICANN is appropriately responsive to the accountability measures that the multistakeholder community required as part of the transition.

Almost one year ago, Chairman Crocker heralded the ICANN multistakeholder community's dedication and commitment in developing a broadly supported, consensus proposal to enhance ICANN's transparency and accountability — a proposal that preserved "the existing multistakeholder system while laying the foundation for a more accountable and equitable balance within the ICANN ecosystem." With the .AMAZON Applications, the Board should publicly and clearly honor this commitment to transparency and accountability. In contrast, permitting the GAC to veto TLD applications that received perfect application evaluation scores (41/41) based upon reasons that are neither well-founded nor merit-based directly contravenes ICANN's oft-stated and critically important commitment to serving the public interest, as determined by rules agreed to by the multi-stakeholder community.

The ICANN-authorized IRP, the ICANN-selected Community Objection dispute resolution provider, and the ICANN-selected legal expert have rejected every reason put forth for denying the .AMAZON Applications. The Board should not grant Brazil and Peru a fourth, and the GAC a third, opportunity to try to further delay the global public interest benefits associated with .AMAZON. It is now time for the Board to approve the .AMAZON Applications. (A full timeline of our applications is in the Appendix.)

We are aware that governmental pressure on the Board in connection with matters of Internet governance, although unrelated to the .AMAZON Applications, is of concern to ICANN. Such pressure does not change the truth — that for four years Brazil and Peru have been unable to provide legally and factually sound reasons for rejecting the .AMAZON Applications. If the Board yields to such pressure, it will undermine ICANN's leadership in advancing the multistakeholder approach to Internet governance. In fact, rejection of the .AMAZON Applications after they received perfect application evaluation scores will undoubtedly be used by those stakeholders who were (and are) skeptical of ICANN's ability to remain independent of governmental overreach to question and challenge ICANN's ongoing legitimacy.

Board rejection of the .AMAZON Applications may also adversely impact any new gTLD subsequent procedure. Globally, hundreds (if not thousands) of brands have names similar to regions, land formations, mountains, towns, cities, and other geographic places, and the uncertainty of ICANN's sui generis protection of geographic names will deter these potential .BRAND applicants. Other applicants will also have reason to doubt the certainty and predictability of the gTLD subsequent procedure. After all, if an application that receives a perfect score, clears all third-party objections, passes Geographic Names Panel review, and is the subject of a favorable IRP Panel decision can be rejected because of an arbitrary GAC veto, no gTLD applicant can be certain of its application's prognosis.

The ICANN Board should now re-evaluate the .AMAZON Applications, mindful of the Panel's recommendations, and approve the .AMAZON Applications. ICANN's Bylaws and Core Values mandate such a decision. The Board should not request or consider any further GAC advice on the .AMAZON Applications. The GAC had ample time and opportunity to develop and reach consensus on "well-founded, merits-based public policy reasons for denying [our] applications." It did not because it could not then, and it cannot now, as recognized by the IRP. The Board also does not need to wait for policy recommendations from the new Subsequent Procedures PDP WG Geographic Names Work Track; that work, while important, does not impact the .AMAZON Applications, which we properly submitted under the Applicant Guidebook.

We request the opportunity to present to the Board and answer questions about the .AMAZON Applications before the Board acts on them, as well as an opportunity to review and respond to any subsequent submission by the GAC, Brazil, Peru, or any other party in connection with the .AMAZON Applications. We filed these applications over 5 years ago. Since then, multiple independent and objective experts have repeatedly found that our .AMAZON Applications are consistent with ICANN rules and existing law. The IRP Panel heard arguments on the length of time the applications have been pending and recommended that the Board should act "promptly." It is now time for the Board to act promptly and allow our .AMAZON Applications to proceed. That is the only decision that is consistent with the global public interest, the IRP Final Declaration, and the rule of law.

Sincerely,

Scott Hayden
Vice President, Amazon

Brian Huseman
Vice President, Amazon

Written by Brian Huseman, Vice President, Public Policy at Amazon

Follow CircleID on Twitter

More under: Domain Names, ICANN, Internet Governance, Top-Level Domains

CE Router Certification Opens Up the Last Mile to IPv6 Fixed-Line

CircleID - Tue, 2017-09-12 17:08

Most end users probably have little sense of IPv6. Within the industry, the standoff is well known: network carriers and content and service providers each stick to their own arguments. Carriers believe that, owing to the lack of IPv6 content and services, user demand for IPv6 is very small. Content and service providers hold that, because users cannot reach content and services over IPv6, there is no reason for them to offer IPv6 services.

Dr. Song Linjian of CFIEC argued in the article "China, towards fully-connected IPv6 networks" that this chicken-and-egg paradox between IPv6 networks and content is real but temporary, and is not the key obstacle. China has already prepared itself: once the last-mile problem is solved, the user base will grow explosively. Every telecom carrier long ago began strictly enforcing procurement requirements that network devices must support IPv6, a requirement that IPv6 Ready Logo testing and certification can verify. However, the CE devices (home gateways, wireless routers, etc.) that users purchase themselves mostly do not support IPv6, and that creates the last-mile problem.

"While IPv6 is still burgeoning, it is hard to require vendors and users to adopt IPv6-enabled, IPv6-certified devices. Enterprises that produce mature CE Routers (Customer Edge Routers, i.e., home gateway routers) supporting IPv6 do not launch those products in the Chinese market because customers show no demand for IPv6. This has become the narrowest bottleneck hindering the development of fixed-line IPv6 users," said Li Zhen, Director of the BII-SDNCTC, with reference to fixed-line IPv6 development.

In the coming era of IoT, more and more devices will need to be connected, and the home gateway CE router, as the switching center for home network information and data, needs full IPv6 support. Home gateways have, in fact, already drawn real attention from the IPv6 community: on March 19th, 2014, the IPv6 Forum and the IPv6 Ready Logo committee officially announced the IPv6 Ready CE Router Logo conformance and interoperability testing and certification program, marking full support from a brand-new CE Router certification program for next-generation IPv6 deployment and commercialization. According to IPv6 Forum statistics, about 3,000 network devices have so far passed IPv6 Ready certification, so the overall rate of IPv6 support is high. Among home gateway CE devices, however, only 17 devices, from US-based Netgear, ZTE, Broadcom, and others, have passed the CE Router certification under the IPv6 Ready Logo framework. As the key to the last mile of household IPv6 access, the Chinese market for routing devices bears great potential, and CE Router-certified devices will have a stronger competitive edge in next-generation network deployment and commercialization.

According to the Global IPv6 Testing Center, the devices certified under the CE Router Logo are smart home gateways such as home routers, wireless routers, and GPON/EPON terminal devices. The testing covers the core protocols (Phase-2 enhanced certification), all of DHCPv6, and RFC 7084. Compared with the other certifications (Core, DHCPv6, IPsecv6, SNMPv6), the CE Router certification is more narrowly targeted at these devices and much stricter. As more CE routers earn IPv6 certification, seamless home IPv6 deployment will gradually become reality, solving the last-mile problem of carrier IPv6 access. This will have a far-reaching influence on users' transition to IPv6.
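The last-mile gap described above is easy to observe from any home network: a site may publish IPv6 (AAAA) records, yet connections still fail if the home gateway does not route IPv6. A minimal Python sketch of such a check (the hostname is only an example):

```python
import socket

def has_ipv6_path(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """True if `host` has an AAAA record and a TCP connection over IPv6 succeeds."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record, or name resolution failed
    for family, socktype, proto, _name, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True  # IPv6 works end to end
        except OSError:
            continue  # reachable only over IPv4, often the gateway's fault
    return False

print(has_ipv6_path("www.example.com"))
```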

Written by Xudong Zhang, Vice President of BII Group

Follow CircleID on Twitter

More under: Access Providers, IP Addressing, IPv6, Networks, Telecom

Equifax Breach Blamed on Open-Source Software Flaw

CircleID - Tue, 2017-09-12 03:04

Equifax has blamed a flaw in the software running its online databases for the massive breach revealed last week that has allowed hackers to steal personal information of as many as 143 million customers. Kevin Dugan reporting in the New York Post: "Hackers were able to access the info — including Social Security numbers — because there was a flaw in the open-source software created by the Apache Foundation ... STRUTS is a widely available software system that's used by about 65 percent of Fortune 100 companies, including Lockheed Martin, Citigroup, Vodafone, Virgin Atlantic, Reader's Digest, Office Depot, and Showtime — plus the IRS, according to lgtm, a software development group."

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity

Lessons Learned from Harvey and Irma

CircleID - Sun, 2017-09-10 00:28

One of the most intense natural disasters in American history occurred last week: Hurricane Harvey challenged the state of Texas while Florida braced for Irma. As with all natural disasters in this country, Americans are known to bond during times of crisis and to help each other in times of need. Personally, I witnessed these behaviors during the 1989 earthquake in San Francisco.

You may wish to donate or get involved with Hurricane Harvey relief to help the afflicted. That's great, but as we all know, we should be wary of whom we connect with online. Scammers are using Hurricane Harvey and Irma relief efforts as con games and, even more despicably, as phishbait. The FTC warned last week that many relief scams are in progress and noted that there always seems to be a spike in registrations of bogus domains.

If you have doubts about a charity you are not familiar with, you are wise to think before you give. We recommend you do some common-sense vetting and donate through charities you can verify. Even better, check out the Wise Giving Alliance from the Better Business Bureau, a tool for verifying legitimate charities.

In this article, we focus on a group of shameless miscreants who profit from the misfortune of others during crises and natural disasters. We illustrate the intensity of malicious domain registration in the days before and after disasters like Hurricanes Harvey and Irma. Finally, we address what we can learn during these difficult times.

The intensity of malicious domain creation during and for several days after Hurricane Harvey was appalling. On August 30th alone, several hundred domains were created with the term "harvey" in them. While not all of the registrants had malicious intent, I'm betting at least a small percentage did. Their goal was to extort money, data, or both from innocent victims who happened to be in harm's way, as well as from good Samaritans whose compassion for the victims made them vulnerable.

On searches of "Harvey" and "Irma" related domains, between August 28th and September 8th, thousands of such domains were created. That does not even take into account homoglyphs which will be further outlined in this article. The domain names fall into four broad categories:

  • Legal / Insurance such as Attorney, Lawyer, Claims.
  • Rebuilding such as Roofing, Construction.
  • Storm tracking such as WILLHURRICANEIRMAHIT.US
  • New or fraudulent charities using terms such as Relief, Project, Victims, Help.

The legal / insurance terms are registered a year or more in advance for every hurricane name on the list. You can see a full list of future hurricane names here, published by the National Hurricane Center. By pivoting on name servers or registrant data, we can see the same actors registering all of those domains far ahead of time.

This infographic shows words that appear in domains registered so far in August and September that relate to "hurricane," "Harvey," or "Irma."

When crises strike, you need the best tools plus a well-trained team that knows how to make the most of this exceptional data. Utilizing DNS techniques that help your company avoid onboarding fraudulent fundraisers and profiteering opportunists is vital to protecting your company's reputation and the reputation of your outbound IP address ranges.

Here's a deep-dive tip that few companies have discovered, but all can apply. As one part of the recursive domain name resolution process, the TLD registry zone file connects each domain name to its authoritative name server hosts, and each authoritative name server host to an IP address. Starting with one known malicious domain name, or one of your customer domains you are vetting, you can find other domains the same actor is using, hosting on the same IPs, or has registered in the past. Even the TLD registry zone's glue records provide clues and the ability to cluster malicious or legitimate domains registered by the same company. ZoneCruncher and other tools make this technique easy to implement for a Compliance or Investigations unit of any size.
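As a concrete illustration, here is a minimal sketch of the name-server pivot using the third-party dnspython package; the starting domain is a hypothetical placeholder, and a real investigation would feed the resulting hosts and IPs into passive DNS or zone-file data to find sibling domains:

```python
import dns.exception
import dns.resolver  # third-party: pip install dnspython

def nameserver_ips(domain: str) -> dict:
    """Map each authoritative name server of `domain` to its IPv4 addresses."""
    pivot = {}
    for ns in dns.resolver.resolve(domain, "NS"):
        host = str(ns.target).rstrip(".")
        try:
            pivot[host] = [a.address for a in dns.resolver.resolve(host, "A")]
        except dns.exception.DNSException:
            pivot[host] = []  # name server without a resolvable A record
    return pivot

# Hypothetical starting point; other domains sharing these hosts or IPs
# are candidates for the same actor's portfolio.
print(nameserver_ips("suspect-harvey-relief.example"))
```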

Using the right tools, your trained staff can spot multiple malicious hosts using the same IP or CIDR block. The lesson here is that ESPs and other organizations with a large number of customer tenants should be on high alert to the risks of onboarding clients prior to, during, and right after natural disasters.

Zetalytics Global Passive DNS has visibility on active registered domain names worldwide. For anyone wanting a glance at the recently registered "hurricane"-related domain names, a list is provided free here.

Here are a few Irma-related domains on our radar that you might find interesting:

The enhanced view of global DNS activity gives NOC, SOC and intel teams the ability to proactively tweak algorithms to flag terms related to the disaster.

Malicious Look-a-Like Domains Target Florida During Irma:

I heard concerning news from the Veteran Powered Cyber Notifiers project today. They are seeing a rash of new look-a-like domains seeking to take advantage of Floridians' attention to the impending hurricane.

Real websites for first responders, insurance companies, construction, medical, and other vital organizations in the Florida and Texas areas are being targeted by these malicious spoofed domain registrations.
Legit Domain                Look-a-like Domain            Homoglyphic Characters
peoplestrustinsurance.com   peoplestrustlnsurance.com     lowercase L instead of i
axogeninc.com               axogenlnc.com                 lowercase L instead of i
crownproductsco.com         crovvnproductsco.com          two v's instead of w
mecofire.com                rnecofire.com                 r and n instead of m
manateechamber.com          manateecharnber.com           r and n instead of m
vwinc.com                   vwlinc.com                    lowercase L inserted before "inc"
start2finishflooring.com    start2finishfloorlng.com      lowercase L instead of i
trisourceph.com             trisuorceph.com               u and o transposed
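Most of the tricks in that table reduce to a handful of character substitutions, which makes a first-pass screen easy to sketch in Python. The mapping below covers only the substitution cases shown; an inserted letter, as in vwlinc.com, would need an edit-distance check instead:

```python
# Normalize the common homoglyph substitutions, then compare the results.
SUBSTITUTIONS = [("rn", "m"), ("vv", "w"), ("l", "i")]

def normalize(domain: str) -> str:
    d = domain.lower()
    for fake, real in SUBSTITUTIONS:
        d = d.replace(fake, real)
    return d

def looks_like(candidate: str, legit: str) -> bool:
    """True if `candidate` differs from `legit` only by homoglyph swaps."""
    return candidate != legit and normalize(candidate) == normalize(legit)

print(looks_like("rnecofire.com", "mecofire.com"))                           # True
print(looks_like("crovvnproductsco.com", "crownproductsco.com"))             # True
print(looks_like("peoplestrustlnsurance.com", "peoplestrustinsurance.com"))  # True
```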
Conclusions and Resources:

By reviewing years of historical DNS data, we can see the patterns of actors, good and bad, who register domains to take advantage of disasters. Tools like ZoneCruncher enable us to pivot on email addresses in whois records, find clusters of related domains sharing a name server, and discover the history of the types of domains hosted on each IP address used by scammers and good guys alike.

Using a hostname age checker, we were able to quickly separate the new registrations, which are probably fraudulent charity appeals, from older and possibly legitimate domains that simply contain words related to disasters and storm names. Sharing this knowledge and data with the community multiplies the positive effects of what we can do together, including through the Veteran Powered Cyber Notifiers project, which identifies trends in malicious domain registrations. Here again is that link to the list of domains, should you be curious or in a position to take some positive action.
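The hostname-age screen is equally simple in spirit. A rough sketch using the third-party python-whois package; returned fields vary by registry, so treat this as illustrative:

```python
from datetime import datetime, timedelta

import whois  # third-party: pip install python-whois

def is_newly_registered(domain: str, max_age_days: int = 30) -> bool:
    """Flag domains created within the last `max_age_days` days."""
    created = whois.whois(domain).creation_date
    if isinstance(created, list):  # some registries return several timestamps
        created = min(created)
    if created is None:
        return True  # unknown age: treat as suspicious
    return datetime.now() - created < timedelta(days=max_age_days)

# A brand-new "relief" domain would return True; example.com returns False.
print(is_newly_registered("example.com"))
```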

Side note: We're having a lively discussion on our private Slack channel about this and other hot topics, including the Equifax breach. Email me at fredt@zetalytics.com if you want an invite to listen in or participate.

Written by Fred Tabsharani, Director of Data Access at Zetalytics

Follow CircleID on Twitter

More under: Cybercrime, DNS, Domain Names
