News feed subscriber

Vladimir Putin Signs Sweeping Internet-Censorship Bills

Slashdot - Tue, 2019-03-19 11:00
Russian President Vladimir Putin signed two censorship bills into law on Monday. One bans "fake news" while the other makes it illegal to insult public officials. Ars Technica reports on the details: Under one bill, individuals can face fines and jail time if they publish material online that shows a "clear disrespect for society, the state, the official state symbols of the Russian Federation, the Constitution of the Russian Federation, and bodies exercising state power." Insults against Putin himself can be punished under the law, The Moscow Times reports. Punishments can be as high as 300,000 rubles ($4,700) and 15 days in jail. A second bill subjects sites publishing "unreliable socially significant information" to fines as high as 1.5 million rubles ($23,000). [T]he Russian government has "essentially unconstrained authority to determine that any speech is unacceptable. One consequence may be to make it nearly impossible for individuals or groups to call for public protest activity against any action taken by the state," [analyst Matthew Rojansky told the Post].

Read more of this story at Slashdot.

Pentagon Wants To Test a Space-Based Weapon In 2023

Slashdot - Tue, 2019-03-19 08:00
pgmrdlm writes: Defense officials want to test a neutral particle-beam in orbit in fiscal 2023 as part of a ramped-up effort to explore various types of space-based weaponry. They've asked for $304 million in the 2020 budget to develop such beams, more powerful lasers, and other new tech for next-generation missile defense. Such weapons are needed, they say, to counter new missiles from China, Russia, North Korea and Iran. But just figuring out what might work is a difficult technical challenge. So the Pentagon is undertaking two studies. The first is a $15 million exploration of whether satellites outfitted with lasers might be able to disable enemy missiles coming off the launch pad. Defense officials have said previously that these lasers would need to be in the megawatt class. They expect to finish the study within six months. They're also pouring money into a study of space-based neutral particle beams, a different form of directed energy that disrupts missiles with streams of subatomic particles traveling close to light speed -- as opposed to lasers, whose photons travel at light speed.

Scientists Grow 'Mini-Brain On the Move' That Can Contract Muscle

Slashdot - Tue, 2019-03-19 04:30
An anonymous reader quotes a report from The Guardian: Scientists have grown a miniature brain in a dish with a spinal cord and muscles attached, an advance that promises to accelerate the study of conditions such as motor neurone disease. The lentil-sized grey blob of human brain cells was seen to spontaneously send out tendril-like connections to link up with the spinal cord and muscle tissue, which was taken from a mouse. The muscles were then seen to visibly contract under the control of the so-called brain organoid. The research is the latest in a series of increasingly sophisticated approximations of the human brain grown in the laboratory -- this time with something approaching a central nervous system attached. The scientists used a new method to grow the miniature brain from human stem cells, which allowed the organoid to reach a more sophisticated stage of development than previous experiments. The latest blob shows similarities, in terms of the variety of neurons and their organisation, to the human foetal brain at 12-16 weeks of pregnancy. However, the scientists said the structure was still too small and primitive to have anything approaching thoughts, feelings or consciousness. While a fully developed human brain has 80-90 billion neurons, the organoid has a couple of million, placing it somewhere between a cockroach and a zebrafish in terms of volume of grey matter. After growing the organoid, the scientists "used a tiny vibrating blade to cut it into half-millimeter-thick slices which were placed on a membrane, floating on a nutrient-rich liquid," reports The Guardian. "This meant the entire slice had access to energy and oxygen and it continued developing and forming new connections when it was kept in culture for a year. Alongside the organoid, the scientists added in a 1mm-long spinal cord, taken from a mouse embryo, and the surrounding back muscle.
The brain cells automatically began to send out neuronal connections, linked up with the spinal cord and began sending electrical impulses, which caused the muscles to twitch." The study has been published in the journal Nature Neuroscience.

NVIDIA's Ray Tracing Tech Will Soon Run On Older GTX Cards

Slashdot - Tue, 2019-03-19 03:20
NVIDIA's older GeForce GTX 10-series cards will be getting the company's new ray-tracing tech in April. The technology, which is currently only available on its new RTX cards, "will work on GPUs from the 1060 and up, albeit with some serious caveats," reports Engadget. "Some games like Battlefield V will run just fine and deliver better visuals, but other games, like the freshly released Metro Exodus, will run at just 18 fps at 1440p -- obviously an unplayable frame-rate." From the report: What games you'll be able to play with ray-tracing tech (also known as DXR) on NVIDIA GTX cards depends entirely on how it's implemented. In Battlefield V, for instance, the tech is only used for things like reflections. On top of that, you can dial down the strength of the effect so that it consumes less computing horsepower. Metro Exodus, on the other hand, uses ray tracing to create highly realistic "global illumination" effects, simulating lighting from the real world. It's the first game that really showed the potential of RTX cards and actually generated some excitement about the tech. However, because it's so computationally intensive, GTX cards (which don't have the RTX tensor cores) will effectively be too slow to run it. NVIDIA explained that when it was first developing the next-gen RTX tech, it found chips using Pascal tech would be "monster" sized and consume up to 650 watts. That's because the older cards lack both the integer cores and tensor cores found on the RTX cards. They get particularly stuck on ray-tracing, running about four times slower than the RTX cards on Metro Exodus. Since Metro Exodus is so heavily ray-traced, the RTX cards run it three times quicker than older GTX 10-series cards. However, that falls to two times for Shadow of the Tomb Raider, and 1.6 times for Battlefield V, because both of those games use ray tracing less.
The latest GTX 1660 and 1660 Ti GPUs, which don't have RT cores but do have integer cores, will run ray-traced games moderately better than last-gen 10-series GPUs. NVIDIA also announced that Unity and Unreal Engine now support ray-tracing, allowing developers to implement the tech into their games. Developers can use NVIDIA's new set of tools called GameWorks RTX to achieve this. "It includes the RTX Denoiser SDK that enables real-time ray-tracing through techniques that reduce the required ray count and number of samples per pixel," adds Engadget. "It will support ray-traced effects like area light shadows, glossy reflections, ambient occlusion and diffuse global illumination (the latter is used in Metro Exodus). Suffice to say, all of those things will make games look a lot prettier."
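The quoted figures lend themselves to a quick back-of-the-envelope check. The sketch below only restates numbers from the article (the 18 fps GTX baseline for Metro Exodus and the 3x/2x/1.6x RTX speedups); everything else is arithmetic:

```python
# Implied RTX frame rate from the article's GTX baseline and speedup figures.
gtx_metro_fps = 18  # GTX 10-series running Metro Exodus at 1440p, per the article

# How much faster the RTX cards run each title, per NVIDIA's figures:
rtx_speedup = {
    "Metro Exodus": 3.0,              # heavily ray-traced (global illumination)
    "Shadow of the Tomb Raider": 2.0,
    "Battlefield V": 1.6,             # ray tracing used only for reflections
}

# The most heavily ray-traced title benefits the most from RTX hardware:
implied_rtx_metro = gtx_metro_fps * rtx_speedup["Metro Exodus"]
print(implied_rtx_metro)  # 54.0 -- a playable frame rate, where 18 fps was not
```

The pattern matches the article's point: the more a game leans on ray tracing, the larger the gap between RTX cards and the older GTX parts.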

Google Seeking To Promote Rivals To Stave Off EU Antitrust Action

Slashdot - Tue, 2019-03-19 02:40
Google is trying to boost price comparison rivals such as Kelkoo in an effort to appease European Union antitrust regulators and ward off fresh fines following a $2.7 billion penalty nearly two years ago. "The European Commission said Alphabet unit Google had used its search engine market power to unfairly promote its own comparison shopping service," reports Reuters. From the report: The company subsequently offered to allow price-comparison rivals to bid for advertising space at the top of a search page, giving them the chance to compete on equal terms. But competitors said the measure failed to create a level playing field. Earlier this month, Google introduced a new link on its search results which aims to drive more traffic to price comparison rivals. British competitor Kelkoo said on its blog that it was one of several companies selected to try out the new link which will initially be available in Germany, France and the Netherlands. EU antitrust enforcers could levy fines up to 5 percent of Google's average daily worldwide turnover if it fails to comply with the 2017 order.

House Democrats Plan April Vote On Net Neutrality Bill

Slashdot - Tue, 2019-03-19 02:00
House Majority Leader Steny Hoyer announced that the House will hold a vote next month on the Democrats' bill to reinstate the Obama-era net neutrality rules. "Hoyer said in a letter to colleagues that the House will consider the Save the Internet Act during the week of April 8," reports The Hill. From the report: The Republican-led Federal Communications Commission (FCC) voted along party lines in 2017 to repeal the popular regulations prohibiting internet service providers from blocking or throttling websites, or from creating internet fast lanes. Democrats and consumer groups are fighting the repeal with a legal challenge in federal court and have pushed net neutrality regulations at the state level. While Republicans have floated their own bills to replace the rules, many oppose the Save the Internet Act because it reinstates the provision in the 2015 order that designates broadband providers as common carriers, opening them up to tougher regulation and oversight from the FCC. Though it enjoys widespread support among Democrats, the legislation may have a hard time getting through the GOP-held Senate. The "Save the Internet Act" was introduced earlier this month by Speaker Nancy Pelosi and other House and Senate Democrats.

Uber Used Secret Spyware To Try To Crush Australian Startup GoCatch

Slashdot - Tue, 2019-03-19 01:20
Uber used a secret spyware program, codenamed Surfcam, to steal drivers from an Australian competitor with the aim of putting that company out of business. The startup was backed by high-profile investors including billionaire James Packer and hedge fund manager Alex Turnbull. ABC News reports: GoCatch was a major competitor to Uber when the U.S. company launched in Australia in 2012. At the time, both companies were offering a new way to book taxis and hire cars using a smartphone app. Surfcam was developed in Uber Australia's head office in Sydney in 2015. A former senior Uber employee has told Four Corners that the idea behind the use of the Surfcam spyware was to starve GoCatch of drivers. "Surfcam when used in Australia was able to put fledgling Australian competitors onto the ropes," the former employee with direct knowledge of the program said on the condition of anonymity. "Surfcam allowed Uber Australia to see in real time all of the competitor cars online and to scrape data such as the driver's name, car registration, and so on." It allowed Uber to directly approach the GoCatch drivers and lure them to work for Uber. "GoCatch would lose customers due to poaching of its drivers draining their supply. With fewer and fewer drivers, GoCatch would eventually fold," the former Uber employee said. GoCatch's co-founder and chief executive, Andrew Campbell, said Uber's tactics damaged the company. He said: "The fact that Uber used hacking technologies to steal our data and our drivers is appalling. It had a massive impact on our business. It sets a really dangerous precedent for the Australian economy and Australian businesses as well. It tells every multinational company to come to Australia and follow the same practice. As an Australian small business, a technology start-up business based in Australia that's improving efficiency and service levels in the taxi industry, to have a company come to Australia and get away with that type of behavior is ... 
it's disgusting." A senior Uber source has confirmed the existence of Surfcam, saying it was developed by a staff member in the Sydney head office who modified off-the-shelf data scraping software. "They said the Sydney employee did it under his own authority, and that once Uber discovered this, they requested he stop," the report says.

New Mirai Malware Variant Targets Signage TVs and Presentation Systems

Slashdot - Tue, 2019-03-19 00:40
An anonymous reader quotes a report from ZDNet: Security researchers have spotted a new variant of the Mirai IoT malware in the wild targeting two new classes of devices -- smart signage TVs and wireless presentation systems. This new strain is being used by a new IoT botnet that security researchers from Palo Alto Networks spotted earlier this year. The botnet's author(s) appears to have invested quite a lot of their time in upgrading older versions of the Mirai malware with new exploits. Palo Alto Networks researchers say this new Mirai botnet uses 27 exploits, 11 of which are new to Mirai altogether, to break into smart IoT devices and networking equipment. Furthermore, the botnet operator has also expanded Mirai's built-in list of default credentials that the malware uses to break into devices that use default passwords. Four new username and password combos have been added to Mirai's considerable list of default creds, researchers said in a report published earlier today. The purpose and modus operandi of this new Mirai botnet are the same as all the previous botnets. Infected devices scan the internet for other IoT devices with exposed Telnet ports and use the default credentials (from their internal lists) to break in and take over these new devices. The infected bots also scan the internet for specific device types and then attempt to use one of the 27 exploits to take over unpatched systems. The new Mirai botnet is specifically targeting LG Supersign signage TVs and WePresent WiPG-1000 wireless presentation systems.

India’s Draft National E-Commerce Policy: A Bollywood Drama in Four Acts

CircleID - Tue, 2019-03-19 00:19

This article was co-authored with Prof Emeritus & Senior Scholar, York University, Sam Lanfranco.

India's recently published Draft National e-Commerce Policy, prepared by the Indian Commerce Ministry think-tank, can be read like the script of a four-act Bollywood drama.

Act 1: A Match Made in Heaven

They were the dream couple: Princess India and Prince IT.

She was full of cultural richness and diversity, with beauty, mystique and natural resources. She also had a dark side. She harbored the world's largest number of impoverished people, with little infrastructure and sparse economic prospects.

He was young, with enormous potential. One day he would conquer all. He arrived like the sun rising after a long cold night. He had a solution to every problem. He would bring equality of access to a nearly unlimited economic playing field.

She had the people and the land he needed. He would put them on the path to prosperity. Her children would become fat and content.

She was a willing lover, giving him all he asked. She sent her children to school to learn his ways. Programs like the Digital India "Power to Empower" initiative, launched by Prime Minister Narendra Modi in mid-2015, were implemented to strengthen his hold over the land. The dream would become a reality.

The Princess had good reason to believe in her choice. Her Prince, shining and full of promise, made significant progress on some fronts. 1.23 billion of her 1.3 billion children carried Aadhaar digital biometric identity cards. Nearly all (1.21 billion) had mobile phones, almost half of them smartphones connected to the Internet. Her country became the world's fifth-largest economy and its fastest-growing. By 2017, exported IT services garnered $154 billion in revenue, were the fastest-growing part of the economy and the largest private-sector employer. Technology start-ups mushroomed to 3,100 in 2018–19.

Their big wedding dance scene, insanely happy, had predicted this bright future!

Act 2: Disenchantment

Even matches made in heaven can fade with the passage of time. The Princess traveled her land, and something seemed not quite right. The resources that had gone into the Prince's IT efforts resulted in a 51 percent growth in e-commerce, but captured only about 3 percent of the national retail market. Some of her children had become much richer, but they were mostly the few who had been rich before — most of her children, those supposed to prosper, were as poor as ever. What had gone wrong? she asked her people. They were quick to answer. Mother of us all who cares, we know that you and the Prince wanted to help, but the Prince has many distant relatives who have bad intentions. When we started to use the technologies, they came from abroad and destroyed our businesses. They used investor money to undercut every effort we made until we were gone. They took control of marketplaces and dictated prices that made them unimagined profits, which they took abroad to their homes.

That was not enough for them. The "price" of their technology was access to our personal data. They mined and monetized our data for their profits. The Prince and his relatives have taken our money and our souls. We have gotten little in return.

When the Princess heard this, she became furious and turned into the Hindu Goddess Kali, in her earliest guise, as a destroyer of evil forces. She was clever and vicious, but to plot her revenge she turned to those who were even more dangerous and fiendish than she: her bureaucrats. She asked: What can I do to make my people prosper and punish the wrongdoers?

Her bureaucrats went into their ministry. They thought and thought, and talked and talked. They came forth with a policy egg they named the "Draft National e-Commerce Policy," a policy egg pregnant with bureaucratic self-interest.

Enter the slow waltz dance of the bureaucrats, to seduce the goddess Kali.

Act 3: The Reckoning

And the bureaucrats said: Your people are right. The relatives of the Prince are greedy, unscrupulous robber barons. It is the people's data they take, and it makes them rich. They monetize data into marketable products. They monetize and sell data that is not their own. Like drug addicts, they are hooked and totally dependent on data. Day and night, they think about nothing other than how to get more data, and how to turn it into more marketable products.

They profess to collect data in the name of development, prosperity, and innovation. They love India not for what they give it, but for what they can get as India's people become one of the world's biggest sources of monetized data. The more data they control, the more they can monopolize markets and innovation. They tell the Princess that this will obstruct her children's access to innovation and economic opportunity. This will negate Prince IT's promise of equal access to nearly unfettered opportunity. Oligopolies controlled by the few will never permit access to equitable prosperity!

The Princess/Kali is reminded that data in and of itself is not a bad thing. Processed big data will be the lifeblood of future socio-economic activity. The importance of data will grow as Artificial Intelligence (AI) and the Internet of Things (IoT) populate the data cloud with clusters of data asteroids available for a myriad of innovative uses.

This causes Princess India to shed Kali and return with three questions. What are one's rights with regard to the uses of one's individual data? What are the proper uses for data in the cloud? How is this done to promote equitable prosperity? Princess India begins to glimpse the light in the data cloud, and the promise of "India's Data for India's Development." Good policies will bring advantages and opportunities to all. Her husband the IT Prince's marital promise will come to pass.

Princess India, convinced she would get her way, returns to full benevolent human form and asks: My wise servants, what shall I do? They reply: To control data, you need to establish who owns it, and the rights and obligations of ownership. Your subjects must know that only they own the rights to their data and that the data cannot be used without their consent. Even anonymous data needs policies to regulate its use and protect rights under the law.

The Princess is told not to be alarmed by such control in the hands of her subjects. As the world's largest democracy, India will become the world's largest digital democracy. Indian data and all that comes from it belongs to India and its citizens. The sovereign right to this data cannot be assigned to strangers, even if they are her husband's distant relatives.

Entities that collect or process data deemed private under Indian law, even if stored abroad, would be required to adhere to Indian data policies. India will be like an island with data sniffer dogs at every port. Transgressions will be caught and prosecuted to the full extent of the law.

Cross-border data flow regulations will ensure that Indian data generates value for India. Negotiated access will adhere to Indian data use policies. India's governance structures will do what is necessary under its laws and regulations to ensure that it will fulfill its holy duty to you, Princess India, to generate equitable benefits, including appropriate taxes and revenues to finance governance.

The bureaucrats further tell the Princess that proper policies and data regulations will benefit India in many ways:

  • Protecting the privacy and data ownership rights of citizens
  • Enabling proper data access for start-ups and Indian data use innovation
  • Promoting the domestic use of data for Indian economic gain
  • Controlling and pricing access to government data for legitimate uses
  • Requiring e-commerce entities operating in India to be registered in India
  • Having taxing and duty structures that level the economic playing field
  • Ensuring that taxes, duties and economic gains from India data stay in India
  • Enacting data use policies that protect national security and law and order
  • Regulating intellectual property to fight counterfeits and protect brands

The bureaucrats propose a robust administrative, regulatory and legal structure, using a multi-pronged approach dealing with six issues: data assembly, regulatory issues, infrastructure development, e-commerce marketplaces, digital economy development, and e-commerce export promotion.

Collecting and analyzing data is also a strategic national task. Data focused agencies need to be established or strengthened, to support evidence-based data policy, and to track the economy through a digital "data lens."
Issues like compulsory licensing of intellectual property and data will require extensive research and review. Such practices can run afoul of principles of data privacy and data ownership.

India's position on policies like the World Trade Organization (WTO) effort to permanently exempt electronic transmissions from duties will require extensive research and review. Such exemptions may unfairly benefit rich developed-country companies while preventing poorer countries like India from extracting taxes on cross-border trade. This is particularly problematic when cross-border digital trade can consist of digital objects of considerable value, such as 3D printer production algorithms, AI algorithms, and the like.

The complex relationship between cross-border source code flows, the terms of technology transfer, and the impacts on local industry and national security again requires extensive research and review. This calls for appropriate national research funding and digital/data-focused authorities with a remit to explore consequences and policies in these areas.

There are multiple emergent foreign investments and cross-border trade models. Some reflect a presence, with local supply lines, in a national marketplace. Others reflect a cross-border inventory-based model of sale and distribution. National policy has to balance foreign engagement in the Indian 'marketplace', investment restrictions, and cross-border inventory-based commerce.

Act 4: Princess India's dream: Dance of the Data Ministry. (heavy stomp!)

Content for the moment, the Princess falls into a slumber and is soon dreaming. In her dream, she sees an enormous mountain range made up of data, from which has sprung a mighty river of rupees that flows to nourish the country. But soon the river begins to dry until there is only a trickle, and the land turns to dust.

"What happened?", asked the Princess as she awoke. The land answered: "Your bureaucrats did exactly what they told you they would." They built an enormous, all-knowing and powerful ministry of data that controlled all data. First, they took the data to control the marketplace, but instead of creating opportunities for all they just used it to create opportunities for their own benefit, and to generate taxes and revenues. They did not care about opportunities and equitable prosperity. They forgot the people. They gave the data to the IT Prince's relatives, who had learned how best to work with bureaucratic interests within the government.

The ministry was charged with empowering the citizens, protecting their rights and maintaining their dignity. But gradually the ministry claimed those rights, imposing data governance from above and curtailing digital democracy from below.

Soon the ministry wielded more power, using artificial intelligence algorithms to extend control across all aspects of life in the land. The bureaucrats argued that AI made better, cheaper and faster decisions than could citizens with traditional governance processes. As the machines demanded more data, and the bureaucracy was given more control, the results left the poor even more marginalized. Left with little access to Prince IT's digital opportunities, and unable to sustain themselves on what little data they retained, despair permeated the land.

The Princess wept and asked: What shall I do? The country answered again: Do not leave control in the hands of the bureaucrats. Let them learn. Let us all learn that development and sustainability do not come from more data alone, but from its selective and wise uses. Help us understand that e-commerce does not mean more data manipulation so that we buy more, or buy what others want us to buy. Put data first in the service of needs, not wants.

Let us rebuild our social fabric, where sustainable human relationships are based on trust and respect. Sustainable commerce is a beneficial relationship between humans and not a crass want generation calculation.
Help us remember that sustainable and equitable business models are based on trust, dignity, and respect. Anything less makes a mockery of our human experience and the lessons learned. The marriage to the IT Prince should build on the shoulders of that historical experience, and not squander the Prince's promise in the pursuit of hegemonic market or political power.

With that, the Princess, fully awake, looked at the mess the bureaucrats had created. She called them together and said only two words: Think again! She continued: We are the world's biggest democracy, and that should extend to the digital sphere and be in the service of all. How do we get there from here?


Like all Bollywood dramas, this one will end with a big dance scene. Will it be an elite affair, a waltz of the oligarchs, or an engaged dance of the people? The Princess is looking to her people to decide which it will be.

Written by Klaus Stoll, Digital Citizen

Education and Science Giant Elsevier Left Users' Passwords Exposed Online

Slashdot - Tue, 2019-03-19 00:00
The world's largest scientific publisher, Elsevier, left a server open to the public internet, exposing user email addresses and passwords. "The impacted users include people from universities and educational institutions from across the world," reports Motherboard. "It's not entirely clear how long the server was exposed or how many accounts were impacted, but it provided a rolling list of passwords as well as password reset links when a user requested to change their login credentials." From the report: "Most users are .edu [educational institute] accounts, either students or teachers," Mossab Hussein, chief security officer at cybersecurity company SpiderSilk who found the issue, told Motherboard in an online chat. "They could be using the same password for their emails, iCloud, etc." Motherboard verified the data exposure by asking Hussein to reset his own password to a specific phrase provided by Motherboard beforehand. A few minutes later, the plain text password appeared on the exposed server. Elsevier secured the server after Motherboard approached the company for comment. Hussein also provided Elsevier with details of the security issue. An Elsevier spokesperson told Motherboard in an emailed statement that "The issue has been remedied. We are still investigating how this happened, but it appears that a server was misconfigured due to human error. We have no indication that any data on the server has been misused. As a precautionary measure, we will also be informing our data protection authority, providing notice to individuals and taking appropriate steps to reset accounts."

NVIDIA's $99 Jetson Nano is an AI Computer for DIY Enthusiasts

Slashdot - Mon, 2019-03-18 23:24
Sophisticated AI generally isn't an option for homebrew devices, since mini computers can rarely handle much more than the basics. NVIDIA thinks it can do better -- it's unveiling an entry-level AI computer, the Jetson Nano, that's aimed at "developers, makers and enthusiasts." From a report: NVIDIA claims that the Nano's 128-core Maxwell-based GPU and quad-core ARM A57 processor can deliver 472 gigaflops of processing power for neural networks, high-res sensors and other robotics features while still consuming a miserly 5W. On the surface, at least, it could hit the sweet spot if you're looking to build your own robot or smart speaker. The kit can run Linux out of the box, and supports a raft of AI frameworks (including, of course, NVIDIA's own). It comes equipped with 4GB of RAM, gigabit Ethernet and the I/O you'd need for cameras and other attachments.

Google, Microsoft Work Together For a Year To Figure Out New Type of Windows Flaw

Slashdot - Mon, 2019-03-18 22:40
Google researcher James Forshaw discovered a new class of vulnerability in Windows before any bug had actually been exploited. The involved parts of the flaw "showed that there were all the basic elements to create a significant elevation of privilege attack, enabling any user program to open any file on the system, regardless of whether the user should have permission to do so," reports Ars Technica. Thankfully, Microsoft said that the flaw was never actually exposed in any public versions of Windows, but said that it will ensure future releases of Windows will not feature this class of elevation of privilege. Peter Bright explains in detail how the flaw works. Here's an excerpt from his report: The basic rule is simple enough: when a request to open a file is being made from user mode, the system should check that the user running the application that's trying to open the file has permission to access the file. The system does this by examining the file's access control list (ACL) and comparing it to the user's user ID and group memberships. However, if the request is being made from kernel mode, the permissions checks should be skipped. That's because the kernel in general needs free and unfettered access to every file. As well as this security check, there's a second distinction made: calls from user mode require strict parameter validation to ensure that any memory addresses being passed in to the function represent user memory rather than kernel memory. Calls from kernel mode don't need that same strict validation, since they're allowed to use kernel memory addresses. Accordingly, the kernel API used for opening files in NT's I/O Manager component looks to see if the caller is calling from user mode or kernel mode. Then the API passes this information on to the next layer of the system: the Object Manager, which examines the file name and figures out whether it corresponds to a local filesystem, a network filesystem, or somewhere else. 
The Object Manager then calls back into the I/O Manager, directing the file-open request to the specific driver that can handle it. Throughout this, the indication of the original source of the request -- kernel or user mode -- is preserved and passed around. If the call comes from user mode, each component should perform strict validation of parameters and a full access check; if it comes from kernel mode, these should be skipped. Unfortunately, this basic rule isn't enough to handle every situation. For various reasons, Windows allows exceptions to the basic user-mode/kernel-mode split. Both kinds of exceptions are allowed: kernel code can force drivers to perform a permissions check even if the attempt to open the file originated from kernel mode, and contrarily, kernel code can tell drivers to skip the parameter check even if the attempt to open the file appeared to originate from user mode. This behavior is controlled through additional parameters passed among the various kernel functions and into filesystem drivers: there's the basic user-or-kernel mode parameter, along with a flag to force the permissions check and another flag to skip the parameter validation...
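The rules described in the excerpt amount to a small decision table. Here is a hypothetical Python model of it -- the names are invented for illustration; the real kernel expresses these decisions as a requestor-mode parameter plus flags threaded through the I/O Manager and filesystem drivers:

```python
from dataclasses import dataclass
from enum import Enum

class RequestorMode(Enum):
    USER = 0
    KERNEL = 1

@dataclass
class OpenRequest:
    mode: RequestorMode
    force_access_check: bool = False   # kernel caller demands a full ACL check anyway
    skip_parameter_check: bool = False # caller vouches for its own buffers

def must_check_access(req: OpenRequest) -> bool:
    # User-mode requests are always checked against the file's ACL;
    # kernel-mode requests skip the check unless the caller forces it.
    return req.mode is RequestorMode.USER or req.force_access_check

def must_validate_parameters(req: OpenRequest) -> bool:
    # User-mode buffers must be validated as user memory, unless the
    # caller explicitly opted out -- the exception class at issue here.
    return req.mode is RequestorMode.USER and not req.skip_parameter_check
```

The vulnerability class arises exactly where the table has exceptions: a path through the kernel that sets the "skip" flag, or fails to force the check, for a request that really originated with an unprivileged user.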

Read more of this story at Slashdot.

US Reveals Details of $500 Million Supercomputer

Slashdot - Mon, 2019-03-18 22:03
An anonymous reader quotes a report from The New York Times: The Department of Energy disclosed details on Monday of one of the most expensive computers being built: a $500 million machine based on Intel and Cray technology that may become crucial in a high-stakes technology race between the United States and China (Warning: source may be paywalled; alternative source). The supercomputer, called Aurora, is a retooling of a development effort first announced in 2015 and is scheduled to be delivered to the Argonne National Laboratory near Chicago in 2021. Lab officials predict it will be the first American machine to reach a milestone called "exascale" performance, surpassing a quintillion calculations per second. That's roughly seven times the speed rating of the most powerful system built to date, or 1,000 times faster than the first "petascale" systems that began arriving in 2008. Backers hope the new machines will let researchers create significantly more accurate simulations of phenomena such as drug responses, climate changes, the inner workings of combustion engines and solar panels. Aurora, which far exceeds the $200 million price for Summit, represents a record government contract for Intel and a test of its continued leadership in supercomputers. The Silicon Valley giant's popular processors -- the calculating engine for nearly all personal computers and server systems -- power most such machines. But additional accelerator chips are considered essential to reach the very highest speeds, and its rival Nvidia has built a sizable business adapting chips first used with video games for use in supercomputers. The version of Aurora announced in 2015 was based on an Intel accelerator chip that the company later discontinued. A revised plan to seek more ambitious performance targets was announced two years later. 
Features discussed on Monday include unreleased Intel accelerator chips, a version of its standard Xeon processor, new memory and communications technology and a design that packages chips on top of each other to save space and power.
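The performance comparisons in the report reduce to simple arithmetic. A quick sketch -- the figure of roughly 143.5 petaflops for Summit's measured Linpack score is an assumption taken from the contemporaneous TOP500 list:

```python
EXA = 1e18   # exascale: a quintillion (10^18) calculations per second
PETA = 1e15  # petascale systems first arrived around 2008

# Summit, the most powerful system built to date at the time of the
# article; its measured (Rmax) Linpack score is assumed here.
SUMMIT_FLOPS = 143.5e15

print(EXA / SUMMIT_FLOPS)  # just under 7: "roughly seven times" Summit
print(EXA / PETA)          # 1000x the first petascale machines
```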

Read more of this story at Slashdot.

U.S. Students Have Achieved World Domination in Computer Science Skills -- For Now

Slashdot - Mon, 2019-03-18 21:24
When it comes to computer science skills, U.S. students approaching graduation have a significant advantage over their peers in China, India, and Russia. Tekla Perry shares a report: That's the conclusion of a study published today in the Proceedings of the National Academy of Sciences of the United States of America. The study was put together by a global team of researchers led by Prashant Loyalka, an assistant professor at Stanford University. The team constructed a careful sampling mechanism to select senior (typically fourth year) computer science or equivalent students in each of the four countries, making sure that both the educational institutions and students enrolled at those schools were statistically representative of schools and computer science students throughout the respective nations. The sampling also ensured that study participants represented both elite and non-elite universities. The final selection included 6847 students from the U.S., 678 from China, 364 from India, and 551 from Russia. Once the students were selected, the researchers then administered the Major Field Test in Computer Science, an exam that was developed by the U.S. Educational Testing Service and is regularly updated. The exam was translated for the students in China and Russia. When the researchers tabulated the results, the U.S. students came out ahead in every category. U.S. seniors outperformed their peers overall; students from elite U.S. schools outclassed their counterparts at the other countries' elite institutions; and the same was true for students at non-elite universities. (The differences among the scores of students in China, India, and Russia were not statistically significant, the researchers indicated.)

Read more of this story at Slashdot.

IBM Signs 6 Banks To Issue Stablecoins and Use Stellar's XLM Cryptocurrency

Slashdot - Mon, 2019-03-18 20:40
IBM is taking its banking clients a step closer to cryptocurrency. From a report: Announced Monday, six international banks have signed letters of intent to issue stablecoins, or tokens backed by fiat currency, on World Wire, an IBM payment network that uses the Stellar public blockchain. The network promises to let regulated institutions move value across borders -- remittances or foreign exchange -- more quickly and cheaply than the legacy correspondent banking system. So far three of the banks have been identified: Philippines-based RCBC, Brazil's Banco Bradesco, and Bank Busan of South Korea. The rest, which are soon to be named, will offer digital versions of euros and Indonesian rupiah, "pending regulatory approvals and other reviews," IBM said. The network went live Monday, although while the banks await their regulators' blessings, the one stablecoin running on World Wire at the moment is a previously announced U.S. dollar-backed token created by Stronghold, a startup based in San Francisco.

Read more of this story at Slashdot.

A Short History of DNS Over HTTP (So Far)

CircleID - Mon, 2019-03-18 20:21

The IETF is in the midst of a vigorous debate about DNS over HTTP or DNS over HTTPS, abbreviated as DoH. How did we get there, and where do we go from here?

(This is somewhat simplified, but I think the essential chronology is right.)

JavaScript code running in a web browser can't do DNS lookups directly; page scripts have no general-purpose resolver API (a browser.dns.resolve() call exists, but only for browser extensions, and only in some browsers). The only lookups a page can trigger are implicit: fetching a URL looks up a DNS A or AAAA record for the domain in the URL.

It is my recollection that the initial impetus for DoH was to let JavaScript do other kinds of DNS lookups, such as SRV or URI or NAPTR records which indirectly refer to URLs that the JavaScript can fetch or TXT records for various kinds of security applications. (Publish a TXT record with a given string to prove you own a domain, for example.) The design of DoH is quite simple and well suited for this. The application takes the literal bits of the DNS request, and sends them as an HTTP query to a web server, in this case probably the same one that the JavaScript code came from. That server does the DNS query and sends the literal bits of answer as a DNS response. This usage was and remains largely uncontroversial.
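That "literal bits over HTTP" design is what RFC 8484 eventually standardized. A minimal sketch of the GET form of the protocol follows; the server name is just an example of a public DoH endpoint, and a real client would also set the appropriate Accept header and actually issue the HTTPS request:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int) -> bytes:
    """Build a minimal DNS query message in RFC 1035 wire format."""
    header = struct.pack(
        ">HHHHHH",
        0,       # ID 0, as RFC 8484 recommends for HTTP cache friendliness
        0x0100,  # flags: standard query, recursion desired
        1,       # QDCOUNT: one question
        0, 0, 0, # no answer/authority/additional records
    )
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"                               # root label terminates the name
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(server: str, query: bytes) -> str:
    """Encode the query for the RFC 8484 GET form: base64url, no padding."""
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"https://{server}/dns-query?dns={encoded}"

url = doh_get_url("cloudflare-dns.com", build_dns_query("example.com", 16))  # 16 = TXT
```

The same wire bytes can instead be sent as the body of a POST with Content-Type application/dns-message; either way, the answer comes back as literal DNS response bytes.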

About the same time someone observed that if the DoH requests used HTTPS rather than HTTP to wrap DNS requests, the same HTTPS security that prevents intermediate systems from snooping on web requests and responses would prevent snooping on DoH. This was an easy upgrade since browsers and web servers already know how to do HTTPS, so why not? Since DoH prevents snooping on the DNS requests, a browser could use it for all of its DNS requests to protect the A and AAAA requests as well, and send the requests to any DoH server they want, not just one provided by the local network.

This is where things get hairy. If the goal were just to prevent snooping, there is a service called DNS over TLS or DoT, which uses the same security layer that HTTPS uses, but without HTTP. A key difference is that even though snooping systems can't tell what's inside either a DoT or a DoH transaction, they can tell that DoT is DNS, since it runs on its own well-known port (853), while DoH travels over port 443 like any other HTTPS traffic; there's no way to tell DoH from any other web request, unless it happens to be sent to a server that is known to do only DoH.

Mozilla did a small-scale experiment where the DNS requests for some of their beta users went to Cloudflare's DNS service, with an offhand comment that maybe they'd do it more widely later.

On the one hand, some people believe that the DNS service provided by their network censors material, either by government mandate or for the ISP's own commercial purposes. If they use DoH, they can see stuff without being censored.

On the other hand, some people believe that the DNS service blocks access to harmful material, ranging from malware control hosts to intrusive ad networks (mine blocks those so my users see a blue box rather than the ad) to child pornography. If they use DoH, they can see stuff that they would rather not have seen. This is doubly true when the thing making the request is not a person, but malware secretly running on a user's computer or phone, or an insecure IoT device.

The problem is that both of those are true, and there is a complete lack of agreement about which is more important, and even which is more common. While it is easy for a network to block traffic to off-network DNS or DoT servers, forcing its users onto the network's own resolvers, it is much harder to block traffic to DoH servers, at least without blocking traffic to a lot of web servers, too. This puts network operators in a tough spot, particularly ones that are required to block some material (notably child pornography), business networks that want to limit uses of the network unrelated to the business, or networks that just want to keep malware and broken IoT devices under some control.
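The operational asymmetry comes down to port numbers. A toy sketch of what an on-path operator can infer from the destination port alone (real middleboxes also consult IP address lists and TLS metadata, so this is only illustrative):

```python
def classify(dst_port: int) -> str:
    """What a network operator can tell about a flow from its port."""
    if dst_port == 53:
        return "plain DNS: readable and trivially blockable"
    if dst_port == 853:
        return "DoT: contents hidden, but recognizably DNS, so blockable"
    if dst_port == 443:
        return "HTTPS: could be DoH or any web traffic; blocking means collateral damage"
    return "other"
```

Blocking port 853 shuts off DoT cleanly; blocking port 443 would take the web down with it, which is exactly the operators' dilemma.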

At this point, the two sides are largely talking past each other, and I can't predict how, if at all, the situation will be resolved.

Written by John Levine, Author, Consultant & Speaker

Follow CircleID on Twitter

More under: Cybersecurity, DNS, Internet Protocol

Is Adobe's Creative Cloud Too Powerful for Its Own Good?

Slashdot - Mon, 2019-03-18 20:00
Reader samleecole writes: Recently I was looking around at the state of modern image editors and discovered something really disappointing. The issue? Well, even with the rise of modern Photoshop alternatives such as Affinity Photo and Pixelmator, these image editors are not designed to handle animated GIFs. Which means that, despite the fact that I'd certainly love to see what life is like outside of the world of Adobe, it looks like I'm stuck in that ecosystem for a little while longer. Don't get me wrong: Adobe's software is great, if a bit expensive. But I do think that its business model highlights just how consolidated its power actually is -- and it's not talked about nearly enough in the creative space. [...] Adobe is too powerful and can ignore things it doesn't want to do -- whether in the form of cutting prices or ignoring usability concerns -- in part because it carries itself like it's the only game in town. Here's a case in point that matters a lot to me, actually: Apple has supported a native fullscreen mode in macOS since 10.7, better known as Lion. It's a fundamental feature, and helps keep windows well-sorted on laptops in particular. It works pretty well in every major Mac application -- except Adobe's. Worse, if you drag a picture from a web browser into Photoshop, the window moves and doesn't stay in the middle of the screen, creating a constant frustration that could be remedied if, again, Adobe bothered to support the native fullscreen mode that has been in macOS for the past seven and a half years.

Read more of this story at Slashdot.

WIPO Reports Cybersquatting Cases Grew by 12% Reaching New Records in 2018

CircleID - Mon, 2019-03-18 19:58

According to a report from the World Intellectual Property Organization (WIPO), trademark owners filed a record 3,447 cases under the Uniform Domain Name Dispute Resolution Policy (UDRP) with WIPO’s Arbitration and Mediation Center in 2018.

"WIPO’s 2018 caseload covered 5,655 domain names in total." Disputes involving domain names registered in new generic Top-Level Domains (gTLDs) accounted for some 13% of the total, with disputes most commonly found in .ONLINE, .LIFE, and .APP. Representing 73% of the gTLD caseload, .COM demonstrated the continuing popularity of the legacy gTLDs.

The top three sectors of complainant activity were banking and finance (12% of all cases), biotechnology and pharmaceuticals (11%), and Internet and IT (11%).


More under: Domain Management, Domain Names, Intellectual Property, UDRP

Most Amazon Brands Are Duds, Not Disrupters, Study Finds

Slashdot - Mon, 2019-03-18 19:20
An anonymous reader shares a report: The explosion of Amazon's private-label products -- batteries, baby wipes, jeans, tortilla chips, sofas -- has prompted concern that the world's biggest online retailer could use its clout to promote these house brands at the expense of merchants selling similar products on the web store. The issue even surfaced in Senator Elizabeth Warren's recent proposal to break up big technology companies. Turns out most Amazon-branded goods are flops that don't threaten other businesses at all, according to Marketplace Pulse. In a study, the New York e-commerce research firm examined 23,000 products and found that shoppers aren't more inclined to buy Amazon brands even when the company elevates them in search results. The study suggests popular political and media narratives about Amazon's market power are overblown, despite the company capturing 52.4 percent of all online spending in the U.S. this year, according to EMarketer. The study used sales rankings and the number of customer reviews as indicators of sales volume for different products, including Amazon's own brands and brands sold exclusively on the site. Amazon's success has been limited to basic products like batteries where shoppers are inclined to seek generic alternatives to save money, the study found. But when competing against such categories as apparel, where household names have an entrenched position, such Amazon brands as "A for Awesome" children's wear don't stand out, the study found.

Read more of this story at Slashdot.

Flawed Analysis, Failed Oversight: How Boeing, FAA Certified the Suspect 737 MAX Flight Control System

Slashdot - Mon, 2019-03-18 18:30
In one of the most detailed descriptions yet of the relationship between Boeing and the Federal Aviation Administration during the 737 Max's certification process, the Seattle Times reports that the U.S. regulator delegated much of the safety assessment to Boeing and that the analysis the planemaker in turn delivered to the authorities had crucial flaws. 0x2A shares the report: Both Boeing and the FAA were informed of the specifics of this story and were asked for responses 11 days ago, before the second crash of a 737 MAX. [...] Several technical experts inside the FAA said October's Lion Air crash, where the MCAS (Maneuvering Characteristics Augmentation System) has been clearly implicated by investigators in Indonesia, is only the latest indicator that the agency's delegation of airplane certification has gone too far, and that it's inappropriate for Boeing employees to have so much authority over safety analyses of Boeing jets. "We need to make sure the FAA is much more engaged in failure assessments and the assumptions that go into them," said one FAA safety engineer. Going against a long Boeing tradition of giving the pilot complete control of the aircraft, the MAX's new MCAS automatic flight control system was designed to act in the background, without pilot input. It was needed because the MAX's much larger engines had to be placed farther forward on the wing, changing the airframe's aerodynamic lift. Designed to activate automatically only in the extreme flight situation of a high-speed stall, this extra kick downward of the nose would make the plane feel the same to a pilot as the older-model 737s. Boeing engineers authorized to work on behalf of the FAA developed the System Safety Analysis for MCAS, a document which in turn was shared with foreign air-safety regulators in Europe, Canada and elsewhere in the world. The document, "developed to ensure the safe operation of the 737 MAX," concluded that the system complied with all applicable FAA regulations. 
Yet black box data retrieved after the Lion Air crash indicates that a single faulty sensor -- a vane on the outside of the fuselage that measures the plane's "angle of attack," the angle between the airflow and the wing -- triggered MCAS multiple times during the deadly flight, initiating a tug of war as the system repeatedly pushed the nose of the plane down and the pilots wrestled with the controls to pull it back up, before the final crash. [...] On the Lion Air flight, when the MCAS pushed the jet's nose down, the captain pulled it back up, using thumb switches on the control column. Still operating under the false angle-of-attack reading, MCAS kicked in each time to swivel the horizontal tail and push the nose down again. The black box data released in the preliminary investigation report shows that after this cycle repeated 21 times, the plane's captain ceded control to the first officer. As MCAS pushed the nose down two or three times more, the first officer responded with only two short flicks of the thumb switches. At a limit of 2.5 degrees, two cycles of MCAS without correction would have been enough to reach the maximum nose-down effect. In the final seconds, the black box data shows the captain resumed control and pulled back up with high force. But it was too late. The plane dived into the sea at more than 500 miles per hour. [...] The former Boeing flight controls engineer who worked on the MAX's certification on behalf of the FAA said that whether a system on a jet can rely on one sensor input, or must have two, is driven by the failure classification in the system safety analysis. He said virtually all equipment on any commercial airplane, including the various sensors, is reliable enough to meet the "major failure" requirement, which is that the probability of a failure must be less than one in 100,000. Such systems are therefore typically allowed to rely on a single input sensor.

Read more of this story at Slashdot.
