Trump’s New FCC Chair Wants to Make the Internet Worse By Rolling Back Net Neutrality Protections

This afternoon, new FCC chairman Ajit Pai unveiled plans that threaten the free and open nature of the internet. This is … bad. Very bad. Let’s talk about why.
Pai’s plan is to roll back net-neutrality and consumer protections by stripping internet-service providers of their classification as utility providers.
A quick recap, before we go any further: “Net neutrality” is the principle that all of the traffic on the network should be treated with equal priority. Your internet-service provider should not be able to slow or speed up sources of traffic according to its own preferences. Imagine, for instance, if Netflix performed more reliably than YouTube because your ISP struck a deal with the former. In 2015, under the direction of the Obama administration, the FCC classified ISPs as Title II common carriers, in order to enshrine the principle of net neutrality with legal protection. Pretty much everyone except ISPs — that is, most internet users and all companies operating on the network — supported the FCC’s move, and the commission was flooded with millions of comments on the issue.
In a speech that had the tone of a smug high-school salutatorian, Pai outlined his plan to roll back the Title II classification of broadband, which designates ISPs as utility providers, and to reclassify them as Title I information services. He also proposed jettisoning the FCC’s broad “internet conduct standard.”
The crux of Pai’s argument for rolling back the classification was that the regulations led to decreased investment in broadband deployment. Removing them will, according to his logic, spur the build-out of broadband networks, creating jobs and eventually bringing poorly served areas online. In the abstract, sure — increased broadband access is a laudable goal. But Pai’s argument ignores the reality of the situation, one in which most large ISPs hold regional monopolies that allow them to offer substandard service at higher prices than in other developed nations, with little consequence. And those monopolies are protected by local laws and ordinances that make it difficult for smaller players to even enter the marketplace and build their own networks.
Pai accused the country’s current ISP regulations of fostering what he called “digital redlining.” An interesting choice of words, given that even the largest providers in the country already engage in these sorts of tactics. Verizon, for instance, systematically avoided deploying FiOS in poorer neighborhoods of Newark, New Jersey, by exploiting a loophole in its contract with the city. Expecting a free broadband market to fix itself has not worked, and will not work.
Pai warned of the dangers of “forcing the internet into the control of the government,” denying that “hypothetical harms and hysterical prophecies” of prioritized traffic would ever come to pass. (Tell that to Netflix, which paid Comcast to secure more reliable service to subscribers in 2014.) He leveled a straw-man argument against zero-rating, an inversion of net-neutrality principles that offers specific services over mobile networks without counting against monthly data limits.
Throughout his speech, Pai cited historic comments made by liberals supporting light-touch internet regulation with a rhetorical device he seemed extremely proud of — something like, “Which right-wing zealot said this? What if I told you [jazz hands] it was a Democrat.” For a guy who griped a lot about revisionist history in his speech, he offered up a lot of revisionist history himself. For one thing, while the internet of the ’90s flourished after the network backbone was privatized, the internet’s origins lie in decades of government research, deployment, and heavy regulation (commercial activity was prohibited on the internet until the early ’90s).
The missing piece of Pai’s argument is how deregulation will affect edge providers — that is, every service running on the internet (Facebook and Google, all the way down to mom-and-pop sites). While removing Title II classification might make it better for internet-service providers, it makes the internet more centralized, and less competitive for everyone else. When ISPs play favorites, it snuffs out other services and businesses. The history of the internet is one of edge-provider Goliaths rising and falling, thanks to a neutral network that allows upstart Davids to get off the ground. Pai offered zero perspective on this.
The argument that the internet of the ’90s and early 2000s worked just fine doesn’t hold up anymore. There are exponentially more users and businesses operating on top of the internet. Many rely on it not just to waste time, but also to perform their jobs — the government itself requires citizens to use the internet to take advantage of myriad services.
It took the government more than half a century after the invention of the automobile to require basic safety features like seat belts. Just because the NHTSA wasn’t established immediately after the Model T rolled off an assembly line doesn’t mean that it’s useless. Arguing that because we didn’t have net-neutrality protections before, we don’t need them now is a lazy, transparent, specious argument that ignores how technology and society have progressed over the last two decades.
There is an upside: Pai has graciously decided to put the plan to a vote. Next month, the FCC will vote on a “notice of proposed rulemaking.” If it is adopted, the commission will open the proposal to public comment. Last time around, the commission received more than 3.7 million comments, a considerable majority of them in support of net neutrality.
Already, the cable-and-telecom industry is mobilizing in the plan’s defense. Before Pai even took the stage to unveil it, the Information Technology and Innovation Foundation — a water-carrying organization funded by telecom trade groups — decried “all the hyperbole and misinformation from activists who want to see the Internet provided as a heavily regulated public utility.” Hey, that’s not such a bad idea.

Meteorologist Joseph D’Aleo: Are Global Warming Claims and the So-Called Consensus a Sinister Betrayal of Science?

By: Marc Morano - Climate Depot, April 26, 2017, 2:46 PM
By Joseph D’Aleo
Excerpt:
Sir Karl Popper, an Austrian-British philosopher and professor, is generally regarded as one of the greatest philosophers of science of the 20th century. Popper is known for his rejection of the classical inductivist view of the scientific method in favor of empirical falsification: a theory in the empirical sciences can never be proven, but it can be falsified, meaning that it can and should be scrutinized by decisive experiments.
See, in this chapter by James R. Fleming, Professor of Science, Technology and Society at Colby College, how the scientific method worked in climate-change theories throughout history.
That held until politicians with a globalist viewpoint went searching for a cause that would drive their globalization goals. The Club of Rome, an organization formed in 1968, consists of current and former heads of state, UN bureaucrats, high-level politicians and government officials, diplomats, scientists, economists, and business leaders from around the globe. It drew considerable public attention in 1972 with its report The Limits to Growth. The club states that its mission is “to act as a global catalyst for change through the identification and analysis of the crucial problems facing humanity and the communication of such problems to the most important public and private decision makers as well as to the general public.” In 1991, the club published The First Global Revolution, in which it declared:
“In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming…would fit the bill…It does not matter if this common enemy is “a real one or…one invented for the purpose.”
That is when massive investment began in building a case for the cause: funding the UN, universities, scientists, and government agencies to produce published work and reports aligned around the theory that humans are responsible for all the bad things that happen, and painting those events as unprecedented. That investment has exceeded $1 trillion. Meanwhile, instead of engaging and supporting critical thinking and the testing of hypotheses, there was a concerted effort to paint anyone not supporting the theory as a denier, with not-so-subtle attempts to liken skeptics to Holocaust deniers and those who denied the dangers of cigarettes.
Scientists practicing the scientific method were demonized and, where possible, stripped of their roles in universities and government agencies. Many have remained silent to keep their positions. A few courageous whistle-blowers have emerged from the UN, government, and universities, but they have been attacked by other scientists and generally ignored by the media, whose reporters in many cases are trained in journalism schools that prepare environmental journalists to battle, discredit, or deny airtime to any skeptics.
As Ron Arnold wrote in 2015:
You can credit the Society of Environmental Journalists (SEJ), a 501(c)(3) tax-exempt organization with more than 1,200 member reporters and academics in the United States, Canada, Mexico, and 27 other countries, with the general decline in journalistic standards among environmental journalists.
SEJ has received 119 grants from 35 notorious anti-development foundations, totaling $9.5 million since 1999. With this financial prompting, the SEJ’s stalwarts, including Andrew Revkin (The New York Times), Seth Borenstein (Associated Press), and Suzanne Goldenberg (The Guardian), have led the decline of climate news into ideological warfare.
To many SEJ writers, it is not possible for them to be biased, because issues have only one side: their own.
Associated Press’ Borenstein asserted, “The nature of reporting is to get two sides to an issue. But the nature of science reporting is to get what’s really happening.” SEJ thinks whatever isn’t environmental dogma is a lie, as indicated by its incredible reference webpage “Climate Change: A Guide to the Information and Disinformation.”
SEJ writers also promote “false balance,” the notion that giving opposing views concerning climate change any mention at all is not real balance because skeptics are liars paid to undermine the truth, [which] justifies total censorship. … Some go so far as to recommend violence to achieve environmental goals.

Artificial intelligence can be both good and bad for journalism

In recent years, the Associated Press (AP) has been using automated language-generation platforms such as Automated Insights to produce news reports on corporate earnings and Minor League Baseball, and it has reportedly planned to recap college sports the same way. These platforms are fed scores, stats, play-by-play and interview transcripts, and other data, and they output stories in AP style that would otherwise be written by human journalists.
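To make the mechanics concrete, here is a minimal sketch of how template-driven story generation works; the template, field names, and box-score data below are hypothetical illustrations, not Automated Insights’ actual interface.

```python
# Minimal sketch of template-based news generation (hypothetical example,
# not Automated Insights' real API). Structured data goes in; a formulaic,
# AP-style sentence or two comes out.

GAME_TEMPLATE = (
    "{winner} defeated {loser} {winner_score}-{loser_score} on {day}. "
    "{star} led {winner} with {star_stat}."
)

def generate_recap(box_score: dict) -> str:
    """Fill the recap template with fields from a structured box score."""
    return GAME_TEMPLATE.format(**box_score)

if __name__ == "__main__":
    # Made-up box score for illustration only.
    box_score = {
        "winner": "Durham Bulls",
        "loser": "Norfolk Tides",
        "winner_score": 5,
        "loser_score": 2,
        "day": "Friday",
        "star": "Alex Rivera",
        "star_stat": "three hits and two RBIs",
    }
    print(generate_recap(box_score))
    # -> Durham Bulls defeated Norfolk Tides 5-2 on Friday.
    #    Alex Rivera led Durham Bulls with three hits and two RBIs.
```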
This is not the first time that journalism has undergone an identity crisis, and as both a student journalist and a software developer, I have very mixed feelings about it. On one hand, I am amazed and fascinated by advances like machine learning, deep learning, and natural language processing. On the other, I am concerned that such innovations may drift away from their core value, which is to make our world a better place.
It is worth noting that automated news reporting can benefit the publishing industry. In reality, resources are usually scarce. Long, tedious, repetitive reporting drains human reporters’ energy and occupies their valuable time with filling in large amounts of quantitative data. If software platforms can substitute for humans in writing these stories from templates, with fewer errors, more human effort can be focused on stories of higher priority.
From a publishing company’s perspective, prioritizing experienced human journalists for important coverage means better quality of investigations and analysis. And that’s not to say human journalists can’t be enhanced with the assistance of machines. If voice and video recognition are reliable enough, journalists no longer have to painstakingly go through recordings and sort out useful information themselves.
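As a rough sketch of that kind of assistance (a hypothetical example built on the open-source SpeechRecognition package; real newsroom tooling would differ), a few lines of code can turn an interview recording into searchable text:

```python
# Hypothetical sketch: transcribe an interview recording so a journalist
# can search the text instead of scrubbing through hours of audio.
# Requires the open-source SpeechRecognition package
# (pip install SpeechRecognition); the file name below is made up.
import speech_recognition as sr

def transcribe_interview(wav_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the whole file into memory
    # Send the audio to Google's free web recognizer and return plain text.
    return recognizer.recognize_google(audio)

if __name__ == "__main__":
    transcript = transcribe_interview("interview.wav")
    # A simple keyword check replaces manual scrubbing through the tape.
    if "budget" in transcript.lower():
        print("The interview mentions the budget.")
```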
Former AP vice president Lou Ferrara claimed that no jobs have actually been lost or replaced because of automated journalism. Slate writer Will Oremus believes that human journalists have little to fear for the foreseeable future. Without automated language-generation platforms, some news might never have been covered at all. Still, none of this guarantees that journalism jobs are safe from the challenges posed by machines.
In fact, no one’s job is safe if technological advancement strays from its original purpose. The rise of the machines may not decimate human civilization in the foreseeable future, but the balance and fairness of society may well take the blow. Replacing human labor with more efficient machines and robots can undermine social well-being: as society currently operates, less complicated jobs are still a livelihood for humans, not just tasks for machines. Simply laying off human workers and replacing them with robots could cause social problems and may even threaten the structure of society.
Single-minded pursuit of commercial success is shortsighted. In journalism, there has to be demand for young journalists to cover less significant news, because that is how they learn the craft. If a publishing company implements so-called robot journalism without considering the future of the profession, the number of veteran journalists will eventually diminish.
Assisting human journalists with software platforms benefits journalism by saving expenses and generating more revenue, but a focus on revenue for its own sake undermines journalism. Technology aims to make the world better, yet it can also lead to a state of privileged separation, in which you can easily surround yourself with the latest technologies and forget about your origins: humans, society, and nature.
At their current stage, automated language-generation platforms can write only quantitative stories, not qualitative ones. A robot cannot qualitatively analyze a sporting event or capture information that isn’t reflected in the data sheets; that is why having a human journalist present to watch the game matters. Then again, I have seen many recaps of sporting events written by human journalists that essentially rephrase box scores, interviews, and play-by-play transcripts, and these uninspired reports make me unwilling to finish reading. If a human journalist fails to maintain the quality of their stories, or to add analytical elements, they should legitimately worry about being replaced by robots.
Journalism, technology, and society are all transforming, and machines bring both opportunities and challenges. But some core values remain unchanged: journalism ethics, the quality of a story, and the investigative and analytical abilities unique to the human mind. The purpose of technological advancement should be to benefit society, and our values shouldn’t change on a whim just because of new technology.
